To see the other types of publications on this topic, follow the link: Multiple sources of evidences.

Dissertations / Theses on the topic 'Multiple sources of evidences'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the top 50 dissertations / theses for your research on the topic 'Multiple sources of evidences.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse dissertations / theses across a wide variety of disciplines and organise your bibliography correctly.

1

Zhang, Baoping. "Intelligent Fusion of Evidence from Multiple Sources for Text Classification." Diss., Virginia Tech, 2006. http://hdl.handle.net/10919/28198.

Full text
Abstract:
Automatic text classification using current approaches is known to perform poorly when documents are noisy or when only limited amounts of textual content are available. Yet, many users need access to such documents, which are found in large numbers in digital libraries and on the WWW. If documents are not classified, they are difficult to find when browsing. Further, search precision suffers when categories cannot be checked, since many documents may be retrieved that would fail to meet category constraints. In this work, we study how different types of evidence from multiple sources can be intelligently fused to improve classification of text documents into predefined categories. We present a classification framework based on an inductive learning method -- Genetic Programming (GP) -- to fuse evidence from multiple sources. We show that good classification is possible with documents which are noisy or which have small amounts of text (e.g., short metadata records) -- if multiple sources of evidence are fused in an intelligent way. The framework is validated through experiments performed on documents in two testbeds. One is the ACM Digital Library (using a subset available in connection with CITIDEL, part of NSF's National Science Digital Library). The other is Web data, in particular that portion associated with the Cadê Web directory. Our studies have shown that improvement can be achieved relative to other machine learning approaches if genetic programming methods are combined with classifiers such as kNN. Extensive analysis was performed to study the results generated through the GP-based fusion approach and to understand key factors that promote good classification.
Ph. D.
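The dissertation's GP-based fusion is much richer than anything shown here, but the core idea of searching for a good combination of per-source evidence scores can be sketched in a few lines. The Python toy below (all document scores, source names and the threshold are invented) performs a crude random search over small fusion expression trees; full genetic programming would add selection, crossover and mutation over generations.

```python
import random

# Toy training data: per-document evidence scores from three sources
# (e.g. full text, title, citation context) and a binary category label.
# All numbers are invented for illustration only.
DOCS = [
    ({"text": 0.9, "title": 0.8, "cites": 0.7}, 1),
    ({"text": 0.3, "title": 0.9, "cites": 0.2}, 1),
    ({"text": 0.2, "title": 0.1, "cites": 0.6}, 0),
    ({"text": 0.1, "title": 0.3, "cites": 0.2}, 0),
]
SOURCES = ["text", "title", "cites"]
OPS = {"add": lambda a, b: a + b,
       "mul": lambda a, b: a * b,
       "max": lambda a, b: max(a, b)}

def random_tree(depth=2):
    """Build a random fusion expression over the evidence sources."""
    if depth == 0 or random.random() < 0.3:
        return random.choice(SOURCES)          # leaf: a single evidence score
    op = random.choice(list(OPS))
    return (op, random_tree(depth - 1), random_tree(depth - 1))

def evaluate(tree, scores):
    """Apply a fusion expression to one document's evidence scores."""
    if isinstance(tree, str):
        return scores[tree]
    op, left, right = tree
    return OPS[op](evaluate(left, scores), evaluate(right, scores))

def fitness(tree, threshold=1.0):
    """Training accuracy when a fused score >= threshold means 'in category'."""
    hits = sum((evaluate(tree, s) >= threshold) == bool(y) for s, y in DOCS)
    return hits / len(DOCS)

# Crude stand-in for GP: sample many random fusion trees and keep the best.
best = max((random_tree() for _ in range(500)), key=fitness)
print("best fusion tree:", best, "training accuracy:", fitness(best))
```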
APA, Harvard, Vancouver, ISO, and other styles
2

Abell, Meghann Lynn. "Assessing Fraud Risk, Trustworthiness, Reliability, and Truthfulness: Integrating Audit Evidence from Multiple Sources." Diss., Virginia Tech, 2010. http://hdl.handle.net/10919/27763.

Full text
Abstract:
To assess fraud risk, auditors collect evidence in a sequential manner by reviewing workpaper documentation, and by collecting corroborating and clarifying information from financial (management) personnel and nonfinancial (operating) personnel. SAS 99 (AICPA, 2002) noted that audit evidence gathered from financial personnel may be susceptible to deception. In addition, prior researchers have found auditors to be poor at detecting deception immediately following deceptive communication. Though the audit process is sequential and iterative, these studies measured auditors' ability to detect deception at a single point and did not provide corroborating evidence after the deceptive communication for auditors to revise their judgments. In this study, I examined auditors' fraud risk assessments and truthfulness judgments throughout the audit process when there was an attempt at deception by management (financial) personnel. The belief adjustment model provided a framework to examine auditors' initial judgments, their judgments directly following a deception attempt by financial personnel, and their judgments after receiving corroborating evidence from nonfinancial personnel. Sixty-four experienced auditors electronically completed one of four randomly assigned cases and, within each case, assessed the fraud risk, truthfulness, trustworthiness, and reliability of financial personnel at multiple points for a fictitious client. I manipulated the presence (absence) of fraud and the level of experience of the source of corroborating evidence (operating personnel). I hypothesized that auditors would not be able to differentially evaluate fraud risk and truthfulness judgments of financial personnel between the fraud and no fraud conditions when exposed to workpaper documentation and deceptive client inquiry evidence by management (financial personnel). However, I expected to find that auditors would update their fraud risk and truthfulness judgments as they reviewed audit evidence from nonfinancial (operating) personnel. The results indicate that auditors in this study are not able to appropriately assess fraud risk and the truthfulness of financial personnel following the review of workpaper and client inquiry evidence. While the client was deceptive in the fraud condition only, auditors did not differentially assess the fraud risk and truthfulness of financial personnel between the fraud and no fraud conditions. After auditors reviewed evidence from nonfinancial personnel, in the presence of fraud, auditors increased their fraud risk and decreased their truthfulness judgments of financial personnel as inconsistent evidence was presented from a corroborating source. Therefore, in the presence of fraud, auditors improved the effectiveness of the audit process by appropriately increasing their fraud risk assessments in light of inconsistent audit evidence from nonfinancial (operating) personnel. Of equal importance, in the absence of fraud, auditors decreased their fraud risk assessments as consistent evidence was presented from a corroborating source. Therefore, auditors increased the efficiency of the audit process by appropriately decreasing their fraud risk assessments after integrating consistent audit evidence from nonfinancial personnel into their judgments. Further, I observed that these auditors revised their fraud risk assessments to a greater extent when audit evidence was provided by a source with a higher level of experience.
Though prior research has found auditors to be poor at detecting deception, the results of this study indicate that auditors will increase or decrease their fraud risk assessments and truthfulness judgments based on the consistency of audit evidence gathered from a corroborating source. Therefore, in practice, auditors may be able to detect deception as the audit progresses.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
3

Faverjon, Céline. "Risk based surveillance for vector-borne diseases in horses : combining multiple sources of evidence to improve decision making." Thesis, Clermont-Ferrand 2, 2015. http://www.theses.fr/2015CLF22604/document.

Full text
Abstract:
Emerging vector-borne diseases (VBDs) are a growing concern, especially for horse populations, which are at particular risk for disease spread. In general, horses travel widely and frequently and, despite the health and economic impacts of equine diseases, effective health regulations and biosecurity systems to ensure safe equine movements are not always in place. The present work proposes to improve the surveillance of vector-borne diseases in horses through the use of different approaches that assess the probability of occurrence of a newly introduced epidemic. First, we developed a spatiotemporal quantitative model which combined various probabilities in order to estimate the risk of introduction of African horse sickness and equine encephalosis. Such combinations of risk provided a more detailed picture of the true risk posed by these pathogens. Second, we assessed syndromic surveillance systems using two approaches: a classical approach with the alarm threshold based on the standard error of prediction, and a Bayesian approach based on a likelihood ratio. We focused particularly on the early detection of West Nile virus using reports of nervous symptoms in horses. Both approaches provided interesting results, but Bayes' rule was especially useful as it provided a quantitative output and was able to combine different epidemiological information. Finally, a Bayesian approach was also used to quantitatively combine various sources of risk estimation in a multivariate syndromic surveillance system, as well as in a combination of quantitative risk assessment with syndromic surveillance (applied to West Nile virus and equine encephalosis, respectively). Combining evidence provided promising results. This work, based on risk estimations, strengthens the surveillance of VBDs in horses and can support public health decision making. It also, however, highlights the need to improve data collection and data sharing, to implement full performance assessments of complex surveillance systems, and to use effective communication and training to promote the adoption of these approaches.
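The Bayesian likelihood-ratio idea in this abstract can be illustrated with a very small sketch: the posterior probability of an outbreak is updated, week by week, by the ratio of the likelihood of the observed count of horses with nervous symptoms under an 'outbreak' model versus a 'baseline' model. The Poisson rates, the prior and the weekly counts below are invented for illustration and are not those used in the thesis.

```python
from math import exp, factorial

def poisson_pmf(k, lam):
    """Probability of observing k cases when the weekly rate is lam."""
    return lam ** k * exp(-lam) / factorial(k)

def posterior_outbreak_prob(counts, prior=0.01, lam_baseline=2.0, lam_outbreak=6.0):
    """Sequential Bayesian update of P(outbreak) from weekly counts of horses
    showing nervous symptoms, using the likelihood ratio of an 'outbreak'
    Poisson model against a 'baseline' Poisson model (all rates illustrative)."""
    odds = prior / (1 - prior)
    for k in counts:
        likelihood_ratio = poisson_pmf(k, lam_outbreak) / poisson_pmf(k, lam_baseline)
        odds *= likelihood_ratio          # Bayes' rule in odds form
    return odds / (1 + odds)

weekly_counts = [1, 2, 5, 8, 7]           # invented syndromic counts
print(round(posterior_outbreak_prob(weekly_counts), 3))
```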
APA, Harvard, Vancouver, ISO, and other styles
4

Lima, Márcia Sampaio. "Identificando o Tópico de Páginas Web." Universidade Federal do Amazonas, 2009. http://tede.ufam.edu.br/handle/tede/2957.

Full text
Abstract:
Fundação de Amparo à Pesquisa do Estado do Amazonas
Textual and structural sources of evidence extracted from web pages are frequently used to improve the results of Information Retrieval (IR) systems. The main topic of a web page is a textual source of evidence that has wide applicability in IR systems. It can be used as a new source of evidence to improve ranking results, page classification, filtering, among other applications. In this work, we propose to study, develop and evaluate a method to identify the main topic of a web page using a combination of different sources of evidence. We define the main topic of a web page as a set of, at most, five distinct keywords related to the main subject of the page. In general, the proposed method is divided into four distinct phases: (1) identification of the keywords that describe the web page content, using multiple sources of evidence; (2) use of a genetic algorithm to combine the sources of evidence; (3) definition of the three best keywords of the page; and (4) use of a web directory to identify the page's main topic. The results of the experiments show that: (1) the best source of evidence for describing the keywords of a web page is the anchor text of links; (2) the proposed method is effective in identifying the main topic of a web page: 0.9129, on a scale from zero to one; and (3) the proposed method is also effective in automatically classifying web pages within the Google directory, reaching 88% ± 0.11 precision in the classification task. Overall, the experiments show that the proposed model is useful both for identifying the topic of a web page and for classifying pages within the hierarchical structure of the Google directory.
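As a rough illustration of phases (2)-(4) -- weighting evidence from several sources, keeping the top keywords, and mapping them into a web directory -- the following toy Python sketch hard-codes weights that, in the actual method, would be learned by the genetic algorithm; all scores, terms and directory categories are invented.

```python
# Invented per-term evidence scores from three sources for one page, and
# fixed weights standing in for those a genetic algorithm would learn.
weights = {"anchor_text": 0.5, "title": 0.3, "body": 0.2}
term_scores = {
    "retrieval": {"anchor_text": 0.9, "title": 0.7, "body": 0.6},
    "ranking":   {"anchor_text": 0.8, "title": 0.2, "body": 0.5},
    "evidence":  {"anchor_text": 0.1, "title": 0.6, "body": 0.4},
    "football":  {"anchor_text": 0.1, "title": 0.1, "body": 0.3},
}
directory = {  # toy web-directory categories and their vocabularies
    "Computers/Information Retrieval": {"retrieval", "ranking", "evidence"},
    "Sports/Football": {"football"},
}

# Phases 2-3: combine the evidence and keep the three best descriptor terms.
combined = {term: sum(weights[src] * score for src, score in sources.items())
            for term, sources in term_scores.items()}
top3 = sorted(combined, key=combined.get, reverse=True)[:3]

# Phase 4: pick the directory category whose vocabulary best matches them.
topic = max(directory, key=lambda cat: len(directory[cat] & set(top3)))
print("keywords:", top3, "topic:", topic)
```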
APA, Harvard, Vancouver, ISO, and other styles
5

Zhang, Ping. "Learning from Multiple Knowledge Sources." Diss., Temple University Libraries, 2013. http://cdm16002.contentdm.oclc.org/cdm/ref/collection/p245801coll10/id/214795.

Full text
Abstract:
Computer and Information Science
Ph.D.
In supervised learning, it is usually assumed that true labels are readily available from a single annotator or source. However, recent advances in corroborative technology have given rise to situations where the true label of the target is unknown. In such problems, multiple sources or annotators are often available that provide noisy labels of the targets. In these multi-annotator problems, building a classifier in the traditional single-annotator manner, without regard for the annotator properties, may not be effective in general. In recent years, how to make the best use of the labeling information provided by multiple annotators to approximate the hidden true concept has drawn the attention of researchers in machine learning and data mining. In our previous work, a probabilistic method (the MAP-ML algorithm) of iteratively evaluating the different annotators and giving an estimate of the hidden true labels is developed. However, the method assumes the error rate of each annotator is consistent across all the input data. This is an impractical assumption in many cases, since annotator knowledge can fluctuate considerably depending on the groups of input instances. In this dissertation, one of our proposed methods, the GMM-MAPML algorithm, follows MAP-ML but relaxes the data-independent assumption, i.e., we assume an annotator may not be consistently accurate across the entire feature space. GMM-MAPML uses a Gaussian mixture model (GMM) and the Bayesian information criterion (BIC) to find the fittest model to approximate the distribution of the instances. Then the maximum a posteriori (MAP) estimation of the hidden true labels and the maximum-likelihood (ML) estimation of the quality of multiple annotators at each Gaussian component are provided alternately. Recent studies show that employing more annotators regardless of their expertise does not necessarily result in the best aggregated performance. In this dissertation, we also propose a novel algorithm to integrate multiple annotators by Aggregating Experts and Filtering Novices, which we call AEFN. AEFN iteratively evaluates annotators, filters out the low-quality annotators, and re-estimates the labels based only on information obtained from the good annotators. The noisy annotations we integrate are from any combination of human and previously existing machine-based classifiers, and thus AEFN can be applied to many real-world problems. Emotional speech classification, CASP9 protein disorder prediction, and biomedical text annotation experiments show a significant performance improvement of the proposed methods (i.e., GMM-MAPML and AEFN) as compared to the majority voting baseline and the previous data-independent MAP-ML method. Recent experiments include predicting novel drug indications (i.e., drug repositioning) for both approved drugs and new molecules by integrating multiple chemical, biological or phenotypic data sources.
Temple University--Theses
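A drastically simplified, data-independent version of the idea -- alternating between estimating the hidden labels from accuracy-weighted votes and re-estimating each annotator's accuracy against those labels -- can be sketched as follows. This is only in the spirit of MAP-ML; it ignores the per-component Gaussian-mixture modelling of GMM-MAPML, and the annotations and initial accuracies are invented.

```python
# Invented annotations: rows are items, columns are annotators (0/1 labels).
votes = [
    [1, 1, 0, 1],
    [0, 0, 0, 1],
    [1, 0, 1, 1],
    [0, 1, 0, 0],
    [1, 1, 1, 0],
]
n_items, n_annot = len(votes), len(votes[0])
accuracy = [0.7] * n_annot                     # initial guess per annotator

for _ in range(20):                            # alternate label / accuracy estimates
    # 1) Estimate each item's label by an accuracy-weighted vote for label 1.
    labels = []
    for row in votes:
        support = sum(acc if v == 1 else (1 - acc) for v, acc in zip(row, accuracy))
        labels.append(1 if support > n_annot / 2 else 0)
    # 2) Re-estimate each annotator's accuracy against the current labels.
    accuracy = [
        sum(votes[i][a] == labels[i] for i in range(n_items)) / n_items
        for a in range(n_annot)
    ]

print("estimated labels:", labels)
print("estimated annotator accuracies:", accuracy)
```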
APA, Harvard, Vancouver, ISO, and other styles
6

Stevenson, Robert Mark. "Multiple knowledge sources for word sense disambiguation." Thesis, University of Sheffield, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.310763.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Dench, M. "Structural vibration control using multiple synchronous sources." Thesis, University of Southampton, 2012. https://eprints.soton.ac.uk/349006/.

Full text
Abstract:
The advantages of isolating vibrating machinery from its supporting structure are that the chances of vibration induced fatigue failure of structural components are reduced, the structure becomes more inhabitable for people due to less vibration exposure and the sound radiated by the structure into the environment is reduced. This last point is especially important for machinery operating in a marine environment because low frequency sound propagates very well underwater, and the machinery induced sound radiated from a ship or submarine is a primary detection and classification mechanism for passive sonar systems. This thesis investigates the control of vibration from an elastic support structure upon which multiple vibrating systems are passively mounted. The excitations are assumed to occur at discrete frequencies with a finite number of harmonic components and the machines are all assumed to be supplied with power from the same electrical supply. Active vibration control may be achieved by adjusting the phase of the voltage supplied to one or more of the machines, so that a minimum value of a measurable cost function is obtained. Adjusting the phase of a machine with respect to a reference machine is known as synchrophasing and is a well established technique for controlling the sound in aircraft cabins and in ducts containing axial fans. However, the use of the technique for reducing the vibration of machinery mounted on elastic structures seems to have received very little attention in the literature and would appear to be a gap in the current knowledge. This thesis aims to address that gap by investigating theoretically and experimentally how synchrophasing can be implemented as an active structural vibration control technique.
APA, Harvard, Vancouver, ISO, and other styles
8

MEDEIROS, Ícaro Rafael da Silva. "Tag suggestion using multiple sources of knowledge." Universidade Federal de Pernambuco, 2010. https://repositorio.ufpe.br/handle/123456789/2275.

Full text
Abstract:
In social tagging systems, users assign tags (keywords) to resources (web pages, photos, publications, etc.), creating a structure known as a folksonomy, which improves navigation, organisation and information retrieval. These systems are currently very popular on the Web, so improving their quality and automating the tag assignment process is an important task. In this work we propose a system that automatically assigns tags to pages, drawing on multiple sources of knowledge such as textual content, hyperlink structure and knowledge bases. From these sources, several features are extracted to build a classifier that decides which terms should be suggested as tags. Experiments using a dataset of tags and pages extracted from Delicious, an important social tagging system, show that our methods achieve good precision and recall when compared with tags suggested by users. Moreover, a comparison with related work shows that our system offers suggestion quality comparable to state-of-the-art approaches in the area. Finally, an evaluation with users was carried out to simulate a real environment, which also produced good results.
APA, Harvard, Vancouver, ISO, and other styles
9

Rice, Michael, and Erik Perrins. "Maximum Likelihood Detection from Multiple Bit Sources." International Foundation for Telemetering, 2015. http://hdl.handle.net/10150/596443.

Full text
Abstract:
ITC/USA 2015 Conference Proceedings / The Fifty-First Annual International Telemetering Conference and Technical Exhibition / October 26-29, 2015 / Bally's Hotel & Convention Center, Las Vegas, NV
This paper deals with the problem of producing the best bit stream from a number of input bit streams with varying degrees of reliability. The best source selector and smart source selector are recast as detectors, and the maximum likelihood bit detector (MLBD) is derived from basic principles under the assumption that each bit value is accompanied by a quality measure proportional to its probability of error. We show that both the majority voter and the best source selector are special cases of the MLBD and define the conditions under which these special cases occur. We give a mathematical proof that the MLBD is the same as or better than the best source selector.
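The paper's central object -- a maximum likelihood decision on one bit position received from several streams, each carrying a quality measure tied to its error probability -- reduces, for independent streams, to a vote weighted by log((1 - p)/p). The sketch below illustrates that standard form and is not code from the paper; the bit values and error probabilities are invented.

```python
from math import log

def ml_bit_detect(bits, p_err):
    """Maximum-likelihood combination of one bit position received from several
    streams, each with its own error probability. Each stream votes with weight
    log((1 - p) / p): equal weights recover the majority voter, and a single
    dominant weight recovers best-source selection."""
    score = 0.0
    for b, p in zip(bits, p_err):
        weight = log((1 - p) / p)
        score += weight if b == 1 else -weight
    return 1 if score >= 0 else 0

# Three streams disagree; the two reliable streams outvote the noisy one.
print(ml_bit_detect(bits=[1, 1, 0], p_err=[0.05, 0.10, 0.40]))
```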
APA, Harvard, Vancouver, ISO, and other styles
10

Kabzinska, Ewa Joanna. "Empirical likelihood approach for estimation from multiple sources." Thesis, University of Southampton, 2017. https://eprints.soton.ac.uk/422166/.

Full text
Abstract:
Empirical likelihood is a non-parametric, likelihood-based inference approach. In the design-based empirical likelihood approach introduced by Berger and De La Riva Torres (2016), the parameter of interest is expressed as a solution to an estimating equation. The maximum empirical likelihood point estimator is obtained by maximising the empirical likelihood function under a system of constraints. A single vector of weights, which can be used to estimate various parameters, is created. Design-based empirical likelihood confidence intervals are based on the χ² approximation of the empirical likelihood ratio function. The confidence intervals are range-preserving and asymmetric, with the shape driven by the distribution of the data. In this thesis we focus on the extension and application of design-based empirical likelihood methods to various problems occurring in survey inference. First, a design-based empirical likelihood methodology for parameter estimation in a two-survey context, in the presence of alignment and benchmark constraints, is developed. Second, a design-based empirical likelihood multiplicity adjusted estimator for multiple frame surveys is proposed. Third, design-based empirical likelihood is applied to a practical problem of census coverage estimation. The main contribution of this thesis is defining the empirical likelihood methodology for the studied problems and showing that the aligned and multiplicity adjusted empirical likelihood estimators are √n-design-consistent. We also discuss how the original proofs presented by Berger and De La Riva Torres (2016) can be adjusted to show that the empirical likelihood ratio statistic is pivotal and follows a χ² distribution under alignment constraints and when the multiplicity adjustments are used. We evaluate the asymptotic performance of the empirical likelihood estimators in a series of simulations on real and artificial data. We also discuss the computational aspects of the calculations necessary to obtain empirical likelihood point estimates and confidence intervals and propose a practical way to obtain empirical likelihood confidence intervals in situations when they might be difficult to obtain using standard approaches.
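For intuition only, here is a generic (i.i.d., not design-based) empirical likelihood sketch in Python: the weights maximise the sum of log w_i subject to summing to one and to a benchmark constraint that calibrates an auxiliary variable to its known population mean. The data, the known mean and the solver settings are invented; this is not the estimator of Berger and De La Riva Torres (2016).

```python
import numpy as np
from scipy.optimize import minimize

# Invented sample: y is the study variable, x an auxiliary variable whose
# population mean (5.0) is assumed known from another source.
y = np.array([2.1, 3.4, 5.0, 4.2, 6.8, 5.5, 3.9, 4.7])
x = np.array([4.0, 5.2, 6.1, 4.8, 6.5, 5.9, 4.4, 5.3])
x_pop_mean = 5.0
n = len(y)

def neg_log_el(w):                         # negated empirical log-likelihood
    return -np.sum(np.log(w))

constraints = [
    {"type": "eq", "fun": lambda w: np.sum(w) - 1.0},               # weights sum to 1
    {"type": "eq", "fun": lambda w: np.sum(w * (x - x_pop_mean))},  # benchmark constraint
]
res = minimize(neg_log_el, x0=np.full(n, 1.0 / n), method="SLSQP",
               bounds=[(1e-9, 1.0)] * n, constraints=constraints)

weights = res.x                            # a single vector of weights,
print("EL estimate of the mean of y:", float(np.sum(weights * y)))  # reusable for other parameters
```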
APA, Harvard, Vancouver, ISO, and other styles
11

Brizzi, Francesco. "Estimating HIV incidence from multiple sources of data." Thesis, University of Cambridge, 2018. https://www.repository.cam.ac.uk/handle/1810/273803.

Full text
Abstract:
This thesis develops novel statistical methodology for estimating the incidence and the prevalence of Human Immunodeficiency Virus (HIV) using routinely collected surveillance data. The robust estimation of HIV incidence and prevalence is crucial to correctly evaluate the effectiveness of targeted public health interventions and to accurately predict the HIV- related burden imposed on healthcare services. Bayesian CD4-based multi-state back-calculation methods are a key tool for monitoring the HIV epidemic, providing estimates of HIV incidence and diagnosis rates by disentangling their competing contribution to the observed surveillance data. Improving the effectiveness of public health interventions, requires targeting specific age-groups at high risk of infection; however, existing methods are limited in that they do not allow for such subgroups to be identified. Therefore the methodological focus of this thesis lies in developing a rigorous statistical framework for age-dependent back-calculation in order to achieve the joint estimation of age-and-time dependent HIV incidence and diagnosis rates. Key challenges we specifically addressed include ensuring the computational feasibility of proposed methods, an issue that has previously hindered extensions of back-calculation, and achieving the joint modelling of time-and-age specific incidence. The suitability of non-parametric bivariate smoothing methods for modelling the age-and-time specific incidence has been investigated in detail within comprehensive simulation studies. Furthermore, in order to enhance the generalisability of the proposed model, we developed back-calculation that can admit surveillance data less rich in detail; these handle surveillance data collected from an intermediate point of the epidemic, or only available on a coarse scale, and concern both age-dependent and age-independent back-calculation. The applicability of the proposed methods is illustrated using routinely collected surveillance data from England and Wales, for the HIV epidemic among men who have sex with men (MSM).
APA, Harvard, Vancouver, ISO, and other styles
12

Farooq, Umar. "Product Reputation Evaluation based on Multiple Web Sources." Thesis, Lyon, 2017. http://www.theses.fr/2017LYSE2013.

Full text
Abstract:
Extracting unstructured data from the Web and analysing it to derive useful information that customers and manufacturers can use to make decisions about a product is a challenging task. Some techniques already exist to evaluate products based on the ratings and product reviews posted on the Web. However, all of these techniques have inherent issues and limitations and are therefore not able to fulfil the needs and requirements of both customers and manufacturers. For instance, the existing sentiment analysis methods (which classify the opinions in customer reviews about a product as positive or negative) are not able to determine the context of a word in a sentence accurately. In addition, the negation handling methods adopted while determining sentiment are not able to deal with all types of negation, and they do not consider all the exceptions where negations behave differently. Similarly, the existing product reputation models are based on a single source, are not robust to false and biased ratings, are not able to reflect recent opinions, do not allow users to evaluate a product on different criteria, and do not provide good estimation accuracy. Moreover, existing product reputation systems are centralized, which brings issues such as a single point of failure and ease of falsifying evaluation information, and is not a suitable approach for solving such a complex problem. This thesis proposes methods and techniques for evaluating product reputation based on data available on the Web and for providing valuable information to customers and manufacturers for decision making. These methods perform the following tasks: 1) extract product evaluation data from multiple Web sources; 2) analyse product reviews in order to determine whether opinions about product features in customer reviews are positive or negative; 3) compute different product reputation values while considering different evaluation criteria; and 4) provide the results to customers and manufacturers in order to support their decisions. This thesis contributes to three main research areas: 1) feature-level sentiment analysis, 2) product reputation modelling, and 3) multi-agent architecture. First, word sense disambiguation and negation handling methods are proposed in order to improve the performance of feature-level sentiment analysis. Second, a novel mathematical model is proposed which computes several reputation values in order to evaluate a product based on different criteria. Finally, a multi-agent architecture for review analysis and product evaluation is proposed. A huge amount of the product evaluation data on the Web is in textual form (i.e. product reviews). In order to analyse product reviews and evaluate products, we propose a feature-level sentiment analysis method which determines the opinions about different features of a product. A word sense disambiguation method is introduced which identifies the sense of words according to their context while determining the polarity. In addition, a negation handling method is proposed which determines the sequence of words affected by different types of negation. The results show that both the word sense disambiguation and negation handling methods improve the overall accuracy of feature-level sentiment analysis. A multi-source product reputation model is proposed in which informative, robust and strategy-proof aggregation methods are introduced to compute different reputation values. Sources from which reviews are extracted may not be credible, hence a source credibility measure is proposed in order to discount malicious web sources. In addition, suitable decay principles for product reputation are introduced in order to reflect the newest opinions about a product quickly. The model also considers several parameters, such as reviewer expertise, rating trustworthiness, the time span of ratings, and reviewer age, sex and location, in order to evaluate a product in different ways.
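To make the aggregation idea concrete, here is a small Python sketch of a credibility- and recency-weighted reputation score: each rating is weighted by the credibility of its source and by an exponential time decay, so recent opinions from credible sources dominate. The field names, half-life and data are assumptions for illustration, not the thesis's actual model.

```python
from datetime import date

def product_reputation(reviews, today=date(2017, 6, 1), half_life_days=180):
    """Credibility- and recency-weighted average rating (scale 1-5)."""
    num = den = 0.0
    for r in reviews:
        age_days = (today - r["date"]).days
        decay = 0.5 ** (age_days / half_life_days)   # older ratings count less
        weight = r["source_credibility"] * decay     # discount dubious sources
        num += weight * r["rating"]
        den += weight
    return num / den if den else None

reviews = [  # invented ratings gathered from two web sources
    {"rating": 4.5, "date": date(2017, 5, 20), "source_credibility": 0.9},
    {"rating": 2.0, "date": date(2016, 1, 10), "source_credibility": 0.4},
    {"rating": 5.0, "date": date(2017, 4, 2),  "source_credibility": 0.7},
]
print(round(product_reputation(reviews), 2))
```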
APA, Harvard, Vancouver, ISO, and other styles
13

Yankova-Doseva, Milena. "TERMS - Text Extraction from Redundant and Multiple Sources." Thesis, University of Sheffield, 2010. http://etheses.whiterose.ac.uk/933/.

Full text
Abstract:
In this work we present our approach to the identity resolution problem: discovering references to one and the same object that come from different sources. Solving this problem is important for a number of different communities (e.g. Database, NLP and Semantic Web) that process heterogeneous data where variations of the same objects are referenced in different formats (e.g. textual documents, web pages, database records, ontologies etc.). Identity resolution aims at creating a single view into the data where different facts are interlinked and incompleteness is remedied. We propose a four-step approach that starts with schema alignment of incoming data sources. As a second step - candidate selection - we discard those entities that are totally different from those that they are compared to. Next the main evidence for identity of two entities comes from applying similarity measures comparing their attribute values. The last step in the identity resolution process is data fusion or merging entities found to be identical into a single object. The principal novel contribution of our solution is the use of a rich semantic knowledge representation that allows for flexible and unified interpretation during the resolution process. Thus we are not restricted in the type of information that can be processed (although we have focussed our work on problems relating to information extracted from text). We report the implementation of these four steps in an IDentity Resolution Framework (IDRF) and their application to two use-cases. We propose a rule based approach for customisation in each step and introduce logical operators and their interpretation during the process. Our final evaluation shows that this approach facilitates high accuracy in resolving identity.
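The four steps in the abstract (schema alignment aside) can be caricatured in a few lines of Python: cheap candidate selection by blocking on the first letter of the name, a string-similarity test standing in for attribute-level evidence, and fusion of matching records. The threshold, fields and records are invented, and this is not the IDRF rule language.

```python
from difflib import SequenceMatcher

def similarity(a, b):
    """Crude attribute similarity between two name strings."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def resolve(entities, threshold=0.85):
    merged = []
    for ent in entities:
        candidate = next(
            (m for m in merged
             if m["name"][0].lower() == ent["name"][0].lower()      # candidate selection
             and similarity(m["name"], ent["name"]) >= threshold),  # similarity evidence
            None)
        if candidate:                                               # data fusion
            candidate["sources"].update(ent["sources"])
        else:
            merged.append({"name": ent["name"], "sources": set(ent["sources"])})
    return merged

records = [
    {"name": "Acme Research Ltd",     "sources": {"web page"}},
    {"name": "Acme Research Limited", "sources": {"database record"}},
    {"name": "Apex Laboratories",     "sources": {"news text"}},
]
print(resolve(records))
```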
APA, Harvard, Vancouver, ISO, and other styles
14

Sitek, Arkadiusz. "The development of multiple line transmission sources for SPECT." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1998. http://www.collectionscanada.ca/obj/s4/f2/dsk2/tape17/PQDD_0019/NQ27248.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
15

Selcon, Stephen Jonathan. "Multiple information sources : the effect of redundancy on performance." Thesis, University of Reading, 1994. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.239127.

Full text
APA, Harvard, Vancouver, ISO, and other styles
16

Yeang, Chen-Hsiang 1969. "Inferring regulatory networks from multiple sources of genomic data." Thesis, Massachusetts Institute of Technology, 2004. http://hdl.handle.net/1721.1/28731.

Full text
Abstract:
Thesis (Sc. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2004.
Includes bibliographical references (p. 279-299).
This thesis addresses the problems of modeling the gene regulatory system from multiple sources of large-scale datasets. In the first part, we develop a computational framework of building and validating simple, mechanistic models of gene regulation from multiple sources of data. These models, which we call physical network models, annotate the network of molecular interactions with several types of attributes (variables). We associate model attributes with physical interaction and knock-out gene expression data according to the confidence measures of data and the hypothesis that gene regulation is achieved via molecular interaction cascades. By applying standard model inference algorithms, we are able to obtain the configurations of model attributes which optimally fit the data. Because existing datasets do not provide sufficient constraints to the models, there are many optimal configurations which fit the data equally well. In the second part, we develop an information theoretic score to measure the expected capacity of new knock-out experiments in terms of reducing the model uncertainty. We collaborate with biologists to perform suggested knock-out experiments and analyze the data. The results indicate that we can reduce model uncertainty by incorporating new data. The first two parts focus on the regulatory effects along single pathways. In the third part, we consider the combinatorial effects of multiple transcription factors on transcription control. We simplify the problem by characterizing a combinatorial function of multiple regulators in terms of the properties of single regulators: the function of a regulator and its direction of effectiveness. With this characterization, we develop an incremental algorithm to identify the regulatory models from protein-DNA binding and gene expression data. These models to a large extent agree with the knowledge of gene regulation pertaining to the corresponding regulators. The three works in this thesis provide a framework of modeling gene regulatory networks.
by Chen-Hsiang Yeang.
Sc.D.
APA, Harvard, Vancouver, ISO, and other styles
17

Daniels, Reza Che. "The income distribution with multiple sources of survey error." Doctoral thesis, University of Cape Town, 2013. http://hdl.handle.net/11427/5777.

Full text
Abstract:
Includes abstract.
Includes bibliographical references.
Estimating parameters of the income distribution in public-use micro datasets is frequently complicated by multiple sources of survey error. This dissertation consists of three main chapters that, taken together, provide insight into several important econometric concerns that arise when analysing income from household surveys. The country of interest is South Africa, but despite this geographical specificity, the discussion in each chapter is generalisable to any household survey concerned with measuring any component of income.
APA, Harvard, Vancouver, ISO, and other styles
18

Crawhall, Robert J. H. "EMI potential of multiple sources within a shielded enclosure." Thesis, University of Ottawa (Canada), 1993. http://hdl.handle.net/10393/6750.

Full text
Abstract:
An analytic model is developed for the prediction of electromagnetic emissions potential due to multiple integrated circuits (ICs) within a shielded enclosure. Detailed analysis of radiated emissions from an IC leads to dipole representations of the sources. These dipole sources are then applied in the determination of the fields and currents induced on the inside of the enclosure. The magnitude of these disturbances is taken as a metric of electromagnetic emissions potential. The power spectral density and the radiation efficiency of the ICs are investigated. ICs are represented by magnetic and electric dipoles, the magnitude and polarization of which are determined through measurement or calculation. Green's functions are derived that relate the dipole sources to the electromagnetic disturbances induced on the walls of the enclosure. Mapping matrices are proposed that relate multiple sources to multiple points on the wall. The role of source diversity in the summing problem is discussed. A stochastic analysis of the multiple source problem determines the distribution of disturbances due to known probability distributions of significant source factors. The shielding effectiveness of enclosures is determined using perturbation models of the leakage paths driven by the disturbances calculated through the application of the mapping matrices.
APA, Harvard, Vancouver, ISO, and other styles
19

Wu, Zhenyu, Ali Bilgin, and Michael W. Marcellin. "JOINT SOURCE/CHANNEL CODING FOR TRANSMISSION OF MULTIPLE SOURCES." International Foundation for Telemetering, 2005. http://hdl.handle.net/10150/604932.

Full text
Abstract:
ITC/USA 2005 Conference Proceedings / The Forty-First Annual International Telemetering Conference and Technical Exhibition / October 24-27, 2005 / Riviera Hotel & Convention Center, Las Vegas, Nevada
A practical joint source/channel coding algorithm is proposed for the transmission of multiple images and videos to reduce the overall reconstructed source distortion at the receiver within a given total bit rate. It is demonstrated that by joint coding of multiple sources with such an objective, both improved distortion performance as well as reduced quality variation can be achieved at the same time. Experimental results based on multiple images and video sequences justify our conclusion.
APA, Harvard, Vancouver, ISO, and other styles
20

Coogle, Richard A. "Using multiple agents in uncertainty minimization of ablating target sources." Diss., Georgia Institute of Technology, 2014. http://hdl.handle.net/1853/53036.

Full text
Abstract:
The objective of this research effort is to provide an efficient methodology for a multi-agent robotic system to observe moving targets that are generated from an ablation process. An ablation process is a process where a larger mass is reduced in volume as a result of erosion; this erosion results in smaller, independent masses. An example of such a process is the natural process that gives rise to icebergs, which are generated through an ablation process referred to as ice calving. Ships that operate in polar regions continue to face the threat of floating ice sheets and icebergs generated from the ice ablation process. Although systems have been implemented to track these threats with varying degrees of success, many of these techniques require that the operations are conducted outside of some boundary where the icebergs are known not to drift. Since instances where polar operations must be conducted within such a boundary line do exist (e.g., resource exploration), methods for situational awareness of icebergs for these operations are necessary. In this research, efficacy of these methods is correlated to the initial acquisition time of observing newly ablated targets, as it provides for the ability to enact early countermeasures. To address the research objective, the iceberg tracking problem is defined such that it is re-cast within a class of robotic, multiagent target-observation problems. From this new definition, the primary contributions of this research are obtained: 1) A definition of the iceberg observation problem that extends an existing robotic observation problem to the requirements for the observation of floating ice masses; 2) A method for modeling the activity regions on an ablating source to extract ideal search regions to quickly acquire newly ablated targets; 3) A method for extracting metrics for this model that can be used to assess performance of observation algorithms and perform resource allocation. A robot controller is developed that implements the algorithms that result from these contributions and comparisons are made to existing target acquisition techniques.
APA, Harvard, Vancouver, ISO, and other styles
21

Zahrn, Frederick Craig. "Studies of inventory control and capacity planning with multiple sources." Diss., Atlanta, Ga. : Georgia Institute of Technology, 2009. http://hdl.handle.net/1853/29736.

Full text
Abstract:
Thesis (Ph.D)--Industrial and Systems Engineering, Georgia Institute of Technology, 2010.
Committee Co-Chair: John H. Vande Vate; Committee Co-Chair: Shi-Jie Deng; Committee Member: Anton J. Kleywegt; Committee Member: Hayriye Ayhan; Committee Member: Mark E. Ferguson. Part of the SMARTech Electronic Thesis and Dissertation Collection.
APA, Harvard, Vancouver, ISO, and other styles
22

Grobe, Gerrit. "Real options analysis of investments under multiple sources of uncertainty." Thesis, University of Cambridge, 2005. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.615040.

Full text
APA, Harvard, Vancouver, ISO, and other styles
23

Benanzer, Todd W. "System Design of Undersea Vehicles with Multiple Sources of Uncertainty." Wright State University / OhioLINK, 2008. http://rave.ohiolink.edu/etdc/view?acc_num=wright1215046954.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

Meyer, Ryan. "Multiple potential well structure in inertial electrostatic confinement devices." Diss., Columbia, Mo. : University of Missouri-Columbia, 2004. http://hdl.handle.net/10355/4098.

Full text
Abstract:
Thesis (M.S.) University of Missouri-Columbia, 2004.
The entire dissertation/thesis text is included in the research.pdf file; the official abstract appears in the short.pdf file (which also appears in the research.pdf); a non-technical general description, or public abstract, appears in the public.pdf file. Title from title screen of research.pdf file viewed on (June 30, 2006). Vita. Includes bibliographical references.
APA, Harvard, Vancouver, ISO, and other styles
25

Lombard, Anthony [Verfasser]. "Localization of Multiple Independent Sound Sources in Adverse Environments / Anthony Lombard." München : Verlag Dr. Hut, 2012. http://d-nb.info/1029400148/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

Rezazadeh, Arezou. "Error exponent analysis for the multiple access channel with correlated sources." Doctoral thesis, Universitat Pompeu Fabra, 2019. http://hdl.handle.net/10803/667611.

Full text
Abstract:
Due to the delay constraints of modern communication systems, studying reliable communication with finite-length codewords is much needed. Error exponents are one approach to studying the finite-length regime from the information-theoretic point of view. In this thesis, we study the achievable exponent for single-user communication and also for the multiple-access channel with both independent and correlated sources. By studying different coding schemes, including independent and identically distributed, independent and conditionally distributed, message-dependent, generalized constant-composition and conditional constant-composition ensembles, we derive a number of achievable exponents for both single-user and multi-user communication, and we analyze them.
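For background (this is the classical single-user result, not one of the thesis's new exponents): for a discrete memoryless channel W with input distribution Q, Gallager's random-coding bound guarantees codes whose error probability decays exponentially in the blocklength n at rate at least

\[
E_r(R) = \max_{\rho \in [0,1]} \max_{Q} \big[ E_0(\rho, Q) - \rho R \big],
\qquad
E_0(\rho, Q) = -\log \sum_{y} \Big( \sum_{x} Q(x)\, W(y \mid x)^{\frac{1}{1+\rho}} \Big)^{1+\rho},
\]

so that P_e \le e^{-n E_r(R)} for rates R below capacity. The ensembles listed in the abstract change how codewords are drawn, and hence the achievable exponent.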
APA, Harvard, Vancouver, ISO, and other styles
27

Ma, J. "Merging and revision of uncertain knowledge and information from multiple sources." Thesis, Queen's University Belfast, 2010. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.517104.

Full text
APA, Harvard, Vancouver, ISO, and other styles
28

Bruneaux, Luke Julien. "Multiple Unnecessary Protein Sources and Cost to Growth Rate in E.coli." Thesis, Harvard University, 2013. http://dissertations.umi.com/gsas.harvard:11041.

Full text
Abstract:
The fitness and macromolecular composition of the gram-negative bacterium E.coli are governed by a seemingly insurmountable level of complexity. However, simple phenomenological measures may be found that describe its systems-level response to a variety of inputs. This thesis explores phenomenological approaches providing accurate quantitative descriptions of complex systems in E.coli. Chapter 1 examines the relationship between unnecessary protein production and growth rate in E.coli. It was previously unknown whether the negative effects on growth rate due to multiple unnecessary protein fractions would add linearly or collectively to produce a nonlinear response. Within the regime of this thesis, it appears that the interplay between growth rate and protein is consistent with a non-interacting model. We do not need to account for complex interaction between system components. Appendix A describes a novel technique for real-time measurement of messenger RNA in single living E.coli cells. Using this technique, one may accurately describe the transcriptional response of gene networks in single cells.
Physics
APA, Harvard, Vancouver, ISO, and other styles
29

Hall, Kimberlee K., and Phillip R. Scheuerman. "Development of Multiple Regression Models to Predict Sources of Fecal Pollution." Digital Commons @ East Tennessee State University, 2017. https://dc.etsu.edu/etsu-works/2880.

Full text
Abstract:
This study assessed the usefulness of multivariate statistical tools to characterize watershed dynamics and prioritize streams for remediation. Three multiple regression models were developed using water quality data collected from Sinking Creek in the Watauga River watershed in Northeast Tennessee. Model 1 included all water quality parameters, model 2 included parameters identified by stepwise regression, and model 3 was developed using canonical discriminant analysis. Models were evaluated in seven creeks to determine if they correctly classified land use and level of fecal pollution. At the watershed level, the models were statistically significant (p < 0.001) but with low r2 values (Model 1 r2 = 0.02, Model 2 r2 = 0.01, Model 3 r2 = 0.35). Model 3 correctly classified land use in five of seven creeks. These results suggest this approach can be used to set priorities and identify pollution sources, but may be limited when applied across entire watersheds.
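As a minimal illustration of fitting one such multiple regression (the 'all parameters' flavour of Model 1), the sketch below regresses a fecal-indicator response on a few water-quality predictors with ordinary least squares; the predictor names and all values are invented, not the Sinking Creek data.

```python
import numpy as np

# Invented observations: predictors are turbidity (NTU), conductivity (uS/cm)
# and phosphate (mg/L); the response is log10 fecal coliform concentration.
X = np.array([[12.0, 310.0, 0.8],
              [ 4.0, 150.0, 0.2],
              [20.0, 420.0, 1.1],
              [ 7.5, 200.0, 0.4],
              [15.0, 360.0, 0.9]])
y = np.array([3.8, 2.1, 4.4, 2.6, 4.0])

X1 = np.column_stack([np.ones(len(y)), X])       # add an intercept column
coef, *_ = np.linalg.lstsq(X1, y, rcond=None)    # ordinary least squares fit

pred = X1 @ coef
ss_res = float(np.sum((y - pred) ** 2))
ss_tot = float(np.sum((y - y.mean()) ** 2))
print("coefficients:", np.round(coef, 3), " r^2:", round(1 - ss_res / ss_tot, 3))
```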
APA, Harvard, Vancouver, ISO, and other styles
30

Daly, Nancy Ann. "Recognition of words from their spellings : integration of multiple knowledge sources." Thesis, Massachusetts Institute of Technology, 1987. http://hdl.handle.net/1721.1/14791.

Full text
Abstract:
Thesis (M.S.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1987.
MICROFICHE COPY AVAILABLE IN ARCHIVES AND ENGINEERING.
This research was supported by the National Science Foundation and the Defense Advanced Research Projects Agency.
Bibliography: leaves 112-114.
by Nancy Ann Daly.
M.S.
APA, Harvard, Vancouver, ISO, and other styles
31

Vickery, Kathryn J. "Southern African dust sources as identified by multiple space borne sensors." Master's thesis, University of Cape Town, 2010. http://hdl.handle.net/11427/4814.

Full text
Abstract:
Includes abstract.
Includes bibliographical references (leaves 132-145).
Mineral aerosols emitted from arid and semi-arid regions affect global radiation, contribute to regional nutrient dynamics and impact local soil and water quality. Satellite imagery has been central to identifying source areas and to determining their distribution and the trajectories of dust around the globe. This study focuses on the dryland regions of Botswana, Namibia and South Africa. It uses the capabilities of the ultraviolet channels provided by the older Total Ozone Mapping Spectrometer (TOMS), the Ozone Monitoring Instrument (OMI) (a TOMS follow-up), the visible bands of the Moderate Resolution Imaging Spectroradiometer (MODIS), and the Spinning Enhanced Visible and Infrared Imager (SEVIRI) onboard Meteosat Second Generation (MSG). This study compares various dust detection products but also focuses on the application of thermal infrared bands from MSG through the use of the new "Pink Dust" visualisation technique using channels 7 (8.7 µm), 9 (10.8 µm), and 10 (12.0 µm). This multisensor approach resulted in regional maps highlighting the distribution of source points and establishing some of the prevalent transport pathways and likely deposition zones. Southern African dust sources include a few large and many small pans, subtle inland depressions and ephemeral river systems, which are subject to a range of climatic conditions as part of the Kalahari and Namib region. This work in particular examines whether source points are productive due to favourable climatic conditions. The debate around transport limitation versus supply limitation can only be resolved at the local scale, which requires observation at higher spatial and temporal resolution as provided by the latest dust detection products. MSG and MODIS in particular have shown distinct source point clusters in the Etosha and Makgadikgadi Pans which, based on the coarser resolution of the older TOMS, have so far been treated as homogeneous sources. Data analyses reveal 327 individual dust plumes over the 2005-2008 study period, some of which are more than 300 km in length. These are integrated into existing climate and weather records provided by National Centers for Environmental Prediction (NCEP) data. The results identified a set of dust drivers such as the Continental High Pressure, Bergwinds, Tropical Temperate and West Coast Troughs, and Westerly and Easterly Wave lows. This enhances our ability to predict such events, in particular if transport acts as the limiting driver. Some of these findings also have the potential to enhance our knowledge of the aerosol generation process elsewhere. The quality of the findings is still limited by problems associated with dust plume substrates and clearly requires significant surface validation relating to hydrological and climatic controls at the micro-scale. It is furthermore evident that no current instrument fully meets the requirements of the mineral aerosol research community.
APA, Harvard, Vancouver, ISO, and other styles
32

Yang, Sen. "Disease, Drug, and Target Association Predictions by Integrating Multiple Heterogeneous Sources." Case Western Reserve University School of Graduate Studies / OhioLINK, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=case1342194249.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Argudo, Medrano Oscar. "Realistic reconstruction and rendering of detailed 3D scenarios from multiple data sources." Doctoral thesis, Universitat Politècnica de Catalunya, 2018. http://hdl.handle.net/10803/620733.

Full text
Abstract:
During the last years, we have witnessed significant improvements in digital terrain modeling, mainly through photogrammetric techniques based on satellite and aerial photography, as well as laser scanning. These techniques allow the creation of Digital Elevation Models (DEM) and Digital Surface Models (DSM) that can be streamed over the network and explored through virtual globe applications like Google Earth or NASA WorldWind. The resolution of these 3D scenes has improved noticeably in the last years, reaching in some urban areas resolutions up to 1 m or less for DEM and buildings, and less than 10 cm per pixel in the associated aerial imagery. However, in rural, forest or mountainous areas, the typical resolution for elevation datasets ranges between 5 and 30 meters, and the typical resolution of corresponding aerial photographs ranges between 25 cm and 1 m. This current level of detail is only sufficient for aerial points of view, but as the viewpoint approaches the surface the terrain loses its realistic appearance. One approach to augment the detail on top of currently available datasets is adding synthetic details in a plausible manner, i.e. including elements that match the features perceived in the aerial view. By combining the real dataset with the instancing of models on the terrain and other procedural detail techniques, the effective resolution can potentially become arbitrary. There are several applications that do not need an exact reproduction of the real elements but would greatly benefit from plausibly enhanced terrain models: videogames and entertainment applications, visual impact assessment (e.g. how a new ski resort would look), virtual tourism, simulations, etc. In this thesis we propose new methods and tools to help the reconstruction and synthesis of high-resolution terrain scenes from currently available data sources, in order to achieve realistic-looking ground-level views. In particular, we decided to focus on rural scenarios, mountains and forest areas. Our main goal is the combination of plausible synthetic elements and procedural detail with publicly available real data to create detailed 3D scenes from existing locations. Our research has focused on the following contributions: - An efficient pipeline for aerial imagery segmentation - Plausible terrain enhancement from high-resolution examples - Super-resolution of DEM by transferring details from the aerial photograph - Synthesis of arbitrary tree picture variations from a reduced set of photographs - Reconstruction of 3D tree models from a single image - A compact and efficient tree representation for real-time rendering of forest landscapes
APA, Harvard, Vancouver, ISO, and other styles
34

Tremblay, Monica Chiarini. "Uncertainty in the information supply chain : integrating multiple health care data sources." [Tampa, Fla.] : University of South Florida, 2007. http://purl.fcla.edu/usf/dc/et/SFE0002086.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Endress, William. "Merging Multiple Telemetry Files from Widely Separated Sources for Improved Data Integrity." International Foundation for Telemetering, 2012. http://hdl.handle.net/10150/581824.

Full text
Abstract:
Merging telemetry data from multiple data sources into a single file provides the ability to fill in gaps in the data and reduce noise by taking advantage of the multiple sources. This is desirable when analyzing the data, as there is only one file to work from. Also, analysts will spend less time trying to explain away gaps and spikes in data that are attributable to dropped and noisy telemetry frames, leading to more accurate reports. This paper discusses the issues and solutions for doing the merge.
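As a rough illustration of the merging idea described above (not the paper's actual algorithm), the sketch below aligns time-stamped frames from two receiving sites, averages samples where both sources are valid to reduce noise, and falls back to whichever source is available to fill dropouts. The column names and the simple mean-combining rule are assumptions.

```python
import numpy as np
import pandas as pd

def merge_telemetry(sources):
    """Merge time-stamped telemetry from several receiving sites.
    Each source is a DataFrame indexed by frame time with a 'value'
    column; NaN marks dropped or corrupt frames."""
    combined = pd.concat(sources, axis=1, keys=list(range(len(sources))))
    values = combined.xs("value", axis=1, level=1)
    merged = values.mean(axis=1, skipna=True)   # average where sources overlap (noise reduction)
    merged = merged.interpolate(limit=2)        # optionally bridge short gaps no source covered
    return merged

# Example: two sources with different dropouts
t = pd.to_datetime(["2012-01-01 00:00:00", "2012-01-01 00:00:01", "2012-01-01 00:00:02"])
a = pd.DataFrame({"value": [1.0, np.nan, 3.2]}, index=t)
b = pd.DataFrame({"value": [1.1, 2.0, np.nan]}, index=t)
print(merge_telemetry([a, b]))
```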
APA, Harvard, Vancouver, ISO, and other styles
36

Liao, Zhining. "Query processing for data integration from multiple data sources over the Internet." Thesis, University of Ulster, 2005. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.422192.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Huff, Amy K. "Multiple stable oxygen isotope analysis of atmospheric carbon monoxide and its sources /." Diss., Connect to a 24 p. preview or request complete full text in PDF format. Access restricted to UC campuses, 1998. http://wwwlib.umi.com/cr/ucsd/fullcit?p9835376.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Roberson, Daniel Richard. "Application of multiple information sources to prediction of engine time on-wing." Thesis, Massachusetts Institute of Technology, 2015. http://hdl.handle.net/1721.1/99041.

Full text
Abstract:
Thesis: M.B.A., Massachusetts Institute of Technology, Sloan School of Management, 2015. In conjunction with the Leaders for Global Operations Program at MIT.<br>Thesis: S.M., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, 2015. In conjunction with the Leaders for Global Operations Program at MIT.<br>Cataloged from PDF version of thesis.<br>Includes bibliographical references (pages 85-88).<br>The maintenance and operation of commercial turbofan engines relies upon an understanding of the factors that contribute to engine degradation from the operational mission, environment and maintenance procedures. A multiple-information-source system is developed using the Pratt & Whitney engine to combine predictive engineering simulations with socio-technical effects and environmental factors, yielding an improved predictive system for engine time on-wing. The system establishes an airport severity factor for all operating airports based upon mission parameters and environmental parameters. The final system involves three hierarchical layers: a 1-D engineering simulation; a parametric survival study; and a logistic regression study. Each of these layers is combined so that the output of the prior becomes the input of the next model. The combined system demonstrates an improvement in current practices at a fleet level from an R2 of 0.526 to 0.7966 and provides an indication of the relationship between suspended particulate matter and engine degradation. The potential effects on the airline industry from city-based severity in maintenance contracts are explored. Application of multiple information sources requires both knowledge of the system and access to the data. The organizational structure of a data analytics organization is described; an architecture for integration of this team within an existing corporate environment is proposed.<br>by Daniel Richard Roberson.<br>M.B.A.<br>S.M.
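The hierarchical chaining described above, where each layer's output becomes the next layer's input, can be sketched as follows. The variables, the toy severity formula, and the choice of a Weibull fit plus a scikit-learn logistic regression are illustrative assumptions, not the models actually used in the thesis.

```python
import numpy as np
from scipy.stats import weibull_min
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Layer 1 (stand-in for the 1-D engineering simulation): a toy airport severity
# score from temperature and particulate level -- purely illustrative.
temps = rng.uniform(5, 45, 200)          # degrees C at departure airport
dust = rng.uniform(0, 1, 200)            # normalised particulate index
severity = 0.02 * temps + 1.5 * dust     # hypothetical severity factor

# Layer 2 (parametric survival): simulate time-on-wing whose scale shrinks with
# severity, then fit a Weibull distribution to the observed removals.
time_on_wing = weibull_min.rvs(c=2.0, scale=20000 / (1 + severity), random_state=0)
shape, loc, scale = weibull_min.fit(time_on_wing, floc=0)

# Layer 3 (logistic regression): probability of early removal given severity,
# using the survival layer's median life as the threshold.
early = (time_on_wing < np.median(time_on_wing)).astype(int)
clf = LogisticRegression().fit(severity.reshape(-1, 1), early)
print(shape, scale, clf.coef_)
```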
APA, Harvard, Vancouver, ISO, and other styles
39

Tecle, Ghebremuse Emehatsion. "Investigation and explanation of major factors affecting academic writing : using multiple sources." Thesis, University College London (University of London), 1998. http://discovery.ucl.ac.uk/10007488/.

Full text
Abstract:
This study investigates whether teaching writing using a multiple-sources approach (TWUMSA) is more effective than the current traditional approach to teaching writing for academic purposes. The main research questions are: Can teaching writing using multiple sources (on a topic) lead to improved academic writing? And what is the nature of the intertextual links made by the subjects (students) in the study? A total of 112 subjects (56 control and 56 experimental) took part in the study. The experimental groups received instruction on the basis of a teaching approach using multiple sources, which involves understanding and organizing texts, selecting, generating and connecting ideas, paraphrasing, integrating citations and documenting sources. The control groups received instruction on the basis of the current traditional approach for 16 weeks. An independent-samples t-test comparison of means was applied, and the results of the post-tests (phases I and II) show that there are statistically significant differences between the approach using sources and the current traditional approach. The relationship between prior knowledge of subject matter and post-test scores is modestly positive. The analysis of the subjects' essays reveals that more subjects in the control groups composed their essays using information from, for instance, the second text, then moved to either the first or the third text one after the other, but did not take any more pieces of information from the text(s) they had already drawn on, whereas more subjects in the experimental groups composed some content units from one text, moved to another text, and returned now and then to the text(s) from which they had already drawn pieces of information or content units. Thus, the intertextual links made by the experimental groups appear better and more interconnected and interwoven than those of the control groups. Three major categories of composing content units (CUs) are established: (1) direct copy CUs, (2) paraphrased CUs, and (3) generated CUs. On the basis of the content units the subjects exhibited, they are classified into five kinds of writers: 'compilers', 'harmonizers', 'constructivists', 'dualists', and 'paraphrasers'. Thirty-eight lecturers teaching at sophomore level at the University of Asmara, Eritrea, and 200 sophomores completed questionnaires accompanied by verbal rating scales (a. very high, b. high, c. moderate, d. little). The lecturers' ratings of (some similar) statements in the questionnaires are lower than the sophomores' ratings. The students' responses to the statements in the questionnaire indicate their unfamiliarity with writing using sources, positive attitudinal changes toward writing using sources, and mostly moderate perception of their capabilities in writing using other texts. The students also linked the benefits of writing using sources to other courses. The experimental groups appear to be better users of strategies or activities in the processes of writing using sources. Analysis of the interview data revealed some causes of the problems the interviewees faced when they wrote using sources. The interviewees stressed the importance of prior knowledge to writing. They also reported positive attitudinal changes toward learning through TWUMSA.
APA, Harvard, Vancouver, ISO, and other styles
40

Lim, Young Shin. "Effects of Likability of Multiple Layers of Sources on Social Network Sites." The Ohio State University, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=osu1461252155.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Sama, Sanjana. "An Empirical Study Investigating Source Code Summarization Using Multiple Sources of Information." Youngstown State University / OhioLINK, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=ysu1527673352984124.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Cooke, Payton, and Payton Cooke. "Comparative Analysis of Multiple Data Sources for Travel Time and Delay Measurement." Thesis, The University of Arizona, 2016. http://hdl.handle.net/10150/622847.

Full text
Abstract:
Arterial performance measurement is an essential tool for both researchers and practitioners, guiding decisions on traffic management, future improvements, and public information. Link travel time and intersection control delay are two primary performance measures that are used to evaluate arterial level of service. Despite recent technological advancements, collecting travel time and intersection delay data can be a time-consuming and complicated process. Limited budgets, numerous available technologies, a rapidly changing field, and other challenges make performance measurement and comparison of data sources difficult. Three common data collection sources (probe vehicles, Bluetooth media access control readers, and manual queue length counts) are often used for performance measurement and validation of new data methods. Comparing these and other data sources is important as agencies and researchers collect arterial performance data. This study provides a methodology for comparing data sources, using statistical tests and linear correlation to compare methods and identify strengths and weaknesses. Additionally, this study examines data normality as an issue that is seldom considered, yet can affect the performance of statistical tests. These comparisons can provide insight into the selection of a particular data source for use in the field or for research. Data collected along Grant Road in Tucson, Arizona, was used as a case study to evaluate the methodology and the data sources. For evaluating travel time, GPS probe vehicle and Bluetooth sources produced similar results. Bluetooth can provide a greater volume of data more easily in addition to samples large enough for more rigorous statistical evaluation, but probe vehicles are more versatile and provide higher resolution data. For evaluating intersection delay, probe vehicle and queue count methods did not always produce similar results.
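A minimal sketch of the kind of comparison described above: check each sample for normality, choose a parametric or non-parametric two-sample test accordingly, and report the linear correlation between sources. The travel-time numbers are synthetic, not data from the Grant Road case study.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Synthetic link travel times (seconds) for the same runs from two sources
probe = rng.normal(180, 20, 40)            # GPS probe vehicle
bluetooth = probe + rng.normal(0, 10, 40)  # Bluetooth re-identification, extra noise

# Normality check decides which two-sample test is appropriate
normal = stats.shapiro(probe).pvalue > 0.05 and stats.shapiro(bluetooth).pvalue > 0.05
if normal:
    test = stats.ttest_ind(probe, bluetooth)
else:
    test = stats.mannwhitneyu(probe, bluetooth)

r, r_p = stats.pearsonr(probe, bluetooth)  # linear correlation between the sources
print(f"difference test p={test.pvalue:.3f}, correlation r={r:.2f} (p={r_p:.3f})")
```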
APA, Harvard, Vancouver, ISO, and other styles
43

Pather, Direshin. "A model for context awareness for mobile applications using multiple-input sources." Thesis, Nelson Mandela Metropolitan University, 2015. http://hdl.handle.net/10948/2969.

Full text
Abstract:
Context-aware computing enables mobile applications to discover and benefit from valuable context information, such as user location, time of day and current activity. However, determining the user's context throughout their daily activities is one of the main challenges of context-aware computing. With the increasing number of built-in mobile sensors and other input sources, existing context models do not effectively handle context information related to personal user context. The objective of this research was to develop an improved context-aware model to support the context awareness needs of mobile applications. An existing context-aware model was selected as the most complete model to use as a basis for the proposed model to support context awareness in mobile applications. The existing context-aware model was modified to address the shortcomings of existing models in dealing with context information related to personal user context. The proposed model supports four different context dimensions, namely Physical, User Activity, Health and User Preferences. A prototype, called CoPro, was developed, based on the proposed model, to demonstrate the effectiveness of the model. Several experiments were designed and conducted to determine if CoPro was effective, reliable and capable. CoPro was considered effective as it produced low-level context as well as inferred context. The reliability of the model was confirmed by evaluating CoPro using Quality of Context (QoC) metrics such as Accuracy, Freshness, Certainty and Completeness. CoPro was also found to be capable of dealing with the limitations of the mobile computing platform such as limited processing power. The research determined that the proposed context-aware model can be used to successfully support context awareness in mobile applications. Design recommendations were proposed and future work will involve converting the CoPro prototype into middleware in the form of an API to provide easier access to context awareness support in mobile applications.
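A minimal sketch of what a context record covering the model's four dimensions, together with one Quality of Context metric (freshness), might look like; the field names and the linear decay rule are illustrative assumptions, not CoPro's actual schema.

```python
from dataclasses import dataclass, field
import time

@dataclass
class Context:
    """Toy context record covering the four dimensions named in the abstract."""
    physical: dict = field(default_factory=dict)       # e.g. location, light, noise
    user_activity: dict = field(default_factory=dict)  # e.g. walking, driving
    health: dict = field(default_factory=dict)         # e.g. heart rate
    preferences: dict = field(default_factory=dict)    # e.g. preferred units
    captured_at: float = field(default_factory=time.time)

    def freshness(self, max_age_s: float = 60.0) -> float:
        """QoC freshness in [0, 1]: 1 when just captured, decaying to 0 at max_age_s."""
        age = time.time() - self.captured_at
        return max(0.0, 1.0 - age / max_age_s)

ctx = Context(physical={"lat": -33.96, "lon": 25.61}, user_activity={"state": "walking"})
print(round(ctx.freshness(), 2))
```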
APA, Harvard, Vancouver, ISO, and other styles
44

Habool, Al-Shamery Maitham. "Reconstruction of multiple point sources by employing a modified Gerchberg-Saxton iterative algorithm." Thesis, University of Sussex, 2018. http://sro.sussex.ac.uk/id/eprint/79826/.

Full text
Abstract:
Digital holograms have been developed and used in many applications. They provide a technique by which a wavefront can be recorded and then reconstructed, often even in the absence of the original object. In this project, we use digital holography methods in which the original object amplitude and phase are recorded numerically, which would allow these data to be downloaded to a spatial light modulator (SLM). This provides digital holography with capabilities that are not available using optical holographic methods. The digitally reconstructed image can be refocused to different depths depending on the reconstruction distance. This remarkable aspect of digital holography can be useful in many applications, one of the most beneficial being biological cell studies. In this research, point-source in-line and off-axis digital holography with numerical reconstruction has been studied. The point-source hologram can be used in many biological applications. As the original object we use the binary amplitude Fresnel zone plate, which is made of rings with alternating opaque and transparent transmittance. The in-line hologram of a spherical wave of wavelength λ emanating from the point source is initially employed in the project. We subsequently employ an off-axis point source in which the original point-source object is translated away from its original on-axis location. Firstly, we create the binary amplitude Fresnel zone plate (FZP), which is considered the hologram of the point source. We determine a phase-only digital hologram calculation technique for the single point-source object. We have used a modified Gerchberg-Saxton algorithm (MGSA) instead of the non-iterative algorithm employed in classical analogue holography. The first complex amplitude distribution, i(x, y), is the result of the Fourier transform of the point-source phase combined with a random phase. This complex field distribution is the input of the iteration process. Secondly, we propagate this light field by using the Fourier transform method. Next, we apply the first constraint by modifying the amplitude distribution, that is, by replacing it with the measured modulus while keeping the phase distribution unchanged. We use the root mean square error (RMSE) criterion between the reconstructed field and the target field to control the iteration process. The RMSE decreases at each iteration, giving rise to an error reduction in the reconstructed wavefront. We then extend this method to the reconstruction of multiple point sources. Thus the overall aim of this thesis has been to create an algorithm that is able to reconstruct multi-point-source objects from only their modulus. The method could then be used for biological microscopy applications in which it is necessary to determine the position of a fluorescing source from within a volume of biological tissue.
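The iteration described above can be sketched with numpy FFTs as below. This is a generic Gerchberg-Saxton-style loop under the assumption of simple Fourier-transform propagation, not the thesis's exact modified algorithm; the target object and "measured" modulus are synthetic.

```python
import numpy as np

def gs_phase_retrieval(target_amp, measured_amp, n_iter=100, seed=0):
    """Gerchberg-Saxton-style iteration: alternate between the object plane
    (amplitude forced to target_amp) and the Fourier plane (amplitude forced
    to measured_amp), keeping only the phase each time. A minimal sketch."""
    rng = np.random.default_rng(seed)
    field = target_amp * np.exp(1j * rng.uniform(0, 2 * np.pi, target_amp.shape))
    for _ in range(n_iter):
        far = np.fft.fft2(field)                          # propagate to Fourier plane
        far = measured_amp * np.exp(1j * np.angle(far))   # enforce measured modulus
        near = np.fft.ifft2(far)                          # propagate back
        field = target_amp * np.exp(1j * np.angle(near))  # enforce object amplitude
    rmse = np.sqrt(np.mean((np.abs(np.fft.fft2(field)) - measured_amp) ** 2))
    return np.angle(field), rmse                          # phase estimate and residual error

# Two point sources as the target object; the 'measured' modulus is simulated from it
obj = np.zeros((64, 64)); obj[20, 20] = obj[40, 44] = 1.0
measured = np.abs(np.fft.fft2(obj))
phase, rmse = gs_phase_retrieval(obj, measured, n_iter=50)
print(f"final RMSE: {rmse:.3e}")
```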
APA, Harvard, Vancouver, ISO, and other styles
45

Swartling, Mikael. "Direction of Arrival Estimation and Localization of Multiple Speech Sources in Enclosed Environments." Doctoral thesis, Blekinge Tekniska Högskola, Avdelningen för elektroteknik, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-00520.

Full text
Abstract:
Speech communication is gaining in popularity in many different contexts as technology evolves. With the introduction of mobile electronic devices such as cell phones and laptops, and fixed electronic devices such as video and teleconferencing systems, more people are communicating which leads to an increasing demand for new services and better speech quality. Methods to enhance speech recorded by microphones often operate blindly without prior knowledge of the signals. With the addition of multiple microphones to allow for spatial filtering, many blind speech enhancement methods have to operate blindly also in the spatial domain. When attempting to improve the quality of spoken communication it is often necessary to be able to reliably determine the location of the speakers. A dedicated source localization method on top of the speech enhancement methods can assist the speech enhancement method by providing the spatial information about the sources. This thesis addresses the problem of speech-source localization, with a focus on the problem of localization in the presence of multiple concurrent speech sources. The primary work consists of methods to estimate the direction of arrival of multiple concurrent speech sources from an array of sensors and a method to correct the ambiguities when estimating the spatial locations of multiple speech sources from multiple arrays of sensors. The thesis also improves the well-known SRP-based methods with higher-order statistics, and presents an analysis of how the SRP-PHAT performs when the sensor array geometry is not fully calibrated. The thesis is concluded by two envelope-domain-based methods for tonal pattern detection and tonal disturbance detection and cancelation which can be useful to further increase the usability of the proposed localization methods. The main contribution of the thesis is a complete methodology to spatially locate multiple speech sources in enclosed environments. New methods and improvements to the combined solution are presented for the direction-of-arrival estimation, the location estimation and the location ambiguity correction, as well as a sensor array calibration sensitivity analysis.
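As a hedged illustration of the correlation machinery underlying SRP-PHAT-style localization, the sketch below estimates the time difference of arrival between two microphones with GCC-PHAT and converts it to a direction of arrival. The signals, array geometry and far-field assumption are all illustrative, not the thesis's implementation.

```python
import numpy as np

def gcc_phat(sig, ref, fs, max_tau=None):
    """Estimate the delay of 'sig' relative to 'ref' with the phase transform
    (PHAT) weighting -- the building block behind SRP-PHAT-style localization."""
    n = sig.size + ref.size
    S = np.fft.rfft(sig, n=n) * np.conj(np.fft.rfft(ref, n=n))
    cc = np.fft.irfft(S / (np.abs(S) + 1e-12), n=n)      # PHAT: keep phase, drop magnitude
    max_shift = n // 2 if max_tau is None else min(n // 2, int(max_tau * fs))
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    return (np.argmax(np.abs(cc)) - max_shift) / fs       # delay in seconds

fs, c, d = 16000, 343.0, 0.10          # sample rate, speed of sound, mic spacing (m)
rng = np.random.default_rng(0)
x = rng.normal(size=4096)              # stand-in for a speech frame
mic1, mic2 = x, np.roll(x, 3)          # second microphone delayed by 3 samples
tau = gcc_phat(mic2, mic1, fs, max_tau=d / c)
theta = np.degrees(np.arcsin(np.clip(tau * c / d, -1, 1)))   # far-field DOA estimate
print(f"estimated delay {tau * 1e6:.1f} us, DOA {theta:.1f} deg")
```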
APA, Harvard, Vancouver, ISO, and other styles
46

Nik, Ali Nik Hakimi Bin. "Classification and localisation of multiple partial discharge sources within high voltage transformer windings." Thesis, University of Southampton, 2017. https://eprints.soton.ac.uk/415793/.

Full text
Abstract:
Partial discharge (PD) analysis is a common method for condition monitoring and diagnostics of power transformers, which can be used as a tool for assessing the lifespan of transformers and can detect insulation malfunctions before they lead to failure. This report describes the development of analytical tools for PD activities within HV transformer windings. In most cases, PD will occur in transformer windings due to ageing processes, operational overstressing or defects introduced during manufacture, and different PD sources have different effects on the condition and performance of power equipment insulation systems. Therefore, for further analyses, the ability to accurately distinguish between the PD signals generated from different sources is seen as a critical function for future diagnostic systems. Under realistic field conditions, multiple PD sources may be activated simultaneously within the transformer winding. An experiment has been designed to assess different methodologies for the identification and localisation of multiple PD sources within an HV transformer winding. Previous work at Southampton developed a non-linear technique that facilitates identification of the location of a single PD source within an interleaved winding. It is assumed that any discharge occurring at any point along a winding will produce an electrical signal that will propagate as a travelling wave towards both ends of the winding. This project is concerned with the feasibility of locating several sources simultaneously based only on measurement data from wideband radio frequency current transformers (RFCTs) placed at the neutral-to-earth point and the bushing tap-point to earth. The proposed processing technique relies on the assumption that the PD pulses generated from different sources exhibit unique waveform characteristics. Due to termination and propagation-path characteristics, PD signals suffer attenuation and distortion as they travel along transformer windings. This causes changes in the energy characteristics of the PD pulses at both measurement points, which can be used to separate, identify and locate the multiple PDs within an HV transformer winding. Based on analysis of the data captured from the experiment, various approaches for identifying multiple PD sources have been assessed. The results obtained indicate that the analysis of absolute energy distributions determined using Mathematical Morphology and the use of OPTICS for clustering will reliably separate PD data from two sources that are simultaneously active within a distributed winding.
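A minimal sketch of the clustering step described above: per-pulse energy features measured at the two RFCT points are grouped with scikit-learn's OPTICS so that pulses from different sources fall into separate clusters. The feature values are synthetic stand-ins, not measurements from the experiment.

```python
import numpy as np
from sklearn.cluster import OPTICS

rng = np.random.default_rng(0)
# Synthetic per-pulse features: absolute energy seen at the bushing-tap RFCT
# and at the neutral-to-earth RFCT. Two PD sources at different winding
# positions give different energy ratios -- purely illustrative values.
source_a = rng.normal(loc=[0.8, 0.3], scale=0.05, size=(60, 2))
source_b = rng.normal(loc=[0.4, 0.7], scale=0.05, size=(60, 2))
pulses = np.vstack([source_a, source_b])

clusterer = OPTICS(min_samples=10)       # density-based, no need to fix the cluster count
labels = clusterer.fit(pulses).labels_   # -1 marks pulses left unassigned as noise

print("clusters found:", sorted(set(labels) - {-1}))
print("pulses per cluster:", np.bincount(labels[labels >= 0]))
```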
APA, Harvard, Vancouver, ISO, and other styles
47

Chun, Seokjoon. "Using MIMIC Methods to Detect and Identify Sources of DIF among Multiple Groups." Scholar Commons, 2014. https://scholarcommons.usf.edu/etd/5352.

Full text
Abstract:
This study investigated the efficacy of multiple indicators, multiple causes (MIMIC) methods in detecting uniform and nonuniform differential item functioning (DIF) among multiple groups, where the underlying causes of DIF were different. Three different implementations of MIMIC DIF detection were studied: sequential free baseline, free baseline, and constrained baseline. In addition, the robustness of the MIMIC methods against violation of their assumption of equal factor variance across comparison groups was investigated. We found that the sequential free baseline method provided Type I error and power rates similar to the free baseline method with a designated anchor, and much better Type I error and power rates than the constrained baseline method across the four groups formed by the co-occurring background variables. But when the equal factor variance assumption was violated, the MIMIC methods yielded inflated Type I error rates. Also, the MIMIC procedure had problems correctly identifying the sources of DIF, so further methodological developments are needed.
APA, Harvard, Vancouver, ISO, and other styles
48

Larsson, Jimmy. "Taxonomy Based Image Retrieval : Taxonomy Based Image Retrieval using Data from Multiple Sources." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-180574.

Full text
Abstract:
With a multitude of images available on the Internet, how do we find what we are looking for? This project tries to determine how much the precision and recall of search queries are improved by applying a word taxonomy to traditional Text-Based Image Search and Content-Based Image Search. By applying a word taxonomy to different data sources, a strong keyword filter and a keyword extender were implemented and tested. The results show that, depending on the implementation, either precision or recall can be increased. By using a similar approach in real-life implementations, it is possible to push images with higher precision to the front while keeping a high recall value, thus increasing the experienced relevance of image search.
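A toy sketch of the two taxonomy uses named above, a keyword extender and precision/recall measurement; the taxonomy, image tags and relevance labels are invented for illustration. Expanding the query with narrower taxonomy terms tends to raise recall, while a strict keyword filter would instead favour precision.

```python
# A toy word taxonomy: each term maps to its narrower terms.
taxonomy = {"animal": ["dog", "cat"], "dog": ["poodle", "terrier"]}

def expand(query):
    """Keyword extender: add all narrower taxonomy terms to the query."""
    terms, stack = set(), [query]
    while stack:
        t = stack.pop()
        if t not in terms:
            terms.add(t)
            stack.extend(taxonomy.get(t, []))
    return terms

images = {"img1": {"poodle"}, "img2": {"cat"}, "img3": {"car"}, "img4": {"dog"}}
relevant = {"img1", "img2", "img4"}               # ground truth for the query "animal"

query_terms = expand("animal")
retrieved = {name for name, tags in images.items() if tags & query_terms}

precision = len(retrieved & relevant) / len(retrieved)
recall = len(retrieved & relevant) / len(relevant)
print(sorted(retrieved), f"precision={precision:.2f}", f"recall={recall:.2f}")
```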
APA, Harvard, Vancouver, ISO, and other styles
49

Boyum, Danielle C. "Primary Sources in Social Studies| A Multiple Case Study Examining the Successful Use of Primary Sources in the Secondary History Classroom." Thesis, Piedmont College, 2017. http://pqdtopen.proquest.com/#viewpdf?dispub=10288372.

Full text
Abstract:
The ultimate goal of teaching history to young people is to create effective, responsible citizens (Fallace, 2009). Despite such ambitious goals, the traditional teacher-centered method of instruction has not proven to have engaged students. As a result, students often rank history as their least-liked subject, particularly at the secondary level. One instructional strategy that may ameliorate this problem is the incorporation of primary sources. Identifying the inhibitors and inducers of primary sources, the researcher in this study explored and described the elements of successful primary source use in the secondary American and world history classrooms of three teacher participants in a qualitative, semester-long case study. Student and teacher perspectives of the impact of primary sources were also considered. In contrast to some of the existing literature, primary sources can be employed successfully and consistently in the secondary history classroom as demonstrated by the three teacher participants in this semester-long study in a large suburban Atlanta, Georgia, school district.
APA, Harvard, Vancouver, ISO, and other styles
50

Merlin, Francesca. "Le hasard et les sources de la variation biologique : analyse critique d'une notion multiple." Phd thesis, Université Panthéon-Sorbonne - Paris I, 2009. http://tel.archives-ouvertes.fr/tel-00397641.

Full text
Abstract:
The notion of chance in biology has been the subject of philosophical and scientific debate since the second half of the nineteenth century, notably since the publication of Darwin's On the Origin of Species (1859). In the current state of research, it remains a matter of controversy: the fact that this notion can take on multiple meanings and roles makes it difficult to pin down, even in a very specific context. Our thesis consists of an epistemological analysis of the notion of chance as it is used by biologists in characterising the phenomena that give rise to variation within natural populations. More precisely, we address the question of which notion of chance is conceptually and empirically appropriate with regard to two sources of biological variation: genetic mutations and noise in gene expression. We provide a conceptual clarification of this notion, from an evolutionary perspective and from a molecular point of view, on the basis of recent advances concerning these two causes of variation. Given the apparently privileged relationship between the notion of chance and probability, we also address the question of the interpretation of probabilities in the formal descriptions of these biological phenomena. The main objective of this work is to provide a precise conceptual framework for the use of the notion of chance in biology.
APA, Harvard, Vancouver, ISO, and other styles