Dissertations / Theses on the topic 'Information hypothesis'

Consult the top 50 dissertations / theses for your research on the topic 'Information hypothesis.'


1

Feeney, Aidan. "Information selection and belief updating in hypothesis evaluation." Thesis, University of Plymouth, 1996. http://hdl.handle.net/10026.1/344.

Abstract:
This thesis is concerned with the factors underlying both selection and use of evidence in the testing of hypotheses. The work it describes examines the role played in hypothesis evaluation by background knowledge about the probability of events in the environment as well as the influence of more general constraints. Experiments on information choice showed that subjects were sensitive both to explicitly presented probabilistic information and to the likelihood of evidence with regard to background beliefs. It is argued - in contrast with other views in the literature - that subjects' choice of evidence to test hypotheses is rational, allowing for certain constraints on subjects' cognitive representations. The majority of experiments in this thesis, however, are focused on the issue of how the information which subjects receive when testing hypotheses affects their beliefs. A major finding is that receipt of early information creates expectations which influence the response to later information. This typically produces a recency effect, in which presenting strong evidence after weak evidence affects beliefs more than if the same evidence is presented in the opposite order. These findings run contrary to the view of the belief revision process which is prevalent in the literature, in which it is generally assumed that the effects of successive pieces of information are independent. The experiments reported here also provide evidence that processes of selective attention influence evidence interpretation: subjects tend to focus on the most informative part of the evidence and may switch focus from one part of the evidence to another as the task progresses. In some cases, such changes of attention can eliminate the recency effect. In summary, the present research provides new evidence about the role of background beliefs, expectations and cognitive constraints in the selection and use of information to test hypotheses. Several new findings emerge which require revision to current accounts of information integration in the belief revision literature.
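
The independence assumption this thesis challenges can be made concrete with a toy Bayesian sketch: when evidence items are conditionally independent given the hypothesis, the posterior is the same whichever order the evidence arrives in, so an observed recency effect counts against that account. The numbers below are invented for illustration and are not taken from the thesis.

```python
# Toy sketch: Bayesian updating is order-invariant when evidence items are
# conditionally independent given the hypothesis (invented numbers).
def update(prior, lik_h, lik_not_h):
    """One Bayesian update of P(H) after a single piece of evidence."""
    joint_h = prior * lik_h
    joint_not_h = (1 - prior) * lik_not_h
    return joint_h / (joint_h + joint_not_h)

weak = (0.6, 0.5)    # P(e_weak | H), P(e_weak | not H)
strong = (0.9, 0.2)  # P(e_strong | H), P(e_strong | not H)

p = q = 0.5
for lik_h, lik_not_h in (weak, strong):   # weak evidence first
    p = update(p, lik_h, lik_not_h)
for lik_h, lik_not_h in (strong, weak):   # strong evidence first
    q = update(q, lik_h, lik_not_h)

print(p, q)  # identical (~0.844); a recency effect would make these differ
```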
2

O'Rourke, Sean Michael. "Information-theoretic and hypothesis-based clustering in bioinformatics." Diss., [La Jolla] : University of California, San Diego, 2009. http://wwwlib.umi.com/cr/ucsd/fullcit?p3356190.

Abstract:
Thesis (Ph. D.)--University of California, San Diego, 2009.
Title from first page of PDF file (viewed July 7, 2009). Available via ProQuest Digital Dissertations. Vita. Includes bibliographical references (p. 98-105).
3

Sullivan, Terry. "The Cluster Hypothesis: A Visual/Statistical Analysis." Thesis, University of North Texas, 2000. https://digital.library.unt.edu/ark:/67531/metadc2444/.

Abstract:
By allowing judgments based on a small number of exemplar documents to be applied to a larger number of unexamined documents, clustered presentation of search results represents an intuitively attractive possibility for reducing the cognitive resource demands on human users of information retrieval systems. However, clustered presentation of search results is sensible only to the extent that naturally occurring similarity relationships among documents correspond to topically coherent clusters. The Cluster Hypothesis posits just such a systematic relationship between document similarity and topical relevance. To date, experimental validation of the Cluster Hypothesis has proved problematic, with collection-specific results both supporting and failing to support this fundamental theoretical postulate. The present study consists of two computational information visualization experiments, representing a two-tiered test of the Cluster Hypothesis under adverse conditions. Both experiments rely on multidimensionally scaled representations of interdocument similarity matrices. Experiment 1 is a term-reduction condition, in which descriptive titles are extracted from Associated Press news stories drawn from the TREC information retrieval test collection. The clustering behavior of these titles is compared to the behavior of the corresponding full text via statistical analysis of the visual characteristics of a two-dimensional similarity map. Experiment 2 is a dimensionality reduction condition, in which inter-item similarity coefficients for full text documents are scaled into a single dimension and then rendered as a two-dimensional visualization; the clustering behavior of relevant documents within these unidimensionally scaled representations is examined via visual and statistical methods. Taken as a whole, results of both experiments lend strong though not unqualified support to the Cluster Hypothesis. In Experiment 1, semantically meaningful 6.6-word document surrogates systematically conform to the predictions of the Cluster Hypothesis. In Experiment 2, the majority of the unidimensionally scaled datasets exhibit a marked nonuniformity of distribution of relevant documents, further supporting the Cluster Hypothesis. Results of the two experiments are profoundly question-specific. Post hoc analyses suggest that it may be possible to predict the success of clustered searching based on the lexical characteristics of users' natural-language expression of their information need.
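
The similarity maps used in both experiments rest on multidimensional scaling of an inter-document similarity matrix. The sketch below shows the generic computation on a toy matrix; it assumes a symmetric similarity matrix in [0, 1] and is not the thesis's TREC data or code.

```python
# Sketch: classical 2-D similarity map from an inter-document similarity
# matrix (toy data, not the thesis's TREC collections).
import numpy as np
from sklearn.manifold import MDS

sim = np.array([            # symmetric toy similarity matrix, 4 documents
    [1.0, 0.8, 0.1, 0.2],
    [0.8, 1.0, 0.2, 0.1],
    [0.1, 0.2, 1.0, 0.7],
    [0.2, 0.1, 0.7, 1.0],
])
dissim = 1.0 - sim          # MDS expects dissimilarities, not similarities

mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(dissim)   # one 2-D point per document
print(coords)                        # related documents land near each other
```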
4

Lin, Chienting, Hsinchun Chen, and Jay F. Nunamaker. "Verifying the proximity and size hypothesis for self-organizing maps." M.E. Sharpe, Inc, 2000. http://hdl.handle.net/10150/106111.

Abstract:
Artificial Intelligence Lab, Department of MIS, University of Arizona
The Kohonen Self-Organizing Map (SOM) is an unsupervised learning technique for summarizing high-dimensional data so that similar inputs are, in general, mapped close to one another. When applied to textual data, SOM has been shown to be able to group together related concepts in a data collection and to present major topics within the collection with larger regions. Research validating these properties of SOM, called the Proximity and Size Hypotheses, is presented through a user evaluation study. Building upon previous research in automatic concept generation and classification, it is demonstrated that the Kohonen SOM was able to perform concept clustering effectively, based on its concept precision and recall scores as judged by human experts. A positive relationship between the size of an SOM region and the number of documents contained in the region is also demonstrated.
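
The SOM behaviour evaluated here comes from a simple competitive rule: find the grid unit whose weight vector best matches the input, then pull it and its map neighbours toward that input, which is what makes similar inputs land close together. A minimal NumPy sketch of that rule follows, with invented dimensions; it is not the authors' implementation.

```python
# Minimal Kohonen SOM training loop (illustrative, not the authors' code).
import numpy as np

rng = np.random.default_rng(0)
grid_w, grid_h, dim = 10, 10, 50            # 10x10 map over 50-d inputs
weights = rng.random((grid_w, grid_h, dim))
ii, jj = np.meshgrid(np.arange(grid_w), np.arange(grid_h), indexing="ij")

def train_step(x, t, n_steps, lr0=0.5, sigma0=3.0):
    frac = t / n_steps
    lr, sigma = lr0 * (1 - frac), sigma0 * (1 - frac) + 0.5
    # Best-matching unit: grid cell whose weight vector is closest to x.
    d = np.linalg.norm(weights - x, axis=2)
    bi, bj = np.unravel_index(np.argmin(d), d.shape)
    # Gaussian neighbourhood pulls nearby units toward x as well.
    h = np.exp(-((ii - bi) ** 2 + (jj - bj) ** 2) / (2 * sigma ** 2))
    weights += lr * h[..., None] * (x - weights)

data = rng.random((200, dim))               # stand-in for document vectors
for t in range(1000):
    train_step(data[t % len(data)], t, 1000)
```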
5

Lehmann, Rüdiger, and Michael Lösler. "Multiple Outlier Detection: Hypothesis Tests versus Model Selection by Information Criteria." Hochschule für Technik und Wirtschaft Dresden, 2016. https://htw-dresden.qucosa.de/id/qucosa%3A23307.

Abstract:
The detection of multiple outliers can be interpreted as a model selection problem. Models that can be selected are the null model, which indicates an outlier-free set of observations, or a class of alternative models, which contain a set of additional bias parameters. A common way to select the right model is by using a statistical hypothesis test. In geodesy, data snooping is most popular. Another approach arises from information theory. Here, the Akaike information criterion (AIC) is used to select an appropriate model for a given set of observations. The AIC is based on the Kullback-Leibler divergence, which describes the discrepancy between the model candidates. Both approaches are discussed and applied to test problems: the fitting of a straight line and a geodetic network. Some relationships between data snooping and information criteria are discussed. When compared, it turns out that the information criteria approach is simpler and more elegant. Besides AIC, there are many alternative information criteria, which may select different outliers, and it is not clear which one is optimal.
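
The AIC comparison described above can be sketched for the straight-line test problem: score the null model against alternatives that each add one bias (outlier) parameter and keep whichever model AIC prefers. The code below is a schematic under i.i.d. Gaussian errors with invented data, not the authors' geodetic implementation.

```python
# Sketch: AIC-based single-outlier screening for a straight-line fit
# (illustrative; the paper's geodetic models are more general).
import numpy as np

def gauss_aic(residuals, k):
    """AIC = 2k - 2 ln L under i.i.d. Gaussian errors with ML variance."""
    n = len(residuals)
    sigma2 = np.mean(residuals ** 2)
    loglik = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1)
    return 2 * k - 2 * loglik

rng = np.random.default_rng(1)
x = np.linspace(0, 10, 30)
y = 2.0 + 0.5 * x + rng.normal(0, 0.1, 30)
y[7] += 1.5                                  # inject one outlier

A = np.column_stack([np.ones_like(x), x])
res0 = y - A @ np.linalg.lstsq(A, y, rcond=None)[0]
best = ("null model", gauss_aic(res0, k=2))

for i in range(len(y)):                      # one bias parameter per candidate
    Ai = np.column_stack([A, np.eye(len(y))[:, i]])
    res = y - Ai @ np.linalg.lstsq(Ai, y, rcond=None)[0]
    aic = gauss_aic(res, k=3)
    if aic < best[1]:
        best = (f"outlier at index {i}", aic)

print(best)   # expected to flag index 7
```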
6

Lehmann, Rüdiger, and Michael Lösler. "Multiple Outlier Detection: Hypothesis Tests versus Model Selection by Information Criteria." Hochschule für Technik und Wirtschaft Dresden, 2017. http://nbn-resolving.de/urn:nbn:de:bsz:520-qucosa-225770.

Abstract:
The detection of multiple outliers can be interpreted as a model selection problem. Models that can be selected are the null model, which indicates an outlier-free set of observations, or a class of alternative models, which contain a set of additional bias parameters. A common way to select the right model is by using a statistical hypothesis test. In geodesy, data snooping is most popular. Another approach arises from information theory. Here, the Akaike information criterion (AIC) is used to select an appropriate model for a given set of observations. The AIC is based on the Kullback-Leibler divergence, which describes the discrepancy between the model candidates. Both approaches are discussed and applied to test problems: the fitting of a straight line and a geodetic network. Some relationships between data snooping and information criteria are discussed. When compared, it turns out that the information criteria approach is simpler and more elegant. Besides AIC, there are many alternative information criteria, which may select different outliers, and it is not clear which one is optimal.
7

Sechidis, Konstantinos. "Hypothesis testing and feature selection in semi-supervised data." Thesis, University of Manchester, 2015. https://www.research.manchester.ac.uk/portal/en/theses/hypothesis-testing-and-feature-selection-in-semisupervised-data(97f5f950-f020-4ace-b6cd-49cb2f88c730).html.

Abstract:
A characteristic of most real world problems is that collecting unlabelled examples is easier and cheaper than collecting labelled ones. As a result, learning from partially labelled data is a crucial and demanding area of machine learning, and extending techniques from fully to partially supervised scenarios is a challenging problem. Our work focuses on two types of partially labelled data that can occur in binary problems: semi-supervised data, where the labelled set contains both positive and negative examples, and positive-unlabelled data, a more restricted version of partial supervision where the labelled set consists of only positive examples. In both settings, it is very important to explore a large number of features in order to derive useful and interpretable information about our classification task, and to select a subset of features that contains most of the useful information. In this thesis, we address three fundamental and tightly coupled questions concerning feature selection in partially labelled data; all three relate to the highly controversial issue of when additional unlabelled data improves performance in partially labelled learning environments and when it does not. The first question is: what are the properties of statistical hypothesis testing in such data? Second, given the widespread criticism of significance testing, what can we do in terms of effect size estimation, that is, quantifying how strong the dependency is between a feature X and the partially observed label Y? Finally, in the context of feature selection, how well can features be ranked by estimated measures when the population values are unknown? The answers to these questions provide a comprehensive picture of feature selection in partially labelled data. Interesting applications include estimation of mutual information quantities, structure learning in Bayesian networks, and investigation of how human-provided prior knowledge can overcome the restrictions of partial labelling. One direct contribution of our work is to enable valid statistical hypothesis testing and estimation in positive-unlabelled data. Focusing on a generalised likelihood ratio test and on estimating mutual information, we provide five key contributions. (1) We prove that assuming all unlabelled examples are negative cases is sufficient for independence testing, but not for power analysis activities. (2) We suggest a new methodology that compensates for this and enables power analysis, allowing sample size determination for observing an effect with a desired power by incorporating the user's prior knowledge of the prevalence of positive examples. (3) We show a new capability, supervision determination, which can determine a priori the number of labelled examples the user must collect before being able to observe a desired statistical effect. (4) We derive an estimator of the mutual information in positive-unlabelled data, and its asymptotic distribution. (5) Finally, we show how to rank features with and without prior knowledge. We also derive extensions of these results to semi-supervised data. In a further extension, we investigate how our results can be used for Markov blanket discovery in partially labelled data. While there are many different algorithms for deriving the Markov blanket of fully supervised nodes, the partially labelled problem is far more challenging, and there is a lack of principled approaches in the literature.
Our work constitutes a generalization of the conditional tests of independence for partially labelled binary target variables, which can handle the two main partially labelled scenarios: positive-unlabelled and semi-supervised. The result is a significantly deeper understanding of how to control false negative errors in Markov blanket discovery procedures and how unlabelled data can help. Finally, we present how our results can be used for information theoretic feature selection in partially labelled data. Our work naturally extends feature selection criteria suggested for fully supervised data to partially labelled scenarios. These criteria can capture both the relevancy and redundancy of the features and can be used for semi-supervised and positive-unlabelled data.
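
One relationship underlying the generalised likelihood ratio testing used in this thesis is that, in the fully supervised case, the G-statistic for independence equals 2N times the maximum-likelihood estimate of the mutual information I(X; Y). The sketch below shows that identity on a toy contingency table; the thesis's positive-unlabelled corrections are not reproduced here.

```python
# Sketch: G-test of independence as 2*N*I(X;Y) on a toy contingency table
# (fully supervised case; the thesis's PU-data corrections are not shown).
import numpy as np
from scipy.stats import chi2

counts = np.array([[30, 10],     # joint counts of (X, Y)
                   [10, 50]])
N = counts.sum()
pxy = counts / N
px, py = pxy.sum(axis=1, keepdims=True), pxy.sum(axis=0, keepdims=True)

mi = np.sum(pxy * np.log(pxy / (px * py)))   # ML estimate of I(X;Y) in nats
G = 2 * N * mi                               # G-statistic
dof = (counts.shape[0] - 1) * (counts.shape[1] - 1)
p_value = chi2.sf(G, dof)
print(mi, G, p_value)
```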
8

Scheider, Linda [Verfasser]. "The command hypothesis versus the information hypothesis : how do domestic dogs (Canis familiaris) comprehend the human pointing gesture? / Linda Scheider." Berlin : Freie Universität Berlin, 2011. http://d-nb.info/1025939069/34.

9

Bennett, Simon James. "Exploring the boundaries of the specificity of learning hypothesis." Thesis, Manchester Metropolitan University, 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.320487.

10

Yu, Angel On Kei. "The outcome of person-job fit: A test of the realistic information hypothesis." CSUSB ScholarWorks, 1995. https://scholarworks.lib.csusb.edu/etd-project/1232.

11

Petzold, Max. "Evaluation of information in longitudinal data." Göteborg : Statistical Research Unit, Göteborg University, 2003. http://catalog.hathitrust.org/api/volumes/oclc/52551306.html.

12

Escamilla, Pierre. "On cooperative and concurrent detection in distributed hypothesis testing." Electronic Thesis or Diss., Institut polytechnique de Paris, 2019. http://www.theses.fr/2019IPPAT007.

Abstract:
Statistical inference plays a major role in the development of new technologies and inspires a large number of algorithms dedicated to detection, identification and estimation tasks. However, there is no theoretical guarantee for the performance of these algorithms. In this thesis, we consider a simplified network of sensors communicating under constraints in order to understand how detectors can best share the information at their disposal to detect the same or distinct events. We investigate different aspects of cooperation between detectors and how conflicting needs can best be met in the case of detection tasks. More specifically, we study a hypothesis testing problem where each detector must maximize the decay exponent of its Type II error under a given Type I error constraint. As the detectors are interested in distinct information, a trade-off between the achievable decay exponents appears. Our goal is to characterize the region of possible trade-offs between Type II error exponents. In massive sensor networks, the amount of information is often limited for reasons of energy consumption and risks of network saturation. We therefore study, in particular, the zero-rate communication regime (i.e., the number of bits per message grows sub-linearly with the number of observations). In this case, we completely characterize the region of Type II error exponents in configurations where the detectors may or may not share the same goals. We also study the case of a network with positive compression rates (i.e., the number of bits per message grows linearly with the number of observations), for which we present subparts of the region of Type II error exponents. Finally, for a point-to-point problem with a positive compression rate, we propose a complete characterization of the optimal Type II error exponent for a family of Gaussian hypothesis testing problems.
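
For orientation, the single-sensor benchmark behind such exponent regions is Stein's lemma from classical hypothesis testing; the statement below is textbook background, not the thesis's multi-detector characterization.

```latex
% Stein's lemma (classical background). For n i.i.d. observations under
% H0: X ~ P_0 versus H1: X ~ P_1, with the Type I error held below a fixed
% epsilon, the smallest achievable Type II error beta_n satisfies
\lim_{n \to \infty} -\frac{1}{n} \log \beta_n(\epsilon)
  = D(P_0 \,\|\, P_1)
  = \sum_{x} P_0(x) \log \frac{P_0(x)}{P_1(x)}.
```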
13

Wachtmeister, Sofia. "Insomnia and fear extinction : Review and analysis of the evolutionary emotional hypothesis." Thesis, Högskolan i Skövde, Institutionen för biovetenskap, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-20183.

Abstract:
Insomnia is one of the most common health issues, with occasional symptoms affecting up to 50% of the general population. Lack of sleep is associated with many negative health effects. A new evolutionary hypothesis has been proposed to explain the mechanism behind insomnia symptoms. The evolutionary-emotional hypothesis proposes that while acute insomnia might be advantageous from an evolutionary perspective, chronic insomnia is maladaptive and may follow from a failure or delay of fear extinction. The aim of the current thesis was to investigate which neural mechanisms might be at work if one is to consider the evolutionary-emotional hypothesis about the causes of insomnia plausible and to review studies from cognitive neuroscience to discover what support there might be for the hypothesis. Studies have found heightened activation in fear-related brain areas in insomnia patients. Delayed fear extinction and altered emotion regulation circuitry, among other things, were also observed for insomnia patients. However, few experimental studies on the effect of fear extinction on sleep in insomnia patients have been conducted. At this time, some emerging evidence lends support for the evolutionary-emotional hypothesis of insomnia, but more studies that directly assess fear conditioning and fear extinction processes in insomnia patients are needed to assess the explanatory power of the theory.
14

Wang, Xinyu. "Toward Scalable Hierarchical Clustering and Co-clustering Methods : application to the Cluster Hypothesis in Information Retrieval." Thesis, Lyon, 2017. http://www.theses.fr/2017LYSE2123/document.

Abstract:
As a major type of unsupervised machine learning method, clustering has been widely applied in various tasks. Different clustering methods have different characteristics. Hierarchical clustering, for example, is capable of outputting a binary tree-like structure, called a dendrogram, which explicitly illustrates the interconnections among data instances. Co-clustering, on the other hand, generates co-clusters, each containing a subset of data instances and a subset of data attributes. Applying clustering to textual data enables input documents to be organised and connections among them to be revealed. This characteristic is helpful in many cases, for example, in cluster-based Information Retrieval tasks. As the size of available data increases, the demand for computing power increases. In response to this demand, many distributed computing platforms have been developed. These platforms use the collective computing power of commodity machines to parallelize data, assign computing tasks and perform computation concurrently. In this thesis, we first address text clustering tasks by proposing two clustering methods, Sim_AHC and SHCoClust. They respectively represent a similarity-based hierarchical clustering and a similarity-based hierarchical co-clustering. We examine their properties and performance through mathematical deduction, experimental verification and evaluation. Then we apply these methods in testing the cluster hypothesis, which is the fundamental assumption in cluster-based Information Retrieval. In such tests, we apply the optimal cluster search to evaluate the retrieval effectiveness of the different clustering methods. We examine the computing efficiency and compare the results of the proposed tests. In order to perform clustering on larger datasets, we select the Apache Spark platform and provide distributed implementations of Sim_AHC and SHCoClust. For distributed Sim_AHC, we present the designed computing procedure, illustrate the difficulties confronted and provide possible solutions. For SHCoClust, we provide a distributed implementation of its core, spectral embedding. In this implementation, we use several datasets that vary in size to examine scalability on a cluster of nodes.
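
The spectral embedding named as the core of SHCoClust's distributed implementation is, in its textbook form, a small eigendecomposition of the normalized affinity matrix. The sketch below shows that generic computation on toy data; it is not the authors' Spark code.

```python
# Sketch: spectral embedding of a similarity (affinity) matrix
# (textbook version; the thesis implements this on Apache Spark).
import numpy as np

def spectral_embed(affinity, k):
    """Embed items using the top-k eigenvectors of D^-1/2 A D^-1/2."""
    d = affinity.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(d)
    norm_aff = d_inv_sqrt[:, None] * affinity * d_inv_sqrt[None, :]
    vals, vecs = np.linalg.eigh(norm_aff)     # ascending eigenvalues
    return vecs[:, -k:]                       # leading k eigenvectors

rng = np.random.default_rng(2)
docs = rng.random((8, 20))                    # stand-in document vectors
aff = docs @ docs.T                           # simple inner-product affinity
print(spectral_embed(aff, k=2))
```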
15

MacGahan, Christopher. "Mathematical Methods for Enhanced Information Security in Treaty Verification." Diss., The University of Arizona, 2016. http://hdl.handle.net/10150/621280.

Abstract:
Mathematical methods have been developed to perform arms-control-treaty verification tasks for enhanced information security. The purpose of these methods is to verify and classify inspected items while shielding the monitoring party from confidential aspects of the objects that the host country does not wish to reveal. Advanced medical-imaging methods used for detection and classification tasks have been adapted for list-mode processing, useful for discriminating projection data without aggregating sensitive information. These models make decisions off of varying amounts of stored information, and their task performance scales with that information. Development has focused on the Bayesian ideal observer, which assumes complete probabilistic knowledge of the detector data, and the Hotelling observer, which assumes a multivariate Gaussian distribution on the detector data. The models can effectively discriminate sources in the presence of nuisance parameters. The channelized Hotelling observer has proven particularly useful in that quality performance can be achieved while reducing the size of the projection data set. The inclusion of additional penalty terms into the channelizing-matrix optimization offers a great benefit for treaty-verification tasks. Penalty terms can be used to generate non-sensitive channels or to penalize the model's ability to discriminate objects based on confidential information. The end result is a mathematical model that could be shared openly with the monitor. Similarly, observers based on the likelihood probabilities have been developed to perform null-hypothesis tasks. To test these models, neutron and gamma-ray data was simulated with the GEANT4 toolkit. Tasks were performed on various uranium and plutonium inspection objects. A fast-neutron coded-aperture detector was simulated to image the particles.
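
The Hotelling observer mentioned here has a standard closed form: its template is the inverse data covariance applied to the difference of the class means, and the test statistic is the template's inner product with a data vector. A generic sketch with synthetic Gaussian data follows; it does not reproduce the GEANT4-based study setup.

```python
# Sketch: generic Hotelling observer (not the GEANT4-based study setup).
import numpy as np

def hotelling_template(g0, g1):
    """Template w = S^{-1} (mean1 - mean0) from two classes of data vectors."""
    mu0, mu1 = g0.mean(axis=0), g1.mean(axis=0)
    # Pooled covariance of the two classes.
    S = 0.5 * (np.cov(g0, rowvar=False) + np.cov(g1, rowvar=False))
    return np.linalg.solve(S, mu1 - mu0)

rng = np.random.default_rng(3)
class0 = rng.normal(0.0, 1.0, (500, 10))     # stand-in detector data
class1 = rng.normal(0.3, 1.0, (500, 10))
w = hotelling_template(class0, class1)
t = class1 @ w                               # test statistic per observation
print(w.shape, t.mean() > (class0 @ w).mean())  # statistic separates classes
```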
16

Calvillo, Jesús [Verfasser], and Matthew W. [Akademischer Betreuer] Crocker. "Connectionist language production : distributed representations and the uniform information density hypothesis / Jesús Calvillo ; Betreuer: Matthew W. Crocker." Saarbrücken : Saarländische Universitäts- und Landesbibliothek, 2019. http://d-nb.info/1187240958/34.

17

Calvillo, Jesús [Verfasser], and Matthew W. [Akademischer Betreuer] Crocker. "Connectionist language production : distributed representations and the uniform information density hypothesis / Jesús Calvillo ; Betreuer: Matthew W. Crocker." Saarbrücken : Saarländische Universitäts- und Landesbibliothek, 2019. http://nbn-resolving.de/urn:nbn:de:bsz:291--ds-279340.

18

Leung, Kai-wan, and 梁啓雲. "The behavior of stock prices in relation to the efficient market hypothesis from the perspective of information costs." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1998. http://hub.hku.hk/bib/B31221336.

19

Leung, Kai-wan. "The behavior of stock prices in relation to the efficient market hypothesis from the perspective of information costs /." Hong Kong : University of Hong Kong, 1998. http://sunzi.lib.hku.hk/hkuto/record.jsp?B20716540.

20

Loureiro, Gilberto Ramos. "The reputation of underwriters, the bonding hypothesis, and the impact on the information environment of U.S. cross-listed firms." Columbus, Ohio : Ohio State University, 2007. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1186628938.

21

Palm, Alexander, and Adam Sjögren. "Aktierekommendationer i en ny tid : Podcasts på den finansiella marknaden." Thesis, Linnéuniversitetet, Institutionen för ekonomistyrning och logistik (ELO), 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-55149.

Abstract:
Master Thesis in Business Administration, School of Business and Economics at Linnaeus University, Växjö, 2016. Authors: Alexander Palm & Adam Sjögren. Supervisors: Christopher Von Koch & Katarina Eriksson. Examiner: Sven-Olof Yrjö Collin. Title: "Stock recommendations in a new era – Podcasts in financial markets."
Background & problem: Banks and other financial institutions traditionally deliver stock recommendations. Bias from these sources has been observed, which can be a disadvantage for individual investors. Podcasts are a relatively new kind of media that can supply the market with stock recommendations. Since podcasts are a new medium, there is little research regarding their role on financial markets and their potential to offer financial advice.
Purpose: The purpose is to extend previous research regarding podcasts and their role in market efficiency and market function.
Method: We apply a deductive benchmark and a quantitative approach. A traditional event study with two different time spans is conducted to analyse stock recommendations and their effect on stock prices.
Conclusion: Results indicate a lack of support for the information hypothesis (IH) for stock recommendations from podcasts, which in turn supports the efficient market hypothesis (EMH). However, the price pressure hypothesis (PPH) does have support, which indicates deficiencies in the EMH. Thus, we provide evidence that the Swedish stock market is not fully efficient and does not possess semi-strong form efficiency. No information leakage could be observed, something that differs from previous research on stock recommendations. We provide evidence of a temporary and positive effect on market function for Small Cap stocks. The observed increase in trading volume indicates overconfidence on the Swedish stock market, something that has previously been shown. No knowledge dispersion exists between listeners of podcasts, something that differs from theory and previous research.
22

Harper, Kevin M. "Challenging the Efficient Market Hypothesis with Dynamically Trained Artificial Neural Networks." UNF Digital Commons, 2016. http://digitalcommons.unf.edu/etd/718.

Abstract:
A review of the literature applying Multilayer Perceptron (MLP) based Artificial Neural Networks (ANNs) to market forecasting leads to three observations: 1) it is clear that simple ANNs, like other nonlinear machine learning techniques, are capable of approximating general market trends; 2) it is not clear to what extent such forecasted trends are reliably exploitable in terms of profits obtained via trading activity; 3) most research with ANNs reporting profitable trading activity relies on ANN models trained over one fixed interval which is then tested on a separate out-of-sample fixed interval, and it is not clear to what extent these results may generalize to other out-of-sample periods. Very little research has tested the profitability of ANN models over multiple out-of-sample periods, and the author knows of no pure ANN (non-hybrid) systems that do so while being dynamically retrained on new data. This thesis tests the capacity of MLP-type ANNs to reliably generate profitable trading signals over rolling training and testing periods. Traditional error statistics serve as descriptive rather than performance measures in this research, as they are of limited use for assessing a system's ability to consistently produce above-market returns. Performance is measured for the ANN system by the average returns accumulated over multiple runs over multiple periods, and these averages are compared with the traditional buy-and-hold returns for the same periods. In some cases, our models were able to produce above-market returns over many years. These returns, however, proved to be highly sensitive to variability in the training, validation and testing datasets as well as to the market dynamics at play during initial deployment. We argue that credible challenges to the Efficient Market Hypothesis (EMH) by machine learning techniques must demonstrate that returns produced by their models are not similarly susceptible to such variability.
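
The rolling regime argued for here is straightforward to state in code: refit the network on a sliding training window, score it on the period that immediately follows, then slide forward. The schematic below uses scikit-learn's MLPRegressor on synthetic returns; the thesis's features, data and trading rules are not reproduced.

```python
# Sketch: rolling-window retraining and out-of-sample testing of an MLP
# (schematic only; the thesis's features and trading rules are not shown).
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(4)
returns = rng.normal(0, 0.01, 1500)          # synthetic daily returns

def make_xy(r, lags=5):
    """Predict the next return from the previous `lags` returns."""
    X = np.column_stack([r[i:len(r) - lags + i] for i in range(lags)])
    return X, r[lags:]

X, y = make_xy(returns)
train_len, test_len = 500, 100
scores = []
for start in range(0, len(y) - train_len - test_len, test_len):
    tr = slice(start, start + train_len)
    te = slice(start + train_len, start + train_len + test_len)
    model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=500)
    model.fit(X[tr], y[tr])                  # retrain on the rolling window
    scores.append(model.score(X[te], y[te])) # out-of-sample R^2 per period
print(np.round(scores, 3))
```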
23

Hailey, Jermaine A., and Frederick D. Higgs. "An analysis of organizational readiness at Anniston Army Depot for information technology change." Thesis, Monterey, California, Naval Postgraduate School, 2008. http://hdl.handle.net/10945/38047.

Abstract:
Approved for public release; distribution is unlimited.
MBA Professional Report
The purpose of this MBA project is to assess the change readiness of Anniston Army Depot's (ANAD) organizational climate - especially now as the Depot prepares for large-scale Logistics Modernization Program (LMP) information technology (IT) change. ANAD is a highly important division of the United States Army Materiel Command (AMC) and is the Army's designated Center of Industrial and Technical Excellence (CITE) for a variety of combat vehicles, artillery equipment, bridging systems and small-caliber weapons. It provides advanced maintenance support for all of these systems, in addition to fulfilling a host of other vitally important Army-wide logistical functions. ANAD presently uses the Standard Depot System (SDS) to manage its complex array of administrative and logistical functions. However, AMC has mandated that ANAD completely replace the SDS and employ the new LMP starting in March 2009. The researchers gathered a combination of historical information, personnel observations and responses to survey questionnaires on readiness for change in order to conduct a quality analysis of ANAD's structure and climate and their implications, if any, for LMP implementation. Ultimately, people are the heart of any IT system, regardless of its size and degree of automation. The tremendous importance of organizational personnel in the change process is often underappreciated and underaddressed in the civilian sector of the military - particularly when this sector embarks on significant IT transformation initiatives. Bold IT actions inevitably have a profound effect on any organization, regardless of its size, mission, and personnel composition. This project was conducted with the sponsorship and assistance of the Anniston Army Depot.
24

Bergqvist, Karlsson Daniel. "Om fenomenell kunskap och Förmågehypotesen : Information eller förmåga – vad lär vi oss när vi får en ny upplevelse?" Thesis, Umeå universitet, Institutionen för idé- och samhällsstudier, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-162362.

Abstract:
Physicalism concerning the phenomenal consciousness, the view that it is entirely physical, stands in contrast with various versions of dualism, which claim that consciousness is irreducibly non-physical. Frank Jackson has presented the so-called knowledge argument against physicalism. Because we do learn something new upon having a new experience, and because this something cannot be learned any other way than by having the experience, the knowledge argument concludes that there are non-physical facts about the world. Hence, physicalism is false. The Ability Hypothesis is a response to the knowledge argument presented by David Lewis and Laurence Nemirow. They argue that what we learn upon having a new experience is nothing but a set of abilities. Hence, the conclusion of the knowledge argument that there are non-physical facts about the world is false. The aim of this paper is to investigate whether the Ability Hypothesis constitutes a viable defense for physicalism against the knowledge argument. To accomplish this, I evaluate five objections that have been raised against the Ability Hypothesis and the answers to these presented by Nemirow. I will argue that two of these objections point to problems with the Ability Hypothesis which cannot be solved, and I therefore conclude that the Ability Hypothesis is unable to defend physicalism against Jackson's knowledge argument.
25

Moehring, Patricia Marie. "The Use of Geographic Information Systems (G.I.S.) for Analyses of the Spatial Mismatch Hypothesis, Hamilton County, and the Ohio Works First Program." University of Cincinnati / OhioLINK, 2002. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1027005938.

26

Koroglu, Muhammed Taha. "Multiple Hypothesis Testing Approach to Pedestrian Inertial Navigation with Non-recursive Bayesian Map-matching." The Ohio State University, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=osu1577135195323298.

27

Hijazi, Bassem. "Bank Loans as a Financial Discipline: A Direct Agency Cost of Equity Perspective." Thesis, University of North Texas, 2006. https://digital.library.unt.edu/ark:/67531/metadc5411/.

Abstract:
In a 2004 study, Harvey, Lin and Roper argue that debt makers with a commitment to monitoring can create value for outside shareholders whenever information asymmetry and agency costs are pronounced. I investigate Harvey, Lin and Roper's claim for bank loans by empirically testing the effect of information asymmetry and direct agency costs on the abnormal returns of the borrowers' stock around the announcement of bank loans. I divide my study into two main sections. The first section tests whether three proxies of the direct agency costs of equity are equally significant in measuring the direct costs associated with outside equity agency problems. I find that the asset utilization ratio proxy is the most statistically significant proxy of the direct agency costs of equity using a Chow F-test statistic. The second main section of my dissertation includes an event study and a cross-sectional analysis. The event study results document significant and positive average abnormal returns of 1.01% for the borrowers' stock on the announcement day of bank loans. In the cross-sectional analysis of the borrowers' average abnormal stock returns, I find that higher quality and more reputable banks/lenders provide a reliable certification to the capital market about the low level of the borrowers' direct agency costs of equity and information asymmetry. This certification hypothesis holds only for renewed bank loans. In other words, in renewing the borrowers' line of credit, the bank/lender is actually confirming that the borrower has a low level of information asymmetry and direct costs of equity. Given such a certificate from the banks/lenders, shareholders reward the company/borrower by bidding the share price up in the capital market.
28

Nichols, Beth. "Geographic Profiling: Contributions to the Investigation of Serial Murders." Wright State University / OhioLINK, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=wright1559164233007786.

29

Ding, Runxiao. "Contextual information aided target tracking and path planning for autonomous ground vehicles." Thesis, Loughborough University, 2016. https://dspace.lboro.ac.uk/2134/23268.

Abstract:
Recently, autonomous vehicles have received worldwide attention from academic research, the automotive industry and the general public. In order to achieve a higher level of automation, one of the most fundamental requirements of autonomous vehicles is the capability to respond to internal and external changes in a safe, timely and appropriate manner. Situational awareness and decision making are two crucial enabling technologies for safe operation of autonomous vehicles. This thesis presents a solution for improving the automation level of autonomous vehicles in both situational awareness and decision making aspects by utilising additional domain knowledge, such as constraints and influence on a moving object caused by the environment, and interaction between different moving objects. This includes two specific sub-systems: model-based target tracking in the environmental perception module and motion planning in the path planning module. In the first part, a rigorous Bayesian framework is developed for pooling road constraint information and sensor measurement data of a ground vehicle to provide better situational awareness. Consequently, a new multiple target tracking (MTT) strategy is proposed for solving target tracking problems with nonlinear dynamic systems and additional state constraints. Besides road constraint information, a vehicle's movement is generally affected by its surrounding environment, known as interaction information. A novel dynamic modelling approach is then proposed by treating the interaction information as a virtual force, which is constructed from the target state, desired dynamics and interaction information. The proposed modelling approach is then accommodated in the proposed MTT strategy for incorporating different types of domain knowledge in a comprehensive manner. In the second part, a new path planning strategy for autonomous vehicles operating in partially known dynamic environments is suggested. The proposed MTT technique is utilized to provide accurate on-board tracking information with an associated level of uncertainty. Based on the tracking information, a path planning strategy is developed to generate collision-free paths by not only predicting the future states of the moving objects but also taking into account the propagation of the associated estimation uncertainty within a given horizon. To cope with a dynamic and uncertain road environment, the strategy is implemented in a receding horizon fashion.
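
One standard way to pool a road constraint with a tracking estimate, in the Bayesian spirit described above, is to project the unconstrained estimate onto the constraint surface; for a linear constraint Dx = d the projection has a closed form. The sketch below is that generic equality-constrained step, not the thesis's models.

```python
# Sketch: projecting a Kalman estimate onto a linear road constraint Dx = d
# (generic equality-constrained estimate; the thesis's models are richer).
import numpy as np

def constrain(x, P, D, d):
    """Minimum-variance projection of estimate (x, P) onto {x : Dx = d}."""
    S = D @ P @ D.T
    K = P @ D.T @ np.linalg.inv(S)
    x_c = x - K @ (D @ x - d)
    P_c = P - K @ D @ P
    return x_c, P_c

# 2-D position estimate forced onto the road line y = x (i.e. x - y = 0).
x = np.array([3.0, 2.4])
P = np.diag([0.5, 0.5])
D = np.array([[1.0, -1.0]])
d = np.array([0.0])
print(constrain(x, P, D, d))   # estimate moved onto the constraint line
```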
30

Taskin, Yusuf, and Issa Batoul Gaballa. "Vilka orsaker kan leda till aktieinvesterarnas irrationella beteende? : En empirisk studie." Thesis, Södertörns högskola, Institutionen för samhällsvetenskaper, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:sh:diva-28085.

Abstract:
Purpose: The purpose of this study is to understand the reasons that can explain Swedish stock investors' irrationality with consideration to their information processing.
Method: The survey was conducted using a quantitative research method with elements of qualitative aspects in the form of a web-based questionnaire. The survey was prestructured with closed survey questions but also includes open answers. The questionnaire was published on various websites for stock investors and on social networks, where the study sample consists of the members who chose to participate in the survey.
Theory: Behavioral finance contradicts the efficient market hypothesis and sets the basis for this survey. Overconfidence, herd behavior and egocentricity are the three psychological factors studied within behavioral finance.
Conclusion: The conclusion regarding overconfidence is that men are more overconfident than women, and the reason for this is explained by their backgrounds. The study shows that the causes of herd behavior include investors selling their stocks when their surroundings do, out of fear that their surroundings know something they do not, and because they do not want to end up in a worse position if the stock goes down. Egocentricity occurs because the stock investors own takes on a higher value in their minds, so the price of the stock rarely feels satisfying. Investors also act differently depending on which sources of information they use and how much time they devote to information.
31

Svensson, Martin. "Rysk-georgiska kriget : Rysk maskirovka eller georgisk rundgång?" Thesis, Försvarshögskolan, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:fhs:diva-104.

Abstract:
This essay aims to analyse whether the Russian military operation carried out against Georgia between the 7th and 12th of August 2008 was executed with adherence to the Russian principles for military deception, maskirovka. A superior purpose is to assess the situation according to the Swedish Armed Forces' task of identifying possible needs for new or changed abilities and competence. The method used is two alternative hypotheses, which are tested by comparing actual events before and during the Russian-Georgian war with the ten methods of maskirovka compiled from military analytical literature. Traces of resemblance are further examined, both individually and as part of a larger indication. Further, the essay describes the Russian art of war, the prerequisites for military surprise, information warfare in Russian doctrine, the disputed territories of South Ossetia and Abkhazia, and the principles of maskirovka. The conclusion is that the Russian operation was executed with some adherence to maskirovka, though to an unspecified degree. The author of this essay is Cadet Martin Svensson of the Swedish Army, currently a student at the Armed Forces Technical School in Halmstad.
32

Chuairuang, Suranai. "Relational Networks and Family Firm Capital Structure in Thailand : Theory and Practice." Doctoral thesis, Umeå universitet, Företagsekonomi, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-79317.

Abstract:
Firms must access capital to remain in business. Small firms have greater difficulty accessing financial resources than large firms because of their limited access to capital markets. These difficulties are exacerbated by information asymmetries between a small firm's management and capital providers. It has been theorized that many information asymmetries can be reduced through networks that link those in need of capital with those who can supply it. This research is about these relationships and their impact on the firms' capital structure. This research has been limited to a sub-set of small firms: family firms. I have collected data through a survey using a systematic sampling procedure. Both self-administered questionnaires and semi-structured interviews were utilized. The data analysis was based on the responses from two hundred and fifty-six small manufacturing firms in Thailand. Seemingly unrelated regression (SUR), logistic regression, multiple discriminant analysis and the Mann-Whitney U test were employed in the analysis. The hypothesis that firms apply a pecking order in their capital raising was confirmed, although the generally accepted rationale based on poor access (and information asymmetries) was rejected. Instead, at least for family firms, the desire to maintain family control had a significant impact on the use of retained earnings and owner's savings. My results also indicated that while the depth of relationships had a positive effect on direct funding from family and friends, networks did not facilitate capital access from external providers of funds. Instead, direct communications between owner-managers and their capital providers (particularly bank officials) mattered. A comparative analysis of small manufacturing firms in general and small family manufacturing firms revealed that there were differences between them in regard to their financial preferences, suggesting that family firms should be considered separately in small firm research. Further, the results of this research raise some questions about the appropriateness of applying theories directly from one research context to another without due consideration for the impact of cultural influences. Through this research I have added evidence to the dialogue about small firms from a non-English-speaking country by investigating the impact of networks on capital structure and the rationale behind family firm capital structure decisions.
33

Necşulescu, Silvia. "Automatic acquisition of lexical-semantic relations: gathering information in a dense representation." Doctoral thesis, Universitat Pompeu Fabra, 2016. http://hdl.handle.net/10803/374234.

Abstract:
Lexical-semantic relationships between words are key information for many NLP tasks, which require this knowledge in the form of lexical resources. This thesis addresses the acquisition of lexical-semantic relation instances. State of the art systems rely on word pair representations based on patterns of contexts where two related words co-occur to detect their relation. This approach is hindered by data sparsity: even when mining very large corpora, not every semantically related word pair co-occurs or not frequently enough. In this work, we investigate novel representations to predict if two words hold a lexical-semantic relation. Our intuition was that these representations should contain information about word co-occurrences combined with information about the meaning of words involved in the relation. These two sources of information have to be the basis of a generalization strategy to be able to provide information even for words that do not co-occur.
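A minimal sketch of the kind of combined word-pair representation the abstract argues for: concatenating distributional word vectors (standing in for word meaning) with pattern-based co-occurrence features, then training a classifier to predict whether the pair holds a relation. Everything below is an illustrative stand-in, not the thesis's actual models:

```python
# Sketch: combine word-embedding and co-occurrence-pattern features for
# a word pair, then classify relation / no relation. Random vectors stand
# in for real embeddings and real pattern counts.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)
n_pairs, emb_dim, n_patterns = 500, 50, 20

emb_w1 = rng.normal(size=(n_pairs, emb_dim))       # embedding of first word
emb_w2 = rng.normal(size=(n_pairs, emb_dim))       # embedding of second word
patterns = rng.poisson(1.0, (n_pairs, n_patterns)) # counts of context patterns

X = np.hstack([emb_w1, emb_w2, patterns])          # the combined pair representation
y = rng.integers(0, 2, n_pairs)                    # 1 = words hold the relation

clf = LogisticRegression(max_iter=1000).fit(X, y)
print("training accuracy:", clf.score(X, y))
```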
APA, Harvard, Vancouver, ISO, and other styles
34

Song, Wanlu. "Learning vocabulary without tears : a comparative study of the jigsaw and information gap tasks in vocabulary acquisition at school." Thesis, Högskolan Kristianstad, Sektionen för lärande och miljö, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:hkr:diva-8493.

Full text
Abstract:
The primary purpose of the present study is to compare the effectiveness of the jigsaw task and the information gap task in understanding new words and retaining them. Sixteen pupils aged between eleven and twelve were involved in the study and divided into two groups, each allocated either a jigsaw task or an information gap task. The study consists of a pre-test, an immediate post-test, a delayed post-test and a questionnaire. The pupils were required to carry out the chosen tasks and were tested immediately and then one week later. The results of the questionnaire are also discussed in order to establish the pupils' attitudes towards their allotted tasks. The results revealed marginally higher scores in the immediate post-test for pupils performing the information gap task in terms of recognizing the meaning of words. However, this advantage disappeared when it came to the depth of vocabulary knowledge and word meaning retention. Pupils performing the jigsaw task outperformed those performing the information gap task in productive vocabulary knowledge and its retention. The gain in vocabulary among pupils who performed the jigsaw task is most evident in the delayed post-test. This result is consistent with the pupils' assertion that they enjoyed doing the jigsaw task more than the information gap task. To sum up, the jigsaw task best promotes pupils' understanding of words and their retention.
APA, Harvard, Vancouver, ISO, and other styles
35

Svenson, Niklas, and Niklas Wilsson. "Börsintroduktioners påverkan på konkurrenter : en eventstudie som kartlägger börsintroduktioners påverkan på sina konkurrenter." Thesis, Södertörns högskola, Institutionen för samhällsvetenskaper, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:sh:diva-30727.

Full text
Abstract:
Purpose: The purpose of the study was to analyze whether initial public offerings had an impact on rival firms. Theory: The efficient market hypothesis, information asymmetry and signaling theory. Method: A quantitative approach was undertaken in which an event study was constructed in order to measure cumulative average abnormal return. The empirical data used in the study consist of 243 rival firms that had an initial public offering occur in their industry. Two hypotheses were tested using the simple t-test. Results: A compilation of the abnormal returns of rival firms was made, which showed no clear pattern of an impact taking place. When the two hypotheses were tested, both were rejected, showing that no significant impact took place. Analysis: According to previous research and theories, an impact on the rival firms should have been visible; the different sample might be the reason our results differ. Conclusion: As the two hypotheses were rejected, the event study finds no significant evidence of an abnormal return occurring in connection with the initial public offering of a rival firm.
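As a concrete illustration of the event-study machinery described in the abstract, the following sketch computes a cumulative average abnormal return and a simple t-test on simulated abnormal returns; the window length and data are illustrative assumptions, not the study's sample:

```python
# Sketch of the event-study logic: abnormal returns are rival returns
# minus market-model expectations, averaged across the 243 rivals and
# cumulated over the event window, then t-tested against zero.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_firms, window = 243, 11                            # e.g. event window of -5..+5 days
abnormal = rng.normal(0.0, 0.02, (n_firms, window))  # AR_it = R_it - E[R_it]

aar = abnormal.mean(axis=0)                          # average abnormal return per day
caar = aar.cumsum()                                  # cumulative average abnormal return

# Simple one-sample t-test of firm-level CARs against zero
car = abnormal.sum(axis=1)
t_stat, p_value = stats.ttest_1samp(car, 0.0)
print(f"CAAR over window: {caar[-1]:.4f}, t = {t_stat:.2f}, p = {p_value:.3f}")
```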
APA, Harvard, Vancouver, ISO, and other styles
36

Brinkfält, Hugo, and Tinnerholm Johan Kull. "The Information Content of Prices : A study on differences between integer and non-integer initial public offerings." Thesis, Uppsala universitet, Företagsekonomiska institutionen, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-388062.

Full text
Abstract:
The purpose of this thesis is to analyze differences between IPOs with integer (e.g. $20.00) and non-integer (e.g. $20.32) offer prices on the post-decimalization US market. Research on IPOs suggests that there are viable differences between, and valuable information within, integer and non-integer prices. However, proposed effects on the information content of prices as a result of the 2001 decimalization of US markets motivate more up-to-date research on the subject. Our findings show that, while integer IPOs have higher initial return, uncertainty and offer price levels, there is no proof of different information content conveyed within integer and non-integer prices on the post-decimalization market. Consequently, neither integer nor non-integer prices appear to provide valuable information to market participants, suggesting in extension that decimalization may have influenced the IPO market.
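A minimal sketch of the kind of group comparison involved: a Welch t-test of initial returns between integer- and non-integer-priced IPOs, on simulated data rather than the thesis's sample:

```python
# Sketch: compare initial returns of integer vs non-integer priced IPOs
# with a Welch t-test. Simulated returns; group sizes and moments are
# illustrative assumptions only.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(6)
integer_ipos = rng.normal(0.18, 0.25, 150)      # first-day returns, integer offer prices
noninteger_ipos = rng.normal(0.10, 0.15, 150)   # first-day returns, non-integer offer prices

t_stat, p_value = ttest_ind(integer_ipos, noninteger_ipos, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```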
APA, Harvard, Vancouver, ISO, and other styles
37

Sun, Lan. "Essays on two-player games with asymmetric information." Thesis, Paris 1, 2016. http://www.theses.fr/2016PA01E056/document.

Full text
Abstract:
This thesis contributes to the economic theory literature in three aspects: price dynamics in financial markets with asymmetric information, belief updating and equilibrium refinements in signaling games, and introducing ambiguity in limit pricing theory. In chapter 2, we formulate a zero-sum trading game between a better informed sector and a less informed sector to endogenously determine the underlying price dynamics. In this model, player 1 is informed of the state (L) but is uncertain about player 2's belief about the state, because player 2 is informed through some message (M) related to the state. If L and M are independent, then the price process will be a Continuous Martingale of Maximal Variation (CMMV), and player 1 can benefit from his informational advantage. However, if L and M are not independent, player 1 will not reveal his information during the trading process and therefore does not benefit from his informational advantage. In chapter 3, I propose a definition of Hypothesis Testing Equilibrium (HTE) for general signaling games with non-Bayesian players, nested by an updating rule according to the Hypothesis Testing model characterized by Ortoleva (2012). An HTE may differ from a sequential Nash equilibrium because of dynamic inconsistency. However, in the case in which player 2 only treats a zero-probability message as unexpected news, an HTE is a refinement of sequential Nash equilibrium and survives the Intuitive Criterion in general signaling games, but not vice versa. We provide an existence theorem covering a broad class of signaling games often studied in economics. In chapter 4, I introduce ambiguity in a standard industrial organization model, in which the established firm is either informed of the true state of aggregate demand or is under classical measurable uncertainty about the state, while the potential entrant is under Knightian uncertainty (ambiguity) about the state. I characterize the conditions under which limit pricing emerges in equilibria, and thus ambiguity decreases the probability of entry. Welfare analysis shows that limit pricing is more harmful in a market with higher expected demand than in a market with lower expected demand.
APA, Harvard, Vancouver, ISO, and other styles
38

Nguyen, Ngoc Tan. "A Security Monitoring Plane for Information Centric Networking : application to Named Data Networking." Thesis, Troyes, 2018. http://www.theses.fr/2018TROY0020.

Full text
Abstract:
The current architecture of the Internet has been designed to connect remote hosts. But the evolution of its usage, which is now similar to that of a global platform for content distribution undermines its original communication model. In order to bring consistency between the Internet's architecture with its use, new content-oriented network architectures have been proposed, and these are now ready to be implemented. The issues of their management, deployment, and security now arise as locks essential to lift for Internet operators. In this thesis, we propose a security monitoring plan for Named Data Networking (NDN), the most advanced architecture which also benefits from a functional implementation. In this context, we have characterized the most important NDN attacks - Interest Flooding Attack (IFA) and Content Poisoning Attack (CPA) - under real deployment conditions. These results have led to the development of micro-detector-based attack detection solutions leveraging hypothesis testing theory. The approach allows the design of an optimal (AUMP) test capable of providing a desired false alarm probability (PFA) by maximizing the detection power. We have integrated these micro-detectors into a security monitoring plan to detect abnormal changes and correlate them through a Bayesian network, which can identify events impacting security in an NDN node. This proposal has been validated by simulation and experimentation on IFA and CPA attacks
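A minimal sketch of the hypothesis-testing idea behind such micro-detectors, assuming a toy Gaussian traffic model rather than the thesis's calibrated NDN statistics: the alarm threshold is set from the null distribution to meet a target false-alarm probability, and the detection power follows from the alternative distribution:

```python
# Sketch of a hypothesis-testing micro-detector: decide H1 (attack) when
# an observed traffic statistic exceeds a threshold chosen for a target
# false-alarm probability. Gaussian model and parameters are assumptions.
import numpy as np
from scipy.stats import norm

mu0, mu1, sigma = 100.0, 140.0, 15.0   # nominal vs attack mean of, e.g., pending Interests
pfa = 0.01                             # desired false-alarm probability

# For Gaussian means with common variance, the likelihood-ratio test
# reduces to a threshold on the observation itself; set it from the H0 tail.
threshold = norm.ppf(1 - pfa, loc=mu0, scale=sigma)
power = 1 - norm.cdf(threshold, loc=mu1, scale=sigma)

x = 132.0                              # an observed statistic
print(f"threshold={threshold:.1f}, detection power={power:.3f}, alarm={x > threshold}")
```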
APA, Harvard, Vancouver, ISO, and other styles
39

Holmberg, Anders, and Per-Erik Eriksson. "Decision Support System for Fault Isolation of JAS 39 Gripen : Development and Implementation." Thesis, Linköping University, Department of Computer and Information Science, 2006. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-7021.

Full text
Abstract:

This thesis is a result of the increased requirements on availability and cost of the aircraft JAS 39 Gripen. The work has been to specify demands and to find methods suitable for the development of a decision support system for fault isolation of the aircraft, and to implement the chosen method. Two different methods are presented and a detailed comparison is performed with the demands as a starting point. The chosen method handles multiple faults in O(N²) time, where N is the number of components. The implementation shows how all demands are fulfilled and how new tests can be added during execution. Since the thesis covers the development of a prototype, no practical evaluation comparing it with manual isolation is performed.
APA, Harvard, Vancouver, ISO, and other styles
40

Keeling, Kellie Bliss. "Developing Criteria for Extracting Principal Components and Assessing Multiple Significance Tests in Knowledge Discovery Applications." Thesis, University of North Texas, 1999. https://digital.library.unt.edu/ark:/67531/metadc2231/.

Full text
Abstract:
With advances in computer technology, organizations are able to store large amounts of data in data warehouses. There are two fundamental issues researchers must address: the dimensionality of data and the interpretation of multiple statistical tests. The first issue addressed by this research is the determination of the number of components to retain in principal components analysis. This research establishes regression, asymptotic theory, and neural network approaches for estimating mean and 95th percentile eigenvalues for implementing Horn's parallel analysis procedure for retaining components. Certain methods perform better for specific combinations of sample size and numbers of variables. The adjusted normal order statistic estimator (ANOSE), an asymptotic procedure, performs the best overall. Future research is warranted on combining methods to increase accuracy. The second issue involves interpreting multiple statistical tests. This study uses simulation to show that Parker and Rothenberg's technique using a density function with a mixture of betas to model p-values is viable for p-values from central and non-central t distributions. The simulation study shows that final estimates obtained in the proposed mixture approach reliably estimate the true proportion of the distributions associated with the null and nonnull hypotheses. Modeling the density of p-values allows for better control of the true experimentwise error rate and is used to provide insight into grouping hypothesis tests for clustering purposes. Future research will expand the simulation to include p-values generated from additional distributions. The techniques presented are applied to data from Lake Texoma where the size of the database and the number of hypotheses of interest call for nontraditional data mining techniques. The issue is to determine if information technology can be used to monitor the chlorophyll levels in the lake as chloride is removed upstream. A relationship established between chlorophyll and the energy reflectance, which can be measured by satellites, enables more comprehensive and frequent monitoring. The results have both economic and political ramifications.
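For readers unfamiliar with Horn's parallel analysis, the following sketch shows the core retention rule on simulated data: observed eigenvalues are kept while they exceed reference eigenvalues from random data of the same dimensions (mean or 95th percentile). The dissertation's regression, asymptotic-theory and neural network estimators of those reference values are not reproduced here:

```python
# Sketch of Horn's parallel analysis: retain leading components whose
# observed eigenvalues exceed the 95th-percentile eigenvalues of random
# data of the same size. Toy correlated data only.
import numpy as np

rng = np.random.default_rng(2)
n, p, n_sims = 300, 10, 200
X = rng.normal(size=(n, p)) @ rng.normal(size=(p, p))  # correlated toy data

def eigvals(data):
    # eigenvalues of the correlation matrix, descending
    return np.linalg.eigvalsh(np.corrcoef(data, rowvar=False))[::-1]

observed = eigvals(X)
random_eigs = np.array([eigvals(rng.normal(size=(n, p))) for _ in range(n_sims)])
ref_p95 = np.percentile(random_eigs, 95, axis=0)

keep = 0
for obs, ref in zip(observed, ref_p95):  # retain while observed > reference
    if obs > ref:
        keep += 1
    else:
        break
print(f"components retained (95th percentile rule): {keep}")
```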
APA, Harvard, Vancouver, ISO, and other styles
41

McIntire, William David. "Information Communication Technologies and Identity in Post-Dayton Bosnia: Mending or Deepening the Ethnic Divide." Wright State University / OhioLINK, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=wright1401978761.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Alpsten, Edward, Henrik Holm, and Sebastian Ståhl. "Evaluation and optimization of an equity screening model." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-244761.

Full text
Abstract:
Screening models are tools for predicting which stocks are most likely to perform well on a stock market. They do so by examining the financial ratios of the companies behind the stocks. The ratios examined by the model are chosen according to the personal preferences of the particular investor. Furthermore, an investor can apply different weights to the chosen parameters, according to the importance assigned to each of them. In this thesis, it is investigated whether a screening model can beat the market average in the long term. It is also explored whether parameter-weight optimization in the context of equity trading can be used to improve an already existing screening model. More specifically, the starting point is a screening model currently in use at a successful asset management firm; through data analysis and an optimization algorithm, it is then examined whether a programmatic approach can identify ways to improve the original screening model by adjusting the parameters it looks at as well as the weights assigned to each parameter. The data set used in the model contains daily price data and annual data on financial ratios for all stocks on the Stockholm Stock Exchange as well as the NASDAQ-100 over the period 2004-2018. The results indicate that it is possible to beat the market average in the long term. They further show that a programmatic approach is suitable for optimizing screening models.
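A minimal sketch of how such a parameter-weighted screening model can rank stocks. The ratios and weights below are illustrative assumptions, not the asset manager's actual model; an optimizer would then search over the weight vector against backtested returns:

```python
# Sketch: score stocks from standardized financial ratios and weights,
# then select the top-ranked names. Ratios and weights are hypothetical.
import pandas as pd

ratios = pd.DataFrame(
    {"pe": [12.0, 25.0, 9.0, 18.0],
     "roe": [0.15, 0.22, 0.08, 0.19],
     "debt_to_equity": [0.4, 1.2, 0.3, 0.8]},
    index=["A", "B", "C", "D"],
)
weights = {"pe": -0.4, "roe": 0.4, "debt_to_equity": -0.2}  # negative sign: lower is better

z = (ratios - ratios.mean()) / ratios.std()          # standardize each ratio column
score = sum(w * z[col] for col, w in weights.items())
print(score.sort_values(ascending=False).head(2))    # pick the top-ranked stocks
```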
APA, Harvard, Vancouver, ISO, and other styles
43

Bergström, Carl, and Oscar Hjelm. "Impact of Time Steps on Stock Market Prediction with LSTM." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-262221.

Full text
Abstract:
Machine learning models as tools for predicting time series have in recent years proven to perform exceptionally well. With financial time series in the form of stock indices being inherently complex and subject to noise and volatility, the prediction of stock market movements has proven to be especially difficult throughout extensive research. The objective of this study is to thoroughly analyze the LSTM architecture for neural networks and its performance when applied to the S&P 500 stock index. The main research question revolves around quantifying the impact of varying the number of time steps in the LSTM model on predictive performance when applied to the S&P 500 index. The data used in the model is of high reliability downloaded from the Bloomberg Terminal, where the closing price has been used as feature in the model. Other constituents of the model have been based in previous research, where satisfactory results have been reached. The results indicate that among the evaluated time steps, ten steps provided the superior performance. However, the impact of varying time steps is not all too significant for the overall performance of the model. Finally, the implications of the results for the field of research present themselves as good basis for future research, where parameters are varied and fine-tuned in pursuit of optimal performance.
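The "number of time steps" under study is the length of the input window fed to the LSTM. A minimal sketch of that windowing step on a simulated price series (model architecture omitted; window lengths illustrative):

```python
# Sketch: reshape a closing-price series into supervised
# (window -> next value) samples of varying time-step length.
import numpy as np

def make_windows(series, time_steps):
    X = np.stack([series[i:i + time_steps] for i in range(len(series) - time_steps)])
    y = series[time_steps:]
    return X[..., None], y        # LSTMs expect (samples, time_steps, features)

prices = np.cumsum(np.random.default_rng(3).normal(0, 1, 500)) + 100
for steps in (5, 10, 20):         # the kind of grid such a study compares
    X, y = make_windows(prices, steps)
    print(steps, X.shape, y.shape)
```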
APA, Harvard, Vancouver, ISO, and other styles
44

Wang, Chaoyan. "Securities trading in multiple markets : the Chinese perspective." Thesis, University of Stirling, 2009. http://hdl.handle.net/1893/2278.

Full text
Abstract:
This thesis studies the trading of the Chinese American Depositary Receipts (ADRs) and their respective underlying H shares issued in Hong Kong. The primary intention of this work is to investigate the arbitrage opportunity between the Chinese ADRs and their underlying H shares. This intention is motivated by the market observation that hedge funds are often among the top 10 shareholders of these Chinese ADRs. We start our study from the place of origin of the Chinese ADRs, China's stock market. We pay particular attention to the ownership structure of Chinese listed firms, because some of the Chinese ADR issuers have also listed A shares (exclusively owned by Chinese citizens) in Shanghai. We also pay attention to the market microstructures and trading costs of the three China-related stock exchanges. We then proceed to an empirical study of the Chinese ADR arbitrage possibility by comparing the return distributions of the two securities; we find that the two securities differ in their return distributions, owing to inequality in the higher moments, such as skewness and kurtosis. Based on the law of one price and weak-form efficient markets, the prices of identical securities traded in different markets should be similar, as any deviation in their prices will be arbitraged away. Given the intrinsic property of ADRs that a convenient transfer mechanism exists between the ADRs and their underlying shares, which makes arbitrage easy, the different return distributions of the ADRs and the underlying shares suggest that arbitrage is costly and that the equilibrium price of the security in each market is affected mainly by the local market where the Chinese ADRs or the underlying Hong Kong shares are traded: the demand for and supply of the stock in each market, the different market microstructures and market mechanisms that produce different trading costs in each market, and the different noise trading arising from asymmetric information across markets. Because of these trading costs, noise trading risk and liquidity risk, the arbitrage opportunity between the two markets is not exploited promptly. This concern leads to the second intention of this work: to examine how noise trading and trading costs come to play a role in determining asset prices, which makes us empirically investigate the comovement effect as well as liquidity risk. With regard to these issues, we proceed along two strands. First, we test the relationship between the price differentials of the Chinese ADRs and the market returns of the US and Hong Kong markets. This test examines the comovement effect caused by asynchronous noise trading. We find that the US market impact dominates the Hong Kong market impact, though both markets display a significant impact on the ADRs' price differentials. Second, we analyze the liquidity effect on the Chinese ADRs and their underlying Hong Kong shares by using two proxies to measure illiquidity cost and liquidity risk. We find a significant positive relation between return and trading volume, which is used to capture liquidity risk. This finding leads to a deeper study of the relationship between trading volume and return volatility from a market microstructure perspective. In order to identify a proper model to describe return volatility, we test for heteroscedasticity and proceed to use two asymmetric GARCH models to capture the leverage effect. We find that the Chinese ADRs and their underlying Hong Kong shares show different patterns in the leverage effect as modeled by these two asymmetric GARCH models, and this finding explains from another angle why the two securities are unequal in the higher moments of their return distributions. We then test two opposing hypotheses about the volume-volatility relation. The Mixture of Distributions Hypothesis suggests a positive relation between contemporaneous volume and volatility, while the Sequential Information Arrival Hypothesis indicates a causal relationship between lead-lag volume and volatility. We find supportive evidence for the Sequential Information Arrival Hypothesis but not for the Mixture of Distributions Hypothesis.
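A minimal sketch of how the two volume-volatility hypotheses can be confronted empirically: the Mixture of Distributions Hypothesis implies a contemporaneous correlation, while Sequential Information Arrival implies lead-lag (Granger) causality. Simulated series stand in for the ADR data, and the asymmetric GARCH fits are omitted:

```python
# Sketch: contemporaneous correlation (MDH-style) vs Granger causality
# (Sequential Information Arrival) between volume and volatility.
import numpy as np
from scipy.stats import pearsonr
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(4)
n = 1000
volume = np.abs(rng.normal(1.0, 0.3, n))
volatility = 0.5 * np.roll(volume, 1) + rng.normal(0, 0.1, n)  # toy lead-lag link
volatility[0] = volatility[1]                                  # patch the wrapped value

r, p = pearsonr(volume, volatility)                 # MDH-style contemporaneous test
print(f"contemporaneous corr: r={r:.2f}, p={p:.3f}")

data = np.column_stack([volatility, volume])        # does volume Granger-cause volatility?
grangercausalitytests(data, maxlag=2)
```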
APA, Harvard, Vancouver, ISO, and other styles
45

Lehmann, Rüdiger. "Transformation model selection by multiple hypotheses testing." Hochschule für Technik und Wirtschaft Dresden, 2016. http://nbn-resolving.de/urn:nbn:de:bsz:520-qucosa-211719.

Full text
Abstract:
Transformations between different geodetic reference frames are often performed such that the transformation parameters are first determined from control points. If we do not know in the first place which of the numerous transformation models is appropriate, then we can set up a multiple hypotheses test. The paper extends the common method of testing transformation parameters for significance to the case in which constraints on such parameters are also tested. This provides more flexibility when setting up such a test: one can formulate a general model with a maximum number of transformation parameters and specialize it by adding constraints on those parameters that need to be tested. The proper test statistic in a multiple test is shown to be either the extreme normalized or the extreme studentized Lagrange multiplier; these are shown to perform better than the more intuitive test statistics derived from misclosures. It is shown how model selection by multiple hypotheses testing relates to the use of information criteria like AICc and Mallows' Cp, which are based on an information-theoretic approach. Nevertheless, whenever comparable, the results of an exemplary computation almost coincide.
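To make the information-criterion side of the comparison concrete, here is a minimal sketch of AICc-based selection between a 4-parameter similarity and a 6-parameter affine transformation fitted to simulated control points (standard 2D models assumed for illustration, not the paper's full model set):

```python
# Sketch: fit nested transformation models by least squares and compare
# small-sample AICc values on toy control points.
import numpy as np

def fit_ls(A, y):
    x, *_ = np.linalg.lstsq(A, y, rcond=None)
    rss = float(np.sum((y - A @ x) ** 2))
    return x, rss

def aicc(rss, n, k):
    # Gaussian log-likelihood up to a constant, with small-sample correction
    aic = n * np.log(rss / n) + 2 * k
    return aic + 2 * k * (k + 1) / (n - k - 1)

rng = np.random.default_rng(5)
src = rng.uniform(0, 100, (12, 2))
tgt = 1.01 * src + np.array([5.0, -3.0]) + rng.normal(0, 0.05, src.shape)

n = tgt.size
y = tgt.ravel()
ones, zeros = np.ones(len(src)), np.zeros(len(src))
# 4-parameter similarity: x' = a*x - b*y + tx ; y' = b*x + a*y + ty
A4 = np.zeros((n, 4))
A4[0::2] = np.column_stack([src[:, 0], -src[:, 1], ones, zeros])
A4[1::2] = np.column_stack([src[:, 1],  src[:, 0], zeros, ones])
# 6-parameter affine
A6 = np.zeros((n, 6))
A6[0::2, :3] = np.column_stack([src, ones])
A6[1::2, 3:] = np.column_stack([src, ones])

for name, A, k in (("similarity", A4, 4), ("affine", A6, 6)):
    _, rss = fit_ls(A, y)
    print(name, f"AICc = {aicc(rss, n, k):.1f}")
```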
APA, Harvard, Vancouver, ISO, and other styles
46

Li, Zuxing. "Privacy-by-Design for Cyber-Physical Systems." Doctoral thesis, KTH, ACCESS Linnaeus Centre, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-211908.

Full text
Abstract:
It is envisioned that future cyber-physical systems will provide a more convenient living and working environment. However, such systems inevitably need to collect and process privacy-sensitive information, which means the benefits come with potential privacy leakage risks. Nowadays, this privacy issue receives more attention as a legal requirement of the EU General Data Protection Regulation. In this thesis, privacy-by-design approaches are studied in which privacy enhancement is realized by taking privacy into account in the physical-layer design. The work focuses in particular on two cyber-physical systems: sensor networks and smart grids. Physical-layer performance and privacy leakage risk are assessed by hypothesis testing measures. First, a sensor network in the presence of an informed eavesdropper is considered. Extending the traditional hypothesis testing problems, novel privacy-preserving distributed hypothesis testing problems are formulated. The optimality of the deterministic likelihood-based test is discussed. It is shown that the optimality of the deterministic likelihood-based test does not always hold for an intercepted remote decision maker and that an optimal randomized decision strategy is completely characterized by the privacy-preserving condition. These characteristics help simplify the person-by-person optimization algorithms used to design optimal privacy-preserving hypothesis testing networks. Smart meter privacy has become a significant issue in the development of smart grid technology. An innovative scheme is to exploit renewable energy supplies or an energy storage at a consumer to manipulate meter readings away from actual energy demands and thereby enhance privacy. Based on the proposed asymptotic hypothesis testing measures of privacy leakage, it is shown that the optimal privacy-preserving performance can be characterized by a Kullback-Leibler divergence rate or a Chernoff information rate in the presence of renewable energy supplies. When an energy storage is used, its finite capacity introduces memory into the smart meter system. It is shown that the design of an optimal energy management policy can be cast into a belief-state Markov decision process framework.
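A minimal sketch of the kind of divergence-based privacy measure mentioned above, on toy discrete meter-reading distributions (the thesis's grid and storage models are not reproduced): a smaller divergence between the distributions induced by two demand profiles means a harder hypothesis test for the adversary:

```python
# Sketch: Kullback-Leibler divergence between meter-reading distributions
# under two hypothesized demand profiles. Toy distributions only.
import numpy as np

def kl_divergence(p, q):
    p, q = np.asarray(p, float), np.asarray(q, float)
    return float(np.sum(p * np.log(p / q)))

# Reading distributions after partial masking by, e.g., a renewable supply
p_h0 = np.array([0.25, 0.50, 0.25])
p_h1 = np.array([0.20, 0.55, 0.25])

d = kl_divergence(p_h0, p_h1)
print(f"KL divergence: {d:.4f} nats")
# By Stein's lemma, the adversary's best miss probability decays roughly
# like exp(-n * D) over n readings, so a small D means slow learning.
```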

APA, Harvard, Vancouver, ISO, and other styles
47

Basu, Somnath. "Information, expectations and equilibrium: Trading volume hypotheses." Diss., The University of Arizona, 1990. http://hdl.handle.net/10150/185109.

Full text
Abstract:
In analyses of the relationship between information and price-volume reactions, the role of investor expectations is often considered implicitly. Not allowing investors either to disagree with one another or to remain uninformed is a consequence of the assumption of a free and perfect information flow. A more flexible definition of information allows the observation that trading volume is an accurate reflector of investor expectations and contains valuable information about price movements. Trading volume is also used to show empirically the effects of imperfect information and the inappropriateness of the event study method.
APA, Harvard, Vancouver, ISO, and other styles
48

Lehmann, Rüdiger. "Transformation model selection by multiple hypotheses testing." Hochschule für Technik und Wirtschaft Dresden, 2014. https://htw-dresden.qucosa.de/id/qucosa%3A23299.

Full text
Abstract:
Transformations between different geodetic reference frames are often performed such that the transformation parameters are first determined from control points. If we do not know in the first place which of the numerous transformation models is appropriate, then we can set up a multiple hypotheses test. The paper extends the common method of testing transformation parameters for significance to the case in which constraints on such parameters are also tested. This provides more flexibility when setting up such a test: one can formulate a general model with a maximum number of transformation parameters and specialize it by adding constraints on those parameters that need to be tested. The proper test statistic in a multiple test is shown to be either the extreme normalized or the extreme studentized Lagrange multiplier; these are shown to perform better than the more intuitive test statistics derived from misclosures. It is shown how model selection by multiple hypotheses testing relates to the use of information criteria like AICc and Mallows' Cp, which are based on an information-theoretic approach. Nevertheless, whenever comparable, the results of an exemplary computation almost coincide.
APA, Harvard, Vancouver, ISO, and other styles
49

Ryabko, Daniil. "APPRENABILITÉ DANS LES PROBLÈMES DE L'INFÉRENCE SÉQUENTIELLE." Habilitation à diriger des recherches, Université des Sciences et Technologie de Lille - Lille I, 2011. http://tel.archives-ouvertes.fr/tel-00675680.

Full text
Abstract:
The presented works are dedicated to the possibility of performing statistical inference from sequential data. The problem is as follows. Given a sequence of observations x_1,...,x_n,..., one seeks to make inferences about the random process that generated the sequence. Several problems, which moreover have multiple applications in different areas of mathematics and computer science, can be formulated in this way. For example, one may want to predict the probability of the next observation, x_{n+1} (the sequential prediction problem); or answer the question of whether the random process generating the sequence belongs to a certain set H_0 versus a different set H_1 (hypothesis testing); or perform an action with the goal of maximizing a certain utility function. In each of these problems, to make inference possible one must first make certain assumptions about the random process generating the data. The central question addressed in the presented works is the following: under what assumptions is inference possible? This question is posed and analyzed for different inference problems, among which are sequential prediction, hypothesis testing, classification and reinforcement learning.
APA, Harvard, Vancouver, ISO, and other styles
50

Cleve, Oscar, and Sara Gustafsson. "Automatic Feature Extraction for Human Activity Recognition on the Edge." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-260247.

Full text
Abstract:
This thesis evaluates two methods for automatic feature extraction to classify accelerometer data from periodic and sporadic human activities. The first method selects features using individual hypothesis tests and the second uses a random forest classifier as an embedded feature selector. The hypothesis test was combined with a correlation filter in this study. Both methods used the same initial pool of automatically generated time series features. A decision tree classifier was used to perform the human activity recognition task for both methods. The possibility of running the developed model on a processor with limited computing power was taken into consideration when selecting methods for evaluation. The classification results showed that the random forest method was good at prioritizing among features. With 23 features selected it had a macro average F1 score of 0.84 and a weighted average F1 score of 0.93. The first method, however, only had a macro average F1 score of 0.40 and a weighted average F1 score of 0.63 when using the same number of features. In addition to the classification performance, this thesis studies the potential business benefits that automation of feature extraction can result in.
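A minimal sketch of the embedded feature-selection pipeline the abstract describes, on synthetic data in place of the accelerometer features: rank features by random-forest importance, keep the top 23, and score a decision tree with macro and weighted F1:

```python
# Sketch: random-forest importance as an embedded feature selector,
# followed by a decision tree evaluated with macro / weighted F1.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

X, y = make_classification(n_samples=2000, n_features=200, n_informative=30,
                           n_classes=5, n_clusters_per_class=1, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
top = np.argsort(rf.feature_importances_)[::-1][:23]   # keep 23 features, as in the study

tree = DecisionTreeClassifier(random_state=0).fit(X_tr[:, top], y_tr)
pred = tree.predict(X_te[:, top])
print("macro F1:", f1_score(y_te, pred, average="macro"))
print("weighted F1:", f1_score(y_te, pred, average="weighted"))
```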
APA, Harvard, Vancouver, ISO, and other styles