
Dissertations on the topic "Information content"


Consult the top 50 dissertations for your research on the topic "Information content".

Next to each entry in the bibliography, an "Add to bibliography" option is available. Use it, and your bibliographic reference for the selected work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scholarly publication as a PDF and read an online annotation of the work, if the relevant parameters are available in the metadata.

Browse dissertations from a wide variety of disciplines and compile your bibliography correctly.

1

Laisathit, Kirati. „Information content of public announcements /“. Thesis, Connect to this title online; UW restricted, 2000. http://hdl.handle.net/1773/8832.

The full text of the source
APA, Harvard, Vancouver, ISO and other citation styles
2

Harrison, Andre V. „Information content models of human vision“. Thesis, The Johns Hopkins University, 2013. http://pqdtopen.proquest.com/#viewpdf?dispub=3572710.

Annotation:

From night-vision goggles to infrared imagers to remote-controlled bomb-disposal robots, we are increasingly employing electronic vision sensors to extend or overcome the limitations of our own visual sensory system. And while we can make these systems better in terms of the amount of power they use, how much information they capture, or how much information they can send to the viewer, it is also important to keep in mind the capabilities of the human who must receive this visual information from the sensor and display system. The best interface between our own visual sensory system and that of the electronic image sensor and display system is one where the least amount of visual information is sent to our own sensory system for processing, yet contains all the visual information that we need to understand the desired environment and to make decisions based on that information. In order to do this, it is important to understand both the physiology of the visual sensory system and the psychophysics of how this information is used. We demonstrate this idea by researching and designing the components needed to optimize the compression of dynamic range information onto a display, for the sake of maximizing the amount of perceivable visual information shown to the human visual system.

An idea that is repeated in the construction of our optimal system is the link between designing, modeling, and validating both the design and the model through human testing. Compressing the dynamic range of images while trying to maximize the amount of visual information shown is a unique approach to dynamic range compression. So the first component needed to develop our optimal compression method is a way to measure the amount of visual information present in a compressed image. We achieve this by designing an Information Content Quality Assessment metric, and we validate the metric using data from our psychophysical experiments [in preparation]. Our psychophysical experiments compare different dynamic range compression methods in terms of the amount of information that is visible after compression. Our quality assessment metric is unique in that it models visual perception using information theory rather than biology. To validate this approach, we extend our model so that it can predict human visual fixation. We compare the predictions of our model against human fixation data and the fixation predictions of similar state-of-the-art fixation models. We show that the predictions of our model are at least comparable to or better than the predictions of these fixation models. We also present preliminary results on applying the saliency model to identify potentially salient objects in out-of-focus locations due to a finite depth-of-field [in preparation]. The final component needed to implement the optimization is a way to combine the quality assessment model with the results of the psychophysical experiments to reach an optimal compression setting. We discuss how this could be implemented in the future using a generic dynamic range compression algorithm. We also present the design of a wide dynamic range image sensor and a mixed-mode readout scheme to improve the accuracy of illumination measurements for each pixel over the entire dynamic range of the imager.
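The dynamic range compression the abstract optimizes can be illustrated with the simplest possible tone curve, a logarithmic mapping from a wide sensor range onto display levels. This is a generic sketch of the idea, not Harrison's actual metric or algorithm.

```python
import math

def log_compress(pixel: int, max_in: int = 65_535, max_out: int = 255) -> int:
    """Map a wide-dynamic-range intensity (e.g. 16-bit) onto a display
    range (8-bit) with a logarithmic curve, which spends more output
    levels on dark regions where small differences matter most."""
    scale = max_out / math.log1p(max_in)
    return round(scale * math.log1p(pixel))

# Six scene intensities spanning the full 16-bit range:
levels = [0, 10, 100, 1_000, 10_000, 65_535]
print([log_compress(p) for p in levels])
```

A metric like the one the dissertation develops would then score how much of the scene's visible structure survives such a mapping.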

3

Svensson, Martin, und Oskar Pettersson. „Case study: Extending content metadata by appending user context“. Thesis, Växjö University, School of Mathematics and Systems Engineering, 2006. http://urn.kb.se/resolve?urn=urn:nbn:se:vxu:diva-743.

Annotation:

Recent developments in modern computing and wireless networks allow mobile devices to be connected to the Internet regardless of their physical location. These mobile devices, such as smart phones and PDAs, have turned into powerful multimedia units allowing users to become producers of rich media content. This latest development contributes to the ever-growing amount of digital material existing on the World Wide Web, and at the same time creates a new information landscape that combines content coming from both the wired and the mobile Internet. Thus, it is important to understand the context, or settings, in which mobile devices are used, and what digital content the different users produce. In order to gain more knowledge about this domain, we have investigated how to extend the standard metadata related to content with a metadata domain describing the context, or settings, in which the content has been created.

In order to limit the scope of our work, we have focused our efforts on a specific case taking place in a project called AMULETS. The AMULETS project contains all of the elements we need in order to represent the contextual setting in a metadata model. Combined with the technical metadata associated with the digital content, we try to demonstrate the benefits of capturing the different attributes of the context that were present when the content was generated. Additionally, we have created a proof-of-concept entity-relationship (ER) diagram which proposes how the metadata models can be implemented in a relational database. As the nature of the thesis is design-oriented, a model has been developed, and it is illustrated throughout this report. The aim of the thesis is to show how it is possible to design new metadata models that combine relevant attributes of both context and content in order to develop new educational activities supported by location-based services.
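The idea of appending a context-metadata block to standard content metadata can be sketched with plain data classes. The attribute names below (location, device, activity) are illustrative assumptions, not the AMULETS project's actual schema.

```python
from dataclasses import dataclass

@dataclass
class ContentMetadata:
    """Technical metadata describing the media item itself."""
    title: str
    mime_type: str
    size_bytes: int

@dataclass
class ContextMetadata:
    """Context captured at creation time (hypothetical attributes)."""
    latitude: float
    longitude: float
    device: str
    activity: str

@dataclass
class AnnotatedContent:
    """Content metadata extended with the context of its creation."""
    content: ContentMetadata
    context: ContextMetadata

item = AnnotatedContent(
    ContentMetadata("field-trip-photo.jpg", "image/jpeg", 204_800),
    ContextMetadata(56.877, 14.809, "PDA", "location-based lesson"),
)
print(item.context.activity)
```

In a relational implementation, the content and context records would become separate tables joined by a content identifier, much like the proof-of-concept ER diagram the abstract describes.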

4

Boberg, Edvin. „Datadriven content marketing : Användning av sociala data inom content marketing på Facebook“. Thesis, Uppsala universitet, Institutionen för informatik och media, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-347514.

Annotation:
This thesis contains an interpretive analysis of how social data from Facebook is used in content marketing by a selection of communication agencies in Stockholm. The thesis explains how social data is used to create content, and how social data is used to measure the performance of content published on Facebook. The thesis also presents an analysis of the concept of content marketing by contrasting different scholarly definitions of it. In addition, it highlights the current lack of up-to-date scholarly publications on content marketing from a data-driven perspective, which motivates the thesis's relevance in a scholarly context.
5

Martinsson, Per. „Structural Information Content of the Optical Field“. Doctoral thesis, KTH, Optik, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-10281.

Annotation:
The communication modes are a mathematical technique for the description of structural information in optical fields. These modes are orthogonal, optimally connected functions characteristic of the optical system. Mathematically they are obtained by the singular value decomposition (SVD) of the operator that represents the field propagation. In this dissertation, the foundations of the technique are described, and the theory is extended and applied to a variety of specific systems.

In the Fresnel regime, the communication modes are closely related to the prolate spheroidal wave functions (PSWFs). Within this approximation, the numerical propagation of the field in a one-dimensional optical system in terms of the PSWFs is demonstrated, and the problem of assessing the best achievable realization of a given target field is addressed. Simplified equations for field propagation are presented. Approximate modes in large-aperture systems are derived and shown to agree with Gabor's theory on optics and information. The longitudinal resolution of an axicon is analyzed in terms of the communication modes. It is shown that in a generalized axicon geometry the communication modes are expressible in terms of the PSWFs, and that in usual circumstances a version of the large-aperture approximation applies, resulting in quadratic waves in the aperture domain and sinc functions in the image domain.

Eigenequations for the communication modes in scalar near-field diffraction are derived and applied to a simplified scanning near-field optical microscope (SNOM) geometry. It is suggested that the resolution of a SNOM system is essentially given by the width of the lowest-order communication modes. The best-connected mode is shown to effectively reduce to the Green function.

Within the context of random fluctuations, the communication modes are defined for the cross-spectral density of partially coherent fields. These modes are compared to the well-known coherent modes. Expressions for the effective degree of coherence are derived, and it is demonstrated that optical fields of any state of coherence may readily be propagated through deterministic systems by means of the communication modes. Results are illustrated numerically in an optical near-field geometry.

The communication modes theory is further extended to vector diffraction on the basis of Maxwell's equations. The polarization properties of the electromagnetic communication modes as represented by the Stokes parameters are analyzed numerically for an example of a near-field geometry. The work presented in this dissertation shows that the communication modes are an advanced, versatile tool that can be applied to deterministic and random, scalar and electromagnetic optical systems in far-field and near-field arrangements. The method is likely to find further uses in applications such as polarization microscopy.
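The SVD construction the abstract refers to can be written compactly. The notation below (singular values g_n, aperture-side modes a_n, image-side modes b_n) follows the standard communication-modes formalism rather than the dissertation's exact symbols.

```latex
% Communication modes from the SVD of the propagation kernel G:
G(\mathbf{r}', \mathbf{r}) = \sum_{n} g_n \, b_n(\mathbf{r}') \, a_n^{*}(\mathbf{r})
% An aperture field expanded in the modes then propagates term by term:
U_{\text{im}}(\mathbf{r}') = \sum_{n} g_n c_n \, b_n(\mathbf{r}'),
\qquad
c_n = \int_{A} U_{\text{ap}}(\mathbf{r}) \, a_n^{*}(\mathbf{r}) \, \mathrm{d}\mathbf{r}
```

The "best-connected" mode mentioned in the abstract is the one with the largest singular value g_n.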
6

Wallin, Erik Oskar. „Individual Information Adaptation Based on Content Description“. Doctoral thesis, KTH, Numerical Analysis and Computer Science, NADA, 2004. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-3710.

Annotation:

Today’s increasing information supply raises the need for more effective and automated information processing, where individual information adaptation (personalization) is one possible solution. Earlier computer systems for personalization lacked the ability to easily define and measure the effectiveness of personalization efforts. Numerous projects failed to live up to their expectations, and the demand for evaluation increased.

This thesis presents some underlying concepts and methods for implementing personalization in order to increase stated business objectives. A personalization system was developed that utilizes descriptions of information characteristics (metadata) to perform content-based filtering in a non-intrusive way.

Most of the measurement methods for personalization described in the literature are focused on improving the utility for the customer. The evaluation function of the personalization system described in this thesis takes the business operator’s standpoint and pragmatically focuses on one or a few measurable business objectives. In order to verify operation of the personalization system, a function called bifurcation was created. The bifurcation function divides the customers stochastically into two or more controlled groups with different personalization configurations. By giving one of the controlled groups a personalization configuration that deactivates the personalization, a reference group is created. The reference group is used to measure objectives quantitatively by comparison with the groups with active personalization.
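The bifurcation function, stochastically splitting customers into controlled groups with one group serving as a personalization-off baseline, can be sketched as a deterministic hash-based assignment. This is a common implementation pattern for such splits; the thesis does not specify this particular mechanism.

```python
import hashlib

def assign_group(customer_id: str, groups=("personalized", "reference"),
                 salt: str = "experiment-1") -> str:
    """Deterministically but pseudo-randomly assign a customer to a
    controlled group. The 'reference' group would have personalization
    deactivated, serving as the baseline for comparison."""
    digest = hashlib.sha256(f"{salt}:{customer_id}".encode()).digest()
    return groups[digest[0] % len(groups)]

counts = {"personalized": 0, "reference": 0}
for i in range(10_000):
    counts[assign_group(f"customer-{i}")] += 1
print(counts)  # roughly an even split, stable across runs
```

Because the assignment is a pure function of the customer id and a salt, each customer always lands in the same group, which keeps the measured business objectives comparable across visits.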

Two different companies had their websites personalized and evaluated: one of Sweden’s largest recruitment services and the second-largest Swedish daily newspaper. The purpose of the implementations was to define, measure, and increase the business objectives. The results of the two case studies show that under propitious conditions, personalization can be made to increase stated business objectives.

Keywords: metadata, semantic web, personalization, information adaptation, one-to-one marketing, evaluation, optimization, personification, customization, individualization, internet, content filtering, automation.

7

Mendoza, Kimberly. „Alleviating obesity bias: does information content matter?“ Honors in the Major Thesis, University of Central Florida, 2013. http://digital.library.ucf.edu/cdm/ref/collection/ETH/id/882.

Annotation:
Obesity bias has become the most acceptable form of prejudice in American society (Latner, O'Brien, Durso, Brinkman, & MacDonald, 2008). Stigmatization of the obese has tremendous social and economic costs, both for the stigmatized population and for society as a whole. Few studies have examined effective ways to reduce obesity bias; this study sought to expand that research. Using a between-participants experimental design, the present study investigated whether multi-faceted information content about the causes of obesity (including psychological, social, and physiological causes) would be more effective in reducing obesity bias than any one of these causes presented alone. Results showed that participants' evaluations of a target woman who was overweight did not differ between the information content conditions, nor did they differ from a control condition. Implications, as well as limitations of the current study, are discussed.
B.S., Psychology
8

Kochan, Mucahit. „Information Content of Iron Butterfly Arbitrage Bounds“. Thesis, University of North Texas, 2016. https://digital.library.unt.edu/ark:/67531/metadc955071/.

Annotation:
Informed traders trade options on underlying securities to lower transaction costs and increase financial leverage for price trend and variance strategies. Options markets play a significant role in price discovery by incorporating private information about future prices for an underlying security into option prices. I generate a new model-free volatility measure to calculate the "distance from arbitrage bounds" from minute-by-minute option series for the S&P 500 index and 30 individual underlying stocks. These iron butterfly arbitrage bounds (IBBs) use intraday call and put option prices from the Bloomberg database. Narrow and wide IBBs are expected to reveal the options market's valuation of volatility by market participants. The data series is gathered at successive one-minute intervals from the Bloomberg database. The data comprise the most recent bid and ask option prices and volumes. I collect S&P 500 index values and index options and use 30 underlying stock prices and option prices for the contracts that have the largest option trading volume during the sampling interval. These bid and ask prices reflect the information generated by intraday price pressures implied by S&P 500 index options or stock options. Consistent with the option microstructure literature, I find that the IBB measure for actively traded stock options attains its highest level immediately after the open of the market, declines steadily throughout the first trading hour and remains relatively stable until market close. However, index IBBs behave differently. The S&P 500 index option IBB attains its lowest level during the first hour of the trading day, then increases and remains relatively stable until market close. I present new evidence regarding the dynamic relation between stock returns and innovations in expected volatility by using the minute-by-minute change in implied volatility (IV) as a proxy.
Unlike the relationship between individual stock returns and their respective changes in implied idiosyncratic volatility, I find that all the coefficients on the market volatility index (VIX) term are negative and significant. Therefore, the evidence supports the explanation that the negative relationship between stock returns and expected volatility innovations is primarily related to the systematic component of the expected volatility. I also test whether narrow and wide IBB values capture incremental information to explain the return-volatility relationship. Results indicate that neither narrow IBB nor wide IBB values provide additional information beyond that provided by VIX and IV. The results are robust to five-minute and ten-minute sampling frequencies.
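An iron butterfly combines a short at-the-money straddle with long out-of-the-money wings; no-arbitrage requires the net credit received to stay below the wing width, and the slack between the two is one simple way to read a "distance from arbitrage bounds". The functions below are a hypothetical illustration of that idea, not Kochan's actual IBB formula, and the prices are made up.

```python
def iron_butterfly_credit(put_wing_ask: float, put_atm_bid: float,
                          call_atm_bid: float, call_wing_ask: float) -> float:
    """Net premium received: sell the ATM put and call, buy the OTM wings."""
    return (put_atm_bid + call_atm_bid) - (put_wing_ask + call_wing_ask)

def distance_from_bound(credit: float, wing_width: float) -> float:
    """No-arbitrage requires credit < wing width; normalise the slack.
    (A hypothetical measure, for illustration only.)"""
    return (wing_width - credit) / wing_width

credit = iron_butterfly_credit(put_wing_ask=1.20, put_atm_bid=5.00,
                               call_atm_bid=4.80, call_wing_ask=1.10)
print(credit, distance_from_bound(credit, wing_width=10.0))
```

A small distance means the quoted credit is close to the arbitrage bound, which, in the spirit of the abstract, would signal a low market valuation of volatility.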
9

Patton, Kenneth. „Analyzing the Information Content in Gravitational Shadows“. The Ohio State University, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=osu1471838287.

10

Bayer, Peter, und Henrik Widenfors. „Information Hiding : Steganografic Content in Streaming Media“. Thesis, Blekinge Tekniska Högskola, Institutionen för programvaruteknik och datavetenskap, 2002. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-2298.

Annotation:
For a long time, information hiding has focused on carriers like images and audio files. A problem with these carriers is that they do not support hiding in new types of network-based services. Nowadays, these services often arise as a consequence of the increasing demand for higher connection speeds to the Internet. By introducing streaming media as a carrier of hidden information, hiding in new network-based services is supported. The main purposes of this thesis are to investigate how information can be hidden in streaming media and how this approach measures up against images and audio files. In order to evaluate the approach, we have developed a prototype and used it as a proof of concept. This prototype hides information in some of the TCP/IP header fields and is used to collect experimental data as well. As a reference, measurements have been collected from other available carriers of hidden information. In some cases, the results of these experiments show that the TCP/IP header is a good carrier of information: its performance is outstanding and well suited for hiding information quickly, though the tests showed that its capacity is somewhat lower.
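The kind of covert channel the thesis prototypes can be imitated in miniature: the 16-bit IP Identification field is a classic place to smuggle two bytes per packet. The sketch below only simulates the bit-packing; it does not craft real packets, is not the thesis's prototype, and the zero-padding scheme is deliberately naive.

```python
import struct

def embed_in_ip_id(message: bytes) -> list[int]:
    """Split a covert message into 16-bit values, one per (simulated)
    IP Identification field."""
    if len(message) % 2:
        message += b"\x00"  # pad to whole 16-bit fields (naive scheme)
    return [struct.unpack("!H", message[i:i + 2])[0]
            for i in range(0, len(message), 2)]

def extract_from_ip_id(id_fields: list[int]) -> bytes:
    """Reassemble the covert message from the captured ID fields."""
    return b"".join(struct.pack("!H", f) for f in id_fields).rstrip(b"\x00")

ids = embed_in_ip_id(b"covert")
print(ids, extract_from_ip_id(ids))
```

Two bytes per packet also makes the capacity limitation mentioned in the abstract concrete: hiding a kilobyte takes on the order of five hundred packets.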
11

Farrokhzadi, Meshkat. „Entropy, information rate and mutual information measures for the email content of information workers“. Thesis, Massachusetts Institute of Technology, 2007. http://hdl.handle.net/1721.1/41643.

Annotation:
Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2007.
Includes bibliographical references (p. 84-85).
Although most people agree that the use of information technology increases workplace productivity, the exact relationship between productivity and different characteristics of the information employees send and receive, such as entropy, information rate and mutual information, is not well studied. Using empirical data, this study develops methodologies to measure the entropy, information rate and mutual information of the email content exchanged between information workers. Furthermore, the validity of these methodologies is evaluated using comparable, publicly available datasets. The evaluation shows that important informational characteristics of email messages, namely the entropy values, are preserved even when messages undergo transformations that preserve privacy and anonymity.
by Meshkat Farrokhzadi, M.Eng.
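The per-character entropy of a message body, one of the measures the abstract builds on, is straightforward to compute. This is a textbook Shannon entropy sketch, not the thesis's exact methodology; it also shows why a privacy-preserving character substitution leaves the entropy untouched.

```python
import math
from collections import Counter

def char_entropy(text: str) -> float:
    """Per-character Shannon entropy (in bits) of a message body."""
    counts = Counter(text)
    n = len(text)
    return sum(-(c / n) * math.log2(c / n) for c in counts.values())

print(char_entropy("aaaa"))   # 0.0  (no uncertainty)
print(char_entropy("abcd"))   # 2.0  (four equally likely symbols)
# A substitution cipher permutes symbols but preserves the histogram,
# so anonymised text keeps the original entropy:
print(char_entropy("hello") == char_entropy("ifmmp"))  # True
```

The last line mirrors the abstract's finding: transformations that rename symbols for privacy leave the entropy values intact.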
12

Kocurova, Anna. „Distributed collaborative context-aware content-centric workflow management for mobile devices“. Thesis, University of West London, 2013. https://repository.uwl.ac.uk/id/eprint/745/.

Annotation:
Ubiquitous mobile devices have become a necessity in today’s society, opening new opportunities for interaction and collaboration between geographically distributed people. With the increased use of mobile phones, people can collaborate while on the move. Collaborators expect technologies that would enhance their teamwork and respond to their individual needs. Workflow is a widely used technology that supports collaboration and can be adapted for a variety of collaborative scenarios. Although the originally computer-based workflow technology has also expanded to mobile devices, there are still research challenges in the development of user-focused, device-oriented collaborative workflows. As opposed to desktop computers, mobile devices provide a different, more personalised user experience and are carried by their owners everywhere. Mobile devices can capture user context and behave as digitalised user complements. By integrating context awareness into the workflow technology, workflow decisions can be based on local context information and therefore be more adapted to individual collaborators’ circumstances and expectations. Knowing the current context of collaborators and their mobile devices is useful, especially in mobile peer-to-peer collaboration where the workflow process execution can be driven by devices according to the situation. In mobile collaboration, team workers share pictures, videos, or other content. Monitoring and exchanging the information on the current state of the content processed on devices can enhance the overall workflow execution. As mobile devices in peer-to-peer collaboration are not aware of a global workflow state, the content state information can be used to communicate progress among collaborators. However, there is still a lack of support for integrating content lifecycles into process-oriented workflows.
The aim of this research was therefore to investigate how workflow technology can be adapted for mobile peer-to-peer collaboration, in particular, how the level of context awareness in mobile collaborative workflows can be increased and how extra content lifecycle management support can be integrated. The collaborative workflow technology has been adapted for mobile peer-to-peer collaboration by integrating context and content awareness. In the first place, a workflow-specific context management approach has been developed that allows defining workflow-specific context models and supports the integration of context models with collaborative workflows. The workflow process has been adapted to make decisions based on context information. Secondly, extra content management support has been added to the workflow technology. A representation for content lifecycles has been designed, and content lifecycles have been integrated with the workflow process. In this thesis, the MobWEL workflow approach is introduced. The MobWEL workflow approach allows defining, managing and executing mobile context-aware content-centric workflows. MobWEL is a workflow execution language that extends BPEL, using constructs from existing workflow approaches, Context4BPEL and BPELlight, and adopting elements from the BALSA workflow model. The MobWEL workflow management approach is a technology-based solution that has been designed to provide workflow management support to a specific class of mobile applications.
13

Ram, Mohan Prabhakara. „Trigeiawriter: A content management system“. CSUSB ScholarWorks, 2011. https://scholarworks.lib.csusb.edu/etd-project/3331.

Annotation:
The purpose of this project was to design and implement a Content Management System (CMS). TrigeiaWriter is a Content Management System for Trigeia.com, a web-based magazine site. Since TrigeiaWriter is used for a web-based magazine, it incorporates different user roles: authors, editors, and administrators.
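A role scheme like the one the abstract names is typically a mapping from role to permitted actions. The action names below are illustrative guesses, not TrigeiaWriter's actual permission model.

```python
# Hypothetical role -> permitted actions mapping for a small CMS.
PERMISSIONS: dict[str, set[str]] = {
    "author": {"create_draft", "edit_own"},
    "editor": {"create_draft", "edit_own", "edit_any", "publish"},
    "administrator": {"create_draft", "edit_own", "edit_any",
                      "publish", "manage_users"},
}

def can(role: str, action: str) -> bool:
    """Check whether a role is allowed to perform an action."""
    return action in PERMISSIONS.get(role, set())

print(can("author", "publish"), can("editor", "publish"))  # False True
```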
14

Huang, Chia Shein Jason. „Context, content and the process of participation in information systems development : a structuration perspective“. Thesis, London School of Economics and Political Science (University of London), 1997. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.265297.

15

Laucht, Silke. „Information content and testosterone dependence of animal signals“. Diss., lmu, 2011. http://nbn-resolving.de/urn:nbn:de:bvb:19-142791.

16

Ström, Niklas. „Essays on Information Disclosure : Content, Consequence and Relevance“. Doctoral thesis, Uppsala universitet, Företagsekonomiska institutionen, 2006. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-6988.

Annotation:
This thesis provides new insight into the information environments of Initial Public Offerings (IPOs) and analysts’ equity reports. The thesis consists of four essays that address the issues of firm disclosure and the relevance of information for analysts and investors in the capital market. The research question concerns the role of accounting information on the capital market. The present thesis entails the following analyses:

(i) an analysis of the content of IPO prospectuses;
(ii) cross-sectional analyses of factors affecting prospectus disclosure;
(iii) an analysis of the short- and long-run returns of IPOs;
(iv) an analysis of the relevance of IPO disclosure to IPO valuation;
(v) an analysis of the non-financial information content of analysts’ reports;
(vi) an analysis of the valuation relevance of non-financial information.

The first essay examines prospectus disclosure and the factors that drive it. The findings reveal that IPO firms provide more information in their prospectuses than non-IPO firms. The second essay analyzes how prospectus disclosure affects IPO valuation in the secondary market. It is hypothesized that increased disclosure in the prospectus decreases valuation uncertainty, which implies lower underpricing for the IPO firm. The essay shows that Swedish IPOs are underpriced; however, disclosure is not found to be related to underpricing. The third essay examines the extent and type of forecasts provided in the prospectuses and the value relevance of this information. The study reveals a reduction in profit and sales forecast disclosures, alongside an increase in sales growth forecasts, for the period 1996-2004. The essay finds that forecast information is particularly relevant to investors and analysts: forecast-disclosing firms demonstrate significantly lower underpricing and lower long-run returns compared with non-forecast-disclosing firms.
The fourth essay concerns the valuation relevance of non-financial information contained in analysts’ equity reports. The essay notes that valuation relevance of non-financial information is positively related to the size of the target firm. Moreover, analysts were observed to rely more heavily on forward-looking non-financial information than historical non-financial information in their valuation.
17

Johnson, Christine. „Information content of observations in variational data assimilation“. Thesis, University of Reading, 2003. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.288737.

18

Trevino, Villarreal Maria de Lourdes. „Modelling the information content of sovereign credit ratings“. Thesis, University of Southampton, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.299284.

19

Wolfenbarger, J. Kenneth Seinfeld John H. „Aerosol data inversion : optimal solutions and information content /“. Diss., Pasadena, Calif. : California Institute of Technology, 1990. http://resolver.caltech.edu/CaltechETD:etd-11092007-094509.

20

Fröjd, Sofia. „Measuring the information content of Riksbank meeting minutes“. Thesis, Umeå universitet, Institutionen för fysik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-158151.

Annotation:
As the amount of information available on the internet has increased sharply in recent years, methods for measuring and comparing text-based information are gaining popularity on financial markets. Text mining and natural language processing have become important tools for classifying large collections of texts or documents. One field of application is topic modelling of the minutes of central banks' monetary policy meetings, which tend to concern topics such as "inflation", "economic growth" and "rates". The central bank of Sweden is the Riksbank, which holds six monetary policy meetings a year at which the members of the Executive Board decide on the new repo rate. Two weeks later, the minutes of the meeting are published and information regarding future monetary policy is given to the market in the form of text. Before release this information is unknown to the market, so it has the potential to be market-sensitive. Using Latent Dirichlet Allocation (LDA), an algorithm for uncovering latent topics in documents, the topics in the meeting minutes can be identified and quantified. In this project, eight topics were found, concerning, among other things, inflation, rates, household debt and economic development. An important factor in the analysis of central bank communication is the underlying tone of the discussions. It is common to classify central bankers as hawkish or dovish. Hawkish members of the board tend to favour tightening monetary policy and rate hikes, while more dovish members advocate a more expansive monetary policy and rate cuts. Thus, analysing the tone of the minutes can give an indication of future moves of the monetary policy rate. The purpose of this project is to provide a fast method for analysing the minutes of the Riksbank's monetary policy meetings. The project is divided into two parts.
First, an LDA model was trained to identify the topics in the minutes, which was then used to compare the content of two consecutive sets of meeting minutes. Next, the sentiment was measured as a degree of hawkishness or dovishness. This was done by categorising each sentence in terms of its content and then counting words with hawkish or dovish sentiment. The resulting net score gives larger values to more hawkish minutes and was shown to follow the repo rate path well. At the time of the release of the minutes, the new repo rate is already known, but the net score does give an indication of the stance of the board.
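The hawkish/dovish net score can be sketched as a dictionary count over the minutes. The word lists below are short illustrative stand-ins, not the sentiment dictionaries such an analysis would actually use.

```python
HAWKISH = {"hike", "raise", "tighten", "inflationary", "overheating"}
DOVISH = {"cut", "ease", "stimulus", "accommodative", "weak"}

def net_hawkishness(minutes_text: str) -> int:
    """Hawkish minus dovish word count; positive = more hawkish minutes."""
    words = minutes_text.lower().split()
    return (sum(w in HAWKISH for w in words)
            - sum(w in DOVISH for w in words))

print(net_hawkishness("The board may hike rates to tighten policy"))   # 2
print(net_hawkishness("Members favour a cut and continued stimulus"))  # -2
```

In the project described above, the counting is done per sentence after categorising each sentence's content; the simple document-level count here just shows the scoring principle.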
APA, Harvard, Vancouver, ISO und andere Zitierweisen
21

Abbadi, Sinan Sulieman. „Information content and interrelationships of multiple performance measures“. Thesis, Durham University, 2009. http://etheses.dur.ac.uk/1961/.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
22

Chafik, Sanaa. „Machine learning techniques for content-based information retrieval“. Thesis, Université Paris-Saclay (ComUE), 2017. http://www.theses.fr/2017SACLL008/document.

Der volle Inhalt der Quelle
Annotation:
The amount of media data is growing at high speed with the fast growth of the Internet and media resources. Performing an efficient similarity (nearest neighbor) search in such a large collection of data is a very challenging problem that the scientific community has been attempting to tackle. One of the most promising solutions to this fundamental problem is Content-Based Media Retrieval (CBMR) systems: search systems that perform the retrieval task in large media databases based on the content of the data. CBMR systems consist essentially of three major units: a Data Representation unit for feature representation learning, a Multidimensional Indexing unit for structuring the resulting feature space, and a Nearest Neighbor Search unit to perform efficient search. Media data (i.e. image, text, audio, video, etc.) can be represented by meaningful numeric information (a multidimensional vector), called a Feature Description, describing the overall content of the input data. The task of the second unit is to structure the resulting feature descriptor space into an index structure in which the third unit performs effective nearest neighbor search. In this work, we address the problem of nearest neighbor search by proposing three Content-Based Media Retrieval approaches. Our three approaches are unsupervised, and thus can adapt to both labeled and unlabeled real-world datasets. They are based on a hashing indexing scheme to perform effective high-dimensional nearest neighbor search. Unlike most recent existing hashing approaches, which favor indexing in Hamming space, our proposed methods provide index structures adapted to a real-space mapping. Although Hamming-based hashing methods achieve a good accuracy-speed tradeoff, their accuracy drops owing to information loss during the binarization process.
By contrast, real-space hashing approaches provide a more accurate approximation in the mapped real space, as they avoid the hard binary approximations. Our proposed approaches can be classified into shallow and deep approaches. In the former category, we propose two shallow hashing-based approaches, namely "Symmetries of the Cube Locality Sensitive Hashing" (SC-LSH) and "Cluster-based Data Oriented Hashing" (CDOH), based respectively on randomized-hashing and shallow learning-to-hash schemes. The SC-LSH method provides a solution to the space storage problem faced by most randomized hashing approaches. It consists of a semi-random scheme that partially reduces the randomness effect of randomized hashing approaches, and thus their memory storage problem, while maintaining their efficiency in structuring heterogeneous spaces. The CDOH approach eliminates the randomness effect by combining machine learning techniques with the hashing concept, and outperforms the randomized hashing approaches in terms of computation time, memory space and search accuracy. The third approach is a deep learning-based hashing scheme, named "Unsupervised Deep Neuron-per-Neuron Hashing" (UDN2H). The UDN2H approach indexes individually the output of each neuron of the top layer of a deep unsupervised model, namely a Deep Autoencoder, with the aim of capturing the high-level individual structure of each neuron's output. Our three approaches, SC-LSH, CDOH and UDN2H, were proposed sequentially as the thesis progressed, with an increasing level of complexity in terms of the developed models, and in terms of the effectiveness and performance obtained on large real-world datasets.
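As a toy illustration of the real-space versus Hamming-space distinction this abstract discusses (an invented sketch, not SC-LSH, CDOH or UDN2H themselves): random projections yield real-valued codes, and thresholding them with sign() produces the binary codes whose quantization causes the accuracy loss mentioned.

```python
# An invented toy contrast (not SC-LSH, CDOH or UDN2H themselves) between
# real-space codes and their binarized Hamming-space counterparts: random
# projections give real-valued codes, and sign() is the lossy quantization.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 32))      # 100 toy feature descriptors, 32-D
W = rng.normal(size=(32, 8))        # 8 random projection directions

real_codes = X @ W                  # real-space hash codes (no binarization)
binary_codes = np.sign(real_codes)  # Hamming-space codes: only signs survive

def nearest_real(query: np.ndarray) -> int:
    """Index of the nearest neighbor in the real-valued code space."""
    dists = np.linalg.norm(real_codes - query @ W, axis=1)
    return int(np.argmin(dists))

# A query equal to item 7 retrieves item 7 exactly in the real code space.
hit = nearest_real(X[7])
```

The binary codes are cheaper to compare (Hamming distance on sign bits) but discard magnitude information, which is the accuracy-versus-speed tradeoff the thesis targets.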
APA, Harvard, Vancouver, ISO und andere Zitierweisen
23

Shetty, Shekar T. „The Information Content of Pension Fund Asset Reversion“. Thesis, University of North Texas, 1992. https://digital.library.unt.edu/ark:/67531/metadc279197/.

Der volle Inhalt der Quelle
Annotation:
Prior studies on the impact of the termination of overfunded defined benefit pension plans on shareholders' wealth have produced conflicting findings. The first study on the stock market reaction to pension plan termination was conducted by Alderson and Chen (1986); this study claimed that shareholders realize significant positive abnormal returns around the termination announcement date. A more recent study, by Moore and Pruitt (1990), disclaimed the findings of Alderson and Chen. Reexamination of these two studies with additional evidence and the use of the appropriate announcement date suggests that termination of pension plans is associated with significant wealth gain to shareholders. This study also analyzes samples from periods prior to and after the imposition in 1986 of a 10 percent excise tax on recaptured excess pension assets. The empirical results suggest that shareholders experience significant positive wealth effects for the pre-tax (1980-85) period and no wealth effects for the post-tax (1986-88) period. The primary purpose of this study is to determine the impact of stock market reaction upon shareholders' wealth under the partial anticipation hypothesis. The pre-tax sample is analyzed by isolating the expected terminators using the multiple discriminant analysis model. This study finds significant positive abnormal returns only for firms that are not anticipated by the investors as potential terminators. The results of this study do not lend support to either the "separation" or the "integration" hypothesis as proposed by Alderson and Chen (1986). Instead, the results are consistent with the information hypothesis that the market reacts to unanticipated events that provide new information. Cross-sectional regression analysis of unexpected terminators suggests that the abnormal performance of stocks of pension terminating firms is explained by the firms' debt ratio and the amount of surplus pension assets. 
It can be inferred that firms may resort to recapturing excess pension assets as a way of financing investments internally when faced with unfavorable credit markets.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
24

Pappa, Sara T. „A Content Analysis of Online HPV Immunization Information“. University of Cincinnati / OhioLINK, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1479819905816751.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
25

Mauwa, Hope. „Information security awareness: generic content, tools and techniques“. Thesis, Nelson Mandela Metropolitan University, 2007. http://hdl.handle.net/10948/560.

Der volle Inhalt der Quelle
Annotation:
In today’s computing environment, awareness programmes play a much more important role in organizations’ complete information security programmes. Information security awareness programmes exist to change behaviour or reinforce good security practices, and to provide a baseline of security knowledge for all information users. Security awareness is a learning process which changes individual and organizational attitudes and perceptions so that the importance of security, and the adverse consequences of its failure, are realized. With proper awareness, therefore, employees become the most effective layer in an organization’s security defence. Given the important role that awareness programmes play in organizations’ complete information security programmes, every organization that is serious about information security must implement one. Yet although awareness programmes have become increasingly important, the level of awareness in most organizations is still low. It seems that the current approach to developing these programmes does not satisfy the needs of most organizations. Another approach, which tries to meet the needs of most organizations, is therefore proposed in this project as part of the solution for raising the level of awareness in organizations.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
26

Small, David L. „Information content of polarimetric synthetic aperture radar data“. Thesis, University of British Columbia, 1991. http://hdl.handle.net/2429/30103.

Der volle Inhalt der Quelle
Annotation:
Research into the analysis of polarimetric synthetic aperture radar (SAR) data continues to reveal new applications and data extraction techniques. The objective of this thesis is to examine the information content of a quad-polarization SAR, and determine which polarimetric variables are most useful for classification purposes. The four complex polarimetric radar channels (HH, HV, VH, and VV) are expressed as nine scattering matrix cross-product "features" (with the loss of only absolute phase), and the relative utility of each for terrain classification is examined. Feature utility is examined in two ways — by measuring how each feature separates classes of terrain in an image, and by measuring how well a classifier performs with and without each feature. The features are then ranked in order of utility to the classifier, or in order of information content. A sharp distinction is found between those features that provide information useful to the classifier, and those that do not. It is found that those features that are defined as the product of a co-polarized and a cross-polarized term can be relatively safely ignored, with little loss of classification accuracy. This would be useful for reducing data transmission, storage, and processing requirements, and for designing future simplified radar systems. There is qualitative evidence that classification performance can actually be improved when these features are ignored. Of three simplified radar systems considered, the co-polarized design (returning only the complex HH and VV channels) in general produced classifications closest to that of a fully polarimetric SAR.
Applied Science, Faculty of
Electrical and Computer Engineering, Department of
Graduate
APA, Harvard, Vancouver, ISO und andere Zitierweisen
27

Gazkohani, Ali Esmaeily. „Exploring snow information content of interferometric SAR Data“. Thèse, Université de Sherbrooke, 2008. http://savoirs.usherbrooke.ca/handle/11143/2793.

Der volle Inhalt der Quelle
Annotation:
The objective of this research is to explore the information content of repeat-pass cross-track Interferometric SAR (InSAR) with regard to snow, in particular Snow Water Equivalent (SWE) and snow depth. The study is an outgrowth of earlier snow cover modeling and radar interferometry experiments at Schefferville, Quebec, Canada and elsewhere which has shown that for reasons of loss of coherence repeat-pass InSAR is not useful for the purpose of snow cover mapping, even when used in differential InSAR mode. Repeat-pass cross-track InSAR would overcome this problem. As at radar wavelengths dry snow is transparent, the main reflection is at the snow/ground interface. The high refractive index of ice creates a phase delay which is linearly related to the water equivalent of the snow pack. When wet, the snow surface is the main reflector, and this enables measurement of snow depth. Algorithms are elaborated accordingly. Field experiments were conducted at two sites and employ two different types of digital elevation models (DEM) produced by means of cross track InSAR. One was from the Shuttle Radar Topography Mission digital elevation model (SRTM DEM), flown in February 2000. It was compared to the photogrammetrically produced Canadian Digital Elevation Model (CDEM) to examine snow-related effects at a site near Schefferville, where snow conditions are well known from half a century of snow and permafrost research. The second type of DEM was produced by means of airborne cross track InSAR (TOPSAR). Several missions were flown for this purpose in both summer and winter conditions during NASA's Cold Land Processes Experiment (CLPX) in Colorado, USA. Differences between these DEM's were compared to snow conditions that were well documented during the CLPX field campaigns. The results are not straightforward. As a result of automated correction routines employed in both SRTM and AIRSAR DEM extraction, the snow cover signal is contaminated. 
Fitting InSAR DEMs to known topography distorts the snow information, just as the snow cover distorts the topographic information. The analysis is therefore mostly qualitative, focusing on particular terrain situations. At Schefferville, where the SRTM was adjusted to known lake levels, the expected dry-snow signal is seen near such lakes. Mine pits and waste dumps not included in the CDEM are depicted, and there is also a strong signal related to the spatial variations in SWE produced by wind redistribution of snow near lakes and on the alpine tundra. In Colorado, cross-sections across ploughed roads support the hypothesis that in dry snow the SWE is measurable by differential InSAR. They also support the hypothesis that snow depth may be measured when the snow cover is wet. Difference maps were also extracted for a 1 km² Intensive Study Area (ISA) for which intensive ground truth was available. Initial comparison between estimated and observed snow properties yielded low correlations which improved after stratification of the data set. In conclusion, the study shows that snow-related signals are measurable. For operational applications, satellite-borne cross-track InSAR would be necessary. The processing needs to be snow-specific, with appropriate filtering routines to account for influences by terrain factors other than snow.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
28

Chen, Su Ling, und 陳素玲. „Dividend Information Content--omissions“. Thesis, 1994. http://ndltd.ncl.edu.tw/handle/99886209266224393015.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
29

Hsieh, Tsung-Liang, und 謝宗良. „Information Content of Accrual Anomaly“. Thesis, 2005. http://ndltd.ncl.edu.tw/handle/46679421737458389521.

Der volle Inhalt der Quelle
Annotation:
Master's
Chinese Culture University
Graduate Institute of Accounting
93
Sloan (1996) and some more recent studies suggest that a strategy of following the accrual accounts of invested companies can create an extra 10% return on investment. With extended testing, this phenomenon, called the “accrual anomaly”, provides extra guidance for investors searching for high returns. However, because many elements can potentially affect the accrual anomaly, the uncertainties they create have prevented many investors from digging deeply into the strategy; this motivated the further study of the topic presented in this thesis. Because scholars studying the strategy tend to overemphasize the relationship between accrual accounts and investment returns, the accrual anomaly itself was often overlooked, as were the factors connected with it. Therefore, this study builds on Tzachi (2003), adding consideration of the effect of special company events and of the benchmark return, and uses data from 1999 to 2002 to discuss the following two questions. First, do companies with higher accrual accounts tend to have more company events, and do the returns generated by a company’s special events, including selling company debt, SEOs, and M&A, overlap with the extra returns from following the accrual anomaly? Second, does using a different benchmark return affect the accrual anomaly? The study shows that excluding special company events such as mergers and acquisitions has a positive effect on the accrual anomaly, while excluding events such as selling company debt and increasing cash holdings has a negative effect.
On the other hand, changing the company’s benchmark return can have an effect of up to negative 20% on the accrual anomaly.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
30

Lee, Picheng, und 李丕正. „Information Content of Dealers'' Trading“. Thesis, 1994. http://ndltd.ncl.edu.tw/handle/72426275343715250632.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
31

Liou, Rough Lan, und 劉若蘭. „Information Content of Financial Ratios“. Thesis, 1995. http://ndltd.ncl.edu.tw/handle/69871020068757177476.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
32

Huang, Li-Ting, und 黃莉婷. „Information content of customer satisfaction“. Thesis, 2007. http://ndltd.ncl.edu.tw/handle/v343uq.

Der volle Inhalt der Quelle
Annotation:
Master's
National Sun Yat-sen University
Department of Finance
95
What enterprises are most concerned about now is how customers value their company, so customer satisfaction is the most important thing for enterprises to focus on. Moreover, since financial statements cannot fully reflect the value of a company, we need other information to help us value it, and customer satisfaction is such information. Past research has shown a relationship between customer satisfaction and equity value. This article discusses that relation further: is there an interactive effect between customer satisfaction and abnormal earnings on equity value, and would this effect bring incremental information content that helps us value the company? The next part of this article decomposes current customer satisfaction into prior customer satisfaction and the change in customer satisfaction, to see whether both have information content and whether they influence abnormal earnings and equity value. The main contribution of this article is a more detailed discussion of the information in customer satisfaction, which deepens our understanding of customer satisfaction and offers another way to think about the valuation of a company.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
33

Chin, Chen-Lung, und 金成隆. „The information content of information characteristic and dicclosure frequency“. Thesis, 1997. http://ndltd.ncl.edu.tw/handle/75579713313170376055.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
34

Hsu, Pei-jung, und 許倍榕. „The Information Content on VaR Disclosure and Information Asymmetry“. Thesis, 2009. http://ndltd.ncl.edu.tw/handle/07322813041604876917.

Der volle Inhalt der Quelle
Annotation:
Master's
National Cheng Kung University
Department of Business Administration (Master's and Doctoral Program)
97
We hypothesize that firms’ quantitative market risk disclosures in their 10-K reports, which may take three alternative formats (tabular, sensitivity analysis and Value at Risk (VaR)) and were mandated by the Securities and Exchange Commission (SEC) Financial Reporting Release Number 48 (FRR No. 48) in 1997, convey useful information to investors, especially VaR disclosures. First, we repeat the analysis of Linsmeier et al. (2002): (1) in the absence of FRR No. 48 information, trading volume is positively associated with the absolute value of underlying market rate changes; (2) trading volume sensitivity to absolute changes in underlying market rates is lower after FRR No. 48 disclosures than before them. We observe that firms’ trading volume is positively related to underlying market rates, but with a slightly greater coefficient in the post regime than in the pre regime. We attribute this difference to differences in the time-series data: the data in this paper span seven years, and macro effects may offset the significance of our result. Moreover, we further test whether VaR disclosure is better than tabular or sensitivity disclosures and whether the VaR value has a significant effect on reducing investors’ information asymmetry. Consistent with our hypotheses, our results suggest that VaR disclosures are more informative to investors than the other formats, and that VaR values are positively associated with trading volumes as well. Therefore, FRR No. 48 information is useful to investors, particularly VaR disclosures.
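For context, a minimal parametric (variance-covariance) sketch of the kind of Value at Risk figure FRR No. 48 permits firms to disclose. The return series, position size and confidence level below are invented for illustration, not taken from the thesis.

```python
# A minimal parametric (variance-covariance) sketch of a 1-day Value at Risk
# figure of the kind FRR No. 48 lets firms disclose. The return series,
# position size and confidence level are invented for illustration.
import statistics

returns = [0.01, -0.02, 0.005, -0.01, 0.015, -0.005, 0.02, -0.015]
position = 1_000_000  # hypothetical portfolio value in dollars

mu = statistics.mean(returns)
sigma = statistics.stdev(returns)  # sample standard deviation of returns
z_95 = 1.645                       # one-sided 95% standard normal quantile

# Under normality, this loss level is exceeded on about 5% of trading days.
var_95 = position * (z_95 * sigma - mu)
```

Real disclosures differ in horizon, confidence level and estimation method (historical simulation, Monte Carlo, or the parametric form above).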
APA, Harvard, Vancouver, ISO und andere Zitierweisen
35

You, Chun-Fan, und 游清芳. „Dividend Information Content and Future Profitability“. Thesis, 2009. http://ndltd.ncl.edu.tw/handle/91740547419229884640.

Der volle Inhalt der Quelle
Annotation:
Doctorate
National Yunlin University of Science and Technology
Doctoral Program, Graduate Institute of Management
97
Over the last few decades, many studies have found that for companies paying cash dividends, changes in dividends convey information about future profitability. This raises an interesting question: in a market with diverse dividend distributions such as Taiwan, does the dividend signaling hypothesis still hold? To answer this question, this study proposes an “adjusted dual signal model” and examines the association between dividend changes and future profitability. The main thrusts of the model are adopting a “predicted dividend change model” to identify a dividend-change sample consistent with the outlook for firm profitability, and using a proxy variable for future profitability to verify the validity of the dividend change. The empirical results show that, using pooled cross-sectional data, dividend changes are positively associated with future profitability for any type of dividend payout. This pattern also applies to future stock returns and to portfolios formed on dividend changes. Notably, a similar result occurs in cross-sectional data. Finally, robustness tests on share repurchases, mean reversion, etc., also present consistent results.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
36

Yen, Wei-Te, und 顏維德. „The information content of market volatility“. Thesis, 2012. http://ndltd.ncl.edu.tw/handle/75665079255148325491.

Der volle Inhalt der Quelle
Annotation:
Master's
Tamkang University
Executive Master's Program, Department of Banking and Finance
100
In this paper, we use the methods of Chang, Hsieh and Wang (2010) and Ni, Pan and Poteshman (2008) to investigate the information content of net vega demand and to examine the predictive power of different types of traders for realized volatility in the TAIEX options market. We also examine straddle and strangle strategies and discuss how significant events, including the financial crisis, the presidential election and ex-dividend dates, affect traders. The regression analysis shows that foreign institutional investors, market makers and dealers have predictive power for future volatility. Foreign institutional investors and dealers are informed traders of volatility, and market makers may gain an advantage in the information content of volatility by providing liquidity. Around the significant events, the results for most investors are unchanged; only market makers’ predictive power is significantly positive after the financial crisis.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
37

Cheng, Ling-chi, und 鄭淩淇. „The Information Content of Discretionary Accruals“. Thesis, 2000. http://ndltd.ncl.edu.tw/handle/65042641239725112351.

Der volle Inhalt der Quelle
Annotation:
Master's
Chung Yuan Christian University
Department of Accounting
88
Discretionary accruals play an important role in determining the behavior of reported earnings. Since discretionary accruals are subject to management’s manipulation, past research has focused on the role of discretionary accruals in contracting and income smoothing. The current study extends Subramanyam (1996) and classifies discretionary accruals into those made for the purpose of signaling and those made for the purpose of smoothing income. The results indicate that the Taiwan capital market responds more strongly to discretionary accruals hypothesized to be for signaling than to those hypothesized to be for smoothing income. Although the associations between discretionary accruals and future performance indicators are positive, the relationship between signaling-motivated discretionary accruals and future performance indicators is weaker than that between smoothing-motivated discretionary accruals and future performance indicators.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
38

Huang, Wei-Shang, und 黃煒翔. „Information Content of Earnings Forecasts Revision“. Thesis, 1997. http://ndltd.ncl.edu.tw/handle/30565436494954561376.

Der volle Inhalt der Quelle
Annotation:
Master's
National Chung Hsing University
Department of Business Administration
85
This thesis applies an Intervention Model (IVM) to analyze the information content of earnings forecast revisions. Previous research on earnings forecast revisions has applied the Market Model, whose shortcoming is that it ignores the correction of outliers. In this thesis we therefore introduce dummy variables when calculating abnormal returns, in order to draw correct conclusions. In addition, we apply the Market Model to calculate Abnormal Returns (AR) and Cumulative Abnormal Returns (CAR); if these are significant, we can conclude that the earnings forecast revision is informative. In this study there are no excess abnormal returns on individual stocks. Applying the AR and CAR models, however, we find significance for negative earnings revision announcements but not for positive ones.
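The market-model AR/CAR machinery this abstract builds on can be sketched as follows. This is a generic illustration with invented return series, not the thesis's intervention-model specification with dummy variables.

```python
# A generic market-model AR/CAR sketch (not the thesis's intervention-model
# specification): AR_t = R_t - (alpha + beta * Rm_t), CAR_t = sum of ARs.
# All return series below are invented for illustration.
import numpy as np

# Estimation window: market returns and a stock built with known alpha/beta
rm_est = np.array([0.01, -0.02, 0.015, 0.005, -0.01, 0.02])
r_est = 0.002 + 1.2 * rm_est            # stock returns, alpha=0.002, beta=1.2

beta, alpha = np.polyfit(rm_est, r_est, 1)  # OLS market-model estimates

# Event window: realized stock and market returns around the announcement
rm_evt = np.array([0.01, -0.005, 0.02])
r_evt = np.array([0.03, 0.00, 0.05])

ar = r_evt - (alpha + beta * rm_evt)    # abnormal returns
car = ar.cumsum()                       # cumulative abnormal returns
```

The intervention-model variant adds event-window dummy variables to the regression itself, so that outliers in the estimation window do not contaminate the alpha/beta estimates.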
APA, Harvard, Vancouver, ISO und andere Zitierweisen
39

CHEN, KENG-CHUN, und 陳秔君. „Information Content of Corporate Governance Evaluation“. Thesis, 2019. http://ndltd.ncl.edu.tw/handle/ftfm36.

Der volle Inhalt der Quelle
Annotation:
Master's
National Taipei University
Department of Accounting
107
This study uses the results of the Corporate Governance Rating evaluation system provided by the Financial Supervisory Commission, specifically the results of the third and fourth corporate governance reviews published in 2017 and 2018, as its index of corporate governance evaluation. The objective of this paper is to investigate the relationship between the information content of rating changes in the corporate governance evaluation and cumulative abnormal returns (and margin trading and short selling). The results suggest that an up (down) rating is associated with larger (smaller) cumulative abnormal returns, consistent with expectations, but the effect is not significant; the results for margin trading and short selling are the same. In general, the results of this study do not support a relationship between the information content of rating changes in the corporate governance evaluation and cumulative abnormal returns (or margin trading and short selling). It may be that general investors cannot capture the information in rating changes of the corporate governance evaluation.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
40

Li, Huan Tang, und 李煥堂. „The Information Content of Audit Qualifications“. Thesis, 1994. http://ndltd.ncl.edu.tw/handle/28733257975363704675.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
41

Hsu, Chu-Wen, und 徐楚雯. „Information Content of Securities Firms’ Recommendation“. Thesis, 2019. http://ndltd.ncl.edu.tw/handle/42hgzv.

Der volle Inhalt der Quelle
Annotation:
Master's
National Taiwan University of Science and Technology
Graduate Institute of Finance
107
This study is based on brokerage investment recommendations from the Taiwan Economic Journal (TEJ). We take only recommendations rated "Strong Buy" or "Buy" as the research object, over the period from January 1, 2016 to December 31, 2018, and examine whether the shares of listed companies carry information content after being recommended by securities firms. We use the event-study method to estimate expected returns and test whether the securities firms' recommendations have information content, and we use a cross-sectional multiple regression model to identify factors that may affect the cumulative abnormal return (CAR) and thereby analyze what drives the information content of the recommended stocks. The evidence shows that abnormal returns are significantly positive from five days before the recommendation to two days after it. There are two possible reasons for the positive abnormal returns before the recommendation date: the broker may disclose the information to customers before publishing the recommendation, or the recommended stocks may simply have performed better before the recommendation date. In addition, although the abnormal return after the recommendation date is positive, it is lower than on the recommendation date and decreases day by day; a possible reason is that the price rise is caused by the information released on the recommendation date and is followed by an overreaction. As a result, investors can obtain significant abnormal returns from recommendation information only within two days after the recommendation date. The multiple regression analysis shows that investors obtain higher excess returns when the economy is worse, and that the smaller the company, the higher the excess returns. Stocks with larger past price rises also earn higher future returns. The study also finds that foreign institutional investment is significantly negatively correlated with cumulative abnormal returns.
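As a rough illustration of the event-study machinery the abstract describes (a sketch only, not the study's actual estimation: the market-model specification, window lengths, and toy data are all assumptions):

```python
import numpy as np

def car_market_model(stock_ret, market_ret, est_slice, event_slice):
    """Cumulative abnormal return (CAR) via the market model.
    stock_ret, market_ret: 1-D arrays of daily returns aligned in time.
    est_slice: estimation window; event_slice: event window (e.g. day -5..+2)."""
    # Fit R_stock = alpha + beta * R_market over the estimation window.
    beta, alpha = np.polyfit(market_ret[est_slice], stock_ret[est_slice], 1)
    # Abnormal return = actual minus expected return in the event window.
    ar = stock_ret[event_slice] - (alpha + beta * market_ret[event_slice])
    return ar, ar.sum()  # daily abnormal returns and their cumulative sum

# Toy data: 120 estimation days, then an 8-day event window (day -5 .. +2).
rng = np.random.default_rng(0)
mkt = rng.normal(0.0, 0.01, 128)
stk = 0.0002 + 1.1 * mkt + rng.normal(0.0, 0.005, 128)
ar, car = car_market_model(stk, mkt, slice(0, 120), slice(120, 128))
```

In a setting like the study's, the per-event `car` values would then be averaged across recommendation events and tested for significance against zero.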
APA, Harvard, Vancouver, ISO, and other citation styles
42

Li-Min Wang and 王立民. „The Information Content of Price Range“. Thesis, 2011. http://ndltd.ncl.edu.tw/handle/25107526669473579411.

The full content of the source
APA, Harvard, Vancouver, ISO, and other citation styles
43

Tai-Lin, Liu, and 劉泰麟. „Incremental Information Content of Corporate Governance“. Thesis, 2004. http://ndltd.ncl.edu.tw/handle/60282563398344340969.

The full content of the source
Annotation:
Master's thesis
Tunghai University
Department of Accounting
92
Since the Asian financial crisis of 1997, financial regulators have paid attention to the issue of corporate governance, and the Enron and WorldCom scandals further confirmed its importance. At the end of 2001, Taiwan began amending its Corporate Law and building the related institutions. Because the system had only just been established, it remained doubtful whether corporate governance would work. According to the concepts and frameworks developed by the OECD and the World Bank, the function of corporate governance can be divided into two scopes: "promoting what is beneficial" and "abolishing what is harmful". On the former, domestic and foreign literature reaches the same conclusion: effective corporate governance enhances the value and performance of corporations, and this function stood out even though corporate governance had only just been put into practice. The latter covers inappropriate policies, inefficiency, earnings management, and so on; because these are not easily quantified, this paper focuses on earnings management, which can be quantified. The literature on whether corporate governance can inhibit earnings management reaches inconsistent conclusions, because each study examined only one segment of corporate governance, so the overall effect of corporate governance was not shown. This paper therefore tests "what is harmful" using a corporate governance integrative index. Many elements can affect earnings management, such as debt contracts and financial forecasts, and corporate governance is perhaps one of them.
In Taiwan, corporate governance has not yet become a significant element in inhibiting earnings management. Accordingly, this study not only examines the relationship between corporate governance and earnings management directly, but also attaches corporate governance to corporations' first-half-year financial forecasts to test the incremental effect indirectly. The conclusions are as follows: (1) In the direct test, there is no remarkable effect on "what is harmful", which suggests that under Taiwan's circumstances corporate governance is not a significant factor affecting earnings management. (2) In the indirect test, with the corporate governance variable attached to the financial forecast error, better corporate governance has an incremental effect that inhibits earnings management of financial forecasts in the second half of the year and in the fourth quarter. (3) The indirect test also shows that institutional shareholdings have a significant incremental effect in the fourth quarter. This outcome is similar to the foreign literature, and in Taiwan the effect is concentrated in the fourth quarter.
APA, Harvard, Vancouver, ISO, and other citation styles
44

Krzanowski, Roman. „Ontological Information. Investigation into the properties of ontological information“. Doctoral thesis, 2020. http://bc.upjp2.edu.pl/Content/5024.

The full content of the source
APA, Harvard, Vancouver, ISO, and other citation styles
45

Lin, Pei-huei, and 林佩慧. „Effects of Information Disclosure Transparency on Information Content and Earnings Management“. Thesis, 2006. http://ndltd.ncl.edu.tw/handle/wx3zad.

The full content of the source
Annotation:
碩士
南台科技大學
會計資訊系
94
Information disclosure transparency is fundamental to improving corporate governance mechanisms. Since the Enron fraud, investors and the Taiwan Securities and Futures Institute (SFI) have paid more attention to firms' information transparency. To promote corporate information disclosure, the SFI has run the "Information Transparency and Disclosure Rankings System" (ITDRS) to evaluate the transparency of listed companies since 2003. First, we use the ITDRS results for 2003 and 2004 to examine the effect of information transparency on the informativeness of earnings. Next, we explore whether a higher degree of corporate disclosure transparency reduces the possibility of earnings manipulation, since it implies less information asymmetry between investors and corporate managers. Empirical results indicate that information disclosure transparency has information content after controlling for corporate governance variables, but no differences in earnings management are found between companies. This is probably because the rankings identify only the top third of the companies as more transparent, so a two-group classification may be inadequate for detecting clear differences in earnings management.
APA, Harvard, Vancouver, ISO, and other citation styles
46

Faraz, Zunaira. „The Information Content of Corporate Governance Ratings“. Thesis, 2013. http://spectrum.library.concordia.ca/977909/1/Faraz_MSc_F2013.pdf.

The full content of the source
Annotation:
Several corporate governance rating agencies have in recent years introduced quantitative corporate governance ratings for publicly traded firms. Firms invest significant resources to be rated by such agencies because they anticipate potential benefits for investors; one potential benefit is a reduction in information asymmetry between firms and investors. We examine the cross-sectional relation between firms' commercial corporate governance ratings and contemporaneous proxies for their information asymmetry. Using ratings from two leading agencies, Governance Metrics International (GMI) and Institutional Shareholder Services (ISS), together with six information asymmetry proxies, we find a significant relation between the ratings and several measures of information asymmetry. We find, however, no significant change in firms' information asymmetry around the first time they are rated. In addition, contrary to our expectations, we find a significantly negative relation between high ratings and the cumulative abnormal returns around the announcement date, but an insignificant relation for low or moderate ratings. Overall, our results suggest that governance ratings are related to the information environment surrounding a firm.
APA, Harvard, Vancouver, ISO, and other citation styles
47

Wu, Hsu-Jan, and 吳頊然. „The Information Content of Modified Audit Opinion“. Thesis, 2002. http://ndltd.ncl.edu.tw/handle/50246210137935565462.

The full content of the source
Annotation:
碩士
國立臺北大學
會計學系
90
The audit report is the medium between accountants and investors, and the opinion affects how the public views a company's financial report, which is usually reflected in its stock price. This research compares investors' perception of the modified audit opinion, an opinion type newly added in Statement of Auditing Standards No. 33 (1999), with their perception of the unqualified opinion, which had been in common use for decades. It also examines how investors interpret a modified audit opinion under three circumstances: a material item, an accounting principle change, and going-concern doubt. The empirical findings are as follows: (1) A significant stock price drop is observed for modified audit opinions overall. (2) Investors respond differently under different circumstances: when the modified opinion concerns a material item or an accounting principle change, no influence on the stock price is observed, but when going-concern doubt is specified, the stock price drops severely. (3) There is no difference between a first-time modified audit opinion and one issued on a continuing basis. (4) The effect of institutional holdings is insignificant.
APA, Harvard, Vancouver, ISO, and other citation styles
48

Lin, Ching Houng, and 林晉宏. „The Incremental Information Content Of Productivity Measures“. Thesis, 1994. http://ndltd.ncl.edu.tw/handle/72553607972608607787.

The full content of the source
APA, Harvard, Vancouver, ISO, and other citation styles
49

Hsieh, Chia-Hao, and 謝家豪. „Content-Based Video Retargeting with Depth Information“. Thesis, 2010. http://ndltd.ncl.edu.tw/handle/09354075958937138040.

The full content of the source
Annotation:
Master's thesis
National Chung Cheng University
Graduate Institute of Electrical Engineering
98
In this thesis, we present a content-aware image resizing algorithm aided by disparity information acquired from a stereo camera using a block matching algorithm. Content-aware image resizing requires energy terms that help separate the main content from the background. Here we use disparity, gradient, saliency residual, and motion history to describe spatial discontinuity, intensity variation, visual saliency, and scene difference, respectively. The disparity and gradient maps are smoothed and enhanced before being merged with the saliency residual and motion history. The smoothing and enhancement methods are discussed and compared with ordinary methods.
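A minimal sketch of how such per-pixel cues might be merged into a single energy map for content-aware resizing (the weights, normalization, and toy data below are assumptions, not the thesis's actual formulation):

```python
import numpy as np

def combined_energy(gray, disparity, saliency, motion,
                    weights=(0.3, 0.3, 0.2, 0.2)):
    """Merge per-pixel cues into one energy map for content-aware resizing.
    All inputs are 2-D float arrays of the same shape."""
    # Gradient magnitude captures intensity variation.
    gy, gx = np.gradient(gray)
    grad = np.hypot(gx, gy)

    def norm(m):
        # Scale each cue to [0, 1] so the weights are comparable.
        rng = m.max() - m.min()
        return (m - m.min()) / rng if rng > 0 else np.zeros_like(m)

    cues = [norm(grad), norm(disparity), norm(saliency), norm(motion)]
    return sum(w * c for w, c in zip(weights, cues))

# Toy 4x4 example: a bright, near, salient, moving block in the top-left.
img = np.zeros((4, 4)); img[:2, :2] = 1.0
e = combined_energy(img, disparity=img, saliency=img, motion=img)
```

Normalizing each cue before weighting keeps any single term (e.g. raw disparity in pixels) from dominating the map; a seam-carving pass would then remove the lowest-energy seams.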
APA, Harvard, Vancouver, ISO, and other citation styles
50

Hsu, Jia-Lien, and 徐嘉連. „Content-based Music Information Retrieval and Analysis“. Thesis, 2001. http://ndltd.ncl.edu.tw/handle/33474421178600440765.

The full content of the source
Annotation:
Ph.D. dissertation
National Tsing Hua University
Department of Computer Science
90
In this thesis, we first discuss the techniques used in content-based music information retrieval, including methods for representing music objects, similarity measures between music objects, and indexing and query processing for music object retrieval. To represent music objects, we introduce three coding schemes: chord, mubol, and music segment. Various similarity measures are then presented, followed by index structures and their associated query-processing algorithms; the index structures include the suffix tree, n-gram, and augmented suffix tree. A qualitative comparison of these techniques shows the intrinsic difficulty of content-based music information retrieval. We also initiated the Ultima project, which aims to construct a platform for evaluating various approaches to music information retrieval. Three approaches, with corresponding tree-based, list-based, and (n-gram + tree)-based index structures, are implemented, and a series of experiments has been carried out. Supported by the experimental results, we compare the index-construction and query-processing performance of the three approaches and summarize what makes content-based music information retrieval efficient. The feature extraction problem for music objects is also studied to support content-based retrieval in searching, classification, recommendation, and so forth. A repeating pattern in music data is defined as a sequence of notes that appears more than once in a music object; themes are a typical kind of repeating pattern. Themes and other non-trivial repeating patterns are important music features that can be used both for content-based retrieval of music data and for music data analysis. We propose two approaches for quickly discovering non-trivial repeating patterns in music objects.
In the first approach, we develop a data structure called the correlative matrix and its associated algorithms for extracting repeating patterns. In the second approach, we introduce a string-join operation and a data structure called the RP-tree for the same purpose. Experiments are performed to compare these two approaches with others, and the results are analyzed to show the efficiency and effectiveness of our approaches. Further, we extend the problem of finding exact repeating patterns to that of finding approximate repeating patterns, and introduce two applications to motivate this research. An approximate repeating pattern is defined as a sequence of symbols that appears more than once under certain approximation types in a data sequence. We define three approximation types, longer_length, shorter_length, and equal_length, and specify the problem of finding approximate repeating patterns with respect to each. By applying the concept of a 'cut' and the 'pattern_join' operator, we develop a level-wise approach to finding approximate repeating patterns under longer_length approximation. In addition, we extend the pattern_join operator to the generalized_pattern_join operator for efficiently finding long patterns. The performance study shows that our approach is efficient and scales well. We also refine our approach to extract repeating patterns from polyphonic music data.
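To illustrate the notion of non-trivial repeating patterns described above (a naive enumeration sketch over a note string, not the thesis's correlative-matrix or RP-tree algorithms; the subsumption rule used here is a simplified stand-in):

```python
from collections import Counter

def repeating_patterns(notes, min_len=2):
    """Return substrings of `notes` that occur more than once.
    A pattern is kept only if no longer pattern containing it repeats
    just as often (a simple stand-in for 'non-trivial' patterns)."""
    counts = Counter(
        notes[i:i + n]
        for n in range(min_len, len(notes))
        for i in range(len(notes) - n + 1)
    )
    repeats = {p: c for p, c in counts.items() if c > 1}
    # Drop patterns subsumed by a longer pattern with the same frequency.
    return {
        p: c for p, c in repeats.items()
        if not any(p in q and q != p and repeats[q] == c for q in repeats)
    }

# The motif "CDCDE" repeats twice, and "CD" occurs twice inside each copy.
pats = repeating_patterns("CDCDECDCDE")
```

This brute-force enumeration is quadratic in both time and space, which is exactly why the thesis develops specialized structures for the same task on real music data.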
APA, Harvard, Vancouver, ISO, and other citation styles