To see the other types of publications on this topic, follow the link: Tau Theory.

Dissertations / Theses on the topic 'Tau Theory'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 dissertations / theses for your research on the topic 'Tau Theory.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Ricke, Charlotte [Verfasser]. "On tau-Tilting Theory and Perpendicular Categories / Charlotte Ricke." Bonn : Universitäts- und Landesbibliothek Bonn, 2016. http://d-nb.info/1122193823/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Hart, Julie. "A study of tau leptons produced in Z0 decays." Thesis, University of Cambridge, 1991. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.239641.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Williams, Michael D. Jr. "Searching for Clean Observables in $B \to D^{*} \tau^- \bar{\nu}_{\tau}$ Decays." VCU Scholars Compass, 2019. https://scholarscompass.vcu.edu/etd/5885.

Full text
Abstract:
In this thesis, the clean angular observables in the $\bar{B} \to D^{*+} \ell^- \bar{\nu}_{\ell}$ angular distribution are studied. Similar angular observables have been widely studied in $B \to K^* \mu^+ \mu^-$ decays. We believe that these angular observables may have different sensitivities to different new physics structures.
APA, Harvard, Vancouver, ISO, and other styles
4

Ridgway, Garnet R. "Optical Tau theory : current and future roles in fixed-wing flight operations." Thesis, University of Liverpool, 2012. http://livrepository.liverpool.ac.uk/10973/.

Full text
Abstract:
Commercial air travel is widely regarded as one of the safest methods of transportation in terms of fatalities per distance travelled, and the annual number of fatal airliner accidents has been in decline since the end of the Second World War. However, a small but significant number of fatal accidents still occur each year, indicating that there is scope for further improvement in flight safety. A review of airliner safety statistics concluded that the greatest proportion of fatal accidents over the last ten years occurred in the approach and landing phase of flight. In spite of recent advances in flight deck automation for large transport aircraft, certain piloting tasks are still performed manually by the pilot. The flare manoeuvre (an aft longitudinal stick input in the final moments before touchdown) is an example of such a task, and is often undertaken based solely upon the visual information available through the windscreen. Previous studies have shown the flare to be considered the most difficult piloting task undertaken during typical fixed-wing missions. Additionally, there is no consensus amongst the existing body of work as to the precise nature of the piloting strategies used to perform the flare manoeuvre. Recent studies at the University of Liverpool (UoL) have sought to apply theories of visual perception to such piloting tasks in order to gain an understanding of how pilots make use of the available visual information. In particular, the optical parameter “time-to-contact”, or “tau” (τ), has been shown to provide an appropriate basis for understanding and modelling pilot behaviour for “gap closure” type manoeuvres. Such manoeuvres, of which the flare is an example, involve the pilot controlling the motion of the aircraft between a specified start and end point. The overall aim of the work reported in this thesis was to build upon these findings to further develop the current and future roles of tau theory in fixed-wing piloting tasks.

The first objective of this research was to establish the nature of the strategy used by pilots to initiate the flare manoeuvre. A number of previous studies have investigated this area, often with conflicting results; this study, therefore, sought to identify and address some of the limitations of these previous investigations. A piloted simulation experiment was undertaken using a model of a generic large transport aircraft (GLTA) in the HELIFLIGHT simulator at UoL. The results suggested that pilots use a constant, critical value of time-to-contact with the runway, τ, to initiate the flare manoeuvre. In addition, it was demonstrated that commanding flare initiation at a constant value of τ through use of a Head Up Display (HUD) resulted in more successful manoeuvres (in terms of vertical velocity at touchdown) than any of the other parameters tested. This further demonstrated the appropriateness of the tau-based flare initiation strategy.

The second aspect of the work presented in this thesis was concerned with the development and evaluation of a tau-based pilot aid for the flare manoeuvre. This was based on both the findings of the flare initiation investigation and a previous study at UoL. The concept was used to drive a set of HUD symbology, which was implemented on the GLTA simulation model to enable piloted evaluation. The tau-based HUD was evaluated against both a baseline Head Down Display (HDD) and an in-service example HUD in a piloted simulation experiment. The results showed that the tau-based concept provided a performance advantage over the baseline HDD, and performance comparable with the in-service example HUD. Recommendations were made for further refinement of the concept in future design iterations.

A previous study at UoL identified two types of tau-based piloting strategy for the flare manoeuvre. Specifically, it had been observed that pilots used either a strategy in which the aircraft performed a continuous vertical deceleration until touchdown (“type 1”), or a strategy in which the vertical deceleration was completed before touchdown (“type 2”). In the case of the type 2 flare, the deceleration phase was typically followed by a phase of approximately constant vertical velocity. A piloted simulation experiment was undertaken to test the hypothesis that the type 2 flare strategy was adopted to compensate for the paucity of the visual information available, i.e. the fact that the pilots could not directly observe the landing gear. Three groups of novice pilots performed a simplified flare task using varying levels of visual information: the standard windscreen view, a simulated video feed showing the main gear, and a HUD representation of the main gear. The results supported the hypothesis, and also showed that an improvement in performance could be derived from enabling the pilot to directly observe the gap closure formed by the landing gear and the runway.

The final aspect of this study sought to extend the tau-based approach to fixed-wing flight control to other phases of flight. To this end, two methods of tau-based pilot modelling for fixed-wing aircraft were described and evaluated. The first of these computed a tau-based reference trajectory that was passed through a conventional stability control augmentation system (SCAS) in order to minimise the error between it and the aircraft's current trajectory. The second method used an approximation of the inverse dynamics of the aircraft to generate the appropriate open-loop control input. The error minimisation model was shown to provide appropriate guidance for a typical range of manoeuvres for a light fixed-wing training aircraft. The perfect control method was shown to provide appropriate guidance for the single manoeuvre tested, and as such was recommended for further investigation. Overall, through the investigation of piloting strategy, this study showed the current role of tau theory to be that of an appropriate, succinct method of describing pilot behaviour for a range of fixed-wing flight tasks.
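The central quantity is easy to state in code. Below is a minimal illustrative sketch (not from the thesis; the heights, sink rates, and critical value are invented for demonstration) of computing optical tau and triggering flare initiation when τ falls to a constant critical value:

```python
# Illustrative sketch only: optical tau and a constant-tau flare trigger.
# tau_crit_s, heights, and sink rates below are invented numbers.
def tau(gap, gap_rate):
    """Time-to-contact: current gap divided by its closure rate."""
    return gap / gap_rate

def should_flare(height_m, sink_rate_ms, tau_crit_s=6.0):
    """Trigger flare initiation when time-to-contact with the runway
    falls to a constant critical value, per the tau-based strategy."""
    return tau(height_m, sink_rate_ms) <= tau_crit_s

# Example: 20 m above the runway, sinking at 3.5 m/s -> tau ~ 5.7 s
print(should_flare(20.0, 3.5))   # True: tau is below the assumed 6 s threshold
```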
APA, Harvard, Vancouver, ISO, and other styles
5

Whitehead, John Gardner. "An examination of the kinematics and behavior of mallards (Anas platyrhynchos) during water landings." Diss., Virginia Tech, 2020. http://hdl.handle.net/10919/99383.

Full text
Abstract:
This dissertation aims to address how a change in landing substrate may change landing kinematics. To examine this possibility, mallards (Anas platyrhynchos) were used as a study species and 177 water landings were recorded through the use of two camera systems with photogrammetric capabilities. This enabled the landing trajectory and landing transition kinematics to be tracked in three dimensions. From the resulting position data, three questions were pursued. Do mallards regulate landing kinematics through a τ̇-constant strategy? With what kinematics do mallards land on water? Do landing kinematics respond to external factors, such as an obstacle to landing? Chapter 2 assesses the presence of a τ̇-constant regulatory strategy and compares the implementation to other landing behaviors. Chapter 3 examines the variation observed in the landing kinematics of mallards, identifies the primary kinematic drivers of that variation, and detects differences in kinematic profile. Chapter 4 inspects the landing kinematics combined with the positions of all other waterfowl in the vicinity to test for the presence of obstacle avoidance behavior.
Doctor of Philosophy
Control of landing is an important ability for any flying animal. However, with the exception of perch landing, we know very little about how birds and other flyers land on a variety of different surfaces. Here, we aim to extend our knowledge in this area by focusing on how mallard ducks land on water. This dissertation addresses the following questions. Do mallards regulate landing speed and trajectory the same way as pigeons? At what speeds, angles, and postures do mallards land on water? Can mallards adjust landing behavior to avoid collisions with other birds on the water surface? Chapter 2 determines how mallards regulate landings and how this is similar to, and different from, pigeons and several other flyers. Chapter 3 describes the speeds, angles, and postures used by mallards to land on water. In addition, this chapter finds evidence for at least two different categories of landing performed by mallards. Chapter 4 provides evidence that mallards avoid situations in which a collision with another bird is likely. However, it is unclear if this is an active choice made by the mallard or due to other circumstances related to the landing behavior. Overall, this dissertation illustrates how the landing behavior of mallards is similar to what has been documented in other animals. However, there are significant differences, such as higher impact speeds and shallower angles, both of which are likely related to the ability of water to absorb a greater amount of the impact force than a perch or the ground would.
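To make the τ̇-constant idea concrete, here is a minimal sketch (invented trajectory and code, not the dissertation's) of estimating τ and τ̇ from a tracked height series of the kind the camera systems produce:

```python
# Minimal sketch with invented data: estimate tau = z / z_dot and tau_dot
# from a tracked height series, as one might when testing whether a
# landing follows a tau_dot-constant strategy.
import numpy as np

def tau_dot_profile(z, dt):
    """Finite-difference estimates of tau and tau_dot from heights z[t].
    On descent z_dot < 0, so tau is negative (time until the gap closes)."""
    z = np.asarray(z, dtype=float)
    z_dot = np.gradient(z, dt)
    tau = z / z_dot
    return tau, np.gradient(tau, dt)

# Synthetic descent constructed to have constant tau_dot = 0.5 exactly
dt = 0.01
t = np.arange(0.0, 2.0, dt)
z = 5.0 * (1.0 - 0.25 * t) ** 2
tau, tau_dot = tau_dot_profile(z, dt)
print(np.round(tau_dot[10:15], 3))   # ~[0.5 0.5 0.5 0.5 0.5]
```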
APA, Harvard, Vancouver, ISO, and other styles
6

Corbo, Matteo. "La production de paires de quarks top dans le canal de désintégration avec un lepton tau." Phd thesis, Université Pierre et Marie Curie - Paris VI, 2012. http://tel.archives-ouvertes.fr/tel-00832952.

Full text
Abstract:
Top quark pair production with decays into two leptons, at least one of which is a tau lepton, is studied with the CDF experiment at the Tevatron proton-antiproton collider at FNAL in the USA. The selection requires an electron or muon produced either in the decay of a tau lepton or in the decay of a W. The analysis uses the full recorded dataset, 9 fb-1, with a trigger based on a low transverse-momentum electron or muon and an isolated charged track. The top pair production cross section obtained at this energy is 8.2 ± 1.7 (+1.2/-1.1) ± 0.5 pb, and the branching ratio to tau leptons is 0.120 ± 0.027 (+0.022/-0.019) ± 0.007, where the uncertainties are statistical, systematic, and from the luminosity, respectively. These are to date the most precise results in this top decay channel, in good agreement with the results obtained at the Tevatron in all other top decay channels. The branching ratio is also measured by separating tau-plus-lepton events from events with two tau leptons using a maximum-likelihood method; this is the first time these decay modes have been identified separately. From this likelihood separation of the two channels, an alternative measurement of the top branching ratio to tau leptons of 0.098 ± 0.022 (stat.) ± 0.014 (syst.) is obtained, in good agreement with Standard Model predictions. An upper limit of 0.159 on this branching ratio at 95% confidence level is extracted, probing physics beyond the Standard Model, in particular a possible charged Higgs boson.
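The channel separation rests on a standard binned maximum-likelihood fit; the sketch below is schematic only (templates, bins, and counts are invented) and illustrates the likelihood idea rather than the CDF analysis code:

```python
# Schematic two-template binned Poisson maximum-likelihood fit.
# All numbers are invented for illustration.
import numpy as np
from scipy.optimize import minimize

tmpl_tl = np.array([30.0, 50.0, 20.0])     # tau+lepton template (per-bin shape)
tmpl_tt = np.array([10.0, 20.0, 70.0])     # tau+tau template
observed = np.array([26.0, 44.0, 35.0])    # pseudo-data counts

def nll(theta):
    """Negative log-likelihood for signal strengths theta = (s_tl, s_tt)."""
    mu = theta[0] * tmpl_tl + theta[1] * tmpl_tt
    return np.sum(mu - observed * np.log(mu))

fit = minimize(nll, x0=[1.0, 1.0], bounds=[(1e-6, None)] * 2)
print(fit.x)   # fitted strength of each decay mode
```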
APA, Harvard, Vancouver, ISO, and other styles
7

Hariri, Faten. "Recherche de la désintegration du boson de Higgs en deux leptons taus dans l'expérience ATLAS." Thesis, Université Paris-Saclay (ComUE), 2015. http://www.theses.fr/2015SACLS065/document.

Full text
Abstract:
In the LHC project, one of the major goals was the search for the last missing piece of the standard model (SM), namely the Higgs boson (H). The quest was successful during the Run I data taking in 2012, with the discovery of a new scalar of mass ~126 GeV, compatible with the SM Higgs boson and decaying to two bosons (either two photons or two electroweak vector bosons, ZZ or W+W-). To complete the picture, one needed to establish the couplings of the new particle to fermions. This motivated the search for the decay mode into two tau leptons, predicted to have a high branching ratio. Inside the ATLAS collaboration, the analysis was divided into three channels according to the decay modes of the tau pair. The work reported in this Ph.D. describes the “lepton-hadron” analysis, where one tau lepton decays leptonically into an electron or a muon and the other decays hadronically. Common features of all three analyses are the identification of the tau lepton and the presence of large missing transverse energy (MET) due to the neutrinos escaping from the tau decays. An important contribution reported in this dissertation concerns the improvement brought by a new MET determination. By using charged tracks to estimate the contribution of the soft energy component produced in the proton-proton collision, the sensitivity to overlaid events (“pile-up”), unavoidable in a high-luminosity hadron collider, is much reduced. The systematic uncertainties associated with this soft component were estimated, and their dependence on physics modelling and pile-up conditions was studied for various track-based MET definitions. This will contribute to an improved H→tau+ tau- analysis with future data. In the lepton-hadron H analysis, the dominant background comes from events where a hadronic jet is misidentified as a hadronic tau (“fake tau”). The work reports in detail how this fake-tau background has been estimated in the two event configurations most sensitive to the H signal, i.e. events where the H boson is highly boosted or produced by fusion of vector bosons (VBF); VBF events are characterised by two forward and backward jets in addition to the H decay products. Finally, the thesis reports on a last contribution performed with the Higgs Effective Field Theory (HEFT) to study the H couplings and probe new physics beyond the SM in a model-independent way. The work consisted in testing and validating the “TauDecay” model in association with the Higgs characterisation framework in Madgraph5_aMC@NLO. After implementing a tool to merge H production and decay in a single step (especially useful with NLO requirements), the validation was done in three different ways: direct matrix-element generation, with the implemented merging tool, and using MadSpin to decay the taus. The combined package is ready for use in the LHC Run II context.
APA, Harvard, Vancouver, ISO, and other styles
8

Lacroix, Florent. "Mesure de la section efficace de production de paires de quarks top dans le canal lepton+tau+jets+met dans l'expérience D0 et interprétation en terme de boson de Higgs chargé." Phd thesis, Université Blaise Pascal - Clermont-Ferrand II, 2008. http://tel.archives-ouvertes.fr/tel-00731323.

Full text
Abstract:
The standard model of particle physics describes matter as composed of elementary particles interacting through the strong and electroweak interactions. The top quark is the heaviest quark described by this model; it was discovered in 1995 by the CDF and D0 collaborations in proton-antiproton collisions at the Tevatron. This thesis is devoted to the measurement of the cross section for top quark pair production via the strong interaction, in a final state containing a lepton, a hadronic tau, two b jets, and missing transverse energy. The analysis uses data collected between July 2006 and August 2007, corresponding to a luminosity of 1.2 fb-1, combined with the Run IIa data to reach a luminosity of 2.2 fb-1. Part of the thesis work was devoted to the trigger system of the D0 detector, in particular to the identification of tau leptons at level 3 of the trigger system and to triggers based on the presence of jets and missing transverse energy. The question of jet energy resolution is also addressed, through the eta intercalibration of the hadronic calorimeter and the use of the central preshower detector in the definition of jet energy. The top pair production cross section obtained is 7.32 +1.34/-1.24 (stat) +1.20/-1.06 (syst) ± 0.45 (lumi) pb. This measurement agrees with Standard Model predictions and constrains the presence of new physics, such as the existence of a Higgs boson lighter than the top quark. An exclusion limit was thus obtained in the (tan beta, mH±) plane and is presented in the last part of this manuscript.
APA, Harvard, Vancouver, ISO, and other styles
9

Robbiano, Sylvain. "Méthodes d'apprentissage statistique pour le ranking : théorie, algorithmes et applications." Phd thesis, Telecom ParisTech, 2013. http://tel.archives-ouvertes.fr/tel-00936092.

Full text
Abstract:
Multipartite ranking is a statistical learning problem that consists of ordering observations belonging to a high-dimensional space in the same order as their labels, so that observations with the highest label appear at the top of the list. This thesis aims to understand the probabilistic nature of the multipartite ranking problem in order to obtain theoretical guarantees for ranking algorithms. In this setting, the output of a ranking algorithm takes the form of a scoring function, a function mapping the observation space to the real line, and the final order is the one induced by the real line. The contributions of this manuscript are the following. First, we focus on characterising the optimal solutions of multipartite ranking. A new condition on the likelihood ratios is introduced and shown to be necessary and sufficient for the multipartite ranking problem to be well posed. We then examine criteria for evaluating scoring functions, and propose to use a generalisation of the ROC curve, the ROC surface, together with the volume it induces. For use in applications, the empirical counterpart of the ROC surface is studied and consistency results are established. The second research theme is the design of algorithms for producing scoring functions. The first procedure is based on aggregating scoring functions learned on binary ranking subproblems. To aggregate the orders induced by the scoring functions, we use a metric approach based on the Kendall τ to find a median scoring function. The second procedure is a recursive method, inspired by the TreeRank algorithm, which can be viewed as a weighted version of CART. A simple modification is proposed to obtain an approximation of the optimal ROC surface using a piecewise-constant scoring function. These procedures are compared to state-of-the-art multipartite ranking algorithms on real and simulated data sets. The performance highlights the cases in which our procedures are well suited, in particular when the dimension of the feature space is much larger than the number of labels. Finally, we return to the binary ranking problem in order to establish adaptive minimax rates of convergence. These rates are shown for classes of distributions controlled by the complexity of the posterior distribution and a low-noise condition. The procedure achieving these rates is based on plug-in estimators of the posterior distribution and an aggregation method using exponential weights.
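The aggregation step can be illustrated compactly. The following sketch (invented data; a brute-force median restricted to the input rankings themselves) shows a Kendall τ distance and a Kemeny-style median choice:

```python
# Toy illustration of Kendall-tau median aggregation (data invented).
from itertools import combinations

def kendall_distance(r1, r2):
    """Number of item pairs ordered differently by the two rankings.
    Rankings are dicts: item -> position (smaller = ranked higher)."""
    items = list(r1)
    return sum(
        (r1[a] - r1[b]) * (r2[a] - r2[b]) < 0
        for a, b in combinations(items, 2)
    )

def median_ranking(candidates, voters):
    """Candidate ranking with the smallest summed distance to the voters."""
    return min(candidates, key=lambda c: sum(kendall_distance(c, v) for v in voters))

voters = [{"x": 1, "y": 2, "z": 3}, {"x": 1, "y": 3, "z": 2}, {"x": 2, "y": 1, "z": 3}]
print(median_ranking(voters, voters))   # here the median is one of the inputs
```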
APA, Harvard, Vancouver, ISO, and other styles
10

Lancaster, Brian. "The 'I'-tag theory of perception, memory and consciousness." Thesis, Liverpool John Moores University, 1997. http://researchonline.ljmu.ac.uk/4954/.

Full text
Abstract:
The distinction between explicit and implicit psychological performance is held to arise as a consequence of differences in self-related processing. In the former, outputs from sensory and memory activity gain ready access to a model of self, referred to here as 'I'. Implicit performance comes about when activity is isolated from 'I' for pathological, or other, reasons. Under normal, explicit circumstances the model of 'I' constructed at a given time is stored in association with representations of concurrent thoughts or percepts. This memory model of 'I' is referred to as an 'I'-tag, and is hypothesised to function in subsequent recall. Evidence for the above is drawn from neuropsychological data relating to the implicit/explicit distinction in terms of differential brain systems, and from introspective data concerning the characteristics of conscious processes. Studies of a variety of brain-damaged patients suggest a distinction between decrements in direct stimulus- or motor-related processing and compromised availability of material to consciousness. It is argued here that the latter are consequent on problems in the interpretations of direct processing, specifically those normally involving 'I' as the putative receiver of impressions, controller of memory recollection, and instigator of actions. The Buddhist philosophy of mind analyses the nature of self and details the stages operating in processes of thought and perception. In particular, the notion of 'I' implied in the foregoing description is stated to be illusory. The alternative view, that 'I' arises as a conditioned association and is without substantive continuity, is supportive of the 'I'-tag concept. The 'I'-tag theory is further developed through an analysis of the stages of perception as detailed in Buddhist thought. Finally, the theory is employed to advance a possible psychological interpretation of a strand of Jewish mysticism in which an artificial anthropoid, the golem, was said to be created through linguistic techniques.
APA, Harvard, Vancouver, ISO, and other styles
11

BELDJOUDI, LARBI. "Extension du programme de la theorie des perturbations chirales et physique du tau." Paris 11, 1995. http://www.theses.fr/1995PA112182.

Full text
Abstract:
The main subject of this thesis is the phenomenology of strong interactions at low energies. We study aspects of the interactions of energetic pions and kaons that cannot be treated by chiral perturbation theory. First, the ππ and πK scattering amplitudes are constructed using the chiral symmetry of QCD and unitarity constraints. The unitarisation method used for this purpose consists in writing dispersion relations for the inverse of the amplitude, whose right-hand cut is determined exactly by elastic unitarity, while the left-hand cut is approximated by chiral perturbation theory. The result thus obtained coincides with the Padé approximant of the amplitude. This method leads to satisfactory agreement with experimental data. In the second part, applications based on the ππ and πK phase shifts so obtained are presented. A first application concerns the vector and scalar form factors of the pion. The phases of these form factors are given by the ππ phase shifts, thanks to the Fermi-Watson theorem on final-state interactions. The solution to these unitarity constraints for the form factors is of the Omnès-Muskhelishvili type. In the last part, some applications to hadronic decay modes of the tau lepton are addressed. Using a similar approach for the pion form factor, the Kπ channel of tau decay is studied. A final piece of work analyses the 3π and Kππ channels using current algebra and axial dominance.
APA, Harvard, Vancouver, ISO, and other styles
12

Bayer, Ralph-C., Harald Oberhofer, and Hannes Winner. "The Occurrence of Tax Amnesties: Theory and Evidence." Elsevier, 2015. http://epub.wu.ac.at/5689/1/amnestyR2_v2.pdf.

Full text
Abstract:
This paper presents a theoretical model and empirical evidence to explain the occurrence of tax amnesties. We treat amnesties as endogenous, resulting from a strategic game between many taxpayers discounting future payments from punishment and a government that balances costs and benefits of amnesty programs. From the model we derive hypotheses about the factors that should influence the occurrence of tax amnesties. To test these predictions empirically, we rely on amnesty information from US States between 1981 and 2011. In line with the theoretical model, our empirical findings suggest that the likelihood of amnesties is mainly driven by a government's fiscal requirements and the taxpayers' expectations on future amnesties.
APA, Harvard, Vancouver, ISO, and other styles
13

Sherliker, Warren. "Acoustic echo cancellation algorithms with TAP selection for non-stationary environments." Thesis, Imperial College London, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.326275.

Full text
APA, Harvard, Vancouver, ISO, and other styles
14

Cabeça, Maria Antónia Augusta. "Tit for tat em adolescentes." Master's thesis, Instituto Superior de Psicologia Aplicada, 1996. http://hdl.handle.net/10400.12/373.

Full text
APA, Harvard, Vancouver, ISO, and other styles
15

Clarkson, Richard. "Taub-NUT Spacetime in the (A)dS/CFT and M-Theory." Thesis, University of Waterloo, 2005. http://hdl.handle.net/10012/1264.

Full text
Abstract:
In the following thesis, I will conduct a thermodynamic analysis of the Taub-NUT spacetime in various dimensions, as well as show uses for Taub-NUT and other hyper-Kähler spacetimes.

Thermodynamic analysis (by which I mean the calculation of the entropy and other thermodynamic quantities, and the analysis of these quantities) has in the past been done by use of background subtraction. The recent derivation of the (A)dS/CFT correspondences from string theory has allowed for easier and quicker analysis. I will use Taub-NUT space as a template to test these correspondences against the standard thermodynamic calculations (via the Noether method), with (in the Taub-NUT-dS case especially) some very interesting results.

There is also interest in obtaining metrics in eleven dimensions that can be reduced down to ten-dimensional string theory metrics. Taub-NUT and other hyper-Kähler metrics already possess a form that readily facilitates Kaluza-Klein reduction, and embedding such metrics into eleven-dimensional metrics containing M2 or M5 branes produces metrics with interesting Dp-brane results.
APA, Harvard, Vancouver, ISO, and other styles
16

Svendsen, Harald Georg. "Aspects of plane waves and Taub-NUT as exact string theory solutions." Thesis, Durham University, 2004. http://etheses.dur.ac.uk/2947/.

Full text
Abstract:
This thesis is a study of some aspects of string theory solutions that are exact in the inverse string tension α′, and thus are valid beyond the low-energy limit. I investigate D-brane interactions in the maximally supersymmetric plane wave solution of type IIB string theory, and study the fate of the stringy halo surrounding D-branes. I find that the halo is like in flat space for Lorentzian D-branes, while it receives a non-trivial modification for Euclidean D-branes. I also comment on the connection between the Hagedorn temperature and T-duality, which motivates a more general study of T-duality in null directions. I consider such transformations in a spinning D-brane solution of supergravity, and find that divergences in the field components associated with null T-dualities are invisible to string and brane probes. I also observe that there are closed timelike curves in all the T-dual solutions, but that none of them are geodesics. The second half of the thesis is an investigation of the fate of closed timelike curves and of cosmological singularities in an exact stringy Taub-NUT solution of heterotic string theory, and in a rotating generalisation of it. I compute the exact spacetime fields, using a description in terms of a gauged Wess-Zumino-Novikov-Witten model, and find that the α′ corrections are mild. The key features of the Taub-NUT geometry persist, together with the emergence of a new region of space with Euclidean signature. Closed timelike curves are still present, which is interpreted as a sign that they might be a natural ingredient in string theory, for instance in pre-Big-Bang cosmological scenarios.
APA, Harvard, Vancouver, ISO, and other styles
17

Hales, David. "Tag based co-operation in artificial societies." Thesis, University of Essex, 2001. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.340588.

Full text
APA, Harvard, Vancouver, ISO, and other styles
18

Fincher, Jennie. "Decentering and the Theory of Social Development." Thesis, University of North Texas, 2012. https://digital.library.unt.edu/ark:/67531/metadc149590/.

Full text
Abstract:
The concept of decentering originated with Piaget, who defined decentering as a feature of operational thought, the ability to conceptualize multiple perspectives simultaneously. Feffer applied Piaget’s concept of decentering to the cognitive maturity of social content. This study used Feffer’s Interpersonal Decentering scoring system for stories told about TAT pictures to investigate the developmental hierarchy of decentering for children and adolescents. The participants originated from the Berkeley Guidance Study, a longitudinal sample of more than 200 individuals followed for more than 60 years by the Institute of Human Development at the University of California, Berkeley. The hypotheses tested were: (1) chronological age will be positively related to Decentering as reflected in Feffer’s Interpersonal Decentering scores obtained annually between ages 10 and 13 and at 18; (2) children born into higher class homes would have higher Age 12 Decentering scores; (3) children born later in birth order will have higher Age 12 Decentering scores; (4) children whose parents were observed to have closer bonds with their children at age 21 months will have higher Age 12 Decentering scores; (5) adolescents with higher scores from the Decentering Q-sort Scale (derived from adolescent Q-sorts) will have higher Age 12 Decentering scores; and (6) participants who have higher Age 12 Decentering scores will self-report higher CPI Empathy scale scores at Age 30. A repeated measures ANOVA tested Hypothesis 1. Pearson product-moment correlation coefficients tested Hypotheses 2-6. Age and Decentering scores were unrelated, as was birth order; social class findings were mixed. Parents’ bonds with child and Age 12 Decentering were negatively correlated (closer bonds predicted higher Decentering), as were Age 12 Decentering and Age 30 Empathy (higher early Decentering predicted lower adulthood Empathy). Girls (age 12) tended to decenter more consistently and had higher Decentering scores than boys.
APA, Harvard, Vancouver, ISO, and other styles
19

Gordon, James Peter Fraser. "The economic theory of tax administration and taxpayer compliance." Thesis, London School of Economics and Political Science (University of London), 1988. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.261291.

Full text
APA, Harvard, Vancouver, ISO, and other styles
20

Jurisica, Igor. "TA3, theory, implementation, and applications of similarity-based retrieval for case-based reasoning." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1998. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp02/NQ35199.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
21

Stovall, Preston John. "Toward a normative theory of rationality." [College Station, Tex.] : Texas A&M University, 2008. http://hdl.handle.net/1969.1/ETD-TAMU-2811.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

Decker, Marvin Glen. "Loop spaces in motivic homotopy theory." [College Station, Tex.] : Texas A&M University, 2006. http://hdl.handle.net/1969.1/ETD-TAMU-1808.

Full text
APA, Harvard, Vancouver, ISO, and other styles
23

Rai, Birendra Kumar. "Essays in game theory and institutions." [College Station, Tex.] : Texas A&M University, 2006. http://hdl.handle.net/1969.1/ETD-TAMU-1772.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

Lillywhite, Jay M. "Property Tax Capitalization: Theory and Empirical Evidence." DigitalCommons@USU, 1994. https://digitalcommons.usu.edu/etd/3889.

Full text
Abstract:
In an environment of increasing government expenditures financed largely through taxes, including a relatively visible and large residential property tax, the issue of whether property taxes are capitalized into market values is increasingly important. Property tax capitalization is the reflection of property taxes in the value of real property. The capitalization of property tax does not necessarily pose a problem; rather, problems arise when homes identical to each other have different taxes and these differentials are then capitalized into market values. These capitalized tax differentials result in large capital gains and losses to owners of real estate. This study (1) reviews existing economic theory and empirical evidence on the capitalization of property taxes, (2) develops a model of property valuation inclusive of tax effects, and (3) estimates the parameters of this model using a comprehensive data set of over 334 home sales in the Logan, Utah area. The empirical results include an estimate of the tax capitalization effect. Two closely related issues are also addressed in the study: (1) changes in real estate prices, including a suggested method for measuring such change, and (2) a study of property tax equity, including two specific measures of tax fairness. The conclusions are (1) tax differentials are capitalized; (2) real estate prices in the study area increased approximately 10 percent per year from 1989 to 1992; and (3) there is significant variation in assessment ratios.
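As a rough illustration of the capitalization test (synthetic data and invented coefficients, not the study's), a price regression of the following kind identifies the tax coefficient; under full capitalization it should approach -1/r for discount rate r:

```python
# Illustrative sketch only: synthetic hedonic regression for tax
# capitalization. All numbers are invented; r is an assumed discount rate.
import numpy as np

rng = np.random.default_rng(0)
n, r = 334, 0.05                          # sample size echoes the study; r assumed
sqft = rng.uniform(800, 3000, n)
tax = rng.uniform(500, 3000, n)           # annual property tax, dollars
price = 40_000 + 90 * sqft - (1 / r) * tax + rng.normal(0, 5_000, n)

X = np.column_stack([np.ones(n), sqft, tax])
beta, *_ = np.linalg.lstsq(X, price, rcond=None)
print(beta)   # tax coefficient should come out near -20, i.e. -1/r
```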
APA, Harvard, Vancouver, ISO, and other styles
25

Yokoo, Seiichiro. "Model for a fundamental theory with supersymmetry." [College Station, Tex.] : Texas A&M University, 2006. http://hdl.handle.net/1969.1/ETD-TAMU-1182.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

Chen, Wei. "Kerr-NUT-AdS metrics and string theory." [College Station, Tex.] : Texas A&M University, 2007. http://hdl.handle.net/1969.1/ETD-TAMU-2109.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Eriksson, Lina, and Mattias Hildén. "Vem tar beslut om inte chefen? : -en kvalitativ studie om chefslösa organisationer." Thesis, Uppsala universitet, Sociologiska institutionen, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-295504.

Full text
Abstract:
Hierarchies have existed for a long time and are taken for granted in many organisations. In this essay we examine three organisations in Sweden that claim to challenge the hierarchical structure by having no managers. The purpose of the essay is to increase knowledge about these organisations by examining how decisions are made and how they travel. We also examine what influences decisions and decision-making in these organisations. Previous research covers organisational development from a historical perspective, how decisions come about, different decision models, how decisions travel and are implemented, and the link between decisions and responsibility. The theoretical perspective is Actor-Network Theory (ANT). The empirical material was collected through nine qualitative interviews. It was coded and analysed using the theory's concepts of actor-network, performativity, action nets, and translation. Decisions were analysed as a symbol that can be created and can travel within the organisation with the help of various actors. The study shows that the concept of decision is strongly linked to action. A decision as something performative becomes a chain of translation consisting of many details, more easily described by a summarising descriptive concept, decision, in order to make sense of what it is. We have also found factors in the material showing how decisions are linked to the decentralised structure and how the employee as an individual becomes important. Finally, the results are discussed in relation to the research questions, previous research, theory, and method, and implications for further research on the subject are given.
APA, Harvard, Vancouver, ISO, and other styles
28

Sengupta, Partha. "Essays on the theory of tax evasion." Diss., Virginia Tech, 1991. http://hdl.handle.net/10919/39419.

Full text
Abstract:
Literature on tax evasion has generally ignored the effects of tax evasion by a monopolist in a regulatory environment. When the government is asymmetrically informed about the monopolist's demand and/or costs, however, the firm may have the opportunity to cheat on its regulatory constraint and tax payments. Adjustments in the regulatory constraint then will directly impact the tax revenues of the government, while alterations in tax policies may alter the effectiveness and efficiency results of a particular regulatory policy. To analyze these issues, two forms of regulation, a price ceiling regulation and a fixed profit-per-unit regulation, are considered in an environment where the government is incompletely informed about the monopolist's cost function. For the price ceiling regulation (Chapter 2), it is shown that tax evasion decisions are affected by variations in the ceiling, in the sense that an increase in the effective price ceiling results in misreporting by a larger proportion. Tax evasion decisions, however, are found not to affect output decisions of the monopolist. Thus the optimal price ceiling under evasion is set at the same level as without tax evasion, i.e., at the point where price equals expected marginal cost. Optimality in this economy can be achieved in a number of ways. Full compliance is one way, but optimality can also be achieved with tax evasion. When the form of regulation considered is a fixed profit-per-unit regulation (Chapter 3), the results are quite different. Because profits of the monopolist are not costlessly observable by the government, firms can cheat on the regulatory constraint itself. Thus tax evasion decisions are found to affect the monopolist's output. Literature on tax evasion has also often neglected the fact that income from different sources is taxed at different rates and provides different opportunities for misreporting. Once an individual obtains certain skills, his flexibility in switching jobs to evade taxes on his wage income becomes limited. Also, the fact that a large part of the wage income in the U.S. is reported to the government by the employer, and often withheld at the source, greatly limits the opportunity for evading wage taxes. However, an individual faces many options when deciding how to invest his savings, and the income from at least some of these may not be subject to withholding and reporting. This fact suggests that the savings of an individual can be affected by tax evading opportunities. Chapter 4 examines this problem by considering a dynamic model of tax evasion. The results show that an increase in the penalty rate or audit probability leads to an increase in savings of the individual, given some assumptions on preferences. This fact implies that savings are reduced by the possibility of tax evasion. It also suggests that savings could be increased by stricter enforcement of tax laws. Because the model used in Chapter 4 is fairly complicated, some of the comparative static results are found to be ambiguous under general conditions. It is also not clear from the theory what the optimal policy of the government would be. To address these issues in more detail, Chapter 5 considers some numerical exercises. A number of results emerge from these exercises. First, savings are found to increase with an increase in either the penalty rate or the audit rate, even when the restrictive assumptions on risk aversion do not hold and labor supply is variable. Second, full compliance seems to be the optimal policy of the government for the specification selected. These results seem to hold for both compensated and uncompensated taxes.
Ph. D.
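For readers unfamiliar with the mechanics, the following is a minimal numerical sketch in the Allingham-Sandmo tradition (a simplification; the dissertation's models are richer) showing how a higher audit probability pushes reported income toward full compliance:

```python
# Illustrative sketch: a taxpayer picks the share of income to report,
# trading tax saved against the penalty if audited. Parameters invented.
import numpy as np

def optimal_report(income=100.0, t=0.3, p=0.3, f=2.0, rho=2.0):
    """Grid-search the reported share maximising expected CRRA utility."""
    u = lambda c: c ** (1 - rho) / (1 - rho)          # CRRA utility, rho > 1
    shares = np.linspace(0.01, 1.0, 500)
    reported = shares * income
    y_clear = income - t * reported                    # not audited
    y_caught = y_clear - f * t * (income - reported)   # audited: penalty on evaded tax
    eu = (1 - p) * u(y_clear) + p * u(y_caught)
    return shares[np.argmax(eu)]

print(optimal_report(p=0.3), optimal_report(p=0.45))   # higher p -> more reported
```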
APA, Harvard, Vancouver, ISO, and other styles
29

Larsson, Kajsa. "Hur ska det gå för Malmö stads gröna tak?" Thesis, Malmö universitet, Fakulteten för kultur och samhälle (KS), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:mau:diva-23197.

Full text
Abstract:
Sustainable urban planning is becoming more and more important. Ecological sustainability holds great opportunities for urban areas to avoid some of the consequences climate change might cause. In sustainable planning, Grönytefaktorn (the Biotope Area Factor) is one important tool, and one which Malmö Stad has used to increase sustainable planning in the city. Since the beginning of 2015, municipalities in Sweden are no longer required to use the Biotope Area Factor, which might threaten the development of, for example, green roofs. The aim of this study is to investigate how real estate companies experience their role as investors in sustainable planning. Planning is a political activity with multiple stakeholders, and in order to understand the difficulties sustainable solutions face in this context, planning theory and the Theory of Planned Behavior have been used. The results show that the companies are not willing to invest in sustainable solutions like green roofs unless the market demands it.
APA, Harvard, Vancouver, ISO, and other styles
30

Andersson, Håkan. "School Timetabling in Theory and Practice: A comparative study of Simulated Annealing and Tabu Search." Thesis, Umeå universitet, Institutionen för datavetenskap, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-108275.

Full text
Abstract:
School timetabling is a way of distributing resources, such as teachers and classrooms, over a fixed period of time. This task can be difficult and very time-consuming. If the process of generating timetables is automated with the help of algorithms, this can save both time and money for the educational institute. In this thesis a general timetable is presented along with a set of constraints commonly used in school timetabling. Two metaheuristic algorithms with previously satisfactory results, Simulated Annealing and Tabu Search, are implemented and benchmarked against each other in order to evaluate their performance. The results show that although both algorithms are good candidates for creating timetables, Simulated Annealing has the edge in both run time and the quality of the timetable.
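For reference, a compact generic form of the simulated annealing loop for timetabling follows (the cost function, move, and cooling schedule here are illustrative placeholders, not the thesis's implementation):

```python
# Generic simulated annealing skeleton with a toy timetabling demo.
import math
import random

def simulated_annealing(timetable, cost, neighbour,
                        t0=100.0, cooling=0.995, steps=20_000):
    current, best = timetable, timetable
    temp = t0
    for _ in range(steps):
        candidate = neighbour(current)
        delta = cost(candidate) - cost(current)
        # Accept improvements always; accept worse moves with a
        # probability that shrinks as the temperature cools.
        if delta <= 0 or random.random() < math.exp(-delta / temp):
            current = candidate
        if cost(current) < cost(best):
            best = current
        temp *= cooling
    return best

# Toy demo: order 8 lessons to minimise adjacent duplicates
lessons = ["math", "math", "art", "art", "gym", "gym", "music", "music"]
cost = lambda tt: sum(a == b for a, b in zip(tt, tt[1:]))
def neighbour(tt):
    i, j = random.sample(range(len(tt)), 2)
    tt = list(tt)
    tt[i], tt[j] = tt[j], tt[i]
    return tt
print(simulated_annealing(lessons, cost, neighbour, steps=2_000))
```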
APA, Harvard, Vancouver, ISO, and other styles
31

Blackburn, James Walton. "Environmental mediation: expert assessment of an eclectic theory." Diss., Virginia Polytechnic Institute and State University, 1987. http://hdl.handle.net/10919/76505.

Full text
Abstract:
This dissertation developed an eclectic theory of environmental mediation and submitted it to 31 environmental mediation experts for an assessment of the 63 propositions of the theory. The propositions representing factors that the mediation experts assessed as important elements contributing to effective mediation were grouped into an "Essential Model," and the propositions representing less important elements were grouped into a "Secondary Model." Key dimensions which distinguish the two models were identified. The eclectic theory was developed from the case and theoretical literature, and practitioner comments on the art of environmental mediation. The expert assessment was conducted by submitting an instrument containing 63 propositions to the 31 mediation experts with a request that they rate the importance of each proposition in contributing to effective environmental mediation outcomes. The dissertation presents three alternative general models of mediation, and compares and contrasts the practice of environmental mediation with these models. Recommendations for further research are made, and critical reflections on the state of the art of environmental mediation are presented.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
32

Lövström, Anna. "Från naturliga tal till hela tal (från N till Z) : Vad kan göra skillnad för elevers möjligheter att bli bekanta med de negativa talen?" Licentiate thesis, Högskolan i Jönköping, Högskolan för lärande och kommunikation, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:hj:diva-27972.

Full text
Abstract:
The aim of the thesis is to gain knowledge concerning what pupils aged 8 and 9 need to learn to become familiar with negative numbers. The framework used in this research, variation theory, implies that students' problems in learning what was intended may have to do with the fact that some critical aspects of the studied object have not yet been discerned by the student. To get the pupils to understand the idea behind each critical aspect, carefully constructed examples were used. According to variation theory it is necessary to experience differences before you experience similarities. To answer the research question, data were collected using the learning study model. It is characterized by an iterative design in which I as a researcher collaborate with teachers to try to find and orchestrate the critical aspects. The method is interventionist, which means that interventions are made in teaching. In the learning study I cooperated with two primary school teachers and 64 pupils in four different classes. The data consist of video recordings of lessons, pre- and post-tests, interviews with pupils, and notes from the meetings of the learning study group. When planning lessons as well as analyzing data, concepts from variation theory were used as analytical tools. This thesis contributes to research by investigating in detail what aspects students need to differentiate in order to become familiar with negative numbers. The results show that the pupils needed not only to discern, but also to differentiate, three critical aspects: the values of two negative numbers; the function of the minuend versus the function of the subtrahend in a subtraction; and the minus sign for negative numbers versus the minus sign for subtraction.
APA, Harvard, Vancouver, ISO, and other styles
33

Bowens, Karessa Natee. "Interactive musical visualization based on emotional and color theory." [College Station, Tex.] : Texas A&M University, 2008. http://hdl.handle.net/1969.1/ETD-TAMU-3254.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Head, Katharine J. "Tanning bed use, deviance regulation theory, and source factors." Thesis, [College Station, Tex.] : Texas A&M University, 2008. http://hdl.handle.net/1969.1/ETD-TAMU-2368.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Bryant, Malika S. "Johnson Publishing Company’s Tan Confessions and Ebony: Reader Response through the Lens of Social Comparison Theory." Ohio University / OhioLINK, 2021. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1618997653408659.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Amez, Lucy. "Tag based Bayesian latent class models for movies : economic theory reaches out to big data science." Thesis, London Metropolitan University, 2017. http://repository.londonmet.ac.uk/1263/.

Full text
Abstract:
For the past 50 years, cultural economics has developed as an independent research specialism. At its core are the creative industries and the peculiar economics associated with them, central to which is a tension that arises from the notion that creative goods need to be experienced before an assessment can be made about the utility they deliver to the consumer. In this they differ from the standard private good that forms the basis of demand theory in economic textbooks, in which utility is known ex ante. Furthermore, creative goods are typically complex in composition and subject to heterogeneous and shifting consumer preferences. In response to this, models of linear optimization, rational addiction and Bayesian learning have been applied to better understand consumer decision-making, belief formation and revision. While valuable, these approaches do not lend themselves to forming verifiable hypotheses, for the critical reason that they bypass an essential aspect of creative products: namely, that of novelty. In contrast, computer science, and more specifically recommender theory, embraces creative products as a study object. Being items of online transactions, users of creative products share opinions on a massive scale and in doing so generate a flow of data-driven research. Not limited by the multiple assumptions made in economic theory, data analysts deal with this type of commodity in a less constrained way, incorporating the variety of item characteristics as well as their co-use by agents. They apply statistical techniques supporting big data, such as clustering, latent class analysis or singular value decomposition. This thesis draws from both disciplines, comparing models, methods and data sets. Based upon movie consumption, the work contrasts bottom-up versus top-down approaches, individual versus collective data, and distance measures versus utility-based comparisons. Rooted in Bayesian latent class models, a synthesis is formed, supported by random utility theory and recommender algorithm methods. The Bayesian approach makes explicit the experience-good nature of creative goods by formulating the prior uncertainty of users towards both movie features and preferences. The latent class method thus infers the heterogeneous aspect of preferences, while its dynamic variant, the latent Markov model, gets around one of the main paradoxes in studying creative products: how to analyse taste dynamics when confronted with a good that is novel at each decision point. Drawn mainly from movie-user-rating and movie-user-tag triplets collected from the MovieLens recommender system and made available as open data for research by the GroupLens research team, this study of preference pattern formation for creative goods is based on individual-level data.
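As a toy illustration of the latent class idea (deliberately tiny, with invented data; the thesis's Bayesian and Markov variants are far richer), a two-class EM for binary ratings looks like this:

```python
# Schematic two-class EM for binary movie ratings. Data and class
# count are invented for demonstration.
import numpy as np

rng = np.random.default_rng(1)
# 200 users x 6 movies; two taste groups with different like-probabilities
true_p = np.array([[0.9, 0.8, 0.7, 0.2, 0.1, 0.2],
                   [0.1, 0.2, 0.3, 0.8, 0.9, 0.8]])
z = rng.integers(0, 2, 200)
X = (rng.random((200, 6)) < true_p[z]).astype(float)

pi = np.array([0.5, 0.5])                  # class weights
p = rng.uniform(0.3, 0.7, (2, 6))          # per-class like-probabilities
for _ in range(50):
    # E-step: responsibility of each class for each user
    like = (p[None] ** X[:, None] * (1 - p)[None] ** (1 - X[:, None])).prod(2)
    resp = pi * like
    resp /= resp.sum(1, keepdims=True)
    # M-step: re-estimate weights and probabilities
    pi = resp.mean(0)
    p = resp.T @ X / resp.sum(0)[:, None]
print(np.round(p, 2))   # recovers the two taste profiles (classes may be swapped)
```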
APA, Harvard, Vancouver, ISO, and other styles
37

Allen, Adam L. "Modeling scattered intensities for multiple particle TIRM using Mie theory." [College Station, Tex.] : Texas A&M University, 2006. http://hdl.handle.net/1969.1/ETD-TAMU-1738.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Jukic, Boris. "Demand estimation techniques and investment incentives for the digital economy infrastructure : an econometric and simulation-based investigation." 1998. Digital version accessible at http://wwwlib.umi.com/cr/utexas/main.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Modin, Felicia. "An Analysis of Tit for Tat in the Hawk-Dove Game." Thesis, Mälardalens högskola, Akademin för utbildning, kultur och kommunikation, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-54599.

Full text
Abstract:
In Axelrod's tournaments of the Prisoner's Dilemma, carried out in the 1980s, a strategy called Tit for Tat was declared the winner, and it has since been thought of as the strategy to use to do as well as possible in different situations. In this thesis, we investigate whether Tit for Tat will still do as well if we change the game to the Hawk-Dove Game. This is done by comparing Tit for Tat to other strategies -- All C, All D, Joss and Random -- one at a time. First we analyse under which conditions each strategy will be an Evolutionarily Stable Strategy, then whether it is possible for a population of these two strategies to end up in a stable polymorphism, and finally, if we have a finite population instead of an infinite one, under which conditions selection will favour the fixation of each of the strategies. This leads to the conclusion that how well Tit for Tat does depends strongly on the conditions of the game, but in general, the more times a pair of individuals meet, and the higher the value of the resource compared to the cost of fighting, the better Tit for Tat will do.
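One natural transcription of the setup (standard textbook Hawk-Dove payoffs with invented parameter values; here Tit for Tat opens with Dove and mirrors the opponent's last move) is:

```python
# Sketch of an iterated Hawk-Dove match; payoffs are the textbook ones,
# parameter values invented for illustration.
def payoff(me, other, V=4.0, C=6.0):
    """One round of Hawk-Dove: 'H' fights, 'D' displays."""
    if me == "H":
        return (V - C) / 2 if other == "H" else V
    return 0.0 if other == "H" else V / 2

def iterated_score(strat_a, strat_b, rounds=50):
    """Total payoffs when two move-functions play repeatedly."""
    hist_a, hist_b, score_a, score_b = [], [], 0.0, 0.0
    for _ in range(rounds):
        a, b = strat_a(hist_b), strat_b(hist_a)
        score_a += payoff(a, b)
        score_b += payoff(b, a)
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

tit_for_tat = lambda opp: "D" if not opp else opp[-1]
all_hawk = lambda opp: "H"
print(iterated_score(tit_for_tat, all_hawk))   # TFT loses round 1, then mirrors
```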
APA, Harvard, Vancouver, ISO, and other styles
40

LEGGETT, DAVID NEAL. "INCOME TAXES AND CAPITAL ASSET PRICING THEORY: SOME EMPIRICAL EVIDENCE." Diss., The University of Arizona, 1985. http://hdl.handle.net/10150/187910.

Full text
Abstract:
Capital asset pricing theory assumes a no-tax, after-tax efficiency equivalence; i.e., that the efficient information produced in a no-tax analysis is equivalent to that produced in an after-tax analysis. However, if the effect of income taxes is not systematic throughout the market, the useful application of the theory may be impaired by this assumption. This research seeks to determine the effect of income tax imposition on the risk-return expectations of individual investors. If income tax imposition produces non-homogeneous after-tax investor risk-return expectations, then the efficiency equivalence hypothesis must be rejected. This hypothesis is evaluated by testing two alternative hypotheses: (1) the systematic riskiness of any individual security, both with and without adjustment for the imposition of income tax, is equivalent; and (2) the no-tax and after-tax expected risk-return rank order of each individual security is the same. An after-tax capital asset pricing model is derived. This model is based upon the premise that the current income tax laws, which require investors to share with the taxing government the uncertain returns from risky assets, allow investors to reduce the riskiness of those returns. The returns on investment assets derive from both capital gains and ordinary income distributions. However, the tax treatment of capital gains (losses) and ordinary income (dividends/interest) is not the same. This has an unsystematic effect on the risks and returns of investments; thus, the income tax effect is not likely to be homogeneous, as an efficiency equivalence hypothesis would require. The analysis focuses on the expected risk-return equivalencies for 465 firms, using ex-post data over a 10-year period. The findings imply that income tax effects on the market are not homogeneous: income tax differentials are apparent in both the observed beta terms and the risk-return rank-ordering of the securities.
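The dissertation's own model is not reproduced here, but an after-tax CAPM of the kind the abstract describes is usually written in the style of Brennan's (1970) extension, in which a tax factor attaches to dividend yields. The following LaTeX sketch shows that standard form; it should be read as illustrative, not as this dissertation's derivation.

\[
E[R_i] - R_f = \beta_i \bigl( E[R_m] - R_f - T(\delta_m - R_f) \bigr) + T(\delta_i - R_f),
\]

where $\delta_i$ and $\delta_m$ are the dividend yields of security $i$ and of the market, and $T$ is an aggregate tax factor reflecting the differential taxation of dividends and capital gains. Setting $T = 0$ recovers the standard no-tax CAPM, which is precisely the efficiency equivalence the abstract tests.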
APA, Harvard, Vancouver, ISO, and other styles
41

Clarke, Roger Anthony. "Data Surveillance: Theory, Practice & Policy." The Australian National University. Faculty of Engineering and Information Technology, 1997. http://thesis.anu.edu.au./public/adt-ANU20031112.124602.

Full text
Abstract:
Data surveillance is the systematic use of personal data systems in the investigation or monitoring of the actions or communications of one or more persons. This collection of papers was the basis for a supplication under Rule 28 of the ANU's Degree of Doctor of Philosophy Rules. The papers develop a body of theory that explains the nature, applications and impacts of the data processing technologies that support the investigation or monitoring of individuals and populations. Literature review and analysis are supplemented by reports of field work undertaken in both the United States and Australia, which tested the body of theory and enabled it to be articulated. The research programme established a firm theoretical foundation for further work. It provided insights into appropriate research methods, and delivered not only empirically based descriptive and explanatory data but also evaluative information relevant to policy decisions. The body of work as a whole provides a basis on which more mature research work is able to build.
APA, Harvard, Vancouver, ISO, and other styles
42

Jones, Christopher Robert. "Understanding and Improving Use-Tax Compliance: A Theory of Planned Behavior Approach." [Tampa, Fla.]: University of South Florida, 2009. http://purl.fcla.edu/usf/dc/et/SFE0003076.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Feinstein, Samuel G. "An investigation into reference-day risk-free metrics in the context of modern portfolio theory on the JSE." Master's thesis, Faculty of Commerce, 2019. http://hdl.handle.net/11427/31380.

Full text
Abstract:
Modern portfolio theory (MPT), asset pricing models and broader financial modelling depend upon the accuracy of input parameters: the more accurate the expected returns, standard deviations and correlations fed into MPT, the more efficient the selection of the optimal portfolio. These metrics are exposed to reference-day risk, the variation in input estimates caused by the choice of initial reference day in calculations. This paper examines whether a change in reference day, the day on which a metric is calculated, significantly affects estimates of risk-return metrics on the Johannesburg Stock Exchange (JSE). It then applies these findings to the asset allocation problem of constructing a maximum Sharpe ratio portfolio. The objective is to extend prior research by evaluating an alternative simulation method and broadening the range of tested metrics. This is achieved through the use of the Cholesky decomposition and a nonparametric bootstrapping procedure to generate reference-day risk-free estimates of average returns, standard deviations, correlations and betas. Furthermore, the paper applies the reference-day risk-free metrics to the construction of optimal multi-asset portfolios in the mean-variance framework. The findings suggest that, using a five-year period of monthly returns, the selection of a reference day materially affects risk-return metrics and the portfolio characteristics based upon them. The out-of-sample performance of portfolios optimised on different reference days varied by as much as 10%. Additionally, using traditional end-of-month data resulted in out-of-sample underperformance, overstated average returns, understated standard deviations and lower correlations between asset classes. Based on these findings, an alternative bootstrapping method for calculating reference-day risk-free metrics is proposed, which reduces the effect of reference-day risk; the resulting estimates are intended for portfolio construction, risk management and asset pricing. The results indicate that reference-day risk makes a material difference in portfolio construction.
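As a rough illustration of the bootstrap-then-optimise idea described above (a minimal Python sketch under invented data, not the thesis's procedure; the return distribution, asset count and risk-free rate are all assumptions), the code below resamples monthly returns with replacement, averages the estimated moments across resamples to damp the influence of any single reference day's draw, and forms the closed-form maximum Sharpe ratio (tangency) weights, proportional to the inverse covariance matrix applied to excess mean returns:

import numpy as np

rng = np.random.default_rng(0)

# Assumed stand-in for five years of monthly returns on 3 asset classes.
returns = rng.normal(0.008, 0.04, size=(60, 3))
rf = 0.005  # assumed monthly risk-free rate

def bootstrap_moments(returns, n_boot=1000):
    """Nonparametric bootstrap: average the mean vector and covariance
    matrix across resampled return histories."""
    n = returns.shape[0]
    mus, covs = [], []
    for _ in range(n_boot):
        sample = returns[rng.integers(0, n, size=n)]
        mus.append(sample.mean(axis=0))
        covs.append(np.cov(sample, rowvar=False))
    return np.mean(mus, axis=0), np.mean(covs, axis=0)

def max_sharpe_weights(mu, cov, rf):
    """Closed-form tangency portfolio: w proportional to inv(cov) @ (mu - rf);
    normalisation to sum 1 assumes net long exposure."""
    w = np.linalg.solve(cov, mu - rf)
    return w / w.sum()

mu, cov = bootstrap_moments(returns)
print(np.round(max_sharpe_weights(mu, cov, rf), 3))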
APA, Harvard, Vancouver, ISO, and other styles
44

Tye, Jesse Wayne. "Explorations of iron-iron hydrogenase active site models by experiment and theory." College Station, Tex.: Texas A&M University, 2006. http://hdl.handle.net/1969.1/ETD-TAMU-1014.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Sabbar, Ehsan H. "Defect and Island Nucleation in Materials: Kinetic Monte Carlo, Rate Equation Theory and Temperature Accelerated Dynamics (TAD) Simulations." University of Toledo / OhioLINK, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=toledo1544443201322287.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Tau, Baetsane Aaron. "Conservation laws in optimal control theory / Aaron Baetsane Tau." Thesis, 2005. http://hdl.handle.net/10394/11230.

Full text
Abstract:
We study in optimal control the important relation between invariance of the problem under a family of transformations and the existence of quantities preserved along the Pontryagin extremals. Several extensions of Noether's theorem are given, in a sense which enlarges the scope of its application. The dissertation extends the second Noether theorem to optimal control problems which are invariant under symmetries depending upon k arbitrary functions of the independent variable and their derivatives up to some order m. Furthermore, we look at conservation laws, i.e. quantities conserved along Euler-Lagrange extremals, which are obtained on the basis of Noether's theorem. Finally we obtain a generalization of Noether's theorem for optimal control problems. The generalization involves a one-parameter family of smooth maps which may depend also on the control, and a Lagrangian which is invariant up to the addition of an exact differential.
(M.Sc.) North-West University, Mafikeng Campus, 2005
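To fix ideas, a Noether-type conservation law in optimal control is commonly stated as follows (a sketch of the standard formulation found, e.g., in Torres's work on Noether's theorem for optimal control, rather than the dissertation's most general result): if the problem is invariant under a one-parameter family of transformations with infinitesimal generators $\tau$ (in time) and $\xi$ (in state), then along any Pontryagin extremal $(x(\cdot), u(\cdot), \psi(\cdot))$ with Hamiltonian $H$,

\[
\psi(t) \cdot \xi\bigl(t, x(t), u(t)\bigr) \;-\; H\bigl(t, x(t), u(t), \psi(t)\bigr)\,\tau\bigl(t, x(t), u(t)\bigr) \;=\; \text{const.}
\]

In the time-invariant case ($\tau \equiv 1$, $\xi \equiv 0$) this reduces to the familiar fact that the Hamiltonian is conserved along Pontryagin extremals of autonomous problems.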
APA, Harvard, Vancouver, ISO, and other styles
47

Lin, Dong-Yi, and 林東毅. "Chongxuan Philosophy in the Tang Dynasty: From Tao Theory to Mind–Nature Theory." Thesis, 2017. http://ndltd.ncl.edu.tw/handle/66197185103714937610.

Full text
Abstract:
Doctoral dissertation
National Chung Hsing University
Department of Chinese Literature
2016 (Republic of China calendar year 105)
Meng Wentong’s phrasing of “the Chongxuan (twofold mystery) school” initiated nearly 50 years of academic enthusiasm for the Chongxuan philosophy of the Tang dynasty. Currently, more than 100 monographs and articles devoted to the study of Chongxuan have been published. In particular, the publication of Lu Guolong’s Zhongguo Chongxuan Xue (Chongxuan Philosophy in China) comprehensively introduced Chongxuan philosophy to academia. However, in the roughly two decades since that book’s publication, academia has not produced a more exhaustive monograph on Chongxuan. To address this gap, the author of this study aims to reinitiate a discourse on the Chongxuan philosophy of the Tang dynasty, basing the discussion on the changes in Chongxuan philosophy and adopting the perspectives of Tao (the Way), Xing (nature), and Xin (mind) to scrutinize how Chongxuan philosophy shifted from Tao theory to Mind–Nature theory in the Tang dynasty. The completion of this study will enable Chongxuan researchers to examine the rise and development of Tang Chongxuan philosophy from a more refined perspective.
APA, Harvard, Vancouver, ISO, and other styles
48

Yang, Ying-ju, and 楊英珠. "The Study of Structure Theory on Tao-tzu." Thesis, 1996. http://ndltd.ncl.edu.tw/handle/21165544332117061887.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Lee, Moon-Hyuk, and 李文赫. "Research on Kim Sheng-tan's literature criticism theory." Thesis, 1999. http://ndltd.ncl.edu.tw/handle/96977813018882566879.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Wang, Yufen, and 王郁芬. "The Research of Laozi Tao Theory of Guodian Chu Slips." Thesis, 2012. http://ndltd.ncl.edu.tw/handle/52481026175909077770.

Full text
Abstract:
Master's thesis
Providence University
Department of Chinese Literature
2011 (Republic of China calendar year 100)
This dissertation researches the thought of the Guodian Chu Slips. The Guodian Chu Slips were excavated in today's Jingmen district of Hubei province, the original hinterland of Chu in the pre-Qin era. Chu is the cradle of Taoist philosophy: the political upheavals of the Spring and Autumn and Warring States periods, together with the fertile natural and human environment of Chu, shaped the development of Taoism. The unearthing of the Guodian Chu Slips overturned the view of Taoism held by many scholars. Most scholars who saw Confucianism and Taoism as opposed had agreed that Taoism was negative and criticized Confucian thought severely. After the Guodian Chu Slips were unearthed, however, their simple textual descriptions offered a new interpretative direction for those puzzling passages of the Laozi which could not previously be resolved. Above all, the Laozi of the Guodian slips presents a positively engaged outlook and coexists peacefully with Confucianism. By comparing the differences between the Guodian Laozi and the received version of the Laozi, we can observe the changes in Taoist thought over the course of history. The governing concept in the Guodian text is "to govern a country naturally rather than by force or penalty", whereas the received Laozi extends this to the individual in the state of "cultivating oneself by doing nothing". The Guodian Laozi, the earliest known version of the Laozi, can currently be regarded as the axis of the early history of Taoist thought. The bamboo slips are divided into parts A, B and C, with the last part connected to the Taiyi Shengshui; together these four parts constitute the Taoist motif of the Guodian Chu Slips, and each has its own theme. The central theme of part A is "the central meaning of Tao" and "governing a country naturally rather than by force or penalty"; of part B, "cultivating oneself"; and of part C, "governing a country well". The Taiyi Shengshui is the most distinctive part of the slips: with no received text for reference, it can be read only with the help of ancient commentary and comparison with other contemporaneous ancient books. In the course of the study, the author found that the Taiyi Shengshui offers the best interpretation of the Laozi's "Taoist nature". The findings suggest that the system of pre-Qin Taoism ran from the nature of the Taiyi Shengshui to part A's "central meaning of Tao" and "governing a country naturally rather than by force or penalty", and on to part B's "cultivating oneself", with part C's "governing a country well" playing a supplementary role. Though the Guodian text contains far fewer words than the silk manuscript version and the received Laozi, it forms a complete system of thought by itself, and the Taiyi Shengshui can be taken as the source of the essential nature of Taoism. The unearthing of the Guodian Chu Slips thus has its own value and significance. Viewed against the whole history of thought, it has its merits (e.g. the newly unearthed Taiyi Shengshui, and the breaking down of the misconceived opposition between Confucianism and Taoism), though its ideological value must still be weighed against the silk manuscript version and the received Laozi. The reason is that each of them has its own contribution in helping people today understand the existence of life.
APA, Harvard, Vancouver, ISO, and other styles
