
Dissertations / Theses on the topic 'Triage Principal'


Consult the top 17 dissertations / theses for your research on the topic 'Triage Principal.'


1

Marasco, Corena. "The Triage Principal: An Autoethnographic Tale of Leadership in a Catholic Turnaround School." Thesis, Loyola Marymount University, 2015. http://pqdtopen.proquest.com/#viewpdf?dispub=3689638.

Full text
Abstract:
Catholic schools are in need of innovative change. The problem lies in how to construct the elements of change to create viability for a school in the face of rapidly declining enrollment. Responding to this type of environment as an educational leader requires qualities and characteristics similar to those of first responders in a medical emergency, for which I coined the term triage principal. This autoethnographic research study was designed to answer three research questions: 1. As a new principal at Michael, the Archangel School (MAS), a Catholic school in danger of closing, what challenges did I experience? 2. As a new leader, how did I respond to the challenges to bring about change at MAS? 3. What did I learn from this first-year leadership experience? This autoethnographic study is constructed from my voice as a first-year, first-time principal, using several data sources: my blog, my archival field notes, and three interviews with archdiocesan leaders. Each of these data sources followed a data collection procedure that yielded overarching thematic patterns, which led to generalizations based on past experiences at MAS and my review of the literature. The weaving of the past and present of my life's leadership journey, in combination with the culture and the people that surrounded me during this study, has made me realize that I do have a story worth sharing, a story that can potentially help others who might find themselves seemingly lost and alone.
APA, Harvard, Vancouver, ISO, and other styles
2

Patuawa, Jacqueline Margaret. "Principal Voice: Triumphs, Trials and Training. The Experience of Beginning in Principalship from the Perspectives of Principals in Years 3 - 5." The University of Waikato, 2007. http://hdl.handle.net/10289/2273.

Full text
Abstract:
It is widely accepted that the quality of school leadership and school improvement are inextricably linked. It can therefore be said that investment in principal development is an investment in quality schools, and thus an investment in the future. This report describes a qualitative research project undertaken in 2006 to examine the experience of beginning in principalship in New Zealand, from the perspectives of principals now in their third to fifth year in the role. It seeks answers to the following questions: What training do those entering principalship receive prior to taking up the role? How are principals supported as they begin in the role? What support is available to them currently, beyond the induction period? What training and support do beginning principals consider effective? What else do they believe could be introduced to enhance current support and training? Twelve principals from a diverse range of school contexts were interviewed individually, and a focus group approach was then used to affirm and clarify emergent findings and to suggest a potential model for improved development. A review of the literature identified a series of stages that principals move through during their career and the importance of professional learning to support each career stage. It highlighted several strategies deemed effective in assisting the development of leadership within the stages identified. The literature concluded that, while there is an awareness of both the stages of leadership and the importance of targeted development to meet the needs of individuals throughout those stages, most learning remains organisationally rather than individually focussed, and there remains a lack of a planned, structured and synergistic approach to principal development. The area of greatest concern is suggested to be the stage at which principals are deemed to be effective.
The research findings showed that in the current New Zealand context there are several effective strategies enhancing principal professional learning. The report nevertheless concludes with several recommendations for strengthening and enhancing the status quo. Participants in the research suggested that many of the current initiatives remain isolated from each other and now need to be brought into a more robust and aligned framework. Those involved in the research perceive that beyond the induction period, currently eighteen months, there is a void in professional learning opportunities, and that principals struggle to get targeted feedback that allows them to identify their needs. They further suggested that greater preparation for principalship on appointment was required, and believed that a period shadowing an experienced colleague would be invaluable.
3

Guo, Jing. "Extending the Principal Stratification Method To Multi-Level Randomized Trials." Scholar Commons, 2010. https://scholarcommons.usf.edu/etd/1651.

Full text
Abstract:
The principal stratification method estimates a causal intervention effect by taking account of subjects' differences in participation, adherence or compliance. The method has mostly been used in randomized intervention trials with randomization at a single (individual) level, where subjects are randomly assigned to either the intervention or the control condition. In many scientific fields, however, randomized intervention trials are conducted at the group level instead of the individual level. In this so-called "two-level randomization," randomization is conducted at a group (second) level, above the individual level, but the outcome is often observed at the individual level within each group. Incorrect inferences may result from the causal modeling if, in a two-level randomized trial, one considers compliance only at the individual level while ignoring or incorrectly determining it at the group level. The principal stratification method thus needs further development to address this issue. To extend its application, this research developed a new methodology for causal inference in two-level intervention trials in which principal strata can be formed from both group-level and individual-level compliance. Built on the original principal stratification method, the new method incorporates a range of alternative approaches to assess causal effects on a population when data on exposure at the group level are incomplete or limited, as are data at the individual level. We use the Gatekeeper Training Trial as a motivating example and for illustration. The study focuses on how to examine the causal effect of the intervention for schools that varied in their level of adoption of the intervention program (Early-Adopter vs. Later-Adopter). In our case, the traditional exclusion restriction assumption of the principal stratification method no longer holds.
The results show that the intervention had a stronger impact on the Later-Adopter group than on the Early-Adopter group across all participating schools. These impacts were larger for later-trained schools than for earlier-trained schools. The study also shows that the intervention had different impacts on middle and high schools.
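As background to the single-level setting this dissertation extends, a minimal simulation sketch of the classic instrumental-variable estimator of the complier-average causal effect (CACE) under one-sided noncompliance; all parameters here are hypothetical and not taken from the dissertation:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Latent principal strata under one-sided noncompliance:
# compliers take the treatment only if assigned; never-takers never do.
complier = rng.random(n) < 0.6          # hypothetical 60% compliers
z = rng.integers(0, 2, n)               # randomized assignment
d = z * complier                        # treatment actually received

# Outcomes: the treatment helps compliers only (true effect = 2.0);
# strata may differ at baseline without biasing the ITT comparison.
y = 1.0 + 2.0 * d + 0.5 * complier + rng.normal(0, 1, n)

# The ITT effect is diluted by never-takers; the IV (Wald) estimator
# rescales it by the compliance rate to recover the CACE.
itt = y[z == 1].mean() - y[z == 0].mean()
compliance_rate = d[z == 1].mean() - d[z == 0].mean()
cace = itt / compliance_rate
print(round(itt, 2), round(compliance_rate, 2), round(cace, 2))
```

With these numbers the ITT estimate is close to 0.6 × 2.0 = 1.2, and dividing by the compliance rate recovers an estimate near the true complier effect of 2.0.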
4

Lou, Yiyue. "Principal stratification : applications and extensions in clinical trials with intermediate variables." Diss., University of Iowa, 2017. https://ir.uiowa.edu/etd/5961.

Full text
Abstract:
Randomized clinical trials (RCTs) are considered the "gold standard" for demonstrating a causal relationship between a treatment and an outcome, because complete randomization ensures that the only difference between the groups being compared is the treatment. The intention-to-treat (ITT) comparison has long been regarded as the preferred analytic approach for RCTs. However, if there exists an "intermediate" variable between the treatment and the outcome, and the analysis conditions on this intermediate, randomization breaks down and the ITT approach does not account properly for the intermediate. In this dissertation, we explore the principal stratification approach for dealing with intermediate variables, illustrate its applications in two different clinical trial settings, and extend the existing analytic approaches to address specific challenges in these settings. The first part of our work focuses on clinical endpoint bioequivalence (BE) studies with noncompliance and missing data. In clinical endpoint BE studies, the primary analysis for assessing equivalence between a generic and an innovator product is usually based on the observed per-protocol (PP) population (usually completers and compliers). The FDA Missing Data Working Group recently recommended using "causal estimands of primary interest." This PP analysis, however, is not generally causal, because the observed PP population is post-treatment, and conditioning on it may introduce selection bias. To date, no causal estimand has been proposed for equivalence assessment. We propose co-primary causal estimands to test equivalence by applying the principal stratification approach. We discuss, and verify by simulation, the causal assumptions under which the current PP estimator is unbiased for the primary principal stratum causal estimand, the "Survivor Average Causal Effect" (SACE).
We also propose tipping-point sensitivity analysis methods to assess the robustness of the current PP estimator for the SACE estimand when these causal assumptions are not met. Data from a clinical endpoint BE study are used to illustrate the proposed co-primary causal estimands and sensitivity analysis methods. Our work introduces a causal framework for equivalence assessment in clinical endpoint BE studies with noncompliance and missing data. The second part of this dissertation targets the use of principal stratification analysis approaches in a pragmatic randomized clinical trial, the Patient Activation after DXA Result Notification (PAADRN) study. PAADRN is a multi-center, pragmatic randomized clinical trial designed to improve bone health. Participants were randomly assigned either to an intervention group, with usual care augmented by a tailored patient-activation dual-energy X-ray absorptiometry (DXA) results letter accompanied by an educational brochure, or to a control group with usual care only. The primary analyses followed the standard ITT principle, which provided a valid estimate of the effect of intervention assignment. However, the findings might underestimate the effect of the intervention, because PAADRN might have no effect if the patient did not read, remember, and act on the letter. We apply principal stratification to evaluate the effectiveness of PAADRN for subgroups defined by the patient's recall of having received a DXA result letter, an intermediate outcome that is post-treatment. We perform simulation studies to compare principal score weighting methods with instrumental variable (IV) methods. We examine principal strata causal effects on three outcome measures regarding pharmacological treatment and bone health behaviors. Finally, we conduct sensitivity analyses to assess the effect of potential violations of relevant causal assumptions. Our work is an important addition to the primary findings based on ITT. It provides a deeper understanding of why the PAADRN intervention does (or does not) work for patients with different letter recall statuses, and sheds light on how the intervention could be improved.
5

Gamelin, Thomas. "Deux déesses pour un dieu. Des triades pour décrire des principes cosmologiques." Thesis, Lille 3, 2013. http://www.theses.fr/2013LIL30027.

Full text
Abstract:
In ancient Egyptian religion, the association of three deities to form a local triad is widespread. Comprising two gods (the father and the son) and one goddess (the mother), such triads follow a "family" pattern, like the well-known triad of Osiris, Isis and Horus. Alongside these classical triads, rarer groups are structured as one god and two goddesses, the second goddess, who is never a child goddess, replacing the divine child. This work studies such groups as they are represented in offering scenes carved in various Egyptian temples. What explains the presence of these two goddesses in those scenes? How are the relationships between the deities structured? Several types of structure are brought to light in this study. While some groups are a simple association of a main god with two local consorts, others reflect more developed reasoning and the theologians' intent to describe complex ideas of Egyptian thought. The triad of Elephantine (Khnum, Satet and Anuket) is probably the clearest example of this type of theological organisation: the three deities of the region control the flood of the Nile. The god commands the inundation and is helped by the two goddesses, one launching the flood waters while the other provokes the ebb. In several groups, the theologians distributed across two goddesses two complementary functions that combine to assist the principal god in his task. This complementarity of the female roles is only one of the many tools the priests used to represent and illustrate more clearly the universe around them.
6

Odondi, Lang'O. "Causal modelling of survival data with informative noncompliance." Thesis, University of Manchester, 2011. https://www.research.manchester.ac.uk/portal/en/theses/causal-modelling-of-survival-data-with-informative-noncompliance(74f40dc0-e5d1-46c0-ab2f-ac42a3425ac7).html.

Full text
Abstract:
Noncompliance with treatment allocation is likely to complicate estimation of causal effects in clinical trials. The ubiquitous, nonrandom phenomenon of noncompliance renders per-protocol and as-treated analyses, or even simple regression adjustments for noncompliance, inadequate for causal inference. For survival data, several specialist methods have been developed for when noncompliance is related to risk. The Causal Accelerated Life Model (CALM) allows time-dependent departures from randomized treatment in either arm and relates each observed event time to a potential event time that would have been observed if the control treatment had been given throughout the trial. Alternatively, the structural Proportional Hazards (C-Prophet) model accounts for all-or-nothing noncompliance in the treatment arm only, while the CHARM estimator allows time-dependent departures from randomized treatment by considering the survival outcome as a sequence of binary outcomes, providing an 'approximate' overall hazard ratio estimate adjusted for compliance. The problem of efficacy estimation is compounded in trials with two active treatments (additional noncompliance), where the ITT estimate is a biased estimator of the true hazard ratio even under the assumption of homogeneous treatment effects. Using plausible arm-specific predictors of compliance, principal stratification methods can be applied to obtain principal effects for each stratum. The present work applies the above methods to data from the Esprit trial, which was conducted to ascertain whether or not unopposed oestrogen (hormone replacement therapy, HRT) reduced the risk of further cardiac events in postmenopausal women who survive a first myocardial infarction. We use statistically designed simulation studies to evaluate the performance of these methods in terms of bias and 95% confidence interval coverage.
We also apply a principal stratification method, originally developed for binary data, to adjust for noncompliance in both treatment arms in survival analysis, in terms of the causal risk ratio. In a Bayesian framework, we apply the method to the Esprit data to account for noncompliance in both treatment arms and estimate principal effects. We use statistically designed simulation studies to evaluate the performance of the method in terms of bias in the causal effect estimates for each stratum. ITT analysis of the Esprit data showed that the effect of taking HRT tablets was not statistically significantly different from placebo for either the all-cause mortality or the myocardial reinfarction outcome. The average compliance rate for HRT treatment was 43%, and compliance decreased as the study progressed. The CHARM and C-Prophet methods produced similar results, but CALM performed best for Esprit, suggesting HRT would reduce the risk of death by 50%. Simulation studies comparing the methods suggested that while both the C-Prophet and CHARM methods performed equally well in terms of bias, the CALM method performed best in terms of both bias and 95% confidence interval coverage, albeit with the largest RMSE. The principal stratification method failed for the Esprit study, possibly owing to the strong distributional assumption implicit in the method and the lack of adequate compliance information in the data, which produced large 95% credible intervals for the principal effect estimates. For a moderate value of the sensitivity parameter, the principal stratification results suggested that compliance with HRT tablets relative to placebo would reduce the risk of mortality by 43% among the most compliant. Simulation studies on the performance of this method showed narrower mean 95% credible intervals for the causal risk ratio estimates in this subgroup compared with other strata. However, the results were sensitive to the unknown sensitivity parameter.
7

McFaddin, Rita Jane. "Combinatorics for the Third Grade Classroom." Digital Commons @ East Tennessee State University, 2006. https://dc.etsu.edu/etd/2227.

Full text
Abstract:
After becoming interested in the beauty of numbers and the intricate patterns of their behavior, the author concluded that it would be a good idea to make the subject available to students earlier in their educational experience. In this thesis, the author developed four units in combinatorics, namely Fundamental Principles, Permutations, Combinations, and Pascal's Triangle, appropriate for the third-grade level.
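The four units named in the abstract rest on elementary counting identities; a small illustrative sketch (the examples are ours, not the thesis's) relating the fundamental counting principle, permutations, combinations, and Pascal's triangle:

```python
from math import comb, perm

# Fundamental counting principle: 3 shirts x 4 pants = 12 outfits.
outfits = 3 * 4

# Permutations: ordered arrangements of 3 students chosen from 5.
p = perm(5, 3)            # 5 * 4 * 3 = 60

# Combinations: unordered selections; divide out the 3! orderings.
c = comb(5, 3)            # 60 / 3! = 10

# Pascal's triangle: row n holds C(n, 0) .. C(n, n), and each entry
# is the sum of the two entries above it.
row5 = [comb(5, k) for k in range(6)]
print(outfits, p, c, row5)  # 12 60 10 [1, 5, 10, 10, 5, 1]
```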
8

Karray-Meziou, Fatma. "Approche non standard du développement d'un opérateur symétrique en fonctionnelles propres." Paris 6, 2007. http://www.theses.fr/2007PA066152.

Full text
Abstract:
Gelfand and his collaborators introduced Hilbert triads in order to construct complete families of eigenfunctionals for self-adjoint (not necessarily bounded) operators on a Hilbert space. This work revisits the study of these functionals using the tools of nonstandard analysis. The space under study is thereby embedded in a space that "inherits" many of the properties of finite-dimensional spaces, where in particular the traditional integrals over spectral measures are replaced by hyperfinite sums. The Loeb integral then provides a powerful means of turning these hyperfinite sums into easily manageable integrals.
9

Laval, Pierre-François. "La compétence ratione temporis des juridictions internationales." Thesis, Bordeaux 4, 2011. http://www.theses.fr/2011BOR40030.

Full text
Abstract:
"Jurisdiction ratione temporis" is an expression that originated in case law and whose meaning varies with the context in which it is used. As it appears in the decisions of international courts, it denotes first the period during which the court is empowered to exercise jurisdictional power, which is associated with the period of validity of the State's acceptance of jurisdiction. Temporal jurisdiction also denotes the temporal scope of the power to judge, since States often specify the categories of disputes ratione temporis for which they may be brought before a court. On this basis, legal doctrine sees temporal jurisdiction as merely a variable notion of no real use for the analysis of positive law, preferring to speak either of jurisdiction ratione personae whenever the State's consent to submit to jurisdiction is at issue, or of jurisdiction ratione materiae for the categories of disputes a court may hear. The study of international case law, however, calls the soundness of this analysis into question. While temporal jurisdiction can be seen as an element identifying the court's sphere of competence, and thus an aspect of its jurisdiction ratione materiae, the practical resolution of the question of the duration of the power to judge cannot be understood through the concept of jurisdiction ratione personae. In the way courts apply the jurisdictional title ratione temporis, it appears not merely as the act by which States consent to submit to jurisdiction, but first of all as the title on which litigants' actions are founded. For this reason, explaining the solutions adopted by international courts cannot dispense with a concept specific to the duration of that power: jurisdiction ratione temporis.
10

Escosteguy, Silvana Maria Ramos. "O Processo de Escolha de Dirigentes Escolares e Seus Reflexos na Gestão Municipal de Novo Hamburgo/RS (2001-2009)." Universidade do Vale do Rio dos Sinos, 2011. http://www.repositorio.jesuita.org.br/handle/UNISINOS/4254.

Full text
Abstract:
This dissertation addresses questions of school management from a democratic perspective. It investigates the school principals of the municipal public school network of Novo Hamburgo/RS, taking into account the different ways principals come to hold the position. It examines the different forms of appointment to the post of school principal adopted in this city between 2001 and 2009: appointment by the public authorities, selection from a three-candidate shortlist (lista tríplice), and the proposal for direct election. It traces the history of school administration since 1930, grounded in the classical theoretical perspective and in comparisons with foreign education systems, mainly that of the United States, up to the present day, when school administration came to be called School Management and to be understood in light of Brazil's economic, political, social, cultural and technological transformations. It interprets School Management as a practice not limited to technical-bureaucratic functions but as the beginning of democratization and collective construction, through participation and citizenship, enabling the development of a transformative consciousness of man in the world. The study also discusses the different administrative models (patrimonialist, bureaucratic, managerialist and democratic), asking whether the forms of appointment to the position of school principal in the municipal public network of Novo Hamburgo/RS affected the course of municipal education management, and weighing the arguments for and against appointment, the shortlist, and direct election. The research is qualitative and uses techniques such as semi-structured interviews, questionnaires and document analysis, presenting some of the democratic advances and setbacks in the history of Brazilian education and the movement for democratic school management in Rio Grande do Sul. It concludes that the municipal school network cannot be democratized simply by democratizing its organizational and management structures, nor merely through the direct election of school principals, but rather through civic and sociocultural forms of intervention involving the whole school community.
11

Riou, Jérémie. "Multiplicité des tests, et calculs de taille d'échantillon en recherche clinique." Thesis, Bordeaux 2, 2013. http://www.theses.fr/2013BOR22066/document.

Full text
Abstract:
This work addresses the problems inherent in multiple testing in the context of clinical trials. A growing number of clinical trials aim to observe a multifactorial effect of a product and therefore use multiple co-primary endpoints. The study is declared significant if and only if at least r null hypotheses are rejected among the m null hypotheses tested. In this context, statisticians must account for the multiplicity induced by this practice. We first devoted our work to an exact correction for data analysis and sample size computation when r = 1. We then worked on sample size computation for any value of r, when single-step or stepwise procedures are used. Finally, we addressed the correction of the significance level induced by the search for an optimal coding of a continuous explanatory variable in a generalized linear model.
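The at-least-r-of-m decision rule described in the abstract can be explored by simulation; a minimal sketch (hypothetical parameters, not the thesis's exact correction) estimating the chance that a study "wins" by requiring r of m significant endpoints under the global null, for independent endpoints:

```python
import numpy as np
from math import comb

rng = np.random.default_rng(1)
m, r, alpha = 3, 2, 0.05        # hypothetical: 3 endpoints, success = >= 2 rejections
n_sim = 200_000

# Under the global null, each p-value is Uniform(0, 1), so each
# unadjusted test rejects with probability alpha, independently here.
p = rng.random((n_sim, m))
wins = (p < alpha).sum(axis=1) >= r
fwer_sim = wins.mean()

# Closed form: the number of rejections is Binomial(m, alpha).
fwer_exact = sum(comb(m, k) * alpha**k * (1 - alpha)**(m - k)
                 for k in range(r, m + 1))
print(round(fwer_sim, 4), round(fwer_exact, 5))  # exact value: 0.00725
```

Because requiring 2 of 3 endpoints makes a chance success far rarer than 5%, the per-endpoint level can in principle be relaxed, which is the kind of exact correction the thesis pursues.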
12

Cantuarias-Villessuzanne, Carmen Amalia. "La mesure économique de la dépréciation du capital minier au Pérou." Thesis, Bordeaux 4, 2012. http://www.theses.fr/2012BOR40009/document.

Full text
Abstract:
Since the 2000s, Peru, a country extremely rich in minerals, has experienced strong economic growth. Is Peru condemned to the resource curse because of its mineral wealth? For now this is not the case; however, we point up a strong dependence upon the mining sector. The main question relates to the sustainability of the mining industry, and the mineral depletion rate is the fundamental indicator for assessing the situation. Many estimation methods are available; our microeconomic analysis based on the Hotelling rule provides a value of around 7 % of GDP for the period 2000–2008, double the World Bank's estimate. We recommend that mineral depletion be taken into account when calculating traditional macroeconomic indicators; doing so highlights the overestimation of economic growth. According to the Hartwick rule, it is clear that Peruvian development is not sustainable: mining revenues do not offset the mineral depletion and are not reinvested in the development of the country. The solution should therefore be to tax mining companies at a level equivalent to that of depletion and, with the new income, to create a natural resource fund. Saving only 8 % of the mineral depletion would suffice to generate a sustainable rent for future generations. In addition, the creation of a natural resource fund would reduce macroeconomic instability and enforce better governance.
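For readers unfamiliar with the valuation behind the 7 % figure, a textbook sketch of the Hotelling-rule approach (my notation; the thesis's exact specification may differ): the scarcity rent on the resource grows at the interest rate, and depletion is valued as that rent earned on the extracted quantity.

```latex
% Hotelling rule: the unit rent (price p_t minus marginal cost c_t)
% of a non-renewable resource grows at the interest rate r.
p_t - c_t = (p_0 - c_0)\, e^{rt}
% Depletion in year t is the rent on the quantity q_t extracted.
D_t = (p_t - c_t)\, q_t
```

Summing D_t over 2000–2008 and dividing by GDP yields the kind of depletion ratio the abstract reports.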
13

Patuawa, Jacqui. "Principal voice trials, triumphs and training : the experience of beginning in principalship from the perspectives of principals in years 3-5 /." 2006. http://adt.waikato.ac.nz/public/adt-uow20070127.165825/index.html.

Full text
14

Smith, Katherine M. "Principal subgroups of the nonarithmetic Hecke triangle groups and Galois orbits of algebraic curves /." 2000. http://hdl.handle.net/1957/16828.

Full text
15

Hwang, Susan. "Similarity-principle-based machine learning method for clinical trials and beyond." Thesis, 2020. https://hdl.handle.net/2144/41983.

Full text
Abstract:
The control of type-I error is a focal point for clinical trials. On the other hand, it is also critical to be able to detect a truly efficacious treatment in a clinical trial. With recent successes in supervised learning (classification and regression problems), artificial intelligence (AI) and machine learning (ML) can play a vital role in identifying efficacious new treatments. However, the high performance of AI methods, particularly deep learning neural networks, requires a much larger dataset than those we commonly see in clinical trials. It is desirable to develop a new ML method that performs well with a small sample size (from 20 to 200) and offers advantages over classic statistical models and the most relevant ML methods. In this dissertation, we propose a Similarity-Principle-Based Machine Learning (SBML) method based on the similarity principle, which assumes that identical or similar subjects should behave in a similar manner. The SBML method introduces attribute-scaling factors at the training stage so that the relative importance of different attributes can be objectively determined in the similarity measures. In addition, a gradient method is used during training to update the attribute-scaling factors. The method is novel as far as we know. We first evaluate SBML for continuous outcomes, especially when the sample size is small, and investigate the effects of various tuning parameters on the performance of SBML. Simulations show that SBML achieves better predictions, in terms of mean squared errors or misclassification error rates, in the various situations under consideration than conventional statistical methods, such as full linear models, optimal or ridge regressions and mixed effect models, as well as ML methods including kernel and decision tree methods. We also extend SBML and show how it can be flexibly applied to binary outcomes.
Through numerical and simulation studies, we confirm that SBML performs well compared to classical statistical methods, even when the sample size is small and in the presence of unmeasured predictors and/or noise variables. Although SBML performs well with small sample sizes, it may not be computationally efficient for large sample sizes. Therefore, we propose Recursive SBML (RSBML), which can save computing time, with some tradeoffs for accuracy. In this sense, RSBML can also be viewed as a combination of unsupervised learning (dimension reduction) and supervised learning (prediction). Recursive learning resembles the natural human way of learning. It is an efficient way of learning from complicated large data. Based on the simulation results, RSBML performs much faster than SBML with reasonable accuracy for large sample sizes.
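The similarity principle described above can be sketched in a few lines: predictions are similarity-weighted averages of training outcomes, and the attribute-scaling factors are learned by gradient descent on a training loss. This is a hypothetical illustration with my own names and a crude numerical gradient on leave-one-out error; the dissertation's actual training procedure will differ.

```python
import numpy as np

def predict(X, y, x_new, w):
    """Similarity-weighted prediction: subjects closer to x_new
    (under attribute scalings w) contribute more to the estimate."""
    d2 = ((X - x_new) ** 2 * w).sum(axis=1)   # scaled squared distances
    s = np.exp(-d2)                           # similarity scores
    return (s * y).sum() / s.sum()

def fit_scalings(X, y, steps=200, lr=0.05):
    """Learn attribute-scaling factors by (numerical) gradient descent
    on leave-one-out squared error -- illustrative only."""
    w = np.ones(X.shape[1])
    idx = np.arange(len(y))

    def loss(w):
        errs = [y[i] - predict(X[idx != i], y[idx != i], X[i], w) for i in idx]
        return np.mean(np.square(errs))

    for _ in range(steps):
        g = np.zeros_like(w)
        for j in range(len(w)):
            e = np.zeros_like(w)
            e[j] = 1e-4
            g[j] = (loss(w + e) - loss(w - e)) / 2e-4
        w = np.maximum(w - lr * g, 1e-8)      # keep scalings non-negative
    return w
```

On a toy dataset where only the first attribute drives the outcome, the learned scaling for that attribute grows while the noise attribute's scaling shrinks, which is the behavior the abstract describes.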
16

Aumer-Ryan, Paul R. "Information triage : dual-process theory in credibility judgments of web-based resources." Thesis, 2010. http://hdl.handle.net/2152/ETD-UT-2010-05-868.

Full text
Abstract:
This dissertation describes the credibility judgment process using social psychological theories of dual-processing, which state that information processing outcomes are the result of an interaction “between a fast, associative information-processing mode based on low-effort heuristics, and a slow, rule-based information processing mode based on high-effort systematic reasoning” (Chaiken & Trope, 1999, p. ix). Further, this interaction is illustrated by describing credibility judgments as a choice between examining easily identified peripheral cues (the messenger) and content (the message), leading to different evaluations in different settings. The focus here is on the domain of the Web, where ambiguous authorship, peer-produced content, and the lack of gatekeepers create an environment where credibility judgments are a necessary routine in triaging information. It reviews the relevant literature on existing credibility frameworks and the component factors that affect credibility judgments. The online encyclopedia (instantiated as Wikipedia and Encyclopedia Britannica) is then proposed as a canonical form to examine the credibility judgment process. The two main claims advanced here are (1) that information sources are composed of both message (the content) and messenger (the way the message is delivered), and that the messenger impacts perceived credibility; and (2) that perceived credibility is tempered by information need (individual engagement). These claims were framed by the models proposed by Wathen & Burkell (2002) and Chaiken (1980) to forward a composite dual process theory of credibility judgments, which was tested by two experimental studies. The independent variables of interest were: media format (print or electronic); reputation of source (Wikipedia or Britannica); and the participant’s individual involvement in the research task (high or low).
The results of these studies encourage a more nuanced understanding of the credibility judgment process by framing it as a dual-process model, and showing that certain mediating variables can affect the relative use of low-effort evaluation and high-effort reasoning when forming a perception of credibility. Finally, the results support the importance of messenger effects on perceived credibility, implying that credibility judgments, especially in the online environment, and especially in cases of low individual engagement, are based on peripheral cues rather than an informed evaluation of content.
17

Kohout, David. "Právněhistorické aspekty trestání nacistických zločinců na pozadí procesu s Adolfem Eichmannem." Doctoral thesis, 2013. http://www.nusl.cz/ntk/nusl-327184.

Full text
Abstract:
Dissertation Thesis, David Kohout: Legal-Historical Aspects of Punishment of Nazi Criminals on the Background of the Adolf Eichmann Trial. This dissertation on the topic of "Legal-Historical Aspects of Punishment of Nazi Criminals on the Background of the Adolf Eichmann Trial" seeks to analyze the main approaches to the prosecution and punishment of Nazi crimes. The trial of Adolf Eichmann in Jerusalem in 1961–1962 was chosen as the connecting thread of the whole work, not only because of the remarkableness of the trial itself but also because it was in many ways illustrative of the legal development up to that time. Additionally, many commentators attribute to this trial a great impact on the renewal of interest in the prosecution of former Nazis who were implicated in crimes committed up to 1945 and who remained at large after the end of the war. This thesis therefore goes beyond the Eichmann trial and focuses on its broader context in both the material and the personal sense (the text often refers to the prosecution of close collaborators of Adolf Eichmann). In the opening chapters, however, this dissertation starts with events that go far back in time before the Adolf Eichmann trial. This is for the...