Follow this link to see other types of publications on the topic: Post-hoc.

Theses on the topic "Post-hoc"

Create an accurate citation in APA, MLA, Chicago, Harvard, and other styles

Consult the top 50 theses for your research on the topic "Post-hoc".

Next to each source in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse theses from a wide variety of disciplines and organise your bibliography correctly.

1

Jeyasothy, Adulam. "Génération d'explications post-hoc personnalisées." Electronic Thesis or Diss., Sorbonne université, 2024. http://www.theses.fr/2024SORUS027.

Full text
Abstract
This thesis is in the field of eXplainable AI (XAI). We focus on post-hoc interpretability methods that aim to explain to a user the prediction made by a trained decision model for a specific data point. To increase the interpretability of explanations, this thesis studies the integration of user knowledge into these methods, and thus aims to improve the understandability of the explanation by generating personalized explanations tailored to each user. To this end, we propose a general formalism that explicitly integrates knowledge via a new criterion in the interpretability objectives. This formalism is then instantiated for different types of knowledge and different types of explanations, particularly counterfactual examples, leading to several algorithms (KICE, Knowledge Integration in Counterfactual Explanation; rKICE, its variant including knowledge expressed as rules; and KISM, Knowledge Integration in Surrogate Models). The question of aggregating classical quality constraints with knowledge-compatibility constraints is also studied, and we propose to use Gödel's integral as an aggregation operator. Finally, we discuss the difficulty of generating a single explanation suitable for all types of users and the notion of diversity in explanations.
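The core idea described in this abstract, adding a knowledge-compatibility criterion to the usual counterfactual objective, can be made concrete with a small sketch. The code below is our illustration, not the KICE algorithm itself; the classifier, the toy data, and the "frozen feature" user knowledge are all assumed for the example.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Toy black-box classifier and an instance whose prediction we want to flip.
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X, y)
x0 = X[0]
target = 1 - clf.predict(x0.reshape(1, -1))[0]

# Hypothetical user knowledge: feature 2 should not be acted upon.
frozen = [2]

def cost(x_cf):
    # Classical quality criterion (proximity) plus a knowledge-compatibility penalty.
    proximity = np.linalg.norm(x_cf - x0, ord=1)
    knowledge_penalty = 10.0 * np.abs(x_cf[frozen] - x0[frozen]).sum()
    return proximity + knowledge_penalty

# Naive random search: sample perturbations, keep those reaching the target
# class, and return the one with the lowest combined cost.
rng = np.random.default_rng(0)
candidates = x0 + rng.normal(scale=0.5, size=(5000, x0.size))
valid = candidates[clf.predict(candidates) == target]
best = min(valid, key=cost) if len(valid) else None
print("counterfactual found:", best)
```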
2

Sebyhed, Hugo, and Emma Gunnarsson. "The Impotency of Post Hoc Power." Thesis, Uppsala universitet, Statistiska institutionen, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-433274.

Full text
Abstract
In this thesis, we hope to dispel some confusion regarding so-called post hoc power, i.e. power computed under the assumption that the estimated sample effect size equals the population effect size. Previous research has shown that post hoc power is a function of the p-value, making it redundant as a tool of analysis. We go further, arguing that it should never be reported, since it is a source of confusion and potentially harmful incentives. We also conduct a Monte Carlo simulation to illustrate our points of view. Previous research is confirmed by the results of this study.
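The one-to-one relationship between post hoc power and the p-value that the authors describe can be checked numerically. The sketch below is ours, not the thesis code; it assumes a two-sided, equal-variance, equal-n two-sample t-test and plugs the observed effect back in as if it were the true effect, so the resulting "power" is fully determined by the observed t statistic (and hence by the p-value).

```python
import numpy as np
from scipy import stats

def post_hoc_power(t_stat, n_per_group, alpha=0.05):
    """Power of a two-sided two-sample t-test, computed by treating the
    observed effect size as if it were the true effect size."""
    df = 2 * n_per_group - 2
    t_crit = stats.t.ppf(1 - alpha / 2, df)
    # Plugging the observed effect into the noncentrality parameter gives nc = t_stat.
    nc = t_stat
    return (1 - stats.nct.cdf(t_crit, df, nc)) + stats.nct.cdf(-t_crit, df, nc)

rng = np.random.default_rng(0)
n = 30
for _ in range(3):
    x, y = rng.normal(0.0, 1, n), rng.normal(0.3, 1, n)
    t_stat, p_val = stats.ttest_ind(x, y)
    print(f"p = {p_val:.3f}  ->  post hoc power = {post_hoc_power(abs(t_stat), n):.3f}")
```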
3

Galoob, Robert Paul. "Post hoc propter hoc: The impact of martyrdom on the development of Hasidut Ashkenaz." Thesis, Graduate Theological Union, 2017. http://pqdtopen.proquest.com/#viewpdf?dispub=10646811.

Full text
Abstract
This dissertation explores the close literary, thematic and linguistic relationships between The Hebrew Chronicles of the First Crusade and the later pietistic text Sefer Hasidim. Despite a long-standing tendency to view the Jewish martyrdom of 1096 and the development of German pietism (Hasidut Ashkenaz) as unrelated, upon closer scrutiny we find strong ties between the two texts. Sefer Hasidim, the most well-known pietistic text, contains dozens of martyrological stories and references that share similar language, themes and contexts with the crusade chronicles. Indeed, rather than standing alone, unrelated to the First Crusade literature, we find tales of martyrdom that closely resemble those in the First Crusade narratives. Sefer Hasidim also contains numerous statements that indicate the primacy of martyrdom within the hierarchy of the pietistic belief system, while other martyrological references function as prooftext for the traditional pietistic themes distilled by Ivan Marcus and Haym Soloveitchik. The extent to which martyrological themes are integrated into the belief system articulated in Sefer Hasidim indicates that the martyrdom of the First Crusade should be viewed as formative to the development of Hasidut Ashkenaz. A close reading of Sefer Hasidim conclusively demonstrates this premise. Moreover, a similar analysis of the crusade chronicles reveals a wide range of martyrological tales described in quintessential pietistic terms; expressions of the will of God, the fear of God, and the pietistic preference for life in the hereafter are found throughout the martyrological text.
When reading these two diverse texts side by side, we find substantive elements of a common world view spanning the period of the First Crusade through the appearance of Sefer Hasidim. This allows us to understand each text through a new lens; the crusade chronicles now appear to be an early articulation of pietistic thought, while the later pietistic text now reads in part as a martyrological document of great significance.
4

Durand, Guillermo. "Tests multiples et bornes post hoc pour des données hétérogènes." Thesis, Sorbonne université, 2018. http://www.theses.fr/2018SORUS289/document.

Full text
Abstract
This manuscript presents my contributions in three areas of multiple testing where data heterogeneity can be exploited to better detect signal while controlling false positives: p-value weighting, discrete tests, and post hoc inference. First, a new class of data-driven weighting procedures, incorporating group structure and true null proportion estimators, is defined, and its False Discovery Rate (FDR) control is proven asymptotically. This procedure also achieves power optimality under some conditions on the proportion estimators. Second, new step-up and step-down procedures, tailored for discrete tests under independence, are designed to control the FDR for arbitrary null marginal distributions of the p-values. Finally, new confidence bounds for post hoc inference (called post hoc bounds), tailored for the case where the signal is localized, are studied, and the associated optimal post hoc bounds are computed with a simple algorithm.
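As background for the post hoc bounds studied in this thesis, here is a minimal sketch of the classical Simes-based bound of Goeman and Solari (valid when the Simes inequality holds, e.g. for independent p-values); it is an illustration of the general concept, not of the new reference families proposed in the thesis. For any user-selected set of hypotheses, it returns an upper confidence bound on the number of false positives the selection contains.

```python
import numpy as np

def simes_post_hoc_bound(p_all, selected, alpha=0.05):
    """Simes-based post hoc bound: an upper (1 - alpha) confidence bound,
    valid simultaneously over all selections, on the number of false
    positives among the selected hypotheses."""
    m = len(p_all)
    p_sel = np.asarray(p_all)[selected]
    s = len(p_sel)
    # For each k, count selected p-values above the Simes threshold alpha * k / m.
    bounds = [np.sum(p_sel > alpha * k / m) + k - 1 for k in range(1, m + 1)]
    return int(min(min(bounds), s))

rng = np.random.default_rng(1)
p = np.concatenate([rng.uniform(0, 1, 90), rng.beta(0.1, 5, 10)])  # 10 true signals
selected = np.where(p < 0.1)[0]            # an arbitrary, data-driven selection
v_max = simes_post_hoc_bound(p, selected, alpha=0.05)
print(f"{len(selected)} hypotheses selected, at most {v_max} false positives (95% confidence)")
```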
5

Laugel, Thibault. "Interprétabilité locale post-hoc des modèles de classification "boites noires"." Electronic Thesis or Diss., Sorbonne université, 2020. http://www.theses.fr/2020SORUS215.

Full text
Abstract
This thesis focuses on the field of XAI (eXplainable AI), and more particularly on the local post-hoc interpretability paradigm, that is, the generation of explanations for a single prediction of a trained classifier. In particular, we study a fully agnostic context, meaning that the explanation is generated without using any knowledge about the classifier (treated as a black box) or the data used to train it. In this thesis, we identify several issues that can arise in this context and that may be harmful to interpretability. We propose to study each of these issues and propose novel criteria and approaches to detect and characterize them. The three issues we focus on are: the risk of generating explanations that are out of distribution; the risk of generating explanations that cannot be associated with any ground-truth instance; and the risk of generating explanations that are not local enough. These risks are studied through two specific categories of interpretability approaches: counterfactual explanations and local surrogate models.
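The local surrogate paradigm examined in this thesis can be sketched in a few lines. The code below is a generic LIME-style illustration on assumed toy data, not the thesis's diagnostic criteria: perturb the instance, query the black box, and fit a distance-weighted linear model whose coefficients serve as the local explanation.

```python
import numpy as np
from sklearn.datasets import make_moons
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import Ridge

# Black-box classifier, accessed only through its prediction function.
X, y = make_moons(n_samples=400, noise=0.2, random_state=0)
black_box = GradientBoostingClassifier(random_state=0).fit(X, y)
x0 = X[0]                                   # instance whose prediction we explain

# Sample perturbations around x0 and label them with the black box.
rng = np.random.default_rng(0)
Z = x0 + rng.normal(scale=0.3, size=(1000, 2))
pz = black_box.predict_proba(Z)[:, 1]
weights = np.exp(-np.linalg.norm(Z - x0, axis=1) ** 2 / 0.5)  # locality kernel

# Fit an interpretable (linear) surrogate of the black box around x0.
surrogate = Ridge(alpha=1.0).fit(Z, pz, sample_weight=weights)
print("local feature influences:", surrogate.coef_)
```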
6

Radulovic, Nedeljko. "Post-hoc Explainable AI for Black Box Models on Tabular Data." Electronic Thesis or Diss., Institut polytechnique de Paris, 2023. http://www.theses.fr/2023IPPAT028.

Full text
Abstract
Current state-of-the-art Artificial Intelligence (AI) models have proven very successful in solving various tasks, such as classification, regression, Natural Language Processing (NLP), and image processing. The resources we have at hand today allow us to train very complex AI models to solve problems in almost any field: medicine, finance, justice, transportation, forecasting, etc. With the popularity and widespread use of AI models, the need to ensure trust in them has also grown. As complex as they are today, these AI models are impossible for humans to interpret and understand. In this thesis, we focus on a specific area of research, Explainable Artificial Intelligence (xAI), which aims to provide approaches to interpret complex AI models and explain their decisions. We present two approaches, STACI and BELLA, which focus on classification and regression tasks, respectively, for tabular data. Both methods are deterministic, model-agnostic, post-hoc approaches, which means that they can be applied to any black-box model after its creation. In this way, interpretability provides added value without the need to compromise on the black-box model's performance. Our methods provide accurate, simple and general interpretations of both the whole black-box model and its individual predictions. We confirmed their high performance through extensive experiments and a user study.
7

Lowman, Lisa. "A post-hoc assessment of the Assiniboine-La Salle River Diversion project." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2001. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp05/MQ62785.pdf.

Full text
8

Marchal, Cynthie. "Post-hoc prescience: retrospective reasoning and judgment among witnesses of interpersonal aggression." Doctoral thesis, Universite Libre de Bruxelles, 2011. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/209818.

Full text
Abstract
When judging interpersonal aggression, witnesses are usually expected to rationally consider, based on the evidence they have, what another reasonable person could (or should) have thought, known and done. However, their analysis may be affected by judgment biases and personal motivations. These evaluative and retrospective biases, as well as the ascription of blame, are the main interests of this research. More specifically, we investigated the consequences of witnesses being prone to the hindsight bias, a common bias that gives individuals the feeling that they would have been able to predict past events, when in fact this is not the case. This process may have important effects on the victim, who "should have known" that an aggression would happen to him or her. In this dissertation, we examine the moderators of this bias and the role of the communication context in which it develops. We hypothesized that the communication context might affect the perspective that is taken on the event of interpersonal aggression and the perceived distance towards it. We also expected that the hindsight bias and victim blame would decrease when the psychological distance towards the event was reduced (i.e. perceived temporal distance and perceived proximity with the victim's fate). In the same vein, we expected the aggressor to be more derogated in this condition. The first four studies were designed to investigate the role of communication goals about the aggression: asking participants to describe how (vs. why) the aggression happened was expected to diminish the perceived distance. The following study (study 5) examined whether reporting the event in the passive voice (vs. active voice) would have a similar effect. The last four studies investigated how the timing of presenting the event (before vs. after its antecedents) would influence the perception of distance towards the events and the judgments. We expected that knowing the outcome initially might reduce the perceived distance to the events. The results of the first five studies confirmed the main hypotheses: a communication context that focused on the "how" of the event or that presented it in the passive voice reduced the perceived distance, diminished the predictability of the aggression and victim derogation, and increased the derogation of the aggressor. In addition, the latter studies revealed that learning about the outcome right away led to reduced derogation of the perpetrator and increased derogation of the victim, even when the perceived distance to the event was reduced. Overall, this research suggests that the communication context in which the hindsight bias emerges, as well as the perceived distance to the negative event, are important factors when examining the retrospective reasoning and judgments of witnesses.
9

Zaffran, Margaux. "Post-hoc predictive uncertainty quantification : methods with applications to electricity price forecasting." Electronic Thesis or Diss., Institut polytechnique de Paris, 2024. http://www.theses.fr/2024IPPAX033.

Full text
Abstract
The surge of more and more powerful statistical learning algorithms offers promising prospects for electricity price forecasting. However, these methods provide point forecasts, with no indication of the degree of confidence to be placed in them. To ensure the safe deployment of these predictive models, it is crucial to quantify their predictive uncertainty. This PhD thesis focuses on developing predictive intervals for any underlying algorithm. While motivated by the electricity sector, the methods developed, based on Split Conformal Prediction (SCP), are generic: they can be applied in many sensitive fields. First, this thesis studies post-hoc predictive uncertainty quantification for time series. The first bottleneck in applying SCP to obtain guaranteed probabilistic electricity price forecasts in a post-hoc fashion is the highly non-stationary temporal behaviour of electricity prices, which breaks the exchangeability assumption. The first contribution proposes a parameter-free algorithm tailored for time series, based on a theoretical analysis of the efficiency of the existing Adaptive Conformal Inference method. The second contribution conducts an extensive application study on a novel data set of recent, turbulent French spot prices in 2020 and 2021. Another challenge is missing values (NAs). In a second part, this thesis analyzes the interplay between NAs and predictive uncertainty quantification. The third contribution highlights that NAs induce heteroskedasticity, leading to uneven coverage depending on which features are observed. Two algorithms recovering equalized coverage for any pattern of NAs are designed, under distributional assumptions on the missingness mechanism. The fourth contribution pushes the theoretical analysis forward to understand precisely which distributional assumptions are unavoidable for building informative predictive regions. It also unifies the previously proposed algorithms into a general framework that demonstrates empirical robustness to violations of the assumed missingness distribution.
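Split Conformal Prediction, the starting point of this thesis, can be sketched generically as below. This is our toy illustration on an assumed exchangeable regression dataset, not the time-series or missing-data variants developed in the thesis: calibrate the absolute residuals of any fitted model on held-out data and use their conformal quantile as the half-width of the prediction interval.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(2000, 1))
y = np.sin(X[:, 0]) + rng.normal(scale=0.3, size=2000)

# Split: proper training set for the model, calibration set for the residuals.
X_train, X_cal, y_train, y_cal = train_test_split(X, y, test_size=0.5, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X_train, y_train)

# Conformalize: take the ceil((n+1)(1-alpha))-th smallest calibration residual.
alpha = 0.1
scores = np.sort(np.abs(y_cal - model.predict(X_cal)))
n = len(scores)
k = int(np.ceil((n + 1) * (1 - alpha)))
q = scores[min(k, n) - 1]

x_new = np.array([[1.0]])
pred = model.predict(x_new)[0]
print(f"90% prediction interval at x=1.0: [{pred - q:.2f}, {pred + q:.2f}]")
```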
10

Sobotková, Marika. "Neurofeedback aktivity amygdaly pomocí funkční magnetické rezonance." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2018. http://www.nusl.cz/ntk/nusl-378028.

Full text
Abstract
The aim of this diploma thesis is real-time fMRI neurofeedback. In this case, the activity of the amygdala is monitored and controlled by an emotional regulatory visual task. A procedure to process the measured data online and to incorporate it into the stimulus protocol has been proposed, and a pilot study was carried out. Offline analysis of the measured data was performed, including evaluation of the results of the analysis. The data are processed in MATLAB using functions from the SPM library.
11

Ayad, Célia. "Towards Reliable Post Hoc Explanations for Machine Learning on Tabular Data and their Applications." Electronic Thesis or Diss., Institut polytechnique de Paris, 2024. http://www.theses.fr/2024IPPAX082.

Full text
Abstract
As machine learning continues to demonstrate robust predictive capabilities, it has emerged as a very valuable tool in several scientific and industrial domains. However, as ML models evolve to achieve higher accuracy, they also become increasingly complex and require more parameters. Being able to understand the inner complexities of these machine learning models and to establish trust in their predictions has therefore become essential in various critical domains, including healthcare and finance. Researchers have developed explanation methods to make machine learning models more transparent, helping users understand why predictions are made. However, these explanation methods often fall short in accurately explaining model predictions, making it difficult for domain experts to utilize them effectively. It is crucial to identify the shortcomings of ML explanations, enhance their reliability, and make them more user-friendly. Additionally, with many ML tasks becoming more data-intensive and the demand for widespread integration rising, there is a need for methods that deliver strong predictive performance in a simpler and more cost-effective manner. In this dissertation, we address these problems in two main research thrusts: 1) We propose a methodology to evaluate various explainability methods in the context of specific data properties, such as noise levels, feature correlations, and class imbalance, and offer guidance for practitioners and researchers on selecting the most suitable explainability method based on the characteristics of their datasets, revealing where these methods excel or fail. Additionally, we provide clinicians with personalized explanations of cervical cancer risk factors based on their desired properties, such as ease of understanding, consistency, and stability. 2) We introduce Shapley Chains, a new explanation technique designed to overcome the lack of explanations for multi-output predictions with interdependent labels, where features may have indirect contributions to predicting subsequent labels in the chain (i.e. the order in which these labels are predicted). Moreover, we propose Bayes LIME Chains to enhance the robustness of Shapley Chains.
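Shapley Chains builds on Shapley values. As grounding, here is a brute-force computation of exact Shapley values for a single prediction of a small model, where absent features are replaced by a background average; this is our generic illustration of the underlying attribution concept, not the chained multi-output method introduced in the thesis.

```python
import numpy as np
from itertools import combinations
from math import factorial
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression

X, y = make_regression(n_samples=300, n_features=4, random_state=0)
model = LinearRegression().fit(X, y)
x0, background = X[0], X.mean(axis=0)
p = X.shape[1]

def value(subset):
    """Model output when only the features in `subset` take x0's values,
    the remaining ones being set to the background average."""
    z = background.copy()
    z[list(subset)] = x0[list(subset)]
    return model.predict(z.reshape(1, -1))[0]

# Exact Shapley values: weighted marginal contributions over all coalitions.
shapley = np.zeros(p)
for i in range(p):
    others = [j for j in range(p) if j != i]
    for size in range(p):
        for S in combinations(others, size):
            w = factorial(size) * factorial(p - size - 1) / factorial(p)
            shapley[i] += w * (value(S + (i,)) - value(S))

print("Shapley attributions:", shapley.round(2))
print("check: base + sum =", (value(()) + shapley.sum()).round(2),
      "vs f(x0) =", model.predict(x0.reshape(1, -1))[0].round(2))
```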
12

Enjalbert, Courrech Nicolas. "Inférence post-sélection pour l'analyse des données transcriptomiques." Electronic Thesis or Diss., Université de Toulouse (2023-....), 2024. http://www.theses.fr/2024TLSES199.

Full text
Abstract
In the field of transcriptomics, technological advances such as microarrays and high-throughput sequencing have enabled large-scale quantification of gene expression. These advances have raised statistical challenges, particularly in differential expression analysis, which aims to identify genes that significantly differentiate two populations. However, traditional inference procedures lose their ability to control the false positive rate when biologists select a subset of genes. Post hoc inference methods address this limitation by providing control over the number of false positives, even for arbitrarily selected gene sets. The first contribution of this manuscript demonstrates the effectiveness of these methods for the differential analysis of transcriptomic data between two biological conditions, notably through the introduction of a linear-time algorithm for computing post hoc bounds, adapted to the high dimensionality of the data. An interactive application was also developed to facilitate the selection and simultaneous evaluation of post hoc bounds for sets of genes of interest. These contributions are presented in the first part of the manuscript. The technological evolution towards single-cell sequencing has raised new questions, particularly the identification of genes whose expression distinguishes one cell group from another. This issue is complex because cell groups must first be estimated using a clustering method before performing a comparative test, leading to a circular analysis. In the second part of this manuscript, we present a review of post-clustering inference methods addressing this problem, as well as a numerical comparison of multivariate and marginal approaches for cluster comparison. Finally, we explore how the use of mixture models in the clustering step can be exploited in post-clustering tests, and discuss perspectives for applying these tests to transcriptomic data.
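The circularity problem mentioned above (testing a difference between clusters that were estimated from the same counts) can be illustrated with one simple baseline idea from the post-clustering inference literature, count splitting via Poisson thinning. The sketch below is our generic illustration under an assumed Poisson count model, not a method introduced in the thesis: the naive analysis finds spurious differences on pure noise, while testing on the thinned-out half does not.

```python
import numpy as np
from scipy import stats
from sklearn.cluster import KMeans

# Toy count matrix (cells x genes) drawn from a single homogeneous population.
rng = np.random.default_rng(0)
X = rng.poisson(lam=5.0, size=(300, 2))

# Naive (circular) analysis: cluster and test gene 0 on the same counts.
naive_labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
_, p_naive = stats.ttest_ind(X[naive_labels == 0, 0], X[naive_labels == 1, 0])

# Count splitting: thin each Poisson count into two independent halves,
# cluster on one half, test on the other.
X_cluster = rng.binomial(X, 0.5)
X_test = X - X_cluster
split_labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X_cluster)
_, p_split = stats.ttest_ind(X_test[split_labels == 0, 0], X_test[split_labels == 1, 0])

print(f"same-data p-value for gene 0: {p_naive:.3g} (spuriously small)")
print(f"count-splitting p-value for gene 0: {p_split:.3g}")
```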
13

SEVESO, ANDREA. "Symbolic Reasoning for Contrastive Explanations." Doctoral thesis, Università degli Studi di Milano-Bicocca, 2023. https://hdl.handle.net/10281/404830.

Full text
Abstract
The need for explanations of Machine Learning (ML) systems is growing as new models outperform their predecessors while becoming more complex and less comprehensible for their end users. An essential step in eXplainable Artificial Intelligence (XAI) research is to create interpretable models that aim at approximating the decision function of a black-box algorithm. Though several XAI methods have been proposed in recent years, not enough attention has been paid to explaining how models change their behaviour in contrast with other versions (e.g., due to retraining or data shifts). In such cases, an XAI system should explain why the model changes its predictions concerning past outcomes. In several practical situations, human decision-makers deal with more than one machine learning model. Consequently, the importance of understanding how two machine learning models work beyond their prediction performances is growing, in order to understand their behavior, their differences, and their likeness. To date, interpretable models are synthesised for explaining black boxes and their predictions, and they can be beneficial for formally representing and measuring the differences in a retrained model's behaviour when dealing with new and different data. Capturing and understanding such differences is crucial, as the need for trust is key in any application supporting human-Artificial Intelligence (AI) decision-making processes. This is the idea of ContrXT, a novel approach that (i) traces the decision criteria of a black-box classifier by encoding the changes in its decision logic through Binary Decision Diagrams, and then (ii) provides global, model-agnostic, Model-Contrastive (M-contrast) explanations in natural language, estimating why, and to what extent, the model has modified its behaviour over time. We implemented and evaluated this approach on several supervised ML models trained on benchmark datasets and a real-life application, showing that it is effective in catching majorly changed classes and in explaining their variation through a user study. The approach has been implemented and is available to the community both as a Python package and through a REST API, providing contrastive explanations as a service.
14

Bailey, Bridget Catherine. "Comparing Psychotherapy With and Without Medication in Treating Adults with Bipolar II Depression: A Post-hoc Analysis." The Ohio State University, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=osu1593624227017954.

Full text
15

Paleo, Oswaldo Silva. "Gestão de relacionamento dos clientes com foco no mercado B2B através da metodologia de segmentação post hoc focometria." reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2014. http://hdl.handle.net/10183/118858.

Full text
Abstract
The main objective of this work is to propose customer relationship management tools, analytically based on customer segmentation with a focus on the industrial (B2B) market. To this end, the post hoc segmentation methodology proposed by the author, called Focometria, uses customers' historical purchasing behavior measured by three descriptors: value, recency and frequency. Value measures the contribution margin generated by each customer; recency measures the time elapsed between the established cut-off date and the last commercial operation; and frequency measures the number of commercial operations performed during the analysis period. The methodology proposes two classification stages for the customer portfolio: the first stage defines the customer groups and the second categorizes them according to a proposed ranking. The work is structured in four main articles, which make use of the proposed tools with the goal of improving and qualifying decision-making in the customer relationship management of organizations. In addition, the thesis refers to a fifth article that reinforces the application of the Focometria segmentation methodology to companies' sales budget management. Based on the articles presented, it is concluded that the two-stage Focometria segmentation methodology is an efficient tool to qualify and focus the commercial management of a company's customer portfolio.
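The three descriptors behind this two-stage segmentation are easy to compute from a sales ledger. The sketch below is our generic value/recency/frequency scoring illustration, not the Focometria method itself; the column names and the tercile scoring are assumed for the example.

```python
import pandas as pd

# Assumed transaction ledger: one row per commercial operation.
sales = pd.DataFrame({
    "customer": ["A", "A", "B", "C", "C", "C"],
    "date": pd.to_datetime(["2024-01-05", "2024-03-20", "2023-11-02",
                            "2024-02-10", "2024-03-01", "2024-03-28"]),
    "margin": [120.0, 80.0, 40.0, 200.0, 150.0, 90.0],   # contribution margin
})
cutoff = pd.Timestamp("2024-03-31")                      # established cut-off date

rfm = sales.groupby("customer").agg(
    value=("margin", "sum"),                             # total contribution margin
    recency=("date", lambda d: (cutoff - d.max()).days), # days since last operation
    frequency=("date", "count"),                         # number of operations
)

# First stage: score each descriptor into terciles (1 = worst, 3 = best).
rfm["v_score"] = pd.qcut(rfm["value"].rank(method="first"), 3, labels=[1, 2, 3]).astype(int)
rfm["r_score"] = pd.qcut((-rfm["recency"]).rank(method="first"), 3, labels=[1, 2, 3]).astype(int)
rfm["f_score"] = pd.qcut(rfm["frequency"].rank(method="first"), 3, labels=[1, 2, 3]).astype(int)

# Second stage: rank customers by the combined score.
rfm["rank"] = (rfm[["v_score", "r_score", "f_score"]].sum(axis=1)
               .rank(ascending=False, method="min").astype(int))
print(rfm.sort_values("rank"))
```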
16

Al-Abdullatif, Fatimah. "Discriminant Function Analysis Versus Univariate ANOVAs as Post Hoc Procedures Following Significant MANOVA Test: A Monte Carlo Study." Ohio University / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1585072063453955.

Full text
17

Vries, Miriam de. "Emotionale Prozesse in der Informationsverarbeitung von Psychotherapeutinnen und Psychotherapeuten im Erstgespräch : eine qualitative Analyse von post-hoc Rekonstruktionen /." [S.l : s.n.], 1997. http://www.ub.unibe.ch/content/bibliotheken_sammlungen/sondersammlungen/dissen_bestellformular/index_ger.html.

Full text
18

Lawson, Kaila L. "The Application of Post-Hoc Correction Methods for Soft Tissue Artifact and Marker Misplacement in Youth Gait Knee Kinematics." DigitalCommons@CalPoly, 2021. https://digitalcommons.calpoly.edu/theses/2340.

Full text
Abstract
Biomechanics research investigating the knee kinematics of youth participants is very limited. The most accurate method of measuring knee kinematics utilizes invasive procedures such as bone pins. However, various experimental techniques have improved the accuracy of gait kinematic analyses using minimally invasive methods. In this study, gait trials were conducted with two participants between the ages of 11 and 13 to obtain the knee flexion-extension (FE), adduction-abduction (AA) and internal-external (IE) rotation angles of the right knee. The objectives of this study were to (1) conduct pilot experiments with youth participants to test whether any adjustments were necessary in the experimental methods used for adult gait experiments, (2) apply a Triangular Cosserat Point Element (TCPE) analysis for Soft-Tissue Artifact (STA) correction of knee kinematics with youth participants, and (3) develop a code to conduct a Principal Component Analysis (PCA) to find the PCA-defined flexion axis and calculate knee angles with both STA and PCA-correction for youth participants. The kinematic results were analyzed for six gait trials on a participant-specific basis. The TCPE knee angle results were compared between uncorrected angles and another method of STA correction, Procrustes Solution, with a repeated measures ANOVA of the root mean square errors between each group and a post-hoc Tukey test. The PCA-corrected results were analyzed with a repeated measures ANOVA of the FE-AA correlations from a linear regression analysis between TCPE, PS, PCA-TCPE and PCA-PS angles. The results indicated that (1) youth experiments can be conducted with minor changes to experimental methods used for adult gait experiments, (2) TCPE and PS analyses did not yield statistically different knee kinematic results, and (3) PCA-correction did not reduce FE-AA correlations as predicted.
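The post hoc Tukey test used after the ANOVA in this study can be illustrated generically. The sketch below uses made-up per-trial RMSE values and a simple one-way layout rather than the study's repeated measures design or data: an omnibus ANOVA across three correction methods is followed by Tukey's HSD for the pairwise comparisons.

```python
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Made-up per-trial RMSE values (degrees) for three correction methods.
rng = np.random.default_rng(0)
uncorrected = rng.normal(6.0, 1.0, 12)
tcpe = rng.normal(4.5, 1.0, 12)
ps = rng.normal(4.4, 1.0, 12)

# Omnibus one-way ANOVA across the three methods.
f_stat, p_val = f_oneway(uncorrected, tcpe, ps)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_val:.4f}")

# Post hoc Tukey HSD: which pairs of methods actually differ?
values = np.concatenate([uncorrected, tcpe, ps])
groups = ["uncorrected"] * 12 + ["TCPE"] * 12 + ["PS"] * 12
print(pairwise_tukeyhsd(values, groups, alpha=0.05))
```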
19

Dignan, Kathleen. "Post-Hoc Analysis of Challenging Behavior by Function: A Comparison of Multiple-Respondent Anecdotal Assessments, Functional Analyses, and Treatments." Thesis, University of North Texas, 2016. https://digital.library.unt.edu/ark:/67531/metadc862773/.

Full text
Abstract
The current study examines anecdotal assessment, functional analysis, and treatment outcomes from 44 participants. Agreement across Motivation Assessment Scale (MAS), Questions About Behavioral Function (QABF), and Functional Analysis Screening Tool (FAST) assessments, agreement between those anecdotal assessments and functional analyses, and agreement between those anecdotal assessments and treatment outcomes were analyzed across maintaining variables and topography categories of challenging behaviors. Overall, the QABF had the highest agreement with functional analyses and treatments, at 70% and 92% of cases respectively. Patterns in the distribution of maintaining variables were examined across behavior topography categories.
20

Zimmer, Stefanie [Verfasser]. "Die Entwicklung sekundärer Infektionen bei Patienten mit sepsis-assoziierter Immunsuppression : eine post hoc-Analyse der GM-CSF-Studie / Stefanie Zimmer." Berlin : Medizinische Fakultät Charité - Universitätsmedizin Berlin, 2019. http://d-nb.info/1202043739/34.

Full text
21

Charter, Simon. "A post hoc scoping assessment of the rapid rate and scale of urban development along the 'West Coast' : 3 case studies." Master's thesis, University of Cape Town, 2006. http://hdl.handle.net/11427/4834.

Full text
22

Krah, Rubertha Rosemarie [Verfasser], and Jens Martin [Akademischer Betreuer] Werner. "Post-hoc-outcome-Analyse der chronisch Hepatitis-C-Virus infizierten Patienten der SiLVER-Studie / Rubertha Rosemarie Krah ; Betreuer: Jens Martin Werner." Regensburg : Universitätsbibliothek Regensburg, 2020. http://d-nb.info/1214886965/34.

Full text
23

Ferreira, Guilherme Zamboni. "Lo-fi : aproximações e processos criativos : da fotografia à arquitetura." reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2017. http://hdl.handle.net/10183/174973.

Full text
Abstract
Lo-Fi stands for Low Fidelity. In music, it became a sonic language in the 1970s, when artists, seeking economical solutions for self-production and independence from record labels, began using homemade recorders to make their own recordings. From this act of liberation and experimentation emerged, as an overflow, a noisy, raw, low-fidelity sound. At the same time, the noise produced by this marginal production created authentic and spontaneous crossings that soon formed a vivid sound scene. Conversely, Hi-Fi, or High Fidelity, seeks the idealization and perfection of its final product which, once tied to the phonographic industry and high technology, tends to be driven by the logic of the market. Thought and produced as a final object, contemporary architectural practice has been approaching hi-fi recording: delimited, ordered by strata, classifications and types that define it as an object-product. In this logic, architecture becomes an image, a spectacle ready for consumption and disposal. This work proposes an inflection, a displacement of architectural thinking based on the metaphorical approach triggered by the lo-fi mode. Flaws, noise, experimentation and the potency of invention lead to a turning point in the making process, broader, returning to the idea of the unfinished and the unexpected, guiding a process whose meaning lies not in the final product but in the path itself, in the desire for transformation, in the unknown. Reflecting on process, as lo-fi music evokes, means returning to the initial sketch, decomposing and questioning its nature, exposing its traces. It means bringing architecture back to the sense of discovery, as a body in experimentation, as the surprise that reveals the unforeseen crossing of distinct times. The intention here is, through speculative narratives, to attempt to deconstruct some consolidated practices of the design process, catalyzed by the action of music, and to point to analogous approximations and representations of other architectural assemblies that are not captured by the narratives determined and exhausted by the logic of the market.
24

Chernoff, William Avram. "On determining the power of a test after data collection." Manhattan, Kan. : Kansas State University, 2009. http://hdl.handle.net/2097/2278.

Full text
25

Piepenburg, Sven Mathis [Verfasser], and Christiane E. [Gutachter] Angermann. "Prognostische Bedeutung depressiver Symptome bei Patienten mit systolischer Herzinsuffizienz - Post- hoc Analysen aus dem Datensatz des Interdisziplinären Netzwerkes Herzinsuffizienz (INH) / Sven Mathis Piepenburg ; Gutachter: Christiane E. Angermann." Würzburg : Universität Würzburg, 2020. http://d-nb.info/1202714072/34.

Full text
26

Kandula, Uday Bhaskar. "DO THE CAUSES OF POVERTY VARY BY NEIGHBORHOOD TYPE?" Cleveland State University / OhioLINK, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=csu1357184967.

Full text
27

Biggs, Iain Adam. "A post-hoc elucidation and contextualisation of Between Carterhaugh and Tamshiel Rig : a borderline episode, taken as a model for 'writing up' creative practice-led doctoral research projects." Thesis, University of the West of England, Bristol, 2005. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.429529.

Full text
Abstract
The thesis argues that the artist's book Between Carterhaugh and Tamshiel Rig: a borderline episode (2004) offers a valid and innovative model for writing up creative practice-led research at doctoral level. It sets out a post-hoc elucidation and contextualisation of the creative / pedagogic / research position adopted there in the context of the wider 'politics' of the university sector. The Introduction provides aims and objectives and a rationale for the 'anthropological' methodology and 'polytheistic' terminology used, locating the project both generally and in the particular context of the AHRB bid that funded the book's production. Chapter One provides historical and pedagogic accounts of shifts from 'professional practice' to 'creative practice research' in the context of Gibbons et al.'s discussion of 'knowledge production', seen in the context of the Research Assessment Exercise (RAE) and the politics of 'scientific' models of cultural evaluation. Both Apollonian and Hermetic perspectives are discussed. Chapter Two offers a discussion of the author's relevant research outputs between 1994 and 2005 as a context for the AHRB bid, raising issues of contingency, the 'metaphoric field', and Paul Ricoeur's 'Hermetic' understanding of 'new knowledge'. Chapter Three discusses the dual evaluation of Between Carterhaugh and Tamshiel Rig and some of the implications arising from it. The conclusion claims that the validity of the model for 'writing up' doctoral study has been demonstrated. Salient points identified include the voluntary accommodation between Hermetic and Hestian perspectives, and issues of contingency and of 'approximate knowledge', both seen in the context of the work of Paul Ricoeur. The positive outcome of the double evaluation process is also cited. The author's contribution to the CHEAD paper Types of Research in the Creative Arts and Design is taken as further evidence that the project has made a contribution to the sector's self-understanding through a more accurate account of the psychosocial context within which research is conducted in the UK.
28

Gerges, Peter Raouf Aziz. "Effect of intensity of care on mortality and withdrawal of life-sustaining therapies in severe traumatic brain injury patients : a post-hoc analysis of a multicenter cohort study." Master's thesis, Université Laval, 2017. http://hdl.handle.net/20.500.11794/30951.

Texto completo
Resumen
Introduction et objectifs Le traumatisme craniocérébral (TCC) est un problème de santé majeur dans le monde. Chez les patients ayant subi un TCC grave, une amélioration de la mortalité a été observée dans les centres de traumatologie offrant une intensité de traitement élevée et un monitorage intensif. Cependant, la mortalité ainsi que l’incidence du retrait du maintien des fonctions vitales varient entre les différents centres de traumatologie. Notre étude visait à évaluer l’effet de l'intensité des soins sur l’incidence du retrait du maintien des fonctions vitales et sur la mortalité chez les patients ayant subi un TCC grave. Méthodes Notre étude est une analyse post-hoc d’une étude de cohorte rétrospective multicentrique de patients ayant subi un TCC grave (n = 720). Nous avons défini l’intensité des soins en utilisant le type d’interventions effectuées à l’unité de soins intensifs. Les interventions ont été classées en fonction de leur spécificité par rapport au TCC et en fonction de leur nature : 1) médicale, 2) chirurgicale, et 3) diagnostique. L’effet de l'intensité des soins, sur la mortalité et le retrait du maintien des fonctions vitales, a été évalué en utilisant des modèles à risques proportionnels de Cox ajustés. Résultats L’intensité des soins a été associée à une diminution de la mortalité (HR 0,69, IC à 95% 0,63 à 0,74, p <0,0001) et du retrait du maintien des fonctions vitales (HR 0,73, IC à 95% 0,67 à 0,79, p <0,0001). Les associations ont été significatives pour l'intensité des interventions spécifiques et non-spécifiques au TCC et pour les interventions médicales et diagnostiques, mais non significatives pour les interventions chirurgicales. Conclusion Nous avons observé une association significative entre l'intensité globale des soins et la mortalité ainsi que l'incidence du retrait du maintien des fonctions vitales suivant un TCC grave. Cette association était significative avec les interventions spécifiques et non-spécifiques au TCC, ainsi qu’avec les interventions médicales et diagnostiques.
Introduction and objectives Traumatic brain injury (TBI) is a major health problem. In severe TBI, better outcomes and reduced mortality were shown in trauma centers providing high intensity of treatment and monitoring. Mortality as well as incidence of withdrawal of life-sustaining therapies were found to vary among different trauma centers. Our study aimed to evaluate the effect of intensity of care for severe TBI on the incidence of withdrawal of life-sustaining therapy and mortality. Methods Our study is a post-hoc analysis of a Canadian multicenter retrospective cohort study of patients with severe TBI (n = 720). We defined the intensity of care using interventions performed in the ICU. They were categorized into 1) TBI-related interventions and 2) interventions non-specific to TBI, and according to the type of intervention: 1) medical, 2) surgical, and 3) diagnostic interventions. The effect of intensity of care, on mortality and the withdrawal of life-sustaining therapies, was evaluated with adjusted Cox proportional-hazards regression analyses of time-to-event data. Results The intensity of care was associated with decreased mortality (HR 0.69, 95% CI 0.63–0.74, p<0.0001) and decreased withdrawal of life support (HR 0.73, 95% CI 0.67–0.79, p<0.0001). The associations with outcomes were also significant for both the intensity of interventions specific to TBI and general ICU interventions. 
The associations with outcomes also maintained their significance with medical and diagnostic components of care but were not significant with surgical component of care. Conclusion We observed a significant association between the overall intensity of care, defined by the different interventions commonly used, on mortality and on the incidence of withdrawal of life-sustaining therapies in severe TBI. This association was present whether interventions were specific or not specific to TBI, as well as whether they were medical or diagnostic interventions.
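As an illustration of the adjusted Cox proportional-hazards analysis of time-to-event data described in this abstract, the following minimal Python sketch uses the lifelines library; the input file and the column names (intensity, age, gcs, time_to_event, event) are hypothetical placeholders, not variables from the study's dataset.

import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical cohort table: one row per patient, with follow-up time,
# an event indicator (1 = event observed, 0 = censored) and covariates.
df = pd.read_csv("tbi_cohort.csv")  # placeholder file name

cph = CoxPHFitter()
cph.fit(
    df[["time_to_event", "event", "intensity", "age", "gcs"]],
    duration_col="time_to_event",   # follow-up time
    event_col="event",              # 1 = death or withdrawal observed
)
cph.print_summary()                 # hazard ratios with 95% confidence intervals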
Los estilos APA, Harvard, Vancouver, ISO, etc.
29

Miller, Jody L. "Impact of the Purdue Extension Professor Popcorn nutrition curriculum on third grade students' knowledge, attitudes, and self-reported consumption of fruits and vegetables : a five-month post-hoc analysis." Virtual Press, 2003. http://liblink.bsu.edu/uhtbin/catkey/1273271.

Texto completo
Resumen
The purpose of this study was to determine if completion of the Purdue Extension Professor Popcorn nutrition curriculum impacts third grade students' long-term knowledge about, attitude toward, and self-reported consumption of fruits and vegetables. A secondary purpose was to measure any carry-over of concepts learned, or to identify any food behaviors acquired, by surveying the students' parents. A total of 74 third-grade students and 66 parents/guardians participated in this study. Data were analyzed using SPSS, version 11.0. Descriptive analysis, frequency counts, and Pearson chi-square tests were used to test 15 research hypotheses. Significant differences were found in students' attitude toward vegetables, how often they ate fruit, and how often people should eat fruits and vegetables. No differences were found in parent/guardian surveys. Results of this study provide modest support for the impact of Professor Popcorn on students. No carryover of concepts to the students' parents, however, was observed.
Department of Family and Consumer Sciences
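For readers unfamiliar with the Pearson chi-square tests mentioned in this abstract, a minimal Python sketch of such a test on a contingency table is shown below; the counts and category labels are invented for illustration and are not data from the study.

import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical contingency table: rows = before / after the curriculum,
# columns = self-reported fruit consumption ("rarely", "sometimes", "daily").
table = np.array([[20, 35, 19],
                  [12, 30, 32]])

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")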
Los estilos APA, Harvard, Vancouver, ISO, etc.
30

Gunga, Matthias Martin [Verfasser], Margarete [Akademischer Betreuer] Landenberger, Karin [Akademischer Betreuer] Jordan, and Hanna [Akademischer Betreuer] Mayer. "Das Schmerz-, Fatigue- und Schlafstörung-Symptomcluster und dessen Einfluss auf Funktionalität und Lebensqualität von Tumorpatienten : eine Post-hoc-Analyse / Matthias Martin Gunga. Betreuer: Margarete Landenberger ; Karin Jordan ; Hanna Mayer." Halle, Saale : Universitäts- und Landesbibliothek Sachsen-Anhalt, 2015. http://d-nb.info/1089085613/34.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
31

Krein, Jonathan L. "Replication and Knowledge Production in Empirical Software Engineering Research." BYU ScholarsArchive, 2014. https://scholarsarchive.byu.edu/etd/4296.

Texto completo
Resumen
Although replication is considered an indispensable part of the scientific method in software engineering, few replication studies are published each year. The rate of replication, however, is not surprising given that replication theory in software engineering is immature. Not only are replication taxonomies varied and difficult to reconcile, but opinions on the role of replication contradict. In general, we have no clear sense of how to build knowledge via replication, particularly given the practical realities of our research field. Consequently, most replications in software engineering yield little useful information. In particular, the vast majority of external replications (i.e., replications performed by researchers unaffiliated with the original study) not only fail to reproduce the original results, but defy explanation. The net effect is that, as a research field, we consistently fail to produce usable (i.e., transferable) knowledge, and thus, our research results have little if any impact on industry. In this dissertation, we dissect the problem of replication into four primary concerns: 1) rate and explicitness of replication; 2) theoretical foundations of replication; 3) tractability of methods for context analysis; and 4) effectiveness of inter-study communication. We address each of the four concerns via a two-part research strategy involving both a theoretical and a practical component. The theoretical component consists of a grounded theory study in which we integrate and then apply external replication theory to problems of replication in empirical software engineering. The theoretical component makes three key contributions to the literature: first, it clarifies the role of replication with respect to the overall process of science; second, it presents a flexible framework for reconciling disparate replication terminology; and third, it informs a broad range of practical replication concerns. The practical component involves a series of replication studies, through which we explore a variety of replication concepts and empirical methods, ultimately culminating in the development of a tractable method for context analysis (TCA). TCA enables the quantitative evaluation of context variables in greater detail, with greater statistical power, and via considerably smaller datasets than previously possible. As we show (via a complex, real-world example), the method ultimately enables the empirically and statistically-grounded reconciliation and generalization of otherwise contradictory results across dissimilar replications—which problem has previously remained unsolved in software engineering.
Los estilos APA, Harvard, Vancouver, ISO, etc.
32

Prasad, Nishchal. "Modèles de langage volumineux et leur adaptation hiérarchique sur de longs documents pour la classification et leur explication : un cas de TALN juridique." Electronic Thesis or Diss., Université de Toulouse (2023-....), 2024. http://www.theses.fr/2024TLSES244.

Texto completo
Resumen
La prédiction des jugements juridiques pose des défis importants en raison de la longueur et de la structure non uniforme des documents de procédure, qui peuvent dépasser des dizaines de milliers de mots. Ces complexités sont encore exacerbées lorsque les documents manquent d'annotations structurelles. Pour résoudre ces problèmes, nous proposons un cadre hiérarchique basé sur l'apprentissage profond appelé MESc (Multi-stage Encoder-based Supervised with Clustering) pour la prédiction des jugements. MESc divise les longs documents juridiques en parties plus petites, en extrayant leurs incorporations des quatre dernières couches d'un modèle de langage large (LLM) personnalisé et affiné. Nous approximons la structure du document à l'aide d'un clustering non supervisé, en alimentant les incorporations groupées dans des couches d'encodeur de transformateur pour apprendre les représentations inter-blocs. Notre approche exploite des LLM à plusieurs milliards de paramètres, tels que GPT-Neo et GPT-J, dans ce cadre hiérarchique et démontre leur adaptabilité et leurs capacités d'apprentissage par transfert intra-domaine. Dans des expériences utilisant des textes juridiques de l'Inde, de l'Union européenne et des États-Unis, provenant des ensembles de données ILDC et LexGLUE, MESc obtient au moins 2 points d'amélioration des performances par rapport aux méthodes de pointe. Malgré le succès des cadres hiérarchiques dans le traitement de longs documents juridiques, leur nature de boîte noire limite souvent l'explicabilité de leurs prédictions, ce qui est essentiel pour les applications juridiques du monde réel. Pour résoudre ce problème, nous développons Ob-HEx (Occlusion-based Hierarchical Explanation-extracteur), un algorithme qui fournit des explications extractives pour les modèles hiérarchiques en évaluant la sensibilité des prédictions aux perturbations d'entrée. Plus précisément, nous utilisons l'occlusion pour perturber les séquences d'entrée et analysons les prédictions résultantes, générant ainsi des explications. Nous adaptons Ob-HEx aux modèles Hierarchical Transformer formés sur des textes juridiques indiens, démontrant son efficacité sur l'ensemble de données ILDC-Expert avec un gain minimum de 1 point par rapport aux références précédentes sur la plupart des mesures d'évaluation.
Legal judgment prediction poses significant challenges due to the length and non-uniform structure of case documents, which can exceed tens of thousands of words. These complexities are further exacerbated when documents lack structural annotations. To address these issues, we propose a deep-learning-based hierarchical framework called MESc (Multi-stage Encoder-based Supervised with Clustering) for judgment prediction. MESc divides lengthy legal documents into smaller parts, extracting their embeddings from the last four layers of a custom fine-tuned Large Language Model (LLM). We approximate document structure using unsupervised clustering, feeding the clustered embeddings into transformer encoder layers to learn inter-chunk representations. Our approach leverages multi-billion parameter LLMs, such as GPT-Neo and GPT-J, within this hierarchical framework and demonstrates their adaptability and intra-domain transfer learning capabilities. In experiments using legal texts from India, the European Union, and the United States, sourced from the ILDC and LexGLUE datasets, MESc achieves at least a 2-point performance improvement over state-of-the-art methods. 
Despite the success of hierarchical frameworks in processing long legal documents, their black-box nature often limits the explainability of their predictions, which is critical for real-world legal applications. To address this, we develop Ob-HEx (Occlusion-based Hierarchical Explanation-extractor), an algorithm that provides extractive explanations for hierarchical models by assessing the sensitivity of predictions to input perturbations. Specifically, we use occlusion to perturb input sequences and analyze the resulting predictions, thereby generating explanations. We adapt Ob-HEx to Hierarchical Transformer models trained on Indian legal texts, demonstrating its effectiveness on the ILDC-Expert dataset with a minimum gain of 1 point over previous benchmarks across most evaluation metrics
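A rough Python sketch of the chunk-embed-cluster-encode pipeline described in this abstract is given below; it assumes the chunk embeddings have already been extracted from a fine-tuned LLM (that step is omitted and replaced by random tensors), and the layer sizes, number of clusters and class count are illustrative choices rather than the thesis's actual configuration.

import torch
import torch.nn as nn
from sklearn.cluster import KMeans

# Stand-in for embeddings of document chunks taken from an LLM's last layers.
num_chunks, dim = 64, 768
chunk_embeddings = torch.randn(num_chunks, dim)

# Unsupervised clustering to approximate the document's latent structure.
labels = KMeans(n_clusters=4, n_init=10).fit_predict(chunk_embeddings.numpy())
order = torch.as_tensor(labels).argsort()            # group chunks by cluster
x = chunk_embeddings[order].unsqueeze(0)              # shape (1, num_chunks, dim)

# Transformer encoder layers learn inter-chunk representations; a linear head
# pools them into a judgment prediction (two classes here, for illustration).
encoder_layer = nn.TransformerEncoderLayer(d_model=dim, nhead=8, batch_first=True)
encoder = nn.TransformerEncoder(encoder_layer, num_layers=2)
head = nn.Linear(dim, 2)

logits = head(encoder(x).mean(dim=1))
print(logits.shape)  # torch.Size([1, 2])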
Los estilos APA, Harvard, Vancouver, ISO, etc.
33

Reich, Lena Alexa [Verfasser], Martin [Akademischer Betreuer] Lotze, Martin [Gutachter] Lotze, and Björn [Gutachter] Rasch. "Beeinflussung der kognitiven Leistung durch so-tDCS während des Nachtschlafs bei älteren gesunden Probanden: Explorative post hoc Analyse verschiedener Faktoren bezüglich der Ansprechbarkeit auf die elektrische Stimulation / Lena Alexa Reich ; Gutachter: Martin Lotze, Björn Rasch ; Betreuer: Martin Lotze." Greifswald : Universität Greifswald, 2020. http://nbn-resolving.de/urn:nbn:de:gbv:9-opus-39519.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
34

Reich, Lena Alexa [Verfasser], Martin [Akademischer Betreuer] Lotze, Martin [Gutachter] Lotze, and Björn [Gutachter] Rasch. "Beeinflussung der kognitiven Leistung durch so-tDCS während des Nachtschlafs bei älteren gesunden Probanden: Explorative post hoc Analyse verschiedener Faktoren bezüglich der Ansprechbarkeit auf die elektrische Stimulation / Lena Alexa Reich ; Gutachter: Martin Lotze, Björn Rasch ; Betreuer: Martin Lotze." Greifswald : Universität Greifswald, 2020. http://d-nb.info/1217784055/34.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
35

Madfors, Ingela. "Backward time travel and its relevance for theological study : An explorative literature study based on physics, philosophy, counterfactual thinking and theology." Thesis, Högskolan i Gävle, Akademin för utbildning och ekonomi, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:hig:diva-8533.

Texto completo
Resumen
This paper explores the possibility and relevance of theological study of backward time travel and its consequences. An examination of current research on backward time travel reveals a number of interdisciplinary topics which are not handled within physics. Some of these topics, mainly concerning free will and determination, are of interest to philosophers, whereas topics such as meaning and responsibility are left aside. In theology, there is a general dismissal of the idea of backward time travel. This study claims that this negative stance may be the result of taking science and its methods too seriously. The result of the study is that the interdisciplinary questions connected to backward time travel make the subject very relevant for theological reflection. Thought experiments on backward time travel can provide valuable insights on how we deal with our lives, our world, time, and God today.
Denna explorativa studie utforskar möjligheten och relevansen av teologiska studier av tidsresor till det förflutna och deras konsekvenser. En undersökning av det aktuella forskningsläget visar på förekomsten av interdisciplinära frågeställningar som inte hanteras inom fysiken. Vissa frågor, framförallt knutna till den fria viljan och determinism, intresserar filosofer, medan andra områden som mening och ansvar inte behandlas vidare. Teologer ställer sig generellt negativa till tanken på resor till det förflutna. Denna studie hävdar att denna negativa inställning kan vara resultatet av en alltför stark respekt för vetenskapens fynd och metoder. Resultatet av studien är att de interdisciplinära frågeställningar som är kopplade till tidsresor till det förflutna gör ämnet högst lämpligt för teologisk begrundan. Tankeexperiment kring ämnet kan ge värdefulla insikter om hur vi hanterar våra liv, vår värld, tiden och Gud idag.
Los estilos APA, Harvard, Vancouver, ISO, etc.
36

Senteney, Michael H. "A Monte Carlo Study to Determine Sample Size for Multiple Comparison Procedures in ANOVA." Ohio University / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou160433478343909.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
37

Nymark, Marianne Kristine. "Taxonomy of the Rufous-naped lark (Mirafra africana) complex based on song analysis." Thesis, Uppsala universitet, Institutionen för biologisk grundutbildning, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-435322.

Texto completo
Resumen
The Rufous-naped lark Mirafra africana complex consists of 22 subspecies spread across the African continent. Several of the subspecies have recently been suggested to potentially be treated as separate species. In this study a comparative analysis was done on the songs of seven of the subspecies: M. a. africana, M. a. athi, M. a. grisescens, M. a. kabalii, M. a. nyikae, M. a. transvaalensis and M. a. tropicalis. The results showed that M. a. athi, M. a. kabalii and M. a. nyikae are all very divergent from each other as well as from the other four subspecies. In contrast, M. a. tropicalis, M. a. grisescens, M. a. africana and M. a. transvaalensis are not clearly separable from each other. Based on the results, I suggest that M. a. athi, M. a. kabalii and M. a. nyikae can be classified as separate species, with M. a. africana, M. a. tropicalis, M. a. grisescens and M. a. transvaalensis forming a fourth species (M. africana sensu stricto). Finally, I conclude that this study shows that more studies need to be done on the subspecies of the Mirafra africana complex.
Los estilos APA, Harvard, Vancouver, ISO, etc.
38

Kossaï, Mohamed. "Les Technologies de L’Information et des Communications (TIC), le capital humain, les changements organisationnels et la performance des PME manufacturières." Thesis, Paris 9, 2013. http://www.theses.fr/2013PA090035/document.

Texto completo
Resumen
Les TIC sont un facteur clé de performance dans les pays développés. Cette thèse s’intéresse à l’adoption des TIC et leur impact sur la performance des PME manufacturières d’un pays en développement. A la suite d’une première partie qui présente le cadre théorique et conceptuel, le reste de la thèse est organisé en trois études empiriques. La première étude propose une modélisation Probit afin d’identifier les déterminants d’adoption des TIC. Le capital humain est la variable explicative la plus significative. Se basant sur la régression linéaire à variables muettes, la causalité de Granger, le test de Kruskal-Wallis et le test de l’ANOVA de Welch, suivis des tests post-hoc correspondants, la deuxième étude met en évidence l’existence d’un fort lien statistique significatif entre le niveau d’adoption des TIC et la rentabilité. Dans une troisième étude, plusieurs modélisations Probit (simple, ordonné et multivarié) ont été testées sur différentes mesures de performance. Nous montrons, premièrement, que les TIC ont un impact positif sur la productivité, la rentabilité et la compétitivité. Deuxièmement, les TIC, le capital humain et la formation sont les déterminants de la performance globale. Enfin, la contribution des TIC à la performance globale est forte lorsqu’elles sont combinées au capital humain qualifié. En définitive, nos résultats empiriques ont montré un effet positif des TIC, du capital humain et du changement organisationnel sur la performance des PME.
ICT is a key performance factor in developed countries. This PhD thesis focuses on the adoption of ICTs and their impact on the performance of manufacturing SMEs in a developing country. Following a first part covering the theoretical and conceptual framework, the rest of the thesis is organized in three empirical studies. The first study uses a Probit model in order to identify the determinants of ICT adoption. Human capital seems to be the most significant explanatory variable. Based on linear regression with dummy variables, Granger causality, the Kruskal-Wallis test and Welch's ANOVA test, followed by the corresponding post-hoc tests, the second study highlights the existence of a strong statistically significant relationship between the level of ICT adoption and profitability. In a third study, many Probit models (simple, ordered and multivariate) were tested on different measures of performance. Firstly, we show that ICT have a positive impact on productivity, profitability and competitiveness of SMEs. Secondly, ICT, human capital and training are determinants of firm overall performance. Thirdly, when combined, ICT and highly skilled human resources have an important contribution to the global performance. In conclusion, our empirical results demonstrate a positive impact of ICT, human capital and organizational change on firm performance.
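As a rough illustration of the kind of group comparison with post-hoc tests mentioned in this abstract, the Python sketch below runs a Kruskal-Wallis test followed by Holm-corrected pairwise Mann-Whitney tests on synthetic profitability scores for three ICT-adoption levels; this is one reasonable post-hoc choice for illustration, not the study's exact procedure, and the data are invented.

import numpy as np
from itertools import combinations
from scipy.stats import kruskal, mannwhitneyu
from statsmodels.stats.multitest import multipletests

# Synthetic profitability scores for three ICT-adoption groups.
rng = np.random.default_rng(0)
groups = {"low": rng.normal(0.0, 1.0, 40),
          "medium": rng.normal(0.4, 1.0, 40),
          "high": rng.normal(0.9, 1.0, 40)}

H, p = kruskal(*groups.values())
print(f"Kruskal-Wallis: H = {H:.2f}, p = {p:.4f}")

# Post-hoc: pairwise Mann-Whitney U tests with Holm correction.
pairs = list(combinations(groups, 2))
raw_p = [mannwhitneyu(groups[a], groups[b]).pvalue for a, b in pairs]
reject, adj_p, _, _ = multipletests(raw_p, method="holm")
for (a, b), q, r in zip(pairs, adj_p, reject):
    print(f"{a} vs {b}: adjusted p = {q:.4f}, significant = {r}")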
Los estilos APA, Harvard, Vancouver, ISO, etc.
39

Reynaud, Laurent. "Stratégies de mobilité optimisées pour la tolérance aux perturbations dans les réseaux sans fil." Thesis, Lyon, 2017. http://www.theses.fr/2017LYSE1060/document.

Texto completo
Resumen
L'objectif de cette thèse est de proposer des stratégies d'optimisation protocolaire et architecturale adaptées aux cas d'usages pour lesquels les communications entre les noeuds d'un réseau sont susceptibles d'être fortement perturbées par des conditions de déploiement défavorables, sans que les mécanismes standards de réparation de panne prévus pour ce réseau puissent convenablement traiter et résorber les effets de ces perturbations. Il peut s'agir de divers contextes applicatifs, comme celui des réseaux de communication d'urgence, mis en oeuvre suite à la survenue de désastres ou plus généralement d'incidents non planifiés capables de laisser les réseaux d'une zone affectée partiellement ou totalement endommagés. Les perturbations mentionnées peuvent être de différente nature : elles peuvent par exemple être provoquées par un dimensionnement du réseau défavorable (ex. nombre de noeuds trop faible, surface de dispersion des nœuds trop importante, portée des interfaces de communication sans fil trop réduites, . . . en regard des autres paramètres de déploiement considérés). Elles peuvent aussi être provoquées par des causes externes, comme par exemple la présence non anticipée d'obstacles ou la survenue de sources d'interférences extérieures au réseau considéré. De manière générale, on constate qu'en présence de telles perturbations, un réseau non conçu pour spécifiquement fonctionner dans de telles conditions peut voir ses performances et la qualité d'expérience de ses utilisateurs baisser significativement. Dans ce contexte, nous cherchons à comparer la perception que nous avons traditionnellement de la mobilité dans les réseaux sans fil, en particulier dans les réseaux ad hoc mobiles et les réseaux tolérants aux perturbations et aux délais, avec les principes de la mobilité contrôlée, selon lesquels un noeud est capable de participer directement à la détermination de sa trajectoire et à la réalisation de son déplacement. Nous définissons un système de forces virtuelles, comprenant diverses composantes répulsives, attractives, de frottement et d'alignement, pouvant être appliquées aux noeuds d'un réseau. Nous expliquons ensuite comment concrètement utiliser ces forces virtuelles dans un déploiement réseau, et nous spécifions une solution protocolaire utilisée selon diverses variations, que nous mettons en oeuvre à travers des stratégies de mobilité contrôlée adaptées à différents environnements réseau.Nous prenons tout d'abord appui sur un scénario applicatif relatif à la lutte contre la progression d'une espèce invasive, le frelon asiatique, et décrivons un déploiement sur un réseau ad hoc sans fil reposant sur un ensemble de véhicules mobiles aériens qui exécutent une première stratégie de mobilité contrôlée. Nous cherchons à identifier les plages de valeurs pour les paramètres-clés de notre protocole à base de forces virtuelles aboutissant aux meilleures performances du réseau constitué par l'ensemble des noeuds considérés. Par la suite, nous introduisons également un scénario de déploiement de réseau temporaire de secours en situation de désastre, toujours de type ad hoc sans fil, puis nous présentons une analyse de la performance d'une seconde stratégie de mobilité contrôlée adaptée à cet environnement. Nous montrons en particulier comment cette stratégie se comporte lorsque le nombre de noeuds du réseau augmente. Nous abordons ensuite le contexte des réseaux utilisés en conditions défavorables et des mécanismes de tolérance aux perturbations. 
Nous cherchons ici à concevoir un troisième type de stratégie de mobilité contrôlée utilisant conjointement des mécanismes de tolérance aux perturbations et aux délais et les principes de mobilité contrôlée afin d'augmenter significativement les performances du réseau.
Throughout this thesis, we seek to propose and design optimized strategies that are adapted to a widespread class of use cases in which communications between network nodes may be disrupted by adverse deployment conditions, assuming that standard fault repair mechanisms are unable to address and mitigate the effects created by these disruptions. Such use cases include the applicative context of emergency communication networks, which are often met in the wake of disasters, or more generally after the occurrence of any unexpected event which may leave the existing networks of an affected area partially or even totally damaged. The aforementioned disruptions can be of different nature: they may result from a detrimental network dimensioning (e.g. low number of network nodes, excessive node scattering surface, insufficient radio communication range... with respect to the other considered deployment parameter values). They may also stem from external causes, e.g. the unexpected presence of obstacles on the area of interest, or the existence of extrinsic interference sources that may disturb the considered network. In general, it can be observed that given such disruptions, a network which is not inherently designed to operate in these conditions is likely to under-perform and, as a result, to offer a significantly decreased quality of experience to its users. In this regard, we seek to compare our perception of the traditional concept of mobility as seen in common infrastructure, ad hoc or disruption- and delay-tolerant wireless networks with the principles of controlled mobility, according to which a network node may directly control its own movement and affect its trajectory accordingly. More precisely, we investigate the means to define a virtual force system which encompasses multiple repulsive, attractive, friction and alignment forces, all of which may be applied to network nodes in order to enforce this principle of controlled mobility. We then explain how virtual forces can concretely be implemented and used in a realistic network deployment, and we specify a protocol solution and its variations, which we enforce within controlled mobility strategies with the prospect that those prove best suited to the considered network environments. We first take as an applicative background a scenario aiming to fight the spread of an invasive species, the Asian hornet, and we outline a practical deployment relying on a wireless ad hoc network formed with unmanned aerial nodes which all enforce our first proposed controlled mobility strategy. We then seek to identify the best value intervals for the key parameters of our virtual force-based protocol, anticipating that configured with these values, the deployed network will yield its best performance in terms of delays and packet delivery. Later, we introduce a scenario related to the deployment of an emergency communication network, still on the basis of wireless ad hoc network principles. We then present an analysis of how a second proposed controlled mobility strategy performs in this applicative environment. In particular, we show how this strategy behaves when the number of network nodes increases. 
At that point, we address the context of networks deployed in challenging conditions, and of the use of disruption- and delaytolerant mechanisms. We aim here at designing a third type of strategy that jointly uses disruption- and delay-tolerant mechanisms as well as controlled mobility principles, in order to significantly increase the overall network performance. We then investigate and explain how this strategy allows transmitting a fraction of the user traffic with short delays, when an end-to-end route is available along a communication chain, while the other fraction of the traffic is delivered with longer delays, with the support of delay-tolerant routing mechanisms
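To make the virtual force system described in this abstract more concrete, here is a minimal Python sketch of one force-update step for a single node, combining an attractive component towards a target, a repulsive component from nearby neighbours and a friction term (the alignment component is omitted); the gains and the distance threshold are illustrative values, not parameters taken from the thesis.

import numpy as np

# Illustrative gains and repulsion range for a single aerial node.
K_ATT, K_REP, FRICTION, D_REP = 0.8, 2.0, 0.3, 25.0

def virtual_force(pos, vel, target, neighbours):
    force = K_ATT * (target - pos)                   # attraction towards the target
    for n in neighbours:
        d = pos - n
        dist = np.linalg.norm(d)
        if 0 < dist < D_REP:                         # repulsion from close neighbours
            force += K_REP * (D_REP - dist) * d / dist
    force -= FRICTION * vel                          # friction damps the motion
    return force

pos, vel = np.array([0.0, 0.0]), np.array([1.0, 0.0])
target = np.array([100.0, 50.0])
neighbours = [np.array([5.0, 2.0]), np.array([60.0, 40.0])]
print(virtual_force(pos, vel, target, neighbours))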
Los estilos APA, Harvard, Vancouver, ISO, etc.
40

Patout, Maxime. "Evaluation des techniques pour la prise en charge diagnostique et thérapeutique de l'insuffisance respiratoire chronique A Randomized controlled trial on the effect of needle gauge on the pain and anxiety experienced during radial arterial puncture Long term survival following initiation of home non-invasive ventilation : a European study Neural respiratory drive predicts long-term outcome following admission for exacerbation of COPD : a post hoc analysis Neural respiratory drive and cardiac function in patients with obesity hypoventilation syndrome following initiation of non-invasive ventilation Polysomnography versus limited respiratory monitoring and nurse-led titration to optimise non-invasive ventilation set-up a pilot randomised clinical trial Chronic ventilator service Step-down from non-invasive ventilation to continuous positive airway pressure : a better phenotyping is required AVAPS-AE versus ST mode : a randomized controlled trial in patients with obesity hypoventilation syndrome Technological advances in home non-invasive ventilation monitoring : reliability of data and effect on patient outcomes Efficacy of a home discharge care bundle after acute exacerbation of COPD Prediction of severe acute exacerbation using changes in breathing pattern of COPD patients on home noninvasive ventilation Charasteristics and outcome of patients set up on high-flow oxygen therapy at home Trial of portable continuous positive airway pressure for the management of tracheobronchomalacia." Thesis, Normandie, 2019. http://www.theses.fr/2019NORMR115.

Texto completo
Resumen
L’insuffisance respiratoire chronique est un syndrome défini par une défaillance monoviscérale respiratoire. Sa principale origine est aujourd’hui le syndrome obésité-hypoventilation qui concerne 4 à 5% des patients obèses. L’IRC est aussi le stade évolutif terminal de la bronchopneumopathie chronique obstructive qui touche 6 à 8% de la population adulte. L’incidence de ces pathologies et donc de l’insuffisance respiratoire est en augmentation constante. Dans cette thèse, nous avons évalué les nouvelles modalités diagnostiques et thérapeutiques qui pourraient améliorer la prise en charge des patients atteints d’insuffisance respiratoire chronique. Concernant la prise en charge diagnostique, nous avons montré que les données fournies par l’électromyographie de surface des muscles intercostaux, outil qui évalue le travail respiratoire, constituent un marqueur pronostique indépendant chez les patients atteints de bronchopneumopathie chronique obstructive. Nous avons également montré leur pertinence pour prédire l’efficacité clinique et l’observance à la ventilation non-invasive à domicile. Concernant la prise en charge thérapeutique, nous avons montré que l’utilisation d’un mode semi-automatisé de ventilation non-invasive a la même efficacité que celle de modes classiques en permettant une mise en place plus rapide du traitement. Nous avons également rapporté l’intérêt de l’oxygénothérapie à haut débit au domicile alors que ce traitement était utilisé jusque-là dans le seul cadre des soins intensifs. Enfin, nous avons rapporté les bénéfices de la pression positive continue au cours de l’effort chez les patients ayant une trachéobronchomalacie. Concernant le suivi des patients, nous avons montré que les données des logiciels de ventilation non invasive permettent de prédire la survenue d’une exacerbation sévère de BPCO mais que l’utilisation de la télémédecine chez les patients insuffisants respiratoires chroniques ne peut être encore pleinement intégrée dans la pratique clinique. Au cours de cette thèse, nous avons identifié de nouveaux outils physiologiques, de nouvelles modalités d’administration des traitements et de nouveaux outils de suivi à domicile, à même d’améliorer la prise en charge des patients insuffisants respiratoires chroniques.
Single-organ respiratory failure defines chronic respiratory failure. Obesity hypoventilation syndrome is the main cause of chronic respiratory failure and occurs in 4 to 5% of obese patients. Chronic respiratory failure is also the end-stage evolution of chronic obstructive pulmonary disease that has a prevalence of 6 to 8% in the adult population. As the incidence of these diseases increases, so does the incidence of chronic respiratory failure. In this thesis, we will evaluate novel diagnostic and therapeutic modalities that could improve the care of patients with chronic respiratory failure. Regarding diagnostic modalities, we have seen that evaluating the work of breathing with surface parasternal electromyography was an independent prognostic marker in patients with chronic obstructive pulmonary disease. We have also seen that it was a relevant tool to predict the clinical efficacy and compliance to home non-invasive ventilation. Regarding therapeutic modalities, we have shown that the use of a semi-automatic mode of non-invasive ventilation had the same efficacy as a standard mode with a shorter length of stay for its setup. 
We have shown the relevance and feasibility of the use of high-flow oxygen therapy in the home setting whilst it was only used in intensive care units. Finally, we have shown the benefits of continuous positive airway pressure during exertion in patients with tracheobronchomalacia. Regarding patients’ follow-up, we have shown that the use of data from built-in software could predict the onset of a severe exacerbation of chronic obstructive pulmonary disease. However, we also show that the implementation of tele-medicine in patients with chronic respiratory failure cannot be included in daily clinical practice yet. In this thesis, we have identified novel physiological tools, novel ways to administer treatments and novel follow-up tools that can improve the management of patients with chronic respiratory failure
Los estilos APA, Harvard, Vancouver, ISO, etc.
41

Jakubičková, Dominika. "Ověřování faktů post hoc v mediálních obsazích." Master's thesis, 2019. http://www.nusl.cz/ntk/nusl-392907.

Texto completo
Resumen
This diploma thesis deals with the post hoc fact-checking of media content (fact-checking carried out after a text has been published). It focuses primarily on users' fact-checking and on fact-checking organizations. The two types of fact-checking share some characteristics, differ in other respects, and often complement each other. To achieve a more comprehensive perspective, I extended the theoretical part to cover not only fact-checking theory and the definition and typology of disinformation content, but also selected media theories and a description of fact-checking organizations. In the analytical part, I focused on both sides of post hoc fact-checking: the habits and attitudes of ordinary users towards media and fact-checking organisations, and the outputs of fact-checking organizations (fact-checking articles). The main goal was to develop a methodology that can effectively interlink both parts of the field and generate conclusions that could have a positive impact on the future development of post hoc fact-checking.
Los estilos APA, Harvard, Vancouver, ISO, etc.
42

Maicas, Suso Gabriel. "Pre-hoc and Post-hoc Diagnosis and Interpretation of Breast Magnetic Resonance Volumes." Thesis, 2018. http://hdl.handle.net/2440/120330.

Texto completo
Resumen
Breast cancer is among the leading causes of death in women. Aiming at reducing the number of casualties, breast screening programs have been implemented to diagnose asymptomatic cancers due to the correlation of higher survival rates with earlier tumour detection. Although these programs are normally based on mammography, magnetic resonance imaging (MRI) is recommended for patients at high risk. The interpretation of such MRI volumes is time-consuming and prone to inter-observer variability, leading to missed cancers and a relatively high number of false positives provoking unnecessary biopsies. Consequently, computer-aided diagnosis systems are being designed to help improve the efficiency and the diagnosis outcomes of radiologists in breast screening programs. Traditional automated breast screening systems are based on a two-stage pipeline consisting of the localization of suspicious regions of interest (ROIs) and their classification to perform the diagnosis (i.e. decide about their malignancy). This process is typically ineffective due to the usual expensive inference involved in the exhaustive search for ROIs and the employment of non-optimal hand-crafted features in both stages. These issues have been partially addressed with the introduction of deep learning methods that unfortunately need large strongly annotated training datasets (voxel-wise labelling of each lesion), which tend to be expensive to acquire. Alternatively, the use of weakly labelled datasets (i.e. volume-level labels) allows diagnosis to become a supervised classification problem, where a malignancy probability is estimated after examining the entire volume. However, large weakly labelled training sets are still required. Additionally, to facilitate the adoption of such weakly trained systems in clinical practice, it is desirable that they are capable of providing the localization of lesions that justifies the automatically produced diagnosis for the whole volume. Nonetheless, current methods lack the precision required for the problem of weakly supervised lesion detection. Motivated by these limitations, we propose a number of methods that address these deficiencies. First, we propose two strongly supervised deep learning approaches that not only can be trained with relatively small datasets, but are efficient in the localization of suspicious tissue. In particular, we propose: 1) the global minimization of an energy functional containing information from the semantic segmentation produced by a deep learning model for lesion segmentation, and 2) a reinforcement learning model for suspicious region detection. Diagnosis is performed by classifying suspicious regions yielded by the reinforcement learning model. Second, aiming to reduce the burden associated with strongly annotating datasets, we propose a novel training methodology to improve the diagnosis performance on systems trained with weakly labelled datasets that contain a relatively small number of training samples. We further propose a novel 1-class saliency detector to automatically localize lesions associated with the diagnosis outcome of this model. Finally, we present a comparison between both of our proposed approaches for diagnosis and lesion detection. Experiments show that whole volume analysis with weakly labelled datasets achieves better performance for malignancy diagnosis than the strongly supervised methods. However, strongly supervised methods show better accuracy for lesion detection.
Thesis (Ph.D.) -- University of Adelaide, School of Computer Science, 2018
Los estilos APA, Harvard, Vancouver, ISO, etc.
43

Huang, Hao. "Post hoc Indoor Localization Based on Rss Fingerprint in Wlan." 2014. https://scholarworks.umass.edu/theses/1185.

Texto completo
Resumen
In the investigation of crimes committed by wireless users, one of the key goals is to determine the location of the mobile device at the time of the crime. Since this happens during the investigative phase after the crime is committed, we term this the post hoc geographical localization estimation problem. In this thesis, we introduce the post hoc geographical localization estimation problem and present approaches for its solution based on radio frequency (RF) fingerprinting. Motivated by the goal of establishing a crime's location with enough accuracy to obtain a search warrant, our focus is on locating a criminal mobile device in indoor environments with roughly the granularity to distinguish between two adjacent rooms, without having the ability to enter those rooms or the building to gather input data for the RF fingerprinting algorithm. While empirical performance studies of instantaneous indoor positioning systems based on radio frequency (RF) fingerprinting have been presented in the literature, the core of this thesis is the first empirical study focused on the post hoc version of the problem from the viewpoint of digital forensics. In this study, we set up experiments in a residential area and collect a large set of raw data in order to analyze and evaluate the algorithms, the best of which provides a mean error distance of roughly 1.4 meters. In addition, we consider enhancements to the baseline algorithms if knowledge of the blueprint of the building is available. In particular, we consider whether compensating the raw data for the attenuation caused by walls can improve algorithm performance.
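A minimal Python sketch of the nearest-neighbour flavour of RSS fingerprinting described in this abstract is shown below; the access-point count, RSS values and training positions are invented for illustration and are not the measurements collected in the thesis.

import numpy as np

# Each training fingerprint is a vector of RSS values (dBm) from a fixed set of
# access points, labelled with the (x, y) position where it was recorded.
train_rss = np.array([[-40, -62, -75],
                      [-48, -55, -70],
                      [-60, -50, -58],
                      [-72, -58, -45]], dtype=float)
train_pos = np.array([[0, 0], [3, 0], [3, 4], [6, 4]], dtype=float)

def locate(query_rss, k=2):
    dists = np.linalg.norm(train_rss - query_rss, axis=1)  # distance in signal space
    nearest = np.argsort(dists)[:k]
    return train_pos[nearest].mean(axis=0)                  # average of k closest positions

print(locate(np.array([-50.0, -56.0, -66.0])))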
Los estilos APA, Harvard, Vancouver, ISO, etc.
44

Piepenburg, Sven Mathis. "Prognostische Bedeutung depressiver Symptome bei Patienten mit systolischer Herzinsuffizienz - Post- hoc Analysen aus dem Datensatz des Interdisziplinären Netzwerkes Herzinsuffizienz (INH)." Doctoral thesis, 2020. https://nbn-resolving.org/urn:nbn:de:bvb:20-opus-193461.

Texto completo
Resumen
In der vorliegenden Dissertation wurde anhand von post- hoc Analysen aus dem Datensatz des Interdisziplinären Netzwerkes Herzinsuffizienz (INH, Unique identifier: ISRCTN 23325295) die prognostische Bedeutung depressiver Symptome bei Patienten mit systolischer Herzinsuffizienz bestimmt. Dazu wurden n=852 Patienten untersucht, die zur Baseline alle einen PHQ-9 Fragebogen zur Erhebung ihrer depressiven Symptome ausgefüllt hatten. Es konnte gezeigt werden, dass sich die kürzere Version des PHQ-9, der PHQ-2, ebenso gut zum Screening für Depression eignete und auch ein prognostischer Marker für Tod jeder Ursache und Rehospitalisierung nach 540 Tagen war. Ein Dosis-Wirkungseffekt konnte für zunehmende depressive Symptome nachgewiesen werden. Der PHQ-9 eignete sich als Risikomarker für beide Geschlechter. Es zeigten sich signifikante Unterschiede in den Baseline Charakteristiken und dem depressiven Symptomprofil von Frauen und Männern. Die weiblichen Teilnehmerinnen hatten zusätzlich eine signifikant schlechtere Lebensqualität anhand des krankheitsspezifischen Kansas City Cardiomyopathy Questionnaires. Dafür hatten nur Männer mit vermehrten depressiven Symptomen auch ein erhöhtes Rehospitalisierungsrisiko. Depressive Symptome verschlechterten die Lebensqualität bei beiden Geschlechtern. Die Ergebnisse tragen dazu bei, die Aufmerksamkeit für die häufig auftretenden und zu selten diagnostizierten depressiven Symptome bei Herzinsuffizienz zu erhöhen. Der PHQ-2 ist zudem weniger zeitintensiv und kann mündlich erfragt werden. Die Informationen aus den hier gezeigten Geschlechtsunterschieden könnten darüber hinaus in der Zukunft für individuellere Behandlungsziele und Unterstützungsangebote verwendet werden.
This thesis consists of post-hoc analyses from the Interdisciplinary Network for Heart Failure (INH, unique identifier: ISRCTN 23325295) to evaluate the prognostic meaning of depressive symptoms in patients with systolic heart failure. N=852 patients who had completed the PHQ-9 questionnaire for depressive symptom assessment at baseline were included. The PHQ-2 (a shorter version extracted from the PHQ-9) proved to be a valid screening tool and prognostic marker for all-cause death and rehospitalization after 540 days. A dose-response effect of depressive symptoms was shown. The PHQ-9 was a suitable risk predictor for both sexes. Some significant differences were found between men and women in baseline characteristics and depressive symptom profiles. Female participants had a worse quality of life according to disease-specific Kansas City Cardiomyopathy Questionnaire Scores. Only depressed men had a higher risk for rehospitalization. Depressive symptoms decreased quality of life for both men and women. The results raise awareness for the common yet underdiagnosed depressive symptoms in heart failure patients. The PHQ-2 is less time consuming than the PHQ-9 and can be used verbally in any clinical interview. Information on gender-specific differences might help to develop more individualized treatments and support programs in the future.
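To illustrate the kind of screening comparison described above, here is a small Python sketch that checks how a PHQ-2 cut-off (commonly taken as a score of 3 or more) agrees with a PHQ-9 cut-off (commonly 10 or more) on synthetic scores; the data are invented and nothing here is taken from the INH dataset.

import numpy as np

# Synthetic PHQ-9 scores (0-27) and loosely correlated PHQ-2 scores (0-6).
rng = np.random.default_rng(1)
phq9 = rng.integers(0, 28, size=200)
phq2 = np.clip((phq9 * 6 / 27 + rng.normal(0, 1, 200)).round(), 0, 6)

ref = phq9 >= 10          # reference positive screen (PHQ-9)
test = phq2 >= 3          # shorter screen under evaluation (PHQ-2)
sensitivity = (test & ref).sum() / ref.sum()
specificity = (~test & ~ref).sum() / (~ref).sum()
print(f"sensitivity = {sensitivity:.2f}, specificity = {specificity:.2f}")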
Los estilos APA, Harvard, Vancouver, ISO, etc.
45

Bhattacharya, Debarpan. "A Learnable Distillation Approach For Model-agnostic Explainability With Multimodal Applications." Thesis, 2023. https://etd.iisc.ac.in/handle/2005/6108.

Texto completo
Resumen
Deep neural networks are the most widely used examples of sophisticated mapping functions from feature space to class labels. In recent years, several high impact decisions in domains such as finance, healthcare, law and autonomous driving are made with deep models. In these tasks, the model decisions lack interpretability, and pose difficulties in making the models accountable. Hence, there is a strong demand for developing explainable approaches which can elicit how the deep neural architecture, despite the astounding performance improvements observed in all fields, including computer vision and natural language processing, generates the output decisions. The current frameworks for explainability of deep models are based on gradients (e.g. GradCAM, guided-gradCAM, Integrated Gradients, etc.) or based on locally linear assumptions (e.g. LIME). Some of these approaches require the knowledge of the deep model architecture, which may be restrictive in many applications. Further, most of the prior works in the literature highlight the results on a small number of examples to illustrate the performance of these XAI methods, often lacking statistical evaluation. This thesis proposes a new approach for explainability based on mask estimation approaches, called the Distillation Approach for Model-agnostic Explainability (DAME). The DAME is a saliency-based explainability model that is post-hoc, model-agnostic (applicable to any black box architecture), and requires only query access to the black box. The DAME is a student-teacher modeling approach, where the teacher model is the original model for which the explainability is sought, while the student model is the mask estimation model. The input sample is augmented with various data augmentation techniques to produce numerous samples in the immediate vicinity of the input. Using these samples, the mask estimation model is learnt to generate the saliency map of the input sample for predicting the labels. A distillation loss is used to train the DAME model, and the student model tries to locally approximate the original model. Once the DAME model is trained, the DAME generates a region of the input (either in space or in time domain for images and audio samples, respectively) that best explains the model predictions. We also propose an evaluation framework, for both image and audio tasks, where the XAI models are evaluated in a statistical framework on a set of held-out examples with the Intersection-over-Union (IoU) metric. We have validated the DAME model for vision, audio and biomedical tasks. Firstly, we deploy the DAME for explaining a ResNet-50 classifier pre-trained on the ImageNet dataset for the object recognition task. Secondly, we explain the predictions made by a ResNet-50 classifier fine-tuned on the Environmental Sound Classification (ESC-10) dataset for the audio event classification task. Finally, we validate the DAME model on the COVID-19 classification task using cough audio recordings. In these tasks, the DAME model is shown to outperform existing benchmarks for explainable modeling. The thesis concludes with a discussion on the limitations of the DAME approach along with the potential future directions.
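The following toy PyTorch sketch illustrates the general distillation idea behind a mask-estimation explainer of this kind: a small student network proposes a saliency mask, the masked input is sent through the frozen black-box teacher (query access only), and a KL-divergence loss with a sparsity penalty trains the masker. Both networks, the input size and the loss weight are stand-ins chosen for illustration, not the architectures or settings used in the thesis.

import torch
import torch.nn as nn
import torch.nn.functional as F

# Frozen "black box" teacher and a tiny mask-estimation student.
teacher = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32, 10))
for p in teacher.parameters():
    p.requires_grad_(False)
masker = nn.Sequential(nn.Conv2d(1, 1, 3, padding=1), nn.Sigmoid())
opt = torch.optim.Adam(masker.parameters(), lr=1e-3)

x = torch.randn(8, 1, 32, 32)                # a batch of augmented inputs
with torch.no_grad():
    target = F.softmax(teacher(x), dim=1)    # teacher prediction on the full input

mask = masker(x)                             # estimated saliency map in [0, 1]
masked_logits = teacher(x * mask)            # teacher queried on the masked input
loss = F.kl_div(F.log_softmax(masked_logits, dim=1), target, reduction="batchmean") \
       + 0.01 * mask.mean()                  # sparsity term keeps the mask small
loss.backward()
opt.step()
print(float(loss))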
Los estilos APA, Harvard, Vancouver, ISO, etc.
46

Gachutha, Catherine Wanjiru. "The role of supervision in the management of counsellor burnout." Thesis, 2006. http://hdl.handle.net/10500/1876.

Texto completo
Resumen
The study investigated the extent of the burnout condition among counsellors in Kenya. The sources of burnout were explored and personality style was positively correlated with burnout development. Impact of burnout on counsellor wellness and productivity was also established. It examined whether counsellor supervision was an appropriate strategy in the management of counsellor burnout. The study utilized a pluralistic design that combined both qualitative and quantitative methods (Howard, 1983). The qualitative design permitted collection of rich data from study subjects' experiential and perceptual fields. This ensured study findings would be relevant and applicable to specific counsellor situations. The study population comprised 20 counsellors and 9 Kenya Counselling Association (KCA) accredited counsellor supervisors. The counsellor sample was drawn from 2 Voluntary Counselling and Testing (VCT) centres, 2 rehabilitation centres and 2 educational institutions. This diverse population was a helpful representation in terms of generalizability of the study. Three data collection instruments were utilized: questionnaires, focus group discussions and in-depth interviews. The study's validity and reliability were ensured through the two sample populations (counsellor and counsellor supervisors), test re-test and pre-test procedures for questionnaires and in-depth interviews. Tallying identified items checked content validity. The study findings showed that burnout seriously affected practitioner effectiveness and led to malpractice and client harm. The study predictably established that supervision is an appropriate strategy in the management of counsellor burnout. The metaphor of motor vehicle maintenance was utilized in the development of the Holistic Burnout Supervision Model (HBSM) that focussed on wellness maintenance of the counsellor in a lifecycle. HBSM identified two levels in wellness maintenance: preventative (servicing) and curative (repair). The study recommended that counsellor-training institutions should incorporate burnout and supervision modules in their curriculum. This would create awareness about burnout and appropriate prevention strategies at counsellor formation stages. People care agencies should also institutionalize the burnout supervision facility in order to ensure counsellor resiliency and vitality.
Psychology
D. Phil (Psychology)
Los estilos APA, Harvard, Vancouver, ISO, etc.
47

Campos, Francisca Vasconcelos Marques Palha de. "Relatório de estágio no Ad Hoc Studios." Master's thesis, 2019. http://hdl.handle.net/10400.14/32899.

Texto completo
Resumen
Este trabalho final de mestrado de Som e Imagem da Universidade Católica Portuguesa é apresentado sob a forma de relatório de estágio curricular e reflete toda a experiência e trabalho desenvolvido na área de pós-produção de som para cinema no Ad Hoc Studios em Madrid. Realizado no contexto do Erasmus+, este estágio reflete também a escolha de uma experiência profissional internacional num mercado dinâmico e atrativo em termos de oportunidades no mundo do audiovisual. Devido às suas valências técnicas, artísticas e humanas, o Ad Hoc Studios deu-me a oportunidade de uma extraordinária aprendizagem nas várias áreas da pós-produção de som, incluindo design de som e foleys, dobragens e ADR e edição e pré-mistura de som. Esta aprendizagem permitiu aplicar e explorar todas as competências adquiridas durante a minha formação na Escola das Artes da Universidade Católica Portuguesa e contribuiu, de forma determinante, para a construção da minha identidade artística e profissional.
This final work for my master's degree in Sound and Image at the Catholic University of Portugal is presented as a curricular internship report and reflects all the experience and work developed in the area of sound post-production for cinema at Ad Hoc Studios in Madrid. Held in the context of Erasmus+, this internship also reflects the choice of an international professional experience in a dynamic and attractive market in terms of opportunities in the audiovisual world. Through its technical, artistic and human skills, Ad Hoc Studios has given me the opportunity for extraordinary learning in several areas of sound post-production, including sound design and foleys, dubbing and ADR and sound editing and premixing. Simultaneously, this learning process allowed me to apply and explore all the skills acquired during my training at the School of Arts of the Catholic University of Portugal and contributed, in a decisive way, to the construction of my artistic and professional identity.
Los estilos APA, Harvard, Vancouver, ISO, etc.
48

Kanyama, Busanga Jerome. "A comparison of the performance of three multivariate methods in investigating the effects of province and power usage on the amount of five power modes in South Africa." Diss., 2011. http://hdl.handle.net/10500/4681.

Texto completo
Resumen
Researchers perform the multivariate techniques MANOVA, discriminant analysis and factor analysis. The most common applications in social science are to identify and test the effects from the analysis. The use of these multivariate techniques is uncommon in investigating the effects of power usage and Province in South Africa on the amounts of the five power modes. This dissertation discusses this issue, the methodology and practical problems of the three multivariate techniques. The author examines the applications of each technique in social public research and comparisons are made between the three multivariate techniques. This dissertation concludes with a discussion of both the concepts of the present multivariate techniques and the results found on the use of the three multivariate techniques in household energy consumption. The author recommends focusing on the hypotheses of the study or the typical questions surrounding each technique to guide the researcher in choosing the appropriate analysis in social research, as each technique has some strengths and limitations.
Statistics
M. Sc. (Statistics)
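To show how the three multivariate techniques named in this abstract relate in practice, the Python sketch below applies MANOVA, linear discriminant analysis and factor analysis to a synthetic stand-in for the household energy data; the column names, the province factor and the numbers of components are placeholders, not the dissertation's variables or results.

import numpy as np
import pandas as pd
from statsmodels.multivariate.manova import MANOVA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.decomposition import FactorAnalysis

# Synthetic data: five "power mode" amounts and a categorical province factor.
rng = np.random.default_rng(2)
n = 120
df = pd.DataFrame(rng.normal(size=(n, 5)), columns=[f"mode{i}" for i in range(1, 6)])
df["province"] = rng.choice(["A", "B", "C"], size=n)

# 1) MANOVA: do the five amounts jointly differ across provinces?
manova = MANOVA.from_formula("mode1 + mode2 + mode3 + mode4 + mode5 ~ province", data=df)
print(manova.mv_test())

# 2) Discriminant analysis: how well do the amounts separate provinces?
lda = LinearDiscriminantAnalysis().fit(df.iloc[:, :5], df["province"])
print("LDA accuracy:", lda.score(df.iloc[:, :5], df["province"]))

# 3) Factor analysis: a lower-dimensional structure behind the five amounts.
fa = FactorAnalysis(n_components=2).fit(df.iloc[:, :5])
print(fa.components_)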
Los estilos APA, Harvard, Vancouver, ISO, etc.
50

Maas, Bea. "Birds, bats and arthropods in tropical agroforestry landscapes: Functional diversity, multitrophic interactions and crop yield." Doctoral thesis, 2013. http://hdl.handle.net/11858/00-1735-0000-0022-5E77-5.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.