Academic literature on the topic 'Kernel testing'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Kernel testing.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Dissertations / Theses on the topic "Kernel testing"

1

Lee, Kevin Sung-ho. "Kernel-adaptor interface testing of Project Timeliner." Thesis, Massachusetts Institute of Technology, 1995. http://hdl.handle.net/1721.1/49939.

Full text
2

Ozier-Lafontaine, Anthony. "Kernel-based testing and their application to single-cell data." Electronic Thesis or Diss., Ecole centrale de Nantes, 2023. http://www.theses.fr/2023ECDN0025.

Full text
Abstract:
Single-cell technologies generate data at the single-cell level. They are composed of hundreds to thousands of observations (i.e. cells) and tens of thousands of variables (i.e. genes). New methodological challenges have arisen to fully exploit the potential of these complex data. A major statistical challenge is to distinguish biological information from technical noise in order to compare conditions or tissues.
This thesis explores the application of kernel testing to single-cell datasets in order to detect and describe potential differences between compared conditions. To overcome the limitations of existing kernel two-sample tests, we propose a kernel test inspired by the Hotelling-Lawley test that can apply to any experimental design. We implemented these tests in an R and Python package called ktest, which is their first user-oriented implementation. We demonstrate the performance of kernel testing on simulated datasets and on various experimental single-cell datasets. The geometrical interpretation of these methods makes it possible to identify the observations driving a detected difference. Finally, we propose an efficient Nyström-based implementation of these kernel tests, as well as a range of diagnostic and interpretation tools.
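The kernel two-sample testing described in this abstract builds on comparing distributions through a kernel. As a minimal illustration, here is a generic MMD permutation test with a Gaussian kernel (a sketch only: function names are ours, and this is not the ktest package's Hotelling-Lawley-style statistic):

```python
import numpy as np

def gaussian_kernel(a, b, bandwidth=1.0):
    # Pairwise Gaussian kernel matrix between rows of a and b.
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * bandwidth ** 2))

def mmd2(x, y, bandwidth=1.0):
    # Biased estimate of the squared maximum mean discrepancy.
    kxx = gaussian_kernel(x, x, bandwidth).mean()
    kyy = gaussian_kernel(y, y, bandwidth).mean()
    kxy = gaussian_kernel(x, y, bandwidth).mean()
    return kxx + kyy - 2 * kxy

def mmd_permutation_test(x, y, n_perm=500, bandwidth=1.0, seed=0):
    # Permutation null: shuffle pooled sample labels and recompute MMD^2.
    rng = np.random.default_rng(seed)
    observed = mmd2(x, y, bandwidth)
    pooled = np.vstack([x, y])
    n = len(x)
    count = 0
    for _ in range(n_perm):
        idx = rng.permutation(len(pooled))
        count += mmd2(pooled[idx[:n]], pooled[idx[n:]], bandwidth) >= observed
    return observed, (count + 1) / (n_perm + 1)

rng = np.random.default_rng(1)
x = rng.normal(0.0, 1.0, size=(40, 2))
y = rng.normal(2.0, 1.0, size=(40, 2))  # clearly shifted distribution
stat, p = mmd_permutation_test(x, y)
```

A shifted sample yields a large observed MMD² relative to the permutation null, so the test rejects equality of distributions.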
3

Kotlyarova, Yulia. "Kernel estimators : testing and bandwidth selection in models of unknown smoothness." Thesis, McGill University, 2005. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=85179.

Full text
Abstract:
Semiparametric and nonparametric estimators are becoming indispensable tools in applied econometrics. Many of these estimators depend on the choice of smoothing bandwidth and kernel function. Optimality of such parameters is determined by unobservable smoothness of the model, that is, by differentiability of the distribution functions of random variables in the model. In this thesis we consider two estimators of this class: the smoothed maximum score estimator for binary choice models and the kernel density estimator.

We present theoretical results on the asymptotic distribution of the estimators under various smoothness assumptions and derive the limiting joint distributions for estimators with different combinations of bandwidths and kernel functions. Using these nontrivial joint distributions, we suggest a new way of improving accuracy and robustness of the estimators by considering a linear combination of estimators with different smoothing parameters. The weights in the combination minimize an estimate of the mean squared error. Monte Carlo simulations confirm suitability of this method for both smooth and non-smooth models.

For the original and smoothed maximum score estimators, a formal procedure is introduced to test for equivalence of the maximum likelihood estimators and these semiparametric estimators, which converge to the true value at slower rates. The test allows one to identify heteroskedastic misspecifications in the logit/probit models. The method has been applied to analyze the decision of married women to join the labour force.
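The idea of combining kernel density estimators built with different bandwidths can be sketched as follows. Note this is an illustration with hand-picked weights; the thesis chooses the weights by minimizing an estimate of the mean squared error:

```python
import numpy as np

def gaussian_kde(data, grid, bandwidth):
    # Kernel density estimate: average of Gaussian bumps centered at each observation.
    z = (grid[:, None] - data[None, :]) / bandwidth
    return np.exp(-0.5 * z ** 2).sum(axis=1) / (len(data) * bandwidth * np.sqrt(2 * np.pi))

rng = np.random.default_rng(0)
data = rng.normal(size=500)
grid = np.linspace(-4, 4, 81)

# Linear combination of estimators with different smoothing parameters.
# Fixed illustrative weights; the thesis's weights minimize an estimated MSE.
estimates = np.stack([gaussian_kde(data, grid, h) for h in (0.2, 0.5, 1.0)])
weights = np.array([0.25, 0.5, 0.25])
combined = weights @ estimates
```

Because the weights sum to one, the combination remains a (near-)proper density while trading off the bias of large bandwidths against the variance of small ones.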
4

Liero, Hannelore. "Testing the Hazard Rate, Part I." Universität Potsdam, 2003. http://opus.kobv.de/ubp/volltexte/2011/5151/.

Full text
Abstract:
We consider a nonparametric survival model with random censoring. To test whether the hazard rate has a parametric form, the unknown hazard rate is estimated by a kernel estimator. Based on a limit theorem stating the asymptotic normality of the quadratic distance of this estimator from the smoothed hypothesis, an asymptotic α-test is proposed. Since the test statistic depends on the maximum likelihood estimator for the unknown parameter in the hypothetical model, properties of this parameter estimator are investigated. Power considerations complete the approach.
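The quadratic-distance idea can be sketched in a simplified, uncensored setting (function names and the uniform kernel choice are ours; the hypothesized hazard is smoothed with the same kernel so both sides carry the same smoothing bias):

```python
import numpy as np

def kernel_hazard(times, grid, h):
    # Uniform-kernel estimate of the hazard rate from Nelson-Aalen increments
    # (uncensored case for simplicity: jump 1/(number at risk) at each event).
    t = np.sort(times)
    increments = 1.0 / (len(t) - np.arange(len(t)))
    k = (np.abs(grid[:, None] - t[None, :]) <= h) / (2 * h)
    return k @ increments

def quadratic_distance(times, grid, h, hazard0):
    # Quadratic distance between the kernel estimate and the *smoothed*
    # hypothetical hazard, evaluated numerically on a fine grid.
    lam_hat = kernel_hazard(times, grid, h)
    fine = np.linspace(grid.min() - h, grid.max() + h, 4001)
    step = fine[1] - fine[0]
    k = (np.abs(grid[:, None] - fine[None, :]) <= h) / (2 * h)
    lam0 = (k @ hazard0(fine)) * step
    return ((lam_hat - lam0) ** 2).sum() * (grid[1] - grid[0])

rng = np.random.default_rng(0)
times = rng.exponential(1.0, size=5000)  # true hazard is constant at 1
grid = np.linspace(0.5, 1.5, 21)
d_true = quadratic_distance(times, grid, 0.3, lambda s: np.ones_like(s))
d_wrong = quadratic_distance(times, grid, 0.3, lambda s: 3.0 * np.ones_like(s))
```

Under the correct hypothesis the distance is small; the paper's contribution is the limit theorem that turns this statistic into a calibrated asymptotic test.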
5

Friedrichs, Stefanie [Verfasser], Heike [Akademischer Betreuer] Bickeböller, Thomas [Gutachter] Kneib, and Tim [Gutachter] Beißbarth. "Kernel-Based Pathway Approaches for Testing and Selection / Stefanie Friedrichs ; Gutachter: Thomas Kneib, Tim Beißbarth ; Betreuer: Heike Bickeböller." Göttingen : Niedersächsische Staats- und Universitätsbibliothek Göttingen, 2017. http://d-nb.info/114137952X/34.

Full text
6

Li, Yinglei. "Genetic Association Testing of Copy Number Variation." UKnowledge, 2014. http://uknowledge.uky.edu/statistics_etds/8.

Full text
Abstract:
Copy-number variation (CNV) has been implicated in many complex diseases. It is of great interest to detect and locate such regions through genetic association testing. However, association testing is complicated by the fact that CNVs usually span multiple markers, so that such markers are correlated with each other. To overcome this difficulty, it is desirable to pool information across the markers. In this thesis, we propose a kernel-based method for aggregation of marker-level tests, in which we first obtain p-values through association tests for every marker and then base the CNV association test on a statistic combining those p-values. In addition, we explore several aspects of its implementation. Since p-values among markers are correlated, it is complicated to obtain the null distribution of test statistics for kernel-based aggregation of marker-level tests. To solve this problem, we develop two methods, permutation-based and correlation-based, that are both demonstrated to preserve the family-wise error rate of the test procedure. Many implementation aspects of the kernel-based method are compared through empirical power studies in a number of simulations constructed from real data involving a pharmacogenomic study of gemcitabine. Further performance comparisons are made between the permutation-based and correlation-based approaches, and we apply both approaches to the real data. The main contribution of the dissertation is the development of marker-level association testing, a comparable and powerful approach to detect phenotype-associated CNVs. Furthermore, the approach is extended to the high-dimensional setting with high efficiency.
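Kernel-based aggregation of marker-level p-values can be sketched generically (our own Fisher-style statistic and a naive permutation null for illustration; the thesis's statistics and its correlation-preserving null procedures differ):

```python
import numpy as np

def kernel_aggregate(pvals, positions, center, bandwidth):
    # Kernel-weighted combination of marker-level p-values around a focal
    # marker: nearby small p-values push the statistic up (-log p, Fisher-style).
    w = np.exp(-0.5 * ((positions - center) / bandwidth) ** 2)
    return np.sum(w * -np.log(pvals)) / np.sum(w)

def permutation_pvalue(pvals, positions, center, bandwidth, n_perm=999, seed=0):
    # Naive null that shuffles p-values across markers; note this ignores
    # inter-marker correlation, which the thesis's permutation-based and
    # correlation-based procedures are designed to handle properly.
    rng = np.random.default_rng(seed)
    obs = kernel_aggregate(pvals, positions, center, bandwidth)
    hits = sum(
        kernel_aggregate(rng.permutation(pvals), positions, center, bandwidth) >= obs
        for _ in range(n_perm)
    )
    return (1 + hits) / (n_perm + 1)

positions = np.arange(50.0)
pvals = np.full(50, 0.5)
pvals[8:13] = 1e-4  # a CNV spanning five adjacent markers
stat_signal = kernel_aggregate(pvals, positions, center=10.0, bandwidth=2.0)
stat_null = kernel_aggregate(pvals, positions, center=40.0, bandwidth=2.0)
p = permutation_pvalue(pvals, positions, center=10.0, bandwidth=2.0)
```

The aggregate statistic is large near the cluster of associated markers and near its background level elsewhere, which is the pooling effect the abstract describes.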
7

Akcin, Haci Mustafa. "NONPARAMETRIC INFERENCES FOR THE HAZARD FUNCTION WITH RIGHT TRUNCATION." Digital Archive @ GSU, 2013. http://digitalarchive.gsu.edu/math_diss/12.

Full text
Abstract:
Incompleteness is a major feature of time-to-event data. As one type of incompleteness, truncation refers to the unobservability of the time-to-event variable because it is smaller (or greater) than the truncation variable. A truncated sample always involves left and right truncation. Left truncation has been studied extensively, while right truncation has not received the same level of attention. In one of the earliest studies on right truncation, Lagakos et al. (1988) proposed to transform a right-truncated variable to a left-truncated variable and then apply existing methods to the transformed variable. The reverse-time hazard function is introduced through this transformation. However, this quantity does not have a natural interpretation. There exist gaps in the inferences for the regular forward-time hazard function with right-truncated data. This dissertation discusses variance estimation of the cumulative hazard estimator, the one-sample log-rank test, and comparison of hazard rate functions among finite independent samples in the context of right truncation. First, the relation between the reverse- and forward-time cumulative hazard functions is clarified. This relation leads to nonparametric inference for the cumulative hazard function. Jiang (2010) recently conducted research in this direction and proposed two variance estimators of the cumulative hazard estimator. Some revision to these variance estimators is suggested in this dissertation and evaluated in a Monte Carlo study. Second, this dissertation studies hypothesis testing for right-truncated data. A series of tests is developed with the hazard rate function as the target quantity. A one-sample log-rank test is first discussed, followed by a family of weighted tests for comparison among finite K samples. Particular weight functions lead to the log-rank, Gehan, and Tarone-Ware tests, and these three tests are evaluated in a Monte Carlo study.
Finally, this dissertation studies nonparametric inference for the hazard rate function with right-truncated data. The kernel smoothing technique is utilized in estimating the hazard rate function. A Monte Carlo study investigates the uniform kernel smoothed estimator and its variance estimator. The uniform, Epanechnikov, and biweight kernel estimators are implemented in an example of blood-transfusion-infected AIDS data.
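For orientation, the one-sample log-rank idea in its classical untruncated, uncensored form compares observed events to those expected under a hypothesized cumulative hazard (a sketch only; the dissertation's version is adapted to right truncation):

```python
import numpy as np

def one_sample_logrank(times, cum_hazard0):
    # One-sample log-rank statistic: observed events vs. events expected
    # under the hypothesized cumulative hazard; Z is approximately N(0, 1)
    # under the null for large samples.
    observed = len(times)
    expected = cum_hazard0(times).sum()
    return (observed - expected) / np.sqrt(expected)

rng = np.random.default_rng(0)
times = rng.exponential(1.0, size=1000)            # unit exponential: Lambda0(t) = t
z_true = one_sample_logrank(times, lambda t: t)     # data generated under H0
z_wrong = one_sample_logrank(times, lambda t: 2 * t)  # doubled cumulative hazard
```

Under the correct hypothesis the statistic stays near zero; under a misspecified hazard it diverges, which is what the weighted-test family generalizes to K-sample comparisons.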
8

Li, Na. "MMD and Ward criterion in a RKHS : application to Kernel based hierarchical agglomerative clustering." Thesis, Troyes, 2015. http://www.theses.fr/2015TROY0033/document.

Full text
Abstract:
Clustering, a useful tool for unsupervised classification, is the task of grouping objects according to some measured or perceived characteristics, and it has achieved great success in exploring the hidden structure of unlabeled data sets. Kernel-based clustering algorithms have shown great prominence. They provide competitive performance compared with conventional methods owing to their ability to transform nonlinear problems into linear ones in a higher-dimensional feature space. In this work, we propose a Kernel-based Hierarchical Agglomerative Clustering (KHAC) algorithm using Ward's criterion.
Our method is induced by a recently proposed criterion called the Maximum Mean Discrepancy (MMD). This criterion was first proposed to measure the difference between distributions and can easily be embedded into an RKHS. Close relationships have been proved between MMD and Ward's criterion. In our KHAC method, selection of the kernel parameter and determination of the number of clusters have been studied and provide satisfactory performance. Finally, an iterative KHAC algorithm is proposed which aims at determining the optimal kernel parameter, giving a meaningful number of clusters, and partitioning the data set automatically.
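The MMD-Ward connection the abstract mentions can be sketched as follows: in the RKHS, the squared distance between two cluster means is the biased MMD² estimate, and Ward's merging cost scales it by the cluster sizes (a generic illustration; names and the RBF kernel choice are ours):

```python
import numpy as np

def rbf_kernel(a, b, gamma=0.5):
    # Gaussian RBF kernel matrix between rows of a and b.
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def ward_cost_rkhs(x, idx_a, idx_b, gamma=0.5):
    # Ward's merging cost computed entirely through the kernel trick:
    # (n_a * n_b / (n_a + n_b)) * || mean_a - mean_b ||^2 in the RKHS,
    # where the squared mean distance is the biased MMD^2 estimate.
    a, b = x[idx_a], x[idx_b]
    mmd2 = (rbf_kernel(a, a, gamma).mean() + rbf_kernel(b, b, gamma).mean()
            - 2 * rbf_kernel(a, b, gamma).mean())
    na, nb = len(idx_a), len(idx_b)
    return na * nb / (na + nb) * mmd2

rng = np.random.default_rng(0)
blob1 = rng.normal(0.0, 0.3, size=(30, 2))
blob2 = rng.normal(3.0, 0.3, size=(30, 2))
x = np.vstack([blob1, blob2])
between = ward_cost_rkhs(x, np.arange(30), np.arange(30, 60))
within = ward_cost_rkhs(x, np.arange(15), np.arange(15, 30))  # two halves of blob1
```

A hierarchical agglomeration would repeatedly merge the pair of clusters with the smallest such cost, so well-separated blobs are merged last.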
9

Bissyande, Tegawende. "Contributions for improving debugging of kernel-level services in a monolithic operating system." PhD thesis, Université Sciences et Technologies - Bordeaux I, 2013. http://tel.archives-ouvertes.fr/tel-00821893.

Full text
Abstract:
While research on the quality of systems code has attracted tremendous interest, operating systems still struggle with reliability problems, notably due to programming bugs in kernel-level services such as device drivers and file system implementations. Studies have shown that each version of the Linux kernel contains between 600 and 700 faults, and that device drivers are up to seven times more error-prone than any other part of the kernel. These figures suggest that kernel service code is insufficiently tested and that many defects go unnoticed or are difficult to fix for non-expert programmers, even though such programmers form the majority of service developers. This thesis proposes a new approach to debugging and testing kernel services. Our approach focuses on the interaction between kernel services and the core kernel, addressing the issue of "safety holes" in the definitions of kernel API functions. In the context of the Linux kernel, we have developed an automated approach, named Diagnosys, which relies on static analysis of kernel code to identify, classify, and expose the various API safety holes that could lead to runtime faults when the functions are used in service code written by developers with limited knowledge of kernel subtleties. To illustrate our approach, we implemented Diagnosys for version 2.6.32 of the Linux kernel and demonstrated its benefits in supporting developers in their testing and debugging activities.
10

Singh, Yuvraj. "Regression Models to Predict Coastdown Road Load for Various Vehicle Types." The Ohio State University, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=osu1595265184541326.

Full text