Dissertations / Theses on the topic 'Kerben'

Consult the top 50 dissertations / theses for your research on the topic 'Kerben.'


You can also download the full text of each publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses across a wide variety of disciplines and organise your bibliography correctly.

1

Schubnell, Jan [Verfasser], and V. [Akademischer Betreuer] Schulze. "Experimentelle und numerische Untersuchung des Ermüdungsverhaltens von verfestigten Kerben und Schweißverbindungen nach dem Hochfrequenzhämmern / Jan Schubnell ; Betreuer: V. Schulze." Karlsruhe : KIT-Bibliothek, 2021. http://d-nb.info/1239180632/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Kremer, Tobias [Verfasser]. "Analyse und Optimierung von Kerben in Faser-Kunststoff-Verbunden : Methoden zur Analyse und Bewertung von Ausschnitten sowie werkstoff-spezifische Optimierungsverfahren / Tobias Kremer." Aachen : Shaker, 2007. http://d-nb.info/116434045X/34.

Full text
3

Dieringer, Rolf. "Erweiterungen der Rand-Finite-Elemente-Methode zur Analyse von Platten und Laminaten mit besonderem Fokus auf der Ermittlung von Singularitätsordnungen an Rissen und Kerben." Phd thesis, Studienbereich Mechanik, Technische Universität Darmstadt, 2015. https://tuprints.ulb.tu-darmstadt.de/4592/1/Dissertation_Dieringer_final.pdf.

Full text
Abstract:
This thesis presents extensions of the boundary finite element method for the analysis of plates and laminates. The method not only solves complex boundary value problems, but also determines singularity orders at geometric and material discontinuities efficiently and accurately, without additional numerical effort, which is a decisive advantage over other computational approaches. The first part of the thesis covers the theoretical foundations. New elements for plates and laminates are then formulated and verified with examples; the extensions of the boundary finite element method converge very well. Finally, singularities at cracks and notches are determined. Besides isotropic and anisotropic materials, different boundary conditions on the notch faces are examined, and their influence on the strength of the singularities is discussed. For laminates, the thesis investigates how couplings between membrane and plate behaviour affect the singularities. For many configurations, supersingularities are found that are stronger than the classical 1/√r crack-tip singularity.
4

Daryusi, Ali. "Bedeutung der Kerbwirkung für den Konstrukteur - numerische Berechnungen mit Creo Simulate und didaktische Vermittlung in der CAE-Lehre." Universitätsbibliothek Chemnitz, 2013. http://nbn-resolving.de/urn:nbn:de:bsz:ch1-qucosa-114526.

Full text
Abstract:
This talk presents first results obtained with the CAE program "Creo Simulate", compared with "Ansys Workbench", for computing stress concentration factors and local notch stresses at notched machine elements. The talk opens with an introduction to the history of notch-effect research, presents failure cases, and then briefly reviews the fundamentals of computing stress concentration and fatigue notch factors. It continues with convergence studies on solid shafts with retaining-ring grooves to DIN 471 and on stepped solid shafts to DIN 743 under tensile and torsional loading; the convergence results are compiled and briefly commented on. Further FE computations determined stress concentration factors for stepped hollow shafts and for solid shafts with the superimposed notches "shoulder and cross-hole" under tensile, bending, and torsional loading. Corresponding concentration-factor diagrams and values as well as stress distribution plots are given for each load case and equivalent-stress hypothesis (GEH, i.e. von Mises, or NSH, i.e. normal stress hypothesis). One way to reduce the notch effect at the critical notch location is the deliberate use of additional notches adjacent to the main notch. Such relief notches can yield a significant stress reduction at the endangered location, but new stress peaks arise at the relief notches themselves, which may prove unfavourable. To mitigate notch effects by means of design relief notches, exemplary FEM studies were carried out on stepped solid shafts and on serrated shafts to DIN 5481 under tensile, bending, and torsional loading. The influence of the rectangular shape of the relief groove on the concentration factors for each load case is presented and briefly discussed.
The notch effects at involute splined connections to DIN 5480 with free and with bound run-out, and with retaining-ring grooves to DIN 471, were likewise investigated, with the aim of capturing the location and distribution of the stress maxima, the magnitude of the notch stresses, and the influence of the retaining-ring groove on the stress elevations in the critical regions. Additional FE studies addressed stress elevations in complex cast components, using a planet carrier for planetary gearboxes in wind turbines as an example; the choice of boundary conditions is briefly presented together with the resulting findings. The talk also briefly describes the development of a new didactic concept for design education, intended to improve students' presentation and teamwork skills, and reports first experience from its use in the course "CAD/CAE". Working in groups structured according to the stranded-rope method, and taking the heterogeneous environment into account, the students develop numerical solutions to parametric variants of given tasks and present their results as 100-second talks; the development of this method drew inspiration from karate. Various criteria for assessing these micro-presentations were defined and applied. First experience with this combination of methods is promising; a detailed statistical-psychological evaluation of the didactic concept is the goal of further investigations.
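The form factors (Formzahlen, Kt) and fatigue notch factors (Kerbwirkungszahlen, Kf) discussed above are linked by the material's notch sensitivity. As a minimal illustration of that standard relation, here is a sketch using Peterson's notch sensitivity formula; the material constant and all numeric values are illustrative assumptions, not results from the talk:

```python
# Relation between the elastic stress concentration factor Kt (Formzahl) and
# the fatigue notch factor Kf (Kerbwirkungszahl) via Peterson's notch
# sensitivity q = 1 / (1 + a / r). The constant `a` (mm) and the example
# values are assumed for illustration only.

def notch_sensitivity(a_mm, r_mm):
    """Peterson notch sensitivity q in [0, 1]; r_mm is the notch root radius."""
    return 1.0 / (1.0 + a_mm / r_mm)

def fatigue_notch_factor(kt, q):
    """Kf = 1 + q * (Kt - 1); Kf approaches Kt for a fully notch-sensitive material."""
    return 1.0 + q * (kt - 1.0)

kt = 2.5                                      # form factor, e.g. from an FE run (assumed)
q = notch_sensitivity(a_mm=0.25, r_mm=1.0)    # q = 0.8
kf = fatigue_notch_factor(kt, q)              # Kf = 1 + 0.8 * (2.5 - 1) = 2.2
print(f"q = {q:.2f}, Kf = {kf:.2f}")
```

With these assumed values the effective fatigue notch factor drops from Kt = 2.5 to Kf = 2.2; a relief notch that lowers Kt at the critical location lowers Kf in turn.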
5

Dieringer, Rolf [Verfasser], Wilfried [Akademischer Betreuer] Becker, and Friedrich [Akademischer Betreuer] Gruttmann. "Erweiterungen der Rand-Finite-Elemente-Methode zur Analyse von Platten und Laminaten mit besonderem Fokus auf der Ermittlung von Singularitätsordnungen an Rissen und Kerben / Rolf Dieringer. Betreuer: Wilfried Becker ; Friedrich Gruttmann." Darmstadt : Universitäts- und Landesbibliothek Darmstadt, 2015. http://d-nb.info/1111113572/34.

Full text
6

Keren, Ilai Naftaly. "Thermal balance model for cattle grazing winter range." Thesis, Montana State University, 2005. http://etd.lib.montana.edu/etd/2005/keren/KerenI0805.pdf.

Full text
7

Daryusi, Ali. "Beitrag zur Ermittlung der Kerbwirkung an Zahnwellen mit freiem und gebundenem Auslauf." Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2009. http://nbn-resolving.de/urn:nbn:de:bsz:14-ds-1240915811153-56748.

Full text
Abstract:
Ongoing technological development in gearbox, cardan-shaft, machine-tool, automotive, and agricultural machinery engineering sharply increases the powers and torques to be transmitted. This leads to a growing demand for positive-locking profiled shaft-hub connections with higher service life and accuracy. Splined shaft-hub connections (ZWVen) with involute flanks to DIN 5480 /N1/ are the standard choice for a large number of applications. Depending on strength considerations, manufacturing process, and available space, almost exclusively the following two basic types occur in practice: first, the splined shaft (ZW) with free run-out; second, the splined shaft with bound run-out, which may contain a retaining-ring groove (SRN) standardised in DIN 471 /N2/. Splined connections transmit large, alternating, and shock-type torques without an additional connecting element, through the profiling of shaft and hub. Axial movability under load, the possibility of profile shift, easy assembly and disassembly, and manufacture by highly productive forming and cutting mass-production processes that keep manufacturing costs comparatively low are technically significant properties driving the increasing use of such connections (e.g. /N1/, /Vil84/, /Koh86/ and /Wes96/). Severe notch effects and considerable overdimensioning of adjacent design zones are the main weak points of profiled connections. A large share (about 80 %) of failures in mechanical engineering can be traced to damage of axles and shafts caused by design-induced notches (e.g. /N3/ and /Hai89/). Especially in highly stressed profiled shaft connections, the strong changes in cross-section and the frequently used run-outs and form elements, e.g. splined and keyed shafts, produce notch effects that cause considerable local stress concentrations both in the tooth-root region and spline run-out and in the region of the connection itself. In almost half of all splined-shaft fractures, these stress concentrations are the most frequent cause of fatigue fractures and of damage (permanent deformation, incipient cracking, forced rupture) under maximum load. Here the load elevation at the edge of the shaft-hub connection coincides with the stiffness jump at the end of the spline on the shaft /Die93/. The failure cases mentioned show that current knowledge about the stress-appropriate design of splined shafts is still quite incomplete. New findings on stress concentration and fatigue notch factors for single and multiple notches of sharply and less sharply notched splined shafts with run-out are therefore required for reliable strength calculation, and they constitute the main focus of this work. The present research project, the first to address the determination of stresses in splined shafts with free and bound run-out under torsional and bending loads, was initiated and carried out within the Forschungsvereinigung für Antriebstechnik e.V. (FVA) under number T 467 and the research topic „Ermittlung der Kerbwirkung bei Profilwellen für die praktische Getriebeberechnung von Zahnwellen“.
8

Walls, Jacob. "Kernel." Thesis, University of Oregon, 2015. http://hdl.handle.net/1794/19203.

Full text
Abstract:
Kernel is a fifteen-minute work for wind ensemble. Its unifying strands of rhythm, melody, and harmony are spun out of simple four-note tone clusters which undergo changes in contour, intervallic inversion, register, texture, and harmonic environment. These four notes make up the "kernel" of the work, a word used by Breton to refer to the indestructible element of darkness prior to all creative invention, as well as a term used in computer science to refer to the crucial element of a system that, if it should fail, does so loudly.
9

Müller-Kelwing, Karin. "Walter Kersten." Böhlau Verlag, 2020. https://slub.qucosa.de/id/qucosa%3A75105.

Full text
10

Keränen, Soile V. E. "The developmental basis for the evolution of muroid dentition : analysis of gene expression patterns and tooth morphogenesis in the mouse and sibling vole." Helsinki : University of Helsinki, 2000. http://ethesis.helsinki.fi/julkaisut/mat/bioti/vk/keranen/.

Full text
11

Keränen, Petteri. "Aspects of massive neutrinos in astrophysics and cosmology." Helsinki : University of Helsinki, 1999. http://ethesis.helsinki.fi/julkaisut/mat/fysii/vk/keranen/.

Full text
12

Chossy, Thomas von. "Oberflächen- und Kompressionseigenschaften von Kernen in relativistischer Mittelfeldnäherung." Diss., lmu, 2002. http://nbn-resolving.de/urn:nbn:de:bvb:19-10857.

Full text
13

Peter, Ingo. "Verstärkter Neutronen-Paar-Transfer zwischen superfluiden schweren Kernen." [S.l.] : [s.n.], 1998. http://www.diss.fu-berlin.de/1999/12/index.html.

Full text
14

Pfeiffer, Marion. "Spektroskopische Untersuchung hochionisierten Plasmas in aktiven galaktischen Kernen." [S.l. : s.n.], 2000. http://deposit.ddb.de/cgi-bin/dokserv?idn=961688815.

Full text
15

George, Sharath. "Usermode kernel : running the kernel in userspace in VM environments." Thesis, University of British Columbia, 2008. http://hdl.handle.net/2429/2858.

Full text
Abstract:
In many instances of virtual machine deployments today, virtual machine instances are created to support a single application. Traditional operating systems provide an extensive framework for protecting one process from another. In such deployments, this protection layer becomes an additional source of overhead, as isolation between services is provided at the operating-system level and each instance of an operating system supports only one service. This makes the operating system the equivalent of a process from the traditional operating system perspective. Isolation between these operating systems, and indirectly the services they support, is ensured by the virtual machine monitor in these deployments. In these scenarios the process protection provided by the operating system becomes redundant and a source of additional overhead. We propose a new model for these scenarios, with operating systems that bypass this redundant protection offered by traditional operating systems. We prototyped such an operating system by executing parts of the operating system in the same protection ring as user applications. This gives processes more power and access to kernel memory, bypassing the need to copy data from user to kernel space and vice versa as is required when the traditional ring protection layer is enforced. This saves the system-call trap overhead and allows application programmers to directly call kernel functions, exposing the rich kernel library. This does not compromise security of the other virtual machines running on the same physical machine, as they are protected by the VMM. We illustrate the design and implementation of such a system with the Xen hypervisor and the XenoLinux kernel.
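The system-call trap overhead this design avoids can be made concrete with a small, machine-dependent micro-benchmark: timing a syscall-backed call against a plain in-process call. This is only a rough illustration of the qualitative gap and says nothing about the Xen/XenoLinux prototype itself:

```python
# Time a real syscall-backed call (os.getpid on Linux) against a plain
# in-process Python function call. Absolute numbers vary by machine and
# interpreter; only the qualitative difference is of interest here.
import os
import timeit

def plain_call():
    return 42

n = 200_000
t_syscall = timeit.timeit(os.getpid, number=n)   # crosses the user/kernel boundary
t_plain = timeit.timeit(plain_call, number=n)    # stays in the same protection ring
print(f"getpid: {t_syscall:.4f}s, plain call: {t_plain:.4f}s over {n} calls")
```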
16

Guo, Lisong. "Boost the Reliability of the Linux Kernel : Debugging kernel oopses." Thesis, Paris 6, 2014. http://www.theses.fr/2014PA066378/document.

Full text
Abstract:
When a failure occurs in the Linux kernel, the kernel emits an error report called a “kernel oops”, summarizing the execution context of the failure. Kernel oopses describe real Linux errors, and thus can help prioritize debugging efforts and motivate the design of tools to improve the reliability of Linux code. Nevertheless, the information is only meaningful if it is representative and can be interpreted correctly. In this thesis, we study a collection of kernel oopses over a period of eight months from a repository that is maintained by Red Hat. We consider the overall features of the data, the degree to which the data reflects other information about Linux, and the interpretation of features that may be relevant to reliability. We find that the data correlates well with other information about Linux, but that it suffers from duplicate and missing information. We furthermore identify some potential pitfalls in studying features such as the sources of common faults and common failing applications. Furthermore, a kernel oops provides valuable first-hand information for a Linux kernel maintainer to conduct postmortem debugging, since it logs the status of the Linux kernel at the time of a crash. However, debugging based only on the information in a kernel oops is difficult. To help developers with debugging, we devised a solution to derive the offending line from a kernel oops, i.e., the line of source code that incurs the crash. For this, we propose a novel algorithm based on approximate sequence matching, as used in bioinformatics, to automatically pinpoint the offending line based on information about nearby machine-code instructions, as found in a kernel oops. Our algorithm achieves 92% accuracy, compared to 26% for the traditional approach of using only the oops instruction pointer. We integrated the solution into a tool named OOPSA, which relieves some of the burden on developers when debugging kernel oopses.
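The offending-line idea can be illustrated with a toy sketch: fuzzily match the instruction snippet dumped in an oops against candidate instruction sequences keyed by source line. The mapping below is invented, and Python's difflib stands in for the thesis's bioinformatics-style matching algorithm:

```python
# Toy illustration of pinpointing an offending line by approximate sequence
# matching: score each candidate instruction sequence against the snippet
# recovered from the oops and return the best-matching source line.
from difflib import SequenceMatcher

# hypothetical mapping: source line -> disassembled instructions for that line
candidates = {
    101: "mov rax, [rbx]; test rax, rax",
    102: "mov rdi, [rax+8]; call printk",   # dereferences rax: the likely crash site
    103: "add rsp, 16; ret",
}

oops_snippet = "mov rdi, [rax+8]; call printk"  # code bytes dumped in the oops

def offending_line(snippet, table):
    scores = {line: SequenceMatcher(None, snippet, insns).ratio()
              for line, insns in table.items()}
    return max(scores, key=scores.get)

print(offending_line(oops_snippet, candidates))  # → 102
```

The real problem is harder: the oops contains machine code, not text, and the matching must tolerate compiler-introduced differences, which is why the thesis borrows approximate sequence matching from bioinformatics rather than exact lookup.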
17

Mika, Sebastian. "Kernel Fisher discriminants." [S.l.] : [s.n.], 2002. http://deposit.ddb.de/cgi-bin/dokserv?idn=967125413.

Full text
18

Sun, Fangzheng. "Kernel Coherence Encoders." Digital WPI, 2018. https://digitalcommons.wpi.edu/etd-theses/252.

Full text
Abstract:
In this thesis, we introduce a novel model based on the idea of autoencoders. Different from a classic autoencoder, which reconstructs its own inputs through a neural network, our model is closer to Kernel Canonical Correlation Analysis (KCCA) and reconstructs input data from another data set, where these two data sets should have some, perhaps non-linear, dependence. Our model extends traditional KCCA in that the non-linearity of the data is learned by optimizing a kernel function with a neural network. In one of the novelties of this thesis, we do not optimize our kernel based upon some prediction error metric, as is classical in autoencoders. Rather, we optimize our kernel to maximize the "coherence" of the underlying low-dimensional hidden layers. This idea makes our method faithful to the classic interpretation of linear Canonical Correlation Analysis (CCA). As far as we are aware, our method, which we call a Kernel Coherence Encoder (KCE), is the only extant approach that uses the flexibility of a neural network while maintaining the theoretical properties of classic KCCA. In another one of the novelties of our approach, we leverage a modified version of classic coherence, which is far more stable in the presence of high-dimensional data, to address computational and robustness issues in the implementation of a coherence-based deep learning KCCA.
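For orientation, the classical linear CCA that KCCA and the proposed encoder generalise can be computed in a few lines: whiten each view and take the singular values of the whitened cross-covariance, which are the canonical correlations. The toy two-view data below is an illustrative assumption:

```python
# Linear CCA sketch: the first canonical correlation is the largest singular
# value of Cxx^{-1/2} Cxy Cyy^{-1/2}. Toy data with a shared latent signal.
import numpy as np

rng = np.random.default_rng(0)
z = rng.normal(size=(500, 1))                       # shared latent signal
X = np.hstack([z, rng.normal(size=(500, 2))])       # view 1: signal + noise dims
Y = np.hstack([z + 0.1 * rng.normal(size=(500, 1)), # view 2: noisy copy + noise dims
               rng.normal(size=(500, 2))])

def first_canonical_correlation(X, Y, eps=1e-9):
    Xc, Yc = X - X.mean(0), Y - Y.mean(0)
    def inv_sqrt(C):                       # inverse matrix square root (whitening)
        w, V = np.linalg.eigh(C)
        return V @ np.diag(1.0 / np.sqrt(np.maximum(w, eps))) @ V.T
    M = inv_sqrt(Xc.T @ Xc) @ (Xc.T @ Yc) @ inv_sqrt(Yc.T @ Yc)
    return np.linalg.svd(M, compute_uv=False)[0]

rho = first_canonical_correlation(X, Y)
print(round(float(rho), 3))   # close to 1 for this strongly dependent pair
```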
19

Karlsson, Viktor, and Erik Rosvall. "Extreme Kernel Machine." Thesis, KTH, Skolan för teknikvetenskap (SCI), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-211566.

Full text
Abstract:
The purpose of this report is to examine the combination of an Extreme Learning Machine (ELM) with the kernel method. Kernels lie at the core of Support Vector Machines' success in classifying non-linearly separable datasets. The hypothesis is that by combining an ELM with a kernel we will utilize features in the ELM space that would otherwise go unused. The report is intended as a proof of concept for the idea of using kernel methods in an ELM setting. This is done by running the new algorithm against five image datasets for a classification accuracy and time complexity analysis. Results show that our extended ELM algorithm, which we have named the Extreme Kernel Machine (EKM), improves classification accuracy for some datasets compared to the regularised ELM, in the best scenarios by around three percentage points. We found that the choice of kernel type and parameter values had a great effect on classification performance. The implementation of the kernel does, however, add computational complexity, but where that is not a concern EKM does have an advantage. This tradeoff might give EKM a place between other neural networks and regular ELMs.
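The regularised ELM baseline that EKM extends can be sketched briefly: a fixed random hidden layer followed by a ridge-regression readout. The toy two-class problem and all hyperparameters below are illustrative assumptions, not the report's image datasets:

```python
# Minimal regularised ELM: random, untrained input weights project the data
# into a hidden feature space; only the linear readout is fitted, in closed
# form, by ridge regression.
import numpy as np

rng = np.random.default_rng(1)
n, d, hidden, lam = 400, 2, 64, 1e-2

X = rng.normal(size=(n, d))
y = (X[:, 0] * X[:, 1] > 0).astype(float) * 2 - 1   # XOR-like labels, not linearly separable

W = rng.normal(size=(d, hidden))   # random input weights, never trained
b = rng.normal(size=hidden)
H = np.tanh(X @ W + b)             # ELM feature space

# ridge readout: beta = (H^T H + lam * I)^{-1} H^T y
beta = np.linalg.solve(H.T @ H + lam * np.eye(hidden), H.T @ y)
acc = np.mean(np.sign(H @ beta) == y)
print(f"training accuracy: {acc:.2f}")
```

A kernelised variant would replace the explicit random feature map H with a kernel matrix between training points, which is the direction the report explores.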
20

Klein, Heiko. "Protonen-Neutronen-Schwingungen in den Kernen 96Ru und 64Zn." [S.l. : s.n.], 2001. http://deposit.ddb.de/cgi-bin/dokserv?idn=962397466.

Full text
21

Santing-Wubs, Albertha Harma. "Kerken in geding : de burgerlijke rechter en kerkelijke geschillen /." [Den Haag] : Boom Juridische uitgevers, 2002. http://www.gbv.de/dms/spk/sbb/recht/toc/36073300X.pdf.

Full text
22

Kerpen, Nils Bernhard [Verfasser]. "Wave-induced responses of stepped revetments / Nils Bernhard Kerpen." Hannover : Technische Informationsbibliothek (TIB), 2017. http://d-nb.info/1149832770/34.

Full text
23

Michelfelder, Birgit Christiane. "Trag- und Verformungsverhalten von Kerven bei Brettstapel-Beton-Verbunddecken." [S.l. : s.n.], 2006. http://nbn-resolving.de/urn:nbn:de:bsz:93-opus-28911.

Full text
24

Kerzel, Juliane. "Die kulturelle Gestaltung des Sonntags im 20. Jahrhundert." Mannheim : Mateo, 2000. http://www.uni-mannheim.de/mateo/verlag/diss/Kerzel/kerzelabs.html.

Full text
25

Jin, Bo. "Evolutionary Granular Kernel Machines." Digital Archive @ GSU, 2007. http://digitalarchive.gsu.edu/cs_diss/15.

Full text
Abstract:
Kernel machines such as Support Vector Machines (SVMs) have been widely used in various data mining applications with good generalization properties. Performance of SVMs for solving nonlinear problems is highly affected by kernel functions. The complexity of SVM training is mainly related to the size of the training dataset. How to design a powerful kernel, how to speed up SVM training, and how to train SVMs with millions of examples are still challenging problems in SVM research. For these important problems, powerful and flexible kernel trees called Evolutionary Granular Kernel Trees (EGKTs) are designed to incorporate prior domain knowledge. The Granular Kernel Tree Structure Evolving System (GKTSES) is developed to evolve the structures of Granular Kernel Trees (GKTs) without prior knowledge. A voting scheme is also proposed to reduce the prediction deviation of GKTSES. To speed up EGKT optimization, a master-slave parallel model is implemented. To help SVMs tackle large-scale data mining, a Minimum Enclosing Ball (MEB) based data reduction method is presented, and a new MEB-SVM algorithm is designed. All these kernel methods are designed based on Granular Computing (GrC). In general, Evolutionary Granular Kernel Machines (EGKMs) are investigated to optimize kernels effectively, speed up training greatly, and mine huge amounts of data efficiently.
26

Karim, Khan Shahid. "Abstract Kernel Management Environment." Thesis, Linköping University, Department of Electrical Engineering, 2003. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-1806.

Full text
Abstract:

The Kerngen module in MATLAB can be used to optimize a filter with regard to an ideal filter, taking the weighting function and the spatial mask into consideration. Being able to perform these optimizations remotely, from a standard web browser over a TCP/IP network connection, would be of interest. This master's thesis covers the project of building such a system, along with an attempt to graphically display three-dimensional filters and to save the optimized filter in XML format. This includes defining an appropriate DTD for representing the filter. The result is a working system, with a server and a client written in the programming language Pike.

27

Andersson, Björn. "Contributions to Kernel Equating." Doctoral thesis, Uppsala universitet, Statistiska institutionen, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-234618.

Full text
Abstract:
The statistical practice of equating is needed when scores on different versions of the same standardized test are to be compared. This thesis constitutes four contributions to the observed-score equating framework kernel equating. Paper I introduces the open source R package kequate which enables the equating of observed scores using the kernel method of test equating in all common equating designs. The package is designed for ease of use and integrates well with other packages. The equating methods non-equivalent groups with covariates and item response theory observed-score kernel equating are currently not available in any other software package. In paper II an alternative bandwidth selection method for the kernel method of test equating is proposed. The new method is designed for usage with non-smooth data such as when using the observed data directly, without pre-smoothing. In previously used bandwidth selection methods, the variability from the bandwidth selection was disregarded when calculating the asymptotic standard errors. Here, the bandwidth selection is accounted for and updated asymptotic standard error derivations are provided. Item response theory observed-score kernel equating for the non-equivalent groups with anchor test design is introduced in paper III. Multivariate observed-score kernel equating functions are defined and their asymptotic covariance matrices are derived. An empirical example in the form of a standardized achievement test is used and the item response theory methods are compared to previously used log-linear methods. In paper IV, Wald tests for equating differences in item response theory observed-score kernel equating are conducted using the results from paper III. Simulations are performed to evaluate the empirical significance level and power under different settings, showing that the Wald test is more powerful than the Hommel multiple hypothesis testing method. 
Data from a psychometric licensure test and a standardized achievement test are used to exemplify the hypothesis testing procedure. The results show that the Wald test can lead to different conclusions from those of the Hommel procedure.
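The continuization step at the heart of kernel equating can be sketched in a few lines: a discrete score distribution is smoothed into a continuous CDF by placing a Gaussian kernel of bandwidth h on each score point. The toy probabilities and the fixed bandwidth below are illustrative assumptions; the thesis's methods select the bandwidth data-dependently and use the variance-preserving form of the smoothing:

```python
# Gaussian-kernel continuization of a discrete score distribution:
# F(x) = sum_j p_j * Phi((x - s_j) / h), where Phi is the standard normal CDF.
import numpy as np
from math import erf, sqrt

def continuized_cdf(x, scores, probs, h):
    """Continuous CDF from score points `scores` with probabilities `probs`."""
    return sum(p * 0.5 * (1 + erf((x - s) / (h * sqrt(2))))
               for s, p in zip(scores, probs))

scores = np.arange(6)                              # possible scores 0..5
probs = np.array([.05, .15, .30, .30, .15, .05])   # symmetric toy distribution
h = 0.6                                            # fixed bandwidth (assumed)

# the continuized CDF is monotone and runs from ~0 to ~1 over the score range
print(round(float(continuized_cdf(-10, scores, probs, h)), 3),
      round(float(continuized_cdf(2.5, scores, probs, h)), 3),
      round(float(continuized_cdf(15, scores, probs, h)), 3))   # → 0.0 0.5 1.0
```

Equating then maps a score on one form to the score on the other form with the same continuized percentile.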
28

Ho, Ka-Lung. "Kernel eigenvoice speaker adaptation /." View Abstract or Full-Text, 2003. http://library.ust.hk/cgi/db/thesis.pl?COMP%202003%20HOK.

Full text
Abstract:
Thesis (M. Phil.)--Hong Kong University of Science and Technology, 2003.
Includes bibliographical references (leaves 56-61). Also available in electronic version. Access restricted to campus users.
29

Pevný, Tomáš. "Kernel methods in steganalysis." Diss., Online access via UMI:, 2008.

Find full text
30

Corrigan, Andrew. "Kernel-based meshless methods." Fairfax, VA : George Mason University, 2009. http://hdl.handle.net/1920/4585.

Full text
Abstract:
Thesis (Ph.D.)--George Mason University, 2009.
Vita: p. 108. Thesis co-directors: John Wallin, Thomas Wanner. Submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Computational Science and Informatics. Title from PDF t.p. (viewed Oct. 12, 2009). Includes bibliographical references (p. 102-107). Also issued in print.
31

Reichenbach, Stephen Edward. "Small-kernel image restoration." W&M ScholarWorks, 1989. https://scholarworks.wm.edu/etd/1539623783.

Full text
Abstract:
The goal of image restoration is to remove degradations that are introduced during image acquisition and display. Although image restoration is a difficult task that requires considerable computation, in many applications the processing must be performed significantly faster than is possible with traditional algorithms implemented on conventional serial architectures. as demonstrated in this dissertation, digital image restoration can be efficiently implemented by convolving an image with a small kernel. Small-kernel convolution is a local operation that requires relatively little processing and can be easily implemented in parallel. A small-kernel technique must compromise effectiveness for efficiency, but if the kernel values are well-chosen, small-kernel restoration can be very effective.;This dissertation develops a small-kernel image restoration algorithm that minimizes expected mean-square restoration error. The derivation of the mean-square-optimal small kernel parallels that of the Wiener filter, but accounts for explicit spatial constraints on the kernel. This development is thorough and rigorous, but conceptually straightforward: the mean-square-optimal kernel is conditioned only on a comprehensive end-to-end model of the imaging process and spatial constraints on the kernel. The end-to-end digital imaging system model accounts for the scene, acquisition blur, sampling, noise, and display reconstruction. The determination of kernel values is directly conditioned on the specific size and shape of the kernel. 
Experiments presented in this dissertation demonstrate that small-kernel image restoration requires significantly less computation than a state-of-the-art implementation of the Wiener filter, yet the optimal small kernel yields comparable restored images. The mean-square-optimal small-kernel algorithm and most other image restoration algorithms require a characterization of the image acquisition device (i.e., an estimate of the device's point spread function or optical transfer function). This dissertation describes an original method for accurately determining this characterization. The method extends the traditional knife-edge technique to explicitly deal with fundamental sampled-system considerations of aliasing and sample/scene phase. Results for both simulated and real imaging systems demonstrate the accuracy of the method.
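The core operation the dissertation builds on, restoring an image by convolving it with a small kernel, can be sketched as follows. The kernel values here are a generic 3×3 sharpening example, not the mean-square-optimal values derived in the thesis:

```python
import numpy as np

def convolve2d_small(image, kernel):
    """Convolve an image with a small kernel (zero-padded borders).

    This is the local, easily parallelised operation the dissertation
    exploits; only the choice of kernel values differs between a generic
    filter and the mean-square-optimal restoration kernel.
    """
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(image, ((ph, ph), (pw, pw)), mode="constant")
    out = np.zeros_like(image, dtype=float)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out

# A generic 3x3 sharpening kernel: identity plus a high-pass component.
sharpen = np.array([[ 0, -1,  0],
                    [-1,  5, -1],
                    [ 0, -1,  0]], dtype=float)

image = np.ones((5, 5))
restored = convolve2d_small(image, sharpen)
```

On the constant interior of the test image the kernel sums to 1, so interior pixels are unchanged; only the zero-padded borders deviate.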
APA, Harvard, Vancouver, ISO, and other styles
32

Dhanjal, Charanpal. "Sparse Kernel feature extraction." Thesis, University of Southampton, 2008. https://eprints.soton.ac.uk/64875/.

Full text
Abstract:
The presence of irrelevant features in training data is a significant obstacle for many machine learning tasks, since it can decrease accuracy, make it harder to understand the learned model, and increase computational and memory requirements. One approach to this problem is to extract appropriate features. General approaches such as Principal Components Analysis (PCA) are successful for a variety of applications; however, they can be improved upon by targeting feature extraction towards more specific problems. More recent work has been more focused and considers sparser formulations which potentially have improved generalisation. However, sparsity is not always efficiently implemented and frequently requires complex optimisation routines. Furthermore, one often does not have direct control over the sparsity of the solution. In this thesis, we address some of these problems, first by proposing a general framework for feature extraction which possesses a number of useful properties. The framework is based on Partial Least Squares (PLS), and one can choose a user-defined criterion to compute projection directions. It draws together a number of existing results and provides additional insights into several popular feature extraction methods. More specific feature extraction is considered for three objectives: matrix approximation, supervised feature extraction, and learning the semantics of two-view data. Computational and memory efficiency is prioritised, as well as sparsity in a direct manner and simple implementations. For the matrix approximation case, an analysis of different orthogonalisation methods is presented in terms of the optimal choice of projection direction. The analysis results in a new derivation for Kernel Feature Analysis (KFA) and the formation of two novel matrix approximation methods based on PLS.
In the supervised case, we apply the general feature extraction framework to derive two new methods based on maximising covariance and alignment respectively. Finally, we outline a novel sparse variant of Kernel Canonical Correlation Analysis (KCCA) which approximates a cardinality constrained optimisation. This method, as well as a variant which performs feature selection in one view, is applied to an enzyme function prediction case study.
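The general shape of kernel feature extraction — projection directions in a kernel-induced feature space computed from the Gram matrix — can be sketched with plain kernel PCA. This is a generic illustration, not the PLS-based framework or the sparse KCCA variant developed in the thesis:

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # Gaussian (RBF) kernel matrix between rows of X and Y.
    sq = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def kernel_pca(X, n_components=2, gamma=1.0):
    """Project data onto the top principal directions in RBF feature space."""
    n = X.shape[0]
    K = rbf_kernel(X, X, gamma)
    # Double-centre the kernel matrix (i.e. centre the data in feature space).
    J = np.eye(n) - np.ones((n, n)) / n
    Kc = J @ K @ J
    evals, evecs = np.linalg.eigh(Kc)            # ascending eigenvalues
    idx = np.argsort(evals)[::-1][:n_components]  # pick the largest ones
    # Normalise so the feature-space directions have unit norm.
    alphas = evecs[:, idx] / np.sqrt(np.maximum(evals[idx], 1e-12))
    return Kc @ alphas                            # projections of training data

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3))
Z = kernel_pca(X, n_components=2)
```

Sparse variants, roughly speaking, restrict the expansion coefficients `alphas` to few non-zero entries, which is what gives the methods in the thesis their efficiency.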
APA, Harvard, Vancouver, ISO, and other styles
33

Rademeyer, Estian. "Bayesian kernel density estimation." Diss., University of Pretoria, 2017. http://hdl.handle.net/2263/64692.

Full text
Abstract:
This dissertation investigates the performance of two-class classification on credit scoring data sets with low default ratios. The standard two-class parametric Gaussian and naive Bayes (NB) classifiers, as well as the non-parametric Parzen classifiers, are extended, using Bayes' rule, to include either a class imbalance or a Bernoulli prior. This is done with the aim of addressing the low default probability problem. Furthermore, the performance of Parzen classification with Silverman and Minimum Leave-one-out Entropy (MLE) Gaussian kernel bandwidth estimation is also investigated. It is shown that the non-parametric Parzen classifiers yield superior classification power. However, one would like these non-parametric classifiers to possess predictive power, such as that exhibited by the odds ratio found in logistic regression (LR). The dissertation therefore dedicates a section to, amongst other things, studying the paper entitled "Model-Free Objective Bayesian Prediction" (Bernardo 1999). Since this approach to Bayesian kernel density estimation is only developed for the univariate and the uncorrelated multivariate case, the section develops a theoretical multivariate approach to Bayesian kernel density estimation. This approach is theoretically capable of handling both correlated and uncorrelated features in data. This is done through the assumption of a multivariate Gaussian kernel function and the use of an inverse Wishart prior.
Dissertation (MSc)--University of Pretoria, 2017.
The financial assistance of the National Research Foundation (NRF) towards this research is hereby acknowledged. Opinions expressed and conclusions arrived at, are those of the authors and are not necessarily to be attributed to the NRF.
Statistics
MSc
Unrestricted
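The two-class Parzen classifier with Silverman bandwidth investigated in the abstract above can be sketched as follows (a 1-D toy version with synthetic data; the prior argument stands in for the class-imbalance prior discussed in the dissertation):

```python
import numpy as np

def silverman_bandwidth(x):
    # Silverman's rule of thumb for a 1-D Gaussian kernel.
    n = len(x)
    return 1.06 * np.std(x, ddof=1) * n ** (-1 / 5)

def parzen_density(x, sample, h):
    # Gaussian-kernel density estimate at points x from a 1-D sample.
    u = (x[:, None] - sample[None, :]) / h
    return np.exp(-0.5 * u ** 2).sum(axis=1) / (len(sample) * h * np.sqrt(2 * np.pi))

def parzen_classify(x, class0, class1, prior0=0.5):
    """Assign each point in x to the class with the larger posterior.

    Bayes' rule with a Parzen density estimate per class; the prior can
    encode a class imbalance, as in low-default credit scoring.
    """
    p0 = parzen_density(x, class0, silverman_bandwidth(class0)) * prior0
    p1 = parzen_density(x, class1, silverman_bandwidth(class1)) * (1 - prior0)
    return (p1 > p0).astype(int)

rng = np.random.default_rng(1)
good = rng.normal(0.0, 1.0, 200)   # illustrative "non-default" scores
bad = rng.normal(4.0, 1.0, 200)    # illustrative "default" scores
labels = parzen_classify(np.array([-0.5, 4.2]), good, bad)
```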
APA, Harvard, Vancouver, ISO, and other styles
34

Ataie, Abdul Ahad [Verfasser]. "QRPA-Rechnungen für Ladungsaustauschanregungen an exotischen Kernen / Abdul Ahad Ataie." München : Verlag Dr. Hut, 2011. http://d-nb.info/1013526570/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Scheck, Marcus. "Fragmentierung niedrigliegender Dipolmoden in ungeraden Kernen am N=82 Schalenabschluss." [S.l.] : [s.n.], 2005. http://www.bsz-bw.de/cgi-bin/xvms.cgi?SWB11878659.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Ansary, B. M. Saif. "High Performance Inter-kernel Communication and Networking in a Replicated-kernel Operating System." Thesis, Virginia Tech, 2016. http://hdl.handle.net/10919/78338.

Full text
Abstract:
Modern computer hardware platforms are moving towards high core-count and heterogeneous Instruction Set Architecture (ISA) processors to achieve improved performance, as single-core performance has reached its limit. These trends put the current monolithic SMP operating system (OS) under scrutiny in terms of scalability and portability. Proper pairing of computing workloads with computing resources has become increasingly arduous with traditional software architecture. One of the most promising emerging operating system architectures is the multi-kernel. Multi-kernels not only address scalability issues, but also inherently support heterogeneity; furthermore, they provide an easy way to properly map computing workloads to the correct type of processing resources in the presence of heterogeneity. Multi-kernels do so by partitioning the resources, running independent kernel instances, and co-operating amongst themselves to present a unified view of the system to the application. Popcorn is one of the most prominent multi-kernels today, and is unique in the sense that it runs multiple Linux instances on different cores or groups of cores and provides a unified view of the system, i.e., a Single System Image (SSI). This thesis presents four contributions. First, it introduces a filesystem for Popcorn, which is a vital part of providing an SSI. Popcorn supports thread/process migration, which requires migration of file descriptors — something provided neither by traditional filesystems nor by popular distributed file systems; this work therefore proposes a scalable, messaging-based file descriptor migration and consistency protocol for Popcorn. Second, multi-kernel OSs rely heavily on a fast, low-latency messaging layer to be scalable. Messaging is even more important in heterogeneous systems where different types of cores are on different islands with no shared memory.
Thus, another contribution proposes a fast, low-latency messaging layer to enable communication among heterogeneous processor islands for heterogeneous Popcorn. With advances in networking technology, the newest Ethernet technologies are able to support up to 40 Gbps bandwidth, but due to scalability issues in monolithic kernels, the number of connections served per second does not scale with this increase in speed. Therefore, the third and fourth contributions address this problem with Snap Bean, a virtual network device, and Angel, an opportunistic load balancer, for Popcorn's network system. With the messaging layer, Popcorn achieves over 30% performance benefit over OpenCL and the Intel offloading technique (LEO). With NetPopcorn, we achieve 7 to 8 times better performance than vanilla Linux and 2 to 5 times better than the state-of-the-art Affinity-Accept.
Master of Science
APA, Harvard, Vancouver, ISO, and other styles
37

Liang, Zhiyu. "Eigen-analysis of kernel operators for nonlinear dimension reduction and discrimination." The Ohio State University, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=osu1388676476.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Ceau, Alban. "Kernel. Application et potentiels scientifiques de l’interférométrie pleine pupille : Analyse statistique des observables kernel." Thesis, Université Côte d'Azur, 2020. http://www.theses.fr/2020COAZ4035.

Full text
Abstract:
High-resolution observations of the sky rely on two main techniques: imaging and interferometry. Imaging consists in estimating the spatial intensity distribution of a source by forming an image of that intensity distribution on a photosensitive surface, formerly by chemical means (the photographic plate) but now exclusively electronically, in the form of a detector. Imaging is limited by the quality of the images produced, which can be approximated by the size of the image a point source forms on the detector: the smaller this image, the higher the resolution. The second technique, interferometry, exploits the wave properties of light to form not an image but interference fringes, which encode information about the spatial structure of the observed object in their position and contrast. Although interferometry and imaging are two different techniques, and their specialists tend to form distinct communities, the phenomena at work in the formation of an image on the one hand and of an interference pattern on the other are fundamentally the same. Under certain conditions, this makes it possible to apply interferometric techniques to images. One such technique forms closure phases, observables robust to optical defects, from interferometric observations.
If the optical defects are small enough (with optical path errors smaller than the wavelength), it is possible to form observables analogous to these closure phases from images, called kernel phases. The regime in which these observables can be extracted has only recently become accessible, with the launch of the first space telescopes on the one hand and the emergence of adaptive optics, which can correct the defects caused by atmospheric turbulence in real time, on the other. If the defects are small enough, the images are said to be "diffraction limited": the telescope's response can be considered dominated by diffraction effects, which depend on the geometry of the entrance aperture, with the optical defects acting as perturbations of the diffraction. In this regime, the structure of the perturbation can be used to construct observables that it does not affect. These observables are, however, not robust to all errors. Here, I focused on the detection of binaries in kernel phases extracted from images, using robust statistical methods. In detection theory, the most efficient procedure for detecting a signal in noisy data is the likelihood ratio. I propose three tests, all based on this optimal procedure, to perform systematic detections of binaries in images. These procedures are applicable to kernel phases extracted from any image. The performance of these detection procedures is then predicted for observations of Y-type brown dwarfs with the James Webb Space Telescope.
We show that binary detections are possible at contrasts of up to 1000 at separations corresponding to the diffraction limit, which is commonly regarded as the "resolution limit" of an imaging telescope. This performance makes kernel interferometry a powerful method for the detection of faint binaries. These limits depend strongly on the available flux, which determines the error on the flux values measured at each pixel and, by extension, the errors affecting the kernel phases.
High-resolution observations of the sky are made using techniques that fall into two wide categories: imaging and interferometry. Imaging consists in estimating the spatial intensity distribution of a source by forming an image of this source on a photosensitive plate, historically using chemical processes (a photographic plate), but nowadays electronically, with detectors. Imaging is limited by the quality of the images, which can be approximated from the size of the image formed by a point source on the detector. The smaller this size, the higher the resolution of the image. Interferometry, the second aforementioned technique, consists in exploiting the wave properties of light to form interference fringes rather than an image. These fringes encode information on the spatial structure of the observed object in their position and contrast. Even though interferometry and imaging are two different techniques, and specialists of one or the other tend to form distinct communities, the phenomena that lead to the formation either of an image or of an interference pattern are fundamentally the same. This enables, under some conditions, the use of techniques originally developed for interferometry data on images. One of these techniques allows the construction of closure phases, observables that are robust to optical defects, from interferometry observations. If these optical defects are small enough (with optical path differences smaller than the wavelength), it is possible to form observables analogous to these closure phases, called kernel phases. The regime in which these observables can be extracted was only attained recently, with the launch of the first space telescopes and the rise of extreme adaptive optics, which can correct in real time the defects caused by atmospheric turbulence.
If these defects are small enough, images are called "diffraction limited": the response of the telescope can be considered dominated by the effects of diffraction, which depend on the geometry of the entrance aperture, and the optical defects can be described as perturbations of diffraction. In this regime, the structure of the perturbation can be used to build observables it does not affect. These observables are, however, not robust to all errors. Imperfect modelling of the entrance aperture and the approximations necessary for their construction can lead to systematic errors. Noise in the image also propagates to the observables. To be able to analyse a measurement, it is necessary to know the errors that affect it and to propagate them to the final parameters deduced from these measurements. The use case we chose to evaluate these techniques is images of cold brown dwarfs produced by the James Webb Space Telescope (JWST), to predict the detection performance for companions around them. To date, observation of these cold, Y-type dwarfs has been hampered by their very weak luminosity and low temperature, which make them very hard to observe in the near infrared, the preferred domain of AO-corrected ground-based observatories. Thanks to its great sensitivity and stability, JWST will be able to observe these objects with the greatest precision achieved yet. This stability makes images produced by this telescope ideal candidates for kernel analysis. The performance of these detection procedures is then predicted for images of cold brown dwarfs produced by JWST. For these images, we show that binary detections are possible at contrasts of up to 1000 at separations corresponding to the diffraction limit, often considered to be the resolution limit of a telescope. These contrast detection limits make kernel interferometry a powerful method for the detection of low-flux binaries.
These detection limits strongly depend on the available flux, which determines the error level on each pixel, and therefore the noise that affects the kernel phases.
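The closure-phase construction that kernel phases generalise can be illustrated with a toy computation: on a triangle of apertures, the product of the three complex visibilities cancels any per-aperture phase error. A hedged sketch (variable names and values are illustrative, not from the thesis):

```python
import numpy as np

def closure_phase(v12, v23, v31):
    # Argument of the bispectrum; aperture-based phase errors cancel.
    return np.angle(v12 * v23 * v31)

rng = np.random.default_rng(2)
# True object phases on the three baselines of a triangle.
phi12, phi23, phi31 = 0.3, -0.7, 0.1
# Per-aperture phase errors e_i corrupt each measured baseline phase as
# phase(i, j) = true_phase(i, j) + e_i - e_j.
e1, e2, e3 = rng.normal(size=3)
v12 = np.exp(1j * (phi12 + e1 - e2))
v23 = np.exp(1j * (phi23 + e2 - e3))
v31 = np.exp(1j * (phi31 + e3 - e1))

# The errors cancel around the closed triangle: cp = phi12 + phi23 + phi31.
cp = closure_phase(v12, v23, v31)
```

Kernel phases play the same role for a filled aperture: they are linear combinations of image-plane phases lying in the null space (kernel) of the phase-transfer matrix, so small aperture-plane errors drop out.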
APA, Harvard, Vancouver, ISO, and other styles
39

Cernea, Daniel [Verfasser], Achim [Akademischer Betreuer] Ebert, and Andreas [Akademischer Betreuer] Kerren. "User-Centered Collaborative Visualization / Daniel Cernea. Betreuer: Achim Ebert ; Andreas Kerren." Kaiserslautern : Technische Universität Kaiserslautern, 2015. http://d-nb.info/1069938807/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Schmidt, Jürgen. "Experimentelle und numerische Untersuchung dynamisch belasteter Verbundstrukturen mit zellularen metallischen Kernen." Düsseldorf VDI-Verl, 2009. http://d-nb.info/1000096068/04.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Prenov, B., and Nikolai Tarkhanov. "Kernel spikes of singular problems." Universität Potsdam, 2001. http://opus.kobv.de/ubp/volltexte/2008/2619/.

Full text
Abstract:
Function spaces with asymptotics are a standard tool in the analysis on manifolds with singularities. The asymptotics are singular ingredients of the kernels of pseudodifferential operators in the calculus. They correspond to potentials supported by the singularities of the manifold, and in this form asymptotics can be treated already on smooth configurations. This paper is aimed at describing refined asymptotics in the Dirichlet problem in a ball. The beauty of explicit formulas highlights the structure of asymptotic expansions in the calculi on singular varieties.
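For orientation, the explicit kernel underlying the Dirichlet problem in a ball is the classical Poisson kernel; for the unit ball in $\mathbb{R}^n$ the standard formula (quoted here as background, not taken from the paper itself) reads:

```latex
u(x) = \int_{|\zeta| = 1} P(x, \zeta)\, f(\zeta)\, d\sigma(\zeta),
\qquad
P(x, \zeta) = \frac{1}{\omega_{n-1}}\, \frac{1 - |x|^2}{|x - \zeta|^n},
```

where $f$ is the boundary datum and $\omega_{n-1}$ the surface area of the unit sphere; the paper's refined asymptotics concern expansions built on explicit kernels of this kind.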
APA, Harvard, Vancouver, ISO, and other styles
42

Guardati, Emanuele. "Path integrals and heat kernel." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2017. http://amslaurea.unibo.it/14608/.

Full text
Abstract:
The path integral was first introduced by Feynman in 1948. It constitutes a different approach to non-relativistic quantum mechanics, equivalent to the earlier formulations. In contrast to the purely mathematical approach of canonical quantisation, Feynman favoured a more intuitive interpretation, basing his work on a generalisation of the double-slit experiment. Besides offering a deeper understanding of the well-known results of non-relativistic quantum mechanics, the use of the path integral also has the advantage of simplifying perturbative calculations. In addition to showing the equivalence with Schrödinger's formulation, we discuss the classical limit and give examples of explicit calculations, determining the propagators of the free quantum particle and of the quantum harmonic oscillator. Finally, using the path integral, we study a perturbative solution of the heat equation.
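The free-particle propagator mentioned in the abstract is the standard result of the path-integral computation; in one dimension it reads:

```latex
K(x_b, t_b; x_a, t_a)
= \sqrt{\frac{m}{2\pi i \hbar\, (t_b - t_a)}}\;
\exp\!\left( \frac{i\, m\, (x_b - x_a)^2}{2 \hbar\, (t_b - t_a)} \right),
```

obtained by summing over all paths from $(x_a, t_a)$ to $(x_b, t_b)$ weighted by $e^{iS/\hbar}$; replacing $it/\hbar$ by a real diffusion time turns this into the heat kernel studied at the end of the thesis.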
APA, Harvard, Vancouver, ISO, and other styles
43

Tullo, Alessandra. "Apprendimento automatico con metodo kernel." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2021. http://amslaurea.unibo.it/23200/.

Full text
Abstract:
The aim of this work is the study of kernel methods in machine learning. Starting from the definition of reproducing kernel Hilbert spaces, kernel functions and kernel methods are examined; in particular, the kernel trick and the representer theorem are analysed. Finally, an example of a supervised machine learning problem is given — the kernel linear regression problem — solved by means of the representer theorem.
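By the representer theorem, the solution of the kernel regression problem mentioned in the abstract can be written as f(x) = Σᵢ αᵢ k(xᵢ, x), so fitting reduces to a linear system for the coefficients α. A minimal sketch (generic kernel ridge regression with illustrative data, not the thesis's worked example):

```python
import numpy as np

def rbf(a, b, gamma=1.0):
    # Gaussian kernel matrix between rows of a and b.
    sq = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def kernel_ridge_fit(X, y, lam=1e-3, gamma=1.0):
    # Representer theorem: the minimiser lies in the span of the kernel
    # functions at the training points, so alpha = (K + lam*I)^{-1} y.
    K = rbf(X, X, gamma)
    return np.linalg.solve(K + lam * np.eye(len(X)), y)

def kernel_ridge_predict(X_train, alpha, X_new, gamma=1.0):
    # f(x) = sum_i alpha_i k(x_i, x), evaluated at the new points.
    return rbf(X_new, X_train, gamma) @ alpha

X = np.linspace(0, 1, 30).reshape(-1, 1)
y = np.sin(2 * np.pi * X[:, 0])
alpha = kernel_ridge_fit(X, y, lam=1e-6, gamma=50.0)
pred = kernel_ridge_predict(X, alpha, X, gamma=50.0)
```

With a small regulariser the fit reproduces the smooth training data closely; the "kernel trick" is visible in that only kernel evaluations, never explicit feature maps, appear.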
APA, Harvard, Vancouver, ISO, and other styles
44

Cheung, Pak-Ming. "Kernel-based multiple-instance learning /." View abstract or full-text, 2006. http://library.ust.hk/cgi/db/thesis.pl?COMP%202006%20CHEUNGP.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Brinker, Klaus. "Active learning with kernel machines." [S.l. : s.n.], 2004. http://deposit.ddb.de/cgi-bin/dokserv?idn=974403946.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Subhan, Fazli. "Multilevel sparse kernel-based interpolation." Thesis, University of Leicester, 2011. http://hdl.handle.net/2381/9894.

Full text
Abstract:
Radial basis functions (RBFs) have been successfully applied for the last four decades for fitting scattered data in Rd, due to their simple implementation for any d. However, RBF interpolation faces the challenge of keeping a balance between convergence performance and numerical stability. Moreover, to ensure good convergence rates in high dimensions, one has to deal with the difficulty of exponential growth of the degrees of freedom with respect to the dimension d of the interpolation problem. This makes the application of RBFs limited to a few thousand data sites and/or low dimensions in practice. In this work, we propose a hierarchical multilevel scheme, termed the sparse kernel-based interpolation (SKI) algorithm, for the solution of interpolation problems in high dimensions. The new scheme uses direction-wise multilevel decomposition of structured or mildly unstructured interpolation data sites in conjunction with the application of kernel-based interpolants with different scaling in each direction. The new SKI algorithm can be viewed as an extension of the idea of sparse grids/hyperbolic crosses to kernel-based functions. To achieve accelerated convergence, we propose a multilevel version of the SKI algorithm. The SKI and multilevel SKI (MLSKI) algorithms admit good reproduction properties: they are numerically stable and efficient for the reconstruction of large data sets in Rd, for d = 2, 3, 4, with several thousand data points. SKI is generally superior to classical RBF methods in terms of complexity, run time, and convergence, at least for large data sets. The MLSKI algorithm accelerates the convergence of SKI and also generally converges faster than the classical multilevel RBF scheme.
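The single-level building block that SKI and MLSKI extend is plain RBF interpolation: solve the symmetric collocation system for the coefficients, then evaluate the kernel expansion. A minimal sketch with a Gaussian RBF (the shape parameter and data are illustrative):

```python
import numpy as np

def gaussian_rbf(r, eps=1.0):
    # Gaussian radial basis function of the distance r.
    return np.exp(-(eps * r) ** 2)

def rbf_interpolate(centers, values, query, eps=10.0):
    """Classical RBF interpolation: solve A c = f, then evaluate s(x) = sum_j c_j phi(|x - x_j|)."""
    d = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=-1)
    coeff = np.linalg.solve(gaussian_rbf(d, eps), values)
    dq = np.linalg.norm(query[:, None, :] - centers[None, :, :], axis=-1)
    return gaussian_rbf(dq, eps) @ coeff

rng = np.random.default_rng(3)
pts = rng.uniform(size=(40, 2))          # scattered data sites in [0,1]^2
f = np.sin(pts[:, 0]) + pts[:, 1]
# Interpolation reproduces the data at the centres themselves.
approx = rbf_interpolate(pts, f, pts, eps=10.0)
```

The dense n×n solve here is exactly the cost and conditioning bottleneck the thesis's direction-wise multilevel decomposition is designed to avoid.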
APA, Harvard, Vancouver, ISO, and other styles
47

Friess, Thilo-Thomas. "Perceptrons in kernel feature spaces." Thesis, University of Sheffield, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.327730.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Xiao, Bai. "Heat kernel analysis on graphs." Thesis, University of York, 2007. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.440819.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Hsiao, Roger Wend Huu. "Kernel eigenspace-based MLLR adaptation /." View abstract or full-text, 2004. http://library.ust.hk/cgi/db/thesis.pl?COMP%202004%20HSIAO.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Bloehdorn, Stephan. "Kernel Methods for knowledge structures." [S.l. : s.n.], 2008. http://digbib.ubka.uni-karlsruhe.de/volltexte/1000009223.

Full text
APA, Harvard, Vancouver, ISO, and other styles