A ready-made bibliography on "Kernel mean embedding"

Create correct references in APA, MLA, Chicago, Harvard, and many other styles


Browse lists of current articles, books, dissertations, abstracts, and other scholarly sources on "Kernel mean embedding".

An "Add to bibliography" button is available next to each work in the bibliography. Use it, and we will automatically create a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication as a ".pdf" file and read the work's abstract online, where these are available in the record's metadata.

Journal articles on "Kernel mean embedding"

1. Berquin, Yann. "Kernel mean embedding vs kernel density estimation: A quantum perspective." Physics Letters A 528 (December 2024): 130047. http://dx.doi.org/10.1016/j.physleta.2024.130047.
2. Jorgensen, Palle E. T., Myung-Sin Song, and James Tian. "Conditional mean embedding and optimal feature selection via positive definite kernels." Opuscula Mathematica 44, no. 1 (2024): 79–103. http://dx.doi.org/10.7494/opmath.2024.44.1.79.

Abstract: Motivated by applications, we consider new operator-theoretic approaches to conditional mean embedding (CME). Our present results combine a spectral analysis-based optimization scheme with the use of kernels, stochastic processes, and constructive learning algorithms. For initially given non-linear data, we consider optimization-based feature selections. This entails the use of convex sets of kernels in a construction of optimal feature selection via regression algorithms from learning models. Thus, with initial inputs of training data (for a suitable learning algorithm), each choice of a kern…
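As background for the conditional mean embedding discussed in the abstract above, here is a minimal NumPy sketch (illustrative only, not code from the cited paper; the kernel, bandwidth, and regularization values are arbitrary assumptions). The CME weights alpha = (K + n·lam·I)^{-1} k_x let us approximate E[f(Y) | X = x] as a weighted sum over training responses:

```python
import numpy as np

def rbf(a, b, gamma=10.0):
    """Gaussian kernel matrix for 1-D inputs (bandwidth is an arbitrary choice)."""
    return np.exp(-gamma * (a[:, None] - b[None, :]) ** 2)

rng = np.random.default_rng(1)
X = rng.uniform(-1.0, 1.0, 300)                   # covariate samples
Y = np.sin(3 * X) + 0.05 * rng.normal(size=300)   # noisy responses

lam = 1e-3                                        # ridge regularization (assumed)
K = rbf(X, X)
# CME weights at the query point x = 0.5: alpha = (K + n*lam*I)^{-1} k_x
alpha = np.linalg.solve(K + len(X) * lam * np.eye(len(X)),
                        rbf(X, np.array([0.5])))
# E[f(Y) | X = 0.5] is approximated by sum_i alpha_i f(y_i); with f = identity
# this is the conditional mean, which should land close to sin(1.5)
est = alpha[:, 0] @ Y
print(est)
```

With f the identity map this reduces to kernel ridge regression; other choices of f recover other conditional expectations from the same weights.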
3. Chen, Wei, Jun-Xiang Mao, and Min-Ling Zhang. "Learnware Specification via Label-Aware Neural Embedding." Proceedings of the AAAI Conference on Artificial Intelligence 39, no. 15 (2025): 15857–65. https://doi.org/10.1609/aaai.v39i15.33741.

Abstract: The learnware paradigm aims to establish a learnware dock system of numerous well-trained machine learning models, enabling users to reuse existing helpful models for their tasks instead of starting from scratch. Each learnware in the system is a well-established model submitted by its developer, associated with a specification generated by the learnware dock system. The specification characterizes the specialty of the corresponding model, enabling it to be identified accurately for new task requirements. Existing specification generation methods are mostly based on the Reduced Kernel Mean Emb…
4. Muandet, Krikamol, Kenji Fukumizu, Bharath Sriperumbudur, and Bernhard Schölkopf. "Kernel Mean Embedding of Distributions: A Review and Beyond." Foundations and Trends® in Machine Learning 10, no. 1-2 (2017): 1–141. http://dx.doi.org/10.1561/2200000060.
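For readers new to the topic of the review above: the empirical kernel mean embedding mu_P = (1/n) sum_i phi(x_i) induces the maximum mean discrepancy (MMD) between two samples. A minimal NumPy sketch (Gaussian kernel; all parameter values here are illustrative assumptions, not taken from the cited work):

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    """Gaussian kernel matrix k(x, y) = exp(-gamma * ||x - y||^2)."""
    sq = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def mmd2(X, Y, gamma=1.0):
    """Biased estimate of ||mu_X - mu_Y||^2 in the RKHS, where mu_P is
    the empirical kernel mean embedding of distribution P."""
    return (rbf_kernel(X, X, gamma).mean()
            + rbf_kernel(Y, Y, gamma).mean()
            - 2 * rbf_kernel(X, Y, gamma).mean())

rng = np.random.default_rng(0)
A = rng.normal(0.0, 1.0, size=(200, 2))
B = rng.normal(0.0, 1.0, size=(200, 2))  # same distribution as A
C = rng.normal(3.0, 1.0, size=(200, 2))  # shifted distribution
print(mmd2(A, B))  # near zero
print(mmd2(A, C))  # clearly larger
```

Because the embedding is linear in the feature map, the squared RKHS distance reduces entirely to kernel evaluations, which is what the three Gram-matrix means compute.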
5. Van Hauwermeiren, Daan, Michiel Stock, Thomas De Beer, and Ingmar Nopens. "Predicting Pharmaceutical Particle Size Distributions Using Kernel Mean Embedding." Pharmaceutics 12, no. 3 (2020): 271. http://dx.doi.org/10.3390/pharmaceutics12030271.

Abstract: In the pharmaceutical industry, the transition to continuous manufacturing of solid dosage forms is adopted by more and more companies. For these continuous processes, high-quality process models are needed. In pharmaceutical wet granulation, a unit operation in the ConsiGma™-25 continuous powder-to-tablet system (GEA Pharma systems, Collette, Wommelgem, Belgium), the product under study presents itself as a collection of particles that differ in shape and size. The measurement of this collection results in a particle size distribution. However, the theoretical basis to describe the physica…
6. Xu, Bi-Cun, Kai Ming Ting, and Yuan Jiang. "Isolation Graph Kernel." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 12 (2021): 10487–95. http://dx.doi.org/10.1609/aaai.v35i12.17255.

Abstract: A recent Wasserstein Weisfeiler-Lehman (WWL) Graph Kernel has a distinctive feature: Representing the distribution of Weisfeiler-Lehman (WL)-embedded node vectors of a graph in a histogram that enables a dissimilarity measurement of two graphs using Wasserstein distance. It has been shown to produce better classification accuracy than other graph kernels which do not employ such distribution and Wasserstein distance. This paper introduces an alternative called Isolation Graph Kernel (IGK) that measures the similarity between two attributed graphs. IGK is unique in two aspects among existing gr…
7. Rustamov, Raif M., and James T. Klosowski. "Kernel mean embedding based hypothesis tests for comparing spatial point patterns." Spatial Statistics 38 (August 2020): 100459. http://dx.doi.org/10.1016/j.spasta.2020.100459.
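Kernel-mean-embedding two-sample tests of the kind referenced in the entry above are commonly calibrated against a permutation null. A hedged sketch under assumed parameters (sample sizes, bandwidth, and permutation count are arbitrary choices, not taken from the cited paper):

```python
import numpy as np

def gram(Z, gamma=1.0):
    """Gaussian Gram matrix of the pooled sample."""
    sq = ((Z[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def mmd2_split(K, n):
    """Biased squared MMD between the first n rows and the rest."""
    return K[:n, :n].mean() + K[n:, n:].mean() - 2 * K[:n, n:].mean()

def mmd_permutation_test(X, Y, gamma=1.0, n_perm=200, seed=0):
    """p-value for H0: X and Y are drawn from the same distribution."""
    rng = np.random.default_rng(seed)
    Z = np.vstack([X, Y])
    K = gram(Z, gamma)
    n = len(X)
    observed = mmd2_split(K, n)
    exceed = 0
    for _ in range(n_perm):
        idx = rng.permutation(len(Z))           # relabel the pooled sample
        exceed += mmd2_split(K[np.ix_(idx, idx)], n) >= observed
    return (exceed + 1) / (n_perm + 1)

rng = np.random.default_rng(2)
X = rng.normal(0.0, 1.0, size=(80, 2))
Y = rng.normal(2.0, 1.0, size=(80, 2))
p = mmd_permutation_test(X, Y)
print(p)  # small: the null of equal distributions is rejected
```

Permuting row/column indices of the precomputed Gram matrix avoids re-evaluating the kernel at each resample, which keeps the test cheap.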
8. Hou, Boya, Sina Sanjari, Nathan Dahlin, and Subhonmesh Bose. "Compressed Decentralized Learning of Conditional Mean Embedding Operators in Reproducing Kernel Hilbert Spaces." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 7 (2023): 7902–9. http://dx.doi.org/10.1609/aaai.v37i7.25956.

Abstract: Conditional mean embedding (CME) operators encode conditional probability densities within Reproducing Kernel Hilbert Space (RKHS). In this paper, we present a decentralized algorithm for a collection of agents to cooperatively approximate CME over a network. Communication constraints limit the agents from sending all data to their neighbors; we only allow sparse representations of covariance operators to be exchanged among agents, compositions of which define CME. Using a coherence-based compression scheme, we present a consensus-type algorithm that preserves the average of the approximation…
9. Segera, Davies, Mwangi Mbuthia, and Abraham Nyete. "Particle Swarm Optimized Hybrid Kernel-Based Multiclass Support Vector Machine for Microarray Cancer Data Analysis." BioMed Research International 2019 (December 16, 2019): 1–11. http://dx.doi.org/10.1155/2019/4085725.

Abstract: Determining an optimal decision model is an important but difficult combinatorial task in imbalanced microarray-based cancer classification. Though the multiclass support vector machine (MCSVM) has already made an important contribution in this field, its performance solely depends on three aspects: the penalty factor C, the type of kernel, and its parameters. To improve the performance of this classifier in microarray-based cancer analysis, this paper proposes the PSO-PCA-LGP-MCSVM model that is based on particle swarm optimization (PSO), principal component analysis (PCA), and multiclass support…
10. Muralinath, Rashmi N., Vishwambhar Pathak, and Prabhat K. Mahanti. "Metastable Substructure Embedding and Robust Classification of Multichannel EEG Data Using Spectral Graph Kernels." Future Internet 17, no. 3 (2025): 102. https://doi.org/10.3390/fi17030102.

Abstract: Classification of neurocognitive states from Electroencephalography (EEG) data is complex due to inherent challenges such as noise, non-stationarity, non-linearity, and the high-dimensional and sparse nature of connectivity patterns. Graph-theoretical approaches provide a powerful framework for analysing the latent state dynamics using connectivity measures across spatio-temporal-spectral dimensions. This study applies the graph Koopman embedding kernels (GKKE) method to extract latent neuro-markers of seizures from epileptiform EEG activity. EEG-derived graphs were constructed using correlati…
More sources

Doctoral dissertations on "Kernel mean embedding"

1. Hsu, Yuan-Shuo Kelvin. "Bayesian Perspectives on Conditional Kernel Mean Embeddings: Hyperparameter Learning and Probabilistic Inference." Thesis, University of Sydney, 2020. https://hdl.handle.net/2123/24309.

Abstract: This thesis presents the narrative of a particular journey towards discovering and developing Bayesian perspectives on conditional kernel mean embeddings. It is motivated by the desire and need to learn flexible and richer representations of conditional distributions for probabilistic inference in various contexts. While conditional kernel mean embeddings are able to achieve such representations, it is unclear how their hyperparameters can be learned for probabilistic inference in various settings. These hyperparameters govern the space of possible representations, and critically influence the…

2. Muandet, Krikamol [Verfasser], and Bernhard [Akademischer Betreuer] Schölkopf. "From Points to Probability Measures: Statistical Learning on Distributions with Kernel Mean Embedding / Krikamol Muandet; Betreuer: Bernhard Schölkopf." Tübingen: Universitätsbibliothek Tübingen, 2015. http://d-nb.info/1163664804/34.

3. Fermanian, Jean-Baptiste. "High dimensional multiple means estimation and testing with applications to machine learning." Electronic Thesis or Diss., université Paris-Saclay, 2024. http://www.theses.fr/2024UPASM035.

Abstract (translated from French): In this thesis we study the influence of high dimension in testing and estimation problems. Our analysis concerns how the separation rate of a closeness test and the quadratic risk of multiple mean-vector estimation depend on the dimension. We complement existing results by studying these dependencies in the case of non-isotropic distributions. For such distributions, the role of the dimension is played by notions of effective dimension defined from the covariance of the distributions. This framework covers data of dimen…

4. Chen, Tian Qi. "Deep kernel mean embeddings for generative modeling and feedforward style transfer." Thesis, University of British Columbia, 2017. http://hdl.handle.net/2429/62668.

Abstract: The generation of data has traditionally been specified using hand-crafted algorithms. However, oftentimes the exact generative process is unknown while only a limited number of samples are observed. One such case is generating images that look visually similar to an exemplar image or as if coming from a distribution of images. We look into learning the generating process by constructing a similarity function that measures how close the generated image is to the target image. We discuss a framework in which the similarity function is specified by a pre-trained neural network without fi…

Books on "Kernel mean embedding"

1. Muandet, Krikamol, Kenji Fukumizu, Bharath Kumar Sriperumbudur VanGeepuram, and Bernhard Schölkopf. Kernel Mean Embedding of Distributions: A Review and Beyond. Now Publishers, 2017.
2. Sriperumbudur, Bharath K. Kernel Mean Embedding of Distributions: A Review and Beyond. 2017.

Book chapters on "Kernel mean embedding"

1. Fukumizu, Kenji. "Nonparametric Bayesian Inference with Kernel Mean Embedding." In Modern Methodology and Applications in Spatial-Temporal Modeling. Springer Japan, 2015. http://dx.doi.org/10.1007/978-4-431-55339-7_1.
2. Wickstrøm, Kristoffer, J. Emmanuel Johnson, Sigurd Løkse, et al. "The Kernelized Taylor Diagram." In Communications in Computer and Information Science. Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-17030-0_10.

Abstract: This paper presents the kernelized Taylor diagram, a graphical framework for visualizing similarities between data populations. The kernelized Taylor diagram builds on the widely used Taylor diagram, which is used to visualize similarities between populations. However, the Taylor diagram has several limitations such as not capturing non-linear relationships and sensitivity to outliers. To address such limitations, we propose the kernelized Taylor diagram. Our proposed kernelized Taylor diagram is capable of visualizing similarities between populations with minimal assumptions of the da…
3. Hsu, Kelvin, Richard Nock, and Fabio Ramos. "Hyperparameter Learning for Conditional Kernel Mean Embeddings with Rademacher Complexity Bounds." In Machine Learning and Knowledge Discovery in Databases. Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-10928-8_14.
4. Xie, Yi, Zhi-Hao Tan, Yuan Jiang, and Zhi-Hua Zhou. "Identifying Helpful Learnwares Without Examining the Whole Market." In Frontiers in Artificial Intelligence and Applications. IOS Press, 2023. http://dx.doi.org/10.3233/faia230585.

Abstract: The learnware paradigm aims to construct a market of numerous well-performing machine learning models, which enables users to leverage these models to accomplish specific tasks without having to build models from scratch. Each learnware in the market is a model associated with a specification, representing the model's utility and enabling it to be identified according to future users' requirements. In the learnware paradigm, due to the vast and ever-increasing number of models in the market, a significant challenge is to identify helpful learnwares efficiently for a specific user task without…

Conference abstracts on "Kernel mean embedding"

1. Luo, Mingjie, Jie Zhou, and Qingke Zou. "Multisensor Estimation Fusion Based on Kernel Mean Embedding." In 2024 27th International Conference on Information Fusion (FUSION). IEEE, 2024. http://dx.doi.org/10.23919/fusion59988.2024.10706487.
2. Xu, Deheng, Yun Li, Yun-Hao Yuan, Jipeng Qiang, and Yi Zhu. "Incomplete Multi-Kernel k-Means Clustering With Fractional-Order Embedding." In 2024 IEEE International Conference on Big Data (BigData). IEEE, 2024. https://doi.org/10.1109/bigdata62323.2024.10825789.
3. Guan, Zengda, and Juan Zhang. "Quantitative Associative Classification Based on Kernel Mean Embedding." In CSAI 2020: 2020 4th International Conference on Computer Science and Artificial Intelligence. ACM, 2020. http://dx.doi.org/10.1145/3445815.3445827.
4. Tang, Shuhao, Hao Tian, Xiaofeng Cao, and Wei Ye. "Deep Hierarchical Graph Alignment Kernels." In Thirty-Third International Joint Conference on Artificial Intelligence {IJCAI-24}. International Joint Conferences on Artificial Intelligence Organization, 2024. http://dx.doi.org/10.24963/ijcai.2024/549.

Abstract: Typical R-convolution graph kernels invoke the kernel functions that decompose graphs into non-isomorphic substructures and compare them. However, overlooking implicit similarities and topological position information between those substructures limits their performances. In this paper, we introduce Deep Hierarchical Graph Alignment Kernels (DHGAK) to resolve this problem. Specifically, the relational substructures are hierarchically aligned to cluster distributions in their deep embedding space. The substructures belonging to the same cluster are assigned the same feature map in the Reproduci…
5. Ding, Xiao, Bibo Cai, Ting Liu, and Qiankun Shi. "Domain Adaptation via Tree Kernel Based Maximum Mean Discrepancy for User Consumption Intention Identification." In Twenty-Seventh International Joint Conference on Artificial Intelligence {IJCAI-18}. International Joint Conferences on Artificial Intelligence Organization, 2018. http://dx.doi.org/10.24963/ijcai.2018/560.

Abstract: Identifying user consumption intention from social media is of great interest to downstream applications. Since such a task is domain-dependent, deep neural networks have been applied to learn transferable features for adapting models from a source domain to a target domain. A basic idea to solve this problem is reducing the distribution difference between the source domain and the target domain such that the transfer error can be bounded. However, the feature transferability drops dramatically in higher layers of deep neural networks with increasing domain discrepancy. Hence, previous work has…
6. Zhu, Jia-Jie, Wittawat Jitkrittum, Moritz Diehl, and Bernhard Schölkopf. "Worst-Case Risk Quantification under Distributional Ambiguity using Kernel Mean Embedding in Moment Problem." In 2020 59th IEEE Conference on Decision and Control (CDC). IEEE, 2020. http://dx.doi.org/10.1109/cdc42340.2020.9303938.
7. Romao, Licio, Ashish R. Hota, and Alessandro Abate. "Distributionally Robust Optimal and Safe Control of Stochastic Systems via Kernel Conditional Mean Embedding." In 2023 62nd IEEE Conference on Decision and Control (CDC). IEEE, 2023. http://dx.doi.org/10.1109/cdc49753.2023.10383997.
8. Liu, Qiao, and Hui Xue. "Adversarial Spectral Kernel Matching for Unsupervised Time Series Domain Adaptation." In Thirtieth International Joint Conference on Artificial Intelligence {IJCAI-21}. International Joint Conferences on Artificial Intelligence Organization, 2021. http://dx.doi.org/10.24963/ijcai.2021/378.

Abstract: Unsupervised domain adaptation (UDA) has received increasing attention since it does not require labels in the target domain. Most existing UDA methods learn domain-invariant features by minimizing discrepancy distance computed by a certain metric between domains. However, these discrepancy-based methods cannot be robustly applied to unsupervised time series domain adaptation (UTSDA). That is because discrepancy metrics in these methods contain only low-order and local statistics, which have limited expression for time series distributions and therefore result in failure of domain matching. A…
9. Tan, Peng, Zhi-Hao Tan, Yuan Jiang, and Zhi-Hua Zhou. "Handling Learnwares Developed from Heterogeneous Feature Spaces without Auxiliary Data." In Thirty-Second International Joint Conference on Artificial Intelligence {IJCAI-23}. International Joint Conferences on Artificial Intelligence Organization, 2023. http://dx.doi.org/10.24963/ijcai.2023/471.

Abstract: The learnware paradigm proposed by Zhou [2016] devotes to constructing a market of numerous well-performed models, enabling users to solve problems by reusing existing efforts rather than starting from scratch. A learnware comprises a trained model and the specification which enables the model to be adequately identified according to the user's requirement. Previous studies concentrated on the homogeneous case where models share the same feature space based on Reduced Kernel Mean Embedding (RKME) specification. However, in real-world scenarios, models are typically constructed from different f…
10. Shan, Siyuan, Vishal Athreya Baskaran, Haidong Yi, Jolene Ranek, Natalie Stanley, and Junier B. Oliva. "Transparent single-cell set classification with kernel mean embeddings." In BCB '22: 13th ACM International Conference on Bioinformatics, Computational Biology and Health Informatics. ACM, 2022. http://dx.doi.org/10.1145/3535508.3545538.