
Journal articles on the topic "Out-of-distribution generalization"



Consult the top 50 journal articles for your research on the topic "Out-of-distribution generalization".




1

Ye, Nanyang, Lin Zhu, Jia Wang, et al. "Certifiable Out-of-Distribution Generalization." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 9 (2023): 10927–35. http://dx.doi.org/10.1609/aaai.v37i9.26295.

Abstract
Machine learning methods suffer from test-time performance degeneration when faced with out-of-distribution (OoD) data whose distribution is not necessarily the same as training data distribution. Although a plethora of algorithms have been proposed to mitigate this issue, it has been demonstrated that achieving better performance than ERM simultaneously on different types of distributional shift datasets is challenging for existing approaches. Besides, it is unknown how and to what extent these methods work on any OoD datum without theoretical guarantees. In this paper, we propose a certifiable …
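Several abstracts in this list benchmark against empirical risk minimization (ERM), i.e., choosing parameters that minimize the average loss over the training sample. A minimal illustrative sketch on toy linear-regression data (generic ERM, not any cited paper's method; all variable names are my own):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: y = <x, w_true> + noise.
x = rng.normal(size=(200, 2))
w_true = np.array([1.0, -2.0])
y = x @ w_true + rng.normal(0, 0.1, 200)

# ERM: gradient descent on the empirical risk (mean squared error).
w = np.zeros(2)
for _ in range(500):
    grad = 2 * x.T @ (x @ w - y) / len(y)  # gradient of average loss
    w -= 0.1 * grad
```

ERM carries guarantees only when test data follow the training distribution, which is exactly the assumption the OOD-generalization papers above relax.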
2

Liu, Bowen, Haoyang Li, Shuning Wang, Shuo Nie, and Shanghang Zhang. "Subgraph Aggregation for Out-of-Distribution Generalization on Graphs." Proceedings of the AAAI Conference on Artificial Intelligence 39, no. 18 (2025): 18763–71. https://doi.org/10.1609/aaai.v39i18.34065.

Abstract
Out-of-distribution (OOD) generalization in Graph Neural Networks (GNNs) has gained significant attention due to its critical importance in graph-based predictions in real-world scenarios. Existing methods primarily focus on extracting a single causal subgraph from the input graph to achieve generalizable predictions. However, relying on a single subgraph can lead to susceptibility to spurious correlations and is insufficient for learning invariant patterns behind graph data. Moreover, in many real-world applications, such as molecular property prediction, multiple critical subgraphs may influence …
3

Yuan, Lingxiao, Harold S. Park, and Emma Lejeune. "Towards out of distribution generalization for problems in mechanics." Computer Methods in Applied Mechanics and Engineering 400 (October 2022): 115569. http://dx.doi.org/10.1016/j.cma.2022.115569.

4

Liu, Anji, Hongming Xu, Guy Van den Broeck, and Yitao Liang. "Out-of-Distribution Generalization by Neural-Symbolic Joint Training." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 10 (2023): 12252–59. http://dx.doi.org/10.1609/aaai.v37i10.26444.

Abstract
This paper develops a novel methodology to simultaneously learn a neural network and extract generalized logic rules. Different from prior neural-symbolic methods that require background knowledge and candidate logical rules to be provided, we aim to induce task semantics with minimal priors. This is achieved by a two-step learning framework that iterates between optimizing neural predictions of task labels and searching for a more accurate representation of the hidden task semantics. Notably, supervision works in both directions: (partially) induced task semantics guide the learning of the neural network …
5

Yu, Yemin, Luotian Yuan, Ying Wei, et al. "RetroOOD: Understanding Out-of-Distribution Generalization in Retrosynthesis Prediction." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 1 (2024): 374–82. http://dx.doi.org/10.1609/aaai.v38i1.27791.

Abstract
Machine learning-assisted retrosynthesis prediction models have been gaining widespread adoption, though their performances oftentimes degrade significantly when deployed in real-world applications embracing out-of-distribution (OOD) molecules or reactions. Despite steady progress on standard benchmarks, our understanding of existing retrosynthesis prediction models under the premise of distribution shifts remains stagnant. To this end, we first formally sort out two types of distribution shifts in retrosynthesis prediction and construct two groups of benchmark datasets. Next, through comprehensive …
6

Du, Hongyi, Xuewei Li, and Minglai Shao. "Graph out-of-distribution generalization through contrastive learning paradigm." Knowledge-Based Systems 315 (April 2025): 113316. https://doi.org/10.1016/j.knosys.2025.113316.

7

Xu, Yiming, Bin Shi, Zhen Peng, Huixiang Liu, Bo Dong, and Chen Chen. "Out-of-Distribution Generalization on Graphs via Progressive Inference." Proceedings of the AAAI Conference on Artificial Intelligence 39, no. 12 (2025): 12963–71. https://doi.org/10.1609/aaai.v39i12.33414.

Abstract
The development and evaluation of graph neural networks (GNNs) generally follow the independent and identically distributed (i.i.d.) assumption. Yet this assumption is often untenable in practice due to the uncontrollable data generation mechanism. In particular, when the data distribution shows a significant shift, most GNNs would fail to produce reliable predictions and may even make decisions randomly. One of the most promising solutions to improve the model generalization is to pick out causal invariant parts in the input graph. Nonetheless, we observe a significant distribution gap between …
8

Zhu, Lin, Xinbing Wang, Chenghu Zhou, and Nanyang Ye. "Bayesian Cross-Modal Alignment Learning for Few-Shot Out-of-Distribution Generalization." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 9 (2023): 11461–69. http://dx.doi.org/10.1609/aaai.v37i9.26355.

Abstract
Recent advances in large pre-trained models showed promising results in few-shot learning. However, their generalization ability on two-dimensional Out-of-Distribution (OoD) data, i.e., correlation shift and diversity shift, has not been thoroughly investigated. Researches have shown that even with a significant amount of training data, few methods can achieve better performance than the standard empirical risk minimization method (ERM) in OoD generalization. This few-shot OoD generalization dilemma emerges as a challenging direction in deep neural network generalization research, where the performance …
9

Lavda, Frantzeska, and Alexandros Kalousis. "Semi-Supervised Variational Autoencoders for Out-of-Distribution Generation." Entropy 25, no. 12 (2023): 1659. http://dx.doi.org/10.3390/e25121659.

Abstract
Humans are able to quickly adapt to new situations, learn effectively with limited data, and create unique combinations of basic concepts. In contrast, generalizing out-of-distribution (OOD) data and achieving combinatorial generalizations are fundamental challenges for machine learning models. Moreover, obtaining high-quality labeled examples can be very time-consuming and expensive, particularly when specialized skills are required for labeling. To address these issues, we propose BtVAE, a method that utilizes conditional VAE models to achieve combinatorial generalization in certain scenarios …
10

Zhang, Xiao, Sunhao Dai, Jun Xu, Yong Liu, and Zhenhua Dong. "AdaO2B: Adaptive Online to Batch Conversion for Out-of-Distribution Generalization." Proceedings of the AAAI Conference on Artificial Intelligence 39, no. 21 (2025): 22596–604. https://doi.org/10.1609/aaai.v39i21.34418.

Abstract
Online to batch conversion involves constructing a new batch learner by utilizing a series of models generated by an existing online learning algorithm, for achieving generalization guarantees under i.i.d assumption. However, when applied to real-world streaming applications such as streaming recommender systems, the data stream may be sampled from time-varying distributions instead of persistently being i.i.d. This poses a challenge in terms of out-of-distribution (OOD) generalization. Existing approaches employ fixed conversion mechanisms that are unable to adapt to novel testing distributions …
11

Su, Hang, and Wei Wang. "An Out-of-Distribution Generalization Framework Based on Variational Backdoor Adjustment." Mathematics 12, no. 1 (2023): 85. http://dx.doi.org/10.3390/math12010085.

Abstract
In practical applications, learning models that can perform well even when the data distribution is different from the training set are essential and meaningful. Such problems are often referred to as out-of-distribution (OOD) generalization problems. In this paper, we propose a method for OOD generalization based on causal inference. Unlike the prevalent OOD generalization methods, our approach does not require the environment labels associated with the data in the training set. We analyze the causes of distributional shifts in data from a causal modeling perspective and then propose a backdoor …
12

Cao, Linfeng, Aofan Jiang, Wei Li, Huaying Wu, and Nanyang Ye. "OoDHDR-Codec: Out-of-Distribution Generalization for HDR Image Compression." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 1 (2022): 158–66. http://dx.doi.org/10.1609/aaai.v36i1.19890.

Abstract
Recently, deep learning has been proven to be a promising approach in standard dynamic range (SDR) image compression. However, due to the wide luminance distribution of high dynamic range (HDR) images and the lack of large standard datasets, developing a deep model for HDR image compression is much more challenging. To tackle this issue, we view HDR data as distributional shifts of SDR data and the HDR image compression can be modeled as an out-of-distribution generalization (OoD) problem. Herein, we propose a novel out-of-distribution (OoD) HDR image compression framework (OoDHDR-codec). It l…
13

Li, Jiacheng, and Min Yang. "Dual-branch neural operator for enhanced out-of-distribution generalization." Engineering Analysis with Boundary Elements 171 (February 2025): 106082. https://doi.org/10.1016/j.enganabound.2024.106082.

14

Deng, Bin, and Kui Jia. "Counterfactual Supervision-Based Information Bottleneck for Out-of-Distribution Generalization." Entropy 25, no. 2 (2023): 193. http://dx.doi.org/10.3390/e25020193.

Abstract
Learning invariant (causal) features for out-of-distribution (OOD) generalization have attracted extensive attention recently, and among the proposals, invariant risk minimization (IRM) is a notable solution. In spite of its theoretical promise for linear regression, the challenges of using IRM in linear classification problems remain. By introducing the information bottleneck (IB) principle into the learning of IRM, the IB-IRM approach has demonstrated its power to solve these challenges. In this paper, we further improve IB-IRM from two aspects. First, we show that the key assumption of support …
15

Ashok, Arjun, Chaitanya Devaguptapu, and Vineeth N. Balasubramanian. "Learning Modular Structures That Generalize Out-of-Distribution (Student Abstract)." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 11 (2022): 12905–6. http://dx.doi.org/10.1609/aaai.v36i11.21589.

Abstract
Out-of-distribution (O.O.D.) generalization remains to be a key challenge for real-world machine learning systems. We describe a method for O.O.D. generalization that, through training, encourages models to only preserve features in the network that are well reused across multiple training domains. Our method combines two complementary neuron-level regularizers with a probabilistic differentiable binary mask over the network, to extract a modular sub-network that achieves better O.O.D. performance than the original network. Preliminary evaluation on two benchmark datasets corroborates the promise …
16

Zou, Xin, and Weiwei Liu. "Coverage-Guaranteed Prediction Sets for Out-of-Distribution Data." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 15 (2024): 17263–70. http://dx.doi.org/10.1609/aaai.v38i15.29673.

Abstract
Out-of-distribution (OOD) generalization has attracted increasing research attention in recent years, due to its promising experimental results in real-world applications. In this paper, we study the confidence set prediction problem in the OOD generalization setting. Split conformal prediction (SCP) is an efficient framework for handling the confidence set prediction problem. However, the validity of SCP requires the examples to be exchangeable, which is violated in the OOD setting. Empirically, we show that trivially applying SCP results in a failure to maintain the marginal coverage when the …
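The entry above builds on split conformal prediction (SCP), a standard procedure: fit a model on one data split, then use absolute residuals on a held-out calibration split to size a prediction interval with finite-sample marginal coverage under exchangeability. A toy sketch of vanilla SCP for 1-D regression (illustrative only, not the paper's method; the model and names are my own):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D regression data: y = 2x + noise.
x = rng.uniform(0, 1, 1000)
y = 2 * x + rng.normal(0, 0.1, 1000)

# Split: fit on one half, calibrate on the other.
x_fit, y_fit = x[:500], y[:500]
x_cal, y_cal = x[500:], y[500:]
slope = np.sum(x_fit * y_fit) / np.sum(x_fit * x_fit)  # least squares through origin

# Conformity scores: absolute residuals on the calibration split.
scores = np.abs(y_cal - slope * x_cal)
alpha = 0.1
n = len(scores)
# Finite-sample-corrected quantile gives >= 1 - alpha marginal coverage
# for exchangeable data.
q = np.quantile(scores, np.ceil((n + 1) * (1 - alpha)) / n)

# Prediction set for a new point: [f(x) - q, f(x) + q].
x_new = 0.5
interval = (slope * x_new - q, slope * x_new + q)
```

The coverage guarantee rests on exchangeability of calibration and test points, which is exactly what breaks under the OOD shifts the paper studies.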
17

Bai, Haoyue, Rui Sun, Lanqing Hong, et al. "DecAug: Out-of-Distribution Generalization via Decomposed Feature Representation and Semantic Augmentation." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 8 (2021): 6705–13. http://dx.doi.org/10.1609/aaai.v35i8.16829.

Abstract
While deep learning demonstrates its strong ability to handle independent and identically distributed (IID) data, it often suffers from out-of-distribution (OoD) generalization, where the test data come from another distribution (w.r.t. the training one). Designing a general OoD generalization framework for a wide range of applications is challenging, mainly due to different kinds of distribution shifts in the real world, such as the shift across domains or the extrapolation of correlation. Most of the previous approaches can only solve one specific distribution shift, leading to unsatisfactory …
18

Ren, Yifei, and Pouya Bashivan. "How well do models of visual cortex generalize to out of distribution samples?" PLOS Computational Biology 20, no. 5 (2024): e1011145. http://dx.doi.org/10.1371/journal.pcbi.1011145.

Abstract
Unit activity in particular deep neural networks (DNNs) is remarkably similar to the neuronal population responses to static images along the primate ventral visual cortex. Linear combinations of DNN unit activities are widely used to build predictive models of neuronal activity in the visual cortex. Nevertheless, prediction performance in these models is often investigated on stimulus sets consisting of everyday objects under naturalistic settings. Recent work has revealed a generalization gap in predicting neuronal responses to synthetically generated out-of-distribution (OOD) stimuli.
19

Fan, Caoyun, Wenqing Chen, Jidong Tian, Yitian Li, Hao He, and Yaohui Jin. "Unlock the Potential of Counterfactually-Augmented Data in Out-Of-Distribution Generalization." Expert Systems with Applications 238 (March 2024): 122066. http://dx.doi.org/10.1016/j.eswa.2023.122066.

20

Ramachandran, Sai Niranjan, Rudrabha Mukhopadhyay, Madhav Agarwal, C. V. Jawahar, and Vinay Namboodiri. "Understanding the Generalization of Pretrained Diffusion Models on Out-of-Distribution Data." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 13 (2024): 14767–75. http://dx.doi.org/10.1609/aaai.v38i13.29395.

Abstract
This work tackles the important task of understanding out-of-distribution behavior in two prominent types of generative models, i.e., GANs and Diffusion models. Understanding this behavior is crucial in understanding their broader utility and risks as these systems are increasingly deployed in our daily lives. Our first contribution is demonstrating that diffusion spaces outperform GANs' latent spaces in inverting high-quality OOD images. We also provide a theoretical analysis attributing this to the lack of prior holes in diffusion spaces. Our second significant contribution is to provide a t…
21

Jia, Tianrui, Haoyang Li, Cheng Yang, Tao Tao, and Chuan Shi. "Graph Invariant Learning with Subgraph Co-mixup for Out-of-Distribution Generalization." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 8 (2024): 8562–70. http://dx.doi.org/10.1609/aaai.v38i8.28700.

Abstract
Graph neural networks (GNNs) have been demonstrated to perform well in graph representation learning, but always lacking in generalization capability when tackling out-of-distribution (OOD) data. Graph invariant learning methods, backed by the invariance principle among defined multiple environments, have shown effectiveness in dealing with this issue. However, existing methods heavily rely on well-predefined or accurately generated environment partitions, which are hard to be obtained in practice, leading to sub-optimal OOD generalization performances. In this paper, we propose a novel graph …
22

Zhang, Lily H., and Rajesh Ranganath. "Robustness to Spurious Correlations Improves Semantic Out-of-Distribution Detection." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 12 (2023): 15305–12. http://dx.doi.org/10.1609/aaai.v37i12.26785.

Abstract
Methods which utilize the outputs or feature representations of predictive models have emerged as promising approaches for out-of-distribution (OOD) detection of image inputs. However, as demonstrated in previous work, these methods struggle to detect OOD inputs that share nuisance values (e.g. background) with in-distribution inputs. The detection of shared-nuisance OOD (SN-OOD) inputs is particularly relevant in real-world applications, as anomalies and in-distribution inputs tend to be captured in the same settings during deployment. In this work, we provide a possible explanation for these …
23

Zhang, Jiaqiang, and Songcan Chen. "Expand Horizon: Graph Out-of-Distribution Generalization via Multi-Level Environment Inference." Proceedings of the AAAI Conference on Artificial Intelligence 39, no. 12 (2025): 13233–41. https://doi.org/10.1609/aaai.v39i12.33444.

Abstract
Graph neural networks (GNNs) are widely used for node classification tasks, but when encountering distribution shifts due to environmental change in real-world scenarios, they tend to learn unstable correlations between features and labels. To overcome this dilemma, a powerful class of approaches views the environment as the root cause of those unstable correlations, thereby their key focus is to infer the environment involved, enabling the model to avoid capturing environment-sensitive correlations. However, their inferences rely solely on the single-level information from one low-hop ego-graph …
24

Huang, Kunze, Luyao Tang, Yuxuan Yuan, et al. "Open world out-of-distribution generalization via dream open and sustain close." Knowledge-Based Systems 327 (October 2025): 114128. https://doi.org/10.1016/j.knosys.2025.114128.

25

Gwon, Kyungpil, and Joonhyuk Yoo. "Out-of-Distribution (OOD) Detection and Generalization Improved by Augmenting Adversarial Mixup Samples." Electronics 12, no. 6 (2023): 1421. http://dx.doi.org/10.3390/electronics12061421.

Abstract
Deep neural network (DNN) models are usually built based on the i.i.d. (independent and identically distributed), also known as in-distribution (ID), assumption on the training samples and test data. However, when models are deployed in a real-world scenario with some distributional shifts, test data can be out-of-distribution (OOD) and both OOD detection and OOD generalization should be simultaneously addressed to ensure the reliability and safety of applied AI systems. Most existing OOD detectors pursue these two goals separately, and therefore, are sensitive to covariate shift rather than semantic shift …
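The entry above augments training with adversarial mixup samples. Standard mixup, the widely used base technique, forms convex combinations of pairs of inputs and their labels; a minimal generic sketch (not the paper's adversarial variant; names are my own):

```python
import numpy as np

rng = np.random.default_rng(0)

def mixup(x1, y1, x2, y2, alpha=0.4):
    """Standard mixup: convex combination of two samples and their labels."""
    lam = rng.beta(alpha, alpha)  # mixing coefficient in (0, 1)
    return lam * x1 + (1 - lam) * x2, lam * y1 + (1 - lam) * y2

# Two toy "images" with one-hot labels.
xa, ya = np.ones((4, 4)), np.array([1.0, 0.0])
xb, yb = np.zeros((4, 4)), np.array([0.0, 1.0])
x_mix, y_mix = mixup(xa, ya, xb, yb)
```

Because the mixed label is a convex combination of one-hot vectors, it remains a valid probability distribution, which is what lets mixup samples be used directly with cross-entropy training.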
26

Qian, Quan, Xuezhong Chen, Jingdong Chen, and Yi Qin. "Common distribution discrepancy knowledge distilling: A new out-of-distribution generalization framework for machinery RUL prediction." Mechanical Systems and Signal Processing 237 (August 2025): 113079. https://doi.org/10.1016/j.ymssp.2025.113079.

27

Boccato, Tommaso, Alberto Testolin, and Marco Zorzi. "Learning Numerosity Representations with Transformers: Number Generation Tasks and Out-of-Distribution Generalization." Entropy 23, no. 7 (2021): 857. http://dx.doi.org/10.3390/e23070857.

Abstract
One of the most rapidly advancing areas of deep learning research aims at creating models that learn to disentangle the latent factors of variation from a data distribution. However, modeling joint probability mass functions is usually prohibitive, which motivates the use of conditional models assuming that some information is given as input. In the domain of numerical cognition, deep learning architectures have successfully demonstrated that approximate numerosity representations can emerge in multi-layer networks that build latent representations of a set of images with a varying number of items …
28

Chen, Minghui, Cheng Wen, Feng Zheng, Fengxiang He, and Ling Shao. "VITA: A Multi-Source Vicinal Transfer Augmentation Method for Out-of-Distribution Generalization." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 1 (2022): 321–29. http://dx.doi.org/10.1609/aaai.v36i1.19908.

Abstract
Invariance to diverse types of image corruption, such as noise, blurring, or colour shifts, is essential to establish robust models in computer vision. Data augmentation has been the major approach in improving the robustness against common corruptions. However, the samples produced by popular augmentation strategies deviate significantly from the underlying data manifold. As a result, performance is skewed toward certain types of corruption. To address this issue, we propose a multi-source vicinal transfer augmentation (VITA) method for generating diverse on-manifold samples. The proposed VITA …
29

Maier, Anatol, and Christian Riess. "Reliable Out-of-Distribution Recognition of Synthetic Images." Journal of Imaging 10, no. 5 (2024): 110. http://dx.doi.org/10.3390/jimaging10050110.

Abstract
Generative adversarial networks (GANs) and diffusion models (DMs) have revolutionized the creation of synthetically generated but realistic-looking images. Distinguishing such generated images from real camera captures is one of the key tasks in current multimedia forensics research. One particular challenge is the generalization to unseen generators or post-processing. This can be viewed as an issue of handling out-of-distribution inputs. Forensic detectors can be hardened by the extensive augmentation of the training data or specifically tailored networks. Nevertheless, such precautions only …
30

Xin, Shiji, Yifei Wang, Jingtong Su, and Yisen Wang. "On the Connection between Invariant Learning and Adversarial Training for Out-of-Distribution Generalization." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 9 (2023): 10519–27. http://dx.doi.org/10.1609/aaai.v37i9.26250.

Abstract
Despite impressive success in many tasks, deep learning models are shown to rely on spurious features, which will catastrophically fail when generalized to out-of-distribution (OOD) data. Invariant Risk Minimization (IRM) is proposed to alleviate this issue by extracting domain-invariant features for OOD generalization. Nevertheless, recent work shows that IRM is only effective for a certain type of distribution shift (e.g., correlation shift) while it fails for other cases (e.g., diversity shift). Meanwhile, another thread of method, Adversarial Training (AT), has shown better domain transfer …
31

Madan, Spandan, Mingran Cao, Will Xiao, Hanspeter Pfister, and Gabriel Kreiman. "Out-of-Distribution generalization behavior of DNN-based encoding models for the visual cortex." Journal of Vision 24, no. 10 (2024): 1148. http://dx.doi.org/10.1167/jov.24.10.1148.

32

Hassan, A., S. A. Dar, P. B. Ahmad, and B. A. Para. "A new generalization of Aradhana distribution: Properties and applications." Journal of Applied Mathematics, Statistics and Informatics 16, no. 2 (2020): 51–66. http://dx.doi.org/10.2478/jamsi-2020-0009.

Abstract
In this paper, we introduce a new generalization of the Aradhana distribution, called the Weighted Aradhana Distribution (WID). The statistical properties of this distribution are derived, and the model parameters are estimated by maximum likelihood estimation. A simulation study of the ML estimates of the parameters is carried out in R. Finally, an application to a real data set is presented to examine the significance of the newly introduced model.
33

Chen, Zhe, Zhiquan Ding, Xiaoling Zhang, Xin Zhang, and Tianqi Qin. "Improving Out-of-Distribution Generalization in SAR Image Scene Classification with Limited Training Samples." Remote Sensing 15, no. 24 (2023): 5761. http://dx.doi.org/10.3390/rs15245761.

Abstract
For practical maritime SAR image classification tasks with special imaging platforms, scenes to be classified are often different from those in the training sets. The quantity and diversity of the available training data can also be extremely limited. This problem of out-of-distribution (OOD) generalization with limited training samples leads to a sharp drop in the performance of conventional deep learning algorithms. In this paper, a knowledge-guided neural network (KGNN) model is proposed to overcome these challenges. By analyzing the saliency features of various maritime SAR scenes, universal …
34

Sha, Naijun. "A New Inference Approach for Type-II Generalized Birnbaum-Saunders Distribution." Stats 2, no. 1 (2019): 148–63. http://dx.doi.org/10.3390/stats2010011.

Abstract
The Birnbaum-Saunders (BS) distribution, with its generalizations, has been successfully applied in a wide variety of fields. One generalization, type-II generalized BS (denoted as GBS-II), has been developed and attracted considerable attention in recent years. In this article, we propose a new simple and convenient procedure of inference approach for GBS-II distribution. An extensive simulation study is carried out to assess performance of the methods under various settings of parameter values with different sample sizes. Real data are analyzed for illustrative purposes to display the efficiency …
35

Zhou, Pengyang, Chaochao Chen, Weiming Liu, et al. "FedGOG: Federated Graph Out-of-Distribution Generalization with Diffusion Data Exploration and Latent Embedding Decorrelation." Proceedings of the AAAI Conference on Artificial Intelligence 39, no. 21 (2025): 22965–73. https://doi.org/10.1609/aaai.v39i21.34459.

Abstract
Federated graph learning (FGL) has emerged as a promising approach to enable collaborative training of graph models while preserving data privacy. However, current FGL methods overlook the out-of-distribution (OOD) shifts that occur in real-world scenarios. The distribution shifts between training and testing datasets in each client impact the FGL performance. To address this issue, we propose federated graph OOD generalization framework FedGOG, which includes two modules, i.e., diffusion data exploration (DDE) and latent embedding decorrelation (LED). In DDE, all clients jointly train score models …
36

Das, Siddhant, and Markus Nöth. "Times of arrival and gauge invariance." Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences 477, no. 2250 (2021): 20210101. http://dx.doi.org/10.1098/rspa.2021.0101.

Abstract
We revisit the arguments underlying two well-known arrival-time distributions in quantum mechanics, viz., the Aharonov–Bohm–Kijowski (ABK) distribution, applicable for freely moving particles, and the quantum flux (QF) distribution. An inconsistency in the original axiomatic derivation of Kijowski's result is pointed out, along with an inescapable consequence of the 'negative arrival times' inherent to this proposal (and generalizations thereof). The ABK free-particle restriction is lifted in a discussion of an explicit arrival-time set-up featuring a charged particle moving in a constant magnetic …
37

Sharifi-Noghabi, Hossein, Parsa Alamzadeh Harjandi, Olga Zolotareva, Colin C. Collins, and Martin Ester. "Out-of-distribution generalization from labelled and unlabelled gene expression data for drug response prediction." Nature Machine Intelligence 3, no. 11 (2021): 962–72. http://dx.doi.org/10.1038/s42256-021-00408-w.

38

Bogin, Ben, Sanjay Subramanian, Matt Gardner, and Jonathan Berant. "Latent Compositional Representations Improve Systematic Generalization in Grounded Question Answering." Transactions of the Association for Computational Linguistics 9 (2021): 195–210. http://dx.doi.org/10.1162/tacl_a_00361.

Abstract
Answering questions that involve multi-step reasoning requires decomposing them and using the answers of intermediate steps to reach the final answer. However, state-of-the-art models in grounded question answering often do not explicitly perform decomposition, leading to difficulties in generalization to out-of-distribution examples. In this work, we propose a model that computes a representation and denotation for all question spans in a bottom-up, compositional manner using a CKY-style parser. Our model induces latent trees, driven by end-to-end (the answer) supervision only. We show …
39

Tan, Zhi, and Zhao-Fei Teng. "Image Domain Generalization Method based on Solving Domain Discrepancy Phenomenon." 電腦學刊 33, no. 3 (2022): 171–85. http://dx.doi.org/10.53106/199115992022063303014.

Abstract
<p>In order to solve the problem that the recognition performance is obviously degraded when the model trained by known data distribution transfer to unknown data distribution, domain generalization method based on attention mechanism and adversarial training is proposed. Firstly, a multi-level attention mechanism module is designed to capture the underlying abstract information features of the image; Secondly, increases the loss limit of the generative adversarial network,the virtual enhanced domain which can simulate the target domain of unknown data distribution is generated by advers
40

Vasiliuk, Anton, Daria Frolova, Mikhail Belyaev, and Boris Shirokikh. "Limitations of Out-of-Distribution Detection in 3D Medical Image Segmentation." Journal of Imaging 9, no. 9 (2023): 191. http://dx.doi.org/10.3390/jimaging9090191.

Full text
Abstract
Deep learning models perform unreliably when the data come from a distribution different from the training one. In critical applications such as medical imaging, out-of-distribution (OOD) detection methods help to identify such data samples, preventing erroneous predictions. In this paper, we further investigate OOD detection effectiveness when applied to 3D medical image segmentation. We designed several OOD challenges representing clinically occurring cases and found that none of the methods achieved acceptable performance. Methods not dedicated to segmentation severely failed to perform in
41

Yu, Bowen, Yuhong Liu, Xin Wu, Jing Ren, and Zhibin Zhao. "Trustworthy diagnosis of Electrocardiography signals based on out-of-distribution detection." PLOS ONE 20, no. 2 (2025): e0317900. https://doi.org/10.1371/journal.pone.0317900.

Full text
Abstract
Cardiovascular disease is one of the most dangerous conditions, posing a significant threat to daily health. Electrocardiography (ECG) is crucial for heart health monitoring. It plays a pivotal role in early heart disease detection, heart function assessment, and guiding treatments. Thus, refining ECG diagnostic methods is vital for timely and accurate heart disease diagnosis. Recently, deep learning has significantly advanced in ECG signal classification and recognition. However, these methods struggle with new or Out-of-Distribution (OOD) heart diseases. The deep learning model performs well
42

Nguyen, Hai Van, Jau-Uei Chen, and Tan Bui-Thanh. "A model-constrained discontinuous Galerkin Network (DGNet) for compressible Euler equations with out-of-distribution generalization." Computer Methods in Applied Mechanics and Engineering 440 (May 2025): 117912. https://doi.org/10.1016/j.cma.2025.117912.

Full text
43

Lee, Ingyun, Wooju Lee, and Hyun Myung. "Domain Generalization with Vital Phase Augmentation." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 4 (2024): 2892–900. http://dx.doi.org/10.1609/aaai.v38i4.28070.

Full text
Abstract
Deep neural networks have shown remarkable performance in image classification. However, their performance significantly deteriorates with corrupted input data. Domain generalization methods have been proposed to train robust models against out-of-distribution data. Data augmentation in the frequency domain is one such approach, enabling a model to learn phase features and establish domain-invariant representations. This approach changes the amplitudes of the input data while preserving the phases. However, using fixed phases leads to susceptibility to phase fluctuations because amplitud
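The frequency-domain augmentation described in this abstract (amplitudes changed, phases preserved) can be sketched in a few lines of NumPy. This is a generic illustration, not the paper's exact procedure; the multiplicative jitter and the `strength` parameter are assumptions made for the example.

```python
import numpy as np

def amplitude_jitter(img, strength=0.5, rng=None):
    """Perturb an image's Fourier amplitude while keeping its phase.

    Sketch of phase-preserving frequency-domain augmentation; the
    jitter form and the `strength` parameter are illustrative
    assumptions, not the method proposed in the paper.
    """
    rng = np.random.default_rng() if rng is None else rng
    spectrum = np.fft.fft2(img)
    amplitude, phase = np.abs(spectrum), np.angle(spectrum)
    # Multiplicative jitter on the amplitude only; the phase is left
    # untouched, so phase-encoded structure survives the augmentation.
    jitter = 1.0 + strength * (rng.random(amplitude.shape) - 0.5)
    return np.real(np.fft.ifft2(amplitude * jitter * np.exp(1j * phase)))
```

With `strength=0` the input image is recovered unchanged (up to floating-point error), which makes the phase-preservation invariant easy to check.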
44

He, Rundong, Yue Yuan, Zhongyi Han, et al. "Exploring Channel-Aware Typical Features for Out-of-Distribution Detection." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 11 (2024): 12402–10. http://dx.doi.org/10.1609/aaai.v38i11.29132.

Full text
Abstract
Detecting out-of-distribution (OOD) data is essential to ensure the reliability of machine learning models when deployed in real-world scenarios. Different from most previous test-time OOD detection methods that focus on designing OOD scores, we delve into the challenges in OOD detection from the perspective of typicality and regard the feature’s high-probability region as the feature’s typical set. However, the existing typical-feature-based OOD detection method implies an assumption: the proportion of typical feature sets for each channel is fixed. According to our experimental analysis, eac
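The idea of restricting scoring to a feature's "typical set" can be illustrated with a ReAct-style rectification before an energy score. The global quantile threshold below is a deliberate simplification (the very assumption this paper argues against, since it fixes one proportion for all channels); it is a sketch, not the paper's channel-aware method.

```python
import numpy as np

def rectified_energy_score(features, weights, bias, clip_q=0.9):
    """Energy-based OOD score on rectified penultimate features.

    Sketch only: activations above a single global quantile are clipped
    (a ReAct-style simplification; the paper instead lets the typical-set
    proportion vary per channel). Higher score => more in-distribution.
    """
    threshold = np.quantile(features, clip_q)
    clipped = np.minimum(features, threshold)  # keep only "typical" magnitudes
    logits = clipped @ weights + bias
    # Negative free energy via a numerically stable logsumexp over classes.
    m = logits.max(axis=1, keepdims=True)
    return (m + np.log(np.exp(logits - m).sum(axis=1, keepdims=True))).ravel()
```

With non-negative weights, clipping can only lower the logits, so the rectified score is bounded above by the unclipped one.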
45

Ding, Kun, Haojian Zhang, Qiang Yu, Ying Wang, Shiming Xiang, and Chunhong Pan. "Weak Distribution Detectors Lead to Stronger Generalizability of Vision-Language Prompt Tuning." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 2 (2024): 1528–36. http://dx.doi.org/10.1609/aaai.v38i2.27918.

Full text
Abstract
We propose a generalized method for boosting the generalization ability of pre-trained vision-language models (VLMs) while fine-tuning on downstream few-shot tasks. The idea is realized by exploiting out-of-distribution (OOD) detection to predict whether a sample belongs to a base distribution or a novel distribution and then using the score generated by a dedicated competition based scoring function to fuse the zero-shot and few-shot classifier. The fused classifier is dynamic, which will bias towards the zero-shot classifier if a sample is more likely from the distribution pre-trained on, le
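The dynamic fusion this abstract describes can be sketched as a score-gated mixture of the two classifiers' logits. The sigmoid gate and the `temperature` parameter are illustrative assumptions; the paper's competition-based scoring function is not reproduced here.

```python
import numpy as np

def fuse_logits(zero_shot_logits, few_shot_logits, ood_score, temperature=1.0):
    """Blend zero-shot and few-shot logits with an OOD-derived weight.

    Sketch: a higher ood_score (the sample looks novel relative to the
    few-shot base distribution) pushes the mixture toward the zero-shot
    classifier. The sigmoid gate is an illustrative choice, not the
    paper's scoring function.
    """
    w = 1.0 / (1.0 + np.exp(-ood_score / temperature))  # gate in (0, 1)
    return w * np.asarray(zero_shot_logits) + (1.0 - w) * np.asarray(few_shot_logits)
```

A score of zero yields an even blend, while a strongly positive score hands the decision to the zero-shot classifier.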
46

Simmachan, Teerawat, and Wikanda Phaphan. "Generalization of Two-Sided Length Biased Inverse Gaussian Distributions and Applications." Symmetry 14, no. 10 (2022): 1965. http://dx.doi.org/10.3390/sym14101965.

Full text
Abstract
The notion of a length-biased distribution can be used to develop adequate models. The length-biased distribution is known as a special case of the weighted distribution. In this work, a new class of length-biased distribution, namely the two-sided length-biased inverse Gaussian distribution (TS-LBIG), was introduced. The physical phenomenon of this scenario was described by the case of cracks developing from two sides. Since the probability density function of the original TS-LBIG distribution cannot be written in a closed-form expression, its generalized form was further introduced. Important proper
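For reference, the length-biasing operation this abstract builds on can be written down directly. The inverse Gaussian density below is the standard IG(μ, λ) parameterization; the two-sided construction itself is the paper's contribution and is not reproduced here.

```latex
% Length-biased (size-biased of order 1) version of a density f with mean \mu:
f_L(x) = \frac{x\, f(x)}{\mu}, \qquad \mu = \mathbb{E}[X] < \infty .

% Base inverse Gaussian density, IG(\mu, \lambda):
f(x;\mu,\lambda) = \sqrt{\frac{\lambda}{2\pi x^{3}}}
  \exp\!\left( -\frac{\lambda (x-\mu)^{2}}{2\mu^{2} x} \right), \qquad x > 0 .
```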
47

Nain, Philippe. "On a generalization of the preemptive resume priority." Advances in Applied Probability 18, no. 1 (1986): 255–73. http://dx.doi.org/10.2307/1427245.

Full text
Abstract
This paper considers a queueing system with two classes of customers and a single server, where the service policy is of threshold type. As soon as the amount of work required by the class 1 customers is greater than a fixed threshold, the class 1 customers get the server's attention; otherwise the class 2 customers have the priority. Service interruptions can occur for both classes of customers on the basis of the above description of the service mechanism, and in this case the service interruption discipline is preemptive resume priority (PRP). This model, which turns out to be a generalizat
48

Nain, Philippe. "On a generalization of the preemptive resume priority." Advances in Applied Probability 18, no. 01 (1986): 255–73. http://dx.doi.org/10.1017/s0001867800015652.

Full text
Abstract
This paper considers a queueing system with two classes of customers and a single server, where the service policy is of threshold type. As soon as the amount of work required by the class 1 customers is greater than a fixed threshold, the class 1 customers get the server's attention; otherwise the class 2 customers have the priority. Service interruptions can occur for both classes of customers on the basis of the above description of the service mechanism, and in this case the service interruption discipline is preemptive resume priority (PRP). This model, which turns out to be a generalizat
49

Zhang, Weifeng, Zhiyuan Wang, Kunpeng Zhang, Ting Zhong, and Fan Zhou. "DyCVAE: Learning Dynamic Causal Factors for Non-stationary Series Domain Generalization (Student Abstract)." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 13 (2023): 16382–83. http://dx.doi.org/10.1609/aaai.v37i13.27051.

Full text
Abstract
Learning domain-invariant representations is a major task of out-of-distribution generalization. To address this issue, recent efforts have taken causality into account, aiming to learn the causal factors relevant to the task. However, extending existing generalization methods to adapt non-stationary time series may be ineffective, because they fail to model the underlying causal factors under temporal-domain shifts in addition to source-domain shifts, as pointed out by recent studies. To this end, we propose a novel model DyCVAE to learn dynamic causal factors. The results on synthetic
50

Chen, Zhengyu, Teng Xiao, Kun Kuang, et al. "Learning to Reweight for Generalizable Graph Neural Network." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 8 (2024): 8320–28. http://dx.doi.org/10.1609/aaai.v38i8.28673.

Full text
Abstract
Graph Neural Networks (GNNs) show promising results for graph tasks. However, existing GNNs' generalization ability degrades when there are distribution shifts between testing and training graph data. The fundamental reason for the severe degeneration is that most GNNs are designed based on the I.I.D. hypothesis. In such a setting, GNNs tend to exploit subtle statistical correlations in the training set for prediction, even when they are spurious. In this paper, we study the problem of the generalization ability of GNNs in Out-Of-Distribution (OOD) settings. To solv