A selection of scientific literature on the topic "Weakly-supervised semantic segmentation"

Format your source in APA, MLA, Chicago, Harvard, and other citation styles


Consult the lists of relevant articles, books, dissertations, conference papers, and other scholarly sources on the topic "Weakly-supervised semantic segmentation".

Next to each work in the list of references you will find an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, and so on.

You can also download the full text of the scholarly publication in .pdf format and read its abstract online, provided the corresponding data are available in the metadata.

Journal articles on the topic "Weakly-supervised semantic segmentation":

1. Zhang, Yachao, Zonghao Li, Yuan Xie, Yanyun Qu, Cuihua Li, and Tao Mei. "Weakly Supervised Semantic Segmentation for Large-Scale Point Cloud." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 4 (May 18, 2021): 3421–29. http://dx.doi.org/10.1609/aaai.v35i4.16455.

Abstract:
Existing methods for large-scale point cloud semantic segmentation require expensive, tedious and error-prone manual point-wise annotation. Intuitively, weakly supervised training is a direct way to reduce labeling costs. However, for weakly supervised large-scale point cloud semantic segmentation, too few annotations inevitably lead to ineffective learning of the network. We propose an effective weakly supervised method containing two components to solve this problem. First, we construct a pretext task, i.e., point cloud colorization, trained in a self-supervised manner to transfer prior knowledge learned from a large amount of unlabeled point clouds to the weakly supervised network. In this way, the representation capability of the weakly supervised network is improved by knowledge from a heterogeneous task. In addition, to generate pseudo labels for unlabeled data, a sparse label propagation mechanism is proposed with the help of generated class prototypes, which are used to measure the classification confidence of unlabeled points. Our method is evaluated on large-scale point cloud datasets covering different scenarios, both indoor and outdoor. The experimental results show a large gain over existing weakly supervised methods and results comparable to fully supervised methods.
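
The sparse label propagation described in this abstract can be illustrated with a short sketch. This is a minimal illustration under assumed details (feature shape, cosine similarity, a softmax temperature, and a confidence threshold), not the authors' implementation; it only shows how class prototypes built from the few labeled points can assign pseudo labels to the unlabeled ones.

```python
import torch
import torch.nn.functional as F

def propagate_sparse_labels(features, labels, num_classes, conf_thresh=0.9):
    """Prototype-based pseudo labeling for sparsely labeled point clouds (sketch).

    features: (N, D) per-point embeddings from the segmentation network.
    labels:   (N,) long tensor; -1 marks unlabeled points.
    Assumes every class has at least one labeled point.
    """
    labeled = labels >= 0
    # Class prototypes: mean embedding of the labeled points of each class.
    prototypes = torch.stack([
        features[labeled & (labels == c)].mean(dim=0) for c in range(num_classes)
    ])                                                                     # (C, D)
    # Cosine similarity between every point and every class prototype.
    sim = F.normalize(features, dim=1) @ F.normalize(prototypes, dim=1).T  # (N, C)
    conf, pseudo = F.softmax(sim / 0.1, dim=1).max(dim=1)
    pseudo[conf < conf_thresh] = -1      # low-confidence points stay unlabeled
    pseudo[labeled] = labels[labeled]    # the original sparse labels always win
    return pseudo
```
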
2. Chen, Jie, Fen He, Yi Zhang, Geng Sun, and Min Deng. "SPMF-Net: Weakly Supervised Building Segmentation by Combining Superpixel Pooling and Multi-Scale Feature Fusion." Remote Sensing 12, no. 6 (March 24, 2020): 1049. http://dx.doi.org/10.3390/rs12061049.

Abstract:
The lack of pixel-level labeling limits the practicality of deep learning-based building semantic segmentation. Weakly supervised semantic segmentation based on image-level labeling results in incomplete object regions and missing boundary information. This paper proposes a weakly supervised semantic segmentation method for building detection. The proposed method takes the image-level label as supervision information in a classification network that combines superpixel pooling and multi-scale feature fusion structures. The main advantage of the proposed strategy is its ability to improve the intactness and boundary accuracy of a detected building. Our method achieves impressive results on two 2D semantic labeling datasets, which outperform some competing weakly supervised methods and are close to the result of the fully supervised method.
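
The superpixel pooling structure mentioned above can be sketched in a few lines. The layout below (a pre-computed SLIC-style superpixel map and mean pooling of CNN features per superpixel) is an assumption chosen for illustration, not the SPMF-Net code.

```python
import torch

def superpixel_pool(feature_map, superpixels):
    """Average CNN features inside each superpixel (illustrative sketch).

    feature_map: (C, H, W) float features.
    superpixels: (H, W) integer map with values in [0, S), e.g. from SLIC.
    Returns an (S, C) descriptor, one row per superpixel.
    """
    C, H, W = feature_map.shape
    flat_feat = feature_map.reshape(C, H * W)          # (C, HW)
    flat_sp = superpixels.reshape(H * W).long()        # (HW,)
    S = int(flat_sp.max()) + 1
    pooled = torch.zeros(S, C, dtype=feature_map.dtype)
    pooled.index_add_(0, flat_sp, flat_feat.T)         # sum features per superpixel
    counts = torch.bincount(flat_sp, minlength=S).clamp(min=1).unsqueeze(1)
    return pooled / counts                             # mean per superpixel
```
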
3. Li, Xueyi, Tianfei Zhou, Jianwu Li, Yi Zhou, and Zhaoxiang Zhang. "Group-Wise Semantic Mining for Weakly Supervised Semantic Segmentation." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 3 (May 18, 2021): 1984–92. http://dx.doi.org/10.1609/aaai.v35i3.16294.

Abstract:
Acquiring sufficient ground-truth supervision to train deep visual models has been a bottleneck over the years due to the data-hungry nature of deep learning. This is exacerbated in some structured prediction tasks, such as semantic segmentation, which requires pixel-level annotations. This work addresses weakly supervised semantic segmentation (WSSS), with the goal of bridging the gap between image-level annotations and pixel-level segmentation. We formulate WSSS as a novel group-wise learning task that explicitly models semantic dependencies in a group of images to estimate more reliable pseudo ground-truths, which can be used for training more accurate segmentation models. In particular, we devise a graph neural network (GNN) for group-wise semantic mining, wherein input images are represented as graph nodes, and the underlying relations between a pair of images are characterized by an efficient co-attention mechanism. Moreover, in order to prevent the model from paying excessive attention to common semantics only, we further propose a graph dropout layer, encouraging the model to learn more accurate and complete object responses. The whole network is end-to-end trainable by iterative message passing, which propagates interaction cues over the images to progressively improve the performance. We conduct experiments on the popular PASCAL VOC 2012 and COCO benchmarks, and our model yields state-of-the-art performance. Our code is available at: https://github.com/Lixy1997/Group-WSSS.
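
The pairwise co-attention that characterizes relations between two images can be illustrated as below. This is a generic, simplified sketch assuming flattened (C, H, W) feature maps and a plain softmax affinity; the paper's actual GNN message passing and graph dropout are not reproduced here.

```python
import torch
import torch.nn.functional as F

def co_attention(feat_a, feat_b):
    """Aggregate features of image B into image A via co-attention (sketch).

    feat_a, feat_b: (C, H, W) feature maps of two images sharing an image-level label.
    """
    C, H, W = feat_a.shape
    a = feat_a.reshape(C, H * W)                  # (C, HW)
    b = feat_b.reshape(C, H * W)                  # (C, HW)
    affinity = a.T @ b                            # (HW_a, HW_b) location-wise similarity
    attn = F.softmax(affinity, dim=1)             # each location of A attends over B
    messages = (attn @ b.T).T.reshape(C, H, W)    # B's features aggregated for A
    return feat_a + messages                      # residual fusion of the message
```
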
4. Cheng, Hao, Chaochen Gu, and Kaijie Wu. "Weakly-Supervised Semantic Segmentation via Self-training." Journal of Physics: Conference Series 1487 (March 2020): 012001. http://dx.doi.org/10.1088/1742-6596/1487/1/012001.

5. Ouassit, Youssef, Soufiane Ardchir, Mohammed Yassine El Ghoumari, and Mohamed Azouazi. "A Brief Survey on Weakly Supervised Semantic Segmentation." International Journal of Online and Biomedical Engineering (iJOE) 18, no. 10 (July 26, 2022): 83–113. http://dx.doi.org/10.3991/ijoe.v18i10.31531.

Abstract:
Semantic segmentation is the process of assigning a label to every pixel in an image that shares the same semantic properties, and it remains a challenging task in computer vision. In recent years, owing to the wide availability of training data, the performance of semantic segmentation has been greatly improved by deep learning techniques, and a large number of novel methods have been proposed. However, in some crucial fields we cannot guarantee sufficient data to train a deep model and achieve high accuracy. This paper provides a brief survey of research efforts on deep-learning-based semantic segmentation with limited labeled data, focusing on weakly supervised methods. The survey is intended to familiarize readers with the progress and challenges of weakly supervised semantic segmentation research in the deep learning era and to highlight several valuable emerging research directions in this field.

6. Kim, Beomyoung, Sangeun Han, and Junmo Kim. "Discriminative Region Suppression for Weakly-Supervised Semantic Segmentation." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 2 (May 18, 2021): 1754–61. http://dx.doi.org/10.1609/aaai.v35i2.16269.

Abstract:
Weakly-supervised semantic segmentation (WSSS) using image-level labels has recently attracted much attention for reducing annotation costs. Existing WSSS methods utilize localization maps from the classification network to generate pseudo segmentation labels. However, since localization maps obtained from the classifier focus only on sparse discriminative object regions, it is difficult to generate high-quality segmentation labels. To address this issue, we introduce a discriminative region suppression (DRS) module, a simple yet effective method for expanding object activation regions. DRS suppresses the attention on discriminative regions and spreads it to adjacent non-discriminative regions, generating dense localization maps. DRS requires few or no additional parameters and can be plugged into any network. Furthermore, we introduce an additional learning strategy, named localization map refinement learning, that enhances the localization maps themselves. Benefiting from this refinement learning, localization maps are refined by recovering missing parts and removing noise. Thanks to its simplicity and effectiveness, our approach achieves 71.4% mIoU on the PASCAL VOC 2012 segmentation benchmark using only image-level labels. Extensive experiments demonstrate the effectiveness of our approach.
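
The suppression step can be made concrete with a small module. The sketch below is one plausible, parameter-free reading of the abstract: per-channel activation peaks are clipped to a fraction of their maximum so that attention spreads to neighboring regions. The value of delta is an assumption, not the paper's reported setting.

```python
import torch
import torch.nn as nn

class DRS(nn.Module):
    """Discriminative Region Suppression, non-learnable variant (illustrative sketch)."""

    def __init__(self, delta=0.55):
        super().__init__()
        self.delta = delta        # fraction of the per-channel maximum kept as ceiling

    def forward(self, x):         # x: (B, C, H, W) intermediate activations
        b, c, h, w = x.shape
        x = torch.relu(x)
        channel_max = x.view(b, c, -1).max(dim=2)[0].view(b, c, 1, 1)
        ceiling = self.delta * channel_max
        return torch.minimum(x, ceiling)   # clip the most discriminative peaks
```
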
7. Zhou, Tianfei, Liulei Li, Xueyi Li, Chun-Mei Feng, Jianwu Li, and Ling Shao. "Group-Wise Learning for Weakly Supervised Semantic Segmentation." IEEE Transactions on Image Processing 31 (2022): 799–811. http://dx.doi.org/10.1109/tip.2021.3132834.

8. Li, Yi, Yanqing Guo, Yueying Kao, and Ran He. "Image Piece Learning for Weakly Supervised Semantic Segmentation." IEEE Transactions on Systems, Man, and Cybernetics: Systems 47, no. 4 (April 2017): 648–59. http://dx.doi.org/10.1109/tsmc.2016.2623683.

9. Xiong, Changzhen, and Hui Zhi. "Multi-model Integrated Weakly Supervised Semantic Segmentation Method." Journal of Computer-Aided Design & Computer Graphics 31, no. 5 (2019): 800. http://dx.doi.org/10.3724/sp.j.1089.2019.17379.

10. Wang, Shuo, and Yizhou Wang. "Weakly Supervised Semantic Segmentation with a Multiscale Model." IEEE Signal Processing Letters 22, no. 3 (March 2015): 308–12. http://dx.doi.org/10.1109/lsp.2014.2358562.

Dissertations on the topic "Weakly-supervised semantic segmentation":

1. Sawatzky, Johann [Verfasser]. "Weakly and Semi Supervised Semantic Segmentation of RGB Images / Johann Sawatzky." Bonn : Universitäts- und Landesbibliothek Bonn, 2021. http://d-nb.info/1227990367/34.

2. Götz, Michael [Verfasser], and R. [Akademischer Betreuer] Dillmann. "Variability-Aware and Weakly Supervised Learning for Semantic Tissue Segmentation / Michael Götz ; Betreuer: R. Dillmann." Karlsruhe : KIT-Bibliothek, 2017. http://d-nb.info/1137265000/34.

3. Giraldo, Zuluaga Jhony Heriberto. "Graph-based Algorithms in Computer Vision, Machine Learning, and Signal Processing." Electronic Thesis or Diss., La Rochelle, 2022. http://www.theses.fr/2022LAROS037.

Abstract:
Graph representation learning and its applications have gained significant attention in recent years. Notably, Graph Neural Networks (GNNs) and Graph Signal Processing (GSP) have been extensively studied. GNNs extend the concepts of convolutional neural networks to non-Euclidean data modeled as graphs. Similarly, GSP extends the concepts of classical digital signal processing to signals supported on graphs. GNNs and GSP have numerous applications such as semi-supervised learning, point cloud semantic segmentation, prediction of individual relations in social networks, modeling proteins for drug discovery, image, and video processing. In this thesis, we propose novel approaches in video and image processing, GNNs, and recovery of time-varying graph signals. Our main motivation is to use the geometrical information that we can capture from the data to avoid data hungry methods, i.e., learning with minimal supervision. All our contributions rely heavily on the developments of GSP and spectral graph theory. In particular, the sampling and reconstruction theory of graph signals play a central role in this thesis. The main contributions of this thesis are summarized as follows: 1) we propose new algorithms for moving object segmentation using concepts of GSP and GNNs, 2) we propose a new algorithm for weakly-supervised semantic segmentation using hypergraph neural networks, 3) we propose and analyze GNNs using concepts from GSP and spectral graph theory, and 4) we introduce a novel algorithm based on the extension of a Sobolev smoothness function for the reconstruction of time-varying graph signals from discrete samples
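
The last contribution, reconstructing time-varying graph signals with a Sobolev smoothness term, can be summarized by an optimization problem of roughly the following form; the exact operators and weighting are assumptions inferred from the abstract rather than a verbatim statement of the thesis:

```latex
\min_{\mathbf{X}}\;
\tfrac{1}{2}\,\bigl\| \mathbf{J}\circ\mathbf{X}-\mathbf{Y} \bigr\|_F^2
\;+\;
\tfrac{\nu}{2}\,
\operatorname{tr}\!\bigl( (\mathbf{X}\mathbf{D}_h)^{\top}(\mathbf{L}+\epsilon\mathbf{I})^{\beta}\,\mathbf{X}\mathbf{D}_h \bigr)
```

Here X stacks the graph signal at each time step, J is the sampling mask, Y holds the observed samples, D_h is a temporal difference operator, L is the graph Laplacian, and (L + εI)^β is the Sobolev smoothing term.
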
4. Shen, Tong. "Context Learning and Weakly Supervised Learning for Semantic Segmentation." Thesis, 2018. http://hdl.handle.net/2440/120354.

Abstract:
This thesis focuses on one of the fundamental problems in computer vision, semantic segmentation, whose task is to predict a semantic label for each pixel of an image. Although semantic segmentation models have improved greatly thanks to the representative power of deep learning techniques, there are still open questions that need to be discussed. In this thesis, we discuss two problems regarding semantic segmentation: scene consistency and weakly supervised segmentation.

In the first part of the thesis, we discuss the issue of scene consistency in semantic segmentation. This issue comes from the fact that trained models sometimes produce noisy and implausible predictions that are not semantically consistent with the scene or context. By explicitly considering scene consistency both locally and globally, we can narrow down the possible categories for each pixel and generate the desired prediction more easily. In the thesis, we address this issue by introducing a dense multi-label module. In general, multi-label classification refers to the task of assigning multiple labels to a given image. We extend the idea to different levels of the image and assign multiple labels to different regions of the image. Dense multi-label acts as a constraint to encourage scene consistency locally and globally.

For dense prediction problems such as semantic segmentation, training a model requires densely annotated data as ground truth, which involves a great amount of human annotation effort and is very time-consuming. It is therefore worth investigating semi- or weakly supervised methods that require much less supervision. In particular, weakly supervised segmentation refers to training the model using only image-level labels, while semi-supervised segmentation refers to training with partially annotated data or a small portion of fully annotated data. In the thesis, two weakly supervised methods are proposed in which only image-level labels are required. The two methods share similar motivations. First, since pixel-level masks are missing in this setting, both methods are designed to estimate the missing ground truth and use it as pseudo ground truth for training. Second, both use data retrieved from the internet as auxiliary data, because web data are cheap to obtain and exist in large amounts.

Although there are similarities between the two methods, they are designed from different perspectives. The motivation for the first method is that, given a group of images crawled from the internet that belong to the same semantic category, co-segmentation is a good choice for extracting their masks, which gives us almost free pixel-wise training samples. Those internet images, along with the extracted masks, are used to train a mask generator that helps us estimate the pseudo ground truth for the training images. The second method is designed as a bi-directional framework between the target domain and the web domain. The term "bi-directional" refers to the concept that knowledge learnt from the target domain can be transferred to the web domain and knowledge encoded in the web domain can be transferred back to the target domain. This kind of interaction between the two domains is the key to boosting the performance of webly supervised segmentation.
Thesis (Ph.D.) -- University of Adelaide, School of Computer Science, 2018
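
The dense multi-label constraint described in the first part of the thesis can be sketched as a region-level multi-label loss. The grid partition, the max pooling, and the binary cross-entropy below are assumptions chosen to illustrate the idea of constraining which classes may appear in each region; they are not the thesis implementation.

```python
import torch
import torch.nn.functional as F

def dense_multilabel_loss(logits, region_labels, grid=4):
    """Region-level multi-label constraint on dense predictions (illustrative sketch).

    logits:        (B, C, H, W) per-pixel class scores from the segmentation head.
    region_labels: (B, C, grid, grid) binary labels saying which classes occur in
                   each cell of a grid x grid partition of the image.
    """
    # A cell predicts a class as present if any pixel inside it scores highly.
    region_logits = F.adaptive_max_pool2d(logits, output_size=grid)   # (B, C, grid, grid)
    return F.binary_cross_entropy_with_logits(region_logits, region_labels.float())
```
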
5. Ke, Zi-Yi, and 柯子逸. "Generating Self-Guided Dense Annotations for Weakly Supervised Semantic Segmentation." Thesis, 2018. http://ndltd.ncl.edu.tw/handle/x3w74r.

Abstract:
Master's thesis, National Tsing Hua University, Institute of Information Systems and Applications, academic year 106 (Republic of China calendar).
Learning semantic segmentation models under image-level supervision is far more challenging than under fully supervised setting. Without knowing the exact pixel-label correspondence, most weakly-supervised methods rely on external models to infer pseudo pixel-level labels for training semantic segmentation models. In this thesis, we aim to develop a single neural network without resorting to any external models. We propose a novel self-guided strategy to fully utilize features learned across multiple levels to progressively generate the dense pseudo labels. First, we use high-level features as class-specific localization maps to roughly locate the classes. Next, we propose an affinity-guided method to encourage each localization map to be consistent with their intermediate level features. Third, we adopt the training image itself as guidance and propose a self-guided refinement to further transfer the image's inherent structure into the maps. Finally, we derive pseudo pixel-level labels from these localization maps and use the pseudo labels as ground truth to train the semantic segmentation model. Our proposed self-guided strategy is a unified framework, which is built on a single network and alternatively updates the feature representation and refines localization maps during the training procedure. Experimental results on PASCAL VOC 2012 segmentation benchmark demonstrate that our method outperforms other weakly-supervised methods under the same setting.
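
The first step of the self-guided strategy, turning class-specific localization maps into dense pseudo labels, can be sketched as follows. The normalization, the foreground threshold, and the background convention are assumptions for illustration; the affinity-guided and self-guided refinement stages of the thesis are not reproduced.

```python
import torch
import torch.nn.functional as F

def cams_to_pseudo_labels(cams, image_labels, fg_thresh=0.3):
    """Derive pseudo pixel labels from class localization maps (illustrative sketch).

    cams:         (C, h, w) class localization maps from the classifier branch.
    image_labels: (C,) binary vector of image-level labels.
    Returns an (h, w) map with 0 for background and c + 1 for foreground class c.
    """
    cams = F.relu(cams) * image_labels.view(-1, 1, 1)           # drop absent classes
    cams = cams / cams.amax(dim=(1, 2), keepdim=True).clamp(min=1e-5)
    score, cls = cams.max(dim=0)                                # best present class per pixel
    pseudo = cls + 1                                            # shift so 0 means background
    pseudo[score < fg_thresh] = 0                               # weak responses -> background
    return pseudo
```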

Book chapters on the topic "Weakly-supervised semantic segmentation":

1. Sun, Weixuan, Jing Zhang, and Nick Barnes. "3D Guided Weakly Supervised Semantic Segmentation." In Computer Vision – ACCV 2020, 585–602. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-69525-5_35.

2. Chen, Liyi, Weiwei Wu, Chenchen Fu, Xiao Han, and Yuntao Zhang. "Weakly Supervised Semantic Segmentation with Boundary Exploration." In Computer Vision – ECCV 2020, 347–62. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-58574-7_21.

3. Tokmakov, Pavel, Karteek Alahari, and Cordelia Schmid. "Weakly-Supervised Semantic Segmentation Using Motion Cues." In Computer Vision – ECCV 2016, 388–404. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-46493-0_24.

4. Sun, Guoying, Meng Yang, and Wenfeng Luo. "Adversarial Decoupling for Weakly Supervised Semantic Segmentation." In Pattern Recognition and Computer Vision, 188–200. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-88013-2_16.

5. Sun, Guolei, Wenguan Wang, Jifeng Dai, and Luc Van Gool. "Mining Cross-Image Semantics for Weakly Supervised Semantic Segmentation." In Computer Vision – ECCV 2020, 347–65. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-58536-5_21.

6. Liang, Binxiu, Yan Liu, Linxi He, and Jiangyun Li. "Weakly Supervised Semantic Segmentation Based on Deep Learning." In Proceedings of the 11th International Conference on Modelling, Identification and Control (ICMIC2019), 455–64. Singapore: Springer Singapore, 2019. http://dx.doi.org/10.1007/978-981-15-0474-7_43.

7. Fan, Junsong, Zhaoxiang Zhang, and Tieniu Tan. "Employing Multi-estimations for Weakly-Supervised Semantic Segmentation." In Computer Vision – ECCV 2020, 332–48. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-58520-4_20.

8. Tan, Li, WenFeng Luo, and Meng Yang. "Weakly-Supervised Semantic Segmentation with Mean Teacher Learning." In Intelligence Science and Big Data Engineering. Visual Data Engineering, 324–35. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-36189-1_27.

9. Aslan, Sinem, and Marcello Pelillo. "Weakly Supervised Semantic Segmentation Using Constrained Dominant Sets." In Lecture Notes in Computer Science, 425–36. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-30645-8_39.

10. Sang, Yu, Shi Li, and Yanfei Peng. "Multi-view Robustness-Enhanced Weakly Supervised Semantic Segmentation." In Intelligent Computing Theories and Application, 180–94. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-13870-6_15.

Conference papers on the topic "Weakly-supervised semantic segmentation":

1. Shen, Tong, Guosheng Lin, Lingqiao Liu, Chunhua Shen, and Ian Reid. "Weakly Supervised Semantic Segmentation Based on Co-segmentation." In British Machine Vision Conference 2017. British Machine Vision Association, 2017. http://dx.doi.org/10.5244/c.31.17.

2. Ye, Chaojie, Min Jiang, and Zhiming Luo. "Smoke Segmentation based on Weakly Supervised Semantic Segmentation." In 2022 12th International Conference on Information Technology in Medicine and Education (ITME). IEEE, 2022. http://dx.doi.org/10.1109/itme56794.2022.00082.

3. Zhang, Wei, Sheng Zeng, Dequan Wang, and Xiangyang Xue. "Weakly supervised semantic segmentation for social images." In 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2015. http://dx.doi.org/10.1109/cvpr.2015.7298888.

4. Dong, Jiahua, Yang Cong, Gan Sun, and Dongdong Hou. "Semantic-Transferable Weakly-Supervised Endoscopic Lesions Segmentation." In 2019 IEEE/CVF International Conference on Computer Vision (ICCV). IEEE, 2019. http://dx.doi.org/10.1109/iccv.2019.01081.

5. Lu, Zheng, Dali Chen, and Dingyu Xue. "Survey of weakly supervised semantic segmentation methods." In 2018 Chinese Control And Decision Conference (CCDC). IEEE, 2018. http://dx.doi.org/10.1109/ccdc.2018.8407307.

6. Feng, Yanqing, and Lunwen Wang. "A Weakly-Supervised Approach for Semantic Segmentation." In 2019 IEEE 3rd Information Technology, Networking, Electronic and Automation Control Conference (ITNEC). IEEE, 2019. http://dx.doi.org/10.1109/itnec.2019.8729018.

7. Nivaggioli, Adrien, and Hicham Randrianarivo. "Weakly Supervised Semantic Segmentation of Satellite Images." In 2019 Joint Urban Remote Sensing Event (JURSE). IEEE, 2019. http://dx.doi.org/10.1109/jurse.2019.8809060.

8. Xing, Frank Z., Erik Cambria, Win-Bin Huang, and Yang Xu. "Weakly supervised semantic segmentation with superpixel embedding." In 2016 IEEE International Conference on Image Processing (ICIP). IEEE, 2016. http://dx.doi.org/10.1109/icip.2016.7532562.

9. Zhang, Fei, Chaochen Gu, Chenyue Zhang, and Yuchao Dai. "Complementary Patch for Weakly Supervised Semantic Segmentation." In 2021 IEEE/CVF International Conference on Computer Vision (ICCV). IEEE, 2021. http://dx.doi.org/10.1109/iccv48922.2021.00715.

10. Zhu, Kaiyin, Neal N. Xiong, and Mingming Lu. "A Survey of Weakly-supervised Semantic Segmentation." In 2023 IEEE 9th Intl Conference on Big Data Security on Cloud (BigDataSecurity), IEEE Intl Conference on High Performance and Smart Computing, (HPSC) and IEEE Intl Conference on Intelligent Data and Security (IDS). IEEE, 2023. http://dx.doi.org/10.1109/bigdatasecurity-hpsc-ids58521.2023.00013.
