Academic literature on the topic 'Unsupervised self-training'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Unsupervised self-training.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Unsupervised self-training"

1

Orjuela-Cañón, Álvaro David, and Hugo Fernando Posada-Quintero. "Acoustic lung signals analysis based on Mel frequency cepstral coefficients and self-organizing maps." Revista Facultad de Ingeniería 25, no. 43 (2016): 73–82. http://dx.doi.org/10.19053/01211129.v25.n43.2016.5300.

Abstract:
This study analyzes acoustic lung signals with different abnormalities, using Mel Frequency Cepstral Coefficients (MFCC), Self-Organizing Maps (SOM), and the K-means clustering algorithm. SOM models are artificial neural networks that can be trained in an unsupervised or supervised manner. Both approaches were used in this work to compare the utility of this tool in lung signal studies. Results showed that with supervised training, the classification reached rates of 85% in accuracy. Unsupervised training was used for clustering tasks, and three clusters were the most adequate number for both supervised and unsupervised training. In general, SOM models can be used on lung signals as a strategy for diagnosis systems, for finding the number of clusters in data, and for making classifications in computer-aided decision-making systems.
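For orientation, a minimal sketch of the MFCC-plus-SOM pipeline this abstract describes, assuming the librosa and minisom packages; the file name, map size, and cluster count are illustrative placeholders, not the authors' settings.

```python
# Hedged sketch: MFCC features -> unsupervised SOM -> K-means over the SOM codebook.
import librosa
import numpy as np
from minisom import MiniSom
from sklearn.cluster import KMeans

signal, sr = librosa.load("lung_sound.wav", sr=None)          # placeholder recording
mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=13).T     # frames x 13 coefficients

som = MiniSom(8, 8, mfcc.shape[1], sigma=1.0, learning_rate=0.5, random_seed=0)
som.train_random(mfcc, 1000)                                  # unsupervised SOM training

codebook = som.get_weights().reshape(-1, mfcc.shape[1])       # 64 prototype vectors
clusters = KMeans(n_clusters=3, n_init=10).fit_predict(codebook)  # 3 clusters, as in the study
print(clusters.reshape(8, 8))                                 # cluster label per map node
```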
2

Wang, Dong, and Yang Liu. "A cross-corpus study of subjectivity identification using unsupervised learning." Natural Language Engineering 18, no. 3 (2011): 375–97. http://dx.doi.org/10.1017/s1351324911000234.

Abstract:
In this study, we investigate using unsupervised generative learning methods for subjectivity detection across different domains. We create an initial training set using simple lexicon information and then evaluate two iterative learning methods with a base naive Bayes classifier to learn from unannotated data. The first method is self-training, which adds instances with high confidence into the training set in each iteration. The second is a calibrated EM (expectation-maximization) method in which we calibrate the posterior probabilities from EM such that the class distribution is similar to that in the real data. We evaluate both approaches on three different domains: movie data, news resources, and meeting dialogues, and we find that in some cases the unsupervised learning methods can achieve performance close to the fully supervised setup. We perform a thorough analysis to examine factors such as the self-labeling accuracy of the initial training set in unsupervised learning, the accuracy of the added examples in self-training, and the size of the initial training set in the different methods. Our experiments and analysis show inherent differences across domains and the impacting factors that explain the model behaviors.
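A minimal sketch of the generic self-training loop described here, assuming dense numpy bag-of-words feature matrices and a scikit-learn naive Bayes base classifier; the confidence threshold and number of rounds are illustrative.

```python
# Hedged sketch of self-training: retrain a naive Bayes classifier each round on
# its own high-confidence predictions over unannotated data.
import numpy as np
from sklearn.naive_bayes import MultinomialNB

def self_train(X_seed, y_seed, X_unlabeled, threshold=0.9, rounds=5):
    X_train, y_train, pool = X_seed, y_seed, X_unlabeled
    clf = MultinomialNB().fit(X_train, y_train)
    for _ in range(rounds):
        if len(pool) == 0:
            break
        proba = clf.predict_proba(pool)
        confident = proba.max(axis=1) >= threshold              # high-confidence instances
        if not confident.any():
            break
        X_train = np.vstack([X_train, pool[confident]])
        y_train = np.concatenate(
            [y_train, clf.classes_[proba[confident].argmax(axis=1)]])
        pool = pool[~confident]                                 # shrink the unlabeled pool
        clf = MultinomialNB().fit(X_train, y_train)             # retrain on the enlarged set
    return clf
```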
3

Lee, Hye-Woo, Noo-ri Kim, and Jee-Hyong Lee. "Deep Neural Network Self-training Based on Unsupervised Learning and Dropout." International Journal of Fuzzy Logic and Intelligent Systems 17, no. 1 (2017): 1–9. http://dx.doi.org/10.5391/ijfis.2017.17.1.1.

4

Cao, Yu, Meng Fang, Baosheng Yu, and Joey Tianyi Zhou. "Unsupervised Domain Adaptation on Reading Comprehension." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 05 (2020): 7480–87. http://dx.doi.org/10.1609/aaai.v34i05.6245.

Abstract:
Reading comprehension (RC) has been studied on a variety of datasets with the boosted performance brought by deep neural networks. However, the generalization capability of these models across different domains remains unclear. To alleviate the problem, we investigate unsupervised domain adaptation on RC, wherein a model is trained on the labeled source domain and applied to the target domain with only unlabeled samples. We first show that even with the powerful BERT contextual representation, a model cannot generalize well from one domain to another. To solve this, we provide a novel conditional adversarial self-training method (CASe). Specifically, our approach leverages a BERT model fine-tuned on the source dataset along with confidence filtering to generate reliable pseudo-labeled samples in the target domain for self-training. On the other hand, it further reduces domain distribution discrepancy through conditional adversarial learning across domains. Extensive experiments show our approach achieves comparable performance to supervised models on multiple large-scale benchmark datasets.
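A hedged sketch of the confidence-filtered pseudo-labeling step mentioned in the abstract (the conditional adversarial component is omitted); the model interface, threshold, and batching are illustrative assumptions, not the authors' released code.

```python
# Hypothetical sketch: keep only target-domain predictions the source-trained
# model is confident about, then reuse them as pseudo-labels for self-training.
import torch

@torch.no_grad()
def pseudo_label(source_model, target_batches, threshold=0.8):
    kept = []
    for inputs in target_batches:                 # unlabeled target-domain tensors (assumed)
        logits = source_model(inputs)             # assumed to return class logits
        probs = torch.softmax(logits, dim=-1)
        conf, labels = probs.max(dim=-1)
        mask = conf >= threshold                  # confidence filtering
        kept.append((inputs[mask], labels[mask]))
    return kept                                   # pseudo-labeled data for fine-tuning
```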
5

Zhou, Meng, Zechen Li, and Pengtao Xie. "Self-supervised Regularization for Text Classification." Transactions of the Association for Computational Linguistics 9 (2021): 641–56. http://dx.doi.org/10.1162/tacl_a_00389.

Abstract:
Text classification is a widely studied problem and has broad applications. In many real-world problems, the number of texts for training classification models is limited, which renders these models prone to overfitting. To address this problem, we propose SSL-Reg, a data-dependent regularization approach based on self-supervised learning (SSL). SSL (Devlin et al., 2019a) is an unsupervised learning approach that defines auxiliary tasks on input data without using any human-provided labels and learns data representations by solving these auxiliary tasks. In SSL-Reg, a supervised classification task and an unsupervised SSL task are performed simultaneously. The SSL task is unsupervised and is defined purely on input texts without using any human-provided labels. Training a model using an SSL task can prevent the model from being overfitted to a limited number of class labels in the classification task. Experiments on 17 text classification datasets demonstrate the effectiveness of our proposed method. Code is available at https://github.com/UCSD-AI4H/SSReg.
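The joint objective boils down to a weighted sum of the supervised classification loss and the unsupervised auxiliary loss; a hedged PyTorch-style sketch, where the lambda weight and the two loss terms are illustrative placeholders rather than the paper's exact configuration.

```python
# Sketch of an SSL-regularized objective: classification loss plus a weighted
# self-supervised loss (e.g., masked-token prediction) on the same texts.
import torch
import torch.nn.functional as F

def ssl_reg_loss(cls_logits, labels, ssl_logits, ssl_targets, lam=0.1):
    cls_loss = F.cross_entropy(cls_logits, labels)        # supervised classification term
    ssl_loss = F.cross_entropy(ssl_logits, ssl_targets)   # unsupervised auxiliary term
    return cls_loss + lam * ssl_loss                      # lam balances the two tasks
```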
6

Huang, Jiabo, Qi Dong, Shaogang Gong, and Xiatian Zhu. "Unsupervised Deep Learning via Affinity Diffusion." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 07 (2020): 11029–36. http://dx.doi.org/10.1609/aaai.v34i07.6757.

Abstract:
Convolutional neural networks (CNNs) have achieved unprecedented success in a variety of computer vision tasks. However, they usually rely on supervised model learning with the need for massive labelled training data, dramatically limiting their usability and deployability in real-world scenarios without any labelling budget. In this work, we introduce a general-purpose unsupervised deep learning approach to deriving discriminative feature representations. It is based on self-discovering semantically consistent groups of unlabelled training samples with the same class concepts through a progressive affinity diffusion process. Extensive experiments on object image classification and clustering show the performance superiority of the proposed method over the state-of-the-art unsupervised learning models using six common image recognition benchmarks including MNIST, SVHN, STL10, CIFAR10, CIFAR100 and ImageNet.
7

Weinlichová, Jana, and Jiří Fejfar. "Usage of self-organizing neural networks in evaluation of consumer behaviour." Acta Universitatis Agriculturae et Silviculturae Mendelianae Brunensis 58, no. 6 (2010): 625–32. http://dx.doi.org/10.11118/actaun201058060625.

Abstract:
This article deals with the evaluation of consumer data by artificial intelligence methods. The methodological part describes learning algorithms for Kohonen maps based on the principles of supervised, unsupervised, and semi-supervised learning. The principles of supervised and unsupervised learning are compared, and the constraints of both point to an advantage of semi-supervised learning. Three algorithms for semi-supervised learning are described: label propagation, self-training, and co-training. The use of co-training in Kohonen map learning in particular seems to be a promising direction for further research. For the concrete application of a Kohonen neural network to consumer expenses, the unsupervised learning method, self-organization, has been chosen, so the features of the data are evaluated by the clustering method known as Kohonen maps. The input data represent the consumer expenses of households in European Union countries and are characterized by a 12-dimensional vector according to commodity classification. The data are evaluated over several years, so we can see their distribution, similarity or dissimilarity, and also their evolution. In the article we discuss other uses of this method for this type of data and also compare our results with those reached by hierarchical cluster analysis.
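As a point of reference for the comparison with hierarchical cluster analysis mentioned at the end of the abstract, a brief SciPy sketch on synthetic 12-dimensional expenditure vectors; the data, number of countries, and cluster count are made up for illustration.

```python
# Illustrative baseline: Ward hierarchical clustering of 12-dimensional
# household-expenditure vectors, the method the SOM results are compared against.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
expenses = rng.random((27, 12))                  # 27 countries x 12 commodity shares (synthetic)
Z = linkage(expenses, method="ward")             # build the dendrogram
labels = fcluster(Z, t=4, criterion="maxclust")  # cut it into 4 clusters
print(labels)                                    # cluster membership per country
```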
8

Keung, Phillip, Julian Salazar, Yichao Lu, and Noah A. Smith. "Unsupervised Bitext Mining and Translation via Self-Trained Contextual Embeddings." Transactions of the Association for Computational Linguistics 8 (December 2020): 828–41. http://dx.doi.org/10.1162/tacl_a_00348.

Abstract:
We describe an unsupervised method to create pseudo-parallel corpora for machine translation (MT) from unaligned text. We use multilingual BERT to create source and target sentence embeddings for nearest-neighbor search and adapt the model via self-training. We validate our technique by extracting parallel sentence pairs on the BUCC 2017 bitext mining task and observe up to a 24.5 point increase (absolute) in F1 scores over previous unsupervised methods. We then improve an XLM-based unsupervised neural MT system pre-trained on Wikipedia by supplementing it with pseudo-parallel text mined from the same corpus, boosting unsupervised translation performance by up to 3.5 BLEU on the WMT’14 French-English and WMT’16 German-English tasks and outperforming the previous state-of-the-art. Finally, we enrich the IWSLT’15 English-Vietnamese corpus with pseudo-parallel Wikipedia sentence pairs, yielding a 1.2 BLEU improvement on the low-resource MT task. We demonstrate that unsupervised bitext mining is an effective way of augmenting MT datasets and complements existing techniques like initializing with pre-trained contextual embeddings.
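A simplified sketch of the mining step: score nearest neighbors between source and target sentence embeddings with a ratio-margin criterion. The embeddings are assumed to be precomputed (e.g., mean-pooled multilingual BERT states), and this margin formulation is a common variant rather than necessarily the paper's exact scoring.

```python
# Hedged sketch of margin-based bitext mining over precomputed sentence embeddings.
import numpy as np

def mine_pairs(src_emb, tgt_emb, k=4, margin_threshold=1.05):
    src = src_emb / np.linalg.norm(src_emb, axis=1, keepdims=True)   # L2-normalize rows
    tgt = tgt_emb / np.linalg.norm(tgt_emb, axis=1, keepdims=True)
    sim = src @ tgt.T                                                # cosine similarities
    # ratio margin: similarity divided by the average of each side's k nearest neighbors
    src_knn = np.sort(sim, axis=1)[:, -k:].mean(axis=1, keepdims=True)
    tgt_knn = np.sort(sim, axis=0)[-k:, :].mean(axis=0, keepdims=True)
    margin = sim / ((src_knn + tgt_knn) / 2)
    best = margin.argmax(axis=1)                                     # best target per source
    return [(i, j) for i, j in enumerate(best) if margin[i, j] > margin_threshold]
```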
9

Tao, Gordon, William C. Miller, Janice J. Eng, Heather Lindstrom, Bita Imam, and Michael Payne. "Self-directed usage of an in-home exergame after a supervised telerehabilitation training program for older adults with lower-limb amputation." Prosthetics and Orthotics International 44, no. 2 (2020): 52–59. http://dx.doi.org/10.1177/0309364620906272.

Abstract:
Background: While home-based exergames help overcome accessibility barriers to rehabilitation, it is unclear what constitutes effective intervention design in using exergames to support self-efficacy and engagement. Objective: Examine usage of an in-home exergame, compared to control, unsupervised after supervised training by older persons with lower-limb amputation. Study design: Secondary analysis of a multi-site, parallel, evaluator-masked randomized controlled trial. Methods: WiiNWalk uses the Wii Fit and teleconferencing for in-home group-based exergame therapy with clinical supervision. Participants engaged in a 4-week supervised training phase followed by a 4-week unsupervised phase in experimental (WiiNWalk) and attention control groups. Usage between phases and between groups was compared using the unsupervised/supervised ratio of session count (over 4 weeks) and session time (mean min/session over 4 weeks) for each phase. Results: Participants: n = 36 experimental, n = 28 control, unilateral lower-limb amputation, age > 50 years, prosthesis usage ≥ 2 hours/day. The session count ratio unsupervised/supervised, median and interquartile range (IQR), was less than parity (p < 0.01) for the experimental (0.25, IQR 0.00–0.68) and control (0.18, IQR 0.00–0.67) groups, with no difference between groups (p = 0.92). The experimental session time ratio unsupervised/supervised showed consistency (1.12, IQR 0.80–1.41) between phases (p = 0.24); the control group showed lower ratios (0.76, IQR 0.57–1.08) compared to experimental (p = 0.027). Conclusions: Unsupervised exercise duration remained consistent with supervised, but frequency was reduced. Social and clinical guidance features may remain necessary for sustained lower-limb amputation exergame engagement at home. Clinical relevance: This study provides context regarding when prosthesis users are more likely to use exergames such as Wii Fit for exercise therapy. Clinicians may consider our results when applying exergames in their practice or when developing new exergame intervention strategies.
10

Li, Yuanyuan, Sixin Chen, Guanqiu Qi, Zhiqin Zhu, Matthew Haner, and Ruihua Cai. "A GAN-Based Self-Training Framework for Unsupervised Domain Adaptive Person Re-Identification." Journal of Imaging 7, no. 4 (2021): 62. http://dx.doi.org/10.3390/jimaging7040062.

Abstract:
As a crucial task in surveillance and security, person re-identification (re-ID) aims to identify targeted pedestrians across multiple images captured by non-overlapping cameras. However, existing person re-ID solutions face two main challenges: the lack of pedestrian identification labels in the captured images, and the domain shift between different domains. A generative adversarial network (GAN)-based self-training framework with progressive augmentation (SPA) is proposed to obtain robust features of the unlabeled data from the target domain, according to the prior knowledge of the labeled data from the source domain. Specifically, the proposed framework consists of two stages: the style transfer stage (STrans) and the self-training stage (STrain). First, the target data is complemented by a camera style transfer algorithm in the STrans stage, in which CycleGAN and a Siamese network are integrated to preserve the unsupervised self-similarity (the similarity of the same image before and after transformation) and domain dissimilarity (the dissimilarity between a transferred source image and the target image). Second, clustering and classification are alternately applied to enhance the model performance progressively in the STrain stage, in which both global and local features of the target-domain images are obtained. Compared with state-of-the-art methods, the proposed method achieves competitive accuracy on two existing datasets.
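A generic, hedged sketch of the cluster-then-classify alternation in the self-training stage, using DBSCAN over extracted features as the pseudo-identity step; the feature extractor, fine-tuning routine, and DBSCAN parameters are placeholders, not the paper's implementation.

```python
# Hypothetical sketch of clustering-based self-training for re-ID: cluster
# target-domain embeddings into pseudo identities, drop noise, and fine-tune.
from sklearn.cluster import DBSCAN

def self_training_round(model, target_images, extract_features, fine_tune):
    feats = extract_features(model, target_images)                 # N x D embeddings (assumed)
    labels = DBSCAN(eps=0.6, min_samples=4, metric="cosine").fit_predict(feats)
    pseudo_set = [(img, lab) for img, lab in zip(target_images, labels)
                  if lab != -1]                                    # -1 marks DBSCAN noise
    return fine_tune(model, pseudo_set)                            # train on pseudo identities
```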

Dissertations / Theses on the topic "Unsupervised self-training"

1

Ter-Hovhannisyan, Vardges. "Unsupervised and semi-supervised training methods for eukaryotic gene prediction." Diss., Atlanta, Ga.: Georgia Institute of Technology, 2008. http://hdl.handle.net/1853/26645.

Abstract:
Thesis (Ph.D)--Biology, Georgia Institute of Technology, 2009. Committee Chair: Mark Borodovky; Committee Member: Jung H. Choi; Committee Member: King Jordan; Committee Member: Leonid Bunimovich; Committee Member: Yury Chernoff. Part of the SMARTech Electronic Thesis and Dissertation Collection.
2

Tang, Shiyuyun. "Improving algorithms of gene prediction in prokaryotic genomes, metagenomes, and eukaryotic transcriptomes." Diss., Georgia Institute of Technology, 2016. http://hdl.handle.net/1853/54998.

Abstract:
Next-generation sequencing has generated enormous amounts of DNA and RNA sequences that potentially carry volumes of genetic information, e.g. protein-coding genes. The thesis is divided into three main parts describing i) GeneMarkS-2, ii) GeneMarkS-T, and iii) MetaGeneTack. In prokaryotic genomes, ab initio gene finders can predict genes with high accuracy. However, the error rate is not negligible and largely species-specific. Most errors in gene prediction are made in genes located in genomic regions with atypical GC composition, e.g. genes in pathogenicity islands. We describe a new algorithm, GeneMarkS-2, that uses local GC-specific heuristic models for scoring individual ORFs in the first step of analysis. Predicted atypical genes are retained and serve as ‘external’ evidence in subsequent runs of self-training. GeneMarkS-2 also controls the quality of the training process by effectively selecting optimal orders of the Markov chain models as well as duration parameters in the hidden semi-Markov model. GeneMarkS-2 has shown significantly improved accuracy compared with other state-of-the-art gene prediction tools. Massive parallel sequencing of RNA transcripts by the next-generation technology (RNA-Seq) provides large amounts of RNA reads that can be assembled into full transcriptomes. We have developed a new tool, GeneMarkS-T, for ab initio identification of protein-coding regions in RNA transcripts. Unsupervised estimation of the parameters of the algorithm makes unnecessary several steps in conventional gene prediction protocols, most importantly the manually curated preparation of training sets. We have demonstrated that the GeneMarkS-T self-training is robust with respect to the presence of errors in assembled transcripts and that the accuracy of GeneMarkS-T in identifying protein-coding regions and, particularly, in predicting gene starts compares favorably to other existing methods. Frameshift prediction (FS) is important for the analysis and biological interpretation of metagenomic sequences. Reads in metagenomic samples are prone to sequencing errors. Insertion and deletion errors that change the coding frame impair the accurate identification of protein-coding genes. Accurate frameshift prediction requires a sufficient amount of data to estimate the parameters of species-specific statistical models of protein-coding and non-coding regions. However, this data is not available; all we have are metagenomic sequences of unknown origin. The challenge of ab initio FS detection is, therefore, twofold: (i) to find a way to infer the necessary model parameters and (ii) to identify the positions of frameshifts (if any). We describe a new tool, MetaGeneTack, which uses a heuristic method to estimate the parameters of the sequence models used in the FS detection algorithm. It was shown on several test sets that the performance of MetaGeneTack FS detection is comparable to or better than that of the earlier developed program FragGeneScan.
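The self-training idea common to the tools above can be summarized as iterating between prediction and parameter re-estimation until the gene set stabilizes; a schematic, language-level sketch only, where `predict_genes` and `estimate_model` are placeholders and not GeneMark internals.

```python
# Schematic self-training loop for an ab initio gene finder: predict genes with
# the current model, re-estimate model parameters from those predictions, and
# repeat until the predictions stop changing. All callables are placeholders.
def self_train_gene_finder(sequence, initial_model, predict_genes, estimate_model,
                           max_rounds=10):
    model = initial_model
    previous = None
    genes = None
    for _ in range(max_rounds):
        genes = predict_genes(model, sequence)     # e.g., list of predicted ORF coordinates
        if genes == previous:                      # converged: same gene set as last round
            break
        model = estimate_model(genes, sequence)    # retrain emission/duration parameters
        previous = genes
    return model, genes
```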

Book chapters on the topic "Unsupervised self-training"

1

Mei, Ke, Chuang Zhu, Jiaqi Zou, and Shanghang Zhang. "Instance Adaptive Self-training for Unsupervised Domain Adaptation." In Computer Vision – ECCV 2020. Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-58574-7_25.

2

Zou, Yang, Zhiding Yu, B. V. K. Vijaya Kumar, and Jinsong Wang. "Unsupervised Domain Adaptation for Semantic Segmentation via Class-Balanced Self-training." In Computer Vision – ECCV 2018. Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-030-01219-9_18.

3

Lee, You-Min, and Jong-Hwan Kim. "Robust and Reliable Feature Extractor Training by Using Unsupervised Pre-training with Self-Organization Map." In Advances in Intelligent Systems and Computing. Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-16841-8_16.

4

Liu, Xiaofeng, Fangxu Xing, Maureen Stone, et al. "Generative Self-training for Cross-Domain Unsupervised Tagged-to-Cine MRI Synthesis." In Medical Image Computing and Computer Assisted Intervention – MICCAI 2021. Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-87199-4_13.

5

"Training Algorithms." In Medical Diagnosis Using Artificial Neural Networks. IGI Global, 2014. http://dx.doi.org/10.4018/978-1-4666-6146-2.ch006.

Abstract:
The process of assigning the weight to each connection is called training. A network can be subject to supervised or unsupervised training. In this chapter, supervised and unsupervised learning are explained, and then various training algorithms, such as the multilayer perceptron (MLP) and Back Propagation (BP), are introduced as supervised training algorithms. The unsupervised training algorithm, namely Kohonen's self-organizing map (SOM), is introduced as one of the most popular neural network models. SOMs convert high-dimensional, non-linear statistical relationships into simple geometric relationships in an n-dimensional array.
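Concretely, the SOM training the chapter refers to reduces to a neighborhood-weighted update of a codebook; a compact numpy sketch of a single update step, where the learning rate and neighborhood radius are illustrative values.

```python
# Minimal sketch of one Kohonen SOM update: find the best-matching unit (BMU)
# and pull nearby codebook vectors toward the input sample.
import numpy as np

def som_update(weights, grid, x, lr=0.1, radius=1.5):
    # weights: (n_nodes, dim) codebook; grid: (n_nodes, 2) node coordinates; x: (dim,) input
    bmu = np.argmin(np.linalg.norm(weights - x, axis=1))     # closest prototype to x
    dist2 = np.sum((grid - grid[bmu]) ** 2, axis=1)          # squared grid distance to the BMU
    h = np.exp(-dist2 / (2 * radius ** 2))                   # Gaussian neighborhood function
    return weights + lr * h[:, None] * (x - weights)         # move neighbors toward x
```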
6

Charles, Darryl, Colin Fyfe, Daniel Livingstone, and Stephen McGlinchey. "Unsupervised Learning in Artificial Neural Networks." In Biologically Inspired Artificial Intelligence for Computer Games. IGI Global, 2008. http://dx.doi.org/10.4018/978-1-59140-646-4.ch005.

Abstract:
With the artificial neural networks which we have met so far, we must have a training set on which we already have the answers to the questions which we are going to pose to the network. Yet humans appear to be able to learn (indeed some would say can only learn) without explicit supervision. The aim of unsupervised learning is to mimic this aspect of human capabilities and hence this type of learning tends to use more biologically plausible methods than those using the error descent methods of the last two chapters. The network must self-organise and to do so, it must react to some aspect of the input data - typically either redundancy in the input data or clusters in the data; i.e. there must be some structure in the data to which it can respond.
7

Permatasari Tarigan, Amira, and Fannie Rizki Ananda. "Exercise Training and Pulmonary Rehabilitation in COPD." In Chronic Obstructive Pulmonary Disease - A Current Conspectus. IntechOpen, 2021. http://dx.doi.org/10.5772/intechopen.97704.

Abstract:
Systemic inflammation and deconditioning syndrome lead to loss of structure and function in body muscle, particularly in the extremities. Longer periods of inactivity due to dyspnea worsen the destruction of muscle. Regular and gradually increased exercise training as part of pulmonary rehabilitation (PR) can improve the function of the muscles essential to daily life, so that patients with stable Chronic Obstructive Pulmonary Disease (COPD) can maintain their daily activities with minimal limitations. Pulmonary rehabilitation consists of exercise training, nutritional support, smoking cessation, and self-management of COPD. The prescription of exercise training is mandatory. An assessment of the clinical condition, to adjust the type, duration, frequency, and intensity of training, must be completed before beginning the training session. Regular and gradually increased training has a significant impact on lung function, dyspnea scale, and quality of life in patients with stable COPD. However, in the COVID era, restrictions on attending hospital-based PR have significantly affected PR programs. As an immunocompromised population, COPD patients have a higher risk of COVID-19 infection and develop more severe complications compared with the normal population. Modified supervised and unsupervised training is therefore needed to revise the classic form of PR. Tele-rehabilitation with teleconferencing, phone calls, and interactive web-based PR can be a good alternative for decreasing hospital admissions and improving quality of life in patients with COPD.
8

Bhatia, Dinesh, and Animesh Mishra. "A Novel Artificial Intelligence Technique for Analysis of Real-Time Electro-Cardiogram Signal for the Prediction of Early Cardiac Ailment Onset." In Handbook of Research on Advancements of Artificial Intelligence in Healthcare Engineering. IGI Global, 2020. http://dx.doi.org/10.4018/978-1-7998-2120-5.ch003.

Abstract:
The role of ECG analysis in the diagnosis of cardiovascular ailments has been significant in recent times. Although effective, the present computational algorithms lack accuracy, and no technique to date is capable of predicting the onset of a CVD condition with precision. In this chapter, the authors attempt to formulate a novel mapping technique based on feature extraction using the fractional Fourier transform (FrFT) and map generation using self-organizing maps (SOM). FrFT feature extraction from the ECG data has been performed in a manner reminiscent of the short-time Fourier transform (STFT). Results show the capability to generate maps from the isolated ECG wavetrains with better ability to predict the onset of CVDs, which is not possible using conventional algorithms. The promising results provide the ability to visualize the data as they evolve in time with the help of maps and histograms, to predict the onset of different CVD conditions, and to generate the required output with unsupervised training, helping achieve greater generalization than previously reported techniques.
9

Ambrosetti, Marco, and Esteban Garcia-Porrero. "Specific issues with physical activity after cardiac rehabilitation." In ESC Handbook of Cardiovascular Rehabilitation. Oxford University Press, 2020. http://dx.doi.org/10.1093/med/9780198849308.003.0017.

Abstract:
The transition between phase II (structured, supervised) and phase III (long-term, unsupervised) cardiac rehabilitation (CR) provides an opportunity to promote regular physical activity (PA) in cardiac patients, with the aim of maintaining functional capacity and improving cardiovascular (CV) prognosis. Unfortunately, barriers at the individual and organizational/environmental level may lead to poor adherence to PA, with a consequent need for a call to action by the whole multidisciplinary CR staff. In particular, improvement of patients’ self-efficacy—defined as beliefs about one’s ability to perform a specific action—is clearly associated with better adherence to the programme. The gold standard is individualized prescription of a PA plan—type, intensity, duration, and frequency—which should be monitored and revised periodically on the basis of serial direct evaluations of cardiorespiratory fitness. If this is not available, good PA practice focusing on training intensity and volume should be recommended. In selected cases, the delivery of a long-term PA programme could be supported by digital health tools.
10

Minis, I. "Applications of Neural Networks in Supply Chain Management." In Handbook of Research on Nature-Inspired Computing for Economics and Management. IGI Global, 2007. http://dx.doi.org/10.4018/978-1-59140-984-7.ch039.

Abstract:
This chapter focuses on significant applications of self-organizing maps (SOMs), that is, unsupervised learning neural networks in two supply chain applications: cellular manufacturing and real-time management of a delayed delivery vehicle. Both problems require drastic complexity reduction, which is addressed effectively by clustering using SOMs. In the first problem, we cluster machines into cells and we use Latent Semantic Indexing for effective training of the network. In the second problem, we group the distribution sites into clusters based on their geographical location. The available vehicle time is distributed to each cluster by solving an appropriate non-linear optimization problem. Within each cluster an established orienteering heuristic is used to determine the clients to be served and the vehicle route. Extensive experimental results indicate that in terms of solution quality, our approach in general outperforms previously proposed methods. Furthermore, the proposed techniques are more efficient, especially in cases involving large numbers of data points. Neural networks have and will continue to play a significant role in solving effectively complex problems in supply chain applications, some of which are also highlighted in this chapter.
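The complexity-reduction step described here, grouping delivery sites by location before routing each group, can be illustrated with plain k-means in place of the chapter's SOM; the coordinates and cluster count below are synthetic.

```python
# Illustrative stand-in for the site-grouping step: cluster delivery sites by
# geographic coordinates, then plan a route within each cluster separately.
# K-means replaces the SOM here purely for brevity; the data are synthetic.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
sites = rng.uniform(low=[37.9, 23.6], high=[38.1, 23.9], size=(40, 2))   # lat/lon of 40 sites
clusters = KMeans(n_clusters=5, n_init=10).fit_predict(sites)
for c in range(5):
    members = np.where(clusters == c)[0]
    print(f"cluster {c}: sites {members.tolist()}")    # each group gets its own vehicle route
```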

Conference papers on the topic "Unsupervised self-training"

1

Chrysos, Grigorios G., Jean Kossaifi, Zhiding Yu, and Anima Anandkumar. "Unsupervised Controllable Generation with Self-Training." In 2021 International Joint Conference on Neural Networks (IJCNN). IEEE, 2021. http://dx.doi.org/10.1109/ijcnn52387.2021.9534045.

2

Mohananey, Anhad, Katharina Kann, and Samuel R. Bowman. "Self-Training for Unsupervised Parsing with PRPN." In Proceedings of the 16th International Conference on Parsing Technologies and the IWPT 2020 Shared Task on Parsing into Enhanced Universal Dependencies. Association for Computational Linguistics, 2020. http://dx.doi.org/10.18653/v1/2020.iwpt-1.11.

3

Wang, Zhuoyi, Yu Lin, YiFan Li, et al. "Unsupervised Perturbation based Self-Supervised Adversarial Training." In 2021 7th IEEE Intl Conference on Big Data Security on Cloud (BigDataSecurity), IEEE Intl Conference on High Performance and Smart Computing (HPSC), and IEEE Intl Conference on Intelligent Data and Security (IDS). IEEE, 2021. http://dx.doi.org/10.1109/bigdatasecurityhpscids52275.2021.00015.

4

Novotney, Scott, Rich Schwartz, and Sanjeev Khudanpur. "Unsupervised Arabic dialect adaptation with self-training." In Interspeech 2011. ISCA, 2011. http://dx.doi.org/10.21437/interspeech.2011-226.

5

Liu, Xiaofeng, Bo Hu, Xiongchang Liu, Jun Lu, Jane You, and Lingsheng Kong. "Energy-constrained Self-training for Unsupervised Domain Adaptation." In 2020 25th International Conference on Pattern Recognition (ICPR). IEEE, 2021. http://dx.doi.org/10.1109/icpr48806.2021.9413284.

6

Wang, Jie, Chaoliang Zhong, Cheng Feng, Jun Sun, Masaru Ide, and Yasuto Yokota. "Dual-Consistency Self-Training For Unsupervised Domain Adaptation." In 2021 IEEE International Conference on Image Processing (ICIP). IEEE, 2021. http://dx.doi.org/10.1109/icip42928.2021.9506074.

7

Wang, Shaolei, Zhongyuan Wang, Wanxiang Che, and Ting Liu. "Combining Self-Training and Self-Supervised Learning for Unsupervised Disfluency Detection." In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP). Association for Computational Linguistics, 2020. http://dx.doi.org/10.18653/v1/2020.emnlp-main.142.

8

Sun, Haipeng, Rui Wang, Kehai Chen, Masao Utiyama, Eiichiro Sumita, and Tiejun Zhao. "Self-Training for Unsupervised Neural Machine Translation in Unbalanced Training Data Scenarios." In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Association for Computational Linguistics, 2021. http://dx.doi.org/10.18653/v1/2021.naacl-main.311.

9

Rennie, Steven, Etienne Marcheret, Neil Mallinar, David Nahamoo, and Vaibhava Goel. "Unsupervised Adaptation of Question Answering Systems via Generative Self-training." In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP). Association for Computational Linguistics, 2020. http://dx.doi.org/10.18653/v1/2020.emnlp-main.87.

10

Gupta, Akshat, Sargam Menghani, Sai Krishna Rallabandi, and Alan W. Black. "Unsupervised Self-Training for Sentiment Analysis of Code-Switched Data." In Proceedings of the Fifth Workshop on Computational Approaches to Linguistic Code-Switching. Association for Computational Linguistics, 2021. http://dx.doi.org/10.18653/v1/2021.calcs-1.13.
