
Journal articles on the topic 'Unsupervised self-training'



Consult the top 50 journal articles for your research on the topic 'Unsupervised self-training.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Orjuela-Cañón, Álvaro David, and Hugo Fernando Posada-Quintero. "Acoustic lung signals analysis based on Mel frequency cepstral coefficients and self-organizing maps." Revista Facultad de Ingeniería 25, no. 43 (2016): 73–82. http://dx.doi.org/10.19053/01211129.v25.n43.2016.5300.

Full text
Abstract:
This study analyzes acoustic lung signals with different abnormalities using Mel Frequency Cepstral Coefficients (MFCC), Self-Organizing Maps (SOM), and the K-means clustering algorithm. SOM models are artificial neural networks that can be trained in an unsupervised or supervised manner. Both approaches were used in this work to compare the utility of this tool in lung signal studies. Results showed that with supervised training, the classification reached 85% accuracy. Unsupervised training was used for clustering tasks, and three was the most adequate number of clusters for both supervised and unsupervised training. In general, SOM models can be applied to lung signals as a strategy for diagnostic systems, for finding the number of clusters in the data, and for making classifications in computer-aided decision-making systems.
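As a rough illustration of the unsupervised branch of this workflow, the sketch below trains a small SOM on pre-computed MFCC-like feature vectors and then clusters the learned prototypes with k-means. It is a minimal sketch only: the MiniSom package, the random stand-in features, the 8×8 map size, and the choice of three clusters are assumptions for illustration, not the authors' implementation.

```python
import numpy as np
from minisom import MiniSom              # third-party package: pip install minisom
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
mfcc_features = rng.normal(size=(200, 13))    # stand-in for 13-dim MFCC vectors per signal

som = MiniSom(8, 8, 13, sigma=1.5, learning_rate=0.5, random_seed=0)
som.train_random(mfcc_features, 5000)         # unsupervised SOM training

codebook = som.get_weights().reshape(-1, 13)  # one prototype vector per map unit
unit_cluster = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(codebook)

# Assign each signal to the cluster of its best-matching map unit
bmu = [np.ravel_multi_index(som.winner(x), (8, 8)) for x in mfcc_features]
signal_cluster = unit_cluster[np.array(bmu)]
print(np.bincount(signal_cluster))
```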
APA, Harvard, Vancouver, ISO, and other styles
2

WANG, DONG, and YANG LIU. "A cross-corpus study of subjectivity identification using unsupervised learning." Natural Language Engineering 18, no. 3 (2011): 375–97. http://dx.doi.org/10.1017/s1351324911000234.

Full text
Abstract:
In this study, we investigate using unsupervised generative learning methods for subjectivity detection across different domains. We create an initial training set using simple lexicon information and then evaluate two iterative learning methods with a base naive Bayes classifier to learn from unannotated data. The first method is self-training, which adds instances with high confidence into the training set in each iteration. The second is a calibrated EM (expectation-maximization) method, where we calibrate the posterior probabilities from EM such that the class distribution is similar to that in the real data. We evaluate both approaches on three different domains: movie data, news resources, and meeting dialogues, and we find that in some cases the unsupervised learning methods can achieve performance close to the fully supervised setup. We perform a thorough analysis to examine factors such as the self-labeling accuracy of the initial training set in unsupervised learning, the accuracy of the added examples in self-training, and the size of the initial training set in different methods. Our experiments and analysis reveal inherent differences across domains and the factors that explain the models' behavior.
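The core self-training loop described here can be sketched in a few lines: a naive Bayes model is fitted on a small seed set and, at each iteration, its most confident predictions on the remaining unannotated data are added to the training pool as pseudo-labels. The synthetic data, the 0.95 confidence threshold, and the iteration count below are illustrative assumptions, not the paper's setup.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.naive_bayes import GaussianNB

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
labeled = np.zeros(len(y), dtype=bool)
labeled[:50] = True                            # small initial training set

clf = GaussianNB()
for _ in range(10):
    clf.fit(X[labeled], y[labeled])
    proba = clf.predict_proba(X[~labeled])
    confident = proba.max(axis=1) > 0.95       # keep only high-confidence predictions
    if not confident.any():
        break
    idx = np.flatnonzero(~labeled)[confident]
    y[idx] = clf.predict(X[idx])               # accept pseudo-labels (possibly noisy)
    labeled[idx] = True

print(f"training pool after self-training: {labeled.sum()} samples")
```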
APA, Harvard, Vancouver, ISO, and other styles
3

Lee, Hye-Woo, Noo-ri Kim, and Jee-Hyong Lee. "Deep Neural Network Self-training Based on Unsupervised Learning and Dropout." International Journal of Fuzzy Logic and Intelligent Systems 17, no. 1 (2017): 1–9. http://dx.doi.org/10.5391/ijfis.2017.17.1.1.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Cao, Yu, Meng Fang, Baosheng Yu, and Joey Tianyi Zhou. "Unsupervised Domain Adaptation on Reading Comprehension." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 05 (2020): 7480–87. http://dx.doi.org/10.1609/aaai.v34i05.6245.

Full text
Abstract:
Reading comprehension (RC) has been studied on a variety of datasets, with performance boosted by deep neural networks. However, the generalization capability of these models across different domains remains unclear. To alleviate the problem, we investigate unsupervised domain adaptation on RC, wherein a model is trained on the labeled source domain and applied to a target domain with only unlabeled samples. We first show that, even with the powerful BERT contextual representation, a model cannot generalize well from one domain to another. To solve this, we provide a novel conditional adversarial self-training method (CASe). Specifically, our approach leverages a BERT model fine-tuned on the source dataset along with confidence filtering to generate reliable pseudo-labeled samples in the target domain for self-training. On the other hand, it further reduces domain distribution discrepancy through conditional adversarial learning across domains. Extensive experiments show our approach achieves comparable performance to supervised models on multiple large-scale benchmark datasets.
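The confidence-filtering step that selects reliable pseudo-labels can be illustrated in isolation: given the softmax outputs of a source-trained model on unlabeled target samples, only predictions above a threshold are kept for the next round of self-training. The toy probabilities and the 0.9 threshold are assumptions; this is not the CASe code and it omits the adversarial component.

```python
import numpy as np

target_probs = np.array([[0.97, 0.03],         # stand-in for softmax outputs on target data
                         [0.55, 0.45],
                         [0.10, 0.90]])
threshold = 0.9

confidence = target_probs.max(axis=1)
pseudo_labels = target_probs.argmax(axis=1)
keep = confidence >= threshold                 # discard uncertain pseudo-labels

print(list(zip(np.flatnonzero(keep).tolist(), pseudo_labels[keep].tolist())))
```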
APA, Harvard, Vancouver, ISO, and other styles
5

Zhou, Meng, Zechen Li, and Pengtao Xie. "Self-supervised Regularization for Text Classification." Transactions of the Association for Computational Linguistics 9 (2021): 641–56. http://dx.doi.org/10.1162/tacl_a_00389.

Full text
Abstract:
Text classification is a widely studied problem and has broad applications. In many real-world problems, the number of texts for training classification models is limited, which renders these models prone to overfitting. To address this problem, we propose SSL-Reg, a data-dependent regularization approach based on self-supervised learning (SSL). SSL (Devlin et al., 2019a) is an unsupervised learning approach that defines auxiliary tasks on input data without using any human-provided labels and learns data representations by solving these auxiliary tasks. In SSL-Reg, a supervised classification task and an unsupervised SSL task are performed simultaneously. The SSL task is unsupervised and defined purely on the input texts, without using any human-provided labels. Training a model with an SSL task can prevent the model from overfitting to a limited number of class labels in the classification task. Experiments on 17 text classification datasets demonstrate the effectiveness of our proposed method. Code is available at https://github.com/UCSD-AI4H/SSReg.
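The idea of performing a supervised task and an unsupervised auxiliary task simultaneously can be sketched as a single weighted loss over a shared encoder. In the sketch below the auxiliary task is a simple input reconstruction standing in for a masked-language-model objective; the tiny modules, random tensors, and the 0.1 weight are illustrative assumptions, not the SSL-Reg implementation.

```python
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Linear(32, 64), nn.ReLU())    # shared text encoder (toy)
cls_head = nn.Linear(64, 4)                              # supervised classification head
ssl_head = nn.Linear(64, 32)                             # unsupervised auxiliary head

x = torch.randn(16, 32)
labels = torch.randint(0, 4, (16,))

h = encoder(x)
loss = nn.functional.cross_entropy(cls_head(h), labels) \
     + 0.1 * nn.functional.mse_loss(ssl_head(h), x)      # self-supervised regularizer
loss.backward()
print(float(loss))
```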
APA, Harvard, Vancouver, ISO, and other styles
6

Huang, Jiabo, Qi Dong, Shaogang Gong, and Xiatian Zhu. "Unsupervised Deep Learning via Affinity Diffusion." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 07 (2020): 11029–36. http://dx.doi.org/10.1609/aaai.v34i07.6757.

Full text
Abstract:
Convolutional neural networks (CNNs) have achieved unprecedented success in a variety of computer vision tasks. However, they usually rely on supervised model learning with the need for massive labelled training data, limiting dramatically their usability and deployability in real-world scenarios without any labelling budget. In this work, we introduce a general-purpose unsupervised deep learning approach to deriving discriminative feature representations. It is based on self-discovering semantically consistent groups of unlabelled training samples with the same class concepts through a progressive affinity diffusion process. Extensive experiments on object image classification and clustering show the performance superiority of the proposed method over the state-of-the-art unsupervised learning models using six common image recognition benchmarks including MNIST, SVHN, STL10, CIFAR10, CIFAR100 and ImageNet.
APA, Harvard, Vancouver, ISO, and other styles
7

Weinlichová, Jana, and Jiří Fejfar. "Usage of self-organizing neural networks in evaluation of consumer behaviour." Acta Universitatis Agriculturae et Silviculturae Mendelianae Brunensis 58, no. 6 (2010): 625–32. http://dx.doi.org/10.11118/actaun201058060625.

Full text
Abstract:
This article deals with the evaluation of consumer data by artificial intelligence methods. The methodological part describes learning algorithms for Kohonen maps based on the principles of supervised, unsupervised, and semi-supervised learning. The principles of supervised and unsupervised learning are compared, and, based on the constraints of these principles, an advantage of semi-supervised learning is pointed out. Three algorithms for semi-supervised learning are described: label propagation, self-training, and co-training. In particular, the use of co-training in Kohonen map learning appears to be a promising direction for further research. In the concrete application of a Kohonen neural network to consumer expenses, the unsupervised learning method was chosen, namely self-organization, so the features of the data are evaluated by the clustering method known as Kohonen maps. The input data represent consumer expenses of households in countries of the European Union and are characterised by a 12-dimensional vector according to commodity classification. The data are evaluated over several years, so that their distribution, similarity or dissimilarity, and evolution can be observed. The article also discusses other uses of this method for this type of data and compares our results with results obtained by hierarchical cluster analysis.
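Of the three semi-supervised algorithms mentioned (label propagation, self-training, co-training), label propagation has an off-the-shelf implementation that shows the idea compactly: a handful of labels is spread over a graph of unlabeled points. The toy blobs and the -1 convention for unlabeled samples below are illustrative; this is not the article's Kohonen-map experiment.

```python
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.semi_supervised import LabelPropagation

X, y = make_blobs(n_samples=300, centers=3, random_state=0)
y_partial = np.full_like(y, -1)            # -1 marks unlabeled samples
y_partial[::30] = y[::30]                  # keep only every 30th label

model = LabelPropagation().fit(X, y_partial)
print((model.transduction_ == y).mean())   # agreement with the held-out labels
```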
APA, Harvard, Vancouver, ISO, and other styles
8

Keung, Phillip, Julian Salazar, Yichao Lu, and Noah A. Smith. "Unsupervised Bitext Mining and Translation via Self-Trained Contextual Embeddings." Transactions of the Association for Computational Linguistics 8 (December 2020): 828–41. http://dx.doi.org/10.1162/tacl_a_00348.

Full text
Abstract:
We describe an unsupervised method to create pseudo-parallel corpora for machine translation (MT) from unaligned text. We use multilingual BERT to create source and target sentence embeddings for nearest-neighbor search and adapt the model via self-training. We validate our technique by extracting parallel sentence pairs on the BUCC 2017 bitext mining task and observe up to a 24.5 point increase (absolute) in F1 scores over previous unsupervised methods. We then improve an XLM-based unsupervised neural MT system pre-trained on Wikipedia by supplementing it with pseudo-parallel text mined from the same corpus, boosting unsupervised translation performance by up to 3.5 BLEU on the WMT’14 French-English and WMT’16 German-English tasks and outperforming the previous state-of-the-art. Finally, we enrich the IWSLT’15 English-Vietnamese corpus with pseudo-parallel Wikipedia sentence pairs, yielding a 1.2 BLEU improvement on the low-resource MT task. We demonstrate that unsupervised bitext mining is an effective way of augmenting MT datasets and complements existing techniques like initializing with pre-trained contextual embeddings.
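At its simplest, embedding-based bitext mining reduces to a nearest-neighbour search over normalized sentence embeddings, keeping the best-scoring target sentence for each source sentence. The random vectors below stand in for multilingual BERT embeddings, and the plain cosine score with a fixed cut-off is a simplification of the margin-based criterion typically used; none of this is the paper's code.

```python
import numpy as np

rng = np.random.default_rng(0)
src = rng.normal(size=(5, 768))        # source-language sentence embeddings (stand-in)
tgt = rng.normal(size=(8, 768))        # target-language sentence embeddings (stand-in)

src /= np.linalg.norm(src, axis=1, keepdims=True)
tgt /= np.linalg.norm(tgt, axis=1, keepdims=True)
sims = src @ tgt.T                     # cosine similarity matrix

best = sims.argmax(axis=1)             # best target match per source sentence
pairs = [(i, int(j)) for i, j in enumerate(best) if sims[i, j] > 0.0]
print(pairs)                           # candidate pseudo-parallel pairs
```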
APA, Harvard, Vancouver, ISO, and other styles
9

Tao, Gordon, William C. Miller, Janice J. Eng, Heather Lindstrom, Bita Imam, and Michael Payne. "Self-directed usage of an in-home exergame after a supervised telerehabilitation training program for older adults with lower-limb amputation." Prosthetics and Orthotics International 44, no. 2 (2020): 52–59. http://dx.doi.org/10.1177/0309364620906272.

Full text
Abstract:
Background: While home-based exergames help overcome accessibility barriers to rehabilitation, it is unclear what constitutes effective intervention design in using exergames to support self-efficacy and engagement. Objective: Examine usage of an in-home exergame, compared to control, unsupervised after supervised training by older persons with lower-limb amputation. Study design: Secondary analysis of a multi-site, parallel, evaluator-masked randomized controlled trial. Methods: WiiNWalk uses the Wii Fit and teleconferencing for in-home group-based exergame therapy with clinical supervision. Participants engaged in a 4-week supervised training phase followed by a 4-week unsupervised phase in experimental (WiiNWalk) and attention control groups. Usage between phases and between groups was compared using the unsupervised/supervised ratio of session count (over 4 weeks) and session time (mean min/session over 4 weeks) for each phase. Results: Participants: n = 36 experimental, n = 28 control, unilateral lower-limb amputation, age > 50 years, prosthesis usage ≥ 2 hours/day. The unsupervised/supervised session count ratio, median and interquartile range (IQR), was less than parity (p < 0.01) for the experimental (0.25, IQR 0.00–0.68) and control (0.18, IQR 0.00–0.67) groups, with no difference between groups (p = 0.92). The experimental unsupervised/supervised session time ratio showed consistency (1.12, IQR 0.80–1.41) between phases (p = 0.24); the control group showed lower ratios (0.76, IQR 0.57–1.08) compared to the experimental group (p = 0.027). Conclusions: Unsupervised exercise duration remained consistent with supervised exercise, but frequency was reduced. Social and clinical guidance features may remain necessary for sustained exergame engagement at home after lower-limb amputation. Clinical relevance: This study provides context regarding when prosthesis users are more likely to use exergames such as Wii Fit for exercise therapy. Clinicians may consider our results when applying exergames in their practice or when developing new exergame intervention strategies.
APA, Harvard, Vancouver, ISO, and other styles
10

Li, Yuanyuan, Sixin Chen, Guanqiu Qi, Zhiqin Zhu, Matthew Haner, and Ruihua Cai. "A GAN-Based Self-Training Framework for Unsupervised Domain Adaptive Person Re-Identification." Journal of Imaging 7, no. 4 (2021): 62. http://dx.doi.org/10.3390/jimaging7040062.

Full text
Abstract:
As a crucial task in surveillance and security, person re-identification (re-ID) aims to identify targeted pedestrians across multiple images captured by non-overlapping cameras. However, existing person re-ID solutions face two main challenges: the lack of pedestrian identification labels in the captured images, and the domain shift between different domains. A generative adversarial network (GAN)-based self-training framework with progressive augmentation (SPA) is proposed to obtain robust features of the unlabeled data from the target domain, according to the pre-knowledge of the labeled data from the source domain. Specifically, the proposed framework consists of two stages: the style transfer stage (STrans) and the self-training stage (STrain). First, the target data is complemented by a camera style transfer algorithm in the STrans stage, in which CycleGAN and a Siamese network are integrated to preserve the unsupervised self-similarity (the similarity of the same image before and after transformation) and domain dissimilarity (the dissimilarity between a transferred source image and the target image). Second, clustering and classification are alternately applied to progressively enhance the model performance in the STrain stage, in which both global and local features of the target-domain images are obtained. Compared with state-of-the-art methods, the proposed method achieves competitive accuracy on two existing datasets.
APA, Harvard, Vancouver, ISO, and other styles
11

Kolesau, Aliaksei, and Dmitrij Šešok. "Voice Activation for Low-Resource Languages." Applied Sciences 11, no. 14 (2021): 6298. http://dx.doi.org/10.3390/app11146298.

Full text
Abstract:
Voice activation systems are used to find a pre-defined word or phrase in an audio stream. Industry solutions, such as “OK, Google” for Android devices, are trained with millions of samples. In this work, we propose and investigate several ways to train a voice activation system when the in-domain data set is small. We compare self-training, exemplar pre-training, fine-tuning a model pre-trained on another domain, joint training on both an out-of-domain high-resource and a target low-resource data set, and unsupervised pre-training. In our experiments, unsupervised pre-training and joint training with a high-resource data set from another domain significantly outperform a strong baseline of fine-tuning a model trained on another data set. We obtain a 7–25% relative improvement depending on the model architecture. Additionally, we improve the best test accuracy on the Lithuanian data set from 90.77% to 93.85%.
APA, Harvard, Vancouver, ISO, and other styles
12

C A Padmanabha Reddy, Y., P. Viswanath, and B. Eswara Reddy. "Semi-supervised learning: a brief review." International Journal of Engineering & Technology 7, no. 1.8 (2018): 81. http://dx.doi.org/10.14419/ijet.v7i1.8.9977.

Full text
Abstract:
Most application domains suffer from a lack of sufficient labeled data, whereas unlabeled data is available cheaply. Obtaining labeled instances is difficult because experienced domain experts are required to label the unlabeled data patterns. Semi-supervised learning addresses this problem and acts as a halfway point between supervised and unsupervised learning. This paper addresses several techniques of semi-supervised learning (SSL), such as self-training, co-training, multi-view learning, and transductive SVM (TSVM) methods. Traditionally, SSL is classified into semi-supervised classification and semi-supervised clustering, both of which can achieve better accuracy than traditional supervised and unsupervised learning techniques. The paper also addresses the issue of scalability and the applications of semi-supervised learning.
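For the self-training technique surveyed here, scikit-learn ships a generic wrapper that pseudo-labels unlabeled samples above a confidence threshold and refits the base classifier. The synthetic data, SVC base learner, and 0.9 threshold below are illustrative assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.semi_supervised import SelfTrainingClassifier
from sklearn.svm import SVC

X, y = make_classification(n_samples=500, random_state=0)
y_train = y.copy()
y_train[50:] = -1                      # -1 marks unlabeled samples

model = SelfTrainingClassifier(SVC(probability=True, random_state=0), threshold=0.9)
model.fit(X, y_train)
print(model.score(X, y))               # accuracy against the hidden labels
```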
APA, Harvard, Vancouver, ISO, and other styles
13

Pedró, Marta, Javier Martín-Martínez, Marcos Maestro-Izquierdo, Rosana Rodríguez, and Montserrat Nafría. "Self-Organizing Neural Networks Based on OxRAM Devices under a Fully Unsupervised Training Scheme." Materials 12, no. 21 (2019): 3482. http://dx.doi.org/10.3390/ma12213482.

Full text
Abstract:
A fully-unsupervised learning algorithm for reaching self-organization in neuromorphic architectures is provided in this work. We experimentally demonstrate spike-timing dependent plasticity (STDP) in Oxide-based Resistive Random Access Memory (OxRAM) devices, and propose a set of waveforms in order to induce symmetric conductivity changes. An empirical model is used to describe the observed plasticity. A neuromorphic system based on the tested devices is simulated, where the developed learning algorithm is tested, involving STDP as the local learning rule. The design of the system and learning scheme permits to concatenate multiple neuromorphic layers, where autonomous hierarchical computing can be performed.
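A pair-based STDP rule of the kind used as a local learning rule can be written as a one-line exponential function of the spike-timing difference. The amplitudes and time constant below are generic textbook values, not the empirical OxRAM plasticity model fitted in the paper.

```python
import numpy as np

def stdp_dw(dt_ms, a_plus=0.05, a_minus=0.05, tau_ms=20.0):
    """Weight change for a post-minus-pre spike time difference dt_ms."""
    if dt_ms >= 0:                                  # pre before post -> potentiation
        return a_plus * np.exp(-dt_ms / tau_ms)
    return -a_minus * np.exp(dt_ms / tau_ms)        # post before pre -> depression

for dt in (-40, -10, 0, 10, 40):
    print(dt, round(stdp_dw(dt), 4))
```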
APA, Harvard, Vancouver, ISO, and other styles
14

Banerjee, Biplab, Francesca Bovolo, Avik Bhattacharya, Lorenzo Bruzzone, Subhasis Chaudhuri, and B. Krishna Mohan. "A New Self-Training-Based Unsupervised Satellite Image Classification Technique Using Cluster Ensemble Strategy." IEEE Geoscience and Remote Sensing Letters 12, no. 4 (2015): 741–45. http://dx.doi.org/10.1109/lgrs.2014.2360833.

Full text
APA, Harvard, Vancouver, ISO, and other styles
15

Fulcher, Eamon P. "WIS-ART: UNSUPERVISED CLUSTERING WITH RAM DISCRIMINATORS." International Journal of Neural Systems 03, no. 01 (1992): 57–63. http://dx.doi.org/10.1142/s0129065792000061.

Full text
Abstract:
WIS-ART merges the self-organising properties of Adaptive Resonance Theory (ART) with the operation of WISARD, an adaptive pattern recognition machine which uses discriminators of conventional Random Access Memories (RAMs). The result is an unsupervised pattern clustering system operating at near real-time that implements the leader algorithm. ART’s clustering is highly dependent upon the value of a “vigilance” parameter, which is set prior to training. However, for WIS-ART hierarchical clustering is performed automatically by the partitioning of discriminators into “multi-vigilance modules”. Thus, clustering may be controlled during the test phase according to the degree of discrimination (hierarchical level) required. Methods for improving the clustering characteristics of WIS-ART whilst still retaining stability are discussed.
APA, Harvard, Vancouver, ISO, and other styles
16

Liao, Guang Lan, Tie Lin Shi, and Zi Rong Tang. "Gearbox Failure Detection Using Growing Hierarchical Self-Organizing Map." Key Engineering Materials 348-349 (September 2007): 177–80. http://dx.doi.org/10.4028/www.scientific.net/kem.348-349.177.

Full text
Abstract:
Machine fault diagnosis is essentially an issue of pattern recognition, which heavily depends on a suitable unsupervised learning method. The Self-Organizing Map (SOM), a popular unsupervised neural network, has been used for failure detection but has two limitations: it needs a predefined static architecture and it lacks the ability to represent hierarchical relations in the data. This paper presents a novel study on failure detection of gearboxes using the Growing Hierarchical Self-Organizing Map (GHSOM), an artificial neural network model with a hierarchical architecture composed of independent growing SOMs. The GHSOM can adapt its architecture during the unsupervised training process and provide a global orientation in the individual layers of the hierarchy; hence the original data structure can be described correctly for machine fault diagnosis. Gearbox vibration signals measured under different operating conditions are analyzed using the proposed technique. The results prove that the hierarchical relations in the gearbox failure data can be intuitively represented and the inherent structure unfolded. Gearbox operating conditions, including normal, cracked tooth, and broken tooth, are then classified and recognized clearly. The study confirms that the GHSOM is very useful and effective for pattern recognition in mechanical fault diagnosis and has good potential for application in practice.
APA, Harvard, Vancouver, ISO, and other styles
17

EL-GAMAL, M. A., H. L. ABDEL-MALEK, and M. A. SOROUR. "AUTOMATIC CIRCUIT TUNING VIA UNSUPERVISED LEARNING PARADIGMS." Journal of Circuits, Systems and Computers 15, no. 02 (2006): 217–42. http://dx.doi.org/10.1142/s0218126606003015.

Full text
Abstract:
This work describes a novel technique for automating the post-fabrication circuit tuning process. A training set that characterizes the behavior of the circuit under test is first constructed. The data in this set consists of input measurement vectors with no output attributes, and is clustered via unsupervised learning algorithm in order to explore its underlying structure and correlations. The generated clusters are labeled and utilized in circuit tuning by calculating the value(s) of the tuning parameter(s). Three prominent and fundamentally different unsupervised learning algorithms, namely, the self-organizing map, the Gaussian mixture model, and the fuzzy C-means algorithm are employed and their performance is compared. The experimental results demonstrate that the proposed technique provides a robust and efficient circuit tuning approach.
APA, Harvard, Vancouver, ISO, and other styles
18

Brazo-Sayavera, Javier, Olga López-Torres, Álvaro Martos-Bermúdez, Lorena Rodriguez-Garcia, Marcela González-Gross, and Amelia Guadalupe-Grau. "Effects of Power Training on Physical Activity, Sitting Time, Disability, and Quality of Life in Older Patients With Type 2 Diabetes During the COVID-19 Confinement." Journal of Physical Activity and Health 18, no. 6 (2021): 660–68. http://dx.doi.org/10.1123/jpah.2020-0489.

Full text
Abstract:
Background: To evaluate the effectiveness of a multicomponent supervised and unsupervised training program focused on muscle power to counteract the potential changes in sedentary behavior, disability, physical activity (PA), and health-related quality of life (HRQoL) caused by the COVID-19 pandemic domiciliary confinement in prefrail older adults with type 2 diabetes mellitus. Methods: Thirty-five older adults with type 2 diabetes mellitus were assigned to 2 groups according to their frailty status: exercise training group (prefrail or frail; n = 21; 74.7 [4.5] y; 33.3% male) and control group (robust; n = 14; 73.1 [3.9] y; 42.9% male). The exercise training group followed a multicomponent training program focusing on muscle power: supervised (5 wk) and unsupervised (6 wk). The primary outcomes, including PA and sitting time, perceived disability, and HRQoL, were assessed at the baseline and after 11 weeks. Results: At the end of confinement, there were significant decreases in PA in both groups (P < .05). Thus, sitting time increased more in the control group than in the exercise training group (P < .05). The HRQoL measures remained unchanged. Conclusions: Muscle power training before and during mandatory COVID-19 self-isolation in type 2 diabetes mellitus older adults (1) attenuates the COVID-19 domiciliary confinement-related increase in sitting time and (2) slightly decreases the self-reported levels of disability and maintains HRQoL.
APA, Harvard, Vancouver, ISO, and other styles
19

Berger, M., A. Calapai, V. Stephan, et al. "Standardized automated training of rhesus monkeys for neuroscience research in their housing environment." Journal of Neurophysiology 119, no. 3 (2018): 796–807. http://dx.doi.org/10.1152/jn.00614.2017.

Full text
Abstract:
Teaching nonhuman primates the complex cognitive behavioral tasks that are central to cognitive neuroscience research is an essential and challenging endeavor. It is crucial for the scientific success that the animals learn to interpret the often complex task rules and reliably and enduringly act accordingly. To achieve consistent behavior and comparable learning histories across animals, it is desirable to standardize training protocols. Automatizing the training can significantly reduce the time invested by the person training the animal. In addition, self-paced training schedules with individualized learning speeds based on automatic updating of task conditions could enhance the animals’ motivation and welfare. We developed a training paradigm for across-task unsupervised training (AUT) of successively more complex cognitive tasks to be administered through a stand-alone housing-based system optimized for rhesus monkeys in neuroscience research settings (Calapai A, Berger M, Niessing M, Heisig K, Brockhausen R, Treue S, Gail A. Behav Res Methods 5: 1–11, 2016). The AUT revealed interindividual differences in long-term learning progress between animals, helping to characterize learning personalities, and commonalities, helping to identify easier and more difficult learning steps in the training protocol. Our results demonstrate that 1) rhesus monkeys stay engaged with the AUT over months despite access to water and food outside the experimental sessions but with lower numbers of interaction compared with conventional fluid-controlled training; 2) with unsupervised training across sessions and task levels, rhesus monkeys can learn tasks of sufficient complexity for state-of-the-art cognitive neuroscience in their housing environment; and 3) AUT learning progress is primarily determined by the number of interactions with the system rather than the mere exposure time. NEW & NOTEWORTHY We demonstrate that highly structured training of behavioral tasks, as used in neuroscience research, can be achieved in an unsupervised fashion over many sessions and task difficulties in a monkey housing environment. Employing a predefined training strategy allows for an observer-independent comparison of learning between animals and of training approaches. We believe that self-paced standardized training can be utilized for pretraining and animal selection and can contribute to animal welfare in a neuroscience research environment.
APA, Harvard, Vancouver, ISO, and other styles
20

Aragon-Calvo, M. A., and J. C. Carvajal. "Self-supervised learning with physics-aware neural networks – I. Galaxy model fitting." Monthly Notices of the Royal Astronomical Society 498, no. 3 (2020): 3713–19. http://dx.doi.org/10.1093/mnras/staa2228.

Full text
Abstract:
Estimating the parameters of a model describing a set of observations using a neural network is, in general, solved in a supervised way. In cases when we do not have access to the model's true parameters, this approach cannot be applied. Standard unsupervised learning techniques, on the other hand, do not produce meaningful or semantic representations that can be associated with the model's parameters. Here we introduce a novel self-supervised hybrid network architecture that combines traditional neural network elements with analytic or numerical models, which represent a physical process to be learned by the system. Self-supervised learning is achieved by generating an internal representation equivalent to the parameters of the physical model. This semantic representation is used to evaluate the model and compare it to the input data during training. The semantic autoencoder architecture described here shares the robustness of neural networks while including an explicit model of the data, learns in an unsupervised way, and estimates, by construction, parameters with a direct physical interpretation. As an illustrative application, we perform unsupervised learning for 2D model fitting of exponential light profiles and evaluate the performance of the network as a function of network size and noise.
APA, Harvard, Vancouver, ISO, and other styles
21

Nahang, Ali Akbar, Fahimeh Mosavi Najafi, and Roya Mohammadi. "The effect of Mindfulness Training on Emotional Self-Regulation and Psychological Resilience of Unsupervised Children." Quarterly Journal of Child Mental Health 7, no. 1 (2020): 106–17. http://dx.doi.org/10.29252/jcmh.7.1.10.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

Cygert, Sebastian, and Andrzej Czyżewski. "Vehicle Detection with Self-Training for Adaptative Video Processing Embedded Platform." Applied Sciences 10, no. 17 (2020): 5763. http://dx.doi.org/10.3390/app10175763.

Full text
Abstract:
Traffic monitoring from closed-circuit television (CCTV) cameras on embedded systems is the subject of the experiments performed. Solving this problem involves difficulties related to hardware limitations and to possible camera placement in various positions, which affects system performance. To satisfy the hardware requirements, vehicle detection is performed using a lightweight Convolutional Neural Network (CNN), named SqueezeDet, while for tracking the Simple Online and Realtime Tracking (SORT) algorithm is applied, allowing for real-time processing on an NVIDIA Jetson TX2. To allow the system to adapt to the deployment environment, a procedure was implemented for generating labels in an unsupervised manner with the help of background modelling and the tracking algorithm. The acquired labels are further used for fine-tuning the model, resulting in a meaningful increase in traffic estimation accuracy; moreover, adding only minimal human effort to the process allows for further accuracy improvement. The proposed methods and the results of experiments organised under real-world test conditions are presented in the paper.
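The unsupervised label-generation step based on background modelling can be approximated with a stock OpenCV background subtractor: foreground blobs become candidate vehicle boxes for fine-tuning. The video path is a placeholder, the area threshold is arbitrary, and this sketch omits the paper's SqueezeDet detector and SORT tracker.

```python
import cv2

cap = cv2.VideoCapture("traffic.mp4")                  # placeholder path
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))

boxes = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)                     # background model -> foreground mask
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    boxes += [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > 500]

print(len(boxes), "candidate pseudo-label boxes")
```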
APA, Harvard, Vancouver, ISO, and other styles
23

Weinstein, Ben G., Sergio Marconi, Stephanie Bohlman, Alina Zare, and Ethan White. "Individual Tree-Crown Detection in RGB Imagery Using Semi-Supervised Deep Learning Neural Networks." Remote Sensing 11, no. 11 (2019): 1309. http://dx.doi.org/10.3390/rs11111309.

Full text
Abstract:
Remote sensing can transform the speed, scale, and cost of biodiversity and forestry surveys. Data acquisition currently outpaces the ability to identify individual organisms in high resolution imagery. We outline an approach for identifying tree-crowns in RGB imagery while using a semi-supervised deep learning detection network. Individual crown delineation has been a long-standing challenge in remote sensing and available algorithms produce mixed results. We show that deep learning models can leverage existing Light Detection and Ranging (LIDAR)-based unsupervised delineation to generate trees that are used for training an initial RGB crown detection model. Despite limitations in the original unsupervised detection approach, this noisy training data may contain information from which the neural network can learn initial tree features. We then refine the initial model using a small number of higher-quality hand-annotated RGB images. We validate our proposed approach while using an open-canopy site in the National Ecological Observation Network. Our results show that a model using 434,551 self-generated trees with the addition of 2848 hand-annotated trees yields accurate predictions in natural landscapes. Using an intersection-over-union threshold of 0.5, the full model had an average tree crown recall of 0.69, with a precision of 0.61 for the visually-annotated data. The model had an average tree detection rate of 0.82 for the field collected stems. The addition of a small number of hand-annotated trees improved the performance over the initial self-supervised model. This semi-supervised deep learning approach demonstrates that remote sensing can overcome a lack of labeled training data by generating noisy data for initial training using unsupervised methods and retraining the resulting models with high quality labeled data.
APA, Harvard, Vancouver, ISO, and other styles
24

Liu, Yunfei, and Feng Lu. "Separate in Latent Space: Unsupervised Single Image Layer Separation." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 07 (2020): 11661–68. http://dx.doi.org/10.1609/aaai.v34i07.6835.

Full text
Abstract:
Many real-world vision tasks, such as reflection removal from a transparent surface and intrinsic image decomposition, can be modeled as single image layer separation. However, this problem is highly ill-posed, requiring accurately aligned and hard-to-collect triplet data to train CNN models. To address this problem, this paper proposes an unsupervised method that requires no ground-truth data triplets in training. At the core of the method are two assumptions about data distributions in the latent spaces of different layers, from which a novel unsupervised layer separation pipeline can be derived. The method can then be constructed on the GAN framework with self-supervision and cycle consistency constraints, among others. Experimental results demonstrate that it outperforms existing unsupervised methods in both synthetic and real-world tasks. The method also shows its ability to solve a more challenging multi-layer separation task.
APA, Harvard, Vancouver, ISO, and other styles
25

Elfadil, Nazar. "Machine Learning: Automated Knowledge Acquisition Based on Unsupervised Neural Network and Expert System Paradigms." Journal of Advanced Computational Intelligence and Intelligent Informatics 9, no. 6 (2005): 693–97. http://dx.doi.org/10.20965/jaciii.2005.p0693.

Full text
Abstract:
Self-organizing maps are unsupervised neural network models that lend themselves to the cluster analysis of high-dimensional input data. Interpreting a trained map is difficult because features responsible for specific cluster assignment are not evident from resulting map representation. This paper presents an approach to automated knowledge acquisition using Kohonen's self-organizing maps and k-means clustering. To demonstrate the architecture and validation, a data set representing animal world has been used as the training data set. The verification of the produced knowledge base is done by using conventional expert system.
APA, Harvard, Vancouver, ISO, and other styles
26

Zheng, Guoshu, and Qiuyu Zhu. "Unsupervised Image Feature Extraction Based on Scattering Transform and Self-supervised Learning with Highly Training Efficiency." Journal of Physics: Conference Series 1237 (June 2019): 032044. http://dx.doi.org/10.1088/1742-6596/1237/3/032044.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Holubar, P., L. Zani, M. Hagar, W. Fröschl, Z. Radak, and R. Braun. "Modelling of anaerobic digestion using self-organizing maps and artificial neural networks." Water Science and Technology 41, no. 12 (2000): 149–56. http://dx.doi.org/10.2166/wst.2000.0259.

Full text
Abstract:
In this work, a self-organizing map and a feed-forward back-propagation neural network were trained with the aim of modelling the anaerobic digestion process. To produce data for the training of the neural nets, an anaerobic digester was operated at steady state and disturbed by pulsing the organic loading rate. Measured parameters were: gas composition, gas production rate, volatile fatty acid concentration, pH, redox potential, volatile suspended solids, and chemical oxygen demand of feed and effluent. It could be shown that both types of self-learning networks can in principle be used to model the process of anaerobic digestion. Using the unsupervised Kohonen self-organizing map, the model's predictions could not follow the measurements in all details, which resulted in an unsatisfactory regression coefficient of R² = 0.69 for the gas composition and R² = 0.76 for the gas production rate. When the supervised FFBP neural net was used, training resulted in more precise predictions; the regression coefficient was R² = 0.74 for the gas composition and R² = 0.92 for the gas production rate.
APA, Harvard, Vancouver, ISO, and other styles
28

Chen, Feiyang, Ying Jiang, Xiangrui Zeng, Jing Zhang, Xin Gao, and Min Xu. "PUB-SalNet: A Pre-Trained Unsupervised Self-Aware Backpropagation Network for Biomedical Salient Segmentation." Algorithms 13, no. 5 (2020): 126. http://dx.doi.org/10.3390/a13050126.

Full text
Abstract:
Salient segmentation is a critical step in biomedical image analysis, aiming to cut out regions that are most interesting to humans. Recently, supervised methods have achieved promising results in biomedical areas, but they depend on annotated training data sets, which requires labor and proficiency in related background knowledge. In contrast, unsupervised learning makes data-driven decisions by obtaining insights directly from the data themselves. In this paper, we propose a completely unsupervised self-aware network based on pre-training and attentional backpropagation for biomedical salient segmentation, named as PUB-SalNet. Firstly, we aggregate a new biomedical data set from several simulated Cellular Electron Cryo-Tomography (CECT) data sets featuring rich salient objects, different SNR settings, and various resolutions, which is called SalSeg-CECT. Based on the SalSeg-CECT data set, we then pre-train a model specially designed for biomedical tasks as a backbone module to initialize network parameters. Next, we present a U-SalNet network to learn to selectively attend to salient objects. It includes two types of attention modules to facilitate learning saliency through global contrast and local similarity. Lastly, we jointly refine the salient regions together with feature representations from U-SalNet, with the parameters updated by self-aware attentional backpropagation. We apply PUB-SalNet for analysis of 2D simulated and real images and achieve state-of-the-art performance on simulated biomedical data sets. Furthermore, our proposed PUB-SalNet can be easily extended to 3D images. The experimental results on the 2d and 3d data sets also demonstrate the generalization ability and robustness of our method.
APA, Harvard, Vancouver, ISO, and other styles
29

Gao, Huachen, Xiaoyu Liu, Meixia Qu, and Shijie Huang. "PDANet: Self-Supervised Monocular Depth Estimation Using Perceptual and Data Augmentation Consistency." Applied Sciences 11, no. 12 (2021): 5383. http://dx.doi.org/10.3390/app11125383.

Full text
Abstract:
In recent studies, self-supervised learning methods have been explored for monocular depth estimation. They minimize the reconstruction loss of images instead of depth information as a supervised signal. However, existing methods usually assume that corresponding points in different views have the same color, which leads to unreliable unsupervised signals and ultimately damages the reconstruction loss during training. Meanwhile, in low-texture regions, the model is unable to predict the disparity value of pixels correctly because of the small number of extracted features. To solve the above issues, we propose a network, PDANet, that integrates perceptual consistency and data augmentation consistency, which are more reliable unsupervised signals, into a regular unsupervised depth estimation model. Specifically, we apply a reliable data augmentation mechanism to minimize the loss between the disparity maps generated from the original image and the augmented image, respectively, which enhances the robustness of the prediction to color fluctuation. At the same time, we aggregate the features of different layers extracted by a pre-trained VGG16 network to explore higher-level perceptual differences between the input image and the generated one. Ablation studies demonstrate the effectiveness of each component, and PDANet shows high-quality depth estimation results on the KITTI benchmark, improving on the state-of-the-art method from 0.114 to 0.084 in absolute relative error for depth estimation.
APA, Harvard, Vancouver, ISO, and other styles
30

Nashaat, Mona, Aindrila Ghosh, James Miller, and Shaikh Quader. "TabReformer: Unsupervised Representation Learning for Erroneous Data Detection." ACM/IMS Transactions on Data Science 2, no. 3 (2021): 1–29. http://dx.doi.org/10.1145/3447541.

Full text
Abstract:
Error detection is a crucial preliminary phase in any data analytics pipeline. Existing error detection techniques typically target specific types of errors. Moreover, most of these detection models either require user-defined rules or ample hand-labeled training examples. Therefore, in this article, we present TabReformer, a model that learns bidirectional encoder representations for tabular data. The proposed model consists of two main phases. In the first phase, TabReformer follows encoder architecture with multiple self-attention layers to model the dependencies between cells and capture tuple-level representations. Also, the model utilizes a Gaussian Error Linear Unit activation function with the Masked Data Model objective to achieve deeper probabilistic understanding. In the second phase, the model parameters are fine-tuned for the task of erroneous data detection. The model applies a data augmentation module to generate more erroneous examples to represent the minority class. The experimental evaluation considers a wide range of databases with different types of errors and distributions. The empirical results show that our solution can enhance the recall values by 32.95% on average compared with state-of-the-art techniques while reducing the manual effort by up to 48.86%.
APA, Harvard, Vancouver, ISO, and other styles
31

Fang, Bo, Gang Chen, Jifa Chen, Guichong Ouyang, Rong Kou, and Lizhe Wang. "CCT: Conditional Co-Training for Truly Unsupervised Remote Sensing Image Segmentation in Coastal Areas." Remote Sensing 13, no. 17 (2021): 3521. http://dx.doi.org/10.3390/rs13173521.

Full text
Abstract:
As the fastest growing trend in big data analysis, deep learning technology has proven to be both an unprecedented breakthrough and a powerful tool in many fields, particularly for image segmentation tasks. Nevertheless, most achievements depend on high-quality pre-labeled training samples, which are labor-intensive and time-consuming. Furthermore, different from conventional natural images, coastal remote sensing ones generally carry far more complicated and considerable land cover information, making it difficult to produce pre-labeled references for supervised image segmentation. In our research, motivated by this observation, we take an in-depth investigation on the utilization of neural networks for unsupervised learning and propose a novel method, namely conditional co-training (CCT), specifically for truly unsupervised remote sensing image segmentation in coastal areas. In our idea, a multi-model framework consisting of two parallel data streams, which are superpixel-based over-segmentation and pixel-level semantic segmentation, is proposed to simultaneously perform the pixel-level classification. The former processes the input image into multiple over-segments, providing self-constrained guidance for model training. Meanwhile, with this guidance, the latter continuously processes the input image into multi-channel response maps until the model converges. Incentivized by multiple conditional constraints, our framework learns to extract high-level semantic knowledge and produce full-resolution segmentation maps without pre-labeled ground truths. Compared to the black-box solutions in conventional supervised learning manners, this method is of stronger explainability and transparency for its specific architecture and mechanism. The experimental results on two representative real-world coastal remote sensing datasets of image segmentation and the comparison with other state-of-the-art truly unsupervised methods validate the plausible performance and excellent efficiency of our proposed CCT.
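The superpixel-based over-segmentation stream can be illustrated with a standard SLIC call, which partitions an image into pseudo-regions that could serve as self-constrained guidance. The bundled sample image and the parameter values are assumptions; the actual CCT framework is not reproduced here.

```python
import numpy as np
from skimage import data, segmentation

image = data.astronaut()                               # stand-in for a coastal remote sensing tile
segments = segmentation.slic(image, n_segments=300, compactness=10, start_label=0)

print(len(np.unique(segments)), "superpixels")         # candidate pseudo-regions for guidance
```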
APA, Harvard, Vancouver, ISO, and other styles
32

Tatoian, Robert, and Lutz Hamel. "Self-Organizing Map Convergence." International Journal of Service Science, Management, Engineering, and Technology 9, no. 2 (2018): 61–84. http://dx.doi.org/10.4018/ijssmet.2018040103.

Full text
Abstract:
Self-organizing maps are artificial neural networks designed for unsupervised machine learning. In this article, the authors introduce a new quality measure called the convergence index. The convergence index is a linear combination of map embedding accuracy and estimated topographic accuracy and, since it reports a single statistically meaningful number, it is perhaps more intuitive to use than other quality measures. The convergence index is evaluated in the context of clustering problems proposed by Ultsch as part of his fundamental clustering problem suite, as well as on real-world datasets. It is first demonstrated that the convergence index captures the notion that a SOM has learned the multivariate distribution of a training data set by looking at the convergence of the marginals. The convergence index is then used to study the convergence of SOMs with respect to the different parameters that govern self-organizing map learning. One result is that the constant neighborhood function produces better self-organizing map models than the popular Gaussian neighborhood function.
APA, Harvard, Vancouver, ISO, and other styles
33

Tian, Yang, and Guangyuan Pan. "An Unsupervised Regularization and Dropout based Deep Neural Network and Its Application for Thermal Error Prediction." Applied Sciences 10, no. 8 (2020): 2870. http://dx.doi.org/10.3390/app10082870.

Full text
Abstract:
Due to the large size of heavy-duty machine tool-foundation systems, the spatial temperature difference is strongly related to thermal error, which greatly affects the system's accuracy. Recently popular deep learning technology could be an alternative for thermal error prediction. In this paper, a thermal prediction model based on a self-organizing deep neural network (DNN) is developed to facilitate accurate training for thermal error modeling of heavy-duty machine tool-foundation systems. The proposed model is improved in two ways. Firstly, a dropout self-organizing mechanism for unsupervised training is developed to prevent co-adaptation of the feature detectors. In addition, a regularization-enhanced transfer function is proposed to further reduce the less important weights of the process and improve the network's feature extraction capability and generalization ability. Furthermore, temperature sensors are used to acquire temperature data from the heavy-duty machine tool and the concrete foundation. In this way, sample data for the thermal error predictive model are repeatedly collected from the same locations at different times. Finally, the accuracy of the thermal error prediction model was validated by thermal error experiments, thus laying the foundation for subsequent studies on thermal error compensation.
APA, Harvard, Vancouver, ISO, and other styles
34

Siu, Man-hung, Herbert Gish, Arthur Chan, William Belfield, and Steve Lowe. "Unsupervised training of an HMM-based self-organizing unit recognizer with applications to topic classification and keyword discovery." Computer Speech & Language 28, no. 1 (2014): 210–23. http://dx.doi.org/10.1016/j.csl.2013.05.002.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Theil, K. S., M. O. Nakashima, S. L. Ondrejka, and C. V. Cotta. "Building Entrustable Professional Activities In Residency Training: Peripheral Blood Smear And Body Fluid Analysis." American Journal of Clinical Pathology 154, Supplement_1 (2020): S96. http://dx.doi.org/10.1093/ajcp/aqaa161.210.

Full text
Abstract:
Introduction/Objective: Entrustable professional activities (EPAs) are defined as specialty-specific tasks representing a unit of professional practice that can be entrusted to unsupervised performance by a trainee following attainment of sufficient task-specific competence. EPAs and periodic competency assessments also provide a framework for evaluating relevant ACGME milestones. We describe our experience creating EPAs for peripheral blood smear (PBS) and body fluid (BF) analysis through which residents became qualified to act as laboratory testing personnel. Methods: Training occurred during a 6-week “boot camp” for PGY2 and PGY4 residents in July-August 2018 and July-August 2019. Training for PBS included didactic lectures in automated hematology and RBC morphology (2 hr) and WBC and platelet morphology (2 hr); faculty-guided microscope reviews of RBC (2 hr) and WBC morphology (2 hr) using a training checklist; completion of RBC and WBC self-assessment quizzes; and a 40-question graded exam that covered cell identification, lab protocols, and case scenarios. Training for BF included didactic lectures (2 hr); faculty-guided microscope review of BF slides using a training checklist; completion of a BF self-assessment quiz; and a 40-question graded exam that covered cell identification, lab protocol, and case scenarios. Following successful completion of the graded exam, residents were deemed competent to perform unsupervised review of cases initially flagged as abnormal by laboratory technologists; they were required to obtain attending review prior to release of results in defined situations. Formal competency assessments according to CLIA standards were done at 6 and 12 months after initial training. Impact on laboratory workflow and turnaround time was assessed before and after training. Conclusion: We successfully created EPAs for PBS and BF analysis through which residents became qualified to act as laboratory testing personnel. There was no adverse impact on laboratory turnaround time, and the number of PBS and BF cases requiring attending pathologist review decreased. Residents appreciated this tangible opportunity to gain graduated responsibility that prepared them for future practice. Periodic competency assessments provide an opportunity to evaluate relevant ACGME milestones. Our training and assessment program and EPAs can serve as a template for other residency programs.
APA, Harvard, Vancouver, ISO, and other styles
36

Cao, Qifan, and Lihong Xu. "Unsupervised Greenhouse Tomato Plant Segmentation Based on Self-Adaptive Iterative Latent Dirichlet Allocation from Surveillance Camera." Agronomy 9, no. 2 (2019): 91. http://dx.doi.org/10.3390/agronomy9020091.

Full text
Abstract:
It has long been a great concern in deep learning that we lack massive data for high-precision training sets, especially in the agriculture field. Plants in images captured in greenhouses, from a distance or up close, not only have various morphological structures but also can have a busy background, leading to huge challenges in labeling and segmentation. This article proposes an unsupervised statistical algorithm SAI-LDA (self-adaptive iterative latent Dirichlet allocation) to segment greenhouse tomato images from a field surveillance camera automatically, borrowing the language model LDA. Hierarchical wavelet features with an overlapping grid word document design and a modified density-based method quick-shift are adopted, respectively, according to different kinds of images, which are classified by specific proportions between fruits, leaves, and the background. We also utilize the feature correlation between several layers of the image to make further optimization through three rounds of iteration of LDA, with updated documents to achieve finer segmentation. Experiment results show that our method can automatically label the organs of the greenhouse plant under complex circumstances, fast and precisely, overcoming the difficulty of inferior real-time image quality caused by a surveillance camera, and thus obtain large amounts of valuable training sets.
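Treating image patches as "documents" of quantized visual "words" lets a standard LDA implementation infer per-patch topic mixtures that can act as unsupervised segment labels. The random count matrix below stands in for real wavelet-feature word counts, and three topics (fruit, leaf, background) is an assumption; this is not the SAI-LDA pipeline.

```python
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation

rng = np.random.default_rng(0)
word_counts = rng.poisson(2.0, size=(50, 200))         # 50 patches x 200 visual words

lda = LatentDirichletAllocation(n_components=3, random_state=0)
topic_mix = lda.fit_transform(word_counts)             # per-patch topic distribution
labels = topic_mix.argmax(axis=1)                      # dominant topic as segment label
print(np.bincount(labels))
```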
APA, Harvard, Vancouver, ISO, and other styles
37

Kanoute, Pascale, Francois Henri Leroy, and Bruno Passily. "Mechanical Characterisation of Thermal Barrier Coatings Using a Micro-Indentation Instrumented Technique." Key Engineering Materials 345-346 (August 2007): 829–32. http://dx.doi.org/10.4028/www.scientific.net/kem.345-346.829.

Full text
Abstract:
An original instrumented microindenter capable of testing materials up to 1000°C in an inert atmosphere has been developed. The method of neural networks is used to solve the inverse problem, in order to determine the constitutive equation of the materials tested. To obtain a data basis for the training and validation of the neural network, finite element simulations were carried out for various sets of material parameters. To reduce the number of simulations a representative sampling of the loading-strain responses is performed using an unsupervised network, so-called self-organizing map.
APA, Harvard, Vancouver, ISO, and other styles
38

Udin, Wani Sofia, and Zuhaira Nadhila Zahuri. "Land Use and Land Cover Detection by Different Classification Systems using Remotely Sensed Data of Kuala Tiga, Tanah Merah Kelantan, Malaysia." Journal of Tropical Resources and Sustainable Science (JTRSS) 5, no. 3 (2017): 145–51. http://dx.doi.org/10.47253/jtrss.v5i3.660.

Full text
Abstract:
Land use and land cover classification systems have been used widely in many applications, such as baseline mapping for Geographic Information System (GIS) input and target identification for roads, clearings, and land-water interfaces. The research was conducted in Kuala Tiga, Tanah Merah, Kelantan, with the study area covering about 25 km². The main purpose of this research is to assess the possibilities of using remote sensing for the detection of regional land-use change by developing a land cover classification system. Another goal is to compare the accuracy of supervised and unsupervised classification systems using remote sensing. In this research, both supervised and unsupervised classifications were tested on Landsat 7 and 8 satellite images from the years 2001 and 2016. For supervised classification, the satellite images are combined and classified; information and data from the field and the land cover classification are utilized to identify training areas that represent the land cover classes. For unsupervised classification, the satellite images are combined and classified by means of the Iterative Self-Organizing Data Analysis Technique (ISODATA) algorithm; information and data from the field and the land cover classification are utilized to assign the resulting spectral classes to the land cover classes. This research then compared the accuracy of the two classification systems at dividing the landscape into five classes: built-up land, agricultural land, bare soil, forest land, and water bodies. Overall accuracies for unsupervised classification are 36.34% for 2016 and 51.76% for 2001, while for supervised classification, accuracy assessments are 95.59% for 2016 and 96.29% for 2001.
APA, Harvard, Vancouver, ISO, and other styles
39

Wang, Hang, Quan Zhang, Guoyin Zhang, Jinwei Fang, and Yangkang Chen. "Self-training and learning the waveform features of microseismic data using an adaptive dictionary." GEOPHYSICS 85, no. 3 (2020): KS51—KS61. http://dx.doi.org/10.1190/geo2019-0213.1.

Full text
Abstract:
Microseismic monitoring is an indispensable technique in characterizing the physical processes that are caused by extraction or injection of fluids during the hydraulic fracturing process. Microseismic data, however, are often contaminated with strong random noise and have a low signal-to-noise ratio (S/N). The low S/N in most microseismic data severely affects the accuracy and reliability of the source localization and source-mechanism inversion results. We have developed a new denoising framework to enhance the quality of microseismic data. We use the method of adaptive sparse dictionaries to learn the waveform features of the microseismic data by iteratively updating the dictionary atoms and sparse coefficients in an unsupervised way. Unlike most existing dictionary learning applications in the seismic community, we learn the features from 1D microseismic data, thereby to learn 1D features of the waveforms. We develop a sparse dictionary learning framework and then prepare the training patches and implement the algorithm to obtain favorable denoising performance. We use extensive numerical examples and real microseismic data examples to demonstrate the validity of our method. Results show that the features of microseismic waveforms can be learned to distinguish signal patches and noise patches even from a single channel of microseismic data. However, more training data can make the learned features smoother and better at representing useful signal components.
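The unsupervised dictionary-learning idea can be sketched on a synthetic 1D trace: overlapping patches are extracted, a sparse dictionary is fitted to them, and the sparse reconstruction serves as a denoised estimate. The synthetic waveform, patch length, and hyperparameters are assumptions, not the paper's adaptive-dictionary algorithm.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 2048)
trace = np.sin(40 * np.pi * t) * np.exp(-5 * t) + 0.3 * rng.normal(size=t.size)

patch_len = 64
patches = np.stack([trace[i:i + patch_len] for i in range(0, t.size - patch_len, 8)])

dico = MiniBatchDictionaryLearning(n_components=32, alpha=1.0, random_state=0)
codes = dico.fit_transform(patches)                    # sparse coefficients (learned 1D features)
denoised_patches = codes @ dico.components_            # patch-wise sparse reconstruction
print(denoised_patches.shape)
```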
APA, Harvard, Vancouver, ISO, and other styles
40

Hoaas, Hanne, Bente Morseth, Anne E. Holland, and Paolo Zanaboni. "Are Physical Activity and Benefits Maintained After Long-Term Telerehabilitation in COPD?" International Journal of Telerehabilitation 8, no. 2 (2016): 39–48. http://dx.doi.org/10.5195/ijt.2016.6200.

Full text
Abstract:
This study investigated whether physical activity levels and other outcomes were maintained at 1-year from completion of a 2-year telerehabilitation intervention in COPD. During the post-intervention year, nine patients with COPD (FEV1 % of pred. 42.4±19.8%; age 58.1±6 years) were encouraged to exercise on a treadmill at home and monitor daily symptoms and training sessions on a webpage as during the intervention. Participants were not provided supervision or motivational support. Physical activity levels decreased from 3,806 steps/day to 2,817 steps/day (p=0.039). There was a decline in time spent on light physical activity (p=0.009), but not on moderate-to-vigorous activity (p=0.053). Adherence to registration of symptoms and training sessions decreased significantly. Other outcomes including health status, quality of life, anxiety and depression, self-efficacy, and healthcare utilization did not change significantly. In conclusion, provision of equipment for self-management and unsupervised home exercise might not be enough to maintain physical activity levels.
APA, Harvard, Vancouver, ISO, and other styles
41

Yu, Xiaohe, and David J. Lary. "Cloud Detection Using an Ensemble of Pixel-Based Machine Learning Models Incorporating Unsupervised Classification." Remote Sensing 13, no. 16 (2021): 3289. http://dx.doi.org/10.3390/rs13163289.

Full text
Abstract:
Remote sensing imagery, such as that provided by the United States Geological Survey (USGS) Landsat satellites, has been widely used to study environmental protection, hazard analysis, and urban planning for decades. Clouds are a constant challenge for such imagery and, if not handled correctly, can cause a variety of issues for a wide range of remote sensing analyses. Typically, cloud mask algorithms use the entire image; in this study we present an ensemble of different pixel-based approaches to cloud pixel modeling. Based on four training subsets with a selection of different input features, 12 machine learning models were created. We evaluated these models using the cropped LC8-Biome cloud validation dataset. As a comparison, Fmask was also applied to the cropped scene Biome dataset. One goal of this research is to explore a machine learning modeling approach that uses as small a training data sample as possible but still provides an accurate model. Overall, the model trained on the sample subset (1.3% of the total training samples) that includes unsupervised Self-Organizing Map classification results as an input feature has the best performance. The approach achieves 98.57% overall accuracy, 1.18% cloud omission error, and 0.93% cloud commission error on the 88 cropped test images. By comparison to Fmask 4.0, this model improves the accuracy by 10.12% and reduces the cloud omission error by 6.39%. Furthermore, using an additional eight independent validation images that were not sampled in model training, the model trained on the second largest subset with an additional five features has the highest overall accuracy at 86.35%, with 12.48% cloud omission error and 7.96% cloud commission error. This model’s overall correctness increased by 3.26%, and the cloud omission error decreased by 1.28% compared to Fmask 4.0. The machine learning cloud classification models discussed in this paper could achieve very good performance utilizing only a small portion of the total training pixels available. We showed that a pixel-based cloud classification model can perform well and that, because each scene has unique spectral characteristics, including a small portion of example pixels from each of the sub-regions in a scene can improve model accuracy significantly.
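The sketch below illustrates the central idea of feeding an unsupervised SOM class to a supervised pixel classifier as an extra input feature. It assumes the third-party minisom package, uses random placeholder pixels and labels, and is not the authors' ensemble pipeline; SOM size and classifier choice are illustrative.

```python
# Minimal sketch: SOM node index as an additional feature for cloud/clear classification.
import numpy as np
from minisom import MiniSom
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
n_pixels, n_bands = 5000, 7                       # e.g. Landsat 8 reflectance bands
X = rng.random((n_pixels, n_bands))
y = rng.integers(0, 2, size=n_pixels)             # 1 = cloud, 0 = clear (placeholder)

# Unsupervised step: train a small SOM and use the winning node index as a feature.
som = MiniSom(5, 5, n_bands, sigma=1.0, learning_rate=0.5, random_seed=0)
som.train_random(X, 2000)
som_class = np.array([5 * i + j for i, j in (som.winner(x) for x in X)])

# Supervised step: spectral bands plus the SOM class feed a tree-based classifier.
X_aug = np.column_stack([X, som_class])
X_tr, X_te, y_tr, y_te = train_test_split(X_aug, y, test_size=0.3, random_state=0)
clf = ExtraTreesClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("overall accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```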
APA, Harvard, Vancouver, ISO, and other styles
42

Theljani, Anis, and Ke Chen. "Diffeomorphic unsupervised deep learning model for mono- and multi-modality registration." Journal of Algorithms & Computational Technology 14 (January 2020): 174830262097352. http://dx.doi.org/10.1177/1748302620973528.

Full text
Abstract:
Different from image segmentation, developing a deep learning network for image registration is less straightforward because training data cannot be prepared or supervised by humans unless they are trivial (e.g. pre-designed affine transforms). One approach for an unsupervised deep learning model is to self-train the deformation fields by a network based on a loss function with an image similarity metric and a regularisation term, just as with traditional variational methods. Such a function consists of a smoothing constraint on the derivatives and a constraint on the determinant of the transformation in order to obtain a spatially smooth and plausible solution. Although any variational model may be used to work with a deep learning algorithm, the challenge lies in achieving robustness. The proposed algorithm is first trained based on a new and robust variational model and tested on synthetic and real mono-modal images. The results show how it deals with large deformation registration problems and leads to a real time solution with no folding. It is then generalised to multi-modal images. Experiments and comparisons with learning and non-learning models demonstrate that this approach can deliver good performances and simultaneously generate an accurate diffeomorphic transformation.
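The kind of loss described above (a similarity term plus smoothness and determinant constraints) can be sketched as follows. This is a minimal PyTorch illustration under assumed choices: mean squared error as the similarity metric, first-order finite differences for the derivatives, and a hinge penalty on negative Jacobian determinants. It is not the paper's variational model.

```python
# Minimal sketch of an unsupervised registration loss with folding penalty.
import torch
import torch.nn.functional as F

def registration_loss(fixed, warped, disp, alpha=1.0, beta=1.0):
    """fixed, warped: (B,1,H,W) images; disp: (B,2,H,W) displacement field u."""
    similarity = F.mse_loss(warped, fixed)

    # Smoothness: squared first-order finite differences of the displacement.
    du_dx = disp[:, :, :, 1:] - disp[:, :, :, :-1]
    du_dy = disp[:, :, 1:, :] - disp[:, :, :-1, :]
    smooth = (du_dx ** 2).mean() + (du_dy ** 2).mean()

    # Jacobian determinant of phi(x) = x + u(x), cropped to a common interior grid.
    ux_x = du_dx[:, 0, 1:, :]
    uy_x = du_dx[:, 1, 1:, :]
    ux_y = du_dy[:, 0, :, 1:]
    uy_y = du_dy[:, 1, :, 1:]
    det = (1 + ux_x) * (1 + uy_y) - ux_y * uy_x
    folding = F.relu(-det).mean()        # penalise folding (negative determinants)

    return similarity + alpha * smooth + beta * folding

# Toy usage: in a real model, `warped` is the moving image resampled with the
# network-predicted displacement field.
fixed = torch.rand(1, 1, 64, 64)
warped = torch.rand(1, 1, 64, 64, requires_grad=True)
disp = torch.zeros(1, 2, 64, 64, requires_grad=True)
registration_loss(fixed, warped, disp).backward()
```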
APA, Harvard, Vancouver, ISO, and other styles
43

Blattman, Christopher, Nathan Fiala, and Sebastian Martinez. "Generating Skilled Self-Employment in Developing Countries: Experimental Evidence from Uganda *." Quarterly Journal of Economics 129, no. 2 (2013): 697–752. http://dx.doi.org/10.1093/qje/qjt057.

Full text
Abstract:
We study a government program in Uganda designed to help the poor and unemployed become self-employed artisans, increase incomes, and thus promote social stability. Young adults in Uganda’s conflict-affected north were invited to form groups and submit grant proposals for vocational training and business start-up. Funding was randomly assigned among screened and eligible groups. Treatment groups received unsupervised grants of $382 per member. Grant recipients invest some in skills training but most in tools and materials. After four years, half practice a skilled trade. Relative to the control group, the program increases business assets by 57%, work hours by 17%, and earnings by 38%. Many also formalize their enterprises and hire labor. We see no effect, however, on social cohesion, antisocial behavior, or protest. Effects are similar by gender but are qualitatively different for women because they begin poorer (meaning the impact is larger relative to their starting point) and because women’s work and earnings stagnate without the program but take off with it. The patterns we observe are consistent with credit constraints.
APA, Harvard, Vancouver, ISO, and other styles
44

Zhao, Yang, Jianyi Zhang, and Changyou Chen. "Self-Adversarially Learned Bayesian Sampling." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 5893–900. http://dx.doi.org/10.1609/aaai.v33i01.33015893.

Full text
Abstract:
Scalable Bayesian sampling is playing an important role in modern machine learning, especially in the fast-developed unsupervised-(deep)-learning models. While tremendous progress has been achieved via scalable Bayesian sampling such as stochastic gradient MCMC (SG-MCMC) and Stein variational gradient descent (SVGD), the generated samples are typically highly correlated. Moreover, their sample-generation processes are often criticized as inefficient. In this paper, we propose a novel self-adversarial learning framework that automatically learns a conditional generator to mimic the behavior of a Markov kernel (transition kernel). High-quality samples can be efficiently generated by direct forward passes through a learned generator. Most importantly, the learning process adopts a self-learning paradigm, requiring no information on existing Markov kernels, e.g., knowledge of how to draw samples from them. Specifically, our framework learns to use current samples, either from the generator or pre-provided training data, to update the generator such that the generated samples progressively approach a target distribution, thus it is called self-learning. Experiments on both synthetic and real datasets verify advantages of our framework, outperforming related methods in terms of both sampling efficiency and sample quality.
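The self-learning loop can be caricatured as below. Note the deliberate swap: instead of the paper's adversarial objective, this toy sketch refines the generator's own samples with a single Langevin step on a known log-density and regresses the generator onto them, purely to show how current samples can drive the generator update; all names and hyperparameters are assumptions.

```python
# Toy sketch of a self-learning generator update (not the paper's algorithm).
import torch
import torch.nn as nn

def log_prob(x):                                   # toy target: standard 2-D Gaussian
    return -0.5 * (x ** 2).sum(dim=1)

gen = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 2))
opt = torch.optim.Adam(gen.parameters(), lr=1e-3)
step = 0.05                                        # Langevin step size

for it in range(2000):
    x = gen(torch.randn(128, 2))                   # current samples from the generator

    # Refine the current samples with a single Langevin (SGLD-like) step.
    x_ref = x.detach().requires_grad_(True)
    grad = torch.autograd.grad(log_prob(x_ref).sum(), x_ref)[0]
    target = (x_ref + step * grad
              + (2 * step) ** 0.5 * torch.randn_like(x_ref)).detach()

    # Pull the generator toward its own refined samples (the "self" signal).
    loss = ((x - target) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```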
APA, Harvard, Vancouver, ISO, and other styles
45

Thawonmas, Ruck, Makoto Iwata, and Satoshi Fukunaga. "A Novel Parallel Model for Self-Organizing Map and its Efficient Implementation on a Data-Driven Multiprocessor." Journal of Advanced Computational Intelligence and Intelligent Informatics 7, no. 3 (2003): 355–61. http://dx.doi.org/10.20965/jaciii.2003.p0355.

Full text
Abstract:
The self-organizing map (SOM), with its related extensions, is one of the most widely used artificial neural network algorithms for unsupervised learning, with a wide variety of applications. When dealing with very large data sets, however, the training time on a single processor is too high to be acceptable for time-critical application domains. To cope with this problem, we present a scheme consisting of a novel parallel model and its implementation on a dynamic data-driven multiprocessor. The parallel model ensures that no load imbalance will occur, while the dynamic data-driven multiprocessor yields high scalability. We demonstrate the effectiveness of the scheme by comparing the parallel model with an existing parallel model, and the proposed implementation with an implementation on another multiprocessor.
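The load-balancing intuition behind a data-parallel batch SOM can be sketched as follows. This is a serial emulation in which equal-sized chunks stand in for workers; it is not the paper's data-driven multiprocessor design, and the grid size, neighbourhood width, and chunk count are assumptions.

```python
# Minimal sketch: batch SOM update from per-chunk partial sums (data parallelism).
import numpy as np

rng = np.random.default_rng(3)
data = rng.random((10_000, 4))
grid_h, grid_w, dim = 8, 8, data.shape[1]
weights = rng.random((grid_h * grid_w, dim))
coords = np.array([(i, j) for i in range(grid_h) for j in range(grid_w)])

def partial_sums(chunk, weights, sigma=1.5):
    """One worker's contribution: neighbourhood-weighted sums for its chunk."""
    d = ((chunk[:, None, :] - weights[None, :, :]) ** 2).sum(-1)
    bmu = d.argmin(axis=1)                                  # best-matching units
    g = np.exp(-((coords[bmu][:, None, :] - coords[None, :, :]) ** 2).sum(-1)
               / (2 * sigma ** 2))                          # neighbourhood kernel
    return g.T @ chunk, g.sum(axis=0)                       # numerator, denominator

for epoch in range(10):
    num = np.zeros_like(weights)
    den = np.zeros(weights.shape[0])
    for chunk in np.array_split(data, 4):                   # 4 equal "workers"
        n_part, d_part = partial_sums(chunk, weights)
        num += n_part
        den += d_part
    weights = num / den[:, None]                            # reduced batch update
```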
APA, Harvard, Vancouver, ISO, and other styles
46

Galvin, T. J., M. T. Huynh, R. P. Norris, et al. "Cataloguing the radio-sky with unsupervised machine learning: a new approach for the SKA era." Monthly Notices of the Royal Astronomical Society 497, no. 3 (2020): 2730–58. http://dx.doi.org/10.1093/mnras/staa1890.

Full text
Abstract:
We develop a new analysis approach towards identifying related radio components and their corresponding infrared host galaxy based on unsupervised machine learning methods. By exploiting Parallelized rotation and flipping INvariant Kohonen maps (pink), a self-organizing map (SOM) algorithm, we are able to associate radio and infrared sources without the a priori requirement of training labels. We present an example of this method using 894 415 images from the Faint Images of the Radio-Sky at Twenty centimeters (FIRST) and Wide-field Infrared Survey Explorer (WISE) surveys centred towards positions described by the FIRST catalogue. We produce a set of catalogues that complement FIRST and describe 802 646 objects, including their radio components and their corresponding AllWISE infrared host galaxy. Using these data products, we (i) demonstrate the ability to identify objects with rare and unique radio morphologies (e.g. ‘X’-shaped galaxies, hybrid FR I/FR II morphologies), (ii) can identify the potentially resolved radio components that are associated with a single infrared host, (iii) introduce a ‘curliness’ statistic to search for bent and disturbed radio morphologies, and (iv) extract a set of 17 giant radio galaxies between 700 and 1100 kpc. As we require no training labels, our method can be applied to any radio-continuum survey, provided a sufficiently representative SOM can be trained.
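The rotation- and flipping-invariant matching at the heart of a pink-style SOM can be illustrated with the toy sketch below, which takes the minimum distance over 90-degree rotations and a horizontal flip. pink itself samples rotations far more finely and runs on accelerators, so this is only a conceptual stand-in with placeholder image sizes.

```python
# Minimal sketch: rotation/flip-invariant best-matching-unit lookup.
import numpy as np

def invariant_distance(image, prototype):
    """Minimum Euclidean distance over 90-degree rotations and a horizontal flip."""
    candidates = []
    for flipped in (image, np.fliplr(image)):
        for k in range(4):
            candidates.append(np.rot90(flipped, k))
    return min(np.linalg.norm(c - prototype) for c in candidates)

def best_matching_unit(image, prototypes):
    """Index of the SOM prototype closest to the image under the invariant distance."""
    dists = [invariant_distance(image, p) for p in prototypes]
    return int(np.argmin(dists)), float(min(dists))

rng = np.random.default_rng(4)
prototypes = rng.random((25, 32, 32))          # a trained 5x5 SOM, flattened to a list
image = rng.random((32, 32))                   # e.g. a FIRST radio cutout
print(best_matching_unit(image, prototypes))
```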
APA, Harvard, Vancouver, ISO, and other styles
47

FU, BILIN, and JIN XU. "A NEW GENOTYPE CALLING METHOD FOR AFFYMETRIX SNP ARRAYS." Journal of Bioinformatics and Computational Biology 09, no. 06 (2011): 715–28. http://dx.doi.org/10.1142/s0219720011005458.

Full text
Abstract:
Current genotype-calling methods such as Robust Linear Model with Mahalanobis Distance Classifier (RLMM) and Corrected Robust Linear Model with Maximum Likelihood Classification (CRLMM) provide accurate calling results for Affymetrix Single Nucleotide Polymorphisms (SNP) chips. However, these methods are computationally expensive as they employ preprocess procedures, including chip data normalization and other sophisticated statistical techniques. In the small sample case the accuracy rate may drop significantly. We develop a new genotype calling method for Affymetrix 100 k and 500 k SNP chips. A two-stage classification scheme is proposed to obtain a fast genotype calling algorithm. The first stage uses unsupervised classification to quickly discriminate genotypes with high accuracy for the majority of the SNPs, and the second stage employs a supervised classification method to incorporate allele frequency information either from the HapMap data or from a self-training scheme. A confidence score is provided for every genotype call. The overall performance is shown to be comparable to that of CRLMM as verified by the known gold standard HapMap data and is superior in small sample cases. The new algorithm is computationally simple and standalone in the sense that a self-training scheme can be used without employing any other training data. A package implementing the calling algorithm is freely available at .
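A two-stage scheme of this general shape can be sketched as follows: an unsupervised mixture model first makes confident calls, and a supervised classifier trained on those calls (a simple self-training step) labels the remaining samples. The toy data, threshold, and model choices are illustrative assumptions, not the published algorithm.

```python
# Minimal sketch: unsupervised clustering followed by supervised self-training calls.
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(5)
# Toy allele contrast (A vs B intensity) for one SNP across 300 samples.
contrast = np.concatenate([rng.normal(-1, 0.15, 100),    # BB
                           rng.normal(0, 0.15, 100),     # AB
                           rng.normal(1, 0.15, 100)]).reshape(-1, 1)

# Stage 1: unsupervised clustering with per-call confidence scores.
gmm = GaussianMixture(n_components=3, random_state=0).fit(contrast)
post = gmm.predict_proba(contrast)
calls = post.argmax(axis=1)
confidence = post.max(axis=1)
confident = confidence > 0.99

# Stage 2: supervised classifier trained on the confident calls (self-training).
clf = LinearDiscriminantAnalysis().fit(contrast[confident], calls[confident])
if (~confident).any():
    calls[~confident] = clf.predict(contrast[~confident])
print("confident fraction:", confident.mean())
```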
APA, Harvard, Vancouver, ISO, and other styles
48

Arbillaga-Etxarri, Ane, Elena Gimeno-Santos, Anael Barberan-Garcia, et al. "Long-term efficacy and effectiveness of a behavioural and community-based exercise intervention (Urban Training) to increase physical activity in patients with COPD: a randomised controlled trial." European Respiratory Journal 52, no. 4 (2018): 1800063. http://dx.doi.org/10.1183/13993003.00063-2018.

Full text
Abstract:
There is a need to increase and maintain physical activity in patients with chronic obstructive pulmonary disease (COPD). We assessed 12-month efficacy and effectiveness of the Urban Training intervention on physical activity in COPD patients. This randomised controlled trial (NCT01897298) allocated 407 COPD patients from primary and hospital settings 1:1 to usual care (n=205) or Urban Training (n=202). Urban Training consisted of a baseline motivational interview, advice to walk on urban trails designed for COPD patients in outdoor public spaces and other optional components for feedback, motivation, information and support (pedometer, calendar, physical activity brochure, website, phone text messages, walking groups and a phone number). The primary outcome was 12-month change in steps per day measured by accelerometer. Efficacy analysis (with per-protocol analysis set, n=233 classified as adherent to the assigned intervention) showed an adjusted (95% CI) 12-month difference of +957 (184–1731) steps per day between Urban Training and usual care. Effectiveness analysis (with intention-to-treat analysis set, n=280 patients completing the study at 12 months including unwilling and self-reported non-adherent patients) showed no differences between groups. Leg muscle pain during walks was more frequently reported in Urban Training than usual care, without differences in any of the other adverse events. Urban Training, combining behavioural strategies with unsupervised outdoor walking, was efficacious in increasing physical activity after 12 months in COPD patients, with few safety concerns. However, it was ineffective in the full population including unwilling and self-reported non-adherent patients.
APA, Harvard, Vancouver, ISO, and other styles
49

Miranda, Enrique, and Jordi Suñé. "Memristors for Neuromorphic Circuits and Artificial Intelligence Applications." Materials 13, no. 4 (2020): 938. http://dx.doi.org/10.3390/ma13040938.

Full text
Abstract:
Artificial Intelligence has found many applications in the last decade due to increased computing power. Artificial Neural Networks are inspired by the brain structure and consist of the interconnection of artificial neurons through artificial synapses in the so-called Deep Neural Networks (DNNs). Training these systems requires huge amounts of data and, after the network is trained, it can recognize unforeseen data and provide useful information. As far as the training is concerned, we can distinguish between supervised and unsupervised learning. The former requires labelled data and is based on the iterative minimization of the output error using the stochastic gradient descent method followed by the recalculation of the strength of the synaptic connections (weights) with the backpropagation algorithm. On the other hand, unsupervised learning does not require data labeling and it is not based on explicit output error minimization. Conventional ANNs can function with supervised learning algorithms (perceptrons, multi-layer perceptrons, convolutional networks, etc.) but also with unsupervised learning rules (Kohonen networks, self-organizing maps, etc.). Besides these, another type of neural network is the so-called Spiking Neural Network (SNN), in which learning takes place through the superposition of voltage spikes launched by the neurons. Their behavior is much closer to the brain's functioning mechanisms, and they can be used with supervised and unsupervised learning rules. Since learning and inference are based on short voltage spikes, energy efficiency improves substantially. Up to this moment, all these ANNs (spiking and conventional) have been implemented as software tools running on conventional computing units based on the von Neumann architecture. However, this approach reaches important limits due to the required computing power, physical size and energy consumption. This is particularly true for applications at the edge of the internet. Thus, there is an increasing interest in developing AI tools directly implemented in hardware for this type of applications. The first hardware demonstrations have been based on Complementary Metal-Oxide-Semiconductor (CMOS) circuits and specific communication protocols. However, to further increase training speed and energy efficiency while reducing the system size, the combination of CMOS neuron circuits with memristor synapses is now being explored. It has also been pointed out that the short time non-volatility of some memristors may even allow fabricating purely memristive ANNs. The memristor is a new device (first demonstrated in solid-state in 2008) which behaves as a resistor with memory and which has been shown to have potentiation and depression properties similar to those of biological synapses. In this Special Issue, we explore the state of the art of neuromorphic circuits implementing neural networks with memristors for AI applications.
APA, Harvard, Vancouver, ISO, and other styles
50

Chen, Jifa, Guojun Zhai, Gang Chen, Bo Fang, Ping Zhou, and Nan Yu. "Unsupervised Domain Adaption for High-Resolution Coastal Land Cover Mapping with Category-Space Constrained Adversarial Network." Remote Sensing 13, no. 8 (2021): 1493. http://dx.doi.org/10.3390/rs13081493.

Full text
Abstract:
Coastal land cover mapping (CLCM) across image domains presents a fundamental and challenging segmentation task. Although adversarial domain adaptation methods have been proposed to address this issue, they always implement distribution alignment via a global discriminator while ignoring the data structure. Additionally, the low inter-class variances and intricate spatial details of coastal objects may entail poor presentation. Therefore, this paper proposes a category-space constrained adversarial method to execute category-level adaptive CLCM. Focusing on the underlying category information, we introduce a category-level adversarial framework to align semantic features. We summarize two diverse strategies to extract category-wise domain labels for source and target domains, where the latter is driven by self-supervised learning. Meanwhile, we generalize the lightweight adaptation module to multiple levels across a robust baseline, aiming to fine-tune the features at different spatial scales. Furthermore, the self-supervised learning approach is also leveraged as an improvement strategy to optimize the result within segmented training. We examine our method on two converse adaptation tasks and compare it with other state-of-the-art models. The overall visualization results and evaluation metrics demonstrate that the proposed method achieves excellent performance in the domain adaptation CLCM with high-resolution remotely sensed images.
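A heavily simplified sketch of category-level adversarial alignment is given below: target pixels receive pseudo class probabilities from the current segmenter (the self-supervised signal), and per-class discriminators are trained to separate source from target features, weighted by those probabilities. The architecture, loss weights, and training loop are assumptions and do not reproduce the authors' network.

```python
# Minimal sketch: one step of category-wise adversarial feature alignment.
import torch
import torch.nn as nn
import torch.nn.functional as F

num_classes, feat_dim = 5, 16
features = nn.Sequential(nn.Conv2d(3, feat_dim, 3, padding=1), nn.ReLU())
classifier = nn.Conv2d(feat_dim, num_classes, 1)
discriminators = nn.ModuleList([nn.Conv2d(feat_dim, 1, 1) for _ in range(num_classes)])

seg_opt = torch.optim.Adam(list(features.parameters()) + list(classifier.parameters()), lr=1e-4)
disc_opt = torch.optim.Adam(discriminators.parameters(), lr=1e-4)

def domain_loss(disc, feat, class_prob, domain_label):
    """Per-pixel domain BCE weighted by the (pseudo) probability of one class."""
    logits = disc(feat).squeeze(1)                                   # (B, H, W)
    target = torch.full_like(logits, domain_label)
    per_pixel = F.binary_cross_entropy_with_logits(logits, target, reduction='none')
    return (per_pixel * class_prob).mean()

src_img = torch.rand(2, 3, 64, 64)
src_lbl = torch.randint(0, num_classes, (2, 64, 64))
tgt_img = torch.rand(2, 3, 64, 64)

# Segmenter update: supervised loss on source, plus fooling the class-wise
# discriminators so target features look like source features.
f_src, f_tgt = features(src_img), features(tgt_img)
seg_loss = F.cross_entropy(classifier(f_src), src_lbl)
tgt_prob = F.softmax(classifier(f_tgt), dim=1).detach()              # pseudo-labels
adv_loss = sum(domain_loss(discriminators[c], f_tgt, tgt_prob[:, c], 1.0)
               for c in range(num_classes)) / num_classes
seg_opt.zero_grad()
(seg_loss + 0.01 * adv_loss).backward()
seg_opt.step()

# Discriminator update: source pixels labelled 1, target pixels labelled 0, per class.
f_src, f_tgt = features(src_img).detach(), features(tgt_img).detach()
src_prob = F.softmax(classifier(f_src), dim=1).detach()
tgt_prob = F.softmax(classifier(f_tgt), dim=1).detach()
d_loss = sum(domain_loss(discriminators[c], f_src, src_prob[:, c], 1.0)
             + domain_loss(discriminators[c], f_tgt, tgt_prob[:, c], 0.0)
             for c in range(num_classes)) / (2 * num_classes)
disc_opt.zero_grad()
d_loss.backward()
disc_opt.step()
```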
APA, Harvard, Vancouver, ISO, and other styles