Academic literature on the topic 'Incremental learning'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Incremental learning.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Incremental learning"

1

Tsvetkov, V. Ya. "Incremental Learning." Образовательные ресурсы и технологии, no. 4 (2021): 44–52. http://dx.doi.org/10.21777/2500-2112-2021-4-44-52.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Sim, Kwee-Bo, Kwang-Seung Heo, Chang-Hyun Park, and Dong-Wook Lee. "The Speaker Identification Using Incremental Learning." Journal of Korean Institute of Intelligent Systems 13, no. 5 (October 1, 2003): 576–81. http://dx.doi.org/10.5391/jkiis.2003.13.5.576.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Boukli Hacene, Ghouthi, Vincent Gripon, Nicolas Farrugia, Matthieu Arzel, and Michel Jezequel. "Transfer Incremental Learning Using Data Augmentation." Applied Sciences 8, no. 12 (December 6, 2018): 2512. http://dx.doi.org/10.3390/app8122512.

Full text
Abstract:
Deep learning-based methods have reached state-of-the-art performance, relying on large quantities of available data and computational power. Such methods remain ill-suited to a major open machine learning problem: learning new classes and examples incrementally over time. Combining the outstanding performance of Deep Neural Networks (DNNs) with the flexibility of incremental learning techniques is a promising avenue of research. In this contribution, we introduce Transfer Incremental Learning using Data Augmentation (TILDA). TILDA is based on pre-trained DNNs as feature extractors, robust selection of feature vectors in subspaces using a nearest-class-mean-based technique, majority votes, and data augmentation at both the training and prediction stages. Experiments on challenging vision datasets demonstrate the ability of the proposed method to perform low-complexity incremental learning while achieving significantly better accuracy than existing incremental counterparts.
APA, Harvard, Vancouver, ISO, and other styles
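The nearest-class-mean component that TILDA builds on can be sketched in a few lines. This is not the authors' implementation: the pre-trained DNN feature extractor, subspace splitting, majority voting, and data augmentation are all omitted, leaving only an illustrative incremental nearest-class-mean classifier over plain feature vectors.

```python
import numpy as np

class NCMClassifier:
    """Illustrative incremental nearest-class-mean classifier.

    New classes and new examples are absorbed by updating running
    class means; no examples are stored and nothing is retrained."""

    def __init__(self):
        self.means = {}   # class label -> mean feature vector
        self.counts = {}  # class label -> number of examples seen

    def partial_fit(self, x, y):
        x = np.asarray(x, dtype=float)
        if y not in self.means:
            # a previously unseen class just becomes a new mean
            self.means[y] = x.copy()
            self.counts[y] = 1
        else:
            # incremental mean update: m += (x - m) / n
            self.counts[y] += 1
            self.means[y] += (x - self.means[y]) / self.counts[y]

    def predict(self, x):
        x = np.asarray(x, dtype=float)
        # nearest class mean in Euclidean distance
        return min(self.means, key=lambda c: np.linalg.norm(x - self.means[c]))
```

Because each class is summarized by a running mean, adding a class or an example is a constant-time update, which is what makes the nearest-class-mean family attractive for incremental settings.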
4

Basu Roy Chowdhury, Somnath, and Snigdha Chaturvedi. "Sustaining Fairness via Incremental Learning." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 6 (June 26, 2023): 6797–805. http://dx.doi.org/10.1609/aaai.v37i6.25833.

Full text
Abstract:
Machine learning systems are often deployed for making critical decisions like credit lending, hiring, etc. While making decisions, such systems often encode the user's demographic information (like gender, age) in their intermediate representations. This can lead to decisions that are biased towards specific demographics. Prior work has focused on debiasing intermediate representations to ensure fair decisions. However, these approaches fail to remain fair with changes in the task or demographic distribution. To ensure fairness in the wild, it is important for a system to adapt to such changes as it accesses new data in an incremental fashion. In this work, we propose to address this issue by introducing the problem of learning fair representations in an incremental learning setting. To this end, we present Fairness-aware Incremental Representation Learning (FaIRL), a representation learning system that can sustain fairness while incrementally learning new tasks. FaIRL is able to achieve fairness and learn new tasks by controlling the rate-distortion function of the learned representations. Our empirical evaluations show that FaIRL is able to make fair decisions while achieving high performance on the target task, outperforming several baselines.
APA, Harvard, Vancouver, ISO, and other styles
5

Rui, Xue, Ziqiang Li, Yang Cao, Ziyang Li, and Weiguo Song. "DILRS: Domain-Incremental Learning for Semantic Segmentation in Multi-Source Remote Sensing Data." Remote Sensing 15, no. 10 (May 12, 2023): 2541. http://dx.doi.org/10.3390/rs15102541.

Full text
Abstract:
With the exponential growth in the speed and volume of remote sensing data, deep learning models are expected to adapt and continually learn over time. Unfortunately, the domain shift between multi-source remote sensing data from various sensors and regions poses a significant challenge. Segmentation models face difficulty in adapting to incremental domains due to catastrophic forgetting, which can be addressed via incremental learning methods. However, current incremental learning methods mainly focus on class-incremental learning, wherein classes belong to the same remote sensing domain, and neglect investigations into incremental domains in remote sensing. To solve this problem, we propose a domain-incremental learning method for semantic segmentation in multi-source remote sensing data. Specifically, our model aims to incrementally learn a new domain while preserving its performance on previous domains without accessing previous domain data. To achieve this, our model has a unique parameter learning structure that reparametrizes domain-agnostic and domain-specific parameters. We use different optimization strategies to adapt to domain shift in incremental domain learning. Additionally, we adopt multi-level knowledge distillation loss to mitigate the impact of label space shift among domains. The experiments demonstrate that our method achieves excellent performance in domain-incremental settings, outperforming existing methods with only a few parameters.
APA, Harvard, Vancouver, ISO, and other styles
6

Shen, Furao, Hui Yu, Youki Kamiya, and Osamu Hasegawa. "An Online Incremental Semi-Supervised Learning Method." Journal of Advanced Computational Intelligence and Intelligent Informatics 14, no. 6 (September 20, 2010): 593–605. http://dx.doi.org/10.20965/jaciii.2010.p0593.

Full text
Abstract:
Using labeled data and large amounts of unlabeled data, our proposed online incremental semi-supervised learning method automatically learns the topology of the input data distribution without prior knowledge of the number of nodes or the network structure. Using labeled data, it labels generated nodes and divides a learned topology into substructures corresponding to classes. Node weights used as prototype vectors enable classification. New labeled or unlabeled data are added incrementally to the system during learning. Experimental results on artificial and real-world data show that the method efficiently learns online incremental tasks even in noisy and non-stationary environments.
APA, Harvard, Vancouver, ISO, and other styles
7

Madhusudhanan, Sathya, Suresh Jaganathan, and Jayashree L S. "Incremental Learning for Classification of Unstructured Data Using Extreme Learning Machine." Algorithms 11, no. 10 (October 17, 2018): 158. http://dx.doi.org/10.3390/a11100158.

Full text
Abstract:
Unstructured data are irregular information with no predefined data model. Streaming data, which constantly arrive over time, are unstructured, and classifying these data is a tedious task as they lack class labels and accumulate over time. As the data keep growing, it becomes difficult to train and create a model from scratch each time. Incremental learning, a self-adaptive approach, uses previously learned model information, then learns and accommodates new information from newly arrived data to provide an updated model, which avoids retraining. The incrementally learned knowledge helps to classify the unstructured data. In this paper, we propose a framework, CUIL (Classification of Unstructured data using Incremental Learning), which clusters the metadata, assigns a label to each cluster, and then incrementally creates a model using the Extreme Learning Machine (ELM), a feed-forward neural network, for each batch of data that arrives. The proposed framework trains the batches separately, significantly reducing memory use and training time, and is tested with metadata created for standard image datasets such as MNIST, STL-10, CIFAR-10, Caltech101, and Caltech256. The tabulated results show that the proposed work achieves greater accuracy and efficiency.
APA, Harvard, Vancouver, ISO, and other styles
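Batch-wise incremental ELM training of the kind CUIL relies on can be illustrated with the standard online sequential ELM (OS-ELM) recurrence. The sketch below is not the CUIL framework itself (the clustering and metadata handling are omitted), and the network sizes are arbitrary; it only shows how output weights are updated per batch without revisiting old data.

```python
import numpy as np

rng = np.random.default_rng(0)

def hidden(X, W, b):
    # random-feature hidden layer with sigmoid activation
    return 1.0 / (1.0 + np.exp(-(X @ W + b)))

class OSELM:
    """Online sequential ELM sketch: fit an initial batch, then fold in
    each new batch via a recursive least-squares update."""

    def __init__(self, n_inputs, n_hidden):
        # ELM input weights are random and never trained
        self.W = rng.standard_normal((n_inputs, n_hidden))
        self.b = rng.standard_normal(n_hidden)
        self.P = None      # inverse correlation matrix
        self.beta = None   # output weights

    def fit_initial(self, X, T):
        H = hidden(X, self.W, self.b)
        # small ridge term keeps the inverse well conditioned
        self.P = np.linalg.inv(H.T @ H + 1e-3 * np.eye(H.shape[1]))
        self.beta = self.P @ H.T @ T

    def partial_fit(self, X, T):
        H = hidden(X, self.W, self.b)
        # recursive least-squares update of P and beta for the new batch
        K = np.linalg.inv(np.eye(H.shape[0]) + H @ self.P @ H.T)
        self.P -= self.P @ H.T @ K @ H @ self.P
        self.beta += self.P @ H.T @ (T - H @ self.beta)

    def predict(self, X):
        return hidden(X, self.W, self.b) @ self.beta
```

The `partial_fit` step is the textbook recursive least-squares identity, so feeding batches sequentially reproduces exactly the ridge solution that batch training on all the data would give, without ever revisiting earlier batches.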
8

Chalup, Stephan K. "Incremental Learning in Biological and Machine Learning Systems." International Journal of Neural Systems 12, no. 06 (December 2002): 447–65. http://dx.doi.org/10.1142/s0129065702001308.

Full text
Abstract:
Incremental learning concepts are reviewed in machine learning and neurobiology. They are identified in evolution, neurodevelopment and learning. A timeline of qualitative axon, neuron and synapse development summarizes the review on neurodevelopment. A discussion of experimental results on data incremental learning with recurrent artificial neural networks reveals that incremental learning often seems to be more efficient or powerful than standard learning but can produce unexpected side effects. A characterization of incremental learning is proposed which takes the elaborated biological and machine learning concepts into account.
APA, Harvard, Vancouver, ISO, and other styles
9

Fu, LiMin, Hui-Huang Hsu, and J. C. Principe. "Incremental Backpropagation Learning Networks." IEEE Transactions on Neural Networks 7, no. 3 (May 1996): 757–61. http://dx.doi.org/10.1109/72.501732.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Han, Zhi, De-Yu Meng, Zong-Ben Xu, and Nan-Nan Gu. "Incremental Alignment Manifold Learning." Journal of Computer Science and Technology 26, no. 1 (January 2011): 153–65. http://dx.doi.org/10.1007/s11390-011-9422-9.

Full text
APA, Harvard, Vancouver, ISO, and other styles
More sources

Dissertations / Theses on the topic "Incremental learning"

1

Westendorp, James (Computer Science & Engineering, Faculty of Engineering, UNSW). "Robust Incremental Relational Learning." Awarded by: University of New South Wales, Computer Science & Engineering, 2009. http://handle.unsw.edu.au/1959.4/43513.

Full text
Abstract:
Real-world learning tasks present a range of issues for learning systems. Learning tasks can be complex and the training data noisy. When operating as part of a larger system, there may be limitations on available memory and computational resources. Learners may also be required to provide results from a stream. This thesis investigates the problem of incremental, relational learning from imperfect data with constrained time and memory resources. The learning process involves incremental update of a theory when an example is presented that contradicts the theory. Contradictions occur if there is an incorrect theory or noisy data. The learner cannot discriminate between the two possibilities, so both are considered and the better possibility used. Additionally, all changes to the theory must have support from multiple examples. These two principles allow learning from imperfect data. The Minimum Description Length principle is used for selection between possible worlds and determining appropriate levels of additional justification. A new encoding scheme allows the use of MDL within the framework of Inductive Logic Programming. Examples must be stored to provide additional justification for revisions without violating resource requirements. A new algorithm determines when to discard examples, minimising total usage while ensuring sufficient storage for justifications. Searching for revisions is the most computationally expensive part of the process, yet not all searches are successful. Another new algorithm uses a notion of theory stability as a guide to occasionally disallow entire searches to reduce overall time. The approach has been implemented as a learner called NILE. Empirical tests include two challenging domains where this type of learner acts as one component of a larger task. The first of these involves recognition of behavior activation conditions in another agent as part of an opponent modeling task. 
The second, more challenging task is learning to identify objects in visual images by recognising relationships between image features. These experiments highlight NILE's strengths and limitations, as well as providing new domains for future work in ILP.
APA, Harvard, Vancouver, ISO, and other styles
2

Hillnertz, Fredrik. "Incremental Self Learning Road Map." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-155910.

Full text
Abstract:
This paper describes a system that incrementally constructs an increasingly accurate road map from the GPS traces of a single vehicle. The resulting road map contains information about the road, such as road gradient, which can be used by functions in a heavy vehicle to drive more effectively. The system is intended to run on an embedded system in a heavy vehicle and is therefore designed to require as little working memory and processing time as possible. Pre- and post-processing techniques that counter GPS noise and random movements and improve the quality of the road map are also described, for example tunnel estimation where GPS signals are missing. An aging method, designed for data from a single vehicle, that eventually removes closed and rarely used roads is proposed. A comparison between the constructed road map and a commercial one shows that the algorithms described create a very accurate road map. The performance of the system is evaluated, and it is concluded that it would be possible to run it on an embedded system in a heavy vehicle.
APA, Harvard, Vancouver, ISO, and other styles
3

Kim, Min Sub (Computer Science & Engineering, Faculty of Engineering, UNSW). "Reinforcement Learning by Incremental Patching." Awarded by: University of New South Wales, 2007. http://handle.unsw.edu.au/1959.4/39716.

Full text
Abstract:
This thesis investigates how an autonomous reinforcement learning agent can improve on an approximate solution by augmenting it with a small patch, which overrides the approximate solution at certain states of the problem. In reinforcement learning, many approximate solutions are smaller and easier to produce than "flat" solutions that maintain distinct parameters for each fully enumerated state, but the best solution within the constraints of the approximation may fall well short of global optimality. This thesis proposes that the remaining gap to global optimality can be efficiently minimised by learning a small patch over the approximate solution. In order to improve the agent's behaviour, algorithms are presented for learning the overriding patch. The patch is grown around particular regions of the problem where the approximate solution is found to be deficient. Two heuristic strategies are proposed for concentrating resources to those areas where inaccuracies in the approximate solution are most costly, drawing a compromise between solution quality and storage requirements. Patching also handles problems with continuous state variables, by two alternative methods: Kuhn triangulation over a fixed discretisation and nearest neighbour interpolation with a variable discretisation. As well as improving the agent's behaviour, patching is also applied to the agent's model of the environment. Inaccuracies in the agent's model of the world are detected by statistical testing, using a selective sampling strategy to limit storage requirements for collecting data. The patching algorithms are demonstrated in several problem domains, illustrating the effectiveness of patching under a wide range of conditions. A scenario drawn from a real-time strategy game demonstrates the ability of patching to handle large complex tasks. These contributions combine to form a general framework for patching over approximate solutions in reinforcement learning.
Complex problems cannot be solved by brute force alone, and some form of approximation is necessary to handle large problems. However, this does not mean that the limitations of approximate solutions must be accepted without question. Patching demonstrates one way in which an agent can leverage approximation techniques without losing the ability to handle fine yet important details.
APA, Harvard, Vancouver, ISO, and other styles
4

Giritharan, Balathasan. "Incremental Learning with Large Datasets." Thesis, University of North Texas, 2012. https://digital.library.unt.edu/ark:/67531/metadc149595/.

Full text
Abstract:
This dissertation focuses on a novel learning strategy based on geometric support vector machines to address the difficulties of processing immense data sets. Support vector machines find the hyperplane that maximizes the margin between two classes, and since the decision boundary is represented by a few training samples, they are a favorable choice for incremental learning. The dissertation presents a novel method, Geometric Incremental Support Vector Machines (GISVM), to address both efficiency and accuracy issues in handling massive data sets. In GISVM, the skin of the convex hulls is defined, and an efficient method is designed to find the best skin approximation given the available examples. The set of extreme points is found by recursively searching along the direction defined by a pair of known extreme points. By identifying the skin of the convex hulls, incremental learning employs a much smaller number of samples with comparable or even better accuracy. When additional samples are provided, they are used together with the skin of the convex hull constructed from the previous dataset. This results in a small number of instances being used in the incremental steps of the training process. Based on experimental results with synthetic data sets, public benchmark data sets from UCI, and endoscopy videos, it is evident that GISVM produces satisfactory classifiers that closely model the underlying data distribution. GISVM improves sensitivity in the incremental steps, significantly reduces the demand for memory space, and demonstrates the ability to recover from temporary performance degradation.
APA, Harvard, Vancouver, ISO, and other styles
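The idea of carrying only a boundary-defining subset between incremental steps can be illustrated with a toy linear SVM. This is a simplified stand-in, not the dissertation's GISVM: a Pegasos-style stochastic trainer replaces the geometric solver, and "points near the margin" replace the convex-hull skin.

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, epochs=200, seed=0):
    """Tiny Pegasos-style linear SVM trainer; labels must be in {-1, +1}."""
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    t = 0
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            t += 1
            eta = 1.0 / (lam * t)  # decreasing step size
            if y[i] * (X[i] @ w) < 1:
                # margin violated: shrink w and step toward the example
                w = (1.0 - eta * lam) * w + eta * y[i] * X[i]
            else:
                w = (1.0 - eta * lam) * w
    return w

def margin_skin(X, y, w, slack=1.5):
    """Keep only examples on or near the margin: a crude stand-in for
    the convex-hull 'skin' that GISVM retains between steps."""
    keep = y * (X @ w) <= slack
    return X[keep], y[keep]

def incremental_step(skin_X, skin_y, new_X, new_y):
    """Retrain on the retained skin plus the new batch only."""
    X = np.vstack([skin_X, new_X])
    y = np.concatenate([skin_y, new_y])
    w = train_linear_svm(X, y)
    return w, *margin_skin(X, y, w)
```

Each incremental step thus trains on far fewer points than the full history, which mirrors the memory savings the dissertation reports, although the geometry here is deliberately much cruder than the skin-of-convex-hull construction.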
5

Monica, Riccardo. "Deep Incremental Learning for Object Recognition." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2016. http://amslaurea.unibo.it/12331/.

Full text
Abstract:
In recent years, deep learning techniques have received great attention in the field of information technology. These techniques have proved particularly useful and effective in domains like natural language processing, speech recognition, and computer vision, and in several real-world applications deep learning approaches have improved the state of the art. In the field of machine learning, deep learning was a real revolution, and a number of effective techniques have been proposed for supervised, unsupervised, and representation learning. This thesis focuses on deep learning for object recognition and, in particular, addresses incremental learning techniques. By incremental learning we denote approaches able to create an initial model from a small training set and to improve the model as new data become available. Temporally coherent sequences have proved useful for incremental learning, since temporal coherence also allows operating in an unsupervised manner. A critical problem in incremental learning is forgetting: the risk of losing previously learned patterns as new data are presented. In the first chapters of this work we introduce the basic theory of neural networks, Convolutional Neural Networks (CNNs), and incremental learning. The CNN is today one of the most effective approaches for supervised object recognition; it is well accepted by the scientific community and widely used by large ICT players like Google and Facebook: relevant applications are Facebook face recognition and Google image search. The scientific community has several large datasets (e.g., ImageNet) for the development and evaluation of object recognition approaches, but very few temporally coherent datasets are available for studying incremental approaches. For this reason we decided to collect a new dataset named TCD4R (Temporal Coherent Dataset For Robotics).
APA, Harvard, Vancouver, ISO, and other styles
6

Sindhu, Muddassar. "Incremental Learning and Testing of Reactive Systems." Licentiate thesis, KTH, Teoretisk datalogi, TCS, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-37763.

Full text
Abstract:
This thesis concerns the design, implementation and evaluation of a specification-based testing architecture for reactive systems using the paradigm of learning-based testing. As part of this work we have designed, verified and implemented new incremental learning algorithms for DFA and Kripke structures. These have been integrated with the NuSMV model checker to give a new learning-based testing architecture. We have evaluated our architecture on case studies and shown that the method is effective.
APA, Harvard, Vancouver, ISO, and other styles
7

Suryanto, Hendra (Computer Science & Engineering, Faculty of Engineering, UNSW). "Learning and Discovery in Incremental Knowledge Acquisition." Awarded by: University of New South Wales, School of Computer Science and Engineering, 2005. http://handle.unsw.edu.au/1959.4/20744.

Full text
Abstract:
Knowledge Based Systems (KBS) have been actively investigated since the early period of AI. There are four common methods of building expert systems: modeling approaches, programming approaches, case-based approaches and machine-learning approaches. One particular technique is Ripple Down Rules (RDR), which may be classified as an incremental case-based approach. Knowledge needs to be acquired from experts in the context of individual cases viewed by them. In the RDR framework, the expert adds a new rule based on the context of an individual case. This task is simple and affects the expert's workflow only minimally. The rule added fixes an incorrect interpretation made by the KBS but with minimal impact on the KBS's previous correct performance. This provides incremental improvement. Despite these strengths of RDR, there are some limitations, including rule redundancy, lack of intermediate features and lack of models. This thesis addresses these RDR limitations by applying automatic learning algorithms to reorganize the knowledge base, to learn intermediate features and possibly to discover domain models. The redundancy problem occurs because rules are created in particular contexts but should have more general application. We address this limitation by reorganizing the knowledge base and removing redundant rules. Removal of redundant rules should also reduce the number of future knowledge acquisition sessions. Intermediate features improve modularity, because the expert can deal with features in groups rather than individually. In addition to the manual creation of intermediate features for RDR, we propose the automated discovery of intermediate features to speed up the knowledge acquisition process by generalizing existing rules. Finally, the Ripple Down Rules approach facilitates rapid knowledge acquisition as it can be initialized with a minimal ontology.
Despite minimal modeling, we propose that a more developed knowledge model can be extracted from an existing RDR KBS. This may be useful in using RDR KBS for other applications. The most useful of these three developments was the automated discovery of intermediate features. This made a significant difference to the number of knowledge acquisition sessions required.
APA, Harvard, Vancouver, ISO, and other styles
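The incremental character of Ripple Down Rules can be sketched with a minimal single-classification RDR tree: a wrong conclusion is fixed by hanging an exception rule off the node that produced it, so earlier correct behaviour is preserved. This is an illustrative toy, not the knowledge-acquisition system described in the thesis; the case dictionaries and condition functions are hypothetical.

```python
class RDRNode:
    """Single-classification Ripple Down Rules node (illustrative sketch)."""

    def __init__(self, cond, conclusion):
        self.cond = cond            # predicate: case dict -> bool
        self.conclusion = conclusion
        self.if_true = None         # exception branch (refines a firing rule)
        self.if_false = None        # alternative branch (tried when cond fails)

    def classify(self, case):
        if self.cond(case):
            if self.if_true:
                refined = self.if_true.classify(case)
                if refined is not None:
                    return refined  # an exception overrides this conclusion
            return self.conclusion
        return self.if_false.classify(case) if self.if_false else None

    def add_rule(self, case, cond, conclusion):
        """Patch the tree for a misclassified case: walk the same path the
        classification took and attach the new rule at its end."""
        if self.cond(case):
            if self.if_true:
                self.if_true.add_rule(case, cond, conclusion)
            else:
                self.if_true = RDRNode(cond, conclusion)
        else:
            if self.if_false:
                self.if_false.add_rule(case, cond, conclusion)
            else:
                self.if_false = RDRNode(cond, conclusion)
```

A tree is usually rooted at a default rule whose condition is always true; each expert correction then adds exactly one node, in the context of the case that exposed the error.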
8

Florez-Larrahondo, German. "Incremental Learning of Discrete Hidden Markov Models." Diss., Mississippi State: Mississippi State University, 2005. http://library.msstate.edu/etd/show.asp?etd=etd-05312005-141645.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Motta, Eduardo Neves. "Supervised Learning Incremental Feature Induction and Selection." Pontifícia Universidade Católica do Rio de Janeiro, 2014. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=28688@1.

Full text
Abstract:
Non-linear feature induction from basic features is a method of generating predictive models with higher precision for classification problems. However, feature induction may rapidly lead to a huge number of features, causing overfitting and models with low predictive power. To prevent this side effect, regularization techniques are employed to obtain a trade-off between a reduced feature set representative of the domain and generalization power. In this work, we describe a supervised machine learning approach that incrementally induces and selects feature conjunctions derived from base features. This approach integrates decision trees, support vector machines and feature selection using sparse perceptrons in a machine learning framework named IFIS (Incremental Feature Induction and Selection). Using IFIS, we generate regularized non-linear models with high performance using a linear algorithm. We evaluate our system on two natural language processing tasks in two different languages. For the first task, POS tagging, we use two corpora: the WSJ corpus for English and Mac-Morpho for Portuguese. Our results are competitive with state-of-the-art performance on both, achieving accuracies of 97.14 per cent and 97.13 per cent, respectively. For the second task, dependency parsing, we use the CoNLL 2006 Shared Task Portuguese corpus, achieving better results than those reported during that competition and competitive with the state of the art for this task, with a UAS score of 92.01 per cent. Applying model regularization using a sparse perceptron, we obtain SVM models up to 10 times smaller while maintaining their accuracy. We achieve model reduction by regularization of feature domains, which can reach 99 per cent. Using the regularized model, we shrink the physical model size by up to 82 per cent and cut prediction time by up to 84 per cent. Downsizing domains and models also enhances feature engineering, through compact domain analysis and the incremental inclusion of new features.
APA, Harvard, Vancouver, ISO, and other styles
10

Tortajada Velert, Salvador. "Incremental Learning Approaches to Biomedical Decision Problems." Doctoral thesis, Universitat Politècnica de València, 2012. http://hdl.handle.net/10251/17195.

Full text
Abstract:
During the last decade, a new trend in medicine has been transforming the nature of healthcare from reactive to proactive. This new paradigm is shifting toward a personalized medicine in which the prevention, diagnosis, and treatment of disease are focused on individual patients; it is known as P4 medicine. Among other key benefits, P4 medicine aspires to detect diseases at an early stage and to stratify patients and diseases so that the optimal therapy can be selected on the basis of individual observations, taking patient outcomes into account to empower the physician, the patient, and their communication. This paradigm shift relies on the availability of complex multi-level biomedical data that are increasingly accurate, since it is possible to find exactly the information needed, but also increasingly noisy, since access to that information is more and more challenging. To take advantage of this information, an important effort has been made in recent decades to digitalize medical records and to develop new mathematical and computational methods for extracting maximum knowledge from patient records, building dynamic and disease-predictive models from massive amounts of integrated clinical and biomedical data. This enables the use of computer-assisted Clinical Decision Support Systems for the management of individual patients. Clinical Decision Support Systems (CDSS) are computational systems that provide precise and specific knowledge for the medical decisions to be adopted in the diagnosis, prognosis, treatment and management of patients. CDSS are closely related to the concept of evidence-based medicine, since they infer medical knowledge from the biomedical databases and acquisition protocols used in developing the systems, give evidence-based computational support for clinical practice, and evaluate the performance and added value of the solution for each specific medical problem.
Tortajada Velert, S. (2012). Incremental Learning approaches to Biomedical decision problems [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/17195
APA, Harvard, Vancouver, ISO, and other styles
More sources

Books on the topic "Incremental learning"

1

United States National Aeronautics and Space Administration, ed. Representation in Incremental Learning. Moffett Field, Calif.: National Aeronautics and Space Administration, Ames Research Center, 1993.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Govea, Alejandro Dizan Vasquez. Incremental Learning for Motion Prediction of Pedestrians and Vehicles. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-13642-9.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Hirsh, Haym. Incremental Version-Space Merging: A General Framework for Concept Learning. Boston, MA: Springer US, 1990.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
4

Hirsh, Haym. Incremental Version-Space Merging: A General Framework for Concept Learning. Boston, MA: Springer US, 1990. http://dx.doi.org/10.1007/978-1-4613-1557-5.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Chakraborty, Sanjay, Sk Hafizul Islam, and Debabrata Samanta. Data Classification and Incremental Clustering in Data Mining and Machine Learning. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-93088-2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Lim, Chee Peng. An incremental adaptive network for on-line, supervised learning and probability estimation. Sheffield: University of Sheffield, Dept. of Automatic Control & Systems Engineering, 1995.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
7

Proietto Salanitri, Federica, Serestina Viriri, Ulaş Bağcı, Pallavi Tiwari, Boqing Gong, Concetto Spampinato, Simone Palazzo, et al., eds. Artificial Intelligence in Pancreatic Disease Detection and Diagnosis, and Personalized Incremental Learning in Medicine. Cham: Springer Nature Switzerland, 2025. http://dx.doi.org/10.1007/978-3-031-73483-0.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Hwang, Francis. Effects of a Curriculum-Based Intervention on the Increments of Stimulus Control for Bidirectional Naming and Student Learning. [New York, N.Y.?]: [publisher not identified], 2021.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
9

Representation in incremental learning. Moffett Field, Calif: National Aeronautics and Space Administration, Ames Research Center, 1993.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
10

Crompton, Deon. Keras Python : Keras Incremental Training: Learning Rate Keras. Independently Published, 2021.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
More sources

Book chapters on the topic "Incremental learning"

1

Utgoff, Paul E., James Cussens, Stefan Kramer, Sanjay Jain, Frank Stephan, Luc De Raedt, Ljupčo Todorovski, et al. "Incremental Learning." In Encyclopedia of Machine Learning, 515–18. Boston, MA: Springer US, 2011. http://dx.doi.org/10.1007/978-0-387-30164-8_386.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Geng, Xin, and Kate Smith-Miles. "Incremental Learning." In Encyclopedia of Biometrics, 731–35. Boston, MA: Springer US, 2009. http://dx.doi.org/10.1007/978-0-387-73003-5_304.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Geng, Xin, and Kate Smith-Miles. "Incremental Learning." In Encyclopedia of Biometrics, 912–17. Boston, MA: Springer US, 2015. http://dx.doi.org/10.1007/978-1-4899-7488-4_304.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Utgoff, Paul E. "Incremental Learning." In Encyclopedia of Machine Learning and Data Mining, 1–5. Boston, MA: Springer US, 2014. http://dx.doi.org/10.1007/978-1-4899-7502-7_130-1.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Utgoff, Paul E. "Incremental Learning." In Encyclopedia of Machine Learning and Data Mining, 634–37. Boston, MA: Springer US, 2017. http://dx.doi.org/10.1007/978-1-4899-7687-1_130.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Hirsh, Haym. "Incremental Batch Learning." In The Kluwer International Series in Engineering and Computer Science, 69–74. Boston, MA: Springer US, 1990. http://dx.doi.org/10.1007/978-1-4613-1557-5_6.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Hyder, Rakib, Ken Shao, Boyu Hou, Panos Markopoulos, Ashley Prater-Bennette, and M. Salman Asif. "Incremental Task Learning with Incremental Rank Updates." In Lecture Notes in Computer Science, 566–82. Cham: Springer Nature Switzerland, 2022. http://dx.doi.org/10.1007/978-3-031-20050-2_33.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Bagirov, Adil M., Napsu Karmitsa, and Sona Taheri. "Incremental Clustering Algorithms." In Unsupervised and Semi-Supervised Learning, 185–200. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-37826-4_7.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Tschumitschew, Katharina, and Frank Klawonn. "Incremental Statistical Measures." In Learning in Non-Stationary Environments, 21–55. New York, NY: Springer New York, 2012. http://dx.doi.org/10.1007/978-1-4419-8020-5_2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Bouchachia, Abdelhamid, and Markus Prossegger. "Incremental Spectral Clustering." In Learning in Non-Stationary Environments, 77–99. New York, NY: Springer New York, 2012. http://dx.doi.org/10.1007/978-1-4419-8020-5_4.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Incremental learning"

1

Esaki, Yasushi, Satoshi Koide, and Takuro Kutsuna. "One-Shot Domain Incremental Learning." In 2024 International Joint Conference on Neural Networks (IJCNN), 1–8. IEEE, 2024. http://dx.doi.org/10.1109/ijcnn60899.2024.10650928.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Yi, Huiyu. "Few-Shot Class-Incremental Learning with Class Centers and Contrastive Learning for Incremental Vehicle Recognition." In 2024 International Joint Conference on Neural Networks (IJCNN), 1–8. IEEE, 2024. http://dx.doi.org/10.1109/ijcnn60899.2024.10650773.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Baysal, Engin, and Cuneyt Bayilmis. "Incremental Machine Learning: Incremental Classification." In 2022 7th International Conference on Computer Science and Engineering (UBMK). IEEE, 2022. http://dx.doi.org/10.1109/ubmk55850.2022.9919487.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Kim, Seoyoon, Seongjun Yun, and Jaewoo Kang. "DyGRAIN: An Incremental Learning Framework for Dynamic Graphs." In Thirty-First International Joint Conference on Artificial Intelligence {IJCAI-22}. California: International Joint Conferences on Artificial Intelligence Organization, 2022. http://dx.doi.org/10.24963/ijcai.2022/438.

Full text
Abstract:
Graph-structured data provide a powerful representation of complex relations or interactions. Many variants of graph neural networks (GNNs) have emerged to learn from graph-structured data whose underlying graphs are static, although graphs in many real-world applications are dynamic (e.g., evolving structure). To account for the dynamic nature of a graph that changes over time, the need for applying incremental learning (i.e., continual learning or lifelong learning) to the graph domain has been emphasized. However, unlike incremental learning on Euclidean data, graph-structured data contain dependencies between existing nodes and newly appearing nodes, so the receptive fields of existing nodes vary with new inputs (e.g., nodes and edges). In this paper, we identify time-varying receptive fields as a crucial challenge of incremental learning on dynamic graphs, and propose a novel incremental learning framework, DyGRAIN, to mitigate time-varying receptive fields and catastrophic forgetting. Specifically, our proposed method incrementally learns dynamic graph representations by reflecting the influential change in receptive fields of existing nodes and maintaining previous knowledge of informative nodes prone to being forgotten. Our experiments on large-scale graph datasets demonstrate that our proposed method improves performance by effectively capturing pivotal nodes and preventing catastrophic forgetting.
APA, Harvard, Vancouver, ISO, and other styles
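The abstract above concerns the general incremental-learning setting: absorbing new examples, and even entirely new classes, over time without retraining on old data. A minimal toy sketch of that setting (a nearest-class-mean classifier with hypothetical data, not the DyGRAIN method itself) might look like:

```python
import numpy as np

class NearestClassMean:
    """Toy class-incremental classifier keeping one running mean per class."""

    def __init__(self):
        self.sums = {}    # label -> running sum of feature vectors
        self.counts = {}  # label -> number of examples seen

    def partial_fit(self, x, label):
        # Incorporate one new example; old data is never revisited.
        x = np.asarray(x, dtype=float)
        if label not in self.sums:
            self.sums[label] = np.zeros_like(x)
            self.counts[label] = 0
        self.sums[label] += x
        self.counts[label] += 1

    def predict(self, x):
        # Assign to the class whose mean is nearest in Euclidean distance.
        x = np.asarray(x, dtype=float)
        return min(self.sums,
                   key=lambda c: np.linalg.norm(x - self.sums[c] / self.counts[c]))

clf = NearestClassMean()
clf.partial_fit([0.0, 0.0], "a")
clf.partial_fit([1.0, 1.0], "b")
print(clf.predict([0.1, -0.1]))   # prints: a
clf.partial_fit([5.0, 5.0], "c")  # a brand-new class arrives later
print(clf.predict([4.8, 5.2]))    # prints: c
```

Because the model stores only per-class sums and counts, adding class "c" neither revisits old examples nor disturbs the statistics of "a" and "b", which is the basic property incremental methods aim to preserve.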
5

Luo, Zilin, Yaoyao Liu, Bernt Schiele, and Qianru Sun. "Class-Incremental Exemplar Compression for Class-Incremental Learning." In 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2023. http://dx.doi.org/10.1109/cvpr52729.2023.01094.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Wu, Yue, Yinpeng Chen, Lijuan Wang, Yuancheng Ye, Zicheng Liu, Yandong Guo, and Yun Fu. "Large Scale Incremental Learning." In 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2019. http://dx.doi.org/10.1109/cvpr.2019.00046.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Yang, Qing, Yudi Gu, and Dongsheng Wu. "Survey of incremental learning." In 2019 Chinese Control And Decision Conference (CCDC). IEEE, 2019. http://dx.doi.org/10.1109/ccdc.2019.8832774.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Bouchachia, Abdelhamid. "Incremental Learning By Decomposition." In 2006 5th International Conference on Machine Learning and Applications (ICMLA'06). IEEE, 2006. http://dx.doi.org/10.1109/icmla.2006.28.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Bouchachia, Abdelhamid, Markus Prossegger, and Hakan Duman. "Semi-supervised incremental learning." In 2010 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE). IEEE, 2010. http://dx.doi.org/10.1109/fuzzy.2010.5584328.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Mi, Fei, Lingjing Kong, Tao Lin, Kaicheng Yu, and Boi Faltings. "Generalized Class Incremental Learning." In 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW). IEEE, 2020. http://dx.doi.org/10.1109/cvprw50498.2020.00128.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Incremental learning"

1

Benz, Zachary O., Justin Derrick Basilico, Warren Leon Davis, Kevin R. Dixon, Brian S. Jones, Nathaniel Martin, and Jeremy Daniel Wendt. Incremental learning for automated knowledge capture. Office of Scientific and Technical Information (OSTI), December 2013. http://dx.doi.org/10.2172/1121921.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Fischer, Gerhard, Andreas Lemke, and Helga Nieper-Lemke. Enhancing Incremental Learning Processes With Knowledge-Based Systems. Fort Belvoir, VA: Defense Technical Information Center, March 1988. http://dx.doi.org/10.21236/ada460163.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Gil, Yolanda. Learning by Experimentation: Incremental Refinement of Incomplete Planning Domains. Fort Belvoir, VA: Defense Technical Information Center, January 1993. http://dx.doi.org/10.21236/ada269671.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Lovett, Andrew, Morteza Dehghani, and Kenneth Forbus. Incremental Learning of Perceptual Categories for Open-Domain Sketch Recognition. Fort Belvoir, VA: Defense Technical Information Center, January 2007. http://dx.doi.org/10.21236/ada470431.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Clement, Timothy, and Brett Vaughan. Evaluation of a mobile learning platform for clinical supervision. University of Melbourne, 2021. http://dx.doi.org/10.46580/124369.

Full text
Abstract:
Context: This report details a formative evaluation of the Clinical Supervision Online (CSO) course, a fee-paying, fully online ‘light touch’ program of study for clinical supervisors offered by the Melbourne Medical School, which was developed in conjunction with the University’s Mobile Learning Unit. The course requires six to ten hours of self-directed study and is designed for any clinicians who teach. Methods: Evaluation of the course was guided by Rossi, Lipsey and Freeman’s (2004) approach to program evaluation, addressing the need for the course, its design, implementation, impact, and return on investment. Data were collected through interviews with key informants, document analysis, an embedded student survey, learning analytics data, financial data, and an audit against ‘best practice’ standards for online course design. Findings: The findings suggest that course development was driven by both a financial imperative and genuine concern to meet the training needs of clinical supervisors. Two hundred and four students enrolled on the course in its first 18 months. This has been enough to cover its development costs. In relation to 64 quality standards for online course design, the level of performance was rated as ‘meets’ for 44 items, ‘exceeds’ for one item, ‘developing’ for 13 items, and ‘non-existent’ for six items. An additional 33 items were identified as ‘not applicable’ for the ‘light touch’ course design. Significance: From a learning design perspective there is much to like about the CSO course, and the outcome of assessing it against the standards for ‘best practice’ online course design suggests that an evolutionary approach - making incremental changes - could improve the course whilst retaining its existing ‘light touch’ format. The CSO course on its own is unlikely to realise the depth of achievement implied in the course aims and learning outcomes. The CSO course may best be seen as an entrée into the art of clinical supervision.
APA, Harvard, Vancouver, ISO, and other styles
6

Bailey Bond, Robert, Pu Ren, James Fong, Hao Sun, and Jerome F. Hajjar. Physics-informed Machine Learning Framework for Seismic Fragility Analysis of Steel Structures. Northeastern University, August 2024. http://dx.doi.org/10.17760/d20680141.

Full text
Abstract:
The seismic assessment of structures is a critical step to increase community resilience under earthquake hazards. This research aims to develop a Physics-reinforced Machine Learning (PrML) paradigm for metamodeling of nonlinear structures under seismic hazards using artificial intelligence. Structural metamodeling, a reduced-fidelity surrogate of a more complex structural model, enables more efficient performance-based design and analysis, optimizing structural designs and easing the computational effort of reliability and fragility analysis, leading to globally efficient designs while maintaining required levels of accuracy. The growing availability of high-performance computing has improved this analysis by providing the ability to evaluate higher-order numerical models. However, more complex models of the seismic response of various civil structures demand increasing amounts of computing power. In addition, computational cost greatly increases with numerous iterations to account for optimization and stochastic loading (e.g., Monte Carlo simulations or Incremental Dynamic Analysis). To address the large computational burden, simpler models are desired for seismic assessment with fragility analysis. Physics-reinforced Machine Learning integrates physics knowledge (e.g., scientific principles, laws of physics) into traditional machine learning architectures, offering physically bounded, interpretable models that require less data than traditional methods. This research introduces a PrML framework to develop fragility curves using a combination of neural networks and domain knowledge. The first aim involves clustering and selecting ground motions for nonlinear response analysis of archetype buildings, ensuring that the selection includes as few ground motions as possible while still representing all the key events the structure will probabilistically experience in its lifetime.
The second aim constructs structural PrML metamodels to capture the nonlinear behavior of these buildings utilizing the nonlinear Equation of Motion (EOM). Embedding physical principles, like the general form of the EOM, into the learning process will inform the system to stay within known physical bounds, resulting in interpretable results, robust inferencing, and the capability of dealing with incomplete and scarce data. The third and final aim applies the metamodels to probabilistic seismic response prediction, fragility analysis, and seismic performance factor development. The efficiency and accuracy of this approach are evaluated against existing physics-based fragility analysis methods.
APA, Harvard, Vancouver, ISO, and other styles
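The abstract above describes embedding the equation of motion (EOM) into the learning process. A minimal, hypothetical sketch of that general idea (identifying a stiffness parameter from sampled displacements by minimizing the EOM residual, with made-up numbers, not the PrML framework from the report) might look like:

```python
import numpy as np

# Identify the stiffness k in the undamped equation of motion
#     m * x''(t) + k * x(t) = 0,  with m = 1,
# from sampled displacements, by choosing the k that minimizes the
# EOM residual in a least-squares sense. All values are illustrative.

m = 1.0
k_true = 4.0
t = np.linspace(0.0, 3.0, 60)
x = np.cos(np.sqrt(k_true) * t)           # response for x(0)=1, x'(0)=0

x_tt = np.gradient(np.gradient(x, t), t)  # numerical second derivative

# Drop edge samples, where the finite differences are least accurate.
xi, ai = x[2:-2], x_tt[2:-2]

# Minimizing sum_i (m*a_i + k*x_i)^2 over k gives k = -m <a, x> / <x, x>.
k_hat = -m * np.dot(ai, xi) / np.dot(xi, xi)
print(k_hat)  # close to k_true = 4.0
```

The physics enters through the residual `m*x'' + k*x`, which the EOM says should vanish; the same residual term is what physics-informed approaches typically add to a data-fit loss so that learned models stay within known physical bounds.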
7

Schiefelbein, Ernesto, Paulina Schiefelbein, and Laurence Wolff. Cost-Effectiveness of Education Policies in Latin America: A Survey of Expert Opinion. Inter-American Development Bank, December 1998. http://dx.doi.org/10.18235/0008789.

Full text
Abstract:
This paper provides an alternative approach to measuring the cost-effectiveness of educational interventions. The authors devised a questionnaire and gave it to ten international experts, mainly located in universities and international agencies, all of whom were well acquainted with educational research and with practical attempts at educational reform in the region; as well as to about 30 Latin American planner/practitioners, most of them working in the planning office of their ministry of education. Each respondent was asked to estimate the impact of 40 possible primary school interventions on learning as well as the probability of successful implementation. Using their own estimates of the incremental unit costs of these interventions, the authors created an innovative index ranking the cost-effectiveness of each of the 40 interventions.
APA, Harvard, Vancouver, ISO, and other styles
8

Aguiar, Brandon, Paul Bianco, and Arvind Agarwal. Using High-Speed Imaging and Machine Learning to Capture Ultrasonic Treatment Cavitation Area at Different Amplitudes. Florida International University, October 2021. http://dx.doi.org/10.25148/mmeurs.009773.

Full text
Abstract:
The ultrasonic treatment process strengthens metals by increasing nucleation and decreasing grain size in an energy-efficient way, without having to add anything to the material. The goal of this research was to use machine learning to automatically measure cavitation area in the Ultrasonic Treatment (UST) process and to understand how amplitude influences it. For this experiment, a probe was placed in a container filled with turpentine, which has a viscosity similar to that of liquid aluminum. The probe gyrates up and down tens of micrometers at a frequency of 20 kHz, causing cavitations to form in the turpentine. Each experimental trial ran for 5 seconds. We took high-speed camera footage of the UST probe running from 20% to 35% amplitude in increments of 1%. Our research examined how the probe's amplitude changed the cavitation area per unit time. It was vital to achieve strong contrast between the cavitations and the turpentine so that we could train a machine learning model to measure the cavitation area in a software package called Dragonfly. We observed that as amplitude increased, average cavitation area also increased. Plotting cavitation area versus time shows that the cavitation area for a given amplitude rises and falls in a wave-like pattern as time passes.
APA, Harvard, Vancouver, ISO, and other styles
9

Turmena, Lucas, Flávia Guerra, Altiere Freitas, Alejandra Ramos-Galvez, Simone Sandholz, Michael Roll, Isadora Freire, and Millena Oliveira. TUC Urban Lab Profile: Alliance for the Centre of Recife, Brazil. United Nations University - Institute for Environment and Human Security (UNU-EHS), March 2024. http://dx.doi.org/10.53324/hcyv7857.

Full text
Abstract:
After almost two years in operation, the challenges and key achievements of the TUC Urban Lab established in Comunidade do Pilar in Recife, Brazil, provide valuable lessons for sustaining ongoing activities, accelerating broader transformations and guiding similar efforts elsewhere: 1. DEVELOPING A PLACE-BASED APPROACH AND BUILDING MUTUAL TRUST: Meaningful participation is contingent upon establishing and maintaining trust between UL facilitators and participants. In the case of Comunidade do Pilar, overcoming initial distrust and skepticism required tailoring UL activities to residents’ needs and linking those to climate action, while increasing presence in the territory and creating safe spaces for equal participation. The strengthening of a place-based approach has been a key contributor to the UL’s achievements. 2. NAVIGATING PARTICIPATION IN REALITY: Participation is often less smooth than planned. Facilitators must consider fluctuations in the frequency and manner of participation and develop strategies to adapt the UL process accordingly. Open dialogues and clear communication are essential. The UL is not a static organization but a flexible arrangement with the potential to bridge diverse interests and aspirations, linking local needs with the climate change agenda. 3. IMPLEMENTING STRATEGIES TO WIDEN THE IMPACT: The UL in Comunidade do Pilar strives to foster long-term outcomes through small-scale experiments. Incremental changes nurture individual and collective capacities, laying the foundation for broader and deeper transformations. However, scaling up learnings depends on institutionalizing changes and garnering support from decision-makers, which can be challenging.
APA, Harvard, Vancouver, ISO, and other styles
10

Engel, Bernard, Yael Edan, James Simon, Hanoch Pasternak, and Shimon Edelman. Neural Networks for Quality Sorting of Agricultural Produce. United States Department of Agriculture, July 1996. http://dx.doi.org/10.32747/1996.7613033.bard.

Full text
Abstract:
The objectives of this project were to develop procedures and models, based on neural networks, for quality sorting of agricultural produce. Two research teams, one at Purdue University and the other in Israel, coordinated their research efforts on different aspects of each objective, utilizing both melons and tomatoes as case studies. At Purdue: An expert system was developed to measure variances in human grading. Data were acquired from eight sensors: vision, two firmness sensors (destructive and nondestructive), chlorophyll from fluorescence, color sensor, electronic sniffer for odor detection, refractometer and a scale (mass). Data were analyzed and provided input for five classification models. Chlorophyll from fluorescence was found to give the best estimation of ripeness stage, while the combination of machine vision and firmness from impact performed best for quality sorting. A new algorithm was developed to estimate and minimize training size for supervised classification. A new criterion was established to choose a training set such that a recurrent auto-associative memory neural network is stabilized. Moreover, this method provides for rapid and accurate updating of the classifier over growing seasons, production environments and cultivars. Different classification approaches (parametric and non-parametric) for grading were examined. Statistical methods were found to be as accurate as neural networks in grading. Classification models by voting did not enhance the classification significantly. A hybrid model that incorporated heuristic rules and either a numerical classifier or neural network was found to be superior in classification accuracy, with half the required processing of solely the numerical classifier or neural network. In Israel: A multi-sensing approach utilizing non-destructive sensors was developed. Shape, color, stem identification, surface defects and bruises were measured using a color image processing system.
Flavor parameters (sugar, acidity, volatiles) and ripeness were measured using a near-infrared system and an electronic sniffer. Mechanical properties were measured using three sensors: drop impact, resonance frequency and cyclic deformation. Classification algorithms for quality sorting of fruit based on multi-sensory data were developed and implemented. The algorithms included a dynamic artificial neural network, a back propagation neural network and multiple linear regression. Results indicated that classification based on multiple sensors may be applied in real-time sorting and can improve overall classification. Advanced image processing algorithms were developed for shape determination, bruise and stem identification, and general color and color homogeneity. An unsupervised method was developed to extract necessary vision features. The primary advantage of the algorithms developed is their ability to learn to determine the visual quality of almost any fruit or vegetable with no need for specific modification and no a-priori knowledge. Moreover, since there is no assumption as to the type of blemish to be characterized, the algorithm is capable of distinguishing between stems and bruises. This enables sorting of fruit without knowing the fruit's orientation. A new algorithm for on-line clustering of data was developed. The algorithm's adaptability is designed to overcome some of the difficulties encountered when incrementally clustering sparse data, and it preserves information even with memory constraints. Large quantities of data (many images) of high dimensionality (due to multiple sensors) and new information arriving incrementally (a function of the temporal dynamics of any natural process) can now be processed. Furthermore, since the learning is done on-line, it can be implemented in real-time. The methodology developed was tested to determine the external quality of tomatoes based on visual information.
An improved model for color sorting, which is stable and does not require recalibration each season, was developed. Excellent classification results were obtained for both color and firmness classification. Results indicated that maturity classification can be obtained using a drop-impact and a vision sensor in order to predict the storability and marketing of harvested fruits. In conclusion: We have been able to define quantitatively the critical parameters in the quality sorting and grading of both fresh-market cantaloupes and tomatoes. We have been able to accomplish this using nondestructive measurements, in a manner consistent with expert human grading and in accordance with market acceptance. This research constructed and used large databases of both commodities for comparative evaluation and optimization of expert system, statistical and/or neural network models. The models developed in this research were successfully tested, and should be applicable to a wide range of other fruits and vegetables. These findings are valuable for the development of on-line grading and sorting of agricultural produce through the incorporation of multiple measurement inputs that rapidly define quality in an automated manner, consistent with human graders and inspectors.
APA, Harvard, Vancouver, ISO, and other styles