Academic literature on the topic 'Learning Workflows'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Learning Workflows.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Learning Workflows"

1

Silva Junior, Daniel, Esther Pacitti, Aline Paes, and Daniel de Oliveira. "Provenance- and machine learning-based recommendation of parameter values in scientific workflows." PeerJ Computer Science 7 (July 5, 2021): e606. http://dx.doi.org/10.7717/peerj-cs.606.

Abstract:
Scientific Workflows (SWfs) have revolutionized how scientists in various domains of science conduct their experiments. The management of SWfs is performed by complex tools that provide support for workflow composition, monitoring, execution, capturing, and storage of the data generated during execution. In some cases, they also provide components to ease the visualization and analysis of the generated data. During the workflow's composition phase, programs must be selected to perform the activities defined in the workflow specification. These programs often require additional parameters that serve to adjust the program's behavior according to the experiment's goals. Consequently, workflows commonly have many parameters to be manually configured, in many cases more than one hundred. Choosing wrong parameter values can crash workflow executions or produce undesired results. As the execution of data- and compute-intensive workflows is commonly performed in a high-performance computing environment (e.g., a cluster, a supercomputer, or a public cloud), an unsuccessful execution represents a waste of time and resources. In this article, we present FReeP (Feature Recommender from Preferences), a parameter value recommendation method designed to suggest values for workflow parameters, taking into account past user preferences. FReeP is based on Machine Learning techniques, particularly Preference Learning. FReeP comprises three algorithms: two recommend the value for one parameter at a time, and the third makes recommendations for n parameters at once. The experimental results obtained with provenance data from two widely used workflows showed FReeP's usefulness in recommending values for one parameter. Furthermore, the results indicate FReeP's potential to recommend values for n parameters in scientific workflows.
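As a rough, hypothetical sketch of the provenance-based recommendation idea this abstract describes (not FReeP's actual algorithms; the parameter names and records below are invented), past successful runs can vote on a parameter value, weighted by how well each run matches the user's stated preferences:

```python
# Hypothetical sketch: recommend a workflow parameter value from past
# provenance records by majority vote among the most preference-similar runs.
# This is a simplification of the preference-learning idea, not FReeP itself.
from collections import Counter

def recommend(provenance, preferences, target, k=3):
    """provenance: list of dicts (parameter -> value) from past successful runs.
    preferences: dict of parameter -> value already fixed by the user.
    target: name of the parameter to recommend. k: neighbors that vote."""
    def similarity(run):
        # Count how many of the user's preferences this past run agrees with.
        return sum(1 for p, v in preferences.items() if run.get(p) == v)
    neighbors = sorted(provenance, key=similarity, reverse=True)[:k]
    votes = Counter(run[target] for run in neighbors if target in run)
    value, _ = votes.most_common(1)[0]
    return value

# Invented provenance records for illustration only.
runs = [
    {"aligner": "bwa", "threads": 8, "seed_len": 19},
    {"aligner": "bwa", "threads": 8, "seed_len": 19},
    {"aligner": "bowtie", "threads": 4, "seed_len": 22},
]
print(recommend(runs, {"aligner": "bwa"}, "seed_len"))  # → 19
```

The real method additionally handles numeric parameters and recommends n parameters jointly; this sketch only shows the single-parameter, categorical case.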
2

Deelman, Ewa, Anirban Mandal, Ming Jiang, and Rizos Sakellariou. "The role of machine learning in scientific workflows." International Journal of High Performance Computing Applications 33, no. 6 (2019): 1128–39. http://dx.doi.org/10.1177/1094342019852127.

Abstract:
Machine learning (ML) is being applied in a number of everyday contexts from image recognition, to natural language processing, to autonomous vehicles, to product recommendation. In the science realm, ML is being used for medical diagnosis, new materials development, smart agriculture, DNA classification, and many others. In this article, we describe the opportunities of using ML in the area of scientific workflow management. Scientific workflows are key to today’s computational science, enabling the definition and execution of complex applications in heterogeneous and often distributed environments. We describe the challenges of composing and executing scientific workflows and identify opportunities for applying ML techniques to meet these challenges by enhancing the current workflow management system capabilities. We foresee that as the ML field progresses, the automation provided by workflow management systems will greatly increase and result in significant improvements in scientific productivity.
3

Nguyen, P., M. Hilario, and A. Kalousis. "Using Meta-mining to Support Data Mining Workflow Planning and Optimization." Journal of Artificial Intelligence Research 51 (November 29, 2014): 605–44. http://dx.doi.org/10.1613/jair.4377.

Abstract:
Knowledge Discovery in Databases is a complex process that involves many different data processing and learning operators. Today's Knowledge Discovery Support Systems can contain several hundred operators. A major challenge is to assist the user in designing workflows which are not only valid but also -- ideally -- optimize some performance measure associated with the user goal. In this paper we present such a system. The system relies on a meta-mining module which analyses past data mining experiments and extracts meta-mining models which associate dataset characteristics with workflow descriptors in view of workflow performance optimization. The meta-mining model is used within a data mining workflow planner, to guide the planner during the workflow planning. We learn the meta-mining models using a similarity learning approach, and extract the workflow descriptors by mining the workflows for generalized relational patterns accounting also for domain knowledge provided by a data mining ontology. We evaluate the quality of the data mining workflows that the system produces on a collection of real world datasets coming from biology and show that it produces workflows that are significantly better than alternative methods that can only do workflow selection and not planning.
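A toy illustration of the meta-mining idea, assuming dataset meta-features and past workflow performance are available: the actual system learns a similarity metric and plans full workflows, whereas this sketch merely selects among known workflows using a plain Euclidean distance (all names and numbers below are made up):

```python
# Hypothetical illustration of meta-mining-style workflow selection: prefer the
# workflow whose best past runs happened on datasets with meta-features closest
# to the new dataset. Not the paper's system, which learns the similarity and
# plans new workflows rather than picking from a fixed list.
def recommend_workflow(history, new_meta):
    """history: list of (workflow_name, dataset_meta_features, performance)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    # Score each past experiment: high performance, penalized by meta distance.
    best = max(history, key=lambda h: h[2] / (1.0 + dist(h[1], new_meta)))
    return best[0]

# Invented meta-features: (#rows, #features, class imbalance).
history = [
    ("pca+svm", [1000, 50, 0.1], 0.91),
    ("trees",   [200, 8, 0.4], 0.88),
]
print(recommend_workflow(history, [900, 45, 0.12]))  # → pca+svm
```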
4

Nichols Hess, Amanda Kathryn. "Web tutorials workflows." New Library World 115, no. 3/4 (2014): 87–101. http://dx.doi.org/10.1108/nlw-11-2013-0087.

Abstract:
Purpose – This article examines a structured redesign of one academic library's offering of its online learning objects. This process considered both improving the online learning objects and developing a feasible workflow process for librarians. The findings for both processes are discussed.
Design/methodology/approach – The scholarship on online library learning objects and web tutorials, beginning with Dewald's seminal study, was examined for trends, patterns, and best practices. From this research, informal interviews were conducted with library faculty members. Once this information had been collected, other public university libraries in the state of Michigan – 14 in all – were considered in terms of if, and how, they offered online learning objects and web tutorials. These three areas of inquiry provide a foundation for the best practices and workflows developed.
Findings – Based on the scholarship, librarian feedback, and informal assessment of other public university libraries' practices, best practices were developed for web tutorial evaluation and creation. These best practices are to make online learning content: maintainable, available, geared at users, informative, and customizable. Workflows for librarians around these best practices were developed. Also, using these best practices, the library redesigned its tutorials web page and employed a different content management tool, which benefitted both librarians and users with increased interactivity and ease of use.
Originality/value – This article shares best practices and library workflows for online learning objects in ways that are not commonly addressed in the literature. It also considers the library's online instructional presence from the perspectives of both user and librarian, and works to develop structures in which both can function effectively. This article is also of value because of the practical implications it offers to library professionals.
5

Cantini, Riccardo, Fabrizio Marozzo, Alessio Orsino, Domenico Talia, and Paolo Trunfio. "Exploiting Machine Learning For Improving In-Memory Execution of Data-Intensive Workflows on Parallel Machines." Future Internet 13, no. 5 (2021): 121. http://dx.doi.org/10.3390/fi13050121.

Abstract:
Workflows are largely used to orchestrate complex sets of operations required to handle and process huge amounts of data. Parallel processing is often vital to reduce execution time when complex data-intensive workflows must be run efficiently, and, at the same time, in-memory processing can bring important benefits to accelerate execution. However, optimization techniques are necessary to fully exploit in-memory processing, avoiding performance drops due to memory saturation events. This paper proposes a novel solution, called the Intelligent In-memory Workflow Manager (IIWM), for optimizing the in-memory execution of data-intensive workflows on parallel machines. IIWM is based on two complementary strategies: (1) a machine learning strategy for predicting the memory occupancy and execution time of workflow tasks; (2) a scheduling strategy that allocates tasks to a computing node, taking into account the (predicted) memory occupancy and execution time of each task and the memory available on that node. The effectiveness of the machine learning-based predictor and the scheduling strategy was demonstrated experimentally using Spark as a testbed, a high-performance Big Data processing framework that exploits in-memory computing to speed up the execution of large-scale applications. In particular, two synthetic workflows were prepared for testing the robustness of the IIWM in scenarios characterized by a high level of parallelism and a limited amount of memory reserved for execution. Furthermore, a real data analysis workflow was used as a case study to better assess the benefits of the proposed approach. Thanks to its high accuracy in predicting the resources used at runtime, the IIWM was able to avoid disk writes caused by memory saturation, outperforming a traditional strategy in which only dependencies among tasks are taken into account. Specifically, the IIWM achieved up to a 31% and a 40% reduction of makespan and a performance improvement up to 1.45× and 1.66× on the synthetic workflows and the real case study, respectively.
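The scheduling side of the idea in strategies (1) and (2) can be caricatured in a few lines. The following is a generic bin-packing-style sketch with invented task names, predicted memory figures, and capacities; it is not the IIWM implementation, and the "predictions" would in practice come from the trained model:

```python
# Illustrative sketch (not the IIWM): greedily assign workflow tasks to compute
# nodes so that the sum of predicted memory occupancies on a node never exceeds
# its capacity, thereby avoiding (simulated) memory-saturation disk spills.
def schedule(tasks, node_capacity_gb, nodes=2):
    """tasks: list of (name, predicted_mem_gb), e.g. from a trained regressor."""
    load = [0.0] * nodes          # predicted memory currently planned per node
    assignment = {}
    # Place memory-hungry tasks first (a common bin-packing heuristic).
    for name, mem in sorted(tasks, key=lambda t: -t[1]):
        # Candidate nodes that can still fit the task without saturating.
        fits = [i for i in range(nodes) if load[i] + mem <= node_capacity_gb]
        if not fits:
            assignment[name] = None   # defer: scheduling now would spill to disk
            continue
        best = min(fits, key=lambda i: load[i])  # least-loaded fitting node
        load[best] += mem
        assignment[name] = best
    return assignment

# Invented predicted memory occupancies (GB) for three tasks.
plan = schedule([("join", 6.0), ("filter", 2.0), ("train", 7.0)],
                node_capacity_gb=8.0)
print(plan)
```

Here "train" (7 GB) takes node 0, and "join" plus "filter" (6 + 2 GB) fit together on node 1; a dependency-only scheduler could instead co-locate tasks that jointly exceed the 8 GB budget.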
6

Succar, Bilal, and Willy Sher. "A Competency Knowledge-Base for BIM Learning." Australasian Journal of Construction Economics and Building - Conference Series 2, no. 2 (2014): 1. http://dx.doi.org/10.5130/ajceb-cs.v2i2.3883.

Abstract:
Building Information Modelling (BIM) tools and workflows continue to proliferate within the Design, Construction and Operation (DCO) industry. To equip current and future industry professionals with the necessary knowledge and skills to engage in collaborative workflows and integrated project deliverables, it is important to identify the competencies that need to be taught at educational institutions or trained on the job. Expanding upon a collaborative BIM education framework pertaining to a national BIM initiative in Australia, this paper introduces a conceptual workflow to identify, classify, and aggregate BIM competency items. Acting as a knowledge-base for BIM learners and learning providers, the aggregated competency items can be used to develop BIM learning modules to satisfy the learning requirements of varied audiences - be they students, practitioners, tradespeople or managers. This competency knowledge-base will facilitate a common understanding of BIM deliverables and their requirements, and support the national efforts to promote BIM learning.
Keywords: BIM competency, BIM education, BIM learning modules, competency knowledge-base, learning triangle.
7

Weigel, Tobias, Ulrich Schwardmann, Jens Klump, Sofiane Bendoukha, and Robert Quick. "Making Data and Workflows Findable for Machines." Data Intelligence 2, no. 1-2 (2020): 40–46. http://dx.doi.org/10.1162/dint_a_00026.

Abstract:
Research data currently face a huge increase of data objects with an increasing variety of types (data types, formats) and variety of workflows by which objects need to be managed across their lifecycle by data infrastructures. Researchers desire to shorten the workflows from data generation to analysis and publication, and the full workflow needs to become transparent to multiple stakeholders, including research administrators and funders. This poses challenges for research infrastructures and user-oriented data services in terms of not only making data and workflows findable, accessible, interoperable and reusable, but also doing so in a way that leverages machine support for better efficiency. One primary need to be addressed is that of findability, and achieving better findability has benefits for other aspects of data and workflow management. In this article, we describe how machine capabilities can be extended to make workflows more findable, in particular by leveraging the Digital Object Architecture, common object operations and machine learning techniques.
8

Anjum, Samreen, Ambika Verma, Brandon Dang, and Danna Gurari. "Exploring the Use of Deep Learning with Crowdsourcing to Annotate Images." Human Computation 8, no. 2 (2021): 76–106. http://dx.doi.org/10.15346/hc.v8i2.121.

Abstract:
We investigate what, if any, benefits arise from employing hybrid algorithm-crowdsourcing approaches over conventional approaches of relying exclusively on algorithms or crowds to annotate images. We introduce a framework that enables users to investigate different hybrid workflows for three popular image analysis tasks: image classification, object detection, and image captioning. Three hybrid approaches are included that are based on having workers: (i) verify predicted labels, (ii) correct predicted labels, and (iii) annotate images for which algorithms have low confidence in their predictions. Deep learning algorithms are employed in these workflows since they offer high performance for image annotation tasks. Each workflow is evaluated with respect to annotation quality and worker time to completion on images coming from three diverse datasets (i.e., VOC, MSCOCO, VizWiz). Inspired by our findings, we offer recommendations regarding when and how to employ deep learning with crowdsourcing to achieve desired quality and efficiency for image annotation.
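Hybrid approach (iii), routing only the predictions a model is unsure about to crowd workers, can be sketched as follows. The classifier outputs and the 0.9 threshold below are stand-ins for illustration, not the paper's actual components or tuned values:

```python
# Minimal sketch of a confidence-threshold hybrid annotation workflow: accept
# labels the model is confident about; send the rest to crowd workers.
def route(predictions, threshold=0.9):
    """predictions: list of (image_id, label, confidence) from some classifier.
    Returns (auto_accepted, sent_to_crowd) as lists of (image_id, label)."""
    auto, crowd = [], []
    for image_id, label, conf in predictions:
        (auto if conf >= threshold else crowd).append((image_id, label))
    return auto, crowd

# Invented predictions for illustration.
preds = [("img1", "cat", 0.97), ("img2", "dog", 0.55), ("img3", "cat", 0.91)]
auto, crowd = route(preds)
print(len(auto), len(crowd))  # → 2 1
```

Raising the threshold trades worker time for annotation quality, which is exactly the quality/efficiency trade-off the paper evaluates across its three workflows.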
9

Ha, Thang N., Kurt J. Marfurt, Bradley C. Wallet, and Bryce Hutchinson. "Pitfalls and implementation of data conditioning, attribute analysis, and self-organizing maps to 2D data: Application to the Exmouth Plateau, North Carnarvon Basin, Australia." Interpretation 7, no. 3 (2019): SG23–SG42. http://dx.doi.org/10.1190/int-2018-0248.1.

Abstract:
Recent developments in attribute analysis and machine learning have significantly enhanced interpretation workflows of 3D seismic surveys. Nevertheless, even in 2018, many sedimentary basins are only covered by grids of 2D seismic lines. These 2D surveys are suitable for regional feature mapping and often identify targets in areas not covered by 3D surveys. With continuing pressure to cut costs in the hydrocarbon industry, it is crucial to extract as much information as possible from these 2D surveys. Unfortunately, many, if not most, modern interpretation software packages are designed to work exclusively with 3D data. To determine if we can apply 3D volumetric interpretation workflows to grids of 2D seismic lines, we have applied data conditioning, attribute analysis, and a machine-learning technique called self-organizing maps to the 2D data acquired over the Exmouth Plateau, North Carnarvon Basin, Australia. We find that these workflows allow us to significantly improve image quality, interpret regional geologic features, identify local anomalies, and perform seismic facies analysis. However, these workflows are not without pitfalls. We need to be careful in choosing the order of filters in the data conditioning workflow and be aware of reflector misties at line intersections. Vector data, such as reflector convergence, need to be extracted and then mapped component-by-component before combining the results. We are also unable to perform attribute extraction along a surface or geobody extraction for 2D data in our commercial interpretation software package. To address this issue, we devise a point-by-point attribute extraction workaround to overcome the incompatibility between 3D interpretation workflow and 2D data.
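For readers unfamiliar with self-organizing maps, a toy NumPy version of the technique looks like the following. This is not the authors' commercial-software workflow; the grid size, learning schedule, and the synthetic "attribute" vectors are all assumptions made for illustration:

```python
# Toy self-organizing map (SOM): unsupervised mapping of attribute vectors onto
# a small 2D grid of prototypes, the basic idea behind SOM facies analysis.
import numpy as np

def train_som(data, grid=(4, 4), epochs=100, lr0=0.5, sigma0=2.0, seed=0):
    rng = np.random.default_rng(seed)
    weights = rng.random((grid[0] * grid[1], data.shape[1]))
    coords = np.array([(i, j) for i in range(grid[0]) for j in range(grid[1])])
    for t in range(epochs):
        lr = lr0 * (1 - t / epochs)                  # decaying learning rate
        sigma = sigma0 * (1 - t / epochs) + 1e-3     # shrinking neighborhood
        for x in data[rng.permutation(len(data))]:
            bmu = np.argmin(((weights - x) ** 2).sum(axis=1))  # best-matching unit
            d2 = ((coords - coords[bmu]) ** 2).sum(axis=1)     # grid distance²
            h = np.exp(-d2 / (2 * sigma ** 2))                 # neighborhood weight
            weights += lr * h[:, None] * (x - weights)
    return weights

# Synthetic stand-in for 3-attribute seismic samples from two "facies".
rng = np.random.default_rng(1)
samples = np.vstack([rng.normal(0.2, 0.05, (50, 3)),
                     rng.normal(0.8, 0.05, (50, 3))])
som = train_som(samples)
print(som.shape)  # → (16, 3)
```

Each of the 16 learned prototypes summarizes a cluster of similar attribute vectors; coloring samples by their best-matching unit gives the facies-style map the abstract refers to.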
10

Aida, Saori, Junpei Okugawa, Serena Fujisaka, Tomonari Kasai, Hiroyuki Kameda, and Tomoyasu Sugiyama. "Deep Learning of Cancer Stem Cell Morphology Using Conditional Generative Adversarial Networks." Biomolecules 10, no. 6 (2020): 931. http://dx.doi.org/10.3390/biom10060931.

Abstract:
Deep-learning workflows of microscopic image analysis are sufficient for handling the contextual variations because they employ biological samples and have numerous tasks. The use of well-defined annotated images is important for the workflow. Cancer stem cells (CSCs) are identified by specific cell markers. These CSCs were extensively characterized by the stem cell (SC)-like gene expression and proliferation mechanisms for the development of tumors. In contrast, the morphological characterization remains elusive. This study aims to investigate the segmentation of CSCs in phase contrast imaging using conditional generative adversarial networks (CGAN). Artificial intelligence (AI) was trained using fluorescence images of the Nanog-Green fluorescence protein, the expression of which was maintained in CSCs, and the phase contrast images. The AI model segmented the CSC region in the phase contrast image of the CSC cultures and tumor model. By selecting images for training, several values for measuring segmentation quality increased. Moreover, nucleus fluorescence overlaid-phase contrast was effective for increasing the values. We show the possibility of mapping CSC morphology to the condition of undifferentiation using deep-learning CGAN workflows.

Dissertations / Theses on the topic "Learning Workflows"

1

Ouari, Salim. "Adaptation à la volée de situations d'apprentissage modélisées conformément à un langage de modélisation pédagogique." PhD thesis, Université de Grenoble, 2011. http://tel.archives-ouvertes.fr/tel-00680028.

Abstract:
The work presented in this dissertation belongs to the field of Technology-Enhanced Learning environments (EIAH, Environnements Informatiques pour l'Apprentissage Humain), more precisely to EIAH engineering within a "Learning Design" approach. This approach proposes building such environments from a formal description of a learning activity. It assumes the existence of a modelling language, commonly called an EML (Educational Modelling Language), and of an engine capable of interpreting that language. LDL is the language we worked with, together with the LDI infrastructure, which integrates an LDL interpretation engine. The EML is used to produce a scenario, a formal model of a learning activity. The environment supporting the activity described in the scenario is then built semi-automatically within the infrastructure associated with the language, following this process: the scenario is created during a design phase; it is instantiated and deployed on a service platform during an operationalization phase (choice of participants, role assignment, choice of resources and services); the instantiated and deployed scenario is then handled by the engine, which interprets it to drive its execution. In this setting, the activity unfolds as specified in the scenario. Yet it is impossible to foresee everything that may happen in an activity, since activities are by nature unpredictable. Unforeseen situations can arise and disrupt, or even block, the activity. It then becomes essential to provide means to unblock the situation. The teacher may also want to exploit a situation or an opportunity by modifying the activity while it is running.
This is the problem addressed in this thesis: providing the means to adapt an activity "on the fly", that is, during its execution, so as to handle an unforeseen situation and continue the activity. Our proposal relies on differentiating the data involved in each of the three phases of the environment's construction process: design, operationalization, and execution. We exhibit a model for each of these phases, which organizes these data and positions them relative to one another. Adapting an activity "on the fly" then amounts to modifying these models according to the situations to be handled. Some situations require modifying only one of the models; others lead to propagating modifications from one model to another. We regard "on the fly" adaptation as a full-fledged activity conducted, in parallel with the learning activity, by a human supervisor who has a suitable environment for observing the activity, detecting possible problems, and remedying them by intervening in the learning activity through modifications of the models that specify it. To develop the modification-support tools and integrate them into the LDI infrastructure, we used Model-Driven Engineering techniques. The models manipulated by these tools are thus first-class data, which makes the resulting tools more flexible and more abstract. The models are then exploited as levers to reach and modify the data targeted by the adaptation.
2

Parisi, Luca. "A Knowledge Flow as a Software Product Line." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2016. http://amslaurea.unibo.it/12217/.

Abstract:
Building a data mining workflow depends at least on the dataset and on the users' goals. This process is complex because of the large number of available algorithms and the difficulty of choosing the best algorithm, suitably parameterized. Usually, data scientists use analysis tools to decide which algorithm performs best on their specific dataset, comparing performance across the different algorithms. The goal of this project is to lay the foundations of a software system that steers the construction of such workflows in the right direction, in order to find the best workflow for the users' dataset and goals.
3

Klinga, Peter. "Transforming Corporate Learning using Automation and Artificial Intelligence : An exploratory case study for adopting automation and AI within Corporate Learning at financial services companies." Thesis, KTH, Skolan för industriell teknik och management (ITM), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-279570.

Abstract:
As the emergence of new technologies continuously disrupts the way in which organizations function and develop, the majority of initiatives within Learning and Development (L&D) are far from fully effective. The purpose of this study was to conduct an exploratory case study to investigate how automation and AI technologies could improve corporate learning within financial services companies. The study was delimited to three case companies, all primarily operating in the Nordic financial services industry. The exploratory research was carried out through a literature review, several in-depth interviews, and a survey for a selected number of research participants. The research revealed that the current state of training within financial services is characterized by a significant amount of manual and administrative work, a lack of intelligence within decision-making, and a non-existent consideration of employee knowledge. Moreover, the empirical evidence likewise revealed a wide array of opportunities for adopting automation and AI technologies into the respective learning workflows of the L&D organizations within the case companies.
4

Maita, Ana Rocío Cárdenas. "Um estudo da aplicação de técnicas de inteligência computacional e de aprendizado em máquina de mineração de processos de negócio." Universidade de São Paulo, 2015. http://www.teses.usp.br/teses/disponiveis/100/100131/tde-22012016-155157/.

Abstract:
Process mining is a relatively new research area that lies between data mining and machine learning, on one hand, and business process modeling and analysis, on the other. Process mining aims at discovering, monitoring, and improving real business processes by extracting knowledge from event logs available in process-oriented information systems. The main objective of this master's project was to assess the application of computational intelligence and machine learning techniques, including, for example, neural networks and support vector machines, in process mining. Since these techniques are currently the most widely applied in data mining tasks, it would be expected that they were also widely applied in the process mining context, which has not been evidenced in recent literature and was confirmed by this work. We sought to understand the broad scenario of the process mining area, including the main features found over the last ten years in terms of: types of process mining, data mining tasks used, and techniques applied to solve such tasks. The main focus of the study was to determine whether computational intelligence and machine learning techniques were indeed not being widely used in process mining, while also identifying the main reasons for this phenomenon.
This was accomplished through a general study of the area, conducted with scientific and systematic rigor, followed by validation of the lessons learned through an application example. The study considers several angles to delimit the area: on the one hand, the most commonly used approaches, techniques, mining tasks, and tools; and, on the other hand, the publication venues, universities, and researchers interested in the development of the area. The results show that 81% of current publications follow traditional data mining approaches, and that the most studied type of process mining is Discovery, covered by 71% of the primary studies. These results are valuable for practitioners and researchers involved in the topic and represent a major contribution to the area.
5

Salvucci, Enrico. "MLOps - Standardizing the Machine Learning Workflow." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2021. http://amslaurea.unibo.it/23645/.

Abstract:
MLOps is a very recent approach aimed at reducing the time needed to get a Machine Learning model into production; this methodology inherits its main features from DevOps and applies them to Machine Learning, adding further features specific to Data Analysis. This thesis, which is the result of an internship at Data Reply, is aimed at studying this new approach and exploring different tools to build an MLOps architecture; another goal is to use these tools to implement an MLOps architecture, preferably with Open Source software. This study provides a deep analysis of MLOps features, also compared to DevOps; furthermore, an in-depth survey of the tools available in the market to build an MLOps architecture is offered, focusing on Open Source tools. The reference architecture, designed with an exploratory approach, is implemented through MLflow, Kubeflow, and BentoML and deployed using Google Cloud Platform; furthermore, the architecture is compared to different use cases of companies that have recently started adopting MLOps. MLOps is rapidly evolving and maturing; for these reasons, many companies are starting to adopt this methodology. Based on the study conducted in this thesis, companies dealing with Machine Learning should consider adopting MLOps. This thesis can be a starting point to explore MLOps both theoretically and practically (also by relying on the implemented reference architecture and its code).
APA, Harvard, Vancouver, ISO, and other styles
6

Aslan, Serdar. "Digital Educational Games: Methodologies for Development and Software Quality." Diss., Virginia Tech, 2016. http://hdl.handle.net/10919/73368.

Full text
Abstract:
Development of a game in the form of software for game-based learning poses significant technical challenges for educators, researchers, game designers, and software engineers. The game development consists of a set of complex processes requiring multi-faceted knowledge in multiple disciplines such as digital graphic design, education, gaming, instructional design, modeling and simulation, psychology, software engineering, visual arts, and the learning subject area. Planning and managing such a complex multidisciplinary development project require unifying methodologies for development and software quality evaluation and should not be performed in an ad hoc manner. This dissertation presents two such methodologies: GAMED (diGital educAtional gaMe dEvelopment methoDology) and IDEALLY (dIgital eDucational gamE softwAre quaLity evaLuation methodologY). GAMED consists of a body of methods, rules, and postulates and is embedded within a digital educational game life cycle. The life cycle describes a framework for organization of the phases, processes, work products, quality assurance activities, and project management activities required to develop, use, maintain, and evolve a digital educational game from birth to retirement. GAMED provides a modular structured approach for overcoming the development complexity and guides the developers throughout the entire life cycle. IDEALLY provides a hierarchy of 111 indicators consisting of 21 branch and 90 leaf indicators in the form of an acyclic graph for the measurement and evaluation of digital educational game software quality. We developed the GAMED and IDEALLY methodologies based on the experiences and knowledge we have gained in creating and publishing four digital educational games that run on the iOS (iPad, iPhone, and iPod touch) mobile devices: CandyFactory, CandySpan, CandyDepot, and CandyBot.
The two methodologies provide a quality-centered structured approach for the development of digital educational games and are essential for accomplishing the demanding goals of game-based learning. Moreover, the classifications provided in the literature are inadequate for game designers, engineers, and practitioners. To that end, we present a taxonomy of games that focuses on the characterization of games. Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
7

Cao, Bingfei. "Augmenting the software testing workflow with machine learning." Thesis, Massachusetts Institute of Technology, 2018. http://hdl.handle.net/1721.1/119752.

Full text
Abstract:
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2018. This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. Cataloged from student-submitted PDF version of thesis. Includes bibliographical references (pages 67-68). This work presents the ML Software Tester, a system for augmenting software testing processes with machine learning. It allows users to plug in a Git repository of their choice, specify a few features and methods specific to that project, and create a full machine learning pipeline. This pipeline generates software test result predictions that the user can easily integrate with their existing testing processes. To do so, a novel test result collection system was built to collect the necessary data on which the prediction models could be trained. Test data was collected for Flask, a well-known Python open-source project. This data was then fed through SVDFeature, a matrix prediction model, to generate new test result predictions. Several approaches to the test result prediction procedure were evaluated to demonstrate various ways of using the system. By Bingfei Cao, M. Eng.
APA, Harvard, Vancouver, ISO, and other styles
8

Nordin, Alexander Friedrich. "End to end machine learning workflow using automation tools." Thesis, Massachusetts Institute of Technology, 2018. http://hdl.handle.net/1721.1/119776.

Full text
Abstract:
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2018. Cataloged from PDF version of thesis. Includes bibliographical references (pages 79-80). We have developed an open-source library named Trane and integrated it with two open-source libraries to build an end-to-end machine learning workflow that can facilitate rapid development of machine learning models. The three components of this workflow are Trane, Featuretools, and ATM. Trane enumerates tens of prediction problems relevant to any dataset using the meta-information about the data. Furthermore, Trane generates the training examples required for training machine learning models. Featuretools is open-source software for automatically generating features from a dataset. Auto Tune Models (ATM), an open-source library, performs a high-throughput search over modeling options to find the best modeling technique for a problem. We show the capability of these three tools and highlight the open-source development of Trane. By Alexander Friedrich Nordin, M. Eng.
APA, Harvard, Vancouver, ISO, and other styles
9

Kiaian, Mousavy Sayyed Ali. "A learning based workflow scheduling approach on volatile cloud resources." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-282528.

Full text
Abstract:
Workflows, originally from the business world, provide a systematic organization to an otherwise chaotic complex process. Therefore, they have become dominant and popular in scientific computation, where complex and broad-scale data analysis and scientific automation are required. In recent years, demand for reliable algorithms for workflow optimization problems, mainly scheduling and resource provisioning, has grown considerably. There are various algorithms and proposals to optimize these problems. However, most of these provisioning methods and algorithms do not account for reliability and robustness. Besides, those that do rely on assumptions and handcrafted heuristics with manual parameter assignment to provide solutions. In this thesis, a new workflow scheduling algorithm is proposed that learns the heuristics required for reliability and robustness consideration in a volatile cloud environment, particularly on Amazon EC2 spot instances. Furthermore, the algorithm uses the learned data to propose an efficient scheduling strategy that prioritizes reliability but also considers minimization of execution time. The proposed algorithm mainly improves upon failure rate and reliability in comparison to the other tested algorithms, such as Heterogeneous Earliest Finish Time (HEFT) and ReplicateAll, while maintaining an acceptable degradation in makespan compared to vanilla HEFT, making it more reliable in an unreliable environment as a result. We have discovered that our proposed algorithm performs 5% worse than the baseline HEFT regarding total execution time. However, we realised that it wastes 52% fewer resources compared to the baseline HEFT and uses 40% fewer resources compared to the ReplicateAll algorithm as a result of the reduced failure rate in the unreliable environment.
The demand for reliable workflow optimization algorithms has grown considerably, and various algorithms and optimization proposals exist. However, most of these algorithms do not account for reliability and robustness. This thesis proposes a new workflow scheduling algorithm that learns the heuristics required for assessing reliability and robustness on Amazon EC2 spot instances. The algorithm builds on Heterogeneous Earliest Finish Time (HEFT), a popular heuristic algorithm used to schedule workflows, and uses the learned data to propose an efficient scheduling strategy that prioritizes reliability while also considering minimization of execution time. We found that our proposed algorithm has a 5% longer execution time than baseline HEFT, wastes 52% fewer resources compared to baseline HEFT, and uses 40% fewer resources compared to the ReplicateAll algorithm, as a result of the reduced failure rate in the unreliable environment.
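For context, HEFT, the baseline named above, orders tasks by their upward rank (a task's own cost plus the cost of its most expensive downstream path) before assigning them to processors. A minimal sketch of the ranking step on a hypothetical four-task workflow; communication costs are omitted for brevity, whereas full HEFT adds mean edge costs inside the maximum:

```python
# Upward rank for HEFT: rank_u(t) = cost(t) + max over successors of rank_u.
# Tasks are then scheduled in decreasing rank order. Communication costs
# are omitted here; real HEFT includes mean edge costs in the max term.

def upward_rank(costs, succs):
    memo = {}
    def rank(t):
        if t not in memo:
            memo[t] = costs[t] + max((rank(s) for s in succs.get(t, [])), default=0)
        return memo[t]
    return {t: rank(t) for t in costs}

# Hypothetical diamond-shaped workflow: A -> {B, C} -> D.
costs = {"A": 2, "B": 3, "C": 1, "D": 2}
succs = {"A": ["B", "C"], "B": ["D"], "C": ["D"]}
ranks = upward_rank(costs, succs)
order = sorted(costs, key=ranks.get, reverse=True)
print(order)  # → ['A', 'B', 'C', 'D']
```

The entry task always receives the highest rank, which guarantees a topologically valid scheduling order.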
APA, Harvard, Vancouver, ISO, and other styles
10

Rabenius, Michaela. "Deep Learning-based Lung Triage for Streamlining the Workflow of Radiologists." Thesis, Linköpings universitet, Medie- och Informationsteknik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-160537.

Full text
Abstract:
The usage of deep learning algorithms such as Convolutional Neural Networks within the field of medical imaging has grown in popularity over the past few years. In particular, these types of algorithms have been used to detect abnormalities in chest x-rays, one of the most commonly performed types of radiographic examination. To improve the workflow of radiologists, this thesis investigated the possibility of using convolutional neural networks to create a lung triage that sorts a bulk of chest x-ray images by degree of disease, so that sick lungs are prioritized before healthy lungs. The results from using a binary relevance approach to train multiple classifiers for different observations commonly found in chest x-rays show that several models fail to learn how to classify x-ray images, most likely due to insufficient and/or imbalanced data. Using a binary relevance approach to create a triage is feasible but inflexible, since multiple models must be handled simultaneously. In future work it would therefore be interesting to further investigate other approaches, such as a single binary classification model or a multi-label classification model.
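The binary relevance approach mentioned above decomposes multi-label classification into one independent binary classifier per observation label. A minimal sketch, with trivial per-feature threshold "classifiers" and invented toy data standing in for the thesis's CNNs and x-ray observations:

```python
# Binary relevance: train one independent binary classifier per label.
# Each "classifier" here is a trivial single-feature threshold model;
# the thesis itself trains CNNs on chest x-ray images.

def train_binary(X, y):
    """Fit the feature/threshold pair that best separates the binary label y."""
    best = None
    for j in range(len(X[0])):
        for t in {row[j] for row in X}:
            preds = [1 if row[j] >= t else 0 for row in X]
            acc = sum(p == label for p, label in zip(preds, y)) / len(y)
            if best is None or acc > best[0]:
                best = (acc, j, t)
    _, j, t = best
    return lambda row: 1 if row[j] >= t else 0

def train_binary_relevance(X, Y, n_labels):
    # One model per label column; labels are treated as independent.
    return [train_binary(X, [row[k] for row in Y]) for k in range(n_labels)]

def predict(models, row):
    return [m(row) for m in models]

# Toy data: 2 features, 2 hypothetical labels (e.g. two x-ray observations).
X = [[0.9, 0.1], [0.8, 0.2], [0.2, 0.9], [0.1, 0.8]]
Y = [[1, 0], [1, 0], [0, 1], [0, 1]]
models = train_binary_relevance(X, Y, n_labels=2)
print(predict(models, [0.85, 0.15]))  # → [1, 0]
```

The inflexibility noted in the abstract is visible even here: adding or removing a label means training and managing another standalone model.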
APA, Harvard, Vancouver, ISO, and other styles
More sources

Books on the topic "Learning Workflows"

1

Machine Learning in Production: Developing and Optimizing Data Science Workflows and Applications (Addison-Wesley Data & Analytics Series). Addison-Wesley Professional, 2019.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Innovative Performance Support Tools And Strategies For Learning In The Workflow. McGraw-Hill, 2010.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
3

Elsevier. SimChart for the Medical Office: Learning the Medical Office Workflow - 2018 Edition. Elsevier - Health Sciences Division, 2017.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
4

Elsevier. SimChart for the Medical Office: Learning the Medical Office Workflow - 2020 Edition. Elsevier - Health Sciences Division, 2020.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
5

Elsevier. SimChart for the Medical Office: Learning the Medical Office Workflow - 2017 Edition. Elsevier - Health Sciences Division, 2016.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
6

Elsevier. SimChart for the Medical Office: Learning the Medical Office Workflow - 2019 Edition. Elsevier - Health Sciences Division, 2018.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
7

Petchey, Owen L., Andrew P. Beckerman, Natalie Cooper, and Dylan Z. Childs. Insights from Data with R. Oxford University Press, 2021. http://dx.doi.org/10.1093/oso/9780198849810.001.0001.

Full text
Abstract:
Knowledge of how to get useful information from data is essential in the life and environmental sciences. This book provides learners with knowledge, experience, and confidence about how to efficiently and reliably discover useful information from data. The content is developed from first- and second-year undergraduate-level courses taught by the authors. It charts the journey from question, to raw data, to clean and tidy data, to visualizations that provide insights. This journey is presented as a repeatable workflow fit for use with many types of question, study, and data. Readers discover how to use R and RStudio, and learn key concepts for drawing appropriate conclusions from patterns in data. The book focuses on providing learners with a solid foundation of skills for working with data, and for getting useful information from data summaries and visualizations. It focuses on the strength of patterns (i.e. effect sizes) and their meaning (e.g. correlation or causation). It purposefully stays away from statistical tests and p-values. Concepts covered include distribution, sample, population, mean, median, mode, variance, standard deviation, correlation, interactions, and non-independence. The journey from data to insight is illustrated by one workflow demonstration in the book, and three online. Each involves data collected in a real study. Readers can follow along by downloading the data, and learning from the descriptions of each step in the journey from the raw data to visualizations that show the answers to the questions posed in the original studies.
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Learning Workflows"

1

Ma, Jun, Erin Shaw, and Jihie Kim. "Computational Workflows for Assessing Student Learning." In Intelligent Tutoring Systems. Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-13437-1_19.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Striewe, Michael. "Lean and Agile Assessment Workflows." In Agile and Lean Concepts for Teaching and Learning. Springer Singapore, 2018. http://dx.doi.org/10.1007/978-981-13-2751-3_10.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Krause, Thomas, Bruno G. N. Andrade, Haithem Afli, Haiying Wang, Huiru Zheng, and Matthias L. Hemmje. "Understanding the Role of (Advanced) Machine Learning in Metagenomic Workflows." In Advanced Visual Interfaces. Supporting Artificial Intelligence and Big Data Applications. Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-68007-7_4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Monge, David A., Matĕj Holec, Filip Z̆elezný, and Carlos García Garino. "Ensemble Learning of Run-Time Prediction Models for Data-Intensive Scientific Workflows." In Communications in Computer and Information Science. Springer Berlin Heidelberg, 2014. http://dx.doi.org/10.1007/978-3-662-45483-1_7.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Jiang, Xinzhao, Wei Kong, Xin Jin, and Jian Shen. "RETRACTED CHAPTER: A Cooperative Placement Method for Machine Learning Workflows and Meteorological Big Data Security Protection in Cloud Computing." In Machine Learning for Cyber Security. Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-30619-9_8.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Jiang, Xinzhao, Wei Kong, Xin Jin, and Jian Shen. "Retraction Note to: A Cooperative Placement Method for Machine Learning Workflows and Meteorological Big Data Security Protection in Cloud Computing." In Machine Learning for Cyber Security. Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-30619-9_28.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Yoon, GeumSeong, Jungsu Han, Seunghyung Lee, and JongWon Kim. "DevOps Portal Design for SmartX AI Cluster Employing Cloud-Native Machine Learning Workflows." In Advances in Internet, Data and Web Technologies. Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-39746-3_54.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Kargl, Michaela, Peter Regitnig, Heimo Müller, and Andreas Holzinger. "Towards a Better Understanding of the Workflows: Modeling Pathology Processes in View of Future AI Integration." In Artificial Intelligence and Machine Learning for Digital Pathology. Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-50402-1_7.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Sinke, Yuliya, Sebastian Gatz, Martin Tamke, and Mette Ramsgaard Thomsen. "Machine Learning for Fabrication of Graded Knitted Membranes." In Proceedings of the 2020 DigitalFUTURES. Springer Singapore, 2021. http://dx.doi.org/10.1007/978-981-33-4400-6_29.

Full text
Abstract:
This paper examines the use of machine learning in creating digitally integrated design-to-fabrication workflows. As computational design allows for new methods of material specification and fabrication, it enables direct functional grading of material at high detail, thereby tuning the design performance in response to performance criteria. However, the generation of fabrication data is often cumbersome and relies on in-depth knowledge of the fabrication processes. Parametric models set up for automatic detailing of incremental changes, unfortunately, do not accommodate larger topological changes to the material setup. The paper presents the speculative case study KnitVault. Based on earlier research projects Isoropia and Ombre, the study examines the use of machine learning to train models for fabrication data generation in response to desired performance criteria. KnitVault demonstrates and validates methods for shortcutting parametric interfacing and explores how the trained model can be employed in design cases that exceed the topology of the training examples.
APA, Harvard, Vancouver, ISO, and other styles
10

Manthey, Robert, Robert Herms, Marc Ritter, Michael Storz, and Maximilian Eibl. "A Support Framework for Automated Video and Multimedia Workflows for Production and Archive." In Human Interface and the Management of Information. Information and Interaction for Learning, Culture, Collaboration and Business,. Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-39226-9_37.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Learning Workflows"

1

MacGregor, L., N. Brown, A. Roubickova, I. Lampaki, J. Berrizbeitia, and M. Ellis. "Streamlining Petrophysical Workflows With Machine Learning." In First EAGE/PESGB Workshop Machine Learning. EAGE Publications BV, 2018. http://dx.doi.org/10.3997/2214-4609.201803027.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

MacGregor, L., R. Keirstead, N. Brown, et al. "Streamlining Petrophysical Workflows With Machine Learning." In EAGE Conference on Reservoir Geoscience. European Association of Geoscientists & Engineers, 2018. http://dx.doi.org/10.3997/2214-4609.201803241.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Ramcharitar, Kamlesh, and Arti Kandice Ramdhanie. "Using Machine Learning Methods to Identify Reservoir Compartmentalization in Mature Oilfields from Legacy Production Data." In SPE Trinidad and Tobago Section Energy Resources Conference. SPE, 2021. http://dx.doi.org/10.2118/200979-ms.

Full text
Abstract:
Despite long production histories, operators of mature oilfields sometimes struggle to account for reservoir compartmentalization. Geologically led workflows do not adequately honor legacy production data, since inherent bias is introduced into the process of allocating production by interpreted flow units. This paper details the application of machine learning methods to identify possible reservoir compartments based on legacy production data recorded from individual well completions. We propose an experimental data-driven workflow to rapidly generate multiple scenarios of connected volumes in the subsurface. The workflow is premised on the logic that well completions draining the same connected reservoir space can exhibit similar production characteristics (rate declines, GOR trends, and pressures). We show how the specific challenges of digitized legacy data are solved using outlier detection for error checking and Kalman smoothing imputation for missing data in the structural time series model. Finally, we compare the subsurface grouping of completions obtained by applying unsupervised pattern recognition with hierarchical clustering. Application of this workflow results in multiple possible scenarios for defining reservoir compartments based on production data trends only. The method is powerful in that it provides interpretations that are independent of subsurface scenarios generated by more traditional workflows. We demonstrate the potential to integrate interpretations generated from more conventional workflows to increase the robustness of the overall subsurface model. We have leveraged the power of machine learning methods to classify more than forty (40) well completions into discrete reservoir compartments using production characteristics only. This effort would be extremely difficult, or otherwise unreliable, given the inherent limitations of human spatial, temporal, and cognitive abilities.
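The final clustering step described above can be sketched with a minimal agglomerative procedure. The toy decline curves, well names, and single-linkage choice below are illustrative assumptions, not the paper's actual data or configuration, and the paper additionally applies outlier removal and Kalman-smoothing imputation first:

```python
# Agglomerative (hierarchical) clustering of wells by the similarity of
# their production-rate histories. Toy decline curves stand in for the
# legacy production data used in the paper.

def dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def cluster(series, n_clusters):
    # Start with singleton clusters, then repeatedly merge the closest
    # pair of clusters (single linkage between cluster members).
    clusters = [[name] for name in series]
    while len(clusters) > n_clusters:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = min(dist(series[a], series[b])
                        for a in clusters[i] for b in clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        clusters[i] += clusters.pop(j)
    return [sorted(c) for c in clusters]

# Hypothetical monthly rates: wells 1-2 decline steeply, wells 3-4 gently.
wells = {"W1": [100, 60, 35, 20], "W2": [95, 58, 33, 19],
         "W3": [100, 90, 82, 75], "W4": [98, 88, 80, 73]}
print(cluster(wells, n_clusters=2))  # → [['W1', 'W2'], ['W3', 'W4']]
```

Wells with similar decline behavior end up in the same group, which is the intuition behind interpreting such clusters as candidate reservoir compartments.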
APA, Harvard, Vancouver, ISO, and other styles
4

Ahmed, Ishtiaq, Shiyong Lu, Changxin Bai, and Fahima Amin Bhuyan. "Diagnosis Recommendation Using Machine Learning Scientific Workflows." In 2018 IEEE International Congress on Big Data (BigData Congress). IEEE, 2018. http://dx.doi.org/10.1109/bigdatacongress.2018.00018.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Alberti, Michele, Vinaychandran Pondenkandath, Lars Vogtlin, Marcel Wursch, Rolf Ingold, and Marcus Liwicki. "Improving Reproducible Deep Learning Workflows with DeepDIVA." In 2019 6th Swiss Conference on Data Science (SDS). IEEE, 2019. http://dx.doi.org/10.1109/sds.2019.00-14.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Kren, Tomas, Martin Pilat, and Roman Neruda. "Multi-objective evolution of machine learning workflows." In 2017 IEEE Symposium Series on Computational Intelligence (SSCI). IEEE, 2017. http://dx.doi.org/10.1109/ssci.2017.8285357.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Yenugu, M. "Leveraging Machine Learning to Improve Subsurface Interpretation Workflows." In First EAGE Conference on Machine Learning in Americas. European Association of Geoscientists & Engineers, 2020. http://dx.doi.org/10.3997/2214-4609.202084022.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Peskova, Klara, and Roman Neruda. "Hyperparameters Search Methods for Machine Learning Linear Workflows." In 2019 18th IEEE International Conference On Machine Learning And Applications (ICMLA). IEEE, 2019. http://dx.doi.org/10.1109/icmla.2019.00199.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Limbeck, J., M. Araya, G. Joosten, A. Eales, P. Gelderblom, and D. Hohl. "Machine Learning Based Workflows in Exploration and Production." In 79th EAGE Conference and Exhibition 2017 - Workshops. EAGE Publications BV, 2017. http://dx.doi.org/10.3997/2214-4609.201701656.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Chahal, Dheeraj, Manju Ramesh, Ravi Ojha, and Rekha Singhal. "High Performance Serverless Architecture for Deep Learning Workflows." In 2021 IEEE/ACM 21st International Symposium on Cluster, Cloud and Internet Computing (CCGrid). IEEE, 2021. http://dx.doi.org/10.1109/ccgrid51090.2021.00096.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Learning Workflows"

1

Gupta, Ragini. Deploying Machine Learning Workflows into HPC environment. Office of Scientific and Technical Information (OSTI), 2020. http://dx.doi.org/10.2172/1647199.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Salter, R., Quyen Dong, Cody Coleman, et al. Data Lake Ecosystem Workflow. Engineer Research and Development Center (U.S.), 2021. http://dx.doi.org/10.21079/11681/40203.

Full text
Abstract:
The Engineer Research and Development Center, Information Technology Laboratory’s (ERDC-ITL’s) Big Data Analytics team specializes in the analysis of large-scale datasets with capabilities across four research areas that require vast amounts of data to inform and drive analysis: large-scale data governance, deep learning and machine learning, natural language processing, and automated data labeling. Unfortunately, data transfer between government organizations is a complex and time-consuming process requiring coordination of multiple parties across multiple offices and organizations. Past successes in large-scale data analytics have placed a significant demand on ERDC-ITL researchers, highlighting that few individuals fully understand how to successfully transfer data between government organizations; future project success therefore depends on a small group of individuals efficiently executing a complicated process. The Big Data Analytics team set out to develop a standardized workflow for the transfer of large-scale datasets to ERDC-ITL, in part to educate peers and future collaborators on the process required to transfer datasets between government organizations. Researchers also aim to increase workflow efficiency while protecting data integrity. This report provides an overview of the created Data Lake Ecosystem Workflow by focusing on the six phases required to efficiently transfer large datasets to supercomputing resources located at ERDC-ITL.
APA, Harvard, Vancouver, ISO, and other styles
3

Qi, Fei, Zhaohui Xia, Gaoyang Tang, et al. A Graph-based Evolutionary Algorithm for Automated Machine Learning. Web of Open Science, 2020. http://dx.doi.org/10.37686/ser.v1i2.77.

Full text
Abstract:
As an emerging field, Automated Machine Learning (AutoML) aims to reduce or eliminate manual operations that require expertise in machine learning. In this paper, a graph-based architecture is employed to represent flexible combinations of ML models, which provides a large search space compared to tree-based and stacking-based architectures. On this basis, an evolutionary algorithm is proposed to search for the best architecture, where the mutation and heredity operators are the key to architecture evolution. With Bayesian hyper-parameter optimization, the proposed approach can automate the workflow of machine learning. On the PMLB dataset, the proposed approach shows state-of-the-art performance compared with TPOT, Autostacker, and auto-sklearn. Some of the optimized models have complex structures that would be difficult to obtain through manual design.
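The mutation-driven architecture search described above can be illustrated with a heavily simplified (1+1) evolutionary sketch. The step pool, toy dataset, and fixed classifier head below are all invented for illustration, and pipelines here are linear chains rather than the paper's general graphs:

```python
# Minimal evolutionary search over pipeline architectures, in the spirit
# of graph-based AutoML (heavily simplified: linear chains of toy
# transforms, not general graphs of real ML components).
import random

POOL = {
    "scale":  lambda x: [v / 100 for v in x],
    "square": lambda x: [v * v for v in x],
    "noop":   lambda x: x,
}

def fitness(pipeline, X, y):
    correct = 0
    for row, label in zip(X, y):
        for step in pipeline:
            row = POOL[step](row)
        pred = 1 if sum(row) > 0.5 else 0  # fixed toy classifier head
        correct += pred == label
    return correct / len(y)

def mutate(pipeline, rng):
    # Mutation operator: replace one step with a random pool member.
    i = rng.randrange(len(pipeline))
    child = list(pipeline)
    child[i] = rng.choice(sorted(POOL))
    return child

def evolve(X, y, generations=50, seed=0):
    rng = random.Random(seed)
    best = ["noop", "noop"]
    for _ in range(generations):
        child = mutate(best, rng)
        if fitness(child, X, y) >= fitness(best, X, y):
            best = child  # (1+1) strategy: keep the child if no worse
    return best, fitness(best, X, y)

# Toy task: label 1 iff the scaled feature sum exceeds 0.5.
X = [[90, 10], [80, 30], [10, 5], [20, 10]]
y = [1, 1, 0, 0]
pipeline, score = evolve(X, y)
print(pipeline, score)
```

A real AutoML system replaces the toy pool with preprocessors and estimators, adds heredity (crossover) between candidates, and tunes hyper-parameters, but the accept-if-no-worse loop is the same core idea.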
APA, Harvard, Vancouver, ISO, and other styles
4

Griffin, Andrew, Sean Griffin, Kristofer Lasko, et al. Evaluation of automated feature extraction algorithms using high-resolution satellite imagery across a rural-urban gradient in two unique cities in developing countries. Engineer Research and Development Center (U.S.), 2021. http://dx.doi.org/10.21079/11681/40182.

Full text
Abstract:
Feature extraction algorithms are routinely leveraged to extract building footprints and road networks into vector format. When used in conjunction with high-resolution remotely sensed imagery, machine learning enables the automation of such feature extraction workflows. However, many of the feature extraction algorithms currently available have not been thoroughly evaluated in a scientific manner within complex terrain such as the cities of developing countries. This report details the performance of three automated feature extraction (AFE) datasets: Ecopia, Tier 1, and Tier 2, at extracting building footprints and roads from high-resolution satellite imagery as compared to manual digitization of the same areas. To avoid environmental bias, this assessment was done in two different regions of the world: Maracay, Venezuela and Niamey, Niger. High, medium, and low urban density sites are compared between regions. We quantify the accuracy of the data and the time needed to correct the three AFE datasets against hand-digitized reference data across ninety tiles in each city, selected by stratified random sampling. Within each tile, the reference data was compared against the three AFE datasets, both before and after analyst editing, using the accuracy assessment metrics of Intersection over Union and F1 Score for buildings and roads, as well as Average Path Length Similarity (APLS) to measure road network connectivity. Of the three AFE datasets tested, the Ecopia data most frequently outperformed the others in accuracy and reduced the time needed for editing.
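The two per-feature accuracy metrics named above, Intersection over Union and F1 Score, can be sketched on toy pixel masks; the example footprints below are hypothetical stand-ins for rasterized extracted and reference buildings:

```python
# Intersection over Union and F1 score on hypothetical building masks,
# represented as sets of pixel coordinates (predicted vs. reference).

def iou(pred, truth):
    return len(pred & truth) / len(pred | truth)

def f1(pred, truth):
    tp = len(pred & truth)
    precision = tp / len(pred)
    recall = tp / len(truth)
    return 2 * precision * recall / (precision + recall)

# A 2x2 reference footprint vs. a prediction shifted one pixel right.
truth = {(0, 0), (0, 1), (1, 0), (1, 1)}
pred = {(0, 1), (0, 2), (1, 1), (1, 2)}
print(iou(pred, truth))  # 2 shared pixels / 6 total = 0.333...
print(f1(pred, truth))   # precision = recall = 0.5, so F1 = 0.5
```

IoU penalizes every mismatched pixel in the union, while F1 balances precision and recall, which is why reports such as this one quote both.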
APA, Harvard, Vancouver, ISO, and other styles
5

Downard, Alicia, Stephen Semmens, and Bryant Robbins. Automated characterization of ridge-swale patterns along the Mississippi River. Engineer Research and Development Center (U.S.), 2021. http://dx.doi.org/10.21079/11681/40439.

Full text
Abstract:
The orientation of constructed levee embankments relative to alluvial swales is a useful measure for identifying regions susceptible to backward erosion piping (BEP). This research was conducted to create an automated, efficient process to classify patterns and orientations of swales within the Lower Mississippi Valley (LMV) to support levee risk assessments. Two machine learning algorithms are used to train the classification models: a convolutional neural network and a U-net. The resulting workflow can identify linear topographic features but is unable to reliably differentiate swales from other features, such as the levee structure and riverbanks. Further tuning of training data or manual identification of regions of interest could yield significantly better results. The workflow also provides an orientation to each linear feature to support subsequent analyses of position relative to levee alignments. While the individual models fall short of immediate applicability, the procedure provides a feasible, automated scheme to assist in swale classification and characterization within mature alluvial valley systems similar to LMV.
APA, Harvard, Vancouver, ISO, and other styles
6

de Caritat, Patrice, Brent McInnes, and Stephen Rowins. Towards a heavy mineral map of the Australian continent: a feasibility study. Geoscience Australia, 2020. http://dx.doi.org/10.11636/record.2020.031.

Full text
Abstract:
Heavy minerals (HMs) are minerals with a specific gravity greater than 2.9 g/cm3. They are commonly highly resistant to physical and chemical weathering, and therefore persist in sediments as lasting indicators of the (former) presence of the rocks they formed in. The presence/absence of certain HMs, their associations with other HMs, their concentration levels, and the geochemical patterns they form in maps or 3D models can be indicative of geological processes that contributed to their formation. Furthermore, trace element and isotopic analyses of HMs have been used to vector to mineralisation or constrain the timing of geological processes. The positive role of HMs in mineral exploration is well established in other countries, but comparatively little understood in Australia. Here we present the results of a pilot project that was designed to establish, test and assess a workflow to produce a HM map (or atlas of maps) and dataset for Australia. This would represent a critical step in the ability to detect anomalous HM patterns as it would establish the background HM characteristics (i.e., unrelated to mineralisation). Further, the extremely rich dataset produced would be a valuable input into any future machine learning/big data-based prospectivity analysis. The pilot project consisted of selecting ten sites from the National Geochemical Survey of Australia (NGSA) and separating and analysing the HM contents from the 75-430 µm grain-size fraction of the top (0-10 cm depth) sediment samples. A workflow was established and tested based on the density separation of the HM-rich phase by combining a shake table and the use of dense liquids. The automated mineralogy quantification was performed on a TESCAN® Integrated Mineral Analyser (TIMA) that identified and mapped thousands of grains in a matter of minutes for each sample.
The results indicated that: (1) the NGSA samples are appropriate for HM analysis; (2) over 40 HMs were effectively identified and quantified using TIMA automated quantitative mineralogy; (3) the resultant HMs’ mineralogy is consistent with the samples’ bulk geochemistry and regional geological setting; and (4) the HM makeup of the NGSA samples varied across the country, as shown by the mineral mounts and preliminary maps. Based on these observations, HM mapping of the continent using NGSA samples will likely result in coherent and interpretable geological patterns relating to bedrock lithology, metamorphic grade, degree of alteration and mineralisation. It could assist in geological investigations especially where outcrop is minimal, challenging to correctly attribute due to extensive weathering, or simply difficult to access. It is believed that a continental-scale HM atlas for Australia could assist in derisking mineral exploration and lead to investment, e.g., via tenement uptake, exploration, discovery and ultimately exploitation. As some HMs are hosts for technology critical elements such as rare earth elements, their systematic and internally consistent quantification and mapping could lead to resource discovery essential for a more sustainable, lower-carbon economy.
APA, Harvard, Vancouver, ISO, and other styles