To see the other types of publications on this topic, follow the link: Learning Workflows.

Dissertations / Theses on the topic 'Learning Workflows'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the top 35 dissertations / theses for your research on the topic 'Learning Workflows.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse dissertations / theses from a wide variety of disciplines and organise your bibliography correctly.

1

Ouari, Salim. "Adaptation à la volée de situations d'apprentissage modélisées conformément à un langage de modélisation pédagogique." PhD thesis, Université de Grenoble, 2011. http://tel.archives-ouvertes.fr/tel-00680028.

Full text
Abstract:
The work presented in this thesis belongs to the field of computer-based learning environments (Environnements Informatiques pour l'Apprentissage Humain, EIAH), and more precisely to the engineering of such environments under a "Learning Design" approach. This approach proposes building learning environments from a formal description of a learning activity. It assumes the existence of a modelling language, commonly called an EML (Educational Modelling Language), and of an engine capable of interpreting that language. LDL is the language we worked with, together with the LDI infrastructure, which integrates an interpretation engine for LDL. The EML is used to produce a scenario, a formal model of a learning activity. The environment supporting the execution of the activity described in the scenario is then built semi-automatically within the infrastructure associated with the language, according to the following process: the scenario is created during a design phase; it is instantiated and deployed on a service platform during an operationalization phase (choice of the participants in the activity, assignment of roles, choice of resources and services); and the instantiated, deployed scenario is handled by the engine, which interprets it to drive its execution. In this setting, the activity unfolds exactly as specified in the scenario. Yet it is impossible to foresee in advance everything that may happen during an activity, since activities are by nature unpredictable. Unforeseen situations can arise and disrupt, or even block, the activity. It then becomes essential to provide the means to unblock the situation. A teacher may also want to exploit a situation or an opportunity by modifying the activity while it is running. This is the problem addressed in this thesis: providing the means to adapt an activity "on the fly", that is, during its execution, so as to handle an unforeseen situation and carry on with the activity. Our proposal relies on distinguishing the data involved in each of the three phases of the environment construction process: design, operationalization, and execution. We present a model for each of these phases, which organizes the data and positions them relative to one another. Adapting an activity "on the fly" then amounts to modifying these models according to the situations to be handled. Some situations require modifying only one of the models; others lead to propagating modifications from one model to another. We regard on-the-fly adaptation as a full-fledged activity carried out, in parallel with the learning activity, by a human supervisor who has an adequate environment for observing the activity, detecting possible problems, and remedying them by intervening in the learning activity through modifications of the models that specify it. To develop the tools supporting these modifications and integrate them into the LDI infrastructure, we relied on Model-Driven Engineering techniques. The models manipulated by these tools are thus first-class data, which makes the resulting tools more flexible and more abstract. The models are then exploited as levers for reaching and modifying the data targeted by the adaptation.
APA, Harvard, Vancouver, ISO, and other styles
2

Parisi, Luca. "A Knowledge Flow as a Software Product Line." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2016. http://amslaurea.unibo.it/12217/.

Full text
Abstract:
Costruire un "data mining workflow" dipende almendo dal dataset e dagli obiettivi degli utenti. Questo processo è complesso a causa dell'elevato numero di algoritmi disponibili e della difficoltà nel scegliere il migliore algoritmo, opportunamente parametrizzato. Di solito, i data scientists usano tools di analisi per decidere quale algoritmo ha le migliori performance nel loro specifico dataset, confrontando le performance fra i diversi algoritmi. Lo scopo di questo progetto è mettere le basi a un sistema software che porta verso la giusta direzione la costruzione di tali workflow, per trovare il migliore a seconda del dataset degli utenti e dei loro obiettivi.
APA, Harvard, Vancouver, ISO, and other styles
3

Klinga, Peter. "Transforming Corporate Learning using Automation and Artificial Intelligence : An exploratory case study for adopting automation and AI within Corporate Learning at financial services companies." Thesis, KTH, Skolan för industriell teknik och management (ITM), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-279570.

Full text
Abstract:
As the emergence of new technologies continuously disrupts the way in which organizations function and develop, the majority of initiatives within Learning and Development (L&D) remain far from fully effective. The purpose of this study was to conduct an exploratory case study investigating how automation and AI technologies could improve corporate learning within financial services companies. The study was delimited to three case companies, all primarily operating in the Nordic financial services industry. The exploratory research was carried out through a literature review, several in-depth interviews, and a survey of a selected number of research participants. The research revealed that the current state of training within financial services is characterized by a significant amount of manual and administrative work, a lack of intelligence in decision-making, and a disregard of existing employee knowledge. Moreover, the empirical evidence revealed a wide array of opportunities for adopting automation and AI technologies into the learning workflows of the L&D organizations within the case companies.
APA, Harvard, Vancouver, ISO, and other styles
4

Maita, Ana Rocío Cárdenas. "Um estudo da aplicação de técnicas de inteligência computacional e de aprendizado em máquina de mineração de processos de negócio." Universidade de São Paulo, 2015. http://www.teses.usp.br/teses/disponiveis/100/100131/tde-22012016-155157/.

Full text
Abstract:
Process mining is a relatively new research area that lies between data mining and machine learning, on one hand, and business process modeling and analysis, on the other. Process mining aims at discovering, monitoring, and improving real business processes by extracting knowledge from event logs available in process-oriented information systems. The main objective of this master's project was to assess the application of computational intelligence and machine learning techniques, including, for example, neural networks and support vector machines, in process mining. Since these techniques are currently the most widely applied in data mining tasks, it would be expected that they would also be widely applied in process mining; this had not been evidenced in the recent literature and is confirmed by this work. We sought to understand the broad scenario of the process mining area, including the main characteristics found over the last ten years in terms of: types of process mining, data mining tasks used, and techniques applied to solve such tasks. The main focus of the study was to identify whether computational intelligence and machine learning techniques were indeed not being widely used in process mining, and to identify the main reasons for this phenomenon. This was accomplished through a general study of the area, conducted with scientific and systematic rigor, followed by validation of the lessons learned through an application example. The study delimits the area from several angles: on one hand, the most commonly used approaches, techniques, mining tasks, and tools; on the other hand, the publication venues, universities, and researchers interested in the development of the area. The results show that 81% of current publications follow traditional data mining approaches, and that the most studied type of process mining is discovery, covered by 71% of the primary studies. These results are valuable for practitioners and researchers involved in the topic and represent a major contribution to the area.
APA, Harvard, Vancouver, ISO, and other styles
5

Salvucci, Enrico. "MLOps - Standardizing the Machine Learning Workflow." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2021. http://amslaurea.unibo.it/23645/.

Full text
Abstract:
MLOps is a very recent approach aimed at reducing the time needed to get a Machine Learning model into production; this methodology inherits its main features from DevOps and applies them to Machine Learning, adding further features specific to data analysis. This thesis, the result of an internship at Data Reply, studies this new approach and explores different tools for building an MLOps architecture; another goal is to use these tools to implement such an architecture, preferably with Open Source software. The study provides a deep analysis of MLOps features, also in comparison with DevOps, and an in-depth survey of the tools available on the market for building an MLOps architecture, with a focus on Open Source tools. The reference architecture, designed with an exploratory approach, is implemented with MLflow, Kubeflow, and BentoML and deployed on Google Cloud Platform; furthermore, the architecture is compared against the use cases of several companies that have recently started adopting MLOps. MLOps is rapidly evolving and maturing, and for these reasons many companies are starting to adopt this methodology. Based on the study conducted in this thesis, companies dealing with Machine Learning should consider adopting MLOps. This thesis can be a starting point for exploring MLOps both theoretically and practically (also by relying on the implemented reference architecture and its code).
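
As a small, concrete taste of the experiment-tracking layer such an architecture rests on, the sketch below logs parameters, metrics, and a model artifact with MLflow; the experiment name, model, and metric are illustrative assumptions, not the thesis's actual pipeline, which combines MLflow with Kubeflow and BentoML.

```python
# Minimal MLflow experiment-tracking sketch (names and values are illustrative).
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

mlflow.set_experiment("mlops-demo")  # hypothetical experiment name
with mlflow.start_run():
    model = RandomForestClassifier(n_estimators=100, random_state=0)
    model.fit(X_train, y_train)
    acc = accuracy_score(y_test, model.predict(X_test))
    mlflow.log_param("n_estimators", 100)
    mlflow.log_metric("accuracy", acc)
    mlflow.sklearn.log_model(model, "model")  # versioned artifact for serving later
```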
APA, Harvard, Vancouver, ISO, and other styles
6

Aslan, Serdar. "Digital Educational Games: Methodologies for Development and Software Quality." Diss., Virginia Tech, 2016. http://hdl.handle.net/10919/73368.

Full text
Abstract:
Development of a game in the form of software for game-based learning poses significant technical challenges for educators, researchers, game designers, and software engineers. Game development consists of a set of complex processes requiring multi-faceted knowledge in multiple disciplines such as digital graphic design, education, gaming, instructional design, modeling and simulation, psychology, software engineering, visual arts, and the learning subject area. Planning and managing such a complex multidisciplinary development project requires unifying methodologies for development and software quality evaluation and should not be performed in an ad hoc manner. This dissertation presents two such methodologies: GAMED (diGital educAtional gaMe dEvelopment methoDology) and IDEALLY (dIgital eDucational gamE softwAre quaLity evaLuation methodologY). GAMED consists of a body of methods, rules, and postulates and is embedded within a digital educational game life cycle. The life cycle describes a framework for organizing the phases, processes, work products, quality assurance activities, and project management activities required to develop, use, maintain, and evolve a digital educational game from birth to retirement. GAMED provides a modular, structured approach for overcoming development complexity and guides developers throughout the entire life cycle. IDEALLY provides a hierarchy of 111 indicators, consisting of 21 branch and 90 leaf indicators in the form of an acyclic graph, for the measurement and evaluation of digital educational game software quality. We developed the GAMED and IDEALLY methodologies based on the experience and knowledge gained in creating and publishing four digital educational games that run on iOS (iPad, iPhone, and iPod touch) mobile devices: CandyFactory, CandySpan, CandyDepot, and CandyBot. The two methodologies provide a quality-centered, structured approach for the development of digital educational games and are essential for accomplishing the demanding goals of game-based learning. Moreover, because the classifications provided in the literature are inadequate for game designers, engineers, and practitioners, we also present a taxonomy of games that focuses on their characterization. Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
7

Cao, Bingfei. "Augmenting the software testing workflow with machine learning." Thesis, Massachusetts Institute of Technology, 2018. http://hdl.handle.net/1721.1/119752.

Full text
Abstract:
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2018. This electronic version was submitted by the student author; the certified thesis is available in the Institute Archives and Special Collections. Cataloged from the student-submitted PDF version of the thesis. Includes bibliographical references (pages 67-68). This work presents the ML Software Tester, a system for augmenting software testing processes with machine learning. It allows users to plug in a Git repository of their choice, specify a few features and methods specific to that project, and create a full machine learning pipeline. This pipeline generates software test result predictions that the user can easily integrate with their existing testing processes. To support this, a novel test result collection system was built to gather the data on which the prediction models could be trained. Test data was collected for Flask, a well-known Python open-source project. This data was then fed through SVDFeature, a matrix prediction model, to generate new test result predictions. Several methods for the test result prediction procedure were evaluated to demonstrate various ways of using the system. By Bingfei Cao. M. Eng.
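
SVDFeature is a standalone C++ toolkit, so the sketch below illustrates only the underlying idea with numpy: factorize the (test, commit) outcome matrix from observed entries and read predictions for unobserved pairs off the reconstruction. The data, dimensions, and hyperparameters are synthetic placeholders.

```python
# Toy matrix-factorization sketch for predicting test outcomes
# (illustrative stand-in for SVDFeature; data and dimensions are synthetic).
import numpy as np

rng = np.random.default_rng(0)
n_tests, n_commits, k = 50, 30, 5
R = rng.choice([0.0, 1.0], size=(n_tests, n_commits))  # 1 = pass, 0 = fail
mask = rng.random((n_tests, n_commits)) < 0.7          # observed entries

P = rng.normal(scale=0.1, size=(n_tests, k))    # latent test factors
Q = rng.normal(scale=0.1, size=(n_commits, k))  # latent commit factors
lr, reg = 0.05, 0.01

for _ in range(200):  # SGD over observed (test, commit) pairs
    for i, j in zip(*np.nonzero(mask)):
        err = R[i, j] - P[i] @ Q[j]
        P[i] += lr * (err * Q[j] - reg * P[i])
        Q[j] += lr * (err * P[i] - reg * Q[j])

pred = P @ Q.T  # scores for unobserved pairs ~ predicted pass likelihood
print(pred[~mask][:5])
```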
APA, Harvard, Vancouver, ISO, and other styles
8

Nordin, Alexander Friedrich. "End to end machine learning workflow using automation tools." Thesis, Massachusetts Institute of Technology, 2018. http://hdl.handle.net/1721.1/119776.

Full text
Abstract:
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2018. Cataloged from the PDF version of the thesis. Includes bibliographical references (pages 79-80). We have developed an open source library named Trane and integrated it with two other open source libraries to build an end-to-end machine learning workflow that facilitates rapid development of machine learning models. The three components of this workflow are Trane, Featuretools, and ATM. Trane enumerates tens of prediction problems relevant to any dataset using the meta-information about the data, and generates the training examples required for training machine learning models. Featuretools is open-source software for automatically generating features from a dataset. Auto Tune Models (ATM), an open source library, performs a high-throughput search over modeling options to find the best modeling technique for a problem. We show the capability of these three tools and highlight the open-source development of Trane. By Alexander Friedrich Nordin. M. Eng.
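
To give a flavor of what enumerating prediction problems over a dataset can look like, this sketch generates (aggregation, window) combinations from a toy event table; it is a conceptual illustration of the idea behind Trane, assuming a hypothetical event log, and does not use Trane's actual API.

```python
# Conceptual sketch of prediction-problem enumeration over an event log
# (illustrates the idea behind Trane; not Trane's actual API).
from itertools import product

import pandas as pd

events = pd.DataFrame({
    "user_id": [1, 1, 2, 2, 2],
    "timestamp": pd.to_datetime(["2018-01-01", "2018-01-03", "2018-01-02",
                                 "2018-01-05", "2018-01-09"]),
    "amount": [10.0, 25.0, 5.0, 40.0, 15.0],
}).set_index("timestamp").sort_index()

aggregations = ["sum", "mean", "count"]
windows = ["2D", "7D"]

# Each (aggregation, window) pair defines one candidate prediction problem:
# "predict the <agg> of amount per user over a <window> window". Labels for
# training examples can then be computed directly from the log:
for agg, window in product(aggregations, windows):
    labels = events.groupby("user_id")["amount"].rolling(window).agg(agg)
    print(f"problem: predict {agg}(amount) per user over {window}")
```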
APA, Harvard, Vancouver, ISO, and other styles
9

Kiaian, Mousavy Sayyed Ali. "A learning based workflow scheduling approach on volatile cloud resources." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-282528.

Full text
Abstract:
Workflows, originally from the business world, give systematic organization to otherwise chaotic, complex processes. They have therefore become dominant in scientific computation, where complex, broad-scale data analysis and scientific automation are required. In recent years, demand for reliable algorithms for workflow optimization problems, mainly scheduling and resource provisioning, has grown considerably. Various algorithms have been proposed to optimize these problems, but most do not account for reliability and robustness, and those that do rely on assumptions and handcrafted heuristics with manually assigned parameters. In this thesis, a new workflow scheduling algorithm is proposed that learns the heuristics required for reliability and robustness in a volatile cloud environment, particularly on Amazon EC2 spot instances. The algorithm uses the learned data to propose an efficient scheduling strategy that prioritizes reliability while also minimizing execution time. The proposed algorithm mainly improves failure rate and reliability in comparison to the other tested algorithms, such as Heterogeneous Earliest Finish Time (HEFT) and ReplicateAll, while maintaining an acceptable degradation in makespan compared to vanilla HEFT, making it more reliable in an unreliable environment. We found that the proposed algorithm performs 5% worse than the baseline HEFT in total execution time, but wastes 52% fewer resources than the baseline HEFT and uses 40% fewer resources than the ReplicateAll algorithm, as a result of the reduced failure rate in the unreliable environment.
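
For orientation, HEFT sorts tasks by upward rank (a task's average cost plus the most expensive remaining path to an exit task) and then assigns each task to the processor minimizing its finish time. The sketch below computes upward ranks for a toy DAG; it shows the standard heuristic, not the learning-based variant proposed in the thesis.

```python
# Upward-rank computation at the heart of HEFT (toy DAG, average costs).
from functools import lru_cache

comp = {"A": 4, "B": 3, "C": 5, "D": 2}                    # avg computation cost
succ = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}  # task DAG
comm = {("A", "B"): 1, ("A", "C"): 2, ("B", "D"): 3, ("C", "D"): 1}

@lru_cache(maxsize=None)
def upward_rank(task: str) -> float:
    # rank_u(t) = w(t) + max over successors s of (c(t, s) + rank_u(s))
    if not succ[task]:
        return comp[task]
    return comp[task] + max(comm[(task, s)] + upward_rank(s) for s in succ[task])

# HEFT schedules tasks in decreasing upward rank, then assigns each task
# to the processor giving it the earliest finish time.
order = sorted(comp, key=upward_rank, reverse=True)
print(order)  # ['A', 'B', 'C', 'D'] for this toy graph
```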
APA, Harvard, Vancouver, ISO, and other styles
10

Rabenius, Michaela. "Deep Learning-based Lung Triage for Streamlining the Workflow of Radiologists." Thesis, Linköpings universitet, Medie- och Informationsteknik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-160537.

Full text
Abstract:
The usage of deep learning algorithms such as convolutional neural networks within the field of medical imaging has grown in popularity over the past few years. In particular, these algorithms have been used to detect abnormalities in chest x-rays, one of the most commonly performed types of radiographic examination. To try to improve the workflow of radiologists, this thesis investigated the possibility of using convolutional neural networks to create a lung triage that sorts a bulk of chest x-ray images by degree of disease, so that sick lungs are prioritized over healthy lungs. The results from using a binary relevance approach to train multiple classifiers for different observations commonly found in chest x-rays show that several models fail to learn how to classify x-ray images, most likely due to insufficient and/or imbalanced data. Using a binary relevance approach to create a triage is feasible but inflexible, as multiple models must be handled simultaneously. Future work could therefore investigate other approaches, such as a single binary classification model or a multi-label classification model.
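
Binary relevance reduces a multi-label problem to one independent binary classifier per finding. The sketch below shows the pattern with logistic regression as a stand-in for the per-label CNNs and with synthetic data; the label names and the triage score are illustrative assumptions.

```python
# Binary-relevance sketch: one binary classifier per radiographic finding.
# Logistic regression stands in for per-label CNNs; data is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 64))                   # e.g. image feature vectors
labels = ["cardiomegaly", "effusion", "nodule"]  # hypothetical findings
Y = rng.integers(0, 2, size=(200, len(labels)))

models = {name: LogisticRegression(max_iter=1000).fit(X, Y[:, j])
          for j, name in enumerate(labels)}

# Triage score: probability that at least one finding is present,
# naively assuming independent labels.
probs = np.column_stack([models[n].predict_proba(X)[:, 1] for n in labels])
triage_score = 1.0 - np.prod(1.0 - probs, axis=1)
ranked = np.argsort(-triage_score)  # sickest-looking studies first
print(ranked[:10])
```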
APA, Harvard, Vancouver, ISO, and other styles
11

Halatchev, M., and E. Közle. "Workflow-Management in virtuellen Unternehmen." Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2016. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-208655.

Full text
APA, Harvard, Vancouver, ISO, and other styles
12

Jakob, Persson. "How to annotate in video for training machine learning with a good workflow." Thesis, Umeå universitet, Institutionen för tillämpad fysik och elektronik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-187078.

Full text
Abstract:
Artificial intelligence and machine learning are used in many different areas; one of them is image recognition. In the production of a TV show or film, image recognition can be used to help editors find specific objects, scenes, or people in the video content, which speeds up production. But image recognition does not yet work perfectly and cannot be used in production as intended. The image recognition algorithms therefore need to be trained on large datasets to become better; creating these datasets takes time, however, and tools are needed that let users create specific datasets and retrain algorithms. The aim of this master thesis was to investigate whether it is possible to create a tool that can annotate objects and people in video content and use the data as training sets, together with a tool that can retrain an image recognition model from its output to make it better. It was also important that the tools offer a good workflow for the users. The study consisted of a theoretical study to gain more knowledge about annotation and about creating a good UX design with a good workflow. Interviews were also held to learn more about the requirements of the product. This resulted in a user scenario and a workflow that were used, together with the knowledge from the theoretical study, to create a hi-fi prototype through an iterative process with usability testing. The result is a final hi-fi prototype with a good design and a good workflow for the users, in which it is possible to annotate objects and people with a bounding box and to retrain an image recognition program that has been used on video content.
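
Tools of this kind typically export bounding-box annotations in a structured interchange format so they can be consumed as training data. Below is a minimal COCO-style record for a single video frame; every field value is a hypothetical example, not the format of the prototype described above.

```python
# Minimal COCO-style export of bounding-box annotations for one video frame
# (field values are hypothetical; real tools add many more attributes).
import json

annotations = {
    "images": [
        {"id": 1, "file_name": "episode01_frame_000123.jpg",
         "width": 1920, "height": 1080}
    ],
    "categories": [
        {"id": 1, "name": "person"},
        {"id": 2, "name": "car"},
    ],
    "annotations": [
        # bbox is [x, y, width, height] in pixels, following the COCO convention
        {"id": 1, "image_id": 1, "category_id": 1, "bbox": [640, 200, 180, 420]},
        {"id": 2, "image_id": 1, "category_id": 2, "bbox": [1100, 540, 380, 220]},
    ],
}

with open("annotations.json", "w") as f:
    json.dump(annotations, f, indent=2)
```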
APA, Harvard, Vancouver, ISO, and other styles
13

Viero, Daniel de Mello. "Projeto de um sistema de gerenciamento de workflow baseado em padrões abertos e de sua integração com um ambiente de educação à distância." reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2005. http://hdl.handle.net/10183/17790.

Full text
Abstract:
Workflow technology is not new, and many administrative and document management applications have been benefiting from it for well over a decade. Currently, however, the use of workflow is getting wider as new kinds of applications adopt the technology. Distance learning is one of the areas with great potential to take advantage of the process coordination capabilities provided by workflow management systems. This work aimed to provide a new interaction mode, based on workflow technology, for the AdaptWeb system, an adaptive web-based learning environment developed on the PHP/MySQL platform. To this end, a complete Workflow Management System was designed to be used by applications running in PHP/MySQL environments, following the recommendations of the WfMC, the main organization that establishes workflow standards and references. The work also shows how a student's execution of a distance learning course in AdaptWeb can be mapped onto a workflow process and how such a process can be described using WfMC standards. Finally, extensions to the AdaptWeb environment are proposed to support a new, workflow-driven interaction mode, together with ideas for extending the course authoring and execution environments to exploit the new perspectives that workflow offers.
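
As a rough illustration of mapping a course onto a workflow process definition, the sketch below encodes activities and condition-guarded transitions as plain data and steps one case through them. The structure is loosely inspired by WfMC/XPDL concepts, the activity names are hypothetical, and Python stands in for the PHP/MySQL platform the thesis targets.

```python
# Toy WfMC-style process definition for a course, loosely inspired by XPDL
# (activity names and transition conditions are hypothetical).
process = {
    "id": "course-intro-programming",
    "activities": ["read_material", "do_exercises", "take_quiz", "done"],
    "transitions": [
        ("read_material", "do_exercises", lambda case: True),
        ("do_exercises", "take_quiz", lambda case: case["exercises_ok"]),
        ("do_exercises", "read_material", lambda case: not case["exercises_ok"]),
        ("take_quiz", "done", lambda case: case["quiz_score"] >= 60),
        ("take_quiz", "read_material", lambda case: case["quiz_score"] < 60),
    ],
}

def advance(activity: str, case: dict) -> str:
    """Return the next activity whose transition condition holds."""
    for src, dst, cond in process["transitions"]:
        if src == activity and cond(case):
            return dst
    return activity  # no transition fired; stay put

case = {"exercises_ok": True, "quiz_score": 75}
state = "read_material"
while state != "done":
    state = advance(state, case)
print(state)  # 'done'
```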
APA, Harvard, Vancouver, ISO, and other styles
14

Eriksson, Caroline, and Emilia Kallis. "NLP-Assisted Workflow Improving Bug Ticket Handling." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-301248.

Full text
Abstract:
Software companies spend a lot of resources on debugging, a process in which previous solutions can help solve current problems. The bug tickets containing this information are often time-consuming to read. To minimize the time spent on debugging and to ensure that the knowledge from prior solutions is kept in the company, an evaluation was made to see whether summaries could make this process more efficient. Abstractive and extractive summarization models were tested for this task, and fine-tuning of the bert-extractive-summarizer was performed. The model-generated summaries were compared in terms of perceived quality, speed, similarity to each other, and summary length. The average description summary contained part of the needed description, and the found solution was either well documented or did not answer the problem at all. The fine-tuned extractive model and the abstractive model BART provided good conditions for generating summaries containing all the information needed.
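
For orientation, both model families are available off the shelf. The sketch below runs the bert-extractive-summarizer package next to a BART summarization pipeline from Hugging Face transformers on a made-up ticket; the parameters are illustrative defaults, not the thesis's fine-tuned configuration.

```python
# Extractive (BERT) vs. abstractive (BART) summarization of a bug ticket.
# Parameters are illustrative; the thesis fine-tuned its own models.
from summarizer import Summarizer  # pip install bert-extractive-summarizer
from transformers import pipeline  # pip install transformers

ticket = (
    "Service X crashes when the request payload exceeds 1 MB. "
    "The crash is caused by an unbounded buffer in the parser. "
    "Fixed by streaming the payload and capping buffer size at 1 MB."
)

extractive = Summarizer()
print(extractive(ticket, num_sentences=2))  # picks the most salient sentences

abstractive = pipeline("summarization", model="facebook/bart-large-cnn")
print(abstractive(ticket, max_length=40, min_length=10, do_sample=False))
```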
APA, Harvard, Vancouver, ISO, and other styles
15

Tuovinen, L. (Lauri). "From machine learning to learning with machines:remodeling the knowledge discovery process." Doctoral thesis, Oulun yliopisto, 2014. http://urn.fi/urn:isbn:9789526205243.

Full text
Abstract:
Knowledge discovery (KD) technology is used to extract knowledge from large quantities of digital data in an automated fashion. The established process model represents the KD process in a linear and technology-centered manner, as a sequence of transformations that refine raw data into more and more abstract and distilled representations. Any actual KD process, however, has aspects that are not adequately covered by this model. In particular, some of the most important actors in the process are not technological but human, and the operations associated with these actors are interactive rather than sequential in nature. This thesis proposes an augmentation of the established model that addresses this neglected dimension of the KD process. The proposed process model is composed of three sub-models: a data model, a workflow model, and an architectural model. Each sub-model views the KD process from a different angle: the data model examines the process from the perspective of the different states of data and the transformations that convert data from one state to another, the workflow model describes the actors of the process and the interactions between them, and the architectural model guides the design of software for the execution of the process. For each of the sub-models, the thesis first defines a set of requirements, then presents the solution designed to satisfy them, and finally re-examines the requirements to show how they are accounted for by the solution. The principal contribution of the thesis is a broader perspective on the KD process than the current mainstream view. The augmented KD process model makes use of the established model but expands it by gathering data management and knowledge representation, KD workflow, and software architecture under a single unified model. Furthermore, the proposed model considers issues that are usually either overlooked or treated as separate from the KD process, such as the philosophical aspect of KD. The thesis also discusses a number of technical solutions to individual sub-problems of the KD process, including two software frameworks and four case-study applications that serve as concrete implementations and illustrations of several key features of the proposed process model.
APA, Harvard, Vancouver, ISO, and other styles
16

Dergachyova, Olga. "Knowledge-based support for surgical workflow analysis and recognition." Thesis, Rennes 1, 2017. http://www.theses.fr/2017REN1S059/document.

Full text
Abstract:
Computer assistance has become an indispensable part of modern surgical procedures. The desire to create a new generation of intelligent operating rooms has led researchers to explore the problems of automatic perception and understanding of surgical situations. Situation awareness includes automatic recognition of the surgical workflow. Great progress has been achieved in the recognition of surgical phases and gestures, yet there is still a gap between these two granularity levels in the hierarchy of the surgical process. Very little research focuses on surgical activities, which carry important semantic information vital for situation understanding. Two factors impede progress. First, automatic recognition and prediction of surgical activities is a highly challenging task due to the short duration of activities, their great number, and a very complex workflow with a multitude of possible execution and sequencing ways. Second, the very limited amount of clinical data does not provide enough information for successful learning and accurate recognition. In our opinion, before recognizing surgical activities, a careful analysis of the elements that compose an activity is necessary in order to choose the right signals and sensors that will facilitate recognition. We used a deep learning approach to assess the impact of different semantic elements of an activity on its recognition. Through an in-depth study we determined a minimal set of elements sufficient for accurate recognition; information about the operated anatomical structure and the surgical instrument proved the most important. We also addressed the problem of data deficiency by proposing methods for transferring knowledge from other domains or surgeries. Methods based on word embedding and transfer learning were proposed; they demonstrated their effectiveness on the task of next-activity prediction, offering a 22% increase in accuracy. In addition, pertinent observations about surgical practice were made during the study. In this work, we also addressed the problem of insufficient and improper validation of recognition methods, proposing new validation metrics and approaches for assessing performance that connect methods to targeted applications and better characterize the capacities of a method. The work described in this thesis aims at clearing the obstacles blocking the progress of the domain and proposes a new perspective on the problem of surgical workflow recognition.
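
To make the next-activity prediction task concrete, here is a small Keras sketch that embeds activity tokens and trains an LSTM to predict the following activity. The vocabulary, sequences, and hyperparameters are synthetic placeholders rather than the thesis's models or data.

```python
# Next-activity prediction sketch: embedding + LSTM over activity sequences
# (vocabulary, data, and hyperparameters are synthetic placeholders).
import numpy as np
import tensorflow as tf

vocab_size, seq_len = 20, 8  # e.g. 20 distinct surgical activities
rng = np.random.default_rng(0)
X = rng.integers(1, vocab_size, size=(256, seq_len))  # activity id sequences
y = rng.integers(1, vocab_size, size=(256,))          # next activity id

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(vocab_size, 16),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(vocab_size, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(X, y, epochs=3, batch_size=32, verbose=0)

probs = model.predict(X[:1], verbose=0)
print(int(probs.argmax()))  # most likely next activity id
```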
APA, Harvard, Vancouver, ISO, and other styles
17

Halatchev, M., and E. Közle. "Unternehmensübergreifendes Workflow-Management als Instrument zur Unterstützung von Lieferketten (Supply Chain Management)." Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2016. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-210883.

Full text
Abstract:
In this paper we discuss the specifics of logistics chains in the context of virtual enterprises. The goal is to show how the day-to-day operation of a supply chain can be carried out without specialized (and costly) Supply Chain Management (SCM) software, within a software landscape designed for virtual enterprises such as the platforms for virtual enterprises (PVU).
APA, Harvard, Vancouver, ISO, and other styles
18

Halatchev, M., and E. Közle. "Unternehmensübergreifendes Workflow-Management als Instrument zur Unterstützung von Lieferketten (Supply Chain Management)." Josef Eul Verlag GmbH, 1999. https://tud.qucosa.de/id/qucosa%3A28817.

Full text
Abstract:
In this paper we discuss the specifics of logistics chains in the context of virtual enterprises. The goal is to show how the day-to-day operation of a supply chain can be carried out without specialized (and costly) Supply Chain Management (SCM) software, within a software landscape designed for virtual enterprises such as the platforms for virtual enterprises (PVU).
APA, Harvard, Vancouver, ISO, and other styles
19

Zeve, Carlos Mario Dal Col. "Modelo cooperativo construtivista para autoria de cursos a distância usando tecnologia de Workflow." reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2003. http://hdl.handle.net/10183/14815.

Full text
Abstract:
This work aims to specify a task description model, using workflow technology to represent the automation of e-learning authoring activities in a distributed environment such as the web, based on an interactionist model of cooperation. It also aims to provide relevant answers to workflow specification needs, considering the possibility of adding or modifying some elements so that they can express situations not foreseen in current models. The interest of this work in workflow lies in the fact that the tasks involved relate to the authoring of multimedia documents, such as those used in distance education; these comprise not only construction but also the cooperative processes that drive decisions, choices, and preferences during the authoring process. The proposed approach deals with the workflow design problem in a declarative way, through a model that allows the specification of tasks as well as their temporal ordering. The temporal ordering can be obtained through the sequencing, selection, and interaction of activities, as well as through properties that identify the beginning and end of each activity. Finally, this work seeks to extend the possibilities of workflow model construction by proposing a planning technique that enables an author-allocation policy combining time availability and the competences involved in executing the activities. The goal is thus an authoring process model able to express interaction and cooperation among authors, through an allocation policy guided by the competences required to perform certain activities.
APA, Harvard, Vancouver, ISO, and other styles
20

Fletcher, Douglas. "Generalized Empirical Bayes: Theory, Methodology, and Applications." Diss., Temple University Libraries, 2019. http://cdm16002.contentdm.oclc.org/cdm/ref/collection/p245801coll10/id/546485.

Full text
Abstract:
Statistics. Ph.D. The two key issues of modern Bayesian statistics are: (i) establishing a principled approach for distilling a statistical prior distribution that is consistent with the given data from an initial believable scientific prior; and (ii) development of a consolidated Bayes-frequentist data analysis workflow that is more effective than either of the two separately. In this thesis, we propose generalized empirical Bayes as a new framework for exploring these fundamental questions, along with a wide range of applications spanning fields as diverse as clinical trials, metrology, insurance, medicine, and ecology. Our research marks a significant step towards bridging the "gap" between the Bayesian and frequentist schools of thought that has plagued statisticians for over 250 years. Chapters 1 and 2, based on Mukhopadhyay (2018), introduce the core theory and methods of our proposed generalized empirical Bayes (gEB) framework, which solves a long-standing puzzle of modern Bayes originally posed by Herbert Robbins (1980). One of the main contributions of this research is to introduce and study a new class of nonparametric priors DS(G, m) that allows exploratory Bayesian modeling. At a practical level, the major advantages of our proposal are: (i) computational ease (it does not require Markov chain Monte Carlo (MCMC), variational methods, or any other sophisticated computational techniques); (ii) simplicity and interpretability of the underlying theoretical framework, which is general enough to include almost all commonly encountered models; and (iii) easy integration with mainstream Bayesian analysis, which makes it readily applicable to a wide range of problems. Connections with other Bayesian cultures are also presented. Chapter 3 approaches the topic of measurement uncertainty from a new angle by introducing the foundations of nonparametric meta-analysis; we apply the proposed methodology to real data examples from astronomy, physics, and medicine. Chapter 4 discusses further extensions and applications of the theory to distributed big data modeling and the missing species problem. The dissertation concludes by highlighting two important areas of future work: a full Bayesian implementation workflow and potential applications in cybersecurity. Temple University--Theses.
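
As background for the empirical Bayes theme, the sketch below computes Robbins' classical nonparametric estimate for Poisson means, E[theta | X = x] ≈ (x + 1) f(x + 1) / f(x), with the marginal pmf f estimated from the data; it illustrates the classical starting point of the field, not the thesis's DS(G, m) machinery.

```python
# Robbins' classical empirical Bayes estimate for Poisson means:
# E[theta | X = x] ~= (x + 1) * f(x + 1) / f(x), with f estimated by counts.
# (Background illustration only; the thesis develops DS(G, m) priors instead.)
import numpy as np

rng = np.random.default_rng(0)
thetas = rng.gamma(shape=2.0, scale=1.5, size=10_000)  # latent means
x = rng.poisson(thetas)                                # observed counts

freq = np.bincount(x, minlength=x.max() + 2) / len(x)  # empirical pmf f(x)

def robbins(xi: int) -> float:
    if freq[xi] == 0:
        return float(xi)  # fall back to the MLE when f(x) is unobserved
    return (xi + 1) * freq[xi + 1] / freq[xi]

for xi in range(5):
    print(xi, round(robbins(xi), 2))
```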
APA, Harvard, Vancouver, ISO, and other styles
21

Lecuyer, Gurvan. "Analyse automatique et assistance à l'annotation des processus chirurgicaux basées Deep Learning." Thesis, Rennes 1, 2020. http://www.theses.fr/2020REN1S061.

Full text
Abstract:
The operating room has benefited from many major technological breakthroughs that have changed medical practice, as with minimally invasive and robot-assisted surgery. Numerous medical devices make interventions more accurate, yet many challenges remain without a technical answer. In 2004, a working group named "OR2020" was formed to identify these challenges and to imagine the operating room of the future, smart and connected. Artificial neural networks are at the heart of the development of intelligent systems; they require thousands of annotated examples to be trained. Annotation is a tedious task that can be complicated: in the case of medical data, the required knowledge demands the involvement of surgeons. In this thesis, we conducted studies to analyze and identify the prediction errors made by neural networks on the surgical workflow recognition task. We proposed a categorization of these prediction errors covering 100% of the encountered cases. Based on this analysis, we developed two methods for automatically detecting prediction errors in surgical workflow recognition. These methods were used to pre-annotate surgical videos and were integrated into an annotation software tool for surgical processes. Two user studies were conducted and showed that the system shortened annotation by about ten minutes and improved annotation accuracy by 1% for phases and by 7% for steps.
APA, Harvard, Vancouver, ISO, and other styles
22

Chacón, Pérez Jonathan 1986. "Community platform management mechanisms to support integrated Learning Design." Doctoral thesis, Universitat Pompeu Fabra, 2016. http://hdl.handle.net/10803/360849.

Full text
Abstract:
This PhD Thesis contributes to the domain of Educational Technologies, and more specifically to the Learning Design (LD) research field, which focuses on supporting teachers in the creation of effective computer-supported learning activities that consider the needs of their educational contexts. Research in LD has provided a myriad of tools and methods. Yet existing tools lack collaboration support for communities of teachers engaged in learning (co-)design. Moreover, the scope of these tools varies in terms of the representations used, the pedagogical approaches supported, and the design phases targeted (from conceptualization to authoring and implementation). This diversity contrasts with a lack of articulation of their synergies into meaningful, manageable, and integrated LD ecosystems for teachers and communities of teachers. This Thesis is framed in this problem area. Its guiding research question is: How can community platform management mechanisms support teachers in integrated learning design ecosystems? The question is addressed through four specific research objectives. The first objective is exploratory, focused on understanding the needs for management mechanisms in LD community platforms. The resulting contribution includes participation in building and evaluating LD community platforms (LdShake, Learning design Sharing and co-edition, and ILDE, Integrated Learning Design Environment) in the context of Spanish and European projects, and the identification of the needs tackled in the following three research objectives. The second objective deals with enabling flexible management of learning (co-)design processes that involve the use of several LD tools. The associated contribution is a model and implementation for LD Workflows, which shape orchestrated uses of selected LD tools and can be applied to LD Projects. The third objective focuses on supporting the management of multiple learning design versions in scenarios of reuse and co-design. The contribution is a model and visualization strategy based on a family tree metaphor. The fourth objective concerns the need for interoperability between (co-)design tools and platforms, and in particular focuses on design patterns as structured LD representations of special interest because they collect repeatable good teaching practices. The contribution is a pattern ontology for computationally representing a pattern language (with design patterns in Computer-Supported Collaborative Learning as the working case) and a derived model, together with an architecture for interoperable management of patterns across LD tooling. The contributions have been implemented in the LdShake and ILDE community platforms, showing feasibility and enabling proof-of-concept evaluations in significant scenarios and user studies involving teachers.
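To make the LD Workflow idea more tangible, here is a purely illustrative Python sketch, under the assumption that a workflow can be modeled as an ordered sequence of tool-mediated design steps instantiated for a concrete LD Project; all class, tool, and project names are invented and do not reproduce the thesis's actual model.

```python
# Illustrative sketch only (invented names; not the thesis's actual model):
# an "LD Workflow" as an ordered sequence of tool-mediated design steps
# that can be instantiated for a concrete LD Project.
from dataclasses import dataclass, field

@dataclass
class LdStep:
    phase: str        # e.g. "conceptualization", "authoring", "implementation"
    tool: str         # the LD tool selected for this step

@dataclass
class LdWorkflow:
    name: str
    steps: list[LdStep] = field(default_factory=list)

    def instantiate(self, project: str) -> list[str]:
        """Produce the ordered to-do list for a given LD Project."""
        return [f"{project}: {s.phase} with {s.tool}" for s in self.steps]

wf = LdWorkflow("pattern-based design", [
    LdStep("conceptualization", "PersonaCard"),
    LdStep("authoring", "WebCollage"),
    LdStep("implementation", "VLE deployer"),
])
print("\n".join(wf.instantiate("Biology 101 unit")))
```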
APA, Harvard, Vancouver, ISO, and other styles
23

Breininger, Katharina [Verfasser], Andreas [Akademischer Betreuer] Maier, Andreas [Gutachter] Maier, and Philippe Claude [Gutachter] Cattin. "Machine Learning and Deformation Modeling for Workflow-Compliant Image Fusion during Endovascular Aortic Repair / Katharina Breininger ; Gutachter: Andreas Maier, Philippe Claude Cattin ; Betreuer: Andreas Maier." Erlangen : Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), 2021. http://d-nb.info/1225938473/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

Twinanda, Andru Putra. "Vision-based approaches for surgical activity recognition using laparoscopic and RBGD videos." Thesis, Strasbourg, 2017. http://www.theses.fr/2017STRAD005/document.

Full text
Abstract:
The main objective of this thesis is to address the problem of activity recognition in the operating room (OR). Activity recognition is an essential component in the development of context-aware systems, which will enable various applications, such as automated assistance during difficult procedures. Here, we focus on vision-based approaches, since cameras are a common source of information for observing the OR without disrupting the surgical workflow. Specifically, we propose to use two complementary video types: laparoscopic and OR-scene RGBD videos. We investigate how state-of-the-art computer vision approaches perform on these videos and propose novel deep learning approaches to carry out the tasks. To evaluate the proposed approaches, we generate large datasets of recordings of real surgeries. The results demonstrate that the proposed approaches outperform the state-of-the-art methods in surgical activity recognition on these new datasets.
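To give a flavor of the kind of deep learning pipeline involved, below is a minimal, hypothetical PyTorch sketch of frame-based surgical phase recognition, pairing a CNN feature extractor with a recurrent layer over time; the backbone choice, hidden size, and seven-phase output are assumptions for illustration, not the architectures developed in the thesis.

```python
# Hypothetical sketch: per-frame surgical phase recognition with a CNN
# feature extractor and a GRU over time (illustrative, not the thesis's model).
import torch
import torch.nn as nn
from torchvision.models import resnet18

class PhaseRecognizer(nn.Module):
    def __init__(self, n_phases=7, hidden=128):
        super().__init__()
        backbone = resnet18(weights=None)      # torchvision >= 0.13 API
        backbone.fc = nn.Identity()            # expose 512-d frame features
        self.backbone = backbone
        self.temporal = nn.GRU(512, hidden, batch_first=True)
        self.classifier = nn.Linear(hidden, n_phases)

    def forward(self, clips):                  # clips: (B, T, 3, H, W)
        b, t = clips.shape[:2]
        feats = self.backbone(clips.flatten(0, 1)).view(b, t, -1)
        out, _ = self.temporal(feats)
        return self.classifier(out)            # per-frame phase logits (B, T, n_phases)

logits = PhaseRecognizer()(torch.randn(2, 8, 3, 224, 224))
```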
APA, Harvard, Vancouver, ISO, and other styles
25

Werneck, Rafael de Oliveira 1989. "A unified framework for design, deployment, execution, and recommendation of machine learning experiments = Uma ferramenta unificada para projeto, desenvolvimento, execução e recomendação de experimentos de aprendizado de máquina." [s.n.], 2014. http://repositorio.unicamp.br/jspui/handle/REPOSIP/275531.

Full text
Abstract:
Due to the large growth of the use of technologies for data acquisition, we have to handle large and complex data sets in order to extract knowledge that can support the decision-making process in several domains. A typical solution for addressing this issue relies on the use of machine learning methods, which are computational methods that extract useful knowledge from experience to improve the performance of target applications. There are several libraries and frameworks in the literature that support the execution of machine learning experiments. However, some of them are not flexible enough to be extended with novel methods, and they do not support the reuse of successful solutions devised in previous experiments made in the framework. In this work, we propose a framework for automating machine learning experiments that provides a workflow-based standardized environment and makes it easy to evaluate different feature descriptors, classifiers, and fusion approaches in a wide range of tasks. We also propose the use of similarity measures and learning-to-rank methods in a recommendation scenario, in which users may have access to alternative machine learning experiments. We performed experiments with four similarity measures (Jaccard, Sorensen, Jaro-Winkler, and a TF-IDF-based measure) and one learning-to-rank method (LRAR) in the task of recommending workflows modeled as sequences of activities. Experimental results show that Jaro-Winkler yields the highest effectiveness, with results comparable to those observed for LRAR. In both cases, the recommendations are very promising and might help real-world users in their daily machine learning tasks.
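As a concrete illustration of one of the similarity measures mentioned above, the following Python sketch ranks cataloged workflows, modeled as sequences of activities, by Jaccard similarity to a query workflow; it is a toy example with invented workflow and activity names, not the dissertation's framework (which also covers Sorensen, Jaro-Winkler, a TF-IDF-based measure, and learning-to-rank).

```python
# Toy sketch of similarity-based workflow recommendation: workflows are
# compared as sets of activity names via Jaccard similarity.
# The catalog and activity names are invented for illustration.

def jaccard(a, b):
    """Jaccard similarity between two activity sequences, compared as sets."""
    sa, sb = set(a), set(b)
    return len(sa & sb) / len(sa | sb) if (sa | sb) else 1.0

def recommend(query, catalog, k=3):
    """Return the k catalog workflows most similar to the query workflow."""
    scored = [(name, jaccard(query, acts)) for name, acts in catalog.items()]
    return sorted(scored, key=lambda x: x[1], reverse=True)[:k]

catalog = {
    "wf-color-svm": ["load", "color-descriptor", "svm"],
    "wf-texture-knn": ["load", "texture-descriptor", "knn"],
    "wf-fusion": ["load", "color-descriptor", "texture-descriptor", "fusion", "svm"],
}
print(recommend(["load", "color-descriptor", "svm", "fusion"], catalog))
```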
APA, Harvard, Vancouver, ISO, and other styles
26

Högman, Ordning Herman. "Utbildningsmaterial ur mjukvarudokumentation." Thesis, KTH, Lärande, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-266747.

Full text
Abstract:
End user training of IT systems at the workplace is an expensive and time-consuming ordeal. Although a lot of information about the systems is produced during development, in the form of requirement documents and other documentation, this information is seldom used for educational purposes. The company Multisoft wished to explore how the documentation produced during the development of its tailor-made business systems, called Softadmin® systems, can be used for educational purposes. The purpose of this master's thesis was to identify what learning needs end users have with regard to business systems developed by Multisoft. Based on these learning needs, it was investigated how the documentation produced during development could be used for educational purposes. A qualitative study with narrative semi-structured interviews was conducted with end users and project participants at six different companies that had implemented a Softadmin® system at their workplace within the last two years. The project participants had been involved from the customer company's side during the development, whereas the end users had not. Ten interviews were conducted and a thematic analysis was performed on the interview transcripts. The resulting themes were then interpreted from a cognitive perspective on learning. The results indicate that end users want to be able to learn by trying out the system themselves. End users want to learn by getting information visually and not only via text. A training material for a Softadmin® system should not require prior knowledge about the system in order to be accessible to the learner. Furthermore, the results indicate that end users feel the systems have a complex business logic where it is difficult to understand how the system's different processes affect each other. The transition from an old system to a new one can be problematic for the end users' learning, and a lack of structure in the end users' learning of the system was identified as an issue. A proposed structure for a training material has been developed. This training material is intended to use information from the documentation produced during the development of Softadmin® systems; at present, this use of the documentation would have to be done manually with some adaptation, and suggestions for how it could be automated are presented. Functional requirements are also presented for a system that produces and maintains the information required for the proposed training material. When the Softadmin® system that the training material concerns is updated, this system enables the training material to be updated as well, and it also enables the production of training material tailored to a specific professional role.
APA, Harvard, Vancouver, ISO, and other styles
27

Fuller, Chevita. "Refining Computerized Physician Order Entry Initiatives in an Adult Intensive Care Unit." ScholarWorks, 2014. https://scholarworks.waldenu.edu/dissertations/115.

Full text
Abstract:
Computerized physician order entry (CPOE) is used in healthcare organizations to improve workflow processes and transcription, as well as to prevent prescribing errors. Previous research has indicated challenges associated with CPOE for end users that predispose patients to unsafe practices. Unsafe CPOE practices can be detrimental within the intensive care unit (ICU) setting due to the complexity of nursing care. Consequently, end-user satisfaction with and understanding of CPOE and electronic health record (EHR) functionality are vital to avoid omission errors. CPOE initiatives should be refined after system implementation to improve clinical workflow, medication processes, and end-user satisfaction. The purpose of this quality improvement project was to refine CPOE system initiatives and develop an e-learning educational module to facilitate end-user understanding of and satisfaction with CPOE. The Iowa model of evidence-based practice, Lean methodology, and the Provider Order Entry User Satisfaction and Usage Survey (POESUS) were used to guide the study. An e-learning module was implemented to increase staff understanding of the newly implemented CPOE system, and a plan was provided for ongoing data collection and investigation of end-user satisfaction and medication inadequacies with the CPOE system. A mixed-method design was recommended to key stakeholders to identify the impact of the e-learning course and refined CPOE initiatives on both end-user satisfaction and patient outcomes in the medical-surgical ICU. Findings from the study shed light on the impact of e-learning educational modules on CPOE system implementation. Those in organizations implementing advanced technology such as CPOE and EHR systems in critical care settings will find this paper of interest.
APA, Harvard, Vancouver, ISO, and other styles
28

Pučálka, Martin. "Herní engine pro ITIL trenažér." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2018. http://www.nusl.cz/ntk/nusl-385956.

Full text
Abstract:
This master's thesis is focused on the Information Technology Infrastructure Library (ITIL). The objective of the project was to analyze, design, and implement a game engine that simulates IT service operation in real time, accelerated time, or turns. A core part of the engine is a creator's mode, which allows users to create custom IT services and specify their behaviour during operation, as if the service were used in the real world. Another part of the engine is a player's mode with a simple service desk. In this mode, players take care of the smooth operation of their services and can thereby learn and train the practices described in ITIL.
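To sketch only the real-versus-accelerated-time aspect (a toy illustration, not the implemented engine), a simulation can replay timed service events while dividing the waits by a speed-up factor:

```python
# Toy sketch: replay timed service-desk events in real or accelerated time.
# Event times and descriptions are invented; speedup=1 means real time.
import time

def simulate(events, speedup=1.0):
    """events: iterable of (seconds_from_start, description), any order."""
    clock = 0.0
    for t, what in sorted(events):
        time.sleep(max(0.0, (t - clock) / speedup))  # scaled wall-clock wait
        clock = t
        print(f"[t={t:>5.1f}s] {what}")

simulate([(0, "incident reported"),
          (30, "ticket assigned to operator"),
          (90, "service restored")], speedup=30)
```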
APA, Harvard, Vancouver, ISO, and other styles
29

Hsiao, Yu-Tung, and 蕭宇彤. "Oracle: A Deep Learning Model for Predicting and Optimizing Complex Query Workflows." Thesis, 2018. http://ndltd.ncl.edu.tw/handle/t84rwu.

Full text
Abstract:
Hive is a widely used open-source framework for data warehouse systems. Based on the Hadoop execution engine and distributed storage systems, Hive adopts high-level SQL statements to reduce the difficulty of developing big data analytic applications. As more attention has been drawn to optimizing the performance of Hive, performance estimation has played an important role in finding appropriate parameters. However, since the execution time of Hive queries is affected by hundreds of configurations, resulting in different execution plans and job behaviors, performance prediction becomes challenging. In this thesis we propose a time prediction model for optimizing the execution of Hive. Our prediction model, called Oracle, is a data-driven solution based on deep learning techniques. The prediction employs recurrent neural networks (RNNs) to consider dependencies between stages in a DAG workflow. We have implemented Oracle with an intensive evaluation on TPC-H benchmark queries of varying complexity running on an in-house cluster. The experimental results show that Oracle achieved a 5.8% error rate and outperformed three comparison approaches. Based on the prediction models, we can obtain about 40% performance improvement without any modification to the architecture.
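As a rough sketch of the underlying idea, predicting execution time by running an RNN over per-stage feature vectors of a DAG workflow, the hypothetical PyTorch snippet below uses a GRU and a linear regression head; the feature count, hidden size, and training details are assumptions, not Oracle's actual design.

```python
# Hypothetical sketch: regress workflow execution time from a sequence of
# stage feature vectors with a GRU (not the Oracle implementation).
import torch
import torch.nn as nn

class StageTimeRNN(nn.Module):
    def __init__(self, n_features, hidden=64):
        super().__init__()
        self.rnn = nn.GRU(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)       # predicted execution time

    def forward(self, stages):                 # stages: (batch, n_stages, n_features)
        _, h = self.rnn(stages)                # h: (1, batch, hidden), last hidden state
        return self.head(h.squeeze(0))         # (batch, 1)

model = StageTimeRNN(n_features=8)
x = torch.randn(4, 10, 8)                      # 4 workflows, 10 stages, 8 features each
loss = nn.functional.mse_loss(model(x), torch.rand(4, 1))
loss.backward()                                # one standard supervised training step
```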
APA, Harvard, Vancouver, ISO, and other styles
30

(9237002), Amani M. Abu Jabal. "Digital Provenance Techniques and Applications." Thesis, 2020.

Find full text
Abstract:
This thesis describes a data provenance framework and other associated frameworks for utilizing provenance for data quality and reproducibility. We first identify the requirements for the design of a comprehensive provenance framework that is applicable to various applications, supports a rich set of provenance metadata, and is interoperable with other provenance management systems. We then design and develop a provenance framework, called SimP, addressing these requirements. Next, we present four prominent applications and investigate how provenance data can benefit them. The first application is the quality assessment of access control policies. For this, we design and implement the ProFact framework, which uses provenance techniques to collect comprehensive data about actions that were triggered either by a network context or by a user (i.e., a human or a device). Provenance data are used to determine whether the policies meet the quality requirements. ProFact includes two approaches for policy analysis: structure-based and classification-based. For the structure-based approach, we design tree structures to organize and assess the policy set efficiently. For the classification-based approach, we employ several classification techniques to learn the characteristics of policies and predict their quality. In addition, ProFact supports policy evolution and the assessment of its impact on policy quality. The second application is workflow reproducibility. For this, we implement ProWS, a provenance-based architecture for retrieving workflows. Specifically, ProWS transforms data provenance into workflows and then organizes the data into a set of indexes to support efficient querying mechanisms. ProWS supports composite queries on three types of search criteria: keywords of workflow tasks, patterns of workflow structure, and metadata about workflows (e.g., how often a workflow was used). The third application is access control policy reproducibility. For this, we propose a novel framework, Polisma, which generates attribute-based access control policies from data, namely from logs of historical access requests and their corresponding decisions. Polisma combines data mining, statistical, and machine learning techniques, and capitalizes on potential context information obtained from external sources (e.g., LDAP directories) to enhance the learning process. The fourth application is policy reproducibility through knowledge and experience transfer. For this, we propose a novel framework, FLAP, which transfers attribute-based access control policies between different parties in a collaborative environment, while considering the challenges of minimal data sharing and supporting policy adaptation to address conflicts. All frameworks are evaluated with respect to performance and accuracy.
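To illustrate in miniature the flavor of learning access control rules from decision logs (a toy heuristic, not the Polisma algorithm), the sketch below keeps (attribute, value, action) combinations whose permit ratio in the log exceeds a threshold; all log entries and names are invented.

```python
# Toy heuristic for deriving attribute-based rules from access decision logs
# (illustrative only; not the Polisma algorithm). Keeps (attribute, value,
# action) combinations that were permitted in at least `threshold` of cases.

logs = [
    ({"role": "nurse", "dept": "icu"}, "read", "permit"),
    ({"role": "nurse", "dept": "icu"}, "write", "deny"),
    ({"role": "doctor", "dept": "icu"}, "write", "permit"),
    ({"role": "doctor", "dept": "er"}, "write", "permit"),
]

def mine_rules(logs, threshold=0.9):
    stats = {}  # ((attr, value), action) -> (permit_count, total_count)
    for attrs, action, decision in logs:
        for kv in attrs.items():
            permit, total = stats.get((kv, action), (0, 0))
            stats[(kv, action)] = (permit + (decision == "permit"), total + 1)
    return [(kv, action) for (kv, action), (p, t) in stats.items() if p / t >= threshold]

for (attr, value), action in mine_rules(logs):
    print(f"PERMIT {action} IF {attr} == {value}")
```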
APA, Harvard, Vancouver, ISO, and other styles
31

陳耀坤. "Workflow-based E-learning environment." Thesis, 2003. http://ndltd.ncl.edu.tw/handle/09280804817947677794.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Shie, Tsai-Yan, and 謝采燕. "Building an e-Learning Workflow Based on Semantic Web." Thesis, 2009. http://ndltd.ncl.edu.tw/handle/56486930905679104425.

Full text
Abstract:
The rapid growth of information and web communication technologies has encouraged the development of synchronous and asynchronous e-learning tools. Lifelong learning, on-the-job training, and schooling can take advantage of e-learning for its convenient accessibility, rich content, and material reusability. Most current implementations simply place large amounts of electronic course material on the web for browsing and indexing. The teaching function is often neglected for lack of learning direction, and consequently learners may achieve incomplete or limited results. This research aims at building a teaching workflow mechanism that incorporates learning activities and course material into a multimedia platform using the semantic web. The workflow guarantees the integrity of the teaching process, and the semantic web ensures meaningful relations among the activities and materials, together creating an intelligent tutor substitute. The intelligent tutor plans a learning map and prepares appropriate materials for each learner according to his or her ability and background; during the process, the tutor may direct the learner based on real-time interactions. All workflows comply with learning theories, and each material and activity is semantically defined. In the implementation, a trial course was developed to test the functionality of the proposed method; teaching efficiency was not evaluated for lack of time.
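As a toy illustration of what semantically defining activities and materials can look like (the namespace, properties, and course items below are invented and do not reflect the thesis's actual ontology), learning objects can be encoded as RDF triples with rdflib and queried to plan a learning path:

```python
# Toy sketch: annotate learning activities/materials as RDF triples and
# query them, as an intelligent tutor might when planning a learning map.
from rdflib import Graph, Namespace, Literal

EX = Namespace("http://example.org/elearning#")
g = Graph()
g.bind("ex", EX)

g.add((EX.ReadingTask, EX.usesMaterial, EX.IntroSlides))
g.add((EX.ReadingTask, EX.requiresLevel, Literal("beginner")))
g.add((EX.QuizTask, EX.follows, EX.ReadingTask))

# Find all activities suitable for a beginner.
q = 'SELECT ?a WHERE { ?a ex:requiresLevel "beginner" }'
for row in g.query(q, initNs={"ex": EX}):
    print(row.a)
```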
APA, Harvard, Vancouver, ISO, and other styles
33

TSAI, MENG-HAN, and 蔡孟翰. "A Study of Adaptive Workflow Scheduling based on Machine Learning and an Extensible Simulation Environment." Thesis, 2017. http://ndltd.ncl.edu.tw/handle/wn2xfm.

Full text
Abstract:
With the advancement of technology and the emergence of grid and cloud computing, many large-scale scientific and engineering applications are now constructed as workflows due to large amounts of interrelated computation and communication. Scheduling algorithms are crucial to efficient workflow execution and have become an important research topic. In this thesis, we study list-based workflow scheduling algorithms. The thesis consists of three major parts. In the first part, we propose a new list-based workflow scheduling algorithm that outperforms the current state-of-the-art algorithm. The second part presents a Parallel Extensible Workload Scheduling Simulator (Pewss), which we developed to aid research work in parallel job scheduling. Using Pewss, we conducted various simulation experiments on workflow scheduling and found that no single workflow scheduling algorithm can always achieve the best performance under all workload and platform conditions. These experimental results motivated the third part, which aims to develop an adaptive workflow scheduling algorithm based on machine learning. The adaptive algorithm is expected to achieve consistently better performance under various circumstances than any single existing workflow scheduling approach. A series of simulation experiments was conducted to evaluate the proposed algorithms. The results indicate that our workflow scheduling algorithms significantly outperform previous scheduling methods.
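For readers unfamiliar with list-based scheduling, the simplified Python sketch below, in the spirit of HEFT rather than the thesis's proposed algorithms, ranks tasks by upward rank and greedily assigns each to the processor giving the earliest finish time; the DAG, costs, and two identical processors are made up, and communication costs are ignored.

```python
# Simplified list-based scheduling sketch (HEFT-like, communication ignored).
# The DAG, task costs, and processor count are invented examples.
from functools import lru_cache

dag = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}   # task -> successors
cost = {"A": 3, "B": 2, "C": 4, "D": 1}                    # computation cost

@lru_cache(maxsize=None)
def upward_rank(task):
    succ = dag[task]
    return cost[task] + (max(upward_rank(s) for s in succ) if succ else 0)

order = sorted(dag, key=upward_rank, reverse=True)         # priority list

ready = {"P0": 0, "P1": 0}                                 # processor free times
finish = {}
for t in order:
    est = max((finish[p] for p in dag if t in dag[p]), default=0)  # after parents
    proc = min(ready, key=lambda p: max(ready[p], est))    # earliest-start processor
    start = max(ready[proc], est)
    finish[t] = start + cost[t]
    ready[proc] = finish[t]
    print(f"{t} -> {proc}, finishes at {finish[t]}")
```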
APA, Harvard, Vancouver, ISO, and other styles
34

"A MULTI-FUNCTIONAL PROVENANCE ARCHITECTURE: CHALLENGES AND SOLUTIONS." Thesis, 2013. http://hdl.handle.net/10388/ETD-2013-12-1419.

Full text
Abstract:
In service-oriented environments, services are put together in the form of a workflow with the aim of distributed problem solving. Capturing the execution details of the services' transformations is a significant advantage of using workflows. These execution details, referred to as provenance information, are usually traced automatically and stored in provenance stores. Provenance data contains the data recorded by a workflow engine during a workflow execution. It identifies what data is passed between services, which services are involved, and how results are eventually generated for particular sets of input values. Provenance information is of great importance and has found its way into areas of computer science such as bioinformatics, databases, and social and sensor networks. Current exploitation and application of provenance data is very limited, as provenance systems were initially developed for specific applications. Applying learning and knowledge discovery methods to provenance data can therefore provide rich and useful information on workflows and services. In this work, the challenges with workflows and services are studied to discover the possibilities and benefits of providing solutions based on provenance data. These challenges include workflow composition, abstract workflow selection, refinement, evaluation, and graph model extraction. A multi-functional architecture is presented which addresses these issues by exploiting provenance data. The specific contribution of the proposed architecture is that it provides a basis for combining the previous execution details of services and workflows with artificial intelligence and knowledge management techniques to resolve the major challenges regarding workflows. The presented architecture is application-independent and could be deployed in any area. The requirements for such an architecture, along with its building components, are discussed. Furthermore, the responsibilities of the components, related work, and the implementation details of the architecture and each component are presented.
APA, Harvard, Vancouver, ISO, and other styles
35

Fouché, Marie-Louise. "The role of taxonomies in knowledge management." Diss., 2006. http://hdl.handle.net/10500/2498.

Full text
Abstract:
The knowledge economy has brought about new challenges for organisations. Accessing data and information in a logical manner is a critical component of information and knowledge management, and taxonomies are viewed as a solution to facilitate such access. The aim of this research was to investigate the role of taxonomies within organisations that utilise a knowledge management framework or strategy. An interview process was used to gain insight from leading organisations into the use of taxonomies within the knowledge management environment. Organisations are starting to use taxonomies to manage multi-sourced environments and facilitate the appropriate sourcing of the organisation's intellectual capital. Based on the research, it is clear that taxonomies will play a central role in the coming years in helping to manage the complexity of the organisation's environment and easing access to relevant information.
APA, Harvard, Vancouver, ISO, and other styles