To see the other types of publications on this topic, follow the link: Computer programming. Human-machine systems.

Dissertations / Theses on the topic 'Computer programming. Human-machine systems'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 dissertations / theses for your research on the topic 'Computer programming. Human-machine systems.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Dewan, Prasun. "Automatic generation of user interfaces." Madison, Wis. : University of Wisconsin-Madison, Computer Sciences Dept, 1986. http://catalog.hathitrust.org/api/volumes/oclc/14706019.html.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Allen, Jeanette. "Effects of representation on programming behavior." Diss., Georgia Institute of Technology, 1993. http://hdl.handle.net/1853/9233.

Full text
3

Sheikholeslami, Sina. "Ablation Programming for Machine Learning." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-258413.

Full text
Abstract:
As machine learning systems are being used in an increasing number of applications, from analysis of satellite sensor data and health-care analytics to smart virtual assistants and self-driving cars, they are also becoming more and more complex. This means that more time and computing resources are needed to train the models, and the number of design choices and hyperparameters increases as well. Due to this complexity, it is usually hard to explain the effect of each design choice or component of the machine learning system on its performance.

A simple approach to addressing this problem is to perform an ablation study: a scientific examination of a machine learning system in order to gain insight into the effects of its building blocks on its overall performance. However, ablation studies are currently not part of standard machine learning practice. One of the key reasons for this is that performing an ablation study currently requires major modifications to the code, as well as extra compute and time resources.

Furthermore, experimentation with a machine learning system is an iterative process that consists of several trials. A popular approach is to run these trials in parallel on an Apache Spark cluster. Since Apache Spark follows the Bulk Synchronous Parallel model, parallel execution of trials proceeds in stages separated by barriers: in order to execute a new set of trials, all trials from the previous stage must be finished. As a result, much time and computing capacity is usually wasted on unpromising trials that could have been stopped soon after they started.

We have attempted to address these challenges by introducing MAGGY, an open-source framework for asynchronous and parallel hyperparameter optimization and ablation studies with Apache Spark and TensorFlow. This framework allows for better resource utilization, and provides ablation studies and hyperparameter optimization in a unified and extensible API.
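The leave-one-component-out loop at the heart of an ablation study can be sketched in a few lines of Python. This is a generic illustration, not MAGGY's actual API; the toy model, feature names, and scoring function are invented for the example:

```python
# Leave-one-component-out ablation study (toy sketch). For each trial,
# one building block is removed, the model is retrained, and the score
# drop is attributed to that block.

def build_model(features):
    # Toy "model": predicts y as the sum of the selected feature columns.
    def predict(row):
        return sum(row[f] for f in features)
    return predict

def score(model, data):
    # Fraction of examples whose prediction matches the label exactly.
    hits = sum(1 for row, y in data if model(row) == y)
    return hits / len(data)

def ablation_study(all_features, data):
    baseline = score(build_model(all_features), data)
    report = {}
    for f in all_features:
        trial_features = [g for g in all_features if g != f]
        trial_score = score(build_model(trial_features), data)
        report[f] = baseline - trial_score  # performance attributable to f
    return baseline, report

# Tiny synthetic dataset: the label depends on 'a' and 'b' but not 'noise'.
data = [({"a": 1, "b": 2, "noise": 0}, 3),
        ({"a": 2, "b": 1, "noise": 0}, 3),
        ({"a": 0, "b": 4, "noise": 0}, 4)]
baseline, report = ablation_study(["a", "b", "noise"], data)
```

On this data the study correctly attributes no performance to the `noise` feature, which is exactly the kind of insight the abstract describes; frameworks like MAGGY aim to run such trials in parallel without the per-trial code changes this loop implies.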
4

Sims, Pauline. "Turing's P-type machine and neural network hybrid systems." Thesis, University of Ulster, 1994. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.240712.

Full text
5

Lau-Kee, David Andrew. "Visual and by-example interactive systems for non-programmers." Thesis, University of York, 1990. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.238670.

Full text
6

Levine, Jonathan. "Computer based dialogs : theory and design /." Online version of thesis, 1990. http://hdl.handle.net/1850/10590.

Full text
7

Jarvis, Matthew P. "Applying machine learning techniques to rule generation in intelligent tutoring systems." Link to electronic thesis, 2004. http://www.wpi.edu/Pubs/ETD/Available/etd-0429104-112724.

Full text
Abstract:
Thesis (M.S.)--Worcester Polytechnic Institute.
Keywords: Intelligent Tutoring Systems; Model Tracing; Machine Learning; Artificial Intelligence; Programming by Demonstration. Includes bibliographical references.
8

Tchernavskij, Philip. "Designing and Programming Malleable Software." Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLS499.

Full text
Abstract:
User needs for software features and interfaces are diverse and changing, motivating the goal of making it as easy as possible for users themselves to change software, or to have it changed on their behalf in response to their developing needs. However, in my opinion, current approaches do not address this issue adequately: software engineering promotes flexible code, but in practice this does not help end-users effect change in their software. End-user and live programming systems help users customize their interfaces by accessing and modifying the underlying source code. I take a different approach, seeking to maximize the kinds of modifications that can take place through regular interactions, e.g. direct manipulation of interface elements. I call this approach malleable software. To understand contemporary needs for and barriers to modifying software, I study how it is produced, maintained, adopted, and appropriated in a network of communities working with biodiversity data. I find that the mode of software production, i.e. the technologies and economic relations that produce software, is biased towards centralized, one-size-fits-all systems. This leads me to propose a long-term, interdisciplinary research program in reforming the tools of software development to create infrastructures for plurality. These tools should help multiple communities collaborate without forcing them to consolidate around identical interfaces or data representations. Malleable software is one such infrastructure, in which interactive systems are dynamic constellations of interfaces, devices, and programs assembled at the site of use. My technological contribution is a reconstruction of the programming mechanisms used to create interactive behavior. I generalize existing control structures for interaction as entanglements, and develop a higher-order control structure, entanglers, which produces entanglements when particular pre-conditions, called co-occurrences, are met. 
Entanglers cause interactions to be assembled dynamically as system components come and go. I develop these mechanisms in Tangler, a prototype environment for building malleable interactive software. I demonstrate how Tangler supports malleability through a set of benchmark cases illustrating how users can modify systems by themselves or with programmer assistance. This thesis is an early step towards a paradigm for programming and designing malleable software that can keep up with human diversity.
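The entangler/co-occurrence mechanism described in the abstract can be loosely sketched as follows. This is a speculative Python illustration, not Tangler's actual design; the class names, the slider/volume components, and the string-based "entanglement" are all invented:

```python
# Sketch: an entangler watches for a co-occurrence of components and,
# once it is met, assembles an entanglement (a binding between them).

class Entangler:
    def __init__(self, co_occurrence, make_entanglement):
        self.needed = set(co_occurrence)   # the co-occurrence pre-condition
        self.make = make_entanglement      # how to build the entanglement

class World:
    def __init__(self):
        self.components = {}
        self.entanglers = []
        self.entanglements = []

    def add(self, kind, component):
        # Each arrival re-checks every entangler's pre-condition.
        self.components[kind] = component
        for e in self.entanglers:
            if e.needed <= self.components.keys():
                parts = {k: self.components[k] for k in e.needed}
                self.entanglements.append(e.make(parts))

world = World()
world.entanglers.append(
    Entangler({"slider", "volume"},
              lambda p: f"bind {p['slider']} -> {p['volume']}"))
world.add("slider", "ui.slider1")    # co-occurrence not yet met
world.add("volume", "audio.volume")  # now met: entanglement is assembled
```

The point of the sketch is that the binding is created at the site of use, when the components happen to co-occur, rather than being hard-wired at development time.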
9

Yunten, Tamer. "Supervisory methodology and notation (SUPERMAN) for human-computer system development." Diss., Virginia Polytechnic Institute and State University, 1985. http://hdl.handle.net/10919/49969.

Full text
Abstract:
The underlying goal of SUPERvisory Methodology And Notation (SUPERMAN) is to enhance productive operation of human-computer system developers by providing easy-to-use concepts and automated tools for developing high-quality (e.g., human-engineered, cost-effective, easy-to-maintain) target systems. The supervisory concept of the methodology integrates functions of many modeling techniques, and allows complete representation of the designer's conceptualization of a system's operation. The methodology views humans as functional elements of a system in addition to computer elements. Parts of software which implement human-computer interaction are separated from the rest of software. A single, unified system representation is used throughout a system lifecycle. The concepts of the methodology are notationally built into a graphical programming language. The use of this language in developing a system leads to a natural and orderly application of the methodology.
Ph. D.
10

Ferreira, Ana. "Modelling access control for healthcare information systems : how to control access through policies, human processes and legislation." Thesis, University of Kent, 2010. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.529399.

Full text
Abstract:
The introduction of Electronic Medical Records (EMR) within healthcare organizations has the main goal of integrating heterogeneous patient information that is usually scattered over different locations. However, there are some barriers that impede the effective integration of EMR within the healthcare practice (e.g., educational, time/costs, security). A focus on improving access control definition and implementation is fundamental to defining proper system workflow and access. The main objectives of this research are: to involve end users in the definition of access control rules; to determine which access control rules are important to those users; to define an access control model that can model these rules; and to implement and evaluate this model. Technical, methodological and legislative reviews were conducted on access control, both in general and in the healthcare domain. Grounded theory was used together with mixed methods to gather users' experiences and needs regarding access control. Focus groups (the main, qualitative method) followed by structured questionnaires (a secondary, quantitative method) were applied to the healthcare professionals, whilst structured telephone interviews were applied to the patients. A list of access control rules, together with a new Break-The-Glass (BTG) RBAC model, was developed. A prototype and a pilot case study were implemented in order to test and evaluate the new model. A research process was developed during this work that allows access control procedures in healthcare to be translated from legislation to practice in a systematic and objective way. With access controls closer to the healthcare practice, the educational, time/costs and security barriers to EMR integration can be minimized. This is achieved by reducing the time needed to learn, use and alter the system; allowing unanticipated or emergency situations to be tackled in a controlled manner (BTG); and reducing unauthorized and non-justified accesses. All this helps to achieve faster and safer patient treatment.
11

Fernaeus, Ylva. "Let's Make a Digital Patchwork : Designing for Childrens Creative Play with Programming Materials." Doctoral thesis, Stockholms universitet, Institutionen för data- och systemvetenskap, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-6706.

Full text
Abstract:
This thesis explores new approaches to making and playing with programming materials, especially the forms provided with screen-based digital media. Designing with these media expressions can be very attractive to children, but they are usually not made available to them in the same degree as are physical materials. Inspired by children's play with physical materials, this work includes design explorations of how different resources alter, scaffold and support children in activities of making dynamic, screen-based systems. How tangibles turn the activity of programming into a more physical, social and collaborative activity is emphasised. A specific outcome concerns the importance of considering 'offline' and socially oriented action when designing tangible technologies. The work includes the design of a tangible programming system, Patcher, with which groups of children can program systems displayed on a large screen surface. The character of children's programming is conceptualised through the notion of a digital patchwork, emphasising (1) children's programming as media-sensitive design, (2) making programming more concrete by combining and reusing readily available programming constructs, and (3) the use of tangibles for social interaction.
13

Herbert, George D. "Compiling Unit Clauses for the Warren Abstract Machine." UNF Digital Commons, 1987. http://digitalcommons.unf.edu/etd/571.

Full text
Abstract:
This thesis describes the design, development, and installation of a computer program which compiles unit clauses generated in a Prolog-based environment at Argonne National Laboratories into Warren Abstract Machine (WAM) code. The program enhances the capabilities of the environment by providing rapid unification and subsumption tests for the very significant class of unit clauses. This should improve performance substantially for large programs that generate and use many unit clauses.
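The unit-clause unification test that such a compiler accelerates can be written interpretively. This is a plain Python sketch of first-order unification (without an occurs check, as is standard WAM practice); the term encoding, with tuples for compound terms and `?`-prefixed strings for variables, is invented for the example:

```python
# Interpretive sketch of the unification test that WAM code compiles away.
# Terms: compound terms are tuples ('functor', arg1, ...), variables are
# strings starting with '?', and any other string is a constant.

def unify(t1, t2, subst=None):
    subst = dict(subst or {})

    def walk(t):
        # Dereference a variable through the current substitution.
        while isinstance(t, str) and t.startswith("?") and t in subst:
            t = subst[t]
        return t

    stack = [(t1, t2)]
    while stack:
        a, b = stack.pop()
        a, b = walk(a), walk(b)
        if a == b:
            continue
        if isinstance(a, str) and a.startswith("?"):
            subst[a] = b                      # bind variable a
        elif isinstance(b, str) and b.startswith("?"):
            subst[b] = a                      # bind variable b
        elif (isinstance(a, tuple) and isinstance(b, tuple)
              and len(a) == len(b) and a[0] == b[0]):
            stack.extend(zip(a[1:], b[1:]))   # unify arguments pairwise
        else:
            return None  # clash: different functors, arities, or constants
    return subst
```

For instance, unifying `p(?X, f(a))` with `p(b, ?Y)` succeeds with `?X = b, ?Y = f(a)`; a WAM compilation of the unit clause turns this interpretive loop into specialized get/unify instructions.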
14

Partin, Michael. "Scalable, Pluggable, and Fault Tolerant Multi-Modal Situational Awareness Data Stream Management Systems." Wright State University / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=wright1567073723628721.

Full text
15

ARAUJO, SUMAIR G. de. "Projeto e implantacao de automacao em sistemas de irradiacao de alvos solidos, liquidos e gasosos em ciclotrons visando a producao de radioisotopos." reponame:Repositório Institucional do IPEN, 2001. http://repositorio.ipen.br:8080/xmlui/handle/123456789/10921.

Full text
Tese (Doutoramento)
IPEN/T
Instituto de Pesquisas Energeticas e Nucleares - IPEN/CNEN-SP
16

Gargesa, Padmashri. "Reward-driven Training of Random Boolean Network Reservoirs for Model-Free Environments." PDXScholar, 2013. https://pdxscholar.library.pdx.edu/open_access_etds/669.

Full text
Abstract:
Reservoir Computing (RC) is an emerging machine learning paradigm where a fixed kernel, built from a randomly connected "reservoir" with sufficiently rich dynamics, is capable of expanding the problem space in a non-linear fashion to a higher dimensional feature space. These features can then be interpreted by a linear readout layer that is trained by a gradient descent method. In comparison to traditional neural networks, only the output layer needs to be trained, which leads to a significant computational advantage. In addition, the short term memory of the reservoir dynamics has the ability to transform a complex temporal input state space to a simple non-temporal representation. Adaptive real-time systems are multi-stage decision problems that can be used to train an agent to achieve a preset goal by performing an optimal action at each timestep. In such problems, the agent learns through continuous interactions with its environment. Conventional techniques for solving such problems become computationally expensive or may not converge if the state-space being considered is large, partially observable, or if short term memory is required in optimal decision making. The objective of this thesis is to use reservoir computers to solve such goal-driven tasks, where no error signal can be readily calculated to apply gradient descent methodologies. To address this challenge, we propose a novel reinforcement learning approach in combination with reservoir computers built from simple Boolean components. Such reservoirs are of interest because they have the potential to be fabricated by self-assembly techniques. We evaluate the performance of our approach in both Markovian and non-Markovian environments, and compare it against that of an agent trained through traditional Q-Learning. We find that the reservoir-based agent performs successfully in these problem contexts and even performs marginally better than Q-Learning agents in certain cases.
Our proposed approach allows us to retain the advantage of traditional parameterized dynamic systems in successfully modeling embedded state-space representations, while eliminating the complexity involved in training traditional neural networks. To the best of our knowledge, our method of training a reservoir readout layer through an on-policy bootstrapping approach is unique in the field of random Boolean network reservoirs.
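The reservoir architecture described above, a random Boolean network whose dynamics are fixed and whose only trained part is a linear readout, can be sketched as follows. The network size, connectivity `K`, and the fixed readout weights are illustrative choices, not the thesis's configuration:

```python
import random

# Sketch of a random Boolean network (RBN) reservoir with a linear readout.
random.seed(0)
N, K = 32, 3  # number of nodes, and inputs per node

# Fixed random wiring and random Boolean update tables (the "reservoir").
inputs = [[random.randrange(N) for _ in range(K)] for _ in range(N)]
tables = [[random.randrange(2) for _ in range(2 ** K)] for _ in range(N)]

def step(state):
    # Each node applies its Boolean function to its K upstream nodes.
    new_state = []
    for n in range(N):
        idx = 0
        for src in inputs[n]:
            idx = (idx << 1) | state[src]
        new_state.append(tables[n][idx])
    return new_state

def readout(state, weights):
    # Only this linear layer would be trained, e.g. by a TD/Q-learning rule.
    return sum(w * s for w, s in zip(weights, state))

state = [random.randrange(2) for _ in range(N)]
for _ in range(5):           # run the fixed reservoir dynamics
    state = step(state)
value = readout(state, [0.1] * N)
```

Because the reservoir itself is never trained, reinforcement learning only has to adjust the readout weights from reward, which is what makes the approach tractable where no error signal is available for gradient descent through the network.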
17

Johnston, Christopher Troy. "VERTIPH : a visual environment for real-time image processing on hardware : a thesis presented in partial fulfilment of the requirements for the degree of Doctor of Philosophy in Computer Systems Engineering at Massey University, Palmerston North, New Zealand." Massey University, 2009. http://hdl.handle.net/10179/1219.

Full text
Abstract:
This thesis presents VERTIPH, a visual programming language for the development of image processing algorithms on FPGA hardware. The research began with an examination of the whole design cycle, with a view to identifying requirements for implementing image processing on FPGAs. Based on this analysis, a design process was developed in which a selected software algorithm is matched to a hardware architecture tailor-made for its implementation. The algorithm and architecture are then transformed into an FPGA-suitable design. It was found that in most cases the most efficient mapping for image processing algorithms is a streamed processing approach. This constrains how data is presented and requires most existing algorithms to be extensively modified. Therefore, the resultant designs are heavily streamed and pipelined. A visual notation was developed to complement this design process, as both streaming and pipelining are well represented by data-flow visual languages. The notation has three views, each of which represents and supports a different part of the design process. An architecture view gives an overview of the design's main blocks and their interconnections. A computational view represents lower-level details by representing each block by a set of computational expressions and low-level controls. This includes a novel visual representation of pipelining that simplifies latency analysis, multiphase design, priming, flushing and stalling, and the detection of sequencing errors. A scheduling view adds a state machine for high-level control of processing blocks; this extends state objects to allow for the priming and flushing of pipelined operations. User evaluations of an implementation of the key parts of this language (the architecture view and the computational view) found that both were generally good visualisations and aided in design (especially the type interface, pipeline and control notations).
The user evaluations provided several suggestions for the improvement of the language, and in particular the evaluators would have preferred to use the diagrams as a verification tool for a textual representation rather than as the primary data capture mechanism. A cognitive dimensions analysis showed that the language scores highly for thirteen of the twenty dimensions considered, particularly those related to making details of the design clearer to the developer.
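The streamed, pipelined processing style that VERTIPH targets can be imitated in software with generator stages, each consuming one pixel per step so that no whole-frame buffer is needed. This is a software analogy only (the thesis targets FPGA hardware), and the stages are invented for the example:

```python
# Sketch of streamed image processing: each stage consumes one pixel per
# "clock" from the stage upstream and yields one result downstream.

def source(pixels):
    for p in pixels:
        yield p

def threshold(stream, t):
    # Binarize the stream: 1 where pixel >= t, else 0.
    for p in stream:
        yield 1 if p >= t else 0

def invert(stream):
    for p in stream:
        yield 1 - p

# Stages composed into a pipeline; data flows through without buffering
# the whole frame, mirroring a hardware pipeline of processing blocks.
pipeline = invert(threshold(source([10, 200, 90, 255]), 128))
result = list(pipeline)
```

In hardware each stage would be a pipelined block with its own latency, which is exactly what VERTIPH's computational view makes visible (priming, flushing, stalling).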
18

Malki, Khalil. "Automated Knowledge Extraction from Archival Documents." DigitalCommons@Robert W. Woodruff Library, Atlanta University Center, 2019. http://digitalcommons.auctr.edu/cauetds/204.

Full text
Abstract:
Traditional archival media such as paper, film, photographs, etc. contain a vast store of knowledge. Much of this knowledge is applicable to current business and scientific problems, and offers solutions; consequently, there is value in extracting this information. While it is possible to manually extract the content, this technique is not feasible for large knowledge repositories due to cost and time. In this thesis, we develop a system that can extract such knowledge automatically from large repositories. A Graphical User Interface that permits users to indicate the location of the knowledge components (indexes) is developed, and software features that permit automatic extraction of indexes from similar documents are presented. The indexes and the documents are stored in a persistent data store. The system is tested on a University Registrar's legacy paper-based transcript repository. The study shows that the system provides a good solution for large-scale extraction of knowledge from archived paper and other media.
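The core mechanism, marking index regions once and reapplying them to similar documents, can be sketched as template-driven extraction. The field names and coordinates below are hypothetical, not the thesis's actual schema:

```python
# Sketch: the user marks index regions once (here, character spans on an
# OCR'd line); the same template is then applied to every similar document.

template = {"student_id": (0, 10), "name": (10, 30)}  # field -> (start, end)

def extract_indexes(line, template):
    # Slice each marked region out of the document text and tidy it.
    return {field: line[start:end].strip()
            for field, (start, end) in template.items()}

record = extract_indexes("19870042  Jane Doe            ", template)
```

A real system would use page coordinates over scanned images rather than character spans, but the idea is the same: one manual specification, automatic extraction across the repository.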
19

König, Rikard. "Predictive Techniques and Methods for Decision Support in Situations with Poor Data Quality." Licentiate thesis, Högskolan i Borås, Institutionen Handels- och IT-högskolan, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:hb:diva-3517.

Full text
Abstract:
Today, decision support systems based on predictive modeling are becoming more common, since organizations often collect more data than decision makers can handle manually. Predictive models are used to find potentially valuable patterns in the data, or to predict the outcome of some event. There are numerous predictive techniques, ranging from simple ones such as linear regression to complex, powerful ones like artificial neural networks. Complex models usually obtain better predictive performance, but are opaque and thus cannot be used to explain predictions or discovered patterns. The design choice of which predictive technique to use becomes even harder since no technique outperforms all others over a large set of problems. It is even difficult to find the best parameter values for a specific technique, since these settings are also problem dependent.

One way to simplify this vital decision is to combine several models, possibly created with different settings and techniques, into an ensemble. Ensembles are known to be more robust and powerful than individual models, and ensemble diversity can be used to estimate the uncertainty associated with each prediction.

In real-world data mining projects, data is often imprecise, contains uncertainties or is missing important values, making it impossible to create models with sufficient performance for fully automated systems. In these cases, predictions need to be manually analyzed and adjusted. Here, opaque models like ensembles have a disadvantage, since the analysis requires understandable models. To overcome this deficiency of opaque models, researchers have developed rule extraction techniques that try to extract comprehensible rules from opaque models, while retaining sufficient accuracy.

This thesis suggests a straightforward but comprehensive method for predictive modeling in situations with poor data quality. First, ensembles are used for the actual modeling, since they are powerful, robust and require few design choices. Next, ensemble uncertainty estimations pinpoint predictions that need special attention from a decision maker. Finally, rule extraction is performed to support the analysis of uncertain predictions. Using this method, ensembles can be used for predictive modeling, in spite of their opacity and sometimes insufficient global performance, while the involvement of a decision maker is minimized.

The main contributions of this thesis are three novel techniques that enhance the performance of the proposed method. The first technique deals with ensemble uncertainty estimation and is based on a successful approach often used in weather forecasting. The other two are improvements of a rule extraction technique, resulting in increased comprehensibility and more accurate uncertainty estimations.
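The first two steps of the proposed method, predicting with an ensemble and using member disagreement to flag predictions for manual analysis, can be sketched as follows. The three toy "models" and the threshold are invented for illustration:

```python
from statistics import mean, stdev

# Sketch: an ensemble of models; the spread of their predictions serves
# as an uncertainty estimate, and large spread flags a case for review.
ensemble = [lambda x: 2 * x,
            lambda x: 2 * x + 0.1,
            lambda x: 2 * x - 0.1]

def predict_with_uncertainty(x, threshold=0.5):
    preds = [model(x) for model in ensemble]
    spread = stdev(preds)  # disagreement as the uncertainty estimate
    needs_review = spread > threshold
    return mean(preds), spread, needs_review
```

Cases that come back flagged would then be handed to the third step, rule extraction, so the decision maker can analyze them with a comprehensible model rather than the opaque ensemble.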

Sponsorship: This work was supported by the Information Fusion Research Program (www.infofusion.se) at the University of Skövde, Sweden, in partnership with the Swedish Knowledge Foundation under grant 2003/0104.

20

Aarno, Daniel. "Intention recognition in human machine collaborative systems." Licentiate thesis, KTH, Numerical Analysis and Computer Science, NADA, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-4303.

Full text
Abstract:

Robot systems have been used extensively over the last decades to create automation solutions in a number of areas. Most current automation solutions are limited by the fact that the tasks they can solve must be repetitive and predictable. One of the reasons for this is that today's robot systems lack the ability to understand and reason about the world. Because of this, researchers in robotics and artificial intelligence have tried to create more intelligent machines. Although large advances have been made towards creating robots that can act and interact in human environments, there is currently no system that comes close to the human ability to reason about the world.

To simplify the problem, some researchers have suggested an alternative to fully autonomous robots operating in human environments: combining the abilities of humans and machines. For example, a person can act at a remote location, perhaps inaccessible to that person for various reasons, by means of teleoperation. In teleoperation, the operator sends commands to a robot that acts as an extension of the operator's own body.

Segmentering och identifiering av rörelser skapade av en operatör kan användas för att tillhandahålla korrekt assistans vid fjärrstyrning eller samarbete mellan människa och maskin. Assistansen sker ofta inom ramen för virtuella fixturer där eftergivenheten hos fixturen kan justeras under exekveringen för att tillhandahålla ökad prestanda i form av ökad precision och minskad tid för att utföra uppgiften.

Den här avhandlingen fokuserar på två aspekter av samarbete mellan människa och maskin. Klassificering av en operatörs rörelser till ett på förhand specificerat tillstånd under en manipuleringsuppgift och assistans under manipuleringsuppgiften baserat på virtuella fixturer. Den specifika tillämpningen som behandlas är manipuleringsuppgifter där en mänsklig operatör styr en robotmanipulator i ett fjärrstyrt eller samarbetande system.

En metod för att följa förloppet av en uppgift medan den utförs genom att använda virtuella fixturer presenteras. Istället för att följa en på förhand specificerad plan så har operatören möjlighet att undvika oväntade hinder och avvika från modellen. För att möjliggöra detta estimeras kontinuerligt sannolikheten att operatören följer en viss trajektorie (deluppgift). Estimatet används sedan för att justera eftergivenheten hos den virtuella fixturen så att ett beslut om hur rörelsen ska fixeras kan tas medan uppgiften utförs.

En flerlagers dold Markovmodell (eng. layered hidden Markov model) används för att modellera mänskliga färdigheter. En gestemklassificerare som klassificerar en operatörs rörelser till olika grundläggande handlingsprimitiver, eller gestemer, evalueras. Gestemklassificerarna används sedan i en flerlagers dold Markovmodell för att modellera en simulerad fjärrstyrd manipuleringsuppgift. Klassificeringsprestandan utvärderas med avseende på brus, antalet gestemer, typen på den dolda Markovmodellen och antalet tillgängliga träningssekvenser. Den flerlagers dolda Markovmodellen tillämpas sedan på data från en trajektorieföljningsuppgift i 2D och 3D med en robotmanipulator för att ge både kvalitativa och kvantitativa resultat. Resultaten tyder på att den flerlagers dolda Markovmodellen är väl lämpad för att modellera trajektorieföljningsuppgifter och att den flerlagers dolda Markovmodellen är robust med avseende på felklassificeringar i de underliggande gestemklassificerarna.


Robot systems have been used extensively during the last decades to provide automation solutions in a number of areas. The majority of the currently deployed automation systems are limited in that the tasks they can solve are required to be repetitive and predictable. One reason for this is the inability of today’s robot systems to understand and reason about the world. Therefore the robotics and artificial intelligence research communities have made significant research efforts to produce more intelligent machines. Although significant progress has been made towards achieving robots that can interact in a human environment there is currently no system that comes close to achieving the reasoning capabilities of humans.

In order to reduce the complexity of the problem some researchers have proposed an alternative to creating fully autonomous robots capable of operating in human environments. The proposed alternative is to allow fusion of human and machine capabilities. For example, using teleoperation a human can operate at a remote site, which may not be accessible for the operator for a number of reasons, by issuing commands to a remote agent that will act as an extension of the operator’s body.

Segmentation and recognition of operator generated motions can be used to provide appropriate assistance during task execution in teleoperative and human-machine collaborative settings. The assistance is usually provided in a virtual fixture framework where the level of compliance can be altered online in order to improve the performance in terms of execution time and overall precision. Acquiring, representing and modeling human skills are key research areas in teleoperation, programming-by-demonstration and human-machine collaborative settings. One of the common approaches is to divide the task that the operator is executing into several sub-tasks in order to provide manageable modeling.

This thesis is focused on two aspects of human-machine collaborative systems: classification of an operator’s motion into a predefined state of a manipulation task, and assistance during a manipulation task based on virtual fixtures. The particular applications considered consist of manipulation tasks where a human operator controls a robotic manipulator in a cooperative or teleoperative mode.

A method for online task tracking using adaptive virtual fixtures is presented. Rather than executing a predefined plan, the operator has the ability to avoid unforeseen obstacles and deviate from the model. To allow this, the probability of following a certain trajectory (sub-task) is estimated and used to automatically adjust the compliance of a virtual fixture, thus providing an online decision of how to fixture the movement.

A layered hidden Markov model is used to model human skills. A gestem classifier that classifies the operator’s motions into basic action-primitives, or gestemes, is evaluated. The gestem classifiers are then used in a layered hidden Markov model to model a simulated teleoperated task. The classification performance is evaluated with respect to noise, number of gestemes, type of the hidden Markov model and the available number of training sequences. The layered hidden Markov model is applied to data recorded during the execution of a trajectory-tracking task in 2D and 3D with a robotic manipulator in order to give qualitative as well as quantitative results for the proposed approach. The results indicate that the layered hidden Markov model is suitable for modeling teleoperative trajectory-tracking tasks and that the layered hidden Markov model is robust with respect to misclassifications in the underlying gestem classifiers.
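The gesteme classification described above rests on standard HMM sequence scoring: each gesteme has its own model, and an observed motion sequence is assigned to the model under which it is most likely. The sketch below is an illustration only, not the thesis implementation; the model parameters and gesteme names are invented. It implements the scaled forward algorithm for discrete observations:

```python
import math

def forward_log_likelihood(obs, start_p, trans_p, emit_p):
    """Scaled forward algorithm: log P(obs | model) for a discrete HMM.
    start_p[i]: initial probability of state i.
    trans_p[i][j]: probability of moving from state i to state j.
    emit_p[i][k]: probability that state i emits symbol k."""
    n = len(start_p)
    alpha = [start_p[i] * emit_p[i][obs[0]] for i in range(n)]
    s = sum(alpha)
    log_like = math.log(s)
    alpha = [a / s for a in alpha]
    for o in obs[1:]:
        alpha = [sum(alpha[i] * trans_p[i][j] for i in range(n)) * emit_p[j][o]
                 for j in range(n)]
        s = sum(alpha)            # rescale each step to avoid underflow
        log_like += math.log(s)
        alpha = [a / s for a in alpha]
    return log_like

def classify_gesteme(obs, models):
    """Assign the sequence to the gesteme model that scores it highest."""
    return max(models, key=lambda m: forward_log_likelihood(obs, *models[m]))

# Invented single-state models for two hypothetical gestemes over a
# binary symbol alphabet (e.g. quantized motion directions).
models = {
    "push": ([1.0], [[1.0]], [[0.9, 0.1]]),
    "pull": ([1.0], [[1.0]], [[0.1, 0.9]]),
}
```

In a layered model, the per-gesteme scores produced this way would in turn serve as observations for a higher-level HMM over sub-tasks.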

APA, Harvard, Vancouver, ISO, and other styles
21

Belkhir, Abdelkader. "Conception d'une machine orientee fonctions, application a l'implantation d'un langage dirige par les donnees." Paris 6, 1988. http://www.theses.fr/1988PA066055.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

Ibrahim-Sakre, Mohammed M. A. "A fast and expert machine translation system involving Arabic language." Thesis, Cranfield University, 1991. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.305302.

Full text
APA, Harvard, Vancouver, ISO, and other styles
23

Strickland, Ted John Jr. "Dynamic management of multichannel interfaces for human interaction with computer-based intelligent assistants." Diss., The University of Arizona, 1989. http://hdl.handle.net/10150/184793.

Full text
Abstract:
For complex man-machine tasks where multi-media interaction with computer-based assistants is appropriate, a portion of the assistant's intelligence must be devoted to managing its communication processes with the user. Since people often serve the role of assistants, the conventions of human communication provide a basis for designing the communication processes of the computer-based assistant. Human decision making for communication requires knowledge of the user's style, the task demands, and communication practices, and knowledge of the current situation. Decisions necessary for effective communication, when, how, and what to communicate, can be expressed using these knowledge sources. A system based on human communication rules was developed to manage the communication decisions of an intelligent assistant. The Dynamic Communication Management (DCM) system consists of four components, three models and a manager. The model of the user describes the user's communication preferences for different task situations. The model of the task is used to establish the user's current activity and to describe how communication should be conducted for this activity. The communication model provides the rules needed to make decisions: when to communicate the message, how to present the message to the user, and what information should be communicated. The Communication Manager controls and coordinates these models to conduct all communication with the user. Performance with DCM as the interface to a simulated Flexible Manufacturing System (FMS) control task was established to learn about the potential benefits of the concept. An initial comparison showed no improvement over a keyboard and monitor interface, but provided performance data which exposed the differences in information needed for decision making using auditory and visual communication. This knowledge and related performance data were used to redesign features of the DCM. 
The redesigned DCM significantly improved all aspects of system performance compared to the keyboard and monitor interface. The FMS performance measures and performance on a secondary task improved, user communication behavior was changed favorably, and users preferred the advanced features of DCM. These types of benefits can potentially accrue for a variety of tasks where multi-media communication with computer-based intelligent assistants is managed with DCM.
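The when/how/what decision structure described above lends itself to simple rule-based sketches. The rules below are hypothetical stand-ins, not those of the DCM system; they only illustrate how a presentation-modality decision might combine task state with a user-model preference:

```python
def choose_presentation(urgency, operator_visual_load, user_preference):
    """Decide HOW to present a message: 'auditory' or 'visual'.
    Hypothetical rules: urgent messages and a visually loaded operator
    favour the auditory channel; otherwise defer to the user model."""
    if urgency == "high" or operator_visual_load == "high":
        return "auditory"
    return user_preference

def should_interrupt_now(urgency, task_phase):
    """Decide WHEN to communicate: interrupt immediately or queue.
    Hypothetical rule: only high-urgency messages interrupt a
    critical task phase; everything else waits."""
    return urgency == "high" or task_phase != "critical"
```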
APA, Harvard, Vancouver, ISO, and other styles
24

Prabhala, Sasanka V. "Designing Computer Agents with Personality to Improve Human-Machine Collaboration in Complex Systems." Wright State University / OhioLINK, 2007. http://rave.ohiolink.edu/etdc/view?acc_num=wright1173299872.

Full text
APA, Harvard, Vancouver, ISO, and other styles
25

Gapsevicius, Mindaugas. "An artistic perspective on distributed computer networks : creativity in human-machine systems." Thesis, Goldsmiths College (University of London), 2016. http://research.gold.ac.uk/18258/.

Full text
Abstract:
This thesis is written from an artistic perspective as a reflection on currently significant discussions in media theory, with a focus on the impact of technology on society. While mapping boundaries of contemporary art, post-digital art is considered the best for describing current discourses in media theory in the context of this research. Bringing into the discussion artworks by Martin Howse & Jonathan Kemp (2001-2008), Maurizio Bolognini (Bolognini 1988-present), and myself (mi_ga 2006), among many others, this research defines post-digital art, which in turn defines a complexity of interactions between elements of different natures, such as the living and non-living, human and machine, art and science. Within the analysis of P2P networks, I highlight Milgram's (1967) idea of six degrees of separation, which, at least from a speculative point of view, is interesting for the implementation of human-machine concepts in future technological developments. From this perspective, I argue that computer networks could, in the future, have more potential for merging with society if developed similarly to the computer routing scheme implemented in the Freenet distributed information storage and retrieval system. The thesis then describes my own artwork, 0.30402944246776265, including two newly developed plugins for the Freenet storage system; the first plugin is constructed to fulfill the idea of interacting elements of different natures (in this case, the WWW and Freenet), while the other plugin attempts to visualize data flow within the Freenet storage and retrieval system. Altogether, this paper proposes that a reconsideration of distributed and self-organized information systems, through an artistic and philosophical lens, can open up a space for the rethinking of the current integration of society and technology.
APA, Harvard, Vancouver, ISO, and other styles
26

Hale, Rodney D. "Gesture recognition as a means of human-machine interface." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1998. http://www.collectionscanada.ca/obj/s4/f2/dsk2/tape17/PQDD_0014/MQ36129.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Wagy, Mark David. "Enabling Machine Science through Distributed Human Computing." ScholarWorks @ UVM, 2016. http://scholarworks.uvm.edu/graddis/618.

Full text
Abstract:
Distributed human computing techniques have been shown to be effective ways of accessing the problem-solving capabilities of a large group of anonymous individuals over the World Wide Web. They have been successfully applied to such diverse domains as computer security, biology and astronomy. The success of distributed human computing in various domains suggests that it can be utilized for complex collaborative problem solving. Thus it could be used for "machine science": utilizing machines to facilitate the vetting of disparate human hypotheses for solving scientific and engineering problems. In this thesis, we show that machine science is possible through distributed human computing methods for some tasks. By enabling anonymous individuals to collaborate in a way that parallels the scientific method -- suggesting hypotheses, testing and then communicating them for vetting by other participants -- we demonstrate that a crowd can together define robot control strategies, design robot morphologies capable of fast-forward locomotion and contribute features to machine learning models for residential electric energy usage. We also introduce a new methodology for empowering a fully automated robot design system by seeding it with intuitions distilled from the crowd. Our findings suggest that increasingly large, diverse and complex collaborations that combine people and machines in the right way may enable problem solving in a wide range of fields.
APA, Harvard, Vancouver, ISO, and other styles
28

Bushman, James B. "Identification of an operator's associate model for cooperative supervisory control situations." Diss., Georgia Institute of Technology, 1989. http://hdl.handle.net/1853/30992.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Wu, Jaichun. "The development and implementation of an intelligent, semantic machine control system with specific reference to human-machine interface design." Thesis, Cape Peninsula University of Technology, 2005. http://hdl.handle.net/20.500.11838/2292.

Full text
Abstract:
Thesis (MTech (Information Technology))--Cape Peninsula University of Technology, 2005.
This thesis explores the design and implementation of an intelligent semantic machine control system with specific reference to human-machine interface design. The term "intelligent" refers to machines that can execute some level of decision taking in context. The term "semantic" refers to a structured language that allows user and machine to communicate. This study will explore all the key concepts about an intelligent semantic machine control system with human-machine interface. The key concepts to be investigated will include Artificial Intelligence, Intelligent Control, Semantics, Intelligent Machine Architecture, Human-Machine Interaction, Information systems and Graphical User Interface. The primary purpose of this study is to develop a methodology for designing a machine control system and its related human-machine interface.
APA, Harvard, Vancouver, ISO, and other styles
30

Cheng, Kelvin. "Direct interaction with large displays through monocular computer vision." Connect to full text, 2008. http://ses.library.usyd.edu.au/handle/2123/5331.

Full text
Abstract:
Thesis (Ph. D.)--University of Sydney, 2009.
Title from title screen (viewed November 5, 2009). Submitted in fulfilment of the requirements for the degree of Doctor of Philosophy to the School of Information Technologies in the Faculty of Engineering & Information Technologies. Degree awarded 2009; thesis submitted 2008. Includes bibliographical references. Also available in print form.
APA, Harvard, Vancouver, ISO, and other styles
31

Zander, Thorsten Oliver [Verfasser], and Matthias [Akademischer Betreuer] Roetting. "Utilizing Brain-Computer Interfaces for Human-Machine Systems / Thorsten Oliver Zander. Betreuer: Matthias Roetting." Berlin : Universitätsbibliothek der Technischen Universität Berlin, 2012. http://d-nb.info/1023762099/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Russell, C. Ray. "Effects of withholding information about implementation details on the design of a human-computer interface." Diss., Georgia Institute of Technology, 1989. http://hdl.handle.net/1853/9253.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Pawlowski, Thomas J. III. "Design of operator interfaces for supervisory control and to facilitate intent inferencing by a computer-based operator's associate." Diss., Georgia Institute of Technology, 1990. http://hdl.handle.net/1853/24593.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Sedighian, Kamran. "A user interface builder/manager for knowledge craft /." Thesis, McGill University, 1987. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=64008.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Saisi, Donna Lynn. "The use of model-based window display interfaces in real time supervisory control systems." Thesis, Georgia Institute of Technology, 1986. http://hdl.handle.net/1853/25203.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Bass, Ellen J. "Human-automated judgment learning : a research paradigm based on interpersonal learning to investigate human interaction with automated judgments of hazards." Diss., Georgia Institute of Technology, 2002. http://hdl.handle.net/1853/25498.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Anderson, Corin R. "A machine learning approach to Web personalization /." Thesis, Connect to this title online; UW restricted, 2002. http://hdl.handle.net/1773/6875.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Lee, Wei. "Bibliographic system for microcomputer environments." Thesis, Kansas State University, 1986. http://hdl.handle.net/2097/9931.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Haritos, Tom. "A Study of Human-Machine Interface (HMI) Learnability for Unmanned Aircraft Systems Command and Control." NSUWorks, 2017. http://nsuworks.nova.edu/gscis_etd/1018.

Full text
Abstract:
The operation of sophisticated unmanned aircraft systems (UAS) involves complex interactions between human and machine. Unlike other areas of aviation where technological advancement has flourished to accommodate the modernization of the National Airspace System (NAS), the scientific paradigm of UAS and UAS user interface design has received little research attention and minimal effort has been made to aggregate accurate data to assess the effectiveness of current UAS human-machine interface (HMI) representations for command and control. UAS HMI usability is a primary human factors concern as the Federal Aviation Administration (FAA) moves forward with the full-scale integration of UAS in the NAS by 2025. This study examined system learnability of an industry standard UAS HMI as minimal usability data exists to support the state-of-the art for new and innovative command and control user interface designs. This study collected data as it pertained to the three classes of objective usability measures as prescribed by the ISO 9241-11. The three classes included: (1) effectiveness, (2) efficiency, and (3) satisfaction. Data collected for the dependent variables incorporated methods of video and audio recordings, a time stamped simulator data log, and the SUS survey instrument on forty-five participants with none to varying levels of conventional flight experience (i.e., private pilot and commercial pilot). The results of the study suggested that those individuals with a high level of conventional flight experience (i.e., commercial pilot certificate) performed most effectively when compared to participants with low pilot or no pilot experience. The one-way analysis of variance (ANOVA) computations for completion rates revealed statistical significance for trial three between subjects [F (2, 42) = 3.98, p = 0.02]. 
Post hoc t-test using a Bonferroni correction revealed statistical significance in completion rates [t (28) = -2.92, p < 0.01] between the low pilot experience group (M = 40%, SD = .50) and high experience group (M = 86%, SD = .39). An evaluation of error rates in parallel with the completion rates for trial three also indicated that the high pilot experience group committed fewer errors (M = 2.44, SD = 3.9) during their third iteration when compared to the low pilot experience group (M = 9.53, SD = 12.63) for the same trial iteration. Overall, the high pilot experience group (M = 86%, SD = .39) performed better than both the no pilot experience group (M = 66%, SD = .48) and low pilot experience group (M = 40%, SD = .50) with regard to task success and the number of errors committed. Data collected using the SUS measured an overall composite SUS score (M = 67.3, SD = 21.0) for the representative HMI. The subscale scores for usability and learnability were 69.0 and 60.8, respectively. This study addressed a critical need for future research in the domain of UAS user interface designs and operator requirements as the industry is experiencing revolutionary growth at a very rapid rate. The deficiency in legislation to guide the scientific paradigm of UAS has generated significant discord within the industry, leaving many facets associated with the teleoperation of these systems in dire need of research attention. 
Recommendations for future work included a need to: (1) establish comprehensive guidelines and standards for airworthiness certification for the design and development of UAS and UAS HMI for command and control, (2) establish comprehensive guidelines to classify the complexity associated with UAS systems design, (3) investigate mechanisms to develop comprehensive guidelines and regulations to guide UAS operator training, (4) develop methods to optimize UAS interface design through automation integration and adaptive display technologies, and (5) adopt methods and metrics to evaluate human-machine interface related to UAS applications for system usability and system learnability.
APA, Harvard, Vancouver, ISO, and other styles
40

Moussa, Ahmed S. "On learning and visualizing lexicographic preference trees." UNF Digital Commons, 2019. https://digitalcommons.unf.edu/etd/882.

Full text
Abstract:
Preferences are very important in research fields such as decision making, recommender systems and marketing. The focus of this thesis is on preferences over combinatorial domains, which are domains of objects configured with categorical attributes. For example, the domain of cars includes car objects that are constructed with values for attributes such as ‘make’, ‘year’, ‘model’, ‘color’, ‘body type’ and ‘transmission’. Different values can instantiate an attribute. For instance, values for attribute ‘make’ can be Honda, Toyota, Tesla or BMW, and attribute ‘transmission’ can have automatic or manual. To this end, this thesis studies problems on preference visualization and learning for lexicographic preference trees, graphical preference models that often are compact over complex domains of objects built of categorical attributes. Visualizing preferences is essential to provide users with insights into the process of decision making, while learning preferences from data is practically important, as it is ineffective to elicit preference models directly from users. The results obtained from this thesis are in two parts: 1) for preference visualization, a web-based system is created that visualizes various types of lexicographic preference tree models learned by a greedy learning algorithm; 2) for preference learning, a genetic algorithm is designed and implemented, called GA, that learns a restricted type of lexicographic preference tree, called unconditional importance and unconditional preference tree, or UIUP trees for short. Experiments show that GA achieves higher accuracy compared to the greedy algorithm at the cost of more computational time. Moreover, a Dynamic Programming Algorithm (DPA) was devised and implemented that computes an optimal UIUP tree model in the sense that it satisfies as many examples as possible in the dataset. 
This novel exact algorithm (DPA) was used to evaluate the quality of models computed by GA, and it was found to reduce the factorial time complexity of the brute-force algorithm to exponential. The major contribution to the field of machine learning and data mining in this thesis is the novel learning algorithm (DPA), an exact algorithm. DPA learns and finds the best UIUP tree model in the huge search space, the model that accurately classifies the largest number of examples in the training dataset; such a model is referred to as the optimal model in this thesis. Finally, using datasets produced from randomly generated UIUP trees, this thesis presents experimental results on the performances (e.g., accuracy and computational time) of GA compared to the existing greedy algorithm and DPA.
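A UIUP tree, as described above, amounts to a fixed importance order over attributes together with a fixed preference order over each attribute's values. The comparison below is an illustrative sketch, not the thesis's GA or DPA; the attribute names and value orders are invented examples in the spirit of the car domain mentioned earlier:

```python
def lex_prefer(obj_a, obj_b, importance, value_order):
    """Compare two objects under a UIUP-style lexicographic model.
    importance: attributes listed from most to least important.
    value_order[attr]: that attribute's values, most preferred first.
    Returns the preferred object, or None if the two objects tie."""
    for attr in importance:
        ranks = value_order[attr]
        ra, rb = ranks.index(obj_a[attr]), ranks.index(obj_b[attr])
        if ra != rb:
            return obj_a if ra < rb else obj_b
    return None

# Hypothetical model: 'make' is unconditionally more important
# than 'transmission', with fixed value preferences for each.
importance = ["make", "transmission"]
value_order = {
    "make": ["Tesla", "Honda", "Toyota", "BMW"],
    "transmission": ["automatic", "manual"],
}
car_a = {"make": "Tesla", "transmission": "manual"}
car_b = {"make": "Honda", "transmission": "automatic"}
```

Because 'make' dominates, `car_a` is preferred despite its less preferred transmission; learning algorithms such as GA or DPA search for the importance and value orders that agree with the most training examples.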
APA, Harvard, Vancouver, ISO, and other styles
41

Randolph, Adriane B. "Individual-technology fit matching individual characteristics and features of biometric interface technologies with performance /." unrestricted, 2007. http://etd.gsu.edu/theses/available/etd-05182007-113229/.

Full text
Abstract:
Thesis (Ph. D.)--Georgia State University, 2007.
Title from file title page. Melody Moore, committee chair; Detmar Straub, Veda Storey, Bruce Walker, committee members. Electronic text (166 p. : ill. (some col.)) : digital, PDF file. Description based on contents viewed Nov. 5, 2007. Includes bibliographical references (p. 160-164).
APA, Harvard, Vancouver, ISO, and other styles
42

Riley, Jennifer M. "The utility of measures of attention and situation awareness for quantifying telepresence." Diss., Mississippi State : Mississippi State University, 2001. http://library.msstate.edu/etd/show.asp?etd=etd-07112001-104450.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Kivila, Arto. "Touchscreen interfaces for machine control and education." Thesis, Georgia Institute of Technology, 2013. http://hdl.handle.net/1853/49051.

Full text
Abstract:
The touchscreen user interface is an inherently dynamic device that is becoming ubiquitous. The touchscreen’s ability to adapt to the user’s needs makes it superior to more traditional haptic devices in many ways. Most touchscreen devices come with a very large array of sensors already included in the package. This gives engineers the means to develop human-machine interfaces that are very intuitive to use. This thesis presents research that was done to develop the best touchscreen interface for driving an industrial crane for novice users. To generalize the research, testing also determined how touchscreen interfaces compare to the traditional joystick in highly dynamic tracking situations using a manual tracking experiment. Three separate operator studies were conducted to investigate touchscreen control of cranes. The data indicates that the touchscreen interfaces are superior to the traditional push-button control pendant and that the layout and function of the graphical user interface on the touchscreen plays a role in the performance of the human operators. The touchscreen interface also adds great promise for allowing users to navigate through interactive textbooks. Therefore, this thesis also presents developments directed at creating the next generation of engineering textbooks. Nine widgets were developed for an interactive mechanical design textbook that is meant to be delivered via tablet computers. Those widgets help students improve their technical writing abilities, introduce them to tools they can use in product development, as well as give them knowledge of how some dynamical systems behave. In addition, two touchscreen applications were developed to aid the judging of a mechanical design competition.
APA, Harvard, Vancouver, ISO, and other styles
44

Gavigan, Kevin Charles. "The design, development and application of a combined connectionist expert system and 'Pocket' Boltzmann machine approach to the dynamic customer assignment and vehicle routing problem." Thesis, University of Warwick, 1994. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.245939.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Bositty, Aishwarya. "Development of Real-Time Systems for Supporting Collaborations in Distributed HumanAnd Machine Teams." Wright State University / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=wright1610538704384575.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Huot, Stéphane. "'Designeering Interaction': un chaînon manquant dans l'évolution de l'Interaction Homme-Machine." Habilitation à diriger des recherches, Université Paris Sud - Paris XI, 2013. http://tel.archives-ouvertes.fr/tel-00823763.

Full text
Abstract:
Human Computer Interaction (HCI) is a fascinating research field because of its multidisciplinary nature, combining such diverse research domains as design, human factors and computer science as well as a variety of methods including empirical and theoretical research. HCI is also fascinating because it is still young and so much is left to discover, invent and understand. The evolution of computers, and more generally of interactive systems, is not frozen, and so are the ways in which we interact with them. From desktop computers, to mobile devices, to large displays or multi-surface environments, technology extends the possibles, needs initiate technologies, and HCI is thus a constantly moving field. The variety of challenges to address, as well as their underlying combinations of sub-domains (design, computer science, experimental psychology, sociology, etc.), imply that we should also adapt, question and sometimes reinvent our research methods and processes, pushing the limits of HCI research further. Since I entered the field 12 years ago, my research activities have essentially revolved around two main themes: the design, implementation and evaluation of novel interaction techniques (on desktop computers, mobile devices and multi-surface environments) and the engineering of interactive systems (models and toolkits for advanced input and interaction). Over time, I realized that I had entered a loop between these two concerns, going back and forth between designing and evaluating new interaction techniques, and defining and implementing new software architectures or toolkits. I observed that they strongly influence each other: The design of interaction techniques informs on the capabilities and limitations of the platform and the software being used, and new architectures and software tools open the way to new designs and possibilities. 
Through the discussion of several of my research contributions in these fields, this document investigates how interaction design challenges technology, and how technology - or engineering of interactive systems - could support and unleash interaction design. These observations will lead to a first definition of the "Designeering Interaction" conceptual framework that encompasses the specificities of these two fields and builds a bridge between them, paving the way to new research perspectives. In particular, I will discuss which types of tools, from the system level to the end user, should be designed, implemented and studied in order to better support interaction design along the evolution of interactive systems. At a more general level, Designeering Interaction is also a contribution that, I hope, will help better "understand how HCI works with technology".
APA, Harvard, Vancouver, ISO, and other styles
47

Ekvall, Staffan. "Robot Task Learning from Human Demonstration." Doctoral thesis, Stockholm : School of Computer Science and Communication, Kungliga Tekniska högskolan (KTH), 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-4279.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Sun, Chao. "Human behavioural skills modelling and recognition." Electrical, Computer and Telecommunications Engineering - Faculty of Informatics, 2007. http://ro.uow.edu.au/theses/677.

Full text
Abstract:
Human behaviour can be considered the ensemble of various activities performed by an individual towards accomplishing a particular task. Many factors influence human behaviour, including culture, attitudes, emotions, values, ethics, and so on. In this work, the concept of 'human behaviour' is studied in the context of human psycho-motor behaviour. This work is primarily concerned with the development of a system to learn, distinguish and recognise various pre-defined human behavioural tasks. Subject to the limitations of the hardware, the challenging initial goal is to model various human behaviours with only one integrated inertial sensor. The motions are captured with the sensor and recorded as streams of multi-dimensional sensory data, which are subsequently analysed for characteristic patterns. Since only one point on the human body can be measured with that sensor at a time, there are insufficient motion data to enable the generation of new synthetic behaviours (which might be possible with multiple sensors), and a comprehensive model of complex behaviours cannot be developed under this condition. Thus, this work has focussed on building a system to model the behaviour of a specific part of the human body, and in turn to recognise and compare these behaviours. The experimental rig consists of an inertial sensor mounted on the subject, providing kinematics data in real-time. Through this sensor, the behavioural motions are transformed into continuous streams of signals including Euler angles and accelerations in three spatial dimensions. Unsupervised machine learning algorithms and other techniques are implemented in this work to analyse and build models of human behaviours. 
An intrinsic classification algorithm based on MML (Minimum Message Length) encoding and a popular unsupervised fuzzy clustering algorithm, FCM (Fuzzy c-Means), are each used to segment the complex data streams, formulating inherent models of the dynamic modes they represent. Subsequent representation and analysis using FSMs (Finite State Machines), DTW (Dynamic Time Warping), Kullback-Leibler divergence and Smith-Waterman sequence alignment have proved effective in distinguishing behavioural characteristics that persist across a variety of tasks and multiple candidates. The hypothesis pursued in the thesis has been validated using the two unsupervised machine learning algorithms, MML and FCM. Each of these methods is capable of producing a range of primitives from the motion training data. However, the regular expression and Dynamic Time Warping analyses indicate that MML identifies behaviours more accurately than the FCM algorithm.
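As an illustration only (not code from the thesis), the DTW comparison mentioned in the abstract can be sketched with the classic dynamic-programming recurrence, here for 1-D motion traces:

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic Time Warping distance between two 1-D sequences."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return cost[n, m]

# Two similar motion traces, one time-shifted: DTW tolerates the warp.
s1 = [0.0, 1.0, 2.0, 1.0, 0.0]
s2 = [0.0, 0.0, 1.0, 2.0, 1.0, 0.0]
print(dtw_distance(s1, s2))  # 0.0 — the sequences align perfectly after warping
```

Because the warping path can stretch or compress time, two executions of the same behaviour at different speeds yield a small distance, which is what makes DTW useful for comparing behavioural primitives.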
APA, Harvard, Vancouver, ISO, and other styles
49

Wang, Hanxiao. "Minimising human annotation for scalable person re-identification." Thesis, Queen Mary, University of London, 2017. http://qmro.qmul.ac.uk/xmlui/handle/123456789/30884.

Full text
Abstract:
Among the diverse tasks performed by an intelligent distributed multi-camera surveillance system, person re-identification (re-id) is one of the most essential. Re-id refers to associating an individual or a group of people across non-overlapping cameras at different times and locations, and forms the foundation of a variety of applications ranging from security and forensic search to quotidian retail and health care. Though it has attracted rapidly increasing academic interest over the past decade, launching a practical re-id system in real-world environments remains a non-trivial and unsolved problem, due to the ambiguous and noisy nature of surveillance data and the potentially dramatic visual appearance changes caused by uncontrolled variations in human poses and divergent viewing conditions across distributed camera views. To mitigate such visual ambiguity and appearance variations, most existing re-id approaches rely on constructing fully supervised machine learning models with extensively labelled training datasets, which is unscalable for practical applications in the real world. In particular, human annotators must exhaustively search a vast quantity of offline-collected data and manually label cross-view matched images of a large population for every possible camera pair. Nonetheless, even after this prohibitively expensive human effort has been spent, a trained re-id model is often not easily generalisable or transferable, due to the elastic and dynamic operating conditions of a surveillance system. With such motivations, this thesis proposes several scalable re-id approaches with significantly reduced human supervision, readily applicable to practical settings. More specifically, this thesis has developed and investigated four new approaches for reducing human labelling effort in real-world re-id, as follows: Chapter 3 The first approach is affinity mining from unlabelled data. 
Different from most existing supervised approaches, this work aims to model the discriminative information for re-id without exploiting human annotations, but from the vast amount of unlabelled person image data, and is thus applicable to both semi-supervised and unsupervised re-id. This is non-trivial, since human-annotated identity matching correspondence is often the key to discriminative re-id modelling. In this chapter, an alternative strategy is explored by specifically mining two types of affinity relationships among unlabelled data: (1) inter-view data affinity and (2) intra-view data affinity. In particular, with such affinity information encoded as constraints, a Regularised Kernel Subspace Learning model is developed to explicitly reduce inter-view appearance variations and meanwhile enhance intra-view appearance disparity for more discriminative re-id matching. Consequently, annotation costs are greatly reduced, and the scalable re-id model can readily leverage the plentiful unlabelled data that is inexpensive to collect. Chapter 4 The second approach is saliency discovery from unlabelled data. This chapter continues to investigate what can be learned from unlabelled images without human-annotated identity labels. Beyond the affinity mining proposed in Chapter 3, a different solution is proposed: discovering localised visual saliency in person appearances. Intuitively, salient and atypical appearance features can uniquely and representatively describe and identify an individual, whilst often also being robust to view changes and detection variances. Motivated by this, an unsupervised Generative Topic Saliency model is proposed to jointly perform foreground extraction, saliency detection, and discriminative re-id matching. This approach completely avoids the exhaustive annotation effort for model training, and thus scales better to real-world applications. 
Moreover, its automatically discovered re-id saliency representations are shown to be semantically interpretable, suitable for generating useful visual analysis for deployable user-oriented software tools. Chapter 5 The third approach is incremental learning from actively labelled data. Since learning from unlabelled data alone yields less discriminative matching results, and in some cases only limited human labelling resources are available for re-id modelling, this chapter investigates how to maximise a model's discriminative capability with minimal labelling effort. The challenges are to (1) automatically select the most representative data from a vast number of noisy/ambiguous unlabelled data in order to maximise model discrimination capacity; and (2) incrementally update the model parameters to accelerate machine responses and reduce human waiting time. To that end, this thesis proposes a regression-based re-id model, characterised by its very fast and efficient incremental model updates. Furthermore, an effective active data sampling algorithm with three novel joint exploration-exploitation criteria is designed, to make automatic data selection feasible with notably reduced human labelling costs. Such an approach ensures that annotation effort is spent only on the few data samples most critical to the model's generalisation capability, instead of being exhausted by blindly labelling many noisy and redundant training samples. Chapter 6 The last technical area of this thesis is human-in-the-loop learning from relevance feedback. Whilst the former chapters mainly investigate techniques to reduce human supervision for model training, this chapter motivates a novel research area to further minimise human effort spent in the re-id deployment stage. 
In real-world applications where the camera network and potential gallery size increase dramatically, even state-of-the-art re-id models yield much inferior performance, and human involvement at the deployment stage is inevitable. To minimise such human effort and maximise re-id performance, this thesis explores an alternative approach to re-id by formulating a hybrid human-computer learning paradigm with humans in the model matching loop. Specifically, a Human Verification Incremental Learning model is formulated which does not require any pre-labelled training data and is therefore scalable to new camera pairs. Moreover, the proposed model learns cumulatively from human feedback to provide an instant improvement to the re-id ranking of each probe on-the-fly, and is thus scalable to large gallery sizes. It has been demonstrated that the proposed re-id model achieves significantly superior re-id results whilst consuming much less human supervision effort. To facilitate a holistic understanding of this thesis, the main studies are summarised and framed in a graphical abstract.
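As a purely illustrative sketch (not code from the thesis), the active sampling idea of Chapter 5 — spending annotation effort only on ambiguous samples — is often implemented as margin-based uncertainty sampling, where unlabelled probes whose two best gallery match scores are closest are queried first:

```python
import numpy as np

def select_most_ambiguous(scores, k):
    """Pick the k unlabelled samples whose two highest match scores are
    closest (smallest margin = most ambiguous = most informative to label).

    scores: (n_samples, n_candidates) array of match scores.
    """
    top2 = np.sort(scores, axis=1)[:, -2:]   # two highest scores per sample
    margin = top2[:, 1] - top2[:, 0]         # small margin -> ambiguous match
    return np.argsort(margin)[:k]            # indices ordered by ambiguity

scores = np.array([
    [0.9, 0.1, 0.0],   # confident match -> large margin, skip labelling
    [0.5, 0.45, 0.05], # ambiguous -> worth a human label
    [0.6, 0.5, 0.1],   # ambiguous
])
print(select_most_ambiguous(scores, 2))  # [1 2]
```

The exploration-exploitation criteria in the thesis are more elaborate than this single margin heuristic, but the principle is the same: the model, not the human, decides which few samples deserve annotation.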
APA, Harvard, Vancouver, ISO, and other styles
50

Damacharla, Praveen Lakshmi Venkata Naga. "Simulation Studies and Benchmarking of Synthetic Voice Assistant Based Human-Machine Teams (HMT)." University of Toledo / OhioLINK, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=toledo1535119916261581.

Full text
APA, Harvard, Vancouver, ISO, and other styles