To see the other types of publications on this topic, follow the link: Human computer interfaces.

Dissertations / Theses on the topic 'Human computer interfaces'

Consult the top 50 dissertations / theses for your research on the topic 'Human computer interfaces.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Wong, Shu-Fai. "Motion recognition for human-computer interfaces." Thesis, University of Cambridge, 2007. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.613368.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Lamont, Charles. "Human-computer interfaces to reactive graphical images." Thesis, Teesside University, 1993. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.358387.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Costanza, Enrico. "Subtle, intimate interfaces for mobile human computer interaction." Thesis, Massachusetts Institute of Technology, 2006. http://hdl.handle.net/1721.1/37387.

Full text
Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2006.
Includes bibliographical references (p. 113-122).
The mobile phone is always carried with the user and is always active: it is a very personal device. It fosters and satisfies a need to be constantly connected to one's significant other, friends or business partners. At the same time, mobile devices are often used in public, where one is surrounded by others not involved in the interaction. This private interaction in public is often a cause of unnecessary disruption and distraction, both for the bystanders and even for the user. Nevertheless, mobile devices do fulfill an important function, informing of important events and urgent communications, so turning them off is often not practical nor possible. This thesis introduces Intimate Interfaces: discreet interfaces that allow subtle private interaction with mobile devices in order to minimize disruption in public and gain social acceptance. Intimate Interfaces are inconspicuous to those around the users, while still allowing them to communicate. The concept is demonstrated through the design, implementation and evaluation of two novel devices: * Intimate Communication Armband - a wearable device, embedded in an armband, that detects motionless gestures through electromyographic (EMG) sensing for subtle input and provides tactile output;
* Notifying Glasses - a wearable notification display embedded in eyeglasses; it delivers subtle cues to the peripheral field of view of the wearer, while being invisible to others. The cues can convey a few bits of information and can be designed to meet specific levels of visibility and disruption. Experimental results show that both interfaces can be reliably used for subtle input and output. Therefore, Intimate Interfaces can be profitably used to improve mobile human-computer interaction.
by Enrico Costanza.
S.M.
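As a concrete illustration of the kind of EMG-based subtle input this abstract describes, here is a minimal Python sketch (not taken from the thesis; the sampling rate, window length and threshold are illustrative assumptions) that flags windows of a forearm EMG signal whose RMS energy exceeds a threshold, a crude stand-in for detecting an isometric "motionless gesture".

```python
import numpy as np

def detect_emg_activations(emg, fs=1000, win_ms=200, threshold=0.15):
    """Flag fixed-length windows whose RMS exceeds a threshold, as a crude
    proxy for the isometric (motionless) gestures sensed by an EMG armband.
    `emg` is a 1-D array of samples; `threshold` is in signal units."""
    win = int(fs * win_ms / 1000)
    n_windows = len(emg) // win
    rms = np.sqrt(np.mean(
        emg[:n_windows * win].reshape(n_windows, win) ** 2, axis=1))
    return rms > threshold  # one boolean per window

# Synthetic example: quiet baseline, a brief contraction, then baseline again.
rng = np.random.default_rng(0)
signal = np.concatenate([0.05 * rng.standard_normal(2000),
                         0.50 * rng.standard_normal(500),
                         0.05 * rng.standard_normal(1500)])
print(detect_emg_activations(signal))
```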
APA, Harvard, Vancouver, ISO, and other styles
4

Johnson, Deborah H. "The structure and development of human-computer interfaces." Diss., Virginia Polytechnic Institute and State University, 1985. http://hdl.handle.net/10919/54305.

Full text
Abstract:
The Dialogue Management System (DMS), the setting for this research, is a system for designing, implementing, testing, and modifying interactive human-computer systems. As in the early stages of software engineering development, current approaches to human-computer interface design are ad hoc, unstructured, and incomplete. The primary goal of this research has been to develop a structural, descriptive, language-oriented model of human-computer interaction, based on a theory of human-computer interaction. This model is a design and implementation model, serving as the framework for a dialogue engineering methodology for human-computer interface design and interactive tools for human-computer interface implementation. This research has five general task areas, each building on the previous task. The theory of human-computer interaction is a characterization of the inherent properties of human-computer interaction. Based on observations of humans communicating with computers using a variety of interface types, it addresses the fundamental question of what happens when humans interact with computers. Formalization of the theory has led to a multi-dimensional dialogue transaction model, which encompasses the set of dialogue components and relationships among them. The model is based on three traditional levels of language: semantic, syntactic, and lexical. Its dimensions allow tailoring of an interface to specific states of the dialogue, based on the sequence of events that might occur during human-computer interaction. This model has two major manifestations: a dialogue engineering methodology and a set of interactive dialogue implementation tools. The dialogue engineering methodology consists of a set of procedures and a specification notation for the design of human-computer interfaces. The interactive dialogue implementation tools of AIDE provide automated support for implementing human-computer interfaces. The AIDE interface is based on a "what you see is what you get" concept, allowing the dialogue author to implement interfaces without writing programs. Finally, an evaluation of this work has been conducted to determine its efficacy and usefulness in developing human-computer interfaces. A group of subject dialogue authors using AIDE created and modified a prespecified interface in a mean time of just over one hour, while a group of subject application programmers averaged nearly four hours to program the identical interface. Theories, models, methodologies, and tools such as those addressed by this research promise to contribute greatly to the ease of production and evaluation of human-computer interfaces.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
5

Madritsch, Franz. "Optical beacon tracking for human computer interfaces : Dissertation /." Wien ; München : Oldenbourg, 1997. http://www.gbv.de/dms/goettingen/224593714.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

King, William Joseph. "Toward the human-computer dyad /." Thesis, Connect to this title online; UW restricted, 2002. http://hdl.handle.net/1773/10325.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Witt, Hendrik. "Human computer interfaces for wearable computers: a systematic approach to development and evaluation." 2007. http://deposit.d-nb.de/cgi-bin/dokserv?idn=987607065.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Condon, Chris. "A semiotic approach to the use of metaphor in human-computer interfaces." Thesis, Brunel University, 1999. http://bura.brunel.ac.uk/handle/2438/4800.

Full text
Abstract:
Although metaphors are common in computing, particularly in human-computer interfaces, opinion is divided on their usefulness to users and little evidence is available to help the designer in choosing or implementing them. Effective use of metaphors depends on understanding their role in the computer interface, which in turn means building a model of the metaphor process. This thesis examines some of the approaches which might be taken in constructing such a model before choosing one and testing its applicability to interface design. Earlier research into interface metaphors used experimental psychology techniques which proved useful in showing the benefits or drawbacks of specific metaphors, but did not give a general model of the metaphor process. A cognitive approach based on mental models has proved more successful in offering an overall model of the process, although this thesis questions whether the researchers tested it adequately. Other approaches which have examined the metaphor process (though not in the context of human-computer interaction) have come from linguistic fields, most notably semiotics, which extends linguistics to non-verbal communication and thus could cover graphical user interfaces (GUIs). The main work described in this thesis was the construction of a semiotic model of human-computer interaction. The basic principle of this is that even the simplest element of the user interface will signify many simultaneous meanings to the user. Before building the model, a set of assertions and questions was developed to check the validity of the principles on which the model was based. Each of these was then tested by a technique appropriate to the type of issue raised. Rhetorical analysis was used to establish that metaphor is commonplace in command-line languages, in addition to its more obvious use in GUIs. A simple semiotic analysis, or deconstruction, of the Macintosh user interface was then used to establish the validity of viewing user interfaces as semiotic systems. Finally, an experiment was carried out to test a mental model approach proposed by previous researchers. By extending their original experiment to more realistically complex interfaces and tasks and using a more typical user population, it was shown that users do not always develop mental models of the type proposed in the original research. The experiment also provided evidence to support the existence of multiple layers of signification. Based on the results of the preliminary studies, a simple means of testing the semiotic model's relevance to interface design was developed, using an interview technique. The proposed interview technique was then used to question two groups of users about a simple interface element. Two independent researchers then carried out a content analysis of the responses. The mean number of significations in each interview, as categorised by the researchers, was 15. The levels of signification were rapidly revealed, with the mean time for each interview being under two minutes, providing effective evidence that interfaces signify many meanings to users, a substantial number of which are easily retrievable. It is proposed that the interview technique could provide a practical and valuable tool for systems analysis and interface designers. Finally, areas for further research are proposed, in particular to ascertain how the model and the interview technique could be integrated with other design methods.
APA, Harvard, Vancouver, ISO, and other styles
9

Ji, Ze. "Development of tangible acoustic interfaces for human computer interaction." Thesis, Cardiff University, 2007. http://orca.cf.ac.uk/54576/.

Full text
Abstract:
Tangible interfaces, such as keyboards, mice, touch pads, and touch screens, are widely used in human computer interaction. A common disadvantage with these devices is the presence of mechanical or electronic devices at the point of interaction with the interface. The aim of this work has been to investigate and develop new tangible interfaces that can be adapted to virtually any surface, by acquiring and studying the acoustic vibrations produced by the interaction of the user's finger on the surface. Various approaches have been investigated in this work, including the popular time difference of arrival (TDOA) method, time-frequency analysis of dispersive velocities, the time reversal method, and continuous object tracking. The received signal due to a tap at a source position can be considered the impulse response function of the wave propagation between the source and the receiver. With the time reversal theory, the signals induced by impacts from one position contain the unique and consistent information that forms its signature. A pattern matching method, named Location Template Matching (LTM), has been developed to identify the signature of the received signals from different individual positions. Various experiments have been performed for different purposes, such as consistency testing, acquisition configuration, and accuracy of recognition. Eventually, this can be used to implement HCI applications on any arbitrary surfaces, including those of 3D objects and inhomogeneous materials. The resolution with the LTM method has been studied by different experiments, investigating factors such as optimal sensor configurations and the limitation of materials. On plates of the same material, the thickness is the essential determinant of resolution. With the knowledge of resolution for one material, a simple but faster search method becomes feasible to reduce the computation. Multiple simultaneous impacts are also recognisable in certain cases. The TDOA method has also been evaluated with two conventional approaches. Taking into account the dispersive properties of the vibration propagation in plates, time-frequency analysis, with continuous wavelet transformation, has been employed for the accurate localising of dispersive signals. In addition, a statistical estimation of maximum likelihood has been developed to improve the accuracy and reliability of acoustic localisation. A method to measure and verify the dispersive velocities has also been introduced. To enable the commonly required "drag & drop" function in the operation of graphical user interface (GUI) software, the tracking of a finger scratching on a surface needs to be implemented. To minimise the tracking error, a priori knowledge of previous measurements of source locations is needed to linearise the state model that enables prediction of the location of the contact point and the direction of movement. An adaptive Kalman filter has been used for this purpose.
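To make the Location Template Matching idea concrete, the sketch below (a simplification written for illustration, not code from the thesis) classifies a new tap by the stored template with the highest normalized cross-correlation; the real system uses carefully positioned sensors and more robust signatures.

```python
import numpy as np

def ncc_peak(a, b):
    """Peak normalized cross-correlation between two waveforms,
    searched over all lags."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return np.max(np.correlate(a, b, mode="full")) / denom if denom else 0.0

def ltm_classify(tap, templates):
    """Location Template Matching sketch: `templates` maps a known tap
    location to a previously recorded reference waveform; the new `tap`
    is assigned to the location whose template correlates best with it."""
    scores = {loc: ncc_peak(tap, ref) for loc, ref in templates.items()}
    return max(scores, key=scores.get), scores
```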
APA, Harvard, Vancouver, ISO, and other styles
10

White, Tom 1971. "Introducing liquid haptics in high bandwidth human computer interfaces." Thesis, Massachusetts Institute of Technology, 1998. http://hdl.handle.net/1721.1/62938.

Full text
APA, Harvard, Vancouver, ISO, and other styles
11

Marchesi, Marco <1977&gt. "Advanced Technologies for Human-Computer Interfaces in Mixed Reality." Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2016. http://amsdottorato.unibo.it/7522/1/marchesi_marco_tesi.pdf.

Full text
Abstract:
As human beings, we trust our five senses, which allow us to experience the world and communicate. Since our birth, the amount of data that we can acquire every day is impressive, and such richness reflects the complexity of humankind in arts, technology, etc. The advent of computers and the consequent progress in Data Science and Artificial Intelligence showed how large amounts of data can contain some sort of “intelligence” themselves. Machines learn and create a superimposed layer of reality. How are data generated by humans and machines related today? To give an answer we will present three projects in the context of “Mixed Reality”, the ideal place where Reality, Virtual Reality and Augmented Reality are increasingly connected as data enhance the digital experiences, making them more “real”. We will start with BRAVO, a tool that exploits brain activity to improve the user’s learning process in real time by means of a Brain-Computer Interface that acquires EEG data. Then we will see AUGMENTED GRAPHICS, a framework for detecting objects in the real world that can be captured easily and inserted in any digital scenario. Based on moment invariants theory, it is particularly suited to mobile devices, as it adopts a lightweight approach to object detection and works without any training set. The third work is GLOVR, a wearable hand controller that uses inertial sensors to offer directional controls and to recognize gestures, particularly suitable for Virtual Reality applications. It features a microphone to record voice sequences that are then translated into tasks by means of a natural language web service. For each project we will summarize the main results and trace some future directions of research and development.
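As an illustration of the moment-invariant, training-free shape matching mentioned for AUGMENTED GRAPHICS, here is a small OpenCV sketch (an assumption on our part, not the framework's actual code) that compares two binary object masks by their Hu moment invariants.

```python
import cv2
import numpy as np

def hu_signature(binary_mask):
    """Seven Hu moment invariants of a binary shape; they are invariant to
    translation, scale and rotation, which is what makes training-free
    shape matching possible."""
    m = cv2.moments(binary_mask, binaryImage=True)
    hu = cv2.HuMoments(m).flatten()
    # Log-scale the values so they have comparable magnitudes, keeping sign.
    return -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)

def shape_distance(mask_a, mask_b):
    """Smaller distance means the two shapes are more alike."""
    return float(np.linalg.norm(hu_signature(mask_a) - hu_signature(mask_b)))
```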
APA, Harvard, Vancouver, ISO, and other styles
12

Marchesi, Marco <1977&gt. "Advanced Technologies for Human-Computer Interfaces in Mixed Reality." Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2016. http://amsdottorato.unibo.it/7522/.

Full text
Abstract:
As human beings, we trust our five senses, which allow us to experience the world and communicate. Since our birth, the amount of data that we can acquire every day is impressive, and such richness reflects the complexity of humankind in arts, technology, etc. The advent of computers and the consequent progress in Data Science and Artificial Intelligence showed how large amounts of data can contain some sort of “intelligence” themselves. Machines learn and create a superimposed layer of reality. How are data generated by humans and machines related today? To give an answer we will present three projects in the context of “Mixed Reality”, the ideal place where Reality, Virtual Reality and Augmented Reality are increasingly connected as data enhance the digital experiences, making them more “real”. We will start with BRAVO, a tool that exploits brain activity to improve the user’s learning process in real time by means of a Brain-Computer Interface that acquires EEG data. Then we will see AUGMENTED GRAPHICS, a framework for detecting objects in the real world that can be captured easily and inserted in any digital scenario. Based on moment invariants theory, it is particularly suited to mobile devices, as it adopts a lightweight approach to object detection and works without any training set. The third work is GLOVR, a wearable hand controller that uses inertial sensors to offer directional controls and to recognize gestures, particularly suitable for Virtual Reality applications. It features a microphone to record voice sequences that are then translated into tasks by means of a natural language web service. For each project we will summarize the main results and trace some future directions of research and development.
APA, Harvard, Vancouver, ISO, and other styles
13

Évain, Andéol. "Optimizing the use of SSVEP-based brain-computer interfaces for human-computer interaction." Thesis, Rennes 1, 2016. http://www.theses.fr/2016REN1S083/document.

Full text
Abstract:
This PhD deals with the conception and evaluation of interactive systems based on Brain-Computer Interfaces (BCI). This type of interface has developed in recent years, first in the domain of disability, in order to provide severely disabled people with means of interaction and communication, and more recently in other fields such as video games. However, most of the research so far has focused on the identification of cerebral patterns carrying useful information and on signal processing for the detection of these patterns. Less attention has been given to usability aspects. This PhD focuses on interactive systems based on Steady-State Visually Evoked Potentials (SSVEP), and aims at considering the interactive system as a whole, using the concepts of Human-Computer Interaction. More precisely, a focus is made on cognitive demand, user frustration, calibration conditions, and hybrid BCIs.
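For readers unfamiliar with SSVEP-based BCIs, the following Python sketch shows the simplest possible target detector: compare the spectral power of an occipital EEG epoch at each candidate flicker frequency (and a couple of harmonics) and pick the strongest. It is a didactic assumption, not the method evaluated in the thesis, which concentrates on the surrounding interaction design.

```python
import numpy as np

def detect_ssvep_target(eeg_epoch, fs, stim_freqs, harmonics=2):
    """Return the flicker frequency (Hz) with the most spectral power in
    the epoch. `eeg_epoch` is a 1-D array from an occipital channel."""
    windowed = eeg_epoch * np.hanning(len(eeg_epoch))
    spectrum = np.abs(np.fft.rfft(windowed)) ** 2
    freqs = np.fft.rfftfreq(len(eeg_epoch), d=1.0 / fs)

    def band_power(f0):
        # Sum the power at the fundamental and its harmonics.
        return sum(spectrum[np.abs(freqs - f0 * h).argmin()]
                   for h in range(1, harmonics + 1))

    powers = [band_power(f) for f in stim_freqs]
    return stim_freqs[int(np.argmax(powers))], powers
```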
APA, Harvard, Vancouver, ISO, and other styles
14

Yang, Grant. "WIMP and Beyond: The Origins, Evolution, and Awaited Future of User Interface Design." Scholarship @ Claremont, 2015. http://scholarship.claremont.edu/cmc_theses/1126.

Full text
Abstract:
The field of computer user interface design is rapidly changing and diversifying as new devices are developed every day. Technology has risen to become an integral part of life for people of all ages around the world. Modern life as we know it depends on computers, and understanding the interfaces through which we communicate with them is critically important in an increasingly digital age. The first part of this paper examines the technological origins and historical background driving the development of graphical user interfaces from their earliest incarnations to today. Hardware advancements and key turning points are presented and discussed. In the second part of this paper, skeuomorphism and flat design, two of the most common design trends today, are analyzed and explained. Finally, the future course of user interface is predicted based on emergent technologies such as the Apple Watch, Google Glass, Microsoft HoloLens, and Microsoft PixelSense. Through understanding the roots and current state of computer user interface design, engineers, designers, and scientists can help us get the most out of our ever-changing world of advanced technology as it becomes further intertwined with our existence.
APA, Harvard, Vancouver, ISO, and other styles
15

Hawthorn, Dan. "Designing Effective Interfaces for Older Users." The University of Waikato, 2006. http://hdl.handle.net/10289/2538.

Full text
Abstract:
The thesis examines the factors that need to be considered in order to undertake successful design of user interfaces for older users. The literature on aging is surveyed for age related changes that are of relevance to interface design. The findings from the literature review are extended and placed in a human context using observational studies of older people and their supporters as these older people attempted to learn about and use computers. These findings are then applied in three case studies of interface design and product development for older users. These case studies are reported and examined in depth. For each case study, results are presented on the acceptance of the final product by older people. These results show that, for each case study, the interfaces used led to products that the older people evaluating them rated as unusually suitable to their needs as older users. The relationship between the case studies and the overall research aims is then examined in a discussion of the research methodology. In the case studies there is an evolving approach used in developing the interface designs. This approach includes intensive contribution by older people to the shaping of the interface design. This approach is analyzed and is presented as an approach to designing user interfaces for older people. It was found that a number of non-standard techniques were useful in order to maximize the benefit from the involvement of the older contributors and to ensure their ethical treatment. These techniques and the rationale behind them are described. Finally, the interface design approach that emerged has strong links to the approach used by the UTOPIA team based at the University of Dundee. The extent to which the thesis provides support for the UTOPIA approach is discussed.
APA, Harvard, Vancouver, ISO, and other styles
16

Al-Kutubi, Mostafa. "Sensor fusion for tangible acoustic interfaces for human computer interaction." Thesis, Cardiff University, 2007. http://orca.cf.ac.uk/54654/.

Full text
Abstract:
This thesis presents the development of tangible acoustic interfaces for human computer interaction. The method adopted was to position sensors on the surface of a solid object to detect acoustic waves generated during an interaction, process the sensor signals and estimate either the location of a discrete impact or the trajectory of a moving point of contact on the surface. Higher accuracy and reliability were achieved by employing sensor fusion to combine the information collected from redundant sensors electively positioned on the solid object. Two different localisation approaches are proposed in the thesis. The learning-based approach is employed to detect discrete impact positions. With this approach, a signature vector representation of time-series patterns from a single sensor is matched with database signatures for known impact locations. For improved reliability, a criterion is proposed to extract the location signature from two vectors. The other approach is based on the Time Difference of Arrival (TDOA) of a source signal captured by a spatially distributed array of sensors. Enhanced positioning algorithms that consider near-field scenario, dispersion, optimisation and filtration are proposed to tackle the problems of passive acoustic localisation in solid objects. A computationally efficient algorithm for tracking a continuously moving source is presented. Spatial filtering of the estimated trajectory has been performed using Kalman filtering with automated initialisation.
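The Time Difference of Arrival idea at the heart of this and the previous Cardiff thesis can be illustrated with a two-sensor sketch: estimate the lag at which the two sensor signals best align and convert it to a time difference. This is a bare-bones illustrative example; the dispersion handling, optimisation and filtering developed in the theses are omitted here.

```python
import numpy as np

def estimate_tdoa(sig_a, sig_b, fs):
    """Crude TDOA estimate: the cross-correlation lag (in seconds) at which
    the two sensor signals align best. Its sign indicates which sensor
    received the impact wave first."""
    a = sig_a - sig_a.mean()
    b = sig_b - sig_b.mean()
    corr = np.correlate(a, b, mode="full")
    lag_samples = np.argmax(corr) - (len(b) - 1)
    return lag_samples / fs

# Given the wave speed c in the material, c * tdoa is the difference in the
# source's distance to the two sensors, constraining it to a hyperbola.
```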
APA, Harvard, Vancouver, ISO, and other styles
17

Ellis, Loftie. "Human-computer interface using a web camera." Thesis, Stellenbosch : University of Stellenbosch, 2007. http://hdl.handle.net/10019.1/1988.

Full text
Abstract:
Thesis (MScEng (Electrical and Electronic Engineering))--University of Stellenbosch, 2007.
In this thesis we present a human-computer interface (HCI) system for disabled persons using only a basic web camera. Mouse movements are simulated by small movements of the head, while clicks are simulated by eye blinks. In this study, a system capable of face tracking, eye detection (including iris detection), blink detection and finally skin detection and face recognition has been developed. A detection method based on Haar-like features is used to detect the face and eyes. Once the eyes have been detected, a support vector machine classifier is used to detect whether the eye is open or closed (for use in blink detection). Skin detection is done using K-means clustering, while Eigenfaces is used for face recognition. It is concluded that using a web camera as a human-computer interface can be a viable input method for the severely disabled.
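A minimal OpenCV sketch of the pipeline this abstract outlines is given below, assuming the Haar cascade files that ship with the opencv-python package. The blink test here is a naive "no eyes detected for a few frames" heuristic standing in for the thesis's SVM open/closed classifier, so treat it as an illustration only.

```python
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

cap = cv2.VideoCapture(0)          # default web camera
closed_frames = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)
    for (x, y, w, h) in faces:
        roi = gray[y:y + h // 2, x:x + w]   # upper half of the face
        eyes = eye_cascade.detectMultiScale(roi, scaleFactor=1.1, minNeighbors=5)
        # Naive blink proxy: a detected face with no detected eyes for a few
        # consecutive frames (the thesis uses an SVM open/closed classifier).
        closed_frames = closed_frames + 1 if len(eyes) == 0 else 0
        if closed_frames == 3:
            print("blink detected -> simulate a mouse click here")
    cv2.imshow("webcam HCI sketch", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```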
APA, Harvard, Vancouver, ISO, and other styles
18

Dunlap, Susan L. "A toolkit for designing user interfaces." Thesis, Monterey, California : Naval Postgraduate School, 1990. http://handle.dtic.mil/100.2/ADA231558.

Full text
Abstract:
Thesis (M.S. in Computer Science)--Naval Postgraduate School, March 1990.
Thesis Advisor(s): Zyda, Michael J. Second Reader: Bradbury, Leigh W. "March 1990." Description based on signature page as viewed on August 25, 2009. DTIC Descriptor(s): Interfaces, Silicon, Graphics, Iris, Work Stations, Generators, Writing, Coding, User Needs. DTIC Identifier(s): Software engineering, interfaces, computer graphics, theses. Author(s) subject terms: Interface, graphics. Includes bibliographical references (p. 66). Also available online.
APA, Harvard, Vancouver, ISO, and other styles
19

Dewan, Prasun. "Automatic generation of user interfaces." Madison, Wis. : University of Wisconsin-Madison, Computer Sciences Dept, 1986. http://catalog.hathitrust.org/api/volumes/oclc/14706019.html.

Full text
APA, Harvard, Vancouver, ISO, and other styles
20

Cooper, Geoff. "Representing the user : a sociological study of the discourse of human computer interaction." Thesis, n.p, 1991. http://ethos.bl.uk/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
21

Stander, Adrie. "Computer user interfaces in a multicultural society." Thesis, Cape Technikon, 1997. http://hdl.handle.net/20.500.11838/1369.

Full text
Abstract:
Thesis (MTech(Information Technology))--Cape Technikon, Cape Town, 1997
This research discusses some of the cultural issues that could influence the human computer encounter in a multicultural community. The results of research to determine differences in computer usage caused by cultural differences when using computer user interfaces in simulated and real-world environments are also discussed. Various cultural aspects could possibly influence the effectiveness of the user interface in a multicultural society. Language is an important factor and studies have shown that simple translation will increase productivity (Bodley, 1993:23). However, not all languages contain the necessary technical vocabulary. Mothers from a lower social class typically use a limited language code when communicating with their children (Mussen et al., 1984:206). As this causes the children to think in more concrete and less conceptual terms, it may influence the human computer interaction, particularly where a high degree of abstraction, such as in graphical interfaces, is used. Symbolism is problematic as symbols like light bulbs, recycle bins and VCR controls do not feature in the life of users living in slum and backward rural conditions. Lack of exposure to technology might negatively influence user attitude (Downton, 1991:25) with a corresponding inhibition of learning and performance. An external locus of control is common among disadvantaged groups due to the high degree of rejection, hostile control and criticism they experience. As the sense of being out of control is largely associated with the inclination to avoid stressful situations, users from these groups might prefer to avoid situations where they do not feel in control. The strong differentiation between the roles of the sexes in certain cultures can also influence the encounter with the computer (Downton, 1991:10). It has been shown that the different gender orientations towards problem solving in these cultures can have an important influence on computer usage. The intracultural factors of social class play a significant role in determining how a person acts and thinks (Baruth & Manning, 1991:9-10). Such differences may sometimes be more pronounced than those resulting from cultural diversity and may influence the orientation of the user towards abstraction and generalization.
APA, Harvard, Vancouver, ISO, and other styles
22

Wells, Evelyn Frances. "A Comparison of Interactive Color Specification Systems for Human-Computer Interfaces." Thesis, Texas A&M University, 1994. http://hdl.handle.net/1969.1/90683.

Full text
Abstract:
Color specification is a time-consuming and challenging task in computer graphics applications. The purpose of this research is to examine the color specification process in the context of current human-computer interface technology, and to investigate how certain attributes of a color specification system affect its usability during a visual color matching task. Eighteen color specification systems are compared, each composed of different combinations of color space (red-green-blue, RGB; opponent channel, OPP; hue-saturation-value, HSV), slider type (plain, static, dynamic), and background context (achromatic, chromatic). A total of 83 undergraduate students, both male and female, participated in the study. Each subject completed six trials, with each trial consisting of a set of color matches using a particular system. Color matching performance was analyzed to yield measures of time, physical effort, accuracy, and convergence speed. The systems were then compared quantitatively according to these measures and qualitatively based on preference. The results indicate that the OPP color space led to greatest convergence and most user comfort, while the RGB space ranked second in terms of convergence, and the HSV space ranked second in terms of user comfort. Among the slider types, the dynamic sliders were superior according to almost every usability measure, followed by the static sliders and then the plain sliders. Context had a mixed effect in that the achromatic background led to slower but more accurate matches than did the chromatic background.
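To clarify what the three coordinate systems in the study manipulate, here is a small Python sketch that expresses one colour in RGB, HSV and a simple opponent-channel form. The opponent formulation shown is one common linear approximation chosen for illustration, not necessarily the one used in the experiment.

```python
import colorsys

def rgb_to_spaces(r, g, b):
    """Express a colour (components in [0, 1]) in the three coordinate
    systems compared in the study: RGB, HSV, and a simple opponent-channel
    triple (luminance, red-green, blue-yellow)."""
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    luminance = (r + g + b) / 3.0
    red_green = r - g
    blue_yellow = b - (r + g) / 2.0
    return {"RGB": (r, g, b),
            "HSV": (h, s, v),
            "OPP": (luminance, red_green, blue_yellow)}

print(rgb_to_spaces(0.8, 0.4, 0.2))
```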
APA, Harvard, Vancouver, ISO, and other styles
23

Rencken, D. Wolfgang. "A quantitative model for adaptive task allocation in human-computer interfaces." Thesis, University of Oxford, 1991. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.291540.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

Yong, Kin Fuai. "Emerging human-computer interaction interfaces : a categorizing framework for general computing." Thesis, Massachusetts Institute of Technology, 2014. http://hdl.handle.net/1721.1/90692.

Full text
Abstract:
Thesis: S.M. in Engineering and Management, Massachusetts Institute of Technology, Engineering Systems Division, System Design and Management Program, 2014.
Cataloged from PDF version of thesis.
Includes bibliographical references (page 86).
Executive summary: The dominant design of the Human-Computer Interface over the last thirty years has been the combination of monitor, keyboard and mouse. However, the constant miniaturization of ICs and sensors and the availability of computing power have spurred incredible new dimensions of inputs (touch, gesture, voice, brain wave, etc.) and outputs (watch, glasses, phone, surface, etc.), which started the explosive growth of recombination of both inputs and outputs into new classes of devices. The design constraints have also noticeably shifted from technical to ergonomic and contextual. This thesis sets out to map these new interfaces to the use context in general computing and project the adoption path and the driving factors behind them. The theoretical foundation of this thesis is based on multiple technology innovation theories, including the Innovation and Technology Diffusion Models from Paul Geroski, Dominant Design from James Utterback, the Curse of Innovation from John Gourville and Lead User Innovation by Eric Von Hippel. System Architecture thinking, founded most notably by Ed Crawley and Olivier de Weck from MIT, is also applied to analyze the architecture of the Human-Computer Interface. The study of the Human-Computer Interface starts with a case study of the invention of the computer mouse, conceived in 1968 by Douglas Engelbart. A paper published by Engelbart compared different technologies, and the mouse emerged as superior, with lower fatigue and error rate yet a surprisingly short learning time. The mouse, however, was not popularized until Apple showcased the design with the first GUI on a personal computer, its Macintosh, in 1984, and its subsequent mass adoption by Microsoft Windows in the late 1980s. The case study showed that even with the superior design of a specific HCI, a number of other factors, including a holistic solution, a killer application, market position and platform strategy, are required for successful adoption. The next chapter maps out developing Human-Computer Interface technologies and notable existing or developing products and their company background. The superiority of an interface depends on how well it fits into the inherent nature of a specific use context. The daily general computing domains of an average computer user include collaboration, productivity, media consumption, communication and augmentation. The clear distinction of the use context in each domain strongly correlates with the effectiveness of the Human-Computer Interface in each class of device. The chapter includes analysis of proposed frameworks that place HCI interfaces on a plot of interaction complexity against screen size. Several industry experts generally agreed on a few observations: the keyboard and mouse will remain the primary input interface for the productivity domain, collaboration is growing in importance, there is an increasing emphasis on human-centered design, and there is a huge opportunity in the wearable market, with a potential size of $50 billion. In conclusion, the projected future of adoption is:
* The collaboration domain needs the combination of a low fatigue, high precision interface for productivity; a high freedom, low precision interface for creativity; and a large output screen for multiple collaborators. This will remain the frontier battleground for a variety of concepts from several giant players and niche players, each with a different competitive edge.
* Productivity domain input interfaces will likely continue to be dominated by low fatigue, high precision interfaces that are not necessarily intuitive, i.e. a keyboard and mouse. 3D manipulation will remain a niche interface only needed by specific industries, while a 3D general computing environment is unlikely to be realized in the short term.
* The media consumption domain will be the major area of adoption for medium accuracy, highly intuitive interfaces, e.g. gesture and sound. Personal media consumption devices might be challenged by head-mounted displays, while group media consumption devices face an interesting challenge from bridging devices like Chromecast.
* The communication domain needs an input interface that is fairly accurate and responsive, with just enough screen space. Voice recognition is rising fast to challenge typing. The dominating form factor will be the smartphone, but it will be challenged by glasses.
* The augmentation domain needs an interface that is simple and fairly accurate. New input interfaces like brainwave, gaze detection, and muscle signal will be adopted here given the right context. Flexible OLED is likely to revolutionize both input and output interfaces for wearable devices.
Product developers should choose technology according to their targeted domain and identify competitors using this framework. Killer applications should be developed early, internally or with partners, to ensure success, while platform strategy can leverage innovation of third-party developers to widen the application. During the course of the research, other opportunities arising from the proliferation of computing were also identified in the areas of the Internet of Things, smart objects and smart healthcare. This thesis is based mainly on qualitative analysis due to the lack of comprehensive data on the new Human-Computer Interfaces. Future research can collect quantitative data based on the framework of the five domains of general computing activities and their categorical requirements. It is also possible to extend the model to other computing use cases, for example Gaming, Virtual Reality and Augmented Reality.
by Kin Fuai Yong.
S.M. in Engineering and Management
APA, Harvard, Vancouver, ISO, and other styles
25

Cannavò, Alberto. "Interfaces for human-centered production and use of computer graphics assets." Doctoral thesis, Politecnico di Torino, 2020. http://hdl.handle.net/11583/2841170.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

Bourges-Waldegg, Paula. "Handling cultural factors in human-computer interaction." Thesis, University of Derby, 1998. http://hdl.handle.net/10545/310928.

Full text
Abstract:
The main objective of the research described in this thesis was to investigate and understand the origins of culturally-determined usability problems in the context of Human Computer Interaction (HCI) to develop a method for treating this issue, when designing systems intended to be shared by culturally-heterogeneous user groups, such as Computer Supported Co-operative Work (CSCW) systems and the Internet. The resulting approach supports HCI designers by providing an alternative to internationalisation and localisation guidelines, which are inappropriate for tackling culturally-determined usability problems in the context of shared-systems. The research also sought to apply and test the developed approach in order to assess its efficacy and to modify or improve it accordingly.
APA, Harvard, Vancouver, ISO, and other styles
27

Raisamo, Roope. "Multimodal human-computer interaction: a constructive and empirical study." Tampere, [Finland]: University of Tampere, 1999. http://acta.uta.fi/pdf/951-44-4702-6.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
28

Garcia, Frey Alfonso. "Quality of Human-Computer Interaction : Self-Explanatory User Interfaces by Model-Driven Engineering." Thesis, Grenoble, 2013. http://www.theses.fr/2013GRENM015/document.

Full text
Abstract:
In Human-Computer Interaction, quality is a utopia: despite all the design efforts, there are always uses and situations for which the user interface is not perfect. This thesis investigates self-explanatory user interfaces for improving the quality perceived by end users. The approach follows the principles of model-driven engineering. It consists in keeping the design models at runtime so as to dynamically enrich the user interface with a set of possible questions and answers. The questions are related to usage (for instance, "What is the purpose of this button?", "Why is this action not possible?") as well as to design rationale (for instance, "Why are the items not alphabetically ordered?"). This thesis proposes a software infrastructure, UsiExplain, based on the UsiXML metamodels. An evaluation conducted on a case study related to a car shopping website confirms that the approach is relevant, especially for usage questions. Design rationale will be further explored in the future.
APA, Harvard, Vancouver, ISO, and other styles
29

Buckthal, Eric D. "Juiciness in Citizen Science Computer Games: Analysis of a Prototypical Game." DigitalCommons@CalPoly, 2014. https://digitalcommons.calpoly.edu/theses/1278.

Full text
Abstract:
Incorporating the collective problem-solving skills of non-experts could accelerate the advancement of scientific research. Citizen science games leverage puzzles to present computationally difficult problems to players. Such games typically map the scientific problem to game mechanics, and visual feedback helps players improve their solutions. Like games for entertainment, citizen science games intend to capture and retain player attention. “Juicy” game design refers to augmented visual feedback systems that give a game personality without modifying fundamental game mechanics. A “juicy” game feels alive and polished. This thesis explores the use of “juicy” game design applied to the citizen science genre. We present the results of a user study of its effect on player motivation with a prototypical citizen science game inspired by clustering-based E. coli bacterial strain analysis.
APA, Harvard, Vancouver, ISO, and other styles
30

Stupak, Noah. "Time-delays and system response times in human-computer interaction /." Online version of thesis, 2009. http://hdl.handle.net/1850/10867.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Strickland, Ted John Jr. "Dynamic management of multichannel interfaces for human interaction with computer-based intelligent assistants." Diss., The University of Arizona, 1989. http://hdl.handle.net/10150/184793.

Full text
Abstract:
For complex man-machine tasks where multi-media interaction with computer-based assistants is appropriate, a portion of the assistant's intelligence must be devoted to managing its communication processes with the user. Since people often serve the role of assistants, the conventions of human communication provide a basis for designing the communication processes of the computer-based assistant. Human decision making for communication requires knowledge of the user's style, the task demands, and communication practices, and knowledge of the current situation. Decisions necessary for effective communication, when, how, and what to communicate, can be expressed using these knowledge sources. A system based on human communication rules was developed to manage the communication decisions of an intelligent assistant. The Dynamic Communication Management (DCM) system consists of four components, three models and a manager. The model of the user describes the user's communication preferences for different task situations. The model of the task is used to establish the user's current activity and to describe how communication should be conducted for this activity. The communication model provides the rules needed to make decisions: when to communicate the message, how to present the message to the user, and what information should be communicated. The Communication Manager controls and coordinates these models to conduct all communication with the user. Performance with DCM as the interface to a simulated Flexible Manufacturing System (FMS) control task was established to learn about the potential benefits of the concept. An initial comparison showed no improvement over a keyboard and monitor interface, but provided performance data which exposed the differences in information needed for decision making using auditory and visual communication. This knowledge and related performance data were used to redesign features of the DCM. The redesigned DCM significantly improved all aspects of system performance compared to the keyboard and monitor interface. The FMS performance measures and performance on a secondary task improved, user communication behavior was changed favorably, and users preferred the advanced features of DCM. These types of benefits can potentially accrue for a variety of tasks where multi-media communication with computer-based intelligent assistants is managed with DCM.
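The when/how/what decision structure described above can be sketched as a small rule lookup. Everything in the snippet below (the rule contents, model fields and event names) is invented for illustration; the thesis derives its rules from a user model, a task model and human communication conventions.

```python
# Hypothetical knowledge sources; the real DCM models are far richer.
USER_MODEL = {"prefers_audio_when_busy": True}
TASK_MODEL = {"machine_jam": {"urgency": "high"},
              "status_update": {"urgency": "low"}}

def decide_communication(event, user_busy):
    """Return (when, how, what) for delivering a message about `event`."""
    urgency = TASK_MODEL.get(event, {}).get("urgency", "low")
    when = "now" if urgency == "high" else "at the next idle moment"
    how = ("speech" if user_busy and USER_MODEL["prefers_audio_when_busy"]
           else "on-screen message")
    what = "summary only" if user_busy else "full detail"
    return when, how, what

print(decide_communication("machine_jam", user_busy=True))
```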
APA, Harvard, Vancouver, ISO, and other styles
32

Kivila, Arto. "Touchscreen interfaces for machine control and education." Thesis, Georgia Institute of Technology, 2013. http://hdl.handle.net/1853/49051.

Full text
Abstract:
The touchscreen user interface is an inherently dynamic device that is becoming ubiquitous. The touchscreen’s ability to adapt to the user’s needs makes it superior to more traditional haptic devices in many ways. Most touchscreen devices come with a very large array of sensors already included in the package. This gives engineers the means to develop human-machine interfaces that are very intuitive to use. This thesis presents research that was done to develop the best touchscreen interface for driving an industrial crane for novice users. To generalize the research, testing also determined how touchscreen interfaces compare to the traditional joystick in highly dynamic tracking situations using a manual tracking experiment. Three separate operator studies were conducted to investigate touchscreen control of cranes. The data indicates that the touchscreen interfaces are superior to the traditional push-button control pendant and that the layout and function of the graphical user interface on the touchscreen plays a role in the performance of the human operators. The touchscreen interface also shows great promise for allowing users to navigate through interactive textbooks. Therefore, this thesis also presents developments directed at creating the next generation of engineering textbooks. Nine widgets were developed for an interactive mechanical design textbook that is meant to be delivered via tablet computers. Those widgets help students improve their technical writing abilities, introduce them to tools they can use in product development, and give them knowledge of how some dynamical systems behave. In addition, two touchscreen applications were developed to aid the judging of a mechanical design competition.
APA, Harvard, Vancouver, ISO, and other styles
33

Leiva, Torres Luis Alberto. "Diverse Contributions to Implicit Human-Computer Interaction." Doctoral thesis, Universitat Politècnica de València, 2012. http://hdl.handle.net/10251/17803.

Full text
Abstract:
When people interact with computers, a great deal of information is not provided on purpose. By studying these implicit interactions it is possible to understand which characteristics of the user interface are beneficial (or not), deriving implications for the design of future interactive systems. The main advantage of leveraging implicit user data in computer applications is that any interaction with the system can contribute to improving its usefulness. Moreover, such data remove the cost of having to interrupt the user to explicitly submit information about a topic that need not be related to the intention of using the system. On the other hand, implicit interactions sometimes do not provide clear and concrete data, so special attention must be paid to how this source of information is managed. The purpose of this research is twofold: 1) to apply a new vision to both the design and the development of applications that can react accordingly to the user's implicit interactions, and 2) to provide a series of methodologies for the evaluation of such interactive systems. Five scenarios illustrate the feasibility and suitability of the framework of the thesis. Empirical results with real users show that taking advantage of implicit interaction is both an adequate and a convenient means of improving interactive systems in multiple ways.
Leiva Torres, LA. (2012). Diverse Contributions to Implicit Human-Computer Interaction [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/17803
APA, Harvard, Vancouver, ISO, and other styles
34

Dill, Byron. "Human robot interaction using a personal digital assistant interface : a study of feedback modes /." free to MU campus, to others for purchase, 2003. http://wwwlib.umi.com/cr/mo/fullcit?p1418012.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Glinert, Eitan M. "The human controller : usability and accessibility in video game interfaces." Thesis, Massachusetts Institute of Technology, 2008. http://hdl.handle.net/1721.1/46106.

Full text
Abstract:
Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2008.
Includes bibliographical references (leaves 41-43).
Despite the advances in user interfaces and the new gaming genres, not all people can play all games - disabled people are frequently excluded from game play experiences. On the one hand this adds to the list of discriminations disabled people face in our society, while on the other hand actively including them potentially results in games that are better for everyone. The largest hurdle to involvement is the user interface, or how a player interacts with the game. Analyzing usability and adhering to accessibility design principles makes it both possible and practical to develop fun and engaging game user interfaces that a broader range of the population can play. To demonstrate these principles we created AudiOdyssey, a PC rhythm game that is accessible to both sighted and non-sighted audiences. By following accessibility guidelines we incorporated a novel combination of features resulting in a similar play experience for both groups. Testing AudiOdyssey yielded useful insights into which interface elements work and which don't work for all users. Finally a case is made for considering accessibility when designing future versions of gaming user interfaces, and speculative scenarios are presented for what such interfaces might look like.
by Eitan M. Glinert.
M.Eng.
APA, Harvard, Vancouver, ISO, and other styles
36

Brewster, Stephen. "Providing a structured method for integrating non-speech audio into human-computer interfaces." Thesis, University of York, 1994. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.241055.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Booth, Stuart. "Multisensory theory for interface design." Thesis, University of Sheffield, 2002. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.269283.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Nylander, Stina. "The ubiquitous interactor: Mobile services with multiple user interfaces." Licentiate thesis, Uppsala: Uppsala University, Department of Information Technology, 2003. http://www.it.uu.se/research/reports/lic/2003-013/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Gnanayutham, Paul Wesley. "Interaction paradigms for brain-body interfaces for computer users with brain injuries." Thesis, University of Sunderland, 2008. http://sure.sunderland.ac.uk/3554/.

Full text
Abstract:
In comparison to all types of injury, those to the brain are among the most likely to result in death or permanent disability. Some of these brain-injured people cannot communicate, recreate, or control their environment due to severe motor impairment. This group of individuals with severe head injury has received limited help from assistive technology. Brain-Computer Interfaces have opened up a spectrum of assistive technologies, which are particularly appropriate for people with traumatic brain injury, especially those who suffer from “locked-in” syndrome. The research challenge here is to develop novel interaction paradigms that suit brain-injured individuals, who could then use them for everyday communications. The developed interaction paradigms should require minimum training, be reconfigurable, and demand minimum effort to use. This thesis reports on the development of novel interaction paradigms for Brain-Body Interfaces to help brain-injured people to communicate better, recreate and control their environment using computers despite the severity of their brain injury. The investigation was carried out in three phases. Phase one was an exploratory study where a first novel interaction paradigm was developed and evaluated with able-bodied and disabled participants. Results obtained were fed into the next phase of the investigation. Phase two was carried out with able participants who acted as the development group for the second novel interaction paradigm. This second novel interaction paradigm was evaluated with non-verbal participants with severe brain injury in phase three. An iterative design research methodology was chosen to develop the interaction paradigms. A non-invasive assistive technology device named Cyberlink™ was chosen as the Brain-Body Interface. This research improved on previous work in this area by developing the new interaction paradigms of personalised tiling and discrete acceleration in Brain-Body Interfaces. The research hypothesis of this study, ‘that the performance of the Brain-Body Interface can be improved by the use of novel interaction paradigms’, was successfully demonstrated.
APA, Harvard, Vancouver, ISO, and other styles
40

Moore, Melody M. "User interface reengineering." Diss., Georgia Institute of Technology, 1998. http://hdl.handle.net/1853/12899.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Eriksson, Alexander, and Gustav Ljungberg. "Layout management in distributed user interfaces." Thesis, Linköpings universitet, Institutionen för datavetenskap, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-168041.

Full text
Abstract:
Human computer interaction is a field that is progressing quickly, with users looking for new ways to interact with digital content. Graphical user interfaces are all around us, featured in smartphones, tablets, and personal computers. The next step beyond a graphical user interface, the distributed user interface, is becoming increasingly popular and is offered by many applications, such as Spotify. This bachelor's thesis, carried out at Linköping University at the Department of Computer Science, discusses the history of human computer interaction, user interaction and interfaces, and how to manage the layout of a distributed user interface system. A framework for managing the layout is developed and tested in a prototype.
APA, Harvard, Vancouver, ISO, and other styles
42

Vrazalic, Lejla. "Towards holistic human-computer interaction evaluation research and practice development and validation of the distributed usability evaluation method /." Access electronically, 2004. http://www.library.uow.edu.au/adt-NWU/public/adt-NWU20050106.151954/index.html.

Full text
Abstract:
Thesis (Ph.D.)--University of Wollongong, 2004.
Typescript. This thesis is subject to a 2-year embargo (16/09/2004 to 16/09/2006) and may only be viewed and copied with the permission of the author. For further information please contact the Archivist. Includes bibliographical references: p. 360-374.
APA, Harvard, Vancouver, ISO, and other styles
43

Bernard, Arnaud Jean Marc. "Human computer interface based on hand gesture recognition." Thesis, Georgia Institute of Technology, 2010. http://hdl.handle.net/1853/42748.

Full text
Abstract:
With the improvement of multimedia technologies such as broadband-enabled HDTV, video on demand and internet TV, the computer and the TV are merging into a single device. Moreover, these technologies, as well as DVD and Blu-ray, can provide menu navigation and interactive content. The growing interest in video conferencing has led to the integration of the webcam into different devices such as laptops, cell phones and even the TV set. Our approach is to directly use an embedded webcam to remotely control a TV set using hand gestures. Using specific gestures, a user is able to control the TV. A dedicated interface can then be used to select a TV channel, adjust the volume or browse videos from an online streaming server. This approach leads to several challenges. The first is the use of a single webcam, which leads to a vision-based system. From the single webcam, we need to recognize the hand and identify its gesture or trajectory. A TV set is usually installed in a living room, which implies constraints such as a potentially moving background and luminance changes. These issues are discussed further, as well as the methods developed to resolve them. Video browsing is one example of the use of gesture recognition. To illustrate another application, we developed a simple game controlled by hand gestures. The emergence of 3D TVs is allowing the development of 3D video conferencing. Therefore we also consider the use of a stereo camera to recognize hand gestures.
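As a loose illustration of the kind of single-webcam pipeline the abstract describes, the sketch below segments a moving hand with background subtraction and tracks the centroid of the largest moving contour as a trajectory. The library calls are standard OpenCV (version 4), but the thresholds and the choice of MOG2 background subtraction are assumptions made for this example, not the method used in the thesis.

```python
# Minimal single-webcam hand segmentation and trajectory sketch using OpenCV.
# Background subtraction tolerates a mostly static living-room scene; the
# largest moving contour is assumed, for illustration, to be the hand.
import cv2

cap = cv2.VideoCapture(0)                        # embedded webcam
bg = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=25)
trajectory = []                                   # centroid positions over time

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = bg.apply(frame)                        # foreground (moving) pixels
    mask = cv2.medianBlur(mask, 5)                # suppress speckle noise
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        hand = max(contours, key=cv2.contourArea)
        if cv2.contourArea(hand) > 2000:          # ignore small motions
            m = cv2.moments(hand)
            cx, cy = int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])
            trajectory.append((cx, cy))           # input to a gesture classifier
            cv2.circle(frame, (cx, cy), 6, (0, 255, 0), -1)
    cv2.imshow("hand", frame)
    if cv2.waitKey(1) & 0xFF == 27:               # Esc to quit
        break

cap.release()
cv2.destroyAllWindows()
```

A plain background-subtraction model only partially addresses the moving-background and luminance-change constraints the abstract raises, which is precisely where the thesis's dedicated methods come in.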
APA, Harvard, Vancouver, ISO, and other styles
44

Levine, Jonathan. "Computer based dialogs : theory and design /." Online version of thesis, 1990. http://hdl.handle.net/1850/10590.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Márquez, Jessica J. "Human-automation collaboration : decision support for lunar and planetary exploration /." Cambridge, Mass. : Ft. Belvior, VA : Springfield, Va. : Massachusetts Institute of Technology, Department of Aeronautics and Astronautics ; Available to the public through the Defense Technical Information Center ; National Technical Information Service [distributor], 2007. http://web.mit.edu/aeroastro/labs/halab/index.shtml.

Full text
Abstract:
Thesis (Ph. D in Philosophy (Human-Systems Engineering))--Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, February 2007.
"February 2007." Thesis advisor: Mary L. Cummings. Performed by Massachusetts Institute of Technology, Humans & Automation Laboratory, Cambridge, Mass. "Submitted to the Department of Aeronautics and Astronautics on February 1, 2007 in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Human-Systems Engineering."--P. 3. Includes bibliographical references (p. 219-225). Also available online from the Massachusetts Institute of Technology (MIT) Humans and Automation Lab (HAL) Web site.
APA, Harvard, Vancouver, ISO, and other styles
46

Smith, Timothy William. "Assessing the usability of user interfaces: Guidance and online help features." Diss., The University of Arizona, 1988. http://hdl.handle.net/10150/184328.

Full text
Abstract:
The purpose of this research was to provide evidence to support specific features of a software user interface implementation. A 3 x 2 x 2 full factorial, between-subjects design was employed in a laboratory experiment that systematically varied the presence or absence of a user interface and the medium of help documentation (either online or written), while blocking for levels of user experience. Subjects completed a set of tasks using a computer so the experimenters could collect and evaluate various performance and attitudinal measures. Several attitudinal measures were developed and validated as part of this research. Consistent with previous findings, this research found that a user's previous level of experience in using a computer had a significant impact on their performance measures. Specifically, increased levels of user experience were associated with reduced time to complete the tasks, fewer characters typed, fewer references to help documentation, and fewer requests for human assistance. In addition, increased levels of user experience were generally associated with higher levels of attitudinal measures (general attitude toward computers and satisfaction with their experiment performance). The existence of a user interface had a positive impact on task performance across all levels of user experience. Although experienced users were not more satisfied with the user interface than without it, their performance was better. This contrasts with at least some previous findings suggesting that experienced users are more efficient without a menu-driven user interface. The use of online documentation, as opposed to written, had a significant negative impact on task performance. Specifically, users required more time, made more references to the help documentation, and required more human assistance. However, these users generally reported attitudinal measures (satisfaction) that were as high with online documentation as with written. There was a strong interaction between the user interface and online documentation for the task performance measures. This research concludes that a set of tasks can be performed in significantly less time when online documentation is facilitated by the presence of a user interface. Users of written documentation seemed to perform equivalently with or without the user interface. With online documentation the user interface became crucial to task performance. Research implications are presented for practitioners, designers and researchers.
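For readers who want to see how such a design is typically analysed today, the sketch below fits a three-way ANOVA over the 3 x 2 x 2 between-subjects factors described above using statsmodels. The column names and the completion-time response are illustrative assumptions; the original study predates this tooling and may have used different software and measures.

```python
# Hypothetical analysis sketch for a 3 x 2 x 2 full factorial, between-subjects
# design: three levels of user experience, interface present/absent, and
# online vs. written documentation, with task completion time as the response.
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# df is assumed to have one row per subject with these illustrative columns:
#   experience   in {"novice", "intermediate", "expert"}
#   interface    in {"present", "absent"}
#   doc_medium   in {"online", "written"}
#   task_time    completion time in seconds
def three_way_anova(df: pd.DataFrame) -> pd.DataFrame:
    model = ols(
        "task_time ~ C(experience) * C(interface) * C(doc_medium)", data=df
    ).fit()
    # Type II sums of squares: main effects plus all interaction terms,
    # including the interface x documentation interaction reported above.
    return sm.stats.anova_lm(model, typ=2)
```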
APA, Harvard, Vancouver, ISO, and other styles
47

Covington, Michael J. "A flexible security architecture for pervasive computing environments." Diss., Available online, Georgia Institute of Technology, 2004:, 2004. http://etd.gatech.edu/theses/available/etd-06072004-131113/unrestricted/covington%5Fmichael%5Fj%5F200405%5Fphd.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Knowles, Christine Joan. "A qualitative approach to the assessment of the cognitive complexity of an interface." Thesis, Queen Mary, University of London, 1997. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.267738.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Debard, Quentin. "Automatic learning of next generation human-computer interactions." Thesis, Lyon, 2020. http://www.theses.fr/2020LYSEI036.

Full text
Abstract:
Artificial Intelligence (AI) and Human-Computer Interaction (HCI) are two research fields with relatively little work in common. HCI specialists usually design the way we interact with devices directly from observations and measures of human feedback, manually optimizing the user interface to better fit users' expectations. This process is hard to optimize: ergonomics, intuitiveness and ease of use are key properties of a User Interface (UI) that are too complex to be modelled simply from interaction data. This drastically restricts the possible uses of Machine Learning (ML) in the design process. Currently, ML in HCI is mostly applied to gesture recognition and automatic display, e.g. advertisement or item suggestion. It is also used to fine-tune an existing UI, but as of now it does not participate in designing new ways to interact with computers. Our main focus in this thesis is to use ML to develop new design strategies for overall better UIs. We want to use ML to build intelligent – that is, precise, intuitive and adaptive – user interfaces using minimal handcrafting. We propose a novel approach to UI design: instead of letting the user adapt to the interface, we want the interface and the user to adapt mutually to each other. The goal is to reduce human bias in protocol definition while building co-adaptive interfaces able to further fit individual preferences. In order to do so, we will put to use the different mechanisms available in ML to automatically learn behaviors, build representations and make decisions. We will be experimenting on touch interfaces, as these interfaces are widely used and provide easily interpretable problems. The first part of our work will focus on processing touch data and using supervised learning to build accurate classifiers of touch gestures. The second part will detail how Reinforcement Learning (RL) can be used to model and learn interaction protocols given user actions. Lastly, we will combine these RL models with unsupervised learning to build a setup allowing for the design of new interaction protocols without the need for real user data.
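As a loose illustration of the first, supervised stage mentioned in the abstract, the sketch below resamples a recorded touch stroke to a fixed number of points and feeds the flattened, normalised coordinates to an off-the-shelf classifier. The preprocessing, feature choice and use of scikit-learn are assumptions made for this example and are not taken from the thesis.

```python
# Hypothetical touch-gesture classification sketch: each gesture is a sequence
# of (x, y) touch points; we resample it to a fixed length, normalise it,
# flatten it and train a standard classifier on labelled examples.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

N_POINTS = 32  # fixed number of points after resampling


def resample(stroke: np.ndarray, n: int = N_POINTS) -> np.ndarray:
    """Linearly resample a (k, 2) array of touch points to n points."""
    # cumulative arc length along the stroke, normalised to [0, 1]
    seg = np.linalg.norm(np.diff(stroke, axis=0), axis=1)
    t = np.concatenate([[0.0], np.cumsum(seg)])
    t /= max(t[-1], 1e-9)
    ts = np.linspace(0.0, 1.0, n)
    x = np.interp(ts, t, stroke[:, 0])
    y = np.interp(ts, t, stroke[:, 1])
    pts = np.stack([x, y], axis=1)
    # remove translation and scale so on-screen position does not matter
    pts -= pts.mean(axis=0)
    pts /= max(np.abs(pts).max(), 1e-9)
    return pts.flatten()                          # shape (2 * n,)


def train_classifier(strokes, labels):
    """strokes: list of (k_i, 2) arrays; labels: gesture names (illustrative)."""
    X = np.stack([resample(np.asarray(s, dtype=float)) for s in strokes])
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X, labels)
    return clf
```

The reinforcement-learning and unsupervised stages described in the abstract would then build on such classified gestures to learn the interaction protocol itself; they are beyond the scope of this sketch.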
APA, Harvard, Vancouver, ISO, and other styles
50

Thompson, Cynthia Ann. "Semantic lexicon acquisition for learning natural language interfaces /." Digital version accessible at:, 1998. http://wwwlib.umi.com/cr/utexas/main.

Full text
APA, Harvard, Vancouver, ISO, and other styles