Journal articles on the topic 'Human-computer interaction. Computer interfaces. User interfaces (Computer systems)'

Consult the top 50 journal articles for your research on the topic 'Human-computer interaction. Computer interfaces. User interfaces (Computer systems).'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Alonso-Valerdi, Luz María, and Víctor Rodrigo Mercado-García. "Enrichment of Human-Computer Interaction in Brain-Computer Interfaces via Virtual Environments." Computational Intelligence and Neuroscience 2017 (2017): 1–12. http://dx.doi.org/10.1155/2017/6076913.

Abstract:
Tridimensional representations stimulate cognitive processes that are the core and foundation of human-computer interaction (HCI). Those cognitive processes take place while a user navigates and explores a virtual environment (VE) and are mainly related to spatial memory storage, attention, and perception. VEs have many distinctive features (e.g., involvement, immersion, and presence) that can significantly improve HCI in highly demanding and interactive systems such as brain-computer interfaces (BCI). A BCI is a nonmuscular communication channel that attempts to reestablish the interaction between an individual and his/her environment. Although BCI research started in the sixties, this technology is not yet efficient or reliable for everyone at any time. Over the past few years, researchers have argued that the main BCI flaws could be associated with HCI issues. The evidence presented thus far shows that VEs can (1) set out working environmental conditions, (2) maximize the efficiency of BCI control panels, (3) implement navigation systems based not only on user intentions but also on user emotions, and (4) regulate user mental state to increase the differentiation between control and noncontrol modalities.
2

Konstantopoulos, Stasinos, and Vangelis Karkaletsis. "System Personality and Adaptivity in Affective Human-Computer Interaction." International Journal on Artificial Intelligence Tools 22, no. 02 (April 2013): 1350014. http://dx.doi.org/10.1142/s0218213013500140.

Abstract:
It has been demonstrated that human users attribute a personality to the computer interfaces they use, regardless of whether one has been explicitly encoded in the system's design or not. In this paper, we explore a method for having explicit control over the personality that a spoken human-robot interface is perceived to exhibit by its users. Our method focuses on the interaction between users and semantic knowledge-based systems where the goal of the interaction is that information from the semantic store is relayed to the user. We describe a personality modelling method that complements a standard dialogue manager by calculating parameters related to adaptivity and emotion for the various interaction modules that realize the system's dialogue acts. This calculation involves the planned act, the user adaptivity model, the system's own goals, but also a machine representation of the personality that we want the system to exhibit, so that systems with different personalities will react differently even when in the same dialogue state and with the same user or user type.
3

Ferreira, Alessandro Luiz Stamatto, Leonardo Cunha de Miranda, Erica Esteves Cunha de Miranda, and Sarah Gomes Sakamoto. "A Survey of Interactive Systems based on Brain-Computer Interfaces." Journal on Interactive Systems 4, no. 1 (August 28, 2013): 1. http://dx.doi.org/10.5753/jis.2013.623.

Abstract:
Brain-Computer Interface (BCI) enables users to interact with a computer only through their brain's biological signals, without the need to use muscles. BCI is an emerging research area, but it is still relatively immature. However, it is important to reflect on the different aspects of the Human-Computer Interaction (HCI) area related to BCIs, considering that BCIs will be part of interactive systems in the near future. BCIs must serve not only handicapped users but also healthy ones, improving interaction for end users. Virtual Reality (VR) is also an important part of interactive systems, and combined with BCI it could greatly enhance user interactions, improving the user experience by using brain signals as input and immersive environments as output. This paper addresses only noninvasive BCIs, since this is the only kind of signal capture that presents no risk to human health. As contributions of this work, we highlight the survey of interactive systems based on BCIs focusing on HCI and VR applications, and a discussion of the challenges and future of this subject matter.
4

Bailey, Shannon K. T., Daphne E. Whitmer, Bradford L. Schroeder, and Valerie K. Sims. "Development of Gesture-based Commands for Natural User Interfaces." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 61, no. 1 (September 2017): 1466–67. http://dx.doi.org/10.1177/1541931213601851.

Abstract:
Human-computer interfaces are changing to meet the evolving needs of users and overcome limitations of previous generations of computer systems. The current state of computers consists largely of graphical user interfaces (GUI) that incorporate windows, icons, menus, and pointers (WIMPs) as visual representations of computer interactions controlled via user input on a mouse and keyboard. Although this model of interface has dominated human-computer interaction for decades, WIMPs require an extra step between the user’s intent and the computer action, both imposing limitations on the interaction and introducing cognitive demands (van Dam, 1997). Alternatively, natural user interfaces (NUI) employ input methods such as speech, touch, and gesture commands. With NUIs, users can interact directly with the computer without using an intermediary device (e.g., mouse, keyboard). Using the body as an input device may be more “natural” because it allows the user to apply existing knowledge of how to interact with the world (Roupé, Bosch-Sijtsema, & Johansson, 2014). To utilize the potential of natural interfaces, research must first determine what interactions can be considered natural. For the purpose of this paper, we focus on the naturalness of gesture-based interfaces. The purpose of this study was to determine how people perform natural gesture-based computer actions. To answer this question, we first narrowed down potential gestures that would be considered natural for an action. In a previous study, participants (n = 17) were asked how they would gesture to interact with a computer to complete a series of actions. After narrowing down the potential natural gestures by calculating the most frequently performed gestures for each action, we asked participants (n = 188) to rate the naturalness of the gestures in the current study.
Participants each watched 26 videos of gestures (3-5 seconds each) and were asked how natural or arbitrary they interpreted each gesture for the series of computer commands (e.g., move object left, shrink object, select object, etc.). The gestures in these videos included the 17 gestures that were most often performed in the previous study in which participants were asked what gesture they would naturally use to complete the computer actions. Nine gestures were also included that were created arbitrarily to act as a comparison to the natural gestures. By analyzing the ratings on a continuum from “Completely Arbitrary” to “Completely Natural,” we found that the natural gestures people produced in the first study were also interpreted as the intended action by this separate sample of participants. All the gestures that were rated as either “Mostly Natural” or “Completely Natural” by participants corresponded to how the object manipulation would be performed physically. For example, the gesture video that depicts a fist closing was rated as “natural” by participants for the action of “selecting an object.” All of the gestures that were created arbitrarily were interpreted as “arbitrary” when they did not correspond to the physical action. Determining how people naturally gesture computer commands and how people interpret those gestures is useful because it can inform the development of NUIs and contributes to the literature on what makes gestures seem “natural.”
5

West, A. A., B. A. Bowen, R. P. Monfared, and A. Hodgson. "User-responsive interface generation for manufacturing systems: A theoretical basis." Proceedings of the Institution of Mechanical Engineers, Part B: Journal of Engineering Manufacture 214, no. 5 (May 1, 2000): 379–92. http://dx.doi.org/10.1243/0954405001518161.

Abstract:
Computer integrated manufacturing (CIM) systems with a significant level of human-computer interaction are often inefficient. This is particularly problematical for those users who have to interact with multiple subsystem interfaces. These difficulties can be traced back to the fact that representation of the user in existing manufacturing models and systems is inadequate. An approach that increases user representation to improve CIM interface design is proposed, in which stereotype-based user and task models are used to specify a common user interface for each individual system user. An overview of the architecture is followed by discussion of an application domain (statistical process control) in which a demonstrator based on the architecture has been tested.
6

Wojciechowski, A. "Hand’s poses recognition as a mean of communication within natural user interfaces." Bulletin of the Polish Academy of Sciences: Technical Sciences 60, no. 2 (October 1, 2012): 331–36. http://dx.doi.org/10.2478/v10175-012-0044-3.

Abstract:
Natural user interface (NUI) is a successor to the command line interfaces (CLI) and graphical user interfaces (GUI) so well known to computer users. The new natural approach is based on extensive tracking of human behaviors, where hand tracking and gesture recognition seem to play the main roles in communication. This paper reviews common approaches to hand feature tracking and proposes a highly effective contour-based hand-pose recognition method that can be used directly in a hand-based natural user interface. Its potential applications range from interaction with medical systems, through games, to communication support for impaired people.
7

Reynoso, Juan Manuel Gómez, and Lizeth Itziguery Solano Romo. "Measuring the Effectiveness of Designing End-User Interfaces Using Design Theories." International Journal of Information Technologies and Systems Approach 13, no. 2 (July 2020): 54–72. http://dx.doi.org/10.4018/ijitsa.2020070103.

Abstract:
Software systems are one of the most important technologies and are present in every task that humans and computers perform. Humans perform their tasks by using a computer interface. However, because many developers have not been exposed to one or more courses on Human-Computer Interaction (HCI), they sometimes create software based on their own preferences, skills, and abilities, and do not consult theories that could help them produce better outcomes. A study was carried out to identify whether software developed using Gestalt Theory combined with interface development principles produces better outcomes than software developed using developers' current skills. Results show that participants perceived the system developed by a team trained in Gestalt Theory and design guidelines to be of superior quality compared to the system built by a team that did not receive the training. However, the results should be taken cautiously.
8

Murano, Pietro, and Patrik O’Brian Holt. "Anthropomorphic Feedback in User Interfaces." International Journal of Technology and Human Interaction 3, no. 4 (October 2007): 52–63. http://dx.doi.org/10.4018/jthi.2007100104.

9

Tijerina, Louis. "Design Guidelines and the Human Factors of Interface Design." Proceedings of the Human Factors Society Annual Meeting 30, no. 14 (September 1986): 1358–62. http://dx.doi.org/10.1177/154193128603001403.

Abstract:
The proliferation of computer systems in recent years has prompted a growing concern about the human factors of interface design. Industrial and military organizations have responded by supporting studies in user-computer interaction and, more recently, products which might aid in the design of interfaces. One type of design aid which attempts to make findings of user-computer interface (UCI) research available to the system designer is the interface design guidelines document. This paper reviews literature about the design process and how design guidelines or standards might fit into that activity. Suggestions are offered about where future research and development might be directed in order to enhance the use of guidelines in the interface design process and so enhance the final product as well.
10

Ahmed, Naveed, Hind Kharoub, Selma Manel Medjden, and Areej Alsaafin. "A Natural User Interface for 3D Animation Using Kinect." International Journal of Technology and Human Interaction 16, no. 4 (October 2020): 35–54. http://dx.doi.org/10.4018/ijthi.2020100103.

Abstract:
This article presents a new natural user interface to control and manipulate a 3D animation using the Kinect. The researchers design a number of gestures that allow the user to play, pause, forward, rewind, scale, and rotate the 3D animation. They also implement a cursor-based traditional interface and compare it with the natural user interface. Both interfaces are extensively evaluated via a user study in terms of both usability and user experience. Through quantitative and qualitative evaluation, they show that a gesture-based natural user interface is the preferred method for controlling a 3D animation compared to a cursor-based interface. The natural user interface not only proved more efficient but also resulted in a more engaging and enjoyable user experience.
11

Clark, Leigh, Philip Doyle, Diego Garaialde, Emer Gilmartin, Stephan Schlögl, Jens Edlund, Matthew Aylett, et al. "The State of Speech in HCI: Trends, Themes and Challenges." Interacting with Computers 31, no. 4 (June 1, 2019): 349–71. http://dx.doi.org/10.1093/iwc/iwz016.

Abstract:
Speech interfaces are growing in popularity. Through a review of 99 research papers, this work maps the trends, themes, findings and methods of empirical research on speech interfaces in the field of human–computer interaction (HCI). We find that studies are usability/theory-focused or explore wider system experiences, evaluating Wizard of Oz setups, prototypes or developed systems. Measuring task and interaction was common, as was using self-report questionnaires to measure concepts like usability and user attitudes. A thematic analysis of the research found that speech HCI work focuses on nine key topics: system speech production, design insight, modality comparison, experiences with interactive voice response systems, assistive technology and accessibility, user speech production, using speech technology for development, people's experiences with intelligent personal assistants and how user memory affects speech interface interaction. From these insights we identify gaps and challenges in speech research, notably taking into account technological advancements, the need to develop theories of speech interface interaction, grow critical mass in this domain, increase design work and expand research from single to multiple user interaction contexts so as to reflect current use contexts.
We also highlight the need to improve measure reliability, validity and consistency, to deploy systems in the wild, and to reduce barriers to building fully functional speech interfaces for research. Research highlights: Most papers focused on usability/theory-based or wider system experience research, with a focus on Wizard of Oz and developed systems. Questionnaires on usability and user attitudes were often used, but few were reliable or validated. Thematic analysis showed nine primary research topics. Challenges were identified in theoretical approaches and design guidelines, engaging with technological advances, multiple-user and in-the-wild contexts, critical research mass, and barriers to building speech interfaces.
12

Feng, Jiangfan, and Yanhong Liu. "Intelligent Context-Aware and Adaptive Interface for Mobile LBS." Computational Intelligence and Neuroscience 2015 (2015): 1–10. http://dx.doi.org/10.1155/2015/489793.

Abstract:
Context-aware user interfaces play an important role in many human-computer interaction tasks of location based services. Although spatial models for context-aware systems have been studied extensively, how to locate specific spatial information for users is still not well resolved, which is important in the mobile environment, where users of location based services are impeded by device limitations. Better context-aware human-computer interaction models of mobile location based services are needed not just to predict performance outcomes, such as whether people will be able to find the information needed to complete a human-computer interaction task, but to understand the human processes involved in spatial query, which will in turn inform the detailed design of better user interfaces for mobile location based services. In this study, a context-aware adaptive model for mobile location based services interfaces is proposed, which contains three major sections: purpose, adjustment, and adaptation. Based on this model, we describe the process of user operation and interface adaptation through the dynamic interaction between users and the interface. We then show how the model applies to users’ demands in a complicated environment and demonstrate its feasibility through experimental results.
13

Paton, Chris, Andre W. Kushniruk, Elizabeth M. Borycki, Mike English, and Jim Warren. "Improving the Usability and Safety of Digital Health Systems: The Role of Predictive Human-Computer Interaction Modeling." Journal of Medical Internet Research 23, no. 5 (May 27, 2021): e25281. http://dx.doi.org/10.2196/25281.

Abstract:
In this paper, we describe techniques for predictive modeling of human-computer interaction (HCI) and discuss how they could be used in the development and evaluation of user interfaces for digital health systems such as electronic health record systems. Predictive HCI modeling has the potential to improve the generalizability of usability evaluations of digital health interventions beyond specific contexts, especially when integrated with models of distributed cognition and higher-level sociotechnical frameworks. Evidence generated from building and testing HCI models of the user interface (UI) components for different types of digital health interventions could be valuable for informing evidence-based UI design guidelines to support the development of safer and more effective UIs for digital health interventions.
14

Hicinbothom, James H., and Wayne W. Zachary. "A Tool for Automatically Generating Transcripts of Human-Computer Interaction." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 37, no. 15 (October 1993): 1042. http://dx.doi.org/10.1177/154193129303701514.

Abstract:
Recording transcripts of human-computer interaction can be a very time-consuming activity. This demonstration presents a new technology to automatically capture such transcripts in Open Systems environments (e.g., from graphical user interfaces running on the X Window System). This technology forms an infrastructure for performing distributed usability testing and human-computer interaction research, by providing integrated data capture, storage, browsing, retrieval, and export capabilities. It may lead to evaluation cost reductions throughout the software development life cycle.
15

Liu, Wei, Keng Soon Teh, Roshan Peiris, Yongsoon Choi, Adrian David Cheok, Charissa Lim Mei-Ling, Yin-Leng Theng, Ta Huynh Duy Nguyen, Tran Cong Thien Qui, and Athanasios V. Vasilakos. "Internet-Enabled User Interfaces for Distance Learning." International Journal of Technology and Human Interaction 5, no. 1 (January 2009): 51–77. http://dx.doi.org/10.4018/jthi.2009010105.

16

Medicherla, Harsha, and Ali Sekmen. "Human–robot interaction via voice-controllable intelligent user interface." Robotica 25, no. 5 (September 2007): 521–27. http://dx.doi.org/10.1017/s0263574707003414.

Abstract:
An understanding of how humans and robots can successfully interact to accomplish specific tasks is crucial in creating more sophisticated robots that may eventually become an integral part of human societies. A social robot needs to be able to learn the preferences and capabilities of the people with whom it interacts so that it can adapt its behaviors for more efficient and friendly interaction. Advances in human–computer interaction technologies have been widely used in improving human–robot interaction (HRI). It is now possible to interact with robots via natural communication means such as speech. In this paper, an innovative approach for HRI via voice-controllable intelligent user interfaces is described, along with the design and implementation of such interfaces. The traditional approaches to human–robot user interface design are explained and the advantages of the proposed approach are presented. The designed intelligent user interface, which learns user preferences and capabilities over time, can be controlled by voice. The system was successfully implemented and tested on a Pioneer 3-AT mobile robot. Twenty participants, who were assessed on spatial reasoning ability, directed the robot in spatial navigation tasks to evaluate the effectiveness of voice control in HRI. Time to complete the task, number of steps, and errors were collected. Results indicated that spatial reasoning ability and voice control were reliable predictors of the efficiency of robot teleoperation. 75% of the subjects with high spatial reasoning ability preferred voice control over manual control. The effect of spatial reasoning ability on teleoperation was lower with voice control than with manual control.
17

Kesharwani, Subodh. "Enterprise Resource Planning Interactive Via duct B/w Human & Computer." Asia Pacific Business Review 1, no. 2 (July 2005): 72–82. http://dx.doi.org/10.1177/097324700500100209.

Abstract:
Understanding human thinking is crucial in the design and evaluation of human-computer interaction. Computing devices and applications are now employed beyond the desktop, in diverse environments, and this trend toward ubiquitous computing is accelerating. As computers become a major necessity and connectivity becomes widespread, we are increasingly able to access computing power, data, information and knowledge from anywhere and at any time. Conversely, in order to reap the benefits of such accessible intelligence (commonly known as 'ERP', a current buzzword in business and information technology), we should not overlook the ongoing evolutions and revolutions in human-computer communication and its interfaces. Indeed, as the human factors of information and knowledge systems emerge as a research area in their own right, it is worthwhile to embark on this exciting field of study. The aim of joining human-computer interaction with ERP is to encourage interdisciplinary study and education in user-centered computer systems, and to make human-computer interfaces interactive and intelligent so that users can effectively accomplish their desired tasks. This paper presents a personal view of the HCI landscape in historical perspective. It aims in part to help newcomers to the field grasp the origins of HCI, and in part to provide grounds for a discussion of the field of usability, which is being challenged by social and cultural developments (Jorgensen 2000). The paper argues that in order to properly understand the interaction between ERP systems and human-computer interaction networks, one must scrutinize the mutual flows of influence and the dynamic interaction between the two.
To bridge this gap, the paper critically reviews existing ERP in a humanity context, updates decision drivers, synthesizes a framework based on the literature, and extends the framework as necessary. Finally, the paper presents a personal, historical overview of these developments in ERP systems as seen from an HCI perspective.
18

Kocaballi, Ahmet Baki, Liliana Laranjo, and Enrico Coiera. "Understanding and Measuring User Experience in Conversational Interfaces." Interacting with Computers 31, no. 2 (March 1, 2019): 192–207. http://dx.doi.org/10.1093/iwc/iwz015.

Abstract:
Although various methods have been developed to evaluate conversational interfaces, there has been a lack of methods specifically focusing on evaluating user experience. This paper reviews the understandings of user experience (UX) in the conversational interfaces literature and examines the six questionnaires commonly used for evaluating conversational systems in order to assess the potential suitability of these questionnaires to measure different UX dimensions in that context. The method to examine the questionnaires involved developing an assessment framework for main UX dimensions with relevant attributes and coding the items in the questionnaires according to the framework. The results show that (i) the understandings of UX notably differed in the literature; (ii) four questionnaires included assessment items, to varying extents, to measure hedonic, aesthetic and pragmatic dimensions of UX; (iii) while the dimension of affect was covered by two questionnaires, the playfulness, motivation, and frustration dimensions were covered by one questionnaire only. The largest coverage of UX dimensions was provided by the Subjective Assessment of Speech System Interfaces (SASSI). We recommend using multiple questionnaires to obtain a more complete measurement of user experience or to improve the assessment of a particular UX dimension. Research highlights: Varying understandings of UX in the conversational interfaces literature. A UX assessment framework with UX dimensions and their relevant attributes. Descriptions of the six main questionnaires for evaluating conversational interfaces. A comparison of the six questionnaires based on their coverage of UX dimensions.
19

Jain, Riya, Muskan Jain, Roopal Jain, and Suman Madan. "Human Computer Interaction – Hand Gesture Recognition." Advanced Journal of Graduate Research 11, no. 1 (September 1, 2021): 1–9. http://dx.doi.org/10.21467/ajgr.11.1.1-9.

Abstract:
The creation of intelligent and natural interfaces between users and computer systems has received a lot of attention. Several input modalities, such as vision, audio, and pen, used individually or in combination, have been proposed in support of this endeavour. Human communication relies heavily on the use of gestures to convey information. Gesture recognition is an area of science and language technology that focuses on mathematically quantifying human gestures. Gesture recognition makes it possible for people to communicate naturally with machines without the use of any mechanical devices. Hand gestures are a form of nonverbal communication that can be applied in several fields, including deaf-mute communication, robot control, human-computer interaction (HCI), home automation, and medical applications. Hand gesture research has employed many different methods, including those based on instrumented sensor technology and computer vision. Hand signs may be categorized under a variety of headings, including posture and motion, dynamic and static, or a combination of the two. This paper provides an extensive study of hand gesture methods and explores their applications.
20

Chu, Chi-Cheng, Jianzhong Mo, and Rajit Gadh. "A Quantitative Analysis on Virtual Reality-Based Computer Aided Design System Interfaces." Journal of Computing and Information Science in Engineering 2, no. 3 (September 1, 2002): 216–23. http://dx.doi.org/10.1115/1.1518265.

Abstract:
In this paper, a series of interface tests of interaction approaches for generating geometric shape designs via the multi-sensory user interface of a Virtual Reality (VR) based system is presented. The goal of these interface tests is to identify an effective user interface for a VR-based Computer-Aided Design (CAD) system. The intuitiveness of the VR-based interaction approach arises from the use of natural hand movements/gestures and voice commands that emulate the way in which human beings discuss geometric shapes in reality. In order to evaluate the proposed interaction approach, a prototypical VR-CAD system was implemented. A series of interface tests was performed on the prototypical system to determine the relative efficiency of a set of potential interaction approaches with respect to specific fundamental design tasks. The interface tests and their results are presented in this paper.
21

Cronin, Seán, and Gavin Doherty. "Touchless computer interfaces in hospitals: A review." Health Informatics Journal 25, no. 4 (February 10, 2018): 1325–42. http://dx.doi.org/10.1177/1460458217748342.

Abstract:
The widespread use of technology in hospitals and the difficulty of sterilising computer controls has increased opportunities for the spread of pathogens. This leads to an interest in touchless user interfaces for computer systems. We present a review of touchless interaction with computer equipment in the hospital environment, based on a systematic search of the literature. Sterility provides an implied theme and motivation for the field as a whole, but other advantages, such as hands-busy settings, are also proposed. Overcoming hardware restrictions has been a major theme, but in recent research, technical difficulties have receded. Image navigation is the most frequently considered task and the operating room the most frequently considered environment. Gestures have been implemented for input, system and content control. Most of the studies found have small sample sizes and focus on feasibility, acceptability or gesture-recognition accuracy. We conclude this article with an agenda for future work.
APA, Harvard, Vancouver, ISO, and other styles
22

Alemerien, Khalid. "User-Friendly Security Patterns for Designing Social Network Websites." International Journal of Technology and Human Interaction 13, no. 1 (January 2017): 39–60. http://dx.doi.org/10.4018/ijthi.2017010103.

Full text
Abstract:
The number of users in Social Networking Sites (SNSs) is increasing exponentially. As a result, several security and privacy problems in SNSs have appeared. Part of these problems is caused by insecure Graphical User Interfaces (GUIs). Therefore, the developers of SNSs should take into account the balance between security and usability aspects during the development process. This paper proposes a set of user-friendly security patterns to help SNS developers to design interactive environments which protect the privacy and security of individuals while being highly user friendly. The authors proposed four patterns and evaluated them against the Facebook interfaces. The authors found that participants accepted the interfaces constructed through the proposed patterns more willingly than the Facebook interfaces.
APA, Harvard, Vancouver, ISO, and other styles
23

Barrera-León, Luisa, Nadia Mejia-Molina, Angela Carrillo-Ramos, Leonardo Flórez-Valencia, and Jaime A. Pavlich-Mariscal. "Tukuchiy: a dynamic user interface generator to improve usability." International Journal of Web Information Systems 12, no. 2 (June 20, 2016): 150–76. http://dx.doi.org/10.1108/ijwis-09-2015-0028.

Full text
Abstract:
Purpose This paper aims to present a detailed description of Tukuchiy, a framework to dynamically generate adapted user interfaces. Tukuchiy is based on Runa-Kamachiy, a conceptual integration model that combines human–computer interaction (HCI) standards to create user interfaces with user-centered concepts usually addressed by adaptation. Design/methodology/approach The first step was the definition of three profiles: user, context and interface. These profiles contain information, such as user disabilities, location characteristics (e.g. illumination) and preferences (e.g. interface color or type of system help). The next step is to define the rules that ensure usability for different users. All of this information is used to create the Tukuchiy framework, which generates dynamic user interfaces, based on the specified rules. The last step is the validation through a prototype called Idukay. This prototype uses Tukuchiy to provide e-learning services. The functionality and usability of the system was evaluated by five experts. Findings To validate the approach, a prototype of Tukuchiy, called Idukay, was created. Idukay was evaluated by experts in education, computing and HCI, who based their evaluation in the system usability scale (SUS), a standard usability test. According to them, the prototype complies with the usability criteria addressed by Tukuchiy. Research limitations/implications This work was tested in an academic environment and was validated by different experts. Further tests in a production environment are required to fully validate the approach. Originality/value Tukuchiy generates adapted user interfaces based on user and context profiles. Tukuchiy uses HCI standards to ensure usability of interfaces that dynamically change during execution time. The interfaces generated by Tukuchiy adapt to context, functionality, disabilities (e.g. color blindness) and preferences (usage and presentation) of the user. Tukuchiy enforces specific HCI standards for color utilization, button size and grouping, etc., during execution.
APA, Harvard, Vancouver, ISO, and other styles
24

Van Hees, Kris, and Jan Engelen. "Equivalent representations of multimodal user interfaces." Universal Access in the Information Society 12, no. 4 (September 18, 2012): 339–68. http://dx.doi.org/10.1007/s10209-012-0282-z.

Full text
APA, Harvard, Vancouver, ISO, and other styles
25

Maleki, Maryam, Robert Woodbury, Rhys Goldstein, Simon Breslav, and Azam Khan. "Designing DEVS visual interfaces for end-user programmers." SIMULATION 91, no. 8 (August 2015): 715–34. http://dx.doi.org/10.1177/0037549715598570.

Full text
Abstract:
Although the Discrete Event System specification (DEVS) has over recent decades provided systems engineers with a scalable approach to modeling and simulation, the formalism has seen little uptake in many other disciplines where it could be equally useful. Our observations of end-user programmers confronted with DEVS theory or software suggest that learning barriers are largely responsible for this lack of utilization. To address these barriers, we apply ideas from human–computer interaction to the design of visual interfaces intended to promote their users’ effective knowledge of essential DEVS concepts. The first step is to propose a set of names that make these concepts easier to learn. We then design and provide rationale for visual interfaces for interacting with various elements of DEVS models and simulation runs. Both the names and interface designs are evaluated using the Cognitive Dimensions of Notations framework, which emphasizes trade-offs between 14 aspects of information artifacts. As a whole, this work illustrates a generally applicable design process for the development of interactive formalism-based simulation environments that are learnable and usable to those who are not experts in simulation formalisms.
APA, Harvard, Vancouver, ISO, and other styles
26

Chen, Kuen Meau, and Ming Jen Wang. "Using the Interactive Design of Gesture Recognition in Augmented Reality." Applied Mechanics and Materials 311 (February 2013): 185–90. http://dx.doi.org/10.4028/www.scientific.net/amm.311.185.

Full text
Abstract:
Due to the rapid development of computer hardware, mobile computer systems such as PDAs and high-end mobile phones are capable of running augmented reality (AR, hereafter) systems nowadays. The mouse and keyboard based user interfaces of the traditional AR system may not be suitable for the mobile AR system because of the different hardware interface and usage environment. The goal of this research is to propose a novel computer-vision based human-computer interaction model, which is expected to greatly improve the usability of mobile augmented reality. In this research, we conduct an experiment testing the usability of a new gesture-based interface and propose a product evaluation model for e-commerce applications based on the gesture interface. In the end, we expect the new interaction model to encourage more commercial applications and other research projects. In this paper, we propose a new interface interaction model called PinchAR. The focus of PinchAR is on adapting the interface design to the changing hardware design. This paper summarizes the PinchAR project, that is, the design of an intuitive interaction model in an AR environment. Also included in this paper are the results of the PinchAR experiments.
APA, Harvard, Vancouver, ISO, and other styles
27

Eike, David R., Stephen A. Fleger, and Elizabeth R. Phillips. "User Interface Design Guidelines for Expert Troubleshooting Systems." Proceedings of the Human Factors Society Annual Meeting 30, no. 10 (September 1986): 1024–28. http://dx.doi.org/10.1177/154193128603001019.

Full text
Abstract:
This paper describes the status and preliminary results of an ongoing research project to develop and validate user interface design guidelines for expert troubleshooting systems (ETS). The project, which is sponsored by the Systems Technology Branch of NASA's Goddard Space Flight Center, is part of a larger research program to study the application of emerging user interface technologies to the design and development of user interfaces for Space Station-era systems. The project has two separate research thrusts. The first and central thrust is to develop and validate a set of human engineering guidelines for designing the user interface of an ETS. The second thrust is to design and implement an electronic data base to manage storage and retrieval of the guidelines. This paper discusses the human factors issues that are unique to the design of a user interface for an ETS. This paper is not intended to address the breadth of research that has been conducted on human-computer interaction with conventional systems. This topic is well-represented in established human engineering principles, criteria and practices as described in the literature (e.g., Hendricks, et al, 1982; Norman, et al, 1983; Smith and Mosier, 1985; Norman and Draper, 1986; etc.).
APA, Harvard, Vancouver, ISO, and other styles
28

Wilkinson, Alexander, Michael Gonzales, Patrick Hoey, David Kontak, Dian Wang, Noah Torname, Sam Laderoute, et al. "Design guidelines for human–robot interaction with assistive robot manipulation systems." Paladyn, Journal of Behavioral Robotics 12, no. 1 (January 1, 2021): 392–401. http://dx.doi.org/10.1515/pjbr-2021-0023.

Full text
Abstract:
The design of user interfaces (UIs) for assistive robot systems can be improved through the use of a set of design guidelines presented in this article. As an example, the article presents two different UI designs for an assistive manipulation robot system. We explore the design considerations from these two contrasting UIs. The first is referred to as the graphical user interface (GUI), which the user operates entirely through a touchscreen as a representation of the state of the art. The second is a type of novel UI referred to as the tangible user interface (TUI). The TUI makes use of devices in the real world, such as laser pointers and a projector–camera system that enables augmented reality. Each of these interfaces is designed to allow the system to be operated by an untrained user in an open environment such as a grocery store. Our goal is for these guidelines to aid researchers in the design of human–robot interaction for assistive robot systems, particularly when designing multiple interaction methods for direct comparison.
APA, Harvard, Vancouver, ISO, and other styles
29

Wintersberger, Philipp, Clemens Schartmüller, and Andreas Riener. "Attentive User Interfaces to Improve Multitasking and Take-Over Performance in Automated Driving." International Journal of Mobile Human Computer Interaction 11, no. 3 (July 2019): 40–58. http://dx.doi.org/10.4018/ijmhci.2019070103.

Full text
Abstract:
Automated vehicles promise engagement in side activities, but demand drivers to resume vehicle control in Take-Over situations. This pattern of alternating tasks thus becomes an issue of sequential multitasking, and it is evident that random interruptions result in a performance drop and are further a source of stress/anxiety. To counteract such drawbacks, this article presents an attention-aware architecture for the integration of consumer devices in level-3/4 vehicles and traffic systems. The proposed solution can increase the lead time for transitions, which is useful to determine suitable timings (e.g., between tasks/subtasks) for interruptions in vehicles. Further, it allows responding to Take-Over-Requests directly on handheld devices in emergencies. Different aspects of the Attentive User Interface (AUI) concept were evaluated in two driving simulator studies. Results, mainly based on Take-Over performance and physiological measurements, confirm the positive effect of AUIs on safety and comfort. Consequently, AUIs should be implemented in future automated vehicles.
APA, Harvard, Vancouver, ISO, and other styles
30

Jin, Yucheng, Nava Tintarev, Nyi Nyi Htun, and Katrien Verbert. "Effects of personal characteristics in control-oriented user interfaces for music recommender systems." User Modeling and User-Adapted Interaction 30, no. 2 (October 25, 2019): 199–249. http://dx.doi.org/10.1007/s11257-019-09247-2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Cuomo, Donna L., and Charles D. Bowen. "Stages of User Activity Model as a Basis for User-System Interface Evaluations." Proceedings of the Human Factors Society Annual Meeting 36, no. 16 (October 1992): 1254–58. http://dx.doi.org/10.1177/154193129203601616.

Full text
Abstract:
This paper discusses the results of the first phase of a research project concerned with developing methods and measures of user-system interface effectiveness for command and control systems with graphical, direct manipulation style interfaces. Due to the increased use of prototyping user interfaces during concept definition and demonstration/validation phases, the opportunity exists for human factors engineers to apply evaluation methodologies early enough in the life cycle to make an impact on system design. Understanding and improving user-system interface (USI) evaluation techniques is critical to this process. In 1986, Norman proposed a descriptive “stages of user activity” model of human-computer interaction. Hutchins, Hollan, and Norman (1986) proposed concepts of measures based on the model which would assess the directness of the engagements between the user and the interface at each stage of the model. This first phase of our research program involved applying three USI evaluation techniques to a single interface, and assessing which, if any, provided information on the directness of engagement at each stage of Norman's model. We also classified the problem types identified according to the Smith and Mosier (1986) functional areas. The three techniques used were cognitive walkthrough, heuristic evaluation, and guidelines. It was found that the cognitive walkthrough method applied almost exclusively to the action specification stage. The guidelines were applicable to more of the stages evaluated but all the techniques were weak in measuring semantic distance and all of the stages on the evaluation side of the HCI activity cycle. Improvements to existing or new techniques are required for evaluating the directness of engagement for graphical, direct manipulation style interfaces.
APA, Harvard, Vancouver, ISO, and other styles
32

Pettitt, Michael, and Gary Burnett. "Visual Demand Evaluation Methods for In-Vehicle Interfaces." International Journal of Mobile Human Computer Interaction 2, no. 4 (October 2010): 45–57. http://dx.doi.org/10.4018/jmhci.2010100103.

Full text
Abstract:
The primary aim of the research presented in this paper is developing a method for assessing the visual demand (distraction) afforded by in-vehicle information systems (IVIS). In this respect, two alternative methods are considered within the research. The occlusion technique evaluates IVIS tasks in interrupted vision conditions, predicting likely visual demand. However, the technique necessitates performance-focused user trials utilising robust prototypes, and consequently has limitations as an economic evaluation method. In contrast, the Keystroke Level Model (KLM) has long been viewed as a reliable and valid means of modelling human performance and making task time predictions, therefore not requiring empirical trials or a working prototype. The research includes four empirical studies in which an extended KLM was developed and subsequently validated as a means of predicting measures relevant to the occlusion protocol. Future work will develop the method further to widen its scope, introduce new measures, and link the technique to existing design practices.
APA, Harvard, Vancouver, ISO, and other styles
33

Oh, Ji-Young, and Hong Hua. "Usability of Multi-Scale Interfaces for 3D Workbench Displays." Presence: Teleoperators and Virtual Environments 17, no. 5 (October 1, 2008): 415–40. http://dx.doi.org/10.1162/pres.17.5.415.

Full text
Abstract:
We consider that multi-scale visualization interfaces support users to view different levels of scales simultaneously and to understand large-scale, complex 3D information in 3D display environments. This article presents a user evaluation on three multi-scale interfaces on a 3D workbench display: focus + context (f + c), fixed f + c, and overview + detail (o + d). The interfaces differ in terms of (1) window arrangement and (2) positioning of detailed information relative to the user. Our goal is to identify the effect of these interface differences in large scale information visualization on a 3D workbench. To address the usability of the interfaces for a wide range of applications, we designed two tasks that differ by the level of information integration and cognitive demand. The evaluation results suggest that focus-based interfaces (i.e., the f + c and fixed f + c interfaces) are useful for tasks that require tight coupling between information layers and the o + d interface is useful for tasks performed in a densely populated information space. In terms of interface design on a 3D workbench, it is important to provide an up-close view of the current region of interest for fast scene navigation and an easy way to change viewing direction to see the 3D information from more comfortable directions. The detailed design guidelines based on the evaluation analysis are presented in this article.
APA, Harvard, Vancouver, ISO, and other styles
34

Kulshreshtha, Neelabh. "HCI: Use in Cyber Security." International Journal for Research in Applied Science and Engineering Technology 9, no. VII (July 10, 2021): 109–13. http://dx.doi.org/10.22214/ijraset.2021.36246.

Full text
Abstract:
This paper deals with the uses of HCI (Human-Computer Interaction) in Cyber Security and Information Security. Even though there have been efforts to strengthen the infrastructure of security systems, many endemic problems still exist and are a major source of vulnerabilities. The paper also aims to bridge the gap between the end-user and the technology of HCI. There have been many widespread security problems from the perspective of the security community, many of which arise due to poor interaction between humans and systems. Improving human-computer interaction is an important part of the security system architecture, because even the most secure systems exist to serve human users, carry out human-oriented processes, and are designed and built by humans. HCI is concerned with user interfaces and how they can be improved, because most users' perceptions are based on their experience with these interfaces. There has been immense research in this field, and many advances have been made in the arena of HCI. Information security, on the other hand, has been a major concern in the present world scenario, where everything is done in the digital world.
APA, Harvard, Vancouver, ISO, and other styles
35

Zarikas, Vasilios. "Modeling decisions under uncertainty in adaptive user interfaces." Universal Access in the Information Society 6, no. 1 (April 27, 2007): 87–101. http://dx.doi.org/10.1007/s10209-007-0072-1.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Bias, Randolph G., and Douglas J. Gillan. "Whither the Science of Human-Computer Interaction? A Debate Involving Researchers and Practitioners." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 42, no. 5 (October 1998): 526. http://dx.doi.org/10.1177/154193129804200517.

Full text
Abstract:
The objectives of the debate are (1) to foster a frank discussion and exchange of ideas on the potential value for the design of user interfaces of HCI-related scientific research - both basic research in perception, cognition, and social psychology and applied research on how people interact with computer systems, (2) to identify ways in which technology transfer (from researchers to designers) and design-need transfer (from designers to researchers) can be enhanced, and (3) to continue our on-going attempt to increase the dialogue between HCI researchers and practitioners (see Bias, 1994; Bias, Gillan, and Tullis, 1993; Gillan and Bias, 1992).
APA, Harvard, Vancouver, ISO, and other styles
37

Green, Paul. "ISO Human-Computer Interaction Standards: Finding Them and What They Contain." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 64, no. 1 (December 2020): 400–404. http://dx.doi.org/10.1177/1071181320641090.

Full text
Abstract:
An HFES Task Force is considering if, when, and which, HFES research publications should require the citation of relevant standards, policies, and practices to help translate research into practice. To support the Task Force activities, papers and reports are being written about how to find relevant standards produced by various organizations (e.g., the International Standards Organization, ISO) and the content of those standards. This paper describes the human-computer interaction standards being produced by ISO/IEC Joint Technical Committee 1 (Information Technology), Subcommittees 7 (Software and Systems Engineering) and 35 (User Interfaces), and Technical Committee 159, Subcommittee 4 (Ergonomics of Human-System Interaction), in particular, the contents of the ISO 9241 series and the ISO 2506x series. Also included are instructions on how to find standards using the ISO Browsing Tool and Technical Committee listings, and references to other materials on finding standards and standards-related teaching materials.
APA, Harvard, Vancouver, ISO, and other styles
38

Atzenbeck, Claus. "Interview with Beat Signer." ACM SIGWEB Newsletter, Winter (January 2021): 1–5. http://dx.doi.org/10.1145/3447879.3447881.

Full text
Abstract:
Beat Signer is Professor of Computer Science at the Vrije Universiteit Brussel (VUB) and co-director of the Web & Information Systems Engineering (WISE) research lab. He received a PhD in Computer Science from ETH Zurich where he has also been leading the Interactive Paper lab as a senior researcher for four years. He is an internationally distinguished expert in cross-media technologies and interactive paper solutions. His further research interests include human-information interaction, document engineering, data physicalisation, mixed reality as well as multimodal interaction. He has published more than 100 papers on these topics at international conferences and journals, and received multiple best paper awards. Beat has 20 years of experience in research on cross-media information management and multimodal user interfaces. As part of his PhD research, he investigated the use of paper as an interactive user interface and developed the resource-selector-link (RSL) hypermedia metamodel. With the interactive paper platform (iPaper), he strongly contributed to the interdisciplinary European Paper++ and PaperWorks research projects and the seminal research on paper-digital user interfaces led to innovative cross-media publishing solutions and novel forms of paper-based human-computer interaction. The RSL hypermedia metamodel is nowadays widely applied in his research lab and has, for example, been used for cross-media personal information management, an extensible cross-document link service, the MindXpres presentation platform as well as in a framework for cross-device and Internet of Things applications. For more details, please visit https://beatsigner.com.
APA, Harvard, Vancouver, ISO, and other styles
39

Paschoarelli, Luis Carlos. "Ergonomics and interfaces of traditional information systems – Case study: packaging." InfoDesign - Revista Brasileira de Design da Informação 10, no. 3 (December 23, 2013): 313–22. http://dx.doi.org/10.51358/id.v10i3.211.

Full text
Abstract:
The contemporary world is characterized, among other factors, by the influence of the new computer information systems on the behavior of individuals. However, traditional information systems still have interaction problems with users. The aim of this study was to determine whether the interaction aspects between users and traditional information systems (particularly the graphics) have been fully studied. To do so, the ergonomic aspects and usability of such systems were reviewed, with emphasis on the problems of visibility, legibility and readability. From that criteria, the evolution of ergonomic studies of information systems was reviewed (bibliometrics technique); and examples of ergonomic and usability problems in packaging were demonstrated (case study). The results confirm that traditional information systems still have problems of human–system interaction, hindering the effective perception of information.
APA, Harvard, Vancouver, ISO, and other styles
40

Stepanyan, Ivan V. "Ergonomic qualities of graphic user interfaces (GUI): state and evolution." Occupational Health and Industrial Ecology, no. 12 (February 15, 2019): 51–56. http://dx.doi.org/10.31089/1026-9428-2018-12-51-57.

Full text
Abstract:
More workers are involved in interaction with graphic user interfaces for most of the working shift. However, low ergonomic quality or incorrect usage of a graphic user interface could result in a risk of unfavorable influence on workers' health. The authors revealed and classified typical scenarios of graphic user interface usage. Various types of graphic user interface and operator occupations are characterized by various parameters of exertion, both biomechanical and psycho-physiological. Among the main elements of a graphic user interface are the presence or absence of a mouse or joystick, intuitive clearness, a balanced palette, fixed positions of graphic elements, comfort level, etc. A review of various graphic user interfaces and analysis of their characteristics demonstrated the possibility of various occupational risk factors. Some disclosed ergonomic problems are connected with the incorporation of graphic user interfaces into various information technologies and systems. The authors presented the role of ergonomic characteristics of graphic user interfaces for the safe and effective work of operators, and gave examples of algorithms to visualize large information volumes for easier comprehension and analysis. Correct usage of interactive means of computer visualization, with competent design and observance of ergonomic principles, will optimize mental work in innovative activity and preserve operators' health. Prospective issues in this sphere are ergonomic interfaces developed with consideration of information hygiene principles, big data analysis technology and automatically generated cognitive graphics.
APA, Harvard, Vancouver, ISO, and other styles
41

Marsh, William E., Jonathan W. Kelly, Veronica J. Dark, and James H. Oliver. "Cognitive Demands of Semi-Natural Virtual Locomotion." Presence: Teleoperators and Virtual Environments 22, no. 3 (August 1, 2013): 216–34. http://dx.doi.org/10.1162/pres_a_00152.

Full text
Abstract:
There is currently no fully natural, general-purpose locomotion interface. Instead, interfaces such as gamepads or treadmills are required to explore large virtual environments (VEs). Furthermore, sensory feedback that would normally be used in real-world movement is often restricted in VR due to constraints such as reduced field of view (FOV). Accommodating these limitations with locomotion interfaces afforded by most virtual reality (VR) systems may induce cognitive demands on the user that are unrelated to the primary task to be performed in the VE. Users of VR systems often have many competing task demands, and additional cognitive demands during locomotion must compete for finite resources. Two studies were previously reported investigating the working memory demands imposed by semi-natural locomotion interfaces (Study 1) and reduced sensory feedback (Study 2). This paper expands on the previously reported results and adds discussion linking the two studies. The results indicated that locomotion with a less natural interface increases spatial working memory demands, and that locomotion with a lower FOV increases general attentional demands. These findings are discussed in terms of their practical implications for selection of locomotion interfaces when designing VEs.
APA, Harvard, Vancouver, ISO, and other styles
42

Bowman, Doug A., Ernst Kruijff, Joseph J. LaViola, and Ivan Poupyrev. "An Introduction to 3-D User Interface Design." Presence: Teleoperators and Virtual Environments 10, no. 1 (February 2001): 96–108. http://dx.doi.org/10.1162/105474601750182342.

Full text
Abstract:
Three-dimensional user interface design is a critical component of any virtual environment (VE) application. In this paper, we present a broad overview of 3-D interaction and user interfaces. We discuss the effect of common VE hardware devices on user interaction, as well as interaction techniques for generic 3-D tasks and the use of traditional 2-D interaction styles in 3-D environments. We divide most user-interaction tasks into three categories: navigation, selection/manipulation, and system control. Throughout the paper, our focus is on presenting not only the available techniques but also practical guidelines for 3-D interaction design and widely held myths. Finally, we briefly discuss two approaches to 3-D interaction design and some example applications with complex 3-D interaction requirements. We also present an annotated online bibliography as a reference companion to this article.
APA, Harvard, Vancouver, ISO, and other styles
43

Di Tore, Pio Alfredo, Nadia Carlomagno, Stefano Di Tore, and Maurizio Sibilio. "Digital Umwelt." International Journal of Digital Literacy and Digital Competence 4, no. 1 (January 2013): 38–46. http://dx.doi.org/10.4018/jdldc.2013010104.

Full text
Abstract:
The spread of Natural Interfaces, based on devices which restore to Human-Computer Interaction the natural paradigms of human interaction (sound, voice, touch, movement), is displacing graphic interfaces: the interaction doesn't occur "through the mirror" (Carroll, 2012) of the screen, but takes place through movement, in the natural space of the user, in relation to an augmented (digital) umwelt that interacts continuously with the user's whole body. The aim of this work is to present natural interfaces as the tool that constitutes the effective place of convergence between body and movement, manipulation of spatial reference systems and man-machine interaction, and to inquire into their possible didactic declinations.
APA, Harvard, Vancouver, ISO, and other styles
44

Marsh, William E., Jonathan W. Kelly, Julie Dickerson, and James H. Oliver. "Fuzzy Navigation Engine: Mitigating the Cognitive Demands of Semi-Natural Locomotion." Presence: Teleoperators and Virtual Environments 23, no. 3 (October 1, 2014): 300–319. http://dx.doi.org/10.1162/pres_a_00195.

Full text
Abstract:
Many interfaces exist for locomotion in virtual reality, although they are rarely considered fully natural. Past research has found that using such interfaces places cognitive demands on the user, with unnatural actions and concurrent tasks competing for finite cognitive resources. Notably, using semi-natural interfaces leads to poor performance on concurrent tasks requiring spatial working memory. This paper presents an adaptive system designed to track a user's concurrent cognitive task load and adjust interface parameters accordingly, varying the extent to which movement is fully natural. A fuzzy inference system is described and the results of an initial validation study are presented. Users of this adaptive interface demonstrated better performance than users of a baseline interface on several movement metrics, indicating that the adaptive interface helped users manage the demands of concurrent spatial tasks in a virtual environment. However, participants experienced some unexpected difficulties when faced with a concurrent verbal task.
APA, Harvard, Vancouver, ISO, and other styles
45

Halmetoja, Esa, and Francisco Forns-Samso. "Evaluating graphical user interfaces for buildings." Journal of Corporate Real Estate 22, no. 1 (January 11, 2020): 48–70. http://dx.doi.org/10.1108/jcre-08-2019-0037.

Full text
Abstract:
Purpose The purpose of this paper is to evaluate six different graphical user interfaces (GUIs) for facilities operations using human–machine interaction (HMI) theories. Design/methodology/approach The authors used a combined multi-functional method that includes a review of the theories behind HMI for GUIs as its first approach. Consequently, heuristic evaluations were conducted to identify usability problems in a professional context. Ultimately, thematic interviews were conducted with property managers and service staff to determine special needs for the interaction of humans and the built environment. Findings The heuristic evaluation revealed that not all the studied applications were complete when the study was done. The significant non-motivational factor was slowness, and a lighter application means the GUI is more comfortable and faster to use. The evaluators recommended not using actions that deviate from regular practice. Proper implementation of the GUI would make it easier and quicker to work on property maintenance and management. The thematic interviews concluded that the GUIs form an excellent solution that enables communication between the occupant, owner and service provider. Indoor conditions monitoring was seen as the most compelling use case for GUIs. Two-dimensional (2D) layouts are more demonstrative and faster than three-dimensional (3D) layouts for monitoring purposes. Practical implications The study provides an objective view of the strengths and weaknesses of specific types of GUI. So, it can help to select a suitable GUI for a particular environment. The 3D view is not seen as necessary for monitoring indoor conditions room by room or sending a service request. Many occupants' services can be implemented without any particular layout. On the other hand, some advanced services were desired for the occupants, such as monitoring occupancy, making space reservations and people tracking. These aspects require a 2D layout at least. The building information model is seen as useful, especially when monitoring complex technical systems. Originality/value Earlier investigations have primarily concentrated on investigating human–computer interaction. The authors studied human–building interaction instead. The notable difference to previous efforts is that the authors considered the GUI as a medium with which to communicate with the built environment, and looked at its benefits for top-level processes, not for the user interface itself.
APA, Harvard, Vancouver, ISO, and other styles
46

Shatilov, Kirill A., Dimitris Chatzopoulos, Lik-Hang Lee, and Pan Hui. "Emerging ExG-based NUI Inputs in Extended Realities: A Bottom-up Survey." ACM Transactions on Interactive Intelligent Systems 11, no. 2 (July 19, 2021): 1–49. http://dx.doi.org/10.1145/3457950.

Full text
Abstract:
Incremental and quantitative improvements of two-way interactions with extended realities (XR) are contributing toward a qualitative leap into a state of XR ecosystems being efficient, user-friendly, and widely adopted. However, there are multiple barriers on the way toward the omnipresence of XR; among them are the following: computational and power limitations of portable hardware, social acceptance of novel interaction protocols, and usability and efficiency of interfaces. In this article, we overview and analyse novel natural user interfaces based on sensing electrical bio-signals that can be leveraged to tackle the challenges of XR input interactions. Electroencephalography-based brain–machine interfaces that enable thought-only hands-free interaction, myoelectric input methods that track body gestures employing electromyography, and gaze-tracking electrooculography input interfaces are examples of electrical bio-signal sensing technologies united under the collective concept of ExG. ExG signal acquisition modalities provide a way to interact with computing systems using natural intuitive actions, enriching interactions with XR. This survey provides a bottom-up overview starting from (i) underlying biological aspects and signal acquisition techniques, (ii) ExG hardware solutions, (iii) ExG-enabled applications, (iv) discussion on social acceptance of such applications and technologies, as well as (v) research challenges, application directions, and open problems, evidencing the benefits that ExG-based natural user interface inputs can introduce to the area of XR.
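The acquisition-to-interaction pipeline this survey covers (bio-signal capture, filtering, then intent detection) can be illustrated with a deliberately minimal sketch: a synthetic electrooculography-like trace is smoothed with a moving average, and saccade-like gaze shifts are detected by thresholding its first difference. All names, thresholds, and the signal itself are hypothetical; real ExG systems use proper band-pass filtering and trained classifiers.

```python
# Toy ExG-style input pipeline: smooth a synthetic EOG trace,
# then flag sharp jumps as saccade-like events. Values and
# thresholds are illustrative only, not from the surveyed work.

def moving_average(signal, window=5):
    """Simple FIR smoothing to suppress high-frequency noise."""
    out = []
    for i in range(len(signal)):
        lo = max(0, i - window + 1)
        chunk = signal[lo:i + 1]
        out.append(sum(chunk) / len(chunk))
    return out

def detect_saccades(signal, threshold=0.5):
    """Return sample indices where the smoothed trace jumps sharply."""
    smoothed = moving_average(signal)
    events = []
    for i in range(1, len(smoothed)):
        if abs(smoothed[i] - smoothed[i - 1]) > threshold:
            events.append(i)
    return events

# Synthetic trace: baseline, a leftward gaze shift, then a rightward one.
trace = [0.0] * 20 + [3.0] * 20 + [-3.0] * 20
events = detect_saccades(trace)
```

Detected event indices could then be mapped to gaze-driven XR input commands, which is the role the survey assigns to electrooculography-based interfaces.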
APA, Harvard, Vancouver, ISO, and other styles
47

Donnerer, Michael, and Anthony Steed. "Using a P300 Brain–Computer Interface in an Immersive Virtual Environment." Presence: Teleoperators and Virtual Environments 19, no. 1 (February 1, 2010): 12–24. http://dx.doi.org/10.1162/pres.19.1.12.

Full text
Abstract:
Brain–computer interfaces (BCIs) provide a novel form of human–computer interaction. The purpose of these systems is to aid disabled people by affording them the possibility of communication and environment control. In this study, we present experiments using a P300-based BCI in a fully immersive virtual environment (IVE). P300 BCIs depend on presenting several stimuli to the user. We propose two ways of embedding the stimuli in the virtual environment: one that uses 3D objects as targets, and a second that uses a virtual overlay. Both have been shown to work effectively, with no significant difference in selection accuracy. The results suggest that P300 BCIs can be used successfully in a 3D environment, which in turn suggests some novel ways of using BCIs in real-world environments.
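The P300 selection principle underlying this kind of BCI (average the EEG epochs time-locked to each candidate stimulus, then choose the stimulus whose average shows the strongest positive deflection around 300 ms after onset) can be sketched with synthetic data. The data, stimulus names, and window below are hypothetical, and a real system would use a trained classifier such as LDA rather than a raw amplitude comparison.

```python
# Toy P300 target selection: per-stimulus epoch averaging followed
# by picking the stimulus with the largest mean amplitude inside a
# P300-like window. All values are synthetic and illustrative.

def average_epochs(epochs):
    """Point-wise mean of equal-length single-trial epochs."""
    n = len(epochs)
    return [sum(samples) / n for samples in zip(*epochs)]

def select_target(epochs_by_stimulus, p300_window):
    """Pick the stimulus with the largest mean amplitude in the window."""
    lo, hi = p300_window
    best, best_score = None, float("-inf")
    for stimulus, epochs in epochs_by_stimulus.items():
        avg = average_epochs(epochs)
        score = sum(avg[lo:hi]) / (hi - lo)
        if score > best_score:
            best, best_score = stimulus, score
    return best

# Two stimuli, three repetitions each; the attended one ("overlay")
# carries a positive bump in samples 3-5, standing in for ~300 ms.
noise = [0.1, -0.2, 0.0, 0.1, -0.1, 0.2, 0.0, -0.1]
p300  = [0.1, -0.2, 0.0, 2.1,  1.9, 2.2, 0.0, -0.1]
epochs = {"3d_object": [noise] * 3, "overlay": [p300] * 3}
target = select_target(epochs, p300_window=(3, 6))
```

Averaging across repetitions is what makes the P300 visible at all: the event-related potential is far smaller than single-trial EEG noise, so accuracy grows with the number of stimulus presentations.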
APA, Harvard, Vancouver, ISO, and other styles
48

LUKOSCH, STEPHAN, and MOHAMED BOURIMI. "TOWARDS AN ENHANCED ADAPTABILITY AND USABILITY OF WEB-BASED COLLABORATIVE SYSTEMS." International Journal of Cooperative Information Systems 17, no. 04 (December 2008): 467–94. http://dx.doi.org/10.1142/s0218843008001944.

Full text
Abstract:
Web-based collaborative systems support a variety of complex scenarios. Not only the interaction between a user and a computer has to be modeled, but also the interaction among the collaborating users. As a result, the user interfaces of many web-based collaborative systems are quite complex, yet hardly use approved user interface concepts for the design of interactive systems. Web-based collaborative systems thereby complicate the users' interaction with the system and with each other. In this article, we describe how the adaptability and usability of such systems can be improved, in particular by supporting direct manipulation techniques for navigation as well as tailoring. The new tailoring and navigation functionality is complemented by new forms of visualizing synchronous awareness information and supporting communication in web-based systems. We demonstrate this by retrofitting the web-based collaborative system CURE, while highlighting the concepts that can easily be transferred to other web-based collaborative systems.
APA, Harvard, Vancouver, ISO, and other styles
49

Jones, Sara. "Graphical interfaces for knowledge engineering: an overview of relevant literature." Knowledge Engineering Review 3, no. 3 (September 1988): 221–47. http://dx.doi.org/10.1017/s0269888900004483.

Full text
Abstract:
Literature relevant to the design and development of graphical interfaces for knowledge-based systems is briefly reviewed and discussed. The efficiency of human-computer interaction depends to a large extent on the degree to which the human-machine interface can meet the user's cognitive needs and accurately support his or her natural cognitive processes and structures. Graphical interfaces can often be particularly suitable in this respect, especially in cases where the user's "natural idiom" is graphical. Illustrated examples are given of the way in which graphical interfaces have successfully been used in various fields, with particular emphasis on their use in the field of knowledge-based systems. The paper ends with a brief discussion of possible future developments in the field of knowledge-based system interfaces and of the role that graphics might play in such developments.
APA, Harvard, Vancouver, ISO, and other styles
50

Hedley, Nicholas R., Mark Billinghurst, Lori Postner, Richard May, and Hirokazu Kato. "Explorations in the Use of Augmented Reality for Geographic Visualization." Presence: Teleoperators and Virtual Environments 11, no. 2 (April 2002): 119–33. http://dx.doi.org/10.1162/1054746021470577.

Full text
Abstract:
In this paper, we describe two explorations in the use of hybrid user interfaces for collaborative geographic data visualization. Our first interface combines three technologies: augmented reality (AR), immersive virtual reality (VR), and computer vision-based hand and object tracking. Wearing a lightweight display with an attached camera, users can look at a real map and see three-dimensional virtual terrain models overlaid on the map. From this AR interface, they can fly in and experience the model immersively, or use free hand gestures or physical markers to change the data representation. Building on this work, our second interface explores alternative interface techniques, including a zoomable user interface, paddle interactions, and pen annotations. We describe the system hardware and software and the implications for GIS and spatial science applications.
APA, Harvard, Vancouver, ISO, and other styles