Journal articles on the topic 'User interfaces (Computer systems) Human-computer interaction. Computer terminals'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 journal articles for your research on the topic 'User interfaces (Computer systems) Human-computer interaction. Computer terminals.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles across a wide variety of disciplines and organise your bibliography correctly.

1

Alonso-Valerdi, Luz María, and Víctor Rodrigo Mercado-García. "Enrichment of Human-Computer Interaction in Brain-Computer Interfaces via Virtual Environments." Computational Intelligence and Neuroscience 2017 (2017): 1–12. http://dx.doi.org/10.1155/2017/6076913.

Abstract:
Tridimensional representations stimulate cognitive processes that are the core and foundation of human-computer interaction (HCI). Those cognitive processes take place while a user navigates and explores a virtual environment (VE) and are mainly related to spatial memory storage, attention, and perception. VEs have many distinctive features (e.g., involvement, immersion, and presence) that can significantly improve HCI in highly demanding and interactive systems such as brain-computer interfaces (BCI). A BCI is a nonmuscular communication channel that attempts to reestablish the interaction between an individual and his/her environment. Although BCI research started in the sixties, this technology is not yet efficient or reliable for everyone at any time. Over the past few years, researchers have argued that the main BCI flaws could be associated with HCI issues. The evidence presented thus far shows that VEs can (1) set out working environmental conditions, (2) maximize the efficiency of BCI control panels, (3) implement navigation systems based not only on user intentions but also on user emotions, and (4) regulate user mental state to increase the differentiation between control and noncontrol modalities.
2

Lopes, José, Francisco Alegria, Luís Redondo, Jorge Rocha, and Eduardo Alves. "Computer Control of a 3 MV Van de Graaff Accelerator." Metrology and Measurement Systems 17, no. 3 (2010): 415–25. http://dx.doi.org/10.2478/v10178-010-0035-3.

Abstract:
The development of accurate computer control of a 3 MV Van de Graaff accelerator is described. The system comprises the accelerator turn-on and turn-off procedures during a normal run, including the setting of the terminal voltage, ion-source light-up, beam focusing, and control of ion-beam current and energy during operation. In addition, the computer monitors the vacuum and keeps a detailed record of the most important events during a normal run. The control system uses a LabVIEW application for interaction with the operator and an I/O board that interfaces the computer with the accelerator. Under everyday operating conditions, the implemented control can turn the machine on and off in about the same time as a specialized technician. As a result, more users can now run experiments on the accelerator without the help of a specialized operator, which in turn increases the number of hours during which the accelerator can be used.
3

Konstantopoulos, Stasinos, and Vangelis Karkaletsis. "System Personality and Adaptivity in Affective Human-Computer Interaction." International Journal on Artificial Intelligence Tools 22, no. 02 (2013): 1350014. http://dx.doi.org/10.1142/s0218213013500140.

Abstract:
It has been demonstrated that human users attribute a personality to the computer interfaces they use, regardless of whether one has been explicitly encoded in the system's design. In this paper, we explore a method for exercising explicit control over the personality that a spoken human-robot interface is perceived by its users to exhibit. Our method focuses on the interaction between users and semantic knowledge-based systems, where the goal of the interaction is that information from the semantic store is relayed to the user. We describe a personality modelling method that complements a standard dialogue manager by calculating parameters related to adaptivity and emotion for the various interaction modules that realize the system's dialogue acts. This calculation involves the planned act, the user adaptivity model, the system's own goals, but also a machine representation of the personality that we want the system to exhibit, so that systems with different personalities will react differently even in the same dialogue state and with the same user or user type.
4

Ferreira, Alessandro Luiz Stamatto, Leonardo Cunha de Miranda, Erica Esteves Cunha de Miranda, and Sarah Gomes Sakamoto. "A Survey of Interactive Systems based on Brain-Computer Interfaces." Journal on Interactive Systems 4, no. 1 (2013): 1. http://dx.doi.org/10.5753/jis.2013.623.

Abstract:
A Brain-Computer Interface (BCI) enables users to interact with a computer through their brain's biological signals alone, without the need to use muscles. BCI is an emerging research area but is still relatively immature. It is nevertheless important to reflect on the different aspects of the Human-Computer Interaction (HCI) area related to BCIs, considering that BCIs will be part of interactive systems in the near future. BCIs must serve not only disabled users but also healthy ones, improving interaction for end users. Virtual Reality (VR) is also an important part of interactive systems, and combined with BCI it could greatly enhance user interaction, improving the user experience by using brain signals as input with immersive environments as output. This paper addresses only noninvasive BCIs, since this kind of signal capture is the only one that presents no risk to human health. As contributions of this work, we highlight a survey of interactive systems based on BCIs focusing on HCI and VR applications, and a discussion of the challenges and future of this subject matter.
5

Murano, Pietro, and Patrik O’Brian Holt. "Anthropomorphic Feedback in User Interfaces." International Journal of Technology and Human Interaction 3, no. 4 (2007): 52–63. http://dx.doi.org/10.4018/jthi.2007100104.

6

Hicinbothom, James H., and Wayne W. Zachary. "A Tool for Automatically Generating Transcripts of Human-Computer Interaction." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 37, no. 15 (1993): 1042. http://dx.doi.org/10.1177/154193129303701514.

Abstract:
Recording transcripts of human-computer interaction can be a very time-consuming activity. This demonstration presents a new technology to automatically capture such transcripts in Open Systems environments (e.g., from graphical user interfaces running on the X Window System). This technology forms an infrastructure for performing distributed usability testing and human-computer interaction research, by providing integrated data capture, storage, browsing, retrieval, and export capabilities. It may lead to evaluation cost reductions throughout the software development life cycle.
7

Liu, Wei, Keng Soon Teh, Roshan Peiris, et al. "Internet-Enabled User Interfaces for Distance Learning." International Journal of Technology and Human Interaction 5, no. 1 (2009): 51–77. http://dx.doi.org/10.4018/jthi.2009010105.

8

Kocaballi, Ahmet Baki, Liliana Laranjo, and Enrico Coiera. "Understanding and Measuring User Experience in Conversational Interfaces." Interacting with Computers 31, no. 2 (2019): 192–207. http://dx.doi.org/10.1093/iwc/iwz015.

Abstract:
Although various methods have been developed to evaluate conversational interfaces, there has been a lack of methods specifically focused on evaluating user experience. This paper reviews the understandings of user experience (UX) in the conversational-interfaces literature and examines the six questionnaires commonly used for evaluating conversational systems, in order to assess their suitability for measuring different UX dimensions in that context. The examination involved developing an assessment framework of the main UX dimensions with relevant attributes and coding the questionnaire items according to the framework. The results show that (i) understandings of UX differed notably in the literature; (ii) four questionnaires included assessment items, to varying extents, measuring the hedonic, aesthetic, and pragmatic dimensions of UX; (iii) while the dimension of affect was covered by two questionnaires, the playfulness, motivation, and frustration dimensions were each covered by only one questionnaire. The largest coverage of UX dimensions is provided by the Subjective Assessment of Speech System Interfaces (SASSI). We recommend using multiple questionnaires to obtain a more complete measurement of user experience or to improve the assessment of a particular UX dimension. Research highlights: varying understandings of UX in the conversational-interfaces literature; a UX assessment framework with UX dimensions and their relevant attributes; descriptions of the six main questionnaires for evaluating conversational interfaces; a comparison of the six questionnaires based on their coverage of UX dimensions.
9

Chu, Chi-Cheng, Jianzhong Mo, and Rajit Gadh. "A Quantitative Analysis on Virtual Reality-Based Computer Aided Design System Interfaces." Journal of Computing and Information Science in Engineering 2, no. 3 (2002): 216–23. http://dx.doi.org/10.1115/1.1518265.

Abstract:
In this paper, a series of interface tests of interaction approaches for generating geometric shape designs via the multi-sensory user interface of a Virtual Reality (VR) based system is presented. The goal of these tests is to identify an effective user interface for a VR-based Computer-Aided Design (CAD) system. The intuitiveness of the VR-based interaction approach arises from the use of natural hand movements/gestures and voice commands that emulate the way human beings discuss geometric shapes in reality. To evaluate the proposed interaction approach, a prototype VR-CAD system was implemented. A series of interface tests was performed on the prototype system to determine the relative efficiency of a set of potential interaction approaches with respect to specific fundamental design tasks. The interface tests and their results are presented in this paper.
10

Jin, Yucheng, Nava Tintarev, Nyi Nyi Htun, and Katrien Verbert. "Effects of personal characteristics in control-oriented user interfaces for music recommender systems." User Modeling and User-Adapted Interaction 30, no. 2 (2019): 199–249. http://dx.doi.org/10.1007/s11257-019-09247-2.

11

Clark, Leigh, Philip Doyle, Diego Garaialde, et al. "The State of Speech in HCI: Trends, Themes and Challenges." Interacting with Computers 31, no. 4 (2019): 349–71. http://dx.doi.org/10.1093/iwc/iwz016.

Abstract:
Speech interfaces are growing in popularity. Through a review of 99 research papers, this work maps the trends, themes, findings, and methods of empirical research on speech interfaces in the field of human–computer interaction (HCI). We find that studies are usability/theory-focused or explore wider system experiences, evaluating Wizard of Oz setups, prototypes, or deployed systems. Measuring task and interaction was common, as was using self-report questionnaires to measure concepts like usability and user attitudes. A thematic analysis of the research found that speech HCI work focuses on nine key topics: system speech production, design insight, modality comparison, experiences with interactive voice response systems, assistive technology and accessibility, user speech production, using speech technology for development, people's experiences with intelligent personal assistants, and how user memory affects speech interface interaction. From these insights we identify gaps and challenges in speech research, notably taking into account technological advancements, the need to develop theories of speech interface interaction, growing critical mass in this domain, increasing design work, and expanding research from single- to multiple-user interaction contexts so as to reflect current use contexts. We also highlight the need to improve measure reliability, validity, and consistency, to deploy systems in the wild, and to reduce barriers to building fully functional speech interfaces for research. Research highlights: most papers focused on usability/theory-based or wider system-experience research, with a focus on Wizard of Oz and developed systems; questionnaires on usability and user attitudes were often used, but few were reliable or validated; thematic analysis showed nine primary research topics; challenges were identified in theoretical approaches and design guidelines, engaging with technological advances, multiple-user and in-the-wild contexts, critical research mass, and barriers to building speech interfaces.
12

Wintersberger, Philipp, Clemens Schartmüller, and Andreas Riener. "Attentive User Interfaces to Improve Multitasking and Take-Over Performance in Automated Driving." International Journal of Mobile Human Computer Interaction 11, no. 3 (2019): 40–58. http://dx.doi.org/10.4018/ijmhci.2019070103.

Abstract:
Automated vehicles promise engagement in side activities, but demand drivers to resume vehicle control in Take-Over situations. This pattern of alternating tasks thus becomes an issue of sequential multitasking, and it is evident that random interruptions result in a performance drop and are further a source of stress/anxiety. To counteract such drawbacks, this article presents an attention-aware architecture for the integration of consumer devices in level-3/4 vehicles and traffic systems. The proposed solution can increase the lead time for transitions, which is useful to determine suitable timings (e.g., between tasks/subtasks) for interruptions in vehicles. Further, it allows responding to Take-Over-Requests directly on handheld devices in emergencies. Different aspects of the Attentive User Interface (AUI) concept were evaluated in two driving simulator studies. Results, mainly based on Take-Over performance and physiological measurements, confirm the positive effect of AUIs on safety and comfort. Consequently, AUIs should be implemented in future automated vehicles.
13

Alemerien, Khalid. "User-Friendly Security Patterns for Designing Social Network Websites." International Journal of Technology and Human Interaction 13, no. 1 (2017): 39–60. http://dx.doi.org/10.4018/ijthi.2017010103.

Abstract:
The number of users in Social Networking Sites (SNSs) is increasing exponentially. As a result, several security and privacy problems in SNSs have appeared. Part of these problems is caused by insecure Graphical User Interfaces (GUIs). Therefore, the developers of SNSs should take into account the balance between security and usability aspects during the development process. This paper proposes a set of user-friendly security patterns to help SNS developers to design interactive environments which protect the privacy and security of individuals while being highly user friendly. The authors proposed four patterns and evaluated them against the Facebook interfaces. The authors found that participants accepted the interfaces constructed through the proposed patterns more willingly than the Facebook interfaces.
14

Van Hees, Kris, and Jan Engelen. "Equivalent representations of multimodal user interfaces." Universal Access in the Information Society 12, no. 4 (2012): 339–68. http://dx.doi.org/10.1007/s10209-012-0282-z.

15

Wilkinson, Alexander, Michael Gonzales, Patrick Hoey, et al. "Design guidelines for human–robot interaction with assistive robot manipulation systems." Paladyn, Journal of Behavioral Robotics 12, no. 1 (2021): 392–401. http://dx.doi.org/10.1515/pjbr-2021-0023.

Abstract:
The design of user interfaces (UIs) for assistive robot systems can be improved through the set of design guidelines presented in this article. As an example, the article presents two different UI designs for an assistive manipulation robot system and explores the design considerations behind these two contrasting UIs. The first, referred to as the graphical user interface (GUI), is operated entirely through a touchscreen and represents the state of the art. The second is a novel type of UI referred to as the tangible user interface (TUI). The TUI makes use of devices in the real world, such as laser pointers and a projector–camera system that enables augmented reality. Each of these interfaces is designed to allow the system to be operated by an untrained user in an open environment such as a grocery store. Our goal is for these guidelines to aid researchers in the design of human–robot interaction for assistive robot systems, particularly when designing multiple interaction methods for direct comparison.
16

Green, Paul. "ISO Human-Computer Interaction Standards: Finding Them and What They Contain." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 64, no. 1 (2020): 400–404. http://dx.doi.org/10.1177/1071181320641090.

Abstract:
An HFES Task Force is considering if, when, and which HFES research publications should require the citation of relevant standards, policies, and practices to help translate research into practice. To support the Task Force's activities, papers and reports are being written about how to find relevant standards produced by various organizations (e.g., the International Organization for Standardization, ISO) and about the content of those standards. This paper describes the human-computer interaction standards being produced by ISO/IEC Joint Technical Committee 1 (Information Technology), Subcommittees 7 (Software and Systems Engineering) and 35 (User Interfaces), and Technical Committee 159, Subcommittee 4 (Ergonomics of Human-System Interaction), in particular the contents of the ISO 9241 series and the ISO 2506x series. Also included are instructions on how to find standards using the ISO browsing tool and technical committee listings, and references to other materials on finding standards and standards-related teaching materials.
17

Bias, Randolph G., and Douglas J. Gillan. "Whither the Science of Human-Computer Interaction? A Debate Involving Researchers and Practitioners." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 42, no. 5 (1998): 526. http://dx.doi.org/10.1177/154193129804200517.

Abstract:
The objectives of the debate are (1) to foster a frank discussion and exchange of ideas on the potential value of HCI-related scientific research for the design of user interfaces - both basic research in perception, cognition, and social psychology and applied research on how people interact with computer systems; (2) to identify ways in which technology transfer (from researchers to designers) and design-need transfer (from designers to researchers) can be enhanced; and (3) to continue our ongoing attempt to increase the dialogue between HCI researchers and practitioners (see Bias, 1994; Bias, Gillan, and Tullis, 1993; Gillan and Bias, 1992).
18

Paton, Chris, Andre W. Kushniruk, Elizabeth M. Borycki, Mike English, and Jim Warren. "Improving the Usability and Safety of Digital Health Systems: The Role of Predictive Human-Computer Interaction Modeling." Journal of Medical Internet Research 23, no. 5 (2021): e25281. http://dx.doi.org/10.2196/25281.

Abstract:
In this paper, we describe techniques for predictive modeling of human-computer interaction (HCI) and discuss how they could be used in the development and evaluation of user interfaces for digital health systems such as electronic health record systems. Predictive HCI modeling has the potential to improve the generalizability of usability evaluations of digital health interventions beyond specific contexts, especially when integrated with models of distributed cognition and higher-level sociotechnical frameworks. Evidence generated from building and testing HCI models of the user interface (UI) components for different types of digital health interventions could be valuable for informing evidence-based UI design guidelines to support the development of safer and more effective UIs for digital health interventions.
19

Pettitt, Michael, and Gary Burnett. "Visual Demand Evaluation Methods for In-Vehicle Interfaces." International Journal of Mobile Human Computer Interaction 2, no. 4 (2010): 45–57. http://dx.doi.org/10.4018/jmhci.2010100103.

Abstract:
The primary aim of the research presented in this paper is to develop a method for assessing the visual demand (distraction) imposed by in-vehicle information systems (IVIS). Two alternative methods are considered within the research. The occlusion technique evaluates IVIS tasks under interrupted-vision conditions, predicting likely visual demand. However, the technique necessitates performance-focused user trials utilising robust prototypes, and consequently has limitations as an economical evaluation method. In contrast, the Keystroke Level Model (KLM) has long been viewed as a reliable and valid means of modelling human performance and making task-time predictions, and therefore requires neither empirical trials nor a working prototype. The research includes four empirical studies in which an extended KLM was developed and subsequently validated as a means of predicting measures relevant to the occlusion protocol. Future work will develop the method further to widen its scope, introduce new measures, and link the technique to existing design practices.
20

Gupta, Brij B., and Shaifali Narayan. "A Key-Based Mutual Authentication Framework for Mobile Contactless Payment System Using Authentication Server." Journal of Organizational and End User Computing 33, no. 2 (2021): 1–16. http://dx.doi.org/10.4018/joeuc.20210301.oa1.

Abstract:
This paper presents a framework for mutual authentication between a user device and a point-of-sale (POS) machine using magnetic secure transmission (MST) to prevent the wormhole attack in Samsung Pay. The primary attribute of this method is authenticating the POS terminals via an authentication server to bind the generated token to a single POS machine. To secure the system against eavesdropping attacks, the data transmitted between the user device and the machine is encrypted using the ElGamal encryption method. The keys used in the method are dynamic in nature. Furthermore, a comparison and security analysis against previously proposed systems are presented.
21

Ahmed, Naveed, Hind Kharoub, Selma Manel Medjden, and Areej Alsaafin. "A Natural User Interface for 3D Animation Using Kinect." International Journal of Technology and Human Interaction 16, no. 4 (2020): 35–54. http://dx.doi.org/10.4018/ijthi.2020100103.

Abstract:
This article presents a new natural user interface to control and manipulate a 3D animation using the Kinect. The researchers design a number of gestures that allow the user to play, pause, forward, rewind, scale, and rotate the 3D animation. They also implement a cursor-based traditional interface and compare it with the natural user interface. Both interfaces are extensively evaluated via a user study in terms of both the usability and user experience. Through both quantitative and the qualitative evaluation, they show that a gesture-based natural user interface is a preferred method to control a 3D animation compared to a cursor-based interface. The natural user interface not only proved to be more efficient but resulted in a more engaging and enjoyable user experience.
22

Zarikas, Vasilios. "Modeling decisions under uncertainty in adaptive user interfaces." Universal Access in the Information Society 6, no. 1 (2007): 87–101. http://dx.doi.org/10.1007/s10209-007-0072-1.

23

Medicherla, Harsha, and Ali Sekmen. "Human–robot interaction via voice-controllable intelligent user interface." Robotica 25, no. 5 (2007): 521–27. http://dx.doi.org/10.1017/s0263574707003414.

Abstract:
An understanding of how humans and robots can successfully interact to accomplish specific tasks is crucial in creating more sophisticated robots that may eventually become an integral part of human societies. A social robot needs to be able to learn the preferences and capabilities of the people with whom it interacts, so that it can adapt its behaviors for more efficient and friendly interaction. Advances in human–computer interaction technologies have been widely used in improving human–robot interaction (HRI). It is now possible to interact with robots via natural communication means such as speech. In this paper, an innovative approach for HRI via voice-controllable intelligent user interfaces is described, along with the design and implementation of such interfaces. The traditional approaches to human–robot user interface design are explained and the advantages of the proposed approach are presented. The designed intelligent user interface, which learns user preferences and capabilities over time, can be controlled with voice. The system was successfully implemented and tested on a Pioneer 3-AT mobile robot. Twenty participants, who were assessed on spatial reasoning ability, directed the robot in spatial navigation tasks to evaluate the effectiveness of voice control in HRI. Time to complete the task, number of steps, and errors were collected. Results indicated that spatial reasoning ability and voice control were reliable predictors of the efficiency of robot teleoperation. 75% of the subjects with high spatial reasoning ability preferred voice control over manual control. The effect of spatial reasoning ability on teleoperation performance was lower with voice control than with manual control.
24

Yuan, Haiyue, Shujun Li, and Patrice Rusconi. "CogTool+." ACM Transactions on Computer-Human Interaction 28, no. 2 (2021): 1–38. http://dx.doi.org/10.1145/3447534.

Abstract:
Cognitive modeling tools have been widely used by researchers and practitioners to help design, evaluate, and study computer user interfaces (UIs). Despite their usefulness, large-scale modeling tasks can still be very challenging due to the amount of manual work needed. To address this scalability challenge, we propose CogTool+, a new cognitive modeling software framework developed on top of the well-known software tool CogTool. CogTool+ addresses the scalability problem by supporting the following key features: (1) a higher level of parameterization and automation; (2) algorithmic components; (3) interfaces for using external data; and (4) a clear separation of tasks, which allows programmers and psychologists to define reusable components (e.g., algorithmic modules and behavioral templates) that can be used by UI/UX researchers and designers without the need to understand the low-level implementation details of such components. CogTool+ also supports mixed cognitive models required for many large-scale modeling tasks and provides an offline analyzer of simulation results. In order to show how CogTool+ can reduce the human effort required for large-scale modeling, we illustrate how it works using a pedagogical example, and demonstrate its actual performance by applying it to large-scale modeling tasks of two real-world user-authentication systems.
25

Schmutz, Peter, Silvia Heinz, Yolanda Métrailler, and Klaus Opwis. "Cognitive Load in eCommerce Applications—Measurement and Effects on User Satisfaction." Advances in Human-Computer Interaction 2009 (2009): 1–9. http://dx.doi.org/10.1155/2009/121494.

Abstract:
Guidelines for designing usable interfaces recommend reducing short term memory load. Cognitive load, that is, working memory demands during problem solving, reasoning, or thinking, may affect users' general satisfaction and performance when completing complex tasks. Whereas in design guidelines numerous ways of reducing cognitive load in interactive systems are described, not many attempts have been made to measure cognitive load in Web applications, and few techniques exist. In this study participants' cognitive load was measured while they were engaged in searching for several products in four different online book stores. NASA-TLX and dual-task methodology were used to measure subjective and objective mental workload. The dual-task methodology involved searching for books as the primary task and a visual monitoring task as the secondary task. NASA-TLX scores differed significantly among the shops. Secondary task reaction times showed no significant differences between the four shops. Strong correlations between NASA-TLX, primary task completion time, and general satisfaction suggest that NASA-TLX can be used as a valuable additional measure of efficiency. Furthermore, strong correlations were found between browse/search preference and NASA-TLX as well as between search/browse preference and user satisfaction. Thus we suggest browse/search preference as a promising heuristic assessment method of cognitive load.
26

Reynoso, Juan Manuel Gómez, and Lizeth Itziguery Solano Romo. "Measuring the Effectiveness of Designing End-User Interfaces Using Design Theories." International Journal of Information Technologies and Systems Approach 13, no. 2 (2020): 54–72. http://dx.doi.org/10.4018/ijitsa.2020070103.

Abstract:
Software systems are among the most important technologies present in every task that humans and computers perform, and humans perform their tasks through a computer interface. However, because many developers have not been exposed to one or more courses on Human-Computer Interaction (HCI), they sometimes create software based on their own preferences, skills, and abilities, and do not consult theories that could help them produce better outcomes. A study was carried out to identify whether software developed using Gestalt theory combined with interface-development principles produces better outcomes than software developed using developers' current skills. Results show that participants perceived the system developed by a team that had received training in Gestalt theory and design guidelines as having superior quality compared to the system from a team that did not receive the training. However, the results should be taken cautiously.
APA, Harvard, Vancouver, ISO, and other styles
27

Shatilov, Kirill A., Dimitris Chatzopoulos, Lik-Hang Lee, and Pan Hui. "Emerging ExG-based NUI Inputs in Extended Realities: A Bottom-up Survey." ACM Transactions on Interactive Intelligent Systems 11, no. 2 (2021): 1–49. http://dx.doi.org/10.1145/3457950.

Full text
Abstract:
Incremental and quantitative improvements of two-way interactions with extended realities (XR) are contributing toward a qualitative leap into a state of XR ecosystems being efficient, user-friendly, and widely adopted. However, there are multiple barriers on the way toward the omnipresence of XR; among them are the following: computational and power limitations of portable hardware, social acceptance of novel interaction protocols, and usability and efficiency of interfaces. In this article, we overview and analyse novel natural user interfaces based on sensing electrical bio-signals that can be leveraged to tackle the challenges of XR input interactions. Electroencephalography-based brain-machine interfaces that enable thought-only hands-free interaction, myoelectric input methods that track body gestures employing electromyography, and gaze-tracking electrooculography input interfaces are the examples of electrical bio-signal sensing technologies united under a collective concept of ExG. ExG signal acquisition modalities provide a way to interact with computing systems using natural intuitive actions enriching interactions with XR. This survey will provide a bottom-up overview starting from (i) underlying biological aspects and signal acquisition techniques, (ii) ExG hardware solutions, (iii) ExG-enabled applications, (iv) discussion on social acceptance of such applications and technologies, as well as (v) research challenges, application directions, and open problems; evidencing the benefits that ExG-based Natural User Interfaces inputs can introduce to the area of XR.
APA, Harvard, Vancouver, ISO, and other styles
28

West, A. A., B. A. Bowen, R. P. Monfared, and A. Hodgson. "User-responsive interface generation for manufacturing systems: A theoretical basis." Proceedings of the Institution of Mechanical Engineers, Part B: Journal of Engineering Manufacture 214, no. 5 (2000): 379–92. http://dx.doi.org/10.1243/0954405001518161.

Full text
Abstract:
Computer integrated manufacturing (CIM) systems with a significant level of human-computer interaction are often inefficient. This is particularly problematical for those users who have to interact with multiple subsystem interfaces. These difficulties can be traced back to the fact that representation of the user in existing manufacturing models and systems is inadequate. An approach that increases user representation to improve CIM interface design is proposed, in which stereotype-based user and task models are used to specify a common user interface for each individual system user. An overview of the architecture is followed by discussion of an application domain (statistical process control) in which a demonstrator based on the architecture has been tested.
APA, Harvard, Vancouver, ISO, and other styles
29

Brajnik, Giorgio, Stefano Mizzaro, Carlo Tasso, and Fabio Venuti. "Strategic help in user interfaces for information retrieval." Journal of the American Society for Information Science and Technology 53, no. 5 (2002): 343–58. http://dx.doi.org/10.1002/asi.10035.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Vaughan, Misha W., and Marc L. Resnick. "Search user interfaces: Best practices and future visions." Journal of the American Society for Information Science and Technology 57, no. 6 (2006): 777–80. http://dx.doi.org/10.1002/asi.20291.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Kesharwani, Subodh. "Enterprise Resource Planning Interactive Via duct B/w Human & Computer." Asia Pacific Business Review 1, no. 2 (2005): 72–82. http://dx.doi.org/10.1177/097324700500100209.

Full text
Abstract:
Understanding human thinking is crucial in the design and evaluation of human-computer interaction. Computing devices and applications are at this moment employed ahead of the desktop, in dissimilar environments, and this tendency toward ubiquitous computing is gathering speed. As computers become a major necessity and connectivity becomes widespread, we are increasingly able to access computer power, data, information and knowledge from anyplace and at anytime. Conversely, in order to fetch the benefits of such accessible intelligence (commonly known as ‘ERP’, a latest buzzword in the contemporary scenario and in the field of information technology too), we should not overlook the ongoing evolutions and revolutions in human-computer communications and its interfaces. Indeed, as the human factors of information systems and knowledge systems emerge as a research area in and of itself, it is thoughtful to embark on this electrifying field of study. The operation of the Human-Computer with a flavour of ERP is to encourage interdisciplinary study and education in user-centered computer systems. The location of ERP is to make interactive and intelligent human-computer interfaces, in order to effectively enable users to accomplish their desired tasks. This paper presents a personal outlook on the HCI landscape in a historical perspective. The paper also aims in part to support newcomers in the field to grasp the origins of HCI and in part to provide grounds for a discussion of the field of usability that is being challenged by social and cultural developments (Jorgensen 2000). This paper has argued that in order to properly understand the interaction between ERP systems and human-computer interaction networks one must scrutinize the mutual flows of influence and the dynamic interaction between the two.
Thus, to bridge this gap, the paper critically reviews the existing ERP in a humanity context and upgrade decision-drivers, synthesizes a framework based on the literature, and extends the framework as necessary. Last but not least, the paper presents a personal, historical overview of these developments in the accompanying ERP system as seen from an HCI perspective.
APA, Harvard, Vancouver, ISO, and other styles
32

Bowman, Doug A., Ernst Kruijff, Joseph J. LaViola, and Ivan Poupyrev. "An Introduction to 3-D User Interface Design." Presence: Teleoperators and Virtual Environments 10, no. 1 (2001): 96–108. http://dx.doi.org/10.1162/105474601750182342.

Full text
Abstract:
Three-dimensional user interface design is a critical component of any virtual environment (VE) application. In this paper, we present a broad overview of 3-D interaction and user interfaces. We discuss the effect of common VE hardware devices on user interaction, as well as interaction techniques for generic 3-D tasks and the use of traditional 2-D interaction styles in 3-D environments. We divide most user-interaction tasks into three categories: navigation, selection/manipulation, and system control. Throughout the paper, our focus is on presenting not only the available techniques but also practical guidelines for 3-D interaction design and widely held myths. Finally, we briefly discuss two approaches to 3-D interaction design and some example applications with complex 3-D interaction requirements. We also present an annotated online bibliography as a reference companion to this article.
APA, Harvard, Vancouver, ISO, and other styles
33

Yamada, Seiji, Tsuyoshi Murata, and Yasufumi Takama. "Selected Papers from IWI 2009." Journal of Advanced Computational Intelligence and Intelligent Informatics 14, no. 4 (2010): 383. http://dx.doi.org/10.20965/jaciii.2010.p0383.

Full text
Abstract:
Various Web systems and services currently provide a great deal of benefits to users, with Web interaction becoming increasingly important in research and business. Such Web interaction has been realized through related technologies such as interaction design, interactive information retrieval, interactive intelligent systems, personalization, user interfaces and interactive machine learning. However, each study and development in such different fields has been done independently, which might discourage us from studying Web interaction from a unified view of human-system interaction and making Web interaction more intelligent by applying AI and computational intelligence. Guest Editors (Seiji Yamada, Tsuyoshi Murata, and Yasufumi Takama) organized the Intelligent Web Interaction Workshop 2009 (IWI'09) in Milano, Italy, last year to bring together researchers in diversified fields including Web systems, AI, computational intelligence, human-computer interaction and user interfaces. Held jointly with the 2009 IEEE/WIC/ACM International Conference on Web Intelligence (WI-2009), IWI'09 produced 14 outstanding papers - an acceptance rate of 50% - and active discussions among speakers and participants. A subsequent workshop, the Intelligent Web Interaction Workshop 2010 (IWI'10), will be held in Toronto, Canada this September. This special issue presents intelligent Web interaction as a new and promising research field. Speakers selected from among those at IWI'09 were encouraged to submit papers for this issue. The submissions were then reviewed for relevance, originality, significance and presentation based on JACIII review criteria. This special issue consists of five papers which describe excellent studies on Web interfaces, Web systems, Web credibility, constrained clustering for interactive Web applications and graph analysis on the Web. The acceptance rate was 56%. All papers introduce promising approaches and interesting results that readers will find inspiring.
We strongly believe intelligent Web interaction has tremendous potential as a new, active field of research, and we hope this issue will motivate researchers to expand studies on intelligent Web interaction.
APA, Harvard, Vancouver, ISO, and other styles
34

Nakatsu, Robbie T., and Izak Benbasat. "Designing intelligent systems to handle system failures: Enhancing explanatory power with less restrictive user interfaces and deep explanations." International Journal of Human-Computer Interaction 21, no. 1 (2006): 55–72. http://dx.doi.org/10.1080/10447310609526171.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Resnick, Marc L., and Misha W. Vaughan. "Best practices and future visions for search user interfaces." Journal of the American Society for Information Science and Technology 57, no. 6 (2006): 781–87. http://dx.doi.org/10.1002/asi.20292.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Tobergte, Andreas. "MiroSurge—Advanced User Interaction Modalities in Minimally Invasive Robotic Surgery." Presence: Teleoperators and Virtual Environments 19, no. 5 (2010): 400–414. http://dx.doi.org/10.1162/pres_a_00022.

Full text
Abstract:
This paper presents MiroSurge, a telepresence system for minimally invasive surgery developed at the German Aerospace Center (DLR), and introduces MiroSurge's new user interaction modalities: (1) haptic feedback with software-based preservation of the fulcrum point, (2) an ultrasound-based approach to the quasi-tactile detection of pulsating vessels, and (3) a contact-free interface between surgeon and telesurgery system, where stereo vision is augmented with force vectors at the tool tip. All interaction modalities aim to increase the user's perception beyond stereo imaging by either augmenting the images or by using haptic interfaces. MiroSurge currently provides surgeons with two different interfaces. The first option, bimanual haptic interaction with force and partial tactile feedback, allows for direct perception of the remote environment. Alternatively, users can choose to control the surgical instruments by optically tracked forceps held in their hands. Force feedback is then provided in augmented stereo images by constantly updated force vectors displayed at the centers of the teleoperated instruments, regardless of the instruments' position within the video image. To determine the centerpoints of the instruments, artificial markers are attached and optically tracked. A new approach to detecting pulsating vessels beneath covering tissue with an omnidirectional ultrasound Doppler sensor is presented. The measurement results are computed and can be provided acoustically (by displaying the typical Doppler sound), optically (by augmenting the endoscopic video stream), or kinesthetically (by a gentle twitching of the haptic input devices). The control structure preserves the fulcrum point in minimally invasive surgery and user commands are followed by the surgical instrument. Haptic feedback allows the user to distinguish between interaction with soft and hard environments. 
The paper includes technical evaluations of the features presented, as well as an overview of the system integration of MiroSurge.
APA, Harvard, Vancouver, ISO, and other styles
37

Tijerina, Louis. "Design Guidelines and the Human Factors of Interface Design." Proceedings of the Human Factors Society Annual Meeting 30, no. 14 (1986): 1358–62. http://dx.doi.org/10.1177/154193128603001403.

Full text
Abstract:
The proliferation of computer systems in recent years has prompted a growing concern about the human factors of interface design. Industrial and military organizations have responded by supporting studies in user-computer interaction and, more recently, products which might aid in the design of interfaces. One type of design aid which attempts to make findings of user-computer interface (UCI) research available to the system designer is the interface design guidelines document. This paper reviews literature about the design process and how design guidelines or standards might fit into that activity. Suggestions are offered about where future research and development might be directed in order to enhance the use of guidelines in the interface design process and so enhance the final product as well.
APA, Harvard, Vancouver, ISO, and other styles
38

Huang, Su-Zhen, Min Wu, and Yong-Hua Xiong. "Mobile Transparent Computing to Enable Ubiquitous Operating Systems and Applications." Journal of Advanced Computational Intelligence and Intelligent Informatics 18, no. 1 (2014): 32–39. http://dx.doi.org/10.20965/jaciii.2014.p0032.

Full text
Abstract:
Mobile devices have emerged as an indispensable part of our daily life, one that has resulted in an increased demand for mobile devices to be able to access the Internet and obtain a variety of network services. However, mobile devices are often constrained by limited storage, huge power consumption, and low processing capability. This paper presents a new computing mode, mobile transparent computing (MTC), which combines ubiquitous mobile networks with transparent computing, to address the above challenges and possibly to enable a new world of ubiquitous operating systems (OSes) and applications with the following characteristics: (1) Mobile devices with no OSes pre-installed are able to load and boot multiple OSes on demand through a transparent network; (2) All resources, including the operating system (OS), applications, and user data, are stored on a transparent server (TS) rather than a mobile terminal, and can be streamed to be executed on mobile devices in small execution blocks; (3) All the personalized services (applications and data) can be synchronized to any other devices with the same user credential. Specifically, we propose a Pre OS technique, which can achieve feature (1) in the MTC model by initializing the mobile device and driving a network interface card (NIC) prior to OS loading, thereby transferring the needed OS streaming block to the mobile device. Experimental results conducted on the tablet demo-board with the model OK6410 based on the ARM11 architecture demonstrate that the Pre OS is able to support remote boot and streaming execution for both Android and Linux OS with satisfactory performance.
APA, Harvard, Vancouver, ISO, and other styles
39

Paternò, Fabio. "Concepts and design space for a better understanding of multi-device user interfaces." Universal Access in the Information Society 19, no. 2 (2019): 409–32. http://dx.doi.org/10.1007/s10209-019-00650-5.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Barrera-León, Luisa, Nadia Mejia-Molina, Angela Carrillo-Ramos, Leonardo Flórez-Valencia, and Jaime A. Pavlich-Mariscal. "Tukuchiy: a dynamic user interface generator to improve usability." International Journal of Web Information Systems 12, no. 2 (2016): 150–76. http://dx.doi.org/10.1108/ijwis-09-2015-0028.

Full text
Abstract:
Purpose: This paper aims to present a detailed description of Tukuchiy, a framework to dynamically generate adapted user interfaces. Tukuchiy is based on Runa-Kamachiy, a conceptual integration model that combines human–computer interaction (HCI) standards to create user interfaces with user-centered concepts usually addressed by adaptation.
Design/methodology/approach: The first step was the definition of three profiles: user, context and interface. These profiles contain information, such as user disabilities, location characteristics (e.g. illumination) and preferences (e.g. interface color or type of system help). The next step is to define the rules that ensure usability for different users. All of this information is used to create the Tukuchiy framework, which generates dynamic user interfaces, based on the specified rules. The last step is the validation through a prototype called Idukay. This prototype uses Tukuchiy to provide e-learning services. The functionality and usability of the system was evaluated by five experts.
Findings: To validate the approach, a prototype of Tukuchiy, called Idukay, was created. Idukay was evaluated by experts in education, computing and HCI, who based their evaluation on the system usability scale (SUS), a standard usability test. According to them, the prototype complies with the usability criteria addressed by Tukuchiy.
Research limitations/implications: This work was tested in an academic environment and was validated by different experts. Further tests in a production environment are required to fully validate the approach.
Originality/value: Tukuchiy generates adapted user interfaces based on user and context profiles. Tukuchiy uses HCI standards to ensure usability of interfaces that dynamically change during execution time. The interfaces generated by Tukuchiy adapt to context, functionality, disabilities (e.g. color blindness) and preferences (usage and presentation) of the user. Tukuchiy enforces specific HCI standards for color utilization, button size and grouping, etc., during execution.
APA, Harvard, Vancouver, ISO, and other styles
41

Donnerer, Michael, and Anthony Steed. "Using a P300 Brain–Computer Interface in an Immersive Virtual Environment." Presence: Teleoperators and Virtual Environments 19, no. 1 (2010): 12–24. http://dx.doi.org/10.1162/pres.19.1.12.

Full text
Abstract:
Brain–computer interfaces (BCIs) provide a novel form of human–computer interaction. The purpose of these systems is to aid disabled people by affording them the possibility of communication and environment control. In this study, we present experiments using a P300 based BCI in a fully immersive virtual environment (IVE). P300 BCIs depend on presenting several stimuli to the user. We propose two ways of embedding the stimuli in the virtual environment: one that uses 3D objects as targets, and a second that uses a virtual overlay. Both ways have been shown to work effectively with no significant difference in selection accuracy. The results suggest that P300 BCIs can be used successfully in a 3D environment, and this suggests some novel ways of using BCIs in real world environments.
APA, Harvard, Vancouver, ISO, and other styles
42

Paschoarelli, Luis Carlos. "Ergonomics and interfaces of traditional information systems – Case study: packaging." InfoDesign - Revista Brasileira de Design da Informação 10, no. 3 (2013): 313–22. http://dx.doi.org/10.51358/id.v10i3.211.

Full text
Abstract:
The contemporary world is characterized, among other factors, by the influence of the new computer information systems on the behavior of individuals. However, traditional information systems still have interaction problems with users. The aim of this study was to determine whether the interaction aspects between users and traditional information systems (particularly graphic ones) have been fully studied. To do so, the ergonomic aspects and usability of such systems were reviewed, with emphasis on the problems of visibility, legibility and readability. From those criteria, the evolution of ergonomic studies of information systems was reviewed (bibliometrics technique), and examples of ergonomic and usability problems in packaging were demonstrated (case study). The results confirm that traditional information systems still have problems of human-system interaction, hindering the effective perception of information.
APA, Harvard, Vancouver, ISO, and other styles
43

Rose, Daniel E. "Reconciling information-seeking behavior with search user interfaces for the Web." Journal of the American Society for Information Science and Technology 57, no. 6 (2006): 797–99. http://dx.doi.org/10.1002/asi.20295.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Kulshreshtha, Neelabh. "HCI: Use in Cyber Security." International Journal for Research in Applied Science and Engineering Technology 9, no. VII (2021): 109–13. http://dx.doi.org/10.22214/ijraset.2021.36246.

Full text
Abstract:
This paper deals with the uses of HCI (Human-Computer Interaction) in Cyber Security and Information Security. Even though there have been efforts to strengthen the infrastructure of security systems, many endemic problems still exist and are a major source of vulnerabilities. The paper also aims to bridge the gap between the end-user and the technology of HCI. There have been many widespread security problems from the perspective of the security community, many of which arise due to bad interaction between humans and systems. Developing the Human-Computer Interaction is an important part of the security system architecture because even the most secure systems exist to serve human users and carry out human-oriented processes, and are designed and built by humans. HCI is concerned with user interfaces and how they can be improved, because most users' perceptions are based on their experience with these interfaces. There has been immense research in this field and many advances have been made in this arena of HCI. Information Security, on the other hand, has been a major concern in the present world scenario, where everything is done in the digital world.
APA, Harvard, Vancouver, ISO, and other styles
45

Kontogiorgos, Dimosthenis, Andre Pereira, and Joakim Gustafson. "Grounding behaviours with conversational interfaces: effects of embodiment and failures." Journal on Multimodal User Interfaces 15, no. 2 (2021): 239–54. http://dx.doi.org/10.1007/s12193-021-00366-y.

Full text
Abstract:
Conversational interfaces that interact with humans need to continuously establish, maintain and repair common ground in task-oriented dialogues. Uncertainty, repairs and acknowledgements are expressed in user behaviour in the continuous efforts of the conversational partners to maintain mutual understanding. Users change their behaviour when interacting with systems in different forms of embodiment, which affects the abilities of these interfaces to observe users’ recurrent social signals. Additionally, humans are intellectually biased towards social activity when facing anthropomorphic agents or when presented with subtle social cues. Two studies are presented in this paper examining how humans interact in a referential communication task with wizarded interfaces in different forms of embodiment. In study 1 (N = 30), we test whether humans respond the same way to agents, in different forms of embodiment and social behaviour. In study 2 (N = 44), we replicate the same task and agents but introduce conversational failures disrupting the process of grounding. Findings indicate that it is not always favourable for agents to be anthropomorphised or to communicate with non-verbal cues, as human grounding behaviours change when embodiment and failures are manipulated.
APA, Harvard, Vancouver, ISO, and other styles
46

Bailey, Shannon K. T., Daphne E. Whitmer, Bradford L. Schroeder, and Valerie K. Sims. "Development of Gesture-based Commands for Natural User Interfaces." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 61, no. 1 (2017): 1466–67. http://dx.doi.org/10.1177/1541931213601851.

Full text
Abstract:
Human-computer interfaces are changing to meet the evolving needs of users and overcome limitations of previous generations of computer systems. The current state of computers consists largely of graphical user interfaces (GUI) that incorporate windows, icons, menus, and pointers (WIMPs) as visual representations of computer interactions controlled via user input on a mouse and keyboard. Although this model of interface has dominated human-computer interaction for decades, WIMPs require an extra step between the user’s intent and the computer action, imposing both limitations on the interaction and introducing cognitive demands (van Dam, 1997). Alternatively, natural user interfaces (NUI) employ input methods such as speech, touch, and gesture commands. With NUIs, users can interact directly with the computer without using an intermediary device (e.g., mouse, keyboard). Using the body as an input device may be more “natural” because it allows the user to apply existing knowledge of how to interact with the world (Roupé, Bosch-Sijtsema, & Johansson, 2014). To utilize the potential of natural interfaces, research must first determine what interactions can be considered natural. For the purpose of this paper, we focus on the naturalness of gesture-based interfaces. The purpose of this study was to determine how people perform natural gesture-based computer actions. To answer this question, we first narrowed down potential gestures that would be considered natural for an action. In a previous study, participants (n = 17) were asked how they would gesture to interact with a computer to complete a series of actions. After narrowing down the potential natural gestures by calculating the most frequently performed gestures for each action, we asked participants (n = 188) to rate the naturalness of the gestures in the current study.
Participants each watched 26 videos of gestures (3-5 seconds each) and were asked how natural or arbitrary they interpreted each gesture for the series of computer commands (e.g., move object left, shrink object, select object, etc.). The gestures in these videos included the 17 gestures that were most often performed in the previous study in which participants were asked what gesture they would naturally use to complete the computer actions. Nine gestures were also included that were created arbitrarily to act as a comparison to the natural gestures. By analyzing the ratings on a continuum from “Completely Arbitrary” to “Completely Natural,” we found that the natural gestures people produced in the first study were also interpreted as the intended action by this separate sample of participants. All the gestures that were rated as either “Mostly Natural” or “Completely Natural” by participants corresponded to how the object manipulation would be performed physically. For example, the gesture video that depicts a fist closing was rated as “natural” by participants for the action of “selecting an object.” All of the gestures that were created arbitrarily were interpreted as “arbitrary” when they did not correspond to the physical action. Determining how people naturally gesture computer commands and how people interpret those gestures is useful because it can inform the development of NUIs and contributes to the literature on what makes gestures seem “natural.”
APA, Harvard, Vancouver, ISO, and other styles
47

Neerincx, Mark A., Anita H. M. Cremers, Judith M. Kessens, David A. van Leeuwen, and Khiet P. Truong. "Attuning speech-enabled interfaces to user and context for inclusive design: technology, methodology and practice." Universal Access in the Information Society 8, no. 2 (2008): 109–22. http://dx.doi.org/10.1007/s10209-008-0136-x.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Still, Jeremiah D., Ashley Cain, and David Schuster. "Human-centered authentication guidelines." Information & Computer Security 25, no. 4 (2017): 437–53. http://dx.doi.org/10.1108/ics-04-2016-0034.

Full text
Abstract:
Purpose: Despite the widespread use of authentication schemes and the rapid emergence of novel authentication schemes, a general set of domain-specific guidelines has not yet been developed. This paper aims to present and explain a list of human-centered guidelines for developing usable authentication schemes.
Design/methodology/approach: The guidelines stem from research findings within the fields of psychology, human–computer interaction and information/computer science.
Findings: Instead of viewing users as the inevitable weak point in the authentication process, this study proposes that authentication interfaces be designed to take advantage of users’ natural abilities. This approach requires that one understands how interactions with authentication interfaces can be improved and what human capabilities can be exploited. A list of six guidelines that designers ought to consider when developing a new usable authentication scheme has been presented.
Research limitations/implications: This consolidated list of usable authentication guidelines provides system developers with immediate access to common design issues impacting usability. These guidelines ought to assist designers in producing more secure products in fewer costly development cycles.
Originality/value: Cybersecurity research and development has mainly focused on technical solutions to increase security. However, the greatest weakness of many systems is the user. It is argued that authentication schemes with poor usability are inherently insecure, as users will inadvertently weaken the security in their efforts to use the system. The study proposes that designers need to consider the human factors that impact end-user behavior. Development from this perspective will address the greatest weakness in most security systems by increasing end-user compliance.
APA, Harvard, Vancouver, ISO, and other styles
49

Paine, Garth. "Interaction as Material: The techno-somatic dimension." Organised Sound 20, no. 1 (2015): 82–89. http://dx.doi.org/10.1017/s1355771814000466.

Full text
Abstract:
This paper proposes an alternative approach to the analysis and design of interaction in real-time performance systems. It draws on the idea that the connection between the human engagement with the interface itself (digital or analogue) and the resultant rich media output forms a proposed experiential dimension containing both technical and somatic considerations. The proposed dimension is characterised by its materiality and is referred to by the author as the techno-somatic dimension. The author proposes that the materiality of the techno-somatic dimension may be usefully examined as part of a re-consideration of the nature of interaction in systems where the input characteristics of the performer’s actions, the musician’s gesture, the dancer’s movements and so on are analysed and also drive the rich media content of the work in real time. The author will suggest that such a techno-somatic dimension exists in all human engagement with technologies, analogue or digital. Furthermore, the author is proposing that design and analysis efforts for new interactive systems should focus on the techno-somatic dimension; that, if this dimension is designed with care to produce a detailed and nuanced experience for the user, design specifications for the interface will automatically result; and that such an interface will produce the somatic and functional characteristics to produce the desired materiality and actional intentionality. For the purposes of this discussion, the author will focus principally on musical interfaces.
APA, Harvard, Vancouver, ISO, and other styles
50

Feng, Jiangfan, and Yanhong Liu. "Intelligent Context-Aware and Adaptive Interface for Mobile LBS." Computational Intelligence and Neuroscience 2015 (2015): 1–10. http://dx.doi.org/10.1155/2015/489793.

Full text
Abstract:
Context-aware user interfaces play an important role in many human-computer interaction tasks of location-based services. Although spatial models for context-aware systems have been studied extensively, how to locate specific spatial information for users is still not well resolved, which is important in the mobile environment where location-based services users are impeded by device limitations. Better context-aware human-computer interaction models of mobile location-based services are needed not just to predict performance outcomes, such as whether people will be able to find the information needed to complete a human-computer interaction task, but to understand the human processes that interact in spatial query, which will in turn inform the detailed design of better user interfaces in mobile location-based services. In this study, a context-aware adaptive model for the mobile location-based services interface is proposed, which contains three major sections: purpose, adjustment, and adaptation. Based on this model we try to describe the process of user operation and interface adaptation clearly through the dynamic interaction between users and the interface. Then we show how the model applies users’ demands in a complicated environment and suggest its feasibility through experimental results.
APA, Harvard, Vancouver, ISO, and other styles
We offer discounts on all premium plans for authors whose works are included in thematic literature selections. Contact us to get a unique promo code!