Journal articles on the topic 'Human Artificial interface'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the top 50 journal articles for your research on the topic 'Human Artificial interface.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles across a wide variety of disciplines and organise your bibliography correctly.

1

Takemura, Haruo, and Fumio Kishino. "Human interface using artificial reality." Journal of the Institute of Television Engineers of Japan 44, no. 8 (1990): 981–85. http://dx.doi.org/10.3169/itej1978.44.981.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Stein, Sebastian, Enrico H. Gerding, Adrian Nedea, Avi Rosenfeld, and Nicholas R. Jennings. "Market Interfaces for Electric Vehicle Charging." Journal of Artificial Intelligence Research 59 (June 22, 2017): 175–227. http://dx.doi.org/10.1613/jair.5387.

Full text
Abstract:
We consider settings where owners of electric vehicles (EVs) participate in a market mechanism to charge their vehicles. Existing work on such mechanisms has typically assumed that participants are fully rational and can report their preferences accurately via some interface to the mechanism or to a software agent participating on their behalf. However, this may not be reasonable in settings with non-expert human end-users. Thus, our overarching aim in this paper is to determine experimentally if a fully expressive market interface that enables accurate preference reports is suitable for the EV charging domain, or, alternatively, if a simpler, restricted interface that reduces the space of possible options is preferable. In doing this, we measure the performance of an interface both in terms of how it helps participants maximise their utility and how it affects deliberation time. Our secondary objective is to contrast two different types of restricted interfaces that vary in how they restrict the space of preferences that can be reported. To enable this analysis, we develop a novel game that replicates key features of an abstract EV charging scenario. In two experiments with over 300 users, we show that restricting the users' preferences significantly reduces the time they spend deliberating (by up to half in some cases). An extensive usability survey confirms that this restriction is furthermore associated with a lower perceived cognitive burden on the users. More surprisingly, at the same time, using restricted interfaces leads to an increase in the users' performance compared to the fully expressive interface (by up to 70%). We also show that some restricted interfaces have the desirable effect of reducing the energy consumption of their users by up to 20% while achieving the same utility as other interfaces. Finally, we find that a reinforcement learning agent displays similar performance trends to human users, enabling a novel methodology for evaluating market interfaces.
APA, Harvard, Vancouver, ISO, and other styles
3

Sato, Makoto, Yukihiro Hirata, and Hiroshi Kawarada. "Space Interface Device for Artificial Reality – SPIDAR –." Journal of Robotics and Mechatronics 9, no. 3 (1997): 177–84. http://dx.doi.org/10.20965/jrm.1997.p0177.

Full text
Abstract:
In order to realize a human interface for the efficient modeling of three-dimensional shapes over the computer, it is necessary to create an environment in which shape models can be manipulated in the same way as their actual three-dimensional objects. Such an environment is called a virtual work space. In case that a human manipulates an object with his or her own hands, that person unconsciously uses the sensations, such as those of sight, touch, and force. In order to compose a virtual work space, it is important that information on such sensations be given comprehensively to a human. Moreover, it is necessary that all this information be generated artificially through computer processing. On the basis of these observations, the present paper newly proposes a space interface device SPIDAR as an input/output device necessary for composing a virtual work space. This device can not only obtain information on the positions of end-effectors but also provide information concerning the sensation of force to the end-effectors. Furthermore, an experiment is carried out for investigating the effect of information concerning the sensation of force on the direct manipulability of three-dimensional shapes in this virtual work space, and its effectiveness is verified.
APA, Harvard, Vancouver, ISO, and other styles
4

Zhao, Yiyi. "Interaction Design System for Artificial Intelligence User Interfaces Based on UML Extension Mechanisms." Mobile Information Systems 2022 (June 16, 2022): 1–8. http://dx.doi.org/10.1155/2022/3534167.

Full text
Abstract:
With the rapid development of computer network technology in recent years, more and more demands have been placed on the functionality and attributes of the user interface. In the development of many computer projects, the variability and flexibility of user interface requirements have greatly increased the complexity of program development for researchers. In addition, the poor reusability of page access control writing has created a pressing need for a highly standardized and flexible way of developing software. Thus, the development and design of user interfaces for application software systems occupy an important position and have been a hot topic of research in the field of human-computer interaction. The traditional methods of describing user interaction, such as state transitions and data flow diagrams, are not based on global and intuitive concepts. Moreover, there is little support for the design of user interface interaction behavior, resulting in user interfaces being ignored at design time and left to implementers to grasp at coding time. It is therefore an issue that needs to be addressed in order to integrate traditional methods and intuitive descriptions from the user’s perspective into a new interface development model and methodology. This research creates a user interface framework based on interaction behavior from the user’s perspective. Furthermore, UML extension mechanisms are used to enable the user interface framework to better support UML-based modelling environments. In addition, the UML is structured and extended to include structural elements that support interface generation, and a structured use case model is proposed, which drives the analysis and design of the individual submodels. The extracted abstract interface elements and their mapping to concrete interface elements are documented in a way that explores the generation of different target languages under different platforms. This study incorporates user requirements and provides a scientific reference for the development and design of user interfaces.
APA, Harvard, Vancouver, ISO, and other styles
5

HINDE, C. J. "The human-robot interface: the role of artificial intelligence." International Journal of Production Research 27, no. 3 (1989): 463–76. http://dx.doi.org/10.1080/00207548908942560.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

DI MASCIO, TANIA, and LAURA TARANTINO. "Advanced visual interfaces: the focus is on the user." Knowledge Engineering Review 18, no. 2 (2003): 175–81. http://dx.doi.org/10.1017/s0269888903000584.

Full text
Abstract:
The current trend in the production of information appliances is the human-centred, customer-centred approach, where technology serves human needs invisibly, unobtrusively. The emphasis shifts from the application programs to the users, their tasks and their workplaces, making computation often move off the desktop to become embedded in the world around us. In this scenario the role of the visual interface becomes crucial since, as far as the customer is concerned, the interface is the product. In this paper we briefly survey the most recent results in the field of advanced visual interfaces, with focus on users' needs and ways to serve them.
APA, Harvard, Vancouver, ISO, and other styles
7

Kundu, Subhasis. "Deep Neural Networks for Human-AI Telepathy: Enabling Thought-Driven Command Execution Without External Hardware." International Journal of Scientific Research in Engineering and Management 06, no. 12 (2022): 1–7. https://doi.org/10.55041/ijsrem17141.

Full text
Abstract:
This groundbreaking research explores the creation of deep neural networks for human-AI telepathy, enabling command execution through thought alone, without relying on external devices. This study presents a novel brain-to-AI interface system that allows for the direct and effortless control of artificial systems via thought. Utilizing sophisticated machine-learning methods, this approach deciphers neural signals and converts them into executable commands. Through comprehensive experiments and detailed analysis, this research showcases the system's ability to accurately interpret intricate thought patterns and perform corresponding actions instantaneously. The results revealed notable improvements in speed, precision, and user experience over conventional brain-computer interfaces. This study paves the way for new possibilities in human-AI interaction and has significant implications in fields such as assistive technology, robotics, and immersive virtual environments. Keywords — Neural Networks, Brain-Computer Interface, Telepathy, Thought Control, Artificial Intelligence, Neural Signal Processing, Human-AI Interaction, Deep Learning, EEG, fMRI.
APA, Harvard, Vancouver, ISO, and other styles
8

Wu, Xiaoli, and Yajun Li. "An Experimental Analysis Method of Visual Performance on the Error Factors of Digital Information Interface." International Journal of Pattern Recognition and Artificial Intelligence 34, no. 09 (2019): 2055019. http://dx.doi.org/10.1142/s0218001420550198.

Full text
Abstract:
As per global accident statistics, human error accounts for over 85% of accidents. Therefore, human error analysis of cognitive behavior of operators can be the key to solving information interface design problems of digital smart task monitoring interfaces. This paper proposes an analytical method based on psychological experiments introduced into task monitoring interfaces to study reactions to error factors. It uses psychological techniques to conduct experiments which evoke physiological reactions to various error factors under different sub-interfaces of the monitoring system and sub-task environments. The behavioral and eye tracking data demonstrate the association between the error factors and the information interface. Error factors arising in visual search are directly related to the layout of the task-interface, the information proximity–position, and the information features-volume. Our method opens up new approaches for design optimizations of visual information interfaces and introduces novel concepts for the introduction of interface design via error factor analysis.
APA, Harvard, Vancouver, ISO, and other styles
9

Ruijten, Peter, Jacques Terken, and Sanjeev Chandramouli. "Enhancing Trust in Autonomous Vehicles through Intelligent User Interfaces That Mimic Human Behavior." Multimodal Technologies and Interaction 2, no. 4 (2018): 62. http://dx.doi.org/10.3390/mti2040062.

Full text
Abstract:
Autonomous vehicles use sensors and artificial intelligence to drive themselves. Surveys indicate that people are fascinated by the idea of autonomous driving, but are hesitant to relinquish control of the vehicle. Lack of trust seems to be the core reason for these concerns. In order to address this, an intelligent agent approach was implemented, as it has been argued that human traits increase trust in interfaces. Where other approaches mainly use anthropomorphism to shape appearances, the current approach uses anthropomorphism to shape the interaction, applying Gricean maxims (i.e., guidelines for effective conversation). The contribution of this approach was tested in a simulator that employed both a graphical and a conversational user interface, which were rated on likability, perceived intelligence, trust, and anthropomorphism. Results show that the conversational interface was trusted, liked, and anthropomorphized more, and was perceived as more intelligent, than the graphical user interface. Additionally, an interface that was portrayed as more confident in making decisions scored higher on all four constructs than one that was portrayed as having low confidence. These results together indicate that equipping autonomous vehicles with interfaces that mimic human behavior may help increase people’s trust in, and, consequently, their acceptance of them.
APA, Harvard, Vancouver, ISO, and other styles
10

Gu, Jie. "AI-empowered neural processing for intelligent human-machine interface and biomedical devices." Open Access Government 43, no. 1 (2024): 268–69. http://dx.doi.org/10.56367/oag-043-11463.

Full text
Abstract:
Jie Gu, Associate Professor from Northwestern University, examines AI-empowered neural processing for intelligent human-machine interface and biomedical devices. Most conventional wearable devices rely on motion detection or image classifications to capture users’ activities. However, they lack the ability to decode neural signals generated by the human body. Neural signals, such as EEG, ECG, and EMG, offer a rich amount of information on a person’s physiological and psychological activities. Recognition and use of such signals present many new opportunities for applications in medical and daily commercial usage. Recently, artificial intelligence (AI) has been applied to neural signal processing, leading to a new generation of intelligent human-machine interfaces and biomedical devices.
APA, Harvard, Vancouver, ISO, and other styles
11

Reilly, Ralph T. "Gender Specific User Design Face vs. Interface." International Journal of Management & Information Systems (IJMIS) 13, no. 1 (2011): 9. http://dx.doi.org/10.19030/ijmis.v13i1.4937.

Full text
Abstract:
Human factors research has shown that the design and display of computer graphics plays a crucial role in the user operability of computer applications. In the future people will communicate with a face on the computer display screen. Already, advancements in artificial intelligence allow humans to communicate with computers through voice pattern recognition. Current work in artificial intelligence will allow the computer and user to read each other's facial expressions, understanding what can be communicated through facial mechanics. Research in facial emotion processing has suggested that gender plays a major role in the ability to correctly process human facial emotion.
APA, Harvard, Vancouver, ISO, and other styles
12

張, 言亮 [Zhang, Yanliang]. "在人工智能時代需要謹慎對待腦機介面技術" [Brain-Computer Interface Technology Must Be Treated with Caution in the Era of Artificial Intelligence]. International Journal of Chinese & Comparative Philosophy of Medicine 21, no. 2 (2023): 69–72. http://dx.doi.org/10.24112/ijccpm.212685.

Full text
Abstract:
LANGUAGE NOTE | Document text in Chinese; abstract also in English.
The essay “The Ethics of Thinking with Machines: Brain-Computer Interfaces in the Era of Artificial Intelligence” summarizes the author’s fundamental views. It points out that we need to be cautious about brain–computer interface technology. In particular, we should be cautious when dealing with forms of such technology that may threaten human autonomy, psychological identity, personal identity, and personal privacy, and standardize the development of brain–computer interface technology in accordance with the five principles of technology ethics: improving human well-being, respecting the right to life, upholding fairness and impartiality, reasonably controlling risks, and maintaining openness and transparency.
APA, Harvard, Vancouver, ISO, and other styles
13

Gebeshuber, I. C. "Engineering at the interface revisited." Proceedings of the Institution of Mechanical Engineers, Part C: Journal of Mechanical Engineering Science 223, no. 1 (2008): 65–101. http://dx.doi.org/10.1243/09544062jmes1138.

Full text
Abstract:
Three publications from Part C which strongly influenced the development of the field of lubrication in human joints are revisited and their impact on the field is outlined. Furthermore, the impact of the Journal of Mechanical Engineering Science on the field of lubrication and wear in living and artificial human joints is analysed. ‘Analysis of “boosted lubrication” in human joints’ by Duncan Dowson, Anthony Unsworth, and Verna Wright appeared in 1970, ‘The lubrication of porous elastic solids with reference to the functioning of human joints’ by Gordon R. Higginson and Roger Norman was published in 1974, and ‘Engineering at the interface’ by Duncan Dowson addressed the audience in 1992.
APA, Harvard, Vancouver, ISO, and other styles
14

Perzanowski, D., A. C. Schultz, W. Adams, E. Marsh, and M. Bugajska. "Building a multimodal human-robot interface." IEEE Intelligent Systems 16, no. 1 (2001): 16–21. http://dx.doi.org/10.1109/mis.2001.1183338.

Full text
APA, Harvard, Vancouver, ISO, and other styles
15

Crawford, John L. "The Intelligent Graphic Interface Project: Operator Interfaces for the Year 2000." Proceedings of the Human Factors Society Annual Meeting 36, no. 4 (1992): 465–69. http://dx.doi.org/10.1177/154193129203600443.

Full text
Abstract:
Managing a complex computerized process such as a telecommunications network, an electric power system or a pulp and paper mill is an increasingly difficult task. Developing effective human-computer interfaces for the supervisory control centres of the future requires an interdisciplinary approach, applying research results from a range of academic disciplines to the real-life problems faced by industrial users of the technology. This is the approach of the Intelligent Graphic Interface (IGI) Research Project, a unique applied research project linking Canadian industry and academic communities. The goal of this five-year, $6.8 million project, which began in 1991, is to combine artificial intelligence research with advanced computer graphics technology and human factors engineering to produce an Intelligent Graphic Interface; essentially an “expert assistant” for operators in real-time supervisory control environments, dedicated to enhancing the interactions between people and these complex computerized systems.
APA, Harvard, Vancouver, ISO, and other styles
16

Idesawa, Masanori, ed. "Special Issue on Human Interface." Journal of Robotics and Mechatronics 4, no. 1 (1992): 1. http://dx.doi.org/10.20965/jrm.1992.p0001.

Full text
Abstract:
In recent years, the expression "human interface" is often heard. Now that information systems have been ingrained deeply in the society, it is no longer possible to ignore the existence of information systems even in man-to-man communications. The expression "human interface" may be considered to encompass not only the conventional man-machine interfaces related to communication between man and machine but also the promotion and harmonization of communication between people, between societies and people, and even between different cultures and between different languages. It also gives the impression that it is trying to come closer to the human side. On the other hand, "human" can be read in Japanese romanization as "fuman," which phonetically means "dissatisfaction." Thus the human interface may ironically be called the "dissatisfied" interface. The conventional "man-machine interface," namely the interface between "man" and "machine," tended to favor the efficiency of the machine and often attempted to push men closer to the side of the machine, that is, to force the burden on the men. This is precisely the "dissatisfied" interface itself. It is no exaggeration to say that whether the human interface is considered truly to be human or not will depend upon the effort to eliminate this dissatisfaction and make the interface pleasant to the human beings. Fortunately, study and research efforts have been made, in recent years, more on interfaces emphasizing the human side than on the conventional man-machine interfaces. In particular, the importance of welfare systems for conquering the physical trouble of men has been recognized and their developmental work is attempted at various research centers. Moreover, research efforts are also being directed towards not only the passive attempt to conquer men's physical trouble but also the active attempt to draw out hidden capabilities of men. In addition, the recent years have seen a great deal of developmental work on information presenting systems which make full use of information perceiving capabilities by human senses such as artificial reality system or virtual reality system. The application of such systems as a new means of communication is awaited in expectation. To be more precise, these systems are utilized for facilitating such tasks as, for example, the tele-existence in which work at a remote place is carried out at a near place after the environment at the remote place has been transferred to the near place, operations involving the joining of capillary vessels under microscopes, operations at the molecular levels in micro-environments under electron microscopes, and tasks in gigantic environments like assembly of cosmic structures, after achieving the imaginary creation of working conditions similar to normal conditions in the normal environment to which abnormal environments have been transferred. In order to succeed in these attempts, it is important to have environment transforming technology, environment transferring technology, and environment presenting technology. To realize these technologies, the maximal consideration of the characteristics of men is indispensable. In such human interface, it is desirable to develop means of transmitting the intentions of men accurately and presenting these intentions effectively so that men can easily recognize, understand, and judge them.
Moreover, in view of the fact that it is important in facilitating tasks to react to actions of men, that is, to have the existence of reactions, it is desirable to develop means of presentation including reactions, operation, instruction, and inputting. In addition, it is important to have still deeper understandings of the characteristics of men and develop instructive techniques and presentation techniques appropriate to the characteristics of men, if more effective presentation to the men is to be achieved and the instructions from men to systems facilitated. Research on the functions and characteristics of men themselves such as human sensory functions, brain functions, and psychological characteristics has now become important. Although the trends of the human interface are not yet clear, this special issue has taken up various topics related to this subject cross-sectionally, although it may be judged somewhat biased. It is our hope that this issue will provide some help in seeking the developmental direction of the human interface in the future.
APA, Harvard, Vancouver, ISO, and other styles
17

Battista, Daniele, Borivoje Baltezarević, Marta Gallina, Alessandra Petrone, and Massimo Santoro. "The Boundary Between Natural and Artificial: Challenges of Artificial Intelligence and Emerging Technologies." Journal of Sociological Research 16, no. 1 (2025): 38. https://doi.org/10.5296/jsr.v16i1.22535.

Full text
Abstract:
The rapid development of neuroscience and emerging technologies is opening new horizons where the distinction between the natural and the artificial becomes increasingly blurred. In this context, projects like Neuralink raise fundamental questions about our identity, human consciousness, and the potential social transformations. The idea of a direct interface between the human brain and technological devices not only opens extraordinary possibilities for treating neurological diseases but also raises important ethical and moral dilemmas. The fusion between human and machine could lead to a redefinition of the human condition itself, altering our relationship with the body, knowledge, and society. While the prospect of "enhancing" human cognitive abilities may seem appealing, the need to cautiously address the risks associated with these technologies—such as social control, surveillance, or economic inequalities—becomes apparent. Ultimately, the convergence between artificial intelligence and human biology could mark the beginning of a new era, one that demands a profound reflection on how we want humanity to evolve in an increasingly technological world.
APA, Harvard, Vancouver, ISO, and other styles
18

Jones, Sara. "Graphical interfaces for knowledge engineering: an overview of relevant literature." Knowledge Engineering Review 3, no. 3 (1988): 221–47. http://dx.doi.org/10.1017/s0269888900004483.

Full text
Abstract:
Literature relevant to the design and development of graphical interfaces for knowledge-based systems is briefly reviewed and discussed. The efficiency of human-computer interaction depends to a large extent on the degree to which the human-machine interface can answer the user's cognitive needs and accurately support his or her natural cognitive processes and structures. Graphical interfaces can often be particularly suitable in this respect, especially in cases where the user's “natural idiom” is graphical. Illustrated examples are given of the way in which graphical interfaces have successfully been used in various fields with particular emphasis on their use in the field of knowledge-based systems. The paper ends with a brief discussion of possible future developments in the field of knowledge-based system interfaces and of the role that graphics might play in such developments.
APA, Harvard, Vancouver, ISO, and other styles
19

Wilson, Denise L., Gilbert G. Kuperman, Robyn L. Crawford, and William A. Perez. "Artificial Intelligence (AI) System Interface Attributes: Survey and Analyses." Proceedings of the Human Factors Society Annual Meeting 32, no. 16 (1988): 1036–40. http://dx.doi.org/10.1177/154193128803201609.

Full text
Abstract:
This study represents a first phase in the design of a human factors tool for artificial intelligence (AI) system assessment. Desirable attributes of AI interfaces were identified as a result of a review of the literature. A questionnaire was developed where explicit definitions were presented for 17 selected attributes. Nineteen AI system developers rated the attributes under four different context conditions: (1) no context (i.e., general application); (2) a bomber crew system; (3) a command and control station; and (4) an intelligence analyst position. Examination of the ratings showed that attributes associated with communication and education aspects of AI were given the lowest ratings, whereas attributes pertaining to tasks which impose a high level of time stress received the highest ratings of importance. The ratings data were subjected to Multidimensional Scaling (MDS) analyses where the following dimensions were determined: (1) tasks performed principally by the system versus tasks requiring system-human communication; and (2) system attributes that principally require algorithmic interpretation versus those that require a high level of AI capabilities.
APA, Harvard, Vancouver, ISO, and other styles
20

Yuan, Ye, and YongHua Lian. "Design Analysis of Human-Computer Interaction and Information Communication in Artificial Intelligence Environments." International Journal of e-Collaboration 21, no. 1 (2025): 1–13. https://doi.org/10.4018/ijec.371626.

Full text
Abstract:
This study delves into Human-Computer Interaction (HCI), a multidisciplinary arena exploring the interface between humans and technology in designing computing systems. Drawing from interactional communication theories that posit communication as a reciprocal process involving message exchanges in specific sociocultural contexts, recent advancements in Artificial Intelligence (AI) have significantly advanced these interactions. However, inefficient interfaces remain a prominent challenge within HCI. To address this, the research introduces the Augmented Scalar Computation Algorithm (ASCA), aimed at enhancing HCI efficiency. The methodology encompasses collecting and preprocessing a dataset, employing Principal Component Analysis (PCA) for feature extraction, and utilizing a Genetic Algorithm (GA) in feature selection to refine ASCA. The efficacy of the proposed ASCA is rigorously assessed, demonstrating its superiority over conventional algorithms through comparative analysis.
APA, Harvard, Vancouver, ISO, and other styles
21

Muralidhar, Deepa. "Thesis Summary: Operationalizing User-Inclusive Transparency in Artificial Intelligence Systems." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 21 (2024): 23401–2. http://dx.doi.org/10.1609/aaai.v38i21.30401.

Full text
Abstract:
Artificial intelligence system architects can increase user trust by designing systems that are inherently transparent. We propose the idea of representing an AI system as an amalgamation of the AI Model (algorithms), data (input and output, including outcomes), and the user interface with visual interpretations (e.g. graphs, Venn diagrams). By designing human controls and feedback mechanisms for AI systems that allow users to exert control over them we can integrate transparency into existing user interfaces. Our plan is to design prototypes of transparent user interfaces for AI systems using well-known usability principles. By conducting surveys we will study their impact to see if these principles help the user to work with the AI system with confidence and if the user perceives the system to be adequately transparent.
APA, Harvard, Vancouver, ISO, and other styles
22

KONSTANTOPOULOS, STASINOS, and VANGELIS KARKALETSIS. "SYSTEM PERSONALITY AND ADAPTIVITY IN AFFECTIVE HUMAN-COMPUTER INTERACTION." International Journal on Artificial Intelligence Tools 22, no. 02 (2013): 1350014. http://dx.doi.org/10.1142/s0218213013500140.

Full text
Abstract:
It has been demonstrated that human users attribute a personality to the computer interfaces they use, regardless of whether one has been explicitly encoded in the system's design or not. In this paper, we explore a method for having explicit control over the personality that a spoken human-robot interface is perceived to exhibit by its users. Our method focuses on the interaction between users and semantic knowledge-based systems where the goal of the interaction is that information from the semantic store is relayed to the user. We describe a personality modelling method that complements a standard dialogue manager by calculating parameters related to adaptivity and emotion for the various interaction modules that realize the system's dialogue acts. This calculation involves the planned act, the user adaptivity model, the system's own goals, but also a machine representation of the personality that we want the system to exhibit, so that systems with different personality will react differently even when in the same dialogue state and with the same user or user type.
APA, Harvard, Vancouver, ISO, and other styles
23

Nian, Shan Po, Lei Chen, Cai Xia Hou, Run Kun Gong, and Li Peng. "Research of Virtual Emotional Human System Based on Artificial Life." Applied Mechanics and Materials 325-326 (June 2013): 1792–95. http://dx.doi.org/10.4028/www.scientific.net/amm.325-326.1792.

Full text
Abstract:
It is the task for artificial intelligence to give human intelligence to machine, such as thinking, reasoning and deciding, etc. To give life to a machine is the research field of artificial life like evolution, generating, self-adaptation, self-organization, etc. Artificial emotion gives a machine various senses such as laughing, anger, sorrow and happiness, etc. It is intolerable for artificial emotion to be separated from artificial life. So the research frame of the virtual emotional human system is represented. And the emotional model, method and technology are investigated in this paper. A simulation has been done. The results are encouraging and it will be applied into the interface between human and machine.
APA, Harvard, Vancouver, ISO, and other styles
24

Weißkirchen, Norman, and Ronald Böck. "Behaviour of True Artificial Peers." Multimodal Technologies and Interaction 6, no. 8 (2022): 64. http://dx.doi.org/10.3390/mti6080064.

Full text
Abstract:
Typical current assistance systems often take the form of optimised user interfaces between the user interest and the capabilities of the system. In contrast, a peer-like system should be capable of independent decision-making capabilities, which in turn require an understanding and knowledge of the current situation for performing a sensible decision-making process. We present a method for a system capable of interacting with their user to optimise their information-gathering task, while at the same time ensuring the necessary satisfaction with the system, so that the user may not be discouraged from further interaction. Based on this collected information, the system may then create and employ a specifically adapted rule-set base which is much closer to an intelligent companion than a typical technical user interface. A further aspect is the perception of the system as a trustworthy and understandable partner, allowing an empathetic understanding between the user and the system, leading to a closer integrated smart environment.
APA, Harvard, Vancouver, ISO, and other styles
25

Lieberman, Henry. "User Interface Goals, AI Opportunities." AI Magazine 30, no. 4 (2009): 16. http://dx.doi.org/10.1609/aimag.v30i4.2266.

Full text
Abstract:
This is an opinion piece about the relationship between the fields of human-computer interaction (HCI), and artificial intelligence (AI). The ultimate goal of both fields is to make user interfaces more effective and easier to use for people. But historically, they have disagreed about whether "intelligence" or "direct manipulation" is the better route to achieving this. There is an unjustified perception in HCI that AI is unreliable. There is an unjustified perception in AI that interfaces are merely cosmetic. This disagreement is counterproductive. This article argues that AI's goals of intelligent interfaces would benefit enormously by the user-centered design and testing principles of HCI. It argues that HCI's stated goals of meeting the needs of users and interacting in natural ways, would be best served by application of AI. Peace.
APA, Harvard, Vancouver, ISO, and other styles
26

Konstan, Joseph, and Loren Terveen. "Human-Centered Recommender Systems: Origins, Advances, Challenges, and Opportunities." AI Magazine 42, no. 3 (2021): 31–42. http://dx.doi.org/10.1609/aimag.v42i3.18142.

Full text
Abstract:
From the earliest days of the field, Recommender Systems research and practice has struggled to balance and integrate approaches that focus on recommendation as a machine learning or missing-value problem with ones that focus on machine learning as a discovery tool and perhaps persuasion platform. In this article, we review 25 years of recommender systems research from a human-centered perspective, looking at the interface and algorithm studies that advanced our understanding of how system designs can be tailored to users' objectives and needs. At the same time, we show how external factors, including commercialization and technology developments, have shaped research on human-centered recommender systems. We show how several unifying frameworks have helped developers and researchers alike incorporate thinking about user experience and human decision-making into their designs. We then review the challenges, and the opportunities, in today’s recommenders, looking at how deep learning and optimization techniques can integrate with both interface designs and human performance statistics to improve recommender effectiveness and usefulness.
APA, Harvard, Vancouver, ISO, and other styles
27

Edén, Anniki Skeidsvoll, Pernilla Sandlund, Montathar Faraon, and Kari Rönkkö. "VoiceBack: Design of Artificial Intelligence-Driven Voice-Based Feedback System for Customer-Agency Communication in Online Travel Services." Information 15, no. 8 (2024): 468. http://dx.doi.org/10.3390/info15080468.

Full text
Abstract:
Online travel booking has become increasingly popular; however, most travel websites do not yet offer voice interaction. This study introduces VoiceBack, an artificial intelligence (AI)-driven voice-based feedback system conceptualized to support both customers and online travel agencies during the booking process. It proposes a theoretically and empirically underpinned design concept that involves a voice user interface (VUI) for customer feedback. This feedback, collected by an AI agent, is analyzed and converted into actionable statistics, which are then presented to online travel agencies through a visual interface. The interface is designed to highlight problem areas and usability issues during the booking process. This study contributes to the field of human-centered AI, by offering insight into the complex process of designing and integrating voice, emotion, and feedback within user interfaces. This integrated approach can enrich the user experience of customers when booking travel online, and pave the way for more intuitive and responsive interaction designs in the future.
APA, Harvard, Vancouver, ISO, and other styles
28

Deng, Lawrence Y., Chun-Liang Hsu, Tzu-Ching Lin, Jui-Sen Tuan, and Shih-Ming Chang. "EOG-based Human–Computer Interface system development." Expert Systems with Applications 37, no. 4 (2010): 3337–43. http://dx.doi.org/10.1016/j.eswa.2009.10.017.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Shalagin, S. V., and G. E. Shalagina. "Assessing the anthropological impact of interfaces in the design stage of software and hardware." Ontology of designing 14, no. 4 (2024): 483–92. http://dx.doi.org/10.18287/2223-9537-2024-14-4-483-492.

Full text
Abstract:
In the information society, as the scope of human activity expands and deepens, continuous formalisation occurs, underpinned by ontological models of technical and natural systems. The user's perception of comprehensive information about a complex subject area leads to information overload. This situation encourages the use of algorithms that present information in a compressed form. The paper introduces modified indicators and qualitative criteria to estimate the potential for errors in compressing subject area information, as well as evaluate the anthropometric nature of the interface using threshold values set by experts, both for artificial intelligence algorithms and users. A method is proposed for quantitatively assessing the anthropological impact of application-level interfaces during the design stage of hardware and software. The method consists of eight stages, which assess the comprehensibility of the hardware and software interface for both artificial intelligence algorithms and users. This approach reduces the likelihood of developing destructive hardware and software systems.
APA, Harvard, Vancouver, ISO, and other styles
30

Subhash S., Siddesh S., Prajwal N. Srivatsa, Ullas A., and Santhosh B. "Developing a Graphical User Interface for an Artificial Intelligence-Based Voice Assistant." International Journal of Organizational and Collective Intelligence 11, no. 3 (2021): 49–67. http://dx.doi.org/10.4018/ijoci.2021070104.

Full text
Abstract:
Artificial intelligence machineries have been extensively active in human life in recent times. Self-governing devices are enhancing their way of interacting with both human and devices. Contemporary vision in this topic can pave the way for a new process of human-machine interaction in which users will get to know how people can understand human language, adapting and communicating through it. One such tool is voice assistant, which can be incorporated into many other brilliant devices. In this article, the voice assistant will receive the audio from the microphone and then convert that into text, later with the help of ‘pyttsx3', and then the text response will be converted into an audio file; then the audio file will be played. The audio is processed using the voice user interface (VUI). This article develops a functional intelligent personal assistant (IPA) and integrates it with a graphical user interface that can perform mental tasks such as ON/OFF of smart applications based on the user commands.
APA, Harvard, Vancouver, ISO, and other styles
31

Williamson, Rebecca, Yu Zhang, Bruce Mehler, and Ying Wang. "Challenges and Opportunities in Developing Next Generation In-Vehicle HMI Systems." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 63, no. 1 (2019): 2115–16. http://dx.doi.org/10.1177/1071181319631516.

Full text
Abstract:
The next generation of automotive human machine interface (HMI) systems is expected to be heavily dependent upon artificial intelligence; from autonomous driving to speech assistance, from gesture & touch-enabled interfaces to web & mobile integration. Smooth, safe, and user-friendly interaction between the driver and the vehicle is a key to winning market share. This panel aims to discuss challenges and opportunities for the next generation of automotive HMI from the perspective of human factors and user behavior. Panelists from industry and academia will offer their unique perspectives on the concerns and opportunities in developing future in-vehicle HMIs.
APA, Harvard, Vancouver, ISO, and other styles
32

Melo, Leonardo Roza, Aline Cristina Antoneli de Oliveira, Priscila Basto Fagundes, Maria José Baldessar, and Luciana Schmitz. "Usability Analysis of Multimodal Interface Human-Computer Interaction Based on Artificial Voice." Advanced Science Letters 22, no. 10 (2016): 3146–50. http://dx.doi.org/10.1166/asl.2016.7976.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Martinsen, Kristian, Jonathan Downey, and Ivanna Baturynska. "Human-Machine Interface for Artificial Neural Network Based Machine Tool Process Monitoring." Procedia CIRP 41 (2016): 933–38. http://dx.doi.org/10.1016/j.procir.2015.10.009.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Bennett, Lorraine. "Optimising the Interface between Artificial Intelligence and Human Intelligence in Higher Education." International Journal of Teaching, Learning and Education 2, no. 3 (2023): 12–25. http://dx.doi.org/10.22161/ijtle.2.3.3.

Full text
Abstract:
If you thought the recent pandemic was a major disruptor, society is on the cusp of a far more pervasive and ubiquitous shift in human existence. The rapid development and roll-out of Artificial Intelligence (AI) technologies is looming as a quintessential disruptor of quantum proportions and will eventually permeate most, if not all, corners of our lives. This paper investigates the initial fall-out and potential impact of AI generative technologies on the higher education sector. When ChatGPT and similar software programs were released in late 2022, a knee-jerk reaction from some Government authorities and institutions was to ban their use in education. Others recommended the development of academic policies and strategies to mitigate the risks to academic integrity and quality assurance. Pragmatic and adventurous educators embraced the opportunities that AI technologies offer to enhance the education sector and expand opportunities for life-long learning. Whilst there is general consensus that higher education will need to undergo major reform to address the changes that AI will force on the future of learning and higher education institutions, the way forward is less clear. The proposition explored in this paper is that in order to filter, interpret, evaluate and apply AI generated content it might be helpful to consider the challenge through a research lens. The preliminary result is a Framework which focuses on the interface between artificial intelligence and human intelligence in the development and design of future-orientated curriculum, pedagogy, learning activities and assessments in higher education.
APA, Harvard, Vancouver, ISO, and other styles
35

Gill, Satinder P. "Entrainment and musicality in the human system interface." AI & SOCIETY 21, no. 4 (2007): 567–605. http://dx.doi.org/10.1007/s00146-007-0103-8.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Lalitha, Anusha, and Nitish V. Thakor. "Design of an Accelerometer-Controlled Myoelectric Human Computer Interface." Advanced Materials Research 403-408 (November 2011): 3973–79. http://dx.doi.org/10.4028/www.scientific.net/amr.403-408.3973.

Full text
Abstract:
The purpose of this study is to develop an alternate in-air input device which is intended to make interaction with computers easier for amputees. This paper proposes the design and utility of accelerometer controlled Myoelectric Human Computer Interface (HCI). This device can function as a PC mouse. The two dimensional position control of the mouse cursor is done by an accelerometer-based method. The left click and right click and other extra functions of this device are controlled by the Electromyographic (EMG) signals. Artificial Neural Networks (ANNs) are used to decode the intended movements during run-time. ANN is a pattern recognition based classification. An amputee can control it using phantom wrist gestures or finger movements.
APA, Harvard, Vancouver, ISO, and other styles
37

T, Vijayakumar. "Enhancing User Experience through Emotion-Aware Interfaces: A Multimodal Approach." Journal of Innovative Image Processing 6, no. 1 (2024): 27–39. http://dx.doi.org/10.36548/jiip.2024.1.003.

Full text
Abstract:
The ability of a system or entity—such as an artificial intelligence system, computer program, or interface—to identify, comprehend, and react to human emotions is known as emotion awareness. In human-computer interaction, where the aim is to develop more intuitive and sympathetic systems that can comprehend and adjust to users' emotional states, this idea is especially pertinent. Improving user experience with emotion-aware interfaces is a multifaceted problem that calls for a multimodal strategy. Through the integration of several modalities, such as auditory, haptic, and visual feedback, interface designers may develop systems that not only react to user inputs but also identify and adjust based on the emotional states of users. The way users interact in the multimodal domain of emotion awareness will be explained in this research. Following that, a multimodal exploration of the user's experience with emotion awareness will take place.
APA, Harvard, Vancouver, ISO, and other styles
38

Kuts, Vladimir, Jeremy A. Marvel, Murat Aksu, et al. "Digital Twin as Industrial Robots Manipulation Validation Tool." Robotics 11, no. 5 (2022): 113. http://dx.doi.org/10.3390/robotics11050113.

Full text
Abstract:
The adoption of Digital Twin (DT) solutions for industrial purposes is increasing among small- and medium-sized enterprises and is already being integrated into many large-scale companies. As there is an increasing need for faster production and shortening of the learning curve for new emerging technologies, Virtual Reality (VR) interfaces for enterprise manufacturing DTs seem to be a good solution. Furthermore, with the emergence of the Industry 5.0 (I5.0) paradigm, human operators will be increasingly integrated in the systems interfaces through advanced interactions, pervasive sensors, real time tracking and data acquisition. This scenario is especially relevant in collaborative automated systems where the introduction of immersive VR interfaces based on production cell DTs might provide a solution for the integration of the human factors in the modern industrial scenarios. This study presents experimental results of the comparison between users controlling a physical industrial robot system via a traditional teach pendant and a DT leveraging a VR user interface. The study group involves forty subjects including experts in robotics and VR as well as non-experts. An analysis of the data gathered in both the real and the virtual use case scenario is provided. The collected information includes time for performing a task with an industrial robot, stress level evaluation, physical and mental effort, and the human subjects’ perceptions of the physical and simulated robots. Additionally, operator gazes were tracked in the VR environment. In this study, VR interfaces in the DT representation are exploited to gather user centered metrics and validate efficiency and safety standards for modern collaborative industrial systems in I5.0. The goal is to evaluate how the operators perceive and respond to the virtual robot and user interface while interacting with them and detect if any degradation of user experience and task efficiency exists compared to the real robot interfaces. Results demonstrate that the use of DT VR interfaces is comparable to traditional teach pendants for the given task and might be a valuable substitute of physical interfaces. Despite improving the overall task performance and considering the higher stress levels detected while using the DT VR interface, further studies are necessary to provide a clearer validation of both interfaces and user impact assessment methods.
APA, Harvard, Vancouver, ISO, and other styles
39

Sengers, Phoebe. "Schizophrenia and Narrative in Artificial Agents." Leonardo 35, no. 4 (2002): 427–31. http://dx.doi.org/10.1162/002409402760181240.

Full text
Abstract:
Artificial-agent technology has become commonplace in technical research from computer graphics to interface design and in popular culture through the Web and computer games. On the one hand, the population of the Web and our PCs with characters who reflect us can be seen as a humanization of a previously purely mechanical interface. On the other hand, the mechanization of subjectivity carries the danger of simply reducing the human to the machine. The author argues that predominant artificial intelligence (AI) approaches to modeling agents are based on an erasure of subjectivity analogous to that which appears when people are subjected to institutionalization. The result is agent behavior that is fragmented, depersonalized, lifeless and incomprehensible. Approaching the problem using a hybrid of critical theory and AI agent technology, the author argues that agent behavior should be narratively understandable; she presents a new agent architecture that structures behavior to be comprehensible as narrative.
APA, Harvard, Vancouver, ISO, and other styles
40

Wilkinson, Alexander, Michael Gonzales, Patrick Hoey, et al. "Design guidelines for human–robot interaction with assistive robot manipulation systems." Paladyn, Journal of Behavioral Robotics 12, no. 1 (2021): 392–401. http://dx.doi.org/10.1515/pjbr-2021-0023.

Full text
Abstract:
The design of user interfaces (UIs) for assistive robot systems can be improved through the use of a set of design guidelines presented in this article. As an example, the article presents two different UI designs for an assistive manipulation robot system. We explore the design considerations from these two contrasting UIs. The first is referred to as the graphical user interface (GUI), which the user operates entirely through a touchscreen as a representation of the state of the art. The second is a type of novel UI referred to as the tangible user interface (TUI). The TUI makes use of devices in the real world, such as laser pointers and a projector–camera system that enables augmented reality. Each of these interfaces is designed to allow the system to be operated by an untrained user in an open environment such as a grocery store. Our goal is for these guidelines to aid researchers in the design of human–robot interaction for assistive robot systems, particularly when designing multiple interaction methods for direct comparison.
APA, Harvard, Vancouver, ISO, and other styles
41

Netzer, Eitan, and Amir B. Geva. "Human-in-the-loop active learning via brain computer interface." Annals of Mathematics and Artificial Intelligence 88, no. 11-12 (2020): 1191–205. http://dx.doi.org/10.1007/s10472-020-09689-0.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Schmalzried, Martin. "A Philosophical and Ontological Perspective on Artificial General Intelligence and The Metaverse." Journal of Metaverse 5, no. 2 (2025): 168–80. https://doi.org/10.57019/jmv.1668494.

Full text
Abstract:
This paper leverages various philosophical and ontological frameworks to explore the concept of embodied artificial general intelligence (AGI), its relationship to human consciousness, and the key role of the metaverse in facilitating this relationship. Several theoretical frameworks underpin this exploration, such as embodied cognition, Michael Levin's computational boundary of a "Self," and Donald D. Hoffman's Interface Theory of Perception, which lead to considering human perceived outer reality as a symbolic representation of alternate inner states of being, and where AGI could embody a different form of consciousness with a larger computational boundary. The paper further discusses the necessary architecture for the emergence of an embodied AGI, how to calibrate an AGI’s symbolic interface, and the key role played by the Metaverse, decentralized systems and open-source blockchain technology. The paper concludes by emphasizing the importance of achieving a certain degree of harmony in human relations and recognizing the interconnectedness of humanity at a global level, as key prerequisites for the emergence of a stable embodied AGI.
APA, Harvard, Vancouver, ISO, and other styles
43

Starke, Alain, Martijn Willemsen, and Chris Snijders. "Promoting Energy-Efficient Behavior by Depicting Social Norms in a Recommender Interface." ACM Transactions on Interactive Intelligent Systems 11, no. 3-4 (2021): 1–32. http://dx.doi.org/10.1145/3460005.

Full text
Abstract:
How can recommender interfaces help users to adopt new behaviors? In the behavioral change literature, social norms and other nudges are studied to understand how people can be convinced to take action (e.g., towel re-use is boosted when stating that “75% of hotel guests” do so), but most of these nudges are not personalized. In contrast, recommender systems know what to recommend in a personalized way, but not much human-computer interaction (HCI) research has considered how personalized advice should be presented to help users to change their current habits. We examine the value of depicting normative messages (e.g., “75% of users do X”), based on actual user data, in a personalized energy recommender interface called “Saving Aid.” In a study among 207 smart thermostat owners, we compared three different normative explanations (“Global,” “Similar,” and “Experienced” norm rates) to a non-social baseline (“kWh savings”). Although none of the norms increased the total number of chosen measures directly, we show that depicting high peer adoption rates alongside energy-saving measures increased the likelihood that they would be chosen from a list of recommendations. In addition, we show that depicting social norms positively affects a user’s evaluation of a recommender interface.
APA, Harvard, Vancouver, ISO, and other styles
44

Beisov, Nurbol, Gulnar Madyarova, and Nurassyl Kerimbayev. "Gesture recognition technology: a new dimension in human-computer interaction interface." Indonesian Journal of Electrical Engineering and Computer Science 35, no. 2 (2024): 1311. http://dx.doi.org/10.11591/ijeecs.v35.i2.pp1311-1324.

Full text
Abstract:
Development of an interface for intelligent gesture control to improve user experience and increase the efficiency of interaction with a computer. This paper proposes a gesture recognition system based on artificial intelligence using convolutional neural networks (CNN). The system comprises three stages: pre-processing, optimal frame determination, and gesture category identification. The extracted features used are independent of movement, scaling, and rotation, providing greater flexibility to the system. The suggested gesture control technology, known as Kazakh Sign Language (KSL) for Kazakh alphabets, eliminates the need for additional devices, enabling users to interact with the system naturally. Experiments demonstrated that the proposed KSL system can accurately recognize Kazakh language alphabet letters with a high precision of 97.3%, owing to the utilization of artificial intelligence and CNN to enhance the accuracy and effectiveness of gesture control. Gestures, a type of visual formation, are perceivable by computers through machine learning models. The selection of methods and systems for recognizing Kazakh sign language gestures was accompanied by addressing various challenges related to language-specific orthographic and gestural features. The developed gesture control interface for human-computer interaction is applied in the field of inclusive education, aiming to assist deaf and hard-of-hearing children in learning sign language.
APA, Harvard, Vancouver, ISO, and other styles
45

Beisov, Nurbol, Gulnar Madyarova, and Nurassyl Kerimbayev. "Gesture recognition technology: a new dimension in human-computer interaction interface." Indonesian Journal of Electrical Engineering and Computer Science 35, no. 2 (2024): 1311–24. https://doi.org/10.11591/ijeecs.v35.i2.pp1311-1324.

Full text
Abstract:
This work develops an interface for intelligent gesture control to improve the user experience and increase the efficiency of interaction with a computer. The paper proposes a gesture recognition system based on artificial intelligence using convolutional neural networks (CNN). The system comprises three stages: pre-processing, optimal frame determination, and gesture category identification. The extracted features are independent of movement, scaling, and rotation, providing greater flexibility to the system. The proposed gesture control technology, which recognizes Kazakh Sign Language (KSL) for the Kazakh alphabet, eliminates the need for additional devices, enabling users to interact with the system naturally. Experiments demonstrated that the proposed KSL system can recognize Kazakh alphabet letters with a high precision of 97.3%, owing to the use of artificial intelligence and CNNs to enhance the accuracy and effectiveness of gesture control. Gestures, a type of visual formation, are perceivable by computers through machine learning models. The selection of methods and systems for recognizing Kazakh sign language gestures involved addressing various challenges related to language-specific orthographic and gestural features. The developed gesture control interface for human-computer interaction is applied in the field of inclusive education, aiming to assist deaf and hard-of-hearing children in learning sign language.
APA, Harvard, Vancouver, ISO, and other styles
46

Miikkulainen, Risto, Myles Brundage, Jonathan Epstein, et al. "Ascend by Evolv: AI-Based Massively Multivariate Conversion Rate Optimization." AI Magazine 41, no. 1 (2020): 44–60. http://dx.doi.org/10.1609/aimag.v41i1.5256.

Full text
Abstract:
Conversion rate optimization (CRO) means designing an e-commerce web interface so that as many users as possible take a desired action such as registering for an account, requesting a contact, or making a purchase. Such design is usually done by hand, evaluating one change at a time through A/B testing, evaluating all combinations of two or three variables through multivariate testing, or evaluating multiple variables independently. Traditional CRO is thus limited to a small fraction of the design space only, and often misses important interactions between the design variables. This article describes Ascend by Evolv, an automatic CRO system that uses evolutionary search to discover effective web interfaces given a human-designed search space. Design candidates are evaluated in parallel online with real users, making it possible to discover and use interactions between the design elements that are difficult to identify otherwise. A commercial product since September 2016, Ascend has been applied to numerous web interfaces across industries and search space sizes, with up to fourfold improvements over human design. Ascend can therefore be seen as massively multivariate CRO made possible by artificial intelligence.
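The following Python sketch illustrates the general idea of evolutionary search over a human-designed space of interface elements, scored by a (here simulated) conversion rate. It is not Evolv's implementation; the search space, population parameters, and fitness stand-in are assumptions for illustration only.

```python
# Illustrative sketch only (not Evolv's system): a simple genetic algorithm
# over a human-designed space of interface choices, scored by conversion rate.
import random

SEARCH_SPACE = {                    # candidate values for each design element
    "headline":  ["Save today", "Join free", "Start now"],
    "cta_color": ["green", "orange", "blue"],
    "layout":    ["single-column", "two-column"],
}


def random_design():
    return {k: random.choice(v) for k, v in SEARCH_SPACE.items()}


def crossover(a, b):
    return {k: random.choice([a[k], b[k]]) for k in SEARCH_SPACE}


def mutate(design, rate=0.1):
    return {k: (random.choice(SEARCH_SPACE[k]) if random.random() < rate else v)
            for k, v in design.items()}


def measured_conversion_rate(design) -> float:
    # In a live system this would be the observed conversion rate from real
    # user traffic; here a random score stands in purely for illustration.
    return random.random()


def evolve(generations=20, pop_size=16, elite=4):
    population = [random_design() for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(population, key=measured_conversion_rate, reverse=True)
        parents = scored[:elite]
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(pop_size - elite)]
        population = parents + children
    return max(population, key=measured_conversion_rate)


print(evolve())
```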
APA, Harvard, Vancouver, ISO, and other styles
47

Roosan, Don, Jay Chok, Mazharul Karim, et al. "Artificial Intelligence–Powered Smartphone App to Facilitate Medication Adherence: Protocol for a Human Factors Design Study." JMIR Research Protocols 9, no. 11 (2020): e21659. http://dx.doi.org/10.2196/21659.

Full text
Abstract:
Background: Medication Guides consisting of crucial interactions and side effects are extensive and complex. Due to the exhaustive information, patients do not retain the necessary medication information, which can result in hospitalizations and medication nonadherence. A gap exists in understanding patients’ cognition of managing complex medication information. However, advancements in technology and artificial intelligence (AI) allow us to understand patient cognitive processes to design an app to better provide important medication information to patients. Objective: Our objective is to improve the design of an innovative AI- and human factor–based interface that supports patients’ medication information comprehension, which could potentially improve medication adherence. Methods: This study has three aims. Aim 1 has three phases: (1) an observational study to understand patient perception of fear and biases regarding medication information, (2) an eye-tracking study to understand the attention locus for medication information, and (3) a psychological refractory period (PRP) paradigm study to understand functionalities. Observational data will be collected, such as audio and video recordings, gaze mapping, and time from PRP. A total of 50 patients, aged 18-65 years, who started at least one new medication, for which we developed visualization information, and who have a cognitive status of 34 during cognitive screening using the TICS-M test and health literacy level, will be included in this aim of the study. In Aim 2, we will iteratively design and evaluate an AI-powered medication information visualization interface as a smartphone app with the knowledge gained from each component of Aim 1. The interface will be assessed through two usability surveys. A total of 300 patients, aged 18-65 years, with diabetes, cardiovascular diseases, or mental health disorders, will be recruited for the surveys. Data from the surveys will be analyzed through exploratory factor analysis. In Aim 3, in order to test the prototype, there will be a two-arm study design. This aim will include 900 patients, aged 18-65 years, with internet access, without any cognitive impairment, and with at least two medications. Patients will be sequentially randomized. Three surveys will be used to assess the primary outcome of medication information comprehension and the secondary outcome of medication adherence at 12 weeks. Results: Preliminary data collection will be conducted in 2021, and results are expected to be published in 2022. Conclusions: This study will lead the future of AI-based, innovative, digital interface design and aid in improving medication comprehension, which may improve medication adherence. The results from this study will also open up future research opportunities in understanding how patients manage complex medication information and will inform the format and design for innovative, AI-powered digital interfaces for Medication Guides. International Registered Report Identifier (IRRID): PRR1-10.2196/21659
APA, Harvard, Vancouver, ISO, and other styles
48

CHANG, MING-SHAUNG, and JUNG-HUA CHOU. "A ROBUST AND FRIENDLY HUMAN–ROBOT INTERFACE SYSTEM BASED ON NATURAL HUMAN GESTURES." International Journal of Pattern Recognition and Artificial Intelligence 24, no. 06 (2010): 847–66. http://dx.doi.org/10.1142/s0218001410008214.

Full text
Abstract:
In this paper, we design a robust and friendly human–robot interface (HRI) system for our intelligent mobile robot based only on natural human gestures. It consists of a triple-face detection method and a fuzzy logic controller (FLC)-Kalman filter tracking system to check the users and predict their current position in a dynamic and cluttered working environment. In addition, through the combined classifier of principal component analysis (PCA) and a back-propagation artificial neural network (BPANN), single and successive commands defined by facial positions and hand gestures are identified for real-time command recognition after dynamic programming (DP). Therefore, the users can instruct this HRI system to perform member recognition or expression recognition corresponding to their gesture commands, based respectively on linear discriminant analysis (LDA) and the BPANN. The experimental results prove that the proposed HRI system performs accurately in real-time face detection and tracking, and reacts robustly to the corresponding gesture commands at eight frames per second (fps).
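As a rough illustration of a PCA plus back-propagation neural network classifier of the kind described above, the following Python sketch trains a scikit-learn pipeline on synthetic feature vectors. The feature dimensionality, the number of gesture commands, and the data are assumptions, not the authors' setup.

```python
# Minimal sketch (synthetic data, assumed dimensions): PCA followed by a
# back-propagation neural network (MLP) for gesture-command classification,
# mirroring the combined PCA/BPANN classifier described above.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 100))      # hypothetical gesture feature vectors
y = rng.integers(0, 6, size=600)     # six hypothetical gesture commands

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

clf = make_pipeline(
    StandardScaler(),
    PCA(n_components=20),            # dimensionality reduction before the network
    MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0),
)
clf.fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```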
APA, Harvard, Vancouver, ISO, and other styles
49

Nazari, Vaheh, and Yong-Ping Zheng. "Controlling Upper Limb Prostheses Using Sonomyography (SMG): A Review." Sensors 23, no. 4 (2023): 1885. http://dx.doi.org/10.3390/s23041885.

Full text
Abstract:
This paper presents a critical review and comparison of the results of recently published studies in the fields of human–machine interfaces and the use of sonomyography (SMG) for the control of upper limb prostheses. For this review, a combination of the keywords "Human Machine Interface", "Sonomyography", "Ultrasound", "Upper Limb Prosthesis", "Artificial Intelligence", and "Non-Invasive Sensors" was used to search for articles on Google Scholar and PubMed. Sixty-one articles were found, of which fifty-nine were used in this review. For a comparison of the different ultrasound modes, feature extraction methods, and machine learning algorithms, 16 articles were used. The article reviews the various modes of ultrasound devices for prosthetic control, the machine learning algorithms used to classify different hand gestures, and the feature extraction methods used to increase the accuracy of the artificial intelligence in these control systems. The results of the review show that ultrasound sensing has the potential to be used as a viable human–machine interface to control bionic hands with multiple degrees of freedom. Moreover, different hand gestures can be classified by different machine learning algorithms, trained with features extracted from the collected data, with an accuracy of around 95%.
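To illustrate the kind of pipeline the reviewed studies share, the hedged Python sketch below windows a one-dimensional signal, extracts simple per-window features, and cross-validates a gesture classifier on synthetic data. The feature set, window length, and classifier choice are assumptions for illustration, not findings from the review.

```python
# Hedged sketch (synthetic data, assumed features): windowing a sonomyography-
# like signal, extracting simple statistics per window, and classifying hand
# gestures, in the spirit of the SMG pipelines compared in the review above.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score


def window_features(signal: np.ndarray, win: int = 200) -> np.ndarray:
    """Split a 1-D signal into windows; compute mean, std, peak-to-peak."""
    n = len(signal) // win
    wins = signal[: n * win].reshape(n, win)
    return np.column_stack([wins.mean(1), wins.std(1), np.ptp(wins, axis=1)])


rng = np.random.default_rng(1)
# Four hypothetical gestures, 25 synthetic recordings each.
signals = [rng.normal(loc=g, size=4000) for g in range(4) for _ in range(25)]
X = np.vstack([window_features(s).mean(0) for s in signals])  # one vector per recording
y = np.repeat(np.arange(4), 25)

scores = cross_val_score(SVC(kernel="rbf"), X, y, cv=5)
print("cross-validated accuracy:", scores.mean())
```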
APA, Harvard, Vancouver, ISO, and other styles
50

B.K, Ms Sunitha. "The Impact of AI on Human Roles in the User Interface & User Experience Design Industry." INTERNATIONAL JOURNAL OF SCIENTIFIC RESEARCH IN ENGINEERING AND MANAGEMENT 08, no. 03 (2024): 1–5. http://dx.doi.org/10.55041/ijsrem29692.

Full text
Abstract:
The ever-evolving field of User Interface & User Experience (UI/UX) design prioritizes creating user-friendly interfaces. Traditionally, this has been a human-centred process. However, Artificial Intelligence (AI) offers new possibilities for automating design tasks and leveraging data for user insights. This research paper delves into the potential impact of AI on UI/UX design. The core question is whether AI tools can effectively replace human designers in crafting user experiences, or whether AI can instead work alongside designers to enhance their capabilities. This paper explores these questions through a review of existing research and proposes a research methodology to investigate the topic further. The literature review analyses how AI can be used for tasks like user behaviour analysis, A/B testing, and prototype generation. However, it also acknowledges AI's limitations in understanding user emotions, implementing creative solutions, and adapting to unforeseen user needs. The suggested research methodology takes a combined approach, pairing a review of existing literature with interviews of UI/UX professionals and a case study analysing a design project that utilises AI tools. By investigating the strengths and limitations of AI in design tasks, how AI can be integrated into the workflow, and the ethical considerations surrounding AI-driven design decisions, this research provides a better understanding of the relationship between AI and human designers in the UI/UX field. The findings can inform design education, industry practices, and the development of future AI tools specifically tailored for UI/UX design applications. Keywords: UI/UX Design, Artificial Intelligence, Human-Computer Interaction, User Experience, User Interface, Design Automation
APA, Harvard, Vancouver, ISO, and other styles