Academic literature on the topic 'Trust in machines'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Trust in machines.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Trust in machines"

1

Cascio, Jamais. "In machines we trust." New Scientist 229, no. 3064 (March 2016): 26–27. http://dx.doi.org/10.1016/s0262-4079(16)30413-4.

2

Lee, Jieun, Yusuke Yamani, and Makoto Itoh. "Revisiting Trust in Machines: Examining Human–Machine Trust Using a Reprogrammed Pasteurizer Task." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 62, no. 1 (September 2018): 1767–70. http://dx.doi.org/10.1177/1541931218621400.

Abstract:
Automated technologies have brought a number of benefits to professional domains, expanding the area in which humans can perform optimally in complex work environments. Human–automation trust has become an important aspect when designing acceptable automated systems considering general users who have no comprehensive knowledge of the systems. Muir and Moray (1996) proposed a model of human–machine trust incorporating predictability, dependability, and faith as predictors of overall trust in machines. Though Muir and Moray (1996) predicted that trust in machines grows from predictability, then dependability, and finally faith, their results suggested the opposite. This study will reexamine their theoretical framework and test which of the three dimensions governs initial trust in automation. Participants will be trained to operate a simulated pasteurization plant, as in Muir and Moray (1996), and they will be asked to maximize system performance in the pasteurizing task. We hypothesized that faith governs overall trust early in the interaction with the automated system, then dependability, and finally predictability as lay automation users become more familiar with the system. We attempt to replicate the results of Muir and Moray (1996) and argue that their model should be revised for trust development for general automation users.
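As a reading aid only (this is not Muir and Moray's instrument or the authors' analysis), the three-dimension model described in this abstract can be pictured as a weighted blend whose weights shift with experience; the short Python sketch below encodes the hypothesized progression from faith toward predictability. All weights and scores are invented.

```python
# Illustrative sketch (not Muir & Moray's actual model): overall trust as a
# weighted blend of predictability, dependability, and faith, where the weights
# shift with experience as hypothesized in the abstract (faith dominates early,
# predictability dominates late). All numbers are hypothetical.

def overall_trust(predictability: float, dependability: float, faith: float,
                  experience: float) -> float:
    """Return a 0-1 trust estimate; `experience` runs from 0 (novice) to 1 (expert)."""
    w_faith = 1.0 - experience   # faith matters most early on
    w_pred = experience          # predictability matters most later
    w_dep = 0.5                  # dependability weighted throughout
    total = w_faith + w_pred + w_dep
    return (w_faith * faith + w_pred * predictability + w_dep * dependability) / total

# Early interaction: trust tracks faith; late interaction: trust tracks predictability.
print(overall_trust(predictability=0.9, dependability=0.7, faith=0.4, experience=0.1))
print(overall_trust(predictability=0.9, dependability=0.7, faith=0.4, experience=0.9))
```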
3

Stanley, Jeff, Ozgur Eris, and Monika Lohani. "A Conceptual Framework for Machine Self-Presentation and Trust." International Journal of Humanized Computing and Communication 2, no. 1 (March 1, 2021): 20–45. http://dx.doi.org/10.35708/hcc1869-148366.

Abstract:
Increasingly, researchers are creating machines with humanlike social behaviors to elicit desired human responses such as trust and engagement, but a systematic characterization and categorization of such behaviors and their demonstrated effects is missing. This paper proposes a taxonomy of machine behavior based on what has been experimented with and documented in the literature to date. We argue that self-presentation theory, a psychosocial model of human interaction, provides a principled framework to structure existing knowledge in this domain and guide future research and development. We leverage a foundational human self-presentation taxonomy (Jones and Pittman, 1982), which associates human verbal behaviors with strategies, to guide the literature review of human-machine interaction studies we present in this paper. In our review, we identified 36 studies that have examined human-machine interactions with behaviors corresponding to strategies from the taxonomy. We analyzed frequently and infrequently used strategies to identify patterns and gaps, which led to the adaptation of Jones and Pittman’s human self-presentation taxonomy to a machine self-presentation taxonomy. The adapted taxonomy identifies strategies and behaviors machines can employ when presenting themselves to humans in order to elicit desired human responses and attitudes. Drawing from models of human trust we discuss how to apply the taxonomy to affect perceived machine trustworthiness.
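For orientation, the adapted taxonomy builds on the five Jones and Pittman (1982) self-presentation strategies. The sketch below simply lists those strategies; the paired machine behaviors are hypothetical illustrations, not items drawn from the 36 reviewed studies.

```python
# The five Jones & Pittman (1982) self-presentation strategies, paired with
# hypothetical examples of how a machine might enact each one. The example
# behaviors are illustrative guesses, not behaviors catalogued by the paper.
SELF_PRESENTATION_STRATEGIES = {
    "ingratiation":    "compliment the user or express agreement to appear likeable",
    "self-promotion":  "cite its own accuracy statistics to appear competent",
    "exemplification": "highlight that it double-checks results to appear dedicated",
    "intimidation":    "warn about consequences of ignoring it to appear forceful",
    "supplication":    "admit uncertainty and ask for help to appear dependent",
}

for strategy, example_behavior in SELF_PRESENTATION_STRATEGIES.items():
    print(f"{strategy}: {example_behavior}")
```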
4

Mebane Jr., W. R. "POLITICAL SCIENCE: Can We Trust the Machines?" Science 322, no. 5902 (October 31, 2008): 677a–678a. http://dx.doi.org/10.1126/science.1165818.

5

Willis-Owen, C. A. "Don't place all your trust in machines." BMJ 327, no. 7423 (November 8, 2003): 1084. http://dx.doi.org/10.1136/bmj.327.7423.1084.

6

Anitha, H. M., and P. Jayarekha. "An Software Defined Network Based Secured Model for Malicious Virtual Machine Detection in Cloud Environment." Journal of Computational and Theoretical Nanoscience 17, no. 1 (January 1, 2020): 526–30. http://dx.doi.org/10.1166/jctn.2020.8481.

Abstract:
Cloud computing is an emerging technology that offers services to users on demand. Services are leveraged according to the Service Level Agreement (SLA), which is monitored so that services are delivered to users without interruption or deprivation. A Software Defined Network (SDN) is used to monitor the trust scores of the deployed Virtual Machines (VMs) and the Quality of Service (QoS) parameters offered. The SDN controller computes the trust score of each Virtual Machine and determines whether it is malicious or trusted. A genetic algorithm is used to identify trusted Virtual Machines and release the resources allocated to malicious ones. The monitored information is communicated to the cloud provider for further action. Security is enhanced by preventing attacks from malicious Virtual Machines in the cloud environment. The main objective of the paper is to enhance system security using the Software Defined Network based secured model.
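The abstract leaves the scoring details open, so the following is only a minimal sketch of the monitoring idea under assumed inputs: the SDN controller aggregates per-VM metrics into a trust score and flags low-scoring VMs. The metric names, weights, and threshold are assumptions, not values from the paper.

```python
# Minimal sketch of the monitoring idea described in the abstract: an SDN
# controller aggregates per-VM metrics into a trust score and flags VMs that
# fall below a threshold as candidates for resource reclamation. The metric
# names, weights, and threshold are assumptions, not values from the paper.
from dataclasses import dataclass

@dataclass
class VMetrics:
    sla_compliance: float    # 0-1, fraction of SLA targets met
    packet_drop_rate: float  # 0-1, observed by the controller
    anomaly_score: float     # 0-1, higher means more suspicious traffic

def trust_score(m: VMetrics) -> float:
    # Higher SLA compliance raises trust; drops and anomalies lower it.
    return max(0.0, m.sla_compliance - 0.5 * m.packet_drop_rate - 0.5 * m.anomaly_score)

def classify_vms(vms: dict[str, VMetrics], threshold: float = 0.6) -> dict[str, str]:
    return {vm: ("trusted" if trust_score(m) >= threshold else "malicious")
            for vm, m in vms.items()}

print(classify_vms({
    "vm-1": VMetrics(0.95, 0.01, 0.05),
    "vm-2": VMetrics(0.60, 0.30, 0.70),
}))
```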
7

Quinn, Daniel B., Richard Pak, and Ewart J. de Visser. "Testing the Efficacy of Human-Human Trust Repair Strategies with Machines." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 61, no. 1 (September 2017): 1794–98. http://dx.doi.org/10.1177/1541931213601930.

Abstract:
Trust is a critical component of both human-automation and human-human interactions. Interface manipulations, such as visual anthropomorphism and machine politeness, have been used to affect trust in automation. However, these design strategies are meant primarily to facilitate initial trust formation but have not been examined as a means to actively repair trust that has been violated by a system failure. Previous research has shown that trust in another party can be effectively repaired after a violation using various strategies, but there is little evidence substantiating such strategies in a human-automation context. The current study will examine the effectiveness of trust repair strategies, derived from a human-human or human-organizational context, in human-automation interaction.
8

Swanson, LaTasha R., Jennica L. Bellanca, and Justin Helton. "Automated Systems and Trust: Mineworkers' Trust in Proximity Detection Systems for Mobile Machines." Safety and Health at Work 10, no. 4 (December 2019): 461–69. http://dx.doi.org/10.1016/j.shaw.2019.09.003.

9

Andras, Peter, Lukas Esterle, Michael Guckert, The Anh Han, Peter R. Lewis, Kristina Milanovic, Terry Payne, et al. "Trusting Intelligent Machines: Deepening Trust Within Socio-Technical Systems." IEEE Technology and Society Magazine 37, no. 4 (December 2018): 76–83. http://dx.doi.org/10.1109/mts.2018.2876107.

10

Madhavan, Poornima, and Douglas A. Wiegmann. "A New Look at the Dynamics of Human-Automation Trust: Is Trust in Humans Comparable to Trust in Machines?" Proceedings of the Human Factors and Ergonomics Society Annual Meeting 48, no. 3 (September 2004): 581–85. http://dx.doi.org/10.1177/154193120404800365.


Dissertations / Theses on the topic "Trust in machines"

1

Norstedt, Emil, and Timmy Sahlberg. "Human Interaction with Autonomous machines: Visual Communication to Encourage Trust." Thesis, Blekinge Tekniska Högskola, Institutionen för maskinteknik, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-19706.

Abstract:
Ongoing development is happening within the construction industry: machines are being transformed from being operated by humans to being autonomous. This project has been a collaboration with Volvo Construction Equipment (Volvo CE) and their new autonomous wheel loader. The autonomous machine is supposed to operate in the same environment as people; therefore, a well-developed safety system is required to eliminate accidents. The purpose has been to develop a system that increases safety for the workers and encourages trust in the autonomous machine. The system is based on visual communication to achieve trust between the machine and the people around it. An iterative process, with a focus on testing, prototyping, and analysing, has been used to accomplish a successful result. By creating models with a variety of functions, a better understanding has been developed of how to design a human-machine interface that encourages trust. The iterative process resulted in a concept that communicates through eyes. Eye contact is an essential factor for creating trust in unfamiliar and exposed situations. The solution mediates different expressions by changing the colour and shape of the eyes to create awareness and to inform people moving around in the same environment. Specific information can be conveyed in various situations by adapting the colour and shape of the eyes. Trust in the autonomous machine can be encouraged using this way of communicating.
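As a purely hypothetical illustration of the concept (the states, colours, and shapes below are invented, not taken from the thesis), the eye-based signalling could be modelled as a simple mapping from machine state to eye appearance.

```python
# Hypothetical illustration of the thesis concept: the autonomous machine signals
# its state to nearby people by changing the colour and shape of its "eyes".
# The specific states, colours, and shapes here are invented for illustration
# and are not taken from the thesis.
from enum import Enum

class MachineState(Enum):
    IDLE = "idle"
    MOVING = "moving"
    PERSON_DETECTED = "person_detected"
    STOPPING = "stopping"

EYE_SIGNALS = {
    MachineState.IDLE:            {"colour": "green",  "shape": "relaxed"},
    MachineState.MOVING:          {"colour": "yellow", "shape": "focused"},
    MachineState.PERSON_DETECTED: {"colour": "yellow", "shape": "looking_at_person"},
    MachineState.STOPPING:        {"colour": "red",    "shape": "wide"},
}

def eye_signal(state: MachineState) -> dict:
    """Return the eye appearance the machine should display for a given state."""
    return EYE_SIGNALS[state]

print(eye_signal(MachineState.PERSON_DETECTED))
```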
2

Conrad, Tim. "The machine we trust and other stories." View electronic thesis (PDF), 2009. http://dl.uncw.edu/etd/2009-1/rp/conradt/timconrad.pdf.

3

Akteke, Basak. "Derivative Free Optimization Methods: Application In Stirrer Configuration And Data Clustering." Master's thesis, METU, 2005. http://etd.lib.metu.edu.tr/upload/2/12606591/index.pdf.

Abstract:
Recent developments show that derivative free methods are highly demanded by researchers for solving optimization problems in various practical contexts. Although well-known optimization methods that employ derivative information can be very efficient, a derivative free method will be more efficient in cases where the objective function is nondifferentiable, or the derivative information is not available or is not reliable. Derivative Free Optimization (DFO) is developed for solving small dimensional problems (less than 100 variables) in which the computation of an objective function is relatively expensive and the derivatives of the objective function are not available. Problems of this nature arise more and more in modern physical, chemical and econometric measurements and in engineering applications, where computer simulation is employed for the evaluation of the objective functions. In this thesis, we give an example of the implementation of DFO in an approach for optimizing stirrer configurations, including a parametrized grid generator, a flow solver, and DFO. A derivative free method, i.e., DFO, is preferred because the gradient of the objective function with respect to the stirrer's design variables is not directly available. This nonlinear objective function is obtained from the flow field by the flow solver. We present and interpret numerical results of this implementation. Moreover, a contribution is given to a survey and a distinction of DFO research directions, and to an analysis and discussion of these. We also state a derivative free algorithm used within a clustering algorithm in combination with non-smooth optimization techniques to reveal the effectiveness of derivative free methods in computations. This algorithm is applied on some data sets from various sources of public life and medicine. We compare various methods, their practical backgrounds, and conclude with a summary and outlook. This work may serve as a preparation for possible future research.
4

Ross, Jennifer. "MODERATORS OF TRUST AND RELIANCE ACROSS MULTIPLE DECISION AIDS." Doctoral diss., University of Central Florida, 2008. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/3975.

Abstract:
The present work examines whether users' trust of and reliance on automation were affected by manipulations of users' perception of the responding agent. These manipulations included agent reliability, agent type, and failure salience. Previous work has shown that automation is not uniformly beneficial; problems can occur because operators fail to rely upon automation appropriately, through either misuse (overreliance) or disuse (underreliance). This is because operators often face difficulties in understanding how to combine their judgment with that of an automated aid. This difficulty is especially prevalent in complex tasks in which users rely heavily on automation to reduce their workload and improve task performance. However, when users rely on automation heavily they often fail to monitor the system effectively (i.e., they lose situation awareness, a form of misuse). Conversely, if an operator realizes a system is imperfect and fails, they may subsequently lose trust in the system, leading to underreliance. In the present studies, it was hypothesized that in a dual-aid environment poor reliability in one aid would impact trust and reliance levels in a companion better aid, but that this relationship is dependent upon the perceived aid type and the noticeability of the errors made. Simulations of a computer-based search-and-rescue scenario, employing uninhabited/unmanned ground vehicles (UGVs) searching a commercial office building for critical signals, were used to investigate these hypotheses. Results demonstrated that participants were able to adjust their reliance and trust on automated teammates depending on the teammates' actual reliability levels. However, as hypothesized, there was a biasing effect among mixed-reliability aids for trust and reliance: when operators worked with two agents of mixed reliability, their perception of how reliable an aid was, and the degree to which they relied on it, was affected by the reliability of the companion aid. Additionally, the magnitude and direction of how trust and reliance were biased was contingent upon agent type (i.e., 'what' the agents were: two humans, two similar robotic agents, or two dissimilar robotic agents). Finally, the type of agent an operator believed they were operating with significantly impacted their temporal reliance (i.e., reliance following an automation failure): operators were less likely to agree with a recommendation from a human teammate, after that teammate had made an obvious error, than with a robotic agent that had made the same obvious error. These results demonstrate that people are able to distinguish when an agent is performing well, but that there are genuine differences in how operators respond to agents of mixed or same abilities and to errors by fellow human observers or robotic teammates. The overall goal of this research was to develop a better understanding of how the aforementioned factors affect users' trust in automation so that system interfaces can be designed to facilitate users' calibration of their trust in automated aids, thus leading to improved coordination of human-automation performance. These findings have significant implications for many real-world systems in which human operators monitor the recommendations of multiple other human and/or machine systems.
Ph.D. dissertation, Department of Psychology, College of Sciences, Psychology PhD program.
5

Torre, Ilaria. "The impact of voice on trust attributions." Thesis, University of Plymouth, 2017. http://hdl.handle.net/10026.1/9858.

Abstract:
Trust and speech are both essential aspects of human interaction. On the one hand, trust is necessary for vocal communication to be meaningful. On the other hand, humans have developed a way to infer someone's trustworthiness from their voice, as well as to signal their own. Yet, research on trustworthiness attributions to speakers is scarce and contradictory, and very often uses explicit data, which do not predict actual trusting behaviour. However, measuring behaviour is very important to obtain an actual representation of trust. This thesis contains 5 experiments aimed at examining the influence of various voice characteristics, including accent, prosody, emotional expression and naturalness, on trusting behaviours towards virtual players and robots. The experiments use the "investment game" (a method derived from game theory that allows implicit trustworthiness attributions to be measured over time) as their main methodology. Results show that standard accents, high pitch, slow articulation rate and smiling voice generally increase trusting behaviours towards a virtual agent, and a synthetic voice generally elicits higher trustworthiness judgments towards a robot. The findings also suggest that different voice characteristics influence trusting behaviours with different temporal dynamics. Furthermore, the actual behaviour of the various speaking agents was modified to be more or less trustworthy, and results show that people's trusting behaviours develop over time accordingly. Also, people reinforce their trust towards speakers that they deem particularly trustworthy when these speakers are indeed trustworthy, but punish them when they are not. This suggests that people's trusting behaviours might also be influenced by the congruency of their first impressions with the actual experience of the speaker's trustworthiness (a "congruency effect"). This has important implications in the context of Human-Machine Interaction, for example for assessing users' reactions to speaking machines which might not always function properly. Taken together, the results suggest that voice influences trusting behaviour, that first impressions of a speaker's trustworthiness based on vocal cues might not be indicative of future trusting behaviours, and that trust should be measured dynamically.
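Because the investment game is the thesis's main measure, a bare-bones version of a single round may help make "implicit trusting behaviour" concrete; the tripling multiplier is the common convention in the literature and is assumed here rather than taken from the thesis.

```python
# Bare-bones single round of the investment game used as the trust measure.
# The investor's transfer is the behavioural index of trust; the trustee's
# return is the index of trustworthiness. The tripling multiplier is the
# common convention and an assumption here, not necessarily the thesis's value.

def investment_round(endowment: float, invested: float, return_fraction: float,
                     multiplier: float = 3.0) -> tuple[float, float]:
    """Return (investor_payoff, trustee_payoff) for one round."""
    assert 0.0 <= invested <= endowment
    transferred = invested * multiplier       # amount the trustee receives
    returned = transferred * return_fraction  # what the trustee sends back
    investor_payoff = endowment - invested + returned
    trustee_payoff = transferred - returned
    return investor_payoff, trustee_payoff

# Investing 8 of 10 units signals high trust; a trustee returning half is trustworthy.
print(investment_round(endowment=10, invested=8, return_fraction=0.5))
```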
6

Parker, Christopher Alonzo. "K x N Trust-Based Agent Reputation." VCU Scholars Compass, 2006. http://scholarscompass.vcu.edu/etd/702.

Abstract:
In this research, a multi-agent system called KMAS is presented that models an environment of intelligent, autonomous, rational, and adaptive agents that reason about trust, and adapt trust based on experience. Agents reason and adapt using a modification of the k-Nearest Neighbor algorithm called (k X n) Nearest Neighbor where k neighbors recommend reputation values for trust during each of n interactions. Reputation allows a single agent to receive recommendations about the trustworthiness of others. One goal is to present a recommendation model of trust that outperforms MAS architectures relying solely on direct agent interaction. A second goal is to converge KMAS to an emergent system state where only successful cooperation is allowed. Three experiments are chosen to compare KMAS against a non-(k X n) MAS, and between different variations of KMAS execution. Research results show KMAS converges to the desired state, and in the context of this research, KMAS outperforms a direct interaction-based system.
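As a rough sketch of the recommendation idea summarized above (not the dissertation's actual (k X n) algorithm), each of the n interactions could blend an agent's direct experience with reputation values recommended by its k nearest neighbours; the blending weights below are assumptions.

```python
# Sketch of the reputation idea described in the abstract: before each of n
# interactions, an agent asks its k nearest neighbours for recommended
# reputation values about a target agent and blends them with its own direct
# experience. The blending rule and weights are assumptions for illustration.
from statistics import mean

def recommended_reputation(neighbor_opinions: list[float]) -> float:
    """Average the k neighbours' recommended reputation values (each in 0-1)."""
    return mean(neighbor_opinions)

def updated_trust(direct_experience: float, neighbor_opinions: list[float],
                  weight_direct: float = 0.6) -> float:
    """Blend direct experience with the neighbours' recommendation."""
    rec = recommended_reputation(neighbor_opinions)
    return weight_direct * direct_experience + (1 - weight_direct) * rec

# k = 3 neighbours recommend values for one of the n interactions.
print(updated_trust(direct_experience=0.4, neighbor_opinions=[0.8, 0.7, 0.9]))
```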
7

Mayer, Andrew K. "Manipulation of user expectancies: effects on reliance, compliance, and trust using an automated system." Thesis, Atlanta, Ga.: Georgia Institute of Technology, 2008. http://hdl.handle.net/1853/22633.

8

Abuhamad, Grace M. (Grace Marie). "The fallacy of equating "blindness" with fairness : ensuring trust in machine learning applications to consumer credit." Thesis, Massachusetts Institute of Technology, 2019. https://hdl.handle.net/1721.1/122094.

Abstract:
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Thesis: S.M. in Technology and Policy, Massachusetts Institute of Technology, School of Engineering, Institute for Data, Systems, and Society, 2019
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 75-82).
Fifty years ago, the United States Congress coalesced around a vision for fair consumer credit: equally accessible by all consumers, and developed on accurate and relevant information, with controls for consumer privacy. In two foundational pieces of legislation, the Fair Credit Reporting Act (FCRA) and the Equal Credit Opportunity Act (ECOA), legislators described mechanisms by which these goals would be met, including, most notably, prohibiting certain information, such as a consumer's race, as the basis for credit decisions, under the assumption that being "blind" to this information would prevent wrongful discrimination. While the policy goals for fair credit are still valid today, the mechanisms designed to achieve them are no longer effective.
The consumer credit industry is increasingly interested in using new data and machine learning modeling techniques to determine consumer creditworthiness, and with these technological advances come new risks not mitigated by existing mechanisms. This thesis evaluates how these "alternative" credit processes pose challenges to the mechanisms established in the FCRA and the ECOA and their vision for fairness. "Alternative" data and models facilitate inference or prediction of consumer information, which make them non-compliant. In particular, this thesis investigates the idea that "blindness" to certain attributes hinders consumer fairness more than it helps since it limits the ability to determine whether wrongful discrimination has occurred and to build better performing models for populations that have been historically underscored.
This thesis concludes with four recommendations to modernize fairness mechanisms and ensure trust in the consumer credit system by: 1) expanding the definition of consumer report under the FCRA; 2) encouraging model explanations and transparency; 3) requiring self-testing using prohibited information; and 4) permitting the use of prohibited information to allow for more comprehensive models.
This work was partially supported by the MIT-IBM Watson AI Lab and the Hewlett Foundation through the MIT Internet Policy Research Initiative (IPRI).
S.M. in Technology and Policy, Massachusetts Institute of Technology, School of Engineering, Institute for Data, Systems, and Society.
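The thesis's central point, that auditing for wrongful discrimination requires the very attribute a "blind" model excludes, can be made concrete with a toy disparity check; the records and the four-fifths threshold below are illustrative assumptions, not data or rules from the thesis.

```python
# Toy illustration of the thesis's central point: measuring disparate outcomes
# (here, an approval-rate ratio between groups) requires the protected
# attribute itself, so a model that is "blind" to it cannot be audited this way.
# The records and the 0.8 threshold (the familiar four-fifths rule) are
# illustrative, not data or rules from the thesis.

def approval_rate(decisions: list[tuple[str, bool]], group: str) -> float:
    in_group = [approved for g, approved in decisions if g == group]
    return sum(in_group) / len(in_group)

def passes_four_fifths(decisions: list[tuple[str, bool]],
                       group_a: str, group_b: str) -> bool:
    ratio = approval_rate(decisions, group_a) / approval_rate(decisions, group_b)
    return min(ratio, 1 / ratio) >= 0.8

credit_decisions = [("A", True), ("A", True), ("A", False),
                    ("B", True), ("B", False), ("B", False)]
print(passes_four_fifths(credit_decisions, "A", "B"))  # needs the group labels to run
```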
9

Templeton, Julian. "Designing Robust Trust Establishment Models with a Generalized Architecture and a Cluster-Based Improvement Methodology." Thesis, Université d'Ottawa / University of Ottawa, 2021. http://hdl.handle.net/10393/42556.

Abstract:
In Multi-Agent Systems consisting of intelligent agents that interact with one another, where the agents are software entities which represent individuals or organizations, it is important for the agents to be equipped with trust evaluation models which allow the agents to evaluate the trustworthiness of other agents when dishonest agents may exist in an environment. Evaluating trust allows agents to find and select reliable interaction partners in an environment. Thus, the cost incurred by an agent for establishing trust in an environment can be compensated if this improved trustworthiness leads to an increased number of profitable transactions. Therefore, it is equally important to design effective trust establishment models which allow an agent to generate trust among other agents in an environment. This thesis focuses on providing improvements to the designs of existing and future trust establishment models. Robust trust establishment models, such as the Integrated Trust Establishment (ITE) model, may use dynamically updated variables to adjust the predicted importance of a task’s criteria for specific trustors. This thesis proposes a cluster-based approach to update these dynamic variables more accurately to achieve improved trust establishment performance. Rather than sharing these dynamic variables globally, a model can learn to adjust a trustee’s behaviours more accurately to trustor needs by storing the variables locally for each trustor and by updating groups of these variables together by using data from a corresponding group of similar trustors. This work also presents a generalized trust establishment model architecture to help models be easier to design and be more modular. This architecture introduces a new transaction-level preprocessing module to help improve a model’s performance and defines a trustor-level postprocessing module to encapsulate the designs of existing models. The preprocessing module allows a model to fine-tune the resources that an agent will provide during a transaction before it occurs. A trust establishment model, named the Generalized Trust Establishment Model (GTEM), is designed to showcase the benefits of using the preprocessing module. Simulated comparisons between a cluster-based version of ITE and ITE indicate that the cluster-based approach helps trustees better meet the expectations of trustors while minimizing the cost of doing so. Comparing GTEM to itself without the preprocessing module and to two existing models in simulated tests exhibits that the preprocessing module improves a trustee’s trustworthiness and better meets trustor desires at a faster rate than without using preprocessing.
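A rough sketch of the cluster-based idea described in this abstract: similar trustors are grouped, and each group's dynamic importance weights are updated together from that group's feedback. The feature representation, bucketing rule, and update rule below are assumptions, not the ITE or GTEM mechanics.

```python
# Rough sketch of the cluster-based idea in the abstract: instead of one global
# set of dynamic importance weights, similar trustors are grouped and each
# group's weights are updated together from that group's feedback. The feature
# representation, clustering method, and update rule are all assumptions.
from collections import defaultdict

def cluster_trustors(trustor_features: dict[str, tuple[float, ...]], n_buckets: int = 2):
    """Toy clustering: bucket trustors by the rounded mean of their feature vector."""
    clusters = defaultdict(list)
    for trustor, feats in trustor_features.items():
        bucket = int(round(sum(feats) / len(feats) * (n_buckets - 1)))
        clusters[bucket].append(trustor)
    return clusters

def update_cluster_weights(weights: dict[int, float], cluster: int,
                           feedback: float, lr: float = 0.1) -> None:
    """Nudge a cluster's shared importance weight toward that cluster's feedback."""
    weights[cluster] = (1 - lr) * weights.get(cluster, 0.5) + lr * feedback

features = {"t1": (0.1, 0.2), "t2": (0.2, 0.1), "t3": (0.9, 0.8)}
clusters = cluster_trustors(features)
weights: dict[int, float] = {}
for cluster_id in clusters:
    update_cluster_weights(weights, cluster_id, feedback=0.7)
print(dict(clusters), weights)
```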
10

Wagner, Alan Richard. "The role of trust and relationships in human-robot social interaction." Diss., Atlanta, Ga. : Georgia Institute of Technology, 2009. http://hdl.handle.net/1853/31776.

Abstract:
Thesis (Ph.D.), Computing, Georgia Institute of Technology, 2010.
Committee Chair: Arkin, Ronald C.; Committee Member: Christensen, Henrik I.; Committee Member: Fisk, Arthur D.; Committee Member: Ram, Ashwin; Committee Member: Thomaz, Andrea. Part of the SMARTech Electronic Thesis and Dissertation Collection.

Books on the topic "Trust in machines"

1

Cao, Mei. Supply Chain Collaboration: Roles of Interorganizational Systems, Trust, and Collaborative Culture. London: Springer London, 2013.

2

Computational trust models and machine learning. Boca Raton: Taylor & Francis, 2014.

3

Taylor, Gina. Teaching patients who use machines for patient controlled analgesia: A report on a collaborative study between Middlesex University, Faculty of Health Studies and Toronto Ward, Chase Farm Hospitals NHS Trust. [London]: [Foundation of Nursing Studies], 1995.

4

Yu, Philip S. Machine Learning in Cyber Trust: Security, Privacy, and Reliability. Boston, MA: Springer-Verlag US, 2009.

5

The political economy of trust: Institutions, interests and inter-firm cooperation in Italy and Germany. New York: Cambridge University Press, 2009.

6

Fenske, David Allan. Real-time control of the trussarm variable-truss manipulator utilizing machine vision. Ottawa: National Library of Canada, 1993.

7

Fenske, David Allan. Real-time control of the Trussarm variable-geometry-truss manipulator utilizing machine vision. [Downsview, Ont.]: University of Toronto, [Institute for Aerospace Studies], 1993.

8

Pine, Carol. A crowning achievement: 130 years of innovation, perseverance and trust. [Minneapolis?]: Crown Holdings, 2008.

9

United States. Congress. House. Committee on Banking, Finance, and Urban Affairs. Subcommittee on Financial Institutions Supervision, Regulation and Insurance. Resolution Trust Corporation Task Force. Consideration of the implications of the RTC control problems for proposals to restructure the bail-out machinery: Hearing before the Subcommittee on Financial Institutions Supervision, Regulation and Insurance, Resolution Trust Corporation Task Force of the Committee on Banking, Finance, and Urban Affairs, House of Representatives, One Hundred Second Congress, first session, June 17, 1991. Washington: U.S. G.P.O., 1991.

10

Zhang, Qingyu, and Mei Cao. Supply Chain Collaboration: Roles of Interorganizational Systems, Trust, and Collaborative Culture. Springer, 2014.


Book chapters on the topic "Trust in machines"

1

Fabris, Adriano. "Can We Trust Machines? The Role of Trust in Technological Environments." In Studies in Applied Philosophy, Epistemology and Rational Ethics, 123–35. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-44018-3_9.

2

Basharat, Shifaa, and Manzoor Ahmad. "Inferring Trust from Message Features Using Linear Regression and Support Vector Machines." In Communications in Computer and Information Science, 577–98. Singapore: Springer Singapore, 2018. http://dx.doi.org/10.1007/978-981-10-8660-1_44.

3

Rakhi and G. L. Pahuja. "An Efficient Trust-Based Approach to Load Balanced Routing Enhanced by Virtual Machines in Vehicular Environment." In International Conference on Intelligent Computing and Smart Communication 2019, 925–35. Singapore: Springer Singapore, 2019. http://dx.doi.org/10.1007/978-981-15-0633-8_95.

4

Banasiewicz, Andrew. "In Machine We Trust." In Organizational Learning in the Age of Data, 223–54. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-74866-1_8.

5

Dziczkowski, Grzegorz, Szymon Głowania, and Bogna Zacny. "Trust in Machine-learning Systems." In Trust, Organizations and the Digital Economy, 108–20. New York: Routledge, 2021. http://dx.doi.org/10.4324/9781003165965-9.

6

Yu, Kun, Shlomo Berkovsky, Dan Conway, Ronnie Taib, Jianlong Zhou, and Fang Chen. "Do I Trust a Machine? Differences in User Trust Based on System Performance." In Human and Machine Learning, 245–64. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-90403-0_12.

7

Challagulla, Venkata U. B., Farokh B. Bastani, and I.-Ling Yen. "High-Confidence Compositional Reliability Assessment of SOA-Based Systems Using Machine Learning Techniques." In Machine Learning in Cyber Trust, 279–322. Boston, MA: Springer US, 2009. http://dx.doi.org/10.1007/978-0-387-88735-7_11.

8

Sha, Lui, Sathish Gopalakrishnan, Xue Liu, and Qixin Wang. "Cyber-Physical Systems: A New Frontier." In Machine Learning in Cyber Trust, 3–13. Boston, MA: Springer US, 2009. http://dx.doi.org/10.1007/978-0-387-88735-7_1.

9

Shaneck, Mark, Yongdae Kim, and Vipin Kumar. "Privacy Preserving Nearest Neighbor Search." In Machine Learning in Cyber Trust, 247–76. Boston, MA: Springer US, 2009. http://dx.doi.org/10.1007/978-0-387-88735-7_10.

10

Yang, Stephen J. H., Jia Zhang, and Angus F. M. Huang. "Model, Properties, and Applications of Context-Aware Web Services." In Machine Learning in Cyber Trust, 323–58. Boston, MA: Springer US, 2009. http://dx.doi.org/10.1007/978-0-387-88735-7_12.


Conference papers on the topic "Trust in machines"

1

Sundar, S. Shyam, and Akshaya Sreenivasan. "In machines we trust." In the Seventh International Conference. New York, New York, USA: ACM Press, 2015. http://dx.doi.org/10.1145/2737856.2737896.

2

Tinati, Ramine, and Leslie Carr. "Understanding Social Machines." In 2012 International Conference on Privacy, Security, Risk and Trust (PASSAT). IEEE, 2012. http://dx.doi.org/10.1109/socialcom-passat.2012.25.

3

Merchant, Arpit, Tushant Jha, and Navjyoti Singh. "The Use of Trust in Social Machines." In the 25th International Conference Companion. New York, New York, USA: ACM Press, 2016. http://dx.doi.org/10.1145/2872518.2890597.

4

Yong Shi, Zhen Han, and Chang-Xiang Shen. "The transitive trust in Java virtual machines." In 2009 International Conference on Machine Learning and Cybernetics (ICMLC). IEEE, 2009. http://dx.doi.org/10.1109/icmlc.2009.5212620.

5

Lee, Yunho, Seungjoo Kim, and Dongho Won. "How to Trust DRE Voting Machines Preserving Voter Privacy." In 2008 IEEE International Conference on e-Business Engineering. IEEE, 2008. http://dx.doi.org/10.1109/icebe.2008.37.

6

Eisenbarth, Thomas, Tim Guneysu, Christof Paar, Ahmad-Reza Sadeghi, Marko Wolf, and Russell Tessier. "Establishing Chain of Trust in Reconfigurable Hardware." In 15th Annual IEEE Symposium on Field-Programmable Custom Computing Machines (FCCM 2007). IEEE, 2007. http://dx.doi.org/10.1109/fccm.2007.23.

7

Biedermann, Sebastian, Martin Zittel, and Stefan Katzenbeisser. "Improving security of virtual machines during live migrations." In 2013 Eleventh Annual Conference on Privacy, Security and Trust (PST). IEEE, 2013. http://dx.doi.org/10.1109/pst.2013.6596088.

8

Fukuda, Shuichi. "How Can Man and Machine Trust Each Other and Work Better Together?" In ASME 2008 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. ASMEDC, 2008. http://dx.doi.org/10.1115/detc2008-49828.

Abstract:
Our traditional machines are operated by commands. But increasing diversification and frequent changes in our environments make it more and more difficult for a designer to foresee the operating conditions. Therefore, designs are shifting from designer-centric to user-centric, because it is the user who knows the situation and can decide what he or she should do. Machines should therefore be designed to help a user understand the current situation better and make better decisions, and they need more flexibility to work better together with their users. Yet there are many examples where, although a machine is equipped with a wide variety of functions to cope with almost all sorts of situations, accidents occur due to human error. A typical case is CFIT (Controlled Flight Into Terrain) [1] in airplanes. Norman pointed out that simple mechanical objects can be trusted because their behaviors are so simple that people know how to operate them. But machines are getting more and more complicated, so a user does not know what to expect from them, and if a machine does not react to his or her expectations, the user sometimes becomes emotionally upset and panics. How can we solve this problem? A solution may be found in software development. Software used to be produced in the same way as hardware, with its functions fixed. But software has since changed its product development style: it first provides a user with simple functions, and once he or she becomes familiar with this basic level of functions, it evolves to a somewhat higher level. Through experience and feedback from a user, software evolves its functions gradually and continually. It must be noted that most of our machines are not hardware or software alone; they are combinations of both. So we can develop a machine that possesses a diversity of functions but reveals only a basic level of functions to a user at the very early stage of operation, until he or she gets accustomed to it and gains confidence in it. When he or she has fully experienced this level and desires higher-level functions, the machine evolves. How a user copes with situations varies from user to user, but if a machine is customized in this way, users would trust our machines and would operate them with full confidence.
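One way to read the proposal in this abstract is as a machine that exposes only a basic tier of functions at first and unlocks richer tiers as the user accumulates successful operations; the tier names and thresholds in the sketch below are invented for illustration.

```python
# One possible reading of the proposal in the abstract: expose only a basic tier
# of functions at first, and unlock richer tiers as the user accumulates
# successful operations. The tier names and unlock thresholds are invented for
# illustration; the paper describes the idea conceptually, not this API.
FUNCTION_TIERS = [
    (0,   ["start", "stop"]),             # available immediately
    (20,  ["auto_hold", "assist_mode"]),  # after 20 successful operations
    (100, ["full_autonomous_sequence"]),  # after sustained, confident use
]

def available_functions(successful_operations: int) -> list[str]:
    functions: list[str] = []
    for threshold, tier in FUNCTION_TIERS:
        if successful_operations >= threshold:
            functions.extend(tier)
    return functions

print(available_functions(5))    # novice sees only the basic controls
print(available_functions(150))  # experienced user sees the full feature set
```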
9

Contractor, Dipen, and Dhiren Patel. "Analyzing trustworthiness of virtual machines in data-intensive cloud computing." In 2014 Twelfth Annual Conference on Privacy, Security and Trust (PST). IEEE, 2014. http://dx.doi.org/10.1109/pst.2014.6890967.

10

Jangda, Abhinav, and Mohit Mishra. "RandHeap: Heap Randomization for Mitigating Heap Spray Attacks in Virtual Machines." In 2017 15th Annual Conference on Privacy, Security and Trust (PST). IEEE, 2017. http://dx.doi.org/10.1109/pst.2017.00028.


Reports on the topic "Trust in machines"

1

Konaev, Margarita, Tina Huang, and Husanjot Chahal. Trusted Partners: Human-Machine Teaming and the Future of Military AI. Center for Security and Emerging Technology, February 2021. http://dx.doi.org/10.51593/20200024.

Abstract:
As the U.S. military integrates artificial intelligence into its systems and missions, there are outstanding questions about the role of trust in human-machine teams. This report examines the drivers and effects of such trust, assesses the risks from too much or too little trust in intelligent technologies, reviews efforts to build trustworthy AI systems, and offers future directions for research on trust relevant to the U.S. military.
2

Fang, Chen. Unsettled Issues in Vehicle Autonomy, Artificial Intelligence, and Human-Machine Interaction. SAE International, April 2021. http://dx.doi.org/10.4271/epr2021010.

Abstract:
Artificial intelligence (AI)-based solutions are slowly making their way into our daily lives, integrating with our processes to enhance our lifestyles. This is a major technological component in the development of autonomous vehicles (AVs). However, as of today, no existing, consumer-ready AV design has reached SAE Level 5 automation or fully integrates with the driver. Unsettled Issues in Vehicle Autonomy, AI and Human-Machine Interaction discusses vital issues related to AV interface design, diving into speech interaction, emotion detection and regulation, and driver trust. For each of these aspects, the report presents the current state of research and development, challenges, and solutions worth exploring.
3

Konaev, Margarita, Husanjot Chahal, Ryan Fedsiuk, Tina Huang, and Ilya Rahkovsky. U.S. Military Investments in Autonomy and AI: A Strategic Assessment. Center for Security and Emerging Technology, October 2020. http://dx.doi.org/10.51593/20190044.

Abstract:
This brief examines how the Pentagon’s investments in autonomy and AI may affect its military capabilities and strategic interests. It proposes that DOD invest in improving its understanding of trust in human-machine teams and leverage existing AI technologies to enhance military readiness and endurance. In the long term, investments in reliable, trustworthy, and resilient AI systems are critical for ensuring sustained military, technological, and strategic advantages.
