Academic literature on the topic 'Trust in machines'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Trust in machines.'
Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.
Journal articles on the topic "Trust in machines"
Cascio, Jamais. "In machines we trust." New Scientist 229, no. 3064 (March 2016): 26–27. http://dx.doi.org/10.1016/s0262-4079(16)30413-4.
Lee, Jieun, Yusuke Yamani, and Makoto Itoh. "Revisiting Trust in Machines: Examining Human–Machine Trust Using a Reprogrammed Pasteurizer Task." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 62, no. 1 (September 2018): 1767–70. http://dx.doi.org/10.1177/1541931218621400.
Stanley, Jeff, Ozgur Eris, and Monika Lohani. "A Conceptual Framework for Machine Self-Presentation and Trust." International Journal of Humanized Computing and Communication 2, no. 1 (March 1, 2021): 20–45. http://dx.doi.org/10.35708/hcc1869-148366.
Mebane Jr., W. R. "POLITICAL SCIENCE: Can We Trust the Machines?" Science 322, no. 5902 (October 31, 2008): 677a–678a. http://dx.doi.org/10.1126/science.1165818.
Willis-Owen, C. A. "Don't place all your trust in machines." BMJ 327, no. 7423 (November 8, 2003): 1084. http://dx.doi.org/10.1136/bmj.327.7423.1084.
Anitha, H. M., and P. Jayarekha. "An Software Defined Network Based Secured Model for Malicious Virtual Machine Detection in Cloud Environment." Journal of Computational and Theoretical Nanoscience 17, no. 1 (January 1, 2020): 526–30. http://dx.doi.org/10.1166/jctn.2020.8481.
Quinn, Daniel B., Richard Pak, and Ewart J. de Visser. "Testing the Efficacy of Human-Human Trust Repair Strategies with Machines." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 61, no. 1 (September 2017): 1794–98. http://dx.doi.org/10.1177/1541931213601930.
Swanson, LaTasha R., Jennica L. Bellanca, and Justin Helton. "Automated Systems and Trust: Mineworkers' Trust in Proximity Detection Systems for Mobile Machines." Safety and Health at Work 10, no. 4 (December 2019): 461–69. http://dx.doi.org/10.1016/j.shaw.2019.09.003.
Andras, Peter, Lukas Esterle, Michael Guckert, The Anh Han, Peter R. Lewis, Kristina Milanovic, Terry Payne, et al. "Trusting Intelligent Machines: Deepening Trust Within Socio-Technical Systems." IEEE Technology and Society Magazine 37, no. 4 (December 2018): 76–83. http://dx.doi.org/10.1109/mts.2018.2876107.
Madhavan, Poornima, and Douglas A. Wiegmann. "A New Look at the Dynamics of Human-Automation Trust: Is Trust in Humans Comparable to Trust in Machines?" Proceedings of the Human Factors and Ergonomics Society Annual Meeting 48, no. 3 (September 2004): 581–85. http://dx.doi.org/10.1177/154193120404800365.
Full textDissertations / Theses on the topic "Trust in machines"
Norstedt, Emil, and Timmy Sahlberg. "Human Interaction with Autonomous machines: Visual Communication to Encourage Trust." Thesis, Blekinge Tekniska Högskola, Institutionen för maskinteknik, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-19706.
The construction industry is undergoing continuous development: machines are being transformed from human-operated to autonomous. This project was a collaboration with Volvo Construction Equipment (Volvo CE) and concerned their new autonomous wheel loader. Because the autonomous machine is expected to operate in the same environment as people, a well-developed safety system is required to eliminate accidents. The purpose has been to develop a system that increases safety for workers and encourages trust in the autonomous machine. The system is based on visual communication to establish trust between the machine and the people around it. An iterative process focused on testing, prototyping, and analysing was used to reach a successful result, and building models with a variety of functions gave a better understanding of how to design a human-machine interface that encourages trust. The iterative process resulted in a concept that communicates through eyes. Eye contact is an essential factor for creating trust in unfamiliar and exposed situations. The solution mediates different expressions by changing the colour and shape of the eyes to create awareness and to inform people moving in the same environment; specific information can be conveyed in various situations by adapting the colour and shape of the eyes. In this way, trust in the autonomous machine can be encouraged.
Conrad, Tim. "The machine we trust and other stories." View electronic thesis (PDF), 2009. http://dl.uncw.edu/etd/2009-1/rp/conradt/timconrad.pdf.
Full textAkteke, Basak. "Derivative Free Optimization Methods: Application In Stirrer Configuration And Data Clustering." Master's thesis, METU, 2005. http://etd.lib.metu.edu.tr/upload/2/12606591/index.pdf.
… design variables is not directly available. This nonlinear objective function is obtained from the flow field by the flow solver. We present and interpret numerical results of this implementation. Moreover, we contribute a survey that distinguishes DFO research directions, together with an analysis and discussion of them. We also present a derivative-free algorithm used within a clustering algorithm, in combination with non-smooth optimization techniques, to demonstrate the effectiveness of derivative-free methods in computations. This algorithm is applied to data sets from various sources in public life and medicine. We compare various methods and their practical backgrounds, and conclude with a summary and outlook. This work may serve as preparation for possible future research.
Ross, Jennifer. "Moderators of Trust and Reliance Across Multiple Decision Aids." Doctoral diss., University of Central Florida, 2008. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/3975.
Ph.D., Department of Psychology, College of Sciences.
Torre, Ilaria. "The impact of voice on trust attributions." Thesis, University of Plymouth, 2017. http://hdl.handle.net/10026.1/9858.
Full textParker, Christopher Alonzo. "K x N Trust-Based Agent Reputation." VCU Scholars Compass, 2006. http://scholarscompass.vcu.edu/etd/702.
Full textMayer, Andrew K. "Manipulation of user expectancies effects on reliance, compliance, and trust using an automated system /." Thesis, Atlanta, Ga. : Georgia Institute of Technology, 2008. http://hdl.handle.net/1853/22633.
Full textAbuhamad, Grace M. (Grace Marie). "The fallacy of equating "blindness" with fairness : ensuring trust in machine learning applications to consumer credit." Thesis, Massachusetts Institute of Technology, 2019. https://hdl.handle.net/1721.1/122094.
Full textThesis: S.M. in Technology and Policy, Massachusetts Institute of Technology, School of Engineering, Institute for Data, Systems, and Society, 2019
Fifty years ago, the United States Congress coalesced around a vision for fair consumer credit: equally accessible by all consumers, and developed on accurate and relevant information, with controls for consumer privacy. In two foundational pieces of legislation, the Fair Credit Reporting Act (FCRA) and the Equal Credit Opportunity Act (ECOA), legislators described mechanisms by which these goals would be met, including, most notably, prohibiting certain information, such as a consumer's race, as the basis for credit decisions, under the assumption that being "blind" to this information would prevent wrongful discrimination. While the policy goals for fair credit are still valid today, the mechanisms designed to achieve them are no longer effective.
The consumer credit industry is increasingly interested in using new data and machine learning modeling techniques to determine consumer creditworthiness, and with these technological advances come new risks not mitigated by existing mechanisms. This thesis evaluates how these "alternative" credit processes pose challenges to the mechanisms established in the FCRA and the ECOA and their vision for fairness. "Alternative" data and models facilitate inference or prediction of consumer information, which makes them non-compliant. In particular, this thesis investigates the idea that "blindness" to certain attributes hinders consumer fairness more than it helps, since it limits the ability to determine whether wrongful discrimination has occurred and to build better performing models for populations that have been historically underserved.
This thesis concludes with four recommendations to modernize fairness mechanisms and ensure trust in the consumer credit system by: 1) expanding the definition of consumer report under the FCRA; 2) encouraging model explanations and transparency; 3) requiring self-testing using prohibited information; and 4) permitting the use of prohibited information to allow for more comprehensive models.
This work was partially supported by the MIT-IBM Watson AI Lab and the Hewlett Foundation through the MIT Internet Policy Research Initiative (IPRI).
Templeton, Julian. "Designing Robust Trust Establishment Models with a Generalized Architecture and a Cluster-Based Improvement Methodology." Thesis, Université d'Ottawa / University of Ottawa, 2021. http://hdl.handle.net/10393/42556.
Wagner, Alan Richard. "The role of trust and relationships in human-robot social interaction." Diss., Atlanta, Ga. : Georgia Institute of Technology, 2009. http://hdl.handle.net/1853/31776.
Full textCommittee Chair: Arkin, Ronald C.; Committee Member: Christensen, Henrik I.; Committee Member: Fisk, Arthur D.; Committee Member: Ram, Ashwin; Committee Member: Thomaz, Andrea. Part of the SMARTech Electronic Thesis and Dissertation Collection.
Books on the topic "Trust in machines"
Cao, Mei. Supply Chain Collaboration: Roles of Interorganizational Systems, Trust, and Collaborative Culture. London: Springer London, 2013.
Computational trust models and machine learning. Boca Raton: Taylor & Francis, 2014.
Taylor, Gina. Teaching patients who use machines for patient controlled analgesia: A report on a collaborative study between Middlesex University, Faculty of Health Studies and Toronto Ward, Chase Farm Hospitals NHS Trust. [London]: [Foundation of Nursing Studies], 1995.
Yu, Philip S. Machine Learning in Cyber Trust: Security, Privacy, and Reliability. Boston, MA: Springer-Verlag US, 2009.
The political economy of trust: Institutions, interests and inter-firm cooperation in Italy and Germany. New York: Cambridge University Press, 2009.
Fenske, David Allan. Real-time control of the trussarm variable-truss manipulator utilizing machine vision. Ottawa: National Library of Canada, 1993.
Fenske, David Allan. Real-time control of the Trussarm variable-geometry-truss manipulator utilizing machine vision. [Downsview, Ont.]: University of Toronto, [Institute for Aerospace Studies], 1993.
Pine, Carol. A crowning achievement: 130 years of innovation, perseverance and trust. [Minneapolis?]: Crown Holdings, 2008.
United States. Congress. House. Committee on Banking, Finance, and Urban Affairs. Subcommittee on Financial Institutions Supervision, Regulation and Insurance. Resolution Trust Corporation Task Force. Consideration of the implications of the RTC control problems for proposals to restructure the bail-out machinery: Hearing before the Subcommittee on Financial Institutions Supervision, Regulation and Insurance, Resolution Trust Corporation Task Force of the Committee on Banking, Finance, and Urban Affairs, House of Representatives, One Hundred Second Congress, first session, June 17, 1991. Washington: U.S. G.P.O., 1991.
Zhang, Qingyu, and Mei Cao. Supply Chain Collaboration: Roles of Interorganizational Systems, Trust, and Collaborative Culture. Springer, 2014.
Find full textBook chapters on the topic "Trust in machines"
Fabris, Adriano. "Can We Trust Machines? The Role of Trust in Technological Environments." In Studies in Applied Philosophy, Epistemology and Rational Ethics, 123–35. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-44018-3_9.
Basharat, Shifaa, and Manzoor Ahmad. "Inferring Trust from Message Features Using Linear Regression and Support Vector Machines." In Communications in Computer and Information Science, 577–98. Singapore: Springer Singapore, 2018. http://dx.doi.org/10.1007/978-981-10-8660-1_44.
Rakhi and G. L. Pahuja. "An Efficient Trust-Based Approach to Load Balanced Routing Enhanced by Virtual Machines in Vehicular Environment." In International Conference on Intelligent Computing and Smart Communication 2019, 925–35. Singapore: Springer Singapore, 2019. http://dx.doi.org/10.1007/978-981-15-0633-8_95.
Banasiewicz, Andrew. "In Machine We Trust." In Organizational Learning in the Age of Data, 223–54. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-74866-1_8.
Dziczkowski, Grzegorz, Szymon Głowania, and Bogna Zacny. "Trust in Machine-learning Systems." In Trust, Organizations and the Digital Economy, 108–20. New York: Routledge, 2021. http://dx.doi.org/10.4324/9781003165965-9.
Yu, Kun, Shlomo Berkovsky, Dan Conway, Ronnie Taib, Jianlong Zhou, and Fang Chen. "Do I Trust a Machine? Differences in User Trust Based on System Performance." In Human and Machine Learning, 245–64. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-90403-0_12.
Challagulla, Venkata U. B., Farokh B. Bastani, and I.-Ling Yen. "High-Confidence Compositional Reliability Assessment of SOA-Based Systems Using Machine Learning Techniques." In Machine Learning in Cyber Trust, 279–322. Boston, MA: Springer US, 2009. http://dx.doi.org/10.1007/978-0-387-88735-7_11.
Sha, Lui, Sathish Gopalakrishnan, Xue Liu, and Qixin Wang. "Cyber-Physical Systems: A New Frontier." In Machine Learning in Cyber Trust, 3–13. Boston, MA: Springer US, 2009. http://dx.doi.org/10.1007/978-0-387-88735-7_1.
Shaneck, Mark, Yongdae Kim, and Vipin Kumar. "Privacy Preserving Nearest Neighbor Search." In Machine Learning in Cyber Trust, 247–76. Boston, MA: Springer US, 2009. http://dx.doi.org/10.1007/978-0-387-88735-7_10.
Yang, Stephen J. H., Jia Zhang, and Angus F. M. Huang. "Model, Properties, and Applications of Context-Aware Web Services." In Machine Learning in Cyber Trust, 323–58. Boston, MA: Springer US, 2009. http://dx.doi.org/10.1007/978-0-387-88735-7_12.
Full textConference papers on the topic "Trust in machines"
Sundar, S. Shyam, and Akshaya Sreenivasan. "In machines we trust." In the Seventh International Conference. New York, New York, USA: ACM Press, 2015. http://dx.doi.org/10.1145/2737856.2737896.
Tinati, Ramine, and Leslie Carr. "Understanding Social Machines." In 2012 International Conference on Privacy, Security, Risk and Trust (PASSAT). IEEE, 2012. http://dx.doi.org/10.1109/socialcom-passat.2012.25.
Merchant, Arpit, Tushant Jha, and Navjyoti Singh. "The Use of Trust in Social Machines." In the 25th International Conference Companion. New York, New York, USA: ACM Press, 2016. http://dx.doi.org/10.1145/2872518.2890597.
Yong Shi, Zhen Han, and Chang-Xiang Shen. "The transitive trust in Java virtual machines." In 2009 International Conference on Machine Learning and Cybernetics (ICMLC). IEEE, 2009. http://dx.doi.org/10.1109/icmlc.2009.5212620.
Lee, Yunho, Seungjoo Kim, and Dongho Won. "How to Trust DRE Voting Machines Preserving Voter Privacy." In 2008 IEEE International Conference on e-Business Engineering. IEEE, 2008. http://dx.doi.org/10.1109/icebe.2008.37.
Eisenbarth, Thomas, Tim Guneysu, Christof Paar, Ahmad-Reza Sadeghi, Marko Wolf, and Russell Tessier. "Establishing Chain of Trust in Reconfigurable Hardware." In 15th Annual IEEE Symposium on Field-Programmable Custom Computing Machines (FCCM 2007). IEEE, 2007. http://dx.doi.org/10.1109/fccm.2007.23.
Biedermann, Sebastian, Martin Zittel, and Stefan Katzenbeisser. "Improving security of virtual machines during live migrations." In 2013 Eleventh Annual Conference on Privacy, Security and Trust (PST). IEEE, 2013. http://dx.doi.org/10.1109/pst.2013.6596088.
Fukuda, Shuichi. "How Can Man and Machine Trust Each Other and Work Better Together?" In ASME 2008 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. ASMEDC, 2008. http://dx.doi.org/10.1115/detc2008-49828.
Contractor, Dipen, and Dhiren Patel. "Analyzing trustworthiness of virtual machines in data-intensive cloud computing." In 2014 Twelfth Annual Conference on Privacy, Security and Trust (PST). IEEE, 2014. http://dx.doi.org/10.1109/pst.2014.6890967.
Jangda, Abhinav, and Mohit Mishra. "RandHeap: Heap Randomization for Mitigating Heap Spray Attacks in Virtual Machines." In 2017 15th Annual Conference on Privacy, Security and Trust (PST). IEEE, 2017. http://dx.doi.org/10.1109/pst.2017.00028.
Full textReports on the topic "Trust in machines"
Konaev, Margarita, Tina Huang, and Husanjot Chahal. Trusted Partners: Human-Machine Teaming and the Future of Military AI. Center for Security and Emerging Technology, February 2021. http://dx.doi.org/10.51593/20200024.
Fang, Chen. Unsettled Issues in Vehicle Autonomy, Artificial Intelligence, and Human-Machine Interaction. SAE International, April 2021. http://dx.doi.org/10.4271/epr2021010.
Konaev, Margarita, Husanjot Chahal, Ryan Fedasiuk, Tina Huang, and Ilya Rahkovsky. U.S. Military Investments in Autonomy and AI: A Strategic Assessment. Center for Security and Emerging Technology, October 2020. http://dx.doi.org/10.51593/20190044.