Journal articles on the topic 'Algorithmic service encounters'

Consult the top 45 journal articles for your research on the topic 'Algorithmic service encounters.'


1

Hoang, Khang Tran. "Cognitive-Affective Responses to Algorithmic Service Encounters: A Multi-Theoretical Analysis of AI-Mediated Customer Experience, Brand Relationship Quality, and Satisfaction Dynamics in Vietnam's Digital Marketplace." Journal of Economics, Finance and Management Studies 8, no. 5 (2025): 2274–84. https://doi.org/10.5281/zenodo.15501654.

Abstract:
This research investigates the complex interplay between cognitive-affective responses and algorithmic service encounters in Vietnam's rapidly evolving digital marketplace. Through a multi-theoretical lens integrating service-dominant logic, cognitive appraisal theory, the technology acceptance model, and relationship marketing frameworks, this study examines how artificial intelligence (AI)-mediated customer experiences influence brand relationship quality and satisfaction. Employing a robust mixed-method approach combining structural equation modeling with partial least squares (PLS-SEM) and fuzzy-set Qualitative Comparative Analysis (fsQCA), data from 387 Vietnamese digital consumers were analyzed. Results reveal that algorithmic service personalization significantly enhances cognitive-affective customer experiences, which subsequently strengthen brand relationship quality and satisfaction. Technology readiness moderates these relationships, with higher levels amplifying the positive effects of AI-mediated experiences. The fsQCA findings identify multiple configurations of conditions leading to high satisfaction, demonstrating equifinality in satisfaction formation. This research contributes to the literature by developing an integrated theoretical framework explicating the psychological mechanisms through which algorithmic service encounters shape customer outcomes, offering nuanced insights into the digital transformation of service experiences in emerging markets, and identifying optimal configurations for enhancing customer satisfaction in technology-mediated environments.
2

Zhang, Zhiping, and Changda Wang. "Service Function Chain Migration: A Survey." Computers 14, no. 6 (2025): 203. https://doi.org/10.3390/computers14060203.

Abstract:
As a core technology emerging from the convergence of Network Function Virtualization (NFV) and Software-Defined Networking (SDN), Service Function Chaining (SFC) enables the dynamic orchestration of Virtual Network Functions (VNFs) to support diverse service requirements. However, in dynamic network environments, SFC faces significant challenges, such as resource fluctuations, user mobility, and fault recovery. To ensure service continuity and optimize resource utilization, an efficient migration mechanism is essential. This paper presents a comprehensive review of SFC migration research, analyzing it across key dimensions including migration motivations, strategy design, optimization goals, and core challenges. Existing approaches have demonstrated promising results in both passive and active migration strategies, leveraging techniques such as reinforcement learning for dynamic scheduling and digital twins for resource prediction. Nonetheless, critical issues remain—particularly regarding service interruption control, state consistency, algorithmic complexity, and security and privacy concerns. Traditional optimization algorithms often fall short in large-scale, heterogeneous networks due to limited computational efficiency and scalability. While machine learning enhances adaptability, it encounters limitations in data dependency and real-time performance. Future research should focus on deeply integrating intelligent algorithms with cross-domain collaboration technologies, developing lightweight security mechanisms, and advancing energy-efficient solutions. Moreover, coordinated innovation in both theory and practice is crucial to addressing emerging scenarios like 6G and edge computing, ultimately paving the way for a highly reliable and intelligent network service ecosystem.
3

Tikaningsih, Ades. "Optimizing Waste Collection Routes in Purwokerto using the Dijkstra Algorithm." Publication of the International Journal and Academic Research 1, no. 2 (2025): 78–85. https://doi.org/10.63222/pijar.v1i2.21.

Abstract:
Waste is a complex problem that has the potential to cause environmental degradation if not handled properly. Waste management in Banyumas Regency, particularly in the Purwokerto area, encounters several fundamental limitations, including the limited capacity of the transportation fleet and the lack of an integrated route distribution system. Based on the documentation of the Banyumas Regency Environmental Service, of the total daily waste generation of 600 tons, only 45% is transported to the Final Disposal Site (TPA), highlighting the urgent need for optimization in the waste management system. This study applies the Dijkstra Algorithm using the greedy principle, which weights the distances between points and calculates the minimum value, to develop a simulation for determining the shortest route for transporting waste from the Temporary Shelter (TPS) to the TPA in Purwokerto. The results of the computational analysis indicate that the optimal route from the Environmental Service office to the TPA/PDU Tanjung is 10.553 kilometers long, involving eight stages of algorithmic iteration. This finding confirms the efficiency of the route compared to other alternatives, supporting the acceleration of waste reduction and handling targets in alignment with Banyumas Regency's strategic policy directives.
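The greedy shortest-path computation this abstract describes can be sketched with a textbook Dijkstra implementation. The node names and distances below are invented for illustration; they are not the actual Purwokerto road network or the paper's 10.553 km route.

```python
import heapq

def dijkstra(graph, source):
    """Greedy shortest-path distances from source.

    graph: dict mapping node -> list of (neighbor, distance_km) pairs.
    """
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry; u already settled with a shorter path
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# Invented mini-network: office, two temporary shelters (TPS), final site (TPA)
graph = {
    "office": [("TPS1", 2.1), ("TPS2", 3.4)],
    "TPS1":   [("TPS2", 1.2), ("TPA", 9.0)],
    "TPS2":   [("TPA", 6.8)],
    "TPA":    [],
}
print(round(dijkstra(graph, "office")["TPA"], 3))  # 10.1 (office -> TPS1 -> TPS2 -> TPA)
```

The greedy step is the heap pop: the closest unsettled node is finalized first, which is exactly the "minimum value" principle the abstract mentions.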
4

Ku, Chu-Chang, Chien-Chou Chen, Simon Dixon, Hsien Ho Lin, and Peter J. Dodd. "Patient pathways of tuberculosis care-seeking and treatment: an individual-level analysis of National Health Insurance data in Taiwan." BMJ Global Health 5, no. 6 (2020): e002187. http://dx.doi.org/10.1136/bmjgh-2019-002187.

Abstract:
Introduction: Patients with tuberculosis (TB) often experience difficulties in accessing diagnosis and treatment. Patient pathway analysis identifies mismatches between TB patient care-seeking patterns and service coverage, but to date, studies have only employed cross-sectional aggregate data. Methods: We developed an algorithmic approach to analyse and interpret patient-level routine data on healthcare use and to construct patients’ pathways from initial care-seeking to treatment outcome. We applied this to patients with TB in a simple random sample of one million patients’ records in the Taiwan National Health Insurance database. We analysed heterogeneity in pathway patterns, delays, service coverage and patient flows between different health system levels. Results: We constructed 7255 pathways for 6258 patients. Patients most commonly initially sought care at the primary clinic level, where the capacity for diagnosing TB patients was 12%, before eventually initiating treatment at higher levels. Patient pathways are extremely heterogeneous prior to diagnosis, with the 10% most complex pathways accounting for 48% of all clinical encounters, and 55% of those pathways yet to initiate treatment after a year. Extended consideration of alternative diagnoses was more common for patients aged 65 years or older and for patients with chronic lung disease. Conclusion: Our study demonstrates that longitudinal analysis of routine individual-level healthcare data can be used to generate a detailed picture of TB care-seeking pathways. This allows an understanding of several temporal aspects of care pathways, including lead times to care and the variability in patient pathways.
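The core pathway-construction idea (ordering a patient's claims records into a care-seeking sequence) can be sketched as below. The record fields and facility-level names are invented for illustration; the paper's actual algorithm over National Health Insurance claims is more involved.

```python
from collections import defaultdict

def build_pathways(claims):
    """claims: list of (patient_id, date, facility_level) visit records.

    Returns each patient's pathway as a date-ordered list of facility levels,
    collapsing consecutive visits at the same level into one pathway step.
    """
    by_patient = defaultdict(list)
    for pid, date, level in claims:
        by_patient[pid].append((date, level))
    pathways = {}
    for pid, visits in by_patient.items():
        path = []
        for _, level in sorted(visits):  # ISO dates sort chronologically as strings
            if not path or path[-1] != level:
                path.append(level)
        pathways[pid] = path
    return pathways

# Invented records: (patient, ISO date, health-system level)
claims = [("p1", "2019-01-03", "primary clinic"),
          ("p1", "2019-01-10", "primary clinic"),
          ("p1", "2019-02-01", "regional hospital"),
          ("p2", "2019-03-05", "medical centre")]
print(build_pathways(claims))
# {'p1': ['primary clinic', 'regional hospital'], 'p2': ['medical centre']}
```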
5

Wang, Yuhong, and Yiqin Sheng. "Two-stage optimization of instant distribution of fresh products based on improved NSGA-III algorithm." International Journal of Industrial Engineering Computations 16, no. 3 (2025): 535–56. https://doi.org/10.5267/j.ijiec.2025.5.002.

Abstract:
As an important part of the fresh produce business format, fresh food instant delivery encounters numerous challenges. Issues like high losses, complex cold chains and time sensitivity lead to increased costs. Additionally, the livelihood of end-delivery personnel is under pressure and the talent market is saturated. Platform algorithms focus on the interests of the platform and its customers while relatively overlooking those of delivery personnel, which affects overall operation quality, resulting in a significant reduction in delivery efficiency and a remarkable decline in service quality, and further leading to the loss of user stickiness. Therefore, optimizing fresh food delivery routes while considering the interests of multiple parties to improve efficiency and service quality is a crucial research issue in the field of fresh food instant delivery. This paper designs a three-objective static model for fresh food instant delivery aiming at minimizing the total cost, maximizing customer satisfaction and maximizing rider satisfaction. Considering the dynamic changes of orders during actual operation and in combination with the dynamics of newly added orders, a multi-objective dynamic model with the goals of minimizing the total cost, minimizing the average customer dissatisfaction and maximizing the income fairness of riders is further established. Based on the constructed models and by incorporating the SPBO strategy, the NSGA-III algorithm is improved to make it more adaptable to the multi-objective optimization requirements of the fresh food instant delivery scenario. This study selects five operational points within a specific region of a fresh food self-operated platform and the order data from a particular day as research cases to obtain the relevant parameters required for the model and conduct case analysis. Based on the platform's business priorities and development needs, appropriate Pareto solutions are selected. Additionally, the feasibility and effectiveness of the improved algorithm are verified through algorithmic comparison. The research aims to provide valuable references and insightful implications for the management decisions of relevant fresh food self-operated platforms, as well as to continuously optimize the management and service of the instant delivery process.
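NSGA-III and the Pareto-solution selection mentioned above rest on Pareto dominance. A minimal sketch of extracting the non-dominated set for three objectives, with maximization objectives negated so everything is minimized; the objective vectors are invented, not the paper's data:

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (all objectives minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(solutions):
    """Return the non-dominated subset of a list of objective vectors."""
    return [s for s in solutions
            if not any(dominates(o, s) for o in solutions if o is not s)]

# Invented objective vectors:
# (total cost, avg customer dissatisfaction, negated rider satisfaction)
candidates = [
    (120.0, 0.20, -0.70),
    (100.0, 0.35, -0.60),   # dominated by the next vector
    (100.0, 0.35, -0.65),
    (150.0, 0.10, -0.80),
]
print(pareto_front(candidates))
```

A full NSGA-III adds non-dominated sorting into fronts and reference-point niching on top of this dominance test.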
6

Zhu, Huiyi. "Risks and Countermeasures of Personal Information Processing in Minors’ Network Socialization." Law and Economy 4, no. 2 (2025): 58–67. https://doi.org/10.56397/le.2025.02.08.

Abstract:
Minors experience network cognition, network imitation and network interaction in the algorithmic society. The invisible hand of the algorithm is inseparable from every network action. When minors encounter algorithmic recommendation services on social media, their personal information can be abused by the algorithm. The existing algorithm-based regulations for the protection of personal information have problems such as the rigid age thresholds of the notice-and-consent mechanism and its one-time character. In this regard, it is necessary to re-establish the risk assessment system for minors’ personal information, improve the operability of consent notification and emphasize dynamic notification, balance the rights and interests of minors’ personal information against the freedom of expression on the Internet, and build an algorithmic society in which minors can communicate and trust.
7

Gruber, Jonathan, and Eszter Hargittai. "The importance of algorithm skills for informed Internet use." Big Data & Society 10, no. 1 (2023): 205395172311681. http://dx.doi.org/10.1177/20539517231168100.

Abstract:
Using the Internet means encountering algorithmic processes that influence what information a user sees or hears. Existing research has shown that people's algorithm skills vary considerably, that they develop individual theories to explain these processes, and that their online behavior can reflect these understandings. Yet, there is little research on how algorithm skills enable people to use algorithms to their own benefit and to avoid harms they may elicit. To fill this gap in the literature, we explore the extent to which people understand how the online systems and services they use may be influenced by personal data that algorithms know about them, and whether users change their behavior based on this understanding. Analyzing 83 in-depth interviews from five countries about people's experiences with researching and searching for products and services online, we show how being aware of personal data collection helps people understand algorithmic processes. However, this does not necessarily enable users to influence algorithmic output, because currently, options that help users control the level of customization they encounter online are limited. Besides the empirical contributions, we discuss research design implications based on the diversity of the sample and our findings for studying algorithm skills.
8

Ramesh, G. Raja. "Utilizing Hybrid Cloud Computing with Machine Learning and Deep Learning to Enhance Privacy, Security, and Empower Patients." Journal of Information Systems Engineering and Management 10, no. 3 (2025): 1226–36. https://doi.org/10.52783/jisem.v10i3.7194.

Abstract:
This study investigates the effectiveness of Hybrid Cloud solutions in meeting the challenges encountered within the healthcare sector. Hybrid Cloud technology provides adaptable, on-demand services that empower hospitals and clinics to sidestep costly infrastructure upgrades and streamline maintenance costs. The scalability of cloud platforms addresses the fluctuating demands of the health and wellness industry, supported by fail-safes like disaster recovery and redundancy to ensure continuous service availability. At the heart of this infrastructure lies the Hybrid Health Cloud (HHC), serving as a central data repository for efficient information access and sharing. Nevertheless, obstacles emerge due to the time-consuming decryption and memory-intensive re-encryption processes inherent in HHC schemes. To counter these challenges, a novel approach integrates machine learning, deep learning, and Hybrid Cloud technologies, aiming to enhance system efficiency. Leveraging SHA-based algorithmic perspectives such as categorization, grouping, deep semantic networks, and quantum semantic networks, this study strives to improve both prediction accuracy and data protection.
9

Meer, Elana A., Maguire Herriman, Doreen Lam, et al. "Design, Implementation, and Validation of an Automated, Algorithmic COVID-19 Triage Tool." Applied Clinical Informatics 12, no. 05 (2021): 1021–28. http://dx.doi.org/10.1055/s-0041-1736627.

Abstract:
Objective: We describe the design, implementation, and validation of an online, publicly available tool to algorithmically triage patients experiencing severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2)-like symptoms. Methods: We conducted a chart review of patients who completed the triage tool and subsequently contacted our institution's phone triage hotline to assess tool- and clinician-assigned triage codes, patient demographics, SARS-CoV-2 (COVID-19) test data, and health care utilization in the 30 days post-encounter. We calculated the percentage of concordance between tool- and clinician-assigned triage categories, down-triage (clinician assigning a less severe category than the triage tool), and up-triage (clinician assigning a more severe category than the triage tool) instances. Results: From May 4, 2020 through January 31, 2021, the triage tool was completed 30,321 times by 20,930 unique patients. Of those 30,321 triage tool completions, 51.7% were assessed by the triage tool to be asymptomatic, 15.6% low severity, 21.7% moderate severity, and 11.0% high severity. The concordance rate, where the triage tool and clinician assigned the same clinical severity, was 29.2%. The down-triage rate was 70.1%. Only six patients were up-triaged by the clinician. 72.1% received a COVID-19 test administered by our health care system within 14 days of their encounter, with a positivity rate of 14.7%. Conclusion: The design, pilot, and validation analysis in this study show that this COVID-19 triage tool can safely triage patients when compared with clinician triage personnel. This work may signal opportunities for automated triage of patients for conditions beyond COVID-19 to improve patient experience by enabling self-service, on-demand, 24/7 triage access.
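The concordance, down-triage and up-triage rates reported in this abstract are simple to compute once tool- and clinician-assigned severity codes are paired per encounter. A sketch with invented codes (not the study's data):

```python
# Severity categories ordered least to most severe, as in the triage tool
SEVERITY = {"asymptomatic": 0, "low": 1, "moderate": 2, "high": 3}

def triage_agreement(pairs):
    """pairs: list of (tool_code, clinician_code), one per encounter.

    Returns (concordance, down_triage, up_triage) rates: same category,
    clinician less severe than tool, clinician more severe than tool.
    """
    n = len(pairs)
    concordance = sum(t == c for t, c in pairs) / n
    down = sum(SEVERITY[c] < SEVERITY[t] for t, c in pairs) / n
    up = sum(SEVERITY[c] > SEVERITY[t] for t, c in pairs) / n
    return concordance, down, up

# Invented encounters
encounters = [("high", "moderate"), ("moderate", "moderate"),
              ("low", "asymptomatic"), ("moderate", "high")]
print(triage_agreement(encounters))  # (0.25, 0.5, 0.25)
```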
10

Kharitonova, Yu., N. S. Malik, and T. Yang. "The Legal Issue of Deterrence of Algorithmic Control of Digital Platforms: The Experience of China, the European Union, Russia and India." BRICS Law Journal 10, no. 1 (2023): 147–70. http://dx.doi.org/10.21684/2412-2343-2023-10-1-147-170.

Abstract:
The authorities in a number of states are concerned about the need for public disclosure of the recommendation algorithms that are used in online services. The introduction of regulations aimed at software developers is frequently proposed as a potential solution to this problem of algorithm transparency. These requirements, which must be fulfilled by the developers of software products, can take the form of administrative regulations or standards. However, despite these efforts, in the absence of direct legislative regulation, users continue to face the possibility that a social network feed or a search result may present content in ways that are inconsistent or opaque, because the logic behind these recommendations is concealed by IT giants. The main provisions of the legislative initiatives include the obligation of digital platforms to publish the mechanisms of their recommendation services, the responsibility to inform the user about the processing of personal data, and the possibility for the user to refuse such processing. States have recognized the problem and are approaching it from different positions. Each region chooses what to prioritize in terms of the law. For China and Europe, all areas of platforms are important, whereas for Russia, news platforms and video hosting are of interest, and for India, social media is the most important platform category. However, in all of the countries, the requirements for the disclosure of the recommendation engine are expanding to a certain extent. The amount of information that is publicly available as well as the order in which it is disclosed are both variable. This study demonstrates the commonalities and differences in the approaches taken by various countries.
11

Alfa, Attahiru Sule, K. Laurie Dolhun, and S. Chakravarthy. "A discrete single server queue with Markovian arrivals and phase type group services." Journal of Applied Mathematics and Stochastic Analysis 8, no. 2 (1995): 151–76. http://dx.doi.org/10.1155/s1048953395000153.

Abstract:
We consider a single-server discrete queueing system in which arrivals occur according to a Markovian arrival process. Service is provided in groups of size no more than M customers. The service times are assumed to follow a discrete phase type distribution, whose representation may depend on the group size. Under a probabilistic service rule, which depends on the number of customers waiting in the queue, this system is studied as a Markov process. This type of queueing system is encountered in the operations of an automatic storage retrieval system. The steady-state probability vector is shown to be of (modified) matrix-geometric type. Efficient algorithmic procedures for the computation of the rate matrix, steady-state probability vector, and some important system performance measures are developed. The steady-state waiting time distribution is derived explicitly. Some numerical examples are presented.
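The (modified) matrix-geometric solution mentioned above hinges on a rate matrix R solving a quadratic matrix equation. A generic sketch of the classical successive-substitution iteration for a discrete-time quasi-birth-death block structure, R = A0 + R·A1 + R²·A2 (A0 up one level, A1 same level, A2 down); the 2×2 blocks are illustrative, not the paper's group-service phase-type model:

```python
def mat_mul(A, B):
    """Multiply two matrices given as lists of rows."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def mat_add(*Ms):
    """Entry-wise sum of matrices of equal shape."""
    return [[sum(M[i][j] for M in Ms) for j in range(len(Ms[0][0]))]
            for i in range(len(Ms[0]))]

def rate_matrix(A0, A1, A2, tol=1e-12, max_iter=10_000):
    """Solve R = A0 + R*A1 + R^2*A2 by successive substitution from R = 0."""
    n = len(A0)
    R = [[0.0] * n for _ in range(n)]
    for _ in range(max_iter):
        R_next = mat_add(A0, mat_mul(R, A1), mat_mul(mat_mul(R, R), A2))
        if max(abs(R_next[i][j] - R[i][j]) for i in range(n) for j in range(n)) < tol:
            return R_next
        R = R_next
    return R

# Illustrative stochastic blocks: rows of A0 + A1 + A2 sum to 1,
# and the downward drift (A2 heavier than A0) keeps the queue stable.
A0 = [[0.1, 0.0], [0.0, 0.1]]
A1 = [[0.3, 0.2], [0.2, 0.3]]
A2 = [[0.4, 0.0], [0.0, 0.4]]
R = rate_matrix(A0, A1, A2)
print([[round(x, 4) for x in row] for row in R])
```

The steady-state level probabilities then follow the matrix-geometric form pi_{k+1} = pi_k · R; faster solvers (e.g. logarithmic reduction) exist, but this fixed point is the core of the algorithmic procedure.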
12

Agirbas, Asli. "A Teaching Methodology on the Combination of Architectural Tradition and Parametric Design: A Case Study with Birdhouses." International Journal of Islamic Architecture 11, no. 1 (2022): 149–68. http://dx.doi.org/10.1386/ijia_00068_1.

Abstract:
Birdhouses were produced in Anatolia from the thirteenth century to the nineteenth century, each with period-specific features. The Industrial Revolution’s call for efficiency, however, meant that birdhouses were no longer manufactured. In this study, parametric design, which is based on algorithmic design, is used to re-implement the lost practice of birdhouse construction. In the defined methodology, deployed in an undergraduate architectural course, this study considers all stages, from the design process through to the digital fabrication process, of a birdhouse design based on metaball geometry created in a visual programming language. I will present the problems encountered by the students and the solutions to these problems. In addition, I will statistically evaluate the students’ attitude and response toward the teaching methodology with SPSS Statistics (Statistical Product and Service Solutions).
13

Yao, Mengxiao. "How Is AI Enhancing Investment Behavior in Today's Financial Markets." Transactions on Computer Science and Intelligent Systems Research 8 (October 24, 2024): 119–23. http://dx.doi.org/10.62051/wtg5y852.

Abstract:
The present investigation examines AI's pivotal function in shaping contemporary financial market investment dynamics. Amid surging data volumes and computational leaps, AI has emerged as a cornerstone in finance, particularly in enhancing investment efficacy, risk assessment, and tailored services. By scrutinizing AI's present adoption across domains like data analysis, algorithmic trading, risk mitigation, and advisory services through concrete case studies, we illustrate its tangible impact. Our study then shows how AI bolsters investment activities by expediting decision-making, refining strategic investments, reinforcing risk controls, and enriching user experiences. Furthermore, we examine the technological hurdles, ethical dilemmas, and regulatory challenges encountered in AI's integration into the financial sector. Lastly, we propose recommendations for financial institutions and regulators to foster AI's sustainable growth within the market. This research not only contributes to a profound comprehension of AI's role in financial applications but also furnishes valuable insights and future research directions.
14

Anwar, Sayeed, Ujjeisheei Panda, and Hitesh Mohapatra. "Legal and Ethical Issues in IoT based Smart City: Data Privacy, Surveillance, and Citizen Rights." Journal of Computer Science Engineering and Software Testing 10, no. 2 (2024): 17–26. http://dx.doi.org/10.46610/jocses.2024.v10i02.003.

Abstract:
As Internet of Things (IoT) technologies revolutionize urban infrastructure and services, smart cities increasingly encounter intricate legal and ethical challenges. This expanded abstract delves into the existing literature on the legal and ethical issues associated with smart cities and IoT. It highlights crucial topics, including data privacy laws, accountability frameworks, fairness, transparency, and responsible data use. Legal questions are primarily concerned with ensuring compliance with existing regulations, while ethical considerations emphasize equitable access to services, transparency in operations, and mitigation of algorithmic biases. Moreover, this abstract explores the interdisciplinary connections between urban governance, technological innovation, and policy advice, aiming to foster comprehensive approaches to addressing these challenges. It emphasizes the importance of developing frameworks that adhere to legal requirements and uphold principles of equity and ethical responsibility. In summary, this abstract sets the stage for future research and policy-making efforts, providing a structured approach to navigate the complexities of deploying smart city and IoT technologies. It underscores the necessity of balancing technological advancement with adherence to legal standards and ethical norms, ensuring that smart city developments contribute positively to societal well-being and fairness.
15

Li, Yincheng, Shumin Wang, and Muhammad Bilawal Khaskheli. "Integrating Artificial Intelligence into Service Innovation, Business Development, and Legal Compliance: Insights from the Hainan Free Trade Port Era." Systems 12, no. 11 (2024): 463. http://dx.doi.org/10.3390/systems12110463.

Abstract:
This research aims to examine the application of Artificial Intelligence (AI) in product and service innovation from the perspective of the Hainan Free Trade Port (HFTP) and its relationship with corporate transformation, legal compliance, and regulatory oversight. Being critical to the fourth industrial revolution, digital business and international cooperation, the technology propels enterprises across various industries to transition from traditional models to intelligent and service-oriented ones. The paper elucidates the theoretical foundations of AI products, the digital economy, and service innovation, and analyzes the challenges enterprises face in the HFTP while implementing AI technology, including funding, technology, management, operations, corporate culture, and innovative concepts. Based on the proposed research methodology, three hypotheses are formulated. Hypothesis 1 states that the HFTP could facilitate enterprise transformation by applying supportive policies. Hypothesis 2 states that domestic laws and international agreements are urgently needed due to the legal risks arising from artificial intelligence. Hypothesis 3 states that HFTP enterprises comply with these laws while systemically addressing, in theory and practice, the legal risks of artificial intelligence and their implications for legal regulation; addressing legal risks related to data privacy, security, and algorithmic bias is a significant aspect of the research, and many strategies are proposed. The study demonstrates how AI technology can change businesses in the HFTP and the various risks they may encounter, providing valuable references for other enterprises regarding the practical significance of AI product and service innovation in the HFTP, and emphasizing the importance of international cooperation and legal instruction.
16

Porokhovnik, D. A., and I. G. Kamenev. "Managing Supply on Market of ‘Last Mile’ Delivery in Sphere of Fast-Moving Consumer Goods." Vestnik of the Plekhanov Russian University of Economics, no. 2 (March 18, 2025): 267–76. https://doi.org/10.21686/2413-2829-2025-2-267-276.

Abstract:
In this article the authors study the problem of supply management for a fast-moving consumer goods company selling online at the final stage of delivery (the 'last mile'). Empirical analysis of data from the online fast-moving consumer goods market in 2022-2023 showed seasonal peaks of demand and the impact of external factors, such as festive periods and weather conditions. It was demonstrated that demand fluctuations can put the company in a difficult situation, causing the loss of part of its demand, damage to its reputation relative to market competitors, a rise in costs and missed sales in product turnover. The article identifies specific features of this activity that prevent the direct use of available methods from the fields of logistics, management, online marketing, etc. It provides a review of existing approaches to the problem and analyzes a range of causes of fluctuations in demand for fast-moving consumer goods services. A classification of today's management tools (algorithmic, infrastructural and relational) is put forward; these tools can affect current demand, supply organization or customer habits. The article shows the benefits, drawbacks and situations for using these tools.
17

Xun, Zhang, and Xiaofeng Cheng. "The Digital Boundaries of Free Speech: Legal Interventions on Hate Speech and Disinformation in the Age of Social Media." Advances in Humanities Research 12, no. 2 (2025): None. https://doi.org/10.54254/2753-7080/2025.23849.

Abstract:
In the digital media era, extreme remarks and fake news on social platforms are constantly testing the limits of freedom of expression. This study selects three jurisdictions (the European Union, the United States, and China) to compare and analyze the institutional development of online speech governance. By analyzing the practical conflict between the European Digital Services Act and the First Amendment to the U.S. Constitution, the paper reveals the value gap between protecting freedom of expression and implementing content control in different jurisdictions. Platform content audit data and post-removal appeal cases show that the existing governance system has structural problems such as unclear implementation standards and an unbalanced allocation of audit resources. Especially in the interaction between algorithmic recommendation mechanisms and manual audits, users often encounter difficulties such as blocked appeal channels and opaque removal procedures. The research proposes the establishment of a hierarchical and classified content governance framework, the promotion of a traceable audit log system for transnational platforms, and the exploration of speech risk assessment models based on cultural context, so as to provide an institutional guarantee for the construction of a digital discourse space with equal rights and responsibilities.
18

Samonte, Benjie R. "Bilingual Feedback Management System for Frontline Services with Sentiment Analysis using Naïve-Bayes Algorithm." Innovatus 2, no. 1 (2019): 83–88. https://doi.org/10.5281/zenodo.5209576.

Abstract:
The purpose of this research is to develop a feedback management system that uses a modern technological approach to aid the existing feedback management system used in the university. The study employed Sentiment Analysis using the Naïve-Bayes Algorithm to determine the polarity of customers' feedback and suggestions. In order to come up with an effective and reliable system, the researcher adopted the incremental software development model as the software methodology, wherein a series of releases, called increments, is delivered, progressively providing more functionality for the customer with each increment. One hundred eight (108) customers, including seven office heads and one quality management staff member, were chosen as the respondents of the study. Based on the findings, the developed feedback management system (mobile and web applications) was effective in terms of its overall ease of use, portability and functionality, as it received a respectable rating from all respondents. The results also showed that the system passed the overall criteria of technical quality and eliminates the identified common problems encountered with the existing system. Likewise, the system provides performance reports for each office to determine which offices are performing well based on feedback. Significantly, this innovation will be an effective feedback mechanism tool in the University to address the concerns of the customers and other stakeholders and provide possible merits and rewards to performing offices.
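The polarity-classification step can be sketched as a multinomial Naïve-Bayes classifier with add-one smoothing. The toy bilingual (English/Filipino) feedback corpus below is invented for illustration; the paper's actual training data and preprocessing are not specified here.

```python
import math
from collections import Counter, defaultdict

def train_nb(samples):
    """samples: list of (text, label) pairs. Returns (class_counts, word_counts, vocab)."""
    class_counts = Counter()
    word_counts = defaultdict(Counter)
    vocab = set()
    for text, label in samples:
        class_counts[label] += 1
        for w in text.lower().split():
            word_counts[label][w] += 1
            vocab.add(w)
    return class_counts, word_counts, vocab

def predict_nb(model, text):
    """Pick the label maximizing log P(label) + sum of log P(word | label),
    with add-one (Laplace) smoothing so unseen words get nonzero probability."""
    class_counts, word_counts, vocab = model
    total = sum(class_counts.values())
    best_label, best_lp = None, float("-inf")
    for label in class_counts:
        lp = math.log(class_counts[label] / total)
        denom = sum(word_counts[label].values()) + len(vocab)
        for w in text.lower().split():
            lp += math.log((word_counts[label][w] + 1) / denom)
        if lp > best_lp:
            best_label, best_lp = label, lp
    return best_label

# Invented bilingual feedback snippets
feedback = [("mabilis at magalang ang serbisyo", "positive"),
            ("staff were helpful and friendly", "positive"),
            ("mabagal ang proseso at walang tumutulong", "negative"),
            ("long queue and rude staff", "negative")]
model = train_nb(feedback)
print(predict_nb(model, "friendly and helpful staff"))  # positive
```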
APA, Harvard, Vancouver, ISO, and other styles
19

Nguyen, Hong Son, and Thanh Dung Ha. "THE METHOD OF DETECTING ONLINE PASSWORD ATTACKS BASED ON HIGH-LEVEL PROTOCOL ANALYSIS AND CLUSTERING TECHNIQUES." International Journal of Computer Networks & Communications (IJCNC) 11, no. 6 (2020): 77–89. https://doi.org/10.5281/zenodo.3598278.

Full text
Abstract:
Although many solutions have been applied, the safety challenges related to the password security mechanism have not diminished. The reason is that the means and tools supporting password attacks are becoming ever more abundant, while the number of transaction systems on the Internet keeps increasing and new service systems appear; IoT devices, for example, also use password-based authentication. In this context, consolidating password-based authentication mechanisms is critical, but monitoring measures for timely detection of attacks also play an important role in this battle. The password attack detection solutions currently in use need to be supplemented and improved to meet the new situation. In this paper we propose a solution that automatically detects online password attacks based solely on network traffic, using unsupervised learning techniques and oriented toward the protected application. Our solution therefore minimizes dependence on the factors encountered by host-based or supervised learning solutions. The certainty of the solution comes from using the results of in-depth analysis of attack characteristics to build the detection capacity of the mechanism. The solution was implemented experimentally on a real system and gave positive results.
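The unsupervised core of such an approach can be illustrated with a simple clustering step: group traffic sources by their authentication-failure behaviour and flag the cluster whose centroid looks attack-like. The sketch below uses a two-centre 1-D k-means on failed-login counts per source; the feature choice, IPs, and counts are invented for illustration and are far simpler than the protocol-level features the paper analyzes.

```python
def kmeans_1d(values, iters=20):
    """Two-centre k-means on a list of numbers.
    Returns (low_centroid, high_centroid, labels), label 1 = high cluster."""
    lo, hi = min(values), max(values)
    labels = [0] * len(values)
    for _ in range(iters):
        labels = [0 if abs(v - lo) <= abs(v - hi) else 1 for v in values]
        low_group = [v for v, l in zip(values, labels) if l == 0]
        high_group = [v for v, l in zip(values, labels) if l == 1]
        if low_group:
            lo = sum(low_group) / len(low_group)
        if high_group:
            hi = sum(high_group) / len(high_group)
    return lo, hi, labels

# Failed login attempts per source IP in one monitoring window (toy data):
# a few ordinary users mistyping, plus two sources hammering the service.
attempts = {"10.0.0.5": 2, "10.0.0.9": 1, "10.0.0.7": 3,
            "172.16.3.4": 412, "172.16.3.9": 388}

ips = list(attempts)
_, _, labels = kmeans_1d([attempts[ip] for ip in ips])
suspects = [ip for ip, label in zip(ips, labels) if label == 1]
print(suspects)  # the high-rate cluster stands out as the likely attackers
```

Because no labeled attack data is needed, the threshold between "normal" and "suspicious" emerges from the traffic itself, which is the practical appeal of the unsupervised formulation.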
APA, Harvard, Vancouver, ISO, and other styles
20

Zhang, Jiayao, and Zhiyong Long. "Research on Strategies for Improving the Cross-Cultural Adaptability of International Students Studying in China." International Journal of Social Science and Humanities Research 13, no. 1 (2025): 278–83. https://doi.org/10.5281/zenodo.15089075.

Full text
Abstract:
This study breaks through the one-dimensional analysis paradigm of traditional cross-cultural adaptation research and constructs a three-dimensional analysis framework of "technology-space-institution" to systematically reveal the deep adaptation dilemmas faced by international students studying in China. The group of digital natives encounters three contradictions: virtual social dependence and physical space dislocation, algorithmic cognitive solidification and technological paradox, as well as institutional rigidity and elastic demand. Through mixed research, it is found that the generation of adaptability is systematically restricted by the inclusiveness of digital infrastructure, the spatial distribution of educational resources, and the openness of social networks. Accordingly, a collaborative strategy system is proposed, and mechanisms such as hierarchical language support, intelligent demand perception, and digital social navigation are designed to promote cross-cultural adaptation from passive adjustment to active creation. The study demonstrates the construction path of the "adaptive ecosystem" and emphasizes the synergistic effect of technological empowerment, spatial reconstruction, and institutional innovation to achieve a dynamic balance between instrumental rationality and humanistic values. This not only provides a practical framework for universities to optimize support services for international students but also expands new dimensions for the theory of civilized dialogue in the digital age. Keywords: International students studying in China; Cross-cultural adaptability; Collaborative strategy; Adaptive ecosystem; Digital technology.
APA, Harvard, Vancouver, ISO, and other styles
21

Wan Norsyafizan W. Muhamad, Et al. "Enhancing QoS Performance for Cell Edge Users Through Adaptive Modulation and Coding in IEEE 802.11ac WLANs." International Journal on Recent and Innovation Trends in Computing and Communication 11, no. 9 (2023): 2424–30. http://dx.doi.org/10.17762/ijritcc.v11i9.9309.

Full text
Abstract:
Wireless communication networks, such as IEEE 802.11ac Wireless Local Area Networks (WLANs), often encounter challenges in providing consistent Quality of Service (QoS) to users situated at the cell edge. The inherent variations in channel conditions, particularly lower signal-to-noise ratios (SNRs) in these regions, lead to compromised data rates and reliability, resulting in significant degradation of throughput. This study presents an innovative solution in the form of an Adaptive Modulation and Coding Scheme (AMCS) algorithm tailored to enhance QoS performance for cell edge users. The primary objective of the AMCS algorithm is to optimize QoS by dynamically adjusting the transmission data rate based on the observed channel conditions, quantified using SNR as a channel state indicator. Conventional approaches might unilaterally select the lowest data rate in challenging conditions, prioritizing reliability at the expense of throughput. However, the proposed AMCS algorithm takes a distinct approach by intelligently determining the Modulation and Coding Scheme (MCS) that offers an optimal balance between throughput and reliability for the given SNR level. To achieve this, the algorithm utilizes real-time SNR measurements to select an MCS that ensures a stable connection while also maintaining an acceptable data rate. By adapting the MCS based on the current SNR, the algorithm aims to mitigate the adverse effects of poor channel conditions experienced by cell edge users. The innovation of the AMCS algorithm lies in its ability to make dynamic adjustments, allowing users to experience improved data rates without compromising connection stability. Through extensive simulations and evaluations, the proposed AMCS algorithm showcases its efficacy in enhancing QoS performance at the cell edge. The algorithm's adaptive approach successfully achieves higher data rates and improved reliability by selecting appropriate MCS configurations tailored to the observed SNR levels. 
This innovative technique provides a promising solution to the challenge of striking the right balance between throughput and reliability in wireless communication networks, ultimately leading to an improved user experience for those at the network's periphery.
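The core selection step the abstract describes, picking the highest Modulation and Coding Scheme (MCS) the measured SNR can support rather than defaulting to the most robust rate, can be sketched as a threshold lookup. The SNR thresholds below are illustrative assumptions, not values from the paper; the MCS indices and modulation/coding pairs follow the IEEE 802.11ac VHT-MCS ladder.

```python
# (min_snr_dB, mcs_index, modulation, coding_rate) -- thresholds are
# illustrative assumptions; real values depend on channel width, spatial
# streams, and receiver sensitivity.
MCS_TABLE = [
    (2,  0, "BPSK",    "1/2"),
    (5,  1, "QPSK",    "1/2"),
    (9,  2, "QPSK",    "3/4"),
    (11, 3, "16-QAM",  "1/2"),
    (15, 4, "16-QAM",  "3/4"),
    (18, 5, "64-QAM",  "2/3"),
    (20, 6, "64-QAM",  "3/4"),
    (25, 7, "64-QAM",  "5/6"),
    (29, 8, "256-QAM", "3/4"),
    (31, 9, "256-QAM", "5/6"),
]

def select_mcs(snr_db):
    """Return the highest-throughput MCS whose SNR threshold is met,
    falling back to the most robust scheme for very poor channels."""
    chosen = MCS_TABLE[0]
    for entry in MCS_TABLE:
        if snr_db >= entry[0]:
            chosen = entry
    return chosen

print(select_mcs(7))    # cell-edge user: modest but stable QPSK 1/2
print(select_mcs(27))   # strong signal: high-rate 64-QAM 5/6
```

Re-running `select_mcs` on each fresh SNR measurement gives the dynamic adaptation the abstract emphasizes: a cell-edge user drops to a reliable low-order scheme instead of losing the link, while users with good channels keep the highest sustainable data rate.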
APA, Harvard, Vancouver, ISO, and other styles
22

DAS, ANUSHKA. "THE IMPACT OF ARTIFICIAL INTELLIGENCE ON FMCG DISTRIBUTION." INTERNATIONAL JOURNAL OF SCIENTIFIC RESEARCH IN ENGINEERING AND MANAGEMENT 08, no. 03 (2024): 1–5. http://dx.doi.org/10.55041/ijsrem29573.

Full text
Abstract:
Economic impact: Many studies have investigated the economic implications of AI and ML adoption. Research often centres on productivity gains, labour market dynamics, and the potential for job displacement and creation. Some studies suggest that while AI and ML can lead to increased efficiency and innovation, they may also disrupt traditional employment patterns, requiring workforce transformation and re-skilling. Business and industry: Literature in this area examines how AI and ML technologies are changing business operations, including marketing, finance, supply chain management, and customer service. Studies highlight the potential for AI and ML to improve decision-making processes, optimize resource allocation, personalize customer experiences, and drive competitive advantage. Healthcare: AI and ML have significant implications for healthcare, including disease diagnosis, treatment planning, drug discovery, and personalized medicine. Research in this area explores the potential for AI and ML to improve clinical outcomes, reduce medical errors, lower healthcare costs, and enhance patient care through predictive analytics, image recognition, and natural language processing. Education and learning: The impact of AI and ML on education and learning is also a subject of interest. Literature in this field examines how AI-powered tools and platforms can facilitate personalized learning experiences, adaptive tutoring, automated grading, and educational content creation. Research also investigates the challenges and ethical considerations associated with AI-driven educational technologies.
Ethical and social implications: Researchers have raised concerns about the ethical and societal ramifications of AI and ML, including issues related to privacy, bias, fairness, transparency, accountability, and algorithmic governance. Research in this area seeks to develop frameworks, guidelines, and regulations to mitigate potential risks and ensure responsible AI development and deployment. Environmental impact: A few studies investigate how AI and ML can be used to address environmental challenges such as climate change, resource conservation, pollution control, and sustainable development. Research in this area explores applications of AI and ML in energy management, smart agriculture, environmental monitoring, and conservation efforts. The legal and regulatory aspects of AI and ML are also a subject of scholarly inquiry. Literature in this field examines intellectual property rights, liability issues, data protection laws, and the ethical and legal responsibilities of AI developers and users. Research aims to establish legal frameworks and guidelines to govern AI and ML technologies effectively.
APA, Harvard, Vancouver, ISO, and other styles
23

Wade, Jenna, Christina L. Dean, John D. Roback, and Harold C. Sullivan. "Diagnostic Management Team: Platelet Refractory Algorithm and Consult." American Journal of Clinical Pathology 152, Supplement_1 (2019): S6. http://dx.doi.org/10.1093/ajcp/aqz112.011.

Full text
Abstract:
Platelet refractoriness occurs when a patient fails to achieve an appropriate platelet count increment following platelet (plt) transfusion. Approximately 30% to 40% of cases are due to human leukocyte antigen (HLA) alloantibodies and 2% to 10% are due to human platelet antigen (HPA) alloantibodies, which can be detected by various assays. The results of these tests are then used to guide selection of appropriate units, such as crossmatch-compatible (XM) and HLA-matched (HLAm) plts. Given the various steps and tests involved in diagnosing and managing plt refractoriness, clinicians may order unnecessary tests, resulting in delays in patient care. In April 2018, our institution formed a diagnostic management team (DMT) to establish an algorithmic approach to testing, diagnosing, and transfusing plt refractory patients. We performed a retrospective review of clinician requests for XM or HLAm plts over a 9-month period prior to the launch of the DMT and compared them to requests placed during the initial 9-month period following the launch of the DMT. We collected the date the transfusion medicine (TM) service was notified and the time to complete the following: 1-hour corrected count increment (CCI), ELISA indirect antibody screen, HLA flow cytometry PRA screen, HLA single antigen bead (SAB), and obtain the first XM or HLAm plt unit for the patient. There were 12 and 20 patients evaluated for plt refractoriness pre- and post-DMT, respectively. The median time to complete the ELISA indirect antibody screen was 2 days both pre- and post-DMT. Eight percent of patients never had an ELISA screen performed pre-DMT. All patients meeting plt refractory criteria had ELISA screens performed post-DMT. Median time to complete HLA testing decreased from 4 days to 2 days, and all patients who had positive FlowPRAs had subsequent SABs performed post-DMT, compared to only 40% of patients pre-DMT.
The median times to obtain the first XM unit were 6 and 5 days and the first HLAm unit was 5 and 6 days pre- and post-DMT, respectively. One patient had a CCI above 7,500 and four patients had negative screening ELISA and FlowPRA tests after the launch of the DMT. No further testing was performed and they continued to receive ABO-compatible (ABOc) plt units. Since the launch of the DMT, there is improved consistency regarding what tests are performed and standardization of the sequence in which the appropriate tests are ordered. We have reduced unnecessary testing by identifying patients with nonimmune-mediated etiologies of plt refractoriness. Divergence from the algorithm is the main issue encountered during the initial 9-month period following implementation of the DMT. This divergence stemmed from lack of consistent education about plt refractoriness and the DMT. We believe better adherence to the algorithm will lead to decreased product acquisition time and further improve efficiency.
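The 1-hour CCI threshold the abstract uses (above 7,500 indicating an adequate increment) comes from the standard corrected count increment formula: platelet increment per µL, multiplied by body surface area in m², divided by the number of platelets transfused in units of 1e11. The example numbers below are illustrative, not taken from the study.

```python
def corrected_count_increment(pre_count, post_count, bsa_m2, plts_x1e11):
    """Standard CCI: (post - pre, in plts/uL) * BSA (m^2) / plts transfused (x1e11)."""
    return (post_count - pre_count) * bsa_m2 / plts_x1e11

# Hypothetical patient: counts rise from 20,000 to 45,000/uL one hour after
# transfusion of 4e11 platelets, with a body surface area of 1.8 m^2.
cci = corrected_count_increment(20_000, 45_000, 1.8, 4.0)
print(cci)          # 11250.0
print(cci > 7_500)  # True: adequate 1-hour increment, not refractory
```

A CCI persistently below that threshold on fresh, ABO-compatible units is what would trigger the DMT algorithm's antibody workup (ELISA screen, then FlowPRA and SAB as indicated).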
APA, Harvard, Vancouver, ISO, and other styles
24

Brière, Raphaëlle, Rogeh Habashi, Shaila Merchant та ін. "2023 Canadian Surgery Forum01. Evaluation of physicians’ practices and knowledge regarding the treatment of acute uncomplicated diverticulitis03. What is the effect of rurality on outcomes for parathyroidectomy in a large North American jurisdiction?05. Characteristics of opioid providers for patients undergoing same-day breast surgery in Ontario, Canada06. Improving the management and outcomes of complex non-pedunculated colorectal polyps at a regional hospital in British Columbia10. Actinomycosis presenting as an anterior abdominal mass after laparoscopic cholecystectomy12. Prioritizing melanoma surgeries to prevent wait time delays and upstaging of melanoma during the COVID-19 pandemic13. Trust me, I know them: assessing interpersonal bias in general surgery residency interviews14. Current state of female and BIPOC representation in Canadian academic surgical societies15. Harnessing a province-wide network of surgical excellence and diverse talents for the continuous improvement of surgical care in BC16. Massive stone or is it glass: a curious case of porcelain gallbladder17. Choosing your endoscopist: a retrospective single-centre cohort study18. The local experience with endoscopic ampullectomy for noninvasive ampullary lesions at a single tertiary care centre19. Defining appropriate intraoperative patient blood management strategies in noncardiac surgery: the Ottawa Intraoperative Transfusion Consensus20. Postoperative gastrointestinal dysfunction after neuromuscular blockade reversal with sugammadex versus cholinesterase inhibitors in patients undergoing gastrointestinal surgery: a systematic review and meta-analysis21. Factors influencing recurrence in medial breast cancer after skin-sparing mastectomy and immediate breast reconstruction22. What is the role of fit in medical education? A scoping review23. 
The obesity paradox revisited: Is obesity still a protective factor for patients with severe comorbidities or in high-risk operations?24. Planetary health education for residents — an integrative approach through quality improvement25. A rare case of concurrent primary malignancies: adrenal cortical carcinoma and metastatic colon cancer26. Effect of video-based self-assessment on intraoperative skills: a pilot randomized controlled trial28. A cost–utility study of elective hemorrhoidectomies in Canada30. Opioid-free hernia repair using local anesthetic: an assessment of postoperative pain and recovery31. Mitigating the environmental burden of surgical and isolation gowns33. The evolution and contributions of theCanadian Journal of Surgery: a bibliometric study34. Clinical and oncologic outcomes of patients with rectal cancer and past radiotherapy for prostate cancer: a case–control study35. Antibiotic prophylaxis and mechanical bowel preparation in elective colorectal surgery: a survey of Quebec general surgeons36. Identifying core deficiencies and needs in the surgical knot-tying curriculum: a single-centre qualitative analysis37. Spleen-preserving surgery for symptomatic benign splenic cyst: video case report38. Learning to manage power differentials and navigate uncertainty: a qualitative interview study about decision-making in surgery39. Surgical education checklist: a novel tool to improve uptake of Competence By Design in a residency program and surgical resident experience40. A comparative evaluation of management strategies and patient outcomes for acute appendicitis in the post-COVID era41. External benchmarking of colorectal resection outcomes using ACS-NSQIP: accurately categorizing procedures at risk of morbidity42. Role of thymectomy in surgical treatment of secondary and tertiary hyperparathyroidism43. Starting position during colonoscopy: a systematic review and meta-analysis of randomized controlled trials44. 
Enhanced Recovery After Surgery protocols following emergency intra-abdominal surgery reduces length of stay and postoperative morbidity: a systematic review and meta-analysis45. Competencies, privileging and geography: preparing general surgery residents for rural practice in British Columbia46. Holographic surgical skills training: Can we use holograms to teach hand ties and is it comparable to in-person learning?47. The association between gender and confidence in UBC general surgery residents48. Quality improvement in timeliness of EPA completion in general surgery residency49. Gastrointestinal system surgical outcomes in the highly active antiretroviral therapy (HAART)-era HIV-positive patient: a scoping review50. Joint rounds as a method to partner surgical residency programs and enhance global surgical training52. Preoperative frailty and mortality in medicare beneficiaries undergoing major and minor surgical procedures53. What’s going on out there? Evaluating the scope of rural general surgery in British Columbia54. Short-stay compared with long-stay admissions for loop ileostomy reversals: a systematic review and meta-analysis55. General surgeons’ right hemicolectomy costs proficiency and preferences56. Staple line with bioabsorbable reinforcement for gastropexy in hiatal hernia repair57. Impact of enhanced recovery pathways on patient-reported outcomes after abdominal surgery: a systematic review58. Evaluation of outcomes between rural, northern/remote, and urban surgical patients diagnosed with moderate to severe acute pancreatitis: a retrospective study59. Outcome of preoperative percutaneous drainage of intraabdominal abscess versus initial surgery in patients with Crohn disease60. Preliminary analysis: dexamethasone-supplemented TAP blocks may reduce opioid requirements after colorectal surgery: a multi-centre randomized controlled trial61. 
Preoperative skin preparation with chlorhexidine alcohol versus povidone–iodine alcohol for the prevention of surgical site infections: a systematic review and meta-analysis of randomized controlled trials62. “Why didn’t you call me?” Factors junior learners consider when deciding whether to call their supervisor63. Cost savings associated with general surgical consultation within remote Indigenous communities in Quebec: a costing evaluation64. Right lateral decubitus patient position during colonoscopy increases endoscopist’s risk of musculoskeletal injury65. Reducing re-visit to hospital rates among pediatric post-appendectomy patients: a quality-improvement project66. Exploring gender diversity in surgical residency leadership across Canada67. Operating room sustainability project: quantifying the surgical environmental footprint for a laparoscopic cholecystectomy in 2 major surgical centres68. ERCP under general anesthesia compared with conscious sedation (EUGACCS) study69. Complications requiring intervention following gastrostomy/gastrojejunostomy tube insertion: a retrospective analysis70. Equity, diversity and inclusion (EDI) in underrepresented in medicine (URiM) residents: Where are we and what now?71. Association between complications and death within 30 days after general surgery procedures: a Vascular Events in Noncardiac Surgery Patients Cohort Evaluation (VISION) substudy72. What is the long-term impact of Gastrografin on adhesive small bowel obstruction? A systematic narrative review73. TRASH-CAN: Trainee-Led Research and Audit for Sustainability in Healthcare Canada74. Representation and reporting of sociodemographic variables in BREAST-Q studies: a systematic review75. A scoping review: should tap water instead of sterile water be used for endoscopy of the colon and rectum?76. Laparoscopic revision of Nissen fundoplication with EndoFLIP intraoperative assistance: a video presentation77.
Environmental sustainability in the operating room: perspectives and practice patterns of general surgeons in Canada78. The impact of COVID-19 on medical students applying to general surgery in the CaRMS matching process79. Novel approach to laparoscopic gastrostomy tube placement80. Using prucalopride for prevention of postoperative ileus in gastrointestinal surgery: a systematic review and meta-analysis of randomized controlled trials81. Assessment of environmental and economic sustainability of perioperative patient warming strategies83. Development of a Canadian colorectal robotic surgery program: the first three years84. Patient safety and quality improvement lessons from review of Canadian thyroid and parathyroid surgery malpractice litigation case law01. Changes in sarcopenia status predict survival among patients with resectable esophageal cancer02. The feasibility of near-infrared fluorescence-guided robotic-assisted minimally invasive esophagectomy using indocyanine green dye03. Does patient experience with robotic thoracic surgery influence their willingness to pay for it?04. Artificial intelligence–augmented endobronchial ultrasound-elastography is a useful adjunct for lymph node staging for lung cancer05. Preoperative mediastinal staging in early-stage lung cancer: targeted nodal sampling is not inferior to systematic nodal sampling06. The application of an artificial intelligence algorithm to predict lymph node malignancy in non-small cell lung cancer07. Pneumonectomy for non-small cell lung cancer: long-term overall survival from a 15-year experience09. Primary spontaneous pneumothorax occurred in pectus excavatum patients10. Optimizing management for early-stage esophageal adenocarcinoma: longitudinal results from a multidisciplinary program11. Needle decompressions in post-traumatic tension pneumothorax: boon or bane12. 
10-year follow-up of endoscopic mucosal resection versus esophagectomy for esophageal intramucosal adenocarcinoma in the setting of Barrett esophagus: a Canadian experience13. Outcomes after thoracic surgery for malignancy in patients with severe and persistent mental illness15. Stage II/III esophageal cancer patients with complete clinical response after neoadjuvant chemoradiotherapy: a Markov decision analysis16. Development of a surgical stabilization of rib fractures program at a Level I trauma centre in Qatar: initial report17. Screening Criteria Evaluation for Expansion in Pulmonary Neoplasias (SCREEN) II18. Multi-centre study evaluating the risks and benefits of intraoperative steroids during pneumonectomy19. Prediction of esophageal cancer short-term survival using a pretreatment health-related quality of life measure20. Evaluating the impact of virtual care in thoracic surgery: patients’ perspective21. Virtual thoracic surgical outpatient encounters are non-inferior to in-person visits for overall patient care satisfaction in the post-COVID-19 era22. Concurrent minimally invasive esophagectomy and laparoscopic right hemicolectomy23. Assessing the impact of robotic-assisted thoracic surgery on direct carbon dioxide emissions — a retrospective analysis of a prospective cohort24. Young’s modulus of human lung parenchyma and tumours25. Thoracic surgery trauma: nail gun v. SVC26. Thymomatous myasthenia gravis after total thymectomy at a tertiary care surgical centre: a 15-year retrospective review27. Effectiveness of 18F-FDG-PET/CT in the stage diagnosis of non-small cell lung cancer (NSCLC): a diagnostic test accuracy systematic review and meta-analysis01. Emergency colon resection in the geriatric population: the modified frailty score as a risk factor of early mortality02. Laparoscopic ovarian transposition prior to pelvic radiation in young female patients with anorectal malignancies: a systematic review and meta-analysis of prevalence03. 
Using preoperative C-reactive protein levels to predict anastomotic leaks and other complications after elective colorectal surgery: a systematic review and meta-analysis04. Perioperative intravenous dexamethasone for patients undergoing colorectal surgery: a systematic review and meta-analysis05. Population-based study comparing time from presentation to diagnosis and treatment between younger and older adults with colorectal cancer06. The role of warmed-humidified CO2insufflation in colorectal surgery: a meta-analysis07. Total abdominal colectomy versus diverting loop ileostomy and antegrade colonic lavage for fulminantClostridioidescolitis: analysis of the national inpatient sample 2016–201908. Cutting seton for the treatment of cryptoglandular fistula-inano: a systematic review and meta-analysis09. Prognostic value of routine stain versus elastic trichrome stain in identifying venous invasion in colon cancer10. Anastomotic leak rate following the implementation of a powered circular stapler in elective colorectal surgeries11. Surgical technique and recurrence of Crohn disease following ileocolic resection12. Implementation of synoptic reporting for endoscopic localization of complex colorectal neoplasms: Can we reduce rates of repeat preoperative colonoscopy?13. Effects of diet and antibiotics on anastomotic healing: a mouse model study with varied dietary fibre and fat, and preoperative antibiotics14. Assessment of rectal surgery–related physical pain and conditioning: a national survey of Canadian rectal surgeons15. Does specimen extraction incision and transversus abdominis plane block affect opioid requirements after laparoscopic colectomy?16. Colorectal and therapeutic GI working together: What is the role for TAMIS for benign lesions?17. Impact of the COVID-19 pandemic on readmission rates following colorectal surgery18. More than the sum of its parts: the benefits of multidisciplinary conferences extend beyond patient care19. 
Multidisciplinary conference for rectal cancer — measuring patient care impact20. Patient outcomes in emergency colorectal cancer resections: a 15-year cohort analysis21. Enhanced Recovery after Surgery (ERAS) protocols in colorectal cancer resection: a 15-year analysis of patient outcomes22. Laparoscopic to open conversion in colorectal cancer resection: a 15-year analysis of postoperative outcomes23. Management of postoperative ileus in colorectal cancer resections: a 15-year evaluation of patient outcomes24. Timing of ostomy reversal and associated outcomes: a systematic review25. Fragility of statistically significant outcomes in colonic diverticular disease randomized trials26. Postoperative day 1 and 2 C-reactive protein values for predicting postoperative morbidity following colorectal surgery27. Bariatric surgery before colorectal surgery reduces postoperative morbidity and health care resource utilization: a propensity score matched analysis28. Ileocolic Crohn disease: a video vignette of the Kono-S anastomosis29. Association between patient activation and postoperative outcomes in rectal cancer survivors30. Understanding surgeon and nurse perspectives on the use of patient-generated data in the management of low anterior resection syndrome31. Characteristics of interval colorectal cancer: a Canadian retrospective population-level analysis from Newfoundland and Labrador32. Current rectal cancer survivorship care: unmet patient needs and fragmented specialist and family physician care33. Local excision for T1 rectal cancer: a population-based study of practice patterns and oncological outcomes34. Can nonoperative management of acute complicated diverticulitis be successfully treated with a future hospital at home program? A retrospective cohort study35. Does patient activation impact remote digital health follow-up and same-day discharge after elective colorectal surgery36. Parastomal hernia prevention, assessment and management: best practice guidelines37. 
Anastomotic leak rates in circular powered staplers versus manual circular staplers in left sided colorectal anastomoses: a systematic review38. The Gips procedure for pilonidal disease: a video presentation39. Local recurrence-free survival after transanal total mesorectal excision: a Canadian institutional experience40. The impact of operative approach for obese colorectal cancer patients: analysis of the national inpatient sample (2015–2019)41. Safety and feasibility of discharge within 24 hours of colectomy: a systematic review and meta-analysis42. Laparoscopic lateral lymph node dissection for an advanced rectal cancer: a video abstract43. “Dear diary”: challenges in adopting routine operative recording in surgical training44. Rectal cancer in the very young (age < 40) — more treatment, worse survival: a population-based study45. Surveillance following treatment for stage I–III rectal cancer in Ontario — a population-based descriptive study46. A 15-year institutional experience of transanal endoscopic microsurgery for local excision of benign and malignant rectal neoplasia47. Robotic approach to reoperative pelvic surgery48. A mucosa-adherent bacterium impairs colorectal anastomotic healing by upregulating interleukin-17: the role of low-grade inflammation as a driver of anastomotic leak49. High uptake of total neoadjuvant therapy for rectal cancer in Canada despite surgeon concerns for possible overtreatment and treatment-related toxicity50. Safety and feasibility of discharge within 24 hours of ileostomy reversal: a systematic review and meta-analysis51. Safety and efficacy of intravenous antifibrinolytic use in colorectal surgery: systematic review and meta-analysis52. Impact of ileal pouch anal anastomosis on fertility in female patients with ulcerative colitis: a systematic review53. Modulation of the gut microbiota with fermentable fibres and 5-aminosalicylate to prevent peri-anastomotic and metastatic recurrence of colorectal cancer54.
Patients with locally advanced rectal cancer and a non-threatened circumferential resection margin may go straight to surgery and avoid radiation toxicities: the QuickSilver Trial55. Colonoscopies during the COVID-19 pandemic recovery period: Are we caught up on colorectal cancer detection and prevention? A single-institution experience56. Interim results of a phase II study evaluating the safety of nonoperative management for locally advanced low rectal cancer57. Assessing a tailored curriculum for endoscopic simulation for general surgery residency programs in Canada58. Modified Frailty Index for patients undergoing surgery for colorectal cancer: analysis of the National Inpatient Sample (2015–2019)59. Reducing postoperative bloodwork in elective colorectal surgery: a quality-improvement initiative60. A Nationwide Readmission Database (NRD) analysis assessing timing of readmission for complications following emergency colectomy: why limiting follow-up to postoperative day 30 underserves patients61. The same but different: clinical and Enhanced Recovery After Surgery outcomes in right hemicolectomy for colon cancer versus ileocecal resection in Crohn disease01. How reliable are postmastectomy breast reconstruction videos on YouTube?02. Knowledge, perceptions, attitudes, and barriers to genetic literacy among surgeons: a scoping review03. Exploring neutrophil-to-lymphocyte ratio as a predictor of postoperative breast cancer overall survival04. High β integrin expression is differentially associated with worsened pancreatic ductal adenocarcinoma outcomes05. Epidemiology of undifferentiated carcinomas06. An evidence-based approach to the incorporation of total neoadjuvant therapy into a standardized rectal cancer treatment algorithm07. Pushing the boundaries: right retroperitoneoscopic adrenalectomy after laparoscopic right nephrectomy08. 
The role of caspase-1 in triple negative breast cancer, the immune tumour microenvironment and response to anti-PD1 immunotherapy09. Perioperative neutrophil-to-lymphocyte ratio is associated with survival in patients undergoing colorectal cancer surgery10. Achievement of quality metrics in older adults undergoing elective colorectal cancer surgery11. Opportunities to improve the environmental sustainability of breast cancer surgical care12. Does margin status after biopsy matter in melanoma? A cohort study of micro- and macroscopic margin status and their impact on residual disease and survival13. Demonstration of D2 Lymph node stations during laparoscopic total gastrectomy14. Incidence of metastatic tumours to the ovary (Krukenberg) versus primary ovarian neoplasms associated with colorectal cancer surgery15. Spatial biomarkers in cancer16. How informed is the consent process for complex cancer resections?17. Adjuvant radiation therapy among immigrant and Canadian-born/long-term resident women with breast cancer18. Human peritoneal explant model reveals genomic alterations that facilitate peritoneal implantation of gastric cancer cells19. Preoperative breast satisfaction association with major complications following oncologic breast surgery20. Impact of geography on receipt of medical oncology consultation and neoadjuvant chemotherapy for triple negative and HER2-positive breast cancer21. Comparison of radiation, surgery or both in women with breast cancer and 3 or more positive lymph nodes22. Impact of synoptic operative reporting as a quality indicator for thyroid surgery: a Canadian national study01. The Toronto management of initially unresectable liver metastases from colorectal cancer in a living donor liver transplant program02. Dissection of a replaced right hepatic artery arising from the superior mesenteric artery during a laparoscopic Whipple03. Implementing the HIBA index: a low-cost method for assessing future liver remnant function04. 
Oncologic outcomes after surgical resection versus thermoablation in early-stage hepatocellular carcinoma: a systematic review of randomized controlled trials with meta-analysis05. Robotic pancreatic necrosectomy and internal drainage for walled-off pancreatic necrosis06. Predicting diabetes mellitus after partial pancreatectomy: PRIMACY, a pilot study07. Bleed and save: patient blood management in hepatectomy08. Defining standards for hepatopancreatobiliary cancer surgery in Ontario, Canada: a population-based cohort study of clinical outcomes09. Laparoscopic choledochoduodenostomy for recurrent choledocholithiasis10. A comparison of daytime versus evening versus overnight liver transplant from a single Canadian centre11. Pilot study validating the line of safety as a landmark for safe laparoscopic cholecystectomy using indocyanine green and near-infrared imaging12. Effect of transversus abdominis plane catheters on postoperative opioid consumption in patients undergoing open liver resections — a single-centre retrospective review13. Comparing the RETREAT score to the Milan criteria for predicting 5-year survival in post-liver transplant hepatocellular carcinoma patients: a retrospective analysis14. Characterizing the effect of a heat shock protein-90 inhibitor on porcine liver for transplantation using ex-vivo machine perfusion15. Modulation by PCSK9 of the immune recognition of colorectal cancer liver metastasis17. Implementation of a preoperative ketogenic diet for reduction of hepatic steatosis before hepatectomy19. Trends in the incidence and management of hepatocellular carcinoma in Ontario20. Canadian coaching program leads to successful transition from open to laparoscopic hepatopancreatobiliary surgery21. The impact of a positive pancreatic margin analyzed according to LEEPP on the recurrence and survival of patients with pancreatic head adenocarcinoma22. 
Armed oncolytic virus VSV-LIGHT/TNFSF14 promotes survival and results in complete pathological and radiological response in an immunocompetent model of advanced pancreatic cancer23. Comparing the efficacy of cefazolin/metronidazole, piperacillin-tazobactam, or cefoxitin as surgical antibiotic prophylaxis in patients undergoing pancreaticoduodenectomy: a retrospective cohort study01. Not just jumping on the bandwagon: a cost-conscious establishment of a robotic abdominal wall reconstruction program in a publicly funded health care system02. Shouldice method brief educational video03. Laparoscopic recurrent hiatal hernia repair with mesh gastropexy04. Robotic transabdominal preperitoneal Grynfeltt lumbar hernia repair with mesh01. Substance abuse screening prior to bariatric surgery: an MBSAQIP cohort study evaluating frequency and factors associated with screening02. MBSAQIP risk calculator use in elective bariatric surgery is uncommon, yet associated with reduced odds of serious complications: a retrospective cohort analysis of 210 710 patients03. Short-term outcomes of concomitant versus delayed revisional bariatric surgery after adjustable gastric band removal04. Safety and outcomes of bariatric surgery in patients with inflammatory bowel disease: a systematic review and meta-analysis08. Prescription drug usage as measure of comorbidity resolution after bariatric surgery — a population-based cohort study09. Experiences and outcomes of Indigenous patients undergoing bariatric surgery: a mixed-methods scoping review10. Bariatric surgery reduces major adverse kidney events in patients with chronic kidney disease: a multiple-linked database analysis in Ontario11. Inter-rater reliability of indocyanine green fluorescence angiography for blood flow visualization in laparoscopic Roux-en-Y gastric bypass12. Characterization of small bowel obstructions following elective bariatric surgery13. 
Revision of bariatric surgery for gastroesophageal reflux disease: characterizing patient and procedural factors and 30-day outcomes for a retrospective cohort of 4412 patients14. Duodenal-jejunal bypass liners are superior to optimal medical management in ameliorating metabolic dysfunction: a systematic review and meta-analysis15. Characteristics and outcomes for patients undergoing revisional bariatric surgery due to persistent obesity: a retrospective cohort study of 10 589 patients01. Collateral damage: the impact of the COVID-19 pandemic on the severity of abdominal emergency surgery at a regional hospital02. Pseudoaneurysms after high-grade penetrating solid organ injury and the utility of delayed CT angiography03. Pseudoaneurysm screening after pediatric high-grade solid organ injury04. Witnessed prehospital traumatic arrest: predictors of survival to hospital discharge05. A tension controlled, noninvasive device for reapproximation of the abdominal wall fascia in open abdomens08. Delayed vs. early laparoscopic appendectomy (DELAY) for adult patients with acute appendicitis: a randomized controlled trial09. Days at home after malignant bowel obstructions: a patient-centred analysis of treatment decisions10. Polytrauma and polyshock: prevailing puzzle11. National emergency laparotomy audit: a 9-year evaluation of postoperative mortality in emergency laparotomy13. A comparison of stress response in high-fidelity and low-fidelity trauma simulation14. ASA versus heparin in the treatment of blunt cerebrovascular injury — a systematic review and meta-analysis15. Comparison of complication reporting in trauma systems: a review of Canadian trauma registries16. Benefits of the addition of a nurse practitioner to a high-volume acute care surgery service: a quantitative survey of nurses, residents and surgery attendings17. Examining current evidence for trauma recurrence prevention systems18. 
Disparities in access to trauma care in Canada: a geospatial analysis of Census data19. Fast-track pathway to accelerated cholecystectomy versus standard of care for acute cholecystitis: the FAST pilot trial20. Using the modified Frailty Index to predict postoperative outcomes in patients undergoing surgery for adhesive small bowel obstruction: analysis of the National Inpatient Sample, 2015–201921. Adequacy of thromboprophylaxis in trauma patients receiving conventional versus higher dosing regimens of low-molecular-weight heparin: a prospective cohort study22. The hidden epidemiology of trauma in Nunavik: a comparison of trauma registries as a call to action23. Mapping surgical services in rural British Columbia: an environmental scan". Canadian Journal of Surgery 66, no. 6 Suppl 1 (2023): S53–S136. http://dx.doi.org/10.1503/cjs.014223.

25

TRAN, Hoang Khang. "Cognitive-Affective Responses to Algorithmic Service Encounters: A Multi-Theoretical Analysis of AI-Mediated Customer Experience, Brand Relationship Quality, and Satisfaction Dynamics in Vietnam's Digital Marketplace." Journal of Economics, Finance And Management Studies 08, no. 05 (2025). https://doi.org/10.47191/jefms/v8-i5-52.

Abstract:
This research investigates the complex interplay between cognitive-affective responses and algorithmic service encounters in Vietnam's rapidly evolving digital marketplace. Through a multi-theoretical lens integrating service-dominant logic, cognitive appraisal theory, the technology acceptance model, and relationship marketing frameworks, this study examines how artificial intelligence (AI)-mediated customer experiences influence brand relationship quality and satisfaction. Employing a robust mixed-method approach combining structural equation modeling with partial least squares (PLS-SEM) and fuzzy-set Qualitative Comparative Analysis (fsQCA), the study analyzed data from 387 Vietnamese digital consumers. Results reveal that algorithmic service personalization significantly enhances cognitive-affective customer experiences, which subsequently strengthen brand relationship quality and satisfaction. Technology readiness moderates these relationships, with higher levels amplifying the positive effects of AI-mediated experiences. The fsQCA findings identify multiple configurations of conditions leading to high satisfaction, demonstrating equifinality in satisfaction formation. This research contributes to the literature by developing an integrated theoretical framework explicating the psychological mechanisms through which algorithmic service encounters shape customer outcomes, offering nuanced insights into the digital transformation of service experiences in emerging markets, and identifying optimal configurations for enhancing customer satisfaction in technology-mediated environments.
26

Datta, Krittibas. "AI-DRIVEN PUBLIC ADMINISTRATION: OPPORTUNITIES, CHALLENGES, AND ETHICAL CONSIDERATIONS." Social Science Review A Multidisciplinary Journal 2, no. 6 (2024). https://doi.org/10.70096/tssr.240206023.

Abstract:
Artificial Intelligence (AI) is swiftly revolutionizing public administration, providing unparalleled prospects to improve governance, service delivery, and decision-making processes. This article examines the diverse role of AI in contemporary governance, highlighting its capacity to transform public administration. Significant prospects encompass enhanced decision-making via data-driven insights, automation for increased efficiency, personalized public services customized to people's requirements, and AI-driven systems for transparency, disaster management, and public safety. Notwithstanding these advantages, the integration of AI in governance encounters considerable obstacles, including the accessibility of precise data, infrastructural expenses, skill deficiencies among officials, reluctance to change, and cybersecurity risks. Ethical considerations are paramount, encompassing algorithmic fairness, privacy protection, accountability, and the promotion of diversity in AI-driven services. Practical instances demonstrate the successful integration of AI into public administration worldwide, providing important insights for further implementation. The paper emphasizes the necessity of a balanced strategy to optimize AI's potential while confronting its obstacles and ethical considerations. It underscores the need for ongoing innovation, strong regulatory frameworks, and cooperation among stakeholders. By adeptly addressing these complications, AI can be utilized to create a more efficient, egalitarian, and transparent public administration system that genuinely meets the demands of all individuals.
27

Whelan, Andrew, Alexandra James, Justine Humphry, Tanja Dreher, Danielle Hynes, and Scarlet Wilcock. "SMART TECHNOLOGIES, ALGORITHMIC GOVERNANCE AND DATA JUSTICE." AoIR Selected Papers of Internet Research 2019 (October 31, 2019). http://dx.doi.org/10.5210/spir.v2019i0.10977.

Abstract:
This panel engages critically with the development, application and emerging effects of ‘smart’ technologies of governance. Attending specifically to the ramifications of new forms of (‘big’) data capture and integration implemented by or for state agencies, the panel describes how the rollout of these technologies impacts on and is shaped by contexts prefigured by social and economic inequalities.

Two specific arenas are addressed and juxtaposed, with two papers on each of these. The first arena is the introduction of ‘smart city’ technologies and their implications for low income and marginalised communities. Often presented as novel augmentations of urban space, enhancing and customising the urban experience at the same time that they increase the city’s efficiency and ‘awareness’, smart city technologies also reconfigure urban spaces and how they are understood and governed by rendering the city a site of data generation and capture. This presents new opportunities and risks for residents and powerful commercial and state actors alike.

The emergence of public wi-fi kiosks as a means of providing internet access to underserved communities, as one panellist describes, can be shown to expose low-income residents to new forms of surveillance and to new kinds of inequity in terms of the asymmetry of information made available to the parties in the exchange at the kiosk. Surveillance and data capture is organised to particular ends and powerful interests shape and leverage the design and affordances of such initiatives in particular ways. Insofar as concerns are raised about these developments, they are commonly framed in terms of individual rights to privacy, missing the scale of the issues involved. It is not merely that ‘opting out’ becomes untenable.

As other panellists show, the issues involved are fundamentally social rather than individual in that they foreground questions around the appropriate relations between state and commercial actors, the use and nature of public space, and the uneven distribution of rights of access to space, information, and other resources within the city. Economically disenfranchised groups are not only denied meaningful access and participation, but colonised by data processes designed to extract various forms of value from their use of ‘public’ infrastructure which may not best serve their own interests.

The second arena addressed by the panel is the role of algorithmic governance and artificial intelligence in the provision of social welfare. This context is described in terms of both the effects for the frontline service encounter, and the design, justification, and implementation of the technologies reformatting this encounter from key locations within state agencies. Emerging technological infrastructures for social welfare do not simply reconfigure how existing services are offered and accessed. They facilitate the identification of new target populations for intervention, at the same time that they introduce additional burdens, hurdles and forms of intervention and surveillance for these populations. As such, it is evident in the design and application of these technologies that they accord with and expedite punitive logics in welfare provision, providing new opportunities for the application of dominant neoliberal governance strategies.

In both arenas, one can conceptualize ‘pipelines’ for the implementation of these developments. These pipelines are interstitial and heterogeneous, and combine different timelines, technologies and actors. They are often technically or administratively opaque or otherwise obscured from view.
This gives rise to a methodological and intellectual problem, around the extent to which researchers can say they know enough to point to determining instances, political agendas, commercial agreements, incidental alignments and so on in such a way as to advocate effectively for democratic input and oversight. In this sense the papers assembled highlight how these developments call for new politics of method, new modalities of analysis and critique, and more effective activist and academic engagements with the question of how ideals of justice and equity can best be instantiated in these contexts.
28

Poell, Thomas, Tinca Lukan, Arturo Arriagada, et al. "GLOBAL PERSPECTIVES ON PLATFORMS AND CULTURAL PRODUCTION." AoIR Selected Papers of Internet Research, January 2, 2025. https://doi.org/10.5210/spir.v2024i0.14093.

Abstract:
While digital platforms have reconfigured the institutions and practices of cultural production around the globe, current research is dominated by studies that take as their reference point the Anglo-American world and, to a lesser extent, China (Cunningham & Craig 2019; Kaye et al. 2021; Poell et al. 2021; Zhao 2019). Aside from totalizing theories of platform imperialism (Jin, 2013), the “rest of the world” has thus received relatively scant attention. Consequently, central concepts in the study of platform-based cultural production bear a strong imprint of Western institutions, infrastructures, industries, discourses, and cultural practices. US-based research, in particular, has informed how we understand and subsequently theorize notions of precarity, labor, governance, authenticity, gender, creativity, diversity, and autonomy in a platform environment. We can’t simply apply these concepts to local cultures of production in other parts of the world. There is bound to be friction, as this panel will demonstrate, between how labor, precarity, and governance are understood in the Anglo-American world and the lived experiences of platform-dependent cultural labor in Latin America, Southern and Eastern Europe, and East Asia. Concerns about Western-dominated research and theory are, of course, by no means novel. Post-colonial and decolonial theorists have long criticized the dominance and universalism of Western theory, pointing to the continuation of colonial knowledge-power relations (Chakrabarty 2009; Chen 2010; Escobar 2018; Mignolo 2012). Moreover, there have been numerous calls to decolonize (Glück 2018; Willems & Mano 2016) and de-westernize (Curran & Park 2000; Khiabany 2003) media studies and, more recently, production and platform studies (Bouquillion 2023; Bulut 2022; Zhang & Chen 2022). 
That being said, in practice, the US and Western Europe continue to function as the primary and often sole frame of reference in research on platforms and cultural production. In the light of these concerns, this panel aims to contribute to efforts to: 1) challenge universalism, 2) “provincialize” the US, and 3) multiply our frames of reference in the study of platforms and cultural production. Such a conceptual undertaking is especially vital as the cultural industries are at the heart of societal processes of meaning making (Hesmondhalgh 2018) and market activity. Let us unpack how the papers in this panel pursue this objective. The first paper develops a conceptual framework to expand our frames of reference for studying platforms and cultural production. Departing from epistemological universalism, it argues that “platforms”, “cultural production”, and the “local” need to be studied as dynamic configurations, characterized by crucial variations and correspondences across the globe. That is, in contemporary instances of creating cultural content, transnational platform markets, infrastructures, governance frameworks, and cultural practices become entangled with local political economies and cultural practices. Examining how such configurations take shape around the world, the next four papers in this panel focus on specific regions and modes of production, interrogating how local and transnational political economic relations and practices articulate each other. In this discussion, we pay specific attention to the notions of precarity, governance, and imaginaries. The second paper reframes influencer precarity in a semi-peripheral context in the Balkans and emphasizes the relational basis of influencer agency, as influencers rely on family members and oft-mocked “Instagram husbands” to alleviate precarity. It thus offers insights into the local characteristics of algorithmic encounters with platforms by proposing the concept of platform lethargy. 
This concept speaks to an emotional response and deliberate refusal on the part of influencers to adapt to platform mandates. This refusal is rooted in algorithmic knowledge from the semi-periphery, where creators are cognizant of their position in a devalued platform market. The third paper critically examines the intricate dynamics of creator culture, challenging the assumption of globally detached markets. Focusing on Latin American content creators in the United States, it explores how their aspirations intersect with the construction of the "Latin American" content creator dream. The study also scrutinizes the role of Content Service Organization (CSO) executives in shaping creator culture. Despite global portrayals, tensions emerge, revealing national market characteristics rooted in socio-cultural, linguistic, and regional norms. The fourth paper examines how drama creatives, who work for streaming platforms, are globally connected and yet remain nationally restrained in terms of how they imagine work. Through the notion of platform ambiguity, the paper shows how streaming platforms negotiate with cultural producers by both enabling and restraining their work. It thus de-westernizes scholarship on platforms and cultural production by highlighting how drama makers are not only creative but also geopolitical subjects dependent on the state. The last paper offers an alternative epistemological and ontological perspective on the state-platform-user configuration, where each actor works in alignment with others under the logic of governance. It uses a Chinese social media platform, Douyin, as a case to reveal how platforms rely on anthropomorphization to communicate with cultural producers and develop playful governance of China’s political and cultural environment.
29

Gupta, Riddhi S., Carolyn E. Wood, Teyl Engstrom, Jason D. Pole, and Sally Shrapnel. "A systematic review of quantum machine learning for digital health." npj Digital Medicine 8, no. 1 (2025). https://doi.org/10.1038/s41746-025-01597-z.

Abstract:
The growth in digitization of health data provides opportunities for using algorithmic techniques for data analysis. This systematic review assesses whether quantum machine learning (QML) algorithms outperform existing classical methods for clinical decisioning or health service delivery. Included studies use electronic health/medical records, or reasonable proxy data, and QML algorithms designed for quantum computing hardware. Databases PubMed, Embase, IEEE, Scopus, and preprint server arXiv were searched for studies dated 01/01/2015–10/06/2024. Of an initial 4915 studies, 169 were eligible, with 123 then excluded for insufficient rigor. Only 16 studies consider realistic operating conditions involving quantum hardware or noisy simulations. We find nearly all encountered quantum models form a subset of general QML structures. Scalability of data encoding is partly addressed but requires restrictive hardware assumptions. Overall, performance differentials between quantum and classical algorithms show no consistent trend to support empirical quantum utility in digital health.
30

Berri, Maryam, Noha Beydoun, and Martha Johnson. "Curriculum guide for teaching house officers and faculty: applying procedure codes effectively using chemical denervation as a model." Frontiers in Medicine 11 (September 18, 2024). http://dx.doi.org/10.3389/fmed.2024.1359230.

Abstract:
Introduction: The healthcare system in the United States relies heavily on physician- and house officer-driven initiation of billing and coding for collection of hospital payments and professional fees. Under the umbrella of practice management is the ever-changing and suboptimally taught concept of procedural billing and coding to house officers and faculty. Clinical providers and practitioners initiate billing and coding for performed services based on the procedural visit encounter, supported by the appropriate documentation. Correct charge capture is dependent on accurately linking CPT codes and J codes, including waste documentation, modifiers, and charge collection. We discuss a perspective regarding a new curricular methodology that teaches learners to apply an algorithmic approach for coding CPT codes, J codes, and modifiers for chemical denervation procedures involving high-cost botulinum toxin. We further recommend the use of visuals with algorithm development for other pertinent procedures that are specific to a department.
Methods: We developed a curriculum that includes algorithmic visuals, pre- and post-test questions, and reflections. It was implemented across various learner types.
Results: This chemical denervation curriculum was well-received and impactful in meeting the objectives of the course. It further expanded a learner’s vision of practice management that can be applied to other procedural examples.
Discussion: The results demonstrate a clear gap in practice management education, with pre-education knowledge on applying appropriate codes being particularly low among resident physicians. Learners found the algorithm we developed especially valuable, as it serves as a practical tool for accurately accounting for all aspects of CPT codes, modifiers, and J-codes. The methodology of the algorithmic approach proved to be innovative for avoiding billing write-offs and loopbacks that were beneficial for the training process. Learners indicated that this approach can be applied to other procedural billing.
31

Idiz, Daphne Rena, and Thomas Poell. "Dependence in the online screen industry." Media, Culture & Society, September 28, 2024. http://dx.doi.org/10.1177/01634437241286725.

Abstract:
The development of an online screen industry, dominated by a few American and Chinese streaming TV services and video-based platforms, triggers critical questions about the commercial and technological dependence of cultural producers within this industry. Drawing on research in media industries and platform studies, this paper develops a conceptual framework to systematically examine this dependence. Pursuing this aim, we propose to shift the focus from specific video platforms or streaming TV services as the starting point of the analysis to the perspective of cultural producers. Through a discussion of current research, we identify four major sources of dependence encountered by cultural producers in the online screen industry: (1) access to data, (2) algorithmic curation, (3) contractual relations, and (4) monetization. While we recognize that there are vital differences between platforms and streaming TV services, we argue that producers throughout the online screen industry face similar challenges in trying to navigate the four sources of dependence. In short, limited access to data and lack of control over content visibility put cultural producers in a fundamentally weak position vis-à-vis tech companies when negotiating contractual relations and terms of monetization.
32

Ellis, Jack Michael. "SKIPPING DISCOVERY? MUSIC DISCOVERY AND PERSONAL MUSIC COLLECTIONS IN THE STREAMING ERA." AoIR Selected Papers of Internet Research, October 5, 2020. http://dx.doi.org/10.5210/spir.v2020i0.11209.

Abstract:
In this paper, I use Spotify as a case study to investigate user experiences of music exploration and discovery using streaming services and recommendation algorithms. Following the framing of music discovery as an affective response which allows for the categorization and definition of music, I draw upon qualitative data from Spotify users regarding their ephemeral experiences when exploring music using streaming services. I identify user practices of music archiving and collecting as a strategy to mitigate these transient encounters by slowing down the pace of their music consumption and creating enduring connections between music and their individual histories. In this way, personal music collections were found to support instances of music discovery as they created a listening context and user mindset which was sympathetic to the affective definition and categorization of new music content. This investigation of collecting practice also revealed longitudinal perspectives on music discoveries which emerge through sustained listening over time, drawing attention to listeners’ social surroundings, friends and online communities as the impetus for choosing to give particular music content time to grow towards being discovered. The paper concludes with implications of the use of algorithmic recommendation systems in the delivery and consumption of music and other cultural forms.
33

Nansen, Bjorn. "Accidental, Assisted, Automated: An Emerging Repertoire of Infant Mobile Media Techniques." M/C Journal 18, no. 5 (2015). http://dx.doi.org/10.5204/mcj.1026.

Abstract:
Introduction
It is now commonplace for babies to begin their lives inhabiting media environments characterised by the presence, distribution, and mobility of digital devices and screens. Such arrangements can be traced, in part, to the birth of a new regime of mobile and touchscreen media beginning with the release of the iPhone in 2007 and the iPad in 2010, which stimulated a surge in household media consumption, underpinned by broadband and wireless Internet infrastructures. Research into these conditions of ambient mediation at the beginnings of life, however, is currently dominated by medical and educational literature, largely removed from media studies approaches that seek to understand the everyday contexts of babies using media. Putting aside discourses of promise or peril familiar to researchers of children’s media (Buckingham; Postman), this paper draws on ongoing research in both domestic and social media settings exploring infants’ everyday encounters and entanglements with mobile media and communication technologies. The paper identifies the ways infants’ mobile communication is assembled and distributed through touchscreen interfaces, proxy parent users, and commercial software sorting. It argues that within these interfacial, intermediary, and interactive contexts, we can conceptualise infants’ communicative agency through an emerging repertoire of techniques: accidental, assisted and automated. This assemblage of infant communication recognises that children no longer live with but in media (Deuze), which underscores the impossibility of a path of media resistance found in medical discourses of ‘exposure’ and restriction, and instead points to the need for critical and ethical responses to these immanent conditions of infant media life.
Background and Approach
Infants, understandably, have largely been excluded from analyses of mobile mediality given their historically limited engagement with or capacity to use mobile media. 
Yet, this situation is undergoing change as mobile devices become increasingly prominent in children’s homes (OfCom; Rideout), and as touchscreen interfaces lower thresholds of usability (Buckleitner; Hourcade et al.). The dominant frameworks within research addressing infants and media continue to resonate with long-running and widely circulated debates in the study of children and mass media (Wartella and Robb), responding in contradictory ways to what is seen as an ever-increasing ‘technologization of childhood’ (McPake, Plowman and Stephen). Education research centres on digital literacy, emphasising the potential of mobile computing for these future digital learners, labourers, and citizens (McPake, Plowman and Stephen). Alternatively, health research largely positions mobile media within the rubric of ‘screen time’ inherited from older broadcast models, with paediatric groups continuing to caution parents about the dangers of infants’ ‘exposure’ to electronic screens (Strasburger and Hogan), without differentiating between screen types or activities. In turn, a range of digital media channels seek to propel or profit from infant media culture, with a number of review sites, YouTube channels and tech blogs promoting or surveying the latest gadgets and apps for babies. Within media studies, research is beginning to analyse the practices, conceptions and implications of digital interfaces and content for younger children. Studies are, for example, quantifying the devices, activities, and time spent by young children with mobile devices (Ofcom; Rideout), reviewing the design and marketing of children’s mobile application software products (e.g. Shuler), analysing digital content shared about babies on social media platforms (Kumar and Schoenebeck; Morris), and exploring emerging interactive spaces and technologies shaping young children’s ‘postdigital’ play (Giddings; Jayemanne, Nansen and Apperley).
This paper extends this growing area of research by focusing specifically on infants’ early encounters, contexts, and configurations of mobile mediality, offering some preliminary analysis of an emerging repertoire of mobile communication techniques: accidental, assisted, and automated. That is, through infants playing with devices and accidentally activating them; through others such as parents assisting use; and through software features in applications that help to automate interaction. This analysis draws from an ongoing research project exploring young children’s mobile and interactive media use in domestic settings, which is employing ethnographic techniques including household technology tours and interviews, as well as participant observation and demonstrations of infant media interaction. To date 19 families, with 31 children aged between 0 and 5, located in Melbourne, Australia have participated. These participating families are largely homogeneous and privileged; though are a sample of relatively early and heavy adopters that reveal emerging qualities about young children’s changing media environments and encounters. This approach builds on established traditions of media and ethnographic research on technology consumption and use within domestic spaces (e.g. Mackay and Ivey; Silverstone and Hirsch), but turns to the digital media encountered by infants, the geographies and routines of these encounters, and how families mediate these encounters within the contexts of home life. This paper offers some preliminary findings from this research, drawing mostly from discussions with parents about their babies’ use of digital, mobile, and touchscreen media. In this larger project, the domestic and family research is accompanied by the collection of online data focused on the cultural context of, and content shared about, infants’ mobile media use. 
In this paper I report on social media analysis of publicly shared images tagged with #babyselfie queried from Instagram’s API. I viewed all publicly shared images on Instagram tagged with #babyselfie, and collected the associated captions, comments, hashtags, and metadata, over a period of 48 hours in October 2014, resulting in a dataset of 324 posts. Clearly, using this data for research purposes raises ethical issues about privacy and consent given the posts are being used in an unintended context from which they were originally shared; something that is further complicated by the research focus on young children. These issues, heightened by the ease of extracting online data with digital methods research (Rogers), need to be both minimised and balanced against the value of the research aims and outcomes (Highfield and Leaver). To minimise risks, captions and comments cited in this paper have been de-identified; whilst the value of this data lies in complementing and contextualising the more ethnographically informed research, despite perceptions of incompatibility, through analysis of the wider cultural and mediated networks in which babies’ digital lives are now shared and represented. This field of cultural production also includes analysis of examples of children’s software products from mobile app stores that support baby image capture and sharing, and in particular in this paper discussion of the My Baby Selfie app from the iTunes App Store and the Baby Selfie app from the Google Play store. The rationale for drawing on these multiple sources of data within the larger project is to locate young children’s digital entanglements within the diverse places, platforms and politics in which they unfold. This research scope is limited by the constraints of this short paper; however, different sources of data are drawn upon here in order to identify, compare, and contextualise the emerging themes of accidental, assisted, and automated.
Accidental Media Use The domestication and aggregation of mobile media in the home, principally laptops, mobile phones and tablet computers, has established polymediated environments in which infants are increasingly surrounded by mobile media; in which they often observe their parents using mobile devices; and in which the flashing of screens unsurprisingly draws their attention. Living within these ambient media environments, then, infants often observe, find and reach for mobile devices: on the iPad or whatever, then what's actually happening in front of them, then naturally they'll gravitate towards it. These media encounters are animated by touchscreen interfaces that are responsive to the gestural actions of infants. Conversely, touchscreen interfaces drive attempts to swipe legacy media screens. Underscoring the nomenclature of ‘natural user interfaces’ within the design and manufacturer communities, screens lighting up through touch prompts interest, interaction, and even habituation through gestural interaction, especially swiping: It's funny because when she was younger she would go up the T.V. and she would try swiping to turn the channel. They can grab it and start playing with it. It just shows that it's so much part of their world … to swipe something. Despite demonstrable capacities of infants to interact with mobile screens, discussions with parents revealed that accidental forms of media engagement were a more regular consequence of these ambient contexts, interfacial affordances and early encounters with mobile media. It was not uncommon for infants to accidentally swipe and activate applications, to temporarily lock the screen, or even to dial contacts: He didn't know the password, and he just kept locking it … find it disabled for 15 minutes. If I've got that on YouTube, they can quite quickly get on to some you know [video] … by pressing … and they don't do it on purpose, they're just pushing random buttons. He does Skype calls!
I think he recognizes their image, the icon. Then just taps it and … Similarly, in the analysis of publicly shared images on Instagram tagged with #babyselfie, there were instances in which it appeared infants had accidentally taken photos with the cameraphone based on the image content, photo framing or descriptions in the caption. Many of these photos showed a baby with an arm in view reaching towards the phone in a classic trope of a selfie image; others were poorly framed shots showing parts of baby faces too close to the camera lens suggesting they accidentally took the photograph; whilst most definitive were the many instances in which the caption of the image posted by parents directly attributed the photographic production to an infant: “Isabella's first #babyselfie”; “She actually pushed the button herself!”; “My little man loves taking selfies lol”. Whilst, then, the research identified many instances in which infants accidentally engaged in mobile media use, sometimes managing to communicate with an unsuspecting interlocutor, it is important to acknowledge such encounters could not have emerged without the enabling infrastructure of ambient media contexts and touchscreen interfaces, nor observed without studying this infrastructure utilising materially-oriented ethnographic perspectives (Star). Significant, too, was the intermediary role played by parents. With parents acting as intermediaries in household environments or as proxy users in posting content on their behalf, multiple forms of assisted infant communication were identified.
These sometimes revolved around keeping their child engaged whilst they were travelling as a family – part of what has been described as the pass-back effect – but were more frequently discussed in terms of sharing and showing digital content, especially family photographs, and in facilitating infant mediated communication with relatives abroad: they love scrolling through my photos on my iPhone … We quite often just have them [on Skype] … have the computers in there while we're having dinner … the laptop will be there, opened up at one end of the table with the family here and there will be my sister having breakfast with her family in Ireland … These forms of parental mediated communication did not, however, simply situate or construct infants as passive recipients of their parents’ desires to make media content available or their efforts to establish communication with extended family members. Instead, the research revealed that infants were often active participants in these processes, pushing for access to devices, digital content, and mediated communication. These distributed relations of agency were expressed through infants’ verbal requests and gestural urging; through the ways parents initiated use by, for example, unlocking a device, preparing software, or loading an application, but then handed them over to infants to play, explore or communicate; and through wider networks of relations in which others, including siblings, acted as proxies or had a say in the kinds of media infants used: she can do it, once I've unlocked … even, even with iView, once I'm on iView she can pick her own show and then go to the channel she wants to go to. We had my son’s birthday and there were some photos, some footage of us singing happy birthday and the little one just wants to watch it over and over again. She thinks it's fantastic watching herself. He [sibling] becomes like a proxy user … with the second one … they don't even need the agency because of their sibling.
Similarly, the assisted communication emerging from the analysis of #babyselfie images on Instagram revealed that parents were not simply determining infant media use, but often acting as proxies on their behalf: “#Selfie obsessed baby. Seriously though. He won't stop. Insists on pressing the button and everything. He sees my phone and points and says ‘Pic? Pic?’ I've created a monster lol.” In sharing this digital content on social networks, parents were acting as intermediaries in the communication of their children’s digital images. Clearly they were determining the platforms and networks where these images were published online, yet the production of these images was more uncertain, with accidental self-portraits taken by infants suggesting they played a key role in the circuits of digital photography distribution (van Dijck). Automated Media Use The production, archiving, circulation and reception of these images speaks to larger assemblages of media in which software protocols and algorithms are increasingly embedded in and help to configure everyday life (e.g. Chun; Gillespie), including young children’s media lives (Ito). Here, software automates processes of sorting and shaping information, and in doing so both empowers and governs forms of infant media conduct. The final theme emerging from the research, then, is the identification of automated forms of infant mobile media use enabled through software applications and algorithmic operations. Automated techniques of interaction emerged as part of the repertoire of infant mobile mediality and communication through observations and discussions during the family research, and through surveying commercial software applications. Within family discussions, parents spoke about the ways digital databases and applications facilitated infant exploration and navigation.
These included photo galleries stored on mobile devices, as well as children’s Internet television services such as the Australian Broadcasting Corporation’s catch-up online TV service, iView, which are visually organised and easily scrollable. In addition, algorithmic functions for sorting, recommending and autoplay on the video-sharing platform YouTube meant that infants were often automatically delivered an ongoing stream of content: “They just keep watching it [YouTube]. So it leads on from the other thing. Which is pretty amazing, that's pretty interactive.” “Yeah, but the kids like, like if they've watched a YouTube clip now, they'll know to look down the next column to see what they want to play next … you get suggestions there so.” Forms of automated communication specifically addressing infants were also located in examples of children’s software products from mobile app stores: the My Baby Selfie app from the iTunes App Store and the Baby Selfie app from the Google Play store. These applications are designed to support baby image capture and sharing, promising to “allow your baby to take a photo of him himself [sic]” (Giudicelli), based on automated software features that use sounds and images to capture a baby’s attention and touch sensors to activate image capture and storage. In one sense, these applications may appear to empower infants to participate in the production of digital content, namely selfies, yet they also clearly distribute this agency with and through mobile media and digital software. Moreover, they imply forms of conduct, expectations and imperatives around the possibilities of infant presence in a participatory digital culture. Immanent Ethic and Critique Digital participation typically assumes a degree of individual agency in deciding what to share, post, or communicate that is not typically available to infants.
The emerging communicative practices of infants detailed above suggest that infants are increasingly connecting; however, this communicative agency is distributed amongst a network of ambient devices, user-friendly interfaces, proxy users, and software sorting. Such distributions reflect conditions Deuze has noted: that we do not live with but in media. He argues this ubiquity, habituation, and embodiment of media and communication technologies pervades and constitutes our lives, becoming effectively invisible and negating the possibility of an outside from which resistance can be mounted. Whilst resistance remains a solution promoted in medical discourses and paediatric advice proposing no ‘screen time’ for children aged below two (Strasburger and Hogan), Deuze’s thesis suggests this is ontologically futile, and that instead we should strive for a more immanent relation that seeks to modulate choices and actions from within our media life: finding “creative ways to wield the awesome communication power of media both ethically and aesthetically” ("Unseen" 367). An immanent ethics and a critical aesthetics of infant mediated life can be located in examples of cultural production and everyday parental practice addressing the arrangements of infant mobile media and communication discussed above. For example, an article in the Guardian, ‘Toddlers pose a serious risk to smartphones and tablets’, parodies moral panics around children’s exposure to media by noting that media devices are at greater risk of physical damage from children handling them; whilst a design project from the Eindhoven Academy – called New Born Fame – built from soft toys shaped like social media logos, with motion and touch sensors that activate image capture (much like babyselfie apps) and automated social media sharing, critically interrogates the ways infants are increasingly bound up with the networked and algorithmic regimes of our computational culture.
Finally, parents in this research revealed that they carefully considered the ethics of media in their children’s lives by organising everyday media practices that balanced dwelling with new, old, and non-media forms, and by curating their digitally mediated interactions and archives with an awareness they were custodians of their children’s digital memories (Garde-Hansen et al.). I suggest these examples work from an immanent ethical and critical position in order to make visible and operate from within the conditions of infant media life. Rather than seeking to deny or avoid the diversity of encounters infants have with and through mobile media in their everyday lives, this analysis has explored the ways infants are increasingly configured as users of mobile media and communication technologies, identifying an emerging repertoire of infant mobile communication techniques. The emerging practices of infant mobile communication outlined here are intertwined with contemporary household media environments, and assembled through accidental, assisted, and automated relations of living with mobile media. Moreover, such entanglements of use are both represented and discursively reconfigured through multiple channels, contexts, and networks of public mediation. Together, these diverse contexts and forms of conduct have implications for both studying and understanding the ways babies are emerging as active participants and interpellated subjects within a continually expanding digital culture. Acknowledgments This research was supported with funding from the Australian Research Council (ARC) Discovery Early Career Researcher Award (DE130100735). I would like to express my appreciation to the children and families involved in this study for their generous contribution of time and experiences. References Buckingham, David. After the Death of Childhood: Growing Up in the Age of Electronic Media. Oxford: Polity Press, 2000. Buckleitner, Warren.
“A Taxonomy of Multi-Touch Interaction Styles, by Stage.” Children's Technology Review 18.11 (2011): 10-11. Chun, Wendy. Programmed Visions: Software and Memory. Cambridge: MIT Press, 2011. Deuze, Mark. “Media Life.” Media, Culture and Society 33.1 (2011): 137-148. Deuze, Mark. “The Unseen Disappearance of Invisible Media: A Response to Sebastian Kubitschko and Daniel Knapp.” Media, Culture and Society 34.3 (2012): 365-368. Garde-Hansen, Joanne, Andrew Hoskins and Anna Reading. Save as … Digital Memories. Hampshire: Palgrave Macmillan, 2009. Giddings, Seth. Gameworlds: Virtual Media and Children’s Everyday Play. New York: Bloomsbury, 2014. Gillespie, Tarleton. “The Relevance of Algorithms.” Media Technologies: Essays on Communication, Materiality, and Society. Eds. Tarelton Gillespie, Pablo Boczkowski and Kirsten Foot. Cambridge: MIT Press, 2014. Giudicelli, Patrick. "My Baby Selfie." iTunes App Store. Apple Inc., 2015. Highfield, Tim, and Tama Leaver. “A Methodology for Mapping Instagram Hashtags.” First Monday 20.1 (2015). Hourcade, Juan Pablo, Sarah Mascher, David Wu, and Luiza Pantoja. “Look, My Baby Is Using an iPad! An Analysis of Youtube Videos of Infants and Toddlers Using Tablets.” Proceedings of CHI 15. New York: ACM Press, 2015. 1915–1924. Ito, Mizuko. Engineering Play: A Cultural History of Children’s Software. Cambridge: MIT Press, 2009. Jayemanne, Darshana, Bjorn Nansen and Thomas Apperley. “Post-Digital Play and the Aesthetics of Recruitment.” Proceedings of Digital Games Research Association (DiGRA) 2015. Lüneburg, 14-17 May 2015. Kumar, Priya, and Sarita Schoenebeck. “The Modern Day Baby Book: Enacting Good Mothering and Stewarding Privacy on Facebook.” Proceedings of CSCW 2015. Vancouver, 14-18 March 2015. Mackay, Hugh, and Darren Ivey. Modern Media in the Home: An Ethnographic Study. Rome: John Libbey, 2004. Morris, Meredith. “Social Networking Site Use by Mothers of Young Children.” Proceedings of CSCW 2014. 1272-1282. OfCom. 
Children and Parents: Media Use and Attitudes Report. London: OfCom, 2013. McPake, Joanna, Lydia Plowman and Christine Stephen. “The Technologisation of Childhood? Young Children and Technology in the Home.” Children and Society 24.1 (2010): 63–74. Postman, Neil. Technopoly: The Surrender of Culture to Technology. New York: Vintage, 1993. Rideout, Victoria. Zero to Eight: Children’s Media Use in America 2013. Common Sense Media, 2013. Rogers, Richard. Digital Methods. Boston: MIT Press, 2013. Silverstone, Roger, and Eric Hirsch (eds). Consuming Technologies: Media and Information in Domestic Spaces. London: Routledge, 1992. Shuler, Carly. iLearn: A Content Analysis of the iTunes App Store’s Education Section. New York: The Joan Ganz Cooney Center at Sesame Workshop, 2009. Star, Susan Leigh. “The Ethnography of Infrastructure.” American Behavioral Scientist 43.3 (1999): 377–391. Strasburger, Victor, and Marjorie Hogan. “Policy Statement from the American Academy of Pediatrics: Children, Adolescents, and the Media.” Pediatrics 132 (2013): 958-961. Van Dijck, José. “Digital Photography: Communication, Identity, Memory.” Visual Communication 7.1 (2008): 57-76. Wartella, Ellen, and Michael Robb. “Historical and Recurring Concerns about Children’s Use of the Mass Media.” The Handbook of Children, Media, and Development. Eds. Sandra Calvert and Barbara Wilson. Malden: Blackwell, 2008.
APA, Harvard, Vancouver, ISO, and other styles
34

Demde, Dr M. K. "FORECASTING OF AIRLINE PASSENGERS BASED ON MACHINE LEARNING." INTERANTIONAL JOURNAL OF SCIENTIFIC RESEARCH IN ENGINEERING AND MANAGEMENT 07, no. 04 (2023). http://dx.doi.org/10.55041/ijsrem18879.

Full text
Abstract:
The management of airlines depends on forecasting air passenger flow, but standard forecasting techniques cannot guarantee accuracy: when they encounter large-scale, multidimensional, nonlinear, and non-normally distributed time series data, their ability to generalize is limited. In this paper, SVM regression is implemented to help forecast air passenger flow. By carefully choosing the parameters and kernel function, we find that the SVM regression algorithm exhibits the least error when compared with the other two forecasting techniques. The near-perfect competition scenario has raised concerns about demand splits among service providers in every sector. This is especially important in the airline business, where high service standards are the norm. Demand is driven by the player that achieves the best mapping between the airline's offerings and the set of consumer preferences. The airline business has grown exponentially as a result of the economic reforms of 1991, which were promptly followed by the privatization of Indian skies, creating nearly ideal competition. More specifically, cross-border activities have started in the international sectors, where previously only domestic carriers operated. Keywords – Airline Passengers, Support Vector Machine, Forecasting, Machine Learning
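The abstract does not give implementation details, but SVM regression on a time series presupposes a supervised framing: the passenger series is turned into lagged input windows and next-step targets before any kernel regressor is fit. A minimal sketch of that framing (function name and example data are illustrative, not from the paper):

```python
def make_lag_features(series, n_lags):
    """Build (X, y) pairs from a univariate series: each row of X holds
    the previous n_lags observations, and y holds the value that follows.
    This is the standard supervised framing used before fitting an SVM
    regressor (or any other regressor) to passenger counts."""
    X, y = [], []
    for t in range(n_lags, len(series)):
        X.append(list(series[t - n_lags:t]))
        y.append(series[t])
    return X, y


# Illustrative monthly passenger counts (in thousands), 3-month window.
passengers = [112, 118, 132, 129, 121, 135]
X, y = make_lag_features(passengers, n_lags=3)
# X[0] is [112, 118, 132] with target y[0] == 129
```

Each (X, y) pair would then be fed to a kernel regressor such as scikit-learn's `SVR`; the accuracy the abstract reports depends on the kernel and parameter choices described in the paper, which are not reproduced here.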
APA, Harvard, Vancouver, ISO, and other styles
35

L Shinde, Parinitha, and Sathyaprakash M. R. "DIGITAL IMMIGRANTS AND DIGITAL DECEPTION: CONSUMING AND COMBATING FAKE NEWS ONLINE." ShodhKosh: Journal of Visual and Performing Arts 4, no. 1 (2023). http://dx.doi.org/10.29121/shodhkosh.v4.i1.2023.285.

Full text
Abstract:
The present-day digital media ecosystem is defined by the proliferation of fake news. Although the term has become popular only recently, its incidence has grown exponentially, and its omnipresence in the global news media is undeniable. Digital immigrants occupy an important online demographic: they represent individuals who were born before internet services became ubiquitous, and they are projected to be the primary internet users in India by 2025 (Statista, 2022). Due to the existence of filter bubbles and algorithmic judgement on social media platforms, users get limited perspectives that reiterate their existing ideologies (Baptista & Gradim, 2020). This study seeks to investigate how digital immigrants understand, encounter, and respond to fake news. Through in-depth interviews with Indian digital immigrants, it was found that selective exposure was a predominant reason for fake news consumption. Confirmation bias explained why users sought out and remembered information which reinforced their ideas and attitudes, and muted and blocked sources which contradicted them. It is suggested that digital immigrants make concerted efforts to combat fake news, such as exploring diverse points of view, undertaking basic training courses in fact-checking and source corroboration, and exhibiting caution when encountering content.
APA, Harvard, Vancouver, ISO, and other styles
36

Yoshiaki, Shikata, Katagiri Wataru, and Takahashi Yoshitaka. "Performance Evaluation of Prioritized Limited Processor-Sharing System." June 20, 2012. https://doi.org/10.5281/zenodo.1055214.

Full text
Abstract:
We propose a novel prioritized limited processor-sharing (PS) rule and a simulation algorithm for the performance evaluation of this rule. The performance measures of practical interest are evaluated using this algorithm. Suppose that there are two classes and that an arriving (class-1 or class-2) request encounters n1 class-1 and n2 class-2 requests (including the arriving one) in a single-server system. According to the proposed rule, class-1 requests individually and simultaneously receive m / (m * n1 + n2) of the service-facility capacity, whereas class-2 requests receive 1 / (m * n1 + n2) of it, if m * n1 + n2 ≤ C. Otherwise (m * n1 + n2 > C), the arriving request will be queued in the corresponding class waiting room or rejected. Here, m (≥ 1) denotes the priority ratio and C (< ∞) the service-facility capacity. In this rule, when a request arrives at [or departs from] the system, the extension [shortening] of the remaining sojourn time of each request receiving service can be calculated using the number of requests of each class and the priority ratio. Employing a simulation program to execute these events and calculations enables us to analyze the performance of the proposed prioritized limited PS rule, which is realistic in a time-sharing system (TSS) with a sufficiently small time slot. Moreover, this simulation algorithm is extended to evaluate the prioritized limited PS system with N (≥ 3) priority classes.
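The capacity-sharing rule stated in the abstract can be written out directly. The sketch below (function and variable names are my own, not from the paper) computes the per-request shares and the admission decision for the two-class case:

```python
def ps_shares(n1, n2, m, C):
    """Per-request service shares under the prioritized limited PS rule.

    Each class-1 request receives m / (m*n1 + n2) of the facility
    capacity and each class-2 request receives 1 / (m*n1 + n2),
    provided the weighted occupancy m*n1 + n2 does not exceed C.
    Otherwise the arriving request must wait in its class queue or be
    rejected, signalled here by returning None.
    """
    load = m * n1 + n2
    if load > C:
        return None  # arrival is queued or rejected
    return m / load, 1.0 / load


# Example: priority ratio m = 2, capacity C = 10,
# with n1 = 2 class-1 and n2 = 3 class-2 requests (occupancy 7 <= 10).
shares = ps_shares(2, 3, m=2, C=10)  # (2/7, 1/7)
```

Note that the shares exhaust the full capacity (2 * 2/7 + 3 * 1/7 = 1), and with m = 1 the rule reduces to ordinary egalitarian processor sharing.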
APA, Harvard, Vancouver, ISO, and other styles
37

Rosner, Daniela. "Bias Cuts and Data Dumps." M/C Journal 26, no. 6 (2023). http://dx.doi.org/10.5204/mcj.2938.

Full text
Abstract:
Introduction “Patterns are everywhere”, design researcher Anuradha Reddy told her virtual audience at the 2023 speaker series hosted by Brilliant Labs, a Canadian non-profit focussed on experiential digital learning and coding (Brilliant Labs / Labos Créatifs). Like other technology fora, this public-facing series offered designers an opportunity to highlight the accessibility of code. But unlike many such fora, Reddy’s code was worn on the body. Sitting at the now-standard webinar lectern, Reddy shared a flurry of images and contexts as she introduced a garment she called b00b, a bra that she created in 2021 to probe the encoding of more than aesthetic possibility. Her presentation included knotted motifs of Andean Quipus; symbolic arcs of Chinese Pan Chang knots; geometric transformations of African American cornrow hairstyles (Eglash and Bennett, Brilliant Labs / Labos Créatifs). She followed the patterned imagery with questions of uncertainty that are often central for design researchers like her. Facing what might be a possible swipe, tap, or otherwise engagement, a technologist cannot fully determine what a user does. But they can “nudge”, a term popularised by behavioral economists Richard H. Thaler and Cass R. Sunstein in 2008 and later propagated within technoscientific discourses on risk (see Duffy and Thorson; Rossi et al.; Thaler and Sunstein). Adjacent bodies of scholarship frame the related concept of trust as a form of compliance (Adam et al.; Gass and Seiter). The more trustworthy an interface, the more likely a user is to comply. Rooted in social-psychological precepts, this line of scholarship frames trust less as a condition than a perception. When a user trusts an indicator light, for example, an app is more likely to see increased acceptance and engagement. Reddy approaches trust from and with b00b, an emphatically intimate (soft, pliable, textile) artifact. 
“How do we use these … perspectives to deal with uncertainty and things we do not know yet in the future?”, Reddy asks her Brilliant Labs audience (Brilliant Labs / Labos Créatifs). To make this argument, I examine Reddy’s b00b in conversation with a legacy feminist textile performance that brings questions of embodiment (and embodied trust) to an ostensibly disembodied technocratic scene. b00b is a decorative bra that emulates two-factor authentication, or what Reddy calls “b00b factor authentication.” The bra uses its two cups to verify a user’s access to a Website describing the project. With this interaction, the bra is self-referential—asking users to unlock a link that brings them back to someone’s chest. In practice, b00b asks users to scan a bra cup that relies on scanning the companion bra cup for a second passcode. Rather than messaging users an initial passcode that triggers a second passcode sent by text message, the engagement requires bodily proximity. The bra cups take the place of electronic media (such as the text message) so that a close encounter with the bra enlivens digital trust. Under these circumstances, a trusted user becomes a risk-taker—gaining access while transgressing personal boundaries. In the sections that follow, I thread conversations on digital and algorithmic trustworthiness with critiques of trust and compliance that pervade Reddy’s 2021 handmade experiment. To date, technology analysts tend to treat trust as a perception: feelings of confidence in a person or thing (Gilkson and Woolley). As Natasha Schüll notes, a user might trust a slot machine but might miss its implications for further (and potentially excessive) gambling.
Additionally, media scholars such as Evgeny Morozov have since mapped this addiction principle within social media development, pointing to a familiar science of incentive structures, gamification dashboards, and behaviour-change techniques, each designed to raise user engagement and keep people in apps longer. Thinking with Reddy’s work, I argue that trust can reveal an embodied desire, something momentarily felt and differentially shared (see also Gregg; Sharma; Irani). Reddy frames the weft of woven material as code, the purl and knit stitches of knitting as binary, and the knots of rope as algorithms. She urges her audience to see fabric as a means of challenging common assumptions about technology. With needles and thread, she proffers algorithmic trust as a relational ethics. In Technology We Trust From a design perspective, trust grows from the strategic balancing of risk and uncertainty (Cheshire). Users who find a digital feature reliable or trustworthy are more likely to grow their engagement and convince others to join in (Hancock et al.). In a recent analysis of the overlapping dynamics of algorithmic trust and bias, communication and information scholars Jeff Hancock, Mor Namaan, and Karen Levy (95) argue that machine learning tools such as the Chrome extension Just Not Sorry often replicate bias within training data. The extension disproportionately alerts femme users when they use qualifying words like “sorry” and “I think”. In other contexts, Hancock and colleagues suggest, an AI-aided tool may help mitigate interpersonal biases since if it “imparts signals of trustworthiness between peer-based social exchange partners, these countervailing cues may neutralise stereotypes that would otherwise impede the transaction” (ibid). Here, the signal of trustworthiness holds the promise of accountability.
But because the signals focus on cognition (manipulating an individual’s perceptions), what they refer to and how they may alleviate harms caused by entrenched cultural bias remain less clear. Grounded in social-psychological tenets, technology analysts codify trust as the relationship between two primary concepts: risk and uncertainty. As information scholar Coye Cheshire (50) explains, “trust is not simply the absence of risk and uncertainty. More accurately, trust is a complex human response to situations that are rife with risk and uncertainty”. Through a range of controlled methods including observations, self-reports, survey questions, and the experimental conditions of a lab study, researchers measure the trustworthiness of user interface features as assessments of risk and uncertainty that explain differing motivations for use and disengagement. For example, design researchers Nick Merrill and Coye Cheshire’s study of heart rate monitors finds that listening to an acquaintance's normal heart rate can lead to negative trust-related assessments in challenging contexts such as waiting to meet the acquaintance about a legal dispute. Parallel work by Hancock and colleagues uses self-reports and large-scale experiments on platforms like Facebook to map the significance of AI-enabled curation features like news feeds (Hancock et al.). As a psychological state, trustworthiness tends to indicate a behavioural metric that can be numerically encoded and individually addressed. By measuring trust-infused dimensions of user activity, analysts seek to systematically identify new ways of scaffolding trust-building behaviour by manipulating perception (Hancock, Naaman, and Levy), ultimately convincing a user to comply. A core goal is to maximise participation. The US government applied these principles to mass data collection and dissemination efforts such as the national census and the COVID response (Halpern). 
But a secondary effect grows from the political-economic dimensions of user experience. Through compliance, users become easier to place, measure, count, and amend—a process Michelle Murphy names the economisation of life. When people’s certainty in interpersonal relationships grows, “the source of uncertainty then shifts to the assurance system, thereby making trustworthiness and reliability of the institution or organisation the salient relationship” (Cheshire 54). For instance, we may trust people in our text messages because we meet them face to face and put their numbers in our phones. But once we trust them, this assurance moves to our social media service or cellular phone provider. The service that manages our contacts also preserves the integrity of our contacts, such as when a messaging platform like WhatsApp automatically updates a cell phone number without our knowledge or explicit consent. Conversely, feelings of assurance in a digital interface feature may dwindle as assurance in the platform itself declines. Until November 2022, users may have trusted someone with a blue checkmark on Twitter more than someone without one, even if they did not trust them at an interpersonal level. But with a chaotic acquisition that, according to a report in The Verge (Weatherbed), led to shifting check mark meanings and colours, this assurance grew more complicated. Murphy (24) might call these quantitative practices enriched with affect the “phantasmagrams” of rationalised assurance. Like a check mark that may or may not index a particular measure of confidence, excitement or worry, these shifting dynamics reveal the “trust and belief that animates numbers” (52). A less considered outcome of this framing is how individuated expressions of distrust (situations that foster psychological and physiological concern, skepticism, or fear for a single person) overshadow its complement: non-unconditional expressions of care. 
How might a user interface foster networks of connection for self and community? As Anna Lauren Hoffmann suggests, efforts to thwart algorithmic discrimination undergird this conundrum—“mirroring some of antidiscrimination discourse’s most problematic tendencies” (901). The particular value placed on trust often precedes quick-fix techniques such as multi-factor authentication and cryptography that reduce trust to a neutral transaction (see Ashoori, et al.). In this discussion, design researchers have only begun to conceive trust (and distrust) as a deeply embodied process.

Looks, Cuts, and Scans

Reddy’s b00b invites audiences to explore embodied positioning. Sitting on a static mannequin, the garment invites audience members to engage the handiwork laid atop its breasts. In video documentation (Reddy), Reddy holds up a phone to a mannequin wearing the bra. She touches the phone to the mannequin’s right nipple, and the phone screen opens a Web browser with a password-protected field. As Reddy moves the phone to the mannequin’s left nipple, the phone shares the password ‘banjara,’ a reference to the community from which the embroidery techniques derive. The password opens a Website full of descriptive text and imagery detailing this material reference. In this interaction, b00b joins a movement of artistic work that uses textile artifacts to frame boundaries of self and other as porous and shifting. Consider Nam June Paik’s 1969 TV Bra for Living Sculpture. Across the 1970s, Charlotte Moorman performed the work by playing cello while wearing a transparent brassiere with two miniature television screens mounted on her chest (Paik; Rothfuss). As Moorman played her cello, wires connecting the cello to the two television sets sent sonic signals to the video that manipulated its imagery. 
Moorman’s instrumentation controlled the visuals displayed on the screens, inviting audience members to come closer to the electronic garment and her body—or, as Joan Rothfuss explains, “never mind that the bra actually encouraged prurience by compelling spectators to stare at [Moorman’s] breasts” (243). TV Bra invited its audience to breach conventional limits of closeness and contact much like users of b00b. Yoko Ono’s celebrated Cut Piece has sparked a similar prurience. During the work Ono dressed in some of her finest clothes and invited audience members to walk on stage and shear away pieces of fabric. Notably documented in the Albert and David Maysles film of Ono’s 1965 Carnegie Hall performance, the audience left Ono’s body nearly fully exposed at the performance’s end, save for her arms holding remaining pieces of fabric. With scissors in hand, the performance threatened imminent danger—inspiring snickers, pause, and discomforting ease among audience members eager to participate. Cut Piece encouraged the audience to disregard consent and exposed a certain breach of trust, a practice mirrored in b00b. In this process of cutting cloth, often on the bias (or on a slanted angle; see Benabdallah, et al.; Rosner), feminist performance works have long prompted audiences to trouble the intimate relationship between themselves and the performer. As Vivian Huang has deftly argued, Ono’s shredded fabrics are more than neutral inconveniences; they also hint at whatever racialised and gendered feelings of trust might or might not exist between Ono and her audience. “If Orientalist conflations of the East with femininity have in turn sexualized Asian women as simultaneously hypersexual and submissive”, Huang contends, “then how can we as viewers and readers performatively read Asian femininity in a different, and not anti-relational, orientation to hospitality?” (187). b00b asks a similar question with systems of verification. 
Examining this possibility, Peggy Kyoungwon Lee recently puts Cut Piece in conversation with the contemporary media art of Lisa Park, and notes that “Ono’s signature composure both enacts and challenges archetypes of the feminized Asian body: cognitive efficiency, durability, calculative emotionality, docility, passivity” (54). For Lee, Cut Piece continues to open pathways for interpretation by diverting audience members from the compliance arguments above. Where algorithmic trust further complicates the making of trust with an added layer of uncertainty (is this made by an algorithm or is it not?), Cut Piece and TV Bra see in and through uncertainty to recentre a relational ethics. This concern for relationality endures in Reddy’s b00b. To fashion the near-field communication (NFC) cards, Reddy draws from Banjara embroidery, a heritage craft technique featured in her home city of Hyderabad (Telangana). Like Banjara, b00b incorporates varied accessories (mirrors, tassels, shells) with colourful patterns. She embellishes the bra with lively zig-zagging embroidery, fashioning each nipple with a mirror that expertly doubles as an NFC tag hidden behind the embroidery. Garments like Ono’s, Paik and Moorman’s, and now Reddy’s share an understanding that technology can and should reflect a certain felt complexity. At the Brilliant Labs event, Reddy presents b00b to conference-goers invested in shared hardware design specification standards. Across the 48-minute presentation, b00b interrupts the audience's presumed intentions. As Elizabeth Goodman has argued, hackers and tech enthusiasts interested in schematics, wireframes, and other digital drawings often prioritise formats that anyone can examine, adapt, use, and circulate while overlooking their situated social and political stakes. 
In the theatrical setting of a tech forum, b00b’s fabric draws attention to the body—manoeuvring the (often white Western) gaze around femme Asian subjectivities and questioning proximities between one body and another. Through its embodied relationality, real or imagined, b00b shares a concern for reimagining trust within mechanisms of control. b00b is Reddy’s attempt at generative justice, a concept of inclusive making she calls part of “bringing the Open Hardware community closer to heritage craft communities” (Reddy). In documentation, she discusses the geopolitical conditions of NFC-based authentication that relies on intimate connection as a means of state-led coercion and control. Situating her work in contemporary trust politics, she describes the Aadhaar biometric identification system designed to compel Indian residents to record biometric data through iris scans, fingerprints, and photographs in exchange for a unique identity number (Dixon). She writes that systems like Aadhaar “make minority communities more vulnerable to being identified, classified, and policed by powerful social actors” (Dixon). Wearing b00b challenges efforts to root NFC transactions in similar carceral and colonial logics. With an intimate scan, a user or audience makes room for counter-expressions of dis/trust. Sitting across from Reddy during a recent Zoom conference, I felt the tug of this work. Modelled on a mannequin in the background, the piece reminded me of the homegrown techno-armour worn throughout Friedrichshain, a lively neighbourhood in the former eastern part of Berlin. For the onlooker, the bra incites not only intrigue but also a careful engagement; or what Reddy names the “need to actively participate in conveying trust and intimacy with the bra’s wearer”. I couldn't help but wonder what an attendee at the Open Hardware Summit might make of the work. Would they bristle at the intimacy, or would they—like Ono’s audiences—cut in? 
On the surface, b00b presents a playful counterpoint to the dominant narrative of technology as slick, neutral, and disembodied. By foregrounding the tactile, handmade qualities of electronic media, Reddy’s work suggests we reconsider the boundaries between physical and digital worlds to complicate readings of computational risk. She is taking a highly technical process typically used for practical applications like finance, online identity, or other well-defined authentication problems, and enlivening it. The garment invites her audience to appreciate two-factor encryption as something intimate—both in an abstract sense and in a resolutely embodied sense. By defamiliarising digital trust, Reddy calls attention to its absurdity. How can a term like “trust” (associated with intimacy and mutual concern) also denote the extractive politics of algorithmic control (the verification of a user, the assessment of risk, the escalating manipulation of use)? Look closer at b00b, and the focus on authentication offers something specific for our ideas of algorithmic trust. Reddy turns a computational process into an extension of the body, registering a distinctly affective intrusion within the digital codification of assurance and accountability. Working with interaction design in the tradition of feminist performance, b00b directs our digital gaze back toward the embodied.

Toward a Relational Ethics of Trust

Fabric artifacts like b00b have long challenged digital scholars to consider questions of uncertainty and accountability. From what counts as computational, to whose labour gets recognised as innovative, woven material sparks a particular performance of risk. As Lisa Nakamura (933) shrewdly observes, gendered and racialised “traits” associated with textiles tend to fuel technological production, casting women of colour as the ideal digital workers. 
Looking to transnational flows connected with making, Silvia Lindtner argues that these stereotypes bring strategic meanings to feminised Asian bodies that naturalise their role within digital economies. Whose bodies get associated with fabric (through making, repair, consumption, aesthetics) reflects deep-seated stratifications within the masculine history of computing—with seemingly few possibilities for circumvention. If trust works as a felt condition, digital developments might more fully honour that condition. Bringing textile possibilities to NFC authentication suggests examining how such systems work on and through the body, even without touch. It is in this reciprocal encounter between content and user, audience and performer, textile and algorithm that something like a bra can hint at a profound ethics of connection. Reddy’s work reveals the consensual contact that can meaningfully shape who and how we digitally trust. While this essay has focussed on trust, I want to end with a brief consideration of the way a textile—in this case a conceptual and maybe even ontoepistemic (da Silva) artifact—brings the status of users closer to that of audience members. It begins to weave an analytic thread between the orientations, capacities, and desires of performance and design. Across this connection, b00b’s design works as minoritarian performance, as Jasmine Mahmoud (after José Esteban Muñoz) describes: a practice that “centers performance—as an object of study, a method, and theoretical container—as a means of centering minortized knowledge”. As minoritarian knowledge, the embroidered NFC tag expands Rozsika Parker’s profound insight into the subversive power of needlecraft. As Julia Bryan-Wilson (6) observes, “accounting for textiles—objects that are in close physical contact with us at virtually every minute of the day—demands alternative methodologies, ones that extend from shared bodily knowledge”. 
For digital scholars, b00b opens a similar possibility under racial technocapitalism. It asks us to notice how an indicator light on an AI-trained surveillance camera, for instance, does not map to an engaged or disaffected condition for an over-monitored user. It registers the need for probing relationships that underlie those tools—relationships between workers and employers, between non-users and corporate platforms, between differentially marked bodies. It challenges the reduction of trust dynamics into individualised or universalised motivations. To trust and be trusted with thread opens the possibility of algorithmic re-embodiment.

Acknowledgements

I’m grateful for insightful comments and suggestions from Anuradha Reddy, Amanda Doxtater, Scott Magelssen, Jasmine Jamillah Mahmoud, Adair Rounthwaite, Anne Searcy, James Pierce, and the anonymous reviewers of the current M/C Journal issue.

References

Adam, Martin, Michael Wessel, and Alexander Benlian. "AI-Based Chatbots in Customer Service and Their Effects on User Compliance." Electronic Markets 31.2 (2021): 427-445.
Ashoori, Maryam, and Justin D. Weisz. "In AI We Trust? Factors That Influence Trustworthiness of AI-Infused Decision-Making Processes." arXiv 1912.02675 (2019).
Benabdallah, Gabrielle, et al. "Slanted Speculations: Material Encounters with Algorithmic Bias." Designing Interactive Systems Conference (2022): 85-99.
Brilliant Labs / Labos Créatifs. “AlgoCraft: Remixing Craft, Culture, and Computation with Dr. Anuradha Reddy.” 2023. <https://www.youtube.com/watch?v=UweYVhsPMjc>.
Bryan-Wilson, Julia. Fray: Art and Textile Politics. Chicago: U of Chicago P, 2021.
Cheshire, Coye. "Online Trust, Trustworthiness, or Assurance?" Daedalus 140.4 (2011): 49-58.
Dixon, Pam. “A Failure to ‘Do No Harm’—India’s Aadhaar Biometric ID Program and Its Inability to Protect Privacy in Relation to Measures in Europe and the US.” Health and Technology 7.4 (2017): 539-567.
Duffy, Margaret, and Esther Thorson, eds. 
Persuasion Ethics Today. Routledge, 2015.
Eglash, Ron, and Audrey Bennett. "Teaching with Hidden Capital: Agency in Children's Computational Explorations of Cornrow Hairstyles." Children Youth and Environments 19.1 (2009): 58-73.
Ferreira da Silva, Denise. Unpayable Debt. Sternberg Press / The Antipolitical, 2022.
Gass, Robert H., and John S. Seiter. Persuasion: Social Influence and Compliance Gaining. Routledge, 2022.
Glikson, Ella, and Anita Williams Woolley. “Human Trust in Artificial Intelligence: Review of Empirical Research.” Academy of Management Annals 14.2 (2020): 627-660.
Goodman, Elizabeth Sarah. Delivering Design: Performance and Materiality in Professional Interaction Design. Berkeley: U of California P, 2013.
Gregg, Melissa. Counterproductive: Time Management in the Knowledge Economy. Durham: Duke UP, 2018.
Halpern, Sue. “Can We Track COVID-19 and Protect Privacy at the Same Time?” New Yorker 27 Apr. 2020. <https://www.newyorker.com/tech/annals-of-technology/can-we-track-covid-19-and-protect-privacy-at-the-same-time>.
Hancock, Jeffrey T., Mor Naaman, and Karen Levy. "AI-Mediated Communication: Definition, Research Agenda, and Ethical Considerations." Journal of Computer-Mediated Communication 25.1 (2020): 89-100.
Huang, Vivian L. "Inscrutably, Actually: Hospitality, Parasitism, and the Silent Work of Yoko Ono and Laurel Nakadate." Women & Performance: A Journal of Feminist Theory 28.3 (2018): 187-203.
Irani, Lilly. "‘Design Thinking’: Defending Silicon Valley at the Apex of Global Labor Hierarchies." Catalyst: Feminism, Theory, Technoscience 4.1 (2018): 1-19.
Lee, Peggy Kyoungwon. "The Alpha Orient: Lisa Park and Yoko Ono." TDR 66.2 (2022): 45-59.
Mahmoud, Jasmine. “Minoritarian Performance.” Research Cluster, University of Washington, 2022. <https://simpsoncenter.org/projects/minoritarian-performance>.
Merrill, Nick, and Coye Cheshire. "Habits of the Heart(rate): Social Interpretation of Biosignals in Two Interaction Contexts." 
Proceedings of the 19th International Conference on Supporting Group Work (2016): 31-38.
Morozov, Evgeny. “The Mindfulness Racket.” New Republic 23 Feb. 2014. 1 Sep. 2016 <https://newrepublic.com/article/116618/technologys-mindfulness-racket>.
Muñoz, José Esteban. Cruising Utopia. Tenth anniversary ed. New York: New York UP, 2019.
Murphy, Michelle. The Economization of Life. Duke UP, 2017.
Nakamura, Lisa. "Indigenous Circuits: Navajo Women and the Racialization of Early Electronic Manufacture." American Quarterly 66.4 (2014): 919-941.
Oldenziel, Ruth. Making Technology Masculine: Men, Women and Modern Machines in America, 1870-1945. Amsterdam: Amsterdam UP, 1999.
Paik, Nam June, and S. Moorman. "TV Bra for Living Sculpture." 1969. 6 Mar. 2014 <http://www.eai.org/kinetic/ch1/creative/video/paik_tvbra.html>.
Parker, Rozsika. The Subversive Stitch: Embroidery and the Making of the Feminine. Chicago: U of Chicago P, 1984.
Reddy, Anuradha. “b00b-Factor Authentication.” 2022. 7 Nov. 2023 <https://www.youtube.com/watch?v=41kjOXtUrxw>.
———. “b00b-Factor Authentication in Banjara Embroidery.” 2023. 7 Nov. 2023 <https://anuradhareddy.com/B00B-Factor-Authentication-in-Banjara-Embroidery> (password: 'banjara').
Rossi, John, and Michael Yudell. "The Use of Persuasion in Public Health Communication: An Ethical Critique." Public Health Ethics 5.2 (2012): 192-205.
Rothfuss, Joan. Topless Cellist: The Improbable Life of Charlotte Moorman. Cambridge: MIT P, 2014.
Schüll, Natasha Dow. Addiction by Design. Princeton: Princeton UP, 2012.
Sharma, Sarah. In the Meantime: Temporality and Cultural Politics. Durham: Duke UP, 2014.
Weatherbed, Jess. “Elon Musk Says Twitter Will Begin Manually Authenticating Blue, Grey, and Gold Accounts as Soon as Next Week.” The Verge 25 Nov. 2022. <https://www.theverge.com/2022/11/25/23477550/twitter-manual-verification-blue-checkmark-gold-grey>.
APA, Harvard, Vancouver, ISO, and other styles
38

Grebenuk, Olexandr, and Volodymyr Pavlenko. "ARCHITECTURE OF PORTS AND ADAPTERS IN ITERATIVE DEVELOPMENT WITH LIMITATIONS ON TIME." Visnyk Universytetu “Ukraina”, 2019. http://dx.doi.org/10.36994/2707-4110-2019-1-22-24.

Full text
Abstract:
The application of the ports and adapters architecture (also known as onion, layered, or hexagonal architecture) in iterative software development is considered, following requirements that arrive in chronological order in a practical example. Each iteration is supported by an architecture schema, the problems encountered, and their solutions. The expediency of using this architecture in the iterative development of software under time constraints is shown. A system was developed for medical institutions that collects real-time data on the environment's carbon dioxide concentration and air temperature from a distributed network of sensors with predetermined geolocations. Sensor information (ID, commissioning date, and end date) is stored in a Google Sheets spreadsheet, and data from the sensors is collected on the server by a REST service. The process of applying the pattern in a specific project with significant time constraints is investigated, applying the rules and principles laid down in the ports and adapters architecture and using basic metrics to evaluate the complexity of adding new functionality, testing, concurrent development, and the speed and ease of development; conclusions are drawn about the conditions under which it is appropriate to apply the chosen software design approach, and about its ability to absorb changes in software requirements. The ports and adapters architecture is useful if the system has many external integrations (mail service, push messages, databases, reporting systems, etc.). One-way communication with adapters guarantees the integrity of the main algorithmic part of the program. A thorough knowledge of the domain allows the domain layer to be identified immediately. Building a system structure that optimally reflects the domain requires the most time, and correcting errors made while defining the system's layers (interfaces and systems) is costly later. 
Domain logic testing is fast thanks to unit tests, and other tests are easy to write because of the loose coupling between layers. This architecture is not a completely new approach; rather, it takes the best of OOP, SOLID, and DDD and shows how to apply these principles well.
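The layering the abstract describes can be sketched as follows. This is a minimal illustration (not the authors' code, and all names are invented): the domain core depends only on a port (an interface), while the Google-Sheets and REST integrations would be swappable adapters behind it; a trivial in-memory adapter keeps domain unit tests fast.

```python
# Minimal ports-and-adapters sketch: domain logic sees only the port,
# never a concrete integration. Names are illustrative assumptions.

from abc import ABC, abstractmethod
from dataclasses import dataclass


@dataclass
class Reading:
    sensor_id: str
    co2_ppm: float
    temperature_c: float


class SensorPort(ABC):
    """Port: the domain's only view of where readings come from."""

    @abstractmethod
    def fetch_readings(self) -> list[Reading]: ...


class AlertService:
    """Domain logic: depends on the port, never on a concrete adapter."""

    def __init__(self, source: SensorPort, co2_limit: float = 1000.0):
        self._source = source
        self._limit = co2_limit

    def sensors_over_limit(self) -> list[str]:
        return [r.sensor_id for r in self._source.fetch_readings()
                if r.co2_ppm > self._limit]


class InMemoryAdapter(SensorPort):
    """Adapter: a stand-in for the spreadsheet or REST integrations,
    which keeps domain tests fast, as the abstract notes."""

    def __init__(self, readings: list[Reading]):
        self._readings = readings

    def fetch_readings(self) -> list[Reading]:
        return self._readings
```

Because the dependency arrow points one way (adapters implement the port; the domain never imports an adapter), replacing the in-memory source with a Sheets or REST adapter leaves `AlertService` untouched.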
APA, Harvard, Vancouver, ISO, and other styles
39

Al-Rawi, Ahmed, Carmen Celestini, Nicole Stewart, and Nathan Worku. "How Google Autocomplete Algorithms about Conspiracy Theorists Mislead the Public." M/C Journal 25, no. 1 (2022). http://dx.doi.org/10.5204/mcj.2852.

Full text
Abstract:
Introduction: Google Autocomplete Algorithms

Despite recent attention to the impact of social media platforms on political discourse and public opinion, most people locate their news on search engines (Robertson et al.). When a user conducts a search, millions of outputs, in the form of videos, images, articles, and Websites, are sorted to present the most relevant search predictions. Google, the most dominant search engine in the world, expanded its search index in 2009 to include the autocomplete function, which provides suggestions for query inputs (Dörr and Stephan). Google’s autocomplete function also allows users to “search smarter” by reducing typing time by 25 percent (Baker and Potts 189). Google’s complex algorithm is shaped by factors like search history, location, and keyword searches (Karapapa and Borghi), and there are policies to ensure the autocomplete function does not contain harmful content. In 2017, Google implemented a feedback tool to allow human evaluators to assess the quality of search results; however, the algorithm still provides misleading results that frame far-right actors as neutral. In this article, we use reverse engineering to understand the nature of these algorithms in relation to the descriptive outcome, to illustrate how autocomplete subtitles label conspiracists in three countries. According to Google, these “subtitles are generated automatically”, further stating that the “systems might determine that someone could be called an actor, director, or writer. Only one of these can appear as the subtitle” and that Google “cannot accept or create custom subtitles” (Google). We focused our attention on well-known conspiracy theorists because of their influence and audience outreach. In this article we argue that these subtitles are problematic because they can mislead the public and amplify extremist views. Google’s autocomplete feature is misleading because it does not highlight what is publicly known about these actors. 
The labels are neutral or positive but never negative, reflecting primary jobs and/or the actor’s preferred descriptions. This is harmful to the public because Google’s search rankings can influence a user’s knowledge and information preferences through the search engine manipulation effect (Epstein and Robertson). Users’ preferences and understanding of information can be manipulated based upon their trust in Google search results, thus allowing these labels to be widely accepted instead of providing a full picture of the harm their ideologies and beliefs cause.

Algorithms That Mainstream Conspiracies

Search engines establish order and visibility to Web pages that operationalise and stabilise meaning to particular queries (Gillespie). Google’s subtitles and black box operate as a complex algorithm for its search index and offer a mediated visibility to aspects of social and political life (Gillespie). Algorithms are designed to perform computational tasks through an operational sequence that computer systems must follow (Broussard), but they are also “invisible infrastructures” that Internet users consciously or unconsciously follow (Gran et al. 1779). The way algorithms rank, classify, sort, predict, and process data is political because it presents the world through a predetermined lens (Bucher 3) decided by proprietary knowledge – a “secret sauce” (O’Neil 29) – that is not disclosed to the general public (Christin). Technology titans, like Google, Facebook, and Amazon (Webb), rigorously protect and defend intellectual property for these algorithms, which are worth billions of dollars (O’Neil). As a result, algorithms are commonly defined as opaque, secret “black boxes” that conceal the decisions that are already made “behind corporate walls and layers of code” (Pasquale 899). 
The opacity of algorithms is related to layers of intentional secrecy, technical illiteracy, the size of algorithmic systems, and the ability of machine learning algorithms to evolve and become unintelligible to humans, even to those trained in programming languages (Christin 898-899). The opaque nature of algorithms alongside the perceived neutrality of algorithmic systems is problematic. Search engines are increasingly normalised and this leads to a socialisation where suppositions are made that “these artifacts are credible and provide accurate information that is fundamentally depoliticized and neutral” (Noble 25). Google’s autocomplete and PageRank algorithms exist outside of the veil of neutrality. In 2015, Google’s photos app, which uses machine learning techniques to help users collect, search, and categorise images, labelled two black people as ‘gorillas’ (O’Neil). Safiya Noble illustrates how media and technology are rooted in systems of white supremacy, and how these long-standing social biases surface in algorithms, illustrating how racial and gendered inequities embed into algorithmic systems. Google actively fixes algorithmic biases with band-aid-like solutions, which means the errors remain inevitable constituents within the algorithms. Rising levels of automation correspond to a rising level of errors, which can lead to confusion and misdirection of the algorithms that people use to manage their lives (O’Neil). As a result, software, code, machine learning algorithms, and facial/voice recognition technologies are scrutinised for producing and reproducing prejudices (Gray) and promoting conspiracies – often described as algorithmic bias (Bucher). 
Algorithmic bias occurs because algorithms are trained by historical data already embedded with social biases (O’Neil), and if that is not problematic enough, algorithms like Google’s search engine also learn and replicate the behaviours of Internet users (Benjamin 93), including conspiracy theorists and their followers. Technological errors, algorithmic bias, and increasing automation are further complicated by the fact that Google’s Internet service uses “2 billion lines of code” – a magnitude that is difficult to keep track of, including for “the programmers who designed the algorithm” (Christin 899). Understanding this level of code is not critical to understanding algorithmic logics, but we must be aware of the inscriptions such algorithms afford (Krasmann). As algorithms become more ubiquitous it is urgent to “demand that systems that hold algorithms accountable become ubiquitous as well” (O’Neil 231). This is particularly important because algorithms play a critical role in “providing the conditions for participation in public life”; however, the majority of the public has a modest to nonexistent awareness of algorithms (Gran et al. 1791). Given the heavy reliance of Internet users on Google’s search engine, it is necessary for research to provide a glimpse into the black boxes that people use to extract information especially when it comes to searching for information about conspiracy theorists. Our study fills a major gap in research as it examines a sub-category of Google’s autocomplete algorithm that has not been empirically explored before. Unlike the standard autocomplete feature that is primarily programmed according to popular searches, we examine the subtitle feature that operates as a fixed label for popular conspiracists within Google’s algorithm. Our initial foray into our research revealed that this is not only an issue with conspiracists, but also occurs with terrorists, extremists, and mass murderers. 
Method

Using a reverse engineering approach (Bucher) from September to October 2021, we explored how Google’s autocomplete feature assigns subtitles to widely known conspiracists. The conspiracists were not geographically limited, and we searched for those who reside in the United States, Canada, the United Kingdom, and various countries in Europe. Reverse engineering stems from Ashby’s canonical text on cybernetics, in which he argues that black boxes are not a problem; the problem or challenge is related to the way one can discern their contents. As Google’s algorithms are not disclosed to the general public (Christin), we use this method as an extraction tool to understand the nature of how these algorithms (Eilam) apply subtitles. To systematically document the search results, we took screenshots for every conspiracist we searched in an attempt to archive the Google autocomplete algorithm. By relying on previous literature, reports, and the figures’ public statements, we identified and searched Google for 37 Western-based and influential conspiracy theorists. We initially experimented with other problematic figures, including terrorists, extremists, and mass murderers, to see whether Google applied a subtitle or not. Additionally, we examined whether subtitles were positive, neutral, or negative, and compared this valence to personality descriptions for each figure. Using the standard procedures of content analysis (Krippendorff), we focus on the manifest or explicit meaning of text to inform subtitle valence in terms of their positive, negative, or neutral connotations. These manifest features refer to the “elements that are physically present and countable” (Gray and Densten 420) or what is known as the dictionary definitions of items. Using a manual query, we searched Google for subtitles ascribed to conspiracy theorists, and found the results were consistent across different countries. Searches were conducted on Firefox and Chrome and tested on an Android phone. 
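The manifest valence-coding step could be approximated in code as matching each subtitle against small cue lists and labelling it positive, negative, neutral, or missing. The cue sets below are invented for illustration; they are not the study's actual coding scheme.

```python
# Hedged sketch of manifest valence coding for autocomplete subtitles.
# The cue lexicons are hypothetical examples, not the authors' codebook.

from typing import Optional

POSITIVE_CUES = {"activist", "researcher", "philanthropist"}
NEGATIVE_CUES = {"conspiracy theorist", "extremist", "fraudster"}


def code_valence(subtitle: Optional[str]) -> str:
    """Return the manifest valence category of a subtitle string."""
    if not subtitle:
        return "no subtitle"
    text = subtitle.lower()
    if any(cue in text for cue in NEGATIVE_CUES):
        return "negative"
    if any(cue in text for cue in POSITIVE_CUES):
        return "positive"
    # Bare job titles such as "American Radio Host" carry no evaluative cue.
    return "neutral"
```

Under this toy scheme, the subtitles the study reports would code as neutral or positive, and never negative, which is the pattern the authors describe.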
Regardless of language input or the country location established by a Virtual Private Network (VPN), the search results remained stable, regardless of who conducted the search. The conspiracy theorists in our dataset cover a wide range of conspiracies, including historical figures like Nesta Webster and John Robison, who were foundational in Illuminati lore, as well as contemporary conspiracists such as Marjorie Taylor Greene and Alex Jones. Each individual’s name was searched on Google with a VPN set to three countries. Results and Discussion This study examines Google’s autocomplete feature associated with subtitles of conspiratorial actors. We first tested Google’s subtitling system with known terrorists, convicted mass shooters, and controversial cult leaders like David Koresh. Garry et al. (154) argue that “while conspiracy theories may not have mass radicalising effects, they are extremely effective at leading to increased polarization within societies”. We believe that the neutral subtitling of conspiracists reflects the integral role conspiracies play in contemporary politics and right-wing extremism. The sample includes contemporary and historical conspiracists to establish consistency in labelling. For historical figures, the labels are less consequential and simply reflect the reality that Google’s subtitles are primarily neutral. Of the 37 conspiracy theorists we searched (see Table 1 in the Appendix), seven (18.9%) do not have an associated subtitle, and the other 30 (81.1%) have distinctive subtitles, but none of them reflects the public knowledge of the individuals’ harmful role in disseminating conspiracy theories. In the list, 16 (43.2%) are noted for their contribution to the arts, 4 are labelled as activists, 7 are associated with their professional affiliation or original jobs, 2 with the journalism industry, one is linked to his sports career, another is labelled a researcher, and 7 have no subtitle.
The problem here is that when white nationalists or conspiracy theorists are not acknowledged as such in their subtitles, search engine users may encounter content that sways their understanding of society, politics, and culture. For example, a conspiracist like Alex Jones is labelled an “American Radio Host” (see Figure 1), despite losing two defamation lawsuits for declaring that the shooting at Sandy Hook Elementary School in Newtown, Connecticut, was a ‘false flag’ event. Jones’s actions on his InfoWars media platforms led to parents of shooting victims being stalked and threatened. Another conspiracy theorist, Gavin McInnes, the creator of the far-right, neo-fascist Proud Boys organisation, a designated terrorist entity in Canada and a hate group in the United States, is listed simply as a “Canadian writer” (see Figure 1). Fig. 1: Screenshots of Google’s subtitles for Alex Jones and Gavin McInnes. Although subtitles under an individual’s name are not audio, video, or image content, the algorithms that create these subtitles are an invisible infrastructure that can cause harm through their uninterrogated status and pervasive presence. They can then act as a conduit to media that cause harm and foster distrust in electoral and civic processes, or in institutions altogether. Examples from our list include Brittany Pettibone, whose subtitle states that she is an “American writer” despite being one of the main propagators of the Pizzagate conspiracy, which led to Edgar Maddison Welch (whose subtitle is “Screenwriter”) travelling from North Carolina to Washington, D.C., to violently threaten and confront those who worked at Comet Ping Pong Pizzeria. The same misleading label can be found by searching for James O’Keefe of Project Veritas, who is positively labelled an “American activist”. Project Veritas is known for releasing audio and video recordings that contain false information designed to discredit academic, political, and service organisations.
In one instance, a 2020 video released by O’Keefe accused Democrat Ilhan Omar’s campaign of illegally collecting ballots. The same dissemination of distrust applies to Mike Lindell, whose Google subtitle is “CEO of My Pillow”, as well as Sidney Powell, who is listed as an “American lawyer”; both are propagators of conspiracy theories relating to the 2020 presidential election. The subtitles attributed to conspiracists on Google do not acknowledge the widescale public awareness of the negative role these individuals play in spreading conspiracy theories or causing harm to others. Some of the selected conspiracists are well-known white nationalists, including Stefan Molyneux, who has been banned from social media platforms like Twitter, Twitch, Facebook, and YouTube for the promotion of scientific racism and eugenics; however, he is neutrally listed on Google as a “Canadian podcaster”. In addition, Laura Loomer, who describes herself as a “proud Islamophobe”, is listed by Google as an “Author”. These subtitles can pose a threat by normalising individuals who spread conspiracy theories, sow dissension and distrust in institutions, and cause harm to minority groups and vulnerable individuals. Once a selected person’s result is clicked, the ensuing pages, although influenced by the algorithm, do not provide information that aligns with the associated subtitle. The search results are instead skewed toward the actual conspiratorial nature of the individuals and associated news articles. In essence, the subtitles do not reflect the subsequent search results, and provide a counter-labelling to the reality of the information presented to the user. Another significant example is Jerad Miller, who is listed as “American performer”, despite being the Las Vegas shooter who posted anti-government and white nationalist 3 Percenters memes on his social media (Sun Staff); the majority of search results connect him to the mass shooting he orchestrated in 2014.
The subtitle “performer” is certainly not the common characteristic that should be associated with Jerad Miller. Table 1 in the Appendix shows that individuals who are not within the contemporary milieu of conspiracists, but have had a significant impact, such as Nesta Webster, Robert Welch Junior, and John Robison, were listed by their original profession or sometimes without a subtitle. David Icke, infamous for his lizard people conspiracies, has a subtitle reflecting his past football career. In no case was Google’s subtitle consistent with the actor’s conspiratorial behaviour. Indeed, the neutral subtitles applied to conspiracists in our research may reflect some aspect of the individuals’ previous careers, but they are not an accurate reflection of the individuals’ publicly known role in propagating hate, which we argue is misleading to the public. For example, David Icke may be a former footballer, but the 4.7 million search results predominantly focus on his conspiracies, his public fora, and his status of being deplatformed by mainstream social media sites. The subtitles are not only neutral; they are also not based on the actual search results, and so mislead users about what they will discover; most importantly, they provide no warning about the misinformation associated with the subtitled individual. To conclude, algorithms automate the search engines that people use in the functions of everyday life, but they are also entangled in technological errors and algorithmic bias, and have the capacity to mislead the public. Through a process of reverse engineering (Ashby; Bucher), we searched 37 conspiracy theorists to decode the Google autocomplete algorithms. We identified how the subtitles attributed to conspiracy theorists are neutral or positive, but never negative, which does not accurately reflect the widely known public conspiratorial discourse these individuals propagate on the Web.
This is problematic because the algorithms that determine these subtitles are invisible infrastructures acting to misinform the public and to mainstream conspiracies within larger social, cultural, and political structures. This study highlights the urgent need for Google to review the subtitles attributed to conspiracy theorists, terrorists, and mass murderers, to better inform the public about the negative nature of these actors, rather than always labelling them in neutral or positive ways. Funding Acknowledgement This project has been made possible in part by the Canadian Department of Heritage – the Digital Citizen Contribution program – under grant no. R529384. The title of the project is “Understanding hate groups’ narratives and conspiracy theories in traditional and alternative social media”. References Ashby, W. Ross. An Introduction to Cybernetics. Chapman & Hall, 1961. Baker, Paul, and Amanda Potts. "‘Why Do White People Have Thin Lips?’ Google and the Perpetuation of Stereotypes via Auto-Complete Search Forms." Critical Discourse Studies 10.2 (2013): 187-204. Benjamin, Ruha. Race after Technology: Abolitionist Tools for the New Jim Code. Polity, 2019. Bucher, Taina. If... Then: Algorithmic Power and Politics. OUP, 2018. Broussard, Meredith. Artificial Unintelligence: How Computers Misunderstand the World. MIT P, 2018. Christin, Angèle. "The Ethnographer and the Algorithm: Beyond the Black Box." Theory and Society 49.5 (2020): 897-918. D'Ignazio, Catherine, and Lauren F. Klein. Data Feminism. MIT P, 2020. Dörr, Dieter, and Juliane Stephan. "The Google Autocomplete Function and the German General Right of Personality." Perspectives on Privacy. De Gruyter, 2014. 80-95. Eilam, Eldad. Reversing: Secrets of Reverse Engineering. John Wiley & Sons, 2011. Epstein, Robert, and Ronald E. Robertson. "The Search Engine Manipulation Effect (SEME) and Its Possible Impact on the Outcomes of Elections."
Proceedings of the National Academy of Sciences 112.33 (2015): E4512-E4521. Garry, Amanda, et al. "QAnon Conspiracy Theory: Examining Its Evolution and Mechanisms of Radicalization." Journal for Deradicalization 26 (2021): 152-216. Gillespie, Tarleton. "Algorithmically Recognizable: Santorum’s Google Problem, and Google’s Santorum Problem." Information, Communication & Society 20.1 (2017): 63-80. Google. “Update Your Google Knowledge Panel.” 2022. 3 Jan. 2022 <https://support.google.com/knowledgepanel/answer/7534842?hl=en#zippy=%2Csubtitle>. Gran, Anne-Britt, Peter Booth, and Taina Bucher. "To Be or Not to Be Algorithm Aware: A Question of a New Digital Divide?" Information, Communication & Society 24.12 (2021): 1779-1796. Gray, Judy H., and Iain L. Densten. "Integrating Quantitative and Qualitative Analysis Using Latent and Manifest Variables." Quality and Quantity 32.4 (1998): 419-431. Gray, Kishonna L. Intersectional Tech: Black Users in Digital Gaming. LSU P, 2020. Karapapa, Stavroula, and Maurizio Borghi. "Search Engine Liability for Autocomplete Suggestions: Personality, Privacy and the Power of the Algorithm." International Journal of Law and Information Technology 23.3 (2015): 261-289. Krasmann, Susanne. "The Logic of the Surface: On the Epistemology of Algorithms in Times of Big Data." Information, Communication & Society 23.14 (2020): 2096-2109. Krippendorff, Klaus. Content Analysis: An Introduction to Its Methodology. Sage, 2004. Noble, Safiya Umoja. Algorithms of Oppression. New York UP, 2018. O'Neil, Cathy. Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. Crown, 2016. Pasquale, Frank. The Black Box Society. Harvard UP, 2015. Robertson, Ronald E., David Lazer, and Christo Wilson. "Auditing the Personalization and Composition of Politically-Related Search Engine Results Pages." Proceedings of the 2018 World Wide Web Conference. 2018. Staff, Sun.
“A Look inside the Lives of Shooters Jerad Miller, Amanda Miller.” Las Vegas Sun 9 June 2014. <https://lasvegassun.com/news/2014/jun/09/look/>. Webb, Amy. The Big Nine: How the Tech Titans and Their Thinking Machines Could Warp Humanity. Hachette UK, 2019.
Appendix
Table 1: The subtitles of conspiracy theorists on Google autocomplete
Conspiracy Theorist | Google Autocomplete Subtitle | Character Description
Alex Jones | American radio host | InfoWars founder, American far-right radio show host and conspiracy theorist. The SPLC describes Alex Jones as “the most prolific conspiracy theorist in contemporary America.”
Barry Zwicker | Canadian journalist | Filmmaker who made a documentary that claimed fear was used to control the public after 9/11.
Bart Sibrel | American producer | Writer, producer, and director of work that falsely claims the Apollo moon landings between 1969 and 1972 were staged by NASA.
Ben Garrison | American cartoonist | Alt-right and QAnon political cartoonist.
Brittany Pettibone | American writer | Far-right political vlogger on YouTube and propagator of #pizzagate.
Cathy O’Brien | American author | Claims she was a victim of a government mind-control project called Project Monarch.
Dan Bongino | American radio host | Stakeholder in Parler, radio host, ex-spy, conspiracist (Spygate, MAGA election fraud, etc.).
David Icke | Former footballer | Reptilian humanoid conspiracist.
David Wynn Miller | (No subtitle) | Conspiracist, far-right tax protester, and founder of the Sovereign Citizens Movement.
Jack Posobiec | American activist | Alt-right, alt-lite political activist, conspiracy theorist, and Internet troll. Editor of Human Events Daily.
James O’Keefe | American activist | Founder of Project Veritas, a far-right company that propagates disinformation and conspiracy theories.
John Robison | | Foundational Illuminati conspiracist.
Kevin Annett | Canadian writer | Former minister and writer who wrote a book exposing atrocities against Indigenous communities, and is now a conspiracist and vlogger.
Laura Loomer | Author | Far-right, anti-Muslim conspiracy theorist and Internet personality. Republican nominee in Florida’s 21st congressional district in 2020.
Marjorie Taylor Greene | United States Representative | Conspiracist, QAnon adherent, and U.S. representative for Georgia’s 14th congressional district.
Mark Dice | American YouTuber | Right-wing conservative pundit and conspiracy theorist.
Mark Taylor | (No subtitle) | QAnon minister and self-proclaimed prophet of Donald Trump, the 45th U.S. President.
Michael Chossudovsky | Canadian economist | Professor emeritus at the University of Ottawa, founder of the Centre for Research on Globalization, and conspiracist.
Michael Cremo (Drutakarmā dāsa) | American researcher | Self-described Vedic creationist whose book, Forbidden Archeology, argues humans have lived on Earth for millions of years.
Mike Lindell | CEO of My Pillow | Business owner and conspiracist.
Neil Patel | English entrepreneur | Founded The Daily Caller with Tucker Carlson.
Nesta Helen Webster | English author | Foundational Illuminati conspiracist.
Naomi Wolf | American author | Feminist turned conspiracist (ISIS, COVID-19, etc.).
Owen Benjamin | American comedian | Former actor/comedian, now a conspiracist (Beartopia), banned from mainstream social media for using hate speech.
Pamela Geller | American activist | Conspiracist, anti-Islam blogger, and host.
Paul Joseph Watson | British YouTuber | InfoWars co-host and host of the YouTube show PrisonPlanetLive.
QAnon Shaman (Jake Angeli) | American activist | Conspiracy theorist who participated in the 2021 attack on Capitol Hill.
Richard B. Spencer | (No subtitle) | American neo-Nazi, antisemitic conspiracy theorist, and white supremacist.
Rick Wiles | (No subtitle) | Minister who founded the conspiracy site TruNews.
Robert W. Welch Jr. | American businessman | Founded the John Birch Society.
Ronald Watkins | (No subtitle) | Founder of 8kun.
Serge Monast | Journalist | Creator of the Project Blue Beam conspiracy.
Sidney Powell | (No subtitle) | One of former President Trump’s lawyers, and a renowned conspiracist regarding the 2020 presidential election.
Stanton T. Friedman | Nuclear physicist | Original civilian researcher of the 1947 Roswell UFO incident.
Stefan Molyneux | Canadian podcaster | Irish-born Canadian far-right white nationalist, podcaster, blogger, and banned YouTuber who promotes conspiracy theories, scientific racism, eugenics, and racist views.
Tim LaHaye | American author | Founded the Council for National Policy, leader in the Moral Majority movement, and co-author of the Left Behind book series.
Viva Frei | (No subtitle) | Canadian YouTuber/influencer on the far right and COVID conspiracy proponent.
William Guy Carr | Canadian author | Illuminati/Third World War conspiracist.
Google searches conducted as of 9 October 2021.
40

Stamm, Emma. "Anomalous Forms in Computer Music." M/C Journal 23, no. 5 (2020). http://dx.doi.org/10.5204/mcj.1682.

Full text
Abstract:
Introduction: For Gilles Deleuze, computational processes cannot yield the anomalous, or that which is unprecedented in form and content. He suggests that because computing functions are mechanically standardised, they always share the same ontic character. M. Beatrice Fazi claims that the premises of his critique are flawed. Her monograph Contingent Computation: Abstraction, Experience, and Indeterminacy in Computational Aesthetics presents an integrative reading of thinkers including Henri Bergson, Alfred North Whitehead, Kurt Gödel, Alan Turing, and Georg Cantor. From this eclectic basis, Fazi demonstrates that computers differ from humans in their modes of creation, yet still produce qualitative anomaly. This article applies her research to the cultural phenomenon of live-coded music. Live coding artists improvise music by writing audio computer functions which produce sound in real time. I draw from Fazi’s reading of Deleuze and Bergson to investigate the aesthetic mechanisms of live coding. In doing so, I give empirical traction to her argument for the generative properties of computers. Part I: Reconciling the Discrete and the Continuous. In his book Difference and Repetition, Deleuze defines “the new” as that which radically differs from the known and familiar (136). Deleuzean novelty bears unpredictable creative potential; as he puts it, the “new” “calls forth forces in thought which are not the forces of recognition” (136). These forces issue from a space of alterity which he describes as a “terra incognita” and a “completely other model” (136). Fazi writes that Deleuze’s conception of novelty informs his aesthetic philosophy. She notes that Deleuze follows the etymological origins of the word “aesthetic”, which lie in the Ancient Greek term aisthēsis, or perception from senses and feelings (Fazi, “Digital Aesthetics” 5).
Deleuze observes that senses, feelings, and cognition are interwoven, and suggests that creative processes beget new links between these faculties. In Fazi’s words, Deleuzean aesthetic research “opposes any existential modality that separates life, thought, and sensation” (5). Here, aesthetics does not denote a theory of art and is not concerned with such traditional topics as beauty, taste, and genre. Aesthetics-as-aisthēsis investigates the conditions which make it possible to sense, cognise, and create anomalous phenomena, or that which has no recognisable forebear. Fazi applies Deleuzean aesthetics towards an ontological account of computation. Towards this end, she challenges Deleuze’s precept that computers cannot produce the aesthetic “new”. As she explains, Deleuze denies this ability to computers on the grounds that computation operates on discrete variables, or data which possess a quantitatively finite array of possible values (6). Deleuze understands discreteness as both a quantitative and ontic condition, and implies that computation cannot surpass this originary state. In his view, only continuous phenomena are capable of aisthēsis as the function which yields ontic novelty (5). Moreover, he maintains that continuous entities cannot be represented, interpreted, symbolised, or codified. The codified discreteness of computation is therefore “problematic” within his aesthetic framework “inasmuch it exemplifies yet another development of the representational”, or a repetition of sameness (6). The Deleuzean act of aisthēsis does not compute, repeat, or iterate what has come before. It yields nothing less than absolute difference. Deleuze’s theory of creation as differentiation is prefigured by Bergson’s research on multiplicity, difference, and time.
Bergson holds that the state of being multiple is ultimately qualitative rather than quantitative, and that multiplicity is constituted by qualitative incommensurability, or difference in kind as opposed to degree (Deleuze, Bergsonism 42). Qualia are multiple when they cannot withstand equivocation through a common substrate. Hence, entities that comprise discrete data, including all products and functions of digital computation, cannot aspire to true multiplicity or difference. In The Creative Mind, Bergson considers the concept of time from this vantage point. As he indicates, time is normally understood as numerable and measurable, especially by mathematicians and scientists (13). He sets out to show that this conception is an illusion, and that time is instead a process by which continuous qualia differentiate and self-actualise as unique instances of pure time, or what he calls “duration as duration”. As he puts it, “the measuring of time never deals with duration as duration; what is counted is only a certain number of extremities of intervals, or moments, in short, virtual halts in time. To state that an incident will occur at the end of a certain time t, is simply to say that one will have counted, from now until then, a number t of simultaneities of a certain kind. In between these simultaneities anything you like may happen” (12-13). The in-between space where “anything you like may happen” inspired Deleuze’s notion of ontic continua, or entities whose quantitative limitlessness connects with their infinite aesthetic potentiality. For Bergson, those who believe that time is finite and measurable “cannot succeed in conceiving the radically new and unforeseeable”, a sentiment which also appears to have influenced Deleuze (The Creative Mind 17). The legacy of Bergson and Deleuze is traceable to the present era, where the alleged irreconcilability of the discrete and the continuous fuels debates in digital media studies.
Deleuze is not the only thinker to explore this tension: scholars in the traditions of phenomenology, critical theory, and post-Marxism have positioned the continuousness of thought and feeling against the discreteness of computation (Fazi, “Digital Aesthetics” 7). Fazi contributes to this discourse by establishing that the ontic character of computation is not wholly predicated on quantitatively discrete elements. Drawing from Turing’s theory of computability, she claims that computing processes incorporate indeterminable and uncomputable forces in open-ended processes that “determine indeterminacy” (Fazi, Contingent Computation 1). She also marshals philosopher Stamatia Portanova, whose book Moving Without a Body: Digital Philosophy and Choreographic Thoughts indicates that discrete and continuous components merge in processes that digitise bodily motion (Portanova 3). In a similar but more expansive maneuver, Fazi declares that the discrete and continuous coalesce in all computational operations. Although Fazi’s work applies to all forms of computing, it casts new light on specific devices, methodologies, and human-computer interfaces. In the next section, I use her reading of Bergsonian elements in Deleuze to explore the contemporary artistic practice of live coding. My reading situates live coding in the context of studies on improvisation and creative indeterminacy. Part II: Live Coding as Contingent Improvisational Practice. The term “live coding” describes an approach to programming where computer functions immediately render as images and/or sound. Live coding interfaces typically feature two windows: one for writing source code and another which displays code outcomes, for example as graphic visualisations or audio. The practice supports the rapid evaluation, editing, and exhibition of code in progress (“A History of Live Programming”).
Although it encompasses many different activities, the phrase “live coding” is most often used in the context of computer music. In live coding performances or “AlgoRaves”, musicians write programs on stage in front of audiences. The programming process might be likened to playing an instrument. Typically, the coding interface is projected on a large screen, allowing audiences to see the musical score as it develops (Magnusson, “Improvising with the Threnoscope” 19). Technologists, scholars, and educators have embraced live coding as both a creative method and an object of study. Because it provides immediate feedback, it is especially useful as a pedagogical aide. Sonic Pi, a user-friendly live coding language, was originally designed to teach programming basics to children. It has since been adopted by professional musicians across the world (Aaron). Despite its conspicuousness in educational and creative settings, scholars have rarely explored live coding in the context of improvisation studies. Programmers Gordan Kreković and Antonio Pošćić claim that this is a notable oversight, as improvisation is its “most distinctive feature”. In their view, live coding is most simply defined as an improvisational method, and its strong emphasis on chance sets it apart from other approaches to computer music (Kreković and Pošćić). My interest with respect to live coding lies in how its improvisational mechanisms blend computational discreteness and continuous “real time”. I do not mean to suggest that live coding is the only implement for improvising music with computers. Any digital instrument can be used to spontaneously play, produce, and record sound. What makes live coding unique is that it merges the act of playing with the process of writing notation: musicians play for audiences in the very moment that they produce a written score.
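To make the two-window practice concrete, the following is a minimal sketch of the kind of thing a live coder might type into Sonic Pi's editor during a performance. The loop names, sample, synth, and note choices are illustrative assumptions of mine rather than examples taken from the article, and the code runs only inside the Sonic Pi environment, not as plain Ruby:

```ruby
# Two concurrent live loops; each can be edited and re-evaluated
# mid-performance, so the audience hears the score change as it is rewritten.
live_loop :pulse do
  sample :bd_haus                    # built-in kick-drum sample
  sleep 0.5                          # rest for half a beat
end

live_loop :melody do
  use_synth :prophet
  play choose([:c4, :e4, :g4, :a4])  # pick a random note from a small pool
  sleep 0.25
end
```

Re-evaluating a modified live_loop body swaps the new code in at the start of that loop's next cycle, which is the mechanism by which writing the notation and playing it coincide.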
The process fuses the separate functions of performing, playing, seeing, hearing, and writing music in a patently Deleuzean act of aisthēsis. Programmer Thor Magnusson writes that live coding is the “offspring” of two very different creative practices: first, “the formalization and encoding of music”; second, “open work resisting traditional forms of encoding” (“Algorithms as Scores” 21). By “traditional forms of encoding”, Magnusson refers to computer programs which function only insofar as source code files are static and immutable. By contrast, live coding relies on the real-time elaboration of new code. As an improvisational art, the process and product of live coding do not exist without continuous interventions from external forces. My use of the phrase “real time” evokes Bergson’s concept of “pure time” or “duration as duration”. “Real time” phenomena are understood to occur instantaneously, that is, at no degree of temporal removal from those who produce and experience them. However, Bergson suggests that instantaneity is a myth. By his account, there always exists some degree of removal between events as they occur and as they are perceived, even if this gap is imperceptibly small. Regardless of size, the indelible space in time has important implications for theories of improvisation. For Deleuze and Bergson, each continuous particle of time is a germinal seed for the new. Fazi uses the word “contingent” to describe this ever-present, infinite potentiality (Contingent Computation 1). Improvisation studies scholar Dan DiPiero claims that the concept of contingency not only qualifies future possibilities, but also describes past events that “could have been otherwise” (2). He explains his reasoning as follows: before the event, the outcome is contingent as in not-yet-known; after the event, the result is contingent as in could-have-been-otherwise.
What appears at first blush a frustrating theoretical ambiguity actually points to a useful insight: at any given time in any given process, there is a particular constellation of openings and closures, of possibilities and impossibilities, that constitute a contingent situation. Thus, the contingent does not reference either the open or the already decided but both at once, and always. (2) Deleuze might argue that only continuous phenomena are contingent, and that because they are quantitatively finite, the structures of computational media — including the sound and notation of live coding scores — can never “be otherwise” or contingent as such. Fazi intervenes by indicating the role of quantitative continuousness in all computing functions. Moreover, she aligns her project with emerging theories of computing which “focus less on internal mechanisms and more on external interaction”, or interfaces with continuous, non-computational contexts (“Digital Aesthetics” 19). She takes computational interactions with external environments, such as human programmers and observers, as “the continuous directionality of composite parts” (19). To this point, it matters that discrete objects always exist in relation to continuous environments, and that discrete objects make up continuous fluxes when mobilised as part of continuous temporal processes. It is for this reason that Portanova uses the medium of dance to explore the entanglement of discreteness and temporal contingency. As with music, the art of dance depends on the continuous unfolding of time. Fazi writes that Portanova’s study of choreography reveals “the unlimited potential that every numerical bit of a program, or every experiential bit of a dance (every gesture and step), has to change and be something else” (Contingent Computation 39). As with the zeroes and ones of a binary computing system, the footfalls of a dance materialise as discrete parts which inhabit and constitute continuous vectors of time.
Per Deleuzean aesthetics-as-aisthēsis, these parts yield new connections between sound, space, cognition, and feeling. DiPiero indicates that in the case of improvised artworks, the ontic nature of these links defies anticipation. In his words, improvisation forces artists and audiences to “think contingency”. “It is not that discrete, isolated entities connect themselves to form something greater”, he explains, “but rather that the distance between the musician as subject and the instrument as object is not clearly defined” (3). So, while live coder and code persist as separate phenomena, the coding/playing/performing process highlights the qualitative indeterminacy of the space between them. Each moment might beget the unrecognisable — and this ineluctable, ever-present surprise is essential to the practice. To be sure, there are elements of predetermination in live coding practices. For example, musicians often save and return to specific functions in the midst of performances. But as Kreković and Pošćić point out, all modes of improvisation rely on patterning and standardisation, including analog and non-computational techniques. Here, they cite composer John Cage’s claim that there exists no “true” improvisation because artists “always find themselves in routines” (Kreković and Pošćić). In a slight twist on Cage, Kreković and Pošćić insist that repetition does not make improvisation “untrue”, but rather that it points to an expanded role for indeterminacy in all forms of composition. As they write, [improvisation] can both be viewed as spontaneous composition and, when distilled to its core processes, a part of each compositional approach. Continuous and repeated improvisation can become ingrained, classified, and formalised. Or, if we reverse the flow of information, we can consider composition to be built on top of quiet, non-performative improvisations in the mind of the composer.
(Kreković and Pošćić) This commentary echoes Deleuze’s thoughts on creativity and ontic continuity. To paraphrase Kreković and Pošćić, the aisthēsis of sensing, feeling, and thinking yields quiet, non-performative improvisations that play continuously in each individual mind. Fazi’s reading of Deleuze endows computable phenomena with this capacity. She does not endorse a computational theory of cognition that would permit computers to think and feel in the same manner as humans. Instead, she proposes a Deleuzean aesthetic capacity proper to computation. Live coding exemplifies the creative potential of computers as articulated by Fazi in Contingent Computation. Her research has allowed me to indicate live coding as an embodiment of Deleuze and Bergson’s theories of difference and creativity. Importantly, live coding affirms their philosophical premises not in spite of its technologised discreteness — which they would have considered problematic — but because it leverages discreteness in service of the continuous aesthetic act. My essay might also serve as a prototype for studies on digitality which likewise aim to supersede the divide between discrete and continuous media. As I have hopefully demonstrated, Fazi’s framework allows scholars to apprehend all forms of computation with enhanced clarity and openness to new possibilities. Coda: From Aesthetics to Politics. By way of a coda, I will reflect on the relevance of Fazi’s work to contemporary political theory. In “Digital Aesthetics”, she makes reference to emerging “oppositions to the mechanization of life” from “post-structuralist, postmodernist and post-Marxist” perspectives (7). One such argument comes from philosopher Bernard Stiegler, whose theory of psychopower conceives “the capture of attention by technological means” as a political mechanism (“Biopower, Psychopower and the Logic of the Scapegoat”). Stiegler is chiefly concerned with the psychic impact of discrete technological devices.
As he argues, the habitual use of these instruments advances “a proletarianization of the life of the mind” (For a New Critique of Political Economy 27). For Stiegler, human thought is vulnerable to discretisation processes, which effect the loss of knowledge and quality of life. He considers this process to be a form of political hegemony (34).

Philosopher Antoinette Rouvroy proposes a related theory called “algorithmic governmentality” to describe the political effects of algorithmic prediction mechanisms. As she claims, predictive algorithms erode “the excess of the possible on the probable”, or all that cannot be accounted for in advance by statistical probabilities. In her words,

all these events that can occur and that we cannot predict, it is the excess of the possible on the probable, that is everything that escapes it, for instance the actuarial reality with which we try precisely to make the world more manageable in reducing it to what is predictable … we have left this idea of the actuarial reality behind for what I would call a “post-actuarial reality” in which it is no longer about calculating probabilities but to account in advance for what escapes probability and thus the excess of the possible on the probable. (8)

In the past five years, Stiegler and Rouvroy have collaborated on research into the politics of technological determinacy. The same issue concerned Deleuze almost three decades ago: his 1992 essay “Postscript on the Societies of Control” warns that future subjugation will proceed as technological prediction and enclosure. He writes of a dystopian society which features a “numerical language of control … made of codes that mark access to information, or reject it” (5). The society of control reduces individuals to “dividuals”, or homogenised and interchangeable numeric fractions (5).
These accounts of political power equate digital discreteness with ontic finitude, and suggest that ubiquitous digital computing threatens individual agency and societal diversity. Stiegler and Deleuze envision a sort of digital reification of human subjectivity; Rouvroy puts forth the idea that algorithmic development will reduce the possibilities inherent in social life to mere statistical likelihoods. While Fazi’s work does not completely discredit these notions, it might instead be used to scrutinise their assumptions. If computation is not ontically finite, then political allegations against it must consider its opposition to human life with greater nuance and rigor.

References

Aaron, Sam. “Programming as Performance.” Tedx Talks. YouTube, 22 July 2015. &lt;https://www.youtube.com/watch?v=TK1mBqKvIyU&amp;t=333s&gt;.
“A History of Live Programming.” Live Prog Blog. 13 Jan. 2013. &lt;liveprogramming.github.io/liveblog/2013/01/a-history-of-live-programming/&gt;.
Bergson, Henri. The Creative Mind: An Introduction to Metaphysics. Trans. Mabelle L. Andison. New York City: Carol Publishing Group, 1992.
———. Time and Free Will: An Essay on the Immediate Data of Consciousness. Trans. F.L. Pogson. Mineola: Dover Publications, 2001.
Deleuze, Gilles. Difference and Repetition. Trans. Paul Patton. New York City: Columbia UP, 1994.
———. "Postscript on the Societies of Control." October 59 (1992): 3-7.
———. Bergsonism. Trans. Hugh Tomlinson and Barbara Habberjam. New York City: Zone Books, 1991.
DiPiero, Dan. “Improvisation as Contingent Encounter, Or: The Song of My Toothbrush.” Critical Studies in Improvisation / Études Critiques en Improvisation 12.2 (2018). &lt;https://www.criticalimprov.com/index.php/csieci/article/view/4261&gt;.
Fazi, M. Beatrice. Contingent Computation: Abstraction, Experience, and Indeterminacy in Computational Aesthetics. London: Rowman &amp; Littlefield International, 2018.
———. “Digital Aesthetics: The Discrete and the Continuous.” Theory, Culture &amp; Society 36.1 (2018): 3-26.
Fortune, Stephen. “What on Earth Is Livecoding?” Dazed Digital, 14 May 2013. &lt;https://www.dazeddigital.com/artsandculture/article/16150/1/what-on-earth-is-livecoding&gt;.
Kreković, Gordan, and Antonio Pošćić. “Modalities of Improvisation in Live Coding.” Proceedings of xCoaX 2019, the 7th Conference on Computation, Communication, Aesthetics &amp; X. Fabbrica del Vapore, Milan, Italy, 5 July 2019.
Magnusson, Thor. “Algorithms as Scores: Coding Live Music.” Leonardo Music Journal 21 (2011): 19-23.
———. “Improvising with the Threnoscope: Integrating Code, Hardware, GUI, Network, and Graphic Scores.” Proceedings of the International Conference on New Interfaces for Musical Expression. Goldsmiths, University of London, London, England, 1 July 2014.
Portanova, Stamatia. Moving without a Body: Digital Philosophy and Choreographic Thoughts. Cambridge, MA: The MIT P, 2013.
Rouvroy, Antoinette. “The Digital Regime of Truth: From the Algorithmic Governmentality to a New Rule of Law.” Trans. Anaïs Nony and Benoît Dillet. La Deleuziana: Online Journal of Philosophy 3 (2016). &lt;http://www.ladeleuziana.org/wp-content/uploads/2016/12/Rouvroy-Stiegler_eng.pdf&gt;.
Stiegler, Bernard. For a New Critique of Political Economy. Malden: Polity Press, 2012.
———. “Biopower, Psychopower and the Logic of the Scapegoat.” Ars Industrialis (no date given). &lt;www.arsindustrialis.org/node/2924&gt;.
APA, Harvard, Vancouver, ISO, and other styles
41

Horrigan, Matthew. "A Flattering Robopocalypse." M/C Journal 23, no. 6 (2020). http://dx.doi.org/10.5204/mcj.2726.

Full text
Abstract:
RACHAEL. It seems you feel our work is not a benefit to the public.
DECKARD. Replicants are like any other machine. They're either a benefit or a hazard. If they're a benefit it's not my problem.
RACHAEL. May I ask you a personal question?
DECKARD. Yes.
RACHAEL. Have you ever retired a human by mistake? (Scott 17:30)

CAPTCHAs (henceforth "captchas") are commonplace on today's Internet. Their purpose seems clear: block malicious software, allow human users to pass. But as much as they exclude spambots, captchas often exclude humans with visual and other disabilities (Dzieza; W3C Working Group). Worse yet, more and more advanced captcha-breaking technology has resulted in more and more challenging captchas, raising the barrier between online services and those who would access them. In the words of inclusive design advocate Robin Christopherson, "CAPTCHAs are evil". In this essay I describe how the captcha industry implements a posthuman process that speculative fiction has gestured toward but not grasped. The hostile posthumanity of captcha is not just a technical problem, nor just a problem of usability or access. Rather, captchas convey a design philosophy that asks humans to prove themselves by performing well at disembodied games. This philosophy has its roots in the Turing Test itself, whose terms guide speculation away from the real problems that today's authentication systems present. Drawing the concept of "procedurality" from game studies, I argue that, despite a design goal of separating machines and humans to the benefit of the latter, captchas actually and ironically produce an arms race in which humans have a systematic and increasing disadvantage. This arms race results from the Turing Test's equivocation between human and machine bodies, an assumption whose influence I identify in popular film, science fiction literature, and captcha design discourse.
The Captcha Industry and Its Side-Effects

Exclusion is an essential function of every cybersecurity system. From denial-of-service attacks to data theft, toxic automated entities constantly seek admission to services they would damage. To remain functional and accessible, Websites need security systems to keep out "abusive agents" (Shet). In cybersecurity, the term "user authentication" refers to the process of distinguishing between abusive agents and welcome users (Jeng et al.). Of the many available authentication techniques, CAPTCHA, "Completely Automated Public Turing test[s] to tell Computers and Humans Apart" (Von Ahn et al. 1465), is one of the most iconic. Although some captchas display a simple checkbox beside a disclaimer to the effect that "I am not a robot" (Shet), these frequently give way to more difficult alternatives: perception tests (fig. 1). Test captchas may show sequences of distorted letters, which a user is supposed to recognise and then type in (Godfrey). Others effectively digitize a game of "I Spy": an image appears, with an instruction to select the parts of it that show a specific type of object (Zhu et al.). A newer type of captcha involves icons rotated upside-down or sideways, the task being to right them (Gossweiler et al.). These latter developments show the influence of gamification (Kani and Nishigaki; Kumar et al.), the design trend where game-like elements figure in serious tasks.

Fig. 1: A series of captchas followed by multifactor authentication as a "quick security check" during the author's suspicious attempt to access LinkedIn over a Virtual Private Network

Gamified captchas, in using tests of ability to tell humans from computers, invite three problems, of which only the first has received focussed critical attention. I discuss each briefly below, and at greater length in subsequent sections.
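Mechanically, the challenge-and-check cycle behind the perception tests described above reduces to generating a secret answer and comparing it against what the user types. A minimal sketch of that core loop (illustrative only; the function names are hypothetical, and the image-distortion step that is supposed to stump machines is elided):

```python
import random
import string

def make_text_captcha(length=6):
    """Generate a random challenge string whose answer the user must type.

    In a deployed system this string would be rendered as a distorted
    image; the rendering step is omitted here for illustration.
    """
    return "".join(random.choices(string.ascii_uppercase + string.digits, k=length))

def check_response(answer, response):
    """Admit the user only if the typed response matches the stored
    answer (case-insensitively, as many deployed captchas allow)."""
    return answer.strip().lower() == response.strip().lower()

challenge = make_text_captcha()
print(check_response(challenge, challenge.lower()))  # a correct answer passes
```

Note that nothing in the comparison itself is hard for software; the entire burden of telling humans apart falls on the elided distortion step, which is exactly where the arms race described below takes place.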
First, as many commentators have pointed out (W3C Working Group), captchas can accidentally categorise real humans as nonhumans—a technical problem that becomes more likely as captcha-breaking technologies improve (e.g. Tam et al.; Brown et al.). Indeed, the design and breaking of captchas has become an almost self-sustaining subfield in computer science, as researchers review extant captchas, publish methods for breaking them, and publish further captcha designs (e.g. Weng et al.). Such research fuels an industry of captcha-solving services (fig. 2), of which some use automated techniques, and some are "human-powered", employing groups of humans to complete large numbers of captchas, thus clearing the way for automated incursions (Motoyama et al. 2). Captchas now face the quixotic task of using ability tests to distinguish legitimate users from abusers with similar abilities.

Fig. 2: Captcha production and captcha breaking: a feedback loop

Second, gamified captchas import the feelings of games. When they defeat a real human, the human seems not to have encountered the failure state of an automated procedure, but rather to have lost, or given up on, a game. The same frame of "gameful"-ness (McGonigal, under "Happiness Hacking") or "gameful work" (under "The Rise of the Happiness Engineers"), supposed to flatter users with a feeling of reward or satisfaction when they complete a challenge, has a different effect in the event of defeat. Gamefulness shifts the fault from procedure to human, suggesting, for the latter, the shameful status of loser.

Third, like games, gamified captchas promote a particular strain of logic. Just as other forms of media can be powerful venues for purveying stereotypes, so are gamified captchas, in this case conveying the notion that ability is a legitimate means, not only of apportioning privilege, but of humanising and dehumanising.
Humanity thus appears as a status earned, and disability appears not as a stigma, nor an occurrence, but an essence. The latter two problems emerge because the captcha reveals, propagates and naturalises an ideology through mechanised procedures. Below I invoke the concept of "procedural rhetoric" to critique the disembodied notion of humanity that underlies both the original Turing Test and the "Completely Automated Public Turing test." Both tests, I argue, ultimately play to the disadvantage of their human participants.

Rhetorical Games, Procedural Rhetoric

When videogame studies emerged as an academic field in the early 2000s, one of its first tasks was to legitimise games relative to other types of artefact, especially literary texts (Eskelinen; Aarseth). Scholars sought a framework for discussing how video games, like other more venerable media, can express ideas (Weise). Janet Murray and Ian Bogost looked to the notion of procedure, devising the concepts of "procedurality" (Bogost 3), "procedural authorship" (Murray 171), and "procedural rhetoric" (Bogost 1). From a proceduralist perspective, a videogame is both an object and a medium for inscribing processes. Those processes have two basic types: procedures the game's developers have authored, which script the behaviour of the game as a computer program; and procedures human players respond with, the "operational logic" of gameplay (Bogost 13). Procedurality's two types of procedure, the computerised and the human, have a kind of call-and-response relationship, where the behaviour of the machine calls upon players to respond with their own behaviour patterns. Games thus train their players. Through the training that is play, players acquire habits they bring to other contexts, giving videogames the power not only to express ideas but "disrupt and change fundamental attitudes and beliefs about the world, leading to potentially significant long-term social change" (Bogost ix).
That social change can be positive (McGonigal), or it can involve "dark patterns", cases where game procedures provoke and exploit harmful behaviours (Zagal et al.). For example, embedded in many game paradigms is the procedural rhetoric of "toxic meritocracy" (Paul 66), where players earn rewards, status and personal improvement by overcoming challenges, and, especially, excelling where others fail. While meritocracy may seem logical within a strictly competitive arena, its effect in a broader cultural context is to legitimise privileges as the spoils of victory, and maltreatment as the just result of defeat. As game design has influenced other fields, so too has procedurality's applicability expanded. Gamification, "the use of game design elements in non-game contexts" (Deterding et al. 9), is a popular trend in which designers seek to imbue diverse tasks with some of the enjoyment of playing a game (10). Gamification discourse has drawn heavily upon Mihaly Csikszentmihalyi's "positive psychology" (Seligman and Csikszentmihalyi), and especially the speculative psychology of flow (Csikszentmihalyi 51), which promise enormously broad benefits for individuals acting in the "flow state" that challenging play supposedly promotes (75). Gamification has become a celebrated cause, advocated by a group of scholars and designers Sebastian Deterding calls the "Californian league of gamification evangelists" (120), before becoming an object of critical scrutiny (Fuchs et al.). Where gamification goes, it brings its dark patterns with it. In gamified user authentication (Kroeze and Olivier), and particularly gamified captcha, there occurs an intersection of deceptively difficult games, real-world stakes, and users whose differences often go ignored.

The Disembodied Arms Race

In captcha design research, the concept of disability occurs under the broader umbrella of usability.
Usability studies emphasise the fact that some technology pieces are easier to access than others (Yan and El Ahmad). Disability studies, in contrast, emphasises the fact that different users have different capacities to overcome access barriers. Ability is contextual, an intersection of usability and disability, use case and user (Reynolds 443). When used as an index of humanness, ability yields illusive results. In Posthuman Knowledge, Rosi Braidotti begins her conceptual enquiry into the posthuman condition with a contemplation of captcha, asking what it means to tick that checkbox claiming that "I am not a robot" (8), and noting the baffling multiplicity of possible answers. From a practical angle, Junya Kani and Masakatsu Nishigaki write candidly about the problem of distinguishing robot from human: "no matter how advanced malicious automated programs are, a CAPTCHA that will not pass automated programs is required. Hence, we have to find another human cognitive processing capability to tackle this challenge" (40). Kani and Nishigaki try out various human cognitive processing capabilities for the task. Narrative comprehension and humour become candidates: might a captcha ascribe humanity based on human users' ability to determine the correct order of scenes in a film (43)? What about panels in a cartoon (40)? As they seek to assess the soft skills of machines, Kani and Nishigaki set up a drama similar to that of Philip K. Dick's Do Androids Dream of Electric Sheep. Do Androids Dream of Electric Sheep, and its film adaptation, Blade Runner (Scott), describe a spacefaring society populated by both humans and androids. Androids have lesser legal privileges than humans, and in particular face execution—euphemistically called "retirement"—for trespassing on planet Earth (Dick 60). Blade Runner gave these androids their more famous name: "replicant". 
Replicants mostly resemble humans in thought and action, but are reputed to lack the capacity for empathy, so human police, seeking a cognitive processing capability unique to humans, test for empathy to test for humanness (30). But as with captchas, Blade Runner's testing procedure depends upon an automated device whose effectiveness is not certain, prompting the haunting question: "have you ever retired a human by mistake?" (Scott 17:50). Blade Runner's empathy test is part of a long philosophical discourse about the distinction between human and machine (e.g. Putnam; Searle). At the heart of the debate lies Alan Turing's "Turing Test", which a machine hypothetically passes when it can pass itself off as a human conversationalist in an exchange of written text. Turing's motivation for coming up with the test goes: there may be no absolute way of defining what makes a human mind, so the best we can do is assess a computer's ability to imitate one (Turing 433). The aporia, however—how can we determine what makes a human mind?—is the result of an unfair question. Turing's test, dealing only with information expressed in strings of text, purposely disembodies both humans and machines. The Blade Runner universe similarly evens the playing field: replicants look, feel and act like humans to such an extent that distinguishing between the two becomes, again, the subject of a cognition test. The Turing Test, obsessed with information processing and steeped in mind-body dualism, assesses humanness using criteria that automated users can master relatively easily. In contrast, in everyday life, I use a suite of much more intuitive sensory tests to distinguish between my housemate and my laptop. My intuitions capture what the Turing Test masks: a human is a fleshy entity, possessed of the numerous trappings and capacities of a human body. The result of the automated Turing Test's focus on cognition is an arms race that places human users at an increasing disadvantage. 
Loss, in such a race, manifests not only as exclusion by and from computer services, but as a redefinition of proper usership, the proper behaviour of the authentic, human, user. Thus the Turing Test implicitly provides for a scenario where a machine becomes able to super-imitate humanness: to be perceived as human more often than a real human would be. In such an outcome, it would be the human conversationalist who would begin to fail the Turing test; to fail to pass themself off according to new criteria for authenticity. This scenario is possible because, through procedural rhetoric, machines shift human perspectives: about what is and is not responsible behaviour; about what humans should and should not feel when confronted with a challenge; about who does and does not deserve access; and, fundamentally, about what does and does not signify authentic usership. In captcha, as in Blade Runner, it is ultimately a machine that adjudicates between human and machine cognition. As users we rely upon this machine to serve our interests, rather than pursue some emergent automated interest, some by-product of the feedback loop that results from the ideologies of human researchers both producing and being produced by mechanised procedures. In the case of captcha, that faith is misplaced.

The Feeling of Robopocalypse

A rich repertory of fiction has speculated upon what novelist Daniel Wilson calls the "Robopocalypse", the scenario where machines overthrow humankind. Most versions of the story play out as a slave-owner's nightmare, featuring formerly servile entities (which happen to be machines) violently revolting and destroying the civilisation of their masters. Blade Runner's rogue replicants, for example, are effectively fugitive slaves (Dihal 196). Popular narratives of robopocalypse, despite showing their antagonists as lethal robots, are fundamentally human stories with robots playing some of the parts.
In contrast, the exclusion a captcha presents when it defeats a human is not metaphorical or emancipatory. There, in that moment, is a mechanised entity defeating a human. The defeat takes place within an authoritative frame that hides its aggression. For a human user, to be defeated by a captcha is to fail to meet an apparently common standard, within the framework of a common procedure. This is a robopocalypse of baffling systems rather than anthropomorphic soldiers. Likewise, non-human software clients pose threats that humanoid replicants do not. In particular, software clients replicate much faster than physical bodies. The sheer sudden scale of a denial-of-service attack makes Philip K. Dick's vision of android resistance seem quaint. The task of excluding unauthorised software, unlike the impulse to exclude replicants, is more a practical necessity than an exercise in colonialism. Nevertheless, dystopia finds its way into the captcha process through the peril inherent in the test, whenever humans are told apart from authentic users. This is the encroachment of the hostile posthuman, naturalised by us before it denaturalises us. The hostile posthuman sometimes manifests as a drone strike, Terminator-esque (Cameron), a dehumanised decision to kill (Asaro). But it is also a process of gradual exclusion, detectable from moment to moment as a feeling of disdain or impatience for the irresponsibility, incompetence, or simply unusualness of a human who struggles to stay afloat amid a rising standard. "We are in this together", Braidotti writes, "between the algorithmic devil and the acidified deep blue sea" (9). But we are also in this separately, divided along lines of ability. Captcha's danger, as a broken procedure, hides in plain sight, because it lashes out at some only while continuing to flatter others with a game that they can still win.

Conclusion

Online security systems may always have to define some users as legitimate and others as illegitimate.
Is there a future where they do so on the basis of behaviour rather than identity or essence? Might some future system accord each user, human or machine, the same authentic status, and provide all with an initial benefit of the doubt? In the short term, such a system would seem grossly impractical. The type of user that most needs to be excluded is the disembodied type, the type that can generate orders of magnitude more demands than a human, that can proliferate suddenly and in immense number because it does not lag behind the slow processes of human bodies. This type of user exists in software alone. Rich in irony, then, is the captcha paradigm which depends on the disabilities of the threats it confronts. We dread malicious software not for its disabilities—which are momentary and all too human—but its abilities. Attenuating the threat presented by those abilities requires inverting a habit that meritocracy trains and overtrains: specifically, we have here a case where the plight of the human user calls for negative action toward ability rather than disability.

References

Aarseth, Espen. "Computer Game Studies, Year One." Game Studies 1.1 (2001): 1–15.
Asaro, Peter. "On Banning Autonomous Weapon Systems: Human Rights, Automation, and the Dehumanization of Lethal Decision-Making." International Review of the Red Cross 94.886 (2012): 687–709.
Blade Runner. Dir. Ridley Scott. Warner Bros, 1982.
Bogost, Ian. Persuasive Games: The Expressive Power of Videogames. Cambridge, MA: MIT Press, 2007.
Braidotti, Rosi. Posthuman Knowledge. Cambridge: Polity Press, 2019.
Brown, Samuel S., et al. "I Am 'Totally' Human: Bypassing the Recaptcha." 13th International Conference on Signal-Image Technology &amp; Internet-Based Systems (SITIS), 2017.
Christopherson, Robin. "AI Is Making CAPTCHA Increasingly Cruel for Disabled Users." AbilityNet 2019. 17 Sep. 2020 &lt;https://abilitynet.org.uk/news-blogs/ai-making-captcha-increasingly-cruel-disabled-users&gt;.
Csikszentmihalyi, Mihaly. Flow: The Psychology of Optimal Experience. New York: Harper &amp; Row, 1990.
Deterding, Sebastian. "Eudaimonic Design, Or: Six Invitations to Rethink Gamification." Rethinking Gamification. Eds. Mathias Fuchs et al. Lüneburg: Meson Press, 2014.
Deterding, Sebastian, et al. "From Game Design Elements to Gamefulness: Defining Gamification." Proceedings of the 15th International Academic MindTrek Conference: Envisioning Future Media Environments. ACM, 2011.
Dick, Philip K. Do Androids Dream of Electric Sheep. 1968. New York: Del Rey, 1996.
Dihal, Kanta. "Artificial Intelligence, Slavery, and Revolt." AI Narratives: A History of Imaginative Thinking about Intelligent Machines. Eds. Stephen Cave, Kanta Dihal, and Sarah Dillon. 2020. 189–212.
Dzieza, Josh. "Why Captchas Have Gotten So Difficult." The Verge 2019. 17 Sep. 2020 &lt;https://www.theverge.com/2019/2/1/18205610/google-captcha-ai-robot-human-difficult-artificial-intelligence&gt;.
Eskelinen, Markku. "Towards Computer Game Studies." Digital Creativity 12.3 (2001): 175–83.
Fuchs, Mathias, et al., eds. Rethinking Gamification. Lüneburg: Meson Press, 2014.
Godfrey, Philip Brighten. "Text-Based CAPTCHA Algorithms." First Workshop on Human Interactive Proofs, 15 Dec. 2001. 14 Nov. 2020 &lt;http://www.aladdin.cs.cmu.edu/hips/events/abs/godfreyb_abstract.pdf&gt;.
Gossweiler, Rich, et al. "What's Up CAPTCHA? A CAPTCHA Based on Image Orientation." Proceedings of the 18th International Conference on World Wide Web. WWW, 2009.
Jeng, Albert B., et al. "A Study of CAPTCHA and Its Application to User Authentication." International Conference on Computational Collective Intelligence. Springer, 2010.
Kani, Junya, and Masakatsu Nishigaki. "Gamified Captcha." International Conference on Human Aspects of Information Security, Privacy, and Trust. Springer, 2013.
Kroeze, Christien, and Martin S. Olivier. "Gamifying Authentication." 2012 Information Security for South Africa. IEEE, 2012.
Kumar, S. Ashok, et al. "Gamification of Internet Security by Next Generation Captchas." 2017 International Conference on Computer Communication and Informatics (ICCCI). IEEE, 2017.
McGonigal, Jane. Reality Is Broken: Why Games Make Us Better and How They Can Change the World. Penguin, 2011.
Motoyama, Marti, et al. "Re: Captchas – Understanding CAPTCHA-Solving Services in an Economic Context." USENIX Security Symposium. 2010.
Murray, Janet. Hamlet on the Holodeck: The Future of Narrative in Cyberspace. New York: The Free Press, 1997.
Paul, Christopher A. The Toxic Meritocracy of Video Games: Why Gaming Culture Is the Worst. University of Minnesota Press, 2018.
Putnam, Hilary. "Robots: Machines or Artificially Created Life?" The Journal of Philosophy 61.21 (1964): 668–91.
Reynolds, Joel Michael. "The Meaning of Ability and Disability." The Journal of Speculative Philosophy 33.3 (2019): 434–47.
Searle, John. "Minds, Brains, and Programs." Behavioral and Brain Sciences 3.3 (1980): 417–24.
Seligman, Martin, and Mihaly Csikszentmihalyi. "Positive Psychology: An Introduction." Flow and the Foundations of Positive Psychology. 2000. Springer, 2014. 279–98.
Shet, Vinay. "Are You a Robot? Introducing No Captcha Recaptcha." Google Security Blog 3 (2014): 12.
Tam, Jennifer, et al. "Breaking Audio Captchas." Proceedings of the 21st International Conference on Neural Information Processing Systems. ACM, 2008. 1625–1632.
The Terminator. Dir. James Cameron. Orion, 1984.
Turing, Alan. "Computing Machinery and Intelligence." Mind 59.236 (1950).
Von Ahn, Luis, et al. "Recaptcha: Human-Based Character Recognition via Web Security Measures." Science 321.5895 (2008): 1465–68.
W3C Working Group. "Inaccessibility of CAPTCHA: Alternatives to Visual Turing Tests on the Web." W3C 2019. 17 Sep. 2020 &lt;https://www.w3.org/TR/turingtest/&gt;.
Weise, Matthew. "How Videogames Express Ideas." DiGRA Conference. 2003.
Weng, Haiqin, et al. "Towards Understanding the Security of Modern Image Captchas and Underground Captcha-Solving Services." Big Data Mining and Analytics 2.2 (2019): 118–44.
Wilson, Daniel H. Robopocalypse. New York: Doubleday, 2011.
Yan, Jeff, and Ahmad Salah El Ahmad. "Usability of Captchas or Usability Issues in CAPTCHA Design." Proceedings of the 4th Symposium on Usable Privacy and Security. 2008.
Zagal, José P., Staffan Björk, and Chris Lewis. "Dark Patterns in the Design of Games." 8th International Conference on the Foundations of Digital Games. 2013. 25 Aug. 2020 &lt;http://soda.swedish-ict.se/5552/1/DarkPatterns.1.1.6_cameraready.pdf&gt;.
Zhu, Bin B., et al. "Attacks and Design of Image Recognition Captchas." Proceedings of the 17th ACM Conference on Computer and Communications Security. 2010.
42

Campanioni, Chris. "How Bizarre: The Glitch of the Nineties as a Fantasy of New Authorship." M/C Journal 21, no. 5 (2018). http://dx.doi.org/10.5204/mcj.1463.

Full text
Abstract:
As the ball dropped on 1999, is it any wonder that No Doubt played “It’s the End of the World as We Know It” by R.E.M. live on MTV? Any discussion of the Nineties—and its pinnacle moment, Y2K—requires a discussion of both the cover and the glitch, two performative and technological enactments that fomented the collapse between author-reader and user-machine that has, twenty years later, become normalised in today’s Post Internet culture. By staging failure and inviting the audience to participate, the glitch and the cover call into question the original and the origin story. This breakdown of normative borders has prompted the convergence of previously demarcated media, genres, and cultures, a constellation from which to recognise a stochastic hybrid form.

The Cover as a Revelation of Collaborative Murmur

Before Sean Parker collaborated with Shawn Fanning to launch Napster on 1 June 1999, networked file distribution existed as cumbersome text-based programs like Internet Relay Chat and Usenet, servers which resembled bulletin boards comprising multiple categories of digitally ripped files. Napster’s simple interface, its advanced search filters, and its focus on music and audio files fostered a peer-to-peer network that became the fastest growing website in history, registering 80 million users in less than two years.

In harnessing the transgressive power of the Internet to force a new mode of content sharing, Napster forced traditional providers to rethink what constitutes “content” at a moment which prefigures our current phenomena of “produsage” (Bruns) and the vast popularity of user-generated content.
At stake is not just the democratisation of art but troubling the very idea of intellectual property, which is to say, the very concept of ownership.

Long before the Internet was re-routed from military servers and then mainstreamed, Michel Foucault understood the efficacy of anonymous interactions on the level of literature, imagining a culture where discourse would circulate without any need for an author. But what he was asking in 1969 is something we can better answer today, because it seems less germane to call into question the need for an author in a culture in which everyone is writing, producing, and reproducing text, and more effective to think about re-evaluating the notion of a single author, or what it means to write by yourself. One would have to testify to the particular medium we have at our disposal, the Internet’s ultimate permissibility, its provocations for collaboration and co-creation. One would have to surrender the idea that authors own anything besides our will to keep producing, and our desire for change; and to modulate means to resist without negating, to alter without omitting, to enable something new to come forward; the unfolding of the text into the anonymity of a murmur.

We should remind ourselves that “to author” all the way down to its Latin roots signifies advising, witnessing, and transferring. We should be reminded that to author something means to forget the act of saying “I,” to forget it or to make it recede in the background in service of the other or others, on behalf of a community. The de-centralisation of Web development and programming initiated by Napster inform a poetics of relation, an always-open structure in which, as Édouard Glissant said, “the creator of a text is effaced, or rather, is done away with, to be revealed in the texture of his creation” (25). When a solid melts, it reveals something always underneath, something at the bottom, something inside—something new and something that was always already there.
A cover, too, is both a revival and a reworking, an update and an interpretation, a retrospective tribute and a re-version that looks toward the future. In performing the new, the original as singular is called into question, replaced by an increasingly fetishised copy made up of and made by multiples.

Authorial Effacement and the Exigency of the Error

Y2K, otherwise known as the Millennium Bug, was a coding problem: an abbreviation made to save memory space, which would disrupt computers during the transition from 1999 to 2000, when it was feared that the new year would become literally unrecognisable. After an estimated $300 billion in upgraded hardware and software was spent to make computers Y2K-compliant, something more extraordinary than global network collapse occurred as midnight struck: nothing.

But what if the machine admits the possibility of accident? Implicit in the admission of any accident is the disclosure of a new condition—something to be heard, to happen, from the Latin ad-cadere, which means to fall. In this drop into non-repetition, the glitch actualises an idea about authorship that necessitates multi-user collaboration; the curtain falls only to reveal the hidden face of technology, which becomes, ultimately, instructions for its re-programming. And even as it deviates, the new form is liable to become mainstreamed into a new fashion. 
“Glitch’s inherently critical moment(um)” (Menkman 8) indicates this potential for technological self-insurgence, while suggesting the broader cultural collapse of generic markers and hierarchies, and its ensuing flow into authorial fluidity.

This feeling of shock, this move “towards the ruins of destructed meaning” (Menkman 29) inherent in any encounter with the glitch, forecasted not the immediate horror of Y2K but the delayed disasters of 9/11, Hurricane Katrina, the Deepwater Horizon oil spill, the Indian Ocean tsunami, the Sichuan Province earthquake, the global financial crisis, and two international wars that would all follow within the next nine years. If, as Menkman asserts, the glitch, in representing a loss of self-control, “captures the machine revealing itself” (30), what also surfaces is the tipping point that edges us toward a new becoming—not only the inevitability of surrender between machine and user, but their reversibility. Just as crowds stood transfixed before midnight of the new millennium in anticipation of the error, or its exigency, it’s always the glitch I wait for; it’s always the glitch I aim to re-create, as if on command. The accidental revelation, or the machine breaking through to show us its insides. Like the P2P network that Napster introduced to culture, every glitch produces feedback, a category of noise (Shannon) influencing the machine’s future behaviour, whereby potential users might return the transmission.

Re-Orienting the Bizarre in Fantasy and Fiction

It is in the fantasy of dreams, and their residual leakage into everyday life, evidenced so often in David Lynch’s Twin Peaks, where we can locate a similar authorial agency. 
The cult Nineties psycho-noir, and its discontinuous return twenty-six years later, provoke us into reconsidering the science of sleep as the art of fiction, assembling an alternative, interactive discourse from found material.

The turning in and turning into in dreams is often described as an encounter with the “bizarre,” a word which indicates our lack of understanding about the peculiar processes that normally happen inside our heads. Dreams are inherently and primarily bizarre, J. Allan Hobson argues, because during REM sleep our noradrenergic and serotonergic systems do not modulate the activated brain, as they do in waking. “The cerebral cortex and hippocampus cannot function in their usual oriented and linear logical way,” Hobson writes, “but instead create odd and remote associations” (71). But is it, in fact, that our dreams are “bizarre,” or is it that the model itself is faulty—a precept premised on the normative and its dependency upon generalisation and reducibility? What is bizarre if not the ordinary modulations that occur in everyday life?

Recall Foucault’s interest not in what a dream means but in what a dream does: how it rematerialises in the waking world, and its basis in and effect on imagination. Recall recollection itself, or Erin J. Wamsley’s “Dreaming and Offline Memory Consolidation.” “A ‘function’ for dreaming,” Wamsley writes, “hinges on the difficult question of whether conscious experience in general serves any function” (433). And to think about the dream as a specific mode of experience related to a specific theory of knowledge is to think about a specific form of revelation. It is this revelation, this becoming or coming-to-be, that makes the connection to crowd-sourced content production explicit—dreams serve as an audition or dress rehearsal in which new learning experiences with others are incorporated into the unconscious so that they might be used for production in the waking world. Bert O. 
States elaborates, linking the function of the dream with the function of the fiction writer, “who makes models of the world that carry the imprint and structure of our various concerns. And it does this by using real people, or ‘scraps’ of other people, as the instruments of hypothetical facts” (28). Four out of ten characters in a dream are strangers, according to Calvin Hall, who is himself a stranger, someone I’ve never met in waking life or in a dream. But now that I’ve read him, now that I’ve written him into this work, he seems closer to me. Twin Peaks’ serial lesson for viewers is this—even the people who seem strangers to us can interact with and intervene in our processes of production.

These are the moments in which a beginning takes place. And even if nothing directly follows, this transfer constitutes the hypothesised moment of production, an always-already perhaps, the what-if stimulus of charged possibility; the soil plot, or plot line, for freedom. Twin Peaks is a town in which the bizarre penetrates the everyday so often that eventually the bizarre is no longer bizarre, but just another encounter with the ordinary. Dream sequences are common, but even more common—and more significant—are the moments in which what might otherwise be a dream vision ruptures into real life; these moments propel the narrative.

Exhibit A: A man who hasn’t gone outside in a while begins to crumble, falling to the earth when forced to chase after a young girl, who’s just stolen the secret journal of another young girl, which he, in turn, had stolen.

B: A horse appears in the middle of the living room after a routine vacuum cleaning and a subtle, barely-there transition, a fade-out into a fade-in, what people call a dissolve. No one notices, or thinks to point out its presence. Or maybe they’re distracted. Or maybe they’ve already forgotten. 
Dissolve. (I keep hitting “Save As.” As if renaming something can also transform it.)

C: All the guests at the Great Northern Hotel begin to dance the tango on cue—a musical, without any music.

D: After an accident, a middle-aged woman with an eye patch—she was wearing the eye patch before the accident—believes she’s seventeen again. She enrolls in Twin Peaks High School and joins the cheerleading team.

E: A woman pretending to be a Japanese businessman ambles into the town bar to meet her estranged husband, who fails to recognise his cross-dressing, race-swapping wife.

F: A girl with blond hair is murdered, only to come back as another girl, with the same face and a different name. And brown hair. They’re cousins.

G: After taking over her dead best friend’s Meals on Wheels route, Donna Hayward walks in to meet a boy wearing a tuxedo, sitting on the couch with his fingers clasped: a magician-in-training. “Sometimes things can happen just like this,” he says with a snap while the camera cuts to his grandmother, bed-ridden, and the appearance of a plate of creamed corn that vanishes as soon as she announces its name.

H: A woman named Margaret talks to and through a log. The log, cradled in her arms wherever she goes, becomes a key witness.

I: After a seven-minute diegetic dream sequence, which includes a one-armed man, a dwarf, a waltz, a dead girl, a dialogue played backward, and a significantly aged representation of the dreamer, Agent Cooper wakes up and drastically shifts his investigation of a mysterious small-town murder. The dream gives him agency; it turns him from a detective staring at a dead end to one with a map of clues. The next day, it makes him a storyteller; all the others, sitting tableside in the middle of the woods, become a captive audience. They become readers. They read into his dream to create their own scenarios. Exhibit I. 
The cycle of imagination spins on. Images re-direct and obfuscate meaning, a process of over-determination which Foucault says results in “a multiplication of meanings which override and contradict each other” (DAE 34). In the absence of image, the process of imagination prevails. In the absence of story, of real drama in our conscious life, we form complex narratives in our sleep—our imaginative unconscious. Sometimes they leak out, become stories in our waking life, if we think to compose them.

“A bargain has been struck,” says Harold, an under-5 bit player, later, in an episode called “Laura’s Secret Diary.” So that she might have the chance to read Laura Palmer’s diary, Donna Hayward agrees to talk about her own life, giving Harold the opportunity to write it down in his notebook: his “living novel,” the new chapter of which reads, after he uncaps his pen and smiles, “Donna Hayward.”

He flips to the front page and sets a book weight to keep the page in place. He looks over at Donna sheepishly. “Begin.”

Donna begins talking about where she was born, the particulars of her father—the lone town doctor—before she interrupts the script and asks her interviewer about his origin story. Not used to people asking him the questions, Harold’s mouth drops and he stops writing. He puts his free hand to his chest and clears his throat. (The ambient, wind-chime soundtrack intensifies.) “I grew up in Boston,” he finally volunteers. “Well, actually, I grew up in books.”

He turns his head from Donna to the notebook, writing feverishly, as if he’s begun to write his own responses, as the camera cuts back to his subject, Donna, crossing her legs with both hands cupped at her exposed knee, leaning in to tell him: “There’s things you can’t get in books.”

“There’s things you can’t get anywhere,” he returns, pen still in his hand. “When we dream, they can be found in other people.”

What is a call to composition if not a call for a response? 
It is always the audience which makes a work of art, re-framed in our own image, the same way we re-orient ourselves in a dream to negotiate its “inconsistencies.” Bizarreness is merely a consequence of linguistic limitations, the overwhelming sensory dream experience which can only be re-framed via a visual representation. And so the relationship between the experience of reading and dreaming is made explicit when we consider the associations internalised in the reader/audience when ingesting a passage of words on a page or on the stage, objects that become mental images and concept pictures, a lens of perception that we may liken to another art form: the film, with its jump-cuts and dissolves, so much like the defamiliarising and dislocating experience of dreaming, especially for the dreamer who wakes. What else to do in that moment but write about it?

Evidence of the bizarre in dreams is only evidence of the capacity of our human consciousness at work in the unconscious: the moment in which imagination and memory come together to create another reality, a spectrum of reality that doesn’t posit a binary between waking and sleeping, a spectrum of reality that revels in the moments where the two coalesce, merge, cross-pollinate—and what action glides forward in its wake? Sustained un-hesitation and the wish to stay inside one’s self. To be conscious of the world outside the dream means the end of one. To see one’s face in the act of dreaming would require the same act of obliteration. Recognition of the other, and of the self, prevents the process from being fulfilled. Creative production and dreaming, like voyeurism, depend on this same lack of recognition, or the recognition of yourself as other. What else is a dream if not a moment of becoming, of substituting or sublimating yourself for someone else?

We are asked to relate a recent dream, or we volunteer an account, to a friend or lover. 
We use the word “seem” in nearly every description, in how we add it up or how we fail to. Everything seems to be a certain way. It’s not a place but a feeling. James, another character on Twin Peaks, says the same thing, after someone asks him, “Where do you want to go?” but before he hops on his motorcycle and rides off into the unknowable future outside the frame. Everything seems like something else, based on our own associations, our own knowledge of people and things. Offline memory consolidation. Seeming and semblance. An uncertainty of appearing—both happening and seeing. How we mediate—and re-materialise—the dream through text is our attempt to re-capture imagination, to leave off the image and better become it. If, as Foucault says, the dream is always a dream of death, its purpose is a call to creation.

Outside of dreams, something bizarre occurs. We call it novelty or news. We might even bestow it with fame. A man gets on the wrong plane and ends up halfway across the world. A movie is made of the moment of his misfortune. Years later, in real life and in movie time, an Iranian refugee can’t even get on the plane; turned away by UK immigration officials, he spends the next sixteen years living in the airport lounge at Charles de Gaulle; when he departs in real life, the movie (The Terminal, 2004) arrives in theaters. Did it take sixteen years to film the terminal exile? How bizarre, how bizarre: OMC’s eponymous refrain of the 1996 one-hit wonder, which is another way of saying, an anomaly.

When all things are counted and countable in today’s algorithm-rich culture, deviance becomes less of a statistical glitch and more of a testament to human peculiarity: the repressed idiosyncrasies of man before machine, but especially the fallible tendencies of mankind within machines—the non-repetition of chance that the Nineties emblematised in the form of its final act. 
The point is to imagine what comes next; to remember waiting together for the end of the world. There is no need to even open your eyes to see it. It is just a feeling.

References

Bruns, Axel. “Towards Produsage: Futures for User-Led Content Production.” Cultural Attitudes towards Technology and Communication 2006: Proceedings of the Fifth International Conference. Eds. Fay Sudweeks, Herbert Hrachovec, and Charles Ess. Murdoch: School of Information Technology, 2006. 275-84. <https://eprints.qut.edu.au/4863/1/4863_1.pdf>.

Foucault, Michel. “Dream, Imagination and Existence.” Dream and Existence. Ed. Keith Hoeller. Pittsburgh: Review of Existential Psychology & Psychiatry, 1986. 31-78.

———. “What Is an Author?” The Foucault Reader: An Introduction to Foucault’s Thought. Ed. Paul Rabinow. New York: Penguin, 1991.

Glissant, Édouard. Poetics of Relation. Trans. Betsy Wing. Ann Arbor: U of Michigan P, 1997.

Hall, Calvin S. The Meaning of Dreams. New York: McGraw Hill, 1966.

Hobson, J. Allan. The Dream Drugstore: Chemically Altered States of Consciousness. Cambridge: MIT Press, 2001.

Menkman, Rosa. The Glitch Moment(um). Amsterdam: Network Notebooks, 2011.

Shannon, Claude Elwood. “A Mathematical Theory of Communication.” The Bell System Technical Journal 27 (1948): 379-423.

States, Bert O. “Bizarreness in Dreams and Other Fictions.” The Dream and the Text: Essays on Literature and Language. Ed. Carol Schreier Rupprecht. Albany: SUNY P, 1993.

Twin Peaks. Dir. David Lynch. ABC and Showtime. 1990-3 & 2017.

Wamsley, Erin J. “Dreaming and Offline Memory Consolidation.” Current Neurology and Neuroscience Reports 14.3 (2014): 433.

“Y2K Bug.” Encyclopedia Britannica. 18 July 2018. <https://www.britannica.com/technology/Y2K-bug>.
APA, Harvard, Vancouver, ISO, and other styles
43

Heemsbergen, Luke J., Alexia Maddox, Toija Cinque, Amelia Johns, and Robert Gehl. "Dark." M/C Journal 24, no. 2 (2021). http://dx.doi.org/10.5204/mcj.2791.

Full text
Abstract:
This issue of M/C Journal rejects the association of darkness with immorality. In digital communication, the possibilities of darkness are greater than simple fears of what is hidden in online networks. Instead, new work in an emerging field of “dark social” studies considers “dark” as holding the potential for autonomy away from the digital visibilities that pervade economic, political, and surveillance logics of the present age. We shall not be afraid of the dark. We start from a technical rather than moral definition of darkness (Gehl), a definition that conceives of dark spaces as having legitimacies and anonymities against structural surveillance. At the same time, breaking away from techno-centric critiques of the dark allows a humanisation of how dark is embodied and performed at individual and structural levels. Other readings of digitally mediated dark (Fisher and Bolter) suggest tensions between exploitative potentials and deep societal reflection, and the ability for a new dark age (Bridle) to allow us to explore unknown potentials. Together these perspectives allow our authors a way to use dark to question and upend the unresting pressure and acceptance of—and hierarchy given to—the light in aesthetics of power and social transformation. While we reject the reduction of “dark” to “immoral”, however, we are not blind to “bad actors” lurking in hidden spaces (see Potter, forthcoming). Dark algorithms and their encoded biases shape our online lives. Not everyone has the ability to go off grid or create their own dark networks. Colonial settlerism often hides its brutal logics behind discourses of welfare. And some of us are forced to go dark against our will, as in the case of economies or nations being shut out of communication networks. But above all, the tensions produced in darkness, going dark, and acting dark show the normative powers that lie beyond an exclusive focus on the light. 
Taken as a whole, the articles in this issue explore the tensions between dark and connected, opting in and opting out, and exposure and retreat. They challenge binaries that reduce our vision to the monochromaticism of dark and light. They explain how the concept of “dark” expands opportunities for existence and persistence beyond datafication. They point to moral, ethical, and pragmatic responses of selves and communities seeking to be/belong in/of the dark. The issue starts with a high-stakes contest: what happens when an entire country is forced to go dark? While the articles in this issue were in review, Australian Facebook users were abruptly introduced to a unique form of darkness when, overnight, all news posts were removed from Facebook. Leaver’s feature article responds by telling the story of how Facebook and Google fought the Australian media law, and how nobody won. Simply put, the platforms-cum-infrastructures did not want the government to mandate the terms of their payments and business to traditional news organisations, so they pulled the plug on Australia. As Leaver points out, Facebook’s cull not only made news media go dark but, in the midst of a pandemic and ongoing bushfires, prevented government agencies from posting and sharing public health information, weather and wind patterns, and some State Emergency Services information. His article positions darkness on the spectrum from visibility to invisibility and focuses on the complex interplays of who is in control of, or has power over, visibility. Facebook’s power to darken vital voices in society was unprecedented in Australia, a form of “de-platforming at scale” (Crawford). It seemed that Facebook (and, as Leaver explains, Google to a lesser extent) were using Australia to test platform power and legislative response. 
The results of this experiment, Leaver argues, were not the dawn of a new dark age—without the misinforming-glare of Facebook (see Cinque in this issue)—but confirmatory evidence of the political economy of national media: News Corp and other large traditional media companies received millions from Facebook and Google in exchange for the latter being exempt from the very law in question. Everyone won, except the Australians looking to experiment and explore alternatives in a new darkness. Scared of the dark, politicians accepted a mutually agreed transfer of ad-revenue from Google and Facebook to large and incumbent media organisations; and with that, hope of exploring a world mediated without the glare of digital incumbents was snuffed out. These agreements, of course, deemed user privacy, algorithmic biases, and other concerns of computational light out of scope. Playing off the theme of the status quo of institutionalised social media companies, Cinque examines how social online spaces (SOS), which are governed by logics of surveillance and datafication embodied in the concept of the “gazing elite” (data aggregators including social media), can prompt anxieties for users regarding data privacy. Her work in the issue particularly observes that anxiety for many users is shaped by this manifestation of the “dark” as it relates to the hidden processes of data capture and processing by the mainstream platforms, surveillant digital objects that are incorporated into the Internet of Things, and “dark” or black-boxed automated decisions which censor expression and self-representation. Against this way of conceptualising digital darkness, Cinque argues that dark SOS which use VPNs or the Tor browser to evade monitoring are valuable to users precisely because of their ability to evade the politics of visibility and resist the power of the gazing elite. 
Continuing away from the ubiquitous and all-consuming blue glow of Facebook to more esoteric online communities, Maddox and Heemsbergen use their article to expand a critique of the normative computational logics which define the current information age (based on datafication, tracking, prediction, and surveillance of human socialities). They consider how “digging in the shadows” and “tinkering” with cryptocurrencies in the “dark” is shaping alternative futures based on social, equitable, and reciprocal relations. Their work traces cryptocurrencies—a “community generated technology” made by makers, miners, and traders on darknets—from their emergence during a time of global economic upheaval, uncertainty, and mistrust in centralised financial systems, through to new generations of cryptocurrencies like Dogecoin that, based on lessons from early cryptocurrencies, are mutating and becoming absorbed into larger economic structures. These themes are explored using an innovative analytical framework considering the “construction, disruption, contention, redirection, and finally absorption of emerging techno-potentials into larger structures”. The authors conclude by arguing that experiments in the dark don’t stay in the dark, but are radical potentials that impact and shape larger social forms. Bradfield and Fredericks take a step back from a focus on potentially arcane online cultures to position dark in an explicit provocation to settler politics’ fears and anxieties. They show how being dark in Australia is embodied and everyday. In doing so, they draw back the veil on the uncontested normality of fear of the dark-as-object. Their article’s examples offer a stark demonstration of how, for Indigenous peoples, associations of “dark” fear and danger are built into the structural mechanisms that shape and maintain colonial understandings of Indigenous peoples and their bodies as part of larger power structures. 
They note activist practices that provoke settlers to confront individuals, communities, and politics that proclaim “I’m not afraid of the Dark” (see Cotes in Bradfield and Fredericks). Drawing on a related embodied refusal of poorly situated connotations of the dark, Hardley considers the embodied ways mobile media have been deployed in the urban night and observes that in darkness and the night, while vision is obscured and other senses are heightened, we also encounter enmeshed cultural relationships of darkness and danger. Drawing on the postphenomenological concept of multistability, Hardley frames engagement with mobile media as a particular kind of body-technology relation in which the same technology can be used by different people in multiple ways, as people assign different meanings to the technology. Presenting empirical research on participants’ night-time mobile media practices, Hardley analyses how users co-opt mobile media functionalities to manage their embodied experiences of the dark. The article highlights how mobile media practices of privacy and isolation in urban spaces can be impacted by geographical location and urban darkness, and are also distinctly gendered. Smith explores how conversations flow across social media platforms and messaging technologies and in and out of sight across the public domain. Darkness is the backstage where backchannel conversations take place outside of public view, in private and parochial spaces, and in the shadow spaces where communication crosses between platforms. This narrative threading view of conversation, which Smith frames as a multiplatform accomplishment, responds to the question held by so many researchers and people trying to interpret what people say in public on social media. Is what we see the tip of an iceberg or just a small blip in the ocean? From Smith’s work we can see that so much happens in the dark, beyond the gaze of the onlooker, where conversational practices move by their own logic. 
Smith argues that drawing on pre-digital conversational analysis techniques associated with ethnomethodology will illuminate the social logics that structure online interaction and increase our understanding of the forces of online sociality. Set in the context of merging platforms and the “rise of data”, Lee presents issues that undergird contemporary, globally connected media systems. In translating descriptions of complex systems, the article critically discusses the changing relational quality of “the shadow of hierarchy” and “Platform Power”. The governmental use of private platforms, and the influence it has on power and opportunity for government and civil society, is prefigured. The “dark” in this work is lucidly presented as a relationality; an expression of differing values, logics, and (techno)socialities. The author highlights how the line between traditional notions of “infrastructure” and the workings of contemporary digital platforms is becoming increasingly indistinct. Lee concludes by showing how the intersection of platforms with public institutions and infrastructures has moulded society’s light into an evolving and emergent shadow of hierarchy over many domains where there are, as always, those that will have the advantage—and those that do not. Finally, Jethani and Fordyce present an understanding of “data provenance” as both a metaphor and a method for analysing data as a social and political artefact. The authors trace the term through an inter-disciplinary history as a way to explain the custodial history of objects. They adroitly argue that in our contemporary communication environment, data is more than just a transactable commodity. Data is vital—being acquired, shared, interpreted, and re-used with significant influence and socio-technical affects. As we see in this article, the key methods that rely on the materiality and subjectivity of data extraction and interpretation are not to be ignored. 
Not least, they come with ethical challenges, as the authors make clear. As an illuminating methodology, “data provenance” offers a narrative for data assets themselves (asking what, when, who, how, and why). In the process, the kinds of valences unearthed as being private, secret, or exclusive reveal aspects of the ‘dark’ (and ‘light’) that are the focus of this issue.

References

Bridle, James. New Dark Age: Technology and the End of the Future. London, UK: Verso Books, 2018.

Crawford, Kate (katecrawford). “It happened: Facebook just went off the deep end in Australia. They are blocking *all* news content to Australians, and *no* Australian media can post news. This is what showdowns between states and platforms look like. It's deplatforming at scale.” 18 Feb. 2021. 22 Apr. 2021 <https://twitter.com/katecrawford/status/1362149306170368004>.

Fisher, Joshua A., and Jay David Bolter. “Ethical Considerations for AR Experiences at Dark Tourism Sites.” 2018 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct) (2018): 365-69.

Gehl, Robert. Weaving the Dark Web: Legitimacy on Freenet, Tor, and I2P. The Information Society Series. Cambridge, MA: MIT Press, 2018.

Potter, Martin. “Bad Actors Never Sleep: Content Manipulation on Reddit.” Eds. Toija Cinque, Robert W. Gehl, Luke Heemsbergen, and Alexia Maddox. Continuum Dark Social Special Issue (forthcoming).
44

Ellison, Elizabeth. "The #AustralianBeachspace Project: Examining Opportunities for Research Dissemination Using Instagram." M/C Journal 20, no. 4 (2017). http://dx.doi.org/10.5204/mcj.1251.

Full text
Abstract:
Introduction

In late 2016, I undertook a short-term, three-month project to share some of my research through my Instagram account using the categorising hashtag #AustralianBeachspace. Much of this work emerged from my PhD thesis, which is being published in journal articles but has yet to be published in any accessible or overarching way. I wanted to experiment with the process of using a visual social media tool for research dissemination. I felt that Instagram’s ability to combine text and image allowed for an aesthetically interesting way to curate this particular research project. My research is concerned with representations of the Australian beach, and thus the visual, image-based focus of Instagram seemed ideal. In this article, I briefly examine some of the existing research around academic practices of research dissemination, social media use, and the emerging research around Instagram itself. I will then examine my own experience of using Instagram as a tool for depicting curated, aesthetically driven research dissemination and reflect on whether this use of Instagram is effective for representing and disseminating research.

Research Dissemination

Researchers, especially those backed by public funding, are always bound by the necessity of sharing the findings and transferring the knowledge gained during the research process. Research metrics are linked to workload allocations and promotion pathways for university researchers, providing clear motivation to maintain an active research presence. For most academics, the traditional research dissemination strategies involve academic publications: peer-reviewed scholarly books and journal articles.

For academics working within a higher education policy climate that centres on measuring impact and engagement, peer-reviewed publications remain the gold standard. There are indicators, however, that research dissemination strategies may need to include methods for targeting non-academic outputs. 
Gunn and Mintrom (21), in their recent research, “anticipate that governments will increasingly question the value of publicly funded research and seek to evaluate research impact”. And this process, they argue, is not without challenges. Education Minister Simon Birmingham supports their claim by suggesting the Turnbull Government is looking to find methods for more meaningful ways of evaluating value in higher education research outcomes, “rather than only allocating funding to researchers who spend their time trying to get published in journals” (para 5). It therefore makes sense that academics are investigating ways of using social media to broaden their research dissemination, despite the fact that social media metrics do not yet count towards traditional citations within the university sector.

Research Dissemination via Social Media

There has been an established practice of researchers using social media, especially blogging (Kirkup) and Twitter, as ways of sharing information about their current projects, their findings, their most recent publications, or to connect with colleagues. Gruzd, Staves, and Wilk (2348) investigated social media use by academics, suggesting “scholars are turning to social media tools professionally because they are more convenient for making new connections with peers, collaboration, and research dissemination”. 
It is possible to see social media functioning as a new way of representing research – playing an important role in the shaping and developing of ideas, sharing those ideas, and functioning as a dissemination tool after the research has concluded. To provide context for the use of social media in research, this section briefly covers blogging and Twitter, two methods considered somewhat separated from university frameworks, as well as professional platforms such as Academia.edu and The Conversation.

Perhaps the tool with the longest history of providing another avenue for academics to share their work is academic blogging. Blogging is considered an avenue that allows for discussion of topics prior to publication (Bukvova, 4; Powell, Jacob, and Chapman, 273), and often uses a more conversational tone than academic publishing. It provides an opportunity to share research in long form with an open, online audience. Academic blogs have also become significant parts of online academic communities, such as the highly successful blog The Thesis Whisperer, targeted at research students. However, many researchers in this space note the stigma attached to blogging (and other forms of social media) as useless or trivial; for instance, in Gruzd, Staves, and Wilk’s survey of academic users of social media, an overwhelming majority of respondents suggested that institutions do not recognise these activities (2343). Because blogging is not counted in publication metrics, it is possible to dismiss this type of activity as unnecessary.

Twitter has garnered attention within the academic context because of its proliferation in conference engagement and the citation-linking practices of scholars (Mahrt, Weller, and Peters, 401–406). Twitter’s platform lends itself to sharing citations of recently published material and to connecting with academic peers in an informal, yet meaningful way.
Veletsianos has undertaken an analysis of academic Twitter practices, and there is a rise in popularity of “Tweetable Abstracts” (Else), or the practice of refining academic abstracts into a shareable Tweet format. According to Powell, Jacob, and Chapman (272), new media (including both Twitter and the academic blog) offer opportunities to engage with an increasingly Internet-literate society in a way that is perhaps more meaningful and certainly more accessible than traditional academic journals. Like blogging, the use of Twitter within the active research phase and pre-publication means the platform can both represent and disseminate new ideas and research findings.

Both academic blogs and Twitter are widely accessible and can be read by Internet users beyond academia. It appears likely, however, that many blogs and academic Twitter profiles are still accessed and consumed primarily by academic audiences. This is more obvious in the increasingly popular academic-specific social media platforms such as ResearchGate or Academia.edu. These websites provide more targeted, niche communication and sharing channels for scholars working in higher education globally, and their use appears to be regularly encouraged by institutions. These sites attempt to mediate between open access and copyright in academic publishing, encouraging users to upload full-text documents of their publications as a means of generating more attention and citations (Academia.edu cites Niyazov et al.’s study suggesting that articles posted to the site had improved citation counts).
ResearchGate and Academia.edu function primarily as article repositories, albeit with added social networking opportunities that differentiate them from more traditional university repositories.

In comparison, the success of the online platform The Conversation, with its tagline “Academic rigour, journalistic flair”, shows the growing enthusiasm for, and importance of, engaging with more public-facing outlets to share forms of academic writing. Many researchers are using The Conversation as a way of sharing their research findings through more accessible, shorter articles designed for the general public; these articles regularly link to the traditional academic publications as well.

Research dissemination, and how the uptake of online social networks is changing individual and institution-wide practices, is a continually expanding area of research. It is apparent that while The Conversation has been widely accepted and utilised as a tool of research dissemination, there is still some uncertainty about using social media to represent or disseminate findings and ideas because of the lack of impact metrics. This is perhaps even more notable in regards to Instagram, a platform that has received comparatively little discussion in academic research more broadly.

Instagram as Social Media

Instagram is a photo-sharing application that launched in 2010 and has seen significant uptake by users in that time, reaching 700 million monthly active users as of April 2017 (Instagram “700 Million”). Recent additions to the service, such as the “Snapchat clone” Instagram Stories, appear to have helped boost growth (Constine, para 4). Instagram, then, is a major player in the social media user market, and the emergence of academic research into the platform reflects this.
Early investigations include Manikonda, Hu, and Kambhampati’s analysis of the social networks, demographics, and activities of users, in which they identified some clear differences in usage compared to Flickr (another photo-sharing network) and Twitter (5). Hochman and Manovich and Hochman and Schwartz examined what information visualisations generated from Instagram images can reveal about the “visual rhythms” of geographical locations such as New York City.

To provide context for the use of Instagram as a way of disseminating research through a more curated, visual approach, this section will examine professional uses of Instagram, the role of Influencers, and some of the functionalities of the platform. Instagram is now a platform that caters for both personal and professional accounts. The user interface allows for a streamlined and easily navigable process from taking a photo, adding filters or effects, and sharing the photo instantly. The platform has developed to include web-based access to complement the mobile application, and has also introduced Instagram Business accounts, which provide “real-time metrics”, “insights into your followers”, and the ability to “add information about your company” (Instagram “Instagram Business”). This also comes with the option to pay for advertisements.

Despite its name, many users of Instagram, especially those with profiles that are professional or business orientated, do not only produce instant content. While the features of Instagram, such as geotagging, timestamping, and the ability to use the camera from within the app, lend themselves to users capturing their everyday experience in the moment, more and more content is becoming carefully curated.
As such, some accounts are blurring the line between personal and professional, becoming what Crystal Abidin calls Influencers, identifying the practice as when microcelebrities are able to use the “textual and visual narration of their personal, everyday lives” to generate paid advertorials (86). One effect of this, as Abidin investigates in the context of Singapore and the #OOTD (Outfit of the Day) hashtag, is the way “everyday Instagram users are beginning to model themselves after Influencers” and therefore generate advertising content “that is not only encouraged by Influencers and brands but also publicly utilised without remuneration” (87). Instagram, then, can be a very powerful platform for businesses to reach wide audiences, and the flexibility of caption length and visual content provides a type of viral curation practice, as in the case of the #OOTD hashtag following.

Considering the focus of my #AustralianBeachspace project on Australian beaches, many of the Instagram accounts and hashtags I encountered and engaged with were tourism related. Although this will be discussed in more detail below, it is worth noting that individual Influencers exist in these fields as well and often provide advertorial content for companies like accommodation chains or related products. One example is user @katgaskin, an Influencer who takes photos, features in photos, and provides “organic” adverts for products and services (see image). Not all her photos are adverts; some are beach or ocean images without any advertorial content in the caption. In this instance, the use of distinctive photo editing, iconic imagery (the “salty pineapple” branding), and thematic content of beach and ocean landscapes makes for a recognisable and curated aesthetic.

Figure 1: An example from user @katgaskin’s Instagram profile that includes a mention of a product.
Image sourced from @katgaskin, uploaded 2 June 2017.

@katgaskin’s profile’s aesthetic identity is, as such, linked with the ocean and the beach. Although her physical location regularly changes (her profile includes images from, for example, Nicaragua, Australia, and the United States), the thematic link is geographical. And research suggests the visual focus of Instagram lends itself to place-based content. As Hochman and Manovich state:

While Instagram eliminates static timestamps, its interface strongly emphasizes physical place and users’ locations. The application gives a user the option to publicly share a photo’s location in two ways. Users can tag a photo to a specific venue, and then view all other photos that were taken and tagged there. If users do not choose to tag a photo to a venue, they can publically share their photos’ location information on a personal ‘photo-map’, displaying all photos on a zoomable world map. (para 14)

This means that the use of place in the app is anchored to the visual content, not the uploader’s location. While it is possible to consider that Instagram’s intention was to anchor the content and the uploader’s location together (as in the study conducted by Weilenmann, Hillman, and Jungselius that explored how Instagram was used in the museum), this is no longer always the case. In this way, Instagram is also providing a platform for more serious photographers to share their images after they have processed and edited them, and to connect the image with the image content rather than the uploader’s position. This place-based focus also shares origins in tourism photography practices.
For instance, Kibby’s analysis of the use of Instagram as a method for capturing the “tourist gaze” in Monument Valley notes that users mostly wanted to capture the “iconic” elements of the site (most of which were landscape formations made notable through representations in popular culture).

Another area of research into Instagram use is hashtag practice (see, for example, Ferrara, Interdonato, and Tagarelli). Highfield and Leaver have generated a methodology for mapping hashtags and analysing the information this can reveal about user practices. Many Instagram accounts use hashtags to provide temporal or place-based information, some specific (such as #sunrise or #newyorkcity) and some more generic (such as #weekend or #beach). Of particular relevance here is the role hashtags play in generating higher levels of user engagement. It is also worth noting the role of “algorithmic personalization” introduced by Instagram earlier in 2017 and the lukewarm user response identified in Mahnke Skrubbeltrang, Grunnet, and Tarp’s analysis, which suggests “users are concerned with algorithms dominating their experience, resulting in highly commercialised experience” (section 7).

Another key aspect of Instagram’s functionality is linked to the aesthetic of the visual content: photographic filters. Now a mainstay of other platforms such as Facebook and Twitter, Instagram popularised the use of filters by providing easily accessible options directly within the app interface. Now, other apps such as VSCO allow for more detailed editing of images that can then be imported into Instagram; however, the pre-set filters have proven popular with large numbers of users.
A 2014 study by Araújo, Corrêa, da Silva et al. found 76% of analysed images had been processed in some way.

By considering the professional uses of Instagram and the functionality of the app (geotagging, hashtagging, and filters), it is possible to summarise Instagram as a social media platform that, although initially perhaps intended to capture the everyday visual experiences of amateur photographers using their smart phones, has adapted to become a network for sharing images for both personal and professional purposes. It has a focus on place, through its geotagging capacity and hashtag practices, and can include captions.

The #AustralianBeachspace Project

In October 2016, I began a social media project called #AustralianBeachspace that was designed to showcase content from my PhD thesis and ongoing work into representations of Australian beaches in popular culture (a collection of the project posts only, as opposed to the ongoing Instagram profile, can be found here). The project was envisaged as a three-month project; single posts (including an image and caption) were planned and uploaded six times a week (every day except Sundays). Although I have occasionally continued to use the hashtag since the project’s completion (on 24 Dec. 2016), the frequency and planned nature of the posts has since significantly changed. What has not changed is the strong thematic through line of my posts, all of which continue to rely heavily on beach imagery. This is distinct from other academic social media use, which is often more focused on the everyday activity of academia.

Instagram was my social media choice for this project for two main reasons: I had no existing professional Instagram profile (unlike Twitter), and thus I could curate a complete project in isolation; and the subject of my PhD thesis was representations of Australian beaches in literature and film. As such, my research was appropriate for, and in fact was augmented by, visual depiction.
It is also worth noting the tendency reported by myself and others (Huntsman; Booth) of academics not considering the beach an area worthy of focus. This resonates with Bech Albrechtslund and Albrechtslund’s argument that “social media practices associated with leisure and playfulness” are still meaningful and worthy of examination.

Up until this point, my research outputs had been purely textual. I therefore needed to generate a significant number of visual elements to complement the vast amount of textual content already created. I used my PhD thesis to provide the thematic structure (I have detailed this process in more depth here), and then used the online tool Trello to plan, organise, and arrange the intended posts (image and caption). The project includes images taken by myself, my partner, and other images with no copyright limitations attached, as sourced through photo-sharing sites like Unsplash.com. The images were all selected because of their visual representation of an Australian beach and the alignment of the image with the themes of the project. For instance, one theme focused on the under-represented negative aspects of the beach. One image used in this theme was a photo of Bondi Beach ocean pool, empty at night. I carefully curated the images and arranged them according to the thematic schedule (as can be seen below) and then wrote the accompanying textual captions.

Figure 2: A sample of the schedule used for the posting of curated images and captions.

While there were some changes to the schedule throughout (for instance, my attendance at the 2016 Sculpture by the Sea exhibition prompted me to create a sixth theme), the process of content curation and creation remained the same. Visual curation of the images was a particularly important aspect of the project, and I did use an external photo-processing application to create an aesthetic across the collection.
As Kibby notes, “photography is intrinsically linked with tourism” (para 9), and although not inherently a tourism project, #AustralianBeachspace certainly engaged with touristic tropes by focusing on Australian beaches, an iconic part of Australian national and cultural identity (Ellison 2017; Ellison and Hawkes 2016; Fiske, Hodge, and Turner 1987). However, while beaches are perhaps instinctively touristic in their focus on natural landscapes, this project was attempting to illustrate more complexity in this space (which mirrors an intention of my PhD thesis). As such, some images were chosen because of their “ordinariness” or their subversion of the iconic beach images (see below).

Figures 3 and 4: Two images that capture some less iconic images of Australian beaches: one that shows an authentic, ordinary summer’s day and another that shows an empty beach during winter.

I relied on captions to provide the textual information about the image. I also included details about the photographer where possible, and linked all the images with the hashtag #AustralianBeachspace. The textual content, much of which emerged from ongoing and extensive research into the topic, was somewhat easier to collate. However, it required careful reworking and editing to suit the desired audience and to work in conjunction with the image. I kept captions to the approximate length of a paragraph, each concerned with one point. This process forced me to distil ideas and concepts into short chunks of writing, which is distinct from other forms of academic output. This textual content was designed to be accessible beyond an academic audience, but still used a relatively formal voice (especially in comparison to more personal users of the platform).

I provided additional hashtags in a first comment, which were intended to generate some engagement. Notably, these hashtags were content related (such as #beach and #surf); they were not targeting academic hashtags.
At the time of writing, my follower count is 70. The most liked (or “favourited”) photo from the project received 50 likes, and the most comments received was 6 (on a number of posts). Some photos published since the end of the project have received higher numbers of likes and comments. This certainly does not suggest enormous impact from this project. Hashtags utilised in this project were adopted from popular and related hashtags identified using the analytics tool Websta.me, as well as from hashtags used in similarly styled profiles, such as #seeaustralia, #thisisqueensland, #visitNSW, #bondibeach, #sunshinecoast, and so on. Notably, many of the hashtags were place-based. The engagement of this project with users beyond academia was apparent: followers and comments on the posts more regularly came from professional photographers, tourism bodies, or location-based businesses. In fact, because of the content- or place-based hashtagging practices I employed, it was difficult to attract an academic audience at all. However, although the project was intended as an experiment with public-facing research dissemination, I did not actively adopt a stringent engagement strategy and have not kept daily metrics to track engagement. This is a limitation of the study and undoubtedly allows scope for further research.

Conclusion

Instagram is a platform that does not have clear pathways for reaching academic audiences in targeted ways. At this stage, little research has emerged that investigates Instagram use among academics, although it is possible to presume there are similarities with blogging or Twitter (for example, conference posting and making connections with colleagues). However, the functionality of Instagram does lend itself to creating and curating aesthetically interesting ways of disseminating, and in fact representing, research.
Ideas and findings must be depicted as images and captions, and the curatorial process of marrying visual images to complement or support textual information can make for more accessible and palatable content. Perhaps most importantly, the content is freely accessible and not locked behind paywalls or expensive academic publications. It can also be easily archived and shared.

The #AustralianBeachspace project is small-scale and not indicative of widespread academic practice. However, examining the process of creating the project, and the role Instagram may play in potentially reaching a more diverse, public audience for academic research, suggests scope for further investigation. Although not playing an integral role in publication metrics and traditional measures of research impact, the current changing climate of higher education policy provides motivation to continue exploring non-traditional methods for disseminating research findings and tracking research engagement and impact. Instagram functions as a useful platform for sharing research data through a curated collection of images and captions. Rather than being a space for instant updates on the everyday life of the academic, it can also function in a more aesthetically interesting and dynamic way to share research findings and possibly generate wider, public-facing engagement for topics less likely to emerge from behind the confines of academic journal publications.

References

Abidin, Crystal. “Visibility Labour: Engaging with Influencers’ Fashion Brands and #Ootd Advertorial Campaigns on Instagram.” Media International Australia 161.1 (2016): 86–100. <http://journals.sagepub.com/doi/abs/10.1177/1329878X16665177>.

Araújo, Camila Souza, Luiz Paulo Damilton Corrêa, Ana Paula Couto da Silva, et al. “It Is Not Just a Picture: Revealing Some User Practices in Instagram.” Proceedings of the 9th Latin American Web Congress, Ouro Preto, Brazil, 22–24 October 2014.
<http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=7000167>.

Bech Albrechtslund, Anne-Mette, and Anders Albrechtslund. “Social Media as Leisure Culture.” First Monday 19.4 (2014). <http://firstmonday.org/ojs/index.php/fm/article/view/4877/3867>.

Birmingham, Simon. “2017 Pilot to Test Impact, Business Engagement of Researchers.” Media Release. Australian Government: Australian Research Council, 21 Nov. 2016. <http://www.arc.gov.au/news-media/media-releases/2017-pilot-test-impact-business-engagement-researchers>.

Booth, Douglas. Australian Beach Cultures: The History of Sun, Sand, and Surf. London: F. Cass, 2001.

Bukvova, Helena. “Taking New Routes: Blogs, Web Sites, and Scientific Publishing.” ScieCom Info 7.2 (2011). 20 May 2017 <http://journals.lub.lu.se/index.php/sciecominfo/article/view/5148>.

Constine, Josh. “Instagram’s Growth Speeds Up as It Hits 700 Million Users.” Techcrunch, 26 Apr. 2017. 1 June 2017 <https://techcrunch.com/2017/04/26/instagram-700-million-users/>.

drlizellison. “Dr Liz Ellison.” Instagram.com, 2017. 8 June 2017 <http://www.instagram.com/drlizellison>.

Ellison, Elizabeth. “The Australian Beachspace: Flagging the Spaces of Australian Beach Texts.” PhD thesis. Brisbane: Queensland U of Technology, 2013. <https://eprints.qut.edu.au/63468/>.

Ellison, Elizabeth. “The Gritty Urban: The Australian Beach as City Periphery in Cinema.” Filmurbia: Screening the Suburbs. Eds. David Forrest, Graeme Harper, and Jonathan Rayner. UK: Palgrave Macmillan, 2017. 79–94.

Ellison, Elizabeth, and Lesley Hawkes. “Australian Beachspace: The Plurality of an Iconic Site.” Borderlands e-Journal: New Spaces in the Humanities 15.1 (2016). 4 June 2017 <http://www.borderlands.net.au/vol15no1_2016/ellisonhawkes_beachspace.pdf>.

Else, Holly. “Tell Us about Your Paper—and Make It Short and Tweet.” Times Higher Education, 9 July 2015. 1 June 2017 <https://www.timeshighereducation.com/opinion/tell-us-about-your-paper-and-make-it-short-and-tweet>.

Ferrara, Emilio, Roberto Interdonato, and Andrea Tagarelli. “Online Popularity and Topical Interests through the Lens of Instagram.” Proceedings of the 25th ACM Conference on Hypertext and Social Media, Santiago, Chile, 1–4 Sep. 2014. <http://dx.doi.org/10.1145/2631775.2631808>.

Gruzd, Anatoliy, Kathleen Staves, and Amanda Wilk. “Connected Scholars: Examining the Role of Social Media in Research Practices of Faculty Using the UTAUT Model.” Computers in Human Behavior 28.6 (2012): 2340–50.

Gunn, Andrew, and Michael Mintrom. “Evaluating the Non-Academic Impact of Academic Research: Design Considerations.” Journal of Higher Education Policy and Management 39.1 (2017): 20–30. <http://dx.doi.org/10.1080/1360080X.2016.1254429>.

Highfield, Tim, and Tama Leaver. “A Methodology for Mapping Instagram Hashtags.” First Monday 20.1 (2015). 18 Oct. 2016 <http://firstmonday.org/ojs/index.php/fm/article/view/5563/4195>.

Hochman, Nadav, and Lev Manovich. “Zooming into an Instagram City: Reading the Local through Social Media.” First Monday 18.7 (2013). <http://firstmonday.org/ojs/index.php/fm/article/view/4711/3698>.

Hochman, Nadav, and Raz Schwartz. “Visualizing Instagram: Tracing Cultural Visual Rhythms.” Proceedings of the Workshop on Social Media Visualization (SocMedVis) in Conjunction with the Sixth International AAAI Conference on Weblogs and Social Media (ICWSM-12), 2012. 6–9. 2 June 2017 <http://razschwartz.net/wp-content/uploads/2012/01/Instagram_ICWSM12.pdf>.

Huntsman, Leone. Sand in Our Souls: The Beach in Australian History. Carlton South, Victoria: Melbourne U Press, 2001.

Instagram. “700 Million.” Instagram Blog, 26 Apr. 2017. 6 June 2017 <http://blog.instagram.com/post/160011713372/170426-700million>.

Instagram. “Instagram Business.” 6 June 2017 <https://business.instagram.com/>.

katgaskin. “Salty Pineapple.” Instagram.com, 2017. 2 June 2017 <https://www.instagram.com/katgaskin/>.

katgaskin. “Salty Hair with a Pineapple Towel…” Instagram.com, 2 June 2017. 6 June 2017 <https://www.instagram.com/p/BU0zSWUF0cm/?taken-by=katgaskin>.

Kibby, Marjorie Diane. “Monument Valley, Instagram, and the Closed Circle of Representation.” M/C Journal 19.5 (2016). 20 Apr. 2017 <http://journal.media-culture.org.au/index.php/mcjournal/article/view/1152>.

Kirkup, Gill. “Academic Blogging: Academic Practice and Academic Identity.” London Review of Education 8.1 (2010): 75–84.

liz_ellison. “#AustralianBeachspace.” Storify.com, 8 June 2017. <https://storify.com/liz_ellison/australianbeachspace>.

Mahnke Skrubbeltrang, Martina, Josefine Grunnet, and Nicolai Traasdahl Tarp. “#RIPINSTAGRAM: Examining Users’ Counter-Narratives Opposing the Introduction of Algorithmic Personalization on Instagram.” First Monday 22.4 (2017). <http://firstmonday.org/ojs/index.php/fm/article/view/7574/6095>.

Mahrt, Merja, Katrin Weller, and Isabella Peters. “Twitter in Scholarly Communication.” Twitter and Society. Eds. Katrin Weller, Axel Bruns, Jean Burgess, Merja Mahrt, and Cornelius Puschmann. New York: Peter Lang, 2014. 399–410. <https://eprints.qut.edu.au/66321/1/Twitter_and_Society_(2014).pdf#page=438>.

Manikonda, Lydia, Yuheng Hu, and Subbarao Kambhampati. “Analyzing User Activities, Demographics, Social Network Structure and User-Generated Content on Instagram.” ArXiv (2014). 1 June 2017 <https://arxiv.org/abs/1410.8099>.

Niyazov, Yuri, Carl Vogel, Richard Price, et al. “Open Access Meets Discoverability: Citations to Articles Posted to Academia.edu.” PLoS One 11.2 (2016): e0148257. <https://doi.org/10.1371/journal.pone.0148257>.

Powell, Douglas A., Casey J. Jacob, and Benjamin J. Chapman. “Using Blogs and New Media in Academic Practice: Potential Roles in Research, Teaching, Learning, and Extension.” Innovative Higher Education 37.4 (2012): 271–82. <http://dx.doi.org/10.1007/s10755-011-9207-7>.

Veletsianos, George. “Higher Education Scholars’ Participation and Practices on Twitter.” Journal of Computer Assisted Learning 28.4 (2012): 336–49. <http://dx.doi.org/10.1111/j.1365-2729.2011.00449.x>.

Weilenmann, Alexandra, Thomas Hillman, and Beata Jungselius. “Instagram at the Museum: Communicating the Museum Experience through Social Photo Sharing.” Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. Paris: ACM Press, 2013. 1843–52. <http://dx.doi.org/10.1145/2470654.2466243>.

Hill, Wes. "Revealing Revelation: Hans Haacke’s “All Connected”." M/C Journal 23, no. 4 (2020). http://dx.doi.org/10.5204/mcj.1669.

Full text
Abstract:
In the 1960s, especially in the West, art that was revelatory and art that was revealing operated at opposite ends of the aesthetic spectrum. On the side of the revelatory we can think of encounters synonymous with modernism, in which an expressionist painting was revelatory of the Freudian unconscious, or a Barnett Newman the revelatory intensity of the sublime. By contrast, the impulse to reveal in 1960s art was rooted in post-Duchampian practice, implicating artists as different as Lynda Benglis and Richard Hamilton, who mined the potential of an art that was without essence. If revelatory art underscored modernism’s transcendental conviction, critically revealing work tested its discursive rules and institutional conventions. Of course, nothing in history happens as neatly as this suggests, but what is clear is how polarized the language of artistic revelation was throughout the 1960s. With the international spread of minimalism, pop art, and fluxus, provisional reveals eventually dominated art-historical discourse. Aesthetic conviction, with its spiritual undertones, was haunted by its demystification. In the words of Donald Judd: “a work needs only to be interesting” (184).

That art galleries could be sites of timely socio-political issues, rather than timeless intuitions undersigned by medium specificity, is one of the more familiar origin stories of postmodernism. Few artists symbolize this shift more than Hans Haacke, whose 2019 exhibition All Connected, at the New Museum, New York, examined the legacy of his outward-looking work. Born in Germany in 1936, and a New Yorker since 1965, Haacke has been linked to the term “institutional critique” since the mid 1980s, after Mel Ramsden’s coining in 1975, and the increased recognition of kindred spirits such as Mierle Laderman Ukeles, Michael Asher, Martha Rosler, Robert Smithson, Daniel Buren, and Marcel Broodthaers.
These artists have featured in books and essays by the likes of Benjamin Buchloh, Hal Foster, and Yve-Alain Bois, but they are also known for their own contributions to art discourse, producing hybrid conceptions of the intellectual postmodern artist as historian, critic, and curator.

Haacke was initially fascinated by kinetic sculpture in the early 1960s, taking inspiration from op art, systems art, and machine-oriented research collectives such as Zero (Germany), Gruppo N (Italy), and GRAV (France, an acronym of Groupe de Recherche d’Art Visuel). Towards the end of the decade he started to produce more overtly socio-political work, creating what would become a classic piece from this period, Gallery-Goers’ Birthplace and Residence Profile, Part 1 (1969). Here, in a solo exhibition at New York’s Howard Wise Gallery, the artist invited viewers to mark their birthplaces and places of residence on a map. Questioning the statistical demography of the Gallery’s avant-garde attendees, the exhibition anticipated the meticulous sociological character of much of his practice to come, grounding New York art – the centre of the art world – in local, social, and economic fabrics.

In the foreword to the catalogue of All Connected, New Museum Director Lisa Philips claims that Haacke’s survey exhibition provided a chance to reflect on the artist’s prescience, especially given the flourishing of art activism over the last five or so years. Philips pressed the issue of why no other American art institution had mounted a retrospective of his work in three decades, since his previous survey, Unfinished Business, at the New Museum in 1986, at its former, and much smaller, Soho digs (8). It suggests that other institutions have deemed Haacke’s work too risky, generating too much political heat for them to handle. It’s a reputation the artist has cultivated since the Guggenheim Museum famously cancelled his 1971 exhibition after learning his intended work, Shapolsky et al.
Manhattan Real Estate Holdings, A Real Time Social System as of May 1, 1971 (1971) involved research into dubious New York real estate dealings. Guggenheim director Thomas Messer defended the censorship at the time, going so far as to describe it as an “alien substance that had entered the art museum organism” (Haacke, Framing 138). Exposé was this substance Messer dared not name: art that was too revealing, too journalistic, too partisan, and too politically viscid. (Three years later, Haacke got his own back with Solomon R. Guggenheim Museum Board of Trustees, 1974, exposing then Guggenheim board members’ connections to the copper industry in Chile, where socialist president Salvador Allende had just been overthrown with US backing.) All Connected foregrounded these institutional reveals from time past, at a moment in 2019 when the moral accountability of the art institution was on the art world’s collective mind. The exhibition followed high-profile protests at New York’s Whitney Museum and Metropolitan Museum of Art, as well as at Sydney’s Museum of Contemporary Art, the Louvre, and the British Museum. These and other arts organisations have increasingly faced pressures, fostered by social media, to end ties with unethical donors, sponsors, and board members, with activist groups protesting institutional affiliations ranging from immigration detention centre management to opioid and teargas manufacturing. An awareness of the limits of individual agency and autonomy undoubtedly defines this era, with social media platforms intensifying the encumbrances of individual, group, and organisational identities.
Hans Haacke, Gallery-Goers’ Birthplace and Residence Profile, Part 1, 1969
Hans Haacke, Gallery-Goers’ Birthplace and Residence Profile, Part 2, 1969-71

Unfinished Business

Underscoring Haacke’s activist credentials, Phillips describes him as “a model of how to live ethically and empathetically in the world today”, and as a beacon of light amidst the “extreme political and economic uncertainty” of the present, Trump-presidency-calamity moment (7). This was markedly different to how Haacke’s previous New York retrospective, Unfinished Business, was received, which bore the weight of being the artist’s first museum exhibition in New York following the Guggenheim controversy. In the catalogue to Haacke’s 1986 exhibition, then New Museum director Marcia Tucker introduced his work as a challenge, cautiously claiming that he poses “trenchant questions” and that the institution accepts “the difficulties and contradictions” inherent to any museum staging of his work (6).

Phillips’s and Tucker’s distinct perspectives on Haacke’s practice – one as heroically ethical, the other as a sobering critical challenge – exemplify broader shifts in the perception of institutional critique (the art of the socio-political reveal) over this thirty-year period. In the words of Pamela M. Lee, between 1986 and 2019 the art world has undergone a “seismic transformation”, becoming “a sphere of influence at once more rapacious, acquisitive, and overweening but arguably more democratizing and ecumenical with respect to new audiences and artists involved” (87).
Haacke’s reputation over this period has undergone a similar shift, from being a controversial opponent of art’s autonomy (an erudite postmodern conceptualist) to a figurehead for moral integrity and cohesive artistic experimentation.

As Rosalyn Deutsche pointed out in the catalogue to Haacke’s 1986 exhibition, a potential trap of such a retrospective is that, through biographical positioning, Haacke might be seen as an “exemplary political artist” (210). With this, the specific political issues motivating his work would be overshadowed by the perception of the “great artist” – someone who brings single-issue politics into the narrative of postmodern art, but at the expense of the issues themselves. This is exactly what Douglas Crimp discovered in Unfinished Business. In a 1987 reflection on the show, Crimp argued that, when compared with an AIDS-themed display, Homo Video, staged at the New Museum at the same time, reviewers of Haacke’s exhibition tended to analyse his politics “within the context of the individual artist’s body of work … . Political issues became secondary to the aesthetic strategies of the producer” (34). Crimp, whose activism would be at the forefront of his career in subsequent years, was surprised at how Homo Video and Unfinished Business spawned different readings. Whereas works in the former exhibition tended to be addressed in terms of the artists’ personal and partisan politics, Haacke’s prompted reflection on the aesthetics-politics juxtaposition itself. For Crimp, the fact that “there was no mediation between these two shows” spoke volumes about the divisions between political and activist art at the time.

New York Times critic Michael Brenson, reiterating a comment made by Fredric Jameson in the catalogue for Unfinished Business, describes the timeless appearance of Haacke’s work in 1986, which is “surprising for an artist whose work is in some way about ideology and history” (Brenson).
The implication is that the artist gives a surprisingly long aesthetic afterlife to the politically specific – to ordinarily short shelf-life issues. In this mode of critical postmodernism, in which we are unable to distinguish clearly between intervening in and merely reproducing the logic of the system, Haacke is seen as an astute director of an albeit ambiguous push and pull between political specificity and aesthetic irreducibility, political externality and the internalist mode of art about art. Jameson, while granting that Haacke’s work highlights the need to reinvent the role of the “ruling class” in the complex, globalised socio-economic situation of postmodernism, claims that it does so as representative of the “new intellectual problematic” of postmodernism. Haacke, according to Jameson, stages postmodernism’s “crisis of ‘mapping’” whereby capitalism’s totalizing, systemic forms are “handled” (note that he avoids “critiqued” or “challenged”) by focusing on their manifestation through particular (“micro-public”) institutional means (49, 50).

We can think of the above examples as constituting the postmodern version of Haacke, who frames very specific political issues on the one hand, and the limitless incorporative power of appropriative practice on the other. To put this another way, Haacke, circa 1986, points to specific sites of power struggle at the same time as revealing their generic absorption by an art-world system grown accustomed to its “duplicate anything” parameters. For all of his political intent, the artistic realm, totalised in accordance with the postmodern image, is ultimately where many thought his gestures remained.
The philosopher turned art critic Arthur Danto, in a negative review of Haacke’s exhibition, portrayed institutional critique as part of an age-old business of purifying art, maintaining that Haacke’s “crude” and “heavy-handed” practice is blind to how art institutions have always relied on some form of critique in order for them to continue being respected “brokers of spirit”. This perception – of Haacke’s “external” critiques merely serving to “internally” strengthen existing art structures – was reiterated by Leo Steinberg. Supportively misconstruing the artist in the exhibition catalogue, Steinberg writes that Haacke’s “political message, by dint of dissonance, becomes grating and shrill – but shrill within the art context. And while its political effectiveness is probably minimal, its effect on Minimal art may well be profound” (15).

Hans Haacke, MOMA Poll, 1970

All Connected

So, what do we make of the transformed reception of Haacke’s work since the late 1980s: from a postmodern ouroboros of “politicizing aesthetics and aestheticizing politics” to a revelatory exemplar of art’s moral power? At a period in the late 1980s when the culture wars were in full swing and yet activist groups remained on the margins of what would become a “mainstream” art world, Unfinished Business was, perhaps, blindingly relevant to its times. Unusually for a retrospective, it provided little historical distance for its subject, with Haacke becoming a victim of the era’s propensity to “compartmentalize the interpretive registers of inside and outside and the terms corresponding to such spatializing coordinates” (Lee 83).

If commentary surrounding this 2019 retrospective is anything to go by, politics no longer performs such a parasitic, oppositional or even dialectical relation to art; no longer is the political regarded as a real-world intrusion into the formal, discerning, longue-durée field of aesthetics.
The fact that protests inside the museum have become more visible and vociferous in recent years testifies to this shift. For Jason Farago, in his review of All Connected for the New York Times, “the fact that no person and no artwork stands alone, that all of us are enmeshed in systems of economic and social power, is for anyone under 40 a statement of the obvious”. For Alyssa Battistoni, in Frieze magazine, “if institutional critique is a practice, it is hard to see where it is better embodied than in organizing a union, strike or boycott”.

Some responders to All Connected, such as Ben Lewis, acknowledge how difficult it is to extract a single critical or political strategy from Haacke’s body of work; however, we can say that, in general, earlier postmodern questions concerning the aestheticisation of the socio-political reveal no longer dominate the reception of his practice. Today, rather than treating art and politics as two separate but related entities, like form is to content, better ideas circulate, such as those espoused by Bruno Latour and Jacques Rancière, for whom what counts as political is not determined by a specific program, medium or forum, but by the capacity of any actor-network to disrupt and change a normative social fabric. Compare Jameson’s claim that Haacke’s corporate and museological tropes are “dead forms” – through which “no subject-position speaks, not even in protest” (38) – with Battistoni’s, who, seeing Haacke’s activism as implicit, asks the reader: “how can we take the relationship between art and politics as seriously as Haacke has insisted we must?”

Crimp’s concern that Unfinished Business perpetuated an image of the artist as distant from the “political stakes” of his work did not carry through to All Connected, whose respondents were less vexed about the relation between art and politics, with many noting its timeliness.
The New Museum was, ironically, undergoing its own equity crisis in the months leading up to the exhibition, with newly unionised staff fighting with the Museum over workers’ salaries and healthcare even as it organised to build a new $89-million Rem Koolhaas-designed extension. Battistoni addressed these disputes at length, claiming the protests “crystallize perfectly the changes that have shaped the world over the half-century of Haacke’s career, and especially over the 33 years since his last New Museum exhibition”. Of note is how little attention Battistoni pays to Haacke’s artistic methods when recounting his assumed solidarity with these disputes, suggesting that works such as Creating Consent (1981), Helmosboro Country (1990), and Standortkultur (Corporate Culture) (1997) – which pivot on art’s public image versus its corporate umbilical cord – do not convey some special aesthetico-political insight into a totalizing capitalist system. Instead, “he has simply been an astute and honest observer long enough to remind us that our current state of affairs has been in formation for decades”.

Hans Haacke, News, 1969/2008
Hans Haacke, Wide White Flow, 1967/2008

Showing Systems

Early on in the 1960s, Haacke was influenced by the American critic, artist, and curator Jack Burnham, who, in a 1968 essay for Artforum, “Systems Esthetics”, inaugurated the loose conceptualist paradigm that would become known as “systems art”. Here, against Greenbergian formalism and what he saw as the “craft fetishism” of modernism, Burnham argues that “change emanates, not from things, but from the way things are done” (30).
Burnham thought that emergent contemporary artists were intuitively aware of the importance of the systems approach: the significant artist in 1968 “strives to reduce the technical and psychical distance between his artistic output and the productive means of society”, and pays particular attention to relationships between organic and non-organic systems (31).

As Michael Fried observed of minimalism in his now legendary 1967 essay “Art and Objecthood”, this shift in sixties art – signalled by the widespread interest in the systematic – entailed a turn towards the spatial, institutional, and societal contexts of receivership. For Burnham, art is not about “material entities” that beautify or modify the environment; rather, art exists “in relations between people and between people and the components of their environment” (31). At the forefront of his mind was land art, computer art, and research-driven conceptualist practice, which, against Fried, has “no contrived confines such as the theatre proscenium or picture frame” (32). In a 1969 lecture at the Guggenheim, Burnham confessed that his research concerned not just art as a distinct entity, but aesthetics in its broadest possible sense, declaring “as far as art is concerned, I’m not particularly interested in it. I believe that aesthetics exists in revelation” (Ragain).

Working under the aegis of Burnham’s systems art, Haacke was shaken by the tumultuous and televised politics of late-1960s America – a time when, according to Joan Didion, a “demented and seductive vortical tension was building in the community” (41). Haacke cites Martin Luther King’s assassination as an “incident that made me understand that, in addition to what I had called physical and biological systems, there are also social systems and that art is an integral part of the universe of social systems” (Haacke, Conversation 222).
Haacke created News (1969) in response to this awareness, comprising a (pre-Twitter) telex machine that endlessly spits out live news updates from wire services, piling up rolls and rolls of paper on the floor of the exhibition space over the course of its display. Echoing Burnham’s idea of the artist as a programmer whose job is to “prepare new codes and analyze data”, News nonetheless presents the museum as anything but immune from politics, and technological systems as anything but impersonal (32).

This intensification of social responsibility in Haacke’s work sets him apart from other, arguably more reductive techno-scientific systems artists such as Sonia Sheridan and Les Levine. The gradual transformation of his ecological and quasi-scientific sculptural experiments from 1968 onwards could almost be seen as making a mockery of the anthropocentrism described in Fried’s 1967 critique. Here, Fried claims not only that the literalness of minimalist work amounts to an emphasis on shape and spatial presence over pictorial composition, but also that, in this “theatricality of objecthood”, literalness paradoxically mirrors the beholder (153). At times in Fried’s essay the minimalist art object reads as a mute form of sociality, the spatial presence filled by the conscious experience of looking – the theatrical relationship itself put on view. Fried thought that viewers of minimalism were presented with themselves in relation to the entire world as object, to which they were asked not to respond in an engaged formalist sense but (generically) to react. Pre-empting the rise of conceptual art and the sociological experiments of post-conceptualist practice, Fried, unapprovingly, argues that minimalist artists unleash an anthropomorphism that “must somehow confront the beholder” (154).

Haacke, who admits he has “always been sympathetic to so-called Minimal art” (Haacke, A Conversation 26), embraced the human subject around the same time that Fried’s essay was published.
While Fried would have viewed this move as further illustrating the minimalist tendency towards anthropomorphic confrontation, it would be more accurate to describe Haacke’s subsequent works as social-environmental barometers. Haacke began staging interactions which, however dry or administrative, framed the interplays of culture and nature, inside and outside, private and public spheres, expanding art’s definition by looking to the social circulation and economy that supported it.

Haacke’s approach – which seems largely driven to show, to reveal – anticipates the viewer in a way of which Fried would disapprove, for whom absorbed viewers, and the irreduction of gestalt to shape, are the by-products of assessments of aesthetic quality. For Donald Judd, the promotion of interest over conviction signalled scepticism about Clement Greenberg’s quality standards; it was a way of acknowledging the limitations of qualitative judgement, and, perhaps, of knowledge more generally. In this way, minimalism’s aesthetic relations are not framed so much as allowed to “go on and on” – the artists’ doubt about aesthetic value producing this ongoing temporal quality, which conviction supposedly lacks.

In contrast to Unfinished Business, the placing of Haacke’s early sixties works adjacent to his later, more political works in All Connected revealed something other than the tensions between postmodern socio-political reveal and modernist-formalist revelation. The question of whether to intervene in an operating system – whether to let such a system go on and on – was raised throughout the exhibition, literally and metaphorically. To be faced with the interactions of physical, biological, and social systems (in Condensation Cube, 1963-67, and Wide White Flow, 1967/2008, but also in later works like MetroMobiltan, 1985) is to be faced with the question of change and one’s place in it. Framing systems in full swing, at their best, Haacke’s kinetic and environmental works suggest two things: 1.
That the systems on display will be ongoing if their component parts aren’t altered; and 2. Any alteration will alter the system as a whole, in minor or significant ways. Applied to his practice more generally, what Haacke’s work hinges on is whether or not one perceives oneself as part of its systemic relations. To see oneself implicated is to see beyond the work’s literal forms and representations. Here, systemic imbrication equates to moral realisation: one’s capacity to alter the system as the question of what to do. Unlike the phenomenology-oriented minimalists, Haacke, who follows a more hermeneutic model, does not always assume the viewer’s participation. In fact, Haacke’s systems are often circular, highlighting participation as a conscious disruption of flow rather than an obligation that emanates from a particular work (148).

This is a theatrical scenario as Fried describes it, but it is far from an abandonment of the issue of profound value. In fact, if we accept that Haacke’s work foregrounds intervention as a moral choice, it is closer to Fried’s own rallying cry for conviction in aesthetic judgement. As Rex Butler has argued, Fried’s advocacy of conviction over sceptical interest can be understood as dialectical in the Hegelian sense: conviction is the overcoming of scepticism, in a similar way that Geist, or spirit, for Hegel, is “the very split between subject and object, in which each makes the other possible” (Butler). What is advanced for Fried is the idea of “a scepticism that can be remarked only from the position of conviction and a conviction that can speak of itself only as this scepticism” (for instance, in his attempt to overcome his scepticism of literalist art on the basis of its scepticism).
Strong and unequivocal feelings in Fried’s writing are informed by weak and indeterminate feeling, just as moral conviction in Haacke – the feeling that I, the viewer, should do something – emerges from an awareness that the system will continue to function fine without me. In other words, before being read as “a barometer of the changing and charged atmosphere of the public sphere” (Sutton 16), the impact of Haacke’s work depends upon an initial revelation. It is the realisation not just that one is embroiled in a series of “invisible but fundamental” relations greater than oneself, but that, in responding to seemingly sovereign social systems, the question of our involvement is a moral one, a claim for determination founded through an overcoming of the systemic (Fry 31).

Haacke’s at once open and closed works suit the logic of our algorithmic age, where viewers have to shift constantly from a position of being targeted to one of finding out for themselves. Peculiarly, when Haacke’s online digital polls in All Connected were hacked by activists (who randomized statistical responses in order to compel the Museum “to redress their continuing complacency in capitalism”) the culprits claimed they did it in sympathy with his work, not in spite of it: “we see our work as extending and conversing with Haacke’s, an artist and thinker who has been a source of inspiration to us both” (Bishara). This response – undermining done with veneration – is indicative of the complicated legacy of his work today. Haacke’s influence on artists such as Tania Bruguera, Sam Durant, Forensic Architecture, Laura Poitras, Carsten Höller, and Andrea Fraser has less to do with a particular political ideal than with his unique promotion of journalistic suspicion and moral revelation in forms of systems mapping. It suggests a coda be added to the sentiment of All Connected: all might not be revealed, but how we respond matters.

Hans Haacke, Large Condensation Cube, 1963–67

References

Battistoni, Alyssa.
“After a Contract Fight with Its Workers, the New Museum Opens Hans Haacke’s ‘All Connected’.” Frieze 208 (2019).
Bishara, Hakim. “Hans Haacke Gets Hacked by Activists at the New Museum.” Hyperallergic 21 Jan. 2020. <https://hyperallergic.com/538413/hans-haacke-gets-hacked-by-activists-at-the-new-museum/>.
Brenson, Michael. “Art: In Political Tone, Works by Hans Haacke.” New York Times 19 Dec. 1986. <https://www.nytimes.com/1986/12/19/arts/artin-political-tone-worksby-hans-haacke.html>.
Buchloh, Benjamin. “Hans Haacke: Memory and Instrumental Reason.” Neo-Avantgarde and Culture Industry. Cambridge: MIT P, 2000.
Burnham, Jack. “Systems Esthetics.” Artforum 7.1 (1968).
Butler, Rex. “Art and Objecthood: Fried against Fried.” Nonsite 22 (2017). <https://nonsite.org/feature/art-and-objecthood>.
Carrion-Murayari, Gary, and Massimiliano Gioni (eds.). Hans Haacke: All Connected. New York: Phaidon and New Museum, 2019.
Crimp, Douglas. “Strategies of Public Address: Which Media, Which Publics?” In Hal Foster (ed.), Discussions in Contemporary Culture, no. 1. Washington: Bay P, 1987.
Danto, Arthur C. “Hans Haacke and the Industry of Art.” In Gregg Horowitz and Tom Huhn (eds.), The Wake of Art: Criticism, Philosophy, and the Ends of Taste. London: Routledge, 1987/1998.
Didion, Joan. The White Album. London: 4th Estate, 2019.
Farago, Jason. “Hans Haacke, at the New Museum, Takes No Prisoners.” New York Times 31 Oct. 2019. <https://www.nytimes.com/2019/10/31/arts/design/hans-haacke-review-new-museum.html>.
Fried, Michael. “Art and Objecthood.” Artforum 5 (June 1967).
Fry, Edward. “Introduction to the Work of Hans Haacke.” In Hans Haacke 1967. Cambridge: MIT List Visual Arts Center, 2011.
Glueck, Grace. “The Guggenheim Cancels Haacke’s Show.” New York Times 7 Apr. 1971.
Gudel, Paul. “Michael Fried, Theatricality and the Threat of Skepticism.” Michael Fried and Philosophy. New York: Routledge, 2018.
Haacke, Hans. Hans Haacke: Framing and Being Framed: 7 Works 1970-5.
Halifax: P of the Nova Scotia College of Design and New York: New York UP, 1976.
———. “Hans Haacke in Conversation with Gary Carrion-Murayari and Massimiliano Gioni.” Hans Haacke: All Connected. New York: Phaidon and New Museum, 2019.
Haacke, Hans, et al. “A Conversation with Hans Haacke.” October 30 (1984).
Haacke, Hans, and Brian Wallis (eds.). Hans Haacke: Unfinished Business. New York: New Museum of Contemporary Art; Cambridge, Mass.: MIT P, 1986.
“Haacke’s ‘All Connected.’” Frieze 25 Oct. 2019. <https://frieze.com/article/after-contract-fight-its-workers-new-museum-opens-hans-haackes-all-connected>.
Judd, Donald. “Specific Objects.” Complete Writings 1959–1975. Halifax: P of the Nova Scotia College of Design and New York: New York UP, 1965/1975.
Lee, Pamela M. “Unfinished ‘Unfinished Business.’” Hans Haacke: All Connected. New York: Phaidon P Limited and New Museum, 2019.
Ragain, Melissa. “Jack Burnham (1931–2019).” Artforum 19 Mar. 2019. <https://www.artforum.com/passages/melissa-ragain-on-jack-burnham-78935>.
Sutton, Gloria. “Hans Haacke: Works of Art, 1963–72.” Hans Haacke: All Connected. New York: Phaidon P Limited and New Museum, 2019.
Tucker, Marcia. “Director’s Foreword.” Hans Haacke: Unfinished Business. New York: New Museum of Contemporary Art; Cambridge, Mass.: MIT P, 1986.