Journal articles on the topic 'End-user disconnect'

Consult the top 24 journal articles for your research on the topic 'End-user disconnect.'

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Lee, Tak Yeon, and Benjamin B. Bederson. "Give the people what they want: studying end-user needs for enhancing the web." PeerJ Computer Science 2 (November 14, 2016): e91. http://dx.doi.org/10.7717/peerj-cs.91.

Abstract:
End-user programming (EUP) is a common approach for helping ordinary people create small programs for their professional or daily tasks. Since end-users may not have programming skills or strong motivation for learning them, tools should provide what end-users want with minimal costs of learning, i.e., they must decrease the barriers to entry. However, it is often hard to address these needs, especially for fast-evolving domains such as the Web. To better understand these existing and ongoing challenges, we conducted two formative studies with Web users: a semi-structured interview study and a Wizard-of-Oz study. The interview study identifies challenges that participants have with their daily experiences on the Web. The Wizard-of-Oz study investigates how participants would naturally explain three computational tasks to an interviewer acting as a hypothetical computer agent. These studies demonstrate a disconnect between what end-users want and what existing EUP systems support, and thus open the door for a path towards better support for end-user needs. In particular, our findings include: (1) analysis of challenges that end-users experience on the Web with solutions; (2) seven core functionalities of EUP for addressing these challenges; (3) characteristics of non-programmers describing three common computation tasks; (4) design implications for future EUP systems.
2

Wehry, Susan, and Regula H. Robnett. "PERSONS LIVING WITH COGNITIVE IMPAIRMENT SHARE THEIR VIEWS ON TECHNOLOGY." Innovation in Aging 3, Supplement_1 (November 2019): S953. http://dx.doi.org/10.1093/geroni/igz038.3459.

Abstract:
The purpose of this study was to examine the experience of adults living with cognitive impairments, and that of their care partners, with digital technology, including current use of, ease with, and openness to using smart assistive technologies (SATs). SATs for older adults with (and without) cognitive impairments have become increasingly commonplace. Research on various digital devices has focused primarily on supporting users’ independence and care partner concerns for safety and security. Our qualitative, interview-based research project provided digital devices chosen by participants to address a specific personal goal. Interviews were conducted in the home and set-up assistance was provided during the initial interview. At the conclusion of the trial period, a second interview was conducted in the home. We describe the participants’ commendations for, expectations of, and frustrations with current technology, as well as recommendations for potential, helpful digital technology. Current technology offers great promise, but a disconnect between the design of digital technologies and the needs and wishes of the end-user still exists. This study will help inform additional user-driven SAT applications, including those aimed at enhancing enjoyment and a higher quality of life.
3

Erdogan, Gencer, Atle Refsdal, Bjørn Nygård, Ole Petter Rosland, and Bernt Kvam Randeberg. "Risk-Based Decision Support Model for Offshore Installations." Business Systems Research Journal 9, no. 2 (July 1, 2018): 55–68. http://dx.doi.org/10.2478/bsrj-2018-0019.

Abstract:
Background: During major maintenance projects on offshore installations, flotels are often used to accommodate the personnel. A gangway connects the flotel to the installation. If the offshore conditions are unfavorable, the responsible operatives need to decide whether to lift (disconnect) the gangway from the installation. If this is not done, there is a risk that an uncontrolled autolift (disconnection) occurs, causing harm to personnel and equipment. Objectives: We present a decision support model, developed using the DEXi tool for multi-criteria decision making, which produces advice on whether to disconnect/connect the gangway from/to the installation. Moreover, we report on our development method and experiences from the process, including the efforts invested. An evaluation of the resulting model is also offered, primarily based on feedback from a small group of offshore operatives and domain experts representing the end-user target group. Methods/Approach: The decision support model was developed systematically in four steps: establish context, develop the model, tune the model, and collect feedback on the model. Results: The results indicate that the decision support model provides advice that corresponds with expert expectations, captures all aspects that are important for the assessment, is comprehensible to domain experts, and that the expected benefit justifies the effort for developing the model. Conclusions: We find the results promising, and believe that the approach can be fruitful in a wider range of risk-based decision support scenarios. Moreover, this paper can help other decision support developers decide whether a similar approach can suit them.
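As an illustration of the kind of multi-criteria aggregation the DEXi tool supports, the sketch below maps a few qualitative weather ratings onto connect/disconnect advice. It is only a toy model under assumed attribute names and thresholds; it is not the authors' actual DEXi model or its rules.

```python
# Minimal sketch of a qualitative multi-criteria decision model in the spirit of DEXi.
# Attribute names, scales and thresholds are invented for illustration; they are not
# taken from the paper's actual model.

def rate_sea_state(wave_height_m: float) -> str:
    """Map a numeric observation onto a qualitative scale."""
    if wave_height_m < 2.5:
        return "low"
    if wave_height_m < 4.0:
        return "moderate"
    return "high"

def rate_wind(wind_speed_ms: float) -> str:
    if wind_speed_ms < 12:
        return "low"
    if wind_speed_ms < 20:
        return "moderate"
    return "high"

def gangway_advice(wave_height_m: float, wind_speed_ms: float,
                   gangway_alarm_active: bool) -> str:
    """Aggregate qualitative ratings into advice, mimicking a DEXi utility table."""
    sea = rate_sea_state(wave_height_m)
    wind = rate_wind(wind_speed_ms)
    if gangway_alarm_active or "high" in (sea, wind):
        return "disconnect"          # risk of an uncontrolled autolift
    if sea == "moderate" and wind == "moderate":
        return "prepare to disconnect"
    return "stay connected"

if __name__ == "__main__":
    print(gangway_advice(wave_height_m=3.1, wind_speed_ms=14, gangway_alarm_active=False))
```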
4

Ngan, Catherine G. Y., Rob M. I. Kapsa, and Peter F. M. Choong. "Strategies for neural control of prosthetic limbs: from electrode interfacing to 3D printing." Materials 12, no. 12 (June 14, 2019): 1927. http://dx.doi.org/10.3390/ma12121927.

Abstract:
Limb amputation is a major cause of disability in our community, for which motorised prosthetic devices offer a return to function and independence. With the commercialisation and increasing availability of advanced motorised prosthetic technologies, there is a consumer need and clinical drive for intuitive user control. In this context, rapid additive fabrication/prototyping capacities and biofabrication protocols embrace a highly-personalised medicine doctrine that marries specific patient biology and anatomy to high-end prosthetic design, manufacture and functionality. Commercially-available prosthetic models utilise surface electrodes that are limited by their disconnect between mind and device. As such, alternative strategies of mind–prosthetic interfacing have been explored to purposefully drive the prosthetic limb. This review investigates mind to machine interfacing strategies, with a focus on the biological challenges of long-term harnessing of the user’s cerebral commands to drive actuation/movement in electronic prostheses. It covers the limitations of skin, peripheral nerve and brain interfacing electrodes, and in particular the challenges of minimising the foreign-body response, as well as a new strategy of grafting muscle onto residual peripheral nerves. In conjunction, this review also investigates the applicability of additive tissue engineering at the nerve-electrode boundary, which has led to pioneering work in neural regeneration and bioelectrode development for applications at the neuroprosthetic interface.
5

Duinkharjav, Budmonde, Praneeth Chakravarthula, Rachel Brown, Anjul Patney, and Qi Sun. "Image features influence reaction time." ACM Transactions on Graphics 41, no. 4 (July 2022): 1–15. http://dx.doi.org/10.1145/3528223.3530055.

Abstract:
We aim to ask and answer an essential question: "How quickly do we react after observing a displayed visual target?" To this end, we present psychophysical studies that characterize the remarkable disconnect between human saccadic behaviors and spatial visual acuity. Building on the results of our studies, we develop a perceptual model to predict temporal gaze behavior, particularly saccadic latency, as a function of the statistics of a displayed image. Specifically, we implement a neurologically-inspired probabilistic model that mimics the accumulation of confidence that leads to a perceptual decision. We validate our model with a series of objective measurements and user studies using an eye-tracked VR display. The results demonstrate that our model prediction is in statistical alignment with real-world human behavior. Further, we establish that many sub-threshold image modifications commonly introduced in graphics pipelines may significantly alter human reaction timing, even if the differences are visually undetectable. Finally, we show that our model can serve as a metric to predict and alter the reaction latency of users in interactive computer graphics applications, and thus may improve gaze-contingent rendering, design of virtual experiences, and player performance in e-sports. We illustrate this with two examples: estimating competition fairness in a video game with two different team colors, and tuning display viewing distance to minimize player reaction time.
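The "accumulation of confidence" idea can be illustrated with a toy drift-diffusion-style simulation: weaker visual evidence (for example, lower contrast) gives a smaller drift rate and therefore a longer predicted latency. The parameters below are invented for illustration and are not fitted to the paper's psychophysical data.

```python
# Toy evidence-accumulation (drift-diffusion style) simulation of saccadic latency.
# Drift, threshold and noise values are invented; the paper's actual perceptual model
# is fit to measured human behavior.
import random

def simulate_latency(drift: float, threshold: float = 1.0,
                     noise: float = 0.05, dt_ms: float = 1.0,
                     non_decision_ms: float = 50.0) -> float:
    """Accumulate noisy evidence until it crosses a decision threshold."""
    evidence, t = 0.0, 0.0
    while evidence < threshold:
        evidence += drift * dt_ms + random.gauss(0.0, noise)
        t += dt_ms
    return t + non_decision_ms  # add non-decision (motor) time

def mean_latency(drift: float, trials: int = 2000) -> float:
    return sum(simulate_latency(drift) for _ in range(trials)) / trials

if __name__ == "__main__":
    # Lower-contrast targets -> weaker drift -> longer predicted reaction time.
    for label, drift in [("high contrast", 0.010), ("low contrast", 0.004)]:
        print(f"{label}: ~{mean_latency(drift):.0f} ms")
```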
6

Alhasnawi, Bilal Naji, and Basil H. Jasim. "A Novel Hierarchical Energy Management System Based on Optimization for Multi-Microgrid." International Journal on Electrical Engineering and Informatics 12, no. 3 (September 30, 2020): 586–606. http://dx.doi.org/10.15676/ijeei.2020.12.3.10.

Abstract:
The microgrid vision has come to incorporate various communication technologies, which enable residential users to adopt different scheduling schemes in order to manage energy usage with reduced carbon emissions. Through this study, we have introduced a novel method for residential load control with energy resources integrated. To this end, an input and optimization algorithm has been employed to control and schedule residential loads for the purposes of cost savings, reduced consumer inconvenience, and peak-to-average ratio (PAR) reduction, taking into account real-time electricity costs, energy demand, user expectations, and renewable energy parameters. This paper also provides a Maximum Power Point Tracking (MPPT) technique used to obtain full power from a hybrid power system during the variation of environmental conditions in both photovoltaic stations and batteries. An IEEE 14-bus system was considered to determine the efficiency of the proposed algorithm. This research also aims at modelling the behavior of distributed energy resources on 14-node IEEE networks as a result of a switching (disconnection) operation that establishes a power generation island. The micro-grid is a simple case for the study of energy flow and smart grid efficiency variables and has dispersed resources. The findings show that the energy management system load collection using the suggested approach improves performance and decreases losses in contrast to previous approaches.
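To make the cost/PAR trade-off concrete, the sketch below shifts a single deferrable appliance to the cheapest hours of an hourly price signal and reports the resulting cost and peak-to-average ratio. Prices, loads and appliance parameters are invented; this is not the scheduling algorithm proposed in the paper.

```python
# Illustrative sketch: shift a deferrable residential load to the cheapest hours of a
# real-time price signal. All prices and loads are made up for demonstration.

hourly_price = [0.10, 0.09, 0.08, 0.08, 0.09, 0.12, 0.18, 0.22, 0.20, 0.15, 0.13, 0.12,
                0.12, 0.13, 0.15, 0.18, 0.22, 0.25, 0.24, 0.20, 0.16, 0.13, 0.11, 0.10]
base_load_kw = [0.6] * 24                      # non-shiftable background load
washer_kw, washer_hours = 1.2, 3               # deferrable appliance: 1.2 kW for 3 hours

# Greedy choice: run the appliance in the three cheapest hours.
cheapest = sorted(range(24), key=lambda h: hourly_price[h])[:washer_hours]
schedule = [base_load_kw[h] + (washer_kw if h in cheapest else 0.0) for h in range(24)]

cost = sum(schedule[h] * hourly_price[h] for h in range(24))
par = max(schedule) / (sum(schedule) / 24)     # peak-to-average ratio

print(f"run washer at hours {sorted(cheapest)}")
print(f"daily cost = ${cost:.2f}, PAR = {par:.2f}")
```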
7

Drake, Lori. "Scientific Prerequisites to Comprehension of the Tropical Cyclone Forecast: Intensity, Track, and Size." Weather and Forecasting 27, no. 2 (April 1, 2012): 462–72. http://dx.doi.org/10.1175/waf-d-11-00041.1.

Abstract:
The communication by forecasters of tropical cyclone (TC) descriptions and forecasts to user communities necessarily involves the transmission of information based in science to different classes of users composed primarily of nonscientists. Inherent in the problem is the necessity of translating or converting the scientific content of the forecast, including its associated uncertainty, which is mathematical and statistical in its native structure, into restructured content comprehensible to populations not generally schooled in those disciplines. The forecast interpretation problem encompasses not only the forms in which the information is presented or communicated (e.g., text versus graphics), but even more so the complexity and transparency of the scientific content contained between those forms. This article investigates the substantive areas of dissonance and disconnect between the scientific content of TC descriptions and forecasts, including the uncertainty, and the ability of end users to accurately comprehend and interpret the information. It centers on the three storm attributes for which there is a forecast, namely intensity, track, and size, within the context of existing research studies, public surveys, and original official documents that specifically provide insights into this subject matter. The results suggest that the TC descriptions and forecasts, once their scientific substance has been processed for the benefit of nonscientists, still require some preexisting scientific knowledge that may or may not be present among the different groups of nonspecialist users.
8

Saleme, Pamela, Timo Dietrich, Bo Pang, and Joy Parkinson. "A gamified approach to promoting empathy in children." Journal of Social Marketing 10, no. 3 (June 10, 2020): 321–37. http://dx.doi.org/10.1108/jsocm-11-2019-0204.

Abstract:
Purpose: Gamification has gained popularity in social marketing research; however, its application remains limited to a few contexts, and relatively little is known about how innovative gamification technologies such as augmented reality can be applied to social marketing programme design. This paper aims to demonstrate the application of gamification to a social marketing pilot programme designed to increase children’s empathy and empathic behaviour. Design/methodology/approach: Informed by social cognitive theory (SCT), a mixed-method research design was adopted using pre- and post-programme surveys (n = 364) to assess effectiveness using paired samples t-test. Qualitative data included observations, participant’s questions and a feedback activity at the end of the programme. A thematic analysis was undertaken to examine the data and detect meaningful insights. Findings: Children’s affective empathy and empathic behaviour outcomes were improved following the pilot programme. However, no effects were observed for cognitive empathy and social norms. Thematic analysis revealed three themes to further improve the game: developmentally appropriate design, user experience and game design. Research limitations/implications: Findings demonstrated challenges with the application of SCT outlining a disconnect between the design of the gamified programme and theory application. Practical implications: This study provides initial evidence for the application of innovative gamification technologies to increase empathy in children. Originality/value: To the best of the authors’ knowledge, this paper is the first to examine how a gamified social marketing programme can increase empathy in children.
9

Costescu, Dan M. "Modal Competition and Complementarity: Cost Optimization at End-User Level." Romanian Journal of Transport Infrastructure 7, no. 2 (December 1, 2018): 61–76. http://dx.doi.org/10.2478/rjti-2018-0012.

Abstract:
The paper aims to identify possible methods for balancing the allocation of transport flow on modal subsystems in order to efficiently use the infrastructures and reduce the negative effects of today’s imbalance. The aspects of intermodal competition are reviewed, considering the economic concepts regarding the substitutability of transportation services, the degree of conformity to the perfect competition model, and the nature of cross-elasticity of demand. A top-down analysis over the whole infrastructure assembly is performed. The results, under the presumption of a valid working hypothesis, indicated that for further analysis the set of networks transferring material flows can be assumed to be disconnected from the other network sets transferring energy, informational and value flows. The second part of the paper develops, for those disconnected networks, a generalized cost optimization model for multimodal transportation, where comfort and safety are accounted for. Thus, the performance of the existing algorithms based only on trip length, trip duration and energy consumption can be significantly improved. Additionally, the author proposes three new independent types of modal analysis that allow end-users and companies involved in transport organization to optimize their modal choice and the whole transport process organization.
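A generalized-cost model of this kind typically adds monetized time, comfort and safety terms to the fare and picks the mode with the lowest total. The sketch below illustrates that structure with invented weights and mode options; it is not the author's actual model or calibration.

```python
# Illustrative generalized-cost comparison across transport modes. All fares, times and
# penalty weights are invented for demonstration.
from dataclasses import dataclass

@dataclass
class ModeOption:
    name: str
    fare: float            # monetary cost, EUR
    duration_h: float      # door-to-door travel time, hours
    discomfort: float      # qualitative penalty, 0 (best) .. 1 (worst)
    risk: float            # qualitative safety penalty, 0 .. 1

VALUE_OF_TIME = 15.0       # EUR per hour (assumed)
COMFORT_WEIGHT = 8.0       # EUR per unit of discomfort (assumed)
SAFETY_WEIGHT = 12.0       # EUR per unit of risk (assumed)

def generalized_cost(m: ModeOption) -> float:
    return (m.fare + VALUE_OF_TIME * m.duration_h
            + COMFORT_WEIGHT * m.discomfort + SAFETY_WEIGHT * m.risk)

options = [
    ModeOption("road", fare=40.0, duration_h=5.0, discomfort=0.6, risk=0.5),
    ModeOption("rail", fare=55.0, duration_h=6.0, discomfort=0.3, risk=0.2),
    ModeOption("rail + road last mile", fare=60.0, duration_h=5.5, discomfort=0.4, risk=0.25),
]

best = min(options, key=generalized_cost)
for m in options:
    print(f"{m.name:>24}: {generalized_cost(m):6.2f} EUR")
print(f"chosen mode: {best.name}")
```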
10

Noble, Peter, and Travis B. Paveglio. "Exploring Adoption of the Wildland Fire Decision Support System: End User Perspectives." Journal of Forestry 118, no. 2 (February 13, 2020): 154–71. http://dx.doi.org/10.1093/jofore/fvz070.

Abstract:
The increasing complexity of wildland fire management highlights the importance of sound decision making. Numerous fire management decision support systems (FMDSS) are designed to enhance science and technology delivery or assist fire managers with decision-making tasks. However, few scientific efforts have explored the adoption and use of FMDSS by fire managers. This research couples existing decision support system research and in-depth interviews with US Forest Service fire managers to explore perspectives surrounding the Wildland Fire Decision Support System (WFDSS). Results indicate that fire managers appreciate many WFDSS components but view it primarily as a means to document fire management decisions. They describe on-the-ground actions that can be disconnected from decisions developed in WFDSS, which they attribute to the timeliness of WFDSS outputs, the complexity of the WFDSS design, and how it was introduced to managers. We conclude by discussing how FMDSS development could address concerns raised by managers.
11

Li, Lingyuan, Guo Freeman, and Nathan J. McNeese. "Channeling End-User Creativity: Leveraging Live Streaming for Distributed Collaboration in Indie Game Development." Proceedings of the ACM on Human-Computer Interaction 6, CSCW2 (November 7, 2022): 1–28. http://dx.doi.org/10.1145/3555173.

Abstract:
This paper explores the role of live streaming in distributed collaborative software development using indie game development, an end-user driven creative community, as an example. We conducted 27 in-depth interviews with indie game developers from various cultures and countries, who had engaged in live streaming for collaborative software development either as a streamer or a viewer. Our findings show how live streaming can be used by indie game developers to support their endeavors to innovate the traditional game development model, which goes beyond just learning and teaching technical skills. We also highlight the potential challenges indie developers face in this process. We thus make unique contributions to CSCW by bridging the previously often disconnected research agendas on collaborative software development and live streaming. We also provide potential directions for designing future live streaming platforms to better support distributed collaboration in emerging end-user driven creative activities.
12

Van Ameijde, Jeroen. "Data-driven Urban Design." SPOOL 9, no. 1 (May 27, 2022): 35–48. http://dx.doi.org/10.47982/spool.2022.1.03.

Abstract:
Nicholas Negroponte and MIT’s Architecture Machine Group speculated in the 1970s about computational processes that were open to participation, incorporating end-user preferences and democratizing urban design. Today’s ‘smart city’ technologies, using the monitoring of people’s movement and activity patterns to offer more effective and responsive services, might seem like contemporary interpretations of Negroponte’s vision, yet many of the collectors of user information are disconnected from urban policy making. This article presents a series of theoretical and procedural experiments conducted through academic research and teaching, developing user-driven generative design processes in the spirit of ‘The Architecture Machine’. It explores how new computational tools for site analysis and monitoring can enable data-driven urban place studies, and how these can be connected to generative strategies for public spaces and environments at various scales. By breaking down these processes into separate components of gathering, analysing, translating and implementing data, and conceptualizing them in relation to urban theory, it is shown how data-driven urban design processes can be conceived as an open-ended toolkit to achieve various types of user-driven outcomes. It is argued that architects and urban designers are uniquely situated to reflect on the benefits and value systems that control data-driven processes, and should deploy these to deliver more resilient, liveable and participatory urban spaces.
13

Harish, Ballu, and R. S. Dwiwedi. "Exhibiting of geospatial attribute data using popup template Java-script application programming interface." International Journal of Scientific Reports 6, no. 12 (November 23, 2020): 532. http://dx.doi.org/10.18203/issn.2454-2156.intjscirep20205034.

Abstract:
ArcGIS Server is used in creating web, desktop and mobile applications, and ArcGIS for Server provides end-user applications and services for spatial data management, visualization and spatial analysis. The proposed work deals with displaying geospatial attribute data using the JavaScript application programming interfaces (APIs) available from ArcGIS Server. The popup-template API reference is utilized in the work, and two of its properties are used depending on the needs of the task. All of the programming interfaces have their advantages in helping users interact with geospatial information. Smart web maps offer an effective way of visualizing complex data: they help bring together apparently disconnected data, uncover hidden patterns, and mine large datasets. Information can be composed on the desktop, sent to the cloud, and shared using ArcGIS Server on the web.
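As a language-neutral illustration of the same idea (the paper itself works with the ArcGIS JavaScript APIs), the sketch below pulls attribute records from a feature layer's REST query endpoint and renders them as popup-style HTML. The service URL and field names are placeholders, not the layers used in the paper.

```python
# Illustrative sketch (not the paper's code): fetch attribute records from an ArcGIS
# Server feature layer's REST 'query' endpoint and format them as popup-style HTML.
# The service URL and the title field are placeholders.
import requests

LAYER_URL = "https://example.com/arcgis/rest/services/Demo/MapServer/0"  # placeholder

def fetch_attributes(where="1=1", out_fields="*"):
    resp = requests.get(f"{LAYER_URL}/query",
                        params={"where": where, "outFields": out_fields,
                                "returnGeometry": "false", "f": "json"},
                        timeout=30)
    resp.raise_for_status()
    return [feat["attributes"] for feat in resp.json().get("features", [])]

def popup_html(attributes, title_field):
    """Mimic a popup template: a title plus a field/value table."""
    rows = "".join(f"<tr><td>{k}</td><td>{v}</td></tr>" for k, v in attributes.items())
    return f"<h3>{attributes.get(title_field, 'Feature')}</h3><table>{rows}</table>"

if __name__ == "__main__":
    for attrs in fetch_attributes()[:5]:
        print(popup_html(attrs, title_field="NAME"))
```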
14

Mehta, Chanakya, and Khanjan Mehta. "A Design Space and Business Strategy Exploration Tool for Infrastructure-based Ventures in Developing Communities." International Journal for Service Learning in Engineering, Humanitarian Engineering and Social Entrepreneurship 6, no. 2 (October 11, 2011): 30–57. http://dx.doi.org/10.24908/ijsle.v6i2.3659.

Abstract:
Technology ventures in developing communities often fail because of disconnects between the designer, the implementer and the end-user. There is a growing trend towards curricular and extra-curricular programs and student clubs that focus on appropriate technology-based projects to address the needs of marginalized communities at the Base of the Pyramid (BOP). Finding the optimum distribution of time, money and sweat equity to be shared by the communities and partnering organizations can be pivotal in achieving long-term, sustainable impact for the communities. The E-Spot model seeks to identify the appropriate stakeholders within a venture and define their individual roles and the form of equity they might offer towards fulfilling the overarching objectives of the venture, while meeting their own needs. This model is the basis for the E-Spot canvas, a design space and business strategy exploration tool. The canvas facilitates group-thinking amongst stakeholders to match project resource requirements with the time, money, sweat and other equities that can be expended by them to sustain their project socially, economically and environmentally.
15

Paz Penagos, Hernán, Germán Darío Castellanos Tache, Ronald Ferney Alarcón Ballesteros, Viviana Lucia Weiss Velandia, Ángela Roció Laverde Cañón, Juan Carlos Rodríguez Calderón, and Leonel Andrés Rincón Fosca. "Networking automation of ECI’s G-204 electronic engineering laboratory." Ingeniería e Investigación 26, no. 3 (September 1, 2006): 100–112. http://dx.doi.org/10.15446/ing.investig.v26n3.14758.

Abstract:
Increased use (by students and teachers) of the “Escuela Colombiana de Ingeniería Julio Garavito” Electronic Engineering laboratories during the last year has congested access to these laboratories; the School’s Electronic Engineering (Ecitrónica) programme’s applied Electronic studies’ center research group thus proposed, designed and developed a research project taking advantage of G building’s electrical distribution to offer access facilities, laboratory equipment control, energy saving and improved service quality. The G-204 laboratory’s network system will have an access control subsystem with client-main computer architecture. The latter consists of a user, schedule, group and work-bank database; the user is connected from any computer (client) to the main computer through the Internet to reserve his/her turn at laboratory practice by selecting the schedule, group, work-bank, network type required (1Φ or 3Φ) and registering co-workers. Access to the G-204 laboratory on the day and time of practice is made by means of an intelligent card reader. Information of public interest produced and controlled by the main computer is displayed on three LCD screens located on one of G building’s second floor walls, as is an electronic clock. The G-204 laboratory temperature and time are continually updated. Work-banks are enabled or disabled by the main computer; the work-banks are provided with power (beginning of practice) or disconnected (end of practice or due to eventualities) to protect the equipment, save energy, facilitate monitors and supervise the logistics of the state of the equipment at the end of each practice. The research group was organised into Transmission Line and Applications sub-groups. Power Line Communications (PLC) technology was used for exploring digital modulation alternatives, coding and detecting errors, coupling, data transmission protocols and new applications, all based on channel estimation (networking) as the means of transmission.
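A minimal sketch of the reservation/access check described above might look as follows. The data model, card IDs and times are invented; the real system works against a client-server database, a smart-card reader and power control of the work-banks.

```python
# Toy reservation/access check in the spirit of the G-204 access control subsystem.
# Card IDs, work-bank names and reservation times are made up for illustration.
from datetime import datetime

# card_id -> list of (work_bank, start, end) reservations made through the web client
reservations = {
    "CARD-0042": [("bank-3", datetime(2024, 5, 6, 10, 0), datetime(2024, 5, 6, 12, 0))],
}

def check_access(card_id, now):
    """Return the work-bank to power up if the card holds a valid reservation right now."""
    for bank, start, end in reservations.get(card_id, []):
        if start <= now <= end:
            return bank
    return None

if __name__ == "__main__":
    bank = check_access("CARD-0042", datetime(2024, 5, 6, 10, 30))
    print(f"enable {bank}" if bank else "access denied; work-bank stays disconnected")
```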
16

Ko, Chihjen, and Lex Wang. "Applying Design Thinking in Revising Data Curation of Taiwanese Herbaria." Biodiversity Information Science and Standards 2 (May 22, 2018): e25828. http://dx.doi.org/10.3897/biss.2.25828.

Abstract:
Herbaria in Taiwan face critical data challenges: different taxonomic views prevent data exchange; there is a lack of development practices to keep up with standard and technological advances; and data is disconnected from researchers’ perspective, so it is difficult to demonstrate the value of taxonomists’ activities, even though a few herbaria have their specimen catalogue partially exposed in Darwin Core. In consultation with the Herbarium of the Taiwan Forestry Research Institute (TAIF), the Herbarium of the National Taiwan University (TAI) and the Herbarium of the Biodiversity Research Center, Academia Sinica (HAST), which together host the most important collections of the vegetation on the island, we have planned the following activities to address these data challenges: investigate a new data model for scientific names that will accommodate different taxonomic views and create a web service for access to taxonomic data; refactor existing herbarium systems to utilize the aforementioned service so the three herbaria can share and maintain a standardized name database; create a layer of Application Programming Interface (API) to allow multiple types of accessing devices; conduct behavioral research regarding various personas engaged in the curatorial workflow; and create a unified front-end that supports data management, data discovery, and data analysis activities with user experience improvements. To manage these developments at various levels, while maximizing the contribution of participating parties, it is crucial to use a proven methodological framework. As the creative industry has been leading in the area of solution development, the concept of design thinking and the design thinking process (Brown and Katz 2009) has come to our radar. Design thinking is a systematic approach to handling problems and generating new opportunities (Pal 2016). From requirement capture to actual implementation, it helps consolidate ideas and identify agreed-on key priorities by constantly iterating through a series of interactive divergence and convergence steps, namely the following. Empathize: a divergent step; we learn about our audience, which in this case includes curators and visitors of the herbarium systems, about what they do and how they interact with the system, and collate our findings. Define: a convergent step; we construct a point of view based on audience needs. Ideate: a divergent step; we brainstorm and come up with creative solutions, which might be novel or based on existing practice. Prototype: a convergent step; we build representations of the chosen idea from the previous step. Test: use the prototype to test whether the idea works, then refine from step 3 if problems were with the prototyping, or even step 1 if the point of view needs to be revisited. The benefits of adopting this process are: instead of “design for you”, we “design together”, which strengthens the sense of community and helps the communication of what the revision and refactoring will achieve; when put in context, increased awareness and understanding of biodiversity data standards, such as Darwin Core (DwC) and Access to Biological Collections Data (ABCD); and, as we lend the responsibility of process control to an external facilitator, we are able to focus during each step as a participant. We illustrate how the planned activities are conducted through the five iterative steps.
17

Gupta, Rajat, Mona Aggarwal, and Swaran Ahuja. "Hamiltonian Graph Analysis – Mixed Integer Linear Programming (HGA-MILP) Based Link Failure Detection System in Optical Data Center Networks." Journal of Optical Communications, July 24, 2019. http://dx.doi.org/10.1515/joc-2019-0090.

Abstract:
Internet services have rapidly become more popular over the decades, driven by new content-hungry applications on end-user devices such as smartphones. Wavelength-division-multiplexed mesh networks form the optical backbone, acting as the aggregation point for high-volume traffic and interconnecting several access networks and end users. Virtual data centers are needed to retain services and improve survivability in case of failures, such as optical link failures and resource or power outages, that occur in the cloud infrastructure. Failures in the physical infrastructure may disconnect the virtual machines from the physical data centers, reducing both the computational and the communication capabilities of the cloud. To overcome these issues, an integer linear programming model called Hamiltonian graph analysis – mixed integer linear programming (HGA-MILP) is proposed in this work, utilizing a Hamiltonian graph model. The main motive of this research is to address the survivable data center network mapping problem against physical link failures. Use of the Hamiltonian path methodology helps to organize data and improve network performance.
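The appeal of a Hamiltonian structure for survivability is that a cycle visiting every node stays connected after any single link failure. The brute-force toy below finds such a cycle on a tiny invented topology; the paper itself formulates the real problem as a mixed integer linear program rather than by enumeration.

```python
# Toy illustration: find a Hamiltonian cycle on a small invented topology. Removing any
# single edge of such a cycle leaves a Hamiltonian path, so all nodes stay reachable.
# This enumeration only scales to tiny graphs; HGA-MILP solves the problem as a MILP.
from itertools import permutations

edges = {("A", "B"), ("B", "C"), ("C", "D"), ("D", "A"), ("A", "C")}  # demo graph
nodes = sorted({n for e in edges for n in e})

def connected(u, v):
    return (u, v) in edges or (v, u) in edges

def hamiltonian_cycle():
    start = nodes[0]
    for perm in permutations(nodes[1:]):
        path = [start, *perm]
        if (all(connected(path[i], path[i + 1]) for i in range(len(path) - 1))
                and connected(path[-1], start)):
            return path + [start]
    return None

print("Hamiltonian cycle:", hamiltonian_cycle())
```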
18

Figueiredo, Rachel, Helen Power, Kate Mercer, and Matthew Borland. "EMBEDDING LIBRARIANS IN ENGINEERING PROGRAMS: THREE CASE STUDIES WITH ENGINEERING STUDENTS." Proceedings of the Canadian Engineering Education Association (CEEA), June 12, 2020. http://dx.doi.org/10.24908/pceea.vi0.14145.

Abstract:
As the information landscape becomes increasingly complex, librarians must adapt accordingly. With information so readily available, students overestimate their research skills and lack awareness of how the library can help. However, librarians’ academic training makes them ideal resources to support students’ complex information needs - whether students know it or not. In this paper, we argue that embedded librarianship is the solution to this disconnect between librarian and user. Specifically, this paper provides case studies at two Canadian universities of librarians approaching embedded librarianship from different directions. At the University of Waterloo, two engineering librarians worked toward an embedded model of librarianship where this was not yet an established model in the Faculty of Engineering. At the University of Saskatchewan, a librarian was hired with the intention of the new position being embedded, without a formal structure or precedent for this within the College of Engineering. The term “embedded librarian” describes a service model where an academic librarian participates in an academic course or program on a continuing basis in order to understand the learning objectives and determine which resources best support them. In order to “do this, the librarian has to be familiar with the work and understand the domain and goals. Doing this, the librarian becomes an invaluable member of the team” [1]. The variables associated with embeddedness include location, funding, management and supervision, and participation [1]. To this end, the authors explore how each of these variables contribute to the success of moving towards this embedded model: how moving out of the library influences overall connection, how they acquired funding to grow a new collection, how management supports the overall goal, and how sustained participation in the program grows new opportunities. At both universities, librarians have seen most success embedding in programs with a strong emphasis on integrated STEM education where the focus is on providing real-world context with the aim of graduating well-rounded engineers [2]. The authors will discuss how programmatic learning outcomes and trends in integrated and interdisciplinary education have allowed them to stretch beyond the traditional boundaries of academic librarianship to demonstrate value to the Engineering departments in new ways. This paper reports on the experiences, advantages, and lessons learned in moving toward this model, and provides concrete examples for adapting these concepts to programs at other institutions. Through an intrinsic case study [3] the authors aim to understand how librarians’ embeddedness can adapt and change to support student learning in different contexts. This session is targeted towards practicing engineering librarians and engineering faculty members and educators. Attendees will leave the session with ideas on how to stimulate new partnerships between their library and Engineering programs.
19

Rauch, Susan. "UX Case Study: Tracking EHR automation, scarcity of attention, and transaction hazards." Online Journal of Public Health Informatics 11, no. 1 (May 30, 2019). http://dx.doi.org/10.5210/ojphi.v11i1.9692.

Abstract:
Objective: To track and visually assess how automated attention structures within the electronic health record (EHR) compete for clinicians’ attention during computerized physician order entry, which could potentially lead to transaction hazards in the clinical narrative. Introduction: In recent years, studies in health and medicine have shifted toward eHealth communication and the relationships among human interaction, computer literacy, and digital text content in medical discourses (1-6). Clinicians, however, continue to struggle with EHR usability, including how to effectively capture patient data without error (7-9). Usability is especially problematic for clinicians, who must now acquire new skills in electronic documentation (10). Challenges with the EHR occur because of clinicians’ struggle with attention to the non-linear format of clinical content and automated technologies (11). It is therefore important to understand how attention structures are visually situated within the EHR’s narrative architecture and the audience for whom electronic text is written. It is equally important to visualize and track how automated language and design in health information technology (HIT) affect users’ attention when documenting clinical narratives (12). In the study of health information technology, researchers of eHealth platforms need to recognize how the construction of human communication lies within the metaphoric expression, design, and delivery of the EHR’s information architecture (13). Many studies of electronic health records (EHR) examine the design and usability in the development stages. Some studies focus on the economic value of the EHR Medicare incentive program, which affects providers’ return on investment (ROI). Few studies, however, identify the communicative value of how attention structures within the EHR’s information architecture compete for users’ attention during the clinical documentation process (9, 14). Methods: This paper highlights methods from an observed EHR pre-launch testing event that analyzes the visual effects of attention structures within the EHR’s information landscape. The observation was completed in two separate stages, each with one IT facilitator and two participant demographics: Stage 1, on-site HIT clinical application staff testing, and Stage 2, twenty-five participants (RN and non-RN clinical staff). During the second stage of the event, one participant’s task performance was screencast-recorded. The length of the testing for the one participant totaled 37 minutes. Because the EHR domain is propelled by both the Internet and Intranet, a contextual-rhetorical analysis of the data collected was performed, which incorporated Nielsen's 10 Usability Heuristics for Interaction Design (15, 16) and Stuart Blythe’s methodological approach to analyzing digital writing and technology to define rhetorical units of analysis in digital Web research (17). Results: The UX observation and contextual-rhetorical analysis of EHR design support a 4-year qualitative study consisting of hospital interviews at two acute-care facilities and an online, national survey of revenue integrity and clinical documentation improvement specialists. The testing event served as an opportunity to observe how a healthcare organization user-experience tests the functionality of the EHR’s design build before launching it live. The testing event also provides an understanding of clinicians’ organizational needs and challenges during the clinical documentation process.
The contextual-rhetorical analysis identified how the structure of narrative in the EHR represents rhetorical units of value that might influence how clinicians make decisions about narrative construction. Conclusions: This UX case study analysis of an EHR testing event identifies how scarcity of attention and clinicians’ reliance on technology affect clinical documentation best practices, leading to potential transaction hazards in the clinical narrative. The study is relevant in eHealth data surveillance because it shows how visual cues within the design of the EHR's technological landscape affect clinicians’ decision-making processes while documenting the EHR-generated clinical narrative.
References:
1. Black A, Car J, Majeed A, Sheikh A. Strategic considerations for improving the quality of eHealth research: we need to improve the quality and capacity of academia to undertake informatics research. Journal of Innovation in Health Informatics. 2008;16(3):175-7.
2. Meeks DW, Smith MW, Taylor L, Sittig DF, Scott JM, Singh H. An analysis of electronic health record-related patient safety concerns. Journal of the American Medical Informatics Association. 2014;21(6):1053-9.
3. Owens KH. Rhetorics of e-Health and information age medicine: A risk-benefit analysis. JAC. 2011:225-35.
4. Petersson J. Geographies of eHealth: Studies of Healthcare at a Distance. 2014.
5. Solomon S. How we can end the disconnect in health. Health Voices. 2014(15):23.
6. Subbiah NK. Improving Usability and Adoption of Tablet-based Electronic Health Record (EHR) Applications: Arizona State University; 2018.
7. Khairat S, Burke G, Archambault H, Schwartz T, Larson J, Ratwani RM. Perceived Burden of EHRs on Physicians at Different Stages of Their Career. Applied clinical informatics. 2018;9(02):336-47.
8. Staggers N, Elias BL, Makar E, Alexander GL. The Imperative of Solving Nurses’ Usability Problems With Health Information Technology. Journal of Nursing Administration. 2018;48(4):191-6.
9. Yackel TR, Embi PJ. Unintended errors with EHR-based result management: a case series. Journal of the American Medical Informatics Association. 2010;17(1):104-7.
10. Stewart WF, Shah NR, Selna MJ, Paulus RA, Walker JM. Bridging the inferential gap: the electronic health record and clinical evidence. Health Affairs. 2007;26(2):w181-w91.
11. Johnson SB, Bakken S, Dine D, Hyun S, Mendonça E, Morrison F, et al. An electronic health record based on structured narrative. Journal of the American Medical Informatics Association. 2008;15(1):54-64.
12. Lanham RA. The economics of attention: Style and substance in the age of information: University of Chicago Press; 2006.
13. Salvo MJ. Rhetorical action in professional space: Information architecture as critical practice. Journal of Business and Technical Communication. 2004;18(1):39-66.
14. Sittig DF, Singh H. A new socio-technical model for studying health information technology in complex adaptive healthcare systems. Cognitive Informatics for Biomedicine: Springer; 2015. p. 59-80.
15. Nielsen J. 10 usability heuristics for user interface design. Nielsen Norman Group. 1995;1(1).
16. Nielsen J, Molich R, editors. Heuristic evaluation of user interfaces. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems; 1990: ACM.
17. Blythe S. Digital Writing Research. In: McKee HA, DeVoss D, editors. Digital Writing Research: Technologies, Methodologies and Ethical Issues (New Dimensions in Computers and Composition). Cresskill, NJ: Hampton Press; 2007. p. 203-28.
20

dos Remedios, Nicholas, Sarah Richmond, Jeff Christiansen, Nigel Ward, Hamish Holewa, and Kathryn Hall. "Building an Australian Reference Genome Atlas." Biodiversity Information Science and Standards 6 (August 23, 2022). http://dx.doi.org/10.3897/biss.6.91415.

Abstract:
Currently, genomics data for living species are stored in public and private repositories online. These repositories remain largely disconnected and only partially findable. The Australian Reference Genome Atlas (ARGA) Project is solving the problem of genomics data obscurity by creating an online platform where life sciences researchers can comprehensively and confidently search for data for taxa relevant to Australian research. At its most basic, ARGA is a tool for aggregating and indexing publicly available genomics (and genetics) data. We aim to improve the experience of discovering and accessing this data by building search functionality, based on features such as phenotypic traits and predicted and observed species distributions, and supporting data packaging and transfer to analysis environments. ARGA will index GenBank (National Institutes of Health (NIH), USA), the European Nucleotide Archive (EMBL-ENA), the database of Bioplatforms Australia, and selected DNA repositories in Australian faunal collections and herbaria. We will integrate these records with the occurrence records and taxonomic framework of the Atlas of Living Australia (ALA) to enrich the data and make it searchable using taxonomy, location, ecological characteristics and selected phenotypic data. The chief aims and outputs for the project are to: create a system to enable contextual metadata about a species to be used as a pointer to a variety of genomic data associated with that species; add functionality to that system to enable additional contextual information groupings, and community curation of these created groupings; create a user-facing web-accessible interface for the system; and devise a mechanism that allows the researchers searching the multiple genomic repositories, via ARGA, to select files for subsequent analysis and export them to other cloud-based analysis infrastructure. Our approach to ARGA incorporates: ingesting species metadata from multiple sequence repositories into a consistent data format using Darwin Core Archive (DwC-A); processing metadata using the Pipelines system developed by the Global Biodiversity Information Facility (GBIF), as implemented in the ALA and other Living Atlases; indexing metadata using a Solr search engine; and providing a front-end web interface for users to find, select and export sequence files to a number of cloud-based analysis platforms.
Here we will present an overview of the ARGA infrastructure and demonstrate an early prototype of the platform. We will show how ARGA can be used to interrogate DNA sequence records for taxa relevant to Australian research questions, realising a vision where genomics-based solutions to biological questions in conservation, ecology, agriculture and biosecurity can be manifested.
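A minimal sketch of the aggregate-and-index pattern described above: map a sequence record's metadata onto Darwin-Core-style terms and post the document to a Solr core over Solr's standard JSON update endpoint. The field mapping, Solr URL and core name are assumptions for illustration, not ARGA's actual schema or deployment.

```python
# Illustrative sketch: map repository metadata onto Darwin-Core-style terms and index
# the resulting document in Solr. URLs, core name and field choices are placeholders.
import requests

SOLR_UPDATE_URL = "http://localhost:8983/solr/arga-demo/update?commit=true"  # placeholder

def to_dwc_doc(record: dict) -> dict:
    """Map sequence-repository metadata onto Darwin Core terms plus a sequence link."""
    return {
        "id": record["accession"],
        "scientificName": record["organism"],
        "decimalLatitude": record.get("lat"),
        "decimalLongitude": record.get("lon"),
        "associatedSequences": record["sequence_url"],
        "datasetName": record["repository"],
    }

def index_records(records):
    docs = [to_dwc_doc(r) for r in records]
    resp = requests.post(SOLR_UPDATE_URL, json=docs, timeout=30)
    resp.raise_for_status()

if __name__ == "__main__":
    index_records([{
        "accession": "XX000001",
        "organism": "Eucalyptus regnans",
        "lat": -37.7, "lon": 145.5,
        "sequence_url": "https://example.org/sequences/XX000001",
        "repository": "demo repository",
    }])
```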
21

Edmundson, Anna. "Curating in the Postdigital Age." M/C Journal 18, no. 4 (August 10, 2015). http://dx.doi.org/10.5204/mcj.1016.

Abstract:
It seems nowadays that any aspect of collecting and displaying tangible or intangible material culture is labeled as curating: shopkeepers curate their wares; DJs curate their musical selections; magazine editors curate media stories; and hipsters curate their coffee tables. Given the increasing ubiquity and complexity of 21st-century notions of curatorship, the current issue of M/C Journal, ‘curate’, provides an excellent opportunity to consider some of the changes that have occurred in professional practice since the emergence of the ‘digital turn’. There is no doubt that the internet and interactive media have transformed the way we live our daily lives—and for many cultural commentators it only makes sense that they should also transform our cultural experiences. In this paper, I want to examine the issue of curatorial practice in the postdigital age, looking at some of the ways that curating has changed over the last twenty years—and some of the ways it has not. The term postdigital comes from the work of Ross Parry, and is used to reference the ‘tipping point’ where the use of digital technologies became normative practice in museums (24). Overall, I contend that although new technologies have substantially facilitated the way that curators do their jobs, core business and values have not changed as the result of the digital turn. While major paradigm shifts have occurred in the field of professional curatorship over the last twenty years, these shifts have been issue-driven rather than a result of new technologies. Everyone’s a Curator In a 2009 article in the New York Times, journalist Alex Williams commented on the growing trend in American consumer culture of labeling oneself a curator. “The word ‘curate’,’’ he observed, “has become a fashionable code word among the aesthetically minded, who seem to paste it onto any activity that involves culling and selecting” (1). Williams dated the origins of the popular adoption of the term ‘curating’ to a decade earlier, noting the strong association between the uptake and the rise of the internet (2). This association is not surprising. The development of increasingly interactive software such as Web 2.0 has led to a rapid rise in new technologies aimed at connecting people and information in ways that were previously unimaginable. In particular the internet has become a space in which people can collect, store and most importantly share vast quantities of information. This information is often about objects. According to sociologist Jyri Engeström, the most successful social network sites on the internet (such as Pinterest, Flickr, Houzz, etc.) use discrete objects, rather than educational content or interpersonal relationships, as the basis for social interaction. So objects become the node for inter-personal communication. In these and other sites, internet users can find, collate and display multiple images of objects on the same page, which can in turn be connected at the press of a button to other related sources of information in the form of text, commentary or more images. These sites are often seen as the opportunity to virtually curate mini-exhibitions, as well as to create mood boards or sites of virtual consumption. The idea of curating as selective aesthetic editing is also popular in online marketplaces such as Etsy, where numerous sellers offer ‘curated’ selections from home wares, to prints, to (my personal favorite) a curated selection of cat toys.
In all of these exercises there is an emphasis on the idea of connoisseurship. As part of his article on the new breed of ‘curators’, for example, Alex Williams interviewed Tom Kalendrain, the Fashion Director of a leading American department store, which had engaged in a collaboration with Scott Schuman of the fashion blog, the Sartorialist. According to Kalendrain the store had asked Schuman to ‘curate’ a collection of clothes for them to sell. He justified calling Schuman a curator by explaining: “It was precisely his eye that made the store want to work with him; it was about the right shade of blue, about the cut, about the width of a lapel” (cited in Williams 2). The interview reveals much about current popular notions of what it means to be a curator. The central emphasis of Kalendrain’s distinction was on connoisseurship: exerting a privileged authoritative voice based on intimate knowledge of the subject matter and the ability to discern the very best examples from a plethora of choices. Ironically, in terms of contemporary museum practice, this is a model of curating that museums have consciously been trying to move away from for at least the last three decades. We are now witnessing an interesting disconnect in which the extra-museum community (represented in particular by a postdigital generation of cultural bloggers, commentators and entrepreneurs) are re-vivifying an archaic model of curating, based on object-centric connoisseurship, just at the point where professional curators had thought they had successfully moved on. From Being about Something to Being for Somebody The rejection of the object-expert model of curating has been so persuasive that it has transformed the way museums conduct core business across all sectors of the institution. Over the last thirty to forty years museums have witnessed a major pedagogical shift in how curators approach their work and how museums conceptualise their core values. These paradigmatic and pedagogical shifts were best characterised by the museologist Stephen Weil in his seminal article “From being about something to being for somebody.” Weil, writing in the late 1990s, noted that museums had turned away from traditional models in which individual curators (by way of scholarship and connoisseurship) dictated how the rest of the world (the audience) apprehended and understood significant objects of art, science and history—towards an audience centered approach where curators worked collaboratively with a variety of interested communities to create a pluralist forum for social change. In museum parlance these changes are referred to under the general rubric of the ‘new museology’: a paradigm shift, which had its origins in the 1970s; its gestation in the 1980s; and began to substantially manifest by the 1990s. Although no longer ‘new’, these shifts continue to influence museum practices in the 2000s. In her article, “Curatorship as Social Practice’” museologist Christina Kreps outlined some of the developments over recent decades that have challenged the object-centric model. According to Kreps, the ‘new museology’ was a paradigm shift that emerged from a widespread dissatisfaction with conventional interpretations of the museum and its functions and sought to re-orient itself away from strongly method and technique driven object-focused approaches. “The ‘new museum’ was to be people-centered, action-oriented, and devoted to social change and development” (315). 
An integral contributor to the developing new museology was the subjection of the western museum in the 1980s and ‘90s to representational critique from academics and activists. Such a critique entailed, in the words of Sharon Macdonald, questioning and drawing attention to “how meanings come to be inscribed and by whom, and how some come to be regarded as ‘right’ or taken as given” (3). Macdonald notes that postcolonial and feminist academics were especially engaged in this critique and the growing “identity politics” of the era. A growing engagement with the concept that museological/curatorial work is what Kreps (2003b) calls a ‘social process’, a recognition that “people’s relationships to objects are primarily social and cultural ones” (154). This shift has particularly impacted on the practice of museum curatorship. By way of illustration we can compare two scholarly definitions of what constitutes a curator; one written in 1984 and one from 2001. The Manual of Curatorship, written in 1994 by Gary Edson and David Dean, defines a curator as: “a staff member or consultant who is a specialist in a particular field of study and who provides information, does research and oversees the maintenance, use, and enhancement of collections” (290). Cash Cash, writing in 2001, defines curatorship instead as “a social practice predicated on the principle of a fixed relation between material objects and the human environment” (140). The shift has been towards increased self-reflexivity and a focus on greater plurality–acknowledging the needs of their diverse audiences and community stakeholders. As part of this internal reflection the role of curator has shifted from sole authority to cultural mediator—from connoisseur to community facilitator as a conduit for greater community-based conversation and audience engagement resulting in new interpretations of what museums are, and what their purpose is. This shift—away from objects and towards audiences—has been so great that it has led some scholars to question the need for museums to have standing collections at all. Do Museums Need Objects? In his provocatively titled work Do Museums Still Need Objects?, historian Steven Conn observes that many contemporary museums are turning away from the authority of the object and towards mass entertainment (1). Conn notes that there has been an increasing retreat from object-based research in the fields of art, science and ethnography; that less object-based research seems to be occurring in museums and fewer objects are being put on display (2). The success of science centers with no standing collections, the reduction in the number of objects put on display in modern museums (23); the increasing phalanx of ‘starchitect’ designed museums where the building is more important than the objects in it (11), and the increase of virtual museums and collections online, all seem to indicate that conventional museum objects have had their day (1-2). Or have they? At the same time that all of the above is occurring, ongoing research suggests that in the digital age, more than ever, people are seeking the authenticity of the real. For example, a 2008 survey of 5,000 visitors to living history sites in the USA found that those surveyed expressed a strong desire to commune with historically authentic objects: respondents felt that their lives had become so crazy, so complicated, so unreal that they were seeking something real and authentic in their lives by visiting these museums.
(Wilkening and Donnis 1) A subsequent research survey aimed specifically at young audiences (in their early twenties) reported that: seeing stuff online only made them want to see the real objects in person even more, [and that] they felt that museums were inherently authentic, largely because they have authentic objects that are unique and wonderful. (Wilkening 2) Adding to the question ‘do museums need objects?’, Rainey Tisdale argues that in the current digital age we need real museum objects more than ever. “Many museum professionals,” she reports, “have come to believe that the increase in digital versions of objects actually enhances the value of in-person encounters with tangible, real things” (20). Museums still need objects. Indeed, in any kind of corporate planning, one of the first things business managers look for in a company is what is unique about it. What can it provide that the competition can’t? Despite the popularity of all sorts of info-tainments, the one thing that museums have (and other institutions don’t) is significant collections. Collections are a museum’s niche resource: in business-speak, they are the asset that gives them the advantage over their competitors. Despite the increasing importance of technology in delivering information, including collections online, there is still overwhelming evidence to suggest that we should not be too quick to dismiss the traditional preserve of museums: the numinous object. And in fact, this is precisely the final argument that Steven Conn reaches in his above-mentioned publication. Curating in the Postdigital Age While it is reassuring (but not particularly surprising) that generations Y and Z can still differentiate between virtual and real objects, this doesn’t mean that museum curators can bury their heads in the collection room hoping that the digital age will simply go away. The reality is that while digitally savvy audiences continue to feel the need to see and commune with authentic, materially-present objects, the ways in which they access information about these objects (prior to, during, and after a museum visit) have changed substantially due to technological advances. In turn, the ways in which curators research and present these objects, and the stories about them, have also changed. So what are some of the changes that have occurred in museum operations and visitor behavior due to technological advances over the last twenty years? The most obvious technological advances over the last twenty years have actually been in data management. Since the 1990s a number of specialist data management systems have been developed for use in the museum sector. In theory at least, a curator can now access the entire collections of an institution without leaving her desk. Moreover, the same database that tells the curator how many objects the institution holds from the Torres Strait Islands can also tell her what they look like (through high quality images); which objects were exhibited in past exhibitions; what their prior labels were; what in-house research has been conducted on them; what the conservation requirements are; where they are stored; and who to contact for copyright clearance for display, to name just a few functions. In addition, a curator can go online to search the collection databases of other museums to find what objects they hold from the Torres Strait Islands. 
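To give a concrete, if deliberately simplified, picture of the kind of system just described, the following is a minimal Python sketch of the sort of record a collection management database might expose to a curator and the sort of desk-bound query she might run against it. The field names, identifier and URL are hypothetical illustrations only; they are not drawn from the schema of any particular collection management product.

```python
# Hypothetical sketch of a collection management record and a simple query.
# All field names, identifiers and URLs below are illustrative placeholders.
from dataclasses import dataclass, field
from typing import List

@dataclass
class CollectionObject:
    object_id: str
    provenance_region: str                      # e.g. "Torres Strait Islands"
    image_urls: List[str] = field(default_factory=list)
    past_exhibitions: List[str] = field(default_factory=list)
    prior_labels: List[str] = field(default_factory=list)
    research_notes: str = ""
    conservation_requirements: str = ""
    storage_location: str = ""
    copyright_contact: str = ""

def objects_from_region(catalogue: List[CollectionObject], region: str) -> List[CollectionObject]:
    """Return every catalogued object whose provenance matches the region of interest."""
    return [obj for obj in catalogue if obj.provenance_region == region]

# Example of the kind of desk-bound query described above.
catalogue = [
    CollectionObject(
        object_id="TSI-001",
        provenance_region="Torres Strait Islands",
        image_urls=["https://example.org/images/tsi-001.jpg"],
        past_exhibitions=["Island Currents, 1998"],
        storage_location="Store B, Bay 12",
    ),
]
print(len(objects_from_region(catalogue, "Torres Strait Islands")))  # -> 1
```

Even in this toy form, a single catalogue entry answers questions that once required a visit to the store room, which is the practical shift the paragraph above describes.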
Thus, while our curator is at this point conducting the same type of exhibition research that she would have done twenty years ago, the ease with which she can access information is substantially greater. The major difference of course is that today, rather than in the past, the curator would be collaborating with members of the original source community to undertake this project. Despite the rise of the internet, this type of liaison still usually occurs face to face. The development of accessible digital databases through the Internet and the capacity to download images and information at a rapid rate have also changed the way non-museum staff can access collections. Audiences can now visit museum websites through which they can easily access information about current and past exhibitions, public programs, and online collections. In many cases visitors can also contribute to general discussion forums and collections provenance data through various means such as ‘tagging’, commenting on blogs, message boards, and virtual ‘talk back’ walls. Again, however, this represents a change in how visitors access museums but not a fundamental shift in what they can access. In the past, museum visitors were still encouraged to access and comment upon the collections; it’s just that doing so took a lot more time and effort. The rise of interactivity and the internet (in particular through Web 2.0) has led many commentators to call for a radical change in the ways museums operate. Museum analyst Lynda Kelly (2009) has commented that: the demands of the ‘information age’ have raised new questions for museums. It has been argued that museums need to move from being suppliers of information to providing usable knowledge and tools for visitors to explore their own ideas and reach their own conclusions because of increasing access to technologies, such as the internet. Gordon Freedman, for example, argues that internet technologies such as computers, the World Wide Web, mobile phones and email “… have put the power of communication, information gathering, and analysis in the hands of the individuals of the world” (299). Freedman argued that museums need to “evolve into a new kind of beast” (300) in order to keep up with these changes, opening up the possibility of audiences becoming mediators of information and knowledge. Although we often hear about the potential of new technologies to open up the possibility of multiple authors for exhibitions, I have yet to hear of an example of this successfully taking place. This doesn’t mean, however, that it will never happen. At present most museums seem to be merely dipping their toes in the water. A recent example from the Art Gallery of South Australia illustrates this point. In 2013, the Gallery mounted an exhibition that was, in theory at least, curated by the public. Labeled “the ultimate people’s choice exhibition”, the project was hosted in conjunction with ABC Radio Adelaide. The public was encouraged to go online to the gallery website and select from a range of artworks in different categories by voting for their favorites. The ‘winning’ works were to form the basis of the exhibition. While the media spin on the exhibition gave the illusion of a mass-curated show, in reality very little actual control was given over to the audience-curators. 
The public was presented with a range of artworks, which had already been pre-selected from the standing collections; the themes for the exhibition had also already been determined, as they informed the 120 artworks that were offered up for voting. Thus, in the end the pre-selection of objects and themes, as well as the timing and execution of the exhibition, remained entirely in the hands of the professional curators. Another recent innovation did not attempt to harness public authorship, but rather enhanced individual visitor connections to museum collections by harnessing new GPS technologies. The Streetmuseum was a free app created by the Museum of London to bring geotagged historical street views to hand-held or portable mobile devices. The program allowed users to undertake a self-guided tour of London. After programming in their route, users could then point their device at various significant sites along the way. Looking through their viewfinder they would see a 3D historic photograph overlaid on the live site, allowing users not only to see what the area looked like in the past but also to capture an image of the overlay. While many of the available tagging apps simply allow for the opportunity of adding more white noise (allowing viewers to add commentary, pics and links to a particular geotagged site, but with no particular focus), the Streetmuseum had a well-defined purpose: to encourage its audience to get out and explore London; to share the museum’s archival photograph collection with a broader audience; and to teach people more about London’s unique history. A Second Golden Age? A few years ago Steven Conn suggested that museums are experiencing an international ‘golden age’, with more museums being built and visited and talked about than ever before (1). In the United States, where Conn is based, there are more than 17,500 accredited museums, and more than two million people visit some sort of museum per day, averaging around 865 million museum visits per year (2). However, at the same time that museums are proliferating, the traditional areas of academic research and theory that feed into museums, such as history, cultural studies, anthropology and art history, are experiencing a period of intense self-reflexivity. Conn writes: At the turn of the twenty-first century, more people are going to more museums than at any time in the past, and simultaneously more scholars, critics, and others are writing and talking about museums. The two phenomena are most certainly related but it does not seem to be a happy relationship. Even as museums enjoy more and more success…many who write about them express varying degrees of foreboding. (1) There is no doubt that the internet and increasingly interactive media have transformed the way we live our daily lives; it only makes sense that they should also transform our cultural experiences. At the same time, museums need to learn to ride the wave without getting dumped by it. The best new media acts as a bridge, connecting people to places and ideas, allowing them to learn more about museum objects and historical spaces, value-adding to museum visits rather than replacing them altogether. 
As museologist Elaine Gurian has recently concluded, the core business of museums seems unchanged thus far by the adoption of internet-based technology: “the museum field generally, its curators, and those academic departments focused on training curators remain at the core philosophically unchanged despite their new websites and shiny new technological reference centres” (97). Virtual life has not replaced real life, and online collections and exhibitions have not replaced real-life visits. Visitors want access to credible information about museum objects and museum exhibitions; they are not looking for Wiki-Museums. Or if they are, they are looking to the Internet community to provide that service rather than to the employees of state and federally funded museums. Both provide legitimate services, but they don’t necessarily need to provide the same service. In the same vein, extra-museum ‘curating’ of objects and ideas through social media sites such as Pinterest, Flickr, Instagram and Tumblr provides a valuable source of inspiration and a highly enjoyable form of virtual consumption. But the popular uptake of the term ‘curating’ remains as easily separable from professional practice as the prior uptake of the terms ‘doctor’ and ‘architect’. An individual who doctors an image, or is the architect of their destiny, is still not going to operate on a patient or construct a building. While major ontological shifts have occurred within museum curatorship over the last thirty years, these changes have resulted from wider social shifts, not directly from technology. This is not to say that technology will not change the museum’s ‘way of being’ in my professional lifetime; it is just to say it hasn’t happened yet. References Cash Cash, Phillip. “Medicine Bundles: An Indigenous Approach.” Ed. T. Bray. The Future of the Past: Archaeologists, Native Americans and Repatriation. New York and London: Garland Publishing, 2001. 139-145. Conn, Steven. Do Museums Still Need Objects? Philadelphia: University of Pennsylvania Press, 2011. Edson, Gary, and David Dean. The Handbook for Museums. New York and London: Routledge, 1994. Engeström, Jyri. “Why Some Social Network Services Work and Others Don’t — Or: The Case for Object-Centered Sociality.” Zengestrom Apr. 2005. 17 June 2015 ‹http://www.zengestrom.com/blog/2005/04/why-some-social-network-services-work-and-others-dont-or-the-case-for-object-centered-sociality.html›. Freedman, Gordon. “The Changing Nature of Museums.” Curator 43.4 (2000): 295-306. Gurian, Elaine Heumann. “Curator: From Soloist to Impresario.” Eds. Fiona Cameron and Lynda Kelly. Hot Topics, Public Culture, Museums. Newcastle: Cambridge Scholars Publishing, 2010. 95-111. Kelly, Lynda. “Museum Authority.” Blog 12 Nov. 2009. 25 June 2015 ‹http://australianmuseum.net.au/blogpost/museullaneous/museum-authority›. Kreps, Christina. “Curatorship as Social Practice.” Curator: The Museum Journal 46.3 (2003): 311-323. ———. Liberating Culture: Cross-Cultural Perspectives on Museums, Curation, and Heritage Preservation. London and New York: Routledge, 2003. Macdonald, Sharon. “Expanding Museum Studies: An Introduction.” Ed. Sharon Macdonald. A Companion to Museum Studies. Oxford: Blackwell Publishing, 2011. Parry, Ross. “The End of the Beginning: Normativity in the Postdigital Museum.” Museum Worlds: Advances in Research 1 (2013): 24-39. Tisdale, Rainey. “Do History Museums Still Need Objects?” History News (2011): 19-24. 
18 June 2015 ‹http://aaslhcommunity.org/historynews/files/2011/08/RaineySmr11Links.pdf›. Suchy, Serene. Leading with Passion: Change Management in the Twenty-First Century Museum. Lanham: AltaMira Press, 2004. Weil, Stephen E. “From Being about Something to Being for Somebody: The Ongoing Transformation of the American Museum.” Daedalus, Journal of the American Academy of Arts and Sciences 128.3 (1999): 229–258. Wilkening, Susie. “Community Engagement and Objects—Mutually Exclusive?” Museum Audience Insight 27 July 2009. 14 June 2015 ‹http://reachadvisors.typepad.com/museum_audience_insight/2009/07/community-engagement-and-objects-mutually-exclusive.html›. ———, and Erica Donnis. “Authenticity? It Means Everything.” History News (2008) 63:4. Williams, Alex. “On the Tip of Creative Tongues.” New York Times 4 Oct. 2009. 4 June 2015 ‹http://www.nytimes.com/2009/10/04/fashion/04curate.html›.
APA, Harvard, Vancouver, ISO, and other styles
22

Waelder, Pau. "The Constant Murmur of Data." M/C Journal 13, no. 2 (April 15, 2010). http://dx.doi.org/10.5204/mcj.228.

Full text
Abstract:
Our daily environment is surrounded by a paradoxically silent and invisible flow: the coming and going of data through our network cables, routers and wireless devices. This data is not just 1s and 0s, but bits of the conversations, images, sounds, thoughts and other forms of information that result from our interaction with the world around us. If we can speak of a global ambience, it is certainly derived from this constant flow of data. It is an endless murmur that speaks to our machines and gives us a sense of awareness of a certain form of surrounding that is independent from our actual, physical location. The constant “presence” of data around us is something that we have become largely aware of. Already in 1994, Phil Agre stated in an article in WIRED Magazine: “We're so accustomed to data that hardly anyone questions it” (1). Agre indicated that this data is in fact a representation of the world, the discrete bits of information that form the reality we are immersed in. He also proposed that it should be “brought to life” by exploring its relationships with other data and the world itself. A decade later, these relationships had become the core of the new paradigm of the World Wide Web and our interaction with cyberspace. As Mitchell Whitelaw puts it: “The web is increasingly a set of interfaces to datasets ... . On the contemporary web the data pour has become the rule, rather than the exception. The so-called ‘web 2.0’ paradigm further abstracts web content into feeds, real-time flows of XML data” ("Art against Information"). These feeds and flows have been used by artists and researchers in the creation of different forms of dynamic visualisations, in which data is mapped according to a set of parameters in order to summarise it in a single image or structure. Lev Manovich distinguishes in these visualisations those made by artists, to which he refers as “data art”. Unlike other forms of mapping, according to Manovich data art has a precise goal: “The more interesting and at the end maybe more important challenge is how to represent the personal subjective experience of a person living in a data society” (15). Therefore, data artists extract from the bits of information available in cyberspace a dynamic representation of our contemporary environment, the ambience of our digital culture, our shared, intimate and at the same time anonymous, subjectivity. In this article I intend to present some of the ways in which artists have dealt with the murmur of data creatively, exploring the immense amounts of user generated content in forms that interrogate our relationship with the virtual environment and the global community. I will discuss several artistic projects that have shaped the data flow on the Internet in order to take the user back to a state of contemplation, as a listener, an observer, and finally encountering the virtual in a physical form. Listening The concept of ambience particularly evokes an auditory experience related to a given location: in filmmaking, it refers to the sounds of the surrounding space and is the opposite of silence; as a musical genre, ambient music contributes to create a certain atmosphere. In relation to flows of data, it can be said that the applications that analyze Internet traffic and information are “listening” to it, as if someone stands in a public place, overhearing other people's conversations. 
The act of listening also implies a reception, not an emission, which is a substantial distinction given the fact that data art projects work with given data instead of generating it. As Mitchell Whitelaw states: “Data here is first of all indexical of reality. Yet it is also found, or to put it another way, given. ... Data's creation — in the sense of making a measurement, framing and abstracting something from the flux of the real — is left out” (3). One of the most interesting artistic projects to initially address this sort of “listening” is Carnivore (2001) by the Radical Software Group. Inspired by DCS1000, an e-mail surveillance software developed by the FBI, Carnivore (which was actually the original name of the FBI's program) listens to Internet traffic and serves this data to interfaces (clients) designed by artists, which interpret the provided information in several ways. The data packets can be transformed into an animated graphic, as in amalgamatmosphere (2001) by Joshua Davis, or drive a fleet of radio controlled cars, as in Police State (2003) by Jonah Brucker-Cohen. Yet most of these clients treat data as a more or less abstract value (expressed in numbers) that serves to trigger the reactions in each client. Carnivore clients provide an initial sense of the concept of ambience as reflected in the data circulating the Internet, yet other projects will address this subject more eloquently. Fig. 1: Ben Rubin, Mark Hansen, Listening Post (2001-03). Multimedia installation. Photo: David Allison.Listening Post (2001-04) by Mark Hansen and Ben Rubin is an installation consisting of 231 small electronic screens distributed in a semicircular grid [fig.1: Listening Post]. The screens display texts culled from thousands of Internet chat rooms, which are read by a voice synthesiser and arranged synchronically across the grid. The installation thus becomes a sort of large panel, somewhere between a videowall and an altarpiece, which invites the viewer to engage in a meditative contemplation, seduced by the visual arrangement of the flickering texts scrolling on each screen, appearing and disappearing, whilst sedated by the soft, monotonous voice of the machine and an atmospheric musical soundtrack. The viewer is immersed in a particular ambience generated by the fragmented narratives of the anonymous conversations extracted from the Internet. The setting of the piece, isolated in a dark room, invites contemplation and silence, as the viewer concentrates on seeing and listening. The artists clearly state that their goal in creating this installation was to recreate a sense of ambience that is usually absent in electronic communications: “A participant in a chat room has limited sensory access to the collective 'buzz' of that room or of others nearby – the murmur of human contact that we hear naturally in a park, a plaza or a coffee shop is absent from the online experience. The goal of Listening Post is to collect this buzz and render it at a human scale” (Hansen 114-15). The "buzz", as Hansen and Rubin describe it, is in fact nonexistent in the sense that it does not take place in any physical environment, but is rather the imagined output of the circulation of a myriad blocks of data through the Net. This flow of data is translated into audible and visible signals, thus creating a "murmur" that the viewer can relate to her experience in interacting with other humans. 
The ambience of a room full of people engaged in conversation is artificially recreated and expanded beyond the boundaries of a real space. By extracting chats from the Internet, the murmur becomes global, reflecting the topics that are being shared by users around the world, in an improvised, ever-changing embodiment of the Zeitgeist, the spirit of the time, or even a certain stream of consciousness on a planetary scale. Fig. 2: Gregory Chatonsky, L'Attente - The Waiting (2007). Net artwork. Photo: Gregory Chatonsky.The idea of contemplation and receptiveness is also present in another artwork that elaborates on the concept of the Zeitgeist. L'Attente [The Waiting] (2007) by Gregory Chatonsky is a net art piece that feeds from the data on the Internet to create an open, never-ending fiction in real time [Fig.2: The Waiting]. In this case, the viewer experiences the artwork on her personal computer, as a sort of film in which words, images and sounds are displayed in a continuous sequence, driven by a slow paced soundtrack that confers a sense of unity to the fragmented nature of the work. The data is extracted in real time from several popular sites (photos from Flickr, posts from Twitter, sound effects from Odeo), the connection between image and text being generated by the network itself: the program extracts text from the posts that users write in Twitter, then selects some words to perform a search on the Flickr database and retrieve photos with matching keywords. The viewer is induced to make sense of this concatenation of visual and audible content and thus creates a story by mentally linking all the elements into what Chatonsky defines as "a fiction without narration" (Chatonsky, Flußgeist). The murmur here becomes a story, but without the guiding voice of a narrator. As with Listening Post, the viewer is placed in the role of a witness or a voyeur, subject to an endless flow of information which is not made of the usual contents distributed by mainstream media, but the personal and intimate statements of her peers, along with the images they have collected and the portraits that identify them in the social networks. In contrast to the overdetermination of History suggested by the term Zeitgeist, Chatonsky proposes a different concept, the spirit of the flow or Flußgeist, which derives not from a single idea expressed by multiple voices but from a "voice" that is generated by listening to all the different voices on the Net (Chatonsky, Zeitgeist). Again, the ambience is conceived as the combination of a myriad of fragments, which requires attentive contemplation. The artist describes this form of interacting with the contents of the piece by making a reference to the character of the angel Damiel in Wim Wenders’s film Wings of Desire (Der Himmel über Berlin, 1987): “to listen as an angel distant and proximate the inner voice of people, to place the hand on their insensible shoulder, to hold without being able to hold back” (Chatonsky, Flußgeist). The act of listening as described in Wenders's character illustrates several key aspects of the above mentioned artworks: there is, on the one hand, a receptiveness, carried out by the applications that extract data from the Internet, which cannot be “hold back” by the user, unable to control the flow that is evolving in front of her. On the other hand, the information she receives is always fragmentary, made up of disconnected parts which are, in the words of the artist Lisa Jevbratt, “rubbings ... indexical traces of reality” (1). 
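The pipeline described above for L'Attente (harvest short public posts, pick a few salient words, retrieve photos tagged with those words, and pair text with image) can be sketched schematically as follows. This is a hypothetical outline only: the two fetch functions are placeholders standing in for whatever feeds the artist actually used, and nothing here reproduces Chatonsky's code or the real Twitter, Flickr or Odeo interfaces.

```python
# Schematic sketch of a text-to-image pairing pipeline of the kind described
# for L'Attente. The fetch functions are hypothetical placeholders.
import random

STOPWORDS = {"the", "a", "an", "and", "of", "to", "in", "for", "on", "is", "it", "i"}

def fetch_recent_posts():
    """Placeholder for whatever stream of short public posts the work draws on."""
    return ["waiting for the rain to stop", "a quiet morning in the harbour"]

def search_photos_by_keyword(keyword):
    """Placeholder for an image search returning URLs of photos matching a keyword."""
    return [f"https://example.org/photos/{keyword}/1.jpg"]

def select_keywords(post, limit=2):
    """Pick a few non-trivial words from a post to use as search terms."""
    words = [w.strip(".,!?").lower() for w in post.split()]
    return [w for w in words if w and w not in STOPWORDS][:limit]

def next_scene():
    """Pair one harvested sentence with images retrieved via its keywords."""
    post = random.choice(fetch_recent_posts())
    images = [url for kw in select_keywords(post) for url in search_photos_by_keyword(kw)]
    return {"text": post, "images": images}

print(next_scene())
```

The point of the sketch is structural rather than technical: the connection between word and image is made by the network's own metadata, not by a narrator, which is what allows Chatonsky to call the result a fiction without narration.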
Observing The observation of our environment takes us to consider the concept of landscape. Landscape, in its turn, acquires a double nature when we compare our relationship with the physical environment and the digital realm. In this sense, Mitchell Whitelaw stresses that while data moves at superhuman speed, the real world seems slow and persistent (Landscape). The overlapping of dynamic, fast-paced, virtual information on a physical reality that seems static in comparison is one of the distinctive traits of the following projects, in which the ambience is influenced by realtime data in a visual form that is particularly subtle, or even invisible to the naked eye. Fig. 3: Carlo Zanni, The Fifth Day (2009). Net artwork. Screenshot retrieved on 4/4/2009. Photo: Carlo Zanni. The Fifth Day (2009) by Carlo Zanni is a net art piece in which the artist has created a narration by displaying a sequence of ten pictures showing a taxi ride in the city of Alexandria [Fig.3: The Fifth Day]. Although still, the images are dynamic in the sense that they are transformed according to data retrieved from the Internet describing the political and cultural status of Egypt, along with data extracted from the user's own identity on the Net, such as her IP or city of residence. Every time a user accesses the website where the artwork is hosted, this data is collected and its values are applied to the photos by cloning or modifying particular elements in them. For instance, a photograph of a street will show as many passersby as the proportion of seats held by women in national parliament, while the reflection in the taxi driver's mirror in another photo will be replaced by a picture taken from Al-Jazeera's website. Zanni addresses the viewer's perception of the Middle East by inserting small bits of additional information and also elements from the viewer's location and culture into the images of the Egyptian city. The sequence is rendered as the trailer of a political thriller, enhanced by a dramatic soundtrack and concluded with the artwork's credits. As with the abovementioned projects, the viewer must adopt a passive role, contemplating the images before her and eventually observing the minute modifications inserted by the data retrieved in real time. Yet, in this case, the ambience is not made manifest by a constant buzz to which one must listen, but quite more subtly it is suggested by the fact that not even a still image is always the same. As if observing a landscape, the overall impression is that nothing has changed while there are minor transformations that denote a constant evolution. Zanni has explored this idea in previous works such as eBayLandscape (2004), in which he creates a landscape image by combining data extracted from several websites, or My Temporary Visiting Position from the Sunset Terrace Bar (2007), in which a view of the city of Ahlen (Germany) is combined with a real time webcam image of the sky in Naples (Italy). Although they may seem self-enclosed, these online, data-driven compositions also reflect the global ambience, the Zeitgeist, in different forms. As Carlo Giordano puts it: "Aesthetically, the work aims to a nearly seamless integration of mixed fragments. The contents of these parts, reflecting political and economical issues ... thematize actuality and centrality, amplifying the author's interest in what everybody is talking about, what happens hic et nunc, what is in the fore of the media and social discourse" (16-17). 
A landscape made of data, such as Zanni's eBayLandscape, is the most eloquent image of how an invisible layer of information is superimposed over our physical environment. Fig. 4: Clara Boj and Diego Díaz, Red Libre, Red Visible (2004-06). Intervention in the urban space. Photo: Lalalab. Artists Clara Boj and Diego Díaz, moreover, have developed a visualisation of the actual flows of data that permeate the spaces we inhabit. In Red Libre, Red Visible [Free Network, Visible Network] (2004-06), Boj and Díaz used Augmented Reality (AR) technology to display the flows of data in a local wireless network by creating AR marker tags that were placed on the street. A Carnivore client developed by the artists enabled anyone with a webcam pointing towards the marker tag and connected to the Wi-Fi network to see in real time the data packets flowing from their computer towards the tag [Fig.4: Red Libre]. The marker tags therefore served both as a tool for the visualisation of network activity and as a visual sign of the existence of an open network in a particular urban area. Later on, they added the possibility of inserting custom-made messages, 3D shapes and images that would appear when a particular AR marker tag was seen through the lens of the webcam. With this project, Boj and Díaz gave the user the ability to observe and interact with a layer of her environment that was previously invisible and, in some senses, out of reach. The artists developed this idea further in Observatorio [Observatory] (2008), a sightseeing telescope that reveals the existence of Wi-Fi networks in an urban area. In both projects, an important yet unnoticed aspect of our surroundings is brought into focus. As with Carlo Zanni's projects, we are invited to observe what usually escapes our perception. The ambience in our urban environment has also been explored by Julian Oliver, Clara Boj, Diego Díaz and Damian Stewart in The Artvertiser (2009-10), a hand-held augmented reality (AR) device that allows users to replace advertising billboards with custom-made images. As Naomi Klein states in her book No Logo, the public spaces in most cities have been dominated by corporate advertising, allowing little or no space for freedom of expression (Klein 399). Oliver's project faces this situation by enabling a form of virtual culture jamming which converts any billboard-crowded plaza into an unparalleled exhibition space. Using AR technology, the artists have developed a system that enables anyone with a camera phone, smartphone or the customised "artvertiser binoculars" to record any billboard advertisement and replace it with a modified image. The user can therefore interact with her environment, first by observing and being aware of the presence of these commercial spaces and later on by inserting her own creations or those of other artists. By establishing a connection to the Internet, the modified billboard can be posted on sites like Flickr or YouTube, generating constant feedback between the real location and the Net. Gregory Chatonsky's concept of the Flußgeist, which I mentioned earlier, is also present in these works, visually displaying the data on top of a real environment. Again, the user is placed in a passive situation, as a receptor of the information that is displayed in front of her, but in this case the connection with reality is made more evident. Furthermore, the perception of the environment minimises the awareness of the fragmentary nature of the information generated by the flow of data. 
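The division of labour described here and in the 'Listening' section above (a sniffer that watches local traffic, and artist-built clients that turn the resulting numbers into graphics, sound or movement) can be illustrated with a minimal sketch. The packet stream below is simulated: the actual Carnivore/CarnivorePE server and its client protocol are not reproduced, and the 'brightness' mapping is simply one hypothetical reaction a client might drive.

```python
# Minimal sketch of the general pattern attributed to Carnivore clients:
# numeric values derived from network traffic drive an aesthetic reaction.
# The packet stream is simulated; no real sniffer or protocol is used here.
import random
import time

def simulated_packet_stream(n=20):
    """Stand-in for values served by a packet-sniffing server: packet sizes in bytes."""
    for _ in range(n):
        yield random.randint(40, 1500)

def brightness_from_packet(size_in_bytes, max_size=1500):
    """Map a packet size onto a 0.0-1.0 intensity value an installation could render."""
    return min(size_in_bytes / max_size, 1.0)

if __name__ == "__main__":
    for size in simulated_packet_stream():
        level = brightness_from_packet(size)
        # A real client would drive graphics, sound or motors at this point;
        # a crude text bar stands in for that reaction here.
        print(f"{size:5d} B  " + "#" * int(level * 40))
        time.sleep(0.05)
```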
Embodying In her introduction to the data visualisation section of her book Digital Art, Christiane Paul stresses the fact that data is "intrinsically virtual" and therefore lacks a particular form of manifestation: "Information itself to a large extent seems to have lost its 'body', becoming an abstract 'quality' that can make a fluid transition between different states of materiality" (Paul 174). Although data has no "body", we can consider, as Paul suggests, any object containing a particular set of information to be a dataspace in its own right. In this sense, a tendency in working with the Internet dataflow is to create a connection between the data and a physical object, either as the end result of a process in which the data has been collected and then transferred to a physical form, or as a means of physically reshaping the object through the variable input of data. The objectification of data thus establishes a link between the virtual and the real, but in the context of an artwork it also implies a particular meaning, as the following examples will show. Fig. 5: Gregory Chatonsky, Le Registre - The Register (2007). Book shelf and books. Photo: Pau Waelder. In Le Registre [The Register] (2007), Gregory Chatonsky developed a software application that gathers sentences related to feelings found on blogs. These sentences are recorded and put together in the form of a 500-page book every hour. Every day, the books are gathered in sets of 24 and incorporated into an infinite library. Chatonsky has created a series of bookshelves to collect the books for one day, thereby turning an abstract process into an object and providing a physical embodiment of the murmur of data that I have described earlier [Fig.5: Le Registre]. As with L'Attente, in this work Chatonsky elaborates on the concept of Flußgeist, by "listening" to a specific set of data (in a similar way to Hansen and Rubin's Listening Post) and bringing it into salience. The end product of this process is not just a meaningless object but actually what makes this work profoundly ironic: printing the books is a futile effort, but it also constitutes a Borgesian attempt at creating an endless library of something as ephemeral as feelings. In a similar way, but with different intentions, Jens Wunderling brings the online world to the physical world in Default to Public (2009). A series of objects are located in several public spaces in order to display information extracted from users of the Twitter network. Wunderling's installation projects the tweets on a window or prints them on adhesive labels, while informing the users that their messages have been taken for this purpose. The materialisation of information meant for a virtual environment implies a new approach to the concept of ambience as described previously, and in this case also questions the intimacy of those participating in social networks. As the artist puts it: "In times of rapid change concerning communication behavior, media access and competence, the project Default to Public aims to raise awareness of the possible effects on our lives and our privacy" (Wunderling 155). Fig. 6: Moisés Mañas, Stock (2009). Networked installation. Photo: Moisés Mañas. Finally, in Stock (2009), Moisés Mañas embodies the flow of data from stock markets in an installation consisting of several trench coats hanging from automated coat hangers which oscillate when the stock values of a certain company rise. The resulting movement of the respective trench coat simulates a person laughing. 
In this work, Mañas translates the abstract flow of data into a clearly understandable gesture, providing at the same time a comment on the dynamics of stock markets [Fig.6: Stock]. Mañas's project does not therefore simply create a physical output of a specific piece of information (such as the stock value of a company at any given moment), but instead creates a dynamic sculpture which suggests a different perception of otherwise abstract data. On the one hand, the trenchcoats have a ghostly presence and, as they move with unnatural spasms, they remind us of the Freudian concept of the Uncanny (Das Unheimliche) so frequently associated with robots and artificial intelligence. On the other hand, the image of a person laughing, in the context of stock markets and the current economic crisis, becomes an ironic symbol of the morality of some stockbrokers. In these projects, the ambience is brought to our attention by generating a physical output of a particular set of data that is extracted from certain channels and piped into a system that creates an embodiment of this immaterial flow. Yet, as the example of Mañas's project clearly shows, objects have particular meanings that are incorporated into the artwork's concept and remind us that the visualisation of information in data art is always discretionary, shaped in a particular form in order to convey the artist's intentions. Beyond the Buzz The artworks presented in this article reveal that, beyond the murmur of sentences culled from chats and blogs, the flow of data on the Internet can be used to express our difficult relationship with the vast amount of information that surrounds us. As Mitchell Whitelaw puts it: "Data art reflects a contemporary worldview informed by data excess; ungraspable quantity, wide distribution, mobility, heterogeneity, flux. Orienting ourselves in this domain is a constant challenge; the network exceeds any overview or synopsis" (Information). This excess is compared by Lev Manovich with the Romantic concept of the Sublime, that which goes beyond the limits of human measure and perception, and suggests an interpretation of data art as the Anti-Sublime (Manovich 11). Yet, in the projects that I have presented, rather than making sense of the constant flow of data there is a sort of dialogue, a framing of the information under a particular interpretation. Data is channeled through the artworks' interfaces but remains a raw material, unprocessed to some extent, retrieved from its original context. These works explore the possibility of presenting us with constantly renewed content that will develop and, if the artwork is preserved, reflect the thoughts and visions of the next generations. A work constantly evolving in the present continuous, yet also depending on the uncertain future of social network companies and the ever-changing nature of the Internet. The flow of data will nevertheless remain unstoppable, our ambience defined by the countless interactions that take place every day between our divided self and the growing number of machines that share information with us. References Agre, Phil. "Living Data." Wired 2.11 (Nov. 1994). 30 April 2010 ‹http://www.wired.com/wired/archive/2.11/agre.if.html›. Chatonsky, Gregory. "Flußgeist, une fiction sans narration." Gregory Chatonsky, Notes et Fragments 13 Feb. 2007. 28 Feb. 2010 ‹http://incident.net/users/gregory/wordpress/13-flusgeist-une-fiction-sans-narration/›. ———. "Le Zeitgeist et l'esprit de 'nôtre' temps." Gregory Chatonsky, Notes et Fragments 21 Jan. 2007. 
28 Feb. 2010 ‹http://incident.net/users/gregory/wordpress/21-le-zeigeist-et-lesprit-de-notre-temps/›. Giordano, Carlo. Carlo Zanni. Vitalogy. A Study of a Contemporary Presence. London: Institute of Contemporary Arts, 2005. Hansen, Mark, and Ben Rubin. "Listening Post." Cyberarts 2004. International Compendium – Prix Ars Electronica 2004. Ed. Hannes Leopoldseder and Christine Schöpf. Ostfildern: Hatje Cantz, 2004. 112-17. ———. "Babble Online: Applying Statistics and Design to Sonify the Internet." Proceedings of the 2001 International Conference on Auditory Display, Espoo, Finland. 30 April 2010 ‹http://www.acoustics.hut.fi/icad2001/proceedings/papers/hansen.pdf›. Jevbratt, Lisa. "Projects." A::minima 15 (2003). 30 April 2010 ‹http://aminima.net/wp/?p=93&language=en›. Klein, Naomi. No Logo. [El poder de las marcas]. Barcelona: Paidós, 2007. Manovich, Lev. "Data Visualization as New Abstraction and Anti-Sublime." Manovich.net Aug. 2002. 30 April 2010 ‹http://www.manovich.net/DOCS/data_art_2.doc›. Paul, Christiane. Digital Art. London: Thames & Hudson, 2003. Whitelaw, Mitchell. "Landscape, Slow Data and Self-Revelation." Kerb 17 (May 2009). 30 April 2010 ‹http://teemingvoid.blogspot.com/2009/05/landscape-slow-data-and-self-revelation.html›. ———. "Art against Information: Case Studies in Data Practice." Fibreculture 11 (Jan. 2008). 30 April 2010 ‹http://journal.fibreculture.org/issue11/issue11_whitelaw.html›. Wunderling, Jens. "Default to Public." Cyberarts 2009. International Compendium – Prix Ars Electronica 2009. Ed. Hannes Leopoldseder, Christine Schöpf and Gerfried Stocker. Ostfildern: Hatje Cantz, 2009. 154-55.
APA, Harvard, Vancouver, ISO, and other styles
23

Munster, Anna. "Love Machines." M/C Journal 2, no. 6 (September 1, 1999). http://dx.doi.org/10.5204/mcj.1780.

Full text
Abstract:
A new device, sure to inspire technological bedazzlement, has been installed in Hong Kong shopping malls. Called simply The Love Machine, it functions like a photo booth, dispensing on-the-spot portraits1. But rather than one subject, it requires a couple, in fact the couple, in order to do its work of digital reproduction. For the output of this imaging machine is none other than a picture of the combined features of the two sitters, 'morphed' together by computer software to produce a technological child. Its Japanese manufacturers, while obviously cashing in on the novelty value, nevertheless list the advantage it allows for future matrimonial selection based around the production of a suitable aesthetic. Needless to say, the good citizens of Hong Kong have not allowed any rigid criteria for genetic engineering to get in the way of the progeny such a machine allows, creating such monstrous couplings as the baby 'cat-human', achieved by a sitter coupling with their pet. Rather than being the object of love here, technology acts as the conduit of emotion, or stronger still, it is the love relation itself, bringing the two together as one. What I want to touch upon is the sense in which a desire for oneness inhabits our relations to and through the technological. There is already an abundance of literature around the erotics of cyberspace, documenting and detailing encounters of virtual sex fantasies and romance. As well, there are more theoretical attempts to come to terms with what Michael Heim describes as the "erotic ontology of cyberspace" (59). Heim depicts these encounters not as a ravaging desire gone wild, sprouting up in odd places or producing monstrous offspring, but in homely and familial terms. Finally with the computer as incarnation of the machine, our love for technology can cease its restless and previously unfulfilled wanderings and find a comfortable place. What is worth pausing over here is the sense in which the sexual is subjugated to a conjugal and familial metaphor, at the same time as desire is modelled according to a metaphysics of fullness and lack. I would argue that in advancing this kind of love relation with the computer and the digital, the possibility of a relation is actually short-circuited. For a relation assumes the existence of at least two terms, and in these representations, technology does not figure as a second term. It is either marked as the other, where desire finds a soul mate to fill its lack. Or the technological becomes invisible, subsumed in a spiritual instrumentalism that sees it merely forging the union of cybernetic souls. I would suggest that an erotic relation with the technological is occluded in most accounts of the sexual in cyberspace and in many engagements with digital technologies. Instead we are left with a non-relational meeting of the same with itself. We might describe the dominant utilisation of the technological as onanistic. Relations of difference could be a productive effect of the technological, but are instead culturally caught up within an operational logic which sees the relational erotic possibilities of the machinic eliminated as sameness touches itself. I want to point towards some different models for theorising technology by briefly drawing upon the texts of Félix Guattari and Avital Ronell. These may lead to the production of a desiring relation with technology by coupling the machine with alterity. 
One of several climactic scenes from the 'virtual sex' movie Strange Days, directed by Kathryn Bigelow, graphically illustrates the onanistic encounter. Set on the eve of the new millennium, the temporality of the film sets up a feeling of dis-ease: it is both futuristic and yet only too close. The narrative centres on the black market in ultimate VR: purchasing software which allows the user, donning special headgear, to re-experience recorded memories in other people's lives. An evil abuser of this technology, known until the end of the film as an anonymous male junkie, is addicted to increasingly frequent hits of another's apperception. In his quest to score above his tolerance level, the cyber-junkie rapes a prostitute, but instead of wearing the headgear used to record his own perception of the rape, he forces the woman to put it on, making her annex her subjectivity to his experience of desire. He records her reaction to becoming an appendage to him. The effect of watching this scene is deeply unsettling: the camera-work sets up a point-of-view shot from the position of the male subject but plays it to the audience as one might see through a video view-finder, thus sedimenting an assumed cultural association between masculinity and the male gaze. What we see is the violence produced by the annihilation of another's desire; what we hear is the soundtrack of the woman mimicking the male's enjoyment of his own desire. Put simply, what we watch is a feedback loop of a particular formation of technological desire, one in which the desire of or for the other is audio-visually impeded. Ultimately the experience can be stored and replayed as a porn movie solely for future masturbation. The scene in Strange Days quite adequately summarises the obstructed and obstructive desire to go no further than masturbation caught in the defiles of feedback. Feedback is also the term used in both video and sound production when a recording device is aimed at or switched onto a device playing back the same recording. The result, in the case of video, is to create an infinite abyss of the same image playing back into itself on the monitor; in the case of sound a high-pitched signal is created which impedes further transmission. By naming the desire to fuse with the technological a feedback loop, I am suggesting that manifestations of this desire are neither productive nor connective, in that any relation to exterior or heterogeneous elements is shut out. They stamp out the flow of other desires and replay the same looping desire based around notions of fullness and lack, completion and incompletion, and of course masculinity and femininity. Mark Dery makes this association between the desire for the technological, the elision of matter and phallic modes of masculinity: This, to the masculinist technophile, is the weirdly alchemical end point of cyberculture: the distillation of pure mind from base matter. Sex, in such a context, would be purged of feminine contact -- removed, in fact, from all notions of physicality -- and reduced to mental masturbation. (121) Dery's point is a corollary to mine; in discarding the need for an embodied sexual experience, the literature and representations of cyberspace, both theoretical and fictional, endorse only a touching of the sublimated self: no other bodies, or even the bodily, are brought into contact. There is no shortage of evidence for the disregard in which embodiment is held among the doyens of cyber-architecture. 
Hans Moravec and Marvin Minsky, writing about Artificial Intelligence, promote a future in which pure consciousness, freed from its entanglement with the flesh, merges with the machine (Mind Children; The Society of Mind). Here the reverence shown towards digital technology enters the sublime point of a coalition where the mind is supported by some sophisticated hardware, ultimately capable of adapting and reproducing itself. There are now enough feminist critics of this kind of cyberspeak to have noticed in this fantasy of machinic fusion a replay of the old Cartesian mind/body dualism. My point, however, is that this desire is not simply put in place by a failure to rethink the body in the realm of the digital. It is augmented by the fact that this disregard for theorising an embodied experience feeds into an inability to encounter any other within the realm of the technological. We should note that this is perpetuated not just by those seeking future solace in the digital, but also by its most ardent cultural critics. Baudrillard, as one who seemingly fits this latter category, eager to disperse the notion that writers such as Moravec and Minsky propound regarding AI, is driven to making rather overarching ontological remarks about machines in general. In attempting to forestall the notion that the machine could ever become the complement to the human, Baudrillard cancels the relation of the machine to desire by cutting off its ability to produce anything in excess of itself. The machine, on his account, can be reduced to the production of itself alone; there is nothing supplementary, exterior to or differential in the machinic circuit (53). For Baudrillard, the pleasures of the interface do not even extend to the solitary vice of masturbation. Celibate machines are paralleled by celibate digital subjects each alone with themselves, forming a non-relational system. While Baudrillard offers a fair account of the solitary lack of relation produced in and by digital technologies, he nevertheless participates in reinforcing the transformation of what he calls "the process of relating into a process of communication between One and the Same" (58). He catches himself within the circulation of the very desire he finds problematic. But whether onanistic or celibate, the erotics of our present or possible relations to technology do not become any more enticing in many actual engagements with emerging technologies. Popular modes of interfacing our desires with the digital favor a particular assemblage of body and machine where a kind of furtive one-handed masturbation may be the only option left to us. I will call this the operational assemblage, borrowing from Baudrillard and his description of Virtual Man, operating and communicating across computer cables and networks while being simultaneously immobilised in front of the glare of the computer screen. An operational assemblage, whilst being efficacious, inhibits movement and ties the body to the machine. Far from the body being discarded by information technologies, the operational assemblage sees certain parts of the body privileged and territorialised. The most obvious instance of this is VR, which, in its most technologically advanced state, still only selects the eyes and the hand as its points of bodily interface. 
In so-called fully immersive VR experience, it is the hand, wearing a data glove, which propels the subject into movement in the virtual world, but it is a hand propelled by the subject's field of vision, computer monitors mounted in the enveloping headset. Thus the hand operates by being subjected to the gaze2. In VR, then, the real body is not somehow left behind as the subject enters a new state of electronic consciousness; rather there is a re-organisation and reterritorialisation of the hand under the operative guidance of the eye and scopic desire. This is attested to by the experience one has of the postural body schema during immersion in VR. The 'non-operational' body remaining in physical space often feels awkward and clumsy as if it is too large or cumbersome to drag around and interact in the virtual world, as if it were made virtually non-functional. The operational assemblage of a distanced eye territorialising the hand to create a loop of identity through the machine produces a desiring body which is blocked in its relational capacities. It can only touch itself as self; it cannot find itself an other or as other. Rather than encouraging the hand to break connections with the circuit of the gaze, to develop speeds, capabilities and potentials of its own, these encounters are perpetually returned to the screen and the domain of the eye. They feed back into a loop where relations to other desires, other kinds of bodies, other machines are circumvented. Looping back and returning to the aesthetic reduction performed by the Love Machine, a more lo-tech version of the two technologically contracted to one might point to the possibility of alterity that current digital machines seem keen to circumvent. At San Fransisco's Exploratorium museum one of the public points of interface with the Human Genome Project can be found3. The Exploratorium has a display set up which introduces the public to the bioinformation technology involved as well as soliciting responses to bio-ethical issues surrounding the question of genetic engineering. In the midst of this display a simple piece of glass hangs as a divider between two sides of a table. By sitting on one side of the table with a light shining from behind, one could see both a self-reflection and through the glass to whomever was sitting on the other side. The text accompanying the display encourages couples to occupy either side of the glass. What is produced for the sitter on the light side is a combination of their own reflection 'mapped' onto the features of the sitter on the other side. The text for the display encourages a judgement of the probable aesthetic outcome of combining one's genes with those of the other. I tested this display with my partner, crossing both sides of the mirror/glass. Our reactions were similar; a sensation approaching horror arose as we each faced our distorted, mirrored features as possible future progeny, a sensation akin to encountering the uncanny4. While suggesting the familiar, it also indicates what is concealed, becoming a thing not known and thus terrifying. For what was decidedly spooky in viewing a morphing of my image onto that of the other's, in the context of the surrounding bioinformatic technologies, was the sense in which a familiarity with the homely features of the self was dislocated by a haunting, marking the claim of a double utterly different. 
Recalling the assertion made by Heim that in the computer we find an intellectual and emotional resting point, we could question whether the familiarity of a resting place provides a satisfactory erotic encounter with the technological. We could ask whether the dream of the homely, of finding in the computer a kinship which sanctions the love machine relation, operates at the expense of dispelling that other, unfamiliar double through a controlling device which adjusts differences until they reach a point of homeostasis. What of a reading of the technological which might instaurate rather than diffuse the question of the unfamiliar double? I will gesture towards both Guattari's text Chaosmosis and Ronell's The Telephone Book, for the importance both give to the double in producing a different relation with the technological. For Guattari, the machine's ghost is exorcised by the predominant view that sees particular machines, such as the computer, as a subset of technology, a view given credence at the level of hype in the marketing of AI, virtual reality and so forth as part of the great technological future. It also gains credibility theoretically through the Heideggerian perspective. Instead Guattari insists that technology is dependent upon the machinic (33). The machinic is prior to and a condition of any actual technology, it is a movement rather than a ground; the movement through which heterogeneous elements such as bodies, sciences, information come to form the interrelated yet specific fields of a particular assemblage we might term technological. It is also the movement through which these components retain their singularity. Borrowing from modern biology, Guattari labels this movement "autopoietic" (39). Rather than the cybernetic model which sees the outside integrated into the structure of the machinic by an adjustment towards homogeneity cutting off flow, Guattari underlines a continual machinic movement towards the outside, towards alterity, which transforms the interrelations of the technological ensemble. The machinic is doubled not by the reproduction of itself, but by the possibility of its own replacement, its own annihilation and transformation into something different: Its emergence is doubled with breakdown, catastrophe -- the menace of death. It possesses a supplement: a dimension of alterity which it develops in different forms. (37) Here, we can adjoin Guattari with Ronell's historical reading of the metaphorics of the telephone in attempts to think through technology. Always shadowed by the possibility Heidegger wishes to stake out for a beyond to or an overcoming of the technological, Ronell is both critical of the technologising of desire in the cybernetic loop and insistent upon the difference produced by technology's doubling desire. Using the telephone as a synecdoche for technology -- and this strategy is itself ambiguous: does the telephone represent part of the technological or is it a more comprehensive summary of a less comprehensive system? -- Ronell argues that it can only be thought of as irreducibly two, a pair (5). This differentiates itself from the couple which notoriously contracts into one. She argues that the two are not reducible to each other, that sender and receiver do not always connect, are not reducible to equal end points in the flow of information. For Ronell, what we find when we are not at home, on unfamiliar ground, is -- the machine. 
The telephone in fact maintains its relation to the machinic, and to the doubling this implies, via the uncanny in Ronell's text. It relates to a not-being-at-home for the self, precisely when it becomes machine -- the answering machine. The answering machine disconnects the speaker from the listener and inserts itself not as a controlling device in the loop, but as delay, the deferral of union. Loosely soldering this with Guattari's notion that the machine introduces a "dimension of alterity", Ronell reads the technological via the telephone line as that relation to the outside, to the machinic difference that makes the self always unfamiliar (84). I would suggest then that pursuing a love relation with technology or through the technological leads us to deploy an entire metaphorics of the familial, where the self is ultimately home alone and only has itself to play with. In this metaphorics, technology as double and technology's doubling desire become a conduit that returns only to itself through the circuitous mechanism of the feedback loop. Rather than opening onto heterogeneous relations to bodies or allowing bodies to develop different relational capacities, the body here is immobilised by an operational and scopic territorialisation. To be excited by an encounter with the technological, something unfamiliar is preferable: some sense of an alternating current in the midst of all this homeliness, an external perturbation rubbing up against the tired hand of a short-circuiting onanism.

Footnotes
1. The Love Machine is also the title of a digital still image and sound installation commenting upon the Hong Kong booth, produced by myself and Michele Barker and last exhibited at the Viruses and Mutations exhibition for the Melbourne Festival, The Aikenhead Conference Centre, St. Vincent's Hospital, October 1998.
2. For an articulation of the way in which this maps onto perspectival vision, see Simon Penny, "Virtual Reality as the Completion of the Enlightenment Project." Culture on the Brink. Eds. G. Bender and T. Druckery. Seattle: Bay Press, 1994.
3. Funded by the US Government, the project's goal is to develop maps for the 23 paired human chromosomes and to unravel the sequence of bases that make up the DNA of these chromosomes.
4. This is what Freud described in his paper "The Uncanny". Tracing the etymology of the German word for the uncanny, unheimlich, which in English translates literally as 'unhomely', Freud notes that heimlich, or 'homely', in fact contains the ambiguity of its opposite in one of its senses.

References
Baudrillard, Jean. "Xerox and Infinity." The Transparency of Evil: Essays in Extreme Phenomena. Trans. J. Benedict. London: Verso, 1993. 51-9.
Dery, Mark. Escape Velocity: Cyberculture at the End of the Century. New York: Grove Press, 1996.
Freud, Sigmund. "The Uncanny." Standard Edition of the Complete Psychological Works of Sigmund Freud. Vol. 17. Trans. and ed. J. Strachey. London: Hogarth Press, 1955.
Guattari, Félix. Chaosmosis: An Ethico-Aesthetic Paradigm. Sydney: Power Publications, 1995.
Heim, Michael. "The Erotic Ontology of Cyberspace." Cyberspace: First Steps. Cambridge, Mass.: MIT P, 1994. 59-80.
Minsky, Marvin. The Society of Mind. New York: Simon and Schuster, 1985.
Moravec, Hans. Mind Children. Cambridge, Mass.: Harvard UP, 1988.
Ronell, Avital. The Telephone Book. Lincoln: U of Nebraska P, 1989.

Citation reference for this article
MLA style: Anna Munster. "Love Machines." M/C: A Journal of Media and Culture 2.6 (1999). [your date of access] <http://www.uq.edu.au/mc/9909/love.php>.
Chicago style: Anna Munster, "Love Machines," M/C: A Journal of Media and Culture 2, no. 6 (1999), <http://www.uq.edu.au/mc/9909/love.php> ([your date of access]).
APA style: Anna Munster. (1999) Love machines. M/C: A Journal of Media and Culture 2(6). <http://www.uq.edu.au/mc/9909/love.php> ([your date of access]).
24

Leaver, Tama. "Going Dark." M/C Journal 24, no. 2 (April 28, 2021). http://dx.doi.org/10.5204/mcj.2774.

Full text
Abstract:
The first two months of 2021 saw Google and Facebook ‘go dark’ in terms of news content on the Australian versions of their platforms. In January, Google ran a so-called “experiment” which removed or demoted current news in the search results available to a segment of Australian users. While Google was only darkened for some, in February news on Facebook went completely dark, with the company banning all news content and news sharing for users within Australia. Both of these instances of going dark occurred because of the imminent threat these platforms faced from the News Media Bargaining Code legislation that was due to be finalised by the Australian parliament. This article examines how both Google and Facebook responded to the draft Code, focussing on their threats to go dark, and the extent to which those threats were carried out. After exploring the context which produced the threats of going dark, this article looks at their impact, and how the Code was reshaped in light of those threats before it was finally legislated in early March 2021. Most importantly, this article outlines why Google and Facebook were prepared to go dark in Australia, and whether they succeeded in trying to prevent Australia setting the precedent of national governments dictating the terms by which digital platforms should pay for news content.

From the Digital Platforms Inquiry to the Draft Code

In July 2019, the Australian Treasurer released the Digital Platforms Inquiry Final Report which had been prepared by the Australian Competition and Consumer Commission (ACCC). It outlined a range of areas where Australian law, policies and practices were not keeping pace with the realities of a digital world of search giants, social networks, and streaming media. Analysis of the submissions made as part of the Digital Platforms Inquiry found that the final report was “primarily framed around the concerns of media companies, particularly News Corp Australia, about the impact of platform companies’ market dominance of content distribution and advertising share, leading to unequal economic bargaining relationships and the gradual disappearance of journalism jobs and news media publishers” (Flew et al. 13). As such, one of the most provocative recommendations made was the establishment of a new code that would “address the imbalance in the bargaining relationship between leading digital platforms and news media businesses” (Australian Competition and Consumer Commission, Digital Platforms Inquiry 16). The ACCC suggested such a code would assist Australian news organisations of any size in negotiating with Facebook, Google and others for some form of payment for news content. The report was released at a time when there was a greatly increased global appetite for regulating digital platforms. Thus the battle over the Code was watched across the world as legislation that had the potential to open the door for similar laws in other countries (Flew and Wilding). Initially the report suggested that the digital giants should be asked to develop their own codes of conduct for negotiating with news organisations. These codes would have then been enforced within Australia if suitably robust. However, after months of the big digital platforms failing to produce meaningful codes of their own, the Australian government decided to commission their own rules in this arena. The ACCC thus prepared the draft legislation that was tabled in July 2020 as the Australian News Media Bargaining Code.
According to the ACCC, the Code, in essence, tried to create a level playing field where Australian news companies could force Google and Facebook to negotiate a ‘fair’ payment for linking to, or showing previews of, their news content. Of course, many commentators, and the platforms themselves, retorted that they already bring significant value to news companies by referring readers to news websites. While there were earlier examples of Google and Facebook paying for news, these were largely framed as philanthropy: benevolent digital giants supporting journalism for the good of democracy. News companies and the ACCC argued this approach completely ignored the fact that Google and Facebook commanded more than 80% of the online advertising market in Australia at that time (Meade, “Google, Facebook and YouTube”). Nor did the digital giants acknowledge their disruptive power, given that the bulk of that advertising revenue used to flow to news companies. Some of the key features of this draft of the Code included (Australian Competition and Consumer Commission, “News Media Bargaining Code”): Facebook and Google would be the (only) companies initially ‘designated’ by the Code (i.e. specific companies that must abide by the Code), with Instagram included as part of Facebook. The Code applied to all Australian news organisations, and specifically mentioned how small, regional, and rural news media would now be able to meaningfully bargain with digital platforms. Platforms would have 11 weeks after first being contacted by a news organisation to reach a mutually negotiated agreement. Failure to reach agreements would result in arbitration (using a style of arbitration called final party arbitration, which has both parties present a final offer or position, with an Australian arbiter simply choosing between the two offers in most cases). Platforms were required to give 28 days’ notice of any change to their algorithms that would impact on the ways Australian news was ranked and appeared on their platform. Penalties for not following the Code could be ten million dollars, or 10% of the platform’s annual turnover in Australia (whichever was greater). Unsurprisingly, Facebook, Google and a number of other platforms and companies reacted very negatively to the draft Code, with their formal submissions arguing: that the algorithm change notifications would give certain news companies an unfair advantage while disrupting the platforms’ core business; that charging for linking would break the underlying free nature of the internet; and that the Code overstated the importance and reach of news on each platform. Many other objections were also presented, including strong rejections of the proposed model of arbitration, which, they argued, completely favoured news companies without providing any real or reasonable limit on how much news organisations could ask to be paid (Google; Facebook). Google extended their argument by making a second submission in the form of a report with the title ‘The Financial Woes of News Publishers in Australia’ (Shapiro et al.) that argued Australian journalism and news was financially unsustainable long before digital platforms came along.
However, in stark contrast, the Digital News Report: Australia 2020 found that Google and Facebook were where many Australians found their news; in 2020, 52% of Australians accessed news on social media (up from 46% the year before), with 39% of Australians getting news from Facebook, and that number jumping to 49% when specifically focusing on news seeking during the first COVID-19 pandemic peak in April 2020 (Park et al.). The same report highlighted that 43% of people distrust news found on social media (with a further 29% neutral, and only 28% of people explicitly trusting news found via social media). Moreover, 64% of Australians were concerned about misinformation online, and of all the platforms mentioned in the survey, respondents were most concerned about Facebook as a source of misinformation, with 36% explicitly indicating this was the place they were most concerned about encountering ‘fake news’. In this context Facebook and Google battled the Code by launching public relations campaigns, appealing directly to Australian consumers.

Google Drives a Bus Across Australia

Google’s initial response to the draft Code was a substantial public relations campaign which saw the technology company advocating against the Code but not necessarily the ideas behind it. Google instead posited their own alternative way of paying for journalism in Australia. On the main Google search landing page, the usually very white surrounds of the search bar included the text “Supporting Australian journalism: a constructive path forward” which linked to a Google page outlining their version of a ‘Fair Code’. Popup windows appeared across many of Google’s services and apps, noting Google “are willing to pay to support journalism”, with a button labelled ‘Hear our proposal’.

Figure 1: Popup notification on Google Australia directing users to Google’s ‘A Fair Code’ proposal rebutting the draft Code. (Screen capture by author, 29 January 2021)

Google’s popups and landing page links were visible for more than six months as the Code was debated. In September 2020, a Google blog post about the Code was accompanied by a YouTube video campaign featuring Australian comedian Greta Lee Jackson (Google Australia, Google Explains Arbitration). Jackson used the analogy of Google as a bus driver, who is forced to pay restaurants for delivering customers to them, and then pay part of the running costs of restaurants, too. The video reinforced Google’s argument that the draft Code was asking digital platforms to pay potentially enormous costs for news content without acknowledging the value of Google bringing readers to the news sites. However, the video opened with the line that “proposed laws can be confusing, so I'll use an analogy to break it down”, setting a tone that would seem patronising to many people. Moreover, the video, and Google’s main argument, completely ignored the personal data Google receives every time a user searches for, or clicks on, a news story via Google Search or any other Google service. If Google’s analogy was accurate, then the bus driver would be going through every passenger’s bag while they were on the bus, taking copies of all their documents from driver's licenses to loyalty cards, keeping a record of every time they use the bus, and then using this information to get advertisers to pay for a tailored advertisement on the back of the seat in front of every passenger, every time they rode the bus.
Notably, by the end of March 2021, the video had only received 10,399 views, which suggests relatively few people actually clicked on it to watch. In early January 2021, at the height of the debate about the Code, Google ran what they called “an experiment” which saw around 1% of Australian users suddenly only receive “older or less relevant content” when searching for news (Barnet, “Google’s ‘Experiment’”). While ostensibly about testing options for when the Code became law, the unannounced experiment also served as a warning shot. Google very effectively reminded users and politicians about their important role in determining which news Australian users find, and what might happen if Google darkened what they returned as news results. On 21 January 2021, Mel Silva, the Managing Director and public face of Google in Australia and New Zealand, gave public testimony about the company’s position before a Senate inquiry. Silva confirmed that Google were indeed considering removing Google Search in Australia altogether if the draft Code was not amended to address their key concerns (Silva, “Supporting Australian Journalism: A Constructive Path Forward – An Update on the News Media Bargaining Code”). Google’s seemingly sudden escalation in their threat to go dark led to articles such as a New York Times piece entitled ‘An Australia with No Google? The Bitter Fight behind a Drastic Threat’ (Cave). Google also greatly amplified their appeal to the Australian public, with a video featuring Mel Silva appearing frequently on all Google sites in Australia to argue their position (Google Australia, An Update). By the end of March 2021, Silva’s video had been watched more than 2.2 million times on YouTube. Silva’s testimony, video and related posts from Google all characterised the Code as breaking “how Google search works in Australia” and as creating a world where links online are paid for, thus both breaking Google and “undermin[ing] how the web works”; they also offered Google’s News Showcase as a viable alternative that, in Google’s view, was “a fair one” (Silva, “Supporting Australian Journalism”). Google emphasised submissions about the Code which backed their position, including World Wide Web inventor Tim Berners-Lee, who agreed that the idea of charging for links could have a more wide-reaching impact, challenging the idea of a free web (Leaver). Google also continued to release their News Showcase product in other parts of the world. They emphasised that there were existing arrangements for Showcase in Australia, but the current regulatory uncertainty meant it was paused in Australia until the debates about the Code were resolved. In the interim, news media across Australia, and the globe, were filled with stories speculating what an Australia would look like if Google went completely dark (e.g. Cave; Smyth). Even Microsoft weighed in, supporting the Code and offering their search engine Bing as a viable alternative to fill the void if Google really did go dark (Meade, “Microsoft’s Bing”). In mid-February, the draft Code was tabled in Australian parliament. Many politicians jumped at the chance to sing the Code’s praises and lament the power that Google and Facebook have across various spheres of Australian life. Yet as these speeches were happening, the Australian Treasurer Josh Frydenberg was holding weekend meetings with executives from Google and Facebook, trying to smooth the path toward the Code (Massola).
In these meetings, a number of amendments were agreed to, including the Code more clearly taking into account any existing deals already on the table before it became law. In these meetings the Treasurer made it clear to Google that if the deals done prior to the Code were big enough, he would consider not designating Google under the Code, which in effect would mean Google was not immediately subject to it (Samios and Visentin). With that concession in hand Google swiftly signed deals with over 50 Australian news publishers, including Seven West Media, Nine, News Corp, The Guardian, the ABC, and some smaller publishers such as Junkee Media (Taylor; Meade, “ABC Journalism”). While the specific details of these deals were not made public, the deals with Seven West Media and Nine were both reported to be worth around AU$30 million (Dudley-Nicholson). In reacting to Google's deals, Frydenberg described them as “generous deals, these are fair deals, these are good deals for the Australian media businesses, deals that they are making off their own bat with the digital giants” (Snape, “‘These Are Good Deals’”). During the debates about the Code, Google had ultimately ensured that every Australian user was well aware that Google was, in their words, asking for a “fair” Code, and before the Code became law even the Treasurer was conceding that Google was offering a “fair deal” to Australian news companies.

Facebook Goes Dark on News

While Google never followed through on their threat to go completely dark, Facebook took a very different path, with a lot less warning. Facebook’s threat to remove all news from the platform for users in Australia was not made explicit in their formal submissions on the draft of the Code. However, to be fair, Facebook’s Managing Director in Australia and New Zealand, Will Easton, did make a blog post at the end of August 2020 in which he clearly stated: “assuming this draft code becomes law, we will reluctantly stop allowing publishers and people in Australia from sharing local and international news on Facebook and Instagram” (Easton). During the negotiations in late 2020 Instagram was removed as an initial target of the Code (just as YouTube was not included as part of Google) along with a number of other concessions, but Facebook were not sated. Yet Easton’s post about removing news received very little attention after it was made, and certainly Facebook made no obvious attempt to inform their millions of Australian users that news might be completely blocked. Hence most Australians were shocked when that was exactly what Facebook did. Facebook’s power has, in many ways, always been exercised by what the platform’s algorithms display to users, what content is most visible and equally what content is made invisible (Bucher). The morning of Wednesday, 17 February 2021, Australian Facebook users awoke to find that all traditional news and journalism had been removed from the platform. Almost all pages associated with news organisations were similarly either disabled or wiped clean, and any attempt to share links to news stories was met with a notification: “this post can’t be shared”. The Australian Prime Minister Scott Morrison reacted angrily, publicly lamenting Facebook’s choice to “unfriend Australia”, adding their actions were “as arrogant as they were disappointing”, vowing that Australia would “not be intimidated by big tech” (Snape, “Facebook Unrepentant”).
Figure 2: Facebook notification appearing when Australians attempted to share news articles on the platform. (Screen capture by author, 20 February 2021)

Facebook’s news ban in Australia was not limited to official news pages and news content. Instead, their ban initially included a range of pages and services such as the Australian Bureau of Meteorology, emergency services pages, health care pages, hospital pages, services providing vital information about the COVID-19 pandemic, and so forth. The breadth of the ban may have been purposeful, as one of Facebook’s biggest complaints was that the Code defined news too broadly (Facebook). Yet in the Australian context, where the country was wrestling with periodic lockdowns and the Coronavirus pandemic on one hand, and bushfires and floods on the other, the removal of these vital sources of information showed a complete lack of care or interest in Australian Facebook users. Beyond the immediate inconvenience of not being able to read or share news on Facebook, there were a range of other, immediate, consequences. As Barnet, amongst others, warned, a Facebook with all credible journalism banned would almost certainly open the floodgates to a tide of misinformation, with nothing left to fill the void; it made Facebook’s “public commitment to fighting misinformation look farcical” (Barnet, “Blocking Australian News”). Moreover, Bossio noted, “reputational damage from blocking important sites that serve Australia’s public interest overnight – and yet taking years to get on top of user privacy breaches and misinformation – undermines the legitimacy of the platform and its claimed civic intentions” (Bossio). If going dark and turning off news in Australia was supposed to win the sympathy of Australian Facebook users, then the plan largely backfired. Yet as with Google, the Australian Treasurer was meeting with Mark Zuckerberg and Facebook executives behind closed doors, which did eventually lead to changes before the Code was finally legislated (Massola). Facebook gained a number of concessions, including: a longer warning period before Facebook could be designated by the Code; a longer period before news organisations would be able to expect negotiations to be concluded; an acknowledgement that existing deals would be taken into account during negotiations; and, most importantly, a clarification that if Facebook was to once again block news, this would both prevent them being subject to the Code and would not be something the platform could be punished for. Like Google, though, Facebook’s biggest gain was again the Treasurer making it clear that by making deals in advance of the Code becoming law, it was likely that Facebook would not be designated, and thus not subject to the Code at all (Samios and Visentin). After these concessions the news standoff ended and on 23 February the Australian Treasurer declared that after tense negotiations Facebook had “refriended Australia”; the company had “committed to entering into good-faith negotiations with Australian news media businesses and seeking to reach agreements to pay for content” (Visentin). Over the next month there were some concerns voiced about slow progress, but then major deals were announced between Facebook and News Corp Australia, and with Nine, with other deals following closely (Meade, “Rupert Murdoch”). Just over a week after the ban began, Facebook returned news to their platform in Australia.
Facebook obviously felt they had won the battle, but Australian Facebook users were clearly cannon fodder, with their interests and wellbeing ignored.

Who Won? The Immediate Aftermath of the Code

After the showdowns with Google and Facebook, the final amendments to the Code were made and it was legislated as the News Media and Digital Platforms Mandatory Bargaining Code (Australian Treasury), going into effect on 2 March 2021. However, when it became legally binding, not one single company was ‘designated’, meaning that the Code did not immediately apply to anyone. Yet deals had been struck, money would flow to Australian news companies, and Facebook had returned news to its platform in Australia. At the outset, Google, Facebook, news companies in Australia and the Australian government all claimed to have won the battle over the Code. Having talked up their tough stance on big tech platforms when the Digital Platforms Inquiry landed in 2019, the Australian Government was under public pressure to deliver on that rhetoric. The debates and media coverage surrounding the Code involved a great deal of political posturing and gained much public attention. The Treasurer was delighted to see deals being struck that meant Facebook and Google would pay Australian news companies. He actively portrayed this as the government protecting Australia’s interest and democracy. The fact that the Code was leveraged as a threat does mean that the nuances of the Code are unlikely to be tested in a courtroom in the near future. Yet as a threat it was an effective one, and it does remain in the Treasurer’s toolkit, with the potential to be deployed in the future. While mostly outside the scope of this article, it should definitely be noted that the biggest winner in the Code debate was Rupert Murdoch, executive chairman of News Corp. Murdoch and News Corp were the strongest advocates of regulation forcing the digital giants to pay for news in the first place, and had the most to gain and least to lose in the process. Most large news organisations in Australia have fared well, too, with new revenue flowing in from Google and Facebook. However, one of the most important facets of the Code was the inclusion of mechanisms to ensure that regional and small news publishers in Australia would be able to negotiate with Facebook and Google. While some might be able to band together and strike terms (and some already have), it is likely that many smaller news companies in Australia will miss out, since the deals being struck with the bigger news companies appear to be big enough to ensure the platforms are not designated, and thus not subject to the Code (Purtill). A few weeks after the Code became law, ACCC Chair Rod Sims stated that the “problem we’re addressing with the news media code is simply that we wanted to arrest the decline in money going to journalism” (Kohler). On that front the Code succeeded. However, there is no guarantee the deals will mean money will support actual journalists, rather than disappearing as extra corporate profits. Nor is there any onus on Facebook or Google to inform news organisations about changes to their algorithms that might impact on news rankings. Also, as many Australian news companies are now receiving payments from Google and Facebook, there is a danger the news media will become dependent on that revenue, which may make it harder for journalists to report on the big tech giants without some perceptions of a conflict of interest.
In a diplomatic post about the Code, Google thanked everyone who had voiced concerns with the initial drafts of the legislation, thanked Australian users, and celebrated that their newly launched Google News Showcase had “two million views of content” with more than 70 news partners signed up within Australia (Silva, “An Update”). Given that News Showcase had already begun rolling out elsewhere in the world, it is likely Google were already aware they were going to have to contribute to the production of journalism across the globe. The cost of paying for news in Australia may well have fallen within the parameters Google had already decided were acceptable and inevitable before the debate about the Code even began (Purtill). In the aftermath of the Code becoming legislation, Google also posted a cutting critique of Microsoft, arguing they were “making self-serving claims and are even willing to break the way the open web works in an effort to undercut a rival” (Walker). In doing so, Google implicitly claimed that the concessions and changes to the Code they had managed to negotiate effectively positioned them as having championed the free and open web. At the end of February 2021, in a much more self-congratulatory post-mortem of the Code entitled “The Real Story of What Happened with News on Facebook in Australia”, Facebook reiterated their assertion that they bring significant value to news publishers and that the platform receives no real value in return, stating that in 2020 Facebook provided “approximately 5.1 billion free referrals to Australian publishers worth an estimated AU$407 million to the news industry” (Clegg). Deploying one last confused metaphor, Facebook argued the original draft of the Code was “like forcing car makers to fund radio stations because people might listen to them in the car — and letting the stations set the price.” Of course, there was no mention that following that metaphor, Facebook would have bugged the car and used that information to plaster the internal surfaces with personalised advertising. Facebook also touted the success of their Facebook News product in the UK, albeit without setting a date for the rollout of the product in Australia. While Facebook did concede that “the decision to stop the sharing of news in Australia appeared to come out of nowhere”, what the company failed to do was apologise to Australian Facebook users for the confusion and inconvenience they experienced. Nevertheless, on Facebook’s own terms, they certainly positioned themselves as having come out winners. Future research will need to determine whether Facebook’s actions damaged their reputation or encouraged significant numbers of Australians to leave the platform permanently, but in the wake of a number of high-profile scandals, including Cambridge Analytica (Vaidhyanathan), it is hard to see how Facebook’s actions would not have further undermined consumer trust in the company and their main platform (Park et al.). In fighting the Code, Google and Facebook were not just battling the Australian government, but also the implication that if they paid for news in Australia, they likely would also have to do so in other countries. The Code was thus seen as a dangerous precedent far more than just a mechanism to compel payment in Australia. Since both companies ensured they made deals prior to the Code becoming law, neither was initially ‘designated’, and thus neither was actually subject to the Code at the time of writing.
The value of the Code has been as a threat and a means to force action from the digital giants. How effective it is as a piece of legislation remains to be seen in the future if, indeed, any company is ever designated. For other countries, the exact wording of the Code might not be as useful as a template, but its utility to force action has surely been noted. Like the inquiry which initiated it, the Code set “the largest digital platforms, Google and Facebook, up against the giants of traditional media, most notably Rupert Murdoch’s News Corporation” (Flew and Wilding 50). Yet in a relatively unusual turn of events, both sides of that battle claim to have won. At the same time, EU legislators watched the battle closely as they considered an “Australian-style code” of their own (Dillon). Moreover, in the month immediately following the Code being legislated, both the US and Canada were actively pursuing similar regulation (Baier), with Facebook already threatening to remove news and go dark for Canadian Facebook users (van Boom). For Facebook and Google, the battle continues, but fighting the Code has meant the genie of paying for news content is well and truly out of the bottle.

References
Australian Competition and Consumer Commission. Digital Platforms Inquiry: Final Report. 25 July 2019. <https://www.accc.gov.au/focus-areas/inquiries/digital-platforms-inquiry/final-report-executive-summary>.
———. “News Media Bargaining Code: Draft Legislation.” Australian Competition and Consumer Commission, 22 July 2020. <https://www.accc.gov.au/focus-areas/digital-platforms/news-media-bargaining-code/draft-legislation>.
Australian Treasury. Treasury Laws Amendment (News Media and Digital Platforms Mandatory Bargaining Code) Act 2021. Attorney-General’s Department, 2 Mar. 2021. <https://www.legislation.gov.au/Details/C2021A00021/Html/Text>.
Baier, Jansen. “US Could Allow News Distribution Fees for Google, Facebook.” MediaFile, 31 Mar. 2021. <http://www.mediafiledc.com/us-could-allow-news-distribution-fees-for-google-facebook/>.
Barnet, Belinda. “Blocking Australian News Shows Facebook’s Pledge to Fight Misinformation Is Farcical.” The Guardian, 18 Feb. 2021. <http://www.theguardian.com/commentisfree/2021/feb/18/blocking-australian-news-shows-facebooks-pledge-to-fight-misinformation-is-farcical>.
———. “Google’s ‘Experiment’ Hiding Australian News Just Shows Its Inordinate Power.” The Guardian, 14 Jan. 2021. <http://www.theguardian.com/commentisfree/2021/jan/14/googles-experiment-hiding-australian-news-just-shows-its-inordinate-power>.
Bossio, Diana. “Facebook Has Pulled the Trigger on News Content — and Possibly Shot Itself in the Foot.” The Conversation, 18 Feb. 2021. <http://theconversation.com/facebook-has-pulled-the-trigger-on-news-content-and-possibly-shot-itself-in-the-foot-155547>.
Bucher, Taina. “Want to Be on the Top? Algorithmic Power and the Threat of Invisibility on Facebook.” New Media & Society 14.7 (2012): 1164–80. DOI:10.1177/1461444812440159.
Cave, Damien. “An Australia with No Google? The Bitter Fight behind a Drastic Threat.” The New York Times, 22 Jan. 2021. <https://www.nytimes.com/2021/01/22/business/australia-google-facebook-news-media.html>.
Clegg, Nick. “The Real Story of What Happened with News on Facebook in Australia.” About Facebook, 24 Feb. 2021. <https://about.fb.com/news/2021/02/the-real-story-of-what-happened-with-news-on-facebook-in-australia/>.
Dillon, Grace. “EU Contemplates Australia-Style Media Bargaining Code; China Imposes New Antitrust Rules.” ExchangeWire.com, 9 Feb. 2021. <https://www.exchangewire.com/blog/2021/02/09/eu-contemplates-australia-style-media-bargaining-code-china-imposes-new-antitrust-rules/>.
Dudley-Nicholson, Jennifer. “Google May Escape Laws after Spending Spree.” The Daily Telegraph, 17 Feb. 2021. <https://www.dailytelegraph.com.au/news/national/google-may-escape-tough-australian-news-laws-after-a-lastminute-spending-spree/news-story/d3b37406bf279ff6982287d281d1fbdd>.
Easton, Will. “An Update about Changes to Facebook’s Services in Australia.” About Facebook, 1 Sep. 2020. <https://about.fb.com/news/2020/08/changes-to-facebooks-services-in-australia/>.
Facebook. Facebook Response to the Australian Treasury Laws Amendment (News Media and Digital Platforms Mandatory Bargaining Code) Bill 2020. 28 Aug. 2020. <https://www.accc.gov.au/system/files/Facebook_0.pdf>.
Flew, Terry, et al. “Return of the Regulatory State: A Stakeholder Analysis of Australia’s Digital Platforms Inquiry and Online News Policy.” The Information Society 37.2 (2021): 128–45. DOI:10.1080/01972243.2020.1870597.
Flew, Terry, and Derek Wilding. “The Turn to Regulation in Digital Communication: The ACCC’s Digital Platforms Inquiry and Australian Media Policy.” Media, Culture & Society 43.1 (2021): 48–65. DOI:10.1177/0163443720926044.
Google. Draft News Media and Platforms Mandatory Bargaining Code: Submissions in Response. 28 Aug. 2020. <https://www.accc.gov.au/system/files/Google_0.pdf>.
Google Australia. An Update from Google on the News Media Bargaining Code. 2021. YouTube. <https://www.youtube.com/watch?v=dHypeuHePEI>.
———. Google Explains Arbitration under the News Media Bargaining Code. 2020. YouTube. <https://www.youtube.com/watch?v=6Io01W3migk>.
Kohler, Alan. “The News Bargaining Code Is Officially Dead.” The New Daily, 16 Mar. 2021. <https://thenewdaily.com.au/news/2021/03/17/alan-kohler-news-bargaining-code-dead/>.
Leaver, Tama. “Web’s Inventor Says News Media Bargaining Code Could Break the Internet. He’s Right — but There’s a Fix.” The Conversation, 21 Jan. 2021. <http://theconversation.com/webs-inventor-says-news-media-bargaining-code-could-break-the-internet-hes-right-but-theres-a-fix-153630>.
Massola, James. “Frydenberg, Facebook Negotiating through the Weekend.” The Sydney Morning Herald, 20 Feb. 2021. <https://www.smh.com.au/politics/federal/frydenberg-facebook-negotiating-through-the-weekend-on-new-media-laws-20210219-p573zp.html>.
Meade, Amanda. “ABC Journalism to Appear on Google’s News Showcase in Lucrative Deal.” The Guardian, 22 Feb. 2021. <http://www.theguardian.com/media/2021/feb/23/abc-journalism-to-appear-on-googles-showcase-in-lucrative-deal>.
———. “Google, Facebook and YouTube Found to Make Up More than 80% of Australian Digital Advertising.” The Guardian, 23 Oct. 2020. <http://www.theguardian.com/media/2020/oct/23/google-facebook-and-youtube-found-to-make-up-more-than-80-of-australian-digital-advertising>.
———. “Microsoft’s Bing Ready to Step in If Google Pulls Search from Australia, Minister Says.” The Guardian, 1 Feb. 2021. <http://www.theguardian.com/technology/2021/feb/01/microsofts-bing-ready-to-step-in-if-google-pulls-search-from-australia-minister-says>.
———. “Rupert Murdoch’s News Corp Strikes Deal as Facebook Agrees to Pay for Australian Content.” The Guardian, 15 Mar. 2021. <http://www.theguardian.com/media/2021/mar/16/rupert-murdochs-news-corp-strikes-deal-as-facebook-agrees-to-pay-for-australian-content>.
Park, Sora, et al. Digital News Report: Australia 2020. Canberra: News and Media Research Centre, 16 June 2020. DOI:10.25916/5ec32f8502ef0.
Purtill, James. “Facebook Thinks It Won the Battle of the Media Bargaining Code — but So Does the Government.” ABC News, 25 Feb. 2021. <https://www.abc.net.au/news/science/2021-02-26/facebook-google-who-won-battle-news-media-bargaining-code/13193106>.
Samios, Zoe, and Lisa Visentin. “‘Historic Moment’: Treasurer Josh Frydenberg Hails Google’s News Content Deals.” The Sydney Morning Herald, 17 Feb. 2021. <https://www.smh.com.au/business/companies/historic-moment-treasurer-josh-frydenberg-hails-google-s-news-content-deals-20210217-p573eu.html>.
Shapiro, Carl, et al. The Financial Woes of News Publishers in Australia. 27 Aug. 2020. <https://www.accc.gov.au/system/files/Google%20Annex.PDF>.
Silva, Mel. “An Update on the News Media Bargaining Code.” Google Australia, 1 Mar. 2021. <http://www.google.com.au/google-in-australia/an-open-letter/>.
———. “Supporting Australian Journalism: A Constructive Path Forward – An Update on the News Media Bargaining Code.” Google Australia, 22 Jan. 2021. <https://about.google/intl/ALL_au/google-in-australia/jan-6-letter/>.
Smyth, Jamie. “Australian Companies Forced to Imagine Life without Google.” Financial Times, 9 Feb. 2021. <https://www.ft.com/content/fa66e8dc-afb1-4a50-8dfa-338a599ad82d>.
Snape, Jack. “Facebook Unrepentant as Prime Minister Dubs Emergency Services Block ‘Arrogant.’” ABC News, 18 Feb. 2021. <https://www.abc.net.au/news/2021-02-18/facebook-unrepentant-scott-morrison-dubs-move-arrogant/13169340>.
———. “‘These Are Good Deals’: Treasurer Praises Google News Deals amid Pressure from Government Legislation.” ABC News, 17 Feb. 2021. <https://www.abc.net.au/news/2021-02-17/treasurer-praises-good-deals-between-google-news-seven/13163676>.
Taylor, Josh. “Guardian Australia Strikes Deal with Google to Join News Showcase.” The Guardian, 20 Feb. 2021. <http://www.theguardian.com/technology/2021/feb/20/guardian-australia-strikes-deal-with-google-to-join-news-showcase>.
Vaidhyanathan, Siva. Antisocial Media: How Facebook Disconnects Us and Undermines Democracy. Oxford: Oxford UP, 2018.
Van Boom, Daniel. “Facebook Could Block News in Canada like It Did in Australia.” CNET, 29 Mar. 2021. <https://www.cnet.com/news/facebook-could-block-news-in-canada-like-it-did-in-australia/>.
Visentin, Lisa. “Facebook Refriends Australia after Last-Minute Changes to Media Code.” The Sydney Morning Herald, 23 Feb. 2021. <https://www.smh.com.au/politics/federal/government-agrees-to-last-minute-amendments-to-media-code-20210222-p574kc.html>.
Walker, Kent. “Our Ongoing Commitment to Supporting Journalism.” Google, 12 Mar. 2021. <https://blog.google/products/news/google-commitment-supporting-journalism/>.
