To see the other types of publications on this topic, follow the link: Rationalisation of technology.

Journal articles on the topic 'Rationalisation of technology'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the top 48 journal articles for your research on the topic 'Rationalisation of technology.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Wilson, A. "Rationalisation by convergence [IP technology]." Computing and Control Engineering 15, no. 2 (April 1, 2004): 24–29. http://dx.doi.org/10.1049/cce:20040204.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Shah, Mohd Hazim. "Historicising Rationality: The Transmission of Rationality and Science to the Malay States under British Rule." Asian Journal of Social Science 35, no. 2 (2007): 216–41. http://dx.doi.org/10.1163/156853107x203441.

Full text
Abstract:
In this article, I will attempt to look at the process of modernisation of Malay society under British rule, roughly from 1874 to 1957, from the perspective of Weber's theory of rationalisation. I will be looking at the process of rationalisation in the spheres of public administration, education, science and technology. Contrary to Weber's and Habermas' view of rationalisation as involving value and cultural change prior to social change, I argue that rationalisation as it occurred under conditions of colonisation in the Malay States in the late 19th and early 20th centuries requires a different explanatory model, in which the rationalisation that occurred was primarily instrumental and utilitarian in nature, not involving value-change, and in which science was not institutionalised prior to its professionalisation. This reversal of the sequential order found in the European process of rationalisation results in a different set of consequences and poses a problem for cultural stability and the indigenisation of science and technology in Malaysia.
APA, Harvard, Vancouver, ISO, and other styles
3

Overwijk, Jan. "Paradoxes of Rationalisation: Openness and Control in Critical Theory and Luhmann's Systems Theory." Theory, Culture & Society 38, no. 1 (July 6, 2020): 127–48. http://dx.doi.org/10.1177/0263276420925548.

Full text
Abstract:
For the Critical Theory tradition of the Frankfurt School, rationalisation is a central concept that refers to the socio-cultural closure of capitalist modernity due to the proliferation of technical, ‘instrumental’ rationality at the expense of some form of political reason. This picture of rationalisation, however, hinges on a separation of technology and politics that is both empirically and philosophically problematic. This article aims to re-conceptualise the rationalisation thesis through a survey of research from science and technology studies and the conceptual framework of Niklas Luhmann's systems theory. It argues that rationalisation indeed exhibits a logic of closure, namely the ‘operational closure’ of sociotechnical systems of measurement, but that this closure in fact produces the historical logics of technical reason and, paradoxically, also generates spaces of critical-political openness. This opens up the theoretical and practical opportunity of connecting the politically just to the technically efficient.
APA, Harvard, Vancouver, ISO, and other styles
4

Balls, Michael. "18. Rationalisation and Intellectualisation." Alternatives to Laboratory Animals 43, no. 4 (September 2015): P49–P50. http://dx.doi.org/10.1177/026119291504300411.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Cousins, Paul D. "Supply base rationalisation: myth or reality?" European Journal of Purchasing & Supply Management 5, no. 3-4 (September 1999): 143–55. http://dx.doi.org/10.1016/s0969-7012(99)00019-2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Westwood, John B. "Retail inventory movement – a case study in rationalisation." International Journal of Physical Distribution & Logistics Management 29, no. 7/8 (September 1999): 444–54. http://dx.doi.org/10.1108/eum0000000004614.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Griffith, Niall J. L. "Is cognition plus technology an unbounded system?" Cognitive Technologies and the Pragmatics of Cognition 13, no. 3 (December 20, 2005): 583–613. http://dx.doi.org/10.1075/pc.13.3.10gri.

Full text
Abstract:
The relationship between cognition and culture is discussed in terms of technology and representation. The computational metaphor is discussed in relation to its providing an account of cognitive and technical development: the role of representation and self-modification through environmental manipulation, and the development of open learning from stigmergy. A rationalisation for the transformational effects of information and representation is sought in the physical and biological theories of Autokatakinetics and Autopoiesis. The conclusion drawn is that culture, rather than being an intrinsic property of our human phenotype, was learned, and that cultural cognition is an information-transforming system that is inadequately characterised by notions of parameterised deep-structure and that it is an open and potentially unbounded informational system.
APA, Harvard, Vancouver, ISO, and other styles
8

Drioli, E., G. Di Profio, and E. Curcio. "Hybrid membrane operations in water desalination and industrial process rationalisation." Water Science and Technology 51, no. 6-7 (March 1, 2005): 293–304. http://dx.doi.org/10.2166/wst.2005.0649.

Full text
Abstract:
Membrane science and technology are recognized today as powerful tools for resolving some important global problems and for developing the newer industrial processes needed to meet the imperative of sustainable industrial growth. In seawater desalination, to address the dramatic increase in freshwater demand in many regions of the world, membrane unit operations, or combinations of them in integrated systems, are already a real means of producing water from the sea at lower cost and with minimal environmental impact, with very interesting prospects in particular for countries with weaker economies. Membranes are also used, or are coming into use, in some important industrial fields to develop more efficient production cycles, with reduced waste of raw materials, a lower pollution load through control of by-product generation, and reduced overall costs. In the present paper, beyond seawater desalination applications, some industrial applications in which membrane technology has already achieved the goal of process intensification are discussed.
APA, Harvard, Vancouver, ISO, and other styles
9

Fielden, D., and J. K. Jacques. "Systemic approach to energy rationalisation in island communities." International Journal of Energy Research 22, no. 2 (February 1998): 107–29. http://dx.doi.org/10.1002/(sici)1099-114x(199802)22:2<107::aid-er289>3.0.co;2-u.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Carter, Norman. "The application of a flexible tooling system in a flexible manufacturing system." Robotica 3, no. 4 (October 1985): 221–28. http://dx.doi.org/10.1017/s0263574700002319.

Full text
Abstract:
The introduction of Flexible Manufacturing Systems, Cell Technology and Automated Machining Techniques, with the related reduction in manning levels, has resulted in the development of tooling systems, tool management systems and independent tool magazines to service turning machines where a high number of tools are required to cover one shift or unmanned operation. Actual cutting time (production time) represents between 5% and 20% of average machine utilisation time, and developments in cutting materials and geometries have largely exhausted rationalisation possibilities in this area.
APA, Harvard, Vancouver, ISO, and other styles
11

Marshall, J. N., and R. Richardson. "The Impact of ‘Telemediated’ Services on Corporate Structures: The Example of ‘Branchless’ Retail Banking in Britain." Environment and Planning A: Economy and Space 28, no. 10 (October 1996): 1843–58. http://dx.doi.org/10.1068/a281843.

Full text
Abstract:
In this paper we assess the spatial impact of ‘branchless’ retail banking which integrates telecommunications and computer technology to provide personal financial services remotely. We show that, in Britain, financial institutions are concentrating retail services into a small number of low-cost sites on the edge of cities in the north of the country, and exporting the services to more expensive locations. Associated with locational shifts is a rationalisation of corporate hierarchies and the introduction of a more ‘entrepreneurial’ approach to selling bank services, involving new types of gender-segmented work.
APA, Harvard, Vancouver, ISO, and other styles
12

Palmer, Gill. "Donovan, The Commission on Industrial Relations and Post-Liberal Rationalisation." British Journal of Industrial Relations 24, no. 2 (July 1986): 267–96. http://dx.doi.org/10.1111/j.1467-8543.1986.tb00686.x.

Full text
APA, Harvard, Vancouver, ISO, and other styles
13

Altenried, Moritz. "Die Plattform als Fabrik." PROKLA. Zeitschrift für kritische Sozialwissenschaft 47, no. 187 (June 1, 2017): 175–92. http://dx.doi.org/10.32387/prokla.v47i187.140.

Full text
Abstract:
The article analyses digital labour on crowdwork platforms as a paradigmatic example of an emerging digital Taylorism. The characteristics of this tendency comprise elements of classical Taylorist forms of rationalisation, now enabled and reconfigured by digital technology. The algorithmic architecture of digital platforms can draw distributed and diverse workers in front of their computers and smartphones into highly standardised labour processes. This allows new labour resources to be tapped, in both temporal and spatial dimensions. In this respect, crowdwork is finally situated in the context of the multiplication of labour, understood as a heterogenisation of the division and composition of labour of which digital labour such as crowdwork is part and parcel.
APA, Harvard, Vancouver, ISO, and other styles
14

Stipić, Matija, Dušan Prodanović, and Srđan Kolaković. "Rationalisation and reliability improvement of fire-fighting systems – the Novi Sad case study." Urban Water Journal 6, no. 2 (June 2009): 169–81. http://dx.doi.org/10.1080/15730620802541649.

Full text
APA, Harvard, Vancouver, ISO, and other styles
15

Sohani, Shrihari S., and Sunil Maheshwari. "Problem of Plenty." International Journal of Knowledge-Based Organizations 6, no. 1 (January 2016): 38–48. http://dx.doi.org/10.4018/ijkbo.2016010103.

Full text
Abstract:
The article deals with understanding the phenomenon of employee rationalisation in firms, especially those operating in dynamic environments. The authors conduct an extensive review of the literature to understand the complete cycle: from environmental conditions that lead to employee surplus, to the firm implementing relevant strategies to deal with the situation, and the consequences thereof for the firm and for individuals. They focus their attention on high-technology firms and the relevant strategies for managing employee surplus in such firms. They extend the literature by proposing the relevance of tacit knowledge in situations where employee surplus needs to be managed, and how it can be retained to preserve the competitiveness of firms.
APA, Harvard, Vancouver, ISO, and other styles
16

Lindop, Paul H. "The Ninian Pipeline System in the UK North Sea – Developments for the Second Generation of Fields." Energy Exploration & Exploitation 10, no. 4-5 (September 1992): 246–58. http://dx.doi.org/10.1177/014459879201000405.

Full text
Abstract:
As the original giant fields of the East Shetland Basin enter their decline period there has been, as with any mature asset, the challenge of managing the pipeline system's potential and cost per barrel. One way to maintain this potential is to keep the system full by bringing in other parties' volumes to take up the growing ullage. The Ninian Field, Ninian Pipeline System and Sullom Voe Terminal are, amongst other measures, benefitting from advances in technology which have moved undeveloped fields from the marginal to the economic, and hence opened up a new generation of fields to use the pipeline system and enhance its value. At the same time, rationalisation of the process facilities at Sullom Voe, Europe's largest oil and liquefied gas terminal, aims to maintain the terminal's efficiency and economic viability.
APA, Harvard, Vancouver, ISO, and other styles
17

Jenkins, J. O. "Organisational arrangements and drinking water quality." Water Supply 10, no. 2 (April 1, 2010): 227–41. http://dx.doi.org/10.2166/ws.2010.706.

Full text
Abstract:
This paper discusses the findings of a research project which explored the impact of varying organisational arrangements on drinking water quality in England and Wales, and the Republic of Ireland. It is established that drinking water quality has been of a consistently higher standard in England and Wales in comparison with the Republic of Ireland. It is also demonstrated that the associated organisational arrangements in England and Wales have been more successful in tackling certain problematic drinking water quality parameters. The paper concludes by arguing that national governments, and their regulatory agencies, should view the rationalisation of organisations involved in the provision of drinking water as key to ensuring better drinking water quality. It is also suggested that state regulators who are responsible for ensuring the quality of drinking water end their dependency on water providers for quality data. They should instead become capable of directly monitoring drinking water quality via their own sampling regime. It is argued that this organisational arrangement would be representative of a more progressive and robust organisational approach to ensuring the supply of safe high quality drinking water.
APA, Harvard, Vancouver, ISO, and other styles
18

Tracy, Sally K., Donna Hartz, Michael Nicholl, Yvonne McCann, and Deborah Latta. "An integrated service network in maternity— the implementation of a midwifery-led unit." Australian Health Review 29, no. 3 (2005): 332. http://dx.doi.org/10.1071/ah050332.

Full text
Abstract:
Maternity services in Australia are in urgent need of change. During the last 10 years several reviews have highlighted the need to provide more continuity of care for women in conjunction with the rationalisation of services. One solution may lie in the development of new integrated systems of care where primary-level maternity units offer midwifery-led care and women are transferred into perinatal centres to access tertiary-level obstetric technology and staff when required. This case study outlines the introduction of caseload midwifery into an Area Health Service in metropolitan Sydney. Our objective is to explore the concept of caseload midwifery and the process of implementing the first midwifery-led unit in NSW within an integrated service network. The midwifery-led unit is a small but growing phenomenon in many countries.1 However, the provision of 'continuity' and 'woman-centred' midwifery care involves radical changes to conventional hospital practice.
APA, Harvard, Vancouver, ISO, and other styles
19

Hannaford, J., M. G. R. Holmes, C. L. R. Laizé, T. J. Marsh, and A. R. Young. "Evaluating hydrometric networks for prediction in ungauged basins: a new methodology and its application to England and Wales." Hydrology Research 44, no. 3 (December 19, 2012): 401–18. http://dx.doi.org/10.2166/nh.2012.115.

Full text
Abstract:
Flow estimates for ungauged catchments are often derived through regionalisation methods, which enable data transfer from a pool of hydrologically similar catchments with existing gauging stations (i.e., pooling-groups). This paper presents a methodology for indexing the utility of gauged catchments within widely used pooling-group methodologies for high and low flow estimation; this methodology is then used as the basis for a network evaluation strategy. The utility of monitoring stations is assessed using catchment properties and a parallel, but independent, appraisal of the quality of gauging station data, which considers hydrometric performance, anthropogenic disturbances and record length. Results from the application of the method to a national network of over 1,100 gauging stations in England and Wales are presented. First, the method is used to appraise the fitness for purpose of the network for regionalisation. The method is then used to identify gauges which monitor catchments with high potential for regionalisation, but which are deficient in terms of data quality – where upgrades in hydrometric performance would yield the greatest benefits. Finally, gauging stations with limited value for regionalisation, given the pooling-group criteria employed, are identified. Alongside a wider review of other uses of the network, this analysis could inform a judicious approach to network rationalisation.
APA, Harvard, Vancouver, ISO, and other styles
20

Zhou, Jenny Zheng, and Aniruddha M. Gole. "Rationalisation and validation of dc power transfer limits for voltage sourced converter based high voltage DC transmission." IET Generation, Transmission & Distribution 10, no. 6 (April 21, 2016): 1327–35. http://dx.doi.org/10.1049/iet-gtd.2015.0801.

Full text
APA, Harvard, Vancouver, ISO, and other styles
21

Rowland, I., and R. Strongman. "Southern Water faces the small works challenge." Water Science and Technology 41, no. 1 (January 1, 2000): 33–39. http://dx.doi.org/10.2166/wst.2000.0006.

Full text
Abstract:
Southern Water (a ScottishPower company) has 392 wastewater treatment works, 242 of which serve small communities with populations of less than 2000. Continuing pressures to improve effluent quality coupled with expectations of improved efficiency, and increased demand for resources for major capital schemes prompted Southern Water to take a more holistic approach to improvements and operation of its small treatment works. Operational costs (OPEX) were analysed for individual works enabling identification of target costs of operation (£/pe), opportunities for efficiency savings and investment priorities. A Company approach to investment in small works was produced which included development of a model to identify whether rationalisation (diverting the flows to an adjacent WTW and closing the site) was an economic option, and a Design Standard to simplify decision-making and process engineering for works refurbishment or replacement. The Design Standard focused on delivering low “whole-life-cost” solutions using simple and robust processes that require minimal operator involvement and ensure compliance with discharge consent standards. Southern Water is steadily reaping the benefits from its structured approach to ensuring compliance and cost-effective operation of its small works.
APA, Harvard, Vancouver, ISO, and other styles
22

Měšťánková, Hana, Gilles Mailhot, Jaromír Jirkovský, Josef Krýsa, and Michèle Bolte. "Mechanistic approach of the combined (iron–TiO2) photocatalytic system for the degradation of pollutants in aqueous solution: an attempt of rationalisation." Applied Catalysis B: Environmental 57, no. 4 (May 2005): 257–65. http://dx.doi.org/10.1016/j.apcatb.2004.11.003.

Full text
APA, Harvard, Vancouver, ISO, and other styles
23

Lepistö, Lauri. "On the use of rhetoric in promoting enterprise resource planning systems." Baltic Journal of Management 10, no. 2 (April 7, 2015): 203–21. http://dx.doi.org/10.1108/bjm-01-2014-0006.

Full text
Abstract:
Purpose – The purpose of this paper is to improve the understanding of the rhetoric used to promote enterprise resource planning (ERP) systems, which are complex organisation-wide software packages inherently connected to the domains of management and organisation. Design/methodology/approach – The study adopts a post-essentialist view on ERP systems and takes the form of a rhetorical analysis. Engaging in rhetorical scholarship in the area of technological change and management fashion literatures, this paper offers a close reading of a management text on ERP systems by Thomas H. Davenport published in 1998 in the Harvard Business Review. Findings – The rhetorical analysis distinguishes and identifies three rhetorical strategies – namely, rationalisation, theorisation and contradiction – used to promote ERP systems and thus involved in the construction of the phenomenon revolving around ERP systems. Originality/value – In spite of the importance of the rhetorical analysis of information technology in the context in which they operate, this paper argues that constructions of ERP systems should also be analysed beyond organisation-specific considerations. It further suggests that both researchers and practitioners should take seriously the rhetoric invoked by the well-known management writer that may easily go unnoticed.
APA, Harvard, Vancouver, ISO, and other styles
24

Charitonidou, Marianna. "László Moholy-Nagy and Alvar Aalto’s Connections." Enquiry The ARCC Journal for Architectural Research 17, no. 1 (December 30, 2020): 28–46. http://dx.doi.org/10.17831/enq:arcc.v17i1.1080.

Full text
Abstract:
Departing from the fact that László Moholy-Nagy's Von Material zu Architektur (1929) had been an important source of inspiration for Alvar Aalto, this article examines the affinities between László Moholy-Nagy's and Alvar Aalto's intellectual positions. The article places emphasis on two particular ideas: how Aalto and Moholy-Nagy conceived the connection of biology with standardization and technology, and its relationship to light and perception. Special attention is paid to the notions of "flexible standardisation" and rationalisation in Aalto's thought, as well as to his belief that nature and standardization should be conceived as closely interconnected. In regard to their shared intellectual development, the article sheds light on the first encounters of the two men, including their meeting at the second Congrès International de l'Architecture Moderne (CIAM) in 1929; the June 1931 Finnish meeting of Aino Marsio-Aalto, Alvar Aalto, Moholy-Nagy and Ellen Frank; the June 1931 exchanges between Aalto and Moholy-Nagy during the inner circle CIAM meeting in Berlin; and the common stay of the Aaltos and Moholy-Nagy in London in 1933. Particular emphasis is placed on Aalto's "The Reconstruction of Europe is the Key Problem for the Architecture of Our Time", in which he argued that standardization in architecture should draw upon biological models.
APA, Harvard, Vancouver, ISO, and other styles
25

M’manga, Andrew, Shamal Faily, John McAlaney, Chris Williams, Youki Kadobayashi, and Daisuke Miyamoto. "A normative decision-making model for cyber security." Information & Computer Security 27, no. 5 (November 11, 2019): 636–46. http://dx.doi.org/10.1108/ics-01-2019-0021.

Full text
Abstract:
Purpose The purpose of this paper is to investigate security decision-making during risk and uncertain conditions and to propose a normative model capable of tracing the decision rationale. Design/methodology/approach The proposed risk rationalisation model is grounded in literature and studies on security analysts’ activities. The model design was inspired by established awareness models including the situation awareness and observe–orient–decide–act (OODA). Model validation was conducted using cognitive walkthroughs with security analysts. Findings The results indicate that the model may adequately be used to elicit the rationale or provide traceability for security decision-making. The results also illustrate how the model may be applied to facilitate design for security decision makers. Research limitations/implications The proof of concept is based on a hypothetical risk scenario. Further studies could investigate the model’s application in actual scenarios. Originality/value The paper proposes a novel approach to tracing the rationale behind security decision-making during risk and uncertain conditions. The research also illustrates techniques for adapting decision-making models to inform system design.
APA, Harvard, Vancouver, ISO, and other styles
26

Lee, Seungho. "Private sector participation in the Shanghai water sector." Water Policy 9, no. 4 (August 1, 2007): 405–23. http://dx.doi.org/10.2166/wp.2007.015.

Full text
Abstract:
This paper explores the extent to which private sector participation has had an impact on Shanghai's water policy since the late 1990s. This study focuses on the private sphere where private companies in the Shanghai water sector have adapted to new changes in political and economic circumstances. Recent findings based on fieldwork and data from 2000 to 2004 disclose that the Shanghai government has been committed to implementing reforms for private sector participation in the water sector. In response, private companies have actively participated in the process of privatisation. Such private sector participation, however, is unlikely to continue on a smooth path unless the Shanghai government establishes adequate legal and regulatory frameworks for private companies. The study concludes that privatisation in the Shanghai water sector will be an unavoidable process for the rationalisation of water services stimulated by the programme of economic reforms initiated in the late 1970s. But this process has been, and will continue to be, balanced first by the government's role in regulating privatised water services, second by the contribution of private companies in service provision and third by the continuous interaction between the government and private companies to achieve provision of high-quality water in Shanghai.
APA, Harvard, Vancouver, ISO, and other styles
27

BLAZY, V., G. HEURTEBIZE, D. GILLES, G. TESNIERE, and G. FLEURY. "Solutions de monitoring au service d’audit de stations d’épuration des eaux usées." Techniques Sciences Méthodes, no. 6 (June 22, 2020): 33–43. http://dx.doi.org/10.36904/tsm/202006033.

Full text
Abstract:
Auditing wastewater treatment plants is a complex exercise because it is transdisciplinary (process engineering, biological engineering, electromechanics, chemistry, analytical chemistry, hydraulics, etc.) and its scope covers broad aspects relating to safety, process, maintenance, economics and organisation. Traditionally, this approach breaks down into three distinct phases: i) gathering information, ii) an on-site survey and iii) a report. Recent developments in metrology and instrumentation have made it possible to raise the level of automation and control of water treatment plants. The purification processes, implemented through more or less complex technologies, are thus monitored more closely, ensuring immediate operational responsiveness and more reliable achievement of the required water quality. At the same time, the compilation of data within an industrial programmable logic controller and its organisation and processing through a supervision/hypervision solution provide monitoring and rationalisation of operations, opening the door to optimisation opportunities. Even so, the gains brought by monitoring are rarely put to use in the investigative resources needed for the on-site survey phase of an audit. This article presents several sets of lessons learned through three case studies. It covers methodological and technical concerns as well as examples of return on investment in conducting and reporting audits of wastewater treatment facilities.
APA, Harvard, Vancouver, ISO, and other styles
28

Feng, Yuanyuan. "The enhanced participant-driven photo elicitation method for everyday life health information behaviour research." Aslib Journal of Information Management 71, no. 6 (November 18, 2019): 720–38. http://dx.doi.org/10.1108/ajim-02-2019-0042.

Full text
Abstract:
Purpose The purpose of this paper is to report the design and implementation of the enhanced participant-driven photo elicitation method in a qualitative interview study, to assess the performance of the method to investigate a research topic in everyday life health information behaviour and to provide insights on how to effectively use this method in future research. Design/methodology/approach The author embedded the enhanced participant-driven photo elicitation in a qualitative interview study to examine people’s everyday life health information behaviour with activity tracking technology. The author assessed the types of visual data collected by the method, categories of elicitation enabled by the method and how the method contributed to key research findings of the interview study. Findings The enhanced participant-driven photo elicitation generated rich, unique and meaningful data that would be otherwise difficult to collect through conventional qualitative interviews. The method also elicited explanation, rationalisation and reflection during the interviews, which enriched and triangulated key research findings. This work validated the benefits of the general photo elicitation method such as aiding participants’ recall of experiences, enriching research findings and improving research validity. It also demonstrated that the enhancement techniques used in this study could generate rich and even research data across interviews. Originality/value This paper describes the design and implementation of the enhanced participant-driven photo elicitation method to augment a qualitative interview study with activity tracker users. The author provides recommendations for researchers to take full advantage of the method in future everyday life health information behaviour research.
APA, Harvard, Vancouver, ISO, and other styles
29

Schuller, Annette, and W. Kurt Roth. "Rationalisation of the pre-analytical and analytical processes of serological infection diagnostics by the use of modular automated systems / Rationalisierung präanalytischer und analytischer Prozesse für die serologische Infektionsdiagnostik mit Hilfe automatischer Systeme." LaboratoriumsMedizin 32, no. 2 (January 1, 2008): 59–69. http://dx.doi.org/10.1515/jlm.2008.011.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Wademan, Fiona. "Santos Energy Solutions: targeting a lower-carbon future and underpinning our sustainability." APPEA Journal 60, no. 2 (2020): 563. http://dx.doi.org/10.1071/aj19219.

Full text
Abstract:
Santos is actively working to reduce its carbon footprint and prepare for a lower-carbon future, including promoting the role of gas in this future. Santos has set a long-term aspiration to achieve net-zero emissions from its operations by 2050, and a target to reduce emissions across existing operations in the Cooper Basin and Queensland by 5% by 2025. The Energy Solutions team was created to support the delivery of these objectives. Energy Solutions completed a global technology review to identify technologies that would reduce emissions across Santos’ operations and grow gas demand. The review resulted in focus areas of solar, storage (battery, gas and other media), waste heat recovery, wind, carbon capture and storage (CCS) and solar thermal. Santos progressed to implementation and successful demonstration of emissions reduction in 2018 with a world-first installation of an autonomous solar- and battery-powered beam pump. The initial installation in the Cooper Basin is now being expanded to 56 pump conversions to solar and battery with the support of the Australian Renewable Energy Agency. Following this success, Santos increased scale with installation of a 2.12-MW solar array and associated infrastructure at the Port Bonython processing facility. In parallel, fuel efficiency opportunities were targeted through key equipment upgrades, including power generation at Devil Creek with new reciprocating gas engines and rationalisation of legacy compression and power generation infrastructure across the Cooper Basin. Another key focus of the team is the progression of CCS, with appraisal of reservoir targets and pre-front-end engineering design (FEED) studies completed in 2019. The success of these projects provides a robust platform to support the further and more complex emissions reduction project opportunities across Santos’ operations.
APA, Harvard, Vancouver, ISO, and other styles
31

Dénes Sulyok. "Economic questions of maize production on different soil types." Acta Agraria Debreceniensis, no. 49 (November 13, 2012): 289–92. http://dx.doi.org/10.34101/actaagrar/49/2545.

Full text
Abstract:
The requirements and objectives of cultivation are in constant change. For example, different cultivation systems are developed for the purpose of soil protection and the preservation of soil moisture content, and for soils with various precipitation supplies or production site conditions. Traditionally, one of the most important aims of cultivation is meeting crop needs. Further cost savings in fertilisation and crop protection can only be achieved by reducing the quality and quantity of production, or cannot be achieved at all. The costs can, however, be significantly reduced by means of the rationalisation of cultivation. Energy and working time demands can also be notably reduced if ploughing is left out of the conventional tillage method. The key requirement of economical operation is to perform the cultivation at the optimal date and moisture level and at the lowest possible cost. Cultivation accounts for 3–17% of production costs and 8–36% of machinery costs. Progressively reducing costs is a vital condition for the usability of each technological method. Our evaluation work was carried out using yield data obtained from cooperating farms and the experiment database of the Institute for Land Utilisation, Regional Development and Technology of the Centre for Agricultural and Applied Economic Sciences of the University of Debrecen. Three technological methods (ploughing, heavy cultivator and loosening tillage) were compared from an economic aspect on several soil types which differ in terms of cultivability (chernozem, sandy and sandy clay soils). We examined the sectoral cost/income relation of maize production, with maize as an indicator plant. The maize price during the analytical period was 45 thousand HUF per tonne. On chernozem soils, maize production can be carried out at a high income level, while maize production on sandy soils carries a huge risk. The role of cultivation is greatest on high-plasticity soils, since they have a huge energy demand and, in most cases, there is only a short amount of time available for each operation.
APA, Harvard, Vancouver, ISO, and other styles
32

Fiorito, Jack, Paul Jarley, and John T. Delaney. "The Adoption of Information Technology by U.S. National Unions." Articles 55, no. 3 (April 12, 2005): 451–76. http://dx.doi.org/10.7202/051328ar.

Full text
Abstract:
Unions, too, have been swept up in the wave of the information revolution. They have used information and communication technology (ICT) to run organising campaigns on the internet, to keep their members informed of specific developments related to both negotiations and strikes and, more generally, to improve communications with members, to support certain political and bargaining efforts, and to support organising campaigns. An online journal published by union staffers from several unions reports a host of examples of innovation and creativity in the use of ICT and, more particularly, of the internet (Ad Hoc Committee on Labor and the Web 1999). A recent front-page article in the AFL-CIO's America@Work entitled "Virtual organising campaign" describes how union organisers across the country are harnessing the power of the internet to reach and mobilise members (Lazarovici 1999: 9). Why does this matter? Beyond the fact that information is critical for unions, well-established theoretical notions suggest that ICT influences the results a union can obtain. Barney's (1997) concept of organisation as a source of sustainable competitive advantage, a notion essentially identical to Leibenstein's (1966) earlier idea of X-efficiency, remains fully relevant here. Although unions do not usually compete with one another, the effective use of ICT offers them an opportunity to improve member services, strengthen their political and public relations efforts, improve their bargaining performance and enhance their ability to organise new members. ICT thus offers a potential source of competitive advantage where unions do in fact compete. More importantly, it offers potential leverage where unions compete with employers over the form of workplace governance (i.e., employer unilateralism versus joint union-employer determination of working conditions). A model of union use. Models of innovation should generally be expected to apply to the particular case of ICT use. The use of ICT by a union is a relatively new phenomenon and therefore qualifies as an innovation (Daft 1982). Moreover, Damanpour's (1991) meta-analysis suggests that innovation is a general organisational phenomenon, in the sense that organisations that innovate in one particular area or form tend to do so in other areas and forms. Thus, the effects hypothesised by Delaney, Jarley and Fiorito (1996) should prove to be the same in the case of ICT use. This leads us to expect that certain organisational and environmental variables will affect ICT use as the cost-benefit ratios become known. In the terms of organisation theory, this situation essentially reflects the structural contingency approach. The data.
Our main data source is the Survey of Union Information Technology (SUIT), a mail survey conducted during the summer and autumn of 1997. A personalised cover letter explained the nature of the study, guaranteed confidentiality, offered to provide the results and requested participation. A sample of 120 national unions operating in the United States was drawn from Gifford's directory of labour organisations (1997). (Several include the membership of large Canadian unions, such as the Machinists, the Teamsters and the Steelworkers.) Seventy-five unions returned usable questionnaires. The results. Rationalisation shows a positive and significant effect on a multi-item scale covering various forms and uses of ICT. Decentralisation shows no effect in the basic model; however, with covariates for industry ICT use on the one hand, and for ICT use associated with prior innovation on the other, a positive effect appears, in line with expectations. A measure of strategic scope consistently fails to yield a statistically significant result. Size has a consistently strong positive impact, except in the presence of prior innovation. Initially, the measure of industry ICT use has a very impressive positive impact, except that, like size, the effect vanishes in the presence of a measure of prior innovation. Nor is there any support for the predicted negative effect of change in union membership. Finally, prior innovation has a strong positive effect on a union's use of ICT. Conclusion. Change and innovation are undoubtedly vital issues for unions as they confront the decline of their status. New leadership at the AFL-CIO and in the national unions is taking innovation seriously and asking how it can lead to a renewal of unionism. ICT in particular will probably become a key factor as unions attempt to formulate renewal strategies amid the turmoil of the information revolution. Nor should ICT be regarded as a magic elixir. The decline of unionism is attributable to a combination of factors, and it would be naive to think that any single change could counter such effects. In any case, ICT holds the promise of a powerful tool for improving organising efforts, member services, political effectiveness, performance at the bargaining table, solidarity among members and communication between members and their leaders. It can also take on an important symbolic value by helping unions appear up to date (Shostak 1997) or by helping them overcome their "dinosaur" image (Hurd 1998). Even if using ICT in combination with other innovations constitutes a transformation, that would not in itself warrant the conclusion that a new form of unionism (e.g., a cyber-union) has emerged.
At a minimum, it is safer to say that the adoption of ICT will probably have remarkable effects on the way unions carry out their conventional roles, and ICT may become a catalyst prompting them to take on new roles.
APA, Harvard, Vancouver, ISO, and other styles
33

Daniel, Yvonne, Charles Turner, Lisa Farrar, and R. Neil Dalton. "A Comparison of IEF and MSMS for Clinical Hemoglobinopathy Screening in 40,000 Newborns." Blood 112, no. 11 (November 16, 2008): 2387. http://dx.doi.org/10.1182/blood.v112.11.2387.2387.

Full text
Abstract:
Currently, newborn haemoglobinopathy screening is carried out using HPLC or isoelectric focusing (IEF). We have previously described a rapid and specific electrospray mass spectrometry–mass spectrometry (MSMS) technique, using multiple reaction monitoring (MRM) based peptide analysis, for simultaneous detection of the clinically significant haemoglobinopathies: haemoglobin (Hb)S, HbC, HbE, HbDPunjab and HbOArab. Here we report the results of a comparison of 40,000 newborn blood spots screened by both IEF and MSMS. For both IEF and MSMS analysis, blood spots (3.2mm) were punched into separate 96 well plates. IEF was performed using the Resolve haemoglobin test kit (PerkinElmer Life Sciences, Waltham, USA) and Isoscan imaging system. For MSMS analysis, the blood spots were digested for 30min at 37°C with a trypsin reagent, and diluted in mobile phase (acetonitrile:water, 50:50, with 0.025% formic acid). Sample, 2μl, was injected directly into the mobile phase (flow rate 80μl/min) and analysed, in positive ion mode, using a Sciex API4000 (Applied Biosystems, Warrington, UK). Specific MRM transitions for HbS, HbC, HbE, HbDPunjab, HbOArab, normal beta, alpha, gamma and delta chains were acquired; total acquisition time per sample was 60 sec. This enabled identification of sickling disorders and thalassaemia major, as well as assessment of transfusion state and potential identification of HbLepore and HbBarts. 40,000 blood spot samples for routine newborn haemoglobinopathy screening were analysed in parallel. HbS was detected in 199 samples; 8 were HbS/HbF only and 3 HbSC. HbC was detected in 39 samples, HbDPunjab in 52, HbE in 48. No HbOArab or HbLepore mutations were detected by either method. There have been no discrepancies between the analytical techniques. Using MSMS, mutation positive samples can be re-run in product ion scan mode to provide peptide sequence and hence unequivocal confirmation of the haemoglobin variant. In addition, 5,000 samples were analysed on a Sciex API4000 Q trap; using the information-dependent acquisition facility provided "real time" peptide sequencing, thus removing the requirement for re-injection. Sample preparation is very quick and simple for both methods, but the consumable costs associated with the MSMS technique are <10% of those for IEF. The capital cost of MSMS can be offset by high throughput and/or integration with current inherited metabolic disease screening by MSMS. The specificity of the MSMS analysis implies that haemoglobinopathy detection can be limited to specified conditions, based on agreed screening policy, and can eliminate the need for costly and time consuming second line testing. This study demonstrates that newborn haemoglobinopathy screening can be carried out rapidly, easily, and cost effectively using MSMS technology. It enables rationalisation of technology platforms in newborn screening by consolidating screening for haemoglobinopathies and inherited metabolic diseases onto MSMS.
APA, Harvard, Vancouver, ISO, and other styles
34

CHARUE-DUBOC, FLORENCE. "A THEORETICAL FRAMEWORK FOR UNDERSTANDING THE ORGANISATION OF THE R&D FUNCTION: AN EMPIRICAL ILLUSTRATION FROM THE CHEMICAL AND PHARMACEUTICAL INDUSTRY." International Journal of Innovation Management 10, no. 04 (December 2006): 455–76. http://dx.doi.org/10.1142/s1363919606001569.

Full text
Abstract:
The increasing importance of innovation for companies, mergers and acquisitions, and the strengthening of project structures are leading to numerous rationalisations in the organisation of the research function. Although few works have analysed company R&D organisation and its impact on innovation performance, we elaborate on the concepts of economy of scope and absorptive capacity, introduced to compare the efficiency of various firms' R&D, so as to analyse the organisation of R&D departments. We focus on inter-project learning and argue that it must be viewed as an organisational question. "Organising by problem" constitutes a new means of cross-project learning and of enhancing absorptive capacity.
APA, Harvard, Vancouver, ISO, and other styles
35

Avramenko, Alex. "Inspiration at work: is it an oxymoron?" Baltic Journal of Management 9, no. 1 (December 20, 2013): 113–30. http://dx.doi.org/10.1108/bjm-07-2013-0110.

Full text
Abstract:
Purpose – The purpose of this paper is to present results of an exploratory cross-cultural study aiming to examine the role and meaning of inspiration in organisational settings to advance the contemporary understanding of inspiration and its manifestations. Design/methodology/approach – The study utilises Gadamerian philosophical hermeneutics to cultivate an understanding of the rationalisations of inspiration at work and to explore its conceptualisations to inform future research. Findings – The findings strongly indicate that inspiration in its numerous manifestations is not confined to the domain of personal life and that it often occurs in organisational settings. There are no indications that inspiration is affected by the cultural belongingness of employees, rather it is found that attitudes towards inspiration differ among representatives of the different levels of the organisational hierarchy. A connection between motivation and inspiration is discussed and indication found that at the level of lay accounts the concepts are perceived to be both different and complementary. Originality/value – The article presents a conceptualisation of inspiration in an organisational context to guide future research towards a more instrumental approach to recognising and utilising inspiration in contemporary management practice.
APA, Harvard, Vancouver, ISO, and other styles
36

Nunan, Joseph, and Andrew P. Walden. "POCUS in Acute Medicine." Acute Medicine Journal 19, no. 2 (April 1, 2020): 62–63. http://dx.doi.org/10.52964/amja.0802.

Full text
Abstract:
In this edition of Acute Medicine, Knight et al. demonstrate from SAMBA data that access to ultrasound machines and supervision is geographically heterogeneous.1 They raise concerns that this may lead to inequity of provision of Point of Care Ultrasound (POCUS) and the benefits it can provide for patients. This point is well made: Since the development of the Focused Acute Medicine Ultrasound (FAMUS) competencies in 2016,2 there has been a steady increase in the provision of supervisors to 80,3 and the number of individuals completing training has increased to 56. Whilst this is to be applauded, our experience concurs with this paper that much of the ultrasound training is concentrated in pockets of expertise in particular hospitals. As part of the AIM curriculum rewrite for 2022, the Special Advisory Group has proposed to the GMC that POCUS competencies become mandatory for all trainees in AIM. This will be supported by half a day of clinical time for training. This is a laudable aim but it is questionable whether, given the number of supervisors and their idiosyncratic distribution, the specialty of Acute Medicine would be able to support accreditation for all trainees. It is estimated that it takes 22 hours of supervision per supervised trainee.1 This is a significant commitment for jobbing consultants, especially with no supporting professional activity time in a job plan. The COVID-19 pandemic has shown the value of POCUS lung ultrasound but has also further highlighted these training issues. A joint statement by the FAMUS and Focused Ultrasound in Intensive Care (FUSIC) committees has for the time being suggested that attendance at a course should no longer be a requirement for accreditation.4 This is an understandable attempt to try to keep training going but may have unintended consequences: It is often on courses that people are able to network and find solutions to mentoring and supervision in their areas and hospitals. The welcome support of the Royal College of Radiologists in their newsletter from Autumn 2019 may go some way to helping to fill the gap.5 The support of radiology and sonography departments will certainly lead to a greater number of training opportunities; however, there are concerns that many radiology departments remain sceptical about the role of POCUS and at an individual hospital level engagement is likely to be variable. As a solution to some of these problems, at the Royal Berkshire hospital we have set up the AIM POCUS Academy with the goal of further embedding POCUS within the Acute Medicine department and the wider hospital.6 We are doing this by developing a multi-professional, multidisciplinary approach to POCUS through weekly teaching 'Ultrarounds'. Links to the Sonography department, Cardiology, Respiratory Medicine, Emergency Medicine, Intensive Care, Vascular Access, Critical Care Outreach and Primary Care have led to a culture in which POCUS is considered a standard of care within the hospital and very much driven by the Acute Medicine department. This provides tremendous opportunities in all areas of the hospital for training and supervision but also the opportunities for different specialties to learn from each other.
Individual Physiotherapists, Nurses and the Advanced Critical Care Practitioners in the hospital are either trained or are training up in lung ultrasound, ultrasound for vascular access or focused echocardiography and provide an excellent long-term solution to the challenges of mentoring and supervision as they are more permanent members of staff. Where the sonography department and echocardiography department also provide mentorship for FAMUS and FUSIC, the exchange is both ways: Acute Medicine trainees have provided teaching around interpreting liver function testing, which has led to rationalisation of inpatient abdominal ultrasounds, and have taught sonographers how to interpret lung ultrasound; the echocardiography department are auditing the use of focused scans in specific conditions rather than a complete British Society of Echocardiography (BSE) full data sheet (so for instance in the context of an acute PE the echocardiographer will focus on right heart measures). This has helped to reduce waiting times within these departments. Technology has also helped to embed best practice and facilitate easier mentoring and supervision. Using the Butterfly IQ devices and a triple-encrypted 'cloud' allows easy archiving of images and the possibility of remote reviewing, which can even be in real time. This was invaluable during the COVID pandemic when GPs in our local 'hot hub' were trained up to use ultrasound to rule in and rule out COVID pneumonia, allowing them to refer onto the ambulatory COVID pathway.6 They were able to share images via the cloud for review by experts in the hospital, ensuring an extra degree of governance and oversight. We are trialling a similar system with our 'Hospital at Home' team with the hope it may save patients unnecessary visits to hospital for CXR and other imaging. If the specialty of Acute Medicine wants to take ownership of POCUS then we need to up our game and get serious about the supervision and mentoring of our trainees. Whilst the model of the AIM POCUS Academy may not be achievable everywhere, we believe initiatives like ours can act as the model for effective teaching and these centres could potentially provide a regional solution for POCUS training.
APA, Harvard, Vancouver, ISO, and other styles
37

Maldonado Castañeda, Oscar Javier. "Making HPV vaccines efficient: Cost-Effectiveness Analysis and the Economic Assemblage of Healthcare in Colombia." Science & Technology Studies, July 11, 2017, 2–18. http://dx.doi.org/10.23987/sts.55582.

Full text
Abstract:
Cost-effectiveness analysis (CEA) is a strategy of calculation whose main objective is to compare alternatives in order to make decisions about the best, most efficient solution (costs vs. benefits) to a particular problem. Cost-effectiveness analysis not only provides a framework to compare healthcare interventions which in practice seem incommensurable; it also performs a set of assumptions regarding the nature of healthcare and the behaviour of individuals. This article analyses the role of CEA as a device to produce value in the introduction of HPV vaccines in Colombia. In the different institutional pathways and decision-making scenarios, cost-effectiveness has been the key issue that justified the inclusions and the exclusions that such technology entails. Cost-effectiveness has justified the definition of girls as the population target and the exclusion of boys from the risks and benefits of this technology. Moreover, cost-effectiveness analysis has been a key instrument in the sexualising and de-sexualising of cervical cancer and HPV vaccines through the rationalisation of economic benefits.
APA, Harvard, Vancouver, ISO, and other styles
38

Becker, Jörg. "Media and Information Technology in Ten Years’ Time: A Society of Control Both from Above and Below, and From Outside and Inside." tripleC: Communication, Capitalism & Critique. Open Access Journal for a Global Sustainable Information Society 13, no. 2 (September 13, 2015). http://dx.doi.org/10.31269/triplec.v13i2.698.

Full text
Abstract:
In the process of continual change from the hand axe to the factory and now to industrial production 4.0, technology has had, and still has, two basically invariable functions: control and rationalisation. Each of these two terms is to be understood in a very comprehensive sense, in technical, engineering, commercial, legal and also social terms. This tenet also applies to television and to information technology. In my lecture, the terms "above" and "below" stand for a model of social stratification; they stand for capital and labour. The terms "outside" and "inside" stand for the external conditions of the class struggle from "above" and "below". The external conditions refer to the social environment and the internal conditions to the psychological environment. Both television and information technology rely on content and organisational forms that run from above to below (from top to bottom). Moreover, contrary to Gutenberg's invention of movable type, today innovations in the media and IT fields no longer run from the bottom up, but only from the top down. While television conditions the individual from outside, users of social media internalise that same conditioning as a liberation from constraints.
39

Petersen, Trine Brun. "Suiting Children for Institutions. The Development, Calibration and Stabilization of the One-piece Snowsuit." Journal of Design History, June 27, 2021. http://dx.doi.org/10.1093/jdh/epab025.

Abstract:
This article presents a study of the design history of the snowsuit as a product type and explores the constitutive factors in its development. To shed light on the snowsuit as a designed object, the study draws on Actor Network Theory (ANT), particularly work linking ANT to design. In order to do this, the study focuses on the Finnish company Reima, which has been a leading producer of children’s wear in the Nordic region since the 1960s. The article traces the development of the one-piece snowsuit from a marginal product type, whose use was advised against because it was seen as hindering children’s freedom of movement, to the ubiquitous position it holds in Nordic pre-school children’s wardrobes today. Innovations in textile technology play a prominent role in this development, as does the emergence of a new configuration of use, characterized by working mothers and institutionalized children, which has increased the demand for garments that are practical, robust and easy-care. The article argues that the snowsuit is essentially a technique for making children more ‘suitable’ for institutions and links the snowsuit to a broader movement towards the rationalisation of family life in the welfare state.
40

Andrews, Trish, and Greg Klease. "Challenges of multisite video conferencing: The development of an alternative teaching/learning model." Australasian Journal of Educational Technology 14, no. 2 (October 14, 1998). http://dx.doi.org/10.14742/ajet.1902.

Abstract:
The current trend of globalisation is one that is having a marked impact on society, and the area of education in particular is feeling the impact. The dramatic changes that are taking place as a result of globalisation mean that the demand for education is increasing significantly. There is growing recognition of not only the need for skills development but also reskilling and a requirement for lifelong learning (Duguet, 1995). Additionally, the increasing availability and stability of communications technologies, along with the economic rationalisation that is characteristic of the nineties, mean that educational institutions are rethinking the ways in which they deliver teaching and learning activities to an increasingly diverse and dispersed clientele.
This article describes a video conferencing project at Central Queensland University which was implemented to deliver simultaneous interactive instruction in first year chemistry to three campuses: Rockhampton, Mackay and Bundaberg. The article discusses some of the issues of implementing video conferencing as a tool for teaching in a distributed, multi-campus institution and the challenges in developing an interactive teaching and learning model. This includes the need for intensive ongoing staff development and the recognition that staff development for teaching with technology is a long-term process of skills acquisition. It also recognises the importance of appropriate student preparation and the part this plays in successfully adopting technologically mediated teaching and learning programs.
41

Shadmon, Asher. "Geotechnics in the promotion of dimension stone." Journal of Nepal Geological Society 22 (December 1, 2000). http://dx.doi.org/10.3126/jngs.v22i0.32311.

Abstract:
Changes in the supply and use of dimension stone have extensively modified the approach of the engineering geologist to stone technology. Traditional expertise requirements have moved from exploration, extraction, and processing to quality control and application. Reasons for this are the globalisation of dimension stone supplies from new and unknown sources; deterioration of environmental conditions; cost saving in using thin stone tiles or slabs as veneer; at times hazardous, incorrect cladding applications; and the assessment of weathering. All these require the knowledge and experience of the engineering geologist, whose skills are not commonly known. Promotion, assisted in the media by exposure of global hazards, has drawn attention to the discipline and practices. Research on the physical and mechanical stone properties has at last been taken up by major intergovernmental organisations. Large budgets are devoted to taking stone testing out of century-old routine. "High tech" facilities are now applied to make stone-related evaluations more objective and independent of the human factor. Acceptance criteria of testing results require rationalisation. Geotechnical knowledge is important to keep test results within economic restraints and timetables. This is of special importance when linking such factors to environmental planning and control of quarrying and subsequent rehabilitation of the workings. In this paper a bird's-eye view of the problems related to dimension stones is provided. Some examples of research trends are also given to exhibit the state of the art.
42

Couper, Rachel, Duncan Maxwell, and Mathew Aitchison. "The Legacies of Manufacturing and Factories of Industrialised Construction." Modular and Offsite Construction (MOC) Summit Proceedings, May 24, 2019, 243–50. http://dx.doi.org/10.29173/mocs100.

Abstract:
The term ‘industrialised construction’ carries the promise of an industry transformed, an industry driven by improved processes and higher-quality products. One of the more obvious differences between industrialised construction and traditional construction is the factory. Yet it is often undervalued as a secondary consideration to the seemingly more important factors of speed, efficiency and economic rationalisation. This paper offers a reconsideration of the history of the factory as a critical feature in shaping contemporary sites of production in the construction industry. While the manufacturing mega-factories of today continue to develop at a rapid rate, their composition has been shaped by all three previous industrial revolutions and the current fourth. Drawing on the legacies of mechanisation, mass production and automation, today’s factory is informed by ideas of lean and agile production, and the connected factory forecast by Industry 4.0 looks towards the internet, cloud and IoT in visions of the future. By charting the evolution of the preceding three phases of industry in relation to key architectural developments of the factory, this paper reflects upon which aspects of these earlier chapters of manufacturing have affected the implementation of Industry 4.0 in the industrialised construction sector. Research in this area has often asked what the production sites of industrialised construction can learn from contemporary manufacturing, such as the automotive, aerospace or technology industries. By contrast, this paper questions how the potential requirements of industrialised construction might differ from other forms of manufacturing and how this might in turn inform future sites of production in this sector. This paper speculates that a contemporary industrialised construction industry would be wise to re-evaluate the factory as a space specific to construction, distinct from its manufacturing origins, in order to better address the broad range of new, or previously under-considered, industry-specific requirements.
43

Nunes, Mark. "Failure Notice." M/C Journal 10, no. 5 (October 1, 2007). http://dx.doi.org/10.5204/mcj.2702.

Abstract:
Amongst the hundreds of emails that made their way to error@media-culture.org.au over the last ten months, I received the following correspondence: Failure noticeHi. This is the qmail-send program at sv01.wadax.ne.jp.I’m afraid I wasn’t able to deliver your message to the following addresses.This is a permanent error; I’ve given up. Sorry it didn’t work out.namewithheld@s.vodafone.ne.jp>:210.169.171.135 does not like recipient.Remote host said: 550 Invalid recipient:namewithheld@s.vodafone.ne.jp>Giving up on 210.169.171.135. Email of this sort marks a moment that is paradoxically odd and all too familiar in the digital exchanges of everyday life. The failure message arrives to tell me something “didn’t work out.” This message shows up in my email account looking no different from any other correspondence—only this one hails from the system itself, signalling a failure to communicate. Email from the “mailer-daemon” calls attention to both the logic of the post that governs email (a “letter” sent to an intended address at the intention of some source) and the otherwise invisible logic of informatic protocols, made visible in the system failure of a “permanent error.” In this particular instance, however, the failure notice is itself a kind of error. I never emailed namewithheld@s.vodafone.ne.jp—and by the mailer-daemon’s account, such a person does not exist. It seems that a spammer has exploited an email protocol as a way of covering his tracks: when a deliver-to path fails, the failure notice bounces to a third site. The failure notice marks the successful execution of a qmail protocol, but its arrival at our account is still a species of error. In most circumstances, error yields an invalid result. In calculation, error marks a kind of misstep that not only corrupts the end result, but all steps following the error. One error begets others. But as with the failure notice, error often marks not only the misdirections of a system, but also the system’s internal logic. The failure notice corresponds to a specific category of error—a potential error that the system must predict before it has actually occurred. While the notice signals failure (permanent error), it does so within the successful, efficient operation of a communicative system. What is at issue, then, is less a matter of whether or not error occurs than a system’s ability to handle error as it arises. Control systems attempt to close themselves off to error’s misdirections. If error signals a system failure, the “failure notice” of error foregrounds the degree to which in “societies of control” every error is a fatal error in that Baudrillardian sense—a failure that is subsumed in the operational logic of the system itself (40). Increasingly, the networks of a global marketplace require a rationalisation of processes and an introduction of informatic control systems to minimise wastage and optimise output. An informatic monoculture expresses itself through operational parameters that define communication according to principles of maximum transmission. In effect, in the growing dominance of a network society, we are witnessing the transcendence of a social and cultural system that must suppress at all costs the failure to communicate. This global communication system straddles a paradoxical moment of maximum exchange and maximum control. With growing frequency, social and commercial processes are governed by principles of quality assurance, what Lyotard defined nearly thirty years ago as a “logic of maximum performance” (xxiv). 
As Six Sigma standards migrate from the world of manufacturing to a wide range of institutions, we find a standard of maximum predictability and minimum error as the latest coin of the realm. Utopia is now an error-free world of 100% efficiency, accuracy, and predictability. This lure of an informatic “monoculture” reduces communication to a Maxwell’s demon for capturing transmission and excluding failure. Such a communicative system establishes a regime of signs that thrives upon the drift and flow of a network of signifiers, but that affirms its power as a system in its voracious incorporation of signs within a chain of signification (Deleuze and Guattari 111-117). Error is cast out as abject, the scapegoat “condemned as that which exceeds the signifying regime’s power of deterritorialization” (Deleuze and Guattari 117). Deleuze and Guattari describe this self-cycling apparatus of capture as “a funeral world of terror,” the terror of a black-hole regime that ultimately depends upon a return of the same and insures that everything that circulates communicates…or is cast off as abject (113). This terror marks a relation of control, one that depends upon a circulation of signs but that also insists all flows fall within its signifying regime. To speak of the “terror of information” is more than metaphorical to the extent that this forced binary (terror of signal/error of noise) imposes a kind of violence that demands a rationalisation of all singularities of expression into the functionalities of a quantifiable system. To the extent that systems of information imply systems of control, the violence of information is less metaphor than metonym, as it calls into high relief the scapegoat error—the abject remainder whose silenced line of flight marks the trajectory of the unclean. This cybernetic logic of maximum performance demands that error is either contained within the predictable deviations of a system’s performance, or nullified as outlying and asignifying. Statistics tells us that we are best off ignoring the outlier. This logic of the normal suggests that something very risky occurs when an event or an instance falls outside the scope of predicable variance. In the ascendancy of information, error, deviance, and outlying results cast a long shadow. In Norbert Wiener’s account of informatic entropy, this drift from systematic control marked a form of evil—not a Manichean evil of bad actors, but rather an Augustinian evil: a falling away from the perfection of order (34-36). Information utopia banishes error as a kind of evil—an aberration that is decidedly off the path of order and control. This cybernetic logic functions at all levels, from social systems theory to molecular biology. Our diseases are now described as errors in coding, transcription, or transmission—genetic anomalies, cancerous loop scripts, and neurochemical noise. Mutation figures as an error in reproduction—a straying from faithful replication and a falling away from the Good of order and control. But we should keep in mind that when we speak of “evil” in the context of this cybernetic logic, that evil takes on a specific form. It is the evil of the errant. Or to put it another way: it is the evil of the Sesame Street Muppet, Bert. In 2001, a U.S. 
high school student named Dino Ignacio created a graphic of the Muppet, Bert, with Osama bin Laden—part of his humorous Website project, “Bert is Evil.” A Pakistani-based publisher scanning the Web for images of bin Laden came across Ignacio’s image and, apparently not recognising the Sesame Street character, incorporated it into a series of anti-American posters. According to Henry Jenkins’s account of the events, in the weeks that followed, “CNN reporters recorded the unlikely sight of a mob of angry protestors marching through the streets chanting anti-American slogans and waving signs depicting Bert and bin Laden” (1-2). As the story of the Bert-sighting spread, new “Bert is evil” Websites sprang up, and Ignacio found himself the unwitting centre of a full-blown Internet phenomenon. Jenkins finds in this story a fascinating example of what he calls convergence culture, the blurring of the line between consumer and producer (3). From a somewhat different critical perspective, Mark Poster reads this moment of misappropriation and misreading as emblematic of global networked culture, in which “as never before, we must begin to interpret culture as multiple cacophonies of inscribed meanings as each cultural object moves across cultural differences” (11). But there is another moral to this story as well, to the extent that the convergence and cacophony described here occur in a moment of error, an errant slippage in which signification escapes its own regime of signs. The informatic (Augustinian) evil of Bert the Muppet showing up at an anti-American rally in Pakistan marks an event-scene in which an “error” not only signifies, but in its cognitive resonance, begins to amplify and replicate. At such moments, the “failure notice” of error signals a creative potential in its own right—a communicative context that escapes systemic control. The error of “evil Bert” introduces noise into this communicative system. It is abject information that marks an aberration within an otherwise orderly system of communication, an error of sorts marking an errant line of flight. But in contrast to the trance-like lure of 100% efficiency and maximum performance, is there not something seductive in these instances of error, as it draws us off our path of intention, leading us astray, pulling us toward the unintended and unforeseen? In its breach of predictable variance, error gives expression to the erratic. As such, “noise” marks a species of error (abject information) that, by failing to signify within a system, simultaneously marks an opening, a poiesis. This asignifying poetics of “noise,” marked by these moments of errant information, simultaneously refuses and exceeds the cybernetic imperative to communicate. This poetics of noise is somewhat reminiscent of Umberto Eco’s discussion of Claude Shannon’s information theory in The Open Work. For Shannon, the gap between signal and selection marks a space of “equivocation,” what Warren Weaver calls “an undesirable … uncertainty about what the message was” (Shannon and Weaver 21). Eco is intrigued by Shannon’s insight that communication is always haunted by equivocation, the uncertainty that the message received was the signal sent (57-58). Roland Barthes also picks up on this idea in S/Z, as N. Katherine Hayles notes in her discussion of information theory and post-structuralism (46). For these writers, equivocation suggests a creative potential in entropy, in that noise is, in Weaver’s words, “spurious information” (Shannon and Weaver 19). 
Eco elaborates on Shannon and Weaver’s information theory by distinguishing between actual communication (the message sent) and its virtuality (the possible messages received). Eco argues, in effect, that communication reduces information in its desire to actualise signal at the expense of noise. In contrast, poetics generates information by sustaining the equivocation of the text (66-68). It is in this tension between capture and escape marked by the scapegoats of error and noise that I find a potential for a contemporary poetics within a global network society. Error reveals the degree to which everyday life plays itself out within this space of equivocation. As Stuart Moulthrop addressed nearly ten years ago, our frequent encounters with “Error 404” on the Web calls attention to “the importance of not-finding”: that error marks a path in its own right, and not merely a misstep. Without question, this poetics of noise runs contrary to a dominant, cybernetic ideology of efficiency and control. By paying attention to drift and lines of flight, such erratic behaviour finds little favour in a world increasingly defined by protocol and predictable results. But note how in its attempt to capture error within its regime of signs, the logic of maximum performance is not above recuperating the Augustinian evil of error as a form of “fortunate fall.” Even in the Six Sigma world of 100% efficiency, does not corporate R & D mythologise the creative moment that allows error to turn a profit? Post-It Notes® and Silly Putty® present two classic instances in which happenstance, mistake, and error mark a moment in which “thinking outside of the box” saves the day. Error marks a kind of deviation from—and within—this system: a “failure” that at the same time marks a potential, a virtuality. Error calls attention to its etymological roots, a going astray, a wandering from intended destinations. Error, as errant heading, suggests ways in which failure, mutation, spurious information, and unintended results provide creative openings and lines of flight that allow for a reconceptualisation of what can (or cannot) be realised within social and cultural forms. While noise marks a rupture of signification, it also operates within the framework of a cybernetic imperative that constantly attempts to capture the flows that threaten to escape its operational parameters. As networks become increasingly social, this logic of rationalisation and abstraction serves as a dialectical enclosure for an information-based culture industry. But error also suggests a strategy of misdirection, getting a result back other than what one expected, and in doing so turns the cybernetic imperative against itself. “Google-bombing,” for example, creates an informatic structure that plays off of the creative potential of equivocation. Here, error of a Manichean sort introduces noise into an information system to produce unexpected results. Until recently, typing the word “failure” into the search engine Google produced as a top response George Bush’s Webpage at www.whitehouse.gov. By building Webpages in which the text “failure” links to the U.S. President’s page, users “hack” Google’s search algorithm to produce an errant heading. The cybernetic imperative is turned against itself; this strategy of misdirection enacts a “fatal error” that evokes the logic of a system to create an opening for poeisis, play, and the unintended. 
Information networks, no longer secondary to higher order social and cultural formations, now define the function and logic of social space itself. This culture of circulation creates equivalences by way of a common currency of “information,” such that “viral” distribution defines a social event in its own right, regardless of the content of transmission. While a decade earlier theorists speculated on the emergence of a collective intelligence via global networks, the culture of circulation that has developed online would seem to indicate that “emergence” and circulation are self-justifying events. In the moment of equivocation—not so much beyond good and evil, but rather in the spaces between signal and noise—slippage, error, and misdirection suggest a moment of opening in contrast to the black hole closures of the cybernetic imperative. The violence of an informatic monoculture expresses itself in this moment of insistence that whatever circulates signifies, and that which cannot communicate must be silenced. In such an environment, we would do well to examine these failures to communicate, as well as the ways in which error and noise seduce us off course. In contrast to the terror of an eternal return of the actual, a poetics of noise suggests a virtuality of the network, an opening of the possible in an increasingly networked society. The articles in this issue of M/C Journal approach error from a range of critical and social perspectives. Essays address the ways in which error marks both a misstep and an opening. Throughout this issue, the authors address error as both abject and privileged instance in a society increasingly defined by information networks and systems of control. In our feature article, “Revealing Errors,” Benjamin Mako Hill explores how media theorists would benefit from closer attention to errors as “under-appreciated and under-utilised in their ability to reveal technology around us.” By allowing errors to communicate, he argues, we gain a perspective that makes invisible technologies all the more visible. As such, error provides a productive moment for both interpretive and critical interventions. Two essays in this issue look at the place of error and noise within the work of art. Rather than foregrounding a concept of “medium” that emphasises clear, unimpeded transmission, these authors explore the ways in which the errant and unintended provide for a productive aesthetic in its own right. Using Shannon’s information theory, and in particular his concept of equivocation, Su Ballard’s essay, “Information, Noise, and et al.’s ‘maintenance of social solidarity-instance 5,” explores the productive error of noise in the digital installation art of a New Zealand artists’ collective. Rather than carefully controlling the viewer’s experience, et al.’s installation places the viewer within a field of equivocation, in effect encouraging misreadings and unintended insertions. In a similar vein, Tim Barker’s essay, “Error, the Unforeseen, and the Emergent: The Error of Interactive Media Art” examines the productive error of digital art, both as an expression of artistic intent and as an emergent expression within the digital medium. This “glitch aesthetic” foregrounds the errant and uncontrollable in any work of art. 
In doing so, Barker argues, error also serves as a measure of the virtual—a field of potential that gestures toward the “unforeseen.” The virtuality of error provides a framework of sorts for two additional essays that, while separated considerably in subject matter, share similar theoretical concerns. Taking up the concept of an asignifying poetics of noise, Christopher Grant Ward’s essay, “Stock Images, Filler Content, and the Ambiguous Corporate Message,” explores how the stock image industry presents a kind of culture of noise in its attempt to encourage equivocation rather than control semiotic signal. By producing images that are more virtual than actual, visual filler provides an all-too-familiar instance of equivocation as a field of potential and a Derridean citation of undecidability. Adi Kuntsman takes a similar theoretic tack in “‘Error: No Such Entry’: Haunted Ethnographies of Online Archives.” Using a database retrieval error message, “no such entry,” Kuntsman reflects upon her ethnographic study of an online community of Russian-Israeli queer immigrants. Error messages, she argues, serve as informatic “hauntings”—erasures that speak of an online community’s complex relation to the construction and archiving of a collective history. In the case of a database retrieval error—as in the mailer-daemon’s notice of the “550” error—the failure of an address to respond to its hailing calls attention to a gap between query and expected response. This slippage in control is, as discussed above, an instance of an Augustinian error. But what of the Manichean—the intentional engagement in strategies of misdirection? In “Bad Avatar! Griefing in Virtual Worlds,” Kimberly Gregson provides a taxonomy of aberrant behaviour in online gaming, in which players distort or subvert orderly play through acts that violate protocol. From the perspective of many a gamer, griefing serves no purpose other than annoyance, since it exploits the rules of play to disrupt play itself. Yet in “Amazon Noir: Piracy, Distribution, Control,” Michael Dieter calls attention to “how the forces confined as exterior to control (virality, piracy, noncommunication) regularly operate as points of distinction to generate change and innovation.” The Amazon Noir project exploited vulnerabilities in Amazon.com’s Search Inside!™ feature to redistribute thousands of electronic texts for free through peer-to-peer networks. Dieter demonstrates how this “tactical media performance” challenged a cybernetic system of control by opening it up to new and ambiguous creative processes. Two of this issue’s pieces explore a specific error at the nexus of media and culture, and in keeping with Hill’s concept of “revealing errors,” use this “glitch” to lay bare dominant ideologies of media use. In her essay, “Artificial Intelligence: Media Illiteracy and the SonicJihad Debacle in Congress,” Elizabeth Losh focuses on a highly public misreading of a Battlefield 2 fan video by experts from the Science Applications International Corporation in their testimony before Congress on digital terrorism. Losh argues that Congress’s willingness to give audience to this misreading is a revealing error in its own right, as it calls attention to the anxiety of experts and power brokers over the control and distribution of information. In a similar vein, Yasmin Ibrahim’s essay, “The Emergence of Audience as Victims: The Issue of Trust in an Era of Phone Scandals,” explores the revealing error of interactive television gone wrong.
Through an examination of recent BBC phone-in scandals, Ibrahim explores how failures—both technical and ethical—challenge an increasingly interactive audience’s sense of trust in the “reality” of mass media. Our final essay takes up the theme of mutation as genetic error. Martin Mantle’s essay, “‘Have You Tried Not Being a Mutant?’: Genetic Mutation and the Acquisition of Extra-ordinary Ability,” explores “normal” and “deviant” bodies as depicted in recent Hollywood retellings of comic book superhero tales. Error, he argues, while signalling the birth of superheroic abilities, marks a site of genetic anxiety in an informatic culture. Mutation as “error” marks the body as scapegoat, signalling all that exceeds normative control. In each of these essays, error, noise, deviation, and failure provide a context for analysis. In suggesting the potential for alternate, unintended outcomes, error marks a systematic misgiving of sorts—a creative potential with unpredictable consequences. As such, error—when given its space—provides an opening for artistic and critical interventions. References “Art Fry, Inventor of Post-It® Notes: ‘One Man’s Mistake is Another’s Inspiration.” InventHelp. 2004. 14 Oct. 2007 http://www.inventhelp.com/articles-for-inventors-art-fry.asp>. Barthes, Roland. S/Z. Trans. Richard Miller. New York: Hill and Wang, 1974. Baudrillard, Jean. The Transparency of Evil. Trans. James Benedict. New York: Verso, 1993. Deleuze, Gilles. “Postscript on the Societies of Control.” October 59 (Winter 1992): 3-7. Deleuze, Gilles, and Felix Guattari. A Thousand Plateaus. Trans. Brian Massumi. Minneapolis: U Minnesota P, 1987. Eco, Umberto. The Open Work. Cambridge: Harvard UP, 1989. “Googlebombing ‘Failure.’” Official Google Blog. 16 Sep. 2005. 14 Oct. 2007 http://googleblog.blogspot.com/2005/09/googlebombing-failure.html>. Hayles, N. Katherine. How We Became Posthuman. Chicago: U Chicago P, 1999. Jenkins, Henry. Convergence Culture. New York: NYU Press, 2006. Lyotard, Jean-Francois. The Postmodern Condition. Trans. Geoffrey Bennington and Brian Massumi. Minneapolis: Minnesota UP, 1984. Moulthrop, Stuart. “Error 404: Doubting the Web.” 2000. 14 Oct. 2007 http://iat.ubalt.edu/moulthrop/essays/404.html>. Poster, Mark. Information Please. Durham, NC: Duke UP, 2006. Shannon, Claude, and Warren Weaver. The Mathematical Theory of Communication. Urbana: U Illinois P, 1949. “Silly Putty®.” Inventor of the Week. 3 Mar. 2003. 14 Oct. 2007 http://web.mit.edu/Invent/iow/sillyputty.html>. Wiener, Norbert. The Human Use of Human Beings. Cambridge, MA: Da Capo, 1988. Citation reference for this article MLA Style Nunes, Mark. "Failure Notice." M/C Journal 10.5 (2007). echo date('d M. Y'); ?> <http://journal.media-culture.org.au/0710/00-editorial.php>. APA Style Nunes, M. (Oct. 2007) "Failure Notice," M/C Journal, 10(5). Retrieved echo date('d M. Y'); ?> from <http://journal.media-culture.org.au/0710/00-editorial.php>.
44

Chen, Peter. "Community without Flesh." M/C Journal 2, no. 3 (May 1, 1999). http://dx.doi.org/10.5204/mcj.1750.

Abstract:
On Wednesday 21 April the Minister for Communications, Information Technology and the Arts introduced a piece of legislation into the Australian Senate to regulate the way Australians use the Internet. This legislation is presented within Australia's existing system of content regulation, a scheme that the Minister describes is not censorship, but merely regulation (Alston 55). Underlying Senator Alston's rhetoric about the protection of children from snuff film makers, paedophiles, drug pushers and other criminals, this long anticipated bill is aimed at reducing the amount of pornographic materials available via computer networks, a censorship regime in an age when regulation and classification are the words we prefer to use when society draws the line under material we want to see, but dare not allow ourselves access to. Regardless of any noble aspirations expressed by free-speech organisations such as Electronic Frontiers Australia relating to the defence of personal liberty and freedom of expression, this legislation is about porn. Under the Bill, Australia would proscribe our citizens from accessing: explicit depictions of sexual acts between consenting adults; mild non-violent fetishes; depictions of sexual violence, coercion or non-consent of any kind; depictions of child sexual abuse, bestiality, sexual acts accompanied by offensive fetishes, or exploitative incest fantasies; unduly detailed and/or relished acts of extreme violence or cruelty; explicit or unjustifiable depictions of sexual violence against non-consenting persons; and detailed instruction or encouragement in matters of crime or violence or the abuse of proscribed drugs. (OFLC) The Australian public, as a whole, favour the availability of sexually explicit materials in some form, with OFLC data indicating a relatively high degree of public support for X rated videos, the "high end" of the porn market (Paterson et al.). In Australia strict regulation of X rated materials in conventional media has resulted in a larger illegal market for these materials than the legalised sex industries of the ACT and Northern Territory (while 1.2 million X rated videos are legally sold out of the territories, 2 million are sold illegally in other jurisdictions, according to Patten). In Australia, censorship of media content has traditionally been based on the principles of the protection of society from moral harm and individual degradation, with specific emphasis on the protection of innocents from material they are not old enough for, or mentally capable of dealing with (Joint Select Committee on Video Material). Even when governments distanced themselves from direct personal censorship (such as Don Chipp's approach to the censorship of films and books in the late 1960s and early 1970s) and shifted the rationale behind censorship from prohibition to classification, the publicly stated aims of these decisions have been the support of existing community standards, rather than the imposition of strict legalistic moral values upon an unwilling society. In the debates surrounding censorship, and especially the level of censorship applied (rather than censorship as a whole), the question "what is the community we are talking about here?" has been a recurring theme. The standards that are applied to the regulation of media content, both online and off, are often the focus of community debate (a pluralistic community that obviously lacks "standards" by definition of the word). 
In essence the problem of maintaining a single set of moral and ethical values for the treatment of media content is a true political dilemma: a problem that lacks any form of solution acceptable to all participants. Since the introduction of the Internet as a "mass" medium (or more appropriately, a "popular" one), government indecision about how best to treat this new technology has precluded any form or content regulation other than the ad hoc use of existing non-technologically specific law to deal with areas of criminal or legally sanctionable intent (such as the use of copyright law, or the powers under the Crimes Act relating to the improper use of telecommunications services). However, indecision in political life is often associated with political weakness, and in the face of pressure to act decisively (motivated again by "community concern"), the Federal government has decided to extend the role of the Australian Broadcasting Authority to regulate and impose a censorship regime on Australian access of morally harmful materials. It is important to note the government's intention to censor access, rather than content of the Internet. While material hosted in Australia (ignoring, of course, the "cyberspace" definitions of non-territorial existence of information stored in networks) will be censored (removed from Australia computers), the government, lacking extraterritorial powers to compel the owners of machines located offshore, intends to introduce of some form of refused access list to materials located in other nations. What is interesting to consider in this context is the way that slight shifts of definitional paradigm alter the way this legislation can be considered. If information flows (upon which late capitalism is becoming more dependent) were to be located within the context of international law governing the flow of waterways, does the decision to prevent travel of morally dubious material through Australia's informational waterways impinge upon the riparian rights of other nations (the doctrine of fair usage without impeding flow; Godana 50)? Similarly, if we take Smith's extended definition of community within electronic transactional spaces (the maintenance of members' commitment to the group, monitoring and sanctioning behaviour and the production and distribution of resources), then the current Bill proposes the regulation of the activities of one community by another (granted, a larger community that incorporates the former). Seen in this context, this legislation is the direct intervention in an established social order by a larger and less homogeneous group. It may be trite to quote the Prime Minister's view of community in this context, where he states ...It is free individuals, strong communities and the rule of law which are the best defence against the intrusive power of the state and against those who think they know what is best for everyone else. (Howard 21) possibly because the paradigm in which this new legislation is situated does not classify those Australians online (who number up to 3 million) as a community in their own right. In a way the Internet users of Australia have never identified themselves as a community, nor been asked to act in a communitarian manner. While discussions about the value of community models when applied to the Internet are still divided, there are those who argue that their use of networked services can be seen in this light (Worthington). 
What this new legislation does, however, is preclude the establishment of public communities in order to meet the desires of government for some limits to be placed on Internet content. The Bill does allow for the development of "restricted access systems" that would allow pluralistic communities to develop and engage in a limited amount of self-regulation. These systems include privately accessible Intranets, or sites that restrict access through passwords or some other form of age verification technique. Thus, ignoring the minimum standards that will be required for these communities to qualify for some measure of self-regulatory freedom, what is unspoken here is that specific subsections of the Internet population may exist, provided they keep well away from the public gaze. A ghetto without physical walls. Under the Bill, a co-regulatory approach is endorsed by the government, favouring the establishment of industry codes of practice by ISPs and (or) the establishment of a single code of practice by the content hosting industry (content developers are relegated to yet undetermined complementary state legislation). However, this section of the Bill, in mandating a range of minimum requirements for these codes of practice, and denying plurality to the content providers, places an administrative imperative above any communitarian spirit. That is, that the Internet should have no more than one community, it should be an entity bound by a single guiding set of principles and be therefore easier to administer by Australian censors. This administrative imperative re-encapsulates the dilemma faced by governments dealing with the Internet: that at heart, the broadcast and print press paradigms of existing censorship regimes face massive administrative problems when presented with a communications technology that allows for wholesale publication of materials by individuals. Whereas the limited numbers of broadcasters and publishers have allowed the development of Australia's system of classification of materials (on a sliding scale from G to RC classifications or the equivalent print press version), the new legislation introduced into the Senate uses the classification scheme simply as a censorship mechanism: Internet content is either "ok" or "not ok". From a public administration perspective, this allows government to drastically reduce the amount of work required by regulators and eases the burden of compliance costs by ISPs, by directing clear and unambiguous statements about the acceptability of existing materials placed online. However, as we have seen in other areas of social policy (such as the rationalisation of Social Security services or Health), administrative expedience is often antipathetic to small communities that have special needs, or cultural sensitivities outside of mainstream society. While it is not appropriate to argue that public administration creates negative social impacts through expedience, what can be presented is that, where expedience is a core aim of legislation, poor administration may result. For many Australian purveyors of pornography, my comments will be entirely unhelpful as they endeavour to find effective ways to spoof offshore hosts or bone up (no pun intended) on tunnelling techniques. Given the easy way in which material can be reconstituted and relocated on the Internet, it seems likely that some form of regulatory avoidance will occur by users determined not to have their content removed or blocked. 
For those regulators given the unenviable task of censoring Internet access it may be worthwhile quoting from Sexing the Cherry, in which Jeanette Winterson describes the town: whose inhabitants are so cunning that to escape the insistence of creditors they knock down their houses in a single night and rebuild them elsewhere. So the number of buildings in the city is always constant but they are never in the same place from one day to the next. (43) Thus, while Winterson saw this game as a "most fulfilling pastime", it is likely to present real administrative headaches to ABA regulators when attempting to enforce the Bill's anti-avoidance clauses. The Australian government, in adapting existing regulatory paradigms to the Internet, has overlooked the informal communities who live, work and play within the virtual world of cyberspace. In attempting to meet a perceived social need for regulation with political and administrative expedience, it has ignored the potentially cohesive role of government in developing self-regulating communities who need little government intervention to produce socially beneficial outcomes. In proscribing activity externally to the realm in which these communities reside, what we may see is a new type of community, one whose desire for a feast of flesh leads them to evade the activities of regulators who operate in the "meat" world. What this may show us is that in a virtual environment, the regulators' net is no match for a world wide web. References Alston, Richard. "Regulation is Not Censorship." The Australian 13 April 1999: 55. Paterson, K., et. al. Classification Issues: Film, Video and Television. Sydney: The Office of Film and Literature Classification, 1993. Patten, F. Personal interview. 9 Feb. 1999. Godana, B.A. Africa's Shared Water Resources: Legal and Institutional Aspects of the Nile, Niger and Senegal River Systems. London: Frances Pinter, 1985. Howard, John. The Australia I Believe In: The Values, Directions and Policy Priorities of a Coalition Government Outlined in 1995. Canberra: Liberal Party, 1995. Joint Select Committee On Video Material. Report of the Joint Select Committee On Video Material. Canberra: APGS, 1988. Office of Film and Literature Classification. Cinema & Video Ratings Guide. 1999. 1 May 1999 <http://www.oflc.gov.au/classinfo.php>. Smith, Marc A. "Voices from the WELL: The Logic of the Virtual Commons." 1998. 2 Mar. 1999 <http://www.sscnet.ucla.edu/soc/csoc/papers/voices/Voices.htm>. Winterson, Jeanette. Sexing the Cherry. New York: Vintage Books. 1991. Worthington, T. Testimony before the Senate Select Committee on Information Technologies. Unpublished, 1999. Citation reference for this article MLA style: Peter Chen. "Community without Flesh: First Thoughts on the New Broadcasting Services Amendment (Online Services) Bill 1999." M/C: A Journal of Media and Culture 2.3 (1999). [your date of access] <http://www.uq.edu.au/mc/9905/bill.php>. Chicago style: Peter Chen, "Community without Flesh: First Thoughts on the New Broadcasting Services Amendment (Online Services) Bill 1999," M/C: A Journal of Media and Culture 2, no. 3 (1999), <http://www.uq.edu.au/mc/9905/bill.php> ([your date of access]). APA style: Author. (1999) Community without flesh: first thoughts on the new broadcasting services amendment (online services) bill 1999. M/C: A Journal of Media and Culture 2(3). <http://www.uq.edu.au/mc/9905/bill.php> ([your date of access]).
45

Chau, Christina, and Laura Glitsos. "Time." M/C Journal 22, no. 6 (December 4, 2019). http://dx.doi.org/10.5204/mcj.1617.

Abstract:
Nearly 50 years on from Alvin Toffler’s Future Shock (1971), contemporary society finds itself navigating the Fourth Industrial Revolution. This era has been described as the convergence of digitisation, robotics, artificial intelligence, globalisation—and speed (Johannessen). As such, temporality is taking on a turbulent and elusive edge. In the previous century, Toffler highlighted that technological change accelerated perceptions of time, and he predicted that by the 21st century, people would find it “increasingly painful to keep up with the incessant demand for change that characterises our time”, where change would come about “with waves of ever accelerating speed and unprecedented impact” (18). While Toffler could not have predicted the exact nature and detail of the specificities of day-to-day life in 2019, we suggest Toffler’s characterisation marks an insightful ‘jumping off’ point for further introspection. With Toffler’s concerns in mind, this issue of M/C Journal is interested in multiple ways that digital media influences and expresses conceptions of temporality in this historical period, the final weeks of 2019. On the basis of the pieces that comprise this issue, we take this concern further to politicise the temporal figurations of media, which we propose permeate all aspects of contemporary experience. Theoretically, this position pays homage to the work performed by Jay Bolter and Richard Grusin more than two decades ago. In 1996, Bolter and Grusin ruminated on “the wire”, a fictional device that was the central focus of the film Strange Days (1995), a media gadget that could mediate experience from one subject to another, “pure and uncut, straight from the cerebral cortex” (311). For Bolter and Grusin, ‘the wire’ epitomised contemporary culture’s movement toward virtual reality, “with its goal of unmediated visual and aural experience”, and they suggested that the film provided a critique of the historical mode “in which digital technologies are proliferating faster than our cultural, legal, or educational institutions can keep up with them” (313). For us, perhaps even more urgently, the wire epitomises the colonisation, infiltration and permeation of the production of temporal layers through media systems and devices into the subject’s direct experience. The wire symbolises, among many things, a simulation of the terrain of time according to the Jorge Luis Borges fable, that is, one-for-one. Contingent upon new shifts, and the academic literature which has sought to critique them thus far, in this editorial, we raise the contention that the technologies and operations of power brought about through the Fourth Industrial Revolution, and its media apparatus, have exposed the subject to a multiplicity of timescapes. In doing so, these configurations have finally colonised subjective experience of time and temporality. Consequently, we have specifically featured a broad selection of articles that explore and discuss the presence of online, mobile, or streamed media as the primary means through which culture understands, expresses, and communicates the world, and ideas around temporality. The articles featured herein explore the ways in which constructs of time organise (and are organised by) other constructs such as: neoliberalism (Bianchino), relaxation (Pont), clocks (Campbell), surveillance, biopower, narrative (Glitsos), monetisation (Grandinetti), memorialising (Wishart), time travel (Michael), utopias and dystopias (Herb).
Through the spectrum of topics, we hope to elucidate to the reader the ways in which digital culture performs and generates ontological shifts that rewrite the relationship between media, time, and experience.
Contemporaneity
A key concern for us in this issue is the idea of ‘contemporaneity,’ which has been discussed more recently in art theory and criticism by Terry Smith and Peter Osborne, amongst others. Both Smith and Osborne use the term to articulate the effects of contemporary globalisation, transnationalism, and post-conceptual art. Smith reminds us that in contemporary society there is
the insistent presentness of multiple, often incompatible temporalities accompanied by the failure of all candidates that seek to provide the overriding temporal framework – be it modern, historical, spiritual, evolutionary, geological, scientific, globalizing, planetary. (196)
As a result, artists are negotiating and critiquing multiple intersecting and contradictory time codes that pervade contemporary society in order to grapple with contemporaneity today. Yet, concerns with overlaid temporalities enter our everyday more and more, as explored through Justin Grandinetti’s piece, “A Question of Time: HQ Trivia and Mobile Streaming Temporality”, in which he interrogates mobile streaming practices and the ways in which new devices seek out every possible moment that might be monetised and ‘made productive.’ Grandinetti’s concern, like the others featured in this issue, attends to the notion of time as evasive, contradictory and antonymous while forming a sense of urgency around the changing present, and also reconciling a multiplicity of time codes at play through technology today. The present is immediately written and archived through news media live feeds, GPS tracking and bio data in apps used for fitness and entertainment amongst others, while the pace of national television, print media, and local radio is folded through our daily experiences. Consequently, we’re interested in the multiple, and sometimes incompatible, temporalities that emerge through the varied ways in which digital media is used to express, explore, and communicate in the world today beyond the arenas of contemporary art and art history that Smith and Osborne are primarily concerned with.
Experience
Experience is key. Experience may in fact be the key that unlocks these following conversations about time and the subject; after all, time is nothing if not experiential. Empirically, we might claim that time is “conceived as the intervals during which events occur” (Toffler 21). However, of course one can only be if one is being in time. Through Bergson we might say that the individual’s perception of time manifests “rightly or wrongly, to be inside and outside us at one and the same time … . To each moment of our inner life there thus corresponds a moment of our body and of all environing matter that is ‘simultaneous’ with it” (205). Time is the platform through which experience of consciousness is mediated; thus the varying manipulations of time through media apparatuses are inextricable from our lived ‘everyday’. E.P. Thompson might call this our “time-sense”, a kind of “inward notation of time” (58); however, this rationalisation of time is amplified and complicated by digital media, as warned by Campbell in this issue. Campbell explores the performativity of publicly writing the self on social media that commodifies experience.
An inward notion of time therefore becomes inverted and publicly performed through digital media, which is a key source of anxiety and control for individuals. In Toffler’s estimation, even by as early as the 1970s the technoscience of Western culture had “released a totally new social force” and he contends that this had reshaped the collective psyche with
a stream of change so accelerated that it influences our sense of time, revolutionizes the tempo of daily life, and affects the very way we “feel” the world around us. We no longer “feel” life as men [sic] did in the past. And this is the ultimate difference, the distinction that separates the truly contemporary man [sic] from all others. (17)
While Toffler was referring to a different technological context, he serves as a reminder that digital media amplifies pre-existing effects of technology. Therefore, while autofiction and the public writing of the self are not necessarily new, they are nevertheless key to contemporary feelings of acceleration and the temporal vernacular of contemporaneity – one that exacerbates the experiences of acceleration, inertia, and how we ‘feel’ the present and our presence in the world. In this issue we also wish to note the ways in which digital culture, and perhaps in particular new media platforms and narratives that permeate our homes, appear to be directing the Western “time-sense” (Thompson 80) away from metaphors constructed through the linear trope of ‘rivers’ or ‘streams’ and toward the more complex arrangements that we suggest are more suited to metaphors of ‘confetti’ or ‘snow’, as Laura Glitsos elucidates in her piece “From Rivers to Confetti: Reconfigurations of Time through New Media Landscapes”. As just one example, we might think of the multiplicity of ‘peculiar times’ built upon each other in the production, distribution, consumption and convergence of so many levels of digital media. In one sense, we might approach ‘peculiar times’ as the peculiarity of temporality in any given context. However, in another sense, we might also recognise the layering of standardisation which is then peculiar to each of the modes of production, consumption, and distribution (as laid out by Althusser and Balibar). As just one example, in the context of streaming services, we find the “flattening of historical frames” (Kaplan 144) in the scrolling back and forward on social media timelines (Powell 2). So perhaps our peculiar time speaks of the collapsing between ontological boundaries of past, present, and future—a kind of contemporaneity that splits between the peculiarities of production and consumption of digital media.
Standardisation
Historiographies of time-sense in the Western tradition have been covered by thinkers as diverse as E.P. Thompson, Graeme Davidson, Bernard Stiegler, and Henri Lefebvre. While it is not our aim to repeat those narratives here, we concede some markers are crucial to note in order to set the context for our selected pieces. Beginning in the early to mid Middle Ages in Europe, up until the spread of clocks in the 14th century, time was largely related to processes, tasks or stages of light during the day, and time still continues to exist in this way for some communities (Thompson 58).
During this era, and even as far back as the third century BCE, there were time-keeping technologies which could measure smaller increments of the day, such as the water-clock, the sun-dial, and the hour-glass, but everyday activities for the working people were largely regulated by natural or circadian rhythms (Thompson). It is perhaps these rhythms which served to shape the ‘inward notation of time’, in Thompson’s words, through the discourses of nature, that is, through the language of streams and rivers—or ‘flows’. The 13th century saw the advent of mechanical time-keeping technology utilising what is called a “verge escapement mechanism”, that is, a “feedback regulator that controls the speed of a mechanical clock” (Headrick 42). About a century later, coupled with the emergence of puritanism, Thompson tells us that we start to see a shift in the construction of time which more and more depends on the synchronisation of labour (Thompson 70). Even so, working rhythms remain fairly irregular, still more suited to what Thompson describes as “a natural human rhythm” (71). This changes suddenly in the 19th century when, with the explosion of the Industrial Age, we witness the dominance of factory-time and, of course, the adoption and standardisation of railway-time across Britain, Europe, India and North America (Schivelbusch). The trend toward standardisation continues into the mid-20th century with what George Ritzer has famously called “McDonaldization” (2008). Thus, through the blanketing nature of 20th century “industrial capitalism” (Thompson 80), everyday experience became predicated on standardisation. Thompson tells us that these “changes in manufacturing technique … demand greater synchronization of labour and a greater exactitude in time-routines in society” (80). For Thompson, the “technological conditioning” of “time-sense” ushers in the model of “time-measurement as a means of labour exploitation” (80). This historical point is central to Giacomo Bianchino’s argument in “Afterwork and Overtime: The Social Reproduction of Human Capital”, in his discussion of the fundamental nature of capitalism in shaping time-sense. However, what we suggest is that this theme of ‘time-sense’ as shaped by the broader political economy of media is found within each of the pieces in the issue. A discussion of standardisation is problematic, however, in the wider conceptualisation of time as elusive, multi-dynamic and fractured. Surely, standardisation should at least come with the ability of certainty, in some respects. However, this is the paradox of the digital and new media age: that standardisation is both arbitrary and, in echo of Balibar and Althusser, ‘peculiar’ to an endless layering of separate time-streams. It is, perhaps, the jumping between them, which has become a necessary function of living in the digital age, that produces the sense of fracture, the loss of standard. This issue of M/C Journal explores the various ways in which the constellation of current media practices that are online, offline, embodied, and networked, collectively inform and express concepts of time. The feature article "With This Body, I Subtract Myself from Neoliberalised Time: Sub-Habituality & Relaxation after Deleuze", written by Antonia Pont, keenly asks how relaxation might be used to evade neoliberal machinations around organising time, efficiency, and productivity, all of which endanger a diversity of temporalities.
While all media have their own unique limitations and affordances regarding influencing and expressing relationships to time, they are also impacted by current perceptions of uncertainty and neoliberal agendas that underlie the working relationships between people, the media that they engage in, and representations of the world. The feelings of inertia expressed by Toffler nearly 50 years ago have not only been accelerated through technological expansion, but also by a layering of multiple time codes which reflect the wide range of media practices that permeate the contemporary vernacular. In 2019, concepts from the current post-Internet stage are beginning to emerge and we are finding that digital media fragments as much as it connects and unites. An ‘inward notion of time’ becomes brokered through automated processes, issues around surveillance, affect, standardisation, norms, nostalgia, and the minutiae of digital time. References Althusser, Louis, and Etienne Balibar. Reading Capital. London: NBL, 1970. Ansell-Pearson, Keith, John Ó Maoilearca, and Melissa McMahon. Henri Bergson: Key Writings. New York: Continuum, 2002. Bolter, Jay, and Richard Grusin. “Remediation.” Configurations 4.3 (1996): 311-358. Davison, Graeme. The Unforgiving Minute: How Australia Learned to Tell the Time. Melbourne: Oxford UP, 1993. Headrick, M.V. “Origin and Evolution of the Anchor Clock Escapement.” IEEE Control Systems 22.2 (2002): 41-52. Johannessen, Jon-Arild. Automation, Innovation and Economic Crisis: Surviving the Fourth Industrial Revolution. Milton: Routledge, 2018. Kaplan, E. Ann. Rocking around the Clock: Music Television, Postmodernism, and Consumer Culture. New York: Methuen, 1987. Powell, Helen. Stop the Clocks! Time and Narrative in Cinema. London: I.B. Tauris, 2012. Ritzer, George. The McDonaldization of Society. Los Angeles: Pine Forge P, 2008. Schivelbusch, Wolfgang. The Railway Journey: The Industrialization of Time and Space in the Nineteenth Century. Oakland: U of California P, 2014. Smith, Terry. What Is Contemporary Art? Chicago: U of Chicago P, 2009. Thompson, E.P. “Time, Work-Discipline, and Industrial Capitalism.” Past and Present 38.1 (1967): 56-97. Toffler, Alvin. Future Shock. London: Bodley Head, 1970.
APA, Harvard, Vancouver, ISO, and other styles
46

Bruns, Axel. "The End of 'Bandwidth'." M/C Journal 2, no. 8 (December 1, 1999). http://dx.doi.org/10.5204/mcj.1807.

Full text
Abstract:
It used to be so simple. If you turn on your TV or radio, your choices are limited: in Australia, there is a maximum of five or six free-to-air TV channels, depending on where you're located, and with a few minor exceptions, the programming is relatively uniform; you know what to expect, and when to expect it. To a slightly lesser degree, the same goes for radio: you might have a greater choice of stations, but you'll get an even smaller slice of the theoretically possible range of programming -- from Triple J to B105, there's mainstream, easy listening, format radio fodder, targeted at slightly different audience demographics, but hardly ever anything but comfortably agreeable to them. Only late at night or in some rare timeslots especially set aside for it, you might find something unusual, something innovative, or simply something unexpected. And of course that's so. How could it possibly be any other way? Of course radio and TV stations must appeal to the most widely shared tastes, must ensure that they satisfy the largest part of their audience with any given programme on any given day -- in short, must find the lowest common denominator which unifies their audience. That the term 'low' in this description has come to be linked to a negative meaning is -- at first -- only an accident of language: after all, mathematically this denominator constitutes in many ways the most fundamental of shared values between a series of fractions, and metaphorically, too, this commonality is certainly of fundamental importance to community culture. The need for radio and TV stations to appeal to such shared values of the many is twofold: where they are commercially run operations, it is simply sound business practice to look for the largest (and hence, most lucrative) audience available. In addition to this, however, the use of a public and limited resource -- the airwaves -- for the transmission of their programmes also creates significant obligations: since the people, represented by their governmental institutions, have licensed stations to use 'their' airwaves for transmission, of course stations are also obliged to repay this entrustment by satisfying the needs and wants of the greatest number of people, and as consistently as possible. All of this is summed up neatly with the word 'bandwidth'. Referring to frequency wavebands, bandwidth is a precious commodity: there is only a limited range of frequencies which can possibly be used to transmit broadcast-quality radio and TV, and each channel requires a significant share of that range -- which is why we can only have a limited number of stations, and hence, a limited range of programming transmitted through them. Getting away from frequency bands, the term can also be applied in other areas of transmission and publication: even services like cable TV frequently have their form of bandwidth (where cable TV systems have only been designed to take a set number of channels), and even commercial print publishing can be said to have its bandwidth, as only a limited number of publishers are likely to be able to exist commercially in a given market, and only a limited number of books and magazines can be distributed and sold through the usual channels each year. There are in each of these cases, then, physical limitations of one form or another. The last few years have seen this conception of bandwidth come under increased attack, however, and all those apparently obvious assumptions about our media environment must be reconsidered as a result. 
Ever since the rise of photocopiers and personal printers, after all, people have been able to create small-scale print publications without the need to apply for a share of the commercial publishers' 'bandwidth' -- witness the emergence of zines and newsletters for specific interest groups. The means of creation and distribution for these publications were and are not publicly or commercially controlled in any restrictive way, and so the old arguments for a 'responsible' use of bandwidth didn't hold any more -- thus the widespread disregard in these publications for any overarching commonly held ideas which need to be addressed: as soon as someone reads them, their production is justified. Publishing on the Internet drives the nail even further -- here, the notion of bandwidth comes to an end entirely, in two distinct ways. First, in a non-physical medium, the argument of the physical scarcity of the publication medium doesn't hold anymore -- space for publication in newsgroups and on Web pages, being digital, electronic, 'virtual', is infinitely expandable, much unlike frequency bands with their highly fixed and policed upper and lower boundaries. New 'stations' being added don't interfere with existing ones here, and so there's no need to limit the amount of individual channels available on the Net; hence the multitude of newsgroups and Websites available. Again, whatever can establish an audience (even just of a few readers) is justified in its existence. Secondly, available transmission bandwidth is also highly divisible along a temporal line, due to the packet-switching technology on which the medium is based: along the connections within the network, information that is transmitted is chopped up into small packets of data which are recombined at the receiver's end; this means that individual transmissions along the same connection can coexist without interfering with one another, if at a somewhat reduced speed (as anyone navigating the Web while downloading files has no doubt experienced). Again, this is quite different from the airwaves experience, where two radio stations or TV channels can't be broadcasting on the same frequency without drowning each other out. And even the reduction of transmission speed is likely to be only a temporary phenomenon, as network hardware is constantly being upgraded to higher speeds. Internet bandwidth, then, is infinite, in both the publication and the transmission sense of the word. If it's impossible to reach the end of available bandwidth on the Net, then, this means nothing less than that the very concept of 'bandwidth' on the Net ends: that is, it ceases to have any practical relevance -- as Costigan notes, reflecting on an all too familiar metaphor, "the Internet is in many ways the Wild West, the new frontier of our times, but its limits will not be reached. ... The Internet does not have an edge to push past, no wall or ocean to contain it. Its size and shape change constantly, and additions and subtractions do not inherently make something new or different" (xiii). But that this is so, that we have come to this end of 'bandwidth' by never being able to come to an end of bandwidth on the Net, is in itself something fundamentally new and different in media history -- and also something difficult to come to terms with. All those of courses, all those apparently obvious and natural practices of the mainstream media have left us ill prepared for a medium where they are anything but natural, and even counterproductive. 
Old habits are hard to break, as many of the apparently well-founded criticisms of the Internet show. Let's take Stephen Talbott as an example here: in one of my favourite passages of overzealous Net criticism, he writes of The paradox of intelligence and pathology. The Net: an instrument of rationalisation erected upon an inconceivably complex foundation of computerised logic -- an inexhaustible fount of lucid 'emergent order.' Or, the Net: madhouse, bizarre Underground, scene of flame wars and psychopathological acting out, universal red-light district. ... The Net: a nearly infinite repository of human experience converted into objective data and information -- a universal database supporting all future advances in knowledge and economic productivity. Or, the Net: perfected gossip mill; means for spreading rumours with lightning rapidity; ... ocean of dubious information. (348-9) Ignoring here the fundamental problem of Talbott's implicit claim that there are objective parameters according to which he can reliably judge whether or not any piece of online content is 'objective data' or 'dubious information' (and: for whom?), and thus his unnecessary construction of a paradox, a binary (no pun intended) division into 'good' and 'bad' uses, a second and immediately related problem is that Talbott seems to claim that the two sides of this 'paradox' are somehow able to interfere with each other, to the point of invalidating one another. This can easily be seen as a result of continuing to think in terms of bandwidth in the broadcast sense: there, the limited number of channels, and the limited amount of transmission space and time for each channel, have indeed meant that stations must carefully choose what material to broadcast, and that the results are frequently of a mainstream, middle-of-the-road, non-challenging nature. On the Net, this doesn't hold, however: here, the medium can be used for everything from the Human Genome Project to peddling sleaze and pirated 'warez', without the two ends of this continuum of uses ever affecting one another. That's not to say that what goes on in some parts of the Net isn't unsavoury, offensive, illegal, or even severely in violation of basic human rights; and where this is so, the appropriate measures, already provided by legal systems around the world, should be taken to get rid of the worst offenders -- notably, though, this won't be possible through cutting off their access to bandwidth: where bandwidth is unlimited and freely available to anyone, this cannot possibly work. Critical approaches like Talbott's, founded as they are on an outdated understanding of media processes and the false assumption of a homogeneous culture, won't help us in this, therefore: rather, faced with the limitless nature of online bandwidth, we must learn to understand the infinite, and live with it. The question isn't how many 'negative' uses of the Net we can point to -- there will always be an abundance of them. The question is what any one of us, whoever 'we' are, can do to use the Net positively and productively -- whatever we as individuals might consider those positive and productive uses to be. References Costigan, James T. "Introduction: Forests, Trees, and Internet Research." Doing Internet Research: Critical Issues and Methods for Examining the Net. Ed. Steve Jones. Thousand Oaks, Calif.: Sage, 1999. Talbott, Stephen L. The Future Does Not Compute: Transcending the Machines in Our Midst. Sebastopol, Calif.: O'Reilly & Associates, 1995. 
Citation reference for this article MLA style: Axel Bruns. "The End of 'Bandwidth': Why We Must Learn to Understand the Infinite." M/C: A Journal of Media and Culture 2.8 (1999). [your date of access] <http://www.uq.edu.au/mc/9912/bandwidth.php>. Chicago style: Axel Bruns, "The End of 'Bandwidth': Why We Must Learn to Understand the Infinite," M/C: A Journal of Media and Culture 2, no. 8 (1999), <http://www.uq.edu.au/mc/9912/bandwidth.php> ([your date of access]). APA style: Axel Bruns. (1999) The end of 'bandwidth': why we must learn to understand the infinite. M/C: A Journal of Media and Culture 2(8). <http://www.uq.edu.au/mc/9912/bandwidth.php> ([your date of access]).
APA, Harvard, Vancouver, ISO, and other styles
47

Lee, C. Jason. "I Love To Hate You/All You Need Is Hate." M/C Journal 5, no. 6 (November 1, 2002). http://dx.doi.org/10.5204/mcj.2011.

Full text
Abstract:
Neil Tennant of The Pet Shop Boys crooned the song and memorable line ‘I love to hate you’. Today this refrain has become a global phenomenon within public rhetoric. Many thinkers, most famously Freud, have argued that war is innate to human nature, warfare being a projection of internal battles onto the external world. Etymologically war relates to ‘confusion’ and ‘strife’, two words intimately connected with a certain form of love-madness. As with love, war is ‘play’ where only the noblest survive (Pick 70). While traditionally God is love in most main religions, J.F.C. Fuller maintains ‘war is a God-appointed instrument to teach wisdom to the foolish and righteousness to the evil-minded’ (Pick 109). For Mussolini, ‘war is to man what maternity is to the woman’ (Bollas 205). In the Christian tradition the pains of childbirth are the punishment for the original rejecting of divine love, that is God, for a love of the carnal and a lust for knowledge, just as the toil of work is the punishment for man. Chivalry equated war and love; ‘love is war’ and ‘the gift of her body to man by the woman is a reward for valour … love and war form an endless dialectic; Venus and Mars in eternal symbolic (not actual) copulation in the interests of nation building’ (Bush 158). In the twentieth century the symbolic becomes literal. Mussolini maintained that war must be embraced as a goal for humankind, just as fervently as intercourse must be embraced for procreation. ‘Man’ must metaphorically fuck man to the death and fuck women literally for more war fodder. Love of food is analogous to love of war, one involving masticating and excreting, the other doing the same literally or metaphorically, depending on the type of war. One First World War soldier remarked how it is very close to a picnic but far better because it has a purpose; it is the most glorious experience available (Storr 15). To William James, war defines the essence of humanity and human potential (Pick 140), often the exact description given by others for love. The very fact that men sacrifice their lives for others supposedly raises humans above animals, but this warlike attribute is akin to divine love, as in Christ’s sacrifice. War is mystical in its nature, as many believe madness-love to be, and is an end in itself, not a tool. The jingoism of war brings out the most extreme form of comments, as in the following example from the Southern literary critic William Gilmore Simms on the US-Mexican War. ‘War is the greatest element of modern civilization, and our destiny is conquest. Indeed the moment a nation ceases to extend its sway it falls a prey to an inferior but more energetic neighbour’ (Bush 154). The current US president’s rhetoric is identical. What is clear is that the debates surrounding war in the nineteenth century take on a similar tone to those on love-religion. This could be seen as inevitable given the emphasis of both in certain circumstances on sacrifice. Like love, war is seen as the healthiest of pursuits and the most ‘sane’ of activities. Without it only ‘madness’ can result, the irony being, as with love, that war often causes insanity. Contemporary psychotherapists use examples from world history to indicate how the same drives within the individual may manifest in society. The ‘butterfly principle’ is an example of this, where apparently trivial events can trigger enormous consequences (Wieland-Burston 91). 
Just as war may appease demands of the id for action and the pressure of the super ego for conformity, so love may satisfy these needs. Mad love can be viewed as a process where the conflict between these two forces is not reconciled via the ego and thus ‘insanity’ results. Daniel Pick discusses Hegel’s theories regarding the benefits of death in terms of the state. ‘The death of each nation is shown to contribute to the life of another greater one: “It then serves as material for a higher principle”’ (28). For Hegel, ‘man is the highest manifestation of the absolute’, so these actions which lead man as a group to ‘a higher principle’ must be God-driven, God in a Christian context being defined as love (xviii). War is divinely inspired; it is love. ‘Scatter the nations who delight in war’ (NIV 1986 593), but it is inevitable, part of an internal process, and will continue till the end of time (2 Corinthians 10:3; Romans 7:23; Daniel 9:26). Of course there are many types of love and many types of war, current technology making the horrors of war more prolific but less real, more virtual. However, satisfaction from this form of warfare or virtual love may be tenuous, paradoxically making both more fertile. Desire is the desire of the Other, just as in war it is the fear of the Other, the belief that they desire your destruction, that leads you to war. With reference to Lacan, Terry Eagleton comes up with the following: ‘To say ‘I love you’ thus becomes equivalent to saying ‘it’s you who can’t satisfy me! How privileged and unique I must be, to remind you that it isn’t me you want…’ (Eagleton 279). We give each other our desire, not satisfaction, so there can be no love or war without desire, which is law-like and anonymous, and outside of individual wishes. George W. Bush’s speech at the Department of Defence Service of Remembrance, The Pentagon, Arlington, Virginia on 11 October 2001 in many ways denied al-Qaida’s responsibility for the September 11th atrocities. The speech mentions that it is enough to know that evil, like good, exists. In true Biblical language, ours is not to reason why and in the terrorists evil has found a willing servant. For Nietzsche the Last Judgement is the sweet consolation of revenge for the lower orders, just as for those who believed they had suffered due to US imperialism, there was something sweet about September 11. Nietzsche as Zarathustra writes ‘God has his Hell; it is love for man (my italics) … God is dead; God has died of his pity for man’ (Nietzsche 114). Nietzsche writes that Zarathustra has grown weary of retribution, punishment, righteous revenge and that this is slavery; he wills that ‘man may be freed from the bondage of revenge’ (123). Importantly, both Bush and bin Laden, while declaring the power of their beliefs, concurrently set themselves and their followers up as victims, the unloved. Nietzsche reveals the essence of public rhetoric by declaring that the central lie is to maintain that it is part of the public’s voice. ‘The state is the coldest of all cold monsters. Coldly it lies too; and this lie creeps from its mouth: ‘I, the state, am the people’’ (76). In the Memorial speech quoted above Bush maintains that, unlike ‘our’ enemies, ‘we’ value every life, and ‘we’ mourn every loss. Again, from the Pentagon speech: ‘Theirs is the worst kind of violence, pure malice, while daring to claim the authority of God’. When we kill, so the argument goes, it is out of love; when they kill it is out of malice, hate. 
There is something infantile about George W. Bush. For Nietzsche every step away from instinct is regression. To suggest that George W. Bush is aping Nietzsche’s superman may appear preposterous, but his anti-intellectual slant is the essence of Nietzsche’s thought: actions speak louder than words; America is not about Being, but Becoming. ‘More than anything on earth he enjoys tragedies, bullfights, and crucifixions; and when he invented Hell for himself, behold, it was his heaven on earth’ (Nietzsche 235). Why were the images of the Twin Towers’ attack shown repeatedly? Do people love the challenge of adversity, or revel in the idea of hell and destruction, loving damnation? Nietzsche himself is not innocent. Despite his feigning to celebrate life, man must be overcome; man is a means to an end, just as the bombing of Afghanistan (or Iraq) and the Twin Towers for rival ideologies is a means to an end. ‘They kill because they desire to dominate’; ‘few countries meet their exacting standards of brutality and oppression’. Either Bush or bin Laden may have made these comments, but they are from the former, George W Bush’s, speech to the UN General Assembly in New York City, 10 November 2001. Bush goes on, maintaining: ‘History will record our response and judge or justify every nation in this hall’. God is not the judge here, but history itself, a form of Hegelian world spirit. Then the Nietzschean-style rhetoric becomes more overt: ‘We choose the dignity of life over a culture of death’. And following this, Nietzsche’s comments about the state are once again pertinent, given the illegitimacy of Bush’s government. ‘We choose lawful change and civil disagreement over coercion, subversion and chaos’. The praise, that is, the love heaped on Bush for his rhetoric is telling for ‘when words are called holy - all the truth dies’ (Nietzsche 253). The hangover of the Old Testament revenge judge God swamps those drunk on the lust of hatred and revenge. This is clearly the love of war, of hatred. Any God worth existing needs to be temporal, extemporal and ‘atemporal’, yet ultimately ‘Being itself – and not only beings that are “in time” – is made visible in its “temporal” character’ (Heidegger 62). While I am not therefore insisting on a temporal God of love, a God of judgement, of the moment, makes a post-apocalyptical god unnecessary and transcendent love itself unthinkable. Works Cited Bollas, Christopher. Being a Character. Psychoanalysis and Self Experience. London: Routledge, 1993. Bush, Clive. The Dream of Reason. London: Edward Arnold, 1977. Duncombe, Stephen. Notes From Underground. Zines and the Politics of Alternative Culture. London: Verso, 1997. Eagleton, Terry. The Ideology of the Aesthetic. London: Blackwells, 1996. Freud, Sigmund. New Introductory Lectures on Psychoanalysis. Trans. James Strachey. London: Penguin, 1986. Hegel, G. Introductory Lectures on Aesthetics. Trans. Bernard Bosanquet. London: Penguin, 1993. Heidegger, Martin. Being and Time in Basic Writings. Ed. David Farrell Krell. London: Routledge, 1978. Pick, Daniel. War Machine, The Rationalisation of Slaughter in the Modern Age. London: Yale University Press, 1993. Nietzsche, Friedrich. Thus Spoke Zarathustra. Trans. R.J. Hollingdale. New York: Penguin, 1969. Storr, Anthony. Human Destructiveness. The Roots of Genocide and Human Cruelty. London: Routledge, 1991. Wieland-Burston, Joanne. Chaos and Order in the World of the Psyche. London: Routledge, 1991. The Holy Bible, New International Version. 
London: Hodder and Stoughton, 1986. <http://www.september11news.com> Links http://www.september11news.com Citation reference for this article Substitute your date of access for Dn Month Year etc... MLA Style Lee, C. Jason. "I Love To Hate You/All You Need Is Hate" M/C: A Journal of Media and Culture 5.6 (2002). Dn Month Year <http://www.media-culture.org.au/0211/ilovetohateyou.php>. APA Style Lee, C. J., (2002, Nov 20). I Love To Hate You/All You Need Is Hate. M/C: A Journal of Media and Culture, 5(6). Retrieved Month Dn, Year, from http://www.media-culture.org.au/0211/ilovetohateyou.html
APA, Harvard, Vancouver, ISO, and other styles
48

Leaver, Tama. "The Social Media Contradiction: Data Mining and Digital Death." M/C Journal 16, no. 2 (March 8, 2013). http://dx.doi.org/10.5204/mcj.625.

Full text
Abstract:
Introduction Many social media tools and services are free to use. This fact often leads users to the mistaken presumption that the associated data generated whilst utilising these tools and services is without value. Users often focus on the social and presumed ephemeral nature of communication – imagining something that happens but then has no further record or value, akin to a telephone call – while corporations behind these tools tend to focus on the media side, the lasting value of these traces which can be combined, mined and analysed for new insight and revenue generation. This paper seeks to explore this social media contradiction in two ways. Firstly, a cursory examination of Google and Facebook will demonstrate how data mining and analysis are core practices for these corporate giants, central to their functioning, development and expansion. Yet the public rhetoric of these companies is not about the exchange of personal information for services, but rather the more utopian notions of organising the world’s information, or bringing everyone together through sharing. The second section of this paper examines some of the core ramifications of death in terms of social media, asking what happens when a user suddenly exists only as recorded media fragments, at least in digital terms. Death, at first glance, renders users (or post-users) without agency or, implicitly, value to companies which data-mine ongoing social practices. Yet the emergence of digital legacy management highlights the value of the data generated using social media, a value which persists even after death. The question of a digital estate thus illustrates the cumulative value of social media as media, even on an individual level. The ways Facebook and Google approach digital death are examined, demonstrating policies which enshrine the agency and rights of living users, but become far less coherent posthumously. Finally, along with digital legacy management, I will examine the potential for posthumous digital legacies which may, in some macabre ways, actually reanimate some aspects of a deceased user’s presence, such as the Lives On service which touts the slogan “when your heart stops beating, you'll keep tweeting”. Cumulatively, mapping digital legacy management by large online corporations, and the affordances of more focussed services dealing with digital death, illustrates the value of data generated by social media users, and the continued importance of the data even beyond the grave. Google While Google is universally synonymous with search, and is the world’s dominant search engine, it is less widely understood that one of the core elements keeping Google’s search results relevant is a complex operation mining user data. Different tools in Google’s array of services mine data in different ways (Zimmer, “Gaze”). Gmail, for example, uses algorithms to analyse an individual’s email in order to display the most relevant related advertising. This form of data mining is comparatively well known, with most Gmail users knowingly and willingly accepting more personalised advertising in order to use Google’s email service. However, the majority of people using Google’s search engine are unaware that search, too, is increasingly driven by the tracking, analysis and refining of results on the basis of user activity (Zimmer, “Externalities”). 
As Alexander Halavais (160–180) quite rightly argues, recent focus on the idea of social search – the deeper integration of social network information in gauging search results – is oxymoronic; all search, at least for Google, is driven by deep analysis of personal and aggregated social data. Indeed, the success of Google’s mining of user data has led to concerns that often invisible processes of customisation and personalisation will mean that the supposedly independent or objective algorithms producing Google’s search results will actually yield a different result for every person. As Siva Vaidhyanathan laments: “as users in a diverse array of countries train Google’s algorithms to respond to specialized queries with localised results, each place in the world will have a different list of what is important, true, or ‘relevant’ in response to any query” (138). Personalisation and customisation are not inherently problematic, and frequently do enhance the relevance of search results, but the main objection raised by critics is not Google’s data mining, but the lack of transparency in the way data are recorded, stored and utilised. Eli Pariser, for example, laments the development of a ubiquitous “filter bubble” wherein all search results are personalised and subjective but are hidden behind the rhetoric of computer-driven algorithmic objectivity (Pariser). While data mining informs and drives many of Google’s tools and services, the cumulative value of these captured fragments of information is best demonstrated by the new service Google Now. Google Now is a mobile app which delivers an ongoing stream of search results but without the need for user input. Google Now extrapolates the rhythms of a person’s life, their interests and their routines in order to algorithmically determine what information will be needed next, and automatically displays it on a user’s mobile device. Clearly Google Now is an extremely valuable and clever tool, and the more information a user shares, the better the ongoing customised results will be, demonstrating the direct exchange value of personal data: total personalisation requires total transparency. Each individual user will need to judge whether they wish to share with Google the considerable amount of personal information needed to make Google Now work. The pressing ethical question that remains is whether Google will ensure that users are sufficiently aware of the amount of data and personal privacy they are exchanging in order to utilise such a service. Facebook Facebook began as a closed network, open only to students at American universities, but has transformed over time to a much wider and more open network, with over a billion registered users. Facebook has continually reinvented their interface, protocols and design, often altering both privacy policies and users’ experience of privacy, and often meeting significant and vocal resistance in the process (boyd). The data mining performed by social networking service Facebook is also extensive, although primarily aimed at refining the way that targeted advertising appears on the platform. In 2007 Facebook partnered with various retail loyalty services and combined these records with Facebook’s user data. This information was used to power Facebook’s Beacon service, which added details of users’ retail history to their Facebook news feed (for example, “Tama just purchased a HTC One”). 
The impact of all of these seemingly unrelated purchases turning up in many people’s feeds suddenly revealed the complex surveillance, data mining and sharing of these data that was taking place (Doyle and Fraser). However, as Beacon was turned on, without consultation, for all Facebook users, there was a sizable backlash that meant that Facebook had to initially switch the service to opt-in, and then discontinue it altogether. While Beacon has been long since erased, it is notable that in early 2013 Facebook announced that they have strengthened partnerships with data mining and profiling companies, including Datalogix, Epsilon, Acxiom, and BlueKai, which harness customer information from a range of loyalty cards, to further refine the targeting ability offered to advertisers using Facebook (Hof). Facebook’s data mining, surveillance and integration across companies is thus still going on, but no longer directly visible to Facebook users, except in terms of the targeted advertisements which appear on the service. Facebook is also a platform, providing a scaffolding and gateway to many other tools and services. In order to use social games such as Zynga’s Farmville, Facebook users agree to allow Zynga to access their profile information, and use Facebook to authenticate their identity. Zynga has been unashamedly at the forefront of user analytics and data mining, attempting to algorithmically determine the best way to make virtual goods within their games attractive enough for users to pay for them with real money. Indeed, during a conference presentation, Zynga Vice President Ken Rudin stated outright that Zynga is “an analytics company masquerading as a games company” (Rudin). I would contend that this masquerade succeeds, as few Farmville players are likely to consider how their every choice and activity is being algorithmically scrutinised in order to determine what virtual goods they might actually buy. As an instance of what is widely being called ‘big data’, the data mining operations of Facebook, Zynga and similar services lead to a range of ethical questions (boyd and Crawford). While users may have ostensibly agreed to this data mining after clicking on Facebook’s Terms of Use agreement, the fact that almost no one reads these agreements when signing up for a service is the Internet’s worst kept secret. Similarly, the extension of these terms when Facebook operates as a platform for other applications is a far from transparent process. While examining the recording of user data leads to questions of privacy and surveillance, it is important to note that many users are often aware of the exchange to which they have agreed. Anders Albrechtslund deploys the term ‘social surveillance’ to usefully emphasise the knowing, playful and at times subversive approach some users take to the surveillance and data mining practices of online service providers. Similarly, E.J. Westlake notes that performances of self online are often not only knowing but deliberately false or misleading with the aim of exploiting the ways online activities are tracked. However, even users well aware of Facebook’s data mining on the site itself may be less informed about the social networking company’s mining of offsite activity. The introduction of ‘like’ buttons on many other Websites extends Facebook’s reach considerably. 
The various social plugins and ‘like’ buttons expand both active recording of user activity (where the like button is actually clicked) and passive data mining (since a cookie is installed or updated regardless of whether a button is actually pressed) (Gerlitz and Helmond). Indeed, because cookies – tiny packets of data exchanged and updated invisibly in browsers – assign each user a unique identifier, Facebook can either combine these data with an existing user’s profile or create profiles about non-users. If that person even joins Facebook, their account is connected with the existing, data-mined record of their Web activities (Roosendaal). As with Google, the significant issue here is not users knowingly sharing their data with Facebook, but the often complete lack of transparency in terms of the ways Facebook extracts and mines user data, both on Facebook itself and increasingly across applications using Facebook as a platform and across the Web through social plugins. Google after Death While data mining is clearly a core element in the operation of Facebook and Google, the ability to scrutinise the activities of users depends on those users being active; when someone dies, the question of the value and ownership of their digital assets becomes complicated, as does the way companies manage posthumous user information. For Google, the Gmail account of a deceased person becomes inactive; the stored email still takes up space on Google’s servers, but with no one using the account, no advertising is displayed and thus Google can earn no revenue from the account. However, the process of accessing the Gmail account of a deceased relative is an incredibly laborious one. In order to even begin the process, Google asks that someone physically mails a series of documents including a photocopy of a government-issued ID, the death certificate of the deceased person, evidence of an email the requester received from the deceased, along with other personal information. After Google have received and verified this information, they state that they might proceed to a second stage where further documents are required. Moreover, if at any stage Google decide that they cannot proceed in releasing a deceased relative’s Gmail account, they will not reveal their rationale. As their support documentation states: “because of our concerns for user privacy, if we determine that we cannot provide the Gmail content, we will not be able to share further details about the account or discuss our decision” (Google, “Accessing”). Thus, Google appears to enshrine the rights and privacy of individual users, even posthumously; the ownership or transfer of individual digital assets after death is neither a given, nor enshrined in Google’s policies. Yet, ironically, the economic value of that email to Google is likely zero, but the value of the email history of a loved one or business partner may be of substantial financial and emotional value, probably more so than when that person was alive. For those left behind, the value of email accounts as media, as a lasting record of social communication, is heightened. The question of how Google manages posthumous user data has been further complicated by the company’s March 2012 rationalisation of over seventy separate privacy policies for various tools and services they operate under the umbrella of a single privacy policy accessed using a single unified Google account. 
While this move was ostensibly to make privacy more understandable and transparent at Google, it had other impacts. For example, one of the side effects of a singular privacy policy and single Google identity is that deleting one of a recently deceased person’s services may inadvertently delete them all. Given that Google’s services include Gmail, YouTube and Picasa, this means that deleting an email account inadvertently erases all of the Google-hosted videos and photographs that individual posted during their lifetime. As Google warns, for example: “if you delete the Google Account to which your YouTube account is linked, you will delete both the Google Account AND your YouTube account, including all videos and account data” (Google, “What Happens”). A relative having gained access to a deceased person’s Gmail might sensibly delete the email account once the desired information is exported. However, it seems less likely that this executor would realise that in doing so all of the private and public videos that person had posted on YouTube would also permanently disappear. While material possessions can be carefully dispersed to specific individuals following the instructions in someone’s will, such affordances are not yet available for Google users. While it is entirely understandable that the ramification of policy changes are aimed at living users, as more and more online users pass away, the question of their digital assets becomes increasingly important. Google, for example, might allow a deceased person’s executor to elect which of their Google services should be kept online (perhaps their YouTube videos), which traces can be exported (perhaps their email), and which services can be deleted. At present, the lack of fine-grained controls over a user’s digital estate at Google makes this almost impossible. While it violates Google’s policies to transfer ownership of an account to another person, if someone does leave their passwords behind, this provides their loved ones with the best options in managing their digital legacy with Google. When someone dies and their online legacy is a collection of media fragments, the value of those media is far more apparent to the loved ones left behind rather than the companies housing those media. Facebook Memorialisation In response to users complaining that Facebook was suggesting they reconnect with deceased friends who had left Facebook profiles behind, in 2009 the company instituted an official policy of turning the Facebook profiles of departed users into memorial pages (Kelly). Technically, loved ones can choose between memorialisation and erasing an account altogether, but memorialisation is the default. This entails setting the account so that no one can log into it, and that no new friends (connections) can be made. Existing friends can access the page in line with the user’s final privacy settings, meaning that most friends will be able to post on the memorialised profile to remember that person in various ways (Facebook). Memorialised profiles (now Timelines, after Facebook’s redesign) thus become potential mourning spaces for existing connections. Since memorialised pages cannot make new connections, public memorial pages are increasingly popular on Facebook, frequently set up after a high-profile death, often involving young people, accidents or murder. 
Recent studies suggest that both of these Facebook spaces are allowing new online forms of mourning to emerge (Marwick and Ellison; Carroll and Landry; Kern, Forman, and Gil-Egui), although public pages have the downside of potentially inappropriate commentary and outright trolling (Phillips). Given Facebook has over a billion registered users, estimates already suggest that the platform houses 30 million profiles of deceased people, and this number will, of course, continue to grow (Kaleem). For Facebook, while posthumous users do not generate data themselves, the fact that they were part of a network means that their connections may interact with a memorialised account, or memorial page, and this activity, like all Facebook activities, allows the platform to display advertising and further track user interactions. However, at present Facebook’s options – to memorialise or delete accounts of deceased people – are fairly blunt. Once Facebook is aware that a user has died, no one is allowed to edit that person’s Facebook account or Timeline, so Facebook literally offers an all (memorialisation) or nothing (deletion) option. Given that Facebook is essentially a platform for performing identities, it seems a little short-sighted that executors cannot clean up or otherwise edit the final, lasting profile of a deceased Facebook user. As social networking services and social media become more ingrained in contemporary mourning practices, it may be that Facebook will allow more fine-grained control, positioning a digital executor also as a posthumous curator, making the final decision about what does and does not get kept in the memorialisation process. Since Facebook is continually mining user activity, the popularity of mourning as an activity on Facebook will likely mean that more attention is paid to the question of digital legacies. While the user themselves can no longer be social, the social practices of mourning, and the recording of a user as a media entity highlights the fact that social media can be about interactions which in significant ways include deceased users. Digital Legacy Services While the largest online corporations have fairly blunt tools for addressing digital death, there are a number of new tools and niche services emerging in this area which are attempting to offer nuanced control over digital legacies. Legacy Locker, for example, offers to store the passwords to all of a user’s online services and accounts, from Facebook to Paypal, and to store important documents and other digital material. Users designate beneficiaries who will receive this information after the account holder passes away, and this is confirmed by preselected “verifiers” who can attest to the account holder’s death. Death Switch similarly provides the ability to store and send information to users after the account holder dies, but tests whether someone is alive by sending verification emails; fail to respond to several prompts and Death Switch will determine a user has died, or is incapacitated, and executes the user’s final instructions. Perpetu goes a step further and offers the same tools as Legacy Locker but also automates existing options from social media services, allowing users to specify, for example, that their Facebook, Twitter or Gmail data should be downloaded and this archive should be sent to a designated recipient when the Perpetu user dies. 
These tools attempt to provide a more complex array of choices in terms of managing a user’s digital legacy, providing similar choices to those currently available when addressing material possessions in a formal will. At a broader level, the growing demand for these services attests to the ongoing value of online accounts and social media traces after a user’s death. Bequeathing passwords may not strictly follow the Terms of Use of the online services in question, but it is extremely hard to track or intervene when a user has the legitimate password, even if used by someone else. More to the point, this finely-grained legacy management allows far more flexibility in the utilisation and curation of digital assets posthumously. In the process of signing up for one of these services, or digital legacy management more broadly, the ongoing value and longevity of social media traces becomes more obvious to both the user planning their estate and those who ultimately have to manage it. The Social Media Afterlife The value of social media beyond the grave is also evident in the range of services which allow users to communicate in some fashion after they have passed away. Dead Social, for example, allows users to schedule posthumous social media activity, including the posting of tweets, sending of email, Facebook messages, or the release of online photos and videos. The service relies on a trusted executor confirming someone’s death, and after that releases these final messages effectively from beyond the grave. If I Die is a similar service, which also has an integrated Facebook application which ensures a user’s final message is directly displayed on their Timeline. In a bizarre promotional campaign around a service called If I Die First, the company is promising that the first user of the service to pass away will have their posthumous message delivered to a huge online audience, via popular blogs and mainstream press coverage. While this is not likely to appeal to everyone, the notion of a popular posthumous performance of self further complicates that question of what social media can mean after death. Illustrating the value of social media legacies in a quite different but equally powerful way, the Lives On service purports to algorithmically learn how a person uses Twitter while they are live, and then continue to tweet in their name after death. Internet critic Evgeny Morozov argues that Lives On is part of a Silicon Valley ideology of ‘solutionism’ which casts every facet of society as a problem in need of a digital solution (Morozov). In this instance, Lives On provides some semblance of a solution to the problem of death. While far from defeating death, the very fact that it might be possible to produce any meaningful approximation of a living person’s social media after they die is powerful testimony to the value of data mining and the importance of recognising that value. While Lives On is an experimental service in its infancy, it is worth wondering what sort of posthumous approximation might be built using the robust data profiles held by Facebook or Google. If Google Now can extrapolate what a user wants to see without any additional input, how hard would it be to retool this service to post what a user would have wanted after their death? Could there, in effect, be a Google After(life)? 
Conclusion Users of social media services have differing levels of awareness regarding the exchange they are agreeing to when signing up for services provided by Google or Facebook, and often value the social affordances without necessarily considering the ongoing media they are creating. Online corporations, by contrast, recognise and harness the informatic traces users generate through complex data mining and analysis. However, the death of a social media user provides a moment of rupture which highlights the significant value of the media traces a user leaves behind. More to the point, the value of these media becomes most evident to those left behind precisely because that individual can no longer be social. While beginning to address the issue of posthumous user data, Google and Facebook both have very blunt tools; Google might offer executors access while Facebook provides the option of locking a deceased user’s account as a memorial or removing it altogether. Neither of these responses do justice to the value that these media traces hold for the living, but emerging digital legacy management tools are increasingly providing a richer set of options for digital executors. While the differences between material and digital assets provoke an array of legal, spiritual and moral issues, digital traces nevertheless clearly hold significant and demonstrable value. For social media users, the death of someone they know is often the moment where the media side of social media – their lasting, infinitely replicable nature – becomes more important, more visible, and casts the value of the social media accounts of the living in a new light. For the larger online corporations and service providers, the inevitable increase in deceased users will likely provoke more fine-grained controls and responses to the question of digital legacies and posthumous profiles. It is likely, too, that the increase in online social practices of mourning will open new spaces and arenas for those same corporate giants to analyse and data-mine. References Albrechtslund, Anders. “Online Social Networking as Participatory Surveillance.” First Monday 13.3 (2008). 21 Apr. 2013 ‹http://firstmonday.org/article/view/2142/1949›. boyd, danah. “Facebook’s Privacy Trainwreck: Exposure, Invasion, and Social Convergence.” Convergence 14.1 (2008): 13–20. ———, and Kate Crawford. “Critical Questions for Big Data.” Information, Communication & Society 15.5 (2012): 662–679. Carroll, Brian, and Katie Landry. “Logging On and Letting Out: Using Online Social Networks to Grieve and to Mourn.” Bulletin of Science, Technology & Society 30.5 (2010): 341–349. Doyle, Warwick, and Matthew Fraser. “Facebook, Surveillance and Power.” Facebook and Philosophy: What’s on Your Mind? Ed. D.E. Wittkower. Chicago, IL: Open Court, 2010. 215–230. Facebook. “Deactivating, Deleting & Memorializing Accounts.” Facebook Help Center. 2013. 7 Mar. 2013 ‹http://www.facebook.com/help/359046244166395/›. Gerlitz, Carolin, and Anne Helmond. “The Like Economy: Social Buttons and the Data-intensive Web.” New Media & Society (2013). Google. “Accessing a Deceased Person’s Mail.” 25 Jan. 2013. 21 Apr. 2013 ‹https://support.google.com/mail/answer/14300?hl=en›. ———. “What Happens to YouTube If I Delete My Google Account or Google+?” 8 Jan. 2013. 21 Apr. 2013 ‹http://support.google.com/youtube/bin/answer.py?hl=en&answer=69961&rd=1›. Halavais, Alexander. Search Engine Society. Polity, 2008. Hof, Robert. 
“Facebook Makes It Easier to Target Ads Based on Your Shopping History.” Forbes 27 Feb. 2013. 1 Mar. 2013 ‹http://www.forbes.com/sites/roberthof/2013/02/27/facebook-makes-it-easier-to-target-ads-based-on-your-shopping-history/›. Kaleem, Jaweed. “Death on Facebook Now Common as ‘Dead Profiles’ Create Vast Virtual Cemetery.” Huffington Post. 7 Dec. 2012. 7 Mar. 2013 ‹http://www.huffingtonpost.com/2012/12/07/death-facebook-dead-profiles_n_2245397.html›. Kelly, Max. “Memories of Friends Departed Endure on Facebook.” The Facebook Blog. 27 Oct. 2009. 7 Mar. 2013 ‹http://www.facebook.com/blog/blog.php?post=163091042130›. Kern, Rebecca, Abbe E. Forman, and Gisela Gil-Egui. “R.I.P.: Remain in Perpetuity. Facebook Memorial Pages.” Telematics and Informatics 30.1 (2012): 2–10. Marwick, Alice, and Nicole B. Ellison. “‘There Isn’t Wifi in Heaven!’ Negotiating Visibility on Facebook Memorial Pages.” Journal of Broadcasting & Electronic Media 56.3 (2012): 378–400. Morozov, Evgeny. “The Perils of Perfection.” The New York Times 2 Mar. 2013. 4 Mar. 2013 ‹http://www.nytimes.com/2013/03/03/opinion/sunday/the-perils-of-perfection.html?pagewanted=all&_r=0›. Pariser, Eli. The Filter Bubble: What the Internet Is Hiding from You. London: Viking, 2011. Phillips, Whitney. “LOLing at Tragedy: Facebook Trolls, Memorial Pages and Resistance to Grief Online.” First Monday 16.12 (2011). 21 Apr. 2013 ‹http://firstmonday.org/ojs/index.php/fm/article/view/3168›. Roosendaal, Arnold. “We Are All Connected to Facebook … by Facebook!” European Data Protection: In Good Health? Ed. Serge Gutwirth et al. Dordrecht: Springer, 2012. 3–19. Rudin, Ken. “Actionable Analytics at Zynga: Leveraging Big Data to Make Online Games More Fun and Social.” San Diego, CA, 2010. Vaidhyanathan, Siva. The Googlization of Everything. 1st ed. Berkeley: University of California Press, 2011. Westlake, E.J. “Friend Me If You Facebook: Generation Y and Performative Surveillance.” TDR: The Drama Review 52.4 (2008): 21–40. Zimmer, Michael. “The Externalities of Search 2.0: The Emerging Privacy Threats When the Drive for the Perfect Search Engine Meets Web 2.0.” First Monday 13.3 (2008). 21 Apr. 2013 ‹http://firstmonday.org/ojs/index.php/fm/article/view/2136/1944›. ———. “The Gaze of the Perfect Search Engine: Google as an Infrastructure of Dataveillance.” Web Search. Eds. Amanda Spink & Michael Zimmer. Berlin: Springer, 2008. 77–99.
APA, Harvard, Vancouver, ISO, and other styles
