To see the other types of publications on this topic, follow the link: Associated Simmons Hardware Companies.

Journal articles on the topic 'Associated Simmons Hardware Companies'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 36 journal articles for your research on the topic 'Associated Simmons Hardware Companies.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Li, Zhuoxuan, and Warren Seering. "Does Open Source Hardware Have a Sustainable Business Model? An Analysis of Value Creation and Capture Mechanisms in Open Source Hardware Companies." Proceedings of the Design Society: International Conference on Engineering Design 1, no. 1 (July 2019): 2239–48. http://dx.doi.org/10.1017/dsi.2019.230.

Full text
Abstract:
Analyzing value creation and capture mechanisms of open source hardware startup companies, this paper illustrates how an open source strategy can make economic sense for hardware startups. By interviewing 37 open source hardware company leaders and 12 company community members, as well as analyzing forum data from 3 open source hardware companies, we realize that by open sourcing the design of hardware, a company can naturally establish its community, which is a key element of a company's success. Establishing a community can increase customer perceived value, decrease product development and sales costs, shorten product go-to-market time, and incubate startups with knowledge, experience and resources. These advantages can compensate for the risks associated with open source strategies and can make open source design a viable product development strategy for hardware startups.
APA, Harvard, Vancouver, ISO, and other styles
2

Legenvre, Herve, Ari-Pekka Hameri, and Pietari Kauttu. "Strategizing with Hardware Rich Open Source Ecosystems." Journal of Innovation Management 9, no. 2 (August 12, 2021): 1–20. http://dx.doi.org/10.24840/2183-0606_009.002_0003.

Full text
Abstract:
Companies are increasingly adopting open source strategies to develop and exploit complex infrastructures and platforms that combine software, hardware and standard interfaces. Such strategies require the development of a vibrant ecosystem of partners that combines the innovation capabilities of hundreds of companies from different industries. Our aim is to help decision makers assess the benefits and challenges associated with creating or joining such ecosystems. We use a case study approach on six major collaborative ecosystems that enable the development of complex, high-cost infrastructures and platforms. We characterize their strategy, governance, and their degree of intellectual property (IP) openness. We offer a three-dimensional framework that helps managers characterize such ecosystems. Although all the ecosystems studied aim at scaling up innovative solutions, their strategy, governance and IP openness vary. An upstream strategy aimed at replacing supplier proprietary design with open substitutes requires democratic governance and an intellectual property policy that maximizes the attractiveness of the ecosystem. A downstream strategy aimed at carving a space in new markets requires autocratic governance and an intellectual property policy that combines attractiveness and value capture opportunities.
APA, Harvard, Vancouver, ISO, and other styles
3

Khachaturyan, Mikhail, and Evgeniia Klicheva. "Risks of Introducing E-Governance Into Strategic Management Systems of Russian Companies in the Context of the Pandemic." International Journal of Electronic Government Research 17, no. 4 (October 2021): 84–102. http://dx.doi.org/10.4018/ijegr.2021100105.

Full text
Abstract:
With the accelerated development of information and communication technology, information has acquired the status of the most accessible and, at the same time, the most valuable resource. E-governance systems are among the main forms of introducing digital technologies into Russian companies' strategic management systems in the context of the pandemic. In this regard, one of the key performance factors when introducing such systems is providing them with tools for managing both the traditional risks affecting a company's operations and the new types of digital risks associated with the specifics of electronic governance. In this paper, the authors intend to reveal the main features of how such new risk factors influence the logic and functional processes of Russian companies' strategic management systems in the context of the pandemic. The paper presents the authors' description of new types of risks associated with introducing e-governance into strategic management systems.
APA, Harvard, Vancouver, ISO, and other styles
4

Poston, Robin S., and William J. Kettinger. "Mindfully Experimenting with IT." Journal of Database Management 25, no. 2 (April 2014): 29–51. http://dx.doi.org/10.4018/jdm.2014040102.

Full text
Abstract:
In many companies the process of new Information Technology (IT) identification and assessment lacks the rigor associated with experimentation. The realities of maintaining daily operations and the expense and expertise involved distract firms from conducting experiments. The authors explore cases of how companies introduce a new IT for the business use of digital social media. Because social media technologies are new, interest in their use is broad and diffuse, leaving organizations unsure about how best to implement social media and prompting them to follow a mindful process of experimenting with these technologies. The cases illustrate that the extent of mindfulness influences how new technology implementations are introduced, supporting wider boundaries in assessments, richer interpretations of the IT's usefulness, multi-level foci concerning benefits and costs, persistence to continue exploration, and a greater use of fact-based decision-making. The authors observe that following a mindful introduction process reaps some of the benefits of experimentation, such as greater stakeholder satisfaction and organization-wide learning and understanding of the technology's potential.
APA, Harvard, Vancouver, ISO, and other styles
5

Naughton, Bernard D. "The EU Falsified Medicines Directive: Key Implications for Dispensers." Medicine Access @ Point of Care 1 (January 2017): maapoc.0000024. http://dx.doi.org/10.5301/maapoc.0000024.

Full text
Abstract:
The EU Falsified Medicines Directive (FMD) mandates the serialisation of prescription-only medicines using a two-dimensional (2D) barcode by pharmaceutical companies and the systematic verification of this 2D barcode in pharmacies. This European directive has ramifications for many stakeholders, including market authorization holders, wholesalers, parallel importers, and dispensers. Focusing primarily on the impact on UK dispensers, the following questions are addressed in this article: Where should the affected medicines be scanned, and who will pay for the incoming changes to practice? The role of the EU FMD in terms of drug recalls, the preparation required for EU FMD compliance, and the potential for added healthcare value are also discussed. Dispensers must prepare for the February 2019 EU FMD deadline date by choosing a point within their dispensing processes to scan medicines. Dispensers must also budget appropriately for the incoming costs associated with new hardware and processes.
APA, Harvard, Vancouver, ISO, and other styles
6

KRAVCHENKO, Oksana, and Yelyzaveta SAPOZHNIKOVA. "Trend analysis of the sensitivity of international business in the field of information technologies to global limitations." Economics. Finances. Law, no. 12/4 (December 29, 2020): 13–16. http://dx.doi.org/10.37634/efp.2020.12(4).3.

Full text
Abstract:
The paper examines the trends of international business in the field of information technology in the context of the global constraints associated with the Covid-19 pandemic. Under global restrictions, the role of information technology development in the resilience of companies to changes in the external environment and in ensuring the possibility of business survival becomes particularly acute. Analytical IT forecasts need to be considered in order to be ready for change. It was found that despite the apparent growth in the field of information technology caused by the pandemic and the transition of many operations online, there is, in general, a reduction. These trends stem from the fact that in the crisis caused by the Covid-19 pandemic, most companies and individuals will delay the modernization of hardware and software, trying to use what already exists. However, a more detailed analysis shows heterogeneity in the change in demand for certain types of information technology services. Thus, there is an increase in demand for information technology services such as public cloud services and video conferencing, and there is deferred demand in the declining areas. At the same time, it can be argued that in the long run we can expect a resumption of rapid growth in demand for most information technology services whose consumption has been postponed. Today, software vendors help companies of all sizes and industries survive and grow. We looked at the ten most profitable IT companies in the world in 2020, ranked by revenue as of November 18, 2020. The ranking includes mainly US companies and companies from countries with developed post-industrial economies. Almost 80 % of US GDP is accounted for by services, which has made the US a world leader in this segment. Material production accounts for only 20 % of GDP, including all industries, agriculture and forestry, and construction. A focus on scientific and technological progress is one of the hallmarks of an effective economic system. Given the current trends in the development of IT technologies, we can say that they are the driving force of global transformation and economic growth, as well as a means of increasing the competitive advantage of any economic entity.
APA, Harvard, Vancouver, ISO, and other styles
7

Estrela, Vania V. "Biomedical Cyber-Physical Systems in the Light of Database as a Service (DBaaS) Paradigm." Medical Technologies Journal 4, no. 3 (December 7, 2020): 577. http://dx.doi.org/10.26415/2572-004x-vol4iss3p577-577.

Full text
Abstract:
Background: A database (DB) to store indexed information about drug delivery, tests, and their temporal behavior is paramount in new Biomedical Cyber-Physical Systems (BCPSs). The term Database as a Service (DBaaS) means that a corporation delivers the hardware, software, and other infrastructure required by companies to operate their databases according to their demands, instead of keeping an internal data warehouse. Methods: BCPS attributes are presented and discussed. One needs to retrieve detailed knowledge reliably to make adequate healthcare treatment decisions. Furthermore, these DBs store, organize, manipulate, and retrieve the necessary data from an ocean of Big Data (BD) associated processes. There are Structured Query Language (SQL) and NoSQL DBs. Results: This work investigates how to retrieve biomedical-related knowledge reliably to make adequate healthcare treatment decisions. Furthermore, Biomedical DBaaSs store, organize, manipulate, and retrieve the necessary data from an ocean of Big Data (BD) associated processes. Conclusion: A NoSQL DB allows more flexibility with changes while the BCPSs are running, which allows for queries and data handling according to the context and situation. A DBaaS must be adaptive and permit DB management across an extensive variety of distinct sources, modalities and dimensionalities, as well as data handling in conventional ways.
APA, Harvard, Vancouver, ISO, and other styles
8

Bonnaud, Olivier, and Ahmad Bsiesy. "Adaptation of the Higher Education in Engineering to the Advanced Manufacturing Technologies." Advances in Technology Innovation 5, no. 2 (April 1, 2020): 65–75. http://dx.doi.org/10.46604/aiti.2020.4144.

Full text
Abstract:
The 21st century will be the era of the fourth industrial revolution, with the progressive introduction of the digital society: smart/connected objects, smart factories driven by robotics, the Internet of Things (IoT) and artificial intelligence. Manufacturing will be performed by what is termed Industry 4.0. These advanced technologies result from the steady development of information technology associated with new objects and systems that can fulfil manufacturing tasks. The Industry 4.0 concept relies largely on the ability to design and manufacture smart and connected devices that are based on microelectronics technology. This evolution requires highly skilled technicians, engineers and PhDs well prepared for research, development and manufacturing. Their training, which combines knowledge and the associated compulsory know-how, is becoming the main challenge for the academic world. The curricula must therefore contain the basic knowledge and associated know-how training in all the specialties in the field. The software and hardware used in microelectronics and its applications are becoming so complex and expensive that the most realistic solution for practical training is to share facilities and human resources. This approach has been adopted by the French microelectronics education network, which includes twelve joint university centres and two industrial unions. It makes it possible to minimize training costs and to train future graduates on up-to-date tools similar to those used in companies. Thus, this paper deals with the strategy adopted by the French network in order to meet the needs of the future Industry 4.0.
APA, Harvard, Vancouver, ISO, and other styles
9

Vajjhala, Narasimha Rao, and Ervin Ramollari. "Big Data using Cloud Computing - Opportunities for Small and Medium-sized Enterprises." European Journal of Economics and Business Studies 4, no. 1 (April 30, 2016): 129. http://dx.doi.org/10.26417/ejes.v4i1.p129-137.

Full text
Abstract:
Big Data has been listed as one of the current and future research frontiers by Gartner. Large-sized companies are already investing in and leveraging big data. Small and medium-sized enterprises (SMEs) can also leverage big data to gain a strategic competitive advantage but are often limited by the lack of adequate financial resources to invest in the technology and manpower. Several big data challenges still exist, especially in computer architectures that are CPU-heavy but I/O-poor. Cloud computing eliminates the need to maintain expensive computing hardware and software. Cloud computing resources and techniques can be leveraged to address the traditional problems of fault tolerance and the low-performance bottlenecks associated with using big data. SMEs can take advantage of cloud computing techniques to obtain the advantages of big data without significant investments in technology and manpower. This paper explores the current trends in the area of big data using cloud resources and how SMEs can take advantage of these technological trends. The results of this study will benefit SMEs in identifying and exploring possible opportunities and also in understanding the challenges of leveraging big data.
APA, Harvard, Vancouver, ISO, and other styles
10

Bartczak, Krzysztof. "The Use of Digital Technology Platforms in the Context of Cybersecurity in the Industrial Sector." Foundations of Management 13, no. 1 (January 1, 2021): 117–30. http://dx.doi.org/10.2478/fman-2021-0009.

Full text
Abstract:
This study discusses the use of digital technology platforms (DTPs) in the context of cybersecurity in the industrial sector, with a focus on digital industry (industrial) platforms (DIPs). A definition of DTPs is presented, including the author's interpretation, as well as the scope of DTP application in the industrial sector, which includes, in particular, European Digital Platforms (EDPs) and Polish Digital Platforms (PDPs), such as the non-ferrous metals PDP or the intelligent transport systems PDP. This is followed by a section covering the theoretical basis of the study that highlights the key challenges and risks associated with the use of DTPs, as well as the methods for their neutralization in the form of specific concepts and systems that can be employed in the industrial sector. The subsequent section of the study is based on the results of the author's own survey, which collected information from a total of 120 companies operating in Poland that were granted subsidies under the Operational Program Innovative Economy for investments involving the implementation and development of DTPs. The survey was carried out using a questionnaire developed by the author, which consisted of 23 questions. In this respect, as shown by the author's own studies, of greatest relevance are hardware failures and Internet outage events. Most importantly, concerns about such risks are among the major factors underlying the negative attitudes of management staff of industrial companies toward DTPs, and it is therefore important to ensure that any such risks can be effectively addressed. They can be avoided through the use of certain concepts and systems such as STOE or CVSS. Through this study, a typical company can learn about the model of DTPs in the context of cybersecurity challenges and, in particular, improve its IT security.
APA, Harvard, Vancouver, ISO, and other styles
11

Zangiev, Taimuraz, Elizar Tarasov, Vladimir Sotnikov, Zalina Tugusheva, and Fatima Gunay. "About One Approach to the Selection of Information Protection Facilities." NBI Technologies, no. 1 (August 2018): 23–29. http://dx.doi.org/10.15688/nbit.jvolsu.2018.1.4.

Full text
Abstract:
Much attention in the sphere of information technology is paid to aspects of information security, owing to the growing damage caused by security incidents. As this damage increases, the market for information security software and hardware grows both quantitatively and qualitatively. At the same time, new alternatives to existing information security tools are being developed, as well as means of protection against new attack vectors associated, for example, with the spread of the 'Internet of things' concept, big data and cloud technologies. Nevertheless, the analysis of information security incidents at enterprises that actively use information security tools shows that the use of information security systems does not provide the required level of protection for information objects, which remain susceptible to attacks. According to recent studies, the share of corporate systems in the Russian Federation containing critical vulnerabilities associated with incorrect configuration of information security systems is more than 80 %. At the same time, the costs of Russian companies to ensure information security are increasing by an average of 30 % per year. The article presents current problems related to the conflicting requirements for the design of complex information security systems (CISS). The authors suggest an approach to the selection and configuration of CISS facilities based on M. Belbin's role model, interpreting the CISS as a team, which will allow building an integrated information protection circuit. The cases of manifestation of synergy and emergence, which ensure the effective functioning of the system, are described.
APA, Harvard, Vancouver, ISO, and other styles
12

Tukkoji, Chetana, and Seetharam K. "Handling Imbalance Data in Reduce task of MapReduce in Cloud Environment." International Journal of Advanced Research in Computer Science and Software Engineering 7, no. 11 (November 30, 2017): 168. http://dx.doi.org/10.23956/ijarcsse.v7i11.498.

Full text
Abstract:
There is a growing need for ad-hoc analysis of extremely large data sets, especially at web-based companies where innovation critically depends on being able to analyze terabytes of data collected every day. Parallel database products offer a solution, but are usually prohibitively expensive at this scale. Moreover, most of the people who analyze data are procedural programmers; the success of the more procedural map-reduce programming model and its associated scalable implementations on low-cost commodity hardware is evidence of this. However, the map-reduce paradigm is too low-level and rigid, and leads to a great deal of custom user code that is hard to maintain and reuse. Map-reduce is an effective tool for parallel data processing, but one significant issue in practical map-reduce applications is data skew: an imbalance in the amount of data assigned to each task causes some tasks to take much longer to finish than others. This paper proposes a framework to address the data skew problem on the reduce side of map-reduce. It uses an innovative sampling method that achieves an accurate approximation of the distribution of the intermediate data by sampling only a small fraction of it, and it does not prevent the map and reduce stages from overlapping.
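The sampling idea described in this abstract can be illustrated with a minimal sketch (not taken from the paper): estimate the intermediate key distribution from a small sample of map output, then assign the heaviest keys to reducers greedily so that no reduce task is overloaded. All function names, the sample rate and the toy data are illustrative assumptions.

```python
import random
from collections import Counter
from heapq import heapify, heappop, heappush

def estimate_key_distribution(map_output, sample_rate=0.05):
    """Approximate the intermediate key distribution by sampling
    only a small fraction of the map output records."""
    sample = [k for k, _ in map_output if random.random() < sample_rate]
    counts = Counter(sample)
    # Scale sampled counts back up to estimate true frequencies.
    return {k: c / sample_rate for k, c in counts.items()}

def balanced_partition(key_freq, num_reducers):
    """Greedy largest-first assignment of keys to reducers, so heavy
    keys do not pile up on one reduce task (mitigating data skew)."""
    heap = [(0.0, r, []) for r in range(num_reducers)]  # (load, id, keys)
    heapify(heap)
    for key, freq in sorted(key_freq.items(), key=lambda kv: -kv[1]):
        load, rid, keys = heappop(heap)
        keys.append(key)
        heappush(heap, (load + freq, rid, keys))
    return {rid: keys for _, rid, keys in heap}

# Toy skewed map output: the key "hot" dominates the intermediate data.
map_output = [("hot", 1)] * 9000 + [(f"k{i}", 1) for i in range(1000)]
freq = estimate_key_distribution(map_output)
for rid, keys in balanced_partition(freq, num_reducers=4).items():
    print(rid, round(sum(freq[k] for k in keys)), "estimated records")
```

The greedy largest-first step is what keeps the estimated per-reducer load roughly equal even when one key accounts for most of the records.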
APA, Harvard, Vancouver, ISO, and other styles
13

Jung, Soyi, Won Joon Yun, Joongheon Kim, and Jae-Hyun Kim. "Coordinated Multi-Agent Deep Reinforcement Learning for Energy-Aware UAV-Based Big-Data Platforms." Electronics 10, no. 5 (February 25, 2021): 543. http://dx.doi.org/10.3390/electronics10050543.

Full text
Abstract:
This paper proposes a novel coordinated multi-agent deep reinforcement learning (MADRL) algorithm for energy sharing among multiple unmanned aerial vehicles (UAVs) in order to conduct big-data processing in a distributed manner. For realizing UAV-assisted aerial surveillance or flexible mobile cellular services, robust wireless charging mechanisms are essential for delivering energy sources from charging towers (i.e., charging infrastructure) to their associated UAVs for seamless operations of autonomous UAVs in the sky. In order to actively and intelligently manage the energy resources in charging towers, a MADRL-based coordinated energy management system is desired and proposed for energy resource sharing among charging towers. When the required energy for charging UAVs is not enough in charging towers, the energy purchase from utility company (i.e., energy source provider in local energy market) is desired, which takes high costs. Therefore, the main objective of our proposed coordinated MADRL-based energy sharing learning algorithm is minimizing energy purchase from external utility companies to minimize system-operational costs. Finally, our performance evaluation results verify that the proposed coordinated MADRL-based algorithm achieves desired performance improvements.
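The cost objective the abstract describes can be made concrete with a toy, non-RL sketch (not the authors' algorithm): towers first cover each other's shortfalls from their surplus, and only the remainder is bought from the utility; the MADRL agents are trained to keep that remainder small. All tower names, quantities and the price are invented for illustration.

```python
def share_then_purchase(supply, demand, price_per_kwh=0.3):
    """Greedy illustration of the energy-sharing objective:
    cover each tower's shortfall from other towers' surplus first,
    and only buy the remaining energy from the utility company."""
    surplus = {t: max(0.0, supply[t] - demand[t]) for t in supply}
    shortfall = {t: max(0.0, demand[t] - supply[t]) for t in supply}
    purchased = 0.0
    for tower, need in shortfall.items():
        for donor in surplus:
            if need <= 0:
                break
            transfer = min(surplus[donor], need)  # share what the donor can spare
            surplus[donor] -= transfer
            need -= transfer
        purchased += need  # whatever sharing could not cover
    return purchased * price_per_kwh

supply = {"tower_A": 120.0, "tower_B": 40.0, "tower_C": 90.0}   # kWh available
demand = {"tower_A": 80.0, "tower_B": 125.0, "tower_C": 60.0}   # kWh the UAVs need
print("utility purchase cost:", share_then_purchase(supply, demand))
```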
APA, Harvard, Vancouver, ISO, and other styles
14

Priyadarshini, Aishwarya, Sanhita Mishra, Debani Prasad Mishra, Surender Reddy Salkuti, and Ramakanta Mohanty. "Fraudulent credit card transaction detection using soft computing techniques." Indonesian Journal of Electrical Engineering and Computer Science 23, no. 3 (September 1, 2021): 1634. http://dx.doi.org/10.11591/ijeecs.v23.i3.pp1634-1642.

Full text
Abstract:
Nowadays, fraudulent or deceitful activities associated with financial transactions, predominantly using credit cards, have been increasing at an alarming rate and are one of the most prevalent activities in finance industries, corporate companies, and other government organizations. It is therefore essential to incorporate a fraud detection system that mainly consists of intelligent fraud detection techniques to keep in view the consumer and clients’ welfare alike. Numerous fraud detection procedures, techniques, and systems in the literature have been implemented by employing a myriad of intelligent techniques, including algorithms and frameworks, to detect fraudulent and deceitful transactions. This paper initially analyses the data through exploratory data analysis and then proposes various classification models that are implemented using intelligent soft computing techniques to predictively classify fraudulent credit card transactions. Classification algorithms such as K-Nearest neighbor (K-NN), decision tree, random forest (RF), and logistic regression (LR) have been implemented to critically evaluate their performances. The proposed model is computationally efficient, lightweight and can be used for credit card fraudulent transaction detection with better accuracy.
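As an illustration of the kind of classifier comparison the abstract describes (not the authors' code or dataset), the sketch below trains the four mentioned algorithms with scikit-learn on a synthetic, imbalanced dataset and reports both accuracy and F1, since accuracy alone is misleading when the fraud class is rare. The dataset parameters are assumptions.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, f1_score
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for a credit card dataset: roughly 2% "fraud" class.
X, y = make_classification(n_samples=5000, n_features=20,
                           weights=[0.98, 0.02], random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=42)

models = {
    "K-NN": KNeighborsClassifier(n_neighbors=5),
    "Decision tree": DecisionTreeClassifier(random_state=42),
    "Random forest": RandomForestClassifier(n_estimators=100, random_state=42),
    "Logistic regression": LogisticRegression(max_iter=1000),
}
for name, model in models.items():
    model.fit(X_train, y_train)
    pred = model.predict(X_test)
    print(f"{name}: accuracy={accuracy_score(y_test, pred):.3f} "
          f"F1={f1_score(y_test, pred):.3f}")
```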
APA, Harvard, Vancouver, ISO, and other styles
15

Ashima, Ashima, and Mrs Navjot Jyoti. "ENHANCING JOB ALLOCATION USING NBST IN CLOUD ENVIRONMENT: A REVIEW." INTERNATIONAL JOURNAL OF COMPUTERS & TECHNOLOGY 16, no. 3 (June 7, 2017): 6247–53. http://dx.doi.org/10.24297/ijct.v16i3.6182.

Full text
Abstract:
Cloud computing is a vigorous technology by which a user can get software, applications, operating systems and hardware as a service without actually possessing them, paying only according to usage. Cloud computing is a hot topic of research these days. With the rapid growth of Internet technology, cloud computing has become the main source of computing for small as well as big IT companies. In the cloud computing milieu, the cloud data centers and the users of cloud computing are globally distributed; therefore it is a big challenge for cloud data centers to efficiently handle the requests coming from millions of users and to service them in an efficient manner. Load balancing is a critical aspect that ensures that all resources and entities are well balanced such that no resource or entity is either underloaded or overloaded. Load balancing algorithms can be static or dynamic. Load balancing in this environment means equal distribution of workload across all the nodes, and it provides a way of achieving proper utilization of resources and better user satisfaction. Hence, use of an appropriate load balancing algorithm is necessary for selecting the virtual machines or servers. This paper focuses on load balancing algorithms that distribute incoming jobs optimally among VMs in cloud data centers. In this paper, we review several existing load balancing mechanisms and try to address the problems associated with them.
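To make the static-versus-dynamic distinction mentioned in the abstract concrete, here is a small illustrative simulation (not from the reviewed works): round-robin ignores the current VM load, while a least-loaded policy always sends the next job to the VM with the smallest outstanding work. Job sizes and VM counts are invented.

```python
import itertools

def round_robin(jobs, num_vms):
    """Static policy: assign jobs to VMs in a fixed cyclic order."""
    loads = [0.0] * num_vms
    for job, vm in zip(jobs, itertools.cycle(range(num_vms))):
        loads[vm] += job
    return loads

def least_loaded(jobs, num_vms):
    """Dynamic policy: always send the next job to the least-loaded VM."""
    loads = [0.0] * num_vms
    for job in jobs:
        loads[loads.index(min(loads))] += job
    return loads

jobs = [5, 1, 1, 1, 8, 1, 1, 1, 7, 1, 1, 1]  # job sizes, arbitrary units
print("round-robin load per VM :", round_robin(jobs, 3))
print("least-loaded load per VM:", least_loaded(jobs, 3))
```

Running it shows the dynamic policy ending with a noticeably flatter load profile, which is exactly the property the reviewed balancing mechanisms aim for.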
APA, Harvard, Vancouver, ISO, and other styles
16

Maté, Alejandro, Jesús Peral, Juan Trujillo, Carlos Blanco, Diego García-Saiz, and Eduardo Fernández-Medina. "Improving security in NoSQL document databases through model-driven modernization." Knowledge and Information Systems 63, no. 8 (July 13, 2021): 2209–30. http://dx.doi.org/10.1007/s10115-021-01589-x.

Full text
Abstract:
NoSQL technologies have become a common component in many information systems and software applications. These technologies are focused on performance, enabling scalable processing of large volumes of structured and unstructured data. Unfortunately, most developments over NoSQL technologies consider security as an afterthought, putting at risk personal data of individuals and potentially causing severe economic losses as well as reputation crisis. In order to avoid these situations, companies require an approach that introduces security mechanisms into their systems without scrapping already in-place solutions to restart all over again the design process. Therefore, in this paper we propose the first modernization approach for introducing security in NoSQL databases, focusing on access control and thereby improving the security of their associated information systems and applications. Our approach analyzes the existing NoSQL solution of the organization, using a domain ontology to detect sensitive information and creating a conceptual model of the database. Together with this model, a series of security issues related to access control are listed, allowing database designers to identify the security mechanisms that must be incorporated into their existing solution. For each security issue, our approach automatically generates a proposed solution, consisting of a combination of privilege modifications, new roles and views to improve access control. In order to test our approach, we apply our process to a medical database implemented using the popular document-oriented NoSQL database, MongoDB. The great advantages of our approach are that: (1) it takes into account the context of the system thanks to the introduction of domain ontologies, (2) it helps to avoid missing critical access control issues since the analysis is performed automatically, (3) it reduces the effort and costs of the modernization process thanks to the automated steps in the process, (4) it can be used with different NoSQL document-based technologies in a successful way by adjusting the metamodel, and (5) it is lined up with known standards, hence allowing the application of guidelines and best practices.
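The kind of access-control hardening such an approach produces for a document store can be sketched with MongoDB and PyMongo (a hypothetical collection, view and role, not the authors' generated output): a view hides a sensitive field detected via the ontology, and a custom role grants read access only to that view.

```python
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # assumes a local MongoDB instance
db = client["hospital"]                            # hypothetical database name

# A view over the 'patients' collection that hides the sensitive field.
db.command("create", "patients_public",
           viewOn="patients",
           pipeline=[{"$project": {"national_id": 0}}])

# A role that may only read the redacted view, not the raw collection.
db.command("createRole", "nurse_read",
           privileges=[{
               "resource": {"db": "hospital", "collection": "patients_public"},
               "actions": ["find"],
           }],
           roles=[])

# Grant the restricted role to an application user.
db.command("createUser", "nurse_app", pwd="change-me", roles=["nurse_read"])
```

The combination of a projected view plus a narrowly scoped role corresponds to the "privilege modifications, new roles and views" the paper's generated solutions consist of.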
APA, Harvard, Vancouver, ISO, and other styles
17

Sudarto, Ferry, Jahwahir Jahwahir, and Ryan Satria. "SISTEM KONTROL ROLLING DOOR MENGGUNAKAN SMARTPHONE BERBASIS ANDROID OS PADA PT. INDONESIA STANLEY ELECTRIC." CCIT Journal 9, no. 1 (September 29, 2015): 44–50. http://dx.doi.org/10.33050/ccit.v9i1.397.

Full text
Abstract:
PT Indonesia Stanley Electric (PT ISE) is a company founded to meet the need for two-wheel and four-wheel vehicle lighting equipment. PT ISE started with molding and lighting, giving preference to lighting equipment for motor vehicles, and the next stage is the planned manufacture of electronic components for motor vehicles. The Stanley company, headquartered in Japan, now has 34 branch companies in several countries outside Japan; the Stanley company in Indonesia is its 24th subsidiary branch, named PT Indonesia Stanley Electric (PT ISE). PT Indonesia Stanley Electric is located at Jl. BhumiMas 1 No. 17, Cikupamas Industrial Area, where the company uses a rolling door for the exit and entry of forklifts; the door still uses a manual system and is operated by another operator pressing the open or close button. Smartphones with the Android operating system are widely available in the market at increasingly affordable prices, and the Android operating system itself is an open source operating system that can be modified according to user needs. The control system using an Android-based smartphone is used to control the rolling door from a distance without having to interact directly with it. In this study, a prototype of an automatic rolling door control system is built in which an Android-based smartphone communicates, over a Bluetooth network, with a control program embedded in an ATMega8 microcontroller. In the mechanical system, a DC motor drives the rolling door, and proximity switches are used to determine the stopping points of the system. The electronic system uses a 12-volt DC relay circuit and the HC-05 Bluetooth module. With this system the user can open and close the rolling door via smartphone, which thus serves not only as a means of communication but also as a device for controlling hardware.
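A desktop-side illustration of the same control idea (hypothetical; the paper uses an Android app): once the HC-05 module is paired it appears as a serial port, over which single-byte open/close commands can be sent to the microcontroller firmware. The port name, baud rate and command bytes below are assumptions.

```python
import time

import serial  # pyserial; the HC-05 shows up as a serial port once paired

PORT = "/dev/rfcomm0"   # assumed Bluetooth serial port (COMx on Windows)
BAUD = 9600             # common HC-05 default baud rate

def send_command(cmd: bytes) -> None:
    """Send a one-byte command to the rolling-door controller."""
    with serial.Serial(PORT, BAUD, timeout=2) as link:
        link.write(cmd)

# Hypothetical protocol: b'O' opens the door, b'C' closes it.
send_command(b"O")
time.sleep(10)          # wait while the DC motor drives the door open
send_command(b"C")
```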
APA, Harvard, Vancouver, ISO, and other styles
18

Ahmed, Osama, and Abdul Wali Abdul Ali. "Simulating and Building an Appliance Clustering Fuzzified SVC for Single Phase System." ELEKTRIKA- Journal of Electrical Engineering 20, no. 1 (April 30, 2021): 34–42. http://dx.doi.org/10.11113/elektrika.v20n1.235.

Full text
Abstract:
A power system suffers from losses that can have serious consequences. The presence of reactive power in the power system increases system losses, degrades the quality of the delivered power and distorts the voltage. As a result, many studies are concerned with reactive power compensation. The necessity of balancing reactive power generation and absorption throughout a power system gave birth to many devices used for reactive power compensation. Static Var Compensators (SVCs) are shunt devices used for the generation or absorption of reactive power as desired. SVCs provide fast and smooth compensation and power factor correction. In this paper, a Fuzzified Static Var Compensator consisting of a Thyristor Controlled Reactor (TCR) branch and Thyristor Switched Capacitor (TSC) branches for reactive power compensation and power factor correction at the load side is presented. The system is simulated in Simulink using a group of blocks and equations for measuring the power factor, determining the weightage by which the power factor is improved, determining the firing angle of the TCR branch, and determining the capacitor configuration of the TSC branches. Furthermore, a hardware prototype is designed and implemented with its associated software; it includes a smart meter for power monitoring, which displays voltage, current, real power, reactive power and power factor, and SVC branches with a TRIAC as the power switching device. Lastly, static and dynamic loads are used to test the system's capability to provide fast response and compensation. The simulation results illustrate the proposed system's capability and responsiveness in compensating the reactive power and correcting the power factor. They also highlight the proportional relation between reactive power presence and increased cost in electricity bills. The proposed smart meter and SVC prototypes proved their capabilities in giving accurate measurement and monitoring, sending the data to the graphical user interface through ZigBee communication, and correcting the power factor. Reactive power presence is an undesired condition that affects the equipment and connected consumers of a power system. Therefore, fast and smooth compensation for reactive power became a matter of concern to utility companies, power consumers and manufacturers, and the use of compensating devices is of great importance as they can increase power capacity, regulate the voltage and improve power system performance.
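The compensation arithmetic behind such an SVC can be shown with a short worked example (illustrative values, not from the paper): the capacitive reactive power a TSC branch must inject to raise a load's power factor follows from Q_c = P(tan φ1 − tan φ2).

```python
import math

def required_compensation_kvar(p_kw, pf_initial, pf_target):
    """Reactive power (kVAR) a shunt capacitor bank / TSC branch must
    inject to raise the power factor from pf_initial to pf_target."""
    phi1 = math.acos(pf_initial)
    phi2 = math.acos(pf_target)
    return p_kw * (math.tan(phi1) - math.tan(phi2))

# Example: a 50 kW single-phase load at 0.70 lagging corrected to 0.95.
qc = required_compensation_kvar(50.0, 0.70, 0.95)
print(f"capacitive compensation needed: {qc:.1f} kVAR")
# With this Q_c switched in, the supply current drops roughly in proportion
# to pf_initial / pf_target, which is what reduces line losses and billing.
```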
APA, Harvard, Vancouver, ISO, and other styles
19

Elrafie, Emad A., Jerry P. White, and Fatema H. Awami. "The Event Solution--A New Approach for Fully Integrated Studies Covering Uncertainty Analysis and Risk Assessment." SPE Reservoir Evaluation & Engineering 11, no. 05 (October 1, 2008): 858–65. http://dx.doi.org/10.2118/105276-pa.

Full text
Abstract:
Saudi Aramco strives to implement new and innovative techniques and approaches to assist in meeting the industry's increasing challenges. One of these is the new study approach, "the Event Solution," which leads to better synergy among different stakeholders and enables faster decisions that fully encompass the complex uncertainties associated with today's gasfield and oilfield developments. The Event Solution is a short, intensively collaborative event, which compresses major decision cycles, embraces uncertainty, and provides a wider range of alternative solutions. The Event Solution approach has been implemented successfully on 24 major studies worldwide, with the last eight projects conducted on Saudi Aramco megareservoirs. The concept is simple: identify the most important study objective, and focus the collective skills and creativity of a team of experts to meet the study objective in a special event that lasts just 2 months. The team is enabled with the latest hardware and software technologies in a large team room, specially designed for collaboration, where they can work together. A facilitator leads the team to implement the Event Solution process that helps the team to see "the big picture" and understand what matters to the bottom line. The team composition is enriched with representatives from all of the stakeholders (including technical experts, management, facilitators, and sometimes government and joint-venture partner representatives) so the results can be concluded and implemented immediately, with maximum buy-in. The Event Solution process includes detailed uncertainty analysis and risk-assessment workflows that have been implemented successfully in many events. The most important deliverable of the Event Solution, however, is that all the stakeholders develop a clear and common understanding of the critical uncertainties, project risk, and the agreed plans to move forward--the decisions. This volume of work, which traditionally requires years, is completed in 2 months on average using the Event Solution process. This paper presents the elements and processes of this new approach. Critical elements to a successful Event Solution include software, workroom, team members, and a facilitator. Once the elements are in place, the facilitator leads the team through processes that include project preparation, parallel workflows, uncertainty analysis, critical information plans, project risk assessment, and mitigation plans. Note that uncertainty analysis is not a simple byproduct of the study, but an integral component of success.
The oil and gas industry spends more than USD 130 billion in capital and exploration expense worldwide each year (OGJ 2000a, 2000b) on complex and uncertain ventures, highlighting the significant added value that can be achieved through processes that create synergy while reducing the decision cycle time. Warren (1994) notes that the success of an individual team can be variable when he states, "the fundamental idea of cross functional teams and goals appears to surface about every 10 years with a new label. Usually, attempts to implement this concept in the E&P business ended with utter failure for a variety of reasons" (Ching et al. 1993). The Event Solution extends the crossfunctional-team concept by formalizing key success factors: identifying the most important study objective, focusing the collective skills and creativity of a team of experts on meeting the study objective, and collaborating in a special event that lasts 2 months rather than years. In the 1980s, the concept of asset teams was introduced by E&P companies around the globe to downsize and streamline operations. Unfortunately, integrated software was not mature enough at that time to enable real integration of the asset-team members. As integrated software became available and hardware became more powerful in the early-to-mid 1990s, asset teams began to achieve more success. By the late 1990s, common processes were adopted by most major oil and gas companies to ensure consistency and repeatable success across teams. Highly formalized processes, often employing gatekeepers, were developed to integrate the management (decision makers) and technical (asset) teams. Although integrated software and formalized processes enhanced the quality of the decision process, generating fully synergized analyses from a wide variety of data and skills was still a lengthy process. Furthermore, the decision makers often received different messages from different disciplines, which may not have incorporated a comprehensive image of the uncertainty surrounding the decision. Between 2004 and 2005, several synergized study approaches (Williams et al. 2004; Landis and Benson 2005) were introduced to the industry as a means to bridge the gap between the technical asset teams and decision makers. These approaches were set up either as workshop-style projects or as facilitated teams focused on a set of business objectives. In 2001, the Event Solution approach was introduced to the industry (Ghazi and Elrafie 2001). Like asset teams, the Event Solution is a group of multidiscipline professionals working on a dedicated project. The Event Solution focuses on creating better synergy among all stakeholders (asset teams, managers, decision makers, and partners) by enabling faster decisions that fully encompass the complex uncertainties associated with today's projects. The focus is on specific, well-stated business objectives aligned to company strategy. The team follows a process in which each team member assesses uncertainties within his/her own analysis, with outputs subsequently rolled up into a studywide uncertainty assessment.
APA, Harvard, Vancouver, ISO, and other styles
20

Chawla, Ishaan. "Cloud Computing Environment: A Review." INTERNATIONAL JOURNAL OF COMPUTERS & TECHNOLOGY 17, no. 2 (August 29, 2018): 7261–72. http://dx.doi.org/10.24297/ijct.v17i2.7674.

Full text
Abstract:
Cloud computing is a vigorous technology by which a user can get software, applications, operating systems and hardware as a service without actually possessing them, paying only according to usage. Cloud computing is a hot topic of research these days. With the rapid growth of Internet technology, cloud computing has become the main source of computing for small as well as big IT companies. In the cloud computing milieu, the cloud data centers and the users of cloud computing are globally distributed; therefore it is a big challenge for cloud data centers to efficiently handle the requests coming from millions of users and to service them in an efficient manner. Cloud computing is Internet-based development and use of computer technology. It is a style of computing in which dynamically scalable and often virtualized resources are provided as a service over the Internet. Users need not have knowledge of, expertise in, or control over the technology infrastructure "in the cloud" that supports them. Scheduling is one of the core steps to efficiently exploit the capabilities of heterogeneous computing systems. On a cloud computing platform, load balancing of the entire system can be handled dynamically by using virtualization technology, through which it becomes possible to remap virtual machines and physical resources according to changes in load. However, in order to improve performance, the virtual machines have to fully utilize their resources and services by adapting to the computing environment dynamically. Load balancing with proper allocation of resources must be guaranteed in order to improve resource utility. Load balancing is a critical aspect that ensures that all resources and entities are well balanced such that no resource or entity is either underloaded or overloaded. Load balancing algorithms can be static or dynamic. Load balancing in this environment means equal distribution of workload across all the nodes, and it provides a way of achieving proper utilization of resources and better user satisfaction. Hence, use of an appropriate load balancing algorithm is necessary for selecting the virtual machines or servers. This paper focuses on load balancing algorithms that distribute incoming jobs optimally among VMs in cloud data centers. In this paper, we review several existing load balancing mechanisms and try to address the problems associated with them.
APA, Harvard, Vancouver, ISO, and other styles
21

Nayyar, Anand, Pijush Kanti Dutta Pramankit, and Rajni Mohana. "Introduction to the Special Issue on Evolving IoT and Cyber-Physical Systems: Advancements, Applications, and Solutions." Scalable Computing: Practice and Experience 21, no. 3 (August 1, 2020): 347–48. http://dx.doi.org/10.12694/scpe.v21i3.1568.

Full text
Abstract:
The Internet of Things (IoT) is regarded as a next-generation wave of Information Technology (IT) after the widespread emergence of the Internet and mobile communication technologies. IoT supports information exchange and networked interaction of appliances, vehicles and other objects, making sensing and actuation possible in a low-cost and smart manner. On the other hand, cyber-physical systems (CPS) are described as engineered systems built upon the tight integration of cyber entities (e.g., computation, communication, and control) and physical things (natural and man-made systems governed by the laws of physics). IoT and CPS are not isolated technologies. Rather, it can be said that IoT is the base or enabling technology for CPS, and CPS is considered the grown-up development of IoT, completing the IoT notion and vision. Both are merged into a closed loop, providing mechanisms for conceptualizing and realizing all aspects of the networked composed systems that are monitored and controlled by computing algorithms and are tightly coupled among users and the Internet. That is, the hardware and the software entities are intertwined, and they typically function on different time and location-based scales. In fact, the linking between the cyber and the physical world is enabled by IoT (through sensors and actuators). CPS, which includes traditional embedded and control systems, is expected to be transformed by the evolving and innovative methodologies and engineering of IoT. Several application areas of IoT and CPS are smart building, smart transport, automated vehicles, smart cities, smart grid, smart manufacturing, smart agriculture, smart healthcare, smart supply chain and logistics, etc. Though CPS and IoT have significant overlaps, they differ in terms of engineering aspects. Engineering IoT systems revolves around uniquely identifiable, internet-connected devices and embedded systems, whereas engineering CPS requires a strong emphasis on the relationship between computation aspects (complex software) and the physical entities (hardware). Engineering CPS is challenging because there is no defined and fixed boundary and relationship between the cyber and physical worlds. In CPS, diverse constituent parts are composed and collaborate together to create unified systems with global behaviour. These systems need to be assured in terms of dependability, safety, security, efficiency, and adherence to real-time constraints. Hence, designing CPS requires knowledge of multidisciplinary areas such as sensing technologies, distributed systems, pervasive and ubiquitous computing, real-time computing, computer networking, control theory, signal processing, embedded systems, etc. CPS, along with the continuously evolving IoT, has posed several challenges. For example, the enormous amount of data collected from physical things makes Big Data management and analytics difficult, including data normalization, data aggregation, data mining, pattern extraction and information visualization. Similarly, the future IoT and CPS need standardized abstraction and architecture that will allow modular designing and engineering of IoT and CPS in global and synergetic applications. Another challenging concern of IoT and CPS is the security and reliability of the components and systems.
Although IoT and CPS have attracted the attention of the research communities and several ideas and solutions have been proposed, there are still huge possibilities for innovative propositions to make the IoT and CPS vision successful. The major challenges and research scopes include system design and implementation, computing and communication, system architecture and integration, application-based implementations, fault tolerance, designing efficient algorithms and protocols, availability and reliability, security and privacy, energy-efficiency and sustainability, etc. It is our great privilege to present Volume 21, Issue 3 of Scalable Computing: Practice and Experience. We received 30 research papers, of which 14 papers were selected for publication. The objective of this special issue is to explore and report recent advances and disseminate state-of-the-art research related to IoT, CPS and the enabling and associated technologies. The special issue will present new dimensions of research to researchers and industry professionals with regard to IoT and CPS. Vivek Kumar Prasad and Madhuri D Bhavsar in the paper titled "Monitoring and Prediction of SLA for IoT based Cloud" described mechanisms for monitoring, using the concept of reinforcement learning, and for prediction of cloud resources, which form critical parts of cloud expertise in support of controlling and evolving IT resources; the prediction has been implemented using LSTM. The proper utilization of the resources will generate revenue for the provider and also increase the trust factor of the provider of cloud services. For experimental analysis, four parameters have been used, i.e. CPU utilization, disk read/write throughput and memory utilization. Kasture et al. in the paper titled "Comparative Study of Speaker Recognition Techniques in IoT Devices for Text Independent Negative Recognition" compared the performance of features used in state-of-the-art speaker recognition models and analysed variants of Mel frequency cepstrum coefficients (MFCC), predominantly used in feature extraction, which can be further incorporated and used in various smart devices. Mahesh Kumar Singh and Om Prakash Rishi in the paper titled "Event Driven Recommendation System for E-Commerce using Knowledge based Collaborative Filtering Technique" proposed a novel system that uses a knowledge base generated from a knowledge graph to identify the domain knowledge of users, items, and the relationships among these; a knowledge graph is a labelled multidimensional directed graph that represents the relationships among the users and the items. The proposed approach uses about 100 percent of users' participation in the form of activities during navigation of the web site. Thus, the system captures the users' interest, which is beneficial for both seller and buyer. The proposed system is compared with baseline methods in the area of recommendation systems using three parameters: precision, recall and NDCG, through online and offline evaluation studies with user data, and it is observed that the proposed system is better compared to other baseline systems. Benbrahim et al.
in the paper titled "Deep Convolutional Neural Network with TensorFlow and Keras to Classify Skin Cancer" proposed a novel classification model to classify skin tumours in images using a Deep Learning methodology; the proposed system was tested on the HAM10000 dataset comprising 10,015 dermatoscopic images, and the results show an accuracy of 94.06% on the validation set and 93.93% on the test set. Devi B et al. in the paper titled "Deadlock Free Resource Management Technique for IoT-Based Post Disaster Recovery Systems" proposed a new class of techniques that do not perform stringent testing before allocating the resources but still ensure that the system is deadlock-free and the overhead is also minimal. The proposed technique suggests reserving a portion of the resources to ensure no deadlock would occur. The correctness of the technique is proved in the form of theorems. The average turnaround time is approximately 18% lower for the proposed technique than for Banker's algorithm, and the technique also has an optimal overhead of O(m). Deep et al. in the paper titled "Access Management of User and Cyber-Physical Device in DBAAS According to Indian IT Laws Using Blockchain" proposed a novel blockchain solution to track the activities of employees managing the cloud. Employee authentication and authorization are managed through the blockchain server. User authentication related data is stored in the blockchain. The proposed work assists cloud companies in having better control over their employees' activities, thus helping to prevent insider attacks on users and Cyber-Physical Devices. Sumit Kumar and Jaspreet Singh in the paper titled "Internet of Vehicles (IoV) over VANETS: Smart and Secure Communication using IoT" presented a detailed description of the Internet of Vehicles (IoV) with current applications, architectures, communication technologies, routing protocols and different issues. The researchers also elaborated on research challenges and the trade-off between security and privacy in the area of IoV. Deore et al. in the paper titled "A New Approach for Navigation and Traffic Signs Indication Using Map Integrated Augmented Reality for Self-Driving Cars" proposed a new approach to supplement the technology used in self-driving cars for perception. The proposed approach uses Augmented Reality to create and augment artificial objects of navigational signs and traffic signals onto reality based on the vehicle's location. This approach helps navigate the vehicle even if the road infrastructure does not have very good sign indications and markings. The approach was tested locally by creating a local navigational system and a smartphone-based augmented reality app. The approach performed better than the conventional method as the objects were clearer in the frame, which made it easier for the object detection to detect them. Bhardwaj et al. in the paper titled "A Framework to Systematically Analyse the Trustworthiness of Nodes for Securing IoV Interactions" reviewed the literature on IoV and trust and proposed a Hybrid Trust model that separates the malicious and trusted nodes to secure the interactions of vehicles in IoV. To test the model, simulation was conducted on varied threshold values. The results show that the PDR of a trusted node is 0.63, which is higher than the PDR of a malicious node, which is 0.15. On the basis of PDR, the number of available hops and trust dynamics, the malicious nodes are identified and discarded.
Saniya Zahoor and Roohie Naaz Mir in the paper titled "A Parallelization Based Data Management Framework for Pervasive IoT Applications" highlighted the recent studies and related information on data management for pervasive IoT applications having limited resources. The paper also proposes a parallelization-based data management framework for resource-constrained pervasive applications of IoT. The comparison of the proposed framework is done with the sequential approach through simulations and empirical data analysis. The results show an improvement in energy, processing, and storage requirements for the processing of data on the IoT device in the proposed framework as compared to the sequential approach. Patel et al. in the paper titled "Performance Analysis of Video ON-Demand and Live Video Streaming Using Cloud Based Services" presented a review of video analysis over the LVS & VoDS video application. The researchers compared different messaging brokers, which help to deliver each frame in a distributed pipeline, to analyze the impact of two message brokers for video analysis to achieve LVS & VoDS using AWS Elemental services. In addition, the researchers also analysed the Kafka configuration parameters for reliability in full-service mode. Saniya Zahoor and Roohie Naaz Mir in the paper titled "Design and Modeling of Resource-Constrained IoT Based Body Area Networks" presented the design and modeling of a resource-constrained BAN system and also discussed various scenarios of BAN in the context of resource constraints. The researchers also proposed an Advanced Edge Clustering (AEC) approach to manage the resources, such as energy, storage, and processing, of BAN devices while performing real-time data capture of critical health parameters and detection of abnormal patterns. The comparison of the AEC approach is done with the Stable Election Protocol (SEP) through simulations and empirical data analysis. The results show an improvement in energy, processing time and storage requirements for the processing of data on BAN devices in AEC as compared to SEP. Neelam Saleem Khan and Mohammad Ahsan Chishti in the paper titled "Security Challenges in Fog and IoT, Blockchain Technology and Cell Tree Solutions: A Review" outlined major authentication issues in IoT, mapped their existing solutions and further tabulated Fog and IoT security loopholes. Furthermore, this paper presents Blockchain, a decentralized distributed technology, as one of the solutions for authentication issues in IoT. In addition, the researchers discussed the strengths of Blockchain technology, the work done in this field, its adoption in the COVID-19 fight, and tabulated various challenges in Blockchain technology. The researchers also proposed the Cell Tree architecture as another solution to address some of the security issues in IoT, outlined its advantages over Blockchain technology and suggested some future directions to stimulate further attempts in this area. Bhadwal et al. in the paper titled "A Machine Translation System from Hindi to Sanskrit Language Using Rule Based Approach" proposed a rule-based machine translation system to bridge the language barrier between Hindi and Sanskrit by converting any text in Hindi to Sanskrit. The results are produced in the form of two confusion matrices, wherein a total of 50 random sentences and 100 tokens (Hindi words or phrases) were taken for system evaluation.
The semantic evaluation of 100 tokens produces an accuracy of 94%, while the pragmatic analysis of 50 sentences produces an accuracy of around 86%. Hence, the proposed system can be used to understand the whole translation process and can further be employed as a tool for learning as well as teaching. Further, this application can be embedded in local communication-based assistive Internet of Things (IoT) devices like Alexa or Google Assistant. Anshu Kumar Dwivedi and A.K. Sharma in the paper titled "NEEF: A Novel Energy Efficient Fuzzy Logic Based Clustering Protocol for Wireless Sensor Network" proposed a deterministic, novel energy-efficient fuzzy logic-based clustering protocol (NEEF) which considers primary and secondary factors in the fuzzy logic system while selecting cluster heads. After the selection of cluster heads, non-cluster-head nodes use fuzzy logic for prudent selection of their cluster head for cluster formation. NEEF is simulated and compared with two recent state-of-the-art protocols, namely SCHFTL and DFCR, under two scenarios. Simulation results show better performance in terms of load balancing, stability period, packets forwarded to the base station, average energy and extended network lifetime.
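As a loose sketch of how a fuzzy clustering protocol of this kind can combine a primary factor (residual energy) with secondary factors when scoring candidate cluster heads, the following Python fragment is illustrative only; the membership functions, weights and thresholds are assumptions, not the NEEF rule base.

```python
# Illustrative sketch of fuzzy scoring for cluster-head selection in a WSN.
# Membership functions and weights are assumptions, not the NEEF rule base.

def tri(x, a, b, c):
    """Triangular membership function on [a, c] peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def cluster_head_score(residual_energy, dist_to_base, neighbour_count):
    """Combine a primary factor (energy) with secondary factors (distance to
    the base station, node density) into a single eligibility score in [0, 1]."""
    energy_high = tri(residual_energy, 0.4, 1.0, 1.6)     # fraction of initial energy
    dist_close = tri(dist_to_base, -50.0, 0.0, 120.0)     # metres to base station
    density_good = tri(neighbour_count, 2.0, 10.0, 25.0)  # one-hop neighbours
    # Weighted aggregation: the primary factor dominates.
    return 0.6 * energy_high + 0.25 * dist_close + 0.15 * density_good

if __name__ == "__main__":
    nodes = {"n1": (0.9, 40.0, 8), "n2": (0.5, 90.0, 12), "n3": (0.95, 110.0, 4)}
    scores = {n: cluster_head_score(*p) for n, p in nodes.items()}
    print(max(scores, key=scores.get), scores)  # highest-scoring node becomes CH
```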
APA, Harvard, Vancouver, ISO, and other styles
22

Wu, Desheng, Jingxiu Song, Yuan Bian, Xiaolong Zheng, and Zhu Zhang. "Risk perception and intelligent decision in complex social information network." Industrial Management & Data Systems ahead-of-print, ahead-of-print (December 15, 2020). http://dx.doi.org/10.1108/imds-10-2020-0566.

Full text
Abstract:
Purpose: The increase in turbulence sources and risk points in complex social information networks has brought severe challenges. This paper discusses risk perception and intelligent decision-making under the complex social information network to maintain social security and financial security. Design/methodology/approach: Cross-modal semantic fusion and social risk perception, temporal knowledge graph construction and analysis, and complex social network intelligent decision-making methods have been studied. A big data computing platform integrating software and hardware for security operations is constructed on the basis of this technical support. Findings: The software and hardware integration platform driven by big data can realize joint identification of significant risks, intelligent analysis and large-scale group decision-making. Practical implications: The integrated platform can monitor the abnormal operation and potential associated risks of listed companies in real time, reduce information asymmetry and accounting costs and improve the capital market's ability to serve the real economy. It can also provide critical technical and decision support for public opinion monitoring and control. Originality/value: In this paper, the theory of knowledge-enhanced multi-modal, multi-granularity dynamic risk analysis and intelligent group decision-making and the idea of an inference think tank (I-aid-S) are proposed. New technologies and methods, such as association analysis, time series evolution and super-large-scale group decision-making, have been established. They are also applied to behaviour and situation deduction, public opinion and finance, and provide real-time, dynamic, fast and high-quality think tank services.
APA, Harvard, Vancouver, ISO, and other styles
23

Gala, Neel, Madhusudan G. S., Paul George, Anmol Sahoo, Arjun Menon, and Kamakoti V. "SHAKTI: An Open-Source Processor Ecosystem." Advanced Computing and Communications, September 10, 2018. http://dx.doi.org/10.34048/2018.3.f2.

Full text
Abstract:
Processors have become ubiquitous in all the appliances and machines we use, in both consumer and industrial settings. These processors range from extremely small, low-power microcontrollers (used in motor controls, home robots and appliances) to high-performance multi-core processors (used in servers and supercomputers). However, the growth of modern AI/ML environments (like Caffe [Jia et al. 2014] and TensorFlow [Abadi et al. 2016]) and the need for features like enhanced security have forced the industry to look beyond general-purpose solutions and towards domain-specific customizations. While a large number of companies today can develop custom ASICs (Application-Specific Integrated Circuits) and license specific silicon blocks from chip vendors to develop customized SoCs (Systems on Chip), at the heart of every design is the processor and the associated hardware. To serve modern workloads better, these processors also need to be customized, upgraded, re-designed and augmented suitably. This requires that vendors and consumers have access to appropriate processor variants and the flexibility to make modifications and ship them at an affordable cost.
APA, Harvard, Vancouver, ISO, and other styles
24

Nayak, Debabrata. "Understanding the Security, Privacy and Trust Challenges of Cloud Computing." Journal of Cyber Security and Mobility, April 25, 2012. http://dx.doi.org/10.13052/jcsm2245-1439.1237.

Full text
Abstract:
The overall objective of this paper is to understand the security, privacy and trust challenges of cloud computing and to advise on policy and other interventions which should be considered in order to ensure that Indian users of cloud environments are offered appropriate protections and to underpin the Indian cloud ecosystem. Cloud computing is increasingly subject to interest from policymakers and regulatory authorities. The Indian regulator needs to develop a pan-Indian ‘cloud strategy’ that will serve to support growth and jobs and build an innovation advantage for India. However, the concern is that a number of challenges and risks with respect to security, privacy and trust currently exist that may undermine the attainment of these policy objectives. Our approach has been to undertake an analysis of the technological, operational and legal intricacies of cloud computing, taking into consideration the Indian dimension and the interests and objectives of all stakeholders (citizens, individual users, companies, cloud service providers, regulatory bodies and relevant public authorities). This paper represents an evolutionary progression in understanding the implications of cloud computing for security, privacy and trust. Starting from an overview of the challenges identified in the area of cloud computing, the study builds upon real-life case study implementations of cloud computing for its analysis and subsequent policy considerations. As such, we intend to offer additional value for policymakers beyond a comprehensive understanding of the current theoretical or empirically derived evidence base, helping them understand cloud computing and the associated open questions surrounding some of the important security, privacy and trust issues.
APA, Harvard, Vancouver, ISO, and other styles
25

Saini, Satyam, Jimil M. Shah, Pardeep Shahi, Pratik Bansode, Dereje Agonafer, Prabjit Singh, Roger Schmidt, and Mike Kaler. "Effects of Gaseous and Particulate Contaminants on Information Technology Equipment Reliability—A Review." Journal of Electronic Packaging 144, no. 3 (September 15, 2021). http://dx.doi.org/10.1115/1.4051255.

Full text
Abstract:
Over the last decade, several hyper-scale data center companies such as Google, Facebook, and Microsoft have demonstrated the cost-saving capabilities of airside economization with direct/indirect heat exchangers by moving to chiller-less air-cooled data centers. Under pressure from data center owners, information technology equipment OEMs like Dell and IBM are developing information technology equipment that can withstand peak excursion temperature ratings of up to 45 °C, clearly outside the recommended envelope and into ASHRAE's A4 allowable envelope. As popular and widespread as these cooling technologies are becoming, airside economization comes with its challenges. There is a risk of premature hardware failures or reliability degradation posed by uncontrolled fine particulate and gaseous contaminants in the presence of temperature and humidity transients. This paper presents an in-depth review of the particulate and gaseous contamination-related challenges faced by modern data center facilities that use airside economization. The review summarizes specific experimental and computational studies that characterize the airborne contaminants and the associated failure modes and mechanisms. In addition, standard lab-based and in-situ test methods for measuring the corrosive effects of the particles and the corrosive gases, as a means of testing the robustness of the equipment against these contaminants under different temperature and relative humidity conditions, are also reviewed. It also outlines cost-sensitive mitigation techniques, such as improved filtration strategies and methods, that can be utilized for efficient implementation of airside economization.
APA, Harvard, Vancouver, ISO, and other styles
26

Mignoni, Julhete, Bruno Anicet Bittencourt, Silvio Bitencourt da Silva, and Aurora Carneiro Zen. "Orchestrators of innovation networks in the city level: the case of Pacto Alegre." Innovation & Management Review ahead-of-print, ahead-of-print (July 15, 2021). http://dx.doi.org/10.1108/inmr-01-2021-0002.

Full text
Abstract:
Purpose: This paper investigates the roles and activities of the orchestrators of innovation networks constituted within cities. In this sense, the authors expected to contribute to research related to the roles and activities of the orchestrators of innovation networks constituted in the scope of cities, given the large number and diversity of complex, multidimensional social actors (Castells & Borja, 1996; Reypens, Lievens & Blazevic, 2019). Design/methodology/approach: The authors conducted an exploratory study based on a single in-depth case study. The case chosen for the paper is that of Pacto Alegre. The case selection criterion was the relevance of the Pacto Alegre case in the construction of an innovation network in the city of Porto Alegre, Rio Grande do Sul, Brazil. The Pacto Alegre network was proposed by the Alliance for Innovation (composed of the three main universities in the city: UFRGS, PUCRS and UNISINOS) and by the Municipality of Porto Alegre. In addition to these actors, the network counts on financial and development institutions as sponsors, media partners, design partners, an advisory board (composed of five professionals considered references in different themes) and more than 100 companies, associations and institutions from different areas (Pacto Alegre, 2019). Data were collected from 09/20/2020 to 11/30/2020 through in-depth interviews, documentary research and non-participant observation. Findings: In this research, the authors highlighted the city as a community that involves and integrates various actors, such as citizens and companies, in collaborative innovation activities. For this, they proposed a framework on innovation networks and network orchestration. In this direction, seven dimensions of the "orchestration of innovation networks" were assumed as a result of the combination of previous studies by Dhanaraj and Parkhe (2006), Hurmelinna-Laukkanen et al. (2011) and da Silva and Bitencourt (2019). In the sequence, different roles of orchestrators identified in the literature were adopted based on the work by Pikkarainen et al. (2017) and Nielsen and Gausdal (2017). Research limitations/implications: The authors' results advance in relation to other fields by promoting the expansion of the "orchestration of innovation networks" model with the combination of distinct elements from the literature into a coherent whole (agenda setting, mobilization, network stabilization, creation and transfer of knowledge, innovation appropriability, coordination and co-creation) and in the validation of its applicability in the context of the innovation network studied. In addition, when relating different roles of orchestrators to the seven dimensions studied, it was realized that there is no linear and objective relationship between the dimensions and the roles of the orchestrator, as in each dimension there may be more than one role being played in the orchestration. Practical implications: The findings suggest two theoretical contributions. First, the authors identified a role not discussed in the literature, here called the communicator. In the case analysis, the authors observed the communicator role through functions performed by a media partner of the innovation network and by a group of civil society engaged in the city's causes. Second, the authors indicated a new dimension of orchestration related to the management of communication in the innovation network and its externalities, such as, for example, civil and organized society, characteristic of an innovation network set up within a city. Originality/value: Although several studies have proposed advances in the understanding of the orchestration of innovation networks (Dhanaraj & Parkhe, 2006; Ritala, Armila & Blomqvist, 2009; Nambisan & Sawhney, 2011; Hurmelinna-Laukkanen et al., 2011), the discussion on the topic is still a black box (Nilsen & Gausdal, 2017). More specifically, the authors identified a gap in the literature about the roles and activities of actors at the city level. Few studies have connected the regional dimension with the roles and activities of the orchestrators (Hurmelinna-Laukkanen et al., 2011; Pikkarainen et al., 2017), raising several challenges and opportunities to be considered by academics and managers.
APA, Harvard, Vancouver, ISO, and other styles
27

Popov, Anatoly, Konstantin Plotnikov, Pavel Ivanov, Denis Donya, Sergei Pachkin, and Irina Plotnikova. "Instant Drinks with Amaranth Flour: Simulation of Mechatronic Systems of Production." Food Processing: Techniques and Technology, June 27, 2020, 273–81. http://dx.doi.org/10.21603/2074-9414-2020-2-273-281.

Full text
Abstract:
Introduction. The world market for instant drinks is a highly competitive environment. New mechatronic production systems can help food companies maintain their competitiveness: they determine process modes, analyze them, and choose the optimal parameters, thus increasing the efficiency of the whole food enterprise. Another problem is the low biological and nutrient value of the finished product. New biologically active instant drinks could address the problems that arise under conditions of unsocial hours and an unbalanced diet. Products of plant origin contain many useful substances, and amaranth flour increases the biological value of the final products. The research objective was to develop mechatronic systems that could be used to produce instant drinks fortified with amaranth flour at the granulation stage. Study objects and methods. The present research featured a new line for the production of instant granular drinks fortified with amaranth flour. The study focused on the granulation section. A drum vibro-granulator with controlled segregated flows was used for the hardware design of the granulation process. The granulation process often exhibits an unstable particle size distribution, which is associated with non-uniform mixing of the dry bulk components with the binder solution. A mechatronic module can solve this problem, but it requires detailed information about the process conditions. Results and discussion. The research determined the dependence of specific energy consumption on the operating and design parameters of the granulation process in the new drum vibro-granulator. The experiment made it possible to obtain the optimal process parameters and improve the quality of the finished product. The flow rate of the binder solution was adjusted according to the readings of the power consumed by the kneading body engine, which stabilized the system. The value of this parameter is so small that its direct regulation is technically impossible. The paper introduces a block diagram of a multi-circuit cascade system to control the quality of the mixture automatically. The authors installed a valve on the pipeline that feeds the binder fluid into the pressure tank. The valve made it possible to control the process with sufficient accuracy. Conclusion. In the new mechatronic module of the drum vibro-granulator, the quality indicators of the resulting mix depend on the amount of power consumed by the kneading body engine and on the level of the binder solution in the pressure vessel.
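As a hedged sketch of the cascade idea described in the abstract, in which an outer quality loop driven by the kneader motor power trims the setpoint of an inner loop acting on the binder-feed valve, a simple two-loop PI simulation is given below; the gains, setpoints and first-order process models are illustrative assumptions, not the parameters of the authors' system.

```python
# Sketch of a two-loop (cascade) PI controller: an outer loop keeps the kneading
# motor power near its target by trimming the setpoint of an inner loop that
# positions the binder-feed valve. All gains and process models are assumptions.

class PI:
    def __init__(self, kp, ki, dt):
        self.kp, self.ki, self.dt, self.integral = kp, ki, dt, 0.0

    def step(self, setpoint, measured):
        error = setpoint - measured
        self.integral += error * self.dt
        return self.kp * error + self.ki * self.integral

def simulate(steps=3000, dt=0.1):
    outer = PI(kp=0.4, ki=0.05, dt=dt)   # power (quality) loop
    inner = PI(kp=2.0, ki=0.8, dt=dt)    # valve-position loop
    power, valve = 0.0, 0.0              # simple first-order plant states
    power_target = 5.0                   # hypothetical kW target for the kneader
    for _ in range(steps):
        valve_setpoint = outer.step(power_target, power)
        drive = inner.step(valve_setpoint, valve)
        valve += dt * (drive - valve)              # valve actuator dynamics
        power += dt * (0.8 * valve - 0.5 * power)  # mixer power responds to binder flow
    return power

if __name__ == "__main__":
    print(f"steady-state kneader power is roughly {simulate():.2f}")
```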
APA, Harvard, Vancouver, ISO, and other styles
28

Cesarini, Paul. "‘Opening’ the Xbox." M/C Journal 7, no. 3 (July 1, 2004). http://dx.doi.org/10.5204/mcj.2371.

Full text
Abstract:
“As the old technologies become automatic and invisible, we find ourselves more concerned with fighting or embracing what’s new”—Dennis Baron, From Pencils to Pixels: The Stages of Literacy Technologies. What constitutes a computer, as we have come to expect it? Are they necessarily monolithic “beige boxes”, connected to computer monitors, sitting on computer desks, located in computer rooms or computer labs? In order for a device to be considered a true computer, does it need to have a keyboard and mouse? If this were 1991 or earlier, our collective perception of what computers are and are not would largely be framed by this “beige box” model: computers are stationary, slab-like, and heavy, and their natural habitats must be in rooms specifically designated for that purpose. In 1992, when Apple introduced the first PowerBook, our perception began to change. Certainly there had been other portable computers prior to that, such as the Osborne 1, but these were more luggable than portable, weighing just slightly less than a typical sewing machine. The PowerBook and subsequent waves of laptops, personal digital assistants (PDAs), and so-called smart phones from numerous other companies have steadily forced us to rethink and redefine what a computer is and is not, how we interact with them, and the manner in which these tools might be used in the classroom. However, this reconceptualization of computers is far from over, and is in fact steadily evolving as new devices are introduced, adopted, and subsequently adapted for uses beyond their original purpose. Pat Crowe’s Book Reader project, for example, has morphed Nintendo’s GameBoy and GameBoy Advance into a viable electronic book platform, complete with images, sound, and multi-language support. (Crowe, 2003) His goal was to take this existing technology, previously framed only within the context of proprietary adolescent entertainment, and repurpose it for open, flexible uses typically associated with learning and literacy. Similar efforts are underway to repurpose Microsoft’s Xbox, perhaps the ultimate symbol of “closed” technology given Microsoft’s propensity for proprietary code, in order to make it a viable platform for Open Source Software (OSS). However, these efforts are not foregone conclusions, and are in fact typical of the ongoing battle over who controls the technology we own in our homes, and how open source solutions are often at odds with a largely proprietary world. In late 2001, Microsoft launched the Xbox with a multimillion dollar publicity drive featuring events, commercials, live models, and statements claiming this new console gaming platform would “change video games the way MTV changed music”. (Chan, 2001) The Xbox launched with the following technical specifications: a 733 MHz Pentium III; 64 MB RAM; an 8 or 10 GB internal hard disk drive; a CD/DVD-ROM drive (speed unknown); an Nvidia graphics processor with HDTV support; 4 USB 1.1 ports (adapter required); AC3 audio; a 10/100 Ethernet port; and an optional 56k modem (TechTV, 2001). While current computers dwarf these specifications in virtually all areas now, for 2001 these were roughly on par with many desktop systems. The retail price at the time was $299, but steadily dropped to nearly half that with additional price cuts anticipated. Based on these features, the preponderance of “off the shelf” parts and components used, and the relatively reasonable price, numerous programmers quickly became interested in seeing if it was possible to run Linux and additional OSS on the Xbox.
In each case, the goal has been similar: exceed the original purpose of the Xbox, to determine if and how well it might be used for basic computing tasks. If these attempts prove to be successful, the Xbox could allow institutions to dramatically increase the student-to-computer ratio in select environments, or allow individuals who could not otherwise afford a computer to instead buy an Xbox, download and install Linux, and use this new device to write, create, and innovate. This drive to literally and metaphorically “open” the Xbox comes from many directions. Such efforts include Andrew Huang’s self-published “Hacking the Xbox” book in which, under the auspices of reverse engineering, Huang analyzes the architecture of the Xbox, detailing step-by-step instructions for flashing the ROM, upgrading the hard drive and/or RAM, and generally prepping the device for use as an information appliance. Additional initiatives include Lindows CEO Michael Robertson’s $200,000 prize to encourage Linux development on the Xbox, and the Xbox Linux Project at SourceForge. What is Linux? Linux is an alternative operating system initially developed in 1991 by Linus Benedict Torvalds. Linux was based off a derivative of the MINIX operating system, which in turn was a derivative of UNIX. (Hasan 2003) Linux is currently available for Intel-based systems that would normally run versions of Windows, PowerPC-based systems that would normally run Apple’s Mac OS, and a host of other handheld, cell phone, or so-called “embedded” systems. Linux distributions are based almost exclusively on open source software, graphical user interfaces, and middleware components. While there are commercial Linux distributions available, these mainly just package the freely available operating system with bundled technical support, manuals, some exclusive or proprietary commercial applications, and related services. Anyone can still download and install numerous Linux distributions at no cost, provided they do not need technical support beyond the community / enthusiast level. Typical Linux distributions come with open source web browsers, word processors and related productivity applications (such as those found in OpenOffice.org), and related tools for accessing email, organizing schedules and contacts, etc. Certain Linux distributions are more or less designed for network administrators, system engineers, and similar “power users” somewhat distanced from that of our students. However, several distributions including Lycoris, Mandrake, LindowsOS, and others are specifically tailored as regular, desktop operating systems, with regular, everyday computer users in mind. As Linux has no draconian “product activation key” method of authentication, or digital rights management-laden features associated with installation and implementation on typical desktop and laptop systems, Linux is becoming an ideal choice both individually and institutionally. It still faces an uphill battle in terms of achieving widespread acceptance as a desktop operating system. As Finnie points out in Desktop Linux Edges Into The Mainstream: “to attract users, you need ease of installation, ease of device configuration, and intuitive, full-featured desktop user controls. It’s all coming, but slowly. With each new version, desktop Linux comes closer to entering the mainstream.
It’s anyone’s guess as to when critical mass will be reached, but you can feel the inevitability: There’s pent-up demand for something different.” (Finnie 2003) Linux is already spreading rapidly in numerous capacities, in numerous countries. Linux has “taken hold wherever computer users desire freedom, and wherever there is demand for inexpensive software.” Reports from technology research company IDG indicate that roughly a third of computers in Central and South America run Linux. Several countries, including Mexico, Brazil, and Argentina, have all but mandated that state-owned institutions adopt open source software whenever possible to “give their people the tools and education to compete with the rest of the world.” (Hills 2001) The Goal. Less than a year after Microsoft introduced the Xbox, the Xbox Linux Project formed. The Xbox Linux Project has a goal of developing and distributing Linux for the Xbox gaming console, “so that it can be used for many tasks that Microsoft don’t want you to be able to do. ...as a desktop computer, for email and browsing the web from your TV, as a (web) server” (Xbox Linux Project 2002). Since the Linux operating system is open source, meaning it can freely be tinkered with and distributed, those who opt to download and install Linux on their Xbox can do so with relatively little overhead in terms of cost or time. Additionally, Linux itself looks very “windows-like”, making for a fairly low learning curve. To help increase overall awareness of this project and assist in diffusing it, the Xbox Linux Project offers step-by-step installation instructions, with the end result being a system capable of using common peripherals such as a keyboard and mouse, scanner, printer, a “webcam and a DVD burner, connected to a VGA monitor; 100% compatible with a standard Linux PC, all PC (USB) hardware and PC software that works with Linux.” (Xbox Linux Project 2002) Such a system could have tremendous potential for technology literacy. Pairing an Xbox with Linux and OpenOffice.org, for example, would provide our students with essentially the same capability any of them would expect from a regular desktop computer. They could send and receive email, communicate using instant messaging, IRC, or newsgroup clients, and browse Internet sites just as they normally would. In fact, the overall browsing experience for Linux users is substantially better than that for most Windows users. Internet Explorer, the default browser on all systems running Windows-based operating systems, lacks basic features standard in virtually all competing browsers. Native blocking of “pop-up” advertisements is still not yet possible in Internet Explorer without the aid of a third-party utility. Tabbed browsing, which involves the ability to easily open and sort through multiple Web pages in the same window, often with a single mouse click, is also missing from Internet Explorer. The same can be said for a robust download manager, “find as you type”, and a variety of additional features. Mozilla, Netscape, Firefox, Konqueror, and essentially all other OSS browsers for Linux have these features. Of course, most of these browsers are also available for Windows, but Internet Explorer is still considered the standard browser for the platform. If the Xbox Linux Project becomes widely diffused, our students could edit and save Microsoft Word files in OpenOffice.org’s Writer program, and do the same with PowerPoint and Excel files in similar OpenOffice.org components.
They could access instructor comments originally created in Microsoft Word documents, and in turn could add their own comments and send the documents back to their instructors. They could even perform many functions not yet possible in Microsoft Office, including saving files in PDF or Flash format without needing Adobe’s Acrobat product or Macromedia’s Flash Studio MX. Additionally, by way of this project, the Xbox can also serve as “a Linux server for HTTP/FTP/SMB/NFS, serving data such as MP3/MPEG4/DivX, or a router, or both; without a monitor or keyboard or mouse connected.” (Xbox Linux Project 2003) In a very real sense, our students could use these inexpensive systems, previously framed only within the context of entertainment, for educational purposes typically associated with computer-mediated learning. Problems: Control and Access. The existing rhetoric of technological control surrounding current and emerging technologies appears to be stifling many of these efforts before they can even be brought to the public. This rhetoric of control is largely typified by overly-restrictive digital rights management (DRM) schemes antithetical to education, and the Digital Millennium Copyright Act (DMCA). Combined, both are currently being used as technical and legal clubs against these efforts. Microsoft, for example, has taken a dim view of any efforts to adapt the Xbox to Linux. Microsoft CEO Steve Ballmer, who has repeatedly referred to Linux as a cancer and has equated OSS with being un-American, stated, “Given the way the economic model works - and that is a subsidy followed, essentially, by fees for every piece of software sold - our license framework has to do that.” (Becker 2003) Since the Xbox is based on a subsidy model, meaning that Microsoft actually sells the hardware at a loss and instead generates revenue off software sales, Ballmer launched a series of concerted legal attacks against the Xbox Linux Project and similar efforts. In 2002, Nintendo, Sony, and Microsoft simultaneously sued Lik Sang, Inc., a Hong Kong-based company that produces programmable cartridges and “mod chips” for the PlayStation II, Xbox, and Game Cube. Nintendo states that its company alone loses over $650 million each year due to piracy of their console gaming titles, which typically originate in China, Paraguay, and Mexico. (GameIndustry.biz) Currently, many attempts to “mod” the Xbox require the use of such chips. As Lik Sang is one of the only suppliers, initial efforts to adapt the Xbox to Linux slowed considerably. Despite the fact that such chips can still be ordered and shipped here by less conventional means, it does not change the fact that the chips themselves would be illegal in the U.S. due to the anticircumvention clause in the DMCA itself, which is designed specifically to protect any DRM-wrapped content, regardless of context. The Xbox Linux Project then attempted to get Microsoft to officially sanction their efforts. They were not only rebuffed, but Microsoft then opted to hire programmers specifically to create technological countermeasures for the Xbox, to defeat additional attempts at installing OSS on it. Undeterred, the Xbox Linux Project eventually arrived at a method of installing and booting Linux without the use of mod chips, and has taken a more defiant tone now with Microsoft regarding their circumvention efforts.
(Lettice 2002) They state that “Microsoft does not want you to use the Xbox as a Linux computer, therefore it has some anti-Linux-protection built in, but it can be circumvented easily, so that an Xbox can be used as what it is: an IBM PC.” (Xbox Linux Project 2003) Problems: Learning Curves and Usability. In spite of the difficulties imposed by the combined technological and legal attacks on this project, it has succeeded at infiltrating this closed system with OSS. It has done so beyond the mere prototype level, too, as evidenced by the Xbox Linux Project now having both complete, step-by-step instructions available for users to modify their own Xbox systems, and an alternate plan catering to those who have the interest in modifying their systems, but not the time or technical inclinations. Specifically, this option involves users mailing their Xbox systems to community volunteers within the Xbox Linux Project, and basically having these volunteers perform the necessary software preparation or actually do the full Linux installation for them, free of charge (presumably not including shipping). This particular aspect of the project, dubbed “Users Help Users”, appears to be fairly new. Yet, it already lists over sixty volunteers capable and willing to perform this service, since “Many users don’t have the possibility, expertise or hardware” to perform these modifications. Amazingly enough, in some cases these volunteers are barely out of junior high school. One such volunteer stipulates that those seeking his assistance keep in mind that he is “just 14” and that when performing these modifications he “...will not always be finished by the next day”. (Steil 2003) In addition to this interesting if somewhat unusual level of community-driven support, there are currently several Linux-based options available for the Xbox. The two that are perhaps the most developed are GentooX, which is based off the popular Gentoo Linux distribution, and Ed’s Debian, based off the Debian GNU / Linux distribution. Both Gentoo and Debian are “seasoned” distributions that have been available for some time now, though Daniel Robbins, Chief Architect of Gentoo, refers to the product as actually being a “metadistribution” of Linux, due to its high degree of adaptability and configurability. (Gentoo 2004) Specifically, Robbins asserts that Gentoo is capable of being “customized for just about any application or need. ...an ideal secure server, development workstation, professional desktop, gaming system, embedded solution or something else—whatever you need it to be.” (Robbins 2004) He further states that the whole point of Gentoo is to provide a better, more usable Linux experience than that found in many other distributions. Robbins states that: “The goal of Gentoo is to design tools and systems that allow a user to do their work pleasantly and efficiently as possible, as they see fit. Our tools should be a joy to use, and should help the user to appreciate the richness of the Linux and free software community, and the flexibility of free software. ...Put another way, the Gentoo philosophy is to create better tools. When a tool is doing its job perfectly, you might not even be very aware of its presence, because it does not interfere and make its presence known, nor does it force you to interact with it when you don’t want it to.
The tool serves the user rather than the user serving the tool.” (Robbins 2004) There is also a so-called “live CD” Linux distribution suitable for the Xbox, called dyne:bolic, and an in-progress release of Slackware Linux, as well. According to the Xbox Linux Project, the only difference between the standard releases of these distributions and their Xbox counterparts is that “...the install process – and naturally the bootloader, the kernel and the kernel modules – are all customized for the Xbox.” (Xbox Linux Project, 2003) Of course, even if Gentoo is as user-friendly as Robbins purports, even if the Linux kernel itself has become significantly more robust and efficient, and even if Microsoft again drops the retail price of the Xbox, is this really a feasible solution in the classroom? Does the Xbox Linux Project have an army of 14-year-olds willing to modify dozens, perhaps hundreds of these systems for use in secondary schools and higher education? Of course not. If such an institutional rollout were to be undertaken, it would require significant support from not only faculty, but Department Chairs, Deans, IT staff, and quite possibly Chief Information Officers. Disk images would need to be customized for each institution to reflect their respective needs, ranging from setting specific home pages on web browsers, to bookmarks, to custom back-up and / or disk re-imaging scripts, to network authentication. This would be no small task. Yet, the steps mentioned above are essentially no different than what would be required of any IT staff when creating a new disk image for a computer lab, be it one for a Windows-based system or a Mac OS X-based one. The primary difference would be Linux itself—nothing more, nothing less. The institutional difficulties in undertaking such an effort would likely be encountered prior to even purchasing a single Xbox, in that they would involve the same difficulties associated with any new hardware or software initiative: staffing, budget, and support. If the institution in question is either unwilling or unable to address these three factors, it would not matter if the Xbox itself was as free as Linux. An Open Future, or a Closed One? It is unclear how far the Xbox Linux Project will be allowed to go in its efforts to invade an essentially proprietary system with OSS. Unlike Sony, which has made deliberate steps to commercialize similar efforts for their PlayStation 2 console, Microsoft appears resolute in fighting OSS on the Xbox by any means necessary. They will continue to crack down on any companies selling so-called mod chips, and will continue to employ technological protections to keep the Xbox “closed”. Despite clear evidence to the contrary, in all likelihood Microsoft will continue to equate any OSS efforts directed at the Xbox with piracy-related motivations. Additionally, Microsoft’s successor to the Xbox would likely include additional anticircumvention technologies that could set the Xbox Linux Project back by months or years, or could stop it cold. Of course, it is difficult to say with any degree of certainty how this “Xbox 2” (perhaps a more appropriate name might be “Nextbox”) will impact this project. Regardless of how this device evolves, there can be little doubt of the value of Linux, OpenOffice.org, and other OSS to teaching and learning with technology. This value exists not only in terms of price, but in increased freedom from policies and technologies of control.
New Linux distributions from Gentoo, Mandrake, Lycoris, Lindows, and other companies are just now starting to focus their efforts on Linux as user-friendly, easy to use desktop operating systems, rather than just server or “techno-geek” environments suitable for advanced programmers and computer operators. While metaphorically opening the Xbox may not be for everyone, and may not be a suitable computing solution for all, I believe we as educators must promote and encourage such efforts whenever possible. I suggest this because I believe we need to exercise our professional influence and ultimately shape the future of technology literacy, either individually as faculty and collectively as departments, colleges, or institutions. Moran and Fitzsimmons-Hunter argue this very point in Writing Teachers, Schools, Access, and Change. One of their fundamental provisions they use to define “access” asserts that there must be a willingness for teachers and students to “fight for the technologies that they need to pursue their goals for their own teaching and learning.” (Taylor / Ward 160) Regardless of whether or not this debate is grounded in the “beige boxes” of the past, or the Xboxes of the present, much is at stake. Private corporations should not be in a position to control the manner in which we use legally-purchased technologies, regardless of whether or not these technologies are then repurposed for literacy uses. I believe the exigency associated with this control, and the ongoing evolution of what is and is not a computer, dictates that we assert ourselves more actively into this discussion. We must take steps to provide our students with the best possible computer-mediated learning experience, however seemingly unorthodox the technological means might be, so that they may think critically, communicate effectively, and participate actively in society and in their future careers. About the Author Paul Cesarini is an Assistant Professor in the Department of Visual Communication & Technology Education, Bowling Green State University, Ohio Email: pcesari@bgnet.bgsu.edu Works Cited http://xbox-linux.sourceforge.net/docs/debian.php>.Baron, Denis. “From Pencils to Pixels: The Stages of Literacy Technologies.” Passions Pedagogies and 21st Century Technologies. Hawisher, Gail E., and Cynthia L. Selfe, Eds. Utah: Utah State University Press, 1999. 15 – 33. Becker, David. “Ballmer: Mod Chips Threaten Xbox”. News.com. 21 Oct 2002. http://news.com.com/2100-1040-962797.php>. http://news.com.com/2100-1040-978957.html?tag=nl>. http://archive.infoworld.com/articles/hn/xml/02/08/13/020813hnchina.xml>. http://www.neoseeker.com/news/story/1062/>. http://www.bookreader.co.uk>.Finni, Scott. “Desktop Linux Edges Into The Mainstream”. TechWeb. 8 Apr 2003. http://www.techweb.com/tech/software/20030408_software. http://www.theregister.co.uk/content/archive/29439.html http://gentoox.shallax.com/. http://ragib.hypermart.net/linux/. http://www.itworld.com/Comp/2362/LWD010424latinlinux/pfindex.html. http://www.xbox-linux.sourceforge.net. http://www.theregister.co.uk/content/archive/27487.html. http://www.theregister.co.uk/content/archive/26078.html. http://www.us.playstation.com/peripherals.aspx?id=SCPH-97047. http://www.techtv.com/extendedplay/reviews/story/0,24330,3356862,00.html. http://www.wired.com/news/business/0,1367,61984,00.html. http://www.gentoo.org/main/en/about.xml http://www.gentoo.org/main/en/philosophy.xml http://techupdate.zdnet.com/techupdate/stories/main/0,14179,2869075,00.html. 
http://xbox-linux.sourceforge.net/docs/usershelpusers.html http://www.cnn.com/2002/TECH/fun.games/12/16/gamers.liksang/. Citation reference for this article MLA Style Cesarini, Paul. "“Opening” the Xbox" M/C: A Journal of Media and Culture <http://www.media-culture.org.au/0406/08_Cesarini.php>. APA Style Cesarini, P. (2004, Jul1). “Opening” the Xbox. M/C: A Journal of Media and Culture, 7, <http://www.media-culture.org.au/0406/08_Cesarini.php>
APA, Harvard, Vancouver, ISO, and other styles
29

Potts, Jason. "The Alchian-Allen Theorem and the Economics of Internet Animals." M/C Journal 17, no. 2 (February 18, 2014). http://dx.doi.org/10.5204/mcj.779.

Full text
Abstract:
Economics of Cute. There are many ways to study cute: for example, neuro-biology (cute as adaptation); anthropology (cute in culture); political economy (cute industries, how cute exploits consumers); cultural studies (social construction of cute); media theory and politics (representation and identity of cute), and so on. What about economics? At first sight, this might point to a money-capitalism nexus (“the cute economy”), but I want to argue here that the economics of cute actually works through choice interacting with fixed costs and what economists call “the substitution effect”. Cute, in conjunction with the Internet, affects the trade-offs involved in choices people make. Let me put that more starkly: cute shapes the economy. This can be illustrated with internet animals, which at the time of writing means Grumpy Cat. I want to explain how that mechanism works – but to do so I will need some abstraction. This is not difficult – a simple application of a well-known economics model, namely the Allen-Alchian theorem, or the “third law of demand”. But I am going to take some liberties in order to represent that model clearly in this short paper. Specifically, I will model just two extremes of quality (“opera” and “cat videos”) to represent end-points of a spectrum. I will also assume that the entire effect of the internet is to lower the cost of cat videos. Now obviously these are just simplifying assumptions “for the purpose of the model”. And the purpose of the model is to illuminate a further aspect of how we might understand cute, by using an economic model of choice and its consequences. This is a standard technique in economics, but not so in cultural studies, so I will endeavour to explain these moments as we go, so as to avoid any confusion about analytic intent. The purpose of this paper is to suggest a way that a simple economic model might be applied to augment the cultural study of cute by seeking to unpack its economic aspect. This can be elucidated by considering the rise of internet animals as a media-cultural force, as epitomized by “cat videos”. We can explain this through an application of price theory and the theory of demand that was first proposed by Armen Alchian and William Allen. They showed how an equal fixed cost imposed on high-quality and low-quality goods alike caused a shift in consumption toward the higher-quality good, because it is now relatively cheaper. Alchian and Allen had in mind something like transport costs on agricultural goods (such as apples). But it is also true that the same effect works in reverse (Cowen), and the purpose of this paper is to develop that logic to contribute to explaining how certain structural shifts in production and consumption in digital media, particularly the rise of blog formats such as Tumblr, a primary supplier of kittens on the Internet, can be in part understood as a consequence of this economic mechanism. There are three key assumptions to build this argument. The first is that the cost of the internet is independent of what it carries. This is certainly true at the level of machine code, and largely true at higher levels. What might be judged aesthetically high quality or low quality content – say a Bach cantata or a funny cat video – is treated the same way if both have the same file size. This is a physical and computational aspect of net-neutrality. The internet – or digitization – functions as a fixed cost imposed regardless of what cultural quality is moving across it.
Second, while there are costs to using the internet (for example, in hardware or concerning digital literacy), these costs are lower than previous analog forms of information and cultural production and dissemination. This is not an empirical claim, but a logical one (revealed preference): if it were not so, people would not have chosen it. The first two points – net neutrality and lowered cost – I want to take as working assumptions, although they can obviously be debated. But that is not the purpose of the paper, which is instead the third point – the “Alchian-Allen theorem”, or the third fundamental law of demand. The Alchian-Allen Theorem. The Alchian-Allen theorem is an extension of the law of demand (Razzolini et al) to consider how the distribution of high quality and low quality substitutes of the same good (such as apples) is affected by the imposition of a fixed cost (such as transportation). It is also known as the “shipping the good apples out” theorem, after Borcherding and Silberberg explained why places that produce a lot of apples – such as Seattle in the US – often also have low supplies of high quality apples compared to places that do not produce apples, such as New York. The puzzle of “why can’t you get good apples in Seattle?” is a simple but clever application of price theory. When a place produces high quality and low quality items, it will be rational for those in faraway places to consume the high quality items, and it will be rational for the producers to ship them, leaving only the low quality items locally. Why? Assume preferences and incomes are the same everywhere and that transport cost is the same regardless of whether the item shipped is high or low quality. Both high quality and low quality apples are more expensive in New York compared to Seattle, but because the fixed transport cost applies to both, the high quality apples are relatively less expensive. Rational consumers in New York will consume more high quality apples. This makes fewer available in Seattle. (Figure 1: Change in consumption ratio after the imposition of a fixed cost to all apples.) Another example: Australians drink higher quality Californian wine than Californians, and vice versa, because it is only worth shipping the high quality wine out. A counter-argument is that learning effects dominate: with high quality local product, local consumers learn to appreciate quality, and have different preferences (Cowen and Tabarrok). The Alchian-Allen theorem applies to any fixed cost that applies generally. For example, consider illegal drugs (such as alcohol during the US prohibition, or marijuana or cocaine presently) and the implication of a fixed penalty – such as a fine, or prison sentence, which is like a cost – applied to trafficking or consumption. Alchian-Allen predicts a shift toward higher quality (or stronger) drugs, because with a fixed penalty and probability of getting caught, the relatively stronger substance is now relatively cheaper. Empirical work finds that this effect did occur during alcohol prohibition, and is currently occurring in narcotics (Thornton Economics of Prohibition, “Potency of illegal drugs”). Another application proposed by Steven Cuellar uses Alchian-Allen to explain a well-known statistical phenomenon: why women taking the contraceptive pill on average prefer “more masculine” men. This is once again a shift toward quality predicated on a falling relative price based on a common ‘fixed price’ (taking the pill) of sexual activity.
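To make the apples example concrete before turning to further applications, a worked illustration of the relative-price mechanism (with invented numbers, not figures from the article) runs as follows:

```latex
% Worked example of the Alchian-Allen relative-price shift (illustrative numbers).
% Without the fixed cost: high-quality apples cost twice as much as low-quality ones.
\[
\frac{p_H}{p_L} = \frac{1.00}{0.50} = 2.0
\]
% Adding a fixed per-unit shipping cost t = 0.50 to both goods lowers the ratio,
% so the high-quality apple becomes relatively cheaper at the destination:
\[
\frac{p_H + t}{p_L + t} = \frac{1.00 + 0.50}{0.50 + 0.50} = 1.5
\]
% Removing a fixed cost (the Internet case) runs the logic in reverse:
% the ratio rises back toward 2.0, favouring the lower-quality substitute.
```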
Jean Eid et al show that the result also applies to racehorses (the good horses get shipped out), and Staten and Umbeck show it applies to students – the good students go to faraway universities, and the good students in those places do the same. So that’s apples, drugs, sex and racehorses. What about the Internet and kittens? Allen-Alchian Explains Why the Internet Is Made of Cats. In analog days, before digitization and the Internet, the transactions costs involved with various consumption items, whether commodities or media, meant that the Alchian-Allen effect pushed in the direction of higher quality, bundled product. Any additional fixed costs, such as higher transport costs, or taxes or duties, or transactions costs associated with search and coordination and payment, i.e. costs that affected all substitutes in the same way, would tend to make the higher quality item relatively less expensive, increasing its consumption. But digitisation and the Internet reverse the direction of these transactions costs. Rather than adding a fixed cost, such as transport costs, the various aspects of the digital revolution are equivalent to a fall in fixed costs, particularly access. These factors are not just one thing, but a suite of changes that add up to lowered transaction costs in the production, distribution and consumption of media, culture and games. These include: the internet and world-wide-web and its unencumbered operation; the growth and increasing efficacy of search technology; growth of universal broadband for fast, wide band-width access; growth of mobile access (through smartphones and other appliances); growth of social media networks (Facebook, Twitter; Metcalfe’s law); growth of developer and distribution platforms (iPhone, Android, iTunes); globally falling hardware and network access costs (Moore’s law); growth of e-commerce (eBay, Amazon, Etsy) and e-payments (PayPal, bitcoin); expansions of digital literacy and competence; and Creative Commons. These effects do not simply shift us down a demand curve for each given consumption item. This effect alone simply predicts that we consume more. But the Alchian-Allen effect makes a different prediction, namely that we consume not just more, but also different. These effects function to reduce the overall fixed costs or transactions costs associated with any consumption, sharing, or production of media, culture or games over the internet (or in digital form). With this overall fixed cost component now reduced, it represents a relatively larger decline in cost at the lower-quality, more bite-sized or unbundled end of the media goods spectrum. As such, this predicts a change in the composition of the overall consumption basket to reflect the changed relative prices that these above effects give rise to. See Figure 2 below (based on a blog post by James Oswald). The key to the economics of cute, in consequence of digitisation, is to follow through the qualitative change that, because of the Alchian-Allen effect, moves away from the high-quality, highly-bundled, high-value end of the media goods spectrum. The “pattern prediction” here is toward more, different, and lower quality: toward five minutes of “Internet animals”, rather than a full day at the zoo. (Figure 2: Reducing transaction costs lowers the relative price of cat videos.) Consider five dimensions in which this more-and-different tendency plays out. Consumption. These effects make digital and Internet-based consumption cheaper, shifting us down a demand curve, so we consume more.
That’s the first law of demand in action: i.e. demand curves slope downwards. But a further effect – brilliantly set out in Cowen – is that we also consume lower-quality media. This is not a value judgment. These lower-quality media may well have much higher aesthetic value. They may be funnier, or more tragic and sublime; or faster, or not. This is not about absolute value; only about relative value. Digitization operating through Allen-Alchian skews consumption toward the lower quality ends in some dimensions: whether this is time, as in shorter – or cost, as in cheaper – or size, as in smaller – or transmission quality, as in gifs. This can also be seen as a form of unbundling, of dropping of dimensions that are not valued to create a simplified product. So we consume different, with higher variance. We sample more than we used to. This means that we explore a larger information world. Consumption is bite-sized and assorted. This tendency is evident in the rise of apps and in the proliferation of media forms and devices and the value of interoperability. Production. As consumption shifts (lower quality, greater variety), so must production. The production process has two phases: (1) figuring out what to do, or development; and (2) doing it, or making. The world of trade and globalization describes the latter part: namely efficient production. The main challenge is the world of innovation: the entrepreneurial and experimental world of figuring out what to do, and how. It is this second world that is radically transformed by implications of lowered transaction costs. One implication is growth of user-communities based around collaborative media projects (such as open source software) and community-based platforms or common pool resources for sharing knowledge, such as the “Maker movement” (Anderson 2012). This phenomenon of user-co-creation, or produsers, has been widely recognized as an important new phenomenon in the innovation and production process, particularly those processes associated with new digital technologies. There are numerous explanations for this, particularly around preferences for cooperation, community-building, social learning and reputational capital, and entrepreneurial expectations (Quiggin and Potts, Banks and Potts). Business Models. The Alchian-Allen effect on consumption and production follows through to business models. A business model is a way of extracting value that represents some strategic equilibrium between market forms, organizational structures, technological possibilities and institutional framework and environmental conditions that manifests in entrepreneurial patterns of business strategy and particular patterns of investment and organization. The discovery of effective business models is a key process of market capitalist development and competition. The Alchian-Allen effect impacts on the space of effective viable business models. Business models that used to work will work less well, or not at all. And new business models will be required. It is a significant challenge to develop these “economic technologies”. Perhaps no less so than development of the physical technologies, new business models are produced through experimental trial and error. They cannot be known in advance or planned.
But business models will change, which will affect not only the constellation of existing companies and the value propositions that underlie them, but also the broader specializations based on these in terms of skill sets held and developed by people, locations of businesses and people, and so on. New business models will emerge from a process of Schumpeterian creative destruction as it unfolds (Beinhocker). The large production, high development cost, proprietary intellectual property and systems based business model is not likely to survive, other than as niche areas. More experimental, discovery-focused, fast-development-then-scale-up based business models are more likely to fit the new ecology. Social Network Markets & Novelty Bundling Markets. The growth of variety and diversity of choice that comes with this change in the way media is consumed, reflecting a reallocation of consumption toward smaller, more bite-sized, lower-valued chunks (the Alchian-Allen effect), presents consumers with a problem, namely that they have to make more choices over novelty. Choice over novelty is difficult for consumers because it is experimental and potentially costly due to risk of mistakes (Earl), but it also presents entrepreneurs with an opportunity to seek to help solve that problem. The problem is a simple consequence of bounded rationality and time scarcity. It is equivalent to saying that the cost of choice rises monotonically with the number of choices, and that because there is no way to make a complete rational choice, agents will use decision or choice heuristics. These heuristics can be developed independently by the agents themselves through experience, or they can be copied or adopted from others (Earl and Potts). What Potts et al call “social network markets” and what Potts calls “novelty bundling markets” are both instances of the latter process of copying and adoption of decision rules. Social network markets occur when agents use a “copy the most common” or “copy the highest rank” meta-level decision rule (Bentley et al) to deal with uncertainty. Social network markets can be efficient aggregators of distributed information, but they can also be path-dependent, and usually lead to winner-take-all situations and dynamics. These can result in huge pay-off differentials between first and second or fifth place, even when the initial quality differentials are slight or random. Diversity, rapid experimentation, and “fast-failure” are likely to be effective strategies. It also points to the role of trust and reputation in using adopted decision rules and the information economics that underlies that: namely that specialization and trade applies to the production and consumption of information as well as commodities. Novelty bundling markets are an entrepreneurial response to this problem, and observable in a range of new media and creative industries contexts. These include arts, music or food festivals or fairs where entertainment and sociality is combined with low opportunity cost situations in which to try bundles of novelty and connect with experts. These are offered by agents who developed expert preferences through investment and experience in consumption of the particular segment or domain. They are expert consumers and are selling their “decision rules” and not just the product. The more the production and consumption of media and digital information goods and services experiences the Alchian-Allen effect, the greater the importance of novelty bundling markets.
Intellectual Property & Regulation

A further implication is that rent-seeking solutions may also emerge. This can be seen in two dimensions: the pursuit of intellectual property (Boldrin and Levine), and the demand for regulation (Stigler). The Alchian-Allen-induced shift will affect markets and business models (and firms), and this will induce strategic defensive and aggressive responses from different organizations. Some organizations will seek to fight and adapt to this new world through innovative competition. Other firms will fight through political connections. Most incumbent firms will have substantial investments in IP or in the business model it supports. Yet the intellectual property model is optimized for high-quality, large-volume, centralized production and global sales of undifferentiated product. Much industrial and labour regulation is built on that model. How governments support such industries is predicated on the stability of this model. The Alchian-Allen effect threatens to upset that model. Political pushback will invariably take the form of opposing most new business models and the new entrants they carry.

Conclusion

I have presented here a lesser-known but important theorem in applied microeconomics – the Alchian-Allen effect – and explained why its inverse is central to understanding the evolution of new media industries, and also why cute animals proliferate on the Internet. The theorem states that when a fixed cost is added to substitute goods, consumers will shift to the higher-quality item (now relatively less expensive). The theorem also holds in reverse: when a fixed cost is removed from substitute items, we expect a shift to lower-quality consumption. The Internet has dramatically lowered the fixed costs of access to media consumption, and various development platforms have similarly lowered the costs of production. Alchian-Allen predicts a shift to lower-quality, “bittier”, cuter consumption (Cowen).

References

Alchian, Armen, and William Allen. Exchange and Production. 2nd ed. Belmont, CA: Wadsworth, 1967. Anderson, Chris. Makers. New York: Crown Business, 2012. Banks, John, and Jason Potts. "Consumer Co-Creation in Online Games." New Media and Society 12.2 (2010): 253-70. Beinhocker, Eric. Origin of Wealth. Cambridge, Mass.: Harvard University Press, 2005. Bentley, R., et al. "Regular Rates of Popular Culture Change Reflect Random Copying." Evolution and Human Behavior 28 (2007): 151-158. Borcherding, Thomas, and Eugene Silberberg. "Shipping the Good Apples Out: The Alchian and Allen Theorem Reconsidered." Journal of Political Economy 86.1 (1978): 131-6. Cowen, Tyler. Create Your Own Economy. New York: Dutton, 2009. (Also published as The Age of the Infovore: Succeeding in the Information Economy. Penguin, 2010.) Cowen, Tyler, and Alexander Tabarrok. "Good Grapes and Bad Lobsters: The Alchian and Allen Theorem Revisited." Journal of Economic Inquiry 33.2 (1995): 253-6. Cuellar, Steven. "Sex, Drugs and the Alchian-Allen Theorem." Unpublished paper, 2005. 29 Apr. 2014 ‹http://www.sonoma.edu/users/c/cuellar/research/Sex-Drugs.pdf›. Earl, Peter. The Economic Imagination. Cheltenham: Harvester Wheatsheaf, 1986. Earl, Peter, and Jason Potts. "The Market for Preferences." Cambridge Journal of Economics 28 (2004): 619–33. Eid, Jean, Travis Ng, and Terence Tai-Leung Chong. "Shipping the Good Horses Out." Working paper, 2012. ‹http://homes.chass.utoronto.ca/~ngkaho/Research/shippinghorses.pdf›. Potts, Jason, et al.
"Social Network Markets: A New Definition of Creative Industries." Journal of Cultural Economics 32.3 (2008): 166-185. Quiggin, John, and Jason Potts. "Economics of Non-Market Innovation & Digital Literacy." Media International Australia 128 (2008): 144-50. Razzolini, Laura, William Shughart, and Robert Tollison. "On the Third Law of Demand." Economic Inquiry 41.2 (2003): 292–298. Staten, Michael, and John Umbeck. “Shipping the Good Students Out: The Effect of a Fixed Charge on Student Enrollments.” Journal of Economic Education 20.2 (1989): 165-171. Stigler, George. "The Theory of Economic Regulation." Bell Journal of Economics 2.1 (1971): 3-22. Thornton, Mark. The Economics of Prohibition. Salt Lake City: University of Utah Press, 1991.Thornton, Mark. "The Potency of Illegal Drugs." Journal of Drug Issues 28.3 (1998): 525-40.
APA, Harvard, Vancouver, ISO, and other styles
30

Newman, James. "Save the Videogame! The National Videogame Archive: Preservation, Supersession and Obsolescence." M/C Journal 12, no. 3 (July 15, 2009). http://dx.doi.org/10.5204/mcj.167.

Full text
Abstract:
Introduction

In October 2008, the UK’s National Videogame Archive became a reality: after years of negotiation, preparation and planning, this partnership between Nottingham Trent University’s Centre for Contemporary Play research group and The National Media Museum accepted its first public donations to the collection. These first donations came from Sony Computer Entertainment Europe’s London Studios, who presented the original, pre-production PlayStation 2 EyeToy camera (complete with its hand-written #1 sticker), and Harmonix, who crossed the Atlantic to deliver prototypes of the Rock Band drum kit and guitar controllers along with a slew of games. Since then, we have been inundated with donations, enquiries and volunteers offering their services, and it is clear that we have exciting and challenging times ahead of us at the NVA as we seek to continue our collecting programme and preserve, conserve, display and interpret these vital parts of popular culture. This essay, however, is not so much a document of these possible futures for our research or the challenges we face in moving forward as it is a discussion of some of the issues that make game preservation a vital and timely undertaking. In briefly telling the story of the genesis of the NVA, I hope to draw attention to some of the peculiarities (in both senses) of the situation in which videogames currently exist. While considerable attention has been paid to the preservation and curation of new media arts (e.g. Cook et al.), comparatively little work has been undertaken in relation to games. Surprisingly, the games industry has been similarly neglectful of the histories of gameplay and gamemaking. Throughout our research, it has become abundantly clear that even those individuals and companies most intimately associated with the development of this form do not hold their corporate and personal histories in the high esteem we expected (see also Lowood et al.). And so, despite the well-worn bluster of an industry that proclaims itself as culturally significant as Hollywood, it is surprisingly difficult to find a definitive copy of the boxart of the final release of a Triple-A title, let alone any of the pre-production materials. Through our journeys in the past couple of years, we have encountered shoeboxes under CEOs’ desks and proud parents’ collections of tapes and press cuttings. These are the closest things to a formalised archive that we currently have for many of the biggest British game development and publishing companies. Not only is this problematic in and of itself – we run the risk of losing forever not just titles and documents but also the stories locked up in the memories of key individuals who grow ever older – but it is also symptomatic of an industry that, despite its public proclamations, neither places a high value on its products as popular culture nor truly recognises their impact on that culture. While a few valorised, still-ongoing franchises like the Super Mario and Legend of Zelda series are repackaged and (digitally) re-released so as to provide continuity with current releases, a huge number of games simply disappear from view once their short period of retail limelight passes.
Indeed, my argument in this essay rests to some extent on the admittedly polemical, and maybe even antagonistic, assertion that the past business and marketing practices of the videogames industry are partly to blame for the comparatively underdeveloped state of game preservation and the seemingly low cultural value placed on old games within the mainstream marketplace. Small wonder, then, that archives and formalised collections are not widespread. However antagonistic this point may seem, this essay does not set out merely to criticise the games industry. Indeed, it is important to recognise that the success and viability of projects such as the NVA is derived partly from close collaboration with industry partners. As such, it is my hope that in addition to contributing to the conversation about the importance and need for formalised strategies of game preservation, this essay goes some way to demonstrating the necessity of universities, museums, developers, publishers, advertisers and retailers tackling these issues in partnership.

The Best Game Is the Next Game

As will be clear from these opening paragraphs, this essay is primarily concerned with ‘old’ games. Perhaps surprisingly, however, we shall see that ‘old’ games are frequently not that old at all, as even the shiniest and newest of interactive experiences soon slip from view under the pressure of a relentless industrial and institutional push towards the forthcoming release and the ‘next generation’. More surprising still is that ‘old’ games are often difficult to come by, as they occupy, at best, a marginalised position in the contemporary marketplace, assuming they are even visible at all. This is an odd situation. Videogames are, as any introductory primer on game studies will surely reveal, big business (see Kerr, for instance, as well as trade bodies such as ELSPA and The ESA for up-to-date sales figures). Given that the videogame industry seems dedicated to growing its business and broadening its audiences (see Radd on Sony’s ‘Game 3.0’ strategy, for instance), it seems strange, from a commercial perspective if no other, that publishers’ and developers’ back catalogues are not being mercilessly plundered to wring the last pennies of profit from their IPs. Despite being cherished by players and fans, some of whom are actively engaged in their own private collecting and curation regimes (sometimes to apparently obsessive excess, as Jones, among others, has noted), videogames have, nonetheless, been undervalued as part of our national popular cultural heritage by institutions of memory such as museums and archives, which, I would suggest, have largely ignored and sometimes misunderstood or misrepresented them. Most of all, however, I wish to draw attention to the harm caused by the videogames industry itself. Consumers’ attentions are focused on ‘products’, on audiovisual (but mainly visual) technicalities and high-definition video specs rather than on the experiences of play and performance, or on games as artworks or artefacts. Most damagingly, however, by constructing and contributing to an advertising, marketing and popular critical discourse that trades almost exclusively in the language of instant obsolescence, the industry has robbed videogames of their historical value, while old platforms and titles are reduced to redundant legacy systems and easily-marginalised ‘retro’ curiosities.
The vision of inevitable technological progress that the videogames industry trades in reminds us of Paul Duguid’s concept of ‘supersession’ (see also Giddings and Kennedy on the ‘technological imaginary’). Duguid identifies supersession as one of the key tropes in discussions of new media. The reductive idea that each new form subsumes and replaces its predecessor means that videogames are, to some extent, bound up in the same set of tensions that undermine the longevity of all new media. Chun rightly notes that, in contrast with more open terms like multimedia, ‘new media’ has always been somewhat problematic. Unaccommodating, ‘it portrayed other media as old or dead; it converged rather than multiplied; it did not efface itself in favor of a happy if redundant plurality’ (1). The very newness of new media and of videogames as the apotheosis of the interactivity and multimodality they promise (Newman, "In Search"), their gleam and shine, is quickly tarnished as they are replaced by ever-newer, ever more exciting, capable and ‘revolutionary’ technologies whose promise and moment in the limelight is, in turn, equally fleeting. As Franzen has noted, obsolescence and the trail of abandoned, superseded systems is a natural, even planned-for, product of an infatuation with the newness of new media. For Kline et al., the obsession with obsolescence leads to the characterisation of the videogames industry as a ‘perpetual innovation economy’ whose institutions ‘devote a growing share of their resources to the continual alteration and upgrading of their products’. However, it is my contention here that the supersessionary tendency exerts a more serious impact on videogames than on some other media, partly because the apparently natural logic of obsolescence and technological progress goes largely unchecked and partly because there remain few institutions dedicated to considering and acting upon game preservation. The simple fact, as Lowood et al. have noted, is that material damage is being done as a result of this manufactured sense of continual progress and immediate, irrefutable obsolescence. By focusing on the upcoming new release and the preview of what is yet to come, by exciting gamers about what is in development, and by demonstrating the manifest ways in which the sheen of the new inevitably tarnishes the old, this discourse ensures that that which is replaced is fit only for the bargain bin or the budget-priced collection download. As such, it is my position that we are systematically undermining and perhaps even eradicating the possibility of a thorough and well-documented history for videogames. This is a situation that we at the National Videogame Archive, along with colleagues in the emerging field of game preservation (e.g. the International Game Developers Association Game Preservation Special Interest Group, and the Keeping Emulation Environments Portable project) are, naturally, keen to address. Chief amongst our concerns is better understanding how it has come to be that, in 2009, game studies scholars and colleagues from across the memory and heritage sectors are still only at the beginning of the process of considering game preservation. The IGDA Game Preservation SIG was founded only five years ago and its ‘White Paper’ (Lowood et al.) has just been published.
Surprisingly, despite the importance of videogames within popular culture and the emergence and consolidation of the industry as a potent creative force, there remains comparatively little academic commentary or investigation into the specific situation and life-cycles of games or the demands that they place upon archivists and scholars of digital histories and cultural heritage. As I hope to demonstrate in this essay, one of the key tasks of the project of game preservation is to draw attention to the consequences of the concentration, even fetishisation, of the next generation, the new and the forthcoming. The focus on what I have termed ‘the lure of the imminent’ (e.g. Newman, Playing), the fixation on not only the present but also the as-yet-unreleased next generation, has contributed to the normalisation of the discourses of technological advancement and the inevitability and finality of obsolescence. The conflation of gameplay pleasure and cultural import with technological – and indeed, usually visual – sophistication gives rise to a context of endless newness, within which there appears to be little space for the ‘outdated’, the ‘superseded’ or the ‘old’. In a commercial and cultural space in which so little value is placed upon anything but the next game, we risk losing touch with the continuities of development and the practices of play while simultaneously robbing players and scholars of the critical tools and resources necessary for contextualised appreciation and analysis of game form and aesthetics, for instance (see Monnens, "Why", for more on the value of preserving ‘old’ games for analysis and scholarship). Moreover, we risk losing specific games, platforms, artefacts and products as they disappear into the bargain bucket or crumble to dust as media decay, deterioration and ‘bit rot’ (Monnens, "Losing") set in. Space does not here permit a discussion of the scope and extent of the preservation work required (for instance, the NVA sets its sights on preserving, documenting, interpreting and exhibiting ‘videogame culture’ in its broadest sense and recognises the importance of videogames as more than just code and as enmeshed within complex networks of productive, consumptive and performative practices). Neither is it my intention to discuss here the specific challenges and numerous issues associated with archival and exhibition tools such as emulation, which seek to rebirth code on up-to-date, manageable, well-supported hardware platforms but which are frequently insensitive to the specificities and nuances of the played experience (see Newman, "On Emulation", for some further notes on videogame emulation, archiving and exhibition, and Takeshita’s comments in Nutt on the technologies and aesthetics of glitches, for instance). Each of these issues is vitally important and will doubtless become a part of the forthcoming research agenda for game preservation scholars. My focus here, however, is rather more straightforward and foundational and, though it is deliberately controversial, it is my hope that it casts some light on some ingrained assumptions about videogames and the magnitude and urgency of the game preservation project.

Videogames Are Disappearing?
At a time when retailers’ shelves struggle under the weight of newly-released titles and digital distribution systems such as Steam, the PlayStation Network, Xbox Live Marketplace, WiiWare, DSiWare et al. bring new ways to purchase and consume playable content, it might seem strange to suggest that videogames are disappearing. In addition to what we have perhaps come to think of as the ‘usual suspects’ in the hardware and software publishing marketplace, over the past year or so Apple have, unexpectedly and perhaps even surprising themselves, carved out a new gaming platform with the iPhone/iPod Touch and have dramatically simplified the notoriously difficult process of distributing mobile content with the iTunes App Store. In the face of this apparent glut of games and the emergence and (re)discovery of new markets with the iPhone, Wii and Nintendo DS, videogames seem an ever more vital and visible part of popular culture. Yet, for all their commercial success and seeming penetration, the simple fact is that they are disappearing. And at an alarming rate. Addressing the IGDA community of game developers and producers, Henry Lowood makes the point with admirable clarity (see also Ruggill and McAllister): If we fail to address the problems of game preservation, the games you are making will disappear, perhaps within a few decades. You will lose access to your own intellectual property, you will be unable to show new developers the games you designed or that inspired you, and you may even find it necessary to re-invent a bunch of wheels. (Lowood et al. 1) For me, this point hit home most persuasively a few years ago when, along with Iain Simons, I was invited by the British Film Institute to contribute a book to their ‘Screen Guides’ series. 100 Videogames (Newman and Simons) was an intriguing prospect that provided us with the challenge and opportunity to explore some of the key moments in videogaming’s forty-year history. However, although the research and writing processes proved to be an immensely pleasurable and rewarding experience that we hope culminated in an accessible, informative volume offering insight into some well-known (and some less well-known) games, the project was ultimately tinged with more than a little disappointment and frustration. Assuming our book had successfully piqued the interest of our readers into rediscovering games previously played or perhaps investigating games for the first time, what could they then do? Where could they go to find these games in order to experience their delights (or their flaws and problems) at first hand? Had our volume been concerned with television or film, as most of the Screen Guides are, then online and offline retailers, libraries, and even archives for less widely-available materials, would have been obvious ports of call. For the student of videogames, however, the choices are not so much limited as practically non-existent. It is only comparatively recently that videogame retailers have shifted away from an almost exclusive focus on new releases and the zeitgeist platforms towards a recognition of old games and systems through the creation of the ‘pre-owned’ marketplace. The ‘pre-owned’ transaction is one in which old titles may be traded in for cash or against the purchase of new releases of hardware or software. Surely, then, this represents the commercial viability of classic games and is a recognition on the part of retail that the new release is not the only game in town.
Yet, if we consider more carefully the ‘pre-owned’ model, we find a few telling points. First, there is cold economic sense to the pre-owned business model. In its financial statements for FY08, GAME revealed that the service ‘isn’t just a key part of its offer to consumers, but it also represents an “attractive” gross margin of 39 per cent’ (French). Second, and most important, the premise of the pre-owned business as it is communicated to consumers still offers nothing but primacy to the new release. That one would trade in one’s old games in order to consume these putatively better new ones speaks eloquently in the language of obsolescence and what Dovey and Kennedy have called the ‘technological imaginary’. The wire mesh buckets of old, pre-owned games are not displayed or coded as treasure troves for the discerning or completist collector but rather are nothing more than bargain bins. These are not classic games. These are cheap games. Cheap because they are old. Cheap because they have had their day. This is a curious situation that affects videogames most unfairly. Of course, my caricature of the videogame retailer is still incomplete, as a good deal of the instantly visible shopfloor space is dedicated neither to pre-owned nor new releases but rather to displays of empty boxes often sporting unfinalised, sometimes mocked-up, boxart flaunting titles available for pre-order. Titles you cannot even buy yet. In the videogames marketplace, even the present is not exciting enough. The best game is always the next game. Importantly, retail is not alone in manufacturing this sense of dissatisfaction with the past and even the present. The specialist videogames press plays at least as important a role in reinforcing and normalising the supersessionary discourse of instant obsolescence by fixing readers’ attentions and expectations on the just-visible horizon. Examining the pages of specialist gaming publications reveals them to be something akin to Futurist paeans, dedicating anything from 70 to 90% of their non-advertising pages to previews and interviews with developers about still-in-development titles (see Newman, Playing, for more on the specialist gaming press’ love affair with the next generation and the NDA scoop). Though a small number of publications specifically address retro titles (e.g. Imagine Publishing’s Retro Gamer), most titles are essentially vehicles to promote current and future product lines, with many magazines operating as delivery devices for cover-mounted CDs/DVDs offering teaser videos or playable demos of forthcoming titles to further whet the appetite. Manufacturing a sense of excitement might seem wholly natural and perhaps even desirable in helping to maintain a keen interest in gaming culture, but the imbalance of popular coverage has a potentially deleterious effect on the status of superseded titles. Xbox World 360’s magnificently-titled ‘Anticip–O–Meter’ ™ does more than simply build anticipation. Like regular features that run under headings such as ‘The Next Best Game in The World Ever is…’, it seeks to author not so much excitement about the imminent release as a dissatisfaction with the present, with which unfavourable comparisons are inevitably drawn. The current or previous crop of (once new, let us not forget) titles are not simply superseded but rather are reinvented as yardsticks to judge the prowess of the even newer and unarguably ‘better’.
As Ashton has noted, the continual promotion of the impressiveness of the next generation requires a delicate balancing act and a selective, institutionalised system of recall and forgetting that recovers the past as a suite of (often technical) benchmarks (twice as many polygons, higher resolution, etc.). In the absence of formalised and systematic collecting, these obsoleted titles run the risk of being forgotten forever once they no longer serve the purpose of demonstrating the comparative advancement of their successors.

The Future of Videogaming’s Past

Even if we accept the myriad claims of game studies scholars that videogames are worthy of serious interrogation in and of themselves and as part of a multifaceted, transmedial supersystem, we might be tempted to think that the lack of formalised collections, archival resources and readily available ‘old/classic’ titles at retail is of no great significance. After all, as Jones has observed, the videogame player is almost primed to undertake this kind of activity, as gaming can, at least partly, be understood as the act and art of collecting. Games such as Animal Crossing make this tendency most manifest by challenging their players to collect objects and artefacts – from natural history through to works of visual art – so as to fill the initially-empty in-game Museum’s cases. While almost all videogames from The Sims to Katamari Damacy can be considered to engage their players in collecting and collection management work to some extent, Animal Crossing is perhaps the most pertinent example of the indivisibility of the gamer/archivist. Moreover, the permeability of the boundary between the fan’s collection of toys, dolls, posters and the other treasured objects of merchandising and the manipulation of inventories, acquisitions and equipment lists that we see in the menus and gameplay imperatives of videogames ensures an extensiveness and scope of fan collecting and archival work. Similarly, the sociality of fan collecting and the value placed on private hoarding, public sharing and the processes of research ‘…bridges to new levels of the game’ (Jones 48). Perhaps we should be as unsurprised that their focus on collecting makes videogames similar to eBay as we are to the realisation that eBay, with its competitiveness, its winning and losing states, and its inexorable countdown timer, is nothing if not a game. We should be mindful, however, of overstating the positive effects of fandom on the fate of old games. Alongside eBay’s veneration of the original object, p2p and bittorrent sites reduce the videogame to its barest. Quite apart from the (il)legality of emulation and videogame ripping and sharing (see Conley et al.), the existence of ‘ROMs’ and the technicalities of their distribution reveal much about the peculiar tension between the interest in old games and their putative cultural and economic value. (St)ripped down to the barest of code, ROMs deny the gamer the paratextuality of the instruction manual or boxart. In fact, divorced from their context and robbed of their materiality, ROMs perhaps serve to make the original game even more distant. More tellingly, ROMs are typically distributed by the thousand in zipped files. And so, in just a few minutes, entire console back-catalogues – every game released in every territory – are available for browsing and playing on a PC or Mac.
The completism of the collections allows detailed scrutiny of differences in Japanese versus European releases, for instance, and can be seen as a vital investigative resource. However, that these ROMs are packaged into collections of many thousands speaks implicitly of these games’ perceived value. In a similar vein, the budget-priced retro re-release collection helps to diminish the value of each constituent game and serves to simultaneously manufacture and highlight the manifestly unfair comparison between these intriguingly retro curios and the legitimately full-priced games of now and next. Customer comments at Amazon.co.uk demonstrate the way in which historical and technological comparisons are now solidly embedded within the popular discourse (see also Newman 2009b). Leaving feedback on Sega’s PS3/Xbox 360 Sega MegaDrive Ultimate Collection, customers berate the publisher for the apparently meagre selection of titles on offer. Interestingly, this charge seems based less on the quality, variety or range of the collection than on jarring technological schisms and a clear sense of these titles being of necessarily and inevitably diminished monetary value. Comments range from outraged consternation, ‘Wtf, only 40 games?’, ‘I wont be getting this as one disc could hold the entire arsenal of consoles and games from commodore to sega saturn(Maybe even Dreamcast’, through to more detailed analyses that draw attention to the number of bits and bytes but that notably neglect any consideration of gameplay, experientiality, cultural significance or, heaven forbid, fun. “Ultimate” Collection? 32Mb of games on a Blu-ray disc?…here are 40 Megadrive games at a total of 31 Megabytes of data. This was taking the Michael on a DVD release for the PS2 (or even on a UMD for the PSP), but for a format that can store 50 Gigabytes of data, it’s an insult. Sega’s entire back catalogue of Megadrive games only comes to around 800 Megabytes - they could fit that several times over on a DVD. The ultimate consequence of these different but complementary attitudes to games that fix attentions on the future and package up decontextualised ROMs by the thousand or even collections of 40 titles on a single disc (selling for less than half the price of one of the original cartridges) is a disregard – perhaps even a disrespect – for ‘old’ games. Indeed, it is this tendency, this dominant discourse of inevitable, natural and unimpeachable obsolescence and supersession, that provided one of the prime motivators for establishing the NVA. As Lowood et al. note in the title of the IGDA Game Preservation SIG’s White Paper, we need to act to preserve and conserve videogames ‘before it’s too late’.

References

Ashton, D. ‘Digital Gaming Upgrade and Recovery: Enrolling Memories and Technologies as a Strategy for the Future.’ M/C Journal 11.6 (2008). 13 Jun 2009 ‹http://journal.media-culture.org.au/index.php/mcjournal/article/viewArticle/86›. Buffa, C. ‘How to Fix Videogame Journalism.’ GameDaily 20 July 2006. 13 Jun 2009 ‹http://www.gamedaily.com/articles/features/how-to-fix-videogame-journalism/69202/?biz=1›. ———. ‘Opinion: How to Become a Better Videogame Journalist.’ GameDaily 28 July 2006. 13 Jun 2009 ‹http://www.gamedaily.com/articles/features/opinion-how-to-become-a-better-videogame-journalist/69236/?biz=1›. ———. ‘Opinion: The Videogame Review – Problems and Solutions.’ GameDaily 2 Aug. 2006.
13 Jun 2009 ‹http://www.gamedaily.com/articles/features/opinion-the-videogame-review-problems-and-solutions/69257/?biz=1›. ———. ‘Opinion: Why Videogame Journalism Sucks.’ GameDaily 14 July 2006. 13 Jun 2009 ‹http://www.gamedaily.com/articles/features/opinion-why-videogame-journalism-sucks/69180/?biz=1›. Cook, Sarah, Beryl Graham, and Sarah Martin, eds. Curating New Media. Gateshead: BALTIC, 2002. Duguid, Paul. ‘Material Matters: The Past and Futurology of the Book.’ In Geoffrey Nunberg, ed. The Future of the Book. Berkeley, CA: University of California Press, 1996. 63–101. French, Michael. ‘GAME Reveals Pre-Owned Trading Is 18% of Business.’ MCV 22 Apr. 2009. 13 Jun 2009 ‹http://www.mcvuk.com/news/34019/GAME-reveals-pre-owned-trading-is-18-per-cent-of-business›. Giddings, Seth, and Helen Kennedy. ‘Digital Games as New Media.’ In J. Rutter and J. Bryce, eds. Understanding Digital Games. London: Sage. 129–147. Gillen, Kieron. ‘The New Games Journalism.’ Kieron Gillen’s Workblog 2004. 13 June 2009 ‹http://gillen.cream.org/wordpress_html/?page_id=3›. Jones, S. The Meaning of Video Games: Gaming and Textual Strategies. New York: Routledge, 2008. Kerr, A. The Business and Culture of Digital Games. London: Sage, 2006. Lister, Martin, John Dovey, Seth Giddings, Ian Grant and Kevin Kelly. New Media: A Critical Introduction. London and New York: Routledge, 2003. Lowood, Henry, Andrew Armstrong, Devin Monnens, Zach Vowell, Judd Ruggill, Ken McAllister, and Rachel Donahue. Before It's Too Late: A Digital Game Preservation White Paper. IGDA, 2009. 13 June 2009 ‹http://www.igda.org/wiki/images/8/83/IGDA_Game_Preservation_SIG_-_Before_It%27s_Too_Late_-_A_Digital_Game_Preservation_White_Paper.pdf›. Monnens, Devin. ‘Why Are Games Worth Preserving?’ In Before It's Too Late: A Digital Game Preservation White Paper. IGDA, 2009. 13 June 2009 ‹http://www.igda.org/wiki/images/8/83/IGDA_Game_Preservation_SIG_-_Before_It%27s_Too_Late_-_A_Digital_Game_Preservation_White_Paper.pdf›. ———. ‘Losing Digital Game History: Bit by Bit.’ In Before It's Too Late: A Digital Game Preservation White Paper. IGDA, 2009. 13 June 2009 ‹http://www.igda.org/wiki/images/8/83/IGDA_Game_Preservation_SIG_-_Before_It%27s_Too_Late_-_A_Digital_Game_Preservation_White_Paper.pdf›. Newman, J. ‘In Search of the Videogame Player: The Lives of Mario.’ New Media and Society 4.3 (2002): 407-425. ———. ‘On Emulation.’ The National Videogame Archive Research Diary, 2009. 13 June 2009 ‹http://www.nationalvideogamearchive.org/index.php/2009/04/on-emulation/›. ———. ‘Our Cultural Heritage – Available by the Bucketload.’ The National Videogame Archive Research Diary, 2009. 10 Apr. 2009 ‹http://www.nationalvideogamearchive.org/index.php/2009/04/our-cultural-heritage-available-by-the-bucketload/›. ———. Playing with Videogames. London: Routledge, 2008. ———, and I. Simons. 100 Videogames. London: BFI Publishing, 2007. Nutt, C. ‘He Is 8-Bit: Capcom's Hironobu Takeshita Speaks.’ Gamasutra 2008. 13 June 2009 ‹http://www.gamasutra.com/view/feature/3752/›. Radd, D. ‘Gaming 3.0. Sony’s Phil Harrison Explains the PS3 Virtual Community, Home.’ Business Week 9 Mar. 2007. 13 June 2009 ‹http://www.businessweek.com/innovate/content/mar2007/id20070309_764852.htm?chan=innovation_game+room_top+stories›. Ruggill, Judd, and Ken McAllister. ‘What If We Do Nothing?’ Before It's Too Late: A Digital Game Preservation White Paper. IGDA, 2009. 13 June 2009.
‹http://www.igda.org/wiki/images/8/83/IGDA_Game_Preservation_SIG_-_Before_It%27s_Too_Late_-_A_Digital_Game_Preservation_White_Paper.pdf›. 16-19.
APA, Harvard, Vancouver, ISO, and other styles
31

Burwell, Catherine. "New(s) Readers: Multimodal Meaning-Making in AJ+ Captioned Video." M/C Journal 20, no. 3 (June 21, 2017). http://dx.doi.org/10.5204/mcj.1241.

Full text
Abstract:
Introduction

In 2013, Facebook introduced autoplay video into its newsfeed. In order not to produce sound disruptive to hearing users, videos were muted until a user clicked on them to enable audio. This move, recognised as a competitive response to the popularity of video-sharing sites like YouTube, has generated significant changes to the aesthetics, form, and modalities of online video. Many video producers have incorporated captions into their videos as a means of attracting and maintaining user attention. Of course, captions are not simply a replacement or translation of sound, but have instead added new layers of meaning and changed the way stories are told through video. In this paper, I ask how the use of captions has altered the communication of messages conveyed through online video. In particular, I consider the role captions have played in news reporting, as online platforms like Facebook become increasingly significant sites for the consumption of news. One of the most successful producers of online news video has been Al Jazeera Plus (AJ+). I examine two recent AJ+ news videos to consider how meaning is generated when captions are integrated into the already multimodal form of the video—their online reporting of Australian versus US healthcare systems, and the history of the Black Panther movement. I analyse interactions amongst image, sound, language, and typography and consider the role of captions in audience engagement, branding, and profit-making. Sean Zdenek notes that captions have yet to be recognised “as a significant variable in multimodal analysis, on par with image, sound and video” (xiii). Here, I attempt to pay close attention to the representational, cultural and economic shifts that occur when captions become a central component of online news reporting. I end by briefly enquiring into the implications of captions for our understanding of literacy in an age of constantly shifting media.

Multimodality in Digital Media

Jeff Bezemer and Gunther Kress define a mode as a “socially and culturally shaped resource for meaning making” (171). Modes include meaning communicated through writing, sound, image, gesture, oral language, and the use of space. Of course, all meanings are conveyed through multiple modes. A page of written text, for example, requires us to make sense through the simultaneous interpretation of words, space, colour, and font. Media such as television and film have long been understood as multimodal; however, with the appearance of digital technologies, media’s multimodality has become increasingly complex. Video games, for example, demonstrate an extraordinary interplay between image, sound, oral language, written text, and interactive gestures, while technologies such as the mobile phone combine the capacity to produce meaning through speaking, writing, and image creation. These multiple modes are not simply layered one on top of the other, but are instead “enmeshed through the complexity of interaction, representation and communication” (Jewitt 1). The rise of multimodal media—as well as the increasing interest in understanding multimodality—occurs against the backdrop of rapid technological, cultural, political, and economic change. These shifts include media convergence, political polarisation, and increased youth activism across the globe (Herrera), developments that are deeply intertwined with uses of digital media and technology.
Indeed, theorists of multimodality like Jay Lemke challenge us to go beyond formalist readings of how multiple modes work together to create meaning, and to consider multimodality “within a political economy and a cultural ecology of identities, markets and values” (140). Video’s long history as an inexpensive and portable way to produce media has made it an especially dynamic form of multimodal media. In 1974, avant-garde video artist Nam June Paik predicted that “new forms of video … will stimulate the whole society to find more imaginative ways of telecommunication” (45). Fast forward more than 40 years, and we find that video has indeed become an imaginative and accessible form of communication. The cultural influence of video is evident in the proliferation of video genres, including remix videos, fan videos, Let’s Play videos, video blogs, live stream video, short form video, and video documentary, many of which combine semiotic resources in novel ways. The economic power of video is evident in the profitability of video sharing sites—YouTube in particular—as well as the recent appearance of video on other social media platforms such as Instagram and Facebook. These platforms constitute significant “sites of display.” As Rodney Jones notes, sites of display are not merely the material media through which information is displayed. Rather, they are complex spaces that organise social interactions—for example, between producers and users—and shape how meaning is made. Certainly we can see the influence of sites of display by considering Facebook’s 2013 introduction of autoplay into its newsfeed, a move that forced video producers to respond with new formats. As Edson Tandoc and Julian Maitra write, news organisations have been forced to “play by Facebook’s frequently modified rules and change accordingly when the algorithms governing the social platform change” (2). AJ+ has been considered one of the media companies that has most successfully adapted to these changes, an adaptation I examine below. I begin by taking up Lemke’s challenge to consider multimodality contextually, reading AJ+ videos through the conceptual lens of the “attention economy,” a lens that highlights the profitability of attention within digital cultures. I then follow with analyses of two short AJ+ videos to show captions’ central role, not only in conveying meaning, but also in creating markets, and communicating branded identities and ideologies.

AJ+, Facebook and the New Economies of Attention

The Al Jazeera news network was founded in 1996 to cover news of the Arab world, with a declared commitment to give “voice to the voiceless.” Since that time, the network has gained global influence, yet many of its attempts to break into the American market have been unsuccessful (Youmans). In 2013, the network acquired Current TV in an effort to move into cable television. While that effort ultimately failed, Al Jazeera’s purchase of the youth-oriented Current TV nonetheless led to another, surprisingly fruitful enterprise, the development of the digital media channel Al Jazeera Plus (AJ+). AJ+ content, which is made up almost entirely of video, is directed at 18- to 35-year-olds. As William Youmans notes, AJ+ videos are informal and opinionated, and, while staying consistent with Al Jazeera’s mission to “give voice to the voiceless,” they also take an openly activist stance (114). Another distinctive feature of AJ+ videos is the way they are tailored for specific platforms.
From the beginning, AJ+ has had particular success on Facebook, a success that has been recognised in popular and trade publications. A 2015 profile on AJ+ videos in Variety (Roettgers) noted that AJ+ was the ninth biggest video publisher on the social network, while a story on Journalism.co (Reid, “How AJ+ Reaches”) that same year commented on the remarkable extent to which Facebook audiences shared and interacted with AJ+ videos. These stories also note the distinctive video style that has become associated with the AJ+ brand—short, bold captions; striking images that include photos, maps, infographics, and animations; an effective opening hook; and a closing call to share the video. AJ+ video producers were developing this unique style just as Facebook’s autoplay was being introduced into newsfeeds. Autoplay—a mechanism through which videos are played automatically, without action from a user—predates Facebook’s introduction of the feature. However, autoplay on Internet sites had already begun to raise the ire of many users before its appearance on Facebook (Oremus, “In Defense of Autoplay”). By playing video automatically, autoplay wrests control away from users, and causes particular problems for users using assistive technologies. Reporting on Facebook’s decision to introduce autoplay, Josh Constine notes that the company was looking for a way to increase advertising revenues without increasing the number of actual ads. Encouraging users to upload and share video normalises the presence of video on Facebook, and opens up the door to the eventual addition of profitable video ads. Ensuring that video plays automatically gives video producers an opportunity to capture the attention of users without the need for them to actively click to start a video. Further, ensuring that the videos can be understood when played silently means that both deaf users and users who are situationally unable to hear the audio can also consume their content in any kind of setting. While Facebook has promoted its introduction of autoplay as a benefit to users (Oremus, “Facebook”), it is perhaps more clearly an illustration of the carefully-crafted production strategies used by digital platforms to capture, maintain, and control attention. Within digital capitalism, attention is a highly prized and scarce resource. Michael Goldhaber argues that once attention is given, it builds the potential for further attention in the future. He writes that “obtaining attention is obtaining a kind of enduring wealth, a form of wealth that puts you in a preferred position to get anything this new economy offers” (n.p.). In the case of Facebook, this offers video producers the opportunity to capture users’ attention quickly—in the time it takes them to scroll through their newsfeed. While this may equate to only a few seconds, those few seconds hold, as Goldhaber predicted, the potential to create further value and profit when videos are viewed, liked, shared, and commented on. Interviews with AJ+ producers reveal that an understanding of the value of this attention drives the organisation’s production decisions, and shapes content, aesthetics, and modalities. They also make it clear that it is captions that are central in their efforts to engage audiences. Jigar Mehta, former head of engagement at AJ+, explains that “those first three to five seconds have become vital in grabbing the audience’s attention” (quoted in Reid, “How AJ+ Reaches”).
While early videos began with the AJ+ logo, that was soon dropped in favour of a bold image and text, a decision that dramatically increased views (Reid, “How AJ+ Reaches”). Captions and titles are not only central to grabbing attention, but also to maintaining it, particularly as many audience members consume video on mobile devices without sound. Mehta tells an editor at the Nieman Journalism Lab: we think a lot about whether a video works with the sound off. Do we have to subtitle it in order to keep the audience retention high? Do we need to use big fonts? Do we need to use color blocking in order to make words pop and make things stand out? (Mehta, qtd. in Ellis). An AJ+ designer similarly suggests that the most important aspects of AJ+ videos are brand, aesthetic style, consistency, clarity, and legibility (Zou). While questions of brand, style, and clarity are not surprising elements to associate with online video, the matter of legibility is. And yet, in contexts where video is viewed on small, hand-held screens and sound is not an option, legibility—as it relates to the arrangement, size and colour of type—does indeed take on new importance to storytelling and sense-making. While AJ+ producers frame the use of captions as an innovative response to Facebook’s modern algorithmic changes, it makes sense to also remember the significant histories of captioning that their videos ultimately draw upon. This lineage includes silent films of the early twentieth century, as well as the development of closed captions for deaf audiences later in that century. Just as he argues for the complexity, creativity, and transformative potential of captions themselves, Sean Zdenek also urges us to view the history of closed captioning not as a linear narrative moving inevitably towards progress, but as something far more complicated and marked by struggle, an important reminder of the fraught and human histories that are often overlooked in accounts of “new media.” Another important historical strand to consider is the centrality of the written word to digital media, and to the Internet in particular. As Carmen Lee writes, despite public anxieties and discussions over a perceived drop in time spent reading, digital media in fact “involve extensive use of the written word” (2). While this use takes myriad forms, many of these forms might be seen as connected to the production, consumption, and popularity of captions, including practices such as texting, tweeting, and adding titles and catchphrases to photos.

Captions, Capture, and Contrast in Australian vs. US Healthcare

On May 4, 2017, US President Donald Trump was scheduled to meet with Australian Prime Minister Malcolm Turnbull in New York City. Trump delayed the meeting, however, in order to await the results of a vote in the US House of Representatives to repeal the Affordable Care Act—commonly known as Obamacare. When he finally sat down with the Prime Minister later that day, Trump told him that Australia has “better health care” than the US, a statement that, in the words of a Guardian report, “triggered astonishment and glee” amongst Trump’s critics (Smith). In response to Trump’s surprising pronouncement, AJ+ produced a 1-minute video extending Trump’s initial comparison with a series of contrasts between Australian government-funded health care and American privatised health care (Facebook, “President Trump Says…”).
The video provides an excellent example of the role captions play in both generating attention and creating the unique aesthetic that is crucial to the AJ+ brand. The opening frame of the video begins with a shot of the two leaders seated in front of the US and Australian flags, a diplomatic scene familiar to anyone who follows politics. The colours of the picture are predominantly red, white and blue. Superimposed on top of the image is a textbox containing the words “How does Australia’s healthcare compare to the US?” The question appears in white capital letters on a black background, and the box itself is heavily outlined in yellow. The white and yellow AJ+ logo appears in the upper right corner of the frame. This opening frame poses a question to the viewer, encouraging a kind of rhetorical interactivity. Through the use of colour in and around the caption, it also quickly establishes the AJ+ brand. This opening scene also draws on the Internet’s history of humorous “image macros”—exemplified by the early LOL cat memes—that create comedy through the superimposition of captions on photographic images (Shifman). Captions continue to play a central role in meaning-making once the video plays. In the next frame, Trump is shown speaking to Turnbull. As he speaks, his words—“We have a failing healthcare”—drop onto the screen (Image 1). The captions are an exact transcription of Trump’s awkward phrase and appear centred in caps, with the words “failing healthcare” emphasised in larger, yellow font. With or without sound, these bold captions are concise, easily read on a small screen, and visually dominate the frame. The next few seconds of the video complete the sequence, as Trump tells Turnbull, “I shouldn’t say this to our great gentleman, my friend from Australia, ‘cause you have better healthcare than we do.” These words continue to appear over the image of the two men, still filling the screen. In essence, Trump’s verbal gaffe, transcribed word for word and appearing in AJ+’s characteristic white and yellow lettering, becomes the video’s hook, designed to visually call out to the Facebook user scrolling silently through their newsfeed.

Image 1: “We have a failing healthcare.”

The middle portion of the video answers the opening question, “How does Australia’s healthcare compare to the US?”. There is no verbal language in this segment—the only sound is a simple synthesised soundtrack. Instead, captions, images, and spatial design, working in close cooperation, are used to draw five comparisons. Each of these comparisons uses the same format. A title appears at the top of the screen, with the remainder of the screen divided in two. The left side is labelled Australia, the right U.S. Underneath these headings, a representative image appears, followed by two statistics, one for each country. For example, the third comparison contrasts Australian and American infant mortality rates (Image 2). The left side of the screen shows a close-up of a mother kissing a baby, with the superimposed caption “3 per 1,000 births.” On the other side of the yellow border, the American infant mortality rate is illustrated with an image of a sleeping baby superimposed with a corresponding caption, “6 per 1,000 births.” Without voiceover, captions do much of the work of communicating the national differences.
They are, however, complemented and made more quickly comprehensible through the video’s spatial design and its subtly contrasting images, which help to visually organise the written content.

Image 2: “Infant mortality rate”

The final 10 seconds of the video bring sound back into the picture. We once again see and hear Trump tell Turnbull, “You have better healthcare than we do.” This image transforms into another pair of male faces—liberal American commentator Chris Hayes and US Senator Bernie Sanders—taken from an MSNBC cable television broadcast. On one side, Hayes says “They do have, they have universal healthcare.” On the other, Sanders laughs uproariously in response. The only added caption for this segment is “Hahahaha!”, the simplicity of which suggests that the video’s target audience is assumed to have a context for understanding Sanders’s laughter. Here and throughout the video, autoplay leads to a far more visual style of relating information, one in which captions—working alongside images and layout—become, in Zdenek’s words, a sort of “textual performance” (6).

The Black Panther Party and the Textual Performance of Progressive Politics

Reports on police brutality and Black Lives Matter protests have been amongst AJ+’s most widely viewed and shared videos (Reid, “Beyond Websites”). Their 2-minute video (Facebook, Black Panther) commemorating the 50th anniversary of the Black Panther Party, viewed 9.5 million times, provides background to these contemporary events. As in the comparison of American and Australian healthcare, captions shape the video’s structure. But here, rather than using contrast as a means of quick visual communication, the video is structured as a list of five significant points about the Black Panther Party. Captions are used not only to itemise and simplify—and ultimately to reduce—the party’s complex history, but also, somewhat paradoxically, to promote the news organisation’s own progressive values. After announcing the intent and structure of the video—“5 things you should know about the Black Panther Party”—in its first 3 seconds, the video quickly sets out to describe each item in turn. The themes themselves correspond with AJ+’s own interests in policing, community, and protest, while the language used to announce each theme is characteristically concise and colloquial:

They wanted to end police brutality.
They were all about the community.
They made enemies in high places.
Women were vocal and active panthers.
The Black Panthers’ legacy is still alive today.

Each of these themes is represented using a combination of archival black and white news footage and photographs depicting Black Panther members, marches, and events. These still and moving images are accompanied by audio recordings from party members, explaining the party’s origins, purposes, and influences. Captions are used throughout the video both to indicate the five themes and to transcribe the recordings. As the video moves from one theme to another, the corresponding number appears in the centre of the screen to indicate the transition, and then shrinks and moves to the upper left corner of the screen as a reminder for viewers. A musical soundtrack of strings and percussion, communicating a sense of urgency, underscores the full video. While typographic features like font size, colour, and placement were significant in communicating meaning in AJ+’s healthcare video, there is an even broader range of experimentation here.
The numbers 1 to 5 that appear in the centre of the screen to announce each new theme blink and flicker like the countdown at the beginning of bygone film reels, gesturing towards the historical topic and complementing the black and white footage. For those many viewers watching the video without sound, an audio waveform above the transcribed interviews provides a visual clue that the captions are transcriptions of recorded voices. Finally, the colour green, used infrequently in AJ+ videos, is chosen to emphasise a select number of key words and phrases within the short video. Significantly, all of these words are spoken by Black Panther members. For example, captions transcribing former Panther leader Ericka Huggins speaking about the party’s slogan—“All power to the people”—highlight the words “power” and “people” with large, lime green letters that stand out against the grainy black and white photos (Image 3). The captions quite literally highlight ideas about oppression, justice, and social change that are central to an understanding of the history of the Black Panther Party, but also to the communication of the AJ+ brand.

Image 3: “All power to the people”

Conclusion

Employing distinctive combinations of word and image, AJ+ videos are produced to call out to users through the crowded semiotic spaces of social media. But they also call out to scholars to think carefully about the new kinds of literacies associated with rapidly changing digital media formats. Captioned video makes clear the need to recognise how meaning is constructed through sophisticated interpretive strategies that draw together multiple modes. While captions are certainly not new, an analysis of AJ+ videos suggests the use of novel typographical experiments that sit “midway between language and image” (Stöckl 289). Discussions of literacy need to expand to recognise this experimentation and to account for the complex interactions between the verbal and visual that get lost when written text is understood to function similarly across multiple platforms. In his interpretation of closed captioning, Zdenek provides an insightful list of the ways that captions transform meaning, including their capacity to contextualise, clarify, formalise, linearise and distill (8–9). His list signals not only the need for a deeper understanding of the role of captions, but also for a broader and more vivid vocabulary to describe multimodal meaning-making. Indeed, as Allan Luke suggests, within the complex multimodal and multilingual contexts of contemporary global societies, literacy requires that we develop and nurture “languages to talk about language” (459). Just as importantly, an analysis of captioned video that takes into account the economic reasons for captioning also reminds us of the need for critical media literacies. AJ+ videos reveal how the commercial goals of branding, promotion, and profit-making influence the shape and presentation of news. As meaning-makers and as citizens, we require the capacity to assess how we are being addressed by news organisations that are themselves responding to the interests of economic and cultural juggernauts such as Facebook.
In schools, universities, and informal learning spaces, as well as through discourses circulated by research, media, and public policy, we might begin to generate more explicit and critical discussions of the ways that digital media—including texts that inform us and even those that exhort us towards more active forms of citizenship—simultaneously seek to manage, direct, and profit from our attention. References Bezemer, Jeff, and Gunther Kress. “Writing in Multimodal Texts: A Social Semiotic Account of Designs for Learning.” Written Communication 25.2 (2008): 166–195. Constine, Josh. “Facebook Adds Automatic Subtitling for Page Videos.” TechCrunch 4 Jan. 2017. 1 May 2017 <https://techcrunch.com/2017/01/04/facebook-video-captions/>. Ellis, Justin. “How AJ+ Embraces Facebook, Autoplay, and Comments to Make Its Videos Stand Out.” Nieman Lab 3 Aug. 2015. 28 Apr. 2017 <http://www.niemanlab.org/2015/08/how-aj-embraces-facebook-autoplay-and-comments-to-make-its-videos-stand-out/>. Facebook. “President Trump Says…” Facebook, 2017. <https://www.facebook.com/ajplusenglish/videos/954884227986418/>. Facebook. “Black Panther.” Facebook, 2017. <https://www.facebook.com/ajplusenglish/videos/820822028059306/>. Goldhaber, Michael. “The Attention Economy and the Net.” First Monday 2.4 (1997). 9 June 2013 <http://firstmonday.org/article/view/519/440>. Herrera, Linda. “Youth and Citizenship in the Digital Age: A View from Egypt.” Harvard Educational Review 82.3 (2012): 333–352. Jewitt, Carey. “Introduction.” Routledge Handbook of Multimodal Analysis. Ed. Carey Jewitt. New York: Routledge, 2009. 1–8. Jones, Rodney. “Technology and Sites of Display.” Routledge Handbook of Multimodal Analysis. Ed. Carey Jewitt. New York: Routledge, 2009. 114–126. Lee, Carmen. “Micro-Blogging and Status Updates on Facebook: Texts and Practices.” Digital Discourse: Language in the New Media. Eds. Crispin Thurlow and Kristine Mroczek. Oxford Scholarship Online, 2011. DOI: 10.1093/acprof:oso/9780199795437.001.0001. Lemke, Jay. “Multimodality, Identity, and Time.” Routledge Handbook of Multimodal Analysis. Ed. Carey Jewitt. New York: Routledge, 2009. 140–150. Luke, Allan. “Critical Literacy in Australia: A Matter of Context and Standpoint.” Journal of Adolescent and Adult Literacy 43.5 (2000): 448–461. Oremus, Will. “Facebook Is Eating the Media.” National Post 14 Jan. 2015. 15 June 2017 <http://news.nationalpost.com/news/facebook-is-eating-the-media-how-auto-play-videos-could-put-news-websites-out-of-business>. ———. “In Defense of Autoplay.” Slate 16 June 2015. 14 June 2017 <http://www.slate.com/articles/technology/future_tense/2015/06/autoplay_videos_facebook_twitter_are_making_them_less_annoying.html>. Paik, Nam June. “The Video Synthesizer and Beyond.” The New Television: A Public/Private Art. Eds. Douglas Davis and Allison Simmons. Cambridge, MA: MIT Press, 1977. 45. Reid, Alistair. “Beyond Websites: How AJ+ Is Innovating in Digital Storytelling.” Journalism.co 17 Apr. 2015. 13 Feb. 2017 <https://www.journalism.co.uk/news/beyond-websites-how-aj-is-innovating-in-digital-storytelling/s2/a564811/>. ———. “How AJ+ Reaches 600% of Its Audience on Facebook.” Journalism.co. 5 Aug. 2015. 13 Feb. 2017 <https://www.journalism.co.uk/news/how-aj-reaches-600-of-its-audience-on-facebook/s2/a566014/>. Roettgers, Janko. “How Al Jazeera’s AJ+ Became One of the Biggest Video Publishers on Facebook.” Variety 30 July 2015.
1 May 2017 <http://variety.com/2015/digital/news/how-al-jazeeras-aj-became-one-of-the-biggest-video-publishers-on-facebook-1201553333/>. Shifman, Limor. Memes in Digital Culture. Cambridge, MA: MIT Press, 2014. Smith, David. “Trump Says ‘Everybody’, Not Just Australia, Has Better Healthcare than US.” The Guardian 5 May 2017. 5 May 2017 <https://www.theguardian.com/us-news/2017/may/05/trump-healthcare-australia-better-malcolm-turnbull>. Stöckl, Hartmut. “Typography: Visual Language and Multimodality.” Interactions, Images and Texts. Eds. Sigrid Norris and Carmen Daniela Maier. Amsterdam: De Gruyter, 2014. 283–293. Tandoc, Edson, and Julian Maitra. “News Organizations’ Use of Native Videos on Facebook: Tweaking the Journalistic Field One Algorithm Change at a Time.” New Media & Society (2017). DOI: 10.1177/1461444817702398. Youmans, William. An Unlikely Audience: Al Jazeera’s Struggle in America. New York: Oxford University Press, 2017. Zdenek, Sean. Reading Sounds: Closed-Captioned Media and Popular Culture. Chicago: University of Chicago Press, 2015. Zou, Yanni. “How AJ+ Applies User-Centered Design to Win Millennials.” Medium 16 Apr. 2016. 7 May 2017 <https://medium.com/aj-platforms/how-aj-applies-user-centered-design-to-win-millennials-3be803a4192c>.
APA, Harvard, Vancouver, ISO, and other styles
32

Mallan, Kerry Margaret, and Annette Patterson. "Present and Active: Digital Publishing in a Post-print Age." M/C Journal 11, no. 4 (June 24, 2008). http://dx.doi.org/10.5204/mcj.40.

Full text
Abstract:
At one point in Victor Hugo’s novel, The Hunchback of Notre Dame, the archdeacon, Claude Frollo, looked up from a book on his table to the edifice of the gothic cathedral, visible from his canon’s cell in the cloister of Notre Dame: “Alas!” he said, “this will kill that” (146). Frollo’s lament, that the book would destroy the edifice, captures the medieval cleric’s anxiety about the way in which Gutenberg’s print technology would become the new universal means for recording and communicating humanity’s ideas and artistic expression, replacing the grand monuments of architecture, human engineering, and craftsmanship. For Hugo, architecture was “the great handwriting of humankind” (149). The cathedral as the material outcome of human technology was being replaced by the first great machine—the printing press. At this point in the third millennium, some people undoubtedly have similar anxieties to Frollo: is it now the book’s turn to be destroyed by yet another great machine? The inclusion of “post print” in our title is not intended to sound the death knell of the book. Rather, we contend that despite the enduring value of print, digital publishing is “present and active” and is changing the way in which research, particularly in the humanities, is being undertaken. Our approach has three related parts. First, we consider how digital technologies are changing the way in which content is constructed, customised, modified, disseminated, and accessed within a global, distributed network. This section argues that the transition from print to electronic or digital publishing means both losses and gains, particularly with respect to shifts in our approaches to textuality, information, and innovative publishing. Second, we discuss the Children’s Literature Digital Resources (CLDR) project, with which we are involved. This case study of a digitising initiative opens out the transformative possibilities and challenges of digital publishing and e-scholarship for research communities. Third, we reflect on technology’s capacity to bring about major changes in the light of the theoretical and practical issues that have arisen from our discussion. I. Digitising in a “post-print age” We are living in an era that is commonly referred to as “the late age of print” (see Kho) or the “post-print age” (see Gunkel). According to Aarseth, we have reached a point whereby nearly all of our public and personal media have become more or less digital (37). As Kho notes, web newspapers are not only becoming increasingly more popular, but they are also making rather than losing money, and paper-based newspapers are finding it difficult to recruit new readers from the younger generations (37). Not only can such online-only publications update format, content, and structure more economically than print-based publications, but their wide distribution network, speed, and flexibility attract advertising revenue. Hype and hyperbole aside, publishers are not so much discarding their legacy of print, but recognising the folly of not embracing innovative technologies that can add value by presenting information in ways that satisfy users’ needs for content to-go or for edutainment. As Kho notes: “no longer able to satisfy customer demand by producing print-only products, or even by enabling online access to semi-static content, established publishers are embracing new models for publishing, web-style” (42). 
Advocates of online publishing contend that the major benefits of online publishing over print technology are that it is faster, more economical, and more interactive. However, as Hovav and Gray caution, “e-publishing also involves risks, hidden costs, and trade-offs” (79). The specific focus for these authors is e-journal publishing, and they contend that while cost reductions come in editing, production, and distribution, if the journal is not open access, then costs relating to storage and bandwidth will be transferred to the user. If we put economics aside for the moment, the transition from print to electronic text (e-text), especially with electronic literary works, brings additional considerations, particularly in their ability to make available reading strategies different from those of print, such as “animation, rollovers, screen design, navigation strategies, and so on” (Hayles 38). Transition from print to e-text In his book, Writing Space, David Bolter follows Victor Hugo’s lead, but does not ask if print technology will be destroyed. Rather, he argues that “the idea and ideal of the book will change: print will no longer define the organization and presentation of knowledge, as it has for the past five centuries” (2). As Hayles noted above, one significant indicator of this change, which is a consequence of the shift from analogue to digital, is the addition of graphical, audio, visual, sonic, and kinetic elements to the written word. A significant consequence of this transition is the reinvention of the book in a networked environment. Unlike the printed book, the networked book is not bound by space and time. Rather, it is an evolving entity within an ecology of readers, authors, and texts. The Web 2.0 platform has enabled more experimentation with the blending of digital technology and traditional writing, particularly in the use of blogs, which have spawned blogwriting and the wikinovel. Siva Vaidhyanathan’s The Googlization of Everything: How One Company is Disrupting Culture, Commerce and Community … and Why We Should Worry is a wikinovel or blog book that was produced over a series of weeks with contributions from other bloggers (see: http://www.sivacracy.net/). Penguin Books, in collaboration with a media company, “Six Stories to Start,” have developed six stories—“We Tell Stories,” which involve different forms of interactivity from users through blog entries, Twitter text messages, an interactive Google map, and other features. For example, the story titled “Fairy Tales” allows users to customise the story using their own choice of names for characters and descriptions of character traits. Each story is loosely based on a classic story and links take users to synopses of these original stories and their authors and to online purchase of the texts through the Penguin Books sales website. These examples of digital stories are a small part of the digital environment, which exploits computer and online technologies’ capacity to be interactive and immersive. As Janet Murray notes, the interactive qualities of digital environments are characterised by their procedural and participatory abilities, while their immersive qualities are characterised by their spatial and encyclopedic dimensions (71–89). These immersive and interactive qualities highlight different ways of reading texts, which entail different embodied and cognitive functions from those that reading print texts requires.
As Hayles argues: the advent of electronic textuality presents us with an unparalleled opportunity to reformulate fundamental ideas about texts and, in the process, to see print as well as electronic texts with fresh eyes (89–90). The transition to e-text also highlights how digitality is changing all aspects of everyday life both inside and outside the academy. Online teaching and e-research Another aspect of the commercial arm of publishing that is impacting on academe and other organisations is the digitising and indexing of print content for niche distribution. Kho offers the example of the Mark Logic Corporation, which uses its XML content platform to repurpose content, create new content, and distribute this content through multiple portals. As the promotional website video for Mark Logic explains, academics can use this service to customise their own textbooks for students by including only articles and book chapters that are relevant to their subject. These are then organised, bound, and distributed by Mark Logic for sale to students at a cost that is generally cheaper than most textbooks. A further example of how print and digital materials can form an integrated, customised source for teachers and students is eFictions (Trimmer, Jennings, & Patterson). eFictions was one of the first print and online short story anthologies that teachers of literature could customise to their own needs. Produced as both a print text collection and a website, eFictions offers popular short stories in English by well-known traditional and contemporary writers from the US, Australia, New Zealand, UK, and Europe, with summaries, notes on literary features, author biographies, and, in one instance, a YouTube movie of the story. In using the eFictions website, teachers can build a customised anthology of traditional and innovative stories to suit their teaching preferences. These examples provide useful indicators of how content is constructed, customised, modified, disseminated, and accessed within a distributed network. However, the question remains as to how to measure their impact and outcomes within teaching and learning communities. As Harley suggests in her study on the use and users of digital resources in the humanities and social sciences, several factors warrant attention, such as personal teaching style, philosophy, and specific disciplinary requirements. However, in terms of understanding the benefits of digital resources for teaching and learning, Harley notes that few providers in her sample had developed any plans to evaluate use and users in a systematic way. In addition to the problems raised in Harley’s study, another relates to how researchers can be supported to take full advantage of digital technologies for e-research. The transformation brought about by information and communication technologies extends and broadens the impact of research, by making its outputs more discoverable and usable by other researchers, and its benefits more available to industry, governments, and the wider community. Traditional repositories of knowledge and information, such as libraries, are juggling the space demands of books and computer hardware alongside increasing reader demand for anywhere, anytime, anyplace access to information. 
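As a rough illustration of the kind of marked-up content repurposing described above, the sketch below assembles a customised anthology by selecting stories from an XML collection. The element names, file paths, and selection criteria are hypothetical; this is not Mark Logic’s platform or the eFictions site, only a minimal sketch of the underlying idea.

```python
# Minimal sketch: copy only the stories tagged with a wanted theme from a
# marked-up collection into a new, customised anthology file.
# The XML schema ("story", "theme") and the file names are hypothetical.
import xml.etree.ElementTree as ET

def build_anthology(source_path, wanted_themes, out_path):
    tree = ET.parse(source_path)
    anthology = ET.Element("anthology")
    for story in tree.getroot().iter("story"):
        themes = {t.text for t in story.findall("theme")}
        if themes & set(wanted_themes):
            anthology.append(story)  # reuse the existing markup unchanged
    ET.ElementTree(anthology).write(out_path, encoding="utf-8", xml_declaration=True)

# e.g. a teacher assembling a unit on place and memory
build_anthology("stories.xml", ["place", "memory"], "custom_anthology.xml")
```

The point of such markup-driven selection is that the same source collection can be reorganised, bound, and delivered differently for each teaching context without re-editing the underlying texts.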
Researchers’ expectations about online access to journals, eprints, bibliographic data, and the views of others through wikis, blogs, and associated social and information networking sites such as YouTube compete with the traditional expectations of the institutions that fund libraries for paper-based archives and book repositories. While university libraries are finding it increasingly difficult to purchase all hardcover books relevant to numerous and varied disciplines, a significant proportion of their budgets goes towards digital repositories (e.g., STORS), indexes, and other resources, such as full-text electronic specialised and multidisciplinary journal databases (e.g., Project Muse and Proquest); electronic serials; e-books; and specialised information sources through fast (online) document delivery services. An area that is becoming increasingly significant for those working in the humanities is the digitising of historical and cultural texts. II. Bringing back the dead: The CLDR project The CLDR project is led by researchers and librarians at the Queensland University of Technology, in collaboration with Deakin University, University of Sydney, and members of the AustLit team at The University of Queensland. The CLDR project is a “Research Community” of the electronic bibliographic database AustLit: The Australian Literature Resource, which is working towards the goal of providing a complete bibliographic record of the nation’s literature. AustLit offers users a single entry point to enhanced scholarly resources on Australian writers, their works, and other aspects of Australian literary culture and activities. AustLit and its Research Communities are supported by grants from the Australian Research Council and financial and in-kind contributions from a consortium of Australian universities, and by other external funding sources such as the National Collaborative Research Infrastructure Strategy. Like other more extensive digitisation projects, such as Project Gutenberg and the Rosetta Project, the CLDR project aims to provide a centralised access point for digital surrogates of early published works of Australian children’s literature, with access pathways to existing resources. The first stage of the CLDR project is to provide access to digitised, full-text, out-of-copyright Australian children’s literature from European settlement to 1945, with selected digitised critical works relevant to the field. Texts comprise a range of genres, including poetry, drama, and narrative for young readers and picture books, songs, and rhymes for infants. Currently, a selection of 75 e-texts and digital scans of original texts from Project Gutenberg and Internet Archive have been linked to the Children’s Literature Research Community. By the end of 2009, the CLDR will have digitised approximately 1000 literary texts and a significant number of critical works. Stage II and subsequent development will involve digitisation of selected texts from 1945 onwards. A precursor to the CLDR project has been undertaken by Deakin University in collaboration with the State Library of Victoria, whereby a digital bibliographic index comprising Victorian School Readers has been completed with plans for full-text digital surrogates of a selection of these texts. These texts provide valuable insights into citizenship, identity, and values formation from the 1930s onwards. At the time of writing, the CLDR is at an early stage of development.
An extensive survey of out-of-copyright texts has been completed and the digitisation of these resources is about to commence. The project plans to make rich content searchable, allowing scholars from children’s literature studies and education to benefit from the many advantages of online scholarship. What digital publishing and associated digital archives, electronic texts, hypermedia, and so forth foreground is the fact that writers, readers, publishers, programmers, designers, critics, booksellers, teachers, and copyright laws operate within a context that is highly mediated by technology. In his article on large-scale digitisation projects carried out by Cornell and University of Michigan with the Making of America collection of 19th-century American serials and monographs, Hirtle notes that when special collections’ materials are available via the Web, with appropriate metadata and software, then they can “increase use of the material, contribute to new forms of research, and attract new users to the material” (44). Furthermore, Hirtle contends that despite the poor ergonomics associated with most electronic displays and e-book readers, “people will, when given the opportunity, consult an electronic text over the print original” (46). If this preference is universally accurate, especially for researchers and students, then it follows that not only will the preference for electronic surrogates of original material increase, but preference for other kinds of electronic texts will also increase. It is with this preference for electronic resources in mind that we approached the field of children’s literature in Australia and asked questions about how future generations of researchers would prefer to work. If electronic texts become the reference of choice for primary as well as secondary sources, then it seems sensible to assume that researchers would prefer to sit at the end of the keyboard rather than travel considerable distances at considerable cost to access paper-based print texts in distant libraries and archives. We considered the best means for providing access to digitised primary and secondary, full text material, and digital pathways to existing online resources, particularly an extensive indexing and bibliographic database. Prior to the commencement of the CLDR project, AustLit had already indexed an extensive body of children’s literature. Challenges and dilemmas The CLDR project, even in its early stages of development, has encountered a number of challenges and dilemmas that centre on access, copyright, economic capital, practical aspects of digitisation, and sustainability. These issues have relevance for digital publishing and e-research. A decision is yet to be made as to whether the digital texts in CLDR will be available on open or closed/tolled access. The preference is for open access. As Hayles argues, copyright is more than a legal basis for intellectual property, as it also entails ideas about authorship, creativity, and the work as an “immaterial mental construct” that goes “beyond the paper, binding, or ink” (144). Seeking copyright permission is therefore only part of the issue. Determining how the item will be accessed is a further matter, particularly as future technologies may impact upon how a digital item is used. In the case of e-journals, copyright payment structures are evolving towards a collective licensing system, pay-per-view, and other combinations of print and electronic subscription (see Hovav and Gray).
For research purposes, digitisation of items for CLDR is not simply a scan and deliver process. Rather, it is one that needs to ensure that the best quality is provided and that the item is both accessible and usable by researchers, and sustainable for future researchers. Sustainability is an important consideration and provides a challenge for institutions that host projects such as CLDR. Therefore, items need to be scanned to a high quality, and this requires an expensive scanner as well as personnel costs. Files need to be in a variety of formats for preservation purposes and so that they may be manipulated to be usable in different technologies (for example, Archival Tiff, Tiff, Jpeg, PDF, HTML). Hovav and Gray warn that when technology becomes obsolete, content becomes unreadable unless backward integration is maintained. The CLDR items will be annotatable given AustLit’s NeAT-funded project: Aus-e-Lit. The Aus-e-Lit project will extend and enhance the existing AustLit web portal with data integration and search services, empirical reporting services, collaborative annotation services, and compound object authoring, editing, and publishing services. For users to be able to get the most out of a digital item, it needs to be searchable, either through double keying or OCR (optical character recognition). The value of CLDR’s contribution The value of the CLDR project lies in its goal to provide a comprehensive, searchable body of texts (fictional and critical) to researchers across the humanities and social sciences. Other projects seem to be intent on putting up as many items as possible to be considered as a first resort for online texts. CLDR is more specific and is not interested in simply generating a presence on the Web. Rather, it is research driven both in its design and implementation, and in its focussed outcomes of assisting academics and students primarily in their e-research endeavours. To this end, we have concentrated on the following: an extensive survey of appropriate texts; best models for file location, distribution, and use; and high standards of digitising protocols. These issues, which relate to data storage, digitisation, collections management, and end-users of data, are aligned with the “Development of an Australian Research Data Strategy” outlined in An Australian e-Research Strategy and Implementation Framework (2006). CLDR is not designed to simply replicate resources, as it has a distinct focus, audience, and research potential. In addition, it looks at resources that may be forgotten or are no longer available in reproduction by current publishing companies. Thus, the aim of CLDR is to preserve both the time and a period of Australian history and literary culture. It will also provide users with an accessible repository of rare and early texts written for children. III. Future directions It is now commonplace to recognize that the Web’s role as information provider has changed over the past decade. New forms of “collective intelligence” or “distributed cognition” (Oblinger and Lombardi) are emerging within and outside formal research communities. Technology’s capacity to initiate major cultural, social, educational, economic, political and commercial shifts has conditioned us to expect the “next big thing.” We have learnt to adapt swiftly to the many challenges that online technologies have presented, and we have reaped the benefits.
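To make the scan, derive, and OCR steps described above concrete, here is a minimal sketch of one possible page-processing workflow: keep the archival TIFF master, write a lighter JPEG access copy, and extract searchable text. It assumes the Pillow and pytesseract libraries with a local Tesseract installation, and hypothetical file names; it is not a description of the CLDR’s actual digitisation protocol.

```python
# Minimal sketch of a scan-derive-OCR step for a single digitised page.
# Assumes Pillow and pytesseract are installed; file names are hypothetical.
from PIL import Image
import pytesseract

def process_page(master_tiff, access_jpeg, text_file):
    page = Image.open(master_tiff)                               # archival TIFF master
    page.convert("RGB").save(access_jpeg, "JPEG", quality=85)    # lighter access copy
    text = pytesseract.image_to_string(page)                     # OCR for full-text search
    with open(text_file, "w", encoding="utf-8") as out:
        out.write(text)                                          # plain text for indexing
    return text

process_page("page_001.tif", "page_001.jpg", "page_001.txt")
```

Separating the high-quality master from its derivatives in this way is what allows new delivery formats to be generated later without rescanning the original.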
As the examples in this discussion have highlighted, the changes in online publishing and digitisation have provided many material, network, pedagogical, and research possibilities: we teach online units providing students with access to e-journals, e-books, and customized archives of digitised materials; we communicate via various online technologies; we attend virtual conferences; and we participate in e-research through a global, digital network. In other words, technology is deeply engrained in our everyday lives. In returning to Frollo’s concern that the book would destroy architecture, Umberto Eco offers a placatory note: “in the history of culture it has never happened that something has simply killed something else. Something has profoundly changed something else” (n. pag.). Eco’s point has relevance to our discussion of digital publishing. The transition from print to digital necessitates a profound change that impacts on the ways we read, write, and research. As we have illustrated with our case study of the CLDR project, the move to creating digitised texts of print literature needs to be considered within a dynamic network of multiple causalities, emergent technological processes, and complex negotiations through which digital texts are created, stored, disseminated, and used. Technological changes in just the past five years have, in many ways, created an expectation in the minds of people that the future is no longer some distant time from the present. Rather, as our title suggests, the future is both present and active. References Aarseth, Espen. “How we became Postdigital: From Cyberstudies to Game Studies.” Critical Cyber-culture Studies. Ed. David Silver and Adrienne Massanari. New York: New York UP, 2006. 37–46. An Australian e-Research Strategy and Implementation Framework: Final Report of the e-Research Coordinating Committee. Commonwealth of Australia, 2006. Bolter, Jay David. Writing Space: The Computer, Hypertext, and the History of Writing. Hillsdale, NJ: Erlbaum, 1991. Eco, Umberto. “The Future of the Book.” 1994. 3 June 2008 ‹http://www.themodernword.com/eco/eco_future_of_book.html>. Gunkel, David. J. “What's the Matter with Books?” Configurations 11.3 (2003): 277–303. Harley, Diane. “Use and Users of Digital Resources: A Focus on Undergraduate Education in the Humanities and Social Sciences.” Research and Occasional Papers Series. Berkeley: University of California. Centre for Studies in Higher Education. 12 June 2008 ‹http://www.themodernword.com/eco/eco_future_of_book.html>. Hayles, N. Katherine. My Mother was a Computer: Digital Subjects and Literary Texts. Chicago: U of Chicago P, 2005. Hirtle, Peter B. “The Impact of Digitization on Special Collections in Libraries.” Libraries & Culture 37.1 (2002): 42–52. Hovav, Anat and Paul Gray. “Managing Academic E-journals.” Communications of the ACM 47.4 (2004): 79–82. Hugo, Victor. The Hunchback of Notre Dame (Notre-Dame de Paris). Ware, Hertfordshire: Wordsworth editions, 1993. Kho, Nancy D. “The Medium Gets the Message: Post-Print Publishing Models.” EContent 30.6 (2007): 42–48. Oblinger, Diana and Marilyn Lombardi. “Common Knowledge: Openness in Higher Education.” Opening up Education: The Collective Advancement of Education Through Open Technology, Open Content and Open Knowledge. Ed. Toru Liyoshi and M. S. Vijay Kumar. Cambridge, MA: MIT Press, 2007. 389–400. Murray, Janet H. Hamlet on the Holodeck: The Future of Narrative in Cyberspace. Cambridge, MA: MIT Press, 2001. 
Trimmer, Joseph F., Wade Jennings, and Annette Patterson. eFictions. New York: Harcourt, 2001.
APA, Harvard, Vancouver, ISO, and other styles
33

Deck, Andy. "Treadmill Culture." M/C Journal 6, no. 2 (April 1, 2003). http://dx.doi.org/10.5204/mcj.2157.

Full text
Abstract:
Since the first days of the World Wide Web, artists like myself have been exploring the new possibilities of network interactivity. Some good tools and languages have been developed and made available free for the public to use. This has empowered individuals to participate in the media in ways that are quite remarkable. Nonetheless, the future of independent media is clouded by legal, regulatory, and organisational challenges that need to be addressed. It is not clear to what extent independent content producers will be able to build upon the successes of the 90s – it is yet to be seen whether their efforts will be largely nullified by the anticyclones of a hostile media market. Not so long ago, American news magazines were covering the Browser War. Several real wars later, the terms of surrender are becoming clearer. Now both of the major Internet browsers are owned by huge media corporations, and most of the states (and Reagan-appointed judges) that were demanding the break-up of Microsoft have given up. A curious about-face occurred in U.S. Justice Department policy when John Ashcroft decided to drop the federal case. Maybe Microsoft's value as a partner in covert activity appealed to Ashcroft more than free competition. Regardless, Microsoft is now turning its wrath on new competitors, people who are doing something very, very bad: sharing the products of their own labour. This practice of sharing source code and building free software infrastructure is epitomised by the continuing development of Linux. Everything in the Linux kernel is free, publicly accessible information. As a rule, the people building this "open source" operating system software believe that maintaining transparency is important. But U.S. courts are not doing much to help. In a case brought by the Motion Picture Association of America against Eric Corley, a federal district court blocked the distribution of source code that enables these systems to play DVDs. In addition to censoring Corley's journal, the court ruled that any programmer who writes a program that plays a DVD must comply with a host of license restrictions. In short, an established and popular media format (the DVD) cannot be used under open source operating systems without sacrificing the principle that software source code should remain in the public domain. Should the contents of operating systems be tightly guarded secrets, or subject to public review? If there are capable programmers willing to create good, free operating systems, should the law stand in their way? The question concerning what type of software infrastructure will dominate personal computers in the future is being answered as much by disappointing legal decisions as it is by consumer choice. Rather than ensuring the necessary conditions for innovation and cooperation, the courts permit a monopoly to continue. Rather than endorsing transparency, secrecy prevails. Rather than aiming to preserve a balance between the commercial economy and the gift-economy, sharing is being undermined by the law. Part of the mystery of the Internet for a lot of newcomers must be that it seems to disprove the old adage that you can't get something for nothing. Free games, free music, free pornography, free art. Media corporations are doing their best to change this situation. The FBI and trade groups have blitzed the American news media with alarmist reports about how children don't understand that sharing digital information is a crime. 
Teacher Gail Chmura, the star of one such media campaign, says of her students, "It's always been interesting that they don't see a connection between the two. They just don't get it" (Hopper). Perhaps the confusion arises because the kids do understand that digital duplication lets two people have the same thing. Theft is at best a metaphor for the copying of data, because the original is not stolen in the same sense as a material object. In the effort to liken all copying to theft, legal provisions for the fair use of intellectual property are neglected. Teachers could just as easily emphasise the importance of sharing and the development of an electronic commons that is free for all to use. The values advanced by the trade groups are not beyond question and are not historical constants. According to Donald Krueckeberg, Rutgers University Professor of Urban Planning, native Americans tied the concept of property not to ownership but to use. "One used it, one moved on, and use was shared with others" (qtd. in Batt). Perhaps it is necessary for individuals to have dominion over some private data. But who owns the land, wind, sun, and sky of the Internet – the infrastructure? Given that publicly-funded research and free software have been as important to the development of the Internet as have business and commercial software, it is not surprising that some ambiguity remains about the property status of the dataverse. For many the Internet is as much a medium for expression and the interplay of languages as it is a framework for monetary transaction. In the case involving DVD software mentioned previously, there emerged a grass-roots campaign in opposition to censorship. Dozens of philosophical programmers and computer scientists asserted the expressive and linguistic bases of software by creating variations on the algorithm needed to play DVDs. The forbidden lines of symbols were printed on T-shirts, translated into different computer languages, translated into legal rhetoric, and even embedded into DNA and pictures of MPAA president Jack Valenti (see e.g. Touretzky). These efforts were inspired by a shared conviction that important liberties were at stake. Supporting the MPAA's position would do more than protect movies from piracy. The use of the algorithm was not clearly linked to an intent to pirate movies. Many felt that outlawing the DVD algorithm, which had been experimentally developed by a Norwegian teenager, represented a suppression of gumption and ingenuity. The court's decision rejected established principles of fair use, denied the established legality of reverse engineering software to achieve compatibility, and asserted that journalists and scientists had no right to publish a bit of code if it might be misused. In a similar case in April 2000, a U.S. court of appeals found that First Amendment protections did apply to software (Junger). Noting that source code has both an expressive feature and a functional feature, this court held that First Amendment protection is not reserved only for purely expressive communication. Yet in the DVD case, the court opposed this view and enforced the inflexible demands of the Digital Millennium Copyright Act. Notwithstanding Ted Nelson's characterisation of computers as literary machines, the decision meant that the linguistic and expressive aspects of software would be subordinated to other concerns. A simple series of symbols were thereby cast under a veil of legal secrecy. 
Although they were easy to discover, and capable of being committed to memory or translated to other languages, fair use and other intuitive freedoms were deemed expendable. These sorts of legal obstacles are serious challenges to the continued viability of free software like Linux. The central value proposition of Linux-based operating systems – free, open source code – is threatening to commercial competitors. Some corporations are intent on stifling further development of free alternatives. Patents offer another vulnerability. The writing of free software has become a minefield of potential patent lawsuits. Corporations have repeatedly chosen to pursue patent litigation years after the alleged infringements have been incorporated into widely used free software. For example, although it was designed by an array of international experts to avoid patent problems, the image file format known as JPEG (Joint Photographic Experts Group) has recently been dogged by patent infringement charges. Despite good intentions, low-budget initiatives and ad hoc organisations are ill equipped to fight profiteering patent lawsuits. One wonders whether software innovation is directed more by lawyers or computer scientists. The present copyright and patent regimes may serve the needs of the larger corporations, but it is doubtful that they are the best means of fostering software innovation and quality. Orwell wrote in his Homage to Catalonia, "There was a new rule that censored portions of the newspaper must not be left blank but filled up with other matter; as a result it was often impossible to tell when something had been cut out." The development of the Internet has a similar character: new diversions spring up to replace what might have been so that the lost potential is hardly felt. The process of retrofitting Internet software to suit ideological and commercial agendas is already well underway. For example, Microsoft has announced recently that it will discontinue support for the Java language in 2004. The problem with Java, from Microsoft's perspective, is that it provides portable programming tools that work under all operating systems, not just Windows. With Java, programmers can develop software for the large number of Windows users, while simultaneously offering software to users of other operating systems. Java is an important piece of the software infrastructure for Internet content developers. Yet, in the interest of coercing people to use only their operating systems, Microsoft is willing to undermine thousands of existing Java-language projects. Their marketing hype calls this progress. The software industry relies on sales to survive, so if it means laying waste to good products and millions of hours of work in order to sell something new, well, that's business. The consequent infrastructure instability keeps software developers, and other creative people, on a treadmill. From Progressive Load by Andy Deck, artcontext.org/progload. As an Internet content producer, one does not appeal directly to the hearts and minds of the public; one appeals through the medium of software and hardware. Since most people are understandably reluctant to modify the software running on their computers, the software installed initially is a critical determinant of what is possible. Unconventional, independent, and artistic uses of the Internet are diminished when the media infrastructure is effectively established by decree.
Unaccountable corporate control over infrastructure software tilts the playing field against smaller content producers who have neither the advance warning of industrial machinations, nor the employees and resources necessary to keep up with a regime of strategic, cyclical obsolescence. It seems that independent content producers must conform to the distribution technologies and content formats favoured by the entertainment and marketing sectors, or else resign themselves to occupying the margins of media activity. It is no secret that highly diversified media corporations can leverage their assets to favour their own media offerings and confound their competitors. Yet when media giants AOL and Time-Warner announced their plans to merge in 2000, the claim of CEOs Steve Case and Gerald Levin that the merged companies would "operate in the public interest" was hardly challenged by American journalists. Time-Warner has since fought to end all ownership limits in the cable industry; and Case, who formerly championed third-party access to cable broadband markets, changed his tune abruptly after the merger. Now that Case has been ousted, it is unclear whether he still favours oligopoly. According to Levin, "global media will be and is fast becoming the predominant business of the 21st century ... more important than government. It's more important than educational institutions and non-profits. We're going to need to have these corporations redefined as instruments of public service, and that may be a more efficient way to deal with society's problems than bureaucratic governments. Corporate dominance is going to be forced anyhow because when you have a system that is instantly available everywhere in the world immediately, then the old-fashioned regulatory system has to give way" (Levin). It doesn't require a lot of insight to understand that this "redefinition," this sleight of hand, does not protect the public from abuses of power: the dissolution of the "old-fashioned regulatory system" does not serve the public interest. From Lexicon by Andy Deck, artcontext.org/lexicon. As an artist who has adopted telecommunications networks and software as his medium, it disappoints me that a mercenary vision of electronic media's future seems to be the prevailing blueprint. The giantism of media corporations, and the ongoing deregulation of media consolidation (Ahrens), underscore the critical need for independent media sources. If it were just a matter of which cola to drink, it would not be of much concern, but media corporations control content. In this hyper-mediated age, content – whether produced by artists or journalists – crucially affects what people think about and how they understand the world. Content is not impervious to the software, protocols, and chicanery that surround its delivery. It is about time that people interested in independent voices stop believing that laissez faire capitalism is building a better media infrastructure. The German writer Hans Magnus Enzensberger reminds us that the media tyrannies that affect us are social products. The media industry relies on thousands of people to make the compromises necessary to maintain its course. The rapid development of the mind industry, its rise to a key position in modern society, has profoundly changed the role of the intellectual. He finds himself confronted with new threats and new opportunities.
Whether he knows it or not, whether he likes it or not, he has become the accomplice of a huge industrial complex which depends for its survival on him, as he depends on it for his own. He must try, at any cost, to use it for his own purposes, which are incompatible with the purposes of the mind machine. What it upholds he must subvert. He may play it crooked or straight, he may win or lose the game; but he would do well to remember that there is more at stake than his own fortune (Enzensberger 18). Some cultural leaders have recognised the important role that free software already plays in the infrastructure of the Internet. Among intellectuals there is undoubtedly a genuine concern about the emerging contours of corporate, global media. But more effective solidarity is needed. Interest in open source has tended to remain superficial, leading to trendy, cosmetic, and symbolic uses of terms like "open source" rather than to a deeper commitment to an open, public information infrastructure. Too much attention is focussed on what's "cool" and not enough on the road ahead. Various media specialists – designers, programmers, artists, and technical directors – make important decisions that affect the continuing development of electronic media. Many developers have failed to recognise (or care) that their decisions regarding media formats can have long reaching consequences. Web sites that use media formats which are unworkable for open source operating systems should be actively discouraged. Comparable technologies are usually available to solve compatibility problems. Going with the market flow is not really giving people what they want: it often opposes the work of thousands of activists who are trying to develop open source alternatives (see e.g. Greene). Average Internet users can contribute to a more innovative, free, open, and independent media – and being conscientious is not always difficult or unpleasant. One project worthy of support is the Internet browser Mozilla. Currently, many content developers create their Websites so that they will look good only in Microsoft's Internet Explorer. While somewhat understandable given the market dominance of Internet Explorer, this disregard for interoperability undercuts attempts to popularise standards-compliant alternatives. Mozilla, written by a loose-knit group of activists and programmers (some of whom are paid by AOL/Time-Warner), can be used as an alternative to Microsoft's browser. If more people use Mozilla, it will be harder for content providers to ignore the way their Web pages appear in standards-compliant browsers. The Mozilla browser, which is an open source initiative, can be downloaded from http://www.mozilla.org/. While there are many people working to create real and lasting alternatives to the monopolistic and technocratic dynamics that are emerging, it takes a great deal of cooperation to resist the media titans, the FCC, and the courts. Oddly enough, corporate interests sometimes overlap with those of the public. Some industrial players, such as IBM, now support open source software. For them it is mostly a business decision. Frustrated by the coercive control of Microsoft, they support efforts to develop another operating system platform. For others, including this writer, the open source movement is interesting for the potential it holds to foster a more heterogeneous and less authoritarian communications infrastructure. 
Many people can find common cause in this resistance to globalised uniformity and consolidated media ownership. The biggest challenge may be to get people to believe that their choices really matter, that by endorsing certain products and operating systems and not others, they can actually make a difference. But it's unlikely that this idea will flourish if artists and intellectuals don't view their own actions as consequential. There is a troubling tendency for people to see themselves as powerless in the face of the market. This paralysing habit of mind must be abandoned before the media will be free. Works Cited Ahrens, Frank. "Policy Watch." Washington Post (23 June 2002): H03. 30 March 2003 <http://www.washingtonpost.com/ac2/wp-dyn/A27015-2002Jun22?la... ...nguage=printer>. Batt, William. "How Our Towns Got That Way." 7 Oct. 1996. 31 March 2003 <http://www.esb.utexas.edu/drnrm/WhatIs/LandValue.htm>. Chester, Jeff. "Gerald Levin's Negative Legacy." Alternet.org 6 Dec. 2001. 5 March 2003 <http://www.democraticmedia.org/resources/editorials/levin.php>. Enzensberger, Hans Magnus. "The Industrialisation of the Mind." Raids and Reconstructions. London: Pluto Press, 1975. 18. Greene, Thomas C. "MS to Eradicate GPL, Hence Linux." 25 June 2002. 5 March 2003 <http://www.theregus.com/content/4/25378.php>. Hopper, D. Ian. "FBI Pushes for Cyber Ethics Education." Associated Press 10 Oct. 2000. 29 March 2003 <http://www.billingsgazette.com/computing/20001010_cethics.php>. Junger v. Daley. U.S. Court of Appeals for 6th Circuit. 00a0117p.06. 2000. 31 March 2003 <http://pacer.ca6.uscourts.gov/cgi-bin/getopn.pl?OPINION=00a0... ...117p.06>. Levin, Gerald. "Millennium 2000 Special." CNN 2 Jan. 2000. Touretzky, D. S. "Gallery of CSS Descramblers." 2000. 29 March 2003 <http://www.cs.cmu.edu/~dst/DeCSS/Gallery>.
APA, Harvard, Vancouver, ISO, and other styles
34

Apperley, Tom, Bjorn Nansen, Michael Arnold, and Rowan Wilken. "Broadband in the Burbs: NBN Infrastructure, Spectrum Politics and the Digital Home." M/C Journal 14, no. 4 (August 23, 2011). http://dx.doi.org/10.5204/mcj.400.

Full text
Abstract:
The convergence of suburban homes and digital media and communications technologies is set to undergo a major shift as next-generation broadband infrastructures are installed. Embodied in the Australian Government’s National Broadband Network (NBN) and the delivery of fibre-optic cable to the front door of every suburban home, is an anticipated future of digital living that will transform the landscape and experience of suburban life. Drawing from our research, and from industry, policy and media documents, we map some scenarios of the NBN rollout in its early stages to show that this imaginary of seamless broadband in the suburbs and the transformation of digital homes it anticipates is challenged by local cultural and material geographies, which we describe as a politics of spectrum. The universal implementation of policy across Australia faces a considerable challenge in dealing with Australia’s physical environment. Geography has always had a major impact on communications technologies and services in Australia, and a major impetus of building a national broadband network has been to overcome the “tyranny of distance” experienced by people in many remote, regional and suburban areas. In 2009 the minister for Broadband, Communications and the Digital Economy (DBCDE), Stephen Conroy, announced that with the Government’s NBN policy “every person and business in Australia, no-matter where they are located, will have access to affordable, fast broadband at their fingertips” (Conroy). This ambition to digitally connect and include imagines the NBN as the solution to the current patchwork of connectivity and Internet speeds experienced across the country (ACCAN). Overcoming geographic difference and providing fast, universal and equitable digital access is to be realised through an open access broadband network built by the newly established NBN Co. Limited, jointly owned by the Government and the private sector at a cost estimated at $43 billion over eight years. In the main this network will depend upon fibre-optics reaching over 90% of the population, and achieving download speeds of up to 100 Mbit/s. The remaining population, mostly living in rural and remote areas, will receive wireless and satellite connections providing speeds of 12 Mbit/s (Conroy). Differential implementation in relation to comparisons of urban and remote populations is thus already embedded in the policy, yet distance is not the only characteristic of Australia’s material geographies that will shape the physical implementation of the NBN and create a varied spectrum of the experience of broadband. Instead, in this article we examine the uneven experience of broadband we may see occurring within suburban regions; places in which enhanced and collective participation in the digital economy relies upon the provision of faster transmission speeds and the delivery of fibre “the last mile” to each and every premise. The crucial platform for delivering broadband to the ’burbs is the digital home. The notion of the connected or smart or digital home has been around in different guises for a number of decades (e.g. Edwards et al.), and received wide press coverage in the 1990s (e.g. Howard). It has since been concretised in the wake of the NBN as telecommunications companies struggle to envision a viable “next step” in broadband consumption. 
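As a rough worked example of what the speeds quoted above mean in practice, the short sketch below estimates how long a 1 GB download would take at the NBN's fibre (100 Mbit/s) and wireless/satellite (12 Mbit/s) rates, with a nominal 1.5 Mbit/s ADSL figure assumed here purely for comparison; real-world throughput would be lower in every case.

```python
# Rough, illustrative arithmetic only: time to move a 1 GB file at the
# headline rates quoted above (the 1.5 Mbit/s ADSL figure is an assumption).
file_megabits = 1 * 8 * 1000   # 1 GB is roughly 8000 megabits in decimal units

for label, mbps in [("NBN fibre, 100 Mbit/s", 100),
                    ("NBN wireless/satellite, 12 Mbit/s", 12),
                    ("Nominal ADSL, 1.5 Mbit/s", 1.5)]:
    minutes = file_megabits / mbps / 60
    print(f"{label}: about {minutes:.1f} minutes")
# prints roughly 1.3, 11.1 and 88.9 minutes respectively
```

The order-of-magnitude gap between these figures is the practical substance of the "politics of spectrum" developed in the remainder of the article.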
Novel to the NBN imaginary of the digital home is a shift from thinking about the digital home in terms of consumer electronics and interoperable or automatic devices, based on shared standards or home networking, to addressing the home as a platform embedded within the economy. The digital home is imagined as an integral part of a network of digital living with seamless transitions between home, office, supermarket, school, and hospital. In the imaginary of the NBN, the digital home becomes a vital connection in the growing digital economy. Communications Patchwork, NBN Roll-Out and Infrastructure Despite this imagined future of seamless connectivity and universal integration of suburban life with the digital economy, there has been an uneven take-up of fibre connections. We argue that this suggests that the particularities of place and the materialities of geography are relevant for understanding the differential uptake of the NBN across the test sites. Furthermore, we maintain that these issues provide a useful model for understanding the ongoing process and challenges that the rollout of the NBN will face in providing even access to the imagined future of the digital home to all Australians. As of June 2011 an average of 70 per cent of homes in the five first release NBN sites have agreed to have the fibre cables installed (Grubb). However, there is a dramatic variation between these sites: in Armidale, NSW, and Willunga, SA, the percentage of properties consenting to fibre connections on their house is between 80-90 per cent; whereas in Brunswick, Victoria, and Midway Point, Tasmania, the take-up rate is closer to 50 per cent (Grubb). We suggest that these variations are created by a differential geography of connectivity that will continue to grow in significance as the NBN is rolled out to more locations around Australia. These can be seen to emerge as a consequence of localised conditions relating to, for example, installation policy, a focus on cost, and installation logistics. Another significant factor, unable to be addressed within the scope of this paper, is the integration of the NBN with each household’s domestic network of hardware devices, internal connections, software, and of course skill and interest. Installation Policy The opt-in policy of the NBN Co requires that owners of properties agree to become connected—as opposed to being automatically connected unless they opt-out. This makes getting connected a far simpler task for owner-occupiers over renters, because the latter group were required to triangulate with their landlords in order to get connected. This was considered to be a factor that impacted on the relatively low uptake of the NBN in Brunswick and Midway Point, and is reflected in media reports (Grubb) and our research: There was a bit of a problem with Midway Point, because I think it is about fifty percent of the houses here are rentals, and you needed signatures from the owners for the box to be put onto the building (anon. “Broadband in the Home” project). …a lot of people rent here, so unless their landlord filled it in they wouldn’t know (anon. “Broadband in the Home” project). The issue is exacerbated by the concentration of rental properties in particular suburbs and complicated rental arrangements mediated through agents, which prevent effective communication between the occupiers and owners of a property. 
In order to increase take-up in Tasmania, former State Premier, David Bartlett, successfully introduced legislation to the Tasmanian state legislature in late 2010 to make the NBN opt-out rather than opt-in. This reversed the onus of responsibility and meant that in Tasmania all houses and businesses would be automatically connected unless otherwise requested, and in order to effect this simple policy change, the government had to change trespass laws. However, other state legislatures are hesitant to follow the opt-out model (Grubb). Differentials in owner-occupied and rental properties within urban centres, combined with opt-in policies, are likely to see a continuation of the connectivity patchwork that has thus far characterised Australian communications experience. A Focus on Cost Despite a great deal of public debate about the NBN, there is relatively little discussion of its proposed benefits. The fibre-to-the-home structure of the NBN is also subject to fierce partisan political debate between Australia’s major political parties, particularly around the form and cost of its implementation. As a consequence of this preoccupation with cost, many Australian consumers cannot see a “value proposition” in connecting, and are not convinced of the benefits of the NBN (Brown). The NBN is often reduced to an increased minimum download rate, and to increased ISP fees associated with high speeds, rather than a broader discussion of how the infrastructure can impact on commerce, education, entertainment, healthcare, and work (Barr). Moreover, this lack of balance in the discussion of costs and benefits extends in some instances to outright misunderstandings about the difference between infrastructure and service provision: …my neighbour across the road did not understand what that letter meant, and she would have to have been one of dozens if not hundreds in the exactly the same situation, who thought they were signing up for a broadband plan rather than just access to the infrastructure (anon. “Broadband in the Home” project) Lastly, the advent of the NBN in the first release areas does not override the costs of existing contracts for broadband delivered over the current copper network. Australians are often required to sign long-term contracts that prevent them from switching immediately to the new high-speed broadband (HSB) infrastructure. Installation Logistics Local variations in fibre installation were evident prior to the rollout of the NBN, when the increased provision of HSB was already being used as a marketing device for greenfield (newly developed) estates in suburban Australia. In the wake of the NBN rollouts, some housing developers have begun to lay “NBN-ready” optic fibre in greenfield estates. While this is a positive development for those who are purchasing a newly-developed property, those that invest in brownfield “re-developments” may have to pay over twice the amount for the installation of the NBN (Neales). These varying local conditions of installation are reflected in the contractual arrangements for installing the fibre, the installers’ policies for installation, and the processes of installation (Darling): They’re gonna have to do 4000 houses a day … and it was a solid six months to get about 800 houses hooked up here. So, logistically I just can’t see it happening. (anon. “Broadband in the Home” project) Finally, for those who do not take up the free initial installation offer, for whatever reason, there will be costs to have contractors return and connect the fibre (Grubb; Neales).
Spectrum Politics, Fibre in the Neighbourhood The promise that the NBN will provide fast, universal and equitable digital access realised through a fibre-optic network is challenged by the experience of first release sites such as Midway Point. As evident above, and due to a number of factors, there is a likelihood in supposedly NBN-connected places of varied connectivity in which service will range from dial-up to DSL and ADSL to fibre and wireless, all within a single location. The varied connectivity in the early NBN rollout stages suggests that the patchwork of Internet connections commonly experienced in Australian suburbs will continue rather than disappear. This varied patchwork can be understood as a politics of spectrum. Rod Tucker (13-14) emphasises that the crucial element of spectrum is its bandwidth, or information carrying capacity. In light of this, the politics of spectrum reframes the key issue of access to participation in the digital economy to examine the stakes of the varying quality of connection (particularly download speeds), through the available medium (wireless, copper, coaxial cable, optical fibre), connection (modem, antenna, gateway) and service type (DSL, WiFi, Satellite, FTTP). This technical emphasis follows in the wake of debates about digital inclusion (e.g., Warschauer) to re-introduce the importance of connection quality—embedded in older “digital divide” discourse—into approaches that look beyond technical infrastructure to the social conditions of their use. This is a shift that takes account of the various and intertwined socio-technical factors influencing the quality of access and use. This spectrum politics also has important implications for the Universal Service Obligation (USO). Telstra (the former Telecom) continues to have the responsibility to provide every premises in Australia with a standard telephone service, that is, at least a single copper line—or equivalent service—connection. However, the creation of the NBN Co. relieves Telstra of this obligation in the areas which have coverage from the fibre network. This agreement means that Telstra will gradually shut down its ageing copper network, following the pattern of the NBN rollout, and transfer customers to the newly developed broadband fibre network (Hepworth and Wilson). Consequently, every individual phone service in those areas will be required to move onto the NBN to maintain the USO. This means that premises not connected to the NBN because the owners of the property opted out—by default or by choice—are faced with an uncertain future vis-à-vis the meaning and provision of the USO because they will not have access to either copper or fibre networks. At this extreme of spectrum politics, the current policy setting may result in households that have no possibility of a broadband connection. This potential problem can be resolved by a retro-rollout, in which NBN fibre connection is installed at some point in the future to every premises regardless of whether they originally agreed or not. Currently, however, the cost of a retrospective connection is expected to be borne by the consumer: “those who decline to allow NBN Co on to their property will need to pay up to $300 to connect to the NBN at a later date” (Grubb). Smaller, often brownfield development estates also face particular difficulties in the current long-term switch of responsibilities from Telstra to the NBN Co. This is because Telstra is reluctant to install new copper networks knowing that they will soon become obsolete.
Instead, “in housing estates of fewer than 100 houses, Telstra is often providing residents with wireless phones that are unable to connect to the Internet” (Thompson). Thus a limbo is created, where new residents will not have access to either copper or fibre fixed line connections. Rather, they will have to use whatever wireless Internet is available in the area. Particularly concerning is that the period of the rollout is projected to last for eight years. As a result: “Thousands of Australians—many of them in regional areas—can expect years of worse, rather than better, Internet services as the National Broadband Network rolls out across the country” (Thompson). And, given different take-up rates and costs of retro-fitting, this situation could continue for many people and for many years after the initial rollout is completed. Implications of Spectrum Politics for the Digital Home What does this uncertain and patchwork future of connectivity imply for digital living and the next-generation broadband suburb? In contrast to the imagined post-NBN geography of the seamless digital home, local material and cultural factors will still create varied levels of service. This predicament challenges the ideals of organisations such as the Digital Living Network, an industry body comprised of corporate members, “based on principles of open standards and home networking interoperability [which] will unleash a rich digital media environment of interconnected devices that enable us all to experience our favorite content and services wherever and whenever we want” (Vohringer). Such a vision of convergence takes a domestic approach to the “Internet of things” by imagining a user-friendly network of personal computing, consumer electronics, mobile technologies, utilities, and other domestic technologies. The NBN anticipates a digital home that is integrated into the digital economy as a node of production and consumption. But this future is challenged by the patchwork of connectivity. Bruno Latour famously remarked that even the most extensive and powerful networks are local at every point. Although he was speaking of actor-networks, not broadband networks, analysis of the Australian experience of high-speed broadband would do well to look beyond its national characteristics to include its local characteristics, and the constellations between them. It is at the local level, importantly, at the level of the household and suburb, that the NBN will be experienced in daily life. As we have argued here, we have reason to expect that this experience will be as disparate as the network is distributed, and we have reason to believe that local cultural and material factors such as installation policies, discussions around costs and benefits, the household’s own internal digital infrastructure, and installation logistics at the level of the house and the neighbourhood, will continue to shape a patchworked geography of media and communications experiences for digital homes. References Australian Communications Consumer Action Network (ACCAN). National Broadband Network: A Guide for Consumers. Internet Society of Australia (ISOC-AU) and ACCAN, 2011. Barr, Trevor. “A Broadband Services Typology.” The Australian Economic Review 43.2 (2010): 187-193. Brown, Damien. “NBN Now 10 Times Faster.” The Mercury 13 Aug. 2010. ‹http://www.themercury.com.au/article/2010/08/13/165435_todays-news.html›. Conroy, Stephen (Minister for Broadband, Communications and the Digital Economy). “New National Broadband Network”. 
Canberra: Australian Government, 7 April 2009. ‹http://www.minister.dbcde.gov.au/media/media_releases/2009/022›. Darling, Peter. “Building the National Broadband Network.” Telecommunications Journal of Australia 60.3 (2010): 42.1-12. Department of Broadband, Communications and the Digital Economy (DBCDE). “Impacts of Teleworking under the NBN.” Report prepared by Access Economics. Canberra, 2010. Edwards, Keith, Rebecca Grinter, Ratul Mahajan, and David Wetherall. “Advancing the State of Home Networking.” Communications of the ACM 54.6 (2010): 62-71. Grubb, Ben. “Connect to NBN Now or Pay Up to $300 for Phone Line.” The Sydney Morning Herald 15 Oct. 2010. ‹http://www.smh.com.au/technology/technology-news/connect-to-nbn-now-or-pay-up-to-300-for-phone-line-20101015-16ms3.html›. Hepworth, Annabel, and Lauren Wilson. “Customers May Be Forced on to NBN to Keep Phones.” The Australian 12 Oct. 2010. ‹http://www.theaustralian.com.au/national-affairs/customers-may-be-forced-on-to-nbn-to-keep-phones/story-fn59niix-1225937394605›. Howard, Sandy. “How Your Home Will Operate.” Business Review Weekly 25 April 1994: 100. Intel Corporation. “Intel and the Digital Home.” ‹http://www.intel.com/standards/case/case_dh.htm›. Latour, Bruno. Reassembling the Social: An Introduction to Actor-Network-Theory. Oxford: Oxford University Press, 2005. Neales, Sue. “Bartlett Looks at ‘Opt-out’ NBN.” The Mercury 28 July 2010. ‹http://www.themercury.com.au/article/2010/07/28/161721_tasmania-news.html›. Spigel, Lynn. “Media Homes: Then and Now.” International Journal of Cultural Studies 4.4 (2001): 385–411. Thompson, Geoff. “Thousands to Be Stuck in NBN ‘Limbo’.” ABC Online 26 April 2011. ‹http://www.abc.net.au/news/stories/2011/04/26/3200127.htm›. Tietze, S., and G. Musson. “Recasting the Home—Work Relationship: A Case of Mutual Adjustment?” Organization Studies 26.9 (2005): 1331–1352. Trulove, James Grayson (ed.). The Smart House. New York: HDI, 2003. Tucker, Rodney S. “Broadband Facts, Fiction and Urban Myths.” Telecommunications Journal of Australia 60.3 (2010): 43.1 to 43.15. Vohringer, Cesar. CTO of Philips Consumer Electronics (from June 2003 DLNA press release) cited on the Intel Corporation website. ‹http://www.intel.com/standards/case/case_dh.htm›. Warschauer, Mark. Technology and Social Inclusion: Rethinking the Digital Divide. Cambridge: MIT Press, 2003. Wilken, Rowan, Michael Arnold, and Bjorn Nansen. “Broadband in the Home Pilot Study: Suburban Hobart.” Telecommunications Journal of Australia 61.1 (2011): 5.1-16.
APA, Harvard, Vancouver, ISO, and other styles
35

Kimberley, Maree. "Neuroscience and Young Adult Fiction: A Recipe for Trouble?" M/C Journal 14, no. 3 (June 25, 2011). http://dx.doi.org/10.5204/mcj.371.

Full text
Abstract:
Historically, science and medicine have been a great source of inspiration for fiction writers. Mary Shelley, in the 1831 introduction to her novel Frankenstein, said she had been inspired, in part, by discussions about scientific experiments, including those of Darwin and Galvani. Shelley states “perhaps a corpse would be re-animated; galvanism had given token of such things: perhaps the component parts of a creature might be manufactured, brought together, and endued with vital warmth” (10). Countless other authors have followed her lead, from H.G. Wells, whose mad scientist Dr Moreau takes a lead from Shelley’s Dr Frankenstein, through to popular contemporary writers of adult fiction, such as Michael Crichton and Kathy Reichs, who have drawn on their scientific and medical backgrounds for their fictional works. Science- and medicine-themed fiction has also proven popular for younger readers, particularly in dystopian settings. Reichs has extended her writing to include the young adult market with Virals, which combines forensic science with the supernatural. Alison Allen-Gray’s 2009 novel, Lifegame, deals with cloning and organ replacement. Nathan Hobby’s The Fur is based around an environmental disaster where an invasive fungal-fur grows everywhere, including in people’s internal organs. Catherine Jinks’ Piggy in the Middle incorporates genetics and biomedical research into its horror-science fiction plot. Brian Caswell’s young adult novel, Cage of Butterflies, uses elements of neuroscience as a plot device. However, although Caswell’s novel found commercial and critical success—it was shortlisted in the 1993 Children’s Book Council of Australia (CBCA) Book of the Year Awards Older Readers and was reprinted several times—neuroscience is a field that writers of young adult fiction tend to either ignore or only refer to on the periphery. This paper will explore how neuroscientific and dystopian elements interact in young adult fiction, focusing on the current trend for neuroscientific elements to be something that adolescent characters are subjected to rather than something they can use as a tool of positive change. It will argue that the time is right for a shift in young adult fiction away from a dystopian world view to one where the teenaged characters can become powerful agents of change. The term “neuroscience” was first coined in the 1960s as a way to hybridise a range of disciplines and sub-disciplines including biophysics, biology and chemistry (Abi-Rached and Rose). Since then, neuroscience as a field has made huge leaps, particularly in the past two decades with discoveries about the development and growth of the adolescent brain; the dismissal of the nature versus nurture dichotomy; and the acceptance of brain plasticity. Although individual scientists had made discoveries relating to brain plasticity in adult humans as far back as the 1960s, for example, it is less than 10 years since neuroplasticity—the notion that nerve cells in human brains and nervous systems are malleable, and so can be changed or modified by input from the environment—was accepted into mainstream scientific thinking (Doidge). This was a significant change in brain science from the once dominant principle of localisation, which posited that specific brain functions were fixed in a specific area of the brain, and that once damaged, the function associated with a brain area could not improve or recover (Burrell; Kolb and Whishaw; Doidge).
Furthermore, up until the late 1990s when neuroscientist Jay Giedd’s studies of adolescent brains showed that the brain’s grey matter, which thickens during childhood, thins during adolescence while the white matter thickens, it was widely accepted that the human brain stopped maturing at around the age of twelve (Wallis and Dell). The research of Giedd and others showed that massive changes, including those affecting decision-making abilities, impulse control and skill development, take place in the developing adolescent brain (Carr-Gregg). Thus, within the last fifteen years, two significant discoveries within neuroscience—brain plasticity and the maturation of the adolescent brain—have had a major impact on the way the brain is viewed and studied. Brian Caswell’s Cage of Butterflies was published too early to take advantage of these neuroscientific discoveries. Nevertheless, the novel includes some specific details about how the brains of a group of children within the story, the Babies, have been altered by febrile convulsions to create an abnormality in their brain anatomy. The abnormality is discovered by a CAT scan (the novel predates the use of fMRI brain scans). Due to their abnormal brain anatomy, the Babies are unable to communicate verbally but can communicate telepathically as a “shared mind” with others outside their small group. It is unlikely Caswell would have been aware of brain plasticity in the early 1990s; nevertheless, in the narrative, older teens are able to slowly understand the Babies by focusing on their telepathic messages until, over time, they can understand them without too much difficulty. Thus Caswell has incorporated neuroscientific elements throughout the plot of his novel and provided some neuroscientific explanation for how the Babies communicate. In recent years, several young adult novels, both speculative and contemporary, have used elements of neuroscience in their narratives; however, these novels tend to put neuroscience on the periphery. Rather than embracing neuroscience as a tool adolescent characters can use for their benefit, as Caswell did, neuroscience is typically something that exists around or is done to the characters; it is an element over which they have no control. These novels are found across several sub-genres of young adult fiction, including science fiction, speculative fiction and contemporary fiction. Most place their narratives in a dystopian world view. The dystopian settings reinforce the idea that the world is a dangerous place to live, and the teenaged characters living in the world of the novels are at the mercy of powerful oppressors. This creates tension within the narrative as the adolescents battle authorities for power. Without the ability to use neuroscientific advantages for their own gain, however, the characters’ power to change their worlds remains in the hands of adult authorities and the teenaged characters ultimately lose the fight to change their world. This lack of agency is evident in several dystopian young adult novels published in recent years, including the Uglies series and to a lesser extent Brain Jack and Dark Angel. Scott Westerfeld’s Uglies series is set in a dystopian future world and uses neuroscientific concepts to both reinforce the power of the ruling regime and give limited agency to the protagonists. In the first book in the series, Uglies, the science supports the narrative where necessary but is always subservient to the action. Westerfeld intended the Uglies series to focus on action.
Westerfeld states “I love a good action sequence, and this series is full of hoverboard chases, escapes through ancient ruins, and leaps off tall buildings in bungee jackets” (Books). Nevertheless, the brain’s ability to rewire itself—the neuroscientific concept of brain plasticity—is a central idea within the Uglies series. In book one, the protagonist Tally Youngblood is desperate to turn 16 so she can join her friends and become a Pretty. However, she discovers the operation to become a Pretty involves not just plastic surgery to alter her looks: a lesion is inflicted on the brain, giving each Pretty the equivalent of a frontal lobotomy. In the next book, Pretties, Tally has undergone the procedure and then becomes one of the elite Specials, and in the third instalment she eventually rejects her Special status and returns to her true nature. This latter process, one of the characters explains, is possible because Tally has learnt to rewire her brain, and so undo the Pretty operation and the procedure that made her a Special. Thus neuroscientific concepts of brain injury and recovery through brain plasticity are prime plot devices. But the narrative offers no explanations for how Tally and some others have the ability to rewire their brains to undo the Pretty operation while most do not. The apparent complexity of the neuroscience is used as a surface plot device rather than as an element that could be explored to add narrative depth. In contrast, the philosophical implications of recent neuroscientific discoveries, rather than the physical, are explored in another recent young adult novel, Dark Angel. David Klass’ novel, Dark Angel, places recent developments in neuroscience in a contemporary setting to explore the nature of good and evil. It tells the story of 17-year-old Jeff, whose ordinary, small-town life implodes when his older brother, Troy, comes home on parole after serving five years for manslaughter. A school assignment forces Jeff to confront Troy’s complex nature. The science teacher asks his class “where does our growing knowledge of the chemical nature of the brain leave us in terms of... the human soul? When we think, are we really making choices or just following chemical pathways?” (Klass 74). This passage introduces a neuroscientific angle into the plot, and may refer to a case brought before the US Supreme Court in 2005 where the court admitted a brief based on brain scans showing that adolescent brains work differently than adult brains (Madrigal). The protagonist, Jeff, explores the nature of good and evil through this neuroscientific framework as the story's action unfolds, and examines his relationship with Troy, who is described in all his creepiness and vulnerability. Again through the teacher, Klass incorporates trauma and its impact on the brain from a neuroscientific perspective: There are psychiatrists and neurologists doing studies on violent lawbreakers...who are finding that these felons share amazingly similar patterns of abusive childhoods, brain injuries, and psychotic symptoms. (Klass 115) Jeff's story is infused with the fallout of his brother’s violent past and present, yet there is no hint of any trauma in Jeff’s or Troy’s childhoods that could be seen as a cause for Troy’s aberrant behaviour. Thus, although Klass’ novel explores more philosophical aspects of neuroscience, like Westerfeld’s novel, it uses developments in neuroscience as a point of interest.
The neuroscience in Dark Angel is not embedded in the story but is a lens through which to view the theme of whether people are born evil or made evil. Brain Jack and Being are another two recent young adult novels that explore physical and philosophical aspects of modern neuroscience to some extent. The possible neurological effects of technology on the brain, particularly the adolescent brain, are a field of research popularised by English neuroscientist Baroness Susan Greenfield. Brian Falkner’s 2010 release, Brain Jack, explores this branch of neuroscience with its cautionary tale of a hands-free device—a cap, called the neuro-headset, with small wires that attach to your head—that allows you to control your computer with your thoughts. As more and more people use the neuro-headset, the avatar designed to help people learn to use the software develops consciousness and its own moral code, destroying anyone it considers a threat by frying their brains. Like Dark Angel and Uglies, Brain Jack keeps the neuroscience on the periphery as an element over which the characters have little or no control, and details about how the neuro-headset affects the brain of its wearers, and how the avatar develops consciousness, are not explored. Conversely, Kevin Brooks’ novel Being explores the nature of consciousness outside the field of neuroscience. The protagonist, Robert, goes into hospital for a routine procedure and discovers that instead of internal organs, he has some kind of hardware. On the run from authorities who are after him for reasons he does not understand, Robert tries frantically to reconstruct his earliest memories to give him some clue as to who, or what, he really is: if he does not have normal human body parts, is he human? However, whether or not he has a human brain, and the implications of either answer for his consciousness, are never addressed. Thus, although the novels discussed above each incorporate neuroscience to some degree, they do so at a cursory level. In the case of Being this is understandable as neuroscience is never explicitly mentioned; rather it is a possible sub-text implied through the theme of consciousness. In Dark Angel, through the teacher as mouthpiece, neuroscience is offered up as a possible explanation for criminal behaviour, which causes the protagonist to question his beliefs and judgements about his brother. However, in Uglies, and to a lesser extent in Brain Jack, neuroscience is glossed over when more detail may have added extra depth and complexity to the novels. Fast-paced action is a common element in much contemporary young adult fiction, and thus it is possible that Westerfeld and Falkner both chose to sacrifice complexity for the sake of action. In Uglies, it is likely this is the case, given Westerfeld’s love of action sequences and his attention to detail about objects created exclusively for his futuristic world. However, Brain Jack goes into explicit detail about computer hacking. Falkner’s dismissal of the neuroscientific aspects of his plot, which could have added extra interest, most likely stems from his passion for computer science (he studied computer science at university) rather than a distaste for or ignorance of neuroscience. Nevertheless, Falkner, Westerfeld, Brooks, and to a lesser extent Klass, have each glossed over a source of potential power that could turn the dystopian worlds of their novels into ones where the teenaged protagonists hold the power to make lasting change.
In each of these novels, neuroscientific concepts are generally used to support a bleak or dystopian world view. In Uglies, the characters have two choices: a life as a lobotomised Pretty or a life on the run from the authorities, where discovery and capture are a constant threat. The USA represented in Brain Jack descends into civil war, where those unknowingly enslaved by the avatar’s consciousness fight against those who refuse to wear the neuro-headsets. The protagonist in Being lives in hiding from the secret authorities who seek to capture and destroy him. Even in Dark Angel, the neuroscience is not a source of comfort or support for the protagonist, whose life, and that of his family, falls apart as a consequence of his older brother’s criminal actions. It is only in the 1990s novel, Cage of Butterflies, that characters use a neuroscientific advantage to improve their situation. The Babies in Caswell’s Cage of Butterflies are initially victims of their brain abnormality; however, with the help of the teenaged characters, along with two adult characters, they are able to use their “condition” to help create a new life for themselves. Telepathically communicating through their “shared mind,” the Babies coordinate their efforts with the others to escape from the research scientists who threaten their survival. In this way, what starts as a neurological disability is turned into an advantage. Cage of Butterflies illustrates how a young adult novel can incorporate neuroscience into its narrative in a way that offers the young adults agency to make positive changes in their lives. Furthermore, with recent neuroscientific discoveries showing that adolescence is a vital time for brain development and growth, there is potential for neuroscience to be explored as an agent of positive change in a new wave of young adult fiction, one that adopts a non-dystopian (if not optimistic) world view. Dystopian young adult fiction has been enjoying enormous popularity in western publishing in the past few years with series such as the Chaos Walking, Hunger Games, and Maze Runner trilogies topping bestseller lists. Dystopian fiction appeals to young adult audiences, states Westerfeld, because: Teenagers’ lives are constantly defined by rules, and in response they construct their identities through necessary confrontations with authority, large and small. Imagining a world in which those authorities must be destroyed by any means necessary is one way of expanding that game. ("Teenage Wastelands") Teenagers often find themselves in trouble, and are almost as often likely to cause trouble. Placing them in a fictional dystopian world gives them room to fight authority; too often, however, the young adult protagonists are never able to completely escape the world the adults impose upon them. For example, the epilogue of James Dashner’s The Maze Runner tells the reader the surviving group have not escaped the makers of the maze, and their apparent rescuers are part of the same group of adult authorities. Caswell’s neurologically evolved Babies, along with their high-IQ teenage counterparts, however, provide a model for how young protagonists can take advantage of neuroscientific discoveries to cause trouble for hostile authorities in their fictional worlds. The power of the brain harnessed by adolescents, alongside their hormonal changes, is by its nature a recipe for trouble: it has the potential to give young people an agency and power adults may fear.
In the everyday, lived world, neuroscientific tools are always in the hands of adults; however, there needs to be no such constraint in a fictional world. The superior ability of adolescents to grow the white matter of their brains, for example, could give rise to a range of fictional scenarios where the adolescents could use their brain power to brainwash adults in authority. A teenage neurosurgeon might not work well in a contemporary setting but could be credible in a speculative fiction setting. The number of possible scenarios is endless. More importantly, however, it offers a relatively unexplored avenue for teenaged characters to have agency and power in their fictional worlds. Westerfeld may be right in his assertion that the current popularity of dystopian fiction for young adults is a reaction to the highly monitored and controlled world in which they live ("Teenage Wastelands"). However, an alternative world view, one where the adolescents take control and defeat the adults, is just as valid. Such a scenario has been explored in Cory Doctorow’s For the Win, where marginalised and exploited gamers from Singapore and China band together with an American to form a global union and defeat their oppressors. Doctorow uses online gaming skills, a field of expertise where youth are considered superior to adults, to give his characters power over adults in their world. Similarly, the amazing changes that take place in the adolescent brain are a natural advantage that teenaged characters could utilise, particularly in speculative fiction, to gain power over adults. To imbue adolescent characters with such power has the potential to move young adult fiction beyond the confines of the dystopian novel and open new narrative pathways. The 2011 Bologna Children’s Book Fair supports the view that western-based publishing companies will be looking for more dystopian young adult fiction for the next year or two (Roback). However, within a few years, it is possible that the popularity of zombies, werewolves and vampires—and their dominance of fictional dystopian worlds—will pass or, at least, change in their representations. The “next big thing” in young adult fiction could be neuroscience. Moreover, neuroscientific concepts could be incorporated into the standard zombie/vampire/werewolf trope to create yet another hybrid to explore: a zombie virus that mutates to give a new breed of undead creature superior intelligence, for example; or a new cross-breed of werewolf that gives humans the advantages of the canine brain with none of the disadvantages. The capacity and complexity of the human brain are enormous, and thus offer enormous potential to create exciting young adult fiction that explores new territory, giving the teenaged reader a sense of their own power and natural advantages. In turn, this is bound to give them infinite potential to create fictional trouble. References Abi-Rached, Joelle M., and Nikolas Rose. “The Birth of the Neuromolecular Gaze.” History of the Human Sciences 23 (2010): 11-36. Allen-Gray, Alison. Lifegame. Oxford: Oxford UP, 2009. Brooks, Kevin. Being. London: Puffin Books, 2007. Burrell, Brian. Postcards from the Brain Museum. New York: Broadway, 2004. Carr-Gregg, Michael. The Princess Bitchface Syndrome. Melbourne: Penguin Books, 2006. Caswell, Brian. A Cage of Butterflies. Brisbane: University of Queensland Press, 1992. Dashner, James. The Maze Runner. Somerset, United Kingdom: Chicken House, 2010. Doctorow, Cory. For the Win. New York: Tor, 2010. Doidge, Norman.
The Brain That Changes Itself. Melbourne: Scribe, 2007. Falkner, Brian. Brain Jack. New York: Random House, 2009. Hobby, Nathan. The Fur. Fremantle: Fremantle Press, 2004. Jinks, Catherine. Piggy in the Middle. Melbourne: Penguin, 1998. Klass, David. Dark Angel. New York: HarperTeen, 2007. Kolb, Bryan, and Ian Whishaw. Fundamentals of Human Neuropsychology. New York: Worth, 2009. Lehrer, Jonah. “The Human Brain Gets a New Map.” The Frontal Cortex. 2011. 10 April 2011 ‹http://www.wired.com/wiredscience/2011/04/the-human-brain-atlas/›. Madrigal, Alexis. “Courtroom First: Brain Scan Used in Murder Sentencing.” Wired. 2009. 16 April 2011 ‹http://www.wired.com/wiredscience/2009/11/brain-scan-murder-sentencing/›. Reichs, Kathy. Virals. London: Young Corgi, 2010. Roback, Diane. “Bologna 2011: Back to Business at a Buoyant Fair.” Publishers Weekly. 2011. 17 April 2011 ‹http://www.publishersweekly.com/pw/by-topic/childrens/childrens-industry-news/article/46698-bologna-2011-back-to-business-at-a-buoyant-fair.html›. Shelley, Mary. Frankenstein. London: Arrow Books, 1973. Wallis, Claudia, and Krystina Dell. “What Makes Teens Tick?” Death Penalty Information Centre. 2004. 10 April 2011 ‹http://www.deathpenaltyinfo.org/what-makes-teens-tick-flood-hormones-sure-also-host-structural-changes-brain-can-those-explain-behav›. Wells, H.G. The Island of Dr Moreau. Melbourne: Penguin, 1896. Westerfeld, Scott. Uglies. New York: Simon Pulse, 2005. ———. Pretties. New York: Simon Pulse, 2005. ———. Specials. New York: Simon Pulse, 2006. ———. Books. 2008. 1 Sep. 2010 ‹http://www.scottwesterfeld.com/author/books.htm›. ———. “Teenage Wastelands: How Dystopian YA Became Publishing’s Next Big Thing.” Tor.com 2011. 17 April 2011 ‹http://www.tor.com/blogs/2011/04/teenage-wastelands-how-dystopian-ya-became-publishings-next-big-thing›.
APA, Harvard, Vancouver, ISO, and other styles
36

Lee, Ashlin. "In the Shadow of Platforms." M/C Journal 24, no. 2 (April 27, 2021). http://dx.doi.org/10.5204/mcj.2750.

Full text
Abstract:
Introduction This article explores the changing relational quality of “the shadow of hierarchy”, in the context of the merging of platforms with infrastructure as the source of the shadow of hierarchy. In governance and regulatory studies, the shadow of hierarchy (or variations thereof) describes the space of influence that hierarchical organisations and infrastructures have (Héritier and Lehmkuhl; Lance et al.). A shift in who/what casts the shadow of hierarchy will necessarily result in changes to the attendant relational values, logics, and (techno)socialities that constitute the shadow, and a new arrangement of shadow that presents new challenges and opportunities. This article reflects on relevant literature to consider two different ways the shadow of hierarchy has qualitatively changed as platforms, rather than infrastructures, come to cast the shadow of hierarchy – an increase in scalability; and new socio-technical arrangements of (non)participation – and the opportunities and challenges therein. The article concludes that more concerted efforts are needed to design the shadow, given a seemingly directionless desire to enact data-driven solutions. The Shadow of Hierarchy, Infrastructures, and Platforms The shadow of hierarchy refers to how institutional, infrastructural, and organisational hierarchies create a relational zone of influence over a particular space. This commonly refers to executive decisions and legislation created by nation states, which are cast over private and non-governmental actors (Héritier and Lehmkuhl, 2). Lance et al. (252–53) argue that the shadow of hierarchy is a productive and desirable thing. Exploring the shadow of hierarchy in the context of how geospatial data agencies govern their data, Lance et al. find that the shadow of hierarchy enables the networked governance approaches that agencies adopt. This is because operating in the shadow of institutions provides authority, confers bureaucratic legitimacy and top-down power, and offers financial support. The darkness of the shadow is thus less a moral or ethicopolitical statement (such as that suggested by Fisher and Bolter, who use the idea of darkness to unpack the morality of tourism involving death and human suffering) and more a relationality: an expression of differing values, logics, and (techno)socialities internal and external to those infrastructures and institutions that cast it (Gehl and McKelvey). The shadow of hierarchy might therefore be thought of as a field of relational influences and power that a social body casts over society, by virtue of a privileged position vis-a-vis society. It modulates society’s “light”: the resources (Bourdieu) and power relationships (Foucault) that run through social life, as parsed through a certain institutional and infrastructural worldview (the thing that blocks the light to create the shadow). In this way the shadow of hierarchy is not a field of absolute blackness that obscures, but instead a gradient of light and dark that creates certain effects. The shadow of hierarchy is now, however, also being cast by decentralised, privately held, and non-hierarchical platforms that are replacing or merging with public infrastructure, creating new social effects. Platforms are digital, socio-technical systems that create relationships between different entities.
They are most commonly built around a relatively fixed core function (such as a social media service like Facebook) that then interacts with a peripheral set of complementors (advertising companies and app developers in the case of social media; Baldwin and Woodard) to create new relationships, forms of value, and other interactions (van Dijck, The Culture of Connectivity). In creating these relationships, platforms become inherently political (Gillespie), shaping relationships and content on the platform (Suzor) and in embodied life (Ajunwa; Eubanks). While platforms are often associated with optional consumer platforms (such as streaming services like Spotify), they have increasingly come to occupy the place of public infrastructure, and act as a powerful enabler of different socio-technical, economic, and political relationships (van Dijck, Governing Digital Societies). For instance, Plantin et al. argue that platforms have merged with infrastructures, and that once publicly held and funded institutions and essential services now share many characteristics with for-profit, privately held platforms. For example, Australia has had a long history of outsourcing employment services (Webster and Harding), and nearly privatised its entire visa processing data infrastructure (Jenkins). Platforms therefore have a greater role in casting the shadow of hierarchy than before. In doing so, they cast a shadow that is qualitatively different, modulated through a different set of relational values and (techno)socialities. Scalability A key difference and selling point of platforms is their scalability: they can rapidly and easily up- and down-scale their functionalities in a way that traditional infrastructure cannot (Plantin et al.). The ability to respond “on-demand” to infrastructural requirements has made platforms the go-to service delivery option in the neo-liberalised public infrastructure environment (van Dijck, Governing Digital Societies). For instance, service providers like Amazon Web Services or Microsoft Azure provide on-demand computing capacity for many nations’ most valuable services, including their intelligence and security capabilities (Amoore, Cloud Ethics; Konkel). The value of such platforms to government lies in the reduced cost and risk that comes with using rented capabilities, and the enhanced flexibility to increase or decrease their usage as required, without any of the economic sunk costs attached to owning the infrastructure. Scalability is, however, not just about on-demand technical capability, but about how platforms can change the scale of socio-technical relationships and services that are mediated through the platform. This changes the relational quality of the shadow of hierarchy, as activities and services occurring within the shadow are now connected into a larger and rapidly modulating scale. Scalability allows the shadow of hierarchy to extend from those in proximity to institutions to the broader population in general. For example, individual citizens can more easily “reach up” into governmental services and agencies as a part of completing their everyday business through platforms such as MyGov in Australia (Services Australia). Using a smartphone application, citizens are afforded a more personalised and adaptive experience of the welfare state, as engaging with welfare services is no longer tied to specific “brick-and-mortar” locations, but is constantly available through a smartphone app and web portal.
Multiple government services including healthcare and taxation are also connected to this platform, allowing users to reach across multiple government service domains to complete their personal business, seeking information and services that would have once required separate communications with different branches of government. The individual’s capacities to engage with the state have therefore upscaled with this change in the shadow, retaining a productivity- and capacity-enhancing quality that is reminiscent of older infrastructures and institutions, as the individual and their lived context are brought closer to the institutions themselves. Scale, however, comes with complications. The fundamental driver for scalability and its adaptive qualities is datafication. This means individuals and organisations are inflecting their operational and relational logics with the logic of datafication: a need to capture all data, at all times (van Dijck, Datafication; Fourcade and Healy). Platforms, especially privately held platforms, benefit significantly from this, as they rely on data to drive and refine their algorithmic tools, and ultimately create actionable intelligence that benefits their operations. Thus, scalability allows platforms to better “reach down” into individual lives and different social domains to fuel their operations. For example, as public transport services become increasingly datafied into mobility-as-a-service (MaaS) systems, ride-sharing and on-demand transportation platforms like Uber and Lyft become incorporated into the public transport ecosystem (Lyons et al.). These platforms capture geospatial, behavioural, and reputational data from users and drivers during their interactions with the platform (Rosenblat and Stark; Attoh et al.). This generates additional value, and profits, for the platform itself with limited value returned to the user or the broader public it supports, outside of the transport service. It also places the platform in a position to gain wider access to the population and their data, by virtue of operating as a part of a public service. In this way the shadow of hierarchy may exacerbate inequity. The (dis)benefits of the shadow of hierarchy become unevenly spread amongst actors within its field, a function of an increased scalability that connects individuals into much broader assemblages of datafication. For Eubanks, this can entrench existing economic and social inequalities by forcing those in need to engage with digitally mediated welfare systems that rely on distant and opaque computational judgements. Local services are subject to increased digital surveillance, a removal of agency from frontline advocates, and algorithmic judgement at scale. More fortunate citizens are also still at risk, with Nardi and Ekbia arguing that many digitally scaled relationships are examples of “heteromation”, whereby platforms convince actors in the platform to labour for free, such as through providing ratings which establish a platform’s reputational economy. Such labour fuels the operation of the platform through exploiting users, who become both a product/resource (as a source of data for third-party advertisers) and a performer of unrewarded digital labour, such as through providing user reviews that help guide a platform’s algorithm(s). Both these examples represent a particularly disconcerting outcome for the shadow of hierarchy, which has its roots in public sector institutions that operate for a common good through shared and publicly held infrastructure.
In shifting towards platforms, especially privately held platforms, value is transmitted to private corporations and not the public or the commons, as was the case with traditional infrastructure. The public also comes to own the risks attached to platforms if they become tied to public services, placing a further burden on the public if the platform fails, while reaping none of the profit and value generated through datafication. This is a poor bargain at best. (Non)Participation Scalability forms the basis for a further predicament: a changing socio-technical dynamic of (non)participation between individuals and services. According to Star (118), infrastructures are defined through their relationships to a given context. These relationships, which often exist as boundary objects between different communities, are “loosely structured in common use, and become tightly bound in particular locations” (Star, 118). While platforms are certainly boundary objects and relationally defined, the affordances of cloud computing have enabled a decoupling from physical location, and the operation of platforms across time and space through distributed digital nodes (smartphones, computers, and other localised hardware) and powerful algorithms that sort and process requests for service. This does not mean location is not important for the cloud (see Amoore, Cloud Geographies), but platforms are less likely to have a physically co-located presence in the same way traditional infrastructures had. Without the same institutional and infrastructural footprint, the modality for participating in and with the shadow of hierarchy that platforms cast becomes qualitatively different and predicated on digital intermediaries. Replacing a physical and human footprint with algorithmically supported and decentralised computing power allows scalability and some efficiency improvements, but it also removes taken-for-granted touchpoints for contestation and recourse. For example, ride-sharing platform Uber operates globally, and has expressed interest in operating in complement to (and perhaps in competition with) public transport services in some cities (Hall et al.; Conger). Given that Uber would come to operate as a part of the shadow of hierarchy that transport authorities cast over said cities, it would not be unreasonable to expect Uber to be subject to comparable advocacy, adjudication, transparency, and complaint-handling requirements. Unfortunately, it is unclear if this would be the case, with examples suggesting that Uber would use the scalability of its platform to avoid these mechanisms. This is revealed by ongoing legal action launched by concerned Uber drivers in the United Kingdom, who have sought access to the profiling data that Uber uses to manage and monitor its drivers (Sawers). The challenge has relied on transnational law (the European Union’s General Data Protection Regulation), with UK-based drivers lodging claims in Amsterdam to initiate the challenge. Such costly and complex actions are beyond the means of many, but demonstrate how reasonable participation in socio-technical and governance relationships (like contestations) might become limited, depending on how the shadow of hierarchy changes with the incorporation of platforms. Even if legal challenges for transparency are successful, they may not produce meaningful change. 
For instance, O’Neil links algorithmic bias to mathematical shortcomings in the variables used to measure the world; in the creation of irrational feedback loops based on incorrect data; and in the use of unsound data analysis techniques. These three factors contribute to inequitable digital metrics like predictive policing algorithms that disproportionately target racial minorities. Large amounts of selective data on minorities create myopic algorithms that direct police to target minorities, creating more selective data that reinforces the spurious model. These biases, however, are persistently inaccessible, and even when visible are often unintelligible to experts (Ananny and Crawford). The visibility of the technical “installed base” that supports institutions and public services is therefore not a panacea, especially when the installed base (un)intentionally obfuscates participation in meaningful engagement like complaints handling. A negative outcome is, however, also not an inevitable thing. It is entirely possible to design platforms to allow individual users to scale up and have opportunities for enhanced participation. For instance, eGovernance and mobile governance literature have explored how citizens engage with state services at scale (Thomas and Streib; Foth et al.), and the open government movement has demonstrated the effectiveness of open data in understanding government operations (Barns; Janssen et al.), although these both have their challenges (Chadwick; Dawes). It is not a fantasy to imagine alternative configurations of the shadow of hierarchy that allow more participatory relationships. Open data could facilitate the governance of platforms at scale (Box et al.), where users are enfranchised into a platform by some form of membership right and given access to financial and governance records, in the same way that corporate shareholders are enfranchised, facilitated by the same app that provides a service. This could also be extended to decision making through voting and polling functions. Such a governance form would require radically different legal, business, and institutional structures to create and enforce this arrangement. Delacroix and Lawrence, for instance, suggest that data trusts, where a trustee is assigned legal and fiduciary responsibility to achieve maximum benefit for a specific group’s data, can be used to negotiate legal and governance relationships that meaningfully benefit the users of the trust. Trustees can be instructed to share data only with services whose algorithms are regularly audited for bias and provide datasets that are accurate representations of their users, for instance, avoiding erroneous proxies that disrupt algorithmic models. While these developments are in their infancy, it is not unreasonable to reflect on such endeavours now, as the technologies to achieve these are already in use. Conclusions There is a persistent myth that data will yield better, faster, more complete results in whatever field it is applied (Lee and Cook; Fourcade and Healy; Mayer-Schönberger and Cukier; Kitchin). This myth has led to data-driven assemblages, including artificial intelligence, platforms, surveillance, and other data-technologies, being deployed throughout social life. The public sector is no exception to this, but the deployment of any technological solution within the traditional institutions of the shadow of hierarchy is fraught with challenges, and often results in failure or unintended consequences (Henman).
The complexity of these systems combined with time, budgetary, and political pressures can create a contested environment. It is this environment that moulds society’s light and resources to cast the shadow of hierarchy. Relationality within a shadow of hierarchy that reflects the complicated and competing interests of platforms is likely to present a range of unintended social consequences that are inherently emergent because they are entering into a complex system – society – that is extremely hard to model. The relational qualities of the shadow of hierarchy are therefore now more multidimensional and emergent, and experiences relating to socio-technical features like scale and, as a follow-on, (non)participation are evidence of this. Yet by being emergent, they are also directionless, a product of complex systems rather than designed and strategic intent. This is not an inherently bad thing, but given the potential for data systems and platforms to have negative or unintended consequences, it is worth considering whether remaining directionless is the best outcome. There are many examples of data-driven systems in healthcare (Obermeyer et al.), welfare (Eubanks; Henman and Marston), and economics (MacKenzie) having unintended and negative social consequences. Appropriately guiding the design and deployment of these systems also represents a growing body of knowledge and practical endeavour (Jirotka et al.; Stilgoe et al.). Armed with the knowledge of these social implications, constructing an appropriate social architecture (Box and Lemon; Box et al.) around the platforms and data systems that form the shadow of hierarchy should be encouraged. This social architecture should account for the affordances and emergent potentials of a complex social, institutional, economic, political, and technical environment, and should assist in guiding the shadow of hierarchy away from egregious challenges and towards meaningful opportunities. To be directionless is an opportunity to take a new direction. The intersection of platforms with public institutions and infrastructures has moulded society’s light into an evolving and emergent shadow of hierarchy over many domains. With the scale of the shadow changing, and shaping participation, who benefits and who loses out in the shadow of hierarchy is also changing. Equipped with insights into this change, we should not hesitate to shape this change, creating or preserving relationalities that offer the best outcomes. Defining, understanding, and practically implementing what the “best” outcome(s) are would be a valuable next step in this endeavour, and should prompt considerable discussion. If we wish the shadow of hierarchy to continue to be productive, then finding a social architecture to shape the emergence and directionlessness of socio-technical systems like platforms is an important step in the continued evolution of the shadow of hierarchy. References Ajunwa, Ifeoma. “Age Discrimination by Platforms.” Berkeley J. Emp. & Lab. L. 40 (2019): 1-30. Amoore, Louise. Cloud Ethics: Algorithms and the Attributes of Ourselves and Others. Durham: Duke University Press, 2020. ———. “Cloud Geographies: Computing, Data, Sovereignty.” Progress in Human Geography 42.1 (2018): 4-24. Ananny, Mike, and Kate Crawford. “Seeing without Knowing: Limitations of the Transparency Ideal and Its Application to Algorithmic Accountability.” New Media & Society 20.3 (2018): 973–89. Attoh, Kafui, et al.
“‘We’re Building Their Data’: Labor, Alienation, and Idiocy in the Smart City.” Environment and Planning D: Society and Space 37.6 (2019): 1007-24. Baldwin, Carliss Y., and C. Jason Woodard. “The Architecture of Platforms: A Unified View.” Platforms, Markets and Innovation. Ed. Annabelle Gawer. Cheltenham: Edward Elgar, 2009. 19–44. Barns, Sarah. “Mine Your Data: Open Data, Digital Strategies and Entrepreneurial Governance by Code.” Urban Geography 37.4 (2016): 554–71. Bourdieu, Pierre. Distinction: A Social Critique of the Judgement of Taste. Cambridge, MA: Harvard University Press, 1984. Box, Paul, et al. Data Platforms for Smart Cities – A Landscape Scan and Recommendations for Smart City Practice. Canberra: CSIRO, 2020. Box, Paul, and David Lemon. The Role of Social Architecture in Information Infrastructure: A Report for the National Environmental Information Infrastructure (NEII). Canberra: CSIRO, 2015. Chadwick, Andrew. “Explaining the Failure of an Online Citizen Engagement Initiative: The Role of Internal Institutional Variables.” Journal of Information Technology & Politics 8.1 (2011): 21–40. Conger, Kate. “Uber Wants to Sell You Train Tickets. And Be Your Bus Service, Too.” The New York Times, 7 Aug. 2019. 19 Jan. 2021. <https://www.nytimes.com/2019/08/07/technology/uber-train-bus-public-transit.html>. Dawes, Sharon S. “The Evolution and Continuing Challenges of E‐Governance.” Public Administration Review 68 (2008): 86–102. Delacroix, Sylvie, and Neil D. Lawrence. “Bottom-Up Data Trusts: Disturbing the ‘One Size Fits All’ Approach to Data Governance.” International Data Privacy Law 9.4 (2019): 236-252. Eubanks, Virginia. Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor. New York: St. Martin’s Press, 2018. Fisher, Joshua A., and Jay David Bolter. “Ethical Considerations for AR Experiences at Dark Tourism Sites”. IEEE Explore 29 April. 2019. 13 Apr. 2021 <https://ieeexplore.ieee.org/document/8699186>. Foth, Marcus, et al. From Social Butterfly to Engaged Citizen: Urban Informatics, Social Media, Ubiquitous Computing, and Mobile Technology to Support Citizen Engagement. Cambridge MA: MIT Press, 2011. Fourcade, Marion, and Kieran Healy. “Seeing like a Market.” Socio-Economic Review, 15.1 (2017): 9–29. Gehl, Robert, and Fenwick McKelvey. “Bugging Out: Darknets as Parasites of Large-Scale Media Objects.” Media, Culture & Society 41.2 (2019): 219–35. Gillespie, Tarleton. “The Politics of ‘Platforms.’” New Media & Society 12.3 (2010): 347–64. Hall, Jonathan D., et al. “Is Uber a Substitute or Complement for Public Transit?” Journal of Urban Economics 108 (2018): 36–50. Henman, Paul. “Improving Public Services Using Artificial Intelligence: Possibilities, Pitfalls, Governance.” Asia Pacific Journal of Public Administration 42.4 (2020): 209–21. Henman, Paul, and Greg Marston. “The Social Division of Welfare Surveillance.” Journal of Social Policy 37.2 (2008): 187–205. Héritier, Adrienne, and Dirk Lehmkuhl. “Introduction: The Shadow of Hierarchy and New Modes of Governance.” Journal of Public Policy 28.1 (2008): 1–17. Janssen, Marijn, et al. “Benefits, Adoption Barriers and Myths of Open Data and Open Government.” Information Systems Management 29.4 (2012): 258–68. Jenkins, Shannon. “Visa Privatisation Plan Scrapped, with New Approach to Tackle ’Emerging Global Threats’.” The Mandarin. 23 Mar. 2020. 19 Jan. 2021 <https://www.themandarin.com.au/128244-visa-privatisation-plan-scrapped-with-new-approach-to-tackle-emerging-global-threats/>. 
Jirotka, Marina, et al. “Responsible Research and Innovation in the Digital Age.” Communications of the ACM 60.6 (2016): 62–68.
Kitchin, Rob. The Data Revolution: Big Data, Open Data, Data Infrastructures and Their Consequences. Thousand Oaks, CA: Sage, 2014.
Konkel, Frank. “CIA Awards Secret Multibillion-Dollar Cloud Contract.” Nextgov, 20 Nov. 2020. 19 Jan. 2021 <https://www.nextgov.com/it-modernization/2020/11/exclusive-cia-awards-secret-multibillion-dollar-cloud-contract/170227/>.
Lance, Kate T., et al. “Cross‐Agency Coordination in the Shadow of Hierarchy: ‘Joining Up’ Government Geospatial Information Systems.” International Journal of Geographical Information Science 23.2 (2009): 249–69.
Lee, Ashlin J., and Peta S. Cook. “The Myth of the ‘Data‐Driven’ Society: Exploring the Interactions of Data Interfaces, Circulations, and Abstractions.” Sociology Compass 14.1 (2020): 1–14.
Lyons, Glenn, et al. “The Importance of User Perspective in the Evolution of MaaS.” Transportation Research Part A: Policy and Practice 121 (2019): 22-36.
MacKenzie, Donald. “‘Making’, ‘Taking’ and the Material Political Economy of Algorithmic Trading.” Economy and Society 47.4 (2018): 501-23.
Mayer-Schönberger, Viktor, and Kenneth Cukier. Big Data: A Revolution That Will Change How We Live, Work and Think. London: John Murray, 2013.
Nardi, Bonnie, and Hamid Ekbia. Heteromation, and Other Stories of Computing and Capitalism. Cambridge, MA: MIT Press, 2017.
Obermeyer, Ziad, et al. “Dissecting Racial Bias in an Algorithm Used to Manage the Health of Populations.” Science 366.6464 (2019): 447-53.
O’Neil, Cathy. Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. London: Penguin, 2017.
Plantin, Jean-Christophe, et al. “Infrastructure Studies Meet Platform Studies in the Age of Google and Facebook.” New Media & Society 20.1 (2018): 293–310.
Rosenblat, Alex, and Luke Stark. “Algorithmic Labor and Information Asymmetries: A Case Study of Uber’s Drivers.” International Journal of Communication 10 (2016): 3758–3784.
Sawers, Paul. “Uber Drivers Sue for Data on Secret Profiling and Automated Decision-Making.” VentureBeat, 20 July 2020. 19 Jan. 2021 <https://venturebeat.com/2020/07/20/uber-drivers-sue-for-data-on-secret-profiling-and-automated-decision-making/>.
Services Australia. About MyGov. Services Australia, 19 Jan. 2021. 19 Jan. 2021 <https://www.servicesaustralia.gov.au/individuals/subjects/about-mygov>.
Star, Susan Leigh. “Infrastructure and Ethnographic Practice: Working on the Fringes.” Scandinavian Journal of Information Systems 14.2 (2002): 107-122.
Stilgoe, Jack, et al. “Developing a Framework for Responsible Innovation.” Research Policy 42.9 (2013): 1568-80.
Suzor, Nicolas. Lawless: The Secret Rules That Govern Our Digital Lives. Cambridge: Cambridge University Press, 2019.
Thomas, John Clayton, and Gregory Streib. “The New Face of Government: Citizen‐Initiated Contacts in the Era of E‐Government.” Journal of Public Administration Research and Theory 13.1 (2003): 83-102.
Van Dijck, José. “Datafication, Dataism and Dataveillance: Big Data between Scientific Paradigm and Ideology.” Surveillance & Society 12.2 (2014): 197–208.
———. “Governing Digital Societies: Private Platforms, Public Values.” Computer Law & Security Review 36 (2020). 13 Apr. 2021 <https://www.sciencedirect.com/science/article/abs/pii/S0267364919303887>.
———. The Culture of Connectivity: A Critical History of Social Media. Oxford: Oxford University Press, 2013.
Webster, Elizabeth, and Glenys Harding. “Outsourcing Public Employment Services: The Australian Experience.” Australian Economic Review 34.2 (2001): 231-42.