
Journal articles on the topic "BPEL (Computer program language) Web services"

Create an accurate citation in APA, MLA, Chicago, Harvard, and other styles


Consult the 22 best journal articles for your research on the topic "BPEL (Computer program language) Web services".

Next to each source in the reference list there is an "Add to bibliography" button. Press this button, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Explore journal articles on a wide variety of disciplines and organize your bibliography correctly.

1

Boutrous Saab, C., D. Coulibaly, S. Haddad, T. Melliti, P. Moreaux, and S. Rampacek. "An Integrated Framework for Web Services Orchestration". International Journal of Web Services Research 6, no. 4 (October 2009): 1–29. http://dx.doi.org/10.4018/jwsr.2009071301.

Full text
Abstract
Currently, Web services are the subject of active research, for both industrial and theoretical reasons. On one hand, Web services are essential as the design model of applications dedicated to electronic business. On the other hand, this model aims to become one of the major formalisms for designing distributed and cooperative applications in an open environment (the Internet). In this article, the authors focus on two features of Web services. The first concerns the interaction problem: given the interaction protocol of a Web service described in BPEL, how can the appropriate client be generated? Their approach is based on a formal semantics for BPEL via process algebra and yields an algorithm which decides whether such a client exists and synthesizes the description of this client as a (timed) automaton. The second concerns the design process of a service. They propose a method which proceeds by two successive refinements: first the service is described via UML, then refined into a BPEL model, and finally enlarged with Java code using JCSWL, a new language introduced here. Their solutions are integrated in a service development framework that is presented in a concise way.
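To make the interaction problem concrete, the sketch below abstracts a service's interaction protocol as a small labelled transition system and derives a candidate client by mirroring send and receive actions. It is only an illustration under simplifying assumptions (hand-coded protocol, no timing, no internal-choice semantics) and is not the authors' process-algebra construction; all state and message names are invented.

```python
# Toy illustration of client synthesis from a service interaction protocol.
# The protocol is a finite labelled transition system: state -> [(action, next_state)].
# Actions are "!msg" (service sends) and "?msg" (service receives); the client mirrors them.

from collections import deque

service_protocol = {
    "s0": [("?order", "s1")],                       # service waits for an order
    "s1": [("!invoice", "s2"), ("!reject", "s3")],  # service replies
    "s2": [("?payment", "s3")],
    "s3": [],                                       # terminal state
}

def mirror(action: str) -> str:
    """The client's view of a service action: sends become receives and vice versa."""
    return ("?" + action[1:]) if action.startswith("!") else ("!" + action[1:])

def synthesize_client(protocol: dict, start: str = "s0") -> dict:
    """Build the mirrored client transition system over all reachable states."""
    client, seen, todo = {}, set(), deque([start])
    while todo:
        state = todo.popleft()
        if state in seen:
            continue
        seen.add(state)
        client[state] = [(mirror(a), nxt) for a, nxt in protocol[state]]
        for _, nxt in protocol[state]:
            todo.append(nxt)
    return client

if __name__ == "__main__":
    for state, moves in synthesize_client(service_protocol).items():
        print(state, "->", moves)
```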
2

Ji, Shunhui, Liming Hu, Yihan Cao, Pengcheng Zhang, and Jerry Gao. "Verifiable Model Construction for Business Processes". International Journal of Software Engineering and Knowledge Engineering 31, no. 07 (July 2021): 1017–42. http://dx.doi.org/10.1142/s0218194021500315.

Full text
Abstract
A business process specified in the Business Process Execution Language (BPEL), which integrates existing services into a composite service offering more complex functionality, is error-prone. Verification and testing are necessary to ensure the correctness of business processes. SPIN, whose input language is the PROcess MEta-LAnguage (Promela), is one of the most popular tools for detecting software defects and can be used both in verification and in testing. In this paper, an automatic approach is proposed to construct a verifiable model, in the Promela language, for a BPEL-based business process. The business process is translated into an intermediate two-level representation, in which an eXtended Control Flow Graph (XCFG) describes the behavior of the BPEL process at the first level and Web Service Description Models (WSDM) depict the interface information of the composite service and its partner services at the second level. From the XCFG of the BPEL process, XCFGs for the partner services are generated to describe their behavior. The Promela model is constructed by defining data types based on WSDM and defining channels, variables, and processes based on the XCFGs. The constructed Promela model is closed, containing not only the BPEL process but also its execution environment. A case study shows that the proposed approach is effective.
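The following sketch hints at the kind of translation such an approach performs: a toy control-flow representation of a BPEL process is emitted as Promela text with channels for the client and a partner service. The node encoding, channel layout, and message names are invented for illustration and are far simpler than the paper's XCFG/WSDM construction.

```python
# Minimal sketch of emitting a Promela process from a toy control-flow view of a
# BPEL process. The encoding below is invented for illustration only.

xcfg = [
    ("receive", "order"),   # <receive> from the client
    ("invoke",  "check"),   # <invoke> a partner service (synchronous)
    ("reply",   "result"),  # <reply> to the client
]

def to_promela(process_name: str, nodes) -> str:
    lines = [
        "mtype = { order, check, result };",
        "chan client  = [1] of { mtype };",
        "chan partner = [1] of { mtype };",
        f"proctype {process_name}() {{",
    ]
    for kind, msg in nodes:
        if kind == "receive":
            lines.append(f"  client ? {msg};")
        elif kind == "invoke":
            lines.append(f"  partner ! {msg}; partner ? {msg};")  # request then response
        elif kind == "reply":
            lines.append(f"  client ! {msg};")
    lines.append("}")
    return "\n".join(lines)

if __name__ == "__main__":
    print(to_promela("BpelProcess", xcfg))
```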
3

Sellami, Wael, Hatem Hadj Kacem, and Ahmed Hadj Kacem. "A Formal Approach for the Validation of Web Service Orchestrations". International Journal of Web Portals 5, no. 1 (January 2013): 41–54. http://dx.doi.org/10.4018/jwp.2013010104.

Full text
Abstract
Web service composition is considered a real revolution in SOA (Service Oriented Architecture). It is based on assembling independent, loosely coupled services to build a composed web service. This composition can be described from either a local or a global perspective, by orchestration or choreography respectively. The validation of web service orchestrations is the main topic of this work. It is based on the verification of two classes of properties: generic and specific properties. The former can be checked for any invoked web service, whereas the specific properties express the various interdependence relationships between activities within an orchestration process. These properties cannot be verified directly on the orchestration process, so the authors have to use formal techniques. In this paper, they propose a formal approach for the validation of web service orchestrations. This work adopts WS-BPEL 2.0 as the language describing the web service orchestration and uses the SPIN model checker as the verification engine. The WS-BPEL specification is translated into Promela code, the input language of the SPIN model checker, in order to check generic and specific properties expressed in LTL (Linear Temporal Logic).
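As a rough illustration of how a generic property can be attached to a generated model, the snippet below composes an LTL "response" formula (every request is eventually answered) and appends it to a hand-written Promela fragment for SPIN. The propositions and the fragment are placeholders, not the paper's actual translation or property set.

```python
# Illustrative only: build an LTL response property and append it to a Promela model.

def response_property(name: str, trigger: str, response: str) -> str:
    # [] (trigger -> <> response): whenever `trigger` holds, `response` eventually holds.
    return f"ltl {name} {{ [] ({trigger} -> <> {response}) }}"

promela_model = """
bool requested = false;
bool replied   = false;

active proctype Orchestration() {
  requested = true;
  /* ... translated WS-BPEL activities would go here ... */
  replied = true;
}
"""

if __name__ == "__main__":
    spec = promela_model + "\n" + response_property("answers_requests", "requested", "replied")
    print(spec)  # then: spin -a model.pml && cc -o pan pan.c && ./pan -a
```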
4

Nematzadeh, Hossein, Homayun Motameni, Radziah Mohamad, and Zahra Nematzadeh. "QoS Measurement of Workflow-Based Web Service Compositions Using Colored Petri Net". Scientific World Journal 2014 (2014): 1–14. http://dx.doi.org/10.1155/2014/847930.

Full text
Abstract
Workflow-based web service compositions (WB-WSCs) are one of the main composition categories in service-oriented architecture (SOA). Eflow, the polymorphic process model (PPM), and the Business Process Execution Language (BPEL) are the main techniques in the WB-WSC category. As web services mature, measuring the quality of composite web services developed with different techniques has become one of the most important challenges in today's web environments. Businesses should try to provide a composed web service whose quality meets customers' requirements. Thus, it is important to measure quality of service (QoS), which refers to nonfunctional parameters, so that the quality degree of a given web service composition can be determined. This paper seeks a deterministic analytical method for dependability and performance measurement using Colored Petri nets (CPN) with explicit routing constructs and the application of probability theory. A computer tool called WSET was also developed for modeling and for supporting QoS measurement through simulation.
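For readers unfamiliar with QoS aggregation, the sketch below shows the standard probabilistic rules for combining response time and availability over sequence, parallel, and choice constructs. These are the textbook aggregation formulas such work builds on, not the paper's CPN-based analysis, and the service values are made up.

```python
# Common QoS aggregation rules over workflow constructs.
# Each service is modeled as (response_time_seconds, availability_probability).

from math import prod

def sequence(services):
    """Sequential composition: times add, availabilities multiply."""
    return sum(t for t, _ in services), prod(a for _, a in services)

def parallel(services):
    """AND-split/AND-join: wait for the slowest branch, every branch must be available."""
    return max(t for t, _ in services), prod(a for _, a in services)

def choice(branches):
    """XOR-split with branch probabilities: expected time and expected availability."""
    return (sum(p * t for p, (t, _) in branches),
            sum(p * a for p, (_, a) in branches))

if __name__ == "__main__":
    book_flight = (1.2, 0.99)
    book_hotel  = (0.8, 0.98)
    pay_card    = (0.5, 0.995)
    pay_bank    = (2.0, 0.97)
    t, a = sequence([parallel([book_flight, book_hotel]),
                     choice([(0.7, pay_card), (0.3, pay_bank)])])
    print(f"expected response time = {t:.2f}s, availability = {a:.4f}")
```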
5

Mei, Lijun, Yan Cai, Changjiang Jia, Bo Jiang, and W. K. Chan. "Test Pair Selection for Test Case Prioritization in Regression Testing for WS-BPEL Programs". International Journal of Web Services Research 10, no. 1 (January 2013): 73–102. http://dx.doi.org/10.4018/jwsr.2013010104.

Full text
Abstract
Many web services not only communicate through XML-based messages, but may also dynamically modify their behavior by applying different interpretations to XML messages through updates to the associated XML Schemas or XML-based interface specifications. Such artifacts are usually complex, making the XML-based messages that conform to these specifications structurally complex as well. Testing should cover all scenarios cost-effectively. Test case prioritization is a dimension of regression testing that guards a program against unintended modifications by reordering the test cases within a test suite. However, many existing test case prioritization techniques for regression testing treat test cases of different complexity generically. In this paper, the authors exploit insights into the structural similarity of XML-based artifacts between test cases in both static and dynamic dimensions, and propose a family of test case prioritization techniques that select pairs of test cases, without replacement, in turn. To the best of their knowledge, this is the first test case prioritization proposal that selects test case pairs for prioritization. The authors validate their techniques on a suite of benchmarks. The empirical results show that, when incorporating all dimensions, some members of the technique family can be more effective than conventional coverage-based techniques.
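The flavor of pairwise selection can be pictured as follows: each test case is summarized by the set of XML tags it exercises, pairs are scored by Jaccard similarity, and the most dissimilar unused pair is picked in turn. This is an invented simplification, not one of the authors' techniques, and the test data are placeholders.

```python
# Flavor-only sketch of selecting test-case pairs without replacement,
# most structurally dissimilar first.

from itertools import combinations

def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b) if (a | b) else 1.0

def prioritize_pairs(test_tags: dict) -> list:
    """Order pairs by rising similarity, keeping each test case in at most one pair."""
    pairs = sorted(combinations(test_tags, 2),
                   key=lambda p: jaccard(test_tags[p[0]], test_tags[p[1]]))
    order, used = [], set()
    for t1, t2 in pairs:
        if t1 not in used and t2 not in used:
            order.append((t1, t2))
            used.update((t1, t2))
    return order

if __name__ == "__main__":
    tests = {
        "t1": {"order", "item", "price"},
        "t2": {"order", "item", "discount"},
        "t3": {"invoice", "tax"},
        "t4": {"invoice", "shipping", "address"},
    }
    print(prioritize_pairs(tests))  # [('t1', 't3'), ('t2', 't4')]
```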
6

Held, Markus, Wolfgang Küchlin, and Wolfgang Blochinger. "MoBiFlow". International Journal of Service Science, Management, Engineering, and Technology 2, no. 4 (October 2011): 67–78. http://dx.doi.org/10.4018/ijssmet.2011100107.

Full text
Abstract
Web-based problem solving environments provide sharing, execution, and monitoring of scientific workflows. Where they depend on general-purpose workflow development systems, the workflow notations are likely to be far too powerful and complex, especially in biology, where programming skills are rare. On the other hand, application-specific workflow systems may use special-purpose languages and execution engines and suffer from a lack of standards, portability, documentation, stability of investment, and so on. In both cases, the need to support yet another desktop application places a burden on the system administration of a research lab. In previous research the authors developed the web-based workflow systems Calvin and Hobbes, which enable biologists and computer scientists to approach these problems in collaboration. Both systems use a server-centric, Web 2.0 based approach. Calvin is tailored to molecular biology applications, with a simple graphical workflow language and easy access to existing BioMoby web services. Calvin workflows are compiled to industry-standard BPEL workflows, which can be edited and refined in collaboration between researchers and computer scientists using the Hobbes tool. Together, Calvin and Hobbes form the authors' workflow platform MoBiFlow, whose principles, design, and use cases are described in this paper.
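The "graphical workflow compiled to BPEL" step can be pictured with a toy example: a linear list of service calls is emitted as a BPEL-like <sequence> of <invoke> activities. The element names, attributes, and services below are simplified placeholders and do not reflect the actual Calvin/Hobbes output.

```python
# Toy compilation of a linear workflow into a BPEL-like XML skeleton.

from xml.etree.ElementTree import Element, SubElement, tostring

def compile_to_bpel(process_name: str, steps) -> str:
    process = Element("process", name=process_name)
    seq = SubElement(process, "sequence")
    for partner, operation in steps:
        SubElement(seq, "invoke", partnerLink=partner, operation=operation)
    return tostring(process, encoding="unicode")

if __name__ == "__main__":
    workflow = [("blastService", "runBlast"), ("alignmentService", "align")]
    print(compile_to_bpel("SequenceAnalysis", workflow))
```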
7

Jalbani, Khuda Bux, Muhammad Yousaf, Muhammad Shahzad Sarfraz, Rozita Jamili Oskouei, Akhtar Hussain, and Zojan Memon. "Poor Coding Leads to DoS Attack and Security Issues in Web Applications for Sensors". Security and Communication Networks 2021 (May 19, 2021): 1–11. http://dx.doi.org/10.1155/2021/5523806.

Full text
Abstract
As SQL injection has remained at the top of the Open Web Application Security Project (OWASP) list for more than a decade, this type of attack creates many kinds of problems for a web application, for sensors, or for any similar application, such as leakage of users' private data and of an organization's intellectual property, and it may also lead to Distributed Denial of Service (DDoS) attacks. This paper first focuses on poor coding and unvalidated input fields, which are a major cause of service unavailability for web applications. Secondly, it focuses on issues created by the choice of programming approach for the WebSocket connections between sensors and the web server. The number of users of web applications and mobile apps keeps growing. These applications are used for many purposes, such as vehicle tracking, banking services, online shopping, taxi booking, logistics, education, monitoring user activities, collecting data from or sending instructions to sensors, and social networking. Web applications are easy to develop in little time and at low cost, so a website and a mobile app are the first choice of the business community and of individual service providers, and everyone tries to provide 24/7 service to users without any downtime. However, there are critical issues in web application design and development. These problems lead to many security loopholes affecting web servers, web applications, and their users' privacy. Because of poor coding and insufficient validation of input fields, these web applications are vulnerable to SQL injection and other security problems. Besides the choice of third-party frameworks, website development languages, and database server versions, another factor that can disturb the services of a web server is the socket programming used for sensors at the production level, as these sensors are installed in vehicles to track them or are used by booking mobile apps.
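The core remedy the paper points toward for SQL injection, validating input and never splicing it into SQL text, is language-independent. The sketch below shows the vulnerable and the safe pattern with Python's built-in sqlite3 module standing in for whatever database backend a given web application uses; the table and data are invented.

```python
# Contrast of string-concatenated SQL (injectable) with parameterized queries (safe).

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cr3t')")

user_input = "alice' OR '1'='1"   # a classic injection attempt

# Vulnerable pattern: the attacker-controlled string becomes part of the SQL text.
vulnerable = f"SELECT secret FROM users WHERE name = '{user_input}'"
print(conn.execute(vulnerable).fetchall())          # leaks every row

# Safe pattern: the value is passed as a bound parameter, never parsed as SQL.
safe = "SELECT secret FROM users WHERE name = ?"
print(conn.execute(safe, (user_input,)).fetchall())  # returns no rows
```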
8

Iriberri, Alicia. "Natural Language Processing and Psychology in e-Government Services". International Journal of Electronic Government Research 11, no. 2 (April 2015): 1–17. http://dx.doi.org/10.4018/ijegr.2015040101.

Full text
Abstract
Crime statistics from the US Bureau of Justice and the FBI Uniform Crime Report show a gap between reported and unreported crime. For police to effectively prevent and solve crime, they require accurate and complete information about incidents. This article describes the evaluation of a crime reporting and interviewing system that witnesses can use to report crime incidents or suspicious activities anonymously while ensuring the information received is of such quality that police can use it to begin an investigation process. The system emulates the tasks that a police investigator would perform by leveraging natural language processing technology and the interviewing techniques used in the Cognitive Interview. The system incorporates open-source code from the General Architecture for Text Engineering (GATE) program developed by researchers at the University of Sheffield, Web and database technology, and Java-based proprietary code developed by the author. Findings of this evaluation show that the system is capable of producing accurate and complete reports by enhancing witnesses' memory recall and that its efficacy approximates the efficacy of a human conducting a cognitive interview more closely than existing alternatives. The system is introduced as the first computer application of the cognitive interview and proposed as a viable alternative to face-to-face investigative interviews.
9

Yan, Hai Zhong. "Development Technology of Excel Data Server Application with DELPHI ADO + RemObjectcs Combined (Part 1: The Server Side)". Applied Mechanics and Materials 727-728 (January 2015): 959–64. http://dx.doi.org/10.4028/www.scientific.net/amm.727-728.959.

Full text
Abstract
Microsoft Office Excel is an important part of the Microsoft Office suite; it can process and statistically analyze various kinds of data, and its convenience and rich functionality have made it widely popular. However, as database networking and data-sharing trends deepen, the bottlenecks of Excel in stand-alone mode have begun to appear, and networking is an inevitable direction for computer information technology. The author therefore proposes an Excel data server, developed with Delphi and ADO technology, to break this limitation and change the non-networked mode of Excel applications. The Excel Data Server consists of a server-side service and a client program: the service is deployed on the server, the client retrieves data using the SQL query language, and various operations can be performed on Excel data files. Together with the client interface, this directly forms a Web-style application system that supports clients both on a LAN and on the Internet. In this paper, an Excel data server is developed in Delphi 7.0 using ADO and the RemObjects SDK tools (RemObjects is abbreviated as RO). Many RO versions exist, but RemObjects Data Abstract 6.0.43.801 is recommended: in the author's Delphi development experience it appears more stable, and it makes server-side and client-side data connections, programming, and data manipulation more efficient and convenient.
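The client/server split described above (clients send an SQL query, the server runs it against spreadsheet data and returns rows) can be pictured in a few lines. The paper's server is built with Delphi, ADO, and the RemObjects SDK against real Excel workbooks; the Python stand-in below only illustrates that architecture, with an in-memory SQLite table playing the role of the Excel sheet, and the endpoint, port, and table are invented.

```python
# Architectural sketch only: a tiny "query in, rows out" data server.

import json
import sqlite3
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

db = sqlite3.connect(":memory:", check_same_thread=False)
db.execute("CREATE TABLE sheet1 (product TEXT, qty INTEGER)")
db.executemany("INSERT INTO sheet1 VALUES (?, ?)", [("bolt", 120), ("nut", 340)])

class QueryHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        sql = parse_qs(urlparse(self.path).query).get("sql", ["SELECT * FROM sheet1"])[0]
        rows = db.execute(sql).fetchall()   # a real server must validate/whitelist the SQL
        body = json.dumps(rows).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # try: curl "http://127.0.0.1:8080/?sql=SELECT%20*%20FROM%20sheet1"
    HTTPServer(("127.0.0.1", 8080), QueryHandler).serve_forever()
```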
10

Sulistianingsih, Ellese, and M. Mukminan. "THE DEVELOPMENT OF WEB-BASED LEARNING MULTIMEDIA FOR HIGH SCHOOL STUDENTS’ LITHOSPHERE MATERIAL". Geosfera Indonesia 4, no. 1 (April 29, 2019): 11. http://dx.doi.org/10.19184/geosi.v4i1.9882.

Full text
Abstract
Science and technology develop very quickly in every aspect of life, including education. In line with this development, teachers need guidance so that they can use various kinds of creative and innovative learning media in the classroom in order to increase the effectiveness of the learning process, which in turn affects students' learning motivation and learning outcomes. Accordingly, learning multimedia needs to be developed to increase students' learning motivation and learning outcomes. This study is research and development (R&D), modified using Tessmer's formative evaluation. The analysis results show that the web-based learning multimedia for lithosphere material has proven its eligibility: it is valid and practical for use in the learning process and effective in increasing students' learning motivation and learning outcomes.
11

Preddie, Martha Ingrid. "Canadian Public Library Users are Unaware of Their Information Literacy Deficiencies as Related to Internet Use and Public Libraries are Challenged to Address These Needs". Evidence Based Library and Information Practice 4, no. 4 (December 14, 2009): 58. http://dx.doi.org/10.18438/b8sp7f.

Full text
Abstract
A Review of: Julien, Heidi and Cameron Hoffman. “Information Literacy Training in Canada’s Public Libraries.” Library Quarterly 78.1 (2008): 19-41. Objective – To examine the role of Canada’s public libraries in information literacy skills training, and to ascertain the perspectives of public library Internet users with regard to their experiences of information literacy. Design – Qualitative research using semi-structured interviews and observations. Setting – Five public libraries in Canada. Subjects – Twenty-eight public library staff members and twenty-five customers. Methods – This study constituted the second phase of a detailed examination of information literacy (IL) training in Canadian public libraries. Five public libraries located throughout Canada were selected for participation. These comprised a large central branch of a public library located in a town with a population of approximately two million, a main branch of a public library in an urban city of about one million people, a public library in a town with a population of about 75,000, a library in a town of 900 people and a public library located in the community center of a Canadian First Nations reserve that housed a population of less than 100 persons. After notifying customers via signage posted in the vicinity of computers and Internet access areas, the researchers observed each patron as they accessed the Internet via library computers. Observations focused on the general physical environment of the Internet access stations, customer activities and use of the Internet, as well as the nature and degree of customer interactions with each other and with staff. Photographs were also taken and observations were recorded via field notes. The former were analyzed via qualitative content analysis while quantitative analysis was applied to the observations. Additionally, each observed participant was interviewed immediately following Internet use. Interview questions focused on a range of issues including the reasons why customers used the Internet in public libraries, customers’ perceptions about their level of information literacy and their feelings with regard to being information literate, the nature of their exposure to IL training, the benefits they derived from such training, and their desire for further training. Public service librarians and other staff were also interviewed in a similar manner. These questions sought to ascertain staff views on the role of the public library with regard to IL training; perceptions of the need for and expected outcomes of such training; as well as the current situation pertinent to the provision of IL skills training in their respective libraries in terms of staff competencies, resource allocation, and the forms of training and evaluation. Interviews were recorded and transcribed. Data were interpreted via qualitative content analysis through the use of NVivo software. Main Results – Men were more frequent users of public library computers than women, outnumbering them by a ratio ranging from 2:1 to 3.4:1. Customers appeared to be mostly under the age of 30 and of diverse ethnicities. The average income of interviewed customers was less than the Canadian average. The site observations revealed that customers were seen using the Internet mainly for the purposes of communication (e.g., e-mail, instant messaging, online dating services). Such use was observed 78 times in four of the libraries. 
Entertainment accounted for 43 observations in all five sites and comprised activities such as online games, music videos, and movie listings. Twenty-eight observations involved business/financial uses (e.g., online shopping, exploration of investment sites, online banking). The use of search engines (25 observations), news information (23), foreign language and forum websites (21), and word processing were less frequently observed. Notably, there were only 20 observed library-specific uses (e.g., searching online catalogues, online database and library websites). Customers reported that they used the Internet mainly for general web searching and for e-mail. It was also observed that in general the physical environment was not conducive to computer use due to uncomfortable or absent seating and a lack of privacy. Additionally, only two sites had areas specifically designated for IL instruction. Of the 25 respondents, 19 reported at least five years experience with the Internet, 9 of whom cited experience of 10 years or more. Self-reported confidence with the Internet was high: 16 individuals claimed to be very confident, 7 somewhat confident, and only 2 lacking in confidence. There was a weak positive correlation between years of use and individuals’ reported levels of confidence. Customers reported interest in improving computer literacy (e.g., keyboarding ability) and IL skills (ability to use more sources of information). Some expressed a desire “to improve certain personal attitudes” (30), such as patience when conducting Internet searches. When presented with the Association of College and Research Libraries’ definition of IL, 13 (52%) of those interviewed claimed to be information literate, 8 were ambivalent, and 4 admitted to being information illiterate. Those who professed to be information literate had no particular feeling about this state of being, however 10 interviewees admitted feeling positive about being able to use the Internet to retrieve information. Most of those interviewed (15) disagreed that a paucity of IL skills is a deterrent to “accessing online information efficiently and effectively” (30). Eleven reported development of information skills through self teaching, while 8 cited secondary schools or tertiary educational institutions. However, such training was more in terms of computer technology education than IL. Eleven of the participants expressed a desire for additional IL training, 5 of whom indicated a preference for the public library to supply such training. Customers identified face-to-face, rather than online, as the ideal training format. Four interviewees identified time as the main barrier to Internet use and online access. As regards library staff, 22 (78.6%) of those interviewed posited IL training as an important role for public libraries. Many stated that customers had been asking for formal IL sessions with interest in training related to use of the catalogue, databases, and productivity software, as well as searching the web. Two roles were identified in the context of the public librarian as a provider of IL: “library staff as teachers/agents of empowerment and library staff as ‘public parents’” (32). The former was defined as supporting independent, lifelong learning through the provision of IL skills, and the latter encompassing assistance, guidance, problem solving, and filtering of unsuitable content. 
Staff identified challenges to IL training as societal challenges (e.g., need for customers to be able to evaluate information provided by the media, the public library’s role in reducing the digital divide), institutional (e.g., marketing of IL programs, staff constraints, lack of budget for IL training), infrastructural (e.g., limited space, poor Internet access in library buildings) and pedagogical challenges, such as differing views pertinent to the philosophy of IL, as well as the low levels of IL training to which Canadian students at all levels had been previously exposed. Despite these challenges library staff acknowledged positive outcomes resulting from IL training in terms of customers achieving a higher level of computer literacy, becoming more skillful at searching, and being able to use a variety of information sources. Affective benefits were also apparent such as increased independence and willingness to learn. Library staff also identified life expanding outcomes, such as the use of IL skills to procure employment. In contrast to customer self-perception, library staff expressed that customers’ IL skills were low, and that this resulted in their avoidance of “higher-level online research” and the inability to “determine appropriate information sources” (36). Several librarians highlighted customers’ incapacity to perform simple activities such as opening an email account. Library staff also alluded to customer’s reluctance to ask them for help. Libraries in the study offered a wide range of training. All provided informal, personalized training as needed. Formal IL sessions on searching the catalogue, online searching, and basic computer skills were conducted by the three bigger libraries. A mix of librarians and paraprofessional staff provided the training in these libraries. However, due to a lack of professional staff, the two smaller libraries offered periodic workshops facilitated by regional librarians. All the libraries lacked a defined training budget. Nonetheless, the largest urban library was well-positioned to offer IL training as it had a training coordinator, a training of trainers program, as well as technologically-equipped training spaces. The other libraries in this study provided no training of trainers programs and varied in terms of the adequacy of spaces allocated for the purpose of training. The libraries also varied in terms of the importance placed on the evaluation of IL training. At the largest library evaluation forms were used to improve training initiatives, while at the small town library “evaluations were done anecdotally” (38). Conclusion – While Internet access is available and utilized by a wide cross section of the population, IL skills are being developed informally and not through formal training offered by public libraries. Canadian public libraries need to work to improve information literacy skills by offering and promoting formal IL training programs.
12

Henderson, Anne, and Elyse Adler. "Project Access for adult English–language learners". First Monday, June 6, 2005. http://dx.doi.org/10.5210/fm.v10i6.1249.

Full text
Abstract
Project Access, funded by the Institute of Museum and Library Services, is a collaborative two–year program between the Frist Center for the Visual Arts and the Nashville Public Library. The goal of Project Access is to help increase adult English Language learners’ (ELL) skills in language, visual art, and computer literacy. The eight–visit program offers participants from local community service institutions the opportunity to engage in art making, computer–based learning, museum and library visits. This article and the project Web site, http://www.projectaccess.org, give the visitor an overview of the project, lesson plans, and interactive features.
13

"Information and computer technologies in distance teaching of foreign languages using social networks (for Tunisian audience)". Teaching languages at higher institutions, n.º 38 (2021). http://dx.doi.org/10.26565/2073-4379-2021-38-02.

Full text
Abstract
The article concerns the application of information and computer technologies in teaching foreign languages, especially their adaptation to the conditions of distance learning. The author considers various approaches to the use of computer technologies in the current educational space and, in particular, ways of working with some Internet tools in a Tunisian humanities higher education institution. The paper presents the features of a Web 2.0 website as one way to increase the volume of speech communication in a foreign language in distance learning, as well as the types of Internet resources most popular among students studying Russian in Tunisia. The author then identifies and summarizes the specifics of introducing social networks, as an additional platform for distance learning, into the process of developing students' communicative competence. In describing the external factors that influence the improvement of methodological approaches to language teaching, the objectives were to identify the main ways of engaging social networking services and integrating them into the structure of online lessons, i.e., selecting country-study material and developing methodological recommendations to bring it closer to the program topics, exchanging information, and testing. An analytical method was applied to study the problems that students and teachers face in organizing the learning process on a digital basis; it made it possible to classify the difficulties that hinder the successful integration of social media into learning. As a result, it was possible to identify working rules for students and teachers, as well as the potential of Internet services which, thanks to the development of databases and Internet infrastructure, have made their users' tasks easier, including for teachers, since it is no longer necessary to acquire special knowledge and skills to create distance courses.
14

Matthews, Nicole, Sherman Young, David Parker, and Jemina Napier. "Looking across the Hearing Line?: Exploring Young Deaf People’s Use of Web 2.0". M/C Journal 13, no. 3 (June 30, 2010). http://dx.doi.org/10.5204/mcj.266.

Full text
Abstract
IntroductionNew digital technologies hold promise for equalising access to information and communication for the Deaf community. SMS technology, for example, has helped to equalise deaf peoples’ access to information and made it easier to communicate with both deaf and hearing people (Tane Akamatsu et al.; Power and Power; Power, Power, and Horstmanshof; Valentine and Skelton, "Changing", "Umbilical"; Harper). A wealth of anecdotal evidence and some recent academic work suggests that new media technology is also reshaping deaf peoples’ sense of local and global community (Breivik "Deaf"; Breivik, Deaf; Brueggeman). One focus of research on new media technologies has been on technologies used for point to point communication, including communication (and interpretation) via video (Tane Akamatsu et al.; Power and Power; Power, Power, and Horstmanshof). Another has been the use of multimedia technologies in formal educational setting for pedagogical purposes, particularly English language literacy (e.g. Marshall Gentry et al.; Tane Akamatsu et al.; Vogel et al.). An emphasis on the role of multimedia in deaf education is understandable, considering the on-going highly politicised contest over whether to educate young deaf people in a bilingual environment using a signed language (Swanwick & Gregory). However, the increasing significance of social and participatory media in the leisure time of Westerners suggests that such uses of Web 2.0 are also worth exploring. There have begun to be some academic accounts of the enthusiastic adoption of vlogging by sign language users (e.g. Leigh; Cavander and Ladner) and this paper seeks to add to this important work. Web 2.0 has been defined by its ability to, in Denise Woods’ word, “harness collective intelligence” (19.2) by providing opportunities for users to make, adapt, “mash up” and share text, photos and video. As well as its well-documented participatory possibilities (Bruns), its re-emphasis on visual (as opposed to textual) communication is of particular interest for Deaf communities. It has been suggested that deaf people are a ‘visual variety of the human race’ (Bahan), and the visually rich presents new opportunities for visually rich forms of communication, most importantly via signed languages. The central importance of signed languages for Deaf identity suggests that the visual aspects of interactive multimedia might offer possibilities of maintenance, enhancement and shifts in those identities (Hyde, Power and Lloyd). At the same time, the visual aspects of the Web 2.0 are often audio-visual, such that the increasingly rich resources of the net offer potential barriers as well as routes to inclusion and community (see Woods; Ellis; Cavander and Ladner). In particular, lack of captioning or use of Auslan in video resources emerges as a key limit to the accessibility of the visual Web to deaf users (Cahill and Hollier). In this paper we ask to what extent contemporary digital media might create moments of permeability in what Krentz has called “the hearing line, that invisible boundary separating deaf and hearing people”( 2)”. To provide tentative answers to these questions, this paper will explore the use of participatory digital media by a group of young Deaf people taking part in a small-scale digital moviemaking project in Sydney in 2009. 
The ProjectAs a starting point, the interdisciplinary research team conducted a video-making course for young deaf sign language users within the Department of Media, Music and Cultural Studies at Macquarie University. The research team was comprised of one deaf and four hearing researchers, with expertise in media and cultural studies, information technology, sign language linguistics/ deaf studies, and signed language interpreting. The course was advertised through the newsletter of partner organization the NSW Deaf Society, via a Sydney bilingual deaf school and through the dense electronic networks of Australian deaf people. The course attracted fourteen participants from NSW, Western Australia and Queensland ranging in age from 10 to 18. Twelve of the participants were male, and two female. While there was no aspiration to gather a representative group of young people, it is worth noting there was some diversity within the group: for example, one participant was a wheelchair user while another had in recent years moved to Sydney from Africa and had learned Auslan relatively recently. Students were taught a variety of storytelling techniques and video-making skills, and set loose in groups to devise, shoot and edit a number of short films. The results were shared amongst the class, posted on a private YouTube channel and made into a DVD which was distributed to participants.The classes were largely taught in Auslan by a deaf teacher, although two sessions were taught by (non-deaf) members of Macquarie faculty, including an AFI award winning director. Those sessions were interpreted into Auslan by a sign language interpreter. Participants were then allowed free creative time to shoot video in locations of their choice on campus, or to edit their footage in the computer lab. Formal teaching sessions lasted half of each day – in the afternoons, participants were free to use the facilities or participate in a range of structured activities. Participants were also interviewed in groups, and individually, and their participation in the project was observed by researchers. Our research interest was in what deaf young people would choose to do with Web 2.0 technologies, and most particularly the visually rich elements of participatory and social media, in a relatively unstructured environment. Importantly, our focus was not on evaluating the effectiveness of multimedia for teaching deaf young people, or the level of literacy deployed by deaf young people in using the applications. Rather we were interested to discover the kinds of stories participants chose to tell, the ways they used Web 2.0 applications and the modalities of communication they chose to use. Given that Auslan was the language of instruction of the course, would participants draw on the tradition of deaf jokes and storytelling and narrate stories to camera in Auslan? Would they use the format of the “mash-up”, drawing on found footage or photographs? Would they make more filmic movies using Auslan dialogue? How would they use captions and text in their movies: as subtitles for Auslan dialogue? As an alternative to signing? Or not at all? Our observations from the project point to the great significance of the visual dimensions of Web 2.0 for the deaf young people who participated in the project. Initially, this was evident in the kind of movies students chose to make. 
Only one group – three young people in their late teens which included both of the young women in the class - chose to make a dialogue heavy movie, a spoof of Charlie’s Angels, entitled Deaf Angels. This movie included long scenes of the Angels using Auslan to chat together, receiving instruction from “Charlie” in sign language via videophone and recruiting “extras”, again using Auslan, to sign a petition for Auslan to be made an official Australian language. In follow up interviews, one of the students involved in making this film commented “my clip is about making a political statement, while the other [students in the class] made theirs just for fun”. The next group of (three) films, all with the involvement of the youngest class member, included signed storytelling of a sort readily recognisable from signed videos on-line: direct address to camera, with the teller narrating but also taking on the roles of characters and presenting their dialogue directly via the sign language convention of “role shift” - also referred to as constructed action and constructed dialogue (Metzger). One of these movies was an interesting hybrid. The first half of the four minute film had two young actors staging a hold-up at a vending machine, with a subsequent chase and fight scene. Like most of the films made by participants in the class, it included only one line of signed dialogue, with the rest of the narrative told visually through action. However, at the end of the action sequence, with the victim safely dead, the narrative was then retold by one of the performers within a signed story, using conventions typically observed in signed storytelling - such as role shift, characterisation and spatial mapping (Mather & Winston; Rayman; Wilson).The remaining films similarly drew on action and horror genres with copious use of chase and fight scenes and melodramatic and sometimes quite beautiful climactic death tableaux. The movies included a story about revenging the death of a brother; a story about escaping from jail; a short story about a hippo eating a vet; a similar short comprised of stills showing a sequence of executions in the computer lab; and a ghost story. Notably, most of these movies contained very little dialogue – with only one or two lines of signed dialogue in each four to five minute video (with the exception of the gun handshape used in context to represent the object liberally throughout most films). The kinds of movies made by this limited group of people on this one occasion are suggestive. While participants drew on a number of genres and communication strategies in their film making, the researchers were surprised at how few of the movies drew on traditions of signed storytelling or jokes– particularly since the course was targeted at deaf sign language users and promoted as presented in Auslan. Consequently, our group of students were largely drawn from the small number of deaf schools in which Auslan is the main language of instruction – an exceptional circumstance in an Australian setting in which most deaf young people attend mainstream schools (Byrnes et al.; Power and Hyde). Looking across the Hearing LineWe can make sense of the creative choices made by the participants in the course in a number of ways. Although methods of captioning were briefly introduced during the course, iMovie (the package which participants were using) has limited captioning functionality. 
Indeed, one student, who was involved in making the only clip to include captioning which contextualised the narrative, commented in follow-up interviews that he would have liked more information about captioning. It’s also possible that the compressed nature of the course prevented participants from undertaking the time-consuming task of scripting and entering captions. As well as being the most fun approach to the projects, the use of visual story telling was probably the easiest. This was perhaps exacerbated by the lack of emphasis on scriptwriting (outside of structural elements and broad narrative sweeps) in the course. Greater emphasis on that aspect of film-making would have given participants a stronger foundational literacy for caption-based projectsDespite these qualifications, both the movies made by students and our observations suggest the significance of a shared visual culture in the use of the Web by these particular young people. During an afternoon when many of the students were away swimming, one student stayed in the lab to use the computers. Rather than working on a video project, he spent time trawling through YouTube for clips purporting to show ghost sightings and other paranormal phenomena. He drew these clips to the attention of one of the research team who was present in the lab, prompting a discussion about the believability of the ghosts and supernatural apparitions in the clips. While some of the clips included (uncaptioned) off-screen dialogue and commentary, this didn’t seem to be a barrier to this student’s enjoyment. Like many other sub-genres of YouTube clips – pranks, pratfalls, cute or alarmingly dangerous incidents involving children and animals – these supernatural videos as a genre rely very little on commentary or dialogue for their meaning – just as with the action films that other students drew on so heavily in their movie making. In an E-Tech paper entitled "The Cute Cat Theory of Digital Activism", Ethan Zuckerman suggests that “web 1.0 was invented to allow physicists to share research papers and web 2.0 was created to allow people to share pictures of cute cats”. This comment points out both the Web 2.0’s vast repository of entertaining material in the ‘funny video’genre which is visually based, dialogue free, entertaining material accessible to a wide range of people, including deaf sign language users. In the realm of leisure, at least, the visually rich resources of Web 2.0’s ubiquitous images and video materials may be creating a shared culture in which the line between hearing and deaf people’s entertainment activities is less clear than it may have been in the past. The ironic tone of Zuckerman’s observation, however, alerts us to the limits of a reliance on language-free materials as a route to accessibility. The kinds of videos that the participants in the course chose to make speaks to the limitations as well as resources offered by the visual Web. There is still a limited range of captioned material on You Tube. In interviews, both young people and their teachers emphasised the central importance of access to captioned video on-line, with the young people we interviewed strongly favouring captioned video over the inclusion on-screen of simultaneous signed interpretations of text. One participant who was a regular user of a range of on-line social networking commented that if she really liked the look of a particular movie which was uncaptioned, she would sometimes contact its maker and ask them to add captions to it. 
Interestingly, two student participants emphasised in interviews that signed video should also include captions so hearing people could have access to signed narratives. These students seemed to be drawing on ideas about “reverse discrimination”, but their concern reflected the approach of many of the student movies - using shared visual conventions that made their movies available to the widest possible audience. All the students were anxious that hearing people could understand their work, perhaps a consequence of the course’s location in the University as an overwhelmingly hearing environment. In this emphasis on captioning rather than sign as a route to making media accessible, we may be seeing a consequence of the emphasis Krentz describes as ubiquitous in deaf education “the desire to make the differences between deaf and hearing people recede” (16). Krentz suggests that his concept of the ‘hearing line’ “must be perpetually retested and re-examined. It reveals complex and shifting relationships between physical difference, cultural fabrication and identity” (7). The students’ movies and attitudes emphasised the reality of that complexity. Our research project explored how some young Deaf people attempted to create stories capable of crossing categories of deafness and ‘hearing-ness’… unstable (like other identity categories) while others constructed narratives that affirmed Deaf Culture or drew on the Deaf storytelling traditions. This is of particular interest in the Web 2.0 environment, given that its technologies are often lauded as having the politics of participation. The example of the Deaf Community asks reasonable questions about the validity of those claims, and it’s hard to escape the conclusion that there is still less than appropriate access and that some users are more equal than others.How do young people handle the continuing lack of material available to the on the Web? The answer repeatedly offered by our young male interviewees was ‘I can’t be bothered’. As distinct from “I can’t understand” or “I won’t go there” this answer, represented a disengagement from demands to identify your literacy levels, reveal your preferred means of communication; to rehearse arguments about questions of access or expose attempts to struggle to make sense of texts that fail to employ readily accessible means of communicating. Neither an admission of failure or a demand for change, CAN’T-BE-BOTHERED in this context offers a cool way out of an accessibility impasse. This easily-dismissed comment in interviews was confirmed in a whole-group discussions, when students came to a consensus that if when searching for video resources on the Net they found video that included neither signing nor captions, they would move on to find other more accessible resources. Even here, though, the ground continues to shift. YouTube recently announced that it was making its auto-captioning feature open to everybody - a machine generated system that whilst not perfect does attempt to make all YouTube videos accessible to deaf people. (Bertolucci).The importance of captioning of non-signed video is thrown into further significance by our observation from the course of the use of YouTube as a search engine by the participants. Many of the students when asked to research information on the Web bypassed text-based search engines and used the more visual results presented on YouTube directly. 
In research on deaf adolescents’ search strategies on the Internet, Smith points to the promise of graphical interfaces for deaf young people as a strategy for overcoming the English literacy difficulties experienced by many deaf young people (527). In the years since Smith’s research was undertaken, the graphical and audiovisual resources available on the Web have exploded and users are increasingly turning to these resources in their searches, providing new possibilities for Deaf users (see for instance Schonfeld; Fajardo et al.). Preliminary ConclusionsA number of recent writers have pointed out the ways that the internet has made everyday communication with government services, businesses, workmates and friends immeasurably easier for deaf people (Power, Power and Horstmanshof; Keating and Mirus; Valentine and Skelton, "Changing", "Umbilical"). The ready availability of information in a textual and graphical form on the Web, and ready access to direct contact with others on the move via SMS, has worked against what has been described as deaf peoples’ “information deprivation”, while everyday tasks – booking tickets, for example – are no longer a struggle to communicate face-to-face with hearing people (Valentine and Skelton, "Changing"; Bakken 169-70).The impacts of new technologies should not be seen in simple terms, however. Valentine and Skelton summarise: “the Internet is not producing either just positive or just negative outcomes for D/deaf people but rather is generating a complex set of paradoxical effects for different users” (Valentine and Skelton, "Umbilical" 12). They note, for example, that the ability, via text-based on-line social media to interact with other people on-line regardless of geographic location, hearing status or facility with sign language has been highly valued by some of their deaf respondents. They comment, however, that the fact that many deaf people, using the Internet, can “pass” minimises the need for hearing people in a phonocentric society to be aware of the diversity of ways communication can take place. They note, for example, that “few mainstream Websites demonstrate awareness of D/deaf peoples’ information and communication needs/preferences (eg. by incorporating sign language video clips)” ("Changing" 11). As such, many deaf people have an enhanced ability to interact with a range of others, but in a mode favoured by the dominant culture, a culture which is thus unchallenged by exposure to alternative strategies of communication. Our research, preliminary as it is, suggests a somewhat different take on these complex questions. The visually driven, image-rich approach taken to movie making, Web-searching and information sharing by our participants suggests the emergence of a certain kind of on-line culture which seems likely to be shared by deaf and hearing young people. However where Valentine and Skelton suggest deaf people, in order to participate on-line, are obliged to do so, on the terms of the hearing majority, the increasingly visual nature of Web 2.0 suggests that the terrain may be shifting – even if there is still some way to go.AcknowledgementsWe would like to thank Natalie Kull and Meg Stewart for their research assistance on this project, and participants in the course and members of the project’s steering group for their generosity with their time and ideas.ReferencesBahan, B. "Upon the Formation of a Visual Variety of the Human Race. In H-Dirksen L. Baumann (ed.), Open Your Eyes: Deaf Studies Talking. 
London: University of Minnesota Press, 2007. Bakken, F. “SMS Use among Deaf Teens and Young Adults in Norway.” In R. Harper, L. Palen, and A. Taylor (eds.), The Inside Text: Social, Cultural and Design Perspectives on SMS. Netherlands: Springer, 2005. 161-74. Berners-Lee, Tim. Weaving the Web. London: Orion Business, 1999. Bertolucci, Jeff. “YouTube Offers Auto-Captioning to All Users.” PC World 5 Mar. 2010. 5 Mar. 2010 < http://www.macworld.com/article/146879/2010/03/YouTube_captions.html >. Breivik, Jan Kare. Deaf Identities in the Making: Local Lives, Transnational Connections. Washington, D.C.: Gallaudet University Press, 2005. ———. “Deaf Identities: Visible Culture, Hidden Dilemmas and Scattered Belonging.” In H.G. Sicakkan and Y.G. Lithman (eds.), What Happens When a Society Is Diverse: Exploring Multidimensional Identities. Lewiston, New York: Edwin Mellen Press, 2006. 75-104. Brueggemann, B.J. (ed.). Literacy and Deaf People’s Cultural and Contextual Perspectives. Washington, DC: Gallaudet University Press, 2004. Bruns, Axel. Blogs, Wikipedia, Second Life and Beyond: From Production to Produsage. New York: Peter Lang, 2008. Byrnes, Linda, Jeff Sigafoos, Field Rickards, and P. Margaret Brown. “Inclusion of Students Who Are Deaf or Hard of Hearing in Government Schools in New South Wales, Australia: Development and Implementation of a Policy.” Journal of Deaf Studies and Deaf Education 7.3 (2002): 244-257. Cahill, Martin, and Scott Hollier. Social Media Accessibility Review 1.0. Media Access Australia, 2009. Cavender, Anna, and Richard Ladner. “Hearing Impairments.” In S. Harper and Y. Yesilada (eds.), Web Accessibility. London: Springer, 2008. Ellis, Katie. “A Purposeful Rebuilding: YouTube, Representation, Accessibility and the Socio-Political Space of Disability.” Telecommunications Journal of Australia 60.2 (2010): 1.1-21.12. Fajardo, Inmaculada, Elena Parra, and Jose J. Canas. “Do Sign Language Videos Improve Web Navigation for Deaf Signer Users?” Journal of Deaf Studies and Deaf Education 15.3 (2009): 242-262. Harper, Phil. “Networking the Deaf Nation.” Australian Journal of Communication 30.3 (2003): 153-166. Hyde, M., D. Power, and K. Lloyd. "W(h)ither the Deaf Community? Comments on Trevor Johnston’s Population, Genetics and the Future of Australian Sign Language." Sign Language Studies 6.2 (2006): 190-201. Keating, Elizabeth, and Gene Mirus. “American Sign Language in Virtual Space: Interactions between Deaf Users of Computer-Mediated Video.” Language in Society 32.5 (Nov. 2003): 693-714. Krentz, Christopher. Writing Deafness: The Hearing Line in Nineteenth-Century Literature. Chapel Hill, NC: University of North Carolina Press, 2007. Leigh, Irene. A Lens on Deaf Identities. Oxford: Oxford UP, 2009. Marshall Gentry, M., K.M. Chinn, and R.D. Moulton. “Effectiveness of Multimedia Reading Materials When Used with Children Who Are Deaf.” American Annals of the Deaf 5 (2004): 394-403. Mather, S., and E. Winston. "Spatial Mapping and Involvement in ASL Storytelling." In C. Lucas (ed.), Pinky Extension and Eye Gaze: Language Use in Deaf Communities. Washington, DC: Gallaudet University Press, 1998. 170-82. Metzger, M. "Constructed Action and Constructed Dialogue in American Sign Language." In C. Lucas (ed.), Sociolinguistics in Deaf Communities. Washington, DC: Gallaudet University Press, 1995. 255-71. Power, Des, and G. Leigh. "Principles and Practices of Literacy Development for Deaf Learners: A Historical Overview." Journal of Deaf Studies and Deaf Education 5.1 (2000): 3-8. Power, Des, and Merv Hyde. “The Characteristics and Extent of Participation of Deaf and Hard-of-Hearing Students in Regular Classes in Australian Schools.” Journal of Deaf Studies and Deaf Education 7.4 (2002): 302-311. Power, M., and D. Power. “Everyone Here Speaks TXT: Deaf People Using SMS in Australia and the Rest of the World.” Journal of Deaf Studies and Deaf Education 9.3 (2004). Power, M., D. Power, and L. Horstmanshof. “Deaf People Communicating via SMS, TTY, Relay Service, Fax, and Computers in Australia.” Journal of Deaf Studies and Deaf Education 12.1 (2007): 80-92. Rayman, J. "Storytelling in the Visual Mode: A Comparison of ASL and English." In E. Winston (ed.), Storytelling & Conversation: Discourse in Deaf Communities. Washington, DC: Gallaudet University Press, 2002. 59-82. Schonfeld, Eric. "ComScore: YouTube Now 25 Percent of All Google Searches." Tech Crunch 18 Dec. 2008. 14 May 2009 < http://www.techcrunch.com/2008/12/18/comscore-YouTube-now-25-percent-of-all-google-searches/?rss >. Smith, Chad. “Where Is It? How Deaf Adolescents Complete Fact-Based Internet Search Tasks.” American Annals of the Deaf 151.5 (2005-6). Swanwick, R., and S. Gregory (eds.). Sign Bilingual Education: Policy and Practice. Coleford: Douglas McLean Publishing, 2007. Tane Akamatsu, C., C. Mayer, and C. Farrelly. “An Investigation of Two-Way Text Messaging Use with Deaf Students at the Secondary Level.” Journal of Deaf Studies and Deaf Education 11.1 (2006): 120-131. Valentine, Gill, and Tracy Skelton. “Changing Spaces: The Role of the Internet in Shaping Deaf Geographies.” Social and Cultural Geography 9.5 (2008): 469-85. ———. “‘An Umbilical Cord to the World’: The Role of the Internet in D/deaf People’s Information and Communication Practices.” Information, Communication and Society 12.1 (2009): 44-65. Vogel, Jennifer, Clint Bowers, Cricket Meehan, Raegan Hoeft, and Kristy Bradley. “Virtual Reality for Life Skills Education: Program Evaluation.” Deafness and Education International 61 (2004): 39-47. Wilson, J. "The Tobacco Story: Narrative Structure in an ASL Story." In C. Lucas (ed.), Multicultural Aspects of Sociolinguistics in Deaf Communities. Washington, DC: Gallaudet University Press, 1996. 152-80. Winston, E. (ed.). Storytelling and Conversation: Discourse in Deaf Communities. Washington, DC: Gallaudet University Press, 2002. Woods, Denise. “Communicating in Virtual Worlds through an Accessible Web 2.0 Solution.” Telecommunications Journal of Australia 60.2 (2010): 19.1-19.16. YouTube Most Viewed. Online video. YouTube, 2009. 23 May 2009 < http://www.YouTube.com/browse?s=mp&t=a >.
15

Dwyer, Tim. "Transformations". M/C Journal 7, n.º 2 (1 de marzo de 2004). http://dx.doi.org/10.5204/mcj.2339.

Full text
Abstract
The Australian Government has been actively evaluating how best to merge the functions of the Australian Communications Authority (ACA) and the Australian Broadcasting Authority (ABA) for around two years now. Broadly, the reason for this is an attempt to keep pace with the communications media transformations we reduce to the term “convergence.” Mounting pressure for restructuring is emerging as a site of turf contestation: the possibility of a regulatory “one-stop shop” for governments (and some industry players) is an end game of considerable force. But, from a public interest perspective, the case for a converged regulator needs to make sense to audiences using various media, as well as in terms of arguments about global, industrial, and technological change. This national debate about the institutional reshaping of media regulation is occurring within a wider global context of transformations in social, technological, and politico-economic frameworks of open capital and cultural markets, including the increasing prominence of international economic organisations, corporations, and Free Trade Agreements (FTAs). Although the recently concluded FTA with the US explicitly carves out a right for Australian Governments to make regulatory policy in relation to existing and new media, considerable uncertainty remains as to future regulatory arrangements. A key concern is how a right to intervene in cultural markets will be sustained in the face of cultural, politico-economic, and technological pressures that are reconfiguring creative industries on an international scale. While the right to intervene was retained for the audiovisual sector in the FTA, by contrast, it appears that comparable unilateral rights to intervene will not operate for telecommunications, e-commerce or intellectual property (DFAT).
Blurring Boundaries
A lack of certainty for audiences is a by-product of industry change, and further blurs regulatory boundaries: new digital media content and overlapping delivery technologies are already a reality for Australia’s media regulators. These hypothetical media usage scenarios indicate how confusion over the appropriate regulatory agency may arise:
1. playing electronic games that use racist language;
2. being subjected to deceptive or misleading pop-up advertising online;
3. receiving messaged imagery on your mobile phone that offends, disturbs, or annoys;
4. watching a program like World Idol with SMS voting that subsequently raises charging or billing issues; or
5. watching a new “reality” TV program where products are being promoted with no explicit acknowledgement of the underlying commercial arrangements either during or at the end of the program.
These are all instances where, theoretically, regulatory mechanisms are in place that allow individuals to complain and to seek some kind of redress as consumers and citizens. In the last scenario, in commercial television under the sector code, no clear-cut rules exist as to the precise form of the disclosure—as there are (from 2000) in commercial radio. It’s one of a number of issues the peak TV industry lobby Commercial TV Australia (CTVA) is considering in their review of the industry’s code of practice. CTVA have proposed an amendment to the code that will simply formalise the already existing practice. That is, commercial arrangements that assist in the making of a program should be acknowledged either during programs, or in their credits.
In my view, this amendment doesn’t go far enough in post “cash for comment” mediascapes (Dwyer). Audiences have a right to expect that broadcasters, production companies and program celebrities are open and transparent with the Australian community about these kinds of arrangements. They need to be far more clearly signposted, and people better informed about their role. In the US, the “Commercial Alert” <http://www.commercialalert.org/> organisation has been lobbying the Federal Communications Commission and the Federal Trade Commission to achieve similar in-program “visual acknowledgements.” The ABA’s Commercial Radio Inquiry (“Cash-for-Comment”) found widespread systemic regulatory failure and introduced three new standards. On that basis, how could a “standstill” response by CTVA, constitute best practice for such a pervasive and influential medium as contemporary commercial television? The World Idol example may lead to confusion for some audiences, who are unsure whether the issues involved relate to broadcasting or telecommunications. In fact, it could be dealt with as a complaint to the Telecommunication Industry Ombudsman (TIO) under an ACA registered, but Australian Communications Industry Forum (ACIF) developed, code of practice. These kind of cross-platform issues may become more vexed in future years from an audience’s perspective, especially if reality formats using on-screen premium rate service numbers invite audiences to participate, by sending MMS (multimedia messaging services) images or short video grabs over wireless networks. The political and cultural implications of this kind of audience interaction, in terms of access, participation, and more generally the symbolic power of media, may perhaps even indicate a longer-term shift in relations with consumers and citizens. In the Internet example, the Australian Competition and Consumer Commission’s (ACCC) Internet advertising jurisdiction would apply—not the ABA’s “co-regulatory” Internet content regime as some may have thought. Although the ACCC deals with complaints relating to Internet advertising, there won’t be much traction for them in a more complex issue that also includes, say, racist or religious bigotry. The DVD example would probably fall between the remits of the Office of Film and Literature Classification’s (OFLC) new “convergent” Guidelines for the Classification of Film and Computer Games and race discrimination legislation administered by the Human Rights and Equal Opportunity Commission (HREOC). The OFLC’s National Classification Scheme is really geared to provide consumer advice on media products that contain sexual and violent imagery or coarse language, rather than issues of racist language. And it’s unlikely that a single person would have the locus standito even apply for a reclassification. It may fall within the jurisdiction of the HREOC depending on whether it was played in public or not. Even then it would probably be considered exempt on free speech grounds as an “artistic work.” Unsolicited, potentially illegal, content transmitted via mobile wireless devices, in particular 3G phones, provide another example of content that falls between the media regulation cracks. It illustrates a potential content policy “turf grab” too. Image-enabled mobile phones create a variety of novel issues for content producers, network operators, regulators, parents and viewers. There is no one government media authority or agency with a remit to deal with this issue. 
Although it has elements relating to the regulatory activities of the ACA, the ABA, the OFLC, the TIO, and TISSC, the combination of illegal or potentially prohibited content and its carriage over wireless networks positions it outside their current frameworks. The ACA may argue it should have responsibility for this kind of content since: it now enforces the recently enacted Commonwealth anti-Spam laws; has registered an industry code of practice for unsolicited content delivered over wireless networks; is seeking to include ‘adult’ content within premium rate service numbers, and, has been actively involved in consumer education for mobile telephony. It has also worked with TISSC and the ABA in relation to telephone sex information services over voice networks. On the other hand, the ABA would probably argue that it has the relevant expertise for regulating wirelessly transmitted image-content, arising from its experience of Internet and free and subscription TV industries, under co-regulatory codes of practice. The OFLC can also stake its claim for policy and compliance expertise, since the recently implemented Guidelines for Classification of Film and Computer Games were specifically developed to address issues of industry convergence. These Guidelines now underpin the regulation of content across the film, TV, video, subscription TV, computer games and Internet sectors. Reshaping Institutions Debates around the “merged regulator” concept have occurred on and off for at least a decade, with vested interests in agencies and the executive jockeying to stake claims over new turf. On several occasions the debate has been given renewed impetus in the context of ruling conservative parties’ mooted changes to the ownership and control regime. It’s tended to highlight demarcations of remit, informed as they are by historical and legal developments, and the gradual accretion of regulatory cultures. Now the key pressure points for regulatory change include the mere existence of already converged single regulatory structures in those countries with whom we tend to triangulate our policy comparisons—the US, the UK and Canada—increasingly in a context of debates concerning international trade agreements; and, overlaying this, new media formats and devices are complicating existing institutional arrangements and legal frameworks. The Department of Communications, Information Technology & the Arts’s (DCITA) review brief was initially framed as “options for reform in spectrum management,” but was then widened to include “new institutional arrangements” for a converged regulator, to deal with visual content in the latest generation of mobile telephony, and other image-enabled wireless devices (DCITA). No other regulatory agencies appear, at this point, to be actively on the Government’s radar screen (although they previously have been). Were the review to look more inclusively, the ACCC, the OFLC and the specialist telecommunications bodies, the TIO and the TISSC may also be drawn in. Current regulatory arrangements see the ACA delegate responsibility for broadcasting services bands of the radio frequency spectrum to the ABA. In fact, spectrum management is the turf least contested by the regulatory players themselves, although the “convergent regulator” issue provokes considerable angst among powerful incumbent media players. The consensus that exists at a regulatory level can be linked to the scientific convention that holds the radio frequency spectrum is a continuum of electromagnetic bands. 
In this view, it becomes artificial to sever broadcasting, as “broadcasting services bands” from the other remaining highly diverse communications uses, as occurred from 1992 when the Broadcasting Services Act was introduced. The prospect of new forms of spectrum charging is highly alarming for commercial broadcasters. In a joint submission to the DCITA review, the peak TV and radio industry lobby groups have indicated they will fight tooth and nail to resist new regulatory arrangements that would see a move away from the existing licence fee arrangements. These are paid as a sliding scale percentage of gross earnings that, it has been argued by Julian Thomas and Marion McCutcheon, “do not reflect the amount of spectrum used by a broadcaster, do not reflect the opportunity cost of using the spectrum, and do not provide an incentive for broadcasters to pursue more efficient ways of delivering their services” (6). An economic rationalist logic underpins pressure to modify the spectrum management (and charging) regime, and undoubtedly contributes to the commercial broadcasting industry’s general paranoia about reform. Total revenues collected by the ABA and the ACA between 1997 and 2002 were, respectively, $1423 million and $3644.7 million. Of these sums, using auction mechanisms, the ABA collected $391 million, while the ACA collected some $3 billion. The sale of spectrum that will be returned to the Commonwealth by television broadcasters when analog spectrum is eventually switched off, around the end of the decade, is a salivating prospect for Treasury officials. The large sums that have been successfully raised by the ACA boosts their position in planning discussions for the convergent media regulatory agency. The way in which media outlets and regulators respond to publics is an enduring question for a democratic polity, irrespective of how the product itself has been mediated and accessed. Media regulation and civic responsibility, including frameworks for negotiating consumer and citizen rights, are fundamental democratic rights (Keane; Tambini). The ABA’s Commercial Radio Inquiry (‘cash for comment’) has also reminded us that regulatory frameworks are important at the level of corporate conduct, as well as how they negotiate relations with specific media audiences (Johnson; Turner; Gordon-Smith). Building publicly meaningful regulatory frameworks will be demanding: relationships with audiences are often complex as people are constructed as both consumers and citizens, through marketised media regulation, institutions and more recently, through hybridising program formats (Murdock and Golding; Lumby and Probyn). In TV, we’ve seen the growth of infotainment formats blending entertainment and informational aspects of media consumption. At a deeper level, changes in the regulatory landscape are symptomatic of broader tectonic shifts in the discourses of governance in advanced information economies from the late 1980s onwards, where deregulatory agendas created an increasing reliance on free market, business-oriented solutions to regulation. “Co-regulation” and “self-regulation’ became the preferred mechanisms to more direct state control. Yet, curiously contradicting these market transformations, we continue to witness recurring instances of direct intervention on the basis of censorship rationales (Dwyer and Stockbridge). That digital media content is “converging” between different technologies and modes of delivery is the norm in “new media” regulatory rhetoric. 
Others critique “visions of techno-glory,” arguing instead for a view that sees fundamental continuities in media technologies (Winston). But the socio-cultural impacts of new media developments surround us: the introduction of multichannel digital and interactive TV (in free-to-air and subscription variants); broadband access in the office and home; wirelessly delivered content and mobility, and, as Jock Given notes, around the corner, there’s the possibility of “an Amazon.Com of movies-on-demand, with the local video and DVD store replaced by online access to a distant server” (90). Taking a longer view of media history, these changes can be seen to be embedded in the global (and local) “innovation frontier” of converging digital media content industries and its transforming modes of delivery and access technologies (QUT/CIRAC/Cutler & Co). The activities of regulatory agencies will continue to be a source of policy rivalry and turf contestation until such time as a convergent regulator is established to the satisfaction of key players. However, there are risks that the benefits of institutional reshaping will not be readily available for either audiences or industry. In the past, the idea that media power and responsibility ought to coexist has been recognised in both the regulation of the media by the state, and the field of communications media analysis (Curran and Seaton; Couldry). But for now, as media industries transform, whatever the eventual institutional configuration, the evolution of media power in neo-liberal market mediascapes will challenge the ongoing capacity for interventions by national governments and their agencies. Works Cited Australian Broadcasting Authority. Commercial Radio Inquiry: Final Report of the Australian Broadcasting Authority. Sydney: ABA, 2000. Australian Communications Information Forum. Industry Code: Short Message Service (SMS) Issues. Dec. 2002. 8 Mar. 2004 <http://www.acif.org.au/__data/page/3235/C580_Dec_2002_ACA.pdf >. Commercial Television Australia. Draft Commercial Television Industry Code of Practice. Aug. 2003. 8 Mar. 2004 <http://www.ctva.com.au/control.cfm?page=codereview&pageID=171&menucat=1.2.110.171&Level=3>. Couldry, Nick. The Place of Media Power: Pilgrims and Witnesses of the Media Age. London: Routledge, 2000. Curran, James, and Jean Seaton. Power without Responsibility: The Press, Broadcasting and New Media in Britain. 6th ed. London: Routledge, 2003. Dept. of Communication, Information Technology and the Arts. Options for Structural Reform in Spectrum Management. Canberra: DCITA, Aug. 2002. ---. Proposal for New Institutional Arrangements for the ACA and the ABA. Aug. 2003. 8 Mar. 2004 <http://www.dcita.gov.au/Article/0,,0_1-2_1-4_116552,00.php>. Dept. of Foreign Affairs and Trade. Australia-United States Free Trade Agreement. Feb. 2004. 8 Mar. 2004 <http://www.dfat.gov.au/trade/negotiations/us_fta/outcomes/11_audio_visual.php>. Dwyer, Tim. Submission to Commercial Television Australia’s Review of the Commercial Television Industry’s Code of Practice. Sept. 2003. Dwyer, Tim, and Sally Stockbridge. “Putting Violence to Work in New Media Policies: Trends in Australian Internet, Computer Game and Video Regulation.” New Media and Society 1.2 (1999): 227-49. Given, Jock. America’s Pie: Trade and Culture After 9/11. Sydney: U of NSW P, 2003. Gordon-Smith, Michael. “Media Ethics After Cash-for-Comment.” The Media and Communications in Australia. Ed. Stuart Cunningham and Graeme Turner. Sydney: Allen and Unwin, 2002. Johnson, Rob. 
Cash-for-Comment: The Seduction of Journo Culture. Sydney: Pluto, 2000. Keane, John. The Media and Democracy. Cambridge: Polity, 1991. Lumby, Cathy, and Elspeth Probyn, eds. Remote Control: New Media, New Ethics. Melbourne: Cambridge UP, 2003. Murdock, Graham, and Peter Golding. “Information Poverty and Political Inequality: Citizenship in the Age of Privatized Communications.” Journal of Communication 39.3 (1991): 180-95. QUT, CIRAC, and Cutler & Co. Research and Innovation Systems in the Production of Digital Content and Applications: Report for the National Office for the Information Economy. Canberra: Commonwealth of Australia, Sept. 2003. Tambini, Damian. Universal Access: A Realistic View. IPPR/Citizens Online Research Publication 1. London: IPPR, 2000. Thomas, Julian and Marion McCutcheon. “Is Broadcasting Special? Charging for Spectrum.” Conference paper. ABA conference, Canberra. May 2003. Turner, Graeme. “Talkback, Advertising and Journalism: A cautionary tale of self-regulated radio”. International Journal of Cultural Studies 3.2 (2000): 247-255. ---. “Reshaping Australian Institutions: Popular Culture, the Market and the Public Sphere.” Culture in Australia: Policies, Publics and Programs. Ed. Tony Bennett and David Carter. Melbourne: Cambridge UP, 2001. Winston, Brian. Media, Technology and Society: A History from the Telegraph to the Internet. London: Routledge, 1998. Web Links http://www.aba.gov.au http://www.aca.gov.au http://www.accc.gov.au http://www.acif.org.au http://www.adma.com.au http://www.ctva.com.au http://www.crtc.gc.ca http://www.dcita.com.au http://www.dfat.gov.au http://www.fcc.gov http://www.ippr.org.uk http://www.ofcom.org.uk http://www.oflc.gov.au Links http://www.commercialalert.org/ Citation reference for this article MLA Style Dwyer, Tim. "Transformations" M/C: A Journal of Media and Culture <http://www.media-culture.org.au/0403/06-transformations.php>. APA Style Dwyer, T. (2004, Mar17). Transformations. M/C: A Journal of Media and Culture, 7, <http://www.media-culture.org.au/0403/06-transformations.php>
16

Hinner, Kajetan. "Statistics of Major IRC Networks". M/C Journal 3, n.º 4 (1 de agosto de 2000). http://dx.doi.org/10.5204/mcj.1867.

Full text
Abstract
Internet Relay Chat (IRC) is a text-based computer-mediated communication (CMC) service in which people can meet and chat in real time. Most chat occurs in channels named for a specific topic, such as #usa or #linux. A user can take part in several channels when connected to an IRC network. For a long time the only major IRC network available was EFnet, founded in 1990. Over the 1990s three other major IRC networks developed: Undernet (1993), DALnet (1994) and IRCnet (which split from EFnet in June 1996). Several causes led to the separate development of IRC networks: fast growth of user numbers, poor scalability of the IRC protocol, and content disagreements, such as whether to allow or prohibit 'bot programs. Today we are experiencing the development of regional IRC networks, such as BrasNet for Brazilian users, and increasing regionalisation of the global networks -- IRCnet users are generally European, EFnet users generally from the Americas and Australia. All persons connecting to an IRC network at one time create that IRC network's user space. People are constantly signing on and off each network. The total number of users who have ever been to a specific IRC network could be called its 'social space', and an IRC network's social space is by far larger than its user space at any one time. Although there has been research on IRC almost from its beginning (it was developed in 1988, and the first research was made available in late 1991 (Reid)), resources on quantitative development are rare. To rectify this situation, a quantitative data logging 'bot program -- Socip -- was created and set to run on various IRC networks. Socip has been running for almost two years on several IRC networks, giving Internet researchers empirical data on the quantitative development of IRC.
Methodology
Any approach to gathering quantitative data on IRC needs to fulfil the following tasks:
1. store the number of users that are on an IRC network at a given time, e.g. every five minutes;
2. store the number of channels; and
3. store the number of servers.
It is possible to get this information using the '/lusers' command on an IRC-II client, entered by hand. This approach yields results as in Table 1.
Table 1: Number of IRC users on January 31st, 1995
Date        Time    Users   Invisible   Servers   Channels
31.01.95    10:57   2737    2026        93        1637
During the first months of 1995, it was even possible to get all user information using the '/who **' command. However, on current major IRC networks with more than 50000 users this method is denied by the IRC server program, which terminates the connection because the client is too slow to accept that amount of data. Added to this problem is the fact that collecting these data manually is an exhausting and repetitive task, better suited to automation. Three approaches to automation were attempted in the development process.
The 'Eggdrop' approach
The 'Eggdrop' 'bot is one of the best-known IRC 'bot programs. Once programmed, 'bots can act autonomously on an IRC network, and Eggdrop was considered particularly convenient because customised modules could be easily installed. However, testing showed that the Eggdrop 'bot was unsuitable for two reasons. The first was technical: for undetermined reasons, all Eggdrop modules created extensive CPU usage, making it impossible to run several Eggdrops simultaneously to research a number of IRC networks. The second reason had to do with the statistics to be obtained.
The objective was to get a snapshot of current IRC users and IRC channel use every five minutes, written into an ASCII file. It was impossible to extend Eggdrop's possibilities in a way that it would periodically submit the '/lusers' command and write the received data into a file. For these reasons, and some security concerns, the Eggdrop approach was abandoned.
The ircII script approach
ircII was a UNIX IRC client with its own scripting language, making it possible to write command files which periodically submit the '/lusers' command to any chosen IRC server and log the command's output. Four different scripts were used to monitor IRCnet, EFnet, DALnet and Undernet from January to October 1998. These scripts were named Socius_D, Socius_E, Socius_I and Socius_U (depending on the network). Every hour each script stored the number of users and channels in a logfile (examinable using another script written in the Perl language). There were some drawbacks to the ircII script approach. While the need for a terminal to run on could be avoided using the 'screen' package -- making it possible to start ircII, run the scripts, detach, and log off again -- it was impossible to restart ircII and the scripts using an automatic task-scheduler. Thus periodic manual checks were required to find out if the scripts were still running and to restart them if needed (e.g. if the server connection was lost). These checks showed that at least one script would not be running after 10 hours. Additional disadvantages were the lengthy log files and the necessity of providing a second program to extract the log file data and write it into a second file from which meaningful graphs could be created. The failure of the Eggdrop and ircII scripting approaches led to the solution still in use today.
Perl script-only approach
Perl is a powerful script language for handling file-oriented data when speed is not extremely important. Its version 5 flavour can be extended with a wide range of modules, including the Net::IRC package. The object-oriented Perl interface enables Perl scripts to connect to an IRC server and use the basic IRC commands. The Socip.pl program includes all server definitions needed to create connections. Socip is currently monitoring ten major IRC networks, including DALnet, EFnet, IRCnet, the Microsoft Network, Talkcity, Undernet and Galaxynet. When run, the "Social science IRC program" selects a nickname from its list corresponding to the network -- for EFnet, the first nickname used is Socip_E1. It then functions somewhat like a 'bot. Using that nickname, Socip tries to create an IRC connection to a server of the given network. If there is no failure, handlers are set up which take care of proper reactions to IRC server messages (such as ping-pong, message output and reply). Socip then joins the channel #hose (the name has no special meaning), a maintenance channel with the additional effect of real persons meeting the 'bot and trying to interact with it every now and then. Those interactions are logged too. Sitting in that channel, the script sleeps periodically and checks if a certain time span has passed (the default is five minutes). After that, the '/lusers' command's output is stored in a data file for each IRC network and the IRC network's RRD (Round Robin Database) file is updated. This database, which is organised chronologically, offers great detail for recent events and more condensed information for older events. User and channel information younger than 10 days is stored in five-minute detail; if older than two years, the same information is automatically averaged and stored at per-day resolution.
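To make the polling loop concrete, the following is a minimal, illustrative sketch in Python of the same periodic-logging idea. It is not the author's code: Socip itself is a Perl program built on Net::IRC, and the server name, nickname and log file below are hypothetical placeholders. The sketch speaks the raw IRC protocol over a socket, answers PING messages, issues the LUSERS command every five minutes, and appends the counts reported in the standard numeric replies 251 (users/invisible) and 254 (channels) to a plain log file.

import re
import socket
import time

# Illustrative placeholders only; Socip keeps per-network server and nickname lists.
SERVER = "irc.example.net"
PORT = 6667
NICK = "socip_demo"
LOGFILE = "ircstats.log"
INTERVAL = 300  # poll every five minutes, as described in the article


def connect():
    """Open a raw IRC connection and register a nickname."""
    sock = socket.create_connection((SERVER, PORT))
    sock.sendall(f"NICK {NICK}\r\nUSER {NICK} 0 * :stats logger\r\n".encode())
    sock.settimeout(5)
    return sock


def main():
    sock, buf = connect(), b""
    users = invisible = channels = None
    next_poll = 0.0
    while True:
        if time.time() >= next_poll:          # periodic LUSERS query
            sock.sendall(b"LUSERS\r\n")
            next_poll = time.time() + INTERVAL
        try:
            data = sock.recv(4096)
        except socket.timeout:
            continue
        if not data:                          # dropped connection: reconnect
            sock, buf = connect(), b""
            continue
        buf += data
        while b"\r\n" in buf:
            line, buf = buf.split(b"\r\n", 1)
            text = line.decode(errors="replace")
            if text.startswith("PING"):       # keep the server connection alive
                sock.sendall(("PONG" + text[4:] + "\r\n").encode())
                continue
            parts = text.split()
            if len(parts) >= 4 and parts[1] == "251":
                # RPL_LUSERCLIENT: ":There are <n> users and <m> invisible on <k> servers"
                nums = re.findall(r"\d+", text.split(" :", 1)[-1])
                if len(nums) >= 2:
                    users, invisible = int(nums[0]), int(nums[1])
            elif len(parts) >= 4 and parts[1] == "254":
                # RPL_LUSERCHANNELS: "<n> :channels formed"
                channels = int(parts[3])
            if users is not None and channels is not None:
                stamp = time.strftime("%d.%m.%y %H:%M")
                with open(LOGFILE, "a") as log:
                    log.write(f"{stamp} users={users} invisible={invisible} channels={channels}\n")
                users = invisible = channels = None


if __name__ == "__main__":
    main()

A real collector, like Socip, would additionally rotate through a nickname list on reconnect, fall back to the next server after repeated failures, and feed the samples into a round-robin database rather than a flat file; the sketch only shows the sampling loop itself.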
In case of network problems, Socip acts as necessary. For example, it recognises a connection termination and tries to reconnect after pausing, using the next nickname on the list. This prevents nickname collision problems. If the IRC server does not respond to '/lusers' commands three times in a row, the next server on the list is accessed. Special (crontab-invoked) scripts take care of restarting Socip when necessary, such as termination of the script because of network problems, an IRC operator kill, or power failure. After a reboot all scripts are automatically restarted. All monitoring is done on a Linux machine (Pentium 120, 32 MB, Debian Linux 2.1) which is up all the time. Processor load is not extensive, and this machine also acts as the Sociology Department's WWW server.
Graph creation
Graphs can be created from the data in Socip's RRD files. This task is done using the MRTG (multi router traffic grapher) program by Tobias Oetiker. A script updates all IRC graphs four times a day. Usage of each IRC network is visualised through five graphs: daily, weekly and monthly graphs of users and channels, accompanied by two graphs showing all known data, one for users/channels and one for servers. All this information is continuously published on the World Wide Web at http://www.hinner.com/ircstat.
Figures
The following samples demonstrate what information can be produced by Socip. As already mentioned, graphs of all monitored networks are updated four times a day, with five graphs for each IRC network. Figure 1 shows the rise of EFnet users from about 40000 in November 1998 to 65000 in July 2000. The sampled data oscillates around an average value, a result of users being spread across different time zones.
Fig. 1: EFnet - Users and Channels since November 1998
Figure 2 illustrates the decrease of interconnected EFnet servers over the years. Each server is now handling more and more users. Reasons for taking IRC servers off the net are security concerns (attacks on the server by malicious persons), new payment schemes, and the effort and cost of maintenance.
Fig. 2: EFnet - Servers since November 1998
A nice example of a heavily changing weekly graph is Figure 3, which shows peaks shortly before 6pm CEST and almost no users shortly after midnight.
Fig. 3: Galaxynet: Weekly Graph (July, 15th-22nd, 2000)
The daily graph portrays usage variations with even more detail. Figure 4 is taken from Undernet user and channel data. The vertical gap in the graph indicates missing data, caused either by a net split or by other network problems.
Fig. 4: Undernet: Daily Graph: July, 22nd, 2000
The final example (Figure 5) shows a weekly graph of the Webchat (http://www.webchat.org) network. It can be seen that every day the user count varies from 5000 to nearly 20000, and that channel numbers fluctuate in concert, from 2500 to 5000.
Fig. 5: Webchat: Monthly graph, Week 24-29, 2000
Not every IRC user is connected all the time to an IRC network. This figure may have increased lately with more and more flat-rate and cheap Internet access offers, but in general most users will sign off the network after some time. This is why IRC is a very dynamic society, with its membership constantly in flux.
Maximum user counts only give the highest number of members who were simultaneously online at some point, and one could only guess at the total number of users of the network -- that is, including those who are using that IRC service but are not signed on at that time. More thorough investigation would be necessary to answer these questions; then inflows and outflows might be more readily estimated. Table 2 shows the all-time maximum user counts of seven IRC networks, compared to the average numbers of IRC users of the four major IRC networks during the third quarter of 1998 (based on available data).
Table 2: Maximum user counts of selected IRC networks
              DALnet   EFnet   GalaxyNet   IRCnet   MS Chat   Undernet   Webchat
Max. 2000      64276   64309       15253    65340     17392      60210     19793
3rd Q. 1998    21000   37000         n/a    24500       n/a      24000       n/a
Compared with the 200-300 users in 1991 and the 7000 IRC chatters in 1994, the recent growth is certainly extraordinary: it adds up to a total of 306573 users across all monitored networks. It can be expected that the 500000 IRC user threshold will be passed some time during the year 2001. As a final remark, it should be said that Web-based chat systems will obviously become more and more common in the future. These chat services do not use standard IRC protocols, and will be very hard to monitor. Given that these systems are already quite popular, the actual number of chat users in the world could have already passed the half million landmark.
References
Reid, Elizabeth. "Electropolis: Communications and Community on Internet Relay Chat." Unpublished Honours Dissertation. U of Melbourne, 1991.
The Socip program can be obtained at no cost from http://www.hinner.com. Most IRC networks can be accessed with the original Net::Irc Perl extension, but for some special cases (e.g. Talkcity) an extended version is needed, which can also be found there.
Citation reference for this article
MLA style: Kajetan Hinner. "Statistics of Major IRC Networks: Methods and Summary of User Count." M/C: A Journal of Media and Culture 3.4 (2000). [your date of access] <http://www.api-network.com/mc/0008/count.php>.
Chicago style: Kajetan Hinner, "Statistics of Major IRC Networks: Methods and Summary of User Count," M/C: A Journal of Media and Culture 3, no. 4 (2000), <http://www.api-network.com/mc/0008/count.php> ([your date of access]).
APA style: Kajetan Hinner. (2000) Statistics of major IRC networks: methods and summary of user count. M/C: A Journal of Media and Culture 3(4). <http://www.api-network.com/mc/0008/count.php> ([your date of access]).
17

Cesarini, Paul. "‘Opening’ the Xbox". M/C Journal 7, n.º 3 (1 de julio de 2004). http://dx.doi.org/10.5204/mcj.2371.

Full text
Abstract
“As the old technologies become automatic and invisible, we find ourselves more concerned with fighting or embracing what’s new”—Dennis Baron, From Pencils to Pixels: The Stages of Literacy Technologies
What constitutes a computer, as we have come to expect it? Are they necessarily monolithic “beige boxes”, connected to computer monitors, sitting on computer desks, located in computer rooms or computer labs? In order for a device to be considered a true computer, does it need to have a keyboard and mouse? If this were 1991 or earlier, our collective perception of what computers are and are not would largely be framed by this “beige box” model: computers are stationary, slab-like, and heavy, and their natural habitats must be in rooms specifically designated for that purpose. In 1992, when Apple introduced the first PowerBook, our perception began to change. Certainly there had been other portable computers prior to that, such as the Osborne 1, but these were more luggable than portable, weighing just slightly less than a typical sewing machine. The PowerBook and subsequent waves of laptops, personal digital assistants (PDAs), and so-called smart phones from numerous other companies have steadily forced us to rethink and redefine what a computer is and is not, how we interact with them, and the manner in which these tools might be used in the classroom. However, this reconceptualization of computers is far from over, and is in fact steadily evolving as new devices are introduced, adopted, and subsequently adapted for uses beyond their original purpose. Pat Crowe’s Book Reader project, for example, has morphed Nintendo’s GameBoy and GameBoy Advance into a viable electronic book platform, complete with images, sound, and multi-language support (Crowe, 2003). His goal was to take this existing technology, previously framed only within the context of proprietary adolescent entertainment, and repurpose it for open, flexible uses typically associated with learning and literacy. Similar efforts are underway to repurpose Microsoft’s Xbox, perhaps the ultimate symbol of “closed” technology given Microsoft’s propensity for proprietary code, in order to make it a viable platform for Open Source Software (OSS). However, these efforts are not foregone conclusions, and are in fact typical of the ongoing battle over who controls the technology we own in our homes, and how open source solutions are often at odds with a largely proprietary world. In late 2001, Microsoft launched the Xbox with a multimillion dollar publicity drive featuring events, commercials, live models, and statements claiming this new console gaming platform would “change video games the way MTV changed music” (Chan, 2001). The Xbox launched with the following technical specifications: a 733MHz Pentium III; 64MB RAM; an 8 or 10GB internal hard disk drive; a CD/DVD-ROM drive (speed unknown); an Nvidia graphics processor with HDTV support; 4 USB 1.1 ports (adapter required); AC3 audio; a 10/100 Ethernet port; and an optional 56k modem (TechTV, 2001). While current computers dwarf these specifications in virtually all areas now, for 2001 these were roughly on par with many desktop systems. The retail price at the time was $299, but it steadily dropped to nearly half that, with additional price cuts anticipated. Based on these features, the preponderance of “off the shelf” parts and components used, and the relatively reasonable price, numerous programmers quickly became interested in seeing if it was possible to run Linux and additional OSS on the Xbox.
In each case, the goal has been similar: to exceed the original purpose of the Xbox and to determine if and how well it might be used for basic computing tasks. If these attempts prove to be successful, the Xbox could allow institutions to dramatically increase the student-to-computer ratio in select environments, or allow individuals who could not otherwise afford a computer to instead buy an Xbox, download and install Linux, and use this new device to write, create, and innovate. This drive to literally and metaphorically “open” the Xbox comes from many directions. Such efforts include Andrew Huang’s self-published “Hacking the Xbox” book in which, under the auspices of reverse engineering, Huang analyzes the architecture of the Xbox, detailing step-by-step instructions for flashing the ROM, upgrading the hard drive and/or RAM, and generally prepping the device for use as an information appliance. Additional initiatives include Lindows CEO Michael Robertson’s $200,000 prize to encourage Linux development on the Xbox, and the Xbox Linux Project at SourceForge.
What is Linux?
Linux is an alternative operating system initially developed in 1991 by Linus Benedict Torvalds. Linux was based off a derivative of the MINIX operating system, which in turn was a derivative of UNIX (Hasan 2003). Linux is currently available for Intel-based systems that would normally run versions of Windows, PowerPC-based systems that would normally run Apple’s Mac OS, and a host of other handheld, cell phone, or so-called “embedded” systems. Linux distributions are based almost exclusively on open source software, graphical user interfaces, and middleware components. While there are commercial Linux distributions available, these mainly just package the freely available operating system with bundled technical support, manuals, some exclusive or proprietary commercial applications, and related services. Anyone can still download and install numerous Linux distributions at no cost, provided they do not need technical support beyond the community/enthusiast level. Typical Linux distributions come with open source web browsers, word processors and related productivity applications (such as those found in OpenOffice.org), and related tools for accessing email, organizing schedules and contacts, etc. Certain Linux distributions are more or less designed for network administrators, system engineers, and similar “power users” somewhat distanced from our students. However, several distributions, including Lycoris, Mandrake, LindowsOS, and others, are specifically tailored as regular desktop operating systems, with regular, everyday computer users in mind. As Linux has no draconian “product activation key” method of authentication, or digital rights management-laden features associated with installation and implementation on typical desktop and laptop systems, Linux is becoming an ideal choice both individually and institutionally. It still faces an uphill battle in terms of achieving widespread acceptance as a desktop operating system. As Finnie points out in Desktop Linux Edges Into The Mainstream: “to attract users, you need ease of installation, ease of device configuration, and intuitive, full-featured desktop user controls. It’s all coming, but slowly. With each new version, desktop Linux comes closer to entering the mainstream.
It’s anyone’s guess as to when critical mass will be reached, but you can feel the inevitability: There’s pent-up demand for something different.” (Finnie 2003) Linux is already spreading rapidly in numerous capacities, in numerous countries. Linux has “taken hold wherever computer users desire freedom, and wherever there is demand for inexpensive software.” Reports from technology research company IDG indicate that roughly a third of computers in Central and South America run Linux. Several countries, including Mexico, Brazil, and Argentina, have all but mandated that state-owned institutions adopt open source software whenever possible to “give their people the tools and education to compete with the rest of the world.” (Hills 2001)
The Goal
Less than a year after Microsoft introduced the Xbox, the Xbox Linux Project formed. The Xbox Linux Project has a goal of developing and distributing Linux for the Xbox gaming console, “so that it can be used for many tasks that Microsoft don’t want you to be able to do. ...as a desktop computer, for email and browsing the web from your TV, as a (web) server” (Xbox Linux Project 2002). Since the Linux operating system is open source, meaning it can freely be tinkered with and distributed, those who opt to download and install Linux on their Xbox can do so with relatively little overhead in terms of cost or time. Additionally, Linux itself looks very “windows-like”, making for a fairly low learning curve. To help increase overall awareness of this project and assist in diffusing it, the Xbox Linux Project offers step-by-step installation instructions, with the end result being a system capable of using common peripherals such as a keyboard and mouse, scanner, printer, a “webcam and a DVD burner, connected to a VGA monitor; 100% compatible with a standard Linux PC, all PC (USB) hardware and PC software that works with Linux.” (Xbox Linux Project 2002) Such a system could have tremendous potential for technology literacy. Pairing an Xbox with Linux and OpenOffice.org, for example, would provide our students with essentially the same capability any of them would expect from a regular desktop computer. They could send and receive email, communicate using instant messaging, IRC, or newsgroup clients, and browse Internet sites just as they normally would. In fact, the overall browsing experience for Linux users is substantially better than that for most Windows users. Internet Explorer, the default browser on all systems running Windows-based operating systems, lacks basic features standard in virtually all competing browsers. Native blocking of “pop-up” advertisements is still not possible in Internet Explorer without the aid of a third-party utility. Tabbed browsing, which involves the ability to easily open and sort through multiple Web pages in the same window, often with a single mouse click, is also missing from Internet Explorer. The same can be said for a robust download manager, “find as you type”, and a variety of additional features. Mozilla, Netscape, Firefox, Konqueror, and essentially all other OSS browsers for Linux have these features. Of course, most of these browsers are also available for Windows, but Internet Explorer is still considered the standard browser for the platform. If the Xbox Linux Project becomes widely diffused, our students could edit and save Microsoft Word files in OpenOffice.org’s Writer program, and do the same with PowerPoint and Excel files in similar OpenOffice.org components.
They could access instructor comments originally created in Microsoft Word documents, and in turn could add their own comments and send the documents back to their instructors. They could even perform many functions not yet capable in Microsoft Office, including saving files in PDF or Flash format without needing Adobe’s Acrobat product or Macromedia’s Flash Studio MX. Additionally, by way of this project, the Xbox can also serve as “a Linux server for HTTP/FTP/SMB/NFS, serving data such as MP3/MPEG4/DivX, or a router, or both; without a monitor or keyboard or mouse connected.” (Xbox Linux Project 2003) In a very real sense, our students could use these inexpensive systems previously framed only within the context of entertainment, for educational purposes typically associated with computer-mediated learning. Problems: Control and Access The existing rhetoric of technological control surrounding current and emerging technologies appears to be stifling many of these efforts before they can even be brought to the public. This rhetoric of control is largely typified by overly-restrictive digital rights management (DRM) schemes antithetical to education, and the Digital Millennium Copyright Act (DMCA). Combined,both are currently being used as technical and legal clubs against these efforts. Microsoft, for example, has taken a dim view of any efforts to adapt the Xbox to Linux. Microsoft CEO Steve Ballmer, who has repeatedly referred to Linux as a cancer and has equated OSS as being un-American, stated, “Given the way the economic model works - and that is a subsidy followed, essentially, by fees for every piece of software sold - our license framework has to do that.” (Becker 2003) Since the Xbox is based on a subsidy model, meaning that Microsoft actually sells the hardware at a loss and instead generates revenue off software sales, Ballmer launched a series of concerted legal attacks against the Xbox Linux Project and similar efforts. In 2002, Nintendo, Sony, and Microsoft simultaneously sued Lik Sang, Inc., a Hong Kong-based company that produces programmable cartridges and “mod chips” for the PlayStation II, Xbox, and Game Cube. Nintendo states that its company alone loses over $650 million each year due to piracy of their console gaming titles, which typically originate in China, Paraguay, and Mexico. (GameIndustry.biz) Currently, many attempts to “mod” the Xbox required the use of such chips. As Lik Sang is one of the only suppliers, initial efforts to adapt the Xbox to Linux slowed considerably. Despite that fact that such chips can still be ordered and shipped here by less conventional means, it does not change that fact that the chips themselves would be illegal in the U.S. due to the anticircumvention clause in the DMCA itself, which is designed specifically to protect any DRM-wrapped content, regardless of context. The Xbox Linux Project then attempted to get Microsoft to officially sanction their efforts. They were not only rebuffed, but Microsoft then opted to hire programmers specifically to create technological countermeasures for the Xbox, to defeat additional attempts at installing OSS on it. Undeterred, the Xbox Linux Project eventually arrived at a method of installing and booting Linux without the use of mod chips, and have taken a more defiant tone now with Microsoft regarding their circumvention efforts. 
(Lettice 2002) They state that “Microsoft does not want you to use the Xbox as a Linux computer, therefore it has some anti-Linux-protection built in, but it can be circumvented easily, so that an Xbox can be used as what it is: an IBM PC.” (Xbox Linux Project 2003) Problems: Learning Curves and Usability In spite of the difficulties imposed by the combined technological and legal attacks on this project, it has succeeded at infiltrating this closed system with OSS. It has done so beyond the mere prototype level, too, as evidenced by the Xbox Linux Project now having both complete, step-by-step instructions available for users to modify their own Xbox systems, and an alternate plan catering to those who have the interest in modifying their systems, but not the time or technical inclinations. Specifically, this option involves users mailing their Xbox systems to community volunteers within the Xbox Linux Project, and basically having these volunteers perform the necessary software preparation or actually do the full Linux installation for them, free of charge (presumably not including shipping). This particular aspect of the project, dubbed “Users Help Users”, appears to be fairly new. Yet, it already lists over sixty volunteers capable and willing to perform this service, since “Many users don’t have the possibility, expertise or hardware” to perform these modifications. Amazingly enough, in some cases these volunteers are barely out of junior high school. One such volunteer stipulates that those seeking his assistance keep in mind that he is “just 14” and that when performing these modifications he “...will not always be finished by the next day”. (Steil 2003) In addition to this interesting if somewhat unusual level of community-driven support, there are currently several Linux-based options available for the Xbox. The two that are perhaps the most developed are GentooX, which is based of the popular Gentoo Linux distribution, and Ed’s Debian, based off the Debian GNU / Linux distribution. Both Gentoo and Debian are “seasoned” distributions that have been available for some time now, though Daniel Robbins, Chief Architect of Gentoo, refers to the product as actually being a “metadistribution” of Linux, due to its high degree of adaptability and configurability. (Gentoo 2004) Specifically, the Robbins asserts that Gentoo is capable of being “customized for just about any application or need. ...an ideal secure server, development workstation, professional desktop, gaming system, embedded solution or something else—whatever you need it to be.” (Robbins 2004) He further states that the whole point of Gentoo is to provide a better, more usable Linux experience than that found in many other distributions. Robbins states that: “The goal of Gentoo is to design tools and systems that allow a user to do their work pleasantly and efficiently as possible, as they see fit. Our tools should be a joy to use, and should help the user to appreciate the richness of the Linux and free software community, and the flexibility of free software. ...Put another way, the Gentoo philosophy is to create better tools. When a tool is doing its job perfectly, you might not even be very aware of its presence, because it does not interfere and make its presence known, nor does it force you to interact with it when you don’t want it to. 
The tool serves the user rather than the user serving the tool.” (Robbins 2004) There is also a so-called “live CD” Linux distribution suitable for the Xbox, called dyne:bolic, and an in-progress release of Slackware Linux, as well. According to the Xbox Linux Project, the only difference between the standard releases of these distributions and their Xbox counterparts is that “...the install process – and naturally the bootloader, the kernel and the kernel modules – are all customized for the Xbox.” (Xbox Linux Project, 2003) Of course, even if Gentoo is as user-friendly as Robbins purports, even if the Linux kernel itself has become significantly more robust and efficient, and even if Microsoft again drops the retail price of the Xbox, is this really a feasible solution in the classroom? Does the Xbox Linux Project have an army of 14 year olds willing to modify dozens, perhaps hundreds of these systems for use in secondary schools and higher education? Of course not. If such an institutional rollout were to be undertaken, it would require significant support from not only faculty, but Department Chairs, Deans, IT staff, and quite possible Chief Information Officers. Disk images would need to be customized for each institution to reflect their respective needs, ranging from setting specific home pages on web browsers, to bookmarks, to custom back-up and / or disk re-imaging scripts, to network authentication. This would be no small task. Yet, the steps mentioned above are essentially no different than what would be required of any IT staff when creating a new disk image for a computer lab, be it one for a Windows-based system or a Mac OS X-based one. The primary difference would be Linux itself—nothing more, nothing less. The institutional difficulties in undertaking such an effort would likely be encountered prior to even purchasing a single Xbox, in that they would involve the same difficulties associated with any new hardware or software initiative: staffing, budget, and support. If the institutional in question is either unwilling or unable to address these three factors, it would not matter if the Xbox itself was as free as Linux. An Open Future, or a Closed one? It is unclear how far the Xbox Linux Project will be allowed to go in their efforts to invade an essentially a proprietary system with OSS. Unlike Sony, which has made deliberate steps to commercialize similar efforts for their PlayStation 2 console, Microsoft appears resolute in fighting OSS on the Xbox by any means necessary. They will continue to crack down on any companies selling so-called mod chips, and will continue to employ technological protections to keep the Xbox “closed”. Despite clear evidence to the contrary, in all likelihood Microsoft continue to equate any OSS efforts directed at the Xbox with piracy-related motivations. Additionally, Microsoft’s successor to the Xbox would likely include additional anticircumvention technologies incorporated into it that could set the Xbox Linux Project back by months, years, or could stop it cold. Of course, it is difficult to say with any degree of certainty how this “Xbox 2” (perhaps a more appropriate name might be “Nextbox”) will impact this project. Regardless of how this device evolves, there can be little doubt of the value of Linux, OpenOffice.org, and other OSS to teaching and learning with technology. This value exists not only in terms of price, but in increased freedom from policies and technologies of control. 
New Linux distributions from Gentoo, Mandrake, Lycoris, Lindows, and other companies are just now starting to focus their efforts on Linux as a user-friendly, easy-to-use desktop operating system, rather than just a server or “techno-geek” environment suitable for advanced programmers and computer operators. While metaphorically opening the Xbox may not be for everyone, and may not be a suitable computing solution for all, I believe we as educators must promote and encourage such efforts whenever possible. I suggest this because I believe we need to exercise our professional influence and ultimately shape the future of technology literacy, both individually as faculty and collectively as departments, colleges, or institutions. Moran and Fitzsimmons-Hunter argue this very point in Writing Teachers, Schools, Access, and Change. One of the fundamental provisions they use to define “access” asserts that there must be a willingness for teachers and students to “fight for the technologies that they need to pursue their goals for their own teaching and learning.” (Taylor / Ward 160) Regardless of whether or not this debate is grounded in the “beige boxes” of the past, or the Xboxes of the present, much is at stake. Private corporations should not be in a position to control the manner in which we use legally purchased technologies, regardless of whether or not these technologies are then repurposed for literacy uses. I believe the exigency associated with this control, and the ongoing evolution of what is and is not a computer, dictates that we insert ourselves more actively into this discussion. We must take steps to provide our students with the best possible computer-mediated learning experience, however seemingly unorthodox the technological means might be, so that they may think critically, communicate effectively, and participate actively in society and in their future careers.

About the Author

Paul Cesarini is an Assistant Professor in the Department of Visual Communication & Technology Education, Bowling Green State University, Ohio. Email: pcesari@bgnet.bgsu.edu

Works Cited

Baron, Denis. “From Pencils to Pixels: The Stages of Literacy Technologies.” Passions Pedagogies and 21st Century Technologies. Hawisher, Gail E., and Cynthia L. Selfe, eds. Utah: Utah State University Press, 1999. 15–33.
Becker, David. “Ballmer: Mod Chips Threaten Xbox.” News.com. 21 Oct 2002. http://news.com.com/2100-1040-962797.php
Finni, Scott. “Desktop Linux Edges Into The Mainstream.” TechWeb. 8 Apr 2003. http://www.techweb.com/tech/software/20030408_software
http://xbox-linux.sourceforge.net/docs/debian.php
http://news.com.com/2100-1040-978957.html?tag=nl
http://archive.infoworld.com/articles/hn/xml/02/08/13/020813hnchina.xml
http://www.neoseeker.com/news/story/1062/
http://www.bookreader.co.uk
http://www.theregister.co.uk/content/archive/29439.html
http://gentoox.shallax.com/
http://ragib.hypermart.net/linux/
http://www.itworld.com/Comp/2362/LWD010424latinlinux/pfindex.html
http://www.xbox-linux.sourceforge.net
http://www.theregister.co.uk/content/archive/27487.html
http://www.theregister.co.uk/content/archive/26078.html
http://www.us.playstation.com/peripherals.aspx?id=SCPH-97047
http://www.techtv.com/extendedplay/reviews/story/0,24330,3356862,00.html
http://www.wired.com/news/business/0,1367,61984,00.html
http://www.gentoo.org/main/en/about.xml
http://www.gentoo.org/main/en/philosophy.xml
http://techupdate.zdnet.com/techupdate/stories/main/0,14179,2869075,00.html
http://xbox-linux.sourceforge.net/docs/usershelpusers.html
http://www.cnn.com/2002/TECH/fun.games/12/16/gamers.liksang/

Citation reference for this article

MLA Style
Cesarini, Paul. “‘Opening’ the Xbox.” M/C: A Journal of Media and Culture <http://www.media-culture.org.au/0406/08_Cesarini.php>.
APA Style
Cesarini, P. (2004, Jul 1). ‘Opening’ the Xbox. M/C: A Journal of Media and Culture, 7, <http://www.media-culture.org.au/0406/08_Cesarini.php>
18

Goggin, Gerard. "‘mobile text’". M/C Journal 7, no. 1 (1 January 2004). http://dx.doi.org/10.5204/mcj.2312.

Full text
Abstract
Mobile

In many countries, more people have mobile phones than they do fixed-line phones. Mobile phones are one of the fastest growing technologies ever, outstripping even the internet in many respects. With the advent and widespread deployment of digital systems, mobile phones were used by an estimated 1,158,254,300 people worldwide in 2002 (up from approximately 91 million in 1995), 51.4% of total telephone subscribers (ITU). One of the reasons for this is mobility itself: the ability for people to talk on the phone wherever they are. The communicative possibilities opened up by mobile phones have produced new uses and new discourses (see Katz and Aakhus; Brown, Green, and Harper; and Plant). Contemporary soundscapes now feature not only voice calls in previously quiet public spaces such as buses or restaurants but also the aural irruptions of customised polyphonic ringtones identifying whose phone is ringing by the tune downloaded. The mobile phone plays an important role in contemporary visual and material culture as fashion item and status symbol. Most tragically, one might point to the tableau of people in the twin towers of the World Trade Centre, or aboard a plane about to crash, calling their loved ones to say good-bye (Galvin). By contrast, one can look on at the bathos of Australian cricketer Shane Warne’s predilection for pressing his mobile phone into service to arrange wanted and unwanted assignations while on tour. In this article, I wish to consider another important and so far also under-theorised aspect of mobile phones: text. Of contemporary textual and semiotic systems, mobile text is only a recent addition. Yet it already produces millions of inscriptions each day, and promises to be of far-reaching significance.

Txt

Txt msg ws an acidnt. no 1 expcted it. Whn the 1st txt msg ws sent, in 1993 by Nokia eng stdnt Riku Pihkonen, the telcom cpnies thought it ws nt important. SMS – Short Message Service – ws nt considrd a majr pt of GSM. Like mny teks, the *pwr* of txt — indeed, the *pwr* of the fon — wz discvrd by users. In the case of txt mssng, the usrs were the yng or poor in the W and E. (Agar 105)

As Jon Agar suggests in Constant Touch, textual communication through the mobile phone was an afterthought. Mobile phones use radio waves, operating on a cellular system. The first such mobile service went live in Chicago in December 1978, in Sweden in 1981, in January 1985 in the United Kingdom (Agar), and in the mid-1980s in Australia. Mobile cellular systems allowed efficient sharing of scarce spectrum and improvements in handsets and quality, drawing on advances in science and engineering. In the first instance, technology designers, manufacturers, and mobile phone companies had been preoccupied with transferring telephone capabilities and culture to the mobile phone platform. With the growth in data communications from the 1960s onwards, consideration had been given to the data capabilities of the mobile phone. One difficulty, however, had been the poor quality and slow transfer rates of data communications over mobile networks, especially with first-generation analogue and early second-generation digital mobile phones. As the internet was widely and wildly adopted in the early to mid-1990s, mobile phone proponents looked at mimicking internet and online data services possibilities on their hand-held devices.
What could work on a computer screen, it was thought, could be reinvented in miniature for the mobile phone — and hence much money was invested in the wireless application protocol (WAP), which spectacularly flopped. The future of mobiles as a material support for text culture was not to lie, at first at least, in aping the world-wide web for the phone. It came from an unexpected direction: cheap, simple letters, spelling out short messages with strange new ellipses. SMS was built into the European Global System for Mobile (GSM) standard as an insignificant, additional capability. A number of telecommunications manufacturers thought so little of SMS as not to design or even offer the equipment needed (the servers, for instance) for the distribution of the messages. The character sets were limited, the keyboards small, the typeface displays rudimentary, and there was no acknowledgement that messages were actually received by the recipient. Yet SMS was cheap, and it offered one-to-one, or one-to-many, text communications that could be read at leisure, or more often, immediately. SMS was avidly taken up by young people, forming a new culture of media use. Sending a text message offered a relatively cheap and affordable alternative to the still expensive timed calls of voice mobile. In its early beginnings, mobile text can be seen as a subcultural activity. The text culture featured compressed, cryptic messages, with users devising their own abbreviations and grammar. One of the reasons young people took to texting was that it offered a tactic for consolidating and shaping their own shared culture, in distinction from the general culture dominated by their parents and other adults. Mobile texting became involved in a wider reworking of youth culture, involving other new media forms and technologies, and cultural developments (Butcher and Thomas). Another subculture in the vanguard of SMS was the Deaf ‘community’. Though Alexander Graham Bell, celebrated as the inventor of the telephone, very much had his hearing-impaired wife in mind in devising a new form of communication, Deaf people have been systematically left off the telecommunications network since this time. Deaf people pioneered an earlier form of text communications based on the Baudot standard, used for telex communications. Known as the teletypewriter (TTY), or telecommunications device for the Deaf (TDD) in the US, this technology allowed Deaf people to communicate with each other by connecting such devices to the phone network. The addition of a relay service (established in Australia in the mid-1990s after much government resistance) allows Deaf people to communicate with hearing people without TTYs (Goggin & Newell). Connecting TTYs to mobile phones has been a vexed issue, however, because the digital phone network in Australia does not allow compatibility. For this reason, and because of other features, Deaf people have become avid users of SMS (Harper). An especially favoured device in Europe has been the Nokia Communicator, with its hinged keyboard. The move from a ‘restricted’, ‘subcultural’ economy to a ‘general’ economy sees mobile texting become incorporated in the semiotic texture and prosaic practices of everyday life. Many users were already familiar with the conventions developed around electronic mail, with shorter, crisper messages sent and received — more conversation-like than other correspondence. Unlike phone calls, email is asynchronous.
The sender can respond immediately, and the reply will be received within seconds. However, they can also choose to reply at their leisure. Similarly, for the adept user, SMS offers considerable advantages over voice communications, because it makes textual production mobile. Writing and reading can take place wherever a mobile phone can be turned on: in the street, on the train, in the club, in the lecture theatre, in bed. The body writes differently too. Writing with a pen takes a finger and thumb. Typing on a keyboard requires between two and ten fingers. The mobile phone uses the ‘fifth finger’ — the thumb. Always too early, and too late, to speculate on contemporary culture (Morris), it is worth analyzing the textuality of mobile text. Theorists of media, especially television, have insisted on understanding the specific textual modes of different cultural forms. We are familiar with this imperative, and other methods of making visible and decentring structures of text, and the institutions which animate and frame them (whether author or producer; reader or audience; the cultural expectations encoded in genre; the inscriptions in technology). In formal terms, mobile text can be described as involving elision, great compression, and open-endedness. Its channels of communication physically constrain the composition of a very long single text message (a rough illustration of this constraint follows below). Imagine sending James Joyce’s Finnegans Wake in one text message. How long would it take to key in this exemplar of the disintegration of the cultural form of the novel? How long would it take to read? How would one navigate the text? Imagine sending the Courier-Mail or Financial Review newspaper over a series of text messages. The concept of the ‘news’, with all its cultural baggage, is being reconfigured by mobile text — more along the lines of the older technology of the telegraph, perhaps: a few words suffice to signify what is important. Mobile textuality, then, involves a radical fragmentation and unpredictable seriality of text lexia (Barthes). Sometimes a mobile text looks singular: saying ‘yes’ or ‘no’, or sending your name and ID number to obtain your high school or university results. Yet, like a telephone conversation, or any text perhaps, its structure is always predicated upon, and haunted by, the other. Its imagined reader always has a mobile phone too, little time, no fixed address (except that hailed by the network’s radio transmitter), and a finger poised to respond. Mobile text has structure and channels. Yet, like all text, our reading and writing of it reworks those fixities and destabilizes our ‘clear’ communication. After all, mobile textuality has a set of new pre-conditions and fragilities. It introduces new sorts of ‘noise’ and signal problems to annoy those theorists cleaving to the Shannon and Weaver linear model of communication; signals often drop out; there is a network confirmation (and message displayed) that text messages have been sent, but no system guarantee that they have been received. Our friend or service provider might text us back, but how do we know that they got our text message?

Commodity

We are familiar now with the pleasures of mobile text, the smile of alerting a friend to our arrival, celebrating good news, jilting a lover, making a threat, firing a worker, flirting and picking up. Text culture has a new vector of mobility, invented by its users, but now coveted and commodified by businesses who did not see it coming in the first place.
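To make the channel constraint referred to above concrete, here is a rough Python illustration. It assumes the standard GSM 7-bit limits (160 characters for a single SMS, 153 per segment once messages are concatenated); the sample message and the novel-length character count are invented for the purpose of the example.

```python
# Rough illustration of the SMS channel constraint discussed above. The 160-
# and 153-character figures are the standard GSM 7-bit limits; the sample
# message and the "novel-length" figure below are invented for illustration.

def sms_segments(text: str) -> int:
    """Return how many SMS segments a message would occupy, assuming the
    GSM 7-bit alphabet and no extended characters."""
    if len(text) <= 160:
        return 1
    segment_payload = 153  # concatenated messages lose 7 characters to a header
    return -(-len(text) // segment_payload)  # ceiling division


if __name__ == "__main__":
    msg = "running late, meet u at the station at 6 :)"
    print(len(msg), "characters ->", sms_segments(msg), "segment(s)")

    # A book-length text (assume roughly 1.5 million characters, a deliberately
    # hypothetical figure) would fragment into thousands of segments:
    print(sms_segments("x" * 1_500_000), "segments for a novel-length text")
```

The arithmetic alone suggests why mobile text pushed composition toward compression and ellipsis rather than long-form prose.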
Nimble in its keystrokes, rich in expressivity and cultural invention, but relatively rudimentary in its technical characteristics, mobile text culture has finally registered in the boardrooms of communications companies. Not only is SMS the preferred medium of mobile phone users to keep in touch with each other, SMS has insinuated itself into previously separate communication industries arenas. In 2002-2003 SMS became firmly established in television broadcasting. Finally, interactive television had arrived after many years of prototyping and being heralded. The keenly awaited back-channel for television arrives courtesy not of cable or satellite television, nor an extra fixed-phone line. It’s the mobile phone, stupid! Big Brother was not only a watershed in reality television, but also in convergent media. Less obvious perhaps than supplementary viewing, or biographies, or chat on Big Brother websites around the world was the use of SMS for voting. SMS is now routinely used by mainstream television channels for viewer feedback, contest entry, and program information. As well as its widespread deployment in broadcasting, mobile text culture has been the language of prosaic, everyday transactions. Slipping into a café at Bronte Beach in Sydney, why not pay your parking meter via SMS? You’ll even receive a warning when your time is up. The mobile is becoming the ‘electronic purse’, with SMS providing its syntax and sentences. The belated ingenuity of those fascinated by the economics of mobile text has also coincided with a technological reworking of its possibilities, with new implications for its semiotic possibilities. Multimedia messaging (MMS) has now been deployed, on capable digital phones (an instance of what has been called 2.5 generation [G] digital phones) and third-generation networks. MMS allows images, video, and audio to be communicated. At one level, this sort of capability can be user-generated, as in the popularity of mobiles that take pictures and send these to other users. Television broadcasters are also interested in the capability to send video clips of favourite programs to viewers. Not content with the revenues raised from millions of standard-priced SMS, and now MMS transactions, commercial participants along the value chain are keenly awaiting the deployment of what is called ‘premium rate’ SMS and MMS services. These services will involve the delivery of desirable content via SMS and MMS, and be priced at a premium. Products and services are likely to include: one-to-one textchat; subscription services (content delivered on handset); multi-party text chat (such as chat rooms); adult entertainment services; multi-part messages (such as text communications plus downloads); download of video or ringtones. In August 2003, one text-chat service charged $4.40 for a pair of SMS. Pwr At the end of 2003, we have scarcely registered the textual practices and systems in mobile text, a culture that sprang up in the interstices of telecommunications. It may be urgent that we do think about the stakes here, as SMS is being extended and commodified. There are obvious and serious policy issues in premium rate SMS and MMS services, and questions concerning the political economy in which these are embedded. Yet there are cultural questions too, with intricate ramifications. How do we understand the effects of mobile textuality, rewriting the telephone book for this new cultural form (Ronell). What are the new genres emerging? 
And what are the implications for cultural practice and policy? Does it matter, for instance, that new MMS and 3rd generation mobile platforms are not being designed or offered with any-to-any capabilities in mind: allowing any user to upload and send multimedia communications to any other? True, as the example of SMS shows, the inventiveness of users is difficult to foresee and predict, and so new forms of mobile text may have all sorts of relationships with content and communication. However, there are worrying signs of these developing mobile circuits being programmed for narrow channels of retail purchase of cultural products rather than open-source, open-architecture, publicly usable nodes of connection.

Works Cited

Agar, Jon. Constant Touch: A Global History of the Mobile Phone. Cambridge: Icon, 2003.
Barthes, Roland. S/Z. Trans. Richard Miller. New York: Hill & Wang, 1974.
Brown, Barry, Green, Nicola, and Harper, Richard, eds. Wireless World: Social, Cultural, and Interactional Aspects of the Mobile Age. London: Springer Verlag, 2001.
Butcher, Melissa, and Thomas, Mandy, eds. Ingenious: Emerging Youth Cultures in Urban Australia. Melbourne: Pluto, 2003.
Galvin, Michael. ‘September 11 and the Logistics of Communication.’ Continuum: Journal of Media and Cultural Studies 17.3 (2003): 303–13.
Goggin, Gerard, and Newell, Christopher. Digital Disability: The Social Construction of Disability in New Media. Lanham, MD: Rowman & Littlefield, 2003.
Harper, Phil. ‘Networking the Deaf Nation.’ Australian Journal of Communication 30.3 (2003), in press.
International Telecommunications Union (ITU). ‘Mobile Cellular, Subscribers per 100 People.’ World Telecommunication Indicators <http://www.itu.int/ITU-D/ict/statistics/> accessed 13 October 2003.
Katz, James E., and Aakhus, Mark, eds. Perpetual Contact: Mobile Communication, Private Talk, Public Performance. Cambridge: Cambridge U P, 2002.
Morris, Meaghan. Too Soon, Too Late: History in Popular Culture. Bloomington and Indianapolis: U of Indiana P, 1998.
Plant, Sadie. On the Mobile: The Effects of Mobile Telephones on Social and Individual Life. <http://www.motorola.com/mot/documents/0,1028,296,00.pdf> accessed 5 October 2003.
Ronell, Avital. The Telephone Book: Technology—Schizophrenia—Electric Speech. Lincoln: U of Nebraska P, 1989.

Citation reference for this article

MLA Style
Goggin, Gerard. “‘mobile text’.” M/C: A Journal of Media and Culture <http://www.media-culture.org.au/0401/03-goggin.php>.
APA Style
Goggin, G. (2004, Jan 12). ‘mobile text’. M/C: A Journal of Media and Culture, 7, <http://www.media-culture.org.au/0401/03-goggin.php>
19

Potts, Graham. "‘I Want to Pump You Up!’ Lance Armstrong, Alex Rodriguez, and the Biopolitics of Data- and Analogue-Flesh". M/C Journal 16, no. 6 (6 November 2013). http://dx.doi.org/10.5204/mcj.726.

Full text
Abstract
The copyrighting of digital augmentations (our data-flesh), their privatization and ownership by others from a vast distance that is simultaneously instantly telematically surmountable, started simply enough. It was the initially innocuous corporatization of language and semiotics that started the deeper ontological flip, which placed the posthuman bits and parts over the posthuman that thought it was running things. The posthumans in question, myself included, didn't help things much when, for instance, we all clicked an unthinking or unconcerned "yes" to Facebook® or Gmail®'s "terms and conditions of use" policies that give them the real ownership and final say over those data-based augments of sociality, speech, and memory. Today there is growing popular concern (or at least acknowledgement) over the surveillance of these augmentations by government, especially after the Edward Snowden NSA leaks. The same holds true for the dataveillance of data-flesh (i.e. Gmail® or Facebook® accounts) by private corporations for reasons of profit and/or at the behest of governments for reasons of "national security." While drawing a picture of this (bodily) state, of the intrusion through language of brands into our being and their coterminous policing of intelligible and iterative body boundaries and extensions, I want to address the next step in copyrighted augmentation, one that is current practice in professional sport, and part of the burgeoning "anti-aging" industry, with rewriting of cellular structure and hormonal levels, for a price, on the open market. What I want to problematize is the contradiction between the rhetorical moralizing against upgrading the analogue-flesh, especially with respect to celebrity sports stars like Lance Armstrong and Alex Rodriguez, all the while the "anti-aging" industry does the same without censure. Indeed, it does so within the context of the contradictory social messaging and norms that our data-flesh and electric augmentations receive to constantly upgrade. I pose the question of the contradiction between the messages given to our analogue-flesh and data-flesh in order to examine the specific site of commentary on professional sports stars and their practices, but also to point to the ethical gap that exists not just for (legal) performance-enhancing drugs (PEDs), but also to show the link to privatized and copyrighted genomic testing, the dataveillance of this information, and subsequent augmentations that may be undertaken because of the results.

Copyrighted Language and Semiotics as Gateway Drug

The corporatization of language and semiotics came about with an intrusion of exclusively held signs from the capitalist economy into language. This makes sense if one wants to make surplus value greater: stamp a name onto something, especially a base commodity like a food product, and build up the name of that stamp, however one will, so that that name has perceived value in and of itself, and then charge as much as one can for it. Such is the story of the lack of real correlation between the price of Starbucks Coffee® and coffee as a commodity, set by Starbucks® on the basis of the cultural worth of the symbols and signs associated with it, rather than by what they pay for the labor and production costs prior to its branding. But what happens to these legally protected stamps once they start acting as more than just a sign and referent to a subsection of a specific commodity or thing? Once the stamp has worth and a life that is socially determined?
What happens when these stamps get verbed, adjectived, and nouned? Naomi Klein, in the book that the New York Times referred to as a "movement bible" for the anti-globalization forces of the late 1990s said "logos, by the force of ubiquity, have become the closest thing we have to an international language, recognized and understood in many more places than English" (xxxvi). But there is an inherent built-in tension of copyrighted language and semiotics that illustrates the coterminous problems with data- and analogue-flesh augments. "We have almost two centuries' worth of brand-name history under our collective belt, coalescing to create a sort of global pop-cultural Morse code. But there is just one catch: while we may all have the code implanted in our brains, we're not really allowed to use it" (Klein 176). Companies want their "brands to be the air you breathe in - but don't dare exhale" or otherwise try to engage in a two-way dialogue that alters the intended meaning (Klein 182). Private signs power first-world and BRIC capitalism, language, and bodies. I do not have a coffee in the morning; I have Starbucks®. I do not speak on a cellular phone; I speak iPhone®. I am not using my computer right now; I am writing MacBook Air®. I do not look something up, search it, or research it; I Google® it. Klein was writing before the everyday uptake of sophisticated miniaturized and mobile computing and communication devices. With the digitalization of our senses and electronic limbs this viral invasion of language became material, effecting both our data- and analogue-flesh. The trajectory? First we used it; then we wore it as culturally and socially demarcating clothing; and finally we no longer used copyrighted speech terms: it became an always-present augmentation, an adjective to the lexicon body of language, and thereby out of democratic semiotic control. Today Twitter® is our (140 character limited) medium of speech. Skype® is our sense of sight, the way we have "real" face-to-face communication. Yelp® has extended our sense of taste and smell through restaurant reviews. The iPhone® is our sense of hearing. And OkCupid® and/or Grindr® and other sites and apps have become the skin of our sexual organs (and the site where they first meet). Today, love at first sight happens through .jpeg extensions; our first sexual experience ranked on a scale of risk determined by the type of video feed file format used: was it "protected" enough to stop its "spread"? In this sense the corporatization of language and semiotics acted as the gateway drug to corporatized digital-flesh; from use of something that is external to us to an augmentation that is part of us and indeed may be in excess of us or any notion of a singular liberal subject.Replacement of Analogue-Flesh? Arguably, this could be viewed as the coming to be of the full replacement of the fleshy analogue body by what are, or started as digital augmentations. Is this what Marshall McLuhan meant when he spoke of the "electronic exteriorization of the central nervous system" through the growing complexity of our "electric extensions"? McLuhan's work that spoke of the "global village" enabled by new technologies is usually read as a euphoric celebration of the utopic possibilities of interconnectivity. 
What these misreadings overlook is the darker side of his thought, where the "cultural probe" picks up the warning signals of the change to come, so that a Christian inspired project, a cultural Noah’s Ark, can be created to save the past from the future to come (Coupland). Jean Baudrillard, Paul Virilio, and Guy Debord have analyzed this replacement of the real and the changes to the relations between people—one I am arguing is branded/restricted—by offering us the terms simulacrum (Baudrillard), substitution (Virilio), and spectacle (Debord). The commonality which links Baudrillard and Virilio, but not Debord, is that the former two do not explicitly situate their critique as being within the loss of the real that they then describe. Baudrillard expresses that he can have a 'cool detachment' from his subject (Forget Foucault/Forget Baudrillard), while Virilio's is a Catholic moralist's cry lamenting the disappearance of the heterogeneous experiential dimensions in transit along the various axes of space and time. What differentiates Debord is that he had no qualms positioning his own person and his text, The Society of the Spectacle (SotS), as within its own subject matter - a critique that is limited, and acknowledged as such, by the blindness of its own inescapable horizon.This Revolt Will Be Copyrighted Yet today the analogue - at the least - performs a revolt in or possibly in excess of the spectacle that seeks its containment. How and at what site is the revolt by the analogue-flesh most viewable? Ironically, in the actions of celebrity professional sports stars and the Celebrity Class in general. Today it revolts against copyrighted data-flesh with copyrighted analogue-flesh. This is even the case when the specific site of contestation is (at least the illusion of) immortality, where the runaway digital always felt it held the trump card. A regimen of Human Growth Hormone (HGH) and other PEDs purports to do the same thing, if not better, at the cellular level, than the endless youth paraded in the unaging photo employed by the Facebook or Grindr Bodies®. But with the everyday use and popularization of drugs and enhancement supplements like HGH and related PEDs there is something more fundamental at play than the economic juggernaut that is the Body Beautiful; more than fleshy jealousy of Photoshopped® electronic skins. This drug use represents the logical extension of the ethics that drive our tech-wired lives. We are told daily to upgrade: our sexual organs (OkCupid® or Grindr®) for a better, more accurate match; our memory (Google® services) for largeness and safe portability; and our hearing and sight (iPhone® or Skype®) for increase connectivity, engaging the "real" (that we have lost). These upgrades are controlled and copyrighted, but that which grows the economy is an especially favored moral act in an age of austerity. Why should it be surprising, then, that with the economic backing of key players of Google®—kingpin of the global for-profit dataveillance racket—that for $99.95 23andMe® will send one a home DNA test kit, which once returned will be analyzed for genetic issues, with a personalized web-interface, including "featured links." Analogue-flesh fights back with willing copyrighted dataveillance of its genetic code. The test and the personalized results allow for augmentations of the Angelina Jolie type: private testing for genetic markers, a double mastectomy provided by private healthcare, followed by copyrighted replacement flesh. 
This is where we find the biopolitics of data- and analogue-flesh, lead forth, in an ironic turn, by the Celebrity Class, whom depend for their income on the lives of their posthuman bodies. This is a complete reversal of the course Debord charts out for them: The celebrity, the spectacular representation of a living human being, embodies this banality by embodying the image of a possible role. Being a star means specializing in the seemingly lived; the star is the object of identification with the shallow seeming life that has to compensate for the fragmented productive specializations which are actually lived. (SotS) While the electronic global village was to have left the flesh-and-blood as waste, today there is resistance by the analogue from where we would least expect it - attempts to catch up and replant itself as ontologically prior to the digital through legal medical supplementation; to make the posthuman the posthuman. We find the Celebrity Class at the forefront of the resistance, of making our posthuman bodies as controlled augmentations of a posthuman. But there is a definite contradiction as well, specifically in the press coverage of professional sports. The axiomatic ethical and moral sentiment of our age to always upgrade data-flesh and analogue-flesh is contradicted in professional sports by the recent suspensions of Lance Armstrong and Alex Rodriguez and the political and pundit critical commentary on their actions. Nancy Reagan to the Curbside: An Argument for Lance Armstrong and Alex Rodriguez's "Just Say Yes to Drugs" Campaign Probably to the complete shock of most of my family, friends, students, and former lovers who may be reading this, I actually follow sports reporting with great detail and have done so for years. That I never speak of any sports in my everyday interactions, haven't played a team or individual sport since I could speak (and thereby use my voice to inform my parents that I was refusing to participate), and even decline amateur or minor league play, like throwing a ball of any kind at a family BBQ, leaves me to, like Judith Butler, "give an account of oneself." And this accounting for my sports addiction is not incidental or insignificant with respect either to how the posthuman present can move from a state of posthumanism to one of posthumanism, nor my specific interpellation into (and excess) in either of those worlds. Recognizing that I will not overcome my addiction without admitting my problem, this paper is thus a first-step public acknowledgement: I have been seeing "Dr. C" for a period of three years, and together, through weekly appointments, we have been working through this issue of mine. (Now for the sake of avoiding the cycle of lying that often accompanies addiction I should probably add that Dr. C is a chiropractor who I see for back and nerve damage issues, and the talk therapy portion, a safe space to deal with the sports addiction, was an organic outgrowth of the original therapy structure). My data-flesh that had me wired in and sitting all the time had done havoc to the analogue-flesh. My copyrighted augments were demanding that I do something to remedy a situation where I was unable to be sitting and wired in all the time. Part of the treatment involved the insertion of many acupuncture needles in various parts of my body, and then having an electric current run through them for a sustained period of time. 
Ironically, as it was the wired augmentations that demanded this, due to my immobility at this time - one doesn't move with acupuncture needles deep within the body - I was forced away from my devices and into unmediated conversation with Dr. C about sports, celebrity sports stars, and the recent (argued) infractions by Armstrong and Rodriguez. Now I say "argued" because in the first place are what A-Rod and Armstrong did, or are accused of doing, the use of PEDs, HGH, and all the rest (cf. Lupica; Thompson, and Vinton) really a crime? Are they on their way, or are there real threats of jail and criminal prosecution? And in the most important sense, and despite all the rhetoric, are they really going against prevailing social norms with respect to medical enhancement? No, no, and no. What is peculiar about the "witch-hunt" of A-Rod and Armstrong - their words - is that we are undertaking it in the first place, while high-end boutique medical clinics (and internet pharmacies) offer the same treatment for analogue-flesh. Fixes for the human in posthuman; ways of keeping the human up to speed; arguably the moral equivalent, if done so with free will, of upgrading the software for ones iOS device. If the critiques of Baudrillard and Virilio are right, we seem to find nothing wrong with crippling our physical bodies and social skills by living through computers and telematic technologies, and obsess over the next upgrade that will make us (more) faster and quicker (than the other or others), while we righteously deny the same process to the flesh for those who, in Debord's description, are the most complicit in the spectacle, to the supposedly most posthuman of us - those that have become pure spectacle (Debord), pure simulation (Baudrillard), a total substitution (Virilio). But it seems that celebrities, and sports celebrities in specific haven't gone along for the ride of never-ending play of their own signifiers at the expense of doing away with the real; they were not, in Debord's words, content with "specializing in the seemingly lived"; they wanted, conversely, to specialize in the most maximally lived flesh, right down to cellular regeneration towards genetic youth, which is the strongest claim in favor of taking HGH. It looks like they were prepared to, in the case of Armstrong, engage in the "most sophisticated, professionalized and successful doping program that sport has ever seen" in the name of the flesh (BBC). But a doping program that can, for the most part, be legally obtained as treatment, and in the same city as A-Rod plays in and is now suspended for his "crimes" to boot (NY Vitality). This total incongruence between what is desired, sought, and obtained legally by members of their socioeconomic class, and many classes below as well, and is a direct outgrowth of the moral and ethical axiomatic of the day is why A-Rod and Armstrong are so bemused, indignant, and angry, if not in a state of outright denial that they did anything that was wrong, even while they admit, explicitly, that yes, they did what they are accused of doing: taking the drugs. Perhaps another way is needed to look at the unprecedentedly "harsh" and "long" sentences of punishment handed out to A-Rod and Armstrong. The posthuman governing bodies of the sports of the society of the spectacle in question realize that their spectacle machines are being pushed back at. A real threat because it goes with the grain of where the rest of us, or those that can buy in at the moment, are going. 
And this is where the talk therapy for my sports addiction with Dr. C falls into the story. I realized that the electrified needles were telling me that I too should put the posthuman back in control of my damaged flesh; engage in a (medically copyrighted) piece of performance philosophy and offset some of the areas of possible risk that through restricted techne 23andMe® had (arguably) found. Dr. C and I were peeved with A-Rod and Armstrong not for what they did, but what they didn't tell us. We wanted better details than half-baked admissions of moral culpability. We wanted exact details on what they'd done to keep up to their digital-flesh. Their media bodies were cultural probes, full in view, while their flesh bodies, priceless lab rats, are hidden from view (and likely to remain so due to ongoing litigation). These were, after all, big money cover-ups of (likely) the peak of posthuman science, and the lab results are now hidden behind an army of sports federations lawyers, and agents (and A-Rod's own army since he still plays); posthuman progress covered up by posthuman rules, sages, and agents of manipulation. Massive posthuman economies of spectacle, simulation, or substitution of the real putting as much force as they can bare on resurgent posthuman flesh - a celebrity flesh those economies, posthuman economies, want to see as utterly passive like Debord, but whose actions are showing unexpected posthuman alignment with the flesh. Why are the centers of posthumanist power concerned? Because once one sees that A-Rod and Armstrong did it, once one sees that others are doing the same legally without a fuss being made, then one can see that one can do the same; make flesh-and-blood keep up, or regrow and become more organically youthful, while OkCupid® or Grindr® data-flesh gets stuck with the now lagging Photoshopped® touchups. Which just adds to my desire to get "pumped up"; add a little of A-Rod and Armstrong's concoction to my own routine; and one of a long list of reasons to throw Nancy Reagan under the bus: to "just say yes to drugs." A desire that is tempered by the recognition that the current limits of intelligibility and iteration of subjects, the work of defining the bodies that matter that is now set by copyrighted language and copyrighted electric extensions is only being challenged within this society of the spectacle by an act that may give a feeling of unease for cause. This is because it is copyrighted genetic testing and its dataveillance and manipulation through copyrighted medical technology - the various branded PEDs, HGH treatments, and their providers - that is the tool through which the flesh enacts this biopolitical "rebellion."References Baudrillard, Jean. Forget Foucault/Forget Baudrillard. Trans Nicole Dufresne. Los Angeles: Semiotext(e), 2007. ————. Simulations. Trans. Paul Foss, Paul Patton and Philip Beitchman. Cambridge: Semiotext(e), 1983. BBC. "Lance Armstong: Usada Report Labels Him 'a Serial Cheat.'" BBC Online 11 Oct. 2012. 1 Dec. 2013 ‹http://www.bbc.co.uk/sport/0/cycling/19903716›. Butler, Judith. Giving an Account of Oneself. New York: Fordham University Press, 2005. Clark, Taylor. Starbucked: A Double Tall Tale of Caffeine, Commerce, and Culture. New York: Back Bay, 2008. Coupland, Douglas. Marshall McLuhan. Toronto: Penguin Books, 2009. Debord, Guy. Society of the Spectacle. Detroit: Black & Red: 1977. Klein, Naomi. No Logo: Taking Aim at the Brand Bullies. Toronto: Knopf Canada, 1999. Lupica, Mike. 
"Alex Rodriguez Beginning to Look a Lot like Lance Armstrong." NY Daily News. 6 Oct. 2013. 1 Dec. 2013 ‹http://www.nydailynews.com/sports/baseball/lupica-a-rod-tour-de-lance-article-1.1477544›. McLuhan, Marshall. Understanding Media: The Extensions of Man. New York: McGraw-Hill Book Company, 1964. NY Vitality. "Testosterone Treatment." NY Vitality. 1 Dec. 2013 ‹http://vitalityhrt.com/hgh.html›. Thompson, Teri, and Nathaniel Vinton. "What Does Alex Rodriguez Hope to Accomplish by Following Lance Armstrong's Legal Blueprint?" NY Daily News 5 Oct. 2013. 1 Dec. 2013 ‹http://www.nydailynews.com/sports/i-team/a-rod-hope-accomplish-lance-blueprint-article-1.1477280›. Virilio, Paul. Speed and Politics. Trans. Mark Polizzotti. New York: Semiotext(e), 1986.
20

Jethani, Suneel. "New Media Maps as ‘Contact Zones’: Subjective Cartography and the Latent Aesthetics of the City-Text". M/C Journal 14, no. 5 (18 October 2011). http://dx.doi.org/10.5204/mcj.421.

Full text
Abstract
Any understanding of social and cultural change is impossible without a knowledge of the way media work as environments. —Marshall McLuhan. What is visible and tangible in things represents our possible action upon them. —Henri Bergson. Introduction: Subjective Maps as ‘Contact Zones’ Maps feature heavily in a variety of media; they appear in textbooks, on television, in print, and on the screens of our handheld devices. The production of cartographic texts is a process that is imbued with power relations and bound up with the production and reproduction of social life (Pinder 405). Mapping involves choices as to what information is and is not included. In their organisation, categorisation, modeling, and representation maps show and they hide. Thus “the idea that a small number of maps or even a single (and singular) map might be sufficient can only apply in a spatialised area of study whose own self-affirmation depends on isolation from its context” (Lefebvre 85–86). These isolations determine the way we interpret the physical, biological, and social worlds. The map can be thought of as a schematic for political systems within a confined set of spatial relations, or as a container for political discourse. Mapping contributes equally to the construction of experiential realities as to the representation of physical space, which also contains the potential to incorporate representations of temporality and rhythm to spatial schemata. Thus maps construct realities as much as they represent them and coproduce space as much as the political identities of people who inhabit them. Maps are active texts and have the ability to promote social change (Pickles 146). It is no wonder, then, that artists, theorists and activists alike readily engage in the conflicted praxis of mapping. This critical engagement “becomes a method to track the past, embody memories, explain the unexplainable” and manifest the latent (Ibarra 66). In this paper I present a short case study of Bangalore: Subjective Cartographies a new media art project that aims to model a citizen driven effort to participate in a critical form of cartography, which challenges dominant representations of the city-space. I present a critical textual analysis of the maps produced in the workshops, the artist statements relating to these works used in the exhibition setting, and statements made by the participants on the project’s blog. This “praxis-logical” approach allows for a focus on the project as a space of aggregation and the communicative processes set in motion within them. In analysing such projects we could (and should) be asking questions about the functions served by the experimental concepts under study—who has put it forward? Who is utilising it and under what circumstances? Where and how has it come into being? How does discourse circulate within it? How do these spaces as sites of emergent forms of resistance within global capitalism challenge traditional social movements? How do they create self-reflexive systems?—as opposed to focusing on ontological and technical aspects of digital mapping (Renzi 73). In de-emphasising the technology of digital cartography and honing in on social relations embedded within the text(s), this study attempts to complement other studies on digital mapping (see Strom) by presenting a case from the field of politically oriented tactical media. 
Bangalore: Subjective Cartographies has been selected for analysis, in this exploration of media as “zone.” It goes some way to incorporating subjective narratives into spatial texts. This is a three-step process where participants tapped into spatial subjectivities by data collection or environmental sensing led by personal reflection or ethnographic enquiry, documenting and geo-tagging their findings in the map. Finally they engaged an imaginative or ludic process of synthesising their data in ways not inherent within the traditional conventions of cartography, such as the use of sound and distortion to explicate the intensity of invisible phenomena at various coordinates in the city-space. In what follows I address the “zone” theme by suggesting that if we apply McLuhan’s notion of media as environment together with Henri Bergson’s assertion that visibility and tangibility constitutes the potential for action to digital maps, projects such as Bangalore: Subjective Cartographies constitute a “contact zone.” A type of zone where groups come together at the local level and flows of discourse about art, information communication, media, technology, and environment intersect with local histories and cultures within the cartographic text. A “contact zone,” then, is a site where latent subjectivities are manifested and made potentially politically potent. “Contact zones,” however, need not be spaces for the aggrieved or excluded (Renzi 82), as they may well foster the ongoing cumulative politics of the mundane capable of developing into liminal spaces where dominant orders may be perforated. A “contact zone” is also not limitless and it must be made clear that the breaking of cartographic convention, as is the case with the project under study here, need not be viewed as resistances per se. It could equally represent thresholds for public versus private life, the city-as-text and the city-as-social space, or the zone where representations of space and representational spaces interface (Lefebvre 233), and culture flows between the mediated and ideated (Appadurai 33–36). I argue that a project like Bangalore: Subjective Cartographies demonstrates that maps as urban text form said “contact zones,” where not only are media forms such as image, text, sound, and video are juxtaposed in a singular spatial schematic, but narratives of individual and collective subjectivities (which challenge dominant orders of space and time, and city-rhythm) are contested. Such a “contact zone” in turn may not only act as a resource for citizens in the struggle of urban design reform and a democratisation of the facilities it produces, but may also serve as a heuristic device for researchers of new media spatiotemporalities and their social implications. Critical Cartography and Media Tactility Before presenting this brief illustrative study something needs to be said of the context from which Bangalore: Subjective Cartographies has arisen. Although a number of Web 2.0 applications have come into existence since the introduction of Google Maps and map application program interfaces, which generate a great deal of geo-tagged user generated content aimed at reconceptualising the mapped city-space (see historypin for example), few have exhibited great significance for researchers of media and communications from the perspective of building critical theories relating to political potential in mediated spaces. The expression of power through mapping can be understood from two perspectives. 
The first—attributed largely to the Frankfurt School—seeks to uncover the potential of a society that is repressed by capitalist co-opting of the cultural realm. This perspective sees maps as a potential challenge to, and means of providing emancipation from, existing power structures. The second, less concerned with dispelling false ideologies, deals with the politics of epistemology (Crampton and Krygier 14). According to Foucault, power was not applied from the top down but manifested laterally in a highly diffused manner (Foucault 117; Crampton and Krygier 14). Foucault’s privileging of the spatial and epistemological aspects of power and resistance complements the Frankfurt School’s resistance to oppression in the local. Together the two perspectives orient power relative to spatial and temporal subjectivities, and thus fit congruently into cartographic conventions. In order to make sense of these practices the post-oppositional character of tactical media maps should be located within an economy of power relations where resistance is never outside of the field of forces but rather is its indispensable element (Renzi 72). Such exercises in critical cartography are strongly informed by the critical politico-aesthetic praxis of political/art collective The Situationist International, whose maps of Paris were inherently political. The Situationist International incorporated appropriated texts into, and manipulated, existing maps to explicate city-rhythms and intensities to construct imaginative and alternate representations of the city. Bangalore: Subjective Cartographies adopts a similar approach. The artists’ statement reads: We build our subjective maps by combining different methods: photography, film, and sound recording; […] to explore the visible and invisible […] city; […] we adopt psycho-geographical approaches in exploring territory, defined as the study of the precise effects of the geographical environment, consciously developed or not, acting directly on the emotional behaviour of individuals. The project proposals put forth by workshop participants also draw heavily from the Situationists’s A New Theatre of Operations for Culture. A number of Situationist theories and practices feature in the rationale for the maps created in the Bangalore Subjective Cartographies workshop. For example, the Situationists took as their base a general notion of experimental behaviour and permanent play where rationality was approached on the basis of whether or not something interesting could be created out of it (Wark 12). The dérive is the rapid passage through various ambiences with a playful-constructive awareness of the psychographic contours of a specific section of space-time (Debord). The dérive can be thought of as an exploration of an environment without preconceptions about the contours of its geography, but rather a focus on the reality of inhabiting a place. Détournement involves the re-use of elements from recognised media to create a new work with meaning often opposed to the original. Psycho-geography is taken to be the subjective ambiences of particular spaces and times. The principles of détournement and psycho-geography imply a unitary urbanism, which hints at the potential of achieving in environments what may be achieved in media with détournement. Bangalore: Subjective Cartographies carries Situationist praxis forward by attempting to exploit certain properties of information digitalisation to formulate textual representations of unitary urbanism. 
Bangalore: Subjective Cartographies is demonstrative of a certain media tactility that exists more generally across digital-networked media ecologies, and it channels this to political ends. This tactility of media is best understood through the textual properties awarded by the process and logic of digitalisation described in Lev Manovich’s Language of New Media. These properties are: numerical representation in the form of binary code, which allows for the reification of spatial data in a uniform format that can be stored and retrieved in-silico as opposed to in-situ; manipulation of this code by the use of algorithms, which renders the scales and lines of maps open to alteration; modularity, which enables the incorporation of other textual objects into the map whilst maintaining each incorporated item’s individual identity; the removal, to some degree, of human interaction in the translation of environmental data into cartographic form (whilst other properties listed here enable human interaction with the cartographic text); and the nature of digital code itself, which allows changes to accumulate incrementally, creating infinite potential for refinements (Manovich 49–63). (A schematic sketch of these properties follows below.)

The Subjective Mapping of Bangalore

Bangalore is an interesting site for such a project given the recent and rapid evolution of its media infrastructure. As a “media city” Bangalore is young: the first television sets appeared there at some point in the early 1980s. The first Internet Service Provider (ISP), which served corporate clients only, commenced operating a decade later and then offered dial-up services to domestic clients in the mid-1990s. At present, however, Bangalore has the largest number of broadband Internet connections in India. With the increasing convergence of computing and telecommunications with traditional forms of media such as film and photography, Bangalore demonstrates well what Scott McQuire terms a media-architecture complex, the core infrastructure for “contact zones” (vii). Bangalore: Subjective Cartographies was a workshop initiated by French artists Benjamin Cadon and Ewen Cardonnet. It was conducted with a number of students at the Srishti School of Art, Design and Technology in November and December 2009. Using Metamap.fr (an online cartographic tool that makes it possible to add multimedia content such as texts, video, photos, sounds, links, location points, and paths to digital maps), students were asked to, in groups of two or three, collect and consult data on ‘felt’ life in Bangalore using an ethnographic, transverse geographic, thematic, or temporal approach. The objective of the project was to model a citizen-driven effort to subvert dominant cartographic representations of the city. In doing so, the project and this paper posit that there is potential for such methods to be adopted to form new literacies of cartographic media and to render the cartographic imaginary politically potent. The participants’ brief outlined two themes. The first was the visible and symbolic city, where participants were asked to investigate the influence of the urban environment on the behaviours and sensations of its inhabitants, and to research and collect signifiers of traditional and modern worlds. The invisible city brief asked participants to consider the latent environment and link it to human behaviour—in this case electromagnetic radiation linked to the city’s telecommunications and media infrastructure was to be specifically investigated.
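The textual properties listed above (numerical representation, algorithmic manipulation, modularity, incremental refinement) can be loosely pictured as a data structure. The Python sketch below is a hypothetical illustration only; it is not Metamap.fr's actual data model, and the coordinates and file names are placeholders rather than data from the workshop.

```python
# Hypothetical illustration of the digital-map properties described above:
# spatial data held as numbers, independent media items bundled into an entry
# while keeping their own identity (modularity), and algorithmic manipulation
# of the resulting collection. This is not Metamap.fr's actual data model.
from dataclasses import dataclass, field
from typing import List


@dataclass
class MediaItem:
    kind: str       # "photo", "sound", "video", "text"
    uri: str        # each item keeps its own identity and address
    note: str = ""


@dataclass
class MapEntry:
    lat: float      # spatial data reified as numbers (numerical representation)
    lon: float
    title: str
    media: List[MediaItem] = field(default_factory=list)


def within(entries: List[MapEntry], lat_min: float, lat_max: float,
           lon_min: float, lon_max: float) -> List[MapEntry]:
    """Algorithmic manipulation: filter entries by a bounding box."""
    return [e for e in entries
            if lat_min <= e.lat <= lat_max and lon_min <= e.lon <= lon_max]


if __name__ == "__main__":
    # Placeholder coordinates and file names, not the workshop's actual data.
    entry = MapEntry(12.97, 77.59, "Street vendor water source",
                     [MediaItem("photo", "img/vendor01.jpg"),
                      MediaItem("sound", "audio/microbes01.wav",
                                "webcam-microscope image rendered as audio")])
    print(len(within([entry], 12.9, 13.0, 77.5, 77.7)), "entry in bounding box")
```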
The Visible and Symbolic City During British rule many Indian cities functioned as dual entities where flow of people and commodities circulated between localised enclaves and the centralised British-built areas. Mirroring this was the dual mode of administration where power was shared between elected Indian legislators and appointed British officials (Hoselitz 432–33). Reflecting on this diarchy leads naturally to questions about the politics of civic services such as the water supply, modes of public communication and instruction, and the nature of the city’s administration, distribution, and manufacturing functions. Workshop participants approached these issues in a variety of ways. In the subjective maps entitled Microbial Streets and Water Use and Reuse, food and water sources of street vendors are traced with the aim to map water supply sources relative to the movements of street vendors operating in the city. Images of the microorganisms are captured using hacked webcams as makeshift microscopes. The data was then converted to audio using Pure Data—a real-time graphical programming environment for the processing audio, video and graphical data. The intention of Microbial Streets is to demonstrate how mapping technologies could be used to investigate the flows of food and water from source to consumer, and uncover some of the latencies involved in things consumed unhesitatingly everyday. Typographical Lens surveys Russell Market, an older part of the city through an exploration of the aesthetic and informational transformation of the city’s shop and street signage. In Ethni City, Avenue Road is mapped from the perspective of local goldsmiths who inhabit the area. Both these maps attempt to study the convergence of the city’s dual function and how the relationship between merchants and their customers has changed during the transition from localised enclaves, catering to the sale of particular types of goods, to the development of shopping precincts, where a variety of goods and services can be sought. Two of the project’s maps take a spatiotemporal-archivist approach to the city. Bangalore 8mm 1940s uses archival Super 8 footage and places digitised copies on the map at the corresponding locations of where they were originally filmed. The film sequences, when combined with satellite or street-view images, allow for the juxtaposition of present day visions of the city with those of the 1940s pre-partition era. Chronicles of Collection focuses on the relationship between people and their possessions from the point of view of the object and its pathways through the city in space and time. Collectors were chosen for this map as the value they placed on the object goes beyond the functional and the monetary, which allowed the resultant maps to access and express spatially the layers of meaning a particular object may take on in differing contexts of place and time in the city-space. The Invisible City In the expression of power through city-spaces, and by extension city-texts, certain circuits and flows are ossified and others rendered latent. Raymond Williams in Politics and Letters writes: however dominant a social system may be, the very meaning of its domination involves a limitation or selection of the activities it covers, so that by definition it cannot exhaust all social experience, which therefore always potentially contains space for alternative acts and alternative intentions which are not yet articulated as a social institution or even project. 
(252) The artists’ statement puts forward this possible response, an exploration of the latent aesthetics of the city-space: In this sense then, each device that enriches our perception for possible action on the real is worthy of attention. Even if it means the use of subjective methods, that may not be considered ‘evidence’. However, we must admit that any subjective investigation, when used systematically and in parallel with the results of technical measures, could lead to new possibilities of knowledge. Electromagnetic City maps the city’s sources of electromagnetic radiation, primarily from mobile phone towers, but also as a by-product of our everyday use of technologies: televisions, mobile phones, Internet Wi-Fi, computer screens, and handheld devices. This map explores issues around how the city’s inhabitants hear, see, feel, and represent things that are a part of our environment but invisible, and asks: are there ways that the intangible can be oriented spatially? The intensity of electromagnetic radiation being emitted from these sources, which are thought to negatively influence the meditation of ancient sadhus (sages), also features in this map. This data was collected by taking electromagnetic flow meters into the suburb of Yelhanka (which is also of interest because it houses the largest milk dairy in the state of Karnataka) in a Situationist-like dérive and then incorporated back into Metamap. Signal to Noise looks at the struggle between residents over the placement of mobile phone towers around the city. It does so from the perspectives of people who seek information about tower placement because they are concerned about mobile phone signal quality, and others concerned about the proximity of this infrastructure to their homes due to potential negative health effects. Interview footage was taken (using a mobile phone) and manipulated using Pure Data to distort the visual and audio quality of the footage in proportion to the fidelity of the mobile phone signal in the geographic area where the footage was taken. Conclusion The “contact zone” operating in Bangalore: Subjective Cartographies, and the underlying modes of social enquiry that make it valuable, create potential for the contestation of new forms of polity that may in turn influence urban administration and result in more representative facilities of, and for, city-spaces and their citizenry. Robert Hassan argues that: This project would mean using tactical media to produce new spaces and temporalities that are explicitly concerned with working against the unsustainable “acceleration of just about everything” that our present neoliberal configuration of the network society has generated, showing that alternatives are possible and workable—in ones job, home life, family life, showing that digital [spaces and] temporality need not mean the unerring or unbending meter of real-time [and real city-space] but that an infinite number of temporalities [and subjectivities of space-time] can exist within the network society to correspond with a diversity of local and contextual cultures, societies and polities. (174) As maps and locative motifs begin to feature more prominently in media, analyses such as the one discussed in this paper may allow researchers to develop theoretical approaches to studying newer forms of media. References Appadurai, Arjun. Modernity at Large: Cultural Dimensions of Globalisation. Minneapolis: U of Minnesota P, 1996.
“Bangalore: Subjective Cartographies.” 25 July 2011 ‹http://bengaluru.labomedia.org/page/2/›. Bergson, Henri. Creative Evolution. New York: Henry Holt and Company, 1911. Crampton, Jeremy W., and John Krygier. “An Introduction to Critical Cartography.” ACME: An International E-Journal for Critical Geography 4 (2006): 11–13. Chardonnet, Ewen, and Benjamin Cadon. “Semaphore.” 25 July 2011 ‹http://semaphore.blogs.com/semaphore/spectral_investigations_collective/›. Debord, Guy. “Theory of the Dérive.” 25 July 2011 ‹http://www.bopsecrets.org/SI/2.derive.htm›. Foucault, Michel. Remarks on Marx. New York: Semiotext(e), 1991. Hassan, Robert. The Chronoscopic Society: Globalization, Time and Knowledge in the Networked Economy. New York: Lang, 2003. “Historypin.” 4 Aug. 2011 ‹http://www.historypin.com/›. Hoselitz, Bert F. “A Survey of the Literature on Urbanization in India.” India’s Urban Future. Ed. Roy Turner. Berkeley: U of California P, 1961. 425–43. Ibarra, Anna. “Cosmologies of the Self.” Elephant 7 (2011): 66–96. Lefebvre, Henri. The Production of Space. Oxford: Blackwell, 1991. Lovink, Geert. Dark Fibre. Cambridge: MIT Press, 2002. Manovich, Lev. The Language of New Media. Cambridge: MIT Press, 2000. “Metamap.fr.” 3 Mar. 2011 ‹http://metamap.fr/›. McLuhan, Marshall, and Quentin Fiore. The Medium Is the Massage. London: Penguin, 1967. McQuire, Scott. The Media City: Media, Architecture and Urban Space. London: Sage, 2008. Pickles, John. A History of Spaces: Cartographic Reason, Mapping and the Geo-Coded World. London: Routledge, 2004. Pinder, David. “Subverting Cartography: The Situationists and Maps of the City.” Environment and Planning A 28 (1996): 405–27. “Pure Data.” 6 Aug. 2011 ‹http://puredata.info/›. Renzi, Alessandra. “The Space of Tactical Media.” Digital Media and Democracy: Tactics in Hard Times. Ed. Megan Boler. Cambridge: MIT Press, 2008. 71–100. Situationist International. “A New Theatre of Operations for Culture.” 6 Aug. 2011 ‹http://www.blueprintmagazine.co.uk/index.php/urbanism/reading-the-situationist-city/›. Strom, Timothy Erik. “Space, Cyberspace and the Interface: The Trouble with Google Maps.” M/C Journal 14.3 (2011). 6 Aug. 2011 ‹http://journal.media-culture.org.au/index.php/mcjournal/article/viewArticle/370›. Wark, McKenzie. 50 Years of Recuperation of the Situationist International. New York: Princeton Architectural Press, 2008. Williams, Raymond. Politics and Letters: Interviews with New Left Review. London: New Left, 1979.
21

Hill, Benjamin Mako. "Revealing Errors". M/C Journal 10, no. 5 (1 October 2007). http://dx.doi.org/10.5204/mcj.2703.

Full text
Abstract
Introduction In The World Is Not a Desktop, Mark Weiser, the principal scientist and manager of the computer science laboratory at Xerox PARC, stated that “a good tool is an invisible tool.” Weiser cited eyeglasses as an ideal technology because with spectacles, he argued, “you look at the world, not the eyeglasses.” Although Weiser’s work at PARC played an important role in the creation of the field of “ubiquitous computing”, his ideal is widespread in many areas of technology design. Through repetition, and by design, technologies blend into our lives. While technologies, and communications technologies in particular, have a powerful mediating impact, many of the most pervasive effects are taken for granted by most users. When technology works smoothly, its nature and effects are invisible. But technologies do not always work smoothly. A tiny fracture or a smudge on a lens renders glasses quite visible to the wearer. The Microsoft Windows “Blue Screen of Death” on a subway in Seoul (Photo credit Wikimedia Commons). Anyone who has seen a famous “Blue Screen of Death”—the iconic signal of a Microsoft Windows crash—on a public screen or terminal knows how errors can thrust the technical details of previously invisible systems into view. Nobody knows that their ATM runs Windows until the system crashes. Of course, the operating system chosen for a sign or bank machine has important implications for its users. Windows, or an alternative operating system, creates affordances and imposes limitations. Faced with a crashed ATM, a consumer might ask herself whether, with its rampant viruses and security holes, she should really trust an ATM running Windows. Technologies make previously impossible actions possible and many actions easier. In the process, they frame and constrain possible actions. They mediate. Communication technologies allow users to communicate in new ways but constrain communication in the process. In a very fundamental way, communication technologies define what their users can say, to whom they say it, and how they can say it—and what, to whom, and how they cannot. Humanities scholars understand the power, importance, and limitations of technology and technological mediation. Weiser hypothesised that “to understand invisibility the humanities and social sciences are especially valuable, because they specialise in exposing the otherwise invisible.” However, technology activists, like those at the Free Software Foundation (FSF) and the Electronic Frontier Foundation (EFF), understand this power of technology as well. Largely constituted by technical members, both organisations, like humanists studying technology, have struggled to communicate their messages to a less-technical public. Before one can argue for the importance of individual control over who owns technology, as both FSF and EFF do, an audience must first appreciate the power and effect that their technology and its designers have. To understand the power that technology has over its users, users must first see the technology in question. Most users do not. Errors are under-appreciated and under-utilised in their ability to reveal technology around us. By painting a picture of how certain technologies facilitate certain mistakes, one can better show how technology mediates. By revealing errors, scholars and activists can reveal previously invisible technologies and their effects more generally.
Errors can reveal technology—and its power—and can do so in ways that users of technologies confront daily and understand intimately. The Misprinted Word Catalysed by Elizabeth Eisenstein, the last 35 years of print history scholarship provides both a richly described example of technological change and an analysis of its effects. Unemphasised in discussions of the revolutionary social, economic, and political impact of printing technologies is the fact that, especially in the early days of a major technological change, the artifacts of print are often quite similar to those produced by a new printing technology’s predecessors. From a reader’s purely material perspective, books are books; the press that created the book is invisible or irrelevant. Yet, while the specifics of print technologies are often hidden, they are often exposed by errors. While the shift from a scribal to print culture revolutionised culture, politics, and economics in early modern Europe, it was near-invisible to early readers (Eisenstein). Early printed books were the same books printed in the same way; the early press was conceived as a “mechanical scriptorium.” Shown below, Gutenberg’s black-letter Gothic typeface closely reproduced a scribal hand. Of course, handwriting and type were easily distinguishable; errors and irregularities were inherent in relatively unsteady human hands. Side-by-side comparisons of the hand-copied Malmesbury Bible (left) and the black letter typeface in the Gutenberg Bible (right) (Photo credits Wikimedia Commons & Wikimedia Commons). Printing, of course, introduced its own errors. As pages were produced en masse from a single block of type, so were mistakes. While a scribe would re-read and correct errors as they transcribed a second copy, no printing press would. More revealingly, print opened the door to whole new categories of errors. For example, printers setting type might confuse an inverted n with a u—and many did. Of course, no scribe made this mistake. An inverted u is only confused with an n due to the technological possibility of letter flipping in movable type. As print moved from Monotype and Linotype machines, to computerised typesetting, and eventually to desktop publishing, an accidentally flipped u retreated back into the realm of impossibility (Mergenthaler, Swank). Most readers do not know how their books are printed. The output of letterpresses, Monotypes, and laser printers is carefully designed to be near-uniform. To the degree that they succeed, the technologies themselves, and the specific nature of the mediation, become invisible to readers. But each technology is revealed in errors like the upside-down u, the output of a mispoured slug of Monotype, or streaks of toner from a laser printer. Changes in printing technologies after the press have also had profound effects. The creation of hot-metal Monotype and Linotype, for example, affected decisions to print and reprint and changed how and when printing was done. New mass printing technologies allowed for the printing of works that, for economic reasons, would not have been published before. While personal computers, desktop publishing software, and laser printers make publishing accessible in new ways, they also place real limits on what can be printed. Print runs of a single copy—unheard of before the invention of the typewriter—are commonplace. But computers, like Linotypes, render certain formatting and presentation difficult or impossible.
Errors provide a space where the particulars of printing make technologies visible in their products. An inverted u exposes a human typesetter, a letterpress, and a hasty error in judgment. Encoding errors and botched smart quotation marks—a ? in place of a “—are only possible with a computer. Streaks of toner are only produced by malfunctioning laser printers. Dust can reveal the photocopied provenance of a document. Few readers reflect on the power or importance of the particulars of the technologies that produced their books. In part, this is because the technologies are so hidden behind their products. Through errors, these technologies and the power they have over the “what” and “how” of printing are exposed. For scholars and activists attempting to expose exactly this, errors are an under-exploited opportunity. Typing Mistyping While errors have a profound effect on media consumption, their effect is equally important, and perhaps more strongly felt, when they occur during media creation. Like all mediating technologies, input technologies make it easier or more difficult to create certain messages. It is, for example, much easier to write a letter with a keyboard than it is to type a picture. It is much more difficult to write in languages with frequent use of accents on an English language keyboard than it is on a European keyboard. But while input systems like keyboards have a powerful effect on the nature of the messages they produce, they are invisible to recipients of messages. Except when the messages contain errors. Typists are much more likely to confuse letters in close proximity on a keyboard than people writing by hand or setting type. As keyboard layouts switch between countries and languages, new errors appear. The following is from a personal email: hez, if there’s not a subversion server handz, can i at least have the root password for one of our machines? I read through the instructions for setting one up and i think i could do it. [emphasis added] The email was quickly typed and, in two places, confuses the character y with z. Separated by five characters on QWERTY keyboards, these two letters are not easily mistaken or mistyped. However, their positions are swapped on German and English keyboards. In fact, the author was an American typing in a Viennese Internet cafe. The source of his repeated error was his false expectations—his familiarity with one keyboard layout in the context of another. The error revealed the context, both keyboard layouts, and his dependence on a particular keyboard. With the error, the keyboard, previously invisible, was exposed as an inter-mediator with its own particularities and effects. This effect does not change in mobile devices, where new input methods have introduced powerful new ways of communicating. SMS messages on mobile phones are constrained in length to 160 characters. The result has been new styles of communication using SMS that some have gone so far as to call a new language or dialect called TXTSPK (Thurlow). Yet while these effects are obvious to social scientists, the profound effects of text message technologies on communication are unfelt by most users, who simply see the messages themselves. More visible is the fact that input from a phone keypad has opened the door to errors which reveal input technology and its effects.
In the standard method of SMS input, users press or hold buttons to cycle through the letters associated with numbers on a numeric keypad (e.g., 2 represents A, B, and C; to produce a single C, a user presses 2 three times). This system makes it easy to confuse characters based on a shared association with a single number. Tegic’s popular T9 software allows users to type in words by pressing the number associated with each letter of each word in quick succession. T9 uses a database to pick the most likely word that maps to that sequence of numbers. While the system allows for quick input of words and phrases on a phone keypad, it also allows for the creation of new types of errors. A user trying to type me might accidentally write of because both words are mapped to the combination of 6 and 3 and because of is a more common word in English. T9 might confuse snow and pony while no human, and no other input method, would. Users composing SMSs are constrained by the technology and its design. The fact that text messages must be short and the difficult nature of phone-based input methods have led to unique and highly constrained forms of communication like TXTSPK (Sutherland). Yet, while the influence of these input technologies is profound, users are rarely aware of it. Errors provide a situation where the particularities of a technology become visible and an opportunity for users to connect with scholars exposing the effect of technology and activists arguing for increased user control. Google News Denuded As technologies become more complex, they often become more mysterious to their users. Even when such systems are not invisible, users know little about the way that complex technologies work, both because they become accustomed to them and because the technological specifics are hidden inside companies, behind web interfaces, within compiled software, and in “black boxes” (Latour). Errors can help reveal these technologies and expose their nature and effects. One such system, Google News, aggregates news stories and is designed to make it easy to read multiple stories on the same topic. The system works with “topic clusters” that attempt to group articles covering the same news event. The more items in a news cluster (especially from popular sources) and the closer together they appear in time, the higher confidence Google’s algorithms have in the “importance” of a story and the higher the likelihood that the cluster of stories will be listed on the Google News page. While the decision to include or remove individual sources is made by humans, the act of clustering is left to Google’s software. Because computers cannot “understand” the text of the articles being aggregated, clustering happens less intelligently. We know that clustering is primarily based on comparison of shared text and keywords—especially proper nouns. This process is aided by the widespread use of wire services like the Associated Press and Reuters, which provide article text used, at least in part, by large numbers of news sources. Google has been reluctant to divulge the implementation details of its clustering engine, but users have been able to deduce the description above, and much more, by watching how Google News works and, more importantly, how it fails. For example, we know that Google News looks for shared text and keywords because text that deviates heavily from other articles is not “clustered” appropriately—even if it is extremely similar semantically.
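Although Google discloses little about its clustering engine, the keyword-overlap behaviour deduced above can be illustrated with a deliberately crude sketch. The following Python fragment is a toy illustration only, not Google's implementation: the keyword rule, the similarity threshold, and the sample headlines are assumptions made for the example. It groups headlines that share enough distinctive terms, so stories that repeat proper nouns such as "Iran" and "nuclear" fall into one cluster, while a story on an unrelated topic, or one rewritten to avoid those terms, stands alone.

def keywords(text):
    # Crude stand-in for keyword comparison: keep lowercased words longer
    # than three characters, so proper nouns and topic words survive the cut.
    return {w.strip('.,"\'').lower() for w in text.split() if len(w) > 3}

def overlap(a, b):
    # Jaccard similarity between the two keyword sets.
    ka, kb = keywords(a), keywords(b)
    return len(ka & kb) / len(ka | kb) if ka | kb else 0.0

def cluster(headlines, threshold=0.15):
    # Greedily place each headline in the first group whose seed story
    # shares enough keywords with it; otherwise start a new group.
    groups = []
    for headline in headlines:
        for group in groups:
            if overlap(headline, group[0]) >= threshold:
                group.append(headline)
                break
        else:
            groups.append([headline])
    return groups

stories = [
    "Iran offers to share nuclear technology",
    "Iran threatens to hide nuclear program",
    "Local council approves new cycling budget",
]
print(cluster(stories))

Run on these sample headlines, the two Iran stories are grouped on the strength of their shared terms alone, which mirrors the shallow, term-driven matching that the article infers from watching Google News succeed and fail.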
In this vein, blogger Philipp Lenssen gives advice to news sites that want to stand out in Google News: Of course, stories don’t have to be exactly the same to be matched—but if they are too different, they’ll also not appear in the same group. If you want to stand out in Google News search results, make your article be original, or else you’ll be collapsed into a cluster where you may or may not appear on the first results page. While a human editor has no trouble understanding that an article using different terms (and different, but equally appropriate, proper nouns) is discussing the same issue, the software behind Google News is more fragile. As a result, Google News fails to connect linked stories that no human editor would miss. A section of a screenshot of Google News clustering aggregation showcasing what appears to be an error. But just as importantly, Google News can connect stories that most human editors will not. Google News’s clustering of two stories by Al Jazeera on how “Iran offers to share nuclear technology,” and by the Guardian on how “Iran threatens to hide nuclear program,” seems at first glance to be a mistake. Hiding and sharing are diametrically opposed and mutually exclusive. But while it is true that most human editors would not cluster these stories, it is less clear that it is, in fact, an error. Investigation shows that the two articles are about the release of a single statement by the government of Iran on the same day. The spin is significant enough, and significantly different, that it could be argued that the aggregation of those stories was incorrect—or not. The error reveals details about the way that Google News works and about its limitations. It reminds readers of Google News of the technological nature of the mediation of their news and gives them a taste of the type of selection—and mis-selection—that goes on out of view. Users of Google News might be prompted to compare the system to other, more human methods. Ultimately it can remind them of the power that Google News (and humans in similar roles) has over our understanding of news and the world around us. These are all familiar arguments to social scientists of technology and echo the arguments of technology activists. By focusing on similar errors, both groups can connect to users less used to thinking in these terms. Conclusion Reflecting on the role of the humanities in a world of increasingly invisible technology for the blog “Humanities, Arts, Science and Technology Advanced Collaboratory,” Duke English professor Cathy Davidson writes: When technology is accepted, when it becomes invisible, [humanists] really need to be paying attention. This is one reason why the humanities are more important than ever. Analysis—qualitative, deep, interpretive analysis—of social relations, social conditions, in a historical and philosophical perspective is what we do so well. The more technology is part of our lives, the less we think about it, the more we need rigorous humanistic thinking that reminds us that our behaviours are not natural but social, cultural, economic, and with consequences for us all. Davidson concisely points out the strength and importance of the humanities in evaluating technology. She is correct; users of technologies do not frequently analyse the social relations, conditions, and effects of the technology they use. Activists at the EFF and FSF argue that this lack of critical perspective leads to exploitation of users (Stallman).
But users, and the technology they use, are only susceptible to this type of analysis when they understand the applicability of these analyses to their technologies. Davidson leaves open the more fundamental question: How will humanists first reveal technology so that they can reveal its effects? Scholars and activists must do more than contextualise and describe technology. They must first render invisible technologies visible. As the revealing nature of errors in printing systems, input systems, and “black box” software systems like Google News shows, errors represent a point where invisible technology is already visible to users. As such, these errors, and countless others like them, can be treated as the tip of an iceberg. They represent an important opportunity for humanists and activists to further expose technologies and the beginning of a process that aims to reveal much more. References Davidson, Cathy. “When Technology Is Invisible, Humanists Better Get Busy.” HASTAC (2007). 1 September 2007 ‹http://www.hastac.org/node/779›. Eisenstein, Elisabeth L. The Printing Press as an Agent of Change: Communications and Cultural Transformations in Early-Modern Europe. Cambridge, UK: Cambridge University Press, 1979. Latour, Bruno. Pandora’s Hope: Essays on the Reality of Science Studies. Harvard UP, 1999. Lenssen, Philipp. “How Google News Indexes.” Google Blogoscoped. 2006. 1 September 2007 ‹http://blogoscoped.com/archive/2006-07-28-n49.html›. Mergenthaler, Ottmar. The Biography of Ottmar Mergenthaler, Inventor of the Linotype. New ed. New Castle, Delaware: Oak Knoll Books, 1989. Monotype: A Journal of Composing Room Efficiency. Philadelphia: Lanston Monotype Machine Co, 1913. Stallman, Richard M. Free Software, Free Society: Selected Essays of Richard M. Stallman. Boston, Massachusetts: Free Software Foundation, 2002. Sutherland, John. “Cn u txt?” Guardian Unlimited. London, UK. 2002. Swank, Alvin Garfield, and United Typothetae America. Linotype Mechanism. Chicago, Illinois: Dept. of Education, United Typothetae America, 1926. Thurlow, C. “Generation Txt? The Sociolinguistics of Young People’s Text-Messaging.” Discourse Analysis Online 1.1 (2003). Weiser, Mark. “The World Is Not a Desktop.” ACM Interactions 1.1 (1994): 7–8.
22

Brabazon, Tara. "Freedom from Choice". M/C Journal 7, no. 6 (1 January 2005). http://dx.doi.org/10.5204/mcj.2461.

Full text
Abstract
On May 18, 2003, the Australian Minister for Education, Brendon Nelson, appeared on the Channel Nine Sunday programme. The Yoda of political journalism, Laurie Oakes, attacked him personally and professionally. He disclosed to viewers that the Minister for Education, Science and Training had suffered a false start in his education, enrolling in one semester of an economics degree that was never completed. The following year, he commenced a medical qualification and went on to become a practicing doctor. He did not pay fees for any of his University courses. When reminded of these events, Dr Nelson became agitated, and revealed information not included in the public presentation of the budget of that year, including a ‘cap’ on HECS-funded places of five years for each student. He justified such a decision with the cliché that Australia’s taxpayers do not want “professional students completing degree after degree.” The Minister confirmed that the primary – and perhaps the only – task for university academics was to ‘train’ young people for the workforce. The fact that nearly 50% of students in some Australian Universities are over the age of twenty five has not entered his vision. He wanted young people to complete a rapid degree and enter the workforce, to commence paying taxes and the debt or loan required to fund a full fee-paying place. Now – nearly two years after this interview and with the Howard government blessed with a new mandate – it is time to ask how this administration will order education and value teaching and learning. The curbing of the time available to complete undergraduate courses during their last term in office makes plain the Australian Liberal Government’s stance on formal, publicly-funded lifelong learning. The notion that a student/worker can attain all required competencies, skills, attributes, motivations and ambitions from a single degree is an assumption of the new funding model. It is also significant to note that while attention is placed on the changing sources of income for universities, there have also been major shifts in the pattern of expenditure within universities, focusing on branding, marketing, recruitment, ‘regional’ campuses and off-shore courses. Similarly, the short-term funding goals of university research agendas encourage projects required by industry, rather than socially inflected concerns. There is little inevitable about teaching, research and education in Australia, except that the Federal Government will not create a fully-funded model for lifelong learning. The task for those of us involved in – and committed to – education in this environment is to probe the form and rationale for a (post) publicly funded University. This short paper for the ‘order’ issue of M/C explores learning and teaching within our current political and economic order. Particularly, I place attention on the synergies to such an order via phrases like the knowledge economy and the creative industries. To move beyond the empty promises of just-in-time learning, on-the-job training, graduate attributes and generic skills, we must reorder our assumptions and ask difficult questions of those who frame the context in which education takes place. For the term of your natural life Learning is a big business. Whether discussing the University of the Third Age, personal development courses, self help bestsellers or hard-edged vocational qualifications, definitions of learning – let alone education – are expanding. 
Concurrent with this growth, governments are reducing centralized funding and promoting alternative revenue streams. The diversity of student interests – or to use the language of the time, client’s learning goals – is transforming higher education into more than the provision of undergraduate and postgraduate degrees. The expansion of the student body beyond the 18-25 age group and the desire to ‘service industry’ has reordered the form and purpose of formal education. The number of potential students has expanded extraordinarily. As Lee Bash realized Today, some estimates suggest that as many as 47 percent of all students enrolled in higher education are over 25 years old. In the future, as lifelong learning becomes more integrated into the fabric of our culture, the proportion of adult students is expected to increase. And while we may not yet realize it, the academy is already being transformed as a result. (35) Lifelong learning is the major phrase and trope that initiates and justifies these changes. Such expansive economic opportunities trigger the entrepreneurial directives within universities. If lifelong learning is taken seriously, then the goals, entry standards, curriculum, information management policies and assessments need to be challenged and changed. Attention must be placed on words and phrases like ‘access’ and ‘alternative entry.’ Even more consideration must be placed on ‘outcomes’ and ‘accountability.’ Lifelong learning is a catchphrase for a change in purpose and agenda. Courses are developed from a wide range of education providers so that citizens can function in, or at least survive, the agitation of the post-work world. Both neo-liberal and third way models of capitalism require the labeling and development of an aspirational class, a group who desires to move ‘above’ their current context. Such an ambiguous economic and social goal always involves more than the vocational education and training sector or universities, with the aim being to seamlessly slot education into a ‘lifestyle.’ The difficulties with this discourse are two-fold. Firstly, how effectively can these aspirational notions be applied and translated into a real family and a real workplace? Secondly, does this scheme increase the information divide between rich and poor? There are many characteristics of an effective lifelong learner including great personal motivation, self esteem, confidence and intellectual curiosity. In a double shifting, change-fatigued population, the enthusiasm for perpetual learning may be difficult to summon. With the casualization of the post-Fordist workplace, it is no surprise that policy makers and employers are placing the economic and personal responsibility for retraining on individual workers. Instead of funding a training scheme in the workplace, there has been a devolving of skill acquisition and personal development. Through the twentieth century, and particularly after 1945, education was the track to social mobility. The difficulty now – with degree inflation and the loss of stable, secure, long-term employment – is that new modes of exclusion and disempowerment are being perpetuated through the education system. Field recognized that “the new adult education has been embraced most enthusiastically by those who are already relatively well qualified.” (105) This is a significant realization. Motivation, meta-learning skills and curiosity are increasingly being rewarded when found in the already credentialed, empowered workforce. 
Those already in work undertake lifelong learning. Adult education operates well for members of the middle class who are doing well and wish to do better. If success is individualized, then failure is also cast on the self, not the social system or policy. The disempowered are blamed for their own conditions and ‘failures.’ The concern, through the internationalization of the workforce, technological change and privatization of national assets, is that failure in formal education results in social exclusion and immobility. Besides being forced into classrooms, those who do not wish to learn have few options in a learning society. Those who ‘choose’ not to be a part of the national project of individual improvement, increased market share, company competitiveness and international standards are not relevant to the economy. But there is a personal benefit – that may have long term political consequences – from being ‘outside’ society. Perhaps the best theorist of the excluded is not sourced from a University, but from the realm of fictional writing. Irvine Welsh, author of the landmark Trainspotting, has stated that What we really need is freedom from choice … People who are in work have no time for anything else but work. They have no mental space to accommodate anything else but work. Whereas people who are outside the system will always find ways of amusing themselves. Even if they are materially disadvantaged they’ll still find ways of coping, getting by and making their own entertainment. (145-6) A blurring of work and learning, and work and leisure, may seem to create a borderless education, a learning framework uninhibited by curriculum, assessment or power structures. But lifelong learning aims to place as many (national) citizens as possible in ‘the system,’ striving for success or at least a pay increase which will facilitate the purchase of more consumer goods. Through any discussion of workplace training and vocationalism, it is important to remember those who choose not to choose life, who choose something else, who will not follow orders. Everybody wants to work The great imponderable for complex economic systems is how to manage fluctuations in labour and the market. The unstable relationship between need and supply necessitates flexibility in staffing solutions, and short-term supplementary labour options. When productivity and profit are the primary variables through which to judge successful management, then the alignments of education and employment are viewed and skewed through specific ideological imperatives. The library profession is an obvious occupation that has confronted these contradictions. It is ironic that the occupation that orders knowledge is experiencing a volatile and disordered workplace. In the past, it had been assumed that librarians hold a degree while technicians do not, and that technicians would not be asked to perform – unsupervised – the same duties as librarians. Obviously, such distinctions are increasingly redundant. Training packages, structured through competency-based training principles, have ensured technicians and librarians share knowledge systems which are taught through incremental stages. Mary Carroll recognized the primary questions raised through this change. If it is now the case that these distinctions have disappeared do we need to continue to draw them between professional and para-professional education?
Does this mean that all sectors of the education community are in fact learning/teaching the same skills but at different levels so that no unique set of skills exist? (122) With education reduced to skills, thereby discrediting generalist degrees, the needs of industry have corroded the professional standards and stature of librarians. Certainly, the abilities of library technicians are finally being valued, but it is too convenient that one of the few professions dominated by women has suffered a demeaning of knowledge into competency. Lifelong learning, in this context, has collapsed high level abilities in information management into bite sized chunks of ‘skills.’ The ideology of lifelong learning – which is rarely discussed – is that it serves to devalue prior abilities and knowledges into an ever-expanding imperative for ‘new’ skills and software competencies. For example, ponder the consequences of Hitendra Pillay and Robert Elliott’s words: The expectations inherent in new roles, confounded by uncertainty of the environment and the explosion of information technology, now challenge us to reconceptualise human cognition and develop education and training in a way that resonates with current knowledge and skills. (95) Neophilliacal urges jut from their prose. The stress on ‘new roles,’ and ‘uncertain environments,’ the ‘explosion of information technology,’ ‘challenges,’ ‘reconceptualisations,’ and ‘current knowledge’ all affirms the present, the contemporary, and the now. Knowledge and expertise that have taken years to develop, nurture and apply are not validated through this educational brief. The demands of family, work, leisure, lifestyle, class and sexuality stretch the skin taut over economic and social contradictions. To ease these paradoxes, lifelong learning should stress pedagogy rather than applications, and context rather than content. Put another way, instead of stressing the link between (gee wizz) technological change and (inevitable) workplace restructuring and redundancies, emphasis needs to be placed on the relationship between professional development and verifiable technological outcomes, rather than spruiks and promises. Short term vocationalism in educational policy speaks to the ordering of our public culture, requiring immediate profits and a tight dialogue between education and work. Furthering this logic, if education ‘creates’ employment, then it also ‘creates’ unemployment. Ironically, in an environment that focuses on the multiple identities and roles of citizens, students are reduced to one label – ‘future workers.’ Obviously education has always been marinated in the political directives of the day. The industrial revolution introduced a range of technical complexities to the workforce. Fordism necessitated that a worker complete a task with precision and speed, requiring a high tolerance of stress and boredom. Now, more skills are ‘assumed’ by employers at the time that workplaces are off-loading their training expectations to the post-compulsory education sector. Therefore ‘lifelong learning’ is a political mask to empower the already empowered and create a low-level skill base for low paid workers, with the promise of competency-based training. Such ideologies never need to be stated overtly. A celebration of ‘the new’ masks this task. Not surprisingly therefore, lifelong learning has a rich new life in ordering creative industries strategies and frameworks. 
Codifying the creative The last twenty years have witnessed an expanding jurisdiction and justification of the market. As part of Tony Blair’s third way, the creative industries and the knowledge economy became catchwords to demonstrate that cultural concerns are not only economically viable but a necessity in the digital, post-Fordist, information age. Concerns with intellectual property rights, copyright, patents, and ownership of creative productions predominate in such a discourse. Described by Charles Leadbeater as Living on Thin Air, this new economy is “driven by new actors of production and sources of competitive advantage – innovation, design, branding, know-how – which are at work on all industries.” (10) Such market imperatives offer both challenges and opportunity for educationalists and students. Lifelong learning is a necessary accoutrement to the creative industries project. Learning cities and communities are the foundations for design, music, architecture and journalism. In British policy, and increasingly in Queensland, attention is placed on industry-based research funding to address this changing environment. In 2000, Stuart Cunningham and others listed the eight trends that order education, teaching and learning in this new environment.
The Changes to the Provision of Education
1. Globalization
2. The arrival of new information and communication technologies
3. The development of a knowledge economy, shortening the time between the development of new ideas and their application
4. The formation of learning organizations
5. User-pays education
6. The distribution of knowledge through interactive communication technologies (ICT)
7. Increasing demand for education and training
8. Scarcity of an experienced and trained workforce
Source: S. Cunningham, Y. Ryan, L. Stedman, S. Tapsall, K. Bagdon, T. Flew and P. Coaldrake. The Business of Borderless Education. Canberra: DETYA Evaluation and Investigations Program [EIP], 2000.
This table reverberates with the current challenges confronting education. Mobilizing such changes requires the lubrication of lifelong learning tropes in university mission statements and the promotion of a learning culture, while also acknowledging the limited financial conditions in which the educational sector is placed. For university scholars facilitating the creative industries approach, education is “supplying high value-added inputs to other enterprises,” (Hartley and Cunningham 5) rather than having value or purpose beyond the immediately and applicably economic. The assumption behind this table is that the areas of expansion in the workforce are the creative and service industries. In fact, the creative industries are the new service sector. This new economy makes specific demands of education.
Education in the ‘old economy’ and the ‘new economy’
Old Economy → New Economy
Four-year degree → Forty-year degree
Training as a cost → Training as a source of competitive advantage
Learner mobility → Content mobility
Distance education → Distributed learning
Correspondence materials with video → Multimedia centre
Fordist training – one size fits all → Tailored programmes
Geographically fixed institutions → Brand named universities and celebrity professors
Just-in-case → Just-in-time
Isolated learners → Virtual learning communities
Source: T. Flew. “Educational Media in Transition: Broadcasting, Digital Media and Lifelong Learning in the Knowledge Economy.” International Journal of Instructional Media 29.1 (2002): 20.
There are myriad assumptions lurking in Flew’s fascinating table.
The imperative is short courses on the web, servicing the needs of industry. He described the product of this system as a “learner-earner.” (50) This ‘forty year degree’ is based on lifelong learning ideologies. However Flew’s ideas are undermined by the current government higher education agenda, through the capping – through time – of courses. The effect on the ‘learner-earner’ in having to earn more to privately fund a continuance of learning – to ensure that they keep on earning – needs to be addressed. There will be consequences to the housing market, family structures and leisure time. The costs of education will impact on other sectors of the economy and private lives. Also, there is little attention to the groups who are outside this taken-for-granted commitment to learning. Flew noted that barriers to greater participation in education and training at all levels, which is a fundamental requirement of lifelong learning in the knowledge economy, arise in part out of the lack of provision of quality technology-mediated learning, and also from inequalities of access to ICTs, or the ‘digital divide.’ (51) In such a statement, there is a misreading of teaching and learning. Such confusion is fuelled by the untheorised gap between ‘student’ and ‘consumer.’ The notion that technology (which in this context too often means computer-mediated platforms) is a barrier to education does not explain why conventional distance education courses, utilizing paper, ink and postage, were also unable to welcome or encourage groups disengaged from formal learning. Flew and others do not confront the issue of motivation, or the reason why citizens choose to add or remove the label of ‘student’ from their bag of identity labels. The stress on technology as both a panacea and problem for lifelong learning may justify theories of convergence and the integration of financial, retail, community, health and education provision into a services sector, but does not explain why students desire to learn, beyond economic necessity and employer expectations. Based on these assumptions of expanding creative industries and lifelong learning, the shape of education is warping. An ageing population requires educational expenditure to be reallocated from primary and secondary schooling and towards post-compulsory learning and training. This cost will also be privatized. When coupled with immigration flows, technological changes and alterations to market and labour structures, lifelong learning presents a profound and personal cost. An instrument for economic and social progress has been individualized, customized and privatized. The consequence of the ageing population in many nations including Australia is that there will be fewer young people in schools or employment. Such a shift will have consequences for the workplace and the taxation system. Similarly, those young workers who remain will be far more entrepreneurial and less loyal to their employers. Public education is now publically-assisted education. Jane Jenson and Denis Saint-Martin realized the impact of this change. The 1980s ideological shift in economic and social policy thinking towards policies and programmes inspired by neo-liberalism provoked serious social strains, especially income polarization and persistent poverty. 
An increasing reliance on market forces and the family for generating life-chances, a discourse of ‘responsibility,’ an enthusiasm for off-loading to the voluntary sector and other altered visions of the welfare architecture inspired by neo-liberalism have prompted a reaction. There has been a wide-ranging conversation in the 1990s and the first years of the new century in policy communities in Europe as in Canada, among policy makers who fear the high political, social and economic costs of failing to tend to social cohesion. (78) There are dense social reorderings initiated by neo-liberalism and changing the notions of learning, teaching and education. There are yet-to-be-tracked costs to citizenship. The legacy of the 1980s and 1990s is that all organizations must behave like businesses. In such an environment, there are problems establishing social cohesion, let alone social justice. To stress the product – and not the process – of education contradicts the point of lifelong learning. Compliance and complicity replace critique. (Post) learning The Cold War has ended. The great ideological battle between communism and Western liberal democracy is over. Most countries believe both in markets and in a necessary role for Government. There will be thunderous debates inside nations about the balance, but the struggle for world hegemony by political ideology is gone. What preoccupies decision-makers now is a different danger. It is extremism driven by fanaticism, personified either in terrorist groups or rogue states. Tony Blair (http://www.number-10.gov.uk/output/Page6535.asp) Tony Blair, summoning his best Francis Fukuyama impersonation, signaled the triumph of liberal democracy over other political and economic systems. His third way is unrecognizable from the Labour party ideals of Clement Attlee. Probably his policies need to be. Yet in his second term, he is not focused on probing the specificities of the market-orientation of education, health and social welfare. Instead, decision makers are preoccupied with a war on terror. Such a conflict seemingly justifies large defense budgets which must be at the expense of social programmes. There is no recognition by Prime Ministers Blair or Howard that ‘high-tech’ armory and warfare is generally impotent against the terrorist’s weaponry of cars, bodies and bombs. This obvious lesson is present for them to see. After the rapid and successful ‘shock and awe’ tactics of Iraq War II, terrorism was neither annihilated nor slowed by the Coalition’s victory. Instead, suicide bombers in Saudi Arabia, Morocco, Indonesia and Israel have snuck through defenses, requiring little more than a car and explosives. More Americans have been killed since the war ended than during the conflict. Wars are useful when establishing a political order. They sort out good and evil, the just and the unjust. Education policy will never provide the ‘big win’ or the visible success of toppling Saddam Hussein’s statue. The victories of retraining, literacy, competency and knowledge can never succeed on this scale. As Blair offered, “these are new times. New threats need new measures.” (http://www.number-10.gov.uk/output/Page6535.asp) These new measures include – by default – a user-pays education system. In such an environment, lifelong learning cannot succeed. It requires a dense financial commitment in the long term. A learning society requires a new sort of war, using ideas not bullets. References Bash, Lee.
“What Serving Adult Learners Can Teach Us: The Entrepreneurial Response.” Change January/February 2003: 32-7. Blair, Tony. “Full Text of the Prime Minister’s Speech at the Lord Mayor’s Banquet.” November 12, 2002. http://www.number-10.gov.uk/output/Page6535.asp. Carroll, Mary. “The Well-Worn Path.” The Australian Library Journal May 2002: 117-22. Field, J. Lifelong Learning and the New Educational Order. Stoke on Trent: Trentham Books, 2000. Flew, Terry. “Educational Media in Transition: Broadcasting, Digital Media and Lifelong Learning in the Knowledge Economy.” International Journal of Instructional Media 29.1 (2002): 47-60. Hartley, John, and Cunningham, Stuart. “Creative Industries – from Blue Poles to Fat Pipes.” Department of Education, Science and Training, Commonwealth of Australia (2002). Jenson, Jane, and Saint-Martin, Denis. “New Routes to Social Cohesion? Citizenship and the Social Investment State.” Canadian Journal of Sociology 28.1 (2003): 77-99. Leadbeater, Charles. Living on Thin Air. London: Viking, 1999. Pillay, Hitendra, and Elliott, Robert. “Distributed Learning: Understanding the Emerging Workplace Knowledge.” Journal of Interactive Learning Research 13.1-2 (2002): 93-107. Welsh, Irvine, from Redhead, Steve. “Post-Punk Junk.” Repetitive Beat Generation. Glasgow: Rebel Inc, 2000: 138-50.
