
Journal articles on the topic 'TV (Computer file)'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 journal articles for your research on the topic 'TV (Computer file).'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Browse journal articles in a wide variety of disciplines and organise your bibliography correctly.

1

Ma, Youwen, and Yi Wan. "Data Analysis Method of Intelligent Analysis Platform for Big Data of Film and Television." Complexity 2021 (April 16, 2021): 1–10. http://dx.doi.org/10.1155/2021/9947832.

Full text
Abstract:
Based on cloud computing and statistical theory, this paper proposes a sound analysis method for big data in film and television. The method takes the open-source Hadoop cloud platform as its basis and combines the MapReduce distributed programming model, the HDFS distributed file storage system, and other key cloud computing technologies. To cope with the different data processing needs of the film and television industry, association analysis, cluster analysis, factor analysis, and a K-means + association analysis training model were applied to model, process, and analyze the full data of film and TV series. The film and television data of recent years are analyzed and studied according to film type, producer, production region, investment, box office, audience rating, network score, audience group, and other factors. Based on the study of the impact of each attribute of a film or TV drama on film box office and TV audience ratings, the work is devoted to prediction for the film and television industry and constantly verifies and improves the algorithm model.
APA, Harvard, Vancouver, ISO, and other styles
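The clustering step named in this abstract (K-means over film attributes such as investment, box office, and rating) can be sketched minimally as follows. The toy feature vectors and cluster count are invented for illustration and are not taken from the paper's Hadoop/MapReduce pipeline:

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Plain k-means: assign each point to its nearest centroid, then recompute centroids."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # distances of every point to every centroid, shape (n_points, k)
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):  # keep the old centroid if a cluster empties
                centroids[j] = X[labels == j].mean(axis=0)
    return labels, centroids

# hypothetical film feature vectors: [investment, box office, audience rating]
films = np.array([
    [1.0,  2.0, 6.5],
    [1.2,  2.5, 6.8],
    [8.0, 20.0, 8.9],
    [7.5, 18.0, 8.7],
])

labels, centroids = kmeans(films, k=2)
print(labels)  # the two low-budget films and the two blockbusters fall into separate clusters
```

In a distributed setting of the kind the abstract describes, the assignment step would correspond to the map phase and the centroid update to the reduce phase of a MapReduce job.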
2

Vásquez-Ramírez, Raquel, Maritza Bustos-Lopez, Giner Alor-Hernández, Cuauhtémoc Sanchez-Ramírez, and Jorge García-Alcaraz. "AthenaCloud: A cloud-based platform for multi-device educational software generation." Computer Science and Information Systems 13, no. 3 (2016): 957–81. http://dx.doi.org/10.2298/csis160807037v.

Full text
Abstract:
Nowadays, information technologies play an important role in education. Mobile and TV applications can be considered support tools in the teaching-learning process; however, relevant and appropriate mobile and TV applications are not always available, and teachers can often only judge applications by reviews or anecdotes instead of testing them. These reasons motivate the need for, and the benefits of, creating one's own mobile application for teaching and learning. In this work, we present AthenaCloud, a cloud-based platform for multi-device educational software generation (smartphones, tablets, Web, Android-based TV boxes, and smart TV devices). An open cloud-based platform allows teachers to create their own multi-device software by using a personal computer with Internet access. The goal of this platform is to provide a software tool that helps educators upload their electronic contents - or use existing contents in an open repository - and package them in the desired setup file for one of the supported devices and operating systems.
3

Armanto, Armanto. "Implementasi Jaringan Tunnel Berbasis Eoip (Ethernet Over Ip ) Dengan Mikrotik Router Rb 2011 Il-Rm Di Silampari Tv Lubuklinggau." Jurnal Ilmiah Betrik 8, no. 01 (March 30, 2017): 42–52. http://dx.doi.org/10.36050/betrik.v8i01.65.

Full text
Abstract:
Designing an EoIP (Ethernet over IP) Tunnel Network with the MikroTik RB 2011 IL-RM Router at Silampari TV Lubuklinggau, compiled by Armanto. Silampari TV currently has two offices in different places, far apart: the first studio is located in Watervang and the second studio in Sumber Agung, Kota Lubuklinggau. Studio one is the production kitchen that processes video for broadcast. The two sites are not connected to each other, so studio one and studio two cannot do file sharing, VoIP, or the exchange of information over other networks. Therefore, this research builds a computer network that can overcome the problems reported by Silampari TV, using MikroTik as the main tool, and provides a solution that is quite economical and reliable for this problem, namely a tunnel. The research used several tools: six PC units for clients, a switch as the data communication medium over UTP cable, a TP-Link wireless router as a hotspot and as the data distribution medium from one MikroTik to the other, and two MikroTik RB 2011 routers for Internet data and tunnel settings. The Internet connection supporting this research came from one of the providers in Lubuklinggau, namely B-NET.
4

Malluhi, Qutaibah, Vinh Duc Tran, and Viet Cuong Trinh. "Decentralized Broadcast Encryption Schemes with Constant Size Ciphertext and Fast Decryption." Symmetry 12, no. 6 (June 6, 2020): 969. http://dx.doi.org/10.3390/sym12060969.

Full text
Abstract:
Broadcast encryption (BE) allows a sender to encrypt a message to an arbitrary target set of legitimate users and to prevent non-legitimate users from recovering the broadcast information. BE has numerous practical applications such as satellite geolocation systems, file sharing systems, pay-TV systems, e-Health, social networks, cloud storage systems, etc. This paper presents two new decentralized BE schemes. Decentralization means that there is no single authority responsible for generating secret cryptographic keys for system users. Therefore, the scheme eliminates the concern of having a single point of failure, as the central authority could be attacked, become malicious, or become unavailable. Recent attacks have shown that the centralized approach could lead to system malfunctioning or to leaking sensitive information. Another achievement of the proposed BE schemes is their performance characteristics, which make them suitable for environments with light-weight clients, such as Internet-of-Things (IoT) applications. The proposed approach improves the performance over existing decentralized BE schemes by simultaneously achieving constant-size ciphertext, constant-size secret keys, and fast decryption.
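The basic contract of broadcast encryption described above - encrypt once to an arbitrary target set so that only legitimate users recover the message - can be sketched naively as follows. This toy uses a SHA-256 keystream purely for illustration, is not secure, and is not the paper's scheme; its linear-size header is precisely the cost that constant-size-ciphertext constructions avoid:

```python
import hashlib
import secrets

def keystream(key: bytes, n: int) -> bytes:
    """Toy SHA-256 counter-mode keystream. Illustrative only -- NOT a secure cipher."""
    out = b""
    ctr = 0
    while len(out) < n:
        out += hashlib.sha256(key + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return out[:n]

def xor_crypt(key: bytes, data: bytes) -> bytes:
    """XOR with the keystream; the operation is its own inverse."""
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

# each user holds a personal long-term secret key
user_keys = {uid: secrets.token_bytes(32) for uid in ("alice", "bob", "carol")}

def broadcast(target_set, message: bytes):
    """Naive BE: wrap a fresh session key once per legitimate receiver.
    The header grows linearly with |target_set| -- exactly the cost that
    constant-size-ciphertext BE schemes eliminate."""
    session_key = secrets.token_bytes(32)
    header = {uid: xor_crypt(user_keys[uid], session_key) for uid in target_set}
    body = xor_crypt(session_key, message)
    return header, body

def decrypt(uid, header, body):
    if uid not in header:  # non-legitimate user: nothing to unwrap
        return None
    session_key = xor_crypt(user_keys[uid], header[uid])
    return xor_crypt(session_key, body)

header, body = broadcast({"alice", "bob"}, b"pay-TV frame")
```

Here `decrypt("alice", ...)` and `decrypt("bob", ...)` recover the plaintext, while `"carol"`, outside the target set, gets nothing.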
5

Silva, Angélica Baptista, and Annibal Coelho de Amorim. "A Brazilian educational experiment: teleradiology on web TV." Journal of Telemedicine and Telecare 15, no. 7 (October 2009): 373–76. http://dx.doi.org/10.1258/jtt.2009.090204.

Full text
Abstract:
Since 2004, educational videoconferences have been held in Brazil for paediatric radiologists in training. The RUTE network, a high-speed national research and education network, was used. Twelve videoconferences were recorded by the Health Channel and transformed into TV programmes, both for conventional broadcast and for access via the Internet. Between October 2007 and December 2009 the Health Channel website registered 2378 hits. Our experience suggests that for successful recording of multipoint videoconferences, four areas are important: (1) a pre-planned script is required, for both physicians and film-makers; (2) particular care is necessary when editing the audiovisual material; (3) the audio and video equipment requires careful adjustment to preserve clinical discussions and the quality of radiology images; (4) to produce a product suitable for both TV sets and computer devices, the master tape needs to be encoded in low-resolution digital video formats for Internet media (wmv and rm formats for streaming, and compressed zip files for downloading) and MPEG format for DVDs.
6

Völkl, E., L. F. Allard, T. A. Dodson, and T. A. Nolan. "Computer control of Transmission Electron Microscopes possibilities, concepts and present limitations." Proceedings, annual meeting, Electron Microscopy Society of America 53 (August 13, 1995): 22–23. http://dx.doi.org/10.1017/s0424820100136489.

Full text
Abstract:
The electron microscope laboratory at the High Temperature Materials Laboratory in Oak Ridge National Laboratory runs essentially film free. This is possible due to the use of TV-rate and slow-scan CCD cameras together with fast desktop computers connected through Ethernet to network servers for image storage and a variety of hard-copy output devices. In our experience, the functionality of a film-free laboratory goes beyond the simple replacement of film material with some other storage medium. For example, the time and effort to produce a final hardcopy image has been reduced effectively from several hours to several minutes. At the same time, data accuracy has increased due to the high linearity of the CCD cameras, and data safety is improved due to automated nightly backups. The present all-digital setup of our TEMs has already changed the routines of how a TEM is run. Two situations occur regularly: 1) one microscopist moves back and forth between the TEM and the computer, and 2) two microscopists run the computer and/or TEM as a team. Unfortunately, the tasks of recording an image and running the microscope are still very much separated.
7

Yu, Anni. "Discussion of the Artistic Aesthetic Transformation between Film and Literature from the Perspective of Adaptation." Journal of Language Teaching and Research 11, no. 6 (November 1, 2020): 1005. http://dx.doi.org/10.17507/jltr.1106.19.

Full text
Abstract:
Literature relies on text reading to realize its artistic value. With the continuous replacement of communication means, literal reading may become a kind of classical or aristocratic sentiment. More and more people meet the needs of reading by means of image. Therefore, TV series or computer network related to it have become the most popular way, while film has become a very unique artistic way between literature and TV series. Since Chinese films have made some achievements in literary adaptation, this paper attempts to explore and discuss the essence of the artistic subject of the transformation of film and literature.
8

Hendro W, Nur Cahyo. "PENGEMBANGAN MANAJEMEN PENYIARAN WALISONGO TV." Islamic Communication Journal 2, no. 1 (January 12, 2018): 1. http://dx.doi.org/10.21580/icj.2017.2.1.2097.

Full text
Abstract:
Television is the medium most widely consumed by people around the world, and especially in Indonesia; given this, television's influence on the public's patterns of thought and attitude is very large. The Faculty of Da'wah and Communications of UIN Walisongo, as an institution that shares responsibility for the success of Islamic propagation in Indonesia, is well placed to use television as one of its da'wah media. Television broadcast management is classified as modern management because all activities in preparing and producing broadcasts are inseparable from computer technology. Computers have an enormous influence in speeding up the production of a product; through collaboration between software packages, new innovations in television broadcasting are created. The programmes to be presented must be well managed: by carrying out scheduling-time management of the expected broadcast programmes, the programmes to be served can be anticipated as early as possible. The production process of television broadcasting must be completed before the programme is aired. Through the process of film editing, which carries the informational payload, a television programme can be produced. The TV broadcasting programme is integrated with the Walisongo TV broadcast management information system software.
9

Völkl, E., L. F. Allard, T. A. Dodson, and T. A. Nolan. "Computer Control of Transmission Electron Microscopes: Possibilities, Concepts and Present Limitations." Microscopy Today 4, no. 2 (March 1996): 24–25. http://dx.doi.org/10.1017/s1551929500067559.

Full text
Abstract:
The electron microscope laboratory at the High Temperature Materials Laboratory in Oak Ridge National Laboratory runs essentially film free. This is possible due to the use of TV-rate and slow-scan CCD cameras together with fast desktop computers connected through Ethernet to network servers for image storage, and a variety of hard-copy output devices. In our experience, the functionality of a film-free laboratory goes beyond the simple replacement of film material with some other storage medium. For example, the time and effort to produce a final hardcopy image has been reduced effectively from several hours to several minutes. At the same time, data accuracy has increased due to the high linearity of the CCD cameras, and data safety is improved due to automated nightly backups.
10

Prince, Stephen. Review of How Did They Do It? Computer Illusion in Film and TV, by Christopher W. Baker. Film Quarterly 49, no. 1 (October 1995): 61–62. http://dx.doi.org/10.1525/fq.1995.49.1.04a00220.

Full text
11

Marks, L. D. "Computer-Assisted Microscope Alignment." Proceedings, annual meeting, Electron Microscopy Society of America 48, no. 1 (August 12, 1990): 152–53. http://dx.doi.org/10.1017/s0424820100179518.

Full text
Abstract:
Full alignment of a high resolution electron microscope, including alignment of the beam tilt, is a very difficult process. Whereas astigmatism correction is relatively straightforward, correcting the beam tilt is by no means so simple, and there is a strong interaction between astigmatism and tilt, so that it is possible to apparently correct the two at one defocus. To overcome this problem, the method of applying a ± tilt oscillation to the beam has been introduced previously, the principle being to adjust the texture of an amorphous carbon film image so that it is symmetric with respect to the tilt oscillations. This procedure works, but in practice it is not easy to use, and there are additional experimental problems in terms of loss of intensity due to insufficiently corrected beam shift/tilt purity. As an extension of this process, and as a general extension of astigmatism correction as well, we have developed a procedure for providing a computer-generated optical diffraction pattern to complement the standard TV image. One critical problem was providing optical diffraction patterns at a sufficiently rapid speed to make the process experimentally viable; one optical diffraction pattern every 5 seconds, for example, is too slow. The numerical procedure can be broken down into a number of different steps:
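In digital terms, the computer-generated optical diffraction pattern described above is the centred power spectrum of the live image. A minimal numpy sketch (the random frame is an invented stand-in for an amorphous-carbon TV image):

```python
import numpy as np

def diffractogram(image):
    """Digital analogue of an optical diffraction pattern: the centred,
    log-scaled power spectrum of the image."""
    spectrum = np.fft.fftshift(np.fft.fft2(image))
    return np.log1p(np.abs(spectrum) ** 2)

# synthetic stand-in for an amorphous carbon film image
rng = np.random.default_rng(1)
frame = rng.standard_normal((256, 256))
pattern = diffractogram(frame)
```

In such a pattern, astigmatism shows up as elliptical rather than circular Thon rings, which is why being able to recompute it at near-TV rates matters for the alignment procedure the abstract describes.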
12

Iwao, Hirohide. "6 Next Generation TV Program Exchange System "FileX"." Journal of the Institute of Image Information and Television Engineers 67, no. 5 (2013): 379–82. http://dx.doi.org/10.3169/itej.67.379.

Full text
13

Neto, Francisco Milton Mendes, Raphael de Carvalho Muniz, Aquiles Medeiros Filgueira Burlamaqui, and Rafael Castro de Souza. "An Agent-Based Approach for Delivering Educational Contents Through Interactive Digital TV in the Context of T-Learning." International Journal of Distance Education Technologies 13, no. 2 (April 2015): 73–92. http://dx.doi.org/10.4018/ijdet.2015040105.

Full text
Abstract:
The support of technological resources in teaching and learning has helped make them more efficient and enjoyable. It has thus become quite common to use, for educational purposes, media previously explored only for entertainment, among them the TV. Interactive Digital TV (iDTV) provides resources that make possible the development of a plethora of educational applications. However, since TV is a mass distribution device, an aggravating factor in the use of this medium for education is the presentation of contents (learning objects) that are inadequate with respect to both the users' previous knowledge and the subject of the courses in which they are enrolled. This paper tries to fill this gap by proposing an educational environment for iDTV, supported by an adequate standard for the classification of learning objects for t-learning, in order to deliver educational contents for iDTV according to users' knowledge level and the suitability of contents to the ongoing course.
14

O’Keefe, Michael A., and Roar Kilaas. "Current and future directions of on-line Transmission Electron Microscope image processing and analysis." Proceedings, annual meeting, Electron Microscopy Society of America 47 (August 6, 1989): 48–49. http://dx.doi.org/10.1017/s0424820100152215.

Full text
Abstract:
Image processing and analysis are increasingly employed in order to extract the maximum amount of useful information from transmission electron micrographs. Whereas most processing is carried out a posteriori, i.e. from images that have been recorded on film and then digitized for computer processing, it is obviously useful to be able to improve the on-line image in near-real time for the benefit of the microscope operator. In addition, interfacing an external computer to the internal controls of modern TEMs allows on-line image analysis to provide the first step in algorithms designed to assist the operator in adjustment of microscope parameters such as alignment, astigmatism and defocus. The hardware required for on-line image processing can be as simple as a detector coupled to a TV camera, the signal from which is digitized, stored and averaged by a set of cards controlled by a host computer, with a monitor displaying the image stored in the memory card.
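The digitize-store-average path described above can be sketched as a running mean over successive TV frames. The synthetic frames here are invented, and the real system performed this step in dedicated frame-store hardware rather than in software:

```python
import numpy as np

def average_frames(frames):
    """Incremental mean of digitized frames; suppresses uncorrelated noise
    by roughly sqrt(n) without storing the whole frame history."""
    acc = np.zeros_like(frames[0], dtype=np.float64)
    for n, frame in enumerate(frames, start=1):
        acc += (frame - acc) / n  # running-mean update
    return acc

rng = np.random.default_rng(0)
clean = rng.uniform(0, 255, size=(64, 64))                        # 'true' image
noisy = [clean + rng.normal(0, 20, size=clean.shape) for _ in range(64)]
averaged = average_frames(noisy)
```

Averaging 64 frames reduces the per-pixel noise by about a factor of eight relative to a single frame, which is the point of the frame-averaging cards mentioned in the abstract.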
15

Prince, Stephen. "Review: How Did They Do It? Computer Illusion in Film and TV by Christopher W. Baker." Film Quarterly 49, no. 1 (1995): 61–62. http://dx.doi.org/10.2307/1213508.

Full text
16

Larkey, Edward. "Narratological Approaches to Multimodal Cross-Cultural Comparisons of Global TV Formats." Audiovisual Data in Digital Humanities 7, no. 14 (December 31, 2018): 38. http://dx.doi.org/10.18146/2213-0969.2018.jethc152.

Full text
Abstract:
This article cross-culturally compares different versions of the Quebec sitcom/sketch comedy television series Un Gars, Une Fille (1997-2002) by examining the various gender roles and family conflict management strategies in a scene in which the heterosexual couple visits the male character’s mother-in-law. The article summarizes similarities and differences in the narrative structure, sequencing and content of several format adaptations by compiling computer-generated quantitative and qualitative data on the length of segments. To accomplish this, I have used the annotation function of Adobe Premiere, and visualized the findings using Microsoft Excel bar graphs and tables. This study applies a multimodal methodology to reveal the textual organization of scenes, shots and sequences which guide viewers toward culturally proxemic interpretations. This article discusses the benefits of applying the notion of discursive proximity suggested by Uribe-Jongbloed and Espinosa-Medina (2014) to gain a more comprehensive and complex understanding of the multimodal nature of cross-cultural comparison of global television format adaptations.
17

Fong, Ken Kin-Kiu, and Stanley Kam Sing Wong. "Wi-Fi Adoption and Security in Hong Kong." International Business Research 10, no. 8 (July 11, 2017): 129. http://dx.doi.org/10.5539/ibr.v10n8p129.

Full text
Abstract:
The benefit of using WiFi for Internet connection is obvious: it is cost-effective and powerful. WiFi gives us the flexibility and convenience of not being tied to a fixed location. Nowadays, more and more electronic devices and gadgets, such as mobile phones, cameras, gaming devices, TV and entertainment equipment, are WiFi enabled. WiFi also enables your devices to share files instantly. WiFi broadcasting devices, such as Chromecast, give you extra convenience by allowing you to stream video and audio content from your mobile phone to your TV over a WiFi connection. However, this kind of flexibility and convenience comes with a cost. Sharing files, streaming content or even accessing the Internet via WiFi means signals are being transmitted, and they can be captured by anyone with a computer or mobile phone installed with appropriate software. Therefore, it is important to let WiFi users know their security risks and how to minimize them. Educating WiFi users to reduce WiFi security risks is one of our ongoing missions. Based on empirically collected data, this paper is a report of a comprehensive study on the use of WiFi and WiFi networking and on WiFi users' knowledge of the risks and security issues involved in using WiFi in Hong Kong. Findings of the study highlight the WiFi security knowledge gaps of users in Hong Kong so that stakeholders can take action to improve Internet security by eliminating the security gaps identified.
18

Mesyanzhinova, Alexandra Vadimovna. "The Art of Opera: Variability of Artistic Languages of Screen Forms." Journal of Film Arts and Film Studies 6, no. 4 (December 15, 2014): 72–83. http://dx.doi.org/10.17816/vgik6472-83.

Full text
Abstract:
The specifics of the artistic language of the screen forms of opera and its variations (movies, TV performances, live broadcasts in movie theatres) can be explicated successfully using a case study of different interpretations of the same work. The article retrospectively illustrates the variability of the artistic language of operatic screen forms using screen adaptations of Giuseppe Verdi's La Traviata (1853). The article analyzes a film-opera by Franco Zeffirelli (1982), a film-opera by Mario Lanfranchi (1968), the TV broadcast of Adrian Marthaler's staging of La Traviata at Zurich Central Station (2008), and the live broadcast in movie theatres of Willy Decker's production at the Metropolitan Opera (2012). The author presumes that the format of an opera-film, through interacting with the audience primarily on emotional and symbolic levels, appears as static. Live TV broadcasts of opera productions, which have replaced filmed opera, convey the completeness and stasis of what is happening on screen while allowing the viewer to feel the spontaneity of theatrical action, actually ushering the spectator into the space of each frame. Live broadcasts of opera performances in movie theatres create a unique symbiosis of arts: a live feature film, devoid of stasis and predetermination, emerges. Nowadays, when theatres are increasingly offering live broadcasts of their performances on the Internet, it is possible to state that the operatic art is not just oriented towards the screen arts but is searching for new opportunities to adapt itself to the screen format, and can no longer exist independently of the screen. All this affects the artistic language of the screen and theatrical forms of the operatic art.
Although the operatic art interacts well with multimedia technologies, the issues of interpretation and transformation of artistic forms, which arise when recordings of past performances are converted to digital format and viewed on modern devices, require a very delicate approach. Tablet computers and other media devices, which are able to reproduce nearly every recording, in a certain sense allow customization of the technical and aesthetic form of an art piece according to the user's preferences, thus introducing the viewer's personalized artistic accents.
19

Gawroński, Sławomir, and Kinga Bajorek. "A Real Witcher—Slavic or Universal; from a Book, a Game or a TV Series? In the Circle of Multimedia Adaptations of a Fantasy Series of Novels “The Witcher” by A. Sapkowski." Arts 9, no. 4 (October 3, 2020): 102. http://dx.doi.org/10.3390/arts9040102.

Full text
Abstract:
A series of novels about a witcher, written by Andrzej Sapkowski almost thirty years ago, has now become an inspiration for the creation of mass productions of mainstream popular culture—film and multimedia adaptations for use in computer games. It is one of the few examples of global messages of mass culture being based on Polish creativity. The recognition of “The Witcher”, due to the Netflix production, soon contributed to building the national pride of Polish people, and at the same time sparked a discussion in Central and Eastern European countries on the consequences of the multimedia adaptation of Andrzej Sapkowski’s prose. Questions about the dissonance between the Slavic and universal dimensions of “The Witcher” in relation to the original novels and their adaptations are a part of the traditional discourse on the adaptability of literature and its consequences for the reception by the audience. This article tries to capture the specific character of the adaptations of Andrzej Sapkowski’s literature from the point of view of typology, known from the literature of the subject, as well as to answer the question about the consequences of the discrepancy between the original book and its adaptations in the form of a film, a TV series, and computer games. The considerations in the article were based on the literature analysis and the research based on the existing sources.
20

Dronsfield, Jonathan Lahey. "Pedagogy of the Written Image." Journal of French and Francophone Philosophy 18, no. 2 (January 28, 2010): 87–106. http://dx.doi.org/10.5195/jffp.2010.214.

Full text
Abstract:
"Text, it is Jean-Luc Godard’s “ennemi royal, principal." Text, it is on the side of death, “les images c’est la vie et les textes, c’est la mort”. Twenty years later and the war is not over: “Une image est paisible. Une image de la Vierge avec son petit enfant sur son âne n'amène pas la guerre, c'est son interprétation par un texte qui amènera la guerre et qui fera que les soldats de Luther iront déchirer les toiles de Raphaël.” Godard’s story is not just of the history of cinema, it is one in which text has won out to the extent that TV and the computer too make of the image something subservient to text. “Since Gutenberg” the text has triumphed in this way. “There was a long struggle, marriage or liaison between painting and text. Then the text carried the day. Film is the last art in the pictorial tradition... Take away the text and you’ll see what’s left. In TV nothing is left.” It is obvious from these lines of demarcation, lines which divide the art of the image from what would declare war on its most sacred icons and defeat its technologies, a conflict in which there is just the one aggressor, a division which makes of the relation between image and text nothing less than a matter of life and death, that for Godard text is indeed the enemy..."
21

Voelkl, E. "Live Electron Holography: A Window to the Phase World." Microscopy and Microanalysis 5, S2 (August 1999): 950–51. http://dx.doi.org/10.1017/s1431927600018079.

Full text
Abstract:
While electron holography has been around for many years (since 1948, see [1]), the process of obtaining a view into the phase world remained tedious for a long time. When CCD cameras became available and started to replace film, many attempts were made to obtain a “live” view into the phase world. By far the most successful setup for a live phase display is described in [2]. There, Chen et al. use a mixture of digital and analog techniques to obtain phase images at TV rate. This setup allows one to move the sample and/or watch time-dependent specimen changes in a unique way, i.e., live in the phase world. The reason why Chen uses a mixture of analog and digital methods is the capability of an optical lens system for fast Fourier processing. Computers have become significantly faster since Chen's introduction of the mixed setup in 1992 and now allow holograms to be processed rapidly, yielding about one phase image per second on present standard computer systems and acquisition cameras.
22

Kong, Lingqiang. "SIFT Feature-Based Video Camera Boundary Detection Algorithm." Complexity 2021 (April 11, 2021): 1–11. http://dx.doi.org/10.1155/2021/5587873.

Full text
Abstract:
Aiming at the problem of low accuracy in shot boundary detection for film and television, a new SIFT feature-based shot detection algorithm is proposed. Firstly, multiple frames are read in time sequence and converted into grayscale images. Each frame is further divided into blocks, and the average gradient of each block is calculated to construct the film's dynamic texture. The correlation of the dynamic textures of adjacent frames and the matching degree of the SIFT features of the two frames are compared, and pre-detection results are obtained according to the matching results. Next, the dynamic texture and SIFT features are compared with those of a later frame, at a step size below the refresh frequency perceptible to the human eye, to obtain the final result. In experiments on multiple groups of different types of film and television data, high recall and accuracy rates were obtained. The algorithm can detect gradual transitions with complex structure and achieve high detection accuracy and recall. A shot boundary detection algorithm based on fuzzy clustering is also realized, which can detect abrupt and gradual transitions at the same time without setting a threshold. It can effectively reduce factors that interfere with shot detection in movies, TV, and advertisements, such as flashes, and can reduce the influence of camera movement on detected boundaries. However, due to the complexity of film and television material, the algorithm still produces some missed and false detections, which need further study.
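The block-averaged gradient "dynamic texture" signature described in this abstract can be sketched as follows. This is a simplified stand-in (the paper additionally matches SIFT features and applies fuzzy clustering), and the frames, block count, and threshold here are invented:

```python
import numpy as np

def block_gradients(gray, blocks=4):
    """Average gradient magnitude per block: a crude per-frame texture signature."""
    gy, gx = np.gradient(gray.astype(float))
    mag = np.hypot(gx, gy)
    h, w = mag.shape
    bh, bw = h // blocks, w // blocks
    return np.array([[mag[i * bh:(i + 1) * bh, j * bw:(j + 1) * bw].mean()
                      for j in range(blocks)] for i in range(blocks)])

def is_cut(frame_a, frame_b, threshold=0.5):
    """Flag a hard cut when the block-texture signatures of adjacent frames decorrelate."""
    a = block_gradients(frame_a).ravel()
    b = block_gradients(frame_b).ravel()
    return np.corrcoef(a, b)[0, 1] < threshold

rng = np.random.default_rng(0)
scene_a = np.zeros((64, 64))
scene_a[:32, :32] = rng.uniform(0, 255, (32, 32))   # texture in the top-left quadrant
scene_b = np.zeros((64, 64))
scene_b[32:, 32:] = rng.uniform(0, 255, (32, 32))   # texture somewhere else entirely
same_shot = scene_a + rng.normal(0, 1.0, scene_a.shape)  # small within-shot change

print(is_cut(scene_a, same_shot), is_cut(scene_a, scene_b))  # only the second pair is a cut
```

Small within-shot changes leave the signature nearly unchanged, while a change of scene content decorrelates it, which is the intuition behind the pre-detection step in the abstract.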
23

Rogers, Kathryn E. "Story First, Technology Second." Advances in Archaeological Practice 8, no. 4 (November 2020): 428–33. http://dx.doi.org/10.1017/aap.2020.37.

Full text
Abstract:
Combining the strengths of “traditional” documentary filmmaking (as “creative treatments of actualité”) with the immersive power of interactive digital technologies (from 360° video to VR to data mining to algorithms), i-Docs can transform audiences into participants, co-creators, and collaborators in nonfiction storytelling, allowing them not only to explore and experience a story on their own terms but to remix, share, and contribute their own content to a collective story. i-Docs are cross- and multiplatform, screening across cinema, computers, smartphones, and gallery installations. Showcased at leading film festivals, increasingly adopted by broadcasters (including PBS, Al Jazeera, and the BBC), and critically acclaimed from the Webbys and the Pulitzers to Cannes, the i-Doc sector is set to boom. After a close reading of two i-Docs, Hunt for the Inca Ruins (2017) and Saydnaya (2016), I consider the potential of i-Docs to resolve archaeologists’ concerns about misrepresentation, accuracy, information quality, (co)authorship, and crediting original research in documentary storytelling. I also examine the sector's shortcomings: unstable production pathways, funding sources, and technologies, and difficulties in assessing impact. I propose that archaeologists should engage proactively with the i-Doc sector if we wish to avoid the pitfalls previously encountered in film and factual TV and make the most of this new format.
24

Samaras, Evanthia. "Futureproofing Visual Effects." International Journal of Digital Curation 16, no. 1 (August 15, 2021): 15. http://dx.doi.org/10.2218/ijdc.v16i1.689.

Full text
Abstract:
Digital visual effects (VFX), including computer animation, have become a commonplace feature of contemporary episodic and film production projects. Using various commercial applications and bespoke tools, VFX artists craft digital objects (known as “assets”) to create visual elements such as characters and environments, which are composited together and output as shots. While the shots that make up the finished film or television (TV) episode are maintained and preserved within purpose-built digital asset management systems and repositories by the studios commissioning the projects, the wider VFX network currently has no consistent guidelines or requirements for the digital curation of VFX digital assets and records, including a lack of guidance on how to effectively futureproof digital VFX and preserve it for the long term. In this paper I provide a case study, a single shot from a 3D animation short film, to illustrate the complexities of digital VFX assets and records and the pipeline environments in which they are generated. I also draw on data collected from interviews with over 20 professional VFX practitioners from award-winning VFX companies, and I undertake a socio-technical analysis of VFX using actor-network theory. I explain how high volumes of digital information, rapid technology progression and dependencies on software pose significant preservation challenges. In addition, I outline that, by conducting holistic appraisal, selection and disposal activities across their entire digital collections, and by continuing to develop and adopt open formats, the VFX industry is better able to preserve first-hand evidence of its work in years to come.
25

Al-Shlool, Safaa. "(Im) Politeness and Gender in the Arabic Discourse of Social Media Network Websites: Facebook as a Norm." International Journal of Linguistics 8, no. 3 (June 13, 2016): 31. http://dx.doi.org/10.5296/ijl.v8i3.9301.

Full text
Abstract:
The present study aims to investigate the differences and similarities in the ways men and women use (im)politeness strategies when communicating “online” in the Arabic discourse of social media network websites such as Facebook, as well as the role the topic of conversation plays in the use of (im)politeness strategies. In addition, the study investigates the differences between men-men, women-women, and women-men communication in the Arabic discourse of the social media network website Facebook. For the purposes of this study, a corpus of online Arabic texts was collected from public web pages of the most popular TV show programs on some of the most well-liked social media network websites, such as Facebook, over a period of four months (September-December 2012). The obtained data were studied quantitatively and qualitatively. Many studies have been conducted on cross-gender differences, especially in computer-mediated communication (CMC), but none so far has focused on gender differences and (im)politeness in the Arabic discourse of social media network websites, although there is a huge number of Arabic users of such websites. The present study, therefore, attempts to fill this gap in the literature.
26

Abrams, Zsuzsanna Ittzes. "Possibilities and challenges of learning German in a multimodal environment: A case study." ReCALL 28, no. 3 (June 24, 2016): 343–63. http://dx.doi.org/10.1017/s0958344016000082.

Full text
Abstract:
Despite a growing body of research on task-based language learning (TBLT) (Samuda & Bygate, 2008; Ellis, 2003), there is still little information available regarding the pedagogical design behind tasks and how they are implemented (Samuda & Bygate, 2008). Scholars in computer-mediated second language (L2) learning have called for research to fill this gap by reflecting critically on task design and the subsequent implementation process (Fuchs, Hauck & Müller-Hartmann, 2012; Hampel, 2010; Hampel & Hauck, 2006; Hampel & Plaines, 2013; Hauck, 2010), instead of considering a task an “unproblematic fait accompli” (O’Dowd & Ware, 2009: 174). In response to this charge, the present case study provides a critical analysis of tasks carried out in a first-year German language course built around the weekly TV series Rosenheim-Cops (ZDF). These tasks drew from research on multimodality (Kress & van Leeuwen, 2001; Norris & Maier, 2014), which provides a framework for understanding how multiple semiotic systems work together to create meaning. The insights provided by this study have relevance for research on multimodal task design (Hampel & Hauck, 2006) through examining the possibilities and limitations of beginner second language (L2) learners’ effective use of authentic resources. The results suggest that the tasks encouraged semiotic awareness, helped activate referential knowledge useful for accessing multimodal resources, and elicited a positive response to authentic L2 use in context. The challenges of implementing these tasks included variation in learners’ engagement with authentic multimedia resources, due either to L2 skill levels or to the level of interest in the particular resource used.
27

Nezhad Nili, Fatemeh. "Investigating Decoupage and Cinematic Special-Effects in Ferdowsi's Shahnameh." Modern Applied Science 11, no. 2 (October 28, 2016): 50. http://dx.doi.org/10.5539/mas.v11n2p50.

Full text
Abstract:
Decoupage, or cinematic segmentation, is a step-by-step map used by a filmmaker to produce a film or television show, and a cinematic special effect is the reconstruction and re-creation of unrealistic or impossible scenes during the production process. Ferdowsi's Shahnameh, thanks to its attractive, deep and imaginative stories, has great potential for exploitation in movies and cinematic productions. The author aims to define the principles and basic concepts of decoupage and special effects, to extract famous stories from each of the mythological, heroic and historical sections of the Shahnameh, and to adapt the poems to cinematic segmentations. The author also treats each hemistich or line as a visual shot, and the battle descriptions and marvels illustrated in epic and heroic works as special effects capable of being reconstructed in the world of movies. Analysis of the selected stories, and the adaptation of Ferdowsi's imagery to the scenes defined in modern scripts, displays the inherent ability of the Shahnameh to regenerate and adapt itself to the present time thanks to its creative, image-making poet, and represents Ferdowsi as a filmmaker with a cinematic mind. In the end, the story is analyzed and decoupaged and turned into a full screenplay, without any interference and based solely on Ferdowsi's poems, so that it may become a work of art in various visual formats such as movies, TV shows, animations, computer games, etc. At the same time, it accounts for both form and content in a prominent work.
28

Ruggieri, Gianluca, Paolo Zangheri, Mattia Bulgarelli, and Patrizia Pistochini. "Monitoring a Sample of Main Televisions and Connected Entertainment Systems in Northern Italy." Energies 12, no. 9 (May 8, 2019): 1741. http://dx.doi.org/10.3390/en12091741.

Full text
Abstract:
Energy labels are a powerful instrument for influencing the electricity consumption of appliances and lighting devices in households. However, real consumption depends on a number of factors, including marketing policies, purchase preferences, technology development and, not least, behavioural habits. While consumption trends for white goods change over a longer period, the use of entertainment devices changes quickly. A number of devices (digital versatile disc (DVD) player, decoder, game console, home theater, video recorder) are normally connected to the main television set; these devices change rapidly and, at the same time, new behaviors are emerging. There is an increasing gap between, on the one hand, the rising consumption of televisions and connected devices and the number of regulations developed to govern them and, on the other hand, the lack of knowledge about real on-site consumption. To fill this gap, a measurement campaign was promoted and carried out in 2017 in a number of households in northern Italy. The consumption of 28 main televisions and 14 entertainment systems was measured on a daily basis for at least two weeks. Standby consumption was measured as well. The evaluated outcomes show that these devices are responsible for an average of 9.3% of total electricity consumption: 5.6% for televisions and 3.7% for the attached devices. Standby consumption is still considerably high (3.6% of total electricity consumption), especially for satellite decoders. Some interesting correlations were studied, highlighting the effect of the introduction of energy labels and the increasing size of TVs over time. The main results were compared with those of previous monitoring campaigns carried out in Italy.
29

Urazova, Svetlana Leonidovna. "Screen Communications As a Form of Socialization and Individualization." Journal of Flm Arts and Film Studies 7, no. 2 (June 15, 2015): 142–49. http://dx.doi.org/10.17816/vgik72142-149.

Full text
Abstract:
Widespread exposure to various kinds of mobile and stationary devices based on screen technologies provides grounds for the term screen communications. The article specifies the relevance of its usage and substantiates its principles of functioning in the context of updated social practices and the multimedia information space. The issue arises because the term screen communications has not yet entered academic usage, unlike such terms as communication, mass communication and social communication. Nevertheless, contemporary social practices of using the screen (cell phone, e-book, tablet, etc.) have turned into a daily routine and even demonstrate screen-phobia. The evolution of technologies, new media (multimedia, multi-platforms), the growth of information flows, the form and content diversity of informational products, the socialization effect and the accumulation of empirical experience urge society to resort to the screen for receiving information (film, video and TV production, Internet sites, social networks, computers, cell phones, tablets, e-books, electronic billboards, video-information systems, etc.). The article analyzes the characteristics of well-known forms and types of communication when mapped onto the term screen communications. The problem raises the need for a thorough analysis of the screen communications that different strata of society are mastering; moreover, the great significance of studying the peculiarities of screen culture in the digital era is emphasized. The article cites information about the emergence of Generation C, formed by social networks (a lecture at the Nielsen Consumer 360 Conference). The Connected Collective Consumer has a distinct identity and is ready for self-expression (ideas, cultural projects, etc.) within the group.
In conclusion, the article substantiates the nonlinearity of social systems' development, including social networks, which are exposed both to the socialization effect and to diversification and disintegration processes, leading to a collapse of communication connections.
30

Makukh-Fedorkova, Ivanna. "The Role of Cinema in the History of Media Education in Canada." Mediaforum : Analytics, Forecasts, Information Management, no. 7 (December 23, 2019): 221–34. http://dx.doi.org/10.31861/mediaforum.2019.7.221-234.

Full text
Abstract:
The era of audiovisual culture began more than a hundred years ago with the advent of cinema, and it is associated with a special language underlying non-verbal communication processes. Today, the screen's influence on humans is dominant, as a generation has grown up for which the computer is an integral part of everyday life. In recent years, non-verbal language around the world has been a major tool in the fight for influence over human consciousness and intelligence. The formation of the basic concepts of media education, which later developed into an international pedagogical movement, began in a number of Western countries (Great Britain, France, Germany) in the 1960s and 1970s. In Canada, as in most highly developed countries (USA, UK, France, Australia), the history of media education grew out of cinematographic material. The concept of screen education was formed by the British Society for Education in Film (SEFT), initiated by a group of enthusiastic educators in 1950. In the second half of the twentieth century, owing to the intensive development of television, the initial term “film teaching” was transformed into “screen education”. The high intensity of students' contact with new audiovisual media became a subject of pedagogical concern, and the problem of adjusting the relationship between the child audience and the media arose. The most progressive Canadian educators, recognizing the futility of trying to shield students from the growing impact of TV and cinema, began introducing a special course in Screen Arts. Teachers' use of the rich potential of the new audiovisual media greatly optimized the learning process itself, and the use of films in the classroom became increasingly motivated. At the end of 1968, an assistant position was created at the Ontario Department of Education to coordinate work in the field of “screen education”.
It is worth noting that media education in Canada developed under the influence of English media pedagogy. The first developments in the study of “screen education” were proposed in 1968 by the British professor A. Hodgkinson. Canadian institutions are actively implementing media education programs, as the development of e-learning is linked to the hope of solving a number of socio-economic problems: raising the general educational level of the population, expanding access to higher levels of education, meeting the demand for higher education, and organizing regular training of specialists in various fields. After all, to build an e-learning system, countries need to solve a set of complex technological problems to ensure the functioning of an extensive network of training centers, quality control of the educational process, training of teaching staff, and other issues. Today, it is safe to say that Canada's media education is on the rise and occupies a leading position in the world. Thus, at the beginning of the 21st century, Canada's media education reached a level of mass development based on serious theoretical and methodological groundwork. Moreover, Canada remains a world leader in higher education and spends at least $25 billion on its universities annually, with the United States, the United Kingdom and Australia its biggest competitors in this area.
31

Tong, Yangfan, Weiran Cao, Qian Sun, and Dong Chen. "The Use of Deep Learning and VR Technology in Film and Television Production From the Perspective of Audience Psychology." Frontiers in Psychology 12 (March 18, 2021). http://dx.doi.org/10.3389/fpsyg.2021.634993.

Full text
Abstract:
With the development of artificial intelligence (AI) technology, deep-learning (DL)-based virtual reality (VR) technology and DL technology are being applied in human-computer interaction (HCI), and their impacts on the production of modern film and TV works and on audience psychology are analyzed. In film and TV production, audiences demand ever greater verisimilitude and immersion, especially in film production. On this basis, a 2D image recognition system for human body motions and a 3D recognition system for human body motions, both based on the convolutional neural network (CNN) algorithm of DL, are proposed, and an analysis framework is established. The proposed systems are simulated on practical and professional datasets, respectively. The results show that the algorithm's computing performance in 2D image recognition is 7-9 times higher than that of the OpenPose method. It runs at 44.3 ms in 3D motion recognition, significantly lower than the OpenPose method's 794.5 and 138.7 ms. Although detection accuracy drops by 2.4%, the system is more efficient and convenient and is not limited to particular scenarios in practical applications. AI-based VR and DL enrich and expand the role and application of computer graphics in film and TV production through HCI technology, both theoretically and practically.
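The headline figures in this abstract are per-frame latencies (44.3 ms versus OpenPose's 794.5 ms and 138.7 ms). As a hypothetical illustration of how such per-frame figures are typically obtained (the paper's own benchmarking protocol is not described), a minimal timing harness might look like this; the function name, warm-up count, and stand-in model are all assumptions:

```python
import time

def mean_latency_ms(infer, frames, warmup=2):
    """Average per-frame latency of `infer` in milliseconds.

    `infer` is any callable taking a single frame; a real comparison
    would plug in the CNN under test and the OpenPose baseline here,
    running both on identical input frames.
    """
    for f in frames[:warmup]:      # warm-up runs, discarded from timing
        infer(f)
    start = time.perf_counter()
    for f in frames:
        infer(f)
    elapsed = time.perf_counter() - start
    return elapsed * 1000.0 / len(frames)

# Example with a stand-in "model" that just sums pixel values.
dummy_frames = [list(range(1000)) for _ in range(20)]
latency = mean_latency_ms(sum, dummy_frames)
```

A 7-9x speed-up claim like the abstract's would then correspond to the ratio of two such measurements taken on the same frames.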
32

Caldwell, Nick. "Virtual Domesticity." M/C Journal 3, no. 6 (December 1, 2000). http://dx.doi.org/10.5204/mcj.1885.

Full text
Abstract:
This paper has been constructed in a variety of working environments; some physical, some virtual, some merely perceptual. At an ever increasing rate, both work and leisure for the informationally-overexposed in contemporary, "post-modern" western cultures are taking place in a heterogeneous assemblage of working and living environments. At this point, I want to specifically consider the personal computer as a domestic space, one that mirrors, oddly and specifically, the ways that we relate to our own personal living spaces. I'm considering the domestic because I want to get away from some of the more outlandish configurations of virtual spaces in critical writing. These configurations in such writing tend to suggest that virtual spaces are utopian and transform our relationships with mundane reality. The mirroring I want to talk about is instead an illustration of the way we incorporate the virtual with the real in quite prosaic ways. One of the traditional conceptualisations of the divide between the private and the public sphere is to consider it as a distinction between play and work, although obviously this kind of distinction has the greatest currency when gender roles are highly specified in different spheres (and indeed the notion of culture as a bifurcated structure at all has less currency in postmodernity). The personal computer, being a multi-purpose computational and communication system, is capable of taking on many roles, depending on how its software infrastructure has been furnished. The kinds of distinctions we impose on its use have much stronger interactions at the wider cultural level. In Europe and in countries such as Australia and New Zealand, the so-called "personal home computer" of the 1980s held a much greater sway over the domestic market than it did in the United States, simply because buyers at the time had much less invested in the distinction between machines for work and for play. 
But shifting working patterns have forced machines that can do the work to go into places where they might not have been accepted before. And the clear distinctions of usage (work vs. play) have begun to blur in quite complex ways. But certain aspects of a computer's function have not quite managed to catch up... Despite the now unremarkable ubiquity of the personal computer in the home, the metaphors for data organisation in modern operating systems are clearly evolved from business office layouts. Discrete packages of information are "files", organised into "folders", "directories", or "drawers". Graphical User Interface (GUI) based operating systems further the metaphor, drawing files as sheets of paper with the corner folded, stored in manilla folders or filing cabinet drawers, which are accessed from a "desktop". These metaphorical associations of data types and programs suit the corporate office environment rather better, it could be argued, than the domestic environment, regardless of whatever actual use the computer is put to at home. I want to parallel two quite distinct and seemingly very different activities, by way of an illustration of how we arrange our data and our personal lives. The first is moving house. The second is re-installing Windows. Long time computer users will know what I'm talking about here, but for other, more fortunate types, I'll explain. The Windows operating system for PCs controls everything about the computer, from the way a piece of software can save data to the hard drive, to making sure the cursor and the mouse are in sync. Before a piece of software, such as a word processor, can be used normally, it must be copied to the hard drive of the computer and made to communicate with the operating system. This, in Windows, involves the operating system registering the location and function of every file that makes the software work. 
The more programs the user adds, the more this "registry" gets filled up, and the greater the chance that the data held in the registry breaks in some way. Eventually, the operating system grinds to a halt, and the computer seems barely functional. At which point, the only solution is to save all the user's data to a separate location, wipe the contents of the hard drive, and reload all the software, starting with the operating system. This process is intensely tedious, and occasionally disastrous. Sometimes, programs only work due to very precise and unrepeatable combinations of driver software. Occasionally, the user might forget to back up all their data, and lose important files. Idiosyncratic settings that give the machine a unique personality for the user will invariably vanish. There can be any number of important reasons for moving house, but typically they don't involve the fact that the owner has broken the old one. However, in some way, the house is less fit for residence than it was previously. So it is with a computer re-installation. The level of psychic trauma associated with a move may be far greater than that with a reformat, but it is quantitatively, not qualitatively so. And just as a new house can be a breath of fresh air (even literally), a newly installed operating system will seem to fly through activities that it once wheezed through, even if the underlying hardware is the same. A computer is a domestic space for power-users. It has to be cleaned, maintained, customised and made unique. Like a house, it is a text that has inscribed on it every change and renovation that the occupant has seen fit to implement within it. Although the metaphors of operating systems encourage us to think of data structures as we think of office stationery and equipment, the context of the usage and the attachments we form with these abstract structures colour the way we construct their environments.
This is even starting to feed into the design of user interfaces. Music software is increasingly beginning to resemble hi-fi gear in its graphical designs. Image viewers have control panels like VCRs. I don't think that this can be reduced to a notion of form following function. Often these new interfaces actually get in the way of efficient operation of the software. But considered semiotically, they relate very strongly to our patterns of interaction with living environments. Computers are tremendously widespread in western nations, and yet we are in a transitional period. They are colonising domestic spaces, creating new forms of spatiality that are domestic but both physical and virtualised. Moves have been underway for several years to bring this colonisation to an apotheosis, a state of ubiquitous computing, whereby computers become seamlessly and invisibly integrated into all domestic and working environments. Early attempts have included the "WebTV" project, a mostly unsuccessful attempt to turn the TV into an Internet browser. If such a change does come about, it will almost certainly transform the way we interact with our computing technology again, and this time, in unrecognisable ways. References Poole, Steven. Trigger Happy: The Inner Life of Video Games. London: Fourth Estate, 2000. Weiser, Mark. "Ubiquitous Computing." 12 Dec. 2000 <http://www.ubiq.com/hypertext/weiser/UbiHome.php>. Citation reference for this article MLA style: Nick Caldwell. "Virtual Domesticity: Renewing the Notion of Cybernetic Living and Working Environments." M/C: A Journal of Media and Culture 3.6 (2000). [your date of access] <http://www.api-network.com/mc/0012/virtual.php>. Chicago style: Nick Caldwell, "Virtual Domesticity: Renewing the Notion of Cybernetic Living and Working Environments," M/C: A Journal of Media and Culture 3, no. 6 (2000), <http://www.api-network.com/mc/0012/virtual.php> ([your date of access]). APA style: Nick Caldwell.
(2000) Virtual Domesticity: Renewing the Notion of Cybernetic Living and Working Environments. M/C: A Journal of Media and Culture 3(6). <http://www.api-network.com/mc/0012/virtual.php> ([your date of access]).
33

Yocom, P. Niel. "Overview of the Major Phosphor Technologies." MRS Proceedings 348 (1994). http://dx.doi.org/10.1557/proc-348-495.

Full text
Abstract:
Phosphors are ubiquitously present in our everyday environment. The most common uses are in fluorescent lamps and display devices; e.g., TV tubes and computer displays. In addition, phosphors are used in medical radiology to transform the x-ray energy to light that is recorded on film for diagnostic purposes. The types of phosphors, mechanisms of operation, and future needs for these major phosphor technologies will be surveyed. In addition, brief mention will be made of some of the minor uses of phosphor materials.
34

Richards, Stephen. "Reviews." Research in Learning Technology 1, no. 1 (December 30, 2011). http://dx.doi.org/10.3402/rlt.v1i1.9475.

Full text
Abstract:
Compact Disc Interactive (CD-I) is a new electronic publishing medium for multimedia information. Unlike conventional publishing media such as paper and film, CD-I provides an interactive method of accessing stored information and controlling its subsequent display on a TV screen. CD-I revolutionizes the publishing of all sorts of material such as music, text, images, computer graphics, film and video. It also adds many capabilities not possible with traditional publication media. Until recently, however, the only widely available textbook on CD-I was Preston's Compact Disc-Interactive: A Designer's Overview, published in 1988 by Kluwer Technical Books. Now, with the recent release of CD-I in Europe, three new books on the technology have become available. They form part of The CD-I Series produced by Philips Interactive Media Systems (UK) and published by Addison-Wesley. All three have 1992 imprints. DOI: 10.1080/0968776930010109
35

Huertas-Martín, Víctor. "Off-Modern Hybridity in TV Theatre: Theatrical, Cinematic and Media Temporalities in Rupert Goold’s Macbeth (BBC - Illuminations Media, 2010)." International Journal of Transmedia Literacy (IJTL) 5 (January 12, 2021). http://dx.doi.org/10.7358/ijtl-2019-004-huer.

Full text
Abstract:
Rupert Goold’s screen production of Macbeth, first staged in 2007 and later filmed in 2010, has been studied as an example of the stage-to-screen hybrid corpus of Shakespearean audio-visual adaptations. Thus, much of the critical emphasis on the production has been placed on its filmic qualities. In particular, the genre-film conventions deployed across the film have attracted the attention of Shakespeare-on-screen scholars, and it has been the creators’ intention to point precisely to Goold’s filmic intertextual repertoire. Given the recent increasing attention to the multiple media and languages employed in stage-to-screen hybrid Shakespearean adaptations, and to other exchanges between the languages of the stage and film in reworking Shakespearean and theatrical productions, it is instructive to observe the ways in which adaptations such as this one engage with larger processes of transmedia storytelling, attending not only to theatrical and filmic languages but to the transmedia strategies these TV theatrical films make use of. It is likewise instructive to look into the narrative and philosophical purposes served by transmedia storytelling, as the multiple media and languages used in the film display a range of temporalities and associated film genres that allow us to expand the interpretive range of Shakespeare’s source text. Following this premise, this essay examines Goold’s Macbeth as a nostalgia narrative in which transmedia strategies display a range of media-based narrative strands that expand the film’s range of possible interpretations. To prove this, I insert Goold’s film into the larger process of transmedia storytelling encompassing the performance history of Macbeth. Additionally, I identify narrative strands in Goold’s televisual, theatrical, musical, poetic and computer-based sources.
The results will show that Macbeth – and, by extension, potentially this applies to TV theatrical adaptations of Shakespeare’s plays – constitutes a strand of the larger corpus of transmedia storytelling wrapping up the Scottish play’s performance history as well as Shakespeare’s overall performance history.
36

Thomas, Sue, Chris Joseph, Jess Laccetti, Bruce Mason, Simon Mills, Simon Perril, and Kate Pullinger. "Transliteracy: Crossing divides." First Monday, December 12, 2007. http://dx.doi.org/10.5210/fm.v12i12.2060.

Full text
Abstract:
Transliteracy might provide a unifying perspective on what it means to be literate in the twenty-first century. It is not a new behavior but has only been identified as a working concept since the internet generated new ways of thinking about human communication. This article defines transliteracy as “the ability to read, write and interact across a range of platforms, tools and media from signing and orality through handwriting, print, TV, radio and film, to digital social networks” and opens the debate with examples from history, orality, philosophy, literature, and ethnography. We invite responses, expansion, and development.
37

Hollier, Scott, Katie M. Ellis, and Mike Kent. "User-Generated Captions: From Hackers, to the Disability Digerati, to Fansubbers." M/C Journal 20, no. 3 (June 21, 2017). http://dx.doi.org/10.5204/mcj.1259.

Full text
Abstract:
Writing in the American Annals of the Deaf in 1931, Emil S. Ladner Jr, a Deaf high school student, predicted the invention of words on screen to facilitate access to “talkies”. He anticipated: “Perhaps, in time, an invention will be perfected that will enable the deaf to hear the ‘talkies’, or an invention which will throw the words spoken directly under the screen as well as being spoken at the same time” (Ladner, cited in Downey, Closed Captioning). This invention would eventually come to pass and be known as captions. Captions as we know them today have become widely available because of a complex interaction between technological change, volunteer effort, and legislative activism, as well as increasing consumer demand. This began in the late 1950s, when the technology to develop captions began to emerge. Almost immediately, volunteers began captioning and distributing both film and television in the US via schools for the deaf (Downey, “Constructing Closed-Captioning in the Public Interest”). Then, between the 1970s and 1990s, Deaf activists and their allies campaigned aggressively for the mandated provision of captions on television, leading eventually to the passing of the Television Decoder Circuitry Act in the US in 1990 (Ellis). This act decreed that any television with a screen greater than 13 inches must be designed and manufactured to be capable of displaying captions. The Act was replicated internationally, with countries such as Australia adopting the same requirements in their standards for television sets imported into the country.
As other papers in this issue demonstrate, this market ultimately led to the introduction of broadcasting requirements. Captions are also vital to the accessibility of videos in today’s online and streaming environment—captioning is listed as the highest priority in the definitive World Wide Web Consortium (W3C) Web Content Accessibility Guidelines (WCAG) 2.0 standard (W3C, “Web Content Accessibility Guidelines 2.0”). This recognition of the requirement for captions online is further reflected in legislation, from both the US 21st Century Communications and Video Accessibility Act (CVAA) (2010) and the Australian Human Rights Commission (2014). Television today is therefore much more freely available to a range of different groups. In addition to broadcast channels, captions are also increasingly available through streaming platforms such as Netflix and other subscription video on demand providers, as well as through user-generated video sites like YouTube. However, a clear discrepancy exists between guidelines, legislation, and the industry’s approach. Guidelines such as the W3C’s are often resisted by industry until compliance is legislated. Historically, captions have been both unavailable (Ellcessor; Ellis) and inadequate (Ellis and Kent), and in many instances they still are. For example, while the provision of captions in online video is viewed as a priority across international and domestic policies and frameworks, there is a stark contrast between the policy requirements and the practical implementation of these captions. This has led to the active development of a solution as part of an ongoing tradition of user-led development: user-generated captions.
However, within disability studies, research around the agency of this activity—and the media-savvy users facilitating it—has gone significantly underexplored.

Agency of Activity

Information sharing has featured heavily throughout visions of the Web—from Vannevar Bush’s 1945 notion of the memex (Bush), to the hacker ethic, to Zuckerberg’s motivations for creating Facebook in his dorm room in 2004 (Vogelstein)—resulting in a wide agency of activity on the Web. Running through this development of first the Internet and then the Web as a place for a variety of agents to share information has been the hackers’ ethic that sharing information is a powerful, positive good (Raymond 234), that information should be free (Levey), and that achieving these goals will often involve working around intended information access protocols, sometimes illegally and normally anonymously. From the hacker culture comes the digerati, the elite of the digital world: web users who stand out through their contributions, success, or status in the development of digital technology. In the context of access to information for people with disabilities, we describe those who find these workarounds—providing access to information through mainstream online platforms in ways that are not immediately apparent—as the disability digerati.

An acknowledged mainstream member of the digerati, Tim Berners-Lee, inventor of the World Wide Web, articulated a vision for the Web and its role in information sharing as inclusive of everyone: Worldwide, there are more than 750 million people with disabilities. As we move towards a highly connected world, it is critical that the Web be useable by anyone, regardless of individual capabilities and disabilities … The W3C [World Wide Web Consortium] is committed to removing accessibility barriers for all people with disabilities—including the deaf, blind, physically challenged, and cognitively or visually impaired.
We plan to work aggressively with government, industry, and community leaders to establish and attain Web accessibility goals. (Berners-Lee)

Berners-Lee’s utopian vision of a connected world where people freely shared information online has subsequently been embraced by many key individuals and groups. His emphasis on people with disabilities, however, is somewhat unique. While maintaining a focus on accessibility, in 2006 he shifted focus to who could actually contribute to this idea of accessibility when he suggested the idea of “community captioning” to video bloggers struggling with the notion of including captions on their videos: “The video blogger posts his blog—and the web community provides the captions that help others” (Berners-Lee, cited in Outlaw). Here, Berners-Lee was addressing community captioning in the context of video blogging and user-generated content. However, the concept is equally significant for professionally created videos, and media-savvy users can now also offer instructions to audiences about how to access captions and subtitles. This shift—from user-generated content to user access—must be situated historically in the context of an evolving Web 2.0 and changing accessibility legislation and policy.

In the initial accessibility requirements of the Web there was little mention of captioning at all, primarily because video was difficult to stream over a dial-up connection. This was reflected in the initial WCAG 1.0 standard (W3C, “Web Content Accessibility Guidelines 1.0”), in which there was no requirement for videos to be captioned. WCAG 2.0 went some way towards addressing this, making captioning online video an essential Level A priority (W3C, “Web Content Accessibility Guidelines 2.0”).
However, there were few tools that could actually be used to create captions, and little interest from emerging online video providers in making this a priority. As a result, the possibility of user-generated captions for video content began to be explored by both developers and users. One initial captioning tool that gained popularity was MAGpie, produced by the WGBH National Center for Accessible Media (NCAM) (WGBH). While cumbersome by today’s standards, the arrival of MAGpie 2.0 in 2002 provided an affordable and professional captioning tool that allowed people to create captions for their own videos. However, at that point there was little opportunity to caption videos online, so the focus was more on captioning personal video collections offline. This changed with the launch of YouTube in 2005 and its later purchase by Google (CNET), leading to an explosion of user-generated video content online. However, while the introduction of YouTube closed-captioned video support in 2006 ensured that captioned video content could be created (YouTube), the ability for users to create captions, save the output into one of the appropriate captioning file formats, upload the captions, and synchronise the captions to the video remained a difficult task.

Improvements to the production and availability of user-generated captions arrived first through the launch of YouTube’s automated captions feature in 2009 (Google). This service meant that videos could be uploaded to YouTube and, if the user requested it, Google would caption the video within approximately 24 hours using its speech recognition software. While the introduction of this service was highly beneficial in terms of making captioning videos easier and ensuring that the timing of captions was accurate, the quality of the captions ranged significantly. In essence, if the captions were not reviewed and errors not addressed, the automated captions were sometimes inaccurate to the point of hilarity (New Media Rock Stars).
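The caption file formats discussed above are, for the most part, simple text formats. As an illustrative sketch (not tied to any particular tool named here), the SubRip (.srt) format later referred to in this article can be parsed in a few lines of Python; the cue structure below is an assumption based on the common published form of SRT (index line, `start --> end` timestamp line, caption text, blank line between cues):

```python
import re

def parse_srt(text):
    """Parse SubRip (.srt) captions into (start, end, text) cues.

    Times are returned in seconds. Each cue block is: a numeric index,
    a "HH:MM:SS,mmm --> HH:MM:SS,mmm" timing line, then caption lines.
    """
    cues = []
    # Cue blocks are separated by blank lines.
    for block in re.split(r"\n\s*\n", text.strip()):
        lines = block.splitlines()
        if len(lines) < 2:
            continue
        m = re.match(
            r"(\d{2}):(\d{2}):(\d{2})[,.](\d{3})\s*-->\s*"
            r"(\d{2}):(\d{2}):(\d{2})[,.](\d{3})",
            lines[1],
        )
        if not m:
            continue
        h1, m1, s1, ms1, h2, m2, s2, ms2 = map(int, m.groups())
        start = h1 * 3600 + m1 * 60 + s1 + ms1 / 1000
        end = h2 * 3600 + m2 * 60 + s2 + ms2 / 1000
        cues.append((start, end, "\n".join(lines[2:])))
    return cues
```

The simplicity of the format helps explain why volunteer captioning tools proliferated: the hard part was never the file itself, but synchronising the cues to the video.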
These inaccurate YouTube captions are colloquially described as craptions. A #nomorecraptions campaign was launched to address inaccurate YouTube captioning and call on YouTube to make improvements. The ability to create professional user-generated captions across a variety of platforms, including YouTube, arrived in 2010 with the launch of Amara Universal Subtitles (Amara). The Amara subtitle portal provides users with the opportunity to caption online videos, even if they are hosted by another service such as YouTube. The caption file can be saved after its creation and then uploaded to the relevant video source if the user has access to the location of the video content. The arrival of Amara continues to provide ongoing benefits: it contains a professional captioning editing suite catering specifically for online video, the tool is free, and it can caption videos located on other websites. Furthermore, Amara offers the additional benefit of being able to address the issues of YouTube automated captions—users can take advantage of the accurate timing of YouTube’s machine-generated captions, download the captions for editing in Amara to fix the errors, and then return the captions to the original video, saving a significant amount of time when captioning large amounts of video content. In recent years Google has also endeavoured to simplify the captioning process for YouTube users by including its own captioning editors, but these tools are generally considered inferior to Amara (Media Access Australia).

Similarly, several crowdsourced caption services such as Viki (https://www.viki.com/community) have emerged to facilitate the provision of captions. However, most of these crowdsourced captioning services cannot tap into commercial products, instead offering a service for people who have a video they have created, or one that already exists on YouTube.
While Viki was highlighted as a useful platform in protests regarding Netflix’s lack of captions in 2009, commercial entertainment providers still have a responsibility to make improvements to their captioning. As we discuss in the next section, people have resorted to extreme measures to hack Netflix to access the captions they need. While the ability for people to publish captions on user-generated content has improved significantly, there is still a notable lack of captions for professionally developed videos, movies, and television shows available online.

User-Generated Netflix Captions

In recent years there has been a worldwide explosion of subscription video on demand service providers, and Netflix epitomises the trend. As such, for people with disabilities, there has been significant focus on the availability of captions on these services (see Ellcessor; Ellis and Kent). Netflix, as the current leading provider of subscription video entertainment in the US, with large market shares in other countries, has been at the centre of these discussions. While Netflix offers a comprehensive range of captioned video on its service today, there are still videos that do not have captions, particularly in non-English regions. As a result, users have endeavoured to produce user-generated captions for personal use and to find workarounds to access these through the Netflix system. This has been achieved with some success.

There are a number of ways in which captions or subtitles can be added to Netflix video content to improve its accessibility for individual users. An early guide in a 2011 blog post (Emil’s Celebrations) identified that, when using the Netflix player with the Silverlight plug-in, it is possible to access a hidden menu which allows a subtitle file in the DFXP format to be uploaded to Netflix for playback. However, this does not appear to provide the file to all Netflix users, and is generally referred to as a “soft upload” for the individual user only.
Another method, generally credited as the “easiest” way, is to find an SRT file that already exists for the video title, edit the timing to line up with Netflix, use a third-party tool to convert it to the DFXP format, and then upload it using the hidden menu, which requires a specific keyboard command to access. While this may be considered uncomplicated for some, there is still a certain amount of technical knowledge required to complete this action, and it is likely to be too complex for many users.

However, constant developments in technology are making access to captions an easier process. Recently, Cosmin Vasile highlighted that caption and subtitle tracks can still be uploaded provided that the older Silverlight plug-in is used for playback instead of the new HTML5 player. Others add that it is technically possible to access the hidden feature in an HTML5 player, but an additional Super Netflix browser plug-in is required (Sommergirl). Further, while the procedure for uploading the file remains similar to the approach discussed earlier, there are some additional tools available online, such as Subflicks, which can provide a simple online conversion of the more common SRT file format to the DFXP format (Subflicks). However, while the ability to use a personal caption or subtitle file remains, the most common way to watch Netflix videos with alternative caption or subtitle files is through the Smartflix service (Smartflix). Unlike other ad hoc solutions, this service provides a simplified mechanism for bringing alternative caption files to Netflix. The Smartflix website states that the service “automatically downloads and displays subtitles in your language for all titles using the largest online subtitles database.” This automatic download and sharing of captions online—known as fansubbing—facilitates easy access for all.
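The conversion step described above (shifting an SRT file's timing and re-encoding it as DFXP) can be sketched in Python. DFXP is an XML profile of the W3C's Timed Text Markup Language (TTML); the minimal document below uses the standard TTML namespace and a hypothetical `(start, end, text)` cue structure, as an illustration of the general transformation rather than the exact file any given Netflix player build expects:

```python
from xml.sax.saxutils import escape

def seconds_to_ttml(t):
    """Format seconds as an hh:mm:ss.mmm TTML clock time."""
    ms = round(t * 1000)
    h, rem = divmod(ms, 3_600_000)
    m, rem = divmod(rem, 60_000)
    s, ms = divmod(rem, 1000)
    return f"{h:02d}:{m:02d}:{s:02d}.{ms:03d}"

def cues_to_dfxp(cues, offset=0.0):
    """Emit a minimal DFXP/TTML document from (start, end, text) cues.

    `offset` (in seconds) shifts every cue, e.g. to line captions up
    with a stream whose timing differs from the subtitle file's source.
    """
    body = "\n".join(
        f'      <p begin="{seconds_to_ttml(s + offset)}" '
        f'end="{seconds_to_ttml(e + offset)}">{escape(text)}</p>'
        for s, e, text in cues
    )
    return (
        '<?xml version="1.0" encoding="UTF-8"?>\n'
        '<tt xmlns="http://www.w3.org/ns/ttml">\n'
        "  <body>\n    <div>\n"
        f"{body}\n"
        "    </div>\n  </body>\n</tt>\n"
    )
```

A sketch like this makes visible why the manual workflow was "too complex for many users": it requires juggling timing arithmetic, XML escaping, and format-specific namespaces that services such as Subflicks and Smartflix hide behind a single button.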
For example, blog posts suggest that technology such as this creates important access opportunities for people who are deaf and hard of hearing. Nevertheless, fansubbers can be met with suspicion by copyright holders. For example, a recent case in the Netherlands ruled that fansubbers were engaging in illegal activities and were encouraging people to download pirated videos. While the fansubbers, like the hackers discussed earlier, argued they were acting in the greater good, the Dutch anti-piracy association (BREIN) maintained that subtitles are mainly used by people downloading pirated media, and sought to outlaw the manufacture and distribution of third-party captions (Anthony). The fansubbers took the issue to court in order to seek clarity about whether copyright holders can reserve exclusive rights to create and distribute subtitles. However, in a ruling against the fansubbers, the court agreed with BREIN that fansubbing violated copyright and incited piracy. What impact this ruling will have on the practice of user-generated captioning online, particularly around popular sites such as Netflix, is hard to predict; however, for people with disabilities who were relying on fansubbing to access content, it is of significant concern that the contention that the main users of user-generated subtitles (or captions) are engaging in illegal activities was so readily accepted.

Conclusion

This article has focused on user-generated captions and the types of platforms available to create them. It has shown that the desire to provide access, to set information free, has resulted in the disability digerati finding workarounds that allow users to upload their own captions and make content accessible.
Indeed, the idea of the Internet and then the Web as a place for information sharing is evident throughout this history of user-generated captioning online, from Berners-Lee’s conception of community captioning, to Emil’s and Vasile’s instructions to a Netflix community of captioners, to, finally, a group of fansubbers who took BREIN to court and lost. Therefore, while we have conceived of the disability digerati as a conflation of the hacker and the acknowledged digital influencer, these two positions may again part ways, and the disability digerati may—like the hackers before them—be driven underground.

Captioned entertainment content offers a powerful, even vital, mode of inclusion for people who are deaf or hard of hearing. Yet, despite Berners-Lee’s urging that everything online be made accessible to people with all sorts of disabilities, captions were not addressed in the first iteration of the WCAG, perhaps reflecting the limitations of the speed of the medium itself. This continues to be the case today—although it is no longer difficult to stream video online, and Netflix has reached global dominance, audiences who require captions still find themselves fighting for access. Thus, in this sense, user-generated captions remain an important—yet seemingly technologically and legislatively complicated—avenue for inclusion.

References

Anthony, Sebastian. “Fan-Made Subtitles for TV Shows and Movies Are Illegal, Court Rules.” Arstechnica UK (2017). 21 May 2017 <https://arstechnica.com/tech-policy/2017/04/fan-made-subtitles-for-tv-shows-and-movies-are-illegal/>.
Amara. “Amara Makes Video Globally Accessible.” Amara (2010). 25 Apr. 2017 <https://amara.org/en/>.
Berners-Lee, Tim. “World Wide Web Consortium (W3C) Launches International Web Accessibility Initiative.” Web Accessibility Initiative (WAI) (1997). 19 June 2010 <http://www.w3.org/Press/WAI-Launch.html>.
Bush, Vannevar. “As We May Think.” The Atlantic (1945). 26 June 2010 <http://www.theatlantic.com/magazine/print/1969/12/as-we-may-think/3881/>.
CNET. “YouTube Turns 10: The Video Site That Went Viral.” CNET (2015). 24 Apr. 2017 <https://www.cnet.com/news/youtube-turns-10-the-video-site-that-went-viral/>.
Downey, Greg. Closed Captioning: Subtitling, Stenography, and the Digital Convergence of Text with Television. Baltimore: Johns Hopkins UP, 2008.
———. “Constructing Closed-Captioning in the Public Interest: From Minority Media Accessibility to Mainstream Educational Technology.” Info: The Journal of Policy, Regulation and Strategy for Telecommunications, Information and Media 9.2/3 (2007): 69–82.
Ellcessor, Elizabeth. “Captions On, Off on TV, Online: Accessibility and Search Engine Optimization in Online Closed Captioning.” Television & New Media 13.4 (2012): 329–352. <http://tvn.sagepub.com/content/early/2011/10/24/1527476411425251.abstract?patientinform-links=yes&legid=sptvns;51v1>.
Ellis, Katie. “Television’s Transition to the Internet: Disability Accessibility and Broadband-Based TV in Australia.” Media International Australia 153 (2014): 53–63.
Ellis, Katie, and Mike Kent. “Accessible Television: The New Frontier in Disability Media Studies Brings Together Industry Innovation, Government Legislation and Online Activism.” First Monday 20 (2015). <http://firstmonday.org/ojs/index.php/fm/article/view/6170>.
Emil’s Celebrations. “How to Add Subtitles to Movies Streamed in Netflix.” 16 Oct. 2011. 9 Apr. 2017 <https://emladenov.wordpress.com/2011/10/16/how-to-add-subtitles-to-movies-streamed-in-netflix/>.
Google. “Automatic Captions in YouTube.” 2009. 24 Apr. 2017 <https://googleblog.blogspot.com.au/2009/11/automatic-captions-in-youtube.html>.
Jaeger, Paul. “Disability and the Internet: Confronting a Digital Divide.” Disability in Society. Ed. Ronald Berger. Boulder, London: Lynne Rienner Publishers, 2012.
Levey, Steven. Hackers: Heroes of the Computer Revolution. Sebastopol: O’Reilly Media, 1984.
Media Access Australia. “How to Caption a YouTube Video.” 2017. 25 Apr. 2017 <https://mediaaccess.org.au/web/how-to-caption-a-youtube-video>.
New Media Rock Stars. “YouTube’s 5 Worst Hilariously Catastrophic Auto Caption Fails.” 2013. 25 Apr. 2017 <http://newmediarockstars.com/2013/05/youtubes-5-worst-hilariously-catastrophic-auto-caption-fails/>.
Outlaw. “Berners-Lee Applies Web 2.0 to Improve Accessibility.” Outlaw News (2006). 25 June 2010 <http://www.out-law.com/page-6946>.
Raymond, Eric S. The New Hacker’s Dictionary. 3rd ed. Cambridge: MIT P, 1996.
Smartflix. “Smartflix: Supercharge Your Netflix.” 2017. 9 Apr. 2017 <https://www.smartflix.io/>.
Sommergirl. “[All] Adding Subtitles in a Different Language?” 2016. 9 Apr. 2017 <https://www.reddit.com/r/netflix/comments/32l8ob/all_adding_subtitles_in_a_different_language/>.
Subflicks. “Subflicks V2.0.0.” 2017. 9 Apr. 2017 <http://subflicks.com/>.
Vasile, Cosmin. “Netflix Has Just Informed Us That Its Movie Streaming Service Is Now Available in Just About Every Country That Matters Financially, Aside from China, of Course.” 2016. 9 Apr. 2017 <http://news.softpedia.com/news/how-to-add-custom-subtitles-to-netflix-498579.shtml>.
Vogelstein, Fred. “The Wired Interview: Facebook’s Mark Zuckerberg.” Wired Magazine (2009). 20 Jun. 2010 <http://www.wired.com/epicenter/2009/06/mark-zuckerberg-speaks/>.
W3C. “Web Content Accessibility Guidelines 1.0.” W3C Recommendation (1999). 25 Jun. 2010 <http://www.w3.org/TR/WCAG10/>.
———. “Web Content Accessibility Guidelines (WCAG) 2.0.” 11 Dec. 2008. 21 Aug. 2013 <http://www.w3.org/TR/WCAG20/>.
WGBH. “MAGpie 2.0—Free, Do-It-Yourself Access Authoring Tool for Digital Multimedia Released by WGBH.” 2002. 25 Apr. 2017 <http://ncam.wgbh.org/about/news/pr_05072002>.
YouTube. “Finally, Caption Video Playback.” 2006. 24 Apr. 2017 <http://googlevideo.blogspot.com.au/2006/09/finally-caption-playback.html>.
38

Lindgren, Simon. "Sub*culture: Exploring the dynamics of a networked public." Transformative Works and Cultures 14 (October 21, 2012). http://dx.doi.org/10.3983/twc.2013.0447.

Full text
Abstract:
The sub scene, an online community for creating and distributing subtitle files for pirated movies and TV series, is a culture wherein the knowledge of a number of contributors is pooled. I describe the cultural and social protocols that shape the sub scene, with a focus on the linguistic and social exchange that characterizes this particular networked public. Analysis of the linguistic exchange shows that the sub scene is about networked collaboration, but one under a relatively strict social code. The analysis of the social exchange is structured according to Quentin Jones's definition of a virtual settlement. There is a minimum level of interactivity, as well as a variety of communicators, on the sub scene. It can also be described as a virtual common public place where computer-mediated interaction takes place, both in the form of coordination networks and of expert/user networks. Furthermore, it has a minimum level of sustained membership. The culture of the sub scene simultaneously bears characteristics of socialized and alienated cyberculture, which should not be perceived as a contradiction. The development of Internet culture is always happening within the full complexity of society as a whole, and the interplay between unity and discord must be seen as the basis for the social integration of any group.
39

Wasser, Frederick. "Media Is Driving Work." M/C Journal 4, no. 5 (November 1, 2001). http://dx.doi.org/10.5204/mcj.1935.

Full text
Abstract:
My thesis is that new media, starting with analog broadcast and going through digital convergence, blur the line between work time and free time. The technology that we are adopting has transformed free time into potential and actual labour time. At the dawn of the modern age, work shifted from tasked time to measured time. Previously, tasked time intermingled work and leisure according to the vagaries of nature. All this was banished when industrial capitalism instituted the work clock (Mumford 12-8). But now, many have noticed how post-industrial capitalism features a new intermingling captured in such expressions as "24/7" and "multi-tasking." Yet, we are only beginning to understand that media are driving a return to the pre-modern, where the hour and the space are both ambiguous, available for either work or leisure. This may be the unfortunate side effect of the much-vaunted "interactivity." Do you remember the old American TV show Dobie Gillis (1959-63), which featured the character Maynard G. Krebs? He always shuddered at the mention of the four-letter word "work." Now, American television shows make it a point that everyone works (even if just barely). Seinfeld was a bold exception in featuring the work-free Kramer, a deliberate homage to the 1940s team of Abbott and Costello. Today, as welfare is turned into workfare, The New York Times scolds even the idle rich into adopting the work ethic (Yazigi).

The Forms of Broadcast and Digital Media Are Driving the Merger of Work and Leisure More than the Content

It is not just the content of television and other media that is undermining the leisured life; it is the social structure within which we use the media. Broadcast advertisements were the first mode/media combinations that began to recolonise free time for the new consumer economy. There had been a previous buildup in the volume and ubiquity of advertising, particularly in billboards and print.
However, the attention of the reader to the printed commercial message could not be controlled and measured. Radio was the first medium to appropriate and measure its audience's time for the purposes of advertising. Nineteenth-century media had promoted a middle-class lifestyle based on spending money on the home to create a refuge from work. Twentieth-century broadcasting was now planting commercial messages within that refuge, in the sacred moments of repose. Subsequent to broadcast, home video and cable facilitated flexible work by offering entertainment on a 24-hour basis. Finally, the computer, which juxtaposes image/sound/text within a single machine, offers the user the same proto-interactive blend of entertainment and commercial messages that broadcasting pioneered. It also fulfills the earlier promise of interactive TV by allowing us to work and to shop at all parts of the day and night. We need to theorise this movement. The theory of media as work needs an institutional perspective. Therefore, I begin with Dallas Smythe's blindspot argument, which gave scholarly gravitas to the structural relationship of work and media (263-299). Horkheimer and Adorno had already noticed that capitalism was extending work into free time (137). Dallas Smythe went on to dissect the precise means by which late capitalism was extending work. Smythe restates the Marxist definition of capitalist labour as that human activity which creates exchange value. He then considered the advertising industry, which currently approaches $200 billion in the USA, and realised that a great deal of exchange value has been created. The audience is one element of the labour that creates this exchange value. The appropriation of people's time creates advertising value. The time we spend listening to commercials on radio or viewing them on TV can be measured and is the unit of production for the value of advertising. Our viewing time ipso facto has been changed into work time.
We may not experience it subjectively as work time, although pundits such as Marie Winn and Jerry Mander suggest that TV viewing contributes to the same physical stresses as actual work. Nonetheless, Smythe sees commercial broadcasting as expanding the realm of capitalism into time that was otherwise set aside for private uses. Smythe's essay created a certain degree of excitement among political economists of media. Sut Jhally used Smythe to explain aspects of US broadcast history, such as the innovations of William Paley in creating the CBS network (Jhally 70-9). In 1927, as Paley contemplated winning market share from his rival NBC, he realised that selling audience time was far more profitable than selling programs. Therefore, he paid affiliated stations to air his network's programs while NBC was still charging them for the privilege. It was more lucrative for Paley to turn around and sell the stations' guaranteed time to advertisers than to collect direct payments for supplying programs. NBC switched to his business model within a year. The Smythe/Jhally model explains the superiority of Paley's approach and is a historical proof of Smythe's thesis. Nonetheless, many economists and media theorists have responded with a "so what?" to Smythe's thesis that watching TV is work. Everyone knows that the basis of network television is the sale of "eyeballs" to the advertisers. However, Smythe's thesis remains suggestive. Perhaps he arrived at it after working at the U.S. Federal Communications Commission from 1943 to 1948 (Smythe 2). He was part of a team that made one last futile attempt to force radio to embrace public interest programming. This effort failed because the tide of consumerism was too strong. Radio and television were the leading edge of recapturing the home for work, setting the stage for the Internet and a postmodern replication of the cottage industries of pre- and proto-industrial worlds. The consequences have been immense.
The Depression and the Crisis of Over-Production

Cultural studies recognises that social values have shifted from production to consumption (Lash and Urry). The shift has a crystallising moment in the Great Depression of 1929 through 1940. One proposal at the time was to reduce individual work hours in order to create more jobs (see Hunnicut). This proposal to "share the work" was not adopted. From the point of view of the producer, sharing the work would make little difference to productivity. However, from the retailer's perspective, each individual worker would accumulate less money to buy products. Overall sales would stagnate or decline. Prominent American economists at the time argued that sharing the work would mean sharing the unemployment. They warned the US government that this was a fundamental threat to an economy based on consumption. Only a fully employed labourer could have enough money to buy down the national inventory. In 1932, N. A. Weston told the American Economic Association that "...[the labourers'] function in society as a consumer is of equal importance as the part he plays as a producer" (Weston 11). If the defeat of the share-the-work movement is the negative manifestation of consumerism, then the invasion of our leisure time by broadcast is its positive materialisation. We can trace this understanding by looking at Herbert Hoover. When he was the Secretary of Commerce in 1924, he warned station executives that: "I have never believed that it was possible to advertise through broadcasting without ruining the [radio] industry" (Radio's Big Issue). He had not recognised that broadcast advertising would be qualitatively more powerful for the economy than print advertising. By 1929, Hoover, now President, approved an economics committee recommendation in that traumatic year that leisure time be made "consumable" (Committee on Recent Economic Changes xvi).
His administration supported the growth of commercial radio because broadcasting was a new, efficient answer to the economists' question of how to motivate consumption. Not so coincidentally, network radio became a profitable industry during the Great Depression. The economic power that pre-war radio hinted at flourished in the proliferation of post-war television. Advertisers switched their dollars from magazines to TV, causing the demise of such general interest magazines as Life, The Saturday Evening Post, et al. Western Europe quickly followed the American broadcasting model. Great Britain was the first, allowing television to advertise the consumer revolution in 1955. Japan and many others then started to permit advertising on television. During the era of television, the nature of work changed from manufacturing to servicing (Preston 148-9). Two working parents also became the norm as a greater percentage of the population, mostly women, took salaried employment (International Labour Office). Many of the service jobs are to monitor the new global division of labour that allows industrialised nations to consume while emerging nations produce. (Chapter seven of Preston is the most current discussion of the shift of jobs within information economies and between industrialised and emerging nations.)
Flexible Time/Flexible Media
Film and television have responded by depicting these shifts. The Mary Tyler Moore Show debuted in September of 1970 (see http://www.transparencynow.com/mary.htm). In this show nurturing and emotional attachments were centered in the workplace, not in an actual biological family. It started a trend that continues to this day. However, media representations of the changing nature of work are merely symptomatic of the relationship between media and work. Broadcast advertising has a more causal relationship. As people worked more to buy more, they found that they wanted time-saving media. 
It is in this time period that the Internet started (1968), that the video cassette recorder was introduced (1975) and that the cable industry grew. Each of these ultimately enhanced the flexibility of work time. The VCR allowed the time-shifting of programs. This is the media answer to the work concept of flexible time. The tired worker can now see her/his favourite TV show according to his/her own flex schedule (Wasser 2001). Cable programming, with its repeats and staggered starting times, also accommodates the new 24/7 work day. These machines, offering greater choice of programming and scheduling, are the first prototypes of interactivity. The Internet goes further in expanding flexible time by adding actual shopping to the vicarious enjoyment of consumerist products on television. The Internet user continues to perform the labour of watching advertising and, in addition, now has the opportunity to do actual work tasks at any time of the day or night. The computer enters the home as an all-purpose machine. Its purchase is motivated by several simultaneous factors. The rhetoric often stresses the recreational and work aspects of the computer in the same breath (Reed 173, Friedrich 16-7). Games drove the early computer programmers to find more "user-friendly" interfaces in order to entice young consumers. Entertainment continues to be the main driving force behind visual and audio improvements. This has been true ever since the introduction of the Apple II, Radio Shack's TRS-80 and Atari 400 personal computers in the 1977-1978 time frame (see http://www.atari-history.com/computers/8bits/400.html). The current ubiquity of colour monitors and the standard package of speakers with PC computers are strong indications that entertainment and leisure pursuits continue to drive the marketing of computers. However, once the computer is in place in the study or bedroom, its uses fully integrate the user with the world of work, in both the sense of consuming and of creating value. 
This is a specific instance of what Philip Graham calls the analytical convergence of production, consumption and circulation in hypercapitalism. Streaming video and audio not only capture the action of the game, they lend sensual appeal to the banner advertising and the PowerPoint downloads from work. In one regard, the advent of Internet advertising is a regression to the pre-broadcast era. The passive web site ad runs the same risk of being ignored as does print advertising. The measure of a successful web ad is interactivity, which most often necessitates a click-through on the part of the viewer. Ads often show up in separate windows that necessitate a click from the viewer, if only to close down the program. In the words of Bolter and Grusin, click-through advertising is a hypermediation of television. In other words, it makes apparent the transparent relationship television forged between work and leisure. We do not sit passively through Internet advertising; we click either to eliminate the ads or to go on and buy the advertised products. Just as broadcasting facilitated consumable leisure, new media combines consumable leisure with flexible portable work. The new media landscape has had consequences, although the price of consumable leisure took a while to become visible. The average work week declined from 1945 to 1982. Since that point, it has been edging up continuously in the US (United States Bureau of Labor Statistics). While there is some question whether the computer has improved productivity (Kim), there is little question that the computer is colonising leisure time for multi-tasking. In a population that goes online from home almost twice as much as those who go online from work, almost half use their online time for work-based activities other than email. Undoubtedly, email activity would account for even more work time (Horrigan). 
On the other side of the blur between work and leisure, the Pew Institute estimates that fifty percent of workers use work Internet time for personal pleasure ("Wired Workers"). Media theory has to reengage the problem that Horkheimer/Adorno/Smythe raised. The contemporary problem of leisure is not so much the lack of leisure, but its fractured, non-contemplative, unfulfilling nature. A media critique will demonstrate the contribution of the TV and the Internet to this erosion of free time.
References
Bolter, Jay David, and Richard Grusin. Remediation: Understanding New Media. Cambridge, MA: MIT Press, 2000. Committee on Recent Economic Changes. Recent Economic Changes. Vol. 1. New York: no publisher listed, 1929. Friedrich, Otto. "The Computer Moves In." Time 3 Jan. 1983: 14-24. Graham, Philip. Hypercapitalism: A Political Economy of Informational Idealism. In press for New Media and Society 2.2 (2000). Horkheimer, Max, and Theodor W. Adorno. Dialectic of Enlightenment. New York: Continuum Publishing, 1944/1987. Horrigan, John B. "New Internet Users: What They Do Online, What They Don't and Implications for the 'Net's Future." Pew Internet and American Life Project. 25 Sep. 2000. 24 Oct. 2001 <http://www.pewinternet.org/reports/toc.asp?Report=22>. Hunnicutt, Benjamin Kline. Work without End: Abandoning Shorter Hours for the Right to Work. Philadelphia: Temple UP, 1988. International Labour Office. Economically Active Populations: Estimates and Projections 1950-2025. Geneva: ILO, 1995. Jhally, Sut. The Codes of Advertising. New York: St. Martin's Press, 1987. Kim, Jane. "Computers and the Digital Economy." Digital Economy 1999. 8 June 1999. 24 Oct. 2001 <http://www.digitaleconomy.gov/powerpoint/triplett/index.htm>. Lash, Scott, and John Urry. Economies of Signs and Space. London: Sage Publications, 1994. Mander, Jerry. Four Arguments for the Elimination of Television. New York: Morrow Press, 1978. Mumford, Lewis. Technics and Civilization. New York: Harcourt Brace, 1934. 
Preston, Paschal. Reshaping Communication: Technology, Information and Social Change. London: Sage, 2001. "Radio's Big Issue: Who Is to Pay the Artist?" The New York Times 18 May 1924: Section 8, 3. Reed, Lori. "Domesticating the Personal Computer." Critical Studies in Media Communication 17 (2000): 159-85. Smythe, Dallas. Counterclockwise: Perspectives on Communication. Boulder, CO: Westview Press, 1993. United States Bureau of Labor Statistics. Unpublished Data from the Current Population Survey. 2001. Wasser, Frederick A. Veni, Vidi, Video: The Hollywood Empire and the VCR. Austin, TX: U of Texas P, 2001. Weston, N.A., T.N. Carver, J.P. Frey, E.H. Johnson, T.R. Snavely and F.D. Tyson. "Shorter Working Time and Unemployment." American Economic Review Supplement 22.1 (March 1932): 8-15. <http://links.jstor.org/sici?sici=0002-8282%28193203%2922%3C8%3ASWTAU%3E2.0.CO%3B2-3>. Winn, Marie. The Plug-in Drug. New York: Viking Press, 1977. "Wired Workers: Who They Are, What They're Doing Online." Pew Internet Life Report 3 Sep. 2000. 24 Oct. 2000 <http://www.pewinternet.org/reports/toc.asp?Report=20>. Yazigi, Monique P. "Shocking Visits to the Real World." The New York Times 21 Feb. 1990. Page unknown.
Citation reference for this article
MLA Style
Wasser, Frederick. "Media Is Driving Work." M/C: A Journal of Media and Culture 4.5 (2001). [your date of access] <http://www.media-culture.org.au/0111/Wasser.xml>.
Chicago Style
Wasser, Frederick, "Media Is Driving Work," M/C: A Journal of Media and Culture 4, no. 5 (2001), <http://www.media-culture.org.au/0111/Wasser.xml> ([your date of access]).
APA Style
Wasser, Frederick. 
(2001) Media Is Driving Work. M/C: A Journal of Media and Culture 4(5). <http://www.media-culture.org.au/0111/Wasser.xml> ([your date of access]).
APA, Harvard, Vancouver, ISO, and other styles
40

Joensuu, Juri. "Intimate Technology?" M/C Journal 8, no. 2 (June 1, 2005). http://dx.doi.org/10.5204/mcj.2333.

Full text
Abstract:
I
Usually, when reading or literature is being discussed, it is the contents of that discussion (themes, symbols, poetic means, etc.) that are the focus. The humanities and academic criticism have rarely drawn their attention to the material media of literature. However, in recent years some of the conversation concerning literature has shifted the focus from contents to the material side. It has been a matter of working in a good cause: this conversation has been aroused by digital culture and its fearsome threat to print culture and books in general. It is evident that digitalization has given rise to a certain rhetorical system among the defenders of the traditional book and print. These are easily distinguished in discussions dealing with the advantages and favoured traits of print and the book. This kind of traditionalist argumentation is based, firstly, on the claimed interruption and deep differences between print culture and digital culture. Secondly, it emphasizes the emotional and sensual factors of reading and tends to adore books as objects, and, thirdly, by doing that, it also takes a somewhat nostalgic or mystifying approach to reading fiction. Because inventing neologisms is always amusing, this rhetorical system could be called intimism. The new word is coined from 'intimacy' and 'intimate', because the print-defending arguments always reduce their main idea to thoughts of the close, warm, private, safe, and familiar nature of reading. Intimism wants to see the "traditional" culture in opposition to the "new" culture that threatens to destroy print culture and even literacy. It sees print as the language of culture and civilization, which is under threat of becoming extinct. It is questionable, though, whether print culture needs advocates in the first place. It is possibly the most victorious and pervasive innovation in human history and shows no signs of fading or collapsing whatsoever. 
When I claim that intimism attaches nostalgic and mystifying approaches to print, the point is not to nullify the value of subjective reading experiences or the inexplicable features always present in reading. Also, I am not "defending" digitalization. Rather, I am aiming to highlight some tropes cast at reading as an act, and to see how reading printed books is figured rhetorically. Intimism bases its argumentation on the thought of a deep interruption between print-orientated culture and digitalization. Cultural periods (orality, print, digital) are not clearly consecutive, but overlapping. Orality has lived side by side with print, and will live with and in digital culture too. There are plenty of resemblances and residues of oral culture in digital forms of communication. Print culture is deeply entangled with digital culture, technologically as well as content-wise. Book-printing is so utterly digitalized that the only thing in the production process of books that is not digital is the book itself. Furthermore, it is not fair to treat the existing digital literature as one "lump". Electronic literature is divided into different categories. Digitalization of (already) printed literature – conserving classic literature and making it electronically available – is very different from genuinely digital texts: new literature published as a digital file (to be read from a computer screen or hand-held computer) or texts that take full advantage of digital opportunities, such as interactivity, linking, updating, programming, multimedia, or the internet. These sorts of "originally digital" texts base their logic of functioning and textual dynamics on digital technology – thus being impossible to print. Digital text is not a clear-cut object in the same way we can think print is. In short, electronic text is fluent; print is static. According to Jay David Bolter, "all information … in the computer world is a kind of controlled movement" (Bolter 31). 
Electronic text can be transported into thousands of computers simultaneously; it can be renewed, updated, or destroyed from a distance. Text on the computer screen is one possible instantiation of the coding we do not usually associate with the text "itself". What is the equivalent for the coding of the text in print? The physical appearance of the text does not identify itself with the reading surface in the same way as in print, where the text is literally pressed into the surface, making the two inseparable.
II
William Gass, among many others, sees the charm of the book connected very closely to the charm of its physical, material nature. Gass argues that "[w]e shall not understand what a book is, and why a book has the value many persons have, and is even less replaceable than a person, if we forget how important to it is its body, the building that has been built to hold its lines of language safely together." For Gass, digital texts are texts without bodies. They wander endlessly in digital space, restless, so that they seem to be like textual ghosts (and maybe therefore fearsome?). Gass sees digital texts as pale shadows of their printed forerunners. Words on paper patiently wait to be reread, but words on a screen are transient; "they only wait to be remade, relit" (Gass). The very essence of the book lies in its material body, which keeps its lines "safely together" and identifies its contents with the medium, the beloved paper object. In this light, Amy Tan's formulation is significant. (Her comments are adopted from Koskimaa 2000. I have yet to pin down the original TV panel discussion with Tan, Harold Bloom and Dick Brass.) She said that books, unlike electronic texts, are charmingly sensual. The term sensual can be understood in two ways, both of which are interesting from the standpoint taken here. When we read, books as objects are in permanent contact with our senses. 
Sight, touch, and even hearing are important factors in reading, the first mentioned crucially so, with smell or taste more rare (even though some hard-core biblio-fetishists enjoy sniffing their loved ones). Secondly, sensual can be understood as in the sentence "her lips are sensual". Sensual in this meaning connects reading with something that is both emotional and aesthetic: emotional, because it is hardly definable; simultaneously unattainable, but clearly present in its internality. Something present only for me. "Aesthetism" claims that reading (and specifically reading a book) is in itself an aesthetic deed – some sort of modelled cultural act. Treating books and print as if they had human characteristics, in other words to anthropomorphise them, seems to be typical of this manner of speaking. Books consist of two crucially human denominators – body and soul. The bond between the book and its reader resembles friendship. Books love us almost as much as we love them. They are "even less replaceable than a person". They supply the "warmth of the word", as Gass puts it, almost as if you had a close friend next to you to share your feelings. And is not the previously mentioned sensuality also closely connected to carnal pleasures? These lines of argumentation both entangle contents and subjective experiences with a certain medium, the printed book. They conflate form with content by claiming that a medium an sich has positive impacts on the message. Intimism makes reading an external ceremony of cultivation to which certain feelings can be attached. W.J. Ong has described writing as internalized technology (Ong 81-83). We could now reverse this thought – intimism externalizes the subjective and private value of reading into the acceptable reading of the printed book. It thus technologises the internal, inexplicable and obscure core of reading. 
III
In a digital context, according to Gass, reading itself cannot shape the book as it can in print, where an avid reader leaves traces and marks on the body of the book. This makes electronic text impersonal, distant to the reader and, in a way, overdetermined by the technology it is based on. This overdetermination is something that makes us irritatingly conscious of the technological ground we are working on when we are reading an electronic text. Technology pushes itself to the fore from between the lines; it makes itself extremely present, more present than the technology of print does in reading, or so it feels to us. The printed book seems more able to efface, to obscure, its technological nature. We are so used to the book as a form of presentation that we can easily ignore its technological and material bases. Maybe the feeling of absence makes for the high feeling of presence in the technology of print and the book? As intimism foregrounds the material aspects of literature, which are usually backgrounded, some writers have underlined this material paradox by hoisting the very technology of the book and print, even revelling in it, and throwing the medium of the book, the machine of literature, usually so lame and tame, in your face. This kind of overdetermination of the technology of the printed novel begins at least in the Baroque period or with Laurence Sterne, traversing through the nouveau roman and American metafictionists such as Barth, Federman, and Sukenick. For intimism, print is an intimate technology. It is a seemingly paradoxical concept, because usually technology is thought to be something formal or external. The intimist argumentation projects contents into the form, and unlike the mentioned authors, who fool around with our literal automatisms, it takes a moral, restrictive and colonising stance to reading and its technologies.
Note
Based on and re-written from a paper presented at the Baltic Ring: International Writing and Reading Seminar, 11-13 
April 2002, University of Jyväskylä, Finland.
References
Bolter, Jay David. Writing Space: The Computer, Hypertext, and the History of Writing. Hillsdale: Laurence Erlbaum, 1991. Gass, William H. "In Defense of the Book." Harper's Magazine November 1999: 45-51. Koskimaa, Raine. "King-efekti ja kirjallisuuden sähköinen tulevaisuus." Nuori Voima 4-5, 2000. Ong, Walter J. Orality and Literacy: The Technologizing of the Word. 1982. London: Routledge, 1988.
Citation reference for this article
MLA Style
Joensuu, Juri. "Intimate Technology?: Literature, Reading and the Argumentation Defending Book and Print." M/C Journal 8.2 (2005). [your date of access] <http://journal.media-culture.org.au/0506/02-joensuu.php>.
APA Style
Joensuu, J. (Jun. 2005) "Intimate Technology?: Literature, Reading and the Argumentation Defending Book and Print," M/C Journal, 8(2). Retrieved [your date of access] from <http://journal.media-culture.org.au/0506/02-joensuu.php>.
APA, Harvard, Vancouver, ISO, and other styles
41

Omar Sharif, Bayan, Aras Hamad Rasul, Osman Ibrahim Mahmud, and Farman Nuri Abdulla. "Patient’s Information Toward Some Modifiable Risk Factors of Ischemic Heart Disease." Kurdistan Journal of Applied Research, December 12, 2020, 27–39. http://dx.doi.org/10.24017/science.2020.ichms2020.4.

Full text
Abstract:
Ischemic heart disease (IHD) is a condition of heart problems caused by narrowed coronary arteries that supply oxygenated blood to the heart muscle. There is a shortage of studies in this area among bachelor students. The goal of this research was to assess the level of patients' information toward some modifiable risk factors of IHD at Rania teaching hospital in the Kurdistan region of Iraq during the period of (20th October 2019 - 10th February 2020). A non-probability purposive sample of (143) patients was used; the study instrument was constructed of a total of (42) items for the purpose of data collection. The content validity of the instrument was determined through a panel of (12) experts. The reliability of the instrument was determined through the use of an internal consistency reliability (split-half) approach, which was estimated as r = (0.83). The data were collected through the use of an interview technique (face-to-face approach), and computer files were used to organize and code them. The data were analyzed by statistical approaches including descriptive and inferential statistics and chi-square (SPSS version 25). The outcome showed that most of the sample ranged in age from (25-40) years and most of them were male, from urban areas; more than half of them were unemployed, but nearly half of them had graduated from primary school. 32.2% of them were diagnosed with cardiovascular disease. However, more than half of them had a high level of information about IHD in general, and TV was the first source of their information; but more than half of them were overweight, 65% did not do regular exercise, and 52.4% were relatively stressed. Also, the study demonstrated that there is no significant association between socio-demographic data (age, gender, educational level and occupation) and the level of patients' information toward some modifiable risk factors of IHD, at a p value greater than 0.05. 
The study recommended that the Ministry of Health and the Directorate of Health in Rania city develop and supervise dietary regimen centers and exercise halls so that people can implement their information and put it into practice.
APA, Harvard, Vancouver, ISO, and other styles
43

Holloway, Donell. "Sharing Foxtel." M/C Journal 6, no. 2 (April 1, 2003). http://dx.doi.org/10.5204/mcj.2163.

Full text
Abstract:
The kids in our house are making a public comeback. They are surfacing from the private recesses of the house out into the more communal space of the family lounge room. After years of being holed up in their bedrooms they are back – thanks to pay TV. Foxtel has presented them with a smorgasbord of programs, tempting enough to entice even the most die-hard gamer or teenage recluse out of the bedroom and into the throes of lounge room politics. In some ways, our newly populated lounge room is reminiscent of David Morley's Family Television, with television viewing "situated firmly within the politics of the living room" (19). This article explores the notion that the introduction of pay TV in Australia challenges the general movement towards an individualisation of media consumption in the family home. Due to pay TV's limitation of one outlet per household, children and teenagers are leaving the privacy of their own bedrooms and returning to the family lounge room. With this return, many family members are having to relearn the art of sharing (or getting your fair share of) this limited resource. This situation may also be a particularly Australian one, as it seems that pay TV, with its multi-channel viewing, is more readily available on multiple television sets within homes in other countries (Tidhar and Nossek 16). Family television viewing seems to be part of a relatively recent (round-the-hearth) tradition, which followed from the family piano, phonograph and the radio. These traditions re-established the home and family as a place where parental authority overrode the dangers of the outside world. Radio broadcasters in the 1940s endorsed the family radio as a way to promote family togetherness because (as they saw it) "the house and hearth have [had] been largely given up in favour of a multitude of other interests and activities outside, with the consequent disintegration of family ties and affections" (Lewis qtd. in Flichy 158). 
Television viewing followed suit as another round-the-hearth family tradition during the earlier period of domestic television. David Morley (1986) explored the domestic consumption of television in the context of everyday family life during this time, a time when only one television set was available to most families – a time when dads, and the occasional mum, ruled the television viewing habits of the family. Morley’s approach to television viewing was one in which the household (or family) was central to interpreting the television audience; where there were gendered regimes of watching and program choice which often reflected existing power relationships in the home. Today most Australian families have more than one television set (Department of Foreign Affairs and Trade), leading the way to more individualised and fragmented modes of television watching within Australian homes. The introduction of computers, the Internet and video consoles in many family homes seems to have further dispersed family members to more private spaces in the family home, diffusing existing conflict surrounding the family television. Therefore, "if television once brought the family together around the hearth, now domestic technologies permit the dispersal of family members to different rooms or different activities within the same space" (Livingstone 128). The geographical migration of the television set, along with new digital technologies, to the bedrooms and secondary living spaces in many family homes has brought with it new dynamics for social space within the household and a “reciprocal (re)construction of the meanings and functions of both the technological objects and the domestic spaces they inhabit” (Caron 3). Equipping bedrooms with television and digital technologies has the ability to change the room’s conventional usage – both spatially and temporally. In this way our 11-year-old’s bedroom has been transformed into a specialised bedroom culture – a gamer’s paradise. 
By locating a television and game console in this bedroom, the technologies are identified as personal property while at the same time allowing for a space that functions as both communal and private, for sharing with siblings and friends and for solitary gaming. The upside of this general movement towards a separate bedroom culture (or private media spaces) is that there are more spaces in which to engage with media technologies and therefore more viewing choice for family members. These extra media spaces have freed up the lounge room, possibly allowing for more harmony and accord within the family, while at the same time bringing about the opportunity for some family members to retreat from the social togetherness of family television viewing. However, with the limitation of one outlet per household for Australian pay TV, the lounge room has again become the focus of family television viewing in some Australian homes. With over twenty percent of Australian homes now subscribing to pay TV (Australian Film Commission), some Australian families may again be experiencing the togetherness (and the inevitable struggles) of sharing one television set. In the days when one television set was the norm, Morley explored the way in which family viewing habits reflected existing power relationships, focussing mainly on issues concerned with gender and class in the UK home. Twenty years later, in our house on the other side of the world, similar battles are taking place – Dragonball Z vs. Ocean Girl, Robot Wars vs. Buffy and World Series Cricket vs. Changing Rooms. However, unlike Morley's Family Television, the results of these gendered battles are too close to call. Perhaps, with forty or more channels to choose from, and programs designed to appeal to specific family members, the stakes are even higher and the battle has only just begun. 
Today's media-rich home environments, with multiple television sets and digital technologies, seem to have gone some way to resolving household conflicts over television viewing, allowing for more choice and individualisation of media consumption. However, the introduction of pay TV in Australia has seen a return to the living room politics of family television viewing somewhat reminiscent of Morley's Family Television, where sharing the family television reflected and highlighted existing family power relationships and struggles.
Works Cited
Australian Department of Foreign Affairs and Trade. Australia in Brief: Way of Life. Commonwealth of Australia. 2 March 2003 <http://www.dfat.gov.au/aib2001/media.php>. Australian Film Commission. Get the Picture: Fast Facts. 2002. 3 March 2003 <http://www.afc.gov.au/GTP/wptvfast.php>. Caron, Andre. "New Communication Technologies in the Home: A Qualitative Study on the Introduction, Appropriation and Uses of Media in the Family." Young People and the Media. Sydney: International Forum of Researchers, 2000. Livingstone, Sonia. "The Meaning of Domestic Technologies: A Personal Construct Analysis of Familial Gender Relations." Consuming Technologies: Media and Information in Domestic Spaces. Eds. Roger Silverstone and Eric Hirsch. London: Routledge, 1992. 113-30. Morley, David. Family Television: Cultural Power and Domestic Pleasure. London: Routledge, 1986. Tidhar, Chava E., and Hillel Nossek. "All in the Family: The Integration of a New Media Technology in the Family." Communications 27 (2002): 15-34.
Citation reference for this article
Substitute your date of access for Dn Month Year etc...
MLA Style
Holloway, Donell. "Sharing Foxtel." M/C: A Journal of Media and Culture 6.2 (2003). [your date of access] <http://www.media-culture.org.au/0304/06-sharingfoxtel.php>.
APA Style
Holloway, D. (2003, Apr 23). Sharing Foxtel. 
M/C: A Journal of Media and Culture, 6,< http://www.media-culture.org.au/0304/06-sharingfoxtel.php>
44

Champion, Katherine M. "A Risky Business? The Role of Incentives and Runaway Production in Securing a Screen Industries Production Base in Scotland." M/C Journal 19, no. 3 (June 22, 2016). http://dx.doi.org/10.5204/mcj.1101.

Full text
Abstract:
Introduction
Despite claims that the importance of distance has been reduced by technological and communications improvements (Cairncross; Friedman; O’Brien), the ‘power of place’ still resonates, often intensifying the role of geography (Christopherson et al.; Morgan; Pratt; Scott and Storper). Within the film industry, there has been a decentralisation of production from Hollywood, but there remains a spatial logic which has preferenced particular centres, such as Toronto, Vancouver, Sydney and Prague, often led by a combination of incentives (Christopherson and Storper; Goldsmith and O’Regan; Goldsmith et al.; Miller et al.; Mould). The emergence of high-end television (television programming for which the production budget is more than £1 million per television hour) has presented new opportunities for screen hubs sharing a very similar value chain to the film industry (OlsbergSPI with Nordicity).
In recent years, interventions have proliferated with the aim of capitalising on the decentralisation of certain activities in order to attract international screen industries production and embed it within local hubs. Tools for building capacity and expertise have multiplied, including support for studio complex facilities, infrastructural investments, tax breaks and other economic incentives (Cucco; Goldsmith and O’Regan; Jensen; Goldsmith et al.; McDonald; Miller et al.; Mould). Yet experience tells us that these will not succeed everywhere. There is a need for a better understanding both of the capacity of places to build a distinctive and competitive advantage within a highly globalised landscape and of the relative merits of alternative interventions designed to generate a sustainable production base.
This article first sets out the rationale for the appetite identified in the screen industries for co-location, or clustering and concentration in a tightly drawn physical area, in global hubs of production.
It goes on to explore the latest trends of decentralisation and examines the upturn in interventions aimed at attracting mobile screen industries capital and labour. Finally, it introduces the Scottish screen industries and explores some of the ways in which Scotland has sought to position itself as a recipient of screen industries activity. The paper identifies some key gaps in infrastructure, most notably a studio, and calls for closer examination of the essential ingredients of, and possible interventions needed for, a vibrant and sustainable industry.
A Compulsion for Proximity
It has been argued that particular spatial and place-based factors are central to the development and organisation of the screen industries. The film and television sector, the particular focus of this article, exhibits an extraordinarily high degree of spatial agglomeration, especially favouring centres with global status. It is worth noting that the computer games sector, not explored in this article, diverges slightly from this trend, displaying more decentralised spatial patterns (Vallance), although key physical hubs of activity have been identified (Champion). Creative products often possess a cachet that is directly associated with their point of origin, for example fashion from Paris, films from Hollywood and country music from Nashville, although it can also be acknowledged that these are often strategic commercial constructions (Pecknold). The place of production represents a unique component of the final product as well as an authentication of substantive and symbolic quality (Scott, “Creative Cities”). Place can act as part of a brand or image for creative industries, often reinforcing the advantage of being based in particular centres of production.
Very localised historical, cultural, social and physical factors may also influence the success of creative production in particular places.
Place-based factors relating to the built environment, including cheap space, a public-sector support framework, connectivity, local identity, institutional environment and availability of amenities, are seen as possible influences on the locational choices of creative industry firms (see, for example, Drake; Helbrecht; Hutton; Leadbeater and Oakley; Markusen).
Employment trends are notoriously difficult to measure in the screen industries (Christopherson, “Hollywood in Decline?”), but the sector does contain large numbers of very small firms and freelancers. This allows them to be flexible but poses certain problems that can be somewhat offset by co-location. The findings of Antcliff et al.’s study of workers in the audiovisual industry in the UK suggested that individuals sought to reconstruct stable employment relations through their involvement in and use of networks. The trust and reciprocity engendered by stable networks, built up over time, were used to offset the risk associated with the erosion of stable employment. These findings are echoed by a study of TV content production in two media regions in Germany by Sydow and Staber, who found that, although firms come together to work on particular projects, their business relations typically extend for a much longer period than this. Commonly, firms and individuals who have worked together previously will reassemble for further project work, aided by their past experiences and expectations.
Co-location allows the development of shared structures: language, technical attitudes, interpretative schemes and ‘communities of practice’ (Bathelt et al.). Grabher describes this process as ‘hanging out’. Deep local pools of creative and skilled labour are advantageous both to firms and to employees (Reimer et al.), allowing flexibility, developing networks and offsetting risk (Banks et al.; Scott, “Global City-Regions”).
For example, in Cook and Pandit’s study comparing the broadcasting industry in three city-regions, London was found to be hugely advantaged by its unrivalled talent pool, high financial rewards and prestigious projects. As Barnes and Hutton assert in relation to the wider creative industries, “if place matters, it matters most to them” (1251). This is certainly true for the screen industries, whose spatial logic points towards a compulsion for proximity in large global hubs.
Decentralisation and ‘Sticky’ Places
Despite the attraction of global production hubs, there has been a decentralisation of screen industries activity from key centres, starting with the film industry and the vertical disintegration of the Hollywood studios (Christopherson and Storper). There are instances of ‘runaway production’ from the 1920s onwards, with around 40 per cent of all features accounted for by offshore production in 1960 (Miller et al., 133). This trend has increased significantly in the last 20 years, leading to the genesis of new hubs of screen activity such as Toronto, Vancouver, Sydney and Prague (Christopherson, “Project Work in Context”; Goldsmith et al.; Mould; Miller et al.; Szczepanik). This development has been prompted by a multiplicity of reasons, including favourable currency value differentials and economic incentives. Subsidies and tax breaks have been offered to secure international productions, with most countries demanding that, in order to qualify for tax relief, productions spend a certain amount of their budget within the local economy, employ local crew and use domestic creative talent (Hill). Extensive infrastructure has been developed, including studio complexes, in attempts to lure productions with the advantage of a full-service offering (Goldsmith and O’Regan).
Internationally, Canada has been the greatest beneficiary of ‘runaway production’, with a state-led enactment of generous film incentives since the late 1990s (McDonald).
Vancouver and Toronto are the busiest locations for North American screen production after Los Angeles and New York, due to exchange rates and tax rebates on labour costs (Miller et al., 141). Eighty per cent of Vancouver’s production is attributable to runaway production (Jensen, 27) and the city is considered by some to have crossed a threshold: “It now possesses sufficient depth and breadth of talent to undertake the full array of pre-production, production and post-production services for the delivery of major motion pictures and TV programmes” (Barnes and Coe, 19). Similarly, Toronto is considered to have established a “comprehensive set of horizontal and vertical media capabilities” to ensure its status as a “full function media centre” (Davis, 98). These cities have successfully engaged in entrepreneurial activity to attract production (Christopherson, “Project Work in Context”) and in Vancouver the proactive role of the provincial government and labour unions is, in part, credited with its success (Barnes and Coe). Studio-complex infrastructure has also been used to lure global productions, with Toronto, Melbourne and Sydney all seen as key examples of places where such developments have been used as a strategic priority to take local production capacity to the next level (Goldsmith and O’Regan).
Studies which provide a historiography of the development of screen-industry hubs emphasise a complex interplay of social, cultural and physical conditions. In the complex and global flows of the screen industries, ‘sticky’ hubs have emerged with the ability to attract and retain capital and skilled labour. Despite being principally organised to attract international production, most studio complexes, especially those outside global centres, need to have a strong relationship to local or national film and television production to ensure the sustainability and depth of the labour pool (Goldsmith and O’Regan, 2003).
Many have a broadcaster on site as well as a range of companies with a media orientation and training facilities (Goldsmith and O’Regan, 2003; Picard, 2008). The emergence of film studio complexes on the Australian Gold Coast and in Vancouver was accompanied by an increasing role for television production, and this multi-purpose nature was important for the continuity of production.
Fostering a strong community of below-the-line workers, such as set designers, locations managers, make-up artists and props manufacturers, can also be a clear advantage in attracting international productions. For example, the expertise of set designers at Cinecittà in Italy and of experienced crews at the Barrandov Studios in Prague are regarded as major selling points of those studio complexes (Goldsmith and O’Regan; Miller et al.; Szczepanik). Natural and built environments are also considered very important for film and television firms, and it is a useful advantage for capturing international production when cities can double for other locations, as in the cases of Toronto, Vancouver and Prague (Evans; Goldsmith and O’Regan; Szczepanik). Toronto, for instance, has doubled for New York in over 100 films and, with regard to television, Due South’s (1994-1998) use of Toronto as Chicago was estimated to have saved 40 per cent in costs (Miller et al., 141).
The Scottish Screen Industries
Within mobile flows of capital and labour, Scotland has sought to position itself as a recipient of screen industries activity through multiple interventions, including investment in institutional frameworks, direct and indirect economic subsidies and the development of physical infrastructure. Traditionally, creative industry activity in the UK has been concentrated in London and the South East, which together account for 43% of the creative economy workforce (Bakhshi et al.).
In order, in part, to redress this imbalance and, more generally, to encourage the attraction and retention of international production, a range of policies focused on the screen industries has been introduced. A revised Film Tax Relief was introduced in 2007 to encourage inward investment and prevent the offshoring of indigenous production, and this has since been extended to high-end television, animation and children’s programming. Broadcasting has also experienced a push for decentralisation, led by public funding with a responsibility to be regionally representative. The BBC (“BBC Annual Report and Accounts 2014/15”) is currently exceeding its target of 50% network spend outside London by 2016, with 17% spent in Scotland, Wales and Northern Ireland. Channel 4 has similarly committed to commission at least 9% of its original spend from the nations by 2020. Studios have also been developed across the UK, including at Roath Lock (Cardiff), Titanic Studios (Belfast), MediaCity (Salford) and The Sharp Project (Manchester).
The creative industries have been identified by the government as one of seven growth sectors for Scotland (Scottish Government). In 2010, the film and video sector employed 3,500 people and contributed £120 million GVA and £120 million adjusted GVA to the economy, and the radio and TV sector employed 3,500 people and contributed £50 million GVA and £400 million adjusted GVA (The Scottish Parliament). Beyond the direct economic benefits of these sectors, the on-screen representation of Scotland has been claimed to boost visitor numbers to the country (EKOS), and high-profile international film productions have been attracted, including Skyfall (2012) and WWZ (2013).
Scotland has historically attracted international film and TV productions due to its natural locations (VisitScotland) and, on average between 2009 and 2014, six big-budget films a year used Scottish locations, both urban and rural (BOP Consulting, 2014).
In all, a total of £20 million was generated by film-making in Glasgow during 2011 (Balkind), with the city representing Philadelphia in WWZ (2013) and San Francisco in Cloud Atlas (2013), as well as doubling for Edinburgh in the recent acclaimed Scottish films Filth (2013) and Sunshine on Leith (2013). Sanson (80) asserts that the use of the city as a site for international productions not only brings in direct revenue from production money but also promotes the city as a “fashionable place to live, work and visit. Creativity makes the city both profitable and ‘cool’”.
Nonetheless, issues persist, and it has been suggested that Scotland lacks a stable and sustainable film industry, with low indigenous production levels and variable success from year to year in attracting inward investment (BOP Consulting). With regard to crew, an insufficient production base has been identified as an issue in maintaining a pipeline of skills (BOP Consulting). Developing ‘talent’ is a central aspect of the Scottish Government’s strategy for the creative industries, yet there remains the core challenge of retaining skills and encouraging new talent into the industry (BOP Consulting).
With regard to film, a lack of substantial funding incentives and the absence of a studio have been identified as key concerns for the sector. For example, the majority of inward investment filming in Scotland is location work, as the country lacks the studio facilities that would enable it to sustain a big-budget production in its entirety (BOP Consulting). The absence of such infrastructure has been seen as contributing to a drain of Scottish talent from these industries to other areas and countries with a more vibrant sector (BOP Consulting).
The loss of Scottish talent to Northern Ireland has been attributed to the longevity of the work provided by Game of Thrones (2011-), which has now completed six series at the Titanic Studios in Belfast (EKOS), although this may have been stemmed somewhat recently by the attraction of the US high-end TV series Outlander (2014-), which has been based at Wardpark in Cumbernauld since 2013.
Television, both high-end production and local broadcasting, appears crucial to the sustainability of screen production in Scotland. Outlander has been estimated to have contributed to Scotland’s production spend reaching a historic high of £45.8 million in 2014 (Creative Scotland, “Creative Scotland Screen Strategy Update”). The arrival of the programme has almost doubled production spend in Scotland, offering the chance of increased stability for screen industries workers. Qualifying for UK High-End Television Tax Relief, Outlander has engaged a crew of approximately 300 across props, filming and set build, and cast over 2,000 supporting artist roles from within Scotland and the UK.
Long-running drama, in particular, offers key opportunities both for those cutting their teeth in the screen industries and for existing workers, to whom it provides more consistent and longer-term employment. The BBC television soap River City (2002-) has been identified as a key example of such an opportunity, and the programme has been credited with providing a springboard for developing the skills of local actors, writers and production crew (Hibberd). This kind of pipeline of production is critical given the work patterns of the sector: according to Creative Skillset, of the 4,000 people employed in the film and television industries in Scotland, 40% of television workers are freelance and 90% of film production work is freelance (EKOS).
In an attempt to address skills gaps, the Outlander Trainee Placement Scheme has been devised in collaboration with Creative Scotland and Creative Skillset.
During filming of Season One, thirty-eight trainees were supported across a range of production and craft roles, followed by a further twenty-five in Season Two. Encouragingly, Outlander, and the books it is based on, is set in Scotland, so the authenticity of place has played a strong part in the decision to locate production there. Producer David Brown began his career on the Bill Forsyth films Gregory’s Girl (1981), Local Hero (1983) and Comfort and Joy (1984) and has a strong existing relationship with Scotland. He has been very vocal in his support for the trainee programme, contending that “training is the future of our industry and we at Outlander see the growth of talent and opportunities as part of our mission here in Scotland” (“Outlander Fast Tracks Next Generation of Skilled Screen Talent”).
Conclusions
This article has aimed to explore the relationship between place and the screen industries and, taking Scotland as its focus, has outlined a need to examine more closely the ways in which the sector can be supported. Despite the possible gains in terms of building a sustainable industry, the state-led funding of the global screen industries is contested. The use of tax breaks and incentives has been problematised, and critiques range from the use of public funding to attract footloose media industries to the increasingly zero-sum game of competition between rival places (Morawetz; McDonald). In relation to broadcasting, there have been critiques of a ‘lift and shift’ approach to policy in the UK, with TV production companies moving to the nations and regions temporarily to meet the quota and leaving once a production has finished (House of Commons).
Further to this, issues have been raised regarding how far such interventions can seed and develop a rich production ecology that offers opportunities for indigenous talent (Christopherson and Rightor).
Nonetheless, recent success for the screen industries in Scotland can, at least in part, be attributed to interventions including the increased decentralisation of broadcasting and the high-end television tax incentives. This article has identified gaps in infrastructure which continue to stymie growth and have led to production drain to other centres. Important gaps in knowledge can also be acknowledged that warrant further investigation and unpacking, including the relationship between film, high-end television and broadcasting, especially in terms of the opportunities they offer for screen industries workers to build a career in Scotland, and notable gaps in infrastructure and the impact they have on the loss of production.
References
Antcliff, Valerie, Richard Saundry, and Mark Stuart. Freelance Worker Networks in Audio-Visual Industries. University of Central Lancashire, 2004.
Bakhshi, Hasan, John Davies, Alan Freeman, and Peter Higgs. "The Geography of the UK’s Creative and High-Tech Economies." 2015.
Balkind, Nicola. World Film Locations: Glasgow. Intellect Books, 2013.
Banks, Mark, Andy Lovatt, Justin O’Connor, and Carlo Raffo. "Risk and Trust in the Cultural Industries." Geoforum 31.4 (2000): 453-464.
Barnes, Trevor, and Neil M. Coe. "Vancouver as Media Cluster: The Cases of Video Games and Film/TV." Media Clusters: Spatial Agglomeration and Content Capabilities (2011): 251-277.
Barnes, Trevor, and Thomas Hutton. "Situating the New Economy: Contingencies of Regeneration and Dislocation in Vancouver's Inner City." Urban Studies 46.5-6 (2009): 1247-1269.
Bathelt, Harald, Anders Malmberg, and Peter Maskell. "Clusters and Knowledge: Local Buzz, Global Pipelines and the Process of Knowledge Creation." Progress in Human Geography 28.1 (2004): 31-56.
BBC. Annual Report and Accounts 2014/15. London: BBC, 2015.
BOP Consulting. Review of the Film Sector in Glasgow: Report for Creative Scotland. Edinburgh: BOP Consulting, 2014.
Cairncross, Frances. The Death of Distance. London: Orion Business, 1997.
Champion, Katherine. "Problematizing a Homogeneous Spatial Logic for the Creative Industries: The Case of the Digital Games Industry." Changing the Rules of the Game. Palgrave Macmillan UK, 2013. 9-27.
Channel 4. Annual Report. London: Channel 4, 2014.
Christopherson, Susan. "Project Work in Context: Regulatory Change and the New Geography of Media." Environment and Planning A 34.11 (2002): 2003-2015.
———. "Hollywood in Decline? US Film and Television Producers beyond the Era of Fiscal Crisis." Cambridge Journal of Regions, Economy and Society 6.1 (2013): 141-157.
Christopherson, Susan, and Michael Storper. "The City as Studio; the World as Back Lot: The Impact of Vertical Disintegration on the Location of the Motion Picture Industry." Environment and Planning D: Society and Space 4.3 (1986): 305-320.
Christopherson, Susan, and Ned Rightor. "The Creative Economy as “Big Business”: Evaluating State Strategies to Lure Filmmakers." Journal of Planning Education and Research 29.3 (2010): 336-352.
Christopherson, Susan, Harry Garretsen, and Ron Martin. "The World Is Not Flat: Putting Globalization in Its Place." Cambridge Journal of Regions, Economy and Society 1.3 (2008): 343-349.
Cook, Gary A.S., and Naresh R. Pandit. "Service Industry Clustering: A Comparison of Broadcasting in Three City-Regions." The Service Industries Journal 27.4 (2007): 453-469.
Creative Scotland. Creative Scotland Screen Strategy Update. 2016. <http://www.creativescotland.com/__data/assets/pdf_file/0008/33992/Creative-Scotland-Screen-Strategy-Update-Feb2016.pdf>.
———. Outlander Fast Tracks Next Generation of Skilled Screen Talent. 2016. <http://www.creativescotland.com/what-we-do/latest-news/archive/2016/02/outlander-fast-tracks-next-generation-of-skilled-screen-talent>.
Cucco, Marco. "Blockbuster Outsourcing: Is There Really No Place like Home?" Film Studies 13.1 (2015): 73-93.
Davis, Charles H. "Media Industry Clusters and Public Policy." Media Clusters: Spatial Agglomeration and Content Capabilities (2011): 72-98.
Drake, Graham. "‘This Place Gives Me Space’: Place and Creativity in the Creative Industries." Geoforum 34.4 (2003): 511-524.
EKOS. "Options for a Film and TV Production Space: Report for Scottish Enterprise." Glasgow: EKOS, March 2014.
Evans, Graeme. "Creative Cities, Creative Spaces and Urban Policy." Urban Studies 46.5-6 (2009): 1003-1040.
Friedman, Thomas. The World Is Flat. New York: Farrar, Straus and Giroux, 2006.
Goldsmith, Ben, and Tom O’Regan. "Cinema Cities, Media Cities: The Contemporary International Studio Complex." Screen Industry, Culture and Policy Research Series. Sydney: Australian Film Commission, Sep. 2003.
Goldsmith, Ben, Susan Ward, and Tom O’Regan. "Global and Local Hollywood." InMedia: The French Journal of Media and Media Representations in the English-Speaking World 1 (2012).
Grabher, Gernot. "The Project Ecology of Advertising: Tasks, Talents and Teams." Regional Studies 36.3 (2002): 245-262.
Helbrecht, Ilse. "The Creative Metropolis: Services, Symbols and Spaces." Zeitschrift für Kanada Studien 18 (1998): 79-93.
Hibberd, Lynne. "Devolution in Policy and Practice: A Study of River City and BBC Scotland." Westminster Papers in Communication and Culture 4.3 (2007): 107-205.
Hill, John. "'This Is for the Batmans as Well as the Vera Drakes': Economics, Culture and UK Government Film Production Policy in the 2000s." Journal of British Cinema and Television 9.3 (2012): 333-356.
House of Commons Scottish Affairs Committee. "Creative Industries in Scotland." Second Report of Session 2015–16. London: House of Commons, 2016.
Hutton, Thomas A. "The New Economy of the Inner City." Cities 21.2 (2004): 89-108.
Jensen, Rodney J.C. "The Spatial and Economic Contribution of Sydney's Visual Entertainment Industries." Australian Planner 48.1 (2011): 24-36.
Leadbeater, Charles, and Kate Oakley. Surfing the Long Wave: Knowledge Entrepreneurship in Britain. London: Demos, 2001.
Markusen, Ann. "Sticky Places in Slippery Space: A Typology of Industrial Districts." Economic Geography (1996): 293-313.
———. "Urban Development and the Politics of a Creative Class: Evidence from a Study of Artists." Environment and Planning A 38.10 (2006): 1921-1940.
McDonald, Adrian H. "Down the Rabbit Hole: The Madness of State Film Incentives as a 'Solution' to Runaway Production." University of Pennsylvania Journal of Business Law 14.85 (2011): 85-163.
Miller, Toby, N. Govil, J. McMurria, R. Maxwell, and T. Wang. Global Hollywood 2. London: BFI, 2005.
Morawetz, Norbert, et al. "Finance, Policy and Industrial Dynamics: The Rise of Co-productions in the Film Industry." Industry and Innovation 14.4 (2007): 421-443.
Morgan, Kevin. "The Exaggerated Death of Geography: Learning, Proximity and Territorial Innovation Systems." Journal of Economic Geography 4.1 (2004): 3-21.
Mould, Oli. "Mission Impossible? Reconsidering the Research into Sydney's Film Industry." Studies in Australasian Cinema 1.1 (2007): 47-60.
O’Brien, Richard. Global Financial Integration: The End of Geography. London: Royal Institute of International Affairs, Pinter Publishers, 2002.
OlsbergSPI with Nordicity. "Economic Contribution of the UK’s Film, High-End TV, Video Game, and Animation Programming Sectors." Report presented to the BFI, Pinewood Shepperton plc, Ukie, the British Film Commission and Pact. London: BFI, Feb. 2015.
Pecknold, Diane. "Heart of the Country? The Construction of Nashville as the Capital of Country Music." Sounds and the City. London: Palgrave Macmillan UK, 2014. 19-37.
Picard, Robert G. Media Clusters: Local Agglomeration in an Industry Developing Networked Virtual Clusters. Jönköping International Business School, 2008.
Pratt, Andy C. "New Media, the New Economy and New Spaces." Geoforum 31.4 (2000): 425-436.
Reimer, Suzanne, Steven Pinch, and Peter Sunley. "Design Spaces: Agglomeration and Creativity in British Design Agencies." Geografiska Annaler: Series B, Human Geography 90.2 (2008): 151-172.
Sanson, Kevin. Goodbye Brigadoon: Place, Production, and Identity in Global Glasgow. Diss. University of Texas at Austin, 2011.
Scott, Allen J. "Creative Cities: Conceptual Issues and Policy Questions." Journal of Urban Affairs 28.1 (2006): 1-17.
———. Global City-Regions: Trends, Theory, Policy. Oxford University Press, 2002.
Scott, Allen J., and Michael Storper. "Regions, Globalization, Development." Regional Studies 41.S1 (2007): S191-S205.
The Scottish Government. The Scottish Government Economic Strategy. Edinburgh: Scottish Government, 2015.
———. Growth, Talent, Ambition – the Government’s Strategy for the Creative Industries. Edinburgh: Scottish Government, 2011.
The Scottish Parliament Economy, Energy and Tourism Committee. The Economic Impact of the Film, TV and Video Games Industries. Edinburgh: Scottish Parliament, 2015.
Sydow, Jörg, and Udo Staber. "The Institutional Embeddedness of Project Networks: The Case of Content Production in German Television." Regional Studies 36.3 (2002): 215-227.
Szczepanik, Petr. "Globalization through the Eyes of Runners: Student Interns as Ethnographers on Runaway Productions in Prague." Media Industries 1.1 (2014).
Vallance, Paul. "Creative Knowing, Organisational Learning, and Socio-Spatial Expansion in UK Videogame Development Studios." Geoforum 51 (2014): 15-26.
VisitScotland. "Scotland Voted Best Cinematic Destination in the World." 2015. <https://www.visitscotland.com/blog/films/scotland-voted-best-cinematic-destination-in-the-world/>.
45

Subar, Amy F., Sharon I. Kirkpatrick, Beth Mittl, Thea P. Zimmerman, Frances E. Thompson, Christopher Bingley, Gordon B. Willis, et al. "Abstract 029: The Automated Self-Administered 24-Hour Dietary Recall (ASA24): A Research Resource from the National Cancer Institute (NCI)." Circulation 125, suppl_10 (March 13, 2012). http://dx.doi.org/10.1161/circ.125.suppl_10.a029.

Full text
Abstract:
Introduction: Extensive evidence has demonstrated that 24-hour dietary recalls (24HRs) provide high-quality dietary intake data with minimal bias, making them the preferred tool for nutrition monitoring and, potentially, for studying diet and disease associations. Traditional 24HRs, however, are expensive and impractical for large-scale research because they rely on trained interviewers, and require multiple administrations to estimate usual intakes. To address these challenges, NCI developed ASA24. System: The ASA24 system is a publicly available web-based software tool that enables automated and self-administered 24HRs for epidemiologic, intervention, behavioral, or clinical research. ASA24 consists of a Respondent application used by participants to enter recall data and a Researcher application used by researchers to manage study logistics and obtain nutrient and food level data. The format and design of the Respondent application are modeled on USDA’s interviewer-administered Automated Multiple Pass Method (AMPM) 24HR, which uses multi-level food probes to assess food types and amounts. A Beta version of ASA24, released in 2009, has been used by over 100 researchers to collect over 20,000 recalls. Version 1 of ASA24, released in September 2011, offers improved functionality, features, and usability. Respondents report their intakes using a list of foods and beverages from USDA’s most current Food and Nutrient Database for Dietary Studies (FNDDS 4.1). Multiple images are shown to help respondents estimate portion size. ASA24 allows respondents to: 1) find foods and beverages by browsing or searching, 2) move or copy a food or beverage to a different meal, edit a meal, adjust reported amounts, or correct double reports, 3) review a final list of the day’s intake, and 4) access help. Resulting data files include food codes, nutrients, and MyPyramid food group equivalents for each day and each food, as well as variables to calculate Healthy Eating Index scores. 
Additional optional modules querying location of meals, who one ate with, TV/computer use during meals, and supplement intake are available, as well as a Spanish language version. Evaluation: ASA24 will be compared to traditional interviewer-administered recalls in a large sample of adults and within a smaller feeding study. The measurement error structure of ASA24 will be evaluated against doubly-labeled water and multiple 24-hour urinary nitrogen collections in three large on-going cohorts (NCI’s AARP Diet and Health Study, and Harvard’s Nurses Health Study and Health Professionals Follow-up Study). Conclusion: ASA24 has the potential to improve dietary assessment by enhancing the feasibility and cost-effectiveness of collecting high-quality dietary data.
46

Dwyer, Tim. "Transformations." M/C Journal 7, no. 2 (March 1, 2004). http://dx.doi.org/10.5204/mcj.2339.

Abstract:
The Australian Government has been actively evaluating how best to merge the functions of the Australian Communications Authority (ACA) and the Australian Broadcasting Authority (ABA) for around two years now. Broadly, the reason for this is an attempt to keep pace with the communications media transformations we reduce to the term “convergence.” Mounting pressure for restructuring is emerging as a site of turf contestation: the possibility of a regulatory “one-stop shop” for governments (and some industry players) is an end game of considerable force. But, from a public interest perspective, the case for a converged regulator needs to make sense to audiences using various media, as well as in terms of arguments about global, industrial, and technological change. This national debate about the institutional reshaping of media regulation is occurring within a wider global context of transformations in social, technological, and politico-economic frameworks of open capital and cultural markets, including the increasing prominence of international economic organisations, corporations, and Free Trade Agreements (FTAs). Although the recently concluded FTA with the US explicitly carves out a right for Australian Governments to make regulatory policy in relation to existing and new media, considerable uncertainty remains as to future regulatory arrangements. A key concern is how a right to intervene in cultural markets will be sustained in the face of cultural, politico-economic, and technological pressures that are reconfiguring creative industries on an international scale. While the right to intervene was retained for the audiovisual sector in the FTA, by contrast, it appears that comparable unilateral rights to intervene will not operate for telecommunications, e-commerce or intellectual property (DFAT). 
Blurring Boundaries A lack of certainty for audiences is a by-product of industry change, and further blurs regulatory boundaries: new digital media content and overlapping delivery technologies are already a reality for Australia’s media regulators. These hypothetical media usage scenarios indicate how confusion over the appropriate regulatory agency may arise: 1. playing electronic games that use racist language; 2. being subjected to deceptive or misleading pop-up advertising online; 3. receiving messaged imagery on your mobile phone that offends, disturbs, or annoys; 4. watching a program like World Idol with SMS voting that subsequently raises charging or billing issues; or 5. watching a new “reality” TV program where products are being promoted with no explicit acknowledgement of the underlying commercial arrangements either during or at the end of the program. These are all instances where, theoretically, regulatory mechanisms are in place that allow individuals to complain and to seek some kind of redress as consumers and citizens. In the last scenario, in commercial television under the sector code, no clear-cut rules exist as to the precise form of the disclosure—as there are (from 2000) in commercial radio. It’s one of a number of issues the peak TV industry lobby Commercial TV Australia (CTVA) is considering in their review of the industry’s code of practice. CTVA have proposed an amendment to the code that will simply formalise the already existing practice. That is, commercial arrangements that assist in the making of a program should be acknowledged either during programs, or in their credits. In my view, this amendment doesn’t go far enough in post “cash for comment” mediascapes (Dwyer). Audiences have a right to expect that broadcasters, production companies and program celebrities are open and transparent with the Australian community about these kinds of arrangements.
They need to be far more clearly signposted, and people better informed about their role. In the US, the “Commercial Alert” <http://www.commercialalert.org/> organisation has been lobbying the Federal Communications Commission and the Federal Trade Commission to achieve similar in-program “visual acknowledgements.” The ABA’s Commercial Radio Inquiry (“Cash-for-Comment”) found widespread systemic regulatory failure and introduced three new standards. On that basis, how could a “standstill” response by CTVA constitute best practice for such a pervasive and influential medium as contemporary commercial television? The World Idol example may lead to confusion for some audiences, who are unsure whether the issues involved relate to broadcasting or telecommunications. In fact, it could be dealt with as a complaint to the Telecommunication Industry Ombudsman (TIO) under an ACA registered, but Australian Communications Industry Forum (ACIF) developed, code of practice. These kinds of cross-platform issues may become more vexed in future years from an audience’s perspective, especially if reality formats using on-screen premium rate service numbers invite audiences to participate, by sending MMS (multimedia messaging services) images or short video grabs over wireless networks. The political and cultural implications of this kind of audience interaction, in terms of access, participation, and more generally the symbolic power of media, may perhaps even indicate a longer-term shift in relations with consumers and citizens. In the Internet example, the Australian Competition and Consumer Commission’s (ACCC) Internet advertising jurisdiction would apply—not the ABA’s “co-regulatory” Internet content regime as some may have thought. Although the ACCC deals with complaints relating to Internet advertising, there won’t be much traction for them in a more complex issue that also includes, say, racist or religious bigotry.
The DVD example would probably fall between the remits of the Office of Film and Literature Classification’s (OFLC) new “convergent” Guidelines for the Classification of Film and Computer Games and race discrimination legislation administered by the Human Rights and Equal Opportunity Commission (HREOC). The OFLC’s National Classification Scheme is really geared to provide consumer advice on media products that contain sexual and violent imagery or coarse language, rather than issues of racist language. And it’s unlikely that a single person would have the locus standi to even apply for a reclassification. It may fall within the jurisdiction of the HREOC depending on whether it was played in public or not. Even then it would probably be considered exempt on free speech grounds as an “artistic work.” Unsolicited, potentially illegal, content transmitted via mobile wireless devices, in particular 3G phones, provides another example of content that falls between the media regulation cracks. It illustrates a potential content policy “turf grab” too. Image-enabled mobile phones create a variety of novel issues for content producers, network operators, regulators, parents and viewers. There is no one government media authority or agency with a remit to deal with this issue. Although it has elements relating to the regulatory activities of the ACA, the ABA, the OFLC, the TIO, and TISSC, the combination of illegal or potentially prohibited content and its carriage over wireless networks positions it outside their current frameworks. The ACA may argue it should have responsibility for this kind of content since: it now enforces the recently enacted Commonwealth anti-Spam laws; has registered an industry code of practice for unsolicited content delivered over wireless networks; is seeking to include ‘adult’ content within premium rate service numbers; and has been actively involved in consumer education for mobile telephony.
It has also worked with TISSC and the ABA in relation to telephone sex information services over voice networks. On the other hand, the ABA would probably argue that it has the relevant expertise for regulating wirelessly transmitted image-content, arising from its experience of Internet and free and subscription TV industries, under co-regulatory codes of practice. The OFLC can also stake its claim for policy and compliance expertise, since the recently implemented Guidelines for Classification of Film and Computer Games were specifically developed to address issues of industry convergence. These Guidelines now underpin the regulation of content across the film, TV, video, subscription TV, computer games and Internet sectors. Reshaping Institutions Debates around the “merged regulator” concept have occurred on and off for at least a decade, with vested interests in agencies and the executive jockeying to stake claims over new turf. On several occasions the debate has been given renewed impetus in the context of ruling conservative parties’ mooted changes to the ownership and control regime. It’s tended to highlight demarcations of remit, informed as they are by historical and legal developments, and the gradual accretion of regulatory cultures. Now the key pressure points for regulatory change include the mere existence of already converged single regulatory structures in those countries with which we tend to triangulate our policy comparisons—the US, the UK and Canada—increasingly in a context of debates concerning international trade agreements; and, overlaying this, new media formats and devices are complicating existing institutional arrangements and legal frameworks.
The Department of Communications, Information Technology & the Arts’s (DCITA) review brief was initially framed as “options for reform in spectrum management,” but was then widened to include “new institutional arrangements” for a converged regulator, to deal with visual content in the latest generation of mobile telephony, and other image-enabled wireless devices (DCITA). No other regulatory agencies appear, at this point, to be actively on the Government’s radar screen (although they previously have been). Were the review to look more inclusively, the ACCC, the OFLC and the specialist telecommunications bodies, the TIO and the TISSC, may also be drawn in. Current regulatory arrangements see the ACA delegate responsibility for broadcasting services bands of the radio frequency spectrum to the ABA. In fact, spectrum management is the turf least contested by the regulatory players themselves, although the “convergent regulator” issue provokes considerable angst among powerful incumbent media players. The consensus that exists at a regulatory level can be linked to the scientific convention that holds that the radio frequency spectrum is a continuum of electromagnetic bands. In this view, it becomes artificial to sever broadcasting, as “broadcasting services bands”, from the other remaining highly diverse communications uses, as occurred from 1992 when the Broadcasting Services Act was introduced. The prospect of new forms of spectrum charging is highly alarming for commercial broadcasters. In a joint submission to the DCITA review, the peak TV and radio industry lobby groups have indicated they will fight tooth and nail to resist new regulatory arrangements that would see a move away from the existing licence fee arrangements.
These are paid as a sliding scale percentage of gross earnings that, it has been argued by Julian Thomas and Marion McCutcheon, “do not reflect the amount of spectrum used by a broadcaster, do not reflect the opportunity cost of using the spectrum, and do not provide an incentive for broadcasters to pursue more efficient ways of delivering their services” (6). An economic rationalist logic underpins pressure to modify the spectrum management (and charging) regime, and undoubtedly contributes to the commercial broadcasting industry’s general paranoia about reform. Total revenues collected by the ABA and the ACA between 1997 and 2002 were, respectively, $1423 million and $3644.7 million. Of these sums, using auction mechanisms, the ABA collected $391 million, while the ACA collected some $3 billion. The sale of spectrum that will be returned to the Commonwealth by television broadcasters when analog spectrum is eventually switched off, around the end of the decade, is a salivating prospect for Treasury officials. The large sums that have been successfully raised by the ACA boost its position in planning discussions for the convergent media regulatory agency. The way in which media outlets and regulators respond to publics is an enduring question for a democratic polity, irrespective of how the product itself has been mediated and accessed. Media regulation and civic responsibility, including frameworks for negotiating consumer and citizen rights, are fundamental democratic rights (Keane; Tambini). The ABA’s Commercial Radio Inquiry (‘cash for comment’) has also reminded us that regulatory frameworks are important at the level of corporate conduct, as well as how they negotiate relations with specific media audiences (Johnson; Turner; Gordon-Smith).
Building publicly meaningful regulatory frameworks will be demanding: relationships with audiences are often complex as people are constructed as both consumers and citizens, through marketised media regulation, institutions and more recently, through hybridising program formats (Murdock and Golding; Lumby and Probyn). In TV, we’ve seen the growth of infotainment formats blending entertainment and informational aspects of media consumption. At a deeper level, changes in the regulatory landscape are symptomatic of broader tectonic shifts in the discourses of governance in advanced information economies from the late 1980s onwards, where deregulatory agendas created an increasing reliance on free market, business-oriented solutions to regulation. “Co-regulation” and “self-regulation” became the preferred alternatives to more direct state control. Yet, curiously contradicting these market transformations, we continue to witness recurring instances of direct intervention on the basis of censorship rationales (Dwyer and Stockbridge). That digital media content is “converging” between different technologies and modes of delivery is the norm in “new media” regulatory rhetoric. Others critique “visions of techno-glory,” arguing instead for a view that sees fundamental continuities in media technologies (Winston). But the socio-cultural impacts of new media developments surround us: the introduction of multichannel digital and interactive TV (in free-to-air and subscription variants); broadband access in the office and home; wirelessly delivered content and mobility, and, as Jock Given notes, around the corner, there’s the possibility of “an Amazon.Com of movies-on-demand, with the local video and DVD store replaced by online access to a distant server” (90).
Taking a longer view of media history, these changes can be seen to be embedded in the global (and local) “innovation frontier” of converging digital media content industries and its transforming modes of delivery and access technologies (QUT/CIRAC/Cutler & Co). The activities of regulatory agencies will continue to be a source of policy rivalry and turf contestation until such time as a convergent regulator is established to the satisfaction of key players. However, there are risks that the benefits of institutional reshaping will not be readily available for either audiences or industry. In the past, the idea that media power and responsibility ought to coexist has been recognised in both the regulation of the media by the state, and the field of communications media analysis (Curran and Seaton; Couldry). But for now, as media industries transform, whatever the eventual institutional configuration, the evolution of media power in neo-liberal market mediascapes will challenge the ongoing capacity for interventions by national governments and their agencies. Works Cited Australian Broadcasting Authority. Commercial Radio Inquiry: Final Report of the Australian Broadcasting Authority. Sydney: ABA, 2000. Australian Communications Industry Forum. Industry Code: Short Message Service (SMS) Issues. Dec. 2002. 8 Mar. 2004 <http://www.acif.org.au/__data/page/3235/C580_Dec_2002_ACA.pdf>. Commercial Television Australia. Draft Commercial Television Industry Code of Practice. Aug. 2003. 8 Mar. 2004 <http://www.ctva.com.au/control.cfm?page=codereview&pageID=171&menucat=1.2.110.171&Level=3>. Couldry, Nick. The Place of Media Power: Pilgrims and Witnesses of the Media Age. London: Routledge, 2000. Curran, James, and Jean Seaton. Power without Responsibility: The Press, Broadcasting and New Media in Britain. 6th ed. London: Routledge, 2003. Dept. of Communications, Information Technology and the Arts. Options for Structural Reform in Spectrum Management. Canberra: DCITA, Aug.
2002. ---. Proposal for New Institutional Arrangements for the ACA and the ABA. Aug. 2003. 8 Mar. 2004 <http://www.dcita.gov.au/Article/0,,0_1-2_1-4_116552,00.php>. Dept. of Foreign Affairs and Trade. Australia-United States Free Trade Agreement. Feb. 2004. 8 Mar. 2004 <http://www.dfat.gov.au/trade/negotiations/us_fta/outcomes/11_audio_visual.php>. Dwyer, Tim. Submission to Commercial Television Australia’s Review of the Commercial Television Industry’s Code of Practice. Sept. 2003. Dwyer, Tim, and Sally Stockbridge. “Putting Violence to Work in New Media Policies: Trends in Australian Internet, Computer Game and Video Regulation.” New Media and Society 1.2 (1999): 227-49. Given, Jock. America’s Pie: Trade and Culture After 9/11. Sydney: U of NSW P, 2003. Gordon-Smith, Michael. “Media Ethics After Cash-for-Comment.” The Media and Communications in Australia. Ed. Stuart Cunningham and Graeme Turner. Sydney: Allen and Unwin, 2002. Johnson, Rob. Cash-for-Comment: The Seduction of Journo Culture. Sydney: Pluto, 2000. Keane, John. The Media and Democracy. Cambridge: Polity, 1991. Lumby, Cathy, and Elspeth Probyn, eds. Remote Control: New Media, New Ethics. Melbourne: Cambridge UP, 2003. Murdock, Graham, and Peter Golding. “Information Poverty and Political Inequality: Citizenship in the Age of Privatized Communications.” Journal of Communication 39.3 (1991): 180-95. QUT, CIRAC, and Cutler & Co. Research and Innovation Systems in the Production of Digital Content and Applications: Report for the National Office for the Information Economy. Canberra: Commonwealth of Australia, Sept. 2003. Tambini, Damian. Universal Access: A Realistic View. IPPR/Citizens Online Research Publication 1. London: IPPR, 2000. Thomas, Julian, and Marion McCutcheon. “Is Broadcasting Special? Charging for Spectrum.” Conference paper. ABA conference, Canberra. May 2003. Turner, Graeme. “Talkback, Advertising and Journalism: A Cautionary Tale of Self-Regulated Radio.”
International Journal of Cultural Studies 3.2 (2000): 247-55. ---. “Reshaping Australian Institutions: Popular Culture, the Market and the Public Sphere.” Culture in Australia: Policies, Publics and Programs. Ed. Tony Bennett and David Carter. Melbourne: Cambridge UP, 2001. Winston, Brian. Media, Technology and Society: A History from the Telegraph to the Internet. London: Routledge, 1998. Web Links http://www.aba.gov.au http://www.aca.gov.au http://www.accc.gov.au http://www.acif.org.au http://www.adma.com.au http://www.ctva.com.au http://www.crtc.gc.ca http://www.dcita.com.au http://www.dfat.gov.au http://www.fcc.gov http://www.ippr.org.uk http://www.ofcom.org.uk http://www.oflc.gov.au Links http://www.commercialalert.org/ Citation reference for this article MLA Style Dwyer, Tim. "Transformations" M/C: A Journal of Media and Culture <http://www.media-culture.org.au/0403/06-transformations.php>. APA Style Dwyer, T. (2004, March 17). Transformations. M/C: A Journal of Media and Culture, 7, <http://www.media-culture.org.au/0403/06-transformations.php>
47

Cesarini, Paul. "‘Opening’ the Xbox." M/C Journal 7, no. 3 (July 1, 2004). http://dx.doi.org/10.5204/mcj.2371.

Abstract:
“As the old technologies become automatic and invisible, we find ourselves more concerned with fighting or embracing what’s new”—Dennis Baron, From Pencils to Pixels: The Stages of Literacy Technologies. What constitutes a computer, as we have come to expect it? Are they necessarily monolithic “beige boxes”, connected to computer monitors, sitting on computer desks, located in computer rooms or computer labs? In order for a device to be considered a true computer, does it need to have a keyboard and mouse? If this were 1991 or earlier, our collective perception of what computers are and are not would largely be framed by this “beige box” model: computers are stationary, slab-like, and heavy, and their natural habitats must be in rooms specifically designated for that purpose. In 1992, when Apple introduced the first PowerBook, our perception began to change. Certainly there had been other portable computers prior to that, such as the Osborne 1, but these were more luggable than portable, weighing just slightly less than a typical sewing machine. The PowerBook and subsequent waves of laptops, personal digital assistants (PDAs), and so-called smart phones from numerous other companies have steadily forced us to rethink and redefine what a computer is and is not, how we interact with them, and the manner in which these tools might be used in the classroom. However, this reconceptualization of computers is far from over, and is in fact steadily evolving as new devices are introduced, adopted, and subsequently adapted for uses beyond their original purpose. Pat Crowe’s Book Reader project, for example, has morphed Nintendo’s GameBoy and GameBoy Advance into a viable electronic book platform, complete with images, sound, and multi-language support.
(Crowe, 2003) His goal was to take this existing technology, previously framed only within the context of proprietary adolescent entertainment, and repurpose it for open, flexible uses typically associated with learning and literacy. Similar efforts are underway to repurpose Microsoft’s Xbox, perhaps the ultimate symbol of “closed” technology given Microsoft’s propensity for proprietary code, in order to make it a viable platform for Open Source Software (OSS). However, these efforts are not foregone conclusions, and are in fact typical of the ongoing battle over who controls the technology we own in our homes, and how open source solutions are often at odds with a largely proprietary world. In late 2001, Microsoft launched the Xbox with a multimillion dollar publicity drive featuring events, commercials, live models, and statements claiming this new console gaming platform would “change video games the way MTV changed music”. (Chan, 2001) The Xbox launched with the following technical specifications: 733MHz Pentium III; 64MB RAM; 8 or 10GB internal hard disk drive; CD/DVD-ROM drive (speed unknown); Nvidia graphics processor, with HDTV support; 4 USB 1.1 ports (adapter required); AC3 audio; 10/100 Ethernet port; optional 56k modem (TechTV, 2001). While current computers dwarf these specifications in virtually all areas now, for 2001 these were roughly on par with many desktop systems. The retail price at the time was $299, but steadily dropped to nearly half that with additional price cuts anticipated. Based on these features, the preponderance of “off the shelf” parts and components used, and the relatively reasonable price, numerous programmers quickly became interested in seeing if it was possible to run Linux and additional OSS on the Xbox. In each case, the goal has been similar: exceed the original purpose of the Xbox, to determine if and how well it might be used for basic computing tasks.
If these attempts prove to be successful, the Xbox could allow institutions to dramatically increase the student-to-computer ratio in select environments, or allow individuals who could not otherwise afford a computer to instead buy an Xbox, download and install Linux, and use this new device to write, create, and innovate. This drive to literally and metaphorically “open” the Xbox comes from many directions. Such efforts include Andrew Huang’s self-published “Hacking the Xbox” book in which, under the auspices of reverse engineering, Huang analyzes the architecture of the Xbox, detailing step-by-step instructions for flashing the ROM, upgrading the hard drive and/or RAM, and generally prepping the device for use as an information appliance. Additional initiatives include Lindows CEO Michael Robertson’s $200,000 prize to encourage Linux development on the Xbox, and the Xbox Linux Project at SourceForge. What is Linux? Linux is an alternative operating system initially developed in 1991 by Linus Benedict Torvalds. Linux was based off a derivative of the MINIX operating system, which in turn was a derivative of UNIX. (Hasan 2003) Linux is currently available for Intel-based systems that would normally run versions of Windows, PowerPC-based systems that would normally run Apple’s Mac OS, and a host of other handheld, cell phone, or so-called “embedded” systems. Linux distributions are based almost exclusively on open source software, graphic user interfaces, and middleware components. While there are commercial Linux distributions available, these mainly just package the freely available operating system with bundled technical support, manuals, some exclusive or proprietary commercial applications, and related services. Anyone can still download and install numerous Linux distributions at no cost, provided they do not need technical support beyond the community / enthusiast level.
Typical Linux distributions come with open source web browsers, word processors and related productivity applications (such as those found in OpenOffice.org), and related tools for accessing email, organizing schedules and contacts, etc. Certain Linux distributions are more or less designed for network administrators, system engineers, and similar “power users” somewhat distanced from our students. However, several distributions, including Lycoris, Mandrake, LindowsOS, and others, are specifically tailored as regular, desktop operating systems, with regular, everyday computer users in mind. As Linux has no draconian “product activation key” method of authentication, or digital rights management-laden features associated with installation and implementation on typical desktop and laptop systems, Linux is becoming an ideal choice both individually and institutionally. It still faces an uphill battle in terms of achieving widespread acceptance as a desktop operating system. As Finnie points out in Desktop Linux Edges Into The Mainstream: “to attract users, you need ease of installation, ease of device configuration, and intuitive, full-featured desktop user controls. It’s all coming, but slowly. With each new version, desktop Linux comes closer to entering the mainstream. It’s anyone’s guess as to when critical mass will be reached, but you can feel the inevitability: There’s pent-up demand for something different.” (Finnie 2003) Linux is already spreading rapidly in numerous capacities, in numerous countries. Linux has “taken hold wherever computer users desire freedom, and wherever there is demand for inexpensive software.” Reports from technology research company IDG indicate that roughly a third of computers in Central and South America run Linux.
Several countries, including Mexico, Brazil, and Argentina, have all but mandated that state-owned institutions adopt open source software whenever possible to “give their people the tools and education to compete with the rest of the world.” (Hills 2001) The Goal Less than a year after Microsoft introduced the Xbox, the Xbox Linux Project formed. The Xbox Linux Project has a goal of developing and distributing Linux for the Xbox gaming console, “so that it can be used for many tasks that Microsoft don’t want you to be able to do. ...as a desktop computer, for email and browsing the web from your TV, as a (web) server” (Xbox Linux Project 2002). Since the Linux operating system is open source, meaning it can freely be tinkered with and distributed, those who opt to download and install Linux on their Xbox can do so with relatively little overhead in terms of cost or time. Additionally, Linux itself looks very “windows-like”, making for a fairly low learning curve. To help increase overall awareness of this project and assist in diffusing it, the Xbox Linux Project offers step-by-step installation instructions, with the end result being a system capable of using common peripherals such as a keyboard and mouse, scanner, printer, a “webcam and a DVD burner, connected to a VGA monitor; 100% compatible with a standard Linux PC, all PC (USB) hardware and PC software that works with Linux.” (Xbox Linux Project 2002) Such a system could have tremendous potential for technology literacy. Pairing an Xbox with Linux and OpenOffice.org, for example, would provide our students essentially the same capability any of them would expect from a regular desktop computer. They could send and receive email, communicate using instant messaging, IRC, or newsgroup clients, and browse Internet sites just as they normally would. In fact, the overall browsing experience for Linux users is substantially better than that for most Windows users.
Internet Explorer, the default browser on all systems running Windows-based operating systems, lacks basic features standard in virtually all competing browsers. Native blocking of “pop-up” advertisements is still not yet possible in Internet Explorer without the aid of a third-party utility. Tabbed browsing, which involves the ability to easily open and sort through multiple Web pages in the same window, often with a single mouse click, is also missing from Internet Explorer. The same can be said for a robust download manager, “find as you type”, and a variety of additional features. Mozilla, Netscape, Firefox, Konqueror, and essentially all other OSS browsers for Linux have these features. Of course, most of these browsers are also available for Windows, but Internet Explorer is still considered the standard browser for the platform. If the Xbox Linux Project becomes widely diffused, our students could edit and save Microsoft Word files in OpenOffice.org’s Writer program, and do the same with PowerPoint and Excel files in similar OpenOffice.org components. They could access instructor comments originally created in Microsoft Word documents, and in turn could add their own comments and send the documents back to their instructors. They could even perform many functions not yet possible in Microsoft Office, including saving files in PDF or Flash format without needing Adobe’s Acrobat product or Macromedia’s Flash Studio MX. Additionally, by way of this project, the Xbox can also serve as “a Linux server for HTTP/FTP/SMB/NFS, serving data such as MP3/MPEG4/DivX, or a router, or both; without a monitor or keyboard or mouse connected.” (Xbox Linux Project 2003) In a very real sense, our students could use these inexpensive systems, previously framed only within the context of entertainment, for educational purposes typically associated with computer-mediated learning.
Problems: Control and Access The existing rhetoric of technological control surrounding current and emerging technologies appears to be stifling many of these efforts before they can even be brought to the public. This rhetoric of control is largely typified by overly-restrictive digital rights management (DRM) schemes antithetical to education, and the Digital Millennium Copyright Act (DMCA). Combined, both are currently being used as technical and legal clubs against these efforts. Microsoft, for example, has taken a dim view of any efforts to adapt the Xbox to Linux. Microsoft CEO Steve Ballmer, who has repeatedly referred to Linux as a cancer and has equated OSS with being un-American, stated, “Given the way the economic model works - and that is a subsidy followed, essentially, by fees for every piece of software sold - our license framework has to do that.” (Becker 2003) Since the Xbox is based on a subsidy model, meaning that Microsoft actually sells the hardware at a loss and instead generates revenue off software sales, Ballmer launched a series of concerted legal attacks against the Xbox Linux Project and similar efforts. In 2002, Nintendo, Sony, and Microsoft simultaneously sued Lik Sang, Inc., a Hong Kong-based company that produces programmable cartridges and “mod chips” for the PlayStation II, Xbox, and Game Cube. Nintendo states that its company alone loses over $650 million each year due to piracy of their console gaming titles, which typically originate in China, Paraguay, and Mexico. (GameIndustry.biz) Currently, many attempts to “mod” the Xbox required the use of such chips. As Lik Sang is one of the only suppliers, initial efforts to adapt the Xbox to Linux slowed considerably. Despite the fact that such chips can still be ordered and shipped here by less conventional means, it does not change the fact that the chips themselves would be illegal in the U.S.
due to the anticircumvention clause in the DMCA itself, which is designed specifically to protect any DRM-wrapped content, regardless of context. The Xbox Linux Project then attempted to get Microsoft to officially sanction their efforts. They were not only rebuffed, but Microsoft then opted to hire programmers specifically to create technological countermeasures for the Xbox, to defeat additional attempts at installing OSS on it. Undeterred, the Xbox Linux Project eventually arrived at a method of installing and booting Linux without the use of mod chips, and has since taken a more defiant tone with Microsoft regarding these circumvention efforts. (Lettice 2002) They state that “Microsoft does not want you to use the Xbox as a Linux computer, therefore it has some anti-Linux-protection built in, but it can be circumvented easily, so that an Xbox can be used as what it is: an IBM PC.” (Xbox Linux Project 2003)

Problems: Learning Curves and Usability

In spite of the difficulties imposed by the combined technological and legal attacks on this project, it has succeeded at infiltrating this closed system with OSS. It has done so beyond the mere prototype level, too, as evidenced by the Xbox Linux Project now having both complete, step-by-step instructions available for users to modify their own Xbox systems, and an alternate plan catering to those who have the interest in modifying their systems, but not the time or technical inclination. Specifically, this option involves users mailing their Xbox systems to community volunteers within the Xbox Linux Project, and having these volunteers perform the necessary software preparation or the full Linux installation for them, free of charge (presumably not including shipping). This particular aspect of the project, dubbed “Users Help Users”, appears to be fairly new.
Yet, it already lists over sixty volunteers capable and willing to perform this service, since “Many users don’t have the possibility, expertise or hardware” to perform these modifications. Amazingly enough, in some cases these volunteers are barely out of junior high school. One such volunteer stipulates that those seeking his assistance keep in mind that he is “just 14” and that when performing these modifications he “...will not always be finished by the next day”. (Steil 2003) In addition to this interesting if somewhat unusual level of community-driven support, there are currently several Linux-based options available for the Xbox. The two that are perhaps the most developed are GentooX, which is based off the popular Gentoo Linux distribution, and Ed’s Debian, based off the Debian GNU / Linux distribution. Both Gentoo and Debian are “seasoned” distributions that have been available for some time now, though Daniel Robbins, Chief Architect of Gentoo, refers to the product as actually being a “metadistribution” of Linux, due to its high degree of adaptability and configurability. (Gentoo 2004) Specifically, Robbins asserts that Gentoo is capable of being “customized for just about any application or need. ...an ideal secure server, development workstation, professional desktop, gaming system, embedded solution or something else—whatever you need it to be.” (Robbins 2004) He further states that the whole point of Gentoo is to provide a better, more usable Linux experience than that found in many other distributions. Robbins states that: “The goal of Gentoo is to design tools and systems that allow a user to do their work pleasantly and efficiently as possible, as they see fit. Our tools should be a joy to use, and should help the user to appreciate the richness of the Linux and free software community, and the flexibility of free software. ...Put another way, the Gentoo philosophy is to create better tools.
When a tool is doing its job perfectly, you might not even be very aware of its presence, because it does not interfere and make its presence known, nor does it force you to interact with it when you don’t want it to. The tool serves the user rather than the user serving the tool.” (Robbins 2004) There is also a so-called “live CD” Linux distribution suitable for the Xbox, called dyne:bolic, and an in-progress release of Slackware Linux as well. According to the Xbox Linux Project, the only difference between the standard releases of these distributions and their Xbox counterparts is that “...the install process – and naturally the bootloader, the kernel and the kernel modules – are all customized for the Xbox.” (Xbox Linux Project, 2003) Of course, even if Gentoo is as user-friendly as Robbins purports, even if the Linux kernel itself has become significantly more robust and efficient, and even if Microsoft again drops the retail price of the Xbox, is this really a feasible solution in the classroom? Does the Xbox Linux Project have an army of 14-year-olds willing to modify dozens, perhaps hundreds of these systems for use in secondary schools and higher education? Of course not. If such an institutional rollout were to be undertaken, it would require significant support from not only faculty, but Department Chairs, Deans, IT staff, and quite possibly Chief Information Officers. Disk images would need to be customized for each institution to reflect their respective needs, ranging from setting specific home pages on web browsers, to bookmarks, to custom back-up and / or disk re-imaging scripts, to network authentication. This would be no small task. Yet, the steps mentioned above are essentially no different than what would be required of any IT staff when creating a new disk image for a computer lab, be it one for a Windows-based system or a Mac OS X-based one. The primary difference would be Linux itself—nothing more, nothing less.
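The lab-imaging workflow mentioned above can be sketched in a few lines. This is a minimal illustration, not the Xbox Linux Project's actual tooling: all file names are hypothetical stand-ins, and ordinary files play the role of disks here, whereas a real rollout would capture and restore block devices and bake in the institution-specific customizations (home pages, bookmarks, network authentication) before imaging.

```python
import hashlib
import shutil

# Hypothetical paths: in practice these would be block devices, not files.
MASTER_DISK = "master.disk"   # disk of the fully configured machine
IMAGE = "lab-image.bin"       # "golden" image shared with the lab
TARGET_DISK = "target.disk"   # disk of a machine being re-imaged

def sha256_of(path):
    """Return the SHA-256 digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Stand-in for the prepared system disk.
with open(MASTER_DISK, "wb") as f:
    f.write(b"configured system state")

# Capture the golden image, then restore it onto a target machine.
shutil.copyfile(MASTER_DISK, IMAGE)
shutil.copyfile(IMAGE, TARGET_DISK)

# Verify that the re-imaged disk matches the master exactly.
assert sha256_of(MASTER_DISK) == sha256_of(TARGET_DISK)
print("re-image verified")
```

The checksum step matters in practice: verifying each restored machine against the golden image is what makes a lab-wide rollout repeatable rather than a series of one-off installations.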
The institutional difficulties in undertaking such an effort would likely be encountered prior to even purchasing a single Xbox, in that they would involve the same difficulties associated with any new hardware or software initiative: staffing, budget, and support. If the institution in question is either unwilling or unable to address these three factors, it would not matter if the Xbox itself was as free as Linux.

An Open Future, or a Closed One?

It is unclear how far the Xbox Linux Project will be allowed to go in their efforts to invade an essentially proprietary system with OSS. Unlike Sony, which has made deliberate steps to commercialize similar efforts for their PlayStation 2 console, Microsoft appears resolute in fighting OSS on the Xbox by any means necessary. They will continue to crack down on any companies selling so-called mod chips, and will continue to employ technological protections to keep the Xbox “closed”. Despite clear evidence to the contrary, in all likelihood Microsoft will continue to equate any OSS efforts directed at the Xbox with piracy-related motivations. Additionally, Microsoft’s successor to the Xbox would likely incorporate additional anticircumvention technologies that could set the Xbox Linux Project back by months or years, or could stop it cold. Of course, it is difficult to say with any degree of certainty how this “Xbox 2” (perhaps a more appropriate name might be “Nextbox”) will impact this project. Regardless of how this device evolves, there can be little doubt of the value of Linux, OpenOffice.org, and other OSS to teaching and learning with technology. This value exists not only in terms of price, but in increased freedom from policies and technologies of control.
New Linux distributions from Gentoo, Mandrake, Lycoris, Lindows, and other companies are just now starting to focus their efforts on Linux as a user-friendly, easy-to-use desktop operating system, rather than just a server or “techno-geek” environment suitable for advanced programmers and computer operators. While metaphorically opening the Xbox may not be for everyone, and may not be a suitable computing solution for all, I believe we as educators must promote and encourage such efforts whenever possible. I suggest this because I believe we need to exercise our professional influence and ultimately shape the future of technology literacy, either individually as faculty or collectively as departments, colleges, or institutions. Moran and Fitzsimmons-Hunter argue this very point in Writing Teachers, Schools, Access, and Change. One of the fundamental provisions they use to define “access” asserts that there must be a willingness for teachers and students to “fight for the technologies that they need to pursue their goals for their own teaching and learning.” (Taylor / Ward 160) Regardless of whether or not this debate is grounded in the “beige boxes” of the past, or the Xboxes of the present, much is at stake. Private corporations should not be in a position to control the manner in which we use legally-purchased technologies, regardless of whether or not these technologies are then repurposed for literacy uses. I believe the exigency associated with this control, and the ongoing evolution of what is and is not a computer, dictates that we assert ourselves more actively into this discussion. We must take steps to provide our students with the best possible computer-mediated learning experience, however seemingly unorthodox the technological means might be, so that they may think critically, communicate effectively, and participate actively in society and in their future careers.
About the Author
Paul Cesarini is an Assistant Professor in the Department of Visual Communication & Technology Education, Bowling Green State University, Ohio. Email: pcesari@bgnet.bgsu.edu

Works Cited
Baron, Denis. “From Pencils to Pixels: The Stages of Literacy Technologies.” Passions Pedagogies and 21st Century Technologies. Hawisher, Gail E., and Cynthia L. Selfe, Eds. Utah: Utah State University Press, 1999. 15 – 33.
Becker, David. “Ballmer: Mod Chips Threaten Xbox”. News.com. 21 Oct 2002. http://news.com.com/2100-1040-962797.php
Finni, Scott. “Desktop Linux Edges Into The Mainstream”. TechWeb. 8 Apr 2003. http://www.techweb.com/tech/software/20030408_software
http://xbox-linux.sourceforge.net/docs/debian.php
http://news.com.com/2100-1040-978957.html?tag=nl
http://archive.infoworld.com/articles/hn/xml/02/08/13/020813hnchina.xml
http://www.neoseeker.com/news/story/1062/
http://www.bookreader.co.uk
http://www.theregister.co.uk/content/archive/29439.html
http://gentoox.shallax.com/
http://ragib.hypermart.net/linux/
http://www.itworld.com/Comp/2362/LWD010424latinlinux/pfindex.html
http://www.xbox-linux.sourceforge.net
http://www.theregister.co.uk/content/archive/27487.html
http://www.theregister.co.uk/content/archive/26078.html
http://www.us.playstation.com/peripherals.aspx?id=SCPH-97047
http://www.techtv.com/extendedplay/reviews/story/0,24330,3356862,00.html
http://www.wired.com/news/business/0,1367,61984,00.html
http://www.gentoo.org/main/en/about.xml
http://www.gentoo.org/main/en/philosophy.xml
http://techupdate.zdnet.com/techupdate/stories/main/0,14179,2869075,00.html
http://xbox-linux.sourceforge.net/docs/usershelpusers.html
http://www.cnn.com/2002/TECH/fun.games/12/16/gamers.liksang/

Citation reference for this article
MLA Style: Cesarini, Paul. “‘Opening’ the Xbox.” M/C: A Journal of Media and Culture <http://www.media-culture.org.au/0406/08_Cesarini.php>.
APA Style: Cesarini, P. (2004, Jul. 1). “Opening” the Xbox.
M/C: A Journal of Media and Culture, 7, <http://www.media-culture.org.au/0406/08_Cesarini.php>
48

Kustritz, Anne. "Transmedia Serial Narration: Crossroads of Media, Story, and Time." M/C Journal 21, no. 1 (March 14, 2018). http://dx.doi.org/10.5204/mcj.1388.

Full text
Abstract:
The concept of transmedia storyworlds unfolding across complex serial narrative structures has become increasingly important to the study of modern media industries and audience communities. Yet, the precise connections between transmedia networks, serial structures, and narrative processes often remain underdeveloped. The dispersion of potential story elements across a diverse collection of media platforms and technologies prompts questions concerning the function of seriality in the absence of fixed instalments, the meaning of narrative when plot is largely a personal construction of each audience member, and the nature of storytelling in the absence of a unifying author, or when authorship itself takes on a serial character. This special issue opens a conversation on the intersection of these three concepts and their implications for a variety of disciplines, artistic practices, and philosophies. By re-thinking these concepts from fresh perspectives, the collection challenges scholars to consider how a wide range of academic, aesthetic, and social phenomena might be productively thought through using the overlapping lenses of transmedia, seriality, and narrativity. Thus, the collection gathers scholars from life-writing, sport, film studies, cultural anthropology, fine arts, media studies, and literature, all of whom find common ground at this fruitful crossroads. This breadth also challenges the narrow use of transmedia as a specialized term to describe current developments in corporate mass media products that seek to exploit the affordances of hybrid digital media environments. Many prominent scholars, including Marie-Laure Ryan and Henry Jenkins, acknowledge that a basic definition of transmedia as stories with extensions and reinterpretations in numerous media forms includes the oldest kinds of human expression, such as the ancient storyworlds of Arthurian legend and The Odyssey. 
Yet, what Jenkins terms “top-down” transmedia—that is, pre-planned and often corporate transmedia—has received a disproportionate share of scholarly attention, with modern franchises like The Matrix, the Marvel universe, and Lost serving as common exemplars (Flanagan, Livingstone, and McKenny; Hadas; Mittell; Scolari). Thus, many of the contributions to this issue push the boundaries of what has commonly been studied as transmedia as well as the limits of what may be considered a serial structure or even a story. For example, these papers imagine how an autobiography may also be a digital concept album unfolding in reverse, how participatory artistic performances may unfold in unpredictable instalments across physical and digital space, and how studying sports fandom as a long series of transmedia narrative elements encourages scholars to grapple with the unique structures assembled by audiences of non-fictional story worlds. Setting these experimental offerings into dialogue with entries that approach the study of transmedia in a more established manner provides the basis for building bridges between such recognized conversations in new media studies and potential collaborations with other disciplines and subfields of media studies.

This issue builds upon papers collected from four years of the International Transmedia Serial Narration Seminar, which I co-organized with Dr. Claire Cornillon, Assistant Professor (Maîtresse de Conférences) of comparative literature at Université de Nîmes. The seminar held sessions in Paris, Le Havre, Rouen, Amsterdam, and Utrecht, with interdisciplinary speakers from the USA, Australia, France, Belgium, and the Netherlands. As a transnational, interdisciplinary project intended to cross both theoretical and physical boundaries, the seminar aimed to foster exchange between academic conversations that can become isolated not only within disciplines, but also within national and linguistic borders.
The seminar thus sought to enhance academic mobility between both people and ideas, and the digital, open-access publication of the collected papers alongside additional scholarly interlocutors serves to broaden the seminar’s goals of creating a border-crossing conversation. After two special issues primarily collecting the French language papers in TV/Series (2014) and Revue Française des Sciences de l’Information et de la Communication (2017), this issue seeks to share the Transmedia Serial Narration project with a wider audience by publishing the remaining English-language papers, accompanied by several other contributions in dialogue with the seminar’s themes. It is our hope that this collection will invite a broad international audience to creatively question the meaning of transmedia, seriality, and narrativity both historically and in the modern, rapidly changing, global and digital media environment.

Several articles in the issue illuminate existing debates and common case studies in transmedia scholarship by comparing theoretical models to the much more slippery reality of a media form in flux. Thus, Mélanie Bourdaa’s feature article, “From One Medium to the Next: How Comic Books Create Richer Storylines,” examines theories of narrative complexity and transmedia by scholars including Henry Jenkins, Derek Johnson, and Jason Mittell to then propose a new typology of extensions to accommodate the lived reality expressed by producers of transmedia. Because her interviews with artists and writers emphasize the co-constitutive nature of economic and narrative considerations in professionals’ decisions, Bourdaa’s typology can offer researchers a tool to clarify the marketing and narrative layers of transmedia extensions.
As such, her classification system further illuminates what is particular about forms of corporate transmedia with a profit orientation, which may not be shared by non-profit, collective, and independently produced transmedia projects.

Likewise, Radha O’Meara and Alex Bevan map existing scholarship on transmedia to point out the limitations of deriving theory only from certain forms of storytelling. In their article “Transmedia Theory’s Author Discourse and Its Limitations,” O’Meara and Bevan argue that scholars have preferred to focus on examples of transmedia with a strong central author-figure or that they may indeed help to rhetorically shore up the coherency of transmedia authorship through writing about transmedia creators as auteurs. Tying their critique to the established weaknesses of auteur theory associated with classic commentaries like Roland Barthes’ “Death of the Author” and Foucault’s “What is an Author?”, O’Meara and Bevan explain that this focus on transmedia creators as authority figures reinforces hierarchical, patriarchal understandings of the creative process and excludes from consideration all those unauthorized transmedia extensions through which audiences frequently engage and make meaning from transmedia networks. They also emphasize the importance of constructing academic theories of transmedia authorship that can accommodate collaborative forms of hybrid amateur and professional authorship, as well as tolerate the ambiguities of “authorless” storyworlds that lack clear narrative boundaries.
O’Meara and Bevan argue that such theories will help to break down gendered power hierarchies in Hollywood, which have long allowed individual men to “claim credit for the stories and for all the work that many people do across various sectors and industries.”

Dan Hassler-Forest likewise considers existing theory and a corporate case study in his examination of analogue echoes within a modern transmedia serial structure by mapping the storyworld of Twin Peaks (1990). His article, “‘Two Birds with One Stone’: Transmedia Serialisation in Twin Peaks,” demonstrates the push-and-pull between two contemporary TV production strategies: first, the use of transmedia elements that draw viewers away from the TV screen toward other platforms, and second, the deployment of strategies that draw viewers back to the TV by incentivizing broadcast-era appointment viewing. Twin Peaks offers a particularly interesting example of the manner in which these strategies intertwine partly because it already offered viewers an analogue transmedia experience in the 1990s by splitting story elements between TV episodes and books. Unlike O’Meara and Bevan, who elucidate the growing prominence of transmedia auteurs who lend rhetorical coherence to dispersed narrative elements, Hassler-Forest argues that this older analogue transmedia network capitalized upon the dilution of authorial authority, due to the distance between TV and book versions, to negotiate tensions between the producers’ competing visions.
Hassler-Forest also notes that the addition of digital soundtrack albums further complicates the serial nature of the story by using the iTunes and TV distribution schedules to incentivize repeated sequential consumption of each element, thus drawing modern viewers to the TV screen, then the computer screen, and then back again.

Two articles offer a concrete test of these theoretical perspectives by utilizing ethnographic participant-observation and interviewing to examine how audiences actually navigate diffuse, dispersed storyworlds. For example, Céline Masoni’s article, “From Seriality to Transmediality: A Socio-narrative Approach of a Skilful and Literate Audience,” documents fans’ highly strategic participatory practices. From her observations of and interviews with fans, Masoni theorizes the types of media literacy and social as well as technological competencies cultivated through transmedia fan practices. Olivier Servais and Sarah Sepulchre’s article similarly describes a long-term ethnography of fan transmedia activity, including interviews with fans and participant-observation of the MMORPG (Massively Multiplayer Online Role-Playing Game) Game of Thrones Ascent (2013). Servais and Sepulchre find that most people in their interviews are not “committed” fans, but rather casual readers and viewers who follow transmedia extensions sporadically. By focusing on this group, they widen the existing research, which often focuses on or assumes a committed audience like the skilful and literate fans discussed by Masoni.

Servais and Sepulchre’s results suggest that these viewers may be less likely to seek out all transmedia extensions but readily accept and adapt unexpected elements, such as the media appearances of actors, to add to their serial experiences of the storyworld.
In a parallel research protocol observing the Game of Thrones Ascent MMORPG, Servais and Sepulchre report that the most highly-skilled players exhibit few behaviours associated with immersion in the storyworld, but the majority of less-skilled players use their gameplay choices to increase immersion by, for example, choosing a player name that evokes the narrative. As a result, Servais and Sepulchre shed light upon the activities of transmedia audiences who are not necessarily deeply committed to the entire transmedia network, and yet who nonetheless make deliberate choices to collect their preferred narrative elements and increase their own immersion.

Two contributors elucidate forms of transmedia that upset the common emphasis on storyworlds with film or TV as the core property or “mothership” (Scott). In her article “Transmedia Storyworlds, Literary Theory, Games,” Joyce Goggin maps the history of intersections between experimental literature and ludology. As a result, she questions the continuing dichotomy between narratology and ludology in game studies to argue for a more broadly transmedia strategy, in which the same storyworld may be simultaneously narrative and ludic. Such a theory can incorporate a great deal of what might otherwise be unproblematically treated as literature, opening up the book to interrogation as an inherently transmedial medium.

L.J. Maher similarly examines the serial narrative structures that may take shape in a transmedia storyworld centred on music rather than film or TV. In her article “You Got Spirit, Kid: Transmedial Life-Writing Across Time and Space,” Maher charts the music, graphic novels, and fan interactions that comprise the Coheed and Cambria band storyworld. In particular, Maher emphasizes the importance of autobiography for Coheed and Cambria, which bridges between fictional and non-fictional narrative elements.
This interplay remains undertheorized within transmedia scholarship, although a few have begun to explicate the use of transmedia life-writing in an activist context (Cati and Piredda; Van Luyn and Klaebe; Riggs). As a result, Maher widens the scope of existing transmedia theory by more thoroughly connecting fictional and autobiographical elements in the same storyworld and considering how serial transmedia storytelling structures may differ when the core component is music.

The final three articles take a more experimental approach that actively challenges the existing boundaries of transmedia scholarship. Catherine Lord’s article, “Serial Nuns: Michelle Williams Gamaker’s The Fruit Is There to Be Eaten as Serial and Trans-serial,” explores the unique storytelling structures of a cluster of independent films that traverse time, space, medium, and gender. Although not a traditional transmedia project, since the network includes a novel and film adaptations and extensions by different directors as well as real-world locations and histories, Lord challenges transmedia theorists to imagine storyworlds that include popular history, independent production, and spatial performances and practices. Lord argues that the main character’s trans identity provides an embodied and theoretical pivot within the storyworld, which invites audiences to accept a position of radical mobility where all fixed expectations about the separation between categories of flora and fauna, centre and periphery, the present and the past, as well as authorized and unauthorized extensions, dissolve.

In his article “Non-Fiction Transmedia: Seriality and Forensics in Media Sport,” Markus Stauff extends the concept of serial transmedia storyworlds to sport, focusing on an audience-centred perspective. For the most part, transmedia has been theorized with fictional storyworlds as the prototypical examples.
A growing number of scholars, including Arnau Gifreu-Castells and Siobhan O'Flynn, enrich our understanding of transmedia storytelling by exploring non-fiction examples, but these are commonly restricted to the documentary genre (Freeman; Gifreu-Castells, Misek, and Verbruggen; Karlsen; Kerrigan and Velikovsky). Very few scholars comment on the transmedia nature of sport coverage and fandom, and when they do so it is often within the framework of transmedia news coverage (Gambarato, Alzamora, and Tárcia; McClearen; Waysdorf). Stauff’s article thus provides a welcome addition to the existing scholarship in this field by theorizing how sport fans construct a user-centred serial transmedia storyworld by piecing together narrative elements across media sources, embodied experiences, and the serialized ritual of sport seasons. In doing so, he points toward ways in which non-fiction transmedia may significantly differ from fictional storyworlds, but he also enriches our understanding of an audience-centred perspective on the construction of transmedia serial narratives.

In his artistic practice, Robert Lawrence may most profoundly stretch the existing parameters of transmedia theory. Lawrence’s article, “Locate, Combine, Contradict, Iterate: Serial Strategies for PostInternet Art,” details his decades-long interrogation of transmedia seriality through performative and participatory forms of art that bridge digital space, studio space, and public space. While theatre and fine arts have often been considered through the theoretical lens of intermediality (Bennett, Boenisch, Kattenbelt, Vandsoe), the nexus of transmedia, seriality, and narrative enables Lawrence to describe the complex, interconnected web of planned and unplanned extensions of his hybrid digital and physical installations, which often last for decades and incorporate a global scope.
Lawrence thus takes the strategies of engagement that are perhaps more familiar to transmedia theorists from corporate viral marketing campaigns and turns them toward civic ends (Anyiwo, Bourdaa, Hardy, Hassler-Forest, Scolari, Sokolova, Stork). As such, Lawrence’s artistic practice challenges theorists of transmedia and intermedia to consider the kinds of social and political “interventions” that artists and citizens can stage through the networked possibilities of transmedia expression and how the impact of such projects can be amplified through serial repetition.

Together, the whole collection opens new pathways for transmedia scholarship, more deeply explores how transmedia narration complicates understandings of seriality, and constructs an international, interdisciplinary dialogue that brings often isolated conversations into contact. In particular, this issue enriches the existing scholarship on independent, artistic, and non-fiction transmedia, while also proposing some important limitations, exceptions, and critiques to existing scholarship featuring corporate transmedia projects with a commercial, top-down structure and a strong auteur-like creator. These diverse case studies and perspectives enable us to understand more inclusively the structures and social functions of transmedia in the pre-digital age, to theorize more robustly how audiences experience transmedia in the current era of experimentation, and to imagine more broadly a complex future for transmedia seriality wherein professionals, artists, and amateurs all engage in an iterative, inclusive process of creative and civic storytelling, transcending artificial borders imposed by discipline, nationalism, capitalism, and medium.

References
Anyiwo, U. Melissa. "It’s Not Television, It’s Transmedia Storytelling: Marketing the ‘Real’ World of True Blood." True Blood: Investigating Vampires and Southern Gothic. Ed. Brigid Cherry. New York: IB Tauris, 2012. 157-71.
Barthes, Roland.
"The Death of the Author." Image, Music, Text. Trans. Stephen Heath. Basingstoke: Macmillan, 1988. 142-48.
Bennett, Jill. "Aesthetics of Intermediality." Art History 30.3 (2007): 432-450.
Boenisch, Peter M. "Aesthetic Art to Aisthetic Act: Theatre, Media, Intermedial Performance." (2006): 103-116.
Bourdaa, Melanie. "This Is Not Marketing. This Is HBO: Branding HBO with Transmedia Storytelling." Networking Knowledge: Journal of the MeCCSA Postgraduate Network 7.1 (2014).
Cati, Alice, and Maria Francesca Piredda. "Among Drowned Lives: Digital Archives and Migrant Memories in the Age of Transmediality." a/b: Auto/Biography Studies 32.3 (2017): 628-637.
Flanagan, Martin, Andrew Livingstone, and Mike McKenny. The Marvel Studios Phenomenon: Inside a Transmedia Universe. New York: Bloomsbury Publishing, 2016.
Foucault, Michel. "Authorship: What Is an Author?" Screen 20.1 (1979): 13-34.
Freeman, Matthew. "Small Change – Big Difference: Tracking the Transmediality of Red Nose Day." VIEW Journal of European Television History and Culture 5.10 (2016): 87-96.
Gambarato, Renira Rampazzo, Geane C. Alzamora, and Lorena Peret Teixeira Tárcia. "2016 Rio Summer Olympics and the Transmedia Journalism of Planned Events." Exploring Transmedia Journalism in the Digital Age. Hershey, PA: IGI Global, 2018. 126-146.
Gifreu-Castells, Arnau. "Mapping Trends in Interactive Non-fiction through the Lenses of Interactive Documentary." International Conference on Interactive Digital Storytelling. Berlin: Springer, 2014.
Gifreu-Castells, Arnau, Richard Misek, and Erwin Verbruggen. "Transgressing the Non-fiction Transmedia Narrative." VIEW Journal of European Television History and Culture 5.10 (2016): 1-3.
Hadas, Leora. "Authorship and Authenticity in the Transmedia Brand: The Case of Marvel's Agents of SHIELD." Networking Knowledge: Journal of the MeCCSA Postgraduate Network 7.1 (2014).
Hardy, Jonathan. "Mapping Commercial Intertextuality: HBO’s True Blood."
Convergence 17.1 (2011): 7-17.
Hassler-Forest, Dan. "Skimmers, Dippers, and Divers: Campfire’s Steve Coulson on Transmedia Marketing and Audience Participation." Participations 13.1 (2016): 682-692.
Jenkins, Henry. “Transmedia 202: Further Reflections.” Confessions of an Aca-Fan. 31 July 2011. <http://henryjenkins.org/blog/2011/08/defining_transmedia_further_re.html>.
———. “Transmedia Storytelling 101.” Confessions of an Aca-Fan. 21 Mar. 2007. <http://henryjenkins.org/blog/2007/03/transmedia_storytelling_101.html>.
———. Convergence Culture: Where Old and New Media Collide. New York: New York University Press, 2006.
Johnson, Derek. Media Franchising: Creative License and Collaboration in the Culture Industries. New York: New York UP, 2013.
Karlsen, Joakim. "Aligning Participation with Authorship: Independent Transmedia Documentary Production in Norway." VIEW Journal of European Television History and Culture 5.10 (2016): 40-51.
Kattenbelt, Chiel. "Theatre as the Art of the Performer and the Stage of Intermediality." Intermediality in Theatre and Performance 2 (2006): 29-39.
Kerrigan, Susan, and J. T. Velikovsky. "Examining Documentary Transmedia Narratives through The Living History of Fort Scratchley Project." Convergence 22.3 (2016): 250-268.
Van Luyn, Ariella, and Helen Klaebe. "Making Stories Matter: Using Participatory New Media Storytelling and Evaluation to Serve Marginalized and Regional Communities." Creative Communities: Regional Inclusion and the Arts. Intellect Press, 2015. 157-173.
McClearen, Jennifer. "‘We Are All Fighters’: The Transmedia Marketing of Difference in the Ultimate Fighting Championship (UFC)." International Journal of Communication 11 (2017): 18.
Mittell, Jason. "Playing for Plot in the Lost and Portal Franchises." Eludamos: Journal for Computer Game Culture 6.1 (2012): 5-13.
O'Flynn, Siobhan. "Documentary's Metamorphic Form: Webdoc, Interactive, Transmedia, Participatory and Beyond."
Studies in Documentary Film 6.2 (2012): 141-157.Riggs, Nicholas A. "Leaving Cancerland: Following Bud at the End of Life." Storytelling, Self, Society 10.1 (2014): 78-92.Ryan, Marie-Laure. “Transmedial Storytelling and Transfictionality.” Poetics Today, 34.3 (2013): 361-388. <https://doi.org/10.1215/03335372-2325250>.Scolari, Carlos Alberto. "Transmedia Storytelling: Implicit Consumers, Narrative Worlds, and Branding in Contemporary Media Production." International Journal of Communication 3 (2009).Scott, Suzanne. “Who’s Steering the Mothership: The Role of the Fanboy Auteur in Transmedia Storytelling.” The Participatory Cultures Handbook. Eds. Aaron Delwiche and Jennifer Henderson. New York: Routledge, 2013. 43-53.Sokolova, Natalia. "Co-opting Transmedia Consumers: User Content as Entertainment or ‘Free Labour’? The Cases of STALKER. and Metro 2033." Europe-Asia Studies 64.8 (2012): 1565-1583.Stork, Matthias. "The Cultural Economics of Performance Space: Negotiating Fan, Labor, and Marketing Practice in Glee's Transmedia Geography." Transformative Works & Cultures 15 (2014).Waysdorf, Abby. "My Football Fandoms, Performance, and Place." Transformative Works & Cultures 18 (2015).Vandsoe, Anette. "Listening to the World. Sound, Media and Intermediality in Contemporary Sound Art." SoundEffects – An Interdisciplinary Journal of Sound and Sound Experience 1.1 (2011): 67-81.
APA, Harvard, Vancouver, ISO, and other styles
49

Baltasar, Neusa. "Weblogues: a new instrument for the promotion of the communication between television and viewers." Comunicar 13, no. 25 (October 1, 2005). http://dx.doi.org/10.3916/c25-2005-079.

Full text
Abstract:
Weblogs, also referred to as blogs, are an important tool nowadays, and their growth over the last few years confirms their importance in the Portuguese media context. This new communication instrument can be defined as a frequently updated website or personal diary in which content is posted in chronological order, with the most recent entry always appearing first, at the top of the site. Blogs usually point to other websites or blogs, and they often include comments from readers. One of their most interesting features is that they can be created and maintained by a group with common interests, serving as a space for debate and reflection among the group's members, or they can be opened to comments from the wider community if the members so choose. If the Internet was already considered a privileged space for communication and information exchange, blogs have reinforced this potential even further, in that they establish themselves as a meeting point for people with common interests. Blogs offer everyone (with a computer and an Internet connection) the opportunity to access information, comment on it, express their ideas and opinions, share their knowledge, etc. It is in this perspective that we consider that blogs, as a space for expression and reflection, can fill a gap that exists in television – the audience's space, their responses, opinions, etc. In Portugal there are many blogs related to media education, and some about television in particular, which are a reference for critical analysis and reflection. This paper is based on the study of some of these blogs related to media education and television, such as Crianças e Media, Educação para os Media, Educomunicação/Educomunicación and Irreal tv. We analyse their goals, contents, functionality and dynamics.
Building on this analysis, we intend to deepen our knowledge of this kind of blog and its value for reflection and for fostering critical thought about television and the media in general. We also aim to draw attention to how small a part television plays as the creator of a dialogue space for its audience, and to argue that it should be more involved in creating initiatives and promoting interaction with viewers.
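The layout rule the abstract describes – entries accumulate chronologically, but the most recent one always tops the page – can be sketched in a few lines of Python (the post titles and dates here are invented for illustration):

```python
# Minimal sketch of a blog's defining layout rule (titles and dates invented):
# entries accumulate chronologically, but the front page shows the newest first.
from datetime import date

posts = [
    ("Weblogs e televisão", date(2005, 3, 1)),
    ("Educação para os media", date(2005, 4, 20)),
    ("Comentários dos leitores", date(2005, 5, 12)),
]

def front_page(posts):
    """Reverse-chronological order: the most recent entry tops the page."""
    return [title for title, when in sorted(posts, key=lambda p: p[1], reverse=True)]

print(front_page(posts))  # newest post appears first
```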
APA, Harvard, Vancouver, ISO, and other styles
50

Bruns, Axel. "The End of 'Bandwidth'." M/C Journal 2, no. 8 (December 1, 1999). http://dx.doi.org/10.5204/mcj.1807.

Full text
Abstract:
It used to be so simple. If you turn on your TV or radio, your choices are limited: in Australia, there is a maximum of five or six free-to-air TV channels, depending on where you're located, and with a few minor exceptions, the programming is relatively uniform; you know what to expect, and when to expect it. To a slightly lesser degree, the same goes for radio: you might have a greater choice of stations, but you'll get an even smaller slice of the theoretically possible range of programming -- from Triple J to B105, there's mainstream, easy listening, format radio fodder, targeted at slightly different audience demographics, but hardly ever anything but comfortably agreeable to them. Only late at night or in some rare timeslots especially set aside for it, you might find something unusual, something innovative, or simply something unexpected. And of course that's so. How could it possibly be any other way? Of course radio and TV stations must appeal to the most widely shared tastes, must ensure that they satisfy the largest part of their audience with any given programme on any given day -- in short, must find the lowest common denominator which unifies their audience. That the term 'low' in this description has come to be linked to a negative meaning is -- at first -- only an accident of language: after all, mathematically this denominator constitutes in many ways the most fundamental of shared values between a series of fractions, and metaphorically, too, this commonality is certainly of fundamental importance to community culture. The need for radio and TV stations to appeal to such shared values of the many is twofold: where they are commercially run operations, it is simply sound business practice to look for the largest (and hence, most lucrative) audience available.
In addition to this, however, the use of a public and limited resource -- the airwaves -- for the transmission of their programmes also creates significant obligations: since the people, represented by their governmental institutions, have licensed stations to use 'their' airwaves for transmission, of course stations are also obliged to repay this entrustment by satisfying the needs and wants of the greatest number of people, and as consistently as possible. All of this is summed up neatly with the word 'bandwidth'. Referring to frequency wavebands, bandwidth is a precious commodity: there is only a limited range of frequencies which can possibly be used to transmit broadcast-quality radio and TV, and each channel requires a significant share of that range -- which is why we can only have a limited number of stations, and hence, a limited range of programming transmitted through them. Getting away from frequency bands, the term can also be applied in other areas of transmission and publication: even services like cable TV frequently have their form of bandwidth (where cable TV systems have only been designed to take a set number of channels), and even commercial print publishing can be said to have its bandwidth, as only a limited number of publishers are likely to be able to exist commercially in a given market, and only a limited number of books and magazines can be distributed and sold through the usual channels each year. There are in each of these cases, then, physical limitations of one form or another. The last few years have seen this conception of bandwidth come under increased attack, however, and all those apparently obvious assumptions about our media environment must be reconsidered as a result.
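The scarcity of broadcast spectrum the passage describes is a matter of simple arithmetic. As a rough illustration (using standard figures for the FM broadcast band, which are not taken from the article itself): the band runs from 87.5 to 108 MHz, and each station needs its own fixed slice of it, commonly 200 kHz, so the number of possible stations in any one market is capped from the outset:

```python
# Back-of-the-envelope spectrum scarcity, using standard FM-band figures
# (common international values, not numbers taken from the article itself).
BAND_LOW_MHZ = 87.5         # lower edge of the FM broadcast band
BAND_HIGH_MHZ = 108.0       # upper edge
CHANNEL_SPACING_MHZ = 0.2   # 200 kHz per station in much of the world

channels = int((BAND_HIGH_MHZ - BAND_LOW_MHZ) / CHANNEL_SPACING_MHZ)
print(channels)  # roughly 102 slots for an entire region's FM radio
```

In practice interference between neighbouring transmitters reduces this further, which is exactly why licensing bodies ration the airwaves the way the article describes.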
Ever since the rise of photocopiers and personal printers, after all, people have been able to create small-scale print publications without the need to apply for a share of the commercial publishers' 'bandwidth' -- witness the emergence of zines and newsletters for specific interest groups. The means of creation and distribution for these publications were and are not publicly or commercially controlled in any restrictive way, and so the old arguments for a 'responsible' use of bandwidth didn't hold any more -- thus the widespread disregard in these publications for any overarching commonly held ideas which need to be addressed: as soon as someone reads them, their production is justified. Publishing on the Internet drives the nail even further -- here, the notion of bandwidth comes to an end entirely, in two distinct ways. First, in a non-physical medium, the argument of the physical scarcity of the publication medium doesn't hold anymore -- space for publication in newsgroups and on Web pages, being digital, electronic, 'virtual', is infinitely expandable, much unlike frequency bands with their highly fixed and policed upper and lower boundaries. New 'stations' being added don't interfere with existing ones here, and so there's no need to limit the amount of individual channels available on the Net; hence the multitude of newsgroups and Websites available. Again, whatever can establish an audience (even just of a few readers) is justified in its existence. 
Secondly, available transmission bandwidth is also highly divisible along a temporal line, due to the packet-switching technology on which the medium is based: along the connections within the network, information that is transmitted is chopped up into small packets of data which are recombined at the receiver's end; this means that individual transmissions along the same connection can coexist without interfering with one another, if at a somewhat reduced speed (as anyone navigating the Web while downloading files has no doubt experienced). Again, this is quite different from the airwaves experience, where two radio stations or TV channels can't be broadcasting on the same frequency without drowning each other out. And even the reduction of transmission speed is likely to be only a temporary phenomenon, as network hardware is constantly being upgraded to higher speeds. Internet bandwidth, then, is infinite, in both the publication and the transmission sense of the word. If it's impossible to reach the end of available bandwidth on the Net, then, this means nothing less than that the very concept of 'bandwidth' on the Net ends: that is, it ceases to have any practical relevance -- as Costigan notes, reflecting on an all too familiar metaphor, "the Internet is in many ways the Wild West, the new frontier of our times, but its limits will not be reached. ... The Internet does not have an edge to push past, no wall or ocean to contain it. Its size and shape change constantly, and additions and subtractions do not inherently make something new or different" (xiii). But that this is so, that we have come to this end of 'bandwidth' by never being able to come to an end of bandwidth on the Net, is in itself something fundamentally new and different in media history -- and also something difficult to come to terms with. 
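The packet-switching idea described above can be sketched in a few lines (the message contents and packet size are invented for the demonstration): two 'transmissions' are chopped into small, sequence-numbered packets, interleaved over one shared link, and recombined intact at the receiver's end -- quite unlike two radio stations sharing one frequency:

```python
# Illustrative sketch of packet switching (message contents invented):
# two transmissions share one link without drowning each other out.
from collections import defaultdict
from itertools import chain, zip_longest

PACKET_SIZE = 4  # bytes per packet; real networks use far larger packets

def packetize(sender, message):
    """Chop a message into small, sequence-numbered packets."""
    return [(sender, seq, message[i:i + PACKET_SIZE])
            for seq, i in enumerate(range(0, len(message), PACKET_SIZE))]

def interleave(*streams):
    """One shared link: round-robin the packets of each transmission."""
    slots = zip_longest(*streams)  # pads shorter streams with None
    return [p for p in chain.from_iterable(slots) if p is not None]

def reassemble(link):
    """The receiver's end: regroup by sender and restore packet order."""
    inbox = defaultdict(list)
    for sender, seq, chunk in link:
        inbox[sender].append((seq, chunk))
    return {s: "".join(c for _, c in sorted(parts)) for s, parts in inbox.items()}

link = interleave(packetize("A", "two radio stations"),
                  packetize("B", "can share one wire"))
print(reassemble(link))  # both messages arrive intact
```

Each transmission merely slows the other down a little, as the text notes of downloading files while browsing; neither is lost.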
All those of courses, all those apparently obvious and natural practices of the mainstream media have left us ill prepared for a medium where they are anything but natural, and even counterproductive. Old habits are hard to break, as many of the apparently well-founded criticisms of the Internet show. Let's take Stephen Talbott as an example here: in one of my favourite passages of overzealous Net criticism, he writes of The paradox of intelligence and pathology. The Net: an instrument of rationalisation erected upon an inconceivably complex foundation of computerised logic -- an inexhaustible fount of lucid 'emergent order.' Or, the Net: madhouse, bizarre Underground, scene of flame wars and psychopathological acting out, universal red-light district. ... The Net: a nearly infinite repository of human experience converted into objective data and information -- a universal database supporting all future advances in knowledge and economic productivity. Or, the Net: perfected gossip mill; means for spreading rumours with lightning rapidity; ... ocean of dubious information. (348-9) Ignoring here the fundamental problem of Talbott's implicit claim that there are objective parameters according to which he can reliably judge whether or not any piece of online content is 'objective data' or 'dubious information' (and: for whom?), and thus his unnecessary construction of a paradox, a binary (no pun intended) division into 'good' and 'bad' uses, a second and immediately related problem is that Talbott seems to claim that the two sides of this 'paradox' are somehow able to interfere with each other, to the point of invalidating one another. 
This can easily be seen as a result of continuing to think in terms of bandwidth in the broadcast sense: there, the limited number of channels, and the limited amount of transmission space and time for each channel, have indeed meant that stations must carefully choose what material to broadcast, and that the results are frequently of a mainstream, middle-of-the-road, non-challenging nature. On the Net, this doesn't hold, however: here, the medium can be used for everything from the Human Genome Project to peddling sleaze and pirated 'warez', without the two ends of this continuum of uses ever affecting one another. That's not to say that what goes on in some parts of the Net isn't unsavoury, offensive, illegal, or even severely in violation of basic human rights; and where this is so, the appropriate measures, already provided by legal systems around the world, should be taken to get rid of the worst offenders -- notably, though, this won't be possible through cutting off their access to bandwidth: where bandwidth is unlimited and freely available to anyone, this cannot possibly work. Critical approaches like Talbott's, founded as they are on an outdated understanding of media processes and the false assumption of a homogeneous culture, won't help us in this, therefore: rather, faced with the limitless nature of online bandwidth, we must learn to understand the infinite, and live with it. The question isn't how many 'negative' uses of the Net we can point to -- there will always be an abundance of them. The question is what any one of us, whoever 'we' are, can do to use the Net positively and productively -- whatever we as individuals might consider those positive and productive uses to be. References Costigan, James T. "Introduction: Forests, Trees, and Internet Research." Doing Internet Research: Critical Issues and Methods for Examining the Net. Ed. Steve Jones. Thousand Oaks, Calif.: Sage, 1999. Talbott, Stephen L.
The Future Does Not Compute: Transcending the Machines in Our Midst. Sebastopol, Calif.: O'Reilly & Associates, 1995. Citation reference for this article MLA style: Axel Bruns. "The End of 'Bandwidth': Why We Must Learn to Understand the Infinite." M/C: A Journal of Media and Culture 2.8 (1999). [your date of access] <http://www.uq.edu.au/mc/9912/bandwidth.php>. Chicago style: Axel Bruns, "The End of 'Bandwidth': Why We Must Learn to Understand the Infinite," M/C: A Journal of Media and Culture 2, no. 8 (1999), <http://www.uq.edu.au/mc/9912/bandwidth.php> ([your date of access]). APA style: Axel Bruns. (1999) The end of 'bandwidth': why we must learn to understand the infinite. M/C: A Journal of Media and Culture 2(8). <http://www.uq.edu.au/mc/9912/bandwidth.php> ([your date of access]).
APA, Harvard, Vancouver, ISO, and other styles