Academic literature on the topic 'Multimedia Communications|Native American Studies'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Multimedia Communications|Native American Studies.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Multimedia Communications|Native American Studies"

1

Kaufman, Carol E., Traci M. Schwinn, Kirsten Black, Ellen M. Keane, Cecelia K. Big Crow, Carly Shangreau, Nicole R. Tuitt, Ruth Arthur-Asmah, and Bradley Morse. "Impacting Precursors to Sexual Behavior Among Young American Indian Adolescents of the Northern Plains: A Cluster Randomized Controlled Trial." Journal of Early Adolescence 38, no. 7 (May 9, 2017): 988–1007. http://dx.doi.org/10.1177/0272431617708055.

Full text
Abstract:
We assessed the effectiveness of a culturally grounded, multimedia, sexual risk reduction intervention called Circle of Life (mCOL), designed to increase knowledge and self-efficacy among preteen American Indians and Alaska Natives. Partnering with Native Boys and Girls Clubs in 15 communities across six Northern Plains reservations, we conducted a cluster randomized controlled trial among 10- to 12-year-olds (n = 167; mean age = 11.2). Club units were randomly assigned to mCOL (n = 8) or the attention-control program, After-School Science Plus (AS+; n = 7). Compared with the AS+ group, mCOL youth scored significantly higher on HIV/sexually transmitted infection (STI) knowledge questions at both follow-ups; self-efficacy to avoid peer pressure and self-efficacy to avoid sex were significantly higher at posttest; self-perceived volition was significantly higher at 9-month follow-up; and no differences were found for behavioral precursors to sex. mCOL had modest effects on precursors to sexual behavior, which may lead to less risky sexual behavior in later years.
APA, Harvard, Vancouver, ISO, and other styles
2

Spady, James O'Neil. "Reconsidering Theory: Power, the Learning Body, and Cultural Change during Early American Colonization." Journal of Early American History 1, no. 3 (2011): 191–214. http://dx.doi.org/10.1163/187707011x598178.

Full text
Abstract:
This essay participates in recent calls for more direct engagement with theory in research and teaching within History and Early American Studies. Over the last decade, voices have gathered for a reconsideration of fundamental theoretical concepts in the historiography of culture. This essay reconsiders theory on semiotics, learning, and the body to reopen a conceptual problem in early American cultural historiography: the relationships between organized power and individual agency. I suggest an approach to power and agency specifically tuned to the conditions of early American colonization, which was more intimate and diverse while possessing fewer institutions and less communications-saturation than a focus on myth, ideology, or discursive formations might assume. Reconsidering semiotics as embodied allows a conception of the body as a learning entity creatively mediating discourses and social constructions and thereby generating new historical identities and relations of power. The argument draws on studies on gender/sexuality, Native Americans, and the enslaved and takes cues from the work of Gayatri Spivak, Ann Laura Stoler, Michel Foucault, Lev Vygotsky, and Charles Sanders Peirce.
APA, Harvard, Vancouver, ISO, and other styles
3

Merchant, Melissa, Katie M. Ellis, and Natalie Latter. "Captions and the Cooking Show." M/C Journal 20, no. 3 (June 21, 2017). http://dx.doi.org/10.5204/mcj.1260.

Full text
Abstract:
While the television cooking genre has evolved in numerous ways to withstand competition and become a constant feature in television programming (Collins and College), it has been argued that audience demand for televisual cooking has always been high because of the daily importance of cooking (Hamada, “Multimedia Integration”). Early cooking shows were characterised by an instructional discourse, before quickly embracing an entertainment focus; modern cooking shows take on a more competitive, out of the kitchen focus (Collins and College). The genre has continued to evolve, with celebrity chefs and ordinary people embracing transmedia affordances to return to the instructional focus of the early cooking shows. While the television cooking show is recognised for its broad cultural impacts related to gender (Ouellette and Hay), cultural capital (Ibrahim; Oren), television formatting (Oren), and even communication itself (Matwick and Matwick), its role in the widespread adoption of television captions is significantly underexplored. Even the fact that a cooking show was the first ever program captioned on American television is almost completely unremarked within cooking show histories and literature.A Brief History of Captioning WorldwideWhen captions were first introduced on US television in the early 1970s, programmers were guided by the general principle to make the captioned program “accessible to every deaf viewer regardless of reading ability” (Jensema, McCann and Ramsey 284). However, there were no exact rules regarding captioning quality and captions did not reflect verbatim what was said onscreen. According to Jensema, McCann and Ramsey (285), less than verbatim captioning continued for many years because “deaf people were so delighted to have captions that they accepted almost anything thrown on the screen” (see also Newell 266 for a discussion of the UK context).While the benefits of captions for people who are D/deaf or hard of hearing were immediate, its commercial applications also became apparent. When the moral argument that people who were D/deaf or hard of hearing had a right to access television via captions proved unsuccessful in the fight for legislation, advocates lobbied the US Congress about the mainstream commercial benefits such as in education and the benefits for people learning English as a second language (Downey). Activist efforts and hard-won legal battles meant D/deaf and hard of hearing viewers can now expect closed captions on almost all television content. With legislation in place to determine the provision of captions, attention began to focus on their quality. D/deaf viewers are no longer just delighted to accept anything thrown on the screen and have begun to demand verbatim captioning. At the same time, market-based incentives are capturing the attention of television executives seeking to make money, and the widespread availability of verbatim captions has been recognised for its multimedia—and therefore commercial—applications. These include its capacity for information retrieval (Miura et al.; Agnihotri et al.) and for creative repurposing of television content (Blankinship et al.). 
Captions and transcripts have been identified as being of particular importance to augmenting the information provided in cooking shows (Miura et al.; Oh et al.).Early Captions in the US: Julia Child’s The French ChefJulia Child is indicative of the early period of the cooking genre (Collins and College)—she has been described as “the epitome of the TV chef” (ray 53) and is often credited for making cooking accessible to American audiences through her onscreen focus on normalising techniques that she promised could be mastered at home (ray). She is still recognised for her mastery of the genre, and for her capacity to entertain in a way that stood out from her contemporaries (Collins and College; ray).Julia Child’s The French Chef originally aired on the US publicly-funded Public Broadcasting System (PBS) affiliate WBGH from 1963–1973. The captioning of television also began in the 1960s, with educators creating the captions themselves, mainly for educational use in deaf schools (Downey 70). However, there soon came calls for public television to also be made accessible for the deaf and hard of hearing—the debate focused on equality and pushed for recognition that deaf people were culturally diverse (Downey 70).The PBS therefore began a trial of captioning programs (Downey 71). These would be “open captions”—characters which were positioned on the screen as part of the normal image for all viewers to see (Downey 71). The trial was designed to determine both the number of D/deaf and hard of hearing people viewing the program, as well as to test if non-D/deaf and hard of hearing viewers would watch a program which had captions (Downey 71). The French Chef was selected for captioning by WBGH because it was their most popular television show in the early 1970s and in 1972 eight episodes of The French Chef were aired using open—albeit inconsistent—captions (Downey 71; Jensema et al. 284).There were concerns from some broadcasters that openly captioned programs would drive away the “hearing majority” (Downey 71). However, there was no explicit study carried out in 1972 on the viewers of The French Chef to determine if this was the case because WBGH ran out of funds to research this further (Downey 71). Nevertheless, Jensema, McCann and Ramsey (284) note that WBGH did begin to re-broadcast ABC World News Tonight in the 1970s with open captions and that this was the only regularly captioned show at the time.Due to changes in technology and fears that not everyone wanted to see captions onscreen, television’s focus shifted from open captions to closed captioning in the 1980s. Captions became encoded, with viewers needing a decoder to be able to access them. However, the high cost of the decoders meant that many could not afford to buy them and adoption of the technology was slow (Youngblood and Lysaght 243; Downey 71). In 1979, the US government had set up the National Captioning Institute (NCI) with a mandate to develop and sell these decoders, and provide captioning services to the networks. This was initially government-funded but was designed to eventually be self-sufficient (Downey 73).PBS, ABC and NBC (but not CBS) had agreed to a trial (Downey 73). However, there was a reluctance on the part of broadcasters to pay to caption content when there was not enough evidence that the demand was high (Downey 73—74). The argument for the provision of captioned content therefore began to focus on the rights of all citizens to be able to access a public service. 
A complaint was lodged claiming that the Los Angeles station KCET, which was a PBS affiliate, did not provide captioned content that was available elsewhere (Downey 74). When Los Angeles PBS station KCET refused to air captioned episodes of The French Chef, the Greater Los Angeles Council on Deafness (GLAD) picketed the station until the decision was reversed. GLAD then focused on legislation and used the Rehabilitation Act to argue that television was federally assisted and, by not providing captioned content, broadcasters were in violation of the Act (Downey 74).GLAD also used the 1934 Communications Act in their argument. This Act had firstly established the Federal Communications Commission (FCC) and then assigned them the right to grant and renew broadcast licenses as long as those broadcasters served the ‘‘public interest, convenience, and necessity’’ (Michalik, cited in Downey 74). The FCC could, argued GLAD, therefore refuse to renew the licenses of broadcasters who did not air captioned content. However, rather than this argument working in their favour, the FCC instead changed its own procedures to avoid such legal actions in the future (Downey 75). As a result, although some stations began to voluntarily caption more content, it was not until 1996 that it became a legally mandated requirement with the introduction of the Telecommunications Act (Youngblood and Lysaght 244)—too late for The French Chef.My Kitchen Rules: Captioning BreachWhereas The French Chef presented instructional cooking programming from a kitchen set, more recently the food genre has moved away from the staged domestic kitchen set as an instructional space to use real-life domestic kitchens and more competitive multi-bench spaces. The Australian program MKR straddles this shift in the cooking genre with the first half of each season occurring in domestic settings and the second half in Iron Chef style studio competition (see Oren for a discussion of the influence of Iron Chef on contemporary cooking shows).All broadcast channels in Australia are mandated to caption 100 per cent of programs aired between 6am and midnight. However, the 2013 MKR Grand Final broadcast by Channel Seven Brisbane Pty Ltd and Channel Seven Melbourne Pty Ltd (Seven) failed to transmit 10 minutes of captions some 30 minutes into the 2-hour program. The ACMA received two complaints relating to this. The first complaint, received on 27 April 2013, the same evening as the program was broadcast, noted ‘[the D/deaf community] … should not have to miss out’ (ACMA, Report No. 3046 3). The second complaint, received on 30 April 2013, identified the crucial nature of the missing segment and its effect on viewers’ overall enjoyment of the program (ACMA, Report No. 3046 3).Seven explained that the relevant segment (approximately 10 per cent of the program) was missing from the captioning file, but that it had not appeared to be missing when Seven completed its usual captioning checks prior to broadcast (ACMA, Report No. 3046 4). The ACMA found that Seven had breached the conditions of their commercial television broadcasting licence by “failing to provide a captioning service for the program” (ACMA, Report No. 3046 12). 
The interruption of captioning was serious enough to constitute a breach due, in part, to the nature and characteristic of the program:the viewer is engaged in the momentum of the competitive process by being provided with an understanding of each of the competition stages; how the judges, guests and contestants interact; and their commentaries of the food and the cooking processes during those stages. (ACMA, Report No. 3046 6)These interactions have become a crucial part of the cooking genre, a genre often described as offering a way to acquire cultural capital via instructions in both cooking and ideological food preferences (Oren 31). Further, in relation to the uncaptioned MKR segment, ACMA acknowledged it would have been difficult to follow both the cooking process and the exchanges taking place between contestants (ACMA, Report No. 3046 8). ACMA considered these exchanges crucial to ‘a viewer’s understanding of, and secondly to their engagement with the different inter-related stages of the program’ (ACMA, Report No. 3046 7).An additional complaint was made with regards to the same program broadcast on Prime Television (Northern) Pty Ltd (Prime), a Seven Network affiliate. The complaint stated that the lack of captions was “Not good enough in prime time and for a show that is non-live in nature” (ACMA, Report No. 3124 3). Despite the fact that the ACMA found that “the fault arose from the affiliate, Seven, rather than from the licensee [Prime]”, Prime was also found to also have breached their licence conditions by failing to provide a captioning service (ACMA, Report No. 3124 12).The following year, Seven launched captions for their online catch-up television platform. Although this was a result of discussions with a complainant over the broader lack of captioned online television content, it was also a step that re-established Seven’s credentials as a leader in commercial television access. The 2015 season of MKR also featured their first partially-deaf contestant, Emilie Biggar.Mainstreaming Captions — Inter-Platform CooperationOver time, cooking shows on television have evolved from an informative style (The French Chef) to become more entertaining in their approach (MKR). As Oren identifies, this has seen a shift in the food genre “away from the traditional, instructional format and towards professionalism and competition” (Oren 25). The affordances of television itself as a visual medium has also been recognised as crucial in the popularity of this genre and its more recent transmedia turn. That is, following Joshua Meyrowitz’s medium theory regarding how different media can afford us different messages, televised cooking shows offer audiences stylised knowledge about food and cooking beyond the traditional cookbook (Oren; ray). In addition, cooking shows are taking their product beyond just television and increasing their inter-platform cooperation (Oren)—for example, MKR has a comprehensive companion website that viewers can visit to watch whole episodes, obtain full recipes, and view shopping lists. While this can be viewed as a modern take on Julia Child’s cookbook success, it must also be considered in the context of the increasing focus on multimedia approaches to cooking instructions (Hamada et al., Multimedia Integration; Cooking Navi; Oh et al.). Audiences today are more likely to attempt a recipe if they have seen it on television, and will use transmedia to download the recipe. 
As Oren explains:foodism’s ascent to popular culture provides the backdrop and motivation for the current explosion of food-themed formats that encourages audiences’ investment in their own expertise as critics, diners, foodies and even wanna-be professional chefs. FoodTV, in turn, feeds back into a web-powered, gastro-culture and critique-economy where appraisal outranks delight. (Oren 33)This explosion in popularity of the web-powered gastro culture Oren refers to has led to an increase in appetite for step by step, easy to access instructions. These are being delivered using captions. As a result of the legislation and activism described throughout this paper, captions are more widely available and, in many cases, now describe what is said onscreen verbatim. In addition, the mainstream commercial benefits and uses of captions are being explored. Captions have therefore moved from a specialist assistive technology for people who are D/deaf or hard of hearing to become recognised as an important resource for creative television viewers regardless of their hearing (Blankinship et al.). With captions becoming more accessible, accurate, financially viable, and mainstreamed, their potential as an additional television resource is of interest. As outlined above, within the cooking show genre—especially with its current multimedia turn and the demand for captioned recipe instructions (Hamada et al., “Multimedia Integration”, “Cooking Navi”; Oh et al.)—this is particularly pertinent.Hamada et al. identify captions as a useful technology to use in the increasingly popular educational, yet entertaining, cooking show genre as the required information—ingredient lists, instructions, recipes—is in high demand (Hamada et al., “Multimedia Integration” 658). They note that cooking shows often present information out of order, making them difficult to follow, particularly if a recipe must be sourced later from a website (Hamada et al., “Multimedia Integration” 658-59; Oh et al.). Each step in a recipe must be navigated and coordinated, particularly if multiple recipes are being completed at the same times (Hamada, et al., Cooking Navi) as is often the case on cooking shows such as MKR. Using captions as part of a software program to index cooking videos facilitates a number of search affordances for people wishing to replicate the recipe themselves. As Kyeong-Jin et al. explain:if food and recipe information are published as linked data with the scheme, it enables to search food recipe and annotate certain recipe by communities (sic). In addition, because of characteristics of linked data, information on food recipes can be connected to additional data source such as products for ingredients, and recipe websites can support users’ decision making in the cooking domain. (Oh et al. 2)The advantages of such a software program are many. For the audience there is easy access to desired information. For the number of commercial entities involved, this consumer desire facilitates endless marketing opportunities including product placement, increased ratings, and software development. 
Interestingly, all of this falls outside the "usual" parameters of captions as purely an assistive device for a few, and facilitates the mainstreaming—and perhaps beginnings of acceptance—of captions. Conclusion: Captions are a vital accessibility feature for television viewers who are D/deaf or hard of hearing, not just from an informative or entertainment perspective but also to facilitate social inclusion for this culturally diverse group. The availability and quality of television captions have moved through three stages. These can be broadly summarised as early yet inconsistent captions, captions becoming more widely available and accurate—often as a direct result of activism and legislation—but not yet fully verbatim, and verbatim captions as adopted within mainstream software applications. This paper has situated these stages within the television cooking genre, a genre often remarked upon for its appeal to inclusion and cultural capital. If television facilitates social inclusion, then food television offers vital cultural capital. While Julia Child’s The French Chef offered the first example of television captions via open captions in 1972, a lack of funding means we do not know how viewers (both hearing and not) actually received the program. However, at the time, captions that would be considered unacceptable today were received favourably (Jensema, McCann and Ramsey; Newell)—anything was deemed better than nothing. Increasingly, as the focus shifted to closed captioning and the cooking genre embraced a more competitive approach, viewers who required captions were no longer happy with missing or inconsistent captioning quality. This was particularly significant in Australia in 2013 when several viewers complained to ACMA that captions were missing from the finale of MKR. These captions provided more than vital cooking instructions—their lack prevented viewers from understanding conflict within the program. Following this breach, Seven became the only Australian commercial television station to offer captions on their web-based catch-up platform. While this may have gone a long way to rehabilitate Seven amongst D/deaf and hard of hearing audiences, there is the potential too for commercial benefits. Caption technology is now being mainstreamed for use in cooking software applications developed from televised cooking shows. These allow viewers—both D/deaf and hearing—to access information in a completely new, and inclusive, way. References: Agnihotri, Lalitha, et al. “Summarization of Video Programs Based on Closed Captions.” 4315 (2001): 599–607. Australian Communications and Media Authority (ACMA). Investigation Report No. 3046. 2013. 26 Apr. 2017 <http://www.acma.gov.au/~/media/Diversity%20Localism%20and%20Accessibility/Investigation%20reports/Word%20document/3046%20My%20Kitchen%20Rules%20Grand%20Final%20docx.docx>. ———. Investigation Report No. 3124. 2014. 26 Apr. 2017 <http://www.acma.gov.au/~/media/Diversity%20Localism%20and%20Accessibility/Investigation%20reports/Word%20document/3124%20NEN%20My%20Kitchen%20Rules%20docx.docx>. Blankinship, E., et al. “Closed Caption, Open Source.” BT Technology Journal 22.4 (2004): 151–59. Collins, Kathleen, and John Jay College. “TV Cooking Shows: The Evolution of a Genre”. Flow: A Critical Forum on Television and Media Culture (7 May 2008). 14 May 2017 <http://www.flowjournal.org/2008/05/tv-cooking-shows-the-evolution-of-a-genre/>. Downey, Greg.
“Constructing Closed-Captioning in the Public Interest: From Minority Media Accessibility to Mainstream Educational Technology.” The Journal of Policy, Regulation and Strategy for Telecommunications, Information and Media 9.2/3 (2007): 69–82. DOI: 10.1108/14636690710734670.Hamada, Reiko, et al. “Multimedia Integration for Cooking Video Indexing.” Advances in Multimedia Information Processing-PCM 2004 (2005): 657–64.Hamada, Reiko, et al. “Cooking Navi: Assistant for Daily Cooking in Kitchen.” Proceedings of the 13th Annual ACM International Conference on Multimedia. ACM.Ibrahim, Yasmin. “Food Porn and the Invitation to Gaze: Ephemeral Consumption and the Digital Spectacle.” International Journal of E-Politics (IJEP) 6.3 (2015): 1–12.Jensema, Carl J., Ralph McCann, and Scott Ramsey. “Closed-Captioned Television Presentation Speed and Vocabulary.” American Annals of the Deaf 141.4 (1996): 284–292.Matwick, Kelsi, and Keri Matwick. “Inquiry in Television Cooking Shows.” Discourse & Communication 9.3 (2015): 313–30.Meyrowitz, Joshua. No Sense of Place: The Impact of Electronic Media on Social Behavior. New York: Oxford University Press, 1985.Miura, K., et al. “Automatic Generation of a Multimedia Encyclopedia from TV Programs by Using Closed Captions and Detecting Principal Video Objects.” Eighth IEEE International Symposium on Multimedia (2006): 873–80.Newell, A.F. “Teletext for the Deaf.” Electronics and Power 28.3 (1982): 263–66.Oh, K.J. et al. “Automatic Indexing of Cooking Video by Using Caption-Recipe Alignment.” 2014 International Conference on Behavioral, Economic, and Socio-Cultural Computing (BESC2014) (2014): 1–6.Oren, Tasha. “On the Line: Format, Cooking and Competition as Television Values.” Critical Studies in Television: The International Journal of Television Studies 8.2 (2013): 20–35.Ouellette, Laurie, and James Hay. “Makeover Television, Governmentality and the Good Citizen.” Continuum: Journal of Media & Cultural Studies 22.4 (2008): 471–84.ray, krishnendu. “Domesticating Cuisine: Food and Aesthetics on American Television.” Gastronomica 7.1 (2007): 50–63.Youngblood, Norman E., and Ryan Lysaght. “Accessibility and Use of Online Video Captions by Local Television News Websites.” Electronic News 9.4 (2015): 242–256.
APA, Harvard, Vancouver, ISO, and other styles
4

Goggin, Gerard. "Broadband." M/C Journal 6, no. 4 (August 1, 2003). http://dx.doi.org/10.5204/mcj.2219.

Full text
Abstract:
Connecting I’ve moved house on the weekend, closer to the centre of an Australian capital city. I had recently signed up for broadband, with a major Australian Internet company (my first contact, cf. Turner). Now I am the proud owner of a larger modem than I have ever owned: a white cable modem. I gaze out into our new street: two thick black cables cosseted in silver wire. I am relieved. My new home is located in one of those streets, double-cabled by Telstra and Optus in the data-rush of the mid-1990s. Otherwise, I’d be moth-balling the cable modem, and the thrill of my data percolating down coaxial cable. And it would be off to the computer supermarket to buy an ASDL modem, then to pick a provider, to squeeze some twenty-first century connectivity out of old copper (the phone network our grandparents and great-grandparents built). If I still lived in the country, or the outskirts of the city, or anywhere else more than four kilometres from the phone exchange, and somewhere that cable pay TV will never reach, it would be a dish for me — satellite. Our digital lives are premised upon infrastructure, the networks through which we shape what we do, fashion the meanings of our customs and practices, and exchange signs with others. Infrastructure is not simply the material or the technical (Lamberton), but it is the dense, fibrous knotting together of social visions, cultural resources, individual desires, and connections. No more can one easily discern between ‘society’ and ‘technology’, ‘carriage’ and ‘content’, ‘base’ and ‘superstructure’, or ‘infrastructure’ and ‘applications’ (or ‘services’ or ‘content’). To understand telecommunications in action, or the vectors of fibre, we need to consider the long and heterogeneous list of links among different human and non-human actors — the long networks, to take Bruno Latour’s evocative concept, that confect our broadband networks (Latour). The co-ordinates of our infrastructure still build on a century-long history of telecommunications networks, on the nineteenth-century centrality of telegraphy preceding this, and on the histories of the public and private so inscribed. Yet we are in the midst of a long, slow dismantling of the posts-telegraph-telephone (PTT) model of the monopoly carrier for each nation that dominated the twentieth century, with its deep colonial foundations. Instead our New World Information and Communication Order is not the decolonising UNESCO vision of the late 1970s and early 1980s (MacBride, Maitland). Rather it is the neoliberal, free trade, market access model, its symbol the 1984 US judicial decision to require the break-up of AT&T and the UK legislation in the same year that underpinned the Thatcherite twin move to privatize British Telecom and introduce telecommunications competition. Between 1984 and 1999, 110 telecommunications companies were privatized, and the ‘acquisition of privatized PTOs [public telecommunications operators] by European and American operators does follow colonial lines’ (Winseck 396; see also Mody, Bauer & Straubhaar). The competitive market has now been uneasily installed as the paradigm for convergent communications networks, not least with the World Trade Organisation’s 1994 General Agreement on Trade in Services and Annex on Telecommunications. As the citizen is recast as consumer and customer (Goggin, ‘Citizens and Beyond’), we rethink our cultural and political axioms as well as the axes that orient our understandings in this area. 
Information might travel close to the speed of light, and we might fantasise about optical fibre to the home (or pillow), but our terrain, our band where the struggle lies today, is narrower than we wish. Begging for broadband, it seems, is a long way from warchalking for WiFi. Policy Circuits The dreary everyday business of getting connected plugs the individual netizen into a tangled mess of policy circuits, as much as tricky network negotiations. Broadband in mid-2003 in Australia is a curious chimera, welded together from a patchwork of technologies, old and newer communications industries, emerging economies and patterns of use. Broadband conjures up grander visions, however, of communication and cultural cornucopia. Broadband is high-speed, high-bandwidth, ‘always-on’, networked communications. People can send and receive video, engage in multimedia exchanges of all sorts, make the most of online education, realise the vision of home-based work and trading, have access to telemedicine, and entertainment. Broadband really entered the lexicon with the mass takeup of the Internet in the early to mid-1990s, and with the debates about something called the ‘information superhighway’. The rise of the Internet, the deregulation of telecommunications, and the involuted convergence of communications and media technologies saw broadband positioned at the centre of policy debates nearly a decade ago. In 1993-1994, Australia had its Broadband Services Expert Group (BSEG), established by the then Labor government. The BSEG was charged with inquiring into ‘issues relating to the delivery of broadband services to homes, schools and businesses’. Stung by criticisms of elite composition (a narrow membership, with only one woman among its twelve members, and no consumer or citizen group representation), the BSEG was prompted into wider public discussion and consultation (Goggin & Newell). The then Bureau of Transport and Communications Economics (BTCE), since transmogrified into the Communications Research Unit of the Department of Communications, Information Technology and the Arts (DCITA), conducted its large-scale Communications Futures Project (BTCE and Luck). The BSEG Final report posed the question starkly: As a society we have choices to make. If we ignore the opportunities we run the risk of being left behind as other countries introduce new services and make themselves more competitive: we will become consumers of other countries’ content, culture and technologies rather than our own. Or we could adopt new technologies at any cost…This report puts forward a different approach, one based on developing a new, user-oriented strategy for communications. The emphasis will be on communication among people... (BSEG v) The BSEG proposed a ‘National Strategy for New Communications Networks’ based on three aspects: education and community access, industry development, and the role of government (BSEG x). Ironically, while the nation, or at least its policy elites, pondered the weighty question of broadband, Australia’s two largest telcos were doing it. The commercial decision of Telstra/Foxtel and Optus Vision, and their various television partners, was to nail their colours (black) to the mast, or rather telegraph pole, and to lay cable in the major capital cities. In fact, they duplicated the infrastructure in cities such as Sydney and Melbourne, then deciding it would not be profitable to cable up even regional centres, let alone small country towns or settlements. 
As Terry Flew and Christina Spurgeon observe: This wasteful duplication contrasted with many other parts of the country that would never have access to this infrastructure, or to the social and economic benefits that it was perceived to deliver. (Flew & Spurgeon 72) The implications of this decision for Australia’s telecommunications and television were profound, but there was little, if any, public input into this. Then Minister Michael Lee was very proud of his anti-siphoning list of programs, such as national sporting events, that would remain on free-to-air television rather than screen on pay, but was unwilling, or unable, to develop policy on broadband and pay TV cable infrastructure (on the ironies of Australia’s television history, see Given’s masterly account). During this period also, it may be remembered, Australia’s Internet was being passed into private hands, with the tendering out of AARNET (see Spurgeon for discussion). No such national strategy on broadband really emerged in the intervening years, nor has the market provided integrated, accessible broadband services. In 1997, landmark telecommunications legislation was enacted that provided a comprehensive framework for competition in telecommunications, as well as consolidating and extending consumer protection, universal service, customer service standards, and other reforms (CLC). Carrier and reseller competition had commenced in 1991, and the 1997 legislation gave it further impetus. Effective competition is now well established in long distance telephone markets, and in mobiles. Rivalrous competition exists in the market for local-call services, though viable alternatives to Telstra’s dominance are still few (Fels). Broadband too is an area where there is symbolic rivalry rather than effective competition. This is most visible in advertised ADSL offerings in large cities, yet most of the infrastructure for these services is comprised by Telstra’s copper, fixed-line network. Facilities-based duopoly competition exists principally where Telstra/Foxtel and Optus cable networks have been laid, though there are quite a number of ventures underway by regional telcos, power companies, and, most substantial perhaps, the ACT government’s TransACT broadband network. Policymakers and industry have been greatly concerned about what they see as slow takeup of broadband, compared to other countries, and by barriers to broadband competition and access to ‘bottleneck’ facilities (such as Telstra or Optus’s networks) by potential competitors. The government has alternated between trying to talk up broadband benefits and rates of take up and recognising the real difficulties Australia faces as a large country with a relative small and dispersed population. In March 2003, Minister Alston directed the ACCC to implement new monitoring and reporting arrangements on competition in the broadband industry. A key site for discussion of these matters has been the competition policy institution, the Australian Competition and Consumer Commission, and its various inquiries, reports, and considerations (consult ACCC’s telecommunications homepage at http://www.accc.gov.au/telco/fs-telecom.htm). Another key site has been the Productivity Commission (http://www.pc.gov.au), while a third is the National Office on the Information Economy (NOIE - http://www.noie.gov.au/projects/access/access/broadband1.htm). Others have questioned whether even the most perfectly competitive market in broadband will actually provide access to citizens and consumers. 
A great deal of work on this issue has been undertaken by DCITA, NOIE, the regulators, and industry bodies, not to mention consumer and public interest groups. Since 1997, there have been a number of governmental inquiries undertaken or in progress concerning the takeup of broadband and networked new media (for example, a House of Representatives Wireless Broadband Inquiry), as well as important inquiries into the still most strategically important of Australia’s companies in this area, Telstra. Much of this effort on an ersatz broadband policy has been piecemeal and fragmented. There are fundamental difficulties with the large size of the Australian continent and its harsh terrain, the small size of the Australian market, the number of providers, and the dominant position effectively still held by Telstra, as well as Singtel Optus (Optus’s previous overseas investors included Cable & Wireless and Bell South), and the larger telecommunications and Internet companies (such as Ozemail). Many consumers living in metropolitan Australia still face real difficulties in realising the slogan ‘bandwidth for all’, but the situation in parts of rural Australia is far worse. Satellite ‘broadband’ solutions are available, through Telstra Countrywide or other providers, but these offer limited two-way interactivity. Data can be received at reasonable speeds (though at far lower data rates than how ‘broadband’ used to be defined), but can only be sent at far slower rates (Goggin, Rural Communities Online). The cultural implications of these digital constraints may well be considerable. Computer gamers, for instance, are frustrated by slow return paths. In this light, the final report of the January 2003 Broadband Advisory Group (BAG) is very timely. The BAG report opens with a broadband rhapsody: Broadband communications technologies can deliver substantial economic and social benefits to Australia…As well as producing productivity gains in traditional and new industries, advanced connectivity can enrich community life, particularly in rural and regional areas. It provides the basis for integration of remote communities into national economic, cultural and social life. (BAG 1, 7) Its prescriptions include: Australia will be a world leader in the availability and effective use of broadband...and to capture the economic and social benefits of broadband connectivity...Broadband should be available to all Australians at fair and reasonable prices…Market arrangements should be pro-competitive and encourage investment...The Government should adopt a National Broadband Strategy (BAG 1) And, like its predecessor nine years earlier, the BAG report does make reference to a national broadband strategy aiming to maximise “choice in work and recreation activities available to all Australians independent of location, background, age or interests” (17). However, the idea of a national broadband strategy is not something the BAG really comes to grips with. The final report is keen on encouraging broadband adoption, but not explicit on how barriers to broadband can be addressed. Perhaps this is not surprising given that the membership of the BAG, dominated by representatives of large corporations and senior bureaucrats was even less representative than its BSEG predecessor. Some months after the BAG report, the Federal government did declare a broadband strategy. 
It did so, intriguingly enough, under the rubric of its response to the Regional Telecommunications Inquiry report (Estens), the second inquiry responsible for reassuring citizens nervous about the full-privatisation of Telstra (the first inquiry being Besley). The government’s grand $142.8 million National Broadband Strategy focusses on the ‘broadband needs of regional Australians, in partnership with all levels of government’ (Alston, ‘National Broadband Strategy’). Among other things, the government claims that the Strategy will result in “improved outcomes in terms of services and prices for regional broadband access; [and] the development of national broadband infrastructure assets.” (Alston, ‘National Broadband Strategy’) At the same time, the government announced an overall response to the Estens Inquiry, with specific safeguards for Telstra’s role in regional communications — a preliminary to the full Telstra sale (Alston, ‘Future Proofing’). Less publicised was the government’s further initiative in indigenous telecommunications, complementing its Telecommunications Action Plan for Remote Indigenous Communities (DCITA). Indigenous people, it can be argued, were never really contemplated as citizens with the ken of the universal service policy taken to underpin the twentieth-century government monopoly PTT project. In Australia during the deregulatory and re-regulatory 1990s, there was a great reluctance on the part of Labor and Coalition Federal governments, Telstra and other industry participants, even to research issues of access to and use of telecommunications by indigenous communicators. Telstra, and to a lesser extent Optus (who had purchased AUSSAT as part of their licence arrangements), shrouded the issue of indigenous communications in mystery that policymakers were very reluctant to uncover, let alone systematically address. Then regulator, the Australian Telecommunications Authority (AUSTEL), had raised grave concerns about indigenous telecommunications access in its 1991 Rural Communications inquiry. However, there was no government consideration of, nor research upon, these issues until Alston commissioned a study in 2001 — the basis for the TAPRIC strategy (DCITA). The elision of indigenous telecommunications from mainstream industry and government policy is all the more puzzling, if one considers the extraordinarily varied and significant experiments by indigenous Australians in telecommunications and Internet (not least in the early work of the Tanami community, made famous in media and cultural studies by the writings of anthropologist Eric Michaels). While the government’s mid-2003 moves on a ‘National Broadband Strategy’ attend to some details of the broadband predicament, they fall well short of an integrated framework that grasps the shortcomings of the neoliberal communications model. The funding offered is a token amount. The view from the seat of government is a glance from the rear-view mirror: taking a snapshot of rural communications in the years 2000-2002 and projecting this tableau into a safety-net ‘future proofing’ for the inevitable turning away of a fully-privately-owned Telstra from its previously universal, ‘carrier of last resort’ responsibilities. In this aetiolated, residualist policy gaze, citizens remain constructed as consumers in a very narrow sense in this incremental, quietist version of state securing of market arrangements. 
What is missing is any more expansive notion of citizens, their varied needs, expectations, uses, and cultural imaginings of ‘always on’ broadband networks. Hybrid Networks “Most people on earth will eventually have access to networks that are all switched, interactive, and broadband”, wrote Frances Cairncross in 1998. ‘Eventually’ is a very appropriate word to describe the parlous state of broadband technology implementation. Broadband is in a slow state of evolution and invention. The story of broadband so far underscores the predicament for Australian access to bandwidth, when we lack any comprehensive, integrated, effective, and fair policy in communications and information technology. We have only begun to experiment with broadband technologies and understand their evolving uses, cultural forms, and the sense in which they rework us as subjects. Our communications networks are not superhighways, to invoke an enduring artefact from an older technology. Nor any longer are they a single ‘public’ switched telecommunications network, like those presided over by the post-telegraph-telephone monopolies of old. Like roads themselves, or the nascent postal system of the sixteenth century, broadband is a patchwork quilt. The ‘fibre’ of our communications networks is hybrid. To be sure, powerful corporations dominate, like the Tassis or Taxis who served as postmasters to the Habsburg emperors (Briggs & Burke 25). Activating broadband today provides a perspective on the path dependency of technology history, and how we can open up new threads of a communications fabric. Our options for transforming our multitudinous networked lives emerge as much from everyday tactics and strategies as they do from grander schemes and unifying policies. We may care to reflect on the waning potential for nation-building technology, in the wake of globalisation. We no longer gather our imagined community around a Community Telephone Plan as it was called in 1960 (Barr, Moyal, and PMG). Yet we do require national and international strategies to get and stay connected (Barr), ideas and funding that concretely address the wider dimensions of access and use. We do need to debate the respective roles of Telstra, the state, community initiatives, and industry competition in fair telecommunications futures. Networks have global reach and require global and national integration. Here vision, co-ordination, and resources are urgently required for our commonweal and moral fibre. To feel the width of the band we desire, we need to plug into and activate the policy circuits. Thanks to Grayson Cooke, Patrick Lichty, Ned Rossiter, John Pace, and an anonymous reviewer for helpful comments. Works Cited Alston, Richard. ‘ “Future Proofing” Regional Communications.’ Department of Communications, Information Technology and the Arts, Canberra, 2003. 17 July 2003 <http://www.dcita.gov.au/Article/0,,0_1-2_3-4_115485,00.php> —. ‘A National Broadband Strategy.’ Department of Communications, Information Technology and the Arts, Canberra, 2003. 17 July 2003 <http://www.dcita.gov.au/Article/0,,0_1-2_3-4_115486,00.php>. Australian Competition and Consumer Commission (ACCC). Broadband Services Report March 2003. Canberra: ACCC, 2003. 17 July 2003 <http://www.accc.gov.au/telco/fs-telecom.htm>. —. Emerging Market Structures in the Communications Sector. Canberra: ACCC, 2003. 15 July 2003 <http://www.accc.gov.au/pubs/publications/utilities/telecommu... ...nications/Emerg_mar_struc.doc>. Barr, Trevor. 
new media.com: The Changing Face of Australia’s Media and Telecommunications. Sydney: Allen & Unwin, 2000. Besley, Tim (Telecommunications Service Inquiry). Connecting Australia: Telecommunications Service Inquiry. Canberra: Department of Information, Communications and the Arts, 2000. 17 July 2003 <http://www.telinquiry.gov.au/final_report.php>. Briggs, Asa, and Burke, Peter. A Social History of the Internet: From Gutenberg to the Internet. Cambridge: Polity, 2002. Broadband Advisory Group. Australia’s Broadband Connectivity: The Broadband Advisory Group’s Report to Government. Melbourne: National Office on the Information Economy, 2003. 15 July 2003 <http://www.noie.gov.au/publications/NOIE/BAG/report/index.htm>. Broadband Services Expert Group. Networking Australia’s Future: Final Report. Canberra: Australian Government Publishing Service (AGPS), 1994. Bureau of Transport and Communications Economics (BTCE). Communications Futures Final Project. Canberra: AGPS, 1994. Cairncross, Frances. The Death of Distance: How the Communications Revolution Will Change Our Lives. London: Orion Business Books, 1997. Communications Law Centre (CLC). Australian Telecommunications Regulation: The Communications Law Centre Guide. 2nd edition. Sydney: Communications Law Centre, University of NSW, 2001. Department of Communications, Information Technology and the Arts (DCITA). Telecommunications Action Plan for Remote Indigenous Communities: Report on the Strategic Study for Improving Telecommunications in Remote Indigenous Communities. Canberra: DCITA, 2002. Estens, D. Connecting Regional Australia: The Report of the Regional Telecommunications Inquiry. Canberra: DCITA, 2002. <http://www.telinquiry.gov.au/rti-report.php>, accessed 17 July 2003. Fels, Alan. ‘Competition in Telecommunications’, speech to Australian Telecommunications Users Group 19th Annual Conference. 6 March, 2003, Sydney. <http://www.accc.gov.au/speeches/2003/Fels_ATUG_6March03.doc>, accessed 15 July 2003. Flew, Terry, and Spurgeon, Christina. ‘Television After Broadcasting’. In The Australian TV Book. Ed. Graeme Turner and Stuart Cunningham. Allen & Unwin, Sydney. 69-85. 2000. Given, Jock. Turning Off the Television. Sydney: UNSW Press, 2003. Goggin, Gerard. ‘Citizens and Beyond: Universal service in the Twilight of the Nation-State.’ In All Connected?: Universal Service in Telecommunications, ed. Bruce Langtry. Melbourne: University of Melbourne Press, 1998. 49-77 —. Rural Communities Online: Networking to link Consumers to Providers. Melbourne: Telstra Consumer Consultative Council, 2003. Goggin, Gerard, and Newell, Christopher. Digital Disability: The Social Construction of Disability in New Media. Lanham, MD: Rowman & Littlefield, 2003. House of Representatives Standing Committee on Communications, Information Technology and the Arts (HoR). Connecting Australia!: Wireless Broadband. Report of Inquiry into Wireless Broadband Technologies. Canberra: Parliament House, 2002. <http://www.aph.gov.au/house/committee/cita/Wbt/report.htm>, accessed 17 July 2003. Lamberton, Don. ‘A Telecommunications Infrastructure is Not an Information Infrastructure’. Prometheus: Journal of Issues in Technological Change, Innovation, Information Economics, Communication and Science Policy 14 (1996): 31-38. Latour, Bruno. Science in Action: How to Follow Scientists and Engineers Through Society. Cambridge, MA: Harvard University Press, 1987. Luck, David. 
‘Revisiting the Future: Assessing the 1994 BTCE communications futures project.’ Media International Australia 96 (2000): 109-119. MacBride, Sean (Chair of International Commission for the Study of Communication Problems). Many Voices, One World: Towards a New More Just and More Efficient World Information and Communication Order. Paris: Kegan Page, London. UNESCO, 1980. Maitland Commission (Independent Commission on Worldwide Telecommunications Development). The Missing Link. Geneva: International Telecommunications Union, 1985. Michaels, Eric. Bad Aboriginal Art: Tradition, Media, and Technological Horizons. Sydney: Allen & Unwin, 1994. Mody, Bella, Bauer, Johannes M., and Straubhaar, Joseph D., eds. Telecommunications Politics: Ownership and Control of the Information Highway in Developing Countries. Mahwah, NJ: Erlbaum, 1995. Moyal, Ann. Clear Across Australia: A History of Telecommunications. Melbourne: Thomas Nelson, 1984. Post-Master General’s Department (PMG). Community Telephone Plan for Australia. Melbourne: PMG, 1960. Productivity Commission (PC). Telecommunications Competition Regulation: Inquiry Report. Report No. 16. Melbourne: Productivity Commission, 2001. <http://www.pc.gov.au/inquiry/telecommunications/finalreport/>, accessed 17 July 2003. Spurgeon, Christina. ‘National Culture, Communications and the Information Economy.’ Media International Australia 87 (1998): 23-34. Turner, Graeme. ‘First Contact: coming to terms with the cable guy.’ UTS Review 3 (1997): 109-21. Winseck, Dwayne. ‘Wired Cities and Transnational Communications: New Forms of Governance for Telecommunications and the New Media’. In The Handbook of New Media: Social Shaping and Consequences of ICTs, ed. Leah A. Lievrouw and Sonia Livingstone. London: Sage, 2002. 393-409. World Trade Organisation. General Agreement on Trade in Services: Annex on Telecommunications. Geneva: World Trade Organisation, 1994. 17 July 2003 <http://www.wto.org/english/tratop_e/serv_e/12-tel_e.htm>. —. Fourth protocol to the General Agreement on Trade in Services. Geneva: World Trade Organisation. 17 July 2003 <http://www.wto.org/english/tratop_e/serv_e/4prote_e.htm>. Links http://www.accc.gov.au/pubs/publications/utilities/telecommunications/Emerg_mar_struc.doc http://www.accc.gov.au/speeches/2003/Fels_ATUG_6March03.doc http://www.accc.gov.au/telco/fs-telecom.htm http://www.aph.gov.au/house/committee/cita/Wbt/report.htm http://www.dcita.gov.au/Article/0,,0_1-2_3-4_115485,00.html http://www.dcita.gov.au/Article/0,,0_1-2_3-4_115486,00.html http://www.noie.gov.au/projects/access/access/broadband1.htm http://www.noie.gov.au/publications/NOIE/BAG/report/index.htm http://www.pc.gov.au http://www.pc.gov.au/inquiry/telecommunications/finalreport/ http://www.telinquiry.gov.au/final_report.html http://www.telinquiry.gov.au/rti-report.html http://www.wto.org/english/tratop_e/serv_e/12-tel_e.htm http://www.wto.org/english/tratop_e/serv_e/4prote_e.htm Citation reference for this article Substitute your date of access for Dn Month Year etc... MLA Style Goggin, Gerard. "Broadband" M/C: A Journal of Media and Culture< http://www.media-culture.org.au/0308/02-featurebroadband.php>. APA Style Goggin, G. (2003, Aug 26). Broadband. M/C: A Journal of Media and Culture, 6,< http://www.media-culture.org.au/0308/02-featurebroadband.php>
APA, Harvard, Vancouver, ISO, and other styles
5

"Language teaching." Language Teaching 38, no. 1 (January 2005): 19–26. http://dx.doi.org/10.1017/s0261444805212521.

Full text
Abstract:
05–01 Ainsworth, Judith (Wilfrid Laurier U, Canada). Hôtel Renaissance: using a project case study to teach business French. Journal of Language for International Business (Glendale, AZ, USA) 16.1 (2005), 43–59.
05–02 Bärenfänger, Olaf (U of Leipzig, Germany). Fremdsprachenlernen durch Lernmanagement: Grundzüge eines projektbasierten Didaktikkonzepts [Foreign language learning through learning management: main features of a didactic project-based concept]. Fremdsprachen Lehren und Lernen (Tübingen, Germany) 33 (2004), 251–267.
05–03 Benati, Alessandro (U of Greenwich, UK; a.benati@gre.ac.uk). The effects of processing instruction, traditional instruction and meaning-output instruction on the acquisition of the English past simple tense. Language Teaching Research (London, UK) 9.1 (2005), 67–93.
05–04 Carless, D. (Hong Kong Institute of Education, Hong Kong). Issues in teachers' reinterpretation of a task-based innovation in primary schools. TESOL Quarterly (Alexandria, VA, USA) 38.4 (2004), 639–662.
05–05 Curry, M. J. & Lillis, T. (U of Rochester, New York, USA). Multilingual scholars and the imperative to publish in English: negotiating interests, demands, and rewards. TESOL Quarterly (Alexandria, VA, USA) 38.4 (2004), 663–688.
05–06 Dufficy, Paul (U of Sydney, Australia; p.dufficy@edfac.usyd.edu.au). Predisposition to choose: the language of an information gap task in a multilingual primary classroom. Language Teaching Research (London, UK) 8.3 (2004), 241–261.
05–07 Evans, Michael & Fisher, Linda (U of Cambridge, UK; mje1000@hermes.cam.ac.uk). Measuring gains in pupils' foreign language competence as a result of participating in a school exchange visit: the case of Y9 pupils at three comprehensive schools in the UK. Language Teaching Research (London, UK) 9.2 (2005), 173–192.
05–08 Gunn, Cindy (The American U of Sharjah, UAE; cgunn@ausharjah.edu). Prioritizing practitioner research: an example from the field. Language Teaching Research (London, UK) 9.1 (2005), 97–112.
05–09 Hansen, J. G. & Liu, J. (U of Arizona, USA). Guiding principles for effective peer response. ELT Journal (Oxford, UK) 59.1 (2005), 31–38.
05–10 Hatoss, Anikó (U of Southern Queensland, Australia; hatoss@usq.edu.au). A model for evaluating textbooks. Babel – Journal of the AFMLTA (Queensland, Australia) 39.2 (2004), 25–32.
05–11 Kabat, Kaori, Weibe, Grace & Chao, Tracy (U of Alberta, Canada). Challenge of developing and implementing multimedia courseware for a Japanese language program. CALICO Journal (TX, USA) 22.2 (2005), 237–250.
05–12 Kuo, Wan-wen (U of Pennsylvania, USA). Survival skills in foreign languages for business practitioners: the development of an online Chinese project. Journal of Language for International Business (Glendale, AZ, USA) 16.1 (2005), 1–17.
05–13 Liu, D., Ahn, G., Baek, K. & Han, N. (Oklahoma City U, USA). South Korean high school English teachers' code switching: questions and challenges in the drive for maximal use of English in teaching. TESOL Quarterly (Alexandria, VA, USA) 38.4 (2004), 605–638.
05–14 Lotherington, Heather (York U, Canada). What four skills? Redefining language and literacy standards for ELT in the digital era. TESL Canada Journal (Burnaby, Canada) 22.1 (2004), 64–78.
05–15 Lutjeharms, Madeline (Vrije U, Belgium). Der Zugriff auf das mentale Lexikon und der Wortschatzerwerb in der Fremdsprache [Access to the mental lexicon and vocabulary acquisition in a foreign language]. Fremdsprachen Lehren und Lernen (Tübingen, Germany) 33 (2004), 10–24.
05–16 Lyster, Roy (McGill U, Canada; roy.lyster@mcgill.ca). Research on form-focused instruction in immersion classrooms: implications for theory and practice. French Language Studies (Cambridge, UK) 14.3 (2004), 321–341.
05–17 Mackey, Alison (Georgetown U, USA; mackeya@georgetown.edu), Polio, Charlene & McDonough, Kim. The relationship between experience, education and teachers' use of incidental focus-on-form techniques. Language Teaching Research (London, UK) 8.3 (2004), 301–327.
05–18 MacLennan, Janet (U of Puerto Rico). How can I hear your voice when someone else is speaking for you? An investigation of the phenomenon of the classroom spokesperson in the ESL classroom. TESL Canada Journal (Burnaby, Canada) 22.1 (2004), 91–97.
05–19 Mangubhai, Francis (U of Southern Queensland, Australia; mangubha@usq.edu.au), Marland, Perc, Dashwood, Ann & Son, Jeong-Bae. Similarities and differences in teachers' and researchers' conceptions of communicative language teaching: does the use of an educational model cast a better light? Language Teaching Research (London, UK) 9.1 (2005), 31–66.
05–20 Meskill, Carla & Anthony, Natasha (Albany State U of New York, USA; cmeskill@uamail.albany.edu). Foreign language learning with CMC: forms of online instructional discourse in a hybrid Russian class. System (Oxford, UK) 33.1 (2005), 89–105.
05–21 Paribakht, T. S. (U of Ottawa, Canada; parbakh@uottowa.ca). The role of grammar in second language lexical processing. RELC Journal (Singapore) 35.2 (2004), 149–160.
05–22 Ramachandran, Sharimllah Devi (Kolej U Teknikal Kebangsaan, Malaysia; sharimllah@kutkm.edu.my) & Rahim, Hajar Abdul. Meaning recall and retention: the impact of the translation method on elementary level learners' vocabulary learning. RELC Journal (Singapore) 35.2 (2004), 161–178.
05–23 Roessingh, Hetty & Johnson, Carla (U of Calgary, Canada). Teacher-prepared materials: a principled approach. TESL Canada Journal (Burnaby, Canada) 22.1 (2004), 44–63.
05–24 Rogers, Sandra H. (Otago Polytechnic English Language Institute, New Zealand; sandrar@tekotago.ac.nz). Evaluating textual coherence: a case study of university business writing by EFL and native English speaking students in New Zealand. RELC Journal (Singapore) 35.2 (2004), 135–147.
05–25 Sheen, Young Hee (Teachers College, Columbia U, USA; ys335@columbia.edu). Corrective feedback and learner uptake in communicative classrooms across instructional settings. Language Teaching Research (London, UK) 8.3 (2004), 263–300.
05–26 Sparks, Richard L. (College of Mt. St. Joseph, USA), Ganschow, Leonore, Artzer, Marjorie E., Siebenhar, David & Plageman, Mark. Foreign language teachers' perceptions of students' academic skills, affective characteristics, and proficiency: replication and follow-up studies. Foreign Language Annals (New York, USA) 37.2 (2004), 263–278.
05–27 Taguchi, Naoko (Carnegie Mellon U, USA). The communicative approach in Japanese secondary schools: teachers' perceptions and practice. The Language Teacher (Japan) 29.3 (2005), 3–12.
05–28 Tsang, Wai King (City U of Hong Kong, Hong Kong; entsanwk@cityu.edu.hk). Feedback and uptake in teacher-student interaction: an analysis of 18 English lessons in Hong Kong secondary classrooms. RELC Journal (Singapore) 35.2 (2004), 187–209.
05–29 Weinberg, Alice (U of Ottawa, Canada). Les chansons de la francophonie website and its two web-usage-tracking systems in an advanced listening comprehension course. CALICO Journal (TX, USA) 22.2 (2005), 251–268.
05–30 West, D. Vanisa (Messiah College, PA, USA). Literature in lower-level courses: making progress in both language and reading skills. Foreign Language Annals (New York, USA) 37.2 (2004), 209–223.
05–31 Williams, Cheri (U of Cincinnati, USA) & Hufnagel, Krissy. The impact of word study instruction on kindergarten children's journal writing. Research in the Teaching of English (Urbana, IL, USA) 39.3 (2005), 233–270.
APA, Harvard, Vancouver, ISO, and other styles
6

Mizrach, Steven. "Natives on the Electronic Frontier." M/C Journal 3, no. 6 (December 1, 2000). http://dx.doi.org/10.5204/mcj.1890.

Full text
Abstract:
Introduction Many anthropologists and other academics have attempted to argue that the spread of technology is a global homogenising force, socialising the remaining indigenous groups across the planet into an indistinct Western "monoculture" focussed on consumption, where they are rapidly losing their cultural distinctiveness. In many cases, these intellectuals – people such as Jerry Mander – often blame the diffusion of television (particularly through new innovations that are allowing it to penetrate further into rural areas, such as satellite and cable) as a key force in the effort to "assimilate" indigenous groups and eradicate their unique identities. Such writers suggest that indigenous groups can do nothing to resist the onslaught of the technologically, economically, and aesthetically superior power of Western television. Ironically, while often protesting the plight of indigenous groups and heralding their need for cultural survival, these authors often fail to recognise these groups’ abilities to fend for themselves and preserve their cultural integrity. On the other side of the debate are visual anthropologists and others who are arguing that indigenous groups are quickly becoming savvy to Western technologies, and that they are now using them for cultural revitalisation, linguistic revival, and the creation of outlets for the indigenous voice. In this school of thought, technology is seen not so much as a threat to indigenous groups, but instead as a remarkable opportunity to reverse the misfortunes of these groups at the hands of colonisation and national programmes of attempted assimilation. From this perspective, the rush of indigenous groups to adopt new technologies comes hand-in-hand with recent efforts to assert their tribal sovereignty and their independence. Technology has become a "weapon" in their struggle for technological autonomy. As a result, many are starting their own television stations and networks, and thus transforming the way television operates in their societies – away from global monocultures and toward local interests. I hypothesise that there is no correlation between television viewing and acculturation, and that, in fact, the more familiar people are with the technology of television and the current way the technology is utilised, the more likely they are to be interested in using it to revive and promote their own culture. Whatever slight negative effect exists depends on the degree to which local people can understand and redirect how that technology is used within their own cultural context. However, it should be stated that for the purposes of this investigation, I consider the technologies of "video" and "television" to be identical. One is the recording aspect, and the other the distribution aspect, of the same technology. Once people become aware that they can control what is on the television screen through the instrumentality of video, they immediately begin attempting to assert cultural values through it. And this is precisely what is going on on the Cheyenne River Reservation. This project is significant because the phenomenon of globalisation is real and Western technologies such as video, radio, and PCs are spreading throughout the world, including the "Fourth World" of the planet’s indigenous peoples. However, in order to deal with the phenomenon of globalisation, anthropologists and others may need to deal more realistically with the phenomenon of technological diffusion, which operates far less simply than they might assume. 
Well-meaning anthropologists seeking to "protect" indigenous groups from the "invasion" of technologies which will change their way of life may be doing these groups a disservice. If they turned some of their effort away from fending off these technologies and toward teaching indigenous groups how to use them, perhaps they might have more success in creating a better future for them. I hope this study will show a more productive model for dealing with technological diffusion and what effects it has on cultural change in indigenous societies. There have been very few authors that have dealt with this topic head-on. One of the first to do so was Pace (1993), who suggested that some Brazilian Indians were acculturating more quickly as a result of television finally coming to their remote villages in the 1960s. Molohon (1984) looked at two Cree communities, and found that the one which had heavier television viewing was culturally closer to its neighboring white towns. Zimmerman (1996) fingered television as one of the key elements in causing Indian teenagers to lose their sense of identity, thus putting them at higher risk for suicide. Gillespie (1995) argued that television is actually a ‘weapon’ of national states everywhere in their efforts to assimilate and socialise indigenous and other ethnic minority groups. In contrast, authors like Weiner (1997), Straubhaar (1991), and Graburn (1982) have all critiqued these approaches, suggesting that they deny subjectivity and critical thinking to indigenous TV audiences. Each of these researchers suggests, based on their field work, that indigenous people are no more likely than anybody else to believe that the things they see on television are true, and no more likely to adopt the values or worldviews promoted by Western TV programmers and advertisers. In fact, Graburn has observed that the Inuit became so disgusted with what they saw on Canadian national television that they went out and started their own TV network in an effort to provide their people with meaningful alternatives on their screens. Bell (1995) sounds a cautionary note against studies like Graburn’s, noting that the efforts of indigenous New Zealanders to create their own TV programming for local markets failed, largely because they were crowded out by the "media imperialism" of outside international television. Although the indigenous groups there tried to put their own faces on the screen, many local viewers preferred to see the faces of J.R. Ewing and company, and lowered the ratings share of these efforts. Salween (1991) thinks that global media "cultural imperialism" is real – that it is an objective pursued by international television marketers – and suggests a media effects approach might be the best way to see whether it works. Woll (1987) notes that historically many ethnic groups have formed their self-images based on the way they have been portrayed onscreen, and that so far these portrayals have been far from sympathetic. In fact, even once these groups started their own cinemas or TV programmes, they unconsciously perpetuated stereotypes first foisted on them by other people. This study tends to side with those who have observed that indigenous people do not tend to "roll over" in the wake of the onslaught of Western television. 
Although cautionary studies need to be examined carefully, this research will posit that although the dominant forces controlling TV are antithetical to indigenous groups and their goals, the efforts of indigenous people to take control of their TV screens and their own "media literacy" are also increasing. Thus, this study should contribute to the viewpoint that perhaps the best way to save indigenous groups from cultural eradication is to give them access to television and show them how to set up their own stations and distribute their own video programming. In fact, it appears to be the case that TV, the Internet, and electronic 'new media' are helping to foster a process of cultural renewal, not just among the Lakota, but also among the Inuit, the Australian aborigines, and other indigenous groups. These new technologies are helping them renew their native languages, cultural values, and ceremonial traditions, sometimes by giving them new vehicles and forms. Methods The research for this project was conducted on the Cheyenne River Sioux Reservation headquartered in Eagle Butte, South Dakota. Participants chosen for this project were Lakota Sioux who were of the age of consent (18 or older) and who were tribal members living on the reservation. They were given a survey which consisted of five components: a demographic question section identifying their age, gender, and individual data; a technology question section identifying what technologies they had in their home; a TV question section measuring the amount of television they watched; an acculturation question section determining their comparative level of acculturation; and a cultural knowledge question section determining their knowledge of Lakota history. This questionnaire was often followed up by unstructured ethnographic interviews. Thirty-three people of mixed age and gender were given this questionnaire, and for the purposes of this research paper, I focussed primarily on their responses dealing with television and acculturation. These people were chosen through strictly random sampling based on picking addresses at random from the phone book and visiting their houses. The television section asked specifically how many hours of TV they watched per day and per week, what shows they watched, what kinds of shows they preferred, and what rooms in their home had TVs. The acculturation section asked them questions such as how much they used the Lakota language, how close their values were to Lakota values, and how much participation they had in traditional indigenous rituals and customs. To assure open and honest responses, each participant filled out a consent form, and was promised anonymity of their answers. To avoid data contamination, I remained with each person until they completed the questionnaire. For my data analysis, I attempted to determine if there was any correlation (Pearson’s coefficient r of correlation) between such things as hours of TV viewed per week or years of TV ownership with such things as the number of traditional ceremonies they attended in the past year, the number of non-traditional Lakota values they had, their fluency in the Lakota language, their level of cultural knowledge, or the number of traditional practices and customs they had engaged in in their lives. Through simple statistical tests, I determined whether television viewing had any impact on these variables which were reasonable proxies for level of acculturation. 
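To make the analysis concrete, the short sketch below shows how a Pearson correlation of this kind can be computed. It is an illustration only, not the author's original analysis script: the function is a standard implementation of Pearson's r, and the sample lists are hypothetical stand-ins for the survey responses.

# Minimal sketch of the correlation test described above; the data are hypothetical.
import math

def pearson_r(x, y):
    # Pearson's coefficient of correlation between two equal-length lists.
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sd_x = math.sqrt(sum((a - mean_x) ** 2 for a in x))
    sd_y = math.sqrt(sum((b - mean_y) ** 2 for b in y))
    return cov / (sd_x * sd_y)

# Hypothetical responses: hours of TV watched per week and a cultural knowledge score.
hours_tv_per_week  = [7, 14, 21, 28, 35, 10, 3, 40]
cultural_knowledge = [9,  7,  8,  5,  6, 10, 9,  4]

print(round(pearson_r(hours_tv_per_week, cultural_knowledge), 4))

With the actual survey responses substituted for the placeholder lists, the same calculation would yield each of the r-scores reported in Table 1 below.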
Findings Having chosen two independent variables, hours of TV watched per week, and years of TV ownership, I tested if there was any significant correlation between them and the dependent variables of Lakota peoples’ level of cultural knowledge, participation in traditional practices, conformity of values to non-Lakota or non-traditional values, fluency in Lakota, and participation in traditional ceremonies (Table 1). These variables all seemed like reasonable proxies for acculturation since acculturated Lakota would know less of their own culture, go to fewer ceremonies, and so on. The cultural knowledge score was based on how many complete answers the respondents knew to ‘fill in the blank’ questions regarding Lakota history, historical figures, and important events. Participation in traditional practices was based on how many items they marked in a survey of whether or not they had ever raised a tipi, used traditional medicine, etc. The score for conformity to non-Lakota values was based on how many items they marked with a contrary answer to the emic Lakota value system ("the seven Ws"). Lakota fluency was based on how well they could speak, write, or use the Lakota language. And ceremonial attendance was based on the number of traditional ceremonies they had attended in the past year. There were no significant correlations between either of these TV-related variables and these indexes of acculturation.

Table 1. R-Scores (Pearson’s Coefficient of Correlation) between Variables Representing Television and Acculturation

R-SCORES           Cultural Knowledge   Traditional Practices   Modern Values   Lakota Fluency   Ceremonial Attendance
Years Owning TV    0.1399               -0.0445                 -0.4646         -0.0660          0.1465
Hours of TV/Week   -0.3414              -0.2640                 -0.2798         -0.3349          0.2048

The strongest correlation was between the number of years the Lakota person owned a television, and the number of non-Lakota (or ‘modern Western’) values they held in their value system. But even that correlation was pretty weak, and nowhere near the r-score of other linear correlations, such as between their age and the number of children they had. How much television Lakota people watched did not seem to have any influence on how much cultural knowledge they had, how many traditional practices they had participated in, how many non-Lakota values they held, how well they spoke or used the Lakota language, or how many ceremonies they attended. Even though there does not appear to be anything unusual about their television preferences, and in general they are watching the same shows as other non-Lakota people on the reservation, they are not becoming more acculturated as a result of their exposure to television. Although the Lakota people may be losing aspects of their culture, language, and traditions, causes other than television seem to be at the forefront. I also found that people who were very interested in television production as well as consumption saw this as a tool for putting more Lakota-oriented programs on the air. The more they knew about how television worked, the more they were interested in using it as a tool in their own community. And where I was working at the Cultural Center, there was an effort to videotape many community and cultural events. The Center had a massive archive of videotaped material, but unfortunately while they had faithfully recorded all kinds of cultural events, many of them were not quite "broadcast ready". 
There was more focus on showing these video programmes, especially oral history interviews with elders, on VCRs in the school system, and in integrating them into various kinds of multimedia and hypermedia. While the Cultural Center had begun broadcasting (remotely through a radio modem) a weekly radio show, ‘Wakpa Waste’ (Good Morning CRST), on the radio station to the north, KLND-Standing Rock, there had never been any forays into TV broadcasting. The Cultural Center director had looked into the feasibility of putting up a television signal transmission tower, and had applied for a grant to erect one, but that grant was denied. The local cable system in Eagle Butte unfortunately lacked the technology to carry true "local access" programming; although Channel 8 of the system carried CRST News and text announcements, there was no open channel available to carry locally produced public access programming. The way the cable system was set up, it was purely a "relay" or feed from news and channels from elsewhere. Also, people were investing heavily in satellite systems, especially the new DBS (direct broadcast satellite) receivers, and would not be able to pick up local access programmes anyway. The main problem hindering the Lakotas’ efforts to preserve their culture through TV and video was lack of access to broadcast distribution technology. They had the interest, the means, and the stock of programming to put on the air. They had the production and editing equipment, although not the studios to do a "live" show. Were they able to have more local access to and control over TV distribution technology, they would have a potent "arsenal" for resisting the drastic acculturation their community is undergoing. TV has the potential to be a tool for great cultural revitalisation, but because the technology and know-how for producing it were located elsewhere, the Lakotas could not benefit from it. Discussion I hypothesised that the effects of TV viewing on levels of indigenous acculturation would be negligible. The data support my hypothesis that TV does not seem to have a major correlation with other indices of acculturation. Previous studies by anthropologists such as Pace and Molohon suggested that TV was a key determinant in the acculturation of indigenous people in Brazil and the U.S. – this being the theory of cultural imperialism. However, this research suggests that TV’s effect on the decline of indigenous culture is weak and inconclusive. In fact, the qualitative data suggest that the Lakota most familiar with TV are also the most interested in using it as a tool for cultural preservation. Although the CRST Lakota currently lack the means for mass broadcast of cultural programming, there is great interest in it, and new technologies such as the Internet and micro-broadcast may give them the means. There are other examples of this phenomenon worldwide, which suggest that the Lakota experience is not unique. In recent years, Australian Aborigines, Canadian Inuit, and Brazilian Kayapo have each begun ambitious efforts in creating satellite-based television networks that allow them to reach their far-flung populations with programming in their own indigenous language. In Australia, Aboriginal activists have created music television programming which has helped them assert their position in land claims disputes with the Australian government (Michaels 1994), and also educate the Europeans of Australia about the aboriginal way of life. 
In Canada, the Inuit have also created satellite TV networks which are indigenous-owned and operated and carry traditional cultural programming (Valaskakis 1992). Like the Aborigines and the Inuit, the Lakota through their HVJ Lakota Cultural Center are beginning to create their own radio and video programming on a smaller scale, but are beginning to examine using the reservation's cable network to carry some of this material. Since my quantitative survey included only 33 respondents, the data are not as robust as would be determined from a larger sample. However, ethnographic interviews focussing on how people approach TV, as well as other qualitative data, support the inferences of the quantitative research. It is not clear that my work with the Lakota is necessarily generalisable to other populations. Practically, it does suggest that anthropologists interested in cultural and linguistic preservation should strive to increase indigenous access to, and control of, TV production technology. ‘Protecting’ indigenous groups from new technologies may cause more harm than good. Future applied anthropologists should work with the ‘natives’ and help teach them how to adopt and adapt this technology for their own purposes. Although this is a matter that I deal with more intensively in my dissertation, it also appears to me to be the case that, contrary to the warnings of Mander, many indigenous cultures are not being culturally assimilated by media technology, but instead are assimilating the technology into their own particular cultural contexts. The technology is part of a process of revitalisation or renewal -- although there is a definite process of change and adaptation underway, this actually represents an 'updating' of old cultural practices for new situations in an attempt to make them viable for the modern situation. Indeed, I think that the Internet, globally, is allowing indigenous people to reassert themselves as a Fourth World "power bloc" on the world stage, as linkages are being formed between Saami, Maya, Lakota, Kayapo, Inuit, and Aborigines. Further research should focus on: why TV seems to have a greater acculturative influence on certain indigenous groups rather than others; whether indigenous people can truly compete equally in the broadcast "marketplace" with Western cultural programming; and whether attempts to quantify the success of TV/video technology in cultural preservation and revival can truly demonstrate that this technology plays a positive role. In conclusion, social scientists may need to take a sidelong look at why precisely they have been such strong critics of introducing new technologies into indigenous societies. There is a better role that they can play –- that of technology ‘broker’. They can cooperate with indigenous groups, serving to facilitate the exchange of knowledge, expertise, and technology between them and the majority society. References Bell, Avril. "'An Endangered Species’: Local Programming in the New Zealand Television Market." Media, Culture & Society 17.1 (1995): 182-202. Gillespie, Marie. Television, Ethnicity, and Cultural Change. New York: Routledge, 1995. Graburn, Nelson. "Television and the Canadian Inuit". Inuit Etudes 6.2 (1982): 7-24. Michaels, Eric. Bad Aboriginal Art: Tradition, Media, and Technological Horizons. Minneapolis: U of Minnesota P, 1994. Molohon, K.T. "Responses to Television in Two Swampy Cree Communities on the West James Bay." Kroeber Anthropology Society Papers 63/64 (1982): 95-103. Pace, Richard. 
"First-Time Televiewing in Amazonia: Television Acculturation in Gurupa, Brazil." Ethnology 32.1 (1993): 187-206. Salween, Michael. "Cultural Imperialism: A Media Effects Approach." Critical Studies in Mass Communication 8.2 (1991): 29-39. Straubhaar, J. "Beyond Media Imperialism: Asymmetrical Interdependence and Cultural Proximity". Critical Studies in Mass Communication 8.1 (1991): 39-70. Valaskakis, Gail. "Communication, Culture, and Technology: Satellites and Northern Native Broadcasting in Canada". Ethnic Minority Media: An International Perspective. Newbury Park: Sage Publications, 1992. Weiner, J. "Televisualist Anthropology: Representation, Aesthetics, Politics." Current Anthropology 38.3 (1997): 197-236. Woll, Allen. Ethnic and Racial Images in American Film and Television: Historical Essays and Bibliography. New York: Garland Press, 1987. Zimmerman, M. "The Development of a Measure of Enculturation for Native American Youth." American Journal of Community Psychology 24.1 (1996): 295-311. Citation reference for this article MLA style: Steven Mizrach. "Natives on the Electronic Frontier: Television and Cultural Change on the Cheyenne River Sioux Reservation." M/C: A Journal of Media and Culture 3.6 (2000). [your date of access] <http://www.api-network.com/mc/0012/natives.php>. Chicago style: Steven Mizrach, "Natives on the Electronic Frontier: Television and Cultural Change on the Cheyenne River Sioux Reservation," M/C: A Journal of Media and Culture 3, no. 6 (2000), <http://www.api-network.com/mc/0012/natives.php> ([your date of access]). APA style: Steven Mizrach. (2000) Natives on the electronic frontier: television and cultural change on the Cheyenne River Sioux Reservation. M/C: A Journal of Media and Culture 3(6). <http://www.api-network.com/mc/0012/natives.php> ([your date of access]).
APA, Harvard, Vancouver, ISO, and other styles
7

Hermida, Alfred. "From TV to Twitter: How Ambient News Became Ambient Journalism." M/C Journal 13, no. 2 (March 9, 2010). http://dx.doi.org/10.5204/mcj.220.

Full text
Abstract:
In a TED talk in June 2009, media scholar Clay Shirky cited the devastating earthquake that struck the Sichuan province of China in May 2008 as an example of how media flows are changing. He explained how the first reports of the quake came not from traditional news media, but from local residents who sent messages on QQ, China’s largest social network, and on Twitter, the world’s most popular micro-blogging service. "As the quake was happening, the news was reported," said Shirky. This was neither a unique nor isolated incident. It has become commonplace for the people caught up in the news to provide the first accounts, images and video of events unfolding around them. Studies in participatory journalism suggest that professional journalists now share jurisdiction over the news in the sense that citizens are participating in the observation, selection, filtering, distribution and interpretation of events. This paper argues that the ability of citizens to play “an active role in the process of collecting, reporting, analysing and disseminating news and information” (Bowman and Willis 9) means we need to reassess the meaning of ‘ambient’ as applied to news and journalism. Twitter has emerged as a key medium for news and information about major events, such as during the earthquake in Chile in February 2010 (see, for example, Silverman; Dickinson). This paper discusses how social media technologies such as Twitter, which facilitate the immediate dissemination of digital fragments of news and information, are creating what I have described as “ambient journalism” (Hermida). It approaches real-time, networked digital technologies as awareness systems that offer diverse means to collect, communicate, share and display news and information in the periphery of a user's awareness. Twitter shares some similarities with other forms of communication. Like the telephone, it facilitates a real-time exchange of information. Like instant messaging, the information is sent in short bursts. But it extends the affordances of previous modes of communication by combining these features in both a one-to-many and many-to-many framework that is public, archived and searchable. Twitter allows a large number of users to communicate with each other simultaneously in real-time, based on an asymmetrical relationship between friends and followers. The messages form social streams of connected data that provide value both individually and in aggregate. News All Around The term ‘ambient’ has been used in journalism to describe the ubiquitous nature of news in today's society. In their 2002 study, Hargreaves and Thomas said one of the defining features of the media landscape in the UK was the easy availability of news through a host of media platforms, such as public billboards and mobile phones, and in spaces, such as trains and aircraft. “News is, in a word, ambient, like the air we breathe,” they concluded (44). The availability of news all around meant that citizens were able to maintain an awareness of what was taking place in the world as they went about their everyday activities. One of the ways news has become ambient has been through the proliferation of displays in public places carrying 24-hour news channels or showing news headlines. In her book, Ambient Television, Anna McCarthy explored how television has become pervasive by extending outside the home and dominating public spaces, from the doctor’s waiting room to the bar. 
“When we search for TV in public places, we find a dense, ambient clutter of public audio-visual apparatuses,” wrote McCarthy (13). In some ways, the proliferation of news on digital platforms has intensified the presence of ambient news. In a March 2010 Pew Internet report, Purcell et al. found that “in the digital era, news has become omnipresent. Americans access it in multiple formats on multiple platforms on myriad devices” (2). It seems that, if anything, digital technologies have increased the presence of ambient news. This approach to the term ‘ambient’ is based on a twentieth-century model of mass media. Traditional mass media, from newspapers through radio to television, are largely one-directional, impersonal one-to-many carriers of news and information (McQuail 55). The most palpable feature of the mass media is to reach the many, and this affects the relationship between the media and the audience. Consequently, the news audience does not act for itself, but is “acted upon” (McQuail 57). It is assigned the role of consumer. The public is present in news as citizens who receive information about, and interpretation of, events from professional journalists. The public as the recipient of information fits in with the concept of ambient news as “news which is free at the point of consumption, available on demand and very often available in the background to people’s lives without them even looking” (Hargreaves and Thomas 51). To suggest that members of the audience are just empty receptacles to be filled with news is an oversimplification. For example, television viewers are not solely defined in terms of spectatorship (see, for example, Ang). But audiences have, traditionally, been kept well outside the journalistic process, defined as the “selecting, writing, editing, positioning, scheduling, repeating and otherwise massaging information to become news” (Shoemaker et al. 73). This audience is cast as the receiver, with virtually no sense of agency over the news process. As a result, journalistic communication has evolved, largely, as a process of one-way, one-to-many transmission of news and information to the public. The following section explores the shift towards a more participatory media environment. News as a Social Experience The shift from an era of broadcast mass media to an era of networked digital media has fundamentally altered flows of information. Non-linear, many-to-many digital communication technologies have transferred the means of media production and dissemination into the hands of the public, and are rewriting the relationship between the audience and journalists. Where there were once limited and cost-intensive channels for the distribution of content, there are now a myriad of widely available digital channels. Henry Jenkins has written about the emergence of a participatory culture that “contrasts with older notions of passive media spectatorship. Rather than talking about media producers and consumers occupying separate roles, we might now see them as participants who interact with each other according to a new set of rules that none of us fully understands” (3). Axel Bruns has coined the term “produsage” (2) to refer to the blurred line between producers and consumers, while Jay Rosen has talked about the “people formerly known as the audience.” For some, the consequences of this shift could be “a new model of journalism, labelled participatory journalism,” (Domingo et al. 
331), raising questions about who can be described as a journalist and perhaps, even, how journalism itself is defined. The trend towards a more participatory media ecosystem was evident in the March 2010 study on news habits in the USA by Pew Internet. It highlighted that the news was becoming a social experience. “News is becoming a participatory activity, as people contribute their own stories and experiences and post their reactions to events” (Purcell et al. 40). The study found that 37% of Internet users, described by Pew as “news participators,” had actively contributed to the creation, commentary, or dissemination of news (44). This reflects how the Internet has changed the relationship between journalists and audiences from a one-way, asymmetric model of communication to a more participatory and collective system (Boczkowski; Deuze). The following sections consider how the ability of the audience to participate in the gathering, analysis and communication of news and information requires a re-examination of the concept of ambient news. A Distributed Conversation As I’ve discussed, ambient news is based on the idea of the audience as the receiver. Ambient journalism, on the other hand, takes account of how audiences are able to become part of the news process. However, this does not mean that citizens are necessarily producing journalism within the established framework of accounts and analysis through narratives, with the aim of providing accurate and objective portrayals of reality. Rather, I suggest that ambient journalism presents a multi-faceted and fragmented news experience, where citizens are producing small pieces of content that can be collectively considered as journalism. It acknowledges the audience as both a receiver and a sender. I suggest that micro-blogging social media services such as Twitter, which enable millions of people to communicate instantly, share and discuss events, are an expression of ambient journalism. Micro-blogging is a new media technology that enables and extends society's ability to communicate, enabling users to share brief bursts of information from multiple digital devices. Twitter has become one of the most popular micro-blogging platforms, with some 50 million messages sent daily by February 2010 (Twitter). Twitter enables users to communicate with each other simultaneously via short messages no longer than 140 characters, known as ‘tweets’. The micro-blogging platform shares some similarities with instant messaging. It allows for near synchronous communications from users, resulting in a continuous stream of up-to-date messages, usually in a conversational tone. Unlike instant messaging, Twitter is largely public, creating a new body of content online that can be archived, searched and retrieved. The messages can be extracted, analysed and aggregated, providing a measure of activity around a particular event or subject and, in some cases, an indication of the general sentiment about it. For example, the deluge of tweets following Michael Jackson's death in June 2009 has been described as a public and collective expression of loss that indicated “the scale of the world’s shock and sadness” (Cashmore). While tweets are atomic in nature, they are part of a distributed conversation through a social network of interconnected users. To paraphrase David Weinberger's description of the Web, tweets are “many small pieces loosely joined” (ix). 
In common with mass media audiences, users may be very widely dispersed and usually unknown to each other. Twitter provides a structure for them to act together as if in an organised way, for example through the use of hashtags – the # symbol – and keywords to signpost topics and issues. This provides a mechanism to aggregate, archive and analyse the individual tweets as a whole. Furthermore, information is not simply dependent on the content of the message. A user's profile, their social connections and the messages they resend, or retweet, provide an additional layer of information. This is called the social graph and it is implicit in social networks such as Twitter. The social graph provides a representation of an individual and their connections. Each user on Twitter has followers, who themselves have followers. Thus each tweet has a social graph attached to it, as does each message that is retweeted (forwarded to other users). Accordingly, social graphs offer a means to infer reputation and trust. Twitter as Ambient Journalism Services such as Twitter can be considered as awareness systems, defined as computer-mediated communication systems “intended to help people construct and maintain awareness of each others’ activities, context or status, even when the participants are not co-located” (Markopoulos et al., v). In such a system, the value does not lie in the individual sliver of information that may, on its own, be of limited value or validity. Rather the value lies in the combined effect of the communication. In this sense, Twitter becomes part of an ambient media system where users receive a flow of information from both established media and from each other. Both news and journalism are ambient, suggesting that “broad, asynchronous, lightweight and always-on communication systems such as Twitter are enabling citizens to maintain a mental model of news and events around them” (Hermida 5). Obviously, not everything on Twitter is an act of journalism. There are messages about almost every topic that often have little impact beyond an individual and their circle of friends, from random thoughts and observations to day-to-day minutiae. But it is undeniable that Twitter has emerged as a significant platform for people to report, comment and share news about major events, with individuals performing some of the institutionalised functions of the professional journalist. Examples where Twitter has emerged as a platform for journalism include the 2008 US presidential elections, the Mumbai attacks in November of 2008 and the January 2009 crash landing of US Airways flight 1549 (Lenhard and Fox 2). In these examples, Twitter served as a platform for first-hand, real-time reports from people caught up in the events as they unfolded, with the cell phone used as the primary reporting tool. For example, the dramatic Hudson River landing of the US Airways flight was captured by ferry passenger Janis Krum, who took a photo with a cell phone and sent it out via Twitter. One of the issues associated with services like Twitter is the speed and number of micro-bursts of data, together with the potentially high signal-to-noise ratio. For example, the number of tweets related to the disputed election result in Iran in June 2009 peaked at 221,744 in one hour, from an average flow of between 10,000 and 50,000 an hour (Parr). Hence there is a need for systems to aid in selection, organisation and interpretation to make sense of this ambient journalism. 
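A minimal sketch can illustrate how hashtags and retweet counts allow such a dispersed stream to be aggregated and weighed. The code below is purely illustrative and makes no claims about Twitter's actual interfaces; the tweet records, field names and figures are assumptions invented for the example.

# Illustrative sketch: grouping a stream of tweets by hashtag and hour,
# and using retweet counts as a rough proxy for prominence in the social graph.
# The records and field names are hypothetical, not Twitter's actual API.
from collections import Counter, defaultdict

tweets = [
    {"text": "Long queues at the polling station #iranelection", "hour": 14, "retweets": 120},
    {"text": "Power out downtown #quake", "hour": 14, "retweets": 45},
    {"text": "Results being disputed #iranelection", "hour": 15, "retweets": 300},
]

volume_per_hour = defaultdict(Counter)   # hashtag -> hour -> number of tweets
prominence = Counter()                   # hashtag -> total retweets

for tweet in tweets:
    tags = [w.lstrip("#").lower() for w in tweet["text"].split() if w.startswith("#")]
    for tag in tags:
        volume_per_hour[tag][tweet["hour"]] += 1
        prominence[tag] += tweet["retweets"]

print(dict(volume_per_hour["iranelection"]))  # tweets per hour for one topic
print(prominence.most_common(2))              # topics ranked by retweet activity

In effect, the hashtag supplies the grouping key and the retweet count a crude signal of prominence within the social graph; the systems discussed below perform the same basic operations at a vastly larger scale and with far more sophistication.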
Traditionally the journalist has been the mechanism to filter, organise and interpret this information and deliver the news in ready-made packages. Such a role was possible in an environment where access to the means of media production was limited. But the thousands of acts of journalism taking place on Twitter every day make it impossible for an individual journalist to identify the collective sum of knowledge contained in the micro-fragments, and bring meaning to the data. Rather, we should look to the literature on ambient media, where researchers talk about media systems that understand individual desires and needs, and act autonomously on their behalf (for example Lugmayr). Applied to journalism, this suggests a need for tools that can analyse, interpret and contextualise a system of collective intelligence. An example of such a service is TwitterStand, developed by a group of researchers at the University of Maryland (Sankaranarayanan et al.). The team describe TwitterStand as “an attempt to harness this emerging technology to gather and disseminate breaking news much faster than conventional news media” (51). In their paper, they describe in detail how their news processing system is able to identify and cluster news tweets in a noisy medium. They conclude that “Twitter, or most likely a successor of it, is a harbinger of a futuristic technology that is likely to capture and transmit the sum total of all human experiences of the moment” (51). While such a comment may be something of an overstatement, it indicates how emerging real-time, networked technologies are creating systems of distributed journalism. Similarly, the US Geological Survey (USGS) is investigating social media technologies as a way to quickly gather information about recent earthquakes. It has developed a system called the Twitter Earthquake Detector to gather real-time, earthquake-related messages from Twitter and filter the messages by place, time, and keyword (US Department of the Interior). By collecting and analysing the tweets, the USGS believes it can access anecdotal information from citizens about a quake much faster than if it only relied on scientific information from authoritative sources. Both of these are examples of research into the development of tools that help users negotiate and regulate the streams and information flowing through networked media. They address issues of information overload by making sense of distributed and unstructured data, finding a single concept such as news in what Sankaranarayanan et al. say is “akin to finding needles in stacks of tweets” (43). danah boyd eloquently captured the potential for such a system, writing that “those who are most enamoured with services like Twitter talk passionately about feeling as though they are living and breathing with the world around them, peripherally aware and in tune, adding content to the stream and grabbing it when appropriate.” Conclusion While this paper has focused on Twitter in its discussion of ambient journalism, it is possible that the service may be overtaken by another or several similar digital technologies. This has happened, for example, in the social networking space, with Friendster being supplanted by MySpace and more recently by Facebook. However, underlying services like Twitter are a set of characteristics often referred to by the catchall phrase, the real-time Web. As often with emerging and rapidly developing Internet trends, it can be challenging to define what the real-time Web means. 
Entrepreneur Ken Fromm has identified a set of characteristics that offer a good starting point to understand the real-time Web. He describes it as a new form of loosely organised communication that is creating a new body of public content in real-time, with a related social graph. In the context of our discussion of the term ‘ambient’, the characteristics of the real-time Web do not only extend the pervasiveness of ambient news. They also enable the former audience to become part of the news environment as it has the means to gather, select, produce and distribute news and information. Writing about changing news habits in the US, Purcell et al. conclude that “people’s relationship to news is becoming portable, personalized, and participatory” (2). Ambient news has evolved into ambient journalism, as people contribute to the creation, dissemination and discussion of news via social media services such as Twitter. To adapt Ian Hargreaves' description of ambient news in his book, Journalism: Truth or Dare?, we can say that journalism, which was once difficult and expensive to produce, today surrounds us like the air we breathe. Much of it is, literally, ambient, and being produced by professionals and citizens. The challenge going forward is helping the public negotiate and regulate this flow of awareness information, facilitating the collection, transmission and understanding of news. References Ang, Ien. Desperately Seeking the Audience. London: Routledge, 1991. Boczkowski, Pablo. J. Digitizing the News: Innovation in Online Newspapers. Cambridge: MIT Press, 2004. boyd, danah. “Streams of Content, Limited Attention.” UX Magazine 25 Feb. 2010. 27 Feb. 2010 ‹http://uxmag.com/features/streams-of-content-limited-attention›. Bowman, Shayne, and Chris Willis. We Media: How Audiences Are Shaping the Future of News and Information. The Media Center, 2003. 10 Jan. 2010 ‹http://www.hypergene.net/wemedia/weblog.php›. Bruns, Axel. Blogs, Wikipedia, Second Life, and Beyond: From Production to Produsage. New York: Peter Lang, 2008. Cashmore, Pete. “Michael Jackson Dies: Twitter Tributes Now 30% of Tweets.” Mashable 25 June 2009. 26 June 2010 ‹http://mashable.com/2009/06/25/michael-jackson-twitter/›. Department of the Interior. “U.S. Geological Survey: Twitter Earthquake Detector (TED).” 13 Jan. 2010. 12 Feb. 2010 ‹http://recovery.doi.gov/press/us-geological-survey-twitter-earthquake-detector-ted/›. Deuze, Mark. “The Web and Its Journalisms: Considering the Consequences of Different Types of Newsmedia Online.” New Media and Society 5 (2003): 203-230. Dickinson, Elizabeth. “Chile's Twitter Response.” Foreign Policy 1 March 2010. 2 March 2010 ‹http://blog.foreignpolicy.com/posts/2010/03/01/chiles_twitter_response›. Domingo, David, Thorsten Quandt, Ari Heinonen, Steve Paulussen, Jane B. Singer and Marina Vujnovic. “Participatory Journalism Practices in the Media and Beyond.” Journalism Practice 2.3 (2008): 326-342. Fromm, Ken. “The Real-Time Web: A Primer, Part 1.” ReadWriteWeb 29 Aug. 2009. 7 Dec. 2009 ‹http://www.readwriteweb.com/archives/the_real-time_web_a_primer_part_1.php›. Hargreaves, Ian. Journalism: Truth or Dare? Oxford: Oxford University Press, 2003. Hargreaves, Ian, and Thomas, James. “New News, Old News.” ITC/BSC, Oct. 2002. 5 Dec. 2009 ‹http://legacy.caerdydd.ac.uk/jomec/resources/news.pdf›. Hermida, Alfred. “Twittering the News: The Emergence of Ambient Journalism.” Journalism Practice. First published on 11 March 2010 (iFirst). 
12 March 2010 ‹http://www.informaworld.com/smpp/content~content=a919807525›. Jenkins, Henry. Convergence Culture: Where Old and New Media Collide. New York: New York University Press, 2006. Lenhard, Amanda, and Susannah Fox. “Twitter and Status Updating.” Pew Internet and American Life Project, 12 Feb. 2009. 13 Feb. 2010 ‹http://www.pewinternet.org/Reports/2009/Twitter-and-status-updating.aspx›. Lugmayr, Artur. “The Future Is ‘Ambient.’” Proceedings of SPIE Vol. 6074, 607403 Multimedia on Mobile Devices II. Vol. 6074. Eds. Reiner Creutzburg, Jarmo H. Takala, and Chang Wen Chen. San Jose: SPIE, 2006. Markopoulos, Panos, Boris De Ruyter and Wendy MacKay. Awareness Systems: Advances in Theory, Methodology and Design. Dordrecht: Springer, 2009. McCarthy, Anna. Ambient Television: Visual Culture and Public Space. Durham: Duke University Press, 2001. McQuail, Denis. McQuail’s Mass Communication Theory. London: Sage, 2000. Parr, Ben. “Mindblowing #IranElection Stats: 221,744 Tweets per Hour at Peak.” Mashable 17 June 2009. 10 August 2009 ‹http://mashable.com/2009/06/17/iranelection-crisis-numbers/›. Purcell, Kristen, Lee Rainie, Amy Mitchell, Tom Rosenstiel, and Kenny Olmstead. “Understanding the Participatory News Consumer.” Pew Internet and American Life Project, 1 March 2010. 2 March 2010 ‹http://www.pewinternet.org/Reports/2010/Online-News.aspx?r=1›. Rosen, Jay. “The People Formerly Known as the Audience.” Pressthink 27 June 2006. 8 August 2009 ‹http://journalism.nyu.edu/pubzone/weblogs/pressthink/2006/06/27/ppl_frmr.html›. Sankaranarayanan, Jagan, Hanan Samet, Benjamin E. Teitler, Michael D. Lieberman, and Jon Sperling. “TwitterStand: News in Tweets.” Proceedings of the 17th ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems (GIS '09). New York: ACM, 2009. 42-51. Shirky, Clay. “How Social Media Can Make History.” TED Talks June 2009. 2 March 2010 ‹http://www.ted.com/talks/clay_shirky_how_cellphones_twitter_facebook_can_make_history.html›. Shoemaker, Pamela J., Tim P. Vos, and Stephen D. Reese. “Journalists as Gatekeepers.” Eds. Karin Wahl-Jorgensen and Thomas Hanitzsch, Handbook of Journalism Studies. New York: Routledge, 2008. 73-87. Silverman, Matt. “Chile Earthquake Pictures: Twitter Photos Tell the Story.” Mashable 27 Feb. 2010. 2 March 2010 ‹http://mashable.com/2010/02/27/chile-earthquake-twitpics/›. Singer, Jane. “Strange Bedfellows: The Diffusion of Convergence in Four News Organisations.” Journalism Studies 5 (2004): 3-18. Twitter. “Measuring Tweets.” Twitter blog, 22 Feb. 2010. 23 Feb. 2010 ‹http://blog.twitter.com/2010/02/measuring-tweets.html›. Weinberger, David. Small Pieces, Loosely Joined. Cambridge, MA: Perseus Publishing, 2002.
APA, Harvard, Vancouver, ISO, and other styles
8

Young, Sherman. "Beyond the Flickering Screen: Re-situating e-books." M/C Journal 11, no. 4 (August 26, 2008). http://dx.doi.org/10.5204/mcj.61.

Full text
Abstract:
The move from analog distribution to online digital delivery is common in the contemporary mediascape. Music is in the midst of an ipod driven paradigm shift (Levy), television and movie delivery is being reconfigured (Johnson), and newspaper and magazines are confronting the reality of the world wide web and what it means for business models and ideas of journalism (Beecher). In the midst of this change, the book publishing industry remains defiant. While embracing digital production technologies, the vast majority of book content is still delivered in material form, printed and shipped the old-fashioned way—despite the efforts of many technology companies over the last decade. Even the latest efforts from corporate giants such as Sony and Amazon (who appear to have solved many of the technical hurdles of electronic reading devices) have had little visible impact. The idea of electronic books, or e-books, remains the domain of geeky early adopters (“Have”). The reasons for this are manifold, but, arguably, a broader uptake of e-books has not occurred because cultural change is much more difficult than technological change and book readers have yet to be persuaded to change their cultural habits. Electronic reading devices have been around for as long as there have been computers with screens, but serious attempts to replicate the portability, readability, and convenience of a printed book have only been with us for a decade or so. The late 1990s saw the release of a number of e-book devices. In quick succession, the likes of the Rocket e-Book, the SoftBook and the Franklin eBookman all failed to catch on. Despite this lack of market penetration, software companies began to explore the possibilities—Microsoft’s Reader software competed with a similar product from Adobe, some publishers became content providers, and a niche market of consumers began reading e-books on personal digital assistants (PDAs). That niche was sufficient for e-reading communities and shopfronts to appear, with a reasonable range of titles becoming available for purchase to feed demand that was very much driven by early adopters. But the e-book market was and remains small. For most people, books are still regarded as printed paper objects, purchased from a bookstore, borrowed from a library, or bought online from companies like Amazon.com. More recently, the introduction of e-ink technologies (EPDs) (DeJean), which allow for screens with far more book-like resolution and contrast, has provided the impetus for a new generation of e-book devices. In combination with an expanded range of titles (and deals with major publishing houses to include current best-sellers), there has been renewed interest in the idea of e-books. Those who have used the current generation of e-ink devices are generally positive about the experience. Except for some sluggishness in “turning” pages, the screens appear crisp, clear and are not as tiring to read as older displays. There are a number of devices that have embraced the new screen technologies (mobileread) but most attention has been paid to three devices in particular—mainly because their manufacturers have tried to create an ecosystem that provides content for their reading devices in much the same way that Apple’s itunes store provides content for ipods. The Sony Portable Reader (Sonystyle) was the first electronic ink device to be produced by a mainstream consumers electronics company. 
Sony ties the Reader to its Connect store, which allows the purchase of book titles via a computer; titles are then downloaded to the Reader in the same way that an mp3 player is loaded with music. Sony’s most prominent competition in the marketplace is Amazon’s Kindle, which does not require users to have a computer. Instead, its key feature is a constant wireless connection to Amazon’s growing library of Kindle titles. This works in conjunction with US cellphone provider Sprint to allow the purchase of books via wireless downloads wherever the Sprint network exists. The system, which Amazon calls “whispernet,” is invisible to readers and the cost is incorporated into the price of books, so Kindle users never see a bill from Sprint (“Frequently”). Both the Sony Reader and the Amazon Kindle are available only in limited markets; Kindle’s reliance on a cellphone network means that its adoption internationally is dependent on Amazon establishing a relationship with a cellphone provider in each country of release. And because both devices are linked to e-bookstores, territorial rights issues with book publishers (who trade publishing rights for particular global territories in a colonial-era mode of operation that seems to ignore the reality of global information mobility (Thompson 74–77)) contribute to the restricted availability of both the Sony and Amazon products. The other mainstream device is the iRex Iliad, which is not constrained to a particular online bookstore and thus is available internationally. Its bookstore ecosystems are local relationships—with Dymocks in Australia, Borders in the UK, and other booksellers across Europe (iRex). All three devices use EPDs and share similar specifications for the actual reading of e-books. Some might argue that the lack of a search function in the Sony and the ability to write on pages in the Iliad are quite substantive differences, but overall the devices are distinguished by their availability and the accessibility of book titles. Those who have used the devices extensively are generally positive about the experience. Amazon’s Customer Reviews are full of positive comments, and the sense from many commentators is that the systems are a viable replacement for old-fashioned printed books (Marr). Despite the good reviews—which suggest that the technology is actually now good enough to compete with printed books—the e-book devices have failed to catch on. Amazon has been hesitant to state actual sales figures, leaving it to so-called analysts to guess, with the most optimistic suggesting that only 30,000 to 50,000 have sold since launch in late 2007 (Sridharan). By comparison, a mid-list book title (in the US) would expect to sell a similar number of copies. The sales data for the Sony Portable Reader (which has been on the market for nearly two years) and the iRex Iliad are also elusive (Slocum), suggesting that they have not meaningfully changed the landscape. Tellingly, despite the new devices, the e-book industry is still tiny. Although it is growing, the latest American data show that the e-book market has wholesale revenues of around $10 million per quarter (or around $40 million per year), which is dwarfed by the $35 billion in revenues regularly earned annually in the US printed book industry ("Book"). It’s clear that despite the technological advances, e-books have yet to cross the chasm from early adopter to mainstream usage (see IPDF). 
The reason for this is complex; there are issues of marketing and distribution that need to be considered, as well as continuing arguments about screen technologies, appropriate publishing models, and digital rights management. It is beyond the scope of this article to do justice to those issues. Suffice to say, the book industry is affected by the same debates over content that plague other media industries (Vershbow). But, arguably, the key reason for the minimal market impact is straightforward—technological change is relatively easy, but cultural change is much more difficult. The current generation of e-book devices might be technically very close to being a viable replacement for print on paper (and the next generation of devices will no doubt be even better), but there are bigger cultural hurdles to be overcome. For most people, the social practice of reading books (du Gay et al 10) is inextricably tied with printed objects and a print culture that is not yet commonly associated with “technology” (perhaps because books, as machines for reading (Young 160), have become an invisible technology (Norman 246)). E. Annie Proulx’s dismissive suggestion that “nobody is going to sit down and read a novel on a twitchy little screen. Ever” (1994) is commonly echoed when book buyers consider the digital alternative. Those thoughts only scratch the surface of a deeply embedded cultural practice. The centuries since Gutenberg’s printing press and the vast social and cultural changes that followed positioned print culture as the dominant cultural mode until relatively recently (Eisenstein; Ong). The emerging electronic media forms of the twentieth century displaced that dominance with many arguing that the print age was moved aside by first radio and television and now computers and the Internet (McLuhan; Postman). Indeed, there is a subtext in that line of thought, one that situates electronic media forms (particularly screen-based ones) as the antithesis of print and book culture. Current e-book reading devices attempt to minimise the need for cultural change by trying to replicate a print culture within an e-print culture. For the most part, they are designed to appeal to book readers as a replacement for printed books. But it will take more than a perfect electronic facsimile of print on paper to persuade readers to disengage with a print culture that incorporates bookshops, bookclubs, writing in the margins, touching and smelling the pages and covers, admiring the typesetting, showing off their bookshelves, and visibly identifying with their collections. The frequently made technical arguments (about flashing screens and reading in the bath (Randolph)) do not address the broader apprehension about a cultural experience that many readers do not wish to leave behind. It is in that context that booklovers appear particularly resistant to any shift from print to a screen-based format. One only has to engage in a discussion about e-books (or lurk on an online forum where one is happening) to appreciate how deeply embedded print culture is (Hepworth)—book readers have a historical attachment to the printed object and it is this embedded cultural resistance that is the biggest barrier for e-books to overcome. 
Although e-book devices in no way resemble television, print culture is still deeply suspicious of any screen-based media, and arguments are often made that the book as a physical object is critical because “different types of media function differently, and even if the content is similar the form matters quite a lot” (Weber). Of course, many in the newspaper industry would argue that long-standing cultural habits can change very rapidly, and the migration of eyeballs from newsprint to the Internet is a cautionary tale (see Auckland). That specific format shift saw cultural change driven by increased convenience and a perception of decreased cost. For those already connected to the Internet, reading newspapers online represented zero marginal cost, and the range of online offerings dwarfed that of the local newsagency. The advantage of immediacy and multimedia elements, and the possibility of immediate feedback, appeared sufficient to drive many away from print towards online newspapers. For a similar shift in the e-book realm, there must be similar incentives for readers. At the moment, the only advantages on offer are weightlessness (which only appeals to frequent travellers) and convenience via constant access to a heavenly library of titles (Young 150). Amazon’s Kindle bookshop can be accessed 24/7 from anywhere there is Sprint network coverage (Nelson). However, even this advantage is not so clear-cut—there is a meagre range of available electronic titles compared to printed offerings. For example, Amazon claims 130,000 titles are currently available for Kindle and Sony has 50,000 for its Reader, figures that are dwarfed by Amazon’s own printed book range. Importantly, there is little apparent cost advantage to e-books. The price of electronic reading devices is significant, amounting to a few hundred dollars, to which must be added the cost of e-books. The actual cost of those titles is also not as attractive as it might be. In an age where much digital content often appears to be free, consumers demand a significant price advantage for purchasing online. Although some e-book titles are priced more affordably than their printed counterparts, the cost of many seems strangely high given the lack of a physical object to print and ship. Amazon Kindle titles might be cheaper than the print version, but the actual difference (after discounting) is not an order of magnitude, but of degree. Randy Pausch’s bestselling The Last Lecture, for instance, is available for $12.07 as a paperback or $9.99 as a Kindle edition (“Last”). For casual readers, the numbers make no sense—when the price of the reading device is included, the actual cost is prohibitive for those who only buy a few titles a year. At the moment, e-books only make sense for heavy readers for whom the additional cost of the reading device will be amortised over a large number of books in a reasonably short time. (A recent article in the Wall Street Journal suggested that the break-even point for the Kindle was the purchase of 61 books (Arends).) Unfortunately for the e-book industry, not only is that particular market relatively small, it is also the one least likely to shift from the embedded habits of print culture. Arguably, should e-books eventually offer a significant cost benefit for consumers, uptake would be more dramatic.
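To make the cost argument concrete, the break-even point can be worked through with a couple of lines of arithmetic. The sketch below is illustrative only: the device price and the per-title savings are assumptions chosen to sit near the figures quoted above (a Kindle costing a few hundred dollars, and The Last Lecture at $12.07 in print versus $9.99 electronically), not numbers taken from the Arends article itself.

```python
import math

def breakeven_titles(device_price: float, saving_per_title: float) -> int:
    """E-book purchases needed before the reading device pays for itself."""
    return math.ceil(device_price / saving_per_title)

# Assumed device price: the Kindle sold for roughly USD 359 in mid-2008.
device_price = 359.00

# At the ~$2 saving of a discounted paperback such as The Last Lecture,
# the device takes a very long time to pay for itself.
print(breakeven_titles(device_price, 12.07 - 9.99))  # ~173 titles

# Assuming a larger average saving of about $6 per title (closer to
# hardback discounts) brings the result near the 61-book figure cited
# from the Wall Street Journal.
print(breakeven_titles(device_price, 6.00))          # 60 titles
```

Either way the arithmetic falls, it supports the point in the text: only heavy readers amortise the device quickly enough for the economics to work.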
However, in his study of cellphone cultures, Gerard Goggin argues against purely fiscal motivations, suggesting that cultural change is driven by other factors—in his example, new ways of communicating, connecting, and engaging (205–211). The few market segments where electronic books have succeeded are informative. For example, the market for printed encyclopedias has essentially disappeared. Most have reinvented themselves as CD-ROMs or DVD-ROMs and are sold for a fraction of the price. Although cost is undoubtedly a factor in their market success, added features such as multimedia, searchability, and immediacy via associated websites are compelling reasons driving the purchase of electronic encyclopedias over the printed versions. The contrast with the aforementioned e-book devices is apparent with encyclopedias moving away from their historical role in print culture. Electronic encyclopedias don’t try to replicate the older print forms. Rather they represent a dramatic shift of book content into an interactive audio-visual domain. They have experimented with new formats and reconfigured content for the new media forms—the publishers in question simply left print culture behind and embraced a newly emerging computer or multimedia culture. This step into another realm of social practices also happened in the academic realm, which is now deeply embedded in computer-based delivery of research and pedagogy. Not only are scholarly journals moving online (Thompson 320–325), but so too are scholarly books. For example, at the Macquarie University Library, there has been a dramatic increase in the number of electronic books in the collection. The library purchased 895 e-books in 2005 and 68,000 in 2007. During the same period, the number of printed books purchased remained relatively stable with about 16,000 bought annually (Macquarie University Library). The reasons for the dramatic increase in e-book purchases are manifold and not primarily driven by cost considerations. Not only does the library have limited space for physical storage, but Macquarie (like most other Universities) emphasises its e-learning environment. In that context, a single e-book allows multiple, geographically dispersed, simultaneous access, which better suits the flexibility demanded of the current generation of students. Significantly, these e-books require no electronic reading device beyond a standard computer with an internet connection. Users simply search for their required reading online and read it via their web browser—the library is operating in a pedagogical culture that assumes that staff and students have ready access to the necessary resources and are happy to read large amounts of text on a screen. Again, gestures towards print culture are minimal, and the e-books in question exist in a completely different distributed electronic environment. Another interesting example is that of mobile phone novels, or “keitai” fiction, popular in Japan. These novels typically consist of a few hundred pages, each of which contains about 500 Japanese characters. They are downloaded to (and read on) cellphones for about ten dollars apiece and can sell in the millions of copies (Katayama). There are many reasons why the keitai novel has achieved such popularity compared to the e-book approaches pursued in the West. The relatively low cost of wireless data in Japan, and the ubiquity of the cellphone are probably factors. 
But the presence of keitai culture—a set of cultural practices surrounding the mobile phone—suggests that the mobile novel springs not from a print culture, but from somewhere else. Indeed, keitai novels are written (often on the phones themselves) in a manner that lends itself to the constraints of highly portable devices with small screens, and provides new modes of engagement and communication. Their editors attribute the success of keitai novels to how well they fit into the lifestyle of their target demographic, and how they act as community nodes around which readers and writers interact (Hani). Although some will instinctively suggest that long-form narratives are doomed with such an approach, it is worthwhile remembering that, a decade ago, few considered reading long articles using a web browser and the appropriate response to computer-based media was to rewrite material to suit the screen (Nielsen). However, without really noticing the change, the Web became mainstream and users began reading everything on their computers, including much longer pieces of text. Apart from the examples cited, the wider book trade has largely approached e-books by trying to replicate print culture, albeit with an electronic reading device. Until there is a significant cost and convenience benefit for readers, this approach is unlikely to be widely successful. As indicated above, those segments of the market where e-books have succeeded are those whose social practices are driven by different cultural motivations. It may well be that the full-frontal approach attempted to date is doomed to failure, and e-books would achieve more widespread adoption if the book trade took a different approach. The Amazon Kindle has not yet persuaded bookloving readers to abandon print for screen in sufficient numbers to mark a seachange. Indeed, it is unlikely that any device positioned specifically as a book replacement will succeed. Instead of seeking to make an e-book culture a replacement for print culture, effectively placing the reading of books in a silo separated from other day-to-day activities, it might be better to situate e-books within a mobility culture, as part of the burgeoning range of social activities revolving around a connected, convergent mobile device. Reading should be understood as an activity that doesn’t begin with a particular device, but is done with whatever device is at hand. In much the same way that other media producers make content available for a number of platforms, book publishers should explore the potential of the new mobile devices. Over 45 million smartphones were sold globally in the first three months of 2008 (“Gartner”)—somewhat more than the estimated shipments of e-book reading devices. As well as allowing a range of communications possibilities, these convergent devices are emerging as key elements in the new digital mediascape—one that allows users access to a broad range of media products via a single pocket-sized device. Each of those smartphones makes a perfectly adequate e-book reading device, and it might be useful to pursue a strategy that embeds book reading as one of the key possibilities of this growing mobility culture. The casual gaming market serves as an interesting example. While hardcore gamers cling to their games PCs and consoles, a burgeoning alternative games market has emerged, with a different demographic purchasing less technically challenging games for more informal gaming encounters. 
This market has slowly shifted to convergent mobile devices, exemplified by Sega’s success in selling 300,000 copies of Super Monkey Ball within 20 days of its release for Apple’s iPhone (“Super”). Casual gamers do not necessarily go on to become hardcore gamers, but they are gamers nonetheless—and today’s casual games (like the aforementioned Super Monkey Ball) are yesterday’s hardcore games of choice. It might be the same for reading. The availability of e-books on mobile platforms may not result in more people embracing longer-form literature. But it will increase the number of people actually reading, and, just as casual gaming has attracted a female demographic (Wallace 8), the instant availability of appropriate reading material might sway some of those men who appear to be reluctant readers (McEwan). Rather than focus on printed books and book-like reading devices, the industry should re-position e-books as an easily accessible content choice in a digitally converged media environment. This is more a cultural shift than a technological one—for publishers and readers alike. Situating e-books in such a way may alienate a segment of the bookloving community, but such readers are unlikely to respond to anything other than print on paper. Indeed, it may encourage a whole new demographic—unafraid of the flickering screen—to engage with the manifold attractions of “books.” References Arends, Brett. “Can Amazon’s Kindle Save You Money?” The Wall St Journal 24 June 2008. 25 June 2008 ‹http://online.wsj.com/article/SB121431458215899767.html?mod=rss_whats_news_technology>. Auckland, Steve. “The Future of Newspapers.” The Independent 13 Nov. 2008. 24 June 2008 ‹http://news.independent.co.uk/media/article1963543.ece>. Beecher, Eric. “War of Words.” The Monthly, June 2007: 22–26. 25 June 2008. “Book Industry Trends 2006 Shows Publishers’ Net Revenues at $34.59 Billion for 2005.” Book Industry Study Group. 22 May 2006 ‹http://www.bisg.org/news/press.php?pressid=35>. DeJean, David. “The Future of e-paper: The Kindle is Only the Beginning.” Computerworld 6 June 2008. 12 June 2008 ‹http://www.computerworld.com/action/article.do?command=viewArticleBasic&articleId=9091118>. du Gay, Paul, Stuart Hall, Linda Janes, Hugh Mackay, and Keith Negus. Doing Cultural Studies: The Story of the Sony Walkman. Thousand Oaks: Sage, 1997. Eisenstein, Elizabeth. The Printing Press as an Agent of Change. Cambridge: Cambridge UP, 1997. “Frequently Asked Questions about Amazon Kindle.” Amazon.com. 12 June 2008 ‹http://www.amazon.com/gp/help/customer/display.html?nodeId=200127480&#whispernet>. “Gartner Says Worldwide Smartphone Sales Grew 29 Percent in First Quarter 2008.” Gartner. 6 June 2008. 20 June 2008 ‹http://www.gartner.com/it/page.jsp?id=688116>. Goggin, Gerard. Cell Phone Cultures. London: Routledge, 2006. Hani, Yoko. “Cellphone Bards Make Bestseller Lists.” Japan Times Online Sep. 2007. 20 June 2008 ‹http://search.japantimes.co.jp/cgi-bin/fl20070923x4.html>. “Have you Changed your mind on Ebook Readers?” Slashdot. 25 June 2008 ‹http://ask.slashdot.org/article.pl?sid=08/05/08/2317250>. Hepworth, David. “The Future of Reading or the Sinclair C5.” The Word 17 June 2008. 20 June 2008 ‹http://www.wordmagazine.co.uk/content/future-reading-or-sinclair-c5>. IPDF (International Digital Publishing Forum) Industry Statistics. 24 June 2008 ‹http://www.openebook.org/doc_library/industrystats.htm>. iRex Technologies Press. 12 June 2008 ‹http://www.irextechnologies.com/about/press>. Johnson, Bobbie. 
“Vint Cerf, AKA the Godfather of the Net, Predicts the End of TV as We Know It.” The Guardian 27 Aug. 2007. 24 June 2008 ‹http://www.guardian.co.uk/technology/2007/aug/27/news.google>. Katayama, Lisa. “Big Books Hit Japan’s Tiny Phones.” Wired Jan. 2007. 24 June 2008 ‹http://www.wired.com/culture/lifestyle/news/2007/01/72329>. “The Last Lecture.” Amazon.com. 24 June 2008 ‹http://www.amazon.com/gp/product/1401323251/ref=amb_link_3359852_2?pf_rd_m=ATVPDKIKX0DER&pf_rd_s=right-1&pf_rd_r=07NDSWAK6D4HT181CNXD&pf_rd_t=101&pf_rd_p=385880801&pf_rd_i=549028>. Levy, Steven. The Perfect Thing. London: Ebury Press, 2006. Macquarie University Library Annual Report 2007. 24 June 2008 ‹http://senate.mq.edu.au/ltagenda/0308/library_report%202007.doc>. Marr, Andrew. “Curling Up with a Good EBook.” The Guardian 11 May 2007. 23 May 2007 ‹http://technology.guardian.co.uk/news/story/0,,2077278,00.html>. McEwan, Ian. “Hello, Would you Like a Free Book?” The Guardian 20 Sep. 2005. 28 June 2008 ‹http://www.guardian.co.uk/books/2005/sep/20/fiction.features11>. McLuhan, Marshall. The Gutenberg Galaxy. Toronto: U of Toronto P, 1962. Mobileread. E-book Reader Matrix, Mobileread Wiki. 30 May 2008 ‹http://wiki.mobileread.com/wiki/E-book_Reader_Matrix>. Nelson, Sara. “Warming to Kindle.” Publishers Weekly 10 Dec. 2007. 31 Jan. 2008 ‹http://www.publishersweekly.com/article/CA6510861.htm.html>. Nielsen, Jakob. “Concise, Scannable and Objective, How to Write for the Web.” 1997. 20 June 2008 ‹http://www.useit.com/papers/webwriting/writing.html>. Norman, Don. The Invisible Computer: Why Good Products Can Fail. Cambridge, MA: MIT P, 1998. Ong, Walter. Orality & Literacy: The Technologizing of the Word. New York: Methuen, 1988. Postman, Neil. Amusing Ourselves to Death. New York: Penguin, 1986. Proulx, E. Annie. “Books on Top.” The New York Times 26 May 1994. 28 June 2008 ‹http://www.nytimes.com/books/99/05/23/specials/proulx-top.html>. Randolph, Eleanor. “Reading into the Future.” The New York Times 18 June 2008. 19 June 2008 ‹http://www.nytimes.com/2008/06/18/opinion/18wed3.html?>. Slocum, Mac. “The Pitfalls of Publishing’s E-Reader Guessing Game.” O’Reilly TOC. June 2008. 24 June 2008 ‹http://toc.oreilly.com/2008/06/the-pitfalls-of-publishings-er.html>. Sridharan, Vasanth. “Goldman: Amazon Sold up to 50,000 Kindles in Q1.” Silicon Alley Insider 19 May 2008. 25 June 2008 ‹http://www.alleyinsider.com/2008/5/how_many_kindles_sold_last_quarter_>. “Super Monkey Ball iPhone's Super Sales.” Edge OnLine. 24 Aug. 2008 ‹http://www.edge-online.com/news/super-monkey-ball-iphones-super-sales>. Thompson, John B. Books in the Digital Age. London: Polity, 2005. Vershbow, Ben. “Self Destructing Books.” if:book. May 2005. 4 Oct. 2006 ‹http://www.futureofthebook.org/blog/archives/2005/05/selfdestructing_books.html>. Wallace, Margaret, and Brian Robbins. 2006 Casual Games White Paper. IGDA. 24 Aug. 2008 ‹http://www.igda.org/casual/IGDA_CasualGames_Whitepaper_2006.pdf>. Weber, Jonathan. “Why Books Resist the Rise of Novel Technologies.” The Times Online 23 May 2006. 25 June 2008 ‹http://entertainment.timesonline.co.uk/tol/arts_and_entertainment/books/article724510.ece>. Young, Sherman. The Book is Dead, Long Live the Book. Sydney: UNSW P, 2007.
APA, Harvard, Vancouver, ISO, and other styles
9

Hollier, Scott, Katie M. Ellis, and Mike Kent. "User-Generated Captions: From Hackers, to the Disability Digerati, to Fansubbers." M/C Journal 20, no. 3 (June 21, 2017). http://dx.doi.org/10.5204/mcj.1259.

Full text
Abstract:
Writing in the American Annals of the Deaf in 1931, Emil S. Ladner Jr., a Deaf high school student, predicted the invention of words on screen to facilitate access to “talkies”. He anticipated: “Perhaps, in time, an invention will be perfected that will enable the deaf to hear the ‘talkies’, or an invention which will throw the words spoken directly under the screen as well as being spoken at the same time” (Ladner, cited in Downey, Closed Captioning). This invention would eventually come to pass and be known as captions. Captions as we know them today have become widely available because of a complex interaction between technological change, volunteer effort, and legislative activism, as well as increasing consumer demand. This began in the late 1950s, when the technology to develop captions began to emerge. Almost immediately, volunteers began captioning and distributing both film and television in the US via schools for the deaf (Downey, Constructing Closed-Captioning in the Public Interest). Then, between the 1970s and 1990s, Deaf activists and their allies began to campaign aggressively for the mandated provision of captions on television, leading eventually to the passing of the Television Decoder Circuitry Act in the US in 1990 (Ellis). This act decreed that any television with a screen greater than 13 inches must be designed and manufactured to be capable of displaying captions. The Act was replicated internationally, with countries such as Australia adopting the same requirements in their own standards for television sets imported into the country. As other papers in this issue demonstrate, this market ultimately led to the introduction of broadcasting requirements. Captions are also vital to the accessibility of videos in today’s online and streaming environment—captioning is listed as the highest priority in the definitive World Wide Web Consortium (W3C) Web Content Accessibility Guidelines (WCAG) 2.0 standard (W3C, “Web Content Accessibility Guidelines 2.0”). This recognition of the requirement for captions online is further reflected in legislation and policy, from both the US 21st Century Communications and Video Accessibility Act (CVAA) (2010) and the Australian Human Rights Commission (2014). Television today is therefore much more freely available to a range of different groups. In addition to broadcast channels, captions are also increasingly available through streaming platforms such as Netflix and other subscription video on demand providers, as well as through user-generated video sites like YouTube. However, a clear discrepancy exists between guidelines, legislation, and the industry’s approach. Guidelines such as the W3C’s are often resisted by industry until compliance is legislated. Historically, captions have been both unavailable (Ellcessor; Ellis) and inadequate (Ellis and Kent), and in many instances, they still are. For example, while the provision of captions in online video is viewed as a priority across international and domestic policies and frameworks, there is a stark contrast between the policy requirements and the practical implementation of these captions. This has led to the active development of a solution as part of an ongoing tradition of user-led development: user-generated captions. 
However, within disability studies, research around the agency of this activity—and the media savvy users facilitating it—has gone significantly underexplored.Agency of ActivityInformation sharing has featured heavily throughout visions of the Web—from Vannevar Bush’s 1945 notion of the memex (Bush), to the hacker ethic, to Zuckerberg’s motivations for creating Facebook in his dorm room in 2004 (Vogelstein)—resulting in a wide agency of activity on the Web. Running through this development of first the Internet and then the Web as a place for a variety of agents to share information has been the hackers’ ethic that sharing information is a powerful, positive good (Raymond 234), that information should be free (Levey), and that to achieve these goals will often involve working around intended information access protocols, sometimes illegally and normally anonymously. From the hacker culture comes the digerati, the elite of the digital world, web users who stand out by their contributions, success, or status in the development of digital technology. In the context of access to information for people with disabilities, we describe those who find these workarounds—providing access to information through mainstream online platforms that are not immediately apparent—as the disability digerati.An acknowledged mainstream member of the digerati, Tim Berners-Lee, inventor of the World Wide Web, articulated a vision for the Web and its role in information sharing as inclusive of everyone:Worldwide, there are more than 750 million people with disabilities. As we move towards a highly connected world, it is critical that the Web be useable by anyone, regardless of individual capabilities and disabilities … The W3C [World Wide Web Consortium] is committed to removing accessibility barriers for all people with disabilities—including the deaf, blind, physically challenged, and cognitively or visually impaired. We plan to work aggressively with government, industry, and community leaders to establish and attain Web accessibility goals. (Berners-Lee)Berners-Lee’s utopian vision of a connected world where people freely shared information online has subsequently been embraced by many key individuals and groups. His emphasis on people with disabilities, however, is somewhat unique. While maintaining a focus on accessibility, in 2006 he shifted focus to who could actually contribute to this idea of accessibility when he suggested the idea of “community captioning” to video bloggers struggling with the notion of including captions on their videos:The video blogger posts his blog—and the web community provides the captions that help others. (Berners-Lee, cited in Outlaw)Here, Berners-Lee was addressing community captioning in the context of video blogging and user-generated content. However, the concept is equally significant for professionally created videos, and media savvy users can now also offer instructions to audiences about how to access captions and subtitles. This shift—from user-generated to user access—must be situated historically in the context of an evolving Web 2.0 and changing accessibility legislation and policy.In the initial accessibility requirements of the Web, there was little mention of captioning at all, primarily due to video being difficult to stream over a dial-up connection. This was reflected in the initial WCAG 1.0 standard (W3C, “Web Content Accessibility Guidelines 1.0”) in which there was no requirement for videos to be captioned. 
WCAG 2.0 went some way in addressing this, making captioning online video an essential Level A priority (W3C, “Web Content Accessibility Guidelines 2.0”). However, there were few tools that could actually be used to create captions, and little interest from emerging online video providers in making this a priority.As a result, the possibility of user-generated captions for video content began to be explored by both developers and users. One initial captioning tool that gained popularity was MAGpie, produced by the WGBH National Center for Accessible Media (NCAM) (WGBH). While cumbersome by today’s standards, the arrival of MAGpie 2.0 in 2002 provided an affordable and professional captioning tool that allowed people to create captions for their own videos. However, at that point there was little opportunity to caption videos online, so the focus was more on captioning personal video collections offline. This changed with the launch of YouTube in 2005 and its later purchase by Google (CNET), leading to an explosion of user-generated video content online. However, while the introduction of YouTube closed captioned video support in 2006 ensured that captioned video content could be created (YouTube), the ability for users to create captions, save the output into one of the appropriate captioning file formats, upload the captions, and synchronise the captions to the video remained a difficult task.Improvements to the production and availability of user-generated captions arrived firstly through the launch of YouTube’s automated captions feature in 2009 (Google). This service meant that videos could be uploaded to YouTube and, if the user requested it, Google would caption the video within approximately 24 hours using its speech recognition software. While the introduction of this service was highly beneficial in terms of making captioning videos easier and ensuring that the timing of captions was accurate, the quality of captions ranged significantly. In essence, if the captions were not reviewed and errors not addressed, the automated captions were sometimes inaccurate to the point of hilarity (New Media Rock Stars). These inaccurate YouTube captions are colloquially described as craptions. A #nomorecraptions campaign was launched to address inaccurate YouTube captioning and call on YouTube to make improvements.The ability to create professional user-generated captions across a variety of platforms, including YouTube, arrived in 2010 with the launch of Amara Universal Subtitles (Amara). The Amara subtitle portal provides users with the opportunity to caption online videos, even if they are hosted by another service such as YouTube. The captioned file can be saved after its creation and then uploaded to the relevant video source if the user has access to the location of the video content. The arrival of Amara continues to provide ongoing benefits—it contains a professional captioning editing suite specifically catering for online video, the tool is free, and it can caption videos located on other websites. Furthermore, Amara offers the additional benefit of being able to address the issues of YouTube automated captions—users can benefit from the machine-generated captions of YouTube in relation to its timing, then download the captions for editing in Amara to fix the issues, then return the captions to the original video, saving a significant amount of time when captioning large amounts of video content. 
In recent years, Google has also endeavoured to simplify the captioning process for YouTube users by including its own captioning editors, but these tools are generally considered inferior to Amara (Media Access Australia). Similarly, several crowdsourced caption services such as Viki (https://www.viki.com/community) have emerged to facilitate the provision of captions. However, most of these crowdsourcing captioning services cannot tap into commercial products, instead offering a service for people who have a video they have created, or one that already exists on YouTube. While Viki was highlighted as a useful platform in protests regarding Netflix’s lack of captions in 2009, commercial entertainment providers still have a responsibility to make improvements to their captioning. As we discuss in the next section, people have resorted to extreme measures to hack Netflix to access the captions they need. While the ability for people to publish captions on user-generated content has improved significantly, there is still a notable lack of captions for professionally developed videos, movies, and television shows available online. User-Generated Netflix Captions: In recent years there has been a worldwide explosion of subscription video on demand service providers. Netflix epitomises the trend. As such, for people with disabilities, there has been significant focus on the availability of captions on these services (see Ellcessor, Ellis and Kent). Netflix, as the current leading provider of subscription video entertainment in the US, with large market shares in other countries, has been at the centre of these discussions. While Netflix offers a comprehensive range of captioned video on its service today, there are still videos that do not have captions, particularly in non-English regions. As a result, users have endeavoured to produce user-generated captions for personal use and to find workarounds to access these through the Netflix system. This has been achieved with some success. There are a number of ways in which captions or subtitles can be added to Netflix video content to improve its accessibility for individual users. An early guide in a 2011 blog post (Emil’s Celebrations) identified that, when using the Netflix player with the Silverlight plug-in, it is possible to access a hidden menu which allows a subtitle file in the DFXP format to be uploaded to Netflix for playback. However, this does not appear to provide this file to all Netflix users, and is generally referred to as a “soft upload” just for the individual user. Another method to do this, generally credited as the “easiest” way, is to find an SRT file that already exists for the video title, edit the timing to line up with Netflix, use a third-party tool to convert it to the DFXP format, and then upload it using the hidden menu that requires a specific keyboard command to access. While this may be considered uncomplicated for some, there is still a certain amount of technical knowledge required to complete this action, and it is likely to be too complex for many users. However, constant developments in technology are assisting with making access to captions an easier process. Recently, Cosmin Vasile highlighted that caption and subtitle tracks can still be uploaded, provided that the older Silverlight plug-in is used for playback instead of the new HTML5 player.
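The format conversion at the centre of this workaround—re-expressing an existing SRT subtitle file in the DFXP (TTML) format the player expects—can be illustrated with a short, hypothetical sketch. The snippet below is an assumption-laden illustration rather than a description of any actual tool: it handles only simple SRT cues and emits a generic TTML document, and the precise DFXP profile accepted by Netflix’s Silverlight player is not documented here (conversion services such as Subflicks, discussed below, exist to deal with exactly those details).

```python
# Minimal, hypothetical sketch: convert simple SRT cues to a generic
# TTML/DFXP document. Real converters handle styling, regions, and
# player-specific profile quirks that are ignored here.
import re
from xml.sax.saxutils import escape

CUE = re.compile(
    r"(\d+)\s*\n(\d{2}:\d{2}:\d{2}),(\d{3}) --> (\d{2}:\d{2}:\d{2}),(\d{3})\s*\n(.*?)(?:\n\n|\Z)",
    re.S,
)

def srt_to_dfxp(srt_text: str) -> str:
    paragraphs = []
    for _, start, start_ms, end, end_ms, text in CUE.findall(srt_text):
        begin = f"{start}.{start_ms}"        # DFXP timing uses a dot, SRT a comma
        finish = f"{end}.{end_ms}"
        content = escape(" ".join(text.strip().splitlines()))
        paragraphs.append(f'    <p begin="{begin}" end="{finish}">{content}</p>')
    body = "\n".join(paragraphs)
    return (
        '<?xml version="1.0" encoding="utf-8"?>\n'
        '<tt xmlns="http://www.w3.org/ns/ttml" xml:lang="en">\n'
        "  <body><div>\n"
        f"{body}\n"
        "  </div></body>\n"
        "</tt>\n"
    )

if __name__ == "__main__":
    sample = "1\n00:00:01,000 --> 00:00:04,000\nHello, world.\n\n"
    print(srt_to_dfxp(sample))
```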
Others add that it is technically possible to access the hidden feature in an HTML5 player, but an additional Super Netflix browser plug-in is required (Sommergirl). Further, while the procedure for uploading the file remains similar to the approach discussed earlier, there are some additional tools available online, such as Subflicks, which can provide a simple online conversion of the more common SRT file format to the DFXP format (Subflicks). However, while the ability to use a personal caption or subtitle file remains, the most common way to watch Netflix videos with alternative caption or subtitle files is through the use of the Smartflix service (Smartflix). Unlike other ad-hoc solutions, this service provides a simplified mechanism to bring alternative caption files to Netflix. The Smartflix website states that the service “automatically downloads and displays subtitles in your language for all titles using the largest online subtitles database.” This automatic download and sharing of captions online—known as fansubbing—facilitates easy access for all. For example, blog posts suggest that technology such as this creates important access opportunities for people who are deaf and hard of hearing. Nevertheless, fansubbers can be met with suspicion by copyright holders. For example, a recent case in the Netherlands ruled that fansubbers were engaging in illegal activities and were encouraging people to download pirated videos. While the fansubbers, like the hackers discussed earlier, argued they were acting in the greater good, the Dutch antipiracy association (BREIN) maintained that subtitles are mainly used by people downloading pirated media and sought to outlaw the manufacture and distribution of third-party captions (Anthony). The fansubbers took the issue to court in order to seek clarity about whether copyright holders can reserve exclusive rights to create and distribute subtitles. However, in a ruling against the fansubbers, the court agreed with BREIN that fansubbing violated copyright and incited piracy. What impact this ruling will have on the practice of user-generated captioning online, particularly around popular sites such as Netflix, is hard to predict; however, for people with disabilities who were relying on fansubbing to access content, it is of significant concern that the contention that the main users of user-generated subtitles (or captions) are engaging in illegal activities was so readily accepted. Conclusion: This article has focused on user-generated captions and the types of platforms available to create these. It has shown that this desire to provide access, to set the information free, has resulted in the disability digerati finding workarounds to allow users to upload their own captions and make content accessible. Indeed, the vision of the Internet and then the Web as a place for information sharing is evident throughout this history of user-generated captioning online, from Berners-Lee’s conception of community captioning, to Emil and Vasile’s instructions to a Netflix community of captioners, to finally a group of fansubbers who took BREIN to court and lost. Therefore, while we have conceived of the disability digerati as a conflation of the hacker and the acknowledged digital influencer, these two positions may again part ways, and the disability digerati may—like the hackers before them—be driven underground. Captioned entertainment content offers a powerful, even vital, mode of inclusion for people who are deaf or hard of hearing. 
Yet, despite Berners-Lee’s urging that everything online be made accessible to people with all sorts of disabilities, captions were not addressed in the first iteration of the WCAG, perhaps reflecting the limitations of the speed of the medium itself. This continues to be the case today—although it is no longer difficult to stream video online, and Netflix has reached global dominance, audiences who require captions still find themselves fighting for access. Thus, in this sense, user-generated captions remain an important—yet seemingly technologically and legislatively complicated—avenue for inclusion. References Anthony, Sebastian. “Fan-Made Subtitles for TV Shows and Movies Are Illegal, Court Rules.” Arstechnica UK (2017). 21 May 2017 <https://arstechnica.com/tech-policy/2017/04/fan-made-subtitles-for-tv-shows-and-movies-are-illegal/>. Amara. “Amara Makes Video Globally Accessible.” Amara (2010). 25 Apr. 2017 <https://amara.org/en/>. Berners-Lee, Tim. “World Wide Web Consortium (W3C) Launches International Web Accessibility Initiative.” Web Accessibility Initiative (WAI) (1997). 19 June 2010 <http://www.w3.org/Press/WAI-Launch.html>. Bush, Vannevar. “As We May Think.” The Atlantic (1945). 26 June 2010 <http://www.theatlantic.com/magazine/print/1969/12/as-we-may-think/3881/>. CNET. “YouTube Turns 10: The Video Site That Went Viral.” CNET (2015). 24 Apr. 2017 <https://www.cnet.com/news/youtube-turns-10-the-video-site-that-went-viral/>. Downey, Greg. Closed Captioning: Subtitling, Stenography, and the Digital Convergence of Text with Television. Baltimore: Johns Hopkins UP, 2008. ———. “Constructing Closed-Captioning in the Public Interest: From Minority Media Accessibility to Mainstream Educational Technology.” Info: The Journal of Policy, Regulation and Strategy for Telecommunications, Information and Media 9.2/3 (2007): 69–82. Ellcessor, Elizabeth. “Captions On, Off on TV, Online: Accessibility and Search Engine Optimization in Online Closed Captioning.” Television & New Media 13.4 (2012): 329–352. <http://tvn.sagepub.com/content/early/2011/10/24/1527476411425251.abstract?patientinform-links=yes&legid=sptvns;51v1>. Ellis, Katie. “Television’s Transition to the Internet: Disability Accessibility and Broadband-Based TV in Australia.” Media International Australia 153 (2014): 53–63. Ellis, Katie, and Mike Kent. “Accessible Television: The New Frontier in Disability Media Studies Brings Together Industry Innovation, Government Legislation and Online Activism.” First Monday 20 (2015). <http://firstmonday.org/ojs/index.php/fm/article/view/6170>. Emil’s Celebrations. “How to Add Subtitles to Movies Streamed in Netflix.” 16 Oct. 2011. 9 Apr. 2017 <https://emladenov.wordpress.com/2011/10/16/how-to-add-subtitles-to-movies-streamed-in-netflix/>. Google. “Automatic Captions in Youtube.” 2009. 24 Apr. 2017 <https://googleblog.blogspot.com.au/2009/11/automatic-captions-in-youtube.html>. Jaeger, Paul. “Disability and the Internet: Confronting a Digital Divide.” Disability in Society. Ed. Ronald Berger. Boulder, London: Lynne Rienner Publishers, 2012. Levey, Steven. Hackers: Heroes of the Computer Revolution. Sebastopol: O’Reilly Media, 1984. Media Access Australia. “How to Caption a Youtube Video.” 2017. 25 Apr. 2017 <https://mediaaccess.org.au/web/how-to-caption-a-youtube-video>. New Media Rock Stars. “Youtube’s 5 Worst Hilariously Catastrophic Auto Caption Fails.” 2013. 25 Apr. 2017 <http://newmediarockstars.com/2013/05/youtubes-5-worst-hilariously-catastrophic-auto-caption-fails/>. Outlaw. 
“Berners-Lee Applies Web 2.0 to Improve Accessibility.” Outlaw News (2006). 25 June 2010 <http://www.out-law.com/page-6946>.Raymond, Eric S. The New Hacker’s Dictionary. 3rd ed. Cambridge: MIT P, 1996.Smartflix. “Smartflix: Supercharge Your Netflix.” 2017. 9 Apr. 2017 <https://www.smartflix.io/>.Sommergirl. “[All] Adding Subtitles in a Different Language?” 2016. 9 Apr. 2017 <https://www.reddit.com/r/netflix/comments/32l8ob/all_adding_subtitles_in_a_different_language/>.Subflicks. “Subflicks V2.0.0.” 2017. 9 Apr. 2017 <http://subflicks.com/>.Vasile, Cosmin. “Netflix Has Just Informed Us That Its Movie Streaming Service Is Now Available in Just About Every Country That Matters Financially, Aside from China, of Course.” 2016. 9 Apr. 2017 <http://news.softpedia.com/news/how-to-add-custom-subtitles-to-netflix-498579.shtml>.Vogelstein, Fred. “The Wired Interview: Facebook’s Mark Zuckerberg.” Wired Magazine (2009). 20 Jun. 2010 <http://www.wired.com/epicenter/2009/06/mark-zuckerberg-speaks/>.W3C. “Web Content Accessibility Guidelines 1.0.” W3C Recommendation (1999). 25 Jun. 2010 <http://www.w3.org/TR/WCAG10/>.———. “Web Content Accessibility Guidelines (WCAG) 2.0.” 11 Dec. 2008. 21 Aug. 2013 <http://www.w3.org/TR/WCAG20/>.WGBH. “Magpie 2.0—Free, Do-It-Yourself Access Authoring Tool for Digital Multimedia Released by WGBH.” 2002. 25 Apr. 2017 <http://ncam.wgbh.org/about/news/pr_05072002>.YouTube. “Finally, Caption Video Playback.” 2006. 24 Apr. 2017 <http://googlevideo.blogspot.com.au/2006/09/finally-caption-playback.html>.
APA, Harvard, Vancouver, ISO, and other styles
10

Ellis, Katie, Mike Kent, and Gwyneth Peaty. "Captioned Recorded Lectures as a Mainstream Learning Tool." M/C Journal 20, no. 3 (June 21, 2017). http://dx.doi.org/10.5204/mcj.1262.

Full text
Abstract:
In Australian universities, many courses provide lecture notes as a standard learning resource; however, captions and transcripts of these lectures are not usually provided unless requested by a student through dedicated disability support officers (Worthington). As a result, to date their use has been limited. However, while the requirement for—and benefits of—captioned online lectures for students with disabilities is widely recognised, these captions or transcripts might also represent further opportunity for a personalised approach to learning for the mainstream student population (Podszebka et al.; Griffin). This article reports findings of research assessing the usefulness of captioned recorded lectures as a mainstream learning tool to determine their usefulness in enhancing inclusivity and learning outcomes for the disabled, international, and broader student population.Literature ReviewCaptions have been found to be of benefit for a number of different groups considered at-risk. These include people who are D/deaf or hard of hearing, those with other learning difficulties, and those from a non-English speaking background (NESB).For students who are D/deaf or hard of hearing, captions play a vital role in providing access to otherwise inaccessible audio content. Captions have been found to be superior to sign language interpreters, note takers, and lip reading (Stinson et al.; Maiorana-Basas and Pagliaro; Marschark et al.).The use of captions for students with a range of cognitive disabilities has also been shown to help with student comprehension of video-based instruction in a higher education context (Evmenova; Evmenova and Behrmann). This includes students with autism spectrum disorder (ASD) (Knight et al.; Reagon et al.) and students with dyslexia (Alty et al.; Beacham and Alty). While, anecdotally, captions are also seen as of benefit for students with attention deficit hyperactivity disorder (ADHD) (Kent et al.), studies have proved inconclusive (Lewis and Brown).The third group of at-risk students identified as benefiting from captioning recorded lecture content are those from a NESB. The use of captions has been shown to increase vocabulary learning (Montero Perez, Peters, Clarebout, and Desmet; Montero Perez, Van Den Noortgate, and Desmet) and to assist with comprehension of presenters with accents or rapid speech (Borgaonkar, 2013).In addition to these three main groups of at-risk students, captions have also been demonstrated to increase the learning outcomes for older students (Pachman and Ke, 2012; Schmidt and Haydu, 1992). Captions also have demonstrable benefits for the broader student cohort beyond these at-risk groups (Podszebka et al.; Griffin). For example, a recent study found that the broader student population utilised lecture captions and transcripts in order to focus, retain information, and overcome poor audio quality (Linder). However, the same study revealed that students were largely unaware about the availability of captions and transcripts, nor how to access them.MethodologyIn 2016 students in the Curtin University unit Web Communications (an introductory unit for the Internet Communications major) and its complementary first year unit, Internet and Everyday Life, along with a second year unit, Web Media, were provided with access to closed captions for their online recorded lectures. 
The latter unit was added to the study serendipitously when its lectures were required to be captioned through a request from the Curtin Disability Office during the study period. Recordings and captions were created using the existing captioning system available through Curtin’s lecture recording platform—Echo360. As well as providing a written caption of what is being said during the lectures, this system also offers a sophisticated search functionality, as well as access to a total transcript of the lecture. The students were provided access to an online training module, developed specifically for this study, to explain the use of this system. Enrolled Curtin students, both on-campus and online, Open Universities Australia (OUA) students studying through Curtin online, teaching staff, and disability officers were then invited to participate in a survey and interviews. The study sought to gain insights into students’ use of both recorded lectures and captioned video at the time of the survey, and their anticipated future usage of these services (see Kent et al.). A total of 50 students—of 539 enrolled across the different instances of the three units—completed the survey. In addition, five follow-up interviews with students, teaching staff, and disability support staff were conducted once the surveys had been completed. Staff interviewed included tutors and unit coordinators who taught and supervised units in which the lecture captions were provided. The interviews assessed the awareness, use, and perceived validity of the captions system in the context of both learning and teaching. Results: A number of different questions were asked regarding students’ demographics, their engagement with online unit materials (including recorded lectures), their awareness of Echo360’s lecture captions and its additional features, their perceived value of online captions for their studies, and the future significance of captions in a university context. Of the 50 participants in the survey, only six identified themselves as a person with a disability—almost 90 per cent did not identify as disabled. Additionally, 45 of the 50 participants identified English as their primary language. Only one student identified as a person with both a disability and coming from a NESB. Engagement with Online Unit Materials and Recorded Lectures: The survey results provide insight into the ways in which participants interact with the Echo360 lecture system. Over 90 per cent of students had accessed the recorded lectures via the Echo360 system. While this might not seem notable at first, given such materials are essential elements of the units surveyed, the level of repeated engagement seen in these results is important because it indicates the extent to which students are revising the same material multiple times—a practice that captions are designed to facilitate and assist. For instance, one lecture was recorded per week for each unit surveyed, and most respondents (70 per cent) were viewing these lectures at least once or twice a week, while 10 per cent were viewing the lectures multiple times a week. Over half of the students surveyed reported viewing the same lecture more than once. Of these participants, 19 (or 73 per cent) had viewed a lecture twice, and 23 per cent had viewed it three times or more. This illustrates that frequent revision is taking place, as students watch the same lecture repeatedly to absorb and clarify its contents. 
This frequency of repeated engagement with recorded unit materials—lectures in particular—indicates that students were making online engagement and revision a key element of their learning process.Awareness of the Echo360 Lecture Captions and Additional FeaturesHowever, while students were highly engaged with both the online learning material and the recorded lectures, there was less awareness of the availability of the captioning system—only 34 per cent of students indicated they were aware of having access to captions. The survey also asked students whether or not they had used additional features of the Echo360 captioning system such as the search function and downloadable lecture transcripts. Survey results confirm that these features were being used; however, responses indicated that only a minority of students using the captions system used these features, with 28 per cent using the search function and 33 per cent making use of the transcripts. These results can be seen as an indication that additional features were useful for revision, albeit for the minority of students who used them. A Curtin disability advisor noted in their interview that:transcripts are particularly useful in addition to captions as they allow the user to quickly skim the material rather than sit through a whole lecture. Transcripts also allow translation into other languages, highlighting text and other features that make the content more accessible.Teaching staff were positive about these features and suggested that providing transcripts saved time for tutors who are often approached to provide these to individual students:I typically receive requests for lecture transcripts at the commencement of each study period. In SP3 [during this study] I did not receive any requests.I feel that lecture transcripts would be particularly useful as this is the most common request I receive from students, especially those with disabilities.I think transcripts and keyword searching would likely be useful to many students who access lectures through recordings (or who access recordings even after attending the lecture in person).However, the one student who was interviewed preferred the keyword search feature, although they expressed interest in transcripts as well:I used the captions keyword search. I think I would like to use the lecture transcript as well but I did not use that in this unit.In summary, while not all students made use of Echo360’s additional features for captions, those who did access them did so frequently, indicating that these are potentially useful learning tools.Value of CaptionsOf the students who were aware of the captions, 63 per cent found them useful for engaging with the lecture material. According to one of the students:[captions] made a big difference to me in terms on understanding and retaining what was said in the lectures. I am not sure that many students would realise this unless they actually used the captions…I found it much easier to follow what was being said in the recorded lectures and I also found that they helped stay focussed and not become distracted from the lecture.It is notable that the improvements described above do not involve assistance with hearing or language issues, but the extent to which captions improve a more general learning experience. 
This participant identified themselves as a native English speaker with no disabilities, yet the captions still made a “big difference” in their ability to follow, understand, focus on, and retain information drawn from the lectures.However, while over 60 per cent of students who used the captions reported they found them useful, it was difficult to get more detailed feedback on precisely how and why. Only 52.6 per cent reported actually using them when accessing the lectures, and a relatively small number reported taking advantage of the search and transcripts features available through the Echo360 system. Exactly how they were being used and what role they play in student learning is therefore an area to pursue in future research, as it will assist in breaking down the benefits of captions for all learners.Teaching staff also reported the difficulty in assessing the full value of captions—one teacher interviewed explained that the impact of captions was hard to monitor quantitatively during regular teaching:it is difficult enough to track who listens to lectures at all, let alone who might be using the captions, or have found these helpful. I would like to think that not only those with hearing impairments, but also ESL students and even people who find listening to and taking in the recording difficult for other reasons, might have benefitted.Some teaching staff, however, did note positive feedback from students:one student has given me positive feedback via comments on the [discussion board].one has reported that it helps with retention and with times when speech is soft or garbled. I suspect it helps mediate my accent and pitch!While 60 per cent claiming captions were useful is a solid majority, it is notable that some participants skipped this question. As discussed above, survey answers indicate that this was because these 37 students did not think they had access to captions in their units.Future SignificanceOverall, these results indicate that while captions can provide a benefit to students’ engagement with online lecture learning material, there is a need for more direct and ongoing information sharing to ensure both students and teaching staff are fully aware of captions and how to use them. Technical issues—such as the time delay in captions being uploaded—potentially dissuade students from using this facility, so improving the speed and reliability of this tool could increase the number of learners keen to use it. All staff interviewed agreed that implementing captions for all lectures would be beneficial for everyone:any technology that can assist in making lectures more accessible is useful, particularly in OUA [online] courses.it would be a good example of Universal Design as it would make the lecture content more accessible for students with disabilities as well as students with other equity needs.YES—it benefits all students. I personally find that I understand and my attention is held more by captioned content.it certainly makes my role easier as it allows effective access to recorded lectures. Captioning allows full access as every word is accessible as opposed to note taking which is not verbatim.DiscussionThe results of this research indicate that captions—and their additional features—available through the Echo360 captions system are an aid to student learning. 
However, there are significant challenges to be addressed to make students aware of these features and their potential benefits. This study has shown that in a cohort of primarily English-speaking students without disabilities, over 60 per cent found captions a useful addition to recorded lectures. This suggests that the implementation of captions for all recorded lectures would have widespread benefits for all learners, not only those with hearing or language difficulties. However, at present, only “eligible” students who approach the disability office would be considered for this service, usually students who are D/deaf or hard of hearing. Yet it can be argued that these benefits—and challenges—could also extend to other groups that might traditionally have been seen to benefit from the use of captions, such as students with other disabilities or those from a NESB. However, again, a lack of awareness of the training module meant that this potential cohort did not benefit from this trial. In this study, none of the students who identified as having a disability or coming from a NESB indicated that they had access to the training module. Further, five of the six students with disabilities reported that they did not have access to the captions system and, similarly, only two of the five NESB students. Despite these low numbers, all the students who were part of these two groups and who did access the captions system did find it useful. It can therefore be seen that the main challenge for teaching staff is to ensure all students are aware of captions and can access them easily. One option for reducing the need for training or further instructions might be having captions always ON by default. This means students could incorporate them into their study experience without having to take direct action or, equally, could simply choose to switch them off. There are also a few potential teething issues with implementing captions universally that need to be noted, as staff expressed some concerns regarding how this might alter the teaching and learning experience. For example: “because the captioning is once-off, it means I can’t re-record the lectures where there was a failure in technology as the new versions would not be captioned” and “a bit cautious about the transcript as there may be problems with students copying that content and also with not viewing the lectures thinking the transcripts are sufficient.” Despite these concerns, the survey results and interviews support the previous findings showing that lecture captions have the potential to benefit all learners, enhancing each student’s existing capabilities. As one staff member put it: “in the main I just feel [captions are] important for accessibility and equity in general. Why should people have to request captions? Recorded lecture content should be available to all students, in whatever way they find it most easy (or possible) to engage.” Follow-up from students at the end of the study further supported this. As one student noted in an email at the start of 2017: hi all, in one of my units last semester we were lucky enough to have captions on the recorded lectures. They were immensely helpful for a number of reasons. 
I really hope they might become available to us in this unit.

Conclusions

When this project set out to investigate the ways diverse groups of students could use captioned lectures offered as a mainstream learning tool, rather than as a feature only students with disabilities could request, existing research suggested that many accommodations designed to assist students with disabilities actually benefit the entire cohort. The results of the survey confirmed this was also the case for captioning.

However, at present, lecture captions are typically utilised in Australian higher education settings—including Curtin—only as an assistive technology for students with disabilities, particularly students who are D/deaf or hard of hearing. In these circumstances, the student must undertake a lengthy process months in advance to ensure timely access to essential captioned material. Mainstreaming the provision of captions and transcripts for online lectures would greatly increase the accessibility of online learning—removing these barriers allows education providers to harness the broad potential of captioning technology. Indeed, ensuring that captions were available “by default” would benefit the educational outcomes and self-determination of the wide range of students who could benefit from this technology.

Lecture captioning and transcription is increasingly cost-effective, given technological developments in speech-to-text (automatic speech recognition) software and the increasing re-use of content across different iterations of a unit in online higher education courses. At the same time, international trends in online education—not least the rapidly evolving interpretations of international legislation—provide new incentives for educational providers to begin addressing accessibility shortcomings by incorporating captions and transcripts into the basic materials of a course.

Finally, an understanding of the diverse benefits of lecture captions and transcripts needs to be shared widely amongst higher education providers, researchers, teaching staff, and students to ensure the potential of this technology is accessed and used effectively. Understanding who can benefit from captions, and how they benefit, is a necessary step in encouraging greater use of such technology, and thereby enhancing students’ learning opportunities.

Acknowledgements

This research was funded by the Curtin University Teaching Excellence Development Fund. Natalie Latter and Kai-ti Kao provided vital research assistance. We also thank the students and staff who participated in the surveys and interviews.
APA, Harvard, Vancouver, ISO, and other styles

Dissertations / Theses on the topic "Multimedia Communications|Native American Studies"

1

Chisum, Pamela Corinne. "Becoming visible in invisible space: How the cyborg trickster is (re)inventing American Indian (NDN) identity." Thesis, Washington State University, 2013. http://pqdtopen.proquest.com/#viewpdf?dispub=3598054.

Full text
Abstract:

This dissertation investigates issues of representation surrounding the American Indian (NDN) and the mixedblood. By conflating images of the trickster as described by NDN scholars with the postmodern theories of Donna Haraway, I explore how the trickster provides a way of viewing formerly accepted boundaries of identity from new perspectives. As cyborg, the trickster is in the "system," but it is also enacting change by pushing against those boundaries, exposing them as social fictions. I create a cyborg trickster heuristic, using it as a lens to analyze both how NDNs construct online identities and the rhetorical maneuvers they undertake. Moving beyond access issues, I show how NDNs are strengthening their presence through social media. Ultimately, I argue that the cyborg trickster shows how identities (NDN and non-NDN alike) are multiply created and constantly in flux, transcending the traditional boundaries of self and other, online and offline, space and place, to allow for a new understanding of the individual in society and society within the individual.

APA, Harvard, Vancouver, ISO, and other styles
2

Ventimiglia, Andrew. "Spirited Possessions: Media and Intellectual Property in the American Spiritual Marketplace." Thesis, University of California, Davis, 2016. http://pqdtopen.proquest.com/#viewpdf?dispub=10036192.

Full text
Abstract:

This dissertation explores the role that intellectual property law plays as it influences the circulation and use of religious goods in contemporary religious organizations in the United States. The coherence of many modern spiritual communities no longer lies in a centralized institution like the church but instead in a shared dedication to sacred texts and other religious media. Thus, intellectual property law has become an effective means to administer the ephemeral beliefs and practices mediated by these texts. I explore a number of cases to demonstrate how intellectual property law can be used to maintain and adjudicate social relations rather than simply determining the proper allocation of ownership over a contested good. This project uses a number of select case studies – the legal battles of the Urantia Foundation and Worldwide Church of God, Scientology’s lawsuits against Internet Service Providers, the practice of sermon-stealing as it relates to the growth of sermon databases – to examine how religious communities ethically justify forms of ownership in religious goods and to highlight the incongruities between theories of authorship, originality and ownership within spiritual communities and those embedded in the law. I conclude that religious property owners construct innovative strategies for knowledge production and distribution as they mobilize IP to organize social and spiritual communities, care for and protect sacred goods, produce new articulations of spiritual identity, and even use the prohibitions of law to enchant material forms.

APA, Harvard, Vancouver, ISO, and other styles
3

Holder, Laura L. "Common Christs: Christ Figures, American Christianity, and Sacrifice on Cult Television." Thesis, University of Louisiana at Lafayette, 2015. http://pqdtopen.proquest.com/#viewpdf?dispub=3687688.

Full text
Abstract:

Shifts in social attitudes towards American Christianity have resulted in a changed representation of Christ figures, specifically in their representation on television. Traditional Christ figures, those who believed in unconditional love and self-sacrifice for the greater good, clung to the church view and were figures of virtue and innocence. Modern Christ figures have become what I call "Common Christs"—people who are less likely to be the image of sinless perfection and more often violent and profane saviors. These modern stand-ins are usually from blue-collar or lower class backgrounds; they are the Christs of the common man. Generally, these Common Christs are in opposition with the dogmatic authority of the Christian church. The storylines that have Common Christs as their heroes often depict the organized religion of the church as an enemy, a negative institution trying to prevent the salvation of the common man by the common man. The purpose of my dissertation is to examine Common Christs as they appear in cult television shows that embrace and make strong use of Christian mythology without being considered Christian television, specifically The X-Files, Buffy, the Vampire Slayer, and Supernatural, to show how this changed image works as evidence of what I call the development of a textual religion. Ultimately, I hope that my discussion of Common Christs and textual religion will lead into a larger discussion between the academic camps of religious studies, pop culture studies and literary criticism about the importance of cross-disciplinary focus.

APA, Harvard, Vancouver, ISO, and other styles
4

Manuelito, Brenda K. "Creating Space for an Indigenous Approach to Digital Storytelling: "Living Breath" of Survivance Within an Anishinaabe Community in Northern Michigan." Antioch University / OhioLINK, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=antioch1433004268.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Rodriguez, Carmella M. "The Journey of a Digital Story: A Healing Performance of Mino-Bimaadiziwin: The Good Life." Antioch University / OhioLINK, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=antioch1433005531.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Hampton, Darlene Rose. "Beyond resistance: Gender, performance, and fannish practice in digital culture." Thesis, University of Oregon, 2010. http://hdl.handle.net/1794/11070.

Full text
Abstract:
Although the web appears to be a welcoming space for women, online spaces--like offline spaces--are rendered female through associations with the personal/private, embodiment, or an emphasis on intimacy. As such, these spaces are marked, marginalized, and often dismissed. Using an explicitly interdisciplinary approach that combines cultural studies models with feminist theory, new media studies, and performance, Beyond Resistance uses fandom as a way to render visible the invisible ways that repressive discourses of gender are woven throughout digital culture. I examine a variety of online fan practices that use popular media to perform individual negotiations of repressive ideologies of sex and gender, such as fan-authored fiction, role-playing games, and vids and machinima--digital videos created from re-editing television and video game texts. Although many of these negotiations are potentially resistive, I demonstrate how that potential is being limited and redirected in ways that actually reinforce constructions of gender that support the dominant culture. The centrality of traditional notions of sex and gender in determining the value of fan practices, through both popular representation and critical analysis, serves as a microcosm of how discourses of gender are operating within digital culture to support the continued gendering of the public and private spheres within digital space. This gendering contributes to the ongoing subordination of women under patriarchy by marginalizing or dismissing their concerns, labor, and cultural tastes.
Committee in charge: Priscilla Ovalle, Chairperson; Kathleen Karlyn, Member; Michael Aronson, Member; Kate Mondloch, Outside Member
APA, Harvard, Vancouver, ISO, and other styles
7

Herman, Jennifer Linda. "Effecting Science in Affective Places: The Rhetoric of Science in American Science and Technology Centers." The Ohio State University, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=osu1396961008.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

King, John. "Soft Focus: The Invisible War For Reality." Bowling Green State University / OhioLINK, 2021. http://rave.ohiolink.edu/etdc/view?acc_num=bgsu1626953472815601.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Brown, Jared Clayton. "Sex and the City, Platinum Edition: How The Golden Girls Altered American Situation Comedy." Bowling Green State University / OhioLINK, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=bgsu1366060647.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Hall, Stefan. "“You’ve Seen the Movie, Now Play the Game”: Recoding the Cinematic in Digital Media and Virtual Culture." Bowling Green State University / OhioLINK, 2011. http://rave.ohiolink.edu/etdc/view?acc_num=bgsu1300365433.

Full text
APA, Harvard, Vancouver, ISO, and other styles
More sources

Book chapters on the topic "Multimedia Communications|Native American Studies"

1

Teo, Timothy, and Jan Noyes. "Teachers' Use of Information and Communications Technology (ICT)." In Encyclopedia of Multimedia Technology and Networking, Second Edition, 1359–65. IGI Global, 2009. http://dx.doi.org/10.4018/978-1-60566-014-1.ch183.

Full text
Abstract:
In the developed world, multimedia technologies, networks, and online services continue to pervade our everyday lives. Alongside the advancements in multimedia and networking technologies, it is essential for the stakeholders (e.g., business policy personnel and technology designers) to ensure that end users are adequately informed and skilled to exploit such technologies for the betterment of their lives, for example, in work and study. A large proportion of multimedia technology users come from educational institutions. Within the educational context, tools such as multimedia technologies, networks, and online services are commonly referred to as information and communications technology (ICT). Over the last two decades, research findings have provided evidence to suggest that the use of ICT has resulted in positive effects on students’ learning (Blok, Oostdam, Otter, & Overmaat, 2002; Boster, Meyer, Roberto, & Inge, 2002; Kulik, 2003). As a change agent in many educational activities, the teacher in the developed world plays a key role in ICT integration in schools (McCannon & Crews, 2000).

Research has found many factors to be influential in explaining teachers’ use of the computer, and these are commonly grouped into personal, school, and technical factors, although often factors from more than one group determine use. Personal factors relate to the teacher per se, and might include their experience, confidence, motivation, and commitment to using ICT (Bitner & Bitner, 2002; Zhao, Pugh, Sheldon, & Byers, 2002). School environment factors pertain to organizational and environmental issues, for example, the time and support given to ICT by the school administration (Conlon & Simpson, 2003; Guha, 2003; Vannatta, 2000). Finally, technical factors relate to the ICT itself, including issues with hardware, software, and peripheral devices such as keyboards, mice, printers, and scanners. This article focuses on these factors and draws comparisons between highly technologically developed countries from Europe and North America, and less developed countries from Asia.

In Europe and North America, research relating to teachers’ use of ICT tends to be older. For example, studies by Rosen and Weil (1995) and Hadley and Sheingold (1993) found that factors influencing the teacher’s use of the computer include teaching experience with ICT, on-site technology support, availability of computers, and financial support. Robertson et al. (1996) examined teachers of Grade 8 students (14-year-olds) and found their computer use to be related to organizational change, time and support from administration, perceptions of computers, and other personal and psychological factors. In the UK, Cox, Preston, and Cox (1999) used a questionnaire to collect evidence relating to teachers’ ICT experiences, expertise, and attitudes toward ICT for teaching and learning. Factors important to ICT use included the extent to which ICT was perceived to have made learning more interesting, easier, and fun for students and teachers. Other factors, such as using ICT to improve the presentation of materials, improving access to computers for personal use, and making administration more efficient, were also cited as influential. Hence, it can be seen that school and technical factors have important roles to play in affecting teachers’ use of ICT.
APA, Harvard, Vancouver, ISO, and other styles
