Academic literature on the topic 'Netscape Communicator (Computer file)'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Netscape Communicator (Computer file).'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Netscape Communicator (Computer file)"

1

Cesarini, Paul. "‘Opening’ the Xbox." M/C Journal 7, no. 3 (July 1, 2004). http://dx.doi.org/10.5204/mcj.2371.

Abstract:
“As the old technologies become automatic and invisible, we find ourselves more concerned with fighting or embracing what’s new”—Dennis Baron, From Pencils to Pixels: The Stages of Literacy Technologies. What constitutes a computer, as we have come to expect it? Are they necessarily monolithic “beige boxes”, connected to computer monitors, sitting on computer desks, located in computer rooms or computer labs? In order for a device to be considered a true computer, does it need to have a keyboard and mouse? If this were 1991 or earlier, our collective perception of what computers are and are not would largely be framed by this “beige box” model: computers are stationary, slab-like, and heavy, and their natural habitats must be in rooms specifically designated for that purpose. In 1992, when Apple introduced the first PowerBook, our perception began to change. Certainly there had been other portable computers prior to that, such as the Osborne 1, but these were more luggable than portable, weighing just slightly less than a typical sewing machine. The PowerBook and subsequent waves of laptops, personal digital assistants (PDAs), and so-called smart phones from numerous other companies have steadily forced us to rethink and redefine what a computer is and is not, how we interact with them, and the manner in which these tools might be used in the classroom. However, this reconceptualization of computers is far from over, and is in fact steadily evolving as new devices are introduced, adopted, and subsequently adapted for uses beyond their original purpose. Pat Crowe’s Book Reader project, for example, has morphed Nintendo’s GameBoy and GameBoy Advance into a viable electronic book platform, complete with images, sound, and multi-language support. (Crowe, 2003) His goal was to take this existing technology, previously framed only within the context of proprietary adolescent entertainment, and repurpose it for open, flexible uses typically associated with learning and literacy. Similar efforts are underway to repurpose Microsoft’s Xbox, perhaps the ultimate symbol of “closed” technology given Microsoft’s propensity for proprietary code, in order to make it a viable platform for Open Source Software (OSS). However, these efforts are not foregone conclusions, and are in fact typical of the ongoing battle over who controls the technology we own in our homes, and how open source solutions are often at odds with a largely proprietary world. In late 2001, Microsoft launched the Xbox with a multimillion-dollar publicity drive featuring events, commercials, live models, and statements claiming this new console gaming platform would “change video games the way MTV changed music”. (Chan, 2001) The Xbox launched with the following technical specifications: 733MHz Pentium III, 64MB RAM, 8 or 10GB internal hard disk drive, CD/DVD-ROM drive (speed unknown), Nvidia graphics processor with HDTV support, 4 USB 1.1 ports (adapter required), AC3 audio, 10/100 Ethernet port, optional 56k modem (TechTV, 2001). While current computers dwarf these specifications in virtually all areas now, for 2001 these were roughly on par with many desktop systems. The retail price at the time was $299, but steadily dropped to nearly half that with additional price cuts anticipated. Based on these features, the preponderance of “off the shelf” parts and components used, and the relatively reasonable price, numerous programmers quickly became interested in seeing if it was possible to run Linux and additional OSS on the Xbox. 
In each case, the goal has been similar: to exceed the original purpose of the Xbox and determine if and how well it might be used for basic computing tasks. If these attempts prove to be successful, the Xbox could allow institutions to dramatically increase the student-to-computer ratio in select environments, or allow individuals who could not otherwise afford a computer to instead buy an Xbox, download and install Linux, and use this new device to write, create, and innovate. This drive to literally and metaphorically “open” the Xbox comes from many directions. Such efforts include Andrew Huang’s self-published “Hacking the Xbox” book in which, under the auspices of reverse engineering, Huang analyzes the architecture of the Xbox, detailing step-by-step instructions for flashing the ROM, upgrading the hard drive and/or RAM, and generally prepping the device for use as an information appliance. Additional initiatives include Lindows CEO Michael Robertson’s $200,000 prize to encourage Linux development on the Xbox, and the Xbox Linux Project at SourceForge. What is Linux? Linux is an alternative operating system initially developed in 1991 by Linus Benedict Torvalds. Linux was based off a derivative of the MINIX operating system, which in turn was a derivative of UNIX. (Hasan 2003) Linux is currently available for Intel-based systems that would normally run versions of Windows, PowerPC-based systems that would normally run Apple’s Mac OS, and a host of other handheld, cell phone, or so-called “embedded” systems. Linux distributions are based almost exclusively on open source software, graphical user interfaces, and middleware components. While there are commercial Linux distributions available, these mainly just package the freely available operating system with bundled technical support, manuals, some exclusive or proprietary commercial applications, and related services. Anyone can still download and install numerous Linux distributions at no cost, provided they do not need technical support beyond the community/enthusiast level. Typical Linux distributions come with open source web browsers, word processors and related productivity applications (such as those found in OpenOffice.org), and related tools for accessing email, organizing schedules and contacts, etc. Certain Linux distributions are more or less designed for network administrators, system engineers, and similar “power users” somewhat distanced from our students. However, several distributions, including Lycoris, Mandrake, LindowsOS, and others, are specifically tailored as regular, desktop operating systems, with regular, everyday computer users in mind. As Linux has no draconian “product activation key” method of authentication, or digital rights management-laden features associated with installation and implementation on typical desktop and laptop systems, Linux is becoming an ideal choice both individually and institutionally. It still faces an uphill battle in terms of achieving widespread acceptance as a desktop operating system. As Finnie points out in Desktop Linux Edges Into The Mainstream: “to attract users, you need ease of installation, ease of device configuration, and intuitive, full-featured desktop user controls. It’s all coming, but slowly. With each new version, desktop Linux comes closer to entering the mainstream. 
It’s anyone’s guess as to when critical mass will be reached, but you can feel the inevitability: There’s pent-up demand for something different.” (Finnie 2003) Linux is already spreading rapidly in numerous capacities, in numerous countries. Linux has “taken hold wherever computer users desire freedom, and wherever there is demand for inexpensive software.” Reports from technology research company IDG indicate that roughly a third of computers in Central and South America run Linux. Several countries, including Mexico, Brazil, and Argentina, have all but mandated that state-owned institutions adopt open source software whenever possible to “give their people the tools and education to compete with the rest of the world.” (Hills 2001) The Goal: Less than a year after Microsoft introduced the Xbox, the Xbox Linux project formed. The Xbox Linux Project has a goal of developing and distributing Linux for the Xbox gaming console, “so that it can be used for many tasks that Microsoft don’t want you to be able to do. ...as a desktop computer, for email and browsing the web from your TV, as a (web) server” (Xbox Linux Project 2002). Since the Linux operating system is open source, meaning it can freely be tinkered with and distributed, those who opt to download and install Linux on their Xbox can do so with relatively little overhead in terms of cost or time. Additionally, Linux itself looks very “windows-like”, making for a fairly low learning curve. To help increase overall awareness of this project and assist in diffusing it, the Xbox Linux Project offers step-by-step installation instructions, with the end result being a system capable of using common peripherals such as a keyboard and mouse, scanner, printer, a “webcam and a DVD burner, connected to a VGA monitor; 100% compatible with a standard Linux PC, all PC (USB) hardware and PC software that works with Linux.” (Xbox Linux Project 2002) Such a system could have tremendous potential for technology literacy. Pairing an Xbox with Linux and OpenOffice.org, for example, would provide our students essentially the same capability any of them would expect from a regular desktop computer. They could send and receive email, communicate using instant messaging, IRC, or newsgroup clients, and browse Internet sites just as they normally would. In fact, the overall browsing experience for Linux users is substantially better than that for most Windows users. Internet Explorer, the default browser on all systems running Windows-based operating systems, lacks basic features standard in virtually all competing browsers. Native blocking of “pop-up” advertisements is still not possible in Internet Explorer without the aid of a third-party utility. Tabbed browsing, which involves the ability to easily open and sort through multiple Web pages in the same window, often with a single mouse click, is also missing from Internet Explorer. The same can be said for a robust download manager, “find as you type”, and a variety of additional features. Mozilla, Netscape, Firefox, Konqueror, and essentially all other OSS browsers for Linux have these features. Of course, most of these browsers are also available for Windows, but Internet Explorer is still considered the standard browser for the platform. If the Xbox Linux Project becomes widely diffused, our students could edit and save Microsoft Word files in OpenOffice.org’s Writer program, and do the same with PowerPoint and Excel files in similar OpenOffice.org components. 
They could access instructor comments originally created in Microsoft Word documents, and in turn could add their own comments and send the documents back to their instructors. They could even perform many functions not yet possible in Microsoft Office, including saving files in PDF or Flash format without needing Adobe’s Acrobat product or Macromedia’s Flash Studio MX. Additionally, by way of this project, the Xbox can also serve as “a Linux server for HTTP/FTP/SMB/NFS, serving data such as MP3/MPEG4/DivX, or a router, or both; without a monitor or keyboard or mouse connected.” (Xbox Linux Project 2003) In a very real sense, our students could use these inexpensive systems, previously framed only within the context of entertainment, for educational purposes typically associated with computer-mediated learning. Problems: Control and Access. The existing rhetoric of technological control surrounding current and emerging technologies appears to be stifling many of these efforts before they can even be brought to the public. This rhetoric of control is largely typified by overly-restrictive digital rights management (DRM) schemes antithetical to education, and the Digital Millennium Copyright Act (DMCA). Combined, both are currently being used as technical and legal clubs against these efforts. Microsoft, for example, has taken a dim view of any efforts to adapt the Xbox to Linux. Microsoft CEO Steve Ballmer, who has repeatedly referred to Linux as a cancer and has equated OSS with being un-American, stated, “Given the way the economic model works - and that is a subsidy followed, essentially, by fees for every piece of software sold - our license framework has to do that.” (Becker 2003) Since the Xbox is based on a subsidy model, meaning that Microsoft actually sells the hardware at a loss and instead generates revenue off software sales, Ballmer launched a series of concerted legal attacks against the Xbox Linux Project and similar efforts. In 2002, Nintendo, Sony, and Microsoft simultaneously sued Lik Sang, Inc., a Hong Kong-based company that produces programmable cartridges and “mod chips” for the PlayStation II, Xbox, and Game Cube. Nintendo states that its company alone loses over $650 million each year due to piracy of their console gaming titles, which typically originate in China, Paraguay, and Mexico. (GameIndustry.biz) Currently, many attempts to “mod” the Xbox require the use of such chips. As Lik Sang is one of the only suppliers, initial efforts to adapt the Xbox to Linux slowed considerably. Despite the fact that such chips can still be ordered and shipped here by less conventional means, it does not change the fact that the chips themselves would be illegal in the U.S. due to the anticircumvention clause in the DMCA itself, which is designed specifically to protect any DRM-wrapped content, regardless of context. The Xbox Linux Project then attempted to get Microsoft to officially sanction their efforts. They were not only rebuffed, but Microsoft then opted to hire programmers specifically to create technological countermeasures for the Xbox, to defeat additional attempts at installing OSS on it. Undeterred, the Xbox Linux Project eventually arrived at a method of installing and booting Linux without the use of mod chips, and has now taken a more defiant tone with Microsoft regarding their circumvention efforts. 
(Lettice 2002) They state that “Microsoft does not want you to use the Xbox as a Linux computer, therefore it has some anti-Linux-protection built in, but it can be circumvented easily, so that an Xbox can be used as what it is: an IBM PC.” (Xbox Linux Project 2003) Problems: Learning Curves and Usability. In spite of the difficulties imposed by the combined technological and legal attacks on this project, it has succeeded at infiltrating this closed system with OSS. It has done so beyond the mere prototype level, too, as evidenced by the Xbox Linux Project now having both complete, step-by-step instructions available for users to modify their own Xbox systems, and an alternate plan catering to those who have the interest in modifying their systems, but not the time or technical inclinations. Specifically, this option involves users mailing their Xbox systems to community volunteers within the Xbox Linux Project, and basically having these volunteers perform the necessary software preparation or actually do the full Linux installation for them, free of charge (presumably not including shipping). This particular aspect of the project, dubbed “Users Help Users”, appears to be fairly new. Yet, it already lists over sixty volunteers capable and willing to perform this service, since “Many users don’t have the possibility, expertise or hardware” to perform these modifications. Amazingly enough, in some cases these volunteers are barely out of junior high school. One such volunteer stipulates that those seeking his assistance keep in mind that he is “just 14” and that when performing these modifications he “...will not always be finished by the next day”. (Steil 2003) In addition to this interesting if somewhat unusual level of community-driven support, there are currently several Linux-based options available for the Xbox. The two that are perhaps the most developed are GentooX, which is based on the popular Gentoo Linux distribution, and Ed’s Debian, based on the Debian GNU/Linux distribution. Both Gentoo and Debian are “seasoned” distributions that have been available for some time now, though Daniel Robbins, Chief Architect of Gentoo, refers to the product as actually being a “metadistribution” of Linux, due to its high degree of adaptability and configurability. (Gentoo 2004) Specifically, Robbins asserts that Gentoo is capable of being “customized for just about any application or need. ...an ideal secure server, development workstation, professional desktop, gaming system, embedded solution or something else—whatever you need it to be.” (Robbins 2004) He further states that the whole point of Gentoo is to provide a better, more usable Linux experience than that found in many other distributions. Robbins states that: “The goal of Gentoo is to design tools and systems that allow a user to do their work as pleasantly and efficiently as possible, as they see fit. Our tools should be a joy to use, and should help the user to appreciate the richness of the Linux and free software community, and the flexibility of free software. ...Put another way, the Gentoo philosophy is to create better tools. When a tool is doing its job perfectly, you might not even be very aware of its presence, because it does not interfere and make its presence known, nor does it force you to interact with it when you don’t want it to. 
The tool serves the user rather than the user serving the tool.” (Robbins 2004) There is also a so-called “live CD” Linux distribution suitable for the Xbox, called dyne:bolic, and an in-progress release of Slackware Linux, as well. According to the Xbox Linux Project, the only difference between the standard releases of these distributions and their Xbox counterparts is that “...the install process – and naturally the bootloader, the kernel and the kernel modules – are all customized for the Xbox.” (Xbox Linux Project, 2003) Of course, even if Gentoo is as user-friendly as Robbins purports, even if the Linux kernel itself has become significantly more robust and efficient, and even if Microsoft again drops the retail price of the Xbox, is this really a feasible solution in the classroom? Does the Xbox Linux Project have an army of 14-year-olds willing to modify dozens, perhaps hundreds of these systems for use in secondary schools and higher education? Of course not. If such an institutional rollout were to be undertaken, it would require significant support from not only faculty, but Department Chairs, Deans, IT staff, and quite possibly Chief Information Officers. Disk images would need to be customized for each institution to reflect their respective needs, ranging from setting specific home pages on web browsers, to bookmarks, to custom back-up and/or disk re-imaging scripts, to network authentication. This would be no small task. Yet, the steps mentioned above are essentially no different than what would be required of any IT staff when creating a new disk image for a computer lab, be it one for a Windows-based system or a Mac OS X-based one. The primary difference would be Linux itself—nothing more, nothing less. The institutional difficulties in undertaking such an effort would likely be encountered prior to even purchasing a single Xbox, in that they would involve the same difficulties associated with any new hardware or software initiative: staffing, budget, and support. If the institution in question is either unwilling or unable to address these three factors, it would not matter if the Xbox itself was as free as Linux. An Open Future, or a Closed One? It is unclear how far the Xbox Linux Project will be allowed to go in its efforts to invade an essentially proprietary system with OSS. Unlike Sony, which has made deliberate steps to commercialize similar efforts for their PlayStation 2 console, Microsoft appears resolute in fighting OSS on the Xbox by any means necessary. They will continue to crack down on any companies selling so-called mod chips, and will continue to employ technological protections to keep the Xbox “closed”. Despite clear evidence to the contrary, in all likelihood Microsoft will continue to equate any OSS efforts directed at the Xbox with piracy-related motivations. Additionally, Microsoft’s successor to the Xbox would likely include additional anticircumvention technologies incorporated into it that could set the Xbox Linux Project back by months, years, or could stop it cold. Of course, it is difficult to say with any degree of certainty how this “Xbox 2” (perhaps a more appropriate name might be “Nextbox”) will impact this project. Regardless of how this device evolves, there can be little doubt of the value of Linux, OpenOffice.org, and other OSS to teaching and learning with technology. This value exists not only in terms of price, but in increased freedom from policies and technologies of control. 
New Linux distributions from Gentoo, Mandrake, Lycoris, Lindows, and other companies are just now starting to focus their efforts on Linux as user-friendly, easy-to-use desktop operating systems, rather than just server or “techno-geek” environments suitable for advanced programmers and computer operators. While metaphorically opening the Xbox may not be for everyone, and may not be a suitable computing solution for all, I believe we as educators must promote and encourage such efforts whenever possible. I suggest this because I believe we need to exercise our professional influence and ultimately shape the future of technology literacy, both individually as faculty and collectively as departments, colleges, or institutions. Moran and Fitzsimmons-Hunter argue this very point in Writing Teachers, Schools, Access, and Change. One of the fundamental provisions they use to define “access” asserts that there must be a willingness for teachers and students to “fight for the technologies that they need to pursue their goals for their own teaching and learning.” (Taylor/Ward 160) Regardless of whether or not this debate is grounded in the “beige boxes” of the past, or the Xboxes of the present, much is at stake. Private corporations should not be in a position to control the manner in which we use legally-purchased technologies, regardless of whether or not these technologies are then repurposed for literacy uses. I believe the exigency associated with this control, and the ongoing evolution of what is and is not a computer, dictates that we assert ourselves more actively into this discussion. We must take steps to provide our students with the best possible computer-mediated learning experience, however seemingly unorthodox the technological means might be, so that they may think critically, communicate effectively, and participate actively in society and in their future careers. About the Author: Paul Cesarini is an Assistant Professor in the Department of Visual Communication & Technology Education, Bowling Green State University, Ohio. Email: pcesari@bgnet.bgsu.edu. Works Cited: http://xbox-linux.sourceforge.net/docs/debian.php. Baron, Dennis. “From Pencils to Pixels: The Stages of Literacy Technologies.” Passions Pedagogies and 21st Century Technologies. Hawisher, Gail E., and Cynthia L. Selfe, Eds. Utah: Utah State University Press, 1999. 15-33. Becker, David. “Ballmer: Mod Chips Threaten Xbox”. News.com. 21 Oct 2002. http://news.com.com/2100-1040-962797.php. http://news.com.com/2100-1040-978957.html?tag=nl. http://archive.infoworld.com/articles/hn/xml/02/08/13/020813hnchina.xml. http://www.neoseeker.com/news/story/1062/. http://www.bookreader.co.uk. Finnie, Scott. “Desktop Linux Edges Into The Mainstream”. TechWeb. 8 Apr 2003. http://www.techweb.com/tech/software/20030408_software. http://www.theregister.co.uk/content/archive/29439.html. http://gentoox.shallax.com/. http://ragib.hypermart.net/linux/. http://www.itworld.com/Comp/2362/LWD010424latinlinux/pfindex.html. http://www.xbox-linux.sourceforge.net. http://www.theregister.co.uk/content/archive/27487.html. http://www.theregister.co.uk/content/archive/26078.html. http://www.us.playstation.com/peripherals.aspx?id=SCPH-97047. http://www.techtv.com/extendedplay/reviews/story/0,24330,3356862,00.html. http://www.wired.com/news/business/0,1367,61984,00.html. http://www.gentoo.org/main/en/about.xml. http://www.gentoo.org/main/en/philosophy.xml. http://techupdate.zdnet.com/techupdate/stories/main/0,14179,2869075,00.html. 
http://xbox-linux.sourceforge.net/docs/usershelpusers.html. http://www.cnn.com/2002/TECH/fun.games/12/16/gamers.liksang/. Citation reference for this article: MLA Style: Cesarini, Paul. “‘Opening’ the Xbox.” M/C: A Journal of Media and Culture 7.3 (2004). <http://www.media-culture.org.au/0406/08_Cesarini.php>. APA Style: Cesarini, P. (2004, July 1). ‘Opening’ the Xbox. M/C: A Journal of Media and Culture, 7(3). <http://www.media-culture.org.au/0406/08_Cesarini.php>.
2

Downes, Daniel M. "The Medium Vanishes?" M/C Journal 3, no. 1 (March 1, 2000). http://dx.doi.org/10.5204/mcj.1829.

Abstract:
Introduction The recent AOL/Time-Warner merger invites us to re-think the relationships amongst content producers, distributors, and audiences. Worth an estimated $300 billion (US), the largest Internet transaction of all time, the deal is 45 times larger than the AOL/Netscape merger of November 1998 (Ledbetter). Additionally, the Time Warner/EMI merger, which followed hard on the heels of the AOL/Time-Warner deal and is itself worth $28 billion (US), created the largest content rights organisation in the music industry. The joining of the Internet giant (AOL) with what was already the world's largest media corporation (Time-Warner-EMI) has inspired some exuberant reactions. An Infoworld column proclaimed: The AOL/Time-Warner merger signals the demise of traditional media companies and the ascendancy of 'new economy' media companies that will force any industry hesitant to adopt a complete electronic-commerce strategy to rethink and put itself on Internet time. (Saap & Schwarrtz) This comment identifies the distribution channel as the dominant component of the "new economy" media. But this might not really be much of an innovation. Indeed, the assumption of all industry observers is that Time-Warner will provide broadband distribution (through its extensive cable holdings) as well as proprietary content for AOL. It is also expected that Time-Warner will adopt AOL's strategy of seeking sponsorship for development projects as well as for content. However, both of these phenomena -- merger and sponsorship -- are at least as old as radio. It seems that the Internet is merely repeating an old industrial strategy. Nonetheless, one important difference distinguishes the Internet from earlier media: its characterisation of the audience. Internet companies such as AOL and Microsoft tend towards a simple and simplistic media- centred view of the audience as market. I will show, however, that as the Internet assumes more of the traditional mass media functions, it will be forced to adopt a more sophisticated notion of the mass audience. Indeed, the Internet is currently the site in which audience definitions borrowed from broadcasting are encountering and merging with definitions borrowed from marketing. The Internet apparently lends itself to both models. As a result, definitions of what the Internet does or is, and of how we should understand the audience, are suitably confused and opaque. And the behaviour of big Internet players, such as AOL and MSN, perfectly reflects this confusion as they seem to careen between a view of the Internet as the new television and a contrasting view of the Internet as the new shopping mall. Meanwhile, Internet users move in ways that most observers fail to capture. For example, Baran and Davis characterise mass communication as a process involving (1) an organized sender, (2) engaged in the distribution of messages, (3) directed toward a large audience. They argue that broadcasting fits this model whereas a LISTSERV does not because, even though the LISTSERV may have very many subscribers, its content is filtered through a single person or Webmaster. But why is the Webmaster suddenly more determining than a network programmer or magazine editor? The distinction seems to grow out of the Internet's technological characteristics: it is an interactive pipeline, therefore its use necessarily excludes the possibility of "broadcasting" which in turn causes us to reject "traditional" notions of the audience. 
However, if a media organisation were to establish an AOL discussion group in order to promote Warner TV shows, for example, would not the resulting communication suddenly fall under the definition as set out by Baran and Davis? It was precisely the confusion around such definitions that caused the CRTC (Canada's broadcasting and telecommunications regulator) to hold hearings in 1999 to determine what kind of medium the Internet is. Unlike traditional broadcasting, Internet communication does indeed include the possibility of interactivity and niche communities. In this sense, it is closer to narrowcasting than to broadcasting even while maintaining the possibility of broadcasting. Hence, the nature of the audience using the Internet quickly becomes muddy. While such muddiness might have led us to sharpen our definitions of the audience, it seems instead to have led many to focus on the medium itself. For example, Morris & Ogan define the Internet as a mass medium because it addresses a mass audience mediated through technology (Morris & Ogan 39). They divide producers and audiences on the Internet into four groups: One-to-one asynchronous communication (e-mail); Many-to-many asynchronous communication (Usenet and News Groups); One-to-one, one-to-few, and one-to-many synchronous communication (topic groups, construction of an object, role-playing games, IRC chats, chat rooms); Asynchronous communication (searches, many-to-one, one-to-one, one to- many, source-receiver relations (Morris & Ogan 42-3) Thus, some Internet communication qualifies as mass communication while some does not. However, the focus remains firmly anchored on either the sender or the medium because the receiver --the audience -- is apparently too slippery to define. When definitions do address the content distributed over the Net, they make a distinction between passive reception and interactive participation. As the World Wide Web makes pre-packaged content the norm, the Internet increasingly resembles a traditional mass medium. Timothy Roscoe argues that the main focus of the World Wide Web is not the production of content (and, hence, the fulfilment of the Internet's democratic potential) but rather the presentation of already produced material: "the dominant activity in relation to the Web is not producing your own content but surfing for content" (Rosco 680). He concludes that if the emphasis is on viewing material, the Internet will become a medium similar to television. Within media studies, several models of the audience compete for dominance in the "new media" economy. Denis McQuail recalls how historically, the electronic media furthered the view of the audience as a "public". The audience was an aggregate of common interests. With broadcasting, the electronic audience was delocalised and socially decomposed (McQuail, Mass 212). According to McQuail, it was not a great step to move from understanding the audience as a dispersed "public" to thinking about the audience as itself a market, both for products and as a commodity to be sold to advertisers. McQuail defines this conception of the audience as an "aggregate of potential customers with a known social- economic profile at which a medium or message is directed" (McQuail, Mass 221). Oddly though, in light of the emancipatory claims made for the Internet, this is precisely the dominant view of the audience in the "new media economy". Media Audience as Market How does the marketing model characterise the relationship between audience and producer? 
According to McQuail, the marketing model links sender and receiver in a cash transaction between producer and consumer rather than in a communicative relationship between equal interlocutors. Such a model ignores the relationships amongst consumers. Indeed, neither the effectiveness of the communication nor the quality of the communicative experience matters. This model, explicitly calculating and implicitly manipulative, is characteristically a "view from the media" (McQuail, Audience 9). Some scholars, when discussing new media, no longer even refer to audiences. They speak of users or consumers (Pavick & Dennis). The logic of the marketing model lies in the changing revenue base for media industries. Advertising-supported media revenues have been dropping since the early 1990s while user-supported media such as cable, satellite, online services, and pay-per-view, have been steadily growing (Pavlik & Dennis 19). In the Internet-based media landscape, the audience is a revenue stream and a source of consumer information. As Bill Gates says, it is all about "eyeballs". In keeping with this view, AOL hopes to attract consumers with its "one-stop shopping and billing". And Internet providers such as MSN do not even consider their subscribers as "audiences". Instead, they work from a consumer model derived from the computer software industry: individuals make purchases without the seller providing content or thematising the likely use of the software. The analogy extends well beyond the transactional moment. The common practice of prototyping products and beta-testing software requires the participation of potential customers in the product development cycle not as a potential audience sharing meanings but as recalcitrant individuals able to uncover bugs. Hence, media companies like MTV now use the Internet as a source of sophisticated demographic research. Recently, MTV Asia established a Website as a marketing tool to collect preferences and audience profiles (Slater 50). The MTV audience is now part of the product development cycle. Another method for getting information involves the "cookie" file that automatically provides a Website with information about the user who logs on to a site (Pavick & Dennis). Simultaneously, though, both Microsoft and AOL have consciously shifted from user-subscription revenues to advertising in an effort to make online services more like television (Gomery; Darlin). For example, AOL has long tried to produce content through its own studios to generate sufficiently heavy traffic on its Internet service in order to garner profitable advertising fees (Young). However, AOL and Microsoft have had little success in providing content (Krantz; Manes). In fact, faced with the AOL/Time-Warner merger, Microsoft declared that it was in the software rather than the content business (Trott). In short, they are caught between a broadcasting model and a consumer model and their behaviour is characteristically erratic. Similarly, media companies such as Time-Warner have failed to establish their own portals. Indeed, Time-Warner even abandoned attempts to create large Websites to compete with other Internet services when it shut down its Pathfinder site (Egan). Instead it refocussed its Websites so as to blur the line between pitching products and covering them (Reid; Lyons). 
Since one strategy for gaining large audiences is the creation of portals -- large Websites that keep surfers within the confines of a single company's site by providing content -- this is the logic behind the AOL/Time-Warner merger, though both companies have clearly been unsuccessful at precisely such attempts. AOL seems to hope that Time-Warner will act as its content specialist, providing the type of compelling material that will make users want to use AOL, whereas Time-Warner seems to hope that AOL will become its privileged pipeline to the hearts and minds of untold millions. Neither has a coherent view of the audience, how it behaves, or how it should behave. Consequently, their efforts have a distinctly "unmanaged" and slightly inexplicable air to them, as though everyone were simultaneously hopeful and clueless. While one might argue that the stage is set to capitalise on the audience as commodity, there are indications that the success of such an approach is far from guaranteed. First, the AOL/Time-Warner/EMI transaction, merely by existing, has sparked conflicts over proprietary rights. For example, the Recording Industry Association of America, representing Sony, Universal, BMG, Warner and EMI, recently launched a $6.8 billion lawsuit against MP3.com -- an AOL subsidiary -- for alleged copyright violations. Specifically, MP3.com is being sued for selling digitized music over the Internet without paying royalties to the record companies (Anderson). A similar lawsuit has recently been launched over the issue of re-broadcasting television programs over the Internet. The major US networks have joined together against Canadian Internet company iCravetv for the unlawful distribution of content. Both the iCravetv and the MP3.com cases show how dominant media players can marshal their forces to protect proprietary rights in both content and distribution. Since software and media industries have failed to recreate the Internet in the image of traditional broadcasting, the merger of the dominant players in each industry makes sense. However, their simultaneous failure to secure proprietary rights reflects both the competitive nature of the "new media economy" and the weakness of the marketing view of the audience. Media Audience as Public It is often said that communication produces social cohesion. From such cohesion communities emerge on which political or social orders can be constructed. The power of social cohesion and attachment to group symbols can even create a sense of belonging to a "people" or nation (Deutsch). Sociologist Daniel Bell described how the mass media helped create an American culture simply by addressing a large enough audience. He suggested that on the evening of 7 March 1955, when one out of every two Americans could see Mary Martin as Peter Pan on television, a kind of social revolution occurred and a new American public was born. "It was the first time in history that a single individual was seen and heard at the same time by such a broad public" (Bell, quoted in Mattelart 72). One could easily substitute the 1953 World Series or the birth of little Ricky on I Love Lucy. The desire to document such a process recurs with the Internet. Internet communities are based on the assumption that a common experience "creates" group cohesion (Rheingold; Jones). However, as a mass medium, the Internet has yet to find its originary moment, that event to which all could credibly point as the birth of something genuine and meaningful. 
A recent contender was the appearance of Paul McCartney at the refurbished Cavern Club in Liverpool. On Tuesday, 14 December 1999, McCartney played to a packed club of 300 fans, while another 150,000 watched on an outdoor screen nearby. MSN arranged to broadcast the concert live over the Internet. It advertised an anticipated global audience of 500 million. Unfortunately, there was such heavy Internet traffic that the system was unable to accommodate more than 3 million people. Servers in the United Kingdom were so congested that many could only watch the choppy video stream via an American link. The concert raises a number of questions about "virtual" events. We can draw several conclusions about measuring Internet audiences. While 3 million is a sizeable audience for a 20 minute transmission, by advertising a potential audience of 500 million, MSN showed remarkably poor judgment of its inherent appeal. The Internet is the first medium that allows access to unprocessed material or information about events to be delivered to an audience with neither the time constraints of broadcast media nor the space limitations of the traditional press. This is often cited as one of the characteristics that sets the Internet apart from other media. This feeds the idea of the Internet audience as a participatory, democratic public. For example, it is often claimed that the Internet can foster democratic participation by providing voters with uninterpreted information about candidates and issues (Selnow). However, as James Curran argues, the very process of distributing uninterrupted, unfiltered information, at least in the case of traditional mass media, represents an abdication of a central democratic function -- that of watchdog to power (Curran). In the end, publics are created and maintained through active and continuous participation on the part of communicators and audiences. The Internet holds together potentially conflicting communicative relationships within the same technological medium (Merrill & Ogan). Viewing the audience as co-participant in a communicative relationship makes more sense than simply focussing on the Internet audience as either an aggregate of consumers or a passively constructed symbolic public. Audience as Relationship Many scholars have shifted attention from the producer to the audience as an active participant in the communication process (Ang; McQuail, Audience). Virginia Nightingale goes further to describe the audience as part of a communicative relationship. Nightingale identifies four factors in the relationship between audiences and producers that emphasize their co-dependency. The audience and producer are engaged in a symbiotic relationship in which consumption and use are necessary but not sufficient explanations of audience relations. The notion of the audience invokes, at least potentially, a greater range of activities than simply use or consumption. Further, the audience actively, if not always consciously, enters relationships with content producers and the institutions that govern the creation, distribution and exhibition of content (Nightingale 149-50). Others have demonstrated how this relationship between audiences and producers is no longer the one-sided affair characterised by the marketing model or the model of the audience as public. A global culture is emerging based on critical viewing skills. 
Kavoori calls this a reflexive mode born of an increasing familiarity with the narrative conventions of news and an awareness of the institutional imperatives of media industries (Kavoori). Given the sophistication of the emergent global audience, a theory that reduces new media audiences to a set of consumer preferences or behaviours will inevitably prove inadequate, just as it has for understanding audience behaviour in old media. Similarly, by ignoring those elements of audience behaviour that will be easily transported to the Web, we run the risk of idealising the Internet as a medium that will create an illusory, pre-technological public. Conclusion There is an understandable confusion between the two models of the audience that appear in the examples above. The "new economy" will have to come to terms with sophisticated audiences. Contrary to IBM's claim that they want to "get to know all about you", Internet users do not seem particularly interested in becoming a perpetual source of market information. The fragmented, autonomous audience resists attempts to lock it into proprietary relationships. Internet hypesters talk about creating publics and argue that the Internet recreates the intimacy of community as a corrective to the atomisation and alienation characteristic of mass society. This faith in the power of a medium to create social cohesion recalls the view of the television audience as a public constructed by the common experience of watching an important event. However, MSN's McCartney concert indicates that creating a public from spectacle is not a simple process. In fact, what the Internet media conglomerates seem to want more than anything is to create consumer bases. Audiences exist for pleasure and are driven by the desire to be entertained. As Internet media institutions are established, the cynical view of the audience as a source of consumer behaviour and preferences will inevitably give way, to some extent, to a view of the audience as participant in communication. Audiences will be seen, as they have been by other media, as groups whose attention must be courted and rewarded. Who knows, maybe the AOL/Time-Warner merger might, indeed, signal the new medium's coming of age. References Anderson, Lessley. "To Beam or Not to Beam. MP3.com Is Being Sued by the Major Record Labels. Does the Digital Download Site Stand a Chance?" Industry Standard 31 Jan. 2000. <http://www.thestandard.com>. Ang, Ien. Watching Dallas: Soap Opera and the Melodramatic Imagination. London: Methuen, 1985. Baran, Stanley, and Dennis Davis. Mass Communication Theory: Foundations, Ferment, and Future. 2nd ed. Belmont, Calif.: Wadsworth, 2000. Curran, James. "Mass Media and Democracy Revisited." Mass Media and Society. Eds. James Curran and Michael Gurevitch. New York: Hodder Headline Group, 1996. Darlin, Damon. "He Wants Your Eyeballs." Forbes 159 (16 June 1997): 114-6. Egan, Jack. "Pathfinder, Rest in Peace: Time-Warner Pulls the Plug on Site." US News and World Report 126.18 (10 May 1999): 50. Gomery, Douglas. "Making the Web Look like Television (American Online and Microsoft)." American Journalism Review 19 (March 1997): 46. Jones, Steve, ed. CyberSociety: Computer-Mediated Communication and Community. Thousand Oaks: Sage, 1995. Kavoori, Anandam P. "Discursive Texts, Reflexive Audiences: Global Trends in Television News Texts and Audience Reception." Journal of Broadcasting and Electronic Media 43.3 (Summer 1999): 386-98. Krantz, Michael. "Is MSN on the Block?" Time 150 (20 Oct. 1997): 82. 
Ledbetter, James. "AOL-Time-Warner Make It Big." Industry Standard 11 Jan. 2000. <http://www.thestandard.com>. Lyons, Daniel. "Desparate.com (Media Companies Losing Millions on the Web Turn to Electronic Commerce)." Forbes 163.6 (22 March 1999): 50-1. Manes, Stephen. "The New MSN as Prehistoric TV." New York Times 4 Feb. 1997: C6. McQuail, Denis. Audience Analysis. Thousand Oaks, Calif.: Sage, 1997. ---. Mass Communication Theory. 2nd ed. London: Sage, 1987. Mattelart, Armand. Mapping World Communication: War, Progress, Culture. Trans. Susan Emanuel and James A. Cohen. Minneapolis: U of Minnesota P, 1994. Morris, Merrill, and Christine Ogan. "The Internet as Mass Medium." Journal of Communications 46 (Winter 1996): 39-50. Nightingale, Virginia. Studying Audience: The Shock of the Real. London: Routledge, 1996. Pavlik, John V., and Everette E. Dennis. New Media Technology: Cultural and Commercial Perspectives. 2nd ed. Boston: Allyn and Bacon, 1998. Reid, Calvin. "Time-Warner Seeks Electronic Synergy, Profits on the Web (Pathfinder Site)." Publisher's Weekly 242 (4 Dec. 1995): 12. Rheingold, Howard. Virtual Community: Homesteading on the Electronic Frontier. New York: Harper, 1993. Roscoe, Timothy. "The Construction of the World Wide Web Audience." Media, Culture and Society 21.5 (1999): 673-84. Saap, Geneva, and Ephraim Schwarrtz. "AOL-Time-Warner Deal to Impact Commerce, Content, and Access Markets." Infoworld 11 January 2000. <http://infoworld.com/articles/ic/xml/00/01/11/000111icimpact.xml>. Slater, Joanna. "Cool Customers: Music Channels Hope New Web Sites Tap into Teen Spirit." Far Eastern Economic Review 162.9 (4 March 1999): 50. Trott, Bob. "Microsoft Views AOL-Time-Warner as Confirmation of Its Own Strategy." Infoworld 11 Jan. 2000. <http://infoworld.com/articles/pi/xml/00/01/11/000111pimsaoltw.xml>. Yan, Catherine. "A Major Studio Called AOL?" Business Week 1 Dec. 1997: 1773-4. Citation reference for this article MLA style: Daniel M. Downes. "The Medium Vanishes? The Resurrection of the Mass Audience in the New Media Economy." M/C: A Journal of Media and Culture 3.1 (2000). [your date of access] <http://www.uq.edu.au/mc/0003/mass.php>. Chicago style: Daniel M. Downes, "The Medium Vanishes? The Resurrection of the Mass Audience in the New Media Economy," M/C: A Journal of Media and Culture 3, no. 1 (2000), <http://www.uq.edu.au/mc/0003/mass.php> ([your date of access]). APA style: Daniel M. Downes. (2000) The Medium Vanishes? The Resurrection of the Mass Audience in the New Media Economy. M/C: A Journal of Media and Culture 3(1). <http://www.uq.edu.au/mc/0003/mass.php> ([your date of access]).
3

McCormack, Paul. "Remembering the Week after Next." M/C Journal 1, no. 2 (August 1, 1998). http://dx.doi.org/10.5204/mcj.1709.

Abstract:
"It's a poor sort of memory that only works backwards," the Queen remarked. "What sort of things do you remember best?" Alice ventured to ask. "Oh, things that happened in the week after next," the Queen replied in a careless tone. Lewis Carroll, Through the Looking Glass. It would seem, odd as the notion may appear at first glance, that memory can in fact be thought of as working in two directions: both backwards and forwards. Take, for example, the commonly enough expressed sentiment that one should avail of every opportunity that life presents to "learn from experience". Isn't to do so, in fact, a projection into the future of the 'memory' that has been gained in the past, and stored in the present? Isn't the implication that, with a little careful observation of last week, one can begin to 'remember' what happened in the week after next? Consider now the development of a revolutionary new communications technology. Who are its pioneers? Experience has taught that there are at least three categories of person (or organisation) which are to be found wherever tomorrow is being actualised. They are the visionaries, the enthusiasts, and the entrepreneurs -- though some may argue that this last category would better be called economic opportunists. Of course, these are not, and shouldn't be thought of as, completely distinct categories, separated by impermeable barriers; one can be in all three as easily as not. The early days of Radio fit this model. As an infant technology it was fostered by visionaries like Marconi, enthusiasts like the many around the world who cobbled together their own home-made transmitters and receivers, and entrepreneur/opportunists like Frank Conrad of Westinghouse, whose 8XK transmitted periodically during the first world war to test equipment made by the company for the American military (Mishkind). The emergence of interactive networked computing, and ultimately of the Internet, fits this model too. There were early visionaries like Douglas Engelbart, and the MIT professor J.C.R. Licklider, who were among the first to see a potential for more than simply large scale number crunching in the fledgling electronic computing industry (Rheingold 65-89). Enthusiasts include the North Carolina students who created Usenet, and the Chicago hobbyists who "triggered the worldwide BBS movement because they wanted to transfer files from one PC to another without driving across town" (Rheingold 67). As for entrepreneur/opportunists, well, organisations like Netscape, Yahoo!, and Amazon.com leap to mind. When revolutionary development is underway the potential for change is seen to be boundless. Radio was quickly recognised as a means to cross vast distances, and difficult terrain. It became a lifeline to ships in distress, bridging the dreadful isolation of the unforgiving oceans. It was put to use as a public service: the US Agriculture Department's broadcasting of weather reports as early as 1912 being some of the earliest radio broadcasts in that country (White). Similarly, Westinghouse's 8XK, along with many other fledgling stations, broadcast the results of the US presidential election on the night of November 2nd, 1920 (Mishkind). The "wireless" telegraph helped to join that huge nation together, and having done so, went on to inform and entertain it with news, concerts, lectures and the like. 
The democratising potential of the new medium and its easily disseminated information was soon recognised and debated: "Will Radio Make People the Government" demanded a 1924 headline in Radio Broadcast, an early industry magazine (Lappin). All this inevitably gave rise to questions of control; for a free medium could also be seen, depending on one's point of view, as a dangerous, anarchic medium. Perhaps those who pay for it should control it; but who is to pay for it, and with what? For a long time there was no clear vision anywhere of how the medium could be made to turn a dollar. In England a tax on the sale of radio hardware was introduced to fund the newly formed, and government owned, British Broadcasting Corporation. Such a model was rejected in the US, however, where large corporations -- among them AT&T, Westinghouse and General Electric -- gradually gained the upper hand. The system they put in place at first involved the leasing of airtime on large networks to commercial 'sponsors', which subsequently grew into direct on-air advertising. It won't have escaped the notice of many, I'm sure, that much of this could just as easily be about the Internet in the 1990s as Radio in the 1920s. And this is where memory comes into play. Certainly there are many, and profound, differences between the two media. The very nature of the Internet may seem to many to be just too decentralised, too anarchic, to ever be effectively harnessed -- or hijacked if you prefer -- by commercial interests. But it was, at one time, also impossible to see how Radio could ever show a profit. And sure, commercial Radio isn't the only kind of Radio out there. Radio National in Australia, for example, is a publicly funded network that does many of the good things a relatively uncoerced technology can do; but is this aspect of the medium central or marginalised, and which do we want it to be? Robert Mc Chesney considers that "to answer the question of whither the Internet, one need only determine where the greatest profits are to be found". This is a fairly bleak view but it may well be true. To find out for yourself where the Internet is likely to go, exercise the memory of the past, and you might remember the future. References Carroll, Lewis. Through the Looking Glass and What Alice Found There: The Annotated Alice. Ed. Martin Gardner. Harmondsworth: Penguin, 1965. 166-345. Lappin, Todd. "Deja Vu All Over Again." Wired. 11 Aug. 1998 <http://www.wired.com/wired/3.05/features/dejavu.php>. McChesney, Robert. "The Internet and US Communication Policy-Making in Historical and Critical Perspective." Journal of Computer Mediated Communication 1.4 (1995). 30 May 1998 <http://www.ascusc.org/jcmc/vol1/issue4/mcchesney.php>. Mishkind, Barry. "Who's On First?" 20 Aug. 1995. 11 Aug. 1998 <http://www.oldradio.com/archives/general/first.php>. Radio Museum. 11 Aug. 1998 <http://home.luna.nl/~arjan-muil/radio/museum.php>. Rheingold, Howard. The Virtual Community: Surfing the Internet. London: Minerva, 1995. Surfing the Aether. 11 Aug. 1998 <http://www.northwinds.net/bchris/index.htm>. White, Thomas H. "United States Early Radio History." 25 Jul. 1998. 11 Aug. 1998 <http://www.ipass.net/~whitetho/index.php>. Citation reference for this article MLA style: Paul Mc Cormack. "Remembering the Week after Next." M/C: A Journal of Media and Culture 1.2 (1998). [your date of access] <http://www.uq.edu.au/mc/9808/week.php>. Chicago style: Paul Mc Cormack, "Remembering the Week after Next," M/C: A Journal of Media and Culture 1, no. 
2 (1998), <http://www.uq.edu.au/mc/9808/week.php> ([your date of access]). APA style: Paul Mc Cormack. (1998) Remembering the week after next. M/C: A Journal of Media and Culture 1(2). <http://www.uq.edu.au/mc/9808/week.php> ([your date of access]).
4

Green, Lelia, and Carmen Guinery. "Harry Potter and the Fan Fiction Phenomenon." M/C Journal 7, no. 5 (November 1, 2004). http://dx.doi.org/10.5204/mcj.2442.

Abstract:
The Harry Potter (HP) Fan Fiction (FF) phenomenon offers an opportunity to explore the nature of fame and the work of fans (including the second author, a participant observer) in creating and circulating cultural products within fan communities. Matt Hills comments (xi) that “fandom is not simply a ‘thing’ that can be picked over analytically. It is also always performative; by which I mean that it is an identity which is (dis-)claimed, and which performs cultural work”. This paper explores the cultural work of fandom in relation to FF and fame. The global HP phenomenon – in which FF lists are a small part – has made creator J K Rowling richer than the Queen of England, according to the 2003 ‘Sunday Times Rich List’. The books (five so far) and the films (three) continue to accelerate the growth in Rowling’s fortune, which quadrupled from 2001-3: an incredible success for an author unknown before the publication of Harry Potter and the Philosopher’s Stone in 1997. Even the on-screen HP lead actor, Daniel Radcliffe, is now Britain’s second wealthiest teenager (after England’s Prince Harry). There are other globally successful books, such as the Lord of the Rings trilogy, and the Narnia collection, but neither of these series has experienced the momentum of the HP rise to fame. (See Endnote for an indication of the scale of fan involvement with HP FF, compared with Lord of the Rings.) Contemporary ‘Fame’ has been critically defined in relation to the western mass media’s requirement for ‘entertaining’ content, and the production and circulation of celebrity as opposed to ‘hard news’(Turner, Bonner and Marshall). The current perception is that an army of publicists and spin doctors are usually necessary, but not sufficient, to create and nurture global fame. Yet the HP phenomenon started out with no greater publicity investment than that garnered by any other promising first novelist: and given the status of HP as children’s publishing, it was probably less hyped than equivalent adult-audience publications. So are there particular characteristics of HP and his creator that predisposed the series and its author to become famous? And how does the fame status relate to fans’ incorporation of these cultural materials into their lives? Accepting that it is no more possible to predict the future fame of an author or (fictional) character than it is to predict the future financial success of a book, film or album, there is a range of features of the HP phenomenon that, in hindsight, helped accelerate the fame momentum, creating what has become in hindsight an unparalleled global media property. J K Rowling’s personal story – in the hands of her publicity machine – itself constituted a magical myth: the struggling single mother writing away (in longhand) in a Scottish café, snatching odd moments to construct the first book while her infant daughter slept. (Comparatively little attention was paid by the marketers to the author’s professional training and status as a teacher, or to Rowling’s own admission that the first book, and the outline for the series, took five years to write.) Rowling’s name itself, with no self-evident gender attribution, was also indicative of ambiguity and mystery. The back-story to HP, therefore, became one of a quintessentially romantic endeavour – the struggle to write against the odds. Publicity relating to the ‘starving in a garret’ background is not sufficient to explain the HP/Rowling grip on the popular imagination, however. 
Instead it is arguable that the growth of HP fame and fandom is directly related to the growth of the Internet and to the middle class readers’ Internet access. If the production of celebrity is a major project of the conventional mass media, the HP phenomenon is a harbinger of the hyper-fame that can be generated through the combined efforts of the mass media and online fan communities. The implication of this – evident in new online viral marketing techniques (Kirby), is that publicists need to pique cyber-interest as well as work with the mass media in the construction of celebrity. As the cheer-leaders for online viral marketing make the argument, the technique “provides the missing link between the [bottom-up] word-of-mouth approach and the top-down, advertainment approach”. Which is not to say that the initial HP success was a function of online viral marketing: rather, the marketers learned their trade by analysing the magnifier impact that the online fan communities had upon the exponential growth of the HP phenomenon. This cyber-impact is based both on enhanced connectivity – the bottom-up, word-of-mouth dynamic, and on the individual’s need to assume an identity (albeit fluid) to participate effectively in online community. Critiquing the notion that the computer is an identity machine, Streeter focuses upon (649) “identities that people have brought to computers from the culture at large”. He does not deal in any depth with FF, but suggests (651) that “what the Internet is and will come to be, then, is partly a matter of who we expect to be when we sit down to use it”. What happens when fans sit down to use the Internet, and is there a particular reason why the Internet should be of importance to the rise and rise of HP fame? From the point of view of one of us, HP was born at more or less the same time as she was. Eleven years old in the first book, published in 1997, Potter’s putative birth year might be set in 1986 – in line with many of the original HP readership, and the publisher’s target market. At the point that this cohort was first spellbound by Potter, 1998-9, they were also on the brink of discovering the Internet. In Australia and many western nations, over half of (two-parent) families with school-aged children were online by the end of 2000 (ABS). Potter would notionally have been 14: his fans a little younger but well primed for the ‘teeny-bopper’ years. Arguably, the only thing more famous than HP for that age-group, at that time, was the Internet itself. As knowledge of the Internet grew stories about it constituted both news and entertainment and circulated widely in the mass media: the uncertainty concerning new media, and their impact upon existing social structures, has – over time – precipitated a succession of moral panics … Established commercial media are not noted for their generosity to competitors, and it is unsurprising that many of the moral panics circulating about pornography on the Net, Internet stalking, Web addiction, hate sites etc are promulgated in the older media. (Green xxvii) Although the mass media may have successfully scared the impressionable, the Internet was not solely constructed as a site of moral panic. Prior to the general pervasiveness of the Internet in domestic space, P. 
David Marshall discusses multiple constructions of the computer – seen by parents as an educational tool which could help future-proof their children; but which their children were more likely to conceptualise as a games machine, or (this was the greater fear) use for hacking. As the computer was to become a battleground between education, entertainment and power, so too the Internet was poised to be colonised by teenagers for a variety of purposes their parents would have preferred to prevent: chat, pornography, game-playing (among others). Fan communities thrive on the power of the individual fan to project themselves and their fan identity as part of an ongoing conversation. Further, in constructing the reasons behind what has happened in the HP narrative, and in speculating what is to come, fans are presenting themselves as identities with whom others might agree (positive affirmation) or disagree (offering the chance for engagement through exchange). The genuinely insightful fans, who apparently predict the plots before they’re published, may even be credited in their communities with inspiring J K Rowling’s muse. (The FF mythology is that J K Rowling dare not look at the FF sites in case she finds herself influenced.) Nancy Baym, commenting on a soap opera fan Usenet group (Usenet was an early 1990s precursor to discussion groups), notes that: The viewers’ relationship with characters, the viewers’ understanding of socioemotional experience, and soap opera’s narrative structure, in which moments of maximal suspense are always followed by temporal gaps, work together to ensure that fans will use the gaps during and between shows to discuss with one another possible outcomes and possible interpretations of what has been seen. (143) In HP terms, The Philosopher’s Stone constructed a fan knowledge that J K Rowling’s project entailed at least seven books (one for each year at Hogwarts School), and this offered plentiful opportunities to speculate upon the future direction and evolution of the HP characters. With each speculation, each posting, the individual fan can refine and extend their identity as a member of the FF community. The temporal gaps between the books and the films – coupled with the expanding possibilities of Internet communication – mean that fans can feel both creative and connected while circulating the cultural materials derived from their engagement with the HP ‘canon’. Canon is used to describe the HP oeuvre as approved by Rowling, her publishers, and her copyright assignees (for example, Warner Bros). In contrast, ‘fanon’ is the name used by fans to refer to the body of work that results from their creative/subversive interactions with the core texts, such as “slash” (homo-erotic/romance) fiction. Differentiation between the two terms acknowledges the likelihood that J K Rowling or her assignees might not approve of fanon. The constructed identities of fans who deal solely with canon differ significantly from those who are engaged in fanon. The implicit (romantic) or explicit (full-action descriptions) sexualisation of HP FF is part of a complex identity play on behalf of both the writers and readers of FF. 
Further, given that the online communities are often nurtured and enriched by offline face to face exchanges with other participants, what an individual is prepared to read or not to read, or write or not write, says as much about that person’s public persona as does another’s overt consumption of pornography; or diet of art house films, in contrast to someone else’s enthusiasm for Friends. Hearn, Mandeville and Anthony argue that a “central assertion of postmodern views of consumption is that social identity can be interpreted as a function of consumption” (106), and few would disagree with them: herein lies the power of the brand. Noting that consumer culture centrally focuses upon harnessing ‘the desire to desire’, Streeter’s work (654, on the opening up of Internet connectivity) suggests a continuum from ‘desire provoked’; through anticipation, ‘excitement based on what people imagined would happen’; to a sense of ‘possibility’. All this was made more tantalising in terms of the ‘unpredictability’ of how cyberspace would eventually resolve itself (657). Thus a progression is posited from desire through to the thrill of comparing future possibilities with eventual outcomes. These forces clearly influence the HP FF phenomenon, where a section of HP fans have become impatient with the pace of the ‘official’/canon HP text. J K Rowling’s writing has slowed down to the point that Harry’s initial readership has overtaken him by several years. He’s about to enter his sixth year (of seven) at secondary school – his erstwhile-contemporaries have already left school or are about to graduate to University. HP is yet to have ‘a relationship’: his fans are engaged in some well-informed speculation as to a range of sexual possibilities which would likely take J K Rowling some light years from her marketers’ core readership. So the story is progressing more slowly than many fans would choose and with less spice than many would like (from the evidence of the web, at least). As indicated in the Endnote, the productivity of the fans, as they ‘fill in the gaps’ while waiting for the official narrative to resume, is prodigious. It may be that as the fans outstrip HP in their own social and emotional development they find his reactions in later books increasingly unbelievable, and/or out of character with the HP they felt they knew. Thus they develop an alternative ‘Harry’ in fanon. Some FF authors identify in advance which books they accept as canon, and which they have decided to ignore. For example, popular FF author Midnight Blue gives the setting of her evolving FF The Mirror of Maybe as “after Harry Potter and the Goblet of Fire and as an alternative to the events detailed in Harry Potter and the Order of the Phoenix, [this] is a Slash story involving Harry Potter and Severus Snape”. Some fans, tired of waiting for Rowling to get Harry grown up, ‘are doin’ it for themselves’. Alternatively, it may be that as they get older the first groups of HP fans are unwilling to relinquish their investment in the HP phenomenon, but are equally unwilling to align themselves uncritically with the anodyne story of the canon. Harry Potter, as Warner Bros licensed him, may be OK for pre-teens, but less cool for the older adolescent. The range of identities that can be constructed using the many online HP FF genres, however, permits wide scope for FF members to identify with dissident constructions of the HP narrative and helps to add to the momentum with which his fame increases. 
Latterly there is evidence that custodians of canon may be making subtle overtures to creators of fanon. Here, the viral marketers have a particular challenge – to embrace the huge market represented by fanon, while not disturbing those whose HP fandom is based upon the purity of canon. Some elements of fanon feel their discourses have been recognised within the evolving approved narrative . This sense within the fan community – that the holders of the canon have complimented them through an intertextual reference – is much prized and builds the momentum of the fame engagement (as has been demonstrated by Watson, with respect to the band ‘phish’). Specifically, Harry/Draco slash fans have delighted in the hint of a blown kiss from Draco Malfoy to Harry (as Draco sends Harry an origami bird/graffiti message in a Defence against the Dark Arts Class in Harry Potter and the Prisoner of Azkaban) as an acknowledgement of their cultural contribution to the development of the HP phenomenon. Streeter credits Raymond’s essay ‘The Cathedral and the Bazaar’ as offering a model for the incorporation of voluntary labour into the marketplace. Although Streeter’s example concerns the Open Source movement, derived from hacker culture, it has parallels with the prodigious creativity (and productivity) of the HP FF communities. Discussing the decision by Netscape to throw open the source code of its software in 1998, allowing those who use it to modify and improve it, Streeter comments that (659) “the core trope is to portray Linux-style software development like a bazaar, a real-life competitive marketplace”. The bazaar features a world of competing, yet complementary, small traders each displaying their skills and their wares for evaluation in terms of the product on offer. In contrast, “Microsoft-style software production is portrayed as hierarchical and centralised – and thus inefficient – like a cathedral”. Raymond identifies “ego satisfaction and reputation among other [peers]” as a specific socio-emotional benefit for volunteer participants (in Open Source development), going on to note: “Voluntary cultures that work this way are not actually uncommon [… for example] science fiction fandom, which unlike hackerdom has long explicitly recognized ‘egoboo’ (ego-boosting, or the enhancement of one’s reputation among other fans) as the basic drive behind volunteer activity”. This may also be a prime mover for FF engagement. Where fans have outgrown the anodyne canon they get added value through using the raw materials of the HP stories to construct fanon: establishing and building individual identities and communities through HP consumption practices in parallel with, but different from, those deemed acceptable for younger, more innocent, fans. The fame implicit in HP fandom is not only that of HP, the HP lead actor Daniel Radcliffe and HP’s creator J K Rowling; for some fans the famed ‘state or quality of being widely honoured and acclaimed’ can be realised through their participation in online fan culture – fans become famous and recognised within their own community for the quality of their work and the generosity of their sharing with others. The cultural capital circulated on the FF sites is both canon and fanon, a matter of some anxiety for the corporations that typically buy into and foster these mega-media products. As Jim Ward, Vice-President of Marketing for Lucasfilm comments about Star Wars fans (cited in Murray 11): “We love our fans. We want them to have fun. 
But if in fact someone is using our characters to create a story unto itself, that’s not in the spirit of what we think fandom is about. Fandom is about celebrating the story the way it is.” Slash fans would beg to differ, and for many FF readers and writers, the joy of engagement, and a significant engine for the growth of HP fame, is partly located in the creativity offered for readers and writers to fill in the gaps. Endnote HP FF ranges from posts on general FF sites (such as fanfiction.net >> books, where HP has 147,067 stories [on 4,490 pages of hotlinks] posted, compared with its nearest ‘rival’ Lord of the Rings: with 33,189 FF stories). General FF sites exclude adult content, much of which is corralled into 18+ FF sites, such as Restrictedsection.org, set up when core material was expelled from general sites. As an example of one adult site, the Potter Slash Archive is selective (unlike fanfiction.net, for example), which means that only stories liked by the site team are displayed. Authors submitting work are asked to abide by a list of ‘compulsory parameters’, but ‘warnings’ fall under the category of ‘optional parameters’: “Please put a warning if your story contains content that may be offensive to some authors [sic], such as m/m sex, graphic sex or violence, violent sex, character death, major angst, BDSM, non-con (rape) etc”. Adult-content FF readers/writers embrace a range of unexpected genres – such as Twincest (incest within either of the two sets of twin characters in HP) and Weasleycest (incest within the Weasley clan) – in addition to mainstream romance/homo-erotica pairings, such as that between Harry Potter and Draco Malfoy. (NB: within the time frame 16 August – 4 October, Harry Potter FF writers had posted an additional 9,196 stories on the fanfiction.net site alone.) References ABS. 8147.0 Use of the Internet by Householders, Australia. 2001. <http://www.abs.gov.au/ausstats/abs@.nsf/e8ae5488b598839cca25682000131612/ae8e67619446db22ca2568a9001393f8!OpenDocument>. Baym, Nancy. “The Emergence of Community in Computer-Mediated Communication.” CyberSociety: Computer-Mediated Communication and Community. Ed. S. Jones. Thousand Oaks, CA: Sage, 1995. 138-63. Blue, Midnight. “The Mirror of Maybe.” <http://www.greyblue.net/MidnightBlue/Mirror/default.htm>. Coates, Laura. “Muggle Kids Battle for Domain Name Rights.” Irish Computer. <http://www.irishcomputer.com/domaingame2.html>. Fanfiction.net. “Category: Books.” <http://www.fanfiction.net/cat/202/>. Green, Lelia. Technoculture: From Alphabet to Cybersex. Sydney: Allen & Unwin. Hearn, Greg, Tom Mandeville and David Anthony. The Communication Superhighway: Social and Economic Change in the Digital Age. Sydney: Allen & Unwin, 1997. Hills, Matt. Fan Cultures. London: Routledge, 2002. Houghton Mifflin. “Potlatch.” Encyclopedia of North American Indians. <http://college.hmco.com/history/readerscomp/naind/html/na_030900_potlatch.htm>. Kirby, Justin. “Brand Papers: Getting the Bug.” Brand Strategy July-August 2004. <http://www.dmc.co.uk/pdf/BrandStrategy07-0804.pdf>. Marshall, P. David. “Technophobia: Video Games, Computer Hacks and Cybernetics.” Media International Australia 85 (Nov. 1997): 70-8. Murray, Simone. “Celebrating the Story the Way It Is: Cultural Studies, Corporate Media and the Contested Utility of Fandom.” Continuum 18.1 (2004): 7-25. Raymond, Eric S. The Cathedral and the Bazaar. 2000. <http://www.catb.org/~esr/writings/cathedral-bazaar/cathedral-bazaar/ar01s11.html>. Streeter, Thomas. 
“The Romantic Self and the Politics of Internet Commercialization.” Cultural Studies 17.5 (2003): 648-68. Turner, Graeme, Frances Bonner, and P. David Marshall. Fame Games: The Production of Celebrity in Australia. Melbourne: Cambridge UP. Watson, Nessim. “Why We Argue about Virtual Community: A Case Study of the Phish.net Fan Community.” Virtual Culture: Identity and Communication in Cybersociety. Ed. Steven G. Jones. London: Sage, 1997. 102-32. Citation reference for this article MLA Style Green, Lelia, and Carmen Guinery. "Harry Potter and the Fan Fiction Phenomenon." M/C Journal 7.5 (2004). [your date of access] <http://journal.media-culture.org.au/0411/14-green.php>. APA Style Green, L., and C. Guinery. (Nov. 2004) "Harry Potter and the Fan Fiction Phenomenon," M/C Journal, 7(5). Retrieved [your date of access] from <http://journal.media-culture.org.au/0411/14-green.php>.
APA, Harvard, Vancouver, ISO, and other styles
5

Bruns, Axel. "The Fiction of Copyright." M/C Journal 2, no. 1 (February 1, 1999). http://dx.doi.org/10.5204/mcj.1737.

Full text
Abstract:
It is the same spectacle all over the Western world: whenever delegates gather to discuss the development and consequences of new media technologies, a handful of people among them will stand out from the crowd, and somehow seem not quite to fit in with the remaining assortment of techno-evangelists, Internet ethnographers, multimedia project leaders, and online culture critics. At some point in the proceedings, they'll get to the podium and give a talk on their ideas for the future of copyright protection and intellectual property (IP) rights in the information age; when they are finished, the reactions of the audience typically range from mild "what was that all about?" amusement to sheer "they haven't got a clue" disbelief. Spare a thought for copyright lawyers; they're valiantly fighting a losing battle. Ever since the digitalisation and networking of our interpersonal and mass media made information transmission and duplication effortless and instantaneous, they've been trying to come up with ways to uphold and enforce concepts of copyright which are fundamentally linked to information as bound to physical objects (artifacts, books, CDs, etc.), as Barlow has demonstrated so clearly in "Selling Wine without Bottles". He writes that "copyright worked well because, Gutenberg notwithstanding, it was hard to make a book. ... Books had material surfaces to which one could attach copyright notices, publisher's marques, and price tags". If you could control the physical media which were used to transmit information (paper, books, audio and video tapes, as well as radio and TV sets, or access to cable systems), you could control who made copies when and where, and at what price. This only worked as long as the technology to make copies was similarly scarce, though: as soon as most people learnt to write, or as faxes and photocopiers became cheaper, the only real copyright protection books had was the effort that would have to be spent to copy them. With technology continuously advancing (perhaps even at an accelerating pace), copyright is soon becoming a legal fiction that is losing its link to reality. Indeed, we are now at a point where we have the opportunity -- the necessity, even -- to shift the fictional paradigm, to replace the industrial-age fiction of protective individual copyright with an information-age fiction of widespread intellectual cooperation. As it becomes ever easier to bypass and ignore copyright rules, and as copyright thus becomes ever more illusionary, this new fiction will correspondingly come ever closer to being realised. To Protect and to ... Lose Today, the lawyers' (and their corporate employers') favourite weapons in their fight against electronic copyright piracy are increasingly elaborate protection mechanisms -- hidden electronic signatures to mark intellectual property, electronic keys to unlock copyrighted products only for legitimate users (and sometimes only for a fixed amount of time or after certain licence payments), encryption of sensitive information, or of entire products to prevent electronic duplication. While the encryption of information exchanges between individuals has been proven to be a useful deterrent against all but the most determined of hackers, it's interesting to note that practically no electronic copyright protection mechanism of mass market products has ever been seen to work. 
However good and elaborate the protection efforts, it seems that as long as there is a sufficient number of interested consumers unwilling to pay for legitimate access, copy protections will be cracked eventually: the rampant software piracy is the best example. On the other hand, where copy protections become too elaborate and cumbersome, they end up killing the product they are meant to protect: this is currently happening in the case of some of the pay-per-view or limited-plays protection schemes forced upon the U.S. market for Digital Versatile Discs (DVDs). The eventual failure of such mechanisms isn't a particularly recent observation, even. When broadcast radio was first introduced in Australia in 1923, it was proposed that programme content should be protected (and stations financed) by fixing radio receivers to a particular station's frequency -- by buying such a 'sealed set' receiver you would in effect subscribe to a station and acquire the right to receive the content it provided. Never known as uninventive, those Australians who this overprotectiveness didn't completely put off buying a receiver (radio was far from being a proven mass medium at the time, after all) did of course soon break the seal, and learnt to adjust the frequency to try out different stations -- or they built their own radios from scratch. The 'sealed set' scheme was abandoned after only nine months. Even with the development of copy protection schemes since the 1920s, a full (or at least sufficiently comprehensive) protection of intellectual property seems as unattainable a fiction as it was then. Protection and copying technology are never far apart in development anyway, but even more fundamentally, the protected products are eventually meant to be used, after all. No matter how elaborately protected a CD, a video, or a computer programme is, it will still have to be converted into sound waves, image information, or executable code, and at that level copying will still remain possible. In the absence of workable copy protection, however, copies will be made in large amounts -- even more so since information is now being spread and multiplied around the globe virtually at the speed of light. Against this tide of copies, any attempts to use legislation to at least force the payment of royalties from illegitimate users are also becoming increasingly futile. While there may be a few highly publicised court cases, the multitude of small transgressions will remain unanswered. This in turn undermines the equality before the law that is a basic human right: increasingly, the few that are punished will be able to argue that, if "everybody does it", to single them out is highly unfair. At the same time, corporate efforts to uphold the law may be counterproductive: as Barlow writes, "against the swift tide of custom, the Software Publishers' current practice of hanging a few visible scapegoats is so obviously capricious as to only further diminish respect for the law". Quite simply, their legal costs may not be justified by the results anymore. Abandoning Copyright Law If copyright has become a fiction, however -- one that is still, despite all evidence, posited as reality by the legal system --, and if the makeup of today's electronic media, particularly the Internet, allow that fiction to be widely ignored and circumvented in daily practice -- despite all corporate legal efforts --, how is this disparity between law and reality to be solved? 
Barlow offers a clear answer: "whenever there is such profound divergence between the law and social practice, it is not society that adapts". He goes on to state that it may well be that when the current system of intellectual property law has collapsed, as seems inevitable, that no new legal structure will arise in its place. But something will happen. After all, people do business. When a currency becomes meaningless, business is done in barter. When societies develop outside the law, they develop their own unwritten codes, practices, and ethical systems. While technology may undo law, technology offers methods for restoring creative rights. When William Gibson invented the term 'cyberspace', he described it as a "consensual hallucination" (67). As the removal of copyright to the realm of the fictional has been driven largely by the Internet and its 'freedom of information' ethics, perhaps it is apt to speak of a new approach to intellectual property (or, with Barlow, to 'creative rights') as one of consensual, collaborative use of such property. This approach is far from being fully realised yet, and must so for now remain fiction, too, but it is no mere utopian vision -- in various places, attempts are made to put into place consensual schemes of dealing with intellectual property. They also represent a move from IP hoarding to IP use. Raymond speaks of the schemes competing here as the 'cathedral' and the 'bazaar' system. In the cathedral system, knowledge is tightly controlled, and only the finished product, "carefully crafted by individual wizards or small bands of mages working in splendid isolation" (1), is ever released. This corresponds to traditional copyright approaches, where company secrets are hoarded and locked away (sometimes only in order to keep competitors from using them), and breaches punished severely. The bazaar system, on the other hand, includes the entire community of producers and users early on in the creative process, up to the point of removing the producer/user dichotomy altogether: "no quiet, reverent cathedral-building here -- rather, ... a great babbling bazaar of differing agendas and approaches ... out of which a coherent and stable system could seemingly emerge only by a succession of miracles", as Raymond admits (1). The Linux 'Miracle' Raymond writes about one such bazaar-system project which provides impressive proof that the approach can work, however: the highly acclaimed Unix-based operating system Linux. Instigated and organised by Finnish programmer Linus Torvalds, this enthusiast-driven, Internet-based development project has achieved more in less than a decade than what many corporate developers (Microsoft being the obvious example) can do in thrice that time, and with little financial incentive or institutional support at that. As Raymond describes, "the Linux world behaves in many respects like a free market or an ecology, a collection of selfish agents attempting to maximise utility which in the process produces a self-correcting spontaneous order more elaborate and efficient than any amount of central planning could achieve" (10). Thus, while there is no doubt that individual participants will eventually always also be driven by selfish reasons, there is collaboration towards the achievement of communal goals, and a consensus about what those goals are: "while coding remains an essentially solitary activity, the really great hacks come from harnessing the attention and brainpower of entire communities. 
The developer who uses only his or her own brain in a closed project is going to fall behind the developer who knows how to create an open, evolutionary context in which bug-spotting and improvements get done by hundreds of people" (Raymond 10). It is obvious that such collaborative projects need a structure that allows for the immediate participation of a large community, and so in the same way that the Internet has been instrumental in dismantling traditional copyright systems, it is also a driving factor in making these new approaches possible: "Linux was the first project to make a conscious and successful effort to use the entire world as its talent pool. I don't think it's a coincidence that the gestation period of Linux coincided with the birth of the World Wide Web, and that Linux left its infancy during the same period in 1993-1994 that saw the takeoff of the ISP industry and the explosion of mainstream interest in the Internet. Linus was the first person who learned how to play by the new rules that pervasive Internet made possible" (Raymond 10). While some previous collaborative efforts exist (such as shareware schemes, which have existed ever since the advent of programmable home computers), their comparatively limited successes underline the importance of a suitable communication medium. The success of Linux has now begun to affect corporate structures, too: informational material for the Mozilla project, in fact, makes direct reference to the Linux experience. On the Net, Mozilla is as big as it gets -- instituted to continue development of Netscape Communicator-based Web browsers following Netscape's publication of the Communicator source code, it poses a serious threat to Microsoft's push (the legality of which is currently under investigation in the U.S.) to increase marketshare for its Internet Explorer browser. Much like Linux, Mozilla will be a collaborative effort: "we intend to delegate authority over the various modules to the people most qualified to make decisions about them. We intend to operate as a meritocracy: the more good code you contribute, the more responsibility you will be given. We believe that to be the only way to continue to remain relevant, and to do the greatest good for the greatest number" ("Who Is Mozilla.org?"), with the Netscape corporation only one among that number, and a contributor amongst many. Netscape itself intends to release browsers based on the Mozilla source code, with some individual proprietary additions and the benefits corporate structures allow (printed manuals, helplines, and the like), but -- so it seems -- it is giving up its unlimited hold over the course of development of the browser. Such actions afford an almost prophetic quality to Barlow's observation that "familiarity is an important asset in the world of information. It may often be the case that the best thing you can do to raise the demand for your product is to give it away". The use of examples from the computer world should not be seen to mean that the consensual, collaborative use of intellectual property suggested here is limited only to software -- it is, however, no surprise that a computer-based medium would first be put to use to support computer-based development projects. 
Producers and artists from other fields can profit from networking with their peers and clients just as much: artists can stay in touch with their audience and one another, working on collaborative projects such as the brilliant Djam Karet CD Collaborator (see Taylor's review in Gibraltar), professional interest groups can exchange information about the latest developments in their field as well as link with the users of their products to find out about their needs or problems, and the use of the Net as a medium of communication for academic researchers was one of its first applications, of course. In many such cases, consensual collaboration would even speed up the development process and help iron out remaining glitches, beating the efforts of traditional institutions with their severely guarded intellectual property rights. As Raymond sees it, for example, "no commercial developer can match the pool of talent the Linux community can bring to bear on a problem", and so "perhaps in the end the free-software culture will triumph not because cooperation is morally right or software 'hoarding' is morally wrong ... , but simply because the commercial world cannot win an evolutionary arms race with free-software communities that can put orders of magnitude more skilled time into a problem" (10). Realising the Fiction There remains the problem that even the members of such development communities must make a living somehow -- a need to which their efforts in the community not only don't contribute, but the pursuit of which even limits the time available for the community efforts. The apparent impossibility of reconciling these two goals has made the consensual collaborative approach appear little more than a utopian fiction so far, individual successes like Linux or (potentially) Mozilla notwithstanding. However, there are ways of making money from the communal work even if due to the abolition of copyright laws mere royalty payments are impossible -- as the example of Netscape's relation to the Mozilla project shows, the added benefits that corporate support can bring will still seem worth paying for, for many users. Similarly, while music and artwork may be freely available on the Net, many music fans will still prefer to get the entire CD package from a store rather than having to burn the CD and print the booklet themselves. The changes to producer/user relations suggested here do have severe implications for corporate and legal structures, however, and that is the central reason why particularly the major corporate intellectual property holders (or, hoarders) and their armies of lawyers are engaged in such a fierce defensive battle. Needless to say, the changeover from the still-powerful fiction of enforcible intellectual property copyrights to the new vision of open, consensual collaboration that gives credit for individual contributions, but has no concept of an exclusive ownership of ideas, will not take place overnight. Intellectual property will continue to be guarded, trade secrets will keep being kept, for some time yet, but -- just as is the case with the established practice of patenting particular ideas just so competitors can't use them, but without ever putting them to use in one's own work -- eventually such efforts will prove to be self-defeating. 
Shutting one's creative talents off in a quiet cathedral will come to be seen as less productive than engaging in the creative cooperation occuring in the global bazaar, and solitary directives of central executives will be replaced by consensual decisions of the community of producers and users. As Raymond points out, "this is not to say that individual vision and brilliance will no longer matter; rather, ... the cutting edge ... will belong to people who start from individual vision and brilliance, then amplify it through the effective construction of voluntary communities of interest" (10). Such communal approaches may to some seem much like communism, but this, too, is a misconception. In fact, in this new system there is much more exchange, much more give and take going on than in the traditional process of an exchange of money for product between user and producer -- only the currency has changed. "This explains much of the collective 'volunteer' work which fills the archives, newsgroups, and databases of the Internet. Its denizens are not working for 'nothing,' as is widely believed. Rather they are getting paid in something besides money. It is an economy which consists almost entirely of information" (Barlow). And with the removal of the many barriers to the free flow of information and obstacles to scientific and artistic development that traditional copyright has created, the progress of human endeavour itself is likely to be sped up. In the end, then, it all comes down to what fictions we choose to believe or reject. In the light of recent developments, and considering the evidence that suggests the viability, even superiority of alternative approaches, it is becoming increasingly hard to believe that traditional copyright can, and much less, should be sustained. Other than the few major copyright holders, few stand to gain from upholding these rights. On the other hand, were we to lift copyright restrictions and use the ideas and information thus made available freely in a cooperative, consensual, and most of all productive way, we all might profit. As various projects have shown, that fiction is already in the process of being realised. References Barlow, John Perry. "Selling Wine without Bottles: The Economy of Mind on the Global Net." 1993. 26 Jan. 1999 <www.eff.org/pub/Publications/John_Perry_Barlow/HTML/idea_economy_article.php>. Gibson, William. Neuromancer. London: HarperCollins, 1984. Raymond, Eric S. "The Cathedral and the Bazaar." 1998. 26 Jan. 1999 <http://www.redhat.com/redhat/cathedral-bazaar/cathedral-bazaar.php>. Taylor, Mike. "Djam Karet, Jeff Greinke, Tim Song Jones, Nick Peck, Kit Watkins." Gibraltar 5.12 (22 Apr. 1995). 10 Feb. 1999 <http://www.progrock.net/gibraltar/issues/Vol5.Iss12.htm>. "Who Is Mozilla.org?" Mozilla.org Website. 1998. 26 Jan. 1999 <http://www.mozilla.org/about.php>. Citation reference for this article MLA style: Axel Bruns. "The Fiction of Copyright: Towards a Consensual Use of Intellectual Property." M/C: A Journal of Media and Culture 2.1 (1999). [your date of access] <http://www.uq.edu.au/mc/9902/copy.php>. Chicago style: Axel Bruns, "The Fiction of Copyright: Towards a Consensual Use of Intellectual Property," M/C: A Journal of Media and Culture 2, no. 1 (1999), <http://www.uq.edu.au/mc/9902/copy.php> ([your date of access]). APA style: Axel Bruns. (1999) The fiction of copyright: towards a consensual use of intellectual property. M/C: A Journal of Media and Culture 2(1). 
<http://www.uq.edu.au/mc/9902/copy.php> ([your date of access]).
APA, Harvard, Vancouver, ISO, and other styles

Books on the topic "Netscape Communicator (Computer file)"

1

Barker, Donald. The World Wide Web featuring Netscape Communicator 4 software. Cambridge, Mass: Course Technology, 1998.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Young, Margaret Levine, ed. Dummies 101. Foster City, CA: IDG Books Worldwide, 1998.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
3

Bender, Hy. Dummies 101. Foster City, Calif: IDG Books Worldwide, 1997.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
4

Reichard, Kevin. Netscape Communicator 4.0. New York, NY: MIS:Press, 1997.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
5

Robertson, Greg, ed. The essential Netscape Communicator book. Rocklin, CA: Prima Pub., 1997.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
6

Castro, Elizabeth. Netscape Communicator 4 for Macintosh. Berkeley, CA: Peachpit Press, 1998.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
7

Projects for the Netscape Communicator 4.0. Reading, Mass: Addison-Wesley, 1998.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
8

Vernon, Bill, ed. The big guide to Netscape Communicator 4. Indianapolis, Ind: Sams.net, 1997.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
9

Vonk, John A., ed. Netscape Communicator and the World Wide Web. Boston: Irwin McGraw-Hill, 1998.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
10

Grauer, Robert T. Exploring the Internet with Netscape Communicator 4. Upper Saddle River, NJ: Prentice Hall, 1998.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
More sources

To the bibliography