Journal articles on the topic 'Electric guitar music / Electronic music / Computer music / Electric guitar and electronic music'


Consult the top 33 journal articles for your research on the topic 'Electric guitar music / Electronic music / Computer music / Electric guitar and electronic music.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles across a wide variety of disciplines and organise your bibliography correctly.

1

Lindroos, Niklas, Henri Penttinen, and Vesa Välimäki. "Parametric Electric Guitar Synthesis." Computer Music Journal 35, no. 3 (2011): 18–27. http://dx.doi.org/10.1162/comj_a_00066.

2

Schwartz, Jeff. "Writing Jimi: rock guitar pedagogy as postmodern folkloric practice." Popular Music 12, no. 3 (1993): 281–88. http://dx.doi.org/10.1017/s0261143000005729.

Abstract:
Most instruction in electric guitar, bass guitar, drums and electronic keyboards is conducted on a one-to-one basis by uncertified, independent teachers. The lessons are face-to-face, and based on the student's imitation of the teacher's example. Popular music education is a ‘little tradition’ (in comparison to school music departments) and largely an oral one, thus meeting the usual criteria of folk cultures.
3

Carfoot, Gavin. "Acoustic, Electric and Virtual Noise: The Cultural Identity of the Guitar." Leonardo Music Journal 16 (December 2006): 35–39. http://dx.doi.org/10.1162/lmj.2006.16.35.

Abstract:
Guitar technology underwent significant changes in the 20th century in the move from acoustic to electric instruments. In the first part of the 21st century, the guitar continues to develop through its interaction with digital technologies. Such changes in guitar technology are usually grounded in what we might call the “cultural identity” of the instrument: that is, the various ways that the guitar is used to enact, influence and challenge sociocultural and musical discourses. Often, these different uses of the guitar can be seen to reflect a conflict between the changing concepts of “noise” and “musical sound.”
4

Djatmiko, Sigit. "FENOMENOLOGI MUSIK" [Phenomenology of Music]. Dharmasmrti: Jurnal Ilmu Agama dan Kebudayaan 15, no. 28 (2016): 108–13. http://dx.doi.org/10.32795/ds.v15i28.63.

Abstract:
The Kraftwerk music group from Düsseldorf, Germany, rose to fame beginning in 1974. Kraftwerk's prominent feature is that they sought to pioneer the shift from acoustic and electric music to electronic music. Their mission was to dehumanize music, producing impersonal sounds with "musicians" who would rather be considered machine tools than humans. Kraftwerk's works arguably became the blueprint for a sort of avant-garde music, the prototype for kinds of music that celebrated the shift from the sounds of guitar strings and the human voice to combinations of electronic sounds. In sum, Kraftwerk's main aim was to merge fully with technology, to stop playing the instruments, and to let the instruments play themselves.
5

Jenson, Jen, Suzanne De Castell, Rachel Muehrer, and Milena Droumeva. "So you think you can play: An exploratory study of music video games." Journal of Music, Technology and Education 9, no. 3 (2016): 273–88. http://dx.doi.org/10.1386/jmte.9.3.273_1.

Abstract:
Digital music technologies have evolved by leaps and bounds over the last 10 years. The most popular digital music games allow gamers to experience the performativity of music, long before they have the requisite knowledge and skills, by playing with instrument-shaped controllers (e.g. Guitar Hero, Rock Band, Sing Star, Wii Music), while others involve plugging conventional electric guitars into a game console to learn musical technique through gameplay (e.g. Rocksmith). Many of these digital music environments claim to have educative potential, and some are actually used in music classrooms. This article discusses the findings from a pilot study to explore what high school age students could gain in terms of musical knowledge, skill and understanding from these games. We found students improved from pre- to post-assessment in different areas of musicianship after playing Sing Party, Wii Music and Rocksmith, as well as a variety of games on the iPad.
6

Sullivan, Charles R. "Extending the Karplus-Strong Algorithm to Synthesize Electric Guitar Timbres with Distortion and Feedback." Computer Music Journal 14, no. 3 (1990): 26. http://dx.doi.org/10.2307/3679957.

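Sullivan's article extends the well-known Karplus-Strong plucked-string algorithm. As background, here is a minimal sketch of the basic, unextended algorithm: a burst of noise circulates through a delay line whose length sets the pitch, and a simple averaging filter damps the high frequencies. The parameters are illustrative, and the paper's actual contribution (distortion and feedback stages) is not modelled here.

```python
# Minimal Karplus-Strong plucked-string sketch (parameters are
# illustrative; Sullivan's extensions for distortion and feedback
# are not modelled).
import random

def karplus_strong(sample_rate=44100, frequency=110.0, duration=1.0):
    """Synthesize a plucked-string tone as a list of float samples."""
    period = int(sample_rate / frequency)  # delay-line length sets pitch
    # Excite the "string" with a burst of white noise.
    buf = [random.uniform(-1.0, 1.0) for _ in range(period)]
    out = []
    for i in range(int(sample_rate * duration)):
        sample = buf[i % period]
        # Two-point averaging filter: damps high frequencies each pass,
        # so the tone decays like a plucked string.
        buf[i % period] = 0.5 * (sample + buf[(i + 1) % period])
        out.append(sample)
    return out

tone = karplus_strong(duration=0.1)
print(len(tone))  # 4410 samples at 44.1 kHz
```

Lowering the averaging coefficient below 0.5 would shorten the decay; the electric-guitar extensions in the cited paper insert a nonlinear distortion stage and a feedback path around this basic loop.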
7

Arpel, Anna Laura, and Joel Chadabe. "Electric Sound: The Past and Promise of Electronic Music." Computer Music Journal 21, no. 3 (1997): 100. http://dx.doi.org/10.2307/3681020.

8

Burt, Warren, and Joel Chadabe. "Electric Sound: The Past and Promise of Electronic Music." Computer Music Journal 22, no. 1 (1998): 73. http://dx.doi.org/10.2307/3681046.

9

Battier, Marc, and Joel Chadabe. "Electric Sound: The Past and Promise of Electronic Music." Leonardo Music Journal 7 (1997): 100. http://dx.doi.org/10.2307/1513256.

10

Ligeti, Lukas. "The Burkina Electric Project and Some Thoughts about Electronic Music in Africa." Leonardo Music Journal 16 (December 2006): 64. http://dx.doi.org/10.1162/lmj.2006.16.64b.

11

Lubet, Alex. "Disability Studies and Performing Arts Medicine." Medical Problems of Performing Artists 17, no. 2 (2002): 59–62. http://dx.doi.org/10.21091/mppa.2002.2009.

Abstract:
My introduction to the emerging field of disability studies (DS) was not by accident, but by injury. A professor of music composition and theory who uses piano and computer keyboards extensively, performs on acoustic guitar, electric bass, and mandolin, and handwrites a great deal, I have coped with pain and functional limitations from spinal and upper limb injuries for years. In 1999, on disability leave, recovering from neurosurgery for cervical disk herniation, I read a call for papers on disability and the performing arts. Intrigued, I immersed myself in DS literature, and began to participate in the Society for Disability Studies and to engage in research, teaching, and creative projects on disability topics.
12

Gluck, Robert J. "Electric Circus, Electric Ear and the Intermedia Center in Late-1960s New York." Leonardo 45, no. 1 (2012): 50–56. http://dx.doi.org/10.1162/leon_a_00325.

Abstract:
Composer Morton Subotnick moved to New York in 1966 for a brief but productive stay, establishing a small but notable electronic music studio affiliated with New York University. It was built around an early Buchla system and became Subotnick's personal workspace and a creative home for a cluster of emerging young composers. Subotnick also provided artistic direction for a new multimedia discotheque, the Electric Circus, an outgrowth of ideas he formulated earlier at the San Francisco Tape Music Center. A Monday evening series at the Circus, Electric Ear, helped spawn a cluster of venues for new music and multimedia. While the NYU studio and Electric Ear represent examples of centers operating outside commercial forces, the Electric Circus was entrepreneurial in nature, which ultimately compromised its artistic values.
13

Kovalenko, Anatoliі. "DEVELOPMENT OF UKRAINIAN GUITAR EDUCATION IN THE CONDITIONS OF DISTANCE LEARNING." Academic Notes Series Pedagogical Science 1, no. 190 (2020): 100–104. http://dx.doi.org/10.36550/2415-7988-2020-1-190-100-104.

Abstract:
The article reveals the positive and negative aspects of the development of domestic guitar education under distance learning. Applying the methods of historical-pedagogical analysis and a systematic approach, the author analyzed the recommendations of the Ministry of Education and Science of Ukraine concerning this area of the educational process, along with the scientific works of guitar researchers. It is determined that the standards of higher education in specialty 025 «Musical Arts» do not specify requirements for the quality of computers and software. Students, teachers, and heads of educational institutions regularly face the need to update computer equipment to deliver a quality educational process. It is proved that when «Zoom» or «Skype» is used for group practical classes, a small disruption in the Internet connection can desynchronize the work of the music group. It is highlighted that professions related to training, individual services, and creativity will remain relevant, as they cannot be replaced by automated systems even with the use of artificial intelligence. The introduction of distance learning tools in schools is positive, in particular the use of Microsoft Teams for Education, which includes options for downloading, storing, and sharing materials; adding electronic textbooks and educational games; and posting announcements and digests for all participants of the educational process. The negative aspects of domestic guitar education under distance learning are the lack of live contact between teacher and student; the inability to visually adjust performance movements (left and right hand, artistic gestures, etc.); the loss of the mentor's ability to relieve the performer's mental and physiological tension; and slow Internet connections and low-quality computer components, which can hinder a quality educational process. Under distance learning, domestic guitar education has changed the vector of its development. Users of Internet resources became the target audience of guitar performance, and the music education of guitarists received new opportunities for improvement. However, factors still hinder the full implementation of positive changes, chief among them the lack of live communication between teacher and student, unstable Internet connection speeds in some regions of Ukraine, and the low quality of computer components available to some participants in the educational process.
14

Paiuk, Gabriel. "Tactility, Traces and Codes: Reassessing timbre in electronic media." Organised Sound 18, no. 3 (2013): 306–13. http://dx.doi.org/10.1017/s1355771813000289.

Abstract:
This article starts by arguing that in diverse approaches to electronically produced sound in recent music, a shift in focus has occurred: from the creation of novel sounds to the manipulation of sound materials inherent in a culture of electric and electronic devices of sound production. Within these practices, the use of lo-fi devices, circuit-bending, cracked electronics and a resurfacing of older technologies is coupled with digital technology in a process which emphasises the devices' characteristic modes of sound production and artefacts. Electronic sound becomes regarded as embedded in a reservoir of qualities, memories and registers of technologies that inhabit our sound environment. From this starting point our apprehension of technologically produced sound is reassessed, constituted as the crossing of particular conditions of production and reception, cultural traces and codes inherent in the practices and characteristics of media. This perspective lays the ground for a compositional approach that exposes and problematises the interaction of these multiple conditions.
15

Long, Jason, Jim Murphy, Dale Carnegie, and Ajay Kapur. "Loudspeakers Optional: A history of non-loudspeaker-based electroacoustic music." Organised Sound 22, no. 2 (2017): 195–205. http://dx.doi.org/10.1017/s1355771817000103.

Abstract:
The discipline of electroacoustic music is most commonly associated with acousmatic musical forms such as tape-music and musique concrète, and the electroacoustic historical canon primarily centres around the mid-twentieth-century works of Pierre Schaeffer, Karlheinz Stockhausen, John Cage and related artists. As the march of technology progressed in the latter half of the twentieth century, alternative technologies opened up new areas within the electroacoustic discipline such as computer music, hyper-instrument performance and live electronic performance. In addition, the areas of electromagnetic actuation and musical robotics also allowed electroacoustic artists to actualise their works with real-world acoustic sound-objects instead of or alongside loudspeakers. While these works owe much to the oft-cited pioneers mentioned above, there exists another equally significant alternative history of artists who utilised electric, electronic, pneumatic, hydraulic and other sources of power to create what is essentially electroacoustic music without loudspeakers. This article uncovers this ‘missing history’ and traces it to its earliest roots over a thousand years ago to shed light on often-neglected technological and artistic developments that have shaped and continue to shape electronic music today.
16

Refsum Jensenius, Alexander, and Victoria Johnson. "Performing the Electric Violin in a Sonic Space." Computer Music Journal 36, no. 4 (2012): 28–39. http://dx.doi.org/10.1162/comj_a_00148.

Abstract:
This article presents the development of the improvisation piece Transformation for electric violin and live electronics. The aim of the project was to develop an “invisible” technological setup that would allow the performer to move freely on stage while still being in full control of the electronics. The developed system consists of a video-based motion-tracking system, with a camera hanging in the ceiling above the stage. The performer's motion and position on stage is used to control the playback of sonic fragments from a database of violin sounds, using concatenative synthesis as the sound engine. The setup allows the performer to improvise freely together with the electronic sounds being played back as she moves around the “sonic space.” The system has been stable in rehearsal and performance, and the simplicity of the approach has been inspiring to both the performer and the audience.
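The abstract above describes mapping the performer's tracked stage position to playback of fragments from a database of violin sounds. A toy sketch of one plausible mapping is nearest-neighbour selection by position; all names, coordinates and labels here are hypothetical, and the authors' actual concatenative-synthesis engine is not reproduced.

```python
# Illustrative position-to-fragment mapping (hypothetical data, not
# the authors' code): the tracked (x, y) stage position selects the
# nearest sound fragment from a small database.
import math

# Each fragment pairs a normalized 2-D stage coordinate with a label.
FRAGMENTS = [
    ((0.1, 0.2), "ponticello_tremolo"),
    ((0.8, 0.3), "open_string_drone"),
    ((0.5, 0.9), "harmonic_gliss"),
]

def fragment_for_position(x: float, y: float) -> str:
    """Nearest-neighbour lookup: the performer's position picks a fragment."""
    def dist(coord):
        return math.hypot(coord[0] - x, coord[1] - y)
    return min(FRAGMENTS, key=lambda f: dist(f[0]))[1]

print(fragment_for_position(0.9, 0.2))  # open_string_drone
```

In a real system the camera's motion-tracking output would update this lookup continuously, and the selected fragments would be cross-faded or concatenated rather than triggered discretely.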
17

RAMEL, SYLVIE. "Conservation and restoration of electroacoustic musical instruments at the Musée de la Musique, Paris." Organised Sound 9, no. 1 (2004): 87–90. http://dx.doi.org/10.1017/s1355771804000111.

Abstract:
The Museum of Music in Paris possesses a collection of 280 instruments from the twentieth century. Most of them belong to the general families of electric and electronic musical instruments, which we will call ‘electrophones’, in deference to the name chosen by Curt Sachs (1940). The instruments are gathered in families so that the whole collection illustrates the milestones of the twentieth century; for instance, the museum has a large set of diverse Ondes Martenot. However, due to its scarcity, the Trautonium is represented by one of Oskar Sala's Mixtur-Trautonia. Like any museum, we have to encourage the conservation of this heritage. To maintain a large collection of electrophones like the one we have, a specific knowledge base has to be developed. We have been working on this aspect of the project for the past two years. From the outset, it was decided to start the collection with the Ondes Martenot. Our aim was to define a model approach that could also be applied to other electric and electronic instruments. This work involves organising the instruments, studying them in order to outline conditions of appropriate conservation, and determining which kind(s) of restoration should be undertaken. A first step has been to gather all information necessary to understanding the instrument and its mode of performance. With this goal in mind, we have taken a complete inventory of our collection with the aim of coming up with a first assessment of the state of the instruments and determining whether to allow performers to play them. Thanks to this work, we were able to start taking precautionary measures against degradation; we are now also able to answer many questions relevant to the restoration and conservation of this collection.
18

Kane, Carolyn. "The Electric “Now Indigo Blue”: Synthetic Color and Video Synthesis Circa 1969." Leonardo 46, no. 4 (2013): 360–66. http://dx.doi.org/10.1162/leon_a_00607.

Abstract:
Circa 1969, a few talented electrical engineers and pioneering video artists built video synthesizers capable of generating luminous and abstract psychedelic colors that many believed to be cosmic and revolutionary, and in many ways they were. Drawing on archival materials from Boston's WGBH archives and New York's Electronics Arts Intermix, this paper analyzes this early history in the work of electronics engineer Eric Siegel and Nam June Paik's and Shuya Abe's Paik/Abe Video Synthesizer, built at WGBH in 1969. The images produced from these devices were, as Siegel puts it, akin to a “psychic healing medium” used to create “mass cosmic consciousness, awakening higher levels of the mind, [and] bringing awareness of the soul.” While such radical and cosmic unions have ultimately failed, these unique color technologies nonetheless laid the foundation for colorism in the history of electronic computer art.
19

Dillon, Steve. "Jam2jam." M/C Journal 9, no. 6 (2006). http://dx.doi.org/10.5204/mcj.2683.

Abstract:

 
 
Introduction

Generative algorithms have been used for many years by computer musicians like Iannis Xenakis (Xenakis) and David Cope (Cope) to make complex electronic music composition. Advances in computer technology have made it possible to design music algorithms based upon specific pitch, timbre and rhythmic qualities that can be manipulated in real time with a simple interface that a child can control. jam2jam (Brown, Sorensen, & Dillon) is a shareware program developed in Java that uses these ideas and involves what we have called Networked Improvisation, which ‘can be broadly described as collaborative music making over a computer network’ (Dillon & Brown).

Fig. 1: jam2jam interface (download a shareware version at http://www.explodingart.com/).

Jamming Online

With this software users manipulate sliders and dials to influence changes in music in real time. This gives participants the opportunity to interact with the sound possibilities of a chosen musical style as a focused musical environment. Essentially, by moving a slider or dial the user can change the intensity of the musical activity across musical elements such as rhythm, harmony, timbre and volume, and the changes they make will respond within the framework of the musical style parameters, updating and recomposing within the timeframe of a quaver/eighth note. This enables users to play within the style and to hear and influence the shape and structure of the sound. Whilst real-time performance using a computer is not new, what is different about this software is that through a network users can create virtual ensembles, which are simultaneously collaborative and interactive. jam2jam was developed using philosophical design principles based on an understanding of ‘meaning’ gained by musicians, drawn from both software and live music experiences (Dillon, Student as Maker) and research about how professional composers engage with technology in creative production (Brown, Music Composition).

New music technologies have for centuries provided expressive possibilities and an environment where humans can be playful. With jam2jam users can play with complex or simple musical ideas, interact with the musical elements, and hear the changes immediately. When networked they can have these musical experiences collaboratively in a virtual ensemble.

Background

The initial development of jam2jam began with a survey of the musical tastes of a group of children between the ages of 8 and 14 in a multiracial community in Delaware, Ohio in the USA as part of the Delaware Children's Music Festival in 2002. These surveys of ‘the music they liked’ resulted in the researchers purchasing compact discs and completing a rule-based analysis of the styles. This analysis was then converted into numerical values, and algorithms were constructed and used as a structure for the software. The algorithms propose the intensity range of each style. For example, in the Grunge style the snare drum at low intensity plays a cross-stick rim timbre on the second and fourth beat, and at high intensity the sound becomes a gated snare playing rhythmic quaver/eighth-note triplets. In between these are characteristic rhythmic materials that are less complex than the extreme (triplets). This procedure is replicated across five instruments: drums, percussion, bass, guitar and keyboard. The melodic instruments have algorithms for pitch organisation within the possibilities of the style. These algorithms are the recipes or lesson plans for interactive music making, where the student's gestures control the intensity of the music as it composes in real time. A simple interface was designed (see fig. 1) with a page for each instrument and the mixer. The interface primarily uses dials and sliders for interaction, with radio buttons for timbre/instrument selection.

Once the software was built and installed, students were observed using it by videotaping their interaction and interviewing both children and teachers. Observations, which fed into the developmental design, were drawn on a daily basis, with the interface and sound engine being regularly updated to accommodate student and teacher requirements. The principles of observation and analysis were based upon a theory of meaningful engagement (Brown, "Modes", Music Composition; Dillon, "Modelling", Student as Maker). These adjustments were applied to the software, the curriculum design and to the facilitators' organisational processes and interactions with the students. The concept of meaningful engagement, which has been applied to this software development process, has provided an effective tool for identifying the location of meaning and describing modes of creative engagement experienced through networked jamming. It also provided a framework for dynamic evaluation and feedback which influences the design with each successive iteration.

Defining a Contemporary Musicianship

Networked improvisational experiences develop a contemporary musicianship, in which the computer is embraced as an instrument that can be used skillfully in live performance with both acoustic/electric instruments and other network users. The network itself becomes a site for a virtual ensemble where users can experience interaction between ‘players’ in real time. With networked improvisation, cyberspace becomes a venue. Observations have also included performances between two distant locations and ones where computers on the network simultaneously ‘jammed’ with ‘live’ acoustic performers.

The Future

The future of networked jamming is exciting. There is potential for these environments to replicate complex musical systems and engage participants in musical understandings, linking gesture and sound with concepts of musical knowledge that are constructed within the algorithm and the interface. The dynamic development of networked jamming applications involves designs which apply philosophical and pedagogical principles that encourage and sustain meaningful engagement with music making. These are sufficiently complex to allow the revisiting of musical experiences and knowledge at increasingly deeper levels.

Conclusion

jam2jam is a proof-of-concept model for networked jamming environments, where people and machines play music in collaborative ensembles. Network jamming requires a contemporary musicianship, which embraces the computer as an instrument, the network as an ensemble and cyberspace as a venue for performance. These concepts facilitate access to the ensemble performance of complex musical structures through simple interfaces. It provides the opportunity for users to be creatively immersed in the simultaneous act of listening and performance. jam2jam represents an opportunity for music-makers to have interactive experiences with musical knowledge in a way not otherwise previously available. It enables children, adults and the disabled to enter into a collaborative community where technology mediates a live ensemble performance. The experience could be an ostinato pumping out hip-hop or techno grooves, a Xenakis chaos algorithm, or a minimal ambient soundscape. With the development of new algorithms, a sample engine and creative interface design, we believe this concept has amazing possibilities. The real potential of this concept lies in the access that users have to meaningful engagement with ensemble performance in the production of music, in real time, even with limited previous experience or dexterity.

References

Brown, A. "Modes of Compositional Engagement." Paper presented at the Australasian Computer Music Conference-Interfaces, Brisbane, Australia, 2000.
———. Music Composition and the Computer: An Examination of the Work Practices of Five Experienced Composers. Unpublished PhD, University of Queensland, Brisbane, 2003.
———, A. Sorensen, and S. Dillon. jam2jam (Version 1). Interactive generative music making software. Brisbane: Exploding Art Music Productions, 2002.
Cope, D. "Computer Modelling of Musical Intelligence in EMI." Computer Music Journal 16.2 (1992): 69-83.
Dillon, S. "Modelling: Meaning through Software Design." Paper presented at the 26th Annual Conference of the Australian Association for Research in Music Education, Southern Cross University, Tweed Heads, 2004.
———, and A. Brown. "Networked Improvisational Musical Environments: Learning through Online Collaborative Music Making." In Embedding Music Technology in the Secondary School. Eds. J. Finney & P. Burnard. Cambridge: Continuum Press, in press.
Dillon, S. C. The Student as Maker: An Examination of the Meaning of Music to Students in a School and the Ways in Which We Give Access to Meaningful Music Education. Unpublished PhD, La Trobe, Melbourne, 2001.
Xenakis, I. Formalized Music. New York: Pendragon Press, 1991.
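The abstract above describes intensity sliders that select progressively denser rhythmic material within a style, e.g. the Grunge snare moving from cross-stick rim clicks on beats 2 and 4 to gated-snare triplet figures. A minimal sketch of that idea follows; the thresholds, timbre names and patterns are entirely hypothetical, not the published jam2jam code.

```python
# Hypothetical sketch of jam2jam-style intensity mapping (illustrative
# thresholds and patterns only). A slider value in 0.0..1.0 picks a
# snare timbre and a one-bar pattern of sixteen sixteenth-note slots.

SNARE_PATTERNS = [
    # low intensity: cross-stick rim clicks on beats 2 and 4
    (0.33, "rim",   [0,0,0,0, 1,0,0,0, 0,0,0,0, 1,0,0,0]),
    # medium intensity: backbeat plus extra off-beat hits
    (0.66, "snare", [0,0,0,0, 1,0,1,0, 0,0,0,0, 1,0,1,0]),
    # high intensity: gated snare playing dense triplet-like figures
    (1.01, "gated", [1,0,1,1, 1,0,1,1, 1,0,1,1, 1,0,1,1]),
]

def snare_for_intensity(intensity: float):
    """Return (timbre, pattern) for a slider value in [0.0, 1.0]."""
    for threshold, timbre, pattern in SNARE_PATTERNS:
        if intensity < threshold:
            return timbre, pattern
    return SNARE_PATTERNS[-1][1:]  # clamp values at or above 1.0

print(snare_for_intensity(0.1)[0])  # rim
print(snare_for_intensity(0.9)[0])  # gated
```

In the real system this lookup would run once per eighth note for each of the five instruments, so moving a slider recomposes the texture within a quaver, as the abstract describes.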
 
 
 
Citation reference for this article:

MLA: Dillon, Steve. "Jam2jam: Networked Jamming." M/C Journal 9.6 (2006). <http://journal.media-culture.org.au/0612/04-dillon.php>.
APA: Dillon, S. (2006, December). "Jam2jam: Networked Jamming." M/C Journal, 9(6). Retrieved from <http://journal.media-culture.org.au/0612/04-dillon.php>.
20

Burns, Alex. "'This Machine Is Obsolete'." M/C Journal 2, no. 8 (1999). http://dx.doi.org/10.5204/mcj.1805.

Abstract:
'He did what the cipher could not, he rescued himself.' -- Alfred Bester, The Stars My Destination (23) On many levels, the new Nine Inch Nails album The Fragile is a gritty meditation about different types of End: the eternal relationship cycle of 'fragility, tension, ordeal, fragmentation' (adapted, with apologies to Wilhelm Reich); fin-de-siècle anxiety; post-millennium foreboding; a spectre of the alien discontinuity that heralds an on-rushing future vastly different from the one envisaged by Enlightenment Project architects. In retrospect, it's easy for this perspective to be dismissed as jargon-filled cyber-crit hyperbole. Cyber-crit has always been at its best too when it invents pre-histories and finds hidden connections between different phenomena (like the work of Greil Marcus and early Mark Dery), and not when it is closer to Chinese Water Torture, name-checking the canon's icons (the 'Deleuze/Guattari' tag-team), texts and key terms. "The organization of sound is interpreted historically, politically, socially ... . It subdues music's ambition, reins it in, restores it to its proper place, reconciles it to its naturally belated fate", comments imagineer Kodwo Eshun (4) on how cyber-crit destroys albums and the innocence of the listening experience. This is how official histories are constructed a priori and freeze-dried according to personal tastes and prior memes: sometimes the most interesting experiments are Darwinian dead-ends that fail to make the canon, or don't register on the radar. Anyone approaching The Fragile must also contend with the music industry's harsh realities. For every 10 000 Goth fans who moshed to the primal 'kill-fuck-dance' rhythms of the hit single "Closer" (heeding its siren-call to fulfil basic physiological needs and build niche-space), maybe 20 noted that the same riff returned with a darker edge in the title track to The Downward Spiral, undermining the glorification of Indulgent hedonism. 
"The problem with such alternative audiences," notes Disinformation Creative Director Richard Metzger, "is that they are trying to be different -- just like everyone else." According to author Don Webb, "some mature Chaos and Black Magicians reject their earlier Nine Inch Nails-inspired Goth beginnings and are extremely critical towards new adopters because they are uncomfortable with the subculture's growing popularity, which threatens to taint their meticulously constructed 'mysterious' worlds. But by doing so, they are also rejecting their symbolic imprinting and some powerful Keys to unlocking their personal history." It is also difficult to separate Nine Inch Nails from the commercialisation and colossal money-making machine that inevitably ensued on the MTV tour circuit: do we blame Michael Trent Reznor because most of his audience are unlikely to be familiar with 'first-wave' industrial bands including Cabaret Voltaire and the experiments of Genesis P-Orridge in Throbbing Gristle? Do we accuse Reznor of being a plagiarist just because he wears some of his influences -- Dr. Dre, Daft Punk, Atari Teenage Riot, Pink Floyd's The Wall (1979), Tom Waits's Bone Machine (1992), David Bowie's Low (1977) -- on his sleeve? And do we accept no-brain rock critics' album reviews that quote lines like 'All the pieces didn't fit/Though I really didn't give a shit' ("Where Is Everybody?") or 'And when I suck you off/Not a drop will go to waste' ("Starfuckers Inc") as representative of his true personality? Reznor evidently has his own thoughts on this subject, but we should let the music speak for itself. The album's epic production and technical complexity turned into a post-modern studio Vision Quest, assisted by producer Alan Moulder, eleventh-hour saviour Bob Ezrin (brought in by Reznor to 'block-out' conceptual and sonic continuity), and a group of assault-technicians.
The fruit of these collaborations is an album where Reznor is playing with our organism's time-binding sense, modulating strange emotions through deeply embedded tonal angularities. During his five-year absence, Trent Reznor fought diverse forms of repetitious trauma, from endogenous depression caused by endless touring to the death of his beloved grandmother (who raised him throughout childhood). An end signals a new beginning, a spiral is an open-ended and ever-shifting structure, and so Reznor sought to re-discover the Elder Gods within, a shamanic approach to renewal and secular salvation utilised most effectively by music PR luminary and scientist Howard Bloom. Concerned with healing the human animal through Ordeals that hard-wire the physiological baselines of Love, Hate and Fear, Reznor also focusses on what happens when 'meaning-making' collapses and hope for the future cannot easily be found. He accurately captures the confusion that such dissolution of meaning and decline of social institutions brings to the world -- Francis Fukuyama calls this bifurcation 'The Great Disruption'. For a generation who experienced their late childhood and early adolescence in Reagan's America, Reznor and his influences (Marilyn Manson and Filter) capture the Dark Side of recent history, unleashed at Altamont and mutating into the Apocalyptic style of American politics (evident in the 'Star Wars'/SDI fascination). The personal 'psychotic core' that was crystallised by the collapse of the nuclear family unit and supportive social institutions has returned to haunt us with dystopian fantasies that are played out across Internet streaming media and visceral MTV film-clips. That such cathartic releases are useful -- and even necessary (to those whose lives have been formed by socio-economic 'life conditions') is a point that escapes critics like Roger Scruton, some Christian Evangelists and the New Right. 
The 'escapist' quality of early 1980s 'Rapture' and 'Cosmocide' (Hal Lindsey) prophecies has yielded strange fruit for the Children of Ezekiel, for whom Reznor and Marilyn Manson are unofficial spokespersons. From a macro perspective, Reznor's post-human evolutionary nexus lies, like J.G. Ballard's tales, in a mythical near-future built upon past memory-shards. It is the kind of worldview that fuses organic and morphogenetic structures with industrial machines run amok; thus The Fragile is an artefact that captures the subjective contents of the different mind produced by different times. Sonic events are in-synch but out of phase. Samples subtly trigger and then scramble kinaesthetic-visceral and kinaesthetic-tactile memories, suggestive of dissociated affective states or body memories that are incapable of being retrieved (van der Kolk 294). Perhaps this is why, after a Century of Identity Confusion, some fans find it impossible to listen to a 102-minute album in one sitting. No wonder then that the double album is divided into 'left' and 'right' discs (a reference to split-brain research?). The real-time track-by-track interpretation below is necessarily subjective, and is intended to serve as a provisional listener's guide to the aural ur-text of 1999. The Fragile is full of encrypted tones and garbled frequencies that capture a world where the future is always bleeding into a non-recoverable past. Turbulent wave-forms fight for the listener's attention with prolonged static lulls. This does not make for comfortable or even 'nice' listening. The music's mind is a snapshot, a critical indicator, of the deep structures brewing within the Weltanschauung that could erupt at any moment. "Somewhat Damaged" opens the album's 'Left' disc with an oscillating acoustic strum that anchors the listener's attention. 
Offset by pulsing beats and mallet percussion, Reznor builds up sound layers that contrast with lyrical epitaphs like 'Everything that swore it wouldn't change is different now'. Icarus iconography is invoked, but perhaps a more fitting mythopoeic symbol of the journey that lies ahead would be Nietzsche's pursuit of his Ariadne through the labyrinth of life, during which the hero is steadily consumed by his numbing psychosis. Reznor fittingly comments: 'Didn't quite/Fell Apart/Where were you?' If we consider that Reznor has been repeating the same cycle with different variations throughout all of his music to date, retro-fitting each new album into a seamless tapestry, then this track signals that he has begun to finally climb out of self-imposed exile in the Underworld. "The Day the World Went Away" has a tremendously eerie opening, with plucked mandolin effects entering at 0:40. The main slashing guitar riff was interpreted by some critics as Reznor's attempt to parody himself. For some reason, the eerie backdrop and fragmented acoustic guitar strums recall to my mind civil defence nuclear war films. Reznor, like William S. Burroughs, has some powerful obsessions. The track builds up in intensity, with a 'Chorus of the Damned' singing 'na na nah' over apocalyptic end-times imagery. At 4:22 the track ends with an echo that loops and repeats. "The Frail" signals a shift to mournful introspectiveness with piano: a soundtrack to faded 8 mm films and dying memories. The piano builds up slowly with background echo, holds and segues into ... "The Wretched", beginning with a savage downbeat that recalls earlier material from Pretty Hate Machine. 'The Far Aways/Forget It' intones Reznor -- it's becoming clear that, despite some claims to the contrary, there is redemption in this album, but it is one borne out of a relentless move forward, a strive-drive. 'You're finally free/You could be' suggests that Reznor studied Existentialism during his psychotherapy visits. 
This song contains perhaps the ultimate post-relationship line: 'It didn't turn out the way you wanted it to, did it?' It's over, just not the way you wanted; you can always leave the partner you're with, but the ones you have already left will always stain your memories. The lines 'Back at the beginning/Sinking/Spinning' recall the claustrophobic trapped world and 'eternal Now' dislocation of Post-Traumatic Stress Disorder victims. At 3:44 a plucked cello riff, filtered, segues into a sludge buzz-saw guitar solo. At 5:18 the cello riff loops and repeats. "We're in This Together Now" uses static as percussion, highlighting the influence of electricity flows instead of traditional rock instrument configurations. At 0:34 vocals enter, at 1:15 Reznor wails 'I'm impossible', showing he is the heir to Roger Waters's self-reflective rock-star angst. 'Until the very end of me, until the very end of you' inverts the traditional marriage vow, whilst 'You're the Queen and I'm the King' quotes David Bowie's "Heroes". Unlike earlier tracks like "Reptile", this track is far more positive about relationships, which have previously resembled toxic dyads. Reznor signals a delta surge (breaking through barriers at any cost), despite a time-line morphing between present-past-future. At 5:30 synths and piano signal a shift, at 5:49 the outgoing piano riff begins. The film-clip is filled with redemptive water imagery. The soundtrack gradually gets more murky and at 7:05 a subterranean note signals closure. "The Fragile" is even more hopeful and life-affirming (some may even interpret it as devotional), but this love -- representative of the End-Times -- alludes to the 'Glamour of Evil' (Nico) in the line 'Fragile/She doesn't see her beauty'. The fusion of synths and atonal guitars beginning at 2:13 summons forth film-clip imagery -- mazes, pageants, bald eagles, found sounds, cloaked figures, ruined statues, enveloping darkness. 
"Just like You Imagined" opens with Soundscapes worthy of Robert Fripp, doubled by piano and guitar at 0:39. Drums and muffled voices enter at 0:54 -- are we seeing a pattern to Reznor's writing here? Sonic debris guitar enters at 1:08, bringing forth intensities from white noise. This track is full of subtle joys like the 1:23-1:36 solo by David Bowie pianist Mike Garson and guitarist Adrian Belew's outgoing guitar solo at 2:43, shifting back to the underlying soundscapes at 3:07. The sounds are always on the dissipative edge of chaos. "Just like You Imagined" opens with Soundscapes worthy of Robert Fripp, doubled by piano and guitar at 0:39. Drums and muffled voices enter at 0:54 -- are we seeing a pattern to Reznor's writing here? Sonic debris guitar enters at 1:08, bringing forth intensities from white noise. This track is full of subtle joys like the 1:23-1:36 solo by David Bowie pianist Mike Garson and guitarist Adrian Belew's outgoing guitar solo at 2:43, shifting back to the underlying soundscapes at 3:07. The sounds are always on the dissipative edge of chaos. "Pilgrimage" utilises a persistent ostinato and beat, with a driving guitar overlay at 0:18. This is perhaps the most familiar track, using Reznor motifs like the doubling of the riff with acoustic guitars between 1:12-1:20, march cries, and pitch-shift effects on a 3:18 drumbeat/cymbal. Or at least I could claim it was familiar, if it were not that legendary hip-hop producer and 'edge-of-panic' tactilist Dr. Dre helped assemble the final track mix. "No, You Don't" has been interpreted as an attack on Marilyn Manson and Hole's Courntey Love, particularly the 0:47 line 'Got to keep it all on the outside/Because everything is dead on the inside' and the 2:33 final verse 'Just so you know, I did not believe you could sink so low'. 
The song's structure is familiar: a basic beat at 0:16, guitars building from 0:31 to sneering vocals, a 2:03 counter-riff that merges at 2:19 with vocals and ascending to the final verse and 3:26 final distortion... "La Mer" is the first major surprise, a beautiful and sweeping fusion of piano, keyboard and cello, reminiscent of Symbolist composer Debussy. At 1:07 Denise Milfort whispers, setting the stage for sometime Ministry drummer Bill Rieflin's jazz drumming at 1:22, and a funky 1:32 guitar/bass line. The pulsing synth guitar at 2:04 serves as anchoring percussion for a cinematic electronica mindscape, filtered through new layers of sonic chiaroscuro at 2:51. 3:06 phase shifting, 3:22 layer doubling, 3:37 outgoing solo, 3:50-3:54 more swirling vocal fragments, seguing into a fading cello quartet as shadows creep. David Carson's moody film-clip captures the end more ominously, depicting the beauty of drowning. This track contains the line 'Nothing can stop me now', which appears to be Reznor's personal mantra. This track rivals 'Hurt' and 'A Warm Place' from The Downward Spiral and 'Something I Can Never Have' from Pretty Hate Machine as perhaps the most emotionally revealing and delicate material that Reznor has written. "The Great Below" ends the first disc with more multi-layered textures fusing nostalgia and reverie: a twelve-second cello riff is counter-pointed by a plucked overlay, which builds to a 0:43 washed pulse effect, transformed by six second pulses between 1:04-1:19 and a further effects layer at 1:24. E-bow effects underscore lyrics like 'Currents have their say' (2:33) and 'Washes me away' (2:44), which a 3:33 sitar riff answers. These complexities are further transmuted by seemingly random events -- a 4:06 doubling of the sitar riff which 'glitches' and a 4:32 backbeat echo that drifts for four bars. 
While Reznor's lyrics suggest that he is unable to control subjective time-states (like The Joker in the Batman: Dark Knight series of Kali-yuga comic-books), the track constructions show that the Key to his hold over the listener is very carefully constructed songs whose spaces resemble Pythagorean mathematical formulas. Misdirecting the audience is the secret of many magicians. "The Way Out Is Through" opens the 'Right' disc with an industrial riff that builds at 0:19 to click-track and rhythm, the equivalent of a weaving spiral. Whispering 'All I've undergone/I will keep on' at 1:24, Reznor is backed at 1:38 by synths and drums coalescing into guitars, which take shape at 1:46 and turn into a torrential electrical current. The models are clearly natural morphogenetic structures. The track twists through inner storms and torments from 2:42 to 2:48, mirrored by vocal shards at 2:59 and soundscapes at 3:45, before piano fades in and out at 4:12. The title references peri-natal theories of development (particularly those of Stanislav Grof), which is the source of much of the album's imagery. "Into the Void" is not the Black Sabbath song of the same name, but a catchy track that uses the same unfolding formula (opening static, cello at 0:18, guitars at 0:31, drums and backbeat at 1:02, trademark industrial vocals and synth at 1:02, verse at 1:23), and would not appear out of place in a Survival Research Laboratories exhibition. At 3:42 Reznor plays with the edge of synth soundscapes, merging vocals at 4:02 and ending the track nicely at 4:44 alone. "Where Is Everybody?" emulates earlier structures, but relies from 2:01 on whirring effects and organic rhythms, including a flurry of eight beat pulses between 2:40-2:46 and a 3:33 spiralling guitar solo. The 4:26 guitar solo is pure Adrian Belew, and is suddenly ended by spluttering static and white noise at 5:13. 
"The Mark Has Been Made" signals another downshift into introspectiveness with 0:32 ghostly synth shimmers, echoed by cello at 1:04 which is the doubled at 1:55 by guitar. At 2:08 industrial riffs suddenly build up, weaving between 3:28 distorted guitars and the return of the repressed original layer at 4:16. The surprise is a mystery 32 second soundscape at the end with Reznor crooning 'I'm getting closer, all the time' like a zombie devil Elvis. "Please" highlights spacious noise at 0:48, and signals a central album motif at 1:04 with the line 'Time starts slowing down/Sink until I drown'. The psychic mood of the album shifts with the discovery of Imagination as a liberating force against oppression. The synth sound again is remarkably organic for an industrial album. "Starfuckers Inc" is the now infamous sneering attack on rock-stardom, perhaps at Marilyn Manson (at 3:08 Reznor quotes Carly Simon's 'You're So Vain'). Jungle beats and pulsing synths open the track, which features the sound-sculpting talent of Pop Will Eat Itself member Clint Mansell. Beginning at 0:26, Reznor's vocals appear to have been sampled, looped and cut up (apologies to Brion Gysin and William S. Burroughs). The lines 'I have arrived and this time you should believe the hype/I listened to everyone now I know everyone was right' is a very savage and funny exposure of Manson's constant references to Friedrich Nietzsche's Herd-mentality: the Herd needs a bogey-man to whip it into submission, and Manson comes dangerous close to fulfilling this potential, thus becoming trapped by a 'Stacked Deck' paradox. The 4:08 lyric line 'Now I belong I'm one of the Chosen Ones/Now I belong I'm one of the Beautiful Ones' highlights the problem of being Elect and becoming intertwined with institutionalised group-think. The album version ditches the closing sample of Gene Simmons screaming "Thankyou and goodnight!" 
to an enraptured audience on the single from KISS Alive (1975), which was appropriately over-the-top (the alternate quiet version is worth hearing also). "The danger Marilyn Manson faces", notes Don Webb (current High Priest of the Temple of Set), "is that he may end up in twenty years time on the 'Tonight Show' safely singing our favourite songs like a Goth Frank Sinatra, and will have gradually lost his antinomian power. It's much harder to maintain the enigmatic aura of an Evil villain than it is to play the clown with society". Reznor's superior musicianship and sense of irony should keep him from falling into the same trap. "Complication" juggernauts in at 0:57 with screaming vocals and a barrage of white noise at 1:56. It's clear by now that Reznor has read his psychological operations (PSYOP) manuals pertaining to blasting the hell out of his audiences' psyche by any means necessary. Computer blip noise and black light flotation tank memories. Dislocating pauses and time-bends. The aural equivalent of Klein bottles. "The Big Come Down" begins with a four-second synth/static intro that is smashed apart by a hard beat at 0:05 and kaleidoscope guitars at 0:16. Critics refer to the song's lyrics in an attempt to project a narcissistic Reznor personality, but don't comment on stylistic tweaks like the AM radio influenced backing vocals at 1:02 and 1:19, or the use of guitars as a percussion layer at 1:51. A further intriguing element is the return of the fly samples at 2:38, an effect heard on previous releases and a possible post-human sub-text. 
The alien mythos will eventually reign over the banal and empty human. At 3:07 the synths return with static, a further overlay adds more synths at 3:45 as the track spirals to its peak, before dissipating at 3:1 in a mesh of percussion and guitars. "Underneath It All" opens with a riff that signals we have reached the album's climactic turning point, with the recurring theme of fragmenting body-memories returning at 0:23 with the line 'All I can do/I can still feel you', and being echoed by pulsing static at 0:42 as electric percussion. A 'Messiah Complex' appears at 1:34 with the line 'Crucify/After all I've died/After all I've tried/You are still inside', or at least it appears to be that on the surface. This is the kind of line that typical rock critics will quote, but a careful re-reading suggests that Reznor is pointing to the painful nature of remanifesting. Our past shapes us more than we would like to admit, particularly our first relationships. "Ripe (With Decay)" is the album's final statement, a complex weaving of passages over a repetitive mesh of guitars, pulsing echoes, back-beats, soundscapes, and a powerful Mike Garson piano solo (2:26). Earlier motifs including fly samples (3:00), mournful funeral violas (3:36) and slowing time effects (4:28) recur throughout the track. Having finally reached the psychotic core, Reznor is not content to let us rest, mixing funk bass riffs (4:46), vocal snatches (5:23) and oscillating guitars (5:39) that drag the listener forever onwards towards the edge of the abyss (5:58). The final sequence begins at 6:22, loses fidelity at 6:28, and ends abruptly at 6:35. At millennium's end there is a common-held perception that the world is in an irreversible state of decay, and that Culture is just a wafer-thin veneer over anarchy. 
Music like The Fragile suggests that we are still trying to assimilate into popular culture the 'war-on-Self' worldviews unleashed by the nineteenth-century 'Masters of Suspicion' (Charles Darwin, Sigmund Freud, Friedrich Nietzsche). This 'assimilation gap' is evident in industrial music, which in the late 1970s was still struggling to capture the mood of the Industrial Revolution and Charles Dickens; the genre is thus ripe for further exploration of the scarred psyche. What the self-appointed moral guardians of the Herd fail to appreciate is that as the imprint baseline rises (reflective of socio-political realities), the kind of imagery prevalent throughout The Fragile and in films like Strange Days (1995), The Matrix (1999) and eXistenZ (1999) is going to get even darker. The solution is not censorship or repression in the name of pleasing an all-saving surrogate god-figure. No, these things have to be faced and embraced somehow. Such a process can only occur if there is space within for the Sadeian aesthetic that Nine Inch Nails embodies, and not a denial of Dark Eros. "We need a second Renaissance", notes Don Webb, "a rejuvenation of Culture on a significant scale". In other words, a global culture-shift of quantum (aeon or epoch-changing) proportions. The tools required will probably not come just from the over-wordy criticism of Cyber-culture and Cultural Studies or the logical-negative feeding frenzy of most Music Journalism. They will come from a dynamic synthesis of disciplines striving toward a unity of knowledge -- what socio-biologist Edward O. Wilson has described as 'Consilience'. Liberating tools and ideas will be conveyed to a wider public audience unfamiliar with such principles through predominantly science fiction visual imagery and industrial/electronica music. The Fragile serves as an invaluable model for how such artefacts could transmit their dreams and propagate their messages. 
For the hyper-alert listener, it will be the first step on a new journey. But sadly for the majority, it will be just another hysterical industrial album promoted as the selection of the month. References Bester, Alfred. The Stars My Destination. London: Millennium Books, 1999. Eshun, Kodwo. More Brilliant than the Sun: Adventures in Sonic Fiction. London: Quartet Books, 1998. Van der Kolk, Bessel A. "Trauma and Memory." Traumatic Stress: The Effects of Overwhelming Experience on Mind, Body, and Society. Eds. Bessel A. van der Kolk et al. New York: Guilford Press, 1996. Nine Inch Nails. The Downward Spiral. Nothing/Interscope, 1994. ---. The Fragile. Nothing, 1999. ---. Pretty Hate Machine. TVT, 1989. Citation reference for this article MLA style: Alex Burns. "'This Machine Is Obsolete': A Listeners' Guide to Nine Inch Nails' The Fragile." M/C: A Journal of Media and Culture 2.8 (1999). [your date of access] <http://www.uq.edu.au/mc/9912/nine.php>. Chicago style: Alex Burns, "'This Machine Is Obsolete': A Listeners' Guide to Nine Inch Nails' The Fragile," M/C: A Journal of Media and Culture 2, no. 8 (1999), <http://www.uq.edu.au/mc/9912/nine.php> ([your date of access]). APA style: Alex Burns. (1999) 'This machine is obsolete': a listeners' guide to Nine Inch Nails' The fragile. M/C: A Journal of Media and Culture 2(8). <http://www.uq.edu.au/mc/9912/nine.php> ([your date of access]).
APA, Harvard, Vancouver, ISO, and other styles
21

Moore, Christopher Luke. "Digital Games Distribution: The Presence of the Past and the Future of Obsolescence." M/C Journal 12, no. 3 (2009). http://dx.doi.org/10.5204/mcj.166.

Full text
Abstract:
A common criticism of the rhythm video games genre, including series like Guitar Hero and Rock Band, is that playing musical simulation games is a waste of time when you could be playing an actual guitar and learning a real skill. A more serious criticism of games cultures draws attention to the degree of e-waste they produce. E-waste or electronic waste includes mobile phones, computers, televisions and other electronic devices, containing toxic chemicals and metals whose landfill, recycling and salvaging all produce distinct environmental and social problems. The e-waste produced by games like Guitar Hero is obvious in the regular flow of merchandise transforming computer and video games stores into simulation music stores, filled with replica guitars, drum kits, microphones and other products whose half-lives are short and whose obsolescence is anticipated in the annual cycles of consumption and disposal. This paper explores the connection between e-waste and obsolescence in the games industry, and argues for the further consideration of consumers as part of the solution to the problem of e-waste. It uses a case study of the PC digital distribution software platform, Steam, to suggest that the digital distribution of games may offer an alternative model to market driven software and hardware obsolescence, and more generally, that such software platforms might be a place to support cultures of consumption that delay rather than promote hardware obsolescence and its inevitability as e-waste. The question is whether there exists a potential for digital distribution to be a means of not only eliminating the need to physically transport commodities (its current 'green' benefit), but also for supporting consumer practices that further reduce e-waste. The games industry relies on a rapid production and innovation cycle, one that actively enforces hardware obsolescence. 
Current video game consoles, including the PlayStation 3, the Xbox 360 and Nintendo Wii, are the seventh generation of home gaming consoles to appear within forty years, and each generation is accompanied by an immense international transportation of games hardware, software (in various storage formats) and peripherals. Obsolescence also occurs at the software or content level and is significant because the games industry as a creative industry is dependent on the extensive management of multiple intellectual properties. The computing and video games software industry operates in close partnership with the hardware industry, and as such, software obsolescence directly contributes to hardware obsolescence. The obsolescence of content and the redundancy of the methods of policing its scarcity in the marketplace have been accelerated and altered by the processes of disintermediation with a range of outcomes (Flew). The music industry is perhaps the most advanced in terms of disintermediation, with digital distribution at the center of the conflict between the legitimate and unauthorised access to intellectual property. This points to one issue with the hypothesis that digital distribution can lead to a reduction in hardware obsolescence, as the marketplace leader and key online distributor of music, Apple, is also the major producer of new media technologies and devices that are the paragon of stylistic obsolescence. Stylistic obsolescence, in which fashion changes products across seasons of consumption, has long been observed as the dominant form of scaled industrial innovation (Slade). Stylistic obsolescence is differentiated from mechanical or technological obsolescence as the deliberate supersession of products by more advanced designs, better production techniques and other minor innovations. The line between stylistic and technological obsolescence is not always clear, especially as reduced durability has become a powerful market strategy (Fitzpatrick). 
This occurs where the design of technologies is subsumed within the discourses of manufacturing, consumption and the logic of planned obsolescence in which the product or parts are intended to fail, degrade or underperform over time. It is especially the case with signature new media technologies such as laptop computers, mobile phones and portable games devices. Gamers are as guilty as other consumer groups in contributing to e-waste as participants in the industry's cycles of planned obsolescence, but some of them complicate discussions over the future of obsolescence and e-waste. Many gamers actively work to forestall the obsolescence of their games: they invest time in the play of older games ('retrogaming'); they donate labor and creative energy to the production of user-generated content as a means of sustaining involvement in gaming communities; and they produce entirely new game experiences for other users, based on existing software and hardware modifications known as 'mods'. With Guitar Hero and other 'rhythm' games it would be easy to argue that the hardware components of this genre have only one future: as waste. Alternatively, we could consider the actual lifespan of these objects (including their impact as e-waste) and the roles they play in the performances and practices of communities of gamers. For example, the Elmo Guitar Hero controller mod, the Tesla coil Guitar Hero controller interface, the Rock Band Speak n' Spellbinder mashup, the multiple and almost sacrilegious Fender Guitar Hero mods, the Guitar Hero Portable Turntable Mod and MAKE magazine's Trumpet Hero all indicate a significant diversity of user innovation, community formation and individual investment in the post-retail life of computer and video game hardware. 
Obsolescence is not just a problem for the games industry but for the computing and electronics industries more broadly as direct contributors to the social and environmental cost of electrical waste and obsolete electrical equipment. Planned obsolescence has long been the experience of gamers and computer users, as the basis of a utopian mythology of upgrades (Dovey and Kennedy). For PC users the upgrade pathway is traversed by the consumption of further hardware and software post initial purchase in a cycle of endless consumption, acquisition and waste (as older parts are replaced and eventually discarded). The accumulation and disposal of these cultural artefacts does not devalue or accrue in space or time at the same rate (Straw), and many users will persist for years, gradually upgrading, delaying obsolescence and even perpetuating the circulation of older cultural commodities. Flea markets and secondhand fairs are popular sites for the purchase of new, recent, old, and recycled computer hardware and peripherals. Such practices and parallel markets support the strategies of 'making do' described by De Certeau, but they also continue the cycle of upgrade and obsolescence, and they are still consumed as part of the promise of the 'new', and the desire of a purchase that will finally 'fix' the users' computer in a state of completion (29). The planned obsolescence of new media technologies is common, but its success is mixed; for example, support for Microsoft's operating system Windows XP was officially withdrawn in April 2009 (Robinson), but due to the popularity of low-cost PC 'netbooks' outfitted with an optimised XP operating system and a less than enthusiastic response to the 'next generation' Windows Vista, XP continues to be popular. Digital Distribution: A Solution? Gamers may be able to reduce the accumulation of e-waste by supporting the disintermediation of the games retail sector by means of online distribution. 
Disintermediation is the establishment of a direct relationship between the creators of content and their consumers through products and services offered by content producers (Flew 201). The move to digital distribution has already begun to reduce the need to physically handle commodities, but this currently signals only further support of planned, stylistic and technological obsolescence, increasing the rate at which the commodities for recording, storing, distributing and exhibiting digital content become e-waste. Digital distribution is sometimes overlooked as a potential means for promoting communities of user practice dedicated to e-waste reduction, at the same time it is actively employed to reduce the potential for the unregulated appropriation of content and restrict post-purchase sales through Digital Rights Management (DRM) technologies. Distributors like Amazon.com continue to pursue commercial opportunities in linking the user to digital distribution of content via exclusive hardware and software technologies. The Amazon e-book reader, the Kindle, operates via a proprietary mobile network using a commercially run version of the wireless 3G protocols. The e-book reader is heavily encrypted with Digital Rights Management (DRM) technologies and exclusive digital book formats designed to enforce current copyright restrictions and eliminate second-hand sales, lending, and further post-purchase distribution. The success of this mode of distribution is connected to Amazon's ability to tap both the mainstream market and the consumer demand for the less-than-popular: those books, movies, music and television series that may not have been 'hits' at the time of release. The desire to revisit forgotten niches, such as B-sides, comics, books, and older video games, is, suggests Chris Anderson, linked with so-called 'long tail' economics. 
Recently Webb has queried the economic impact of the Long Tail as a business strategy, but does not deny the underlying dynamics, which suggest that content does not obsolesce in any straightforward way. Niche markets for older content are nourished by participatory cultures and Web 2.0 style online services. A good example of the Long Tail phenomenon is the recent case of the 1971 book A Lion Called Christian, by Anthony Burke and John Rendall, republished after the author's film of a visit to a resettled Christian in Africa was popularised on YouTube in 2008. Anderson's Long Tail theory suggests that over time a large number of items, each with unique rather than mass histories, will be subsumed as part of a larger community of consumers, including fans, collectors and everyday users with a long term interest in their use and preservation. If digital distribution platforms are to reduce e-waste, this can perhaps be fostered by ensuring not only that digital consumers can make morally and ethically aware consumer decisions, but also that they enjoy traditional consumer freedoms, such as the right to sell on and change or modify their property. For it is not only the fixation on the 'next generation' that contributes to obsolescence, but also technologies like DRM systems that discourage second hand sales and restrict modification. The legislative upgrades, patches and amendments to copyright law that have attempted to maintain the law's effectiveness in competing with peer-to-peer networks have supported DRM and other intellectual property enforcement technologies, despite the difficulties that owners of intellectual property have encountered with the effectiveness of DRM systems (Moore, Creative). The games industry continues to experiment with DRM, however, this industry also stands out as one of the few to have significantly incorporated the user within the official modes of production (Moore, Commonising). 
Is the games industry able, or indeed willing, to support a digital delivery system that attempts to minimise or even reverse software and hardware obsolescence? We can try to answer this question by looking in detail at the biggest digital distributor of PC games, Steam.

Steam

Figure 1: The Steam application user interface, retail section.

Steam is a digital distribution system designed for the Microsoft Windows operating system and operated by the American video game developer and publisher Valve Corporation. Steam combines online games retail, DRM technologies and internet-based distribution services with social networking and multiplayer features (in-game voice and text chat, user profiles, etc.) and direct support for major games publishers, independent producers, and communities of user-contributors (modders). Steam, like the iTunes games store, Xbox Live and other digital distributors, provides consumers with direct digital downloads of new, recent and classic titles that can be accessed remotely by the user from any (internet-equipped) location. Steam was first packaged with the physical distribution of Half-Life 2 in 2004, and the platform's eventual popularity is tied to the success of that game franchise. Steam was not an optional component of the game's installation, and many gamers protested in various online forums, while the platform was treated with suspicion by the global PC games press. It did not help that Steam was at launch everything that gamers object to: a persistent and initially 'buggy' piece of software that sits in the PC's operating system and occupies limited memory resources at the cost of hardware performance. Regular updates to the Steam software platform introduced social network features just as mainstream sites like MySpace and Facebook were emerging, and its popularity has subsequently grown rapidly.
Steam now eclipses competitors with more than 20 million user accounts (Leahy), and Valve Corporation makes it publicly known that Steam collects large amounts of data about its users. This information is available via the public player profile in the community section of the Steam application. It includes the average number of hours the user plays per week, and can even indicate the difficulty the user has in navigating game obstacles. Valve reports on the number of users on Steam every two hours via its web site, with a population on average between one and two million simultaneous users (Valve, "Steam"). We know these users’ hardware profiles because Valve Corporation makes the results of its surveillance public knowledge via the Steam Hardware Survey. Valve’s hardware survey itself conceptualises obsolescence in two ways. First, it uses the results to define the 'cutting edge' of PC technologies, publishing the standards of its own high-end production hardware on the company's blog. Second, the effect of the Survey is to define obsolescent hardware: for example, in the Survey results for April 2009 we can see that a slight majority of users maintain computers with two central processing units, while a significant proportion (almost one third) still maintain much older PCs with a single CPU. Both effects of the Survey appear to be well understood by Valve: the Steam Hardware Survey automatically collects information about the community's computer hardware configurations and presents an aggregate picture of the stats on our web site. The survey helps us make better engineering and gameplay decisions, because it makes sure we're targeting machines our customers actually use, rather than measuring only against the hardware we've got in the office. We often get asked about the configuration of the machines we build around the office to do both game and Steam development.
We also tend to turn over machines in the office pretty rapidly, roughly every 18 months. (Valve, "Team Fortress") Valve’s support of older hardware might counter perceptions that older PCs have no use, and it begins to reverse decades of opinion regarding planned and stylistic obsolescence in the PC hardware and software industries. Equally significant to the extension of the lives of older PCs is Steam's support for mods and its promotion of user-generated content. By providing software for mod creation and distribution, Steam maximises what Postigo calls the development potential of fan-programmers. One of the 'payoffs' in the information/access exchange for the user is the degree to which Valve's End-User Licence Agreement (EULA) permits individuals and communities of 'modders' to appropriate its proprietary game content for use in the creation of new games and games materials for redistribution via Steam. These mods extend the play of the older games by requiring their purchase via Steam in order for the individual user to participate in the modded experience. If Steam is able to encourage this kind of appropriation and community support for older content, then the potential exists for it to support cultures of consumption and practices of use that collaboratively maintain, extend, and prolong the life and use of games. Further, Steam incorporates the insights of “long tail” economics in a purely digital distribution model, in which the obsolescence of 'non-hit' game titles can be dramatically overturned. Published in November 2007, Unreal Tournament 3 (UT3) by Epic Games was unappreciated in a market saturated with games in the first-person shooter genre. Epic republished UT3 on Steam 18 months later, making the game available to play for free for one weekend, followed by discounted access to new content.
The 2000 per cent increase in players over the game's 'free' trial weekend has translated into enough sales of the game for Epic to no longer consider the release a commercial failure: It’s an incredible precedent to set: making a game a success almost 18 months after a poor launch. It’s something that could only have happened now, and with a system like Steam... Something that silently updates a purchase with patches and extra content automatically, so you don’t have to make the decision to seek out some exciting new feature: it’s just there anyway. Something that, if you don’t already own it, advertises that game to you at an agreeably reduced price whenever it loads. Something that enjoys a vast community who are in turn plugged into a sea of smaller relevant communities. It’s incredibly sinister. It’s also incredibly exciting... (Meer) Clearly concerns exist about Steam's user privacy policy, but this also invites us to think about the economic relationship between gamers and games companies as it is reconfigured through the private contractual relationship established by the EULA which accompanies the digital distribution model. The games industry has established contractual and licensing arrangements with its consumer base in order to support and reincorporate emerging trends in user-generated cultures and other cultural formations within its official modes of production (Moore, "Commonising"). When we consider that Valve gets to tax sales of its virtual goods and can further sell the information farmed from its users to hardware manufacturers, it is reasonable to consider the relationship between the corporation and its gamers as exploitative. Gabe Newell, the Valve co-founder and managing director, conversely believes that people are willing to give up personal information if they feel it is being used to get better services (Leahy).
If that sentiment is correct, then consumers may be willing to trade further for services that can reduce obsolescence and begin to address the problems of e-waste from the ground up.

Conclusion

Clearly, there is a potential for digital distribution to be a means of not only eliminating the need to physically transport commodities but also supporting consumer practices that further reduce e-waste. For an industry where only a small proportion of the games made break even, the successful relaunch of older games content indicates Steam's capacity to ameliorate software obsolescence. Digital distribution extends the use of commercially released games by providing disintermediated access to older and user-generated content. For Valve, this occurs within a network of exchange, as access to user-generated content, social networking services, and support for the organisation and coordination of communities of gamers is traded for user information and repeat business. Evidence for whether this will actively translate to an equivalent decrease in the obsolescence of game hardware might be observed in future with indicators like the Steam Hardware Survey. The degree of potential offered by digital distribution is disrupted by a range of technical, commercial and legal hurdles, chief among which is the deployment of DRM as part of a range of techniques designed to limit consumer behaviour post-purchase. While intervention in the form of legislation and radical change to the insidious nature of electronics production is crucial in order to achieve a long-term reduction in e-waste, the user is currently considered only in terms of 'ethical' consumption and is ultimately divested of responsibility through participation in corporate, state and civil recycling and e-waste management operations.
The message is either 'careful what you purchase' or 'careful how you throw it away' and, like DRM, it ignores the connections between product, producer and user, and the consumer support for environmentally, ethically and socially positive production, distribution, disposal and recycling. This article has adopted a different strategy, one that sees digital distribution platforms like Steam as capable, if not currently active, in supporting community practices that should be seriously considered in conjunction with a range of approaches to the challenge of obsolescence and e-waste.

References

Anderson, Chris. "The Long Tail." Wired Magazine 12.10 (2004). 20 Apr. 2009 ‹http://www.wired.com/wired/archive/12.10/tail.html›.
De Certeau, Michel. The Practice of Everyday Life. Berkeley: U of California P, 1984.
Dovey, Jon, and Helen Kennedy. Game Cultures: Computer Games as New Media. London: Open University Press, 2006.
Fitzpatrick, Kathleen. The Anxiety of Obsolescence. Nashville: Vanderbilt UP, 2008.
Flew, Terry. New Media: An Introduction. South Melbourne: Oxford UP, 2008.
Leahy, Brian. "Live Blog: DICE 2009 Keynote – Gabe Newell, Valve Software." The Feed. G4TV 18 Feb. 2009. 16 Apr. 2009 ‹http://g4tv.com/thefeed/blog/post/693342/Live-Blog-DICE-2009-Keynote-–-Gabe-Newell-Valve-Software.html›.
Meer, Alec. "Unreal Tournament 3 and the New Lazarus Effect." Rock, Paper, Shotgun 16 Mar. 2009. 24 Apr. 2009 ‹http://www.rockpapershotgun.com/2009/03/16/unreal-tournament-3-and-the-new-lazarus-effect/›.
Moore, Christopher. "Commonising the Enclosure: Online Games and Reforming Intellectual Property Regimes." Australian Journal of Emerging Technologies and Society 3.2 (2005). 12 Apr. 2009 ‹http://www.swin.edu.au/sbs/ajets/journal/issue5-V3N2/abstract_moore.htm›.
Moore, Christopher. "Creative Choices: Changes to Australian Copyright Law and the Future of the Public Domain." Media International Australia 114 (Feb. 2005): 71–83.
Postigo, Hector. "Of Mods and Modders: Chasing Down the Value of Fan-Based Digital Game Modification." Games and Culture 2 (2007): 300–13.
Robinson, Daniel. "Windows XP Support Runs Out Next Week." PC Business Authority 8 Apr. 2009. 16 Apr. 2009 ‹http://www.pcauthority.com.au/News/142013,windows-xp-support-runs-out-next-week.aspx›.
Slade, Giles. Made to Break: Technology and Obsolescence in America. Cambridge: Harvard UP, 2006.
Straw, Will. "Exhausted Commodities: The Material Culture of Music." Canadian Journal of Communication 25.1 (2000): 175.
Valve. "Steam and Game Stats." 26 Apr. 2009 ‹http://store.steampowered.com/stats/›.
Valve. "Team Fortress 2: The Scout Update." Steam Marketing Message 20 Feb. 2009. 12 Apr. 2009 ‹http://storefront.steampowered.com/Steam/Marketing/message/2269/›.
Webb, Richard. "Online Shopping and the Harry Potter Effect." New Scientist 2687 (2008): 52–55. 16 Apr. 2009 ‹http://www.newscientist.com/article/mg20026873.300-online-shopping-and-the-harry-potter-effect.html?page=2›.

With thanks to Dr Nicola Evans and Dr Frances Steel for their feedback and comments on drafts of this paper.
22

Perfeito, Ana, and Bruno Mendes Silva. "Moda Vestra - um espetáculo de cinema ao vivo sobre o Algarve." AVANCA | CINEMA, May 11, 2020. http://dx.doi.org/10.37390/ac.v0i0.71.

Abstract:
“Moda Vestra” is a collective of artists from the Algarve who perform visuals and music in real time. They play classical instruments, such as the accordion and bass guitar, and use computer software and interfaces for the electronic music and live visuals. We define this language as Live Cinema: 'live' because both the musicians and the visual artist perform in real time, and 'cinema' because there is a story to tell the audience, using moving images with sound. This article has two main objectives. The first is to draw up a state of the art of the phenomenon known as Live Cinema, which intersects with genres such as audiovisual performance and concepts such as real time. The second is to analyse the concept, morphology and working methodology of the “Moda Vestra” collective, which has also become a phenomenon because of its alternative way of telling stories from a stage.
23

Jones, Steve. "Seeing Sound, Hearing Image." M/C Journal 2, no. 4 (1999). http://dx.doi.org/10.5204/mcj.1763.

Abstract:
“As the old technologies become automatic and invisible, we find ourselves more concerned with fighting or embracing what’s new” —Dennis Baron, From Pencils to Pixels: The Stage of Literacy Technologies

Popular music is firmly rooted within realist practice, or what has been called the "culture of authenticity" associated with modernism. As Lawrence Grossberg notes, the acceleration of the rate of change in modern life caused, in post-war youth culture, an identity crisis or "lived contradiction" that gave rock (particularly) and popular music (generally) a peculiar position in regard to notions of authenticity. Grossberg places rock's authenticity within the "difference" it maintains from other cultural forms, and notes that its difference "can be justified aesthetically or ideologically, or in terms of the social position of the audiences, or by the economics of its production, or through the measure of its popularity or the statement of its politics" (205-6). Popular music scholars have not adequately addressed issues of authenticity and individuality. Two of the most important questions to be asked are: How is authenticity communicated in popular music? What is the site of the interpretation of authenticity? It is important to ask about sound, technology, about the attempt to understand the ideal and the image, the natural and artificial. It is these that make clear the strongest connections between popular music and contemporary culture. Popular music is a particularly appropriate site for the study of authenticity as a cultural category, for several reasons. For one thing, other media do not follow us, as aural media do, into malls, elevators, cars, planes. Nor do they wait for us, as a tape player paused and ready to play. What is important is not that music is "everywhere" but, to borrow from Vivian Sobchack, that it creates a "here" that can be transported anywhere.
In fact, we are able to walk around enveloped by a personal aural environment, thanks to a Sony Walkman.1 Also, it is more difficult to shut out the aural than the visual. Closing one's ears does not entirely shut out sound. There is, additionally, the sense that sound and music are interpreted from within, that is, that they resonate through and within the body, and as such engage with one's self in a fashion that coincides with Charles Taylor's claim that the "ideal of authenticity" is an inner-directed one. It must be noted that authenticity is not, however, communicated only via music, but also via text and image. Grossberg noted the "primacy of sound" in rock music, and the important link between music, visual image, and authenticity: Visual style as conceived in rock culture is usually the stage for an outrageous and self-conscious inauthenticity... . It was here -- in its visual presentation -- that rock often most explicitly manifested both an ironic resistance to the dominant culture and its sympathies with the business of entertainment ... . The demand for live performance has always expressed the desire for the visual mark (and proof) of authenticity. (208) But that relationship can also be reversed: Music and sound serve in some instances to provide the aural mark and proof of authenticity. Consider, for instance, the "tear" in the voice that Jensen identifies in Hank Williams's singing, and in that of Patsy Cline. For the latter, voicing, in this sense, was particularly important, as it meant more than a singing style; it also involved matters of self-identity, as Jensen appropriately associates with the move of country music from "hometown" to "uptown" (101). Cline's move toward a more "uptown" style involved her visual image, too. At a significant turning point in her career, Faron Young noted, Cline "left that country girl look in those western outfits behind and opted for a slicker appearance in dresses and high fashion gowns" (Jensen 101).
Popular music has forged a link with visual media, and in some sense music itself has become more visual (though not necessarily less aural) the more it has engaged with industrial processes in the entertainment industry. For example, engagement with music videos and film soundtracks has made music a part of the larger convergence of mass media forms. Alongside that convergence, the use of music in visual media has come to serve as adjunct to visual symbolisation. One only need observe the increasingly commercial uses to which music is put (as in advertising, film soundtracks and music videos) to note ways in which music serves image. In the literature from a variety of disciplines, including communication, art and music, it has been argued that music videos are the visualisation of music. But in many respects the opposite is true. Music videos are the auralisation of the visual. Music serves many of the same purposes as sound does generally in visual media. One can find a strong argument for the use of sound as supplement to visual media in Silverman's and Altman's work. For Silverman, sound in cinema has largely been overlooked (pun intended) in favor of the visual image, but sound is a more effective (and perhaps necessary) element for willful suspension of disbelief. One may see this as well in the development of Dolby Surround Sound, and in increased emphasis on sound engineering among video and computer game makers, as well as the development of sub-woofers and high-fidelity speakers as computer peripherals. Another way that sound has become more closely associated with the visual is through the ongoing evolution of marketing demands within the popular music industry that increasingly rely on visual media and force image to the front. Internet technologies, particularly the WorldWideWeb (WWW), are also evidence of a merging of the visual and aural (see Hayward). 
The development of low-cost desktop video equipment and WWW publishing, CD-i, CD-ROM, DVD, and other technologies, has meant that visual images continue to form part of the industrial routine of the music business. The decrease in cost of many of these technologies has also led to the adoption of such routines among individual musicians, small/independent labels, and producers seeking to mimic the resources of major labels (a practice that has become considerably easier via the Internet, as it is difficult to determine capital resources solely from a WWW site). Yet there is another facet to the evolution of the link between the aural and visual. Sound has become more visual by way of its representation during its production (a representation, and process, that has largely been ignored in popular music studies). That representation has to do with the digitisation of sound, and the subsequent transformation sound and music can undergo after being digitised and portrayed on a computer screen. Once digitised, sound can be made visual in any number of ways, through traditional methods like music notation, through representation as audio waveform, by way of MIDI notation, bit streams, or through representation as shapes and colors (as in recent software applications particularly for children, like Making Music by Morton Subotnick). The impetus for these representations comes from the desire for increased control over sound (see Jones, Rock Formation) and such control seems most easily accomplished by way of computers and their concomitant visual technologies (monitors, printers). To make computers useful tools for sound recording it is necessary to employ some form of visual representation for the aural, and the flexibility of modern computers allows for new modes of predominately visual representation. 
Each of these connections between the aural and visual is in turn related to technology, for as audio technology develops within the entertainment industry it makes sense for synergistic development to occur with visual media technologies. Yet popular music scholars routinely analyse aural and visual media in isolation from one another. The challenge for popular music studies and music philosophy posed by visual media technologies, that they must attend to spatiality and context (both visual and aural), has not been taken up. Until such time as it is, it will be difficult, if not impossible, to engage issues of authenticity, because they will remain rootless instead of situated within the experience of music as fully sensual (in some cases even synaesthetic). Most of the traditional judgments of authenticity among music critics and many popular music scholars involve space and time, the former in terms of the movement of music across cultures and the latter in terms of history. None rely on notions of the "situatedness" of the listener or musicmaker in a particular aural, visual and historical space. Part of the reason for the lack of such an understanding arises from the very means by which popular music is created. We have become accustomed to understanding music as manipulation of sound, and so far as most modern music production is concerned such manipulation occurs as much visually as aurally, by cutting, pasting and otherwise altering audio waveforms on a computer screen. Musicians no more record music than they record fingering; they engage in sound recording. And recording engineers and producers rely less and less on sound and more on sight to determine whether a recording conforms to the demands of digital reproduction.2 Sound, particularly when joined with the visual, becomes a means to build and manipulate the environment, virtual and non-virtual (see Jones, "Sound"). 
Sound & Music

As we construct space through sound, both in terms of audio production (e.g., the use of reverberation devices in recording studios) and in terms of everyday life (e.g., perception of aural stimuli, whether by ear or vibration in the body, from points surrounding us), we centre it within experience. Sound combines the psychological and physiological. Audio engineer George Massenburg noted that in film theaters: You couldn't utilise the full 360-degree sound space for music because there was an "exit sign" phenomena [sic]. If you had a lot of audio going on in the back, people would have a natural inclination to turn around and stare at the back of the room. (Massenburg 79-80) However, he went on to say, beyond observations of such reactions to multichannel sound technology, "we don't know very much". Research in psychoacoustics being used to develop virtual audio systems relies on such reactions and on a notion of human hardwiring for stimulus response (see Jones, "Sense"). But a major stumbling block toward the development of those systems is that none are able to account for individual listeners' perceptions. It is therefore important to consider the individual along with the social dimension in discussions of sound and music. For instance, the term "sound" is deployed in popular music to signify several things, all of which have to do with music or musical performance, but none of which is music. So, for instance, musical groups or performers can have a "sound", but it is distinguishable from what notes they play. Entire music scenes can have "sounds", but the music within such scenes is clearly distinct and differentiated. For the study of popular music this is a significant but often overlooked dimension. As Grossberg argues, "the authenticity of rock was measured by its sound" (207). Visually, he says, popular music is suspect and often inauthentic (sometimes purposefully so), and it is grounded in the aural.
Similarly, in country music, Jensen notes that the "Nashville Sound" continually evoked conflicting definitions among fans and musicians, but that: The music itself was the arena in and through which claims about the Nashville Sound's authenticity were played out. A certain sound (steel guitar, with fiddle) was deemed "hard" or "pure" country, in spite of its own commercial history. (84) One should, therefore, attend to the interpretive acts associated with sound and its meaning. But why has popular music studies not engaged in systematic analysis of sound at the level of the individual as well as the social? As John Shepherd put it, "little cultural theoretical work in music is concerned with music's sounds" ("Value" 174). Why should this be a cause for concern? First, because Shepherd claims that sound is not "meaningful" in the traditional sense. Second, because it leads us to re-examine the question long set to the side in popular music studies: What is music? The structural homology, the connection between meaning and social formation, is a foundation upon which the concept of authenticity in popular music stands. Yet the ability to label a particular piece of music "good" shifts from moment to moment, and place to place. Frith understates the problem when he writes that "it is difficult ... to say how musical texts mean or represent something, and it is difficult to isolate structures of musical creation or control" (56). Shepherd attempts to overcome this difficulty by emphasising that: Music is a social medium in sound. What [this] means ... is that the sounds of music provide constantly moving and complex matrices of sounds in which individuals may invest their own meanings ... [however] while the matrices of sounds which seemingly constitute an individual "piece" of music can accommodate a range of meanings, and thereby allow for negotiability of meaning, they cannot accommodate all possible meanings.
(Shepherd, "Art") It must be acknowledged that authenticity is constructed, and that in itself is an argument against the most common way to think of authenticity. If authenticity implies something about the "pure" state of an object or symbol then surely such a state is connected to some "objective" rendering, one not possible according to Shepherd's claims. In some sense, then, authenticity is autonomous, its materialisation springs not from any necessary connection to sound, image, text, but from individual acts of interpretation, typically within what in literary criticism has come to be known as "interpretive communities". It is not hard to illustrate the point by generalising and observing that rock's notion of authenticity is captured in terms of songwriting, but that songwriters are typically identified with places (e.g. Tin Pan Alley, the Brill Building, Liverpool, etc.). In this way there is an obvious connection between authenticity and authorship (see Jones, "Popular Music Studies") and geography (as well in terms of musical "scenes", e.g. the "Philly Sound", the "Sun Sound", etc.). The important thing to note is the resultant connection between the symbolic and the physical worlds rooted (pun intended) in geography. As Redhead & Street put it: The idea of "roots" refers to a number of aspects of the musical process. There is the audience in which the musician's career is rooted ... . Another notion of roots refers to music. Here the idea is that the sounds and the style of the music should continue to resemble the source from which it sprang ... . The issue ... can be detected in the argument of those who raise doubts about the use of musical high-technology by African artists. A final version of roots applies to the artist's sociological origins. 
(180) It is important, consequently, to note that new technologies, particularly ones associated with the distribution of music, are of increasing importance in regulating the tension between alienation and progress mentioned earlier, as they are technologies not simply of musical production and consumption, but of geography. That the tension they mediate is most readily apparent in legal skirmishes during an unsettled era for copyright law (see Brown) should not distract scholars from understanding their cultural significance. These technologies are, on the one hand, "liberating" (see Hayward, Young, and Marsh) insofar as they permit greater geographical "reach" and thus greater marketing opportunities (see Fromartz), but on the other hand they permit less commercial control, insofar as they permit digitised music to freely circulate without restriction or compensation, to the chagrin of copyright enthusiasts. They also create opportunities for musical collaboration (see Hayward) between performers in different zones of time and space, on a scale unmatched since the development of multitracking enabled the layering of sound. Most importantly, these technologies open spaces for the construction of authenticity that have hitherto been unavailable, particularly across distances that have largely separated cultures and fan communities (see Paul). The technologies of Internetworking provide yet another way to make connections between authenticity, music and sound. Community and locality (as Redhead & Street, as well as others like Sara Cohen and Ruth Finnegan, note) are the elements used by audience and artist alike to understand the authenticity of a performer or performance. The lived experience of an artist, in a particular nexus of time and space, is to be somehow communicated via music and interpreted "properly" by an audience. But technologies of Internetworking permit the construction of alternative spaces, times and identities. 
In no small way that has also been the situation with the mediation of music via most recordings. They are constructed with a sense of space, consumed within particular spaces, at particular times, in individual, most often private, settings. What the network technologies have wrought is a networked audience for music that is linked globally but rooted in the local. To put it another way, the range of possibilities when it comes to interpretive communities has widened, but the experience of music has not significantly shifted, that is, the listener experiences music individually, and locally. Musical activity, whether it is defined as cultural or commercial practice, is neither flat nor autonomous. It is marked by ever-changing tastes (hence not flat) but within an interpretive structure (via "interpretive communities"). Musical activity must be understood within the nexus of the complex relations between technical, commercial and cultural processes. As Jensen put it in her analysis of Patsy Cline's career: Those who write about culture production can treat it as a mechanical process, a strategic construction of material within technical or institutional systems, logical, rational, and calculated. But Patsy Cline's recording career shows, among other things, how this commodity production view must be linked to an understanding of culture as meaning something -- as defining, connecting, expressing, mattering to those who participate with it. (101) To achieve that type of understanding will require that popular music scholars understand authenticity and music in a symbolic realm. Rather than conceiving of authenticity as a limited resource (that is, there is only so much that is "pure" that can go around), it is important to foreground its symbolic and ever-changing character. Put another way, authenticity is not used by musician or audience simply to label something as such, but rather to mean something about music that matters at that moment. 
Authenticity therefore does not somehow "slip away", nor does a "pure" authentic exist. Authenticity in this regard is, as Baudrillard explains concerning mechanical reproduction, "conceived according to (its) very reproducibility ... there are models from which all forms proceed according to modulated differences" (56). Popular music scholars must carefully assess the affective dimensions of fans, musicians, and also record company executives, recording producers, and so on, to be sensitive to the deeply rooted construction of authenticity and authentic experience throughout musical processes. Only then will there emerge an understanding of the structures of feeling that are central to the experience of music. Footnotes 1. For analyses of the Walkman's role in social settings and popular music consumption see du Gay; Hosokawa; and Chen. 2. It has been thus since the advent of disc recording, when engineers would watch a record's grooves through a microscope lens as it was being cut to ensure grooves would not cross over one into another. References Altman, Rick. "Television/Sound." Studies in Entertainment. Ed. Tania Modleski. Bloomington: Indiana UP, 1986. 39-54. Baudrillard, Jean. Symbolic Exchange and Death. London: Sage, 1993. Brown, Ronald. Intellectual Property and the National Information Infrastructure: The Report of the Working Group on Intellectual Property Rights. Washington, DC: U.S. Department of Commerce, 1995. Chen, Shing-Ling. "Electronic Narcissism: College Students' Experiences of Walkman Listening." Annual meeting of the International Communication Association. Washington, D.C. 1993. Du Gay, Paul, et al. Doing Cultural Studies. London: Sage, 1997. Frith, Simon. Sound Effects. New York: Pantheon, 1981. Fromartz, Steven. "Start-ups Sell Garage Bands, Bowie on Web." Reuters newswire, 4 Dec. 1996. Grossberg, Lawrence. We Gotta Get Out of This Place. London: Routledge, 1992. Hayward, Philip. "Enterprise on the New Frontier." 
Convergence 1.2 (Winter 1995): 29-44. Hosokawa, Shuhei. "The Walkman Effect." Popular Music 4 (1984). Jensen, Joli. The Nashville Sound: Authenticity, Commercialisation and Country Music. Nashville: Vanderbilt UP, 1998. Jones, Steve. Rock Formation: Music, Technology and Mass Communication. Newbury Park, CA: Sage, 1992. ---. "Popular Music Studies and Critical Legal Studies." Stanford Humanities Review 3.2 (Fall 1993): 77-90. ---. "A Sense of Space: Virtual Reality, Authenticity and the Aural." Critical Studies in Mass Communication 10.3 (Sep. 1993): 238-52. ---. "Sound, Space & Digitisation." Media Information Australia 67 (Feb. 1993): 83-91. Marsh, Brian. "Musicians Adopt Technology to Market Their Skills." Wall Street Journal 14 Oct. 1994: C2. Massenburg, George. "Recording the Future." EQ (Apr. 1997): 79-80. Paul, Frank. "R&B: Soul Music Fans Make Cyberspace Their Meeting Place." Reuters newswire, 11 July 1996. Redhead, Steve, and John Street. "Have I the Right? Legitimacy, Authenticity and Community in Folk's Politics." Popular Music 8.2 (1989). Shepherd, John. "Art, Culture and Interdisciplinarity." Davidson Dunston Research Lecture. Carleton University, Canada. 3 May 1992. ---. "Value and Power in Music." The Sound of Music: Meaning and Power in Culture. Eds. John Shepherd and Peter Wicke. Cambridge: Polity, 1993. Silverman, Kaja. The Acoustic Mirror. Bloomington: Indiana UP, 1988. Sobchack, Vivian. Screening Space. New York: Ungar, 1982. Young, Charles. "Aussie Artists Use Internet and Bootleg CDs to Protect Rights." Pro Sound News July 1995. Citation reference for this article MLA style: Steve Jones. "Seeing Sound, Hearing Image: 'Remixing' Authenticity in Popular Music Studies." M/C: A Journal of Media and Culture 2.4 (1999). [your date of access] <http://www.uq.edu.au/mc/9906/remix.php>. Chicago style: Steve Jones, "Seeing Sound, Hearing Image: 'Remixing' Authenticity in Popular Music Studies," M/C: A Journal of Media and Culture 2, no. 
4 (1999), <http://www.uq.edu.au/mc/9906/remix.php> ([your date of access]). APA style: Steve Jones. (1999) Seeing Sound, Hearing Image: "Remixing" Authenticity in Popular Music Studies. M/C: A Journal of Media and Culture 2(4). <http://www.uq.edu.au/mc/9906/remix.php> ([your date of access]).
APA, Harvard, Vancouver, ISO, and other styles
24

Brown, Andrew R. "Code Jamming." M/C Journal 9, no. 6 (2006). http://dx.doi.org/10.5204/mcj.2681.

Full text
Abstract:

 
 
 Jamming culture has become associated with digital manipulation and reuse of materials. As well, the term jamming has long been used by musicians (and other performers) to mean improvisation, especially in collaborative situations. A practice that gets to the heart of both these meanings is live coding, in which digital content (music and/or visuals predominantly) is created through computer programming as a performance. During live coding performances digital content is created and presented in real time. Normally, the code from the performer's screen is displayed via data projection so that the audience can see the unfolding process as well as see or hear the artistic outcome. This article will focus on live coding of music, but the issues it raises for jamming culture apply to other mediums also. Live coding of music uses the computer as an instrument, which is "played" by the direct construction and manipulation of sonic and musical processes. Gestural control involves typing at the computer keyboard but, unlike traditional "keyboard" instruments, these key gestures are usually indirect in their effect on the sonic result because they result in programming language text which is then interpreted by the computer. Some live coding performers, notably Amy Alexander, have played on the duality of the keyboard as direct and indirect input source by using it as a text entry device, an audio trigger, and a performance prop. In most cases, keyboard typing produces notational description during live coding performances as a form of indirect music making, related to what may previously have been called composing or conducting, where sound generation is controlled rather than triggered. The computer system becomes a performer, and the degree of interpretive autonomy allocated to the computer can vary widely, but is typically limited to probabilistic choices, structural processes and use of pre-established sound generators. 
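The kind of process described above, a coded structure whose note-level decisions are delegated to probabilistic choices, can be sketched in a few lines. This is a generic illustration in Python, not drawn from any particular live coding environment; the scale, weights and rest probability are invented for the example.

```python
import random

# A minimal sketch of a pre-established "sound generator" a live coder
# might type: the bar structure is coded, while note choices are
# delegated to weighted randomness (the "probabilistic choices" above).

SCALE = [60, 62, 64, 67, 69]     # MIDI pitches: C major pentatonic (assumed)
WEIGHTS = [4, 1, 2, 3, 1]        # bias the probabilistic pitch choice

def phrase(beats, rng):
    """Generate (pitch, duration) events; occasional rests thin the texture."""
    events = []
    for _ in range(beats):
        if rng.random() < 0.2:                      # probabilistic rest
            events.append((None, 0.25))
        else:
            pitch = rng.choices(SCALE, weights=WEIGHTS)[0]
            events.append((pitch, rng.choice([0.25, 0.5])))
    return events

rng = random.Random(7)   # seeded so the "performance" is repeatable
bar = phrase(8, rng)
print(bar)
```

Re-evaluating `phrase` with different weights mid-performance is the kind of on-the-fly modification live coding environments are built around.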
In live coding practices, the code is a medium of expression through which creative ideas are articulated. The code acts as a notational representation of computational processes. It not only leads to the sonic outcome but also is available for reflection, reuse and modification. The aspects of music described by the code are open to some variation, especially in relation to choices about music or sonic granularity. This granularity continuum ranges from a focus on sound synthesis at one end of the scale to the structural organisation of musical events or sections at the other end. Regardless of the level of content granularity being controlled, when jamming with code the time constraints of the live performance environment force the performer to develop succinct and parsimonious expressions and to create processes that sustain activity (often using repetition, iteration and evolution) in order to maintain a coherent and developing musical structure during the performance. As a result, live coding requires not only new performance skills but also new ways of describing the structures and processes that create music. Jamming activities are additionally complex when they are collaborative. Live coding performances can often be collaborative, either between several musicians and/or between music and visual live coders. Issues that arise in collaborative settings are both creative and technical. When collaborating between performers in the same output medium (e.g., two musicians) the roles of each performer need to be defined. When a pianist and a vocalist improvise, the harmonic and melodic roles are relatively obvious, but two laptop performers are more like a guitar duo where each can take any lead, supportive, rhythmic, harmonic, melodic, textural or other function. Prior organisation and sensitivity to the needs of the unfolding performance are required, as they have always been in musical improvisations. 
At the technical level it may be necessary for computers to be networked so that timing information, at least, is shared. Various network protocols, most commonly Open Sound Control (OSC), are used for this purpose. Another collaboration takes place in live coding: that between the performer and the computer, especially where the computational processes are generative (as is often the case). This real-time interaction between musician and algorithmic process has been termed Hyperimprovisation by Roger Dean. Jamming cultures that focus on remixing often value the sharing of resources, especially through the movement and treatment of content artefacts such as audio samples and digital images. In live coding circles there is a similarly strong culture of resource sharing, but live coders are mostly concerned with sharing techniques, processes and tools. In recognition of this, it is quite common that, when distributing works, live coding artists will include descriptions of the processes used to create the work and even share the code. This practice is also common in the broader computational arts community, as evident in the sharing of Flash code on sites such as Levitated by Jared Tarbell, in the Processing site (Reas & Fry), or in publications such as Flash Math Creativity (Peters et al.). Also underscoring this culture of sharing is a prioritising of reputation above (or prior to) profit. As a result of these social factors most live coding tools are freely distributed. Live coding tools have become more common in the past few years. There are a number of personalised systems that utilise various different programming languages and environments. Some of the more polished programs that can be used widely include SuperCollider (McCartney), ChucK (Wang & Cook) and Impromptu (Sorensen). 
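The OSC messages mentioned above for sharing timing between networked machines have a simple binary layout defined by the OSC 1.0 specification: a null-padded address string, a type-tag string beginning with a comma, then big-endian argument values. A minimal encoder sketch follows; the `/clock/beat` address and the tempo/beat values are made up for illustration.

```python
import struct

# Minimal OSC 1.0 message encoder (ints, floats and strings only).
# Real deployments would use a library such as python-osc; this sketch
# just shows the wire format a live coding rig might exchange.

def osc_pad(b: bytes) -> bytes:
    """OSC strings are null-terminated and padded to a 4-byte boundary."""
    b += b"\x00"
    return b + b"\x00" * (-len(b) % 4)

def osc_message(address: str, *args) -> bytes:
    tags, payload = ",", b""
    for a in args:
        if isinstance(a, int):
            tags += "i"
            payload += struct.pack(">i", a)      # 32-bit big-endian int
        elif isinstance(a, float):
            tags += "f"
            payload += struct.pack(">f", a)      # 32-bit big-endian float
        else:
            tags += "s"
            payload += osc_pad(str(a).encode())
    return osc_pad(address.encode()) + osc_pad(tags.encode()) + payload

# e.g. broadcast a shared tempo and beat number to collaborators
msg = osc_message("/clock/beat", 120.0, 16)
print(len(msg), msg[:12])
```

The resulting bytes would typically be sent over UDP to each collaborator's machine so that all processes agree on the current beat.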
While these environments all use different languages and varying ways of dealing with sound structure granularity, they do share some common aspects that reveal the priorities and requirements of live coding. Firstly, they are dynamic environments where the musical/sonic processes are not interrupted by modifications to the code; changes can be made on the fly and code is modifiable at runtime. Secondly, they are text-based and quite general programming environments, which means that the full leverage of abstract coding structures can be applied during live coding performances. Thirdly, they all prioritise time, both at architectural and syntactic levels. They are designed for real-time performance where events need to occur reliably. The text-based nature of these tools means that using them in live performance is barely distinguishable from any other computer task, such as writing an email, and thus the practice of projecting the environment to reveal the live process has become standard in the live coding community as a way of communicating with an audience (Collins). It is interesting to reflect on how audiences respond to the projection of code as part of live coding performances. In the author’s experience as both an audience member and live coding performer, the reception has varied widely. Most people seem to find it curious and comforting. Even if they cannot follow the code, they understand or are reassured that the performance is being generated by the code. Those who understand the code often report a sense of increased anticipation as they see structures emerge, and sometimes opportunities missed. Some people dislike the projection of the code, and see it as a distasteful display of virtuosity or as a distraction to their listening experience. The live coding practitioners tend to see the projection of code as a way of revealing the underlying generative and gestural nature of their performance. 
For some, such as Julian Rohrhuber, code projection is a way of revealing ideas and their development during the performance. "The incremental process of livecoding really is what makes it an act of public reasoning" (Rohrhuber). For both audience and performer, live coding is an explicitly risky venture, and this element of public risk taking has long been central to the appreciation of the performing arts (not to mention sport and other cultural activities). The place of live coding in the broader cultural setting is still being established. It certainly is a form of jamming, or improvisation; it also involves the generation of digital content and the remixing of cultural ideas and materials. In some ways it is also connected to instrument building. Live coding practices prioritise process and therefore have a link with conceptual visual art and serial music composition movements from the 20th century. Much of the music produced by live coding has aesthetic links, naturally enough, to electronic music genres including musique concrète, electronic dance music, glitch music, noise art and minimalism: a grouping that is not overly coherent beyond a shared concern for processes and systems. Live coding is receiving greater popular and academic attention, as is evident in recent articles in Wired (Andrews), ABC Online (Martin) and media culture blogs including The Teeming Void (Whitelaw 2006). Whatever its future profile in the broader cultural sector, the live coding community continues to grow and flourish amongst enthusiasts. The TOPLAP site is a hub of live coding activities and links prominent practitioners including Alex McLean, Nick Collins, Adrian Ward, Julian Rohrhuber, Amy Alexander, Frederick Olofsson, Ge Wang, and Andrew Sorensen. These people and many others are exploring live coding as a form of jamming in digital media and as a way of creating new cultural practices and works. References Andrews, R. "Real DJs Code Live." Wired: Technology News 6 July 2006. 
http://www.wired.com/news/technology/0,71248-0.html. Collins, N. "Generative Music and Laptop Performance." Contemporary Music Review 22.4 (2004): 67-79. Fry, Ben, and Casey Reas. Processing. http://processing.org/. Martin, R. "The Sound of Invention." Catapult. ABC Online 2006. http://www.abc.net.au/catapult/indepth/s1725739.htm. McCartney, J. "SuperCollider: A New Real-Time Sound Synthesis Language." The International Computer Music Conference. San Francisco: International Computer Music Association, 1996. 257-258. Peters, K., M. Tan, and M. Jamie. Flash Math Creativity. Berkeley, CA: Friends of ED, 2004. Reas, Casey, and Ben Fry. "Processing: A Learning Environment for Creating Interactive Web Graphics." International Conference on Computer Graphics and Interactive Techniques. San Diego: ACM SIGGRAPH, 2003. 1. Rohrhuber, J. Post to a Live Coding email list. livecode@slab.org. 10 Sep. 2006. Sorensen, A. "Impromptu: An Interactive Programming Environment for Composition and Performance." In Proceedings of the Australasian Computer Music Conference 2005. Eds. A. R. Brown and T. Opie. Brisbane: ACMA, 2005. 149-153. Tarbell, Jared. Levitated. http://www.levitated.net/daily/index.html. TOPLAP. http://toplap.org/. Wang, G., and P.R. Cook. "ChucK: A Concurrent, On-the-fly, Audio Programming Language." International Computer Music Conference. ICMA, 2003. 219-226. Whitelaw, M. "Data, Code & Performance." The Teeming Void 21 Sep. 2006. http://teemingvoid.blogspot.com/2006/09/data-code-performance.html. 
 
 
 
 Citation reference for this article
 
 MLA Style
 Brown, Andrew R. "Code Jamming." M/C Journal 9.6 (2006). [your date of access] <http://journal.media-culture.org.au/0612/03-brown.php>. APA Style
 Brown, A. (Dec. 2006) "Code Jamming," M/C Journal, 9(6). Retrieved [your date of access] from <http://journal.media-culture.org.au/0612/03-brown.php>. 
APA, Harvard, Vancouver, ISO, and other styles
25

Koguchi, Hideo, and Kazuhisa Hoshi. "Evaluation of Joining Strength of Silicon-Resin Interface at a Vertex in a Three-Dimensional Joint Structure." Journal of Electronic Packaging 134, no. 2 (2012). http://dx.doi.org/10.1115/1.4006139.

Full text
Abstract:
Portable electronic devices such as mobile phones and portable music players have become more compact while improving in performance. High-density packaging technology such as chip size package (CSP) and stacked-CSP is used for improving the performance of devices. CSP has a bonded structure composed of materials with different properties. A mismatch of material properties may cause a stress singularity, which leads to the failure of the bonding part in structures. In the present paper, stress analysis using the boundary element method and an eigenvalue analysis using the finite element method are used for evaluating the intensity of a singularity at a vertex in three-dimensional joints. A three-dimensional boundary element program based on the fundamental solution for two-phase isotropic materials is used for calculating the stress distribution in a three-dimensional joint. The angular function in the singular stress field at the vertex in the three-dimensional joint is calculated using an eigenvector determined from the eigenvalue analysis. The joining strength of the interface in several kinds of silicon-resin specimens with different triangular bonding areas is investigated analytically and experimentally. First, an experiment for debonding the interface in the joints is carried out. Second, a stress singularity analysis for the three-dimensional joints subjected to an external force for debonding the joints is conducted. Combining the results of the experiment and the analysis yields a final stress distribution for evaluating the strength of the interface. Finally, a relationship between the force for delamination and the bonding area of the joints is derived, and a critical value of the 3D intensity of the singularity is determined.
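The "intensity of the singularity", "eigenvalue" and "angular function" in this abstract refer to the standard asymptotic form of the stress field near a vertex in a bonded joint. As a sketch in generic notation (the paper's own symbols may differ):

```latex
% Asymptotic stress field near the vertex (distance r, angles \theta, \phi):
\sigma_{ij}(r,\theta,\phi) \;\approx\; \sum_{k} K_k \, r^{\lambda_k - 1} \, f_{ij,k}(\theta,\phi)
```

Here the exponents $\lambda_k$ come from the eigenvalue analysis (a term is singular when $0 < \lambda_k < 1$, since the stress then grows without bound as $r \to 0$), $f_{ij,k}$ are the angular functions obtained from the eigenvectors, and $K_k$ is the intensity of the singularity whose critical value the paper determines.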
APA, Harvard, Vancouver, ISO, and other styles
26

Collins, Steve. "Amen to That." M/C Journal 10, no. 2 (2007). http://dx.doi.org/10.5204/mcj.2638.

Full text
Abstract:

 
 
In 1956, John Cage predicted that "in the future, records will be made from records" (Duffel, 202). Certainly, musical creativity has always involved a certain amount of appropriation and adaptation of previous works. For example, Vivaldi appropriated and adapted the "Cum sancto spiritu" fugue of Ruggieri's Gloria (Burnett, 4; Forbes, 261). If stuck for a guitar solo on stage, Keith Richards admits that he'll adapt Buddy Holly for his own purposes (Street, 135). Similarly, Nirvana adapted the opening riff from Killing Joke's "Eighties" for their song "Come as You Are". Musical "quotation" is actively encouraged in jazz, and contemporary hip-hop would not exist if the genre's pioneers and progenitors had not plundered and adapted existing recorded music. Sampling technologies, however, have taken musical adaptation a step further and realised Cage's prediction. Hardware and software samplers have developed to the stage where any piece of audio can be appropriated and adapted to suit the creative impulses of the sampling musician (or samplist). The practice of sampling challenges established notions of creativity, with whole albums created with no original musical input as most would understand it—literally "records made from records." Sample-based music is premised on adapting audio plundered from the cultural environment. This paper explores the ways in which technology is used to adapt previous recordings into new ones, and how musicians themselves have adapted to the potentials of digital technology for exploring alternative approaches to musical creativity. Sampling is frequently defined as "the process of converting an analog signal to a digital format." While this definition remains true, it does not acknowledge the prevalence of digital media. The "analogue to digital" method of sampling requires a microphone or instrument to be recorded directly into a sampler. Digital media, however, simplifies the process. 
For example, a samplist can download a video from YouTube and rip the audio track for editing, slicing, and manipulation, all using software within the noiseless digital environment of the computer. Perhaps it is more prudent to describe sampling simply as the process of capturing sound. Regardless of the process, once a sound is loaded into a sampler (hardware or software) it can be replayed using a MIDI keyboard, trigger pad or sequencer. Use of the sampled sound, however, need not be a faithful rendition or clone of the original. At the most basic level of manipulation, the duration and pitch of sounds can be altered. The digital processes that are implemented into the Roland VariOS Phrase Sampler allow samplists to eliminate the pitch or melodic quality of a sampled phrase. The phrase can then be melodically redefined as the samplist sees fit: adapted to a new tempo, key signature, and context or genre. Similarly, software such as Propellerhead’s ReCycle slices drum beats into individual hits for use with a loop sampler such as Reason’s Dr Rex module. Once loaded into Dr Rex, the individual original drum sounds can be used to program a new beat divorced from the syncopation of the original drum beat. Further, the individual slices can be subjected to pitch, envelope (a component that shapes the volume of the sound over time) and filter (a component that emphasises and suppresses certain frequencies) control, thus an existing drum beat can easily be adapted to play a new rhythm at any tempo. For example, this rhythm was created from slicing up and rearranging Clyde Stubblefield’s classic break from James Brown’s “Funky Drummer”. Sonic adaptation of digital information is not necessarily confined to the auditory realm. An audio editor such as Sony’s Sound Forge is able to open any file format as raw audio. For example, a Word document or a Flash file could be opened with the data interpreted as audio. 
Admittedly, the majority of results obtained are harsh white noise, but there is scope for serendipitous anomalies such as a glitchy beat that can be extracted and further manipulated by audio software. Audiopaint is an additive synthesis application created by Nicolas Fournel for converting digital images into audio. Each pixel position and colour is translated into information designating frequency (pitch), amplitude (volume) and pan position in the stereo image. The user can determine which one of the three RGB channels corresponds to either of the stereo channels. Further, the oscillator for the wave form can be either the default sine wave or an existing audio file such as a drum loop can be used. The oscillator shapes the end result, responding to the dynamics of the sine wave or the audio file. Although Audiopaint labours under the same caveat as with the use of raw audio, the software can produce some interesting results. Both approaches to sound generation present results that challenge distinctions between “musical sound” and “noise”. Sampling is also a cultural practice, a relatively recent form of adaptation extending out of a time honoured creative aesthetic that borrows, quotes and appropriates from existing works to create new ones. Different fields of production, as well as different commentators, variously use terms such as “co-creative media”, “cumulative authorship”, and “derivative works” with regard to creations that to one extent or another utilise existing works in the production of new ones (Coombe; Morris; Woodmansee). The extent of the sampling may range from subtle influence to dominating significance within the new work, but the constant principle remains: an existing work is appropriated and adapted to fit the needs of the secondary creator. 
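The ReCycle-style workflow described earlier, slicing a break into individual hits and reprogramming them into a new rhythm, can be sketched schematically. Plain Python lists stand in for audio buffers here; the "break" and the new pattern are fabricated for illustration, not taken from any real recording.

```python
# Schematic sketch of slicing a sampled break into hits (ReCycle-style)
# and re-sequencing them into a new rhythm (Dr Rex-style), using lists
# of numbers in place of audio samples.

def slice_break(samples, n_hits):
    """Cut a one-bar break into n_hits equal slices."""
    hop = len(samples) // n_hits
    return [samples[i * hop:(i + 1) * hop] for i in range(n_hits)]

def resequence(hits, pattern):
    """Rebuild a bar by playing hits in a new order (indices into hits)."""
    out = []
    for idx in pattern:
        out.extend(hits[idx])
    return out

# A fake 8-hit "break": each hit is a run of identical values so the
# rearrangement is easy to see.
bar = [h for h in range(8) for _ in range(4)]          # 32 "samples"
hits = slice_break(bar, 8)
new_bar = resequence(hits, [0, 0, 2, 1, 4, 4, 6, 3])   # a new syncopation
print(new_bar)
```

In a real sampler each hit would additionally pass through pitch, envelope and filter stages, as the article describes, before being triggered at the new tempo.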
Proponents of what may be broadly referred to as the “free culture” movement argue that creativity and innovation inherently relies on the appropriation and adaptation of existing works (for example, see Lessig, Future of Ideas; Lessig, Free Culture; McLeod, Freedom of Expression; Vaidhyanathan). For example, Gwen Stefani’s 2004 release “Rich Girl” is based on Louchie Lou and Michie One’s 1994 single of the same title. Lou and One’s “Rich Girl”, in turn, is a reggae dance hall adaptation of “If I Were a Rich Man” from Fiddler on the Roof. Stefani’s “na na na” vocal riff shares the same melody as the “Ya ha deedle deedle, bubba bubba deedle deedle dum” riff from Fiddler on the Roof. Samantha Mumba adapted David Bowie’s “Ashes to Ashes” for her second single “Body II Body”. Similarly, Richard X adapted Tubeway Army’s “Are ‘Friends’ Electric?’ and Adina Howard’s “Freak Like Me” for a career saving single for Sugababes. Digital technologies enable and even promote the adaptation of existing works (Morris). The ease of appropriating and manipulating digital audio files has given rise to a form of music known variously as mash-up, bootleg, or bastard pop. Mash-ups are the most recent stage in a history of musical appropriation and they epitomise the sampling aesthetic. Typically produced in bedroom computer-based studios, mash-up artists use software such as Acid or Cool Edit Pro to cut up digital music files and reassemble the fragments to create new songs, arbitrarily adding self-composed parts if desired. Comprised almost exclusively from sections of captured music, mash-ups have been referred to as “fictional pop music” because they conjure up scenarios where, for example, Destiny’s Child jams in a Seattle garage with Nirvana or the Spice Girls perform with Nine Inch Nails (Petridis). Once the initial humour of the novelty has passed, the results can be deeply alluring. 
Mash-ups extract the distinctive characteristics of songs and place them in new, innovative contexts. As Dale Lawrence writes: "the vocals are often taken from largely reviled or ignored sources—cornball acts like Aguilera or Destiny's Child—and recast in wildly unlikely contexts … where against all odds, they actually work". Similarly, Crawford argues that "part of the art is to combine the greatest possible aesthetic dissonance with the maximum musical harmony. The pleasure for listeners is in discovering unlikely artistic complementarities and revisiting their musical memories in mutated forms" (36). Sometimes the adaptation works in the favour of the sampled artist: George Clinton claims that because of sampling he is more popular now than in 1976—"the sampling made us big again" (Green). The creative aspect of mash-ups is unlike that usually associated with musical composition and has more in common with DJing. In an effort to further clarify this aspect, we may regard DJ mixes as "mash-ups on the fly." When Grandmaster Flash recorded his quilt-pop masterpiece, "The Adventures of Grandmaster Flash on the Wheels of Steel," he did so live, demonstrating his precision and skill with turntables. Modern audio editing software facilitates the capture and storage of sound, allowing mash-up artists to manipulate sound bytes outside of "real-time" and the live performance parameters within which Flash worked. Thus, the creative element is not the traditional arrangement of chords and parts, but rather "audio contexts". If, as Riley pessimistically suggests, "there are no new chords to be played, there are no new song structures to be developed, there are no new stories to be told, and there are no new themes to explore," then perhaps it is understandable that artists have searched for new forms of musical creativity. 
The notes and chords of mash-ups are segments of existing works sequenced together to produce inter-layered contexts rather than purely tonal patterns. The merit of mash-up culture lies in its function of deconstructing the boundaries of genre and providing new musical possibilities. The process of mashing-up genres functions to critique contemporary music culture by “pointing a finger at how stifled and obvious the current musical landscape has become. … Suddenly rap doesn’t have to be set to predictable funk beats, pop/R&B ballads don’t have to come wrapped in cheese, garage melodies don’t have to recycle the Ramones” (Lawrence). According to Theodor Adorno, the Frankfurt School critic, popular music (of his time) was irretrievably simplistic and constructed from easily interchangeable, modular components (McLeod, “Confessions”, 86). A standardised and repetitive approach to musical composition fosters a mode of consumption dubbed by Adorno “quotation listening” and characterised by passive acceptance of, and obsession with, a song’s riffs (44-5). As noted by Em McAvan, Adorno’s analysis elevates the producer over the consumer, portraying a culture industry controlling a passive audience through standardised products (McAvan). The characteristics that Adorno observed in the popular music of his time are classic traits of contemporary popular music. Mash-up artists, however, are not representative of Adorno’s producers for a passive audience, instead opting to wrest creative control from composers and the recording industry and adapt existing songs in pursuit of their own creative impulses. Although mash-up productions may consciously or unconsciously criticise the current state of popular music, they necessarily exist in creative symbiosis with the commercial genres: “if pop songs weren’t simple and formulaic, it would be much harder for mashup bedroom auteurs to do their job” (McLeod, “Confessions”, 86). 
Arguably, when creating mash-ups, some individuals are expressing their dissatisfaction with the stagnation of the pop industry and are instead working to create music that they as consumers wish to hear. Sample-based music—as an exercise in adaptation—encourages a Foucauldian questioning of the composer’s authority over their musical texts. Recorded music is typically a passive medium in which the consumer receives the music in its original, unaltered form. DJ Dangermouse (Brian Burton) breached this pact to create his Grey Album, which is a mash-up of an a cappella version of Jay-Z’s Black Album and the Beatles’ eponymous album (also known as the White Album). Dangermouse says that “every kick, snare, and chord is taken from the Beatles White Album and is in their original recording somewhere.” In deconstructing the Beatles’ songs, Dangermouse turned the recordings into a palette for creating his own new work, adapting audio fragments to suit his creative impulses. As Joanna Demers writes, “refashioning these sounds and reorganising them into new sonic phrases and sentences, he creates acoustic mosaics that in most instances are still traceable to the Beatles source, yet are unmistakeably distinct from it” (139-40). Dangermouse’s approach is symptomatic of what Schütze refers to as remix culture: an open challenge to a culture predicated on exclusive ownership, authorship, and controlled distribution … . Against ownership it upholds an ethic of creative borrowing and sharing. Against the original it holds out an open process of recombination and creative transformation. It equally calls into question the categories, rifts and borders between high and low cultures, pop and elitist art practices, as well as blurring lines between artistic disciplines. Using just a laptop, an audio editor and a calculator, Gregg Gillis, a.k.a. Girl Talk, created the Night Ripper album using samples from 167 artists (Dombale). 
Although all the songs on Night Ripper are blatantly sampled-based, Gillis sees his creations as “original things” (Dombale). The adaptation of sampled fragments culled from the Top 40 is part of Gillis’ creative process: “It’s not about who created this source originally, it’s about recontextualising—creating new music. … I’ve always tried to make my own songs” (Dombale). Gillis states that his music has no political message, but is a reflection of his enthusiasm for pop music: “It’s a celebration of everything Top 40, that’s the point” (Dombale). Gillis’ “celebratory” exercises in creativity echo those of various fan-fiction authors who celebrate the characters and worlds that constitute popular culture. Adaptation through sampling is not always centred solely on music. Sydney-based Tom Compagnoni, a.k.a. Wax Audio, adapted a variety of sound bytes from politicians and media personalities including George W. Bush, Alexander Downer, Alan Jones, Ray Hadley, and John Howard in the creation of his Mediacracy E.P.. In one particular instance, Compagnoni used a myriad of samples culled from various media appearances by George W. Bush to recreate the vocals for John Lennon’s Imagine. Created in early 2005, the track, which features speeded-up instrumental samples from a karaoke version of Lennon’s original, is an immediate irony fuelled comment on the invasion of Iraq. The rationale underpinning the song is further emphasised when “Imagine This” reprises into “Let’s Give Peace a Chance” interspersed with short vocal fragments of “Come Together”. Compagnoni justifies his adaptations by presenting appropriated media sound bytes that deliberately set out to demonstrate the way information is manipulated to present any particular point of view. Playing the media like an instrument, Wax Audio juxtaposes found sounds in a way that forces the listener to confront the bias, contradiction and sensationalism inherent in their daily intake of media information. 
… Oh yeah—and it’s bloody funny hearing George W. Bush sing “Imagine”. Notwithstanding the humorous quality of the songs, Mediacracy represents a creative outlet for Compagnoni’s political opinions, one emphasised by the adaptation of Lennon’s song. Through his adaptation, Compagnoni revitalises Lennon’s sentiments about the Vietnam War and superimposes them onto US policy on Iraq. An interesting aspect of sample-based music is the recurrence of particular samples across various productions, which demonstrates that the same fragment can be adapted for a plethora of musical contexts. For example, Clyde Stubblefield’s “Funky Drummer” break is reputed to be the most sampled break in the world. The break from 1960s soul/funk band the Winstons’ “Amen Brother” (the B-side to their 1969 release “Color Him Father”), however, is another candidate for the title of “most sampled break”. The “Amen break” was revived with the advent of the sampler. Having featured heavily in early hip-hop records such as “Words of Wisdom” by Third Base and “Straight Outta Compton” by NWA, the break “appears quite adaptable to a range of music genres and tastes” (Harrison, 9m 46s). Beginning in the early 1990s, adaptations of this break became a constant of jungle music as sampling technology developed to facilitate more complex operations (Harrison, 5m 52s). The break features on Shy FX’s “Original Nutta”, L Double & Younghead’s “New Style”, Squarepusher’s “Big Acid”, and a cover version of Led Zeppelin’s “Whole Lotta Love” by Jane’s Addiction frontman Perry Farrell, to name but a few of the tracks that have adapted the break. Wikipedia offers a list of songs employing an adaptation of the “Amen break”. This list, however, falls short of the “hundreds of tracks” argued for by Nate Harrison, who notes that “an entire subculture based on this one drum loop … six seconds from 1969” has developed (8m 45s).
The “Amen break” is so ubiquitous that, much like the twelve-bar blues structure, it has become a foundational element of an entire genre and has been adapted to satisfy a plethora of creative impulses. The sheer prevalence of the “Amen break” simultaneously illustrates the creative nature of music adaptation as well as the potential for adaptation stemming from digital technology such as the sampler. The cut-up and rearrangement aspect of creative sampling technology at once suggests the original but also something new and different. Sampling in general, and the phenomenon of the “Amen break” in particular, ensures the longevity of the original sources; sample-based music exhibits characteristics acquired from the source materials, yet the illegitimate offspring are not their parents. Sampling as a technology for creatively adapting existing forms of audio has encouraged alternative approaches to musical composition. Further, it has given rise to a new breed of musician that has adapted to technologies of adaptation. Mash-up artists and samplists demonstrate that recorded music is not simply a fixed or read-only product but one that can be freed from the composer’s original arrangement to be adapted and reconfigured. Many mash-up artists such as Gregg Gillis are not trained musicians, but their ears are honed from enthusiastic consumption of music. Individuals such as DJ Dangermouse, Gregg Gillis and Tom Compagnoni appropriate, reshape and re-present the surrounding soundscape to suit diverse creative urges, thereby adapting the passive medium of recorded sound into an active production tool.
References
Adorno, Theodor. “On the Fetish Character in Music and the Regression of Listening.” The Culture Industry: Selected Essays on Mass Culture. Ed. J. Bernstein. London, New York: Routledge, 1991.
Burnett, Henry. “Ruggieri and Vivaldi: Two Venetian Gloria Settings.” American Choral Review 30 (1988): 3.
Compagnoni, Tom. “Wax Audio: Mediacracy.” Wax Audio. 2005. 2 Apr. 2007 <http://www.waxaudio.com.au/downloads/mediacracy>.
Coombe, Rosemary. The Cultural Life of Intellectual Properties. Durham, London: Duke University Press, 1998.
Demers, Joanna. Steal This Music: How Intellectual Property Law Affects Musical Creativity. Athens, London: University of Georgia Press, 2006.
Dombale, Ryan. “Interview: Girl Talk.” Pitchfork. 2006. 9 Jan. 2007 <http://www.pitchforkmedia.com/article/feature/37785/Interview_Interview_Girl_Talk>.
Duffel, Daniel. Making Music with Samples. San Francisco: Backbeat Books, 2005.
Forbes, Anne-Marie. “A Venetian Festal Gloria: Antonio Lotti’s Gloria in D Major.” Music Research: New Directions for a New Century. Eds. M. Ewans, R. Halton, and J. Phillips. London: Cambridge Scholars Press, 2004.
Green, Robert. “George Clinton: Ambassador from the Mothership.” Synthesis. Undated. 15 Sep. 2005 <http://www.synthesis.net/music/story.php?type=story&id=70>.
Harrison, Nate. “Can I Get an Amen?” Nate Harrison. 2004. 8 Jan. 2007 <http://www.nkhstudio.com>.
Lawrence, Dale. “On Mashups.” Nuvo. 2002. 8 Jan. 2007 <http://www.nuvo.net/articles/article_292/>.
Lessig, Lawrence. The Future of Ideas. New York: Random House, 2001.
———. Free Culture: How Big Media Uses Technology and the Law to Lock Down Culture and Control Creativity. New York: The Penguin Press, 2004.
McAvan, Em. “Boulevard of Broken Songs: Mash-Ups as Textual Re-Appropriation of Popular Music Culture.” M/C Journal 9.6 (2006). 3 Apr. 2007 <http://journal.media-culture.org.au/0612/02-mcavan.php>.
McLeod, Kembrew. “Confessions of an Intellectual (Property): Danger Mouse, Mickey Mouse, Sonny Bono, and My Long and Winding Path as a Copyright Activist-Academic.” Popular Music & Society 28.79.
———. Freedom of Expression: Overzealous Copyright Bozos and Other Enemies of Creativity. United States: Doubleday, 2005.
Morris, Sue. “Co-Creative Media: Online Multiplayer Computer Game Culture.” Scan 1.1 (2004). 8 Jan. 2007 <http://scan.net.au/scan/journal/display_article.php?recordID=16>.
Petridis, Alexis. “Pop Will Eat Itself.” The Guardian UK. March 2003. 8 Jan. 2007 <http://www.guardian.co.uk/arts/critic/feature/0,1169,922797,00.html>.
Riley. “Pop Will Eat Itself—Or Will It?” The Truth Unknown (archived at Archive.org). 2003. 9 Jan. 2007 <http://web.archive.org/web/20030624154252/www.thetruthunknown.com/viewnews.asp?articleid=79>.
Schütze, Bernard. “Samples from the Heap: Notes on Recycling the Detritus of a Remixed Culture.” Horizon Zero 2003. 8 Jan. 2007 <http://www.horizonzero.ca/textsite/remix.php?tlang=0&is=8&file=5>.
Vaidhyanathan, Siva. Copyrights and Copywrongs: The Rise of Intellectual Property and How It Threatens Creativity. New York, London: New York University Press, 2003.
Woodmansee, Martha. “On the Author Effect: Recovering Collectivity.” The Construction of Authorship: Textual Appropriation in Law and Literature. Eds. M. Woodmansee and P. Jaszi. Durham, London: Duke University Press, 1994. 15.
 
 
 
 Citation reference for this article
 
 MLA Style
 Collins, Steve. "Amen to That: Sampling and Adapting the Past." M/C Journal 10.2 (2007). [your date of access] <http://journal.media-culture.org.au/0705/09-collins.php>.
 APA Style
 Collins, S. (May 2007). "Amen to That: Sampling and Adapting the Past," M/C Journal, 10(2). Retrieved [your date of access] from <http://journal.media-culture.org.au/0705/09-collins.php>.
27

Mitchell, Peta, and E. Sean Rintel. "Editorial." M/C Journal 5, no. 4 (2002). http://dx.doi.org/10.5204/mcj.1968.

Full text
Abstract:
This time last year we proposed the theme of the 'loop' issue to the M/C collective because it sounded deeply cool, satisfying our poststructuralist posturings about reflexivity and representation, while also tapping into everyday cultural objects and practices. We expected that the 'loop' issue would generate some interesting and varied responses, and it did. We received submissions about music, visual art, language, child development, pop-cultural artefacts, mathematics and culture in general. These explorations of disparate fields seemed, however, to be tied together by a common thread: the concept of "generation" itself. Each article in the 'loop' issue describes a loop that does not simply repeat an original operation, but that through iteration creates new possibilities and new meanings. More recently, when we began the process of putting the 'loop' issue together we found ourselves faced with a problem we had not envisaged: how might we structure the issue to reflect our guiding metaphor? How might we make articles about loops read like a loop? For all its decentredness, the web is ultimately still a linear medium, and one that constructs fairly rigorous and striated hierarchies. Even M/C, which has tried to set itself apart from traditional printed academic journals, is still structurally reminiscent of them. We toyed with the idea of hyperlinking articles or changing the table of contents from a descending list to a circular one. Apart from the inconceivably difficult task of convincing our web-designers to change the table of contents template for a one-off issue, such strategies would not necessarily have the desired effect. Readers could choose not to make use of our designed loop—hypertext would still work against us because readers might jump around in any order they desired. Moreover, the very necessity for the issue to have a "feature article" would straighten out any loop we might construct. 
An editorially enforced structure, it seemed, was not the answer. So we turned to our contributions. Strangely enough (or perhaps not strangely at all), we found that, read together, they could be perceived as generating their own internal spiralling system of interconnected loops. We as editors have therefore sought to place the contributions in an order that brings out this cyclical reading. The issue's feature article is Laurie Johnson's "Agency, Beyond Strange Cultural Loops": a highly accessible, gently amusing, and deeply thought-out meditation on the production and analysis of culture. Johnson works his way from the simple concept of infinite loops in computer programs, through the strange loops described by Douglas Hofstadter and drawn by M.C. Escher in his work Drawing Hands, to what he calls "strange cultural loops", in which the reception of original signs is shaped by what has been seen before. He contends that such loops constitute culture, because the very notion of cultural re-production is impossible without considering agency. Infinite loops are infinite only if we decide them to be, just as culture is recognisable if and how we choose it to be. What makes culture easily graspable in everyday life, yet so difficult to analyse, is that reception is as productive of culture as creation. The first of our two special visual art features is Vince Dziekan's "The Synthetic Image", which is at once an exhibition of 13 digital artists and an exploration of the curatorial process. "The Synthetic Image" is an astounding interactive exhibition/installation which the user is invited to explore via a kind of looped nodal map. Indeed, "The Synthetic Image" immerses the user in three kinds of loops. First, art and critique are drawn together into interlinking loops. Second, a centrifugal hypertextual structure is used to both create and display a narrative space.
Third, the metaphor of the loop is used to discuss the synthetic nature of digital art that explores the relationship between the real and the virtual. M/C is extremely proud to present this exhibition in conjunction with the Department of Multimedia and Digital Arts at Monash University, Melbourne. Simone Murray explores cultural production through corporate loops in "Harry Potter, Inc.: Content Recycling for Corporate Strategy". Murray investigates not the first-wave success of the books but the second-wave take-up of Harry Potter as a franchisable cultural product with substantial multi-demographic appeal. This is a detailed examination of how America Online-Time Warner (AOL-TW) has linked its corporate strategy to the characteristics of the Harry Potter brand. More than a simple sequential marketing operation, AOL-TW recycles the content of the books, film, and soundtracks in three ways: reusing digital content to sell its own products, licensing significant portions of content to secondary manufacturers, and, finally, using Harry Potter content to stimulate interest in non-Harry Potter AOL-TW products. Content recycling exemplifies current corporate drives toward synergy. In "Mastering the 'Visual Groove': Animated Electric Light Bulb Signs, Locations, and Loops", Margaret Weigel reminds us that looping media are not new phenomena, dating from the Victorian era and coming to particular prominence in the looping electric light bulb signs of the late 1800s and early 1900s. Our reactions to electric signs were, she argues, strangely similar to those of many new media: moral panic and debate, leading to acceptance and even fondness. Weigel describes many of these whimsical modernist spectacles, particularly those of Broadway (the "Great White Way") in the early 1900s, and then describes their reception by tourists and locals. While tourists were drawn to the nightly spectacles, locals mastered and integrated the cyclical marketing systems into their daily lives.
Greg Hainge's "Platonic Relations: The Problem of the Loop in Contemporary Electronic Music" proposes a way in which looping in electronic music may avoid the banal "Platonic" mode of repetition maligned by Deleuze in Difference and Repetition. In electronic music, Hainge argues, this passive approach to the loop conceives of the sampled element as "constitut[ing] an originary identity," the repetition of which constructs "an absolute internal resemblance". Moreover, Platonic looping itself forms a kind of technologically-determinist feedback loop within which the electronic artist finds him- or herself caught. By analysing the way in which electronic artist Kaffe Matthews breaks free of the Platonic mode, Hainge identifies a more "improvisational and dynamic aesthetic" of looping. In "Making Data Flow: On the Implications of Code Loops", Adrian Mackenzie explores and typologises loops in computer code. He argues that computer code has become an object of intense interest in cultural life, perhaps because computer code is at least as generative of meaning as content is supposed to be. Loops are an integral part of the coding process, and are also an interesting way to investigate the generation of meaning through information flows. Mackenzie finds the distinguishing feature of code loops to be their bounding conditions. Different bounding conditions, of course, generate different information flows. Given this, flows can be adapted to different cultural purposes by writers, artists, and hackers interested in exploring different spatio-temporal manifolds. Andrew T. Jacobs, in "Appropriating a Slur: Semantic Looping in the African-American Usage of 'Nigga'", unravels the fascinating rhetorical process by which the highly charged epithet 'nigger' has been reclaimed as 'nigga' by African-Americans. 
Drawing on rhetorical analysis and African-American sociology, Jacobs argues that co-opting the slur has involved three looping mechanisms—agnominatio, semantic inversion, and chiastic slaying—themselves combined into a looping process which he calls "semantic looping". He concludes that the use of "nigga" is a resistance strategy that functions through both recalling and refuting racism. In "Loops and Fakes and Illusions", Keith Russell investigates the role loops play in the childhood development of social understanding. Not only do loops figure in development, but, as Russell's reading of D.W. Winnicott demonstrates, childhood development itself is a sustaining loop. Following John Dewey, Russell contends that perplexity is the source of intellectual development, and that children exercise their perplexity by puzzling over illusions based around loops. Russell explores how these illusions and fakes demonstrate the tensions and dynamics of social reality, concluding that playing with loops is a lifelong process. Cameron Brown's "Rep-tiles with Woven Horns" is the second of our special visual art features. The title of Brown's article is also the title of the image gracing the cover of this issue. The image itself is particularly suited to the "loop" issue because it is a fractal created by recursion. Brown's article describes the geometry and mathematics behind the image, providing a step-by-step demonstration of its creation. We think that even non-mathematicians will follow the logic of the steps involved, and gain a deeper appreciation of both the power and elegance of recursion. As we must, we conclude the 'loop' issue much as we began it, exploring the links between agency and the generative power of the loop. Like Laurie Johnson, Luis O. Arata's "Creation by Looping Interactions" questions the creative process involved in M.C. Escher's Drawing Hands, but imagines it as an animated process with two different outcomes.
While one outcome is a closed loop—akin to the Platonic looping described by Hainge—generating only itself, the other is an open loop. Open loops, Arata contends, are a form of interaction, a powerful reflexive dialogue of participatory creation. He shows that cutting-edge science is finding the reflexive creativity of open loops to be increasingly important both to practice and theory, concluding that innovation is all the richer for it. Thus, one might say that Johnson and Arata each takes the role of one of the hands in the artwork they both analyse: Escher's Drawing Hands. Within that larger loop, smaller loops are described, and so, like Nietzsche, we find ourselves "insatiably calling out da capo" (56).
References
Nietzsche, Friedrich. Beyond Good and Evil. Trans. R.J. Hollingdale. Harmondsworth: Penguin, 1973.
Citation reference for this article
MLA Style
Mitchell, Peta, and E. Sean Rintel. "Editorial." M/C: A Journal of Media and Culture 5.4 (2002). [your date of access] <http://www.media-culture.org.au/mc/0208/editorial.php>.
Chicago Style
Mitchell, Peta, and E. Sean Rintel. "Editorial." M/C: A Journal of Media and Culture 5, no. 4 (2002), <http://www.media-culture.org.au/mc/0208/editorial.php> ([your date of access]).
APA Style
Mitchell, P., & Rintel, E. S. (2002). Editorial. M/C: A Journal of Media and Culture 5(4). <http://www.media-culture.org.au/mc/0208/editorial.php> ([your date of access]).
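Mackenzie's claim in the editorial above, that the distinguishing feature of a code loop is its bounding condition and that different bounding conditions generate different information flows, can be made concrete with a short sketch. This is a hypothetical illustration, not code from Mackenzie's article: the loop body is held constant while only the bounding condition varies.

```python
# Hypothetical sketch (not from the article): the same loop body under two
# different bounding conditions yields two different information flows.

def flow(bound):
    """Run a loop whose continuation is governed entirely by `bound`."""
    out, n = [], 0
    while bound(n, out):   # the bounding condition decides when the flow stops
        out.append(n * n)  # identical loop body in every case
        n += 1
    return out

# Count-bounded: stop after five iterations.
print(flow(lambda n, out: n < 5))            # [0, 1, 4, 9, 16]

# Value-bounded: stop once the accumulated total exceeds 100.
print(flow(lambda n, out: sum(out) <= 100))  # [0, 1, 4, 9, 16, 25, 36, 49]
```

Swapping the bounding condition rather than the body is what changes the data the loop emits; in Mackenzie's terms, writers, artists, and hackers adapt flows to different purposes by manipulating exactly this boundary.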
28

Collins, Steve. "Recovering Fair Use." M/C Journal 11, no. 6 (2008). http://dx.doi.org/10.5204/mcj.105.

Full text
Abstract:
Introduction
The Internet (especially in the so-called Web 2.0 phase), digital media and file-sharing networks have thrust copyright law under public scrutiny, provoking discourses questioning what is fair in the digital age. Accessible hardware and software have led to prosumerism – creativity blending media consumption with media production to create new works that are freely disseminated online via popular video-sharing Web sites such as YouTube or genre-specific music sites like GYBO (“Get Your Bootleg On”) amongst many others. The term “prosumer” is older than the Web, and the conceptual convergence of producer and consumer roles is certainly not new, for “at electric speeds the consumer becomes producer as the public becomes participant role player” (McLuhan 4). Similarly, Toffler’s “Third Wave” challenges “old power relationships” and promises to “heal the historic breach between producer and consumer, giving rise to the ‘prosumer’ economics” (27). Prosumption blurs the traditionally separate consumer and producer roles, creating a new era of mass customisation of artefacts culled from the (copyrighted) media landscape (Tapscott 62-3). Simultaneously, corporate interests dependent upon the protections provided by copyright law lobby for augmented rights and actively defend their intellectual property through lawsuits, takedown notices and technological reinforcement. Despite a lack of demonstrable economic harm in many cases, the propertarian approach is winning and frequently leading to absurd results (Collins).
The balance between private and public interests in creative works is facilitated by the doctrine of fair use (as codified in the United States Copyright Act 1976, section 107). The majority of copyright laws contain “fair” exceptions to claims of infringement, but fair use is characterised by a flexible, open-ended approach that allows the law to flex with the times.
Until recently the defence was unique to the U.S., but on 2 January Israel amended its copyright laws to include a fair use defence. (For an overview of the new Israeli fair use exception, see Efroni.) Despite its flexibility, fair use has been systematically eroded by ever-encroaching copyrights. This paper argues that copyright enforcement has spun out of control and the raison d’être of the law has shifted from being “an engine of free expression” (Harper & Row, Publishers, Inc. v. Nation Enterprises 471 U.S. 539, 558 (1985)) towards a “legal regime for intellectual property that increasingly looks like the law of real property, or more properly an idealized construct of that law, one in which courts seeks out and punish virtually any use of an intellectual property right by another” (Lemley 1032). Although the copyright landscape appears bleak, two recent cases suggest that fair use has not fallen by the wayside and may well recover. This paper situates fair use as an essential legal and cultural mechanism for optimising creative expression.
A Brief History of Copyright
The law of copyright extends back to eighteenth-century England, when the Statute of Anne (1710) was enacted. Whilst the length of this paper precludes an in-depth analysis of the law and its export to the U.S., it is important to stress the goals of copyright. “Copyright in the American tradition was not meant to be a ‘property right’ as the public generally understands property. It was originally a narrow federal policy that granted a limited trade monopoly in exchange for universal use and access” (Vaidhyanathan 11). Copyright was designed as a right limited in scope and duration to ensure that culturally important creative works were not the victims of monopolies and were free (as later mandated in the U.S.
Constitution) “to promote the progress.” During the 18th-century English copyright discourse, Lord Camden warned against propertarian approaches lest “all our learning will be locked up in the hands of the Tonsons and the Lintons of the age, who will set what price upon it their avarice chooses to demand, till the public become as much their slaves, as their own hackney compilers are” (Donaldson v. Becket 17 Cobbett Parliamentary History, col. 1000). Camden’s sentiments found favour in subsequent years with members of the North American judiciary reiterating that copyright was a limited right in the interests of society—the law’s primary beneficiary (see for example, Wheaton v. Peters 33 US 591 [1834]; Fox Film Corporation v. Doyal 286 US 123 [1932]; US v. Paramount Pictures 334 US 131 [1948]; Mazer v. Stein 347 US 201, 219 [1954]; Twentieth Century Music Corp. v. Aitken 422 U.S. 151 [1975]; Aronson v. Quick Point Pencil Co. 440 US 257 [1979]; Dowling v. United States 473 US 207 [1985]; Harper & Row, Publishers, Inc. v. Nation Enterprises 471 U.S. 539 [1985]; Luther R. Campbell a.k.a. Luke Skyywalker, et al. v. Acuff-Rose Music, Inc. 510 U.S 569 [1994]).
Putting the “Fair” in Fair Use
In Folsom v. Marsh 9 F. Cas. 342 (C.C.D. Mass. 1841) (No. 4,901) Justice Story formulated the modern shape of fair use from a wealth of case law extending back to 1740 and across the Atlantic. Over the course of one hundred years the English judiciary developed a relatively cohesive set of principles governing the use of a first author’s work by a subsequent author without consent. Story’s synthesis of these principles proved so comprehensive that later English courts would look to his decision for guidance (Scott v. Stanford L.R. 3 Eq. 718, 722 (1867)). Patry explains fair use as integral to the social utility of copyright to “encourage . . .
learned men to compose and write useful books” by allowing a second author to use, under certain circumstances, a portion of a prior author’s work, where the second author would himself produce a work promoting the goals of copyright (Patry 4-5).
Fair use is a safety valve on copyright law to prevent oppressive monopolies, but some scholars suggest that fair use is less a defence and more a right that subordinates copyrights. Lange and Lange Anderson argue that the doctrine is not fundamentally about copyright or a system of property, but is rather concerned with the recognition of the public domain and its preservation from the ever-encroaching advances of copyright (2001). Fair use should not be understood as subordinate to the exclusive rights of copyright owners. Rather, as Lange and Lange Anderson claim, the doctrine should stand in the superior position: the complete spectrum of ownership through copyright can only be determined pursuant to a consideration of what is required by fair use (Lange and Lange Anderson 19). The language of section 107 suggests that fair use is not subordinate to the bundle of rights enjoyed by copyright ownership: “Notwithstanding the provisions of sections 106 and 106A, the fair use of a copyrighted work . . . is not an infringement of copyright” (Copyright Act 1976, s.107). Fair use is not merely about the marketplace for copyright works; it is concerned with what Weinreb refers to as “a community’s established practices and understandings” (1151-2). This argument boldly suggests that judicial application of fair use has consistently erred through subordinating the doctrine to copyright and considering simply the effect of the appropriation on the marketplace for the original work.
The emphasis on economic factors has led courts to sympathise with copyright owners, leading to a propertarian or Blackstonian approach to copyright (Collins; Travis) propagating the myth that any use of copyrighted materials must be licensed.
Law and media reports alike are littered with examples. For example, in Bridgeport Music, Inc., et al v. Dimension Films et al 383 F. 3d 400 (6th Cir. 2004) the Sixth Circuit Court of Appeals held that the transformative use of a three-note guitar sample infringed copyrights and that musicians must obtain a licence from copyright owners for every appropriated audio fragment regardless of duration or recognisability. Similarly, in 2006 Christopher Knight self-produced a one-minute television advertisement to support his campaign to be elected to the board of education for Rockingham County, North Carolina. As a fan of Star Wars, Knight used a makeshift Death Star and lightsaber in his clip, capitalising on the imagery of the Jedi Knight opposing the oppressive regime of the Empire to protect the people. According to an interview in The Register, the advertisement was well received by local audiences, prompting Knight to upload it to his YouTube channel. Several months later, Knight’s clip appeared on Web Junk 2.0, a cable show broadcast by VH1, a channel owned by media conglomerate Viacom. Although his permission was not sought, Knight was pleased with the exposure; after all, “how often does a local school board ad wind up on VH1?” (Metz). Uploading the segment of Web Junk 2.0 featuring the advertisement to YouTube, however, led Viacom to quickly issue a take-down notice citing copyright infringement. Knight expressed his confusion at the apparent unfairness of the situation: “Viacom says that I can’t use my clip showing my commercial, claiming copy infringement? As we say in the South, that’s ass-backwards” (Metz).
The current state of copyright law is, as Patry says, “depressing”:
We are well past the healthy dose stage and into the serious illness stage ... things are getting worse, not better. Copyright law has abandoned its reason for being: to encourage learning and the creation of new works.
Instead, its principal functions now are to preserve existing failed business models, to suppress new business models and technologies, and to obtain, if possible, enormous windfall profits from activity that not only causes no harm, but which is beneficial to copyright owners. Like Humpty-Dumpty, the copyright law we used to know can never be put back together.
The erosion of fair use by encroaching private interests represented by copyrights has led to strong critiques levelled at the judiciary and legislators by Lessig, McLeod and Vaidhyanathan. “Free culture” proponents warn that an overly strict copyright regime unbalanced by an equally prevalent fair use doctrine is dangerous to creativity, innovation, culture and democracy. After all, “few, if any, things ... are strictly original throughout. Every book in literature, science and art, borrows, and must necessarily borrow, and use much which was well known and used before. No man creates a new language for himself, at least if he be a wise man, in writing a book. He contents himself with the use of language already known and used and understood by others” (Emerson v. Davis, 8 F. Cas. 615, 619 (No. 4,436) (CCD Mass. 1845), qtd. in Campbell v. Acuff-Rose, 62 U.S.L.W. at 4171 (1994)). The rise of the Web 2.0 phase with its emphasis on end-user-created content has led to an unrelenting wave of creativity, and much of it incorporates or “mashes up” copyright material. As Negativland observes, free appropriation is “inevitable when a population bombarded with electronic media meets the hardware [and software] that encourages them to capture it” and creatively express themselves through appropriated media forms (251). The current state of copyright and fair use is bleak, but not beyond recovery.
Two recent cases suggest a resurgence of the ideology underpinning the doctrine of fair use and the role played by copyright.
Let’s Go Crazy
In “Let’s Go Crazy #1” on YouTube, Holden Lenz (then eighteen months old) is caught bopping to a barely recognizable recording of Prince’s “Let’s Go Crazy” in his mother’s Pennsylvanian kitchen. The twenty-nine-second video was viewed a mere twenty-eight times by family and friends before Stephanie Lenz received an email from YouTube informing her of its compliance with a Digital Millennium Copyright Act (DMCA) take-down notice issued by Universal, copyright owners of Prince’s recording (McDonald). Lenz has since filed a counterclaim against Universal, and YouTube has reinstated the video. Ironically, the media exposure surrounding Lenz’s situation has led to the video being viewed 633,560 times at the time of writing. Comments associated with the video indicate a less than reverential opinion of Prince and Universal and support the fairness of using the song. On 8 Aug. 2008 a Californian District Court denied Universal’s motion to dismiss Lenz’s counterclaim. The question at the centre of the court judgment was whether copyright owners should consider “the fair use doctrine in formulating a good faith belief that use of the material in the manner complained of is not authorized by the copyright owner, its agent, or the law.” The court ultimately found in favour of Lenz and also reaffirmed the position of fair use in relation to copyright. Universal rested its argument on two key points. First, that copyright owners cannot be expected to consider fair use prior to issuing takedown notices because fair use is a defence, invoked after the act rather than a use authorized by the copyright owner or the law.
Second, because the DMCA does not mention fair use, there should be no requirement to consider it, or at the very least, it should not be considered until it is raised in legal defence.
In rejecting both arguments, the court accepted Lenz’s argument that fair use is an authorised use of copyrighted materials because the doctrine of fair use is embedded in the Copyright Act 1976. The court substantiated the point by emphasising the language of section 107. Although fair use is absent from the DMCA, the court reiterated that it is part of the Copyright Act and that “notwithstanding the provisions of sections 106 and 106A” a fair use “is not an infringement of copyright” (s.107, Copyright Act 1976). Overzealous rights holders frequently abuse the DMCA as a means to quash all use of copyrighted materials without considering fair use. This decision reaffirms that fair use “should not be considered a bizarre, occasionally tolerated departure from the grand conception of the copyright design” but something that is integral to the constitution of copyright law and essential in ensuring that copyright’s goals can be fulfilled (Leval 1100). Unlicensed musical sampling has never fared well in the courtroom. Three decades of rejection and admonishment by judges culminated in Bridgeport Music, Inc., et al v. Dimension Films et al 383 F. 3d 400 (6th Cir. 2004): “Get a license or do not sample. We do not see this stifling creativity in any significant way” was the ruling on an action brought against an unlicensed use of a three-note guitar sample under section 114, an audio piracy provision. The Bridgeport decision sounded a death knell for unlicensed sampling, ensuring that only artists with sufficient capital to pay the piper could legitimately be creative with the wealth of recorded music available. The cost of licensing samples can often outweigh the creative merit of the act itself, as discussed by McLeod (86) and Beaujon (25).
In August 2008 the Supreme Court of New York heard EMI v. Premise Media, in which EMI sought an injunction against an unlicensed fifteen-second excerpt of John Lennon’s “Imagine” featured in Expelled: No Intelligence Allowed, a controversial documentary canvassing the alleged chilling of intelligent design proponents in academic circles. (The family of John Lennon and EMI had previously failed to persuade a Manhattan federal court in a similar action.) The court upheld Premise Media’s arguments for fair use and rejected the Bridgeport approach on which EMI had rested its entire complaint. Justice Lowe criticised the Bridgeport court for its failure to examine the legislative intent of section 114, suggesting that courts should look to the black letter of the law rather than blindly accept propertarian arguments. This decision is of particular importance because it establishes that fair use applies to unlicensed use of sound recordings and re-establishes de minimis use.

Conclusion

This paper was partly inspired by the final entry on eminent copyright scholar William Patry’s personal copyright law blog (1 Aug. 2008). A copyright lawyer for over 25 years, Patry articulated his belief that copyright law has swung too far away from its initial objectives and that balance could never be restored. The two cases presented in this paper demonstrate that fair use – and therefore balance – can be recovered in copyright. The Supreme Court and lower federal courts have stressed that copyright was intended to promote creativity and have upheld the fair use doctrine, but in order for balance to exist in copyright law, cases must come before the courts; copyright myth must be challenged. As McLeod states, “the real-world problems occur when institutions that actually have the resources to defend themselves against unwarranted or frivolous lawsuits choose to take the safe route, thus eroding fair use” (146-7).

References

Beaujon, Andrew. 
“It’s Not the Beat, It’s the Mocean.” CMJ New Music Monthly. April 1999. Collins, Steve. “Good Copy, Bad Copy: Covers, Sampling and Copyright.” M/C Journal 8.3 (2005). 26 Aug. 2008 ‹http://journal.media-culture.org.au/0507/02-collins.php›. ———. “‘Property Talk’ and the Revival of Blackstonian Copyright.” M/C Journal 9.4 (2006). 26 Aug. 2008 ‹http://journal.media-culture.org.au/0609/5-collins.php›. Donaldson v. Becket 17 Cobbett Parliamentary History, col. 953. Efroni, Zohar. “Israel’s Fair Use.” The Center for Internet and Society (2008). 26 Aug. 2008 ‹http://cyberlaw.stanford.edu/node/5670›. Lange, David, and Jennifer Lange Anderson. “Copyright, Fair Use and Transformative Critical Appropriation.” Conference on the Public Domain, Duke Law School. 2001. 26 Aug. 2008 ‹http://www.law.duke.edu/pd/papers/langeand.pdf›. Lemley, Mark. “Property, Intellectual Property, and Free Riding.” Texas Law Review 83 (2005): 1031. Lessig, Lawrence. The Future of Ideas. New York: Random House, 2001. ———. Free Culture. New York: Penguin, 2004. Leval, Pierre. “Toward a Fair Use Standard.” Harvard Law Review 103 (1990): 1105. McDonald, Heather. “Holden Lenz, 18 Months, versus Prince and Universal Music Group.” About.com: Music Careers 2007. 26 Aug. 2008 ‹http://musicians.about.com/b/2007/10/27/holden-lenz-18-months-versus-prince-and-universal-music-group.htm›. McLeod, Kembrew. “How Copyright Law Changed Hip Hop: An Interview with Public Enemy’s Chuck D and Hank Shocklee.” Stay Free 2002. 26 Aug. 2008 ‹http://www.stayfreemagazine.org/archives/20/public_enemy.html›. ———. Freedom of Expression: Overzealous Copyright Bozos and Other Enemies of Creativity. United States: Doubleday, 2005. McLuhan, Marshall, and Barrington Nevitt. Take Today: The Executive as Dropout. Ontario: Longman Canada, 1972. Metz, Cade. “Viacom Slaps YouTuber for Behaving like Viacom.” The Register 2007. 26 Aug. 2008 ‹http://www.theregister.co.uk/2007/08/30/viacom_slaps_pol/›. Negativland, ed. 
Fair Use: The Story of the Letter U and the Numeral 2. Concord: Seeland, 1995. Patry, William. The Fair Use Privilege in Copyright Law. Washington DC: Bureau of National Affairs, 1985. ———. “End of the Blog.” The Patry Copyright Blog. 1 Aug. 2008. 27 Aug. 2008 ‹http://williampatry.blogspot.com/2008/08/end-of-blog.html›. Tapscott, Don. The Digital Economy: Promise and Peril in the Age of Networked Intelligence. New York: McGraw Hill, 1996. Toffler, Alvin. The Third Wave. London, Glasgow, Sydney, Auckland, Toronto, Johannesburg: William Collins, 1980. Travis, Hannibal. “Pirates of the Information Infrastructure: Blackstonian Copyright and the First Amendment.” Berkeley Technology Law Journal 15 (2000): 777. Vaidhyanathan, Siva. Copyrights and Copywrongs: The Rise of Intellectual Property and How It Threatens Creativity. New York; London: New York UP, 2003.
APA, Harvard, Vancouver, ISO, and other styles
29

Maybury, Terrence. "The Literacy Control Complex." M/C Journal 7, no. 2 (2004). http://dx.doi.org/10.5204/mcj.2337.

Full text
Abstract:
Usually, a literature search is a benign phase of the research regime. It was, however, during this phase of my current project that a semi-conscious pique I’d been feeling developed into an obvious rancour. Because I’ve been involved in both electronic production and consumption, and the pedagogy surrounding it, I was interested in how the literate domain was coping with the transformations coming out of the new media communications r/evolution. This concern became clearer with the reading and re-reading of Kathleen Tyner’s book, Literacy in a Digital World: Teaching and Learning in the Age of Information. Sometimes, irritation is a camouflage for an emerging and hybridised form of knowledge, so it was necessary to unearth this masquerade of discord that welled up in the most unexpected of places. Literacy in a Digital World makes all the right noises: it discusses technology; Walter Ong; media literacy; primary, secondary, and tertiary schooling; Plato’s Phaedrus; psychoanalysis; storytelling; networks; aesthetics; even numeracy and multiliteracies, along with a host of other highly appropriate subject matter vis-à-vis its object of analysis. On one reading, it’s a highly illuminating overview. There is, however, a differing interpretation of Literacy in a Digital World, and it’s of a more sombre hue. This other, more doleful reading makes Literacy in a Digital World a superior representative of a sometimes largely under-theorised control-complex, and an un.conscious authoritarianism, implicit in the production of any type of knowledge. Of course, in this instance the type of production referenced is literate in orientation. The literate domain, then, is not merely an angel of enlightened debate; under the influence and direction of particular human configurations, literacy has its power struggles with other forms of representation. 
If the PR machine encourages a more seraphical view of the culture industry, it comes at the expense of the latter’s sometimes-tyrannical underbelly. It is vital, then, to question and investigate these un.conscious forces, specifically in relation to the production of literate forms of culture and the ‘discourse’ it carries on regarding electronic forms of knowledge, a paradigm for which (electracy) is slowly emerging, and a subject I will return to. This assertion is no overstatement. Literacy in a Digital World has concealed within its discourse the assumption that the dominant modes of teaching and learning are literate and will continue to be so. That is, all knowledge is mediated via either typographic or chirographic words on a page, or even on a screen. This is strange given that Tyner admits in the Introduction that “I am an itinerant teacher, reluctant writer, and sometimes media producer” (1, my emphasis). The orientation in Literacy in a Digital World, it seems to me, is a mask for the authoritarianism at the heart of the literate establishment trying to contain and corral the intensifying global flows of electronic information. Ironically, it also seems to be a peculiarly electronic way to present information: that is, the sifting, analysis, and categorisation, along with the representation of phenomena, through the force of one’s un.conscious biases, with the latter making all knowledge production laden with emotional causation. This awkwardness in using the term “literacy” in relation to electronic forms of knowledge surfaces once more in Paul Messaris’s Visual “Literacy”. Again, this is peculiar given that this highly developed and informative text might be a fine introduction to electracy as a possible alternative paradigm to literacy, if only, for instance, it made some mention of sound as a counterpoint to textual and visual symbolisation. 
The point where Messaris passes over this former contradiction is worth quoting: Strictly speaking, of course, the term “literacy” should be applied only to reading and writing. But it would probably be too pedantic and, in any case, it would surely be futile to resist the increasingly common tendency to apply this term to other kinds of communication skills (mathematical “literacy,” computer “literacy”) as well as to the substantive knowledge that communication rests on (historical, geographic, cultural “literacy”). (2-3) While Messaris might use the term “visual literacy” reluctantly, the assumption that literacy will take over the conceptual reins of electronic communication and remain the pre-eminent form of knowledge production is widespread. This assumption might be happening in the literature on the subject, but in the wider population there is a rising electrate sensibility. It is in the work of Gregory Ulmer that electracy is most extensively articulated, and the following brief outline has been heavily influenced by his speculation on the subject. Electracy is a paradigm that requires, in the production and consumption of electronic material, highly developed competencies in both oracy and literacy, and if necessary comes on top of any knowledge of the subject or content of any given work, program, or project. The conceptual frame of electracy is herein tentatively defined as both a well-developed range and depth of communicative competency in oral, literate, and electronic forms, biased from the latter’s point of view. A crucial addition, one sometimes overlooked in earlier communicative forms, is that of the technate, or technacy, a working knowledge of the technological infrastructure underpinning all communication and its in-built ideological assumptions. It is in this context of the various communicative competencies required for electronic production and consumption that the term ‘literacy’ (or for that matter ‘oracy’) is questionable. 
Furthermore, electracy can spread out to mean the following: it is that domain of knowledge formation whose arrangement, transference, and interpretation rely primarily on electronic networks, systems, codes and apparatuses, for either its production, circulation, or consumption. It could be analogue, in the sense of videotape; digital, in the case of the computer; aurally centred, as in the examples of music, radio or sound-scapes; mathematically configured, in relation to programming code for instance; visually fixated, as in broadcast television; ‘amateur’, as in the home-video or home-studio realm; politically sensitive, in the case of surveillance footage; medically fixated, as in the orbit of tomography; ambiguous, as in the instance of The Sydney Morning Herald made available on the WWW, or of Hollywood blockbusters broadcast on television, or hired/bought in a DVD/video format; this is not to mention Brad Pitt reading a classic novel on audio-tape. Electracy is a strikingly simple, yet highly complex and heterogeneous communicative paradigm. Electracy is also a generic term, one whose very comprehensiveness and dynamic mutability is its defining hallmark, and one in which a whole host of communicative codes and symbolic systems reside. Moreover, almost anyone can comprehend meaning in electronic media because “electric epistemology cannot remain confined to small groups of users, as oral epistemologies have, and cannot remain the property of an educated elite, as literate epistemologies have” (Gozzi and Haynes 224). Furthermore, as Ulmer writes: “To speak of computer literacy or media literacy may be an attempt to remain within the apparatus of alphabetic writing that has organized the Western tradition for nearly the past three millennia” (“Foreword” xii). The catch is that the knowledge forms thus produced through electracy are the abstract epistemological vectors on which the diverse markets of global capitalism thrive. 
The dynamic nature of these “multimodal” forms of electronic knowledge (Kress, “Visual” 73), then, is increasingly applicable to all of us in the local/global, human/world conglomerate in which any polity is now framed. To continue to emphasise literacy and alphabetic consciousness might then be blinding us to this emerging relationship between electracy and globalisation, possibly even to localisation and regionalisation. It may be possible to trace the dichotomy outlined above between literate and electrate forms of knowledge to larger political/economic and cultural forces. As Saskia Sassen illustrates, sovereignty and territoriality are central aspects in the operation of the still important nation-state, especially in an era of encroaching globalisation. In the past, sovereignty referred to the absolute power of monarchs to control their dominions and is an idea that has been transferred to the nation-state in the long transition to representative democracy. Territoriality refers to the specific physical space that sovereignty is seen as guaranteeing. As Sassen writes, “In the main … rule in the modern world flows from the absolute sovereignty of the state over its national territory” (3). Quite clearly, in the shifting regimes of geo-political power that characterise the global era, sovereign control over territory, and, equally, control over the ideas that might reconfigure our interpretation of concepts such as sovereignty and territoriality, nationalism and literacy, are all in a state of change. Today’s climate of geo-political uncertainty has undoubtedly produced a control complex in relation to these shifting power bases, a condition that arises when psychic, epistemological and political certainties move to a state of unpredictable flux. In Benedict Anderson’s Imagined Communities, another important examination of nationalism, there is an emphasis on how literacy was an essential ingredient in its development as a political structure. 
Operational levels of literacy also came to be a key component in the development of the idea of the autonomous self that arose with democracy and its use as an organising principle in citizenship rituals like voting in some nation-states. Eric Leed puts it this way: “By the sixteenth century, literacy had become one of the definitive signs — along with the possession of property and a permanent residence — of an independent social status” (53). Clearly, any conception of sovereignty and territoriality has to be read, after being written constitutionally, by those people who form the basis of a national polity and over whom these two categories operate. The “fundamental anxiety” over literacy that Kress speaks of (Before Writing 1) is a sub-component of this larger control complex in that a quantum increase in the volume and diversity of electronic communication is contributing to declining levels of literacy in the body politic. In the current moment there is a control complex of almost plague proportions in our selves, our systems of knowledge, and our institutions and polities, because it is undoubtedly a key factor at the epicentre of any turf war. Even my own strident anxieties over the dominance of literacy in debates over electronic communication deserve to be laid out on the analyst’s couch, in part because any manifestation of the control complex in a turf war is aimed squarely at the repression of alternative ways of being and becoming. The endgame: it might be wiser to more closely examine this literacy control complex, possible alternative paradigms of knowledge production and consumption such as electracy, and their broader relationship to patterns of political/economic/cultural organisation and control. Acknowledgements I am indebted to Patrice Braun and Ros Mills, respectively, for editorial advice and technical assistance in the preparation of this essay. 
Note on reading “The Literacy Control Complex” The dot configuration in ‘un.conscious’ is used deliberately as an electronic marker to implicitly indicate the omni-directional nature of the power surges that dif.fuse the conscious and the unconscious in the field of political action where any turf war is conducted. While this justification is not obvious, I do want to create a sense of intrigue in the reader as to why this dot configuration might be used. One of the many things that fascinates me about electronic communication is its considerable ability for condensation; the sound-bite is one epistemological example of this idea; the dot, as an electronic form of conceptual elision, is another. If you are interested in this field, I highly recommend perusal of the MEZ posts that crop up periodically on a number of media-related lists. MEZ’s posts have made me more cognisant of electronic forms of written expression. These experiments in electronic writing deserve to be tested. Works Cited Anderson, Benedict. Imagined Communities: Reflections on the Origin and Spread of Nationalism. Rev. ed. London and New York: Verso, 1991. Gozzi Jr., Raymond, and W. Lance Haynes. “Electric Media and Electric Epistemology: Empathy at a Distance.” Critical Studies in Mass Communication 9.3 (1992): 217-28. Kress, Gunther. “Visual and Verbal Modes of Representation in Electronically Mediated Communication: The Potentials of New Forms of Text.” Page to Screen: Taking Literacy into the Electronic Era. Ed. Ilana Snyder. Sydney: Allen & Unwin, 1997. 53-79. ---. Before Writing: Rethinking the Paths to Literacy. London: Routledge, 1997. Leed, Eric. “‘Voice’ and ‘Print’: Master Symbols in the History of Communication.” The Myths of Information: Technology and Postindustrial Culture. Ed. Kathleen Woodward. Madison, Wisconsin: Coda Press, 1980. 41-61. Messaris, Paul. Visual “Literacy”: Image, Mind, and Reality. Boulder: Westview Press, 1994. Sassen, Saskia. Losing Control? 
Sovereignty in an Age of Globalization. New York: Columbia UP, 1996. Tyner, Kathleen. Literacy in a Digital World: Teaching and Learning in the Age of Information. Mahwah, NJ: Lawrence Erlbaum Associates, 1998. Ulmer, Gregory. Teletheory: Grammatology in the Age of Video. New York: Routledge, 1989. ---. Heuretics: The Logic of Invention. New York: Johns Hopkins UP, 1994. ---. “Foreword/Forward (Into Electracy).” Literacy Theory in the Age of the Internet. Ed. Todd Taylor and Irene Ward. New York: Columbia UP, 1998. ix-xiii. ---. Internet Invention: Literacy into Electracy. Boston: Longman, 2003. Citation reference for this article MLA Style Maybury, Terrence. "The Literacy Control Complex" M/C: A Journal of Media and Culture <http://www.media-culture.org.au/0403/05-literacy.php>. APA Style Maybury, T. (2004, Mar. 17). The Literacy Control Complex. M/C: A Journal of Media and Culture, 7, <http://www.media-culture.org.au/0403/05-literacy.php>
APA, Harvard, Vancouver, ISO, and other styles
30

Mayo, Sherry. "NXT Space for Visual Thinking." M/C Journal 1, no. 4 (1998). http://dx.doi.org/10.5204/mcj.1722.

Full text
Abstract:
"Space, the limitless area in which all things exist and move." -- Merriam-Webster Dictionary (658) Can we determine our point in time and space at this moment of pre-millennium anticipation? The evolution of our visualisation of space as a culture is shifting and entering the critical consciousness of our global village. The infinite expansion of space's parameters, definitions and visualisation remains the next frontier -- not only for NASA, but for visual culture. Benjamin's vision of loss of the aura of originality through reproduction has come to pass, as have McLuhan's global village, Baudrillard's simulacra, and Gibson's cyberpunk. Recent technologies such as digital imaging, video, 3-D modelling, virtual reality, and the Internet have brought us to the cusp of the millennium as pioneers of what I call this 'NXT space' for visual thinking, for artistic expression. The vision being constructed in pre-millennium culture takes place in an objectless fictionalised space. This virtual reality is a space that is expanding infinitely, as we speak. The vehicle through which access is gained into this layer takes the form of a machine that requires a mind/body split. The viewer probes through the intangible pixels and collects visual data. The data received on this or that layer have the potential to transport the viewer virtually and yield a visceral experience. The new tools for visualisation allow an expanded perception to an altered state of consciousness. The new works cross the boundaries between media, and are the result of virtual trips via the usage of digital imaging. Their aesthetic reflects our digital society in which people maintain extremely intimate relationships with their computers. This new era is populated by a new generation that is inside more than outside, emailing while faxing, speaking on the phone and surfing the Web with MTV on in the background. 
We have surpassed postmodernist ideas of pluralism and simultaneity and have produced people for whom the digital age is no revolution. Selected colours, forms and spaces refer to the pixelisation of our daily experience. We are really discussing pop for ahistorical youth, who consider virtual reality to be the norm of visualisation via digitally produced ads, movies, TV shows, music videos, video games and the computer. The term "new media" is already antiquated. We are participating in a realm that is fluent with technology, where the visualisation of space is more natural than an idea of objecthood. (At least as long as we're operating in the technology-rich Western world, that is.) The relationship of these virtual spaces with the mass audience is the cause of pre-millennium anxiety. The cool distance of remote control and the ability to remain in an altered state of consciousness are the residual effects of virtual reality. It is this alienated otherness that allows for the atomisation of the universe. We construct artifice for interface, and simulacra have become more familiar than the "real". NXT space, cyberspace, is the most vital space for visual thinking in the 21st century. The malleability and immateriality of the pixel sub-universe has exponential potential. The artists of this future, who will dedicate themselves successfully to dealing with the new parameters of this installation space, will not consider themselves "computer artists". They will be simply artists working with integrated electronic arts. Digital imaging has permeated our lives to such an extent that like Las Vegas "it's the sunsets that look fake as all hell" (Hickey). Venturi depicts the interior of Las Vegas's casinos as infinite dark spaces with lots of lights transmitting information. 
Cyberspace is a public/private space occupied by a global village, in that it is a public space through its accessibility to anyone with Internet access, and a social space due to the ability to exchange ideas and meet others through dialogue; however, it is also an intimate private space due to its intangibility and the distance between each loner at their terminal. NXT needs a common sign system that is seductive enough to persuade the visitor into entering the site and can act as a navigational tool. People like to return to places that feel familiar and stimulate reverie of past experiences. This requires the visitor to fantasise while navigating through a cybersite and believe that it is an actual place that exists and where they can dwell. Venturi's model of the sign system as paramount to the identification of the actual architecture is perfect for cyberspace, because you are selling the idea or the fiction of the site, not the desert that it really is. Although NXT cannot utilise object cathexis to stimulate fantasy and attachment to site, it can breed familiarity through a consistent sign system and a dynamic and interactive social space which would entice frequent revisiting. NXT Space, a home for the other? "Suddenly it becomes possible that there are just others, that we ourselves are an 'other' among others", as Paul Ricoeur said in 1962. If one were to impose Heidegger's thinking in regard to building and dwelling, one would have to reconstruct NXT as a site that would promote dwelling. It would have to be built in a way in which people were not anonymous or random. A chat room or BBS would have to be attached, where people could actively participate with one another within NXT. Once these visitors had other people that they could identify with and repeatedly interact with, they would form a community within the NXT site. 
Mortals would roam not on earth, nor under the sky, possibly before divinities (who knows), but rather through pixel light and fiber optics without a physical interface between beings. If the goal of mortals is a Heideggerian notion of attachment to a site through building, and building's goal is dwelling, and dwelling's goal is identification, and identification is accomplished through the cultivation of culture, then NXT could be a successful location. NXT could accommodate an interchange between beings that would be free of physiological constraints and identity separations. This is what could be exchanged and exposed in the NXT site without the interference and taint of socio-physio parameters that separate people from one another. It would be a place where everyone, without the convenience or burden of identity, becomes simply another other. NXT could implement theory in an integral contextual way that could effect critical consciousness and a transformation of society. This site could serve as a theoretical laboratory where people could exchange and experiment within a dialogue. NXT as a test site could push the parameters of cyberspace and otherness in a real and tangible way. This "cyber-factory" would be interactive and analytical. The fictional simulated world is becoming our reality and cyberspace is becoming a more reasonable parallel to life. Travelling through time and space seems more attainable than ever before through the Internet. Net surfing is zipping through the Louvre, trifling through the Grand Canyon and then checking your horoscope. People are becoming used to this ability and the abstract is becoming more tangible to the masses. As techno-literacy and access increase, so should practical application of abstract theory. NXT would escape reification of theory through dynamic accessibility. The virtual factory could be a Voltaire's cafe of cyber-thinkers charting the critical consciousness and evolution of our Web-linked world. 
Although ultimately in the West we do exist within a capitalist system where every good thought leaks out to the masses and becomes popular; popularity creates fashion; fashion is fetishistic, thereby desirable, and accumulates monetary value. Market power depoliticises original content and enables an idea to become dogma; another trophy in the cultural hall of fame. Ideas do die, but in another time and place can be resurrected and utilised as a template for counter-reaction. This is analogous to genetic evolution -- DNA makes RNA which makes retro-DNA, etc. -- and the helix spirals on, making reification an organic process. However, will cyberspace ever be instrumental in transforming society in the next century? Access is the largest inhibitor. Privileged technophiles often forget that they are in the minority. How do we become more inclusive and expand the dialogue to encompass the infinite number of different voices on our planet? NXT space is limited to a relatively small number of individuals with the ability to afford and gain access to high-tech equipment. This will continue the existing socio-economic imbalance that restricts our critical consciousness. Without developing the Internet into the NXT space, we will be tremendously bothered by ISPs, with data transfer control and content police. My fear for the global village, surfing through our virtual landscape, is that we will all skid off this swiftly tilting planet. The addiction to the Net and to simulated experiences will subject us to remote control. The inundation of commercialism bombarding the spectator was inevitable, and subsequently there are fewer innovative sites pushing the boundaries of experimentation with this medium. Pre-millennium anxiety is abundant in technophobes, but as a technophile I too am afflicted. My fantasy of a NXT space is dwindling as the clock ticks towards the Y2K problem and a new niche for community and social construction has already been out-competed. 
If only we could imagine all the people living in the NXT space with its potential for tolerance, dialogue, and community. References Bachelard, Gaston. The Poetics of Space: The Classic Look at How We Experience Intimate Places. Boston, MA: Beacon, 1994. Benjamin, Walter. Illuminations. New York: Schocken, 1978. Gibson, William. Neuromancer. San Francisco: Ace Books, 1984. Heidegger, Martin. The Question Concerning Technology, and Other Essays. Trans. William Lovitt. New York: Garland, 1977. Hickey, Dave. Air Guitar: Four Essays on Art and Democracy. Los Angeles: Art Issues, 1997. Koch, Stephen. Stargazer: Andy Warhol's World and His Films. London: Calder and Boyars, 1973. McLuhan, Marshall. Understanding Media: The Extensions of Man. Cambridge, MA: MIT Press, 1994. The Merriam-Webster Dictionary. Springfield, MA: G. & C. Merriam, 1974. Venturi, Robert. Learning from Las Vegas: The Forgotten Symbolism of Architectural Form. Cambridge, MA: MIT Press, 1977. Citation reference for this article MLA style: Sherry Mayo. "NXT Space for Visual Thinking: An Experimental Cyberlab." M/C: A Journal of Media and Culture 1.4 (1998). [your date of access] <http://www.uq.edu.au/mc/9811/nxt.php>. Chicago style: Sherry Mayo, "NXT Space for Visual Thinking: An Experimental Cyberlab," M/C: A Journal of Media and Culture 1, no. 4 (1998), <http://www.uq.edu.au/mc/9811/nxt.php> ([your date of access]). APA style: Sherry Mayo. (1998) NXT space for visual thinking: an experimental cyberlab. M/C: A Journal of Media and Culture 1(4). <http://www.uq.edu.au/mc/9811/nxt.php> ([your date of access]).
APA, Harvard, Vancouver, ISO, and other styles
31

Losh, Elizabeth. "Artificial Intelligence." M/C Journal 10, no. 5 (2007). http://dx.doi.org/10.5204/mcj.2710.

Full text
Abstract:

 On the morning of Thursday, 4 May 2006, the United States House Permanent Select Committee on Intelligence held an open hearing entitled “Terrorist Use of the Internet.” The Intelligence committee meeting was scheduled to take place in Room 1302 of the Longworth Office Building, a Depression-era structure with a neoclassical façade. Because of a dysfunctional elevator, some of the congressional representatives were late to the meeting. During the testimony about the newest political applications for cutting-edge digital technology, the microphones periodically malfunctioned, and witnesses complained of “technical problems” several times. By the end of the day it seemed that what was to be remembered about the hearing was the shocking revelation that terrorists were using videogames to recruit young jihadists. The Associated Press wrote a short, restrained article about the hearing that only mentioned “computer games and recruitment videos” in passing. Eager to have their version of the news item picked up, Reuters made videogames the focus of their coverage with a headline that announced, “Islamists Using US Videogames in Youth Appeal.” Like a game of telephone, as the Reuters videogame story was quickly re-run by several Internet news services, each iteration of the title seemed less true to the exact language of the original. One Internet news service changed the headline to “Islamic militants recruit using U.S. video games.” Fox News re-titled the story again to emphasise that this alert about technological manipulation was coming from recognised specialists in the anti-terrorism surveillance field: “Experts: Islamic Militants Customizing Violent Video Games.” As the story circulated, the body of the article remained largely unchanged, in which the Reuters reporter described the digital materials from Islamic extremists that were shown at the congressional hearing. 
During the segment that apparently most captured the attention of the wire service reporters, eerie music played as an English-speaking narrator condemned the “infidel” and declared that he had “put a jihad” on them, as aerial shots moved over 3D computer-generated images of flaming oil facilities and mosques covered with geometric designs. Suddenly, this menacing voice-over was interrupted by an explosion, as a virtual rocket was launched into a simulated military helicopter. The Reuters reporter shared this dystopian vision from cyberspace with Western audiences by quoting directly from the chilling commentary and describing a dissonant montage of images and remixed sound. “I was just a boy when the infidels came to my village in Blackhawk helicopters,” a narrator’s voice said as the screen flashed between images of street-level gunfights, explosions and helicopter assaults. Then came a recording of President George W. Bush’s September 16, 2001, statement: “This crusade, this war on terrorism, is going to take a while.” It was edited to repeat the word “crusade,” which Muslims often define as an attack on Islam by Christianity. According to the news reports, the key piece of evidence before Congress seemed to be a film by “SonicJihad” of recorded videogame play, which – according to the experts – was widely distributed online. Much of the clip takes place from the point of view of a first-person shooter, seen as if through the eyes of an armed insurgent, but the viewer also periodically sees third-person action in which the player appears as a running figure wearing a red-and-white checked keffiyeh, who dashes toward the screen with a rocket launcher balanced on his shoulder. Significantly, another of the player’s hand-held weapons is a detonator that triggers remote blasts. As jaunty music plays, helicopters, tanks, and armoured vehicles burst into smoke and flame. 
Finally, at the triumphant ending of the video, a green and white flag bearing a crescent is hoisted aloft into the sky to signify victory by Islamic forces. To explain the existence of this digital alternative history in which jihadists could be conquerors, the Reuters story described the deviousness of the country’s terrorist opponents, who were now apparently modifying popular videogames through their wizardry and inserting anti-American, pro-insurgency content into U.S.-made consumer technology. One of the latest video games modified by militants is the popular “Battlefield 2” from leading video game publisher, Electronic Arts Inc of Redwood City, California. Jeff Brown, a spokesman for Electronic Arts, said enthusiasts often write software modifications, known as “mods,” to video games. “Millions of people create mods on games around the world,” he said. “We have absolutely no control over them. It’s like drawing a mustache on a picture.” Although the Electronic Arts executive dismissed the activities of modders as a “mustache on a picture” that could only be considered little more than childish vandalism of their off-the-shelf corporate product, others saw a more serious form of criminality at work. Testifying experts and the legislators listening on the committee used the video to call for greater Internet surveillance efforts and electronic counter-measures. Within twenty-four hours of the sensationalistic news breaking, however, a group of Battlefield 2 fans was crowing about the idiocy of reporters. The game play footage wasn’t from a high-tech modification of the software by Islamic extremists; it had been posted on a Planet Battlefield forum the previous December of 2005 by a game fan who had cut together regular game play with a Bush remix and a parody snippet of the soundtrack from the 2004 hit comedy film Team America. 
The voice describing the Black Hawk helicopters was the voice of Trey Parker of South Park cartoon fame, and – much to Parker’s amusement – even the mention of “goats screaming” did not clue spectators in to the fact of a comic source. Ironically, the moment in the movie from which the sound clip is excerpted is one about intelligence gathering. As an agent of Team America, a fictional elite U.S. commando squad, the hero of the film’s all-puppet cast, Gary Johnston, is impersonating a jihadist radical inside a hostile Egyptian tavern that is modelled on the cantina scene from Star Wars. Additional laughs come from the fact that agent Johnston is accepted by the menacing terrorist cell as “Hakmed,” despite the fact that he utters a series of improbable clichés made up of incoherent stereotypes about life in the Middle East while dressed up in a disguise made up of shoe polish and a turban from a bathroom towel. The man behind the “SonicJihad” pseudonym turned out to be a twenty-five-year-old hospital administrator named Samir, and what reporters and representatives saw was nothing more exotic than game play from an add-on expansion pack of Battlefield 2, which – like other versions of the game – allows first-person shooter play from the position of the opponent as a standard feature. While SonicJihad initially joined his fellow gamers in ridiculing the mainstream media, he also expressed astonishment and outrage about a larger politics of reception. In one interview he argued that the media illiteracy of Reuters potentially enabled a whole series of category errors, in which harmless gamers could be demonised as terrorists. It wasn’t intended for the purpose what it was portrayed to be by the media. So no I don’t regret making a funny video . . . why should I? The only thing I regret is thinking that news from Reuters was objective and always right. The least they could do is some online research before publishing this. 
If they label me al-Qaeda just for making this silly video, that makes you think, what is this al-Qaeda? And is everything al-Qaeda? Although Sonic Jihad dismissed his own work as “silly” or “funny,” he expected considerably more from a credible news agency like Reuters: “objective” reporting, “online research,” and fact-checking before “publishing.” Within the week, almost all of the salient details in the Reuters story were revealed to be incorrect. SonicJihad’s film was not made by terrorists or for terrorists: it was not created by “Islamic militants” for “Muslim youths.” The videogame it depicted had not been modified by a “tech-savvy militant” with advanced programming skills. Of course, what is most extraordinary about this story isn’t just that Reuters merely got its facts wrong; it is that a self-identified “parody” video was shown to the august House Intelligence Committee by a team of well-paid “experts” from the Science Applications International Corporation (SAIC), a major contractor with the federal government, as key evidence of terrorist recruitment techniques and abuse of digital networks. Moreover, this story of media illiteracy unfolded in the context of a fundamental Constitutional debate about domestic surveillance via communications technology and the further regulation of digital content by lawmakers. Furthermore, the transcripts of the actual hearing showed that much more than simple gullibility or technological ignorance was in play. Based on their exchanges in the public record, elected representatives and government experts appear to be keenly aware that the digital discourses of an emerging information culture might be challenging their authority and that of the longstanding institutions of knowledge and power with which they are affiliated. 
These hearings can be seen as representative of a larger historical moment in which emphatic declarations about prohibiting specific practices in digital culture have come to occupy a prominent place at the podium, news desk, or official Web portal. This environment of cultural reaction can be used to explain why policy makers’ reaction to terrorists’ use of networked communication and digital media actually tells us more about our own American ideologies about technology and rhetoric in a contemporary information environment. When the experts come forward at the Sonic Jihad hearing to “walk us through the media and some of the products,” they present digital artefacts of an information economy that mirrors many of the features of our own consumption of objects of electronic discourse, which seem dangerously easy to copy and distribute and thus also create confusion about their intended meanings, audiences, and purposes. From this one hearing we can see how the reception of many new digital genres plays out in the public sphere of legislative discourse. Web pages, videogames, and Weblogs are mentioned specifically in the transcript. The main architecture of the witnesses’ presentation to the committee is organised according to the rhetorical conventions of a PowerPoint presentation. Moreover, the arguments made by expert witnesses about the relationship of orality to literacy or of public to private communications in new media are highly relevant to how we might understand other important digital genres, such as electronic mail or text messaging. The hearing also invites consideration of privacy, intellectual property, and digital “rights,” because moral values about freedom and ownership are alluded to by many of the elected representatives present, albeit often through the looking glass of user behaviours imagined as radically Other. 
For example, terrorists are described as “modders” and “hackers” who subvert those who properly create, own, legitimate, and regulate intellectual property. To explain embarrassing leaks of infinitely replicable digital files, witness Ron Roughead says, “We’re not even sure that they don’t even hack into the kinds of spaces that hold photographs in order to get pictures that our forces have taken.” Another witness, Undersecretary of Defense for Policy and International Affairs, Peter Rodman claims that “any video game that comes out, as soon as the code is released, they will modify it and change the game for their needs.” Thus, the implication of these witnesses’ testimony is that the release of code into the public domain can contribute to political subversion, much as covert intrusion into computer networks by stealthy hackers can. However, the witnesses from the Pentagon and from the government contractor SAIC often present a contradictory image of the supposed terrorists in the hearing transcripts. Sometimes the enemy is depicted as an organisation of technological masterminds, capable of manipulating the computer code of unwitting Americans and snatching their rightful intellectual property away; sometimes those from the opposing forces are depicted as pre-modern and even sub-literate political innocents. In contrast, the congressional representatives seem to focus on similarities when comparing the work of “terrorists” to the everyday digital practices of their constituents and even of themselves. According to the transcripts of this open hearing, legislators on both sides of the aisle express anxiety about domestic patterns of Internet reception. Even the legislators’ own Web pages are potentially disruptive electronic artefacts, particularly when the demands of digital labour interfere with their duties as lawmakers. 
Although the subject of the hearing is ostensibly terrorist Websites, Representative Anna Eshoo (D-California) bemoans the difficulty of maintaining her own official congressional site. As she observes, “So we are – as members, I think we’re very sensitive about what’s on our Website, and if I retained what I had on my Website three years ago, I’d be out of business. So we know that they have to be renewed. They go up, they go down, they’re rebuilt, they’re – you know, the message is targeted to the future.” In their questions, lawmakers identify Weblogs (blogs) as a particular area of concern as a destabilising alternative to authoritative print sources of information from established institutions. Representative Alcee Hastings (D-Florida) compares the polluting power of insurgent bloggers to that of influential online muckrakers from the American political Right. Hastings complains of “garbage on our regular mainstream news that comes from blog sites.” Representative Heather Wilson (R-New Mexico) attempts to project a media-savvy persona by bringing up the “phenomenon of blogging” in conjunction with her questions about jihadist Websites in which she notes how Internet traffic can be magnified by cooperative ventures among groups of ideologically like-minded content-providers: “These Websites, and particularly the most active ones, are they cross-linked? And do they have kind of hot links to your other favorite sites on them?” At one point Representative Wilson asks witness Rodman if he knows “of your 100 hottest sites where the Webmasters are educated? What nationality they are? Where they’re getting their money from?” In her questions, Wilson implicitly acknowledges that Web work reflects influences from pedagogical communities, economic networks of the exchange of capital, and even potentially the specific ideologies of nation-states. 
It is perhaps indicative of the government contractors’ anachronistic worldview that the witness is unable to answer Wilson’s question. He explains that his agency focuses on the physical location of the server or ISP rather than the social backgrounds of the individuals who might be manufacturing objectionable digital texts. The premise behind the contractors’ working method – surveilling the technical apparatus not the social network – may be related to other beliefs expressed by government witnesses, such as the supposition that jihadist Websites are collectively produced and spontaneously emerge from the indigenous, traditional, tribal culture, instead of assuming that Iraqi insurgents have analogous beliefs, practices, and technological awareness to those in first-world countries. The residual subtexts in the witnesses’ conjectures about competing cultures of orality and literacy may tell us something about a reactionary rhetoric around videogames and digital culture more generally. According to the experts before Congress, the Middle Eastern audience for these videogames and Websites is limited by its membership in a pre-literate society that is only capable of abortive cultural production without access to knowledge that is archived in printed codices. Sometimes the witnesses before Congress seem to be unintentionally channelling the ideas of the late literacy theorist Walter Ong about the “secondary orality” associated with talky electronic media such as television, radio, audio recording, or telephone communication. Later followers of Ong extend this concept of secondary orality to hypertext, hypermedia, e-mail, and blogs, because they similarly share features of both speech and written discourse. 
Although Ong’s disciples celebrate this vibrant reconnection to a mythic, communal past of what Kathleen Welch calls “electric rhetoric,” the defence industry consultants express their profound state of alarm at the potentially dangerous and subversive character of this hybrid form of communication. The concept of an “oral tradition” is first introduced by the expert witnesses in the context of modern marketing and product distribution: “The Internet is used for a variety of things – command and control,” one witness states. “One of the things that’s missed frequently is how and – how effective the adversary is at using the Internet to distribute product. They’re using that distribution network as a modern form of oral tradition, if you will.” Thus, although the Internet can be deployed for hierarchical “command and control” activities, it also functions as a highly efficient peer-to-peer distributed network for disseminating the commodity of information. Throughout the hearings, the witnesses imply that unregulated lateral communication among social actors who are not authorised to speak for nation-states or to produce legitimated expert discourses is potentially destabilising to political order. Witness Eric Michael describes the “oral tradition” and the conventions of communal life in the Middle East to emphasise the primacy of speech in the collective discursive practices of this alien population: “I’d like to point your attention to the media types and the fact that the oral tradition is listed as most important. The other media listed support that. And the significance of the oral tradition is more than just – it’s the medium by which, once it comes off the Internet, it is transferred.” The experts go on to claim that this “oral tradition” can contaminate other media because it functions as “rumor,” the traditional bane of the stately discourse of military leaders since the classical era. The oral tradition now also has an aspect of rumor. 
A[n] event takes place. There is an explosion in a city. Rumor is that the United States Air Force dropped a bomb and is doing indiscriminate killing. This ends up being discussed on the street. It ends up showing up in a Friday sermon in a mosque or in another religious institution. It then gets recycled into written materials. Media picks up the story and broadcasts it, at which point it’s now a fact. In this particular case that we were telling you about, it showed up on a network television, and their propaganda continues to go back to this false initial report on network television and continue to reiterate that it’s a fact, even though the United States government has proven that it was not a fact, even though the network has since recanted the broadcast. In this example, many-to-many discussion on the “street” is formalised into a one-to-many “sermon” and then further stylised using technology in a one-to-many broadcast on “network television” in which “propaganda” that is “false” can no longer be disputed. This “oral tradition” is like digital media, because elements of discourse can be infinitely copied or “recycled,” and it is designed to “reiterate” content. In this hearing, the word “rhetoric” is associated with destructive counter-cultural forces by the witnesses who reiterate cultural truisms dating back to Plato and the Gorgias. For example, witness Eric Michael initially presents “rhetoric” as the use of culturally specific and hence untranslatable figures of speech, but he quickly moves to an outright castigation of the entire communicative mode.
“Rhetoric,” he tells us, is designed to “distort the truth,” because it is a “selective” assembly or a “distortion.” Rhetoric is also at odds with reason, because it appeals to “emotion” and a romanticised Weltanschauung oriented around discourses of “struggle.” The film by SonicJihad is chosen as the final clip by the witnesses before Congress, because it allegedly combines many different types of emotional appeal, and thus it conveniently ties together all of the themes that the witnesses present to the legislators about unreliable oral or rhetorical sources in the Middle East: And there you see how all these products are linked together. And you can see where the games are set to psychologically condition you to go kill coalition forces. You can see how they use humor. You can see how the entire campaign is carefully crafted to first evoke an emotion and then to evoke a response and to direct that response in the direction that they want. Jihadist digital products, especially videogames, are effective means of manipulation, the witnesses argue, because they employ multiple channels of persuasion and carefully sequenced and integrated subliminal messages. To understand the larger cultural conversation of the hearing, it is important to keep in mind that the related argument that “games” can “psychologically condition” players to be predisposed to violence is one that was important in other congressional hearings of the period, as well as one that played a role in bills and resolutions that were passed by the full body of the legislative branch. In the witness’s testimony an appeal to anti-game sympathies at home is combined with a critique of a closed anti-democratic system abroad in which the circuits of rhetorical production and their composite metonymic chains are described as those that command specific, unvarying, robotic responses.
This sharp criticism of the artful use of a presentation style that is “crafted” is ironic, given that the witnesses’ “compilation” of jihadist digital material is staged in the form of a carefully structured PowerPoint presentation, one that is paced to a well-rehearsed rhythm of “slide, please” or “next slide” in the transcript. The transcript also reveals that the members of the House Intelligence Committee were not the original audience for the witnesses’ PowerPoint presentation. Rather, when it was first created by SAIC, this “expert” presentation was designed for training purposes for the troops on the ground, who would be facing the challenges of deployment in hostile terrain. According to the witnesses, having the slide show showcased before Congress was something of an afterthought. Nonetheless, Congressman Tiahrt (R-KS) is so impressed with the rhetorical mastery of the consultants that he tries to appropriate it. As Tiahrt puts it, “I’d like to get a copy of that slide sometime.” From the hearing we also learn that the terrorists’ Websites are threatening precisely because they manifest a polymorphously perverse geometry of expansion. For example, one SAIC witness before the House Committee compares the replication and elaboration of digital material online to a “spiderweb.” Like Representative Eshoo’s site, he also notes that the terrorists’ sites go “up” and “down,” but the consultant is left to speculate about whether or not there is any “central coordination” to serve as an organising principle and to explain the persistence and consistency of messages despite the apparent lack of a single authorial ethos to offer a stable, humanised, point of reference. In the hearing, the oft-cited solution to the problem created by the hybridity and iterability of digital rhetoric appears to be “public diplomacy.” Both consultants and lawmakers seem to agree that the damaging messages of the insurgents must be countered with U.S.
sanctioned information, and thus the phrase “public diplomacy” appears in the hearing seven times. However, witness Roughead complains that the protean “oral tradition” and what Henry Jenkins has called the “transmedia” character of digital culture, which often crosses several platforms of traditional print, projection, or broadcast media, stymies their best rhetorical efforts: “I think the point that we’ve tried to make in the briefing is that wherever there’s Internet availability at all, they can then download these – these programs and put them onto compact discs, DVDs, or post them into posters, and provide them to a greater range of people in the oral tradition that they’ve grown up in. And so they only need a few Internet sites in order to distribute and disseminate the message.” Of course, to maintain their share of the government market, the Science Applications International Corporation also employs practices of publicity and promotion through the Internet and digital media. They use HTML Web pages for these purposes, as well as PowerPoint presentations and online video. The rhetoric of the Website of SAIC emphasises their motto “From Science to Solutions.” After a short Flash film about how SAIC scientists and engineers solve “complex technical problems,” the visitor is taken to the home page of the firm that re-emphasises their central message about expertise. The maps, uniforms, and specialised tools and equipment that are depicted in these opening Web pages reinforce an ethos of professional specialisation that is able to respond to multiple threats posed by the “global war on terror.” By 26 June 2006, the incident finally was being described as a “Pentagon Snafu” by ABC News. From the opening of reporter Jake Tapper’s investigative Webcast, established government institutions were put on the spot: “So, how much does the Pentagon know about videogames?
Well, when it came to a recent appearance before Congress, apparently not enough.” Indeed, the very language about “experts” that was highlighted in the earlier coverage is repeated by Tapper in mockery, with the significant exception of “independent expert” Ian Bogost of the Georgia Institute of Technology. If the Pentagon and SAIC deride the legitimacy of rhetoric as a cultural practice, Bogost occupies himself with its defence. In his recent book Persuasive Games: The Expressive Power of Videogames, Bogost draws upon the authority of the “2,500 year history of rhetoric” to argue that videogames represent a significant development in that cultural narrative. Given that Bogost and his Watercooler Games Weblog co-editor Gonzalo Frasca were actively involved in the detective work that exposed the depth of professional incompetence involved in the government’s line-up of witnesses, it is appropriate that Bogost is given the final words in the ABC exposé. As Bogost says, “We should be deeply bothered by this. We should really be questioning the kind of advice that Congress is getting.” Bogost may be right that Congress received terrible counsel on that day, but a close reading of the transcript reveals that elected officials were much more than passive listeners: in fact they were lively participants in a cultural conversation about regulating digital media. After looking at the actual language of these exchanges, it seems that the persuasiveness of the misinformation from the Pentagon and SAIC had as much to do with lawmakers’ preconceived anxieties about practices of computer-mediated communication close to home as it did with the contradictory stereotypes that were presented to them about Internet practices abroad. In other words, lawmakers found themselves looking into a fun house mirror that distorted what should have been familiar artefacts of American popular culture because it was precisely what they wanted to see.

References

ABC News.
“Terrorist Videogame?” Nightline Online. 21 June 2006. 22 June 2006 <http://abcnews.go.com/Video/playerIndex?id=2105341>.
Bogost, Ian. Persuasive Games: Videogames and Procedural Rhetoric. Cambridge, MA: MIT Press, 2007.
Game Politics. “Was Congress Misled by ‘Terrorist’ Game Video? We Talk to Gamer Who Created the Footage.” 11 May 2006. <http://gamepolitics.livejournal.com/285129.html#cutid1>.
Jenkins, Henry. Convergence Culture: Where Old and New Media Collide. New York: New York UP, 2006.
julieb. “David Morgan Is a Horrible Writer and Should Be Fired.” Online posting. 5 May 2006. Dvorak Uncensored Cage Match Forums. <http://cagematch.dvorak.org/index.php/topic,130.0.html>.
Mahmood. “Terrorists Don’t Recruit with Battlefield 2.” GGL Global Gaming. 16 May 2006 <http://www.ggl.com/news.php?NewsId=3090>.
Morgan, David. “Islamists Using U.S. Video Games in Youth Appeal.” Reuters online news service. 4 May 2006 <http://today.reuters.com/news/ArticleNews.aspx?type=topNews&storyID=2006-05-04T215543Z_01_N04305973_RTRUKOC_0_US-SECURITY-VIDEOGAMES.xml&pageNumber=0&imageid=&cap=&sz=13&WTModLoc=NewsArt-C1-ArticlePage2>.
Ong, Walter J. Orality and Literacy: The Technologizing of the Word. London/New York: Methuen, 1982.
Parker, Trey. Online posting. 7 May 2006. 9 May 2006 <http://www.treyparker.com>.
Plato. “Gorgias.” Plato: Collected Dialogues. Princeton: Princeton UP, 1961.
Shrader, Katherine. “Pentagon Surfing Thousands of Jihad Sites.” Associated Press 4 May 2006.
SonicJihad. “SonicJihad: A Day in the Life of a Resistance Fighter.” Online posting. 26 Dec. 2005. Planet Battlefield Forums. 9 May 2006 <http://www.forumplanet.com/planetbattlefield/topic.asp?fid=13670&tid=1806909&p=1>.
Tapper, Jake, and Audery Taylor. “Terrorist Video Game or Pentagon Snafu?” ABC News Nightline 21 June 2006. 30 June 2006 <http://abcnews.go.com/Nightline/Technology/story?id=2105128&page=1>.
U.S. Congressional Record.
Panel I of the Hearing of the House Select Intelligence Committee, Subject: “Terrorist Use of the Internet for Communications.” Federal News Service. 4 May 2006.
Welch, Kathleen E. Electric Rhetoric: Classical Rhetoric, Oralism, and the New Literacy. Cambridge, MA: MIT Press, 1999.
 
 
 
 Citation reference for this article
 
 MLA Style
 Losh, Elizabeth. "Artificial Intelligence: Media Illiteracy and the SonicJihad Debacle in Congress." M/C Journal 10.5 (2007). echo date('d M. Y'); ?> <http://journal.media-culture.org.au/0710/08-losh.php>. APA Style
 Losh, E. (Oct. 2007) "Artificial Intelligence: Media Illiteracy and the SonicJihad Debacle in Congress," M/C Journal, 10(5). Retrieved echo date('d M. Y'); ?> from <http://journal.media-culture.org.au/0710/08-losh.php>. 
APA, Harvard, Vancouver, ISO, and other styles
32

Cruikshank, Lauren. "Articulating Alternatives: Moving Past a Plug-and-Play Prosthetic Media Model." M/C Journal 22, no. 5 (2019). http://dx.doi.org/10.5204/mcj.1596.

Full text
Abstract:
The first uncomfortable twinges started when I was a grad student, churning out my Master’s thesis on a laptop that I worked on at the library, in my bedroom, on the kitchen table, and at the coffee shop. By the last few months, typing was becoming uncomfortable for my arms, but as any thesis writer will tell you, your whole body is uncomfortable with the endless hours sitting, inputting, and revising. I didn’t think much of it until I moved on to a new city to start a PhD program. Now the burning that accompanied my essay-typing binges started to worry me more, especially since I noticed the twinges didn’t go away when I got up to chat with my roommate, or to go to bed. I finally mentioned the annoying arm to Sonja, a medical student friend of mine visiting me one afternoon. She asked me to pick up a chair in front of me, palms out. I did, and the attempt stabbed pain up my arm and through my elbow joint. The chair fell out of my hands. We looked at each other, eyebrows raised.

Six months and much computer work later, I still hadn’t really addressed the issue. Who had time? Chasing mystery ailments around and more importantly, doing any less typing were not high on my likely list. But like the proverbial frog in slowly heated water, things had gotten much worse without my really acknowledging it. That is, until the day I got up from my laptop, stretched out and wandered into the kitchen to put some pasta on to boil. When the spaghetti was ready, I grabbed the pot to drain it and my right arm gave as if someone had just handed me a 200-pound weight. The pot, pasta and boiling water hit the floor with a scalding splash that nearly missed both me and the fleeing cat. Maybe there was a problem here.

Both popular and critical understandings of the body have been in a great deal of flux over the past three or four decades as digital media technologies have become ever more pervasive and personal.
Interfacing with the popular Internet, video games, mobile devices, wearable computing, and other new media technologies has prompted many to reflect on and reconsider what it means to be an embodied human being in an increasingly digitally determined era. As a result, the body, at various times in this recent history, has been theoretically disowned, disavowed, discarded, disdained, replaced, idealised, essentialised, hollowed out, re-occupied, dismembered, reconstituted, reclaimed and re-imagined in light of new media. Despite all of the angst over the relationships our embodied selves have had to digital media, of course, our embodied selves have endured. It remains true that “even in the age of technosocial subjects, life is lived through bodies” (Stone 113).

How we understand our embodiments and their entanglements with technologies matter deeply, moreover, for these understandings shape not only discourse around embodiment and media, but also the very bodies and media in question in very real ways. For example, a long-held tenet in both popular culture and academic work has been the notion that media technologies extend our bodies and our senses as technological prostheses. The idea here is that media technologies work like prostheses that extend the reach of our eyes, ears, voice, touch, and other bodily abilities through time and space, augmenting our abilities to experience and influence the world.

Canadian media scholar Marshall McLuhan is one influential proponent of this notion, and claimed that, in fact, “the central purpose of all my work is to convey this message, that by understanding media as they extend man, we gain a measure of control over them” (McLuhan and Zingrone 265).
Other more contemporary media scholars reflect on how “our prosthetic technological extensions enable us to amplify and extend ourselves in ways that profoundly affect the nature and scale of human communication” (Cleland 75), and suggest that a media technology such as one’s mobile device can act “as a prosthesis that supports the individual in their interactions with the world” (Glitsos 161). Popular and commercial discourses also frequently make use of this idea, from the 1980s AT&T ad campaign that nudged you to “Reach Out and Touch Someone” via the telephone, to Texas Instruments’ claim in the 1990s that their products were “Extending Your Reach”, to Nikon’s contemporary nudge to “See Much Further” with the prosthetic assistance of their cameras.

The etymology of the term “prosthesis” reveals that the term evolves from Greek and Latin components that mean, roughly, “to add to”. The word was originally employed in the 16th century in a grammatical context to indicate “the addition of a letter or syllable to the beginning of a word”, and was adopted to describe “the replacement of defective or absent parts of the body by artificial substitutes” in the 1700s. More recently the word “prosthesis” has come to be used to indicate more simply, “an artificial replacement for a part of the body” (OED Online). As we see in the use of the term over the past few decades, the meaning of the word continues to shift and is now often used to describe technological additions that don’t necessarily replace parts of the body, but augment and extend embodied capabilities in various ways. Technology as prosthesis is “a trope that has flourished in a recent and varied literature concerned with interrogating human-technology interfaces” (Jain 32), and now goes far beyond signifying the replacement of missing components.
Although the prosthesis has “become somewhat of an all-purpose metaphor for interactions of body and technology” (Sun 16) and “a tempting theoretical gadget” (Jain 49), I contend that this metaphor is not often used particularly faithfully. Instead of invoking anything akin to the complex lived corporeal experiences and conundrums of prosthetic users, what we often get when it comes to metaphors of technology-as-prostheses is a fascination with the potential of technologies in seamlessly extending our bodies. This necessitates a fantasy version of both the body and its prostheses as interchangeable or extendable appendages to be unproblematically plugged and unplugged, modifying our capabilities and perceptions to our varying whims.

Of course, a body seamlessly and infinitely extended by technological prostheses is really no body. This model forgoes actual lived bodies for a shiny but hollow amalgamation based on what I have termed the “disembodimyth” enabled by technological transcendence. By imagining our bodies as assemblages of optional appendages, it is not much of a leap to imagine opting out of our bodies altogether and using technological means to unfasten our consciousness from our corporeal parts. Allison Muri points out that this myth of imminent emancipation from our bodies via unity with technology is a view that has become “increasingly prominent in popular media and cultural studies” (74), despite or perhaps because of the fact that, due to global overpopulation and wasteful human environmental practices, “the human body has never before been so present, or so materially manifest at any time in the history of humanity”, rendering “contradictory, if not absurd, the extravagantly metaphorical claims over the past two decades of the human body’s disappearance or obsolescence due to technology” (75-76).
In other words, it becomes increasingly difficult to speak seriously about the body being erased or escaped via technological prosthetics when those prosthetics, and our bodies themselves, continue to proliferate and contribute to the piling up of waste and pollution in the current Anthropocene. But whether they imply smooth couplings with alluring technologies, or uncoupling from the body altogether, these technology-as-prosthesis metaphors tell us very little about “prosthetic realities” (Sun 24). Actual prosthetic realities involve learning curves; pain, frustrations and triumphs; hard-earned remappings of mental models; and much experimentation and adaption on the part of both technology and user in order to function. In this vein, Vivian Sobchak has detailed the complex sensations and phenomenological effects that followed the amputation of her leg high above the knee, including the shifting presence of her “phantom limb” perceptions, the alignments, irritations, movements, and stabilities offered by her prosthetic leg, and her shifting senses of bodily integrity and body-image over time. An oversimplistic application of the prosthetic metaphor for our encounters with technology runs the risk of forgetting this wealth of experiences and instructive first-hand accounts from people who have been using therapeutic prosthetics as long as assistive devices have been conceived of, built, and used. Of course, prosthetics have long been employed not simply to aid function and mobility, but also to restore and prop up concepts of what a “whole,” “normal” body looks like, moves like, and includes as essential components. Prosthetics are employed, in many cases, to allow the user to “pass” as able-bodied in rendering their own technological presence invisible, in service of restoring an ableist notion of embodied normality. 
Scholars of Critical Disability Studies have pushed back against these ableist notions, in service of recognising the capacities of “the disabled body when it is understood not as a less than perfect form of the normative standard, but as figuring difference in a nonbinary sense” (Shildrick 14). Paralympian, actress, and model Aimee Mullins has lent her voice to this cause, publicly contesting the prioritisation of realistic, unobtrusive form in prosthetic design. In a TED talk entitled It’s Not Fair Having 12 Pairs of Legs, she showcases her collection of prosthetics, including “cheetah legs” designed for optimal running speed, transparent glass-like legs, ornately carved wooden legs, Barbie doll-inspired legs customised with high heel shoes, and beautiful, impractical jellyfish legs. In illustrating the functional, fashionable, and fantastical possibilities, she challenges prosthetic designers to embrace more poetry and whimsy, while urging us all to move “away from the need to replicate human-ness as the only aesthetic ideal” (Mullins). In this same light, Sarah S. Jain asks “how do body-prosthesis relays transform individual bodies as well as entire social notions about what a properly functioning physical body might be?” (39). In her exploration of how prostheses can be simultaneously wounding and enabling, Jain recounts Sigmund Freud’s struggle with his own palate replacement following surgery for throat cancer in 1923. His prosthesis allowed him to regain the ability to speak and eat, but also caused him significant pain. Nevertheless, his artificial palate had to be worn, or the tissue would shrink and necessitate additional painful procedures (Jain 31). Despite this fraught experience, Freud himself espoused the trope of technologically enhanced transcendence, pronouncing “Man has, as it were, become a prosthetic god. 
When he puts on all his auxiliary organs, he is truly magnificent.” However, he did add a qualification, perhaps reflective of his own experiences, by next noting, “but those organs have not grown on him and they still give him much trouble at times” (qtd. in Jain 31). This trouble is, I argue, important to remember and reclaim. It is also no less present in our interactions with our media prostheses. Many of our technological encounters with media come with unacknowledged discomforts, adjustments, lag, strain, ill-fitting defaults, and fatigue. From carpal tunnel syndrome to virtual reality vertigo, our interactions with media technologies are often marked by pain and “much trouble” in Freud’s sense. Computer Science and Cultural Studies scholar Phoebe Sengers opens a short piece titled Technological Prostheses: An Anecdote, by reflecting on how “we have reached the post-physical era. On the Internet, all that matters is our thoughts. The body is obsolete. At least, whoever designed my computer interface thought so.” She traces how concentrated interactions with computers during her graduate work led to intense tendonitis in her hands. Her doctor responded by handing her “a technological prosthesis, two black leather wrist braces” that allowed her to return to her keyboard to resume typing ten hours a day. Shortly after her assisted return to her computer, she developed severe tendonitis in her elbows and had to stop typing altogether. Her advisor also handed her a technological prosthesis, this time “a speech understanding system that would transcribe my words,” so that she could continue to work. Two days later she lost her voice. Ultimately she “learned that my body does not go away when I work. I learned to stop when it hurt […] and to refuse to behave as though my body was not there” (Sengers). My own experiences in grad school were similar in many ways to Sengers’s. 
Besides the pasta problem outlined above, my own computer interfacing injuries at that point in my career meant I could no longer turn keys in doors, use a screwdriver, lift weights, or play the guitar. I held a friend’s baby at Christmas that year and the pressure of the small body on my arm made me wince. My family doctor bent my arm around a little, then, shrugging her shoulders, she signed me up for a nerve test. As a young neurologist proceeded to administer a series of electric shocks and stick pins into my arms in various places, I noticed she had an arm brace herself. She explained that she also had a repetitive strain injury aggravated by her work tasks. She pronounced mine an advanced repetitive strain injury involving both medial and lateral epicondylitis, and sent me home with recommendations for rest, ice and physiotherapy.

Rest was a challenge: like Sengers, I puzzled over how one might manage to be productive in academia without typing. I tried out some physiotherapy, with my arm connected to electrodes and currents coursing through my elbow until my arm contorted involuntarily in bizarre ways. I tried switching my mouse from my right side to my left, switching from typing to voice recognition software and switching from a laptop to a more ergonomic desktop setup. I tried herbal topical treatments, wearing an extremely ugly arm brace, doing yoga poses, and enduring chiropractic bone-cracking. I learned in talking with people around me at that time that repetitive strains of various kinds are surprisingly common conditions for academics and other computer-oriented occupations. I learned other things well worth learning in that painful process. In terms of my own writing and thinking about technology, I have even less tolerance for the idea of ephemeral, transcendent technological fusions between human and machine.
Seductive slippages into a cyberspatial existence seem less sexy when bumping your body up against the very physical and unforgiving interface hurts more with each keystroke or mouse click. The experience has given me a chronic injury to manage carefully ever since, rationing my typing time and redoubling my commitment to practicing embodied theorising about technology, with attention to sensation, materiality, and the way joints (between bones or between computer and computant) can become points of inflammation. Although pain is rarely referenced in the myths of smooth human and technological incorporations, there is much to be learned in acknowledging and exploring the entry and exit wounds made when we interface with technology. The elbow, or wrist, or lower back, or mental health that gives out serves as an effective alarm, should it be ignored too long. If nothing else, like a crashed computer, a point of pain will break a flow of events typically taken for granted. Whether it is your screen or your pinky finger that unexpectedly freezes, a system collapse will prompt a step back to look with new perspective at the process you were engaged in. The lag, crash, break, gap, crack, or blister exposes the inherent imperfections in a system and offers up an invitation for reflection, critical engagement, and careful choice.

One careful choice we could make would be a more critical engagement with technology-as-prosthesis by “re-membering” our jointedness with technologies. Of course, joints themselves are not distinct parts, but interesting articulated systems and relationships in the spaces in-between. Experiencing our jointedness with technologies involves recognising that this is not the smooth romantic union with technology that has so often been exalted. Instead, our technological articulations involve a range of pleasures and pain, flows and blockages, frictions and slippages, flexibilities and rigidities.
I suggest that a new model for understanding technology and embodiment might employ “articulata” as a central figure, informed by the multiple meanings of articulation. At their simplest, articulata are hinged, jointed, plural beings, but they are also precarious things that move beyond a hollow collection of corporeal parts. The inspiration for an exploration of articulation as a metaphor in this way was planted by the work of Donna Haraway, and especially by her 1992 essay, “The Promises of Monsters: A Regenerative Politics for Inappropriate/d Others,” in which she touches briefly on articulation and its promise. Haraway suggests that “To articulate is to signify. It is to put things together, scary things, risky things, contingent things. I want to live in an articulate world. We articulate; therefore we are” (324). Following from Haraway’s work, this framework insists that bodies and technologies are not simply components cobbled together, but a set of relations that rework each other in complex and ongoing processes of articulation. The double-jointed meaning of articulation is particularly apt as inspiration for crafting a more nuanced understanding of embodiment, since articulation implies both physiology and communication. It is a term that can be used to explain physical jointedness and mobility, but also expressive specificities. We articulate a joint by exploring its range of motion and we articulate ideas by expressing them in words. In both senses we articulate and are articulated by our jointed nature. Instead of oversimplifying or idealising embodied relationships with prostheses and other technologies, we might conceive of them and experience them as part of a “joint project”, based on points of connexion that are not static, but dynamic, expressive, complex, contested, and sometimes uncomfortable. 
After all, as Shildrick reminds us, in addition to functioning as utilitarian material artifacts, “prostheses are rich in semiotic meaning and mark the site where the disordering ambiguity, and potential transgressions, of the interplay between the human, animal and machine cannot be occluded” (17). By encouraging the attentive embracing of these multiple meanings, disorderings, ambiguities, transgressions and interplays, my aim moving forward is to explore the ways in which we might all become more articulate about our articulations. After all, I too want to live in an articulate world.

References

AT&T. "AT&T Reach Out and Touch Someone Commercial – 1987." Advertisement. 13 Mar. 2014. YouTube. <http://www.youtube.com/watch?v=OapWdclVqEY>.
Cleland, Kathy. "Prosthetic Bodies and Virtual Cyborgs." Second Nature 3 (2010): 74–101.
Glitsos, Laura. "Screen as Skin: The Somatechnics of Touchscreen Music Media." Somatechnics 7.1 (2017): 142–165.
Haraway, Donna. "The Promises of Monsters: A Regenerative Politics for Inappropriate/d Others." Cultural Studies. Eds. Lawrence Grossberg, Cary Nelson and Paula A. Treichler. New York: Routledge, 1992. 295–337.
Jain, Sarah S. "The Prosthetic Imagination: Enabling and Disabling the Prosthetic Trope." Science, Technology, & Human Values 24.1 (1999): 31–54.
McLuhan, Eric, and Frank Zingrone, eds. Essential McLuhan. Concord: Anansi P, 1995.
Mullins, Aimee. Aimee Mullins: It’s Not Fair Having 12 Pairs of Legs. TED, 2009. <http://www.ted.com/talks/aimee_mullins_prosthetic_aesthetics.html>.
Muri, Allison. "Of Shit and the Soul: Tropes of Cybernetic Disembodiment in Contemporary Culture." Body & Society 9.3 (2003): 73–92.
Nikon. "See Much Further! Nikon COOLPIX P1000." Advertisement. 1 Nov. 2018. YouTube. <http://www.youtube.com/watch?v=UtABWZX0U8w>.
OED Online. "prosthesis, n." Oxford UP. June 2019. 1 Aug. 2019 <https://www-oed-com.proxy.hil.unb.ca/view/Entry/153069?redirectedFrom=prosthesis#eid>.
Sengers, Phoebe. "Technological Prostheses: An Anecdote." ZKP-4 Net Criticism Reader. Eds. Geert Lovink and Pit Schultz. 1997.
Shildrick, Margrit. "Why Should Our Bodies End at the Skin?: Embodiment, Boundaries, and Somatechnics." Hypatia 30.1 (2015): 13–29.
Sobchak, Vivian. "Living a ‘Phantom Limb’: On the Phenomenology of Bodily Integrity." Body & Society 16.3 (2010): 51–67.
Stone, Allucquere Roseanne. "Will the Real Body Please Stand Up? Boundary Stories about Virtual Cultures." Cyberspace: First Steps. Ed. Michael Benedikt. Cambridge: MIT P, 1991. 81–113.
Sun, Hsiao-yu. "Prosthetic Configurations and Imagination: Dis/ability, Body and Technology." Concentric: Literary and Cultural Studies 44.1 (2018): 13–39.
Texas Instruments. "We Wrote the Book on Classroom Calculators." Advertisement. Teaching Children Mathematics 2.1 (1995): Back Matter. <http://www.jstor.org/stable/41196414>.
APA, Harvard, Vancouver, ISO, and other styles
33

Barker, Timothy Scott. "Information and Atmospheres: Exploring the Relationship between the Natural Environment and Information Aesthetics." M/C Journal 15, no. 3 (2012). http://dx.doi.org/10.5204/mcj.482.

Full text
Abstract:
Our culture abhors the world. Yet quicksand is swallowing the duellists; the river is threatening the fighter: earth, waters and climate, the mute world, the voiceless things once placed as a decor surrounding the usual spectacles, all those things that never interested anyone, from now on thrust themselves brutally and without warning into our schemes and manoeuvres. (Michel Serres, The Natural Contract, p. 3)

When Michel Serres describes culture's abhorrence of the world in the opening pages of The Natural Contract he draws our attention to the sidelining of nature in histories and theories that have sought to describe Western culture. As Serres argues, cultural histories are quite often built on the debates and struggles of humanity, which are largely held apart from their natural surroundings, as if on a stage, "purified of things" (3). But, as he is at pains to point out, human activity and conflict always take place within a natural milieu, a space of quicksand, swelling rivers, shifting earth, and atmospheric turbulence. Recently, via the potential for vast environmental change, what was once thought of as a staid “nature” has reasserted itself within culture. In this paper I explore how Serres’s positioning of nature can be understood amid new communication systems, which, via the apparent dematerialization of messages, seem to have further removed culture from nature. From here, I focus on a set of artworks that work against this division, reformulating the connection between information, a topic usually considered in relation to media and anthropic communication (and something about which Serres too has a great deal to say), and nature, an entity commonly considered beyond human contrivance. In particular, I explore how information visualisation and sonification have been used to give a new sense of materiality to the atmosphere, repotentialising the air as a natural and informational entity.
The Natural Contract argues for the legal legitimacy of nature, a natural contract similar in standing to Rousseau’s social contract. Serres’s book explores the history and notion of a “legal person”, arguing for a linking of the scientific view of the world and the legal visions of social life, where inert objects and living beings are considered within the same legal framework. As such The Natural Contract does not deal with ecology per se, but instead focuses on an argument for the inclusion of nature within law (Serres, “A Return” 131). In a drastic reconfiguring of the subject/object relationship, Serres explains how the space that once existed as a backdrop for human endeavour now seems to thrust itself directly into history. "They (natural events) burst in on our culture, which had never formed anything but a local, vague, and cosmetic idea of them: nature" (Serres, The Natural Contract 3). In this movement, nature does not simply take on the role of a new object to be included within a world still dominated by human subjects. Instead, human beings are understood as intertwined with a global system of turbulence that is both manipulated by them and manipulates them. Taking my lead from Serres’s book, in this paper I begin to explore the disconnections and reconnections that have been established between information and the natural environment. While I acknowledge that there is nothing natural about the term “nature” (Harman 251), I use the term to designate an environment constituted by the systematic processes of the collection of entities that are neither human beings nor human crafted artefacts. As the formation of cultural systems becomes demarcated from these natural objects, the scene is set for the development of culturally mediated concepts such as “nature” and “wilderness,” as entities untouched and unspoilt by cultural process (Morton). On one side of the divide sits the complex of communication systems; on the other, “nature”.
The restructuring of information flows due to developments in electronic communication has ostensibly removed messages from the medium of nature. Media is now considered within its own ecology (see Fuller; Strate) quite separate from nature, except when it is developed as media content (see Cubitt; Murray; Heumann). A separation between the structures of media ecologies and the structures of natural ecologies has emerged over the history of electronic communication. For instance, since the synoptic media theory of McLuhan it has been generally acknowledged that the shift from script to print, from stone to parchment, and from the printing press to more recent developments such as the radio, telephone, television, and Web 2.0, has fundamentally altered the structure and effects of human relationships. However, these developments—“the extensions of man” (McLuhan)—also changed the relationship between society and nature. Changes in communications technology have allowed people to remain dispersed, as ideas, in the form of electric currents or pulses of light, travel vast distances and in diverse directions, with communication no longer requiring human movement across geographic space. Technologies such as the telegraph and the radio, with their ability to seemingly dematerialize the media of messages, reformulated the concept of communication into a “quasi-physical connection” across the obstacles of time and space (Clarke, “Communication” 132). Prior to this, the natural world itself was the medium through which information was passed. Rather than messages transmitted via wires, communication was associated with the transport of messages through the world via human movement, with the materiality of the medium measured in the time it took to cover geographic space. The flow of messages followed trade flows (Briggs and Burke 20). Messages moved along trails, on rail, over bridges, down canals, and along shipping channels, arriving at their destination as information.
More recently, however, information, due to its instantaneous distribution and multiplication across space, seems to have no need for nature as a medium. Nature has become merely a topic for information, as media content, rather than as something that takes part within the information system itself. The above example illustrates a separation between information exchange and the natural environment brought about by a set of technological developments. As Serres points out, the word “media” is etymologically related to the word “milieu”. Hence, a theory of media should always be related to an understanding of the environment (Crocker). But humans no longer need to physically move through the natural world to communicate; ideas can move freely from region to region, from air-conditioned room to air-conditioned room, relatively unimpeded by natural forces or geographic distance. For a long time now, information exchange has not necessitated human movement through the natural environment, and this has consequences for how the formation of culture and its location in (or dislocation from) the natural world is viewed.

A number of artists have begun questioning the separation between media and nature, particularly concerning the materiality of air, and using information to provide new points of contact between media and the atmosphere (for a discussion of the history of ecoart see Wallen). In Eclipse (2009) (fig. 1), for instance, an internet-based work undertaken by the collective EcoArtTech, environmental sensing technology and online media are used experimentally to visualise air pollution. EcoArtTech is made up of the artist duo Cary Peppermint and Leila Nadir, and since 2005 they have been inquiring into the relationship between digital technology and the natural environment, particularly regarding concepts such as “wilderness”. In Eclipse, EcoArtTech garner photographs of American national parks from social media and photo sharing sites.
Air quality data gathered from the nearest capital city is then fed into an algorithm that visibly distorts the image based on the levels of particle pollution detected in the atmosphere. The photographs that circulate on photo sharing sites such as Flickr—photographs that are usually rather banal in their adherence to a history of wilderness photography—are augmented by the environmental pollution circulating in nearby capital cities.

Figure 1: EcoArtTech, Eclipse (detail of screenshot), 2009. Internet-based work available at: http://turbulence.org/Works/eclipse/

The digital is often associated with the clean transmission of information, as packets of data move from a server, over fibre optic cables, to be unpacked and re-presented on a computer's screen. Likewise, the photographs displayed in Eclipse are quite often of an unspoilt nature, containing no errors in their exposure or focus (most probably because these wilderness photographs were taken with digital cameras). As the photographs are overlaid with information garnered from air quality levels, the “unspoilt” photograph is directly related to pollution in the natural environment. In Eclipse the background noise of “wilderness,” the pollution in the air, is reframed as foreground. “We breathe background noise… Background noise is the ground of our perception, absolutely uninterrupted, it is our perennial sustenance, the element of the software of all our logic” (Serres, Genesis 7). Noise is activated in Eclipse in a similar way to Serres’s description, as an indication of the wider milieu in which communication takes place (Crocker). Noise links the photograph and its transmission not only to the medium of the internet and the glitches that arise as information is circulated, but also to the air in the originally photographed location.
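The pollution-driven distortion Eclipse performs can be imagined, in rough outline, as a mapping from an air quality reading to the amount of noise injected into an image's pixel values. EcoArtTech's actual code is not reproduced here, so the function below, its name, and its linear scaling rule are illustrative assumptions only: a minimal sketch of the general technique of coding an atmospheric value into visible degradation.

```python
# Hypothetical sketch of the kind of mapping Eclipse performs: the
# function name and the linear distortion rule are assumptions for
# illustration, not EcoArtTech's published algorithm.
import random

def distort_pixels(pixels, pm25, max_pm25=150.0, seed=0):
    """Shift each grayscale pixel by random noise scaled to a PM2.5
    particle-pollution reading: higher pollution, more distortion."""
    rng = random.Random(seed)
    strength = min(pm25 / max_pm25, 1.0)  # normalise the reading to [0, 1]
    out = []
    for p in pixels:
        noise = rng.uniform(-1.0, 1.0) * strength * 128
        out.append(max(0, min(255, int(p + noise))))  # clamp to valid range
    return out

clean = [120, 121, 119, 122]             # an "unspoilt" wilderness image
light = distort_pixels(clean, pm25=10)   # near-clean air: small shifts
heavy = distort_pixels(clean, pm25=150)  # heavy pollution: large shifts
```

The design point this sketch tries to capture is that the image is no longer fixed: re-running the same mapping with a new, real-time pollution reading yields a different picture each time.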
In addition to noise, there are parallels between the original photographs of nature gleaned from photo sharing sites and Serres’s concept of a history that somehow stands itself apart from the effects of ongoing environmental processes. By compartmentalising the natural and cultural worlds, both the historiography that Serres argues against and the wilderness photograph produce a concept of nature that is somehow outside, behind, or above human activities and the associated matter of noise. Eclipse, by altering photographs using real-time data, puts the still image into contact with the processes and informational outputs of nature. Air quality sensors detect pollution in the atmosphere and code these atmospheric processes into computer-readable information. The photograph is no longer static but is now open to continual recreation and degeneration, dependent on the coded value of the atmosphere in a given location.

A similar materiality is given to air in a public work undertaken by Preemptive Media, titled Areas Immediate Reading (AIR) (fig. 2). In this project, Preemptive Media, made up of Beatriz da Costa, Jamie Schulte and Brooke Singer, equip participants with instruments for measuring air quality as they walk around New York City. The devices monitor the carbon monoxide (CO), nitrogen oxides (NOx) or ground-level ozone (O3) levels that are being breathed in by the carrier. As Michael Dieter has pointed out in his reading of the work, the application of sensing technology by Preemptive Media is in distinct contrast to the conventional application of air quality monitoring, which usually takes the form of fixed, high-resolution devices spread over great distances. These larger air monitoring networks tend to present the value garnered from a large expanse of the atmosphere that covers individual cities or states.
The AIR project, in contrast, by using small mobile sensors, attempts to put people in informational contact with the air that they are breathing in their local and immediate time and place, and allows them to monitor the small parcels of atmosphere that surround other users in other locations (Dieter). It thus presents many small and mobile spheres of atmosphere, inhabited by individuals as they move through the city. In AIR we see the experimental application of an already developed technology in order to put people on the street in contact with the atmospheres that they are moving through. It gives a new informational form to the “vast but invisible ocean of air that surrounds us and permeates us” (Ihde 3), which in this case is given voice by a technological apparatus that converts the air into information. The atmosphere as information becomes less of a vague background and more of a measurable entity that ingresses into the lives and movements of human users. The air is conditioned by information; the turbulent and noisy atmosphere has been converted via technology into readable information (Connor 186-88).

Figure 2: Preemptive Media, Areas Immediate Reading (AIR) (close-up of device), 2011.

Throughout his career Serres has developed a philosophy of information and communication that may help us to reframe the relationship between the natural and cultural worlds (see Brown). Conventionally, the natural world is understood as made up of energy and matter, with exchanges of energy and the flows of biomass through food webs binding ecosystems together (DeLanda 120-1). However, the tendencies and structures of natural systems, like cultural systems, are also dependent on the communication of information. It is here that Serres provides us with a way to view natural and cultural systems as connected by a flow of energy and information.
He points out that in the wake of Claude Shannon’s famous Mathematical Theory of Communication it has been possible to consider the relationship between information and thermodynamics, at least in Shannon’s explanation of noise as entropy (Serres, Hermes 74). For Serres, an ecosystem can be conceptualised as an informational and energetic system: “it receives, stores, exchanges, and gives off both energy and information in all forms, from the light of the sun to the flow of matter which passes through it (food, oxygen, heat, signals)” (Serres, Hermes 74). Just as we are related to the natural world based on flows of energy—as sunlight is converted into energy by plants, which we in turn convert into food—we are also bound together by flows of information. The task is to find new ways to sense this information, to actualise the information, and imagine nature as more than a welter of data and the air as more than background.

If we think of information in broad-ranging terms as “coded values of the output of a process” (Losee 254), then we see that information and the environment—as a setting that is produced by continual and energetic processes—are in constant contact. After all, humans sense information from the environment all the time; we constantly decode the coded values of environmental processes transmitted via the atmosphere. I smell a flower, I hear bird songs, and I see the red glow of a sunset. The process of the singing bird is coded as vibrations of air particles that knock against my ear drum. The flower is coded as molecules in the atmosphere enter my nose and bind to cilia. The red glow is coded as wavelengths from the sun are dispersed in the Earth’s atmosphere and arrive at my eye. Information, of course, does not actually exist as information until some observing system constructs it (Clarke, “Information” 157-159).
This observing system, as we see the sunset, hear the birds, or smell the flower, involves the atmosphere as a medium, along with our sense organs and cognitive and non-cognitive processes. The molecules in the atmosphere exist independently of our sense of them, but they do not actualise as information until they are operationalised by the observational system. Prior to this, information can be thought of as noise circulating within the atmosphere. Heinz von Foerster, one of the key figures of cybernetics, states: “The environment contains no information. The environment is as it is” (von Foerster in Clarke, “Information” 157). Information, in this model, actualises only when something in the world causes a change to the observational system, as a difference that makes a difference (Bateson 448-466). Air expelled from a bird’s lungs and out its beak causes air molecules to vibrate, introducing difference into the atmosphere, which is then picked up by my ear and registered as sound, informing me that a bird is nearby. One bird song is picked up as information amid the swirling noise of nature, and a difference in the air makes a difference to the observational system. It may be useful to think of the purpose of information as being to control action, which is necessary “whenever the people concerned, controllers as well as controlled, belong to an organised social group whose collective purpose is to survive and prosper” (Scarrott 262). Information in this sense underpins the organisation of groups. Using this definition rooted in cybernetics, we see that information allows groups, which are dependent on certain control structures based on the sending and receiving of messages through media, to thrive, and defines the boundaries of these groups. We see this in a flock of birds, for instance, which forms based on the information that one bird garners from the movements of the other birds in proximity.
Extrapolating from this, if we are to live included in an ecological system capable of survival, the transmission of information is vital. But the form of the information is also important. To communicate, for example, one entity first needs to recognise that the other is speaking and differentiate this information from the noise in the air. Following Clarke and von Foerster, an observing system needs to be operational. An art project that gives aesthetic form to environmental processes in this vein—and one that is particularly concerned with the co-agentive relation between humans and nature—is Reiko Goto and Tim Collins’s Plein Air (2010) (fig. 3), an element in their ongoing Eden 3 project. In this work a technological apparatus is wired to a tree. This apparatus, which references the box easels most famously used by the Impressionists to paint ‘en plein air’, uses sensing technology to detect the tree’s responses to the varying CO2 levels in the atmosphere. An algorithm then translates this into real-time piano compositions. The tree’s biological processes are coded into the voice of a piano and sensed by listeners as aesthetic information. What is at stake in this work is a new understanding of atmospheres as a site for the exchange of information, and an attempt to resituate the interdependence of human and non-human entities within an experimental aesthetic system. As we breathe out carbon dioxide—both through our physiological process of breathing and our cultural processes of polluting—trees breathe it in. By translating these biological processes into a musical form, Goto and Collins’s work signals a movement from a process of atmospheric exchange to a digital process of sensing and coding, the output of which is then transmitted through the atmosphere as sound. It must be mentioned that within this movement from atmospheric gas to atmospheric music we are not listening to the tree alone.
We are listening to a much more complex polyphony involving the components of the digital sensing technology, the tree, the gases in the atmosphere, and the biological (breathing) and cultural processes (cars, factories and coal-fired power stations) that produce these gases.

Figure 3: Reiko Goto and Tim Collins, Plein Air, 2010

As both Don Ihde and Steven Connor have pointed out, the air that we breathe is not neutral. It is, on the contrary, given its significance in technology, sound, and voice. Taking this further, we might understand sensing technology as conditioning the air with information. This type of air conditioning—as information alters the condition of air—occurs as technology picks up, detects, and makes sensible phenomena in the atmosphere. While communication media such as the telegraph and other electronic information distribution systems may have distanced information from nature, the sensing technology experimentally applied by EcoArtTech, Preemptive Media, and Goto and Collins may remind us of the materiality of air. These technologies allow us to connect to the atmosphere; they reformulate it, converting it to information, giving new form to the coded processes in nature.

Acknowledgment

All images reproduced with the kind permission of the artists.

References

Bateson, Gregory. Steps to an Ecology of Mind. Chicago: University of Chicago Press, 1972.
Briggs, Asa, and Peter Burke. A Social History of the Media: From Gutenberg to the Internet. Malden: Polity Press, 2009.
Brown, Steve. “Michel Serres: Science, Translation and the Logic of the Parasite.” Theory, Culture and Society 19.1 (2002): 1-27.
Clarke, Bruce. “Communication.” Critical Terms for Media Studies. Eds. Mark B. N. Hansen and W. J. T. Mitchell. Chicago: University of Chicago Press, 2010. 131-45.
-----. “Information.” Critical Terms for Media Studies. Eds. Mark B. N. Hansen and W. J. T. Mitchell. Chicago: University of Chicago Press, 2010. 157-71.
Connor, Steven. The Matter of Air: Science and the Art of the Ethereal. London: Reaktion, 2010.
Crocker, Stephen. “Noise and Exceptions: Pure Mediality in Serres and Agamben.” CTheory: 1000 Days of Theory (2007). 7 June 2012 ‹http://www.ctheory.net/articles.aspx?id=574›.
Cubitt, Sean. EcoMedia. Amsterdam and New York: Rodopi, 2005.
DeLanda, Manuel. Intensive Science and Virtual Philosophy. London and New York: Continuum, 2002.
Dieter, Michael. “Processes, Issues, AIR: Toward Reticular Politics.” Australian Humanities Review 46 (2009). 9 June 2012 ‹http://www.australianhumanitiesreview.org/archive/Issue-May-2009/dieter.htm›.
Fuller, Matthew. Media Ecologies: Materialist Energies in Art and Technoculture. Cambridge, MA: MIT Press, 2005.
Harman, Graham. Guerrilla Metaphysics. Illinois: Open Court, 2005.
Ihde, Don. Listening and Voice: Phenomenologies of Sound. Albany: State University of New York, 2007.
Innis, Harold. Empire and Communications. Toronto: Voyageur Classics, 1950/2007.
Losee, Robert M. “A Discipline Independent Definition of Information.” Journal of the American Society for Information Science 48.3 (1997): 254-69.
McLuhan, Marshall. Understanding Media: The Extensions of Man. London: Sphere Books, 1964/1967.
Morton, Timothy. Ecology Without Nature: Rethinking Environmental Aesthetics. Cambridge: Harvard University Press, 2007.
Murray, Robin, and Joseph Heumann. Ecology and Popular Film: Cinema on the Edge. Albany: State University of New York, 2009.
Scarrott, G.C. “The Nature of Information.” The Computer Journal 32.3 (1989): 261-66.
Serres, Michel. Hermes: Literature, Science, Philosophy. Baltimore: The Johns Hopkins University Press, 1982.
-----. The Natural Contract. Trans. Elizabeth MacArthur and William Paulson. Ann Arbor: The University of Michigan Press, 1992/1995.
-----. Genesis. Trans. Genevieve James and James Nielson. Ann Arbor: The University of Michigan Press, 1982/1995.
-----. “A Return to the Natural Contract.” Making Peace with the Earth. Ed. Jerome Binde. Oxford: UNESCO and Berghahn Books, 2007.
Strate, Lance. Echoes and Reflections: On Media Ecology as a Field of Study. New York: Hampton Press, 2006.
Wallen, Ruth. “Ecological Art: A Call for Intervention in a Time of Crisis.” Leonardo 45.3 (2012): 234-42.