
Journal articles on the topic 'Digital Audio Workstations'

Consult the top 38 journal articles for your research on the topic 'Digital Audio Workstations.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Kovalenko, Oleksandr M. "ОСОБЛИВОСТІ ВИКОРИСТАННЯ ЦИФРОВИХ АУДІО РОБОЧИХ СТАНЦІЙ, ПРИЗНАЧЕНИХ ДЛЯ СТВОРЕННЯ ЕЛЕКТРОННОЇ МУЗИКИ В УМОВАХ НЕФОРМАЛЬНОЇ ОСВІТИ ДОРОСЛИХ" [Features of the use of digital audio workstations designed for creating electronic music in non-formal adult education]. Information Technologies and Learning Tools 53, no. 3 (2016): 178. http://dx.doi.org/10.33407/itlt.v53i3.1428.

Full text
Abstract:
The article highlights the importance of self-education and self-development for adults seeking to function effectively in the modern information society, and considers the possibilities of using audio workstations for adults' musical self-education and self-development. It analyses the basic characteristics and functional features of digital audio workstations and presents the main advantages and disadvantages of using them in adults' musical self-education. A comparative analysis of the world's most widespread digital audio workstations is given, carried out by examining the specialist literature, practical use of the software, the developers' websites, and accumulated user experience. Today the digital audio workstation is the principal tool for creating electronic music, which is why the ability to use a sequencer has become a core requirement for music producers, arrangers, and sound engineers.
APA, Harvard, Vancouver, ISO, and other styles
2

Metatla, Oussama, Fiore Martin, Adam Parkinson, Nick Bryan-Kinns, Tony Stockman, and Atau Tanaka. "Audio-haptic interfaces for digital audio workstations." Journal on Multimodal User Interfaces 10, no. 3 (2016): 247–58. http://dx.doi.org/10.1007/s12193-016-0217-8.

Full text
3

Clauhs, Matthew. "Songwriting with digital audio workstations in an online community." Journal of Popular Music Education 4, no. 2 (2020): 237–52. http://dx.doi.org/10.1386/jpme_00027_1.

Full text
Abstract:
Digital audio workstations and online file-sharing technology may be combined to create opportunities for collaborations among many groups, including performing ensembles, music technology classes, professional songwriters and preservice music teachers. This article presents a model for a digitally mediated online collaboration that focuses on popular music songwriting activities in school and higher education settings. Using an example from a high school music production class that collaborated with an undergraduate music education course through Google Docs and a file-sharing platform, the author outlines steps towards facilitating partnerships that focus on creating music in an online community. Such collaborations may help remove barriers between our classrooms and our communities as music teachers leverage technology to develop relationships with creators and performers of popular music everywhere.
4

Sawaguchi, Masaki. "Special Edition Recent Audio Technique in Sound Field Reproduction. Professional Equipments. Digital Audio Workstations Have Set the Practical Application into Audio Production." Journal of the Institute of Television Engineers of Japan 46, no. 9 (1992): 1089–95. http://dx.doi.org/10.3169/itej1978.46.1089.

Full text
5

Cardoso, Ana Maria Pereira, and Rodrigo Fonseca e Rodrigues. "A EXPERIÊNCIA DA INTERAÇÃO E O DESIGN DE INTERFACES: SEMIÓTICA E METACOMUNICAÇÃO NOS DIGITAL AUDIO WORKSTATIONS." CASA: Cadernos de Semiótica Aplicada 14, no. 1 (2016): 265. http://dx.doi.org/10.21709/casa.v14i1.8244.

Full text
Abstract:
The article raises several problems related to interface design for Digital Audio Workstations (DAWs). The theoretical approach draws on the concepts of semiosis and the three phases of experience in Peircean semiotics. The metacommunication thesis advanced by Semiotic Engineering is also examined; it concerns the singular nature of the dialogue between designers and users. Consequently, different communication strategies are required, given that this conversation does not take place synchronously. The DAWs Sonar (CakeWalk, 2010) and GarageBand (Apple, 2013) were chosen as the empirical corpus because their designs aim to balance system performance with the mnemonic and intuitive abilities of potential users across different cultural, technical and affective archetypes: people with singular expectations of a sound-recording device. The article argues that design, beyond pursuing new functionality, could offer DAW users ways of exploring the system through their own regular performances. In this way the composer would be invited to rethink his or her methods of musical creation and unsuspected singularities of listening.
6

Walzer, Daniel. "Blurred lines: Practical and theoretical implications of a DAW-based pedagogy." Journal of Music Technology & Education 13, no. 1 (2020): 79–94. http://dx.doi.org/10.1386/jmte_00017_1.

Full text
Abstract:
Digital audio workstations (DAWs) occupy a prominent space in the creative arts. Songwriters, composers, producers, and audio engineers use a combination of software and virtual instruments to record and make music. Educators increasingly find DAWs useful for teaching concepts in signal flow, acoustics and sound synthesis, and to model analogue processes. As the creative industries shift to primarily software-based platforms, the identities, roles, and responsibilities of the participants intersect and blur. Similarly, networked technologies change the space and place of creative activity. Now, the ‘studio’ exists virtually anywhere. For educators working with students, these changing paradigms present a series of challenges. This article explores the DAW’s possibilities across three areas: space and place, theory and identity, and pedagogy. The article advocates for a less technocratic model of teaching and learning with DAWs in favour of an approach that cultivates a balance of aesthetic awareness and creativity.
7

Brooker, Phillip, and Wes Sharrock. "Collaborative Music-Making with Digital Audio Workstations: The “nth Member” as a Heuristic Device for Understanding the Role of Technologies in Audio Composition." Symbolic Interaction 39, no. 3 (2016): 463–83. http://dx.doi.org/10.1002/symb.238.

Full text
8

Martin, Aengus, Craig T. Jin, and Oliver Bown. "Design and Evaluation of Agents that Sequence and Juxtapose Short Musical Patterns in Real Time." Computer Music Journal 41, no. 4 (2018): 45–63. http://dx.doi.org/10.1162/comj_a_00439.

Full text
Abstract:
We present and discuss the Agent Designer, a system that enables users of digital audio workstations to generate novel high-level structures for their compositions based on previous examples. The system uses variable-order Markov models and rule induction to learn both temporal relations and structural relations between parts in a piece of music. As is usual in machine learning, however, the quality of the learning can be improved greatly by users specifying relevant features. The Agent Designer therefore points to important design and human–computer interaction problems, as well as algorithmic challenges. We present a number of studies that help to understand how effective the Agent Designer is and how we might design a user interface that best enables users to obtain quality results from the system. We show that the Agent Designer is effective for certain musical styles, such as loop-based electronic music, and that we as expert users can design agents that produce the most effective results. We also note that it remains a challenge to automate this process fully.
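The sequencing idea described above can be illustrated with a much simpler model than the paper's variable-order Markov chains and rule induction: a first-order Markov chain over pattern labels, trained on one example arrangement. The pattern names below are hypothetical, and this is only a minimal sketch of the general technique, not the Agent Designer itself.

```python
import random

def train_markov(sequence):
    """Count first-order transitions between pattern labels."""
    model = {}
    for a, b in zip(sequence, sequence[1:]):
        model.setdefault(a, []).append(b)
    return model

def generate(model, start, length, rng=random.Random(0)):
    """Generate a new arrangement by walking the transition table."""
    out = [start]
    for _ in range(length - 1):
        choices = model.get(out[-1])
        if not choices:          # dead end: restart from the seed state
            choices = [start]
        out.append(rng.choice(choices))
    return out

# Hypothetical example arrangement of loop-based electronic music sections
example = ["intro", "loopA", "loopA", "loopB", "drop", "loopA", "loopB", "outro"]
m = train_markov(example)
print(generate(m, "intro", 8))
```

The paper's point is that such models work best when users specify musically relevant features; a plain transition table like this one only captures what immediately follows what.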
9

Roads, Curtis. "Integrated Media Systems Digital Dyaxis: A Digital Audio Workstation." Computer Music Journal 13, no. 3 (1989): 107. http://dx.doi.org/10.2307/3680029.

Full text
10

McGrath, David. "Real‐time auralization with the Huron digital audio convolution workstation." Journal of the Acoustical Society of America 100, no. 4 (1996): 2579. http://dx.doi.org/10.1121/1.417511.

Full text
11

Truax, Barry. "Electroacoustic Music and the Digital Future." Circuit 13, no. 1 (2010): 21–26. http://dx.doi.org/10.7202/902261ar.

Full text
Abstract:
This article outlines the author's views on the contemporary social and economic situation of electroacoustic music and digital technology in general. The dominance of commercial interests in shaping the listener, the artist, and the definition of culture is examined. Issues associated with digital technology, such as standardization, de-skilling, and upgrades, are discussed with respect to artistic practice. It is argued that marginalized artforms such as electroacoustic music have benefited from the widespread availability of the digital audio workstation (DAW) for production and the Internet for distribution, but no analogous avenue exists for the creation of the consumer.
12

Mansour, Essam. "A survey of digital information literacy (DIL) among academic library and information professionals." Digital Library Perspectives 33, no. 2 (2017): 166–88. http://dx.doi.org/10.1108/dlp-07-2016-0022.

Full text
Abstract:
Purpose: The key purpose of this study is to explore the digital information literacy (DIL) possessed by South Valley University (SVU) library and information professionals. It also tries to identify the various types of DIL and to find the constraints affecting the related skills and competencies of those professionals.

Design/methodology/approach: A quantitative research methodology was adopted in the form of a survey, undertaken from February to March 2016. As stated by Kerlinger (1986), survey research is a useful instrument for educational fact-finding and a means by which much information can be acquired from the study's population. The survey instrument was a self-administered questionnaire with six sections reflecting the research objectives of the study. A pilot questionnaire was first sent to a small random sample of the respondents, with feedback being used to fine-tune the final questionnaire. The targeted population included library and information professionals (n = 127) belonging to SVU libraries spread over three provinces/campuses: the Qena campus (number of libraries = 22), the Luxor campus (n = 3) and the Hurghada campus (n = 2). The library and information professionals comprise librarians, library assistants and library directors. Of 127 questionnaires, 101 (79.5%) responses were received.

Findings: The findings showed that over two-thirds of SVU library and information professionals are male and almost one-third are female. Most respondents are aged between 26 and 40 years, and most possessed bachelor's degrees, of which nearly two-thirds were library science degrees. Regarding the respondents' professional profile, the majority were librarians, followed by library assistants and library managers; nearly half had 10 years of experience, followed by those with 6-7 years. There is a significant relationship between some of the respondents' demographic characteristics (age and education) and their DIL, while gender had no effect. There is also a significant relationship between all of the respondents' professional characteristics (particularly their discipline, followed by job title and work experience) and their DIL. Regarding knowledge of types of computers, many respondents reported that their knowledge of using mobile devices, followed by PCs, workstations, portable media players/digital audio players and personal digital assistants, was at least high; their knowledge of other types ranged from moderate to non-proficient. A large number of respondents reported that their proficiency in using output devices, followed by input devices, processing devices, storage devices and communication devices, was also at least high. While the largest number reported that their proficiency in using application software was at least high, the largest number rated their proficiency in using system software as moderate. The respondents' knowledge-based and skills-based competencies, especially those related to the integration of ICTs into library work, ranged between competent and somewhat competent.

Regarding the challenges affecting the respondents' acquisition of skills and related competencies, the study revealed that the lack of funds, training, physical facilities, internet connectivity, subscriptions to e-databases and time, as well as challenges related to SVU library system regulations, were significant to them. Other challenges, such as the failure to incorporate and exploit new technologies and products in integrated library systems and services, psychological barriers, the lack of current curricula in the area of ICTs and a shortage of experienced LIS counselors, were also significant. The study concluded that SVU library and information professionals should become qualified in and adapt to ICTs and related competencies, and that they need sufficient training to update their knowledge regarding the use and integration of technology in their library work.

Research limitations/implications: This study investigates DIL among library and information professionals at SVU, an Egyptian university. Any findings and conclusions are limited in scope to the library professionals of this university. The topic has limited previous research, and the size and homogeneity of the sample limit the generalizability of the study.

Practical implications: The potential results of this study would be useful for library schools, library associations and other pertinent authorities in planning training programs and courses. The findings may also help library educators develop curricula that meet the needs of library and information professionals.

Originality/value: This study is one of the few conducted on this topic in Egypt. The literature reveals that extensive research has been undertaken on DIL in higher education in developed countries, but very limited research has been conducted in Egypt and similar developing countries, particularly among academic library and information professionals. No agreed definition of DIL has yet been produced, and many Egyptian academic institutions define the concept based on their own needs, drawing on existing models.
13

Bressan, Federica, Valentina Burini, Edoardo Micheloni, Antonio Rodà, Richard L. Hess, and Sergio Canazza. "Reading Tapes Backwards: A Legitimate Approach to Saving Time and Money in Digitization Projects?" Applied Sciences 11, no. 15 (2021): 7092. http://dx.doi.org/10.3390/app11157092.

Full text
Abstract:
Audio carriers are subject to a fast and irreversible decay. In order to save valuable historical recordings, the audio signal and other relevant information can be extracted from the source audio document and stored on another medium, normally a redundant digital storage system. This procedure is called ’content transfer’. It is a costly and time-consuming procedure. There are several solutions with which the cost can be reduced. One consists of picking up all tracks from a two-sided tape in one pass. This means that some tracks will be digitized forward and some backwards, to be subsequently corrected in the digital workstation. This article is concerned with the question of whether reading tracks backwards introduces unwanted effects into the signal. In particular, it investigates whether a difference can be observed between audio signals read forward or backwards and, if so, whether the difference is measurable. The results show that a difference can be observed, yet this is not enough to conclude that this “backwards” approach should not be used. The complexity of the situation is presented in the discussion. Future work includes reproducing this experiment with different audio equipment, as well as a perception test with human subjects.
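The correction step mentioned above (flipping tracks that were digitized tail-first) amounts to a sample-order reversal in the workstation. A minimal numpy sketch, under the simplifying assumption that the backwards capture differs from the forward signal only in sample order (the article's actual question is whether the playback chain introduces further asymmetries, which pure reversal does not model):

```python
import numpy as np

# Toy mono signal standing in for one tape track
forward = np.array([0.0, 0.5, 1.0, 0.5, 0.0, -0.5])

# A track picked up while the tape runs the "wrong" way arrives time-reversed
backwards_capture = forward[::-1]

# The digital correction is simply another reversal
restored = backwards_capture[::-1]
print(np.array_equal(restored, forward))  # True
```

Under this idealization the round trip is lossless; the paper's measurements probe whether real tape heads and electronics break that symmetry.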
14

Stickland, Scott, Rukshan Athauda, and Nathan Scott. "Design and Evaluation of a Scalable Real-Time Online Digital Audio Workstation Collaboration Framework." Journal of the Audio Engineering Society 69, no. 6 (2021): 410–31. http://dx.doi.org/10.17743/jaes.2021.0016.

Full text
15

Rianggi, Yogi Elga, Rafiloza Rafiloza, and Wilma Sriwulan. "GEMA DI WAKTU SUBUH." MELAYU ARTS AND PERFORMANCE JOURNAL 2, no. 2 (2020): 247. http://dx.doi.org/10.26887/mapj.v2i2.977.

Full text
Abstract:
Gema di Waktu Subuh (in English, 'Echo at Dawn') is a work of multimedia music built through sound exploration in the form of a sound-design composition. It consists of manipulated sounds depicting the atmosphere around the time of the Subuh (dawn) prayer in Salayo Tanang Bukit Sileh, Lembang Jaya sub-district, Solok district. As illustrative music it explores natural sounds heard at dawn, such as a river, cicadas, a crowing rooster, vehicles, people reciting Quranic verses, and the Shalawat Tahrim that marks the start of the Subuh prayer time. The piece was produced in the Digital Audio Workstation (DAW) Cubase 5 with the assistance of Virtual Studio Technology (VST) plug-ins, namely Waves 9, and processed to produce 3D sound. Keywords: multimedia music, manipulation, exploration, dawn echo.
16

Gopal, M., and W. P. Jepson. "The Study of Dynamic Slug Flow Characteristics Using Digital Image Analysis—Part I: Flow Visualization." Journal of Energy Resources Technology 120, no. 2 (1998): 97–101. http://dx.doi.org/10.1115/1.2795032.

Full text
Abstract:
This paper reports the application of novel, digital image analysis techniques in the study of slug flow characteristics, under dynamic conditions in two-phase gas-liquid mixtures. Water and an oil of viscosity 18 cP were used for the liquid phase and carbon dioxide was used for the gas phase. Flow in a 75-mm i.d., 10-m long acrylic pipeline system was studied. Images of slugs were recorded on video by S-VHS cameras, using an audio-visual mixer. Each image was then digitized frame-by-frame and analyzed on a SGI™ workstation. Detailed slug characteristics, including liquid film heights, slug translational velocity, mixing length, and, slug length, were obtained.
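As a hedged illustration of the frame-by-frame analysis described above (the authors' SGI pipeline is not detailed in the abstract), one can threshold each digitized frame and count liquid pixels per column to estimate a film height. The array, threshold, and geometry below are invented for the example:

```python
import numpy as np

def film_height(frame, threshold=0.5):
    """Estimate liquid film height in pixels from one digitized frame.
    frame: 2D array with values near 1 where liquid is detected.
    Returns the mean number of liquid pixels per column."""
    liquid = frame > threshold          # binarize the frame
    return liquid.sum(axis=0).mean()    # liquid pixels per column, averaged

# Synthetic 6x8 'frame': liquid occupies the bottom two rows of the pipe image
frame = np.zeros((6, 8))
frame[4:, :] = 1.0
print(film_height(frame))  # 2.0
```

Quantities such as slug translational velocity would then follow from tracking features like this across consecutive frames at a known frame rate.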
17

Febrian, Arsya, Hestiasari Rante, Sritrusta Sukaridhoto, and Akhmad Alimudin. "Music Scoring for Film Using Fruity Loops Studio." E3S Web of Conferences 188 (2020): 00004. http://dx.doi.org/10.1051/e3sconf/202018800004.

Full text
Abstract:
Writing music for a film is challenging: the music must evoke the atmosphere of the film, and the scoring process involves fitting audio to the visuals. Recording every instrument manually in a studio would be inconvenient and inefficient. As music-production technology has developed, film music can now be made entirely on a computer, thanks to Digital Audio Workstation (DAW) software. Many DAWs are now available, including the well-known Fruity Loops Studio, commonly called FL Studio. This study examines the music-scoring process for a film using FL Studio, as a reference for making music for films.
18

Conklin, Darrell, Martin Gasser, and Stefan Oertl. "Creative Chord Sequence Generation for Electronic Dance Music." Applied Sciences 8, no. 9 (2018): 1704. http://dx.doi.org/10.3390/app8091704.

Full text
Abstract:
This paper describes the theory and implementation of a digital audio workstation plug-in for chord sequence generation. The plug-in is intended to encourage and inspire a composer of electronic dance music to explore loops through chord sequence pattern definition, position locking and generation into unlocked positions. A basic cyclic first-order statistical model is extended with latent diatonicity variables which permits sequences to depart from a specified key. Degrees of diatonicity of generated sequences can be explored and parameters for voicing the sequences can be manipulated. Feedback on the concepts, interface, and usability was given by a small focus group of musicians and music producers.
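A minimal sketch of the position-locking idea described above: a cyclic first-order model fills the unlocked slots of a loop, conditioning each choice on the (cyclically) previous chord. The transition table here is invented for illustration; the actual plug-in learns its statistics and extends them with latent diatonicity variables.

```python
import random

# Hypothetical transition table over chord symbols (not the paper's learned model)
TRANSITIONS = {
    "C": ["F", "Am", "G"],
    "F": ["G", "C"],
    "G": ["C", "Am"],
    "Am": ["F", "G"],
}

def fill_loop(locked, rng=random.Random(1)):
    """Fill the unlocked (None) positions of a chord loop, first-order,
    conditioning each choice on the previous chord."""
    loop = list(locked)
    for i, chord in enumerate(loop):
        if chord is None:
            prev = loop[i - 1]  # index -1 wraps to the last slot: the model is cyclic
            loop[i] = rng.choice(TRANSITIONS.get(prev, list(TRANSITIONS)))
    return loop

# The composer locks positions 1 and 4; generation fills the rest
print(fill_loop(["C", None, None, "G"]))
```

Locked positions survive generation untouched, which is what lets a producer pin down the parts of a loop they like and regenerate only the remainder.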
19

Anthony, Brendan, Paul Thompson, and Tuomas Auvinen. "Learning the ‘tracker’ process: A case study into popular music pedagogy." Journal of Popular Music Education 4, no. 2 (2020): 211–35. http://dx.doi.org/10.1386/jpme_00026_1.

Full text
Abstract:
The ‘tracker’ production process is a modern form of music production agency where top-line songwriters work with music programmers called ‘trackers’, primarily within the confines of the digital audio workstation. In this case, production, songwriting and performance often happen concurrently, and collaboration involves the synthesis of ideas, musical negotiations and expertise in using digital and online technologies. In providing popular music production learning activities that translate to professional contexts, higher education institutions face a number of challenges, particularly where much of the collaboration is undertaken online. This article reports on a cohort of Bachelor of Popular Music students who undertook a tracker process module. Students’ perceptions of ‘engagement’ and ‘learning’ were captured via an assessment item and survey, and a themed analysis indicated that the pedagogy promoted the use of diverse social skills, was highly collaborative, relied both on specialist and non-specialist knowledge, and involved the use of digital and online communications.
20

Jiwandono, Mas Drajad, Dilla Octavianingrum, and Gandung Djatmiko. "Pemanfaatan Logic Pro X dan E-Gamelan sebagai Alternatif Media Pembelajaran Praktik Karawitan Secara Daring." Indonesian Journal of Performing Arts Education 1, no. 2 (2021): 42–47. http://dx.doi.org/10.24821/ijopaed.v1i2.5542.

Full text
Abstract:
The COVID-19 emergency period demands that the learning process be carried out online. This hinders the learning of karawitan practice at SMP Negeri 2 Kretek, Bantul, DIY. Learning-media innovations are therefore needed, such as the Digital Audio Workstation (DAW) Logic Pro X and the E-Gamelan application. This study aims to describe the use of DAW Logic Pro X and E-Gamelan as alternative media for learning karawitan practice online. Data were collected through observation, interviews, and analysis of documents related to these problems; the sources were teachers and students at the school concerned, and the data were validated by technique triangulation. The results indicate that using Logic Pro X and E-Gamelan in the junior-high-school karawitan practice class is quite effective in helping teachers achieve learning goals. Furthermore, most of the students responded positively to the learning process.
21

McCoid, Scott, Jason Freeman, Brian Magerko, et al. "EarSketch: An integrated approach to teaching introductory computer music." Organised Sound 18, no. 2 (2013): 146–60. http://dx.doi.org/10.1017/s135577181300006x.

Full text
Abstract:
EarSketch is an all-in-one approach to supporting a holistic introductory course to computer music as an artistic pursuit and a research practice. Targeted to the high school and undergraduate levels, EarSketch enables students to acquire a strong foundation in electroacoustic composition, computer music research and computer science. It integrates a Python programming environment with a commercial digital audio workstation program (Cockos’ Reaper) to provide a unified environment within which students can use programmatic techniques in tandem with more traditional music production strategies to compose music. In this paper we discuss the context and goals of EarSketch, its design and implementation, and its use in a pilot summer camp for high school students.
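EarSketch's actual API is not quoted in the abstract, so as a purely illustrative stand-in, the snippet below shows the kind of programmatic arrangement loop such an environment lets students write instead of dragging clips by hand. The function name and clip label are hypothetical:

```python
def make_arrangement(clip_name, measures, every=2):
    """Place clip_name on every `every`-th measure of an arrangement,
    returning (clip, measure) pairs. Illustrative only; not the EarSketch API."""
    return [(clip_name, m)
            for m in range(1, measures + 1)
            if (m - 1) % every == 0]

print(make_arrangement("drum_loop", 8))  # measures 1, 3, 5, 7
```

The pedagogical point is that loops, conditionals, and parameters become compositional tools, which is what lets one course teach programming and music production together.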
22

Worrall, David. "Computational Designing of Sonic Morphologies." Organised Sound 25, no. 1 (2020): 15–24. http://dx.doi.org/10.1017/s1355771819000426.

Full text
Abstract:
Much electroacoustic music composition and sound art, and the commentary that surrounds them, is locked into a materialist sound-object mindset in which the hierarchical organisation of sonic events, especially those developed through abstraction, is considered antithetical to sounds ‘being themselves’. This article argues that musical sounds are not just material objects, and that musical notations, on paper or in computer code, are not just symbolic abstractions but instructions for embodied actions. When notation is employed computationally to control resonance and gestural actuators at multiple acoustic, psychoacoustic and conceptual levels of musical form, vibrant sonic morphologies may emerge from the quantum-like boundaries between them. To achieve that result, we must shift our primary compositional attention from the Digital Audio Workstation sound-transformation tools currently in vogue to tools that support algorithmic thinking at all levels of compositional design.
23

Zotter, Franz, Markus Zaunschirm, Matthias Frank, and Matthias Kronlachner. "A Beamformer to Play with Wall Reflections: The Icosahedral Loudspeaker." Computer Music Journal 41, no. 3 (2017): 50–68. http://dx.doi.org/10.1162/comj_a_00429.

Full text
Abstract:
The quote from Pierre Boulez, given as an epigraph to this article, inspired French researchers to start developing technology for spherical loudspeaker arrays in the 1990s. The hope was to retain the naturalness of sound sources. Now, a few decades later, one might be able to show that even more can be done: In electroacoustic music, using the icosahedral loudspeaker array called IKO seems to enable spatial gestures that enrich alien sounds with a tangible acoustic naturalness. After a brief discussion of directivity-based composition in computer music, the first part of the article describes the technical background of the IKO, its usage in a digital audio workstation, and psychoacoustic evidence regarding the auditory objects the IKO produces. The second part deals with acoustic equations of spherical beamforming, how the IKO's loudspeakers are controlled correspondingly, how we deal with excursion limits, and the resulting beam patterns generated by the IKO.
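The spherical-beamforming equations mentioned above reduce, for an axisymmetric beam, to a weighted sum of Legendre polynomials in cos(theta). The sketch below uses uniform modal weights (a basic in-phase beam); it is an assumption for illustration, not the IKO's actual tuned weighting or excursion limiting:

```python
import math

def legendre(n, x):
    """Legendre polynomial P_n(x) via the standard three-term recurrence."""
    if n == 0:
        return 1.0
    p_prev, p = 1.0, x
    for k in range(1, n):
        p_prev, p = p, ((2 * k + 1) * x * p - k * p_prev) / (k + 1)
    return p

def beam_pattern(theta, order):
    """Axisymmetric pattern g(theta) = sum_{n=0}^{N} (2n+1) P_n(cos theta),
    i.e. uniform modal weights: a basic beam aimed at theta = 0."""
    c = math.cos(theta)
    return sum((2 * n + 1) * legendre(n, c) for n in range(order + 1))

# On-axis the modes add coherently: sum of (2n+1) for n = 0..3 is 16
print(beam_pattern(0.0, 3))  # 16.0
```

Raising the order narrows the main lobe, which is how a higher-order spherical array projects a sharper beam onto a chosen wall reflection.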
24

Kalra, Siddharth, Sarika Jain, and Amit Agarwal. "Gesture Controlled Tactile Augmented Reality Interface for the Visually Impaired." Journal of Information Technology Research 14, no. 2 (2021): 125–51. http://dx.doi.org/10.4018/jitr.2021040107.

Full text
Abstract:
This paper proposes to create an augmented reality interface for the visually impaired, enabling a way of haptically interacting with the computer system by creating a virtual workstation, facilitating a natural and intuitive way to accomplish a multitude of computer-based tasks (such as emailing, word processing, storing and retrieving files from the computer, making a phone call, searching the web, etc.). The proposed system utilizes a combination of a haptic glove device, a gesture-based control system, and an augmented reality computer interface which creates an immersive interaction between the blind user and the computer. The gestures are recognized, and the user is provided with audio and vibratory haptic feedbacks. This user interface allows the user to actually “touch, feel, and physically interact” with digital controls and virtual real estate of a computer system. A test of applicability was conducted which showcased promising positive results.
APA, Harvard, Vancouver, ISO, and other styles
25

Son, SeungHyeok. "A Study on the Application Method of Creative Music Class by using DAW(Digital Audio Workstation) based Garage Band Software." Korean Society of Music Education Technology 33 (October 16, 2017): 93–114. http://dx.doi.org/10.30832/jmes.2017.33.93.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

Vellozo, Fernanda Freitas, Ana Paula Leonardi Dellaméa, and Michele Vargas Garcia. "Design of a sentence identification test with pictures (TIS-F) based on the pediatric speech intelligibility test." Revista CEFAC 19, no. 6 (2017): 773–81. http://dx.doi.org/10.1590/1982-021620171965517.

Full text
Abstract:
ABSTRACT Purposes: to design a sentence identification test with pictures for adults (Teste de Identificação de Sentenças com Figuras (TIS-F)) as an alternative for evaluation of auditory figure-background ability for verbal sounds, based on the Pediatric Speech Intelligibility Test and also for assessment of unskillful individuals who cannot accomplish other tests with higher levels of difficulty and greater demands. Methods: the Adobe Illustrator software was used and the image vectorization technique applied for figures creation. The sentences and the competitive message were audio-recorded in a sound treated room by a female announcer, using the software - REAPER - FM Digital Audio Workstation. Results: the TIS-F consisted of a 32 x 45 cm card, containing 10 figures, each one measuring 12 x 12 cm; one compact disc containing a track with the test calibration tone and seven test tracks, each one containing ten sentences and a competitive message and a specific protocol. Conclusion: the TIS-F is composed of a compact disc with dual-channel recording, with seven tracks containing ten sentences and the competitive story, one card containing ten pictures and a labeling protocol for all presentations and S/N in use, as well as the established normality values.
APA, Harvard, Vancouver, ISO, and other styles
27

Kim, Charles, Alexandria Guo, Gautam Salhotra, Sara Sprinkhuizen, Keerthi Shetty, and David Sun Kong. "Sonifying Data from the Human Microbiota: Biota Beats." Computer Music Journal 44, no. 1 (2020): 51–70. http://dx.doi.org/10.1162/comj_a_00552.

Full text
Abstract:
Abstract This article presents a musical interface that enables the sonification of data from the human microbiota, the trillions of microorganisms that inhabit the human body, into sound and music. The project is concerned with public engagement in science, particularly the life sciences, and developing cultivation technologies that take advantage of the ubiquitous and accessible nature of the human microbiota. In this article we examine the collaboration between team members proficient in musical composition and those with expertise in biology, sonification, and data visualization, producing an individualized piece of music designed to capture basic biological data and user attention. Although this system, called Biota Beats, sonifies ubiquitous data for educational science projects, it also establishes a connection between individuals and their bodies and between a community and its context through interactive music experiences, while attempting to make the science of the human microbiome more accessible. The science behind standardizing sonified data for scientific, human analysis is still in development (in comparison to charts, graphs, spectrograms, or other types of data visualization). So a more artistic approach, using the framework of musical genres and their associated themes and motifs, is a convenient and previously established way to capitalize on how people naturally perceive sound. Further, to forge a creative connection between the human microbiota and the music genre, a philosophical shift is necessary, that of viewing the human body and the digital audio workstation as ubiquitous computers.
APA, Harvard, Vancouver, ISO, and other styles
28

Zaluzec, Nestor J. "Tele-Presence Microscopy/LabSpace: An Interactive Collaboratory for use in Education and Research." Proceedings, annual meeting, Electron Microscopy Society of America 54 (August 11, 1996): 382–83. http://dx.doi.org/10.1017/s0424820100164374.

Full text
Abstract:
Computerized control of scientific instrumentation has been successfully implemented in recent years to facilitate the indirect operation or remote observation of a wide variety of equipment including the full range of electron microscopes. The concept is, however, usually applied in its simplest sense, namely the direct one-to-one functional replacement of “local operation” of equipment by a remote workstation. While the microscope is clearly central to our research, real collaboration will not be achieved simply by creating a networked interface to a microscope for remote scientists. This is merely a simple exercise in computer programming and digital control. For true distributed collaboration (either in research and/or teaching) to be successful, all of the aspects of the research/teaching environment must be considered. For example, the investigators must be able to talk to and see each other while running an instrument, and they should be able to do everything else they would normally do if they were in the same laboratory. This includes sharing experimental data, reviewing previous experiments, writing papers, talking over coffee, and even visiting each other in their offices to plan current and/or future work. The Tele-Presence Microscopy (TPM)/LabSpace project attempts to bridge the gap between simple “remote microscopy” and true collaboration, by integrating protocols, tools, and interactive links to instrumentation, data (real-time as well as archived), and audio-visual communications. The initial goal of this project has been to create a virtual space, accessible via the Internet, where microscopists and their colleagues, who are distributed across the nation or the world, can meet, talk, plan their research, and also run their experiments.
APA, Harvard, Vancouver, ISO, and other styles
29

Bader, Mohamed. "Audit on prolactin monitoring for patients on oral risperidone, intramuscular risperidone, and intramuscular paliperidone." BJPsych Open 7, S1 (2021): S65. http://dx.doi.org/10.1192/bjo.2021.216.

Full text
Abstract:
Aims: The aim of this audit was to investigate whether sufficient prolactin monitoring was completed in a patient sample in the Torfaen area of Aneurin Bevan University Health Board. This audit targeted patients on an oral or intramuscular formulation of risperidone in the year 2018, with the hypothesis that prolactin monitoring is done less frequently than recommended.
Background: Risperidone is the antipsychotic drug most frequently associated with hyperprolactinemia, which is often asymptomatic but can present with symptoms of oligomenorrhea, amenorrhea, galactorrhea, decreased libido, infertility, and decreased bone mass in women. Men with hyperprolactinemia may present with erectile dysfunction, decreased libido, infertility, gynecomastia, decreased bone mass, and rarely galactorrhea. The BNF advises monitoring of prolactin at baseline, after 6 months, and then annually.
Method: Retrospective review of 150 patients’ clinical letters to identify whether they were on the above medications, using the local digital records system EPEX. Emails were also sent to community psychiatric nurses asking them to highlight any patients on the above medication whose care they were holding. Depot clinic lists were also examined. Patients identified as being on the above medication had their blood tests reviewed on the online system Clinical Workstation (CWS) to determine whether they had their prolactin level tested. A single spot sample of all patients on Talygarn ward in January 2019 was also included.
Result: (1) 28 patients were on risperidone; (2) 23 of 28 never had any prolactin measurements; (3) 2 of 28 patients had the appropriate level of monitoring done for the year 2018: (a) one patient had complained of galactorrhea; (b) another patient had a baseline done while on the ward and was not due for any further monitoring at the time of writing.
Conclusion: The above results identify that prolactin monitoring is not being routinely completed, to an acceptable level of compliance, for patients on the studied medication.
Limitations around the utility of prolactin monitoring may be a contributing factor; e.g., prolactin levels or medication dose may not be positively associated with adverse effects. Further efforts were made to highlight the importance of baseline prolactin monitoring, as well as including a baseline prolactin level as an admission blood test for patients presenting with psychotic symptoms or on an antipsychotic. A complete audit of metabolic monitoring and prolactin levels for all patients on antipsychotics would be an appropriate next step.
APA, Harvard, Vancouver, ISO, and other styles
30

Duncan, Renee. "Cognitive Processing in Digital Audio Workstation Composing." General Music Today, August 20, 2021, 104837132110344. http://dx.doi.org/10.1177/10483713211034441.

Full text
Abstract:
Music teachers have been thrust into a new world where digital learning is the new normal and use of technology has become more necessity than an added extra. While there are many new resources available, sometimes reexamining those more familiar can help repurpose them for digital learning. This article unearths the cognitive processes that occur when students interact with digital audio workstations (GarageBand and Soundtrap) both in classrooms and through online learning. The contents explicitly identify how cognitive processes might manifest in students’ learning, engagement, and work produced from two such programs: GarageBand and Soundtrap. The intent is to provide music educators with a practical and accessible resource to help guide an electronic composing curriculum.
APA, Harvard, Vancouver, ISO, and other styles
31

Oleksandr Mazur. "MUSIC RADIO RECORDINGS AS OBJECTS OF THE ARCHIVAL STORAGE." Scientific journal “Library Science. Record Studies. Informology”, no. 3 (February 1, 2021). http://dx.doi.org/10.32461/2409-9805.3.2020.224268.

Full text
Abstract:
The purpose of the article is to characterize the peculiarities of organizing the storage of musical audio recordings in the repositories of radio stations. The methodology is based on the use of general scientific and special methods. The universal nature of music as a special language makes musical art international. Since their emergence, the conditions connected with the accumulation of music sound recordings in different cultural institutions have changed more than once. These transformations were caused by reasons of a social, political, technical, and technological nature. In current high-technology conditions, integrative properties are inherent in the holistic process of creating, circulating, and disseminating the information accumulated in music compositions for radio. Today, a new qualitative communication area is forming, with rapid growth in the volume of sound-message streams. The article is dedicated to the scientific problem of the preservation and use of the music audio recordings of radio companies as objects of archival storage. Archived musical radio recordings are defined as a special cluster of the communication area. The socio-communicative approach is the methodological basis of the publication. Scientific novelty: the main directions of ensuring the preservation and restoration of archival music audio recordings are substantiated, and the peculiarities of the digitization and use of sound documents of this type are revealed. The specifics of forming the respective collections are considered using the example of the BBC Archive Centre and the British Library Sound Archive as leading foreign institutions where music recordings are stored.
It is concluded that digital technologies have changed the culture of music consumption and, accordingly, have transformed the processes of storing archival music recordings in the repositories of radio companies, which have acquired specific properties. Examining music radio recordings as archival objects has shown that in the age of the digital revolution the music industry around the world has undergone significant changes: both the revenue structure and the cost structure of record labels and music radio companies have fundamentally changed. Conclusions: digital technologies have changed the culture of music consumption (the ideology of authorship for music products has shifted) and brought about digital recording technologies that use artificial intelligence, digital audio workstations, etc. In this regard, the organization of the storage of archival music recordings has been transformed, in particular in the phono repositories of radio companies, which have acquired specific service properties (the reissue of archival music recordings that were previously specially recorded for radio stations).
APA, Harvard, Vancouver, ISO, and other styles
32

Levee, John Thomas, and Nathan Wolek. "Correction of Spatialization Issues in Acousmatic Music: Remedying Incompatibility Between SpatGRIS and Logic Pro X [Stetson University]." Journal of Student Research, April 24, 2019. http://dx.doi.org/10.47611/jsr.vi.699.

Full text
Abstract:
In this project, a technical solution was designed and implemented for incompatibility among software programs involved in the spatialization of sound in multichannel speaker arrays. Acousmatic music is a genre of electronic music intended for playback by a group of loudspeakers, with the central concept being the composer's calculated ideation of how sound moves in space. Dr. Robert Normandeau, a pioneer in both acousmatic composition and sound spatialization research, describes the genre as “Cinema for the Ear.” Through his efforts with the Groupe de Recherche en Immersion Spatiale (GRIS), Dr. Normandeau created a software plugin, SpatGRIS, which allows composers to send sounds around the space so that they seem to come from anywhere in relation to the listener. This allows acousmatic composers to send sounds around, over, or through the audience for a completely immersive experience. Through the use of SpatGRIS in conjunction with Logic Pro X, one of the most popular Digital Audio Workstations worldwide, the plugin proved useful for creating complex sonic movements in the acousmatic compositions produced throughout this research. However, when exporting these projects in their entirety for playback and sharing, the two programs labeled and exported channels differently in octophonic (eight-channel) compositions. This difference resulted in sounds from the composition being spatialized incorrectly. Therefore, a method using free third-party software was created that can easily remedy this error and correct the final recordings to their originally intended state.
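The channel-order mismatch the abstract describes can be illustrated with a short sketch (this is not the authors' actual tool, and the `CHANNEL_MAP` permutation below is purely hypothetical): given an interleaved eight-channel WAV exported in one channel order, a permutation map rewrites the file in the order the playback system expects. The real mapping would depend on the specific Logic Pro X and SpatGRIS speaker layouts involved.

```python
import struct
import wave

# Illustrative permutation only: output channel i takes input channel
# CHANNEL_MAP[i]. The actual Logic Pro X -> SpatGRIS mapping depends on
# the octophonic speaker layouts in use.
CHANNEL_MAP = [0, 1, 2, 3, 7, 6, 5, 4]


def reorder_channels(src_path, dst_path, channel_map):
    """Rewrite a 16-bit multichannel WAV with its channels permuted."""
    with wave.open(src_path, "rb") as src:
        n_ch = src.getnchannels()
        assert src.getsampwidth() == 2, "sketch assumes 16-bit samples"
        assert n_ch == len(channel_map)
        params = src.getparams()
        raw = src.readframes(src.getnframes())
    samples = struct.unpack("<%dh" % (len(raw) // 2), raw)
    out = []
    # Walk the interleaved stream one frame (n_ch samples) at a time.
    for i in range(0, len(samples), n_ch):
        frame = samples[i:i + n_ch]
        out.extend(frame[channel_map[c]] for c in range(n_ch))
    with wave.open(dst_path, "wb") as dst:
        dst.setparams(params)
        dst.writeframes(struct.pack("<%dh" % len(out), *out))
```

Because WAV stores frames interleaved, a reorder like this is a pure byte-level permutation and leaves the audio content of each channel untouched, which is what a correction step after export should do.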
APA, Harvard, Vancouver, ISO, and other styles
33

"BandLab: A Free Cloud-based Digital Audio Workstation (DAW) for Music Production." College Music Symposium 61, no. 1 (2021). http://dx.doi.org/10.18177/sym.2020.61.1.sr.11510.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Michielse, Maarten. "Musical Chameleons: Fluency and Flexibility in Online Remix Contests." M/C Journal 16, no. 4 (2013). http://dx.doi.org/10.5204/mcj.676.

Full text
Abstract:
While digital remix practices in music have been researched extensively in the last few years (see recently Jansen; Navas; Pinch and Athanasiades; Väkevä), the specific challenges and skills that are central to remixing are still not well understood (Borschke 90). As writers like Demers, Lessig, and Théberge argue, the fact that remixers rework already existing songs rather than building a track from scratch, often means they are perceived as musical thieves or parasites rather than creative artists. Moreover, as writers like Borschke and Rodgers argue, because remixers make use of digital audio workstations to produce and rework their sounds, their practices tend to be seen as highly automated, offering relatively little by way of musical and creative challenges, especially compared to more traditional (electro)acoustic forms of music-making. An underestimation of skill is problematic, however, because, as my own empirical research shows, creative skills and challenges are important to the way digital remixers themselves experience and value their practice. Drawing from virtual ethnographic research within the online remix communities of Indaba Music, this article argues that, not despite but because remixers start from already existing songs and because they rework these songs with the help of digital audio workstations, a particular set of creative abilities becomes foregrounded, namely: ‘fluency’ and ‘flexibility’ (Gouzouasis; Guilford, “Creativity Research”, Intelligence, “Measurement”). Fluency, the way the concept is used here, refers to the ability to respond to, and produce ideas for, a wide variety of musical source materials, quickly and easily. Flexibility refers to the ability to understand, and adapt these approaches to, the ‘musical affordances’ (Gibson; Windsor and De Bézenac) of the original song, that is: the different musical possibilities and constraints the source material provides. 
For remixers, fluency and flexibility are not only needed in order to be able to participate in these remix contests, they are also central to the way they value and evaluate each other’s work.

Researching Online Remix Contests

As part of a larger research project on online music practices, between 2011 and 2012, I spent eighteen months conducting virtual ethnographic research (Hine) within several remix competitions hosted on online music community Indaba Music. Indaba is not the only online community where creative works can be exchanged and discussed. For this research, however, I have chosen to focus on Indaba because, other than in a remix community like ccMixter for example, competitions are very much central to the Indaba community, thus making it a good place to investigate negotiations of skills and techniques. Also, unlike a community like ACIDplanet which is tied explicitly to Sony’s audio software program ACID Pro, Indaba is not connected to any particular audio workstation, thus providing an insight into a relatively broad variety of remix practices. During my research on Indaba, I monitored discussions between participants, listened to work that had been uploaded, and talked to remixers via personal messaging. In addition to my daily monitoring, I also talked to 21 remixers more extensively through Skype interviews. These interviews were semi-structured, and lasted between 50 minutes and 3.5 hours, sometimes spread over multiple sessions. During these interviews, remixers not only talked about their practices, they also shared work in progress with me by showing their remixes on screen or by directing a webcam to their instruments while they played, recorded, or mixed their material. All the remixers who participated in these interviews granted me permission to quote them and to use the original nicknames or personal names they use on Indaba in this publication.
Besides the online observations and interviews, I also participated in three remix competitions myself, in order to gain a better understanding of what it means to be part of a remix community and to see what kind of challenges and abilities are involved. In the online remix contests of Indaba, professional artists invite remixers to rework a song and share and discuss these works within the community. For the purpose of these contests, artists provide separate audio files (so-called ‘stems’) for different musical elements such as voice, drums, bass, or guitar. Remixers can produce their tracks by rearranging these stems, or they can add new audio material, such as beats, chords, and rhythms, as long as this material is not copyrighted. Remixers generally comply with this rule. During the course of a contest, remixers upload their work to the website and discuss and share the results with other remixers. A typical remix contest draws between 200 and 300 participants. These participants are mostly amateur musicians or semi-professionals in the sense that they do not make a living with their creative practices, but rather participate in these contests as a hobby. A remix contest normally lasts for four or five weeks. After that time, the hosting artist chooses a winner and the remixers move on to another contest, hosted by a different artist and featuring a new song, sometimes from a completely different musical genre. It is partly because of this move from contest to contest that fluency and flexibility can be understood as central abilities within these remix practices. Fluency and flexibility are concepts adopted from the work of Joy Paul Guilford (“Creativity Research”, Intelligence, “Measurement”) who developed them in his creativity research from the 1950s onwards.
For Guilford, fluency and flexibility are part of divergent-production abilities, those abilities we need in order to be able to deal with open questions or tasks, in which multiple solutions or answers are possible, in a quick and effective way. Within creativity research, divergent-production abilities have mainly been measured and evaluated quantitatively. In music related studies, for example, researchers have scored and assessed so-called fluency and flexibility factors in the music practices of children and adults and compared them to other creative abilities (Webster). For the purpose of this article, however, I do not wish to approach fluency and flexibility quantitatively. Rather, I would like to show that in online remix practices, fluency and flexibility, as creative abilities, become very much foregrounded. Gouzouasis already alludes to this possibility, pointing out that, in digital music practices, fluency might be more important than the ability to read and write traditional music notation. Gouzouasis’ argument, however, does not refer to a specific empirical case. Also, it does not reflect on how digital musicians themselves consider these abilities central to their own practices. Looking at online remix competitions, however, this last aspect becomes clear.

Fluency

For Guilford, ‘fluency’ can be understood as the ability to produce a response, or multiple responses, to an open question or task quickly and easily (“Creativity Research”, Intelligence, “Measurement”). It is about making associations, finding different uses or purposes for certain source materials, and combining separate elements into organised phrases and patterns. Based on this definition, it is not difficult to see a link with remix competitions, in which remixers are asked to come up with a musical response to a given song within a limited time frame. Online remix contests are essentially a form of working on demand. It is the artist who invites the audience to remix a song.
It is also the artist who decides which song can be remixed and which audio files can be used for that mix. Remixers who participate in these contests are usually not fans of these artists. Often they do not even know the song before they enter a competition. Instead, they travel from contest to contest, taking on many different remix opportunities. For every competition, then, remixers have to first familiarise themselves with the source material, and then try to come up with a creative response that is not only different from the original, but also different from all the other remixes that have already been uploaded. Remixers do not consider this a problem, but embrace it as a challenge. As Moritz Breit, one of the remixers, explained to me: “I like remixing [on Indaba] because it’s a challenge. You get something and have to make something different out of it, and later people will tell you how you did.” Or as hüpersonique put it: “It’s really a challenge. You hear a song and you say: ‘OK, it’s not my taste. But it’s good quality and if I could do something in my genre that would be very interesting’.” If these remixers consider the competitions to be a challenge, it is mainly because these contests provide an exercise of call and response. On Indaba, remixers apply different tempos, timbres, and sounds to a song, they upload and discuss work in progress, and they evaluate and compare the results by commenting on each other’s work. While remixers officially only need to develop one response, in practice they tend to create multiple ideas which they either combine in a single eclectic mix or otherwise include in different tracks which they upload separately. Remixers even have their own techniques in order to stimulate a variety of responses. Some remixers, for example, told me how they expose themselves to a large number of different songs and artists before they start remixing, in order to pick up different ideas and sounds. 
Others told me how they prefer not to listen to the original song, as it might diminish their ability to move away from it. Instead, they download only one or two of the original stems (usually the vocals) and start improvising around those sounds, without ever having heard the original song as a whole. As Ola Melander, one of the remixers, explained: “I never listen to it. I just load [the vocals] and the drum tracks. [....] I have to do it [in] my own style. [….] I don’t want that the original influences it, I want to make the chords myself, and figure out what it will sound like.” Or as Stretched Mind explained to me: “I listen to the vocal stem, only that, so no synths, no guitars, just pure vocal stems, nothing else. And I figure out what could fit with that.” On Indaba, being able to respond to, and associate around, the original track is considered to be more important than what Guilford calls ‘elaboration’ (“Measurement” 159). For Guilford, elaboration is the ability to turn a rough outline into a detailed and finished whole. It is basically a form of fine tuning. In the case of remixing, this fine tuning is called ‘mastering’ and it is all about getting exactly the right timbre, dynamics, volume, and balance in a track in order to create a ‘perfect’ sounding mix. On Indaba, only a select group of remixers is actually interested in such a professional form of elaboration. As Moritz Breit told me: “It’s not that you have like a huge bunch of perfectly mastered submissions. So nobody is really expecting that from you.” Indeed, in the comment section remixers tend to say less about audio fidelity than about how they like a certain approach. Even when a critical remark is made about the audio quality of a mix, these criticisms are often preceded or followed by encouraging comments which praise the idea behind the track or applaud the way a remixer has brought the song into a new direction. 
In short, the comments are often directed more towards fluency than towards elaboration, showing that for many of these remixers the idea of a response, any response, is more important than creating a professional or sellable track.

Being able to produce a musical response is also more important on Indaba than having specific musical instrument skills. Most remixers work with digital audio workstations, such as Cubase, Logic Pro, and Pro Tools. These software programs make it possible to manipulate and produce sounds in ways that may include musical instruments, but do not necessarily involve them. As Hugill writes, with these programs “a sound source could be a recording, a live sound, an acoustic instrument, a synthesizer, the human body, etc. In fact, any sounding object can be a sound source” (128). As such, remix competitions tend to draw a large variety of different participants, with a wide range of musical backgrounds and instrument skills. Some remixers on Indaba create their remixes by making use of sample libraries and loops. Others, who have the ability, also add sounds with instruments such as drums, guitars, or violins, which they record with microphones or, in the case of electronic or digital instruments, plug directly into their personal computers. Remixers who are confident about their instrument skills improvise around the original tracks in real-time, while less confident players record short segments, which they then alter and correct afterwards with their audio programs. Within the logic of these digital audio workstation practices, these differences are not significant, as all audio input merely functions as a starting point, needing to be adjusted, layered, combined, and recombined afterwards in order to create the final mix.
For the contestants themselves these differences are also not so significant, as contestants are still, in their own ways, involved in the challenge of responding to and associating around the original stems, regardless of the specific techniques or instruments used. The fact that remixers are open to different methods and techniques does not mean, however, that every submission is considered to be as valid as any other. Remixers do have strong opinions about what is a good remix and what is not. Looking at the comments contestants give on each other’s work, and the way they talk about their practices during interviews, it becomes clear that remixers find it important that a remix somehow fits the original source material. As hüpersonique explained: “A lot of [remixes] don’t really match the vocals (…) and then it sounds not that good.” From this perspective, remixers not only need to be fluent, they also need to be flexible towards their source material.

Flexibility

For Guilford, flexibility is the readiness to change direction or method (Intelligence, “Measurement”). It is, as Arnold writes, “facilitated by having a great many tricks in your bag, knowing lots of techniques, [and] having broad experience” (129). In music, flexibility can be understood as the ability to switch easily between different sounds, rhythms, and approaches, in order to achieve a desired musical effect. Guilford distinguishes between two forms of flexibility: ‘spontaneous flexibility’, when a subject chooses himself to switch between different approaches, and ‘adaptive flexibility’ when a switch in approach is necessary or preferred to fit a certain task (“Measurement” 158). While both forms of flexibility can be found on Indaba, adaptive flexibility is seen as a particularly important criterion of being a skilled remixer, as it shows that a remixer is able to understand, and react to, the musical affordances of the original track. The idea that music has affordances is not new.
As Windsor and De Bézenac argue, building on Gibson’s original theory of affordances, even in the most free expressive jazz improvisations, there are certain cues that make us understand if a solo is “going with” or “going against” the shared context, and it is these cues that guide a musician through an improvisation (111). The same is true for remix practices. As Regelski argues, any form of music rearranging or appropriation “requires considerable understanding of music’s properties – and the different affordances of those properties” (38). Even when remixers only use one of the original stems, such as the vocals, they need to take into account, for example, the tempo of the song, the intensity of the voice, the chord patterns on which the vocals are based, and the mood or feeling the singer is trying to convey. A skilled remixer, then, builds his or her ideas on top of that so that they strengthen and not diminish these properties. On Indaba, ironic or humoristic remixers too are expected to consider at least some of the basic features of the original track, such as its key or its particular form of musical phrasing. Remixes in which these features are purposely ignored are often not appreciated by the community. As Tim Toz, one of the remixers, explained: “There’s only so much you can do, I think, in the context of a melody plus the way the song was originally sung. […] I hear guys trying to bend certain vocal cadences into other kinds of grooves, and it somehow doesn’t work […], it [begins] to sound unnatural.” On Indaba, remixers compliment each other when they find the right approach to the original track. They also critique each other when an approach does not fit the original song, when it does not go along with the ‘feel’ of the track, or when it seems to be out of key or sync with the vocals.
By discussing each other’s tracks, remixers not only collectively explore the limits and possibilities of a song, they also implicitly discuss their abilities to hear those possibilities and be able to act on them appropriately. What remixers need in order to be able to do this is what Hugill calls ‘aural awareness’ (15): the ability to understand how sound works, both in a broad and in-depth way. While aural awareness is important for any musician, remixers are especially reliant on it, as their work is centred around the manipulation and extension of already existing sounds (Hugill). In order to be able to move from contest to contest, remixers need to have a broad understanding of how different musical styles work and the kind of possibilities they afford. At the same time they also need to know, at a more granular level, how sounds interact and how small alterations of chords, timbres, or rhythms can change the overall feel of a track.

Conclusion

Remix competitions draw participants with a wide variety of musical backgrounds who make use of a broad range of instruments and techniques. The reason such a diverse group is able to participate and compete together is not because these practices do not require musical skill, but rather because remix competitions draw on particular kinds of abilities which are not directly linked to specific methods or techniques. While it might not be necessary to produce a flawless track or to be able to play musical material in real-time, remixers do need to be able to respond to a wide variety of source materials, in a quick and effective way. Also, while it might not be necessary for remixers to be able to produce a song from scratch, they do need to be able to understand, and adapt to, the musical affordances different songs provide. 
In order to be able to move from contest to contest, as true musical chameleons, remixers need a broad and in-depth understanding of how sound works in different musical contexts and how particular musical responses can be achieved. As soon as remixers upload a track, it is mainly these abilities that will be judged, discussed, and evaluated by the community. In this way fluency and flexibility are not only central abilities in order to be able to participate in these remix competitions, they are also important yardsticks by which remixers measure and evaluate both their own work and the achievements of their peers.

Acknowledgements

The author would like to thank Renée van de Vall, Karin Wenz, and Dennis Kersten for their comments on early drafts of this article. Parts of this research have, in an earlier stage, been presented during the IASPM International Conference for the Study of Popular Music in Gijon, Spain 2013.

References

Arnold, John E. “Education for Innovation.” A Source Book for Creative Thinking. Eds. Sidney Jay Parnes and Harold F. Harding. New York: Charles Scribner’s Sons, 1962.
Borschke, Margie. Rethinking the Rhetoric of Remix. Copies and Material Culture in Digital Networks. PhD Thesis U of New South Wales, 2012.
Demers, Joanna. Steal This Music. How Intellectual Property Law Affects Musical Creativity. Athens: The U of Georgia P, 2006.
Gibson, James J. The Ecological Approach to Visual Perception. London: Lawrence Erlbaum, 1986.
Gouzouasis, Peter. “Fluency in General Music and Arts Technologies: Is the Future of Music a Garage Band Mentality?” Action, Criticism, and Theory for Music Education 4.2 (2005). 26 Aug. 2012.
Guilford, Joy Paul. “Creativity: Its Measurement and Development.” A Source Book for Creative Thinking. Eds. Sidney Jay Parnes and Harold F. Harding. New York: Charles Scribner’s Sons, 1962.
Guilford, Joy Paul. “Creativity Research: Past, Present and Future.” Frontiers of Creativity Research. Beyond the Basics. Ed. Scott G. Isaksen.
Buffalo: Bearly Limited, 1987 [1950]. 33–65.
Guilford, Joy Paul. The Nature of Human Intelligence. London: McGraw-Hill, 1971.
Hine, Christine. Virtual Ethnography. London: Sage, 2000.
Hugill, Andrew. The Digital Musician. New York: Routledge, 2008.
Jansen, Bas. Where Credit is Due: Cultural Practices of Recorded Music. PhD Thesis U of Amsterdam, 2011.
Lessig, Lawrence. Remix. Making Art and Commerce Thrive in the Hybrid Economy. London: Bloomsbury, 2008.
Navas, Eduardo. Remix Theory. The Aesthetics of Sampling. New York: Springer Wien, 2012.
Pinch, Trevor, and Katherine Athanasiades. “Online Music Sites as Sonic Sociotechnical Communities: Identity, Reputation, and Technology at ACIDplanet.com.” The Oxford Handbook of Sound Studies. Eds. Trevor Pinch and Karin Bijsterveld. Oxford: Oxford UP, 2011. 480–505.
Regelski, Thomas A. “Amateuring in Music and its Rivals.” Action, Criticism, and Theory for Music Education 6.3 (2007): 22–50.
Rodgers, Tara. “On the Process and Aesthetics of Sampling in Electronic Music Production.” Organised Sound 8.3 (2003): 313–20.
Théberge, Paul. “Technology, Creative Practice and Copyright.” Music and Copyright. Second Edition. Eds. Simon Frith and Lee Marshall. Edinburgh: Edinburgh UP, 2004. 139–56.
Väkevä, Lauri. “Garage Band or GarageBand®? Remixing Musical Futures.” British Journal of Music Education 27.1 (2010): 59–70.
Webster, Peter R. “Research on Creative Thinking in Music: The Assessment Literature.” Handbook of Research on Music Teaching and Learning. Ed. Richard Colwell. New York: Schirmer, 1992. 266–80.
Windsor, W. Luke, and Christophe de Bézenac. “Music and Affordances.” Musicae Scientiae 16.1 (2012): 102–20.
APA, Harvard, Vancouver, ISO, and other styles
35

Chambers, Paul. "Producing the self: Digitisation, music-making and subjectivity." Journal of Sociology, April 29, 2021, 144078332110093. http://dx.doi.org/10.1177/14407833211009317.

Full text
Abstract:
This article demonstrates how the availability of music platforms, the guidance of online tutorials and user-friendly affordances of software interfaces have democratised the making of electronic music. Software users traverse new forms of technology as part of their social and cultural selves, the confluence of musical affiliation and specific social media platforms supporting the production and exploration of identity. Case studies of women and non-binary identifying music-makers highlight digitisation’s role in enabling creative agency. Music emerging through these processes evidences a stylistic fluidity that indicates its means of construction, using the sampling, pitch, and time-stretching capabilities of the digital audio workstation. Digitisation provides the inspiration of a world of music and the means and knowledge of how to make it, allowing musical, personal and collective subjectivities to be explored.
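The sampling, pitch, and time-stretching capabilities the abstract attributes to the digital audio workstation can be illustrated with a minimal sketch. This is a hypothetical NumPy illustration, not code from the article or from any DAW; real DAWs use far more sophisticated phase-vocoder or elastique-style algorithms. It contrasts naive resampling (which, like varispeed tape, changes pitch and duration together) with a simple overlap-add grain stretch (which changes duration while roughly preserving pitch):

```python
import numpy as np

SR = 44100  # sample rate in Hz (an assumption for this sketch)

def resample(signal, factor):
    """Naive resampling: speeding up the 'tape' by `factor`
    shortens the signal AND raises its pitch together."""
    n_out = int(len(signal) / factor)
    positions = np.linspace(0, len(signal) - 1, n_out)
    return np.interp(positions, np.arange(len(signal)), signal)

def time_stretch_ola(signal, factor, frame=2048, hop=512):
    """Very simple overlap-add stretch: windowed grains are
    repositioned at a wider output hop, so duration changes
    while pitch is (roughly) preserved."""
    window = np.hanning(frame)
    out_hop = int(hop * factor)
    n_frames = max(1, (len(signal) - frame) // hop)
    out = np.zeros(n_frames * out_hop + frame)
    norm = np.zeros_like(out)
    for i in range(n_frames):
        grain = signal[i * hop : i * hop + frame] * window
        out[i * out_hop : i * out_hop + frame] += grain
        norm[i * out_hop : i * out_hop + frame] += window
    norm[norm < 1e-8] = 1.0  # avoid dividing by zero at the edges
    return out / norm

# One second of A440 as stand-in source material
t = np.arange(SR) / SR
tone = np.sin(2 * np.pi * 440 * t)

slowed = time_stretch_ola(tone, 2.0)   # roughly twice as long, same pitch
chipmunk = resample(tone, 2.0)         # half as long, an octave higher
```

The sketch makes the distinction concrete: `resample` moves the 440 Hz tone up to 880 Hz as a side effect of halving its length, while `time_stretch_ola` nearly doubles the length without transposing it.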
APA, Harvard, Vancouver, ISO, and other styles
36

Adejoh, Thomas, Chukwuemeka H. Elugwu, Mohammed Sidi, Emeka E. Ezugwu, Chijioke O. Asogwa, and Mark C. Okeji. "An audit of radiographers’ practice of left-right image annotation in film-screen radiography and after installation of computed radiography in a tertiary hospital in Africa." Egyptian Journal of Radiology and Nuclear Medicine 51, no. 1 (2020). http://dx.doi.org/10.1186/s43055-020-00371-3.

Full text
Abstract:
Background Errors in radiographic image annotation by radiographers could potentially lead to misdiagnoses by radiologists and wrong-side surgery by surgeons. Such medical negligence has dire medico-legal consequences. It was hypothesized that the newer technology of computed radiography (CR) and direct digital radiography (DDR) image annotation would lead to a change in practice, with a subsequent reduction in annotation errors. Following installation of computed radiography, a modality with electronic, post-processing image annotation, the hypothesis was investigated in our study centre. Results A total of 72,602 and 126,482 images were documented for film-screen radiography (FSR) and computed radiography (CR), respectively, in the department. From these, a sample size of 9452, made up of 4726 each for FSR and CR, was drawn. Anatomical side marker errors were common in every anatomy imaged, with more errors seen in FSR (4.6%) than CR (0.6%). Collectively, an error rate of 3.0% was observed. Errors noticed were a result of marker burnout due to over-exposure as well as marker cone-off due to tight beam collimation. Conclusion Error rates were considerably reduced following a change from film-screen radiography (FSR) to computed radiography (CR) at the study centre. This change was, however, influenced more by a team of quality control radiographers stationed at the CR workstation than by actual practice in the x-ray imaging suite. The presence of anthropomorphic phantoms in university teaching laboratories for demonstrations would significantly help inculcate the skill needed to completely eliminate anatomical side marker (ASM) errors in practice.
APA, Harvard, Vancouver, ISO, and other styles
37

Lukas, Scott A. "Nevermoreprint." M/C Journal 8, no. 2 (2005). http://dx.doi.org/10.5204/mcj.2336.

Full text
Abstract:

 
 
 Perhaps the supreme quality of print is one that is lost on us, since it has so casual and obvious an existence (McLuhan 160). Print Machine (Thad Donovan, 1995) In the introduction to his book on 9/11, Welcome to the Desert of the Real, Slavoj Zizek uses an analogy of letter writing to emphasize the contingency of post-9/11 reality. In the example, Zizek discusses the efforts of writers to escape the eyes of governmental censors and a system that used blue ink to indicate a message was true, red ink to indicate it was false. The story ends with an individual receiving a letter from the censored country stating that the writer could not find any red ink. The ambiguity and the duplicity of writing, suggested in Zizek’s tale of colored inks, is a condition of the contemporary world, even if we are unaware of it. We exist in an age in which print—the economization of writing—has an increasingly significant and precarious role in our lives. We turn to the Internet chat room for textual interventions in our sexual, political and aesthetic lives. We burn satanic Harry Potter books and issue fatwas against writers like Salman Rushdie. We narrate our lives using pictures, fonts of varying typeface and color, and sound on our personalized homepages. We throw out our printed books and buy audio ones so we can listen to our favorite authors in the car. We place trust of our life savings, personal numbers, and digital identity in the hands of unseen individuals behind computer screens. Decisively, we are a print people, but our very nature of being dependent on the technologies of print in our public and private lives leads to our inability to consider the epistemological, social and existential effects of print on us. In this article, I focus on the current manifestations of print—what I call “newprint”—including their relationships to consumerism, identity formation and the politics of the state. 
I will then consider the democratic possibilities of print, suggested by the personalization of print through the Internet and home publishing, and conclude with the implications of the end of print that include the possibility of a post-print language and the middle voice. In order to understand the significance of our current print culture, it is important to situate print in the context of the history of communication. In earlier times, writing had magical associations (Harris 10), and commonly these underpinnings led to the stratification of communities. Writing functioned as a type of black box, “the mysterious technology by which any message [could] be concealed from its illiterate bearer” (Harris 16). Plato and Socrates warned against the negative effects of writing on the mind, including the erosion of memory (Ong 81). Though it once supplemented the communicational bases of orality, the written word soon supplanted it and created a dramatic existential shift in people—a separation of “the knower from the known” (Ong 43-44). As writing moved from the inconvenience of illuminated manuscripts and hand-copied texts, it became systemized in Gutenberg print, and writing then took on the signature of the state—messages between people were codified in the technology of print. With the advent of computer technologies in the 1990s, including personal computers, word processing programs, printers, and the Internet, the age of newprint begins. Newprint includes the electronic language of the Internet and other examples of the public alphabet, including billboards, signage and the language of advertising. As much as members of consumer society are led to believe that newprint is the harbinger of positive identity construction and individualism, closer analysis of the mechanisms of newprint leads to a different conclusion. An important context of new print is found in the space of the home computer. 
The home computer is the workstation of the contemporary discursive culture—people send and receive emails, do their shopping on the Internet, meet friends and even spouses through dating services, conceal their identity on MUDs and MOOs, and produce state-of-the-art publishing projects, even books. The ubiquity of print in the space of the personal computer leads to the vital illusion that this newprint is emancipatory. Some theorists have argued that the Internet exhibits the spirit of communicative action addressed by Juergen Habermas, but such thinkers have neglected the fact that the foundations of newprint, just like those of Gutenberg print, are the state and the corporation. Recent advertising of Hewlett-Packard and other computer companies illustrates this point. One advertisement suggested that consumers could “invent themselves” through HP computer and printer technology: by using the varied media available to them, consumers can make everything from personalized greeting cards to full-fledged books. As Friedrich Kittler illustrates, we should resist the urge to separate the practices of writing from the technologies of their production, what Jay David Bolter (41) denotes as the “writing space”. For as much as we long for new means of democratic and individualistic expression, we should not succumb to the urge to accept newprint because of its immediacy, novelty or efficiency. Doing so will relegate us to a mechanistic existence, what is referenced metaphorically in Thad Donovan’s “print machine.” In multiple contexts, newprint extends the corporate state’s propaganda industry by turning the written word into artifice. Even before newprint, the individual was confronted with the hegemony of writing. Writing creates “context-free language” or “autonomous discourse,” which means an individual cannot directly confront the language or speaker as one could in oral cultures (Ong 78). 
This further division of the individual from the communicational world is emphasized in newprint’s focus on the aesthetics of the typeface. In word processing programs like Microsoft Word, and specialized ones like TwistType, the consumer can take a word or a sentence and transform it into an aesthetic formation. On the word processing program that is producing this text, I can choose from Blinking Background, Las Vegas Lights, Marching Red or Black Ants, Shimmer, and Sparkle Text. On my campus email system I am confronted with pictorial backgrounds, font selection and animation as an intimate aspect of the communicational system of my college. On my cell phone I can receive text messages, and I can choose to use emoticons (iconic characters and messages) on the Internet. As Walter Ong wrote, “print situates words in space more relentlessly than writing ever did … control of position is everything in print” (Ong 121). In the case of the new culture of print, the control over more functions of the printed page, specifically its presentation, leads some consumers to believe that choice and individuality are the outcomes. Newprint does not free the writer from the constraints imposed by the means of traditional print—the printing press—rather, it furthers them as the individual operates by the logos of a predetermined and programmed electronic print. The capacity to spell and write grammatically correct sentences is abated by the availability of spell- and grammar-checking functions in word processing software. In many ways, the aura of writing is lost in newprint in the same way in which art lost its organic nature as it moved into the age of reproducibility (Benjamin). 
Just as filters in imaging programs like Photoshop reduce the aesthetic functions of the user to the determinations of the software programmer, the use of automated print technologies—whether spell-checking or fanciful page layout software like QuarkXpress or Page Maker—will further dilute the voice of the writer. Additionally, the new forms of print can lead to a fracturing of community, the opposite intent of Habermas’ communicative action. An example is the recent growth of specialized languages on the Internet. Some of the newer forms of such languages use combinations of alphanumeric characters to create a language that can only be read by those with the code. As Internet print becomes more specialized, a tribal effect may be felt within our communities. Since email began a few years ago, I have noticed that the nature of the emails I receive has been dramatically altered. Today’s emails tend to be short and commonly include short hands (“LOL” = “laugh out loud”), including the elimination of capitalization and punctuation. In surveying students on the reasons behind such alterations of language in email, I am told that these short hands allow for more efficient forms of communication. In my mind, this is the key issue that is at stake in both print and newprint culture—for as long as we rely on print and other communicational systems as a form of efficiency, we are doomed to send and receive inaccurate and potentially dangerous messages. Benedict Anderson and Hannah Arendt addressed the connections of print to nationalistic and fascist urges (Anderson; Arendt), and such tendencies are seen in the post-9/11 discursive formations within the United States. 
Bumper stickers and Presidential addresses conveyed the same simplistic printed messages: “Either You are with Us or You are with the Terrorists.” Whether dropping leaflets from airplanes or in scrolling text messages on the bottom of the television news screen, the state is dependent on the efficiency of print to maintain control of the citizen. A feature of this efficiency is that newprint must be rhetorically immediate in its results, widely available in different forms of technology, and dominated by the notion of individuality and democracy that is envisioned in HP’s “invent yourself” advertisements. As Marshall McLuhan’s epigram suggests, we have an ambiguous relationship to print. We depend on printed language in our daily lives, for education and for the economic transactions that underpin our consumer world, yet we are unable to confront the rhetoric of the state and mass media that are consequences of the immediacy and magic of both print and new print. Print extends the domination of our consciousness by forms of discourse that privilege representation over experience and the subject over the object. As we look to new means of communicating with one another and of expressing our intimate lives, we must consider altering the discursive foundations of our communication, such as looking to the middle voice. The middle voice erases the distinctions between subjects and objects and instead emphasizes the writer being in the midst of things, as a part of the world as opposed to dominating it (Barthes; Tyler). A few months prior to writing this article, I spent the fall quarter teaching in London. One day I received an email that changed my life. My partner of nearly six years announced that she was leaving me. I was gripped with the fact of my being unable to discuss the situation with her as we were thousands of miles apart, and I struggled to understand how such a significant and personal circumstance could be communicated with the printed word of email. 
Welcome to new print!

References

Anderson, Benedict. Imagined Communities: Reflections on the Origin and Spread of Nationalism. London: Verso, 1991.
Arendt, Hannah. The Origins of Totalitarianism. San Diego: Harcourt Brace, 1976.
Barthes, Roland. “To Write: An Intransitive Verb?” The Languages of Criticism and the Sciences of Man: The Structuralist Controversy. Ed. Richard Macksey and Eugenio Donato. Baltimore: Johns Hopkins UP, 1970. 134-56.
Benjamin, Walter. “The Work of Art in the Age of Its Technological Reproducibility: Second Version.” Walter Benjamin: Selected Writings, Volume 3: 1935-1938. Cambridge: Belknap/Harvard, 2002.
Bolter, Jay David. Writing Space: The Computer, Hypertext, and the History of Writing. Hillsdale, NJ: Lawrence Erlbaum, 1991.
Habermas, Jürgen. The Theory of Communicative Action. Vol. I. Boston: Beacon Press, 1985.
Harris, Roy. The Origin of Writing. La Salle, IL: Open Court, 1986.
Kittler, Friedrich A. Discourse Networks 1800/1900. Stanford: Stanford UP, 1990.
McLuhan, Marshall. Understanding Media: The Extensions of Man. Cambridge: MIT P, 1994.
Ong, Walter J. Orality and Literacy: The Technologizing of the Word. London: Routledge, 1991.
Tyler, Stephen A. “The Middle Voice: The Influence of Post-Modernism on Empirical Research in Anthropology.” Post-modernism and Anthropology. Eds. K. Geuijen, D. Raven, and J. de Wolf. Assen, The Netherlands: Van Gorcum, 1995.
Zizek, Slavoj. Welcome to the Desert of the Real. London: Verso, 2002. 
 
 
 
Citation reference for this article

MLA Style
Lukas, Scott A. "Nevermoreprint." M/C Journal 8.2 (2005). <http://journal.media-culture.org.au/0506/04-lukas.php>.
APA Style
Lukas, S. (Jun. 2005) "Nevermoreprint," M/C Journal, 8(2). Retrieved from <http://journal.media-culture.org.au/0506/04-lukas.php>.
APA, Harvard, Vancouver, ISO, and other styles
38

Cesarini, Paul. "‘Opening’ the Xbox." M/C Journal 7, no. 3 (2004). http://dx.doi.org/10.5204/mcj.2371.

Full text
Abstract:
“As the old technologies become automatic and invisible, we find ourselves more concerned with fighting or embracing what’s new”—Dennis Baron, From Pencils to Pixels: The Stage of Literacy Technologies What constitutes a computer, as we have come to expect it? Are they necessarily monolithic “beige boxes”, connected to computer monitors, sitting on computer desks, located in computer rooms or computer labs? In order for a device to be considered a true computer, does it need to have a keyboard and mouse? If this were 1991 or earlier, our collective perception of what computers are and are not would largely be framed by this “beige box” model: computers are stationary, slab-like, and heavy, and their natural habitats must be in rooms specifically designated for that purpose. In 1992, when Apple introduced the first PowerBook, our perception began to change. Certainly there had been other portable computers prior to that, such as the Osborne 1, but these were more luggable than portable, weighing just slightly less than a typical sewing machine. The PowerBook and subsequent waves of laptops, personal digital assistants (PDAs), and so-called smart phones from numerous other companies have steadily forced us to rethink and redefine what a computer is and is not, how we interact with them, and the manner in which these tools might be used in the classroom. However, this reconceptualization of computers is far from over, and is in fact steadily evolving as new devices are introduced, adopted, and subsequently adapted for uses beyond of their original purpose. Pat Crowe’s Book Reader project, for example, has morphed Nintendo’s GameBoy and GameBoy Advance into a viable electronic book platform, complete with images, sound, and multi-language support. 
(Crowe, 2003) His goal was to take this existing technology previously framed only within the context of proprietary adolescent entertainment, and repurpose it for open, flexible uses typically associated with learning and literacy. Similar efforts are underway to repurpose Microsoft’s Xbox, perhaps the ultimate symbol of “closed” technology given Microsoft’s propensity for proprietary code, in order to make it a viable platform for Open Source Software (OSS). However, these efforts are not foregone conclusions, and are in fact typical of the ongoing battle over who controls the technology we own in our homes, and how open source solutions are often at odds with a largely proprietary world. In late 2001, Microsoft launched the Xbox with a multimillion dollar publicity drive featuring events, commercials, live models, and statements claiming this new console gaming platform would “change video games the way MTV changed music”. (Chan, 2001) The Xbox launched with the following technical specifications: a 733MHz Pentium III; 64MB RAM; an 8 or 10GB internal hard disk drive; a CD/DVD-ROM drive (speed unknown); an Nvidia graphics processor with HDTV support; 4 USB 1.1 ports (adapter required); AC3 audio; a 10/100 Ethernet port; and an optional 56k modem (TechTV, 2001). While current computers dwarf these specifications in virtually all areas now, for 2001 these were roughly on par with many desktop systems. The retail price at the time was $299, but steadily dropped to nearly half that with additional price cuts anticipated. Based on these features, the preponderance of “off the shelf” parts and components used, and the relatively reasonable price, numerous programmers quickly became interested in seeing if it was possible to run Linux and additional OSS on the Xbox. In each case, the goal has been similar: exceed the original purpose of the Xbox, to determine if and how well it might be used for basic computing tasks. 
If these attempts prove to be successful, the Xbox could allow institutions to dramatically increase the student-to-computer ratio in select environments, or allow individuals who could not otherwise afford a computer to instead buy an Xbox, download and install Linux, and use this new device to write, create, and innovate. This drive to literally and metaphorically “open” the Xbox comes from many directions. Such efforts include Andrew Huang’s self-published “Hacking the Xbox” book in which, under the auspices of reverse engineering, Huang analyzes the architecture of the Xbox, detailing step-by-step instructions for flashing the ROM, upgrading the hard drive and/or RAM, and generally prepping the device for use as an information appliance. Additional initiatives include Lindows CEO Michael Robertson’s $200,000 prize to encourage Linux development on the Xbox, and the Xbox Linux Project at SourceForge. What is Linux? Linux is an alternative operating system initially developed in 1991 by Linus Benedict Torvalds. Linux was based off a derivative of the MINIX operating system, which in turn was a derivative of UNIX. (Hasan 2003) Linux is currently available for Intel-based systems that would normally run versions of Windows, PowerPC-based systems that would normally run Apple’s Mac OS, and a host of other handheld, cell phone, or so-called “embedded” systems. Linux distributions are based almost exclusively on open source software, graphic user interfaces, and middleware components. While there are commercial Linux distributions available, these mainly just package the freely available operating system with bundled technical support, manuals, some exclusive or proprietary commercial applications, and related services. Anyone can still download and install numerous Linux distributions at no cost, provided they do not need technical support beyond the community/enthusiast level. 
Typical Linux distributions come with open source web browsers, word processors and related productivity applications (such as those found in OpenOffice.org), and related tools for accessing email, organizing schedules and contacts, etc. Certain Linux distributions are more or less designed for network administrators, system engineers, and similar “power users” somewhat distanced from that of our students. However, several distributions, including Lycoris, Mandrake, LindowsOS, and others, are specifically tailored as regular, desktop operating systems, with regular, everyday computer users in mind. As Linux has no draconian “product activation key” method of authentication, or digital rights management-laden features associated with installation and implementation on typical desktop and laptop systems, Linux is becoming an ideal choice both individually and institutionally. It still faces an uphill battle in terms of achieving widespread acceptance as a desktop operating system. As Finnie points out in Desktop Linux Edges Into The Mainstream: “to attract users, you need ease of installation, ease of device configuration, and intuitive, full-featured desktop user controls. It’s all coming, but slowly. With each new version, desktop Linux comes closer to entering the mainstream. It’s anyone’s guess as to when critical mass will be reached, but you can feel the inevitability: There’s pent-up demand for something different.” (Finnie 2003) Linux is already spreading rapidly in numerous capacities, in numerous countries. Linux has “taken hold wherever computer users desire freedom, and wherever there is demand for inexpensive software.” Reports from technology research company IDG indicate that roughly a third of computers in Central and South America run Linux. 
Several countries, including Mexico, Brazil, and Argentina, have all but mandated that state-owned institutions adopt open source software whenever possible to “give their people the tools and education to compete with the rest of the world.” (Hills 2001) The Goal Less than a year after Microsoft introduced the Xbox, the Xbox Linux project formed. The Xbox Linux Project has a goal of developing and distributing Linux for the Xbox gaming console, “so that it can be used for many tasks that Microsoft don’t want you to be able to do. ...as a desktop computer, for email and browsing the web from your TV, as a (web) server” (Xbox Linux Project 2002). Since the Linux operating system is open source, meaning it can freely be tinkered with and distributed, those who opt to download and install Linux on their Xbox can do so with relatively little overhead in terms of cost or time. Additionally, Linux itself looks very “windows-like”, making for a fairly low learning curve. To help increase overall awareness of this project and assist in diffusing it, the Xbox Linux Project offers step-by-step installation instructions, with the end result being a system capable of using common peripherals such as a keyboard and mouse, scanner, printer, a “webcam and a DVD burner, connected to a VGA monitor; 100% compatible with a standard Linux PC, all PC (USB) hardware and PC software that works with Linux.” (Xbox Linux Project 2002) Such a system could have tremendous potential for technology literacy. Pairing an Xbox with Linux and OpenOffice.org, for example, would provide our students essentially the same capability any of them would expect from a regular desktop computer. They could send and receive email, communicate using instant messaging, IRC, or newsgroup clients, and browse Internet sites just as they normally would. In fact, the overall browsing experience for Linux users is substantially better than that for most Windows users. 
Internet Explorer, the default browser on all systems running Windows-based operating systems, lacks basic features standard in virtually all competing browsers. Native blocking of “pop-up” advertisements is not yet possible in Internet Explorer without the aid of a third-party utility. Tabbed browsing, which involves the ability to easily open and sort through multiple Web pages in the same window, often with a single mouse click, is also missing from Internet Explorer. The same can be said for a robust download manager, “find as you type”, and a variety of additional features. Mozilla, Netscape, Firefox, Konqueror, and essentially all other OSS browsers for Linux have these features. Of course, most of these browsers are also available for Windows, but Internet Explorer is still considered the standard browser for the platform. If the Xbox Linux Project becomes widely diffused, our students could edit and save Microsoft Word files in OpenOffice.org’s Writer program, and do the same with PowerPoint and Excel files in similar OpenOffice.org components. They could access instructor comments originally created in Microsoft Word documents, and in turn could add their own comments and send the documents back to their instructors. They could even perform many functions not yet possible in Microsoft Office, including saving files in PDF or Flash format without needing Adobe’s Acrobat product or Macromedia’s Flash Studio MX. Additionally, by way of this project, the Xbox can also serve as “a Linux server for HTTP/FTP/SMB/NFS, serving data such as MP3/MPEG4/DivX, or a router, or both; without a monitor or keyboard or mouse connected.” (Xbox Linux Project 2003) In a very real sense, our students could use these inexpensive systems previously framed only within the context of entertainment, for educational purposes typically associated with computer-mediated learning. 
Problems: Control and Access

The existing rhetoric of technological control surrounding current and emerging technologies appears to be stifling many of these efforts before they can even be brought to the public. This rhetoric of control is largely typified by overly restrictive digital rights management (DRM) schemes antithetical to education, and by the Digital Millennium Copyright Act (DMCA). Combined, both are currently being used as technical and legal clubs against these efforts. Microsoft, for example, has taken a dim view of any efforts to adapt the Xbox to Linux. Microsoft CEO Steve Ballmer, who has repeatedly referred to Linux as a cancer and has characterized OSS as un-American, stated, “Given the way the economic model works - and that is a subsidy followed, essentially, by fees for every piece of software sold - our license framework has to do that.” (Becker 2003) Since the Xbox is based on a subsidy model, meaning that Microsoft actually sells the hardware at a loss and instead generates revenue off software sales, Ballmer launched a series of concerted legal attacks against the Xbox Linux Project and similar efforts. In 2002, Nintendo, Sony, and Microsoft simultaneously sued Lik Sang, Inc., a Hong Kong-based company that produces programmable cartridges and “mod chips” for the PlayStation II, Xbox, and Game Cube. Nintendo states that its company alone loses over $650 million each year due to piracy of its console gaming titles, which typically originates in China, Paraguay, and Mexico. (GameIndustry.biz) Currently, many attempts to “mod” the Xbox require the use of such chips. As Lik Sang is one of the only suppliers, initial efforts to adapt the Xbox to Linux slowed considerably. Such chips can still be ordered and shipped here by less conventional means, but that does not change the fact that the chips themselves would be illegal in the U.S.
due to the anticircumvention clause in the DMCA itself, which is designed specifically to protect any DRM-wrapped content, regardless of context. The Xbox Linux Project then attempted to get Microsoft to officially sanction its efforts. It was not only rebuffed; Microsoft then opted to hire programmers specifically to create technological countermeasures for the Xbox, to defeat additional attempts at installing OSS on it. Undeterred, the Xbox Linux Project eventually arrived at a method of installing and booting Linux without the use of mod chips, and has taken a more defiant tone with Microsoft regarding its circumvention efforts. (Lettice 2002) The project states that “Microsoft does not want you to use the Xbox as a Linux computer, therefore it has some anti-Linux-protection built in, but it can be circumvented easily, so that an Xbox can be used as what it is: an IBM PC.” (Xbox Linux Project 2003)

Problems: Learning Curves and Usability

In spite of the difficulties imposed by the combined technological and legal attacks on this project, it has succeeded at infiltrating this closed system with OSS. It has done so beyond the mere prototype level, too, as evidenced by the Xbox Linux Project now having both complete, step-by-step instructions available for users to modify their own Xbox systems, and an alternate plan catering to those who have the interest in modifying their systems, but not the time or technical inclination. Specifically, this option involves users mailing their Xbox systems to community volunteers within the Xbox Linux Project and having these volunteers perform the necessary software preparation, or actually do the full Linux installation for them, free of charge (presumably not including shipping). This particular aspect of the project, dubbed “Users Help Users”, appears to be fairly new.
Yet it already lists over sixty volunteers willing and able to perform this service, since “Many users don’t have the possibility, expertise or hardware” to perform these modifications. Amazingly enough, in some cases these volunteers are barely out of junior high school. One such volunteer stipulates that those seeking his assistance keep in mind that he is “just 14” and that when performing these modifications he “...will not always be finished by the next day”. (Steil 2003) In addition to this interesting, if somewhat unusual, level of community-driven support, there are currently several Linux-based options available for the Xbox. The two that are perhaps the most developed are GentooX, which is based on the popular Gentoo Linux distribution, and Ed’s Debian, based on the Debian GNU/Linux distribution. Both Gentoo and Debian are “seasoned” distributions that have been available for some time now, though Daniel Robbins, Chief Architect of Gentoo, refers to the product as actually being a “metadistribution” of Linux, due to its high degree of adaptability and configurability. (Gentoo 2004) Specifically, Robbins asserts that Gentoo is capable of being “customized for just about any application or need. ...an ideal secure server, development workstation, professional desktop, gaming system, embedded solution or something else—whatever you need it to be.” (Robbins 2004) He further states that the whole point of Gentoo is to provide a better, more usable Linux experience than that found in many other distributions. Robbins states that: “The goal of Gentoo is to design tools and systems that allow a user to do their work pleasantly and efficiently as possible, as they see fit. Our tools should be a joy to use, and should help the user to appreciate the richness of the Linux and free software community, and the flexibility of free software. ...Put another way, the Gentoo philosophy is to create better tools.
When a tool is doing its job perfectly, you might not even be very aware of its presence, because it does not interfere and make its presence known, nor does it force you to interact with it when you don’t want it to. The tool serves the user rather than the user serving the tool.” (Robbins 2004) There is also a so-called “live CD” Linux distribution suitable for the Xbox, called dyne:bolic, and an in-progress release of Slackware Linux as well. According to the Xbox Linux Project, the only difference between the standard releases of these distributions and their Xbox counterparts is that “...the install process – and naturally the bootloader, the kernel and the kernel modules – are all customized for the Xbox.” (Xbox Linux Project 2003) Of course, even if Gentoo is as user-friendly as Robbins purports, even if the Linux kernel itself has become significantly more robust and efficient, and even if Microsoft again drops the retail price of the Xbox, is this really a feasible solution in the classroom? Does the Xbox Linux Project have an army of 14-year-olds willing to modify dozens, perhaps hundreds, of these systems for use in secondary schools and higher education? Of course not. If such an institutional rollout were to be undertaken, it would require significant support not only from faculty, but from Department Chairs, Deans, IT staff, and quite possibly Chief Information Officers. Disk images would need to be customized for each institution to reflect their respective needs, ranging from setting specific home pages on web browsers, to bookmarks, to custom back-up and/or disk re-imaging scripts, to network authentication. This would be no small task. Yet the steps mentioned above are essentially no different from what would be required of any IT staff when creating a new disk image for a computer lab, be it one for a Windows-based system or a Mac OS X-based one. The primary difference would be Linux itself—nothing more, nothing less.
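To make concrete how routine this kind of image customization is, the following is a minimal sketch of the sort of script an IT department might run while preparing a lab disk image. Everything in it, the directory layout, the homepage URL, and the script itself, is a hypothetical illustration, not part of the Xbox Linux Project or any institution's actual tooling.

```shell
#!/bin/sh
# Hypothetical sketch: institution-specific customization applied to an
# unpacked Linux disk image before it is deployed to lab machines.
# All paths and values below are illustrative assumptions.
set -eu

IMAGE_ROOT="./image"                        # root of the unpacked disk image
HOMEPAGE="https://www.example.edu/library"  # institution's chosen start page

# Write a default browser preference into the skeleton directory, so that
# every user account created on the deployed machine inherits the
# institutional home page.
PREF_DIR="$IMAGE_ROOT/etc/skel/.mozilla"
mkdir -p "$PREF_DIR"
printf 'user_pref("browser.startup.homepage", "%s");\n' "$HOMEPAGE" \
    > "$PREF_DIR/prefs.js"

echo "Customized image at $IMAGE_ROOT"
```

Bookmarks, back-up scripts, and network-authentication settings would be dropped into the image the same way, which is why the work differs little between a Linux image and a Windows or Mac OS X one.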
The institutional difficulties in undertaking such an effort would likely be encountered prior to even purchasing a single Xbox, in that they would involve the same difficulties associated with any new hardware or software initiative: staffing, budget, and support. If the institution in question is either unwilling or unable to address these three factors, it would not matter if the Xbox itself were as free as Linux.

An Open Future, or a Closed One?

It is unclear how far the Xbox Linux Project will be allowed to go in its efforts to invade an essentially proprietary system with OSS. Unlike Sony, which has made deliberate steps to commercialize similar efforts for its PlayStation 2 console, Microsoft appears resolute in fighting OSS on the Xbox by any means necessary. It will continue to crack down on any companies selling so-called mod chips, and will continue to employ technological protections to keep the Xbox “closed”. Despite clear evidence to the contrary, in all likelihood Microsoft will continue to equate any OSS efforts directed at the Xbox with piracy-related motivations. Additionally, Microsoft’s successor to the Xbox will likely incorporate additional anticircumvention technologies that could set the Xbox Linux Project back by months or years, or could stop it cold. Of course, it is difficult to say with any degree of certainty how this “Xbox 2” (perhaps a more appropriate name might be “Nextbox”) will impact this project. Regardless of how this device evolves, there can be little doubt of the value of Linux, OpenOffice.org, and other OSS to teaching and learning with technology. This value exists not only in terms of price, but in increased freedom from policies and technologies of control.
New Linux distributions from Gentoo, Mandrake, Lycoris, Lindows, and other companies are just now starting to focus their efforts on Linux as a user-friendly, easy-to-use desktop operating system, rather than just a server or “techno-geek” environment suitable for advanced programmers and computer operators. While metaphorically opening the Xbox may not be for everyone, and may not be a suitable computing solution for all, I believe we as educators must promote and encourage such efforts whenever possible. I suggest this because I believe we need to exercise our professional influence and ultimately shape the future of technology literacy, either individually as faculty or collectively as departments, colleges, or institutions. Moran and Fitzsimmons-Hunter argue this very point in Writing Teachers, Schools, Access, and Change. One of the fundamental provisions they use to define “access” asserts that there must be a willingness for teachers and students to “fight for the technologies that they need to pursue their goals for their own teaching and learning.” (Taylor/Ward 160) Regardless of whether or not this debate is grounded in the “beige boxes” of the past or the Xboxes of the present, much is at stake. Private corporations should not be in a position to control the manner in which we use legally purchased technologies, regardless of whether or not these technologies are then repurposed for literacy uses. I believe the exigency associated with this control, and the ongoing evolution of what is and is not a computer, dictates that we assert ourselves more actively into this discussion. We must take steps to provide our students with the best possible computer-mediated learning experience, however seemingly unorthodox the technological means might be, so that they may think critically, communicate effectively, and participate actively in society and in their future careers.
About the Author

Paul Cesarini is an Assistant Professor in the Department of Visual Communication & Technology Education, Bowling Green State University, Ohio. Email: pcesari@bgnet.bgsu.edu

Works Cited

Baron, Denis. “From Pencils to Pixels: The Stages of Literacy Technologies.” Passions Pedagogies and 21st Century Technologies. Hawisher, Gail E., and Cynthia L. Selfe, Eds. Utah: Utah State University Press, 1999. 15–33.
Becker, David. “Ballmer: Mod Chips Threaten Xbox”. News.com. 21 Oct 2002. <http://news.com.com/2100-1040-962797.php>.
Finni, Scott. “Desktop Linux Edges Into The Mainstream”. TechWeb. 8 Apr 2003. <http://www.techweb.com/tech/software/20030408_software>.
<http://xbox-linux.sourceforge.net/docs/debian.php>.
<http://news.com.com/2100-1040-978957.html?tag=nl>.
<http://archive.infoworld.com/articles/hn/xml/02/08/13/020813hnchina.xml>.
<http://www.neoseeker.com/news/story/1062/>.
<http://www.bookreader.co.uk>.
<http://www.theregister.co.uk/content/archive/29439.html>.
<http://gentoox.shallax.com/>.
<http://ragib.hypermart.net/linux/>.
<http://www.itworld.com/Comp/2362/LWD010424latinlinux/pfindex.html>.
<http://www.xbox-linux.sourceforge.net>.
<http://www.theregister.co.uk/content/archive/27487.html>.
<http://www.theregister.co.uk/content/archive/26078.html>.
<http://www.us.playstation.com/peripherals.aspx?id=SCPH-97047>.
<http://www.techtv.com/extendedplay/reviews/story/0,24330,3356862,00.html>.
<http://www.wired.com/news/business/0,1367,61984,00.html>.
<http://www.gentoo.org/main/en/about.xml>.
<http://www.gentoo.org/main/en/philosophy.xml>.
<http://techupdate.zdnet.com/techupdate/stories/main/0,14179,2869075,00.html>.
<http://xbox-linux.sourceforge.net/docs/usershelpusers.html>.
<http://www.cnn.com/2002/TECH/fun.games/12/16/gamers.liksang/>.

Citation reference for this article

MLA Style
Cesarini, Paul. "“Opening” the Xbox." M/C: A Journal of Media and Culture <http://www.media-culture.org.au/0406/08_Cesarini.php>.

APA Style
Cesarini, P. (2004, Jul 1). “Opening” the Xbox.
M/C: A Journal of Media and Culture, 7, <http://www.media-culture.org.au/0406/08_Cesarini.php>