Journal articles on the topic 'Machine Learning, Musical Instrument Recognition'


Consult the top 41 journal articles for your research on the topic 'Machine Learning, Musical Instrument Recognition.'


You can also download the full text of each publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Eyharabide, Victoria, Imad Eddine Ibrahim Bekkouch, and Nicolae Dragoș Constantin. "Knowledge Graph Embedding-Based Domain Adaptation for Musical Instrument Recognition." Computers 10, no. 8 (August 3, 2021): 94. http://dx.doi.org/10.3390/computers10080094.

Abstract:
Convolutional neural networks raised the bar for machine learning and artificial intelligence applications, mainly due to the abundance of data and computation. However, there is not always enough data for training, especially for historical collections of cultural heritage where the original artworks have been destroyed or damaged over time. Transfer learning and domain adaptation techniques are possible solutions to the issue of data scarcity. This article presents a new method for domain adaptation based on knowledge graph embeddings. A knowledge graph embedding projects a knowledge graph into a lower-dimensional space in which entities and relations are represented as continuous vectors. Our method incorporates these semantic vector spaces as a key ingredient to guide the domain adaptation process. We combined knowledge graph embeddings with visual embeddings from the images and trained a neural network with the combined embeddings as anchors, using an extension of Fisher's linear discriminant. We evaluated our approach on two cultural heritage datasets of images containing medieval and Renaissance musical instruments. The experimental results showed a significant improvement over the baselines and over state-of-the-art domain adaptation methods.
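As a concrete illustration of the combination step described above, the following minimal Python sketch (our illustration, not the authors' code) pairs each image's visual embedding with the knowledge-graph embedding of its instrument entity and concatenates them into combined anchor vectors; the embedding sources and dimensions are placeholder assumptions, and the Fisher-discriminant training step is not reproduced.

```python
# Minimal sketch (not the authors' code): concatenate per-image visual
# embeddings with the KG embeddings of their entities to form the combined
# anchor vectors the abstract describes. All dimensions are assumptions.
import numpy as np

rng = np.random.default_rng(0)
visual = rng.normal(size=(64, 512))   # e.g. CNN image features (assumed dim)
kg = rng.normal(size=(64, 100))       # e.g. entity vectors from a KG embedding
combined = np.concatenate([visual, kg], axis=1)   # (64, 612) anchor vectors
```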
2

Rajesh, Sangeetha, and Nalini N. J. "Recognition of Musical Instrument Using Deep Learning Techniques." International Journal of Information Retrieval Research 11, no. 4 (October 2021): 41–60. http://dx.doi.org/10.4018/ijirr.2021100103.

Abstract:
The proposed work investigates the impact of Mel Frequency Cepstral Coefficients (MFCC), Chroma DCT-Reduced Pitch (CRP), and Chroma Energy Normalized Statistics (CENS) on instrument recognition from monophonic instrumental music clips using three deep learning techniques: Bidirectional Recurrent Neural Networks with Long Short-Term Memory (BRNN-LSTM), stacked autoencoders (SAE), and Convolutional Neural Network - Long Short-Term Memory (CNN-LSTM). Initially, MFCC, CENS, and CRP features are extracted from instrumental music clips collected as a dataset from various online libraries. The deep neural network models are then built by training on the extracted features. Recognition rates of 94.9%, 96.8%, and 88.6% are achieved using combined MFCC and CENS features, and 90.9%, 92.2%, and 87.5% using combined MFCC and CRP features, with the deep learning models BRNN-LSTM, CNN-LSTM, and SAE, respectively. The experimental results show that MFCC features combined with CENS and CRP features at score level improve the performance of the proposed system.
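For readers who want to reproduce the feature-extraction step, a hedged sketch using librosa follows; the audio path and parameters are illustrative assumptions, not the authors' settings.

```python
# Hedged sketch of extracting MFCC and CENS features from one music clip
# with librosa; the path and parameters are illustrative, not the paper's.
import librosa

def extract_features(path, n_mfcc=13):
    y, sr = librosa.load(path)                              # decode audio
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)  # (n_mfcc, frames)
    cens = librosa.feature.chroma_cens(y=y, sr=sr)          # (12, frames)
    return mfcc, cens
```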
3

Mostafa, Mohamed M., and Nedret Billor. "Recognition of Western style musical genres using machine learning techniques." Expert Systems with Applications 36, no. 8 (October 2009): 11378–89. http://dx.doi.org/10.1016/j.eswa.2009.03.050.

4

Hawley, Scott H. "Synthesis of Musical Instrument Sounds: Physics-Based Modeling or Machine Learning?" Acoustics Today 16, no. 1 (2020): 20. http://dx.doi.org/10.1121/at.2020.16.1.20.

5

Maliki, I., and Sofiyanudin. "Musical Instrument Recognition using Mel-Frequency Cepstral Coefficients and Learning Vector Quantization." IOP Conference Series: Materials Science and Engineering 407 (September 26, 2018): 012118. http://dx.doi.org/10.1088/1757-899x/407/1/012118.

6

Tanaka, Atau. "Intention, Effort, and Restraint: The EMG in Musical Performance." Leonardo 48, no. 3 (June 2015): 298–99. http://dx.doi.org/10.1162/leon_a_01018.

Abstract:
The author presents the challenges and opportunities in the use of the electromyogram (EMG), a signal representing muscle activity, for digital musical instrument applications. The author presents basic mapping paradigms and the place of the EMG in multimodal interaction and describes initial trials in machine learning. It is proposed that nonlinearities in musical instrument response cannot be modelled only by parameter interpolation and require strategies of extrapolation. The author introduces the concepts of intention, effort, and restraint as such strategies, to exploit, as well as confront limitations of, the use of muscle signals in musical performance.
7

Liu, Tao, Yanbing Chen, Dongqi Li, Tao Yang, and Jianhua Cao. "Electronic Tongue Recognition with Feature Specificity Enhancement." Sensors 20, no. 3 (January 31, 2020): 772. http://dx.doi.org/10.3390/s20030772.

Abstract:
As a kind of intelligent instrument, an electronic tongue (E-tongue) realizes liquid analysis with an electrode-sensor array and machine learning methods. Large amplitude pulse voltammetry (LAPV) is a common E-tongue type that collects a large amount of response data at a high sampling frequency within a short time, so a fast and effective feature extraction method is necessary for the subsequent machine learning. Considering that massive common-mode components (highly correlated signals) in the sensor-array responses depress the recognition performance of machine learning models, we propose a feature extraction method named feature specificity enhancement (FSE) for specificity enhancement and feature dimension reduction. The proposed FSE method highlights specificity signals by eliminating the common-mode signals on paired sensor responses. Meanwhile, a radial basis function is utilized to project the original features into a nonlinear space. Furthermore, we selected the kernel extreme learning machine (KELM) as the recognition part owing to its fast speed and excellent flexibility. Two datasets from LAPV E-tongues were adopted for the evaluation of the machine learning models: one collected by a purpose-built E-tongue for beverage identification and the other a public benchmark. For performance comparison, we introduced several machine learning models consisting of different combinations of feature extraction and recognition methods. The experimental results show that the proposed FSE coupled with KELM demonstrates obvious superiority to the other models in accuracy, time consumption, and memory cost. Additionally, the low parameter sensitivity of the proposed model is demonstrated as well.
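The common-mode cancellation idea can be sketched generically. The snippet below is only one possible reading of the description, not the paper's implementation: paired sensor responses are subtracted to suppress common-mode components, and a radial basis function projects the remaining specificity signal nonlinearly; the per-pair summary statistic, gamma, and the omitted KELM recognizer are all assumptions.

```python
# Generic sketch of common-mode cancellation plus RBF projection; not the
# paper's implementation. `responses` holds one measurement as an
# (n_sensors, n_samples) array.
import numpy as np
from itertools import combinations

def fse_like_features(responses, gamma=0.5):
    pair_diffs = [responses[i] - responses[j]          # cancel common mode
                  for i, j in combinations(range(responses.shape[0]), 2)]
    summary = np.array([d.mean() for d in pair_diffs]) # crude per-pair summary
    return np.exp(-gamma * summary ** 2)               # RBF projection
```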
8

Kim, Daeyeol, Tegg Taekyong Sung, Soo Young Cho, Gyunghak Lee, and Chae Bong Sohn. "A Single Predominant Instrument Recognition of Polyphonic Music Using CNN-based Timbre Analysis." International Journal of Engineering & Technology 7, no. 3.34 (September 1, 2018): 590. http://dx.doi.org/10.14419/ijet.v7i3.34.19388.

Abstract:
Classifying musical instruments in polyphonic music is a challenging but important task in music information retrieval, enabling automatic tagging of music information such as genre. Previously, most spectrogram-analysis work used the Short-Time Fourier Transform (STFT) and Mel Frequency Cepstral Coefficients (MFCC). Recently, the sparkgram has also been investigated and used in audio source analysis. For deep learning approaches, modified convolutional neural networks (CNNs) have been widely researched, but many results have not improved drastically. Instead of improving the backbone networks, we focused on the preprocessing step. In this paper, we use a CNN and Hilbert Spectral Analysis (HSA) to address the polyphonic music problem. HSA is performed on fixed-length excerpts of polyphonic music, and a predominant-instrument label is assigned to its result. As a result, we achieved state-of-the-art results on the IRMAS dataset and a 3% performance improvement on individual instruments.
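For context, the basic building block of Hilbert spectral analysis is the analytic signal, from which instantaneous amplitude and frequency are derived; the sketch below shows only that step, not the paper's full preprocessing pipeline.

```python
# Sketch of the analytic-signal step underlying Hilbert spectral analysis;
# the paper's full HSA preprocessing is not reproduced here.
import numpy as np
from scipy.signal import hilbert

def instantaneous_attributes(x, fs):
    analytic = hilbert(x)                          # x + j * H{x}
    amplitude = np.abs(analytic)                   # instantaneous envelope
    phase = np.unwrap(np.angle(analytic))
    freq_hz = np.diff(phase) * fs / (2 * np.pi)    # instantaneous frequency
    return amplitude, freq_hz
```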
9

Senan, Norhalina, Rosziati Ibrahim, Nazri Mohd Nawi, Iwan Tri Riyadi Yanto, and Tutut Herawan. "Rough and Soft Set Approaches for Attributes Selection of Traditional Malay Musical Instrument Sounds Classification." International Journal of Software Science and Computational Intelligence 4, no. 2 (April 2012): 14–40. http://dx.doi.org/10.4018/jssci.2012040102.

Abstract:
Feature selection, or attribute reduction, is performed mainly to avoid the 'curse of dimensionality' in large-database problems, including musical instrument sound classification; the problem concerns irrelevant and redundant features. Rough set theory and soft set theory, proposed by Pawlak and Molodtsov respectively, are mathematical tools for dealing with uncertain and imprecise data. Rough and soft set-based dimensionality reduction can be considered machine learning approaches to feature selection. In this paper, the authors apply these approaches as data cleansing and feature selection techniques for Traditional Malay musical instrument sound classification. The data cleansing technique is developed based on matrix computation of multi-soft sets, while feature selection uses maximum attribute dependency based on rough set theory. The modeling process comprises eight phases: data acquisition, sound editing, data representation, feature extraction, data discretization, data cleansing, feature selection, and feature validation via classification. The results show that the highest classification accuracy of 99.82% was achieved with the best 17 features and a 1-NN classifier.
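The rough-set dependency measure that drives this kind of feature selection can be illustrated with a textbook sketch (not the authors' code): the dependency degree of the decision attribute on a condition-attribute set is the size of the positive region divided by the number of objects.

```python
# Textbook sketch of the rough-set dependency degree |POS_C(D)| / |U|.
# `table` is a list of dicts; attribute names are placeholders.
from collections import defaultdict

def dependency_degree(table, cond_attrs, decision_attr):
    blocks = defaultdict(list)               # equivalence classes under C
    for row in table:
        blocks[tuple(row[a] for a in cond_attrs)].append(row[decision_attr])
    positive = sum(len(v) for v in blocks.values() if len(set(v)) == 1)
    return positive / len(table)
```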
10

Thaler, Fabian, and Heiko Gewald. "Language Characteristics Supporting Early Alzheimer's Diagnosis through Machine Learning - A Literature Review." Health Informatics - An International Journal 10, no. 1 (February 28, 2021): 5–23. http://dx.doi.org/10.5121/hiij.2021.10102.

Abstract:
Alzheimer's dementia (AD) is the most common incurable neurodegenerative disease worldwide. Apart from memory loss, AD leads to speech disorders. Timely diagnosis is crucial to halt the progression of the disease; however, current diagnostic procedures are costly, invasive, and distressing. Since early-stage AD manifests itself in speech disorders, examining speech is a natural approach, and machine learning (ML) represents a promising instrument in this context. Nevertheless, no genuine consensus exists on which language characteristics should be analyzed. To counteract this deficit and provide researchers in the field with a better basis for decision-making, we present, based on a literature review, favourable speech characteristics for application to AD detection via ML. Research trends toward applying spontaneous speech, obtained from picture descriptions, as the basis of analysis, and indicates that the combined use of acoustic, linguistic, and demographic features positively influences recognition accuracy. In total, we identified 97 overarching acoustic, linguistic, and demographic features.
11

Cai, Yinying, and Amit Sharma. "Swarm Intelligence Optimization: An Exploration and Application of Machine Learning Technology." Journal of Intelligent Systems 30, no. 1 (January 1, 2021): 460–69. http://dx.doi.org/10.1515/jisys-2020-0084.

Abstract:
Efficient machinery and equipment play an important role in the development and growth of agriculture. Various research studies and patents aim to aid smart agriculture, and machine learning technologies provide strong support for this growth. To explore machine learning technology and algorithms, most applications are studied on the basis of swarm intelligence optimization. An optimized V3CFOA-RF model is built through V3CFOA. The algorithm is tested on a dataset collected on rice pests, then analyzed and compared in detail with other existing algorithms. The results show that the proposed model and algorithm are not only more accurate in recognition and prediction but also solve the time-lag problem to a degree. They achieve a higher accuracy in crop pest prediction, which ensures a more stable and higher output of rice. Thus they can be employed as an important decision-making instrument in the agricultural production sector.
12

Xu, Liyuan, Jie He, Shihong Duan, Xibin Wu, and Qin Wang. "Comparison of machine learning algorithms for concentration detection and prediction of formaldehyde based on electronic nose." Sensor Review 36, no. 2 (March 21, 2016): 207–16. http://dx.doi.org/10.1108/sr-07-2015-0104.

Abstract:
Purpose: A sensor-array and pattern-recognition-based electronic nose (E-nose) is a typical detection and recognition instrument for indoor air quality (IAQ). The E-nose is able to monitor several pollutants in the air by mimicking the human olfactory system. Formaldehyde concentration prediction is one of the major functionalities of the E-nose, and three typical machine learning (ML) algorithms are most frequently used: the back propagation (BP) neural network, the radial basis function (RBF) neural network, and support vector regression (SVR).
Design/methodology/approach: This paper comparatively evaluates and analyzes those three ML algorithms under a controllable environment built on a marketable sensor-array E-nose platform. Variable temperature (T), relative humidity (RH), and pollutant concentration (C) conditions were measured during experiments to support the investigation.
Findings: Regression models were built using the three algorithms, and in-depth analysis demonstrates that the BP neural network model yields better prediction performance than the others.
Originality/value: The empirical results prove that ML algorithms, combined with low-cost sensors, can achieve high-precision contaminant concentration detection indoors.
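As an illustration of the kind of comparison performed, the sketch below cross-validates a BP-style network and SVR on synthetic stand-in data with scikit-learn; it is not the paper's experimental code, and since scikit-learn has no RBF-network regressor, only two of the three families are shown.

```python
# Illustrative comparison of BP-style and SVR regressors on synthetic
# stand-in data; real inputs would be sensor-array responses plus T and RH,
# with formaldehyde concentration as the target.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPRegressor
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))                       # placeholder features
y = X @ rng.normal(size=8) + rng.normal(scale=0.1, size=200)

models = {"BP (MLP)": MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000,
                                   random_state=0),
          "SVR": SVR(kernel="rbf", C=10.0)}
for name, model in models.items():
    mse = -cross_val_score(model, X, y, cv=5,
                           scoring="neg_mean_squared_error").mean()
    print(f"{name}: CV MSE = {mse:.4f}")
```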
13

Pacha, Alexander, Jan Hajič, and Jorge Calvo-Zaragoza. "A Baseline for General Music Object Detection with Deep Learning." Applied Sciences 8, no. 9 (August 29, 2018): 1488. http://dx.doi.org/10.3390/app8091488.

Abstract:
Deep learning is bringing breakthroughs to many computer vision subfields, including Optical Music Recognition (OMR), which has seen a series of improvements in musical symbol detection achieved by using generic deep learning models. However, so far, each such proposal has been based on a specific dataset and different evaluation criteria, which makes it difficult to quantify the new deep learning-based state of the art and assess the relative merits of these detection models on music scores. In this paper, a baseline for general detection of musical symbols with deep learning is presented. We consider three datasets of heterogeneous typology but with the same annotation format and three neural models of different nature, and we establish their performance in terms of a common evaluation standard. The experimental results confirm that direct music object detection with deep learning is indeed promising, but at the same time they illustrate some of the domain-specific shortcomings of general detectors. A qualitative comparison then suggests avenues for OMR improvement, based both on properties of the detection models and on how the datasets are defined. To the best of our knowledge, this is the first time that competing music object detection systems from the machine learning paradigm have been directly compared to each other. We hope that this work will serve as a reference to measure the progress of future developments of OMR in music object detection.
14

Ciarlo, Gregorio, Daniele Angelosante, Marco Guerriero, Giorgio Saldarini, and Nunzio Bonavita. "Enhanced PEMS Performance and Regulatory Compliance through Machine Learning." Sustainability in Environment 3, no. 4 (November 2, 2018): 329. http://dx.doi.org/10.22158/se.v3n4p329.

Abstract:
Modeling technologies can provide strong support to existing emission management systems by means of what is known as a Predictive Emission Monitoring System (PEMS). These systems do not measure emissions through any hardware device but use computer models to predict emission concentrations on the basis of process data (e.g., fuel flow, load) and ambient parameters (e.g., air temperature, relative humidity). They represent a relevant application arena for so-called inferential sensor technology, which has quickly proved invaluable in modern process automation and optimization strategies (Qin et al., 1997; Kadlec et al., 2009). While many applications demonstrate that software systems provide accuracy comparable to that of hardware-based Continuous Emission Monitoring Systems (CEMS), virtual analyzers offer additional features and capabilities that are often not properly considered by end users. Depending on local regulations and constraints, PEMS can be exploited either as a primary source of emission monitoring or as a backup of hardware-based CEMS, able to validate analyzers' readings and extend their service factor. PEMS consistency (and therefore its acceptance by environmental authorities) is directly linked to the accuracy and reliability of each parameter used as model input. While environmental authorities are steadily opening to PEMS, it is easy to foresee that broader recognition and acceptance will be driven by extending PEMS robustness in the face of possible sensor failures. Providing reliable instrument fail-over procedures is the main objective of Sensor Validation (SV) strategies. In this work, the capabilities of a class of machine learning algorithms are presented, showing results based on tests performed on actual field data gathered at a fluid catalytic cracking unit.
15

Mohd Ghazali, Mohamad Hazwan, and Wan Rahiman. "Vibration Analysis for Machine Monitoring and Diagnosis: A Systematic Review." Shock and Vibration 2021 (September 10, 2021): 1–25. http://dx.doi.org/10.1155/2021/9469318.

Abstract:
Untimely machinery breakdown incurs significant losses, especially for manufacturing companies, as it affects production rates. During operation, machines generate vibrations, and unwanted vibrations can disrupt the machine system, resulting in faults such as imbalance, wear, and misalignment. Vibration analysis has thus become an effective method to monitor the health and performance of a machine. The vibration signatures of machines contain important information regarding the machine condition, such as the source of a failure and its severity, and they provide operators with an early warning for scheduled maintenance. Numerous approaches for analyzing machinery vibration data have been proposed over the years, each with its own characteristics, advantages, and disadvantages. This manuscript presents a systematic review of up-to-date vibration analysis for machine monitoring and diagnosis, covering data acquisition (the instruments applied, such as analyzers and sensors), feature extraction, and fault recognition techniques using artificial intelligence (AI). Several research questions (RQs) are addressed. A combination of time-domain statistical features and deep learning approaches is expected to be widely applied in the future, where fault features can be automatically extracted from raw vibration signals. The presence of various sensors and communication devices in emerging smart machines will present a new and substantial challenge in vibration monitoring and diagnosis.
16

Apridiansyah, Yovi Apri, and Pahrizal Pahrizal. "PENGENALAN ALAT MUSIK TRADISIONAL BENGKULU (DOL) DIGITAL BERBASIS ANDROID." Journal of Technopreneurship and Information System (JTIS) 2, no. 1 (March 5, 2019): 12–17. http://dx.doi.org/10.36085/jtis.v2i1.179.

Abstract:
Indonesia possesses a highly diverse wealth of art and culture; from Sabang to Merauke, a variety of arts and cultures have been handed down from generation to generation. The dol is a traditional musical instrument that is played by striking; the version considered here is based on electroacoustic technology, or digital methods, and its sound is heard through an amplifier and loudspeaker. In terms of sound quality, the electronic dol is practically indistinguishable from an ordinary dol. Against this background, the problem addressed is how to build an Android-based digital introduction to the traditional musical instrument (dol). The objective is to add to the learning of the virtual dol with Android so that it becomes more interactive. The limitations of this application are that it is not a 3D mobile application, and the dol and tasa sound recordings still use ordinary recording.
Keywords: Application, Music, Dol, Android
17

Giri, Chaitanya, Henderson James Cleaves, Markus Meringer, and Kuhan Chandru. "The Post-COVID-19 Era: Interdisciplinary Demands of Contagion Surveillance Mass Spectrometry for Future Pandemics." Sustainability 13, no. 14 (July 7, 2021): 7614. http://dx.doi.org/10.3390/su13147614.

Abstract:
Mass spectrometry (MS) can become a potentially useful instrument type for aerosol, droplet and fomite (ADF) contagion surveillance in pandemic outbreaks, such as the ongoing SARS-CoV-2 pandemic. However, this will require development of detection protocols and purposing of instrumentation for in situ environmental contagion surveillance. These approaches include: (1) enhancing biomarker detection by pattern recognition and machine learning; (2) the need for investigating viral degradation induced by environmental factors; (3) representing viral molecular data with multidimensional data transforms, such as van Krevelen diagrams, that can be repurposed to detect viable viruses in environmental samples; and (4) absorbing engineering attributes for developing contagion surveillance MS from those used for astrobiology and chemical, biological, radiological, nuclear (CBRN) monitoring applications. Widespread deployment of such an MS-based contagion surveillance could help identify hot zones, create containment perimeters around them and assist in preventing the endemic-to-pandemic progression of contagious diseases.
18

Jakubik, Jan, and Halina Kwaśnicka. "Similarity-Based Summarization of Music Files for Support Vector Machines." Complexity 2018 (August 1, 2018): 1–10. http://dx.doi.org/10.1155/2018/1935938.

Abstract:
Automatic retrieval of music information is an active area of research in which problems such as automatically assigning genres or descriptors of emotional content to music emerge. Recent advancements in the area rely on the use of deep learning, which allows researchers to operate on a low-level description of the music. Deep neural network architectures can learn to build feature representations that summarize music files from the data itself, rather than from expert knowledge. In this paper, a novel approach to applying feature learning in combination with support vector machines to musical data is presented. A spectrogram of the music file, which is too complex to be processed by an SVM, is first reduced to a compact representation by a recurrent neural network. An adjustment to the loss function of the network is proposed so that the network learns to build a representation space that replicates a certain notion of similarity between annotations, rather than to explicitly make predictions. We evaluate the approach on five datasets, focusing on emotion recognition and complementing it with genre classification. In experiments, the proposed loss function adjustment is shown to improve results in classification and regression tasks, but only when the learned similarity notion corresponds to a kernel function employed within the SVM. These results suggest that adjusting deep learning methods to build data representations that target a specific classifier or regressor can open up new perspectives for the use of standard machine learning methods in the music domain.
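One way to read the proposed adjustment is as a loss that matches pairwise similarities of the learned summaries to pairwise similarities of the annotations, so that the representation mirrors the RBF kernel later used by the SVM. The PyTorch sketch below is our hedged interpretation, not the authors' exact formulation.

```python
# Hedged interpretation (not the authors' exact loss): match RBF similarities
# of learned summaries to RBF similarities of the annotations, so the space
# suits an RBF-kernel SVM. gamma is an assumed hyperparameter.
import torch

def similarity_matching_loss(z, y, gamma=1.0):
    # z: (batch, dim) RNN summaries; y: (batch, k) annotation vectors
    y = y.float()
    k_z = torch.exp(-gamma * torch.cdist(z, z) ** 2)
    k_y = torch.exp(-gamma * torch.cdist(y, y) ** 2)
    return ((k_z - k_y) ** 2).mean()
```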
19

van Vugt, F. T., and D. J. Ostry. "Early stages of sensorimotor map acquisition: learning with free exploration, without active movement or global structure." Journal of Neurophysiology 122, no. 4 (October 1, 2019): 1708–20. http://dx.doi.org/10.1152/jn.00429.2019.

Abstract:
One of the puzzles of learning to talk or to play a musical instrument is how we learn which movement produces a particular sound: an audiomotor map. The initial stages of map acquisition can be studied by having participants learn arm movements to auditory targets. The key question is what mechanism drives this early learning. Three learning processes from previous literature were tested: map learning may rely on active motor outflow (to a target), on error correction, and on the correspondence between sensory and motor distances (i.e., similar movements map to similar sounds). Alternatively, we hypothesized that map learning can proceed without these. Participants made movements that were mapped to sounds in a number of different conditions, each of which precluded one of the potential learning processes. We tested whether map learning relies on assumptions about topological continuity by exposing participants to a permuted map that did not preserve distances in auditory and motor space. Further groups passively experienced the targets, kinematic trajectories produced by a robot arm, and the auditory feedback of a yoked active participant (hence without active motor outflow). Another group made movements without receiving targets (thus without experiencing errors). In each case we observed substantial learning; therefore, none of the three hypothesized processes is required. Instead, early map acquisition can occur with free exploration without target error correction, can be based on sensory-to-sensory correspondences, and is possible even for discontinuous maps. The findings are consistent with the idea that early sensorimotor map formation can involve instance-specific learning. NEW & NOTEWORTHY This study tested learning of novel sensorimotor maps in a variety of unusual circumstances, including a mapping permuted in such a way that it fragmented the sensorimotor workspace into discontinuous parts, thus not preserving sensory and motor topology. Participants could learn this mapping, and they could learn without motor outflow or targets. These results point to a robust learning mechanism building on individual instances, inspired by the machine learning literature.
20

Haining, Kate, Gina Brunner, Ruchika Gajwani, Joachim Gross, Andrew Gumley, Stephen Lawrie, Matthias Schwannauer, Frauke Schultze-Lutter, and Peter Uhlhaas. "S64. COGNITIVE IMPAIRMENTS AND PREDICTION OF FUNCTIONAL OUTCOME IN INDIVIDUALS AT CLINICAL HIGH-RISK FOR PSYCHOSIS." Schizophrenia Bulletin 46, Supplement_1 (April 2020): S57—S58. http://dx.doi.org/10.1093/schbul/sbaa031.130.

Abstract:
Background: Research in individuals at clinical high-risk for psychosis (CHR-P) has focused on developing algorithms to predict transition to psychosis. However, it is becoming increasingly important to address other outcomes, such as the level of functioning of CHR-P participants. To address this important question, this study investigated the relationship between baseline cognitive performance and functional outcome at 6–12 months in a sample of CHR-P individuals, using a machine-learning approach to identify features that are predictive of long-term functional impairments.
Methods: Data were available for 111 CHR-P individuals at 6–12 months follow-up. In addition, 47 CHR-negative (CHR-N) participants who did not meet CHR criteria and 55 healthy controls (HCs) were recruited. CHR-P status was assessed using the Comprehensive Assessment of At-Risk Mental States (CAARMS) and the Schizophrenia Proneness Instrument, Adult version (SPI-A). Cognitive assessments included the Brief Assessment of Cognition in Schizophrenia (BACS) and the Penn Computerized Neurocognitive Battery (CNB). Global, social and role functioning scales were used to measure functional status. CHR-P individuals were divided into good functional outcome (GFO, GAF ≥ 65) and poor functional outcome (PFO, GAF < 65) groups. Feature selection was performed using LASSO regression with the LARS algorithm and 10-fold cross-validation, with GAF scores at baseline as the outcome variable. The following features were identified as predictors of GAF scores at baseline: verbal memory, verbal fluency, attention, emotion recognition, social and role functioning, and SPI-A distress. This model explained 47% of the variance in baseline GAF scores. Support Vector Machine (SVM), Linear Discriminant Analysis (LDA), Logistic Regression (LR), Gaussian Naïve Bayes (GNB), and Random Forest (RF) classifiers with 10-fold cross-validation were then trained on those features, with GAF category at follow-up as the binary label. Models were compared using a calculated score incorporating area under the curve (AUC), accuracy, and AUC consistency across runs, whereby AUC was given a higher weighting than accuracy due to class imbalance.
Results: CHR-P individuals had slower motor speed, reduced attention and processing speed, and increased emotion recognition reaction times (RTs) compared with HCs, and reduced attention and processing speed compared with CHR-Ns. At follow-up, 66% of CHR-P individuals had PFO. LDA emerged as the strongest classifier, showing a mean AUC of 0.75 (SD = 0.15), indicating acceptable classification performance for GAF category at follow-up. PFO was detected with a sensitivity of 75% and a specificity of 58%, with a total mean weighted accuracy of 68%.
Discussion: The CHR-P state was associated with significant impairments in cognition, highlighting the importance of interventions such as cognitive remediation in this population. Our data suggest that the development of features using machine learning approaches is effective in predicting functional outcomes in CHR-P individuals. Greater levels of accuracy, sensitivity and specificity might be achieved by increasing training sets and validating the classifier with external datasets. Indeed, machine learning methods have potential given that trained classifiers can easily be shared online, thus enabling clinical professionals to make individualised predictions.
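The two-stage pipeline described (LASSO/LARS feature selection against baseline GAF, then cross-validated classification of follow-up outcome) can be sketched with scikit-learn on synthetic placeholder data; this is an illustration, not the study's code.

```python
# Sketch of the described two-stage pipeline on synthetic placeholders:
# LASSO (LARS) selects features predictive of baseline GAF, then LDA is
# cross-validated on those features for the follow-up outcome category.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.linear_model import LassoLarsCV
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.normal(size=(111, 12))                      # cognitive/clinical features
gaf = X[:, :4] @ np.ones(4) + rng.normal(size=111)  # stand-in baseline GAF
outcome = (gaf < np.median(gaf)).astype(int)        # stand-in PFO/GFO label

selected = np.flatnonzero(LassoLarsCV(cv=10).fit(X, gaf).coef_)
auc = cross_val_score(LinearDiscriminantAnalysis(), X[:, selected], outcome,
                      cv=10, scoring="roc_auc")
print(f"mean AUC: {auc.mean():.2f}")
```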
21

Taft, Teresa, Charlene Weir, Heidi Kramer, and Julio Facelli. "2444 Development of an instrument to identify factors influencing point of care recruitment in primary care settings: A pilot study at University of Utah Health." Journal of Clinical and Translational Science 2, S1 (June 2018): 40–41. http://dx.doi.org/10.1017/cts.2018.162.

Abstract:
OBJECTIVES/SPECIFIC AIMS: Electronic health records have become the fulcrum for efforts by institutions to reduce errors, improve safety, reduce cost, and improve compliance with recommended guidelines. In recent times they have also been considered a potential game changer for improving patient recruitment for clinical trials (CT). Although the use of CDS for clinical care is partially understood, its use for CT patient identification and recruitment is young, and a great deal of experimental and theoretical research is needed in this area to optimize the use of CDS tools that personalize patient care by identifying relevant clinical trials and other research interventions. The use of CDS tools for CT recruitment offers many possibilities, but some initial usage has been disappointing. This may not be surprising because, while the implementation of these interventions is somewhat simple, ensuring that they are embedded into the right point of the care provider's workflow is highly complex and may affect many actors in a clinical care setting, including patients, nurses, physicians, clinical coordinators, and investigators. Overcoming the challenges of alerting providers regarding their patients' eligibility for clinical trials is important and difficult. Translating that effort into effective recruitment will require understanding the psychological and workflow barriers and facilitators that shape how providers respond to automated alerts requesting patient referrals. Evidence from using CDS for clinical care shows that alerts become increasingly ignored over time or with more exposure (1, 2). The features, timing, and method of these alerts are important usability factors that may influence the effectiveness of the referral process. Focus group methods capture the shared perspectives of a phenomenon and have been shown to be an effective method for identifying perceptions, attitudes, information needs, and other human factors affecting workflow (3, 4). Our objective was to develop a generalizable method for measuring physician- and clinic-level factors that define a successful point-of-care recruitment program in an outpatient care setting. To achieve this we attempted to (a) characterize providers' attitudes regarding CT referrals and research, (b) identify perceived workflow strategies and facilitators relevant to CT recruitment in primary care, and (c) develop and test a pilot instrument.
METHODS/STUDY POPULATION: The methods had three phases: focus groups, development of an item pool, and tool development. Focus group topics were developed by four experienced investigators, with training in biomedical informatics, cognitive psychology, human factors, and workflow analysis, based upon knowledge of the literature. A script was developed, and the methods were piloted with a group of four clinicians. In all, 16 primary care providers, 5 clinic directors, and 6 staff supervisors participated in 6 focus groups, with an average of 5 participants each, to discuss clinical trial recruitment at the point of care. Focus groups were conducted by the development team. Audio recordings were content-coded and analyzed to identify themes by consensus of three authors. Item pool generation involved extracting items identified in the focus group analysis, selecting a subset deemed most interesting based on knowledge of the recruitment literature, and iteratively writing and refining questions. Instrument development consisted of piloting an initial 7-item questionnaire with a local primary care provider sample. Questions were correlated with the item pool and limited, to reduce provider burden, to those that the study team deemed most applicable to information technology-supported recruitment. Descriptive statistical analysis was performed on the pilot survey results. An online survey was developed based on the findings of the focus groups and emailed to 127 primary care providers, who were invited to participate. In total, 36 questionnaires were completed. This study was approved by the University of Utah Institutional Review Board.
RESULTS/ANTICIPATED RESULTS: The results are organized into three sections: (a) focus groups, (b) item generation, and (c) questionnaire pilot.
(a) Focus groups. Themes identified through qualitative review are presented below with illustrative comments from participants.
(1) The diversity of attitudes and willingness to support clinical trial recruitment varied so substantially that no single pattern emerged. Attitudes ranged from enthusiastic support, to interest in some trials, to disinterest or distrust of trials in general. Compensation for time spent (which could be monetary, informational, or through professional recognition), the provider's relationship with the study team or pre-selection of specific trials by a clinic oversight committee, and importance to the provider's practice positively affected willingness to help recruit. "I would love to get people into clinical trials as much as possible... If it works for them you are going to help a whole lot of other people." "If we felt like we have done every possible thing that was already established as evidence-based and it didn't work out, then we would consider the trials." "I think that studies are more beneficial for specific specialists... There might be a whole slew of things that I never deal with or don't care about because it's not prevalent for my patient population." "Local and reputable... A long distance someone asking to do something is just not the same as someone in the trenches with you." "The bottom line is how much work is involved at our end and if there is going to be any compensation for that." "I think also the providers would like have feedback on what they referred them to. And how did it go? So did we pick the right patient? ... It helps us to know, did they even sign up for the study?" "Getting your name on a research paper would be nice too."
(2) Lack of information regarding trials reduced support for recruiting patients. Providers stated that they do not know how to quickly find information about studies, nor do they have time to find it, and therefore cannot efficiently counsel patients regarding trial participation. Notifications deemed important included: a trial coordinator's intention to recruit patients, enrollment of a patient in a clinical drug trial, trial progress and result updates, and reports of the effectiveness of the provider's recruitment efforts. Perceived information needs regarding trials that providers refer patients to included: trial purpose, design, benefits and risks, potential side effects, intervention details, medication class (mechanism of action), drug interactions with the study drug, study timeline, coordinator contact information, a link to print patient handouts, enrollment instructions, and a link to the study website. "It's just we don't know any of the information ... and it can't take any of our time. ... I don't have time to research it." "Sometimes the patients ask me questions about it and I would like to be in a position where I have some information about it before I am asked." "It would be nice to be notified if they [my patients] are enrolled in the trial, when it turns into actual recruitment." "I do like to know if they're in [a trial] so that when they come in for problems, I at least know that they might be on a study medication so I can be safe. I'll get an ER message, 'The patient got admitted. Their blood pressure's, you know, tanked, because they're on a study drug I didn't know anything about.' ... if there's certain side effects that I need to be watching out for." "It would also be good to have a contact person from the study in case we need to notify them of 'this person's possibly having an adverse event. Look into it more.'"
(3) Provider burden associated with patient recruitment appeared to be a deterrent. These burdens included adding to the provider's task list, increasing the time required to complete a visit, and usurpation of control over the patient's care plan, with the associated effect on provider quality scores. "We don't have time. I mean, we don't even take a lunch break. I have 15 minutes and now this is taking this many minutes away from my 15 minutes." "I am just sick of extra work. We already have so much extra work. It's just more stuff to do. We are maxed out on stuff to do." "Right now, part of our compensation depends on having our patients' A1Cs controlled. And so if we're taking a chance that maybe they're getting a medicine, maybe they're not, maybe it'll help, maybe it won't, its gonna further delay our ability to get paid." "'I'm not going to let you go mess up my patient and I'm going to have to deal with the consequences' is kind of the way they think." "If you're going to put the patient in a study, being able [to] drop them from our registry so we don't get penalized for a negative outcome [is important]."
(4) Patients' needs were a priority among factors influencing likelihood to help recruit patients. Providers considered important the perceived benefit or risk to the patient, such as additional healthcare services, increased monitoring, financial assistance, or access to new treatments when other options have been ineffective; as well as continuance of established care that has proven effective, and ethical recruitment that addresses language and mental health to ensure that patients can make decisions regarding study participation. "If there's something great that's gonna benefit a patient, I would definitely wanna know about it to give them that option. You know that's what we wanna try to do is make our patients better." "Someone who is really well controlled and doing well, I would not tend to put them toward the study. Just keep going with what's working right now." "Sometimes there's financial incentives for them to participate, so you know, if its a good fit its easy to at least offer that to the patient. They get treatment maybe that they can't afford." "You don't want to be seen as somebody who's forcing a patient... if their provider is telling them this is a good idea you are more likely to get your patient to do it." "I think they have to understand what a clinical trial is, first of all, in that it's a trial. Right? We're trying to figure out if a certain treatment is good or not. It may not work. It may work." "With many patients, they don't only have medical problems, but significant mental illness that sometimes interferes a lot with just our treatment of them here for their clinical problems. And so, that probably would interfere with someone's ability to understand and consent to a trial." "And the patients have the right to make that choice. I don't need to be—I don't mind influencing them on things I know about, I think are invaluable, but I don't need to be a barrier to them."
(5) Perceived responsibility in trial recruitment varied substantially, from no involvement at all, to prescreening, counseling, or recruiting patients. Some providers felt that they should have the right to say "no" to recruitment of their patients, while others believed prescreening was an unnecessary burden outside their role as a primary care provider. "If someone prescreens and thinks its appropriate and gives me that judgment call to say, do you think it would be a good fit? I think one of them, they sent, and I said, Oh, I don't think it would be a good fit because of this... So that would be fine." "I don't think I need to be a gatekeeper for studies. I mean, if there's people that qualify for a study, and there's a great study that's been approved, and they can recruit them without me knowing, that doesn't bother me in the slightest." "I liked how it was—I could do a simple referral ... someone else figured out the qualifications." "If we knew of ongoing studies and if we thought a certain patient may qualify for a certain study, we just contact the coordinator, and then they just take care of the rest." "I think that appropriate ... from our perspective, would be, 'Are you interested?' 'This is the number for a person who can sit with you, talk with you about a trial, tell you everything about it, answer your questions, and then you can make a decision.'" "I'm not going to let you go mess up my patient and I'm going to have to deal with the consequences."
(6) A clinic-implementation approach that systemizes workflow, limits the number of trials providers are asked to recruit for, and minimizes provider time burden is needed. Suggested methods for informing providers of patients' clinical trial eligibility included: email, alerts, in-basket messages, texts, phone calls, and in-person contact. "People are so sick of change, change, change, change ... if there's no stability whatsoever, then people get frustrated and start to burn out." "Having my staff remember how to do it correctly and I remember what studies we have going ... it becomes somewhat of a burden... it's hard for us to remember as we are flying through our day." "There just needs to be a clear understanding with those roles... Who does the patient call? We don't want to look like we don't know what we are doing." "There probably should be a selection committee put together from various people who have stakes in the community, at least who can say, 'This would be applicable for xx clinic.'"
(7) Provider suggestions. Providers had multiple suggestions regarding notification methods.
(b) Item generation. The specific items were constructed from a literature review on physicians' attitudes and from the focus group results. The overarching concerns were readability, brief questionnaire size, and relevance. A large item pool was constructed and then reduced through piloting.
(c) Questionnaire pilot. The 7-item pilot questionnaire was completed by 36 physicians (28% response rate); the empirical results are reported here.
DISCUSSION/SIGNIFICANCE OF IMPACT: Relevance of methods. Overall, the described methods for determining the components of a recruitment program in primary care show early promise. The focus groups, which consisted of providers, staff, and administrators, yielded insights into workflows, attitudes, and clinical processes. These insights varied significantly across clinics, supporting the need for an individualized clinic-based approach that meets local needs. During the course of the study, participants were willing to participate in all activities (although some requested payment). We were able to conduct the focus groups as scheduled and obtained the desired input. The analysis of the focus group transcripts was performed using iterative discussions and did not need any special adaptation for this area of study. The pilot survey response rate was within the expected range for this type of study. Focus groups can rapidly provide rich information regarding attitudes and other factors affecting provider participation at the point of care. However, findings from focus groups must always be confirmed through larger studies. It is important to keep the focus groups small and to hold multiple focus groups to offset the more vocal participants who may influence the comments of others. This study shows that, using our 3-step approach, it is possible to gather important information on clinicians' and staff's perceptions of, and needs for, participating in point-of-care patient recruitment for CTs. The focus groups also provide an important step for survey construction. Designing surveys empirically requires multiple validation efforts, which will be conducted in the future. However, we can draw preliminary conclusions from the results of the pilot study, which are quite informative and are discussed below. Near-future work will expand the response rate through additional local surveys and conduct formal psychometric testing and validation both locally and nationally. A final validation will be proposed through the CTSA consortiums. Variation in responses. There was a lack of normal curves in our survey results. This points to the need to target education and recruitment efforts by provider type (grouping providers with similar perspectives); identification of these types would be useful. Some specific points regarding variability should be considered in program design. Preferences for trial recruitment methods. Many trial recruitment notification methods have the potential to be successful when used judiciously and done well, particularly if the trial coordinator/provider relationship is supported by reciprocal benefits to the provider. Consistency in workflow seems paramount to success. Providers can pull some notifications at a time they choose, while other notifications interrupt and must be used sparingly. Some allow review of multiple patients at the same time, and some foster easy access to the patient's medical record. Conclusions. The authors recommend that recruitment HIT be customizable at the clinic and provider level, by responsibility and interest, to allow selection of the level of information, the delivery method (i.e., email, text, in-basket, alert, dashboard, mail), the frequency of notification, and an opt-out feature. These customizable options will allow better support of clinic workflow and goals. There is the potential with machine learning technology to monitor provider interactions with trial notifications and for the system to automatically adjust the method and level that best supports each physician. Limitations: The major limitation is the focus on one site only and one delivery system (university-based). The low response rate makes generalization difficult; efforts to improve the rate are underway. Many populations are under-represented in Utah. A full psychometric analysis was not conducted but will be part of the final project.
22

"Different Machine Learning Classifiers for Music Emotion Recognition." International Journal of Recent Technology and Engineering 8, no. 4 (November 30, 2019): 2187–91. http://dx.doi.org/10.35940/ijrte.d7833.118419.

Abstract:
Music is an essential part of life, and the emotion carried by it is key to its perception and usage. Music Emotion Recognition (MER) is the task of identifying the emotion in musical tracks and classifying them accordingly. The objective of this paper is to check the effectiveness of popular machine learning classifiers, namely XGBoost, Random Forest, Decision Trees, Support Vector Machine (SVM), K-Nearest-Neighbour (KNN), and Gaussian Naive Bayes, on the task of MER. Using the MIREX-like dataset [17] to test these classifiers, the effects of oversampling algorithms such as the Synthetic Minority Oversampling Technique (SMOTE) [22] and Random Oversampling (ROS) were also verified. In all, the Gaussian Naive Bayes classifier gave the maximum accuracy of 40.33%; the other classifiers gave accuracies between 20.44% and 38.67%. Thus, a limit on classification accuracy has been reached using these classifiers together with traditional musical or statistical metrics derived from the music as input features. In view of this, deep learning-based approaches using Convolutional Neural Networks (CNNs) [13] and spectrograms of the music clips are a promising alternative for MER.
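The evaluation the paper describes maps onto a short scikit-learn/imbalanced-learn sketch: several classifiers trained on pre-extracted features, with SMOTE applied to the training split only. The features and labels below are synthetic placeholders, not the MIREX-like dataset.

```python
# Hedged sketch of the comparison described: multiple classifiers on
# placeholder feature vectors, SMOTE oversampling on the training split only.
import numpy as np
from imblearn.over_sampling import SMOTE
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(42)
X = rng.normal(size=(300, 20))            # placeholder musical features
y = rng.integers(0, 5, size=300)          # five emotion clusters, MIREX-like

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
X_tr, y_tr = SMOTE(random_state=0).fit_resample(X_tr, y_tr)

for clf in (GaussianNB(), RandomForestClassifier(random_state=0),
            SVC(), KNeighborsClassifier()):
    clf.fit(X_tr, y_tr)
    print(type(clf).__name__, round(clf.score(X_te, y_te), 3))
```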
23

Li, Huizi. "Piano Education of Children Using Musical Instrument Recognition and Deep Learning Technologies Under the Educational Psychology." Frontiers in Psychology 12 (September 16, 2021). http://dx.doi.org/10.3389/fpsyg.2021.705116.

Abstract:
The objective of the study was to enhance the quality of traditional preschool piano education. Deep Learning (DL) technology is applied to the piano education of children to improve their interest in learning music. Firstly, the problems of traditional piano education of children are analyzed, with the teaching patterns discussed under educational psychology, and a targeted music education plan is established. Secondly, musical instrument recognition technology is introduced, and a musical instrument recognition model is implemented based on DL. Thirdly, the proposed model is applied to the piano education of children to guide the music learning of students and improve their interest in piano learning; the feature recognition and acquisition of the proposed model are improved. Finally, the different teaching patterns are comparatively analyzed through a Questionnaire Survey (QS). The experimental results show that the instrument recognition accuracy of the Hybrid Neural Network (HNN) is 97.2%, and, as the number of iterations increases, the recognition error rate of the model decreases and stabilizes. Therefore, the proposed HNN based on DL for musical instrument recognition can accurately identify musical features. The QS results show that introducing musical instrument recognition technology into the piano education of children can improve their interest in piano learning, and establishing piano education patterns based on this model can improve the effectiveness of teaching piano to students. This research provides a reference for the intelligentization of children's piano education.
24

Ke, Jiangyan, Rongchuan Lin, and Ashutosh Sharma. "An Automatic Instrument Recognition Approach Based on Deep Convolutional Neural Network." Recent Advances in Electrical & Electronic Engineering (Formerly Recent Patents on Electrical & Electronic Engineering) 14 (March 22, 2021). http://dx.doi.org/10.2174/2352096514666210322155008.

Abstract:
Background: This paper presents an automatic instrument recognition method highlighting the deep learning aspect of instrument identification, in order to advance the automatic process of remote video monitoring of substation equipment.
Methodology: This work utilizes the Scale Invariant Feature Transform (SIFT) approach and a Gaussian difference model for instrument positioning, while proposing a design scheme for an instrument identification system.
Results: The experimental outcomes prove that the proposed system is capable of automatic recognition with a modest graphical interface and of learning independently, improving the operational effectiveness of the equipment and realizing the purpose of autonomous self-checking. The proposed approach is applicable to musical instrument recognition, providing an accuracy rate of 92%, a precision of 87.5%, and a recall of 91.2%.
Conclusion: A comparative analysis with other state-of-the-art methods shows that the proposed deep learning-based music recognition method outperforms the existing approaches in terms of accuracy, thereby providing a practicable musical instrument recognition solution.
25

Xu, Liang, Xin Wen, Jiaming Shi, Shutong Li, Yuhan Xiao, Qun Wan, and Xiuying Qian. "Effects of individual factors on perceived emotion and felt emotion of music: Based on machine learning methods." Psychology of Music, July 2, 2020, 030573562092842. http://dx.doi.org/10.1177/0305735620928422.

Full text
Abstract:
Music emotion information is widely used in music information retrieval, music recommendation, music therapy, and so forth. In the field of music emotion recognition (MER), computer scientists extract musical features to identify musical emotions, but this method ignores listeners’ individual differences. Applying machine learning methods, this study modelled the relations among audio features, individual factors, and music emotions. We used audio features and individual features as inputs to predict the perceived emotion and felt emotion of music, respectively. The results show that real-time individual features (e.g., preference for the target music and mechanism indices) can significantly improve the models’ performance, whereas stable individual features (e.g., sex, music experience, and personality) have no effect. Compared with the recognition models of perceived emotions, individual features have greater effects on the recognition models of felt emotions.
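The modelling idea, concatenating audio descriptors with per-listener features to predict emotion ratings, can be sketched as below; the feature names, synthetic data, rating scale, and choice of a random-forest regressor are illustrative assumptions, not the authors' pipeline.

```python
# A sketch of emotion prediction from combined audio + individual features.
# Data here are random placeholders; real inputs would be extracted features
# and listener questionnaire responses.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 500
audio = rng.normal(size=(n, 20))     # e.g., tempo, mode, MFCC statistics
listener = rng.normal(size=(n, 5))   # e.g., preference rating, mechanism indices
X = np.hstack([audio, listener])     # one combined feature vector per (clip, listener)
y = rng.uniform(1, 9, size=n)        # hypothetical felt-valence ratings on a 1-9 scale

model = RandomForestRegressor(n_estimators=200, random_state=0)
print("CV R^2:", cross_val_score(model, X, y, cv=5).mean())
```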
APA, Harvard, Vancouver, ISO, and other styles
26

Taweewat, Pat. "Detection of a Specific Musical Instrument Note Playing in Polyphonic Mixtures by Extreme Learning Machine and Particle Swarm Optimization." International Journal of Information and Electronics Engineering, 2012. http://dx.doi.org/10.7763/ijiee.2012.v2.198.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Price-Mohr, Ruth, and Colin Price. "Learning to Play the Piano Whilst Reading Music: Short-Term School-Based Piano Instruction Improves Memory and Word Recognition in Children." International Journal of Early Childhood, September 2, 2021. http://dx.doi.org/10.1007/s13158-021-00297-5.

Full text
Abstract:
There is a substantial body of evidence demonstrating links between language and music, and between music and improved cognitive ability, particularly with regard to verbal and working memory, in both adults and children. However, studies often mix the type of musical training, the instrument used, and whether musical notation is taught. The research reported here uses a randomised controlled trial with 32 novice children, aged seven, learning to play the piano with both hands whilst reading music notation. The intervention was conducted in a school setting, with each child receiving four hours of instruction in total. Results confirm previous findings that short-term music instruction improves working memory. Results also demonstrated that children with this musical training outperformed controls on a word identification measure. Overall, the results show evidence for a causal relationship between music learning and improvements in verbal skills. The significant differences occurred after only one term of instruction and were stable 3 months post-intervention.
APA, Harvard, Vancouver, ISO, and other styles
28

Aribisala, Benjamin, Obaro Olori, and Patrick Owate. "Emotion Recognition Using Ensemble Bagged Tree Classifier and Electroencephalogram Signals." JOURNAL OF RESEARCH AND REVIEW IN SCIENCE 5, no. 1 (December 1, 2018). http://dx.doi.org/10.36108/jrrslasu/8102/50(0141).

Full text
Abstract:
Introduction: Emotion plays a key role in our daily life and work, especially in decision making, as people's moods can influence their mode of communication, behaviour, and productivity. Emotion recognition has attracted some research work, and medical imaging technology offers tools for emotion classification. Aims: The aim of this work is to develop a machine learning technique for recognizing emotion based on Electroencephalogram (EEG) data. Materials and Methods: Experimentation was based on the publicly available Dataset for Emotion Analysis using Physiological signals (DEAP). The data comprise EEG signals acquired from thirty-two adults while watching 40 different one-minute music video clips. Participants rated each video in terms of four emotional states, namely arousal, valence, like/dislike, and dominance. We extracted features from the dataset using Discrete Wavelet Transforms, computing wavelet energy, wavelet entropy, and standard deviation. We then classified the extracted features into four emotional states, namely High Valence/High Arousal, High Valence/Low Arousal, Low Valence/High Arousal, and Low Valence/Low Arousal, using Ensemble Bagged Trees. Results: Ensemble Bagged Trees gave sensitivity, specificity, and accuracy of 97.54%, 99.21%, and 97.80% respectively; Support Vector Machine and Ensemble Boosted Trees gave similar results. Conclusion: Our results showed that machine learning classification of emotion using EEG data is very promising. This can help in the treatment of patients with expression problems, such as Amyotrophic Lateral Sclerosis, a muscle disease: knowing the real emotional state of patients will help doctors to provide appropriate medical care. Keywords: Electroencephalogram, Emotion Recognition, Ensemble Classification, Ensemble Bagged Trees, Machine Learning
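A minimal sketch of the wavelet-feature stage the abstract describes, assuming PyWavelets with a db4 wavelet at decomposition level 4 (the abstract does not name the wavelet family or level, so those are assumptions):

```python
# Discrete wavelet decomposition of one EEG channel, reduced to energy,
# entropy, and standard deviation per sub-band.
import numpy as np
import pywt

def dwt_features(signal, wavelet="db4", level=4):
    feats = []
    for coeffs in pywt.wavedec(signal, wavelet, level=level):
        energy = np.sum(coeffs ** 2)
        p = coeffs ** 2 / (energy + 1e-12)           # normalised coefficient power
        entropy = -np.sum(p * np.log2(p + 1e-12))    # wavelet (Shannon) entropy
        feats.extend([energy, entropy, np.std(coeffs)])
    return np.array(feats)

eeg_channel = np.random.randn(8064)      # DEAP trials: 63 s sampled at 128 Hz
print(dwt_features(eeg_channel).shape)   # 3 features per sub-band (5 sub-bands at level 4)
```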
APA, Harvard, Vancouver, ISO, and other styles
29

"The dynamics of computer music." Organised Sound 1, no. 1 (April 1996): 1–2. http://dx.doi.org/10.1017/s1355771896000118.

Full text
Abstract:
'Organised sound' - the term coined by Edgard Varèse for a new definition of musical constructivism - denotes for our increasingly technologically dominated culture an urge towards the recognition of the human impulse behind the 'system'. Such is the diversity of activity in today's computer music that we need to maintain a balance between technological advances and musically creative and scholarly endeavour, at all levels of an essentially educative process. The model of 'life-long learning' makes a special kind of sense when we can explore our musical creativity in partnership with the computer, a machine now capable of sophisticated response from a humanly embedded intelligence.
APA, Harvard, Vancouver, ISO, and other styles
30

Cai, Rui-rui. "Exploration and Application of Machine Learning Technology Based on Swarm Intelligence Optimization." Recent Patents on Engineering 13 (December 2, 2019). http://dx.doi.org/10.2174/1872212113666191202144754.

Full text
Abstract:
Background: Intelligent machinery and equipment play an important role in the development and growth of agriculture. Various researchers have contributed studies and patents to aid smart agriculture, and the author's review finds that machine learning technologies provide the strongest support for this growth. Method: Machine learning technology and machine learning algorithms, mostly based on swarm intelligence optimization, are explored and their applications studied. An optimized V3CFOA-RF model is built through V3CFOA. The algorithm is tested on a data set of rice pest observations, then analysed and compared in detail with other existing algorithms. Results: The research results show that the proposed model and algorithm are not only more accurate in recognition and prediction but also solve the time-lag problem to a degree. Conclusion: The model and algorithm achieve higher accuracy in crop pest prediction, which ensures a more stable and higher output of rice. They can therefore be employed as an important decision-making instrument in the agricultural production sector.
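The abstract names V3CFOA (a fruit-fly optimization variant) without giving its details, so the sketch below substitutes a toy swarm-style search over two random-forest hyperparameters, scored by cross-validation on synthetic stand-in data. It illustrates only the general swarm-optimised-RF idea, not the paper's algorithm.

```python
# Toy fruit-fly-style search: candidate hyperparameter points scatter around
# the best location found so far; cross-validation accuracy is the "smell".
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=300, n_features=12, random_state=0)  # stand-in data
rng = np.random.default_rng(0)

def fitness(n_estimators, max_depth):
    clf = RandomForestClassifier(n_estimators=int(n_estimators),
                                 max_depth=int(max_depth), random_state=0)
    return cross_val_score(clf, X, y, cv=3).mean()

best = (100, 5)                 # initial swarm location: (n_estimators, max_depth)
best_score = fitness(*best)
for _ in range(5):              # each iteration, flies scatter around the best spot
    candidates = best + rng.normal(scale=(30, 2), size=(6, 2))
    candidates = np.clip(candidates, (10, 2), (400, 20))
    for n_est, depth in candidates:
        score = fitness(n_est, depth)
        if score > best_score:
            best, best_score = (n_est, depth), score

print("best (n_estimators, max_depth):", best, "CV accuracy:", round(best_score, 3))
```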
APA, Harvard, Vancouver, ISO, and other styles
31

Lisena, Pasquale, Albert Meroño-Peñuela, and Raphaël Troncy. "MIDI2vec: Learning MIDI embeddings for reliable prediction of symbolic music metadata." Semantic Web, September 14, 2021, 1–21. http://dx.doi.org/10.3233/sw-210446.

Full text
Abstract:
An important problem in large symbolic music collections is the low availability of high-quality metadata, which is essential for various information retrieval tasks. Traditionally, systems have addressed this by relying either on costly human annotations or on rule-based systems at a limited scale. Recently, embedding strategies have been exploited for representing latent factors in graphs of connected nodes. In this work, we propose MIDI2vec, a new approach for representing MIDI files as vectors based on graph embedding techniques. Our strategy consists of representing the MIDI data as a graph, including the information about tempo, time signature, programs and notes. Next, we run and optimise node2vec for generating embeddings using random walks in the graph. We demonstrate that the resulting vectors can successfully be employed for predicting the musical genre and other metadata such as the composer, the instrument or the movement. In particular, we conduct experiments using those vectors as input to a feed-forward neural network and report prediction accuracy comparable to that of other approaches relying purely on symbolic music, while avoiding feature engineering and producing highly scalable and reusable models with low dimensionality. Our proposal has real-world applications in automated metadata tagging for symbolic music, for example in digital libraries for musicology, datasets for machine learning, and knowledge graph completion.
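A simplified re-creation of the pipeline on a toy graph: MIDI attributes become nodes, uniform random walks (to which node2vec reduces when p = q = 1) become "sentences", and skip-gram learns the node embeddings. The toy "MIDI files" and all hyperparameters here are assumptions, not the authors' implementation.

```python
# Encode MIDI attributes as graph nodes, walk the graph, embed with skip-gram.
import random
import networkx as nx
from gensim.models import Word2Vec

files = {  # hypothetical per-file attributes extracted from MIDI
    "song_a": {"tempo": "tempo_120", "program": "program_0",  "notes": ["C4", "E4", "G4"]},
    "song_b": {"tempo": "tempo_120", "program": "program_40", "notes": ["D4", "F4", "A4"]},
}

G = nx.Graph()
for fid, attrs in files.items():
    G.add_edge(fid, attrs["tempo"])
    G.add_edge(fid, attrs["program"])
    for note in attrs["notes"]:
        G.add_edge(fid, note)

random.seed(0)
walks = []
for _ in range(100):                  # uniform random walks from every node
    for start in G.nodes:
        walk, node = [start], start
        for _ in range(8):
            node = random.choice(list(G.neighbors(node)))
            walk.append(node)
        walks.append(walk)

model = Word2Vec(walks, vector_size=32, window=5, min_count=1, sg=1, seed=0)
print(model.wv["song_a"][:5])  # the file's embedding, usable for metadata prediction
```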
APA, Harvard, Vancouver, ISO, and other styles
32

Miller, Brian A. "“All of the Rules of Jazz”." Music Theory Online 26, no. 3 (September 2020). http://dx.doi.org/10.30535/mto.26.3.6.

Full text
Abstract:
Though improvising computer systems are hardly new, jazz has recently become the focus of a number of novel computer music projects aimed at convincingly improvising alongside humans, with a particular focus on the use of machine learning to imitate human styles. The attempt to implement a sort of Turing test for jazz, and interest from organizations like DARPA in the results, raises important questions about the nature of improvisation and musical style, but also about the ways jazz comes popularly to stand for such broad concepts as “conversation” or “democracy.” This essay explores these questions by considering robots that play straight-ahead neoclassical jazz alongside George Lewis’s free-improvising Voyager system, reading the technical details of such projects in terms of the ways they theorize the recognition and production of style, but also in terms of the political implications of human-computer musicking in an age of algorithmic surveillance and big data.
APA, Harvard, Vancouver, ISO, and other styles
33

Miller, Brian A. "“All of the Rules of Jazz”." Music Theory Online 26, no. 3 (September 2020). http://dx.doi.org/10.30535/mto.26.3.6.

Full text
Abstract:
Though improvising computer systems are hardly new, jazz has recently become the focus of a number of novel computer music projects aimed at convincingly improvising alongside humans, with a particular focus on the use of machine learning to imitate human styles. The attempt to implement a sort of Turing test for jazz, and interest from organizations like DARPA in the results, raises important questions about the nature of improvisation and musical style, but also about the ways jazz comes popularly to stand for such broad concepts as “conversation” or “democracy.” This essay explores these questions by considering robots that play straight-ahead neoclassical jazz alongside George Lewis’s free-improvising Voyager system, reading the technical details of such projects in terms of the ways they theorize the recognition and production of style, but also in terms of the political implications of human-computer musicking in an age of algorithmic surveillance and big data.
APA, Harvard, Vancouver, ISO, and other styles
34

Lostanlen, Vincent, Christian El-Hajj, Mathias Rossignol, Grégoire Lafay, Joakim Andén, and Mathieu Lagrange. "Time–frequency scattering accurately models auditory similarities between instrumental playing techniques." EURASIP Journal on Audio, Speech, and Music Processing 2021, no. 1 (January 11, 2021). http://dx.doi.org/10.1186/s13636-020-00187-z.

Full text
Abstract:
Instrumental playing techniques such as vibratos, glissandos, and trills often denote musical expressivity, both in classical and folk contexts. However, most existing approaches to music similarity retrieval fail to describe timbre beyond the so-called “ordinary” technique, use instrument identity as a proxy for timbre quality, and do not allow for customization to the perceptual idiosyncrasies of a new subject. In this article, we ask 31 human participants to organize 78 isolated notes into a set of timbre clusters. Analyzing their responses suggests that timbre perception operates within a more flexible taxonomy than those provided by instruments or playing techniques alone. In addition, we propose a machine listening model to recover the cluster graph of auditory similarities across instruments, mutes, and techniques. Our model relies on joint time–frequency scattering features to extract spectrotemporal modulations as acoustic features. Furthermore, it minimizes triplet loss in the cluster graph by means of the large-margin nearest neighbor (LMNN) metric learning algorithm. Over a dataset of 9346 isolated notes, we report a state-of-the-art average precision at rank five (AP@5) of .%. An ablation study demonstrates that removing either the joint time–frequency scattering transform or the metric learning algorithm noticeably degrades performance.
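A minimal sketch of the retrieval side of such a model, with random placeholder vectors standing in for joint time–frequency scattering features and the metric-learn package's LMNN (default settings) standing in for the authors' training code; cluster labels and all sizes are invented for illustration.

```python
# LMNN learns a linear transform under which same-cluster notes move closer;
# retrieval is then nearest-neighbour search in the learned space.
import numpy as np
from metric_learn import LMNN
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 40))       # stand-in scattering feature vectors
y = rng.integers(0, 8, size=200)     # hypothetical timbre-cluster labels from listeners

lmnn = LMNN()
X_t = lmnn.fit_transform(X, y)

# Rank-5 retrieval in the learned metric space (the basis of an AP@5 score).
nn = NearestNeighbors(n_neighbors=6).fit(X_t)   # 6 = query itself + 5 results
dist, idx = nn.kneighbors(X_t[:1])
print("top-5 neighbours of note 0:", idx[0][1:])
```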
APA, Harvard, Vancouver, ISO, and other styles
35

Campanioni, Chris. "How Bizarre: The Glitch of the Nineties as a Fantasy of New Authorship." M/C Journal 21, no. 5 (December 6, 2018). http://dx.doi.org/10.5204/mcj.1463.

Full text
Abstract:
As the ball dropped on 1999, is it any wonder that No Doubt played “It’s the End of the World as We Know It” by R.E.M. live on MTV? Any discussion of the Nineties—and its pinnacle moment, Y2K—requires a discussion of both the cover and the glitch, two performative and technological enactments that fomented the collapse between author-reader and user-machine that has, twenty years later, become normalised in today’s Post Internet culture. By staging failure and inviting the audience to participate, the glitch and the cover call into question the original and the origin story. This breakdown of normative borders has prompted the convergence of previously demarcated media, genres, and cultures, a constellation from which to recognise a stochastic hybrid form.
The Cover as a Revelation of Collaborative Murmur
Before Sean Parker collaborated with Shawn Fanning to launch Napster on 1 June 1999, networked file distribution existed as cumbersome text-based programs like Internet Relay Chat and Usenet, servers which resembled bulletin boards comprising multiple categories of digitally ripped files. Napster’s simple interface, its advanced search filters, and its focus on music and audio files fostered a peer-to-peer network that became the fastest growing website in history, registering 80 million users in less than two years. In harnessing the transgressive power of the Internet to force a new mode of content sharing, Napster forced traditional providers to rethink what constitutes “content” at a moment which prefigures our current phenomena of “produsage” (Bruns) and the vast popularity of user-generated content. At stake is not just the democratisation of art but troubling the very idea of intellectual property, which is to say, the very concept of ownership. Long before the Internet was re-routed from military servers and then mainstreamed, Michel Foucault understood the efficacy of anonymous interactions on the level of literature, imagining a culture where discourse would circulate without any need for an author. But what he was asking in 1969 is something we can better answer today, because it seems less germane to call into question the need for an author in a culture in which everyone is writing, producing, and reproducing text, and more effective to think about re-evaluating the notion of a single author, or what it means to write by yourself. One would have to testify to the particular medium we have at our disposal, the Internet’s ultimate permissibility, its provocations for collaboration and co-creation. One would have to surrender the idea that authors own anything besides our will to keep producing, and our desire for change; and to modulate means to resist without negating, to alter without omitting, to enable something new to come forward; the unfolding of the text into the anonymity of a murmur. We should remind ourselves that “to author” all the way down to its Latin roots signifies advising, witnessing, and transferring. We should be reminded that to author something means to forget the act of saying “I,” to forget it or to make it recede in the background in service of the other or others, on behalf of a community. The de-centralisation of Web development and programming initiated by Napster informs a poetics of relation, an always-open structure in which, as Édouard Glissant said, “the creator of a text is effaced, or rather, is done away with, to be revealed in the texture of his creation” (25). When a solid melts, it reveals something always underneath, something at the bottom, something inside—something new and something that was always already there. A cover, too, is both a revival and a reworking, an update and an interpretation, a retrospective tribute and a re-version that looks toward the future. In performing the new, the original as singular is called into question, replaced by an increasingly fetishised copy made up of and made by multiples.
Authorial Effacement and the Exigency of the Error
Y2K, otherwise known as the Millennium Bug, was a coding problem, an abbreviation made to save memory space which would disrupt computers during the transition from 1999 to 2000, when it was feared that the new year would become literally unrecognisable. After an estimated $300 billion in upgraded hardware and software was spent to make computers Y2K-compliant, something more extraordinary than global network collapse occurred as midnight struck: nothing. But what if the machine admits the possibility of accident? Implicit in the admission of any accident is the disclosure of a new condition—something to be heard, to happen, from the Greek ad-cadere, which means to fall. In this drop into non-repetition, the glitch actualises an idea about authorship that necessitates multi-user collaboration; the curtain falls only to reveal the hidden face of technology, which becomes, ultimately, instructions for its re-programming. And even as it deviates, the new form is liable to become mainstreamed into a new fashion. “Glitch’s inherently critical moment(um)” (Menkman 8) indicates this potential for technological self-insurgence, while suggesting the broader cultural collapse of generic markers and hierarchies, and its ensuing flow into authorial fluidity. This feeling of shock, this move “towards the ruins of destructed meaning” (Menkman 29) inherent in any encounter with the glitch, forecasted not the immediate horror of Y2K, but the delayed disasters of 9/11, Hurricane Katrina, Deepwater Horizon Oil Spill, Indian Ocean tsunami, Sichuan Province earthquake, global financial crisis, and two international wars that would all follow within the next nine years. If, as Menkman asserts, the glitch, in representing a loss of self-control, “captures the machine revealing itself” (30), what also surfaces is the tipping point that edges us toward a new becoming—not only the inevitability of surrender between machine and user, but their reversibility. Just as crowds stood, transfixed before midnight of the new millennium in anticipation of the error, or its exigency, it’s always the glitch I wait for; it’s always the glitch I aim to re-create, as if on command. The accidental revelation, or the machine breaking through to show us its insides. Like the P2P network that Napster introduced to culture, every glitch produces feedback, a category of noise (Shannon) influencing the machine’s future behaviour whereby potential users might return the transmission.
Re-Orienting the Bizarre in Fantasy and Fiction
It is in the fantasy of dreams, and their residual leakage into everyday life, evidenced so often in David Lynch’s Twin Peaks, where we can locate a similar authorial agency. The cult Nineties psycho-noir, and its discontinuous return twenty-six years later, provoke us into reconsidering the science of sleep as the art of fiction, assembling an alternative, interactive discourse from found material. The turning in and turning into in dreams is often described as an encounter with the “bizarre,” a word which indicates our lack of understanding about the peculiar processes that normally happen inside our heads. Dreams are inherently and primarily bizarre, Allan J. Hobson argues, because during REM sleep, our noradrenergic and serotonergic systems do not modulate the activated brain, as they do in waking. “The cerebral cortex and hippocampus cannot function in their usual oriented and linear logical way,” Hobson writes, “but instead create odd and remote associations” (71). But is it, in fact, that our dreams are “bizarre” or is it that the model itself is faulty—a precept premised on the normative, its dependency upon generalisation and reducibility—what is bizarre if not the ordinary modulations that occur in everyday life? Recall Foucault’s interest not in what a dream means but what a dream does. How it rematerialises in the waking world and its basis in and effect on imagination. Recall recollection itself, or Erin J. Wamsley’s “Dreaming and Offline Memory Consolidation.” “A ‘function’ for dreaming,” Wamsley writes, “hinges on the difficult question of whether conscious experience in general serves any function” (433). And to think about the dream as a specific mode of experience related to a specific theory of knowledge is to think about a specific form of revelation. It is this revelation, this becoming or coming-to-be, that makes the connection to crowd-sourced content production explicit—dreams serve as an audition or dress rehearsal in which new learning experiences with others are incorporated into the unconscious so that they might be used for production in the waking world. Bert O. States elaborates, linking the function of the dream with the function of the fiction writer “who makes models of the world that carry the imprint and structure of our various concerns. And it does this by using real people, or ‘scraps’ of other people, as the instruments of hypothetical facts” (28). Four out of ten characters in a dream are strangers, according to Calvin Hall, who is himself a stranger, someone I’ve never met in waking life or in a dream. But now that I’ve read him, now that I’ve written him into this work, he seems closer to me. Twin Peaks’ serial lesson for viewers is this—even the people who seem strangers to us can interact with and intervene in our processes of production. These are the moments in which a beginning takes place. And even if nothing directly follows, this transfer constitutes the hypothesised moment of production, an always-already perhaps, the what-if stimulus of charged possibility; the soil plot, or plot line, for freedom. Twin Peaks is a town in which the bizarre penetrates the everyday so often that eventually, the bizarre is no longer bizarre, but just another encounter with the ordinary. Dream sequences are common, but even more common—and more significant—are the moments in which what might otherwise be a dream vision ruptures into real life; these moments propel the narrative.
Exhibit A: A man who hasn’t gone outside in a while begins to crumble, falling to the earth when forced to chase after a young girl, who’s just stolen the secret journal of another young girl, which he, in turn, had stolen.
B: A horse appears in the middle of the living room after a routine vacuum cleaning and a subtle barely-there transition, a fade-out into a fade-in, what people call a dissolve. No one notices, or thinks to point out its presence. Or maybe they’re distracted. Or maybe they’ve already forgotten. Dissolve. (I keep hitting “Save As.” As if renaming something can also transform it.)
C: All the guests at the Great Northern Hotel begin to dance the tango on cue—a musical, without any music.
D: After an accident, a middle-aged woman with an eye patch—she was wearing the eye patch before the accident—believes she’s seventeen again. She enrolls in Twin Peaks High School and joins the cheerleading team.
E: A woman pretending to be a Japanese businessman ambles into the town bar to meet her estranged husband, who fails to recognise his cross-dressing, race-swapping wife.
F: A girl with blond hair is murdered, only to come back as another girl, with the same face and a different name. And brown hair. They’re cousins.
G: After taking over her dead best friend’s Meals on Wheels route, Donna Hayward walks in to meet a boy wearing a tuxedo, sitting on the couch with his fingers clasped: a magician-in-training. “Sometimes things can happen just like this,” he says with a snap while the camera cuts to his grandmother, bed-ridden, and the appearance of a plate of creamed corn that vanishes as soon as she announces its name.
H: A woman named Margaret talks to and through a log. The log, cradled in her arms wherever she goes, becomes a key witness.
I: After a seven-minute diegetic dream sequence, which includes a one-armed man, a dwarf, a waltz, a dead girl, a dialogue played backward, and a significantly aged representation of the dreamer, Agent Cooper wakes up and drastically shifts his investigation of a mysterious small-town murder. The dream gives him agency; it turns him from a detective staring at a dead-end to one with a map of clues. The next day, it makes him a storyteller; all the others, sitting tableside in the middle of the woods, become a captive audience. They become readers. They read into his dream to create their own scenarios. Exhibit I. The cycle of imagination spins on.
Images re-direct and obfuscate meaning, a process of over-determination which Foucault says results in “a multiplication of meanings which override and contradict each other” (DAE 34). In the absence of image, the process of imagination prevails. In the absence of story, real drama in our conscious life, we form complex narratives in our sleep—our imaginative unconscious. Sometimes they leak out, become stories in our waking life, if we think to compose them. “A bargain has been struck,” says Harold, an under-5 bit player, later, in an episode called “Laura’s Secret Diary.” So that she might have the chance to read Laura Palmer’s diary, Donna Hayward agrees to talk about her own life, giving Harold the opportunity to write it down in his notebook: his “living novel,” the new chapter of which reads, after uncapping his pen and smiling, “Donna Hayward.” He flips to the front page and sets a book weight to keep the page in place. He looks over at Donna sheepishly. “Begin.” Donna begins talking about where she was born, the particulars of her father—the lone town doctor—before she interrupts the script and asks her interviewer about his origin story. Not used to people asking him the questions, Harold’s mouth drops and he stops writing. He puts his free hand to his chest and clears his throat. (The ambient, wind-chime soundtrack intensifies.) “I grew up in Boston,” he finally volunteers. “Well, actually, I grew up in books.” He turns his head from Donna to the notebook, writing feverishly, as if he’s begun to write his own responses as the camera cuts back to his subject, Donna, crossing her legs with both hands cupped at her exposed knee, leaning in to tell him: “There’s things you can’t get in books.” “There’s things you can’t get anywhere,” he returns, pen still in his hand. “When we dream, they can be found in other people.” What is a call to composition if not a call for a response? It is always the audience which makes a work of art, re-framed in our own image, the same way we re-orient ourselves in a dream to negotiate its “inconsistencies.” Bizarreness is merely a consequence of linguistic limitations, the overwhelming sensory dream experience which can only be re-framed via a visual representation. And so the relationship between the experience of reading and dreaming is made explicit when we consider the associations internalised in the reader/audience when ingesting a passage of words on a page or on the stage, objects that become mental images and concept pictures, a lens of perception that we may liken to another art form: the film, with its jump-cuts and dissolves, so much like the defamiliarising and dislocating experience of dreaming, especially for the dreamer who wakes. What else to do in that moment but write about it? Evidence of the bizarre in dreams is only the evidence of the capacity of our human consciousness at work in the unconscious; the moment in which imagination and memory come together to create another reality, a spectrum of reality that doesn’t posit a binary between waking and sleeping, a spectrum of reality that revels in the moments where the two coalesce, merge, cross-pollinate—and what action glides forward in its wake? Sustained un-hesitation and the wish to stay inside one’s self. To be conscious of the world outside the dream means the end of one. To see one’s face in the act of dreaming would require the same act of obliteration. Recognition of the other, and of the self, prevents the process from being fulfilled. Creative production and dreaming, like voyeurism, depend on this same lack of recognition, or the recognition of yourself as other. What else is a dream if not a moment of becoming, of substituting or sublimating yourself for someone else? We are asked to relate a recent dream or we volunteer an account, to a friend or lover. We use the word “seem” in nearly every description, when we add it up or how we fail to. Everything seems to be a certain way. It’s not a place but a feeling. James, another character on Twin Peaks, says the same thing, after someone asks him, “Where do you want to go?” but before he hops on his motorcycle and rides off into the unknowable future outside the frame. Everything seems like something else, based on our own associations, our own knowledge of people and things. Offline memory consolidation. Seeming and semblance. An uncertainty of appearing—both happening and seeing. How we mediate—and re-materialise—the dream through text is our attempt to re-capture imagination, to leave off the image and better become it. If, as Foucault says, the dream is always a dream of death, its purpose is a call to creation. Outside of dreams, something bizarre occurs. We call it novelty or news. We might even bestow it with fame. A man gets on the wrong plane and ends up halfway across the world. A movie is made into the moment of his misfortune. Years later, in real life and in movie time, an Iranian refugee can’t even get on the plane; he is turned away by UK immigration officials at Charles de Gaulle, so he spends the next sixteen years living in the airport lounge; when he departs in real life, the movie (The Terminal, 2004) arrives in theaters. Did it take sixteen years to film the terminal exile? How bizarre, how bizarre. OMC’s eponymous refrain of the 1996 one-hit wonder, which is another way of saying, an anomaly. When all things are counted and countable in today’s algorithmic-rich culture, deviance becomes less of a statistical glitch and more of a testament to human peculiarity; the repressed idiosyncrasies of man before machine but especially the fallible tendencies of mankind within machines—the non-repetition of chance that the Nineties emblematised in the form of its final act. The point is to imagine what comes next; to remember waiting together for the end of the world. There is no need to even open your eyes to see it. It is just a feeling.
References
Bruns, Axel. “Towards Produsage: Futures for User-Led Content Production.” Cultural Attitudes towards Technology and Communication 2006: Proceedings of the Fifth International Conference, eds. Fay Sudweeks, Herbert Hrachovec, and Charles Ess. Murdoch: School of Information Technology, 2006. 275-84. <https://eprints.qut.edu.au/4863/1/4863_1.pdf>.
Foucault, Michel. “Dream, Imagination and Existence.” Dream and Existence. Ed. Keith Hoeller. Pittsburgh: Review of Existential Psychology & Psychiatry, 1986. 31-78.
———. “What Is an Author?” The Foucault Reader: An Introduction to Foucault’s Thought. Ed. Paul Rabinow. New York: Penguin, 1991.
Glissant, Édouard. Poetics of Relation. Trans. Betsy Wing. Ann Arbor: U of Michigan P, 1997.
Hall, Calvin S. The Meaning of Dreams. New York: McGraw Hill, 1966.
Hobson, J. Allan. The Dream Drugstore: Chemically Altered State of Consciousness. Cambridge: MIT Press, 2001.
Menkman, Rosa. The Glitch Moment(um). Amsterdam: Network Notebooks, 2011.
Shannon, Claude Elwood. “A Mathematical Theory of Communication.” The Bell System Technical Journal 27 (1948): 379-423.
States, Bert O. “Bizarreness in Dreams and Other Fictions.” The Dream and the Text: Essays on Literature and Language. Ed. Carol Schreier Rupprecht. Albany: SUNY P, 1993.
Twin Peaks. Dir. David Lynch. ABC and Showtime. 1990-3 & 2017.
Wamsley, Erin. “Dreaming and Offline Memory Consolidation.” Current Neurology and Neuroscience Reports 14.3 (2014): 433.
“Y2K Bug.” Encyclopedia Britannica. 18 July 2018. <https://www.britannica.com/technology/Y2K-bug>.
APA, Harvard, Vancouver, ISO, and other styles
36

Spasić, Irena, David Owen, Andrew Smith, and Kate Button. "KLOSURE: Closing in on open–ended patient questionnaires with text mining." Journal of Biomedical Semantics 10, S1 (November 2019). http://dx.doi.org/10.1186/s13326-019-0215-3.

Full text
Abstract:
Background: Knee injury and Osteoarthritis Outcome Score (KOOS) is an instrument used to quantify patients’ perceptions about their knee condition and associated problems. It is administered as a 42-item closed-ended questionnaire in which patients are asked to self-assess five outcomes: pain, other symptoms, activities of daily living, sport and recreation activities, and quality of life. We developed KLOG as a 10-item open-ended version of the KOOS questionnaire in an attempt to obtain deeper insight into patients’ opinions, including their unmet needs. However, the open-ended nature of the questionnaire incurs analytical overhead associated with the interpretation of responses. The goal of this study was to automate such analysis. We implemented KLOSURE as a system for mining free-text responses to the KLOG questionnaire. It consists of two subsystems, one concerned with feature extraction and the other concerned with classification of feature vectors. Feature extraction is performed by a set of four modules whose main functionalities are linguistic pre-processing, sentiment analysis, named entity recognition, and lexicon lookup respectively. Outputs produced by each module are combined into feature vectors, whose structure varies across the KLOG questions. Finally, Weka, a machine learning workbench, was used for classification of feature vectors. Results: The precision of the system varied between 62.8 and 95.3%, whereas the recall varied from 58.3 to 87.6% across the 10 questions. The overall performance in terms of F-measure varied between 59.0 and 91.3%, with an average of 74.4% and a standard deviation of 8.8. Conclusions: We demonstrated the feasibility of mining open-ended patient questionnaires. By automatically mapping free-text answers onto a Likert scale, we can effectively measure the progress of rehabilitation over time. In comparison to traditional closed-ended questionnaires, our approach offers much richer information that can be utilised to support clinical decision making. In conclusion, we demonstrated how text mining can be used to combine the benefits of qualitative and quantitative analysis of patient experiences.
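A minimal sketch of the second subsystem (free-text answer in, Likert-style class out), using a TF-IDF bag-of-words in place of KLOSURE's four feature-extraction modules and scikit-learn in place of Weka; the example answers and labels are invented for illustration.

```python
# Map free-text questionnaire answers onto a Likert scale via text classification.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

answers = [
    "My knee hurts constantly, even at rest.",
    "Occasional stiffness after long walks, otherwise fine.",
    "No pain at all, back to playing sport.",
    "Severe swelling and pain when climbing stairs.",
]
likert = [1, 3, 5, 1]  # hypothetical scale: 1 = severe problems ... 5 = no problems

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                      LogisticRegression(max_iter=1000))
model.fit(answers, likert)
print(model.predict(["Some aching after exercise but it settles quickly."]))
```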
APA, Harvard, Vancouver, ISO, and other styles
37

Brown, Andrew R. "Code Jamming." M/C Journal 9, no. 6 (December 1, 2006). http://dx.doi.org/10.5204/mcj.2681.

Full text
Abstract:
Jamming culture has become associated with digital manipulation and reuse of materials. As well, the term jamming has long been used by musicians (and other performers) to mean improvisation, especially in collaborative situations. A practice that gets to the heart of both these meanings is live coding; where digital content (music and/or visuals predominantly) is created through computer programming as a performance. During live coding performances digital content is created and presented in real time. Normally the code from the performers screen is displayed via data projection so that the audience can see the unfolding process as well as see or hear the artistic outcome. This article will focus on live coding of music, but the issues it raises for jamming culture apply to other mediums also. Live coding of music uses the computer as an instrument, which is “played” by the direct construction and manipulation of sonic and musical processes. Gestural control involves typing at the computer keyboard but, unlike traditional “keyboard” instruments, these key gestures are usually indirect in their effect on the sonic result because they result in programming language text which is then interpreted by the computer. Some live coding performers, notably Amy Alexander, have played on the duality of the keyboard as direct and indirect input source by using it as both a text entry device, audio trigger, and performance prop. In most cases, keyboard typing produces notational description during live coding performances as an indirect music making, related to what may previously have been called composing or conducting; where sound generation is controlled rather than triggered. The computer system becomes performer and the degree of interpretive autonomy allocated to the computer can vary widely, but is typically limited to probabilistic choices, structural processes and use of pre-established sound generators. In live coding practices, the code is a medium of expression through which creative ideas are articulated. The code acts as a notational representation of computational processes. It not only leads to the sonic outcome but also is available for reflection, reuse and modification. The aspects of music described by the code are open to some variation, especially in relation to choices about music or sonic granularity. This granularity continuum ranges from a focus on sound synthesis at one end of the scale to the structural organisation of musical events or sections at the other end. Regardless of the level of content granularity being controlled, when jamming with code the time constraints of the live performance environment force the performer to develop succinct and parsimonious expressions and to create processes that sustain activity (often using repetition, iteration and evolution) in order to maintain a coherent and developing musical structure during the performance. As a result, live coding requires not only new performance skills but also new ways of describing the structures of and processes that create music. Jamming activities are additionally complex when they are collaborative. Live Coding performances can often be collaborative, either between several musicians and/or between music and visual live coders. Issues that arise in collaborative settings are both creative and technical. When collaborating between performers in the same output medium (e.g., two musicians) the roles of each performer need to be defined. 
When a pianist and a vocalist improvise the harmonic and melodic roles are relatively obvious, but two laptop performers are more like a guitar duo where each can take any lead, supportive, rhythmic, harmonic, melodic, textual or other function. Prior organisation and sensitivity to the needs of the unfolding performance are required, as they have always been in musical improvisations. At the technical level it may be necessary for computers to be networked so that timing information, at least, is shared. Various network protocols, most commonly Open Sound Control (OSC), are used for this purpose. Another collaboration takes place in live coding, the one between the performer and the computer; especially where the computational processes are generative (as is often the case). This real-time interaction between musician and algorithmic process has been termed Hyperimprovisation by Roger Dean. Jamming cultures that focus on remixing often value the sharing of resources, especially through the movement and treatment of content artefacts such as audio samples and digital images. In live coding circles there is a similarly strong culture of resource sharing, but live coders are mostly concerned with sharing techniques, processes and tools. In recognition of this, it is quite common that when distributing works live coding artists will include descriptions of the processes used to create work and even share the code. This practice is also common in the broader computational arts community, as evident in the sharing of flash code on sites such as Levitated by Jared Tarbell, in the Processing site (Reas & Fry), or in publications such as Flash Maths Creativity (Peters et al.). Also underscoring this culture of sharing, is a prioritising of reputation above (or prior to) profit. As a result of these social factors most live coding tools are freely distributed. Live Coding tools have become more common in the past few years. There are a number of personalised systems that utilise various different programming languages and environments. Some of the more polished programs, that can be used widely, include SuperCollider (McCartney), Chuck (Wang & Cook) and Impromptu (Sorensen). While these environments all use different languages and varying ways of dealing with sound structure granularity, they do share some common aspects that reveal the priorities and requirements of live coding. Firstly, they are dynamic environments where the musical/sonic processes are not interrupted by modifications to the code; changes can be made on the fly and code is modifiable at runtime. Secondly, they are text-based and quite general programming environments, which means that the full leverage of abstract coding structures can be applied during live coding performances. Thirdly, they all prioritise time, both at architectural and syntactic levels. They are designed for real-time performance where events need to occur reliably. The text-based nature of these tools means that using them in live performance is barely distinguishable from any other computer task, such as writing an email, and thus the practice of projecting the environment to reveal the live process has become standard in the live coding community as a way of communicating with an audience (Collins). It is interesting to reflect on how audiences respond to the projection of code as part of live coding performances. In the author’s experience as both an audience member and live coding performer, the reception has varied widely. 
Most people seem to find it curious and comforting. Even if they cannot follow the code, they understand or are reassured that the performance is being generated by the code. Those who understand the code often report a sense of increased anticipation as they see structures emerge, and sometimes opportunities missed. Some people dislike the projection of the code, and see it as a distasteful display of virtuosity or as a distraction to their listening experience. The live coding practitioners tend to see the projection of code as a way of revealing the underlying generative and gestural nature of their performance. For some, such as Julian Rohrhuber, code projection is a way of revealing ideas and their development during the performance. “The incremental process of livecoding really is what makes it an act of public reasoning” (Rohrhuber). For both audience and performer, live coding is an explicitly risky venture, and this element of public risk taking has long been central to the appreciation of the performing arts (not to mention sport and other cultural activities). The place of live coding in the broader cultural setting is still being established. It certainly is a form of jamming, or improvisation; it also involves the generation of digital content and the remixing of cultural ideas and materials. In some ways it is also connected to instrument building. Live coding practices prioritise process and therefore have a link with conceptual visual art and serial music composition movements from the 20th century. Much of the music produced by live coding has aesthetic links, naturally enough, to electronic music genres including musique concrète, electronic dance music, glitch music, noise art and minimalism, a grouping that is not overly coherent besides a shared concern for processes and systems. Live coding is receiving greater popular and academic attention, as evident in recent articles in Wired (Andrews), ABC Online (Martin) and media culture blogs including The Teeming Void (Whitelaw 2006). Whatever its future profile in the broader cultural sector, the live coding community continues to grow and flourish amongst enthusiasts. The TOPLAP site is a hub of live coding activities and links prominent practitioners including Alex McLean, Nick Collins, Adrian Ward, Julian Rohrhuber, Amy Alexander, Frederick Olofsson, Ge Wang, and Andrew Sorensen. These people and many others are exploring live coding as a form of jamming in digital media and as a way of creating new cultural practices and works.
References
Andrews, R. “Real DJs Code Live.” Wired: Technology News 6 July 2006. <http://www.wired.com/news/technology/0,71248-0.html>.
Collins, N. “Generative Music and Laptop Performance.” Contemporary Music Review 22.4 (2004): 67-79.
Fry, Ben, and Casey Reas. Processing. <http://processing.org/>.
Martin, R. “The Sound of Invention.” Catapult. ABC Online 2006. <http://www.abc.net.au/catapult/indepth/s1725739.htm>.
McCartney, J. “SuperCollider: A New Real-Time Sound Synthesis Language.” The International Computer Music Conference. San Francisco: International Computer Music Association, 1996. 257-258.
Peters, K., M. Tan, and M. Jamie. Flash Math Creativity. Berkeley, CA: Friends of ED, 2004.
Reas, Casey, and Ben Fry. “Processing: A Learning Environment for Creating Interactive Web Graphics.” International Conference on Computer Graphics and Interactive Techniques. San Diego: ACM SIGGRAPH, 2003. 1.
Rohrhuber, J. Post to a Live Coding email list. livecode@slab.org. 10 Sep. 2006.
Sorensen, A. “Impromptu: An Interactive Programming Environment for Composition and Performance.” In Proceedings of the Australasian Computer Music Conference 2005. Eds. A. R. Brown and T. Opie. Brisbane: ACMA, 2005. 149-153.
Tarbell, Jared. Levitated. <http://www.levitated.net/daily/index.html>.
TOPLAP. <http://toplap.org/>.
Wang, G., and P.R. Cook. “ChucK: A Concurrent, On-the-fly, Audio Programming Language.” International Computer Music Conference. ICMA, 2003. 219-226.
Whitelaw, M. “Data, Code & Performance.” The Teeming Void 21 Sep. 2006. <http://teemingvoid.blogspot.com/2006/09/data-code-performance.html>.
APA, Harvard, Vancouver, ISO, and other styles
38

Quinan, C. L., and Hannah Pezzack. "A Biometric Logic of Revelation: Zach Blas’s SANCTUM (2018)." M/C Journal 23, no. 4 (August 12, 2020). http://dx.doi.org/10.5204/mcj.1664.

Full text
Abstract:
Ubiquitous in airports, border checkpoints, and other securitised spaces throughout the world, full-body imaging scanners claim to read bodies in order to identify if they pose security threats. Millimetre-wave body imaging machines—the most common type of body scanner—display to the operating security agent a screen with a generic body outline. If an anomaly is found or if an individual does not align with the machine’s understanding of an “average” body, a small box is highlighted and placed around the “problem” area, prompting further inspection in the form of pat-downs or questioning. In this complex security regime governed by such biometric, body-based technologies, it could be argued that nonalignment with bodily normativity as well as an attendant failure to reveal oneself—to become “transparent” (Hall 295)—marks a body as dangerous. As these algorithmic technologies become more pervasive, so too does the imperative to critically examine their purported neutrality and operative logic of revelation and readability.Biometric technologies are marketed as excavators of truth, with their optic potency claiming to demask masquerading bodies. Failure and bias are, however, an inescapable aspect of such technologies that work with narrow parameters of human morphology. Indeed, surveillance technologies have been taken to task for their inherent racial and gender biases (Browne; Pugliese). Facial recognition has, for example, been critiqued for its inability to read darker skin tones (Buolamwini and Gebru), while body scanners have been shown to target transgender bodies (Keyes; Magnet and Rodgers; Quinan). Critical security studies scholar Shoshana Magnet argues that error is endemic to the technological functioning of biometrics, particularly since they operate according to the faulty notion that bodies are “stable” and unchanging repositories of information that can be reified into code (Magnet 2).Although body scanners are presented as being able to reliably expose concealed weapons, they are riddled with incompetencies that misidentify and over-select certain demographics as suspect. Full-body scanners have, for example, caused considerable difficulties for transgender travellers, breast cancer patients, and people who use prosthetics, such as artificial limbs, colonoscopy bags, binders, or prosthetic genitalia (Clarkson; Quinan; Spalding). While it is not in the scope of this article to detail the workings of body imaging technologies and their inconsistencies, a growing body of scholarship has substantiated the claim that these machines unfairly impact those identifying as transgender and non-binary (see, e.g., Beauchamp; Currah and Mulqueen; Magnet and Rogers; Sjoberg). Moreover, they are constructed according to a logic of binary gender: before each person enters the scanner, transportation security officers must make a quick assessment of their gender/sex by pressing either a blue (corresponding to “male”) or pink (corresponding to “female”) button. In this sense, biometric, computerised security systems control and monitor the boundaries between male and female.The ability to “reveal” oneself is henceforth predicated on having a body free of “abnormalities” and fitting neatly into one of the two sex categorisations that the machine demands. 
Transgender and gender-nonconforming individuals, particularly those who do not have a binary gender presentation or whose presentation does not correspond to the sex marker in their documentation, also face difficulties if the machine flags anomalies (Quinan and Bresser). Drawing on a Foucauldian analysis of power as productive, Toby Beauchamp similarly illustrates how surveillance technologies not only identify but also create and reshape the figure of the dangerous subject in relation to normative configurations of gender, race, and able-bodiedness. By mobilizing narratives of concealment and disguise, heightened security measures frame gender nonconformity as dangerous (Beauchamp, Going Stealth). Although national and supranational authorities market biometric scanning technologies as scientifically neutral and exact methods of identification and verification and as an infallible solution to security risks, such tools of surveillance are clearly shaped by preconceptions and prejudgements about race, gender, and bodily normativity. Not only are they encoded with “prototypical whiteness” (Browne) but they are also built on “grossly stereotypical” configurations of gender (Clarkson).Amongst this increasingly securitised landscape, creative forms of artistic resistance can offer up a means of subverting discriminatory policing and surveillance practices by posing alternate visualisations that reveal and challenge their supposed objectivity. In his 2018 audio-video artwork installation entitled SANCTUM, UK-based American artist Zach Blas delves into how biometric technologies, like those described above, both reveal and (re)shape ontology by utilising the affectual resonance of sexual submission. Evoking the contradictory notions of oppression and pleasure, Blas describes SANCTUM as “a mystical environment that perverts sex dungeons with the apparatuses and procedures of airport body scans, biometric analysis, and predictive policing” (see full description at https://zachblas.info/works/sanctum/).Depicting generic mannequins that stand in for the digitalised rendering of the human forms that pass through body scanners, the installation transports the scanners out of the airport and into a queer environment that collapses sex, security, and weaponry; an environment that is “at once a prison-house of algorithmic capture, a sex dungeon with no genitals, a weapons factory, and a temple to security.” This artistic reframing gestures towards full-body scanning technology’s germination in the military, prisons, and other disciplinary systems, highlighting how its development and use has originated from punitive—rather than protective—contexts.In what follows, we adopt a methodological approach that applies visual analysis and close reading to scrutinise a selection of scenes from SANCTUM that underscore the sadomasochistic power inherent in surveillance technologies. Analysing visual and aural elements of the artistic intervention allows us to complicate the relationship between transparency and recognition and to problematise the dynamic of mandatory complicity and revelation that body scanners warrant. In contrast to a discourse of visibility that characterises algorithmically driven surveillance technology, Blas suggests opacity as a resistance strategy to biometrics' standardisation of identity. 
Taking an approach informed by critical security studies and queer theory, we also argue that SANCTUM highlights the violence inherent to the practice of reducing the body to a flat, inert surface that purports to align with some sort of “core” identity, a notion that contradicts feminist and queer approaches to identity and corporeality as fluid and changing. In close reading this artistic installation alongside emerging scholarship on the discriminatory effects of biometric technology, this article aims to highlight the potential of art to queer the supposed objectivity and neutrality of biometric surveillance and to critically challenge normative logics of revelation and readability.Corporeal Fetishism and Body HorrorThroughout both his artistic practice and scholarly work, Blas has been critical of the above narrative of biometrics as objective extractors of information. Rather than looking to dominant forms of representation as a means for recognition and social change, Blas’s work asks that we strive for creative techniques that precisely queer biometric and legal systems in order to make oneself unaccounted for. For him, “transparency, visibility, and representation to the state should be used tactically, they are never the end goal for a transformative politics but are, ultimately, a trap” (Blas and Gaboury 158). While we would simultaneously argue that invisibility is itself a privilege that is unevenly distributed, his creative work attempts to refuse a politics of visibility and to embrace an “informatic opacity” that is attuned to differences in bodies and identities (Blas).In particular, Blas’s artistic interventions titled Facial Weaponization Suite (2011-14) and Face Cages (2013-16) protest against biometric recognition and the inequalities that these technologies propagate by making masks and wearable metal objects that cannot be detected as human faces. This artistic-activist project contests biometric facial recognition and their attendant inequalities by, as detailed on the artist’s website,making ‘collective masks’ in workshops that are modelled from the aggregated facial data of participants, resulting in amorphous masks that cannot be detected as human faces by biometric facial recognition technologies. The masks are used for public interventions and performances.One mask explores blackness and the racist implications that undergird biometric technologies’ inability to detect dark skin. Meanwhile another mask, which he calls the “Fag Face Mask”, points to the heteronormative underpinnings of facial recognition. Created from the aggregated facial data of queer men, this amorphous pink mask implicitly references—and contests—scientific studies that have attempted to link the identification of sexual orientation through rapid facial recognition techniques.Building on this body of creative work that has advocated for opacity as a tool of social and political transformation, SANCTUM resists the revelatory impulses of biometric technology by turning to the use and abuse of full-body imaging. The installation opens with a shot of a large, dark industrial space. At the far end of a red, spotlighted corridor, a black mask flickers on a screen. A shimmering, oscillating sound reverberates—the opening bars of a techno track—that breaks down in rhythm while the mask evaporates into a cloud of smoke. The camera swivels, and a white figure—the generic mannequin of the body scanner screen—is pummelled by invisible forces as if in a wind tunnel. 
These ghostly silhouettes appear and reappear in different positions, with some being whipped and others stretched and penetrated by a steel anal hook. Rather than conjuring a traditional horror trope of the body’s terrifying, bloody interior, SANCTUM evokes a new kind of feared and fetishised trope that is endemic to the current era of surveillance capitalism: the abstracted body, standardised and datafied, created through the supposedly objective and efficient gaze of AI-driven machinery.

Resting on the floor in front of the ominous animated mask are neon fragments arranged in an occultist formation—hands or half a face. By breaking the body down into component parts—“from retina to fingerprints”—biometric technologies “purport to make individual bodies endlessly replicable, segmentable and transmissible in the transnational spaces of global capital” (Magnet 8). The notion that bodies can be seamlessly turned into blueprints extracted from biological and cultural contexts has been described by Donna Haraway as “corporeal fetishism” (Haraway, Modest). In the context of SANCTUM, Blas illustrates the dangers of mistaking a model for a “concrete entity” (Haraway, “Situated” 147). Indeed, the digital cartography of the generic mannequin becomes no longer a mode of representation but instead a technoscientific truth.

Several scenes in SANCTUM also illustrate a process whereby substances are extracted from the mannequins and used as tools to enact violence. In one such instance, a silver webbing is generated over a kneeling figure. Upon closer inspection, this geometric structure, which is reminiscent of Blas’s earlier Face Cages project, is a replication of the triangulated patterns produced by facial recognition software in its mapping of distances between eyes, nose, and mouth. In the next scene, this “map” breaks apart into singular shapes that float and transform into a metallic whip, before eventually reconstituting themselves as a penetrative douche hose that causes the mannequin to spasm and vomit a pixelated liquid. Its secretions levitate and become the webbing, and then the sequence begins anew.

In another scene, a mannequin is held upside-down and force-fed a bubbling liquid that is being pumped through tubes from its arms, legs, and stomach. These depictions visualise Magnet’s argument that biometric renderings of bodies are understood not to be “tropic” or “historically specific” but are instead presented as “plumbing individual depths in order to extract core identity” (5). In this sense, this visual representation calls to mind biometrics’ reification of body and identity, obfuscating what Haraway would describe as the “situatedness of knowledge”. Blas’s work, however, forces a critique of these very systems, as the materials extracted from the bodies of the mannequins in SANCTUM allude to how biometric cartographies drawn from travellers are utilised to justify detainment. These security technologies employ what Magnet has referred to as “surveillant scopophilia,” that is, new ways and forms of looking at the human body “disassembled into component parts while simultaneously working to assuage individual anxieties about safety and security through the promise of surveillance” (17). The transparent body—the body that can submit and reveal itself—is ironically represented by the distinctly genderless translucent mannequins.
Although the generic mannequins are seemingly blank slates, the installation simultaneously forces a conversation about the ways in which biometrics draw upon and perpetuate assumptions about gender, race, and sexuality.

Biometric Subjugation

On her 2016 critically acclaimed album HOPELESSNESS, openly transgender singer, composer, and visual artist Anohni performs a deviant subjectivity that highlights the above dynamics that mark the contemporary surveillance discourse. To an imagined “daddy” technocrat, she sings:

Watch me… I know you love me
'Cause you're always watching me
'Case I'm involved in evil
'Case I'm involved in terrorism
'Case I'm involved in child molesters

Evoking a queer sexual frisson, Anohni describes how, as a trans woman, she is hyper-visible to state institutions. She narrates a voyeuristic relation where trans bodies are policed as threats to public safety rather than protected from systemic discrimination. Through the seemingly benevolent “daddy” character and the play on ‘cause (i.e., because) and ‘case (i.e., in case), she highlights how gender-nonconforming individuals are predictively surveilled and assumed to already be guilty. Reflecting on daddy-boy sexual paradigms, Jack Halberstam reads the “sideways” relations of queer practices as an enactment of “rupture as substitution” to create a new project that “holds on to vestiges of the old but distorts” (226). Upending power and control, queer art has the capacity to both reveal and undermine hegemonic structures while simultaneously allowing for the distortion of the old to create something new.

Employing the sublimatory relations of bondage, discipline, sadism, and masochism (BDSM), Blas’s queer installation similarly creates a sideways representation that re-orientates the logic of the biometric scanners, thereby unveiling the always already sexualised relations of scrutiny and interrogation as well as the submissive complicity they demand. Replacing the airport environment with a dark and foreboding mise-en-scène allows Blas to focus on capture rather than mobility, highlighting the ways in which border checkpoints (including those instantiated by the airport) encourage free travel for some while foreclosing movement for others. Building on Sara Ahmed’s “phenomenology of being stopped”, Magnet considers what happens when we turn our gaze to those “who fail to pass the checkpoint” (107). In SANCTUM, the same actions are played out again and again on spectral beings who are trapped in various states: they shudder in cages, are chained to the floor, or are projected against the parameters of mounted screens. One ghostly figure, for instance, lies pinned down by metallic grappling hooks, arms raised above the head in a recognisable stance of surrender, conjuring up the now-familiar image of a traveller standing in the cylindrical scanner machine, waiting to be screened. In portraying this extended moment of immobility, Blas lays bare the deep contradictions in the rhetoric of “freedom of movement” that underlies such spaces.

On a global level, media reporting, scientific studies, and policy documents proclaim that biometrics are essential to ensuring personal safety and national security. Within the public imagination, these technologies become seductive because of their marked ability to identify terrorist attackers—to reveal threatening bodies—thereby appealing to the anxious citizen’s fear of the disguised suicide bomber.
Yet for marginalised identities prefigured as criminal or deceptive—including transgender and black and brown bodies—the inability to perform such acts of revelation via submission to screening can result in humiliation and further discrimination, public shaming, and even tortuous inquiry—acts that are played out in SANCTUM.

Masked Genitals

Feminist surveillance studies scholar Rachel Hall has referred to the impetus for revelation in the post-9/11 era as a desire for a universal “aesthetics of transparency” in which the world and the body are turned inside-out so that there are no longer “secrets or interiors … in which terrorists or terrorist threats might find refuge” (127). Hall takes up the case study of Umar Farouk Abdulmutallab (infamously known as “the Underwear Bomber”), who attempted to detonate plastic explosives hidden in his underwear while onboard a flight from Amsterdam to Detroit on 25 December 2009. Hall argues that this event signified a coalescence of fears surrounding bodies of colour, genitalia, and terrorism. News reports following the incident stated that Abdulmutallab tucked his penis to make room for the explosive, thereby “queer[ing] the aspiring terrorist by indirectly referencing his willingness … to make room for a substitute phallus” (Hall 289).

Overtly manifested in the Underwear Bomber incident is also a desire to voyeuristically expose a hidden, threatening interiority, which is inherently implicated with anxieties surrounding gender deviance. Beauchamp elaborates on how gender deviance and transgression have coalesced with terrorism, which was exemplified in the wake of the 9/11 attacks when the United States Department of Homeland Security issued a memo warning that male terrorists “may dress as females in order to discourage scrutiny” (“Artful” 359). Although this advisory did not explicitly reference transgender populations, it linked “deviant” gender presentation—to which we could also add Abdulmutallab’s tucking of his penis—with threats to national security (Beauchamp, Going Stealth). This also calls to mind a broader discussion of the ways in which genitalia feature in the screening process. Prior to the introduction of millimetre-wave body scanning technology, the most common form of scanner used was the backscatter imaging machine, which displayed “naked” body images of each passenger to the security agent. Due to privacy concerns, these machines were replaced by the scanners currently in place, which use a generic outline of a passenger (exemplified in SANCTUM) to detect possible threats.

It is here worth returning to Blas’s installation, as it also implicitly critiques the security protocols that attempt to reveal genitalia as both threatening and as evidence of an inner truth about a body. At one moment in the installation, a bayonet-like object pierces the blank crotch of the mannequin, shattering it into holographic fragments. The apparent genderlessness of the mannequins is contrasted with these graphic sexual acts. The penetrating metallic instrument that breaks into the loin of the mannequin, combined with the camera shot that slowly zooms in on this action, draws attention to a surveillant fascination with genitalia and revelation. As Nicholas L. Clarkson documents in his analysis of airport security protocols governing prostheses, including limbs and packies (silicone penis prostheses), genitals are a central component of the screening process.
While it is stipulated that physical searches should not require travellers to remove items of clothing, such as underwear, or to expose their genitals to staff for inspection, prosthetics are routinely screened and examined. This practice can create tensions for trans or disabled passengers with prosthetics in so-called “sensitive” areas, particularly as guidelines for security measures are often implemented by airport staff who are not properly trained in transgender-sensitive protocols.

Conclusion

According to media technologies scholar Jeremy Packer, “rather than being treated as one to be protected from an exterior force and one’s self, the citizen is now treated as an always potential threat, a becoming bomb” (382). Although this technological policing impacts all who are subjected to security regimes (which is to say, everyone), this amalgamation of body and bomb has exacerbated the ways in which bodies socially coded as threatening or deceptive are targeted by security and surveillance regimes. Nonetheless, others have argued that the use of invasive forms of surveillance can be justified by the state as an exchange: that citizens should willingly give up their right to privacy in exchange for safety (Monahan 1). Rather than subscribing to this paradigm, Blas’s SANCTUM critiques the violence of mandatory complicity in this “trade-off” narrative. Because their operationalisation rests on normative notions of embodiment that are governed by preconceptions around gender, race, sexuality, and ability, surveillance systems demand that bodies become transparent. This disproportionately affects those whose bodies do not match norms, with trans and queer bodies often becoming unreadable (Kafer and Grinberg). The shadowy realm of SANCTUM illustrates this tension between biometric revelation and resistance, but also suggests that opacity may be a tool of transformation in the face of such discriminatory violations that are built into surveillance.

References

Ahmed, Sara. “A Phenomenology of Whiteness.” Feminist Theory 8.2 (2007): 149-68.
Beauchamp, Toby. “Artful Concealment and Strategic Visibility: Transgender Bodies and U.S. State Surveillance after 9/11.” Surveillance & Society 6.4 (2009): 356-66.
———. Going Stealth: Transgender Politics and U.S. Surveillance Practices. Durham, NC: Duke UP, 2019.
Blas, Zach. “Informatic Opacity.” The Journal of Aesthetics and Protest 9 (2014). <http://www.joaap.org/issue9/zachblas.htm>.
Blas, Zach, and Jacob Gaboury. “Biometrics and Opacity: A Conversation.” Camera Obscura: Feminism, Culture, and Media Studies 31.2 (2016): 154-65.
Browne, Simone. Dark Matters: On the Surveillance of Blackness. Durham, NC: Duke UP, 2015.
Buolamwini, Joy, and Timnit Gebru. “Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification.” Proceedings of Machine Learning Research 81 (2018): 1-15.
Clarkson, Nicholas L. “Incoherent Assemblages: Transgender Conflicts in US Security.” Surveillance & Society 17.5 (2019): 618-30.
Currah, Paisley, and Tara Mulqueen. “Securitizing Gender: Identity, Biometrics, and Transgender Bodies at the Airport.” Social Research 78.2 (2011): 556-82.
Halberstam, Jack. The Queer Art of Failure. Durham: Duke UP, 2011.
Hall, Rachel. “Terror and the Female Grotesque: Introducing Full-Body Scanners to U.S. Airports.” Feminist Surveillance Studies. Eds. Rachel E. Dubrofsky and Shoshana Amielle Magnet. Durham, NC: Duke UP, 2015. 127-49.
Haraway, Donna. “Situated Knowledges: The Science Question in Feminism and the Privilege of Partial Perspective.” Feminist Studies 14.3 (1988): 575-99.
———. Modest_Witness@Second_Millennium. FemaleMan_Meets_OncoMouse: Feminism and Technoscience. New York: Routledge, 1997.
Kafer, Gary, and Daniel Grinberg. “Queer Surveillance.” Surveillance & Society 17.5 (2019): 592-601.
Keyes, O.S. “The Misgendering Machines: Trans/HCI Implications of Automatic Gender Recognition.” Proceedings of the ACM on Human-Computer Interaction 2, CSCW, Article 88 (2018): 1-22.
Magnet, Shoshana Amielle. When Biometrics Fail: Gender, Race, and the Technology of Identity. Durham: Duke UP, 2011.
Magnet, Shoshana, and Tara Rodgers. “Stripping for the State: Whole Body Imaging Technologies and the Surveillance of Othered Bodies.” Feminist Media Studies 12.1 (2012): 101-18.
Monahan, Torin. Surveillance and Security: Technological Politics and Power in Everyday Life. New York: Routledge, 2006.
Packer, Jeremy. “Becoming Bombs: Mobilizing Mobility in the War of Terror.” Cultural Studies 10.5 (2006): 378-99.
Pugliese, Joseph. “In Silico Race and the Heteronomy of Biometric Proxies: Biometrics in the Context of Civilian Life, Border Security and Counter-Terrorism Laws.” Australian Feminist Law Journal 23 (2005): 1-32.
———. Biometrics: Bodies, Technologies, Biopolitics. New York: Routledge, 2010.
Quinan, C.L. “Gender (In)securities: Surveillance and Transgender Bodies in a Post-9/11 Era of Neoliberalism.” Security/Mobility: Politics of Movement. Eds. Stef Wittendorp and Matthias Leese. Manchester: Manchester UP, 2017. 153-69.
Quinan, C.L., and Nina Bresser. “Gender at the Border: Global Responses to Gender Diverse Subjectivities and Non-Binary Registration Practices.” Global Perspectives 1.1 (2020). <https://doi.org/10.1525/gp.2020.12553>.
Sjoberg, Laura. “(S)he Shall Not Be Moved: Gender, Bodies and Travel Rights in the Post-9/11 Era.” Security Journal 28.2 (2015): 198-215.
Spalding, Sally J. “Airport Outings: The Coalitional Possibilities of Affective Rupture.” Women’s Studies in Communication 39.4 (2016): 460-80.
APA, Harvard, Vancouver, ISO, and other styles
39

Stewart, Jonathan. "If I Had Possession over Judgment Day: Augmenting Robert Johnson." M/C Journal 16, no. 6 (December 16, 2013). http://dx.doi.org/10.5204/mcj.715.

Full text
Abstract:
augment vb [ɔːgˈmɛnt] 1. to make or become greater in number, amount, strength, etc.; increase 2. Music: to increase (a major or perfect interval) by a semitone (Collins English Dictionary 107)

Almost everything associated with Robert Johnson has been subject to some form of augmentation. His talent as a musician and songwriter has been embroidered by myth-making. Johnson’s few remaining artefacts—his photographic images, his grave site, other physical records of his existence—have attained the status of reliquary. Even the integrity of his forty-two surviving recordings is now challenged by audiophiles who posit they were musically and sonically augmented by speeding up—increasing the tempo and pitch. This article documents the promulgation of myth in the life and music of Robert Johnson. His disputed photographic images are cited as archetypal contested artefacts, augmented both by false claims and genuine new discoveries—some of which suggest Johnson’s cultural magnetism is so compelling that even items only tenuously connected to his work draw significant attention. Current challenges to the musical integrity of Johnson’s original recordings, namely that they were “augmented” in order to raise the tempo, are presented as exemplars of our ongoing fascination with his life and work. Part literature review, part investigative history, this article uses the phenomenon of augmentation as a prism to shed new light on this enigmatic figure.

Johnson’s obscurity during his lifetime, and for twenty-three years after his demise in 1938, offered little indication of his future status as a musical legend: “As far as the evolution of black music goes, Robert Johnson was an extremely minor figure, and very little that happened in the decades following his death would have been affected if he had never played a note” (Wald, Escaping xv). Such anonymity allowed those who first wrote about his music to embrace and propagate the myths that grew around this troubled character and his apparently “supernatural” genius. Johnson’s first press notice, from John Hammond, writing pseudonymously in The New Masses in 1937, spoke of a mysterious character from “deepest Mississippi” who “makes Leadbelly sound like an accomplished poseur” (Prial 111). The following year Hammond eulogised the singer in profoundly romantic terms: “It still knocks me over when I think of how lucky it is that a talent like his ever found its way to phonograph records […] Johnson died last week at precisely the moment when Vocalion scouts finally reached him and told him that he was booked to appear at Carnegie Hall” (19). The visceral awe experienced by subsequent generations of Johnson aficionados seems inspired by the remarkable capacity of his recordings to transcend space and time, reaching far beyond their immediate intended audience. “Johnson’s music changed the way the world looked to me,” wrote Greil Marcus, “I could listen to nothing else for months.” The music’s impact originates, at least in part, from the ambiguity of its origins: “I have the feeling, at times, that the reason Johnson has remained so elusive is that no one has been willing to take him at his word” (27-8).
Three decades later Bob Dylan expressed similar sentiments over seven detailed pages of Chronicles:

From the first note the vibrations from the loudspeaker made my hair stand up … it felt like a ghost had come into the room, a fearsome apparition … When he sings about icicles hanging on a tree it gives me the chills, or about milk turning blue … it made me nauseous and I wondered how he did that … It’s hard to imagine sharecroppers or plantation field hands at hop joints, relating to songs like these. You have to wonder if Johnson was playing for an audience that only he could see, one off in the future. (282-4)

Such ready invocation of the supernatural bears witness to the profundity and resilience of the “lost bluesman” as a romantic trope. Barry Lee Pearson and Bill McCulloch have produced a painstaking genealogy of such a-historical misrepresentation. Early contributors include Rudi Blesh, Samuel B. Charters, Frank Driggs’s liner notes for Johnson’s King of the Delta Blues Singers collection, and critic Pete Welding’s prolific 1960s output. Even comparatively recent researchers who ostensibly sought to demystify the legend couldn’t help but embellish the narrative. “It is undeniable that Johnson was fascinated with and probably obsessed by supernatural imagery,” asserted Robert Palmer (127). For Peter Guralnick his best songs articulate “the debt that must be paid for art and the Faustian bargain that Johnson sees at its core” (43).

Contemporary scholarship from Pearson and McCulloch, James Banninghof, Charles Ford, and Elijah Wald has scrutinised Johnson’s life and work on a more evidential basis. This process has been likened to assembling a complicated jigsaw where half the pieces are missing:

The Mississippi Delta has been practically turned upside down in the search for records of Robert Johnson. So far only marriage application signatures, two photos, a death certificate, a disputed death note, a few scattered school documents and conflicting oral histories of the man exist. Nothing more. (Graves 47)

Such material is scrappy and unreliable. Johnson’s marriage licenses and his school records suggest contradictory dates of birth (Freeland 49). His death certificate mistakes his age—we now know that Johnson inadvertently founded another rock myth, the “27 Club”, which includes fellow guitarists Brian Jones, Jimi Hendrix and Kurt Cobain (Wolkewitz et al.; Segalstad and Hunter)—and incorrectly states he was single when he was twice widowed.

A second contemporary research strand focuses on the mythmaking process itself. For Eric Rothenbuhler the appeal of Johnson’s recordings lies in his unique “for-the-record” aesthetic, which foreshadowed playing and song-writing standards not widely realised until the 1960s. For Patricia Schroeder, Johnson’s legend reveals far more about the story-tellers than it does the source—which over time has become “an empty center around which multiple interpretations, assorted viewpoints, and a variety of discourses swirl” (3). Some accounts of Johnson’s life seem entirely coloured by their authors’ cultural preconceptions. The most enduring myth, Johnson’s “crossroads” encounter with the Devil, is commonly redrawn according to the predilections of those telling the tale. That this story really belongs to bluesman Tommy Johnson has been known for over four decades (Evans 22), yet it was mistakenly attributed to Robert as recently as 1999 in the French blues magazine Soul Bag (Pearson and McCulloch 92-3). Such errors are, thankfully, becoming less common.
While the movie Crossroads (1986) brazenly appropriated Tommy’s story, the young walking bluesman in O Brother, Where Art Thou? (2000) faithfully proclaims his authentic identity: “Thanks for the lift, sir. My name's Tommy. Tommy Johnson […] I had to be at that crossroads last midnight. Sell my soul to the devil.” Nevertheless the “supernatural” constituent of Johnson’s legend remains an irresistible framing device. It inspired evocative footage in Peter Meyer’s Can’t You Hear the Wind Howl? The Life and Music of Robert Johnson (1998). Even the liner notes to the definitive Sony Music Robert Johnson: The Centennial Edition celebrate and reclaim his myth:

nothing about this musician is more famous than the word-of-mouth accounts of him selling his soul to the devil at a midnight crossroads in exchange for his singular mastery of blues guitar. It has become fashionable to downplay or dismiss this account nowadays, but the most likely source of the tale is Johnson himself, and the best efforts of scholars to present this artist in ordinary, human terms have done little to cut through the mystique and mystery that surround him.

Repackaged versions of Johnson’s recordings became available via Amazon.co.uk and Spotify when they fell out of copyright in the United Kingdom. Predictable titles such as Contracted to the Devil, Hellbound, Me and the Devil Blues, and Up Jumped the Devil, along with their distinctive “crossroads” artwork, continue to demonstrate the durability of this myth [1]. Ironically, Johnson’s recordings were made during an era when one-off exhibited artworks (such as his individual performances of music) first became reproducible products. Walter Benjamin famously described the impact of this development:

that which withers in the age of mechanical reproduction is the aura of the work of art […] the technique of reproduction detaches the reproduced object from the domain of tradition. By making many reproductions it substitutes a plurality of copies for a unique existence. (7)

Marybeth Hamilton drew on Benjamin in her exploration of white folklorists’ efforts to document authentic pre-modern blues culture. Such individuals sought to preserve the intensity of the uncorrupted and untutored black voice before its authenticity and uniqueness could be tarnished by widespread mechanical reproduction. Two artefacts central to Johnson’s myth, his photographs and his recorded output, will now be considered in that context.

In 1973 researcher Stephen LaVere located two pictures in the possession of Johnson’s half-sister Carrie Thompson. The first, a cheap “dime store” self-portrait taken in the equivalent of a modern photo booth, shows Johnson around a year into his life as a walking bluesman. The second, taken in the Hooks Bros. studio on Beale Street, Memphis, portrays a dapper and smiling musician on the eve of his short career as a Vocalion recording artist [2]. Neither was published for over a decade after their “discovery” due to fears of litigation from a competing researcher. A third photograph remains unpublished, still owned by Johnson’s family:

The man has short nappy hair; he is slight, one foot is raised, and he is up on his toes as though stretching for height. There is a sharp crease in his pants, and a handkerchief protrudes from his breast pocket […] His eyes are deep-set, reserved, and his expression forms a half-smile, there seems to be a gentleness about him, his fingers are extraordinarily long and delicate, his head is tilted to one side. (Guralnick 67)
Recently a fourth portrait appeared, seemingly out of nowhere, in Vanity Fair. Vintage guitar seller Steven Schein discovered a sepia photograph labelled “Old Snapshot Blues Guitar B. B. King???” [sic] while browsing eBay and purchased it for $2,200. Johnson’s son positively identified the image, and a Houston Police Department forensic artist employed face recognition technology to confirm that “all the features are consistent if not identical” (DiGiacomo 2008). The provenance of this photograph remains disputed, however. Johnson’s guitar appears overly distressed for what would at the time have been a new model, while his clothes reflect an inappropriate style for the period (Graves). Another contested “Johnson” image, found on four seconds of silent film, showed a walking bluesman playing outside a small-town cinema in Ruleville, Mississippi. It inspired Bob Dylan to wax lyrical in Chronicles: “You can see that really is Robert Johnson, has to be – couldn’t be anyone else. He’s playing with huge, spiderlike hands and they magically move over the strings of his guitar” (287). However, it had already been proved that this figure couldn’t be Johnson, because the background movie poster shows a film released three years after the musician’s death. The temptation to wish such items genuine is clearly a difficult one to overcome: “even things that might have been Robert Johnson now leave an afterglow” (Schroeder 154, my italics).

Johnson’s recordings, so carefully preserved by Hammond and other researchers, might offer tangible and inviolate primary source material. Yet these also now face a serious challenge: they run too rapidly by a factor of up to 15 per cent (Gibbens; Wilde). Speeding up music allowed early producers to increase a song’s vibrancy and fit longer takes onto their restricted media. By slowing the recording tempo, master discs provided a “mother” print that would cause all subsequent pressings to play unnaturally quickly when reproduced. Robert Johnson worked for half a decade as a walking blues musician without restrictions on the length of his songs before recording with producer Don Law and engineer Vincent Liebler in San Antonio (1936) and Dallas (1937). Longer compositions were reworked for these sessions, with verses re-arranged or edited out (Wald, Escaping). It is also conceivable that they were purposefully, or even accidentally, sped up. (The tempo consistency of machines used in early field recordings across the South has often been questioned, as many played too fast or slow (Morris).)

Slowed-down versions of Johnson’s songs from contributors such as Angus Blackthorne and Ron Talley now proliferate on YouTube. The debate has fuelled detailed discussion in online blogs, where some contributors to specialist audio technology forums have attempted to decode a faintly detectable background hum using spectrum analysers. If the frequency of the alternating current that powered Law and Liebler’s machine could be established at 50 or 60 Hz, it might provide evidence of possible tempo variation. A peak at 51.4 Hz, one contributor argues, suggests “the recordings are 2.8 per cent fast, about half a semitone” (Blischke). Such “augmentation” has yet to be fully explored in academic literature. Graves describes the discussion as “compelling and intriguing” in his endnotes, concluding “there are many pros and cons to the argument and, indeed, many recordings over the years have been speeded up to make them seem livelier” (124).
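The contributor’s arithmetic is straightforward to verify. The sketch below is a hypothetical reconstruction, not code from the forum thread; it assumes, as the post does, a nominal 50 Hz mains supply, derives the speed ratio from the measured hum peak, and converts that ratio into equal-tempered semitones:

```python
import math

def speed_ratio(measured_hz, nominal_hz):
    """Playback speed relative to the original, inferred from the mains hum."""
    return measured_hz / nominal_hz

def semitone_shift(ratio):
    """Pitch shift, in equal-tempered semitones, produced by a given speed ratio."""
    return 12 * math.log2(ratio)

ratio = speed_ratio(51.4, 50.0)  # hum peak at 51.4 Hz against an assumed 50 Hz supply
print(f"{(ratio - 1) * 100:.1f} per cent fast")        # 2.8 per cent fast
print(f"{semitone_shift(ratio):.2f} semitones sharp")  # 0.48, about half a semitone
```

The numbers match the forum claim: a 2.8 per cent speed increase sharpens pitch by roughly half a semitone.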
Wald (“Robert Johnson”) provides a compelling and detailed counter-thesis on his website, although he does acknowledge inconsistencies in pitch among alternate master takes of some recordings. No one who actually saw Robert Johnson perform ever called attention to potential discrepancies between the pitch of his natural and recorded voice. David “Honeyboy” Edwards, Robert Lockwood Jr. and Johnny Shines were all interviewed repeatedly by documentarians and researchers, but none ever raised the issue. Conversely, Johnson’s former girlfriend Willie Mae Powell was visibly affected by the familiarity in his voice on hearing his recording of the tune Johnson wrote for her, “Love in Vain”, in Chris Hunt’s The Search for Robert Johnson (1991).

Clues might also lie in the natural tonality of Johnson’s instrument. Delta bluesmen who shared Johnson’s repertoire and played slide guitar in his style commonly used a tuning of open G (D-G-D-G-B-D). Colloquially known as “Spanish” (Gordon 38-42), it offers a natural home key of G major for slide guitar. We might therefore expect Johnson’s recordings to revolve around the tonic (G) or its dominant (D); however, almost all of his songs are a full tone higher, in the key of A or its dominant E. (The only exceptions are “They’re Red Hot” and “From Four Till Late” in C, and “Love in Vain” in G.) A pitch increase such as this might be consistent with an increase in the speed of these recordings. An alternative explanation might be that Johnson tuned his strings particularly tightly, which would benefit his slide playing but also make fingering notes and chords less comfortable. Yet another is that he used a capo to raise the key of his instrument and was capable of performing difficult lead parts in relatively high fret positions on the neck of an acoustic guitar. This is accepted by Scott Ainslie and Dave Whitehill in their authoritative volume of transcriptions At the Crossroads (11). The photo booth self-portrait of Johnson also clearly shows a capo at the second fret—which would indeed raise open G to open A (in concert pitch).
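The intervals at stake here can be made concrete with a short sketch (note spellings only, octaves ignored; the helper names are illustrative, not drawn from any source). A capo at the second fret raises every open-G string by two semitones, yielding open A; and a whole-tone rise, if it were instead produced purely by running a disc faster, would correspond to a playback speed roughly 12 per cent above the original, within the variation alleged above:

```python
NOTES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def transpose(note, semitones):
    """Shift a note name up by a number of semitones (octaves ignored)."""
    return NOTES[(NOTES.index(note) + semitones) % 12]

open_g = ["D", "G", "D", "G", "B", "D"]    # "Spanish" open G tuning, low string to high
print([transpose(n, 2) for n in open_g])   # ['E', 'A', 'E', 'A', 'C#', 'E'], i.e. open A

# A whole tone is two semitones; expressed as a pure speed change:
print(f"{(2 ** (2 / 12) - 1) * 100:.1f} per cent faster")  # 12.2 per cent
```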
The most persuasive reasoning against speed tampering runs parallel to the argument laid out earlier in this piece: previous iterations of the Johnson myth have superimposed their own circumstances and ignored the context and reality of the protagonist’s lived experience. As Wald argues, our assumptions of what we think Johnson ought to sound like have little bearing on what he actually sounded like. It is a compelling point. When Son House, Skip James, Bukka White, and other surviving bluesmen were “rediscovered” during the 1960s urban folk revival of North America and Europe they were old men with deep and resonant voices. Johnson’s falsetto vocalisations do not, therefore, accord with the commonly accepted sound of an authentic blues artist. Yet Johnson was in his mid-twenties in 1936 and 1937: a young man heavily influenced by the success of other high-pitched male blues singers of his era.

people argue that what is better about the sound is that the slower, lower Johnson sounds more like Son House. Now, House was a major influence on Johnson, but by the time Johnson recorded he was not trying to sound like House—an older player who had been unsuccessful on records—but rather like Leroy Carr, Casey Bill Weldon, Kokomo Arnold, Lonnie Johnson, and Peetie Wheatstraw, who were the big blues recording stars in the mid-1930s, and whose vocal styles he imitated on most of his records. (For example, the ooh-well-well falsetto yodel he often used was imitated from Wheatstraw and Weldon.) These singers tended to have higher, smoother voices than House—exactly the sound that Johnson seems to have been going for, and that the House fans dislike. So their whole argument is based on the fact that they prefer the older Delta sound to the mainstream popular blues sound of the 1930s—or, to put it differently, that their tastes are different from Johnson’s own tastes at the moment he was recording. (Wald, “Robert Johnson”)

Few media can capture an audible moment entirely accurately, and the idea of engineering a faithful reproduction of an original performance is also only one element of the rationale for any recording. Commercial engineers often aim to represent the emotion of a musical moment, rather than its totality. John and Alan Lomax may have worked as documentarians, preserving sound as faithfully as possible for the benefit of future generations on behalf of the Library of Congress. Law and Liebler, however, were producing exciting and profitable commercial products for financial gain. Paradoxically, then, whatever the “real” Robert Johnson sounded like (deeper voice, no mesmeric falsetto, not such an extraordinarily adept guitar player, never met the Devil … and so on), the mythical figure who “sold his soul at the crossroads” and shipped millions of albums after his death may, on that basis, be as authentic as the original.

Schroeder draws on Mikhail Bakhtin to comment on such vacant yet hotly contested spaces around the Johnson myth. For Bakhtin, literary texts are ascribed new meanings by consecutive generations as they absorb and respond to them:

Every age re-accentuates in its own way the works of its most immediate past. The historical life of classic works is in fact the uninterrupted process of their social and ideological re-accentuation [of] ever newer aspects of meaning; their semantic content literally continues to grow, to further create out of itself. (421)

In this respect Johnson’s legend is a “classic work”, entirely removed from its historical life, a free-floating form re-contextualised and reinterpreted by successive generations in order to make sense of their own cultural predilections (Schroeder 57). As Graves observes, “since Robert Johnson’s death there has seemed to be a mathematical equation of sorts at play: the less truth we have, the more myth we get” (113). The threads connecting his real and mythical identity seem so comprehensively intertwined that only the most assiduous scholars are capable of disentanglement. Johnson’s life and work seem destined to remain augmented and contested for as long as people want to play guitar, and others want to listen to them.

Notes

[1] Actually the dominant theme of Johnson’s songs is not “the supernatural”; it is his inveterate womanising. Almost all Johnson’s lyrics employ creative metaphors to depict troubled relationships. Some even include vivid images of domestic abuse. In “Stop Breakin’ Down Blues” a woman threatens him with a gun. In “32-20 Blues” he discusses the most effective calibre of weapon to shoot his partner and “cut her half in two.” In “Me and the Devil Blues” Johnson promises “to beat my woman until I get satisfied”.
However, in The Lady and Mrs Johnson, five-time W. C. Handy award winner Rory Block re-wrote these words to befit her own cultural agenda, inverting the original sentiment as: “I got to love my baby ‘til I get satisfied”.

[2] The Gibson L-1 guitar featured in Johnson’s Hooks Bros. portrait briefly became another contested artefact when it appeared in the catalogue of a New York State memorabilia dealership in 2006 with an asking price of $6,000,000. The Australian owner had apparently purchased the instrument forty years earlier under the impression it was bona fide, although photographic comparison technology showed that it couldn’t be genuine and the item was withdrawn. “Had it been real, I would have been able to sell it several times over,” Gary Zimet from MIT Memorabilia told me in an interview for Guitarist Magazine at the time, “a unique item like that will only ever increase in value” (Stewart 2010).

References

Ainslie, Scott, and Dave Whitehill. Robert Johnson: At the Crossroads – The Authoritative Guitar Transcriptions. Milwaukee: Hal Leonard Publishing, 1992.
Bakhtin, Mikhail M. The Dialogic Imagination. Austin: University of Texas Press, 1982.
Banks, Russell. “The Devil and Robert Johnson – Robert Johnson: The Complete Recordings.” The New Republic 204.17 (1991): 27-30.
Banninghof, James. “Some Ramblings on Robert Johnson’s Mind: Critical Analysis and Aesthetic in Delta Blues.” American Music 15.2 (1997): 137-158.
Benjamin, Walter. The Work of Art in the Age of Mechanical Reproduction. London: Penguin, 2008.
Blackthorne, Angus. “Robert Johnson Slowed Down.” YouTube.com 2011. 1 Aug. 2013 ‹http://www.youtube.com/user/ANGUSBLACKTHORN?feature=watch›.
Blesh, Rudi. Shining Trumpets: A History of Jazz. New York: Knopf, 1946.
Blischke, Michael. “Slowing Down Robert Johnson.” The Straight Dope 2008. 1 Aug. 2013 ‹http://boards.straightdope.com/sdmb/showthread.php?t=461601›.
Block, Rory. The Lady and Mrs Johnson. Rykodisc 10872, 2006.
Charters, Samuel. The Country Blues. New York: De Capo Press, 1959.
Collins UK. Collins English Dictionary. Glasgow: Harper Collins Publishers, 2010.
DiGiacomo, Frank. “A Disputed Robert Johnson Photo Gets the C.S.I. Treatment.” Vanity Fair 2008. 1 Aug. 2013 ‹http://www.vanityfair.com/online/daily/2008/10/a-disputed-robert-johnson-photo-gets-the-csi-treatment›.
DiGiacomo, Frank. “Portrait of a Phantom: Searching for Robert Johnson.” Vanity Fair 2008. 1 Aug. 2013 ‹http://www.vanityfair.com/culture/features/2008/11/johnson200811›.
Dylan, Bob. Chronicles Vol 1. London: Simon & Schuster, 2005.
Evans, David. Tommy Johnson. London: November Books, 1971.
Ford, Charles. “Robert Johnson’s Rhythms.” Popular Music 17.1 (1998): 71-93.
Freeland, Tom. “Robert Johnson: Some Witnesses to a Short Life.” Living Blues 150 (2000): 43-49.
Gibbens, John. “Steady Rollin’ Man: A Revolutionary Critique of Robert Johnson.” Touched 2004. 1 Aug. 2013 ‹http://www.touched.co.uk/press/rjnote.html›.
Gioia, Ted. Delta Blues: The Life and Times of the Mississippi Masters Who Revolutionised American Music. London: W. W. Norton & Co, 2008.
———. “Robert Johnson: A Century, and Beyond.” Robert Johnson: The Centennial Collection. Sony Music 88697859072, 2011.
Gordon, Robert. Can’t Be Satisfied: The Life and Times of Muddy Waters. London: Pimlico Books, 2002.
Graves, Tom. Crossroads: The Life and Afterlife of Blues Legend Robert Johnson. Spokane: Demers Books, 2008.
Guralnick, Peter. Searching for Robert Johnson: The Life and Legend of the “King of the Delta Blues Singers”. London: Plume, 1998.
Hamilton, Marybeth. In Search of the Blues: Black Voices, White Visions. London: Jonathan Cape, 2007.
Hammond, John. From Spirituals to Swing (Dedicated to Bessie Smith). New York: The New Masses, 1938.
Johnson, Robert. “Hellbound.” Amazon.co.uk 2011. 1 Aug. 2013 ‹http://www.amazon.co.uk/Hellbound/dp/B0063S8Y4C/ref=sr_1_cc_2?s=aps&ie=UTF8&qid=1376605065&sr=1-2-catcorr&keywords=robert+johnson+hellbound›.
———. “Contracted to the Devil.” Amazon.co.uk 2002. 1 Aug. 2013 ‹http://www.amazon.co.uk/Contracted-The-Devil-Robert-Johnson/dp/B00006F1L4/ref=sr_1_cc_1?s=aps&ie=UTF8&qid=1376830351&sr=1-1-catcorr&keywords=Contracted+to+The+Devil›.
———. King of the Delta Blues Singers. Columbia Records CL1654, 1961.
———. “Me and the Devil Blues.” Amazon.co.uk 2003. 1 Aug. 2013 ‹http://www.amazon.co.uk/Me-Devil-Blues-Robert-Johnson/dp/B00008SH7O/ref=sr_1_16?s=music&ie=UTF8&qid=1376604807&sr=1-16&keywords=robert+johnson›.
———. “The High Price of Soul.” Amazon.co.uk 2007. 1 Aug. 2013 ‹http://www.amazon.co.uk/High-Price-Soul-Robert-Johnson/dp/B000LC582C/ref=sr_1_39?s=music&ie=UTF8&qid=1376604863&sr=1-39&keywords=robert+johnson›.
———. “Up Jumped the Devil.” Amazon.co.uk 2005. 1 Aug. 2013 ‹http://www.amazon.co.uk/Up-Jumped-Devil-Robert-Johnson/dp/B000B57SL8/ref=sr_1_2?s=music&ie=UTF8&qid=1376829917&sr=1-2&keywords=Up+Jumped+The+Devil›.
Marcus, Greil. Mystery Train: Images of America in Rock ‘n’ Roll Music. London: Plume, 1997.
Morris, Christopher. “Phonograph Blues: Robert Johnson Mastered at Wrong Speed?” Variety 2010. 1 Aug. 2013 ‹http://www.varietysoundcheck.com/2010/05/phonograph-blues-robert-johnson-mastered-at-wrong-speed.html›.
O Brother, Where Art Thou? DVD. Universal Pictures, 2000.
Palmer, Robert. Deep Blues: A Musical and Cultural History from the Mississippi Delta to Chicago’s South Side to the World. London: Penguin Books, 1981.
Pearson, Barry Lee, and Bill McCulloch. Robert Johnson: Lost and Found. Chicago: University of Illinois Press, 2003.
Prial, Dunstan. The Producer: John Hammond and the Soul of American Music. New York: Farrar, Straus and Giroux, 2006.
Rothenbuhler, Eric W. “For-the-Record Aesthetics and Robert Johnson’s Blues Style as a Product of Recorded Culture.” Popular Music 26.1 (2007): 65-81.
———. “Myth and Collective Memory in the Case of Robert Johnson.” Critical Studies in Media Communication 24.3 (2007): 189-205.
Schroeder, Patricia. Robert Johnson, Mythmaking and Contemporary American Culture (Music in American Life). Chicago: University of Illinois Press, 2004.
Segalstad, Eric, and Josh Hunter. The 27s: The Greatest Myth of Rock and Roll. Berkeley: North Atlantic Books, 2009.
Stewart, Jon. “Rock Climbing: Jon Stewart Concludes His Investigation of the Myths behind Robert Johnson.” Guitarist Magazine 327 (2010): 34.
The Search for Robert Johnson. DVD. Sony Pictures, 1991.
Talley, Ron. “Robert Johnson, ‘Sweet Home Chicago’, as It REALLY Sounded...” YouTube.com 2012. 1 Aug. 2013 ‹http://www.youtube.com/watch?v=LCHod3_yEWQ›.
Wald, Elijah. Escaping the Delta: Robert Johnson and the Invention of the Blues. London: HarperCollins, 2005.
———. “The Robert Johnson Speed Recording Controversy.” Elijah Wald — Writer, Musician 2012. 1 Aug. 2013 ‹http://www.elijahwald.com/johnsonspeed.html›.
Wilde, John. “Robert Johnson Revelation Tells Us to Put the Brakes on the Blues: We’ve Been Listening to the Immortal ‘King of the Delta Blues’ at the Wrong Speed, But Now We Can Hear Him as He Intended.” The Guardian 2010. 1 Aug. 2013 ‹http://www.theguardian.com/music/musicblog/2010/may/27/robert-johnson-blues›.
Wolkewitz, M., A. Allignol, N. Graves, and A.G. Barnett. “Is 27 Really a Dangerous Age for Famous Musicians? Retrospective Cohort Study.” British Medical Journal 343 (2011): d7799. 1 Aug. 2013 ‹http://www.bmj.com/content/343/bmj.d7799›.
APA, Harvard, Vancouver, ISO, and other styles
40

Kerasidou, Xaroula (Charalampia). "Regressive Augmentation: Investigating Ubicomp’s Romantic Promises." M/C Journal 16, no. 6 (November 7, 2013). http://dx.doi.org/10.5204/mcj.733.

Full text
Abstract:
Machines that fit the human environment instead of forcing humans to enter theirs will make using a computer as refreshing as taking a walk in the woods. Mark Weiser on ubiquitous computing (21st Century Computer 104)

In 2007, a forum entitled HCI 2020: Human Values in a Digital Age sought to address the questions:

What will our world be like in 2020? Digital technologies will continue to proliferate, enabling ever more ways of changing how we live. But will such developments improve the quality of life, empower us, and make us feel safer, happier and more connected? Or will living with technology make it more tiresome, frustrating, angst-ridden, and security-driven? What will it mean to be human when everything we do is supported or augmented by technology? (Harper et al. 10)

The forum came as a response to what many call post-PC technological developments; developments that seek to engulf our lives in digital technologies which, in their various forms, are meant to support and augment our everyday lives. One of these developments has been the project of ubiquitous computing along with its kin project, tangible computing. Ubiquitous computing (ubicomp) made its appearance in the late 1980s in the labs of Xerox’s Palo Alto Research Center (PARC) as the “third wave” in computing, following those of the mainframe and personal computing (Weiser, Open House 2). Mark Weiser, who coined the term, along with his collaborators at Xerox PARC, envisioned a “new technological paradigm” which would leave behind the traditional one-to-one relationship between human and computer, and spread computation “ubiquitously, but invisibly, throughout the environment” (Weiser, Gold and Brown 693). Since then, the field has grown and now counts several peer-reviewed journals, conferences, and academic and industrial research centres around the world, which have set out to study the new “post-PC computing” under names such as Pervasive Computing, Ambient Intelligence, Tangible Computing, The Internet of Things, etc. Instead of providing a comprehensive account of all the different ubicomp incarnations, this paper seeks to focus on the early projects and writings of some of ubicomp’s most prominent figures and tease out, as a way of critique, the origins of some of its romantic promises.

From the outset, ubiquitous computing was heavily informed by a human-centred approach that sought to shift the focus from the personal computer back to its users. On the grounds that the PC has dominated the technological landscape at the expense of its human counterparts, ubiquitous computing promised a different human-machine interaction, with “machines that fit the human environment instead of forcing humans to enter theirs” (104, my italics), placing the two in opposite and antagonistic terrains.

The problem comes about in the form of interaction between people and machines … So when the two have to meet, which side should dominate? In the past, it has been the machine that dominates. In the future, it should be the human. (Norman 140)

Within these early ubicomp discourses, the computer came to embody a technological menace, the machine that threatened the liberal humanist value of being free and in control. For example, in 1999, in a book characterised as “the bible of ‘post-PC’ thinking” by Business Week, Donald Norman exclaimed:

we have let ourselves to be trapped. … I don’t want to be controlled by a technology. I just want to get on with my life, … So down with PC’s; down with computers.
All they do is complicate our lives. (72)

And we read on the website of MIT’s first ubicomp project, Oxygen:

For over forty years, computation has centered about machines, not people. We have catered to expensive computers, pampering them in air-conditioned rooms or carrying them around with us. Purporting to serve us, they have actually forced us to serve them.

Ubiquitous computing, then, in its early incarnations, was presented as the solution: the human-centred, somewhat natural approach, which would shift the emphasis away from the machine and bring control back to its legitimate owner, the liberal autonomous human subject, becoming the facilitator of our apparently threatened humanness. Its promise? An early promise of regressive augmentation, I would say, since it promised to augment our lives, not by changing them, but by returning us to a past, better world that the alienating PC has supposedly displaced, enabling us to “have more time to be more fully human” (Weiser and Brown). And it sought to achieve this through the key characteristic of invisibility, which was based on the paradox that while more and more computers will permeate our lives, they will effectively disappear.

Ubicomp’s Early Romantic Promises

The question of how we can make computers disappear has been addressed in computer research in various ways. One of the earliest and most prominent is the approach which focuses on the physicality of the world, seeking to build tangible interfaces. One of the main advocates of this approach is MIT’s Tangible Media Group, led by Professor Hiroshi Ishii. The group has been working on their vision, which they call “Tangible Bits,” for almost two decades now, and in 2009 they were awarded the “Lasting Impact Award” at the ACM Symposium on User Interface Software and Technology (UIST) for their metaDesk platform, presented in 1997 (fig. 1), which explores the coupling of everyday physical objects with digital information (Ullmer and Ishii). Also, in 2004, in a special paper titled “Bottles: A Transparent Interface as a Tribute to Mark Weiser”, Ishii presented once again an early project he and his group developed in 1999, and for which they were personally commended by Weiser himself. According to Ishii, bottles (fig. 2)—a system comprising three glass bottles “filled with music”, each representing a different musical instrument, placed on a Plexiglas “stage” and controlled by their physical manipulation (moving, opening or closing them)—no less, “illustrates Mark Weiser’s vision of the transparent (or invisible) interface that weaves itself into the fabric of everyday life” (1299).

Figure 1: metaDesk platform (MIT Tangible Media Group)

Figure 2: musicBottles (MIT Tangible Media Group)

Tangible computing was based on the premise that we inhabit two worlds: the physical world and cyberspace, or, as Ishii and Ullmer put it, the world of atoms and the world of bits, claiming that there is a gap between these two worlds that has left us “torn between these parallel but disjoint spaces” (1). This agreed with Weiser’s argument that cyberspace, and specifically the computer, has taken centre stage, leaving the real world—the real people, the real interactions—in the background and neglected. Tangible computing then sought to address this problem by “bridging the gaps between both cyberspace and the physical environment” (1).
As Ishii and Ullmer wrote in 1997:

The aim of our research is to show concrete ways to move beyond the current dominant model of GUI [Graphic User Interface] bound to computers with a flat rectangular display, windows, a mouse, and a keyboard. To make computing truly ubiquitous and invisible, we seek to establish a new type of HCI that we call “Tangible User Interfaces” (TUIs). TUIs will augment the real physical world by coupling digital information to everyday physical objects and environments. (2)

“Our intention is to take advantage of natural physical affordances to achieve a heightened legibility and seamlessness of interaction between people and information” (2).

In his earlier work, computer scientist Paul Dourish turned to phenomenology and the concept of embodiment in order to develop an understanding of interaction as embodied. This was prior to his recent work with cultural anthropologist Genevieve Bell, in which they examined the motivating mythology of ubiquitous computing along with the messiness of its lived experience (Dourish and Bell). Dourish, in this earlier work, observed that one of the common critical features early tangible and ubiquitous computing shared is that “they both attempt to exploit our natural familiarity with the everyday environment and our highly developed spatial and physical skills to specialize and control how computation can be used in concert with naturalistic activities” (Context-Aware Computing 232). They then sought to exploit this familiarity in order to build natural computational interfaces that fit seamlessly within our everyday, real world (Where the Action Is 17).

This idea of an existing set of natural tactile skills appears to come hand-in-hand with a nostalgic, romantic view of an innocent, simple, and long-gone world that the early projects of tangible and ubiquitous computing sought to revive; a world in which the personal computer did not fit; an innocent world, in fact, displaced by the personal computer. In 1997, Ishii and Ullmer wrote about their decision to start their investigations into the “future of HCI” in the museum of the Collection of Historic Scientific Instruments at Harvard University, in an effort to draw inspiration from “the aesthetics and rich affordances of these historical scientific instruments”, concerned that “alas, much of this richness has been lost to the rapid flood of digital technologies” (1). Elsewhere Ishii explained that the origin of his idea to design a bottle interface began with the concept of a “weather forecast bottle”, an idea he intended to develop as a present for his mother: “Upon opening the weather bottle, she would be greeted by the sound of singing birds if the next day’s weather was forecasted to be clear” (1300). Here, we are introduced to a nice elderly lady who has opened thousands of bottles while cooking for her family in her kitchen. This senior lady, who is made to embody the symbolic alignment between woman, the domestic and nature (see Soper, Rose, Plumwood), “has never clicked a mouse, typed a URL, nor booted a computer in her life” (Ishii 1300). Instead, “my mother simply wanted to know the following day’s weather forecast. Why should this be so complicated?” (1300, my italics). Weiser also mobilised nostalgic sentiments in order to paint a picture of what it would be like to live with ubiquitous computing.
So, for example, when seeking a metaphor for ubiquitous computing, he proposed “childhood – playful, a building of foundations, constant learning, a bit mysterious and quickly forgotten by adults” (Not a Desktop 8). He viewed the ubicomp home as the ideal retreat to a state of childhood: playfully reaching out to the unknown, while being securely protected and safely “at home” (Open House).

These early ideas of a direct experience of the world through our bodily senses, along with the romantic view of a past, simple, and better world that the computer threatened and that future technological developments promised, could point towards what Leo Marx has described as America’s “pastoral ideal”, a force that, according to Marx, is ingrained in the American view of life. Balancing between primitivism and civilisation, nature and culture, the pastoral ideal “is an embodiment of what Lovejoy calls ‘semi-primitivism’; it is located in a middle ground somewhere ‘between’, yet in a transcendent relation to, the opposing forces of civilisation and nature” (Marx 23). It appears that the early advocates of tangible and ubiquitous computing sought to strike a similar balance to the American pastoral ideal: a precarious position that managed to reconcile the disfavour and fear of Europe’s “satanic mills” with an admiration for the technological power of the Industrial Revolution, the admiration for technological development with the bucolic ideal of an unspoiled and pure nature. But how was such a balance to be achieved? How could the ideal middle state be reached, balancing the opposing forces of technological development and the dream of the return to a serene pastoral existence?

According to Leo Marx, for the European colonisers, the New World was to provide the answer to this exact question (101). The American landscape was to become the terrain where old and new, nature and technology, harmonically meet to form a libertarian utopia. Technology was seen as “‘naturally arising’ from the landscape as another natural ‘means of happiness’ decreed by the Creator in his design of the continent. So, far from conceding that there might be anything alien or ‘artificial’ about mechanization, technology was seen as inherent in ‘nature’; both geographic and human” (160). Since then, according to Marx, the idea of the “return” to a new Golden Age has been ingrained in American culture, and it appears that it informs ubiquitous computing’s own early visions. The idea of a “naturally arising” technology which would facilitate our return to the once lost garden of security and nostalgia appears to have become a common theme within ubiquitous computing discourses, making appearances across time and borders. So, for example, while in 1991 Weiser envisioned that ubiquitous technologies would make “using a computer as refreshing as taking a walk in the woods” (21st Century Computer 11), twelve years later Marzano, writing about Philips’s vision of Ambient Intelligence, promised that “the living space of the future could look more like that of the past than that of today” (9). While the pastoral defined nature in terms of the geographical landscape, early ubiquitous computing appeared to define nature in terms of the objects, tools and technologies that surround us and our interactions with them.
While pastoral America defined itself in contradistinction to the European industrial sites and the dirty, smoky and alienating cityscapes, within those early ubiquitous computing discourses the role of the alienating force was assigned to the personal computer. And whereas the personal computer with its "grey box" was early on rejected as the modern embodiment of the European satanic mills, computation was welcomed as a "naturally arising" technological solution which would infuse the objects which, "through the ages, … are most relevant to human life—chairs, tables and beds, for instance, … the objects we can't do without" (Marzano 9). Or else, it would infuse the (newly constructed) natural landscape, fulfilling the promise that when the "world of bits" and the "world of atoms" were finally bridged, the balance would be restored. But how did these two worlds come into existence? How did bits and atoms come to occupy different and separate ontological spheres? Far from being obvious or commonsensical, the idea of the separation between bits and atoms has a history that grounds it in specific times and places, and consequently makes those early ubiquitous and tangible computing discourses part of a bigger story that, as documented (Hayles) and argued (Agre), started some time ago. The view that we inhabit the two worlds of atoms and bits (Ishii and Ullmer), endorsed by both early ubiquitous and tangible computing, was based on the idea of the separation of computation from its material instantiation, presenting the former as a free-floating entity able to infuse our world. As we saw earlier, tangible computing took the idea of this separation as an unquestionable fact, which then served as the basis for its research goals. As we read on the home page of the Tangible Media Group's website: Where the sea of bits meets the land of atoms, we are now facing the challenge of reconciling our dual citizenship in the physical and digital worlds. "Tangible Bits" is our vision of Human Computer Interaction (HCI): we seek a seamless coupling of bits and atoms by giving physical form to digital information and computation (my italics). The idea that digital information does not have to have a physical form, but is given one in order to achieve a coupling of the two worlds, not only reinforces the view of digital information as an immaterial entity, but also places it in a privileged position over the material world. In this light, those early ideas of augmentation, or of "awakening" the physical world (Ishii and Ullmer 3), appear to rest on the idea of a passive material world that can be brought to life and become worthy and meaningful through computation, making ubiquitous computing part of a bigger and more familiar story. In a restaging of the dominant Cartesian dualism between the "ensouled" subject and the "soulless" material object, the latter is rendered passive, manipulable, and void of agency and, just like Ishii's old bottles, is performed as a mute, docile "empty vessel" ready to carry out any of its creator's wishes: hold perfumes and beverages, play music, or tell the weather. At the same time, computation was presented as the force that could breathe life into a mundane and passive world; a free-floating, somewhat natural, immaterial entity, like oxygen (hence the name of MIT's first ubicomp project), like the air we breathe, able to travel unobstructed through any medium, through our everyday objects and our environment. 
But it is interesting to see that in those early ubicomp discourses computation's power did not extend too far. While computation appeared to be foregrounded as a powerful, almost magical, entity able to give life and soul to a soulless material world, at the same time it was presented as controlled and muted. The computational power that would fill our lives, according to Weiser's ubiquitous computing, would be invisible; it wouldn't "intrude on our consciousness" (Weiser Not a Desktop 7), it would leave no traces and bring no radical changes. If anything, it would enable us to re-establish our humanness and return us to our past, natural state, promising not to change us, or our lives, by introducing something new and unfamiliar, but to enable us to "remain serene and in control" (Weiser and Brown). In other words, ubiquitous computing, as this early story goes, would not be alienating, complex, obtrusive, or even noticeable, for that matter, and so, at the end of this paper, we come full circle to ubicomp's early goals of invisibility, with their underpinnings in the precarious pastoral ideal. This short paper focused on some of ubicomp's early stories and projects, and specifically on its promise to return us to a past and implicitly better world that the PC has arguably displaced. By reading these early promises of what I call regressive augmentation through Marx's work on the "pastoral ideal", this paper sought to tease out, in order to unsettle, the origins of some of ubicomp's romantic promises. References Agre, P. E. Computation and Human Experience. New York: Cambridge University Press, 1997. Dourish, P. "Seeking a Foundation for Context-Aware Computing." Human–Computer Interaction 16.2-4 (2001): 229-241. ———. Where the Action Is: The Foundations of Embodied Interaction. Cambridge: MIT Press, 2001. Dourish, P., and G. Bell. Divining a Digital Future: Mess and Mythology in Ubiquitous Computing. Cambridge, Massachusetts: MIT Press, 2011. Grimes, A., and R. Harper. "Celebratory Technology: New Directions for Food Research in HCI." In CHI '08, Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. New York: ACM, 2008. 467-476. Harper, R., T. Rodden, Y. Rogers, and A. Sellen (eds.). Being Human: Human-Computer Interaction in the Year 2020. Microsoft Research, 2008. 1 Dec. 2013 ‹http://research.microsoft.com/en-us/um/Cambridge/projects/hci2020/downloads/BeingHuman_A3.pdf›. Hayles, K. How We Became Posthuman: Virtual Bodies in Cybernetics, Literature, and Informatics. Chicago: University of Chicago Press, 1999. Ishii, H. "Bottles: A Transparent Interface as a Tribute to Mark Weiser." IEICE Transactions on Information and Systems 87.6 (2004): 1299-1311. Ishii, H., and B. Ullmer. "Tangible Bits: Towards Seamless Interfaces between People, Bits and Atoms." In CHI '97, Proceedings of the ACM SIGCHI Conference on Human Factors in Computing Systems. New York: ACM, 1997. 234-241. Marx, L. The Machine in the Garden: Technology and the Pastoral Ideal in America. 35th ed. New York: Oxford University Press, 2000. Marzano, S. "Cultural Issues in Ambient Intelligence." In E. Aarts and S. Marzano (eds.), The New Everyday: Views on Ambient Intelligence. Rotterdam: 010 Publishers, 2003. Norman, D. The Invisible Computer: Why Good Products Can Fail, the Personal Computer Is So Complex, and Information Appliances Are the Solution. Cambridge, Mass.: MIT Press, 1999. Plumwood, V. Feminism and the Mastery of Nature. London, New York: Routledge, 1993. Rose, G. Feminism and Geography. Cambridge: Polity, 1993. Soper, K. "Naturalised Woman and Feminized Nature." In L. Coupe (ed.), The Green Studies Reader: From Romanticism to Ecocriticism. London: Routledge, 2000. Ullmer, B., and H. Ishii. "The metaDESK: Models and Prototypes for Tangible User Interfaces." In UIST '97, Proceedings of the 10th Annual ACM Symposium on User Interface Software and Technology. New York: ACM, 1997. 223-232. Weiser, M. "The Computer for the 21st Century." Scientific American 265.3 (1991): 94-104. ———. "The Open House." ITP Review 2.0, 1996. 1 Dec. 2013 ‹http://makingfurnitureinteractive.files.wordpress.com/2007/09/wholehouse.pdf›. ———. "The World Is Not a Desktop." Interactions 1.1 (1994): 7-8. Weiser, M., and J.S. Brown. "The Coming Age of Calm Technology." 1996. 1 Dec. 2013 ‹http://www.johnseelybrown.com/calmtech.pdf›. Weiser, M., R. Gold, and J.S. Brown. "The Origins of Ubiquitous Computing Research at PARC in the Late 1980s." IBM Systems Journal 38.4 (1999): 693-696.
APA, Harvard, Vancouver, ISO, and other styles
41

Starrs, D. Bruno. "Enabling the Auteurial Voice in Dance Me to My Song." M/C Journal 11, no. 3 (July 2, 2008). http://dx.doi.org/10.5204/mcj.49.

Full text
Abstract:
Despite numerous critics describing him as an auteur (i.e. a film-maker who 'does' everything and fulfils every production role [Bordwell and Thompson 37] and/or one with a signature "world-view" detectable in his/her work [Caughie 10]), Rolf de Heer appears to have declined primary authorship of Dance Me to My Song (1997), his seventh in an oeuvre of twelve feature films. Indeed, the opening credits do not mention his name at all: it is only with the closing credits that the audience learns de Heer has directed the film. Rather, as the film commences, the viewer is informed by the titles that it is "A film by Heather Rose", thus suggesting that the work is her singular creation. Direct and uncompromising, with its unflattering shots of the lead actor and writer (Heather Rose Slattery, a young woman born with cerebral palsy), the film may be read as a courageous self-portrait which finds the grace, humanity and humour trapped inside Rose's twisted body. Alternatively, it may be read as yet another example of de Heer's signature interest in foregrounding a world view which gives voice to marginalised characters such as the disabled or the disadvantaged. For example, the developmentally retarded eponymous protagonist of Bad Boy Bubby (1993) is eventually able to make art as a singer in a band and succeeds in creating a happy family with a wife and two kids. The 'mute' girl in The Quiet Room (1996) makes herself heard by her squabbling parents through her persistent activism. In Ten Canoes (2006) the Indigenous Australians cast themselves according to kinship ties, not according to the director's choosing, and tell their story in their own uncolonised language. A cursory glance at the films of Rolf de Heer suggests he is overtly interested in conveying to the audience the often overlooked agency of his unlikely protagonists. In the ultra-competitive world of professional film-making it is rare to see primary authorship ceded by a director so generously. However, the allocation of authorship to a member of a marginalised population re-invigorates questions prompted by Andy Medhurst regarding a film's "authorship test" (198) and its relationship to a subaltern community, wherein he writes that "a biographical approach has more political justification if the project being undertaken is one concerned with the cultural history of a marginalized group" (202-3). Just as films by gay authors about gay characters may have greater credibility, as Medhurst posits, one might wonder whether a film by a person with a disability about a character with the same disability would be better received. Enabling authorship by an unknown, crippled woman such as Rose rather than a famous, able-bodied male such as de Heer may be cynically regarded as good (show) business in that it is politically correct. This essay therefore asks if the appellation "A film by Heather Rose" is appropriate for Dance Me to My Song. Whose agency in telling the story (or 'doing' the film-making), the able-bodied Rolf de Heer or the disabled Heather Rose, is reflected in this cinematic production? In other words, whose voice is enabled when an audience receives this film? In attempting to answer these questions it is inevitable that Paul Darke's concept of the "normality drama" (181) is referred to and questioned, as I argue that Dance Me to My Song makes groundbreaking departures from the conventions of the typical disability narrative. 
Heather Rose as Auteur Rose plays the film's heroine, Julia, who like herself has cerebral palsy, a group of non-progressive, chronic disorders resulting from changes produced in the brain during the prenatal stages of life. Although severely affected physically, Rose suffered no intellectual impairment and had acted in Rolf de Heer's cult hit Bad Boy Bubby five years before, a confidence-building experience that grew into an ongoing fascination with the filmmaking process. Subsequently, working with co-writer Frederick Stahl, she devised the scenario for this film, writing the lead role for herself and then proactively bringing it to de Heer's attention. Rose wrote of de Heer's deliberate lack of involvement in the script-writing process: "Rolf didn't even want to read what we'd done so far, saying he didn't want to interfere with our process" (de Heer, "Production Notes"). In 2002, aged 36, Rose died, and in her obituary Stahl reports an excerpt from her diary: People see me as a person who has to be controlled. But let me tell you something, people. I am not! And I am going to make something real special of my life! I am going to go out there and grab life with both hands!!! I am going to make the most sexy and honest film about disability that has ever been made!! (Stahl, "Standing Room Only") This proclamation of her ability and ambition in screen-writing is indicative of Rose's desire to do. In a guest lecture Rose gave further insights into the active intent in writing Dance Me to My Song: I wanted to create a screenplay, but not just another soppy disability film, I wanted to make a hot sexy film, which showed the real world … The message I wanted to convey to an audience was "As people with disabilities, we have the same feelings and desires as others". (Rose, "ISAAC 2000 Conference Presentation") Rose went on to explain her strategy for winning over director de Heer: "Rolf was not sure about committing to the movie; I had to pester him really. I decided to invite him to my birthday party. It took a few drinks, but I got him to agree to be the director" (ibid.), and with this revelation of her tactical approach her film-making agency is further evidenced. Rose's proactive innovation is not just evident in her successful approach to de Heer. Her screenplay serves as a radical exception to films featuring disabled persons, which, according to Paul Darke in 1998, typically involve the disabled protagonist struggling to triumph over the limitations imposed by their disability in their 'admirable' attempts to normalize. Such normality dramas are usually characterized by two generic themes: first, that the state of abnormality is nothing other than tragic because of its medical implications; and, second, that the struggle for normality, or some semblance of it in normalization – as represented in the film by the other characters – is unquestionably right owing to its axiomatic supremacy. (187) Darke argues that the so-called normality drama is "unambiguously a negation of ascribing any real social or individual value to the impaired or abnormal" (196), and that such dramas function to reinforce the able-bodied audience's self-image of normality and the notion of the disabled as the inferior Other. Able-bodied characters are typically portrayed positively in the normality drama: "A normality as represented in the decency and support of those characters who exist around, and for, the impaired central character. 
Thus many of the disabled characters in such narratives are bitter, frustrated and unfulfilled and either antisocial or asocial" (193). Darke then identifies The Elephant Man (David Lynch, 1980) and Born on the Fourth of July (Oliver Stone, 1989) as archetypal films of this genre. Even in films in which seemingly positive images of the disabled are featured, the protagonist is still to be regarded as the abnormal Other, because in comparison to the other characters within that narrative the impaired character is still a comparatively second-class citizen in the world of the film. My Left Foot is, as always, a prime example: Christy Brown may well be a writer, relatively wealthy and happy, but he is not seen as sexual in any way (194). However, Dance Me to My Song defies such generic restrictions: Julia's temperament is upbeat and cheerful and her disability, rather than appearing tragic, is made to look healthy, not "second class", in comparison with her physically attractive, able-bodied but deeply unhappy carer, Madelaine (Joey Kennedy). Within the first few minutes of the film we see Madelaine, dissatisfied, as she stands inspecting her healthy, toned and naked body in the bathroom mirror, contrasted with images of Julia's twisted form, prostrate, pale and naked on the bed. Yet, in due course, it is the able-bodied girl who is shown to be insecure and lacking in character. Madelaine steals Julia's money and calls her "spastic". Foul-mouthed and short-tempered, Madelaine perversely positions Julia in her wheelchair to force her to watch as she has perfunctory sex with her latest boyfriend. Madelaine even masquerades as Julia, commandeering her voice synthesizer to give a fraudulently positive account of her on-the-job performance to the employment agency she works for. Madelaine's "axiomatic supremacy" is thoroughly undermined and, in the most striking contrast to the typical normality drama, Julia is unashamedly sexual: she is no Christy Brown. The affective juxtaposition of these two different personalities stems from the internal nature of Madelaine's problems compared to the external nature of Julia's. Madelaine has an emotional disability rather than a physical disability, and several scenes in the film show her reduced to helpless tears. Then one day, when Madelaine has left her to her own devices, Julia defiantly wheels herself outside and bumps into - almost literally - the handsome, able-bodied Eddie (John Brumpton). Cheerfully determined, Julia wins him over and a lasting friendship is formed. Having seen the joy that sex brings to Madelaine, Julia also wants carnal fulfilment, so she telephones Eddie and arranges a date. When Eddie arrives, he reads the text on her voice machine's screen containing the title line to the film, "Dance me to my song", and they share a tender moment. Eddie's gentleness as he dances Julia to her song ("Kizugu", written by Bernard Huber and John Laidler, as performed by Okapi Guitars) is simultaneously contrasted with the near-date-rapes Madelaine endures in her casual relationships. The conflict between Madelaine and Julia is such that it prompts Albert Moran and Errol Vieth to categorize the film as "women's melodrama": Dance Me to My Song clearly belongs to the genre of the romance. 
However, it is also important to recognize it under the mantle of the women's melodrama … because it has to do with a woman's feelings and suffering, not so much because of the flow of circumstance but rather because of the wickedness and malevolence of another woman who is her enemy and rival. (198-9) Melodrama is a genre that frequently resorts to depictions of disability in which a person condemned by society as disabled struggles to succeed in love: some prime examples include An Affair to Remember (Leo McCarey, 1957), involving a paraplegic woman, and The Piano (Jane Campion, 1993), in which a strong-spirited but mute woman achieves love. The more conventional Hollywood romances typically involve attractive, able-bodied characters. In Dance Me to My Song the melodramatic conflict between the two remarkably different women at first seems dominated by Madelaine, who states: "I know I'm good looking, good in bed ... better off than you, you poor thing" in a stream-of-consciousness delivery in which Julia is constructed as listener rather than converser. Julia is further reduced to the status of sub-human as Madelaine says: "I wish you could eat like a normal person instead of a bloody animal" and her erstwhile boyfriend Trevor says: "She looks like a fuckin' insect." Even the benevolent Eddie says: "I don't like leaving you alone but I guess you're used to it." To this the defiant Julia replies: "Please don't talk about me in front of me like I'm an animal or not there at all." Eddie is suitably chastised, and when he treats her to an over-priced ice-cream the shop assistant says "Poor little thing … She'll enjoy this, won't she?" Julia smiles, types the words "Fuck me!", and promptly drops the ice-cream on the floor. Eddie laughs supportively. "I'll just get her another one," says the flustered shop assistant, "and then get her out of here, please!" With striking eloquence, Julia wheels herself out of the shop, her voice machine announcing "Fuck me, fuck me, fuck me, fuck me, fuck me", as she departs exultantly. With this bold statement of independence and defiance in the face of patronising condescension, the audience sees Rose's burgeoning strength of character and agency reflected in the onscreen character she has created. Dance Me to My Song and the films mentioned above are, however, rare exceptions among the many films that dare to represent disability on the screen at all, compliant as the majority are with Darke's expectations of the normality drama. Significantly, the usual medical-model nexus in many normality films is ignored in Rose's screenplay: no medication, hospitals or white laboratory coats are to be seen in Julia's world. Finally, as I have described elsewhere, Julia is shown joyfully dancing in her wheelchair with Eddie while Madelaine proves her physical inferiority with a 'dance' of frustration around her broken-down car (see Starrs, "Dance"). In Rose's authorial vision, the audience's expectations of yet another film of the normality drama genre are subverted as the disabled protagonist proves superior to her 'normal' adversary in their melodramatic rivalry for the sexual favours of an able-bodied love-interest. Rolf de Heer as Auteur De Heer does not like to dwell on the topic of auteurism: in an interview in 2007 he somewhat impatiently states: I don't go in much for that sort of analysis that in the end is terminology. … Look, I write the damn things, and direct them, and I don't completely produce them anymore – there are other people. 
If that makes me an auteur in other people's terminologies, then fine. (Starrs, "Sounds" 20) De Heer has been described as a "remarkably non-egotistical filmmaker" (Davis, "Working Together"), which is possibly why he handed ownership of this film to Rose. Of the writer/actor who plied him with drink so he would agree to back her script, de Heer states: It is impossible to overstate the courage of the performance that you see on the screen. … Heather somehow found the means to respond on cue, to maintain the concentration, to move in the desired direction, all the myriad of acting fundamentals that we take for granted as normal things to do in our normal lives. ("Production Notes") De Heer's willingness to shift authorship from director to writer/actor is representative of this film's groundbreaking promotion of the potential for agency within disability. Rather than being passive and suffering, Rose is able to 'do.' As the lead actor she is central to the narrative. As the principal writer she is central to the film's production. And she does both. But in conflict with this auteurial intent is the temptation to describe Dance Me to My Song as an autobiographical documentary, since it is Rose herself, with her unique and obvious physical handicap, playing the film's heroine, Julia. In interview, however, de Heer apparently disagrees with this interpretation: Rolf de Heer is quick to point out, though, that the film is not a biography. "Not at all; only in the sense that writers use material from their own lives. Madelaine is merely the collection of the worst qualities of the worst carers Heather's ever had." Dance Me to My Song could be seen as a dramatised documentary, since it is Rose herself playing Julia, and her physical or surface life is so intense and she is so obviously handicapped. While he understands that response, de Heer draws a comparison with the first films that used black actors instead of white actors in blackface. "I don't know how it felt emotionally to an audience, I wasn't there, but I think that is the equivalent". (Urban) An example of an actor wearing "black-face" to portray a cerebral palsy victim might well be Gus Trikonis's 1980 film Touched By Love. In this, the disabled girl is unconvincingly played by the pretty, able-bodied actress Diane Lane. The true nature of the character's disability is hidden and cosmeticized to Hollywood expectations. Compared to that inauthentic film, Rose's screenwriting and performance in Dance Me to My Song is a self-penned fiction couched in unmediated reality, and it certainly warrants authorial recognition. Despite his unselfish credit-giving, de Heer's direction of this remarkable film is nevertheless detectable. His auteur signature is especially evident in his technological employment of sound, as I have argued elsewhere (see Starrs, "Avowal"). The first distinctly de Heer influence is the use of a binaural recording device - similar to that used in Bad Boy Bubby (1993) - to convey to the audience the laboured nature of Julia's breathing and to subjectively align the audience with her point of view. This apparatus provides a disturbing sound bed that is part wheezing, part grunting. There is no escaping Julia's physically unusual life, from her reliance on others for food, toilet and showering, to the half-strangled sounds emanating from her ineffectual larynx. But de Heer insists that Julia does speak, like Stephen Hawking, via her Epson RealVoice computerized voice synthesizer, and thus Julia manages to retain her dignity. 
De Heer has her play this machine like a musical instrument, its neatly modulated feminine tones immediately prompting empathy. Rose Capp notes de Heer's preoccupation with finding a voice for those minority groups within the population who struggle to be heard, stating: de Heer has been equally consistent in exploring the communicative difficulties underpinning troubled relationships. From the mute young protagonist of The Quiet Room to the aphasic heroine of Dance Me to My Song, De Heer's films are frequently preoccupied with the profound inadequacy or outright failure of language as a means of communication (21). Certainly, the importance to Julia of her only means of communication, her voice synthesizer, is stressed by de Heer throughout the film. Everybody around her has, to varying degrees, problems in correctly hearing or understanding both what and how Julia communicates via her alien mode of conversing, and she is frequently asked to repeat herself. Even the well-meaning Eddie says: "I don't know what the machine is trying to say". But it is ultimately via her voice synthesizer that Julia expresses her indomitable character. When first she meets Eddie, she types: "Please put my voice machine on my chair, STUPID." She proudly declares ownership of a condom found in the bathroom with "It's mine!" The callous Madelaine soon realizes Julia's strength is in her voice machine and withholds access to the device as punishment, for if she takes it away then Julia is less demanding of the self-centred carer. Indeed, the film, which starts off portraying the physical superiority of Madelaine, soon shows us that the carer's life, for all her able-bodied, free-love ways, is far more miserable than Julia's. As he has done in many of his other films, de Heer gives a voice, through significant directorial decisions, to those who might otherwise not be heard. In Rose's case, this is achieved most obviously via her electric voice synthesizer. I have also suggested elsewhere (see Starrs, "Dance") that de Heer has helped find a second voice for Rose via the language of dance, and in doing so has expanded the audience's understandings of quality of life for the disabled, as per Mike Oliver's social model of disability, rather than the more usual medical model of disability. Empowered by her act of courage with Eddie, Julia sacks her uncaring 'carer' and the film ends optimistically with Julia and her new man dancing on the front porch. By picturing the couple in long shot and from above, Julia's joyous dance of triumph is depicted as ordinary, normal and not deserving of close examination. This happy ending is intercut with a shot of Madelaine and her broken-down car, performing her own frustrated dance, and this further emphasizes that she was unable to 'dance' (i.e. communicate and compete) with Julia. The disabled performer such as Rose, whether deliberately appropriating a role or passively accepting it, usually struggles to reconcile two contrasting realities: (s)he is at once invisible in the public world of interhuman relations and simultaneously hyper-visible due to physical Otherness and subsequent instantaneous typecasting. But by the end of Dance Me to My Song, Rose and de Heer have subverted this notion of the disabled performer grappling with the dual roles of invisible victim and hyper-visible victim by depicting Julia as socially and physically adept. 
She 'wins the guy' and dances her victory as de Heer's inspirational camera looks down at her success like an omniscient and pleased god. Film academic Vivian Sobchack writes of the phenomenology of dance choreography for the disabled, and of her own experience of waltzing with the maker of her prosthetic leg, Steve, with the comment: "for the moment I did displace focus on my bodily immanence to the transcendent ensemble of our movement and I really began to waltz" (65). It is easy to imagine Rose's own, similar feeling of bodily transcendence in the closing shot of Dance Me to My Song as she shows she can 'dance' better than her able-bodied rival, content as she is with her self-identity. Conclusion: Validation of the Auteurial Other Rolf de Heer was a well-known film-maker by the time he directed Dance Me to My Song. His films Bad Boy Bubby (1993) and The Quiet Room (1996) had both screened at the Cannes International Film Festival. He was rapidly developing a reputation for non-mainstream representations of marginalised, subaltern populations, a cinematic trajectory that was to be further consolidated by later films privileging the voice of Indigenous Peoples in The Tracker (2002) and Ten Canoes (2006), the latter winning the Special Jury prize at Cannes. His films often feature unlikely protagonists or, as Liz Ferrier writes, are "characterised by vulnerable bodies … feminised … none of whom embody hegemonic masculinity" (65): they are the opposite of Hollywood's hyper-masculine, hard-bodied, controlling heroes. With a nascent politically correct worldview proving popular, de Heer may have considered the assigning of authorship to Rose a marketable idea, her being representative of a marginalized group, which, as Andy Medhurst might argue, may be more politically justifiable, as it apparently is with films of gay authorship. However, it must be emphasized that there is no evidence that de Heer's reticence about claiming authorship of Dance Me to My Song is motivated by pecuniary interests, nor does he seem to have been trying to distance himself from the project through embarrassment or dissatisfaction with the film or its relatively unknown writer/actor. Rather, he seems to be giving credit for authorship where credit is due, for as a result of Rose's tenacity and agency this film is, in two ways, her creative success. Firstly, it is a rare exception to the disability film genre defined by Paul Darke as the "normality drama", because in the film's diegesis Julia is shown triumphing not simply over the limitations of her disability, but over her able-bodied rival in love as well: she 'dances' better than the 'normal' Madelaine. Secondly, in gaining possession of the primary credits, and the mantle of the film's primary author, Rose is shown triumphing over other aspiring able-bodied film-makers in the notoriously competitive film-making industry. Although she was an unpublished and unknown author, the label "A film by Heather Rose" is, I believe, a deserved coup for the woman who set out to make "the most sexy and honest film about disability ever made". As with de Heer's other films in which marginalised peoples are given voice, he demonstrates a desire not to subjugate the Other, but to validate and empower him/her. He both acknowledges their authorial voices and credits them as essential beings, and in enabling such subaltern populations to be heard, willingly cedes his privileged position as a successful, white, male, able-bodied film-maker. 
In the credits of this film he seems to be saying 'I may be an auteur, but Heather Rose is a no less able auteur'. References Bordwell, David, and Kristin Thompson. Film Art: An Introduction. 4th ed. New York: McGraw-Hill, 1993. Capp, Rose. "Alexandra and the de Heer Project." RealTime + Onscreen 56 (Aug.-Sep. 2003): 21. 6 June 2008 ‹http://www.realtimearts.net/article/issue56/7153›. Caughie, John. "Introduction." Theories of Authorship. Ed. John Caughie. London: Routledge and Kegan Paul, 1981. 9-16. Darke, Paul. "Cinematic Representations of Disability." The Disability Reader. Ed. Tom Shakespeare. London and New York: Cassell, 1998. 181-198. Davis, Therese. "Working Together: Two Cultures, One Film, Many Canoes." Senses of Cinema, 2006. 6 June 2008 ‹http://www.sensesofcinema.com/contents/06/41/ten-canoes.html›. De Heer, Rolf. "Production Notes." Vertigo Productions. Undated. 6 June 2008 ‹http://www.vertigoproductions.com.au/information.php?film_id=10&display=notes›. Ferrier, Liz. "Vulnerable Bodies: Creative Disabilities in Contemporary Australian Film." Australian Cinema in the 1990s. Ed. Ian Craven. London and Portland: Frank Cass and Co., 2001. 57-78. Medhurst, Andy. "That Special Thrill: Brief Encounter, Homosexuality and Authorship." Screen 32.2 (1991): 197-208. Moran, Albert, and Errol Vieth. Film in Australia: An Introduction. Melbourne: Cambridge UP, 2006. Oliver, Mike. Social Work with Disabled People. Basingstoke: MacMillan, 1983. Rose Slattery, Heather. "ISAAC 2000 Conference Presentation." Words+ n.d. 6 June 2008 ‹http://www.words-plus.com/website/stories/isaac2000.htm›. Sobchack, Vivian. "'Choreography for One, Two, and Three Legs' (A Phenomenological Meditation in Movements)." Topoi 24.1 (2005): 55-66. Stahl, Frederick. "Standing Room Only for a Thunderbolt in a Wheelchair." Sydney Morning Herald 31 Oct. 2002. 6 June 2008 ‹http://www.smh.com.au/articles/2002/10/30/1035683471529.html›. Starrs, D. Bruno. "Sounds of Silence: An Interview with Rolf de Heer." Metro 152 (2007): 18-21. ———. "An Avowal of Male Lack: Sound in Rolf de Heer's The Old Man Who Read Love Stories (2003)." Metro 156 (2008): 148-153. ———. "Dance Me to My Song (Rolf de Heer 1997): The Story of a Disabled Dancer." Proceedings Scopic Bodies Dance Studies Research Seminar Series 2007. Ed. Mark Harvey. University of Auckland, 2008 (in press). Urban, Andrew L. "Dance Me to My Song, Rolf de Heer, Australia." Film Festivals, 1998. 6 June 2008 ‹http://www.filmfestivals.com/cannes98/selofus9.htm›.
APA, Harvard, Vancouver, ISO, and other styles