Journal articles on the topic 'Code division multiple access. Radio Mobile communication systems'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 44 journal articles for your research on the topic 'Code division multiple access. Radio Mobile communication systems.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Yarlykov, M. S., and S. M. Yarlykova. "Signal-detection and signal-processing algorithms for code-division multiple-access satellite mobile communications systems employed simultaneously with satellite radio navigation systems." Journal of Communications Technology and Electronics 51, no. 8 (August 2006): 874–94. http://dx.doi.org/10.1134/s1064226906080055.

2

Zidane, Mohammed, Said Safi, Mohamed Sabri, and Miloud Frikel. "Using Least Mean p-Power Algorithm to Correct Channel Distortion in MC-CDMA Systems." Journal of Telecommunications and Information Technology 3 (September 28, 2018): 23–30. http://dx.doi.org/10.26636/jtit.2018.114717.

Abstract:
This work focuses on adaptive Broadband Radio Access Network (BRAN) channel identification and on downlink Multi-Carrier Code Division Multiple Access (MC-CDMA) equalization. We use the normalized BRAN C channel model for 4G mobile communications, distinguishing between indoor and outdoor scenarios. On the one hand, BRAN C channel parameters are identified using the Least Mean p-Power (LMP) algorithm. On the other hand, we consider these coefficients in the context of adaptive equalization. We provide an overview and a mathematical formulation of MC-CDMA systems. Building on these fundamental concepts, the equalizer is investigated analytically to compensate for channel distortion in terms of the bit error rate (BER). The numerical simulation results, for various signal-to-noise ratios and different p thresholds, show that the presented algorithm can reproduce the measured BRAN C channel with different accuracy levels. Furthermore, as far as the adaptive equalization problem is concerned, the results obtained using the zero-forcing equalizer demonstrate that the algorithm is adequate for some particular values of the threshold p.
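The LMP identification step described in this abstract can be sketched in a few lines. The channel taps, step size, and p value below are illustrative stand-ins, not the measured BRAN C coefficients:

```python
import numpy as np

# Sketch of Least Mean p-Power (LMP) channel identification.
# The FIR channel taps below are hypothetical, not the BRAN C values.
rng = np.random.default_rng(0)
h = np.array([1.0, 0.5, -0.3])           # hypothetical channel to identify
mu, p, n_taps, n_samples = 0.01, 2.5, 3, 5000

x = rng.standard_normal(n_samples)       # training input
d = np.convolve(x, h)[:n_samples]        # channel output (noiseless here)
w = np.zeros(n_taps)                     # adaptive estimate of h

for k in range(n_taps, n_samples):
    xk = x[k - n_taps + 1:k + 1][::-1]   # current input regressor
    e = d[k] - w @ xk                    # a-priori estimation error
    # LMP update: the gradient of |e|^p gives p * |e|^(p-1) * sign(e)
    w += mu * p * np.abs(e) ** (p - 1) * np.sign(e) * xk

print(np.round(w, 2))
```

With p = 2 this reduces to the familiar LMS update; other p values trade robustness against convergence speed, which is the tuning the abstract studies.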
3

Jiménez-Pacheco, Alberto, Ángel Fernández-Herrero, and Javier Casajús-Quirós. "Design and Implementation of a Hardware Module for MIMO Decoding in a 4G Wireless Receiver." VLSI Design 2008 (January 31, 2008): 1–8. http://dx.doi.org/10.1155/2008/312614.

Abstract:
Future 4th Generation (4G) wireless multiuser communication systems will have to provide advanced multimedia services to an increasing number of users, making good use of the scarce spectrum resources. Thus, 4G system design should pursue both higher-transmission bit rates and higher spectral efficiencies. To achieve this goal, multiple antenna systems are called to play a crucial role. In this contribution we address the implementation in FPGAs of a multiple-input multiple-output (MIMO) decoder embedded in a prototype of a 4G mobile receiver. This MIMO decoder is part of a multicarrier code-division multiple-access (MC-CDMA) radio system, equipped with multiple antennas at both ends of the link, that is able to handle up to 32 users and provides raw transmission bit-rates up to 125 Mbps. The task of the MIMO decoder is to appropriately combine the signals simultaneously received on all antennas to construct an improved signal, free of interference, from which to estimate the transmitted symbols. A comprehensive explanation of the complete design process is provided, including architectural decisions, floating-point to fixed-point translation, and description of the validation procedure. We also report implementation results using FPGA devices of the Xilinx Virtex-4 family.
4

Thrimurthulu, V., and N. S. Murti Sarma. "Investigation on Interference Mitigation Schemes for Next Generation Cellular Communications." International Journal of Engineering & Technology 7, no. 2.20 (April 18, 2018): 230. http://dx.doi.org/10.14419/ijet.v7i2.20.14768.

Abstract:
The dramatic, rapid development and exponential growth of smartphones mean that wireless data traffic demand will only increase in the coming years. This forces a rethinking of current wireless cellular networks owing to the scarcity of the available spectrum. Two major challenges for evolving Long-Term Evolution (LTE) networks are to achieve improved cell coverage and system capacity compared with the Wideband Code Division Multiple Access (WCDMA) system. Efficient use of radio resources and dense frequency reuse are central to achieving these goals. However, dense frequency reuse may increase inter-cell interference, which in turn severely limits the number of users the system can support. Inter-cell interference can limit overall system performance in terms of throughput and spectral efficiency, especially for users located at the cell edge. Consequently, careful management of inter-cell interference becomes vital to improving LTE system performance. In this paper, interference mitigation schemes for LTE downlink networks are investigated. The eNB and the Mobile-Femto both offer the same resources and bandwidth, which creates an interference problem from each other's downlink signals to their UEs. This study adapts an efficient frequency reuse scheme that operates dynamically over distance and achieves improved results in the signal quality and throughput of Macro and Mobile-Femto UEs compared with previous interference management schemes, e.g. Fractional Frequency Reuse factor 1 (NoFFR-3) and Fractional Frequency Reuse factor 3 (FFR-3).
5

Yang, Tao, and Leon O. Chua. "Chaotic Digital Code-Division Multiple Access (CDMA) Communication Systems." International Journal of Bifurcation and Chaos 07, no. 12 (December 1997): 2789–805. http://dx.doi.org/10.1142/s0218127497001886.

Abstract:
In this paper, the structure, principle and framework of chaotic digital code-division multiple access ((CD)2MA) communication systems are presented. Unlike existing CDMA systems, (CD)2MA systems use continuous pseudo-random time series to spread the spectrum of the message signal, and the spread signal is then sent directly through a channel to the receiver. In this sense, the carrier used in (CD)2MA is a continuous pseudo-random signal instead of the single tone used in CDMA. We give the statistical properties of the noise-like carriers. In a (CD)2MA system, every mobile station has the same structure and parameters; only the initial conditions assigned to different mobile stations differ. Instead of synchronizing two binary pseudo-random sequences as in CDMA systems, we use an impulsive control scheme to synchronize two chaotic systems in (CD)2MA. The simulation results show that the channel capacity of (CD)2MA is twice as large as that of CDMA.
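The chaotic-carrier idea in this abstract can be illustrated with a logistic-map spreading sequence; the map, initial condition, and spreading length below are illustrative choices, not the authors' exact construction:

```python
import numpy as np

# Sketch of chaotic spreading: each user derives a continuous-valued
# carrier from the logistic map, distinguished only by its initial
# condition (as in (CD)2MA); all parameters here are illustrative.
def chaotic_carrier(x0, length):
    seq = np.empty(length)
    x = x0
    for i in range(length):
        x = 4.0 * x * (1.0 - x)          # fully chaotic logistic map
        seq[i] = x
    return seq - seq.mean()              # zero-mean, noise-like carrier

spread_len = 127
bits = np.array([1, -1, 1, 1, -1])       # one user's message bits
carrier = chaotic_carrier(0.37, spread_len)

# Spread: each bit modulates one carrier period; despread: correlate
# the received chips against the same carrier and take the sign.
tx = np.concatenate([b * carrier for b in bits])
rx_bits = [np.sign(tx[i * spread_len:(i + 1) * spread_len] @ carrier)
           for i in range(len(bits))]
print(rx_bits)
```

A second user would use the same map with a different initial condition, giving a nearly uncorrelated carrier, which is exactly the multiple-access mechanism the paper describes.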
6

Fapojuwo, A. O. "Radio capacity of direct sequence code division multiple access mobile radio systems." IEE Proceedings I Communications, Speech and Vision 140, no. 5 (1993): 402. http://dx.doi.org/10.1049/ip-i-2.1993.0058.

7

Fischer, G., F. Pivit, and W. Wiebeck. "Link budget comparison of different mobile communication systems based on EIRP and EISL." Advances in Radio Science 2 (May 27, 2005): 127–33. http://dx.doi.org/10.5194/ars-2-127-2004.

Abstract:
The metric EISL (Equivalent Isotropic Sensitivity Level), describing the effective sensitivity level usable at the air interface of a mobile or a base station, is used to compare mobile communication systems based either on time division or code division multiple access in terms of coverage and emission characteristics. It turns out that systems that organize multiple access by different codes rather than different timeslots run at lower emission levels and offer greater coverage.
8

MINOMO, MASAHIRO, and TATSURO MASAMURA. "PROSPECTS FOR MOBILE COMMUNICATION SYSTEMS AND KEY TECHNOLOGIES CAPABLE OF SUPPORTING EXPANDING MOBILE MULTIMEDIA SERVICES." Journal of Circuits, Systems and Computers 13, no. 02 (April 2004): 237–51. http://dx.doi.org/10.1142/s0218126604001404.

Abstract:
The first commercial service of the 3rd generation (3G) mobile communication system, IMT-2000 (International Mobile Telecommunications), was launched in October 2001 in Japan. This was the first 3G service employing Wideband Code Division Multiple Access (W-CDMA) as its air interface between mobile terminals and base stations. The new 3G system is expected to accelerate the deployment of future mobile multimedia services, which effectively began with the "i-mode" service in February 1999 in Japan. Research activities into future mobile communication systems capable of supporting a vastly expanded market for mobile multimedia services are underway worldwide. This paper describes the vision, service trends, and technical challenges of such future systems. Broadband packet wireless access and Variable Spreading Factor Orthogonal Frequency and Code Division Multiplexing (VSF-OFCDM) are promising candidates for realizing future mobile communication systems that provide higher transmission rates and capacity than 3G systems.
9

Miriampally, Venkata Raghavendra, G. Subba Rao, and V. Sudheer Raja. "Determination of Number of Channels in Multiple Access Techniques for Wireless Communications." International Journal of Informatics and Communication Technology (IJ-ICT) 4, no. 1 (April 1, 2015): 1. http://dx.doi.org/10.11591/ijict.v4i1.pp1-6.

Abstract:
In a wireless communication system, it is desirable to allow the subscriber to send information to the base station while simultaneously receiving information from the base station. Multiple access techniques are used to allow many mobile users to share a finite amount of radio spectrum simultaneously. Frequency division multiple access (FDMA), time division multiple access (TDMA), and code division multiple access (CDMA) are the three major access techniques used to share the available bandwidth in a wireless communication system. In this paper we calculate the number of channels required for the FDMA and TDMA techniques depending on various factors such as spectrum and channel bandwidth.
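The channel-count calculation this abstract describes reduces to simple arithmetic; the spectrum figures below are illustrative (AMPS-like and GSM-like), not necessarily the paper's inputs:

```python
# Channel-count formulas for the FDMA/TDMA comparison; the bandwidth
# figures used below are illustrative examples, not the authors' data.
def fdma_channels(total_bw_hz, guard_bw_hz, channel_bw_hz):
    # Each user occupies one carrier; a guard band sits at each band edge.
    return int((total_bw_hz - 2 * guard_bw_hz) / channel_bw_hz)

def tdma_channels(total_bw_hz, guard_bw_hz, channel_bw_hz, slots_per_carrier):
    # Each carrier is time-shared by `slots_per_carrier` users.
    return slots_per_carrier * fdma_channels(total_bw_hz, guard_bw_hz,
                                             channel_bw_hz)

n_fdma = fdma_channels(12.5e6, 10e3, 30e3)      # AMPS-like: 416 channels
n_tdma = tdma_channels(12.5e6, 10e3, 200e3, 8)  # GSM-like carrier, 8 slots
print(n_fdma, n_tdma)
```

The same total spectrum yields more logical channels under TDMA because each 200 kHz carrier is shared by eight time slots.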
10

Gurevich, V., and S. Egorov. "Modeling of Amplitude Characteristic in Radio Channels of Code Division Multiple Access Systems." Proceedings of Telecommunication Universities 6, no. 2 (2020): 30–38. http://dx.doi.org/10.31854/1813-324x-2020-6-2-30-38.

Abstract:
In CDMA radio access systems, amplitude distortions in a nonlinear amplifier (NA) of a group signal lead to bit errors at the outputs of subscriber channels. To assess the permissible distortion limits and their influence on the transmission quality of subscriber signals, an electronic model of the amplitude characteristic (AC) and analytical relations are needed that relate the probability of error when registering the output signal of the communication channel to the nonlinearity of the NA's AC and other destabilizing factors. The article compares alternative mathematical models of the NA's AC. In contrast to traditional methods of analysis, usually limited to the choice of models with fixed parameters, a method for variably determining the AC parameters is considered. The results are a comparison of known methods for approximating the AC of broadband nonlinear power amplifiers of radio signals and a proposed algorithm for selecting model parameters, based on the Rapp model, for CDMA systems with QAM.
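The Rapp model named in this abstract has a standard AM/AM form; the smoothness factor and saturation level below are illustrative values, not the paper's fitted parameters:

```python
import numpy as np

# Rapp AM/AM model of a solid-state amplifier's amplitude characteristic
# (AC); the smoothness p and saturation level here are illustrative.
def rapp_am_am(r, a_sat=1.0, p=2.0, gain=1.0):
    """Output envelope for input envelope r under the Rapp model."""
    v = gain * r
    return v / (1.0 + (v / a_sat) ** (2 * p)) ** (1.0 / (2 * p))

# Small inputs pass almost linearly; large inputs compress toward a_sat.
r = np.linspace(0.0, 3.0, 7)
print(np.round(rapp_am_am(r), 3))
```

Larger p makes the transition from the linear region to saturation sharper, which is the knob such parameter-selection algorithms tune against measured ACs.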
11

Xu, Lei, Yu Feng Zhang, Jian Guo, and Lian Gao. "The Wireless Network Positioning Strategies Basing on Mobile Terminal." Advanced Materials Research 546-547 (July 2012): 1124–29. http://dx.doi.org/10.4028/www.scientific.net/amr.546-547.1124.

Abstract:
With the continuous development of terminal technologies, the use of mobile terminals is becoming more and more diverse. The many value-added services brought by such applications are gradually becoming a new profit opportunity for firms. Location itself has great value: with the display of an electronic map or the support of a geographic information database, a variety of information can be shown, tracked, and handled. Such location-based services (LBS) are widely used in the public wireless data field, and mobile location services are recognized as the most attractive wireless data value-added business in 3G networks. This paper first discusses the three positioning standards of the 3GPP; then, combined with the existing GSM/GPRS cellular radio communication network, it presents specific single-base-station positioning implementation strategies in the TD-SCDMA (Time Division Synchronous Code Division Multiple Access) system.
12

Trung, Nguyen Huu, and Doan Thanh Binh. "LARGE-SCALE MIMO MC-CDMA SYSTEM USING COMBINED MULTIPLE BEAMFORMING AND SPATIAL MULTIPLEXING." Vietnam Journal of Science and Technology 56, no. 1 (January 30, 2018): 102. http://dx.doi.org/10.15625/2525-2518/56/1/9204.

Abstract:
This paper proposes a novel Large-Scale (massive) Multiple-Input Multiple-Output Multi-Carrier Code Division Multiple Access (LS MIMO MC-CDMA) model and its application to Fifth-Generation Mobile Communication Systems (5G). The system combines a cylindrical-array-antenna multiple-beamforming architecture with spatial multiplexing. The model is optimized by a min-max criterion in order to minimize side lobes and maximize compensation of propagation loss. The Monte Carlo simulation results agree with the analytical solution for system performance.
13

Nguyen, Minh Tuong, Viktor I. Nefedov, Igor V. Kozlovsky, Alexey V. Malafeev, Kirill A. Selenya, and Natalia A. Mirolyubova. "Analysis of the Raman spectrum of high-power amplifiers of wireless communication systems." Russian Technological Journal 7, no. 6 (January 10, 2020): 96–105. http://dx.doi.org/10.32362/2500-316x-2019-7-6-96-105.

Abstract:
At present, the transfer of information is an integral part of the technologies actively developing within the process called the Fourth Industrial Revolution, in which space-satellite, satellite, and other mobile wireless communication systems play an increasingly important role. Almost all of them employ multiple access, that is, a method of dividing the common resource of the communication channel between subscribers (each mobile station can use a satellite repeater or the base station of a mobile wireless communication system to transmit its signals regardless of the operation of other stations). Multiple-access communication systems are used for digital radio and television broadcasting, in high-speed communication lines, in wireless local area networks, for data transmission in the microwave range, and also for communication with various mobile subscribers. The radio transmitting and receiving paths of multiple-access communication systems carry multiplex signals (the sum of the subscriber signal powers) with very complex types of digital envelope modulation, so they use wide operating bands. As the quality of information transmission in mobile wireless communication systems increases, special requirements are placed on the power amplification systems (PAS) of the transmit and receive paths, which must have high efficiency and high output power, the required bandwidth, network capacity, and linearity of the message transmission channels. To achieve maximum efficiency in the PAS, the operating point of its amplifying element should lie near the saturation region, on the main nonlinearity of the transfer characteristic. When multiple signals are introduced simultaneously into the PAS, it generates unfiltered intermodulation harmonics (IH).
Intermodulation harmonics arise from the nonlinearity of the amplitude characteristics and the unevenness of the phase-amplitude characteristics, and from the need to operate the PAS at the highest efficiency, which requires shifting the operating point toward the saturation thresholds of its amplifying elements. Since the IH oscillations effectively represent noise for neighboring communication channels and cannot in principle be filtered out, an equalizer (or optimizer) of characteristics is needed to reduce the level of this interference in the output (Raman) spectrum of the PAS.
14

Luan, Zhijun, and Hunli Fan. "Design and Implementation of Wireless Sensor Cellular Network Based on Android Platform." International Journal of Online and Biomedical Engineering (iJOE) 15, no. 01 (January 17, 2019): 18. http://dx.doi.org/10.3991/ijoe.v15i01.9774.

Abstract:
The fusion of cellular networks and wireless sensor networks is a key research problem in current Internet of Things (IoT) technology. The design and implementation of a wireless sensor cellular network based on the Android platform is the main subject of study. First, wireless sensor networks and cellular networks, the Android platform, the fusion strategy for cellular networks and Wireless Sensor Networks (WSN), and the WSN gateway platform are introduced. Then related functions are described, mainly including terminal registration management, connection management, authentication management, terminal fault management, and communication message design on the gateway and sensor-network side. Finally, related functional tests are conducted. The results show that the designed application-layer gateway system can connect the sensor network with the cellular network. Time Division-Synchronous Code Division Multiple Access (TD-SCDMA) and the cellular network cover the whole world, and the cellular network has been interconnected with the Internet through access network technologies such as General Packet Radio Service (GPRS), fourth-generation (4G) mobile communication standards, and Long Term Evolution (LTE), thus enabling the sensor network to access the Internet anytime and anywhere.
15

Wadday, Ahmed Ghanim, Faris Mohammed Ali, and Hayder Jawad Mohammed Albattat. "Design of high scalability multi-subcarrier RoF hybrid system based on optical CDMA/TDM." Indonesian Journal of Electrical Engineering and Computer Science 21, no. 2 (February 1, 2021): 927. http://dx.doi.org/10.11591/ijeecs.v21.i2.pp927-937.

Abstract:
Radio over fiber (RoF) technology is regarded as a crucial way to solve problems in wireless communication systems. The growth of internet applications also brings a tremendous increase in bandwidth demand for different applications. Therefore, the development of optical networks with maximum bandwidth, using different multiple access techniques, is very important. The optical code division multiple access (OCDMA) technique has been considered a good solution for high-bandwidth networks. A hybrid optical system of OCDMA and optical time division multiplexing (OTDM) is proposed in this paper to increase the number of simultaneous users. The results demonstrate that the hybrid OCDMA/OTDM system can considerably increase network scalability while ensuring a sufficient data rate and an acceptable bit error rate: M-user OCDMA signals can be transmitted in different channels of an OTDM system. OCDMA is used here because of its wideband facility compared with other access techniques, and OTDM and SCM are utilized for their high scalability in the radio network. The combination of this efficient access technique and powerful time-sharing media leads to increased system scalability.
16

Dong, Jian, Xiaping Yu, and Guoqiang Hu. "Design of a Compact Quad-Band Slot Antenna for Integrated Mobile Devices." International Journal of Antennas and Propagation 2016 (2016): 1–9. http://dx.doi.org/10.1155/2016/3717681.

Abstract:
In order to incorporate different communication standards into a single device, a compact quad-band slot antenna is proposed in this paper. The proposed antenna is composed of a dielectric substrate, a T-shaped microstrip patch with a circular slot and an inverted L-slot, and a comb-shaped ground on the back of the substrate. By adopting these structures, it can produce four different bands while maintaining a small size and a simple structure. Furthermore, a prototype of the quad-band antenna is designed and fabricated. The simulated and measured results show that the proposed antenna can operate over the 1.79–2.63 GHz, 3.46–3.97 GHz, 4.92–5.85 GHz, and 7.87–8.40 GHz bands, which cover the entire PCS (Personal Communications Service, 1.85–1.99 GHz), UMTS (Universal Mobile Telecommunications System, 1.92–2.17 GHz), WCDMA (wideband code-division multiple access, 2.1 GHz), Bluetooth (2.4–2.48 GHz), WiBro (Wireless Broadband access service, 2.3–2.39 GHz), WLAN (Wireless Local Area Networks, 2.4/5.2/5.8 GHz), WiMAX (Worldwide Interoperability for Microwave Access, 2.5/3.5/5.5 GHz), and X-band SATCOM (7.9–8.4 GHz) applications. The proposed antenna is particularly attractive for mobile devices integrating multiple communication systems.
17

Gultom, Imeldawaty. "CDMA Modulation for Communication System Environment using Frequency Hopping Spread Spectrum." International Innovative Research Journal of Engineering and Technology 6, no. 1 (September 30, 2020): EC-1–EC-13. http://dx.doi.org/10.32595/iirjet.org/v6i1.2020.131.

Abstract:
In this paper, an overall framework for a joint sensing and communication system is presented, with special emphasis on the communication segment at 85 GHz. Code division multiplexing using frequency hopping spread spectrum signals is implemented at 85 GHz to take advantage of reduced interference between ambient communications. The framework, which spans the entire signal-processing chain, is built, explained, and simulated in MATLAB using data networking. A template able to model scattering and fading, including radio-frequency-block and synchronization non-idealities, is built up and analyzed. In addition, a channel model is implemented in the WinProp technology and embedded into the Simulink simulation. A previous paper implemented code division multiple access using a direct-sequence spread spectrum at 77 GHz for secure communication; there, noise distortion and interference arose in the communication system, leading to poor communication between transmitter and receiver. To overcome these problems, this paper explains the implementation of code division multiple access using a frequency hopping spread spectrum for better and more secure communications. By using frequency-hopping spread spectrum technology, noise distortion and interference between the transmitter and the receiver can be reduced, so that the system can transmit signals in the same frequency range without interference or distortion. FHSS systems can allow a higher aggregate bandwidth for coverage because FHSS provides more channels in the same range of frequencies. The module is assessed in terms of the bit error rate by adding white Gaussian noise, and the results are shown to agree with the theoretical assumptions. The system is further improved by a Rake-receiver configuration without any distortions.
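The frequency-hopping idea in this abstract can be sketched by comparing two pseudo-random hop patterns: distinct patterns rarely occupy the same channel at once, which is why interference drops. The channel count and seeds below are illustrative, not the paper's 85 GHz design:

```python
import random

# Sketch of FHSS channel selection: each link hops over the shared band
# using its own pseudo-random pattern; channel count and seeds are
# illustrative (79 channels echoes classic Bluetooth-style hopping).
N_CHANNELS, N_HOPS = 79, 1000

def hop_pattern(seed, n_hops, n_channels=N_CHANNELS):
    rng = random.Random(seed)
    return [rng.randrange(n_channels) for _ in range(n_hops)]

a = hop_pattern(seed=1, n_hops=N_HOPS)   # link A's hop sequence
b = hop_pattern(seed=2, n_hops=N_HOPS)   # link B's hop sequence
collisions = sum(x == y for x, y in zip(a, b))
print(f"{collisions / N_HOPS:.1%} of hops collide")  # expect roughly 1/79
```

A collision on one hop corrupts only that dwell's symbols, which interleaving and coding can then recover; this is the interference-reduction mechanism relative to staying on one carrier.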
18

van der Hofstad, Remco, and Marten J. Klok. "Improving the performance of third-generation wireless communication systems." Advances in Applied Probability 36, no. 04 (December 2004): 1046–84. http://dx.doi.org/10.1017/s0001867800013318.

Abstract:
The third-generation (3G) mobile communication system uses a technique called code division multiple access (CDMA), in which multiple users use the same frequency and time domain. The data signals of the users are distinguished using codes. When there are many users, interference deteriorates the quality of the system. For more efficient use of resources, we wish to allow more users to transmit simultaneously, by using algorithms that utilize the structure of the CDMA system more effectively than the simple matched filter (MF) system used in the proposed 3G systems. In this paper, we investigate an advanced algorithm called hard-decision parallel interference cancellation (HD-PIC), in which estimates of the interfering signals are used to improve the quality of the signal of the desired user. We compare HD-PIC with MF in a simple case, where the only two parameters are the number of users and the length of the coding sequences. We focus on the exponential rate for the probability of a bit-error, explain the relevance of this parameter, and investigate how it scales when the number of users grows large. We also review extensions of our results, proved elsewhere, showing that in HD-PIC, more users can transmit without errors than in the MF system.
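The MF and HD-PIC stages compared in this abstract can be sketched for a small synchronous system; the three hand-picked codes (two of them non-orthogonal) and the bit vector below are illustrative, not from the paper:

```python
import numpy as np

# Sketch of matched-filter (MF) detection followed by one hard-decision
# parallel interference cancellation (HD-PIC) stage for a synchronous
# CDMA link; codes and bits are illustrative.
codes = np.array([[1,  1,  1,  1, 1,  1,  1,  1],
                  [1, -1,  1, -1, 1, -1,  1, -1],
                  [1,  1, -1, -1, 1,  1, -1,  1]], dtype=float)
bits = np.array([1.0, -1.0, 1.0])        # transmitted bits, one per user
r = bits @ codes                         # noiseless received chip vector

# Stage 1: conventional MF, correlate r with each user's own code.
mf = np.sign(codes @ r)

# Stage 2 (HD-PIC): rebuild every interferer from the MF decisions,
# subtract it, and re-detect the desired user on the cleaned signal.
total = mf @ codes                       # all users as currently decided
pic = np.array([np.sign(codes[k] @ (r - (total - mf[k] * codes[k])))
                for k in range(len(bits))])
print(mf, pic)
```

In this toy case the MF is already correct, so PIC simply confirms it; the paper's analysis concerns the regime where cross-correlations push the MF into errors that the PIC stage can undo.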
20

Ogunrinola, Olawale Oluwasegun, Isaiah Opeyemi Olaniyi, Segun A. Afolabi, Gbemiga Abraham Olaniyi, and Olushola Emmanuel Ajeigbe. "Modelling and Development of a Radio Resource Control and Scheduling Algorithm for Long-Term Evolution (LTE) Uplink." Review of Computer Engineering Studies 8, no. 2 (June 30, 2021): 23–34. http://dx.doi.org/10.18280/rces.080201.

Abstract:
Modern radio communication services transmit signals from an earth station to a high-altitude station, space station, or space radio system via a feeder link, while in Global Systems for Mobile Communication (GSM) and computer networks the radio uplink transmits from cell phones to the base station, linking the network core to the communication interface via an upstream facility. To date, Single-Carrier Frequency Division Multiple Access (SC-FDMA) has been adopted for uplink access in the Long-Term Evolution (LTE) scheme by the 3GPP. In this paper, LTE uplink radio resource allocation is addressed as an optimization problem, where the desired solution is the mapping of schedulable UEs to schedulable Resource Blocks (RBs) that maximizes the proportional fairness metric. Particle swarm optimization (PSO) has been employed for this research: PSO is easy to implement for real-time optimization problems and has fewer parameters to adjust than other evolutionary algorithms. The proposed scheme was found to outperform First Maximum Expansion (FME) and Recursive Maximum Expansion (RME) in terms of simulation time and fairness while maintaining throughput.
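The proportional-fairness metric such a scheduler maximizes can be sketched per RB; the UE names, average throughputs, and per-RB rates below are hypothetical, and a real LTE uplink scheduler must also respect SC-FDMA's contiguous-RB constraint, which is what the PSO search handles:

```python
# Sketch of the proportional-fairness (PF) metric for mapping UEs to
# resource blocks (RBs): each RB favors the UE whose instantaneous rate
# is largest relative to its past average throughput. All numbers are
# hypothetical.
ues = {"ue1": {"avg": 2.0}, "ue2": {"avg": 0.5}}        # Mbit/s averages
inst_rate = {("ue1", 0): 3.0, ("ue2", 0): 1.2,          # rate per (UE, RB)
             ("ue1", 1): 2.8, ("ue2", 1): 0.4}

def pf_winner(rb):
    # PF metric = instantaneous rate / long-term average throughput.
    return max(ues, key=lambda u: inst_rate[(u, rb)] / ues[u]["avg"])

print(pf_winner(0), pf_winner(1))
```

Here the lightly-served ue2 wins RB 0 despite its lower absolute rate, which is exactly the fairness-versus-throughput trade-off the PF metric encodes.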
21

Bhatt, P. R. "Internationalisation and Innovation: A Case Study of Nokia." Vision: The Journal of Business Perspective 6, no. 2 (July 2002): 121–29. http://dx.doi.org/10.1177/097226290200600212.

Abstract:
NOKIA is one of the 'e-generation' companies, which rely on the web to conduct their everyday business, demanding a richer and more personalized experience. Its objective is 'to transform the Digital Age to a truly Mobile Age', giving everyone access to information. Nokia is the undisputed global king of mobile communication. Its strategy is to become a global player in telecommunications through 'collaboration and innovations'. It has made spectacular innovations in mobile communications, bringing technologies such as General Packet Radio Services (GPRS) and Wideband Code Division Multiple Access (W-CDMA) as mobile moves to the third generation (3G). Nokia has established its cutting-edge technology and trend-setting lifestyle offerings while unveiling its mobile handset products. In 3G services, Nokia will offer e-mail, weather information, maps, route planning, traffic information, bank account data, views, travel information, etc. Nokia adopted a strategy of mergers, acquisitions, alliances and collaboration to gain superiority in technology and competitive advantage. While Nokia is the market leader in handset manufacturing with a 35.3% share, Ericsson is the king of wireless network equipment with a 33% market share. Nokia's performance was impressive during 1996–2000. Nokia's future growth areas include market leadership in security infrastructure for corporates, supplying solutions to help corporations block viruses and intruders at their network gateways.
APA, Harvard, Vancouver, ISO, and other styles
22

Motade, Sumitra N., and Anju V. Kulkarni. "Incremental gradient algorithm for multiuser detection in multi-carrier DS-CDMA system under modulation schemes." International Journal of Engineering & Technology 7, no. 2.6 (March 11, 2018): 311. http://dx.doi.org/10.14419/ijet.v7i2.6.11270.

Full text
Abstract:
Nowadays, multicarrier direct-sequence code division multiple access (MC DS-CDMA) systems are used in mobile communication. The performance of these systems is limited by the multiple access interference (MAI) created by spread-spectrum users in the channel, as well as by background channel noise. This paper proposes an incremental gradient descent (IGD) multi-user detection (MUD) scheme for MC DS-CDMA systems that can achieve near-optimum performance while its implementation complexity is linear in the number of users. The IGD algorithm attempts to perform optimum MUD by updating one user's bit decision per iteration in the best way. The algorithm accelerates the convergence of the gradient search by averaging. When a minimum mean square error (MMSE) MUD is employed to initialize the proposed algorithm, the gradient search converges to a solution with optimum performance in all cases tested. Further, the iterative tests indicate that the proposed IGD algorithm performs well in cases where other suboptimum algorithms perform poorly. Simulations compare the proposed IGD algorithm with conventional detectors.
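The one-user-per-iteration idea can be illustrated with a minimal sketch. This is a greedy coordinate (bit-flip) descent on the least-squares detection objective for a toy synchronous 2-user CDMA channel, not the authors' exact IGD algorithm; codes, bits and the noiseless channel are assumptions made for the example.

```python
def matvec(S, b):
    """Noiseless received chip vector r = S @ b (S is chips x users)."""
    return [sum(S[c][k] * b[k] for k in range(len(b))) for c in range(len(S))]

def residual_energy(S, b, r):
    """Detection metric ||r - S b||^2 for a candidate bit vector b in {-1, +1}^K."""
    y = matvec(S, b)
    return sum((r[c] - y[c]) ** 2 for c in range(len(r)))

def bitflip_mud(S, r, b0, sweeps=10):
    """One user per step: flip that user's bit only if it lowers the residual."""
    b = list(b0)
    for _ in range(sweeps):
        improved = False
        for k in range(len(b)):
            before = residual_energy(S, b, r)
            b[k] = -b[k]
            if residual_energy(S, b, r) < before:
                improved = True
            else:
                b[k] = -b[k]  # revert the flip
        if not improved:
            break
    return b

# Two synchronous users with length-4 spreading codes (columns of S).
S = [[1, 1], [1, -1], [1, 1], [1, -1]]
true_bits = [1, -1]
r = matvec(S, true_bits)          # noiseless received signal
detected = bitflip_mud(S, r, b0=[-1, -1])
```

In the noiseless toy case the search recovers the transmitted bits exactly; a real detector would start from an MMSE estimate, as the abstract describes.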
APA, Harvard, Vancouver, ISO, and other styles
23

Karim, Bakhtiar Ali, and Bzhar Rahaman Othman. "Study of Uplink Interference in UMTS Network: ASIACELL Company, Iraq." Kurdistan Journal of Applied Research 5, no. 1 (June 9, 2020): 137–48. http://dx.doi.org/10.24017/science.2020.1.7.

Full text
Abstract:
The Universal Mobile Telecommunications System (UMTS) is the third generation (3G) of mobile communication, based on wideband code division multiple access (W-CDMA) radio access to provide bandwidth and spectral efficiency. Interference in a 3G system is significantly lower than in the preceding generations. However, this does not mean 3G is free from the issues associated with interference, such as low signal quality and call drops. The interference level in UMTS can be measured using the well-known parameter Received Total Wideband Power (RTWP). This parameter is affected by many factors, such as the number of users connected to the system, combining second generation (2G) and 3G frequencies within the same geographical area, geographical causes (differences in altitude), and hardware impairment. In this paper we intensively study how these factors affect the uplink interference level (i.e. the RTWP value) in the 3G system used by a particular telecommunications company (Asiacell, Iraq). The obtained data show that call drop is the most serious issue raised by high RTWP values in the 3G system. We demonstrate that system enhancements, in terms of a lower RTWP level, are obtained by adding a second carrier to the sites, separating the 2G band from the 3G band using a special filter, and optimizing the hardware components.
APA, Harvard, Vancouver, ISO, and other styles
24

Berceanu, Madalina-Georgiana, Carmen Florea, and Simona Halunga. "Performance Comparison of Massive MIMO System with Orthogonal and Nonorthogonal Multiple Access for Uplink in 5G Systems." Applied Sciences 10, no. 20 (October 14, 2020): 7139. http://dx.doi.org/10.3390/app10207139.

Full text
Abstract:
In the attempt to respond to market demands, new techniques for wireless communication systems have been proposed to ensure, for all active users sharing the same network cell, an increased quality of service, regardless of environmental factors such as their position within the cell, time, space, climate, and noise. One example is the nonorthogonal multiple access (NOMA) technique, proposed within the 5G standard, known for supporting massive connectivity and a more efficient use of radio resources. This paper presents two new sets of complex codes, multiple-user shared-access (MUSA) and extended MUSA (EMUSA), together with an allocation algorithm that keeps the intercorrelation as low as possible, which can be used in MUSA for the 5G NOMA-based scheme. It also analyzes the possibility of creating complex codes starting from PN sequences (cPN), a novel idea proposed in this paper whose results are promising with respect to the overall system performance. First, the basic principles of MUSA are presented; next, a description of the proposed system is provided, whose performance is tested using Monte Carlo MATLAB simulations based on bit error rate (BER) versus signal-to-noise ratio (SNR). The system performance is evaluated in different scenarios and compared with classical code division multiple access (CDMA), considering the following system parameters: the number of antennas at the receiver side and the number of active users.
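To make the low-intercorrelation allocation idea concrete, here is a small sketch: short complex spreading codes with chips from a small alphabet, and a greedy pass that keeps only codes whose normalized cross-correlation with already-chosen codes stays below a threshold. The alphabet, code length and 0.5 threshold are assumptions for illustration, not the paper's actual MUSA/EMUSA code sets.

```python
import itertools

# MUSA-style short complex spreading chips: real and imaginary parts drawn
# from {-1, 0, 1}, with the all-zero chip excluded; code length is 3.
alphabet = [complex(re, im) for re in (-1, 0, 1) for im in (-1, 0, 1) if (re, im) != (0, 0)]

def cross_corr(c1, c2):
    """Magnitude of the normalized inner product between two complex codes."""
    num = abs(sum(a * b.conjugate() for a, b in zip(c1, c2)))
    den = (sum(abs(a) ** 2 for a in c1) * sum(abs(b) ** 2 for b in c2)) ** 0.5
    return num / den

# Greedy allocation: accept a candidate only if its correlation with every
# code already chosen stays below the threshold.
chosen = []
for cand in itertools.product(alphabet, repeat=3):
    if all(cross_corr(cand, c) <= 0.5 for c in chosen):
        chosen.append(cand)
    if len(chosen) == 4:
        break
```

By construction, every pair of codes in `chosen` has cross-correlation at most 0.5.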
APA, Harvard, Vancouver, ISO, and other styles
25

OH, TICK HUI, and KIM GEOK TAN. "IMAGE TRANSMISSION THROUGH MC-CDMA CHANNEL: AN IMAGE QUALITY EVALUATION." International Journal of Wavelets, Multiresolution and Information Processing 06, no. 06 (November 2008): 827–50. http://dx.doi.org/10.1142/s0219691308002707.

Full text
Abstract:
The multicarrier code division multiple access (MC-CDMA) system is considered an advancement in mobile communication systems. Seen as the next-generation communication technology after 3G, it allows signals to be easily transmitted and received using Fast Fourier Transform (FFT) devices without increasing transmitter and receiver complexity. Besides, the MC-CDMA system provides good spectral efficiency. If the number of sub-carriers and the spacing between sub-carriers are chosen properly, it is unlikely that all the sub-carriers will be in a deep fade; thus, the system provides frequency diversity. Frequency diversity means that frequencies separated by more than the coherence bandwidth of the channel will be uncorrelated and thus will not experience the same fades. Three types of multicarrier CDMA systems have been proposed: MC-CDMA, multicarrier DS-CDMA and multitone (MT) CDMA. In MC-CDMA systems, the transmitter spreads the user data over different sub-carriers using a spreading code (Walsh-Hadamard code) in the frequency domain. The multicarrier DS-CDMA system spreads the serial-to-parallel (S/P) converted data streams using a spreading code in the time domain; this scheme is proposed for uplink communication because it can provide quasi-synchronization. In the MT-CDMA scheme, the S/P converted data stream is spread in the time domain so that the spectrum of each sub-carrier prior to the spreading operation satisfies the orthogonality condition with minimum frequency separation; after spreading, the spectrum of each sub-carrier no longer satisfies the orthogonality condition. MT-CDMA uses longer spreading codes than the other schemes and thus accommodates more users. JPEG2000 still image compression is a new image compression technique optimized not only for efficiency, but also for scalability and interoperability in network and mobile environments. This standard provides superior low bit rate performance and continuous-tone and bi-level compression. Because of its robustness to bit errors, JPEG2000 is widely used over wireless communication channels.
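The frequency-domain spreading with Walsh-Hadamard codes mentioned above can be sketched directly. This toy example builds an 8 x 8 Hadamard matrix by the Sylvester construction, superposes two users' BPSK symbols across the subcarriers, and despreads one user; the user indices and symbols are arbitrary illustrative choices.

```python
def hadamard(n):
    """Sylvester construction of an n x n Walsh-Hadamard matrix (n a power of 2)."""
    H = [[1]]
    while len(H) < n:
        H = [row + row for row in H] + [row + [-x for x in row] for row in H]
    return H

H = hadamard(8)

# Rows are mutually orthogonal: the dot product of distinct rows is zero.
for i in range(8):
    for j in range(8):
        dot = sum(H[i][k] * H[j][k] for k in range(8))
        assert dot == (8 if i == j else 0)

# Frequency-domain spreading: each user's symbol is copied onto all 8
# subcarriers, weighted by that user's Walsh code chips, then summed.
symbols = {0: 1, 3: -1}          # user index -> BPSK data symbol
subcarriers = [sum(s * H[u][k] for u, s in symbols.items()) for k in range(8)]

# Despreading user 3: correlate the subcarriers with code row 3 and normalize.
user3 = sum(subcarriers[k] * H[3][k] for k in range(8)) / 8
```

Because the code rows are orthogonal, each user's symbol is recovered exactly in this noiseless sketch.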
APA, Harvard, Vancouver, ISO, and other styles
26

Fey, Anne, Remco van der Hofstad, and Marten J. Klok. "Large deviations for eigenvalues of sample covariance matrices, with applications to mobile communication systems." Advances in Applied Probability 40, no. 04 (December 2008): 1048–71. http://dx.doi.org/10.1017/s0001867800002962.

Full text
Abstract:
We study sample covariance matrices of the form W = (1/n)CC^T, where C is a k × n matrix with independent and identically distributed (i.i.d.) mean-0 entries. This is a generalization of the so-called Wishart matrices, where the entries of C are i.i.d. standard normal random variables. Such matrices arise in statistics as sample covariance matrices, and the high-dimensional case, when k is large, arises in the analysis of DNA experiments. We investigate the large deviation properties of the largest and smallest eigenvalues of W when either k is fixed and n → ∞ or k_n → ∞ with k_n = o(n / log log n), in the case where the squares of the i.i.d. entries have finite exponential moments. Previous results, proving almost sure limits of the eigenvalues, require only finite fourth moments. Our most explicit results for large k are for the case where the entries of C are ±1 with equal probability. We relate the large deviation rate functions of the smallest and largest eigenvalues to the rate functions for i.i.d. standard normal entries of C. This case is of particular interest since it is related to the problem of decoding a signal in a code-division multiple-access (CDMA) system arising in mobile communication systems. In this example, k is the number of users in the system and n is the length of the coding sequence of each of the users. Each user transmits at the same time and uses the same frequency; the codes are used to distinguish the signals of the separate users. The results imply large deviation bounds for the probability of a bit error due to the interference of the various users.
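The objects in this abstract are easy to simulate. The sketch below builds W = (1/n)CC^T with i.i.d. ±1 entries (the CDMA-motivated case) and estimates its largest eigenvalue by power iteration; for fixed small k and large n, W concentrates near the identity, so the top eigenvalue sits near 1. The dimensions and seed are arbitrary choices for the illustration.

```python
import random, math

def wishart_like(k, n, rng):
    """W = (1/n) C C^T with i.i.d. +/-1 entries in the k x n matrix C."""
    C = [[rng.choice((-1, 1)) for _ in range(n)] for _ in range(k)]
    return [[sum(C[i][t] * C[j][t] for t in range(n)) / n for j in range(k)]
            for i in range(k)]

def largest_eigenvalue(W, iters=500):
    """Power iteration on the symmetric k x k matrix W."""
    k = len(W)
    v = [1.0] * k
    lam = 0.0
    for _ in range(iters):
        w = [sum(W[i][j] * v[j] for j in range(k)) for i in range(k)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
        lam = sum(v[i] * sum(W[i][j] * v[j] for j in range(k)) for i in range(k))
    return lam

rng = random.Random(0)
lam_max = largest_eigenvalue(wishart_like(k=5, n=2000, rng=rng))
```

Note that the diagonal of W is exactly 1 here, since each squared ±1 chip is 1; only the off-diagonal cross-correlations fluctuate, at scale 1/√n.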
APA, Harvard, Vancouver, ISO, and other styles
27

Fey, Anne, Remco van der Hofstad, and Marten J. Klok. "Large deviations for eigenvalues of sample covariance matrices, with applications to mobile communication systems." Advances in Applied Probability 40, no. 4 (December 2008): 1048–71. http://dx.doi.org/10.1239/aap/1231340164.

Full text
Abstract:
We study sample covariance matrices of the form W = (1/n)CC^T, where C is a k × n matrix with independent and identically distributed (i.i.d.) mean-0 entries. This is a generalization of the so-called Wishart matrices, where the entries of C are i.i.d. standard normal random variables. Such matrices arise in statistics as sample covariance matrices, and the high-dimensional case, when k is large, arises in the analysis of DNA experiments. We investigate the large deviation properties of the largest and smallest eigenvalues of W when either k is fixed and n → ∞ or k_n → ∞ with k_n = o(n / log log n), in the case where the squares of the i.i.d. entries have finite exponential moments. Previous results, proving almost sure limits of the eigenvalues, require only finite fourth moments. Our most explicit results for large k are for the case where the entries of C are ±1 with equal probability. We relate the large deviation rate functions of the smallest and largest eigenvalues to the rate functions for i.i.d. standard normal entries of C. This case is of particular interest since it is related to the problem of decoding a signal in a code-division multiple-access (CDMA) system arising in mobile communication systems. In this example, k is the number of users in the system and n is the length of the coding sequence of each of the users. Each user transmits at the same time and uses the same frequency; the codes are used to distinguish the signals of the separate users. The results imply large deviation bounds for the probability of a bit error due to the interference of the various users.
APA, Harvard, Vancouver, ISO, and other styles
28

THIYAGU, KATHIYAIAH, and T. H. OH. "ROBUST JPEG2000 IMAGE TRANSMISSION THROUGH LOW SNR MULTI-CARRIER CDMA SYSTEM IN FREQUENCY-SELECTIVE RAYLEIGH FADING CHANNELS: A PERFORMANCE STUDY." International Journal of Wavelets, Multiresolution and Information Processing 09, no. 02 (March 2011): 283–303. http://dx.doi.org/10.1142/s0219691311004018.

Full text
Abstract:
The demand for high data rate transmission is increasing every day. The multi-carrier code division multiple access (MC-CDMA) system is considered the forerunner and an advancement in mobile communication systems. In this paper, two types of JPEG2000 lossily-compressed test images are transmitted through an MC-CDMA channel in a low SNR (as low as 4 dB) environment, and their quality is evaluated objectively using peak signal-to-noise ratio (PSNR) and root mean square error (RMSE). The test images are all compressed at ratios from 10:1 up to 70:1, and the system involves multi-user image transmission in near real-time at low SNR (±5 dB). It is found that the JPEG2000 image compression technique, which applies the wavelet transform, performs quite well in the low SNR multipath fading channel, as low as 4 dB, and this looks promising for future applications.
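The two quality metrics used in this study are standard and compact enough to sketch. The example below computes RMSE and PSNR (relative to an 8-bit peak of 255) on a hypothetical four-pixel pair; the pixel values are made up for illustration.

```python
import math

def rmse(orig, recon):
    """Root mean square error between two equal-length pixel sequences."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(orig, recon)) / len(orig))

def psnr(orig, recon, peak=255):
    """Peak signal-to-noise ratio in dB for 8-bit pixel data."""
    e = rmse(orig, recon)
    return float('inf') if e == 0 else 20 * math.log10(peak / e)

orig = [100, 120, 140, 160]
recon = [101, 119, 141, 159]   # every pixel off by 1, so RMSE = 1
```

With RMSE = 1 the PSNR is 20·log10(255) ≈ 48.13 dB, which is why an error of about one grey level per pixel is usually considered excellent quality.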
APA, Harvard, Vancouver, ISO, and other styles
29

Manzano, Mario, Felipe Espinosa, Ángel M. Bravo-Santos, Enrique Santiso, Ignacio Bravo, and David Garcia. "Dynamic Cognitive Self-Organized TDMA for Medium Access Control in Real-Time Vehicle to Vehicle Communications." Mathematical Problems in Engineering 2013 (2013): 1–13. http://dx.doi.org/10.1155/2013/574528.

Full text
Abstract:
The emergence of intelligent transport systems has brought out a new set of requirements on wireless communication. To cope with these requirements, several proposals are currently under discussion. In this highly mobile environment, the design of a prompt, efficient, flexible, and reliable medium access control, able to cover the specific constraints of real-time communication applications, remains unsolved. This paper presents an original proposal, Non-Cooperative Cognitive Time Division Multiple Access (NCC-TDMA), based on Cognitive Radio (CR) techniques, to obtain a mechanism which complies with the requirements of real-time communications. Though the proposed MAC uses a slotted channel, it can be adapted to operate on the physical layer of different standards. The authors' analysis considers IEEE WAVE and 802.11p as the standards of reference. The mechanism also offers other advantages, such as avoiding signalling and the capacity to adapt to channel conditions and interference. The solution is applied to the problem of units merging into a convoy. Comparison results between NCC-TDMA and Slotted ALOHA are included.
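For context on the Slotted ALOHA baseline used in the comparison: its classic throughput is S = G·e^(-G) successful frames per slot at offered load G, peaking at 1/e ≈ 0.368 when G = 1. The sketch below evaluates the formula and cross-checks it with a tiny Monte Carlo simulation; the station count and transmit probability are illustrative assumptions.

```python
import math, random

def slotted_aloha_throughput(G):
    """Expected successes per slot at offered load G (Poisson model): S = G * exp(-G)."""
    return G * math.exp(-G)

# Throughput peaks at G = 1 with S = 1/e successful frames per slot.
peak = slotted_aloha_throughput(1.0)

# Monte Carlo check: N stations each transmit in a slot with probability p;
# a slot succeeds iff exactly one station transmits.
def simulate(n_stations, p, slots, seed=0):
    rng = random.Random(seed)
    wins = 0
    for _ in range(slots):
        tx = sum(1 for _ in range(n_stations) if rng.random() < p)
        if tx == 1:
            wins += 1
    return wins / slots

sim = simulate(n_stations=50, p=1 / 50, slots=20000)
```

The simulated success rate, 50·(1/50)·(49/50)^49 ≈ 0.372, sits close to the Poisson-limit peak, illustrating why contention-free slot assignment (as in TDMA schemes) can do much better than random access.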
APA, Harvard, Vancouver, ISO, and other styles
30

Mohammed, Saifuldeen A. "Securing Physical Layer for FHSS Communication System Using Code and Phase Hopping Techniques in CDMA, System Design and Implementation." Journal of Engineering 26, no. 7 (July 1, 2020): 190–205. http://dx.doi.org/10.31026/j.eng.2020.07.13.

Full text
Abstract:
Frequency-hopping spread spectrum (FHSS) systems and techniques have recently been used in military and civilian radar, and in communication systems for securing information on wireless links; for example, IEEE 802.x Wi-Fi uses multiple bandwidths and frequencies in the wireless channel and hops among them in order to increase the security level during broadcast. The problem with FHSS nowadays is that any smart software-defined radio (S-SDR) can easily detect the hopping sequence of a wireless signal at both the transmitter and the receiver, then duplicate this sequence in order to intercept transmitter and receiver messages in subsequent transmissions. In 2017, code hopping and phase hopping techniques were both proposed to resolve this recent security problem, but not together with FHSS. This paper therefore presents a new composite system that combines phase shifting and code hopping in Code Division Multiple Access (CDMA) to complement FHSS, because the security of wireless communication systems has in recent years become a nightmare for individuals, companies, and even countries, joined by the need for higher bit rates in wireless channels driven by next-generation (5G-6G) communication systems and the evolution of social networking, IoT, streaming media, IoE, visualization, and cloud computing. The new ideas are applicable to a large number of users and allow fast implementation without synchronization, and without any reliance on encryption or key exchange. All results are simulated in MATLAB R2017b and were tested with 8 simultaneous users. The results show promising effects on security for both the phase shifting and code hopping systems, especially code hopping, which encourages us to continue this research and compare our system with others.
APA, Harvard, Vancouver, ISO, and other styles
31

Coruh, Uğur, and Oğuz Bayat. "Hybrid Secure Authentication and Key Exchange Scheme for M2M Home Networks." Security and Communication Networks 2018 (November 1, 2018): 1–25. http://dx.doi.org/10.1155/2018/6563089.

Full text
Abstract:
In this paper, we analyze Sun et al.'s scheme, which proposes an M2M (Machine-to-Machine) secure communication scheme using existing TD-SCDMA (Time Division-Synchronous Code Division Multiple Access) networks. They offer a password-based authentication and key establishment protocol for mutual authentication. Moreover, their proposed secure channel establishment protocol uses symmetric cryptography and one-way hash algorithms, and they considered using their protected channel model for mobile users and smart home networks. In this paper, we propose to complete the missing parts of Sun et al.'s scheme by addressing privacy preservation and message modification protection, and by improving resistance to MITM (Man-In-The-Middle) attacks, anomaly detection, and protection against timing-based DoS (Denial-of-Service) attacks. An ECDH (Elliptic Curve Diffie-Hellman) based protected cipher-key exchange operation is used in the initial setup and key-injection operations to provide secure user registration, user password change and home gateway network join phases. We simulated both the proposed and Sun et al.'s schemes. We analyzed Sun et al.'s scheme for performance, network congestion and resource usage. The missing privacy preservation was analyzed and compared with the GLARM scheme, and the storage cost of each phase was analyzed following Ferrag et al.'s survey proposal. In Sun et al.'s scheme, future work on the security architecture of the home network relates to Li et al.'s protocol, which is implemented in our proposed design.
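The ECDH key agreement at the heart of the scheme can be sketched on a textbook toy curve. This is purely didactic: the curve y² = x³ + 2x + 2 over GF(17) with generator (5, 1) is far too small for real security, and the private scalars are arbitrary illustrative values, not anything from the paper.

```python
# Toy ECDH over the textbook curve y^2 = x^3 + 2x + 2 (mod 17), generator (5, 1).
P_MOD, A, B = 17, 2, 2
G = (5, 1)
INF = None  # point at infinity

def ec_add(P, Q):
    """Group law on the curve (handles identity, inverses, doubling, addition)."""
    if P is INF:
        return Q
    if Q is INF:
        return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % P_MOD == 0:
        return INF
    if P == Q:
        s = (3 * x1 * x1 + A) * pow(2 * y1, -1, P_MOD) % P_MOD
    else:
        s = (y2 - y1) * pow(x2 - x1, -1, P_MOD) % P_MOD
    x3 = (s * s - x1 - x2) % P_MOD
    return (x3, (s * (x1 - x3) - y1) % P_MOD)

def ec_mul(k, P):
    """Double-and-add scalar multiplication k * P."""
    R = INF
    while k:
        if k & 1:
            R = ec_add(R, P)
        P = ec_add(P, P)
        k >>= 1
    return R

# ECDH: each side combines its private scalar with the other's public point.
a_priv, b_priv = 3, 9
a_pub, b_pub = ec_mul(a_priv, G), ec_mul(b_priv, G)
shared_a = ec_mul(a_priv, b_pub)   # a * (b * G)
shared_b = ec_mul(b_priv, a_pub)   # b * (a * G)
```

Both sides compute the same point a·b·G, from which a symmetric cipher key would then be derived in a protocol like the one the paper builds.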
APA, Harvard, Vancouver, ISO, and other styles
32

Hasan, Moh Khalid, Mostafa Zaman Chowdhury, Md Shahjalal, and Yeong Min Jang. "Fuzzy Based Network Assignment and Link-Switching Analysis in Hybrid OCC/LiFi System." Wireless Communications and Mobile Computing 2018 (November 19, 2018): 1–15. http://dx.doi.org/10.1155/2018/2870518.

Full text
Abstract:
In recent times, optical wireless communication (OWC) has become an attractive research area in mobile communication owing to its low cost and high-speed data transmission capability, and it is already recognized as complementary to radio-frequency (RF) based technologies. Light fidelity (LiFi) and optical camera communication (OCC) are two promising OWC technologies that use a photodetector (PD) and a camera, respectively, to receive optical pulses. These communication systems can be implemented in all kinds of environments using existing light-emitting diode (LED) infrastructure to transmit data. However, both networking layers suffer from several limitations. An excellent solution to overcoming these limitations is the integration of OCC and LiFi. In this paper, we propose a hybrid OCC and LiFi architecture to improve the quality-of-service (QoS) of users. A network assignment mechanism is developed for the hybrid system. A dynamic link-switching technique for efficient handover management between networks is then proposed, which includes switching provisioning based on user mobility and a detailed network switching flow analysis. Fuzzy logic (FL) is used to develop the proposed mechanisms. A time-division multiple access (TDMA) based approach, called round-robin scheduling (RRS), is also adopted to ensure fairness in time resource allocation while serving multiple users with the same LED in the hybrid system. Furthermore, simulation results are presented taking different practical application scenarios into consideration. The performance analysis of the network assignment mechanism, provided at the end of the paper, demonstrates the importance and feasibility of the proposed scheme.
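The round-robin TDMA idea used for fairness is simple enough to sketch directly: successive time slots are handed to users in cyclic order, so over any window the slot counts differ by at most one. The user names and slot count below are illustrative.

```python
from collections import deque

def round_robin(users, slots):
    """TDMA-style round-robin: hand each successive time slot to the next user."""
    q = deque(users)
    schedule = []
    for _ in range(slots):
        u = q.popleft()
        schedule.append(u)
        q.append(u)          # rejoin the back of the queue
    return schedule

sched = round_robin(["u1", "u2", "u3"], slots=7)
```

With 7 slots and 3 users, each user receives either 2 or 3 slots, which is the fairness property the RRS approach relies on.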
APA, Harvard, Vancouver, ISO, and other styles
33

Sizov, V. A., D. M. Malinichev, and Kh K. Kuchmezov. "The study of promising secure information systems based on signal modeling." Open Education 23, no. 2 (May 14, 2019): 69–77. http://dx.doi.org/10.21686/1818-4243-2019-2-69-77.

Full text
Abstract:
The aim of the study is to increase the effectiveness of information security management through the use of 5G networks. The transition to fifth-generation networks does not solve the existing problems of information security and leads to the emergence of new threats. The main objective of any signal modulation method is to ensure high bandwidth and proper transmission quality in a noisy communication channel while using the minimum amount of energy. One of the most effective means of increasing the level of information security in wireless networks is quadrature modulation, which is used in networks such as LTE, WiMAX, McWill, DVB-T (T2), Wi-Fi and other radio access networks [1]. One of the promising directions for the development of 5G networks is the use of higher frequency ranges, such as the millimeter-wave range (from 30 to 300 GHz) [2, 3]. A feature of the millimeter-wave range is that it provides much wider spectral bands, making it possible to significantly increase the bandwidth of the channels. Thus, when studying prospective protected information systems based on 5G network technology, it is advisable to use simulation of the channel-level signal interaction of subscribers, which allows the basic security parameters to be evaluated at the physical level. Materials and research methods.
Fifth-generation networks will simultaneously resemble previous generations of mobile networks and differ significantly from them, and there are a number of explanations that become more obvious when one considers how these changes affect the security principles of users and equipment in fifth-generation networks. Combinational modulation, called quadrature amplitude modulation, has become widespread in digital information transmission, including 5G networks. Multiposition signals have the greatest spectral efficiency; of these, four-position phase modulation and sixteen-position quadrature amplitude modulation are most often used. Quadrature amplitude modulation is a kind of multi-position amplitude-phase modulation in which, in addition to the phase, the amplitude of the signal also carries information. This increases the amount of information transmitted in a given frequency band. A brief overview is given of existing OFDM (Orthogonal Frequency-Division Multiplexing) [4, 5] modulation approaches, systems and methods for solving signal modulation problems when building such systems. Results. Currently, OFDM technology is widely used in modern wireless Internet systems. High data transfer rates in OFDM systems are achieved using parallel information transfer over a large number of orthogonal frequency subchannels (subcarriers) [6]. The method of synthesizing signal-code constructions with orthogonal frequency multiplexing provides different scenarios for the use of semi-square modulation depending on the requirements for interception protection, as well as a balance between spectral and energy efficiency. This method can be used in two cases: with alternative and with consistent transmission of signals. In the case of alternative transmission, only one of the four subcarriers is used during one channel interval.
For efficient use of bandwidth, the proposed method involves using the spectrum of the three other subcarriers for data transmission in D2D channels (a D2D channel creates a connection between two user devices in close proximity), which further avoids interference between fixed channels and D2D communication channels. Findings. At present, 5G networks can be considered one of the necessary components of digital transformation and the digital economy, while the main task in ensuring security in cellular communications is protection against eavesdropping. However, in the future world of smartphones and the Internet of Things, in environments with a large number of machines, the probability of eavesdropping is likely to fade into the background. Instead, one has to think about such things as data manipulation attacks, which, for example, can be used to command machines to perform certain actions (for example, open a door or take control of an unmanned vehicle). Mobile network operators, like consumer electronics manufacturers, will be able to offer "security as a service", with the result that application providers will be able to apply additional levels of security over existing secure cellular network channels when transferring certain types of data [7]. Owing to its better spectral density, the proposed signal conditioning method makes it possible to use prototypes of window functions with the best spatial localization properties without violating the orthogonality condition of the signal bases, and accordingly does not require the use of cyclic prefixes when generating the OFDM signal.
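The sixteen-position quadrature amplitude modulation discussed above can be sketched as a 16-QAM mapper/demapper: four bits per symbol, with the in-phase and quadrature levels each Gray-coded over {-3, -1, 1, 3}. The bit-to-level table and test bits are conventional illustrative choices, not taken from the article.

```python
# Gray-coded 2-bit-to-level table: adjacent levels differ in exactly one bit.
GRAY2 = {(0, 0): -3, (0, 1): -1, (1, 1): 1, (1, 0): 3}

def qam16_mod(bits):
    """Map groups of 4 bits to 16-QAM symbols (I from bits 0-1, Q from bits 2-3)."""
    syms = []
    for i in range(0, len(bits), 4):
        b = bits[i:i + 4]
        syms.append(complex(GRAY2[(b[0], b[1])], GRAY2[(b[2], b[3])]))
    return syms

def qam16_demod(syms):
    """Slice each axis to the nearest level, then invert the Gray mapping."""
    inv = {v: k for k, v in GRAY2.items()}
    def nearest(x):
        return min((-3, -1, 1, 3), key=lambda L: abs(x - L))
    bits = []
    for s in syms:
        bits += list(inv[nearest(s.real)]) + list(inv[nearest(s.imag)])
    return bits

data = [1, 0, 1, 1, 0, 0, 1, 0]
recovered = qam16_demod(qam16_mod(data))
```

Carrying amplitude as well as phase is exactly what lets 16-QAM pack 4 bits per symbol where QPSK carries 2; an OFDM transmitter would place one such symbol on each subcarrier.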
APA, Harvard, Vancouver, ISO, and other styles
34

Aquino, Guilherme P., and Luciano L. Mendes. "Sparse code multiple access on the generalized frequency division multiplexing." EURASIP Journal on Wireless Communications and Networking 2020, no. 1 (October 23, 2020). http://dx.doi.org/10.1186/s13638-020-01832-z.

Full text
Abstract:
Recent advances in communication systems have culminated in a new class of multiple access schemes, named non-orthogonal multiple access (NOMA), whose main goal is to increase spectrum efficiency by overlapping data from different users in a single time-frequency resource used by the physical layer. NOMA receivers can resolve the interference among data symbols from different users, increasing the overall system spectrum efficiency without introducing symbol error rate (SER) performance loss, which makes this class of multiple access techniques interesting for future mobile communication systems. This paper analyzes one promising NOMA technique, called sparse code multiple access (SCMA), where C users can share U < C time-frequency resources of the physical layer. Initially, the integration of SCMA and orthogonal frequency division multiplexing (OFDM) is considered, defining a benchmark for the overall SER performance of the multiple access technique. Furthermore, this paper proposes the integration of SCMA and generalized frequency division multiplexing (GFDM). Since GFDM is a highly flexible non-orthogonal waveform that can mimic several other waveforms as corner cases, it is an interesting candidate for future wireless communication systems. This paper proposes two approaches for combining SCMA and GFDM. The first combines a soft equalizer, called block expectation propagation (BEP), and a multi-user detection (MUD) scheme based on the sum-product algorithm (SPA). This approach achieves the best SER performance, but with a significant increase in receiver complexity. In the second approach, BEP is integrated with a simplified MUD, which is an original contribution of this paper, aiming to reduce the receiver's complexity at the cost of SER performance loss. The solutions proposed in this paper show that SCMA-GFDM can be an interesting solution for future mobile networks.
APA, Harvard, Vancouver, ISO, and other styles
35

Gurugopinath, Sanjeev. "Non-Orthogonal Multiple Access." Advanced Computing and Communications, June 30, 2019. http://dx.doi.org/10.34048/2019.2.f2.

Full text
Abstract:
Non-orthogonal multiple access (NOMA) has recently been proposed as a technique to increase network throughput and to support massive connectivity, which are major requirements in fifth generation (5G) communication systems. NOMA can be realized through two different approaches, namely (a) power-domain and (b) code-domain. In power-domain NOMA (PD-NOMA), multiple users are assigned different power levels, based on their individual channel quality information, over the same orthogonal resources. The functionality of PD-NOMA comprises two main techniques, namely superposition coding at the transmitter and successive interference cancellation (SIC) at the receiver. An efficient implementation of SIC facilitates the removal of interference across users. SIC is carried out at the users with the best channel conditions and is performed in descending order of channel gains. On the other hand, in code-domain NOMA (CD-NOMA), multiplexing is carried out using low-density spreading sequences for each user, similar to code division multiple access (CDMA) technology. In this article, we provide an introduction to NOMA and present details on the working principle of NOMA systems. Later, we discuss the different types of NOMA schemes in the power and code domains, and investigate related applications in the context of 5G communication systems. Additionally, we discuss the integration of NOMA with other 5G-related technologies such as cognitive radio and massive MIMO, and discuss some future research challenges.
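Superposition coding plus SIC can be sketched in a few lines for two BPSK users in a noiseless channel. The power split and symbols below are made-up illustrative values: the far (weak-channel) user gets the larger power share, so the near user can decode and cancel the far user's signal before decoding its own.

```python
import math

p_near, p_far = 0.2, 0.8          # power split: the far user gets more power
s_near, s_far = +1, -1            # BPSK data symbols for each user

# Superposition coding: both users' signals share the same resource.
x = math.sqrt(p_near) * s_near + math.sqrt(p_far) * s_far

# Far user: treats the near user's (weaker) signal as noise and decodes directly.
far_hat = 1 if x >= 0 else -1

# Near user (SIC): first decodes the far user's stronger symbol, subtracts its
# reconstruction, then decodes its own symbol from the residual.
far_at_near = 1 if x >= 0 else -1
residual = x - math.sqrt(p_far) * far_at_near
near_hat = 1 if residual >= 0 else -1
```

Both symbols are recovered despite occupying the same time-frequency resource, which is the core PD-NOMA trade: extra receiver complexity (SIC) buys spectral efficiency.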
APA, Harvard, Vancouver, ISO, and other styles
36

"Analysis of sliding window decorrelation in DS-CDMA mobile radio." Proceedings of the Royal Society of London. Series A: Mathematical and Physical Sciences 447, no. 1930 (November 8, 1994): 313–40. http://dx.doi.org/10.1098/rspa.1994.0143.

Full text
Abstract:
The sliding window decorrelating algorithm (SLWA) has been proposed in Wijayasuriya et al. (1992b, c) as an alternative to dynamic power control in mobile direct sequence code division multiple access (DS-CDMA) systems. The architecture is readily extended to incorporate RAKE diversity combining techniques, thereby overcoming the aggravated near-far problem (NFP) found in multipath fading environments. In this paper we derive a mathematical model for a multi-user DS-CDMA system incorporating a sliding window (finite sequence length) decorrelator. The main contributions of the analysis are the investigation of performance under the practical limitation of incomplete RAKE combining, the characterization of the interference variance in a finite sequence length asynchronous configuration, and the incorporation of vehicle motion. Decorrelation in a mobile radio channel is discussed and results derived from a simulation model are presented, in addition to numerical examples from the analytical model.
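The near-far problem and the decorrelator's cure for it can be shown on a tiny synchronous example (a simplification of the asynchronous sliding-window case the paper analyzes). Two users with non-orthogonal length-4 codes and a 10x power imbalance: the conventional matched filter misdetects the weak user, while inverting the 2 x 2 cross-correlation matrix recovers both bits. All values are illustrative.

```python
# Two-user synchronous DS-CDMA with non-orthogonal spreading codes.
s1 = [1, 1, 1, 1]
s2 = [1, 1, 1, -1]              # cross-correlation with s1 is 2, not 0
A1, A2 = 1, 10                  # user 2 is 10x stronger (near-far imbalance)
b1, b2 = +1, -1                 # transmitted bits

# Noiseless received chip sequence.
r = [A1 * b1 * s1[i] + A2 * b2 * s2[i] for i in range(4)]

# Matched-filter (conventional detector) outputs y = (<r, s1>, <r, s2>).
y1 = sum(r[i] * s1[i] for i in range(4))
y2 = sum(r[i] * s2[i] for i in range(4))
mf_b1 = 1 if y1 >= 0 else -1    # weak user's decision, swamped by MAI

# Decorrelator: invert the cross-correlation matrix R = [[4, 2], [2, 4]].
det = 4 * 4 - 2 * 2
z1 = (4 * y1 - 2 * y2) / det
z2 = (-2 * y1 + 4 * y2) / det
dec_b1 = 1 if z1 >= 0 else -1
dec_b2 = 1 if z2 >= 0 else -1
```

In the noiseless case the decorrelator outputs equal the amplitudes A_k·b_k exactly (here 1 and -10), removing multiple access interference without any power control.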
APA, Harvard, Vancouver, ISO, and other styles
37

"Pseudo Convex Framework using Sparse Channel Estimation for Multipath Fading Channels in DS-CDMA Systems." International Journal of Engineering and Advanced Technology 9, no. 5 (June 30, 2020): 104–11. http://dx.doi.org/10.35940/ijeat.d9066.069520.

Full text
Abstract:
A direct-sequence code-division multiple-access (DS-CDMA) system with a rake receiver for multiple users opens up a new dimension in mobile communication systems. We propose a pseudo-convex framework using an optimum sparse channel estimation technique for DS-CDMA mobile communication systems. Further, the blind channel estimation problem is examined for a rake-based DS-CDMA communication framework with time-variant multipath fading channels. This receiver accomplishes an effective estimate of the channel according to a maximum convexity criterion, by means of the sparse technique. This estimation method requires a convenient representation of the discrete multipath fading channel based on sparse theory. In this paper, we have defined a specialized interior-point method for solving the large-scale ℓ1 problem in multiuser detection. Our method can be generalized to handle a variety of extensions, such as various channel conditions. A new solution to DS-CDMA-based sparse channel estimation is presented in this paper that assures a globally optimal solution. It is also proven that the said solution can be cast as a convex program, enabling solutions using interior-point techniques with polynomial time complexity. The rationality of the techniques proposed in this paper is highlighted through simulation results obtained for various modulation schemes and channel parameters. The performance of the pseudo-convex framework using the sparse channel estimation technique with a rake receiver in a DS-CDMA framework for multipath fading channels is explored. The overall performance is evaluated in terms of bit error rate (BER) for a range of values of signal-to-noise ratio (SNR). This framework gives better performance under various modulation schemes using a pseudo-noise (PN) spreading code. Furthermore, the performance of the proposed system is compared with different detectors.
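ℓ1-regularized sparse channel estimation can be illustrated with a first-order solver rather than the paper's interior-point method: the sketch below runs ISTA (iterative soft-thresholding) on a tiny lasso problem with a one-tap-dominant channel. The training matrix, true channel and all hyperparameters are assumptions made for the example.

```python
def soft(x, t):
    """Soft-thresholding: the proximal operator of the l1 norm."""
    return max(x - t, 0.0) if x > 0 else min(x + t, 0.0)

def ista(A, y, lam=0.05, step=0.2, iters=500):
    """ISTA sketch for min_h ||y - A h||^2 / 2 + lam * ||h||_1 (sparse channel h)."""
    m, n = len(A), len(A[0])
    h = [0.0] * n
    for _ in range(iters):
        r = [sum(A[i][j] * h[j] for j in range(n)) - y[i] for i in range(m)]   # residual
        g = [sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]          # gradient A^T r
        h = [soft(h[j] - step * g[j], step * lam) for j in range(n)]
    return h

# Sparse 4-tap channel with one dominant path; A is a tall training matrix.
A = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1], [1, 1, 0, 0]]
h_true = [0.0, 0.9, 0.0, 0.0]
y = [sum(A[i][j] * h_true[j] for j in range(4)) for i in range(5)]
h_est = ista(A, y)
```

Because the objective is convex, the iteration converges to the unique lasso minimizer (here the dominant tap is shrunk slightly from 0.9 toward zero by the ℓ1 penalty), the same global-optimality property that motivates the interior-point formulation in the abstract.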
APA, Harvard, Vancouver, ISO, and other styles
38

"Performance Analysis of Reduced Handoff Interruption Time and Energy Utilization in Cognitive Radio Networks by Unmanned Area Vehicle." International Journal of Recent Technology and Engineering 8, no. 4 (November 30, 2019): 9483–86. http://dx.doi.org/10.35940/ijrte.d9755.118419.

Full text
Abstract:
WiMAX (Worldwide Interoperability for Microwave Access) plays an important role in communication systems. Mobility is also important in WiMAX for achieving high-speed data exchange over the medium, and handoff may occur during that exchange. This paper focuses on handoff between WiMAX base stations and the MS (Mobile Station). A Handover Management Algorithm is used to avoid unnecessary handoffs; in addition, to improve the handover interruption time and to decrease the signaling transactions during the handover procedure, the Global Positioning System (GPS) is used to perform handoff faster. GPS is introduced in this paper to find the positions of the MS and the BSs, after which the MS automatically chooses a BS by routing. We develop a new algorithm that improves the handoff interruption time by introducing Time Division Multiple Access (TDMA). The MS finds its position using GPS and computes the distances to the SBS (Source Base Station) and nearby BSs; in the next step, the MS selects the target BS based on distance. Moreover, we combine the Handover Management Algorithm (HMA) with Cognitive Radio Networks (CRNs), which offer a way out of the problem of under-utilization of the licensed spectrum, for which demand has grown over the last couple of decades. The congestion of the wireless spectrum has triggered a stringent contest for scarce network resources.
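The distance-based target-BS selection step described above can be illustrated with a short sketch: the MS takes its own GPS fix, computes the great-circle distance to each candidate base station, and picks the closest as the handover target. This is a hedged sketch, not the paper's HMA implementation; the station names and coordinates are hypothetical.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between two GPS fixes (haversine formula)."""
    r = 6371.0                              # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def select_target_bs(ms_pos, base_stations):
    """Return (name, distance_km) of the closest candidate base station."""
    name, pos = min(base_stations.items(),
                    key=lambda kv: haversine_km(*ms_pos, *kv[1]))
    return name, haversine_km(*ms_pos, *pos)

# Hypothetical MS fix and candidate base stations (lat, lon in degrees).
ms = (13.08, 80.27)
candidates = {"BS-A": (13.10, 80.25), "BS-B": (13.05, 80.30), "BS-C": (13.20, 80.20)}
target, d = select_target_bs(ms, candidates)
```

In a fuller model the distance metric would be combined with load and signal-quality criteria, but the sketch captures the core selection rule the abstract states.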
APA, Harvard, Vancouver, ISO, and other styles
39

Jethani, Suneel. "Lists, Spatial Practice and Assistive Technologies for the Blind." M/C Journal 15, no. 5 (October 12, 2012). http://dx.doi.org/10.5204/mcj.558.

Full text
Abstract:
Introduction

Supermarkets are functionally challenging environments for people with vision impairments. A supermarket is likely to house an average of 45,000 products in a median floor-space of 4,529 square meters and many visually impaired people are unable to shop without assistance, which greatly impedes personal independence (Nicholson et al.). The task of selecting goods in a supermarket is an “activity that is expressive of agency, identity and creativity” (Sutherland) from which many vision-impaired persons are excluded. In response to this, a number of proof of concept (demonstrating feasibility) and prototype assistive technologies are being developed which aim to use smart phones as potential sensorial aides for vision impaired persons. In this paper, I discuss two such prototypic technologies, Shop Talk and BlindShopping. I engage with this issue’s list theme by suggesting that, on the one hand, list making is a uniquely human activity that demonstrates our need for order, reliance on memory, reveals our idiosyncrasies, and provides insights into our private lives (Keaggy 12). On the other hand, lists feature in the creation of spatial inventories that represent physical environments (Perec 3-4, 9-10). The use of lists in the architecture of assistive technologies for shopping illuminates the interaction between these two modalities of list use where items contained in a list are not only textual but also cartographic elements that link the material and immaterial in space and time (Haber 63). I argue that despite the emancipatory potential of assistive shopping technologies, their efficacy in practical situations is highly dependent on the extent to which they can integrate a number of lists to produce representations of space that are meaningful for vision impaired users. 
I suggest that the extent to which these prototypes may translate to becoming commercially viable, widely adopted technologies is heavily reliant upon commercial and institutional infrastructures, data sources, and regulation. Thus, their design, manufacture and adoption-potential are shaped by the extent to which certain data inventories are accessible and made interoperable. To overcome such constraints, it is important to better understand the “spatial syntax” associated with the shopping task for a vision impaired person; that is, the connected ordering of real and virtual spatial elements that result in a supermarket as a knowable space within which an assisted “spatial practice” of shopping can occur (Kellerman 148, Lefebvre 16). In what follows, I use the concept of lists to discuss the production of supermarket-space in relation to the enabling and disabling potentials of assistive technologies. First, I discuss mobile digital technologies relative to disability and impairment and describe how the shopping task produces a disabling spatial practice. Second, I present a case study showing how assistive technologies function in aiding vision impaired users in completing the task of supermarket shopping. Third, I discuss various factors that may inhibit the liberating potential of technology assisted shopping by vision-impaired people.

Addressing Shopping as a Disabling Spatial Practice

Consider how a shopping list might inform one’s experience of supermarket space. The way shopping lists are written demonstrates the variability in the logic that governs list writing. As Bill Keaggy demonstrates in his found shopping list Web project and subsequent book, Milk, Eggs, Vodka, a shopping list may be written on a variety of materials, be arranged in a number of orientations, and the writer may use differing textual attributes, such as size or underlining to show emphasis. 
The writer may use longhand, abbreviate, write neatly, scribble, and use an array of alternate spelling and naming conventions. For example, items may be listed based on knowledge of the location of products, they may be arranged on a list as a result of an inventory of a pantry or fridge, or they may be copied in the order they appear in a recipe. Whilst shopping, some may follow strictly the order of their list, crossing back and forth between aisles. Some may work through their list item-by-item, perhaps forward scanning to achieve greater economies of time and space. As a person shops, their memory may be stimulated by visual cues reminding them of products they need that may not be included on their list. For the vision impaired, this task is near impossible to complete without the assistance of a relative, friend, agency volunteer, or store employee. Such forms of assistance are often unsatisfactory, as delays may be caused due to the unavailability of an assistant, or the assistant having limited literacy, knowledge, or patience to adequately meet the shopper’s needs. Home delivery services, though readily available, impede personal independence (Nicholson et al.). Katie Ellis and Mike Kent argue that “an impairment becomes a disability due to the impact of prevailing ableist social structures” (3). It can be said, then, that supermarkets function as a disability producing space for the vision impaired shopper. 
For the vision impaired, a supermarket is a “hegemonic modern visual infrastructure” where, for example, merchandisers may reposition items regularly to induce customers to explore areas of the shop that they wouldn’t usually, a move which adds to the difficulty faced by those customers with impaired vision who work on the assumption that items remain as they usually are (Schillmeier 161). In addressing this issue, much emphasis has been placed on the potential of mobile communications technologies in affording vision impaired users greater mobility and flexibility (Jolley 27). However, as Gerard Goggin argues, the adoption of mobile communication technologies has not necessarily “gone hand in hand with new personal and collective possibilities” given the limited access to standard features, even if the device is text-to-speech enabled (98). Issues with Digital Rights Management (DRM) limit the way a device accesses and reproduces information, and confusion over whether audio rights are needed to convert text-to-speech, impede the accessibility of mobile communications technologies for vision impaired users (Ellis and Kent 136). Accessibility and functionality issues like these arise out of the needs, desires, and expectations of the visually impaired as a user group being considered as an afterthought as opposed to a significant factor in the early phases of design and prototyping (Goggin 89). Thus, the development of assistive technologies for the vision impaired has been left to third parties who must adapt their solutions to fit within certain technical parameters. It is valuable to consider what is involved in the task of shopping in order to appreciate the considerations that must be made in the design of shopping intended assistive technologies. 
Shopping generally consists of five sub-tasks: travelling to the store; finding items in-store; paying for and bagging items at the register; exiting the store and getting home; and, the often overlooked task of putting items away once at home. In this process supermarkets exhibit a “trichotomous spatial ontology” consisting of locomotor space that a shopper moves around the store, haptic space in the immediate vicinity of the shopper, and search space where individual products are located (Nicholson et al.). In completing these tasks, a shopper will constantly be moving through and switching between all three of these spaces. In the next section I examine how assistive technologies function in producing supermarkets as both enabling and disabling spaces for the vision impaired.

Assistive Technologies for Vision Impaired Shoppers

Jason Farman (43) and Adriana de Souza e Silva both argue that in many ways spaces have always acted as information interfaces where data of all types can reside. Global Positioning System (GPS), Radio Frequency Identification (RFID), and Quick Response (QR) codes all allow for practically every spatial encounter to be an encounter with information. Site-specific and location-aware technologies address the desire for meaningful representations of space for use in everyday situations by the vision impaired. Further, the possibility of an “always-on” connection to spatial information via a mobile phone with WiFi or 3G connections transforms spatial experience by “enfolding remote [and latent] contexts inside the present context” (de Souza e Silva). A range of GPS navigation systems adapted for vision-impaired users are currently on the market. Typically, these systems convert GPS information into text-to-speech instructions and are either standalone devices, such as the Trekker Breeze, or they use the compass, accelerometer, and 3G or WiFi functions found on most smart phones, such as Loadstone. 
Whilst both these products are adequate in guiding a vision-impaired user from their home to a supermarket, there are significant differences in their interfaces and data architectures. Trekker Breeze is a standalone hardware device that produces talking menus, maps, and GPS information. While its navigation functionality relies on a worldwide radio-navigation system that uses a constellation of 24 satellites to triangulate one’s position (May and LaPierre 263-64), its map and text-to-speech functionality relies on data on a DVD provided with the unit. Loadstone is an open source software system for Nokia devices that has been developed within the vision-impaired community. Loadstone is built on GNU General Public License (GPL) software and is developed from private and user based funding; this overcomes the issue of Trekker Breeze’s reliance on trading policies and pricing models of the few global vendors of satellite navigation data. Both products have significant shortcomings if viewed in the broader context of the five sub-tasks involved in shopping described above. Trekker Breeze and Loadstone require that additional devices be connected to them. In the case of Trekker Breeze it is a tactile keypad, and with Loadstone it is an aftermarket screen reader. To function optimally, Trekker Breeze requires that routes be pre-recorded and, according to a review conducted by the American Foundation for the Blind, it requires a 30-minute warm up time to properly orient itself. Both Trekker Breeze and Loadstone allow users to create and share Points of Interest (POI) databases showing the location of various places along a given route. Non-standard or duplicated user generated content in POI databases may, however, have a negative effect on usability (Ellis and Kent 2). 
Furthermore, GPS-based navigation systems are accurate to approximately ten metres, which means that users must rely on their own mobility skills when they are required to change direction or stop for traffic. This issue with GPS accuracy is more pronounced when a vision-impaired user is approaching a supermarket where they are likely to encounter environmental hazards with greater frequency and both pedestrian and vehicular traffic in greater density. Here the relations between space defined and spaces poorly defined or undefined by the GPS device interact to produce the supermarket surrounds as a disabling space (Galloway).

Prototype Systems for Supermarket Navigation and Product Selection

In the discussion to follow, I look at two prototype systems using QR codes and RFID that are designed to be used in-store by vision-impaired shoppers. Shop Talk is a proof of concept system developed by researchers at Utah State University that uses synthetic verbal route directions to assist vision impaired shoppers with supermarket navigation, product search, and selection (Nicholson et al.). Its hardware consists of a portable computational unit, a numeric keypad, a wireless barcode scanner and base station, headphones for the user to receive the synthetic speech instructions, a USB hub to connect all the components, and a backpack to carry them (with the exception of the barcode scanner) which has been slightly modified with a plastic stabiliser to assist in correct positioning. Shop Talk represents the supermarket environment using two data structures. The first is comprised of two elements: a topological map of locomotor space that allows for directional labels of “left,” “right,” and “forward,” to be added to the supermarket floor plan; and, for navigation of haptic space, the supermarket inventory management system, which is used to create verbal descriptions of product information. 
The second data structure is a Barcode Connectivity Matrix (BCM), which associates each shelf barcode with several pieces of information such as aisle, aisle side, section, shelf, position, Universal Product Code (UPC) barcode, product description, and price. Nicholson et al. suggest that one of their “most immediate objectives for future work is to migrate the system to a more conventional mobile platform” such as a smart phone (see Mobile Shopping). The Personalisable Interactions with Resources on AMI-Enabled Mobile Dynamic Environments (PRIAmIDE) research group at the University of Deusto is also approaching Ambient Assisted Living (AAL) by exploring the smart phone’s sensing, communication, computing, and storage potential. As part of their work, the prototype system, BlindShopping, was developed to address the issue of assisted shopping using entirely off-the-shelf technology with minimal environmental adjustments to navigate the store and search, browse and select products (López-de-Ipiña et al. 34). BlindShopping’s architecture is based on three components. Firstly, a navigation system provides the user with synthetic verbal instructions to users via headphones connected to the smart phone device being used in order to guide them around the store. This requires a RFID reader to be attached to the tip of the user’s white cane and road-marking-like RFID tag lines to be distributed throughout the aisles. A smartphone application processes the RFID data that is received by the smart phone via Bluetooth generating the verbal navigation commands as a result. Products are recognised by pointing a QR code reader enabled smart phone at an embossed code located on a shelf. The system is managed by a Rich Internet Application (RIA) interface, which operates by Web browser, and is used to register the RFID tags situated in the aisles and the QR codes located on shelves (López-de-Ipiña et al. 37-38). 
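The Barcode Connectivity Matrix described above can be pictured as a lookup table from shelf barcodes to location and product fields. The sketch below is a hypothetical illustration only: the field names follow the list in the abstract, but the shelf barcode, product values, and the `describe` helper are invented, not taken from Nicholson et al.'s implementation.

```python
from dataclasses import dataclass

@dataclass
class BCMEntry:
    aisle: int
    aisle_side: str       # "left" or "right"
    section: int
    shelf: int
    position: int
    upc: str              # Universal Product Code of the product at this slot
    description: str
    price: float

# One BCM record per shelf barcode (illustrative values).
bcm = {
    "SHELF-000123": BCMEntry(aisle=4, aisle_side="left", section=2, shelf=3,
                             position=7, upc="036000291452",
                             description="Rolled oats 1kg", price=3.49),
}

def describe(shelf_barcode):
    """Build the kind of verbal product description a TTS engine would read."""
    e = bcm[shelf_barcode]
    return (f"{e.description}, price {e.price:.2f}, aisle {e.aisle}, "
            f"{e.aisle_side} side, section {e.section}, shelf {e.shelf}, "
            f"position {e.position}")
```

Scanning a shelf barcode then reduces product search to a single dictionary lookup whose result is rendered as synthetic speech.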
A typical use-scenario for Blind Shopping involves a user activating the system by tracing an “L” on the screen or issuing the “Location” voice command, which activates the supermarket navigation system which then asks the user to either touch an RFID floor marking with their cane or scan a QR code on a nearby shelf to orient the system. The application then asks the user to dictate the product or category of product that they wish to locate. The smart phone maintains a continuous Bluetooth connection with the RFID reader to keep track of user location at all times. By drawing a “P” or issuing the “Product” voice command, a user can switch the device into product recognition mode where the smart phone camera is pointed at an embossed QR code on a shelf to retrieve information about a product such as manufacturer, name, weight, and price, via synthetic speech (López-de-Ipiña et al. 38-39). Despite both systems aiming to operate with as little environmental adjustment as possible, as well as minimise the extent to which a supermarket would need to allocate infrastructural, administrative, and human resources to implementing assistive technologies for vision impaired shoppers, there will undoubtedly be significant establishment and maintenance costs associated with the adoption of production versions of systems resembling either prototype described in this paper. As both systems rely on data obtained from a server by invoking Web services, supermarkets would need to provide in-store WiFi. Further, both systems’ dependence on store inventory data would mean that commercial versions of either of these systems are likely to be supermarket specific or exclusive given that there will be policies in place that forbid access to inventory systems, which contain pricing information to third parties. 
Secondly, an assumption in the design of both prototypes is that the shopping task ends with the user arriving at home; this overlooks the important task of being able to recognise products in order to put them away or to use at a later time. The BCM and QR product recognition components of both respective prototypic systems associate information to products in order to assist users in the product search and selection sub-tasks. However, information such as use-by dates, discount offers, country of manufacture, country of manufacturer’s origin, nutritional information, and the labelling of products as Halal, Kosher, containing alcohol, nuts, gluten, lactose, phenylalanine, and so on, create further challenges in how different data sources are managed within the devices’ software architecture. The reliance of both systems on existing smartphone technology is also problematic. Changes in the production and uptake of mobile communication devices, and the software that they operate on, occur rapidly. Once the fit-out of a retail space with the necessary instrumentation in order to accommodate a particular system has occurred, this system is unlikely to be able to cater to the requirement for frequent upgrades, as built environments are less flexible in the upgrading of their technological infrastructure (Kellerman 148). This sets up a scenario where the supermarket may persist as a disabling space due to a gap between the functional capacities of applications designed for mobile communication devices and the environments in which they are to be used.

Lists and Disabling Spatial Practice

The development and provision of access to assistive technologies and the data they rely upon is a commercial issue (Ellis and Kent 7). The use of assistive technologies in supermarket-spaces that rely on the inter-functional coordination of multiple inventories may have the unintended effect of excluding people with disabilities from access to legitimate content (Ellis and Kent 7). 
With de Certeau, we can ask of supermarket-space “What spatial practices correspond, in the area where discipline is manipulated, to these apparatuses that produce a disciplinary space?” (96). In designing assistive technologies, such as those discussed in this paper, developers must strive to achieve integration across multiple data inventories. Software architectures must be optimised to overcome issues relating to intellectual property, cross platform access, standardisation, fidelity, potential duplication, and mass-storage. This need for “cross sectioning,” however, “merely adds to the muddle” (Lefebvre 8). This is a predicament that only intensifies as space and objects in space become increasingly “representable” (Galloway), and as the impetus for the project of spatial politics for the vision impaired moves beyond representation to centre on access and meaning-making.

Conclusion

Supermarkets act as sites of hegemony, resistance, difference, and transformation, where the vision impaired and their allies resist the “repressive socialization of impaired bodies” through their own social movements relating to environmental accessibility and the technology assisted spatial practice of shopping (Gleeson 129). It is undeniable that the prototype technologies described in this paper, and those like them, indeed do have a great deal of emancipatory potential. However, it should be understood that these devices produce representations of supermarket-space as a simulation within a framework that attempts to mimic the real, and these representations are pre-determined by the industrial, technological, and regulatory forces that govern their production (Lefebvre 8). 
Thus, the potential of assistive technologies is dependent upon a range of constraints relating to data accessibility, and the interaction of various kinds of lists across the geographic area that surrounds the supermarket, locomotor, haptic, and search spaces of the supermarket, the home-space, and the internal spaces of a shopper’s imaginary. These interactions are important in contributing to the reproduction of disability in supermarkets through the use of assistive shopping technologies. The ways by which people make and read shopping lists complicate the relations between supermarket-space as location data and product inventories versus that which is intuited and experienced by a shopper (Sutherland). Not only should we be creating inventories of supermarket locomotor, haptic, and search spaces, the attention of developers working in this area of assistive technologies should look beyond the challenges of spatial representation and move towards a focus on issues of interoperability and expanded access of spatial inventory databases and data within and beyond supermarket-space.

References

De Certeau, Michel. The Practice of Everyday Life. Berkeley: University of California Press, 1984. Print.
De Souza e Silva, A. “From Cyber to Hybrid: Mobile Technologies As Interfaces of Hybrid Spaces.” Space and Culture 9.3 (2006): 261-78.
Ellis, Katie, and Mike Kent. Disability and New Media. New York: Routledge, 2011.
Farman, Jason. Mobile Interface Theory: Embodied Space and Locative Media. New York: Routledge, 2012.
Galloway, Alexander. “Are Some Things Unrepresentable?” Theory, Culture and Society 28 (2011): 85-102.
Gleeson, Brendan. Geographies of Disability. London: Routledge, 1999.
Goggin, Gerard. Cell Phone Culture: Mobile Technology in Everyday Life. London: Routledge, 2006.
Haber, Alex. “Mapping the Void in Perec’s Species of Spaces.” Tattered Fragments of the Map. Ed. Adam Katz and Brian Rosa. S.l.: Thelimitsoffun.org, 2009.
Jolley, William M. When the Tide Comes in: Towards Accessible Telecommunications for People with Disabilities in Australia. Sydney: Human Rights and Equal Opportunity Commission, 2003.
Keaggy, Bill. Milk Eggs Vodka: Grocery Lists Lost and Found. Cincinnati, Ohio: HOW Books, 2007.
Kellerman, Aharon. Personal Mobilities. London: Routledge, 2006.
Kleege, Georgia. “Blindness and Visual Culture: An Eyewitness Account.” The Disability Studies Reader. 2nd edition. Ed. Lennard J. Davis. New York: Routledge, 2006. 391-98.
Lefebvre, Henri. The Production of Space. Oxford, UK: Blackwell, 1991.
López-de-Ipiña, Diego, Tania Lorido, and Unai López. “Indoor Navigation and Product Recognition for Blind People Assisted Shopping.” Ambient Assisted Living. Ed. J. Bravo, R. Hervás, and V. Villarreal. Berlin: Springer-Verlag, 2011. 25-32.
May, Michael, and Charles LaPierre. “Accessible Global Position System (GPS) and Related Orientation Technologies.” Assistive Technology for Visually Impaired and Blind People. Ed. Marion A. Hersh and Michael A. Johnson. London: Springer-Verlag, 2008. 261-88.
Nicholson, John, Vladimir Kulyukin, and Daniel Coster. “Shoptalk: Independent Blind Shopping Through Verbal Route Directions and Barcode Scans.” The Open Rehabilitation Journal 2.1 (2009): 11-23.
Perec, Georges. Species of Spaces and Other Pieces. Trans. and Ed. John Sturrock. London: Penguin Books, 1997.
Schillmeier, Michael W. J. Rethinking Disability: Bodies, Senses, and Things. New York: Routledge, 2010.
Sutherland, I. “Mobile Media and the Socio-Technical Protocols of the Supermarket.” Australian Journal of Communication 36.1 (2009): 73-84.
APA, Harvard, Vancouver, ISO, and other styles
40

Ibrahim, Yasmin. "Commodifying Terrorism." M/C Journal 10, no. 3 (June 1, 2007). http://dx.doi.org/10.5204/mcj.2665.

Full text
Abstract:
Introduction

[Figure 1]

The counter-Terrorism advertising campaign of London’s Metropolitan Police commodifies some everyday items such as mobile phones, computers, passports and credit cards as having the potential to sustain terrorist activities. The process of ascribing cultural values and symbolic meanings to some everyday technical gadgets objectifies and situates Terrorism into the everyday life. The police, in urging people to look out for ‘the unusual’ in their normal day-to-day lives, juxtapose the everyday with the unusual, where day-to-day consumption, routines and flows of human activity can seemingly house insidious and atavistic elements. This again is reiterated in the Met police press release: Terrorists live within our communities making their plans whilst doing everything they can to blend in, and trying not to raise suspicions about their activities. (MPA Website) The commodification of Terrorism through uncommon and everyday objects situates Terrorism as a phenomenon which occupies a liminal space within the everyday. It resides, breathes and co-exists within the taken-for-granted routines and objects of ‘the everyday’ where it has the potential to explode and disrupt without warning. Since 9/11 and the 7/7 bombings Terrorism has been narrated through the disruption of mobility, whether in mid-air or in the deep recesses of the Underground. The resonant thread of disruption to human mobility evokes a powerful meta-narrative where acts of Terrorism can halt human agency amidst the backdrop of the metropolis, which is often a metaphor for speed and accelerated activities. 
If globalisation and the interconnected nature of the world are understood through discourses of risk, Terrorism bears the same footprint in urban spaces of modernity, narrating the vulnerability of the human condition in an inter-linked world where ideological struggles and resistance are manifested through inexplicable violence and destruction of lives, where the everyday is suspended to embrace the unexpected. As a consequence ambient fear “saturates the social spaces of everyday life” (Hubbard 2). The commodification of Terrorism through everyday items of consumption inevitably creates an intertextuality with real and media events, which constantly corrode the security of the metropolis. Paddy Scannell alludes to a doubling of place in our mediated world where “public events now occur simultaneously in two different places; the place of the event itself and that in which it is watched and heard. The media then vacillates between the two sites and creates experiences of simultaneity, liveness and immediacy” (qtd. in Moores 22). The doubling of place through media constructs a pervasive environment of risk and fear. Mark Danner (qtd. in Bauman 106) points out that the most powerful weapon of the 9/11 terrorists was that innocuous and “most American of technological creations: the television set” which provided a global platform to constantly replay and remember the dreadful scenes of the day, enabling the terrorist to appear invincible and to narrate fear as ubiquitous and omnipresent. Philip Abrams argues that ‘big events’ (such as 9/11 and 7/7) do make a difference in the social world for such events function as a transformative device between the past and future, forcing society to alter or transform its perspectives. 
David Altheide points out that since September 11 and the ensuing war on terror, a new discourse of Terrorism has emerged as a way of expressing how the world has changed and defining a state of constant alert through a media logic and format that shapes the nature of discourse itself. Consequently, the intensity and centralisation of surveillance in Western countries increased dramatically, placing the emphasis on expanding the forms of the already existing range of surveillance processes and practices that circumscribe and help shape our social existence (Lyon, Terrorism 2).

Normalisation of Surveillance

The role of technologies, particularly information and communication technologies (ICTs), and other infrastructures to unevenly distribute access to the goods and services necessary for modern life, while facilitating data collection on and control of the public, are significant characteristics of modernity (Reiman; Graham and Marvin; Monahan). The embedding of technological surveillance into spaces and infrastructures not only augment social control but also redefine data as a form of capital which can be shared between public and private sectors (Gandy, Data Mining; O’Harrow; Monahan). The scale, complexity and limitations of omnipresent and omnipotent surveillance, nevertheless, offer room for both subversion as well as new forms of domination and oppression (Marx). In surveillance studies, Foucault’s analysis is often heavily employed to explain lines of continuity and change between earlier forms of surveillance and data assemblage and contemporary forms in the shape of closed-circuit television (CCTV) and other surveillance modes (Dee). It establishes the need to discern patterns of power and normalisation and the subliminal or obvious cultural codes and categories that emerge through these arrangements (Fopp; Lyon, Electronic; Norris and Armstrong). In their study of CCTV surveillance, Norris and Armstrong (cf. in Dee) point out that when added to the daily minutiae of surveillance, CCTV cameras in public spaces, along with other camera surveillance in work places, capture human beings on a database constantly. The normalisation of surveillance, particularly with reference to CCTV, the popularisation of surveillance through television formats such as ‘Big Brother’ (Dee), and the expansion of online platforms to publish private images, has created a contradictory, complex and contested nature of spatial and power relationships in society. The UK, for example, has the most developed system of both urban and public space cameras in the world and this growth of camera surveillance and, as Lyon (Surveillance) points out, this has been achieved with very little, if any, public debate as to their benefits or otherwise. There may now be as many as 4.2 million CCTV cameras in Britain (cf. Lyon, Surveillance). That is one for every fourteen people and a person can be captured on over 300 cameras every day. An estimated £500m of public money has been invested in CCTV infrastructure over the last decade but, according to a Home Office study, CCTV schemes that have been assessed had little overall effect on crime levels (Wood and Ball). In spatial terms, these statistics reiterate Foucault’s emphasis on the power economy of the unseen gaze. Michel Foucault in analysing the links between power, information and surveillance inspired by Bentham’s idea of the Panopticon, indicated that it is possible to sanction or reward an individual through the act of surveillance without their knowledge (155). It is this unseen and unknown gaze of surveillance that is fundamental to the exercise of power. The design and arrangement of buildings can be engineered so that the “surveillance is permanent in its effects, even if it is discontinuous in its action” (Foucault 201). 
Lyon (Terrorism), in tracing the trajectory of surveillance studies, points out that much of surveillance literature has focused on understanding it as a centralised bureaucratic relationship between the powerful and the governed. Invisible forms of surveillance have also been viewed as a class weapon in some societies. With the advancements in and proliferation of surveillance technologies as well as convergence with other technologies, Lyon argues that it is no longer feasible to view surveillance as a linear or centralised process. In our contemporary globalised world, there is a need to reconcile the dialectical strands that mediate surveillance as a process. In acknowledging this, Gilles Deleuze and Felix Guattari have constructed surveillance as a rhizome that defies linearity to appropriate a more convoluted and malleable form where the coding of bodies and data can be enmeshed to produce intricate power relationships and hierarchies within societies. Latour draws on the notion of assemblage by propounding that data is amalgamated from scattered centres of calculation, which can range from state and commercial institutions to scientific laboratories that scrutinise data to conceive governance and control strategies. Both the Latourian and Deleuzian ideas of surveillance highlight the disparate arrays of people, technologies and organisations that become connected to make “surveillance assemblages” in contrast to the static, unidirectional Panopticon metaphor (Ball, “Organization” 93). In a similar vein, Gandy (Panoptic) infers that it is misleading to assume that surveillance in practice is as complete and totalising as the Panoptic ideal type would have us believe. Co-optation of Millions The Metropolitan Police’s counter-Terrorism strategy seeks to co-opt millions where the corporeal body can complement the landscape of technological surveillance that already co-exists within modernity. 
In its press release, the role of civilian bodies in ensuring security of the city is stressed: Keeping Londoners safe from Terrorism is not a job solely for governments, security services or police. If we are to make London the safest major city in the world, we must mobilise against Terrorism not only the resources of the state, but also the active support of the millions of people who live and work in the capital. (MPA Website). Surveillance is increasingly simulated through the millions of corporeal entities where seeing in advance is the goal even before technology records and codes these images (William). Bodies understand and code risk and images through the cultural narratives which circulate in society. Compared to CCTV technology images, which require cultural and political interpretations and interventions, bodies as surveillance organisms implicitly code other bodies and activities. The travel bag in the Metropolitan Police poster reinforces the images of the 7/7 bombers and the renewed attempts to bomb the London Underground on the 21st of July. It reiterates the CCTV footage revealing images of the bombers wearing rucksacks. The image of the rucksack embodies both the everyday and the potential for evil in everyday objects. It also inevitably reproduces the cultural biases and prejudices where the rucksack is subliminally associated with a specific type of body. The rucksack in these terms is a laden image which symbolically captures the context and culture of risk discourses in society. The co-optation of the population as a surveillance entity also recasts new forms of social responsibility within the democratic polity, where privacy is increasingly mediated by the greater need to monitor, trace and record the activities of one another. 
Nikolas Rose, in discussing the increasing ‘responsibilisation’ of individuals in modern societies, describes the process in which the individual accepts responsibility for personal actions across a wide range of fields of social and economic activity as in the choice of diet, savings and pension arrangements, health care decisions and choices, home security measures and personal investment choices (qtd. in Dee). While surveillance in individualistic terms is often viewed as a threat to privacy, Rose argues that the state of ‘advanced liberalism’ within modernity and post-modernity requires considerable degrees of self-governance, regulation and surveillance whereby the individual is constructed both as a ‘new citizen’ and a key site of self management. By co-opting and recasting the role of the citizen in the age of Terrorism, the citizen to a degree accepts responsibility for both surveillance and security. In our sociological imagination the body is constructed both as lived as well as a social object. Erving Goffman uses the word ‘umwelt’ to stress that human embodiment is central to the constitution of the social world. Goffman defines ‘umwelt’ as “the region around an individual from which signs of alarm can come” and employs it to capture how people as social actors perceive and manage their settings when interacting in public places (252). Goffman’s ‘umwelt’ can be traced to Immanuel Kant’s idea that it is the a priori categories of space and time that make it possible for a subject to perceive a world (Umiker-Sebeok; qtd. in Ball, “Organization”). Anthony Giddens adapted the term Umwelt to refer to “a phenomenal world with which the individual is routinely ‘in touch’ in respect of potential dangers and alarms which then formed a core of (accomplished) normalcy with which individuals and groups surround themselves” (244). 
Benjamin Smith, in considering the body as an integral component of the link between our consciousness and our material world, observes that the body is continuously inscribed by culture. These inscriptions, he argues, encompass a wide range of cultural practices and will imply knowledge of a variety of social constructs. The inscribing of the body will produce cultural meanings as well as create forms of subjectivity while locating and situating the body within a cultural matrix (Smith). Drawing on Derrida’s work, Pugliese employs the term ‘Somatechnics’ to conceptualise the body as a culturally intelligible construct and to address the techniques in and through which the body is formed and transformed (qtd. in Osuri). These techniques can encompass signification systems such as race and gender and equally technologies which mediate our sense of reality. These technologies of thinking, seeing, hearing, signifying, visualising and positioning produce the very conditions for the cultural intelligibility of the body (Osuri). The body is then continuously inscribed and interpreted through mediated signifying systems. Similarly, Hayles, while not intending to impose a Cartesian dichotomy between the physical body and its cognitive presence, contends that our use of and interaction with technology incorporates the body as a material entity but equally inscribes it by marking, recording and tracing its actions in various terrains. According to Gayatri Spivak (qtd. in Ball, “Organization”) new habits and experiences are embedded into the corporeal entity which then mediates its reactions and responses to the social world. This means one’s body is not completely one’s own, and the presence of ideological forces or influences then inscribes the body with meanings, codes and cultural values. In our modern condition, the body and data are intimately and intricately bound. 
Outside the home, it is difficult for the body to avoid entering into relationships that produce electronic personal data (Stalder). According to Felix Stalder our physical bodies are shadowed by a ‘data body’ which follows the physical body of the consuming citizen and sometimes precedes it by constructing the individual through data (12). Before we arrive somewhere, we have already been measured and classified. Thus, upon arrival, the citizen will be treated according to the criteria ‘connected with the profile that represents us’ (Gandy, Panoptic; William). Following September 11, Lyon (Terrorism) reveals that surveillance data from a myriad of sources, such as supermarkets, motels, traffic control points, credit card transactions records and so on, was used to trace the activities of terrorists in the days and hours before their attacks, confirming that the body leaves data traces and trails. Surveillance works by abstracting bodies from places and splitting them into flows to be reassembled as virtual data-doubles, and in the process can replicate hierarchies and centralise power (Lyon, Terrorism). Mike Dee points out that the nature of surveillance taking place in modern societies is complex and far-reaching and in many ways insidious as surveillance needs to be situated within the broadest context of everyday human acts whether it is shopping with loyalty cards or paying utility bills. Physical vulnerability of the body becomes more complex in the time-space distanciated surveillance systems to which the body has become increasingly exposed. As such, each transaction – whether it be a phone call, credit card transaction, or Internet search – leaves a ‘data trail’ linkable to an individual person or place. Haggerty and Ericson, drawing from Deleuze and Guattari’s concept of the assemblage, describe the convergence and spread of data-gathering systems between different social domains and multiple levels (qtd. in Hier). 
They argue that the target of the generic ‘surveillance assemblage’ is the human body, which is broken into a series of data flows on which the surveillance process is based. The thrust of the focus is the data individuals can yield and the categories to which they can contribute. These are then reapplied to the body. In this sense, surveillance is rhizomatic for it is diverse and connected to an underlying, invisible infrastructure which concerns interconnected technologies in multiple contexts (Ball, “Elements”). The co-opted body in the schema of counter-Terrorism enters a power arrangement where it constitutes both the unseen gaze as well as the data that will be implicated and captured in this arrangement. It is capable of producing surveillance data for those in power while creating new data through its transactions and movements in its everyday life. The body is unequivocally constructed through this data and is also entrapped by it in terms of representation and categorisation. The corporeal body is therefore part of the machinery of surveillance while being vulnerable to its discriminatory powers of categorisation and victimisation. As Hannah Arendt (qtd. in Bauman 91) had warned, “we terrestrial creatures bidding for cosmic significance will shortly be unable to comprehend and articulate the things we are capable of doing”. Arendt’s caution conveys the complexity, vulnerability as well as the complicity of the human condition in the surveillance society. Equally it exemplifies how the corporeal body can be co-opted as a surveillance entity sustaining a new ‘banality’ (Arendt) in the machinery of surveillance. Social Consequences of Surveillance Lyon (Terrorism) observed that the events of 9/11 and 7/7 in the UK have inevitably become a prism through which aspects of social structure and processes may be viewed. 
This prism helps to illuminate the already existing vast range of surveillance practices and processes that touch everyday life in so-called information societies. As Lyon (Terrorism) points out, surveillance is always ambiguous and can encompass genuine benefits and plausible rationales as well as palpable disadvantages. There are elements of representation to consider in terms of how surveillance technologies can re-present data that are collected at source or gathered from another technological medium, and these representations bring different meanings and enable different interpretations of life and surveillance (Ball, “Elements”). As such, surveillance needs to be viewed in a number of ways: practice, knowledge and protection from threat. As data can be manipulated and interpreted according to cultural values and norms, it reflects the inevitability of power relations to forge its identity in a surveillance society. In this sense, Ball (“Elements”) concludes surveillance practices capture and create different versions of life as lived by surveilled subjects. She refers to actors within the surveilled domain as ‘intermediaries’, where meaning is inscribed, where technologies re-present information, where power/resistance operates, and where networks are bound together to sometimes distort as well as reiterate patterns of hegemony (“Elements” 93). While surveillance is often connected with technology, it does not however determine nor decide how we code or employ our data. New technologies rarely enter passive environments of total inequality for they become enmeshed in complex pre-existing power and value systems (Marx). With surveillance there is an emphasis on the classificatory powers in our contemporary world “as persons and groups are often risk-profiled in the commercial sphere which rates their social contributions and sorts them into systems” (Lyon, Terrorism 2). 
Lyon (Terrorism) contends that the surveillance society is one that is organised and structured using surveillance-based techniques recorded by technologies, on behalf of the organisations and governments that structure our society. This information is then sorted, sifted and categorised and used as a basis for decisions which affect our life chances (Wood and Ball). The emergence of pervasive, automated and discriminatory mechanisms for risk profiling and social categorising constitutes a significant mechanism for reproducing and reinforcing social, economic and cultural divisions in information societies. Such automated categorisation, Lyon (Terrorism) warns, has consequences for everyone, especially in the face of the new anti-terror measures enacted after September 11. In tandem with this, Bauman points out that a few suicidal murderers on the loose will be quite enough to recycle thousands of innocents into the “usual suspects”. In no time, a few iniquitous individual choices will be reprocessed into the attributes of a “category”; a category easily recognisable by, for instance, a suspiciously dark skin or a suspiciously bulky rucksack*. *The kind of object which CCTV cameras are designed to note and passers-by are told to be vigilant about. And passers-by are keen to oblige. Since the terrorist atrocities on the London Underground, the volume of incidents classified as “racist attacks” rose sharply around the country. (122; emphasis added) Bauman, drawing on Lyon, asserts that the understandable desire for security combined with the pressure to adopt different kinds of systems “will create a culture of control that will colonise more areas of life with or without the consent of the citizen” (123). This means that the inhabitants of the urban space, whether citizens, workers or consumers, who have no terrorist ambitions whatsoever, will discover that their opportunities are more circumscribed by the subject positions or categories which are imposed on them. 
Bauman cautions that for some these categories may be extremely prejudicial, restricting them from consumer choices because of credit ratings, or more insidiously, relegating them to second-class status because of their colour or ethnic background (124). Joseph Pugliese, in linking visual regimes of racial profiling and the shooting of Jean Charles de Menezes in the aftermath of the 7/7 bombings in London, suggests that the discursive relations of power and visuality are inextricably bound. Pugliese argues that racial profiling creates a regime of visuality which fundamentally inscribes our physiology of perceptions with stereotypical images. He applies this analogy to Menezes running down the platform, in which the retina transforms him into the “hallucinogenic figure of an Asian Terrorist” (Pugliese 8). With globalisation and the proliferation of ICTs, borders and boundaries are no longer sacrosanct and as such risks are managed by enacting ‘smart borders’ through new technologies, with huge databases behind the scenes processing information about individuals and their journeys through the profiling of body parts with, for example, iris scans (Wood and Ball 31). Such body profiling technologies are used to create watch lists of dangerous passengers or identity groups who might be of greater ‘risk’. The body in a surveillance society can be dissected into parts and profiled and coded through technology. These disparate codings of body parts can be assembled (or selectively omitted) to construct and represent whole bodies in our information society to ascertain risk. The selection and circulation of knowledge will also determine who gets slotted into the various categories that a surveillance society creates. Conclusion When the corporeal body is subsumed into a web of surveillance it often raises questions about the deterministic nature of technology. The question is a long-standing one in our modern consciousness. 
We are apprehensive about according technology too much power and yet it is implicated in the contemporary power relationships where it is suspended amidst human motive, agency and anxiety. The emergence of surveillance societies, the co-optation of bodies in surveillance schemas, as well as the construction of the body through data in everyday transactions, conveys both the vulnerabilities of the human condition as well as its complicity in maintaining the power arrangements in society. Bauman, in citing Jacques Ellul and Hannah Arendt, points out that we suffer a ‘moral lag’ in so far as technology and society are concerned, for often we ruminate on the consequences of our actions and motives only as afterthoughts without realising at this point of existence that the “actions we take are most commonly prompted by the resources (including technology) at our disposal” (91). References Abrams, Philip. Historical Sociology. Shepton Mallet, UK: Open Books, 1982. Altheide, David. “Consuming Terrorism.” Symbolic Interaction 27.3 (2004): 289-308. Arendt, Hannah. Eichmann in Jerusalem: A Report on the Banality of Evil. London: Faber & Faber, 1963. Bauman, Zygmunt. Liquid Fear. Cambridge, UK: Polity, 2006. Ball, Kristie. “Elements of Surveillance: A New Framework and Future Research Direction.” Information, Communication and Society 5.4 (2002): 573-90. ———. “Organization, Surveillance and the Body: Towards a Politics of Resistance.” Organization 12 (2005): 89-108. Dee, Mike. “The New Citizenship of the Risk and Surveillance Society – From a Citizenship of Hope to a Citizenship of Fear?” Paper Presented to the Social Change in the 21st Century Conference, Queensland University of Technology, Queensland, Australia, 22 Nov. 2002. 14 April 2007 <http://eprints.qut.edu.au/archive/00005508/02/5508.pdf>. Deleuze, Gilles, and Felix Guattari. A Thousand Plateaus. Minneapolis: U of Minnesota P, 1987. Fopp, Rodney. 
“Increasing the Potential for Gaze, Surveillance and Normalization: The Transformation of an Australian Policy for People Who Are Homeless.” Surveillance and Society 1.1 (2002): 48-65. Foucault, Michel. Discipline and Punish: The Birth of the Prison. London: Allen Lane, 1977. Giddens, Anthony. Modernity and Self-Identity: Self and Society in the Late Modern Age. Stanford: Stanford UP, 1991. Gandy, Oscar. The Panoptic Sort: A Political Economy of Personal Information. Boulder, CO: Westview, 1997. ———. “Data Mining and Surveillance in the Post 9/11 Environment.” The Intensification of Surveillance: Crime, Terrorism and War in the Information Age. Eds. Kristie Ball and Frank Webster. Sterling, VA: Pluto Press, 2003. Goffman, Erving. Relations in Public. Harmondsworth: Penguin, 1971. Graham, Stephen, and Simon Marvin. Splintering Urbanism: Networked Infrastructures, Technological Mobilities and the Urban Condition. New York: Routledge, 2001. Hier, Sean. “Probing Surveillance Assemblage: On the Dialectics of Surveillance Practices as Process of Social Control.” Surveillance and Society 1.3 (2003): 399-411. Hayles, Katherine. How We Became Posthuman: Virtual Bodies in Cybernetics, Literature and Informatics. Chicago: U of Chicago P, 1999. Hubbard, Phil. “Fear and Loathing at the Multiplex: Everyday Anxiety in the Post-Industrial City.” Capital & Class 80 (2003). Latour, Bruno. Science in Action. Cambridge, Mass.: Harvard UP, 1987. Lyon, David. The Electronic Eye: The Rise of Surveillance Society. Oxford: Polity Press, 1994. ———. “Terrorism and Surveillance: Security, Freedom and Justice after September 11 2001.” Privacy Lecture Series, Queens University, 12 Nov. 2001. 16 April 2007 <http://privacy.openflows.org/lyon_paper.html>. ———. “Surveillance Studies: Understanding Visibility, Mobility and the Phenetic Fix.” Surveillance and Society 1.1 (2002): 1-7. Metropolitan Police Authority (MPA). “Counter Terrorism: The London Debate.” Press Release. 21 June 2006. 
18 April 2007 <http://www.mpa.gov.uk/access/issues/comeng/Terrorism.htm>. Pugliese, Joseph. “Asymmetries of Terror: Visual Regimes of Racial Profiling and the Shooting of Jean Charles de Menezes in the Context of the War in Iraq.” Borderlands 5.1 (2006). 30 May 2007 <http://www.borderlandsejournal.adelaide.edu.au/vol5no1_2006/pugliese.htm>. Marx, Gary. “A Tack in the Shoe: Neutralizing and Resisting the New Surveillance.” Journal of Social Issues 59.2 (2003). 18 April 2007 <http://web.mit.edu/gtmarx/www/tack.html>. Moores, Shaun. “Doubling of Place.” Mediaspace: Place, Scale and Culture in a Media Age. Eds. Nick Couldry and Anna McCarthy. London: Routledge, 2004. Monahan, Torin, ed. Surveillance and Security: Technological Politics and Power in Everyday Life. London: Routledge, 2006. Norris, Clive, and Gary Armstrong. The Maximum Surveillance Society: The Rise of CCTV. Oxford: Berg, 1999. O’Harrow, Robert. No Place to Hide. New York: Free Press, 2005. Osuri, Goldie. “Media Necropower: Australian Media Reception and the Somatechnics of Mamdouh Habib.” Borderlands 5.1 (2006). 30 May 2007 <http://www.borderlandsejournal.adelaide.edu.au/vol5no1_2006/osuri_necropower.htm>. Rose, Nikolas. “Government and Control.” British Journal of Criminology 40 (2000): 321-339. Scannell, Paddy. Radio, Television and Modern Life. Oxford: Blackwell, 1996. Smith, Benjamin. “In What Ways, and for What Reasons, Do We Inscribe Our Bodies?” 15 Nov. 1998. 30 May 2007 <http://www.bmezine.com/ritual/981115/Whatways.html>. Stalder, Felix. “Privacy Is Not the Antidote to Surveillance.” Surveillance and Society 1.1 (2002): 120-124. Umiker-Sebeok, Jean. “Power and the Construction of Gendered Spaces.” Indiana University-Bloomington. 14 April 2007 <http://www.slis.indiana.edu/faculty/umikerse/papers/power.html>. William, Bogard. The Simulation of Surveillance: Hypercontrol in Telematic Societies. Cambridge: Cambridge UP, 1996. Wood, David, and Kristie Ball, eds. 
“A Report on the Surveillance Society.” Surveillance Studies Network, UK, Sep. 2006. 14 April 2007 <http://www.ico.gov.uk/upload/documents/library/data_protection/practical_application/surveillance_society_full_report_2006.pdf>. Citation reference for this article MLA Style Ibrahim, Yasmin. “Commodifying Terrorism: Body, Surveillance and the Everyday.” M/C Journal 10.3 (2007). <http://journal.media-culture.org.au/0706/05-ibrahim.php>. APA Style Ibrahim, Y. (Jun. 2007) “Commodifying Terrorism: Body, Surveillance and the Everyday,” M/C Journal, 10(3). Retrieved from <http://journal.media-culture.org.au/0706/05-ibrahim.php>.
APA, Harvard, Vancouver, ISO, and other styles
41

Acland, Charles. "Matinees, Summers and Opening Weekends." M/C Journal 3, no. 1 (March 1, 2000). http://dx.doi.org/10.5204/mcj.1824.

Full text
Abstract:
Newspapers and the 7:15 Showing Cinemagoing involves planning. Even in the most impromptu instances, one has to consider meeting places, line-ups and competing responsibilities. One arranges child care, postpones household chores, or rushes to finish meals. One must organise transportation and think about routes, traffic, parking or public transit. And during the course of making plans for a trip to the cinema, whether alone or in the company of others, typically one turns to locate a recent newspaper. Consulting its printed page lets us ascertain locations, a selection of film titles and their corresponding show times. In preparing to feed a cinema craving, we burrow through a newspaper to an entertainment section, finding a tableau of information and promotional appeals. Such sections compile the mini-posters of movie advertisements, with their truncated credits, as well as various reviews and entertainment news. We see names of shopping malls doubling as names of theatres. We read celebrity gossip that may or may not pertain to the film selected for that occasion. We informally rank viewing priorities ranging from essential theatrical experiences to those that can wait for the videotape release. We attempt to assess our own mood and the taste of our filmgoing companions, matching up what we suppose are appropriate selections. Certainly, other media vie to supplant the newspaper's role in cinemagoing; many now access on-line sources and telephone services that offer the crucial details about start times. Nonetheless, as a campaign by the Newspaper Association of America in Variety aimed to remind film marketers, 80% of cinemagoers refer to newspaper listings for times and locations before heading out. The accuracy of that association's statistics notwithstanding, for the moment, the local daily or weekly newspaper has a secure place in the routines of cinematic life. A basic impetus for the newspaper's role is its presentation of a schedule of show times. 
Whatever the venue -- published, phone or on-line -- it strikes me as especially telling that schedules are part of the ordinariness of cinemagoing. To be sure, there are those who decide what film to see on site. Anecdotally, I have had several people comment recently that they no longer decide what movie to see, but where to see a (any) movie. Regardless, the schedule, coupled with the theatre's location, figures as a point of coordination for travel through community space to a site of film consumption. The choice of show time is governed by countless demands of everyday life. How often has the timing of a film -- not the film itself, the theatre at which it's playing, nor one's financial situation -- determined one's attendance? How familiar is the assessment that show times are such that one cannot make it, that the film begins a bit too early, that it will run too late for whatever reason, and that other tasks intervene to take precedence? I want to make several observations related to the scheduling of film exhibition. Most generally, it makes manifest that cinemagoing involves an exercise in the application of cinema knowledge -- that is, minute, everyday facilities and familiarities that help orchestrate the ordinariness of cultural life. Such knowledge informs what Michel de Certeau characterises as "the procedures of everyday creativity" (xiv). Far from random, the unexceptional decisions and actions involved with cinemagoing bear an ordering and a predictability. Novelty in audience activity appears, but it is alongside fairly exact expectations about the event. The schedule of start times is essential to the routinisation of filmgoing. Displaying a Fordist logic of streamlining commodity distribution and the time management of consumption, audiences circulate through a machine that shapes their constituency, providing a set time for seating, departure, snack purchases and socialising. 
Even with the staggered times offered by multiplex cinemas, schedules still lay down a fixed template around which other activities have to be arrayed by the patron. As audiences move to and through the theatre, the schedule endeavours to regulate practice, making us the subjects of a temporal grid, a city context, a cinema space, as well as of the film itself. To be sure, one can arrive late and leave early, confounding the schedule's disciplining force. Most importantly, with or without such forms of evasion, it channels the actions of audiences in ways that consideration of the gaze cannot address. Taking account of the scheduling of cinema culture, and its implication of adjunct procedures of everyday life, points to dimensions of subjectivity neglected by dominant theories of spectatorship. To be the subject of a cinema schedule is to understand one assemblage of the parameters of everyday creativity. It would be foolish to see cinema audiences as cattle, herded and processed alone, in some crude Gustave Le Bon fashion. It would be equally foolish not to recognise the manner in which film distribution and exhibition operate precisely by constructing images of the activity of people as demographic clusters and generalised cultural consumers. The ordinary tactics of filmgoing are supplemental to, and run alongside, a set of industrial structures and practices. While there is a correlation between a culture industry's imagined audience and the life that ensues around its offerings, we cannot neglect that, as attention to film scheduling alerts us, audiences are subjects of an institutional apparatus, brought into being for the reproduction of an industrial edifice. Streamline Audiences In this, film is no different from any culture industry. Film exhibition and distribution rely on an understanding of both the market and the product or service being sold at any given point in time. 
Operations respond to economic conditions, competing companies, and alternative activities. Economic rationality in this strategic process, however, only explains so much. This is especially true for an industry that must continually predict, and arguably give shape to, the "mood" and predilections of disparate and distant audiences. Producers, distributors and exhibitors assess which films will "work", to whom they will be marketed, as well as establish the very terms of success. Without a doubt, much of the film industry's attentions act to reduce this uncertainty; here, one need only think of the various forms of textual continuity (genre films, star performances, etc.) and the economies of mass advertising as ways to ensure box office receipts. Yet, at the core of the operations of film exhibition remains a number of flexible assumptions about audience activity, taste and desire. These assumptions emerge from a variety of sources to form a brand of temporary industry "commonsense", and as such are harbingers of an industrial logic. Ien Ang has usefully pursued this view in her comparative analysis of three national television structures and their operating assumptions about audiences. Broadcasters streamline and discipline audiences as part of their organisational procedures, with the consequence of shaping ideas about consumers as well as assuring the reproduction of the industrial structure itself. She writes, "institutional knowledge is driven toward making the audience visible in such a way that it helps the institutions to increase their power to get their relationship with the audience under control, and this can only be done by symbolically constructing 'television audience' as an objectified category of others that can be controlled, that is, contained in the interest of a predetermined institutional goal" (7). 
Ang demonstrates, in particular, how various industrially sanctioned programming strategies (programme strips, "hammocking" new shows between successful ones, and counter-programming to a competitor's strengths) and modes of audience measurement grow out of, and invariably support, those institutional goals. And, most crucially, her approach is not an effort to ascertain the empirical certainty of "actual" audiences; instead, it charts the discursive terrain in which the abstract concept of audience becomes material for the continuation of industry practices. Ang's work tenders special insight to film culture. In fact, television scholarship has taken full advantage of exploring the routine nature of that medium, the best of which deploys its findings to lay bare configurations of power in domestic contexts. One aspect has been television time and schedules. For example, David Morley points to the role of television in structuring everyday life, discussing a range of research that emphasises the temporal dimension. Alerting us to the non-necessary determination of television's temporal structure, he comments that we "need to maintain a sensitivity to these micro-levels of division and differentiation while we attend to the macro-questions of the media's own role in the social structuring of time" (265). As such, the negotiation of temporal structures implies that schedules are not monolithic impositions of order. Indeed, as Morley puts it, they "must be seen as both entering into already constructed, historically specific divisions of space and time, and also as transforming those pre-existing divisions" (266). Television's temporal grid has been addressed by others as well. Paddy Scannell characterises scheduling and continuity techniques, which link programmes, as a standardisation of use, making radio and television predictable, 'user friendly' media (9). 
John Caughie refers to the organization of flow as a way to talk about the national particularities of British and American television (49-50). All, while making their own contributions, appeal to a detailing of viewing context as part of any study of audience, consumption or experience; uncovering the practices of television programmers as they attempt to apprehend and create viewing conditions for their audiences is a first step in this detailing. Why has a similar conceptual framework not been applied with the same rigour to film? Certainly the history of film and television's association with different, at times divergent, disciplinary formations helps us appreciate such theoretical disparities. I would like to mention one less conspicuous explanation. It occurs to me that one frequently sees a collapse in the distinction between the everyday and the domestic; in much scholarship, the latter term appears as a powerful trope of the former. The consequence has been the absenting of a myriad of other -- if you will, non-domestic -- manifestations of everyday-ness, unfortunately encouraging a rather literal understanding of the everyday. The impression is that the abstractions of the everyday are reduced to daily occurrences. Simply put, my minor appeal is for the extension of this vein of television scholarship to out-of-home technologies and cultural forms, that is, other sites and locations of the everyday. In so doing, we pay attention to extra-textual structures of cinematic life; other regimes of knowledge, power, subjectivity and practice appear. Film audiences require a discussion about the ordinary, the calculated and the casual practices of cinematic engagement. Such a discussion would chart institutional knowledge, identifying operating strategies and recognising the creativity and multidimensionality of cinemagoing. What are the discursive parameters in which the film industry imagines cinema audiences? 
What are the related implications for the structures in which the practice of cinemagoing occurs?

Vectors of Exhibition Time

One set of those structures of audience and industry practice involves the temporal dimension of film exhibition. In what follows, I want to speculate on three vectors of the temporality of cinema spaces (meaning that I will not address issues of diegetic time). Note further that my observations emerge from a close study of industrial discourse in the U.S. and Canada. I would be interested to hear how they are manifest in other continental contexts. First, the running times of films encourage turnovers of the audience during the course of a single day at each screen. The special event of lengthy anomalies has helped mark the epic, and the historic, from standard fare. As discussed above, show times coordinate cinemagoing and regulate leisure time. Knowing the codes of screenings means participating in an extension of the industrial model of labour and service management. Running times incorporate more texts than the feature presentation alone. Besides the history of double features, there are now advertisements, trailers for coming attractions, trailers for films now playing in neighbouring auditoriums, promotional shorts demonstrating new sound systems, public service announcements, reminders to turn off cell phones and pagers, and the exhibitor's own signature clips. A growing focal point for filmgoing, these introductory texts received a boost in 1990, when the Motion Picture Association of America changed its standards for the length of trailers, boosting it from 90 seconds to a full two minutes (Brookman). This intertextuality needs to be supplemented by a consideration of inter-media appeals. For example, advertisements for television began appearing in theatres in the 1990s. And many lobbies of multiplex cinemas now offer a range of media forms, including video previews, magazines, arcades and virtual reality games. 
Implied here is that motion pictures are not the only media audiences experience in cinemas and that there is an explicit attempt to integrate a cinema's texts with those at other sites and locations. Thus, an exhibitor's schedule accommodates an intertextual strip, offering a limited parallel to Raymond Williams's concept of "flow", which he characterised by stating -- quite erroneously -- "in all communication systems before broadcasting the essential items were discrete" (86-7). Certainly, the flow between trailers, advertisements and feature presentations is not identical to that of the endless, ongoing text of television. There are not the same possibilities for "interruption" that Williams emphasises with respect to broadcasting flow. Further, in theatrical exhibition, there is an end-time, a time at which there is a public acknowledgement of the completion of the projected performance, one that necessitates vacating the cinema. This end-time is a moment at which the "rental" of the space has come due; and it harkens a return to the street, to the negotiation of city space, to modes of public transit and the mobile privatisation of cars. Nonetheless, a schedule constructs a temporal boundary in which audiences encounter a range of texts and media in what might be seen as limited flow. Second, the ephemerality of audiences -- moving to the cinema, consuming its texts, then passing the seat on to someone else -- is matched by the ephemerality of the features themselves. Distributors' demand for increasing numbers of screens necessary for massive, saturation openings has meant that films now replace one another more rapidly than in the past. Films that may have run for months now expect weeks, with fewer exceptions. Wider openings and shorter runs have created a cinemagoing culture characterised by flux. 
The acceleration of the turnover of films has been made possible by the expansion of various secondary markets for distribution, most importantly videotape, splintering where we might find audiences and multiplying viewing contexts. Speeding up the popular in this fashion means that the influence of individual texts can only be truly gauged via cross-media scrutiny. Short theatrical runs are not axiomatically designed for cinemagoers anymore; they can also be intended to attract the attention of video renters, purchasers and retailers. Independent video distributors, especially, "view theatrical release as a marketing expense, not a profit center" (Hindes & Roman 16). In this respect, we might think of such theatrical runs as "trailers" or "loss leaders" for the video release, with selected locations for a film's release potentially providing visibility, even prestige, in certain city markets or neighbourhoods. Distributors are able to count on some promotion through popular consumer-guide reviews, usually accompanying theatrical release as opposed to the passing critical attention given to video release. Consequently, this shapes the kinds of uses an assessment of the current cinema is put to; acknowledging that new releases function as a resource for cinema knowledge highlights the way audiences choose between and determine big screen and small screen films. Taken in this manner, popular audiences see the current cinema as largely a rough catalogue to future cultural consumption. Third, motion picture release is part of the structure of memories and activities over the course of a year. New films appear in an informal and ever-fluctuating structure of seasons. The concepts of summer movies and Christmas films, or the opening weekends that are marked by a holiday, set up a fit between cinemagoing and other activities -- family gatherings, celebrations, etc. 
Further, this fit is presumably resonant for both the industry and popular audiences alike, though certainly for different reasons. The concentration of new films around visible holiday periods results in a temporally defined dearth of cinemas; an inordinate focus upon three periods in the year in the U.S. and Canada -- the last weekend in May, June/July/August and December -- creates seasonal shortages of screens (Rice-Barker 20). In fact, the boom in theatre construction through the latter half of the 1990s was, in part, to deal with those short-term shortages and not some year-round inadequate seating. Configurations of releasing colour a calendar with the tactical manoeuvres of distributors and exhibitors. Releasing provides a particular shape to the "current cinema", a term I employ to refer to a temporally designated slate of cinematic texts characterised most prominently by their newness. Television arranges programmes to capitalise on flow, to carry forward audiences and to counter-programme competitors' simultaneous offerings. Similarly, distributors jostle with each other, with their films and with certain key dates, for the limited weekends available, hoping to match a competitor's film intended for one audience with one intended for another. Industry reporter Leonard Klady sketched some of the contemporary truisms of releasing based upon the experience of 1997. He remarks upon the success of moving Liar, Liar (Tom Shadyac, 1997) to a March opening and the early May openings of Austin Powers: International Man of Mystery (Jay Roach, 1997) and Breakdown (Jonathan Mostow, 1997), generally seen as not desirable times of the year for premieres. He cautions against opening two films the same weekend, and thus competing with yourself, using the example of Fox's Soul Food (George Tillman, Jr., 1997) and The Edge (Lee Tamahori, 1997). 
While distributors seek out weekends clear of films that would threaten to overshadow their own, Klady points to the exception of two hits opening on the same date of December 19, 1997 -- Tomorrow Never Dies (Roger Spottiswoode, 1997) and Titanic (James Cameron, 1997). Though but a single opinion, Klady's observations are a peek into a conventional strain of strategising among distributors and exhibitors. Such planning for the timing and appearance of films is akin to the programming decisions of network executives. And I would hazard to say that digital cinema, reportedly -- though unlikely -- just on the horizon and in which texts will be beamed to cinemas via satellite rather than circulated in prints, will only augment this comparison; releasing will become that much more like programming, or at least will be conceptualised as such. To summarize, the first vector of exhibition temporality is the scheduling and running time; the second is the theatrical run; the third is the idea of seasons and the "programming" of openings. These are just some of the forces streamlining filmgoers; the temporal structuring of screenings, runs and film seasons provides a material contour to the abstraction of audience. Here, what I have delineated are components of an industrial logic about popular and public entertainment, one that offers a certain controlled knowledge about and for cinemagoing audiences.

Shifting Conceptual Frameworks

A note of caution is in order. I emphatically resist an interpretation that we are witnessing the becoming-film of television and the becoming-tv of film. Underneath the "inversion" argument is a weak brand of technological determinism, as though each asserts its own essential qualities. Such a pat declaration seems more in line with the mythos of convergence, and its quasi-Darwinian "natural" collapse of technologies. 
Instead, my point here is quite the opposite, that there is nothing essential or unique about the scheduling or flow of television; indeed, one does not have to look far to find examples of less schedule-dependent television. What I want to highlight is that application of any term of distinction -- event/flow, gaze/glance, public/private, and so on -- has more to do with our thinking, with the core discursive arrangements that have made film and television, and their audiences, available to us as knowable and different. So, using empirical evidence to slide one term over to the other is a strategy intended to supplement and destabilise the manner in which we draw conclusions, and even pose questions, of each. What this proposes is, again following the contributions of Ien Ang, that we need to see cinemagoing in its institutional formation, rather than some stable technological, textual or experiential apparatus. The activity is not only a function of a constraining industrial practice or of wildly creative patrons, but of a complex inter-determination between the two. Cinemagoing is an organisational entity harbouring, reviving and constituting knowledge and commonsense about film commodities, audiences and everyday life. An event of cinema begins well before the dimming of an auditorium's lights. The moment a newspaper is consulted, with its local representation of an internationally circulating current cinema, its listings belie a scheduling, an orderliness, to the possible projections in a given location. As audiences are formed as subjects of the current cinema, we are also agents in the continuation of a set of institutions as well.

References

Ang, Ien. Desperately Seeking the Audience. New York: Routledge, 1991.
Brookman, Faye. "Trailers: The Big Business of Drawing Crowds." Variety 13 June 1990: 48.
Caughie, John. "Playing at Being American: Games and Tactics." Logics of Television: Essays in Cultural Criticism. Ed. Patricia Mellencamp. 
Bloomington: Indiana UP, 1990.
De Certeau, Michel. The Practice of Everyday Life. Trans. Steve Rendall. Berkeley: U of California P, 1984.
Hindes, Andrew, and Monica Roman. "Video Titles Do Pitstops on Screens." Variety 16-22 Sep. 1996: 11+.
Klady, Leonard. "Hitting and Missing the Market: Studios Show Savvy -- or Just Luck -- with Pic Release Strategies." Variety 19-25 Jan. 1998: 18.
Morley, David. Television, Audiences and Cultural Studies. New York: Routledge, 1992.
Newspaper Association of America. "Before They See It Here..." Advertisement. Variety 22-28 Nov. 1999: 38.
Rice-Barker, Leo. "Industry Banks on New Technology, Expanded Slates." Playback 6 May 1996: 19-20.
Scannell, Paddy. Radio, Television and Modern Life. Oxford: Blackwell, 1996.
Williams, Raymond. Television: Technology and Cultural Form. New York: Schocken, 1975.

Citation reference for this article

MLA style: Charles Acland. "Matinees, Summers and Opening Weekends: Cinemagoing Audiences as Institutional Subjects." M/C: A Journal of Media and Culture 3.1 (2000). [your date of access] <http://www.uq.edu.au/mc/0003/cinema.php>.
Chicago style: Charles Acland, "Matinees, Summers and Opening Weekends: Cinemagoing Audiences as Institutional Subjects," M/C: A Journal of Media and Culture 3, no. 1 (2000), <http://www.uq.edu.au/mc/0003/cinema.php> ([your date of access]).
APA style: Charles Acland. (2000) Matinees, Summers and Opening Weekends: Cinemagoing Audiences as Institutional Subjects. M/C: A Journal of Media and Culture 3(1). <http://www.uq.edu.au/mc/0003/cinema.php> ([your date of access]).
APA, Harvard, Vancouver, ISO, and other styles
42

Edmundson, Anna. "Curating in the Postdigital Age." M/C Journal 18, no. 4 (August 10, 2015). http://dx.doi.org/10.5204/mcj.1016.

Full text
Abstract:
It seems nowadays that any aspect of collecting and displaying tangible or intangible material culture is labeled as curating: shopkeepers curate their wares; DJs curate their musical selections; magazine editors curate media stories; and hipsters curate their coffee tables. Given the increasing ubiquity and complexity of 21st-century notions of curatorship, the current issue of MC Journal, ‘curate’, provides an excellent opportunity to consider some of the changes that have occurred in professional practice since the emergence of the ‘digital turn’. There is no doubt that the internet and interactive media have transformed the way we live our daily lives—and for many cultural commentators it only makes sense that they should also transform our cultural experiences. In this paper, I want to examine the issue of curatorial practice in the postdigital age, looking at some of the ways that curating has changed over the last twenty years—and some of the ways it has not. The term postdigital comes from the work of Ross Parry, and is used to reference the ‘tipping point’ where the use of digital technologies became normative practice in museums (24). Overall, I contend that although new technologies have substantially facilitated the way that curators do their jobs, core business and values have not changed as a result of the digital turn. While major paradigm shifts have occurred in the field of professional curatorship over the last twenty years, these shifts have been issue-driven rather than a result of new technologies.

Everyone’s a Curator

In a 2009 article in the New York Times, journalist Alex Williams commented on the growing trend in American consumer culture of labeling oneself a curator. “The word ‘curate’,” he observed, “has become a fashionable code word among the aesthetically minded, who seem to paste it onto any activity that involves culling and selecting” (1). 
Williams dated the origins of the popular adoption of the term ‘curating’ to a decade earlier, noting the strong association between the uptake and the rise of the internet (2). This association is not surprising. The development of increasingly interactive software such as Web 2.0 has led to a rapid rise in new technologies aimed at connecting people and information in ways that were previously unimaginable. In particular, the internet has become a space in which people can collect, store and, most importantly, share vast quantities of information. This information is often about objects. According to sociologist Jyri Engeström, the most successful social network sites on the internet (such as Pinterest, Flickr, Houzz etc.) use discrete objects, rather than educational content or interpersonal relationships, as the basis for social interaction. So objects become the node for inter-personal communication. In these and other sites, internet users can find, collate and display multiple images of objects on the same page, which can in turn be connected at the press of a button to other related sources of information in the form of text, commentary or more images. These sites are often seen as the opportunity to virtually curate mini-exhibitions, as well as to create mood boards or sites of virtual consumption. The idea of curating as selective aesthetic editing is also popular in online marketplaces such as Etsy, where numerous sellers offer ‘curated’ selections of everything from home wares to prints to (my personal favorite) a curated selection of cat toys. In all of these exercises there is an emphasis on the idea of connoisseurship. As part of his article on the new breed of ‘curators’, for example, Alex Williams interviewed Tom Kalendrain, the Fashion Director of a leading American department store, which had engaged in a collaboration with Scott Schuman of the fashion blog the Sartorialist. 
According to Kalendrain, the store had asked Schuman to ‘curate’ a collection of clothes for them to sell. He justified calling Schuman a curator by explaining: “It was precisely his eye that made the store want to work with him; it was about the right shade of blue, about the cut, about the width of a lapel” (cited in Williams 2). The interview reveals much about current popular notions of what it means to be a curator. The central emphasis of Kalendrain’s distinction was on connoisseurship: exerting a privileged authoritative voice based on intimate knowledge of the subject matter and the ability to discern the very best examples from a plethora of choices. Ironically, in terms of contemporary museum practice, this is a model of curating that museums have consciously been trying to move away from for at least the last three decades. We are now witnessing an interesting disconnect in which the extra-museum community (represented in particular by a postdigital generation of cultural bloggers, commentators and entrepreneurs) is re-vivifying an archaic model of curating, based on object-centric connoisseurship, just at the point where professional curators had thought they had successfully moved on.

From Being about Something to Being for Somebody

The rejection of the object-expert model of curating has been so persuasive that it has transformed the way museums conduct core business across all sectors of the institution. Over the last thirty to forty years museums have witnessed a major pedagogical shift in how curators approach their work and how museums conceptualise their core values. 
These paradigmatic and pedagogical shifts were best characterised by the museologist Stephen Weil in his seminal article “From being about something to being for somebody.” Weil, writing in the late 1990s, noted that museums had turned away from traditional models in which individual curators (by way of scholarship and connoisseurship) dictated how the rest of the world (the audience) apprehended and understood significant objects of art, science and history—towards an audience-centered approach where curators worked collaboratively with a variety of interested communities to create a pluralist forum for social change. In museum parlance these changes are referred to under the general rubric of the ‘new museology’: a paradigm shift which had its origins in the 1970s, its gestation in the 1980s, and began to substantially manifest by the 1990s. Although no longer ‘new’, these shifts continue to influence museum practices in the 2000s. In her article “Curatorship as Social Practice”, museologist Christina Kreps outlined some of the developments over recent decades that have challenged the object-centric model. According to Kreps, the ‘new museology’ was a paradigm shift that emerged from a widespread dissatisfaction with conventional interpretations of the museum and its functions and sought to re-orient itself away from strongly method- and technique-driven, object-focused approaches. “The ‘new museum’ was to be people-centered, action-oriented, and devoted to social change and development” (315). An integral contributor to the developing new museology was the subjection of the western museum in the 1980s and ‘90s to representational critique from academics and activists. Such a critique entailed, in the words of Sharon Macdonald, questioning and drawing attention to “how meanings come to be inscribed and by whom, and how some come to be regarded as ‘right’ or taken as given” (3). 
Macdonald notes that postcolonial and feminist academics were especially engaged in this critique and the growing “identity politics” of the era. There was also a growing engagement with the concept that museological/curatorial work is what Kreps (2003b) calls a ‘social process’: a recognition that “people’s relationships to objects are primarily social and cultural ones” (154). This shift has particularly impacted on the practice of museum curatorship. By way of illustration we can compare two scholarly definitions of what constitutes a curator, one written in 1994 and one from 2001. The Manual of Curatorship, written in 1994 by Gary Edson and David Dean, defines a curator as: “a staff member or consultant who is a specialist in a particular field of study and who provides information, does research and oversees the maintenance, use, and enhancement of collections” (290). Cash Cash, writing in 2001, defines curatorship instead as “a social practice predicated on the principle of a fixed relation between material objects and the human environment” (140). The shift has been towards increased self-reflexivity and a focus on greater plurality–acknowledging the needs of museums’ diverse audiences and community stakeholders. As part of this internal reflection, the role of curator has shifted from sole authority to cultural mediator—from connoisseur to community facilitator, acting as a conduit for greater community-based conversation and audience engagement, resulting in new interpretations of what museums are and what their purpose is. This shift—away from objects and towards audiences—has been so great that it has led some scholars to question the need for museums to have standing collections at all.

Do Museums Need Objects?

In his provocatively titled work Do Museums Still Need Objects?, historian Steven Conn observes that many contemporary museums are turning away from the authority of the object and towards mass entertainment (1). 
Conn notes that there has been an increasing retreat from object-based research in the fields of art, science and ethnography; that less object-based research seems to be occurring in museums; and that fewer objects are being put on display (2). The success of science centers with no standing collections, the reduction in the number of objects put on display in modern museums (23), the increasing phalanx of ‘starchitect’-designed museums where the building is more important than the objects in it (11), and the increase of virtual museums and collections online all seem to indicate that conventional museum objects have had their day (1-2). Or have they? At the same time that all of the above is occurring, ongoing research suggests that in the digital age, more than ever, people are seeking the authenticity of the real. For example, a 2008 survey of 5,000 visitors to living history sites in the USA found that those surveyed expressed a strong desire to commune with historically authentic objects: respondents felt that their lives had become so crazy, so complicated, so unreal that they were seeking something real and authentic in their lives by visiting these museums. (Wilkening and Donnis 1) A subsequent research survey aimed specifically at young audiences (in their early twenties) reported that: seeing stuff online only made them want to see the real objects in person even more, [and that] they felt that museums were inherently authentic, largely because they have authentic objects that are unique and wonderful. (Wilkening 2) Adding to the question ‘do museums need objects?’, Rainey Tisdale argues that in the current digital age we need real museum objects more than ever. “Many museum professionals,” she reports, “have come to believe that the increase in digital versions of objects actually enhances the value of in-person encounters with tangible, real things” (20). Museums still need objects. 
Indeed, in any kind of corporate planning, one of the first things business managers look for in a company is what is unique about it. What can it provide that the competition can’t? Despite the popularity of all sorts of info-tainments, the one thing that museums have (and other institutions don’t) is significant collections. Collections are a museum’s niche resource – in business speak, they are the asset that gives them the advantage over their competitors. Despite the increasing importance of technology in delivering information, including collections online, there is still overwhelming evidence to suggest that we should not be too quick to dismiss the traditional preserve of museums – the numinous object. And in fact, this is precisely the final argument that Steven Conn reaches in his above-mentioned publication.

Curating in the Postdigital Age

While it is reassuring (but not particularly surprising) that generations Y and Z can still differentiate between virtual and real objects, this doesn’t mean that museum curators can bury their heads in the collection room hoping that the digital age will simply go away. The reality is that while digitally savvy audiences continue to feel the need to see and commune with authentic, materially-present objects, the ways in which they access information about these objects (prior to, during, and after a museum visit) have changed substantially due to technological advances. In turn, the ways in which curators research and present these objects – and stories about them – have also changed. So what are some of the changes that have occurred in museum operations and visitor behavior due to technological advances over the last twenty years? The most obvious technological advances over the last twenty years have actually been in data management. Since the 1990s a number of specialist data management systems have been developed for use in the museum sector. 
In theory at least, a curator can now access the entire collections of an institution without leaving her desk. Moreover, the same database that tells the curator how many objects the institution holds from the Torres Strait Islands can also tell her what they look like (through high-quality images); which objects were exhibited in past exhibitions; what their prior labels were; what in-house research has been conducted on them; what the conservation requirements are; where they are stored; and who to contact for copyright clearance for display—to name just a few functions. In addition, a curator can get on the internet to search the online collection databases of other museums to find what objects they have from the Torres Strait Islands. Thus, while our curator is at this point conducting the same type of exhibition research that she would have done twenty years ago, the ease with which she can access information is substantially greater. The major difference, of course, is that today, rather than in the past, the curator would be collaborating with members of the original source community to undertake this project. Despite the rise of the internet, this type of liaison still usually occurs face to face. The development of accessible digital databases through the internet and the capacity to download images and information at a rapid rate have also changed the way non-museum staff can access collections. Audiences can now visit museum websites through which they can easily access information about current and past exhibitions, public programs, and online collections. In many cases visitors can also contribute to general discussion forums and collections provenance data through various means such as ‘tagging’; commenting on blogs; message boards; and virtual ‘talk back’ walls. Again, however, this represents a change in how visitors access museums but not a fundamental shift in what they can access. 
In the past, museum visitors were still encouraged to access and comment upon the collections; it’s just that doing so took a lot more time and effort. The rise of interactivity and the internet—in particular through Web 2.0—has led many commentators to call for a radical change in the ways museums operate. Museum analyst Lynda Kelly (2009) has commented that: the demands of the ‘information age’ have raised new questions for museums. It has been argued that museums need to move from being suppliers of information to providing usable knowledge and tools for visitors to explore their own ideas and reach their own conclusions because of increasing access to technologies, such as the internet. Gordon Freedman, for example, argues that internet technologies such as computers, the World Wide Web, mobile phones and email “… have put the power of communication, information gathering, and analysis in the hands of the individuals of the world” (299). Freedman argued that museums need to “evolve into a new kind of beast” (300) in order to keep up with these changes, opening up the possibility of audiences becoming mediators of information and knowledge. Although we often hear about the potential of new technologies to open up multiple authorship of exhibitions, I have yet to hear of an example of this successfully taking place. This doesn’t mean, however, that it will never happen. At present most museums seem to be merely dipping their toes in the water. A recent example from the Art Gallery of South Australia illustrates this point. In 2013, the Gallery mounted an exhibition that was, in theory at least, curated by the public. Labeled as “the ultimate people’s choice exhibition”, the project was hosted in conjunction with ABC Radio Adelaide. The public was encouraged to go online to the gallery website and select from a range of artworks in different categories by voting for their favorites. 
The ‘winning’ works were to form the basis of the exhibition. While the media spin on the exhibition gave the illusion of a mass-curated show, in reality very little actual control was given over to the audience-curators. The public was presented with a range of artworks, which had already been pre-selected from the standing collections; the themes for the exhibition had also already been determined as they informed the 120 artworks that were offered up for voting. Thus, in the end the pre-selection of objects and themes, as well as the timing and execution of the exhibition remained entirely in the hands of the professional curators. Another recent innovation did not attempt to harness public authorship, but rather enhanced individual visitor connections to museum collections by harnessing new GPS technologies. The Streetmuseum was a free app program created by the Museum of London to bring geotagged historical street views to handheld or portable mobile devices. The program allowed users to undertake a self-guided tour of London. After programming in their route, users could then point their device at various significant sites along the way. Looking through their viewfinder they would see a 3D historic photograph overlaid on the live site – allowing users not only to see what the area looked like in the past but also to capture an image of the overlay. While many of the available tagging apps simply allow for the opportunity of adding more white noise, allowing viewers to add commentary, pics, links to a particular geotagged site but with no particular focus, the Streetmuseum had a well-defined purpose: to encourage its audience to get out and explore London; to share its archival photograph collection with a broader audience; and to teach people more about London’s unique history. A Second Golden Age? 
A few years ago Steven Conn suggested that museums are experiencing an international ‘golden age’ with more museums being built and visited and talked about than ever before (1). In the United States, where Conn is based, there are more than 17,500 accredited museums, and more than two million people visit some sort of museum per day, averaging around 865 million museum visits per year (2). However, at the same time that museums are proliferating, the traditional areas of academic research and theory that feed into museums such as history, cultural studies, anthropology and art history are experiencing a period of intense self-reflexivity. Conn writes: At the turn of the twenty-first century, more people are going to more museums than at any time in the past, and simultaneously more scholars, critics, and others are writing and talking about museums. The two phenomena are most certainly related but it does not seem to be a happy relationship. Even as museums enjoy more and more success…many who write about them express varying degrees of foreboding. (1) There is no doubt that the internet and increasingly interactive media have transformed the way we live our daily lives—it only makes sense that they should also transform our cultural experiences. At the same time, museums need to learn to ride the wave without getting dumped by it. The best new media acts as a bridge—connecting people to places and ideas—allowing them to learn more about museum objects and historical spaces, value-adding to museum visits rather than replacing them altogether. As museologist Elaine Gurian has recently concluded, the core business of museums seems unchanged thus far by the adoption of internet-based technology: “the museum field generally, its curators, and those academic departments focused on training curators remain at the core philosophically unchanged despite their new websites and shiny new technological reference centres” (97). 
Virtual life has not replaced real life and online collections and exhibitions have not replaced real-life visitations. Visitors want access to credible information about museum objects and museum exhibitions; they are not looking for Wiki-Museums. Or if they are, they are looking to the Internet community to provide that service rather than the employees of state and federally funded museums. Both provide legitimate services, but they don’t necessarily need to provide the same service. In the same vein, extra-museum ‘curating’ of objects and ideas through social media sites such as Pinterest, Flickr, Instagram and Tumblr provides a valuable source of inspiration and a highly enjoyable form of virtual consumption. But the popular uptake of the term ‘curating’ remains as easily separable from professional practice as the prior uptake of the terms ‘doctor’ and ‘architect’. An individual who doctors an image, or is the architect of their destiny, is still not going to operate on a patient nor construct a building. While major ontological shifts have occurred within museum curatorship over the last thirty years, these changes have resulted from wider social shifts, not directly from technology. This is not to say that technology will not change the museum’s ‘way of being’ in my professional lifetime—it’s just to say it hasn’t happened yet. References Cash Cash, Phillip. “Medicine Bundles: An Indigenous Approach.” Ed. T. Bray. The Future of the Past: Archaeologists, Native Americans and Repatriation. New York and London: Garland Publishing, 2001. 139-145. Conn, Steven. Do Museums Still Need Objects? Philadelphia: University of Pennsylvania Press, 2011. Edson, Gary, and David Dean. The Handbook for Museums. New York and London: Routledge, 1994. Engeström, Jyri. “Why Some Social Network Services Work and Others Don’t — Or: The Case for Object-Centered Sociality.” Zengestrom Apr. 2005. 
17 June 2015 ‹http://www.zengestrom.com/blog/2005/04/why-some-social-network-services-work-and-others-dont-or-the-case-for-object-centered-sociality.html›. Freedman, Gordon. “The Changing Nature of Museums.” Curator 43.4 (2000): 295-306. Gurian, Elaine Heumann. “Curator: From Soloist to Impresario.” Eds. Fiona Cameron and Lynda Kelly. Hot Topics, Public Culture, Museums. Newcastle: Cambridge Scholars Publishing, 2010. 95-111. Kelly, Lynda. “Museum Authority.” Blog 12 Nov. 2009. 25 June 2015 ‹http://australianmuseum.net.au/blogpost/museullaneous/museum-authority›. Kreps, Christina. “Curatorship as Social Practice.” Curator: The Museum Journal 46.3 (2003): 311-323. ———. Liberating Culture: Cross-Cultural Perspectives on Museums, Curation, and Heritage Preservation. London and New York: Routledge, 2003. Macdonald, Sharon. “Expanding Museum Studies: An Introduction.” Ed. Sharon Macdonald. A Companion to Museum Studies. Oxford: Blackwell Publishing, 2011. Parry, Ross. “The End of the Beginning: Normativity in the Postdigital Museum.” Museum Worlds: Advances in Research 1 (2013): 24-39. Tisdale, Rainey. “Do History Museums Still Need Objects?” History News (2011): 19-24. 18 June 2015 ‹http://aaslhcommunity.org/historynews/files/2011/08/RaineySmr11Links.pdf›. Suchy, Serene. Leading with Passion: Change Management in the Twenty-First Century Museum. Lanham: AltaMira Press, 2004. Weil, Stephen E. “From Being about Something to Being for Somebody: The Ongoing Transformation of the American Museum.” Daedalus, Journal of the American Academy of Arts and Sciences 128.3 (1999): 229–258. Wilkening, Susie. “Community Engagement and Objects—Mutually Exclusive?” Museum Audience Insight 27 July 2009. 14 June 2015 ‹http://reachadvisors.typepad.com/museum_audience_insight/2009/07/community-engagement-and-objects-mutually-exclusive.html›. ———, and Erica Donnis. “Authenticity? It Means Everything.” History News 63.4 (2008). Williams, Alex. 
“On the Tip of Creative Tongues.” New York Times 4 Oct. 2009. 4 June 2015 ‹http://www.nytimes.com/2009/10/04/fashion/04curate.html›.
APA, Harvard, Vancouver, ISO, and other styles
43

Lee, Ashlin. "In the Shadow of Platforms." M/C Journal 24, no. 2 (April 27, 2021). http://dx.doi.org/10.5204/mcj.2750.

Full text
Abstract:
Introduction This article explores the changing relational quality of “the shadow of hierarchy”, in the context of the merging of platforms with infrastructure as the source of the shadow of hierarchy. In governance and regulatory studies, the shadow of hierarchy (or variations thereof) describes the space of influence that hierarchical organisations and infrastructures have (Héritier and Lehmkuhl; Lance et al.). A shift in who/what casts the shadow of hierarchy will necessarily result in changes to the attendant relational values, logics, and (techno)socialities that constitute the shadow, and a new arrangement of shadow that presents new challenges and opportunities. This article reflects on relevant literature to consider two different ways the shadow of hierarchy has qualitatively changed as platforms, rather than infrastructures, come to cast the shadow of hierarchy – an increase in scalability; and new socio-technical arrangements of (non)participation – and the opportunities and challenges therein. The article concludes that more concerted efforts are needed to design the shadow, given a seemingly directionless desire to enact data-driven solutions. The Shadow of Hierarchy, Infrastructures, and Platforms The shadow of hierarchy refers to how institutional, infrastructural, and organisational hierarchies create a relational zone of influence over a particular space. This commonly refers to executive decisions and legislation created by nation states, which are cast over private and non-governmental actors (Héritier and Lehmkuhl, 2). Lance et al. (252–53) argue that the shadow of hierarchy is a productive and desirable thing. Exploring the shadow of hierarchy in the context of how geospatial data agencies govern their data, Lance et al. find that the shadow of hierarchy enables the networked governance approaches that agencies adopt. 
This is because operating in the shadow of institutions provides authority, confers bureaucratic legitimacy and top-down power, and offers financial support. The darkness of the shadow is thus less a moral or ethicopolitical statement (such as that suggested by Fisher and Bolter, who use the idea of darkness to unpack the morality of tourism involving death and human suffering), and instead a relationality; an expression of differing values, logics, and (techno)socialities internal and external to those infrastructures and institutions that cast it (Gehl and McKelvey). The shadow of hierarchy might therefore be thought of as a field of relational influences and power that a social body casts over society, by virtue of a privileged position vis-à-vis society. It modulates society’s “light”; the resources (Bourdieu) and power relationships (Foucault) that run through social life, as parsed through a certain institutional and infrastructural worldview (the thing that blocks the light to create the shadow). In this way the shadow of hierarchy is not a field of absolute blackness that obscures, but instead a gradient of light and dark that creates certain effects. The shadow of hierarchy is now, however, also being cast by decentralised, privately held, and non-hierarchical platforms that are replacing or merging with public infrastructure, creating new social effects. Platforms are digital, socio-technical systems that create relationships between different entities. They are most commonly built around a relatively fixed core function (such as a social media service like Facebook), that then interacts with a peripheral set of complementors (advertising companies and app developers in the case of social media; Baldwin and Woodard), to create new relationships, forms of value, and other interactions (van Dijck, The Culture of Connectivity). 
In creating these relationships, platforms become inherently political (Gillespie), shaping relationships and content on the platform (Suzor) and in embodied life (Ajunwa; Eubanks). While platforms are often associated with optional consumer platforms (such as streaming services like Spotify), they have increasingly come to occupy the place of public infrastructure, and act as a powerful enabler of different socio-technical, economic, and political relationships (van Dijck, Governing Digital Societies). For instance, Plantin et al. argue that platforms have merged with infrastructures, and that once publicly held and funded institutions and essential services now share many characteristics with for-profit, privately held platforms. For example, Australia has had a long history of outsourcing employment services (Webster and Harding), and nearly privatised its entire visa processing data infrastructure (Jenkins). Platforms therefore have a greater role in casting the shadow of hierarchy than before. In doing so, they cast a shadow that is qualitatively different, modulated through a different set of relational values and (techno)socialities. Scalability A key difference and selling point of platforms is their scalability: they can rapidly and easily up- and down-scale their functionalities in a way that traditional infrastructure cannot (Plantin et al.). The ability to respond “on-demand” to infrastructural requirements has made platforms the go-to service delivery option in the neo-liberalised public infrastructure environment (van Dijck, Governing Digital Societies). For instance, service providers like Amazon Web Services or Microsoft Azure provide on-demand computing capacity for many nations’ most valuable services, including their intelligence and security capabilities (Amoore, Cloud Ethics; Konkel). 
The value of such platforms to government lies in the reduced cost and risk that comes with using rented capabilities, and the enhanced flexibility to increase or decrease their usage as required, without any of the economic sunk costs attached to owning the infrastructure. Scalability is, however, not just about on-demand technical capability, but about how platforms can change the scale of socio-technical relationships and services that are mediated through the platform. This changes the relational quality of the shadow of hierarchy, as activities and services occurring within the shadow are now connected into a larger and rapidly modulating scale. Scalability allows the shadow of hierarchy to extend from those in proximity to institutions to the broader population in general. For example, individual citizens can more easily “reach up” into governmental services and agencies as a part of completing their everyday business through platforms such as MyGov in Australia (Services Australia). Using a smartphone application, citizens are afforded a more personalised and adaptive experience of the welfare state, as engaging with welfare services is no longer tied to specific “brick-and-mortar” locations, but constantly available through a smartphone app and web portal. Multiple government services including healthcare and taxation are also connected to this platform, allowing users to reach across multiple government service domains to complete their personal business, seeking information and services that would have once required separate communications with different branches of government. The individual’s capacities to engage with the state have therefore upscaled with this change in the shadow, retaining a productivity- and capacity-enhancing quality that is reminiscent of older infrastructures and institutions, as the individual and their lived context is brought closer to the institutions themselves. Scale, however, comes with complications. 
The fundamental driver for scalability and its adaptive qualities is datafication. This means individuals and organisations are inflecting their operational and relational logics with the logic of datafication: a need to capture all data, at all times (van Dijck, Datafication; Fourcade and Healy). Platforms, especially privately held platforms, benefit significantly from this, as they rely on data to drive and refine their algorithmic tools, and ultimately create actionable intelligence that benefits their operations. Thus, scalability allows platforms to better “reach down” into individual lives and different social domains to fuel their operations. For example, as public transport services become increasingly datafied into mobility-as-a-service (MaaS) systems, ride-sharing and on-demand transportation platforms like Uber and Lyft become incorporated into the public transport ecosystem (Lyons et al.). These platforms capture geospatial, behavioural, and reputational data from users and drivers during their interactions with the platform (Rosenblat and Stark; Attoh et al.). This generates additional value, and profits, for the platform itself with limited value returned to the user or the broader public it supports, outside of the transport service. It also places the platform in a position to gain wider access to the population and their data, by virtue of operating as a part of a public service. In this way the shadow of hierarchy may exacerbate inequity. The (dis)benefits of the shadow of hierarchy become unevenly spread amongst actors within its field, a function of an increased scalability that connects individuals into much broader assemblages of datafication. For Eubanks, this can entrench existing economic and social inequalities by forcing those in need to engage with digitally mediated welfare systems that rely on distant and opaque computational judgements. 
Local services are subject to increased digital surveillance, a removal of agency from frontline advocates, and algorithmic judgement at scale. More fortunate citizens are also still at risk, with Nardi and Ekbia arguing that many digitally scaled relationships are examples of “heteromation”, whereby platforms convince actors in the platform to labour for free, such as through providing ratings which establish a platform’s reputational economy. Such labour fuels the operation of the platform through exploiting users, who become both a product/resource (as a source of data for third party advertisers) and a performer of unrewarded digital labour, such as through providing user reviews that help guide a platform’s algorithm(s). Both these examples represent a particularly disconcerting outcome for the shadow of hierarchy, which has its roots in public sector institutions who operate for a common good through shared and publicly held infrastructure. In shifting towards platforms, especially privately held platforms, value is transmitted to private corporations and not the public or the commons, as was the case with traditional infrastructure. The public also comes to own the risks attached to platforms if they become tied to public services, placing a further burden on the public if the platform fails, while reaping none of the profit and value generated through datafication. This is a poor bargain at best. (Non)Participation Scalability forms the basis for a further predicament: a changing socio-technical dynamic of (non)participation between individuals and services. According to Star (118), infrastructures are defined through their relationships to a given context. These relationships, which often exist as boundary objects between different communities, are “loosely structured in common use, and become tightly bound in particular locations” (Star, 118). 
While platforms are certainly boundary objects and relationally defined, the affordances of cloud computing have enabled a decoupling from physical location, and the operation of platforms across time and space through distributed digital nodes (smartphones, computers, and other localised hardware) and powerful algorithms that sort and process requests for service. This does not mean location is not important for the cloud (see Amoore, Cloud Geographies), but platforms are less likely to have a physically co-located presence in the same way traditional infrastructures had. Without the same institutional and infrastructural footprint, the modality for participating in and with the shadow of hierarchy that platforms cast becomes qualitatively different and predicated on digital intermediaries. Replacing a physical and human footprint with algorithmically supported and decentralised computing power allows scalability and some efficiency improvements, but it also removes taken-for-granted touchpoints for contestation and recourse. For example, ride-sharing platform Uber operates globally, and has expressed interest in operating in complement to (and perhaps in competition with) public transport services in some cities (Hall et al.; Conger). Given that Uber would come to operate as a part of the shadow of hierarchy that transport authorities cast over said cities, it would not be unreasonable to expect Uber to be subject to comparable advocacy, adjudication, transparency, and complaint-handling requirements. Unfortunately, it is unclear if this would be the case, with examples suggesting that Uber would use the scalability of its platform to avoid these mechanisms. This is revealed by ongoing legal action launched by concerned Uber drivers in the United Kingdom, who have sought access to the profiling data that Uber uses to manage and monitor its drivers (Sawers). 
The challenge has relied on transnational law (the European Union’s General Data Protection Regulation), with UK-based drivers lodging claims in Amsterdam to initiate the challenge. Such costly and complex actions are beyond the means of many, but demonstrate how reasonable participation in socio-technical and governance relationships (like contestations) might become limited, depending on how the shadow of hierarchy changes with the incorporation of platforms. Even if legal challenges for transparency are successful, they may not produce meaningful change. For instance, O’Neil links algorithmic bias to mathematical shortcomings in the variables used to measure the world; in the creation of irrational feedback loops based on incorrect data; and in the use of unsound data analysis techniques. These three factors contribute to inequitable digital metrics like predictive policing algorithms that disproportionately target racial minorities. Large amounts of selective data on minorities create myopic algorithms that direct police to target minorities, creating more selective data that reinforces the spurious model. These biases, however, are persistently inaccessible, and even when visible are often unintelligible to experts (Ananny and Crawford). The visibility of the technical “installed base” that supports institutions and public services is therefore not a panacea, especially when the installed base (un)intentionally obfuscates participation in meaningful engagement like complaints handling. A negative outcome is, however, also not an inevitable thing. It is entirely possible to design platforms to allow individual users to scale up and have opportunities for enhanced participation. 
For instance, eGovernance and mobile governance literature have explored how citizens engage with state services at scale (Thomas and Streib; Foth et al.), and the open government movement has demonstrated the effectiveness of open data in understanding government operations (Barns; Janssen et al.), although these both have their challenges (Chadwick; Dawes). It is not a fantasy to imagine alternative configurations of the shadow of hierarchy that allow more participatory relationships. Open data could facilitate the governance of platforms at scale (Box et al.), where users are enfranchised into a platform by some form of membership right and given access to financial and governance records, in the same way that corporate shareholders are enfranchised, facilitated by the same app that provides a service. This could also be extended to decision making through voting and polling functions. Such a governance form would require radically different legal, business, and institutional structures to create and enforce this arrangement. Delacroix and Lawrence, for instance, suggest that data trusts, where a trustee is assigned legal and fiduciary responsibility to achieve maximum benefit for a specific group’s data, can be used to negotiate legal and governance relationships that meaningfully benefit the users of the trust. Trustees can be instructed to only share data with services whose algorithms are regularly audited for bias and provide datasets that are accurate representations of their users, for instance, avoiding erroneous proxies that disrupt algorithmic models. While these developments are in their infancy, it is not unreasonable to reflect on such endeavours now, as the technologies to achieve these are already in use. Conclusions There is a persistent myth that data will yield better, faster, more complete results in whatever field it is applied (Lee and Cook; Fourcade and Healy; Mayer-Schönberger and Cukier; Kitchin). 
This myth has led to data-driven assemblages, including artificial intelligence, platforms, surveillance, and other data-technologies, being deployed throughout social life. The public sector is no exception to this, but the deployment of any technological solution within the traditional institutions of the shadow of hierarchy is fraught with challenges, and often results in failure or unintended consequences (Henman). The complexity of these systems combined with time, budgetary, and political pressures can create a contested environment. It is this environment that moulds society’s light and resources to cast the shadow of hierarchy. Relationality within a shadow of hierarchy that reflects the complicated and competing interests of platforms is likely to present a range of unintended social consequences that are inherently emergent because they are entering into a complex system – society – that is extremely hard to model. The relational qualities of the shadow of hierarchy are therefore now more multidimensional and emergent, and experiences relating to socio-technical features like scale, and as a follow-on, (non)participation, are evidence of this. Yet by being emergent, they are also directionless, a product of complex systems rather than designed and strategic intent. This is not an inherently bad thing, but given the potential for data-systems and platforms to have negative or unintended consequences, it is worth considering whether remaining directionless is the best outcome. There are many examples of data-driven systems in healthcare (Obermeyer et al.), welfare (Eubanks; Henman and Marston), and economics (MacKenzie), having unintended and negative social consequences. Appropriately guiding the design and deployment of these systems also represents a growing body of knowledge and practical endeavour (Jirotka et al.; Stilgoe et al.). 
Armed with the knowledge of these social implications, constructing an appropriate social architecture (Box and Lemon; Box et al.) around the platforms and data systems that form the shadow of hierarchy should be encouraged. This social architecture should account for the affordances and emergent potentials of a complex social, institutional, economic, political, and technical environment, and should assist in guiding the shadow of hierarchy away from egregious challenges and towards meaningful opportunities. To be directionless is an opportunity to take a new direction. The intersection of platforms with public institutions and infrastructures has moulded society’s light into an evolving and emergent shadow of hierarchy over many domains. With the scale of the shadow changing, and shaping participation, who benefits and who loses out in the shadow of hierarchy is also changing. Equipped with insights into this change, we should not hesitate to shape this change, creating or preserving relationalities that offer the best outcomes. Defining, understanding, and practically implementing what the “best” outcome(s) are would be a valuable next step in this endeavour, and should prompt considerable discussion. If we wish the shadow of hierarchy to continue to be productive, then finding a social architecture to shape the emergence and directionlessness of socio-technical systems like platforms is an important step in the continued evolution of the shadow of hierarchy. References Ajunwa, Ifeoma. “Age Discrimination by Platforms.” Berkeley J. Emp. & Lab. L. 40 (2019): 1-30. Amoore, Louise. Cloud Ethics: Algorithms and the Attributes of Ourselves and Others. Durham: Duke University Press, 2020. ———. “Cloud Geographies: Computing, Data, Sovereignty.” Progress in Human Geography 42.1 (2018): 4-24. Ananny, Mike, and Kate Crawford. 
“Seeing without Knowing: Limitations of the Transparency Ideal and Its Application to Algorithmic Accountability.” New Media & Society 20.3 (2018): 973–89. Attoh, Kafui, et al. “‘We’re Building Their Data’: Labor, Alienation, and Idiocy in the Smart City.” Environment and Planning D: Society and Space 37.6 (2019): 1007-24. Baldwin, Carliss Y., and C. Jason Woodard. “The Architecture of Platforms: A Unified View.” Platforms, Markets and Innovation. Ed. Annabelle Gawer. Cheltenham: Edward Elgar, 2009. 19–44. Barns, Sarah. “Mine Your Data: Open Data, Digital Strategies and Entrepreneurial Governance by Code.” Urban Geography 37.4 (2016): 554–71. Bourdieu, Pierre. Distinction: A Social Critique of the Judgement of Taste. Cambridge, MA: Harvard University Press, 1984. Box, Paul, et al. Data Platforms for Smart Cities – A Landscape Scan and Recommendations for Smart City Practice. Canberra: CSIRO, 2020. Box, Paul, and David Lemon. The Role of Social Architecture in Information Infrastructure: A Report for the National Environmental Information Infrastructure (NEII). Canberra: CSIRO, 2015. Chadwick, Andrew. “Explaining the Failure of an Online Citizen Engagement Initiative: The Role of Internal Institutional Variables.” Journal of Information Technology & Politics 8.1 (2011): 21–40. Conger, Kate. “Uber Wants to Sell You Train Tickets. And Be Your Bus Service, Too.” The New York Times, 7 Aug. 2019. 19 Jan. 2021. <https://www.nytimes.com/2019/08/07/technology/uber-train-bus-public-transit.html>. Dawes, Sharon S. “The Evolution and Continuing Challenges of E‐Governance.” Public Administration Review 68 (2008): 86–102. Delacroix, Sylvie, and Neil D. Lawrence. “Bottom-Up Data Trusts: Disturbing the ‘One Size Fits All’ Approach to Data Governance.” International Data Privacy Law 9.4 (2019): 236-252. Eubanks, Virginia. Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor. New York: St. Martin’s Press, 2018. Fisher, Joshua A., and Jay David Bolter. 
“Ethical Considerations for AR Experiences at Dark Tourism Sites.” IEEE Xplore 29 Apr. 2019. 13 Apr. 2021 <https://ieeexplore.ieee.org/document/8699186>. Foth, Marcus, et al. From Social Butterfly to Engaged Citizen: Urban Informatics, Social Media, Ubiquitous Computing, and Mobile Technology to Support Citizen Engagement. Cambridge, MA: MIT Press, 2011. Fourcade, Marion, and Kieran Healy. “Seeing like a Market.” Socio-Economic Review 15.1 (2017): 9–29. Gehl, Robert, and Fenwick McKelvey. “Bugging Out: Darknets as Parasites of Large-Scale Media Objects.” Media, Culture & Society 41.2 (2019): 219–35. Gillespie, Tarleton. “The Politics of ‘Platforms.’” New Media & Society 12.3 (2010): 347–64. Hall, Jonathan D., et al. “Is Uber a Substitute or Complement for Public Transit?” Journal of Urban Economics 108 (2018): 36–50. Henman, Paul. “Improving Public Services Using Artificial Intelligence: Possibilities, Pitfalls, Governance.” Asia Pacific Journal of Public Administration 42.4 (2020): 209–21. Henman, Paul, and Greg Marston. “The Social Division of Welfare Surveillance.” Journal of Social Policy 37.2 (2008): 187–205. Héritier, Adrienne, and Dirk Lehmkuhl. “Introduction: The Shadow of Hierarchy and New Modes of Governance.” Journal of Public Policy 28.1 (2008): 1–17. Janssen, Marijn, et al. “Benefits, Adoption Barriers and Myths of Open Data and Open Government.” Information Systems Management 29.4 (2012): 258–68. Jenkins, Shannon. “Visa Privatisation Plan Scrapped, with New Approach to Tackle ’Emerging Global Threats’.” The Mandarin 23 Mar. 2020. 19 Jan. 2021 <https://www.themandarin.com.au/128244-visa-privatisation-plan-scrapped-with-new-approach-to-tackle-emerging-global-threats/>. Jirotka, Marina, et al. “Responsible Research and Innovation in the Digital Age.” Communications of the ACM 60.6 (2016): 62–68. Kitchin, Rob. The Data Revolution: Big Data, Open Data, Data Infrastructures and Their Consequences. Thousand Oaks, CA: Sage, 2014. Konkel, Frank. 
“CIA Awards Secret Multibillion-Dollar Cloud Contract.” Nextgov 20 Nov. 2020. 19 Jan. 2021 <https://www.nextgov.com/it-modernization/2020/11/exclusive-cia-awards-secret-multibillion-dollar-cloud-contract/170227/>. Lance, Kate T., et al. “Cross‐Agency Coordination in the Shadow of Hierarchy: ‘Joining up’Government Geospatial Information Systems.” International Journal of Geographical Information Science, 23.2 (2009): 249–69. Lee, Ashlin J., and Peta S. Cook. “The Myth of the ‘Data‐Driven’ Society: Exploring the Interactions of Data Interfaces, Circulations, and Abstractions.” Sociology Compass 14.1 (2020): 1–14. Lyons, Glenn, et al. “The Importance of User Perspective in the Evolution of MaaS.” Transportation Research Part A: Policy and Practice 121(2019): 22-36. MacKenzie, Donald. “‘Making’, ‘Taking’ and the Material Political Economy of Algorithmic Trading.” Economy and Society 47.4 (2018): 501-23. Mayer-Schönberger, V., and K. Cukier. Big Data: A Revolution That Will Change How We Live, Work and Think. London: John Murray, 2013. Michel Foucault. Discipline and Punish. London: Penguin, 1977. Nardi, Bonnie, and Hamid Ekbia. Heteromation, and Other Stories of Computing and Capitalism. Cambridge, MA: MIT Press, 2017. O’Neil, Cathy. Weapons of Math Destruction – How Big Data Increases Inequality and Threatens Democracy. London: Penguin, 2017. Obermeyer, Ziad, et al. “Dissecting Racial Bias in an Algorithm Used to Manage the Health of Populations.” Science 366.6464 (2019): 447-53. Plantin, Jean-Christophe, et al. “Infrastructure Studies Meet Platform Studies in the Age of Google and Facebook.” New Media & Society 20.1 (2018): 293–310. Rosenblat, Alex, and Luke Stark. “Algorithmic Labor and Information Asymmetries: A Case Study of Uber’s Drivers.” International Journal of Communication 10 (2016): 3758–3784. Sawers, Paul. “Uber Drivers Sue for Data on Secret Profiling and Automated Decision-Making.” VentureBeat 20 July. 2020. 19 Jan. 
2021 <https://venturebeat.com/2020/07/20/uber-drivers-sue-for-data-on-secret-profiling-and-automated-decision-making/>. Services Australia. About MyGov. Services Australia 19 Jan. 2021. 19 Jan. 2021 <https://www.servicesaustralia.gov.au/individuals/subjects/about-mygov>. Star, Susan Leigh. “Infrastructure and Ethnographic Practice: Working on the Fringes.” Scandinavian Journal of Information Systems 14.2 (2002):107-122. Stilgoe, Jack, et al. “Developing a Framework for Responsible Innovation.” Research Policy 42.9 (2013):1568-80. Suzor, Nicolas. Lawless: The Secret Rules That Govern Our Digital Lives. Cambridge: Cambridge University Press, 2019. Thomas, John Clayton, and Gregory Streib. “The New Face of Government: Citizen‐initiated Contacts in the Era of E‐Government.” Journal of Public Administration Research and Theory 13.1 (2003): 83-102. Van Dijck, José. “Datafication, Dataism and Dataveillance: Big Data between Scientific Paradigm and Ideology.” Surveillance & Society 12.2 (2014): 197–208. ———. “Governing Digital Societies: Private Platforms, Public Values.” Computer Law & Security Review 36 (2020) 13 Apr. 2021 <https://www.sciencedirect.com/science/article/abs/pii/S0267364919303887>. ———. The Culture of Connectivity: A Critical History of Social Media. Oxford: Oxford University Press, 2013. Webster, Elizabeth, and Glenys Harding. “Outsourcing Public Employment Services: The Australian Experience.” Australian Economic Review 34.2 (2001): 231-42.
APA, Harvard, Vancouver, ISO, and other styles
44

Dominey-Howes, Dale. "Tsunami Waves of Destruction: The Creation of the “New Australian Catastrophe”." M/C Journal 16, no. 1 (March 18, 2013). http://dx.doi.org/10.5204/mcj.594.

Full text
Abstract:
Introduction
The aim of this paper is to examine whether recent catastrophic tsunamis have driven a cultural shift in the awareness of Australians to the danger associated with this natural hazard and whether the media have contributed to the emergence of “tsunami” as a new Australian catastrophe. Prior to the devastating 2004 Indian Ocean Tsunami disaster (2004 IOT), tsunamis as a type of hazard capable of generating widespread catastrophe were not well known by the general public and had barely registered within the wider scientific community. As a university-based lecturer who specialises in natural disasters, I always started my public talks or student lectures with an attempt at a detailed description of what a tsunami is. With little high-quality visual and media imagery to use, this was not easy. The Australian geologist Ted Bryant was right when he named his 2001 book Tsunami: The Underrated Hazard. That changed on 26 December 2004, when the third largest earthquake ever recorded occurred northwest of Sumatra, Indonesia, triggering the most catastrophic tsunami ever experienced. The 2004 IOT claimed at least 220,000 lives—probably more—injured tens of thousands, destroyed widespread coastal infrastructure, and left millions homeless. Beyond the catastrophic impacts, this tsunami was conspicuous because, for the first time, such a devastating tsunami was widely captured on video and other forms of moving and still imagery. This occurred for two reasons. Firstly, the tsunami took place during daylight hours in good weather conditions—factors conducive to capturing high-quality visual images. Secondly, many people—both local residents and westerners who were on beachside holidays and at the coast at multiple locations impacted by the tsunami—were able to capture images of the tsunami on their cameras, videos, and smart phones. 
The extensive media coverage—including horrifying television, video, and still imagery that raced around the globe in the hours and days after the tsunami, filling our television screens, homes, and lives regardless of where we lived—had a dramatic effect. This single event drove a quantum shift in the wider cultural awareness of this type of catastrophe and acted as a catalyst for improved individual and societal understanding of the nature and effects of disaster landscapes. Since this event, there have been several notable tsunamis, including the March 2011 Japan catastrophe. Once again, this event occurred during daylight hours and was widely captured by multiple forms of media. These events have resulted in a cascade of media coverage across television, radio, movie, and documentary channels, in the print media, online, and in the popular press and on social media—very little of which was available prior to 2004. Much of this has been documentary and informative in style, but there have also been numerous television dramas and movies. For example, an episode of the popular American television series CSI Miami entitled Crime Wave (Season 3, Episode 7) featured a tsunami, triggered by a volcanic eruption in the Atlantic and impacting Miami, as the backdrop to a standard crime-filled episode ("CSI," IMDb; Wikipedia). In 2010, Warner Bros Studios released the supernatural drama fantasy film Hereafter directed by Clint Eastwood. In the movie, a television journalist survives a near-death experience during the 2004 IOT in what might be the most dramatic, and probably accurate, cinematic portrayal of a tsunami ("Hereafter," IMDb; Wikipedia). Thus, these creative and entertaining forms of media, influenced by the catastrophic nature of tsunamis, are impetuses for creativity that also contribute to a transformation of cultural knowledge of catastrophe. 
The transformative potential of creative media, together with national and intergovernmental disaster risk reduction activity such as community education, awareness campaigns, and community evacuation planning and drills, may be indirectly inferred from rapid and positive community behavioural responses. By this I mean many people in coastal communities who experience strong earthquakes are starting a process of self-evacuation, even if regional tsunami warning centres have not issued an alert or warning. For example, when people in coastal locations in Samoa felt a large earthquake on 29 September 2009, many self-evacuated to higher ground or sought information and instruction from relevant authorities because they expected a tsunami to occur. When interviewed, survivors stated that the memory of television and media coverage of the 2004 IOT acted as a catalyst for their affirmative behavioural response (Dominey-Howes and Thaman 1). Thus, individual and community cultural understandings of the nature and effects of tsunami catastrophes are incredibly important for shaping resilience and reducing vulnerability. However, this cultural shift is not playing out evenly.
Are Australia and Its People at Risk from Tsunamis?
Prior to the 2004 IOT, there was little discussion about, research into, or awareness of tsunamis and Australia. Ted Bryant from the University of Wollongong had controversially proposed that Australia had been affected by tsunamis much bigger than the 2004 IOT six to eight times during the last 10,000 years and that it was only a matter of when, not if, such an event repeated itself (Bryant, "Second Edition"). Whilst his claims had received some media attention, his ideas did not achieve widespread scientific, cultural, or community acceptance. Notwithstanding this, Australia has been affected by more than 60 small tsunamis since European colonisation (Dominey-Howes 239). 
Indeed, the 2004 IOT and 2006 Java tsunami caused significant flooding of parts of the Northern Territory and Western Australia (Prendergast and Brown 69). However, the affected areas were sparsely populated and experienced very little in the way of damage or loss. Thus they did not cross any sort of critical threshold of “catastrophe” and failed to achieve meaningful community consciousness—they were not agents of cultural transformation. Regardless of the risk faced by Australia’s coastline, Australians travel to, and holiday in, places that experience tsunamis. In fact, 26 Australians were killed during the 2004 IOT (DFAT) and five were killed by the September 2009 South Pacific tsunami (Caldwell et al. 26).
What Role Do the Media Play in Preparing for and Responding to Catastrophe?
Regardless of the type of hazard/disaster/catastrophe, the key functions the media play include (but are not limited to): pre-event community education, awareness raising, and planning and preparations; during-event preparation and action, including status updates, evacuation warnings and notices, and recommendations for affirmative behaviours; and post-event responses and recovery actions, including where to gain aid and support. Further, the media also play a role in providing a forum for debate and post-event analysis and reflection, as a mechanism to hold decision makers to account. From time to time, the media also provide a platform for examining who, if anyone, might be to blame for losses sustained during catastrophes and can act as a powerful conduit for driving socio-cultural, behavioural, and policy change. Many of these functions are elegantly described, and a series of best practices outlined, by the Caribbean Disaster Emergency Management Agency in a tsunami-specific publication freely available online (CDEMA 1). 
What Has Been the Media Coverage in Australia about Tsunamis and Their Effects on Australians?
A manifest contents analysis of media material covering tsunamis over the last decade, using the framework of Cox et al., reveals that coverage falls into distinctive and repetitive forms or themes. After tsunamis, I have collected articles (more than 130 to date) published in key Australian national broadsheet (e.g., The Australian and Sydney Morning Herald) and tabloid (e.g., The Telegraph) newspapers, and have watched on television, and monitored on social media such as YouTube and Facebook, the types of coverage given to tsunamis affecting either Australia or Australians domestically and overseas. In all cases, I continued to monitor and collect these stories and accounts for a fixed period of four weeks after each event, commencing on the day of the tsunami. The themes raised in the coverage include: the nature of the event (for example, where, when, and why it occurred, how big it was, and what the effects were); what emergency response and recovery actions are being undertaken by the emergency services and how these are being provided; exploration of how the event was made worse or better by poor/good planning and prior knowledge, action or inaction, confusion and misunderstanding; the attribution of blame and responsibility; the good news story—often the discovery and rescue of an “iconic victim/survivor,” usually a child, days to weeks later; and follow-up reporting weeks to months later and on anniversaries. This coverage generally focuses on how things are improving and is often juxtaposed with the ongoing suffering of victims. 
I select the word “victims” purposefully, for the media frequently prefer this over the more affirmative “survivor.” The media seldom carry reports of “behind the scenes” disaster preparatory work such as community education programs, the development and installation of warning and monitoring systems, and ongoing training and policy work by response agencies and governments, since such stories tend to be less glamorous in terms of the disaster gore factor and less newsworthy (Cox et al. 469; Miles and Morse 365; Ploughman 308). With regard to Australians specifically, the manifest contents analysis reveals that coverage can be described as follows. First, it focuses on those Australians killed and injured. Such coverage provides elements of a biography of the victims, telling their stories, personalising these individuals so we build empathy for their suffering and the suffering of their families. The Australian victims are not unknown strangers—they are named, and pictures of their smiling faces are printed or broadcast. Second, the media describe and catalogue the loss and ongoing suffering of the victims (survivors). Third, the media use phrases to describe Australians such as “innocent victims in the wrong place at the wrong time.” This narrative establishes the sense that these “innocents” have been somehow wronged and transgressed against and that suffering should not be experienced by them. The fourth theme addresses the difficulties Australians have in accessing consular support and in acquiring replacement passports in order to return home. It usually goes on to describe how they have difficulty in gaining access to accommodation, clothing, food, water, and any necessary medicines, and the challenges associated with booking travel home and the complexities of communicating with family and friends. The last theme focuses on how Australians were often (usually?) 
not given relevant safety information by “responsible people” or “those in the know” in the place where they were at the time of the tsunami. This establishes a sense that Australians were left out and not considered by the relevant authorities. This narrative pays little attention to the wide-scale impact upon, and suffering of, resident local populations who lack the capacity to escape the landscape of catastrophe.
How Does Australian Media Coverage of (Tsunami) Catastrophe Compare with Elsewhere?
A review of the available literature suggests media coverage of catastrophes involving domestic citizens is similar globally. For example, Olofsson (557), in an analysis of newspaper articles in Sweden about the 2004 IOT, showed that the tsunami was framed as a Swedish disaster heavily focused on Sweden, Swedish victims, and Thailand, and that there was a division between “us” (Swedes) and “them” (others or non-Swedes). Olofsson (557) described two types of “us” and “them.” At the international level, Sweden, i.e. “us,” was glorified and contrasted with “inferior” countries such as Thailand, “them.” Olofsson (557) concluded that mediated frames of catastrophe are influenced by stereotypes and nationalistic values. Such nationalistic approaches preface one type of suffering in catastrophe over others and delegitimise the experiences of some survivors. Thus, catastrophes are not evenly experienced. Importantly, Olofsson, although not explicitly using the term, explains that the underlying reason for this construction of “them” and “us” is a form of imperialism and colonialism. Sharp refers to “historically rooted power hierarchies between countries and regions of the world” (304)—this is especially so of western news media reporting on catastrophes within and affecting “other” (non-western) countries. 
Sharp goes much further in relation to western representations and imaginations of the “war on terror” (arguably a global catastrophe) by explicitly noting the near universal western-centric dominance of this representation and the construction of the “west” as good and all “non-west” as not (299). Like it or not, the western media, including elements of the mainstream Australian media, adhere to this imperialistic representation. Studies of tsunami and other catastrophes drawing upon different types of media (still images, video, film, camera, and social media such as Facebook, Twitter, and the like) and from different national settings have explored the multiple functions of media. These functions include providing information, questioning the authorities, and offering a chance for transformative learning. Further, they alleviate pain and suffering, providing new virtual communities of shared experience and hearing that facilitate resilience and recovery from catastrophe. Lastly, they contribute to a cultural transformation of catastrophe—both positive and negative (Hjorth and Kyoung-hwa "The Mourning"; "Good Grief"; McCargo and Hyon-Suk 236; Brown and Minty 9; Lau et al. 675; Morgan and de Goyet 33; Piotrowski and Armstrong 341; Sood et al. 27).
Has Extensive Media Coverage Resulted in an Improved Awareness of the Catastrophic Potential of Tsunami for Australians?
In playing devil’s advocate, my simple response is NO! This is because I have been interviewing Australians about their perceptions and knowledge of tsunamis as a catastrophe after events have occurred. These events have triggered alerts and warnings by the Australian Tsunami Warning System (ATWS) for selected coastal regions of Australia. Consequently, I have visited coastal suburbs and interviewed people about tsunamis generally and those events specifically. 
Formal interviews (surveys) and informal conversations have revolved around what people perceived about the hazard, the likely consequences, what they knew about the warning, where they got their information from, how they behaved and why, and so forth. I have undertaken this work after the 2007 Solomon Islands, 2009 New Zealand, 2009 South Pacific, the February 2010 Chile, and March 2011 Japan tsunamis. I have now spoken to more than 800 people. Detailed research results will be presented elsewhere, but of relevance here, I have discovered that, to begin with, Australians have a reasonable and shared cultural knowledge of the potential catastrophic effects that tsunamis can have. They use terms such as “devastating; death; damage; loss; frightening; economic impact; societal loss; horrific; overwhelming and catastrophic.” Secondly, when I ask Australians about their sources of information about tsunamis, they describe the television (80%); Internet (85%); radio (25%); newspaper (35%); and social media including YouTube (65%). This tells me that the media are critical to underpinning knowledge of catastrophe and are a powerful transformative medium for the acquisition of knowledge. 
Thirdly, when asked about where people get information about live warning messages and alerts, Australians stated the “television (95%); Internet (70%); family and friends (65%).” Fourthly and significantly, when individuals were asked what they thought being caught in a tsunami would be like, responses included “fun (50%); awesome (75%); like in a movie (40%).” Fifthly, when people were asked about what they would do (i.e., their “stated behaviour”) during a real tsunami arriving at the coast, responses included “go down to the beach to swim/surf the tsunami (40%); go to the sea to watch (85%); video the tsunami and sell to the news media people (40%).” An independent and powerful representation of the disjunction between Australians’ knowledge of the catastrophic potential of tsunamis and their “negative” behavioural response can be found in viewing live television news coverage broadcast from Sydney beaches on the morning of Sunday 28 February 2010. The Chilean tsunami had taken more than 14 hours to travel from Chile to the eastern seaboard of Australia, and the ATWS had issued an accurate warning and had correctly forecast the arrival time of the tsunami (approximately 08.30 am). The television and radio media had dutifully broadcast the warning issued by the State Emergency Services. The message was simple: “Stay out of the water, evacuate the beaches and move to higher ground.” As the tsunami arrived, those news broadcasts showed volunteer State Emergency Service personnel and Surf Life Saving Australia lifeguards “begging” literally hundreds (probably thousands up and down the eastern seaboard of Australia) of members of the public to stop swimming in the incoming tsunami and to evacuate the beaches. On that occasion, Australians were lucky and the tsunami was inconsequential. What do these responses mean? Clearly Australians recognise and can describe the consequences of a tsunami. 
However, they are not associating the catastrophic nature of tsunami with their own lives or experience. They are avoiding or disallowing the reality; they normalise and dramatise the event. Thus in Australia, to date, a cultural transformation about the catastrophic nature of tsunami has not occurred, for reasons that are not entirely clear but are the subject of ongoing study.
The Emergence of Tsunami as a “New Australian Catastrophe”?
As a natural disaster expert with nearly two decades’ experience, in my mind tsunami has emerged as a “new Australian catastrophe.” I believe this has occurred for a number of reasons. Firstly, the 2004 IOT was devastating and did impact northwestern Australia, raising the flag on this hitherto unknown threat. Australia is now known to be vulnerable to the tsunami catastrophe. The media have played a critical role here. Secondly, in the 2004 IOT and other tsunamis since, Australians have died and their deaths have been widely reported in the Australian media. Thirdly, the emergence of various forms of social media has facilitated an explosion in information and material that can be consumed, digested, reimagined, and normalised by Australians hungry for the gore of catastrophe—it feeds our desire for catastrophic death and destruction. Fourthly, catastrophe has been creatively imagined and retold for a story-hungry viewing public. Whether through regular television shows easily consumed from a comfy chair at home, or whilst eating popcorn at a cinema, tsunami catastrophe is being fed to us in a way that reaffirms its naturalness. Juxtaposed against this idea, though, is that, despite all the graphic imagery of tsunami catastrophe, especially images of dead children in other countries, the Australian media do not, and culturally cannot, display images of dead Australian children. 
Such images are widely considered too gruesome but are well known to drive changes in cultural behaviour because of the iconic significance of the child within our society. As such, a cultural shift has not yet occurred and so the potential of catastrophe remains waiting to strike. Fifthly and significantly, the fact that large numbers of Australians have not died during recent tsunamis means that, again, the catastrophic potential of tsunamis is not yet realised and has not resulted in cultural changes to more affirmative behaviour. Lastly, Australians are probably more aware of “regular or common” catastrophes such as floods and bush fires that are normal to the Australian climate system and which are endlessly experienced individually and culturally and covered by the media in all forms. The Australian summer of 2012–13 has again been dominated by floods and fires. If this idea is accepted, the media construct a uniquely Australian imaginary of catastrophe and cultural discourse of disaster. The familiarity with these common climate catastrophes makes us “culturally blind” to the catastrophe that is tsunami. The consequences of a major tsunami affecting Australia at some point in the future are likely to be of a scale not yet comprehensible.
References
Australian Broadcasting Corporation (ABC). "ABC Net Splash." 20 Mar. 2013 ‹http://splash.abc.net.au/media?id=31077›. Brown, Philip, and Jessica Minty. “Media Coverage and Charitable Giving after the 2004 Tsunami.” Southern Economic Journal 75 (2008): 9–25. Bryant, Edward. Tsunami: The Underrated Hazard. First Edition. Cambridge: Cambridge UP, 2001. ———. Tsunami: The Underrated Hazard. Second Edition. Sydney: Springer-Praxis, 2008. Caldwell, Anna, Natalie Gregg, Fiona Hudson, Patrick Lion, Janelle Miles, Bart Sinclair, and John Wright. “Samoa Tsunami Claims Five Aussies as Death Toll Rises.” The Courier Mail 1 Oct. 2009. 20 Mar. 
2013 ‹http://www.couriermail.com.au/news/samoa-tsunami-claims-five-aussies-as-death-toll-rises/story-e6freon6-1225781357413›. CDEMA. "The Caribbean Disaster Emergency Management Agency. Tsunami SMART Media Web Site." 18 Dec. 2012. 20 Mar. 2013 ‹http://weready.org/tsunami/index.php?Itemid=40&id=40&option=com_content&view=article›. Cox, Robin, Bonita Long, and Megan Jones. “Sequestering of Suffering – Critical Discourse Analysis of Natural Disaster Media Coverage.” Journal of Health Psychology 13 (2008): 469–80. “CSI: Miami (Season 3, Episode 7).” International Movie Database (IMDb). ‹http://www.imdb.com/title/tt0534784/›. 9 Jan. 2013. "CSI: Miami (Season 3)." Wikipedia. ‹http://en.wikipedia.org/wiki/CSI:_Miami_(season_3)#Episodes›. 21 Mar. 2013. DFAT. "Department of Foreign Affairs and Trade Annual Report 2004–2005." 8 Jan. 2013 ‹http://www.dfat.gov.au/dept/annual_reports/04_05/downloads/2_Outcome2.pdf›. Dominey-Howes, Dale. “Geological and Historical Records of Australian Tsunami.” Marine Geology 239 (2007): 99–123. Dominey-Howes, Dale, and Randy Thaman. “UNESCO-IOC International Tsunami Survey Team Samoa Interim Report of Field Survey 14–21 October 2009.” No. 2. Australian Tsunami Research Centre, University of New South Wales, Sydney. "Hereafter." International Movie Database (IMDb). ‹http://www.imdb.com/title/tt1212419/›. 9 Jan. 2013. "Hereafter." Wikipedia. ‹http://en.wikipedia.org/wiki/Hereafter_(film)›. 21 Mar. 2013. Hjorth, Larissa, and Yonnie Kyoung-hwa. “The Mourning After: A Case Study of Social Media in the 3.11 Earthquake Disaster in Japan.” Television and News Media 12 (2011): 552–59. ———. “Good Grief: The Role of Mobile Social Media in the 3.11 Earthquake Disaster in Japan.” Digital Creativity 22 (2011): 187–99. Lau, Joseph, Mason Lau, and Jean Kim. “Impacts of Media Coverage on the Community Stress Level in Hong Kong after the Tsunami on 26 December 2004.” Journal of Epidemiology and Community Health 60 (2006): 675–82. 
McCargo, Duncan, and Lee Hyon-Suk. “Japan’s Political Tsunami: What’s Media Got to Do with It?” International Journal of Press-Politics 15 (2010): 236–45. Miles, Brian, and Stephanie Morse. “The Role of News Media in Natural Disaster Risk and Recovery.” Ecological Economics 63 (2007): 365–73. Morgan, Olive, and Charles de Goyet. “Dispelling Disaster Myths about Dead Bodies and Disease: The Role of Scientific Evidence and the Media.” Revista Panamericana de Salud Publica-Pan American Journal of Public Health 18 (2005): 33–36. Olofsson, Anna. “The Indian Ocean Tsunami in Swedish Newspapers: Nationalism after Catastrophe.” Disaster Prevention and Management 20 (2011): 557–69. Piotrowski, Chris, and Terry Armstrong. “Mass Media Preferences in Disaster: A Study of Hurricane Danny.” Social Behavior and Personality 26 (1998): 341–45. Ploughman, Penelope. “The American Print News Media Construction of Five Natural Disasters.” Disasters 19 (1995): 308–26. Prendergast, Amy, and Nick Brown. “Far Field Impact and Coastal Sedimentation Associated with the 2006 Java Tsunami in West Australia: Post-Tsunami Survey at Steep Point, West Australia.” Natural Hazards 60 (2012): 69–79. Sharp, Joanne. “A Subaltern Critical Geopolitics of the War on Terror: Postcolonial Security in Tanzania.” Geoforum 42 (2011): 297–305. Sood, Rahul, Geoffrey Stockdale, and Everett Rogers. “How the News Media Operate in Natural Disasters.” Journal of Communication 37 (1987): 27–41.
APA, Harvard, Vancouver, ISO, and other styles