
Journal articles on the topic 'Info Globe (Information retrieval system)'



Consult the top 20 journal articles for your research on the topic 'Info Globe (Information retrieval system).'



Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Ganvir, Mayank. "Patient Information Maintaining & Analyzing." International Journal for Research in Applied Science and Engineering Technology 9, no. VI (June 10, 2021): 107–10. http://dx.doi.org/10.22214/ijraset.2021.34862.

Abstract:
Hospitals presently use a manual system for the management and maintenance of essential information. This system requires numerous paper forms, with data stores spread throughout the hospital management infrastructure. Often, information on forms is incomplete or does not follow management standards. Forms are frequently lost in transit between departments, requiring a comprehensive auditing process to confirm that no important information is lost. Multiple copies of the same information exist within the hospital and may lead to inconsistencies across the various data stores. A significant part of the operation of any hospital involves the acquisition, management, and timely retrieval of large volumes of data. This information typically includes patient personal information and case history, staff information and ward scheduling, operating theatre scheduling, and various facility waiting lists. All of this information must be managed in an efficient and cost-effective fashion so that an institution's resources may be effectively utilized. Patient Information Maintaining & Analyzing automates the management of the hospital, making it more economical and error-free. It aims at standardizing and consolidating data, reducing inconsistencies, and ensuring data integrity.
2

Pham, Nhut Minh, Hieu Quang Pham, Thi Hieu Luong, and Quan Hai Vu. "Hybrid operations for content-based Vietnamese agricultural multimedia information retrieval." Science and Technology Development Journal 18, no. 4 (December 30, 2015): 51–63. http://dx.doi.org/10.32508/stdj.v18i4.909.

Abstract:
Content-based multimedia information retrieval is never a trivial task, even with state-of-the-art approaches. Its central challenge, the "semantic gap," requires a much deeper understanding of the way humans perceive visual and auditory information. Computer scientists have spent thousands of hours seeking optimal solutions, only to end up falling within the bounds of this gap in both visual and spoken contexts. While an over-the-gap approach is out of reach, we assemble currently viable techniques from both contexts, aligned with a domain concept base (i.e., an ontology), to construct an information service for the retrieval of agricultural multimedia content. The development process spans three packages: (1) building a Vietnamese agricultural thesaurus; (2) crafting an intertwined visual-auditory search engine; and (3) deploying the system as an information service. We develop the thesaurus in two sub-branches: the aquaculture ontology consists of 3455 concepts and 5396 terms, with 28 relationships, covering about 2200 fish species and their related terms; the plant production ontology comprises 3437 concepts and 6874 terms, with 5 relationships, covering farming, plant production, pests, etc. These ontologies serve as a global linkage between keywords and visual and spoken features, and also reinforce system performance (e.g., through query expansion and knowledge indexing). Constructing the intertwined visual-auditory search engine, on the other hand, is trickier. Automatic transcriptions of audio channels are marked as anchor points for the collection of visual features. These features, in turn, are clustered based on the referenced thesauri, ultimately recovering missing information caused by the speech recognizer's word error rate. This compensation technique recovered 14% of lost recall and increased accuracy by 9% over the baseline system.
Finally, wrapping the retrieval system as an information service guarantees its practical deployment, as our target audience is the majority of farmers in developing countries who cannot otherwise reach modern farming information and knowledge.
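The ontology-backed query expansion described in this abstract can be sketched in a few lines. The thesaurus fragment and term relations below are hypothetical stand-ins for illustration, not the authors' Vietnamese agricultural ontology:

```python
# Illustrative sketch of thesaurus-based query expansion (hypothetical data,
# not the authors' implementation): each query term is augmented with its
# related terms before matching against document/transcript features.

# Hypothetical fragment of an agricultural thesaurus: term -> related terms.
THESAURUS = {
    "tilapia": {"fish", "aquaculture", "Oreochromis"},
    "rice": {"paddy", "plant production", "Oryza sativa"},
    "blast": {"pest", "rice disease"},
}

def expand_query(terms):
    """Return the original query terms plus all thesaurus-related terms."""
    expanded = set(terms)
    for t in terms:
        expanded |= THESAURUS.get(t, set())
    return expanded

print(sorted(expand_query({"rice", "blast"})))
```

Expanding "rice blast" this way lets the engine match transcripts that mention "paddy" or "rice disease" even when the literal query terms are missing from the speech recognizer's output.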
3

V, Radhika, and Tejaswini. "Development of Digital Repository and Retrieval System for Rose Germplasm Management." Journal of Horticultural Sciences 14, no. 1 (June 30, 2019): 58–68. http://dx.doi.org/10.24154/jhs.2019.v14i01.010.

Abstract:
A live repository of rose germplasm, consisting of different genotypes and species of roses available across the globe, has been established at ICAR-IIHR. All these genotypes have been characterized for 60 morphological characters to describe the varieties. Along with the live repository of plants, efforts have been made to develop a digital repository of all these genotypes. The digital repository consists of descriptions of characters, quantitative measurements for selected important characters, and images for all the descriptors. A web-enabled interface has been developed for the selective retrieval of accessions with desired characters, and also for the retrieval of all the information for a selected genotype. The information system will be useful across germplasm collection centers, for breeders and other end users, by enabling them to select the appropriate germplasm and avoid duplicates.
4

Chen, Kune-Yao, and Sheng-Yuan Yang. "A Cloud Information Monitoring and Recommendation Multi-Agent System with Friendly Interfaces for Tourism." Applied Sciences 9, no. 20 (October 17, 2019): 4385. http://dx.doi.org/10.3390/app9204385.

Abstract:
Taiwan's government tourism statistics indicate that the tourism industry is one of the fastest growing economic sectors in the world, so the demand for a tourism information system with a friendly interface is growing. This research constructed a cloud information service platform based on numerous practical developments in the Dr. What-Info system (i.e., a master multi-agent system for determining what the relevant information is). The platform developed universal application interface (UAI) technology on top of the Taiwan government's open data, with the aim of connecting different application programming interfaces (APIs) according to different data formats, and gathers intelligence through local GPS location retrieval, in support of three-stage intelligent decision-making and a three-tier address-based UAI comparison. This paper further developed a novel citizen-centric multi-agent information monitoring and recommendation system for the tourism sector. The proposed system was experimentally demonstrated to be a successful integration of technology and stands as an innovative piece of work in the literature. Although there is room for improvement in the user experience, and perhaps for more travel-related agents, the feasibility of the proposed service architecture has been proven.
5

Prasad, Durga, Niranjan N. Chiplunkar, and K. Prabhakar Nayak. "A Trusted Ubiquitous Healthcare Monitoring System for Hospital Environment." International Journal of Mobile Computing and Multimedia Communications 8, no. 2 (April 2017): 14–26. http://dx.doi.org/10.4018/ijmcmc.2017040102.

Abstract:
Wireless body sensor networks with wearable and implantable body sensors have been attracting a lot of interest among researchers and healthcare service providers. These sensors forward physiological data to hospital personnel, doctors, or caretakers anytime, anywhere; hence the name ubiquitous health monitoring system. The technology has brought the Internet of Things into this system, connecting it to the cloud-based internet. This enables experts to retrieve the information remotely, improving the well-being of elderly people and of patients suffering from chronic diseases. This paper focuses on creating an Android-based application for monitoring patients in a hospital environment. The necessity of sharing hospital data with experts around the globe makes trust a necessity in healthcare systems. Data sharing in the IoT environment is secured, and the environment is tested in a real-time cloud setting. The proposed Android application provides a better architecture for hospital monitoring.
6

Beaumont, Jon. "Knowledge Management: a Systems Case Study from Shearman & Sterling LLP." Legal Information Management 17, no. 4 (December 2017): 220–28. http://dx.doi.org/10.1017/s1472669617000433.

Abstract:
Pre-2013, Shearman & Sterling employed only two full-time knowledge management (KM) professionals across the globe. As Jon Beaumont describes, there was no centralised method of storage or retrieval for knowledge, and attorneys would have to contend with searching the firm's Document Management System (DMS), SharePoint intranet, internal discussion boards or ten disparate knowledge systems for document and matter information. 'Knowledge Center' was launched in 2015, following two years of planning, aimed at consolidating firm systems and providing users with a single interface to access any required know-how. This article will touch upon the consolidation and migration of information, but focus predominantly on Knowledge Center itself, examining functionality, search, filtering and browsing. Processes for better identification of both document and matter know-how, all of which have contributed to the success of Knowledge Center, shall also be considered.
7

Smith, Nadia, and Christopher D. Barnet. "CLIMCAPS observing capability for temperature, moisture, and trace gases from AIRS/AMSU and CrIS/ATMS." Atmospheric Measurement Techniques 13, no. 8 (August 17, 2020): 4437–59. http://dx.doi.org/10.5194/amt-13-4437-2020.

Abstract:
Abstract. The Community Long-term Infrared Microwave Combined Atmospheric Product System (CLIMCAPS) retrieves vertical profiles of temperature, water vapor, greenhouse and pollutant gases, and cloud properties from measurements made by infrared and microwave instruments on polar-orbiting satellites. These are AIRS/AMSU on Aqua and CrIS/ATMS on Suomi NPP and NOAA20; together they span nearly 2 decades of daily observations (2002 to present) that can help characterize diurnal and seasonal atmospheric processes from different time periods or regions across the globe. While the measurements are consistent, their information content varies due to uncertainty stemming from (i) the observing system (e.g., instrument type and noise, choice of inversion method, algorithmic implementation, and assumptions) and (ii) localized conditions (e.g., presence of clouds, rate of temperature change with pressure, amount of water vapor, and surface type). CLIMCAPS quantifies, propagates, and reports all known sources of uncertainty as thoroughly as possible so that its retrieval products have value in climate science and applications. In this paper we characterize the CLIMCAPS version 2.0 system and diagnose its observing capability (ability to retrieve information accurately and consistently over time and space) for seven atmospheric variables – temperature, H2O, CO, O3, CO2, HNO3, and CH4 – from two satellite platforms, Aqua and NOAA20. We illustrate how CLIMCAPS observing capability varies spatially, from scene to scene, and latitudinally across the globe. We conclude with a discussion of how CLIMCAPS uncertainty metrics can be used in diagnosing its retrievals to promote understanding of the observing system and the atmosphere it measures.
8

Ward, Dale M., E. Robert Kursinski, Angel C. Otarola, Michael Stovern, Josh McGhee, Abe Young, Jared Hainsworth, Jeff Hagen, William Sisk, and Heather Reed. "Retrieval of water vapor using ground-based observations from a prototype ATOMMS active centimeter- and millimeter-wavelength occultation instrument." Atmospheric Measurement Techniques 12, no. 3 (March 27, 2019): 1955–77. http://dx.doi.org/10.5194/amt-12-1955-2019.

Abstract:
Abstract. A fundamental goal of satellite weather and climate observations is profiling the atmosphere with in situ-like precision and resolution with absolute accuracy and unbiased, all-weather, global coverage. While GPS radio occultation (RO) has perhaps come closest in terms of profiling the gas state from orbit, it does not provide sufficient information to simultaneously profile water vapor and temperature. We have been developing the Active Temperature, Ozone and Moisture Microwave Spectrometer (ATOMMS) RO system that probes the 22 and 183 GHz water vapor absorption lines to simultaneously profile temperature and water vapor from the lower troposphere to the mesopause. Using an ATOMMS instrument prototype between two mountaintops, we have demonstrated its ability to penetrate through water vapor, clouds and rain up to optical depths of 17 (7 orders of magnitude reduction in signal power) and still isolate the vapor absorption line spectrum to retrieve water vapor with a random uncertainty of less than 1 %. This demonstration represents a key step toward an orbiting ATOMMS system for weather, climate and constraining processes. ATOMMS water vapor retrievals from orbit will not be biased by climatological or first-guess constraints and will be capable of capturing nearly the full range of variability through the atmosphere and around the globe, in both clear and cloudy conditions, and will therefore greatly improve our understanding and analysis of water vapor. This information can be used to improve weather and climate models through constraints on and refinement of processes affecting and affected by water vapor.
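As a quick sanity check on the numbers in this abstract: an optical depth of 17 does correspond to roughly 7 orders of magnitude of signal power reduction, since transmitted power falls off as e^(−τ):

```python
import math

tau = 17.0                      # optical depth reported in the abstract
transmission = math.exp(-tau)   # fraction of signal power transmitted
orders = -math.log10(transmission)

print(f"transmission       = {transmission:.2e}")  # ~4.1e-08
print(f"orders of magnitude = {orders:.1f}")       # ~7.4
```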
9

Rodríguez-Fernández, Nemesio J., Joaquin Muñoz Sabater, Philippe Richaume, Patricia de Rosnay, Yann H. Kerr, Clement Albergel, Matthias Drusch, and Susanne Mecklenburg. "SMOS near-real-time soil moisture product: processor overview and first validation results." Hydrology and Earth System Sciences 21, no. 10 (October 17, 2017): 5201–16. http://dx.doi.org/10.5194/hess-21-5201-2017.

Abstract:
Abstract. Measurements of surface soil moisture (SM) content are important for a wide range of applications. Among them, operational hydrology and numerical weather prediction, for instance, need SM information in near-real-time (NRT), typically no later than 3 h after sensing. The European Space Agency (ESA) Soil Moisture and Ocean Salinity (SMOS) satellite is the first mission specifically designed to measure SM from space. The ESA Level 2 SM retrieval algorithm is based on detailed geophysical modelling and cannot provide SM in NRT. This paper presents the new ESA SMOS NRT SM product, which uses a neural network (NN) to provide SM in NRT. The NN inputs are SMOS brightness temperatures for horizontal and vertical polarizations and incidence angles from 30 to 45°. In addition, the NN uses surface soil temperature from the European Centre for Medium-Range Weather Forecasts (ECMWF) Integrated Forecast System (IFS). The NN was trained on SMOS Level 2 (L2) SM. The swath of the NRT SM retrieval is somewhat narrower (∼ 915 km) than that of the L2 SM dataset (∼ 1150 km), which implies a slightly longer revisit time. The new SMOS NRT SM product was compared to the SMOS Level 2 SM product. The NRT SM data show a standard deviation of the difference with respect to the L2 data of < 0.05 m3 m−3 over most of the Earth and a Pearson correlation coefficient higher than 0.7 in large regions of the globe. The NRT SM dataset does not show a global bias with respect to the L2 dataset but can show local biases of up to 0.05 m3 m−3 in absolute value. The two SMOS SM products were evaluated against in situ measurements of SM from more than 120 sites of the SCAN (Soil Climate Analysis Network) and the USCRN (US Climate Reference Network) networks in North America. The NRT dataset obtains similar but slightly better results than the L2 data.
In summary, the NN SMOS NRT SM product exhibits performances similar to those of the Level 2 SM product but it has the advantage of being available in less than 3.5 h after sensing, complying with NRT requirements. The new product is processed at ECMWF and it is distributed by ESA and via the European Organisation for the Exploitation of Meteorological Satellites (EUMETSAT) multicast service (EUMETCast).
10

Chen, Cheng, Oleg Dubovik, David Fuertes, Pavel Litvinov, Tatyana Lapyonok, Anton Lopatin, Fabrice Ducos, et al. "Validation of GRASP algorithm product from POLDER/PARASOL data and assessment of multi-angular polarimetry potential for aerosol monitoring." Earth System Science Data 12, no. 4 (December 22, 2020): 3573–620. http://dx.doi.org/10.5194/essd-12-3573-2020.

Abstract:
Abstract. Proven by multiple theoretical and practical studies, multi-angular spectral polarimetry is ideal for comprehensive retrieval of properties of aerosols. Furthermore, a large number of advanced space polarimeters have been launched recently or planned to be deployed in the coming few years (Dubovik et al., 2019). Nevertheless, at present, practical utilization of aerosol products from polarimetry is rather limited, due to the relatively small number of polarimetric compared to photometric observations, as well as challenges in making full use of the extensive information content available in these complex observations. Indeed, while in recent years several new algorithms have been developed to provide enhanced aerosol retrievals from satellite polarimetry, the practical value of available aerosol products from polarimeters yet remains to be proven. In this regard, this paper presents the analysis of aerosol products obtained by the Generalized Retrieval of Atmosphere and Surface Properties (GRASP) algorithm from POLDER/PARASOL observations. After about a decade of development, GRASP has been adapted for operational processing of polarimetric satellite observations and several aerosol products from POLDER/PARASOL observations have been released. These updated PARASOL/GRASP products are publicly available (e.g., http://www.icare.univ-lille.fr, last access: 16 October 2018, http://www.grasp-open.com/products/, last access: 28 March 2020); the dataset used in the current study is registered under https://doi.org/10.5281/zenodo.3887265 (Chen et al., 2020). The objective of this study is to comprehensively evaluate the GRASP aerosol products obtained from POLDER/PARASOL observations. First, the validation of the entire 2005–2013 archive was conducted by comparing to ground-based Aerosol Robotic Network (AERONET) data. 
The subjects of the validation are spectral aerosol optical depth (AOD), aerosol absorption optical depth (AAOD) and single-scattering albedo (SSA) at six wavelengths, as well as Ångström exponent (AE), fine-mode AOD (AODF) and coarse-mode AOD (AODC) interpolated to the reference wavelength 550 nm. Second, an inter-comparison of PARASOL/GRASP products with the PARASOL/Operational, MODIS Dark Target (DT), Deep Blue (DB) and Multi-Angle Implementation of Atmospheric Correction (MAIAC) aerosol products for the year 2008 was performed. Over land, both satellite data validations and inter-comparisons were conducted separately for different surface types, discriminated by bins of normalized difference vegetation index (NDVI): NDVI < 0.2, 0.2 ≤ NDVI < 0.4, 0.4 ≤ NDVI < 0.6, and NDVI ≥ 0.6. Three PARASOL/GRASP products were analyzed: GRASP/HP (“High Precision”), Optimized and Models. These different products are consistent but were obtained using different assumptions in aerosol modeling with different accuracies of atmospheric radiative transfer (RT) calculations. Specifically, when using GRASP/HP or Optimized there is direct retrieval of the aerosol size distribution and spectral complex index of refraction. When using GRASP/Models, the aerosol is approximated by a mixture of several prescribed aerosol components, each with their own fixed size distribution and optical properties, and only the concentrations of those components are retrieved. GRASP/HP employs the most accurate RT calculations, while GRASP/Optimized and GRASP/Models are optimized to achieve the best trade-off between accuracy and speed. In all three options, the underlying surface reflectance is retrieved simultaneously with the aerosol properties, and the radiative transfer calculations are performed “online” during the retrieval. All validation results obtained for the full archive of PARASOL/GRASP products show solid quality of retrieved aerosol characteristics.
The GRASP/Models retrievals, however, provided the most solid AOD products, e.g., AOD (550 nm) is unbiased and has the highest correlation (R ∼ 0.92) and the highest fraction of retrievals (∼ 55.3 %) satisfying the accuracy requirements of the Global Climate Observing System (GCOS) when compared to AERONET observations. GRASP/HP and GRASP/Optimized AOD products show a non-negligible positive bias (∼ 0.07) when AOD is low (< 0.2). On the other hand, the detailed aerosol microphysical characteristics (AE, AODF, AODC, SSA, etc.) provided by GRASP/HP and GRASP/Optimized correlate generally better with AERONET than do the results of GRASP/Models. Overall, GRASP/HP processing demonstrates the high quality of microphysical characteristics retrieval versus AERONET. Evidently, the GRASP/Models approach is more adapted for retrieval of total AOD, while the detailed aerosol microphysical properties are limited when a mixture of aerosol models with fixed optical properties is used. The results of a comparative analysis of PARASOL/GRASP and MODIS products showed that, based on validation against AERONET, the PARASOL/GRASP AOD (550 nm) product is of similar and sometimes of higher quality compared to the MODIS products. All AOD retrievals are more accurate and in good agreement over ocean. Over land, especially over bright surfaces, the retrieval quality degrades and the differences in total AOD products increase. The detailed aerosol characteristics, such as AE, AODF and AODC from PARASOL/GRASP, are generally more reliable, especially over land. The global inter-comparisons of PARASOL/GRASP versus MODIS showed rather robust agreement, though some patterns and tendencies were observed. Over ocean, PARASOL/Models and MODIS/DT AOD agree well, with a correlation coefficient of 0.92. Over land, the correlation between PARASOL/Models and the different MODIS products is lower, ranging from 0.76 to 0.85.
There is no significant global offset, though over bright surfaces MODIS products tend to show higher values compared to PARASOL/Models when AOD is low, and smaller values for moderate and high AODs. Seasonal AOD means suggest that PARASOL/GRASP products show more biomass burning aerosol loading in central Africa and dust over the Taklamakan Desert, but less AOD in the northern Sahara. It is noticeable also that the correlation for data over AERONET sites is somewhat higher, suggesting that the retrieval assumptions generally work better over AERONET sites than over the rest of the globe. One potential reason may be that MODIS retrievals, in general, rely more on AERONET climatology than GRASP retrievals. Overall, the analysis shows that the quality of AOD retrieval from multi-angular polarimetric observations like POLDER is at least comparable to that of single-viewing MODIS-like imagers. At the same time, the multi-angular polarimetric observations provide more information on other aerosol properties (e.g., spectral AODF, AODC, AE), as well as additional parameters such as AAOD and SSA.
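Several of the quantities compared in this abstract (AE, AODF, AODC) derive from spectral AOD; for instance, the Ångström exponent follows from AOD at two wavelengths under the standard power-law assumption τ ∝ λ^(−α). A sketch with hypothetical values, not PARASOL retrievals:

```python
import math

def angstrom_exponent(aod1, lam1, aod2, lam2):
    """Angstrom exponent from AOD at two wavelengths (tau ~ lambda**-alpha)."""
    return -math.log(aod1 / aod2) / math.log(lam1 / lam2)

# Hypothetical AOD values at 440 nm and 870 nm.
ae = angstrom_exponent(0.50, 440.0, 0.20, 870.0)
print(f"AE = {ae:.2f}")  # ~1.34, i.e. a fine-mode-dominated aerosol
```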
11

Subramaniam, M., A. Kathirvel, E. Sabitha, and H. Anwar Basha. "Modified Firefly Algorithm and Fuzzy C-Mean Clustering Based Semantic Information Retrieval." Journal of Web Engineering, February 17, 2021. http://dx.doi.org/10.13052/jwe1540-9589.2012.

Abstract:
As the volume of electronic data gradually increases, searching for and retrieving essential information from the internet becomes an extremely difficult task. Normally, Information Retrieval (IR) systems present information based on the user's query keywords. At present this is insufficient, given the large volume of online data, and precision is low because such systems only consider syntactic-level search. Furthermore, numerous previous search engines utilize a variety of techniques for semantic document extraction, and the relevancy between documents has been measured using page-ranking methods, which suffer from long search times. With the intention of reducing query search time, this research implemented a Modified Firefly Algorithm (MFA) adapted with an Intelligent Ontology and Latent Dirichlet Allocation based Information Retrieval (IOLDAIR) model. In the recommended methodology, a set of web documents, Facebook comments and tweets is taken as the dataset. Dataset pre-processing is carried out by means of tokenization. A strong ontology is built from a large amount of information collected from diverse websites. Keywords are identified and semantic analysis of the user query is carried out using ontology matching with Jaccard similarity. Feature extraction is performed based on the semantic analysis, after which the optimal features are selected by means of the Modified Firefly Algorithm (MFA). With the help of Fuzzy C-Means (FCM) clustering, the relevant documents are grouped and ranked. Finally, the relevant information is extracted using the IOLDAIR model. The major benefits of this technique are increased relevancy, the capability to deal with big data, and fast retrieval. Experimental results prove that the presented method attains improved performance compared with previous systems.
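The ontology-matching step in this abstract scores query-document overlap with Jaccard similarity, |A ∩ B| / |A ∪ B|. A minimal sketch (the term sets are illustrative, not from the paper):

```python
def jaccard(a, b):
    """Jaccard similarity between two term sets: |A & B| / |A | B|."""
    a, b = set(a), set(b)
    if not a and not b:
        return 1.0  # convention: two empty sets are identical
    return len(a & b) / len(a | b)

query_terms = {"semantic", "information", "retrieval"}
doc_terms = {"semantic", "web", "retrieval", "ontology"}
print(jaccard(query_terms, doc_terms))  # 2 shared / 5 total = 0.4
```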
12

Battur, Ranjana, and Jagadisha N. "Image Feature Synthesis and Matching in Content-Based Image Retrieval System – A Review." Journal of Electronics and Communication Systems 6, no. 1 (April 21, 2021). http://dx.doi.org/10.46610/joecs.2021.v06i01.003.

Abstract:
One of the important concepts in information and data analytics is content-based image retrieval. We live in the information age, and in the modern digital imaging world this plays a predominant role in sectors ranging from defense to research. Content-based image retrieval (CBIR), also known as query by image content, is the application of computer vision techniques to the image retrieval problem, that is, the problem of searching for digital images or textual matter in large databases. The usage of digital images has increased enormously over the last decade due to drastic growth in storage and network technology. These technological changes have led professional users to use, store and manipulate remotely stored images. Information Retrieval (IR) deals with the location and retrieval of related documents or images based on user inputs such as keywords or examples given as a query against the repository. This has motivated us to take up research on CBIR concepts. Hence, to throw light on this chosen research topic, we are carrying out extensive research on image feature synthesis and matching in content-based image retrieval systems using AI, ML and fuzzy logic schemes, which could be used to improve retrieval performance in CBIR. A brief survey, i.e., an insight into the chosen research area in the field of content-based image retrieval, was made and is presented with respect to the work done by various researchers across the globe in the form of an extensive literature review. The work done by them was studied, lacunae were observed, and the problem was defined with a couple of good objectives to be solved. Four objectives were proposed: O1 - investigation of the effectiveness of evolutionary computation in generating composite operator vectors for images, so that feature dimensionality is reduced to improve retrieval performance; O2 - construction of the image-leve
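A common baseline for the feature-matching step this review surveys is ranking database images by cosine similarity between feature vectors (e.g., colour histograms). The vectors below are hypothetical, purely for illustration:

```python
import math

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(u, v))
    nu = math.sqrt(sum(x * x for x in u))
    nv = math.sqrt(sum(y * y for y in v))
    return dot / (nu * nv)

# Hypothetical 4-bin colour histograms: a query image and two database images.
query = [0.4, 0.3, 0.2, 0.1]
db = {"img_a": [0.35, 0.30, 0.25, 0.10],
      "img_b": [0.05, 0.10, 0.25, 0.60]}

# Rank database images by similarity to the query, most similar first.
ranked = sorted(db, key=lambda k: cosine(query, db[k]), reverse=True)
print(ranked)  # img_a ranks above img_b
```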
13

Sampson, Tony. "A Virus in Info-Space." M/C Journal 7, no. 3 (July 1, 2004). http://dx.doi.org/10.5204/mcj.2368.

Abstract:
‘We are faced today with an entire system of communication technology which is the perfect medium to host and transfer the very programs designed to destroy the functionality of the system.’ (IBM Researcher: Sarah Gordon, 1995) Despite renewed interest in open source code, the openness of the information space is nothing new in terms of the free flow of information. The transitive and nonlinear configuration of data flow has ceaselessly facilitated the sharing of code. The openness of the info-space encourages a free distribution model, which has become central to numerous developments through the abundant supply of freeware, shareware and source code. Key moments in open source history include the release in 1998 of Netscape’s Communicator source code, a clear attempt to stimulate browser development. More recently in February 2004 the ‘partial leaking’ of Microsoft Windows 2000 and NT 4.0 source code demonstrated the often-hostile disposition of open culture and the potential threat it poses to existing corporate business models. However, the leading exponents of the open source ethic predate these events by more than a decade. As an extension of the hacker, the virus writer has managed, since the 1980s, to bend the shape of info-space beyond recognition. By freely spreading viruses, worms and hacker programs across the globe, virus writers have provided researchers with a remarkable set of digital footprints to follow. The virus has, as IBM researcher Sarah Gordon points out, exposed the info-space as a ‘perfect medium’ rife for malicious viral infection. This paper argues that viral technologies can hold info-space hostage to the uncertain undercurrents of information itself. As such, despite mercantile efforts to capture the spirit of openness, the info-space finds itself frequently in a state far-from-equilibrium. It is open to often-unmanageable viral fluctuations, which produce levels of spontaneity, uncertainty and emergent order. 
So while corporations look to capture the perpetual, flexible and friction-free income streams from centralised information flows, viral code acts as an anarchic, acentred Deleuzian rhizome. It thrives on the openness of info-space, producing a paradoxical counterpoint to a corporatised information society and its attempt to steer the info-machine. The Virus in the Open System Fred Cohen’s 1984 doctoral thesis on the computer virus locates three key features of openness that makes viral propagation possible (see Louw and Duffy, 1992 pp. 13-14) and predicts a condition common to everyday user experience of info-space. Firstly, the virus flourishes because of the computer’s capacity for information sharing_; transitive flows of code between nodes via discs, connected media, network links, user input and software use. In the process of information transfer the ‘witting and unwitting’ cooperation of users and computers is a necessary determinant of viral infection. Secondly, information flow must be _interpreted._ Before execution computers interpret incoming information as a series of instructions (strings of bits). However, before execution, there is no fundamental distinction between information received, and as such, information has no _meaning until it has been executed. Thus, the interpretation of information does not differentiate between a program and a virus. Thirdly, the alterability or manipulability of the information process allows the virus to modify information. For example, advanced polymorphic viruses avoid detection by using non-significant, or redundant code, to randomly encrypt and decrypt themselves. Cohen concludes that the only defence available to combat viral spread is the ‘limited transitivity of information flow’. However, a reduction in flow is contrary to the needs of the system and leads ultimately to the unacceptable limitation of sharing (Cohen, 1991). 
As Cohen states: ‘To be perfectly secure against viral attacks, a system must protect against incoming information flow, while to be secure against leakage of information a system must protect against outgoing information flow. In order for systems to allow sharing, there must be some information flow. It is therefore the major conclusion of this paper that the goals of sharing in a general purpose multilevel security system may be in such direct opposition to the goals of viral security as to make their reconciliation and coexistence impossible.’

Cohen’s research does not simply end with the eradication of the virus via the limitation of openness, but instead leads to a contentious idea concerning the benevolent properties of viral computing and the potential legitimacy of ‘friendly contagion’. Cohen looks beyond the malevolent enemy of the open network to a benevolent solution. The viral ecosystem is an alternative to Turing-von Neumann capability. Key to this system is a benevolent virus, which epitomises the ethic of open culture. Drawing upon a biological analogy, benevolent viral computing reproduces in order to accomplish its goals; the computing environment evolving rather than being ‘designed every step of the way’ (see Zetter, 2000). The viral ecosystem demonstrates how the spread of viruses can purposely evolve through the computational space using the shared processing power of all host machines. Information enters the host machine via infection and a translator program alerts the user. The benevolent virus passes through the host machine with any additional modifications made by the infected user.

The End of Empirical Virus Research?

Cohen claims that his research into ‘friendly contagion’ has been thwarted by network administrators and policy makers (see Levy, 1992 in Spiller, 2002) whose ‘apparent fear reaction’ to early experiments resulted in trying to solve technical problems with policy solutions.
However, following a significant increase in malicious viral attacks, with estimated costs to the IT industry of $13 billion in 2001 (Pipkin, 2003 p. 41), research into legitimate viruses has, not surprisingly, shifted from the centre to the fringes of the computer science community (see Dibbell, 1995). Current reputable, and subsequently funded, research tends to focus on efforts by the anti-virus community to develop computer hygiene. Nevertheless, malevolent or benevolent, viral technology provides researchers with a valuable resource. The virus draws analysis towards specific questions concerning the nature of information and the culture of openness. What follows is a delineation of a range of approaches, which endeavour to provide some answers.

Virus as a Cultural Metaphor

Sean Cubitt (in Dovey, 1996 pp. 31-58) positions the virus as a contradictory cultural element, lodged between the effective management of info-space and the potential for spontaneous transformation. However, distinct from Cohen’s aspectual analogy, Cubitt’s often-frivolous viral metaphor overflows with political meaning. He replaces the concept of information with a space of representation, which elevates the virus from empirical experience to a linguistic construct of reality. The invasive and contagious properties of the biological parasite are metaphorically transferred to viral technology; the computer virus is thus imbued with an alien otherness. Cubitt’s cultural discourse typically reflects humanist fears of being subjected to increasing levels of technological autonomy. The openness of info-space is determined by a managed society aiming to ‘provide the grounds for mutation’ (p. 46) necessary for profitable production. Yet the virus, as a possible consequence of that desire, becomes a potential opposition to ‘ideological formations’. Like Cohen, Cubitt concludes that the virus will always exist if the paths of sharing remain open to information flow.
‘Somehow’, Cubitt argues, ‘the net must be managed in such a way as to be both open and closed’. Openness is therefore obligatory and although, from the point of view of the administrator, it is a recipe for ‘anarchy, for chaos, for breakdown, for abjection’, the ‘closure’ of the network, despite eradicating the virus, ‘means that no benefits can accrue’ (p. 55).

Virus as a Bodily Extension

From a virus writing perspective it is, arguably, the potential for free movement in the openness of info-space that motivates the spread of viruses. As one writer infamously stated, it is ‘the idea of making a program that would travel on its own, and go to places its creator could never go’ that inspires the spreading of viruses (see Gordon, 1993). In a defiant stand against the physical limitations of bodily movement from Eastern Europe to the US, the Bulgarian virus writer the Dark Avenger contended that ‘the American government can stop me from going to the US, but they can’t stop my virus’. This McLuhanesque conception of the virus as a bodily extension (see McLuhan, 1964) is picked up on by Baudrillard in Cool Memories (1990). He considers the computer virus an ‘ultra-modern form of communication which does not distinguish, according to McLuhan, between the information itself and its carrier.’ To Baudrillard the prosperous proliferation of the virus is the result of its ability to be both the medium and the message. As such the virus is a pure form of information.

The Virus as Information

Like Cohen, Claude Shannon looks to the biological analogy, but argues that we have the potential to learn more about information transmission in artificial and natural systems by looking at difference rather than resemblance (see Campbell, 1982). One of the key aspects of this approach is the concept of redundancy.
The theory of information argues that the patterns produced by the transmission of information are likely to travel in an entropic mode, from the unmixed to the mixed – from information to noise. Shannon’s concept of redundancy ensures that noise is diminished in a system of communication. Redundancy encodes information so that the receiver can successfully decode the message, holding back the entropic tide. Shannon considers the transmission of messages in the brain highly redundant since it manages to obtain ‘overall reliability using unreliable components’ (in Campbell, 1982 p. 191). While computing uses redundancy to encode messages, compared to transmissions of biological information it is fairly primitive. Unlike the brain, Turing-von Neumann computation is inflexible and literal-minded. In the brain, information transmission relies not only on deterministic external input, but also on self-directed spontaneity and uncertain electro-chemical pulses. Nevertheless, while Shannon’s binary code is constrained to a finite set of syntactic rules, it can produce an infinite number of possibilities. Indeed, the virus makes good use of redundancy to ensure its successful propagation. The polymorphic virus is not simply a chaotic, delinquent noise, but a decidedly redundant form of communication, which uses non-significant code to randomly flip itself over to avoid detection. Viral code thrives on the infinite potential of algorithmic computing; the open, flexible and undecidable grammar of the algorithm allows the virus to spread, infect and evolve. The polymorphic virus can encrypt and decrypt itself so as to avoid anti-viral scanners checking for known viral signatures from the phylum of code known to anti-virus researchers. As such, it is a raw form of artificial intelligence, relying on redundant, inflexible code programmed to act randomly, ignore or even forget information.
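Shannon’s redundancy can be made concrete. The entropy of a symbol stream measures its information per symbol; redundancy is the gap between that entropy and the maximum the alphabet allows. A minimal sketch (the example strings are illustrative):

```python
import math
from collections import Counter

def entropy(message):
    """Shannon entropy in bits per symbol: H = -sum p_i * log2(p_i)."""
    n = len(message)
    return -sum((c / n) * math.log2(c / n) for c in Counter(message).values())

def redundancy(message):
    """Share of channel capacity spent on structure: 1 - H / H_max."""
    h_max = math.log2(len(set(message)))
    return 1 - entropy(message) / h_max if h_max else 1.0

# A uniform alphabet carries no redundancy; English-like text carries some,
# because a few symbols (e, t, space) dominate the distribution.
print(redundancy("abcdefgh"))  # 0.0: every symbol equally likely
print(redundancy("the redundancy of english text holds back the entropic tide"))
```

The second figure is positive: it is exactly this slack, structure the receiver can predict, that lets a message survive noise, and that a polymorphic virus exploits when it pads itself with non-significant code.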
Towards a Concept of Rhizomatic Viral Computation

Using the concept of the rhizome, Deleuze and Guattari (1987 p. 79) challenge the relation between noise and pattern established in information theory. They suggest that redundancy is not merely a ‘limitative condition’, but is key to the transmission of the message itself. Measuring the efficiency of a highly redundant viral transmission against the ‘splendour’ of the short-term memory of a rhizomatic message, it is possible to draw some conclusions from their intervention. On the surface, the entropic tendency appears to be towards the mixed and the running down of the system’s energy. However, entropy is not the answer, since information is not energy; it cannot be conserved, it can be created and destroyed. By definition information is something new, something that adds to existing information (see Campbell, 1982 p. 231), yet efficient information transmission creates invariance in a variant environment. In this sense, the pseudo-randomness of viral code, which pre-programs elements of uncertainty and free action into its propagation, challenges efforts to make information centralised, structured and ordered. It does this by placing redundant noise within its message pattern. The virus readily ruptures the patterned symmetry of info-space and in terms of information produces something new. Viral transmission is pure information, as its objective is to replicate itself throughout info-space; it mutates the space as well as itself. In a rhizomatic mode the anarchic virus is without a central agency; it is a profound rejection of all Generals and power centres. Viral infection, like the rhizomatic network, is made up of ‘finite networks of automata in which communication runs from any neighbour to any other’. Viral spread flows along non-pre-existent ‘channels of communication’ (1987 p. 17).
Furthermore, while efforts are made to striate the virus using anti-viral techniques, there is growing evidence that viral information not only wants to be free, but is free to do as it likes.

About the Author

Tony Sampson is a Senior Lecturer and Course Tutor in Multimedia & Digital Culture, School of Cultural and Innovation Studies at the University of East London, UK. Email: t.d.sampson@uel.ac.uk

Citation reference for this article

MLA Style: Sampson, Tony. "A Virus in Info-Space." M/C: A Journal of Media and Culture <http://www.media-culture.org.au/0406/07_Sampson.php>.

APA Style: Sampson, T. (2004, July 1). A Virus in Info-Space. M/C: A Journal of Media and Culture, 7, <http://www.media-culture.org.au/0406/07_Sampson.php>
APA, Harvard, Vancouver, ISO, and other styles
14

"Research Analysis on Encryption Algorithms for Cloud Authentication in Hms." International Journal of Recent Technology and Engineering 8, no. 2 (July 30, 2019): 640–43. http://dx.doi.org/10.35940/ijrte.b1032.078219.

Full text
Abstract:
With the advancement of the Internet of Things, mutual sharing of data has become simple among smart devices across the globe. In such a situation, smart health care should be accelerated with the goal of providing secure, globally accessible patient records. However, as sensitive data about a patient passes through an open channel (i.e., the Internet), there are odds of this data being abused, message communication being intercepted, and passive attacks occurring. These attacks are undetected and unavoidable over open channels. Thus, there is a necessity for a solid cryptosystem for safely handling smart health care information. To ensure patients' rights over their own information, an assuring technique is to encrypt the data before it is outsourced. We present a secure data retrieval scheme by comparing various algorithms that manage their attributes independently. The proposed application mainly relies on current attribute-based encryption mechanisms and provides a comparison of their efficiency.
15

BAGARINAO, RICARDO T. "A Spatial Heterogeneity Perspective in Analyzing Cancer Mortality Distribution Using GIS." IAMURE International Journal of Ecology and Conservation 14, no. 1 (June 6, 2015). http://dx.doi.org/10.7718/ijec.v14i1.891.

Full text
Abstract:
An ecological spatial variation analysis of cancer incidence and mortality is increasingly important, as cancer has become a major cause of death across the globe. Using the concept of spatial heterogeneity as an analytical framework, a spatial distribution analysis using GIS was conducted for cases of cancer mortality in Los Baños, Laguna, Philippines. Complete data retrieval was conducted for cases between 1990 and 2010 from the Registry Office of the Municipality. Data visualization and analysis were done using the graduated color legend type of the GIS software. Spatial units were delineated following the political boundaries of the Municipality’s communities. Descriptive statistics were computed to describe spatial variations of cases. Results indicate that cancer mortality is highly heterogeneous across spatial units. The highest number of cases is in Batong Malake, and the number of cases decreases with distance from this area. It is recommended that an in-depth study be conducted to determine the causes of these spatial trends in cancer mortality to contextualize interventions or reduction and management programs. Keywords - Biostatistics, spatial heterogeneity, cancer mortality, geographic information system, Los Baños, Laguna, Philippines
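The descriptive statistics step in the abstract above, summarising how unevenly cases spread across spatial units, can be sketched with a simple heterogeneity index. The per-unit counts below are invented for illustration and are not figures from the study:

```python
import statistics

# Hypothetical cancer-mortality counts per spatial unit (barangay);
# the names and numbers are invented, not taken from the study.
cases = {"Batong Malake": 96, "Mayondon": 41, "Anos": 33, "Bayog": 18, "Timugan": 9}

mean = statistics.mean(cases.values())
stdev = statistics.pstdev(cases.values())
cv = stdev / mean  # coefficient of variation: a crude spatial-heterogeneity index

# A high CV flags a heterogeneous surface: cases cluster in a few units
# rather than spreading evenly across the map.
print(f"mean={mean:.1f} stdev={stdev:.1f} CV={cv:.2f}")
```

In a GIS workflow the same per-unit counts would drive the graduated color legend, with the CV (or similar statistics) quantifying what the choropleth map shows visually.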
16

Baker, Heather, Asher Grady, Collin Schwantes, Emily Iarocci, Rachel Campbell, Gus Calapristi, Scott Dowson, Michelle Hart, Lauren E. Charles, and Teresa Quitugua. "NBIC Biofeeds: Deploying a New, Digital Tool for Open Source Biosurveillance across Federal Agencies." Online Journal of Public Health Informatics 10, no. 1 (May 22, 2018). http://dx.doi.org/10.5210/ojphi.v10i1.8947.

Full text
Abstract:
Objective
The National Biosurveillance Integration Center (NBIC) is deploying a scalable, flexible open source data collection, analysis, and dissemination tool to support biosurveillance operations by the U.S. Department of Homeland Security (DHS) and its federal interagency partners.

Introduction
NBIC integrates, analyzes, and distributes key information about health and disease events to help ensure the nation’s responses are well-informed, save lives, and minimize economic impact. To meet its mission objectives, NBIC utilizes a variety of data sets, including open source information, to provide comprehensive coverage of biological events occurring across the globe. NBIC Biofeeds is a digital tool designed to improve the efficiency of analyzing large volumes of open source reporting and increase the number of relevant insights gleaned from this dataset. Moreover, the tool provides a mechanism to disseminate tailored, electronic message notifications in near-real time so that NBIC can share specific information of interest to its interagency partners in a timely manner.

Methods
NBIC intends to implement operational use of the capability in FY 2018. The core components of the system are data collection, curation, and dissemination of information deemed important by NBIC subject matter experts. NBIC Biofeeds has captured information from more than 70,000 unique sources published from around the globe and presents, on average, 9,000 new biosurveillance-relevant articles to users each day.
NBIC leverages a variety of data feeds, including third-party aggregators like Google and subscription-based feeds such as HealthMap, as well as Really Simple Syndication (RSS) feeds and web-scraping of highly relevant sources.

The NBIC biosurveillance taxonomy embedded in the tool consists of more than 600 metadata targets that cover key information for understanding the significance of an active biological event, including etiologic agents, impact to humans and animals (e.g., infection severity, healthcare workers involved, type of host), social disruption, infrastructure strain, countermeasures engaged, and ‘red flag’ characteristics (e.g., pathogen appearance in a new geographic area, unusual clinical signs). This taxonomy serves as a foundation for data curation and can be tailored by NBIC partners to more directly meet their own mission objectives.

At this time, metadata is predominantly captured by NBIC analysts, who manually tag information, which triggers the population of three automatically-disseminated products from the tool: 1) the NBIC Daily Biosurveillance Review, 2) immediate and daily summary email notifications, and 3) custom-designed RSS feeds. These products are meant for individual recipients in the federal interagency and for consumption by other biosurveillance information technology systems, such as the Department of Defense, Defense Threat Reduction Agency (DTRA) Biosurveillance Ecosystem (BSVE).
NBIC is working in partnership with DTRA to integrate NBIC Biofeeds as an application directly into the BSVE and further develop the BSVE as an all-in-one platform for biosurveillance data analytics. To improve the efficiency and effectiveness of gaining insights using NBIC Biofeeds, developers of the tool at the Pacific Northwest National Laboratory (PNNL) are researching and testing a variety of advanced analytics techniques focused on: 1) article relevancy ratings to improve the review of queried data, 2) significance ratings to elucidate the perceived severity of an event based on reported characteristics, 3) full-text article retrieval and storage for improved machine-tagging, and 4) anomaly detection for emerging threats. Testing and implementation of new analytic capabilities in NBIC Biofeeds is planned for this fiscal year.

Results
NBIC Biofeeds was developed to serve as a sophisticated and powerful open source biosurveillance technology of value to the federal government by providing information to stakeholders conducting open source biosurveillance as well as those consuming biosurveillance information. In FY 2018, NBIC Biofeeds will begin operational use by NBIC and an initial set of users in various federal agencies. User accounts for testing purposes will be available to other federal partners, and a broad scope of federal stakeholders can receive products directly from NBIC Biofeeds based on their interests.

Conclusions
NBIC Biofeeds is expected to enable more rapid recognition and enhanced analysis of emerging biological events by NBIC analysts. NBIC anticipates other federal agencies with biosurveillance missions will find this technology of value and intends to offer use of the platform to those federal partners that can benefit from access to the tool and information generated from NBIC Biofeeds.
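The taxonomy-driven tagging the abstract describes can be sketched in miniature: a mapping from metadata targets to trigger keywords, applied to incoming article text. The taxonomy entries and the article below are invented for illustration; NBIC’s actual 600-target taxonomy and curation tooling are not public.

```python
# Toy stand-in for a biosurveillance taxonomy: target -> trigger keywords.
TAXONOMY = {
    "etiologic_agent": ["influenza", "anthrax", "ebola"],
    "red_flag": ["new geographic area", "unusual clinical signs"],
    "infrastructure_strain": ["hospital capacity", "supply shortage"],
}

def tag_article(text):
    """Return the metadata targets whose keywords appear in the article."""
    lowered = text.lower()
    return [target for target, keywords in TAXONOMY.items()
            if any(kw in lowered for kw in keywords)]

article = ("Officials report influenza cases in a new geographic area, "
           "with local hospital capacity already stretched.")
print(tag_article(article))
# -> ['etiologic_agent', 'red_flag', 'infrastructure_strain']
```

A tag set like this is what could then trigger downstream products, a daily review entry, an immediate notification, or a filtered RSS feed, as in the workflow described above.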
17

Temate-Tiagueu, Yvette, Joseph Amlung, Dennis Stover, Philip Peters, John T. Brooks, Sridhar Papagari Sangareddy, Jina J. Dcruz, and Kamran Ahmed. "Dashboard Prototype for Improved HIV Monitoring and Reporting for Indiana." Online Journal of Public Health Informatics 11, no. 1 (May 30, 2019). http://dx.doi.org/10.5210/ojphi.v11i1.9699.

Full text
Abstract:
Objective
The objective was to design and develop a dashboard prototype (DP) that integrates HIV data from disparate sources to improve monitoring and reporting of HIV care continuum metrics in Indiana. The tool aimed to support the Indiana State Department of Health (ISDH) to monitor key HIV performance indicators, more fully understand populations served, more quickly identify and respond to crucial needs, and assist in planning and decision-making.

Introduction
In 2015, ISDH responded to an HIV outbreak among persons using injection drugs in Scott County [1]. Information to manage the public health response to this event and aftermath included data from multiple sources (e.g., HIV testing, surveillance, contact tracing, medical care, and HIV prevention activities). During the outbreak, access to timely and accurate data for program monitoring and reporting was difficult for health department staff. Each dataset was managed separately and tailored to the relevant HIV program area’s needs. Our challenge was to create a platform that allowed separate systems to communicate with each other and design a DP that offered a consolidated view of data. ISDH initiated efforts to integrate these HIV data sources to better track HIV prevention, diagnosis, and care metrics statewide, support decision-making and policies, and facilitate a more rapid response to future HIV-related investigations.
The Centers for Disease Control and Prevention (CDC), through its Info-Aid program, provided technical assistance to support ISDH’s data integration process and develop a DP that could aggregate these data and improve reporting of crucial statewide metrics. After an initial assessment phase, an in-depth analysis of requirements resulted in several design principles and lessons learned that later translated into standardization of data formats and design of the data integration process [2].

Methods
Specific design principles and prototyping methods were applied during the 9 months of the DP design and development process, starting in June 2017.

Requirements elicitation, analysis, and validation
The elicitation and analysis of the requirements were done using a dashboard content inventory tool to gather and analyze HIV reporting needs and dashboard requirements from stakeholders. Results of this analysis allowed us to validate project goals, list required functionalities, prioritize features, and design the initial dashboard architecture. The initial scope was Scott County.

Design mapping
The design mapping exercise reviewed different scenarios involving data visualization using the DP, clarified associations among data from different programs, and determined how best to capture and present them in the DP. For example, we linked data in separate datasets using a unique identifier or county name. This step’s output was to refine the DP architecture.

Parallel design
In a parallel design session, we drew dashboard mockups on paper with end users. These mockups helped illustrate how information captured during design mapping would be translated into visual design before prototype implementation. Drawings were converted to PowerPoint mockups for validation and modifications. The mockup helped testers and future users interact with and rapidly understand the DP architecture.
The model can be used for designing other DPs.

Integration
Data integration was conducted in SAS by merging datasets from different program areas iteratively. Next, we cleaned (e.g., deleted records missing crucial information) and validated the data. The integration step solved certain challenges with ISDH data (e.g., linking data across systems, while automating data cleaning was planned for later), increased data consistency, reduced redundancy, and resulted in a consolidated view of the data.

Prototyping
After data integration, we extracted a reduced dataset to implement and test different DP features. The first prototype was in Excel. We applied a modular design that allowed frequent feedback and input from ISDH program managers. Developers of the first prototype were in two locations, but team members kept in close contact and further refined the DP through weekly communications. We expanded the DP scope from Scott County to include all counties in Indiana.

Beta version
To enable advanced analysis and ease collaboration of the final tool across users, we moved to Tableau Desktop Professional version 10. All Excel screens were redeveloped and integrated into a unique dashboard for a consolidated view of ISDH programs. After beta version completion, usability tests were conducted to guide the DP production version.

Technical requirements
All users were provided Tableau Reader to interact with the tool. The DP is not online, but is shared by ISDH through a protected shared drive. Provisions are made for the DP to use a relational database that will provide greater flexibility in data storage, management, and retrieval.
The DP benefits from the existing security infrastructure at ISDH that allows for safeguarding personal identifiable information, secured access, backup and restoration.

Results

System content
ISDH’s data generated at the county and state level were used to assess the following domains: HIV Testing, HIV Surveillance, Contact Tracing, HIV Care Coordination, and Syringe Exchange. The DP was populated through an offline extract of the integrated datasets. This approach sped up the Tableau workbook and allowed monthly updates to the uploaded datasets. The system also included reporting features to display aggregate information for multiple population groups.

Stakeholders’ feedback
To improve users’ experience, the development team trained and offered stakeholders multiple opportunities to provide feedback, which was collected informally from ISDH program directors to guide DP enhancements. The initial feedback was collected through demonstrations to CDC domain experts and ISDH staff. They were led through different scenarios and provided comments on overall design and suggestions for improvement. The goal of the demos was to assess ease of use and benefits and determine how the DP could be used to engage with stakeholders inside and outside of ISDH.

DP action reporting
The DP reporting function will allow users to download spreadsheets and graphs. Some reports will be automatically generated and some will be ad hoc. All users, including the ISDH Quality Manager and grant writers, can use the tool to guide program evaluations and justifications for funding. The tool will provide a way for ISDH staff to stay current about the work of grantees, document key interactions with each community, and track related next steps.
In addition, through an extract of the integrated dataset (e.g., out-of-care HIV positives), the DP could support another ISDH program area, Linkage to Care.

Conclusions
We describe the process to design and develop a DP to improve monitoring and reporting of statewide HIV-related data. The solution from this technical assistance project was a useful and innovative tool that allows for capture of time-crucial information about populations at high risk. The system is expected to help ISDH improve HIV surveillance and prevention in Indiana. Our approach could be adapted to similar public health areas in Indiana.

References
1. Peters PJ et al. HIV infection linked to injection use of oxymorphone in Indiana, 2014–2015. N Engl J Med. 2016;375(3):229-39.
2. Ahmed K et al. Integrating data from disparate data systems for improved HIV reporting: Lessons learned. OJPHI. 2018 May 17;10(1).
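ISDH performed the integration step described above in SAS, but the merge-and-clean pattern generalizes. A hedged sketch in plain Python, where the dataset and field names are invented stand-ins, not actual ISDH data:

```python
# Illustrative stand-ins for two HIV program-area extracts; all names
# and figures below are invented for illustration only.
surveillance = [
    {"county": "Scott", "new_diagnoses": 12},
    {"county": "Marion", "new_diagnoses": 45},
]
testing = [
    {"county": "Scott", "tests_performed": 300},
    {"county": None, "tests_performed": 80},  # missing its crucial link key
]

def integrate(left, right, key):
    """Drop right-side records missing the link key, then left-join on it."""
    cleaned = {r[key]: r for r in right if r.get(key) is not None}
    return [{**row, **cleaned.get(row[key], {})} for row in left]

merged = integrate(surveillance, testing, "county")
# Scott gains tests_performed; Marion keeps only its surveillance fields;
# the record with no county is deleted, as in the cleaning step described.
```

The same two moves, delete records missing crucial linking information, then merge iteratively on a shared key, are what the SAS workflow performed at scale before the consolidated extract fed the dashboard.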
18

Livingstone, Randall M. "Let’s Leave the Bias to the Mainstream Media: A Wikipedia Community Fighting for Information Neutrality." M/C Journal 13, no. 6 (November 23, 2010). http://dx.doi.org/10.5204/mcj.315.

Full text
Abstract:
Although I'm a rich white guy, I'm also a feminist anti-racism activist who fights for the rights of the poor and oppressed. (Carl Kenner)
Systemic bias is a scourge to the pillar of neutrality. (Cerejota)
Count me in. Let's leave the bias to the mainstream media. (Orcar967)
Because this is so important. (CuttingEdge)

These are a handful of comments posted by online editors who have banded together in a virtual coalition to combat Western bias on the world’s largest digital encyclopedia, Wikipedia. This collective action by Wikipedians both acknowledges the inherent inequalities of a user-controlled information project like Wikipedia and highlights the potential for progressive change within that same project. These community members are taking the responsibility of social change into their own hands (or more aptly, their own keyboards).

In recent years much research has emerged on Wikipedia from varying fields, ranging from computer science, to business and information systems, to the social sciences. While critical at times of Wikipedia’s growth, governance, and influence, most of this work observes with optimism that barriers to improvement are not firmly structural, but rather socially constructed, leaving open the possibility of important and lasting change for the better.

WikiProject: Countering Systemic Bias (WP:CSB) is one such collective effort. Close to 350 editors have signed on to the project, which began in 2004 and itself emerged from a similar project named CROSSBOW, or the “Committee Regarding Overcoming Serious Systemic Bias on Wikipedia.” As a WikiProject, the term used for a loose group of editors who collaborate around a particular topic, these editors work within the Wikipedia site and collectively create a social network that is unified around one central aim—representing the un- and underrepresented—and yet they are bound by no particular unified set of interests.
The first stage of a multi-method study, this paper looks at a snapshot of WP:CSB’s activity from both content analysis and social network perspectives to discover “who” geographically this coalition of the unrepresented is inserting into the digital annals of Wikipedia.

Wikipedia and Wikipedians
Developed in 2001 by Internet entrepreneur Jimmy Wales and academic Larry Sanger, Wikipedia is an online collaborative encyclopedia hosting articles in nearly 250 languages (Cohen). The English-language Wikipedia contains over 3.2 million articles, each of which is created, edited, and updated solely by users (Wikipedia “Welcome”). At the time of this study, Alexa, a website tracking organisation, ranked Wikipedia as the 6th most accessed site on the Internet. Unlike the five sites ahead of it though—Google, Facebook, Yahoo, YouTube (owned by Google), and live.com (owned by Microsoft)—all of which are multibillion-dollar businesses that deal more with information aggregation than information production, Wikipedia is a non-profit that operates on less than $500,000 a year and staffs only a dozen paid employees (Lih). Wikipedia is financed and supported by the WikiMedia Foundation, a charitable umbrella organisation with an annual budget of $4.6 million, mainly funded by donations (Middleton).

Wikipedia editors and contributors have the option of creating a user profile and participating via a username, or they may participate anonymously, with only an IP address representing their actions. Despite the option for total anonymity, many Wikipedians have chosen to visibly engage in this online community (Ayers, Matthews, and Yates; Bruns; Lih), and researchers across disciplines are studying the motivations of these new online collectives (Kane, Majchrzak, Johnson, and Chenisern; Oreg and Nov).
The motivations of open source software contributors, such as UNIX programmers and programming groups, have been shown to be complex and tied to both extrinsic and intrinsic rewards, including online reputation, self-satisfaction and enjoyment, and obligation to a greater common good (Hertel, Niedner, and Herrmann; Osterloh and Rota). Investigation into why Wikipedians edit has indicated multiple motivations as well, with community engagement, task enjoyment, and information sharing among the most significant (Schroer and Hertel). Additionally, Wikipedians seem to be taking up the cause of generativity (a concern for the ongoing health and openness of the Internet’s infrastructures) that Jonathan Zittrain notably called for in The Future of the Internet and How to Stop It.

Governance and Control
Although the technical infrastructure of Wikipedia is built to support and perhaps encourage an equal distribution of power on the site, Wikipedia is not a land of “anything goes.” The popular press has covered recent efforts by the site to reduce vandalism through a layer of editorial review (Cohen), a tightening of control cited as a possible reason for the recent dip in the number of active editors (Edwards). A number of regulations are already in place that prevent the open editing of certain articles and pages, such as the site’s disclaimers and pages that have suffered large amounts of vandalism. Editing wars can also cause temporary restrictions to editing, and Ayers, Matthews, and Yates point out that these wars can happen anywhere, even to Burt Reynolds’s page.

Academic studies have begun to explore the governance and control that has developed in the Wikipedia community, generally highlighting how order is maintained not through particular actors, but through established procedures and norms.
Konieczny tested whether Wikipedia’s evolution can be defined by Michels’ Iron Law of Oligarchy, which predicts that the everyday operations of any organisation cannot be run by a mass of members, and that ultimately control falls into the hands of the few. Through exploring a particular WikiProject on information validation, he concludes:

There are few indicators of an oligarchy having power on Wikipedia, and few trends of a change in this situation. The high level of empowerment of individual Wikipedia editors with regard to policy making, the ease of communication, and the high dedication to ideals of contributors succeed in making Wikipedia an atypical organization, quite resilient to the Iron Law. (189)

Butler, Joyce, and Pike support this assertion, though they emphasise that instead of oligarchy, control becomes encapsulated in a wide variety of structures, policies, and procedures that guide involvement with the site. A virtual “bureaucracy” emerges, but one that should not be viewed with the negative connotation often associated with the term. Other work considers control on Wikipedia through the framework of commons governance, where “peer production depends on individual action that is self-selected and decentralized rather than hierarchically assigned. Individuals make their own choices with regard to resources managed as a commons” (Viegas, Wattenberg and McKeon). The need for quality standards and quality control largely dictates this commons governance, though interviewing Wikipedians with various levels of responsibility revealed that policies and procedures are only as good as those who maintain them. Forte, Larco, and Bruckman argue that “the Wikipedia community has remained healthy in large part due to the continued presence of ‘old-timers’ who carry a set of social norms and organizational ideals with them into every WikiProject, committee, and local process in which they take part” (71).
Thus governance on Wikipedia is a strong representation of a democratic ideal, where actors and policies are closely tied in their evolution.

Transparency, Content, and Bias

The issue of transparency has proved to be a double-edged sword for Wikipedia and Wikipedians. The goal of a collective body of knowledge created by all—the “expert” and the “amateur”—can only be upheld if equal access to page creation and development is allotted to everyone, including those who prefer anonymity. And yet this very option for anonymity, or even worse, false identities, has been a sore subject for some in the Wikipedia community as well as a source of concern for some scholars (Santana and Wood). The case of a 24-year-old college dropout who represented himself as a multiple Ph.D.-holding theology scholar and edited over 16,000 articles brought these issues into the public spotlight in 2007 (Doran; Elsworth). Wikipedia itself has set up standards for content that include expectations of a neutral point of view, verifiability of information, and the publishing of no original research, but Santana and Wood argue that self-policing of these policies is not adequate:

The principle of managerial discretion requires that every actor act from a sense of duty to exercise moral autonomy and choice in responsible ways. When Wikipedia’s editors and administrators remain anonymous, this criterion is simply not met. It is assumed that everyone is behaving responsibly within the Wikipedia system, but there are no monitoring or control mechanisms to make sure that this is so, and there is ample evidence that it is not so.
(141)

At the theoretical level, some downplay these concerns of transparency and autonomy as logistical issues in lieu of the potential for information systems to support rational discourse and emancipatory forms of communication (Hansen, Berente, and Lyytinen), but others worry that the questionable “realities” created on Wikipedia will become truths once circulated to all areas of the Web (Langlois and Elmer). With the number of articles on the English-language version of Wikipedia reaching well into the millions, the task of mapping and assessing content has become a tremendous endeavour, one mostly taken on by information systems experts. Kittur, Chi, and Suh have used Wikipedia’s existing hierarchical categorisation structure to map change in the site’s content over the past few years. Their work revealed that in early 2008 “Culture and the arts” was the most dominant category of content on Wikipedia, representing nearly 30% of total content. People (15%) and geographical locations (14%) represent the next largest categories, while the natural and physical sciences showed the greatest increase in volume between 2006 and 2008 (+213%, with “Culture and the arts” close behind at +210%). This data may indicate that contributing to Wikipedia, and thus spreading knowledge, is growing amongst the academic community while maintaining its importance to the greater popular culture-minded community. Further work by Kittur and Kraut has explored the collaborative process of content creation, finding that too many editors on a particular page can reduce the quality of content, even when a project is well coordinated. Bias in Wikipedia content is a generally acknowledged and somewhat conflicted subject (Giles; Johnson; McHenry). The Wikipedia community has created numerous articles and pages within the site to define and discuss the problem.
Citing a survey conducted by the University of Würzburg, Germany, the “Wikipedia:Systemic bias” page describes the average Wikipedian as:

Male
Technically inclined
Formally educated
An English speaker
White
Aged 15-49
From a majority Christian country
From a developed nation
From the Northern Hemisphere
Likely a white-collar worker or student

Bias in content is thought to be perpetuated by this demographic of contributor, and the “founder effect,” a concept from genetics linking the original contributors to this same demographic, has been used to explain the origins of certain biases. Wikipedia’s “About” page discusses the issue as well, in the context of the open platform’s strengths and weaknesses:

in practice editing will be performed by a certain demographic (younger rather than older, male rather than female, rich enough to afford a computer rather than poor, etc.) and may, therefore, show some bias. Some topics may not be covered well, while others may be covered in great depth. No educated arguments against this inherent bias have been advanced.

Royal and Kapila’s study of Wikipedia content tested some of these assertions, finding identifiable bias in both their purposive and random sampling. They conclude that bias favoring larger countries is positively correlated with the size of the country’s Internet population, and corporations with larger revenues work in much the same way, garnering more coverage on the site. The researchers remind us that Wikipedia is “more a socially produced document than a value-free information source” (Royal and Kapila).

WikiProject: Countering Systemic Bias

As a coalition of current Wikipedia editors, the WikiProject: Countering Systemic Bias (WP:CSB) attempts to counter trends in content production and points of view deemed harmful to the democratic ideals of a value-free, open online encyclopedia.
WP:CSB’s mission is not one of policing the site, but rather deepening it:

Generally, this project concentrates upon remedying omissions (entire topics, or particular sub-topics in extant articles) rather than on either (1) protesting inappropriate inclusions, or (2) trying to remedy issues of how material is presented. Thus, the first question is "What haven't we covered yet?", rather than "how should we change the existing coverage?" (Wikipedia, “Countering”)

The project lays out a number of content areas lacking adequate representation, geographically highlighting the dearth in coverage of Africa, Latin America, Asia, and parts of Eastern Europe. WP:CSB also includes a “members” page that editors can sign to show their support, along with space to voice their opinions on the problem of bias on Wikipedia (the quotations at the beginning of this paper are taken from this “members” page). At the time of this study, 329 editors had self-selected and self-identified as members of WP:CSB, and this group constitutes the population sample for the current study. To explore the extent to which WP:CSB addressed these self-identified areas for improvement, each editor’s last 50 edits were coded for their primary geographical country of interest, as well as the conceptual category of the page itself (“P” for person/people, “L” for location, “I” for idea/concept, “T” for object/thing, or “NA” for indeterminate). For example, edits to the Wikipedia page for a single person like Tony Abbott (Australian federal opposition leader) were coded “Australia, P”, while an edit for a group of people like the Manchester United football team would be coded “England, P”. Coding was based on information obtained from the header paragraphs of each article’s Wikipedia page. After coding was completed, corresponding information on each country’s associated continent was added to the dataset, based on the United Nations Statistics Division listing. A total of 15,616 edits were coded for the study.
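The coding and aggregation procedure described above can be sketched in a few lines of Python. The sample records and the country-to-continent lookup below are illustrative stand-ins, not the study’s data:

```python
from collections import Counter

# Hypothetical sample of coded edits as (country, category) pairs, using the
# scheme above ("P", "L", "I", "T", "NA").
coded_edits = [
    ("Australia", "P"),      # e.g. an edit to Tony Abbott's page
    ("England", "P"),        # e.g. an edit to Manchester United's page
    ("United States", "T"),
    ("Gabon", "L"),
]

# Illustrative country-to-continent lookup (the study used the United Nations
# Statistics Division listing; this mapping is a stand-in).
continent_of = {
    "Australia": "Australia",
    "England": "Europe",
    "United States": "North America",
    "Gabon": "Africa",
}

# Tally edits by conceptual category, then tally person/people edits by continent.
category_counts = Counter(category for _, category in coded_edits)
continent_counts = Counter(
    continent_of[country] for country, category in coded_edits if category == "P"
)

print(category_counts["P"])        # → 2 person/people edits in this sample
print(continent_counts["Europe"])  # → 1 person edit tied to Europe
```

Scaled up to all 15,616 coded edits, the same two tallies yield the category and continent distributions reported in the coding-results table.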
Nearly 32% (n = 4,962) of these edits were on articles for persons or people (see Table A for complete coding results). From within this sub-sample of edits, a majority of the people (68.67%) represented are associated with North America and Europe (Figure A). If we break these statistics down further, nearly half of WP:CSB’s edits concerning people were associated with the United States (36.11%) and England (10.16%), with India (3.65%) and Australia (3.35%) following at a distance. These figures make sense for the English-language Wikipedia; over 95% of the population in the three Westernised countries speak English, and while India is still often regarded as a developing nation, its colonial British roots and the emergence of a market economy with large, technology-driven cities are logical explanations for its representation here (and some estimates make India the largest English-speaking nation by population on the globe today).

Table A: Coding Results

Total Edits: 15,616
(I) Ideas: 2,881 (18.45%)
(L) Location: 2,240 (14.34%)
(T) Thing: 5,200 (33.30%)
(P) People: 4,962 (31.78%)
NA: 333 (2.13%)

People by Continent
Africa: 315 (6.35%)
Asia: 827 (16.67%)
Australia: 175 (3.53%)
Europe: 1,411 (28.44%)
North America: 1,996 (40.23%)
South America: 128 (2.58%)
NA: 110 (2.22%)

The areas of the globe of main concern to WP:CSB proved to be much less represented by the coalition itself. Asia, far and away the most populous continent with more than 60% of the globe’s people (GeoHive), was represented in only 16.67% of edits. Africa (6.35%) and South America (2.58%) were equally underrepresented compared to both their real-world populations (15% and 9% of the globe’s population respectively) and the aforementioned dominance of the advanced Westernised areas.
However, while these percentages may seem low, in aggregate they do meet the quota set on the WP:CSB Project Page calling for one out of every twenty edits to be “a subject that is systematically biased against the pages of your natural interests.” By this standard, the coalition is indeed making headway in adding content that strategically counterbalances the natural biases of Wikipedia’s average editor.

Figure A

Social network analysis allows us to visualise multifaceted data in order to identify relationships between actors and content (Vego-Redondo; Watts). Similar to Davis’s well-known sociological study of Southern American socialites in the 1930s (Scott), our Wikipedia coalition can be conceptualised as individual actors united by common interests, and a network of relations can be constructed with software such as UCINET. A mapping algorithm that considers both the relationship between all sets of actors and each actor to the overall collective structure produces an image of our network. This initial network is bimodal, as both our Wikipedia editors and their edits (again, coded for country of interest) are displayed as nodes (Figure B). Edge-lines between nodes represent a relationship, and here that relationship is the act of editing a Wikipedia article. We see from our network that the “U.S.” and “England” hold central positions in the network, with a mass of editors crowding around them. A perimeter of nations is then held in place by their ties to editors through the U.S. and England, with a second layer of editors and poorly represented nations (Gabon, Laos, Uzbekistan, etc.) around the boundaries of the network.

Figure B

We are reminded from this visualisation both of the centrality of the two Western powers even among WP:CSB editors, and of the peripheral nature of most other nations in the world. But we also learn which editors in the project are contributing most to underrepresented areas, and which are less “tied” to the Western core.
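The bimodal editor–country network just described can be approximated with a short two-mode analysis in plain Python (the study itself used UCINET). The edge list below is a toy stand-in rather than the study’s data, though the usernames are drawn from the coalition:

```python
from collections import defaultdict

# Toy two-mode (editor x country) edge list echoing the network described
# above; the usernames appear in the study, but these ties are illustrative.
edits = [
    ("Wizzy", "England"), ("Wizzy", "Gabon"),
    ("Warofdreams", "England"), ("Warofdreams", "Laos"),
    ("Gallador", "Uzbekistan"),
]

editor_ties = defaultdict(set)
country_degree = defaultdict(int)
for editor, country in edits:   # an edge = "editor edited an article on that country"
    editor_ties[editor].add(country)
    country_degree[country] += 1

# Countries with the highest degree sit at the core of the network.
core_ranking = sorted(country_degree, key=country_degree.get, reverse=True)
print(core_ranking[0])  # → "England"

# "Bridge" editors connect the Western core to the periphery.
core = {"U.S.", "England"}
bridges = [e for e, ties in editor_ties.items() if ties & core and ties - core]
print(bridges)  # → ['Wizzy', 'Warofdreams']
```

Ranking countries by degree reproduces the core–periphery picture of the visualisation, while the bridge test singles out editors tied to both the Western core and the margins.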
Here we see “Wizzy” and “Warofdreams” among the second layer of editors who act as a bridge between the core and the periphery; these are editors with interests in both the Western and marginalised nations. Located along the outer edge, “Gallador” and “Gerrit” have no direct ties to the U.S. or England, concentrating all of their edits on less represented areas of the globe. Identifying editors at these key positions in the network will help with future research, informing interview questions that will investigate their interests further, but more significantly, probing motives for participation and action within the coalition. Additionally, we can break the network down further to discover editors who appear to have similar interests in underrepresented areas. Figure C strips down the network to only editors and edits dealing with Africa and South America, the least represented continents. From this we can easily find three types of editors again: those who have singular interests in particular nations (the outermost layer of editors), those who have interests in a particular region (the second layer moving inward), and those who have interests in both of these underrepresented regions (the center layer in the figure). This last group of editors may prove to be the most crucial to understand, as they are carrying the full load of WP:CSB’s mission.

Figure C

The End of Geography, or the Reclamation?

In The Internet Galaxy, Manuel Castells writes that “the Internet Age has been hailed as the end of geography,” a bold suggestion, but one that has gained traction over the last 15 years as the excitement for the possibilities offered by information communication technologies has often overshadowed structural barriers to participation like the Digital Divide (207).
Castells goes on to amend the “end of geography” thesis by showing how global information flows and regional Internet access rates, while creating a new “map” of the world in many ways, are still closely tied to power structures in the analog world. The Internet Age “redefines distance but does not cancel geography” (207). The work of WikiProject: Countering Systemic Bias emphasises the importance of place and representation in the information environment that continues to be constructed in the online world. This study looked at only a small portion of this coalition’s efforts (~16,000 edits)—a snapshot of their labor frozen in time—which itself is only a minute portion of the information being dispatched through Wikipedia on a daily basis (~125,000 edits). Further analysis of WP:CSB’s work over time, as well as qualitative research into the identities, interests and motivations of this collective, is needed to understand more fully how information bias is understood and challenged in the Internet galaxy. The data here indicate that this is a fight worth fighting, at least for a growing few.

References

Alexa. “Top Sites.” Alexa.com, n.d. 10 Mar. 2010 ‹http://www.alexa.com/topsites>.
Ayers, Phoebe, Charles Matthews, and Ben Yates. How Wikipedia Works: And How You Can Be a Part of It. San Francisco, CA: No Starch, 2008.
Bruns, Axel. Blogs, Wikipedia, Second Life, and Beyond: From Production to Produsage. New York: Peter Lang, 2008.
Butler, Brian, Elisabeth Joyce, and Jacqueline Pike. Don’t Look Now, But We’ve Created a Bureaucracy: The Nature and Roles of Policies and Rules in Wikipedia. Paper presented at the 2008 CHI Annual Conference, Florence.
Castells, Manuel. The Internet Galaxy: Reflections on the Internet, Business, and Society. Oxford: Oxford UP, 2001.
Cohen, Noam. “Wikipedia.” New York Times, n.d. 12 Mar. 2010 ‹http://www.nytimes.com/info/wikipedia/>.
Doran, James. “Wikipedia Chief Promises Change after ‘Expert’ Exposed as Fraud.” The Times, 6 Mar.
2007 ‹http://technology.timesonline.co.uk/tol/news/tech_and_web/article1480012.ece>.
Edwards, Lin. “Report Claims Wikipedia Losing Editors in Droves.” Physorg.com, 30 Nov. 2009. 12 Feb. 2010 ‹http://www.physorg.com/news178787309.html>.
Elsworth, Catherine. “Fake Wikipedia Prof Altered 20,000 Entries.” London Telegraph, 6 Mar. 2007 ‹http://www.telegraph.co.uk/news/1544737/Fake-Wikipedia-prof-altered-20000-entries.html>.
Forte, Andrea, Vanessa Larco, and Amy Bruckman. “Decentralization in Wikipedia Governance.” Journal of Management Information Systems 26 (2009): 49-72.
Giles, Jim. “Internet Encyclopedias Go Head to Head.” Nature 438 (2005): 900-901.
Hansen, Sean, Nicholas Berente, and Kalle Lyytinen. “Wikipedia, Critical Social Theory, and the Possibility of Rational Discourse.” The Information Society 25 (2009): 38-59.
Hertel, Guido, Sven Niedner, and Stefanie Herrmann. “Motivation of Software Developers in Open Source Projects: An Internet-Based Survey of Contributors to the Linux Kernel.” Research Policy 32 (2003): 1159-1177.
Johnson, Bobbie. “Rightwing Website Challenges ‘Liberal Bias’ of Wikipedia.” The Guardian, 1 Mar. 2007. 8 Mar. 2010 ‹http://www.guardian.co.uk/technology/2007/mar/01/wikipedia.news>.
Kane, Gerald C., Ann Majchrzak, Jeremiah Johnson, and Lily Chenisern. A Longitudinal Model of Perspective Making and Perspective Taking within Fluid Online Collectives. Paper presented at the 2009 International Conference on Information Systems, Phoenix, AZ, 2009.
Kittur, Aniket, Ed H. Chi, and Bongwon Suh. What’s in Wikipedia? Mapping Topics and Conflict Using Socially Annotated Category Structure. Paper presented at the 2009 CHI Annual Conference, Boston, MA.
———, and Robert E. Kraut. Harnessing the Wisdom of Crowds in Wikipedia: Quality through Collaboration. Paper presented at the 2008 Association for Computing Machinery’s Computer Supported Cooperative Work Annual Conference, San Diego, CA.
Konieczny, Piotr.
“Governance, Organization, and Democracy on the Internet: The Iron Law and the Evolution of Wikipedia.” Sociological Forum 24 (2009): 162-191.
———. “Wikipedia: Community or Social Movement?” Interface: A Journal for and about Social Movements 1 (2009): 212-232.
Langlois, Ganaele, and Greg Elmer. “Wikipedia Leeches? The Promotion of Traffic through a Collaborative Web Format.” New Media & Society 11 (2009): 773-794.
Lih, Andrew. The Wikipedia Revolution. New York, NY: Hyperion, 2009.
McHenry, Robert. “The Real Bias in Wikipedia: A Response to David Shariatmadari.” OpenDemocracy.com, 2006. 8 Mar. 2010 ‹http://www.opendemocracy.net/media-edemocracy/wikipedia_bias_3621.jsp>.
Middleton, Chris. “The World of Wikinomics.” Computer Weekly, 20 Jan. 2009: 22-26.
Oreg, Shaul, and Oded Nov. “Exploring Motivations for Contributing to Open Source Initiatives: The Roles of Contribution, Context and Personal Values.” Computers in Human Behavior 24 (2008): 2055-2073.
Osterloh, Margit, and Sandra Rota. “Trust and Community in Open Source Software Production.” Analyse & Kritik 26 (2004): 279-301.
Royal, Cindy, and Deepina Kapila. “What’s on Wikipedia, and What’s Not…?: Assessing Completeness of Information.” Social Science Computer Review 27 (2008): 138-148.
Santana, Adele, and Donna J. Wood. “Transparency and Social Responsibility Issues for Wikipedia.” Ethics of Information Technology 11 (2009): 133-144.
Schroer, Joachim, and Guido Hertel. “Voluntary Engagement in an Open Web-Based Encyclopedia: Wikipedians and Why They Do It.” Media Psychology 12 (2009): 96-120.
Scott, John. Social Network Analysis. London: Sage, 1991.
Vego-Redondo, Fernando. Complex Social Networks. Cambridge: Cambridge UP, 2007.
Viegas, Fernanda B., Martin Wattenberg, and Matthew M. McKeon. “The Hidden Order of Wikipedia.” Online Communities and Social Computing (2007): 445-454.
Watts, Duncan. Six Degrees: The Science of a Connected Age. New York, NY: W. W. Norton & Company, 2003.
Wikipedia. “About.” n.d. 8 Mar.
2010 ‹http://en.wikipedia.org/wiki/Wikipedia:About>.
———. “Welcome to Wikipedia.” n.d. 8 Mar. 2010 ‹http://en.wikipedia.org/wiki/Main_Page>.
———. “Wikiproject: Countering Systemic Bias.” n.d. 12 Feb. 2010 ‹http://en.wikipedia.org/wiki/Wikipedia:WikiProject_Countering_systemic_bias#Members>.
Zittrain, Jonathan. The Future of the Internet and How to Stop It. New Haven, CT: Yale UP, 2008.
19

Marcheva, Marta. "The Networked Diaspora: Bulgarian Migrants on Facebook." M/C Journal 14, no. 2 (November 17, 2010). http://dx.doi.org/10.5204/mcj.323.

Full text
Abstract:
The need to sustain and/or create a collective identity is regularly seen as one of the cultural priorities of diasporic peoples and this, in turn, depends upon the existence of a uniquely diasporic form of communication and connection with the country of origin. Today, digital media technologies provide easy information recording and retrieval, and mobile IT networks allow global accessibility and participation in the redefinition of identities. Vis-à-vis our understanding of the proximity and connectivity associated with globalisation, the role of ICTs cannot be underestimated and is clearly more than a simple instrument for the expression of a pre-existing diasporic identity. Indeed, the concept of “e-diaspora” is gaining popularity. Consequently, research into the role of ICTs in the lives of diasporic peoples contributes to a definition of the concept of diaspora, understood here as the result of the dispersal of all members of a nation in several countries. In this context, I will demonstrate how members of the Bulgarian diaspora negotiate not only their identities but also their identifications through one of the most popular community websites, Facebook. My methodology consists of the active observation of Bulgarian users belonging to the diaspora, the participation in groups and forums on Facebook, and the analysis of discourses produced online. This research was conducted for the first time between 1 August 2008 and 31 May 2009 through the largest 20 (of 195) Bulgarian groups on the French version of Facebook and 40 (of over 500) on the English one. It is important to note that the public considered to be predominantly involved in Facebook is a young audience in the age group of 18-35 years. Therefore, this article is focused on two generations of Bulgarian immigrants: mostly recent young and second-generation migrants. 
The observed users are therefore members of the Bulgarian diaspora who have little or no experience of communism, who don’t feel the weight of the past, and who have grown up as free and often cosmopolitan citizens. Communist hegemony in Bulgaria began on 9 September 1944, when the army and the communist militiamen deposed the country’s government and handed power over to an anti-fascist coalition. During the following decades, Bulgaria became the perfect Soviet satellite and the imposed Stalinist model led to sharp curtailing of the economic and social contacts with the free world beyond the Iron Curtain. In 1989, the fall of the Berlin Wall marked the end of the communist era and the political and economic structures that supported it.

Identity, Internet, and Diaspora

Through the work of Mead, Todorov, and boyd it is possible to conceptualise the subject in terms of both internal and external social identity (Mead, Todorov, boyd). In this article, I will focus, in particular, on social and national identities as expressions of the process of sharing stories, experiences, and understanding between individuals. In this respect, the phenomenon of Facebook is especially well placed to mediate between identifications which, according to Freud, facilitate the plural subjectivities and the establishment of an emotional network of mutual bonds between the individual and the group (Freud). This research also draws on Goffman who, from a sociological point of view, demystifies the representation of the Self by developing a dramaturgical theory (Goffman), whereby identity is constructed through the "roles" that people play on the social scene. Social life is a vast stage where the actors are required to adhere to certain socially acceptable rituals and guidelines. It means that we can consider the presentation of Self, or Others, as a facade or a construction of socially accepted features.
Among all the ICTs, the Internet is, by far, the medium most likely to facilitate free expression of identity through a multitude of possible actions and community interactions. Personal and national memories circulate in the transnational space of the Internet and are reshaped when framed from specific circumstances such as those raised by the migration process. In an age of globalisation marked by the proliferation of population movements, instant communication, and cultural exchanges across geographic boundaries, the phenomenon of the diaspora has caught the attention of a growing number of scholars. I shall be working with Robin Cohen’s definition of diaspora which highlights the following common features: (1) dispersal from an original homeland; (2) the expansion from a homeland in search of work; (3) a collective memory and myth about the homeland; (4) an idealisation of the supposed ancestral homeland; (5) a return movement; (6) a strong ethnic group consciousness sustained over a long time; (7) a troubled relationship with host societies; (8) a sense of solidarity with co-ethnic members in other countries; and (9) the possibility of a distinctive creative, enriching life in tolerant host countries (Cohen). Following on this earlier work on the ways in which diasporas give rise to new forms of subjectivity, the concept of “e-diaspora” is now rapidly gaining in popularity. The complex association between diasporic groups and ICTs has led to a concept of e-diasporas that actively utilise ICTs to achieve community-specific goals, and that have become critical for the formation and sustenance of an exilic community for migrant groups around the globe (Srinivasan and Pyati). 
Diaspora and the Digital Age

Anderson points out two key features of the Internet: first, it is a heterogeneous electronic medium, with hardly perceptible contours, and is in a state of constant development; second, it is a repository of “imagined communities” without geographical or legal legitimacy, whose members will probably never meet (Anderson). Unlike “real” communities, where people have physical interactions, in the imagined communities, individuals do not have face-to-face communication and daily contact, but they nonetheless feel a strong emotional attachment to the nation. The Internet not only opens new opportunities to gain greater visibility and strengthen the sense of belonging to community, but it also contributes to the emergence of a transnational public sphere where the communities scattered in various locations freely exchange their views and ideas without fear of restrictions or censorship from traditional media (Appadurai, Bernal). As a result, the Web becomes a virtual diasporic space which opens up, to those who have left their country, a new means of confrontation and social participation. Within this new diasporic space, migrants are bound in their disparate geographical locations by a common vision or myth about the homeland (Karim). Thanks to the Internet, the computer has become a primary technological intermediary between virtual networks, bringing its members closer in a “global village” where everyone is immediately connected to others. Thus, today’s diasporas are not the diaspora of previous generations in that the migration is experienced and negotiated very differently: people in one country are now able to continue to participate actively in another country. In this context, the arrival of community sites has increased the capacity of users to create a network on the Internet, to rediscover lost links, and strengthen new ones.
Unlike offline communities, which may weaken once their members have left the physical space, online communities that are no longer limited by the requirement of physical presence in the common space have the capacity to endure.

Identity Strategies of New Generations of Bulgarian Migrants

It is very difficult to quantify migration to or from Bulgaria. Existing data are not only partial and limited but, in some cases, give an inaccurate view of migration from Bulgaria (Soultanova). Informal data confirm that one million Bulgarians, around 15 per cent of Bulgaria’s entire population (7,620,238 inhabitants in 2007), are now scattered around the world (National Statistical Institute of Bulgaria). The Bulgarian migrant is caught in a system of redefinition of identity through the duration of his or her relocation. Emigrating from a country like Bulgaria implies a high number of contingencies. Bulgarians’ self-identification is relative to the inferiority complex of a poor country which has a great deal to do to catch up with its neighbours. Before the accession of Bulgaria to the European Union, the country was often associated with what have been called “Third World countries” and seen as a source of crime and social problems. Members of the Bulgarian diaspora faced daily prejudice due to the bad reputation of their country of origin, though the extent of the hostility depended upon the “host” nation (Marcheva). Geographically, Bulgaria is one of the most eastern countries in Europe, the last to enter the European Union, and its image abroad has not facilitated the integration of the Bulgarian diaspora. The differences between Bulgarian migrants and the “host society” perpetuate a sentiment of marginality that is now countered with an online appeal for national identity markers and shared experiences.

Facebook: The Ultimate Social Network

The Growing Popularity of Facebook

With more than 500 million active members, Facebook is the most visited website in the world.
In June 2007, Facebook experienced a record increase of 270 per cent in connections in one year (source: comScore World Metrix). More than 70 translations of the site are available to date, including the Bulgarian version. What makes it unique is that Facebook positively encourages identity games. Moreover, Facebook provides the symbolic building blocks with which to build a collective identity through shared forms of discourse and ways of thinking. People are desperate to make a good impression on the Internet: that is why they spend so much time managing their online identity. One of the most important aspects of Facebook is that it enables users to control and manage their image, leaving the choice of how their profile appears on the pages of others a matter of personal preference at any given time. Despite some limitations, we will see that Facebook offers the Bulgarian community abroad the possibility of an intense and ongoing interaction with fellow nationals, including the opportunity to assert and develop a complex new national/transnational identity.

Facebook Experiences of the Bulgarian Diaspora

Created in the United States in 2004 and extended to use in Europe two or three years later, Facebook was quickly adopted by members of the Bulgarian diaspora. Here, it is very important to note that, although the Internet per se has enabled Bulgarians across the globe to introduce Cyrillic script into the public arena, it is definitely Facebook that has made digital Cyrillic visible. Early in computer history, keyboards with the Cyrillic alphabet simply did not exist. Thus, Bulgarians were forced to translate their language into Latin script. Today, almost all members of the Bulgarian population who own a computer use a keyboard that combines the two alphabets, Latin and Cyrillic, and this allows alternation between the two.
This is not the case for the majority of Bulgarians living abroad who are forced to use a keyboard specific to their country of residence. Thus, Bulgarians online have adopted a hybrid code to speak and communicate. Since foreign keyboards are not equipped with the same consonants and vowels that exist in the Bulgarian language, they use the Latin letters that best suit Bulgarian phonetics. Several possible interpretations of these “encoded” texts exist, which becomes another way for the Bulgarian migrants to distinguish and assert themselves. One of these encoded scripts is supplemented by figures. For example, the number “6,” written “шест” in Bulgarian, is used to represent the Bulgarian letter “ш.” Bulgarian immigrants therefore employ very specific codes of communication that enhance the feeling of belonging to a community that shares the same language, which is often incomprehensible to others. As the ultimate social networking website, Facebook brings together Bulgarians from all over the world and offers them a space to preserve online memorials and digital archives. As a result, the Bulgarian diaspora privileges this website in order to manage the strong links between its members. Indeed, within months of coming into online existence, Facebook established itself as a powerful social phenomenon for the Bulgarian diaspora and, very soon, a virtual map of the Bulgarian diaspora was formed. It should be noted, however, that this mapping was focused on the new generation of Bulgarian migrants more familiar with the Internet and most likely to travel. By identifying the presence of online groups by country or city, I was able to locate the most active Bulgarian communities: “Bulgarians in UK” (524 members), “Bulgarians in Chicago” (436 members), “Bulgarians studying in the UK” (346 members), “Bulgarians in America” (333 members), “Bulgarians in the USA” (314 members), “Bulgarians in Montreal” (249 members), “Bulgarians in Munich” (241 members), and so on. 
These figures are based on the “Groups” Application of Facebook as updated in February 2010. Through those groups, a symbolic diasporic geography is imagined and communicated: the digital “border crossing,” as well as the real one, becomes a major identity resource. Thus, Bulgarian users of Facebook are connecting from the four corners of the globe in order to rebuild family links and to participate virtually in the marriages, births, and lives of their families. It sometimes seems that the whole country has an appointment on Facebook, and that all the photos and stories of Bulgarians are more or less accessible to the community in general. Among its virtual initiatives, Facebook has made available to its users an effective mobilising tool, the Causes, which is used as a virtual noticeboard for activities and ideas circulating in “real life.” The members of the Bulgarian diaspora choose to adhere to different “causes” that may be local, national, or global, and that are complementary to the civic and socially responsible side of the identity they have chosen to construct online. Acting as a virtual realm in which distinct and overlapping trajectories coexist, Facebook thus enables users to articulate different stories and meanings and to foster a democratic imaginary about both the past and the future. Facebook encourages diasporas to produce new initiatives to revive or create collective memories and common values. Through photos and videos, scenes of everyday life are celebrated and manipulated as tools to reconstruct, reconcile, and display a part of the history and the identity of the migrant. By combating the feelings of disorientation, the consciousness of sharing the same national background and culture facilitates dialogue and neutralises the anxiety and loneliness of Bulgarian migrants. When cultural differences become more acute, the sense of isolation increases and this encourages migrants to look for company and solidarity online. 
As the number of immigrants connected and visible on Facebook gets larger, so the use of the Internet heightens their sense of a substantial collective identity. This is especially important for migrants during the early years of relocation when their sense of identity is most fragile. It can therefore be argued that, through the Internet, some Bulgarian migrants are replacing alienating face-to-face contact with virtual friends and enjoying the feeling of reassurance and belonging to a transnational community of compatriots. In this sense, Facebook is a propitious ground for the establishment of the three identity strategies defined by Herzfeld: cultural intimacy (or self-stereotypes); structural nostalgia (the evocation of a time when everything was going better); and the social poetic (the strategies aiming to retrieve a particular advantage and turn it into a permanent condition). In this way, the willingness to remain continuously in virtual contact with other Bulgarians often reveals a desire to return to the place of birth. Nostalgia and outsourcing of such sentiments help migrants to cope with feelings of frustration and disappointment. I observed that it is just after their return from summer holidays spent in Bulgaria that members of the Bulgarian diaspora are most active on the Bulgarian forums and pages on Facebook. The “return tourism” (Fourcade) during the summer or for the winter holidays seems to be a central theme in the forums on Facebook and an important source of emotional refuelling. Tensions between identities can also lead to creative formulations through Facebook’s pages. Thus, the group “You know you’re a Bulgarian when...”, which enjoys very active participation from the Bulgarian diaspora, is a space where everyone is invited to share, through a single sentence, some fact of everyday life with which all Bulgarians can identify. 
With humour and self-irony, this Facebook page demonstrates what is distinctive about being Bulgarian but also highlights frustration with certain prejudices and stereotypes. Frequently these profiles are characterised by seemingly “glocal” features. The same Bulgarian user could define himself as a Parisian, adhering to the group “You know you’re from Paris when...”, but also a native of a Bulgarian town (“You know you’re from Varna when...”). At the same time, he is an architect (“All architects on Facebook”), supporting the candidacy of Barack Obama, a fan of Japanese manga (“maNga”), of a French actor, an American cinema director, or Indian food. He joins a cause to save a wild beach on the Black Sea coast (“We love camping: Gradina Smokinia and Arapia”) and protests virtually against the slaughter of dolphins in the Faroe Islands (“World shame”). One month, the individual could identify as Bulgarian, but next month he might choose to locate himself in the country in which he is now resident. Thus, Facebook creates a virtual territory without borders for the cosmopolitan subject (Negroponte) and this confirms the premise that the Internet does not lead to the convergence of cultures, but rather confirms the opportunities for diversification and pluralism through multiple social and national affiliations. Facebook must therefore be seen as an advantageous space for the representation and interpretation of identity and for performance and digital existence. Bulgarian migrants bring together elements of their offline lives in order to construct, online, entirely new composite identities. The Bulgarians we have studied as part of this research almost never use pseudonyms and do not seem to feel the need to hide their material identities. This suggests that they are mature people who value their status as migrants of Bulgarian origin and who feel confident in presenting their natal identities rather than hiding behind a false name. 
Starting from this material social/national identity, which is revealed through the display of surname with a Slavic consonance, members of the Bulgarian diaspora choose to manage their complex virtual identities online. Conclusion Far from their homeland, beset with feelings of insecurity and alienation as well as daily experiences of social and cultural exclusion (much of it stemming from an ongoing prejudice towards citizens from ex-communist countries), it is no wonder that migrants from Bulgaria find relief in meeting up with compatriots in front of their screens. Although some migrants assume their Bulgarian identity as a mixture of different cultures and are trying to rethink and continuously negotiate their cultural practices (often through the display of contradictory feelings and identifications), others identify with an imagined community and enjoy drawing boundaries between what is “Bulgarian” and what is not. The indispensable daily visit to Facebook is clearly a means of forging an ongoing sense of belonging to the Bulgarian community scattered across the globe. Facebook makes possible the double presence of Bulgarian immigrants both here and there and facilitates the ongoing processes of identity construction that depend, more and more, upon new media. In this respect, the role that Facebook plays in the life of the Bulgarian diaspora may be seen as a facet of an increasingly dynamic transnational world in which interactive media contribute creatively to the formation of collective identities and the deformation of monolithic cultures. References Anderson, Benedict. L’Imaginaire National: Réflexions sur l’Origine et l’Essor du Nationalisme. Paris: La Découverte, 1983. Appadurai, Arjun. Après le Colonialisme: Les Conséquences Culturelles de la Globalisation. Paris: Payot, 2001. Bernal, Victoria. “Diaspora, Cyberspace and Political Imagination: The Eritrean Diaspora Online.” Global Network 6 (2006): 161-79. boyd, danah. 
“Social Network Sites: Public, Private, or What?” Knowledge Tree (May 2007). Cohen, Robin. Global Diasporas: An Introduction. London: University College London Press, 1997. Goffman, Erving. La Présentation de Soi. Paris: Editions de Minuit, Collection Le Sens Commun, 1973. Fourcade, Marie-Blanche. “De l’Arménie au Québec: Itinéraires de Souvenirs Touristiques.” Ethnologies 27.1 (2005): 245-76. Freud, Sigmund. “Psychologie des Foules et Analyses du Moi.” Essais de Psychanalyse. Paris: Petite Bibliothèque Payot, 2001 (1921). Herzfeld, Michael. Intimité Culturelle. Presse de l’Université de Laval, 2008. Karim, Karim-Haiderali. The Media of Diaspora. Oxford: Routledge, 2003. Marcheva, Marta. “Bulgarian Diaspora and the Media Treatment of Bulgaria in the French, Italian and North American Press (1992–2007).” Unpublished PhD dissertation. Paris: University Panthéon – Assas Paris 2, 2010. Mead, George Herbert. L’Esprit, le Soi et la Société. Paris: PUF, 2006. Negroponte, Nicholas. Being Digital. Vintage, 2005. Soultanova, Ralitza. “Les Migrations Multiples de la Population Bulgare.” Actes du Colloque “La France et les Migrants des Balkans: Un État des Lieux.” Paris: Courrier des Balkans, 2005. Srinivasan, Ramesh, and Ajit Pyati. “Diasporic Information Environments: Reframing Immigrant-Focused Information Research.” Journal of the American Society for Information Science and Technology 58.12 (2007): 1734-44. Todorov, Tzvetan. Nous et les Autres: La Réflexion Française sur la Diversité Humaine. Paris: Seuil, 1989.
20

Watson, Robert. "E-Press and Oppress." M/C Journal 8, no. 2 (June 1, 2005). http://dx.doi.org/10.5204/mcj.2345.

Full text
Abstract:
From elephants to ABBA fans, silicon to hormone, the following discussion uses a new research method to look at printed text, motion pictures and a teenage rebel icon. If by ‘print’ we mean a mechanically reproduced impression of a cultural symbol in a medium, then printing has been with us since before microdot security prints were painted onto cars, before voice prints, laser prints, network servers, record pressings, motion picture prints, photo prints, colour woodblock prints, before books, textile prints, and footprints. If we accept that higher mammals such as elephants have a learnt culture, then it is possible to extend a definition of printing beyond Homo sapiens. Poole reports that elephants mechanically trumpet reproductions of human car horns into the air surrounding their society. If nothing else, this cross-species, cross-cultural reproduction, this ‘ability to mimic’ is ‘another sign of their intelligence’. Observation of child development suggests that the first significant meaningful ‘impression’ made on the human mind is that of the face of the child’s nurturer – usually its mother. The baby’s mind forms an ‘impression’, a mental print, a reproducible memory data set, of the nurturer’s face, voice, smell, touch, etc. That face is itself a cultural construct: hair style, makeup, piercings, tattoos, ornaments, nutrition-influenced skin and smell, perfume, temperature and voice. A mentally reproducible pattern of a unique face is formed in the mind, and we use that pattern to distinguish ‘familiar and strange’ in our expanding social orbit. The social relations of patterned memory – of imprinting – determine the extent to which we explore our world (armed with research aids such as text print) or whether we turn to violence or self-harm (Bretherton). 
While our cultural artifacts (such as vellum maps or networked voice message servers) bravely extend our significant patterns into the social world and the traversed environment, it is useful to remember that such artifacts, including print, are themselves understood by our original pattern-reproduction and impression system – the human mind, developed in childhood. The ‘print’ is brought to mind differently in different discourses. For a reader, a ‘print’ is a book, a memo or a broadsheet, whether it is the Indian Buddhist Sanskrit texts ordered to be printed in 593 AD by the Chinese emperor Sui Wen-ti (Silk Road) or the US Defense Department memo authorizing lower ranks to torture the prisoners taken by the Bush administration (Sanchez, cited in ABC). Other fields see prints differently. For a musician, a ‘print’ may be the sheet music which spread classical and popular music around the world; it may be a ‘record’ (as in a ‘recording’ session), where sound is impressed to wax, vinyl, charged silicon particles, or the alloys (Smith, “Elpida”) of an mp3 file. For the fine artist, a ‘print’ may be any mechanically reproduced two-dimensional (or embossed) impression of a significant image in media from paper to metal, textile to ceramics. ‘Print’ embraces the Japanese Ukiyo-e colour prints of Utamaro, the company logos that wink from credit card holographs, the early photographs of Talbot, and the textured patterns printed into neolithic ceramics. Computer hardware engineers print computational circuits. Homicide detectives investigate both sweaty finger prints and the repeated, mechanical gaits of suspects, which are imprinted into the earthy medium of a crime scene. For film makers, the ‘print’ may refer to a photochemical polyester reproduction of a motion picture artifact (the reel of ‘celluloid’), or a DVD laser disc impression of the same film. 
Textualist discourse has borrowed the word ‘print’ to mean ‘text’, so ‘print’ may also refer to the text elements within the vision track of a motion picture: the film’s opening titles, or texts photographed inside the motion picture story such as the sword-cut ‘Z’ in Zorro (Niblo). Before the invention of writing, the main mechanically reproduced impression of a cultural symbol in a medium was the humble footprint in the sand. The footprints of tribes – and neighbouring animals – cut tracks in the vegetation and the soil. Printed tracks led towards food, water, shelter, enemies and friends. Having learnt to pattern certain faces into their mental world, children grew older and were educated in the footprints of family and clan, enemies and food. The continuous impression of significant foot traffic in the medium of the earth produced the lines between significant nodes of prewriting and pre-wheeled cultures. These tracks were married to audio tracks, such as the song lines of the Australian Aborigines, or the ballads of tramping culture everywhere. A typical tramping song has the line, ‘There’s a track winding back to an old-fashion shack along the road to Gundagai,’ (O’Hagan), although this colonial-style song was actually written for radio and became an international hit on the airwaves, rather than the tramping trails. The printed tracks impressed by these cultural flows are highly contested and diverse, and their foot prints are woven into our very language. The names for printed tracks have entered our shared memory from the intersection of many cultures: ‘Track’ is a Germanic word entering English usage comparatively late (1470) and now used mainly in audio visual cultural reproduction, as in ‘soundtrack’. ‘Trek’ is a Dutch word for ‘track’ now used mainly by ecotourists and science fiction fans. 
‘Learn’ is a Proto-Indo-European word: the verb ‘learn’ originally meant ‘to find a track’ back in the days when ‘learn’ had a noun form which meant ‘the sole of the foot’. ‘Tract’ and ‘trace’ are Latin words entering English print usage before 1374 and now used mainly in religious, and electronic surveillance, cultural reproduction. ‘Trench’ in 1386 was a French path cut through a forest. ‘Sagacity’ in English print in 1548 was originally the ability to track or hunt, in Proto-Indo-European cultures. ‘Career’ (in English before 1534) was the print made by chariots in ancient Rome. ‘Sleuth’ (1200) was a Norse noun for a track. ‘Investigation’ (1436) was Latin for studying a footprint (Harper). The arrival of symbolic writing scratched on caves, hearth stones, and trees (the original meaning of ‘book’ is tree), brought extremely limited text education close to home. Then, with baked clay tablets, incised boards, slate, bamboo, tortoise shell, cast metal, bark cloth, textiles, vellum, and – later – paper, a portability came to text that allowed any culture to venture away from known ‘foot’ paths with a reduction in the risk of becoming lost and perishing. So began the world of maps, memos, bills of sale, philosophic treatises and epic mythologies. Some of this was printed, such as the mechanical reproduction of coins, but the fine handwriting required of long, extended, portable texts could not be printed until the invention of paper in China about 2000 years ago. Compared to lithic architecture and genes, portable text is a fragile medium, and little survives from the millennia of its innovators. The printing of large non-text designs onto bark-paper and textiles began in neolithic times, but Sui Wen-ti’s imperial memo of 593 AD gives us the earliest written date for printed books, although we can assume they had been published for many years previously. The printed book was a combination of Indian philosophic thought, wood carving, ink chemistry and Chinese paper. 
The earliest surviving fragment of paper-print technology is ‘Mantras of the Dharani Sutra’, a Buddhist scripture written in the Sanskrit language of the Indian subcontinent, unearthed at an early Tang Dynasty site in Xian, China – making the fragment a veteran piece of printing, in the sense that Sanskrit books had been in print for at least a century by the early Tang Dynasty (Chinese Graphic Arts Net). At first, paper books were printed with page-size carved wooden boards. Five hundred years later, Pi Sheng (c.1041) baked individual reusable ceramic characters in a fire and invented the durable moveable type of modern printing (Silk Road 2000). Abandoning carved wooden tablets, the ‘digitizing’ of Chinese moveable type sped up the production of printed texts. In turn, Pi Sheng’s flexible, rapid, sustainable printing process expanded the political-cultural impact of the literati in Asian society. Digitized block text on paper produced a bureaucratic, literate elite so powerful in Asia that Louis XVI of France copied China’s print-based Confucian system of political authority for his own empire, and so began the rise of the examined public university systems, and the civil service systems, of most European states (Watson, Visions). By reason of its durability, its rapid mechanical reproduction, its culturally agreed signs, literate readership, revered authorship, shared ideology, and distributed portability, a ‘print’ can be a powerful cultural network which builds and expands empires. But print also attacks and destroys empires. A case in point is the Spanish conquest of Aztec America: The Aztecs had immense libraries of American literature on bark-cloth scrolls, a technology which predated paper. These libraries were wiped out by the invading Spanish, who carried a different book before them (Ewins). In the industrial age, the printing press and the gun were seen as the weapons of rebellions everywhere. 
In 1776, American rebels staffed their ‘Homeland Security’ units with paper makers, knowing that defeating the English would be based on printed and written documents (Hahn). Mao Zedong was a book librarian; Mao said political power came out of the barrel of a gun, but Mao himself came out of a library. With the spread of wireless networked servers, political ferment comes out of the barrel of the cell phone and the internet chat room these days. Witness the cell phone displays of a plane hitting a tower that appear immediately after 9/11 in the Middle East, or witness the show trials of a few US and UK lower ranks who published prints of their torturing activities onto the internet: only lower ranks who published prints were arrested or tried. The control of secure servers and satellites is the new press. These days, we live in a global library of burning books – ‘burning’ in the sense that ‘print’ is now a charged silicon medium (Smith, “Intel”) which is usually made readable by connecting the chip to nuclear reactors and petrochemically-fired power stations. World resources burn as we read our screens. Men, women, children burn too, as we watch our infotainment news in comfort while ‘their’ flickering dead faces are printed in our broadcast hearths. The print we watch is not the living; it is the voodoo of the living in the blackout behind the camera, engaging the blood sacrifice of the tormented and the unfortunate. Internet texts are also ‘on fire’ in the third sense of their fragility and instability as a medium: data bases regularly ‘print’ fail-safe copies in an attempt to postpone the inevitable mechanical, chemical and electrical failure that awaits all electronic media in time. Print defines a moral position for everyone. 
In reporting conflict, in deciding to go to press or censor, any ‘print’ cannot avoid an ethical context, starting with the fact that there is a difference in power between print maker, armed perpetrators, the weak, the peaceful, the publisher, and the viewer. So many human factors attend a text, video or voice ‘print’: its very existence as an aesthetic object, even before publication and reception, speaks of unbalanced, and therefore dynamic, power relationships. For example, Graham Greene departed unscathed from all the highly dangerous battlefields he entered as a novelist: Riot-torn Germany, London Blitz, Belgian Congo, Voodoo Haiti, Vietnam, Panama, Reagan’s Washington, and mafia Europe. His texts are peopled with the injustices of the less fortunate of the twentieth century, while he himself was a member of the fortunate (if not happy) elite, as is anyone today who has the luxury of time to read Greene’s works for pleasure. Ethically a member of London and Paris’ colonizers, Greene’s best writing still electrifies, perhaps partly because he was in the same line of fire as the victims he shared bread with. In fact, Greene hoped daily that he would escape from the dreadful conflicts he fictionalized via a body bag or an urn of ashes (see Sherry). In reading an author’s biography we have one window on the ethical dimensions of authority and print. If a print’s aesthetics are sometimes enduring, its ethical relationships are always mutable. Take the stylized logo of a running athlete: four limbs bent in a rotation of action. This dynamic icon has symbolized ‘good health’ in Hindu and Buddhist culture, from Madras to Tokyo, for thousands of years. The cross of bent limbs was borrowed for the militarized health programs of 1930s Germany, and, because of what was only a brief, recent, isolated yet monstrously horrific segment of its history in print, the bent-limbed swastika is now a vilified symbol in the West. 
The sign remains ‘impressed’ differently on traditional Eastern culture, and without the taint of Nazism. Dramatic prints are emotionally charged because, in depicting Homo sapiens in danger, or passionately in love, they elicit a hormonal reaction from the reader, the viewer, or the audience. The types of emotions triggered by a print vary across the whole gamut of human chemistry. A recent study of three genres of motion picture prints shows marked differences in the hormonal responses of men compared to women when viewing a romance, an actioner, and a documentary (see Schultheiss, Wirth, and Stanton). Society is biochemically diverse in its engagement with printed culture, which raises questions about equality in the arts. Motion picture prints probably comprise around one third of internet traffic, in the form of stolen digitized movie files pirated across the globe via peer-to-peer file transfer networks (p2p), and burnt as DVD laser prints (BBC). There is also a US 40 billion dollar per annum legitimate commerce in DVD laser pressings (Grassl), which would suggest a US 80 billion per annum world total in legitimate laser disc print culture. The actively screen literate, or the ‘sliterati’ as I prefer to call them, research this world of motion picture prints via their peers, their internet information channels, their television programming, and their web forums. Most of this activity occurs outside the ambit of universities and schools. One large site of sliterate (screen literate) practice outside most schooling and official research is the net of online forums at imdb.com (International Movie Data Base). Imdb.com ‘prints’ about 25,000,000 top pages per month to client browsers. Hundreds of sliterati forums are located at imdb, including a forum for the Australian movie, Muriel’s Wedding (Hogan). 
Ten years after the release of Muriel’s Wedding, young people who are concerned with victimization and bullying still log on to http://us.imdb.com/title/tt0110598/board/ and put their thoughts into print: I still feel so bad for Muriel in the beginning of the movie, when the girls ‘dump’ her, and how much the poor girl cried and cried! Those girls were such biartches…I love how they got their comeuppance! bunniesormaybemidgets’s comment is typical of the current discussion. Muriel’s Wedding was a very popular film in its first cinema edition in Australia and elsewhere. About 30% of the entire over-14 Australian population went to see this photochemical polyester print in the cinemas on its first release. A decade on, the distributors printed a DVD laser disc edition. The story concerns Muriel (played by Toni Collette), the unemployed daughter of a corrupt, ‘police state’ politician. Muriel is bullied by her peers and she withdraws into a fantasy world, deluding herself that a white wedding will rescue her from the torments of her blighted life. Through theft and deceit (the modus operandi of her father) Muriel escapes to the entertainment industry and finds a ‘wicked’ girlfriend mentor. From a rebellious position of stubborn independence, Muriel plays out her fantasy. She gets her white wedding, before seeing both her father and her new married life as hollow shams which have goaded her abandoned mother to suicide. Redefining her life as a ‘game’ and assuming responsibility for her independence, Muriel turns her back on the mainstream, image-conscious, female gang of her oppressed youth. Muriel leaves the story, having rekindled her friendship with her rebel mentor. My methodological approach to viewing the laser disc print was to first make a more accessible, coded record of the entire movie. I was able to code and record the print in real time, using a new metalanguage (Watson, “Eyes”). 
The advantage of Coding is that it ‘thinks’ the same way as film making; it does not sidetrack the analyst into prose. The Code splits the movie print into: Vision / Action [vision graphic elements, including text] / (sound). The Coding splits the vision track into normal action and graphic elements, such as text, so this Coding is an ideal method for extracting all the text elements of a film in real time. After playing the film once, I had four and a half tightly packed pages of the coded story, including all its text elements in square brackets. Being a unique, indexed hard copy, the Coded copy allowed me immediate access to any point of the Muriel’s Wedding saga without having to search the DVD laser print. How are ‘print’ elements used in Muriel’s Wedding? Firstly, a rose-coloured monoprint of Muriel Heslop’s smiling face stares enigmatically from the plastic surface of the DVD picture disc. The print is a still photo captured from her smile as she walked down the aisle of her white wedding. In this print, Toni Collette is the Mona Lisa of Australian culture, except that fans of Muriel’s Wedding know the meaning of that smile is a magical combination of the actor’s art: the smile is both the flush of dreams come true and the frightening self deception that will kill her mother. Inserting and playing the disc, the text-dominant menu appears, and the film commences with the text-dominant opening titles. Text and titles confer a legitimacy on a work, whether it is a trade mark of the laser print owners, or the household names of stars. Text titles confer status relationships on both the presenters of the cultural artifact and the viewer who has entered into a legal license agreement with the owners of the movie. A title makes us comfortable, because the mind always seeks to name the unfamiliar, and a set of text titles does that job for us so that we can navigate the ‘tracks’ and settle into our engagement with the unfamiliar. 
The apparent ‘truth’ and ‘stability’ of printed text calms our fears and beguiles our uncertainties. Muriel attends the white wedding of a school bully bride, wearing a leopard print dress she has stolen. Muriel’s spotted wild animal print contrasts with the pure white handmade dress of the bride. In Muriel’s leopard textile print, we have the wild, rebellious, impoverished, inappropriate intrusion into the social ritual and fantasy of her high-status tormentor. An off-duty store detective recognizes the printed dress and calls the police. The police are themselves distinguished by their blue-and-white checked prints and other mechanically reproduced impressions of cultural symbols: in steel, brass, embroidery, leather and plastics. Muriel is driven in the police car past the stenciled town sign (‘Welcome To Porpoise Spit’ heads a paragraph of small print). She is delivered to her father, a politician who presides over the policing of his town. In a state where the judiciary, police and executive are hijacked by the same tyrant, Muriel’s father, Bill, pays off the police constables with a carton of legal drugs (beer) and Muriel must face her father’s wrath, which he proceeds to transfer to his detested wife. Like his daughter, the father also wears a spotted brown print costume, but his is a batik print from neighbouring Indonesia (incidentally, in a nation that takes the political status of its batik prints very seriously). Bill demands that Muriel find the receipt for the leopard print dress she claims she has purchased. The legitimate ownership of the object is enmeshed with a printed receipt, the printed evidence of trade. The law (and the paramilitary power behind the law) are legitimized, or contested, by the presence or absence of printed text. Muriel hides in her bedroom, surrounded by poster prints of the pop group ABBA. Torn-out prints of other people’s weddings adorn her mirror. 
Her face is embossed with the clown-like primary colours of the marionette as she lifts a bouquet to her chin and stares into the real time ‘print’ of her mirror image. Bill takes the opportunity of a business meeting with Japanese investors to feed his entire family at the ‘Charlie Chan’s’ restaurant. Muriel’s middle sister sloppily wears her father’s state election tee shirt, printed with the text: ‘Vote 1, Bill Heslop. You can’t stop progress.’ The text sets up two ironic gags that are paid off on the dialogue track: ‘He lost,’ we are told. ‘Progress’ turns out to be funding the concreting of a beach. Bill berates his daughter Muriel: she has no chance of becoming a printer’s apprentice and she has failed a typing course. Her dysfunction in printed text has been covered up by Bill: he has bribed the typing teacher to issue a printed diploma to his daughter. In the gambling saloon of the club, under the arrays of mechanically repeated cultural symbols lit above the poker machines (‘A’ for ace, ‘Q’ for queen, etc.), Bill’s secret girlfriend Diedre risks giving Muriel a cosmetics job. Another text icon in lights announces the surf nightclub ‘Breakers’. Tania, the newly married queen bitch who has made Muriel’s teenage years a living hell, breaks up with her husband, deciding to cash in his negotiable text documents – his Bali honeymoon tickets – and go on an island holiday with her girlfriends instead. Text documents are the enduring site of agreements between people and also the site of mutations to those agreements. Tania dumps Muriel, who sobs and sobs. Sobs are a mechanical, percussive reproduction impressed on the sound track. Returning home, we discover that Muriel’s older brother has failed a printed test and been rejected for police recruitment. There is a high incidence of print illiteracy in the Heslop family. Mrs Heslop (Jeannie Drynan), for instance, regularly has trouble at the post office.
Muriel sees a chance to escape the oppression of her family by tricking her mother into giving her a blank cheque. Here is the confluence of the legitimacy of a bank’s printed negotiable document with the risk and freedom of a blank space for rebel Muriel’s handwriting. Unable to type, her handwriting has the power to steal every cent of her father’s savings. She leaves home and spends the family’s savings at an island resort. On the island, the text print-challenged Muriel dances to a recording (sound print) of ABBA, her hand gestures emphasizing her bewigged face, which is made up in an impression of her pop idol. Her imitation of her goddesses – the ABBA women, her only hope in a real world of people who hate or avoid her – is accompanied by her goddesses’ voices singing: ‘the mystery book on the shelf is always repeating itself.’ Before jpeg and gif image downloads, we had postcard prints and snail mail. Muriel sends a postcard to her family, lying about her ‘success’ in the cosmetics business. The printed missive is clutched by her father Bill (Bill Hunter), who proclaims about his daughter, ‘you can’t type but you really impress me’. Meanwhile, on Hibiscus Island, Muriel lies under a moonlit palm tree with her newly found mentor, ‘bad girl’ Ronda (Rachel Griffiths). In this critical scene, where foolish Muriel opens her heart’s yearnings to a confidante she can finally trust, the director and DP have chosen to shoot a flat, high contrast blue filtered image. The visual result is very much like the semiabstract Japanese Ukiyo-e woodblock prints by Utamaro. This Japanese printing style informed the rise of European modern painting (Monet, Van Gogh, Picasso, etc., were all important collectors and students of Ukiyo-e prints). The above print and text elements in Muriel’s Wedding take us 27 minutes into her story, as recorded on a single page of real-time handwritten Coding.
Although not discussed here, the Coding recorded the complete film – a total of 106 minutes of text elements and main graphic elements – as four pages of Code. Referring to this Coding some weeks after it was made, I looked up the final code on page four: taxi [food of the sea] bq. Translation: a shop sign whizzes past in the film’s background, as Muriel and Ronda leave Porpoise Spit in a taxi. Over their heads the text ‘Food Of The Sea’ flashes. We are reminded that Muriel and Ronda are mermaids, fantastic creatures sprung from the brow of author PJ Hogan, and illuminated even today in the pantheon of women’s coming-of-age art works. That the movie is relevant ten years on is evidenced by the current usage of the Muriel’s Wedding online forum, an intersection of wider discussions by sliterate women on imdb.com who, like Muriel, are observers (and in some cases victims) of horrific pressure from ambitious female gangs and bullies. Text is always a minor element in a motion picture (unless it is a subtitled foreign film) and text usually whizzes by subliminally while viewing a film. By Coding the work for [text], all the text nuances made by the film makers come to light. While I have viewed Muriel’s Wedding on many occasions, it has only been in Coding it specifically for text that I have noticed that Muriel is a representative of that vast class of talented youth who are discriminated against by print (as in text) educators who cannot offer her a life-affirming identity in the English classroom. Severely depressed at school, and failing to type or get a printer’s apprenticeship, Muriel finds paid work (and hence, freedom, life, identity, independence) working in her audio visual printed medium of choice: a video store in a new city. Muriel found a sliterate admirer at the video store but she later dumped him for her fantasy man, before leaving him too. 
One of the points of conjecture on the imdb Muriel’s Wedding site is, did Muriel (in the unwritten future) get back together with admirer Brice Nobes? That we will never know. While a print forms a track that tells us where culture has been, a print cannot be the future, a print is never animate reality. At the end of any trail of prints, one must lift one’s head from the last impression, and negotiate satisfaction in the happening world.

References

Australian Broadcasting Corporation. “Memo Shows US General Approved Interrogations.” 30 Mar. 2005 <http://www.abc.net.au>.
British Broadcasting Commission. “Films ‘Fuel Online File-Sharing’.” 22 Feb. 2005 <http://news.bbc.co.uk/1/hi/technology/3890527.stm>.
Bretherton, I. “The Origins of Attachment Theory: John Bowlby and Mary Ainsworth.” 1994. 23 Jan. 2005 <http://www.psy.med.br/livros/autores/bowlby/bowlby.pdf>.
Bunniesormaybemidgets. Chat Room Comment. “What Did Those Girls Do to Rhonda?” 28 Mar. 2005 <http://us.imdb.com/title/tt0110598/board/>.
Chinese Graphic Arts Net. Mantras of the Dharani Sutra. 20 Feb. 2005 <http://www.cgan.com/english/english/cpg/engcp10.htm>.
Ewins, R. Barkcloth and the Origins of Paper. 1991. 20 Feb. 2005 <http://www.justpacific.com/pacific/papers/barkcloth~paper.html>.
Grassl, K.R. The DVD Statistical Report. 14 Mar. 2005 <http://www.corbell.com>.
Hahn, C.M. The Topic Is Paper. 20 Feb. 2005 <http://www.nystamp.org/Topic_is_paper.html>.
Harper, D. Online Etymology Dictionary. 14 Mar. 2005 <http://www.etymonline.com/>.
Mask of Zorro, The. Screenplay by J. McCulley. UA, 1920.
Muriel’s Wedding. Dir. PJ Hogan. Perf. Toni Collette, Rachel Griffiths, Bill Hunter, and Jeannie Drynan. Village Roadshow, 1994.
O’Hagan, Jack. On the Road to Gundagai. 1922. 2 Apr. 2005 <http://ingeb.org/songs/roadtogu.html>.
Poole, J.H., P.L. Tyack, A.S. Stoeger-Horwath, and S. Watwood. “Animal Behaviour: Elephants Are Capable of Vocal Learning.” Nature 24 Mar. 2005.
Sanchez, R. “Interrogation and Counter-Resistance Policy.” 14 Sept. 2003. 30 Mar. 2005 <http://www.abc.net.au>.
Schultheiss, O.C., M.M. Wirth, and S.J. Stanton. “Effects of Affiliation and Power Motivation Arousal on Salivary Progesterone and Testosterone.” Hormones and Behavior 46 (2005).
Sherry, N. The Life of Graham Greene. 3 vols. London: Jonathan Cape, 2004, 1994, 1989.
Silk Road. Printing. 2000. 20 Feb. 2005 <http://www.silk-road.com/artl/printing.shtml>.
Smith, T. “Elpida Licenses ‘DVD on a Chip’ Memory Tech.” The Register 20 Feb. 2005 <http://www.theregister.co.uk/2005/02>.
———. “Intel Boffins Build First Continuous Beam Silicon Laser.” The Register 20 Feb. 2005 <http://www.theregister.co.uk/2005/02>.
Watson, R.S. “Eyes and Ears: Dramatic Memory Slicing and Salable Media Content.” Innovation and Speculation, ed. Brad Haseman. Brisbane: QUT. [in press]
Watson, R.S. Visions. Melbourne: Curriculum Corporation, 1994.

Citation reference for this article

MLA Style
Watson, Robert. “E-Press and Oppress: Audio Visual Print Drama, Identity, Text and Motion Picture Rebellion.” M/C Journal 8.2 (2005). <http://journal.media-culture.org.au/0506/08-watson.php>.

APA Style
Watson, R. (Jun. 2005). “E-Press and Oppress: Audio Visual Print Drama, Identity, Text and Motion Picture Rebellion.” M/C Journal, 8(2). Retrieved from <http://journal.media-culture.org.au/0506/08-watson.php>.