
Journal articles on the topic 'Active Server Pages 3.0'



Consult the top 20 journal articles for your research on the topic 'Active Server Pages 3.0.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Hoofnagle, Andrew N., David Chou, and Michael L. Astion. "Online Database for Documenting Clinical Pathology Resident Education." Clinical Chemistry 53, no. 1 (2007): 134–37. http://dx.doi.org/10.1373/clinchem.2006.078550.

Abstract:
Abstract Background: Training of clinical pathologists is evolving and must now address the 6 core competencies described by the Accreditation Council for Graduate Medical Education (ACGME), which include patient care. A substantial portion of the patient care performed by the clinical pathology resident takes place while the resident is on call for the laboratory, a practice that provides the resident with clinical experience and assists the laboratory in providing quality service to clinicians in the hospital and surrounding community. Documenting the educational value of these on-call experiences and providing evidence of competence is difficult for residency directors. An online database of these calls, entered by residents and reviewed by faculty, would provide a mechanism for documenting and improving the education of clinical pathology residents. Methods: With Microsoft Access we developed an online database that uses active server pages and secure sockets layer encryption to document calls to the clinical pathology resident. Using the data collected, we evaluated the efficacy of 3 interventions aimed at improving resident education. Results: The database facilitated the documentation of more than 4 700 calls in the first 21 months it was online, provided archived resident-generated data to assist in serving clients, and demonstrated that 2 interventions aimed at improving resident education were successful. Conclusions: We have developed a secure online database, accessible from any computer with Internet access, that can be used to easily document clinical pathology resident education and competency.
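The system described here ran on classic ASP 3.0 pages in front of a Microsoft Access database served over SSL. As a loose, hypothetical analogue of the data-capture side only (written in Python with SQLite rather than the authors' ASP/Access stack, with invented table and column names, and leaving the web front end and TLS out of scope), the call-logging routine might look like this:

```python
# Hypothetical sketch of the data-capture side of an on-call log (Python/SQLite analogue,
# NOT the authors' ASP 3.0 + Microsoft Access implementation; table and column names invented).
import sqlite3
from datetime import datetime, timezone

def init_db(path="resident_calls.db"):
    con = sqlite3.connect(path)
    con.execute("""CREATE TABLE IF NOT EXISTS calls (
                       id INTEGER PRIMARY KEY AUTOINCREMENT,
                       resident TEXT NOT NULL,
                       caller_service TEXT,
                       question TEXT,
                       resolution TEXT,
                       competency TEXT,              -- e.g. one of the six ACGME competencies
                       logged_at TEXT NOT NULL)""")
    con.commit()
    return con

def log_call(con, resident, caller_service, question, resolution, competency):
    con.execute(
        "INSERT INTO calls (resident, caller_service, question, resolution, competency, logged_at) "
        "VALUES (?, ?, ?, ?, ?, ?)",
        (resident, caller_service, question, resolution, competency,
         datetime.now(timezone.utc).isoformat()))
    con.commit()

if __name__ == "__main__":
    con = init_db()
    log_call(con, "resident_a", "oncology ward", "platelet transfusion threshold?",
             "reviewed institutional guideline with the caller", "patient care")
```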
2

Dobrescu, Loretti Isabella. "Active Ageing and Solidarity between Generations in Europe: First Results from SHARE After the Economic Crisis. By Axel Bosch-Supan, Martina Brandt, Howard Litwin and Guglielmo Weber (Eds). De Gruyter, 2013, ISBN 978-3-11-029545-0, 402 pages." Journal of Pension Economics and Finance 13, no. 4 (2014): 465–67. http://dx.doi.org/10.1017/s1474747214000298.

3

Brulet, Alexandre, Guy Llorca, and Laurent Letrilliart. "Medical Wikis Dedicated to Clinical Practice: A Systematic Review." Journal of Medical Internet Research 17, no. 2 (2015): e48. http://dx.doi.org/10.2196/jmir.3574.

Abstract:
Background Wikis may give clinician communities the opportunity to build knowledge relevant to their practice. The only previous study reviewing a set of health-related wikis, without specification of purpose or audience, globally showed a poor reliability. Objective Our aim was to review medical wiki websites dedicated to clinical practices. Methods We used Google in ten languages, PubMed, Embase, Lilacs, and Web of Science to identify websites. The review included wiki sites, accessible and operating, having a topic relevant for clinical medicine, targeting physicians or medical students. Wikis were described according to their purposes, platform, management, information framework, contributions, content, and activity. Purposes were classified as “encyclopedic” or “non-encyclopedic”. The information framework quality was assessed based on the Health On the Net (HONcode) principles for collaborative websites, with additional criteria related to users’ transparency and editorial policy. From a sample of five articles per wikis, we assessed the readability using the Flesch test and compared articles according to the wikis’ main purpose. Annual editorial activities were estimated using the Google engine. Results Among 25 wikis included, 11 aimed at building an encyclopedia, five a textbook, three lessons, two oncology protocols, one a single article, and three at reporting clinical cases. Sixteen wikis were specialized with specific themes or disciplines. Fifteen wikis were using MediaWiki software as-is, three were hosted by online wiki farms, and seven were purpose-built. Except for one MediaWiki-based site, only purpose-built platforms managed detailed user disclosures. The owners were ten organizations, six individuals, four private companies, two universities, two scientific societies, and one unknown. Among 21 open communities, 10 required users’ credentials to give editing rights. The median information framework quality score was 6 out of 16 (range 0-15). Beyond this score, only one wiki had standardized peer-reviews. Physicians contributed to 22 wikis, medical learners to nine, and lay persons to four. Among 116 sampled articles, those from encyclopedic wikis had more videos, pictures, and external resources, whereas others had more posology details and better readability. The median creation year was 2007 (1997-2011), the median number of content pages was 620.5 (3-98,039), the median of revisions per article was 17.7 (3.6-180.5) and 0.015 of talk pages per article (0-0.42). Five wikis were particularly active, whereas six were declining. Two wikis have been discontinued after the completion of the study. Conclusions The 25 medical wikis we studied present various limitations in their format, management, and collaborative features. Professional medical wikis may be improved by using clinical cases, developing more detailed transparency and editorial policies, and involving postgraduate and continuing medical education learners.
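The review scores article readability with the Flesch test. For orientation, a minimal sketch of the standard Flesch reading-ease formula is below; the word, sentence, and syllable counts must be supplied by the caller, since the authors do not describe which counter they used, and the example numbers are purely illustrative:

```python
# Standard Flesch reading-ease formula; higher scores indicate easier text.
# The counts must be supplied by the caller -- the review does not specify its tooling.
def flesch_reading_ease(total_words, total_sentences, total_syllables):
    words_per_sentence = total_words / total_sentences
    syllables_per_word = total_syllables / total_words
    return 206.835 - 1.015 * words_per_sentence - 84.6 * syllables_per_word

# Example: a 120-word, 8-sentence excerpt with 180 syllables (illustrative numbers).
print(round(flesch_reading_ease(120, 8, 180), 1))
```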
4

Jeffers, S. N., G. Schnabel, and J. P. Smith. "First Report of Resistance to Mefenoxam in Phytophthora cactorum in the United States and Elsewhere." Plant Disease 88, no. 5 (2004): 576. http://dx.doi.org/10.1094/pdis.2004.88.5.576a.

Abstract:
Phytophthora cactorum causes crown rot of strawberry (Fragaria × ananassa) (2), a disease that has been particularly severe during the last 5 years in the southeastern United States. In the fall of 2001, strawberry plants (cv. Camarosa) in a field in Lexington County, South Carolina exhibited typical crown rot symptoms (2) 1 to 2 weeks after transplanting, even though plants had been drenched with mefenoxam (Ridomil Gold; Syngenta Crop Protection, Greensboro, NC) immediately after transplanting. Initially, we observed leaves that had marginal necrosis, were smaller than normal, and were discolored. Soon after, diseased plants appeared stunted and unthrifty compared with other plants in the field, and some of these plants eventually wilted and died. Severely affected plants had necrotic roots and decayed crowns. Ten symptomatic plants were collected for isolation. In the laboratory, root and crown tissues were rinsed in running tap water and blotted dry, small pieces of necrotic tissue were placed aseptically on PAR-V8 selective medium (1), and isolation plates were placed at 20°C in the dark for up to 7 days. P. cactorum was recovered from six plants. Isolates produced characteristic asexual and sexual structures directly on the isolation plates (i.e., papillate sporangia on sympodial sporangiophores and oospores with paragynous antheridia) (2). A single hypha of an isolate from each plant was transferred to fresh PAR-V8, and pure cultures were stored on cornmeal agar in glass vials at 15°C in the dark. All six isolates from the Lexington County field and nine other isolates of P. cactorum from strawberry (three from South Carolina, three from North Carolina, and three from Florida) were tested for sensitivity to mefenoxam on fungicide-amended medium. Mefenoxam was added to 10% clarified V8 juice agar (cV8A) after autoclaving so the concentration in the medium was 100 ppm. Agar plugs from active colonies were transferred to mefenoxam-amended and nonamended cV8A (three replicates per treatment), plates were placed at 25°C in the dark for 3 days, and linear mycelium growth was measured. All six isolates from Lexington County were highly resistant to mefenoxam with mycelium growth relatively unrestricted on mefenoxam-amended medium (73 to 89% of that on nonamended medium). In comparison, the other nine isolates were sensitive to mefenoxam with mycelium growth severely restricted by 100 ppm of mefenoxam (0 to 7% of that on nonamended medium). To our knowledge, this is the first report of mefenoxam resistance in P. cactorum on strawberry or any other crop in the United States and elsewhere. Because mefenoxam is the primary fungicide used to manage Phytophthora crown rot in the southeastern United States, resistance may limit use of this fungicide in strawberry production. References: (1) A. J. Ferguson and S. N. Jeffers. Plant Dis. 83:1129, 1999. (2) E. Seemüller. Crown rot. Pages 50–51 in: Compendium of Strawberry Diseases, 2nd ed. J. L. Maas, ed. The American Phytopathological Society, St. Paul, MN, 1998.
5

Farooqui, Mohammed, Janet Valdez, Susan Soto, et al. "Single Agent Ibrutinib in CLL/SLL Patients with and without Deletion 17p." Blood 126, no. 23 (2015): 2937. http://dx.doi.org/10.1182/blood.v126.23.2937.2937.

Abstract:
Abstract INTRODUCTION: Ibrutinib is FDA approved for patients (pts) with CLL who are previously treated or have deletion (del) 17p. Data on depth and durability of response beyond the first 2 years of therapy are limited. Here we compare the response of pts with and without del17p with long follow-up(f/u). PATIENTS AND METHODS: This investigator-initiated phase II trial (NCT01500733) enrolled 86 pts (Cohort 1: no del17p, n=35; Cohort 2: del17p n=51). Both treatment (tx) naïve (TN) and relapsed refractory (R/R) pts with active disease were eligible. Response was assessed by computed tomography (CT), physical exam, bone marrow (BM) biopsy, and routine clinical and laboratory studies. Spleen volume (SV) was calculated from CT scans using a General Electric Advanced Workstation Server. Eight color flow cytometry (FC) on peripheral blood (PB) and BM was performed at yearly intervals. Wilcoxon rank-sum test was used to examine the differences between the two cohorts. RESULTS: Median f/u for all pts currently on study was 36 months (mo). Among pts with no del17p 24 (69%) pts completed 2 years(y) and 12 (34%) pts completed 3y. 33 (65%) pts with del17p completed 2y and 18 (35%) pts completed 3y. Median age was 66y (33-85) and 70% had Rai stage III/IV. Most adverse events were grade ≤2, most commonly (>25%) diarrhea, nail ridging, arthralgias, rash, bruising, and cramps. Tx-related non hematologic toxicities grade ≥3 occurred in <15%, and grade ≥3 infections or cytopenias were reported <25% of pts. The estimated progression free survival and overall survival at 36 mo is 82% and 88%. A total of 81 pts (n=33 (no del17p), n=48 (del17p)) were evaluable for response (2 enrollment deviations, 1 malignancy, 2 deaths before 6 mo). 5 (6%) deaths occurred on study (4 infections not tx-related, 1 possibly tx-related sudden death). 2 (2%) pts were primary refractory and 7 (8%) pts had progressive disease (PD) after initial response (3 CLL, 2 PLL, 2 Richter's transformation). Best response was complete response (CR) in 17 (21%) pts, partial response (PR) in 60 (74%), and stable disease (SD) and progressive disease (PD) each in 2 (2%) pts. Median time to best response was 2y. Responses for TN vs R/R pts were: 20 vs 23% CR, 78% vs 68% PR, 0% vs 6% SD, 2% vs 3% PD, and for no del17p vs del17p: 21% vs 21% CR, 73% vs 75% PR, 3% vs 2% each with SD and PD. There were no statistically significant differences in response rates by prior tx or del17p status. Disease control in all evaluable tissue sites were compared between the two cohorts based on del17p status (no del17p vs del17p) (Table): median reduction in lymphadenopathy 89% (59-100) vs 82% (22-100), median reduction in SV 94% (26-100) vs 98% (37-100), median reduction in tumor infiltration in the BM was 90% (11-99) vs 88% (28-100), and ALC response showed a median reduction of 97% (range: +44% to -99%) and 95% (range: +119% to -99%). Table. Median tumor reduction at best response. Compartment All pts NO DEL17p DEL 17p P Nodal 85% (n=79) 89% (n=32) 82% (n=47) 0.01 Spleen 98% (n=67) 94% (n=28) 98% (n=39) 0.3 BM 89% (n=73) 90% (n=31) 88% (n=42) 0.6 ALC 96% (n=81) 97% (n=33) 95% (n=48) 0.16 We quantified depth of response by measuring the degree of flow cytometric minimal residual disease (MRD) (CLL % of leukocytes). The median MRD at 1y (n=51), 2y (n=46), and 3y (n=24) in PB was 41%, 12%, and 8% respectively. Median MRD values at 1y (n=17), 2y (n=32), and 3y (n=15) in BM was 12%, 8%, and 7%. There was no significant difference in MRD levels in pts by prior tx status. 
However, pts with no del17p tended to have less residual disease measured by FC than pts with del17p. For example, at 2y MRD levels in PB were 6.4% vs 16% and in BM 4.7% vs 11.1%, respectively (P = 0.02). At 3y one patient with no del17p achieved near MRD negativity in PB at 0.018%, and MRD negativity in the BM at 0.007%. CONCLUSION: With continued therapy 95% of pts achieved a response by iwCLL criteria and the depth of response improved with 21% CRs irrespective of the presence of del17p. However, all patients remained MRD positive. With a median f/u of 36 mo, responses were durable in the majority of pts, with secondary resistance developing in 8% of pts. Research supported by the Intramural Research Program of NHLBI. We thank our patients for participation. We acknowledge Pharmacyclics for providing study drug. Disclosures Off Label Use: Ibrutinib is approved for CLL patients with relapsed refractory disease and patients with deletion 17p CLL. Some of the patients on this abstract from this clinical trial are treatment naive CLL patients without 17p deletion. Wiestner: Pharmacyclics: Research Funding.
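The cohort comparisons above use the Wilcoxon rank-sum test. A minimal sketch of such a two-group comparison with SciPy follows; the arrays are illustrative placeholders, not values from the trial:

```python
# Two-sample Wilcoxon rank-sum comparison, as used for the del17p vs no-del17p cohorts.
# The arrays below are illustrative placeholders, not data from the study.
from scipy.stats import ranksums

no_del17p_reduction = [89, 85, 92, 78, 88, 95, 81]   # e.g. % nodal reduction per patient
del17p_reduction    = [82, 75, 80, 88, 70, 79, 84]

stat, p_value = ranksums(no_del17p_reduction, del17p_reduction)
print(f"rank-sum statistic = {stat:.2f}, p = {p_value:.3f}")
```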
6

Irvine, R. "ProlIferation." Journal of Cell Science 113, no. 21 (2000): 3683–84. http://dx.doi.org/10.1242/jcs.113.21.3683a.

Abstract:
Biology of Phosphoinositides edited by Shamshad Cockroft Oxford University Press (2000) pp. 341. ISBN 0–19-963764-4 32.50 I can remember in the early days of the inositide explosion (around the mid-80s) we used to discuss over a few beers the idea of signing some kind of ‘non-proliferation treaty’ because the number of inositol phosphates seemed to be getting a little out of control. In the end we decided that because mathematically there were only 63 possibilities, it couldn't get much worse, so there wasn't any point in worrying. That turns out to have been a false security because there were only two polyphosphoinositol lipids then, and now there are seven - all the seven species feasible by phosphorylating the 3, 4 and 5 hydroxyls on phosphatidylinositol (PI) - and we also hadn't reckoned on the emerging complexities of the glycosyl-PIs, and of course the possibility that you could actually cram more than six phosphates on an inositol ring (as in the pyrophosphate-containing IP7 and IP8) would have been dismissed as absurd. Now, bemused as we are by the ever-increasing number of players, we face an even more awesome proliferation of functions. Phosphatidylinositol 4, 5-bisphosphate (PI45P2) is the chief offender here, as it seems to be cropping up in every remote corner of eukaryotic cells controlling a bewildering array of functions, but the 3-phosphorylated lipids are not much more well-behaved, and what all this means in the present context is that putting together a book on ‘The Biology of Phosphoinositides’ that can be contained in one volume has become impossible. There has to be a selection process, and Shamshad Cockcroft has done an admirable job in choosing the areas that are most topical, with a sufficiently broad sweep that almost anyone will find at least one of the chapters useful. But this very same expansion and proliferation makes equally difficult the choice of what to cover in a single chapter, and I found that the most successful chapters in this book are those which focus on either a very new (and thus restricted) part of the field, or simply a part that is easy to draw a boundary around. Thus, for example, Harald Stenmark's chapter deals only with PI3P and membrane trafficking, and thus it covers every reference that there is on the subject, but is still short enough and clear enough to be a good read. Steve Shears' chapter on inositol 3,4,5,6-tetrakisphosphate succeeds for exactly the same reason (and thank goodness there is one chapter on the inositol phosphates; all the rest are on the lipids where so much of the action has been in the 1990s - are the 2000s going to be the decade of the inositol phosphates?). Some efforts on larger areas are fine attempts, but can suffer from being just too full of information. For example, the simply heroic chapter by Len Stephens and his colleagues on all the rest of the 3-phosphorylated lipids that Harald didn't cover, runs to 60 pages plus 408 references, and I must admit I was reaching for the aspirin bottle somewhere in the middle. Also, some of the chapters perhaps suffer a bit in competition with reviews in journals - now there is a real proliferation problem (see below) - and with some of them I felt a strong sense of deja revu, whatever the merits of the individual chapter (e.g. those by Sue-Goo Rhee and colleagues on PI-PLC regulation, by Flanagan and Janmey on the cytoskeleton, and by Woscholski and Parker on the inositide phosphatases). 
I realise at this point that I am raising a much more general question about the need (or not) for books of this ilk in the 21st century. This book is a collection of reviews about selected aspects of a particularly active area of research. However, almost all journals now publish reviews, and we also have tools such as ISI, Medline etc that enable us to find all these up-to-date reviews on whatever topic we want at the push of a computer key. (ABSTRACT TRUNCATED)
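The "63 possibilities" mentioned in the review follows from simple counting, on my reading: myo-inositol has six hydroxyl positions, each of which is either phosphorylated or not, which gives 2^6 combinations; excluding the unphosphorylated parent leaves 63 distinct phosphorylation patterns:

```latex
\sum_{k=1}^{6} \binom{6}{k} = 2^{6} - 1 = 63
```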
7

Campanaro, F., A. Zaffaroni, A. Batticciotto, A. Cappelli, M. P. Donadini, and A. Squizzato. "AB0565 JAK INHIBITORS AND PSORIATIC ARTHRITIS: A SYSTEMATIC REVIEW AND META-ANALYSIS." Annals of the Rheumatic Diseases 80, Suppl 1 (2021): 1319–20. http://dx.doi.org/10.1136/annrheumdis-2021-eular.3671.

Abstract:
Background:Despite the therapeutic armamentarium for the treatment of psoriatic arthritis (PsA) has considerably expanded over the last thirty years, there is a huge necessity of finding effective drugs for this disease. JAK inhibitors (JAKi) are small molecules able to interfere with the JAK/STAT pathway, involved in the pathogenesis of PsA (1). Up to now Tofacitinib is the only JAKi approved by the European Medicines Agency (EMA) for the treatment of PsA but in the next few years the number of approved JAKi is expected to rise significantly.Objectives:To assess the efficacy and safety of different JAKi for the treatment of PsA.Methods:A systematic review of the literature was performed to identify randomized controlled trials (RCTs), by electronic search of MEDLINE and EMBASE database until October 2020. Studies were considered eligible if they met the following criteria: I) study was a RCT; II) only patients with PsA were included; III) JAKi was compared to placebo in addition to the standard of care. Two reviewers (FC and AZ) performed study selection, with disagreements solved by the opinion of an expert reviewer (AS). The outcomes were expressed as odds ratio (OR) and 95% confidence intervals (95% CI). Statistical heterogeneity was assessed with the I2 statistic.Results:We identified 557 potentially relevant studies. A total of 554 studies were excluded based on title and/or abstract screening. Three RCTs for a total of 947 PsA patients treated with JAKi were included (2,3,4). Two were phase III studies on the efficacy and safety of Tofacitinib (OPAL Beyond and OPAL Broaden) and one was a phase II study on Filgotinib (Equator). All three studies were judged at low risk of bias according to Cochrane criteria (5). The primary efficacy outcome in all the studies was the number of patients who achieved the response rate of the American College of Rheumatology 20 score (ACR20). The outcomes evaluation was performed at 12 week for the Filgotinib trial and at 16 week for the Tofacitinib trials. We used for the main analyses the group of patients randomized to Tofacitinib 5 mg because this is the only dosage approved by the EMA for the treatment of PsA. JAKi showed a significantly higher ACR20 response rate compared to placebo (OR 3.54, 95% CI 1.76 - 7.09, I^2 = 74%). JAKi also showed a significantly higher ACR50 response rate (OR 3.36, 95% CI 2.22 - 5.09, I^2 = 0%), ACR70 response rate (OR 2.82, 95% CI 1.67 - 4.76, I^2 = 20%), PsARC response rate (OR 2.67, 95% CI 1.26 - 5.65, I^2 = 79%), PASI75 response rate (OR 3.15, 95% CI 1.61 - 6.15, I^2 = 45%) compared to placebo. JAKi were also associated with significantly better HAQ-DI (mean difference -0.23 95% CI -0.31 - -0.14) and fatigue, measured with FACIT-F (mean difference 3.54 95% CI 2.13 - 4.94). JAKi compared to placebo were associated with a non-statistically significant different risk of serious adverse events (OR 0.56, 95% CI 0.11 - 2.91, I^2 = 38%).Conclusion:This is the first published systematic review that performed a comprehensive and simultaneous evaluation of the efficacy and safety of JAKi for PsA in RCTs. Our analysis suggests a statistically significant benefit of JAKi, that appears to be effective and safe over placebo. The impact of these data on international clinical guidelines needs further investigation.References:[1]George E Fragoulis, et al. JAK-inhibitors. 
New players in the field of immune-mediated diseases, beyond rheumatoid arthritis, Rheumatology, Volume 58, Issue Supplement_1, February 2019, Pages i43–i54[2]Mease P, et al. Tofacitinib or adalimumab versus placebo for psoriatic arthritis. N Engl J Med 2017; 377: 1537-50.[3]Gladman D, et al. Tofacitinib for psoriatic arthritis in patients with an inadequate response to TNF inhibitors. N Engl J Med 2017; 377: 1525-36.[4]Mease P, et al. Efficacy and safety of filgotinib, a selective Janus kinase 1 inhibitor, in patients with active psoriatic arthritis (EQUATOR): results from a randomised, placebo-controlled, phase 2 trial. Lancet 2018;392:2367–77.[5]Higgins JP, et Al. Measuring inconsistency in meta-analyses. BMJ 2003;327:557-560Figure 1.ACR20 response rate of Jaki over PlaceboDisclosure of Interests:None declared.
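The pooled odds ratios, confidence intervals, and I² above come from standard meta-analytic pooling. As a generic illustration only (a fixed-effect, inverse-variance sketch with placeholder study values; the authors' model choice and trial data are not reproduced here), the arithmetic looks like this:

```python
# Generic inverse-variance pooling of log odds ratios with an I^2 heterogeneity estimate.
# The three (OR, 95% CI) tuples are placeholders, not the ACR20 results of the trials above.
import math

studies = [(3.1, 1.9, 5.1), (4.2, 2.4, 7.3), (2.6, 1.2, 5.6)]  # (OR, CI_low, CI_high)

ys, ws = [], []
for odds_ratio, lo, hi in studies:
    y = math.log(odds_ratio)
    se = (math.log(hi) - math.log(lo)) / (2 * 1.96)   # back out the SE from the 95% CI
    ys.append(y)
    ws.append(1.0 / se**2)

pooled = sum(w * y for w, y in zip(ws, ys)) / sum(ws)
se_pooled = math.sqrt(1.0 / sum(ws))
q = sum(w * (y - pooled) ** 2 for w, y in zip(ws, ys))
i_squared = max(0.0, (q - (len(studies) - 1)) / q) * 100 if q > 0 else 0.0

print(f"pooled OR = {math.exp(pooled):.2f} "
      f"(95% CI {math.exp(pooled - 1.96 * se_pooled):.2f}-{math.exp(pooled + 1.96 * se_pooled):.2f}), "
      f"I^2 = {i_squared:.0f}%")
```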
8

Blunt, Danielle N., Richard A. Wells, Martha Lenis, et al. "Health Related Quality of Life Remains Stable over Time in Myelodysplastic Syndrome: An MDS-CAN Prospective Study." Blood 132, Supplement 1 (2018): 4850. http://dx.doi.org/10.1182/blood-2018-99-114327.

Abstract:
Abstract Background: Health-Related Quality of life (HRQoL) is diminished in patients with myelodysplastic syndrome (MDS). We have previously shown that HRQoL remains stable over time and low hemoglobin, transfusion dependence (TD) and age > 65 years impact QoL1. Here, we present an updated larger data set with longer follow up and consider the impact of baseline characteristics and treatments received on patient-related outcomes. Methods: MDS-CAN is a prospective database active in 15 centers across Canada, enrolling patients since April 2012. In addition to disease and patient-related characteristics, we measure HRQoL at baseline and every 6 months using the EORTC-QLQ-C30, EQ-5D, and a global fatigue scale (GFS). We examined the impact of disease related factors (IPSS, IPSS-R, karyotype, TD), patient factors (ECOG, age, gender, co-morbidity (Charlson index), frailty (Rockwood scale), disability (Lawton-Brody Independent Activities of Daily Living), and treatments received at any time (azacitidine (AZA), lenalidomide, erythropoietin-stimulating agents (ESA), iron chelation) on QoL scores. AZA-treated patients were divided into responders (where documented) or deriving benefit (if > 6 cycles) vs. non-responders. Wilcoxon rank-sum or Kruskal-Wallis nonparametric tests were used to compare scores among subgroups. Changes in QoL were assessed with a linear mixed model to account for time- dependent covariates such as TD, risk scores and treatment. Results: 594 patients were enrolled a median of 2.2 months post diagnosis (IQR: 0.8, 4.8) with a median age of 73 years , 63% male gender and performance status (ECOG) of 0-1 in 90%. IPSS scores were low/int-1 in 73% and IPSS-R scores were very low (9%), low (30%), intermediate (27%), high (20%) and very high (14% of patients). 31% were transfusion dependent at enrolment. Treatments received at any time included AZA (38%), lenalidomide (9.8%), ESA (35%) and iron chelation (12%). At a median follow up of 17 months, 329 patients (55%) died with cause of death reported as AML in 22%. Baseline assessment: Mean EQ-5D global score for the cohort was 0.75 ± 0.25 and did not significantly change over time (Figure 1). Patients with high IPSS, high/very high IPSS-R, TD, lower hemoglobin, higher ECOG, increased comorbidity, frailty and disability were more likely to have lower EQ-5D/QLQ C30 scores (inferior QoL) and higher fatigue (GFS). Age was not significantly related to QoL. Interestingly, female gender was associated with inferior QoL by EQ-5D and GFS (Figure 2). Patients scoring in the lowest quartiles for physical performance tests (grip, 4 metre walk and 10x chair sit-stand tests) also had inferior QoL scores. QoL over time: By linear mixed modelling, we did not find significant differences in QoL over time in patients treated with or without AZA, lenalidomide, or ESAs measured by the EQ-5D instrument. Iron chelation was associated with lower scores (p=0.003) although this may simply be a surrogate for transfusion dependence which is associated with inferior QoL. AZA responding/deriving benefit patients had higher QoL scores from baseline and decreased fatigue compared with those not responding or not deriving benefit (Figure 3) measured by the QLQ-C30 and GFS instruments. Patients with the highest IPSS/IPSS-R risk groups had significantly inferior QoL over time. In conclusion, this study demonstrates that HRQoL remains fairly stable over time in MDS and implementation of treatment is not at the detriment of patient related outcomes. 
Patients treated with AZA who respond or remain on drug for > 6 months maintain higher QoL scores over time. Disease (IPSS, IPSS-R, hemoglobin, transfusion dependence) and patient-related factors (ECOG, gender, comorbidities, disability, frailty) are associated with reduced HRQoL. The prospective assessment of QoL using a validated MDS-specific QoL instrument (QUALMS) and disease course is underway. 1 Buckstein, R., Alibhai, S.M., Lam, A., et al. The health-related quality of life of MDS patients is impaired and most predicted by transfusion dependence, hemoglobin and age. Leukemia Research. May 2011 Vol 35, Supplement 1, Pages S55-56. Disclosures Wells: Alexion Pharmaceuticals, Inc.: Honoraria, Other: Travel Support , Research Funding; Novartis: Honoraria; Celgene: Honoraria, Membership on an entity's Board of Directors or advisory committees. Geddes:Alexion: Membership on an entity's Board of Directors or advisory committees; Novartis: Membership on an entity's Board of Directors or advisory committees; Celgene: Membership on an entity's Board of Directors or advisory committees, Research Funding. Zhu:Janssen: Consultancy; Novartis: Consultancy; Celgene: Consultancy, Research Funding. Sabloff:Celgene: Membership on an entity's Board of Directors or advisory committees. Keating:Bayer: Honoraria, Membership on an entity's Board of Directors or advisory committees. Leber:Novartis Canada: Honoraria, Membership on an entity's Board of Directors or advisory committees; Novartis Canada: Honoraria, Membership on an entity's Board of Directors or advisory committees. Leitch:Novartis: Honoraria, Research Funding, Speakers Bureau; Celgene: Honoraria, Research Funding; Alexion: Honoraria, Research Funding; AbbVie: Research Funding. Yee:Celgene, Novartis, Otsuka: Membership on an entity's Board of Directors or advisory committees; Agensys, Astex, GSK, Onconova, Genentech/Roche: Research Funding. St-Hilaire:Novartis: Membership on an entity's Board of Directors or advisory committees; Celgene: Honoraria, Membership on an entity's Board of Directors or advisory committees. Shamy:Amgen: Membership on an entity's Board of Directors or advisory committees; Novartis: Membership on an entity's Board of Directors or advisory committees; Celgene: Membership on an entity's Board of Directors or advisory committees. Elemary:Roche: Membership on an entity's Board of Directors or advisory committees; Lundbeck: Membership on an entity's Board of Directors or advisory committees; Amgen: Membership on an entity's Board of Directors or advisory committees; Celgene: Membership on an entity's Board of Directors or advisory committees, Research Funding. Delage:Celgene: Membership on an entity's Board of Directors or advisory committees; AbbVie: Research Funding; Roche: Membership on an entity's Board of Directors or advisory committees, Research Funding; Novartis: Membership on an entity's Board of Directors or advisory committees, Research Funding; Pfizer: Research Funding; BMS: Membership on an entity's Board of Directors or advisory committees, Research Funding. 
Rockwood:Pfizer: Research Funding; Lundbeck: Membership on an entity's Board of Directors or advisory committees; CHIR: Research Funding; Nova Scotia Health research foundation: Research Funding; Sanofi: Research Funding; Capital Health research support: Research Funding; Canadian consortium on neurodegeneration in aging and nutricia: Membership on an entity's Board of Directors or advisory committees; Alzheimer Society of Canada: Research Funding; Foundation Family Fund: Research Funding. Banerji:Teva: Other: Unrestricted grant received in the past; Gilead: Other: Unrestricted grant received in the past; Abbvie: Other: Unrestricted grant received in the past; Roche: Other: Unrestricted grant received in the past; Janssen: Other: Unrestricted grant received in the past. Buckstein:Celgene: Honoraria, Membership on an entity's Board of Directors or advisory committees, Research Funding.
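The longitudinal QoL analysis above fits a linear mixed model with time-dependent covariates. A minimal, hypothetical sketch of that kind of random-intercept model in Python with statsmodels follows; the variable names and simulated data are invented and do not reflect the MDS-CAN specification or dataset:

```python
# Hypothetical linear mixed model for repeated EQ-5D scores, with patient as the grouping
# (random-intercept) factor. All names and simulated values below are invented for illustration.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_patients, visits = 40, 4
df = pd.DataFrame({
    "patient_id": np.repeat(np.arange(n_patients), visits),
    "months": np.tile([0, 6, 12, 18], n_patients),
    "transfusion_dependent": np.repeat(rng.integers(0, 2, n_patients), visits),
})
patient_effect = np.repeat(rng.normal(0, 0.08, n_patients), visits)
df["eq5d"] = (0.78 - 0.001 * df["months"] - 0.10 * df["transfusion_dependent"]
              + patient_effect + rng.normal(0, 0.05, len(df)))

model = smf.mixedlm("eq5d ~ months + transfusion_dependent", data=df, groups=df["patient_id"])
result = model.fit()
print(result.summary())
```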
9

Xing, Fei, Yi Ping Yao, Zhi Wen Jiang, and Bing Wang. "Fine-Grained Parallel and Distributed Spatial Stochastic Simulation of Biological Reactions." Advanced Materials Research 345 (September 2011): 104–12. http://dx.doi.org/10.4028/www.scientific.net/amr.345.104.

Abstract:
To date, discrete event stochastic simulations of large scale biological reaction systems are extremely compute-intensive and time-consuming. Besides, it has been widely accepted that spatial factor plays a critical role in the dynamics of most biological reaction systems. The NSM (the Next Sub-Volume Method), a spatial variation of the Gillespie’s stochastic simulation algorithm (SSA), has been proposed for spatially stochastic simulation of those systems. While being able to explore high degree of parallelism in systems, NSM is inherently sequential, which still suffers from the problem of low simulation speed. Fine-grained parallel execution is an elegant way to speed up sequential simulations. Thus, based on the discrete event simulation framework JAMES II, we design and implement a PDES (Parallel Discrete Event Simulation) TW (time warp) simulator to enable the fine-grained parallel execution of spatial stochastic simulations of biological reaction systems using the ANSM (the Abstract NSM), a parallel variation of the NSM. The simulation results of classical Lotka-Volterra biological reaction system show that our time warp simulator obtains remarkable parallel speed-up against sequential execution of the NSM.I.IntroductionThe goal of Systems biology is to obtain system-level investigations of the structure and behavior of biological reaction systems by integrating biology with system theory, mathematics and computer science [1][3], since the isolated knowledge of parts can not explain the dynamics of a whole system. As the complement of “wet-lab” experiments, stochastic simulation, being called the “dry-computational” experiment, plays a more and more important role in computing systems biology [2]. Among many methods explored in systems biology, discrete event stochastic simulation is of greatly importance [4][5][6], since a great number of researches have present that stochasticity or “noise” have a crucial effect on the dynamics of small population biological reaction systems [4][7]. Furthermore, recent research shows that the stochasticity is not only important in biological reaction systems with small population but also in some moderate/large population systems [7].To date, Gillespie’s SSA [8] is widely considered to be the most accurate way to capture the dynamics of biological reaction systems instead of traditional mathematical method [5][9]. However, SSA-based stochastic simulation is confronted with two main challenges: Firstly, this type of simulation is extremely time-consuming, since when the types of species and the number of reactions in the biological system are large, SSA requires a huge amount of steps to sample these reactions; Secondly, the assumption that the systems are spatially homogeneous or well-stirred is hardly met in most real biological systems and spatial factors play a key role in the behaviors of most real biological systems [19][20][21][22][23][24]. The next sub-volume method (NSM) [18], presents us an elegant way to access the special problem via domain partition. To our disappointment, sequential stochastic simulation with the NSM is still very time-consuming, and additionally introduced diffusion among neighbor sub-volumes makes things worse. Whereas, the NSM explores a very high degree of parallelism among sub-volumes, and parallelization has been widely accepted as the most meaningful way to tackle the performance bottleneck of sequential simulations [26][27]. 
Thus, adapting parallel discrete event simulation (PDES) techniques to discrete event stochastic simulation would be particularly promising. Although there are a few attempts have been conducted [29][30][31], research in this filed is still in its infancy and many issues are in need of further discussion. The next section of the paper presents the background and related work in this domain. In section III, we give the details of design and implementation of model interfaces of LP paradigm and the time warp simulator based on the discrete event simulation framework JAMES II; the benchmark model and experiment results are shown in Section IV; in the last section, we conclude the paper with some future work.II. Background and Related WorkA. Parallel Discrete Event Simulation (PDES)The notion Logical Process (LP) is introduced to PDES as the abstract of the physical process [26], where a system consisting of many physical processes is usually modeled by a set of LP. LP is regarded as the smallest unit that can be executed in PDES and each LP holds a sub-partition of the whole system’s state variables as its private ones. When a LP processes an event, it can only modify the state variables of its own. If one LP needs to modify one of its neighbors’ state variables, it has to schedule an event to the target neighbor. That is to say event message exchanging is the only way that LPs interact with each other. Because of the data dependences or interactions among LPs, synchronization protocols have to be introduced to PDES to guarantee the so-called local causality constraint (LCC) [26]. By now, there are a larger number of synchronization algorithms have been proposed, e.g. the null-message [26], the time warp (TW) [32], breath time warp (BTW) [33] and etc. According to whether can events of LPs be processed optimistically, they are generally divided into two types: conservative algorithms and optimistic algorithms. However, Dematté and Mazza have theoretically pointed out the disadvantages of pure conservative parallel simulation for biochemical reaction systems [31]. B. NSM and ANSM The NSM is a spatial variation of Gillespie’ SSA, which integrates the direct method (DM) [8] with the next reaction method (NRM) [25]. The NSM presents us a pretty good way to tackle the aspect of space in biological systems by partitioning a spatially inhomogeneous system into many much more smaller “homogeneous” ones, which can be simulated by SSA separately. However, the NSM is inherently combined with the sequential semantics, and all sub-volumes share one common data structure for events or messages. Thus, directly parallelization of the NSM may be confronted with the so-called boundary problem and high costs of synchronously accessing the common data structure [29]. In order to obtain higher efficiency of parallel simulation, parallelization of NSM has to firstly free the NSM from the sequential semantics and secondly partition the shared data structure into many “parallel” ones. One of these is the abstract next sub-volume method (ANSM) [30]. In the ANSM, each sub-volume is modeled by a logical process (LP) based on the LP paradigm of PDES, where each LP held its own event queue and state variables (see Fig. 1). In addition, the so-called retraction mechanism was introduced in the ANSM too (see algorithm 1). Besides, based on the ANSM, Wang etc. [30] have experimentally tested the performance of several PDES algorithms in the platform called YH-SUPE [27]. 
However, their platform is designed for general simulation applications, thus it would sacrifice some performance for being not able to take into account the characteristics of biological reaction systems. Using the similar ideas of the ANSM, Dematté and Mazza have designed and realized an optimistic simulator. However, they processed events in time-stepped manner, which would lose a specific degree of precisions compared with the discrete event manner, and it is very hard to transfer a time-stepped simulation to a discrete event one. In addition, Jeschke etc.[29] have designed and implemented a dynamic time-window simulator to execution the NSM in parallel on the grid computing environment, however, they paid main attention on the analysis of communication costs and determining a better size of the time-window.Fig. 1: the variations from SSA to NSM and from NSM to ANSMC. JAMES II JAMES II is an open source discrete event simulation experiment framework developed by the University of Rostock in Germany. It focuses on high flexibility and scalability [11][13]. Based on the plug-in scheme [12], each function of JAMES II is defined as a specific plug-in type, and all plug-in types and plug-ins are declared in XML-files [13]. Combined with the factory method pattern JAMES II innovatively split up the model and simulator, which makes JAMES II is very flexible to add and reuse both of models and simulators. In addition, JAMES II supports various types of modelling formalisms, e.g. cellular automata, discrete event system specification (DEVS), SpacePi, StochasticPi and etc.[14]. Besides, a well-defined simulator selection mechanism is designed and developed in JAMES II, which can not only automatically choose the proper simulators according to the modeling formalism but also pick out a specific simulator from a serious of simulators supporting the same modeling formalism according to the user settings [15].III. The Model Interface and SimulatorAs we have mentioned in section II (part C), model and simulator are split up into two separate parts. Thus, in this section, we introduce the designation and implementation of model interface of LP paradigm and more importantly the time warp simulator.A. The Mod Interface of LP ParadigmJAMES II provides abstract model interfaces for different modeling formalism, based on which Wang etc. have designed and implemented model interface of LP paradigm[16]. However, this interface is not scalable well for parallel and distributed simulation of larger scale systems. In our implementation, we accommodate the interface to the situation of parallel and distributed situations. Firstly, the neighbor LP’s reference is replaced by its name in LP’s neighbor queue, because it is improper even dangerous that a local LP hold the references of other LPs in remote memory space. In addition, (pseudo-)random number plays a crucial role to obtain valid and meaningful results in stochastic simulations. However, it is still a very challenge work to find a good random number generator (RNG) [34]. Thus, in order to focus on our problems, we introduce one of the uniform RNGs of JAMES II to this model interface, where each LP holds a private RNG so that random number streams of different LPs can be independent stochastically. B. The Time Warp SimulatorBased on the simulator interface provided by JAMES II, we design and implement the time warp simulator, which contains the (master-)simulator, (LP-)simulator. 
The simulator works strictly in a master/worker(s) paradigm for fine-grained parallel and distributed stochastic simulations. Communication costs are crucial to the performance of a fine-grained parallel and distributed simulation. Based on the Java remote method invocation (RMI) mechanism, P2P (peer-to-peer) communication is implemented among all (master- and LP-)simulators, where a simulator holds the proxies of all targeted simulators that work on remote workers. One advantage of this communication approach is that PDES code can be transferred to various hardware environments, such as clusters, grids and distributed computing environments, with only a little modification; the other is that the RMI mechanism is easy to realize and independent of any non-Java libraries. Because of the straggler event problem, states have to be saved in order to roll back events that are pre-processed optimistically. Each time it is modified, the state is cloned to a queue by the Java clone mechanism. The problem with this copy-state-saving approach is that it consumes a large amount of memory. However, the problem can be mitigated by a suitable GVT calculation mechanism. The GVT reduction scheme also has a significant impact on the performance of parallel simulators, since it marks the highest time boundary of events that can be committed, so that memory for fossils (processed events and states) earlier than GVT can be reclaimed. GVT calculation is knotty because of the notorious simultaneous reporting and transient message problems. For our problem, another GVT algorithm, called Twice Notification (TN-GVT) (see algorithm 2), is contributed to this already rich repository instead of implementing one of the GVT algorithms in references [26] and [28]. This algorithm looks like the synchronous algorithm described in reference [26] (pp. 114); however, they are essentially different from each other. This algorithm never stops the simulators from processing events during GVT reduction, while the algorithm in reference [26] blocks all simulators for GVT calculation. As for the transient message problem, it can be neglected in our implementation, because the RMI-based remote communication approach is synchronous, meaning a simulator will not continue its processing until the remote message reaches its destination. Because of this, the high-cost message acknowledgement prevalent in many classical asynchronous GVT algorithms is not needed any more either, which should be constructive to the overall performance of the time warp simulator.
IV. Benchmark Model and Experiment Results
A. The Lotka-Volterra Predator-prey System
In our experiment, the spatial version of the Lotka-Volterra predator-prey system is introduced as the benchmark model (see Fig. 2). We chose this system for two reasons: 1) it is a classical experimental model that has been used in many related studies [8][30][31], so it is credible and the simulation results are comparable; 2) it is simple but helpful enough to test the issues we are interested in. The space of the predator-prey system is partitioned into a 2D N x N grid, where N denotes the edge size of the grid. Initially the populations of Grass, Prey and Predators are set to 1000 in each single sub-volume (LP). In Fig. 2, r1, r2, r3 stand for the reaction constants of reactions 1, 2 and 3 respectively. We use dGrass, dPrey and dPredator to stand for the diffusion rates of Grass, Prey and Predator separately.
Similar to reference [8], we also assume that the population of grass remains stable, and thus dGrass is set to zero.
R1: Grass + Prey -> 2 Prey (1)
R2: Predator + Prey -> 2 Predator (2)
R3: Predator -> NULL (3)
r1 = 0.01; r2 = 0.01; r3 = 10 (4)
dGrass = 0.0; dPrey = 2.5; dPredator = 5.0 (5)
Fig. 2: the predator-prey system.
B. Experiment Results
The simulation runs were executed on a Linux cluster with 40 computing nodes. Each computing node is equipped with two 64-bit 2.53 GHz Intel Xeon quad-core processors and 24 GB RAM, and the nodes are interconnected with a Gigabit Ethernet connection. The operating system is Kylin Server 3.5, with kernel 2.6.18. Experiments were conducted on the benchmark model at different model sizes to investigate the execution time and speedup of the time warp simulator. As shown in Fig. 3, the execution times of simulations on a single processor with 8 cores are compared. The result shows that it takes more wall clock time to simulate much larger scale systems for the same simulation time, which testifies to the fact that larger scale systems lead to more events in the same time interval. More importantly, the blue line shows that sequential simulation performance declines very fast when the model scale becomes large. The bottleneck of the sequential simulator is the cost of accessing a long event queue to choose the next events. Besides, from the comparison between group 1 and group 2 in this experiment, we can also conclude that a high diffusion rate greatly increases the simulation time in both sequential and parallel simulations. This is because the LP paradigm has to split diffusion into two events (diffusion-in and diffusion-out) for the two interacting LPs involved, and a high diffusion rate leads to a high proportion of diffusion relative to reaction. In the second step, shown in Fig. 4, the relationship between the speedups from time warp for two different model sizes and the number of worker cores involved is demonstrated. The speedup is calculated against the sequential execution of the spatial reaction-diffusion system model with the same model size and parameters using the NSM. Fig. 4 shows the comparison of the speedup of time warp on a 64 x 64 grid and a 100 x 100 grid. In the case of the 64 x 64 grid, when only one node is used, the lowest speedup (a little more than 1) is achieved with two cores and the highest speedup (about 6) is achieved with 8 cores. The influence of the number of cores used in parallel simulation was investigated; in most cases, a larger number of cores brings considerable improvements in the performance of the parallel simulation. Also, comparing the two results in Fig. 4, the simulation of the larger model achieves better speedup. Combined with the time tests (Fig. 3), we find that the sequential simulator's performance declines sharply when the model scale becomes very large, which correspondingly gives the time warp simulator better speed-up.
Fig. 3: Execution time (wall clock time) of sequential and time warp simulation with respect to different model sizes (N = 32, 64, 100, and 128) and model parameters on a single computing node with 8 cores. Results are grouped by the diffusion rates (Group 1: Sequential 1 and Time Warp 1, dPrey = 2.5, dPredator = 5.0; Group 2: Sequential 2 and Time Warp 2, dPrey = 0.25, dPredator = 0.5).
Fig. 4: Speedup of time warp with respect to the number of worker cores and the model size (N = 64 and 100). Worker cores are chosen from one computing node.
Diffusion rates are dPrey=2.5, dPredator=5.0 and dGrass=0.0.V. Conclusion and Future WorkIn this paper, a time warp simulator based on the discrete event simulation framework JAMES II is designed and implemented for fine-grained parallel and distributed discrete event spatial stochastic simulation of biological reaction systems. Several challenges have been overcome, such as state saving, roll back and especially GVT reduction in parallel execution of simulations. The Lotka-Volterra Predator-Prey system is chosen as the benchmark model to test the performance of our time warp simulator and the best experiment results show that it can obtain about 6 times of speed-up against the sequential simulation. The domain this paper concerns with is in the infancy, many interesting issues are worthy of further investigated, e.g. there are many excellent PDES optimistic synchronization algorithms (e.g. the BTW) as well. Next step, we would like to fill some of them into JAMES II. In addition, Gillespie approximation methods (tau-leap[10] etc.) sacrifice some degree of precision for higher simulation speed, but still could not address the aspect of space of biological reaction systems. The combination of spatial element and approximation methods would be very interesting and promising; however, the parallel execution of tau-leap methods should have to overcome many obstacles on the road ahead.AcknowledgmentThis work is supported by the National Natural Science Foundation of China (NSF) Grant (No.60773019) and the Ph.D. Programs Foundation of Ministry of Education of China (No. 200899980004). The authors would like to show their great gratitude to Dr. Jan Himmelspach and Dr. Roland Ewald at the University of Rostock, Germany for their invaluable advice and kindly help with JAMES II.ReferencesH. Kitano, "Computational systems biology." Nature, vol. 420, no. 6912, pp. 206-210, November 2002.H. Kitano, "Systems biology: a brief overview." Science (New York, N.Y.), vol. 295, no. 5560, pp. 1662-1664, March 2002.A. Aderem, "Systems biology: Its practice and challenges," Cell, vol. 121, no. 4, pp. 511-513, May 2005. [Online]. Available: http://dx.doi.org/10.1016/j.cell.2005.04.020.H. de Jong, "Modeling and simulation of genetic regulatory systems: A literature review," Journal of Computational Biology, vol. 9, no. 1, pp. 67-103, January 2002.C. W. Gardiner, Handbook of Stochastic Methods: for Physics, Chemistry and the Natural Sciences (Springer Series in Synergetics), 3rd ed. Springer, April 2004.D. T. Gillespie, "Simulation methods in systems biology," in Formal Methods for Computational Systems Biology, ser. Lecture Notes in Computer Science, M. Bernardo, P. Degano, and G. Zavattaro, Eds. Berlin, Heidelberg: Springer Berlin Heidelberg, 2008, vol. 5016, ch. 5, pp. 125-167.Y. Tao, Y. Jia, and G. T. Dewey, "Stochastic fluctuations in gene expression far from equilibrium: Omega expansion and linear noise approximation," The Journal of Chemical Physics, vol. 122, no. 12, 2005.D. T. Gillespie, "Exact stochastic simulation of coupled chemical reactions," Journal of Physical Chemistry, vol. 81, no. 25, pp. 2340-2361, December 1977.D. T. Gillespie, "Stochastic simulation of chemical kinetics," Annual Review of Physical Chemistry, vol. 58, no. 1, pp. 35-55, 2007.D. T. Gillespie, "Approximate accelerated stochastic simulation of chemically reacting systems," The Journal of Chemical Physics, vol. 115, no. 4, pp. 1716-1733, 2001.J. Himmelspach, R. Ewald, and A. M. 
Uhrmacher, "A flexible and scalable experimentation layer," in WSC '08: Proceedings of the 40th Conference on Winter Simulation. Winter Simulation Conference, 2008, pp. 827-835.J. Himmelspach and A. M. Uhrmacher, "Plug'n simulate," in 40th Annual Simulation Symposium (ANSS'07). Washington, DC, USA: IEEE, March 2007, pp. 137-143.R. Ewald, J. Himmelspach, M. Jeschke, S. Leye, and A. M. Uhrmacher, "Flexible experimentation in the modeling and simulation framework james ii-implications for computational systems biology," Brief Bioinform, vol. 11, no. 3, pp. bbp067-300, January 2010.A. Uhrmacher, J. Himmelspach, M. Jeschke, M. John, S. Leye, C. Maus, M. Röhl, and R. Ewald, "One modelling formalism & simulator is not enough! a perspective for computational biology based on james ii," in Formal Methods in Systems Biology, ser. Lecture Notes in Computer Science, J. Fisher, Ed. Berlin, Heidelberg: Springer Berlin Heidelberg, 2008, vol. 5054, ch. 9, pp. 123-138. [Online]. Available: http://dx.doi.org/10.1007/978-3-540-68413-8_9.R. Ewald, J. Himmelspach, and A. M. Uhrmacher, "An algorithm selection approach for simulation systems," pads, vol. 0, pp. 91-98, 2008.Bing Wang, Jan Himmelspach, Roland Ewald, Yiping Yao, and Adelinde M Uhrmacher. Experimental analysis of logical process simulation algorithms in james ii[C]// In M. D. Rossetti, R. R. Hill, B. Johansson, A. Dunkin, and R. G. Ingalls, editors, Proceedings of the Winter Simulation Conference, IEEE Computer Science, 2009. 1167-1179.Ewald, J. Rössel, J. Himmelspach, and A. M. Uhrmacher, "A plug-in-based architecture for random number generation in simulation systems," in WSC '08: Proceedings of the 40th Conference on Winter Simulation. Winter Simulation Conference, 2008, pp. 836-844.J. Elf and M. Ehrenberg, "Spontaneous separation of bi-stable biochemical systems into spatial domains of opposite phases." Systems biology, vol. 1, no. 2, pp. 230-236, December 2004.K. Takahashi, S. Arjunan, and M. Tomita, "Space in systems biology of signaling pathways? Towards intracellular molecular crowding in silico," FEBS Letters, vol. 579, no. 8, pp. 1783-1788, March 2005.J. V. Rodriguez, J. A. Kaandorp, M. Dobrzynski, and J. G. Blom, "Spatial stochastic modelling of the phosphoenolpyruvate-dependent phosphotransferase (pts) pathway in escherichia coli," Bioinformatics, vol. 22, no. 15, pp. 1895-1901, August 2006.D. Ridgway, G. Broderick, and M. Ellison, "Accommodating space, time and randomness in network simulation," Current Opinion in Biotechnology, vol. 17, no. 5, pp. 493-498, October 2006.J. V. Rodriguez, J. A. Kaandorp, M. Dobrzynski, and J. G. Blom, "Spatial stochastic modelling of the phosphoenolpyruvate-dependent phosphotransferase (pts) pathway in escherichia coli," Bioinformatics, vol. 22, no. 15, pp. 1895-1901, August 2006.W. G. Wilson, A. M. Deroos, and E. Mccauley, "Spatial instabilities within the diffusive lotka-volterra system: Individual-based simulation results," Theoretical Population Biology, vol. 43, no. 1, pp. 91-127, February 1993.K. Kruse and J. Elf. Kinetics in spatially extended systems. In Z. Szallasi, J. Stelling, and V. Periwal, editors, System Modeling in Cellular Biology. From Concepts to Nuts and Bolts, pages 177–198. MIT Press, Cambridge, MA, 2006.M. A. Gibson and J. Bruck, "Efficient exact stochastic simulation of chemical systems with many species and many channels," The Journal of Physical Chemistry A, vol. 104, no. 9, pp. 1876-1889, March 2000.R. M. 
Fujimoto, Parallel and Distributed Simulation Systems (Wiley Series on Parallel and Distributed Computing). Wiley-Interscience, January 2000.Y. Yao and Y. Zhang, “Solution for analytic simulation based on parallel processing,” Journal of System Simulation, vol. 20, No.24, pp. 6617–6621, 2008.G. Chen and B. K. Szymanski, "Dsim: scaling time warp to 1,033 processors," in WSC '05: Proceedings of the 37th conference on Winter simulation. Winter Simulation Conference, 2005, pp. 346-355.M. Jeschke, A. Park, R. Ewald, R. Fujimoto, and A. M. Uhrmacher, "Parallel and distributed spatial simulation of chemical reactions," in 2008 22nd Workshop on Principles of Advanced and Distributed Simulation. Washington, DC, USA: IEEE, June 2008, pp. 51-59.B. Wang, Y. Yao, Y. Zhao, B. Hou, and S. Peng, "Experimental analysis of optimistic synchronization algorithms for parallel simulation of reaction-diffusion systems," High Performance Computational Systems Biology, International Workshop on, vol. 0, pp. 91-100, October 2009.L. Dematté and T. Mazza, "On parallel stochastic simulation of diffusive systems," in Computational Methods in Systems Biology, M. Heiner and A. M. Uhrmacher, Eds. Berlin, Heidelberg: Springer Berlin Heidelberg, 2008, vol. 5307, ch. 16, pp. 191-210.D. R. Jefferson, "Virtual time," ACM Trans. Program. Lang. Syst., vol. 7, no. 3, pp. 404-425, July 1985.J. S. Steinman, "Breathing time warp," SIGSIM Simul. Dig., vol. 23, no. 1, pp. 109-118, July 1993. [Online]. Available: http://dx.doi.org/10.1145/174134.158473 S. K. Park and K. W. Miller, "Random number generators: good ones are hard to find," Commun. ACM, vol. 31, no. 10, pp. 1192-1201, October 1988.
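The paper above parallelizes the spatial NSM, whose kernel is Gillespie's direct-method SSA. As a point of reference, here is a minimal, well-mixed (single-volume) sketch of that kernel for the Lotka-Volterra reactions quoted in the abstract; it deliberately omits everything the paper actually contributes (sub-volumes, diffusion events, time-warp synchronization, GVT reduction), and the grass population is held constant, as the authors assume:

```python
# Well-mixed Gillespie direct-method SSA for the Lotka-Volterra reactions quoted above:
#   R1: Grass + Prey -> 2 Prey        (r1 = 0.01)
#   R2: Predator + Prey -> 2 Predator (r2 = 0.01)
#   R3: Predator -> NULL              (r3 = 10)
# Single volume only: the paper's contributions (NSM sub-volumes, diffusion events,
# time-warp parallel execution, GVT reduction) are deliberately not reproduced here.
import math
import random

def ssa_lotka_volterra(grass=1000, prey=1000, predator=1000,
                       r1=0.01, r2=0.01, r3=10.0, t_end=0.5, seed=42):
    rng = random.Random(seed)
    t = 0.0
    while t < t_end:
        a1 = r1 * grass * prey        # propensity of R1 (prey reproduction)
        a2 = r2 * predator * prey     # propensity of R2 (predation)
        a3 = r3 * predator            # propensity of R3 (predator death)
        a0 = a1 + a2 + a3
        if a0 == 0.0:                 # nothing can fire any more
            break
        t += -math.log(1.0 - rng.random()) / a0   # exponential waiting time to next reaction
        u = rng.random() * a0                     # choose which reaction fires
        if u < a1:
            prey += 1                 # grass is held constant, as assumed in the paper
        elif u < a1 + a2:
            prey -= 1
            predator += 1
        else:
            predator -= 1
    return t, prey, predator

print(ssa_lotka_volterra())
```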
APA, Harvard, Vancouver, ISO, and other styles
10

Adeoti, Olatunde Micheal, Abidemi Hawawu Bello, Olajumoke Elisabeth Adedokun, Kafilat Adenike Komolafe, David Ademola Adesina, and Opeyemi Joy Olaoye. "Distinctive Molecular typing of 16S rRNA of Bacillus species isolated from farm settlement." International Journal of Immunology and Microbiology 1, no. 1 (2021): 10–15. http://dx.doi.org/10.55124/ijim.v1i1.55.

Full text
Abstract:
Introduction: There are numerous methods for isolating and detecting organisms that are similar and closely related; one of the most reliable is molecular typing of the 16S rRNA gene. Apart from being omnipresent as a multigene family, or operon, it is evolutionarily stable, and the 16S rRNA gene (~1,500 bp) is large enough for informatics purposes.
 Materials and Method: This study employed molecular sequencing of 16S rRNA by the Sanger method to reveal each organism's nucleotide sequence, and BLASTn to show the similarities between the resulting sequences and existing organisms. The 16S rRNA gene remains the best choice for bacterial identification because of its distinctive size and evolutionary stability.
 Results: All isolates were Gram-positive rods and were positive in biochemical tests such as oxidase, catalase, citrate and protease, but were negative in the coagulase and indole tests. On sensitivity testing, 80% of the isolates were resistant to common antibiotics except ciprofloxacin and ceftriaxone. Based on sequence differences in the variable region (V1) of 16S rRNA, as observed in the molecular sequencing results, the ten isolates were identified: six were different strains of B. cereus, while the other four were B. wiedmannii, B. thuringiensis, B. toyonensis and B. pseudomycoides. Sequence analysis of the primer annealing sites showed no clear-cut difference in the conserved region of 16S rRNA, or in the gyrB gene, between B. cereus and B. thuringiensis strains. Phylogenetic analysis showed that four isolates were highly similar to each other, hence the limited number of deletions when the sequences were aligned and analyzed by neighbor-joining and maximum parsimony using MEGA X software. B. toyonensis, B. wiedmannii and B. thuringiensis were distantly related.
 Introduction
 Pathogens cause illness and death in some countries, and infections and gastrointestinal disease in others, making them a public health concern. Pathogens are organisms capable of causing disease. Reliable methods are needed for pathogen detection because pathogens evolve in response to new human habits and new industrial practices.
 
 Microbial classification ranges from the genus to the species level depending on whether phenotypic or genotypic techniques are used. Advances in molecular methods now allow their routine use in microbiology [1]. There are numerous molecular methods that are fast and simple to apply to pathogen detection. Among the pathogens relevant to human health, Bacillus cereus is interesting because of its ability to survive in various habitats [2].
 The genus Bacillus comprises aerobic or facultatively anaerobic, Gram-positive, spore-forming, rod-shaped bacteria. It is characterized by two morphological forms: the vegetative cell, which ranges from 1.0 to 1.2 µm in width and from 3.0 to 5.0 µm in length, can be straight or slightly curved and motile or non-motile; and the endospore (the non-swelling sporangium). The genus is characterized by the presence of endospores, not more than one per cell, which are resistant to many adverse environmental conditions such as heat, radiation, cold and disinfectants. These bacteria can respire either in the presence or absence of oxygen [3]. Cell diameter, sporangium morphology and the catalase test do not allow differentiation; differentiation among B. anthracis, B. cereus and B. thuringiensis relies instead on parasporal crystals and the presence of a capsule. [4] described a B. thuringiensis strain capable of producing a capsule resembling that of B. anthracis. Most species of the genus display great diversity in physiological characteristics, such as degradation of cellulose, starch, pectin, agar and hydrocarbons, production of enzymes and antibiotics, and acidophilic, alkaliphilic, psychrophilic and thermophilic lifestyles, which allows them to adapt to various environmental conditions [5]. Differentiating between species of the genus Bacillus was difficult in early attempts, when endospore formation and aerobic respiration were the main characters used for classification. As many authors have reported, differentiation between B. thuringiensis and B. cereus is also very difficult at the molecular level.
 
 B. cereus can survive at temperatures between 4°C and 55°C. Mesophilic strains grow between 10°C and 42°C, psychrotrophic strains can survive at 4°C, and other strains are able to grow at 52 to 55°C. B. cereus vegetative cells grow at pH between 1.0 and 5.2. Heat-resistant strains can survive and multiply in wet, low-acid foods at temperatures ranging from 5 to 52°C. The survival of B. cereus spores at 95°C decreases when the pH decreases from 6.2 to 4.7 [6]. B. cereus can grow in the presence of salt at concentrations up to 7.5%, depending on the pH value.
 B. thuringiensis possesses a protein crystal that is toxic to insects. This toxin protein was first known as the parasporal crystalline inclusion but was later referred to as the δ-endotoxin, or insecticidal crystal protein [7]. Strains of B. thuringiensis show a wide range of specificity across insect orders such as Lepidoptera, Diptera and Coleoptera. These strains produce crystalline proteins known as Cry proteins during sporulation. When B. thuringiensis infects an insect, the insect loses its appetite and slows its movement, and over time it dies as the protein crystals dissolve in its gut.
 
 In the cultivation of vegetable crops, plants can be attacked by many types of pests. Hence, to overcome pest attacks, farmers often use pesticides that contain synthetic active ingredients. Many negative effects arise from the careless use of chemical pesticides, among them increases in pest populations, resistance, the death of natural enemy populations, and increased residue levels on agricultural products, which make them unsafe for public consumption [8]. Therefore, it is necessary to find alternative methods for the control of crop pests. The best alternative is to replace chemical insecticides with biological control, which involves the use of living microorganisms. In profiling microbial communities, the main objective is to identify which bacteria are present in an environment and in what abundance. Most microbial profiling methods focus on the identification and quantification of bacteria with already sequenced genomes, and most utilize information obtained from entire genomes. Homology-based methods such as [1–4] classify sequences by detecting homology in reads belonging to either an entire genome or only a small set of marker genes. Composition-based methods generally use conserved compositional features of genomes for classification, and as such they utilize fewer computational resources. Using the 16S rRNA gene instead of whole-genome information is not only computationally efficient but also economical; Illumina has indicated that targeted sequencing of a focused region of interest reduces sequencing costs and enables deeper sequencing compared to whole-genome sequencing. On the other hand, as observed by [8], by focusing exclusively on one gene, one might lose essential information for advanced analyses. We will, however, provide an analysis demonstrating that, at least in the context of oral microbial communities, the 16S rRNA gene retains sufficient information to allow us to detect unknown bacteria
 [9, 10]. This study aimed to employ 16S rRNA sequencing as an instrument for identifying seemingly close Bacillus species.
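 To make the contrast between homology-based and composition-based profiling concrete, the short Python sketch below computes a simple k-mer frequency profile of the kind composition-based classifiers rely on; the sequence fragment and the choice of k are arbitrary illustrations, not data from this study.

from collections import Counter

def kmer_profile(sequence, k=4):
    """Return relative frequencies of all k-mers in a DNA sequence."""
    sequence = sequence.upper().replace("\n", "")
    counts = Counter(sequence[i:i + k] for i in range(len(sequence) - k + 1))
    total = sum(counts.values())
    return {kmer: n / total for kmer, n in counts.items()}

# Toy example: a short, made-up fragment standing in for a 16S rRNA read.
toy_read = "AGAGTTTGATCCTGGCTCAGGACGAACGCTGGCGGCGTGCCTAATACATGCAAGTCG"
profile = kmer_profile(toy_read, k=4)
print(sorted(profile.items(), key=lambda kv: -kv[1])[:5])  # five most frequent 4-mers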
 Abbreviations
 BLAST, Basic Local Alignment Search Tool; PCR, polymerase chain reaction; rRNA, ribosomal RNA.
 Material and methods
 Sample collection: Soil samples were collected from three farm sources (rice, sugar cane and vegetable plots) and from abandoned farmland in January 2019. The samples were labeled serially from Sample 1 to Sample 10 (S1 to S10).
 Bacterial culture: A ten-fold serial dilution was performed. The bacterial suspension was diluted (to 10^-10) with saline water, and 100 μl of the suspension was spread on a nutrient agar plate and incubated for 24 hours. Bacterial colonies were isolated and grown in nutrient broth and on nutrient agar. Other solid media used included chocolate agar, blood agar, EMB, MacConkey, Simmons citrate and MRS agar. Bacteria were characterized by conventional techniques using morphological appearance and performance in biochemical tests [11].
 Identification of bacteria: The identification of bacteria was based on morphological characteristics and biochemical tests carried out on the isolates. Morphological characteristics observed for each bacterial colony after 24 h of growth included colony appearance, cell shape, color, optical characteristics, consistency and pigmentation. Biochemical characterization was performed according to the method of [12].
 Catalase test: A small quantity of a 24 h old culture was transferred into a drop of 3% hydrogen peroxide solution on a clean slide with the aid of a sterile inoculating loop. Gas, seen as white froth, indicates the presence of the catalase enzyme in the isolates [13].
 DNA Extraction Processes
 The extraction process comprised four phases:
 collection of cells, lysis of cells, collection of DNA by phenol extraction, and concentration and purification of DNA.
 Collection of cells: a pure colony of the bacterial culture was inoculated into a prepared sterile nutrient broth. After growth was confirmed by the turbidity of the culture, 1.5 ml of the culture was transferred into a centrifuge tube and centrifuged at 5,000 rpm for 5 minutes; the supernatant was discarded, leaving the sediment.
 Lysis of cells: 400 µl of lysis buffer was added to the sediment, mixed thoroughly and allowed to stand for five minutes at room temperature (25°C). 200 µl of sodium dodecyl sulfate (SDS) solution was then added for protein lysis, mixed gently and incubated at 65°C for 10 minutes.
 Collection of DNA by phenol: 500 µl of phenol-chloroform was added to the solution to separate the DNA, mixed completely and centrifuged at 10,000 rpm for 10 minutes. The white pellet seen at the top of the tube after centrifugation was transferred into another sterile tube, 1 ml of isopropanol was added, and the tube was incubated for 1 hour at -20°C to precipitate the DNA. The DNA appeared as a colorless liquid in the solution.
 Concentration and purification of DNA: the solution was centrifuged at 10,000 rpm for 10 minutes. The supernatant was discarded and the remaining DNA pellet was washed with 1 ml of 70% ethanol, mixed and centrifuged at 10,000 rpm for 10 minutes. The supernatant was discarded and the pellet air-dried. 60 µl of TE buffer was added to further dissolve the DNA, which was then stored at -40°C until required for use [14].
 PCR Amplification 
 This requires the use of primers (forward and reverse), a polymerase enzyme, a template DNA and the nucleotide building blocks (dNTPs: dATP, dGTP, dCTP and dTTP). Together these are called the master mix.
 Each PCR cycle consists of three main steps.
 The DNA sample was heated at 94°C to separate the two strands of the template DNA, which are held together by hydrogen bonds. Once both strands were separated, the temperature was reduced to 57°C (the annealing temperature), which allows the forward and reverse primers to bind to the template DNA. After binding, the temperature was raised to 72°C, activating the polymerase enzyme, which starts adding dNTPs to synthesize the new strands. The cycles were repeated several times in order to obtain millions of copies of the target DNA [15].
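 As a worked illustration of the cycling profile just described, the sketch below encodes the three temperature steps as plain data and estimates the total run time; the step durations and the cycle count are assumptions made for illustration (the text only says the cycles were repeated several times), not parameters reported by the authors.

# Hypothetical PCR thermocycling profile based on the temperatures in the text.
# Durations and cycle count are illustrative assumptions.
cycle = [
    ("denaturation", 94, 30),   # step name, temperature in °C, seconds
    ("annealing",    57, 30),
    ("extension",    72, 60),
]
n_cycles = 30  # assumed; the protocol only states the cycles were repeated several times

per_cycle = sum(seconds for _, _, seconds in cycle)
total_minutes = n_cycles * per_cycle / 60
for name, temp_c, seconds in cycle:
    print(f"{name:<12} {temp_c} °C for {seconds} s")
print(f"{n_cycles} cycles ~ {total_minutes:.0f} minutes (excluding hold steps)")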
 Preparation of Agarose Gel
 One gram (1 g) of agarose was weighed for DNA gels, or 2 g of agarose powder for PCR-product analysis. The agarose powder was mixed with 100 ml of 1×TAE in a microwaveable flask and microwaved for 1-3 minutes until the agarose was completely dissolved (the solution should not be over-boiled, as some of the buffer will evaporate and alter the final agarose percentage of the gel). The agarose solution was allowed to cool to about 50°C, and after five minutes 10 µl of EZ-Vision DNA stain was added. EZ-Vision binds to the DNA and allows it to be easily visualized under ultraviolet (UV) light. The agarose was poured into a gel tray with the well comb firmly in place, and the newly poured gel was left at 4°C for 10-15 minutes, or at room temperature for 20-30 minutes, until it had completely solidified [16].
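 The quantities given above correspond to a 1% (or 2%) w/v gel in 100 ml of 1×TAE; a minimal helper makes the arithmetic explicit, using only the figures quoted in the protocol.

def agarose_mass(percent_wv, buffer_volume_ml):
    """Grams of agarose needed for a given %(w/v) gel in a given buffer volume."""
    return percent_wv / 100 * buffer_volume_ml

print(agarose_mass(1, 100))  # 1.0 g for a 1% gel in 100 ml 1xTAE
print(agarose_mass(2, 100))  # 2.0 g for a 2% gel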
 Loading and Running of samples on Agarose gel
 The agarose gel was placed into the chamber, and electrophoresis commenced once running buffer had been added to the reservoirs at the ends of the chamber until the buffer covered the gel by at least 2 mm. It is advisable to arrange the samples to be loaded in the correct order according to their assigned lanes. When loading the samples, keep the pipette tip perpendicular to the row of wells, supporting your pipetting hand with the other hand; this reduces the risk of accidentally puncturing the wells with the tip. Lower the tip of the pipette until it breaks the surface of the buffer and sits just above the well. Once all the samples have been loaded, avoid any movement of the gel chamber, as this might cause samples to spill into adjacent wells. Place the lid on the gel chamber with the terminals correctly matched to the electrodes on the chamber, black to black and red to red. Remember that DNA is negatively charged, so it migrates from the negative electrode towards the positive electrode, separating according to fragment size in kilobases. Once the electrodes are connected to the power supply, switch on the power supply, set the correct constant voltage (100 V) and a stopwatch for the appropriate time, and press the start button to begin the flow of current that separates the DNA fragments. After a few minutes the samples begin to migrate from the wells into the gel; as the DNA runs, it moves from the negative electrode towards the positive electrode [17].
 PCR mix Components and Sanger Sequencing
 This is made up of primers (forward and reverse), the polymerase enzyme (Taq), a template DNA and the nucleotides, which in Sanger sequencing include the chain-terminating ddNTPs (ddATP, ddCTP, ddGTP and ddTTP). The specific primer sequences used for bacterial identification are: 785F 5' (GGA TTA GAT ACC CTG GTA) 3', 27F 5' (AGA GTT TGA TCM TGG CTC AG) 3', 907R 5' (CCG TCA ATT CMT TTR AGT TT) 3' and 1492R 5' (TAC GGY TAC CTT GTT ACG ACT T) 3'.
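 As a quick illustration only (not part of the authors' workflow), the primer set quoted above can be tabulated and checked in a few lines of Python; degenerate IUPAC bases such as M, R and Y are counted separately and excluded from the GC figure.

# 16S rRNA primers as quoted in the text (spaces removed).
primers = {
    "27F":   "AGAGTTTGATCMTGGCTCAG",
    "785F":  "GGATTAGATACCCTGGTA",
    "907R":  "CCGTCAATTCMTTTRAGTTT",
    "1492R": "TACGGYTACCTTGTTACGACTT",
}

for name, seq in primers.items():
    gc = sum(seq.count(base) for base in "GC")            # unambiguous G/C only
    ambiguous = sum(seq.count(base) for base in "MRYWSK")  # degenerate IUPAC codes
    print(f"{name:>5}: {len(seq)} nt, GC ~ {100 * gc / len(seq):.0f}%, "
          f"{ambiguous} degenerate base(s)")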
 BLAST
 The resulting genomic sequences were assembled and submitted to GenBank at NCBI for assignment of accession numbers; they were assigned the accession numbers MW362290, MW362291, MW362292, MW362293, MW362294 and MW362295, respectively, and were subjected to homology searches using the Basic Local Alignment Search Tool (BLAST) at NCBI. The other isolates' accession numbers were retrieved from the NCBI GenBank: AB738796.1, JH792136.1, MW015768.1 and MG745385.1. MEGA 5.2 software was used for the construction of the phylogenetic tree and for phylogenetic analysis.
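 For readers who want to reproduce this kind of homology search programmatically rather than through the NCBI web interface, a Biopython sketch along the following lines could be used; it assumes network access and a local FASTA file of one 16S sequence (the file name is hypothetical), and it is not the exact procedure used by the authors.

from Bio import SeqIO
from Bio.Blast import NCBIWWW, NCBIXML

# Hypothetical input file containing one of the deposited 16S rRNA sequences,
# e.g. the record later assigned accession MW362290.
record = SeqIO.read("isolate_16S.fasta", "fasta")

# Submit to NCBI BLASTn against the nucleotide collection (requires internet access).
result_handle = NCBIWWW.qblast("blastn", "nt", str(record.seq))
blast_record = NCBIXML.read(result_handle)

for alignment in blast_record.alignments[:5]:          # top five hits
    hsp = alignment.hsps[0]
    identity = 100 * hsp.identities / hsp.align_length
    print(f"{alignment.title[:60]}  identity={identity:.1f}%  E={hsp.expect:.2g}")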
 All the organisms showed 100% identity, 0% gaps and an E-value of 0.0, which indicated that they are closely related to the existing organisms. The 16S rRNA gene is the best choice for bacterial identification because it has a distinctive size, ranging from about 500 bases to 1,500 bp. Rather than the more variable 23S rRNA gene, the 16S rRNA gene is used for identification in prokaryotes, while the 18S rRNA gene is used in eukaryotes.
 Results
 The results of both the conventional morphological and cultural identification were consistent with the molecular sequencing results. Six isolates were confirmed as B. cereus, while the other four isolates were B. wiedmannii, B. thuringiensis, B. toyonensis and B. pseudomycoides. The 16S rRNA sequences of six isolates were assigned the accession numbers MW362290.1-MW362295.1 and deposited in GenBank, while the other four sequences were aligned with those available in the NCBI database. The alignment results showed close relatedness to LT844650.1, with identities of 100% to 92.2%, as above. The six Bacillus cereus isolates showed great evolutionary relatedness, as shown in the phylogenetic tree constructed using MEGA X software.
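 The tree itself was built in MEGA X; as a hedged approximation of the same neighbor-joining step, the following Biopython sketch constructs an identity-distance neighbor-joining tree from a multiple alignment (the alignment file name is hypothetical, and this is an illustration rather than a substitute for the MEGA X analysis).

from Bio import AlignIO, Phylo
from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor

# Hypothetical FASTA alignment of the ten 16S rRNA sequences (e.g. produced with MUSCLE or ClustalW).
alignment = AlignIO.read("bacillus_16S_aligned.fasta", "fasta")

calculator = DistanceCalculator("identity")     # pairwise identity distances
distance_matrix = calculator.get_distance(alignment)

constructor = DistanceTreeConstructor()
nj_tree = constructor.nj(distance_matrix)       # Saitou & Nei neighbor-joining

Phylo.draw_ascii(nj_tree)                       # quick text rendering of the tree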
 Discussion
 The results obtained in this study are consistent with previous studies in other countries [22, 23]. The results of the phylogenetic analysis of the 16S rRNA isolates in this study were similar to those of the housekeeping genes proposed by [18, 19]. Comparing this study with earlier work, the B. cereus group, comprising several Bacillus species, has been hypothesized to form a single species with different ecotypes and pathotypes. This study was able to phenotypically differentiate B. thuringiensis, B. pseudomycoides, B. toyonensis, B. wiedmannii and B. cereus sensu stricto. Despite differences at the level of colonial appearance, the 16S rRNA sequences have homology ranging from 100% to 92%, providing insufficient resolution at the species level [6, 7, 18]. After analysis through various methods, the strain was identified as the Gram-positive bacterium Bacillus cereus with a homology of 99.4%. Cohan [20] demonstrated that 95-99% similarity of the 16S rRNA gene sequence between two bacteria suggests a similar species, while >99% indicates the same bacterium. The phylogenetic tree showed that B. toyonensis, B. thuringiensis and B. wiedmannii are outgroups of the B. cereus group, while B. pseudomycoides is most closely related to the B. cereus group [19, 21, 22].
 Conclusion
 In the area of molecular epidemiology, genotypic typing methods have greatly increased our ability to differentiate between microorganisms at the intra- and inter-species levels and have become an essential and powerful tool. Phenotypic methods will remain important in diagnostic microbiology, and genotypic methods will become increasingly popular.
 After analysis through various methods, the strains were identified as Gram-positive bacteria of the Bacillus cereus group, with homology of between 100% and 92.3%.
 Conflicts of interest
 The Authors declare that there is no conflict of interest.
 References:
 
 Meyer F.; Paarmann D.; D'Souza M.; Olson R.; Glass E.M.; Kubal M.; Paczian T.; Rodriguez A.; Stevens R.; Wilke A. The metagenomics RAST server - a public resource for the automatic phylogenetic and functional analysis of metagenomes. BMC Bioinformatics. 2008, 9(1), 386.
 Segata N.; Waldron L.; Ballarini A.; Narasimhan V.; Jousson O.; Huttenhower C. Metagenomic microbial community profiling using unique clade-specific marker genes. Nature methods. 2012, 9(8), 811–814.
 Brady A.; Salzberg SL. Phymm and phymmbl: metagenomic phylogenetic classification with interpolated markov models. Nature Methods. 2009, 6(9), 673–676.
 Lindner MS.; Renard BY. Metagenomic abundance estimation and diagnostic testing on species level. Nucleic Acids Res. 2013, 41(1), 10–10.
 Wang A.; Ash G.J. Whole genome phylogeny of Bacillus by feature frequency profiles (FFP). Sci Rep. 2015, 5, 13644.
 Carroll L.M.; Kovac J.; Miller R.A.; Wiedmann M. Rapid, high-throughput identification of anthrax-causing and emetic Bacillus cereus group genome assemblies using nucleotide sequencing data. Appl. Environ. Microbiol. 2017, 83, e01096-17.
 Liu Y.; Lai Q.L.; Goker M.; Meier-Kolthoff J.P.; Wang M.; Sun Y.M.; Wang L.S.; Shao Z. Genomic insights into the taxonomic status of the Bacillus cereus group. Sci. Rep. 2015, 5, 14082.
 Lindner MS.; Renard BY. Metagenomic profiling of known and unknown microbes with microbegps. PloS ONE. 2015, 10(2), 0117711.
 Versalovic J.; Schneider M.; de Bruijn FJ.; Lupski JR. Genomic fingerprinting of bacteria using repetitive sequence based PCR (rep-PCR). Meth Mol Cell Biol. 1994, 5, 25–40.
 Arthur Y.; Ehebauer MT.; Mukhopadhyay S.; Hasnain SE. The PE/PPE multi gene family codes for virulence factors and is a possible source of mycobacterial antigenic variation: Perhaps more? Biochimie. 2013, 94, 110–116.
 Jusuf, E. Culture Collection of Potential Bacillus thuringiensis Bacterial Strains Insect Killer and the Making of a Library of Toxic Protein Coding Genes. Technical Report LIPI Biotechnology Research Center. 2008. pp. 18-31.
 Fawole, M.O.; B.A. Oso. Characterization of Bacteria: Laboratory Manual of Microbiology. 4th Edn., Spectrum Book Ltd., Ibadan, Nigeria, 2004, pp: 24-33.
 Cheesbrough, M. District Laboratory Practice in Tropical Countries. 2nd Edn., Cambridge University Press, Cambridge, UK., 2006, ISBN-13: 9781139449298.
 Giraffa G.; Neviani E. DNA-based, cultureindependent strategies for evaluating microbial communities in food associated ecosystem. Int J Food Microbiol. 2001, 67, 19–34.
 Ajeet Singh. DNA Extraction from a bacterial cell. A video on Experimental Biotechnology. 2020.
 Quick biochemistry. A YouTube video on polymerase chain reaction. 2018.
 Bio-Rad laboratories. A YouTube video on loading and running of samples on Agarose gel. 2012.
 Saitou N.; Nei M. The neighbor-joining method: a new method for reconstructing phylogenetic trees. Mol. Biol. Evol. 1987, 4, 406-425. doi:10.1093/oxfordjournals.molbev.a040454.
 Lazarte N.J.; Lopez R.P.; Ghiringhelli P.D.; Beron C.M. Bacillus wiedmannii biovar thuringiensis: a specialized mosquitocidal pathogen with plasmids from diverse origins. Genome Biol. Evol. 2018, 10(10), 2823-2833. doi:10.1093/gbe/evy211.
 Cohan F.M. What are bacterial species? Annu. Rev. Microbiol. 2002, 56, 457-487.
 Abiola C.; Oyetayo V.O. Isolation and Biochemical Characterization of Microorganisms Associated with the Fermentation of Kersting’s Groundnut (Macrotyloma geocarpum). Research Journal of Microbiology, 2016, 11: 47- 55.DOI:10.3923/jm.2016.47.55
 Adeoti O.M.; Usman T.A. Molecular characterization of rhizobacteria isolates from Saki, Nigeria. Eur. J. Biol. Biotech. 2021, 2(2), 159. doi:10.24018/ejbio.2021
APA, Harvard, Vancouver, ISO, and other styles
11

Wagner, Jeffrey, Kristin Salottolo, Christopher V. Fanale, Judd Jensen, and David Bar-Or. "Abstract TP261: Improving Neurologist Responsiveness for Telestroke Consultations: We Do Better!" Stroke 48, suppl_1 (2017). http://dx.doi.org/10.1161/str.48.suppl_1.tp261.

Full text
Abstract:
Introduction: Telemedicine is widely used for remote evaluation and treatment of acute stroke patients (telestroke). Because time equals brain, the NINDS and ACLS recommend a benchmark of 15 minutes from arrival to neurologic expertise. Our goal was to examine key time metrics and outcomes in an active telestroke network after implementing system-wide improvements. Methods: Our comprehensive stroke center (CSC) serves as the Rocky Mountain Region’s referral center for comprehensive stroke care. The telestroke program launched in 2006; stroke program leaders recently initiated several changes to improve time metrics, including a) a streamlined process for the centralized call center including centralized paging; b) a dedicated, on-call telestroke neurologist; c) outreach and education to the spoke hospitals for accessing the call center; d) back-up systems in place for technical issues and for times of high consult volumes. We examined the following metrics for all telestroke consults compared to patients admitted through the ED of the CSC over the past 18 months: median (interquartile range [IQR]) neurologist response time, transfer rate to the CSC, and IV t-PA rate. Median telestroke consult times (time to initiation of video consultation) were also reported, beginning April 2016. Results: Ten neurologists at the hub CSC responded to 4,283 pages from 45 spoke hospitals, averaging 8 pages/day. Overall, 14.4% of patients were transferred to the CSC for definitive care. The time to telestroke page response was 2 [1-3] minutes, which was comparable to neurologist response at the CSC of 0 [0-5] minutes for 686 patients. The median telestroke consult time was 2 [1-4] minutes for 70 patients. The IV tPA rate for treating acute ischemic stroke was comparable for telestroke vs. bedside consults (17.3% vs. 18.9%). Conclusions: Based on our mature telestroke network, it is possible to be as responsive with telestroke consults as with in-hospital consults. Several new, scalable processes resulted in improved time metrics. These time metrics should help define what should be expected from a Telestroke provider.
APA, Harvard, Vancouver, ISO, and other styles
12

"Neural Network Web-Based Human Resource Management System Model (NNWBHRMSM)." 2013 1, no. 2013 (2020): 75–87. http://dx.doi.org/10.47277/ijcncs/1(3)2.

Full text
Abstract:
As business activities become increasingly global and as numerous firms expand their operations into overseas markets, there is a need for human resource management (HRM) to ensure that they hire and keep good employees. For a long time, firms and organizations have had great difficulty getting the right professionals into appropriate jobs and training. This research focuses on exploiting information technology in order to overcome these problems. The system, which is a network of inter-related processes, collects data from applicants through a web-based interface and matches them with appropriate jobs. This avoids the frustration and other problems inherent in the manual method of job recruitment, namely the traditional unstructured interview and knowledge-based method for matching applicants to jobs. The proposed system is a neural network web-based human resource management system model running on an Internet Information Services (IIS) server with support for Active Server Pages (ASP) and Microsoft Access, while Hypertext Markup Language (HTML) is used for authoring the web pages. Finally, the system can run on minimal Pentium machines with the Windows XP operating system.
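As a rough, hypothetical illustration of the matching idea described in the abstract (the paper's actual system runs as a neural network behind an ASP/IIS/Access stack), the Python sketch below scores an applicant's feature vector against invented job requirement vectors; all features, jobs and weights are made up for illustration.

# Toy illustration of applicant-to-job matching; features, jobs and weights are invented.
applicant = {"programming": 0.9, "accounting": 0.1, "communication": 0.6}

jobs = {
    "Software developer": {"programming": 0.8, "accounting": 0.0, "communication": 0.2},
    "Accountant":         {"programming": 0.1, "accounting": 0.9, "communication": 0.3},
    "Sales officer":      {"programming": 0.0, "accounting": 0.2, "communication": 0.9},
}

def match_score(applicant_features, job_weights):
    """Weighted sum of applicant features against a job's requirement weights."""
    return sum(job_weights.get(k, 0.0) * v for k, v in applicant_features.items())

ranked = sorted(jobs, key=lambda j: match_score(applicant, jobs[j]), reverse=True)
for job in ranked:
    print(f"{job}: {match_score(applicant, jobs[job]):.2f}")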
APA, Harvard, Vancouver, ISO, and other styles
13

Votinov, Maksim Valer'evich. "THE FEATURES OF BUILDING WEB APPLICATIONS OF DATA SUPPORT OF AUTOMATIC CONTROL SYSTEMS." Vestnik of Astrakhan State Technical University. Series: Management, computer science and informatics, July 25, 2017, 40–47. http://dx.doi.org/10.24143/2072-9502-2017-3-40-47.

Full text
Abstract:
The article focuses on developing telecommunication functions that provide remote control of technological processes, their visualization, and the executive mechanisms of different processing facilities (e.g. the executive mechanisms of a small-size dryer created at Murmansk State Technical University for smoking and drying fish). The solution became possible thanks to a web application developed with ASP (Active Server Pages) technology and run on top of a web server controlled by Microsoft IIS (Internet Information Services). The advantage of the web application is that it allows logging onto the automated control system regardless of the small-size dryer's location (i.e. the working place is mobile), and the technological process can be controlled from any mobile platform or operating system. Thanks to MS SQL Server Express, it became possible to organize non-stop data exchange between the dryer's automatic system and the end user (simultaneous work by several users is possible). The article presents the scheme of the user and web-application interface. The project's security class (III) has been determined, along with the corresponding basic protection measures, which include user identification and authentication, antivirus protection and network intrusion detection systems. Project costs proved to be lower than the cost of the TRACE MODE Data Center software (35,000 rubles instead of 58,000 rubles), which is considered a truly sustainable solution for providing remote access to the automation facility. A growing number of users does not affect the final cost of the project.
APA, Harvard, Vancouver, ISO, and other styles
14

Kusriyanto, Heri. "MENINGKATKAN RELIABILITAS JARINGAN CLIENT SERVER DENGAN MENGGUNAKAN METODE VIRTUAL ROUTER REDUDANCY PROTOCOL (VRRP) BERBASIS CISCO DI PT.TRANS-PACIFIC PETROCHEMICAL INDOTAMA (TPPI) TUBAN." Indexia 2, no. 1 (2021). http://dx.doi.org/10.30587/indexia.v2i1.2554.

Full text
Abstract:
In Metro Ethernet (Metro-E) wide area network (WAN) communication, the possibility of router failure must be taken into account. Virtual Router Redundancy Protocol (VRRP) is a Cisco-standard redundancy protocol that designates a standby router and an active router, referred to as the master router, while the other routers become backup routers. It also provides a virtual router, defined by a Virtual Router Identifier (VRID) and an IP address. The election of the master router is also influenced by the priority value: the higher the priority, the more likely that router becomes the master. VRRP has three states: master, initialize and backup. Initialize is the state in which a router waits for an event; the backup state monitors the master router, so that if the master router goes down the backup router takes over its role. The master state sends data to the backup router, demonstrated here by 0% packet loss with a delay of 1.0085 ms. Each router sends hello packets, whose spacing in VRRP is called the advertisement interval; by default the advertisement interval is 1 second. This is used to check whether a router is down or a link is disconnected. For this reason, reliable client-server network services are essential for PT. Trans-Pacific Petrochemical Indotama (TPPI) to synchronize client-server traffic between the Jakarta head office and the Tuban plant site and to maintain network stability.
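As a toy illustration of the election behavior described in the abstract (the router with the highest priority becomes master, and a backup takes over when the master goes down), the Python sketch below simulates that logic; router names and priorities are invented, and this is not Cisco configuration.

# Toy VRRP-style master election: highest priority wins; a backup takes over
# when the master stops sending advertisements. Values are illustrative only.
ADVERTISEMENT_INTERVAL = 1.0  # seconds (VRRP default noted in the abstract)

routers = {"R1": 120, "R2": 100, "R3": 90}   # router name -> priority (invented)

def elect_master(alive_routers):
    """Return the live router with the highest priority."""
    return max(alive_routers, key=alive_routers.get)

alive = dict(routers)
print("Master:", elect_master(alive))                 # R1 (priority 120)

# Simulate the master going down: its advertisements stop arriving.
del alive["R1"]
print("R1 down, new master:", elect_master(alive))    # R2 takes over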
APA, Harvard, Vancouver, ISO, and other styles
15

Leaver, Tama. "The Social Media Contradiction: Data Mining and Digital Death." M/C Journal 16, no. 2 (2013). http://dx.doi.org/10.5204/mcj.625.

Full text
Abstract:
Introduction Many social media tools and services are free to use. This fact often leads users to the mistaken presumption that the associated data generated whilst utilising these tools and services is without value. Users often focus on the social and presumed ephemeral nature of communication – imagining something that happens but then has no further record or value, akin to a telephone call – while corporations behind these tools tend to focus on the media side, the lasting value of these traces which can be combined, mined and analysed for new insight and revenue generation. This paper seeks to explore this social media contradiction in two ways. Firstly, a cursory examination of Google and Facebook will demonstrate how data mining and analysis are core practices for these corporate giants, central to their functioning, development and expansion. Yet the public rhetoric of these companies is not about the exchange of personal information for services, but rather the more utopian notions of organising the world’s information, or bringing everyone together through sharing. The second section of this paper examines some of the core ramifications of death in terms of social media, asking what happens when a user suddenly exists only as recorded media fragments, at least in digital terms. Death, at first glance, renders users (or post-users) without agency or, implicitly, value to companies which data-mine ongoing social practices. Yet the emergence of digital legacy management highlights the value of the data generated using social media, a value which persists even after death. The question of a digital estate thus illustrates the cumulative value of social media as media, even on an individual level. The ways Facebook and Google approach digital death are examined, demonstrating policies which enshrine the agency and rights of living users, but become far less coherent posthumously. Finally, along with digital legacy management, I will examine the potential for posthumous digital legacies which may, in some macabre ways, actually reanimate some aspects of a deceased user’s presence, such as the Lives On service which touts the slogan “when your heart stops beating, you'll keep tweeting”. Cumulatively, mapping digital legacy management by large online corporations, and the affordances of more focussed services dealing with digital death, illustrates the value of data generated by social media users, and the continued importance of the data even beyond the grave. Google While Google is universally synonymous with search, and is the world’s dominant search engine, it is less widely understood that one of the core elements keeping Google’s search results relevant is a complex operation mining user data. Different tools in Google’s array of services mine data in different ways (Zimmer, “Gaze”). Gmail, for example, uses algorithms to analyse an individual’s email in order to display the most relevant related advertising. This form of data mining is comparatively well known, with most Gmail users knowingly and willingly accepting more personalised advertising in order to use Google’s email service. However, the majority of people using Google’s search engine are unaware that search, too, is increasingly driven by the tracking, analysis and refining of results on the basis of user activity (Zimmer, “Externalities”). 
As Alexander Halavais (160–180) quite rightly argues, recent focus on the idea of social search – the deeper integration of social network information in gauging search results – is oxymoronic; all search, at least for Google, is driven by deep analysis of personal and aggregated social data. Indeed, the success of Google’s mining of user data has led to concerns that often invisible processes of customisation and personalisation will mean that the supposedly independent or objective algorithms producing Google’s search results will actually yield a different result for every person. As Siva Vaidhyanathan laments: “as users in a diverse array of countries train Google’s algorithms to respond to specialized queries with localised results, each place in the world will have a different list of what is important, true, or ‘relevant’ in response to any query” (138). Personalisation and customisation are not inherently problematic, and frequently do enhance the relevance of search results, but the main objection raised by critics is not Google’s data mining, but the lack of transparency in the way data are recorded, stored and utilised. Eli Pariser, for example, laments the development of a ubiquitous “filter bubble” wherein all search results are personalised and subjective but are hidden behind the rhetoric of computer-driven algorithmic objectivity (Pariser). While data mining informs and drives many of Google’s tools and services, the cumulative value of these captured fragments of information is best demonstrated by the new service Google Now. Google Now is a mobile app which delivers an ongoing stream of search results but without the need for user input. Google Now extrapolates the rhythms of a person’s life, their interests and their routines in order to algorithmically determine what information will be needed next, and automatically displays it on a user’s mobile device. Clearly Google Now is an extremely valuable and clever tool, and the more information a user shares, the better the ongoing customised results will be, demonstrating the direct exchange value of personal data: total personalisation requires total transparency. Each individual user will need to judge whether they wish to share with Google the considerable amount of personal information needed to make Google Now work. The pressing ethical question that remains is whether Google will ensure that users are sufficiently aware of the amount of data and personal privacy they are exchanging in order to utilise such a service. Facebook Facebook began as a closed network, open only to students at American universities, but has transformed over time to a much wider and more open network, with over a billion registered users. Facebook has continually reinvented their interface, protocols and design, often altering both privacy policies and users’ experience of privacy, and often meeting significant and vocal resistance in the process (boyd). The data mining performed by social networking service Facebook is also extensive, although primarily aimed at refining the way that targeted advertising appears on the platform. In 2007 Facebook partnered with various retail loyalty services and combined these records with Facebook’s user data. This information was used to power Facebook’s Beacon service, which added details of users’ retail history to their Facebook news feed (for example, “Tama just purchased a HTC One”). 
The impact of all of these seemingly unrelated purchases turning up in many people’s feeds suddenly revealed the complex surveillance, data mining and sharing of these data that was taking place (Doyle and Fraser). However, as Beacon was turned on, without consultation, for all Facebook users, there was a sizable backlash that meant that Facebook had to initially switch the service to opt-in, and then discontinue it altogether. While Beacon has been long since erased, it is notable that in early 2013 Facebook announced that they have strengthened partnerships with data mining and profiling companies, including Datalogix, Epsilon, Acxiom, and BlueKai, which harness customer information from a range of loyalty cards, to further refine the targeting ability offered to advertisers using Facebook (Hof). Facebook’s data mining, surveillance and integration across companies is thus still going on, but no longer directly visible to Facebook users, except in terms of the targeted advertisements which appear on the service. Facebook is also a platform, providing a scaffolding and gateway to many other tools and services. In order to use social games such as Zynga’s Farmville, Facebook users agree to allow Zynga to access their profile information, and use Facebook to authenticate their identity. Zynga has been unashamedly at the forefront of user analytics and data mining, attempting to algorithmically determine the best way to make virtual goods within their games attractive enough for users to pay for them with real money. Indeed, during a conference presentation, Zynga Vice President Ken Rudin stated outright that Zynga is “an analytics company masquerading as a games company” (Rudin). I would contend that this masquerade succeeds, as few Farmville players are likely to consider how their every choice and activity is being algorithmically scrutinised in order to determine what virtual goods they might actually buy. As an instance of what is widely being called ‘big data’, the data mining operations of Facebook, Zynga and similar services lead to a range of ethical questions (boyd and Crawford). While users may have ostensibly agreed to this data mining after clicking on Facebook’s Terms of Use agreement, the fact that almost no one reads these agreements when signing up for a service is the Internet’s worst kept secret. Similarly, the extension of these terms when Facebook operates as a platform for other applications is a far from transparent process. While examining the recording of user data leads to questions of privacy and surveillance, it is important to note that many users are often aware of the exchange to which they have agreed. Anders Albrechtslund deploys the term ‘social surveillance’ to usefully emphasise the knowing, playful and at times subversive approach some users take to the surveillance and data mining practices of online service providers. Similarly, E.J. Westlake notes that performances of self online are often not only knowing but deliberately false or misleading with the aim of exploiting the ways online activities are tracked. However, even users well aware of Facebook’s data mining on the site itself may be less informed about the social networking company’s mining of offsite activity. The introduction of ‘like’ buttons on many other Websites extends Facebook’s reach considerably.
The various social plugins and ‘like’ buttons expand both active recording of user activity (where the like button is actually clicked) and passive data mining (since a cookie is installed or updated regardless of whether a button is actually pressed) (Gerlitz and Helmond). Indeed, because cookies – tiny packets of data exchanged and updated invisibly in browsers – assign each user a unique identifier, Facebook can either combine these data with an existing user’s profile or create profiles about non-users. If that person even joins Facebook, their account is connected with the existing, data-mined record of their Web activities (Roosendaal). As with Google, the significant issue here is not users knowingly sharing their data with Facebook, but the often complete lack of transparency in terms of the ways Facebook extracts and mines user data, both on Facebook itself and increasingly across applications using Facebook as a platform and across the Web through social plugins. Google after Death While data mining is clearly a core element in the operation of Facebook and Google, the ability to scrutinise the activities of users depends on those users being active; when someone dies, the question of the value and ownership of their digital assets becomes complicated, as does the way companies manage posthumous user information. For Google, the Gmail account of a deceased person becomes inactive; the stored email still takes up space on Google’s servers, but with no one using the account, no advertising is displayed and thus Google can earn no revenue from the account. However, the process of accessing the Gmail account of a deceased relative is an incredibly laborious one. In order to even begin the process, Google asks that someone physically mails a series of documents including a photocopy of a government-issued ID, the death certificate of the deceased person, evidence of an email the requester received from the deceased, along with other personal information. After Google have received and verified this information, they state that they might proceed to a second stage where further documents are required. Moreover, if at any stage Google decide that they cannot proceed in releasing a deceased relative’s Gmail account, they will not reveal their rationale. As their support documentation states: “because of our concerns for user privacy, if we determine that we cannot provide the Gmail content, we will not be able to share further details about the account or discuss our decision” (Google, “Accessing”). Thus, Google appears to enshrine the rights and privacy of individual users, even posthumously; the ownership or transfer of individual digital assets after death is neither a given, nor enshrined in Google’s policies. Yet, ironically, the economic value of that email to Google is likely zero, but the value of the email history of a loved one or business partner may be of substantial financial and emotional value, probably more so than when that person was alive. For those left behind, the value of email accounts as media, as a lasting record of social communication, is heightened. The question of how Google manages posthumous user data has been further complicated by the company’s March 2012 rationalisation of over seventy separate privacy policies for various tools and services they operate under the umbrella of a single privacy policy accessed using a single unified Google account. 
While this move was ostensibly to make privacy more understandable and transparent at Google, it had other impacts. For example, one of the side effects of a singular privacy policy and single Google identity is that deleting one of a recently deceased person’s services may inadvertently delete them all. Given that Google’s services include Gmail, YouTube and Picasa, this means that deleting an email account inadvertently erases all of the Google-hosted videos and photographs that individual posted during their lifetime. As Google warns, for example: “if you delete the Google Account to which your YouTube account is linked, you will delete both the Google Account AND your YouTube account, including all videos and account data” (Google, “What Happens”). A relative having gained access to a deceased person’s Gmail might sensibly delete the email account once the desired information is exported. However, it seems less likely that this executor would realise that in doing so all of the private and public videos that person had posted on YouTube would also permanently disappear. While material possessions can be carefully dispersed to specific individuals following the instructions in someone’s will, such affordances are not yet available for Google users. While it is entirely understandable that the ramification of policy changes are aimed at living users, as more and more online users pass away, the question of their digital assets becomes increasingly important. Google, for example, might allow a deceased person’s executor to elect which of their Google services should be kept online (perhaps their YouTube videos), which traces can be exported (perhaps their email), and which services can be deleted. At present, the lack of fine-grained controls over a user’s digital estate at Google makes this almost impossible. While it violates Google’s policies to transfer ownership of an account to another person, if someone does leave their passwords behind, this provides their loved ones with the best options in managing their digital legacy with Google. When someone dies and their online legacy is a collection of media fragments, the value of those media is far more apparent to the loved ones left behind rather than the companies housing those media. Facebook Memorialisation In response to users complaining that Facebook was suggesting they reconnect with deceased friends who had left Facebook profiles behind, in 2009 the company instituted an official policy of turning the Facebook profiles of departed users into memorial pages (Kelly). Technically, loved ones can choose between memorialisation and erasing an account altogether, but memorialisation is the default. This entails setting the account so that no one can log into it, and that no new friends (connections) can be made. Existing friends can access the page in line with the user’s final privacy settings, meaning that most friends will be able to post on the memorialised profile to remember that person in various ways (Facebook). Memorialised profiles (now Timelines, after Facebook’s redesign) thus become potential mourning spaces for existing connections. Since memorialised pages cannot make new connections, public memorial pages are increasingly popular on Facebook, frequently set up after a high-profile death, often involving young people, accidents or murder. 
Recent studies suggest that both of these Facebook spaces are allowing new online forms of mourning to emerge (Marwick and Ellison; Carroll and Landry; Kern, Forman, and Gil-Egui), although public pages have the downside of potentially inappropriate commentary and outright trolling (Phillips). Given Facebook has over a billion registered users, estimates already suggest that the platform houses 30 million profiles of deceased people, and this number will, of course, continue to grow (Kaleem). For Facebook, while posthumous users do not generate data themselves, the fact that they were part of a network means that their connections may interact with a memorialised account, or memorial page, and this activity, like all Facebook activities, allows the platform to display advertising and further track user interactions. However, at present Facebook’s options – to memorialise or delete accounts of deceased people – are fairly blunt. Once Facebook is aware that a user has died, no one is allowed to edit that person’s Facebook account or Timeline, so Facebook literally offers an all (memorialisation) or nothing (deletion) option. Given that Facebook is essentially a platform for performing identities, it seems a little short-sighted that executors cannot clean up or otherwise edit the final, lasting profile of a deceased Facebook user. As social networking services and social media become more ingrained in contemporary mourning practices, it may be that Facebook will allow more fine-grained control, positioning a digital executor also as a posthumous curator, making the final decision about what does and does not get kept in the memorialisation process. Since Facebook is continually mining user activity, the popularity of mourning as an activity on Facebook will likely mean that more attention is paid to the question of digital legacies. While the user themselves can no longer be social, the social practices of mourning, and the recording of a user as a media entity highlights the fact that social media can be about interactions which in significant ways include deceased users. Digital Legacy Services While the largest online corporations have fairly blunt tools for addressing digital death, there are a number of new tools and niche services emerging in this area which are attempting to offer nuanced control over digital legacies. Legacy Locker, for example, offers to store the passwords to all of a user’s online services and accounts, from Facebook to Paypal, and to store important documents and other digital material. Users designate beneficiaries who will receive this information after the account holder passes away, and this is confirmed by preselected “verifiers” who can attest to the account holder’s death. Death Switch similarly provides the ability to store and send information to users after the account holder dies, but tests whether someone is alive by sending verification emails; fail to respond to several prompts and Death Switch will determine a user has died, or is incapacitated, and executes the user’s final instructions. Perpetu goes a step further and offers the same tools as Legacy Locker but also automates existing options from social media services, allowing users to specify, for example, that their Facebook, Twitter or Gmail data should be downloaded and this archive should be sent to a designated recipient when the Perpetu user dies. 
These tools attempt to provide a more complex array of choices in terms of managing a user’s digital legacy, providing similar choices to those currently available when addressing material possessions in a formal will. At a broader level, the growing demand for these services attests to the ongoing value of online accounts and social media traces after a user’s death. Bequeathing passwords may not strictly follow the Terms of Use of the online services in question, but it is extremely hard to track or intervene when a user has the legitimate password, even if used by someone else. More to the point, this finely-grained legacy management allows far more flexibility in the utilisation and curation of digital assets posthumously. In the process of signing up for one of these services, or digital legacy management more broadly, the ongoing value and longevity of social media traces becomes more obvious to both the user planning their estate and those who ultimately have to manage it. The Social Media Afterlife The value of social media beyond the grave is also evident in the range of services which allow users to communicate in some fashion after they have passed away. Dead Social, for example, allows users to schedule posthumous social media activity, including the posting of tweets, sending of email, Facebook messages, or the release of online photos and videos. The service relies on a trusted executor confirming someone’s death, and after that releases these final messages effectively from beyond the grave. If I Die is a similar service, which also has an integrated Facebook application which ensures a user’s final message is directly displayed on their Timeline. In a bizarre promotional campaign around a service called If I Die First, the company is promising that the first user of the service to pass away will have their posthumous message delivered to a huge online audience, via popular blogs and mainstream press coverage. While this is not likely to appeal to everyone, the notion of a popular posthumous performance of self further complicates that question of what social media can mean after death. Illustrating the value of social media legacies in a quite different but equally powerful way, the Lives On service purports to algorithmically learn how a person uses Twitter while they are live, and then continue to tweet in their name after death. Internet critic Evgeny Morozov argues that Lives On is part of a Silicon Valley ideology of ‘solutionism’ which casts every facet of society as a problem in need of a digital solution (Morozov). In this instance, Lives On provides some semblance of a solution to the problem of death. While far from defeating death, the very fact that it might be possible to produce any meaningful approximation of a living person’s social media after they die is powerful testimony to the value of data mining and the importance of recognising that value. While Lives On is an experimental service in its infancy, it is worth wondering what sort of posthumous approximation might be built using the robust data profiles held by Facebook or Google. If Google Now can extrapolate what a user wants to see without any additional input, how hard would it be to retool this service to post what a user would have wanted after their death? Could there, in effect, be a Google After(life)? 
Conclusion Users of social media services have differing levels of awareness regarding the exchange they are agreeing to when signing up for services provided by Google or Facebook, and often value the social affordances without necessarily considering the ongoing media they are creating. Online corporations, by contrast, recognise and harness the informatic traces users generate through complex data mining and analysis. However, the death of a social media user provides a moment of rupture which highlights the significant value of the media traces a user leaves behind. More to the point, the value of these media becomes most evident to those left behind precisely because that individual can no longer be social. While beginning to address the issue of posthumous user data, Google and Facebook both have very blunt tools; Google might offer executors access while Facebook provides the option of locking a deceased user’s account as a memorial or removing it altogether. Neither of these responses do justice to the value that these media traces hold for the living, but emerging digital legacy management tools are increasingly providing a richer set of options for digital executors. While the differences between material and digital assets provoke an array of legal, spiritual and moral issues, digital traces nevertheless clearly hold significant and demonstrable value. For social media users, the death of someone they know is often the moment where the media side of social media – their lasting, infinitely replicable nature – becomes more important, more visible, and casts the value of the social media accounts of the living in a new light. For the larger online corporations and service providers, the inevitable increase in deceased users will likely provoke more fine-grained controls and responses to the question of digital legacies and posthumous profiles. It is likely, too, that the increase in online social practices of mourning will open new spaces and arenas for those same corporate giants to analyse and data-mine. References Albrechtslund, Anders. “Online Social Networking as Participatory Surveillance.” First Monday 13.3 (2008). 21 Apr. 2013 ‹http://firstmonday.org/article/view/2142/1949›. boyd, danah. “Facebook’s Privacy Trainwreck: Exposure, Invasion, and Social Convergence.” Convergence 14.1 (2008): 13–20. ———, and Kate Crawford. “Critical Questions for Big Data.” Information, Communication & Society 15.5 (2012): 662–679. Carroll, Brian, and Katie Landry. “Logging On and Letting Out: Using Online Social Networks to Grieve and to Mourn.” Bulletin of Science, Technology & Society 30.5 (2010): 341–349. Doyle, Warwick, and Matthew Fraser. “Facebook, Surveillance and Power.” Facebook and Philosophy: What’s on Your Mind? Ed. D.E. Wittkower. Chicago, IL: Open Court, 2010. 215–230. Facebook. “Deactivating, Deleting & Memorializing Accounts.” Facebook Help Center. 2013. 7 Mar. 2013 ‹http://www.facebook.com/help/359046244166395/›. Gerlitz, Carolin, and Anne Helmond. “The Like Economy: Social Buttons and the Data-intensive Web.” New Media & Society (2013). Google. “Accessing a Deceased Person’s Mail.” 25 Jan. 2013. 21 Apr. 2013 ‹https://support.google.com/mail/answer/14300?hl=en›. ———. “What Happens to YouTube If I Delete My Google Account or Google+?” 8 Jan. 2013. 21 Apr. 2013 ‹http://support.google.com/youtube/bin/answer.py?hl=en&answer=69961&rd=1›. Halavais, Alexander. Search Engine Society. Polity, 2008. Hof, Robert. 
“Facebook Makes It Easier to Target Ads Based on Your Shopping History.” Forbes 27 Feb. 2013. 1 Mar. 2013 ‹http://www.forbes.com/sites/roberthof/2013/02/27/facebook-makes-it-easier-to-target-ads-based-on-your-shopping-history/›. Kaleem, Jaweed. “Death on Facebook Now Common as ‘Dead Profiles’ Create Vast Virtual Cemetery.” Huffington Post. 7 Dec. 2012. 7 Mar. 2013 ‹http://www.huffingtonpost.com/2012/12/07/death-facebook-dead-profiles_n_2245397.html›. Kelly, Max. “Memories of Friends Departed Endure on Facebook.” The Facebook Blog. 27 Oct. 2009. 7 Mar. 2013 ‹http://www.facebook.com/blog/blog.php?post=163091042130›. Kern, Rebecca, Abbe E. Forman, and Gisela Gil-Egui. “R.I.P.: Remain in Perpetuity. Facebook Memorial Pages.” Telematics and Informatics 30.1 (2012): 2–10. Marwick, Alice, and Nicole B. Ellison. “‘There Isn’t Wifi in Heaven!’ Negotiating Visibility on Facebook Memorial Pages.” Journal of Broadcasting & Electronic Media 56.3 (2012): 378–400. Morozov, Evgeny. “The Perils of Perfection.” The New York Times 2 Mar. 2013. 4 Mar. 2013 ‹http://www.nytimes.com/2013/03/03/opinion/sunday/the-perils-of-perfection.html?pagewanted=all&_r=0›. Pariser, Eli. The Filter Bubble: What the Internet Is Hiding from You. London: Viking, 2011. Phillips, Whitney. “LOLing at Tragedy: Facebook Trolls, Memorial Pages and Resistance to Grief Online.” First Monday 16.12 (2011). 21 Apr. 2013 ‹http://firstmonday.org/ojs/index.php/fm/article/view/3168›. Roosendaal, Arnold. “We Are All Connected to Facebook … by Facebook!” European Data Protection: In Good Health? Ed. Serge Gutwirth et al. Dordrecht: Springer, 2012. 3–19. Rudin, Ken. “Actionable Analytics at Zynga: Leveraging Big Data to Make Online Games More Fun and Social.” San Diego, CA, 2010. Vaidhyanathan, Siva. The Googlization of Everything. 1st ed. Berkeley: University of California Press, 2011. Westlake, E.J. “Friend Me If You Facebook: Generation Y and Performative Surveillance.” TDR: The Drama Review 52.4 (2008): 21–40. Zimmer, Michael. “The Externalities of Search 2.0: The Emerging Privacy Threats When the Drive for the Perfect Search Engine Meets Web 2.0.” First Monday 13.3 (2008). 21 Apr. 2013 ‹http://firstmonday.org/ojs/index.php/fm/article/view/2136/1944›. ———. “The Gaze of the Perfect Search Engine: Google as an Infrastructure of Dataveillance.” Web Search. Eds. Amanda Spink & Michael Zimmer. Berlin: Springer, 2008. 77–99.
APA, Harvard, Vancouver, ISO, and other styles
16

Kuntsman, Adi. "“Error: No Such Entry”." M/C Journal 10, no. 5 (2007). http://dx.doi.org/10.5204/mcj.2707.

Full text
Abstract:

 
 
 “Error: no such entry.” “The thread specified does not exist.” These messages appeared every now and then in my cyberethnography – a study of Russian-Israeli queer immigrants and their online social spaces. Anthropological research in cyberspace invites us to rethink the notion of “the field” and the very practice of ethnographic observation. In negotiating my own position as an anthropologist of online sociality, I was particularly inspired by Radhika Gajjala’s notion of “cyberethnography” as an epistemological and methodological practice of examining the relations between self and other, voice and voicelessness, belonging, exclusion, and silencing as they are mediated through information-communication technologies (“Interrupted” 183). The main cyberethnographic site of my research was the queer immigrants’ Website with its news, essays, and photo galleries, as well as the vibrant discussions that took place on the Website’s bulletin board. “The Forum,” as it was known among the participants, was visited daily by dozens, among them newbies, passers-by, and regulars. My study, dedicated to questions of home-making, violence, and belonging, was following the publications that appeared on the Website, as well as the daily discussions on the Forum. Both the publications and the discussions were archived since the Website’s creation in 2001 and throughout my fieldwork that took place in 2003-04. My participant observations of the discussions “in real time” were complemented by archival research, where one would expect to discover an anthropologist’s wildest dreams: the fully-documented everyday life of a community, a word-by-word account of what was said, when, and to whom. Or so I hoped. The “error” messages that appeared when I clicked on some of the links in the archive, or the absence of a thread I knew was there before, raised the question of erasure and deletion, of empty spaces that marked that which used to be, but which had ceased to exist. The “error” messages, in other words, disrupted my cyberethnography through what can be best described as haunting. “Haunting,” writes Avery Gordon in her Ghostly Matters, “describes how that which appears to be not there is often a seething presence, acting on and often meddling with taken-for-granted realities” (8). This essay looks into the seething presence of erasures in online archives. What is the role, I will ask, of online archives in the life of a cybercommunity? How and when are the archives preserved, and by whom? What are the relations between archives, erasure, and home-making in cyberspace? *** Many online communities based on mailing lists, newsgroups, or bulletin boards keep archives of their discussions – archives that at times go on for years. Sometimes they are accessible only to members of lists or communities that created them; other times they are open to all. Archived discussions can act as a form of collective history and as marks of belonging (or exclusion). As the records of everyday conversations remain on the Web, they provide a unique glance into the life of an online collective for a visitor or a newcomer. For those who participated in the discussions browsing through archives can bring nostalgic feelings: memories – pleasurable and/or painful – of times shared with others; memories of themselves in the past. Turning to archived discussions was not an infrequent act in the cybercommunity I studied. 
While there is no way to establish how many participants looked into how many archives, and how often they did so, there is a clear indicator that the archives were visited and reflected on. For one, old threads were sometimes “revived”: technically, a discussion thread is never closed unless the administrator decides to “freeze” it. If the thread is not “frozen,” anyone can go to an old discussion and post there; a new posting would automatically move an archived thread to the list of “recent”/“currently active” ones. As all the postings have times and dates, the reappearance of threads from months ago among the “recent discussions” indicates the use of archives. In addition to such “revivals,” every now and then someone would open a new discussion thread, posting a link to an old discussion and expressing thoughts about it. Sometimes it was a reflection on the Forum itself, or on the changes that took place there; many veteran participants wrote about the archived discussion in a sentimental fashion, remembering “the old days.” Other times it was a reflection on a participant’s life trajectory: looking at one’s old postings, a person would reflect on how s/he changed and sometimes on how the Website and its bulletin board changed his/her life. Looking at old discussions can be seen as performances of belonging: the repetitive reference to the archives constitutes the Forum as home with a multilayered past one can dwell on. Turning to the archives emphasises the importance of preservation, of keeping cyberwords as an object of collective possession and affective attachment. It links the individual and the collective: looking at old threads one can reflect on “how I used to be” and “how the Forum used to be.” Visiting the archives, then, constitutes the Website as simultaneously a site where belonging is performed, and an object of possession that can belong to a collective (Fortier). But the archives preserved on the Forum were never a complete documentation of the discussions. Many postings were edited immediately after appearance or later. In the first two and a half years of the Website’s existence any registered participant, as long as his/her nickname was not banned from the Forum, could browse through his/her messages and edit them. One day in 2003 one person decided to “commit virtual suicide” (as he and others called it). He went through all the postings and, since there was no option for deleting them all at once, he manually erased them one by one. Many participants were shocked to discover his acts, mourning him as well as the archives he damaged. The threads in which he had once taken part still carried signs of his presence: when participants edit their postings, all they can do is delete the text, leaving an empty space in the thread’s framework (only the administrator can modify the framework of a thread and delete text boxes). But the text box with name and date of each posting is still there. “The old discussions don’t make sense now,” a forum participant lamented, “because parts of the arguments are missing.” Following this “suicide” the Website’s administrator decided that from that point on participants could only edit their last posting but could not make any retrospective changes to the archives. Both the participants’ mourning of the mutilated threads and administrator’s decision suggest that there is a desire to preserve the archives as collective possession belonging to all and not to be tampered with by individuals. 
But the many conflicts between the administrator and some participants on what could be posted and what should be censored reveal that another form of ownership/ possession was at stake. “The Website is private property and I can do anything I like,” the administrator often wrote in response to those who questioned his erasure of other people’s postings, or his own rude and aggressive behaviour towards participants. Thus he broke the very rules of netiquette he had established – the Website’s terms of use prohibit personal attacks and aggressive language. Possession-as-belonging here was figured as simultaneously subjected to a collective “code of practice” and as arbitrary, dependant on one person’s changes of mind. Those who were particularly active in challenging the administrator (for example, by stating that although the Website is indeed privately owned, the interactions on the Forum belong to all; or by pointing out to the administrator that he was contradicting his own rules) were banned from the site or threatened with exclusion, and the threads where the banning was announced were sometimes deleted. Following the Forum’s rules, the administrator was censoring messages of an offensive nature, for example, commercial advertisements or links to pornographic Websites, as well as some personal attacks between participants. But among the threads doomed for erasure were also postings of a political nature, in particular those expressing radical left-wing views and opposing the tone of political loyalty dominating the site (while attacks on those participants who expressed the radical views were tolerated and even encouraged by the administrator). *** The archives that remain on the site, then, are not a full documentation of everyday narratives and conversations but the result of selection and regulation of both individual participants and – predominantly – the administrator. These archives are caught between several contradictory approaches to the Forum. One is embedded within the capitalist notion of payment as conferring ownership: I paid for the domain, says the administrator, therefore I own everything that takes place there. Another, manifested in the possibility of editing one’s postings, views cyberspeech as belonging first and foremost to the speaker who can modify and erase them as s/he pleases. The third defines the discussions that take place on the Forum as collective property that cannot be ruled by a single individual, precisely because it is the result of collective interaction. But while the second and the third approaches are shared by most participants, it is the idea of private ownership that seemed to dominate and facilitate most of the erasures. Erasure and modification performed by the administrator were not limited to censorship of particular topics, postings, or threads. The archive of the Forum as a whole was occasionally “cleared.” According to the administrator, the limited space on the site required “clearance” of the oldest threads to make room for new ones. Decisions about such clearances were not shared with anyone, nor were the participants notified about it in advance. One day parts of the archive simply disappeared, as I discovered during my fieldwork. When I began daily observations on the Website in December 2003, I looked at the archives page and saw that the General Forum section of the Forum went back for about a year and a half, and the Lesbian Forum section for about a year. 
I then decided to follow the discussions as they emerged and unfolded for 5-6 months, saving only the most interesting threads in my field diary, and to download all the archived threads later, for future detailed analysis. But to my great surprise, in May 2004 I discovered that while the General Forum still dated back to September 2002, the oldest thread on the Lesbian Forum was dated December 2003! All earlier threads were removed without any notice to Forum participants; and, as I learned later, no record of the threads was kept on- or offline. These examples of erasure and “clearance” demonstrate the complexity of ownership on the site: a mixture of legal and capitalist power intertwined with social hierarchies that determine which discussions and whose words are (more) valuable (The administrator has noted repeatedly that the discussions on the Lesbian Forum are “just chatter.” Ironically, both the differences in style between the General Forum and the Lesbian Forum and the administrator’s account of them resemble the (stereo)typical heterosexual gendering of talk). And while the effects and the results of erasure are compound, they undoubtedly point to the complexity – and fragility! – of “home” in cyberspace and to the constant presence of violence in its constitution. During my fieldwork I felt the strange disparity between the narratives of the Website as a homey space (expressed both in the site’s official description and in some participants’ account of their experiences), and the frequent acts of erasure – not only of particular participants but more broadly of large parts of its archives. All too often, the neat picture of the “community archive” where one can nostalgically dwell on the collective past was disrupted by the “error” message. Error: no such entry. The thread specified does not exist. It was not only the incompleteness of archives that indicated fights and erasures. As I gradually learned throughout my fieldwork, the history of the Website itself was based on internal conflicts, omitted contributions, and constantly modified stories of origins. For example, the story of the Website’s establishment, as it was published in the About Us section of the site and reprinted in celebratory texts of the first anniversaries, presents the site as created by “three fathers.” The three were F, the administrator, M. who wrote, edited, and translated most of the material, and the third person whose name was never mentioned. When I asked about him on the site and later in interviews with both M. and F., they repeatedly and steadily ignored the question, and changed the subject of conversation. But the third “father” was not the only one whose name was omitted. In fact, the original Website was created by three women and another man. M. and F. joined later, and soon afterwards F., who had acted as the administrator during my fieldwork, took over the material and moved the site to another domain. Not only were the original creators erased from the site’s history; they were gradually ostracised from the new Website. When I interviewed two of the women, I mentioned the narrative of the site as a “child of three fathers.” “More like an adopted child,” chuckled one of them with bitterness, and told me the story of the original Website. Moved by their memories, the two took me to the computer. They went to the Internet Archive’s “WayBack Machine” Website – a mega-archive of sorts, an online server that keeps traces of old Web pages.
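For readers unfamiliar with the WayBack Machine, the Internet Archive exposes a public availability endpoint that reports the closest stored snapshot of a given URL, if one exists. The sketch below, which assumes only the Python standard library, shows roughly how such a lookup works; the example address is a placeholder, not the actual Website discussed here.

```python
import json
import urllib.parse
import urllib.request

def latest_snapshot(url):
    """Ask the Internet Archive's availability API for the closest
    archived copy of a page; returns its URL or None."""
    api = ("https://archive.org/wayback/available?url="
           + urllib.parse.quote(url, safe=""))
    with urllib.request.urlopen(api) as response:
        data = json.load(response)
    snapshot = data.get("archived_snapshots", {}).get("closest")
    return snapshot["url"] if snapshot else None

# Placeholder address: the original site's domain is not named in the article.
print(latest_snapshot("http://example.org/old-website"))
```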
One of the women managed to recover several pages of the old Website; sad and nostalgic, she shared with me the few precious traces of what was once her and her friends’ creation. But those, too, were haunted pages – most of the hyperlinks there generated “error” messages instead of actual articles or discussion threads. Error: no such entry. The thread specified does not exist. After a few years of working closely together on their “child,” M. and F. drifted apart, too. The hostility between the two intensified. Old materials (mostly written, translated, or edited by M. over a 3 year period) were moved into an archive by F. the administrator. They were made accessible through a small link hidden at the bottom of the homepage. One day they disappeared completely. Shortly afterwards, in September 2006, the Website celebrated its fifth anniversary. For this occasion the administrator wrote “the history of the Website,” where he presented it as his enterprise, noting in passing two other contributors whose involvement was short and marginal. Their names were not mentioned, but the two were described in a defaming and scornful way. *** So where do the “error” messages take us? What do they tell us about homes and communities in cyberspace? In her elaboration on cybercommunities, Radhika Gajjala notes that: Cyberspace provides a very apt site for the production of shifting yet fetishised frozen homes (shifting as more and more people get online and participate, frozen as their narratives remain on Websites and list archives through time in a timeless floating fashion) (“Interrupted”, 178). Gajjala’s notion of shifting yet fetishised and frozen homes is a useful term for capturing the nature of communication on the Forum throughout the 5 years of its existence. It was indeed a shifting home: many people came and participated, leaving parts of themselves in the archives; others were expelled and banned, leaving empty spaces and traces of erasure in the form of “error” messages. The presence of those erased or “cleared” was no longer registered in words – an ultimate sign of existence in the text-based online communication. And yet, they were there as ghosts, living through the traces left behind and the “seething presence” of haunting (Gordon 8). The Forum was a fetishised home, too, as the negotiation of ownership and the use of old threads demonstrate. However, Gajjala’s vision of archives suggests their wholeness, as if every word and every discussion is “frozen” in its entirety. The idea of fetishised homes does gesture to the complex and complicated reading of the archives; but what is left unproblematic are the archives themselves. Being attentive to the troubled, incomplete, and haunted archives invites a more careful and critical reading of cyberhomes – as Gajjala herself demonstrates in her discussion of online silences – and of the interrelation of violence and belonging in it (CyberSelves 2, 5). Constituted in cyberspace, the archives are embedded in the particular nature of online sociality, with its fantasy of timeless and floating traces, as well as with its brutality of deletion. Cyberwords do remain on archives and servers, sometimes for years; they can become ghosts of people who died or of collectives that no longer exist. But these ghosts, in turn, are haunted by the words and Webpages that never made it into the archives – words that were said but then deleted. 
And of course, cyberwords as fetishised and frozen homes are also haunted by what was never said in the first place, by silences that are as constitutive of homes as the words. References Fortier, Anne-Marie. “Community, Belonging and Intimate Ethnicity.” Modern Italy 11.1 (2006): 63-77. Gajjala, Radhika. “An Interrupted Postcolonial/Feminist Cyberethnography: Complicity and Resistance in the ‘Cyberfield’.” Feminist Media Studies 2.2 (2002): 177-193. Gajjala, Radhika. Cyber Selves: Feminist Ethnographies of South-Asian Women. Oxford: Alta Mira Press, 2004. Gordon, Avery. Ghostly Matters: Haunting and the Sociological Imagination. Minneapolis and London: U of Minnesota P, 1997. 
 
 
 
 Citation reference for this article
 
 MLA Style
 Kuntsman, Adi. "“Error: No Such Entry”: Haunted Ethnographies of Online Archives." M/C Journal 10.5 (2007). [your date of access] <http://journal.media-culture.org.au/0711/05-kuntsman.php>. APA Style
 Kuntsman, A. (Oct. 2007) "“Error: No Such Entry”: Haunted Ethnographies of Online Archives," M/C Journal, 10(5). Retrieved [your date of access] from <http://journal.media-culture.org.au/0711/05-kuntsman.php>. 
APA, Harvard, Vancouver, ISO, and other styles
17

Sturm, Ulrike, Denise Beckton, and Donna Lee Brien. "Curation on Campus: An Exhibition Curatorial Experiment for Creative Industries Students." M/C Journal 18, no. 4 (2015). http://dx.doi.org/10.5204/mcj.1000.

Full text
Abstract:
Introduction The exhibition of an artist’s work is traditionally accepted as representing the final stage of the creative process (Staniszewski). This article asks, however, whether this traditional view can be reassessed so that the curatorial practice of mounting an exhibition becomes, itself, a creative outcome feeding into work that may still be in progress, and that simultaneously operates as a learning and teaching tool. To provide a preliminary examination of the issue, we use a single case study approach, taking an example of practice currently used at an Australian university. In this program, internal and external students work together to develop and deliver an exhibition of their own work in progress. The exhibition space has a professional website (‘CQUniversity Noosa Exhibition Space’), many community members and the local media attend exhibition openings, and the exhibition (which runs for three to four weeks) becomes an outcome students can include in their curriculum vitae. This article reflects on the experiences, challenges, and outcomes that have been gained through this process over the past twelve months. Due to this time frame, the case study is exploratory and its findings are provisional. The case study is an appropriate method to explore a small sample of events (in this case exhibitions) as, following Merriam, it allows a richer picture of an under-examined phenomenon to be constructed. Although it is clear that this approach will not offer results which can be generalised, it can, nevertheless, assist in opening up a field for investigation and constructing a holistic account of a phenomenon (in this case, the exhibition space as authentic learning experience and productive teaching tool), for, as Merriam states, “much can be learned from a particular case” (51). Jennings adds that even the smallest case study is useful as it includes an “in-depth examination of the subject with which to confirm or contest received generalizations” (14). Donmoyer extends thoughts on this, suggesting that the single case study is extremely useful as the “restricted conception of generalizability … solely in terms of sampling and statistical significance is no longer defensible or functional” (45). Using the available student course feedback, anonymous end-of-term course evaluations, and other available information, this case study account offers an example of what Merriam terms a “narrative description” (51), which seeks to offer readers the opportunity to engage and “learn vicariously from an encounter with the case” (Merriam 51) in question. This may, we propose, be particularly productive for other educators since what is “learn[ed] in a particular case can be transferred to similar situations” (Merriam 51). Breaking Ground exhibition, CQUniversity Noosa Exhibition Space, 2014. Photo by Ulrike Sturm. Background The Graduate Certificate of Creative Industries (Creative Practice) (CQU ‘CB82’) was developed in 2011 to meet the national Australian Qualifications Framework agency’s Level 8 (Graduate Certificate) standards in terms of what is called in their policies, the “level” of learning. This states that, following the program, graduates from this level of program “will have advanced knowledge and skills for professional or highly skilled work and/or further learning … [and] will apply knowledge and skills to demonstrate autonomy, well-developed judgment, adaptability and responsibility as a practitioner or learner” (AQF). 
The program was first delivered in 2012 and, since then, has been offered both two and three terms a year, attracting small numbers of students each term, with an average of 8 to 12 students a term. To meet these requirements, such programs are sometimes developed to provide professional and work-integrated learning tasks and learning outcomes for students (Patrick et al., Smith et al.). In this case, professionally relevant and related tasks and outcomes formed the basis for the program, its learning tasks, and its assessment regime. To this end, each student enrolled in this program works on an individual, self-determined (but developed in association with the teaching team and with feedback from peers) creative/professional project that is planned, developed, and delivered across one term of study for full-time students and two terms for part-timers. In order to ensure the AQF-required professional-level outcomes, many projects are designed and/or developed in partnership with professional arts institutions and community bodies. Partnerships mobilised in this way have included those with local, state, and national bodies, including the local arts community, festivals, and educational support programs, as well as private business and community organisations. Student interaction with curation occurs regularly at art schools, where graduate and other student shows are scheduled as regular events on the calendar of most tertiary art schools (Al-Amri), and the curated exhibition as an outcome has a longstanding tradition in tertiary fine arts education (Webb, Brien, and Burr). Yet in these cases, it is ultimately the creative work on show that is the focus of the learning experience and assessment process, rather than any focus on engagement with the curatorial process itself (Dally et al.). When art schools do involve students in the curatorial process, the focus usually still remains on the students' creative work (Sullivan). Another interaction with curation is when students undertaking a tertiary-level course or program in museum and/or curatorial practice are engaged in the process of developing, mounting, and/or critiquing curated activities. These programs are, however, very small in number in Australia, where they are only offered at postgraduate level, with the exception of an undergraduate program at the University of Canberra (‘215JA.2’). By adopting “the exhibition” as a component of the learning process rather than its end product, including documentation of students’ work in progress as exhibition pieces, and incorporating it into a more general creative industries focused program, we argue that the curatorial experience can become an interactive learning platform for students from diverse creative disciplines. The Student Experience Students in the program under consideration in this case study come from a wide spectrum of the creative industries, including creative writing, film, multimedia, music, and visual arts. Each term, at least half of the enrolments are distance students. The decision to establish an on-campus exhibition space was an experimental strategy that sought to bring together students from different creative disciplines and diverse locations, and actively involve them in the exhibition development and curatorial process. 
As well as their individual project work, the students also bring differing levels of prior professional experience to the program, and exhibit a wide range of learning styles and approaches when developing and completing their creative works and exegetical reflections. To cater for the variations listed above, but still meet the program milestones and learning outcomes that must (under the program rules) remain consistent for each student, we employed a multi-disciplinary approach to teaching that included strategies informed by Gardner’s theory of multiple intelligences (Gardner, Frames of Mind), which proposed and defined seven intelligences, and repeatedly criticised what he identified as an over-reliance on linguistic and logical indices as identifiers of intelligence. He asserted that these were traditional indicators of high scores on most IQ measures or tests of achievement but were not representative of overall levels of intelligence. Gardner later reinforced that, “unless individuals take a very active role in what it is that they’re studying, unless they learn to ask questions, to do things hands on, to essentially re-create things in their own mind and transform them as is needed, the ideas just disappear” (Edutopia). In alignment with Gardner’s views, we have noted that students enrolled in the program demonstrate strengths in several key intelligence areas, particularly interpersonal, musical, body-kinaesthetic, and spatial/visual intelligences (see Gardner, ‘Multiple Intelligences’, 8–18). To cater for, and further develop, these strengths, and also for the external students who were unable to attend university-based workshop sessions, we developed a range of resources with various approaches to hands-on creative tasks that related to the projects students were completing that term. These resources included the usual scholarly articles, books, and textbooks but were also sourced from the print and online media, guest speaker presentations, and digital sites such as YouTube and TED Talks, and through student input into group discussions. The positive reception of these individual project-relevant resources is evidenced in the class online discussion forums, where consecutive groups of students have consistently reflected on the positive impact these resources have had on their individual creative projects: This has been a difficult week with many issues presenting. As part of our Free Writing exercise in class, we explored ‘brain dumping’ and wrote anything (no matter how ridiculous) down. The great thing I discovered after completing this task was that by allowing myself to not censor my thoughts by compiling a writing masterpiece, I was indeed “free” to express everything. …. … I understand that this may not have been the original intended goal of Free Writing – but it is something I would highly recommend external students to try and see if it works for you (Student 'A', week 5, term 1 2015, Moodle reflection point). I found our discussion about crowdfunding particularly interesting. ... I intend to look at this model for future exhibitions. I think it could be a great way for me to look into developing an exhibition of paintings alongside some more commercial collateral such as prints and cards (Student 'B', week 6, term 1 2015, Moodle reflection point). In class I specifically enjoyed the black out activity and found the online videos exceptional, inspiring and innovating. 
I really enjoyed this activity and it was something that I can take away and use within the classroom when educating (Student 'C', week 8, term 1 2015, Moodle reflection point). The application of Gardner’s principles and strategies dovetailed with our framework for assessing learning outcomes, where we were guided by Boud’s seven propositions for assessment reform in higher education, which aim to “set directions for change, designed to enhance learning achievements for all students and improve the quality of their experience” (26). Boud asserts that assessment has most effect when: it is used to engage students in productive learning; feedback is used to improve student learning; students and teachers become partners in learning and assessment; students are inducted into the assessment practices of higher education; assessment and learning are placed at the centre of subject and program design; assessment and learning is a focus for staff and institutional development; and, assessment provides inclusive and trustworthy representation of student achievement. These propositions were integral to the design of learning outcomes for the exhibition. Teachers worked with students, individually and as a group, to build their capacity to curate the exhibition, and this included such things as the design and administration of invitations, and also the physical placement of works within the exhibition space. In this way, teachers and students became partners in the process of assessment. The final exhibition, as a learning outcome, meant that students were engaged in productive learning that placed both assessment and knowledge at the centre of subject and project design. It is a collation of creative pieces that embodies the class, as a whole; however, each piece also represents the skills and creativity of individual students and, in this way, is a trustworthy representation of student achievement. While we aimed to employ all seven recommendations, our main focus was on ensuring that the exhibition, as an authentic learning experience, was productive and that the students were engaged as responsible and accountable co-facilitators of it. These factors are particularly relevant as almost all the students were either currently working, or planning to work, in their chosen creative field, where the work would necessarily involve both publication, performance, and/or exhibition of their artwork and collaborative practice across disciplinary boundaries to make this happen (Brien). For this reason, we provided exhibition-related coursework tasks that we hoped were engaging and that also represented an authentic learning outcome for the students. Student Curatorship In this context, the opportunity to exhibit their own works-in-progress provided an authentic reason, with a deadline, for students to both work on and reflect on their creative projects. The documentation of each student’s creative process was showcased as a stand-alone exhibition piece within the display. These exhibits served not only to highlight the different learning styles of each student, but also proved to inspire creativity and skill development. They also provided a working model whereby students (and potential enrollees) could view other students’ work and creative processes from inception to fully-realised project outcomes. 
The sample online reflections quoted above not only highlight the effectiveness of the online content delivery, but this engagement with the online forum also allowed remote students to comment on each other’s projects as well as to respond to issues they were encountering in their project planning and development and creative practice. It was essential that this level of peer engagement was fostered for the curatorial project to be viable, as both internal and external students are involved in designing the invitation, catalogue, labels, and the space itself, while on-campus students hang and label work according to the group’s directions. Distance students send in items. This is a key point of this experiment: the process of curating an exhibition of work from diverse creative fields, and from students located thousands of kilometres apart, as a way of bringing cohesion to a diverse cohort of students. That cohesiveness provided an opportunity for authentic learning to occur because it was in relation to a task that each student apparently understood as personally, academically, and professionally relevant. This was supported by the anonymous course evaluation comments, which were overwhelmingly positive about the exhibition process – there were no negative comments regarding this aspect of the program, and over 60 per cent of the class supplied these evaluations. This also addressed a considerable point of anxiety in the current university environment whereby actively engaging students in online learning interactions is a continuing issue (Dixon, Dixon, and Axmann). A key question is: what relevance does this curatorial process have for a student whose field is not visual art, but, for instance, music, film, or writing? By displaying documentation of work in progress, this process connects students of all disciplines with an audience. For example, one student in 2014, who was a singer/songwriter, had her song available to be played on a laptop, alongside photographs of the studio when she was recording her song with her band. In conjunction with this, the cover artwork for her CD, together with the actual CD and CD cover, were framed and exhibited. Another student, who was also a musician but who was completing a music history project, sent in pages of the music transcriptions he had been working on during the course. This manuscript was bound and exhibited in a way that prompted some audience members to comment that it was like an artist’s book as well as a collection of data. Both of these students lived over 1,000 kilometres from the campus where the exhibition was held, but they were able to share with us as teaching staff, as well as with other students who were involved in the physical setting up of the exhibition, exactly how they envisaged their work being displayed. The feedback from both of these students was that this experience gave them a strong connection to the program. They described how, despite the issue of distance, they had had the opportunity to participate in a professional event that they were very keen to include on their curricula vitae. Another aspect of students actively participating in the curation of an exhibition which features work from diverse disciplines is that these students get a true sense of the collaborative interconnectedness of the disciplines of the creative industries (Brien). 
By way of example, the exhibit of the singer/songwriter referred to above involved not only the student and her band, but also the photographer who took the photographs, and the artist who designed the CD cover. Students collaboratively decided how this material was handled in the exhibition catalogue – all these names were included and their roles described. Breaking Ground exhibition, CQUniversity Noosa Exhibition Space, 2014. Photo by Ulrike Sturm. Outcomes and Conclusion We believe that the curation of an exhibition and the delivery of its constituent components raises student awareness that they are, as creatives, part of a network of industries, developing in them a genuine understanding of the way the creative industries work as a profession outside the academic setting. It is in this sense that this curatorial task is an authentic learning experience. In fact, what was initially perceived as a significant challenge (that is, exhibiting work in progress from diverse creative fields) has become a strength of the curatorial project. In reflecting on the experiences and outcomes that have occurred through the implementation of this example of curatorial practice, both as a learning tool and as a creative outcome in its own right, a key positive indicator for this approach is the high level of student satisfaction with the course, as recorded in the formal, anonymous university student evaluations (with 60–100 per cent of these completed for each term, when the university benchmark is 50 per cent completion), and the high level of professional outcomes achieved post-completion. The university evaluation scores have been in the top (4.5–5/5) range for satisfaction over the program’s eight terms of delivery since 2012. Particularly in relation to subsequent professional outcomes, anecdotal feedback has been that the curatorial process served as an authentic and engaged learning experience because it equipped the students, now graduates of the program, with not only knowledge about how exhibitions work, but also a genuine understanding of the web of connections between the diverse creative arts and industries. Indeed, a number of students have submitted proposals to exhibit professionally in the space after graduation, again providing anecdotal feedback that the experience they gained through our model has had a sustaining impact on their creative practice. While the focus of this activity has been on creative learning for the students, it has also provided an interesting and engaging teaching experience for us as the program’s staff. We will continue to gather evidence relating to our model, and, with the next iteration of the exhibition project, a more detailed comparative analysis will be attempted. At this stage, with ethics approval, we plan to run an anonymous survey with all students involved in this activity, to develop questions for a focus group discussion with graduates. We are also in the process of contacting alumni of the program regarding professional outcomes to map these one, two, and five years after graduation. We will also keep a record of what percentage of students apply to exhibit in the space after graduation, as this will also be an additional marker of how professional and useful they perceive the experience to be. 
In conclusion, it can be stated that the 100 per cent pass rate and 0 per cent attrition rate from the program since its inception, coupled with a high level (over 60 per cent) of student progression to further post-graduate study in the creative industries, has not been detrimentally affected by this curatorial experiment, and has encouraged staff to continue with this approach. References Al-Amri, Mohammed. “Assessment Techniques Practiced in Teaching Art at Sultan Qaboos University in Oman.” International Journal of Education through Art 7.3 (2011): 267–282. AQF Levels. Australian Qualifications Framework website. 18 June 2015 ‹http://www.aqf.edu.au/aqf/in-detail/aqf-levels/›. Boud, D. Student Assessment for Learning in and after Courses: Final Report for Senior Fellowship. Sydney: Australian Learning and Teaching Council, 2010. Brien, Donna Lee, “Higher Education in the Corporate Century: Choosing Collaborative rather than Entrepreneurial or Competitive Models.” New Writing: The International Journal for the Practice and Theory of Creative Writing 4.2 (2007): 157–170. Brien, Donna Lee, and Axel Bruns, eds. “Collaborate.” M/C Journal 9.2 (2006). 18 June 2015 ‹http://journal.media-culture.org.au/0605›. Burton, D. Exhibiting Student Art: The Essential Guide for Teachers. New York: Teachers College Press, Columbia University, New York, 2006. CQUniversity. CB82 Graduate Certificate in Creative Industries. 18 July 2015 ‹https://handbook.cqu.edu.au/programs/index?programCode=CB82›. CQUniversity Noosa Exhibition Space. 20 July 2015 ‹http://www.cqunes.org›. Dally, Kerry, Allyson Holbrook, Miranda Lawry and Anne Graham. “Assessing the Exhibition and the Exegesis in Visual Arts Higher Degrees: Perspectives of Examiners.” Working Papers in Art & Design 3 (2004). 27 June 2015 ‹http://sitem.herts.ac.uk/artdes_research/papers/wpades/vol3/kdabs.html›. Degree Shows, Sydney College of the Arts. 2014. 18 June 2015 ‹http://sydney.edu.au/sca/galleries-events/degree-shows/index.shtml› Dixon, Robert, Kathryn Dixon, and Mandi Axmann. “Online Student Centred Discussion: Creating a Collaborative Learning Environment.” Hello! Where Are You in the Landscape of Educational Technology? Proceedings ASCILITE, Melbourne 2008. 256–264. Donmoyer, Robert. “Generalizability and the Single-Case Study.” Case Study Method: Key Issues, Key Texts. Eds. Roger Gomm, Martyn Hammersley, and Peter Foster. 2000. 45–68. Falk, J.H. “Assessing the Impact of Exhibit Arrangement on Visitor Behavior and Learning.” Curator: The Museum Journal 36.2 (1993): 133–146. Flyvbjerg, Bent. “Five Misunderstandings about Case-Study Research.” Qualitative Inquiry 12.2 (2006): 219–245. Gardner, H. Frames of Mind: The Theory of Multiple Intelligences, New York: Basic Books, 1983. ———. Multiple Intelligences: New Horizons in Theory and Practice, New York: Basic Books, 2006. George Lucas Education Foundation. 2015 Edutopia – What Works in Education. 16 June 2015 ‹http://www.edutopia.org/multiple-intelligences-howard-gardner-video#graph3›. Gerring, John. “What Is a Case Study and What Is It Good For?” American Political Science Review 98.02 (2004): 341–354. Hooper-Greenhill, Eilean. “Museums and Communication: An Introductory Essay.” Museum, Media, Message 1 (1995): 1. Jennings, Paul. The Public House in Bradford, 1770-1970. Keele: Keele University Press, 1995. Levy, Jack S. “Case Studies: Types, Designs, and Logics of Inference.” Conflict Management and Peace Science 25.1 (2008): 1–18. Merriam, Sharan B. 
Qualitative Research: A Guide to Design and Implementation: Revised and Expanded from Qualitative Research and Case Study Applications in Education. Jossey-Bass, 2009. Miles, M., and S. Rainbird. From Critical Distance to Engaged Proximity: Rethinking Assessment Methods to Enhance Interdisciplinary Collaborative Learning in the Creative Arts and Humanities. Final Report to the Australian Government Office for Learning and Teaching, Sydney. 2013. Monash University. Rethinking Assessment to Enhance Interdisciplinary Collaborative Learning in the Creative Arts and Humanities. Sydney: Office of Learning and Teaching, 2013. Muller, L. Reflective Curatorial Practice. 17 June 2015 ‹http://research.it.uts.edu.au/creative/linda/CCSBook/Jan%2021%20web%20pdfs/Muller.pdf›. O’Neill, Paul. Curating Subjects. London: Open Editions, 2007. Patrick, Carol-Joy, Deborah Peach, Catherine Pocknee, Fleur Webb, Marty Fletcher, and Gabriella Pretto. The WIL (Work Integrated Learning) Report: A National Scoping Study [Final Report]. Brisbane: Queensland University of Technology, 2008. Rule, A.C. “Editorial: The Components of Authentic Learning.” Journal of Authentic Learning 3.1 (2006): 1–10. Seawright, Jason, and John Gerring. “Case Selection Techniques in Case Study Research: A Menu of Qualitative and Quantitative Options.” Political Research Quarterly 61.2 (2008): 294–308. Smith, Martin, Sally Brooks, Anna Lichtenberg, Peter McIlveen, Peter Torjul, and Joanne Tyler. Career Development Learning: Maximising the Contribution of Work-Integrated Learning to the Student Experience. Final project report, June 2009. Wollongong: University of Wollongong, 2009. Sousa, D.A. How the Brain Learns: A Teacher’s Guide. 2nd ed. Thousand Oaks, CA: Corwin Press, 2001. Stake, R. “Qualitative Case Studies”. The Sage Handbook of Qualitative Research. 3rd ed. Eds. N.K. Denzin and Y.S. Lincoln. Thousand Oaks, CA: Sage, 2005. 433-466. Staniszewski, Mary Anne. The Power of Display: A History of Exhibition Installations at the Museum of Modern Art. Cambridge, MA: MIT Press, 1998. Sullivan, Graeme. Art Practice as Research: Inquiry in Visual Arts. Thousand Oaks, CA: Sage, 2010. University of Canberra. “Bachelor of Heritage, Museums and Conservation (215JA.2)”. Web. 27 July 2015. Ventzislavov, R. “Idle Arts: Reconsidering the Curator.” The Journal of Aesthetics and Art Criticism 72.1 (2014): 83–93. Verschuren, P. “Case Study as a Research Strategy: Some Ambiguities and Opportunities.” International Journal of Social Research Methodology 6.2 (2003): 121–139. Webb, Jen, and Donna Lee Brien. “Preparing Graduates for Creative Futures: Australian Creative Arts Programs in a Globalising Society.” Partnerships for World Graduates, AIC (Academia, Industry and Community) 2007 Conference, RMIT, Melbourne, 28–30 Nov. 2007. Webb, Jen, Donna Lee Brien, and Sandra Burr. “Doctoral Examination in the Creative Arts: Process, Practices and Standards.” Final Report. Canberra: Office of Learning and Teaching, 2013. Yin, Robert K. Case Study Research: Design and Methods. Thousand Oaks, CA: Sage, 2013.
APA, Harvard, Vancouver, ISO, and other styles
18

Cerratto, Teresa. "Chatting to Learn and Learning to Chat." M/C Journal 3, no. 4 (2000). http://dx.doi.org/10.5204/mcj.1866.

Full text
Abstract:
If we consider learning as a meaning-making process where people construct shared knowledge, it becomes a social dialogical activity in which knowledge is the result of an active process of articulation and reflection within a context (Jonassen et al.). An important element of this belief is that conversation is at the core of learning because knowledge is language-mediated. Within this context, what makes a conversation worthwhile and meaningful is how it is structured, how it is managed by the participants, and most importantly, how it is understood. In particular, conversation is essential in learning situations where the main goal is to generate a new understanding of the world (Bruner). Thus, if conversations can be seen as support for learning processes, the question then becomes how synchronous textual spaces mediate conversation and how chat affects learning. Experienced Teachers Learning in a Collaborative Virtual Environment We studied two different groups of experienced teachers from Kindergarten to Grade 12 (K-12) attending a Master of Education course entitled "Curriculum and Instruction". They communicated through a collaborative virtual environment (CVE) designed to enhance teachers' professional development: TAPPED IN™ (TI). We recorded their on-line conversations over six weeks. The teachers met twice a week for a two-hour session and the data collected consisted of approximately 350-400 pages of text from transcripts. The following concerns, gleaned from an ongoing analysis of on-line conversations, are of interest for this paper: The first concern has to do with the ability of teachers to concentrate on a task while managing multiple simultaneous conversations. The question is how to maintain the focus on the purpose of the goal-oriented task. The second concern is related to the technical characteristics of a CVE and the teachers' feelings of being lost, too slow, or not understanding the point of the discussion. The question is how to deal with this confusion when the aim is to construct meanings from online discussion. The third concern is related to the preceding points. It is concerned with the importance of a leader coaching and guiding experienced teachers online. We examined these three concerns, using TI during the teachers' on-line discussions. Our primary goal in the analysis was to determine i) whether the teachers could conduct their learning activities through TI and ii) how goal-oriented conversations might be affected by the constraints of TI. The following examples come from a personal recorder. Messages are numbered in order to show their position in the session and to show the distance between the messages sent. Implications of Multitasking in Learning Sessions In CVEs, participants have the possibility of performing several tasks simultaneously (Holmevik). This is especially true when participants hold more than one conversation at a time. Participants can talk to one person or to the whole group while also chatting privately with people in the same CVE's room, in the same CVE or even in other CVEs. But the possibility of being able to participate in multiple conversations becomes potentially confusing and disorienting for teachers wanting to achieve a specific task. Let us give an example of how a main task (e.g. to share notes of pedagogical projects -- task 1) fragments into different tasks (e.g. learning a command -- how to create a note -- task 2; and socialise, express feelings and play with cows -- task 3). 
(Note that the students are in fact experienced teachers and a teacher is leading the session. The goal of the on-line session is to read and discuss the different educational projects that the students should have written in virtual notes.) The goal of the task became difficult to accomplish for teachers who were suddenly involved in more than one task at a time. In order to understand what is going on in this situation, participants had to accomplish extra work. They needed to filter messages and rank them to make the main objective of the session clear. In a goal-oriented session such as this, it is extremely important to keep track of the task as well as to concentrate on one activity at time. This entails a necessity to understand current threads in order to contribute to the object of interest for them as individuals and as a group member. Implications of Multi Threads and Floor-Taking in Goal-Oriented Conversations Perseverance with each message creates a parallelism that can become extremely disorienting to participants who intend to produce new understandings and not just maintain an awareness during on-line conversations. The larger the number of participants in a conversation, the more likely it is for fragmentation to occur. The jumbled and quickly scrolling screen can be quite disconcerting. Yet as mentioned by Mynatt et al., even between two participants, multi-threading is common due to the overlapping composition of conversational turns. Participants write simultaneously and the host computer sends the messages out sequentially. Under these conditions, competing conversational threads emerge continuously. It becomes difficult to know who actually holds the floor at the time. Here is an example showing a teacher -- student 2 -- looking for attention and trying to read and understand others' answers to his questions: Student 2 did not read message 26 sent from the teacher with care. In fact, the teacher did explain that there is a part in the assignment where students have to meet in order to exchange ideas about individual projects. Yet although S2's question was answered, S2 still did not understand. A possible reason is that S2 could have been focussed on writing the next question. Again, the teacher answered the question asked in message 29. However, S2 still did not understand in spite of S15 and S6 confirming that the teacher had already covered the question. Student 2 finally understood when the teacher addressed him directly and repeated what the other students had said before. In order to be heard, the students repeated their questions until they had the answer from the teacher. With more than a handful of participants, this attention seeking strategy may make on-line conversations confusing. Goal-oriented conversations then easily degenerate, as mentioned by Colomb and Simutis. These authors point out that one of the most common problems in using CMC is keeping students on task. Even experienced teachers do not escape from the possibility of converting from an instrumental discussion to a social one due to different misunderstanding between interlocutors. To be able to 'send' a message is not equivalent to claiming the 'floor'. An important extra task that teachers have to do in CVEs before sending a message is to think about how it meets the goal of the discussion. Looking for coherence and understanding is a must in learning situations and this becomes a great challenge in online learning sessions. 
On the other hand, different modalities of communication in CVEs may add richness and depth to online conversations when participants can anticipate constraints. Consider another group of teachers. They are discussing readings, and make great use of multiple modalities, such as gestures, to reframe misunderstandings. These gestures provide back channel information and other visual signs. Here is one example of what a group of teachers does in order to avoid embarrassing situations. As Mynatt et al. express, "the availability of multiple modalities gives complexity to the interactional rhythm, because people have choices about what modality to use at any particular moment and for any set of conversation partners" (138). Given these pros and cons of CVEs, the challenge of holding an on-line educational discussion requires the teachers to reestablish the context and control the underlying sense of the conversation. This challenge could also be regarded as an exigency of the medium that 'invites' teachers to structure their conversations in order to encourage meaningful discussions. Importance of a Teacher of Teachers The problems mentioned earlier may be solved more easily when there is a leader at hand. Since these difficulties mainly arise at the start of learning the communication environment, it might be proposed that a leader is most critical in this phase. A comparison of two groups' interactions with and without a leader supports the intuition that a leader is crucial for keeping the learning on track even though the participants are experienced teachers. In this example, the task that the group performed was the same: "learning to attach an icon to their ellipses representing their presence in the system".
Table 1. Data related to groups with and without a teacher
Group                Learners   Icons attached   Messages produced   Time employed
1. Without leader    12         0                549                 56 min 8 sec
2. With leader       9          4                644                 1 h 27 min 52 sec
Fig. 1. Comparing flow and categories of the messages sent by the groups.
These frequencies confirm that teachers without a leader have more problems than the group with a leader. The number of successful icons attached by the groups (0 and 4 icons) demonstrates this claim. What happens is that the number of messages related to 'Task' decreases and those related to 'Relation' increase when there is no leader present -- a result which would be unsurprising among most people who have worked in 'real' classrooms. Messages produced and coded as 'Playing' and 'Feedback' also show a considerable difference between groups. Finally, categories such as 'Whisper' and 'Artifact' present minimal differences between groups in comparison to the others. A leader is a must for the smooth development of on-line conversation. The leader is a sort of mediator between the pedagogical task of the on-line conversation and what appears on the screen. The leader's task is to show which threads are important to follow or not and how messages should be read on the screen. Like an orchestra conductor, the leader coordinates tasks and makes sense of individual actions which are part of a common product and the quality of the on-going conversation. Discussion This ongoing research has demonstrated three important concerns surrounding experienced teachers' professional use of CVE. 
First, teachers chatting online have to anticipate the lack of assurance "that what gets sent gets read" and that gaining the floor in a CVE means "that one's message draws a response and in some way affect the direction of a current thread" (Colomb and Simutis). Teachers have to learn to negotiate turn-taking sequences behind the screen. When chatting, a person's intention to speak is not signalled. Overlapping and interruptions do not exist and non-verbal communication requires knowledge of gesture commands. Negotiating turns in online conversations is concerned with how people express information and what they express. In educational discussion, turns are generally taken when messages either present a good formulation of ideas, express controversial thinking, raise an issue that allows someone else to participate, or provide knowledge on the topic at hand. Second, teachers should learn to collaborate in online conversations. It is essential to be aware that people are writing a text while they discuss. The quality of the conversation will depend, on the one hand, on how teachers manage the discussion and, on the other hand, on the opinions they elaborate together. Third, teachers need leaders in online discussions. A leader has to be able to anticipate the text that the participants are writing. The leader has the responsibility of meeting pedagogical goals with the participants' messages. The leader has to show the coherence or incoherence of the discussion and raise issues that improve the level of the written interaction. These issues are extremely important in a context where people learn through conversations. As Laurillard has mentioned, "academic knowledge relies heavily on symbolic representation as the medium through which it is known. ... Students have to learn to handle the representations system as well as the ideas they represent" (27). Therefore, it is necessary that learners know and think about the rules of online discussion in order to adapt technical commands and effects to their needs. But these rules are in contrast to what participants expect from online conversations. Teachers want to perform their tasks with the support of a computer program; they do not want to learn the computer program per se. CMC in learning activities must be based, not on visionary claims about technology as an all-purpose tool for automatic teaching/learning, but on specific accounts of how and why the technology affects the user's achievement of specific goals. Acknowledgements This study has been supported by a grant from the Swedish Transport & Communication Research Board. We wish to express our gratitude to Judi Fusco, who, in several ways, has been a bridge between the TI community and us. We also want to thank the teachers, CharlesE and FlorenceE, for having the courage to let Tessy 'sit in' on the sessions. The 'expert' session was led by TerryG, whom we also want to thank for her generosity. Susan Wildermuth came to us in the final spurt, and we owe her much for the reliability check, structuring of ideas, and hints about related research. Finally, all students struggling with TI are thanked for their willingness to participate in this study. References Cherny, L. Conversation and Community: Chat in a Virtual World. California: CSLI Publications, 1999. Colomb, and Simutis. "Visible Conversations and Academic Inquiry: CMC in a Culturally Diverse Classroom." Computer-Mediated Communication: Linguistic, Social and Cross-Cultural Perspectives. Ed. Susan Herring. Philadelphia: John Benjamins, 1996. 203-24. 
Bruner, Jerome. Acts of Meaning. Cambridge, MA: Harvard UP, 1990. Holmevik, J., and C. Haynes. MOOniversity: A Student's Guide to Online Learning Environments. Boston: Allyn and Bacon, 2000. Jonassen, D., et al. "Constructivism and Computer-Mediated Communication." Distance Education 9.2 (1992): 7-25. Laurillard, Diana. Rethinking University Teaching: A Framework for the Effective Use of Educational Technology. London: Routledge, 1994. Mynatt, E. D., et al. "Network Communities: Something Old, Something New, Something Borrowed." Computer Supported Cooperative Work: The Journal of Collaborative Computing 7.1-2 (1998): 123-56. Schlager, M., J. Fusco, and P. Schank. "Evolution of an On-line Education Community of Practice." Building Virtual Communities: Learning and Change in Cyberspace. Ed. K. Ann Renninger and W. Shumar. NY: Cambridge UP, 2000. Wærn, Yvonne. "Absent Minds -- On Teacher Professional Development." Journal of Courseware Studies 22 (1999): 441-55. Citation reference for this article MLA style: Teresa Cerratto, Yvonne Wærn. "Chatting to Learn and Learning to Chat in Collaborative Virtual Environments." M/C: A Journal of Media and Culture 3.4 (2000). [your date of access] <http://www.api-network.com/mc/0008/learning.php>. Chicago style: Teresa Cerratto, Yvonne Wærn, "Chatting to Learn and Learning to Chat in Collaborative Virtual Environments," M/C: A Journal of Media and Culture 3, no. 4 (2000), <http://www.api-network.com/mc/0008/learning.php> ([your date of access]). APA style: Teresa Cerratto, Yvonne Wærn. (2000) Chatting to learn and learning to chat in collaborative virtual environments. M/C: A Journal of Media and Culture 3(4). <http://www.api-network.com/mc/0008/learning.php> ([your date of access]).
APA, Harvard, Vancouver, ISO, and other styles
19

Losh, Elizabeth. "Artificial Intelligence." M/C Journal 10, no. 5 (2007). http://dx.doi.org/10.5204/mcj.2710.

Full text
Abstract:

 
 
 On the morning of Thursday, 4 May 2006, the United States House Permanent Select Committee on Intelligence held an open hearing entitled “Terrorist Use of the Internet.” The Intelligence committee meeting was scheduled to take place in Room 1302 of the Longworth Office Building, a Depression-era structure with a neoclassical façade. Because of a dysfunctional elevator, some of the congressional representatives were late to the meeting. During the testimony about the newest political applications for cutting-edge digital technology, the microphones periodically malfunctioned, and witnesses complained of “technical problems” several times. By the end of the day it seemed that what was to be remembered about the hearing was the shocking revelation that terrorists were using videogames to recruit young jihadists. The Associated Press wrote a short, restrained article about the hearing that only mentioned “computer games and recruitment videos” in passing. Eager to have their version of the news item picked up, Reuters made videogames the focus of their coverage with a headline that announced, “Islamists Using US Videogames in Youth Appeal.” Like a game of telephone, as the Reuters videogame story was quickly re-run by several Internet news services, each iteration of the title seemed less true to the exact language of the original. One Internet news service changed the headline to “Islamic militants recruit using U.S. video games.” Fox News re-titled the story again to emphasise that this alert about technological manipulation was coming from recognised specialists in the anti-terrorism surveillance field: “Experts: Islamic Militants Customizing Violent Video Games.” As the story circulated, the body of the article remained largely unchanged, in which the Reuters reporter described the digital materials from Islamic extremists that were shown at the congressional hearing. During the segment that apparently most captured the attention of the wire service reporters, eerie music played as an English-speaking narrator condemned the “infidel” and declared that he had “put a jihad” on them, as aerial shots moved over 3D computer-generated images of flaming oil facilities and mosques covered with geometric designs. Suddenly, this menacing voice-over was interrupted by an explosion, as a virtual rocket was launched into a simulated military helicopter. The Reuters reporter shared this dystopian vision from cyberspace with Western audiences by quoting directly from the chilling commentary and describing a dissonant montage of images and remixed sound. “I was just a boy when the infidels came to my village in Blackhawk helicopters,” a narrator’s voice said as the screen flashed between images of street-level gunfights, explosions and helicopter assaults. Then came a recording of President George W. Bush’s September 16, 2001, statement: “This crusade, this war on terrorism, is going to take a while.” It was edited to repeat the word “crusade,” which Muslims often define as an attack on Islam by Christianity. According to the news reports, the key piece of evidence before Congress seemed to be a film by “SonicJihad” of recorded videogame play, which – according to the experts – was widely distributed online. 
Much of the clip takes place from the point of view of a first-person shooter, seen as if through the eyes of an armed insurgent, but the viewer also periodically sees third-person action in which the player appears as a running figure wearing a red-and-white checked keffiyeh, who dashes toward the screen with a rocket launcher balanced on his shoulder. Significantly, another of the player’s hand-held weapons is a detonator that triggers remote blasts. As jaunty music plays, helicopters, tanks, and armoured vehicles burst into smoke and flame. Finally, at the triumphant ending of the video, a green and white flag bearing a crescent is hoisted aloft into the sky to signify victory by Islamic forces. To explain the existence of this digital alternative history in which jihadists could be conquerors, the Reuters story described the deviousness of the country’s terrorist opponents, who were now apparently modifying popular videogames through their wizardry and inserting anti-American, pro-insurgency content into U.S.-made consumer technology. One of the latest video games modified by militants is the popular “Battlefield 2” from leading video game publisher, Electronic Arts Inc of Redwood City, California. Jeff Brown, a spokesman for Electronic Arts, said enthusiasts often write software modifications, known as “mods,” to video games. “Millions of people create mods on games around the world,” he said. “We have absolutely no control over them. It’s like drawing a mustache on a picture.” Although the Electronic Arts executive dismissed the activities of modders as a “mustache on a picture” that could only be considered little more than childish vandalism of their off-the-shelf corporate product, others saw a more serious form of criminality at work. Testifying experts and the legislators listening on the committee used the video to call for greater Internet surveillance efforts and electronic counter-measures. Within twenty-four hours of the sensationalistic news breaking, however, a group of Battlefield 2 fans was crowing about the idiocy of reporters. The game play footage wasn’t from a high-tech modification of the software by Islamic extremists; it had been posted on a Planet Battlefield forum the previous December of 2005 by a game fan who had cut together regular game play with a Bush remix and a parody snippet of the soundtrack from the 2004 hit comedy film Team America. The voice describing the Black Hawk helicopters was the voice of Trey Parker of South Park cartoon fame, and – much to Parker’s amusement – even the mention of “goats screaming” did not clue spectators in to the fact of a comic source. Ironically, the moment in the movie from which the sound clip is excerpted is one about intelligence gathering. As an agent of Team America, a fictional elite U.S. commando squad, the hero of the film’s all-puppet cast, Gary Johnston, is impersonating a jihadist radical inside a hostile Egyptian tavern that is modelled on the cantina scene from Star Wars. Additional laughs come from the fact that agent Johnston is accepted by the menacing terrorist cell as “Hakmed,” despite the fact that he utters a series of improbable clichés made up of incoherent stereotypes about life in the Middle East while dressed up in a disguise made up of shoe polish and a turban from a bathroom towel. 
The man behind the “SonicJihad” pseudonym turned out to be a twenty-five-year-old hospital administrator named Samir, and what reporters and representatives saw was nothing more exotic than game play from an add-on expansion pack of Battlefield 2, which – like other versions of the game – allows first-person shooter play from the position of the opponent as a standard feature. While SonicJihad initially joined his fellow gamers in ridiculing the mainstream media, he also expressed astonishment and outrage about a larger politics of reception. In one interview he argued that the media illiteracy of Reuters potentially enabled a whole series of category errors, in which harmless gamers could be demonised as terrorists. It wasn’t intended for the purpose what it was portrayed to be by the media. So no I don’t regret making a funny video . . . why should I? The only thing I regret is thinking that news from Reuters was objective and always right. The least they could do is some online research before publishing this. If they label me al-Qaeda just for making this silly video, that makes you think, what is this al-Qaeda? And is everything al-Qaeda? Although Sonic Jihad dismissed his own work as “silly” or “funny,” he expected considerably more from a credible news agency like Reuters: “objective” reporting, “online research,” and fact-checking before “publishing.” Within the week, almost all of the salient details in the Reuters story were revealed to be incorrect. SonicJihad’s film was not made by terrorists or for terrorists: it was not created by “Islamic militants” for “Muslim youths.” The videogame it depicted had not been modified by a “tech-savvy militant” with advanced programming skills. Of course, what is most extraordinary about this story isn’t just that Reuters merely got its facts wrong; it is that a self-identified “parody” video was shown to the august House Intelligence Committee by a team of well-paid “experts” from the Science Applications International Corporation (SAIC), a major contractor with the federal government, as key evidence of terrorist recruitment techniques and abuse of digital networks. Moreover, this story of media illiteracy unfolded in the context of a fundamental Constitutional debate about domestic surveillance via communications technology and the further regulation of digital content by lawmakers. Furthermore, the transcripts of the actual hearing showed that much more than simple gullibility or technological ignorance was in play. Based on their exchanges in the public record, elected representatives and government experts appear to be keenly aware that the digital discourses of an emerging information culture might be challenging their authority and that of the longstanding institutions of knowledge and power with which they are affiliated. These hearings can be seen as representative of a larger historical moment in which emphatic declarations about prohibiting specific practices in digital culture have come to occupy a prominent place at the podium, news desk, or official Web portal. This environment of cultural reaction can be used to explain why policy makers’ reaction to terrorists’ use of networked communication and digital media actually tells us more about our own American ideologies about technology and rhetoric in a contemporary information environment. 
When the experts come forward at the Sonic Jihad hearing to “walk us through the media and some of the products,” they present digital artefacts of an information economy that mirrors many of the features of our own consumption of objects of electronic discourse, which seem dangerously easy to copy and distribute and thus also create confusion about their intended meanings, audiences, and purposes. From this one hearing we can see how the reception of many new digital genres plays out in the public sphere of legislative discourse. Web pages, videogames, and Weblogs are mentioned specifically in the transcript. The main architecture of the witnesses’ presentation to the committee is organised according to the rhetorical conventions of a PowerPoint presentation. Moreover, the arguments made by expert witnesses about the relationship of orality to literacy or of public to private communications in new media are highly relevant to how we might understand other important digital genres, such as electronic mail or text messaging. The hearing also invites consideration of privacy, intellectual property, and digital “rights,” because moral values about freedom and ownership are alluded to by many of the elected representatives present, albeit often through the looking glass of user behaviours imagined as radically Other. For example, terrorists are described as “modders” and “hackers” who subvert those who properly create, own, legitimate, and regulate intellectual property. To explain embarrassing leaks of infinitely replicable digital files, witness Ron Roughead says, “We’re not even sure that they don’t even hack into the kinds of spaces that hold photographs in order to get pictures that our forces have taken.” Another witness, Undersecretary of Defense for Policy and International Affairs, Peter Rodman claims that “any video game that comes out, as soon as the code is released, they will modify it and change the game for their needs.” Thus, the implication of these witnesses’ testimony is that the release of code into the public domain can contribute to political subversion, much as covert intrusion into computer networks by stealthy hackers can. However, the witnesses from the Pentagon and from the government contractor SAIC often present a contradictory image of the supposed terrorists in the hearing transcripts. Sometimes the enemy is depicted as an organisation of technological masterminds, capable of manipulating the computer code of unwitting Americans and snatching their rightful intellectual property away; sometimes those from the opposing forces are depicted as pre-modern and even sub-literate political innocents. In contrast, the congressional representatives seem to focus on similarities when comparing the work of “terrorists” to the everyday digital practices of their constituents and even of themselves. According to the transcripts of this open hearing, legislators on both sides of the aisle express anxiety about domestic patterns of Internet reception. Even the legislators’ own Web pages are potentially disruptive electronic artefacts, particularly when the demands of digital labour interfere with their duties as lawmakers. Although the subject of the hearing is ostensibly terrorist Websites, Representative Anna Eshoo (D-California) bemoans the difficulty of maintaining her own official congressional site. 
As she observes, “So we are – as members, I think we’re very sensitive about what’s on our Website, and if I retained what I had on my Website three years ago, I’d be out of business. So we know that they have to be renewed. They go up, they go down, they’re rebuilt, they’re – you know, the message is targeted to the future.” In their questions, lawmakers identify Weblogs (blogs) as a particular area of concern as a destabilising alternative to authoritative print sources of information from established institutions. Representative Alcee Hastings (D-Florida) compares the polluting power of insurgent bloggers to that of influential online muckrakers from the American political Right. Hastings complains of “garbage on our regular mainstream news that comes from blog sites.” Representative Heather Wilson (R-New Mexico) attempts to project a media-savvy persona by bringing up the “phenomenon of blogging” in conjunction with her questions about jihadist Websites in which she notes how Internet traffic can be magnified by cooperative ventures among groups of ideologically like-minded content-providers: “These Websites, and particularly the most active ones, are they cross-linked? And do they have kind of hot links to your other favorite sites on them?” At one point Representative Wilson asks witness Rodman if he knows “of your 100 hottest sites where the Webmasters are educated? What nationality they are? Where they’re getting their money from?” In her questions, Wilson implicitly acknowledges that Web work reflects influences from pedagogical communities, economic networks of the exchange of capital, and even potentially the specific ideologies of nation-states. It is perhaps indicative of the government contractors’ anachronistic worldview that the witness is unable to answer Wilson’s question. He explains that his agency focuses on the physical location of the server or ISP rather than the social backgrounds of the individuals who might be manufacturing objectionable digital texts. The premise behind the contractors’ working method – surveilling the technical apparatus not the social network – may be related to other beliefs expressed by government witnesses, such as the supposition that jihadist Websites are collectively produced and spontaneously emerge from the indigenous, traditional, tribal culture, instead of assuming that Iraqi insurgents have analogous beliefs, practices, and technological awareness to those in first-world countries. The residual subtexts in the witnesses’ conjectures about competing cultures of orality and literacy may tell us something about a reactionary rhetoric around videogames and digital culture more generally. According to the experts before Congress, the Middle Eastern audience for these videogames and Websites is limited by its membership in a pre-literate society that is only capable of abortive cultural production without access to knowledge that is archived in printed codices. Sometimes the witnesses before Congress seem to be unintentionally channelling the ideas of the late literacy theorist Walter Ong about the “secondary orality” associated with talky electronic media such as television, radio, audio recording, or telephone communication. Later followers of Ong extend this concept of secondary orality to hypertext, hypermedia, e-mail, and blogs, because they similarly share features of both speech and written discourse. 
Although Ong’s disciples celebrate this vibrant reconnection to a mythic, communal past of what Kathleen Welch calls “electric rhetoric,” the defence industry consultants express their profound state of alarm at the potentially dangerous and subversive character of this hybrid form of communication. The concept of an “oral tradition” is first introduced by the expert witnesses in the context of modern marketing and product distribution: “The Internet is used for a variety of things – command and control,” one witness states. “One of the things that’s missed frequently is how and – how effective the adversary is at using the Internet to distribute product. They’re using that distribution network as a modern form of oral tradition, if you will.” Thus, although the Internet can be deployed for hierarchical “command and control” activities, it also functions as a highly efficient peer-to-peer distributed network for disseminating the commodity of information. Throughout the hearings, the witnesses imply that unregulated lateral communication among social actors who are not authorised to speak for nation-states or to produce legitimated expert discourses is potentially destabilising to political order. Witness Eric Michael describes the “oral tradition” and the conventions of communal life in the Middle East to emphasise the primacy of speech in the collective discursive practices of this alien population: “I’d like to point your attention to the media types and the fact that the oral tradition is listed as most important. The other media listed support that. And the significance of the oral tradition is more than just – it’s the medium by which, once it comes off the Internet, it is transferred.” The experts go on to claim that this “oral tradition” can contaminate other media because it functions as “rumor,” the traditional bane of the stately discourse of military leaders since the classical era. The oral tradition now also has an aspect of rumor. A[n] event takes place. There is an explosion in a city. Rumor is that the United States Air Force dropped a bomb and is doing indiscriminate killing. This ends up being discussed on the street. It ends up showing up in a Friday sermon in a mosque or in another religious institution. It then gets recycled into written materials. Media picks up the story and broadcasts it, at which point it’s now a fact. In this particular case that we were telling you about, it showed up on a network television, and their propaganda continues to go back to this false initial report on network television and continue to reiterate that it’s a fact, even though the United States government has proven that it was not a fact, even though the network has since recanted the broadcast. In this example, many-to-many discussion on the “street” is formalised into a one-to many “sermon” and then further stylised using technology in a one-to-many broadcast on “network television” in which “propaganda” that is “false” can no longer be disputed. This “oral tradition” is like digital media, because elements of discourse can be infinitely copied or “recycled,” and it is designed to “reiterate” content. In this hearing, the word “rhetoric” is associated with destructive counter-cultural forces by the witnesses who reiterate cultural truisms dating back to Plato and the Gorgias. 
For example, witness Eric Michael initially presents “rhetoric” as the use of culturally specific and hence untranslatable figures of speech, but he quickly moves to an outright castigation of the entire communicative mode. “Rhetoric,” he tells us, is designed to “distort the truth,” because it is a “selective” assembly or a “distortion.” Rhetoric is also at odds with reason, because it appeals to “emotion” and a romanticised Weltanschauung oriented around discourses of “struggle.” The film by SonicJihad is chosen as the final clip by the witnesses before Congress, because it allegedly combines many different types of emotional appeal, and thus it conveniently ties together all of the themes that the witnesses present to the legislators about unreliable oral or rhetorical sources in the Middle East: And there you see how all these products are linked together. And you can see where the games are set to psychologically condition you to go kill coalition forces. You can see how they use humor. You can see how the entire campaign is carefully crafted to first evoke an emotion and then to evoke a response and to direct that response in the direction that they want. Jihadist digital products, especially videogames, are effective means of manipulation, the witnesses argue, because they employ multiple channels of persuasion and carefully sequenced and integrated subliminal messages. To understand the larger cultural conversation of the hearing, it is important to keep in mind that the related argument that “games” can “psychologically condition” players to be predisposed to violence is one that was important in other congressional hearings of the period, as well as one that played a role in bills and resolutions that were passed by the full body of the legislative branch. In the witness’s testimony an appeal to anti-game sympathies at home is combined with a critique of a closed anti-democratic system abroad in which the circuits of rhetorical production and their composite metonymic chains are described as those that command specific, unvarying, robotic responses. This sharp criticism of the artful use of a presentation style that is “crafted” is ironic, given that the witnesses’ “compilation” of jihadist digital material is staged in the form of a carefully structured PowerPoint presentation, one that is paced to a well-rehearsed rhythm of “slide, please” or “next slide” in the transcript. The transcript also reveals that the members of the House Intelligence Committee were not the original audience for the witnesses’ PowerPoint presentation. Rather, when it was first created by SAIC, this “expert” presentation was designed for training purposes for the troops on the ground, who would be facing the challenges of deployment in hostile terrain. According to the witnesses, having the slide show showcased before Congress was something of an afterthought. Nonetheless, Congressman Tiahrt (R-KS) is so impressed with the rhetorical mastery of the consultants that he tries to appropriate it. As Tiahrt puts it, “I’d like to get a copy of that slide sometime.” From the hearing we also learn that the terrorists’ Websites are threatening precisely because they manifest a polymorphously perverse geometry of expansion. 
For example, one SAIC witness before the House Committee compares the replication and elaboration of digital material online to a “spiderweb.” Like Representative Eshoo’s site, he also notes that the terrorists’ sites go “up” and “down,” but the consultant is left to speculate about whether or not there is any “central coordination” to serve as an organising principle and to explain the persistence and consistency of messages despite the apparent lack of a single authorial ethos to offer a stable, humanised, point of reference. In the hearing, the oft-cited solution to the problem created by the hybridity and iterability of digital rhetoric appears to be “public diplomacy.” Both consultants and lawmakers seem to agree that the damaging messages of the insurgents must be countered with U.S. sanctioned information, and thus the phrase “public diplomacy” appears in the hearing seven times. However, witness Roughhead complains that the protean “oral tradition” and what Henry Jenkins has called the “transmedia” character of digital culture, which often crosses several platforms of traditional print, projection, or broadcast media, stymies their best rhetorical efforts: “I think the point that we’ve tried to make in the briefing is that wherever there’s Internet availability at all, they can then download these – these programs and put them onto compact discs, DVDs, or post them into posters, and provide them to a greater range of people in the oral tradition that they’ve grown up in. And so they only need a few Internet sites in order to distribute and disseminate the message.” Of course, to maintain their share of the government market, the Science Applications International Corporation also employs practices of publicity and promotion through the Internet and digital media. They use HTML Web pages for these purposes, as well as PowerPoint presentations and online video. The rhetoric of the Website of SAIC emphasises their motto “From Science to Solutions.” After a short Flash film about how SAIC scientists and engineers solve “complex technical problems,” the visitor is taken to the home page of the firm that re-emphasises their central message about expertise. The maps, uniforms, and specialised tools and equipment that are depicted in these opening Web pages reinforce an ethos of professional specialisation that is able to respond to multiple threats posed by the “global war on terror.” By 26 June 2006, the incident finally was being described as a “Pentagon Snafu” by ABC News. From the opening of reporter Jake Tapper’s investigative Webcast, established government institutions were put on the spot: “So, how much does the Pentagon know about videogames? Well, when it came to a recent appearance before Congress, apparently not enough.” Indeed, the very language about “experts” that was highlighted in the earlier coverage is repeated by Tapper in mockery, with the significant exception of “independent expert” Ian Bogost of the Georgia Institute of Technology. If the Pentagon and SAIC deride the legitimacy of rhetoric as a cultural practice, Bogost occupies himself with its defence. In his recent book Persuasive Games: The Expressive Power of Videogames, Bogost draws upon the authority of the “2,500 year history of rhetoric” to argue that videogames represent a significant development in that cultural narrative. 
Given that Bogost and his Watercooler Games Weblog co-editor Gonzalo Frasca were actively involved in the detective work that exposed the depth of professional incompetence involved in the government’s line-up of witnesses, it is appropriate that Bogost is given the final words in the ABC exposé. As Bogost says, “We should be deeply bothered by this. We should really be questioning the kind of advice that Congress is getting.” Bogost may be right that Congress received terrible counsel on that day, but a close reading of the transcript reveals that elected officials were much more than passive listeners: in fact they were lively participants in a cultural conversation about regulating digital media. After looking at the actual language of these exchanges, it seems that the persuasiveness of the misinformation from the Pentagon and SAIC had as much to do with lawmakers’ preconceived anxieties about practices of computer-mediated communication close to home as it did with the contradictory stereotypes that were presented to them about Internet practices abroad. In other words, lawmakers found themselves looking into a fun house mirror that distorted what should have been familiar artefacts of American popular culture because it was precisely what they wanted to see. References ABC News. “Terrorist Videogame?” Nightline Online. 21 June 2006. 22 June 2006 <http://abcnews.go.com/Video/playerIndex?id=2105341>. Bogost, Ian. Persuasive Games: The Expressive Power of Videogames. Cambridge, MA: MIT Press, 2007. Game Politics. “Was Congress Misled by ‘Terrorist’ Game Video? We Talk to Gamer Who Created the Footage.” 11 May 2006. <http://gamepolitics.livejournal.com/285129.html#cutid1>. Jenkins, Henry. Convergence Culture: Where Old and New Media Collide. New York: New York UP, 2006. julieb. “David Morgan Is a Horrible Writer and Should Be Fired.” Online posting. 5 May 2006. Dvorak Uncensored Cage Match Forums. <http://cagematch.dvorak.org/index.php/topic,130.0.html>. Mahmood. “Terrorists Don’t Recruit with Battlefield 2.” GGL Global Gaming. 16 May 2006 <http://www.ggl.com/news.php?NewsId=3090>. Morgan, David. “Islamists Using U.S. Video Games in Youth Appeal.” Reuters online news service. 4 May 2006 <http://today.reuters.com/news/ArticleNews.aspx?type=topNews&storyID=2006-05-04T215543Z_01_N04305973_RTRUKOC_0_US-SECURITY-VIDEOGAMES.xml&pageNumber=0&imageid=&cap=&sz=13&WTModLoc=NewsArt-C1-ArticlePage2>. Ong, Walter J. Orality and Literacy: The Technologizing of the Word. London/New York: Methuen, 1982. Parker, Trey. Online posting. 7 May 2006. 9 May 2006 <http://www.treyparker.com>. Plato. “Gorgias.” Plato: Collected Dialogues. Princeton: Princeton UP, 1961. Shrader, Katherine. “Pentagon Surfing Thousands of Jihad Sites.” Associated Press 4 May 2006. SonicJihad. “SonicJihad: A Day in the Life of a Resistance Fighter.” Online posting. 26 Dec. 2005. Planet Battlefield Forums. 9 May 2006 <http://www.forumplanet.com/planetbattlefield/topic.asp?fid=13670&tid=1806909&p=1>. Tapper, Jake, and Audrey Taylor. “Terrorist Video Game or Pentagon Snafu?” ABC News Nightline 21 June 2006. 30 June 2006 <http://abcnews.go.com/Nightline/Technology/story?id=2105128&page=1>. U.S. Congressional Record. Panel I of the Hearing of the House Select Intelligence Committee, Subject: “Terrorist Use of the Internet for Communications.” Federal News Service. 4 May 2006. Welch, Kathleen E. Electric Rhetoric: Classical Rhetoric, Oralism, and the New Literacy. Cambridge, MA: MIT Press, 1999. 
 
 
 
 Citation reference for this article
 
 MLA Style
Losh, Elizabeth. "Artificial Intelligence: Media Illiteracy and the SonicJihad Debacle in Congress." M/C Journal 10.5 (2007). [your date of access] <http://journal.media-culture.org.au/0710/08-losh.php>. APA Style
Losh, E. (Oct. 2007) "Artificial Intelligence: Media Illiteracy and the SonicJihad Debacle in Congress," M/C Journal, 10(5). Retrieved [your date of access] from <http://journal.media-culture.org.au/0710/08-losh.php>. 
APA, Harvard, Vancouver, ISO, and other styles
20

O'Boyle, Neil. "Plucky Little People on Tour: Depictions of Irish Football Fans at Euro 2016." M/C Journal 20, no. 4 (2017). http://dx.doi.org/10.5204/mcj.1246.

Full text
Abstract:
I called your producer on the way here in the car because I was very excited. I found out … I did one of those genetic testing things and I found out that I'm 63 percent Irish … I had no idea. I had no idea! I thought I was Scottish and Welsh. It turns out my parents are just full of shit, I guess. But now I’m Irish and it just makes so much sense! I'm a really good drinker. I love St. Patrick's Day. Potatoes are delicious. I'm looking forward to meeting all my cousins … [to Conan O’Brien] You and I are probably related! … Now I get to say things like, “It’s in me genes! I love that Conan O’Brien; he’s such a nice fella.” You’re kinda like a giant leprechaun. (Reese Witherspoon, Tuesday 21 March 2017) Introduction As an Irishman and a football fan, I watched the unfolding 2016 UEFA European Championship in France (hereafter ‘Euro 2016’) with a mixture of trepidation and delight. Although the Republic of Ireland team was eventually knocked out of the competition in defeat to the host nation, the players performed extremely well – most notably in defeating Italy 1:0. It is not the on-field performance of the Irish team that interests me in this short article, however, but rather how Irish fans travelling to the competition were depicted in the surrounding international news coverage. In particular, I focus on the centrality of fan footage – shot on smart phones and uploaded to YouTube (in most cases by fans themselves) – in this news coverage. In doing so, I reflect on how sports fans contribute to wider understandings of nationness in the global imagination and how their behaviour is often interpreted (as in the case here) through long-established tropes about people and places. The Media Manifold To “depict” something is to represent it in words and pictures. As the contemporary world is largely shaped by and dependent on mass media – and different forms of media have merged (or “converged”) through digital media platforms – mediated forms of depiction have become increasingly important in our lives. On one hand, the constant connectivity made possible in the digital age has made the representation of people and places less controllable, insofar as the information and knowledge about our world circulating through media devices are partly created by ordinary people. On the other hand, traditional broadcast media arguably remain the dominant narrators of people and places worldwide, and their stories, Gerbner reminds us, are largely formula-driven and dramatically charged, and work to “retribalize” modern society. However, a more important point, I suggest, is that so-called new and old media can no longer be thought of as separate and discrete; rather, our attention should focus on the complex interrelations made possible by deep mediatisation (Couldry and Hepp). As an example, consider that the YouTube video of Reese Witherspoon’s recent appearance on the Conan O’Brien chat show – from which the passage at the start of this article is taken – had already been viewed 54,669 times when I first viewed it, a mere 16 hours after it was originally posted. At that point, the televised interview had already been reported on in a variety of international digital news outlets, including rte.ie, independent.ie, nydailynews.com, msn.com, huffingtonpost.com, cote-ivoire.com – and myriad entertainment news sites. 
In other words, this short interview was consumed synchronously and asynchronously, over a number of different media platforms; it was viewed and reviewed, and critiqued and commented upon, and in turn found itself the subject of news commentary, which fed the ongoing cycle. And yet, it is important to also note that a multiplicity of media interactions does not automatically give rise to oppositional discourse and ideological contestation, as is sometimes assumed. In fact, how ostensibly ‘different’ kinds of media can work to produce a broadly shared construction of a people and place is particularly relevant here. Just as Reese Witherspoon’s interview on the Conan O’Brien show perpetuates a highly stereotypical version of Irishness across a number of platforms, news coverage of Irish fans at Euro 2016 largely conformed to established tropes about Irish people, but this was also fed – to some extent – by Irish fans themselves. Irish Identity, Sport, and the Global Imagination There is insufficient space here to describe in any detail the evolving representation of Irish identity, about which a vast literature has developed (nationally and internationally) over the past several decades. As with other varieties of nationness, Irishness has been constructed across a variety of cultural forms, including advertising, art, film, novels, travel brochures, plays and documentaries. Importantly, Irishness has also to a great extent been constructed outside of Ireland (Arrowsmith; Negra). As is well known, the Irish were historically constructed by their colonial masters as a small uncivilised race – as primitive wayward children, prone to “sentimentality, ineffectuality, nervous excitability and unworldliness” (Fanning 33). When pondering the “Celtic nature,” the renowned English poet and cultural critic Matthew Arnold concluded that “sentimental” was the best single term to use (100). This perception pervaded internationally, with early depictions of Irish-Americans in US cinema centring on varieties of negative excess, such as lawlessness, drunkenness and violence (Rains). Against this prevailing image of negative excess, the intellectuals and artists associated with what became known as the Celtic Revival began a conscious effort to “rebrand” Ireland from the nineteenth century onwards, reversing the negatives of the colonial project and celebrating Irish tradition, language and culture (Fanning). At first, only distinctly Irish sports associated with the amateur Gaelic Athletic Association (GAA) were co-opted in this very particular nation-building project. Since then, however, sport more generally has acted as a site for the negotiation of a variety of overlapping Irish identities. Cronin, for example, describes how the GAA successfully repackaged itself in the 1990s to reflect the confidence of Celtic Tiger Irishness while also remaining rooted in the counties and parishes across Ireland. Studies of Irish football and rugby have similarly examined how these sports have functioned as representatives of changed or evolving Irish identities (Arrowsmith; Free). And yet, throughout Ireland’s changing economic fortunes – from boom to bust, to the gradual renewal of late – a touristic image of Irishness has remained hegemonic in the global imagination. In popular culture, and especially American popular culture, Ireland is often depicted as a kind of pre-industrial theme park – a place where the effects of modernity are felt less, or are erased altogether (Negra). 
The Irish are known for their charm and sociability; in Clancy’s words, they are seen internationally as “simple, clever and friendly folk” (98). We can identify a number of representational tropes within this dominant image, but two in particular are apposite here: ‘smallness’ and ‘happy-go-luckiness’. Sporting News Before we consider Euro 2016, it is worth briefly considering how the news industry approaches such events. “News”, Dahlgren reminds us, is not so much “information” as it is a specific kind of cultural discourse. News, in other words, is a particular kind of discursive composition that constructs and narrates stories in particular ways. Approaching sports coverage from this vantage point, Poulton and Roderick (xviii) suggest that “sport offers everything a good story should have: heroes and villains, triumph and disaster, achievement and despair, tension and drama.” Similarly, Jason Tuck observes that the media have long had a tendency to employ the “vocabulary of war” to “hype up sporting events,” a discursive tactic which, he argues, links “the two areas of life where the nation is a primary signifier” (190-191). In short, sport is abundant in news values, and media professionals strive to produce coverage that is attractive, interesting and exciting for audiences. Stead (340) suggests that there are three key characteristics governing the production of “media sports packages”: spectacularisation, dramatisation, and personalisation. These production characteristics ensure that sports coverage is exciting and interesting for viewers, but that it also in some respects conforms to their expectations. “This ‘emergent’ quality of sport in the media helps meet the perpetual audience need for something new and different alongside what is familiar and known” (Rowe 32). The disproportionate attention to Irish fans at Euro 2016 was perhaps new, but the overall depiction of the Irish was rather old, I would argue. The news discourse surrounding Euro 2016 worked to suggest, in the Irish case at least, that the nation was embodied not only in its on-field athletic representatives but more so, perhaps, in its travelling fans. Euro 2016 In June 2016 the Euros kicked off in France, with the home team beating Romania 2-1. Despite widespread fears of potential terrorist attacks and disruption, the event passed successfully, with Portugal eventually lifting the Henri Delaunay Trophy. As the competition progressed, the behaviour of Irish fans quickly became a central news story, fuelled in large part by smart phone footage uploaded to the internet by Irish fans themselves. Amongst the many videos uploaded to the internet, several became the focus of news reports, especially those in which the goodwill and childlike playfulness of the Irish were on show. In one such video, Irish fans are seen singing lullabies to a baby on a Bordeaux train. In another video, Irish fans appear to help a French couple change a flat tire. In yet another video, Irish fans sing cheerfully as they clean up beer cans and bottles. (It is noteworthy that as of July 2017, some of these videos have been viewed several million times.) News providers quickly turned their attention to Irish fans, sometimes using these to draw stark contrasts with the behaviour of other fans, notably English and Russian fans. 
Buzzfeed, followed by ESPN, followed by Sky News, Le Monde, Fox News, the Washington Post and numerous other providers celebrated the exploits of Irish fans, with some such as Sky News and Aljazeera going so far as to produce video montages of the most “memorable moments” involving “the boys in green.” In an article titled ‘Irish fans win admirers at Euro 2016,’ Fox News reported that “social media is full of examples of Irish kindness” and that “that Irish wit has been a fixture at the tournament.” Aljazeera’s AJ+ news channel produced a video montage titled ‘Are Irish fans the champions of Euro 2016?’ which included spliced footage from some of the aforementioned videos. The Daily Mirror (UK edition) praised their “fun loving approach to watching football.” Similarly, a headline for NPR declared, “And as if they could not be adorable enough, in a quiet moment, Irish fans sang on a French train to help lull a baby to sleep.” It is important to note that viewer comments under many of these articles and videos were also generally effusive in their praise. For example, under the video ‘Irish Fans help French couple change flat tire,’ one viewer (Amsterdam 410) commented, ‘Irish people nicest people in world by far. they always happy just amazing people.’ Another (Juan Ardilla) commented, ‘Irish fans restored my faith in humanity.’ As the final stages of the tournament approached, the Mayor of Paris announced that she was awarding the Medal of the City of Paris to Irish fans for their sporting goodwill. Back home in Ireland, the behaviour of Irish fans in France was also celebrated, with President Michael D. Higgins commenting that “Ireland could not wish for better ambassadors abroad.” In all of this news coverage, the humble kindness, helpfulness and friendliness of the Irish are depicted as native qualities and crystallise as a kind of ideal national character. Though laudatory, the tropes of smallness and happy-go-luckiness are again evident here, as is the recurrent depiction of Irishness as an ‘innocent identity’ (Negra). The “boys” in green are spirited in a non-threatening way, as children generally are. Notably, Stephan Reich, journalist with German sports magazine 11Freunde wrote: “the qualification of the Irish is a godsend. The Boys in Green can celebrate like no other nation, always peaceful, always sympathetic and emphatic, with an infectious, childlike joy.” Irishness as Antidote? The centrality of the Irish fan footage in the international news coverage of Euro 2016 is significant, I suggest, but interpreting its meaning is not a simple or straightforward task. Fans (like everyone) make choices about how to present themselves, and these choices are partly conscious and partly unconscious, partly spontaneous and partly conditioned. Pope (2008), for example, draws on Emile Durkheim to explain the behaviour of sports fans sociologically. “Sporting events,” Pope tells us, “exemplify the conditions of religious ritual: high rates of group interaction, focus on sacred symbols, and collective ritual behaviour symbolising group membership and strengthening shared beliefs, values, aspirations and emotions” (Pope 85). Pope reminds us, in other words, that what fans do and say, and wear and sing – in short, how they perform – is partly spontaneous and situated, and partly governed by a long-established fandom pedagogy that implies familiarity with a whole range of international football fan styles and embodied performances (Rowe). 
To this, we must add that fans of a national sports team generally uphold shared understandings of what constitutes desirable and appropriate patriotic behaviour. Finally, in the case reported here, we must also consider that the behaviour of Irish fans was also partly shaped by their awareness of participating in the developing media sport spectacle and, indeed, of their own position as ‘suppliers’ of news content. In effect, Irish fans at Euro 2016 occupied an interesting hybrid position between passive consumption and active production – ‘produser’ fans, as it were. On one hand, therefore, we can consider fan footage as evidence of spontaneous displays of affective unity, captured by fellow participants. The realism or ‘authenticity’ of these supposedly natural and unscripted performances is conveyed by the grainy images, and amateur, shaky camerawork, which ironically work to create an impression of unmediated reality (see Goldman and Papson). On the other hand, Mike Cronin considers them contrived, staged, and knowingly performative, and suggestive of “hyper-aware” Irish fans playing up to the camera. However, regardless of how we might explain or interpret these fan performances, it is the fact that they play a role in making Irishness public that most interests me here. For my purposes, the most important consideration is how the patriotic performances of Irish fans both fed and harmonized with the developing news coverage; the resulting depiction of the Irish was partly an outcome of journalistic conventions and partly a consequence of the self-essentialising performances of Irish fans. In a sense, these fan-centred videos were ready-made or ‘packaged’ for an international news audience: they are short, dramatic and entertaining, and their ideological content is in keeping with established tropes about Irishness. As a consequence, the media-sport discourse surrounding Euro 2016 – itself a mixture of international news values and home-grown essentialism – valorised a largely touristic understanding of Irishness, albeit one that many Irish people wilfully celebrate. Why such a construction of Irishness is internationally appealing is unclear, but it is certainly not new. John Fanning (26) cites a number of writers in highlighting that Ireland has long nurtured a romantic self-image that presents the country as a kind of balm for the complexities of the modern world. For example, he cites New York Times columnist Thomas Friedman, who observed in 2001 that “people all over the world are looking to Ireland for its reservoir of spirituality hoping to siphon off what they can feed to their souls which have become hungry for something other than consumption and computers.” Similarly, Diane Negra writes that “virtually every form of popular culture has in one way or another, presented Irishness as a moral antidote to contemporary ills ranging from globalisation to post-modern alienation, from crises over the meaning and practice of family values to environmental destruction” (3). Earlier, I described the Arnoldian image of the Irish as a race governed by ‘negative excess’. Arguably, in a time of profound ideological division and resurgent cultural nationalism – a time of polarisation and populism, of Trumpism and Euroscepticism – this ‘excess’ has once again been positively recoded, and now it is the ‘sentimental excess’ of the Irish that is imagined as a salve for the cultural schisms of our time. Conclusion Much has been made of new media powers to contest official discourses. 
Sports fans, too, are now considered much less ‘controllable’ on account of their ability to disrupt official messages online (as well as offline). The case of Irish fans at Euro 2016, however, offers a reminder that we must avoid routine assumptions that the “uses” made of “new” and “old” media are necessarily divergent (Rowe, Ruddock and Hutchins). My interest here was less in what any single news item or fan-produced video tells us, but rather in the aggregate construction of Irishness that emerges in the media-sport discourse surrounding this event. Relatedly, in writing about the London Olympics, Wardle observed that most of what appeared on social media concerning the Games did not depart significantly from the celebratory tone of mainstream news media organisations. “In fact the absence of any story that threatened the hegemonic vision of the Games as nation-builder, shows that while social media provided an additional and new form of newsgathering, it had to fit within the traditional news structures, routines and agenda” (Wardle 12). Obviously, it is important to acknowledge the contestability of all media texts, including the news items and fan footage mentioned here, and to recognise that such texts are open to multiple interpretations based on diverse reading positions. And yet, here I have suggested that there is something of a ‘preferred’ reading in the depiction of Irish fans at Euro 2016. The news coverage, and the footage on which it draws, are important because of what they collectively suggest about Irish national identity: here we witness a shift from identity performance to identity writ large, and one means of analysing their international (and intertextual significance), I have suggested, is to view them through the prism of established tropes about Irishness. Travelling sports fans – for better or worse – are ‘carriers’ of places and cultures, and they remind us that “there is also a cultural economy of sport, where information, images, ideas and rhetorics are exchanged, where symbolic value is added, where metaphorical (and sometimes literal, in the case of publicly listed sports clubs) stocks rise and fall” (Rowe 24). There is no question, to borrow Rowe’s term, that Ireland’s ‘stocks’ rose considerably on account of Euro 2016. In news terms, Irish fans provided entertainment value; they were the ‘human interest’ story of the tournament; they were the ‘feel-good’ factor of the event – and importantly, they were the suppliers of much of this content (albeit unofficially). Ultimately, I suggest that we think of the overall depiction of the Irish at Euro 2016 as a co-construction of international news media practices and the self-presentational practices of Irish fans themselves. The result was not simply a depiction of idealised fandom, but more importantly, an idealisation of a people and a place, in which the plucky little people on tour became the global standard bearers of Irish identity. References Arnold, Matthew. Celtic Literature. Carolina: Lulu Press, 2013. Arrowsmith, Aidan. “Plastic Paddies vs. Master Racers: ‘Soccer’ and Irish Identity.” International Journal of Cultural Studies 7.4 (2004). 25 Mar. 2017 <http://journals.sagepub.com/doi/pdf/10.1177/1367877904047864>. Clancy, Michael. Brand New Ireland: Tourism, Development and National Identity in the Irish Republic. 
Surrey and Vermont: Ashgate, 2009. Couldry, Nick, and Andreas Hepp. The Mediated Construction of Reality. Cambridge: Polity Press, 2016. Cronin, Michael. “Is It for the Glamour? Masculinity, Nationhood and Amateurism in Contemporary Projections of the Gaelic Athletic Association.” Irish Postmodernisms and Popular Culture. Eds. Wanda Balzano, Anne Mulhall, and Moynagh Sullivan. New York: Palgrave Macmillan, 2007. 39–51. Cronin, Mike. “Serenading Nuns: Irish Soccer Fandom as Performance.” Post-Celtic Tiger Irishness Symposium, Trinity College Dublin, 25 Nov. 2016. Dahlgren, Peter. “Beyond Information: TV News as a Cultural Discourse.” The European Journal of Communication Research 12.2 (1986): 125–36. Fanning, John. “Branding and Begorrah: The Importance of Ireland’s Nation Brand Image.” Irish Marketing Review 21.1-2 (2011). 25 Mar. 2017 <https://www.dit.ie/media/newsdocuments/2011/3%20Fanning.pdf>. Free, Marcus. “Diaspora and Rootedness, Amateurism and Professionalism in Media Discourses of Irish Soccer and Rugby in the 1990s and 2000s.” Éire-Ireland 48.1–2 (2013). 25 Mar. 2017 <https://muse.jhu.edu/article/510693/pdf>. Friedman, Thomas. “Foreign Affairs: The Lexus and the Shamrock.” The Opinion Pages. New York Times 3 Aug. 2001 <http://www.nytimes.com/2001/08/03/opinion/foreign-affairs-the-lexus-and-the-shamrock.html>. Gerbner, George. “The Stories We Tell and the Stories We Sell.” Journal of International Communication 18.2 (2012). 25 Mar. 2017 <http://dx.doi.org/10.1080/13216597.2012.709928>. Goldman, Robert, and Stephen Papson. Sign Wars: The Cluttered Landscape of Advertising. New York: Guilford Press, 1996. Negra, Diane. The Irish in Us. Durham, NC: Duke University Press, 2006. Pope, Whitney. “Emile Durkheim.” Key Sociological Thinkers. 2nd ed. Ed. Rob Stones. Hampshire: Palgrave Macmillan, 2008. 76-89. Poulton, Emma, and Martin Roderick. Sport in Films. London: Routledge, 2008. Rains, Stephanie. The Irish-American in Popular Culture 1945-2000. Dublin: Irish Academic Press, 2007. Rowe, David, Andy Ruddock, and Brett Hutchins. “Cultures of Complaint: Online Fan Message Boards and Networked Digital Media Sport Communities.” Convergence: The International Journal of Research into New Media Technology 16.3 (2010). 25 Mar. 2017 <http://journals.sagepub.com/doi/abs/10.1177/1354856510367622>. Rowe, David. Sport, Culture and the Media: The Unruly Trinity. 2nd ed. Berkshire: Open University Press, 2004. Stead, David. “Sport and the Media.” Sport and Society: A Student Introduction. 2nd ed. Ed. Barrie Houlihan. London: Sage, 2008. 328-347. Wardle, Claire. “Social Media, Newsgathering and the Olympics.” Journalism, Media and Cultural Studies 2 (2012). 25 Mar. 2017 <https://publications.cardiffuniversitypress.org/index.php/JOMEC/article/view/304>.
APA, Harvard, Vancouver, ISO, and other styles