Journal articles on the topic 'International Rice Research Institute. Information society Information technology'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 journal articles for your research on the topic 'International Rice Research Institute. Information society Information technology.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles across a wide variety of disciplines and organise your bibliography correctly.

1

Farmer, Kristine, Jeff Allen, Malak Khader, Tara Zimmerman, and Peter Johnstone. "Paralegal Students’ and Paralegal Instructors’ Perceptions of Synchronous and Asynchronous Online Paralegal Course Effectiveness: A Comparative Study." International Journal for Educational and Vocational Studies 3, no. 1 (March 30, 2021): 1. http://dx.doi.org/10.29103/ijevs.v3i1.3550.

Full text
Abstract:
To improve online learning pedagogy within the field of paralegal education, this study investigated how paralegal students and paralegal instructors perceived the effectiveness of synchronous and asynchronous online paralegal courses. This study intended to inform paralegal instructors and course developers how to better design, deliver, and evaluate effective online course instruction in the field of paralegal studies. Survey results were analyzed using independent samples t-tests and correlational analysis, and indicated that, overall, paralegal students and paralegal instructors positively perceived synchronous and asynchronous online paralegal courses. Paralegal instructors reported significantly higher perceptions than paralegal students: (1) of instructional design and course content in synchronous online paralegal courses; and (2) of technical assistance, communication, and course content in asynchronous online paralegal courses. Instructors also reported higher perceptions of the effectiveness of universal design, online instructional design, and course content in synchronous online paralegal courses than in asynchronous online paralegal courses. Paralegal students reported higher perceptions of asynchronous online paralegal course effectiveness regarding universal design than paralegal instructors. No statistically significant differences existed between paralegal students' perceptions of the effectiveness of synchronous and asynchronous online paralegal courses. A strong, negative relationship existed between paralegal students' age and their perceptions of effective synchronous paralegal courses, which was statistically and practically significant. Lastly, this study provided practical applicability and opportunities for future research.
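As a rough illustration of the analyses named in this abstract (an independent-samples t-test and a correlational analysis), the sketch below runs both on invented Likert-scale perception scores; none of the variable names or numbers come from the study.

```python
# Hypothetical illustration of the abstract's analysis: an independent-samples
# t-test comparing instructor vs. student perception scores, plus a Pearson
# correlation between student age and perceived course effectiveness.
# All numbers are invented; they are not the study's data.
from scipy import stats

instructor_scores = [4.2, 4.5, 3.9, 4.8, 4.1, 4.6]   # hypothetical Likert means
student_scores = [3.8, 4.0, 3.5, 4.1, 3.7, 3.9]

t_stat, p_value = stats.ttest_ind(instructor_scores, student_scores)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")

student_age = [22, 28, 35, 41, 47, 53]
perceived_effectiveness = [4.6, 4.3, 4.0, 3.6, 3.3, 2.9]  # strong negative trend
r, p_corr = stats.pearsonr(student_age, perceived_effectiveness)
print(f"r = {r:.2f}, p = {p_corr:.3f}")
```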
APA, Harvard, Vancouver, ISO, and other styles
2

Xie, Fangming, Longbiao Guo, Guangjun Ren, Peisong Hu, Feng Wang, Jianlong Xu, Xinqi Li, Fulin Qiu, and Madonna Angelita dela Paz. "Genetic diversity and structure of indica rice varieties from two heterotic pools of southern China and IRRI." Plant Genetic Resources 10, no. 3 (October 14, 2012): 186–93. http://dx.doi.org/10.1017/s147926211200024x.

Full text
Abstract:
Investigation of genetic diversity and the relationships among varieties and breeding lines is of great importance to facilitate parental selection in the development of inbred and hybrid rice varieties and in the construction of heterotic groups. Single nucleotide polymorphism (SNP) technology is increasingly used for the assessment of population diversity and genetic structure. We characterized 215 widely cultivated indica rice varieties developed in southern China and at the International Rice Research Institute (IRRI) using an IRRI-developed SNP oligonucleotide pooled assay (OPA) to provide grouping information on rice mega-varieties for further heterotic pool study. The results revealed that the Chinese varieties were more divergent than the IRRI varieties. The varieties clustered into two major subpopulations under a model-based grouping method. The IRRI varieties were closely grouped and separated clearly from the majority of the Chinese varieties. The Chinese varieties were subclustered into three subgroups, but there was no clear evidence to separate the Chinese varieties into subgroups geographically, indicating a great degree of genetic integration of alleles and shared ancestries among these high-yielding modern varieties.
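The grouping itself was done with a model-based method; as a loose, simplified analogue of SNP-based clustering, the sketch below codes genotypes as 0/1/2 allele counts, reduces them with PCA, and partitions the varieties with k-means (k = 2 mirrors the two reported subpopulations). The marker count and random data are placeholders, not the paper's pipeline.

```python
# Simplified analogue of SNP-based grouping: genotypes coded as minor-allele
# counts (0/1/2), reduced with PCA, then clustered with k-means. This is not
# the paper's model-based method; data and marker count are illustrative.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n_varieties, n_snps = 215, 384          # 215 varieties; marker count assumed
genotypes = rng.integers(0, 3, size=(n_varieties, n_snps)).astype(float)

# Center each SNP column, then project onto the top principal components.
pcs = PCA(n_components=10).fit_transform(genotypes - genotypes.mean(axis=0))

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(pcs)
print("cluster sizes:", np.bincount(labels))
```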
APA, Harvard, Vancouver, ISO, and other styles
3

Zhang, Yunbo, Qiyuan Tang, Shaobing Peng, Danying Xing, Jianquan Qin, Rebecca C. Laza, and Bermenito R. Punzalan. "Water Use Efficiency and Physiological Response of Rice Cultivars under Alternate Wetting and Drying Conditions." Scientific World Journal 2012 (2012): 1–10. http://dx.doi.org/10.1100/2012/287907.

Full text
Abstract:
One of the technology options that can help farmers cope with water scarcity at the field level is alternate wetting and drying (AWD). Limited information is available on varietal responses to nitrogen, AWD, and their interactions. Field experiments were conducted at the International Rice Research Institute (IRRI) farm in the 2009 dry season (DS), 2009 wet season (WS), and 2010 DS to determine genotypic responses and water use efficiency of rice under two N rates and two water management treatments. Grain yield was not significantly different between AWD and continuous flooding (CF) across the three seasons. Interactive effects among variety, water management, and N rate were not significant. The higher yield under CF was attributed to its significantly higher grain weight, which in turn was due to slower grain filling and high leaf N at the later stage of grain filling. AWD treatments accelerated the grain filling rate, shortened the grain filling period, and enhanced whole-plant senescence. Under normal dry-season conditions, such as the 2010 DS, AWD reduced water input by 24.5% compared with CF; however, it decreased grain yield by 6.9% owing to accelerated leaf senescence. The study indicates that proper water management greatly contributes to grain yield in the late stage of grain filling and is critical for safe AWD technology.
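Taking the reported 2010 DS percentages at face value, the water-use-efficiency (WUE) trade-off can be worked out directly; yield Y and water input W under CF are left symbolic because the abstract gives only relative changes.

```latex
% Illustrative water-use-efficiency (WUE = grain yield / water input)
% comparison from the reported 2010 DS figures: AWD saved 24.5% of the
% water but lost 6.9% of the yield relative to CF.
\[
\frac{\mathrm{WUE}_{\mathrm{AWD}}}{\mathrm{WUE}_{\mathrm{CF}}}
  = \frac{(1-0.069)\,Y \,/\, \bigl((1-0.245)\,W\bigr)}{Y/W}
  = \frac{0.931}{0.755} \approx 1.23
\]
% i.e., AWD delivered roughly 23% more grain per unit of water than CF in
% that season, even though its absolute yield was lower.
```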
APA, Harvard, Vancouver, ISO, and other styles
4

African Pathologists' Summit Working Groups. "Proceedings of the African Pathologists Summit; March 22–23, 2013; Dakar, Senegal: A Summary." Archives of Pathology & Laboratory Medicine 139, no. 1 (June 25, 2014): 126–32. http://dx.doi.org/10.5858/arpa.2013-0732-cc.

Full text
Abstract:
Context: This report presents the proceedings of the African Pathologists Summit, held under the auspices of the African Organization for Research and Training in Cancer. Objectives: To deliberate on the challenges and constraints of the practice of pathology in Sub-Saharan Africa and the avenues for addressing them. Participants: Collaborating organizations included the American Society for Clinical Pathology; Association of Pathologists of Nigeria; British Division of the International Academy of Pathology; College of Pathologists of East, Central and Southern Africa; East African Division of the International Academy of Pathology; Friends of Africa–United States and Canadian Academy of Pathology Initiative; International Academy of Pathology; International Network for Cancer Treatment and Research; National Cancer Institute; National Health and Laboratory Service of South Africa; Nigerian Postgraduate Medical College; Royal College of Pathologists; West African Division of the International Academy of Pathology; and Faculty of Laboratory Medicine of the West African College of Physicians. Evidence: Information on the status of the practice of pathology was based on the experience of the participants, who are current or past practitioners of pathology or are involved in pathology education and research in Sub-Saharan Africa. Consensus Process: The deliberations were carried out through presentations and working discussion groups. Conclusions: The significant lack of professional and technical personnel, inadequate infrastructure, limited training opportunities, poor funding of pathology services in Sub-Saharan Africa, and their significant impact on patient care were noted. The urgency of addressing these issues was recognized, and the recommendations that were made are contained in this report.
APA, Harvard, Vancouver, ISO, and other styles
5

Marsh, Sophia, and Ilse Truter. "VP33 Pharmacoeconomic Submission Requirements: Africa Compared With England." International Journal of Technology Assessment in Health Care 35, S1 (2019): 84. http://dx.doi.org/10.1017/s0266462319003076.

Full text
Abstract:
Introduction: The South African Pharmacoeconomic Submissions Guideline (SAPG) is currently voluntary for medicines in the private health sector but may become mandatory and more widely used under the proposed National Health Insurance system. To make recommendations on evidence generation and areas where the SAPG could be strengthened, the study compared the SAPG requirements with other African pharmacoeconomic guidelines and the National Institute for Health and Care Excellence Methods Guide (NICE MG). Methods: The World Health Organisation, International Network of Agencies for Health Technology Assessment (INAHTA), HTA International, and International Society for Pharmacoeconomics and Outcomes Research websites were consulted, and email requests were sent to named individuals from retrieved source material. The European Network for HTA Core Model® (version 3.0) (the Model®) provided the evaluation and comparison framework, using three criteria: completely, partly, or not completely requiring the same or similar information as the Model®. Results: Of the forty-five countries identified, only Egypt had a publicly available pharmacoeconomic guideline (the Egyptian Pharmacoeconomic Guideline (EPG)). The guidelines varied considerably in their intended audience, size, and content. All three guidelines' primary focus was the cost and economic evaluation, and health problem and current use domains. Safety, organisational, ethical, and legal aspects were poorly covered by the SAPG and EPG (less than thirty percent of issues in each domain completely or partly covered). The SAPG completely or partly required the same or similar information as the Model® for thirty-nine percent of total issues, the EPG thirty-three percent, and the NICE MG sixty-six percent. Conclusions: The SAPG was not as comprehensive as the NICE MG and poorly covered some key aspects of HTAs, suggesting that the SAPG could be developed to be more informative for decision-makers. Evidence generation should focus on describing the health problem the technology is targeting and on evidence that can be synthesized into cost-effectiveness analyses.
APA, Harvard, Vancouver, ISO, and other styles
6

Lee, John-Tark, and Gyei Kark Park. "Special Issue on ISIS 2009, Dong-A University, Busan, Korea." Journal of Advanced Computational Intelligence and Intelligent Informatics 14, no. 5 (July 20, 2010): 549. http://dx.doi.org/10.20965/jaciii.2010.p0549.

Full text
Abstract:
The 10th International Symposium on Advanced Intelligent Systems 2009 (ISIS 2009), held on August 17-19, 2009, at the Bumin Campus of Dong-A University (http://www.donga.ac.kr/) in Busan, Korea, was sponsored by the Korean Institute of Intelligent System Society (KIIS) and cosponsored technically by the Japan Society for Fuzzy Theory and Intelligent Informatics (SOFT) and the Taiwanese Association for Artificial Intelligence (TAAI). The international symposium focused on state-of-the-art accomplishments, innovations, and potential directions in intelligent systems. It also marked an epoch of innovation and the dissemination of research into many interesting fields. Its broad theme covered the latest in technical fields, including artificial intelligence, intelligent systems, Ambient Intelligence (AmI), bioinformatics, information technology, and their wide-ranging applications, from basic theoretical work to practical engineering applications. The 80 featured papers were presented by 120 participants. With so many papers submitted to JACIII, this special issue consists of just two strictly selected papers. The first deals with emerging research trends in robotics, proposing a new trajectory generation using the univariate Dynamic Encoding Algorithm for Searches (uDEAS) in the turning of a biped walking robot. The second, presenting the latest findings in AmI, details a newly designed and implemented robust capacitive sensor with parasitic parameter modeling over a high-frequency range of 200 kHz, based on an Unscented Kalman Filter (UKF) algorithm. I would like to thank Mr. Kunihiko Uchida, Mr. Shinya Wakai, Ms. Reiko Ohta, and Mr. Shinji Isokawa of the editorial staff of Fuji Technology Press for editing these complex manuscripts into their final form. And I sincerely thank Prof. Kaoru Hirota, Editor-in-Chief of JACIII, for inviting me to direct this special issue on ISIS 2009.
APA, Harvard, Vancouver, ISO, and other styles
7

Numgaudienė, Ariana, and Birutė Žygaitienė. "Content Analysis of Technology Teacher Training Programmes of Some European Countries." Pedagogika 113, no. 1 (March 5, 2014): 112–22. http://dx.doi.org/10.15823/p.2014.1755.

Full text
Abstract:
The article deals with the problems of designing and updating study programmes during the integration of the Lithuanian education system into the European education space. After the substantial revision of the general programmes of Basic education (2008) and Secondary education (2011), and in order to fully cover the development of the general cultural, subject-specific, generic, and specific competencies that teachers need, it is important to update the study programmes. The problem of the research: what should the content of a technology teacher training programme be, from the point of view of innovations, in order to meet the expectations of a changing society? The object of the research: the innovative content of the technology teacher training programme. The aim of the research: to highlight the innovative aspects of the content of technology teacher training programmes through a content analysis of such programmes at universities in Lithuania and some other European countries. Research methods: analysis of scientific literature; analysis of the programmes of universities of some European countries which train technology teachers; and analysis of the legal acts and strategic education policy documents of the European Union and the Republic of Lithuania. Updating the study programme of technological education is a permanent process conditioned by the following factors: the market economy and the needs of the information society, the massification of higher education, the penetration of humanistic ideas into the content of education, and the unified study quality assessment policy in force in the European Union. Taking into account the recommendations of an international experts' group and considering international changes in analogous study programmes, the Committee of Technology Pedagogics Study Programmes of the Lithuanian University of Educational Sciences, in cooperation with social partners, carried out a survey of the opinions of students, graduates, university lecturers, and employers on study quality. They also performed a comprehensive analysis of the Bachelor's degree study programmes of some Western European universities. The analysis revealed that the theoretical models behind the study programme designs of different European universities have similarities and differences, determined by their philosophical basis, humanistic ideas, and the context of national education policy. The research drew on the experience of five universities from the innovations point of view: the University of Helsinki (Finland), Queen Margaret University, Edinburgh (Great Britain), the Polytechnic Institute of Tomar (Portugal), and the University of Iceland. The following elective subjects have been included in the study programme of technology pedagogics: pedagogical ethics, sustainable development and social welfare, educational creative projects, family health education, health-promoting nutrition education, visualization of technology education, eco creations, national and global food culture, interior design, technology education for special-needs students, art therapy, development of leadership competencies, and formation of study archives. The hidden curriculum of the study programme of technology pedagogics comprises ethnic culture, ecology, and project activities.
APA, Harvard, Vancouver, ISO, and other styles
8

Vowotor, Michael Kwame, George Amoako, Baah Sefa-Ntiri, Samuel Amoah, Samuel Sonko Sackey, and Charles Lloyd Yeboah Amuah. "Assessment of Nine Micronutrients in Jasmine 85 Rice Grown in Ghana Using Neutron Activation Analysis." Environment and Pollution 9, no. 2 (September 28, 2020): 29. http://dx.doi.org/10.5539/ep.v9n2p29.

Full text
Abstract:
The amount of micronutrients in food is a key factor that determines the health status of a person. The concentrations of nine micronutrients, Sodium (Na), Magnesium (Mg), Chlorine (Cl), Potassium (K), Calcium (Ca), Vanadium (V), Manganese (Mn), Copper (Cu), and Iodine (I), in polished Jasmine 85 rice locally cultivated in five rice farming areas in Ghana (Afienya, Afife, Dawhenya, Ashaiman, and Aveyime) were determined using Neutron Activation Analysis. The standard reference materials used were the International Atomic Energy Agency (IAEA)-530 Tuna fish homogenate and the National Institute of Standards and Technology (NIST) USA 1566b Oyster Tissue. Recoveries of the elemental concentrations ranged from 88% to 111% of the certified values. The relative standardization method was used in the quantification of the elements. The ranges of concentrations measured in the rice are: 142.3-188.1 mg/kg for Na, 483.2-875.7 mg/kg for Mg, 465.6-718.0 mg/kg for Cl, 514.6-2949.0 mg/kg for K, 2303.0-2622.0 mg/kg for Ca, 0.0698-0.1925 mg/kg for V, 9.956-14.460 mg/kg for Mn, 0.8728-1.6790 mg/kg for Cu, and 0.1181-0.1447 mg/kg for I. Using hierarchical clustering analysis and principal component analysis to evaluate the intensities of the measured concentrations, K was established to be the most abundant and was used to categorize two distinct clusters: Group 1 farms (Ashaiman, Afienya, and Dawhenya) and Group 2 farms (Aveyime and Afife). Group 2 farms recorded elevated intensities of micronutrients. With Pearson's correlation coefficient, some noteworthy correlations were observed between Na and K (r = 0.951), Na and V (r = 0.842), and K and V (r = 0.812), indicating the same or similar source inputs for each pair. The calculated mean daily intake of K exceeded the mean Recommended Dietary Allowance and Adequate Intake for all Life Stage Groups. An estimated health risk associated with the consumption of the rice was present only for Mg, in children between the ages of 1 and 3. Information on the content of these nine micronutrients in rice from these five farming areas would be valuable in rice consumption studies to evaluate the overall availability of micronutrients to the Ghanaian populace and its age groups, and in nutrition planning for the analysis of nationwide rice supplies, particularly for regions and countries known to be susceptible to deficiencies of these micronutrients. The techniques adopted in this research can be used to accurately determine the concentration of micronutrients in rice and to trace the area where the rice was produced.
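To make the 'mean daily intake' step concrete, the sketch below applies the standard estimated-daily-intake formula used in dietary exposure assessment, worked for Mg; the rice consumption rate and body weight are invented assumptions, since the abstract does not report the study's actual parameters.

```python
# Standard estimated daily intake: EDI = (C * IR) / BW, where C is the element
# concentration in rice (mg/kg), IR the daily rice intake (kg/day), and BW the
# body weight (kg). The intake and body-weight figures below are assumptions,
# not values from the study.
def estimated_daily_intake(conc_mg_per_kg: float, intake_kg_per_day: float,
                           body_weight_kg: float) -> float:
    """Return EDI in mg per kg of body weight per day."""
    return conc_mg_per_kg * intake_kg_per_day / body_weight_kg

mg_conc = 875.7        # upper end of the reported Mg range, mg/kg
child_intake = 0.10    # assumed 100 g of rice per day for a 1-3 year old
child_weight = 12.0    # assumed body weight, kg

edi = estimated_daily_intake(mg_conc, child_intake, child_weight)
print(f"EDI(Mg) = {edi:.1f} mg/kg-bw/day")  # ~7.3; compared against RDA/AI
```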
APA, Harvard, Vancouver, ISO, and other styles
9

Inoue, Hiroshi, and Renato U. Solidum Jr. "Special Issue on Enhancement of Earthquake and Volcano Monitoring and Effective Utilization of Disaster Mitigation Information in the Philippines." Journal of Disaster Research 10, no. 1 (February 1, 2015): 5–7. http://dx.doi.org/10.20965/jdr.2015.p0005.

Full text
Abstract:
This special issue of JDR features 18 papers and reports on an international 2010 to 2015 cooperative project entitled "Enhancement of Earthquake and Volcano Monitoring and Effective Utilization of Disaster Mitigation Information in the Philippines." This project is being conducted under the SATREPS program (Science and Technology Research Partnership for Sustainable Development), cosponsored by the JST (Japan Science and Technology Agency) and JICA (Japan International Cooperation Agency). The Philippines is one of the world's most earthquake- and volcano-disaster-prone countries because it is located along the active boundary between the Philippine Sea Plate and the Eurasian Plate. Collisions between the two plates generate plate subduction and crustal stress that produce earthquakes and volcanic activity on the archipelago. The Philippines has experienced numerous disastrous earthquakes, the most recent being the 1990 M7.8 Luzon earthquake, which killed over 1,000 local residents. A damaging earthquake also occurred during this 5-year project, in October 2013, on Bohol Island, causing about 200 deaths when houses and other buildings collapsed. Volcanoes are another major killer in the Philippines. The largest such disaster in the last century was the 1911 eruption of the Taal volcano, which killed 1,300 people by a base surge. The 1991 Mt. Pinatubo eruption is known as the largest volcanic event of the 20th century. The Mayon volcano is also known to be a beautiful but dangerous volcano that frequently erupts, causing lahars (steaming, moving fluid masses of volcanic debris and water) that have damaged villages at the foot of the mountain. PHIVOLCS (Philippine Institute of Volcanology and Seismology), a governmental agency mandated to monitor earthquakes and volcanoes, provides earthquake and volcano information and alerts to the public. It also conducts research on the mechanisms behind such natural phenomena and on evaluating their hazards and risks. PHIVOLCS's other mission is educating people and society on being prepared for disasters. Earthquake and volcano bulletins and alerts, research output, and educational materials and training provided by PHIVOLCS have enriched knowledge and enhanced measures against disaster. The primary target of this SATREPS project is to enhance existing monitoring networks, whose equipment has been provided by Japanese ODA (Official Development Assistance). Through the SATREPS project, we have introduced the latest technology to provide the public with more accurate information more quickly. This project also promotes research for deepening the understanding of earthquake and volcano activities to better assess hazard and risk.
Project components, tasks, and main Japanese organizations are as follows:
1) Earthquake and tsunami monitoring, NIED
1-1) Advanced real-time earthquake source information, Nagoya University
1-2) Real-time seismic intensity network, NIED
1-3) Tsunami monitoring and forecasting, NIED, JMA
2) Evaluation of earthquake generation potential, Kyoto University
2-1) Campaign and continuous GPS observation, Kyoto University, GSI
2-2) Geological and geomorphological studies of earthquake faults, Kyoto University
3) Integrated real-time monitoring of the Taal and Mayon volcanoes, Nagoya University
3-1) Seismic and infrasonic observation, Nagoya University
3-2) Continuous GPS monitoring, Kyoto University
3-3) Electromagnetic monitoring, Tokai University
4) Provision of disaster mitigation information and promotion of utilization, NIED
4-1) Simple seismic diagnosis, NIED
4-2) Tsunami victims interview manga (comic book form) and DVD, NIED
4-3) Disaster information portal site, NIED
*NIED: National Institute for Earth Science and Disaster Prevention; JMA: Japan Meteorological Agency; GSI: Geospatial Information Authority of Japan
This issue's first article, by Melosantos et al., reports on the results of installing a broadband seismometer network to provide the seismic data used in the next two articles. Papers by Bonita and Punongbayan detail the results of SWIFT, a new earthquake source analysis system that automatically determines the location, size, and source mechanisms of moderate to large earthquakes. The report by Inoue et al. describes the development of the first instrumental intensity network system in the Philippines, followed by a report on its deployment and observations by Lasala et al. The article by Igarashi et al. describes the development of a tsunami simulation database for a local tsunami warning system in the Philippines. The next five papers represent project component 2), Evaluation of earthquake generation potential. Ohkura et al. detail the results of campaign GPS observations on Mindanao Island, which first delineated the detailed plate movement and internal deformation of Mindanao. Tobita et al. report the results of the first continuous GPS observations across the Philippine Fault. The next three papers describe the results of geological and geomorphological studies of the Philippine Fault on Mindanao Island by Perez et al., of the 1973 Ragay Gulf earthquake by Tsutsumi, and of submarine mapping of the Philippine Fault by Yasuda et al. An electromagnetic study of the Taal volcano reported by Alanis et al. and GPS monitoring of the Mayon volcano detailed by Takagi et al. are part of intensive studies of these two volcanoes. Scientific research results concerning component 3), Integrated real-time monitoring of the Taal and Mayon volcanoes, were published in advance in other international journals by the research group. Real-time information on these volcanoes is telemetered to Manila and checked regularly as part of standard operational procedures. Real-time earthquake and tsunami information from component 1), Earthquake and tsunami monitoring, has already been implemented in the monitoring system. The last five papers and reports cover results for component 4), Provision of disaster mitigation information and promotion of utilization. Imai et al. report on a full-scale shaking table test of typical residential Philippine houses made of hollow concrete blocks.
They demonstrate the importance of following building codes. A paper by Imai et al. introduces a simple seismic diagnosis for masonry houses as a practical tool for raising people's awareness of housing vulnerability to earthquakes. Salcedo et al. report a dissemination strategy for the practical tools. The last two papers, by Villegas, report on video interviews with Filipino tsunami survivors in the Tohoku area following the 2011 Great East Japan Earthquake. The results were compiled, and selected stories were published in comic-book form as easy-to-understand educational materials on tsunami disaster awareness. Information on earthquakes and volcanoes provided by the enhanced monitoring system, research output, and educational materials obtained through the SATREPS project is provided to stakeholders to enhance measures against disasters at various levels and in different timeframes. Readers of this special issue can reference information through a newly established SATREPS project portal site, the PHIVOLCS Disaster Information Portal, at http://satreps.phivolcs.dost.gov.ph/. It can also be accessed from the PHIVOLCS web page at http://www.phivolcs.dost.gov.ph/. Finally, I extend my sincere thanks to all authors and reviewers involved in this special issue.
APA, Harvard, Vancouver, ISO, and other styles
10

Davidson, Brian, Kurinchi Gurusamy, Neil Corrigan, Julie Croft, Sharon Ruddock, Alison Pullan, Julia Brown, et al. "Liver resection surgery compared with thermal ablation in high surgical risk patients with colorectal liver metastases: the LAVA international RCT." Health Technology Assessment 24, no. 21 (April 2020): 1–38. http://dx.doi.org/10.3310/hta24210.

Full text
Abstract:
Background: Although surgical resection has been considered the only curative option for colorectal liver metastases, thermal ablation has recently been suggested as an alternative curative treatment. There have been no adequately powered trials comparing surgery with thermal ablation. Objectives: Main objective – to compare the clinical effectiveness and cost-effectiveness of thermal ablation versus liver resection surgery in high surgical risk patients who would be eligible for liver resection. Pilot study objectives – to assess the feasibility of recruitment (through a qualitative study), to assess the quality of ablations and liver resection surgery to determine acceptable standards for the main trial, and to centrally review the reporting of computed tomography scan findings relating to ablation, outcomes, and recurrence rate in both arms. Design: A prospective, international (UK and the Netherlands), multicentre, open, pragmatic, parallel-group, randomised controlled non-inferiority trial with a 1-year internal pilot study. Setting: Tertiary liver, pancreatic and gallbladder (hepatopancreatobiliary) centres in the UK and the Netherlands. Participants: Adults with a specialist multidisciplinary team diagnosis of colorectal liver metastases who are at high surgical risk because of their age, comorbidities or tumour burden and who would be suitable for liver resection or thermal ablation. Interventions: Thermal ablation conducted as per local policy (but centres were encouraged to recruit within Cardiovascular and Interventional Radiological Society of Europe guidelines) versus surgical liver resection performed as per centre protocol. Main outcome measures: Pilot study – patients' and clinicians' acceptability of the trial to assist in optimisation of recruitment. Primary outcome – disease-free survival at 2 years post randomisation. Secondary outcomes – overall survival, timing and site of recurrence, additional therapy after treatment failure, quality of life, complications, length of hospital stay, costs, trial acceptability, and disease-free survival measured from end of intervention. It was planned that 5-year survival data would be documented through record linkage. Randomisation was performed by minimisation incorporating a random element, and this was a non-blinded study. Results: In the pilot study over 1 year, a total of 366 patients with colorectal liver metastases were screened and 59 were considered eligible. Only nine participants were randomised. The trial was stopped early, and none of the planned statistical analyses was performed. The key issues inhibiting recruitment included fewer patients than anticipated being eligible for both treatments, misconceptions about the eligibility criteria for the trial, surgeons' preference for one of the treatments ('lack of clinical equipoise' among some of the surgeons in the centre) with unconscious bias towards surgery, patients' preference for one of the treatments, and a lack of dedicated research nurses for the trial. Conclusions: Recruitment feasibility was not demonstrated during the pilot stage of the trial; therefore, the trial closed early. In future, comparisons involving two very different treatments may benefit from an initial feasibility study or a longer period of internal pilot study to resolve these difficulties. Sufficient time should be allowed to set up arrangements through National Institute for Health Research (NIHR) Research Networks. Trial registration: Current Controlled Trials ISRCTN52040363.
Funding: This project was funded by the NIHR Health Technology Assessment programme and will be published in full in Health Technology Assessment; Vol. 24, No. 21. See the NIHR Journals Library website for further project information.
APA, Harvard, Vancouver, ISO, and other styles
11

Murofushi, Toshiaki. "“Heart and Mind” Evaluation." Journal of Advanced Computational Intelligence and Intelligent Informatics 9, no. 5 (September 20, 2005): 439. http://dx.doi.org/10.20965/jaciii.2005.p0439.

Full text
Abstract:
The Special Interest Group in Evaluation (SIG Eval) of the Japan Society for Fuzzy Theory and Intelligent Informatics was founded by Professor Hisao Shiizuka, Kogakuin University, in 1993 to facilitate the exchange of research information within Japan on evaluation problems. Since 1996, SIG Eval has held an annual workshop, the Workshop on Evaluation of Heart and Mind. In addition to the workshop, SIG Eval has edited this special issue on “Heart and Mind” Evaluation. Contributors include those who often speak at the workshop. The first article, “Feasibility Study on Marketing Research Using Eye Movement: An Investigation of Image Presentation using an Eye Camera and Data Processing,” by Shin'ya Nagasawa, Sora Yim, and Hitoshi Hongo, asserts that, in physiological experiments using an eye camera, the user's interest influences purchasing behavior. The second article, “Statistical Image Analysis of Psychological Projective Drawings,” by Kazuhisa Takemura, Iyuki Takasaki, and Yumi Iwamitsu, discusses the use of statistical image analysis to overcome the difficulty of assessing the reliability of projective drawing techniques. The third article, “Fuzzy Least Squares Regression Analysis for Social Judgment Study,” by Kazuhisa Takemura, proposes fuzzy regression analysis in which a dependent variable, independent variables, and regression parameters are represented by triangular fuzzy numbers. The fourth to sixth articles discuss fuzzy measures, or capacities, which are quite popular for their application in subjective evaluation. The fourth article, “Identification of Fuzzy Measures with Distorted Probability Measures,” by Aoi Honda and Yoshiaki Okazaki, classifies fuzzy measures by introducing the concept of order type, and proposes a method of identifying a fuzzy measure μ as a distorted probability of the same, or similar, order type as μ. The fifth article, “Semiatoms in Choquet Integral Models of Multiattribute Decision Making,” by Toshiaki Murofushi, characterizes the concept of the semiatom in fuzzy measure theory within the multiattribute preference relation represented by a Choquet integral. The last article, “Some Characterizations of k-Monotonicity through the Bipolar Möbius Transform in Bi-Capacities,” by Katsushige Fujimoto and Toshiaki Murofushi, proposes the bipolar Möbius transform as an extension of the conventional Möbius transform of capacities to bi-capacities; the concept of a bi-capacity was proposed by Grabisch and Labreuche (2002) for modeling decision making on a bipolar scale. We thank the reviewers and contributors for their time and effort in making this special issue possible, and we wish to thank the JACIII editorial board, especially Professors Kaoru Hirota and Toshio Fukuda, the Editors-in-Chief, and Kenta Uchino, Managing Editor, for their support and advice in putting this special issue together. I have assumed the role of General Chair of the Joint Conference of the Third International Conference on Soft Computing and Intelligent Systems and the Seventh International Symposium on Advanced Intelligent Systems (SCIS & ISIS 2006), to be held at the Tokyo Institute of Technology, Japan, on September 20-24, 2006. As is customary, selected papers will be published in special issues of this journal. We invite you to submit your research papers and to participate in SCIS & ISIS 2006. For further information, please visit http://scis2006.cs.dm.u-tokai.ac.jp/.
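For readers new to the machinery behind the fourth to sixth articles, the discrete Choquet integral with respect to a fuzzy measure (capacity) can be stated compactly. This is the standard textbook definition, not a formula quoted from the papers themselves.

```latex
% Discrete Choquet integral of f: N -> [0, +inf) with respect to a fuzzy
% measure (capacity) mu on N = {1, ..., n}, i.e., mu(emptyset) = 0 and
% mu is monotone (A subset of B implies mu(A) <= mu(B)).
\[
(C)\!\int f \, d\mu \;=\; \sum_{i=1}^{n}
  \bigl( f(\sigma(i)) - f(\sigma(i-1)) \bigr)\, \mu(A_i)
\]
% where sigma is a permutation of N ordering the values so that
% f(sigma(1)) <= ... <= f(sigma(n)), with f(sigma(0)) := 0 and
% A_i := {sigma(i), sigma(i+1), ..., sigma(n)}.
```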
APA, Harvard, Vancouver, ISO, and other styles
12

Anderberg, Peter, Gunilla Björling, Louise Stjernberg, and Doris Bohman. "Analyzing Nursing Students’ Relation to Electronic Health and Technology as Individuals and Students and in Their Future Career (the eNursEd Study): Protocol for a Longitudinal Study." JMIR Research Protocols 8, no. 10 (October 1, 2019): e14643. http://dx.doi.org/10.2196/14643.

Full text
Abstract:
Background: The nursing profession has undergone several changes in the past decades, and new challenges are to come in the future; patients are now cared for in their homes, hospitals are more specialized, and primary care will have a key role. Health informatics is essential in all core competencies in nursing. From an educational perspective, it is of great importance that students are prepared for the new demands and needs of patients. From a societal point of view, society, health care included, is facing several challenges related to technological developments and digitization. Preparation for the next decade of nursing education and practice must be made without the advantage of certainty. Training for not-yet-existing technologies, in which educators are not limited by present practice paradigms, is desirable. This study presents the design, method, and protocol for a multicenter study that investigates undergraduate nursing students' internet use, knowledge about electronic health (eHealth), and attitudes to technology, and how experiences of eHealth are handled during their education. Objective: The primary aim of this research project is to describe the design of a longitudinal study and a qualitative substudy exploring students' knowledge about and relation to technology and eHealth in the following respects: (1) what pre-existing knowledge of and interest in this area the nursing students have; (2) how (and if) it is present in their education; (3) how the students perceive this knowledge in their future career role; and (4) to what extent the education is capable of managing this knowledge. Methods: The study consists of two parts: a longitudinal study and a qualitative substudy. Students from the BSc in Nursing program at the Blekinge Institute of Technology, Karlskrona, Sweden, and from the Swedish Red Cross University College, Stockholm/Huddinge, Sweden, were included in this study. Results: The study is ongoing. Data analysis is currently underway, and the first results are expected to be published in 2019. Conclusions: This study presents the design of a longitudinal study and a qualitative substudy. The eHealth in Nursing Education (eNursEd) study will answer several important questions about nursing students' attitudes toward and use of information and communications technology in their private lives, their education, and their emerging profession. Knowledge from this study will be used to compare different nursing programs and students' knowledge about and relation to technology and eHealth. Results will also be communicated back to nursing educators to improve the teaching of eHealth, health informatics, and technology. International Registered Report Identifier (IRRID): DERR1-10.2196/14643
APA, Harvard, Vancouver, ISO, and other styles
13

Webster, Lucy, Derek Groskreutz, Anna Grinbergs-Saull, Rob Howard, John T. O’Brien, Gail Mountain, Sube Banerjee, et al. "Development of a core outcome set for disease modification trials in mild to moderate dementia: a systematic review, patient and public consultation and consensus recommendations." Health Technology Assessment 21, no. 26 (May 2017): 1–192. http://dx.doi.org/10.3310/hta21260.

Full text
Abstract:
Background: There is currently no disease-modifying treatment available to halt or delay the progression of the disease pathology in dementia. An agreed core set of the best-available and most appropriate outcomes for disease modification would facilitate the design of trials and ensure consistency across disease modification trials, as well as making results comparable and meta-analysable in future trials. Objectives: To agree a set of core outcomes for disease modification trials for mild to moderate dementia with the UK dementia research community and patient and public involvement (PPI). Data sources: We included disease modification trials with quantitative outcomes of efficacy from (1) references from related systematic reviews in workstream 1; (2) searches of the Cochrane Dementia and Cognitive Improvement Group study register, Cochrane Central Register of Controlled Trials, Cumulative Index to Nursing and Allied Health Literature, EMBASE, Latin American and Caribbean Health Sciences Literature and PsycINFO on 11 December 2015, and clinical trial registries [International Standard Randomised Controlled Trial Number (ISRCTN) and clinicaltrials.gov] on 22 and 29 January 2016; and (3) hand-searches of reference lists of relevant systematic reviews from database searches. Review methods: The project consisted of four workstreams. (1) We obtained related core outcome sets and work from co-applicants. (2) We systematically reviewed published and ongoing disease modification trials to identify the outcomes used in different domains. We extracted outcomes used in each trial, recording how many used each outcome and with how many participants. We divided outcomes into the domains measured and searched for validation data. (3) We consulted with PPI participants about recommended outcomes. (4) We presented all the synthesised information at a conference attended by the wider body of National Institute for Health Research (NIHR) dementia researchers to reach consensus on a core set of outcomes. Results: We included 149 papers from the 22,918 papers screened, referring to 125 individual trials. Eighty-one outcomes were used across trials, including 72 scales [31 cognitive, 12 activities of daily living (ADLs), 10 global, 16 neuropsychiatric and three quality of life] and nine biological techniques. We consulted with 18 people for PPI. The conference decided that only cognition and biological markers are core measures of disease modification. Cognition should be measured by the Mini Mental State Examination (MMSE) or the Alzheimer's Disease Assessment Scale – Cognitive subscale (ADAS-Cog), and brain changes through structural magnetic resonance imaging (MRI) in a subset of participants. All other domains are important but not core. We recommend using the Neuropsychiatric Inventory for neuropsychiatric symptoms, the Disability Assessment for Dementia for ADLs, the Dementia Quality of Life Measure for quality of life, and the Clinical Dementia Rating scale to measure dementia globally. Limitations: Most of the trials included participants with Alzheimer's disease, so recommendations may not apply to other types of dementia. We did not conduct economic analyses. The PPI consultation was limited to members of the Alzheimer's Society Research Network. Conclusions: Cognitive outcomes and biological markers form the core outcome set for future disease modification trials, measured by the MMSE or ADAS-Cog, and structural MRI in a subset of participants. Future work: We envisage that the core set may be superseded in the future, particularly for other types of dementia. There is a need to develop an algorithm to compare scores on the MMSE and ADAS-Cog. Study registration: The project was registered with Core Outcome Measures in Effectiveness Trials [www.comet-initiative.org/studies/details/819?result=true (accessed 7 April 2016)]. The systematic review protocol is registered as PROSPERO CRD42015027346. Funding: The National Institute for Health Research Health Technology Assessment programme.
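On the noted need for an algorithm to compare MMSE and ADAS-Cog scores: one generic psychometric starting point (not a method proposed by the review's authors) is linear equating, which maps a score on one instrument to the scale of the other by matching means and standard deviations in a common reference sample.

```latex
% Linear equating of an MMSE score x onto the ADAS-Cog scale, assuming both
% instruments were administered to a common reference sample with means
% mu_M, mu_A and standard deviations s_M, s_A. Illustrative only; the review
% does not specify an equating method.
\[
\hat{y}_{\mathrm{ADAS}}(x) \;=\; \mu_A \;+\; s_A\,\frac{x - \mu_M}{s_M}
\]
% Note that the scales run in opposite directions (higher MMSE is better,
% higher ADAS-Cog is worse), so when fitted on real data the slope term
% comes out negative.
```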
APA, Harvard, Vancouver, ISO, and other styles
14

Andrews, Peter JD, H. Louise Sinclair, Aryelly Rodríguez, Bridget Harris, Jonathan Rhodes, Hannah Watson, and Gordon Murray. "Therapeutic hypothermia to reduce intracranial pressure after traumatic brain injury: the Eurotherm3235 RCT." Health Technology Assessment 22, no. 45 (August 2018): 1–134. http://dx.doi.org/10.3310/hta22450.

Full text
Abstract:
Background: Traumatic brain injury (TBI) is a major cause of disability and death in young adults worldwide. It results in around 1 million hospital admissions annually in the European Union (EU), causes a majority of the 50,000 deaths from road traffic accidents, and leaves a further ≈10,000 people severely disabled. Objective: The Eurotherm3235 Trial was a pragmatic trial examining the effectiveness of hypothermia (32–35 °C) in reducing raised intracranial pressure (ICP) following severe TBI, and thereby reducing morbidity and mortality 6 months after TBI. Design: An international, multicentre, randomised controlled trial. Setting: Specialist neurological critical care units. Participants: We included adult participants following TBI. Eligible patients had ICP monitoring in place with an ICP of > 20 mmHg despite first-line treatments. Participants were randomised to receive standard care with the addition of hypothermia (32–35 °C) or standard care alone. Online randomisation and the use of an electronic case report form (CRF) ensured concealment of random treatment allocation. It was not possible to blind local investigators to allocation, as it was obvious which participants were receiving hypothermia. We collected information on how well the participant had recovered 6 months after injury. This information was provided by the participants themselves (if they were able) and/or by a person close to them, through completion of the Glasgow Outcome Scale – Extended (GOSE) questionnaire. Telephone follow-up was carried out by a blinded independent clinician. Interventions: The primary intervention to reduce ICP in the hypothermia group after randomisation was induction of hypothermia. Core temperature was initially reduced to 35 °C and decreased incrementally to a lower limit of 32 °C if necessary to maintain ICP at < 20 mmHg. Rewarming began after 48 hours if ICP remained controlled. Participants in the standard-care group received usual care at that centre, but without hypothermia. Main outcome measures: The primary outcome measure was the GOSE [range 1 (dead) to 8 (upper good recovery)] at 6 months after the injury, as assessed by an independent collaborator blind to the intervention. A priori subgroup analysis tested the relationship between patient outcome and the minimisation factors: being aged < 45 years, having a post-resuscitation Glasgow Coma Scale (GCS) motor score of < 2 on admission, and having a time from injury of < 12 hours. Results: We enrolled 387 patients from 47 centres in 18 countries. The trial was closed to recruitment following concerns raised by the Data and Safety Monitoring Committee in October 2014. On an intention-to-treat basis, 195 participants were randomised to hypothermia treatment and 192 to standard care. Regarding participant outcome, there was a higher mortality rate and poorer functional recovery at 6 months in the hypothermia group. The adjusted common odds ratio (OR) for the primary statistical analysis of the GOSE was 1.54 [95% confidence interval (CI) 1.03 to 2.31]; when the GOSE was dichotomised, the OR was 1.74 (95% CI 1.09 to 2.77). Both results favoured standard care alone. In this pragmatic study, we did not collect data on adverse events. Data on serious adverse events (SAEs) were collected but were subject to reporting bias, with most SAEs being reported in the hypothermia group. Conclusions: In participants following TBI and with an ICP of > 20 mmHg, titrated therapeutic hypothermia successfully reduced ICP but led to a higher mortality rate and worse functional outcome.
Limitations Inability to blind treatment allocation as it was obvious which participants were randomised to the hypothermia group; there was biased recording of SAEs in the hypothermia group. We now believe that more adequately powered clinical trials of common therapies used to reduce ICP, such as hypertonic therapy, barbiturates and hyperventilation, are required to assess their potential benefits and risks to patients. Trial registration Current Controlled Trials ISRCTN34555414. Funding This project was funded by the National Institute for Health Research (NIHR) Health Technology Assessment programme and will be published in full in Health Technology Assessment; Vol. 22, No. 45. See the NIHR Journals Library website for further project information. The European Society of Intensive Care Medicine supported the pilot phase of this trial.
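For readers unfamiliar with the statistic, the sketch below shows how a dichotomised odds ratio of this kind, with a Wald 95% confidence interval, is conventionally computed from a 2×2 table. It is a minimal illustration; the outcome counts are hypothetical placeholders, not Eurotherm3235 data.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI from a 2x2 table:
    a/b = unfavourable/favourable outcomes in group 1,
    c/d = unfavourable/favourable outcomes in group 2."""
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se_log_or)
    hi = math.exp(math.log(or_) + z * se_log_or)
    return or_, (lo, hi)

# Hypothetical counts, for illustration only (not trial data):
# group 1: 130 unfavourable / 65 favourable; group 2: 110 / 82
print(odds_ratio_ci(130, 65, 110, 82))
```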
15

Tundo, Pietro, Paul Anastas, David StC Black, Joseph Breen, Terrence J. Collins, Sofia Memoli, Junshi Miyamoto, Martyn Poliakoff, and William Tumas. "Synthetic pathways and processes in green chemistry. Introductory overview." Pure and Applied Chemistry 72, no. 7 (January 1, 2000): 1207–28. http://dx.doi.org/10.1351/pac200072071207.

Full text
Abstract:
Contents
Green Chemistry in the International Context
The Concept of Green Chemistry: Definition of green chemistry | Green chemistry: Why now? | The historical context of green chemistry | The emergence of green chemistry
The Content of Green Chemistry: Areas of green chemistry | Preliminary remarks | Alternative feedstocks | Benign reagents/synthetic pathways | Synthetic transformations | Solvents/reaction conditions

Green Chemistry in the International Context
It has come to be recognized in recent years that the science of chemistry is central to addressing the problems facing the environment. Through the utilization of the various subdisciplines of chemistry and the molecular sciences, there is an increasing appreciation that the emerging area of green chemistry [1] is needed in the design and attainment of sustainable development. A central driving force in this increasing awareness is that green chemistry accomplishes both economic and environmental goals simultaneously through the use of sound, fundamental scientific principles. Recently, a basic strategy has been proposed for implementing the relationships between industry and academia, and hence the funding of the research that constitutes the engine of economic advancement; it is what many schools of economics call the "triple bottom line" philosophy, meaning that an enterprise will be economically sustainable if the objectives of environmental protection, societal benefit, and market advantage are all satisfied [2]. The triple bottom line is a strong idea for evaluating the success of environmental technologies. It is clear that the best environmentally friendly technology or discovery will not impact on the market if it is not economically advantageous; in the same way, a market that ignores environmental needs and human involvement will not prosper. This is the challenge for the future of the chemical industry, whose development is strongly linked to the extent to which environmental and human needs can be reconciled with new ideas in fundamental research. On the other hand, it should be easy to foresee that the success of environmentally friendly reactions, products, and processes will improve competitiveness within the chemical industry. If companies are able to meet the needs of society, people will influence their own governments to foster those industries attempting such environmental initiatives. Of course, fundamental research will play a central role in achieving these worthy objectives. What we call green chemistry may in fact embody some of the most advanced perspectives and opportunities in the chemical sciences.
It is for these reasons that the International Union of Pure and Applied Chemistry (IUPAC) has a central role to play in advancing and promoting the continuing emergence and impact of green chemistry. When we think about how IUPAC furthers chemistry throughout the world, it is useful to refer to IUPAC's Strategic Plan. This plan demonstrates the direct relevance of the mission of IUPAC to green chemistry, and explains why there is growing enthusiasm for the pursuit of this new area as an appropriate activity of a scientific Union. The IUPAC Strategic Plan outlines, among other goals, that:
- IUPAC will serve as a scientific, international, nongovernmental body in objectively addressing global issues involving the chemical sciences.
- Where appropriate, IUPAC will represent the interests of chemistry in governmental and nongovernmental forums.
- IUPAC will provide tools (e.g., standardized nomenclature and methods) and forums to help advance international research in the chemical sciences.
- IUPAC will assist chemistry-related industry in its contributions to sustainable development, wealth creation, and improvement in the quality of life.
- IUPAC will facilitate the development of effective channels of communication in the international chemistry community.
- IUPAC will promote the service of chemistry to society in both developed and developing countries.
- IUPAC will utilize its global perspective to contribute toward the enhancement of education in chemistry and to advance the public understanding of chemistry and the scientific method.
- IUPAC will make special efforts to encourage the career development of young chemists.
- IUPAC will broaden the geographical base of the Union and ensure that its human capital is drawn from all segments of the world chemistry community.
- IUPAC will encourage worldwide dissemination of information about the activities of the Union.
- IUPAC will assure sound management of its resources to provide maximum value for the funds invested in the Union.
Through the vehicle of green chemistry, IUPAC can engage and is engaging the international community in issues of global importance to the environment and to industry, through education of young and established scientists, the provision of technical tools, governmental engagement, communication to the public and scientific communities, and the pursuit of sustainable development. By virtue of its status as a leading and internationally representative scientific body, IUPAC is able to collaborate closely in furthering individual national efforts as well as those of multinational entities.
An important example of such collaboration in the area of green chemistry is that of IUPAC with the Organization for Economic Cooperation and Development (OECD) in the project on "Sustainable Chemistry", aimed at promoting increased awareness of the subject in the member countries. During a meeting of the Environment Directorate (Paris, 6 June 1999), it was proposed that the United States and Italy co-lead the activity, and that implementation of five recommendations to the member countries be accorded the highest priority, namely:
- research and development;
- awards and recognition for work on sustainable chemistry;
- exchange of technical information related to sustainable chemistry;
- guidance on activities and tools to support sustainable chemistry programs;
- sustainable chemistry education.
These recommendations were perceived to have socio-economic implications for worldwide implementation of sustainable chemistry. How IUPAC and, in particular, its Divisions can contribute to this effort is under discussion. IUPAC is recognized for its ability to act as the scientific counterpart to the OECD for all recommendations and activities. Although the initiatives being developed by the OECD are aimed primarily at determining the role that national institutions can play in facilitating the implementation and impact of green chemistry, it is recognized that each of these initiatives also has an important scientific component.
Whether it is developing criteria or providing technical assessment for awards and recognition, identifying appropriate scientific areas for educational incorporation, or providing scientific insight into the areas of need for fundamental research and development, IUPAC can play and is beginning to play an important role as an international scientific authority on green chemistry.
Other multinational organizations, including, among others, the United Nations, the European Union, and the Asian Pacific Economic Community, are now beginning to assess the role that they can play in promoting the implementation of green chemistry to meet environmental and economic goals simultaneously. As an alternative to the traditional regulatory framework often implemented as a unilateral strategy, multinational governmental organizations are discovering that green chemistry, as a nonregulatory, science-based approach, provides opportunities for innovation and economic development that are compatible with sustainable development. In addition, individual nations have been extremely active in green chemistry and provide plentiful examples of the successful utilization of green chemistry technologies. There are rapidly growing activities in government, industry, and academia in the United States, Italy, the United Kingdom, the Netherlands, Spain, Germany, Japan, China, and many other countries in Europe and Asia that testify to the importance of green chemistry to the future of the central science of chemistry around the world.
Organizations and commissions currently involved in programs in green chemistry at the national or international level include, for example:
- U.S. Environmental Protection Agency (EPA), with the "Green Chemistry Program", which involves, among others, the National Science Foundation, the American Chemical Society, and the Green Chemistry Institute;
- European Directorate for R&D (DG Research), which included the goals of sustainable chemistry in the actions and research of the European Fifth Framework Programme;
- Interuniversity Consortium "Chemistry for the Environment", which groups about 30 Italian universities interested in environmentally benign chemistry and funds their research groups;
- UK Royal Society of Chemistry, which promotes the concept of green chemistry through a "UK Green Chemistry Network" and the scientific journal Green Chemistry;
- UNIDO-ICS (International Centre for Science and High Technology of the United Nations Industrial Development Organization), which is developing a global program on sustainable chemistry focusing on catalysis and cleaner technologies, with particular attention to developing and emerging countries (the program is also connected with the UNIDO network of centers for cleaner production); and
- Monash University, which is the first organization in Australia to undertake a green chemistry program.
Footnotes:
1. The terminology "green chemistry" or "sustainable chemistry" is the subject of debate. The expressions are intended to convey the same or very similar meanings, but each has its supporters and detractors, since "green" is vividly evocative but may assume an unintended political connotation, whereas "sustainable" can be paraphrased as "chemistry for a sustainable environment", and may be perceived as a less focused and less incisive description of the discipline.
Other terms have been proposed, such as "chemistry for the environment", but this juxtaposition of keywords already embraces many diversified fields involving the environment, and does not capture the economic and social implications of sustainability. The Working Party decided to adopt the term green chemistry for the purpose of this overview. This decision does not imply official IUPAC endorsement of the choice. In fact, the IUPAC Committee on Chemistry and Industry (COCI) favors, and will continue to use, sustainable chemistry to describe the discipline.
2. J. Elkington, http://www.sustainability.co.uk/sustainability.htm
16

Toro Uribe, Jorge A., and Walter F. Castro. "Condiciones que activan la argumentación del profesor de matemáticas en clase." Revista Chilena de Educación Matemática 12, no. 1 (April 20, 2020): 35–44. http://dx.doi.org/10.46219/rechiem.v12i1.11.

Full text
Abstract:
What conditions activate the mathematics teacher's argumentation during the discussion of tasks in class? This article presents possible answers to this question, within the framework of a study that seeks to understand the argumentation of the mathematics teacher in an ordinary classroom setting. To that end, a theoretical grounding on argumentation in the mathematics classroom is presented. The data form part of a broader study and were collected during tenth-grade lessons (students aged 15 to 16) while the teacher and her students discussed trigonometry tasks. Excerpts from classroom episodes are discussed, describing indicators of the conditions that could activate the teacher's argumentation.
References
Boero, P. (2011). Argumentation and proof: Discussing a "successful" classroom discussion. In M. Pytlak, T. Rowland, & E. Swoboda (Eds.), Proceedings of the 7th Congress of the European Society for Research in Mathematics Education (pp. 120-130). Rzeszów, Poland: ERME.
Common Core State Standards Initiative. (2010). Common Core State Standards for Mathematics. Retrieved from http://www.corestandards.org/assets/CCSSI_Math%20Standards.pdf
Conner, A., Singletary, L., Smith, R., Wagner, P., & Francisco, R. (2014). Teacher support for collective argumentation: A framework for examining how teachers support students' engagement in mathematical activities. Educational Studies in Mathematics, 86(3), 401-429. https://doi.org/10.1007/s10649-014-9532-8
van Eemeren, F., Grassen, B., Krabbe, E., Snoeck Henkemans, F., Verheij, B., & Wagemans, J. (2014). Handbook of argumentation theory. Dordrecht, the Netherlands: Springer.
van Eemeren, F., & Grootendorst, R. (2011). Una teoría sistemática de la argumentación: La perspectiva pragmadialéctica. Buenos Aires, Argentina: Editorial Biblos.
Knipping, C., & Reid, D. (2015). Reconstructing argumentation structures: A perspective on proving processes in secondary mathematics classroom interactions. In A. Bikner-Ahsbahs, C. Knipping, & N. Presmeg (Eds.), Approaches to qualitative research in mathematics education (pp. 75-101). New York: Springer.
Krummheuer, G. (2011). Representation of the notion "learning-as-participation" in everyday situations of mathematics classes. ZDM Mathematics Education, 43(1), 81-90. https://doi.org/10.1007/s11858-010-0294-1
Metaxas, N. (2015). Mathematical argumentation of students participating in a mathematics–information technology project. International Research in Education, 3(1), 82-92. https://doi.org/10.5296/ire.v3i1.6767
Metaxas, N., Potari, D., & Zachariades, T. (2016). Analysis of a teacher's pedagogical arguments using Toulmin's model and argumentation schemes. Educational Studies in Mathematics, 93(3), 383-397. https://doi.org/10.1007/s10649-016-9701-z
Pino-Fan, L., Assis, A., & Castro, W. (2015). Towards a methodology for the characterization of teachers' didactic-mathematical knowledge. EURASIA Journal of Mathematics, Science & Technology Education, 11(6), 1429-1456. https://doi.org/10.12973/eurasia.2015.1403a
Prusak, N., Hershkowitz, R., & Schwarz, B. (2012). From visual reasoning to logical necessity through argumentative design. Educational Studies in Mathematics, 79(1), 19-40. https://doi.org/10.1007/s10649-011-9335-0
Santibáñez, C. (2015). Función, funcionalismo y funcionalización en la teoría pragma-dialéctica de la argumentación. Universum, 30(1), 233-252. https://dx.doi.org/10.4067/S0718-23762015000100014
Schoen, R. C., LaVenia, M., & Ozsoy, G. (2019). Teacher beliefs about mathematics teaching and learning: Identifying and clarifying three constructs. Cogent Education, 6(1), 1-29. https://doi.org/10.1080/2331186X.2019.1599488
Selling, S., Garcia, N., & Ball, D. (2016). What does it take to develop assessments of mathematical knowledge for teaching?: Unpacking the mathematical work of teaching. The Mathematics Enthusiast, 13(1), 35-51.
Sfard, A. (2008). Thinking as communicating: Human development, the growth of discourses, and mathematizing. Cambridge, UK: Cambridge University Press.
Solar, H. (2018). Implicaciones de la argumentación en el aula de matemáticas. Revista Colombiana de Educación, 74, 155-176. https://doi.org/10.17227/rce.num74-6902
Solar, H., & Deulofeu, J. (2016). Condiciones para promover el desarrollo de la competencia de argumentación en el aula de matemáticas. Bolema, 30(56), 1092-1112. https://doi.org/10.1590/1980-4415v30n56a13
Staples, M., & Newton, J. (2016). Teachers' contextualization of argumentation in the mathematics classroom. Theory into Practice, 55(4), 294-301. https://doi.org/10.1080/00405841.2016.1208070
Stylianides, A., Bieda, K., & Morselli, F. (2016). Proof and argumentation in mathematics education research. In Á. Gutiérrez, G. Leder, & P. Boero (Eds.), The second handbook of research on the psychology of mathematics education (pp. 315-351). Rotterdam, the Netherlands: Sense Publishers.
Toro, J., & Castro, W. (2019a). Features of mathematics teacher argumentation in the classroom. In U. T. Jankvist, M. van den Heuvel-Panhuizen, & M. Veldhuis (Eds.), Proceedings of the Eleventh Congress of the European Society for Research in Mathematics Education (pp. 336-337). Utrecht, the Netherlands: Freudenthal Group & Freudenthal Institute, Utrecht University and ERME.
Toro, J., & Castro, W. (2019b). Purposes of mathematics teacher argumentation during the discussion of tasks in the classroom. In M. Graven, H. Venkat, A. Essien, & P. Valero (Eds.), Proceedings of the 43rd Conference of the International Group for the Psychology of Mathematics Education (Vol. 4, pp. 458-477). Pretoria, South Africa: PME.
Toulmin, S. (2007). Los usos de la argumentación. Barcelona, Spain: Ediciones Península.
17

Iveson, Timothy, Kathleen A. Boyd, Rachel S. Kerr, Jose Robles-Zurita, Mark P. Saunders, Andrew H. Briggs, Jim Cassidy, et al. "3-month versus 6-month adjuvant chemotherapy for patients with high-risk stage II and III colorectal cancer: 3-year follow-up of the SCOT non-inferiority RCT." Health Technology Assessment 23, no. 64 (December 2019): 1–88. http://dx.doi.org/10.3310/hta23640.

Full text
Abstract:
Background: Oxaliplatin and fluoropyrimidine chemotherapy administered over 6 months is the standard adjuvant regimen for patients with high-risk stage II or III colorectal cancer. However, the regimen is associated with cumulative toxicity, characterised by chronic and often irreversible neuropathy.
Objectives: To assess the efficacy of 3-month versus 6-month adjuvant chemotherapy for colorectal cancer and to compare the toxicity, health-related quality of life and cost-effectiveness of the two durations.
Design: An international, randomised, open-label, non-inferiority, Phase III, parallel-group trial.
Setting: A total of 244 oncology clinics from six countries: UK (England, Scotland, Wales and Northern Ireland), Denmark, Spain, Sweden, Australia and New Zealand.
Participants: Adults aged ≥ 18 years who had undergone curative resection for high-risk stage II or III adenocarcinoma of the colon or rectum.
Interventions: The adjuvant treatment regimen was either oxaliplatin and 5-fluorouracil or oxaliplatin and capecitabine, randomised to be administered over 3 or 6 months.
Main outcome measures: The primary outcome was disease-free survival. Overall survival, adverse events, neuropathy and health-related quality of life were also assessed. The main cost categories were chemotherapy treatment and hospitalisation. Cost-effectiveness was assessed through incremental cost comparisons and quality-adjusted life-year (QALY) gains between the options, and was reported as net monetary benefit using a willingness-to-pay threshold of £30,000 per QALY per patient.
Results: Recruitment is closed. In total, 6088 patients were randomised (3044 per group) between 27 March 2008 and 29 November 2013, with 6065 included in the intention-to-treat analyses (3-month analysis, n = 3035; 6-month analysis, n = 3030). Follow-up for the primary analysis is complete. The 3-year disease-free survival rate was 76.7% (standard error 0.8%) in the 3-month treatment group and 77.1% (standard error 0.8%) in the 6-month treatment group, equating to a hazard ratio of 1.006 (95% confidence interval 0.909 to 1.114; p-value for non-inferiority = 0.012), confirming non-inferiority of 3-month adjuvant chemotherapy. Frequent adverse events (alopecia, anaemia, anorexia, diarrhoea, fatigue, hand–foot syndrome, mucositis, sensory neuropathy, neutropenia, pain, rash, altered taste, thrombocytopenia and watery eye) showed a significant increase in grade with the 6-month duration; the greatest difference was for sensory neuropathy (grade ≥ 3 in 4% for the 3-month vs. 16% for the 6-month duration), for which a higher rate of neuropathy was seen in the 6-month treatment group from month 4 to ≥ 5 years (p < 0.001). Quality-of-life scores were better in the 3-month treatment group over months 4–6. A cost-effectiveness analysis showed 3-month treatment to cost £4881 less over the 8-year analysis period, with an incremental net monetary benefit of £7246 per patient.
Conclusions: The study achieved its primary end point, showing that 3-month oxaliplatin-containing adjuvant chemotherapy is non-inferior to 6 months of the same regimen; the 3-month treatment showed a better safety profile and cost less. For future work, further follow-up will refine long-term estimates of the effect of duration on disease-free survival and overall survival. The health economic analysis will be updated to include long-term extrapolation for subgroups. We expect these analyses to be available in 2019–20. The Short Course Oncology Therapy (SCOT) study translational samples may allow the identification of patients who would benefit from longer treatment based on the molecular characteristics of their disease.
Trial registration: Current Controlled Trials ISRCTN59757862 and EudraCT 2007-003957-10.
Funding: This project was funded by the National Institute for Health Research (NIHR) Health Technology Assessment programme and will be published in full in Health Technology Assessment; Vol. 23, No. 64. See the NIHR Journals Library website for further project information. This research was supported by the Medical Research Council (transferred to NIHR Evaluation, Trials and Studies Coordinating Centre – Efficacy and Mechanism Evaluation; grant reference G0601705), the Swedish Cancer Society and Cancer Research UK Core Clinical Trials Unit Funding (funding reference C6716/A9894).
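The net-monetary-benefit figure quoted above follows from the standard identity NMB = λ·ΔQALY − ΔCost at the £30,000 willingness-to-pay threshold. A minimal sketch, with the QALY gain back-calculated from the reported figures (≈0.079, an approximation rather than a reported trial estimate):

```python
# Incremental net monetary benefit at a willingness-to-pay threshold.
# Cost and NMB figures are from the abstract; the QALY gain is
# back-calculated from them and therefore only approximate.
WTP = 30_000            # GBP per QALY (threshold used in the trial)
delta_cost = -4_881     # 3-month minus 6-month cost, i.e. a saving
delta_qaly = 0.0788     # back-calculated: (7246 - 4881) / 30000

nmb = WTP * delta_qaly - delta_cost
print(f"Incremental NMB: £{nmb:,.0f} per patient")  # ~£7,245
```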
18

سلطان, ولاء. "اسهامات عمليات ادارة المعرفة في تحقيق جودة التعليم الجامعي دراسة استطلاعية تحليلية باعتماد معمارية المنطق المضبب (fuzzy logic) في المعهد التقني/ نينوى." Al-Kitab Journal for Human Sciences 1, no. 2 (October 4, 2020): 246–64. http://dx.doi.org/10.32441/kjhs.01.02.p19.

Full text
Abstract:
The aim of this research is to focus on higher education organizations, which usually aspire to reform the educational system in order to adapt to the constantly changing requirements of society. The way to do this is by raising the quality and efficiency of education in the university system to meet international standards. Reform must take into account the effectiveness and quality of education, be compatible with the global system, and enable graduates to integrate easily into the modern labor market. As knowledge has become the critical factor in the survival and sustainability of organizations, knowledge management processes have become an important productive component in the continuous flow of contemporary managerial concepts. Universities should also be prepared to work in the competitive education market, assuming greater administrative self-independence, a flexible regulatory framework and adequate funding. Today, higher education institutions require more openness and transparency, and research is being directed at how public institutions perform at the higher education level, which affects the performance management of these institutions. Emphasis was placed on the extent of the contribution of knowledge management processes and their role in achieving the quality of university education, a matter that has significantly stimulated the university's concerns regarding the difficulty of preparing students for life and work. The expansion of higher education in the outside world has taken a completely different direction due to intense competition and the emergence of open universities and the Internet revolution, while this education remains self-sustaining. Moreover, the increase in the number of colleges and students has led to increased problems of quality control in education; this is a major and important issue faced by universities. An analytical approach was used to analyze the information collected by the questionnaire, which was designed with diagnostic and analytical clarity regarding the dimensions of the research, its components and the measurement mechanism. On the applied side, fuzzy logic architecture was adopted in examining the level of knowledge management processes in achieving the quality of learning. The study population covered all sections of the institute; a sample of 50 teachers was selected, and all data were subjected to statistical analysis using the SPSS package. Conclusions were reached, most notably in determining the level of the actual membership function of the application of knowledge management processes to achieve the quality of university learning, taking into account the necessary requirements for the implementation of these processes, including training, administrative, organizational, incentive and technological requirements. A number of proposals were put forward to Iraqi administrations, including that of the (knowledge-driven) organization. One of the most important of these proposals is to generate the conviction among university officials that quality management and its applications are necessary and decisive for the university's continuity, growth and development, by improving the quality of its performance and generating the ability to meet the challenges that may arise in the future. The application of knowledge management processes proceeds through the adoption of modern programs and quantitative methods in interpreting theoretical reality and, from there, reflecting it in practice.
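As an illustration of the fuzzy-logic machinery the abstract invokes, the sketch below grades a questionnaire score into "low", "medium" and "high" levels with triangular membership functions. The cut-points and the score are invented for illustration; the study's actual membership functions are not given in the abstract.

```python
def triangular(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Hypothetical fuzzy sets over a 1-5 Likert-scale mean score
levels = {
    "low":    lambda x: triangular(x, 0.0, 1.0, 3.0),
    "medium": lambda x: triangular(x, 2.0, 3.0, 4.0),
    "high":   lambda x: triangular(x, 3.0, 5.0, 6.0),
}

score = 3.4  # e.g. a respondent's mean rating of a knowledge management process
memberships = {name: f(score) for name, f in levels.items()}
print(memberships)  # degree of membership in each level
```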
19

Jordan, Rachel E., Saimma Majothi, Nicola R. Heneghan, Deirdre B. Blissett, Richard D. Riley, Alice J. Sitch, Malcolm J. Price, et al. "Supported self-management for patients with moderate to severe chronic obstructive pulmonary disease (COPD): an evidence synthesis and economic analysis." Health Technology Assessment 19, no. 36 (May 2015): 1–516. http://dx.doi.org/10.3310/hta19360.

Full text
Abstract:
Background: Self-management (SM) support for patients with chronic obstructive pulmonary disease (COPD) is variable in its coverage, content, method and timing of delivery. There is insufficient evidence for which SM interventions are the most effective and cost-effective.
Objectives: To undertake (1) a systematic review of the evidence for the effectiveness of SM interventions commencing within 6 weeks of hospital discharge for an exacerbation of COPD (review 1); (2) a systematic review of the qualitative evidence about patient satisfaction, acceptance and barriers to SM interventions (review 2); (3) a systematic review of the cost-effectiveness of SM support interventions within 6 weeks of hospital discharge for an exacerbation of COPD (review 3); (4) a cost-effectiveness analysis and economic model of post-exacerbation SM support compared with usual care (UC) (economic model); and (5) a wider systematic review of the evidence of the effectiveness of SM support, including interventions (such as pulmonary rehabilitation) in which there are significant components of SM, to identify which components are the most important in reducing exacerbations and hospital admissions/readmissions and improving quality of life (review 4).
Methods: The following electronic databases were searched from inception to May 2012: MEDLINE, MEDLINE In-Process and Other Non-Indexed Citations, EMBASE, Cochrane Central Register of Controlled Trials (CENTRAL) and Science Citation Index [Institute for Scientific Information (ISI)]. Subject-specific databases were also searched: the PEDro physiotherapy evidence database, PsycINFO and the Cochrane Airways Group Register of Trials. Ongoing studies were sourced through the metaRegister of Current Controlled Trials, the International Standard Randomised Controlled Trial Number database, the World Health Organization International Clinical Trials Registry Platform Portal and ClinicalTrials.gov. Specialist abstracts and conference proceedings were sourced through ISI's Conference Proceedings Citation Index and the British Library's Electronic Table of Contents (Zetoc). Hand-searching of European Respiratory Society, American Thoracic Society and British Thoracic Society conference proceedings from 2010 to 2012 was also undertaken, and selected websites were examined. Titles, abstracts and full texts of potentially relevant studies were scanned by two independent reviewers. Primary studies were included if ≈90% of the population had COPD, the majority were of at least moderate severity and the study reported on any intervention that included a SM component or package. Accepted study designs and outcomes differed between the reviews. Risk of bias for randomised controlled trials (RCTs) was assessed using the Cochrane tool. Random-effects meta-analysis was used to combine studies where appropriate. A Markov model, taking a 30-year time horizon, compared a SM intervention immediately following a hospital admission for an acute exacerbation with UC. Incremental costs and quality-adjusted life-years (QALYs) were calculated, with sensitivity analyses.
Results: From 13,355 abstracts, 10 RCTs were included for review 1, one study each for reviews 2 and 3, and 174 RCTs for review 4. Available studies were heterogeneous and many were of poor quality. Meta-analysis identified no evidence of benefit of post-discharge SM support on admissions [hazard ratio (HR) 0.78, 95% confidence interval (CI) 0.52 to 1.17], mortality (HR 1.07, 95% CI 0.74 to 1.54) or most other health outcomes. A modest improvement in health-related quality of life (HRQoL) was identified, but this was possibly biased due to high loss to follow-up. The economic model was speculative due to uncertainty in the impact on readmissions. Compared with UC, post-discharge SM support (delivered within 6 weeks of discharge) was more costly and resulted in better outcomes (£683 cost difference and 0.0831 QALY gain). Studies assessing the effect of individual components were few, but only exercise significantly improved HRQoL (3-month St George's Respiratory Questionnaire 4.87, 95% CI 3.96 to 5.79). Multicomponent interventions produced an improved HRQoL compared with UC (mean difference 6.50, 95% CI 3.62 to 9.39, at 3 months). Results were consistent with a potential reduction in admissions. Interventions with more enhanced care from health-care professionals improved HRQoL and reduced admissions at 1-year follow-up. Interventions that included supervised or unsupervised structured exercise resulted in significant and clinically important improvements in HRQoL up to 6 months.
Limitations: This review was based on a comprehensive search strategy that should have identified most of the relevant studies. The main limitations result from the heterogeneity of the available studies and widespread problems with their design and reporting.
Conclusions: There was little evidence of benefit of providing SM support to patients shortly after discharge from hospital, although the effects observed were consistent with possible improvement in HRQoL and reduction in hospital admissions. It was not easy to tease out the most effective components of SM support packages, although interventions containing exercise seemed the most effective. Future work should include qualitative studies to explore barriers and facilitators to SM post exacerbation and novel approaches to affect behaviour change, tailored to the individual and their circumstances. Any new trials should be properly designed and conducted, with special attention to reducing loss to follow-up. Individual participant data meta-analysis may help to identify the most effective components of SM interventions.
Study registration: This study is registered as PROSPERO CRD42011001588.
Funding: The National Institute for Health Research Health Technology Assessment programme.
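As an illustration of the pooling step described in the methods ("random-effects meta-analysis was used to combine studies"), here is a minimal DerSimonian–Laird sketch for log hazard ratios. The three studies and their standard errors are hypothetical, not data from the review.

```python
import math

def dersimonian_laird(effects, ses):
    """Pool log-scale effects (e.g. log HRs) with DerSimonian-Laird
    random effects; returns pooled effect, its SE and tau^2."""
    w = [1 / se**2 for se in ses]                      # fixed-effect weights
    fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    q = sum(wi * (yi - fixed)**2 for wi, yi in zip(w, effects))
    df = len(effects) - 1
    c = sum(w) - sum(wi**2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                      # between-study variance
    w_re = [1 / (se**2 + tau2) for se in ses]          # random-effects weights
    pooled = sum(wi * yi for wi, yi in zip(w_re, effects)) / sum(w_re)
    se_pooled = math.sqrt(1 / sum(w_re))
    return pooled, se_pooled, tau2

# Hypothetical log hazard ratios for hospital admission, with standard errors
log_hrs = [math.log(0.70), math.log(0.95), math.log(0.80)]
ses = [0.25, 0.20, 0.30]
pooled, se, tau2 = dersimonian_laird(log_hrs, ses)
lo, hi = math.exp(pooled - 1.96 * se), math.exp(pooled + 1.96 * se)
print(f"Pooled HR {math.exp(pooled):.2f} (95% CI {lo:.2f} to {hi:.2f}), tau^2 = {tau2:.3f}")
```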
20

Alweera, Diluka, Nisha Sulari Kottearachchi, Dikkumburage Radhika Gimhani, and Kumudu Senarathna. "Single nucleotide polymorphisms in GBBSI and SSIIa genes in relation to starch physicochemical properties in selected rice (Oryza sativa L.) varieties." World Journal of Biology and Biotechnology 5, no. 2 (May 3, 2020): 23. http://dx.doi.org/10.33865/wjb.005.02.0305.

Full text
Abstract:
Starch quality is one of the most important agronomic traits in rice (Oryza sativa L.). In this study, we identified single nucleotide polymorphisms (SNPs) in the Waxy and Alk genes of eight rice varieties and their associations with starch physicochemical properties, i.e. amylose content (AC) and gelatinization temperature (GT). Seven Sri Lankan rice varieties, Pachchaperumal, Herathbanda, At 354, Bg 352, Balasuriya, H 6 and Bw 295-5, were detected as high amylose varieties, while Nipponbare exhibited low amylose content. In silico analysis of the Waxy gene revealed that all tested Sri Lankan varieties possessed 'G' (Wxa allele) instead of 'T' in the first intron, which could explain varieties with high and intermediate amylose content. All Sri Lankan varieties had 'A' instead of 'C' in exon 6 of the Waxy gene, and this tallied with the varieties showing high amylose content. Therefore, possessing the Wxa allele in the first intron and 'A' in exon 6 could be used as a molecular marker for the selection of high amylose varieties, as validated using several Sri Lankan varieties. All Sri Lankan varieties except Bw 295-5 exhibited the intermediate type of GT, which could not be explained using the so far reported allelic differences in the Alk gene. However, Bw 295-5, which is a low GT variety, had two nucleotide polymorphisms in the last exon of the Alk gene, i.e. 'G' and 'TT', that represent the low GT class. Therefore, it can be concluded that the sequence variations of the Waxy and Alk genes reported in this study are useful in breeding local rice varieties with preferential amylose content and GT class.
Keywords: Alk gene, amylose content, single nucleotide polymorphism, Waxy gene.
INTRODUCTION
Rice (Oryza sativa L.) is one of the leading food crops of the world. More than half of the world's population relies on rice as the major daily source of calories and protein (Sartaj and Suraweera, 2005). After grain yield, quality is the most important aspect of rice breeding. Grain size and shape largely determine the market acceptability of rice, while cooking quality is influenced by the properties of starch. In rice grains, starch is the major component that primarily controls rice quality. Starch consists of two forms of glucose polymers, relatively unbranched amylose and a highly branched amylopectin. Starch-synthesizing genes may contribute to variation in starch physicochemical properties because they affect the amount and structure of amylose and amylopectin in the rice grain (Kharabian-Masouleh et al., 2012). Amylose content (AC), gelatinization temperature (GT) and gel consistency (GC) are the three most important determinants of eating and cooking quality. Amylose content is the ratio of the amount of amylose present in the endosperm to total starch content. Rice varieties are grouped based on their amylose content into waxy (0-2%), very low (3-9%), low (10-19%), intermediate (20-25%) and high (> 25%) (Kongseree and Juliano, 1972). The most widely used method for amylose determination is a colorimetric assay in which iodine binds with amylose to produce a blue-purple color, measured spectrophotometrically at a single wavelength (620 nm). Low amylose content is usually associated with tender, cohesive and glossy cooked rice, while high amylose content is associated with firm, fluffy and separate grains of cooked rice. The Waxy (Wx) gene, which encodes granule-bound starch synthase I (GBSSI), is the major gene controlling AC in rice (Nakamura, 2002). The Waxy gene is located on chromosome six, and various single nucleotide polymorphisms (SNPs) of Wx have been found, including a 'G' to 'T' SNP in the first intron, an 'A' to 'C' SNP in the sixth exon and a 'C' to 'T' SNP in the tenth exon (Larkin and Park, 2003). The 'AGGTATA' sequence at the 5' splice junction coincides with the presence of the Wxa allele, while the 'AGTTATA' sequence coincides with the presence of the Wxb allele. Therefore, all intermediate and high amylose cultivars had the 'G' nucleotide, while low amylose cultivars had the 'T' nucleotide, at the putative leader intron 5' splice site. The cytosine and thymidine (CT) dinucleotide repeats in the 5'-untranslated region (UTR) of the Waxy gene have been reported to be a factor associated with AC. However, the relationship between these polymorphisms and amylose content is not clear. Amylopectin chain length distribution plays a very important role in determining GT in cooked rice. The time required for cooking is determined by the gelatinization temperature of starch. It is important because it affects the texture of cooked rice and is related to the cooking time of rice. The gelatinization temperature is estimated by the alkali digestibility test and measured by the alkali spreading value (ASV): the degree of spreading of individual milled rice kernels in a weak alkali solution (1.7% KOH) is very closely correlated with gelatinization temperature. According to the ASV, rice varieties may be classified into low (55 to 69 °C), intermediate (70 to 74 °C) and high (> 74 °C) GT classes. In breeding programs, the ASV is extensively used to estimate the gelatinization temperature. The synthesis of amylopectin is more complex than that of amylose. Polymorphisms in the starch synthase IIa (SSIIa) gene, known as the Alk gene, are responsible for the differences in GT in rice (Umemoto and Aoki, 2005; Waters et al., 2006). Two single nucleotide polymorphisms (SNPs) in the last exon of the Alk gene are responsible for these differences: a 'G'/'A' SNP at the 4424 bp position and 'GC'/'TT' SNPs at the 4533/4534 bp positions, with reference to the Nipponbare rice genomic sequence. Biochemical analysis clearly showed that the amino acids encoded by these two SNPs are essential for SSIIa enzyme activity (Nakamura et al., 2005). Low SSIIa enzyme activity results in S-type amylopectin, which is enriched in short chains, whereas high SSIIa enzyme activity produces L-type amylopectin (Umemoto et al., 2004). Therefore, the combination of 'G' at SNP3 and 'GC' at SNP4 is required to produce L-type rice starch, which has a higher GT relative to S-type starch. GC is a standard assay used in rice improvement programs to determine the texture of softness and firmness in high amylose rice cultivars; intermediate and low amylose rice usually has soft gel consistency. Sequence variation in exon 10 of the Waxy gene is associated with GC (Tran et al., 2011).
OBJECTIVES
The objectives of this study were to detect polymorphisms in major starch-synthesizing genes among several rice cultivars as models and to determine the relationship between their SNP variations and starch physicochemical properties. We also analyzed major starch-synthesizing gene sequences of several Sri Lankan rice varieties in silico, aiming at utilizing this information in rice breeding programs.
MATERIALS AND METHODS
Plant materials: Seeds of eight Oryza sativa L.
accessions were obtained from the Rice Research and Development Institute (RRDI), Bathalagoda, Sri Lanka, and the Gene Bank of the Plant Genetic Resources Center (PGRC), Gannoruwa.
Characterization of grain physical parameters: Grain length and width were determined using a vernier caliper. Ten grains from each sample were collected randomly and measured to obtain the average length and width of the milled rice. Based on the length and width of the grains, the milled rice grains were classified into four classes (table 1) according to the method accepted by RRDI, Bathalagoda, Sri Lanka. According to the scale, L/S = long slender, L/M = long medium, I/B = intermediate bold and S/R = short round.
Analysis of amylose content: Rice samples were dehusked and polished prior to milling. Ten whole milled rice kernels from each of the eight rice samples were ground separately using a mortar and pestle. Amylose content per 100 mg was determined by measuring the blue value of rice varieties as described by Juliano (1971). A 100 mg rice sample was transferred into a 100 mL volumetric flask and 1 mL of 95% ethanol was added. Then 9 mL of 1 N NaOH was added and the content was boiled for 20 min to gelatinize the starch. After cooling, the volume was made up to 100 mL, and 5 mL of the starch solution was pipetted into a 100 mL volumetric flask. The blue color was developed by adding 1 mL of 1 N acetic acid and 2 mL of iodine solution (0.2 g iodine and 2.0 g potassium iodide in 10 mL aqueous solution). The volume was then made up to 100 mL with distilled water and the solution was kept for 20 min after shaking. Finally, the absorbance of the solution was measured at 620 nm using a Spectrophotometer T80 (PG Instruments Limited) as described by Juliano (1971). A standard curve was prepared using 40 mg of potato amylose to calculate the amylose content of rice varieties from the absorbance values (a worked sketch follows this section). Forty mg of potato amylose was put into a 100 mL volumetric flask, 1 mL of 95% ethanol and 9 mL of NaOH were added, and the content was heated for 20 min at boiling temperature. After cooling, the volume of the solution was made up to 100 mL using distilled water. Then 1 mL, 2 mL, 3 mL, 4 mL and 5 mL of the amylose solution were pipetted into 100 mL flasks, and 0.2 mL, 0.4 mL, 0.6 mL, 0.8 mL and 1 mL of 1 N acetic acid were added to the flasks, respectively. Finally, 2 mL of iodine solution was added to each flask and the volume was made up to 100 mL with distilled water. The solutions were left to stand for 20 min after shaking, and absorbance values were measured at 620 nm and plotted against the concentration of anhydrous amylose (mg).
Analysis of gelatinization temperature: GT was indirectly measured by the alkali spreading value. Husked and polished seeds per accession were used for the analysis. Duplicate sets of six milled grains without cracks from each sample were put into Petri dishes. About 10 mL of 1.7% KOH was added and the grains were spread in the Petri dish to provide enough space. A constant temperature of 30 °C was maintained to ensure better reproducibility. After 23 hrs, the degree of disintegration was quantified by a standard protocol with a numerical scale of 1–7 (table 2) as reported by Cruz and Khush (2000).
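The standard-curve arithmetic described above amounts to a least-squares line of absorbance against amylose mass, inverted for unknown samples and scaled by the aliquot taken. A minimal sketch follows; the absorbance readings are hypothetical stand-ins for the potato-amylose series, not measured values.

```python
import numpy as np

# Hypothetical absorbance readings at 620 nm for the potato-amylose
# standards (0.4, 0.8, 1.2, 1.6, 2.0 mg amylose per 100 mL flask,
# i.e. 1-5 mL aliquots of the 40 mg / 100 mL stock).
amylose_mg = np.array([0.4, 0.8, 1.2, 1.6, 2.0])
absorbance = np.array([0.11, 0.22, 0.32, 0.44, 0.53])

slope, intercept = np.polyfit(amylose_mg, absorbance, 1)  # least-squares line

def amylose_percent(abs_sample, sample_mg=100, aliquot_fraction=5 / 100):
    """Convert a sample absorbance to % amylose.
    The 5 mL aliquot of the 100 mL digest carries 5% of the sample."""
    mg_in_flask = (abs_sample - intercept) / slope   # invert the fitted line
    mg_in_sample = mg_in_flask / aliquot_fraction    # scale up to whole digest
    return 100 * mg_in_sample / sample_mg

print(f"{amylose_percent(0.35):.1f}% amylose")  # hypothetical sample reading
```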
As reported by Juliano (2003), the GT of rice was then determined using the alkali spreading scale, where 1.0-2.5: high (74-80 °C); 2.6-3.4: high-intermediate (70-74 °C); 3.5-5.4: intermediate (70-74 °C); and 5.5-7.0: low (55-70 °C).
Bioinformatics and statistical analysis: The available literature was used to identify the most likely candidate genes associated with rice starch quality and the SNPs of each gene (Hirose et al., 2006; Waters and Henry, 2007; Tran et al., 2011). For all tested varieties except Bg 352 and At 354, the DNA sequence of each gene was retrieved from the Rice SNP-Seek database (http://snp-seek.irri.org/). The gene sequences of At 354 and Bg 352 were obtained from the National Research Council 16-016 project, Wayamba University of Sri Lanka. Multiple sequence alignment of the DNA sequences was conducted using Clustal Omega software (https://www.ebi.ac.uk/Tools/msa/clustalo/). Starch physicochemical data were subjected to a one-way analysis of variance (ANOVA) followed by Duncan's New Multiple Range Test (DNMRT) to determine the statistical differences among varieties at the significance level of p ≤ 0.05. Statistical analysis was done using SAS version 9.1 (SAS, 2004).
RESULTS AND DISCUSSION
Physical properties of rice grains: Physical properties such as length, width, size, shape and pericarp color of the rice grains obtained from the eight rice varieties are given in table 3. Classification of rice grains was carried out according to their sizes and shapes based on Juliano (1985). The size of the rice grains was determined from grain length, while grain shape was determined by the length-to-width ratio of the rice kernel. In the local market, rice is classified as Samba (short grain), Nadu (intermediate grain) and Kora (long/medium grain) based on the size of the grain (Pathiraje et al., 2010). The lengths of the rice kernels varied from 5.58 to 6.725 mm across all varieties. The highest grain length and width were given by At 354 and Pachchaperumal, respectively. The varieties Bw 295-5 and H 6 showed a length:width ratio over 3, which is considered slender in grain shape. Bw 295-5, H 6, At 354, Bg 352 and Nipponbare possessed white pericarp, and the others possessed red pericarp.
Relationship between amylose content and SNP variation at the Waxy locus in selected varieties: Amylose content was measured in seven Sri Lankan rice varieties and one exotic rice variety. The amylose content of the evaluated varieties varied significantly (p ≤ 0.05), with the lowest (15.11%) and the highest (28.63%) found in Nipponbare and Bw 295-5, respectively (table 4). The majority of the evaluated varieties fell into the high AC category (25-28%). Only Nipponbare could be clearly categorized under the low amylose group (table 4). The amylose contents of Bg 352, Pachchaperumal and Herathbanda had already been determined in the earlier studies of Rebeira et al. (2014) and Fernando et al. (2015), and most of the data obtained in the present experiment agree with their results. Major genes such as Waxy and their functional SNPs have a major influence on amylose in rice (Nakamura et al., 2005). Accordingly, the 'G'/'T' single nucleotide polymorphism at the 5' leader intron splice site of GBSSI explained the variation in amylose content of the varieties: high and intermediate amylose varieties have 'AGGTATA', while low amylose varieties have the sequence 'AGTTATA', which might lead to a decrease in splicing efficiency. Therefore, the GBSSI activity of Nipponbare might be considerably weak, resulting in starch with low amylose content. Hence, the 'G'/'T' polymorphism clearly differentiates low amylose rice varieties, as reported by Nakamura et al. (2005). In GBSSI, Larkin and Park (2003) identified an 'A'/'C' polymorphism in exon 6 and a 'C'/'T' polymorphism in exon 10 which result in non-synonymous amino acid changes. Chen et al. (2008) reported that the non-synonymous 'A'/'C' SNP at exon 6 had the highest possible impact on GBSSI. The 'A'/'C' polymorphism in exon 6 causes a tyrosine/serine amino acid substitution, while the 'C'/'T' polymorphism in exon 10 causes a serine/proline amino acid substitution. In view of this information, there is a relationship between the polymorphisms detected by in silico analysis and the amylose content obtained in our experiment. Of the eight tested rice varieties, only one, Nipponbare, was categorized as a low amylose variety (10-19%), and it exhibited the 'T' nucleotide at the intron splice site (table 4; figure 1). Varieties such as Pachchaperumal, Balasuriya, Bw 295-5, H 6, Herathbanda, At 354 and Bg 352, which contained high amylose (> 25%), had 'G' and 'A' nucleotides at the intron splice site and exon 6, respectively (table 4; figure 1). The predominant allelic pattern is different in varieties containing intermediate amylose content (20-25%), which show 'G' and 'C' nucleotides at these positions, respectively; among the selected rice varieties, no intermediate amylose variety was found.
Relationship between gel consistency and SNP variation at the Waxy locus: In this study, GC data for Herathbanda, Hondarawalu, Kuruluthuda, Pachchaperumal and Bg 352 were obtained from Fernando et al. (2015). The results of Tran et al. (2011) showed that the exon 10 'C'/'T' SNP of Wx mainly affects GC: rice with a 'C' at exon 10 had soft and viscous gels once cooked, whereas a sample with a 'T' had short and firm gels. In this study, Herathbanda, Hondarawalu, Kuruluthuda and Pachchaperumal had the 'C' nucleotide and Bg 352 had the 'T' nucleotide in exon 10 (table 5; figure 2). However, the 'C'/'T' substitution analysis could not be used to explain the GC of the tested varieties.
Relationship between gelatinization temperature and SNP variation at the Alk locus in selected rice varieties: Although there were differences in the scores, the degree of disintegration of all samples was saturated at 23 hrs. Most of the selected rice varieties showed an intermediate disintegration score. The varieties Pachchaperumal, Balasuriya, H 6, Herathbanda, At 354 and Bg 352 were categorized into the intermediate GT class (70–74 °C), as indicated by an alkali spreading (AS) value of 5 (table 6; figure 3). Nipponbare and Bw 295-5 showed the highest disintegration score, indicating dispersion of all grains; hence these varieties were categorized into the low GT class (55-69 °C), as indicated by an AS value of 6 (table 6; figure 3). However, no high GT class rice varieties (> 74 °C) were found among the tested samples. Chromosomal mutation within the Alk gene has led to a number of single nucleotide polymorphisms (SNPs). Umemoto et al. (2004) identified four SNPs in the Alk gene; of these, SNP3 and SNP4 may be important genetic polymorphisms associated with GT class. According to SNP3 and SNP4, the eight rice varieties could be classified into either high GT or low GT types.
If there is 'A' instead of 'G' at the 4424 bp position of the Alk gene, with reference to the Nipponbare rice genomic sequence, it codes for methionine instead of a valine amino acid residue in SSIIa, whilst two adjacent SNPs at bases 4533 and 4534 code for either leucine ('GC') or phenylalanine ('TT'). Rice varieties with high GT starch had a combination of valine and leucine at these residues. Rice varieties with low GT starch had a combination of either methionine and leucine or valine and phenylalanine at these same residues. Nipponbare carried the 'A' and 'GC' nucleotides, while Bw 295-5 carried the 'G' and 'TT' nucleotides; hence these varieties were classified into the low GT class. Varieties such as Pachchaperumal, Balasuriya, H 6, Herathbanda, At 354 and Bg 352 carried the 'G' and 'GC' nucleotides and were classified as high GT rice varieties. However, intermediate GT status could not be determined from the SNP3 and SNP4 mutations of the Alk gene (table 6; figure 4).
In silico analysis of the polymorphisms in the GBSSI and Alk genes of rice varieties retrieved from the Rice SNP-Seek database: In this study, the GBSSI and Alk genes were compared with the sequences retrieved from the Rice SNP-Seek database to validate the SNPs further. As previously reported by Ayres et al. (1997), all low amylose varieties had the sequence 'AGTTATA' in exon 1. In agreement with the preliminary work of Larkin and Park (2003), all of the intermediate amylose varieties have the allelic pattern GCC, and all of the high amylose varieties have either the GAC or GAT allele of GBSSI. Among the 42 rice accessions with a Sri Lankan pedigree, four allelic patterns were found: TAC, GCC, GAC and GAT (table 7). In these allelic patterns, the first letter corresponds to the 'G'/'T' polymorphism at the 5' leader intron splice junction, the second letter corresponds to the 'A'/'C' polymorphism in exon 6, and the third letter corresponds to the 'C'/'T' polymorphism in exon 10 of the Waxy gene. Analysis of the 'G'/'T' polymorphism at the Wx locus showed that 41 rice cultivars shared the same 'AGGTATA' sequence at the 5' leader intron splice junction. Only one rice cultivar, Puttu nellu, was found with the 'T' nucleotide at the intron 1/exon 1 junction site, and could be categorized as a low amylose variety (table 7). As discussed above, varieties with an intermediate level of apparent amylose could be reliably distinguished from those with higher apparent amylose based on the SNP in exon 6. Hence, only three rice varieties, Nalumoolai Karuppan, Pannithi and Godawel, with the 'C' nucleotide in exon 6, exhibited the possibility of containing intermediate amylose content (table 7). High activity of GBSSI produces high amylose content, leading to a non-waxy, non-sticky or non-glutinous phenotype. Therefore, according to the in silico genotypic results, the remaining 38 rice varieties may produce high amylose content in the endosperm (table 7). Supporting this, Abeysekera et al. (2017) reported that most Sri Lankan rice varieties contain high amylose content. Targeted sequence analysis of exon 8 of the Alk gene in the 42 rice cultivars found three SNPs that result in a changed amino acid sequence; of these three SNPs, two were reported to be correlated with possible GT differences. Accordingly, the rice varieties Puttu nellu and 3210 carried the 'G' and 'TT' nucleotides at SNP3 and SNP4, respectively (table 7), as summarised in the sketch below.
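The allele-to-class rules assembled above (Ayres et al., 1997; Larkin and Park, 2003; Umemoto et al., 2004) reduce to a small lookup. A minimal sketch, using only the rules as stated in the text; genotype letters are supplied by the caller:

```python
def amylose_class(splice_site, exon6):
    """Wx-based amylose class from the 5' intron splice-site SNP (G/T)
    and the exon 6 SNP (A/C), per Ayres et al. (1997) and
    Larkin and Park (2003)."""
    if splice_site == "T":
        return "low (Wxb)"
    if exon6 == "C":
        return "intermediate (Wxa)"
    return "high (Wxa)"           # 'G' at the splice site with 'A' in exon 6

def gt_class(snp3, snp4):
    """Alk (SSIIa)-based GT class from SNP3 (G/A at 4424 bp) and
    SNP4 ('GC'/'TT' at 4533/4534 bp), per Umemoto et al. (2004)."""
    if snp3 == "G" and snp4 == "GC":
        return "high GT (L-type amylopectin)"
    return "low GT (S-type amylopectin)"  # A+GC or G+TT combinations

# Examples taken from the text:
print(amylose_class("G", "A"))   # Pachchaperumal and other high-AC varieties
print(amylose_class("T", "A"))   # Nipponbare: low amylose
print(gt_class("G", "TT"))       # Bw 295-5 and Puttu nellu/3210: low GT
print(gt_class("G", "GC"))       # genotypically high GT varieties
```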
Hence these varieties can be classified into the low GT class; all other varieties carried the ‘G’ and ‘GC’ nucleotides at SNP3 and SNP4 respectively, and can therefore possibly be classified as high GT varieties (table 7). However, further experiments are necessary to check the phenotypic variation in grain amylose content and GT class of the in silico analyzed rice varieties.

CONCLUSION
The present results revealed the relationship between SNP variation at the Waxy locus and the amylose content of the selected rice varieties. Accordingly, Pachchaperumal, At 354, Bg 352, Herathbanda, H 6, Balasuriya and Bw 295-5, with high amylose content, had ‘G’ instead of ‘T’ in the first intron, exhibiting the presence of the Wxa allele, with reference to Nipponbare, which had low amylose content. All tested varieties also had ‘A’ in exon 6 of the Waxy gene. Thus the present findings, i.e. the presence of the Wxa allele and the SNP ‘A’ in exon 6, could be used as potential molecular markers for the selection of high amylose varieties. In addition, Bw 295-5, a low GT variety, had two SNP variations in the last exon of the Alk gene, i.e. ‘G’ and ‘TT’, which are likely to represent the low GT class. Accordingly, the sequence variations identified in the Waxy and Alk genes could be utilized in future rice breeding programs for the development of varieties with the preferred amylose content and GT class. (A schematic sketch of these allele-to-class rules is given after the references.)

ACKNOWLEDGMENTS
The Director and staff of the Gene Bank, Plant Genetic Resources Center, Gannoruwa are acknowledged for providing rice accessions.

CONFLICT OF INTEREST
The authors have no conflict of interest.

REFERENCES
Abeysekera, W., G. Premakumara, A. Bentota and D. S. Abeysiriwardena, 2017. Grain amylose content and its stability over seasons in a selected set of rice varieties grown in Sri Lanka. Journal of Agricultural Sciences Sri Lanka, 12(1): 43-50.
Ayres, N., A. McClung, P. Larkin, H. Bligh, C. Jones and W. Park, 1997. Microsatellites and a single-nucleotide polymorphism differentiate apparent amylose classes in an extended pedigree of US rice germplasm. Theoretical and Applied Genetics, 94(6-7): 773-781.
Chen, M.-H., C. Bergman, S. Pinson and R. Fjellstrom, 2008. Waxy gene haplotypes: Associations with apparent amylose content and the effect by the environment in an international rice germplasm collection. Journal of Cereal Science, 47(3): 536-545.
Cruz, N. D. and G. Khush, 2000. Rice grain quality evaluation procedures. Aromatic Rices, 3: 15-28.
Fernando, H., T. Kajenthini, S. Rebeira, T. Bamunuarachchige and H. Wickramasinghe, 2015. Validation of molecular markers for the analysis of genetic diversity of amylase content and gel consistency among representative rice varieties in Sri Lanka. Tropical Agricultural Research, 26(2): 317-328.
Hirose, T., T. Ohdan, Y. Nakamura and T. Terao, 2006. Expression profiling of genes related to starch synthesis in rice leaf sheaths during the heading period. Physiologia Plantarum, 128(3): 425-435.
Juliano, B., 1971. A simplified assay for milled rice amylose. Cereal Science Today, 16: 334-360.
Juliano, B. O., 1985. Rice: Chemistry and technology. The American Association of Cereal Chemists, Inc., St. Paul, Minnesota, USA, 774.
Juliano, B. O., 2003. Rice chemistry and quality. Island Publishing House, Manila: 1-7.
Kharabian-Masouleh, A., D. L. Waters, R. F. Reinke, R. Ward and R. J. Henry, 2012. SNP in starch biosynthesis genes associated with nutritional and functional properties of rice. Scientific Reports, 2(1): 1-9.
Kongseree, N. and B. O. Juliano, 1972. Physicochemical properties of rice grain and starch from lines differing in amylose content and gelatinization temperature. Journal of Agricultural and Food Chemistry, 20(3): 714-718.
Larkin, P. D. and W. D. Park, 2003. Association of waxy gene single nucleotide polymorphisms with starch characteristics in rice (Oryza sativa L.). Molecular Breeding, 12(4): 335-339.
Nakamura, Y., 2002. Towards a better understanding of the metabolic system for amylopectin biosynthesis in plants: Rice endosperm as a model tissue. Plant and Cell Physiology, 43(7): 718-725.
Nakamura, Y., P. B. Francisco, Y. Hosaka, A. Sato, T. Sawada, A. Kubo and N. Fujita, 2005. Essential amino acids of starch synthase IIa differentiate amylopectin structure and starch quality between japonica and indica rice varieties. Plant Molecular Biology, 58(2): 213-227.
Pathiraje, P., W. Madhujith, A. Chandrasekara and S. Nissanka, 2010. The effect of rice variety and parboiling on in vivo glycemic response. Tropical Agricultural Research, 22(1): 26-33.
Rebeira, S., H. Wickramasinghe, W. Samarasinghe and B. Prashantha, 2014. Diversity of grain quality characteristics of traditional rice (Oryza sativa L.) varieties in Sri Lanka. Tropical Agricultural Research, 25(4): 470-478.
Sartaj, I. Z. and S. A. E. R. Suraweera, 2005. Comparison of different parboiling methods on the quality characteristics of rice. Annals of the Sri Lanka Department of Agriculture, 7: 245-252.
Tran, N., V. Daygon, A. Resurreccion, R. Cuevas, H. Corpuz and M. Fitzgerald, 2011. A single nucleotide polymorphism in the waxy gene explains a significant component of gel consistency. Theoretical and Applied Genetics, 123(4): 519-525.
Umemoto, T. and N. Aoki, 2005. Single-nucleotide polymorphisms in rice starch synthase IIa that alter starch gelatinisation and starch association of the enzyme. Functional Plant Biology, 32(9): 763-768.
Umemoto, T., N. Aoki, H. Lin, Y. Nakamura, N. Inouchi, Y. Sato, M. Yano, H. Hirabayashi and S. Maruyama, 2004. Natural variation in rice starch synthase IIa affects enzyme and starch properties. Functional Plant Biology, 31(7): 671-684.
Waters, D. L. and R. J. Henry, 2007. Genetic manipulation of starch properties in plants: Patents 2001-2006. Recent Patents on Biotechnology, 1(3): 252-259.
Waters, D. L., R. J. Henry, R. F. Reinke and M. A. Fitzgerald, 2006. Gelatinization temperature of rice explained by polymorphisms in starch synthase. Plant Biotechnology Journal, 4(1): 115-122.
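To make the allele-to-class rules reported above concrete, the following is a minimal sketch (ours, not the paper's code) of how the GBSSI amylose classes and the Alk GT classes could be encoded. The genotypes in the example dictionary follow the abstract's stated examples where given and are otherwise illustrative placeholders.

```python
# Sketch of the allele-to-phenotype rules summarised in the abstract:
# Waxy (GBSSI) haplotype -> apparent amylose class (Ayres et al., 1997;
# Larkin & Park, 2003) and Alk SNP3/SNP4 -> gelatinization temperature
# (GT) class (Umemoto et al., 2004). Variety data are illustrative.

def classify_amylose(waxy: str) -> str:
    """waxy = 3 letters: intron 1 splice junction (G/T),
    exon 6 (A/C), exon 10 (C/T) of the Waxy gene."""
    intron1, exon6, _exon10 = waxy
    if intron1 == "T":
        return "low amylose"            # Wxb-type splice variant
    if exon6 == "C":
        return "intermediate amylose"   # GCC pattern
    return "high amylose"               # GAC or GAT pattern

def classify_gt(snp3: str, snp4: str) -> str:
    """snp3: base at 4424 bp ('G' = Val, 'A' = Met);
    snp4: bases 4533-4534 ('GC' = Leu, 'TT' = Phe)."""
    if snp3 == "G" and snp4 == "GC":    # Val + Leu
        return "high GT"
    return "low GT"                     # Met+Leu or Val+Phe

varieties = {                           # illustrative genotypes
    "Pachchaperumal": ("GAC", "G", "GC"),
    "Bw 295-5":       ("GAC", "G", "TT"),
    "Nipponbare":     ("TAC", "A", "GC"),
}
for name, (waxy, snp3, snp4) in varieties.items():
    print(f"{name}: {classify_amylose(waxy)}, {classify_gt(snp3, snp4)}")
```

Note that, as the abstract states, the intermediate GT class cannot be resolved from SNP3 and SNP4 alone, so the sketch collapses everything that is not Val+Leu into the low GT class.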
21

Rudas, Imre J. "Intelligent Engineering Systems." Journal of Advanced Computational Intelligence and Intelligent Informatics 4, no. 4 (July 20, 2000): 237–39. http://dx.doi.org/10.20965/jaciii.2000.p0237.

Abstract:
The "information revolution" of our time affects our entire generation. While a vision of the "Information Society," with its financial, legal, business, privacy, and other aspects, has emerged in the past few years, the "traditional scene" of information technology, that is, industrial automation, has maintained its significance as a field of unceasing development. Since the old-fashioned concept of "Hard Automation," applicable only to industrial processes of a fixed, repetitive nature manufacturing large batches of the same product,1) was thrust into the background by keen market competition, the key element of this development has remained the improvement of "Machine Intelligence". In spite of the fact that L. A. Zadeh introduced the concept of the "Machine Intelligence Quotient" as early as 1996 to measure machine intelligence,2) this term has remained more or less mysterious in meaning, best explicable on the basis of practical needs. The weak point of hard automation is that the system configuration and operations are fixed and cannot be changed without incurring considerable cost and downtime. It can mainly be used in applications that call for fast and accurate operation in large batch production. Whenever a variety of products must be manufactured in small batches, and consequently the work-cells of a production line must be quickly reconfigured to accommodate a change in products, hard automation becomes inefficient and fails for economic reasons. In these cases, a new, more flexible kind of automation, so-called "Soft Automation," is expedient and suitable. The most important "ingredient" of soft automation is its adaptive ability to cope efficiently with changing, unexpected or previously unknown conditions, and to work with a high degree of uncertainty and imprecision, since in practice increasing precision can be very costly. This adaptation must be realized with limited or no human interference: this is one essential component of machine intelligence. Another important factor is that engineering practice often must deal with complex systems described by multiple-variable, multiple-parameter models, almost always with strong nonlinear coupling. Conventional analysis-based approaches for describing and predicting the behavior of such systems are in many cases doomed to failure from the outset, even in the phase of constructing a more or less appropriate mathematical model. These approaches are normally too categorical in the sense that, in the name of "modeling accuracy," they try to describe all structural details of the real physical system to be modeled. This significantly increases the intricacy of the model and may result in a huge computational burden without considerably improving precision. The best paradigm exemplifying this situation may be classic perturbation theory: the less significant the achievable correction, the more work must be invested to obtain it. Another important component of machine intelligence is a kind of "structural uniformity" giving room and possibility to model arbitrary particular details not specified or known a priori. This idea is similar to that of the ready-to-wear industry, whose products can later be slightly modified, in contrast to the custom-tailors' made-to-measure creations aiming at maximum accuracy from the beginning. Machines carry out these later corrections automatically. This "learning ability" is another key element of machine intelligence. To realize the above philosophy in a mathematically correct way, L. A.
Zadeh separated Hard Computing from Soft Computing. This revelation immediately resulted in distinguishing between two essential complementary branches of machine intelligence: Hard Computing based Artificial Intelligence and Soft Computing based Computational Intelligence. In the last decades, it has become generally known that fuzzy logic, artificial neural networks, and probabilistic reasoning based Soft Computing is a fruitful orientation in designing intelligent systems. Moreover, it has become generally accepted that soft computing rather than hard computing should be viewed as the foundation of real machine intelligence, via exploiting the tolerance for imprecision, uncertainty and partial truth to achieve tractability, robustness, low solution cost and better rapport with reality. Further research in the past decade confirmed the view that the typical components of present soft computing, such as fuzzy logic, neurocomputing, evolutionary computation and probabilistic reasoning, are complementary, and that the best results can be obtained by their combined application. These complementary branches of Machine Intelligence, Artificial Intelligence and Computational Intelligence, serve as the basis of Intelligent Engineering Systems. The huge number of scientific results published in journals and conference proceedings worldwide substantiates this statement. Three years ago, a new series of conferences in this direction was initiated and launched with the support of several organizations, including the IEEE Industrial Electronics Society and the IEEE Hungary Section, in technical cooperation with the IEEE Robotics & Automation Society. The first event of the series, hosted by Bánki Donát Polytechnic, Budapest, Hungary, was called the "1997 IEEE International Conference on Intelligent Engineering Systems" (INES'97). The Technical University of Vienna, Austria hosted the next event of the series in 1998, followed by INES'99, held by the Technical University of Kosice, Slovakia. The present special issue consists of the extended and revised versions of the most interesting papers selected from the presentations at this conference. The papers exemplify recent development trends of intelligent engineering systems. The first paper pertains to the wider class of neural network applications. It is an interesting report on applying a special Adaptive Resonance Theory network, called "Extended Gaussian ARTMAP", for identifying objects in multispectral images. The authors conclude that this network is especially advantageous for the classification of large, low dimensional data sets. The second paper's subject belongs to the realm of fuzzy systems. It reports the successful application of fundamental similarity relations in diagnostic systems; as an example, failure detection of a rolling-mill transmission is considered. The next paper represents the AI branch of machine intelligence. It is a report on an EU-funded project focusing on the storage of knowledge in a corporate organizational memory used for storing and retrieving knowledge chunks. The flexible structure of the system makes it possible to adapt it to different SMEs via company-specific conceptual terms rather than traditional keywords. The fourth selected paper contributes to the field of knowledge discovery, for which cluster analysis is performed as a first step. The method is found to be helpful whenever little or no information on the characteristics of a given data set is available.
The next paper approaches scheduling problems through the application of multiagent systems (MAS). It is concluded that, due to the great number of interactions between components, MAS seems to be well suited for manufacturing scheduling problems. The sixth selected paper's topic is emerging intelligent technologies in computer-aided engineering. It discusses key issues of present-day CAD/CAM technology. The conclusion is that further development of CAD/CAM methods will probably serve companies at the competitive edge. The seventh paper of the selection is a report on seeking a special tradeoff between classical analytical modeling and traditional soft computing. It nonconventionally integrates uniform structures obtained from Lagrangian Classical Mechanics with other simple elements of machine intelligence, such as saturated sigmoid transition functions borrowed from neural nets, fuzzy rules with classical PID/ST, and a simplified version of regression analysis. It is concluded that these different components can successfully cooperate in adaptive robot control. The last paper focuses on the complexity problem of fuzzy and neural network approaches. A fuzzy rule base, be it generated by expert operators or by some learning or identification scheme, may contain redundant, weakly contributing, or outright inconsistent components. Moreover, in pursuit of good approximation, one may be tempted to over-assign the number of antecedent sets, resulting in large fuzzy rule bases and serious problems in computation time and storage space. Engineers using neural networks face the same complexity problem with the number of neurons and layers. Fuzzy rule base and neural network design hence have two important objectives: to achieve a good approximation and to reduce complexity. The main difficulty is that these two objectives are contradictory. A formal approach to extracting the more pertinent elements of a given rule set or set of neurons is therefore highly desirable. The last paper is an attempt in this direction.

References
1) C. W. De Silva. Automation Intelligence. Engineering Applications of Artificial Intelligence, Vol. 7, No. 5, 471-477 (1994).
2) L. A. Zadeh. Fuzzy Logic, Neural Networks and Soft Computing. NATO Advanced Studies Institute on Soft Computing and Its Application, Antalya, Turkey (1996).
3) L. A. Zadeh. Berkeley Initiative in Soft Computing. IEEE Industrial Electronics Society Newsletter, 41(3), 8-10 (1994).
22

Helmholz, P., S. Zlatanova, J. Barton, and M. Aleksandrov. "GEOINFORMATION FOR DISASTER MANAGEMENT 2020 (Gi4DM2020): PREFACE." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLIV-3/W1-2020 (November 18, 2020): 1–3. http://dx.doi.org/10.5194/isprs-archives-xliv-3-w1-2020-1-2020.

Abstract:
Abstract. Across the world, nature-triggered disasters fuelled by climate change are worsening. Some two billion people have been affected by the consequences of natural hazards over the last ten years, 95% of which were weather-related (such as floods and windstorms). Fires swept across large parts of California, and in Australia caused unprecedented destruction to lives, wildlife and bush. This picture is likely to become the new normal, and indeed may worsen if unchecked. The Intergovernmental Panel on Climate Change (IPCC) estimates that in some locations, disasters that once had a once-in-a-century frequency may become annual events by 2050.

Disaster management needs to keep up. Good cooperation and coordination of crisis response operations are of critical importance for reacting rapidly and adequately to any crisis situation, while post-disaster recovery presents opportunities to build resilience towards reducing the scale of the next disaster. Technology to support crisis response has advanced greatly in the last few years. Systems for early warning, command and control and decision-making have been successfully implemented in many countries and regions all over the world. Efforts to improve humanitarian response, in particular in relation to combating disasters in rapidly urbanising cities, have also led to better approaches that grapple with complexity and uncertainty.

The challenges, however, are daunting. Many aspects related to the efficient collection and integration of geo-information, applied semantics and situational awareness for disaster management are still open, while agencies, organisations and governmental authorities need to improve their practices for building better resilience.

Gi4DM 2020 marked the 13th edition of the Geoinformation for Disaster Management series of conferences. The first conference was held in 2005 in the aftermath of the 2004 Indian Ocean earthquake and tsunami, which claimed the lives of over 220,000 civilians. The 2019-20 Australian Bushfire Season saw some 18.6 million ha of bushland burn, 5,900 buildings destroyed and nearly three billion vertebrates killed. Gi4DM 2020, in turn, was held during the Covid-19 pandemic, which had taken the lives of more than 1,150,000 people by the time of the conference. The pandemic affected the organisation of the conference, but the situation also provided the opportunity to address important global problems.

The fundamental goal of the Gi4DM has always been to provide a forum where emergency responders, disaster managers, urban planners, stakeholders, researchers, data providers and system developers can discuss challenges, share experience, discuss new ideas and demonstrate technology. The 12 previous editions of Gi4DM were held in Delft, the Netherlands (March 2005), Goa, India (September 2006), Toronto, Canada (May 2007), Harbin, China (August 2008), Prague, Czech Republic (January 2009), Torino, Italy (February 2010), Antalya, Turkey (May 2011), Enschede, the Netherlands (December 2012), Hanoi, Vietnam (December 2013), Montpellier, France (2015), Istanbul, Turkey (2018) and Prague, Czech Republic (2019). Through the years Gi4DM has been organised in cooperation with different international bodies such as ISPRS, UNOOSA, ICA, ISCRAM, FIG, IAG, OGC and WFP, and supported by national organisations. Gi4DM 2020 was held as part of Climate Change and Disaster Management: Technology and Resilience for a Troubled World.
The event took place over the whole week of 30 November to 4 December in Sydney, Australia, and included three events: Gi4DM 2020, the NSW Surveying and Spatial Sciences Institute (NSW SSSI) annual meeting and Urban Resilience Asia Pacific 2 (URAP2). The event explored two interlinked aspects of disaster management in relation to climate change. The first was geo-information technologies and their application for work in crisis situations, as well as sensor and communication networks and their roles in improving situational awareness. The second aspect was resilience, and its role and purpose across the entire cycle of disaster management, from pre-disaster preparedness to post-disaster recovery, including challenges and opportunities in relation to rapid urbanisation and the role of security in improved disaster management practices.

This volume consists of 22 scientific papers, selected on the basis of double-blind review from among the 40 short papers submitted to the Gi4DM 2020 conference. Each paper was reviewed by two scientific reviewers. The authors of the papers were encouraged to revise, extend and adapt their papers to reflect the comments of the reviewers and fit the goals of this volume. The selected papers concentrate on monitoring and analysis of various aspects related to Covid-19 (4), emergency response (4), earthquakes (3), flood (2), forest fire, landslides, glaciers, drought, land cover change, crop management, surface temperature, address standardisation and education for disaster management. The presented methods range from remote sensing, LiDAR and photogrammetry on different platforms to GIS and Web-based technologies. Figure 1 illustrates the covered topics via a word count of keywords and titles.

The Gi4DM 2020 program consisted of scientific presentations, keynote speeches, panel discussions and tutorials. The four keynote speakers, Prof Suzan Cutter (Hazard and Vulnerability Research Institute, USC, US), Jeremy Fewtrell (NSW Fire and Rescue, Australia), Prof Orhan Altan (Ad-hoc Committee on RISK and Disaster Management, GeoUnions, Turkey) and Prof Philip Gibbins (Fenner School of Environment and Society, ANU, Australia), concentrated on different aspects of disaster and risk management in the context of climate change. Eight tutorials offered exciting workshops and hands-on sessions on: Semantic web tools and technologies within Disaster Management, Structure-from-motion photogrammetry, Radar Remote Sensing, Dam safety: Monitoring subsidence with SAR Interferometry, Location-based Augmented Reality apps with Unity and Mapbox, Visualising bush fires datasets using open source, Making data smarter to manage disasters, and Emergency situational awareness and response using HERE Location Services.
The scientific sessions were blended with panel discussions to provide more opportunities to exchange ideas and experiences, and to connect people and researchers from all over the world. The editors of this volume acknowledge all members of the scientific committee for their time, careful review and valuable comments: Abdoulaye Diakité (Australia), Alexander Rudloff (Germany), Alias Abdul Rahman (Malaysia), Alper Yilmaz (USA), Amy Parker (Australia), Ashraf Dewan (Australia), Bapon Shm Fakhruddin (New Zealand), Batuhan Osmanoglu (USA), Ben Gorte (Australia), Bo Huang (Hong Kong), Brendon McAtee (Australia), Brian Lee (Australia), Bruce Forster (Australia), Charity Mundava (Australia), Charles Toth (USA), Chris Bellman (Australia), Chris Pettit (Australia), Clive Fraser (Australia), Craig Glennie (USA), David Belton (Australia), Dev Raj Paudyal (Australia), Dimitri Bulatov (Germany), Dipak Paudyal (Australia), Dorota Iwaszczuk (Germany), Edward Verbree (The Netherlands), Eliseo Clementini (Italy), Fabio Giulio Tonolo (Italy), Fazlay Faruque (USA), Filip Biljecki (Singapore), Petra Helmholz (Australia), Francesco Nex (The Netherlands), Franz Rottensteiner (Germany), George Sithole (South Africa), Graciela Metternicht (Australia), Haigang Sui (China), Hans-Gerd Maas (Germany), Hao Wu (China), Huayi Wu (China), Ivana Ivanova (Australia), Iyyanki Murali Krishna (India), Jack Barton (Australia), Jagannath Aryal (Australia), Jie Jiang (China), Joep Compvoets (Belgium), Jonathan Li (Canada), Kourosh Khoshelham (Australia), Krzysztof Bakuła (Poland), Lars Bodum (Denmark), Lena Halounova (Czech Republic), Madhu Chandra (Germany), Maria Antonia Brovelli (Italy), Martin Breunig (Germany), Martin Tomko (Australia), Mila Koeva (The Netherlands), Mingshu Wang (The Netherlands), Mitko Aleksandrov (Australia), Mulhim Al Doori (UAE), Nancy Glenn (Australia), Negin Nazarian (Australia), Norbert Pfeifer (Austria), Norman Kerle (The Netherlands), Orhan Altan (Turkey), Ori Gudes (Australia), Pawel Boguslawski (Poland), Peter van Oosterom (The Netherlands), Petr Kubíček (Czech Republic), Petros Patias (Greece), Piero Boccardo (Italy), Qiaoli Wu (China), Qing Zhu (China), Riza Yosia Sunindijo (Australia), Roland Billen (Belgium), Rudi Stouffs (Singapore), Scott Hawken (Australia), Serene Coetzee (South Africa), Shawn Laffan (Australia), Shisong Cao (China), Sisi Zlatanova (Australia), Songnian Li (Canada), Stephan Winter (Australia), Tarun Ghawana (Australia), Ümit Işıkdağ (Turkey), Wei Li (Australia), Wolfgang Reinhardt (Germany), Xianlian Liang (Finland) and Yanan Liu (China). The editors would like to express their gratitude to all contributors, who made this volume possible. Many thanks go to all supporting organisations: ISPRS, SSSI, URAP2, Blackash, Mercury and ISPRS Journal of Geoinformation. The editors are grateful for the continued support of the involved universities: The University of New South Wales, Curtin University, Australian National University and The University of Melbourne.
23

Helmholz, P., S. Zlatanova, J. Barton, and M. Aleksandrov. "GEOINFORMATION FOR DISASTER MANAGEMENT 2020 (GI4DM2020): PREFACE." ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences VI-3/W1-2020 (November 17, 2020): 1–2. http://dx.doi.org/10.5194/isprs-annals-vi-3-w1-2020-1-2020.

Abstract:
Abstract. Across the world, nature-triggered disasters fuelled by climate change are worsening. Some two billion people have been affected by the consequences of natural hazards over the last ten years, 95% of which were weather-related (such as floods and windstorms). Fires swept across large parts of California, and in Australia caused unprecedented destruction to lives, wildlife and bush. This picture is likely to become the new normal, and indeed may worsen if unchecked. The Intergovernmental Panel on Climate Change (IPCC) estimates that in some locations, disasters that once had a once-in-a-century frequency may become annual events by 2050.

Disaster management needs to keep up. Good cooperation and coordination of crisis response operations are of critical importance for reacting rapidly and adequately to any crisis situation, while post-disaster recovery presents opportunities to build resilience towards reducing the scale of the next disaster. Technology to support crisis response has advanced greatly in the last few years. Systems for early warning, command and control and decision-making have been successfully implemented in many countries and regions all over the world. Efforts to improve humanitarian response, in particular in relation to combating disasters in rapidly urbanising cities, have also led to better approaches that grapple with complexity and uncertainty.

The challenges, however, are daunting. Many aspects related to the efficient collection and integration of geo-information, applied semantics and situational awareness for disaster management are still open, while agencies, organisations and governmental authorities need to improve their practices for building better resilience.

Gi4DM 2020 marked the 13th edition of the Geoinformation for Disaster Management series of conferences. The first conference was held in 2005 in the aftermath of the 2004 Indian Ocean earthquake and tsunami, which claimed the lives of over 220,000 civilians. The 2019-20 Australian Bushfire Season saw some 18.6 million ha of bushland burn, 5,900 buildings destroyed and nearly three billion vertebrates killed. Gi4DM 2020, in turn, was held during the Covid-19 pandemic, which had taken the lives of more than 1,150,000 people by the time of the conference. The pandemic affected the organisation of the conference, but the situation also provided the opportunity to address important global problems.

The fundamental goal of the Gi4DM has always been to provide a forum where emergency responders, disaster managers, urban planners, stakeholders, researchers, data providers and system developers can discuss challenges, share experience, discuss new ideas and demonstrate technology. The 12 previous editions of Gi4DM were held in Delft, the Netherlands (March 2005), Goa, India (September 2006), Toronto, Canada (May 2007), Harbin, China (August 2008), Prague, Czech Republic (January 2009), Torino, Italy (February 2010), Antalya, Turkey (May 2011), Enschede, the Netherlands (December 2012), Hanoi, Vietnam (December 2013), Montpellier, France (2015), Istanbul, Turkey (2018) and Prague, Czech Republic (2019). Through the years Gi4DM has been organised in cooperation with different international bodies such as ISPRS, UNOOSA, ICA, ISCRAM, FIG, IAG, OGC and WFP, and supported by national organisations. Gi4DM 2020 was held as part of Climate Change and Disaster Management: Technology and Resilience for a Troubled World.
The event took place over the whole week of 30 November to 4 December in Sydney, Australia, and included three events: Gi4DM 2020, the NSW Surveying and Spatial Sciences Institute (NSW SSSI) annual meeting and Urban Resilience Asia Pacific 2 (URAP2). The event explored two interlinked aspects of disaster management in relation to climate change. The first was geo-information technologies and their application for work in crisis situations, as well as sensor and communication networks and their roles in improving situational awareness. The second aspect was resilience, and its role and purpose across the entire cycle of disaster management, from pre-disaster preparedness to post-disaster recovery, including challenges and opportunities in relation to rapid urbanisation and the role of security in improved disaster management practices.

This volume consists of 16 peer-reviewed scientific papers, selected on the basis of double-blind review from among the 25 full papers submitted to the Gi4DM 2020 conference. Each paper was reviewed by three scientific reviewers. The authors of the papers were encouraged to revise, extend and adapt their papers to reflect the comments of the reviewers and fit the goals of this volume. The selected papers concentrate on monitoring and analysis of forest fire (3), landslides (3), flood (2), earthquake, avalanches, water pollution, heat, evacuation and urban sustainability, applying a variety of remote sensing, GIS and Web-based technologies. Figure 1 illustrates the scope of the covered topics through a word count of keywords and titles.

The Gi4DM 2020 program consisted of scientific presentations, keynote speeches, panel discussions and tutorials. The four keynote speakers, Prof Suzan Cutter (Hazard and Vulnerability Research Institute, USC, US), Jeremy Fewtrell (NSW Fire and Rescue, Australia), Prof Orhan Altan (Ad-hoc Committee on RISK and Disaster Management, GeoUnions, Turkey) and Prof Philip Gibbins (Fenner School of Environment and Society, ANU, Australia), concentrated on different aspects of disaster and risk management in the context of climate change. Eight tutorials offered exciting workshops and hands-on sessions on: Semantic web tools and technologies within Disaster Management, Structure-from-motion photogrammetry, Radar Remote Sensing, Dam safety: Monitoring subsidence with SAR Interferometry, Location-based Augmented Reality apps with Unity and Mapbox, Visualising bush fires datasets using open source, Making data smarter to manage disasters, and Emergency situational awareness and response using HERE Location Services.
The scientific sessions were blended with panel discussions to provide more opportunities to exchange ideas and experiences, and to connect people and researchers from all over the world. The editors of this volume acknowledge all members of the scientific committee for their time, careful review and valuable comments: Abdoulaye Diakité (Australia), Alexander Rudloff (Germany), Alias Abdul Rahman (Malaysia), Alper Yilmaz (USA), Amy Parker (Australia), Ashraf Dewan (Australia), Bapon Shm Fakhruddin (New Zealand), Batuhan Osmanoglu (USA), Ben Gorte (Australia), Bo Huang (Hong Kong), Brendon McAtee (Australia), Brian Lee (Australia), Bruce Forster (Australia), Charity Mundava (Australia), Charles Toth (USA), Chris Bellman (Australia), Chris Pettit (Australia), Clive Fraser (Australia), Craig Glennie (USA), David Belton (Australia), Dev Raj Paudyal (Australia), Dimitri Bulatov (Germany), Dipak Paudyal (Australia), Dorota Iwaszczuk (Germany), Edward Verbree (The Netherlands), Eliseo Clementini (Italy), Fabio Giulio Tonolo (Italy), Fazlay Faruque (USA), Filip Biljecki (Singapore), Petra Helmholz (Australia), Francesco Nex (The Netherlands), Franz Rottensteiner (Germany), George Sithole (South Africa), Graciela Metternicht (Australia), Haigang Sui (China), Hans-Gerd Maas (Germany), Hao Wu (China), Huayi Wu (China), Ivana Ivanova (Australia), Iyyanki Murali Krishna (India), Jack Barton (Australia), Jagannath Aryal (Australia), Jie Jiang (China), Joep Compvoets (Belgium), Jonathan Li (Canada), Kourosh Khoshelham (Australia), Krzysztof Bakuła (Poland), Lars Bodum (Denmark), Lena Halounova (Czech Republic), Madhu Chandra (Germany), Maria Antonia Brovelli (Italy), Martin Breunig (Germany), Martin Tomko (Australia), Mila Koeva (The Netherlands), Mingshu Wang (The Netherlands), Mitko Aleksandrov (Australia), Mulhim Al Doori (UAE), Nancy Glenn (Australia), Negin Nazarian (Australia), Norbert Pfeifer (Austria), Norman Kerle (The Netherlands), Orhan Altan (Turkey), Ori Gudes (Australia), Pawel Boguslawski (Poland), Peter van Oosterom (The Netherlands), Petr Kubíček (Czech Republic), Petros Patias (Greece), Piero Boccardo (Italy), Qiaoli Wu (China), Qing Zhu (China), Riza Yosia Sunindijo (Australia), Roland Billen (Belgium), Rudi Stouffs (Singapore), Scott Hawken (Australia), Serene Coetzee (South Africa), Shawn Laffan (Australia), Shisong Cao (China), Sisi Zlatanova (Australia), Songnian Li (Canada), Stephan Winter (Australia), Tarun Ghawana (Australia), Ümit Işıkdağ (Turkey), Wei Li (Australia), Wolfgang Reinhardt (Germany), Xianlian Liang (Finland) and Yanan Liu (China). The editors would like to express their gratitude to all contributors, who made this volume possible. Many thanks go to all supporting organisations: ISPRS, SSSI, URAP2, Blackash, Mercury and ISPRS Journal of Geoinformation. The editors are grateful for the continued support of the involved universities: The University of New South Wales, Curtin University, Australian National University and The University of Melbourne.
24

Popoola, Oluwatoyin Muse Johnson. "Preface to the First Issue of Indian Pacific Journal of Accounting and Finance." Indian-Pacific Journal of Accounting and Finance 1, no. 1 (January 1, 2017): 1–2. http://dx.doi.org/10.52962/ipjaf.2017.1.1.5.

Abstract:
It is a great pleasure, and at the same time a challenge, to introduce a new journal into the global community, especially when the objective is to publish high-quality, impactful manuscripts. Although accounting and finance studies constitute a primary focus for most scholars because of our understanding of their value, only a few of us spend much time exploring emerging areas. Notwithstanding the challenges, this journal seeks to provide readers throughout the world with technology-backed, high-quality, peer-reviewed scholarly articles on a broad range of established and emergent areas in accounting and finance in particular, and in business, economics and the social sciences in general. One-on-one discussions with distinguished scholars attest to the fact that there is a dire necessity for such a journal in the Indian-Pacific axis. In order to create a niche for IPJAF as the most authoritative journal on accounting and finance, a team of distinguished scholars has agreed to serve on the editorial board. I am privileged to have Associate Editor-in-Chief Aidi Ahmi (Universiti Utara Malaysia) and Associate Editors Muhammad Ali Abdul Hamid (University of Sharjah, UAE), Bamidele Adepoju (Bayero University), Abayomi Ambali Alaka (Institute of Chartered Accountants of Nigeria) and Dorcas Adebola Babatunde (Afe Babalola University of Ado-Ekiti). Our editorial board members are scholars from several countries worldwide who are actively engaged in academic and professional committees, supervising doctoral theses and teaching doctoral-level courses. The editorial board is supported by a group of competent and experienced international review panel members from different continents of the world. With this synergy, the journal brings a significant representation of the field of accounting and finance in both established and developing areas. Our existence is anchored in the service and dedication of the IPJAF editorial board and the editorial team.

This inaugural volume consists of five manuscripts. Shitu and Popoola’s article, An Investigation of Socially Sustainable Behaviour of Local Players in the Supply Chain of Shea Butter: A Role Theory Perspective, explores the roles, practices and behaviour of local supply chain stakeholders (women entrepreneurs) in Shea nut picking and Shea butter processing in rural Borgu, Nigeria. The research also examines the local buying agents (LBA) who serve as middlemen between the rural women and the exporters of Shea butter. The findings indicate that the present engagement and practices of these local stakeholders do not align with the principles of the sustainable supply chain. The paper exposes factors such as gender disparity, weak access to financial support and information asymmetry as major contributors to the present roles, practices and behaviour of the local actors. Lina and Jingga's article, Factors Influencing Tax Avoidance Activity: An Empirical Study from Indonesia Stock Exchange, examines the influence of firm characteristics on tax avoidance activity in listed companies in Indonesia. The paper adopts firm size, leverage, capital intensity and inventory intensity as proxies for firm characteristics, with return on assets and market-to-book ratio as control variables. The results reveal that leverage has a positive influence on tax avoidance activity, while the remaining variables have no influence on it.
Adedeji, Popoola and Ong Tse San's article, National Culture and Sustainability Disclosure Practices: A Literature Review, investigates the extent to which national culture is an explanatory variable for firms' disclosure choices for sustainable development in the advanced, emerging and developing nations of the world, especially as entities interact in global, knowledge-based economies. The paper identifies that not much work has been done on traits and characteristics in specific national cultural environments and their effects on sustainability disclosures, in particular social and environmental disclosures. The paper concludes by recognising the need to encourage researchers and policy-making bodies to advance studies on the intellectual capital concept and resource-based value theory, so as to enhance sustainable development globally. Imelda and Alodia's article, The Analysis of the Altman Model and the Ohlson Model in Predicting Financial Distress of Manufacturing Companies in the Indonesia Stock Exchange, examines the accuracy of the Altman Model and the Ohlson Model in bankruptcy prediction. The results show that the Ohlson Model and logit analysis are more accurate than the Altman Model and multiple discriminant analysis in predicting the bankruptcy of manufacturing firms listed on the Indonesia Stock Exchange (BEI) in 2010-2014. The paper identifies benchmark ratios for determining the financial distress of a company, such as retained earnings to total assets, earnings before interest and taxes to total assets, market value of equity to total liabilities, sales to total assets, debt ratio, return on assets, working capital to total assets and net income; five of these ratios are the inputs of Altman's classic Z-score, sketched below. Arowolo and Ahmad's article, Quality-differentiated Auditors, Block-holders and Monitoring Mechanisms, investigates how monitoring mechanisms influence the block-holders in 111 Nigerian non-financial listed companies to resolve the problem of business failures arising from the information asymmetry existing in the relationship between management and shareholders. The study also investigates the mediating effect of quality-differentiated auditors on the relationship between block-holders and monitoring mechanisms. The findings indicate that the block-holders significantly influence monitoring mechanisms. The results also reveal that quality-differentiated auditors positively affect monitoring mechanisms and significantly explain the relationship between block-holders and monitoring mechanisms.

It is my conviction that in the coming year the vision of IPJAF to publish high-quality manuscripts in the established and emergent areas of accounting and finance from academic and professional researchers will be attained, maintained and appreciated. As you read through this inaugural volume of IPJAF, I would like to remind you that the success of our journal depends on your active participation, and that of your colleagues and friends, through the submission of high-quality articles for review and publication. I assure prospective authors that, whether or not their manuscripts are accepted, they will enjoy the mentoring nature of our review process, which provides high-quality, helpful reviews tailored to assist authors in improving their manuscripts.
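As an aside on the Imelda and Alodia summary above: a minimal sketch of Altman's classic (1968) Z-score for listed manufacturing firms, built from five of the ratios enumerated there. The coefficients are Altman's published ones; the input figures are invented for illustration, not taken from the paper.

```python
# Altman (1968) Z-score for listed manufacturing firms.
# X1..X5 are the ratios listed in the article summary above.

def altman_z(working_capital, retained_earnings, ebit,
             market_value_equity, sales, total_assets, total_liabilities):
    x1 = working_capital / total_assets
    x2 = retained_earnings / total_assets
    x3 = ebit / total_assets
    x4 = market_value_equity / total_liabilities
    x5 = sales / total_assets
    return 1.2 * x1 + 1.4 * x2 + 3.3 * x3 + 0.6 * x4 + 1.0 * x5

# Invented example figures (any consistent currency unit):
z = altman_z(working_capital=50, retained_earnings=120, ebit=40,
             market_value_equity=300, sales=500,
             total_assets=400, total_liabilities=200)
print(f"Z = {z:.2f}")  # 3.05; rule of thumb: > 2.99 "safe", < 1.81 distress
```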
I acknowledge your support as we strive to make IPJAF the most authoritative journal on accounting and finance for the academic, professional, industry, societal and governmental community.

Oluwatoyin Muse Johnson Popoola
Editor-in-Chief
popoola@omjpalpha.com
25

Pylypchuk, Oleh, Oleh Strelko, and Yuliia Berdnychenko. "PREFACE." History of science and technology 11, no. 1 (June 26, 2021): 7–9. http://dx.doi.org/10.32703/2415-7422-2021-11-1-7-9.

Abstract:
In the new issue, our scientific journal offers you thirteen scientific articles. As always, we try to offer a wide variety of topics and areas and to follow current trends in the history of science and technology. The article by Olha Chumachenko, on the basis of a wide range of sources, highlights and analyzes the development of the research work of aircraft engine companies in Zaporizhzhia during the 1970s. It takes into account the existence of a single system of functioning of the Zaporizhzhia production association “Motorobudivnyk” (now the Public Joint Stock Company “Motor Sich”) and the Zaporizhzhia Machine-Building Design Bureau “Progress” (now the State Enterprise “Ivchenko – Progress”). Leonid Griffen and Nadiia Ryzheva present their vision of the essence of technology as a socio-historical phenomenon. It is based on the idea that technology is not only a set of technical devices but a segment of the general system – a society – located between a social medium and its natural surroundings in the form of a peculiar social technosphere, which simultaneously separates and connects them. Also of interest is the article by Denis Kislov, which examines the period from the end of the 17th century to the beginning of the 19th century, when, on the basis of deep philosophical concepts, a new vision of the development of statehood and human values arose. At this time a certain rethinking of the management and communication ideas of Antiquity and the Renaissance took place, which outlined the main promising trends in the evolution of statehood, trends that to one degree or another were embodied in practice in the 19th and 20th centuries. A systematic approach and a comparative analysis of the causes and consequences of those years’ achievements for the present and the immediate future of the 21st century served as the methodological basis for a comprehensive review of the studies of that period. The article by Serhii Paliienko is devoted to an exploration of archaeological theory issues at the Institute of Archaeology of the AS UkrSSR in the 1960s. This period is one of the least studied in the history of Soviet archaeology, yet it was the time when archaeological research in the USSR reached its summit, quantitative methods and methods of the natural sciences were applied, and interest in theoretical issues grew in archaeology. There are now many publications dedicated to the theoretical discussions between archaeologists from Leningrad, but comparable research on Kyiv scholars is still lacking. The study by authors from Greece on the legacy of St. Luke in medical science aims to highlight key elements of the life of Valentyn Feliksovych Voino-Yasenetskyi and his scientific contribution to medicine. Among the scientists of European greatness who, at the turn of the 19th and 20th centuries, showed interest in the folklore of Galicia (Halychyna) and Galician Ukrainians and contributed to their national and cultural revival, one of the leading places is occupied by the outstanding Ukrainian scientist Ivan Verkhratskyi. He was both a naturalist and a philologist, as well as a folklorist and ethnographer, an organizer of scientific work, a publisher and popularizer of Ukrainian literature, a translator, a publicist and a famous public figure. I. H.
Verkhratskyi was also an outstanding researcher of the plants and animals of Eastern Galicia, a connoisseur of insects, especially butterflies, and the author of the first school textbooks on natural science written in Ukrainian. An emerging field that has seen the application of drone technology is the healthcare sector. Over the years, the health sector has increasingly relied on the device for the timely transportation of essential articles across the globe. Since its introduction into healthcare, scholars have attempted to address the impact of drones on healthcare across Africa and the world at large. Among other things, scholars have reported that the device has the ability to overcome the menace of weather constraints, inadequate personnel and inaccessible roads within the healthcare sector. This notwithstanding, data on drones and drone application in Ghana, and in her healthcare sector in particular, appear to be scarce within the drone literature. Also, little attempt has been made by scholars to highlight the use of drones in African countries. Using a narrative review approach, the current study attempts to address this gap: a thorough literature search was performed to locate and assess scientific materials involving the application of drones in the military field and in the medical systems of Africans, and Ghanaians in particular. The paper by Artemii Bernatskyi and Vladyslav Khaskin is devoted to the analysis of the history of the creation of the laser, one of the greatest technical inventions of the 20th century. This paper focuses on establishing a relation between the periodization of the stages of creation and implementation of certain types of lasers and their influence on the invention of certain types of equipment and industrial technologies for processing materials, the development of certain branches of the economy, and scientific-technological progress as a whole. The paper discusses the stages of the invention of the first laser, the creation of the first commercial lasers, and the development of the first applications of lasers in industrial technologies for processing materials. Special attention is paid to the “patent wars” that accompanied different stages of the creation of lasers. A comparative analysis of the development of the market for laser technology from the stage of creation to the present has been carried out. Nineteenth-century world exhibitions were platforms to demonstrate the technical and technological changes that marked the modernization and industrialization of the world. World exhibitions contributed to the promotion of new inventions and the popularization of those already known, as well as to the emergence of art objects of world importance. One of the most important world events at the turn of the century was the 1900 World's Fair in Paris. The author has therefore tried to analyze the participation of representatives of the sugar industry in the 1900 World's Fair, to define the role of exhibitions as indicators of economic development, and to show the importance and influence of private entrepreneurs, especially from Ukraine, on the sugar industry and on international contacts. The article by Viktor Verhunov highlights the life and creative path of K. G. Schindler, the outstanding domestic scientist, theorist, methodologist and practitioner of agricultural engineering associated with the formation of agricultural mechanics in Ukraine.
The methodological foundation of the research is the principles of historicism, scientific rigour and objectivity in reproducing the phenomena of the past, based on the combined use of general scientific, special and interdisciplinary methods. For the first time, a number of documents from Russian and Ukrainian archives reflecting facts of the professional biography of the scientist were introduced into scientific circulation. The authors from Kremenchuk National University named after Mykhailo Ostrohradskyi present a fascinating study of a bayonet fragment with severely damaged metal, found in the city of Kremenchuk (Ukraine) in one of the canals on the outskirts of the city, near the Dnipro River. Theoretical research into the blade weapons of the World War I period and the typology of the bayonets of that period made it possible to put forward an assumption about the possible identification of the object as a modified bayonet for the Mauser rifle. A metal science expert examination based on X-ray fluorescence spectrometry was carried out to determine the concentration of elements in a sample from the cleaned part of the blade. In the article by Mykola Ruban and Vadym Ponomarenko, on the basis of a complex analysis of sources and the scientific literature, an attempt has been made to investigate the historical circumstances of the development and construction of shunting electric locomotives at the Dnipropetrovsk electric locomotive plant. The next scientific article continues the series of publications devoted to the assessment of the activities of the heads of the Ministry of Railways of the Russian Empire. In this article, the authors have attempted to systematize and analyze historical data on the activities of Klavdii Semyonovych Nemeshaev as Minister of Railways of the Russian Empire. The article also assesses the development and construction of the railway network in the Russian Empire during Nemeshaev's term of office, in particular the Amur Line and the Moscow Encircle Railway, as well as the increase in the capacity of the Trans-Siberian Railway. The article further discusses K. S. Nemeshaev's contribution to the development of technology and the introduction of a new type of freight steam locomotive for the state-owned railways. We hope that everyone will find interesting and useful information in the new issue. And, of course, we welcome your new submissions.
26

Pendrill, L. R., A. Allard, N. Fischer, P. M. Harris, J. Nguyen, and I. M. Smith. "Software to Maximize End-User Uptake of Conformity Assessment With Measurement Uncertainty, Including Bivariate Cases. The European EMPIR CASoft Project." NCSL International Measure 13, no. 1 (February 2021): 58–69. http://dx.doi.org/10.51843/measure.13.1.6.

Abstract:
The aim of the European project EMPIR CASoft (2018–2020), involving the National Measurement Institutes of France, Sweden and the UK, with the industrial partner Trescal (FR) as primary supporter, is to facilitate the uptake of established methodologies for risk-based decision-making in product conformity assessment that take measurement uncertainty into account, by providing dedicated software. The freely available software helps end-users perform the required risk calculations in accordance with current practice and regulations, and extends that practice to include bivariate cases. The software also supports testing and calibration laboratories in applying the latest version of the ISO/IEC 17025:2017 standard, which requires that “…the laboratory shall document the decision rule employed, taking into account the level of risk […] associated with the decision rule and apply the decision rule.” Initial experiences following the launch of the new software in spring 2020 are reported.
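To illustrate the kind of risk calculation such software automates, the sketch below computes the specific consumer's risk - the probability that the item is nonconforming even though the measured value lies inside the tolerance interval - under a univariate Gaussian measurement model, in the spirit of JCGM 106. This is our illustration, not the CASoft code, and all numbers are invented.

```python
# Specific risk of false acceptance under a Gaussian measurement model.
from scipy.stats import norm

def specific_consumers_risk(measured, u, t_low, t_high):
    """P(true value outside [t_low, t_high] | measured value),
    taking the post-measurement distribution as N(measured, u**2)."""
    p_inside = (norm.cdf(t_high, loc=measured, scale=u)
                - norm.cdf(t_low, loc=measured, scale=u))
    return 1.0 - p_inside

# Measured 9.7 with standard uncertainty 0.2 against tolerance [9.0, 10.0]:
risk = specific_consumers_risk(measured=9.7, u=0.2, t_low=9.0, t_high=10.0)
print(f"Probability of nonconformity: {risk:.1%}")  # about 6.7%
```

A decision rule in the sense of ISO/IEC 17025 then amounts to accepting an item only when this probability falls below an agreed threshold (for instance 5%), or, equivalently, to shrinking the acceptance interval by a guard band.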
27

Thị Tuyết Vân, Phan. "Education as a breaker of poverty: a critical perspective." Papers of Social Pedagogy 7, no. 2 (January 28, 2018): 30–41. http://dx.doi.org/10.5604/01.3001.0010.8049.

Abstract:
This paper aims to portray the overall picture of poverty in the world and to present the key solution for overcoming poverty from a critical perspective. The data and figures are quoted from a number of researchers and organizations in the field of poverty around the world. At the same time, the information underlines the correlation between poverty and lack of education. Only appropriate philosophies of education can improve a country’s socio-economic conditions and contribute to effective solutions to worldwide poverty. In the 21st century, despite the rapid development of science and technology and a series of inventions brought into the world to make life more comfortable, human poverty remains a global problem, especially in developing countries. Poverty, according to Lister (2004), is reflected in the state of “low living standards and/or inability to participate fully in society because of lack of material resources” (p.7). The impact and serious consequences of poverty on multiple aspects of human life have been recognized by different organizations and researchers from different contexts (Fraser, 2000; Lister, 2004; Lipman, 2004; Lister, 2008). This paper will present some of the concepts and research results on poverty. Figures and causes of poverty, and some solutions from education as a key breaker of poverty, will also be discussed. Creating a universal definition of poverty is not simple (Nyasulu, 2010). There are conflicts among different groups of people defining poverty, based on different views and fields. Some writers, according to Nyasulu, tend to connect poverty with social problems, while others focus on political or other causes. However, the reality of poverty needs to be considered from different sides and in different ways; for that reason, the diversity of definitions assigned to poverty can help form the basis on which interventions are drawn (Ife and Tesoriero, 2006). For instance, in dealing with poverty issues it is essential to intervene politically, and economic intervention is necessary under any definition of this matter: a political definition necessitates political interventions in dealing with poverty, and economic definitions inevitably lead to economic interventions. Similarly, Księżopolski (1999) uses several models to show the perspectives on poverty: marginal, motivational and socialist. These models look at poverty and solutions from different angles. Socialists, for example, emphasize the responsibilities of social organization. The state manages the micro levels and distributes the shares of gross national resources, at the same time fighting to keep the gap among classes narrow. In his book, Księżopolski (1999) also emphasizes the changes and new values of charity funds or financial aid from churches or organizations recognized by the Poor Law. Specifically, in the new stages poverty has been recognized differently, and support is delivered in limited categories related to more specific and visible objectives, with the aim of helping the poor change their own status for sustainable improvement. Three ways of categorizing the poor and locating them in the appropriate places are: (1) the powerless, (2) those willing to work and (3) those dodging work. Basically, poverty is determined not to belong to any specific culture or politics; rather, it refers to the situation in which people’s earnings cannot support their minimum living standard (Rowntree, 1910).
Human living standard is defined in Alfredsson & Eide’s work (1999) as follows: “Everyone has the right to a standard of living adequate for the health and well-being of himself and his family, including food, clothing, housing and medical care and necessary social services, and the right to security in the event of unemployment, sickness, disability, widowhood, old age or other lack of livelihood in circumstances beyond his control.” (p. 524). In addition, poverty is measured by the Global Hunger Index (GHI), which is calculated by the International Food Policy Research Institute (IFPRI) every year. The GHI measures hunger not only globally, but also by country and region. To capture the figures multi-dimensionally, the GHI is based on three indicators:
1. Undernourishment: the proportion of the undernourished as a percentage of the population (reflecting the share of the population with insufficient calorie intake).
2. Child underweight: the proportion of children under age 5 who are underweight (low weight for their age, reflecting wasting, stunted growth or both), which is one indicator of child under-nutrition.
3. Child mortality: the mortality rate of children under 5 (partially reflecting the fatal synergy of inadequate dietary intake and unhealthy environments).
(A worked sketch of this calculation follows below.) Apart from the individual aspects and the above nutrition-based measurement, which help to partly picture poverty, poverty is more complicated still: it is not only closely related to human physical life but also badly affects spiritual life. According to Jones and Novak (1999, cited in Lister, 2008), poverty not only characterizes a precarious financial situation but also makes people self-deprecating. Poverty turns itself into the roots of shame, guilt, humiliation and resistance. It leads the poor to the end of the road, and they will never call for help except in the worst situations. Education can help people escape poverty or make it worse. In fact, inequality in education has stolen the opportunity to fight poverty from people in many places around the world, in both developed and developing countries (Lipman, 2004). Lipman confirms: “Students need an education that instills a sense of hope and possibility that they can make a difference in their own family, school, and community and in the broader national and global community while it prepare them for multiple life choices.” (p.181) Bradshaw (2005) synthesizes five main causes of poverty: (1) individual deficiencies, (2) cultural belief systems that support subcultures of poverty, (3) economic, political and social distortions or discrimination, (4) geographical disparities and (5) cumulative and cyclical interdependencies. The researcher suggests the most appropriate solution corresponding to each cause. This reflects the diverse causes of poverty; indeed, poverty easily arises from social and political issues. From the literature review, it can be said that poverty stems from complex causes and is not a problem of any single individual or country. Poverty has brought about serious consequences and needs to be dealt with by many methods and the collective effort of many countries and organizations. This paper will focus on presenting some alarming figures on poverty, the problems of poverty, and then education as a key breaker of poverty. According to 2012 statistics on poverty from the United Nations Development Programme (UNDP), nearly half the world's population lives below the poverty line of less than $1.25 a day.
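To make the GHI construction above concrete: in the editions of the index from the period the paper draws on, the three indicators, each expressed in percent, were combined as a simple equal-weight average (later editions revised the formula and components, so this is an assumption about the variant meant here). A minimal sketch with invented country figures:

```python
# Original-style (pre-2015) Global Hunger Index: equal-weight average
# of the three indicators listed above, each expressed in percent.

def ghi(undernourished_pct, child_underweight_pct, child_mortality_pct):
    return (undernourished_pct + child_underweight_pct
            + child_mortality_pct) / 3.0

# IFPRI severity scale: <= 4.9 low, 5.0-9.9 moderate, 10.0-19.9 serious,
# 20.0-29.9 alarming, >= 30.0 extremely alarming.
score = ghi(undernourished_pct=21.0, child_underweight_pct=16.5,
            child_mortality_pct=8.2)
print(f"GHI = {score:.1f}")  # 15.2 -> "serious"
```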
According to 2015 statistics, 93 of every 1,000 children do not live to age 5, and about 448 million babies are stillborn each year. Poverty in the world is at an alarming level. According to a World Bank study, the risk of poverty continues to increase on a global scale; the 2009 slowdown in economic growth, which came on top of higher prices for fuel and food, pushed a further 53 million people into poverty, in addition to almost 155 million in 2008. From 1990 to 2009, the average GHI in the world decreased by nearly one-fifth. Many countries had success in addressing child nutrition; however, the mortality rate of children under 5 and the proportion of undernourished people remain high. From 2011 to 2013, the number of hungry people in the world was estimated at 842 million, down 17 percent compared with the period 1990 to 1992, according to a report released by the Food and Agriculture Organization of the United Nations (FAO) titled "The State of Food Insecurity in the World 2013". Although poverty in some African countries improved during this period, sub-Saharan Africa remained the region with the highest percentage of hungry people in the world. The consequences of and problems resulting from poverty are extreme. The following illustrates the overall picture under the headings of health, unemployment, education, and society and politics. ➢ Health issues: According to a report by Manos Unidas, a non-governmental organization (NGO) in Spain, poverty kills more than 30,000 children under age 5 worldwide every day, and 11 million children die each year because of poverty. Currently, 42 million people are living with HIV, 39 million of them in developing countries. The Manos Unidas report also shows that 15 million children globally have been orphaned because of AIDS. Scientists predict that by 2020 a number of African countries will have lost a quarter of their population to this disease. Simultaneously, chronic drought and lack of clean water have not only hindered economic development but also caused the disastrous spread of serious diseases across Africa. In fact, only 58 percent of Africans have access to clean water; as a result, the average life expectancy in Africa is the lowest in the world, just 45 years (Bui, 2010). ➢ Unemployment issues: According to the United Nations, the youth unemployment rate in Africa is the highest in the world, reaching 25.6 percent in the Middle East and North Africa. Unemployment, growing at a rate of 10 percent a year, is one of the key issues causing poverty in Africa and negatively affecting programs and development plans. Total African debt amounts to $425 billion (Bui, 2010). In addition, joblessness caused by the global economic downturn pushed more than 140 million people in Asia into extreme poverty in 2009, the International Labour Organization (ILO) warned in a report titled The Fallout in Asia, prepared for the High-Level Regional Forum on Responding to the Economic Crisis in Asia and the Pacific, held in Manila from Feb. 18 to 20, 2009. Surprisingly, this situation also occurs in developed countries. About 12.5 million people in the United Kingdom (20 percent of the population) are living below the poverty line, and in 2005, 35 million people in the United States could not live without charity. At present, 620 million people in Asia are living on less than $1 per day; half of them are in India and China, two countries whose economies are considered to be growing.
➢ Education issues: Going to school is one of the basic needs of human beings, but poor people often cannot meet it. Globally, 130 million children do not attend school, 55 percent of them girls, and 82 million children have lost their childhoods by marrying too soon (Bui, 2010). Similarly, two-thirds of the world's 759 million illiterate people are women. The illiteracy rate in Africa keeps increasing, covering about 40 percent of the African population aged 15 and over and more than 50 percent of women aged 25 and over. The number of illiterate people in the six countries with the highest illiteracy in the world (China, India, Indonesia, Brazil, Bangladesh and Egypt) reached 510 million, accounting for 70 percent of total global illiteracy. ➢ Social and political issues: Poverty leads to a number of social problems and to instability in the political systems of countries around the world. Actually, 246 million children are underage laborers, including 72 million under age 10. Simultaneously, according to an estimate by the United Nations (UN), about 100 million children worldwide are living on the streets. For years, Africa has suffered a chronic refugee problem, with more than 7 million refugees currently and over 200 million people without homes because of a series of internal conflicts and civil wars. Poverty threatens stability and development; it also directly influences human development. Solving the problems caused by poverty takes a great deal of time and resources, and only afterward can societies focus on their development. Poverty has become a global issue of particular political significance. It is a potential cause of political and social instability, even leading to violence and war, not only within a country but across the whole world. Poverty and injustice together have raised fierce conflicts in international relations; if these conflicts are not satisfactorily resolved by peaceful means, war will inevitably break out. Obviously, poverty combined with lack of understanding leads to disastrous consequences such as population growth, depletion of water resources, energy scarcity, pollution, food shortages and serious diseases (especially HIV/AIDS) that are not easy to control; simultaneously, poverty combined with injustice causes international crimes such as terrorism, drug and human trafficking, and money laundering. Among the four issues above, which reflect the serious consequences of poverty, the third, education, if prioritized for intervention over the others in the fight against poverty, is believed to be more effective in resolving the problems at their roots. In fact, human beings' capacity for education, arising from their distinctive linguistic ability, differentiates them from other species on earth (Barrow and Woods, 2006, p.22). With education, people can become aware of and more critical toward their situations; they are armed with the ability to deal with social problems as well as adversity for a better life. However, inequality in education has stolen the opportunity to fight poverty from unprivileged people (Lipman, 2004). An appropriate education can increase people's chances of dealing with all of the issues related to poverty; at the same time, it can narrow the unintended side-effect of making poverty worse.
A number of philosophies, from ancient Greece to the contemporary era, address education through their own epistemologies; for example, Plato's idealism encouraged students to be truth seekers, while Dewey's pragmatism emphasized the individual needs of students (Gutek, 1997). Later forms of education, especially critical pedagogy, focus on developing people independently and critically, which is essential if poor people are to become aware of what they are facing and then find corresponding solutions to their problems. In other words, critical pedagogy helps people emancipate themselves, and from that basis they can contribute to transforming the situations or the society in which they live. In this sense, in his most influential work, Pedagogy of the Oppressed (1972), Paulo Freire carried out his critical pedagogy by building a community network of peasants, the marginalized and unprivileged party in his context, aiming at awakening their awareness of who they were and of their roles in society at that time. To do so, he engaged the peasants in a problem-posing education, different from the traditional model of banking education, built on the technique of dialogue. Dialogue was not simply a way for people to learn about each other; it was a means of finding a common voice and, more importantly, of cooperating to build a social network for changing society. The peasants in such an educational community would be relieved of stress and of the feeling of being outsiders when all of them could discuss and exchange ideas about the issues arising from their "praxis". Praxis, derived from people's actions and linked to values in their social lives, was defined by Freire as "reflection and action upon the world in order to transform it" (p.50). The dialogical approach of critical pedagogy in Freire's Pedagogy of the Oppressed seems to be one of the most helpful ways of addressing poverty, because of its close connection to equality. It does not require highly intellectual teachers to lead the process; instead, everything happens naturally, and the answers are identified through the emancipation of the learners themselves. It can be said that the effectiveness of this pedagogy in helping people escape poverty comes from its direct impact on human critical consciousness; from that, learners become fully aware of their current situations and figure out the appropriate solutions for themselves. In addition, equality, one of the essences that enables learners in critical pedagogy to emancipate themselves intellectually, is reflected in The Ignorant Schoolmaster by Jacques Rancière (1991). In this work, teacher and students are equal in terms of knowledge. The explicator-teacher Joseph Jacotot employed an interrogative approach that was held to be universal because "he taught what he didn't know": this teacher taught French to Flemish students although he could not speak his students' language. Ignorance here is not meant literally but as a metaphor, showing that learners can realize their capacity for self-emancipation without the traditional transmission of knowledge from teachers. Regarding this, Rancière (1991, p.17) stated "that every common person might conceive his human dignity, take the measure of his intellectual capacity, and decide how to use it".
This education is meaningful for poor people because it can evoke the courage to develop themselves, even when they try to stay away from the community owing to the fact that poverty is a root of shame, guilt, humiliation and resistance (Jones and Novak, 1999). The contribution of critical pedagogy to solving poverty by changing people's consciousness from within is summarized by Freire's argument in his Pedagogy of Indignation as follows: "It is certain that men and women can change the world for the better, can make it less unjust, but they can do so from starting point of concrete reality they "come upon" in their generation. They cannot do it on the basis of reveries, false dreams, or pure illusion". (p.31) To sum up, education can be an extremely helpful way of solving poverty, given the possible applications of critical pedagogy to educational and social issues. Therefore, among world issues, poverty could be resolved through people's understanding of their praxis, their actions, cognitive transformation, and emancipatory solutions, in terms of the following key points: First, because the poor are powerless, they usually fall into states of self-deprecation, shame, guilt and humiliation, as previously mentioned. In other words, they usually build a barrier between themselves and society, or they resist changing their status. Therefore, approaching them is not a simple matter; it requires much time and the contributions of psychologists and sociologists in learning about their aspirations, evoking and nurturing the will and capacities of individuals, and then providing people with chances to realize their own potential for overcoming obstacles in life. Second, poverty happens easily in remote areas not endowed with favorable conditions for development. People there have little access to modern civilization and do not earn enough money for a better life. Low literacy, together with the lack of healthy forms of entertainment and despair about a life without exit, easily leads people into drug addiction, gambling and alcoholism. In other words, the vicious circle of poverty and powerlessness usually leads the poor to a dead end. Above all, they are lonely and need to be listened to, shared with and guided out of their situation. Community meetings for exchanging ideas, communicating and intervening immediately, along with appropriate forms of entertainment, should be held frequently to meet the expectations of the poor, direct them to appropriate jobs and, step by step, change their entertainment habits. Last but not least, poor people should be encouraged to participate in social forums where they can both raise their voices about their situations and make valuable suggestions for dealing with their poverty. Children from poor families should be completely exempted from school fees to encourage them to go to school, and curricula should also focus on raising community awareness of poverty issues through extracurricular and volunteer activities, such as meeting and talking with the community, helping poor people with odd jobs, or simply spending time listening to them. Not a matter for any individual country, poverty has become a major problem, a threat to the survival, stability and development of the world and humanity.
Globalization has become a bridge linking countries; for that reason, instability in any country can directly and deeply affect the stability of others. The international community has been joining hands to solve poverty; many anti-poverty organizations, including FAO (Food and Agriculture Organization), BecA (the Biosciences eastern and central Africa), UN-REDD (the United Nations Programme on Reducing Emissions from Deforestation and Forest Degradation), BRAC (Building Resources Across Communities), UNDP (United Nations Development Programme), WHO (World Health Organization) and Manos Unidas, operate both regionally and internationally, achieving results such as reducing the number of hungry people, estimated at 842 million in the period 1990 to 1992, by 17 percent by 2011 to 2013. The diverse methods used to deal with poverty have channelled billions of dollars into education, health and healing. The Millennium Development Goals set by UNDP put forward eight solutions for addressing issues related to poverty holistically: 1) Eradicate extreme poverty and hunger. 2) Achieve universal primary education. 3) Promote gender equality and empower women. 4) Reduce child mortality. 5) Improve maternal health. 6) Combat HIV/AIDS, malaria and other diseases. 7) Ensure environmental sustainability. 8) Develop a global partnership for development. Although the solutions carried out directly by countries and organizations not only focus on the roots of poverty but also attempt to break its circle, they do not emphasize the role of the poor themselves in the way that critical pedagogy does. More than anyone, the poor should have a sense of their poverty so that they can become responsible for their own fate and actively fight poverty instead of waiting for help. This accords with the core of critical theory in solving educational and political issues: the poor should be aware and conscious of their situation and its context. What is required is a critical transformation arising from their own praxis, which would allow them to go through a process of learning, sharing and problem-solving that leads to social movements. This is akin to giving poor people fish hooks rather than giving them fish. The government and people of any country understand better than anyone else the strengths and characteristics of their homeland. It follows that they can efficiently contribute to reducing poverty, preventing the return of poverty and solving the consequences of poverty in their country in many ways, especially through critical pedagogy, and can thereby indirectly narrow the scale of poverty in the world. In a word, the war against poverty takes time, money, energy and human resources, and it is absolutely not simple to end. Again, the poor and the challenged should be educated to be fully aware of their situation so that they can overcome poverty themselves. They need to be respected and to receive the community's solidarity. All forms of discrimination should be condemned and excluded from human society. When whole communities join hands in solving this universal problem, the endless circle of poverty may one day be definitively broken.
More importantly, every country should be responsible for finding appropriate ways to overcome its own poverty before receiving support from other countries, just as the poor should consciously take responsibility for themselves before receiving support from others; what matters most are the methods that lead them to emancipation, to their own transformation and, later, to social change.
APA, Harvard, Vancouver, ISO, and other styles
28

Hayat, Anees, Asia Riaz, and Nazia Suleman. "Effect of gamma irradiation and subsequent cold storage on the development and predatory potential of seven spotted ladybird beetle Coccinella septempunctata Linnaeus (Coleoptera; Coccinellidae) larvae." World Journal of Biology and Biotechnology 5, no. 2 (August 15, 2020): 37. http://dx.doi.org/10.33865/wjb.005.02.0297.

Full text
Abstract:
The seven-spotted ladybird beetle (Coccinella septempunctata) is a widely distributed natural enemy of soft-bodied insect pests, especially aphids, worldwide. Both the adults and larvae of this coccinellid beetle are voracious feeders, and it serves as a commercially available biological control agent around the globe. Different techniques are adopted to enhance the mass rearing and storage of this natural enemy by taking advantage of its natural ability to withstand extremely low temperatures and to enter diapause under unfavorable low-temperature conditions. The key objective of this study was to develop a cost-effective technique for enhancing the storage life and predatory potential of the larvae of C. septempunctata through cold storage in conjunction with a nuclear technique, gamma irradiation. Results showed that the prey-eating potential of larvae was enhanced as the cold storage duration increased. Gamma irradiation further enhanced the feeding potential of larvae that were kept under cold storage. Different irradiation doses also significantly affected the development time of C. septempunctata larvae. Without cold storage, the lower radiation doses (10 and 25 Gy) prolonged the developmental time compared with un-irradiated larvae. Furthermore, the higher dose of radiation (50 Gy) increased the developmental time after removal from cold storage. This study paves the way, for the first time, for using radiation in conjunction with cold storage as an effective technique in the implementation of different biological control approaches as part of IPM programs. Key words: Gamma irradiation; cold storage; Coccinella septempunctata larvae; predatory potential; integrated pest management programme. INTRODUCTION: Nuclear techniques such as gamma radiation have vast application in different programmes of biological control, including the continuous supply of sterilized hosts and improved rearing techniques (Greany and Carpenter, 2000; Cai et al., 2017). Similarly, irradiation can be used for sentinel host eggs and larvae for monitoring the survival and distribution of parasitoids (Jordão-Paranhos et al., 2003; Hendrichs et al., 2009; Tunçbilek et al., 2009; Zapater et al., 2009; Van Lenteren, 2012). Also, at the production level, such techniques may facilitate the management of host rearing, improve quality and expedite transport of the product (Fatima et al., 2009; Hamed et al., 2009; Wang et al., 2009). Gamma irradiation can also be used to halt insect development so as to enhance host suitability for use in different mass rearing programs (Celmer-Warda, 2004; Hendrichs et al., 2009; Seth et al., 2009). The development and survival of all insects are directly connected with temperature, which in turn affects their physical, functional and behavioral adaptations (Ramløy, 2000). Many insects living in temperate regions can survive low temperatures through the process of diapause. Temperatures between 0 and 10°C may cause some insects to become sluggish, and they become active again only when the temperature is suitable. Such insects show great adaptation to flexible temperature regimes for better survival. Many studies have reported this concept of cold-hardiness in insects in general (Bale, 2002; Danks, 2006) and specifically in coccinellid beetles over past years (Watanabe, 2002; Koch et al., 2004; Pervez and Omkar, 2006; Labrie et al., 2008; Berkvens et al., 2010).
Using this cold-hardiness phenomenon, many coccinellids have been studied for the effect of cold storage, such as Coccinella undecimpunctata (Abdel‐Salam and Abdel‐Baky, 2000), Coleomegilla maculata (Gagné and Coderre, 2001) and Harmonia axyridis (Watanabe, 2002). This natural phenomenon, therefore, can be a helpful tool in developing low-temperature stockpiling for improving mass-rearing procedures (Mousapour et al., 2014). It may provide significant benefits by making natural enemies available as and when required during pest infestation peaks (Venkatesan et al., 2000). The use of irradiation in conjunction with cold storage has proved to be an effective technique in the implementation of different biological control approaches as part of IPM programmes. One study reported that house fly (Musca domestica) pupae irradiated at a dose of 500 Gy can be stored for up to 2 months at 6°C for future use in rearing the parasitoid wasp Spalangia endius (Zapater et al., 2009). Similarly, when irradiated at 20 Gy, parasitic wasps Cotesia flavipes were stored safely for up to two months without deterioration of their parasitic potential (Fatima et al., 2009). Likewise, a biocontrol program against the sugarcane shoot borer Chilo infuscatellus proved successful through the use of irradiation combined with cold storage of its egg and larval parasitoids Trichogramma chilonis and C. flavipes (Fatima et al., 2009). Less mobile life stages such as larvae are of significance in any IPM strategy because they remain on the target site for a longer period than adults. The use of predatory larvae is therefore very promising in different biological control approaches, because of their immediate attack on pests and their greater resistance to unfavorable environmental conditions compared with the delicate egg stage. In addition, once augmented into fields, the larval stage is present for a longer time than the adult stage, and its feeding potential is comparable to that of adults. For the best utilization of these predators in the field, and for maximum impact of the 3rd and 4th larval instars on prey, late 2nd instar larvae of predatory beetles should be released into the fields, as these instars have greater feeding capacity owing to their increased size and ability to handle larger prey. In spite of this significance, little information is available about the effect of cold storage on the survival of the larval instars of different ladybird beetles and on their predatory potential. Very few studies report the use of cold storage for the non-diapausing larval stage, as for Semiadalia undecimnotata, and only one study reported short-term storage (up to two weeks) of 2nd and 3rd instar coccinellid C. maculata without any loss in the feeding voracity of larvae after storage (Gagné and Coderre, 2001). The survival of 3rd and 4th larval instars of C. undecimpunctata for 7 days after storage at 5°C was reported in one study, but the survival rate declined after 15-60 days of storage (Abdel‐Salam and Abdel‐Baky, 2000). C. septempunctata is considered one of the most voracious predators (Afroz, 2001; Jandial and Malik, 2006; Bilashini and Singh, 2009; Xia et al., 2018); diapause is a prominent feature of this beetle, and it may undergo facultative diapause under suitable laboratory conditions (Suleman, 2015).
However, no information is available to date about the combined effect of cold storage and irradiation on the larval instars of this species. OBJECTIVES: The objective of this study was to devise a cost-effective technique for cold storage, and to assess its effect on the subsequent predatory potential of seven-spotted ladybird beetle larvae, in conjunction with the use of gamma radiation. The hypothesis of the study was that an optimal length of low-temperature treatment for storage purposes would not affect the predation capacity of C. septempunctata larvae, and that their developmental parameters, including survival and pupation, would remain unaffected. Furthermore, gamma irradiation was expected to have additional effects on the survival and feeding capacity of irradiated C. septempunctata larvae. Such techniques can be utilized in different biocontrol programs where short-term storage is required, so that these larvae can be successfully deployed in different IPM programs against the sucking complex of insect pests as a component of a biological control strategy. MATERIALS AND METHODS: Collection and rearing of C. septempunctata: Adult C. septempunctata were collected from the wheat crop (in the NIAB vicinity and farm area) in March, during late winter and early spring, 2016-2017. They were kept in plastic jars and fed with brassica aphids. Under controlled laboratory conditions (25±2°C, 16:8 h L:D and 65±5% R.H.), eggs of C. septempunctata were obtained, and after hatching, larvae were also given brassica aphids as a dietary source. Second instar larvae were selected for this experiment (as the first instar is generally very weak and vulnerable to mortality under low temperatures); as the larvae reached the second instar, they were separated for experimentation. Irradiation of larvae at different doses: Irradiation of larvae was carried out with a 137Cs source at the radiation laboratory, and the larvae were then brought back to the IPM laboratory, Plant Protection Division, Nuclear Institute for Agriculture and Biology (NIAB), Faisalabad. Radiation doses of 10 Gy (gray), 25 Gy and 50 Gy were used to treat the second instar larvae. There were three replicates for each treatment and five larvae per replicate. The control treatment was left un-irradiated. Cold storage of irradiated larvae: In the present work, second instar C. septempunctata larvae were stored at a low temperature of 8°C. The larvae were kept at 8°C for 0, I and II weeks, where week 0 denotes no cold treatment; this set of larvae was left under laboratory conditions to feed and complete its development. The term week I denotes larvae kept in cold storage for one week at 8°C; similarly, week II denotes larvae that remained under cold conditions (8°C) for two continuous weeks. Larvae were removed from cold storage in their respective week, i.e., after week I and week II, and were left under laboratory conditions to complete their development by feeding on aphids. Data collection: To record the predatory potential of C. septempunctata larvae, 100 aphids were provided per larva per replicate on a daily basis until pupation; this number was more than their feeding capacity, ensuring that they were not starved (personal observation). Observations were recorded for survival rate, developmental time and feeding potential. Data analysis: Data were statistically analysed with SPSS (Version 16.0). The data were subjected to a normality check through the one-sample Kolmogorov-Smirnov test; non-normal data were transformed to normality before all parametric variance tests. One-way and two-way analyses of variance were used, and for comparisons between variables the LSD test at α = 0.05 was applied.
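For readers who want to reproduce this kind of analysis outside SPSS, the sketch below mirrors the pipeline the abstract describes (normality check, transformation, two-way ANOVA with interaction, and unadjusted LSD-style pairwise comparisons) in Python. The file name, column names and the choice of a log transform are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of the abstract's analysis pipeline (hypothetical data).
from itertools import combinations

import numpy as np
import pandas as pd
from scipy import stats
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

df = pd.read_csv("larvae_feeding.csv")  # hypothetical file with one row per larva

# 1. One-sample Kolmogorov-Smirnov test against a normal fitted to the data.
x = df["aphids_eaten"]
ks_stat, ks_p = stats.kstest(x, "norm", args=(x.mean(), x.std(ddof=1)))
if ks_p < 0.05:
    # Non-normal: transform before parametric tests (log chosen as an example).
    df["aphids_eaten"] = np.log1p(df["aphids_eaten"])

# 2. Two-way ANOVA: radiation dose x storage duration, with interaction term.
model = smf.ols("aphids_eaten ~ C(dose_gy) * C(storage_weeks)", data=df).fit()
print(anova_lm(model, typ=2))

# 3. LSD-style comparisons: pairwise t-tests at alpha = 0.05, no correction.
for a, b in combinations(sorted(df["dose_gy"].unique()), 2):
    t, p = stats.ttest_ind(df.loc[df["dose_gy"] == a, "aphids_eaten"],
                           df.loc[df["dose_gy"] == b, "aphids_eaten"])
    print(f"{a} Gy vs {b} Gy: t = {t:.2f}, p = {p:.4f}")
```

Fisher's LSD amounts to unadjusted pairwise t-tests performed only after a significant ANOVA F-test, which is why no multiple-comparison correction is applied in step 3.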
RESULTS: Feeding potential of irradiated larvae after removal from cold storage: Results showed an increase in the feeding potential of C. septempunctata larvae with increased cold storage duration. The feeding potential was significantly higher for the larvae that spent the longest time (week II) in cold storage, followed by week I and week 0. Gamma irradiation further enhanced the feeding potential of larvae that were kept in cold storage. When larvae were irradiated at 10 Gy, their eating capacity increased significantly with the duration of cold storage. Similarly, larvae irradiated at 25 Gy showed an increase in feeding potential on aphids as the period of cold storage increased. The feeding potential of larvae irradiated at 50 Gy again increased significantly with cold storage duration. When different radiation doses were compared at week 0 of storage, there was a significant difference in feeding potential: larvae irradiated at 50 Gy consumed the maximum number of aphids when no cold storage was applied, followed by larvae irradiated at 10 and 25 Gy. In the treatment where larvae were kept in cold storage for one week (week I), the larvae irradiated at 50 Gy again showed the highest feeding potential. The feeding potential of irradiated larvae was also significantly higher than that of the un-irradiated larvae kept for two weeks (week II) in cold storage (table 1). A two-way ANOVA was performed to check the interaction between the different radiation doses and the different lengths of storage duration for the feeding potential of C. septempunctata larvae on aphids. The feeding potential of larvae irradiated at different doses and subjected to variable durations of cold storage differed significantly for both the radiation doses and the cold storage intervals. Furthermore, the interaction between radiation dose and storage duration was also significant, meaning that larvae irradiated at different doses and stored for different lengths of time showed significant variations in feeding levels (table 2). Developmental time of irradiated larvae after removal from cold storage: A significant difference was found in the development time of C. septempunctata larvae irradiated at different doses at week 0 (without cold storage). The larvae irradiated at 10 Gy took the longest to develop, and with the increase in irradiation dose from 25 to 50 Gy, the development time shortened. The larvae irradiated at 50 Gy had the same development time as the un-irradiated ones. When the irradiated larvae were subjected to cold storage of one week's duration (week I), their development time after removal from storage varied significantly. The larvae irradiated at 25 Gy took the longest to develop, followed by larvae irradiated at 50 Gy and 10 Gy. There was an indication that development time was extended for irradiated larvae compared with un-irradiated larvae. Results also showed a significant difference in the time taken by irradiated larvae to complete their development after being taken out of cold storage of two weeks' duration (week II).
As the storage time of irradiated larvae increased, their development time was prolonged. Results showed that larvae irradiated at 25 and 50 Gy took the longest to complete their development. With prolonged cold storage of up to two weeks (week II), this difference in development time was less evident at the lower dose (10 Gy). The larvae irradiated at 10 Gy showed a significant difference in developmental duration after removal from cold storage at weeks 0, I and II. There was no difference in the developmental duration of un-irradiated larvae subjected to the different storage regimes; un-irradiated larvae were least affected by storage duration. With the increase in storage time, a decrease in developmental time was recorded. Larvae irradiated at 10 Gy took the longest to complete their development when no cold storage was applied (week 0), followed by weeks I and II of cold storage. When the larvae irradiated at 25 Gy were compared for development time, there was again a significant difference across weeks 0, I and II of storage. The longest development time was taken by larvae removed from cold storage after one week (week I); with further increases in storage duration, the time taken by larvae to complete their development after removal from cold storage decreased. When the larvae were removed after different lengths of cold storage (week 0, week I and week II), there was a significant difference in subsequent developmental time. Results showed that the higher dose of radiation increased the developmental time after removal from cold storage: larvae irradiated at 50 Gy took the longest to complete their development after removal from cold storage (weeks I and II) compared with larvae that were not kept in cold storage (week 0) (table 3). The interaction between the different radiation doses and the different lengths of storage duration for larval development time was checked by two-way ANOVA. The development time of larvae irradiated at different doses and subjected to variable durations of cold storage differed significantly for both the doses and the cold storage intervals. Furthermore, the interaction between radiation dose and storage duration was also significant, meaning that larvae irradiated at different doses and stored for different lengths of time showed significant variations in development times (table 4). DISCUSSION: The present research indicates the possibility of keeping the larval instars of C. septempunctata under cold storage conditions of 8°C for a short duration of around 14 days without affecting their further development and feeding potential. Furthermore, irradiation can enhance the feeding potential and increase the development time of larval instars. This in turn could be a useful technique in mass rearing and field release programmes for biological control using larval instars. A temperature range of 8-10°C is usually an optimal low temperature for storage, as reported earlier for the eggs of the two-spotted ladybird beetle Adalia bipunctata and of C. septempunctata (Hamalainen and Markkula, 1977), for Trichogramma species (Jalali and Singh, 1992) and for the fairyfly Gonatocerus ashmeadi (Hymenoptera: Mymaridae) (Leopold and Chen, 2007).
However, one study reported a survival rate of more than 80% for the coccinellid beetle Harmonia axyridis for up to 150 days at the moderately low temperature of 3-6°C (Ruan et al., 2012), so there is great flexibility in coccinellid adults and larvae for tolerating low-temperature conditions. After removal from cold storage, larvae showed better feeding potential, consuming more aphids than normal larvae that had not been placed under low-temperature conditions. This indicates that when adult or immature insect stages are subjected to a low-temperature environment, they tend to reduce their metabolic activity, keeping themselves alive on their body-fat reserves and sustaining themselves for a substantial length of time in the cold. The larval instars in cold storage therefore behaved as if starved for a certain length of time and showed greater hunger. This behavior of improved feeding potential in stored larvae has been reported previously (Chapman, 1998). Hence, the feeding potential of C. septempunctata larvae significantly increased after cold storage. Gagné and Coderre (2001) reported higher predatory efficacy in larvae of C. maculata stored at the same temperature as in the present study, i.e., 8°C. Similarly, Ruan et al. (2012) showed that the multicolored Asian ladybug H. axyridis, when stored under cold conditions, had a greater eating capacity for the aphid Aphis craccivora Koch than individuals that were not stored. Such studies indicate that the higher feeding potential of insects after exposure to low-temperature conditions could be due to the maintenance of their metabolic rate at a certain level while utilizing their energy reserves to the maximum extent (Watanabe, 2002). Individuals coming out of cold storage are therefore capable of consuming more prey, as they were in a condition of starvation and have to regain the lost energy through enhanced consumption. Furthermore, starvation in C. septempunctata has previously been reported to affect feeding potential (Suleman et al., 2017). In the present study, larval development was delayed after return to normal laboratory conditions. Cold storage affects the life cycle of many insects other than coccinellids; for example, cold storage of mummies of the greenbug aphid parasitoid Lysiphlebus testaceipes Cresson (Hymenoptera: Braconidae) lengthened its life cycle 3-4 times. Nevertheless, in the current study the development of stored larvae resumed quickly after removal, and the larvae completed their development to the adult stage. Similar results have been reported for the resumption of larval development after removal from cold storage. Such studies report satisfactory survival rates and development only for short durations of cold storage; as the length of storage increases, it can become harmful to certain insects. Gagné and Coderre (2001) reported that cold storage for a longer period (three weeks) proved fatal for almost 40% of C. maculata larvae. Furthermore, in the same study, the feeding potential of C. maculata larvae was also affected beyond two weeks of cold storage, owing to loss of mobility after a long storage period. Many studies have reported that longer durations of low-temperature conditions can either damage the metabolic pathways of body cells or increase the levels of toxins within the bodies of insects.
Also, low-temperature exposure of longer duration may disrupt specific processes in the insect body, especially the neuro-hormones responsible for insect development, which could be damaging or even life-threatening. Chen et al. (2004) likewise reported that the biological quality of parasitized Bemisia tabaci pupae, and with it the population quality of Encarsia formosa, was negatively affected as cold storage duration increased. Similarly, the egg hatchability of the green lacewing Chrysoperla carnea Stephens was lost completely beyond 18 days of cold storage (Sohail et al., 2019). However, in the present study cold storage lasted a maximum of two weeks and is to be regarded as short-term storage; hence the survival rate was satisfactory. Longer periods of cold storage are not considered safe for larvae, given their vulnerability compared with the hardier adults. Also, 2nd instar larvae were used for cold storage in the present study because they are bigger and physically stronger than 1st instar larvae. Abdel‐Salam and Abdel‐Baky (2000) reported that in C. undecimpunctata the survival of 3rd and 4th larval instars in cold storage was higher, and their storage considered safer, than that of early larval instars; the same study showed a sharp decline in survival rate after two weeks and no survival beyond 30-60 days of cold storage. The present study showed that short-term storage of C. septempunctata larvae is possible without any loss of feeding potential or development, so the quality of the predator remained unaffected. Similar work on many other insects has been reported previously, where cold storage techniques proved useful without deteriorating the fitness of the stored insects. For example, the flight ability of reared codling moths Cydia pomonella Linnaeus remained unaffected after removal from cold storage (Matveev et al., 2017). Moreover, a study reported that pupae of the parasitoid wasp Trichogramma nerudai (Hymenoptera: Trichogrammatidae) could be safely kept in cold storage for more than 50 days (Tezze and Botto, 2004). Similarly, a technique for the cold storage of non-diapausing eggs of the black fly Simulium ornatum Meigen was developed at 1°C, and another study reported safe storage of the predatory insidious flower bug Orius insidiosus for more than 10 days at 8°C (Bueno et al., 2014). In the present study, without cold storage, the lower doses of 10 and 25 Gy prolonged developmental time compared with un-irradiated larvae, and higher irradiation doses in conjunction with cold storage again significantly prolonged the developmental time of larvae after return to laboratory conditions. Salem et al. (2014) also reported that gamma irradiation significantly increased the duration of the developmental stages (larvae and pupae) of the cutworm Agrotis ipsilon (Hufnagel). In another study, in which the endoparasitic wasp Glyptapanteles liparidis was offered irradiated and non-irradiated gypsy moth Lymantria dispar larvae for oviposition, non-irradiated larvae reached the adult stage in a shorter time than irradiated larvae (Novotny et al., 2003). Thus, both higher doses with cold storage and lower doses without cold storage extended the larval duration of C. septempunctata. In another study, when the parasitoid wasp Habrobracon hebetor was irradiated at a dose of 10 Gy, the result was prolonged longevity (Genchev et al., 2008).
In the same study, when another parasitoid, Venturia canescens, was irradiated at the lower doses of 4 Gy and 3 Gy, emergence from the host larvae increased, while gamma irradiation at doses of 1 Gy and 2 Gy significantly stimulated the rate of parasitism (Genchev et al., 2008). The current study likewise indicated higher rates of predation, in the form of increased feeding potential of larvae, as a result of irradiation at lower doses. CONCLUSION: The outcome of the current study shows that storage of 2nd instar C. septempunctata at a low temperature of 8°C for a short duration of about 14 days is completely safe and could have broad application in different biocontrol programs. Such flexibility in storage duration can also assist different mass rearing techniques and commercial uses. The combination of gamma radiation with low-temperature cold storage could be a useful tool in developing biological pest management programs against sucking insect pests. Synchronizing the periodic occurrence of target insect pests with that of their predatory ladybird beetles is an important aspect that could be further strengthened by cold storage techniques. Therefore, short- or long-term bulk cold storage of useful commercial biocontrol agents, followed by their reactivation at the appropriate time of pest infestation, is a simple but advantageous method in mass rearing programs. The increased feeding capacity of stored larvae is a further advantage, and such larvae may prove more beneficial than unstored larvae. Both the cold storage and the improved feeding of C. septempunctata larvae can be utilized in IPM implementations against many sucking insect pests of various crops, fruits and vegetables. Owing to some constraints this study could not be continued beyond two weeks, but as future directions, higher doses and longer storage periods could further elaborate the understanding and application of such useful techniques in future IPM programmes on a wider scale. Other predatory coccinellid beetle species could also be tested with similar doses and cold storage treatments to see how effective this technique is on other species. ACKNOWLEDGMENTS: We acknowledge the Sugarcane Research and Development Board for providing a research grant (No. SRDB/P/4/16) to carry out this research work. This paper is part of a research thesis entitled "Effect of gamma irradiation on storage and predatory potential of seven spotted lady bird beetle larvae" submitted to the Higher Education Commission, Pakistan, for the degree of M.Phil. Biological Sciences. CONFLICT OF INTEREST: The authors have no conflict of interest. REFERENCES: Abdel‐Salam, A. and N. Abdel‐Baky, 2000. Possible storage of Coccinella undecimpunctata (Col., Coccinellidae) under low temperature and its effect on some biological characteristics. Journal of applied entomology, 124(3‐4): 169-176. Afroz, S., 2001. Relative abundance of aphids and their coccinellid predators. Journal of aphidology, 15: 113-118. Bale, J., 2002. Insects and low temperatures: From molecular biology to distributions and abundance. Biological sciences, 357(1423): 849-862. Berkvens, N., J. S. Bale, D. Berkvens, L. Tirry and P. De Clercq, 2010. Cold tolerance of the harlequin ladybird Harmonia axyridis in Europe. Journal of insect physiology, 56(4): 438-444. Bilashini, Y. and T. Singh, 2009. Studies on population dynamics and feeding potential of Coccinella septempunctata Linnaeus in relation to Lipaphis erysimi (Kaltenbach) on cabbage.
Indian journal of applied entomology, 23: 99-103. Bueno, V. H. P., L. M. Carvalho and J. Van Lenteren, 2014. Performance of Orius insidiosus after storage, exposure to dispersal material, handling and shipment processes. Bulletin of insectology, 67(2): 175-183. Cai, P., X. Gu, M. Yao, H. Zhang, J. Huang, A. Idress, Q. Ji, J. Chen and J. Yang, 2017. The optimal age and radiation dose for Bactrocera dorsalis (Hendel) (Diptera: Tephritidae) eggs as hosts for mass-reared Fopius arisanus (Sonan) (Hymenoptera: Braconidae). Biological control, 108: 89-97. Celmer-Warda, K., 2004. Preliminary studies of the suitability and acceptability of irradiated host larvae Plodia interpunctella (Hubner) to larval parasitoids Venturia canescens (Gravenhorst). Annals of Warsaw agricultural university, horticulture (Landscape Architecture), 25: 67-73. Chapman, R. F., 1998. The insects: Structure and function. Cambridge university press. Chen, Q., L.-f. Xiao, G.-r. Zhu, Y.-s. Liu, Y.-j. Zhang, Q.-j. Wu and B.-y. Xu, 2004. Effect of cold storage on the quality of Encarsia formosa Gahan. Chinese journal of biological control, 20(2): 107-109. Danks, H., 2006. Insect adaptations to cold and changing environments. The Canadian entomologist, 138(1): 1-23. Fatima, B., N. Ahmad, R. M. Memon, M. Bux and Q. Ahmad, 2009. Enhancing biological control of sugarcane shoot borer, Chilo infuscatellus (Lepidoptera: Pyralidae), through use of radiation to improve laboratory rearing and field augmentation of egg and larval parasitoids. Biocontrol science technology, 19(sup1): 277-290. Gagné, I. and D. Coderre, 2001. Cold storage of Coleomegilla maculata larvae. Biocontrol science technology, 11(3): 361-369. Genchev, N., N. Balevski, D. Obretenchev and A. Obretencheva, 2008. Stimulation effects of low gamma radiation doses on parasitoids Habrobracon hebetor and Venturia canescens. Journal of Balkan ecology, 11: 99-102. Greany, P. D. and J. E. Carpenter, 2000. Area-wide control of fruit flies and other insect pests: Importance. Joint proceedings of the International Conference on Area-Wide Control of Insect Pests, May 28-June 2, 1998, and the Fifth International Symposium on Fruit Flies of Economic Importance, June 1-5. Hamalainen, M. and M. Markkula, 1977. Cool storage of Coccinella septempunctata and Adalia bipunctata (Col., Coccinellidae) eggs for use in the biological control in greenhouses. Annales agriculturae fenniae, 16: 132-136. Hamed, M., S. Nadeem and A. Riaz, 2009. Use of gamma radiation for improving the mass production of Trichogramma chilonis and Chrysoperla carnea. Biocontrol science technology, 19(sup1): 43-48. Hendrichs, J., K. Bloem, G. Hoch, J. E. Carpenter, P. Greany and A. S. Robinson, 2009. Improving the cost-effectiveness, trade and safety of biological control for agricultural insect pests using nuclear techniques. Biocontrol science technology, 19(sup1): 3-22. Jalali, S. and S. Singh, 1992. Differential response of four Trichogramma species to low temperatures for short term storage. Entomophaga, 37(1): 159-165. Jandial, V. K. and K. Malik, 2006. Feeding potential of Coccinella septempunctata Linn. (Coccinellidae: Coleoptera) on mustard aphid, Lipaphis erysimi Kalt., and potato peach aphid, Myzus persicae Sulzer. Journal of entomological research, 30(4): 291-293. Jordão-Paranhos, B. A., J. M. Walder and N. T. Papadopoulos, 2003. A simple method to study parasitism and field biology of the parasitoid Diachasmimorpha longicaudata (Hymenoptera: Braconidae) on Ceratitis capitata (Diptera: Tephritidae).
Biocontrol science technology, 13(6): 631-639. Koch, R. L., M. Carrillo, R. Venette, C. Cannon and W. D. Hutchison, 2004. Cold hardiness of the multicolored Asian lady beetle (Coleoptera: Coccinellidae). Environmental entomology, 33(4): 815-822. Labrie, G., D. Coderre and E. Lucas, 2008. Overwintering strategy of multicolored Asian lady beetle (Coleoptera: Coccinellidae): Cold-free space as a factor of invasive success. Annals of the entomological society of America, 101(5): 860-866. Leopold, R. and W.-l. Chen, 2007. Cold storage of the adult stage of Gonatocerus ashmeadi Girault: The impact on maternal and progeny quality. In: Proceedings of the 2007 Pierce's disease research symposium, San Diego, CA. pp: 42-46. Matveev, E., J. Kwon, G. Judd and M. Evenden, 2017. The effect of cold storage of mass-reared codling moths (Lepidoptera: Tortricidae) on subsequent flight capacity. The Canadian entomologist, 149(3): 391-398. Mousapour, Z., A. Askarianzadeh and H. Abbasipour, 2014. Effect of cold storage of pupae of the parasitoid wasp, Habrobracon hebetor (Say) (Hymenoptera: Braconidae), on its efficiency. Archives of phytopathology plant protection, 47(8): 966-972. Novotny, J., M. Zúbrik, M. L. McManus and A. M. Liebhold, 2003. Sterile insect technique as a tool for increasing the efficacy of gypsy moth biocontrol. Proceedings: Ecology, survey and management of forest insects, GTR-NE-311, 311. Pervez, A. and Omkar, 2006. Ecology and biological control application of multicoloured Asian ladybird, Harmonia axyridis: A review. Biocontrol science technology, 16(2): 111-128. Ramløy, U.-B., 2000. Aspects of natural cold tolerance in ectothermic animals. Human reproduction, 15(suppl_5): 26-46. Ruan, C.-C., W.-M. Du, X.-M. Wang, J.-J. Zhang and L.-S. Zang, 2012. Effect of long-term cold storage on the fitness of pre-wintering Harmonia axyridis (Pallas). BioControl, 57(1): 95-102. Salem, H., M. Fouda, A. Abas, W. Ali and A. Gabarty, 2014. Effects of gamma irradiation on the development and reproduction of the greasy cutworm, Agrotis ipsilon (Hufn.). Journal of radiation research applied sciences, 7(1): 110-115. Seth, R. K., T. K. Barik and S. Chauhan, 2009. Interaction of entomopathogenic nematodes, Steinernema glaseri (Rhabditida: Steinernematidae), cultured in irradiated hosts, with 'F1 sterility': Towards management of a tropical pest, Spodoptera litura (Fabr.) (Lepidoptera: Noctuidae). Biocontrol science technology, 19(sup1): 139-155. Sohail, M., S. S. Khan, R. Muhammad, Q. A. Soomro, M. U. Asif and B. K. Solangi, 2019. Impact of insect growth regulators on biology and behavior of Chrysoperla carnea (Stephens) (Neuroptera: Chrysopidae). Ecotoxicology, 28(9): 1115-1125. Suleman, N., 2015. Heterodynamic processes in Coccinella septempunctata L. (Coccinellidae: Coleoptera): A mini review. Entomological science, 18(2): 141-146. Suleman, N., M. Hamed and A. Riaz, 2017. Feeding potential of the predatory ladybird beetle Coccinella septempunctata (Coleoptera: Coccinellidae) as affected by the hunger levels on natural host species. Journal of phytopathology pest management, 4: 38-47. Tezze, A. A. and E. N. Botto, 2004. Effect of cold storage on the quality of Trichogramma nerudai (Hymenoptera: Trichogrammatidae). Biological control, 30(1): 11-16. Tunçbilek, A. S., U. Canpolat and F. Sumer, 2009.
Suitability of irradiated and cold-stored eggs of Ephestia kuehniella (Pyralidae: Lepidoptera) and Sitotroga cerealella (Gelechiidae: Lepidoptera) for stockpiling the egg parasitoid Trichogramma evanescens (Trichogrammatidae: Hymenoptera) in diapause. Biocontrol science technology, 19(sup1): 127-138. Van Lenteren, J. C., 2012. The state of commercial augmentative biological control: Plenty of natural enemies, but a frustrating lack of uptake. BioControl, 57(1): 1-20. Venkatesan, T., S. Singh and S. Jalali, 2000. Effect of cold storage on cocoons of Goniozus nephantidis Muesebeck (Hymenoptera: Bethylidae) stored for varying periods at different temperature regimes. Journal of entomological research, 24(1): 43-47. Wang, E., D. Lu, X. Liu and Y. Li, 2009. Evaluating the use of nuclear techniques for colonization and production of Trichogramma chilonis in combination with releasing irradiated moths for control of cotton bollworm, Helicoverpa armigera. Biocontrol science technology, 19(sup1): 235-242. Watanabe, M., 2002. Cold tolerance and myo-inositol accumulation in overwintering adults of a lady beetle, Harmonia axyridis (Coleoptera: Coccinellidae). European journal of entomology, 99(1): 5-10. Xia, J., J. Wang, J. Cui, P. Leffelaar, R. Rabbinge and W. Van Der Werf, 2018. Development of a stage-structured process-based predator-prey model to analyse biological control of cotton aphid, Aphis gossypii, by the sevenspot ladybeetle, Coccinella septempunctata, in cotton. Ecological complexity, 33: 11-30. Zapater, M. C., C. E. Andiarena, G. P. Camargo and N. Bartoloni, 2009. Use of irradiated Musca domestica pupae to optimize mass rearing and commercial shipment of the parasitoid Spalangia endius (Hymenoptera: Pteromalidae). Biocontrol science technology, 19(sup1): 261-270.
APA, Harvard, Vancouver, ISO, and other styles
29

Danaher, Pauline. "From Escoffier to Adria: Tracking Culinary Textbooks at the Dublin Institute of Technology 1941–2013." M/C Journal 16, no. 3 (June 23, 2013). http://dx.doi.org/10.5204/mcj.642.

Full text
Abstract:
Introduction: Culinary education in Ireland has long been influenced by the culinary education delivered in catering colleges in the United Kingdom (UK). Institutionalised culinary education started in Britain through the sponsorship of guild conglomerates (Lawson and Silver). The City & Guilds of London Institute for the Advancement of Technical Education opened its central institution in 1884. Culinary education in Ireland began in Kevin Street Technical School in the late 1880s, consisting of evening courses in plain cookery. Dublin's leading chefs and waiters of the time participated in developing courses in French culinary classics, and these courses ran in Parnell Square Vocational School from 1926 (Mac Con Iomaire "The Changing"). St Mary's College of Domestic Science was purpose-built and opened in 1941 in Cathal Brugha Street; it was renamed the Dublin College of Catering in the 1950s. The Council for Education, Recruitment and Training for the Hotel Industry (CERT) was set up in 1963 and ran cookery courses using the City & Guilds of London examinations as its benchmark. In 1982, when the National Craft Curriculum Certification Board (NCCCB) was established, CERT began carrying out its own examinations. This allowed Irish catering education to set its own standards, establish its own criteria and award its own certificates, roles previously carried out by City & Guilds of London (Corr). CERT awarded its first certificates in professional cookery in 1989. The training role of CERT was taken over by Fáilte Ireland, the State tourism board, in 2003. Changing Trends in Cookery and Culinary Textbooks at DIT: The Dublin College of Catering, which became part of the Dublin Institute of Technology (DIT), is the flagship of catering education in Ireland (Mac Con Iomaire "The Changing"). The first DIT culinary award, the Certificate in Diet Cookery, was introduced in 1984 and later renamed Higher Certificate in Health and Nutrition for the Culinary Arts. On 19 July 1992 the Dublin Institute of Technology Act was enacted into law. This Act enabled DIT to provide vocational and technical education and training for the economic, technological, scientific, commercial, industrial, social and cultural development of the State (Ireland 1992). In 1998, DIT was granted degree-awarding powers by the Irish state, enabling it to make major awards at Higher Certificate, Ordinary Bachelor Degree, Honors Bachelor Degree, Masters and PhD levels (Levels six to ten in the National Framework of Qualifications), as well as a range of minor, special purpose and supplemental awards (National NQAI). It was not until 1999, when a primary degree in Culinary Arts was sanctioned by the Department of Education in Ireland (Duff, The Story), that a more diverse range of textbooks was recommended, based on a new liberal/vocational educational philosophy. DIT's School of Culinary Arts currently offers: Higher Certificate in Health and Nutrition for the Culinary Arts; Higher Certificate in Culinary Arts (Professional Culinary Practice); BSc (Ord) in Baking and Pastry Arts Management; BA (Hons) in Culinary Arts; BSc (Hons) Bar Management and Entrepreneurship; BSc (Hons) in Culinary Entrepreneurship; and MSc in Culinary Innovation and Food Product Development. From 1942 to 1970, haute cuisine, or classical French cuisine, was the most influential cooking trend in Irish cuisine, and this is reflected in the culinary textbooks of that era.
Haute cuisine has been shaped by many influential writers and chefs, such as François La Varenne, Antoine Carême, Auguste Escoffier, Fernand Point, Paul Bocuse, Anton Mosimann, and Albert and Michel Roux, to name but a few. The period from 1947 to 1974 can be viewed as a “golden age” of haute cuisine in Ireland, as more award-winning world-class restaurants traded in Dublin during this period than at any other time in history (Mac Con Iomaire “The Changing”). Hotels and restaurants were run in the Escoffier partie system style: a hierarchy among kitchen staff in which areas of the kitchen specialise in cooking particular parts of the menu, i.e. sauces (saucier), fish (poissonnier), larder (garde manger), vegetables (legumier) and pastry (patissier). In the late 1960s, Escoffier-styled restaurants were considered overstaffed and were no longer financially viable. Restaurants began to be run by chef-proprietors, using plate rather than silver service. Nouvelle cuisine began in the 1970s and became a modern form of haute cuisine (Gillespie). The rise of chef-proprietor-run restaurants in Ireland reflected the characteristics of the nouvelle cuisine movement. Culinary textbooks such as Practical Professional Cookery, La Technique, The Complete Guide to the Art of Modern Cookery, The Art of the Garde Manger and Patisserie interpreted nouvelle cuisine techniques and plated dishes. In 1977, the DIT began delivering courses in City & Guilds Advanced Kitchen & Larder 706/3 and Pastry 706/3, the only college in Ireland to do so at the time. Many graduates from these courses became the future Irish culinary lecturers, chef-proprietors, and culinary leaders. The next two decades saw a rise in fusion cooking, nouvelle cuisine, and a return to French classical cooking. Numerous Irish chefs were returning to Ireland having worked with Michelin-starred chefs and opening new restaurants in the vein of classical French cooking, such as Kevin Thornton (Wine Epergne & Thorntons). These chefs were, in turn, influencing culinary training in DIT with a return to classical French cooking. New classical French culinary textbooks such as New Classic Cuisine, The Modern Patissier, The Professional French Pastry Series and Advanced Practical Cookery were being used in DIT. In the last 15 years, the science of cooking has become the current trend in culinary education in DIT. This is evident in the increased number of culinary science textbooks and modules in molecular gastronomy offered in DIT. This also coincided with the launch of the BA (Hons) in Culinary Arts in DIT, moving culinary education from a technical to a liberal education. Books such as The Science of Cooking, On Food and Cooking, The Fat Duck Cookbook and Modern Gastronomy now appear on recommended reading lists for culinary students. For the purposes of this article, practical classes held at DIT are broken down into three sections: hot kitchen, larder, and pastry. Each area had its own recommended textbooks. Table 1 shows that the textbooks used in culinary education at DIT reflected the trends in cookery at the time they were being used.
Hot Kitchen | Larder | Pastry
Le Guide Culinaire. 1921. | Le Guide Culinaire. 1921. | The International Confectioner. 1968.
Le Repertoire de la Cuisine. 1914. | The Larder Chef: Classical Food Preparation and Presentation. 1969. | Patisserie. 1971.
All in the Cooking, Books 1 & 2. 1943. | The Art of the Garde Manger. 1973. | The Modern Patissier. 1986.
Larousse Gastronomique. 1961. | New Classic Cuisine. 1989. | Professional French Pastry Series. 1987.
Practical Cookery. 1962. | The Curious Cook. 1990. | Complete Pastrywork Techniques. 1991.
Practical Professional Cookery. 1972. | On Food and Cooking: The Science and Lore of the Kitchen. 1991. | On Food and Cooking: The Science and Lore of the Kitchen. 1991.
La Technique. 1976. | Advanced Practical Cookery. 1995. | Desserts: A Lifelong Passion. 1994.
Escoffier: The Complete Guide to the Art of Modern Cookery. 1979. | The Science of Cooking. 2000. | Culinary Artistry. Dornenburg, 1996.
Professional Cookery: The Process Approach. 1985. | Garde Manger: The Art and Craft of the Cold Kitchen. 2004. | Grande Finales: The Art of the Plated Dessert. 1997.
On Food and Cooking: The Science and Lore of the Kitchen. 1991. | The Science of Cooking. 2000. | Fat Duck Cookbook. 2009.
Modern Gastronomy. 2010. | |
Tab. 1. DIT Culinary Textbooks.
1942–1960: During the first half of the 20th century, senior staff working in Dublin hotels, restaurants and clubs were predominantly foreign born and trained. The two decades following World War II could be viewed as the “golden age” of haute cuisine in Dublin, as many award-winning restaurants traded in the city at this time (Mac Con Iomaire “The Emergence”). Culinary education in DIT in 1942 saw the use of Escoffier’s Le Guide Culinaire as the defining textbook (Bowe). This was first published in 1903 and translated into English in 1907. In 1979, Cracknell and Kaufmann published a more comprehensive, updated edition for use in culinary colleges under the title The Complete Guide to the Art of Modern Cookery by Escoffier. This demonstrated that Escoffier’s work had withstood the test of the decades and was still relevant. Le Repertoire de la Cuisine by Louis Saulnier, a student of Escoffier, presented the fundamentals of French classical cookery. Le Repertoire was inspired by the work of Escoffier and contains thousands of classical recipes presented in a brief format that can be clearly understood by chefs and cooks. Le Repertoire remains an important part of any DIT culinary student’s textbook list. All in the Cooking by Josephine Marnell, Nora Breathnach, Ann Martin and Mor Murnaghan (1946) was one of the first cookbooks to be published in Ireland (Cashman). This book was a domestic science cooking book written by lecturers in the Cathal Brugha Street College. There is a combination of classical French recipes and Irish recipes throughout the book. 1960s: It was not until the 1960s that the reference book Larousse Gastronomique and new textbooks such as Practical Cookery, The Larder Chef and The International Confectioner made their way into DIT culinary education. These books still focused on classical French cooking but used lighter sauces and reflected more modern cooking equipment and techniques. This period was also the first time that specific books for larder and pastry work were introduced into the DIT culinary education system (Bowe). Larousse Gastronomique, which used Le Guide Culinaire as a basis (James), was first published in 1938 and translated into English in 1961. Practical Cookery, which is still used in DIT culinary education, is now in its 12th edition. Each edition has built on the previous one; however, there is now criticism that some of the content is dated (Richards). Practical Cookery has established itself as a key textbook in culinary education both in Ireland and England.
Practical Cookery recipes were laid out in easy-to-follow steps, and food commodities were discussed briefly. The Larder Chef was first published in 1969 and is currently in its 4th edition. This book focuses on classical French larder techniques, butchery and fishmongery, but recognises current trends and fashions in food presentation. The International Confectioner is no longer in print but is still used as a reference for basic recipes in pastry classes (Campbell). The Modern Patissier demonstrated more updated techniques and methods than were used in The International Confectioner. The Modern Patissier is still used as a reference book in DIT. 1970s: The 1970s saw the decline of haute cuisine in Ireland, as it was in the process of being replaced by nouvelle cuisine. Irish chefs were being influenced by the works of chefs such as Paul Bocuse, Roger Verge, Michel Guerard, Raymond Oliver, Jean and Pierre Troisgros, Alain Senderens, Jacques Maniere and Jean Delaveyne, who advanced uncomplicated, natural presentation in food. Henri Gault claims that it was his manifesto, published in October 1973 in Gault-Millau magazine, which unleashed the movement called La Nouvelle Cuisine Française (Gault). In nouvelle cuisine, dishes in Carême and Escoffier’s style were rejected as over-rich and complicated. The principles underpinning this new movement focused on the freshness of ingredients, and lightness and harmony in all components and accompaniments, as well as basic and simple cooking methods and types of presentation. This was not, however, a complete overthrowing of the past, but a moving forward in the long-term process of cuisine development, utilising the very best from each evolution (Cousins). Books such as Practical Professional Cookery, The Art of the Garde Manger and Patisserie reflected this new lighter approach to cookery. Patisserie was first published in 1971, is now in its second edition, and continues to be used in DIT culinary education. This book became an essential textbook in pastrywork, and covers the entire syllabus of City & Guilds and CERT (now Fáilte Ireland). Patisserie covered all basic pastry recipes and techniques, while the second edition (in 1993) included new modern recipes, modern pastry equipment, commodities, and food hygiene regulations, reflecting the changing catering environment. The Art of the Garde Manger is an American book highlighting the artistry, creativity, and cooking sensitivity needed to be a successful garde manger (the larder chef who prepares cold preparations in a partie system kitchen). It reflected the dynamic changes occurring in the culinary world but recognised the importance of understanding basic French culinary principles. It is no longer used in DIT culinary education. La Technique is a guide to classical French preparation (Escoffier’s methods and techniques) using detailed pictures and notes. This book remains a very useful guide and reference for culinary students. Practical Professional Cookery also became an important textbook, as it was written with both the student and the chef/lecturer in mind, providing a wider range of recipes and detailed information to assist in understanding the tasks at hand. It is based on classical French cooking and complements Practical Cookery as a textbook; however, its recipes are for ten portions as opposed to the four portions in Practical Cookery. Again, this book was written with the City & Guilds examinations in mind. 1980s: During the mid-1980s, many young Irish chefs and waiters emigrated.
They returned in the late 1980s and early 1990s having gained vast experience of nouvelle and fusion cuisine in London, Paris, New York, California and elsewhere (Mac Con Iomaire, “The Changing”). These energetic, well-trained professionals began opening chef-proprietor restaurants around Dublin, providing invaluable training and positions for up-and-coming young chefs, waiters and culinary college graduates. The 1980s saw a return to French classical cookery textbooks such as Professional Cookery: The Process Approach, New Classic Cuisine and the Professional French Pastry Series, because educators saw the need for students to learn the basics of French cookery. Professional Cookery: The Process Approach was written by Daniel Stevenson, who was, at the time, a senior lecturer in Food and Beverage Operations at Oxford Polytechnic in England. Again, this book was written for students, with an emphasis on the cookery techniques and practices of professional cookery. The Complete Guide to the Art of Modern Cookery by Escoffier continued to be used. This book is used by cooks and chefs as a reference for ingredients in dishes rather than as a recipe book, as it does not go into detail on methods; it is assumed the cook/chef would have the required experience to know the method of production. Le Guide Culinaire was only used on advanced City & Guilds courses in DIT during this decade (Bowe). New Classic Cuisine by the classically French-trained chefs Albert and Michel Roux (Gayot) is a classical French cuisine cookbook that was used as a reference by DIT culinary educators at the time because of the influence the Roux brothers were having over the English fine-dining scene. The Professional French Pastry Series is a range of four volumes of pastry books: Vol. 1 Doughs, Batters and Meringues; Vol. 2 Creams, Confections and Finished Desserts; Vol. 3 Petits Fours, Chocolate, Frozen Desserts and Sugar Work; and Vol. 4 Decorations, Borders and Letters, Marzipan, Modern Desserts. These books about classical French pastry making were used on the advanced pastry courses at DIT, as learners needed a basic knowledge of pastry making to use them. 1990s: Ireland in the late 1990s became a very prosperous and thriving European nation; the phenomenon that became known as the “Celtic Tiger” was in full swing (Mac Con Iomaire “The Changing”). The Irish dining public were being treated to a resurgence of traditional Irish cuisine using fresh wholesome food (Hughes). The Irish population was considered better educated and more well travelled than previous generations, and culinary students were now becoming interested in the science of cooking. In 1996, the BA (Hons) in Culinary Arts program at DIT was first mooted (Hegarty). Finally, in 1999, a primary degree in Culinary Arts was sanctioned by the Department of Education, underpinned by a new liberal/vocational philosophy in education (Duff). In the past, culinary arts had been taught through a vocational education focus, whereby students were taught skills for industry which were narrow, restrictive, and constraining, without the necessary knowledge to articulate the acquired skill. The reading list for culinary students reflected this new liberal education in culinary arts, as Harold McGee’s books The Curious Cook and On Food and Cooking: The Science and Lore of the Kitchen explored and explained the science of cooking.
On Food and Cooking: The Science and Lore of the Kitchen proposed that “science can make cooking more interesting by connecting it with the basic workings of the natural world” (Vega 373). Advanced Practical Cookery was written for City & Guilds students. In DIT this book was used by advanced culinary students sitting Fáilte Ireland examinations, and in the second year of the new BA (Hons) in Culinary Arts. Culinary Artistry encouraged chefs to explore the creative process of culinary composition at the intersection of food, imagination, and taste (Dornenburg). This book encouraged chefs to develop their own style of cuisine using fresh seasonal ingredients, and was used for advanced students but is no longer a set text. Chefs were being encouraged to show their artistic traits, and none more so than pastry chefs. Grand Finales: The Art of the Plated Dessert encouraged advanced students to identify different “schools” of pastry in relation to the world of art and design. The recipes used in this book were built on the concept of the spectacular pièces montées originally created by Antoine Carême. 2000–2013: After nouvelle cuisine, recent developments have included interest in various fusion cuisines, such as Asia-Pacific, and in molecular gastronomy. Molecular gastronomists strive to find perfect recipes using scientific methods of investigation (Blanck). Hervé This’s experimentation with recipes, and his introduction to Nicholas Kurti, led them to create a food discipline they called “molecular gastronomy”. In 1998, a number of creative chefs began experimenting with the incorporation of ingredients and techniques normally used in mass food production in order to arrive at previously unattainable culinary creations. This “new cooking” (Vega 373) required a knowledge of chemical reactions and physico-chemical phenomena in relation to food, as well as specialist tools, which were created by these early explorers. It has been suggested that molecular gastronomy is “science-based cooking” (Vega 375) and that this concept refers to the conscious application of the principles and tools from food science and other disciplines for the development of new dishes, particularly in the context of classical cuisine (Vega). The Science of Cooking assists students in understanding the chemistry and physics of cooking. This book takes traditional French techniques and recipes and refutes some of the claims and methods they contain. Garde Manger: The Art and Craft of the Cold Kitchen is used for the advanced larder modules at DIT. This book builds on the basic skills in The Larder Chef. Molecular gastronomy as a subject area was developed in 2009 in DIT, the first of its kind in Ireland. The Fat Duck Cookbook and Modern Gastronomy underpin the theoretical aspects of the module. This module is taught to fourth-year BA (Hons) in Culinary Arts students, who already have three years’ experience in culinary education and the culinary industry, and also to MSc Culinary Innovation and Food Product Development students. Conclusion: Escoffier, the master of French classical cuisine, still influences culinary textbooks to this day. His basic approach to cooking is considered essential to teaching culinary students, allowing them to embrace the core skills and competencies required to work in the professional environment. The teaching of culinary arts at DIT has moved from a vocational to a more liberal basis of education, and it is imperative that the chosen textbooks reflect this development.
This liberal education gives the students a broader understanding of cooking, hospitality management, food science, gastronomy, health and safety, oenology, and food product development. To date there is no practical culinary textbook written specifically for Irish culinary education, particularly within this new liberal/vocational paradigm. There is clearly a need for a new textbook which combines the best of Escoffier’s classical French techniques with the more modern molecular gastronomy techniques popularised by Ferran Adria. References: Adria, Ferran. Modern Gastronomy A to Z: A Scientific and Gastronomic Lexicon. London: CRC P, 2010. Barker, William. The Modern Patissier. London: Hutchinson, 1974. Barham, Peter. The Science of Cooking. Berlin: Springer-Verlag, 2000. Bilheux, Roland, Alain Escoffier, Daniel Herve, and Jean-Marie Pouradier. Special and Decorative Breads. New York: Van Nostrand Reinhold, 1987. Blanck, J. "Molecular Gastronomy: Overview of a Controversial Food Science Discipline." Journal of Agricultural and Food Information 8.3 (2007): 77-85. Blumenthal, Heston. The Fat Duck Cookbook. London: Bloomsbury, 2001. Bode, Willi, and M.J. Leto. The Larder Chef. Oxford: Butterworth-Heinemann, 1969. Bowe, James. Personal Communication with Author. Dublin, 7 Apr. 2013. Boyle, Tish, and Timothy Moriarty. Grand Finales, The Art of the Plated Dessert. New York: John Wiley, 1997. Campbell, Anthony. Personal Communication with Author. Dublin, 10 Apr. 2013. Cashman, Dorothy. "An Exploratory Study of Irish Cookbooks." Unpublished M.Sc. Thesis. Dublin: Dublin Institute of Technology, 2009. Ceserani, Victor, Ronald Kinton, and David Foskett. Practical Cookery. London: Hodder & Stoughton Educational, 1962. Ceserani, Victor, and David Foskett. Advanced Practical Cookery. London: Hodder & Stoughton Educational, 1995. Corr, Frank. Hotels in Ireland. Dublin: Jemma, 1987. Cousins, John, Kevin Gorman, and Marc Stierand. "Molecular Gastronomy: Cuisine Innovation or Modern Day Alchemy?" International Journal of Hospitality Management 22.3 (2009): 399–415. Cracknell, Harry Louis, and Ronald Kaufmann. Practical Professional Cookery. London: MacMillan, 1972. Cracknell, Harry Louis, and Ronald Kaufmann. Escoffier: The Complete Guide to the Art of Modern Cookery. New York: John Wiley, 1979. Dornenburg, Andrew, and Karen Page. Culinary Artistry. New York: John Wiley, 1996. Duff, Tom, Joseph Hegarty, and Matt Hussey. The Story of the Dublin Institute of Technology. Dublin: Blackhall, 2000. Escoffier, Auguste. Le Guide Culinaire. France: Flammarion, 1921. Escoffier, Auguste. The Complete Guide to the Art of Modern Cookery. Ed. Cracknell, Harry, and Ronald Kaufmann. New York: John Wiley, 1986. Gault, Henri. Nouvelle Cuisine, Cooks and Other People: Proceedings of the Oxford Symposium on Food and Cookery 1995. Devon: Prospect, 1996. 123-7. Gayot, Andre, and Mary Evans. "The Best of London." Gault Millau (1996): 379. Gillespie, Cailein. "Gastrosophy and Nouvelle Cuisine: Entrepreneurial Fashion and Fiction." British Food Journal 96.10 (1994): 19-23. Gisslen, Wayne. Professional Cooking. Hoboken: John Wiley, 2011. Hanneman, Leonard. Patisserie. Oxford: Butterworth-Heinemann, 1971. Hegarty, Joseph. Standing the Heat. New York: Haworth P, 2004. Hsu, Kathy. "Global Tourism Higher Education Past, Present and Future." Journal of Teaching in Travel and Tourism 5.1/2/3 (2006): 251-267. Hughes, Mairtin. Ireland. Victoria: Lonely Planet, 2000. Ireland. Irish Statute Book: Dublin Institute of Technology Act 1992.
Dublin: Stationery Office, 1992. James, Ken. Escoffier: The King of Chefs. Hambledon: Cambridge UP, 2002. Lawson, John, and Harold Silver. A Social History of Education in England. London: Methuen, 1973. Lehmann, Gilly. "English Cookery Books in the 18th Century." The Oxford Companion to Food. Oxford: Oxford UP, 1999. 227-9. Marnell, Josephine, Nora Breathnach, Ann Martin, and Mor Murnaghan. All in the Cooking, Books 1 & 2. Dublin: Educational Company of Ireland, 1946. Mac Con Iomaire, Máirtín. "The Changing Geography and Fortunes of Dublin's Haute Cuisine Restaurants, 1958-2008." Food, Culture and Society: An International Journal of Multidisciplinary Research 14.4 (2011): 525-45. ---. "Chef Liam Kavanagh (1926-2011)." Gastronomica: The Journal of Food and Culture 12.2 (2012): 4-6. ---. "The Emergence, Development and Influence of French Haute Cuisine on Public Dining in Dublin Restaurants 1900-2000: An Oral History." PhD Thesis. Dublin: Dublin Institute of Technology, 2009. McGee, Harold. The Curious Cook: More Kitchen Science and Lore. New York: Hungry Minds, 1990. ---. On Food and Cooking: The Science and Lore of the Kitchen. London: Harper Collins, 1991. Montagné, Prosper. Larousse Gastronomique. New York: Crown, 1961. National Qualification Authority of Ireland. "Review by the National Qualifications Authority of Ireland (NQAI) of the Effectiveness of the Quality Assurance Procedures of the Dublin Institute of Technology." 2010. 18 Feb. 2012 ‹http://www.dit.ie/media/documents/services/qualityassurance/terms_of_ref.doc›. Nicolello, Ildo. Complete Pastrywork Techniques. London: Hodder & Stoughton, 1991. Pepin, Jacques. La Technique. New York: Black Dog & Leventhal, 1976. Richards, Peter. "Practical Cookery." 9th Ed. Caterer and Hotelkeeper (2001). 18 Feb. 2012 ‹http://www.catererandhotelkeeper.co.uk/Articles/30/7/2001/31923/practical-cookery-ninth-edition-victor-ceserani-ronald-kinton-and-david-foskett.htm›. Roux, Albert, and Michel Roux. New Classic Cuisine. New York: Little, Brown, 1989. Roux, Michel. Desserts: A Lifelong Passion. London: Conran Octopus, 1994. Saulnier, Louis. Le Repertoire de la Cuisine. London: Leon Jaeggi, 1914. Sonnenschmidt, Fredric, and John Nicholas. The Art of the Garde Manger. New York: Van Nostrand Reinhold, 1973. Spang, Rebecca. The Invention of the Restaurant: Paris and Modern Gastronomic Culture. Cambridge: Harvard UP, 2000. Stevenson, Daniel. Professional Cookery: The Process Approach. London: Hutchinson, 1985. The Culinary Institute of America. Garde Manger: The Art and Craft of the Cold Kitchen. Hoboken, NJ: John Wiley, 2004. Vega, Cesar, and Job Ubbink. "Molecular Gastronomy: A Food Fad or Science Supporting Innovative Cuisine?" Trends in Food Science & Technology 19 (2008): 372-82. Fance, Wilfred, and Michael Small. The New International Confectioner: Confectionery, Cakes, Pastries, Desserts, Ices and Savouries. 1968.
APA, Harvard, Vancouver, ISO, and other styles
30

Ramirez, Ludito, and Maria Theresa Velasco. "Knowledge Sharing Behavior of Rice Farmers in the Cyber-Villages." Annals of Tropical Research, August 4, 2015, 104–14. http://dx.doi.org/10.32945/atr3729.2015.

Full text
Abstract:
The International Rice Research Institute (IRRI) has established cyber-villages in an effort to speed up the dissemination and adoption of rice technologies. How farmers share the information obtained from these sources is less well documented. In this paper, we present an analysis of the knowledge sharing behavior of rice farmers in the Cyber-villages, the communities assisted by IRRI for its innovative technology transfer modalities in Infanta, Quezon. The study involved 76 rice farmers from three LGU-managed and three NGO-managed barangays. Results revealed that both LGU-managed and NGO-managed cyber-village farmers had highly positive knowledge seeking behavior and moderately positive knowledge donating behavior, indicating that they were more knowledge seekers than knowledge donors. The latent networks are predominantly star and linear chain, characterized by sparse central hubs and non-reciprocated ties. The central actors are limited to the intermediaries, farmer-leaders, and emerging farmer-consultants.
APA, Harvard, Vancouver, ISO, and other styles
31

Kumar, Jailendra. "AI, Machine Learning! What next? Cognitive Computing." Advanced Computing and Communications, December 10, 2018. http://dx.doi.org/10.34048/2018.4.f4.

Full text
Abstract:
Continuing with its track record of showcasing the latest technological trends, the Advanced Computing and Communications Society (ACCS) dedicated the 24th edition of its International Conference on Advanced Computing and Communications (ADCOM 2018) to assimilating advances in Cognitive Computing and Applications. ADCOM 2018 was organized at the International Institute of Information Technology, Bangalore (IIIT-B) from 21st September to 23rd September 2018. The three-day conference saw delegates from industry, academia, government R&D, and the student community come together on this common platform to share their knowledge and learn from each other about the research outcomes and potential applications of this new technology area.
APA, Harvard, Vancouver, ISO, and other styles
32

"BioBoard." Asia-Pacific Biotech News 10, no. 14 (July 30, 2006): 707–15. http://dx.doi.org/10.1142/s0219030306001224.

Full text
Abstract:
Australia to Lead a Global Project to Document Human Variation and Transform Genetic Health. EvoGenix Nets $1.66m Grant for Cancer Work. First Live Porcine Vaccine in Australia. PharmAust Collaborates with Bristol Pharma Australia. Agricultural Biotechnology International Conference in Melbourne. Stanford Signs Pact to Provide Online Health Information to China. Lonza Increases Stakes in China. China Invests in International Traditional Chinese Medicine Programs. China's Science Ministry Reforms to Prevent Misconduct. China Harnesses Brain Power for Life Science Industry. Chinese Society of Biotechnology Holds Annual Conference. TWAS Prizes in Biology — China and Brazil. China to Reform Biotech Policies. Bird Flu News in Indonesia. Hong Kong University Vice-Chancellor Lap-Chee Tsui Appointed as an Honorary Professor of Zhejiang University. Australia and Korea Fishing for Synergies. Top Indian Health Institute Sacks its Director. LabVantage India Eyeing the Domestic Pharma & Healthcare Market. India's Panacea Biotec Signs Deal with Indonesian PT Bio Pharma. International Congress Held in India. Kyowa Hakko Kogyo to Expand its Fuji Plant and Accelerate R&D for Antibody Drugs. Crucell Licenses Cell Line Technology to Japanese Firm. Malaysia's Biotechnology Asia 2006 will Gather Global Industry Players, Researchers and Entrepreneurs. New Zealand Launches Technology Partnership Programs. CombinatoRx Receives Infectious Disease Research Grant from Singapore's EDB. Singapore's RIEC Allocates $1.4 billion for Three Research Programs. ITRI and UK's Sanger Institute Sign Research Collaboration MOU. Warren Buffett Pledges Most of his Fortune to Gates Foundation.
APA, Harvard, Vancouver, ISO, and other styles
33

"Correction." Applied Spectroscopy 54, no. 8 (August 2000): 1250. http://dx.doi.org/10.1366/0003702001950904.

Full text
Abstract:
In the July issue of Applied Spectroscopy, and in a mailing that was sent to all SAS members with information on electing the officers and governing board delegates of the Society, an error appeared in the biography of governing board delegate nominee Dr. Wolfgang Kiefer. The following is what should have appeared under Dr. Kiefer's name. We apologize for any inconvenience. Dr. Wolfgang Kiefer was educated at the University of Munich in Germany, receiving both his Diploma in Physics in 1967 and his Ph.D. in Physics in 1970 from that institution. From 1970–1972, Dr. Kiefer served as a Postdoctorate Fellow at the National Research Council of Canada in the Division of Chemistry. From there he went to the University of Munich as Assistant in the Department of Physics. In 1977, Dr. Kiefer left the University of Munich to become Professor for Experimental Physics at the University of Bayreuth in Germany, followed by Full Professor/Head of the Institute for Experimental Physics at the University of Graz in Austria. In 1988, he took a position as Full Professor for Physical Chemistry at the University of Würzburg in Germany. From 1996–1997 he served as Vice Dean of the Faculty of Chemistry and Pharmacy at the University, and from 1997–1999 as Dean of the Faculty of Chemistry and Pharmacy. Dr. Kiefer has been involved in numerous national and international activities over the course of his career. These include European Editor (Molecular Spectroscopy) for the journal Applied Spectroscopy; member of the Editorial Board, Associate Editor, and Editor-in-Chief of the Journal of Raman Spectroscopy; and member of the Editorial Boards of the Asian Journal of Physics, Spectroscopy Letters, Trends in Applied Spectroscopy, Asian Chemistry Letters, and Chemical Physics Letters. He has been a member of the IUPAC Commission for Infrared and Raman Spectroscopy, Director of a NATO Institute on Nonlinear Raman Spectroscopy, an Association of British Spectroscopists Lecturer, and a member of several Steering Committees. Dr. Kiefer was Chairman of the XIII International Conference on Raman Spectroscopy, and he served as Chairman of the International Steering Committee for International Raman Conferences. He was Visiting Professor at the Hong Kong University of Science and Technology, Waseda University, Tokyo, and Zhengzhou University, P.R. China, and he is Honorary Professor of Wuhan University, P.R. China. Dr. Kiefer is also an Honorary Member of the Advisory Board of the Committee on Light Scattering of the Chinese Physical Society, an Honorary Fellow of the Laser and Spectroscopy Society of India (F.L.S.S.), and presently Foreign Councillor of the Institute for Molecular Science, Okazaki National Research Institutes, Japan. He is co-editor of five books and has published more than 500 papers. He was recently awarded the Society for Applied Spectroscopy's Distinguished Service Award for his contributions to SAS.
APA, Harvard, Vancouver, ISO, and other styles
34

"Bioboard." Asia-Pacific Biotech News 12, no. 14 (December 2008): 5–24. http://dx.doi.org/10.1142/s0219030308000888.

Full text
Abstract:
AUSTRALIA – PAST Protocol to Fast-track Stroke Treatment AUSTRALIA – Great Potential of Regenerative Heart Tissue in Embryonic Mice CHINA – Babies Killed by Tainted Milk Formula Increased to Six CHINA – Chinese Society of Hematology and Bayer Partner to Develop Comprehensive Care Hemophilia Treatment Centers throughout China CHINA – Drug Information Association (DIA) Opens New Office in China CHINA – China Leads Way to Develop Bird Flu Pandemic Forewarning System CHINA – Herbal Drug Recalled after Infant's Death CHINA – Toddler Virus Flares Up in China Again CHINA – Functional Gene Discovered for Rice's Grain-Filling CHINA – China Tops the Health List among Developing Countries CHINA – U.S. Food, Drug Regulator to Set Up Offices in China INDIA – Maharashtra Plans Two More Biotech Parks at Khalapur, Alibag INDIA – Institute of Clinical Research India Partners with SingHealth INDONESIA – 113th Bird Flu Death Recorded in Indonesia JAPAN – World's First Made-to-order Bones on Clinical Trial JAPAN – Dioxin Tied to Metabolic Syndrome in Japan MALAYSIA – Medicine Study at Newcastle University in Malaysia NEW ZEALAND – NZ Research Implants Pig Cells in Human Diabetics SINGAPORE – Singapore Develops Cell Therapy Treatment for Cancer SINGAPORE – Singapore Sets Up Second Heart Center to Meet Demand SINGAPORE – Singapore is First in Southeast Asia to Offer Robotic Surgery for Gynaecologic Cancers SINGAPORE – SNEC Takes the Lead at the Forefront of Lasik and Corneal Transplant Technology SINGAPORE – Singapore Pumps Funding to Boost Dengue, Diabetes Research SINGAPORE – Second Case of Rare Genetic Disorder that Afflict Toddlers Detected SINGAPORE – Singapore Landfill Gets New Lease of Life as an Eco Park SINGAPORE – Fusionopolis – World Within a City – Singapore's 2nd R&D Hub SINGAPORE – Singapore's Stem Cell Research Signifies A Global Breakthrough THAILAND – Thai Scientists' First Genetic Decode Advances Thailand into “Genomic” Era THAILAND – Bird Flu Found in Northern Thailand, First Outbreak in 10 months TAIWAN – A New Food Regulatory Agency to be Set Up in Taiwan VIETNAM – Vietnam to Host Global Rice Meet in 2010 VIETNAM – Liver Transplant on Vietnam's Youngest Receiver Successful VIETNAM – Ministries Unite For Greater Synergy to Develop Pharmaceutical Industry VIETNAM – Vietnam Launches Diabetes Awareness Project
APA, Harvard, Vancouver, ISO, and other styles
35

Egliston, Ben. "Building Skill in Videogames: A Play of Bodies, Controllers and Game-Guides." M/C Journal 20, no. 2 (April 26, 2017). http://dx.doi.org/10.5204/mcj.1218.

Full text
Abstract:
Introduction: In his now-seminal book, Pilgrim in the Microworld (1983), David Sudnow details his process of learning to play the game Breakout on the Atari 2600. Sudnow develops an account of his graduation from a novice (having never played a videogame before, and middle-aged at the time of writing) to being able to fluidly perform the various configurative processes involved in an acclimated Breakout player’s repertoire. Sudnow’s account of videogame skill-development is not at odds with common-sense views on the matter: people become competent at videogames by playing them—we get used to how controllers work and feel, and to the timings of the game and those required of our bodies, through exposure. We learn by playing, failing, repeating, and ultimately internalising the game’s rhythms—allowing us to perform requisite actions. While he does not put it in as many words, Sudnow’s account affords parity to the various human and nonhuman stakeholders involved in videogame-play: technical, temporal, and corporeal. Essentially, his point is that intertwined technical systems like software and human-interface devices—with their respective temporal rhythms, which coalesce and conflict with those of the human player—require management to play skilfully. The perspective Sudnow develops here is no doubt important, but modes of building competency cannot be strictly fixed around a player-videogame relationship; a relatively noncontroversial view in game studies. Videogame scholars have shown that there is currency in understanding how competencies in gameplay arise from engaging with ancillary objects beyond the thresholds of player-game relations; the literature to date casting a long shadow across a broad spectrum of materials and practices. Pursuing this thread, this article addresses the enterprise (and conceptualisation) of ‘skill building’ in videogames (taken as the ability to ‘beat games’ or cultivate the various competencies to do so) via the invocation of peripheral objects or practices. More precisely, this article develops the perspective that we need to attend to the impacts of ancillary objects on play—positioned as hybrid assemblage, as described in the work of writers like Sudnow. In doing so, I first survey how the intervention of peripheral game material has been researched and theorised in game studies, suggesting that many accounts deal too simply with how players build skill through these means—eliding the fact that play works as an engine of many moving parts. We do not simply become ‘better’ at videogames by engaging peripheral material. Furthering this view, I visit recent literature broadly associated with disciplines like post-phenomenology, which handles the hybridity of play and its extension across bodies, game systems, and other gaming material—attending to how skill building occurs; that is, through the recalibration of perceptual faculties operating in the bodily and temporal dimensions of videogame play. We become ‘better’ at videogames by drawing on peripheral gaming material to augment how we negotiate the rhythms of play. Following on from this, I conclude by mobilising post-phenomenological thinking to further consider skill-building through peripheral material, showing how such approaches can generate insights into important and emerging areas of this practice.
Following recent games research, such as the work of James Ash, I adopt Bernard Stiegler’s formulation of technicity—pointing toward the conditioning of play through ancillary gaming objects: focusing particularly on the relationship between game skill, game guides, and embodied processes of memory and perception. In short, this article considers videogame skill-building, through means beyond the game, as a significant recalibration of embodied, temporal, and technical entanglements involved in play. Building Skill: From Guides to Bodies. There is a handsome literature that has sought to conceptualise the influence of ancillary game material, which can be traced to earlier theories of media convergence (Jenkins). More incisive accounts (pointing directly at game-skill) have been developed since, through theoretical rubrics such as paratext and metagaming. A point of congruence is the theme of relation: the idea that the locus of understanding and meaning can be specified through things outside the game. For scholars like Mia Consalvo (who popularised the notion of paratext in game studies), paratexts are a central motor in play. As Consalvo suggests, paratexts are quite often primed to condition how we do things in and around videogames; there is a great instructive potential in material like walkthrough guides, gaming magazines and cheating devices. Subsequent work has since made productive use of the concept to investigate game-skill and peripheral material and practice. Worth noting is Chris Paul’s research on World of Warcraft (WoW). Paul suggests that players disseminate high-level strategies through a practice known as ‘Theorycraft’ in the game’s community: one involving the use of paratextual statistics applications to optimise play—the results then disseminated across Web-forums (see also: Nardi). Metagaming (Salen and Zimmerman 482) is another concept that is often used to position the various extrinsic objects or practices installed in play—a concept deployed by scholars to conceptualise skill building through both games and the things at their thresholds (Donaldson). Moreover, the ability to negotiate out-of-game material has been positioned as a form of skill in its own right (see also: Donaldson). Becoming familiar with paratextual resources and being able to parse this information could then constitute skill-building. Ancillary gaming objects are important, and as some have argued, central in gaming culture (Consalvo). However, critical areas are left unexamined with respect to skill-building, because scholars often fail to place paratexts or metagaming in the contexts in which they operate; that is, amongst the complex technical, embodied and temporal conjunctures of play—such as those described by Sudnow. Conceptually, much of what Sudnow says in Microworld undergirds the post-human, object-oriented, or post-phenomenological literature that has begun to populate game studies (and indeed media studies more broadly). This materially-inflected writing takes seriously the fact that technical objects (like videogames) and human subjects are caught up in the rhythms of each other; digital media exists “as a mode or cluster of operations in consort with matter”, as Anna Munster tells us (330). To return to videogames, Patrick Crogan and Helen Kennedy argue that gameplay is about a “technicity” between human and nonhuman things, irreducible to any sole actor. Play is a confluence of metastable forces and conditions, a network of distributed agencies (see also Taylor, Assemblage).
Others like Brendan Keogh forward post-phenomenological approaches (operating under scholars like Don Ihde)—looking past the subject-centred nature of videogame research. Ultimately, these theorists situate play as an ‘exploded diagram’, challenging anthropocentric accounts. This position has proven productive in research on ‘skilled’ or ‘high-level’ play (fertile ground for considering competency-development). Emma Witkowski, T.L. Taylor (Raising), and Todd Harper have suggested that skilled play in games emerges from the management of complex embodied and technical rhythms (echoing the points raised prior by Sudnow). Placing Paratexts in Play: While we have these varying accounts of how skill develops within and beyond player-game relationships, these two perspectives are rarely consolidated. That said, I address some of the limited body of work that has sought to place the paratext in the complex and distributed conjunctures of play; building a vocabulary and framework via encounters with what could loosely be called post-phenomenological thinking (not dissimilar to the just-surveyed accounts). The strength of this work lies in its development of a more precise view of the operational reality of playing ‘with’ paratexts. The recent work of Darshana Jayemanne, Bjorn Nansen, and Thomas Apperley theorises the outward expansion of games and play, into diverse material, social, and spatial dimensions (147), as an ‘aesthetics of recruitment’. Consideration is given to ‘paratextual’ play and skill. For instance, they provide the example of players invoking the expertise they have witnessed broadcast through Websites like Twitch.tv or YouTube—skill-building operating here across various fronts, and through various modalities (155). Players are ‘recruited’, in different capacities, through expanded interfaces, which ultimately contour phenomenological encounters with games. Ash provides a fine-grained account in research on spatiotemporal perception and videogames—one much more focused on game-skill. Ash examines how high-level communities of players cultivate ‘spatiotemporal sensitivity’ in the game Street Fighter IV through—in Stiegler’s terms—‘exteriorising’ (Fault) game information into various data sets—producing what he calls ‘technicity’. In this way, Ash suggests that these paratextual materials don’t merely ‘influence play’ (Technology 200), but rather direct how players perceive time, and habituate exteriorised temporal rhythms into their embodied facility (a translation of high-level play). By doing so, the game can be played more proficiently. Following the broadly post-phenomenological direction of these works, I develop a brief account of two paratextual practices. Like Ash, I deploy the work of Stiegler (drawing also on Ash’s usage). I utilise Stiegler’s theoretical schema of technicity to roughly sketch how some other areas of skill-building via peripheral material can be placed within the context of play—looking particularly at the conditioning of embodied faculties of player anticipation, memory and perception through play and paratext alike. A Technicity of Paratext: The general premise of Stiegler’s technicity is that the human cannot be thought of independent from their technical supplements—that is, ‘exterior’ technical objects which could include, but are not limited to, technologies (Fault). Stiegler argues that the human, and their fundamental memory structure, is finite, and as such is reliant on technical prostheses, which register and transmit experience (Fault 17).
This technical supplement is what Stiegler terms ‘tertiary retention’. In short, for Stiegler, technicity can be understood as the interweaving of ‘lived’ consciousness (Cinematic 21) with tertiary retentional apparatus—which is palpably felt in our orientations in and toward time (Fault) and space (including the ‘space’ of our bodies, see New Critique 11). To be more precise, tertiary retention conditions the relationship between perception, anticipation, and subjective memory (or what Stiegler—by way of phenomenologist Edmund Husserl, whose work he renovates—calls primary retention, protention, and secondary retention respectively). As Ash demonstrates (Technology), Stiegler’s framework is rich with potential in investigating the relationship between videogames and their peripheral materials. Invoking technicity, we can rethink—and expand on—commonly encountered forms of paratexts, such as game guides or walkthroughs (an example Consalvo gives in Cheating). Stiegler’s framework provides a means to assess the technical organisation (through both games and paratexts) of the embodied and temporal conditions of ‘skilled play’. Following Stiegler, Consalvo’s example of a game guide is a kind of ‘exteriorisation of play’ (to the guide) that adjusts the embodied and temporal conditions of anticipation and memory (which Sudnow would tell us are key in skill-development). To work through an example, if I were playing a hard game (such as Dark Souls [From Software]), the general idea is that I would be playing from memories of the just-experienced, and with expectations of what’s to come based on everything that’s happened prior (following Stiegler). There is a technicity in the game’s design here, as Ash would tell us (Technology 190-91). By way of Stiegler (and his reading of Heidegger), Ash argues a popular trend in game design is to force a technologically-mediated interplay between memory, anticipation, and perception by making videogames ‘about’ “a future outside of present experience” (Technology 191), but hinging this on past-memory. Players then, to be ‘skilful’, and move forward through the game environment without dying, need to manage cognitive and somatic memory (which, in Dark Souls, is conventionally accrued through trial-and-error play; learning through error incentivised through punitive game mechanics, such as item-loss). So, if I were playing against one of the game’s ‘bosses’ (powerful enemies), I would generally only be familiar with the way they manoeuvre, the speed with which they do so, and where and when to attack based on prior encounter. For instance, my past-experience (of having died numerous times) would generally inform me that using a two-handed sword allows me to get in two attacks on a boss before needing to retreat to avoid fatal damage. Following Stiegler, we can understand the inscription of videogame experience in objects like game guides as giving rise to anticipation and memory—albeit based on a “past that I have not lived but rather inherited as tertiary retentions” (Cinematic 60). Tertiary retentions trigger processes of selection in our anticipations, memories, and perceptions.
Where videogame technologies are traditionally the tertiary retentions in play (Ash, Technologies), the use of game-guides refracts anticipation, memory, and perception through joint systems of tertiary retention—resulting in the outcome of more efficiently beating a game. To return to my previous example of navigating Dark Souls: where I might have died otherwise, via the guide I’d be cognisant of the timings within which I can attack the boss without sustaining damage, and of when to dodge its crushing blows—allowing me to eventually defeat it and move toward the stage’s end (prompting somatic and cognitive memory shifts, which influence my anticipation in-game). Through ‘neurological’ accounts of technology—such as Stiegler’s technicity—we can think more closely about how playing with a skill-building apparatus (like a game guide) works in practice; allowing us to identify how various situations in-game can be managed via deferring functions of the player (such as memory) to exteriorised objects—shifting the conditions of skill building. The prism of technicity is also useful in conceptualising some of the new ways players are building skill beyond the game. In recent years, gaming paratexts have transformed in scope and scale. Gaming has shifted into an age of quantification—with analytics platforms which harvest, aggregate, and present player data gaining significant traction, particularly in competitive and multiplayer videogames. These platforms perform numerous operations that assist players in developing skill—and are marketed as tools for players to improve by reflecting on their own practices and the practices of others (functioning similarly to the previously noted practice of Theorycraft, but operating at a wider scale). To focus on one example, the WarCraftLogs application in WoW (Image 1) is a highly sophisticated form of videogame analytics; the perspective of technicity providing insights into its functionality as a skill-building apparatus. Image 1: WarCraftLogs. Image credit: Ben Egliston. Following Ash’s use of Stiegler (Technology), quantifying the operations that go into playing WoW can be conceptualised as what Stiegler calls a system of traces (Technology 196). Because of his central thesis of ‘technical existence’, Stiegler maintains that ‘interiority’ is coincident with technical support. As such, there is no calculation, no mental phenomena, that does not arise from internal manipulation of exteriorised symbols (Cinematic 52-54). Following on with his discussion of videogames, Ash suggests that in the exteriorisation of gameplay there is “no opposition between gesture, calculation and the representation of symbols” (Technology 196); the symbols working as an ‘abbreviation’ of gameplay that can be read as such. Drawing influence from this view, I show that ‘Big Data’ analytics platforms like WarCraftLogs similarly allow users to ‘read’ play as a set of exteriorised symbols—with significant outcomes for skill-building; allowing users to exteriorise their own play, examine the exteriorised play of others, and compare exteriorisations of their own play with those of others. Image 2: WarCraftLogs Gameplay Breakdown. Image credit: Ben Egliston. Image 2 shows a screenshot of the WarCraftLogs interface. Here we can see the exteriorisation of gameplay, and how the platform breaks down player inputs and in-game occurrences (written and numeric, like Ash’s game data).
The screenshot shows a ‘raid boss’ encounter (where players team up to defeat powerful computer-controlled enemies)—atomising the sequence of inputs a player has made over the course of the encounter. This is an accurate ledger of play—a readout that can speak to mechanical performance (specific in-game events occurred at a specific time), as well as caching and providing parses of somatic inputs and execution (e.g. the ability to trace the rates at which players expend in-game resources can provide insights into the rapidity of button presses). If information falls outside what is presented, players can work with an Application Programming Interface to develop customised readouts (this is encouraged through other game-data platforms, like OpenDota in Dota 2). Through this system, players can exteriorise their own input and output or view the play of others—both useful in building skill. The first point here—of exteriorising one’s own experience—resonates with Stiegler’s renovation of Husserl’s ‘temporal object’—that is, an object that exists in and is formed through time—through temporal fluxes of what appears, what happens and what manifests itself in disappearing (Cinematic 14). Stiegler suggests that tertiary retentional apparatus (e.g. a gramophone) allow us to re-experience a temporal object (e.g. a melody), which would otherwise not be possible due to the finitude of human memory. To elaborate, Stiegler argues that primary memories recede into secondary memory (which is a selective reactivation of perception), but through technologies of recording (such as game-data) we can re-experience these things verbatim. So ultimately, games analytics platforms—as exteriorised technologies of recording—facilitate this after-the-fact interplay between primary and secondary memory, where players can ‘audit’ their past performance, reflecting on well-played encounters or revising error. These platforms allow the detailed examination of responses to game mechanics, and provide readouts of the technical and embodied rhythms of play (which can be incorporated into future play via reading the data). Beyond self-reflection, these platforms allow the examination of others’ play. The aggregation and sorting of game-data makes expertise both visible and legible. To elaborate, players are ranked on their performance based on all submitted log-data, offering a view of how expertise ‘works’. Image 3: Top-Ranking Players in WarCraftLogs. Image credit: Ben Egliston. Image 3 shows the top-ranked players on an encounter (the top 10 of over 100,000 logs), which means that these players have performed most competently out of all gameplay parses (the metric being most damage dealt per second in defeating a boss). Users of the platform can look in detail at the actions performed by top players in that encounter—reading and mobilising data in a similar manner to game-guides; markedly different, however, in terms of the scope (i.e. there are many available logs to draw from) and richness of the data (more detailed and current—with log rankings recalibrated regularly). Conceptually, we can also draw parallels with previous work (see: Ash, Technology)—where the habituation of expert game data can produce new videogame technicities; ways of ‘experiencing’ play as a ‘higher-level’ organisation of space and time (Ash, Technology). So, if a player wanted to ‘learn from the experts’, they would restructure their own rhythms of play around high-level logs, which provide an ordered readout of the various sequences of inputs involved in playing well.
Moreover, the platform allows players to compare their logs to those of others—so these various introspective and outward-facing uses can work together, conditioning anticipations with inscriptions of past-play and ‘prosthetic’ memories through others’ log-data. In my experience as a WoW player, I often performed better (or built skill) by comparing and contrasting my own detailed readouts of play to the inputs and outputs of the best players in the world. To summarise, through technicity, I have briefly shown how exteriorising play shifts the conditions of skill-building from recalibrating mnesic and anticipatory processes through ‘firsthand’ play, to reworking these functions through engaging both games and extrinsic objects, like game guides and analytics platforms. Additionally, by reviewing and adopting various usages of technicity, I have pointed out how we might more holistically situate the gaming paratext in skill building. Conclusion: There is little doubt—as exemplified through both scholarly and popular interest—that paratextual videogame material reframes modes of building game skill. Following recent work, and by providing a brief account of two paratextual practices (venturing the framework of technicity, via Stiegler and Ash—showing the complication of memory, perception, and anticipation in skill-building), I have contended that videogame skill-building—via paratextual material—can be rendered a process of operating outside of, but still caught up in, the complex assemblages of time, bodies, and technical architectures described by Sudnow at this article’s outset. Additionally, by reviewing and adopting ideas associated with technics and post-phenomenology, this article has aimed to contribute to the development of more ‘complete’ accounts of the processes and practices comprising the skill-building regimens of contemporary videogame players. References: Ash, James. “Technology, Technicity and Emerging Practices of Temporal Sensitivity in Videogames.” Environment and Planning A 44.1 (2012): 187-201.———. “Technologies of Captivation: Videogames and the Attunement of Affect.” Body and Society 19.1 (2013): 27-51. Consalvo, Mia. Cheating: Gaining Advantage in Videogames. Cambridge: Massachusetts Institute of Technology P, 2007. Crogan, Patrick, and Helen Kennedy. “Technologies between Games and Culture.” Games and Culture 4.2 (2009): 107-14. Donaldson, Scott. “Mechanics and Metagame: Exploring Binary Expertise in League of Legends.” Games and Culture (2015). 4 Jun. 2015 <http://journals.sagepub.com/doi/abs/10.1177/1555412015590063>. From Software. Dark Souls. PlayStation 3 Game. 2011. Harper, Todd. The Culture of Digital Fighting Games: Performance and Practice. New York: Routledge, 2014. Jayemanne, Darshana, Bjorn Nansen, and Thomas H. Apperley. “Postdigital Interfaces and the Aesthetics of Recruitment.” Transactions of the Digital Games Research Association 2.3 (2016): 145-72. Jenkins, Henry. Convergence Culture: Where Old and New Media Collide. New York: New York UP, 2006. Keogh, Brendan. “Across Worlds and Bodies.” Journal of Games Criticism 1.1 (2014). Jan. 2014 <http://gamescriticism.org/articles/keogh-1-1/>. Munster, Anna. “Materiality.” The Johns Hopkins Guide to Digital Media. Eds. Marie-Laure Ryan, Lori Emerson, and Benjamin J. Robertson. Baltimore: Johns Hopkins UP, 2014. 327-30. Nardi, Bonnie. My Life as a Night Elf Priest: An Anthropological Account of World of Warcraft. Ann Arbor: Michigan UP, 2010. OpenDota. OpenDota. Web browser application. 2017. Paul, Christopher A.
“Optimizing Play: How Theory Craft Changes Gameplay and Design.” Game Studies: The International Journal of Computer Game Research 11.2 (2011). May 2011 <http://gamestudies.org/1102/articles/paul>.Salen, Katie, and Eric Zimmerman. Rules of Play: Game Design Fundamentals. Cambridge: Massachusetts Institute of Technology P, 2004.Stiegler, Bernard. Technics and Time, 1: The Fault of Epimetheus. Stanford: Stanford UP, 1998.———. For a New Critique of Political Economy. Cambridge: Polity, 2010.———. Technics and Time, 3: Cinematic Time and the Question of Malaise. Stanford: Stanford UP, 2011.Sudnow, David. Pilgrim in the Microworld. New York: Warner Books, 1983.Taylor, T.L. “The Assemblage of Play.” Games and Culture 4.4 (2009): 331-39.———. Raising the Stakes: E-Sports and the Professionalization of Computer Gaming. Cambridge: Massachusetts Institute of Technology P, 2012.WarCraftLogs. WarCraftLogs. Web browser application. 2016.Witkowski, Emma. “On the Digital Playing Field: How We ‘Do Sport’ with Networked Computer Games.” Games and Culture 7.5 (2012): 349-74.
APA, Harvard, Vancouver, ISO, and other styles
36

Stockwell, Stephen. "Theory-Jamming." M/C Journal 9, no. 6 (December 1, 2006). http://dx.doi.org/10.5204/mcj.2691.

Full text
Abstract:
“The intellect must not only desire surreptitious delights; it must become completely free and celebrate Saturnalia.” (Nietzsche 6) Theory-jamming suggests an array of eclectic methods, deployed in response to emerging conditions, using traditional patterns to generate innovative moves, seeking harmony and syncopation, transparent about purpose and power, aiming for demonstrable certainties while aware of their own provisional fragility. In this paper, theory-jamming is suggested as an antidote for the confusion and disarray that typifies communication theory. Communication theory as the means to conceptualise the transmission of information and the negotiation of meaning has never been a stable entity. Entrenched divisions between ‘administrative’ and ‘critical’ tendencies are played out within schools and emerging disciplines and across a range of scientific/humanist, quantitative/qualitative and political/cultural paradigms. “Of course, this is only the beginning of the mischief for there are many other polarities at play and a host of variations within polar contrasts” (Dervin, Shields and Song). This paper argues that the play of contending schools with little purchase on each other, or anything much, has turned meta-discourse about communication into an ontological spiral. Perhaps the only way to ride out this storm is to look towards communication practices that confront these issues and appreciate their theoretical underpinnings. From its roots in jazz and blues to its contemporary manifestations in rap and hip-hop and throughout the communication industries, the jam (or improvised reorganisation of traditional themes into new and striking patterns) confronts the ontological spiral in music, and life, by taking the flotsam flung out of the spiral to piece together the means to transcend the downward pull into the abyss. Many pretenders have a theory. Theory abounds: language theory, number theory, game theory, quantum theory, string theory, chaos theory, cyber-theory, queer theory, even conspiracy theory and, most poignantly, the putative theory of everything. But since Bertrand Russell’s unsustainable class of all classes, Gödel’s systemically unprovable propositions and Heisenberg’s uncertainty principle, the propensity for theories to fall into holes in themselves has been apparent. Nowhere is this more obvious than in communication theory, where many schools contend without actually connecting to each other. From the 1930s, as the mass media formed, there have been administrative and critical tendencies at war in the communication arena. Some point to the origins of the split in the Institute for Social Research’s Radio Project, where pragmatic sociologist Paul Lazarsfeld broke with Frankfurt School critical theorist Theodor Adorno over the quality of data. Lazarsfeld was keen to produce results while Adorno complained the data over-simplified the relationship between mass media and audiences (Rogers). From this split grew the twin disciplines of mass communication (quantitative, liberal, commercial and lost in its obsession with the measurement of minor media effects) and cultural/media studies (qualitative, post-Marxist, radical and lost in simulacra of their own devising). The complexity of interactions between these two disciplines, with the same subject matter but very different ways of thinking about it, is the foundation of the ontological black hole in communication theory.
As the disciplines have spread out across universities, professional organizations and publishers, they have been used and abused for ideological, institutional and personal purposes. By the summer of 1983, the split was documented in a special issue of the Journal of Communication titled “Ferment in the Field”. Further, professional courses in journalism, public relations, marketing, advertising and media production have complex relations with both theoretical wings, which need the student numbers and are adept at constructing and defending new boundaries. The 90s saw any number of ‘wars’: Journalism vs Cultural Studies, Cultural Studies vs Cultural Policy Studies, Cultural Studies vs Public Relations, Public Relations vs Journalism. More recently, the study of new communication technologies has led to a profusion of nascent, neo-disciplines shadowing, mimicking and reacting with old communication studies: “Internet studies; New media studies; Digital media studies; Digital arts and culture studies; Cyberculture studies; Critical cyberculture studies; Networked culture studies; Informatics; Information science; Information society studies; Contemporary media studies” (Silver & Massanari 1). As this shower of cyberstudies spirals by, it is further warped by the split between the hard science of communication infrastructure in engineering and information technology and what the liberal arts have to offer. The early, heroic attempt to bridge this gap by Claude Shannon and, particularly, Warren Weaver was met with disdain by both sides. Weaver’s philosophical interpretation of Shannon’s mathematics, accommodating the interests of technology and of human communication together, is a useful example of how disparate ideas can connect productively. But how does a communications scholar find such connections? How can we find purchase amongst this avalanche of ideas and agendas? Where can we get the traction to move beyond the twentieth-century Balkanisation of communications theory to embrace the whole? An answer came to me while watching the Discovery Channel. A documentary on apes showed them leaping from branch to branch, settling on a swaying platform of leaves, eating and preening, then leaping into the void until they make another landing, settling again… until the next leap. They are looking for what is viable and never come to ground. Why are we concerned to ground theory which can only prove its own impossibility while disregarding the certainty of what is viable for now? I carried this uneasy insight for almost five years, until I read Nietzsche on the methods of the pre-Platonic philosophers: “Two wanderers stand in a wild forest brook flowing over rocks; the one leaps across using the stones of the brook, moving to and fro ever further… The other stands there helplessly at each moment. At first he must construct the footing that can support his heavy steps; when this does not work, no god helps him across the brook. Is it only boundless rash flight across great spaces? Is it only greater acceleration? No, it is with flights of fantasy, in continuous leaps from possibility to possibility taken as certainties; an ingenious notion shows them to him, and he conjectures that there are formally demonstrable certainties” (Nietzsche 26). Nietzsche’s advice to take the leap is salutary, but theory must be more than jumping from one good idea to the next. What guidance do the practices of communication offer?
Considering new forms that have developed since the 1930s, as communication theory went into meltdown, the significance of the jam is unavoidable. While the jam session began as improvised jazz and blues music for practice, fellowship and fun, it quickly became the forum for exploring new kinds of music arising from the deconstruction of the old and experimentation with technical, and ontological, possibilities. The jam arose as a spin-off of the dance music circuit in the 1930s. After the main, professional show was over, small groups would gather together in all-night dives for informal, spontaneous sessions of unrehearsed improvisation, playing for their own pleasure, “in accordance with their own esthetic [sic] standards” (Cameron 177). But the jam is much more than having a go. The improvisation occurs on standard melodies: “Theoretically … certain introductions, cadenzas, clichés and ensemble obbligati assume traditional associations (as) ‘folkways’… that are rarely written down but rather learned from hearing (‘head jobs’)” (Cameron 178-9). From this platform of tradition, the artist must “imagine in advance the pattern which unfolds… select a part in the pattern appropriate to the occasion, instrument and personal abilities (then) produce startlingly distinctive sound patterns (that) rationalise the impossible.” The jam is founded on its very impossibility: “the jazz aesthetic is basically a paradox… traditionalism and the radical originality are irreconcilable” (Cameron 181). So how do we escape from this paradox, the same paradox that catches all communication theorists between the demands of the past and the impossibility of the future? “Experimentation is mandatory and formal rules become suspect because they too quickly stereotype and ossify” (Cameron 181). The jam seems to work because it offers the possibility of the impossible made real by the act of communication. This play between the possible and the impossible, the rumbling engine of narrative, is the dynamo of the jam. Theory-jamming seeks to activate just such a dynamo. Rather than having a group of players on their instruments, the communication theorist has access to a range of theoretical riffs and moves that can be orchestrated to respond to the question in focus, to the latest developments, to contradictions or blank spaces within theoretical terrains. The theory-jammer works to their own standards, turning ideas learned from others (‘head jobs’) into their own distinctive patterns, still reliant on traditional melody, harmony and syncopation but now bent, twisted and reorganised into an entirely new story. The practice of following old pathways to new destinations has a long tradition in the West as eclecticism, a Graeco-Roman, particularly Alexandrian, philosophical tradition from the first century BC to the end of the classical period. Typified by Potamo, who “encouraged his pupils instead to learn from a variety of masters”, eclecticism sought the best from each school, “all that teaches righteousness combined, the complete eclectic unity” (Kelley 578). By selecting the best, most reasonable, most useful elements from existing philosophical beliefs, polymaths such as Cicero sought the harmonious solution of particular problems. We see something similar to eclecticism in the East in the practices of ‘wild fox zen’, which teaches liberation from conceptual fixation (Heine).
The 20th century’s most interesting eclectic was probably Walter Benjamin, whose method owes something to both scientific Marxism and the Jewish Kabbalah. His hero was the rag-picker who had the cunning to create life from refuse and detritus. Benjamin’s greatest work, the unfinished Arcades Project, sought to create history from the same. It is a collection of photos, ephemera and transcriptions from books and newspapers (Benjamin). The particularity of eclecticism may be contrasted with the claim to universality of syncretism, the reconciliation of disparate or opposing beliefs by melding together various schools of thought into a new orthodoxy. Theory-jammers are not looking for a final solution but rather seek what will work on this problem now, to come to a provisional solution, always aware that other, better, further solutions may be ahead. Elements of the jam are apparent in other contemporary forms of communication. For example, bricolage, the practice from art, culture and information systems, involves tinkering elements together by trial and error, in ways not originally planned. Pastiche, from literature to the movies, mimics style while creating a new message. In theatre and TV comedy, improvisation has become a style in itself. Theory-jamming has direct connections with brainstorming, the practice that originated in the advertising industry to generate new ideas and solutions by kicking around possibilities. Against the hyper-administration of modern life, as the disintegration of grand theory immobilises thinkers, theory-jamming provides the means to think new thoughts. As a political activist and communications practitioner in Australia over the last thirty years, I have always been bemused by the human propensity to factionalise. Rather than getting bogged down by positions, I have sought to use administrative structures to explore critical ideas, to marshal critical approaches into administrative apparatus, to weld together critical and administrative formations in ways useful to both sides, but most importantly, in ways useful to human society and a healthy environment. I’ve been accused of selling out by the critical camp and of being unrealistic by the administrative side. My response is that we have much more to learn by listening and adapting than we do by self-satisfied stasis. Five Theses on Theory-Jamming Eclecticism requires Ethnography: the eclectic is the ethnographer loose in their own mind. “The free spirit surveys things, and now for the first time mundane existence appears to it worthy of contemplation…” (Nietzsche 6). Enculturation and Enumeration need each other: qualitative and quantitative research work best when they work off each other. “Beginners learned how to establish parallels, by means of the Game’s symbols, between a piece of classical music and the formula for some law of nature. Experts and Masters of the Game freely wove the initial theme into unlimited combinations.” (Hesse) Ephemera and Esoterica tell us the most: the back-story is the real story as we stumble on the greatest truths as if by accident. “…the mind’s deeper currents often need to be surprised by indirection, sometimes, indeed, by treachery and ruse, as when you steer away from a goal in order to reach it more directly…” (Jameson 71). Experimentation beyond Empiricism: more than testing our sense of our sense data of the world.
Communication theory extends from infra-red to ultraviolet, from silent to ultrasonic, from absolute zero to complete heat, from the sub-atomic to the inter-galactic. “That is the true characteristic of the philosophical drive: wonderment at that which lies before everyone.” (Nietzsche 6). Extravagance and Exuberance: don’t stop until you’ve got enough. Theory-jamming opens the possibility for a unified theory of communication that starts, not with a false narrative certainty, but with the gaps in communication: the distance between what we know and what we say, between what we say and what we write, between what we write and what others read back, between what others say and what we hear.

References

Benjamin, Walter. The Arcades Project. Cambridge, MA: Harvard UP, 2002.
Cameron, W. B. “Sociological Notes on the Jam Session.” Social Forces 33 (Dec. 1954): 177-82.
Dervin, B., P. Shields, and M. Song. “More than Misunderstanding, Less than War.” Paper at International Communication Association annual meeting, New York City, NY, 2005. 5 Oct. 2006 <http://www.allacademic.com/meta/p13530_index.html>.
“Ferment in the Field.” Journal of Communication 33.3 (1983).
Heine, Steven. “Putting the ‘Fox’ Back in the ‘Wild Fox Koan’: The Intersection of Philosophical and Popular Religious Elements in the Ch’an/Zen Koan Tradition.” Harvard Journal of Asiatic Studies 56.2 (Dec. 1996): 257-317.
Hesse, Hermann. The Glass Bead Game. Harmondsworth: Penguin, 1972.
Jameson, Fredric. “Postmodernism, or the Cultural Logic of Late Capitalism.” New Left Review 146 (1984): 53-90.
Kelley, Donald R. “Eclecticism and the History of Ideas.” Journal of the History of Ideas 62.4 (Oct. 2001): 577-92.
Nietzsche, Friedrich. The Pre-Platonic Philosophers. Urbana: U of Illinois P, 2001.
Rogers, E. M. “The Empirical and the Critical Schools of Communication Research.” Communication Yearbook 5 (1982): 125-44.
Shannon, C.E., and W. Weaver. The Mathematical Theory of Communication. Urbana: U of Illinois P, 1949.
Silver, David, and Adrienne Massanari. Critical Cyberculture Studies. New York: NYU P, 2006.
APA, Harvard, Vancouver, ISO, and other styles
37

Burns, Alex. "Doubting the Global War on Terror." M/C Journal 14, no. 1 (January 24, 2011). http://dx.doi.org/10.5204/mcj.338.

Full text
Abstract:
Photograph by Gonzalo Echeverria (2010)

Declaring War Soon after Al Qaeda’s terrorist attacks on 11 September 2001, the Bush Administration described its new grand strategy: the “Global War on Terror”. This underpinned the subsequent counter-insurgency in Afghanistan and the United States invasion of Iraq in March 2003. Media pundits quickly applied the Global War on Terror label to the Madrid, Bali and London bombings, to convey how Al Qaeda’s terrorism had gone transnational. Meanwhile, international relations scholars debated the extent to which September 11 had changed the international system (Brenner; Mann 303). American intellectuals adopted several variations of the Global War on Terror in what initially felt like a transitional period of US foreign policy (Burns). Walter Laqueur suggested Al Qaeda was engaged in a “cosmological” and perpetual war. Paul Berman likened Al Qaeda and militant Islam to the past ideological battles against communism and fascism (Heilbrunn 248). In a widely cited article, neoconservative thinker Norman Podhoretz suggested the United States faced “World War IV”, which had three interlocking drivers: Al Qaeda and transnational terrorism; political Islam as the West’s existential enemy; and nuclear proliferation to ‘rogue’ countries and non-state actors (Friedman 3). Podhoretz’s tone reflected a revival of his earlier Cold War politics and critique of the New Left (Friedman 148-149; Halper and Clarke 56; Heilbrunn 210). These stances attracted widespread support. For instance, the United States Marine Corps recalibrated its mission to fight a long war against “World War IV-like” enemies. Yet these stances left the United States unprepared as the combat situations in Afghanistan and Iraq worsened (Ricks; Ferguson; Filkins). Neoconservative ideals for Iraq “regime change” to transform the Middle East failed to deal with other security problems such as Pakistan’s Musharraf regime (Dorrien 110; Halper and Clarke 210-211; Friedman 121, 223; Heilbrunn 252). The Manichean and open-ended framing became a self-fulfilling prophecy for insurgents, jihadists, and militias. The Bush Administration quietly abandoned the Global War on Terror in July 2005. Widespread support had given way to policymaker doubt. Why did so many intellectuals and strategists embrace the Global War on Terror as the best possible “grand strategy” perspective of a post-September 11 world? Why was there so little doubt of this worldview? This is a debate with roots as old as the Sceptics versus the Sophists. Explanations usually focus on the Bush Administration’s “Vulcans” war cabinet: Vice President Dick Cheney, Secretary of Defense Donald Rumsfeld, and National Security Advisor Condoleezza Rice, who later became Secretary of State (Mann xv-xvi). The “Vulcans” were named after the Roman god Vulcan because Rice’s hometown Birmingham, Alabama, had “a mammoth fifty-six foot statue . . . [in] homage to the city’s steel industry” (Mann x) and the name stuck. Alternatively, explanations focus on how neoconservative thinkers shaped the intellectual climate after September 11, in a receptive media climate. Biographers suggest that “neoconservatism had become an echo chamber” (Heilbrunn 242) with its own media outlets, pundits, and think-tanks such as the American Enterprise Institute and the Project for the New American Century. Neoconservatism briefly flourished in Washington DC until Iraq’s sectarian violence discredited the “Vulcans” and neoconservative strategists like Paul Wolfowitz (Friedman; Ferguson).
The neoconservatives’ combination of September 11’s aftermath with strongly argued historical analogies was initially convincing. They conferred with scholars such as Bernard Lewis, Samuel P. Huntington and Victor Davis Hanson to construct classicist historical narratives and to explain cultural differences. However, the history of the decade after September 11 also contains mis-steps and mistakes which make it a series of contingent decisions (Ferguson; Bergen). One way to analyse these contingent decisions is to pose “what if?” counterfactuals, or feasible alternatives to historical events (Lebow). For instance, what if September 11 had been a chemical and biological weapons attack? (Mann 317). Appendix 1 includes a range of alternative possibilities and “minimal rewrites”, or slight variations on the historical events which occurred. Collectively, these counterfactuals suggest the role of agency, chance, luck, and the juxtaposition of better and worse outcomes. They pose challenges to the classicist interpretation adopted soon after September 11 to justify “World War IV” (Podhoretz). A ‘Two-Track’ Process for ‘World War IV’ After the September 11 attacks, I think an overlapping two-track process occurred with the “Vulcans” cabinet, neoconservative advisers, and two “echo chambers”: neoconservative think-tanks and the post-September 11 media. Crucially, Bush’s “Vulcans” war cabinet succeeded in gaining civilian control of the United States war decision process. Although successful in initiating the 2003 Iraq War, this civilian control created a deeper crisis in US civil-military relations (Stevenson; Morgan). The “Vulcans” relied on “politicised” intelligence, such as a United Kingdom intelligence report on Iraq’s weapons development program. The report enabled “a climate of undifferentiated fear to arise” because its public version did not distinguish between chemical, biological, radiological or nuclear weapons (Halper and Clarke 210). The cautious 2003 National Intelligence Estimate (NIE) on Iraq was only released in a strongly edited form. For instance, the US Department of Energy had expressed doubts about claims that Iraq had approached Niger for uranium, and was using aluminium tubes for biological and chemical weapons development. Meanwhile, the post-September 11 media had become a second “echo chamber” (Halper and Clarke 194-196) which amplified neoconservative arguments. Berman, Laqueur, Podhoretz and others who framed the intellectual climate were “risk entrepreneurs” (Mueller 41-43) who supported the “World War IV” vision. The media also engaged in aggressive “flak” campaigns (Herman and Chomsky 26-28; Mueller 39-42) designed to limit debate and to stress foreign policy stances and themes which supported the Bush Administration. When former Central Intelligence Agency director James Woolsey claimed that Al Qaeda had close connections to Iraqi intelligence, this was promoted in several books, including Michael Ledeen’s War against the Terror Masters, Stephen Hayes’ The Connection, and Laurie Mylroie’s Bush v. The Beltway; and in partisan media such as Fox News, NewsMax, and The Weekly Standard, which each attacked the US State Department and the CIA (Dorrien 183; Hayes; Ledeen; Mylroie; Heilbrunn 237, 243-244; Mann 310). This was the media “echo chamber” at work. The group Accuracy in Media also campaigned successfully to ensure that US cable providers did not give Al Jazeera English access to US audiences (Barker).
Cosmopolitan ideals seemed incompatible with what the “flak” groups desired. The two-track process converged on two now infamous speeches: US President Bush’s State of the Union Address on 29 January 2002, and US Secretary of State Colin Powell’s presentation to the United Nations on 5 February 2003. Bush’s speech included a line from neoconservative David Frum about North Korea, Iraq and Iran as an “Axis of Evil” (Dorrien 158; Halper and Clarke 139-140; Mann 242, 317-321). Powell’s presentation to the United Nations included now-debunked threat assessments. In fact, Powell had altered the speech’s original draft by I. Lewis “Scooter” Libby, who was Cheney’s chief of staff (Dorrien 183-184). Powell claimed that Iraq had mobile biological weapons facilities, linked to Abu Musab al-Zarqawi. However, the International Atomic Energy Agency’s (IAEA) Mohamed El-Baradei, the Defense Intelligence Agency, the State Department, and the Institute for Science and International Security all strongly doubted this claim, as did international observers (Dorrien 184; Halper and Clarke 212-213; Mann 353-354). Yet this information was suppressed: attacked by “flak” or given little visible media coverage. Powell’s agenda included trying to rebuild an international coalition and to head off weather changes that would affect military operations in the Middle East (Mann 351). Both speeches used politicised variants of “weapons of mass destruction”, taken from the counterterrorism literature (Stern; Laqueur). Bush’s speech created an inflated geopolitical threat whilst Powell relied on flawed intelligence and scientific visuals to communicate a non-existent threat (Vogel). However, they had the intended effect on decision makers. US Deputy Secretary of Defense, the neoconservative Paul Wolfowitz, later revealed to Vanity Fair that “weapons of mass destruction” was selected as an issue that all potential stakeholders could agree on (Wilkie 69). Perhaps the only remaining outlet was satire: Armando Iannucci’s 2009 film In The Loop parodied the diplomatic politics surrounding Powell’s speech and the civil-military tensions on the Iraq War’s eve. In the short term, the two-track process worked in heading off doubt. The “Vulcans” blocked important information on pre-war Iraq intelligence from reaching the media and the general public (Prados). Alternatively, they ignored area specialists and other experts, such as when the Coalition Provisional Authority’s L. Paul Bremer ignored the US State Department’s fifteen-volume ‘Future of Iraq’ project (Ferguson). Public “flak” and “risk entrepreneurs” mobilised a range of motivations, from grief and revenge to historical memory and identity politics. This combination of private and public processes meant that although doubts were expressed, they could be contained through the dual echo chambers of neoconservative policymaking and the post-September 11 media. These factors enabled the “Vulcans” to proceed with their “regime change” plans despite strong public opposition from anti-war protestors. Expressing Doubts Many experts and institutions expressed doubt about specific claims the Bush Administration made to support the 2003 Iraq War. This doubt came from three different and sometimes overlapping groups. Subject matter experts such as the IAEA’s Mohamed El-Baradei and weapons development scientists countered the UK intelligence report and Powell’s UN speech. However, they did not get the media coverage warranted due to “flak” and “echo chamber” dynamics.
Others could challenge misleading historical analogies between insurgent Iraq and Nazi Germany, and yet not change the broader outcomes (Benjamin). Independent journalists were another group who gained new information during the 1990-91 Gulf War: some entered Iraq from Kuwait and documented a more humanitarian side of the war than journalists embedded with US military units (Uyarra). Finally, there were dissenters from bureaucratic and institutional processes. In some cases, all three overlapped. In their separate analyses of the post-September 11 debate on intelligence “failure”, Zegart and Jervis point to a range of analytic misperceptions and institutional problems. However, the intelligence community is separated from policymakers such as the “Vulcans”. Compartmentalisation due to the “need to know” principle also means that doubting analysts can be blocked from releasing information. Andrew Wilkie discovered this when he resigned from Australia’s Office of National Assessments (ONA) as a transnational issues analyst. Wilkie questioned the pre-war assessments in Powell’s United Nations speech that were used to justify the 2003 Iraq War. Wilkie was then attacked publicly by Australian Prime Minister John Howard. This overshadowed a more important fact: both Howard and Wilkie knew that, due to Australian legislation, Wilkie could not publicly comment on ONA intelligence, despite the invitation to do so. This barrier also prevented other intelligence analysts from responding to the “Vulcans”, and to “flak” and “echo chamber” dynamics in the media and neoconservative think-tanks. Many analysts knew that the excerpts released from the 2003 NIE on Iraq were highly edited (Prados). For example, Australian agencies such as the ONA, the Department of Foreign Affairs and Trade, and the Department of Defence knew this (Wilkie 98). However, analysts are trained not to interfere with policymakers, even when there are significant civil-military irregularities. Military officials who spoke out about pre-war planning against the “Vulcans” and their neoconservative supporters were silenced (Ricks; Ferguson). Greenlight Capital’s hedge fund manager David Einhorn illustrates in a different context what might happen if analysts did comment. Einhorn gave a speech to the Ira Sohn Conference on 15 May 2002 debunking the management of Allied Capital. Einhorn’s “short-selling” led to retaliation from Allied Capital, a Securities and Exchange Commission investigation, and growing evidence of potential fraud. Had analysts adopted Einhorn’s tactics—combining rigorous analysis with targeted, public denunciation that is widely reported—they might have short-circuited the “flak” and “echo chamber” effects prior to the 2003 Iraq War. The intelligence community usually tries to pre-empt such outcomes via contestation exercises and similar processes. This was the goal of the 2003 NIE on Iraq, despite the fact that the US Department of Energy, which had the expertise, was overruled by other agencies whose opinions were not necessarily based on rigorous scientific and technical analysis (Prados; Vogel). In counterterrorism circles, similar disinformation arose about Aum Shinrikyo’s biological weapons research after its sarin gas attack on Tokyo’s subway system on 20 March 1995 (Leitenberg). Disinformation also arose regarding nuclear weapons proliferation to non-state actors in the 1990s (Stern).
Interestingly, several of the “Vulcans” and neoconservatives had been involved in an earlier controversial contestation exercise: Team B in 1976. The Central Intelligence Agency (CIA) assembled three Team B groups in order to evaluate and forecast Soviet military capabilities. One group, headed by historian Richard Pipes, gave highly “alarmist” forecasts and then attacked a CIA NIE about the Soviets (Dorrien 50-56; Mueller 81). The neoconservatives adopted these same tactics to reframe the 2003 NIE from its position of caution, expressed by several intelligence agencies and experts, to belief that Iraq possessed a current, covert program to develop weapons of mass destruction (Prados). Alternatively, information may be leaked to the media to express doubt. “Non-attributable” background interviews to establishment journalists like Seymour Hersh and Bob Woodward achieved this. Wikileaks publisher Julian Assange has recently achieved notoriety due to US diplomatic cables from the SIPRNet network released from 28 November 2010 onwards. Supporters have favourably compared Assange to Daniel Ellsberg, the RAND researcher who leaked the Pentagon Papers (Ellsberg; Ehrlich and Goldsmith). Whilst Ellsberg succeeded because a network of US national papers continued to print excerpts from the Pentagon Papers despite lawsuit threats, Assange relied in part on favourable coverage from the UK’s Guardian newspaper. However, suspected sources such as US Army soldier Bradley Manning are not protected, whilst media outlets are relatively free to publish their scoops (Walt, ‘Woodward’). Assange’s publication of SIPRNet’s diplomatic cables will also likely mean greater restrictions on diplomatic and military intelligence (Walt, ‘Don’t Write’). Beyond ‘Doubt’ Iraq’s worsening security discredited many of the factors that had given the neoconservatives credibility. The post-September 11 media became increasingly critical of the US military in Iraq (Ferguson) and cautious about the “echo chamber” of think-tanks and media outlets. Internet sites for Al Jazeera English, Al-Arabiya and other networks have enabled people to bypass “flak” and directly access these different viewpoints. Most damagingly, the non-discovery of Iraq’s weapons of mass destruction discredited both the 2003 NIE on Iraq and Colin Powell’s United Nations presentation (Wilkie 104). Likewise, “risk entrepreneurs” who foresaw “World War IV” in 2002 and 2003 have now distanced themselves from these apocalyptic forecasts due to a series of mis-steps and mistakes by the Bush Administration and Al Qaeda’s over-calculation (Bergen). The emergence of sites such as Wikileaks, and networks like Al Jazeera English and Al-Arabiya, is a response to the politics of the past decade. They attempt to short-circuit past “echo chambers” by providing access to different sources and leaked data. The Global War on Terror framed the Bush Administration’s response to September 11 as a war (Kirk; Mueller 59). Whilst this prematurely closed off other possibilities, it has also unleashed a series of dynamics which have undermined the neoconservative agenda. The “classicist” history and historical analogies constructed to justify the “World War IV” scenario are just one of several potential frameworks. “Flak” organisations and media “echo chambers” are now challenged by well-financed and strategic alternatives such as Al Jazeera English and Al-Arabiya.
Doubt is one defence against “risk entrepreneurs” who seek to promote a particular idea: doubt guards against uncritical adoption. Perhaps the enduring lesson of the post-September 11 debates, though, is that doubt alone is not enough. What is needed are individuals and institutions that understand the strategies which the neoconservatives and others have used, and who also have the soft power skills during crises to influence critical decision-makers to choose alternatives.

Appendix 1: Counterfactuals

Richard Ned Lebow uses “what if?” counterfactuals to examine alternative possibilities and “minimal rewrites”, or slight variations on the historical events that occurred. The following counterfactuals suggest that the Bush Administration’s Global War on Terror could have evolved very differently . . . or not occurred at all.

Fact: The 2003 Iraq War and 2001 Afghanistan counterinsurgency shaped the Bush Administration’s post-September 11 grand strategy.
Counterfactual #1: Al Gore decisively wins the 2000 U.S. election. Bush v. Gore never occurs. After the September 11 attacks, Gore focuses on international alliance-building and gains widespread diplomatic support rather than pursuing a neoconservative agenda. He authorises Special Operations Forces in Afghanistan and works closely with the Musharraf regime in Pakistan to target Al Qaeda’s mujahideen. He ‘contains’ Saddam Hussein’s Iraq through measurement and signature intelligence, technical intelligence, and more stringent monitoring by the International Atomic Energy Agency.
Minimal Rewrite: United 93 crashes in Washington DC, killing senior members of the Gore Administration.

Fact: U.S. Special Operations Forces failed to kill Osama bin Laden in late November and early December 2001 at Tora Bora.
Counterfactual #2: U.S. Special Operations Forces kill Osama bin Laden in early December 2001 during skirmishes at Tora Bora. Ayman al-Zawahiri is critically wounded, captured, and imprisoned. The rest of Al Qaeda is scattered.
Minimal Rewrite: Osama bin Laden’s death turns him into a self-mythologised hero for decades.

Fact: The UK Blair Government supplied a 50-page intelligence dossier on Iraq’s weapons development program which the Bush Administration used to support its pre-war planning.
Counterfactual #3: Rogue intelligence analysts debunk the UK Blair Government’s claims through a series of ‘targeted’ leaks to establishment news sources.
Minimal Rewrite: The 50-page intelligence dossier is later discovered to be correct about Iraq’s weapons development program.

Fact: The Bush Administration used the 2003 National Intelligence Estimate to “build its case” for “regime change” in Saddam Hussein’s Iraq.
Counterfactual #4: A joint investigation by The New York Times and The Washington Post rebuts U.S. Secretary of State Colin Powell’s speech to the United Nations Security Council, delivered on 5 February 2003.
Minimal Rewrite: The Central Intelligence Agency’s whitepaper “Iraq’s Weapons of Mass Destruction Programs” (October 2002) more accurately reflects the 2003 NIE’s cautious assessments.

Fact: The Bush Administration relied on Ahmed Chalabi for its postwar estimates about Iraq’s reconstruction.
Counterfactual #5: The Bush Administration ignores Chalabi’s advice and relies instead on the U.S. State Department’s 15-volume report “The Future of Iraq”.
Minimal Rewrite: The Coalition Provisional Authority appoints Ahmed Chalabi to head an interim Iraqi government.

Fact: L. Paul Bremer signed orders to disband Iraq’s Army and to De-Ba’athify Iraq’s new government.
Counterfactual #6: Bremer keeps Iraq’s Army intact and uses it to impose security in Baghdad, to prevent looting and to thwart insurgents. Rather than a De-Ba’athification policy, Bremer uses former Baath Party members to gather situational intelligence.
Minimal Rewrite: Iraq’s Army refuses to disband and the De-Ba’athification policy uncovers several conspiracies to undermine the Coalition Provisional Authority.

Acknowledgments

Thanks to Stephen McGrail for advice on science and technology analysis.

References

Barker, Greg. “War of Ideas.” PBS Frontline. Boston, MA: 2007. ‹http://www.pbs.org/frontlineworld/stories/newswar/video1.html›.
Benjamin, Daniel. “Condi’s Phony History.” Slate 29 Aug. 2003. ‹http://www.slate.com/id/2087768/pagenum/all/›.
Bergen, Peter L. The Longest War: The Enduring Conflict between America and Al Qaeda. New York: The Free Press, 2011.
Berman, Paul. Terror and Liberalism. New York: W.W. Norton & Company, 2003.
Brenner, William J. “In Search of Monsters: Realism and Progress in International Relations Theory after September 11.” Security Studies 15.3 (2006): 496-528.
Burns, Alex. “The Worldflash of a Coming Future.” M/C Journal 6.2 (April 2003). ‹http://journal.media-culture.org.au/0304/08-worldflash.php›.
Dorrien, Gary. Imperial Designs: Neoconservatism and the New Pax Americana. New York: Routledge, 2004.
Ehrlich, Judith, and Rick Goldsmith. The Most Dangerous Man in America: Daniel Ellsberg and the Pentagon Papers. Berkeley, CA: Kovno Communications, 2009.
Einhorn, David. Fooling Some of the People All of the Time: A Long Short (and Now Complete) Story. Hoboken, NJ: John Wiley & Sons, 2010.
Ellison, Sarah. “The Man Who Spilled the Secrets.” Vanity Fair (Feb. 2011). ‹http://www.vanityfair.com/politics/features/2011/02/the-guardian-201102›.
Ellsberg, Daniel. Secrets: A Memoir of Vietnam and the Pentagon Papers. New York: Viking, 2002.
Ferguson, Charles. No End in Sight. New York: Representational Pictures, 2007.
Filkins, Dexter. The Forever War. New York: Vintage Books, 2008.
Friedman, Murray. The Neoconservative Revolution: Jewish Intellectuals and the Shaping of Public Policy. New York: Cambridge UP, 2005.
Halper, Stefan, and Jonathan Clarke. America Alone: The Neo-Conservatives and the Global Order. New York: Cambridge UP, 2004.
Hayes, Stephen F. The Connection: How Al Qaeda’s Collaboration with Saddam Hussein Has Endangered America. New York: HarperCollins, 2004.
Heilbrunn, Jacob. They Knew They Were Right: The Rise of the Neocons. New York: Doubleday, 2008.
Herman, Edward S., and Noam Chomsky. Manufacturing Consent: The Political Economy of the Mass Media. Rev. ed. New York: Pantheon Books, 2002.
Iannucci, Armando. In The Loop. London: BBC Films, 2009.
Jervis, Robert. Why Intelligence Fails: Lessons from the Iranian Revolution and the Iraq War. Ithaca, NY: Cornell UP, 2010.
Kirk, Michael. “The War behind Closed Doors.” PBS Frontline. Boston, MA: 2003. ‹http://www.pbs.org/wgbh/pages/frontline/shows/iraq/›.
Laqueur, Walter. No End to War: Terrorism in the Twenty-First Century. New York: Continuum, 2003.
Lebow, Richard Ned. Forbidden Fruit: Counterfactuals and International Relations. Princeton, NJ: Princeton UP, 2010.
Ledeen, Michael. The War against the Terror Masters. New York: St. Martin’s Griffin, 2003.
Leitenberg, Milton. “Aum Shinrikyo’s Efforts to Produce Biological Weapons: A Case Study in the Serial Propagation of Misinformation.” Terrorism and Political Violence 11.4 (1999): 149-158.
Mann, James. Rise of the Vulcans: The History of Bush’s War Cabinet. New York: Viking Penguin, 2004.
Morgan, Matthew J. The American Military after 9/11: Society, State, and Empire. New York: Palgrave Macmillan, 2008.
Mueller, John. Overblown: How Politicians and the Terrorism Industry Inflate National Security Threats, and Why We Believe Them. New York: The Free Press, 2009.
Mylroie, Laurie. Bush v. The Beltway: The Inside Battle over War in Iraq. New York: Regan Books, 2003.
Nutt, Paul C. Why Decisions Fail. San Francisco: Berrett-Koehler, 2002.
Podhoretz, Norman. “How to Win World War IV.” Commentary 113.2 (2002): 19-29.
Prados, John. Hoodwinked: The Documents That Reveal How Bush Sold Us a War. New York: The New Press, 2004.
Ricks, Thomas. Fiasco: The American Military Adventure in Iraq. New York: The Penguin Press, 2006.
Stern, Jessica. The Ultimate Terrorists. Boston, MA: Harvard UP, 2001.
Stevenson, Charles A. Warriors and Politicians: US Civil-Military Relations under Stress. New York: Routledge, 2006.
Uyarra, Esteban Manzanares. “War Feels like War”. London: BBC, 2003.
Vogel, Kathleen M. “Iraqi Winnebagos™ of Death: Imagined and Realized Futures of US Bioweapons Threat Assessments.” Science and Public Policy 35.8 (2008): 561-573.
Walt, Stephen M. “‘Don’t Write If You Can Talk...’: The Latest from WikiLeaks.” Foreign Policy 29 Nov. 2010. ‹http://walt.foreignpolicy.com/posts/2010/11/29/dont_write_if_you_can_talk_the_latest_from_wikileaks›.
Walt, Stephen M. “Should Bob Woodward Be Arrested?” Foreign Policy 10 Dec. 2010. ‹http://walt.foreignpolicy.com/posts/2010/12/10/more_wikileaks_double_standards›.
Wilkie, Andrew. Axis of Deceit. Melbourne: Black Ink Books, 2003.
Zegart, Amy. Spying Blind: The CIA, the FBI and the Origins of 9/11. Princeton, NJ: Princeton UP, 2007.
APA, Harvard, Vancouver, ISO, and other styles
38

Karlin, Beth, and John Johnson. "Measuring Impact: The Importance of Evaluation for Documentary Film Campaigns." M/C Journal 14, no. 6 (November 18, 2011). http://dx.doi.org/10.5204/mcj.444.

Full text
Abstract:
Introduction Documentary film has grown significantly in the past decade, with high profile films such as Fahrenheit 9/11, Supersize Me, and An Inconvenient Truth garnering increased attention both at the box office and in the news media. In addition, the rising prominence of web-based media has provided new opportunities for documentary to create social impact. Films are now typically released with websites, Facebook pages, Twitter feeds, and web videos to increase both reach and impact. This combination of technology and broader audience appeal has given rise to a current landscape in which documentary films are embedded within coordinated multi-media campaigns. New media have not only opened up new avenues for communicating with audiences, they have also created new opportunities for data collection and analysis of film impacts. A recent report by McKinsey and Company highlighted this potential, introducing and discussing the implications of increasing consumer information being recorded on the Internet as well as through networked sensors in the physical world. As they found: "Big data—large pools of data that can be captured, communicated, aggregated, stored, and analyzed—is now part of every sector and function of the global economy" (Manyika et al. iv). This data can be mined to learn a great deal about both individual and cultural response to documentary films and the issues they represent. Although film has a rich history in humanities research, this new set of tools enables an empirical approach grounded in the social sciences. However, several researchers across disciplines have noted that limited investigation has been conducted in this area. Although there has always been an emphasis on social impact in film and many filmmakers and scholars have made legitimate (and possibly illegitimate) claims of impact, few have attempted to empirically justify these claims. Over fifteen years ago, noted film scholar Brian Winston commented that "the underlying assumption of most social documentaries—that they shall act as agents of reform and change—is almost never demonstrated" (236). A decade later, political scientist David Whiteman repeated this sentiment, arguing that, "despite widespread speculation about the impact of documentaries, the topic has received relatively little systematic attention" ("Evolving"). And earlier this year, the introduction to a special issue of Mass Communication and Society on documentary film stated, "documentary film, despite its growing influence and many impacts, has mostly been overlooked by social scientists studying the media and communication" (Nisbet and Aufderheide 451). Film has been studied extensively as entertainment, as narrative, and as a cultural event, but the study of film as an agent of social change is still in its infancy. This paper introduces a systematic approach to measuring the social impact of documentary film, aiming to: (1) discuss the context of documentary film and its potential impact; and (2) argue for a social science approach, discussing key issues about conducting such research. Changes in Documentary Practice Documentary film has been used as a tool for promoting social change throughout its history. John Grierson, who coined the term "documentary" in 1926, believed it could be used to influence the ideas and actions of people in ways once reserved for church and school.
He presented his thoughts on this emerging genre in his 1932 essay, First Principles of Documentary, saying, "We believe that the cinema's capacity for getting around, for observing and selecting from life itself, can be exploited in a new and vital art form" (97). Richard Barsam further specified the definition of documentary, distinguishing it from non-fiction film, such that all documentaries are non-fiction films but not all non-fiction films are documentaries. He distinguishes documentary from other forms of non-fiction film (i.e. travel films, educational films, newsreels) by its purpose; it is a film with an opinion and a specific message that aims to persuade or influence the audience. And Bill Nichols writes that the definition of documentary may even expand beyond the film itself, defining it as a "filmmaking practice, a cinematic tradition, and mode of audience reception" (12). Documentary film has undergone many significant changes since its inception, from the heavily staged romanticism movement of the 1920s to the propagandist tradition of governments using film to persuade individuals to support national agendas to the introduction of cinéma vérité in the 1960s and historical documentary in the 1980s (cf. Barnouw). However, the recent upsurge in popularity of documentary media, combined with the technological advances of the internet and computers, has opened up a whole new set of opportunities for film to serve as both art and agent for social change. One such opportunity is in the creation of film-based social action campaigns. Over the past decade, filmmakers have taken a more active role in promoting social change by coordinating film releases with action campaigns. Companies such as Participant Media (An Inconvenient Truth, Food Inc., etc.) now create "specific social action campaigns for each film and documentary designed to give a voice to issues that resonate in the films" (Participant Media). In addition, a new sector of "social media" consultants is now offering services, including "consultation, strategic planning for alternative distribution, website and social media development, and complete campaign management services to filmmakers to ensure the content of nonfiction media truly meets the intention for change" (Working Films). The emergence of new forms of media and technology is changing our conceptions of both documentary film and social action. Technologies such as podcasts, video blogs, internet radio, social media and network applications, and collaborative web editing "both unsettle and extend concepts and assumptions at the heart of 'documentary' as a practice and as an idea" (Ellsworth). In the past decade, we have seen new forms of documentary creation, distribution, marketing, and engagement. Likewise, film campaigns are utilizing a broad array of strategies to engage audience members, including "action kits, screening programs, educational curriculums and classes, house parties, seminars, panels" that often turn into "ongoing 'legacy' programs that are updated and revised to continue beyond the film's domestic and international theatrical, DVD and television windows" (Participant Media). This move towards multi-media documentary film is becoming not only commonplace, but expected as a part of filmmaking. NYU film professor and documentary film pioneer George Stoney recently noted, "50 percent of the documentary filmmaker's job is making the movie, and 50 percent is figuring out what its impact can be and how it can move audiences to action" (qtd.
in Nisbet, "Gasland"). In his book Convergence Culture, Henry Jenkins coined the term "transmedia storytelling", which he later defined as "a process where integral elements of a fiction get dispersed systematically across multiple delivery channels for the purpose of creating a unified and coordinated entertainment experience" ("Transmedia"). When applied to documentary film, it is the elements of the "issue" raised by the film that get dispersed across these channels, coordinating, not just an entertainment experience, but a social action campaign. Dimensions of Evaluation It is not unreasonable to assume that such film campaigns, just like any policy or program, have the possibility to influence viewers' knowledge, attitudes, and behavior. Measuring this impact has become increasingly important, as funders of documentary and issue-based films want to understand the "return on investment" of films in terms of social impact so that they can compare them with other projects, including non-media, direct service projects. Although we "feel" like films make a difference both to the individuals who see them and to the broader cultures in which they are embedded, measurement and empirical analysis of this impact are vitally important for both providing feedback to filmmakers and funders as well as informing future efforts attempting to leverage film for social change. This type of systematic assessment, or program evaluation, is often discussed in terms of two primary goals—formative (or process) and summative (or impact) evaluation (cf. Muraskin; Trochim and Donnelly). Formative evaluation studies program materials and activities to strengthen a program, and summative evaluation examines program outcomes. In terms of documentary film, these two goals can be described as follows: Formative Evaluation: Informing the Process As programs (broadly defined as an intentional set of activities with the aim of having some specific impact), the people who interact with them, and the cultures they are situated in are constantly changing, program development and evaluation is an ongoing learning cycle. Film campaigns, which are an intentional set of activities with the aim of impacting individual viewers and broader cultures, fit squarely within this purview. Without formulating hypotheses about the relationships between program activities and goals and then collecting and analyzing data during implementation to test them, it is difficult to learn ways to improve programs (or continue doing what works best in the most efficient manner). Attention to this process enables those involved to learn more about, not only what works, but how and why it works, and even gain insights about how program outcomes may be affected by changes to resource availability, potential audiences, or infrastructure. Filmmakers are constantly learning and honing their craft, and realizing the impact of their practice can help the artistic process. Often faced with tight budgets and timelines, they are forced to confront tradeoffs all the time, in the writing, production and post-production process. Understanding where they are having impact can improve their decision-making, which can help both the individual project and the overall field. Summative Evaluation: Quantifying Impacts Evaluation is used in many different fields to determine whether programs are achieving their intended goals and objectives.
It became popular in the 1960s as a way of understanding the impact of the Great Society programs and has continued to grow since that time (Madaus and Stufflebeam). A recent White House memo stated that "rigorous, independent program evaluations can be a key resource in determining whether government programs are achieving their intended outcomes as well as possible and at the lowest possible cost" and the United States Office of Management and Budget (OMB) launched an initiative to increase the practice of "impact evaluations, or evaluations aimed at determining the causal effects of programs" (Orszag 1). Documentary films, like government programs, generally target a national audience, aim to serve a social purpose, and often do not provide a return on their investment. Participant Media, the most visible and arguably most successful documentary production company in the film industry, made recent headlines for its difficulty in making a profit during its seven-year history (Cieply). Owner and founder Jeff Skoll reported investing hundreds of millions of dollars into the company, and CEO James Berk added that the company sometimes measures success, not by profit, but by "whether Mr. Skoll could have exerted more impact simply by spending his money philanthropically" (Cieply). Because of this, documentary projects often rely on grant funding, and are starting to approach funders beyond traditional arts and media sources. "Filmmakers are finding new fiscal and non-fiscal partners, in constituencies that would not traditionally be considered—or consider themselves—media funders or partners" (BRITDOC 6). And funders increasingly expect tangible data about their return on investment. Says Luis Ubiñas, president of the Ford Foundation, which recently launched the Just Films Initiative: In these times of global economic uncertainty, with increasing demand for limited philanthropic dollars, assessing our effectiveness is more important than ever. Today, staying on the frontlines of social change means gauging, with thoughtfulness and rigor, the immediate and distant outcomes of our funding. Establishing the need for evaluation is not enough—attention to methodology is also critical. Valid research methodology is a critical component of understanding the role entertainment can play in impacting social and environmental issues. The following issues are vital to measuring impact. Defining the Project Though this may seem like an obvious step, it is essential to determine the nature of the project so one can create research questions and hypotheses based on a complete understanding of the "treatment". One organization that provides a great example of the integration of documentary film embedded into a larger campaign or movement is Invisible Children. Founded in 2005, Invisible Children is both a media-based organization and an economic development NGO with the goal of raising awareness and meeting the needs of child soldiers and other youth suffering as a result of the ongoing war in northern Uganda. Although Invisible Children began as a documentary film, it has grown into a large non-profit organization with an operating budget of over $8 million and a staff of over a hundred employees and interns throughout the year as well as volunteers in all 50 states and several countries.
Invisible Children programming includes films, events, fundraising campaigns, contests, social media platforms, blogs, videos, two national "tours" per year, merchandise, and even a 650-person, three-day youth summit in August 2011 called The Fourth Estate. Individually, each of these components might lead to specific outcomes; collectively, they might lead to others. In order to properly assess the impacts of the film "project", it is important to take all of these components into consideration and think about who they may impact and how. This informs the research questions, hypotheses, and methods used in evaluation. Film campaigns may even include partnerships with existing social movements and non-profit organizations targeting social change. The American University Center for Social Media concluded in a case study of three issue-based documentary film campaigns: Digital technologies do not replace, but are closely entwined with, longstanding on-the-ground activities of stakeholders and citizens working for social change. Projects like these forge new tools, pipelines, and circuits of circulation in a multiplatform media environment. They help to create sustainable network infrastructures for participatory public media that extend from local communities to transnational circuits and from grassroots communities to policy makers. (Abrash) Expanding the Focus of Impact beyond the Individual Recent work has shifted the dialogue on film impact. Whiteman ("Theaters") argues that traditional metrics of film "success" tend to focus on studio economic indicators that are far more relevant to large-budget films. Current efforts focused on box office receipts and audience size, the author claims, are really measures of successful film marketing or promotion, missing the mark when it comes to understanding social impact. He instead stresses the importance of developing a more comprehensive model. His "coalition model" broadens the range and types of impact of film beyond traditional metrics to include the entire filmmaking process, from production to distribution. Whiteman ("Theaters") argues that a narrow focus on the size of the audience for a film, its box office receipts, and viewers' attitudes does not incorporate the potential reach of a documentary film. Impacts within the coalition model include both individual and policy levels. Individual impacts (with an emphasis on activist groups) include educating members, mobilizing for action, and raising group status; policy impacts include altering both the agenda for and the substance of policy deliberations. The Fledgling Fund (Barrett and Leddy) expanded on this concept and identified five distinct impacts of documentary film campaigns. These potential impacts expand from individual viewers to groups, movements, and eventually to what they call the "ultimate goal" of social change. Each is introduced briefly below. Quality Film. The film itself can be presented as a quality film or media project, creating enjoyment or evoking emotion on the part of audiences. "By this we mean a film that has a compelling narrative that draws viewers in and can engage them in the issue and illustrate complex problems in ways that statistics cannot" (Barrett and Leddy, 6). Public Awareness. Film can increase public awareness by bringing light to issues and stories that may have otherwise been unknown or not often thought about. This is the level of impact that has received the most attention, as films are often discussed in terms of their "educational" value.
"A project's ability to raise awareness around a particular issue, since awareness is a critical building block for both individual change and broader social change" (Barrett and Leddy, 6). Public Engagement. Impact, however, need not stop at simply raising public awareness. Engagement "indicates a shift from simply being aware of an issue to acting on this awareness. Were a film and its outreach campaign able to provide an answer to the question 'What can I do?' and more importantly mobilize that individual to act?" (Barrett and Leddy, 7). This is where an associated film campaign becomes increasingly important, as transmedia outlets such as Facebook, websites, blogs, etc. can build off the interest and awareness developed through watching a film and provide outlets for viewers channel their constructive efforts. Social Movement. In addition to impacts on individuals, films can also serve to mobilize groups focused on a particular problem. The filmmaker can create a campaign around the film to promote its goals and/or work with existing groups focused on a particular issue, so that the film can be used as a tool for mobilization and collaboration. "Moving beyond measures of impact as they relate to individual awareness and engagement, we look at the project's impact as it relates to the broader social movement … if a project can strengthen the work of key advocacy organizations that have strong commitment to the issues raised in the film" (Barrett and Leddy, 7). Social Change. The final level of impact and "ultimate goal" of an issue-based film is long-term and systemic social change. "While we understand that realizing social change is often a long and complex process, we do believe it is possible and that for some projects and issues there are key indicators of success" (Barrett and Leddy, 7). This can take the form of policy or legislative change, passed through film-based lobbying efforts, or shifts in public dialogue and behavior. Legislative change typically takes place beyond the social movement stage, when there is enough support to pressure legislators to change or create policy. Film-inspired activism has been seen in issues ranging from environmental causes such as agriculture (Food Inc.) and toxic products (Blue Vinyl) to social causes such as foreign conflict (Invisible Children) and education (Waiting for Superman). Documentary films can also have a strong influence as media agenda-setters, as films provide dramatic "news pegs" for journalists seeking to either sustain or generation new coverage of an issue (Nisbet "Introduction" 5), such as the media coverage of climate change in conjunction with An Inconvenient Truth. Barrett and Leddy, however, note that not all films target all five impacts and that different films may lead to different impacts. "In some cases we could look to key legislative or policy changes that were driven by, or at least supported by the project... In other cases, we can point to shifts in public dialogue and how issues are framed and discussed" (7). It is possible that specific film and/or campaign characteristics may lead to different impacts; this is a nascent area for research and one with great promise for both practical and theoretical utility. Innovations in Tools and Methods Finally, the selection of tools is a vital component for assessing impact and the new media landscape is enabling innovations in the methods and strategies for program evaluation. 
Whereas the traditional domain of film impact measurement included box office statistics, focus groups, and exit surveys, innovations in data collection and analysis have expanded the reach of what questions we can ask and how we are able to answer them. For example, press coverage can assist in understanding and measuring the increase in awareness about an issue post-release. Looking directly at web-traffic changes "enables the creation of an information-seeking curve that can define the parameters of a teachable moment" (Hart and Leiserowitz 360); a minimal sketch of this idea appears after the reference list below. Audience reception can be measured, not only via interviews and focus groups, but also through content and sentiment analysis of web content and online analytics. "Sophisticated analytics can substantially improve decision making, minimize risks, and unearth valuable insights that would otherwise remain hidden" (Manyika et al. 5). These new tools are significantly changing evaluation, expanding what we can learn about the social impacts of film through triangulation of self-report data with measurement of actual behavior in virtual environments. Conclusion The changing media landscape both allows and impels evaluation of film impacts on individual viewers and the broader culture in which they are embedded. Although such analysis may previously have been limited to box office numbers, critics' reviews, and theater exit surveys, the rise of new media provides both the ability to connect filmmakers, activists, and viewers in new ways and the data with which to study the process. This capability, combined with significant growth in the documentary landscape, suggests a great potential for documentary film to contribute to some of our most pressing social and environmental needs. A social scientific approach that combines empirical analysis with theory applied from basic science ensures that impact can be measured and leveraged in a way that is useful for filmmakers as well as funders. In the end, this attention to impact ensures a continued thriving marketplace for issue-based documentary films in our social landscape. References Abrash, Barbara. "Social Issue Documentary: The Evolution of Public Engagement." American University Center for Social Media 21 Apr. 2010. 26 Sep. 2011 ‹http://www.centerforsocialmedia.org/›. Aufderheide, Patricia. "The Changing Documentary Marketplace." Cineaste 30.3 (2005): 24-28. Barnouw, Eric. Documentary: A History of the Non-Fiction Film. New York: Oxford UP, 1993. Barrett, Diana, and Sheila Leddy. "Assessing Creative Media's Social Impact." The Fledgling Fund, Dec. 2008. 15 Sep. 2011 ‹http://www.thefledglingfund.org/media/research.html›. Barsam, Richard M. Nonfiction Film: A Critical History. Bloomington: Indiana UP, 1992. BRITDOC Foundation. The End of the Line: A Social Impact Evaluation. London: Channel 4, 2011. 12 Oct. 2011 ‹http://britdoc.org/news_details/the_social_impact_of_the_end_of_the_line/›. Cieply, Michael. "Uneven Growth for Film Studio with a Message." New York Times 5 Jun. 2011: B1. Ellsworth, Elizabeth. "Emerging Media and Documentary Practice." The New School Graduate Program in International Affairs. Aug. 2008. 22 Sep. 2011 ‹http://www.gpia.info/node/911›. Grierson, John. "First Principles of Documentary (1932)." Imagining Reality: The Faber Book of Documentary. Eds. Kevin Macdonald and Mark Cousins. London: Faber and Faber, 1996. 97-102. Hart, Philip Solomon, and Anthony Leiserowitz.
"Finding the Teachable Moment: An Analysis of Information-Seeking Behavior on Global Warming Related Websites during the Release of The Day After Tomorrow." Environmental Communication: A Journal of Nature and Culture 3.3 (2009): 355-66. Jenkins, Henry. Convergence Culture: Where Old and New Media Collide. New York: New York UP, 2006. ———. "Transmedia Storytelling 101." Confessions of an Aca-Fan. The Official Weblog of Henry Jenkins. 22 Mar. 2007. 10 Oct. 2011 ‹http://www.henryjenkins.org/2007/03/transmedia_storytelling_101.html›. Madaus, George, and Daniel Stufflebeam. "Program Evaluation: A Historical Overview." Evaluation in Education and Human Services 49.1 (2002): 3-18. Manyika, James, Michael Chui, Jacques Bughin, Brad Brown, Richard Dobbs, Charles Roxburgh, and Angela Hung Byers. Big Data: The Next Frontier for Innovation, Competition, and Productivity. McKinsey Global Institute. May 2011 ‹http://www.mckinsey.com/mgi/publications/big_data/›. Muraskin, Lana. Understanding Evaluation: The Way to Better Prevention Programs. Washington: U.S. Department of Education, 1993. 8 Oct. 2011 ‹http://www2.ed.gov/PDFDocs/handbook.pdf›. Nichols, Bill. "Foreword." Documenting the Documentary: Close Readings of Documentary Film and Video. Eds. Barry Keith Grant and Jeannette Sloniowski. Detroit: Wayne State UP, 1997. 11-13. Nisbet, Matthew. "Gasland and Dirty Business: Documentary Films Shape Debate on Energy Policy." Big Think, 9 May 2011. 1 Oct. 2011 ‹http://bigthink.com/ideas/38345›. ———. "Introduction: Understanding the Social Impact of a Documentary Film." Documentaries on a Mission: How Nonprofits Are Making Movies for Public Engagement. Ed. Karen Hirsch, Center for Social Media. Mar. 2007. 10 Sep. 2011 ‹http://aladinrc.wrlc.org/bitstream/1961/4634/1/docs_on_a_mission.pdf›. Nisbet, Matthew, and Patricia Aufderheide. "Documentary Film: Towards a Research Agenda on Forms, Functions, and Impacts." Mass Communication and Society 12.4 (2011): 450-56. Orszag, Peter. Increased Emphasis on Program Evaluation. Washington: Office of Management and Budget. 7 Oct. 2009. 10 Oct. 2011 ‹http://www.whitehouse.gov/sites/default/files/omb/assets/memoranda_2010/m10-01.pdf›. Participant Media. "Our Mission." 2011. 2 Apr. 2011 ‹http://www.participantmedia.com/company/about_us.php.›. Plantinga, Carl. Rhetoric and Representation in Nonfiction Film. Cambridge: Cambridge UP, 1997. Trochim, William, and James Donnelly. Research Methods Knowledge Base. 3rd ed. Mason: Atomic Dogs, 2007. Ubiñas, Luis. "President's Message." 2009 Annual Report. Ford Foundation, Sep. 2010. 10 Oct. 2011 ‹http://www.fordfoundation.org/about-us/2009-annual-report/presidents-message›. Vladica, Florin, and Charles Davis. "Business Innovation and New Media Practices in Documentary Film Production and Distribution: Conceptual Framework and Review of Evidence." The Media as a Driver of the Information Society. Eds. Ed Albarran, Paulo Faustino, and R. Santos. Lisbon, Portugal: Media XXI / Formal, 2009. 299-319. Whiteman, David. "Out of the Theaters and into the Streets: A Coalition Model of the Political Impact of Documentary Film and Video." Political Communication 21.1 (2004): 51-69. ———. "The Evolving Impact of Documentary Film: Sacrifice and the Rise of Issue-Centered Outreach." Post Script 22 Jun. 2007. 10 Sep. 2011 ‹http://www.allbusiness.com/media-telecommunications/movies-sound-recording/5517496-1.html›. Winston, Brian. Claiming the Real: The Documentary Film Revisited. London: British Film Institute, 1995. Working Films. 
"Nonprofits: Working Films." Foundation Source Access 31 May 2011. 5 Oct. 2011 ‹http://access.foundationsource.com/nonprofit/working-films/›.
APA, Harvard, Vancouver, ISO, and other styles
39

Burns, Alex. "Oblique Strategies for Ambient Journalism." M/C Journal 13, no. 2 (April 15, 2010). http://dx.doi.org/10.5204/mcj.230.

Full text
Abstract:
Alfred Hermida recently posited ‘ambient journalism’ as a new framework for para- and professional journalists, who use social networks like Twitter for story sources and as a news delivery platform. Beginning with this framework, this article explores the following questions: How does Hermida define ‘ambient journalism’ and what is its significance? Are there alternative definitions? What lessons do current platforms provide for the design of future, real-time platforms that ‘ambient journalists’ might use? What lessons does the work of Brian Eno provide–the musician and producer who coined the term ‘ambient music’ over three decades ago? My aim here is to formulate an alternative definition of ambient journalism that emphasises craft, skills acquisition, and the mental models of professional journalists, which are, more generally, the foundations of journalism practice. Rather than Hermida’s participatory media context, I emphasise ‘institutional adaptiveness’: how journalists and newsrooms in media institutions rely on craft and skills, and how emerging platforms can augment these foundations, rather than replace them. Hermida’s Ambient Journalism and the Role of Journalists Hermida describes ambient journalism as: “broad, asynchronous, lightweight and always-on communication systems [that] are creating new kinds of interactions around the news, and are enabling citizens to maintain a mental model of news and events around them” (Hermida 2). His ideas appear to have two related aspects. He conceives ambient journalism as an “awareness system” between individuals that functions as a collective intelligence or kind of ‘distributed cognition’ at a group level (Hermida 2, 4-6). Facebook, Twitter and other online social networks are examples. Hermida also suggests that such networks enable non-professionals to engage in ‘communication’ and ‘conversation’ about news and media events (Hermida 2, 7). In a helpful clarification, Hermida observes that ‘para-journalists’ are like the paralegals or non-lawyers who provide administrative support in the legal profession and, in academic debates about journalism, are more commonly known as ‘citizen journalists’. Thus, Hermida’s ambient journalism appears to be: (1) an information systems model of new platforms and networks, and (2) a normative argument that these tools empower ‘para-journalists’ to engage in journalism and real-time commentary. Hermida’s thesis is intriguing and worthy of further discussion and debate. As currently formulated, however, it risks sharing the blind-spots and contradictions of the academic literature that Hermida cites, which suffers from poor theory-building (Burns). A major reason is that the participatory media context on which Hermida often builds his work has different mental models and normative theories from those of the journalists and media institutions that are the targets of his critique. Ambient journalism would be a stronger and more convincing framework if these incorrect assumptions were jettisoned. Others may also potentially misunderstand what Hermida proposes, because the academic debate is often polarised between para-journalists and professional journalists, due to different views about institutions, the politics of knowledge, decision heuristics, journalist training, and normative theoretical traditions (Christians et al. 126; Cole and Harcup 166-176).
In the academic debate, para-journalists or ‘citizen journalists’ may be said to have a communitarian ethic and to desire more autonomous alternatives to journalists, who are framed as uncritical and reliant on official sources, and to media institutions, which are portrayed as surveillance-like ‘monitors’ of society (Christians et al. 124-127). This is, however, only one of a range of possible relationships. Sole reliance on para-journalists could be a premature solution to a more complex media ecology. Journalism craft, which does not rely just on official sources, also has a range of practices that already provide the “more complex ways of understanding and reporting on the subtleties of public communication” sought (Hermida 2). Citizen- and para-journalist accounts may overlook micro-studies of how newsrooms adopt technological innovations and integrate them into newsgathering routines (Hemmingway 196). Thus, an examination of the realities of professional journalism will help to cast a better light on how ambient journalism can shape the mental models of para-journalists, and provide more rigorous analysis of news and similar events. Professional journalism has several core dimensions that para-journalists may overlook. Journalism’s foundation as an experiential craft includes guidance and norms that orient the journalist to information, and that include practitioner ethics. This craft is experiential: it is the basis for journalism’s claim to “social expertise” as a discipline, and it is more like the original Linux and Open Source movements, which evolved through creative conflict (Sennett 9, 25-27, 125-127, 249-251). There are learnable, transmissible skills to contextually evaluate, filter, select and distil the essential insights. This craft-based foundation and these skills inform and structure the journalist’s cognitive witnessing of an event, either directly or via reconstructed, cultivated sources. The journalist publishes through a recognised media institution or online platform, which provides communal validation and verification. There is far more here than the academic portrayal of journalists as ‘gate-watchers’ for a ‘corporatist’ media elite. Craft and skills distinguish the professional journalist from Hermida’s para-journalist. Increasingly, media institutions hire journalists who are trained in other craft-based research methods (Burns and Saunders). Bethany McLean, who ‘broke’ the Enron scandal, was an investment banker; documentary filmmaker Errol Morris first interviewed serial killers for an early project; and Neil Chenoweth used ‘forensic accounting’ techniques to investigate Rupert Murdoch and Kerry Packer. Such expertise allows the journalist to filter information, and to mediate any influences in the external environment, in order to develop an individualised, ‘embodied’ perspective (Hofstadter 234; Thompson; Garfinkel and Rawls). Para-journalists and social network platforms cannot replace this expertise, which is often unique to individual journalists and their research teams. Ambient Journalism and Twitter Current academic debates about how citizen- and para-journalists may augment or even replace professional journalists can often turn into legitimation battles over whether the ‘de facto’ solution is a social media network rather than a media institution. For example, Hermida discusses Twitter, a micro-blogging platform that allows users to post 140-character messages: small, discrete information chunks suited to short-term and episodic memory.
Twitter enables users to monitor other users, to group messages, and to search for terms specified by a hashtag. Twitter thus illustrates how social media platforms can make data more transparent and explicit to non-specialists like para-journalists. In fact, Twitter is suitable for five different categories of real-time information: news, pre-news, rumours, the formation of social media and subject-based networks, and “molecular search” using granular data-mining tools (Leinweber 204-205). In this model, the para-journalist acts as a navigator and “way-finder” to new information (Morville, Findability). Jaron Lanier, an early designer of ‘virtual reality’ systems, is perhaps the most vocal critic of relying on groups of non-experts and tools like Twitter, instead of individuals who have professional expertise. For Lanier, what underlies debates about citizen- and para-journalists is a philosophy of “cybernetic totalism” and “digital Maoism”, which exalts the Internet collective at the expense of truly individual views. He is deeply critical of Hermida’s chosen platform, Twitter: “A design that shares Twitter’s feature of providing ambient continuous contact between people could perhaps drop Twitter’s adoration of fragments. We don’t really know, because it is an unexplored design space” [emphasis added] (Lanier 24). In part, Lanier’s objection is traceable back to an unresolved debate on human factors and design in information science. Influenced by the post-war research into cybernetics, J.C.R. Licklider proposed a cyborg-like model of “man-machine symbiosis” between computers and humans (Licklider). In turn, Licklider’s framework influenced Douglas Engelbart, who shaped the growth of human-computer interaction and the design of computer interfaces, the mouse, and other tools (Engelbart). In taking a system-level view of platforms, Hermida builds on the strength of Licklider’s and Engelbart’s work. Yet because he focuses on para-journalists, and does not appear to include the craft- and skills-based expertise of professional journalists, it is unclear how he would answer Lanier’s doubt that reliance on groups for news and other information could be superior to individual expertise and judgment. Hermida’s two case studies point to this unresolved problem. Both cases appear to show how Twitter provides quicker and better forms of news and information, thereby enabling para-journalists to engage more effectively in journalism and real-time commentary. However, alternative explanations may exist that raise questions about Twitter as a new platform, and thus these cases might actually reveal circumstances in which ambient journalism may fail. Hermida alludes to how para-journalists now fulfil the earlier role of ‘first responders’ and stringers, in providing the “immediate dissemination” of non-official information about disasters and emergencies (Hermida 1-2; Haddow and Haddow 117-118). Whilst important, this is really a specific role. In fact, disaster and emergency reporting occurs within well-established practices, professional ethics, and institutional routines that may involve journalists, government officials, and professional communication experts (Moeller). Officials and emergency management planners are concerned when citizen- or para-journalism is equated with the craft and skills of professional journalism.
The experience of these officials and planners during Hurricane Katrina in the United States in 2005, and during the Black Saturday bushfires in Australia in 2009, suggests that whilst para-journalists might be ‘first responders’ in a decentralised, complex crisis, they are perceived to spread rumours and potential social unrest when people need reliable information (Haddow and Haddow 39). These terms of engagement between officials, planners and para-journalists are still to be resolved. Hermida readily acknowledges that Twitter and other social network platforms are vulnerable to rumours (Hermida 3-4; Sunstein). However, his other case study, Iran’s 2009 election crisis, further complicates the vision of ambient journalism, and of always-on communication systems in particular. Hermida discusses several events during the crisis: the US State Department’s request to halt a server upgrade; how the Basij’s shooting of bystander Neda Soltan was captured on a mobile phone camera and spread across social network platforms; and the high velocity of ‘tweets’ or messages during the first two weeks of Iran’s electoral uncertainty (Hermida 1). The US State Department was interested in how Twitter could be used for non-official sources and to inform people who were monitoring the election events. Twitter’s perceived ‘success’ during Iran’s 2009 election now looks rather different when other factors are considered, such as: the dynamics and patterns of Tehran street protests; Iran’s clerics who used Soltan’s death as propaganda; claims that Iran’s intelligence services used Twitter to track down and to kill protestors; the ‘black box’ case of what the US State Department and others actually did during the crisis; the history of neo-conservative interest in a Twitter-like platform for strategic information operations; and the Iranian diaspora’s incitement of Tehran student protests via satellite broadcasts. Iran’s 2009 election crisis has important lessons for ambient journalism: always-on communication systems may create noise and spread rumours; ‘mirror-imaging’ of mental models may occur when other participants have very different worldviews and ‘contexts of use’ for social network platforms; and the new kinds of interaction may not lead to effective intervention in crisis events. Hermida’s combination of news and non-news fragments is the perfect environment for psychological operations and strategic information warfare (Burns and Eltham). Lessons of Current Platforms for Ambient Journalism We have discussed some unresolved problems for ambient journalism as a framework for journalists, and as mental models for news and similar events. Hermida’s goal of an “awareness system” faces a further challenge: the phenomenological limitations of human consciousness in dealing with information complexity and ambiguous situations, whether by becoming ‘entangled’ in abstract information or by developing new, unexpected uses for emergent technologies (Thackara; Thompson; Hofstadter 101-102, 186; Morville, Findability, 55, 57, 158). The recursive and reflective capacities of human consciousness impose their own epistemological frames. It is still unclear how Licklider’s human-computer interaction will shape consciousness, but Douglas Hofstadter’s art and video-based group experiments may be suggestive. Hofstadter observes: “the interpenetration of our worlds becomes so great that our worldviews start to fuse” (266).
Current research into user experience and information design provides some validation of Hofstadter’s observation, such as how Google is now the ‘default’ search engine, and how its interface design shapes the user’s subjective experience of online search (Morville, Findability; Morville, Search Patterns). Several models of Hermida’s awareness system already exist that build on Hofstadter’s insight. Within the information systems field, on-going research into artificial intelligence–‘expert systems’ that can model expertise as algorithms and decision rules, genetic algorithms, and evolutionary computation–has attempted to achieve Hermida’s goal. What these systems share are mental models of cognition, learning and adaptiveness to new information, often with forecasting and prediction capabilities. Such systems work in journalism areas such as finance and sports that involve analytics, data-mining and statistics, and in related fields such as health informatics, where there are clear, explicit guidelines on information and international standards. After a mid-1980s investment bubble (Leinweber 183-184), these systems now underpin the technology platforms of global finance and news intermediaries. Bloomberg LP’s ubiquitous dual-screen computers, proprietary network and data analytics (www.bloomberg.com), and its competitors such as Thomson Reuters (www.thomsonreuters.com and www.reuters.com), illustrate how financial analysts and traders rely on an “awareness system” to navigate global stock-markets (Clifford and Creswell). For example, a Bloomberg subscriber can access real-time analytics from exchanges, markets, and from data vendors such as Dow Jones, NYSE Euronext and Thomson Reuters. They can use portfolio management tools to evaluate market information, to make allocation and trading decisions, to monitor ‘breaking’ news, and to integrate this information. Twitter is perhaps the para-journalist equivalent of how professional journalists and finance analysts rely on Bloomberg’s platform for real-time market and business information. Already, hedge funds like PhaseCapital are data-mining Twitter’s ‘tweets’ or messages for rumours and shifts in stock-market sentiment, and analysing potential trading patterns (Pritchett and Palmer); a toy sketch of this kind of hashtag grouping and sentiment tallying appears after the reference list below. The US-based Securities and Exchange Commission, and researchers like David Gelernter and Paul Tetlock, have also shown the benefits of applied data-mining for regulatory market supervision, in particular to uncover analysts who provide ‘whisper numbers’ to online message boards, and who have access to material, non-public information (Leinweber 60, 136, 144-145, 208, 219, 241-246). Hermida’s framework might be developed further for such regulatory supervision. Hermida’s awareness system may also benefit from the algorithms found in the high-frequency trading (HFT) systems that Citadel Group, Goldman Sachs, Renaissance Technologies, and other quantitative financial institutions use. Rather than human traders, HFT uses co-located servers and complex algorithms to make high-volume trades on stock-markets, taking advantage of microsecond changes in prices (Duhigg). HFT capabilities are shrouded in secrecy, and became the focus of regulatory attention after several high-profile investigations of traders alleged to have stolen the software code (Bray and Bunge).
One public example is Streambase (www.streambase.com), a ‘complex event processing’ (CEP) platform that can be used in HFT, commercialised from the Project Aurora research collaboration between Brandeis University, Brown University, and Massachusetts Institute of Technology. CEP and HFT may be the ‘killer apps’ of Hermida’s awareness system. Alternatively, they may confirm Jaron Lanier’s worst fears: your data-stream and user-generated content can be harvested by others–for their gain, and your loss! Conclusion: Brian Eno and Redefining ‘Ambient Journalism’ On the basis of the above discussion, I suggest a modified definition of Hermida’s thesis: ‘Ambient journalism’ is an emerging analytical framework for journalists, informed by cognitive, cybernetic, and information systems research. It ‘sensitises’ the individual journalist, whether professional or ‘para-professional’, to observe and to evaluate their immediate context. In doing so, ‘ambient journalism’, like journalism generally, emphasises ‘novel’ information. It can also inform the design of real-time platforms for journalistic sources and news delivery. Individual ‘ambient journalists’ can learn much from the career of musician and producer Brian Eno. His personal definition of ‘ambient’ is “an atmosphere, or a surrounding influence: a tint”, one that relies on the co-evolution of the musician, creative horizons, and studio technology as a tool, just as para-journalists use Twitter as a platform (Sheppard 278; Eno 293-297). Like para-journalists, Eno claims to be a “self-educated but largely untrained” musician and yet also a craft-based producer (McFadzean; Tamm 44-50, 177). Perhaps Eno would frame the distinction between para-journalist and professional journalist as “axis thinking” (Eno 298, 302), which is needlessly polarised due to different normative theories, stances, and practices. Furthermore, I would argue that Eno’s worldview was shaped by influences similar to those that shaped Licklider and Engelbart, who appear to have informed Hermida’s assumptions. These influences include the mathematician and game theorist John von Neumann and biologist Richard Dawkins (Eno 162); musicians Erik Satie and John Cage and his book Silence (Eno 19-22, 162; Sheppard 22, 36, 378-379); and the field of self-organising systems, in particular cyberneticist Stafford Beer (Eno 245; Tamm 86; Sheppard 224). Eno summed up the central lesson of this theoretical corpus during his collaborations with New York’s ‘No Wave’ scene in 1978 as one of “people experimenting with their lives” (Eno 253; Reynolds 146-147; Sheppard 290-295). Importantly, he developed a personal view of normative theories through practice-based research, on a range of projects, and with different creative and collaborative teams. Rather than a technological solution, Eno settled on a way to encode his craft and skills into a quasi-experimental, transmittable method—an aim of practitioner development in professional journalism. Even if only a “founding myth”, the story of Eno’s 1975 street accident with a taxi, and how he conceived ‘ambient music’ during his hospital stay, illustrates how ambient journalists might perceive something new in specific circumstances (Tamm 131; Sheppard 186-188).
More tellingly, this background informed his collaboration with the late painter Peter Schmidt, to co-create the Oblique Strategies deck of aphorisms: aleatory, oracular messages that appeared dependent on chance, luck, and randomness, but that in fact were based on Eno and Schmidt’s creative philosophy and work guidelines (Tamm 77-78; Sheppard 178-179; Reynolds 170). In short, Eno was engaging with the kind of reflective practices that underpin exemplary professional journalism. He was able to encode this craft and these skills into a quasi-experimental method, rather than a technological solution. Journalists and practitioners who adopt Hermida’s framework could learn much from the published accounts of Eno’s practice-based research, in the context of creative projects and collaborative teams. In particular, these detail the contexts and choices of Eno’s early ambient music recordings (Sheppard 199-200); Eno’s duels with David Bowie during ‘Sense of Doubt’ for the Heroes album (Tamm 158; Sheppard 254-255); troubled collaborations with Talking Heads and David Byrne (Reynolds 165-170; Sheppard 338-347, 353); a curatorial, mentor role on U2’s The Unforgettable Fire (Sheppard 368-369); the ‘grand, stadium scale’ experiments of U2’s 1991-93 ZooTV tour (Sheppard 404); the Zorn-like games of Bowie’s Outside album (Eno 382-389); and the ‘generative’ artwork 77 Million Paintings (Eno 330-332, 435; Tamm 133-135; Sheppard 278-279). Eno is clearly a highly flexible maker and producer. Developing such flexibility would ensure ambient journalism remains open to novelty as an analytical framework that may enhance the practitioner development and work of professional journalists and para-journalists alike. Acknowledgments The author thanks editor Luke Jaaniste, Alfred Hermida, and the two blind peer reviewers for their constructive feedback and reflective insights. References Bray, Chad, and Jacob Bunge. “Ex-Goldman Programmer Indicted for Trade Secrets Theft.” The Wall Street Journal 12 Feb. 2010. 17 March 2010 ‹http://online.wsj.com/article/SB10001424052748703382904575059660427173510.html›. Burns, Alex. “Select Issues with New Media Theories of Citizen Journalism.” M/C Journal 11.1 (2008). 17 March 2010 ‹http://journal.media-culture.org.au/index.php/mcjournal/article/view/30›. ———, and Barry Saunders. “Journalists as Investigators and ‘Quality Media’ Reputation.” Record of the Communications Policy and Research Forum 2009. Eds. Franco Papandrea and Mark Armstrong. Sydney: Network Insight Institute, 281-297. 17 March 2010 ‹http://eprints.vu.edu.au/15229/1/CPRF09BurnsSaunders.pdf›. ———, and Ben Eltham. “Twitter Free Iran: An Evaluation of Twitter’s Role in Public Diplomacy and Information Operations in Iran’s 2009 Election Crisis.” Record of the Communications Policy and Research Forum 2009. Eds. Franco Papandrea and Mark Armstrong. Sydney: Network Insight Institute, 298-310. 17 March 2010 ‹http://eprints.vu.edu.au/15230/1/CPRF09BurnsEltham.pdf›. Christians, Clifford G., Theodore Glasser, Denis McQuail, Kaarle Nordenstreng, and Robert A. White. Normative Theories of the Media: Journalism in Democratic Societies. Champaign, IL: University of Illinois Press, 2009. Clifford, Stephanie, and Julie Creswell. “At Bloomberg, Modest Strategy to Rule the World.” The New York Times 14 Nov. 2009. 17 March 2010 ‹http://www.nytimes.com/2009/11/15/business/media/15bloom.html?ref=business&pagewanted=all›. Cole, Peter, and Tony Harcup. Newspaper Journalism. Thousand Oaks, CA: Sage Publications, 2010. Duhigg, Charles.
“Stock Traders Find Speed Pays, in Milliseconds.” The New York Times 23 July 2009. 17 March 2010 ‹http://www.nytimes.com/2009/07/24/business/24trading.html?_r=2&ref=business›. Engelbart, Douglas. “Augmenting Human Intellect: A Conceptual Framework, 1962.” Ed. Neil Spiller. Cyber Reader: Critical Writings for the Digital Era. London: Phaidon Press, 2002. 60-67. Eno, Brian. A Year with Swollen Appendices. London: Faber and Faber, 1996. Garfinkel, Harold, and Anne Warfield Rawls. Toward a Sociological Theory of Information. Boulder, CO: Paradigm Publishers, 2008. Haddow, George D., and Kim S. Haddow. Disaster Communications in a Changing Media World. Burlington, MA: Butterworth-Heinemann, 2009. Hemmingway, Emma. Into the Newsroom: Exploring the Digital Production of Regional Television News. Milton Park: Routledge, 2008. Hermida, Alfred. “Twittering the News: The Emergence of Ambient Journalism.” Journalism Practice 4.3 (2010): 1-12. Hofstadter, Douglas. I Am a Strange Loop. New York: Perseus Books, 2007. Lanier, Jaron. You Are Not a Gadget: A Manifesto. London: Allen Lane, 2010. Leinweber, David. Nerds on Wall Street: Math, Machines and Wired Markets. Hoboken, NJ: John Wiley and Sons, 2009. Licklider, J.C.R. “Man-Machine Symbiosis, 1960.” Ed. Neil Spiller. Cyber Reader: Critical Writings for the Digital Era. London: Phaidon Press, 2002. 52-59. McFadzean, Elspeth. “What Can We Learn from Creative People? The Story of Brian Eno.” Management Decision 38.1 (2000): 51-56. Moeller, Susan. Compassion Fatigue: How the Media Sell Disease, Famine, War and Death. New York: Routledge, 1998. Morville, Peter. Ambient Findability. Sebastopol, CA: O’Reilly Press, 2005. ———. Search Patterns. Sebastopol, CA: O’Reilly Press, 2010. Pritchett, Eric, and Mark Palmer. “Following the Tweet Trail.” CNBC 11 July 2009. 17 March 2010 ‹http://www.casttv.com/ext/ug0p08›. Reynolds, Simon. Rip It Up and Start Again: Postpunk 1978-1984. London: Penguin Books, 2006. Sennett, Richard. The Craftsman. London: Penguin Books, 2008. Sheppard, David. On Some Faraway Beach: The Life and Times of Brian Eno. London: Orion Books, 2008. Sunstein, Cass. On Rumours: How Falsehoods Spread, Why We Believe Them, What Can Be Done. New York: Farrar, Straus and Giroux, 2009. Tamm, Eric. Brian Eno: His Music and the Vertical Colour of Sound. New York: Da Capo Press, 1995. Thackara, John. In the Bubble: Designing in a Complex World. Boston, MA: The MIT Press, 1995. Thompson, Evan. Mind in Life: Biology, Phenomenology, and the Science of Mind. Boston, MA: Belknap Press, 2007.
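The hashtag search and tweet data-mining mentioned above can be sketched in a few lines of code. The following Python fragment is a toy illustration under stated assumptions: it operates on a local list of message strings rather than a live Twitter feed, and the tiny sentiment lexicon and sample messages are invented for the example.

    import re
    from collections import defaultdict

    POSITIVE = {"gain", "rally", "up", "beat"}                 # toy lexicon (assumed)
    NEGATIVE = {"loss", "losses", "fall", "down", "rumour"}    # toy lexicon (assumed)

    def group_by_hashtag(messages):
        """Map each #hashtag to the list of messages mentioning it."""
        groups = defaultdict(list)
        for msg in messages:
            for tag in re.findall(r"#(\w+)", msg.lower()):
                groups[tag].append(msg)
        return groups

    def sentiment_score(msg):
        """Positive minus negative word count: a deliberately crude signal."""
        words = set(re.findall(r"[a-z']+", msg.lower()))
        return len(words & POSITIVE) - len(words & NEGATIVE)

    messages = [
        "Shares beat forecasts, big rally today #acme",
        "Hearing a rumour of losses at #acme, could fall hard",
        "#acme down 3% at the open",
    ]
    for tag, msgs in group_by_hashtag(messages).items():
        net = sum(sentiment_score(m) for m in msgs)
        print(tag, "messages:", len(msgs), "net sentiment:", net)

A production system of the kind Leinweber describes would replace the toy lexicon with trained models and stream messages continuously, but the basic group-then-score structure is the same.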
APA, Harvard, Vancouver, ISO, and other styles
40

Khairunnisa, Khairunnisa, Yusya Abubakar, and Didik Sugianto. "Do Disaster Literacy and Mitigation Policy Affect Residents Resettling in Tsunami Prone Areas? Study from the City of Banda Aceh, Indonesia." Forum Geografi 35, no. 1 (July 10, 2021). http://dx.doi.org/10.23917/forgeo.v35i1.11510.

Full text
Abstract:
Akbar, A., Ma'rif, S. (2014). Arah Perkembangan Kawasan Perumahan Pasca Bencana Tsunami di Kota Banda Aceh. Teknik PWK (Perencanaan Wilayah Kota), 3(2), 274-284. Bandrova, T., Zlatanova, S., Konecny, M. (2012). Three-dimensional maps for disaster management. In ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Volume I-2, XXII ISPRS Congress, August-September 2012, pp. 19-24. International Society for Photogrammetry and Remote Sensing. BNPB. (2012). Menuju Indonesia Tangguh Tsunami. Jakarta: Badan Nasional Penanggulangan Bencana. BNPB. (2016). Kebijakan dan Strategi Penanggulangan Bencana 2015-2019 (Jakstra PB). BPBA. (2015). Kajian Risiko Bencana Aceh 2016-2020. BPBD. (2017). Rencana Pengurangan Bencana. Banda Aceh. BRR. (2005). Program Blueprint Aceh. Carreño, M. L., Cardona, O. D., Barbat, A. H. (2007). A Disaster Risk Management Performance Index. Natural Hazards, 41(1), 1-20. Danugroho, A., Umamah, N., Pratama, A. R. (2020). Aceh Tsunami and Government Policy in Handling It: A Historical Study. In IOP Conference Series: Earth and Environmental Science (Vol. 485, No. 1, p. 012140). IOP Publishing. Febriana, D. S., Abubakar, Y. (2015). Kesiapsiagaan Masyarakat Desa Siaga Bencana dalam Menghadapi Bencana Gempa Bumi di Kecamatan Meuraxa Kota Banda Aceh. Jurnal Ilmu Kebencanaan: Program Pascasarjana Unsyiah, 2(3). Gadeng, A. N., Furqan, M. H. (2019). The Development of Settlement in the Tsunami Red Zone Area of Banda Aceh City. KnE Social Sciences, 1-13. Godschalk, D., Bohl, C. C., Beatley, T., Berke, P., Brower, D., Kaiser, E. J. (1999). Natural Hazard Mitigation: Recasting Disaster Policy and Planning. Island Press. Goltz, J., Yamori, K. (2020). Tsunami Preparedness and Mitigation Strategies. In Oxford Research Encyclopedia of Natural Hazard Science. Herrmann, G. (2013). Regulation of Coastal Zones and Natural Disasters: Mitigating the Impact of Tsunamis in Chile Through Urban and Regional Planning. Issues in Legal Scholarship, 11(1), 29-44. Jain, Garima, Singh, Chandni, and Malani, T. (2017). Rethinking Post-disaster Relocation in Urban India. International Institute for Environment and Development. Kafle, S. K. (2006). Rapid Disaster Risk Assessment of Coastal Communities: A Case Study of Mutiara Village, Banda Aceh, Indonesia. In Proceedings of the International Conference on Environment and Disaster Management Held in Jakarta, Indonesia on December (pp. 5-8). Mardiatno, D., Malawani, M. N., Annisa, D. N., Wacano, D. (2017). Review on Tsunami Risk Reduction in Indonesia Based on Coastal and Settlement Typology. The Indonesian Journal of Geography, 49(2), 186-197. Marlyono, S. G. (2017). Peranan Literasi Informasi Bencana terhadap Kesiapsiagaan Bencana Masyarakat Jawa Barat. Jurnal Geografi Gea, 16(2), 116-123. Oktari, R. S., Nugroho, A., Fahmi, M., Suppasri, A., Munadi, K., Amra, R. (2021). Fifteen years of the 2004 Indian Ocean Tsunami in Aceh-Indonesia: Mitigation, preparedness and challenges for a long-term disaster recovery process. International Journal of Disaster Risk Reduction, 54, 102052. Peacock, W. G. and H. R. (2011). The Adoption and Implementation of Hazard Mitigation Policies and Strategies by Coastal Jurisdictions in Texas: The Planning Survey Results. Retrieved from http/TheAdoptionandImplementationofHazardMitigationPoliciesandStrategiesbyCoastalJurisdictionsinTexasDec2011.pdf Pemerintah Kota Banda Aceh. (2009). Rencana Tata Ruang dan Wilayah (RTRW) Kota Banda Aceh 2009-2029. Priyowidodo, G., Luik, J. E. (2013).
Literasi mitigasi bencana tsunami untuk masyarakat pesisir di Kabupaten Pacitan Jawa Timur. Ekotrans, 13(1), 47-61. PU, K. (2015). Rancangan Pembangunan Infrastruktur dan Inventaris Jangka Menengah (RPI-2JM) Bidang Cipta Karya 2015-2019. Sambah, A. B., Miura, F. (2019). Geo Spatial Analysis for Tsunami Risk Mapping. In Advanced Remote Sensing Technology for Synthetic Aperture Radar Applications, Tsunami Disasters, and Infrastructure. IntechOpen. Schwab, A. K., Sandler, D., Brower, D. J. (2016). Hazard Mitigation and Preparedness: An Introductory Text for Emergency Management and Planning Professionals. CRC Press. Shigenobu, T., Istiyanto, D., Kuribayashi, D. (2009). Sustainable Tsunami Risk Reduction and Utilization of Tsunami Hazard Map (THM). Strunz, G., Post, J., Zosseder, K., Wegscheider, S., Mück, M., Riedlinger, T., ... Muhari, A. (2011). Tsunami Risk Assessment in Indonesia. Natural Hazards and Earth System Sciences, 11(1), 67-82. Sugiyono. (2015). Metode penelitian pendidikan: (pendekatan kuantitatif, kualitatif dan R & D). Bandung: Alfabeta. Sunarto, S., Marfai, M. A. (2012). Potensi Bencana Tsunami dan Kesiapsiagaan Masyarakat Menghadapi Bencana Studi Kasus Desa Sumberagung Banyuwangi Jawa Timur. In Forum Geografi (Vol. 26, No. 1, pp. 17-28). Syamsidik, Nugroho, A., Suryani, R., Fahmi, M. (2019). Aceh Pasca 15 Tahun Bencana Tsunami: Kilas Balik dan Proses Pemulihan. Banda Aceh: Tsunami and Disaster Mitigation Research Center (TDMRC). Torani, S., Majd, P. M., Maroufi, S. S., Dowlati, M., Sheikhi, R. A. (2019). The Importance of Education on Disasters and Emergencies: A Review Article. Journal of Education and Health Promotion, 8. Triatmadja, R. (2011). Tsunami: Kejadian, Penjalaran, Daya Rusak, dan Mitigasinya. Gadjah Mada University Press. Widianto, A., Damen, M. (2014). Determination of Coastal Belt in the Disaster Prone Area: A Case Study in the Coastal Area of Bantul Regency, Yogyakarta, Indonesia. The Indonesian Journal of Geography, 46(2), 125.
APA, Harvard, Vancouver, ISO, and other styles
41

Van Toan, Dinh. "Development of Enterprises in Universities and Policy Implications for University Governance Reform in Vietnam." VNU Journal of Science: Economics and Business 35, no. 1 (March 22, 2019). http://dx.doi.org/10.25073/2588-1108/vnueab.4201.

Full text
Abstract:
The article focuses on analyzing the content of, and the relationship between, the development of enterprises, enterprise-university models, and governance in higher education institutions, thereby providing policy recommendations on innovation in university governance in Vietnam. In the article, documents from internationally published research, as well as arguments on the mentioned subjects, are analyzed and synthesized. Results of surveys and analyses of the status of universities in Vietnam presented in the article also provide a detailed picture of the difficulties and issues in enterprise development and in the transition to the enterprise-university model. On this basis, the article provides recommendations for universities, and on the issues that require government intervention through supportive policies and mechanisms, to accelerate the process of university governance reform in the current period of the 4.0 revolution in university education. Keywords Higher education institutions, Developing enterprise in universities, University-enterprise model, University governance References [1] Trần Anh Tài, Trịnh Ngọc Thạch, Mô hình đại học doanh nghiệp: Kinh nghiệm quốc tế và gợi ý cho Việt Nam, Tái bản lần thứ nhất, NXB Khoa học Xã hội, 2003. [2] Yokoyama K., Entrepreneurialism in Japanese and UK Universities: Governance, Management, Leadership and Funding. High Educ (2006) 52: 523. https://doi.org/10.1007/s10734-005-1168-2. [3] Dinh Van Toan, University - Enterprise Cooperation in International Context and Implications for Vietnam, Vietnam Economic Review No. 7 (275), 2017. [4] Dinh Van Toan, Hoang Van Hai, Nguyen Phuong Mai, The Role of Entrepreneurship Development in Universities to Promote Knowledge Sharing: The Case of Vietnam National University, Hanoi, Kỷ yếu tại hội thảo quốc tế: "Asia Pacific Conference on Information Management 2016: Common Platform to A Sustainable Society In The Dynamic Asia Pacific", Hanoi, 2016. [5] Wennekers S. & Thurik R., Linking Entrepreneurship and Economic Growth, Small Business Economics (1999) 13: 27. https://doi.org/10.1023/A:1008063200484. [6] Clark, B. R., Creating Entrepreneurial Universities: Organizational Pathways of Transformation, Oxford: IAU Press and Pergamon, 1998. [7] Etzkowitz H., MIT and The Rise of Entrepreneurial Science, Routledge, New York, 2002. https://doi.org/10.4324/9780203216675. [8] Geiger R. L., Knowledge and Money: Research Universities and The Paradox of The Marketplace, Stanford University Press, 2004. [9] Slaughter, S., Leslie, L., Academic Capitalism: Politics, Policies and The Entrepreneurial University, Johns Hopkins University Press, Baltimore, 1997. [10] Slaughter, S., Rhoades G., Academic Capitalism and The New Economy: Markets, State and Higher Education, Johns Hopkins University Press, Baltimore, 2004. [11] Washburn, J., University Inc: The Corporate Corruption of Higher Education, Stanford University Press, 2005. [12] Han J. and Heshmati A., Determinants of Financial Rewards from Industry-University Collaboration in South Korea, IZA Discussion Paper No. 7695, 2013. [13] Trần Anh Tài, Liên kết nhà trường và doanh nghiệp trong hoạt động đào tạo và nghiên cứu khoa học - kinh nghiệm quốc tế và gợi ý cho Việt Nam, Đề tài cấp ĐHQG, 2009-2010, 2010. [14] Yusof M., Jain K. K., Categories of University-level entrepreneurship: a literature survey, Int. Entrep. Manag. J (2010) 6: 81-96.
DOI 10.1007/s11365-007-0072-x. [15] Dinh Van Toan, Promoting university startups’ development: International experiences and policy recommendations for Vietnam, Vietnam’s Socio-Economic Development, Vol. 22, No. 90, 7/2017, pp. 19-42. [16] Rothaermel F.T., Agung S.D. and Jiang L., University entrepreneurship: a taxonomy of the literature, Industrial and Corporate Change, Volume 16, Number 4, Oxford University Press, 2007, pp. 691-791. [17] Bercovitz J. & Feldman M., Entrepreneurial Universities and Technology Transfer: A Conceptual Framework for Understanding Knowledge Based Economic Development, The Journal of Technology Transfer (2006) 31: 175. https://doi.org/10.1007/s10961-005-5029-z. [18] Bercovitz, J., Feldman, M., Feller, I. et al., Organizational Structure as a Determinant of Academic Patent and Licensing Behavior: An Exploratory Study of Duke, Johns Hopkins, and Pennsylvania State Universities, The Journal of Technology Transfer (2001) 26: 21. https://doi.org/10.1023/A:1007828026904. [19] Feldman, M., Bercovitz, J., Burton, R., Equity and The Technology Strategies of American Research Universities, Management Science, 48(1), 2002, 105-121. [20] Owen-Smith, J., Trends and transitions in the institutional environment for public and private science, Higher Education, 49, 2005, 91-117. [21] Owen-Smith J., Powell W. W., The Expanding Role of University Patenting in the Life Sciences: Assessing The Importance of Experience and Connectivity, Research Policy, 32(9), 2003, 1695-1711. [22] Colyvas J.A., Powell W.W., From Vulnerable to Venerated: The Institutionalization of Academic Entrepreneurship in The Life Science, in Martin Ruef, Michael Lounsbury (eds.), The Sociology of Entrepreneurship (Research in the Sociology of Organizations, Volume 25), Emerald Group Publishing Limited, 2007, pp. 219-259. [23] Luthje C., Franke N., Fostering entrepreneurship through university education and training: Lessons from Massachusetts Institute of Technology, European Academy of Management, 2nd Annual Conference on Innovative Research in Management, Stockholm, 2002. [24] Trần Anh Tài, Liên kết nhà trường và doanh nghiệp trong hoạt động đào tạo và nghiên cứu khoa học - kinh nghiệm quốc tế và gợi ý cho Việt Nam, Đề tài cấp ĐHQG, 2009-2010. [25] G. Dalmarco, W. Hulsink, Creating entrepreneurial university in an emerging country: Evidence from Brazil, Technological Forecasting and Social Change, 2018. DOI: 10.1016/j.techfore.2018.04.015. [26] Đinh Văn Toàn, Phát triển doanh nghiệp trong đại học: Kinh nghiệm trên thế giới và gợi ý chính sách cho Việt Nam, Tạp chí Kinh tế và dự báo, số 33, 12/2018, tr. 58-60. [27] Nguyễn Hữu Đức, Nguyễn Hữu Thành Chung, Nghiêm Xuân Huy, Mai Thị Quỳnh Lan, Trần Thị Bích Liễu, Hà Quang Thụy, Nguyễn Lộc, Tiếp cận giáo dục đại học 4.0 - Các đặc trưng và tiêu chí đánh giá, Tạp chí Khoa học ĐHQGHN: Nghiên cứu chính sách và quản lý, Vol. 34, số 4, 2018.
APA, Harvard, Vancouver, ISO, and other styles
42

Mesquita, Afrânio Rubens de. "Prefácio." Revista Brasileira de Geofísica 31, no. 5 (December 1, 2013). http://dx.doi.org/10.22564/rbgf.v31i5.392.

Full text
Abstract:
PREFACE The articles of this supplement resulted from the 5th International Congress of the Brazilian Geophysical Society, held in São Paulo city, Brazil, at the Convention Center of the Transamérica Hotel, from 28th September to 2nd October 1997. The participants of the Round Table Discussions on “Mean Sea Level Changes Along the Brazilian Coast” were Dr. Denizar Blitzkow, Polytechnic School of the University of São Paulo (POLI-USP), Prof. Dr. Waldenir Veronese Furtado, Institute of Oceanography (IO-USP), Dr. Joseph Harari (IO-USP), Dr. Roberto Teixeira from the Brazilian Institute of Geography and Statistics (IBGE), and the invited coordinator Prof. Dr. Afrânio Rubens de Mesquita (IO-USP). Soon after the first presentation, by the IBGE representative, on the efforts of his Institute regarding sea level matters, it became clear that, apart from an M.Sc. thesis by Mesquita (1968) and the contributions of Johannenssen (1967), Mesquita et al. (1986) and Mesquita et al. (1994), little was known by the participants about the history of the primordial sea level measurements along the Brazilian coast, one of the objectives of the meeting. So, following the strong recommendations of the Table participants, a short review of the early Brazilian sea level measurements was planned, as a much-needed general historical account of the topic. For this purpose, several researchers long involved with the national sea level measurements, such as Commander Frederico Corner Bentes of the Directorate of Hydrography and Navigation (DHN) of the Brazilian Navy, Ms. Maria Helena Severo (DHN) and Eng. Jose Antonio dos Santos of the National Institute of Ports and Rivers (INPH), were asked to present their views. Promptly, they all provided useful information on the ports and on present difficulties with the Brazilian Law relative to the “Terrenos de Marinha” (sea/land limits). Admiral Max Justo Guedes of the General Documentation Service (SDG) of the Brazilian Navy gave an account of the first “Roteiros” – safe ways to approach the cities (ports) of that time by the sea – written by the Portuguese navigators in the XVI century on the newly found land of “Terra de Santa Cruz”, Brazil’s first given name. Admiral Dr. Alberto Dos Santos Franco (IO-USP/DHN) gave information on the first works on sea level analysis published by National Observatory (ON) scientists, Belford Vieira (1928) and Lemos (1928). In a visit to ON, which belongs to the National Council of Scientific and Technological Research (CNPq), and after a thorough discussion on sea level matters in Brazil, Dr. Luiz Muniz Barreto showed the Library Museum, where the Tide Predictor machine, purchased from England in the beginning of the XX century, is well kept and preserved. Afterwards, Dr. Mauro de Andrade Sousa of ON sent a photograph (Fig. 1) of the Kelvin machine (the same Kelvin of the absolute temperature scale), a tide predictor first used in the country by ON to produce Tide Tables. From 1964 until now, the astronomical prediction of tides (Tide Tables) for most of the Brazilian ports has been produced using computer software and published by the DHN. Before the 5th International Congress of Geophysics, the Global Sea Level Observing System (GLOSS), a program of the Intergovernmental Oceanographic Commission (IOC) of UNESCO, had already offered a Training Course on sea level matters in 1993 at IO-USP (IOC, 1999) and, six years later, a Training Workshop was also given at IO-USP in 1999 (IOC, 2000).
Several participants from the Portuguese- and Spanish-speaking countries of the Americas and Africa (Argentina, Brazil, Chile, Mozambique, Uruguay, Peru, São Tome and Principe and Venezuela) were invited to take part in the Course and Workshop, under the auspices of the IOC. During the Training Course of 1993, Dr. David Pugh, Director of GLOSS, proposed to publish a newsletter on sea level matters as a forum for the countries involved. The newsletter, after the approval of the IOC Chairman at the time, Dr. Albert Tolkachev, ended up as the Afro America GLOSS News (AAGN). It had its first edition published by IO-USP and was paper-printed up to its 4th edition. After that, under the registration number ISSN 1983-0319 from CNPq, the newsletter, now the GLOSS forum that the Afro-American Spanish- and Portuguese-speaking countries already had, started to be disseminated only electronically. Currently in its 15th edition, the newsletter can be accessed at www.mares.io.usp.br (icon Afro America GLOSS News, AAGN), the electronic address of the “Laboratory of Tides and Oceanic Temporal Processes” (MAPTOLAB) of IO-USP, where other contributions on Brazilian sea level, besides the ones given in this supplement, can also be found. The acronym GLOSS identifies the IOC program which aims to produce an overall global long-term sea level data set from permanent measuring stations, distributed on ocean islands and all over the continental borders, about 500 km apart from each other on average, covering both Earth hemispheres evenly. The program follows the lines of the Permanent Service for Mean Sea Level (PSMSL), a service established in 1933 by the International Association for the Physical Sciences of the Ocean (IAPSO), which, however, has a much stronger and denser sea level data contribution from countries of the Northern Hemisphere. The Service receives and organizes sea level data sent by all countries with maritime borders that are members of the United Nations (UN) and freely distributes the data to interested people on the site http://www.pol.ac.uk/psmsl. The Permanent Station of Cananeia, Brazil, which has GLOSS number 194, together with several other permanent stations (San Francisco, USA; Brest, France; and many others), belongs to a chosen group of stations (Brazil has 9 GLOSS stations) prepared to produce real-time sea level measurements, accompanied by high quality gravity, GPS and meteorological data, aiming to contribute to a strictly reliable “in situ” knowledge of global sea level variability. Following the recommendations of the Round Table for a search of the first historical events, it was found that sea level measurements started on the Brazilian coast in 1781, the year when the Portuguese astronomer Sanches Dorta came to the Southern oceans, interested in studying the attraction between masses, applying to the oceanic tides the fundamental global law discovered by Isaac Newton in the seventeenth century. Nearly a hundred years later the law was confirmed by Henry Cavendish. After nearly another hundred years, and a few years after the transfer of the Portuguese Crown from Europe to Brazil in 1808, the Port of Rio de Janeiro became, in 1831, the site of the first systematic sea level measurements ever performed on the Brazilian coast. The one-year recorded tidal signal, showing a clear semidiurnal tide, is kept today in the Library of the Directorate of Hydrography and Navigation (DHN) of the Brazilian Navy.
After the proclamation of the Brazilian Republic in 1889, systematic sea level measurements at several ports along the coast were organized and established by the Port Authorities, precursors of INPH. Sea level analyses based on these measurements were made by Belford Vieira (op. cit.) and Lemos (op. cit.) of the aforementioned National Observatory (ON), an institute of the National Council of Research and Technology (CNPq), and they gave the knowledge of tides and tidal analysis a valuable boost at that time. For some reason, the measurements of 1831 were written into Brazilian Federal Law No. 9760 of 1946, to serve as the National Reference (NR) for determining the sea/land limits of the “Terrenos de Marinha”, which inadvertently took the 1831 level as fixed and permanent over the years, something known today to be untrue. Not only for this reason, but also because the datum, the reference level (RL) in the Port of Rio de Janeiro to which the measurements of 1831 were referred, was lost, the 1946 law is inapplicable nowadays. The recommendations of the Round Table participants seem to have been providential for the action that was taken in order to address these unexpected findings. A method for recovering the 1831 limits of high waters, referred to by Law 9760, was recently produced and is shown in this supplement. Also shown is the first attempt to identify, on the coast of São Paulo State, from the bathymetry of the marine charts produced by DHN, several details of the bottom of the shelf area. The paleo rivers and terraces covered since the most recent deglaciation period, which started about 20,000 years ago, were computationally uncovered from the charts, showing several paleo river entrances and other sediment features of the shelf around “Ilha Bela”, an island off the coast of São Sebastião. Another tidal analysis contribution, following the first studies of ON scientists, but now using computer facilities and the Fast Fourier Transform for tidal analysis, developed by Franco and Rock (1971), is also shown in this supplement. Estimates of constituent amplitudes such as M2 and S2 seem to be decreasing over the years. In two ports on the coast this was evident, a consequence of tidal energy being transferred from the astronomical tide generating potential (PGM), created basically by the Sun and the Moon, to nonlinear components generated by tidal currents in a process that continuously modifies the beaches, estuarine borders and the shelf area. A study on the generation of nonlinear tidal components, also envisaged by Franco (2009) in his book on tides, seems to be the answer to some basic questions in this field of knowledge. Harari & Camargo (1994) worked along the same lines, covering the entire southeastern shelf. As for long-term sea level trends, the sea level series produced by the National Institute of Research for Ports and Rivers (INPH), the 10-year series obtained by the Geodetic Survey of the USA in various Brazilian ports, and the Cananeia sea level series of IO-USP together allowed the first estimation of Brazil’s long-term trend, of about 30 cm/cty (centimetres per century). A study comparing this value with the global value of sea level variation obtained from the PSMSL data series shows that, among the positively and negatively trended global tidal series, the Brazilian series are well above the mean global trend value of about 18 cm/cty. This result was communicated to IAPSO at the 1987 meeting in Honolulu, Hawaii, USA.
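To make the two quantitative ideas above concrete for readers outside the field, the following minimal Python sketch shows how constituent amplitudes such as M2 and S2, together with a long-term trend in cm/cty, can be recovered from a tide gauge record by ordinary least squares. It is a generic least-squares formulation, not the FFT method of Franco and Rock (1971) used in the supplement, and it runs on a synthetic record; the constituent periods are standard textbook values and every numeric choice here is an assumption for illustration.

import numpy as np

# Standard periods (hours) of the two main semidiurnal constituents.
PERIODS_H = {"M2": 12.4206012, "S2": 12.0}

CTY_HOURS = 100 * 365.25 * 24.0            # one century, in hours
rng = np.random.default_rng(0)
t = np.arange(10 * 365 * 24, dtype=float)  # ten years of hourly samples

# Synthetic sea level record (cm): mean + M2 + S2 + slow rise + noise.
eta = (100.0
       + 40.0 * np.cos(2 * np.pi * t / PERIODS_H["M2"] + 0.3)
       + 15.0 * np.cos(2 * np.pi * t / PERIODS_H["S2"] - 1.1)
       + 30.0 * t / CTY_HOURS              # an assumed 30 cm/cty trend
       + rng.normal(0.0, 5.0, t.size))

# Design matrix: constant, linear trend, and a cos/sin pair per constituent.
cols = [np.ones_like(t), t]
for period in PERIODS_H.values():
    w = 2 * np.pi / period
    cols += [np.cos(w * t), np.sin(w * t)]
X = np.column_stack(cols)

coef, *_ = np.linalg.lstsq(X, eta, rcond=None)

print(f"recovered trend: {coef[1] * CTY_HOURS:.1f} cm/cty")
for i, name in enumerate(PERIODS_H):
    a, b = coef[2 + 2 * i], coef[3 + 2 * i]
    print(f"{name}: amplitude {np.hypot(a, b):.1f} cm")

A decline in M2 or S2 of the kind reported above would show up as shrinking recovered amplitudes when the same fit is repeated over successive epochs of a long record, while the trend coefficient, usually fitted to yearly means rather than hourly data, is what underlies figures such as 30 cm/cty versus the global 18 cm/cty.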
In another attempt to decipher the long-term sea level contents of these series, the correlation values, as a measure of collinearity, and the proximity values, as well as the distances of the yearly mean sea level values from the calculated regression line, are shown in this supplement to be invariant under rotation of the Cartesian axes. Not following the recommendations of the Round Table, but for the completeness of this preface, these values, estimated from the Permanent Service for Mean Sea Level data with the Brazilian series included, allowed the definition of a function F which, being also invariant under axis rotation, seems to measure a characteristic state of variability of each sea level series. The plot of F values against the corresponding trend values of the 60 to 100 year-long PSMSL series is shown in Figure 2. This plot shows positive values of F reaching 18 cm/cty, in good agreement with the global value recently estimated by the Intergovernmental Panel on Climate Change (IPCC). However, the negative side of the figure also shows other values of F carrying other information, which is enigmatic and is discussed in Mesquita (2004). For the comprehensiveness of this preface, and as a continuation of the subjects, although not exactly following the discussions of the Round Table, other related topics have been developed since the 5th Congress in 1997, such as extreme sea level events. These were estimated for the port of Cananeia, indicating average values of 2.80 m above mean sea level, which appear to be representative of the entire Brazilian coast and likely to occur within the next hundred years, as shown by Franco et al. (2007). Again for completeness, the topic of steric and halosteric sea levels has also been much discussed since the 1997 meeting. Prospects for further studies on the topic rely on proposed annual oceanographic section measurements on the southeastern coast, “The Capricorn Section”, aimed at estimating the variability and the long-term steric and halosteric contributions to sea level, as expressed in Mesquita (2009). These data and the time series measurements (sea level, GPS, meteorology and gravity) already taken at the Cananeia and Ubatuba research stations, both near the Tropic of Capricorn, should make it possible to estimate locally the values of almost all basic components of sea level over the Brazilian southeastern area, and perhaps also of the whole South Atlantic, allowing quantitative studies of their composition, long-term variability and climatic influence.
APA, Harvard, Vancouver, ISO, and other styles
43

Thinh, Nguyen Hong, Tran Hoang Tung, and Le Vu Ha. "Depth-aware salient object segmentation." VNU Journal of Science: Computer Science and Communication Engineering 36, no. 2 (October 7, 2020). http://dx.doi.org/10.25073/2588-1086/vnucsce.217.

Full text
Abstract:
Object segmentation is an important task which is widely employed in many computer vision applications such as object detection, tracking, recognition, and retrieval. It can be seen as a two-phase process: object detection and segmentation. Object segmentation becomes more challenging when there is no prior knowledge about the object in the scene. In such conditions, visual attention analysis via saliency mapping may offer a means of predicting the object location, using visual contrast, local or global, to identify regions that draw strong attention in the image. However, in situations such as cluttered backgrounds, highly varied object surfaces, or shadow, salient object segmentation approaches based on a single image feature such as color or brightness have proven insufficient for the task. This work proposes a new salient object segmentation method which uses a depth map obtained from the input image to enhance the accuracy of saliency mapping. A deep learning-based method is employed for depth map estimation. Our experiments showed that the proposed method outperforms other state-of-the-art object segmentation algorithms in terms of recall and precision. Keywords: saliency map, depth map, deep learning, object segmentation.
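Since the abstract describes the method only at a high level (a saliency map enhanced by a deep-learning depth estimate), the short Python sketch below illustrates the general depth-aware idea rather than the authors' actual algorithm: an appearance-based saliency map is reweighted so that nearer pixels score higher, then thresholded into an object mask. The crude global-contrast saliency, the weighted fusion rule, and the synthetic depth map are all assumptions made for illustration; in the paper the depth map would come from a trained monocular-depth network.

import numpy as np

def contrast_saliency(gray):
    # Crude global-contrast saliency: distance of each pixel from the
    # mean intensity, scaled to [0, 1].
    s = np.abs(gray - gray.mean())
    return s / (s.max() + 1e-8)

def depth_aware_saliency(gray, depth, alpha=0.5):
    # Fuse appearance saliency with "nearness" (inverted, normalised depth).
    # This weighted multiplicative rule is an assumption, not the paper's.
    appearance = contrast_saliency(gray)
    nearness = 1.0 - (depth - depth.min()) / (np.ptp(depth) + 1e-8)
    return alpha * appearance + (1.0 - alpha) * appearance * nearness

def segment(saliency):
    # Binarise with a common adaptive rule: twice the mean saliency.
    return saliency >= 2.0 * saliency.mean()

# Toy scene: a bright object that is closer to the camera than the background.
gray = np.full((64, 64), 0.2)
gray[20:40, 20:40] = 0.9          # object intensity
depth = np.full((64, 64), 10.0)   # background depth (metres)
depth[20:40, 20:40] = 2.0         # object depth: nearer, so boosted

mask = segment(depth_aware_saliency(gray, depth))
print("object pixels found:", int(mask.sum()))   # expect about 20*20 = 400

The depth term helps precisely where the abstract says single-feature saliency fails: when background clutter has contrast comparable to the object, nearness can still separate the two.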
APA, Harvard, Vancouver, ISO, and other styles
44

Brien, Donna Lee. "Powdered, Essence or Brewed?: Making and Cooking with Coffee in Australia in the 1950s and 1960s." M/C Journal 15, no. 2 (April 4, 2012). http://dx.doi.org/10.5204/mcj.475.

Full text
Abstract:
Introduction: From Trifle to Tiramisu Tiramisu is an Italian dessert cake, usually comprising sponge finger biscuits soaked in coffee and liquor, layered with a mixture of egg yolk, mascarpone and cream, and topped with sifted cocoa. Once a gourmet dish, tiramisu, which means “pick me up” in Italian (Volpi), is today very popular in Australia where it is available for purchase not only in restaurants and cafés, but also from fast food chains and supermarkets. Recipes abound in cookery books and magazines and online. It is certainly more widely available and written about in Australia than the once ubiquitous English trifle which, comprising variations on the theme of sherry-soaked sponge cake, custard and cream, it closely resembles. It could be asserted that its strong coffee taste has enabled the tiramisu to triumph over the trifle in contemporary Australia, yet coffee is also a recurrent ingredient in cakes and icings in nineteenth and early twentieth century Australian cookbooks. Acknowledging that coffee consumption in Australia doubled during the years of the Second World War and maintained high rates of growth afterwards (Khamis; Adams), this article draws on examples of culinary writing during this period of increasing popularity to investigate the use of coffee in cookery, as well as its use as a beverage, in these mid-twentieth century decades. In doing so, it engages with a lively scholarly discussion on what has driven this change—whether the American glamour and sophistication associated with coffee, post-war immigration from the Mediterranean and other parts of Europe, or the influence of the media and developments in technology (see, for discussion, Adams; Collins et al.; Khamis; Symons). Coffee in Australian Mid-century Epicurean Writing In Australian epicurean writing in the 1950s and 1960s, freshly brewed coffee is clearly identified as the beverage of choice for those with gourmet tastes. In 1952, The West Australian reported that Johnnie Walker, then president of the Sydney Gourmet Society, had “sweated over an ordinary kitchen stove to give 12 Melbourne women a perfect meal” (“A Gourmet” 8). Walker prepared a menu comprising: savoury biscuits; pumpkin soup made with a beef, ham, and veal stock; duck braised with “26 ounces of dry red wine, a bottle and a half of curacao and orange juice;” Spanish fried rice; a “French lettuce salad with the Italian influence of garlic;” and strawberries with strawberry brandy and whipped cream. He served sherry with the biscuits, red wine with the duck, champagne with the sweet, and coffee to finish. It is, however, the adjectives that matter here—that the sherry and wine were dry, not sweet, and the coffee was percolated and black, not instant and milky. Other examples of epicurean writing suggested that fresh coffee should also be unadulterated. In 1951, American food writer William Wallace Irwin, who travelled to, and published in, Australia as “The Garrulous Gourmet,” wrote scathingly of the practice of adding chicory to coffee in France and elsewhere (104). This castigation of the French for their coffee was unusual, with most articles at this time praising Gallic gastronomy. Indicative of this is Nancy Cashmore’s travel article for Adelaide’s Advertiser in 1954. Titled “In Dordogne and Burgundy the Gourmet Will Find … A Gastronomic Paradise,” Cashmore details the purchasing, preparation, presentation, and, of course, consumption of excellent food and wine.
Good coffee is an integral part of every meal and every day: “from these parts come exquisite pate de fois, truffles, delicious little cakes, conserved meats, wild mushrooms, walnuts and plums. … The day begins with new bread and coffee … nothing is imported, nothing is stale” (6). Memorable luncheons of “hors-d’oeuvre … a meat course, followed by a salad, cheese and possibly a sweet” (6) always ended with black coffee and sometimes a sugar lump soaked in liqueur. In Australian Wines and Food (AW&F), a quarterly epicurean magazine that was published from 1956 to 1960, coffee was regularly featured as a gourmet kitchen staple alongside wine and cheese. Articles on the history, growing, marketing, blending, roasting, purchase, and brewing of coffee during these years were accompanied with full-page advertisements for Bushell’s vacuum packed pure “roaster fresh” coffee, Robert Timms’s “Royal Special” blend for “coffee connoisseurs,” and the Masterfoods range of “superior” imported and locally produced foodstuffs, which included vacuum packed coffee alongside such items as paprika, bay leaves and canned asparagus. AW&F believed Australia’s growing coffee consumption the result of increased participation in quality dining experiences whether in restaurants, the “scores of colourful coffee shops opening their doors to a new generation” (“Coffee” 39) or at home. With regard to domestic coffee drinking, AW&F reported a revived interest in “the long neglected art of brewing good coffee in the home” (“Coffee” 39). Instructions given range from boiling in a pot to percolating and “expresso” (Bancroft 10; “Coffee” 37-9). Coffee was also mentioned in every issue as the only fitting ending to a fine meal, when port, other fortified wines or liqueurs usually accompanied a small demi-tasse of (strong) black coffee. Coffee was also identified as one of the locally produced speciality foods that were flown into the USA for a consulate dinner: “more than a ton of carefully selected foodstuffs was flown to New York by Qantas in three separate airlifts … beef fillet steaks, kangaroo tails, Sydney rock oysters, King prawns, crayfish tails, tropical fruits and passion fruit, New Guinea coffee, chocolates, muscatels and almonds” (“Australian” 16). It is noteworthy that tea is not profiled in the entire run of the magazine. A decade later, in the second half of the 1960s, the new Australian gourmet magazine Epicurean included a number of similar articles on coffee. In 1966 and 1969, celebrity chef and regular Epicurean columnist Graham Kerr also included an illustrated guide to making coffee in two of the books produced alongside his television series, The Graham Kerr Cookbook (125) and The Graham Kerr Cookbook by the Galloping Gourmet (266-67). These included advice to buy freshly roasted beans at least once a week and to invest in an electric coffee grinder. Kerr uses a glass percolator in each and makes an iced (milk) coffee based on double strength cooled brewed coffee. Entertaining with Margaret Fulton (1971) is the first Margaret Fulton cookery book to include detailed information on making coffee from ground beans at home. In this volume, which was clearly aimed at the gourmet-inclined end of the domestic market, Fulton, then cookery editor for popular magazine Woman’s Day, provides a morning coffee menu and proclaims that “Good hot coffee will never taste so good as it does at this time of the day” (90). 
With the stress on the “good,” Fulton, like Kerr, advises that beans be purchased and ground as they are needed, or that only small amounts of freshly ground coffee be obtained at one time. For Fulton, quality is clearly linked to price—“buy the best you can afford” (90)—but while advising that “Mocha coffee, which comes from Aden and Mocha, is generally considered the best” (90), she also concedes that consumers will “find by experience” (90) which blends they prefer. She includes detailed information on storage and preparation, noting that there are also “dozens of pieces of coffee making equipment to choose from” (90). Fulton includes instructions on how to make coffee for guests at a wedding breakfast or other large event, gently heating home-sewn muslin bags filled with finely ground coffee in urns of barely boiling water (64). Alongside these instructions, Fulton also provides recipes for a sophisticated selection of coffee-flavoured desserts such as an iced coffee soufflé and coffee biscuits and meringues that would be perfect accompaniments to her brewed coffees. Cooking with Coffee A prominent and popular advocate of Continental and Asian cookery in Melbourne in the 1950s, Maria Kozslik Donovan wrote and illustrated five cookery books and had a successful international career as a food writer in the 1960s and 1970s. Maria Kozslik was Hungarian by birth and education and was also educated in the USA before marrying Patrick Donovan, an Australian, and migrating to Sydney with him in 1950. After a brief stay there and in Adelaide, they relocated to Melbourne in 1953, where she ran a cookery school and wrote for the prominent daily newspaper The Age, penning hundreds of her weekly “Epicure’s Corner: Continental Recipes with Maria Kozslik” columns from 1954 to 1961. Her groundbreaking Continental Cookery in Australia (1955) collects some 140 recipes, many of which would appear in her column—predominantly featuring French, Italian, Viennese, and Hungarian dishes, as well as some from the Middle East and the Balkans—each with an informative paragraph or two regarding European cooking and dining practices that set the recipes in context. Continental Cookery in Australia includes one recipe for Mocha Torte (162), which she translates as Coffee Cream Cake and identifies as “the favourite of the gay and party-loving Viennese … [in] the many cafés and sweet shops of Salzburg and Vienna” (162). In this recipe, a plain sponge is cut into four thin layers and filled and covered with a rich mocha cream custard made from egg yolks, sugar and a good measure of coffee, which, when cooled, is beaten into creamed butter. In her recipe for Mocha Cream, Donovan identifies the type of coffee to be used and its strength, specifying that “strong Mocha” be used, and pleading, “please, no essence!” She also suggests that the cake’s top can be decorated with shavings of the then quite exotic “coffee bean chocolate,” which she notes can be found at “most continental confectioners” (162), but which would have been difficult to obtain outside the main urban centres. Coffee also appears in her Café Frappe, where cooled strong black coffee is poured into ice-filled glasses, and dressed with a touch of sugar and whipped cream (165). For this recipe the only other direction that Donovan gives regarding coffee is to “prepare and cool” strong black coffee (165) but it is obvious—from her eschewing of other convenience foods throughout the volume—that she means freshly brewed ground coffee.
In contrast, less adventurous cookery books paint a different picture of coffee use in the home at this time. Thus, the more concise Selected Continental Recipes for the Australian Home (1955) by the Australian-born Zelmear M. Deutsch—who states that, upon marrying a Viennese husband, she became aware of “the fascinating ways of Continental Cuisine” (back cover)—includes three recipes that use coffee. Deutsch’s Mocha Creams (chocolate truffles with a hint of coffee) (76-77), almond meringues filled with coffee whipped cream (89-90), and Mocha Cream Filling comprising butter beaten with chocolate, vanilla, sugar, and coffee (95), all use “powdered” instant coffee, which is, moreover, used extremely sparingly. Her Almond Coffee Torte, for example, requires only half a teaspoon of powdered coffee to a quarter of a pint (about 150 mls) of cream, which is also sweetened with vanilla sugar (89-90). In contrast to the examples from Fulton and Donovan above (but in common with many cookbooks before and after), Deutsch uses the term “mocha” to describe a mix of coffee and chocolate, rather than to refer to a fine-quality coffee. The term itself is also used to describe a soft, rich brown colour and, therefore, at times, the resulting hue of these dishes. The word itself is of late eighteenth century origin, and comes from the eponymous name of a Red Sea port from where coffee was shipped. While Selected Continental Recipes appears to be Deutsch’s first and only book, Anne Mason was a prolific food, wine and travel writer. Before migrating to England in 1958, she was well known in Australia as the presenter of a live weekly television program, Anne Mason’s Home-Tested Recipes, which aired from 1957. She also wrote a number of popular cookery books and had a long-standing weekly column in The Age. Her ‘Home-Tested Recipes’ feature published recipes contributed by readers, which she selected and tested. A number of these were collected in her Treasury of Australian Cookery, published in London in 1962, and included those influenced by “the country cooking of England […] Continental influence […] and oriental ideas” (11). Mason includes numerous recipes featuring coffee, but (as in Deutsch above) almost all are described as mocha-flavoured and listed as such in the detailed index. In Mason’s book, this mocha taste is, in fact, featured more frequently in sweet dishes than any of the other popular flavours (vanilla, honey, lemon, apple, banana, coconut, or passionfruit) except for chocolate. These mocha recipes include cakes: Chocolate-Mocha Refrigerator Cake—plain sponge layered with a coffee-chocolate mousse (134), Mocha Gateau Ring—plain sponge and choux pastry puffs filled with cream or ice cream and thickly iced with mocha icing (136) and Mocha Nut Cake—a coffee and cocoa butter cake filled and iced with mocha icing and almonds (166). There are also recipes for Mocha Meringues—small coffee/cocoa-flavoured meringue rosettes joined together in pairs with whipped cream (168), a dessert Mocha Omelette featuring the addition of instant coffee and sugar to the eggs and which is filled with grated chocolate (181), and Mocha-Crunch Ice Cream—a coffee essence-scented ice cream with chocolate biscuit crumbs (144) that was also featured in an ice cream bombe layered with chocolate-rum and vanilla ice creams (152). Mason’s coffee recipes are also given prominence in the accompanying illustrations.
Although the book contains only nine pages in full colour, the Mocha Gateau Ring is featured on both the cover and opposite the title page of the book, and the Mocha Nut Cake is given an entire coloured page. The coffee component of Mason’s recipes is almost always sourced from either instant coffee (granules or powdered) or liquid coffee essence. However, while the cake for the Mocha Nut Cake uses instant coffee, its mocha icing and filling call for “3 dessertspoons [of] hot black coffee” (167). The recipe does not, however, describe whether this is made from instant coffee, essence, or ground beans. The two other mocha icings both use instant coffee mixed with cocoa, icing sugar and hot water, while one also includes margarine for softness. The recipe for Mocha Cup (202) in the chapter for Children’s Party Fare (198-203), listed alongside clown-shaped biscuits and directions to decorate cakes with sweets, plastic spaceships and dolls, surprisingly comprises a sophisticated mix of grated dark chocolate melted in a pint of “hot black coffee” lightened with milk, sugar and vanilla essence, and topped with cream. There are no instructions for brewing or otherwise making fresh coffee in the volume. The Australian culinary masterwork of the 1960s, The Margaret Fulton Cookbook, which was published in 1968 and sold out its first print run of 100,000 copies in record time, is still in print, with a revised 2004 edition bringing the number of copies sold to over 1.5 million (Brien). The cake section of the first edition includes a Coffee Sponge sandwich using coffee essence in both the cake and its creamy filling and topping (166) and Iced Coffee Cakes that also use coffee essence in the cupcakes and instant coffee powder in the glacé icing (166). A Hazelnut Swiss Roll is filled with a coffee butter cream called Coffee Creme au Beurre, with instant coffee flavouring an egg custard which is beaten into creamed butter (167)—similar to Kozslik’s Mocha Cream but a little lighter, using milk instead of cream and fewer eggs. Fulton also includes an Austrian Chocolate Cake in her Continental Cakes section that uses “black coffee” in a mocha ganache that is used as a frosting (175), and her sweet hot coffee soufflé calls for “1/2 cup strong coffee” (36). Fulton also features a recipe for Irish Coffee—sweetened hot black coffee with (Irish) whiskey added, and cream floated on top (205). Nowhere is fresh or brewed coffee specified, and on the page dedicated to weights, measures, and oven temperatures, instant coffee powder appears on the list of commonly used ingredients alongside flour, sugar, icing sugar, golden syrup, and butter (242). American Influence While the influence of American habits such as supermarket shopping and fast food on Australian foodways is reported in many venues, recognition of its influence on Australian coffee culture is more muted (see, for exceptions, Khamis; Adams). Yet American modes of making and utilising coffee also influenced the Australian use of coffee, whether drunk as a beverage or employed as a flavouring agent. In 1956, the Australian Women’s Weekly published a full colour Wade’s Cornflour advertorial of biscuit recipes under the banner, “Dione Lucas’s Manhattan Mochas: The New Coffee Cookie All America Loves, and Now It’s Here” (56).
The use of the American “cookie” instead of the Australian “biscuit” is telling here, the popularity of all things American sure to ensure, the advert suggested, that the Mochas (coffee biscuits topped with chocolate icing) would be so popular as to be “More than a recipe—a craze” (56). This American influence can also be seen in cakes and other baked goods made specifically to serve with coffee, but not necessarily containing it. The recipe for Zulu Boys published in The Argus in 1945, a small chocolate and cinnamon cake with peanuts and cornflakes added, is a good example. Reported to “keep moist for some time,” these were “not too sweet, and are especially useful to serve with a glass of wine or a cup of black coffee” (Vesta Junior 9), the recipe a precursor to many in the 1950s and 1960s. Margaret Fulton includes a Spicy Coffee Cake in The Margaret Fulton Cookbook. This is similar to her Cinnamon Tea Cake in being an easy-to-mix cake topped with cinnamon sugar, but is more robust in flavour and texture with the addition of whole bran cereal, raisins and spices (163). Her “Morning Coffee” section in Entertaining with Margaret Fulton similarly includes a selection of quite strongly flavoured and substantially textured cakes and biscuits (90-92), while her recipes for Afternoon Tea are lighter and more delicate in taste and appearance (85-89). Concluding Remarks: Integration and Evolution, Not Revolution Trusted Tasmanian writer on all matters domestic, Marjorie Bligh, published six books on cookery, craft, home economics, and gardening, and produced four editions of her much-loved household manual under all three of her married names: Blackwell, Cooper and Bligh (Wood). The second edition of At Home with Marjorie Bligh: A Household Manual (published c.1965-71) provides more evidence of how, rather than jettisoning one form in favour of another, Australian housewives were adept at integrating both ground and other more instant forms of coffee into their culinary repertoires. She thus includes instructions both on how to efficiently clean a coffee percolator (percolating with a detergent and borax solution) (312) and on how to make coffee essence at home by simmering one cup of ground coffee with three cups of water and one cup of sugar for one hour, straining and bottling (281). She also includes recipes for cakes, icings, and drinks that use both brewed and instant coffee as well as coffee essence. In Entertaining with Margaret Fulton, Fulton similarly allows consumer choice, urging that “If you like your coffee with a strong flavour, choose one to which a little chicory has been added” (90). Bligh’s volume similarly reveals how the path from trifle to tiramisu was a meandering one, which added recipes to Australian foodways rather than deleting them. Her recipe for Coffee Trifle has strong similarities to tiramisu, with sponge cake soaked in strong milk coffee and sherry, layered with a rich custard made from butter, sugar, egg yolks, and black coffee, and then decorated with whipped cream, glacé cherries, and walnuts (169). This recipe precedes published references to tiramisu as, although the origins of tiramisu are debated (Black), references to the dessert only began to appear in the 1980s, and there is no mention of the dish in such authoritative sources as Elizabeth David’s 1954 Italian Food, which features a number of traditional Italian coffee-based desserts including granita, ice cream and those made with cream cheese and rice.
By the 1990s, however, respected Australian chef and food researcher, the late Mietta O’Donnell, wrote that if pizza was “the most travelled of Italian dishes, then tiramisu is the country’s most famous dessert” and, today, Australian home cooks are using the dish as a basis for a series of variations that even include replacing the coffee with fruit juices and other flavouring agents. Long-lived Australian coffee recipes are similarly being re-made in line with current taste and habits, with celebrated chef Neil Perry’s recent Simple Coffee and Cream Sponge Cake comprising a classic cream-filled vanilla sponge topped with an icing made with “strong espresso”. To “glam up” the cake, Perry suggests sprinkling the top with chocolate-covered roasted coffee beans—cycling back to Maria Kozslik’s “coffee bean chocolate” (162) and showing just how resilient good taste can be. Acknowledgements The research for this article was completed while I was the recipient of a Research Fellowship in the Special Collections at the William Angliss Institute (WAI) of TAFE in Melbourne, where I utilised their culinary collections. Thank you to the staff of the WAI Special Collections for their generous assistance, as well as to the Faculty of Arts, Business, Informatics and Education at Central Queensland University for supporting this research. Thank you to Jill Adams for her assistance with this article and for sharing her “Manhattan Mocha” file with me, and also to the peer reviewers for their generous and helpful feedback. All errors are, of course, my own.
References
“A Gourmet Makes a Perfect Meal.” The West Australian 4 Jul. 1952: 8.
Adams, Jill. “Australia’s American Coffee Culture.” Australasian Journal of Popular Culture (2012): forthcoming.
“Australian Wines Served at New York Dinner.” Australian Wines and Food 1.5 (1958): 16.
Bancroft, P. A. “Let’s Make Some Coffee.” Australian Wines & Food Quarterly 4.1 (1960): 10.
Black, Jane. “The Trail of Tiramisu.” Washington Post 11 Jul. 2007. 15 Feb. 2012 ‹http://www.washingtonpost.com/wp-dyn/content/article/2007/07/10/AR2007071000327.html›.
Bligh, Marjorie. At Home with Marjorie Bligh: A Household Manual. 2nd ed. Devonport: M. Bligh, c.1965-71.
Brien, Donna Lee. “Australian Celebrity Chefs 1950-1980: A Preliminary Study.” Australian Folklore 21 (2006): 201-18.
Cashmore, Nancy. “In Dordogne and Burgundy the Gourmet Will Find … A Gastronomic Paradise.” The Advertiser 23 Jan. 1954: 6.
“Coffee Beginnings.” Australian Wines & Food Quarterly 1.4 (1957/1958): 37-39.
Collins, Jock, Katherine Gibson, Caroline Alcorso, Stephen Castles, and David Tait. A Shop Full of Dreams: Ethnic Small Business in Australia. Sydney: Pluto Press, 1995.
David, Elizabeth. Italian Food. New York: Penguin Books, 1999. 1st pub. UK: Macdonald, 1954, and New York: Knopf, 1954.
Donovan, Maria Kozslik. Continental Cookery in Australia. Melbourne: William Heinemann, 1955. Reprint ed. 1956.
———. “Epicure’s Corner: Continental Recipes with Maria Kozslik.” The Age 4 Jun. 1954: 7.
Fulton, Margaret. The Margaret Fulton Cookbook. Dee Why West: Paul Hamlyn, 1968.
———. Entertaining with Margaret Fulton. Dee Why West: Paul Hamlyn, 1971.
Irwin, William Wallace. The Garrulous Gourmet. Sydney: The Shepherd P, 1951.
Kerr, Graham. The Graham Kerr Cookbook. Wellington, Auckland, and Sydney: AH & AW Reed, 1966.
———. The Graham Kerr Cookbook by The Galloping Gourmet. New York: Doubleday, 1969.
Khamis, Susie. “It Only Takes a Jiffy to Make: Nestlé, Australia and the Convenience of Instant Coffee.” Food, Culture & Society 12.2 (2009): 217-33.
Mason, Anne. A Treasury of Australian Cookery. London: Andre Deutsch, 1962.
Mason, Peter. “Anne Mason.” The Guardian 20 Oct. 2006. 15 Feb. 2012.
Masterfoods. “Masterfoods” [advertising insert]. Australian Wines and Food 2.10 (1959): btwn. 8 & 9.
“Masters of Food.” Australian Wines & Food Quarterly 2.11 (1959/1960): 23.
O’Donnell, Mietta. “Tiramisu.” Mietta’s Italian Family Recipe, 14 Aug. 2004. 15 Feb. 2012 ‹http://www.miettas.com/food_wine_recipes/recipes/italianrecipes/dessert/tiramisu.html›.
Perry, Neil. “Simple Coffee and Cream Sponge Cake.” The Age 12 Mar. 2012. 15 Feb. 2012 ‹http://www.theage.com.au/lifestyle/cuisine/baking/recipe/simple-coffee-and-cream-sponge-cake-20120312-1utlm.html›.
Symons, Michael. One Continuous Picnic: A History of Eating in Australia. Adelaide: Duck Press, 2007. 1st pub. Melbourne: Melbourne UP, 1982.
‘Vesta Junior’. “The Beautiful Fuss of Old Time Baking Days.” The Argus 20 Mar. 1945: 9.
Wade’s Cornflour. “Dione Lucas’ Manhattan Mochas: The New Coffee Cookie All America Loves, and Now It’s Here.” The Australian Women’s Weekly 1 Aug. 1956: 56.
Wood, Danielle. Housewife Superstar: The Very Best of Marjorie Bligh. Melbourne: Text Publishing, 2011.
APA, Harvard, Vancouver, ISO, and other styles
45

Ali, Kawsar. "Zoom-ing in on White Supremacy." M/C Journal 24, no. 3 (June 21, 2021). http://dx.doi.org/10.5204/mcj.2786.

Full text
Abstract:
The Alt Right Are Not Alright Academic explorations complicating both the Internet and whiteness have often focussed on the rise of the “alt-right” to examine the co-option of digital technologies to extend white supremacy (Daniels, “Cyber Racism”; Daniels, “Algorithmic Rise”; Nagle). The term “alt-right” refers to media organisations, personalities, and sarcastic Internet users who promote the “alternative right”, understood as extremely conservative political views online. The alt-right, in all of their online variations and inter-grouping, are infamous for supporting white supremacy online, “characterized by heavy use of social media and online memes. Alt-righters eschew ‘establishment’ conservatism, skew young, and embrace white ethnonationalism as a fundamental value” (Southern Poverty Law Center). Theoretical studies of the alt-right have largely focussed on its growing presence across social media and websites such as Twitter, Reddit, and the notorious “chan” sites 4chan and 8chan, through the political discussions referred to as “threads” on these sites (Nagle; Daniels, “Algorithmic Rise”; Hawley). As well, the ability of online users to surpass national boundaries and spread global white supremacy through the Internet has also been studied (Back et al.). The alt-right have found a home on the Internet, using its features to cunningly recruit members and to establish a growing community that mainstreams politically extreme views (Daniels, “Cyber Racism”; Daniels, “Algorithmic Rise”; Munn). This body of knowledge shows that academics have been able to produce critically relevant literature regarding the alt-right despite the online anonymity of the majority of its members. For example, Conway et al., in their analysis of the history and social media patterns of the alt-right, follow the unique nature of the Christchurch Massacre, encompassing the use and development of message boards, fringe websites, and social media sites to champion white supremacy online. Positioning my research in this literature, I am interested in contributing further knowledge regarding the alt-right, white supremacy, and the Internet by exploring the sinister conduct of Zoom-bombing anti-racist events. Here, I will investigate how white supremacy through the Internet can lead to violence, abuse, and fear that “transcends the virtual world to damage real, live human beings” via Zoom-bombing, an act that is situated in a larger co-option of the Internet by the alt-right and white supremacists, but has been under-theorised as a hate crime (Daniels, “Cyber Racism” 7). Shitposting I want to preface this chapter by acknowledging that while I understand the Internet, through my own external investigations of race, power and the Internet, as a series of entities that produce racial violence both online and offline, I am aware of the use of the Internet to frame, discuss, and share anti-racist activism. Here we can turn to the work of philosopher Michel de Certeau who conceived the idea of a “tactic” as a way to construct a space of agency in opposition to institutional power. This becomes a way that marginalised groups, such as racialised peoples, can utilise the Internet as a tactical material to assert themselves and their non-compliance with the state. Particularly, shitposting, a tactic often associated with the alt-right, has also been co-opted by those who fight for social justice and rally against oppression both online and offline.
As Roderick Graham explores, the Internet, and for this exploration, shitposting, can be used to proliferate deviant and racist material but also as a “deviant” byway for oppositional and anti-racist material. Despite this, a lot can be said about the invisible yet present claims to, and support of, whiteness through Internet and digital technologies, as well as the activity of users channelled through these screens, such as the alt-right and their digital tactics. As Vikki Fraser remarks, “the internet assumes whiteness as the norm – whiteness is made visible through what is left unsaid, through the assumption that white need not be said” (120). It is through the lens of white privilege and claims to white supremacy that online irony, by way of shitposting, is co-opted and understood as an inherently alt-right tool, through the deviance it entails. Their sinister co-option of shitposting bolsters audacious claims as to who has the right to exist, in their support of white identity, but also hides behind a veil of mischief that can conceal their more insidious intentions and political ideologies. The alt-right have used “shitposting”, an online style of posting and interacting with other users, to create a form of online communication for a translocal identity of white nationalist members. Sean McEwan defines shitposting as “a form of Internet interaction predicated upon thwarting established norms of discourse in favour of seemingly anarchic, poor quality contributions” (19). Far from being random, however, I argue that shitposting functions as a discourse that is employed by online communities to discuss, proliferate, and introduce white supremacist ideals among their communities as well as into the mainstream. In the course of this article, I will introduce racist Zoom-bombing as a tactic situated in shitposting which can be used as a means of white supremacist discourse and an attempt to block anti-racist efforts. By this line, the function of discourse as one “to preserve or to reproduce discourse (within) a closed community” is calculatingly met through shitposting, Zoom-bombing, and more overt forms of white supremacy online (Foucault 225-226). Using memes, dehumanisation, and sarcasm, online white supremacists have created a means of both organising and mainstreaming white supremacy through humour that allows insidious themes to be mocked and then spread online. Foucault writes that “in every society the production of discourse is at once controlled, selected, organised and redistributed according to a certain number of procedures, whose role is to avert its powers and danger, to cope with chance events, to evade ponderous, awesome materiality” (216). As Philippe-Joseph Salazar recontextualises to online white supremacists, “the first procedure of control is to define what is prohibited, in essence, to set aside that which cannot be spoken about, and thus to produce strategies to counter it” (137). By this line, the alt-right reorganises these procedures and allocates a checked speech that will allow their ideas to proliferate in like-minded and growing communities. As a result, online white supremacists becoming a “community of discourse” advantages them in two ways: first, ironic language permits the mainstreaming of hate that allows sinister content to enter the public as the severity of their intentions is doubted due to the sarcastic language employed.
Second, shitposting is employed as an entry gate to more serious and dangerous participation in white supremacist action, engagement, and ideologies. It is important to note that white privilege is embodied in these discursive practices as, despite this exploitation of emerging technologies to further white supremacy, there are approaches that theorise the alt-right as “crazed product(s) of an isolated, extremist milieu with no links to the mainstream” (Moses 201). In this way, it is useful to consider shitposting as an informal approach that mirrors legitimised white sovereignties and authorised white supremacy. The result is that white supremacist online users succeed “not only in assembling a community of actors and a collective of authors, on the dual territory of digital communication and grass-roots activism”, but also in shaping an effective fellowship of discourse that audiences react well to online, encouraging its reception and mainstreaming (Salazar 142). Continuing, as McBain writes, “someone who would not dream of donning a white cap and attending a Ku Klux Klan meeting might find themselves laughing along to a video by the alt-right satirist RamZPaul”. This idea is echoed in a leaked stylistic guide by white supremacist website and message board the Daily Stormer that highlights irony as a cultivated mechanism used to draw new audiences to the far right, step by step (Wilson). As showcased in the screen capture below of the stylistic guide, “the reader is at first drawn in by curiosity or the naughty humor and is slowly awakened to reality by repeatedly reading the same points” (Feinburg). The result of this style of writing is used “to immerse recruits in an online movement culture built on memes, racial panic and the worst of Internet culture” (Wilson). Figure 1: A screenshot of the Daily Stormer’s playbook, expanding on the stylistic decisions of alt-right writers. Racist Zoom-Bombing In the timely text “Racist Zoombombing”, Lisa Nakamura et al. write the following: Zoombombing is more than just trolling; though it belongs to a broad category of online behavior meant to produce a negative reaction, it has an intimate connection with online conspiracy theorists and white supremacy … . Zoombombing should not be lumped into the larger category of trolling, both because the word “trolling” has become so broad it is nearly meaningless at times, and because zoombombing is designed to cause intimate harm and terrorize its target in distinct ways. (30) Notwithstanding the seriousness of Zoom-bombing, and so as not to minimise its insidiousness by understanding it as a form of shitposting, my article seeks to reiterate the seriousness of shitposting, of which, in the age of COVID-19, Zoom-bombing has become an example. I seek to emphasise the insidiousness of the tactical strategies of the alt-right online in a larger context of white violence online. Therefore, I am proposing a more critical look at the tactical use of the Internet by the alt-right, theorising shitposting and Zoom-bombing as means of hate crimes wherein they impose upon anti-racist activism and organising. Newlands et al. write that, having received only limited exposure pre-pandemic, “Zoom has become a household name and an essential component for parties (Matyszczyk, 2020), weddings (Pajer, 2020), school and work” (1). However, with this came the alt-right’s strategic co-option of the application to digitise terror and ensure a “growing framework of memetic warfare” (Nakamura et al. 31). Kruglanski et al.
label this co-opting of online tools to champion white supremacy operations via Zoom-bombing an example of shitposting: Not yet protesting the lockdown orders in front of statehouses, far-right extremists infiltrated Zoom calls and shared their screens, projecting violent and graphic imagery such as swastikas and pornography into the homes of unsuspecting attendees and making it impossible for schools to rely on Zoom for home-based lessons. Such actions, known as “Zoombombing,” were eventually curtailed by Zoom features requiring hosts to admit people into Zoom meetings as a default setting with an option to opt-out. (128) By this, we can draw on existing literature that has theorised white supremacists as innovation opportunists regarding their co-option of the Internet, as supported through Jessie Daniels’s work: “during the shift of the white supremacist movement from print to digital, online users exploited emerging technologies to further their ideological goals” (“Algorithmic Rise” 63). Selfe and Selfe write, in their description of the computer interface as a “political and ideological boundary land”, that it may serve larger cultural systems of domination in much the same way that geopolitical borders do (418). Considering these theorisations of white supremacists utilising tools that appear neutral for racialised aims, and the political possibilities of whiteness online, we can consider racist Zoom-bombing as an assertion of a battle that seeks to disrupt racial justice online but also to assert white supremacy as its own legitimate cause. My first encounter with local Zoom-bombing was during the Institute for Culture and Society (ICS) Seminar titled “Intersecting Crises” by Western Sydney University. The event sought to explore the concatenation of deeply inextricable ecological, political, economic, racial, and social crises. An academic involved in the facilitation of the event, Alana Lentin, live tweeted during the Zoom-bombing of the event: Figure 2: Academic Alana Lentin on Twitter live tweeting the Zoom-bombing of the Intersecting Crises event. Upon reflecting on this instance, I wondered, could efforts have been organised to prevent white supremacy? In considering who may or may not be responsible for halting racist shit-posting, we can problematise the work of R. David Lankes, who writes that “Zoom-bombing is when inadequate security on the part of the person organizing a video conference allows uninvited users to join and disrupt a meeting. It can be anything from a prankster logging on, yelling, and logging off to uninvited users” (217). However, this raises two areas to consider in theorising racist Zoom-bombing as more than isolated trolling. First, this approach to Zoom-bombing minimises the sinister intentions of Zoom-bombing when referring to people as pranksters. Albeit withholding the “mimic trickery and mischief that were already present in spaces such as real-life classrooms and town halls”, it may be more useful to consider theorising Zoom-bombing as often racialised harassment and a counter-aggression to anti-racist initiatives (Nakamura et al. 30). Due to the live nature of most Zoom meetings, it is increasingly difficult to prevent the alt-right from Zoom-bombing meetings. In “A First Look at Zoom-bombings”, a range of preventative strategies is encouraged for Zoom organisers, including “unique meeting links for each participant, although we acknowledge that this has usability implications and might not always be feasible” (Ling et al. 1).
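To make the “unique meeting links for each participant” mitigation concrete, here is a minimal Python sketch of one way such links could work; the domain, function names, and token scheme are hypothetical illustrations (this is not Zoom's actual mechanism or API). Each invite is cryptographically bound to one identity, so a leaked link cannot simply be reposted and reused anonymously at scale.

import hashlib
import hmac
import secrets

SERVER_SECRET = secrets.token_bytes(32)   # held only by the meeting service

def issue_invite(meeting_id, participant):
    # Sign the (meeting, participant) pair so the link works for one identity.
    payload = f"{meeting_id}:{participant}".encode()
    sig = hmac.new(SERVER_SECRET, payload, hashlib.sha256).hexdigest()
    return f"https://meet.example.org/{meeting_id}?user={participant}&sig={sig}"

def admit(meeting_id, participant, sig):
    # Gatekeeping check performed before letting anyone into the meeting.
    payload = f"{meeting_id}:{participant}".encode()
    expected = hmac.new(SERVER_SECRET, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, sig)

# A link forwarded beyond its addressee fails verification under any other
# identity, which is what raises the cost of coordinated, anonymous raiding.
print(issue_invite("intersecting-crises-seminar", "attendee@example.org"))

This sketch also makes Ling et al.'s caveat visible: every attendee now needs an individually issued link, which is precisely the usability burden, again falling on organisers, that the surrounding discussion problematises.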
The alt-right exploit gaps, akin to co-opting the mainstreaming of trolling and shitposting, to put forward their agenda on white supremacy and assert their presence when not welcome. Therefore, utilising the pandemic to instil new forms of terror, it can be said that Zoom-bombing becomes a new means to shitpost, where the alt-right “exploits Zoom’s uniquely liminal space, a space of intimacy generated by users via the relationship between the digital screen and what it can depict, the device’s audio tools and how they can transmit and receive sound, the software that we can see, and the software that we can’t” (Nakamura et al. 29). Second, this definition of Zoom-bombing raises the question: is it a fair assessment, given that it reiterates the blame placed on organisers? Rather, we can consider other gaps that have enabled the alt-right’s misuse of Zoom: “two conditions have paved the way for Zoom-bombing: a resurgent fascist movement that has found its legs and best megaphone on the Internet and an often-unwitting public who have been suddenly required to spend many hours a day on this platform” (Nakamura et al. 29). In this way, it is interesting to note that recommendations to halt Zoom-bombing revolve around the energy, resources, and attention of the organisers, who must practically address possible threats, rather than the onus being placed on those who maintain these systems and those who Zoom-bomb. As Jessie Daniels states, “we should hold the platform accountable for this type of damage that it's facilitated. It's the platform's fault and it shouldn't be left to individual users who are making Zoom millions, if not billions, of dollars right now” (Ruf 8). Brian Friedberg, Gabrielle Lim, and Joan Donovan explore the organised efforts by the alt-right to impose on Zoom events and disturb schedules: “coordinated raids of Zoom meetings have become a social activity traversing the networked terrain of multiple platforms and web spaces. Raiders coordinate by sharing links to Zoom meetings targets and other operational and logistical details regarding the execution of an attack” (14). This mass coordination of racist Zoom-bombing, in turn, makes social justice organisers feel overwhelmed and convinced that their efforts will inevitably be counteracted by a large and organised group, however prankster-like it may appear. Aligning with the idea that “Zoombombing conceals and contains the terror and psychological harm that targets of active harassment face because it doesn’t leave a trace unless an alert user records the meeting”, it is useful to consider to what extent racist Zoom-bombing becomes a new weapon of the alt-right to entertain and affirm current members, and to engage and influence new ones (Nakamura et al. 34). I propose that we consider Zoom-bombing through shitposting, situated within “the matrix of domination (white supremacy, heteropatriarchy, ableism, capitalism, and settler colonialism)”, to challenge the role of interface design and Internet infrastructure in enabling racial violence online (Costanza-Chock). Conclusion As Nakamura et al. have argued, Zoom-bombing is indeed “part of the lineage or ecosystem of trollish behavior”, yet these new forms of alt-right shitposting “[need] to be critiqued and understood as more than simply trolling because this term emerged during an earlier, less media-rich and interpersonally live Internet” (32).
I recommend theorising the alt-right in a way that highlights the larger structures of white power, privilege, and supremacy that maintain their online and offline legacies beyond Zoom, “to view white supremacy not as a static ideology or condition, but to instead focus on its geographic and temporal contingency” that allows acts of hate crime by individuals on politicised bodies (Inwood and Bonds 722). This corresponds with Claire Renzetti’s argument that “criminologists theorise that committing a hate crime is a means of accomplishing a particular type of power, hegemonic masculinity, which is described as white, Christian, able-bodied and heterosexual” – an approach that can be applied to theorisations of the alt-right and online violence (136). This violent white masculinity occupies a hegemonic hold in the formation, reproduction, and extension of white supremacy that is then shared, affirmed, and idolised through a racialised Internet (Donaldson et al.). Therefore, I recommend that we situate Zoom-bombing as a means of shitposting, by reiterating the severity of shitposting with the same intentions and sinister goals of hate crimes and racial violence. References Back, Les, et al. “Racism on the Internet: Mapping Neo-Fascist Subcultures in Cyber-Space.” Nation and Race: The Developing Euro-American Racist Subculture. Eds. Jeffrey Kaplan and Tore Bjørgo. Northeastern UP, 1993. 73-101. Bonds, Anne, and Joshua Inwood. “Beyond White Privilege: Geographies of White Supremacy and Settler Colonialism.” Progress in Human Geography 40 (2015): 715-733. Conway, Maura, et al. “Right-Wing Extremists’ Persistent Online Presence: History and Contemporary Trends.” The International Centre for Counter-Terrorism – The Hague. Policy Brief, 2019. Costanza-Chock, Sasha. “Design Justice and User Interface Design.” Proceedings of the 33rd Annual ACM Symposium on User Interface Software and Technology. Association for Computing Machinery, 2020. Daniels, Jessie. “The Algorithmic Rise of the ‘Alt-Right.’” Contexts 17 (2018): 60-65. ———. “Race and Racism in Internet Studies: A Review and Critique.” New Media & Society 15 (2013): 695-719. ———. Cyber Racism: White Supremacy Online and the New Attack on Civil Rights. Rowman and Littlefield, 2009. De Certeau, Michel. The Practice of Everyday Life. First ed. U of California P, 1980. Donaldson, Mike. “What Is Hegemonic Masculinity?” Theory and Society 22 (1993): 643-657. Feinburg, Ashley. “This Is The Daily Stormer’s Playbook.” Huffington Post 13 Dec. 2017. <http://www.huffpost.com/entry/daily-stormer-nazi-style-guide_n_5a2ece19e4b0ce3b344492f2>. Foucault, Michel. “The Discourse on Language.” The Archaeology of Knowledge and the Discourse on Language. Trans. A.M. Sheridan Smith. Pantheon, 1971. 215-237. Fraser, Vicki. “Online Bodies and Sexual Subjectivities: In Whose Image?” The Racial Politics of Bodies, Nations and Knowledges. Eds. Barbara Baird and Damien W. Riggs. Newcastle: Cambridge Scholars Publishing, 2015. 116-132. Friedberg, Brian, Gabrielle Lim, and Joan Donovan. “Space Invaders: The Networked Terrain of Zoom Bombing.” Harvard Shorenstein Center, 2020. Graham, Roderick. “Race, Social Media and Deviance.” The Palgrave Handbook of International Cybercrime and Cyberdeviance. Eds. Thomas J. Holt and Adam M. Bossler. Palgrave Macmillan, 2019. 67-90. Hawley, George. Making Sense of the Alt-Right. Columbia UP, 2017. Henry, Matthew G., and Lawrence D. Berg. “Geographers Performing Nationalism and Hetero-Masculinity.” Gender, Place & Culture 13 (2006): 629-645.
Kruglanski, Arie W., et al. “Terrorism in Time of the Pandemic: Exploiting Mayhem.” Global Security: Health, Science and Policy 5 (2020): 121-132. Lankes, R. David. Forged in War: How a Century of War Created Today's Information Society. Rowman & Littlefield, 2021. Ling, Chen, et al. “A First Look at Zoombombing.” Proceedings of the 42nd IEEE Symposium on Security and Privacy. Oakland, 2021. McBain, Sophie. “The Alt-Right, and How the Paranoia of White Identity Politics Fuelled Trump’s Rise.” New Statesman 27 Nov. 2017. <http://www.newstatesman.com/culture/books/2017/11/alt-right-and-how-paranoia-white-identity-politics-fuelled-trump-s-rise>. McEwan, Sean. “Nation of Shitposters: Ironic Engagement with the Facebook Posts of Shannon Noll as Reconfiguration of an Australian National Identity.” Journal of Media and Communication 8 (2017): 19-39. Morgensen, Scott Lauria. “Theorising Gender, Sexuality and Settler Colonialism: An Introduction.” Settler Colonial Studies 2 (2012): 2-22. Moses, A. Dirk. “‘White Genocide’ and the Ethics of Public Analysis.” Journal of Genocide Research 21 (2019): 1-13. Munn, Luke. “Algorithmic Hate: Brenton Tarrant and the Dark Social Web.” VoxPol, 3 Apr. 2019. <http://www.voxpol.eu/algorithmic-hate-brenton-tarrant-and-the-dark-social-web>. Nagle, Angela. Kill All Normies: Online Culture Wars from 4chan and Tumblr to Trump and the Alt-Right. Zero Books, 2017. Nakamura, Lisa, et al. Racist Zoom-Bombing. Routledge, 2021. Newlands, Gemma, et al. “Innovation under Pressure: Implications for Data Privacy during the COVID-19 Pandemic.” Big Data & Society July-December (2020): 1-14. Perry, Barbara, and Ryan Scrivens. “White Pride Worldwide: Constructing Global Identities Online.” The Globalisation of Hate: Internationalising Hate Crime. Eds. Jennifer Schweppe and Mark Austin Walters. Oxford UP, 2016. 65-78. Renzetti, Claire. Feminist Criminology. Routledge, 2013. Ruf, Jessica. “‘Spirit-Murdering' Comes to Zoom: Racist Attacks Plague Online Learning.” Issues in Higher Education 37 (2020): 8. Salazar, Philippe-Joseph. “The Alt-Right as a Community of Discourse.” Javnost – The Public 25 (2018): 135-143. Selfe, Cynthia L., and Richard J. Selfe, Jr. “The Politics of the Interface: Power and Its Exercise in Electronic Contact Zones.” College Composition and Communication 45 (1994): 480-504. Southern Poverty Law Center. “Alt-Right.” <http://www.splcenter.org/fighting-hate/extremist-files/ideology/alt-right>. Wilson, Jason. “Do the Christchurch Shootings Expose the Murderous Nature of ‘Ironic’ Online Fascism?” The Guardian, 16 Mar. 2019. <http://www.theguardian.com/world/commentisfree/2019/mar/15/do-the-christchurch-shootings-expose-the-murderous-nature-of-ironic-online-fascism>.
APA, Harvard, Vancouver, ISO, and other styles
46

Thiele, Franziska. "Social Media as Tools of Exclusion in Academia?" M/C Journal 23, no. 6 (November 28, 2020). http://dx.doi.org/10.5204/mcj.1693.

Full text
Abstract:
Introduction I have this somewhat diffuse concern that at some point, I am in an appointment procedure ... and people say: ‘He has to ... be on social media, [and] have followers ..., because otherwise he can’t say anything about the field of research, otherwise he won’t identify with it … and we need a direct connection to legitimise our discipline in the population!’ And this is where I think: ‘For God’s sake! No, I really don’t want that.’ (Postdoc) Social media such as Facebook or Twitter have become an integral part of many people’s everyday lives and have introduced profound changes to the ways we communicate with each other and about ourselves. Presenting ourselves on social media and creating different online personas has become a normal practice (Vorderer et al. 270). While social media such as Facebook were at first mostly used to communicate with friends and family, they were soon also used for work-related communication (Cardon and Marshall). Later, professional networks such as LinkedIn, which focus on working relations and career management, emerged, as did special interest networks such as the academic social networking sites (ASNS) Academia.edu and ResearchGate, which cater specifically to academic needs. Even though social media have been around for more than 15 years now, academics in general and German academics in particular are rather reluctant users of these tools in a work-related context (König and Nentwich 175; Lo 155; Pscheida et al. 1). This is surprising, as studies indicate that the presence and positive self-portrayal of researchers in social media, as well as the distribution of articles via social networks such as Academia.edu or ResearchGate, have a positive effect on the visibility of academics and on the likelihood of their articles being read and cited (Eysenbach; Lo 192; Terras). Gruzd, Staves, and Wilk even assume that presence in online media could become a relevant criterion in the allocation of scientific jobs. Science is a field where competition for long-term positions is high. In 2017, only about 17% of all scientific personnel in Germany had permanent positions, and of these 10% were professors (Federal Statistical Office 32). Having a professorship is therefore the best shot at obtaining a permanent position in the scientific field. However, the average age of appointment to a professorship is 40 (Zimmer et al. 40), which leads to a long phase of career-related uncertainty. Directing attention to yourself by acquiring knowledge in the use of social media for professional self-representation might therefore offer a career advantage when trying to obtain a professorship. At the same time, social media, which have been praised for giving a voice to the unheard, become a tool for the exclusion of scholars who may not want, or be able, to use these tools as part of their work- and career-related communication, and who might remain unseen and unheard. The author obtained current data on this topic while working on a project on Mediated Scholarly Communication in Post-Normal and Traditional Science under the project lead of Corinna Lüthje. The project was funded by the German Research Foundation (DFG). In the project, German-speaking scholars were asked about their work-related media usage in qualitative interviews. Among them were users and non-users of social media.
For this article, 16 interviews with communication scholars (three PhD students, six postdocs, seven professors) were chosen for closer analysis because, of all the interviewees, they described the (dis)advantages of career-related social media use in the most detail, giving the deepest insights into whether social media contribute to a social exclusion of academics or not. How to Define Social Exclusion (in Academia)? The term social exclusion describes a separation of individuals or groups from mainstream society (Walsh et al.). Exclusion is a practice which implies agency. It can be the result of the actions of others, but individuals can also exclude themselves by choosing not to be part of something, for example of social media and the communication taking place there (Atkinson 14). Exclusion is an everyday social practice, because wherever there is an in-group there will always be an out-group. This is what Bourdieu calls distinction. Symbols and behaviours of distinction function both as signs of demarcation and of belonging (Bourdieu, Distinction). These are not always explicitly communicated, but are part of people’s behaviour; they operate through a practical sense that tells people how to behave appropriately in a given situation. According to Bourdieu, the practical sense is part of the habitus (Bourdieu, The Logic of Practice). The habitus generates patterns of action that come naturally and do not have to be reflected upon by the actor, due to an implicit knowledge that is acquired during the course of (group-specific) socialisation. For scholars, the process of socialisation in an area of research involves the acquisition of a so-called disciplinary self-image, which is crucial to building a disciplinary identity. Every discipline contains a dominant disciplinary self-image, which defines the scientific perspectives, practices, and even media that are typically used and therefore belong to the mainstream of a discipline (Huber 24). Yet, there is a societal mainstream outside of science which scholars are a part of. Furthermore, they have been socialised into other groups as well. Therefore, the disciplinary mainstream and the habitus of its members can be impacted upon by the societal mainstream and other fields of society. For example, societally mainstream social media such as Twitter or Facebook, which focus on establishing and sustaining social connections, might be used for scholarly communication just as well as ASNS. The latter cater to the needs of scholars not just to network with colleagues, but to upload academic articles, share and track them, and consume scholarly information (Meishar-Tal and Pieterse 17). Both can become part of the disciplinary mainstream of media usage. In order to define whether and how social media contribute to forms of social exclusion among communication scholars, it is helpful to first identify to what extent their usage is part of the disciplinary mainstream, and what their including features are. In contrast to this, forms of exclusion will be analysed and discussed on the basis of qualitative interviews with communication scholars. Including Features of Social Media for Communication Scholars The interviews for this essay were first conducted in 2016. At that time all of the 16 communication scholars interviewed used at least one social medium, such as ResearchGate (8), Academia.edu (8), Twitter (10), or Facebook (11), as part of their scientific workflow.
By 2019, all of them had a ResearchGate account and 11 an Academia.edu account, 13 were on Twitter and 13 on Facebook. This supports the notion of one of the professors, who said that he registered with ResearchGate in 2016 because “everyone’s doing that now!” It also indicates that a work-related presence, especially on ResearchGate but also on other social media, is part of the disciplinary mainstream of communication science. The interviewees found that the social media they used helped them to increase their visibility in their own community through promoting their work and networking. They also mentioned that these tools were helpful for keeping up to date on the newest articles and on what was happening in communication science in general. The usage of ResearchGate and Academia.edu focussed on publications. Here the scholars could, as one professor put it, access articles that were not available via their university libraries, as well as “previously unpublished articles”. They also liked that they could see “what other scientists are working on” (professor) and were informed via e-mail “when someone publishes a new publication” (PhD student). The interviewees saw clear advantages to their registration with the ASNS, because they felt that they became “much more visible and present” (postdoc) in the scientific community. Seven of the communication scholars (two PhD students, three postdocs, two professors) shared their publications on ResearchGate and Academia.edu. Two described doing cross-network promotion, where they would write a post about their publications on Twitter or Facebook that linked to the full article on Academia.edu or ResearchGate. The usage of Twitter and especially Facebook focussed a lot more on accessing discipline-related information and social networking. The communication scholars mentioned that various sections and working groups of professional organisations in their research field had accounts on Facebook, where they would post news. A postdoc said that she was on Facebook “because I get a lot of information from certain scientists that I wouldn’t have gotten otherwise”. Several interviewees pointed out that Twitter is “a place where you can find professional networks, become a part of them or create them yourself” (professor). On Twitter, the interviewees explained, they were more likely to make new connections, while Facebook was used to maintain and intensify existing professional relationships. They used it to communicate with their local networks at their institutes as well as for international communication. A postdoc and a professor both mentioned that they perceived Scandinavian or US-American colleagues to be easier to contact via Facebook than via any other medium. One professor described how he used Facebook at international conferences to arrange meetings with people he knew and wanted to meet. But for him Facebook also offered access to more personal information about his colleagues, thus creating a new “mixture of professional respect for the work of other scientists and personal relationships”, which resulted in a “new kind of friendship”. Excluding Features of Social Media for Communication Scholars While everyone may create an Academia.edu, Facebook, or Twitter account, ResearchGate is already an exclusive network in itself, as only people working in a scientific field are allowed to join. In 2016, eight of the interviewees, and in 2019 all of them, had signed up to ResearchGate.
So at least among the communication scholars, this did not seem to be an excluding factor. More of an issue for one of the postdocs was that she did not hold the copyright to upload her published articles to the ASNS and therefore refrained from doing so. Interestingly enough, this did not seem to worry any of the other interviewees, and concerns were mostly voiced in relation to the societal mainstream social media. Although all of the interviewees had an account with at least one social medium, three of them described that they did not use, or had withdrawn from using, Facebook and Twitter. For one professor and one PhD student this had to do with the privacy and data security issues of these networks. The PhD student said that she did not want to be reminded of what she “tweeted maybe 10 years ago somewhere”, and also considered tweeting to be irrelevant in her community. To her, important scientific findings would rather be presented in front of a professional audience and not so much to the “general public”, which she felt was mostly addressed on social media. The professor mentioned that she had been on Facebook since she was a postdoc, but decided to stop using the service when it introduced new rules on data security. On the one hand she saw the “benefits” of the network to “stay informed about what is happening in the community”, especially “in regards to the promotion of young researchers, since some of the junior research groups are very active there”. On the other hand she found it problematic for her own time management and said that she received a lot of the posted information via e-mail as well. A postdoc mentioned that he had a Facebook account to stay in contact with young scholars he had met at a networking event, but never used it. He would rather connect with his colleagues in person at conferences. He felt people would just use social media to “show off what they do and how awesome it is”, which he did not understand. He mentioned that if this “is how you do it now … I don't think this is for me.” Another professor described that Facebook “is the channel for German-speaking science to generate social traffic”, but that he did not like to use it, because “there is so much nonsense ... . It’s just not fun. Twitter is more fun, but the effect is much smaller”, as bigger target groups could be reached via Facebook. The majority of the interviewees did not use mainstream social media out of intrinsic motivation. Rather, they did so because they felt it was expected of them to be there, and that being visible there was important for their careers. Many were worried that they would miss out on opportunities to promote themselves, network, and receive information if they did not use Twitter or Facebook. One of the postdocs mentioned, for example, that she was not a fan of Twitter and would often not know what to write, but that the professor she worked for had told her she needed to tweet regularly. But she did see the benefits, saying she had underestimated the effect at first: “I think, if you want to keep up, then you have to do that, because people don’t notice you.” This also indicates a disciplinary mainstream of social media usage. Conclusion The interviews indicate that the usage of ResearchGate in particular, but also of Academia.edu and of the societal mainstream social media platforms Twitter and Facebook, has become part of the disciplinary mainstream of communication science and the habitus of many of its members.
ResearchGate mainly targets people working in the scientific field, while excluding everyone else. Its focus on publication sharing makes the network very attractive among its main target group, and it serves at the same time as a symbol of distinction from other groups (Bourdieu, Distinction). Yet it also raises copyright issues, which led at least one of the participants to refrain from using this option. The societal mainstream social media Twitter and Facebook, on the other hand, have a broader reach and were more often used by the interviewees for social networking purposes than the ASNS. The interviewees emphasised the benefits of Twitter and Facebook for exchanging information and connecting with others. Factors that led the communication scholars to refrain from using the networks, and thus were excluding factors, were data security and privacy concerns; disliking that the networks were used to “show off”; and considering Twitter and Facebook unfit for addressing the scholarly target group properly. The last statement on the target group, made by a PhD student, does not seem to represent the mainstream of the communication scholars interviewed, however. Many of them were using Twitter and Facebook for scholarly communication and rather seemed to find them advantageous. Still, this perception of the disciplinary mainstream led to her not using them for work-related purposes, thus excluding her from their advantages. Even though, as one professor described it, a lot of information shared via Facebook is often spread through other communication channels as well, some can only be received via the networks. Although social media are mostly just a substitute for face-to-face communication, by not using them scholars will miss out on the possibility of creating the “new kind of friendship” another professor mentioned, where professional and personal relations mix. The results of this study show that social media use is advantageous for academics, as the networks offer possibilities to access exclusive information, form new kinds of relations, and promote oneself and one’s publications. At the same time, those not using these social media are excluded and might experience career-related disadvantages. As described in the introduction, academia is a competitive environment where many people try to obtain a few permanent positions. By default, this leads to processes of exclusion rather than integration. Any means of standing out from competitors is welcome to emerging scholars, and any career-related advantage will be used. If the growth in the number of communication scholars in the sample signing up to social networks between 2016 and 2019 is any indication, it is likely that the networks have not yet reached their full potential as tools for career advancement among scientific communities, and will become more important in the future. Now one could argue that the communication scholars who were interviewed for this essay are a special case, because they might use social media more actively than other scholars due to their area of research. Though this might be true, studies of other scholarly fields show that social media are being applied just the same (though maybe less extensively), and that they are used to establish cooperations and increase the number of people reading and citing one’s publications (Eysenbach; Lo 192; Terras).
The question is whether researchers will be able to avoid using social media when striving for a career in science in the future, which can only be answered by further research on the topic. References Atkinson, A.B. “Social Exclusion, Poverty and Unemployment.” Exclusion, Employment and Opportunity. Eds. A.B. Atkinson and John Hills. London: London School of Economics and Political Science, 1998. 1–20. Bourdieu, Pierre. Distinction: A Social Critique of the Judgement of Taste. Cambridge, Massachusetts: Harvard UP, 1984. ———. The Logic of Practice. Stanford, California: Stanford UP, 1990. Cardon, Peter W., and Bryan Marshall. “The Hype and Reality of Social Media Use for Work Collaboration and Team Communication.” International Journal of Business Communication 52.3 (2015): 273–93. Eysenbach, Gunther. “Can Tweets Predict Citations? Metrics of Social Impact Based on Twitter and Correlation with Traditional Metrics of Scientific Impact.” Journal of Medical Internet Research 13.4 (2011): e123. Federal Statistical Office [Statistisches Bundesamt]. Hochschulen auf einen Blick: Ausgabe 2018. 2018. 27 Dec. 2019 <https://www.destatis.de/Migration/DE/Publikationen/Thematisch/BildungForschungKultur/Hochschulen/BroschuereHochschulenBlick.html>. Gruzd, Anatoliy, Kathleen Staves, and Amanda Wilk. “Tenure and Promotion in the Age of Online Social Media.” Proceedings of the American Society for Information Science and Technology 48.1 (2011): 1–9. Huber, Nathalie. Kommunikationswissenschaft als Beruf: Zum Selbstverständnis von Professoren des Faches im deutschsprachigen Raum. Köln: Herbert von Halem Verlag, 2010. König, René, and Michael Nentwich. “Soziale Medien in der Wissenschaft.” Handbuch Soziale Medien. Eds. Jan-Hinrik Schmidt and Monika Taddicken. Wiesbaden: Springer Fachmedien, 2017. 170–188. Lo, Yin-Yueh. “Online Communication beyond the Scientific Community: Scientists' Use of New Media in Germany, Taiwan and the United States to Address the Public.” 2016. 17 Oct. 2019 <https://refubium.fu-berlin.de/bitstream/handle/fub188/7426/Diss_Lo_2016.pdf?sequence=1&isAllowed=y>. Meishar-Tal, Hagit, and Efrat Pieterse. “Why Do Academics Use Academic Social Networking Sites?” IRRODL 18.1 (2017). Pscheida, Daniela, Claudia Minet, Sabrina Herbst, Steffen Albrecht, and Thomas Köhler. Nutzung von Social Media und onlinebasierten Anwendungen in der Wissenschaft: Ergebnisse des Science 2.0-Survey 2014. Dresden: Leibniz-Forschungsverbund „Science 2.0“, 2014. 17 Mar. 2020 <https://d-nb.info/1069096679/34>. Terras, Melissa. The Verdict: Is Blogging or Tweeting about Research Papers Worth It? LSE Impact Blog, 2012. 28 Dec. 2019 <https://blogs.lse.ac.uk/impactofsocialsciences/2012/04/19/blog-tweeting-papers-worth-it/>. Vorderer, Peter, et al. “Der mediatisierte Lebenswandel: Permanently Online, Permanently Connected.” Publizistik 60.3 (2015): 259–76. Walsh, Kieran, Thomas Scharf, and Norah Keating. “Social Exclusion of Older Persons: A Scoping Review and Conceptual Framework.” European Journal of Ageing 14.1 (2017): 81–98. Zimmer, Annette, Holger Krimmer, and Freia Stallmann. “Winners among Losers: Zur Feminisierung der deutschen Universitäten.” Beiträge zur Hochschulforschung 4.28 (2006): 30-57. 17 Mar. 2020 <https://www.uni-bremen.de/fileadmin/user_upload/sites/zentrale-frauenbeauftragte/Berichte/4-2006-zimmer-krimmer-stallmann.pdf>.
APA, Harvard, Vancouver, ISO, and other styles
47

Quan, Nguyen Van, and Vu Cong Giao. "E-government and State Governance in the Modern Time." VNU Journal of Science: Legal Studies 35, no. 3 (September 24, 2019). http://dx.doi.org/10.25073/2588-1167/vnuls.4202.

Full text
Abstract:
Currently, e-government is one of the important tools to improve the efficiency of state management and the quality of public services. E-government applications contribute to meeting the requirements of modern governance, such as publicity, transparency, accountability, and timeliness of public administration, as well as citizen participation. Therefore, e-government is being developed and applied by various countries around the world, including Vietnam. Keywords: E-government, Digital Government, Open Government, Governance, State Governance. References: [1] ADB (2005), Governance: Sound Development Management, at https://www.adb.org/sites/default/files/institutional-document/32027/govpolicy.pdf, accessed 18 Dec. 2018. [2] Sabri Boubaker, Duc Khuong Nguyen (editors), Corporate Governance and Corporate Social Responsibility: Emerging Markets Focus, World Scientific Publishing Co Pte Ltd, 2014, p. 377. As cited in Nguyễn Văn Quân, Nguồn gốc và sự phát triển của quản trị tốt, in “Quản trị tốt: Lý luận và thực tiễn”, Vũ Công Giao, Nguyễn Hoàng Anh, Đặng Minh Tuấn, Nguyễn Minh Tuấn (co-editors), NXB Chính trị Quốc gia, 2017. [3] Michiel Backus, “e-Governance and Developing Countries: Introduction and Examples”, Research Report No. 3, April 2001, see https://bibalex.org/baifa/Attachment/Documents/119334.pdf, accessed 18 Dec. 2018. [4] S. Bhatnagar, E-government: From Vision to Implementation – A Practical Guide with Case Studies, New Delhi; Thousand Oaks, Calif.: Sage Publications, 2004. [5] Vũ Công Giao, Nguyễn Hoàng Anh, Đặng Minh Tuấn, Nguyễn Minh Tuấn (co-editors), “Quản trị tốt: Lý luận và thực tiễn”, NXB Chính trị Quốc gia, 2017. [6] World Bank (2006), Making PRSP Inclusive, at http://siteresources.worldbank.org/DISABILITY/Resources/280658-1172608138489/MakingPRSPInclusive.pdf, accessed 18 Dec. 2018. [7] Global Agenda Council on the Future of Government – World Economic Forum (2011), The Future of Government: Lessons Learned from around the World, see http://www3.weforum.org/docs/EU11/WEF_EU11_FutureofGovernment_Report.pdf. [8] Nagy Hanna, Transforming Government and Building the Information Society: Challenges and Opportunities for the Developing World, Nagy Hanna and Peter T. Knight (editors), Springer, NY, 2010. [9] R. Heeks, “iGovernment: Understanding e-Governance for Development”, Working Paper Series, Paper No. 11, Institute for Development Policy and Management, see http://unpan1.un.org/intradoc/groups/public/documents/NISPAcee/UNPAN015484.pdf, accessed 18 Dec. 2018. [10] Richard Heeks, “Most e-Government-for-Development Projects Fail: How Can Risks Be Reduced?”, 2003, see http://unpan1.un.org/intradoc/groups/public/documents/cafrad/unpan011226.pdf, accessed 18 Dec. 2018. [11] J. Guida and M. Crow, “e-Government and e-Governance”, in Unwin, T. (ed.), ICT4D: Information and Communication Technology for Development, Cambridge University Press, 2009, see https://www.itu.int/ITU-D/cyb/app/docs/e-gov_for_dev_countries-report.pdf, accessed 18 Dec. 2018. [12] Bob Jessop, The State: Past, Present, Future, Polity, 2016, pp. 166-169, at http://www.ritsumei.ac.jp/acd/re/k-rsc/hss/book/pdf/vol07_08.pdf. [13] Joseph S. Nye Jr. and John D. Donahue (editors) (2000), Governance in a Globalizing World, Brookings Institution Press. [14] Joseph Stiglitz, “Globalization and the Economic Role of the State in the New Millennium”, Industrial and Corporate Change, 2003. [15] Báo Lao động, Xây dựng chính phủ điện tử, rào cản nào?, see https://laodong.vn/thoi-su/xay-dung-chinh-phu-dien-tu-rao-can-nao-631923.ldo, accessed 18 Dec. 2018. [16] Phạm Tiến Luật, Những thách thức trong xây dựng chính phủ điện tử ở Việt Nam, Tạp chí Quản lý nhà nước, no. 264 (1/2018). [17] D. Nute, “Net Eases Government Purchasing Process”, The American City & County Journal, 117 (1), 2002; K.A. O’Connell, “Computerizing Government: The Next Generation”, The American City & County Journal, 118 (8), 2003. [18] OECD (2004), Principles of Corporate Governance, at http://www.oecd.org/corporate/ca/corporategovernanceprinciples/31557724.pdf, accessed 18 Dec. 2018. [19] United Nations (2002), World Public Sector Report: Globalization and the State, at https://publicadministration.un.org/publications/content/PDFs/E-Library%20Archives/World%20Public%20Sector%20Report%20series/World%20Public%20Sector%20Report.2001.pdf, accessed 11 Nov. 2018. [20] United Nations Economic and Social Commission for Asia and the Pacific, What Is Good Governance?, at https://www.unescap.org/sites/default/files/good-governance.pdf, accessed 18 Dec. 2018. [21] UNDP (1997), Governance & Sustainable Human Development: A UNDP Policy Document, New York: United Nations Development Programme, 1997. [22] Jim Macnamara, The Quadrivium of Online Public Consultation: Policy, Culture, Resources, Technology. As cited in Nguyễn Đức Lam, Quản trị tốt: những chuẩn mực chung, op. cit.; Vũ Công Giao, Nguyễn Hoàng Anh, Đặng Minh Tuấn, Nguyễn Minh Tuấn (co-editors), “Quản trị tốt: Lý luận và thực tiễn”, NXB Chính trị Quốc gia, 2017. [23] United Nations, Department of Economic and Social Affairs, Division for Public Administration and Development Management, “The Global e-Government Survey 2008”, see https://publicadministration.un.org/egovkb/portals/egovkb/Documents/un/2008-Survey/unpan028607.pdf, accessed 18 Dec. 2018.
APA, Harvard, Vancouver, ISO, and other styles
48

Ensminger, David Allen. "Populating the Ambient Space of Texts: The Intimate Graffiti of Doodles. Proposals Toward a Theory." M/C Journal 13, no. 2 (March 9, 2010). http://dx.doi.org/10.5204/mcj.219.

Full text
Abstract:
In a media saturated world, doodles have recently received the kind of attention usually reserved for coverage of racy extramarital affairs, corrupt governance, and product malfunction. Former British Prime Minister Blair’s private doodling at a World Economic Forum meeting in 2005 raised suspicions that he, according to one keen graphologist, struggled “to maintain control in a confusing world”, which implies he was attempting to cohere a scattershot, fragmentary series of events (Spiegel). However, placid-faced Microsoft CEO Bill Gates, who sat nearby, actually scrawled the doodles. In this case, perhaps the scrawls mimicked the ambience in the room: Gates might have been ‘tuning’–registering the ‘white noise’ of the participants, letting his unconscious dictate doodles as a way to cope with the dissonance trekking in with the officialspeak. The doodles may have documented and registered the space between words, acting like deposits from his gestalt. Sometimes the most intriguing doodles co-exist with printed texts. This includes common vernacular graffiti that lines public and private books and magazines. Such graffiti exposes tensions in the role of readers as well as horror vacui: a fear of unused, empty space. Yet, school children fingering fresh pages and stiff book spines for the first few times often consider their book pages as sanctioned, discreet, and inviolable. The book is an object of financial and cultural investment, imbued with both mystique and ideologies. Yet, in the e-book era, the old-fashioned, physical page is a relic of sorts, a holdover from coarse papyrus culled from wetland sedge, linking us to the First Dynasty in Egypt. Some might consider the page as a vessel for typography, a mere framing device for text. The margins may reflect a perimeter of nothingness, an invisible borderland that doodles render visible by inhabiting them. Perhaps the margins are a bare landscape, like unmarred flat sand in a black and white panchromatic photo with unique tonal signature and distinct grain. Perhaps the margins are a mute locality, a space where words have evaporated, or a yet-to-be-explored environment, or an ambient field. Then comes the doodle, an icon of vernacular art. As a modern folklorist, I have studied and explored vernacular art at length, especially forms that may challenge and fissure aesthetic, cultural, and social mores, even within my own field. For instance, I contend that Grandma Prisbrey’s “Bottle Village”, featuring millions of artfully arranged pencils, bottles, and dolls culled from dumps in Southern California, is a syncretic culturescape with underlying feminist symbolism, not merely the product of trauma and hoarding (Ensminger). Recently, I flew to Oregon to deliver a paper on Mexican-American gravesite traditions. In a quest for increased multicultural tolerance, I argued that inexpensive dimestore objects left on Catholic immigrant graves do not represent a messy landscape of trinkets but unique spiritual environments with links to customs 3,000 years old. For me, doodles represent a variation on graffiti-style art with cultural antecedents stretching back throughout history, ranging from ancient scrawls on Greek ruins to contemporary park benches (with chiseled names, dates, and symbols), public bathroom latrinalia, and spray can aerosol art, including ‘bombing’ and ‘tagging’ hailed as “Spectacular Vernaculars” by Russell Potter (1995).
Noted folklorist Alan Dundes mused on the meaning of latrinalia in Here I Sit – A Study of American Latrinalia (1966), which has inspired pop culture books and web pages for the preservation and discussion of such art (see for instance, www.itsallinthehead.com/gallery1.html). Older texts such as Classic American Graffiti by Allen Walker Read (1935), originally intended for “students of linguistics, folk-lore, abnormal psychology,” reveal the field’s longstanding interest in marginal, crude, and profane graffiti. Yet, to my knowledge, a monograph on doodles has yet to be published by a folklorist, perhaps because the art form is considered too idiosyncratic, too private, the difference between jots and doodles too blurry for a taxonomy, and the form not the domain of identifiable folk groups. In addition, the doodles in texts often remain hidden until single readers encounter them. No broad public interaction is likely, unless a library text circulates freely, which may not occur after doodles are discovered. In essence, the books become tainted, infected goods. Whereas latrinalia speaks openly and irreverently, doodles feature a different scale and audience. Doodles in texts may represent a kind of speaking from the ‘margin’s margins,’ revealing the reader-cum-writer’s idiosyncratic, self-meaningful, and stylised hieroglyphics from the ambient margins of one’s consciousness set forth in the ambient margins of the page. The original page itself is an ambient territory that allows the meaning of the text to take effect. When those liminal spaces (both between and betwixt, in which the rules of page format, design, style, and typography are abandoned) are altered by the presence of doodles, the formerly blank, surplus, and soft spaces of the page offer messages coterminous with the text, often allowing readers to speak, however haphazardly and unconsciously, with and against the triggering text. The bleached whiteness can become a crowded milieu in the hands of a reader re-scripting the ambient territory. If the book is borrowed, then the margins are also an intimate negotiation with shared or public space. The cryptic residue of the doodler now resides, waiting, for the city of eyes. Throughout history, both admired artists and Presidents regularly doodled. Famed Italian Renaissance painter Filippo Lippi avoided strenuous studying by doodling in his books (Van Cleave 44). Both sides of the American political spectrum have produced plentiful inky depictions as well: roughshod Democratic President Johnson drew flags and pagodas; former Hollywood fantasy fulfiller turned politician Republican President Reagan’s specialty was western themes, recalling tropes both from his actor period and his time acting as President; meanwhile, former law student turned current President, Barack Obama, has sketched members of Congress and the Senate for charity auctions. These doodles are rich fodder for both psychologists and cross-discipline analysts who propose theories regarding the automatic writing and self-styled miniature pictures of civic leaders. Doodles allow graphologists to navigate and determine the internal, cognitive fabric of the maker. To critics, they exist as mere trifles and offer nothing more than an iota of insight; doodles are not uncanny offerings from the recesses of memory, like bite-sized Rorschach tests, but simply sloppy scrawls of the bored. Ambient music theory may shed some light.
Timothy Morton argues that Brian Eno aimed to make music that evoked “space whose quality had become minimally significant” and to “deconstruct the opposition … between figure and ground.” Doodles may yield the same attributes. After a doodle is inserted into a text, the typography loses its primacy. There is a merging of the horizons. The text of the author can conflate with the text of the reader in an uneasy dance of meaning: the page becomes an interface revealing a landscape of signs and symbols with multiple intelligences–one manufactured and condoned, the other vernacular and unsanctioned. A fixed end or beginning between the two no longer exists. The ambient space allows potential energies to hover at the edge, ready to illustrate a tension zone and occupy the page. The blank spaces keep inviting responses. An emergent discourse is always in waiting, always threatening to overspill the text’s intended meaning. In fact, the doodles may carry more weight than the intended text: the hierarchy between authorship and readership may topple. Resistant reading may take shape during these bouts. The doodle is an invasion and signals the geography of disruption, even when innocuous. It is a leveling tool. As doodlers place it alongside official discourse, they move away from positions of passivity, being mere consumers, and claim their own autonomy and agency. The space becomes co-determinant as boundaries are blurred. The destiny of the original text’s meaning is deferred. The habitus of the reader becomes embodied in the scrawl, and the next reader must negotiate and navigate the cultural capital of this new author. As such, the doodle constitutes an alternative authority and economy of meaning within the text. Recent studies indicate that doodling, often regarded as behavior that announces a person’s boredom and withdrawal, is actually a very special tool to prevent memory loss. Jackie Andrade, an expert from the School of Psychology at the University of Plymouth, maintains that doodling actually “offsets the effects of selective memory blockade,” which yields a surprising result (quoted in “Doodling Gets”). Doodlers exhibit 29% more memory recall than those who passively listen, frozen in an unequal bond with the speaker/lecturer. Students who doodle actually retain more information and are likely more productive due to their active listening. They adeptly absorb information while students who stare patiently or daydream falter. Furthermore, in a 2006 paper, Andrew Kear argues that “doodling is a way in which students, consciously or not, stake a claim of personal agency and challenge some [of] the values inherent in the education system” (2). As a teacher concerned with the engagement of students, he asked three classes to submit their doodles. Letting them submit any two-dimensional graphic or text made during a class (even if made from body fluid), he soon discovered examples of “acts of resistance” in “student-initiated effort[s] to carve out a sense of place within the educational institution” (6). Not simply the work of an ennui-prone teenager or a proto-surrealist trying to render some automatic writing from the fringes of cognition, a student’s doodling may represent contested space both in terms of the page itself and the ambience of the environment.
The doodle indicates tension, and according to Kear, reflects students reclaiming “their own self-recognized voice” (6). In a widely referenced 1966 article (known as the “doodle” article) intended to describe the paragraph organisational styles of different cultures, Robert Kaplan used five doodles to investigate a writer’s thought patterns, which are rooted in cultural values. Now considered rather problematic by some critics after being adopted by educators for teacher-training materials, Kaplan’s doodles-as-models suggest that “English speakers develop their ideas in a linear, hierarchal fashion and ‘Orientals’ in a non-linear, spiral fashion…” (Severino 45). In turn, when used as pedagogical tools, these graphics, intentionally or not, may lead to an “ethnocentric, assimilationist stance” (45). In this case, doodles likely shape the discourse of English as a Second Language instruction. Doodles also represent a unique kind of “finger trace”, not unlike prints from the tips of a person’s fingers and snowflakes. Such symbol systems might be used as “a means of lightweight authentication,” according to Christopher Varenhorst of MIT (1). Doodles, he posits, can be used as “passdoodles”–a means by which a program can “quickly identify users.” They are singular expressions that are quirky and hard to duplicate; thus, doodles could serve as substitute methods of verifying people who desire devices that can safeguard their privacy without having to rely on an ever-increasing number of passwords. Doodles may represent one such key (a toy sketch of such doodle-based verification appears below). For many years, psychologists and psychiatrists have used doodles as therapeutic tools in their treatment of children who have endured hardship, ailments, and assault. They may indicate conditions, explain various symptoms and pathologies, and reveal patterns that otherwise may go unnoticed. For instance, doodles may “reflect a specific physical illness and point to family stress, accidents, difficult sibling relationships, and trauma” (Lowe 307). Lowe reports that children who create a doodle featuring their own caricature on the far side of the page, distant from an image of parent figures on the same page, may be experiencing detachment, while the portrayal of a father figure with “jagged teeth” may indicate a menace. What may be difficult to investigate in a doctor’s office conversation or clinical overview may, in fact, be gleaned from “the evaluation of a child’s spontaneous doodle” (307). So, if children are suffering physically or psychologically and are unable to express themselves in a fully conscious and articulate way, doodles may reveal their “self-concept” and how they feel about their bodies; therefore, such creative and descriptive inroads are important diagnostic tools (307). Austrian-born researcher Erich Guttman and his cohort Walter MacLay both pioneered art therapy in England during the mid-twentieth century. They posited that doodles might offer some insight into the condition of schizophrenics. Guttman was intrigued by both the paintings associated with the Surrealist movement and the pioneering, much-debated work of Sigmund Freud. Although Guttman mostly studied professionally trained artists who suffered from delusions and other conditions, he also collected a variety of art from patients, including those undergoing mescaline therapy, which alters a person’s consciousness.
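Returning to Varenhorst’s passdoodle idea, the toy sketch below illustrates the general principle of doodle-based verification: reduce a drawn stroke sequence to a coarse, repeatable signature and store only a hash of it, as one would a password. This is a hypothetical illustration of the concept, not Varenhorst’s actual method; the grid size and function names are assumptions.

import hashlib

def doodle_signature(strokes):
    # strokes: list of (x, y) points in [0, 1]; assumes at least one point.
    # Quantise to a 4x4 grid so small hand wobble does not change the signature.
    cells = [(min(int(x * 4), 3), min(int(y * 4), 3)) for x, y in strokes]
    # Collapse consecutive duplicates so drawing speed is ignored.
    path = [cells[0]]
    for c in cells[1:]:
        if c != path[-1]:
            path.append(c)
    return hashlib.sha256(str(path).encode()).hexdigest()

# Enrolment: store only the signature, like a password hash.
enrolled = doodle_signature([(0.05, 0.05), (0.5, 0.5), (0.9, 0.9)])

# Login: an attempt matches if it traces the same coarse grid path.
attempt = doodle_signature([(0.1, 0.1), (0.6, 0.6), (0.95, 0.95)])
print(attempt == enrolled)  # True: same path despite wobble

A real scheme would need tolerance tuning and interpolation between sampled points, but the design point survives: the doodle functions as a secret that is easy for its author to reproduce and hard for an observer to duplicate.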
In a stroke of luck, Guttman and MacLay were able to convince a newspaper editor at the Evening Standard to hand over more than 9,000 doodles that readers had submitted for a contest, each coded with the person’s name, age, and occupation. This invaluable data let the researchers compare the work of those hospitalised with that of the larger population. Their results, released in 1938, contain several key declarations and remain significant contributions to the field. Subsequently, Francis Reitman recounted them in his own book Psychotic Art: doodles “release the censor of the conscious mind,” allowing a person to “relax, which to creative people was indispensable to production”; no appropriate descriptive terminology could be agreed upon; “doodles are not communications,” for the meaning is only apparent when analysed individually; and doodles are “self-meaningful” (37). Doodles, the authors also established, could be divided into this taxonomy: “stereotypy, ornamental details, movements, figures, faces and animals” or those “depicting scenes, medley, and mixtures” (37). The authors also noted that practitioners from the Jungian school of psychology often used “spontaneously produced drawings” that were quite “doodle-like in nature” in their own discussions (37). As a modern folklorist, I venture that doodles offer rich potential for our discipline as well. At this stage, I am offering a series of dictums, especially in regard to doodles that are commonly found adjacent to text in books and magazines, notebooks and journals, that may be expanded upon and investigated further. Doodles allow the reader to repopulate the text with ideogram-like expressions that are highly personalised, even inscrutable, like ambient sounds. Doodles re-purpose the text. The text is no longer unidirectional. The text becomes a point of convergence between writer and reader. The doodling allows for such a conversation, bilateral flow, or “talking back” to the text. Doodles reveal a secret language–informal codes that hearken back to the “lively, spontaneous, and charged with feeling” works of child art or naïve art that Victor Sanua discusses as being replaced in a child’s later years by art that is “stilted, formal, and conforming” (62). Doodling animates blank margins, the dead space of the text adjacent to the script, making such places ripe for spontaneous, fertile, and exploratory markings. Doodling reveals a democratic, participatory ethos. No text is too sacred, no narrative too inviolable. Anything can be reworked by the intimate graffiti of the reader. The authority of the book is not fixed; readers negotiate and form a second intelligence imprinted over the top of the original text, blurring modes of power. Doodles reveal liminal moments. Since the reader is unmonitored, he or she can express thoughts that may be considered marginal or taboo by the next reader. The original subject of the book itself does not restrict the reader. Thus, within the margins of the page, a brief suspension of boundaries and borders, authority and power, occurs. The reader hides in anonymity, free to reroute the meaning of the book. Doodling may convey a reader’s infantilism. Every book can become a picture book. This art can be the route returning a reader to the ambience of childhood. Doodling may constitute Illuminated/Painted Texts in reverse, commemorating the significance of the object in hitherto unexpected forms and revealing the reader’s codex.
William Blake adorned his own poems by illuminating the skin/page that held his living verse; common readers may do so too, in naïve, nomadic, and primitive forms. Doodling demarcates tension zones, yielding social-historical insights into eras while offering psychological glimpses and displaying aesthetic values of readers-cum-writers. Doodling reveals margins as inter-zones, replete with psychogeography. While the typography is sanctioned, legitimate, normalised, and official discourse (“chartered” and “manacled,” to hijack lines from William Blake), the margins are a vernacular depository, a terminus, allowing readers a sense of agency and autonomy. The doodled page becomes a visible reminder and signifier: all pages are potentially “contested” spaces. Whereas graffiti often allows a writer to hide anonymously in the light in a city besieged by multiple conflicting texts, doodles allow a reader-cum-writer’s imprint to live in the cocoon of a formerly fossilised text, waiting for the light. Upon being opened, the book, now a chimera, truly breathes. Further exploration and analysis should likely consider several issues. What truly constitutes and shapes the role of agent and reader? Is the reader an agent all the time, or only when offering resistant readings through doodles? How is a doodler’s agency mediated by the author or the format of texts in forms that I have yet to map? Lastly, if, as I have argued, the ambient space allows potential energies to hover at the edge, ready to illustrate a tension zone and occupy the page, what occurs in the age of digital or e-books? Will these platforms signal an age of acquiescence to manufactured products or an era of vernacular responses, somehow hitched to html code and PDF file infiltration? Will bytes totally replace type in the near future, shaping unforeseen actions by doodlers? Attached Figures Figure One presents the intimate graffiti of my grandfather, found in the 1907 edition of his McGuffey’s Eclectic Spelling Book. The depiction is simple, even crude, revealing a figure found on the page adjacent to Lesson 248, “Of Characters Used in Punctuation,” which lists the perfunctory functions of commas, semicolons, periods, and so forth. This doodle may offset the routine, rote, and rather humdrum memorisation of such grammatical tools. The smiling figure may embody and signify joy on an otherwise machine-made bare page, a space where my grandfather illustrated his desires (to lighten a mood, to ease dissatisfaction?). Historians Joe Austin and Michael Willard examine how youth have been historically left without legitimate spaces in which to live out their autonomy outside of adult surveillance. For instance, graffiti often found on walls and trains may reflect a sad reality: young people are pushed to appropriate “nomadic, temporary, abandoned, illegal, or otherwise unwatched spaces within the landscape” (14). Indeed, book graffiti, like the graffiti found on surfaces throughout cities, may offer youth a sense of appropriation, authorship, agency, and autonomy: they take the page of the book, commit their writing or illustration to the page, discover some freedom, and feel temporarily independent even while they are young and disempowered. Figure Two depicts the doodles of experimental filmmaker Jim Fetterley (Animal Charm productions) during his tenure as a student at the Art Institute of Chicago in the early 1990s.
His two doodles flank the text of “Lady Lazarus” by Sylvia Plath, regarded by most readers as an autobiographical poem that addresses her own suicide attempts. The poem’s title is grounded in the Biblical story, told in the Gospel of John, of Lazarus of Bethany, who was resurrected from the dead. The poem also alludes to the Holocaust (“Nazi Lampshades”), the folklore surrounding cats (“And like the cat I have nine times to die”), and impending omens of death (“eye pits” … “sour breath”). The lower doodle seems to signify a motorised tank-like machine, replete with a furnace or engine compartment on top that bellows smoke. Such ominous images, saturated with potential cartoon-like violence, may link to the World War II references in the poem. Meanwhile, the upper doodle seems curiously insect-like, and Fetterley’s name can be found within the illustration, just as Plath’s poem is self-reflexive and addresses her own plight. Most viewers might find the image a bit more lighthearted than the poem, a caricature of something biomorphic and surreal, but not very lethal. Again, perhaps this is a counter-message to the weight of the poem, a way to balance the mood and tone, or it may well represent the larval-like apparition that haunts the very thoughts of Plath in the poem: the impending disease of her mind, as understood by the wary reader.
References
Austin, Joe, and Michael Willard. “Introduction: Angels of History, Demons of Culture.” Generations of Youth: Youth Cultures and History in Twentieth-Century America. Eds. Joe Austin and Michael Willard. New York: NYU Press, 1998.
“Doodling Gets Its Due: Those Tiny Artworks May Aid Memory.” World Science 2 March 2009. 15 Jan. 2009 ‹http://www.world-science.net/othernews/090302_doodle›.
Dundes, Alan. “Here I Sit – A Study of American Latrinalia.” Papers of the Kroeber Anthropological Society 34: 91-105.
Ensminger, David. “All Bottle Up: Reinterpreting the Culturescape of Grandma Prisbey.” Adirondack Review 9.3 (Fall 2008). ‹http://adirondackreview.homestead.com/ensminger2.html›.
Kear, Andrew. “Drawings in the Margins: Doodling in Class an Act of Reclamation.” Graduate Student Conference. University of Toronto, 2006. ‹http://gradstudentconference.oise.utoronto.ca/documents/185/Drawing%20in%20the%20Margins.doc›.
Lowe, Sheila R. The Complete Idiot’s Guide to Handwriting Analysis. New York: Alpha Books, 1999.
Morton, Timothy. “‘Twinkle, Twinkle Little Star’ as an Ambient Poem; a Study of Dialectical Image; with Some Remarks on Coleridge and Wordsworth.” Romantic Circles Praxis Series (2001). 6 Jan. 2009 ‹http://www.rc.umd.edu/praxis/ecology/morton/morton.html›.
Potter, Russell A. Spectacular Vernaculars: Hip Hop and the Politics of Postmodernism. Albany: State University of New York, 1995.
Read, Allen Walker. Classic American Graffiti: Lexical Evidence from Folk Epigraphy in Western North America. Waukesha, Wisconsin: Maledicta Press, 1997.
Reitman, Francis. Psychotic Art. London: Routledge, 1999.
Sanua, Victor. “The World of Mystery and Wonder of the Schizophrenic Patient.” International Journal of Social Psychiatry 8 (1961): 62-65.
Severino, Carol. “The ‘Doodles’ in Context: Qualifying Claims about Contrastive Rhetoric.” The Writing Center Journal 14.1 (Fall 1993): 44-62.
Van Cleave, Claire. Master Drawings of the Italian Renaissance. Cambridge, Mass.: Harvard UP, 2007.
Varenhost, Christopher. Passdoodles: A Lightweight Authentication Method. Research Science Institute. Cambridge, Mass.: Massachusetts Institute of Technology, 2004.
49

Wolbring, Gregor. "Is There an End to Out-Able? Is There an End to the Rat Race for Abilities?" M/C Journal 11, no. 3 (July 2, 2008). http://dx.doi.org/10.5204/mcj.57.

Full text
Abstract:
Introduction
The purpose of this paper is to explore discourses of ‘ability’ and ‘ableism’. Terms such as abled, dis-abled, en-abled, dis-enabled, diff-abled, transable, and out-able assume different meanings as we eliminate ‘species-typical’ as the norm and make beyond ‘species-typical’ the norm. This paper contends that there is a pressing need for society to deal with ableism in all of its forms and its consequences. The discourses around ‘able’ and ‘ableism’ fall into two main categories. The first is the discourse around species-typical versus sub-species-typical, as identified by certain powerful members of the species. This discourse has a long history and is linked to the discourse around health, disease and medicine. It is advanced by people (Harris, "One Principle"; Watson; Duke) who portray disabled people within a medical model of disability (Finkelstein; Penney; Malhotra; British Film Institute; Oliver), a model that classifies disabled people as having an intrinsic defect, an impairment that leads to ‘subnormal’ functioning. Disability Studies is an academic field that questions the medical model and the issue of ‘who defines whom’ as sub-species-typical (Taylor, Shoultz, and Walker; Centre for Disability Studies; Disability and Human Development Department; Disabilitystudies.net; Society for Disability Studies; Campbell). The other category is the discourse around the claim that one has, as a species or a social group, superior abilities compared to other species or other segments of one’s species, whereby this superiority is seen as species-typical. Science and technology research and development and different forms of ableism have always been, and will continue to be, inter-related. The desire and expectation for certain abilities has led to science and technology research and development that promise the fulfilment of these desires and expectations. And science and technology research and development have led to products that enabled new abilities and new expectations and desires for new forms of abilities and ableism. Emerging forms of science and technology, in particular the converging of nanotechnology, biotechnology, information technology, cognitive sciences and synthetic biology (NBICS), increasingly enable the modification of the appearance and functioning of biological structures, including the human body and the bodies of other species, beyond existing norms and inter- and intra-species-typical boundaries. This leads to a changed understanding of the self, the body, relationships with others of the species, and with other species and the environment. There are also accompanying changes in anticipated, desired and rejected abilities, and the transhumanisation of the two ableism categories. A transhumanised form of ableism is a network of beliefs, processes and practices that perceives the improvement of biological structures, including the human body, and functioning beyond species-typical boundaries as the norm, as essential. It judges an unenhanced biological structure, including the human body, as a diminished state of existence (Wolbring, "Triangle"; Wolbring, "Why"; Wolbring, "Glossary").
A by-product of this emerging form of ableism is the appearance of the ‘Techno Poor impaired and disabled people’ (Wolbring, "Glossary"): people who don’t want, or who can’t afford, beyond-species-typical body ability enhancements and who are, in accordance with the transhumanised form of ableism, perceived as people in a diminished state of being human and who experience negative treatment as ‘disabled’ accordingly (Miller).
Ableism Today: The First Category
Ableism (Campbell; Carlson; Overboe) privileges ‘species-typical abilities’ while labelling ‘sub-species-typical abilities’ as deficient, as impaired and undesirable, often with the accompanying disablism (Miller): the discriminatory, oppressive, or abusive behaviour arising from the belief that sub-species-typical people are inferior to others. To quote the UK bioethicist John Harris: I do define disability as “a physical or mental condition we have a strong [rational] preference not to be in” and that it is more importantly a condition which is in some sense a “‘harmed condition’”. So for me the essential elements are that a disabling condition is harmful to the person in that condition and that consequently that person has a strong rational preference not to be in such a condition. (Harris, "Is There") Harris’s quote highlights the non-acceptance of sub-species-typical abilities as variations. Indeed, the term “disabled” is mostly used to describe a person who is perceived as having an intrinsic defect, an impairment, disease, or chronic illness that leads to ‘subnormal’ functioning. A low quality of life and other negative consequences are often seen as the inevitable, unavoidable consequence of such ‘disability’. However, many disabled people do not perceive themselves as suffering entities with a poor quality of life, in need of cure and fixing. As troubling as it is that there is a difference in perception between the ‘afflicted’ and the ‘non-afflicted’ (Wolbring, "Triangle"; also see references in Wolbring, "Science"), even more troubling is the fact that the ‘non-afflicted’ for the most part do not accept the self-perception of the ‘afflicted’ if that self-perception does not fit the agenda of the ‘non-afflicted’ (Wolbring, "Triangle"; Wolbring, "Science"). The views of disabled people who do not see themselves within the patient/medical model are rarely heard (see, for example, the positive non-medical description of Down Syndrome by the Canadian Down Syndrome Society), blatantly ignored (a fact that was recognised in the final documents of the 1999 UNESCO World Conference on Sciences; UNESCO, "Declaration on Science"; UNESCO, "Science Agenda") or rejected, as shown by the Harris quote (Wolbring, "Science"). The non-acceptance of ‘sub-species-typical functioning’ as a variation, as evident in the Harris quote, also plays itself out in the case that a species-typical person wants to become sub-species-typical. Such behaviour is classified as a disorder, the sentiment being that no one of sound mind would seek to become sub-species-typical. Furthermore, many of the so-called sub-species-typical who accept their body structure and its way of functioning use the ability language and measures employed by species-typical people to gain social acceptance and environmental accommodations. One can often hear ‘sub-species-typical people’ stating that “they can be as ‘able’ as the species-typical people if they receive the right accommodations”.
Ableism Today: The Second Category
The first category of ableism is only part of the ableism story.
Ableism is much broader and more pervasive, and not limited to the species-typical versus sub-species-typical dichotomy. The second category of ableism is a set of beliefs, processes and practices that produce a particular understanding of the self, the body, relationships with others of the species, and with other species and the environment, based on abilities that are exhibited or cherished (Wolbring, "Why"; Wolbring, "NBICS"). This form of ableism has been used historically, and still is used, by various social groups to justify their elevated level of rights and status in relation to other social groups, other species and the environment they live in (Wolbring, "Why"; Wolbring, "NBICS"). In these cases the claim is not about species-typical versus sub-species-typical, but that one has, as a species or a social group, superior abilities compared to other species or other segments of one’s species, and this superiority is seen as species-typical. Ableism reflects the sentiment of certain social groups and social structures to cherish and promote certain abilities, such as productivity and competitiveness, over others, such as empathy, compassion and kindness (favouritism of abilities). This favouritism for certain abilities over others leads to the labelling of those who exhibit real or perceived differences from these ‘essential’ abilities as deficient, and can lead to or justify other isms: racism (it is often stated that the favoured race has superior cognitive abilities over other races); sexism (at the end of the 19th century women were viewed as biologically fragile, lacking strength, and emotional (exhibiting an undesirable ability), and thus incapable of bearing the responsibility of voting, owning property, and retaining custody of their own children (Wolbring, "Science"; Silvers)); caste-ism; ageism (missing the ability one had as a youth); speciesism (the elevated status of the species Homo sapiens is often justified by stating that Homo sapiens has superior cognitive abilities); anti-environmentalism; and GDP-ism and consumerism (Wolbring, "Why"; Wolbring, "NBICS"). This flavour of ableism is rarely questioned, even as the group classified as less able tries to show that it is as able as the other group. That ability is used as a measure of worthiness and judgement in the first place goes unquestioned (Wolbring, "Why").
Science and Technology and Ableism
The direction and governance of science and technology and ableism are becoming increasingly interrelated. How we judge and deal with abilities, and which abilities we cherish, influence the direction and governance of science and technology processes, products, and research and development. The increasing ability, demand for, and acceptance of changing, improving, modifying and enhancing the human body and other biological organisms, including animals and microbes, in terms of their structure, function or capabilities beyond their species-typical boundaries, together with the nascent capability to synthesise, generate and design new genomes and new species from scratch (synthetic biology), lead to a changed understanding of oneself, one’s body, and one’s relationship with others of the species, other species and the environment, and to new forms of ableism and disablism. I have outlined so far the dynamics and characteristics of the existing ableism discourses. The story does not stop here.
Advances in science and technology enable transhumanised forms of the two categories of ableism, exhibiting dynamics and characteristics similar to those of the non-transhumanised forms.
Transhumanisation of the First Category of Ableism
The transhumanised form of the first category of ableism is a network of beliefs, processes and practices that perceives the constant improvement of biological structures, including the human body, and functioning beyond species-typical boundaries as the norm, as essential, and judges an unenhanced biological structure — species-typical and sub-species-typical — including the human body, as limited, defective, as a diminished state of existence (Wolbring, "Triangle"; Wolbring, "Why"; Wolbring, "Glossary"). It follows the same ideas and dynamics as its non-transhumanised counterpart; it just moves the level of expected abilities from species-typical to beyond-species-typical. It follows a transhumanist model of health (43) in which "health" is no longer the endpoint of biological systems functioning within species-typical, normative frameworks. In this model, all Homo sapiens — no matter how conventionally "medically healthy" — are defined as limited, defective, and in need of constant improvement made possible by new technologies (a little like the constant software upgrades we do on our computers). "Health" in this model means having obtained, at any given time, maximum enhancement (improvement) of abilities, functioning and body structure. The transhumanist model of health sees enhancements beyond species-typical body structures and functioning as therapeutic interventions (transhumanisation of medicalisation; 2, 43). The transhumanisation of health and ableism could lead to a shift in priorities away from curing sub-species-typical people towards species-typical functioning (which might increasingly be seen as futile and a waste of healthcare and medical resources), towards using healthcare dollars first to enhance species-typical bodies towards beyond-species-typical functioning, and later to further enhance bodies that already have beyond-species-typical structures and functioning (enhancement medicine). As in the discourse of its non-transhumanised counterpart, there might not be a choice in the future to reject the enhancements. The earlier quote by Harris (Harris, "Is There") highlighted the non-acceptance of sub-species-typical as a state one can be in. In his 2007 book Enhancing Evolution: The Ethical Case for Making Better People, Harris makes the case that it is moral to do enhancement, if not immoral not to do it (Harris, "One Principle"). Keeping in mind the disablement faced by people labelled as subnormative, it is reasonable to expect that those who cannot afford or do not want certain enhancements will be perceived as impaired (techno poor impaired) and will experience disablement (techno poor disabled), in tune with how ‘impaired-labelled’ people are treated today.
Transhumanisation of the Second Category of Ableism
The second category of ableism is less about species-typical functioning and more about arbitrarily flagging certain abilities as indicators of rights. The hierarchy of worthiness and superiority is also transhumanised.
Cognition: Moving from Human to Sentient Rights
Cognition is one ability used to justify many hierarchies within and between species.
If it comes to pass, whether through advances in artificial intelligence or through the cognitive enhancement of non-human biological entities, that other cognitively able sentient species appear, one can expect that rights will eventually shift towards cognition as the measure of rights entitlement (sentient rights) and away from belonging to a given species such as Homo sapiens as a prerequisite of rights. If species-typical abilities are no longer important but certain abilities are, abilities that can be added to all kinds of species, one can expect that species as a concept might become obsolete, or we will see a reinterpretation of species as that which exhibits certain abilities (given or natural).
The Climate Change Link: Ableism and Transhumanism
The disregard for nature reflects another form of ableism: humans are here to use nature as they see fit because they see themselves as superior to nature on account of their abilities. We might see a climate change-driven appeal for a transhuman version of ableism, where the transhumanisation of humans is seen as a solution for coping with climate change. This could become especially popular if we reach a ‘point of no return’, where severe climate change consequences can no longer be prevented.
Other Developments One Can Anticipate under a Transhumanised Form of Ableism
The Olympics would see only beyond-species-typical enhanced athletes compete (whether they were species-typical before, or seen as sub-species-typical, would not matter), and the transhumanised version of the Paralympics would host species-typical and sub-species-typical athletes (Wolbring, "Oscar Pistorius"). Transhumanised versions of abled, dis-abled, en-abled, dis-enabled, diff-abled, transable, and out-able will appear, where the goal is to have the newest upgrades (abled); where one tries to out-able others by having better enhancements; where access to enhancements is seen as en-ablement and the lack of access as dis-enablement; where ‘differently abled’ will be used not just for the sub-species-typical but for the species-typical and sub-species-typical alike; and where ‘transable’ will refer not to the species-typical who want to be sub-species-typical but to the beyond-species-typical who want to be species-typical.
A Final Word
To answer the questions posed in the title: with the fall of the species-typical barrier, it is unlikely that there will be an endpoint to the race for abilities and the sentiment of out-abling others (on an individual or collective level). The question remaining is who will have access to which abilities, and which abilities are sought after for which purpose. I leave the reader with an exchange between two characters in the videogame Deus Ex: Invisible War, a PC and Xbox videogame released in 2003. It is another indicator of the embeddedness of ableism in society’s fabric that the exchange below is the only hit in Google for the term ‘commodification of ability’, despite the widespread societal commodification of abilities, as this paper has hopefully shown.
Conversation between Alex D and Paul Denton
Paul Denton: If you want to even out the social order, you have to change the nature of power itself. Right? And what creates power? Wealth, physical strength, legislation — maybe — but none of those is the root principle of power.
Alex D: I’m listening.
Paul Denton: Ability is the ideal that drives the modern state. It's a synonym for one's worth, one's social reach, one's "election," in the Biblical sense, and it's the ideal that needs to be changed if people are to begin living as equals.
Alex D: And you think you can equalise humanity with biomodification?
Paul Denton: The commodification of ability — tuition, of course, but, increasingly, genetic treatments, cybernetic protocols, now biomods — has had the side effect of creating a self-perpetuating aristocracy in all advanced societies. When ability becomes a public resource, what will distinguish people will be what they do with it. Intention. Dedication. Integrity. The qualities we would choose as the bedrock of the social order. (Deus Ex: Invisible War)
References
British Film Institute. "Ways of Thinking about Disability." 2008. 25 June 2008 ‹http://www.bfi.org.uk/education/teaching/disability/thinking/›.
Campbell, Fiona A.K. "Inciting Legal Fictions: 'Disability's' Date with Ontology and the Ableist Body of the Law." Griffith Law Review 10.1 (2001): 42.
Canadian Down Syndrome Society. "Down Syndrome Redefined." 2007. 25 June 2008 ‹http://www.cdss.ca/site/about_us/policies_and_statements/down_syndrome.php›.
Carlson, Licia. "Cognitive Ableism and Disability Studies: Feminist Reflections on the History of Mental Retardation." Hypatia 16.4 (2001): 124-46.
Centre for Disability Studies. "What Is the Centre for Disability Studies (CDS)?" Leeds: Leeds University, 2008. 25 June 2008 ‹http://www.leeds.ac.uk/disability-studies/what.htm›.
Deus Ex: Invisible War. "The Commodification of Ability." Wikiquote, 2008 (2003). 25 June 2008 ‹http://en.wikiquote.org/wiki/Deus_Ex:_Invisible_War›.
Disability and Human Development Department. "PhD in Disability Studies." Chicago: University of Illinois at Chicago, 2008. 25 June 2008 ‹http://www.ahs.uic.edu/dhd/academics/phd.php›, ‹http://www.ahs.uic.edu/dhd/academics/phd_objectives.php›.
Disabilitystudies.net. "About the disabilitystudies.net." 2008. 25 June 2008 ‹http://www.disabilitystudies.net/index.php›.
Duke, Winston D. "The New Biology." Reason 1972. 25 June 2008 ‹http://www.lifeissues.net/writers/irvi/irvi_34winstonduke.html›.
Finkelstein, Vic. "Modelling Disability." Leeds: Disability Studies Program, Leeds University, 1996. 25 June 2008 ‹http://www.leeds.ac.uk/disability-studies/archiveuk/finkelstein/models/models.htm›.
Harris, J. Enhancing Evolution: The Ethical Case for Making Better People. Princeton: Princeton UP, 2007. 25 June 2008 ‹http://www.studia.no/vare.php?ean=9780691128443›.
Harris, J. "Is There a Coherent Social Conception of Disability?" Journal of Medical Ethics 26.2 (2000): 95-100.
Harris, J. "One Principle and Three Fallacies of Disability Studies." Journal of Medical Ethics 27.6 (2001): 383-87.
Malhotra, Ravi. "The Politics of the Disability Rights Movements." New Politics 8.3 (2001). 25 June 2008 ‹http://www.wpunj.edu/newpol/issue31/malhot31.htm›.
Oliver, Mike. "The Politics of Disablement." Leeds: Disability Studies Program, Leeds University, 1990. 25 June 2008 ‹http://www.leeds.ac.uk/disability-studies/archiveuk/Oliver/p%20of%20d%20Oliver%20contents.pdf›, ‹http://www.leeds.ac.uk/disability-studies/archiveuk/Oliver/p%20of%20d%20Oliver1.pdf›.
Overboe, James. "Vitalism: Subjectivity Exceeding Racism, Sexism, and (Psychiatric) Ableism." Wagadu: A Journal of Transnational Women's and Gender Studies 4 (2007). 25 June 2008 ‹http://web.cortland.edu/wagadu/Volume%204/Articles%20Volume%204/Chapter2.htm›, ‹http://web.cortland.edu/wagadu/Volume%204/Vol4pdfs/Chapter%202.pdf›.
Miller, Paul, Sophia Parker, and Sarah Gillinson. "Disablism: How to Tackle the Last Prejudice." London: Demos, 2004. 25 June 2008 ‹http://www.demos.co.uk/files/disablism.pdf›.
Penney, Jonathan. "A Constitution for the Disabled or a Disabled Constitution? Toward a New Approach to Disability for the Purposes of Section 15(1)." Journal of Law and Equality 1.1 (2002): 84-115. 25 June 2008 ‹http://papers.ssrn.com/sol3/Delivery.cfm/SSRN_ID876878_code574775.pdf?abstractid=876878&mirid=1›.
Silvers, A., D. Wasserman, and M.B. Mahowald. Disability, Difference, Discrimination: Perspectives on Justice in Bioethics and Public Policy. Lanham: Rowman & Littlefield, 1998.
Society for Disability Studies (USA). "General Guidelines for Disability Studies Program." 2004. 25 June 2008 ‹http://www.uic.edu/orgs/sds/generalinfo.html#4›, ‹http://www.uic.edu/orgs/sds/Guidelines%20for%20DS%20Program.doc›.
Taylor, Steven, Bonnie Shoultz, and Pamela Walker. "Disability Studies: Information and Resources." Syracuse: The Center on Human Policy, Law, and Disability Studies, Syracuse University, 2003. 25 June 2008 ‹http://thechp.syr.edu//Disability_Studies_2003_current.html#Introduction›.
UNESCO. "UNESCO World Conference on Sciences Declaration on Science and the Use of Scientific Knowledge." 1999. 25 June 2008 ‹http://www.unesco.org/science/wcs/eng/declaration_e.htm›.
UNESCO. "UNESCO World Conference on Sciences Science Agenda-Framework for Action." 1999. 25 June 2008 ‹http://www.unesco.org/science/wcs/eng/framework.htm›.
Watson, James D. "Genes and Politics." Journal of Molecular Medicine 75.9 (1997): 624-36.
Wolbring, G. "Science and Technology and the Triple D (Disease, Disability, Defect)." Converging Technologies for Improving Human Performance: Nanotechnology, Biotechnology, Information Technology and Cognitive Science. Eds. Mihail C. Roco and William Sims Bainbridge. Dordrecht: Kluwer Academic, 2003. 232-43. 25 June 2008 ‹http://www.wtec.org/ConvergingTechnologies/›, ‹http://www.bioethicsanddisability.org/nbic.html›.
Wolbring, G. "The Triangle of Enhancement Medicine, Disabled People, and the Concept of Health: A New Challenge for HTA, Health Research, and Health Policy." Edmonton: Alberta Heritage Foundation for Medical Research, Health Technology Assessment Unit, 2005. 25 June 2008 ‹http://www.ihe.ca/documents/hta/HTA-FR23.pdf›.
Wolbring, G. "Glossary for the 21st Century." International Center for Bioethics, Culture and Disability, 2007. 25 June 2008 ‹http://www.bioethicsanddisability.org/glossary.htm›.
Wolbring, G. "NBICS, Other Convergences, Ableism and the Culture of Peace." Innovationwatch.com, 2007. 25 June 2008 ‹http://www.innovationwatch.com/choiceisyours/choiceisyours-2007-04-15.htm›.
Wolbring, G. "Oscar Pistorius and the Future Nature of Olympic, Paralympic and Other Sports." SCRIPTed — A Journal of Law, Technology & Society 5.1 (2008): 139-60. 25 June 2008 ‹http://www.law.ed.ac.uk/ahrc/script-ed/vol5-1/wolbring.pdf›.
Wolbring, G. "Why NBIC? Why Human Performance Enhancement?" Innovation: The European Journal of Social Science Research 21.1 (2008): 25-40.
50

Howarth, Anita. "Exploring a Curatorial Turn in Journalism." M/C Journal 18, no. 4 (August 11, 2015). http://dx.doi.org/10.5204/mcj.1004.

Full text
Abstract:
Introduction
Curation-related discourses have become widespread. The growing public profile of curators, the emergence of new curation-related discourses and their proliferation beyond the confines of museums, particularly on social media, have led some to conclude that we now live in an age of curation (Buskirk cited in Synder). Curation is commonly understood in instrumentalist terms as the evaluation, selection and presentation of artefacts around a central theme or motif (see O’Neill; Synder). However, there is growing academic interest in what underlies the shifting discourses and practices. Many are asking what these changes mean (Martinon) now that “the curatorial turn” has positioned curation as a legitimate object of academic study (O’Neill). This article locates an exploration of the curatorial turn in journalism studies since 2010 within the shifting meanings of curation from antiquity to the digital age. It argues that the industry is facing a Foucauldian moment, where the changing political economy of news and the proliferation of user-generated content on social media have disrupted the monopolies traditional news media held over the circulation of knowledge of current affairs and the power this gave them to shape public debate. The disruptions are profound, prompting a rethinking of journalism (Peters and Broersma; Schudson). However, debates have polarised between those who view news curation as symptomatic of the demise of journalism and others who see it as part of a wider revival of the profession, freed from monopolistic institutions to circulate a wider array of knowledge and viewpoints (see Picard). This article eschews such polarisations and instead draws on Robert Picard’s argument that journalism is in transition: journalism, as a set of professional practices, is adapting to the age of curation, but those traditional news providers that fail to adapt will most likely decline. However, Picard’s approach does not address the definitional problem of what distinguishes news curating from other journalistic practices when the commonly used instrumental definition can equally apply to editing. This article aims to negotiate this problem by addressing some of the conceptual ambiguities that arise from wholly instrumental notions of news curation.
From “Cura” to the Curatorial Turn and the Age of Curation
Modern instrumentalist definitions are necessary but not sufficient for an exploration of the curatorial turn in journalism. Tracing the meanings of curation over time facilitates an expansion of the instrumental to include metaphoric conceptualisations. The term originated in a Latin allegory about a mythological figure, personified as the “cura”, translated literally as care or concern, who created human beings from the clay of the earth. Having created the human, the cura was charged by the gods with the lifelong care of the human (Reich) and at the same time became a symbol of curiosity and creativity (see Nowotny). “Curators” first emerged in Imperial Rome to denote a public officer charged with maintaining order and the emperor’s finances (Nowotny), but by the fourteenth century the meaning had shifted to that of a religious officer charged with the care of souls (Gaskill). At this point the metaphorical associations of creativity and curiosity subsided. Six hundred years later, souls had been replaced by artefacts valorised for their contribution to human knowledge or as a testament to exceptional human creativity (Nowotny).
Objects of curiosity and originality, as well as their creators, were reified, and curation became the specialist practice of an expert custodian charged with the care and preservation of artefacts, but relegated to the background to collect, evaluate and archive artefacts entrusted to the care of museums and to be preserved for future generations. Instrumentalist meanings thus dominated. From the 1960s, discourses shifted again, from the privileging of a “producer who actually creates the object in its materiality” to an entire set of actors (Bourdieu 261). These shifts were part of the changing political economy of museums, the growing prevalence of exhibitions and the emergence of mega-exhibitions hosted in global cities and capable of attracting massive audiences (see O’Neill). The curator was no longer seen merely as a custodian but as able to add cultural value to artefacts by drawing individual items together into a collection, interpreting their relevance to a theme, then re-presenting them through a story or visuals (see O’Neill). The verb “to curate”, which had first entered the English lexicon in the early 1900s but was used sporadically (Synder), proliferated from the 1960s in museum studies (Farquharson cited in O’Neill) as mega-exhibitions attracted publicity and the higher profile of curators attracted the attention of intellectuals, prompting a curatorial turn in museum studies. The curatorial turn in museum studies from the 1980s marks the emergence of curation as a legitimate object of academic enquiry. O’Neill identified a “Foucauldian moment” in museum studies where shifting discourses signified challenges to, and disruptions of, traditional forms of knowledge-based power. Curation was no longer seen as a neutral activity of preservation, but as one located within a contested political economy and invested with contradictions and complexities. Philosophers such as Martinon and Nowotny have highlighted the impossibility of separating the oversight of valuable artefacts from the processes by which these are selected, valorised and signified, and what, at times, has been the controversial appropriation of creative outputs. Thus, a new critical approach emerged. Recently, curation-related discourses have expanded beyond the “rarefied” world of museum studies (Synder). Social media platforms have facilitated the proliferation of user-generated content offering a vast array of new artefacts. Information circulates widely, and new discourses can challenge traditional bases of knowledge. Audiences now actively search for new material, driven in part by curiosity and a growing distrust of the professions and establishments (see Holmberg). The boundaries between professionals and lay people are blurring and, some argue, knowledge is being democratised (see Ibrahim; Holmberg). However, as new information becomes voluminous, alternative truths, misinformation and false information compete for attention, and there is a growing demand for the verification, selection and presentation of artefacts, that is, for online curation (Picard; Bakker). Thus, the appropriation of social media is disrupting traditional power relations but also offering opportunities for new information-related practices. Journalism is facing its own Foucauldian moment.
A Foucauldian Moment in Journalism Studies
Journalism has traditionally been understood as capturing today’s happenings, verifying the facts of an event, then presenting these as a narrative that reporters update as news unfolds.
News has been seen as the preserve of professionals trained to interview eyewitnesses or experts, to verify facts and to compile what they found into a compelling narrative (Hallin and Mancini). News-gathering was typically the work of an individual tasked with collecting stand-alone stories, who then passed them on to editors to evaluate, select, prioritise and collate into a collection that formed a newspaper or news programme. This understanding of journalism emerged from the 1830s along with a type of news that was accessible, that large numbers of people wanted to read and that, consequently, attracted advertising, making news profitable (Park). The idea that trained journalists were best placed to produce news appeared first in the UK and USA and then spread worldwide (Hallin and Mancini). At the same time as there was growing demand for news, space constraints restricted how much could be published, and the high costs of production served as a barrier to entry, first in print and later in broadcast media (Picard; Curran and Seaton). The large news organisations that employed these professionals were thus able to control the circulation of the information and knowledge they generated, and the editors that selected content were able, in part, to shape public debates (Picard; Habermas). Social media challenge the control traditional media have had over the production and dissemination of news since the mid-1800s. Practically every major global news story in 2010 and 2011, from natural disasters to uprisings, was broken by ordinary people on social media (Bruns and Highfield). Twitter facilitates a steady stream of updates at an almost real-time speed that 24-hour news channels cannot match. Facebook, Instagram and blogs add commentary, context, visuals and personal stories to breaking news. Experts and official sources routinely post announcements on social media platforms, enabling anyone to access much of the same source material that was previously the preserve of reporters. Investigations by bloggers have exposed abuses of power by companies and governments that journalists in traditional media have failed to expose (Wischnowski). Audiences and advertisers are migrating away from traditional newspapers to a range of different online platforms. News consumers now actively use search engines to find available information of interest and look for efficient ways of sifting the useful from the dubious, the revelatory from the misleading or inaccurate (see Picard). That is, news organisations and the professional journalists they employ are increasingly operating in a hyper-competitive (see Picard) and hyper-sceptical environment. This paper posits that, cumulatively, these changes are disrupting the control news organisations have, and that journalism is facing a Foucauldian moment in which shifting discourses signify a disturbance of the intellectual rules that shape who produces knowledge of news and what knowledge is produced, and hence the power relations they sustain. Social media not only challenge the core news business of reporting; they also present new opportunities. Some traditional organisations have responded by adding new activities to their repertoire of practices. In 2011, the Guardian uploaded its entire database of the expense claims of British MPs onto its website and invited readers to select, evaluate and comment on entries, a form of crowd-sourced curating.
Andy Carvin, while at National Public Radio (NPR), built an international reputation from his curation of breaking news, opinion and commentary on Twitter as Syria became too dangerous for foreign correspondents to enter. New types of press agencies such as Storyful have emerged around a curatorial business model that aggregates information culled from social media and uses journalists to evaluate and repackage it as news stories that are sold on to traditional news media around the world (Guerrini). Research into the growing market for such skills in the Netherlands found more advertisements for “news curators” than for “traditional reporters” (Bakker). At the same time, organic and spontaneous curation can emerge out of Twitter and Facebook communities that is capable of challenging news reporting by traditional media (Lewis and Westlund). Curation has become a common refrain, attracting the attention of academics.
A Curatorial Turn in Journalism
The curatorial turn in journalism studies is manifest in the growing academic attention to curation-related discourses and practices. A review of four academic journals in the field (Journalism, Journalism Studies, Journalism Practice, and Digital Journalism) found that the first mention of journalism and curation emerged in 2010, with references in nearly 40 articles by July 2015. The meta-analysis that follows draws on this corpus. The consensus is that traditional business models based on mass circulation and advertising are failing, partly because of the proliferation of alternative sources of information and the migration of readers in search of it. While some of this alternative content is credible, much is dubious, and the sheer volume of information makes it difficult to discern what to believe. It is unsurprising, then, that there is a growing demand for “new types and practices of curation and information vetting” that attest to “the veracity and accuracy of content”, particularly of news (Picard 280). However, academics disagree on whether new information practices such as curation are replacing or supplementing traditional newsgathering. Some look for evidence of displacement in the expansion of job advertisements for news curators relative to those for traditional reporters (Bakker). Others look at how new and traditional practices co-exist in organisations like the BBC, the Guardian and NPR, sometimes clashing and sometimes collaborating in the co-creation of content (McQuail cited in Fahy and Nisbet; Hermida and Thurman). The debate has polarised over whether these changes signify the “twilight years of journalism or a new dawn” (Picard). Optimists view the proliferation of alternative sources of information as breaking the control traditional organisations held over news production, exposing their ideological biases and disrupting their traditional knowledge-based power and practices (see Hermida; Siapera, Papadopoulou, and Archontakis; Compton and Benedetti). Others have focused on the loss of “traditional” permanent journalistic jobs (see Schwalbe, Silcock, and Candello; Spaulding), with the implication that traditional forms of professional practice are in decline. Picard rejects this polarisation, counter-arguing that much analysis implicitly conflates journalism as a practice with the news organisations that have traditionally hosted it.
Journalists may or may not be located within a traditional media organisation, and social media offer numerous opportunities for them to operate independently and for new types of hybrid practices and organisations, such as Storyful, to emerge outside of traditional operations. Picard argues that making the most of the opportunities social media present is revitalising the profession, offering a new dawn, but that those traditional organisations that fail to adapt to the new media landscape and new practices are in their twilight years and likely to decline. These divergences, he argues, highlight a profession and industry in transition from an old order to a new one (Picard). This notion of journalism in transition usefully negotiates confusion over what curation in the social media age means for news providers, but it does not address the uncertainty as to where it sits in relation to journalism. Futuristic accounts predict that journalists will become “managers of content rather than simply sourcing one story next to another” and that roles will shift from reporting to curation (Montgomery cited in Bakker; see Fahy and Nisbet). Others insist curators are not journalists but “information workers” or “gatecheckers” (McQuail 2013 cited in Bakker; Schwalbe, Silcock, and Candello), thereby differentiating the professional from the manual worker and reinforcing the historic elitism of the professions by implying curation is a lesser practice. However, such demarcation is problematic in that, arguably, both journalist and news curator can be seen as information workers, and the instrumental definition outlined at the beginning of this article is as relevant to curation as it is to news editing. It is therefore necessary to revisit commonly used definitions (see Bakker; Guerrini; Synder). The literature broadly defines content creation, including news reporting, as the generation of original content, distinguishable from aggregation and curation, both of which entail working with existing material. News aggregation is the automated use of computer algorithms to find and collect existing content relevant to a specified subject, followed by the generation of a list or image gallery (Bakker; Synder). While aggregators may help with the collection component of news curation, the practices differ in their relation to technology. Apart from the upfront human design of the original algorithm, aggregation is wholly machine-driven, while modern news curation adds human intervention to the technological processes of aggregation (Bakker). This intervention is conscious rather than automated, active rather than passive. It brings human knowledge, expertise and interpretation to bear to verify and evaluate content, to filter and select artefacts based on their perceived quality and relevance for a particular topic or theme, and then to re-present them in an accessible form as a narrative or infographics or both. While it does not involve the generation of original news content in the way news reporting does, curation is more than the collation of information. It can also involve the re-presenting of it in imaginative ways, the re-formulating of existing content in new configurations. In this sense, curation can constitute a form of creativity increasingly common in the social media age: the re-mixing and re-imagining of existing material to create something novel (Navas and Gallagher).
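To make the distinction between machine-driven aggregation and human-mediated curation concrete, the following is a minimal, hypothetical sketch in Python. The sample items, the keyword matching, and the function names are illustrative assumptions rather than a description of any real aggregator such as Storyful; the point is only that the aggregate step can run unattended once designed, while the curate step takes human judgment as an input it cannot supply itself.

    # Hypothetical sketch of the aggregation/curation distinction described above.
    # All data and names are invented for illustration.

    # Aggregation: wholly machine-driven once the algorithm is designed.
    def aggregate(items, keywords):
        """Automatically collect existing content relevant to a specified subject."""
        return [item for item in items
                if any(kw in item["text"].lower() for kw in keywords)]

    # Curation: adds conscious human intervention on top of aggregation's output:
    # verification, evaluation, selection, and re-presentation with context.
    def curate(aggregated, verified_by_human, annotate):
        selected = [item for item in aggregated if verified_by_human(item)]
        return [{"source": item["source"],
                 "text": item["text"],
                 "context": annotate(item)}  # human-written framing
                for item in selected]

    items = [
        {"source": "twitter", "text": "Breaking: protest reported downtown"},
        {"source": "blog", "text": "A recipe for sourdough bread"},
    ]
    aggregated = aggregate(items, keywords=["protest"])
    # The two callables stand in for editorial judgment, the part that,
    # per the distinction drawn above, no algorithm supplies.
    curated = curate(aggregated,
                     verified_by_human=lambda item: item["source"] == "twitter",
                     annotate=lambda item: "Corroborated against two eyewitness accounts")
    print(curated)

Nothing in this sketch beyond the keyword filter is automated: verified_by_human and annotate must be filled in by a person, which is what separates the curator's contribution from the aggregator's.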
The distinction, therefore, between content creation and content curation lies primarily in the relation to original material, and not in the assumed presence or otherwise of creativity. In addition, curation outputs need not stand apart from news reports. They can serve to contextualise news in ways that short reports cannot, while the latter provide original content to sit alongside curated materials. Thus the two types of news-related practices can complement rather than compete with each other. While this addresses the relation between reporting and curation, it does not clarify the relation between curating and editing. Bakker alludes to this when he argues curating also involves “editing … enriching or combining content from different sources” (599). But teasing out the distinctions is tricky because editing encompasses a wide range of sub-specialisations and divergent duties. Broadly speaking, editors are “newsroom professionals … with decision-making authority over content and structure” who evaluate, verify and select information and so are “quality controllers” in newsrooms (Stepp). This conceptualisation overlaps with the instrumentalist definition of curation, and while the broad types of skills and tasks involved are similar, the two are not synonymous. Editors tend to be relatively experienced professionals who have worked up the newsroom ranks, whereas news curators are often new entrants ultimately answerable to editors. Furthermore, curation in the social media age involves voluminous material that curators sift through as part of first-level content collection, and it involves ever more complex verification processes as digital technologies make it increasingly easy to alter and falsify information and images. The quality-control role of curators may also involve in-house specialists or junior staff working with external experts in a particular region or specialisation (Fahy and Nisbet). Some job advertisements suggest a growing demand for specialist curatorial skills and position these alongside other newsroom professionals (Bakker). Whether this means they are journalists is still open to question.
Conclusion
This article has presented a more expansive conceptualisation of news curation than is commonly used in journalism studies, by including both the instrumental and the symbolic dimensions of a proliferating practice. It also sought to avoid confining this wider conceptualisation within unhelpful polarisations as to whether news curation is symbolic of a wider demise or revival of journalism, by distinguishing the profession from the organisation in which it operates. The article was then free to negotiate the conceptual ambiguity surrounding the often taken-for-granted instrumental meanings of curation. It argues that what distinguishes news curation from traditional newsgathering is the relationship to original content. While the reporter generates the journalistic equivalent of original content in the form of news, the imaginative curator re-mixes and re-presents existing content in potentially novel ways. This has faint echoes of the mythological cura creating something new from the existing clay. The other conceptual ambiguity negotiated was the definitional overlap between curating and editing. On the one hand, this questions the appropriateness of reducing the news curator to the status of an “information worker”, a manual labourer rather than a professional. On the other hand, it positions news curators as one of many types of newsroom professionals.
What distinguishes them from others is their status in the newsroom, the volume, nature and verification of the material they work with, and the re-mixing of different components to create something novel and useful.
References
Bakker, Piet. “Mr. Gates Returns: Curation, Community Management and Other New Roles for Journalists.” Journalism Studies 15.5 (2014): 596-606.
Bourdieu, Pierre. The Field of Cultural Production. New York: Columbia UP, 1993.
Bruns, Axel, and Tim Highfield. “Blogs, Twitter, and Breaking News: The Produsage of Citizen Journalism.” Produsing Theory in a Digital World: The Intersection of Audiences and Production in Contemporary Theory. New York: Peter Lang. 15-32.
Compton, James R., and Paul Benedetti. “Labour, New Media and the Institutional Restructuring of Journalism.” Journalism Studies 11.4 (2010): 487-499.
Curran, J., and J. Seaton. “The Liberal Theory of Press Freedom.” Power without Responsibility. London: Routledge, 2003.
Fahy, Declan, and Matthew C. Nisbet. “The Science Journalist Online: Shifting Roles and Emerging Practices.” Journalism 12.7 (2011): 778-793.
Guerrini, Federico. “Newsroom Curators & Independent Storytellers: Content Curation as a New Form of Journalism.” Reuters Institute Fellowship Paper (2013): 1-62.
Habermas, Jürgen. The Structural Transformation of the Public Sphere: An Inquiry into a Category of Bourgeois Society. Cambridge, MA: MIT P, 1991.
Hallin, Daniel, and Paolo Mancini. Comparing Media Systems beyond the Western World. Cambridge: Cambridge UP, 2012.
———. Comparing Media Systems: Three Models of Media and Politics. Cambridge: Cambridge UP, 2004.
Harb, Zahera. “Photojournalism and Citizen Journalism.” Journalism Practice (2012): 37-41.
Hermida, Alfred. “Tweets and Truth.” Journalism Practice 6.5-6 (2012): 659-668.
Hermida, Alfred, and Neil Thurman. “A Clash of Cultures: The Integration of User-Generated Content within Professional Journalistic Frameworks at British Newspaper Websites.” Journalism Practice 2.3 (2008): 343-356.
Holmberg, Christopher. “Politicization of the Low-Carb High-Fat Diet in Sweden, Promoted on Social Media by Non-Conventional Experts.” International Journal of E-Politics (2015).
Ibrahim, Yasmin. “The Discourses of Empowerment and Web 2.0.” Handbook of Research on Web 2.0, 3.0, and X.0: Technologies, Business, and Social Applications. Ed. San Murugesan. Hershey, PA: IGI Global, 2010. 828-845.
Lewis, Seth C., and Oscar Westlund. “Actors, Actants, Audiences, and Activities in Cross-Media News Work.” Digital Journalism (July 2014): 1-19.
Martinon, Jean-Paul. The Curatorial: A Philosophy of Curating. Ed. Jean-Paul Martinon. London: Bloomsbury P, 2013.
Navas, Eduardo, and Owen Gallagher, eds. Routledge Companion to Remix Studies. London and New York: Routledge, 2014.
Nowotny, Stefan. “The Curator Crosses the River: A Fabulation.” The Curatorial: A Philosophy of Curating. Ed. Jean-Paul Martinon. London: Bloomsbury Academic, 2013.
O’Neill, Paul. The Curatorial Turn: From Practice to Discourse. Bristol: Intellect, 2007.
Park, Robert E. “Reflections on Communication and Culture.” American Journal of Sociology 44.2 (1938): 187-205.
Peters, Chris, and Marcel Broersma. Rethinking Journalism: Trust and Participation in a Transformed News Landscape. London: Routledge, 2013.
Phillips, E. Barbara, and Michael Schudson. “Discovering the News: A Social History of American Newspapers.” Contemporary Sociology (1980): 812.
Picard, Robert G. “Twilight or New Dawn of Journalism?” Digital Journalism (May 2014): 1-11.
Reich, Warren. “Classic Article: History of the Notion of Care.” Encyclopedia of Bioethics. Ed. Warren Reich. Rev. ed. New York: Simon and Schuster, 1995. 319-331.
Rugg, Judith, and Michèle Sedgwick, eds. Issues in Curating Contemporary Art and Performance. Bristol: Intellect, 2007.
Schudson, Michael. “Would Journalism Please Hold Still!” Re-Thinking Journalism. Eds. Chris Peters and Marcel Broersma. Abingdon: Routledge, 2013.
Schwalbe, Carol B., B. William Silcock, and Elizabeth Candello. “Gatecheckers at the Visual News Stream.” Journalism Practice 9.4 (2015): 465-83.
Siapera, Eugenia, Lambrini Papadopoulou, and Fragiskos Archontakis. “Post-Crisis Journalism.” Journalism Studies 16.3 (2014): 449-465.
Spaulding, S. “The Poetics of Goodbye: Change and Nostalgia in Goodbye Narratives Penned by Ex-Baltimore Sun Employees.” Journalism (2014): 1-14.
Stepp, Carl Sessions. Editing for Today’s Newsroom: New Perspectives for a Changing Profession. Abingdon: Lawrence Erlbaum, 2013.
Synder, Ilana. “Discourses of ‘Curation’ in Digital Times.” Discourse and Digital Practices: Doing Discourse Analysis in the Digital Age. Eds. Rodney H. Harris, Alice Chik, and Christoph Hafner. Oxford: Routledge, 2015. 209-225.
Thurman, Neil, and Nic Newman. “The Future of Breaking News Online?” Journalism Studies 15.5 (2014): 655-67.
Wischnowski, Benjamin J. “Bloggers with Shields: Reconciling the Blogosphere’s Intrinsic Editorial Process with Traditional Concepts of Media Accountability.” Iowa Law Review 97 (2011): 327.
