To see the other types of publications on this topic, follow the link: State Resource Recovery Program (Minn.).

Journal articles on the topic 'State Resource Recovery Program (Minn.)'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 34 journal articles for your research on the topic 'State Resource Recovery Program (Minn.).'

Next to every source in the list of references there is an 'Add to bibliography' button. Press it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Jung, Seung-Hwan, Eunhee Park, Ju-Hyun Kim, Bi-Ang Park, Ja-Won Yu, Ae-Ryoung Kim, and Tae-Du Jung. "Effects of Self RehAbilitation Video Exercises (SAVE) on Functional Restorations in Patients with Subacute Stroke." Healthcare 9, no. 5 (May 11, 2021): 565. http://dx.doi.org/10.3390/healthcare9050565.

Abstract:
Background: Additional exercise therapy has been shown to positively affect acute stroke rehabilitation, which requires an effective method of delivering increased exercise. In this study, we designed a 4-week caregiver-supervised self-exercise program with videos, named “Self rehAbilitation Video Exercises (SAVE)”, to improve functional outcomes and facilitate early recovery by increasing the continuity of rehabilitation therapy after acute stroke. Methods: This study is a non-randomized trial. Eighty-eight patients were included in the intervention group (SAVE group), who received conventional rehabilitation therapies plus an additional self-rehabilitation session: they watched bedside exercise videos and continued their own exercises in their rooms for 60 min every day for 4 weeks. Ninety-six patients were included in the control group, who received only conventional rehabilitation therapies. After 4 weeks of hospitalization, both groups were assessed on several outcome measures, including the Berg Balance Scale (BBS), Modified Barthel Index (MBI), the physical component summary (PCS) and mental component summary of the Short-Form Survey 36 (SF-36), the Mini-Mental State Examination, and the Beck Depression Inventory. Results: Improvements in BBS, MBI, and the PCS component of the SF-36 were significantly greater in the SAVE group than in the control group (p < 0.05). Conclusions: This evidence-based SAVE intervention can optimize patient recovery after subacute stroke while making efficient use of available resources.
2

Wiesenburg, Denis, Bob Shipp, Joel Fodrie, Sean Powers, Julien Lartigue, Kelly Darnell, Melissa Baustian, Cam Ngo, John Valentine, and Kateryna Wowk. "Prospects for Gulf of Mexico Environmental Recovery and Restoration." Oceanography 34, no. 1 (March 1, 2021): 164–73. http://dx.doi.org/10.5670/oceanog.2021.124.

Abstract:
Previous oil spills provide clear evidence that ecosystem restoration efforts are challenging, and recovery can take decades. Similar to the Ixtoc-I well blowout in 1979, the Deepwater Horizon (DWH) oil spill was enormous both in volume of oil spilled and duration, resulting in environmental impacts from the deep ocean to the Gulf of Mexico coastline. Data collected during the National Resource Damage Assessment showed significant damage to coastal areas (especially marshes), marine organisms, and deep-sea habitat. Previous spills have shown that disparate regions recover at different rates, with especially long-term effects in salt marshes and deep-sea habitat. Environmental recovery and restoration in the northern Gulf of Mexico are dependent upon fundamental knowledge of ecosystem processes in the region. Post-DWH research data provide a starting point for better understanding baselines and ecosystem processes. It is imperative to use the best science available to fully understand DWH environmental impacts and determine the appropriate means to ameliorate those impacts through restoration. Filling data gaps will be necessary to make better restoration decisions, and establishing new baselines will require long-term studies. Future research, especially via NOAA’s RESTORE Science Program and its state-based Centers of Excellence, should provide a path to understanding the potential for restoration and recovery of this vital marine ecosystem.
3

Bosak, P., O. Stokalyuk, O. Korolova, and V. Popovych. "ENVIRONMENTAL MANAGEMENT IN DEVELOPMENT PROJECTS MINING INDUSTRY." Bulletin of Lviv State University of Life Safety 22 (December 28, 2020): 5–11. http://dx.doi.org/10.32447/20784643.22.2020.01.

Abstract:
Introduction. In the industrial regions of Ukraine, the structure of nature use developed over a long period without regard to the objective laws of development and recovery of natural resource systems and ecosystems. All natural spheres (atmosphere, hydrosphere, lithosphere and biosphere) have come under severe anthropogenic pressure. Chemical, radioactive and other pollution of the natural environment causes various, often incurable, diseases and irreversible changes in the genetic structure of cells, leading to an increase in the birth rate of affected generations. Ecological safety in mining complexes occupies a special place in this context. Mining complexes comprise interrelated processes of human impact on the environment that supply raw materials and energy resources to various fields of economic activity; in a broad sense, a resource is understood both as a source of matter and as the space in which it is located. Purpose. The purpose of the work is to highlight the importance of environmental safety management in mining complexes and areas for its improvement. Methods. We used methods of theoretical research to improve environmental safety in mining complexes. Results. Based on this theoretical research, the following measures are needed to improve environmental safety in mining complexes: control over the state of the environment, reproduction and protection of its resources, improvement of natural living conditions, and development of mining complexes as an environmental safety management system. Following environmental principles, the environmental safety of mining complexes must be managed with existing tools and methods of environmental analysis, as the environmental safety of this activity matters for all components of the environment. This involves studying the zones of technogenic influence that form around mine-rock dumps and maintaining the physical and chemical stability of mining wastes. Managing ecological safety in mining areas will make it possible to prevent ecologically extreme situations after coal deposits cease operation and during future use of the disturbed territories. Stimulating self-education, ecological education and practical activity also helps to develop a conscious and responsible attitude to nature and an ecological consciousness. Conclusion. Reasonable measures must regulate the impact of mining complexes on the environment: monitoring the state of the environment in mining areas; assessing the threat of environmental hazards; preventing deterioration of the ecological situation in mining areas (reclamation and phytomeliorative measures for waste heaps); and developing and implementing appropriate programs aimed at reducing environmental hazards.
4

Perin, Guido, Francesco Romagnoli, Fabrizio Perin, and Andrea Giacometti. "Preliminary Study on Mini-Modus Device Designed to Oxygenate Bottom Anoxic Waters without Perturbing Polluted Sediments." Environments 7, no. 3 (March 20, 2020): 23. http://dx.doi.org/10.3390/environments7030023.

Abstract:
The Tangential Guanabara Bay Aeration and Recovery (TAGUBAR) project derives its origins from a Brazilian government decision to tackle the planning and management challenges related to the restoration of some degraded aquatic ecosystems such as Guanabara Bay (state of Rio de Janeiro), Vitória Bay, and Espírito Santo Bay (state of Espírito Santo). This was performed by using the successful outcomes of a previous Ministry of Foreign Affairs and Directorate General for Cooperation and Development (i.e., Direttore Generale alla Cooperazione allo Sviluppo, MFA–DGCS) cooperation program. The general objective of the program was to contribute to the economic and social development of the population living around Guanabara, Vitória, and Espírito Santo Bays, while promoting the conservation of their natural resources. This objective was supposed to be achieved by investing money to consolidate the local authorities’ ability to plan and implement a reconditioning program within a systemic management framework in severely polluted ecosystems such as Guanabara Bay, where sediments are highly contaminated. Sediments normally represent the final fate for most contaminants. Therefore, it would be highly undesirable to perturb them, if one wishes to avoid contaminant recycling. In this context, we explored a bench-scale novel technology, called the module for the decontamination of units of sediment (MODUS), which produces an oxygenated water flow directed parallel to the sediment floor that is aimed to create “tangential aeration” of the bottom water column. The purpose of this is to avoid perturbing the top sediment layer, as a flow directed toward the bottom sediment would most probably resuspend this layer. Three kinds of tests were performed to characterize a bench-scale version of MODUS (referred to as “mini-MODUS”) behavior: turbulence–sediment resuspension tests, hydrodynamic tests, and oxygenation–aeration tests. In order to understand the functioning of the mini-MODUS, we needed to eliminate as many variables as possible. Therefore, we chose a static version of the module (i.e., no speed for the mini-MODUS as well as no water current with respect to the bottom sediment and no flume setting), leaving dynamic studies for a future paper. The turbulence tests showed that the water enters and exits the mini-MODUS mouths without resuspending the sediment surface at all, even if the sediment is very soft. Water flow was only localized very close to both mouth openings. Hydrodynamic tests showed an interesting behavior. An increase of low air flows produced a sharp linear increase of the water flow. However, a plateau was quickly reached and then no further increase of water flow was observed, implying that for a certain specific geometry of the equipment and for the given experimental conditions, an increase in the air flow does not produce any reduction of the residence time within the aeration reactor. Oxygenation–aeration tests explored three parameters that were deemed to be most important for our study: the oxygen global transfer coefficient, KLa; the oxygenation capacity, OC; and the oxygenation efficiency, OE%. An air flow increase causes an increase of both KLa and OC, while OE% decreases (no plateau was observed for KLa and OC). The better air flow would be a compromise between high KLa and OC, with no disadvantageous OE%, a compromise that will be the topic of the next paper.
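The abstract reports three aeration parameters (the global oxygen transfer coefficient KLa, the oxygenation capacity OC, and the oxygenation efficiency OE%) without giving formulas. As a point of reference only, here is a minimal sketch using simplified, conventional textbook definitions of these quantities; it is not necessarily the exact calculation used in the mini-MODUS study, and the numeric inputs are purely illustrative:

```python
import math

# Hedged illustration: conventional aeration-test relationships assumed here,
# not the exact formulas used in the TAGUBAR / mini-MODUS paper.
# Reaeration model: dC/dt = KLa * (Cs - C)  =>  ln((Cs - C0)/(Cs - Ct)) = KLa * t

def kla_from_do(cs, c0, ct, t_hours):
    """Global oxygen transfer coefficient KLa (1/h) from two dissolved-oxygen readings (mg/L)."""
    return math.log((cs - c0) / (cs - ct)) / t_hours

def oxygenation_capacity(kla, cs, volume_m3):
    """OC (g O2/h): oxygen mass transferable per hour at zero dissolved oxygen (mg/L * m3 == g)."""
    return kla * cs * volume_m3

def oxygenation_efficiency(oc, o2_supplied_g_per_h):
    """OE%: share of the supplied oxygen that actually dissolves."""
    return 100.0 * oc / o2_supplied_g_per_h

# Illustrative numbers only.
kla = kla_from_do(cs=8.0, c0=1.0, ct=5.0, t_hours=0.5)
oc = oxygenation_capacity(kla, cs=8.0, volume_m3=0.2)
print(kla, oc, oxygenation_efficiency(oc, o2_supplied_g_per_h=500.0))
```

Under these conventional definitions, raising the air flow raises KLa and OC while OE% falls, because a larger share of the supplied oxygen leaves the reactor undissolved; this matches the trend the abstract describes.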
5

Newman, Kimberly S., Carol-Ann Manen, and Nancy E. Kinner. "RESEARCH NEEDS IN OIL SPILL RESPONSE." International Oil Spill Conference Proceedings 2005, no. 1 (May 1, 2005): 131–33. http://dx.doi.org/10.7901/2169-3358-2005-1-131.

Abstract:
ABSTRACT As funding for spill research and development (R&D) has declined in recent years, partnerships among relevant federal and state agencies, industry and academia have increased in importance. In order to encourage thinking about spill R&D, develop agreement on research needs and foster these partnerships, the Coastal Response Research Center (CRRC), a cooperative program between the National Oceanic and Atmospheric Administration (NOAA) and the University of New Hampshire (UNH), hosted a three day workshop in November 2003 to identify applied science needs that could improve decision making across the continuum of oil spill preparedness, response and recovery. The emphasis was on research that could decrease the impact of spills on NOAA trust resources or enhance the recovery of the impacted resources. More than 30 experts in the areas of spill processes, response techniques and habitat restoration participated in the three day workshop. The group included scientists from federal and state agencies, industry and academia. The goals of the workshop were to identify knowledge gaps in the area of spill response and restoration and determine the best approach for addressing these gaps. Starting with six categories: Fate and Transport of Released Materials; Effects of Spills and Spill Response on Organisms; Effects of Spills and Spill Response on Habitats; Social and Economic Concerns and Needs; Quantitative Metrics for Use in Injury Determination and Restoration; and Restoration Methods, the participants identified over 80 areas of need, including a broad category of communication, and evaluated them with respect to their technical feasibility and potential impact on resource recovery.
6

Waterhouse, Lynn, Scott A. Heppell, Christy V. Pattengill-Semmens, Croy McCoy, Phillippe Bush, Bradley C. Johnson, and Brice X. Semmens. "Recovery of critically endangered Nassau grouper (Epinephelus striatus) in the Cayman Islands following targeted conservation actions." Proceedings of the National Academy of Sciences 117, no. 3 (January 6, 2020): 1587–95. http://dx.doi.org/10.1073/pnas.1917132117.

Abstract:
Many large-bodied marine fishes that form spawning aggregations, such as the Nassau grouper (Epinephelus striatus), have suffered regional overfishing due to exploitation during spawning. In response, marine resource managers in many locations have established marine protected areas or seasonal closures to recover these overfished stocks. The challenge in assessing management effectiveness lies largely in the development of accurate estimates to track stock size through time. For the past 15 y, the Cayman Islands government has taken a series of management actions aimed at recovering collapsed stocks of Nassau grouper. Importantly, the government also partnered with academic and nonprofit organizations to establish a research and monitoring program (Grouper Moon) aimed at documenting the impacts of conservation action. Here, we develop an integrated population model of 2 Cayman Nassau grouper stocks based on both diver-collected mark–resight observations and video censuses. Using both data types across multiple years, we fit parameters for a state–space model for population growth. We show that over the last 15 y the Nassau grouper population on Little Cayman has more than tripled in response to conservation efforts. Census data from Cayman Brac, while more sparse, show a similar pattern. These findings demonstrate that spatial and seasonal closures aimed at rebuilding aggregation-based fisheries can foster conservation success.
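The abstract does not spell out the state–space model. Purely as an illustration of the general idea, a generic formulation of stochastic population growth observed with error, using hypothetical symbols (N_t for true abundance, y_t for a census-derived index), looks like this; it is not the authors' exact specification:

```latex
% Illustrative state-space skeleton (hypothetical symbols) -- not the authors' exact model
\log N_{t+1} = \log N_t + r_t, \qquad r_t \sim \mathcal{N}(\bar{r},\, \sigma^{2}_{\mathrm{proc}}) \quad \text{(process: stochastic growth)}
y_t \sim \mathcal{N}(\log N_t,\, \sigma^{2}_{\mathrm{obs}}) \quad \text{(observation: census/resight index with error)}
```

Per the abstract, the fitted model draws its observation data from both diver mark–resight records and video censuses across multiple years, which is what supports the conclusion that abundance on Little Cayman has more than tripled.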
7

Brackbill, Robert M., Amy R. Kahn, Jiehui Li, Rachel Zeig-Owens, David G. Goldfarb, Molly Skerker, Mark R. Farfel, et al. "Combining Three Cohorts of World Trade Center Rescue/Recovery Workers for Assessing Cancer Incidence and Mortality." International Journal of Environmental Research and Public Health 18, no. 4 (February 3, 2021): 1386. http://dx.doi.org/10.3390/ijerph18041386.

Abstract:
Three cohorts, including the Fire Department of the City of New York (FDNY), the World Trade Center Health Registry (WTCHR), and the General Responder Cohort (GRC), each funded by the World Trade Center Health Program, have reported associations between WTC exposures and cancer. Results have generally been consistent, with effect estimates for excess incidence for all cancers ranging from 6 to 14% above background rates. Pooling would increase sample size and de-duplicate cases between the cohorts. However, pooling required time-consuming steps: obtaining Institutional Review Board (IRB) approvals and legal agreements from the entities involved; establishing an honest broker for managing the data; de-duplicating the pooled cohort files; applying to State Cancer Registries (SCRs) for matched cancer cases; and finalizing analysis data files. Obtaining SCR data use agreements took from 6.5 to 114.5 weeks, with six states requiring >20 weeks. Records from FDNY (n = 16,221), WTCHR (n = 29,372), and GRC (n = 33,427) were combined and de-duplicated, resulting in 69,102 unique individuals. Overall, 7894 cancer tumors were matched to the pooled cohort, increasing the number of cancers by as much as 58% compared to previous analyses. Pooling produced a coherent resource for future studies of rare cancers and mortality that is more representative of occupations and WTC exposures.
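For orientation, the de-duplication arithmetic implied by the counts quoted above (treating each cohort size as its pre-pooling enrollment records) is:

```latex
% Arithmetic implied by the reported counts
16{,}221_{\mathrm{FDNY}} + 29{,}372_{\mathrm{WTCHR}} + 33{,}427_{\mathrm{GRC}} = 79{,}020 \ \text{records}, \qquad 79{,}020 - 69{,}102 = 9{,}918
```

That is, roughly 9,918 duplicate enrollments (about 12.6% of the combined records) belonged to people enrolled in more than one cohort, assuming each person was counted once per cohort in which they appear.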
8

Nguyen, Thi Ha, Thi Huong Giang Nguyen, Thi Hien Luong Nguyen, Mai Anh Nguyen, and Thi Minh Thuy Nguyen. "Thực trạng kiến thức của nhân viên phục hồi chức năng cộng đồng tại huyện Quỳnh Phụ, tỉnh Thái Bình năm 2020" [Knowledge of community-based rehabilitation workers in Quynh Phu district, Thai Binh province in 2020]. Journal of Health and Development Studies 05, no. 01 (February 20, 2021): 95–103. http://dx.doi.org/10.38148/jhds.0501skpt20-054.

Abstract:
The Community-Based Rehabilitation (CBR) program has been established as a strategy to improve access to rehabilitation services by maximizing the use of local resources, the main resource being community rehabilitation workers. How well community rehabilitation workers carry out their functions and duties is therefore central to the performance of the CBR program. The study "Knowledge and task performance of community rehabilitation workers in Quynh Phu district, Thai Binh province in 2020" was conducted with the aim of describing the current state of knowledge about task performance among community rehabilitation workers in Quynh Phu district, Thai Binh province in 2020. Research method: a cross-sectional design covering the total of 114 community rehabilitation workers in Quynh Phu district, Thai Binh province. Results: The average age of the rehabilitation workers was 51.25 years; most (87.7%) were female and trained at the primary level in health. 56.2% of community rehabilitation workers achieved the required knowledge for performing the tasks of a community rehabilitation worker. The percentages achieving the required knowledge for detecting and reporting the condition of persons with disabilities and assessing rehabilitation needs, implementing community-based rehabilitation, and follow-up management were 64.9%, 59.6%, and 81.5%, respectively. Conclusion: The percentage of workers with the required knowledge of task performance is not high (52.6%); knowledge of follow-up management for persons with disabilities was the highest (85.1%), and knowledge of implementing CBR for persons with disabilities (PWDs) was the lowest (59.6%). Keywords: knowledge, rehabilitation workers, community, knowledge and task performance of community rehabilitation workers in Quynh Phu District, Thai Binh province, 2020.
9

Ivantsova, Ekaterina Dmitrievna. "Investment encouragement mechanisms in forestry sector: Analysis of global experience and its viability in Russia." Вестник Пермского университета. Серия «Экономика» = Perm University Herald. ECONOMY 15, no. 4 (2020): 566–86. http://dx.doi.org/10.17072/1994-9960-2020-4-566-586.

Abstract:
Undoubtedly, a forestry sector is an integral element in the economy of Russia. A forestry sector is defined to be a set of industries, including forest industry and forestry, and its relevant tasks today are to improve the competitiveness of the forest industry and to provide the advanced growth for the sector on the whole. One of the barriers preventing the Russian forestry sector from development is a low recycling degree of raw wood, which, in its turn, is determined by a deficit of wood processing enterprises and underdeveloped investment encouragement mechanisms. Efficient investment strategy should account for the national and international practices, which supports the relevance of this research. The purpose of the study is to systematize the best global practices in investment encouragement in the forestry sector to reason their application in national context with regard to the institutional and natural climatic features of the development in the forest industry and forestry in Russia. Methodology of the research includes general scientific methods, as well as a comprehensive approach aimed to analyze the most relevant national and international materials and reports from the specialized institutions dealing with the issues concerning the forest and forest resources management. The scientific novelty of the research focuses on the classification of the best global practices in investment encouragement in the forestry sector of economy, the classification summarizes the experiences of the leading countries in forestry. The study identifies the most popular investment encouragement methods, including administrative and economic measures, in the leading countries in the respective forest resource reserves. The programs of non-financial support for the forest land users, consultations, and educational programs, R&D encouragement are among the administrative measures. Economic measures cover the fiscal and monetary policy tools, including concessional taxation, public subsidies, joint investment of the projects, and soft loans. The paper proves that direct transfer from the international practices will not give any significant positive effect and development in the forestry sector of Russia and will not improve the competitiveness of the national forest processing companies as the investment encouragement measures should account for the natural climatic, social economic and institutional features of the forestry sector. One should take into account that the forest land property rights belong to the state in the Russian Federation. With these features in mind and the classification of the best global practices in investment encouragement in the forestry sector of economy in hand, it has been found that the support measures which work in Canada are likely to be the most efficient ones under the national conditions because Canada runs similar forest land property rights. Canada practises R&D encouragement and public subsidies which could be implemented in Russia as a type of public-private partnership or other types of joint investment of the projects in the forestry sector. Therefore, the research carefully looks at the public measure started back in 2007 and aimed at large-scale investment projects in forest exploration and identifies a number of associated problems. 
The most burning issues are as follows: national investors are not sufficiently interested in the project completion, the products from the forest processing enterprises have low profitability, which is determined by high electricity and railway tariffs, there is no spatial distribution scheme for particular types of production with regard to the availability of forest resources and the needs of the domestic market in timber and paper products, wood is harvested illegally on the rented plots designed to be used for the priority investment projects, the forest resources are not sufficiently applied and recovered, the deadlines and other project’s parameters are violated, and the feedback links between the enterprises and the authorities monitoring the projects are underdeveloped. Along with that, the practices of public subsidy programs show that acquiring the status of a priority investment project in forest exploration is seen to be one of the most efficient measures in investment encouragement in the forestry sector. Further research should focus on the development of a comprehensive approach to the analysis of the efficiency of the priority investment projects to justify the offers in investment encouragement mechanism improvement in the forestry sector in Russia. This approach should be based on the analysis of the econometric factors which determine the success of the projects.
10

Yoshioka, Gary A., Lori Jonas, and Katherine E. Armstrong. "EMERGENCY REPORTING FOR OIL DISCHARGES: RECENT STATUTORY AND REGULATORY CHANGES." International Oil Spill Conference Proceedings 1989, no. 1 (February 1, 1989): 253–56. http://dx.doi.org/10.7901/2169-3358-1989-1-253.

Full text
Abstract:
ABSTRACT The principal trigger for federal reporting of discharges of oil remains the “sheen test” promulgated in 1970 under the authority of the Clean Water Act. The sheen test is not, however, the only requirement of concern to potential dischargers of oil. Certain provisions of the Comprehensive Environmental Response, Compensation, and Liability Act of 1980 (CERCLA), of Title III of the Superfund Amendments and Reauthorization Act of 1986 (SARA), and of other environmental statutes may also apply, complicating the picture for the regulated community. CERCLA requires federal reporting of releases of hazardous substances, but contains an exclusion for most releases of petroleum products. The CERCLA “petroleum exclusion” is not, however, absolute, and recent policy documents by the U.S. Environmental Protection Agency (EPA) have clarified the scope of the exclusion. Releases of certain petroleum waste streams listed under the Resource Conservation and Recovery Act (RCRA), for example, are not excluded. Title III of SARA, which expands the emergency reporting requirements of CERCLA to include notification of state and local response authorities for releases of CERCLA hazardous substances and of listed “extremely hazardous substances” (EHSs) does not contain the equivalent of the CERCLA petroleum exclusion. To the extent that the list of EHSs includes certain constituents of petroleum products, it is possible that state and local reporting will be required under SARA Title III for oil discharges otherwise (or previously) exempted from federal reporting under CERCLA. In 1988, a regulatory program for reporting leaks from underground storage tanks was established under Subtitle I of RCRA. In addition to these federal requirements, statutory provisions enacted by various states also affect oil discharge reporting. This paper presents a review of the changing regulatory picture for oil discharge reporting requirements by first examining recent regulatory and judicial developments related to the oil sheen test, and then exploring the potential impact of other federal and state programs. Summary tables comparing federal provisions are provided as a useful guide for determining the scope of these programs.
11

Symons, Lisa, and Heather A. Parker-Hall. "The SS Jacob Luckenbach: Integration of NOAA (National Oceanic and Atmospheric Administration) Trust Issues into the Response1." International Oil Spill Conference Proceedings 2003, no. 1 (April 1, 2003): 649–53. http://dx.doi.org/10.7901/2169-3358-2003-1-649.

Full text
Abstract:
ABSTRACT Since at least 1992, state and federal trustees have struggled to deal with episodic “mystery” spills that have impacted thousands of seabirds and compromised hundreds of miles of California coastline. In November 2001, another of these mystery events spurred the United States Coast Guard (USCG), state, and federal trustees to initiate a cooperative response and investigation. As impacts from the same oil type continued into January, it soon became evident that the oil most probably stemmed from a submerged source and not from transient vessels. By February 2002, a source was identified for this and many of the previous mystery spills: the 1953 wreck of the cargo ship SS Jacob Luckenbach, fully fuelled and laden with materials for the Korean War effort. The vessel now sits in 176 feet of water, 17 miles off San Francisco Bay in the Gulf of the Farallones National Marine Sanctuary. The Luckenbach itself is an historic resource, protected by the National Historic Preservation Act (NHPA) 16 U.S.C. 470 et seq. and the National Marine Sanctuary Act (NMSA) 16 U.S.C. 1431 et seq. as amended by Public Law 106–513. The wreck rests in one of the most biologically productive regions of California, home to countless sensitive resources including several listed species, and is within a series of marine protected areas. The Unified Command (UC), composed of the USCG, the California Department of Fish and Game's Office of Spill Prevention and Response (OSPR) and other state and federal agencies, was faced with an unusual set of challenges: finding accurate historical information about the vessel and its cargo, determining liability, and coordinating salvage and recovery operations complicated by both historical and ecological trustee issues during the Sanctuary's most biologically active and sensitive season. NOAA's National Marine Sanctuary Program (NMSP) played a particularly strong role in this response. Linked closely to the UC through NOAA's Scientific Support Coordinator, the NMSP provided invaluable support in determining possible sources: it engaged knowledgeable local divers in the process, located key historical documentation about the wreck, tracked down original owners and hull insurers, and assisted in coordinating input from all trustees. Closely integrated coordination was a key factor in preparing for and determining the outcome of this response.
12

van Rees, Charles B., Paul R. Chang, Jillian Cosgrove, David W. DesRochers, Hugo K. W. Gee, Jennifer L. Gutscher-Chutz, Aaron Nadig, et al. "Estimation of Vital Rates for the Hawaiian Gallinule, a Cryptic, Endangered Waterbird." Journal of Fish and Wildlife Management 9, no. 1 (February 8, 2018): 117–31. http://dx.doi.org/10.3996/102017-jfwm-084.

Full text
Abstract:
Abstract Vital rates describe the demographic traits of organisms and are an essential resource for wildlife managers to assess local resource conditions and to set objectives for and evaluate management actions. Endangered waterbirds on the Hawaiian Islands have been managed intensively at state and federal refuges since the 1970s, but with little quantitative research on their life history. Information on the vital rates of these taxa is needed to assess the efficacy of different management strategies and to target parts of the life cycle that may be limiting their recovery. Here, we present the most comprehensive data to date on the vital rates (reproduction and survival) of the Hawaiian gallinule Gallinula galeata sandvicensis, a behaviorally cryptic, endangered subspecies of wetland bird endemic to the Hawaiian Islands that is now found only on Kaua‘i and O‘ahu. We review unpublished reproduction data for 252 nests observed between 1979 and 2014 and assess a database of 1,620 sightings of 423 individually color-banded birds between 2004 and 2017. From the resighting data, we estimated annual apparent survival at two managed wetlands on O‘ahu using Cormack–Jolly–Seber models in program MARK. We found that Hawaiian gallinules have smaller mean clutch sizes than do other species in the genus Gallinula and that clutch sizes on Kaua‘i are larger than those on O‘ahu. The longest-lived bird in our dataset was recovered dead at age 7 y and 8 mo, and the youngest confirmed age at first breeding was 1 y and 11 mo. In 4 y of monitoring 14 wetland sites, we confirmed three interwetland movements on O‘ahu. In our pooled dataset, we found no statistically significant differences between managed and unmanaged wetlands in clutch size or reproductive success, but we acknowledge that there were limited data from unmanaged wetlands. Our best supported survival models estimated an overall annual apparent survival of 0.663 (95% CI = 0.572–0.759); detection varied across wetlands and study years. First-year survival is a key missing component in our understanding of the demography of Hawaiian gallinules. These data provide the foundation for quantitative management and assessment of extinction risk of this endangered subspecies.
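The survival estimates above come from Cormack–Jolly–Seber (CJS) models fit in program MARK. As a generic reminder of how a CJS model separates apparent survival (φ) from detection (p) — not the authors' specific parameterization — the probability of an example encounter history “101” (marked and seen at occasion 1, missed at 2, resighted at 3) factors as:

```latex
% Example encounter history "101": released at occasion 1, not detected at 2, resighted at 3
\Pr(101) = \phi_1 \,(1 - p_2)\, \phi_2 \, p_3
```

Combining many such histories is what lets the model distinguish imperfect detection from true absence, which is why the study could let detection vary across wetlands and years while estimating an overall apparent survival of 0.663.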
13

Dotsenko, V. V. "Methods of developing stress resistance of law enforcement officers at the stage of professional training." Law and Safety 69, no. 2 (December 26, 2018): 29–35. http://dx.doi.org/10.32631/pb.2018.2.04.

Full text
Abstract:
The results of a theoretical analysis of modern directions, approaches and methods of stress management are presented. The classification of stress management methods depends on the type of psychotherapy, the direction of work with stress, the time parameters of interaction with stress factors, the method of influencing the functional state, the method of anti-stress influence, etc. Based on the analysis of various scientific approaches, the methods of developing the ability to handle stress among law enforcers who study in higher education institutions with specific learning conditions are systematized and divided into three branches: prevention, neutralization and correction of stress, and recovery of the organism's resources. A series of trainings aimed at the formation and development of stress resistance and resource conservation among police officers at the stage of professional training is presented. For first-year cadets of Kharkiv National University of Internal Affairs, a training program “Adaptation” was developed, the purpose of which is to develop the skills and abilities of personal self-organization that are essential for studying in higher education institutions with specific learning conditions; to develop responsibility, social courage, high standards of behavior and achievement motivation; and to develop active and prosocial models of behavior. For second-year cadets, a training program “Stress and Lifestyle” was developed, the purpose of which is to instil rules of psycho-hygiene and mastery of methods of self-regulation of stress. For third-year cadets there is a training “Professional stress”, whose task is to develop responsibility for personal development, promote self-realization, and support the formation of a cadet as a self-sufficient creative person. Also for third-year cadets, we offer the personal growth training “Life design of the person”, the purpose of which is to form the need for an active life position, willingness for self-development and self-improvement, and increased responsibility for one’s own life. On the basis of the research, conclusions were drawn on the expediency of the integrated implementation of the training system as a means of forming and developing stress resistance and enhancing the existing personal resources of police officers at various stages of professional training.
14

Curtis, Lisa M., James George, Volker Vallon, Stephen Barnes, Victor Darley-Usmar, Sucheta Vaingankar, Gary R. Cutter, et al. "UAB-UCSD O’Brien Center for Acute Kidney Injury Research." American Journal of Physiology-Renal Physiology 320, no. 5 (May 1, 2021): F870—F882. http://dx.doi.org/10.1152/ajprenal.00661.2020.

Full text
Abstract:
Acute kidney injury (AKI) remains a significant clinical problem through its diverse etiologies, the challenges of robust measurements of injury and recovery, and its progression to chronic kidney disease (CKD). Bridging the gap in our knowledge of this disorder requires bringing together not only the technical resources for research but also the investigators currently endeavoring to expand our knowledge and those who might bring novel ideas and expertise to this important challenge. The University of Alabama at Birmingham-University of California-San Diego O’Brien Center for Acute Kidney Injury Research brings together technical expertise and programmatic and educational efforts to advance our knowledge in these diverse issues and the required infrastructure to develop areas of novel exploration. Since its inception in 2008, this O’Brien Center has grown its impact by providing state-of-the-art resources in clinical and preclinical modeling of AKI, a bioanalytical core that facilitates measurement of critical biomarkers, including serum creatinine via LC-MS/MS among others, and a biostatistical resource that assists from design to analysis. Through these core resources and with additional educational efforts, our center has grown its investigator base to include >200 members from 51 institutions. Importantly, this center has translated its pilot and catalyst funding program with a $37 return per dollar invested. Over 500 publications have resulted from the support provided with a relative citation ratio of 2.18 ± 0.12 (iCite). Through its efforts, this disease-centric O’Brien Center is providing the infrastructure and focus to help the development of the next generation of researchers in the basic and clinical science of AKI. This center creates the promise of the application at the bedside of the advances in AKI made by current and future investigators.
15

Seal, Marion. "Health advance directives, policy and clinical practice: a perspective on the synergy of an effective advance care planning framework." Australian Health Review 34, no. 1 (2010): 80. http://dx.doi.org/10.1071/ah09784.

Full text
Abstract:
The delivery of quality care at the end of life should be seamless across all health care settings and independent of variables such as institutional largeness, charismatic leadership, funding sources and blind luck … People have come to fear the prospect of a technologically protracted death or abandonment with untreated emotional and physical stress. (Field and Castle cited in Fins et al., p. 1–2). 1 Australians are entitled to plan in advance the medical treatments they would allow in the event of incapacity using advance directives (ADs). A critical role of ADs is protecting people from unwanted, inappropriate cardiopulmonary resuscitation (CPR) at the end stage of life. Generally, ADs are enacted in the context of medical evaluation. However, first responders to a potential cardiac arrest are often non-medical, and in the absence of medical instruction, default CPR applies unless there is a clear AD CPR refusal on hand and policy supports compliance. Such policy occurs in jurisdictions where the qualifying or actioning scope of statute ADs is prescriptive enough for organisations to expect all health professionals to observe them appropriately. ADs under common law, or statute ADs similar in nature, are open to broader clinical translation because the operational criteria are set by the patient; accordingly, the corresponding policy examples require initial medical evaluation to determine their application. Advance care planning (ACP) programs can help bring AD legislation to effect (J. Cashmore, speech at the launch of the Respecting Patient Choices Program at The Queen Elizabeth Hospital, Adelaide, SA, 2004). However, the efficacy of AD CPR refusal depends on the synergy of prevailing AD legislation and ensuing policy. When delivery fails, democratic AD law is bypassed by paradigms such as the Physician Orders for Life-Sustaining Treatment (POLST) community form, as flagged in Australian Resuscitation Council guidelines. 2 Amidst Australian AD review and statute reform, this paper offers a perspective on the attributes of a working AD model, drawing on the Respecting Patient Choices Program (RPCP) experience at The Queen Elizabeth Hospital (TQEH) under SA law. The SA Consent to Medical Treatment and Palliative Care Act 1995 and its ‘Anticipatory Direction’ have been foundational to policy enabling non-medical first responders to honour ADs when the patient is at the end stage of life with no real prospect of recovery. 3 The ‘Anticipatory Direction’ provision also stands to direct appointed surrogate decision-makers. It attunes with health discipline ethics codes, does not require a pre-existing medical condition, and can be completed independently in the community. Conceivably, the model offers a national AD option, able to deliver AD CPR refusals, as an adjunct to existing common law and statute provisions. This paper represents only the views of the author and does not constitute legal advice.

What is known about the topic? Differences in advance directive (AD) frameworks across Australian states and territories, and between legislated and common law, can be confusing. 4 Therefore, health professionals need policy clarifying their expected response. Although it is assumed that ADs, including CPR refusals at the end of life, will be respected, the provision may be ineffectual unless statute legislation is conducive to policy authorising non-medical first responders to an emergency to observe clear AD CPR refusals.
Inappropriate, unwanted CPR can leave a person indefinitely in a condition they may have previously deemed intolerable. Such intervention also causes distress to staff and families and ties up resources in high-demand settings.

What does this paper add? Effectual AD law needs not only to enshrine the rights of individuals; the provision also needs to be deliverable. To be deliverable, statute AD formulation or operational criteria need to be appropriately scoped so that organisations, through policy, are prepared to legally support nurses and ambulance officers in making a medically unsupervised decision to observe clear CPR refusals. This is a critical provision, given that ADs in common law (or similar statute) can apply broadly and, in policy examples, require medical authorisation to enact in order to ensure the person’s operational terms are clinically indicated. Moreover, compliance from health professionals (by act or omission) with in-situ ADs in an unavoidable emergency cannot be assumed unless the scope harmonises with ethics codes. This paper identifies a working model of AD delivery in SA under the Consent to Medical Treatment and Palliative Care Act 1995 through the Respecting Patient Choices Program.

What are the implications for practitioners? A clear, robust AD framework is vital for the appropriate care and peace of mind of those approaching the end of life. A nationally recognised AD option is suggested to avail people, particularly the elderly, of their legal right to grant or refuse consent to CPR at the end of life. ADs should not exclude those without medical conditions from making advance refusals, but to ensure appropriate delivery in an emergency response, they need to be scoped so that they will not be prematurely enacted yet remain clinically and ethically safe for all health professionals to operationalise. Failure to achieve this may give rise to systems bypassing legislation, such as the American Physician Orders for Life-Sustaining Treatment (POLST) example. It is suggested that the current SA Anticipatory Direction under the Consent to Medical Treatment and Palliative Care Act 1995 provides a model of legislation producing a framework able to deliver such AD expectations, evidenced by supportive acute and community organisational policies.

Definitions. Advance care planning (ACP) is a process whereby a person (ideally ‘in consultation with health care providers, family members and important others’ 5 ) decides on and ‘makes known choices regarding possible future medical treatment and palliative care, in the event that they lose the ability to speak for themselves’ (Office of the Public Advocate, South Australia, see www.opa.sa.gov.au). Advance directives (ADs) in this paper refer to legal documents, or informal documents under common law, containing individuals’ instructions consenting to or refusing future medical treatment in certain circumstances when criteria in the law are met. A legal advance directive may also appoint a surrogate decision-maker.
16

Hayashi, Haruo. "Long-term Recovery from Recent Disasters in Japan and the United States." Journal of Disaster Research 2, no. 6 (December 1, 2007): 413–18. http://dx.doi.org/10.20965/jdr.2007.p0413.

Abstract:
In this issue of Journal of Disaster Research, we introduce nine papers on societal responses to recent catastrophic disasters with special focus on long-term recovery processes in Japan and the United States. As disaster impacts increase, we also find that recovery times take longer and the processes for recovery become more complicated. On January 17th of 1995, a magnitude 7.2 earthquake hit the Hanshin and Awaji regions of Japan, resulting in the largest disaster in Japan in 50 years. In this disaster which we call the Kobe earthquake hereafter, over 6,000 people were killed and the damage and losses totaled more than 100 billion US dollars. The long-term recovery from the Kobe earthquake disaster took more than ten years to complete. One of the most important responsibilities of disaster researchers has been to scientifically monitor and record the long-term recovery process following this unprecedented disaster and discern the lessons that can be applied to future disasters. The first seven papers in this issue present some of the key lessons our research team learned from the studying the long-term recovery following the Kobe earthquake disaster. We have two additional papers that deal with two recent disasters in the United States – the terrorist attacks on World Trade Center in New York on September 11 of 2001 and the devastation of New Orleans by the 2005 Hurricane Katrina and subsequent levee failures. These disasters have raised a number of new research questions about long-term recovery that US researchers are studying because of the unprecedented size and nature of these disasters’ impacts. Mr. Mammen’s paper reviews the long-term recovery processes observed at and around the World Trade Center site over the last six years. Ms. Johnson’s paper provides a detailed account of the protracted reconstruction planning efforts in the city of New Orleans to illustrate a set of sufficient and necessary conditions for successful recovery. All nine papers in this issue share a theoretical framework for long-term recovery processes which we developed based first upon the lessons learned from the Kobe earthquake and later expanded through observations made following other recent disasters in the world. The following sections provide a brief description of each paper as an introduction to this special issue. 1. The Need for Multiple Recovery Goals After the 1995 Kobe earthquake, the long-term recovery process began with the formulation of disaster recovery plans by the City of Kobe – the most severely impacted municipality – and an overarching plan by Hyogo Prefecture which coordinated 20 impacted municipalities; this planning effort took six months. Before the Kobe earthquake, as indicated in Mr. Maki’s paper in this issue, Japanese theories about, and approaches to, recovery focused mainly on physical recovery, particularly: the redevelopment plans for destroyed areas; the location and standards for housing and building reconstruction; and, the repair and rehabilitation of utility systems. But the lingering problems of some of the recent catastrophes in Japan and elsewhere indicate that there are multiple dimensions of recovery that must be considered. We propose that two other key dimensions are economic recovery and life recovery. The goal of economic recovery is the revitalization of the local disaster impacted economy, including both major industries and small businesses. The goal of life recovery is the restoration of the livelihoods of disaster victims. 
The recovery plans formulated following the 1995 Kobe earthquake, including the City of Kobe’s and Hyogo Prefecture’s plans, all stressed these two dimensions in addition to physical recovery. The basic structure of both the City of Kobe’s and Hyogo Prefecture’s recovery plans are summarized in Fig. 1. Each plan has three elements that work simultaneously. The first and most basic element of recovery is the restoration of damaged infrastructure. This helps both physical recovery and economic recovery. Once homes and work places are recovered, Life recovery of the impacted people can be achieved as the final goal of recovery. Figure 2 provides a “recovery report card” of the progress made by 2006 – 11 years into Kobe’s recovery. Infrastructure was restored in two years, which was probably the fastest infrastructure restoration ever, after such a major disaster; it astonished the world. Within five years, more than 140,000 housing units were constructed using a variety of financial means and ownership patterns, and exceeding the number of demolished housing units. Governments at all levels – municipal, prefectural, and national – provided affordable public rental apartments. Private developers, both local and national, also built condominiums and apartments. Disaster victims themselves also invested a lot to reconstruct their homes. Eleven major redevelopment projects were undertaken and all were completed in 10 years. In sum, the physical recovery following the 1995 Kobe earthquake was extensive and has been viewed as a major success. In contrast, economic recovery and life recovery are still underway more than 13 years later. Before the Kobe earthquake, Japan’s policy approaches to recovery assumed that economic recovery and life recovery would be achieved by infusing ample amounts of public funding for physical recovery into the disaster area. Even though the City of Kobe’s and Hyogo Prefecture’s recovery plans set economic recovery and life recovery as key goals, there was not clear policy guidance to accomplish them. Without a clear articulation of the desired end-state, economic recovery programs for both large and small businesses were ill-timed and ill-matched to the needs of these businesses trying to recover amidst a prolonged slump in the overall Japanese economy that began in 1997. “Life recovery” programs implemented as part of Kobe’s recovery were essentially social welfare programs for low-income and/or senior citizens. 2. Requirements for Successful Physical Recovery Why was the physical recovery following the 1995 Kobe earthquake so successful in terms of infrastructure restoration, the replacement of damaged housing units, and completion of urban redevelopment projects? There are at least three key success factors that can be applied to other disaster recovery efforts: 1) citizen participation in recovery planning efforts, 2) strong local leadership, and 3) the establishment of numerical targets for recovery. Citizen participation As pointed out in the three papers on recovery planning processes by Mr. Maki, Mr. Mammen, and Ms. Johnson, citizen participation is one of the indispensable factors for successful recovery plans. Thousands of citizens participated in planning workshops organized by America Speaks as part of both the World Trade Center and City of New Orleans recovery planning efforts. 
Although no such workshops were held as part of the City of Kobe’s recovery planning process, citizen participation had been part of the City of Kobe’s general plan update that had occurred shortly before the earthquake. The City of Kobe’s recovery plan is, in large part, an adaptation of the 1995-2005 general plan. On January 13 of 1995, the City of Kobe formally approved its new, 1995-2005 general plan which had been developed over the course of three years with full citizen participation. City officials, responsible for drafting the City of Kobe’s recovery plan, have later admitted that they were able to prepare the city’s recovery plan in six months because they had the preceding three years of planning for the new general plan with citizen participation. Based on this lesson, Odiya City compiled its recovery plan based on the recommendations obtained from a series of five stakeholder workshops after the 2004 Niigata Chuetsu earthquake. Fig. 1. Basic structure of recovery plans from the 1995 Kobe earthquake. Fig. 2. “Disaster recovery report card” of the progress made by 2006. Strong leadership In the aftermath of the Kobe earthquake, local leadership had a defining role in the recovery process. Kobe’s former Mayor, Mr. Yukitoshi Sasayama, was hired to work in Kobe City government as an urban planner, rebuilding Kobe following World War II. He knew the city intimately. When he saw damage in one area on his way to the City Hall right after the earthquake, he knew what levels of damage to expect in other parts of the city. It was he who called for the two-month moratorium on rebuilding in Kobe city on the day of the earthquake. The moratorium provided time for the city to formulate a vision and policies to guide the various levels of government, private investors, and residents in rebuilding. It was a quite unpopular policy when Mayor Sasayama announced it. Citizens expected the city to be focusing on shelters and mass care, not a ban on reconstruction. Based on his experience in rebuilding Kobe following WWII, he was determined not to allow haphazard reconstruction in the city. It took several years before Kobe citizens appreciated the moratorium. Numerical targets Former Governor Mr. Toshitami Kaihara provided some key numerical targets for recovery which were announced in the prefecture and municipal recovery plans. They were: 1) Hyogo Prefecture would rebuild all the damaged housing units in three years, 2) all the temporary housing would be removed within five years, and 3) physical recovery would be completed in ten years. All of these numerical targets were achieved. Having numerical targets was critical to directing and motivating all the stakeholders including the national government’s investment, and it proved to be the foundation for Japan’s fundamental approach to recovery following the 1995 earthquake. 3. Economic Recovery as the Prime Goal of Disaster Recovery In Japan, it is the responsibility of the national government to supply the financial support to restore damaged infrastructure and public facilities in the impacted area as soon as possible. The long-term recovery following the Kobe earthquake is the first time, in Japan’s modern history, that a major rebuilding effort occurred during a time when there was not also strong national economic growth. In contrast, between 1945 and 1990, Japan enjoyed a high level of national economic growth which helped facilitate the recoveries following WWII and other large fires.
In the first year after the Kobe earthquake, Japan’s national government invested more than US$ 80 billion in recovery. These funds went mainly towards the repair and reconstruction of infrastructure and public facilities. Now, looking back, we can also see that these investments also nearly crushed the local economy. Too much money flowed into the local economy over too short a period of time and it also did not have the “trickle-down” effect that might have been intended. To accomplish numerical targets for physical recovery, the national government awarded contracts to large companies from Osaka and Tokyo. But, these large out-of-town contractors also tended to have their own labor and supply chains already intact, and did not use local resources and labor, as might have been expected. Essentially, ten years of housing supply was completed in less than three years, which led to a significant local economic slump. Large amounts of public investment for recovery are not necessarily a panacea for local businesses, and local economic recovery, as shown in the following two examples from the Kobe earthquake. A significant national investment was made to rebuild the Port of Kobe to a higher seismic standard, but both its foreign export and import trade never recovered to pre-disaster levels. While the Kobe Port was out of business, both the Yokohama Port and the Osaka Port increased their business, even though many economists initially predicted that the Kaohsiung Port in Chinese Taipei or the Pusan Port in Korea would capture this business. Business stayed at all of these ports even after the reopening of the Kobe Port. Similarly, the Hanshin Railway was severely damaged and it took half a year to resume its operation, but it never regained its pre-disaster readership. In this case, two other local railway services, the JR and Hankyu lines, maintained their increased readership even after the Hanshin railway resumed operation. As illustrated by these examples, pre-disaster customers who relied on previous economic output could not necessarily afford to wait for local industries to recover and may have had to take their business elsewhere. Our research suggests that the significant recovery investment made by Japan’s national government may have been a disincentive for new economic development in the impacted area. Government may have been the only significant financial risk-taker in the impacted area during the national economic slow-down. But, its focus was on restoring what had been lost rather than promoting new or emerging economic development. Thus, there may have been a missed opportunity to provide incentives or put pressure on major businesses and industries to develop new businesses and attract new customers in return for the public investment. The significant recovery investment by Japan’s national government may have also created an over-reliance of individuals on public spending and government support. As indicated in Ms. Karatani’s paper, individual savings of Kobe’s residents has continued to rise since the earthquake and the number of individuals on social welfare has also decreased below pre-disaster levels. 
Based on our research on economic recovery from the Kobe earthquake, at least two lessons emerge: 1) successful economic recovery requires coordination among all three recovery goals – economic, physical, and life recovery; and 2) “recovery indices” are needed to better chart recovery progress in real time and to help ensure that recovery investments are being used effectively.

Economic recovery as the prime goal of recovery

Physical recovery, especially the restoration of infrastructure and public facilities, may be the most direct and socially accepted provision of outside financial assistance into an impacted area. However, lessons learned from the Kobe earthquake suggest that the sheer amount of such assistance may not be as effective as it should be. Thus, as shown in Fig. 3, economic recovery should be the top-priority goal among the three recovery goals and serve as a guiding force for physical recovery and life recovery. Physical recovery can be a powerful facilitator of post-disaster economic development by upgrading social infrastructure and public facilities in compliance with economic recovery plans. In this way, it is possible to turn a disaster into an opportunity for future sustainable development. Life recovery may also be achieved with a healthy economic recovery that increases tax revenue in the impacted area. In order to achieve this coordination among all three recovery goals, municipalities in the impacted areas should have access to flexible forms of post-disaster financing. The community development block grant program that has been used after several large disasters in the United States provides impacted municipalities with a more flexible form of funding and the ability to better determine what to do and when. The participation of key stakeholders is also an indispensable element of success that enables block grant programs to transform local needs into concrete businesses. In sum, an effective economic recovery combines good coordination of national support to restore infrastructure and public facilities with local initiatives that promote community recovery.

Developing Recovery Indices

Long-term recovery takes time. As Mr. Tatsuki’s paper explains, periodic social survey data indicate that it took ten years before the initial impacts of the Kobe earthquake no longer affected the well-being of disaster victims and the recovery was complete. In order to manage this long-term recovery process effectively, it is important to have indices that visualize the recovery process. In this issue, three papers by Mr. Takashima, Ms. Karatani, and Mr. Kimura define three different kinds of recovery indices that can be used to continually monitor the progress of the recovery. Mr. Takashima focuses on electric power consumption in the impacted area as an index of impact and recovery. Chronological change in electric power consumption can be obtained from the monthly reports of power company branches. Daily estimates can also be made by tracking changes in city lights using the DMSP satellites. Changes in city lights can be a very useful recovery measure, especially at the early stages, since they can be updated daily for anywhere in the world. Ms. Karatani focuses on the chronological patterns of monthly macro-statistics that prefecture and city governments collect as part of their routine monitoring of services and operations.
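As a purely illustrative example of how monthly statistics of this kind can be turned into a recovery index, the sketch below normalizes post-disaster monthly values against a pre-disaster baseline average. The data values and the 12-month baseline window are assumptions chosen for demonstration, not figures from the papers discussed here.

```python
# Hypothetical sketch of a simple recovery index built from monthly
# statistics (e.g., electric power consumption). The values and the
# 12-month pre-disaster baseline are illustrative assumptions only.

def recovery_index(monthly_values, disaster_month, baseline_months=12):
    """Return each post-disaster month's value as a ratio of the
    pre-disaster baseline average (1.0 = back to baseline)."""
    baseline = monthly_values[disaster_month - baseline_months:disaster_month]
    baseline_avg = sum(baseline) / len(baseline)
    return [v / baseline_avg for v in monthly_values[disaster_month:]]

# Example with made-up consumption figures (GWh per month):
consumption = [100, 102, 98, 101, 99, 103, 100, 97, 102, 100, 101, 99,  # pre-disaster year
               60, 70, 80, 85, 90, 94, 97, 99, 100, 101, 100, 102]      # post-disaster months
index = recovery_index(consumption, disaster_month=12)
print(index[:4])  # roughly [0.6, 0.7, 0.8, 0.85] relative to baseline
```

The papers in this issue, of course, build far richer indices than this simple ratio, as the following discussion makes clear.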
For researchers, it is extremely costly and virtually impossible to launch post-disaster projects that collect recovery data continuously for ten years. It is more practical for researchers to utilize data that are already being collected by local governments or other agencies and to use these data to create disaster impact and recovery indices. Ms. Karatani found three basic patterns of disaster impact and recovery in the local government data that she studied: 1) some activities increased soon after the disaster event and then slumped, such as housing construction; 2) some activities dropped sharply for a period of time after the disaster and then rebounded to previous levels, such as grocery consumption; and 3) some activities dropped sharply for a while and never returned to previous levels, such as the Kobe Port and the Hanshin Railway. Mr. Kimura focuses on the psychology of disaster victims. He developed a “recovery and reconstruction calendar” that clarifies the process that disaster victims undergo in rebuilding their shattered lives. His work is based on the results of random surveys. Despite differences in disaster size and locality, survey data from the 1995 Kobe earthquake and the 2004 Niigata-ken Chuetsu earthquake indicate that the recovery and reconstruction calendar is highly reliable and stable in clarifying the recovery and reconstruction process.

Fig. 3. Integrated plan of disaster recovery.

4. Life Recovery as the Ultimate Goal of Disaster Recovery

Life recovery starts with the identification of the disaster victims. In Japan, local governments in the impacted area issue a “damage certificate” to disaster victims by household, recording the extent of each victim’s housing damage. After the Kobe earthquake, a total of 500,000 certificates were issued. These certificates, in turn, were used by both public and private organizations to determine victims’ eligibility for individual assistance programs. However, about 30% of the victims who received certificates after the Kobe earthquake were dissatisfied with the results of the assessment, which caused long and severe disputes for more than three years. Based on the lessons learned from the Kobe earthquake, Mr. Horie’s paper presents (1) a standardized procedure for building damage assessment and (2) an inspector training system. This system has been adopted as the official building damage assessment system for issuing damage certificates to victims of the 2004 Niigata-ken Chuetsu earthquake, the 2007 Noto-Peninsula earthquake, and the 2007 Niigata-ken Chuetsu Oki earthquake.

Personal and family recovery, which we term life recovery, was one of the explicit goals of the recovery plan from the Kobe earthquake, but it was unclear in both recovery theory and practice how this would be measured and accomplished. Now, after studying the recovery in Kobe and other regions, Ms. Tamura’s paper proposes that there are seven elements that define the meaning of life recovery for disaster victims. She recently tested this model in a workshop with Kobe disaster victims. The seven elements and victims’ rankings are shown in Fig. 4. Regaining housing and restoring social networks were, by far, the top recovery indicators for victims. Restoration of neighborhood character ranked third. Demographic shifts and redevelopment plans implemented following the Kobe earthquake forced significant neighborhood changes upon many victims.
Next in line were: having a sense of being better prepared and reducing their vulnerability to future disasters; regaining their physical and mental health; and restoration of their income, job, and the economy. The provision of government assistance also provided victims with a sense of life recovery. Mr. Tatsuki’s paper summarizes the results of four random-sample surveys of residents within the most severely impacted areas of Hyogo Prefecture. These surveys have been conducted biannually since 1999. Based on the results of the survey data from 1999, 2001, 2003, and 2005, it is our conclusion that life recovery took ten years for victims in the areas significantly impacted by the Kobe earthquake. Fig. 5 compares the two structural equation models of disaster recovery (from 2003 and 2005): damage caused by the Kobe earthquake was no longer a determinant of life recovery in the 2005 model, although it was still one of the major determinants in the 2003 model, as it had been in 1999 and 2001. This is the first time in the history of disaster research that the entire recovery process has been scientifically described. It can be utilized as a resource and provide benchmarks for monitoring the recovery from future disasters.

Fig. 4. Ethnographical meaning of “life recovery” obtained from the 5th year review of the Kobe earthquake by the City of Kobe.

Fig. 5. Life recovery models of 2003 and 2005.

6. The Need for an Integrated Recovery Plan

The recovery lessons from Kobe and other regions suggest that we need more integrated recovery plans that use physical recovery as a tool for economic recovery, which in turn helps disaster victims. Furthermore, we believe that economic recovery should be the top priority for recovery, and physical recovery should be regarded as a tool for stimulating economic recovery and upgrading social infrastructure (as shown in Fig. 6). With this approach, disaster recovery can help build the foundation for a long-lasting and sustainable community. Fig. 6 proposes a more detailed model of a holistic recovery process. The ultimate goal of any recovery process should be achieving life recovery for all disaster victims. We believe that to get there, both direct and indirect approaches must be taken. Direct approaches include the provision of funds and goods for victims, for physical and mental health care, and for housing reconstruction. Indirect approaches to life recovery are those that facilitate economic recovery, which itself has both direct and indirect approaches. Direct approaches to economic recovery include subsidies, loans, and tax exemptions. Indirect approaches to economic recovery include, most significantly, direct projects to restore infrastructure and public buildings. More subtle approaches include setting new regulations or deregulations, providing technical support, and creating new businesses. A holistic recovery process needs to strategically combine all of these approaches, and there must be collaborative implementation by all the key stakeholders, including local governments, non-profit and non-governmental organizations (NPOs and NGOs), community-based organizations (CBOs), and the private sector. Therefore, community and stakeholder participation in the planning process is essential to achieve buy-in for the vision and desired outcomes of the recovery plan. Securing the required financial resources is also critical to successful implementation.
In thinking of stakeholders, it is important to differentiate between supporting entities and operating agencies. Supporting entities are those organizations that supply the necessary funding for recovery. Japan’s national government and the U.S. federal government were the prime supporting entities in the recovery from the 1995 Kobe earthquake and the 2001 World Trade Center disaster, respectively. In Taiwan, a Buddhist organization and the national government were major supporting entities in the recovery from the 1999 Chi-Chi earthquake. Operating agencies are those organizations that implement the various recovery measures. In Japan, local governments in the impacted area are operating agencies, while the national government is a supporting entity. In the United States, community development block grants provide an opportunity for many operating agencies to implement various recovery measures. As Mr. Mammen’s paper describes, many NPOs, NGOs, and CBOs, in addition to local governments, have had major roles in implementing various kinds of programs funded by block grants as part of the World Trade Center recovery. No single organization can provide effective help for all kinds of disaster victims, individually or collectively. The needs of disaster victims may conflict with one another because of their diversity. Their divergent needs can be successfully met by the diversity of operating agencies that have responsibility for implementing recovery measures. In a similar context, block grants made to individual households, such as microfinance, have been a vital recovery mechanism for victims in Thailand who suffered from the 2004 Sumatra earthquake and tsunami disaster. Both disaster victims and government officers at all levels strongly supported microfinance so that disaster victims themselves would become operating agencies for recovery. Empowering individuals in sustainable life recovery is indeed the ultimate goal of recovery.

Fig. 6. A holistic recovery policy model.
APA, Harvard, Vancouver, ISO, and other styles
17

Naimi-Tajdar, Reza, Choongyong Han, Kamy Sepehrnoori, Todd James Arbogast, and Mark A. Miller. "A Fully Implicit, Compositional, Parallel Simulator for IOR Processes in Fractured Reservoirs." SPE Journal 12, no. 03 (September 1, 2007): 367–81. http://dx.doi.org/10.2118/100079-pa.

Full text
Abstract:
Summary Naturally fractured reservoirs contain a significant amount of the world oil reserves. A number of these reservoirs contain several billion barrels of oil. Accurate and efficient reservoir simulation of naturally fractured reservoirs is one of the most important, challenging, and computationally intensive problems in reservoir engineering. Parallel reservoir simulators developed for naturally fractured reservoirs can effectively address the computational problem. A new accurate parallel simulator for large-scale naturally fractured reservoirs, capable of modeling fluid flow in both rock matrix and fractures, has been developed. The simulator is a parallel, 3D, fully implicit, equation-of-state compositional model that solves very large, sparse linear systems arising from discretization of the governing partial differential equations. A generalized dual-porosity model, the multiple-interacting-continua (MINC), has been implemented in this simulator. The matrix blocks are discretized into subgrids in both horizontal and vertical directions to offer a more accurate transient flow description in matrix blocks. We believe this implementation has led to a unique and powerful reservoir simulator that can be used by small and large oil producers to help them in the design and prediction of complex gas and waterflooding processes on their desktops or a cluster of computers. Some features of this simulator, such as modeling both gas and water processes and the ability of 2D matrix subgridding are not available in any commercial simulator to the best of our knowledge. The code was developed on a cluster of processors, which has proven to be a very efficient and convenient resource for developing parallel programs. The results were successfully verified against analytical solutions and commercial simulators (ECLIPSE and GEM). Excellent results were achieved for a variety of reservoir case studies. Applications of this model for several IOR processes (including gas and water injection) are demonstrated. Results from using the simulator on a cluster of processors are also presented. Excellent speedup ratios were obtained. Introduction The dual-porosity model is one of the most widely used conceptual models for simulating naturally fractured reservoirs. In the dual-porosity model, two types of porosity are present in a rock volume: fracture and matrix. Matrix blocks are surrounded by fractures and the system is visualized as a set of stacked volumes, representing matrix blocks separated by fractures (Fig. 1). There is no communication between matrix blocks in this model, and the fracture network is continuous. Matrix blocks do communicate with the fractures that surround them. A mass balance for each of the media yields two continuity equations that are connected by matrix-fracture transfer functions which characterize fluid flow between matrix blocks and fractures. The performance of dual-porosity simulators is largely determined by the accuracy of this transfer function. The dual-porosity continuum approach was first proposed by Barenblatt et al. (1960) for a single-phase system. Later, Warren and Root (1963) used this approach to develop a pressure-transient analysis method for naturally fractured reservoirs. Kazemi et al. (1976) extended the Warren and Root method to multiphase flow using a 2D, two-phase, black-oil formulation. The two equations were then linked by means of a matrix-fracture transfer function. Since the publication of Kazemi et al. 
(1976), the dual-porosity approach has been widely used in the industry to develop field-scale reservoir simulation models for naturally fractured reservoir performance (Thomas et al. 1983; Gilman and Kazemi 1983; Dean and Lo 1988; Beckner et al. 1988; Rossen and Shen 1989). In simulating a fractured reservoir, we are faced with the fact that matrix blocks may contain well over 90% of the total oil reserve. The primary problem of oil recovery from a fractured reservoir is essentially that of extracting oil from these matrix blocks. Therefore it is crucial to understand the mechanisms that take place in matrix blocks and to simulate these processes within their container as accurately as possible. Discretizing the matrix blocks into subgrids or subdomains is a very good solution to accurately take into account transient and spatially nonlinear flow behavior in the matrix blocks. The resulting finite-difference equations are solved along with the fracture equations to calculate matrix-fracture transfer flow. The way that matrix blocks are discretized varies in the proposed models, but the objective is to accurately model pressure and saturation gradients in the matrix blocks (Saidi 1975; Gilman and Kazemi 1983; Gilman 1986; Pruess and Narasimhan 1985; Wu and Pruess 1988; Chen et al. 1987; Douglas et al. 1989; Beckner et al. 1991; Aldejain 1999).
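The abstract does not reproduce the simulator’s equations, but the following minimal sketch illustrates the kind of single-phase matrix-fracture transfer term used in dual-porosity formulations, using a Kazemi-style geometric shape factor. The block dimensions and fluid properties below are placeholder assumptions, and this is not the MINC implementation described in the paper.

```python
# Illustrative single-phase dual-porosity transfer term (not the paper's
# MINC formulation). The shape factor follows the Kazemi-style form
# sigma = 4*(1/Lx^2 + 1/Ly^2 + 1/Lz^2); all property values below are
# placeholder assumptions for demonstration only.

def kazemi_shape_factor(lx, ly, lz):
    """Geometric shape factor for a rectangular matrix block (1/m^2)."""
    return 4.0 * (1.0 / lx**2 + 1.0 / ly**2 + 1.0 / lz**2)

def matrix_fracture_transfer(k_matrix, viscosity, sigma, volume, p_matrix, p_fracture):
    """Volumetric transfer rate from matrix to fracture (m^3/s):
    q = sigma * V * (k_m / mu) * (p_m - p_f)."""
    return sigma * volume * (k_matrix / viscosity) * (p_matrix - p_fracture)

# Placeholder block: 1 m x 1 m x 2 m matrix block, ~1 mD rock, 1 cP oil.
sigma = kazemi_shape_factor(1.0, 1.0, 2.0)
q = matrix_fracture_transfer(
    k_matrix=1e-15,      # ~1 millidarcy, in m^2
    viscosity=1e-3,      # 1 cP, in Pa*s
    sigma=sigma,
    volume=2.0,          # block volume, m^3
    p_matrix=2.1e7,      # Pa
    p_fracture=2.0e7,    # Pa
)
print(f"transfer rate: {q:.3e} m^3/s")
```

In a full dual-porosity simulator this term appears as a source/sink coupling the matrix and fracture continuity equations; subgridding the matrix blocks, as the paper describes, replaces this single lumped term with a transient profile inside each block.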
APA, Harvard, Vancouver, ISO, and other styles
18

Munir, Ningky Sasanti, Eva Hotnaidah Saragih, and Martinus Sulistio Rusli. "BCA’s employer branding – the challenge ahead." Emerald Emerging Markets Case Studies 6, no. 3 (August 15, 2016): 1–22. http://dx.doi.org/10.1108/eemcs-08-2015-0177.

Full text
Abstract:
Subject area PT. Bank Central Asia, Tbk. (BCA), the largest national private bank in Indonesia, won the award for Best Bank at the Euromoney Awards for Excellence (Asia) 2014. At the same event, haloBCA™ and BCA employees also won awards in several categories. BCA had previously received a number of awards, such as Best Indonesia Local Private Bank in 2010, Contact Center World Champion in 2012 and 2013, and Best Mega Contact Center in the Asia Pacific Region in 2014. BCA is currently facing the problem of an aging workforce. Since the economic crisis that hit the country in 1998, BCA has recruited fewer employees. The company resumed recruiting in 2010. BCA’s human resource (HR) profile in 2013 showed that nearly half of BCA’s permanent employees were aged 45 years or older, 40 per cent of whom had been working for more than 20 years. When these employees retire, the Bank faces the potential loss of a significant number of employees from three different generations. BCA has raised its efforts to recruit new talent. However, recruitment is not easy, as BCA wants its new employees to continue maintaining BCA’s heritage, building the Bank into an Indonesian company that they can be proud of. How have these values – which have been a common belief, a foundation for working passionately, and the glue that bonds the Bank’s employees, executives, and owners – been communicated outside of BCA and used to attract the future successors of BCA in Indonesia? Study level/applicability Master’s degree in Human Resources Management or MBA program. Case overview PT Bank Central Asia Tbk (BCA), established in February 1957, is Indonesia’s largest lender by market value and the second largest bank by assets. The bank experienced a remarkable recovery from the Asian Financial Crisis in the late 1990s, when the Indonesian banking system became almost bankrupt. It provides both commercial and personal banking services through its 1,000-plus branches across the country. As the largest national private bank, BCA is well known in Indonesia. BCA manages more than 12 million customer accounts, processes hundreds of millions of financial transactions, and fulfills the needs of individual and corporate customers through various products and services. BCA Automatic Teller Machines (ATMs) are located virtually everywhere, and BCA’s Electronic Data Capture (EDC) machines are available at many merchants in both big cities and small towns across Indonesia’s archipelago. However, for a nation with a population of more than 240 million spread out over 34 provinces, the presence of BCA is still deemed unevenly distributed. BCA has no plans yet to expand outside of Indonesia in the next 10 years; instead, it is focusing on developing its market in Eastern Indonesia. Funding sources, which usually become an issue for expanding companies, are not a concern for BCA. BCA is currently facing the problem of an aging workforce. Since the economic crisis that hit the country in 1998, BCA has recruited fewer new employees, resuming recruitment only in 2010. BCA’s HR profile in 2013 showed that nearly half of BCA’s permanent employees were 45 years of age or older, 40 per cent of whom had been working for more than 20 years. When these employees retire, the Bank faces the potential loss of a significant number of employees from three different generations.
Currently, BCA has raised its efforts to recruit new talent and its future leaders through various programs, such as: the BCA Development Program (BDP), one of the most recognized management trainee programs in the Indonesian banking industry, which provides intensive and rigorous training to selected new recruits to ensure the development of BCA’s key talents and future leaders; HR business partners who actively visit campuses in the eastern region of Indonesia; socialization programs in state and private universities; job fairs, Web recruitment, internships, employee referrals, and job opportunity advertisements posted at BCA branch offices located near universities and in the leading mass media; the use of recruitment consultant services, especially to find candidates with specific qualifications; the use of printed (posters, flyers, booklets, banners) and electronic communication media; and the provision of scholarships to high school graduates with excellent academic records who face financial difficulties. However, recruitment is not easy for BCA because – like other well-known companies in Indonesia – the Bank only recruits the best people, based on prospective employees’ hard and soft competencies. BCA’s aim to be perceived as “a fun workplace with a family-oriented atmosphere and a commitment to employees’ development” has yet to strongly resonate in Indonesia’s labor market. BCA wants its new employees to continue maintaining BCA’s heritage, building the Bank into an Indonesian company that they can be proud of. How have these values – which have been a common belief, a foundation for working passionately, and the glue that bonds the Bank’s employees, executives, and owners – been communicated outside of BCA and used to attract the future successors of BCA in Indonesia? How should BCA build a large pool of qualified talent through an effective employer branding strategy? Expected learning outcomes By the end of the case discussion, the learner will be able, conceptually, to explain what is meant by employer branding (internal and external approaches) and to explain the relationship of employer branding to business strategy, talent management strategies, and HR management functions as a whole; and, practically, to identify and analyze BCA’s recent condition – to explain the BCA brand image in the eyes of the public, external job seekers in Indonesia, and internal/current employees of BCA, to identify the strategies that BCA uses to recruit potential job seekers, and to explain the influence of BCA’s current innovative products and services on its employer branding; to identify BCA’s goals and needs; to identify the characteristics, needs, and preferences of BCA’s target group of workers with regard to emerging issues such as Gen Y and the AEC (ASEAN Economic Community); to evaluate the effectiveness of BCA’s employer branding strategy and communications and to identify the problems BCA faces related to employer branding; and to generate ideas for the improvement of BCA’s employer branding strategy and programs – what message to brand (the company’s unique employee value propositions, tangible and intangible), what programs to implement (internal and external), and how to design the integrated marketing communication strategy (segmenting-targeting-positioning, channels). Supplementary materials Teaching notes are available for educators only.
Please contact your library to gain login details or email support@emeraldinsight.com to request teaching notes. Subject code CSS:6: Human Resource Management.
APA, Harvard, Vancouver, ISO, and other styles
19

Son, Changwon, Farzan Sasangohar, and S. Camille Peres. "Redefining and Measuring Resilience in Emergency Management Systems." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 61, no. 1 (September 2017): 1651–52. http://dx.doi.org/10.1177/1541931213601899.

Full text
Abstract:
Inherent limitations in controlling risks in complex socio-technical systems were revealed in several major catastrophic disasters, such as the nuclear meltdown at the Fukushima Daiichi nuclear power plant in 2011, the well blowout on the Deepwater Horizon drilling rig in 2010, and Hurricane Katrina in 2005. While desired risk management leans toward the prevention of such unwanted events, the mitigation of their impact becomes more important, and emergency response operations provide the last line of protection against disasters (Kanno, Makita, & Furuta, 2008). In response to the September 11 terrorist attack on the World Trade Center in New York, the U.S. Government launched the National Incident Management System (NIMS), an integrated national and multi-jurisdictional emergency preparedness and response program (Department of Homeland Security, 2008). The NIMS framework is characterized by a common operating picture, interoperability, reliability, scalability and portability, and resilience and redundancy (Department of Homeland Security, 2008). Among these characteristics, effective emergency response operations require resilience because planned-for actions may not be implementable and the emergency response organizations must therefore adapt to and cope with an uncertain and changing environment (Mendonca, Beroggi, & Wallace, 2003). There have been many attempts to define resilience in various disciplines (Hollnagel, Woods, & Leveson, 2007). Nevertheless, such attempts for emergency management systems (EMS) are still scarce in the existing body of resilience literature. By considering the traits of EMS, this study proposes a definition of resilience as ‘a system’s capability to respond to different kinds of disrupting events and to bring the system back to a desired state in a timely manner with efficient use of resources, and with minimum loss of performance capacity.’ In order to model resilience in EMS, the U.S. NIMS is chosen because it allows for investigation of resilient behavior among different components that inevitably involve both human agents and technological artifacts as joint cognitive systems (JCSs) (Hollnagel & Woods, 2005). In the NIMS, the largest JCS comprises five critical functions: Command, Planning, Operations, Logistics, and Finance & Administration (F&A) (Department of Homeland Security, 2008). External stimuli or inputs to this JCS are events that occur outside of its boundary, such as uncontrolled events. When these events do occur, they are typically perceived by the ‘boots on the ground’ in the Operations function. The perceived data are reported and transported to the Planning function, in which such data are transformed into useful and meaningful information. This information provides the knowledge base for generating a set of decisions. Subsequently, the Command function selects some of those decisions and authorizes them with adequate resources so that Operations can act on them in response to the uncontrolled events. This compensation process continues until the JCS achieves its system goal, which is to bring the event under control. Meanwhile, Logistics supplies required and requested resources such as workforce, equipment, and materials for the system’s operations, and F&A accounts for those resources as they are actually used to execute the given missions.
Such a JCS utilizes two types of memory: a collective working memory (CWM), which can be manifested in the form of shared displays, documents, or whiteboards used by teams; and, similarly, a collective long-term memory (CLTM), which can take the form of past accident reports, procedures, and guidelines. Based on this conceptual framework for the resilience of emergency operations, five Resilient Performance Factors (RPFs) are suggested to make resilience operational in EMS. These RPFs are adaptive response, rapidity of recovery, resource utilization, performance stability, and team situation awareness. Adaptation is one of the most obvious patterns of resilient performance (Leveson et al., 2006; Rankin, Lundberg, Woltjer, Rollenhagen, & Hollnagel, 2014). Another factor that typifies the resilience of any socio-technical system is how quickly or slowly it bounces back from perturbations (Hosseini, Barker, & Ramirez-Marquez, 2016). In most systems, resources are constrained. Hence, resilience requires the effective and efficient use of resources in response to varying demands. As such demands persist over time, the system’s performance level tends to diminish; for the EMS to remain resilient, its performance should be maintained in a stable fashion. Finally, an EMS is expected to possess the ability to perceive what is currently taking place, to comprehend what such an occurrence actually means, and to anticipate what may happen and decide what to do about it. When this occurs within a team, it is often referred to as team situation awareness (Endsley, 1995; McManus, Seville, Brunsden, & Vargo, 2007). This resilience model for EMS needs validation, and many assumptions and simplifications made in this work require further justification. The model will be discussed and validated using subsequent data collection from the Emergency Operations Training Center operated by the Texas A&M Engineering Extension Service (TEEX) and will be reported in future publications.
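To make two of the resilience notions named above – rapidity of recovery and loss of performance capacity – concrete, the sketch below computes them from a performance-over-time series. This is a generic illustration under assumed data, not the measurement approach proposed by the authors.

```python
# Illustrative calculation of two resilience notions from the abstract:
# rapidity of recovery and cumulative loss of performance capacity.
# Generic sketch with a made-up series; not the authors' method.

def rapidity_of_recovery(performance, baseline, disruption_step):
    """Time steps after the disruption until performance first returns
    to the baseline level (None if it never does)."""
    for t in range(disruption_step, len(performance)):
        if performance[t] >= baseline:
            return t - disruption_step
    return None

def performance_loss(performance, baseline, disruption_step):
    """Cumulative shortfall below baseline after the disruption
    (a simple 'resilience triangle'-style area)."""
    return sum(max(baseline - p, 0.0) for p in performance[disruption_step:])

# Made-up hourly performance capacity (fraction of normal), disruption at t=2.
series = [1.0, 1.0, 0.4, 0.5, 0.7, 0.85, 0.95, 1.0, 1.0]
print(rapidity_of_recovery(series, baseline=1.0, disruption_step=2))  # -> 5
print(performance_loss(series, baseline=1.0, disruption_step=2))      # -> approx. 1.6
```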
APA, Harvard, Vancouver, ISO, and other styles
20

Engelhardt, F. R. "A Perspective on the Application of Chemistry to Oil Spill Response." Pure and Applied Chemistry 71, no. 1 (January 1, 1999): 1–4. http://dx.doi.org/10.1351/pac199971010001.

Full text
Abstract:
It might seem incongruous that a research-focused organisation such as the International Union of Pure and Applied Chemistry would pay attention to an issue as pragmatic as oil spills. After all, an oil spill tends to be viewed as a very practical matter, its issues characterised by loss of a valuable commercial product, damage to the environment, high costs of clean up, high legal liabilities, and very much media attention. Oil spills are not generally considered a pure or even applied chemistry issue. However, this would be a very short-sighted interpretation. Effectively every element of an oil spill, whether environmental, physical, operational or legal, is related to the complex chemistry of the oil and its breakdown products released to the environment. Indeed, it would be safe to say that if petroleum were a simple chemical product, the difficulties inherent in the clean up of an oil spill would be much reduced, no matter what the origin or cause of the spill. The chemical nature of oil is directly related to the fate and environmental impacts of spilled oil, whether on water or on land, and to the effectiveness of the diversity of countermeasures which might be deployed. While evaluation of the effects of spilled oil on the environment receives much attention in forums with a biological or toxicological focus, which often do take into consideration chemical factors, the complex topic of the chemistry of oil spills in direct relation to countermeasures is examined more rarely. The various chapters in this document discuss a diversity of oil spill countermeasures and target the chemical, and consequently physical, behaviour of oil which determines its characteristics at the time of the spill. While oil spills occur in fresh and salt waters, and on land, marine oil spills remain the larger issue - there tends to be more oil spilled, environmental problems are more complex, and countermeasures are more difficult to implement. The following papers generally reflect and review the current state of knowledge in their topic area, and are representative of the most recent surge in research and development activities, stimulated particularly by the Exxon Valdez spill in Prince William Sound, Alaska in 1989. It appears that oil spill research undergoes cycles of interest, activity and funding, linked to key oil spills. Previously, the Torrey Canyon spill in the English Channel off Land's End in the United Kingdom in 1967 provided general incentive for research and development, as did the Amoco Cadiz spill off the coast of Brittany, France in 1978. Other oil spills, such as the 1968 Santa Barbara Channel, California spill, or the Braer spill off the Shetlands in 1993, among others, have also stimulated specific areas of research and development on the basis of issues that arose in their particular spill scenarios. The articles in this publication have been contributed by recognised international experts in the spill response field, and have received the benefit of peer review.
The articles are representative of the major categories of oil spill response research, spanning a wide range of technologies, supportive knowledge, and experience. This collection of review articles concludes with an evaluation of oil spill response technologies for developing nations, appropriately so, since that is where much of the oil development and production currently occurs in the world. One area which has seen much recent expansion is that of the essential linkage between a detailed understanding of the physical and chemical properties of spilled oil and the effectiveness of response countermeasures. Crude oils and oil products are known to differ greatly in physical and chemical properties, and these tend to change significantly over the time course of spilled oil recovery operations. Such changes have long been recognised to have a major influence on the effectiveness of response methods and equipment, which increases the time and cost of operations and the risk of resource damage. All countermeasures are influenced, whether sorbents, booms, skimmers, dispersants, burning of oil, and so forth. The incentive is for a rapid and accurate method of predicting changes in oil properties following spill notification, which could be used in both the planning and early phases of spill response, including an initial, specific selection of an effective early countermeasure. In the later stages of the response, more accurate planning for clean up methods and equipment deployment would shorten response time and reduce costs. An additional benefit would be more effective planning for the recall of equipment not needed, as well as a potentially decreased risk of natural resource damage and costs due to more effective spilled oil recovery. The concept of "Windows of Opportunity" for oil spill response measures has been derived from multiple investigations in industry and government research organisations. Although dispersants have been used to date in almost one hundred large spills world-wide, government approval for dispersant use has long been inhibited by a lack of understanding of the factors determining the operational effectiveness of dispersants, and the environmental trade-offs which might need to be made to protect sensitive areas from spilled oil. Recent advances in chemical dispersant development, the formulation of low-toxicity dispersants with broader application, and a better understanding of dispersant fate and effects have led to a more ready acceptance of this countermeasure by many, although not yet all, regulatory authorities throughout the world. In addition to the category of dispersants, chemical countermeasures include many diverse agents, such as beach cleaners, demulsifiers, elasticity modifiers and bird cleaning agents, each with a unique and specialised role in clean up activities. However, the concerns over the use of these 'alternative chemicals' relate to the interpretation and application of toxico-ecological data in the decision process. If, in the future, the ecological issues concerning chemical treating agents can be further successfully resolved, the oil spill response community will have an increased range of response options. However, extensive laboratory and field testing is required in many instances for new chemical dispersant materials and demulsifiers to improve the effectiveness of these materials on weathered oils and water-in-oil emulsions. The acceptance of in situ (i.e. 
'on site') burning of spilled oil has been limited by valid operational concerns about the integrity of fireproof booms, the limited weather window for burning due to the rapid emulsification of oils, the need to develop methods for the ignition of emulsified and weathered oils, and public concerns about the toxicity of the smoke generated during burning. However, burning provides an option, another tool in the tool-box, for the responder called in to combat an oil spill. Burning decreases the amount of oil that must be collected mechanically, thus reducing cleanup costs, storage, transportation, and oily waste disposal requirements. It would also decrease potential contact with sensitive marine and coastal environments and consequently reduce the potential for associated damage costs. Laboratory and field studies over the last ten years have addressed essential information requirements for feasibility, techniques, and effectiveness, as well as health and safety. The results of research on in situ burning have led to its acceptance in a number of coastal jurisdictions throughout the world, prompting the response industry to purchase and position in situ burning equipment and train its operators to use this alternative technology in approved regions. Although not a direct recovery measure in itself, the application of remote sensing to oil spill response assists in slick identification, tracking, and prediction, which in many instances is an early requirement for effective response. An inadequate ability to see spilled oil seriously reduces the effectiveness of oil spill response operations. Conversely, a good capability to detect spilled oil, especially areas of thick oil, at night and in other conditions of reduced visibility could more than double response effectiveness and greatly enhance control of the spill to minimise damage, especially to sensitive shorelines. Advances have been made in both airborne and satellite remote sensing. It has become possible to move from large airborne systems that are expensive to operate to small aircraft that are more widely available and practical for spill response operators. Also, the limitations of delayed data processing and information communication are being overcome by the development of systems operating in functional real time, which is essential for enhanced response capacity. Spill detection using satellites has also advanced markedly since 1989, with the ongoing intention to provide coverage of oil spill areas as an early warning, or when flying by aircraft is not possible. An early useful application was an ERS-1 satellite program for the detection of oil slicks, launched in 1992. More recently, spill detection capability has been developed for the Canadian Radarsat satellites, ERS-2, and a few other satellite programs. The topic of bioremediation of spilled oil, that is, the use of microbes to assist in clean up, is a corollary to the deployment of traditional countermeasures. It had not seen much operational or regulatory support until the Exxon Valdez spill, where it was initiated as a spill mitigation method, establishing bioremediation as a major oil spill R&D area. 
Bioremediation of oil spills was defined as taking one of three different approaches: enhancement of the locally existing microbial fauna by the addition of nutrients to stimulate their growth; 'seeding' the oil-impacted environment with microbes occurring naturally in that environment; and inoculating the oil-impacted environment with microbes not normally found there, including genetically engineered bacterial populations. Research emphasis and regulatory countenance have been predominantly on the first approach. Evaluation of operational utility is continuing, to identify conditions under which bioremediation can be used in an environmentally sound and effective manner, and to make recommendations to responders for the implementation of this technology. The issue of hydrocarbon toxicity has been examined in petroleum refinery and petrochemical workers for more than a decade, and experimentally in test animals for a much longer period. However, there has been little specific information available on the effects of oil spills on human health, whether for oil spill response workers or for incidentally exposed individuals. More recently, as reviewed in an article on human health effects in this publication, some reports have been published of skin irritation and dermatitis from exposure of skin to oil during cleanup, as well as nausea from inhalation of volatile fractions. Although there are to date no epidemiological studies of exposure by oil spill workers to petroleum hydrocarbons, the matter is drawing increasing attention. One of the more important issues surrounding the choice and extent of application of oil spill countermeasures is knowledge about the ecological effectiveness of such response, that is, the balance point between the continuation of clean up activities and letting the environment take care of its own eventual recovery. It is this last point that has driven much of the discussion and research associated with the concept of 'how clean is clean', or how much cleanup is enough or too much. The results of such diverse research efforts are being used increasingly and successfully to link spilled oil chemistry to countermeasures practices and equipment. The advances are being integrated into more effective response management models and response command systems. In summary, applied chemical research and development has actively contributed to an enhancement of oil spill response capability. Nonetheless, it seems that the pace of oil spill research and countermeasures development is slowing. The decrease is at least temporally associated with a decline in the frequency and magnitude of oil spills in recent years. Spill statistics gathered by organisations such as the publishers of the Oil Spill Intelligence Report show that world-wide oil spill incidence and volume have continued to decline since the time of the Exxon Valdez spill event (see the Oil Spill Intelligence Report publication "International Oil Spill Statistics: 1997", Cutter Information Corp.). It is probably not coincidental that the amount of funding available for oil spill research and development, from both government and private industry sources, has declined similarly. In that context, the following articles are more a statement of currently accepted knowledge and practice than a 'snapshot in time' of intense ongoing research activities. 
The articles serve to capture the applied chemistry knowledge and experience of practitioners in a complex field, application of which remains essential for the development of improved oil spill countermeasures, and their effective use in real spill situations.
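As a purely illustrative companion to the discussion above of predicting spilled-oil property changes and "Windows of Opportunity", the sketch below evaluates a generic logarithmic evaporation estimate of the kind found in the empirical weathering literature. The coefficients are placeholder assumptions chosen only to demonstrate the calculation; they are not values from this publication.

```python
# Rough sketch of an empirical oil-evaporation estimate. Logarithmic
# equations of this general form appear in the weathering literature,
# but the default coefficients below are placeholders for illustration,
# not oil-specific values from any cited source.
import math

def percent_evaporated(hours, temp_c, c1=2.5, c2=0.06):
    """Estimated percent of an oil evaporated after `hours` at `temp_c`,
    using a generic (c1 + c2*T)*ln(t_minutes) form with oil-specific
    constants supplied by the caller."""
    minutes = hours * 60.0
    if minutes <= 1.0:
        return 0.0
    return (c1 + c2 * temp_c) * math.log(minutes)

# Example: an assumed light crude after 12 and 48 hours at 15 deg C.
for h in (12, 48):
    print(f"{h:>2} h: ~{percent_evaporated(h, temp_c=15.0):.1f}% evaporated")
```

Estimates of this kind, updated as weathering proceeds, are what allow responders to judge how long a given countermeasure's window of opportunity remains open.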
APA, Harvard, Vancouver, ISO, and other styles
21

TYLER, ROGER, R. P. MAJOR, H. SCOTT. "Project STARR - State of Texas Advanced Oil and Gas Resource Recovery Program : ABSTRACT." AAPG Bulletin 81 (1997) (1997). http://dx.doi.org/10.1306/3b05c3ac-172a-11d7-8645000102c1865d.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

TYLER, ROGER, MAJOR, R. P. HAMLIN,. "Abstract: State of Texas Advanced Oil and Gas Resource Recovery Program -- Project Starr ." AAPG Bulletin 82 (1998) (1998). http://dx.doi.org/10.1306/1d9bc593-172d-11d7-8645000102c1865d.

Full text
APA, Harvard, Vancouver, ISO, and other styles
23

Sanakulov, K., and N. P. Snitka. "Muruntau gold mining and refining operation: Resources, expansion program and future considerations." Gornyi Zhurnal, May 31, 2021, 43–47. http://dx.doi.org/10.17580/gzh.2021.05.02.

Full text
Abstract:
The international geological community has acknowledged the Muruntau gold deposit as the greatest discovery of the mid-to-late 20th century. The Muruntau mine field holds a total appraised resource potential of more than 4.5 thousand tons of gold. Hydrometallurgical plant GMZ-3 carries out gold-ore processing by gravitational sedimentation and adsorption. The technological and instrumental modernization of the gold processing circuit toward higher capacity, gold recovery, and thoroughness is an important aspect of production improvement and cost reduction. The developed and introduced ore milling flowchart provides for the replacement of the second-stage milling pumps with higher-capacity pumps backed up with additional cyclones. Aiming to ensure stable gold production at plants GMZ-2 and GMZ-3, Navoi MMC’s experts completed a feasibility study of mining operations in the Chukurkuduk and Turbai deposits in 2020. The growth prospects for open pit mining in the Muruntau–Myutenbai fields after 2060 are estimated using a model of optimized ultimate pit limit design at a gold price of USD 1500/t. The model ultimate pit limit embraces all probable reserves as per the detailed 2D seismic data as of early 2020, including proven reserves intended for open pit and underground mining. The gold ore appraisal and the expansion program elaborated for the Muruntau gold mining and refining operation in the joint Muruntau–Myutenbai field, through the implementation of operation phase V and beyond, make it possible to forecast stable performance up to 2030–2050.
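The abstract refers to ultimate pit limit optimization at an assumed gold price. A minimal, hypothetical sketch of the block economic value screening that underlies such optimization is shown below; every input is a placeholder assumption, and the calculation is not Navoi MMC's model.

```python
# Simplified block economic value screen of the kind that underlies
# ultimate pit limit optimization. Generic illustration only; all
# numbers below are placeholder assumptions, not Muruntau data.

def block_value(tonnes, grade_g_per_t, recovery, price_usd_per_g,
                mining_cost_usd_per_t, processing_cost_usd_per_t):
    """Undiscounted economic value of one ore block (USD)."""
    revenue = tonnes * grade_g_per_t * recovery * price_usd_per_g
    costs = tonnes * (mining_cost_usd_per_t + processing_cost_usd_per_t)
    return revenue - costs

# Placeholder block: 100,000 t at 2.0 g/t, 90% recovery, gold assumed
# at ~USD 48/g (roughly USD 1,500 per troy ounce).
value = block_value(
    tonnes=100_000,
    grade_g_per_t=2.0,
    recovery=0.90,
    price_usd_per_g=48.0,
    mining_cost_usd_per_t=3.0,
    processing_cost_usd_per_t=15.0,
)
print(f"block value: {value:,.0f} USD")  # positive-value blocks are the
                                         # candidates for the pit shell
```

Pit optimizers such as Lerchs–Grossmann-type algorithms then search for the shell of positive-value blocks that maximizes total value subject to slope constraints; the sketch above shows only the per-block valuation step.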
APA, Harvard, Vancouver, ISO, and other styles
24

CLIFT, SIGRID J., ROGER TYLER, H. S. "Abstract: State of Texas Advanced Oil and Gas Resource Recovery Program-Project STARR-A Strategy for Independents ." AAPG Bulletin 83 (1999) (1999). http://dx.doi.org/10.1306/e4fd4c83-1732-11d7-8645000102c1865d.

Full text
APA, Harvard, Vancouver, ISO, and other styles
25

Hopkins, Liza, Glenda Pedwell, Katie Wilson, and Prunella Howell-Jay. "Implementing youth peer support in an early psychosis program." Journal of Mental Health Training, Education and Practice ahead-of-print, ahead-of-print (December 4, 2020). http://dx.doi.org/10.1108/jmhtep-03-2020-0014.

Full text
Abstract:
Purpose The purpose of this study was to identify and understand the barriers and enablers to the implementation of youth peer support in a clinical mental health service. The development of a lived experience workforce in mental health is a key component of policy at both the state and the federal level in Australia. Implementing a peer workforce within existing clinical services, however, can be a challenging task. Furthermore, implementing peer support in a youth mental health setting involves a further degree of complexity, requiring care for young people who are invited to provide peer support while they may still be early in their own recovery journey. Design/methodology/approach This paper reports on a formative evaluation of the beginning stages of implementation of a youth peer workforce within an existing clinical mental health service in Melbourne. Findings The project found that it was feasible and beneficial to implement youth peer support; however, significant challenges remain, including a lack of appropriate training for young people, uncertainty amongst clinical staff about the boundaries of the peer role, and the potential for “tokenism” in the face of slow cultural change across the whole service. Originality/value Very little evaluation has yet been undertaken of the effectiveness of implementing peer support in youth mental health services. This paper offers an opportunity to investigate where services may need to identify strengths and address difficulties when undertaking future implementation efforts.
APA, Harvard, Vancouver, ISO, and other styles
26

Siedschlag, Alexander, Tiangeng Lu, Andrea Jerković, and Weston Kensinger. "Opioid Crisis Response and Resilience: Results and Perspectives from a Multi-Agency Tabletop Exercise at the Pennsylvania Emergency Management Agency." Journal of Homeland Security and Emergency Management, April 12, 2021. http://dx.doi.org/10.1515/jhsem-2020-0079.

Full text
Abstract:
Abstract This article presents and discusses, in the new context of COVID-19, findings from a tabletop exercise on response and resilience in the ongoing opioid crisis in Pennsylvania. The exercise was organized by [identifying information removed] and held at the Pennsylvania Emergency Management Agency (PEMA), in further collaboration with the Governor’s Office of Homeland Security and the Pennsylvania Department of Health, and with the participation of several additional agencies and institutions. It addressed first-responder and whole-community response and resilience to the ongoing opioid crisis. More than 50 experts participated in the one-day program, which brought state and local agencies, first-responder organizations, and academia into a discussion about effectuating a comprehensive response to overdose incidents. Participating experts represented a wide array of backgrounds, including state and local law enforcement agencies; emergency medical technicians; public health and health care professionals; and scholars from the fields of law, security studies, public policy, and public health, among other relevant areas. Participants addressed specific challenges, including resource sharing among responders; capacity-building for long-term recovery; effective integration of non-traditional partners, such as spontaneous volunteers and donors; and public education and outreach to improve prevention. The exercise aimed to strengthen the whole-community approach to emergency response.
APA, Harvard, Vancouver, ISO, and other styles
27

Doménech, Pablo, J. E. Robles García, A. García Cortés, B. Miñana López, C. Gutiérrez Castañé, D. Rosell Costa, F. Ramón de Fata Chillón, et al. "Kidney Transplant and Ileal Conduit Diversion on the Same Surgical Procedure: Clinical Case and Review of the Literature." Transplantation Case Reports, May 30, 2020, 1–3. http://dx.doi.org/10.31487/j.tcr.2020.02.02.

Full text
Abstract:
Introduction: There are multiple causes of end-stage renal disease (ESRD). One of the most uncommon causes is obstruction of the lower urinary tract, owing to the development of new endourological procedures and improvements in clean intermittent catheterization. However, urodynamic problems that require solutions to bladder dysfunction continue to appear, and these directly affect the function of the kidney graft. Objective: To clearly state the possibility of performing a bladder conduit technique in the same surgical procedure as a kidney transplant, as an option for patients who undergo kidney transplantation with incompetent bladders. A clinical case is described as an example. Material and Methods: The clinical case of a patient with a left cutaneous ureterostomy due to neurogenic bladder who was a candidate for renal transplant is presented. An ileal conduit type urinary diversion was performed in the same surgical act as the renal transplant. The existing literature is analyzed in relation to the different types of urinary diversion and how they affect renal function. Clinical Case and Results: We present a 50-year-old male with a hypotonic bladder since the age of 19, secondary to sacral lipectomy. He developed progressive deterioration of renal function until he started a hemodialysis program in 2018. The ileal conduit and renal transplant were performed through a right pararectal incision, with reimplantation of the ureter in the antimesenteric side of the intestinal loop. No increase in complications was observed post-transplant. The patient was discharged on the 7th day after surgery. Serum creatinine at 6 months after renal transplantation was 1.2 mg/dl. Conclusion: The ileal conduit is a valid resource in patients with neurogenic bladders or with emptying problems whose solution would otherwise put the functionality of the graft at risk. Recovery time is similar to that of a kidney transplant without an ileal conduit. Post-transplant graft function was good, without an increase in complications.
APA, Harvard, Vancouver, ISO, and other styles
28

Kennedy, Jenny, Indigo Holcombe-James, and Kate Mannell. "Access Denied." M/C Journal 24, no. 3 (June 21, 2021). http://dx.doi.org/10.5204/mcj.2785.

Full text
Abstract:
Introduction As social-distancing mandates in response to COVID-19 restricted in-person data collection methods such as participant observation and interviews, researchers turned to socially distant methods such as interviewing via video-conferencing technology (Lobe et al.). These were not new tools nor methods, but the pandemic muted any bias towards face-to-face data collection methods. Exemplified in crowd-sourced documents such as Doing Fieldwork in a Pandemic, researchers were encouraged to pivot to digital methods as a means of fulfilling research objectives, “specifically, ideas for avoiding in-person interactions by using mediated forms that will achieve similar ends” (Lupton). The benefits of digital methods for expanding participant cohorts and scope of research have been touted long before 2020 and COVID-19, and, as noted by Murthy, are “compelling” (“Emergent” 172). Research conducted by digital methods can expect to reap benefits such as “global datasets/respondents” and “new modalities for involving respondents” (Murthy, “Emergent” 172). The pivot to digital methods is not in and of itself an issue. What concerns us is that in the dialogues about shifting to digital methods during COVID-19, there does not yet appear to have been a critical consideration of how participant samples and collected data will be impacted upon or skewed towards recording the experiences of advantaged cohorts. Existing literature focusses on the time-saving benefits for the researcher, reduction of travel costs (Fujii), the minimal costs for users of specific platforms – e.g. Skype –, and presumes ubiquity of device access for participants (Cater). We found no discussion on data costs of accessing such services being potential barriers to participation in research, although Deakin and Wakefield did share our concern that: Online interviews may ... mean that some participants are excluded due to the need to have technological competence required to participate, obtain software and to maintain Internet connection for the duration of the discussion. In this sense, access to certain groups may be a problem and may lead to issues of representativeness. (605) We write this as a provocation to our colleagues conducting research at this time to consider the cultural and material capital of their participants and how that capital enables them to participate in digitally-mediated data gathering practices, or not, and to what extent. Despite highlighting the potential benefits of digital methods within a methodological tool kit, Murthy previously cautioned against the implications posed by digital exclusion, noting that “the drawback of these research options is that membership of these communities is inherently restricted to the digital ‘haves’ ... rather than the ‘have nots’” (“Digital” 845). In this article, we argue that while tools such as Zoom have indeed enabled fieldwork to continue despite COVID disruptions, this shift to online platforms has important and under-acknowledged implications for who is and is not able to participate in research. In making this argument, we draw on examples from the Connected Students project, a study of digital inclusion that commenced just as COVID-19 restrictions came into effect in the Australian state of Victoria at the start of 2020. We draw on the experiences of these households to illustrate the barriers that such cohorts face when participating in online research. 
We begin by providing details about the Connected Students project and then contextualising it through a discussion of research on digital inclusion. We then outline three areas in which households would have experienced (or still do experience) difficulties participating in online research: data, devices, and skills. We use these findings to highlight the barriers that disadvantaged groups may face when engaging in data collection activities over Zoom and question how this is impacting on who is and is not being included in research during COVID-19. The Connected Students Program The Connected Students program was conducted in Shepparton, a regional city located 180km north of Melbourne. The town itself has a population of around 30,000, while the Greater Shepparton region comprises around 64,000 residents. Shepparton was chosen as the program’s site because it is characterised by a unique combination of low-income and low levels of digital inclusion. First, Shepparton ranks in the lowest interval for the Australian Bureau of Statistics’ Socio-Economic Indexes for Areas (SEIFA) and the Index of Relative Socioeconomic Advantage and Disadvantage (IRSAD), as reported in 2016 (Australian Bureau of Statistics, “Census”; Australian Bureau of Statistics, “Index”). Although Shepparton has a strong agricultural and horticultural industry with a number of food-based manufacturing companies in the area, including fruit canneries, dairies, and food processing plants, the town has high levels of long-term and intergenerational unemployment and jobless families. Second, Shepparton is in a regional area that ranks in the lowest interval for the Australian Digital Inclusion Index (Thomas et al.), which measures digital inclusion across dimensions of access, ability, and affordability. Funded by Telstra, Australia’s largest telecommunications provider, and delivered in partnership with Greater Shepparton Secondary College (GSSC), the Connected Students program provided low-income households with a laptop and an unlimited broadband Internet connection for up to two years. Households were recruited to the project via GSSC. To be eligible, households needed to hold a health care card and have at least one child attending the school in year 10, 11, or 12. Both the student and a caregiver were required to participate in the project to be eligible. Additional household members were invited to take part in the research, but were not required to. (See Kennedy & Holcombe-James; and Kennedy et al., "Connected Students", for further details regarding household demographics.) The Australian Digital Inclusion Index identifies that affordability is a significant barrier to digital inclusion in Australia (Thomas et al.). The project’s objective was to measure how removing affordability barriers to accessing connectivity for households impacts on digital inclusion. By providing participating households with a free unlimited broadband internet connection for the duration of the research, the project removed the costs associated with digital access. Access alone is not enough to resolve the digital exclusion confronted by these low-income households. Digital exclusion in these instances is not derived simply from the cost of Internet access, but from the cost of digital devices. As a result, these households typically lacked sufficient digital devices. 
Each household was therefore provided both a high speed Internet connection, and a brand new laptop with built-in camera, microphone, and speakers (a standard tool kit for video conferencing). Data collection for the Connected Students project was intended to be conducted face-to-face. We had planned in-person observations including semi-structured interviews with household members conducted at three intervals throughout the project’s duration (beginning, middle, and end), and technology tours of each home to spatially and socially map device locations and uses (Kennedy et al., Digital Domesticity). As we readied to make our first research trip to commence the study, COVID-19 was wreaking havoc. It quickly became apparent we would not be travelling to work, much less travelling around the state. We thus pivoted to digital methods, with all our data collection shifting online to interviews conducted via digital platforms such as Zoom and Microsoft Teams. While the pivot to digital methods saved travel hours, allowing us to scale up the number of households we planned to interview, it also demonstrated unexpected aspects of our participants’ lived experiences of digital exclusion. In this article, we draw on our first round of interviews which were conducted with 35 households over Zoom or Microsoft Teams during lockdown. The practice of conducting these interviews reveals insights into the barriers that households faced to digital research participation. In describing these experiences, we use pseudonyms for individual participants and refer to households using the pseudonym for the student participant from that household. Why Does Digital Inclusion Matter? Digital inclusion is broadly defined as universal access to the technologies necessary to participate in social and civic life (Helsper; Livingstone and Helsper). Although recent years have seen an increase in the number of connected households and devices (Thomas et al., “2020”), digital inclusion remains uneven. As elsewhere, digital disadvantage in the Australian context falls along geographic and socioeconomic lines (Alam and Imran; Atkinson et al.; Blanchard et al.; Rennie et al.). Digitally excluded population groups typically experience some combination of education, employment, income, social, and mental health hardship; their predicament is compounded by a myriad of important services moving online, from utility payments, to social services, to job seeking platforms (Australian Council of Social Service; Chen; Commonwealth Ombudsman). In addition to challenges in using essential services, digitally excluded Australians also miss out on the social and cultural benefits of Internet use (Ragnedda and Ruiu). Digital inclusion – and the affordability of digital access – should thus be a key concern for researchers looking to apply online methods. Households in the lowest income quintile spend 6.2% of their disposable income on telecommunications services, almost three times more than wealthier households (Ogle). Those in the lowest income quintile pay a “poverty premium” for their data, almost five times more per unit of data than those in the highest income quintile (Ogle and Musolino). As evidenced by the Australian Digital Inclusion Index, this is driven in part by a higher reliance on mobile-only access (Thomas et al., “2020”). Low-income households are more likely to access critical education, business, and government services through mobile data rather than fixed broadband data (Thomas et al., “2020”). 
For low-income households, digital participation is the top expense after housing, food, and transport, and is higher than domestic energy costs (Ogle). In the pursuit of responsible and ethical research, we caution against assuming research participants are able to bear the brunt of access costs in terms of having a suitable device, expending their own data resources, and having adequate skills to be able to complete the activity without undue stress. We draw examples from the Connected Students project to support this argument below. Findings: Barriers to Research Participation for Digitally Excluded Households If the Connected Students program had not provided participating households with a technology kit, their preexisting conditions of digital exclusion would have limited their research participation in three key ways. First, households with limited Internet access (particularly those reliant on mobile-only connectivity, and who have a few gigabytes of data per month) would have struggled to provide the data needed for video conferencing. Second, households would have struggled to participate due to a lack of adequate devices. Third, and critically, although the Connected Students technology kit provided households with the data and devices required to participate in the digital ethnography, this did not necessarily resolve the skills gaps that our households confronted. Data Prior to receiving the Connected Students technology kit, many households in our sample had limited modes of connectivity and access to data. For households with comparatively less or lower quality access to data, digital participation – whether for the research discussed here, or in contemporary life – came with very real costs. This was especially the case for households that did not have a home Internet connection and instead relied solely on mobile data. For these households, who carefully managed their data to avoid running out, participating in research through extended video conferences would have been impossible unless adequate financial reimbursement was offered. Households with very limited Internet access used a range of practices to manage and extend their data access by shifting internet costs away from the household budget. This often involved making use of free public Wi-Fi or library internet services. Ellie’s household, for instance, spent their weekends at the public library so that she and her sister could complete their homework. While laborious, these strategies worked well for the families in everyday life. However, they would have been highly unsuitable for participating in research, particularly during the pandemic. On the most obvious level, the expectations of library use – if not silent, then certainly quiet – would have prohibited a successful interview. Further, during COVID-19 lockdowns, public libraries (and other places that provide public Internet) became inaccessible for significant periods of time. Lastly, for some research designs, the location of participants is important even when participation is occurring online. In the case of our own project, the house itself as the site of the interview was critical as our research sought to understand how the layout and materiality of the home impacts on experiences of digital inclusion. We asked participants to guide us around their home, showing where technologies and social activities are colocated. 
In using the data provided by the Connected Students technology kit, households with limited Internet were able to conduct interviews from within their homes. For these families, participating in online research would have been near impossible without the Connected Students Internet. Devices Even with adequate Internet connections, many households would have struggled to participate due to a lack of suitable devices. Laptops, which generally provide the best video conferencing experience, were seen as prohibitively expensive for many families. As a result, many families did not have a laptop or were making do with a laptop that was excessively slow, unreliable, and/or had very limited functions. Desktop computers were rare and generally outdated to the extent that they were not able to support video conferencing. One parent, Melissa, described their barely-functioning desktop as “like part of the furniture more than a computer”. Had the Connected Students program not provided a new laptop with video and audio capabilities, participation in video interviews would have been difficult. This is highlighted by the challenges students in these households faced in completing online schooling prior to receiving the Connected Students kit. A participating student, Mallory, for example, explained she had previously not had a laptop, relying only on her phone and an old iPad: Interviewer: Were you able to do all your homework on those, or was it sometimes tricky? Mallory: Sometimes it was tricky, especially if they wanted to do a call or something ... . Then it got a bit hard because then I would use up all my data, and then didn’t have much left. Interviewer: Yeah. Right. Julia (Parent): ... But as far as schoolwork, it’s hard to do everything on an iPad. A laptop or a computer is obviously easier to manoeuvre around for different things. This example raises several common issues that would likely present barriers to research participation. First, Mallory’s household did not have a laptop before being provided with one through the Connected Students program. Second, while her household did prioritise purchasing tablets and smartphones, which could be used for video conferencing, these were more difficult to navigate for certain tasks and used up mobile data, which, as noted above, was often a limited resource. Lastly, it is worth noting that in households which did already own a functioning laptop, it was often shared between several household members. As one parent, Vanessa, noted, “yeah, until we got the [Connected Students] devices, we had one laptop between the four of us that are here. And Noel had the majority use of that because that was his school work took priority”. This lack of individuated access to a device would make participation in some research designs difficult, particularly those that rely on regular access to a suitable device. Skills The Connected Students program’s provision of data and device access did not, on its own, ensure successful research participation. Many households struggled to engage with video research interviews due to insufficient digital skills. While a household with Internet connectivity might be considered on the “right” side of the digital divide, connectivity alone does not ensure participation. People also need to have the knowledge and skills required to use online resources.
Brianna’s household, for example, had downloaded Microsoft Teams to their desktop computer in readiness for the interview, but had neglected to consider whether that device had video or audio capabilities. To work around this restriction, the household decided to complete the interview via the Connected Students laptop, but this too proved difficult. Neither Brianna nor her parents were confident in transferring the link to the interview between devices, whether by email or otherwise, requiring the researchers to talk them through the steps required to log on, find, and send the link via email. While Brianna’s household faced digital skills challenges that affected both parent and student participants, in others such as Ariel’s, these challenges were focussed at the parental level. In these instances, the student participant provided a vital resource, helping adults navigate platforms and participate in the research. As Celeste, Ariel’s parent, explained, it's just new things that I get a bit – like, even on here, because your email had come through to me and I said to Ariel "We're going to use your computer with Teams. How do we do this?" So, yeah, worked it out. I just had to look up my email address, but I [initially thought] oh, my god; what am I supposed to do here? Although helpful in our own research given its focus on school-aged young people, this dynamic of parents being helped by their dependents illustrates that the adults in our sample were often unfamiliar with the digital skills required for video conferencing. Research focussing only on adults, or on households in which students have not developed these skills through extended periods of online education such as occurred during the COVID-19 lockdowns, may find participants lacking the digital skills to participate in video interviews. Participation was also impacted upon by participants' lack of more subtle digital skills around the norms and conventions of video conferencing. Several households, for example, conducted their interviews in less than ideal situations, such as from both moving and parked cars. A portion of the interview with Piper’s household was completed as they drove the 30 minutes from their home into Shepparton. Due to living out of town, this household often experienced poor reception. The interview was thus regularly disrupted as they dropped in and out of range, with the interview transcript peppered with interjections such as “we’re going through a bit of an Internet light spot ... we’re back ... sorry ...” (Karina, parent). Finally, Piper switched the device on which they were taking the interview to gain a better connection: “my iPad that we were meeting on has worse Internet than my phone Internet, so we kind of changed it around” (Karina). Choosing to participate in the research from locations other than the home provides evidence of the limited time available to these families, and the onerousness of research participation. These choices also indicate unfamiliarity with video conferencing norms. As digitally excluded households, these participants were likely not the target of popular discussions throughout the pandemic about optimising video conferences through careful consideration of lighting, background, make-up and positioning (e.g. Lasky; Niven-Phillips). This was often evident in how participants positioned themselves in front of the camera, frequently choosing not to sit squarely within the camera’s frame.
Sometimes this was because several household members were participating and struggled to all sit within view of the single device, but awkward camera positioning also occurred with only one or two people present. A number of interviews were initially conducted with shoulders, or foreheads, or ceilings rather than “whole” participants until we asked them to reposition the device so that the camera was pointing towards their faces. In noting this unfamiliarity we do not seek to criticise or apportion responsibility for accruing such skills to participating households, but rather to highlight the impact this had on the type of conversation between researcher and participant. Such practices offer valuable insight into how digital exclusion impacts on individual’s everyday lives as well as on their research participation. Conclusion Throughout the pandemic, digital methods such as video conferencing have been invaluable for researchers. However, while these methods have enabled fieldwork to continue despite COVID-19 disruptions, the shift to online platforms has important and under-acknowledged implications for who is and is not able to participate in research. In this article, we have drawn on our research with low-income households to demonstrate the barriers that such cohorts experience when participating in online research. Without the technology kits provided as part of our research design, these households would have struggled to participate due to a lack of adequate data and devices. Further, even with the kits provided, households faced additional barriers due to a lack of digital literacy. These experiences raise a number of questions that we encourage researchers to consider when designing methods that avoid in person interactions, and when reviewing studies that use similar approaches: who doesn’t have the technological access needed to participate in digital and online research? What are the implications of this for who and what is most visible in research conducted during the pandemic? Beyond questions of access, to what extent will disadvantaged populations not volunteer to participate in online research because of discomfort or unfamiliarity with digital tools and norms? When low-income participants are included, how can researchers ensure that participation does not unduly burden them by using up precious data resources? And, how can researchers facilitate positive and meaningful participation among those who might be less comfortable interacting through mediums like video conferencing? In raising these questions we acknowledge that not all research will or should be focussed on engaging with disadvantaged cohorts. Rather, our point is that through asking questions such as this, we will be better able to reflect on how data and participant samples are being impacted upon by shifts to digital methods during COVID-19 and beyond. As researchers, we may not always be able to adapt Zoom-based methods to be fully inclusive, but we can acknowledge this as a limitation and keep it in mind when reporting our findings, and later when engaging with the research that was largely conducted online during the pandemic. Lastly, while the Connected Students project focusses on impacts of affordability on digital inclusion, digital disadvantage intersects with many other forms of disadvantage. 
Thus, while our study focussed specifically on financial disadvantage, our call to be aware of who is and is not able to participate in Zoom-based research applies to digital exclusion more broadly, whatever its cause. Acknowledgements The Connected Students project was funded by Telstra. This research was also supported under the Australian Research Council's Discovery Early Career Researchers Award funding scheme (project number DE200100540). References Alam, Khorshed, and Sophia Imran. “The Digital Divide and Social Inclusion among Refugee Migrants: A Case in Regional Australia.” Information Technology & People 28.2 (2015): 344–65. Atkinson, John, Rosemary Black, and Allan Curtis. “Exploring the Digital Divide in an Australian Regional City: A Case Study of Albury”. Australian Geographer 39.4 (2008): 479–493. Australian Bureau of Statistics. “Census of Population and Housing: Socio-Economic Indexes for Areas (SEIFA), Australia, 2016.” 2016. <https://www.abs.gov.au/ausstats/abs@.nsf/Lookup/by%20Subject/2033.0.55.001~2016~Main%20Features~SOCIO-ECONOMIC%20INDEXES%20FOR%20AREAS%20(SEIFA)%202016~1>. ———. “Index of Relative Socio-Economic Advantage and Disadvantage (IRSAD).” 2016. <https://www.abs.gov.au/ausstats/abs@.nsf/Lookup/by%20Subject/2033.0.55.001~2016~Main%20Features~IRSAD~20>. Australian Council of Social Service. “The Future of Parents Next: Submission to Senate Community Affairs Committee.” 8 Feb. 2019. <http://web.archive.org/web/20200612014954/https://www.acoss.org.au/wp-content/uploads/2019/02/ACOSS-submission-into-Parents-Next_FINAL.pdf>. Beer, David. “The Social Power of Algorithms.” Information, Communication & Society 20.1 (2017): 1–13. Blanchard, Michelle, et al. “Rethinking the Digital Divide: Findings from a Study of Marginalised Young People’s Information Communication Technology (ICT) Use.” Youth Studies Australia 27.4 (2008): 35–42. Cater, Janet. “Skype: A Cost Effective Method for Qualitative Research.” Rehabilitation Counselors and Educators Journal 4.2 (2011): 10-17. Chen, Jesse. “Breaking Down Barriers to Digital Government: How Can We Enable Vulnerable Consumers to Have Equal Participation in Digital Government?” Sydney: Australian Communications Consumer Action Network, 2017. <http://web.archive.org/web/20200612015130/https://accan.org.au/Breaking%20Down%20Barriers%20to%20Digital%20Government.pdf>. Commonwealth Ombudsman. “Centrelink’s Automated Debt Raising and Recovery System: Implementation Report, Report No. 012019.” Commonwealth Ombudsman, 2019. <http://web.archive.org/web/20200612015307/https://www.ombudsman.gov.au/__data/assets/pdf_file/0025/98314/April-2019-Centrelinks-Automated-Debt-Raising-and-Recovery-System.pdf>. Deakin Hannah, and Kelly Wakefield. “Skype Interviewing: Reflections of Two PhD Researchers.” Qualitative Research 14.5 (2014): 603-616. Fujii, LeeAnn. Interviewing in Social Science Research: A Relational Approach. Routledge, 2018. Helsper, Ellen. “Digital Inclusion: An Analysis of Social Disadvantage and the Information Society.” London: Department for Communities and Local Government, 2008. Kennedy, Jenny, and Indigo Holcombe-James. “Connected Students Milestone Report 1: Project Commencement". Melbourne: RMIT, 2021. <https://apo.org.au/node/312817>. Kennedy, Jenny, et al. “Connected Students Milestone Report 2: Findings from First Round of Interviews". Melbourne: RMIT, 2021. <https://apo.org.au/node/312818>. Kennedy, Jenny, et al. Digital Domesticity: Media, Materiality, and Home Life. Oxford UP, 2020. Lasky, Julie. 
“How to Look Your Best on a Webcam.” New York Times, 25 Mar. 2020. <http://www.nytimes.com/2020/03/25/realestate/coronavirus-webcam-appearance.html>. Livingstone, Sonia, and Ellen Helsper. “Gradations in Digital Inclusion: Children, Young People and the Digital Divide.” New Media & Society 9.4 (2007): 671–696. Lobe, Bojana, David L. Morgan, and Kim A. Hoffman. “Qualitative Data Collection in an Era of Social Distancing.” International Journal of Qualitative Methods 19 (2020): 1–8. Lupton, Deborah. “Doing Fieldwork in a Pandemic (Crowd-Sourced Document).” 2020. <http://docs.google.com/document/d/1clGjGABB2h2qbduTgfqribHmog9B6P0NvMgVuiHZCl8/edit?ts=5e88ae0a#>. Murthy, Dhiraj. “Digital Ethnography: An Examination of the Use of New Technologies for Social Research.” Sociology 42.2 (2008): 837–855. ———. “Emergent Digital Ethnographic Methods for Social Research.” Handbook of Emergent Technologies in Social Research. Ed. Sharlene Nagy Hesse-Biber. Oxford UP, 2011. 158–179. Niven-Phillips, Lisa. “‘Virtual Meetings Aren’t Going Anywhere Soon’: How to Put Your Best Zoom Face Forward.” The Guardian, 27 Mar. 2021. <http://www.theguardian.com/fashion/2021/mar/27/virtual-meetings-arent-going-anywhere-soon-how-to-put-your-best-zoom-face-forward>. Ogle, Greg. “Telecommunications Expenditure in Australia: Fact Sheet.” Sydney: Australian Communications Consumer Action Network, 2017. <https://web.archive.org/web/20200612043803/https://accan.org.au/files/Reports/ACCAN_SACOSS%20Telecommunications%20Expenditure_web_v2.pdf>. Ogle, Greg, and Vanessa Musolino. “Connectivity Costs: Telecommunications Affordability for Low Income Australians.” Sydney: Australian Communications Consumer Action Network, 2016. <https://web.archive.org/web/20200612043944/https://accan.org.au/files/Reports/161011_Connectivity%20Costs_accessible-web.pdf>. Ragnedda, Massimo, and Maria Laura Ruiu. “Social Capital and the Three Levels of Digital Divide.” Theorizing Digital Divides. Eds. Massimo Ragnedda and Glenn Muschert. Routledge, 2017. 21–34. Rennie, Ellie, et al. “At Home on the Outstation: Barriers to Home Internet in Remote Indigenous Communities.” Telecommunications Policy 37.6 (2013): 583–93. Taylor, Linnet. “What Is Data Justice? The Case for Connecting Digital Rights and Freedoms Globally.” Big Data & Society 4.2 (2017): 1–14. Thomas, Julian, et al. Measuring Australia’s Digital Divide: The Australian Digital Inclusion Index 2018. Melbourne: RMIT University, for Telstra, 2018. ———. Measuring Australia’s Digital Divide: The Australian Digital Inclusion Index 2019. Melbourne: RMIT University and Swinburne University of Technology, for Telstra, 2019. ———. Measuring Australia’s Digital Divide: The Australian Digital Inclusion Index 2020. Melbourne: RMIT University and Swinburne University of Technology, for Telstra, 2020. Zuboff, Shoshana. “Big Other: Surveillance Capitalism and the Prospects of an Information Civilization.” Journal of Information Technology 30 (2015): 75–89.
APA, Harvard, Vancouver, ISO, and other styles
29

Gardner, Paula. "The Perpetually Sick Self." M/C Journal 5, no. 5 (October 1, 2002). http://dx.doi.org/10.5204/mcj.1986.

Full text
Abstract:
Since the mid-eighties, personality and mood have undergone vigorous surveillance and repair across new populations in the United States. While government and the psy-complexes 1 have always had a stake in promoting citizen health, it is unique that, today, State, industry, and non-governmental organisations recruit consumers to act upon their own mental health. And while citizen behaviours in public spaces have long been fodder for diagnosis, the scope of behaviours and the breadth of the surveyed population have expanded significantly over the past twenty years. How has the notion of behavioural illness been successfully spun to recruit new populations to behavioural diagnosis and repair? Why is it a reasonable proposition that our personalities might be sick, our moods ill? This essay investigates the cultural promotion of a 'script' that assumes sick moods are possible, encourages the self-assessment of risk and self-management of dysfunctional mood, and has thus helped to create a new, adjustable subject. Michel Foucault (1976, 1988) contended that in order for subjects to act upon their selves -- for example, assess themselves via the behavioural health script -- we must view the Self as a construction, a work in progress that is alterable and in need of alteration in order for psychiatric action to seem appropriate. This conception of the self constitutes an extreme theoretical shift from the early modern belief (of Rousseau or Kant) that a core soul inhabited and shaped being, or the moral self.2 Foucault (1976) insisted that subjects are 'not born but made' through formal and informal social discourses that construct knowledge of the 'normal' self. Throughout the 19th century and the modern era, as medical, juridical, and psychiatric institutions gained increasing cultural capital, the normal self became allegedly 'knowable' through science. In turn, the citizen became 'professionalised' (Funicello 1993) -- answerable to these constructed standards, or subject to what Foucault termed biopower. In order to avoid punishments wrested upon the 'deviant', such as being placed in an asylum or criminalised, citizens capitulated to social norms, and thus helped the State to achieve social order. 3 While 'technologies of power' or domination determined the conduct of individuals in the premodern era, 'technologies of the self' became prominent in the modern era.4 (Foucault, 'Technologies of the Self') These, explained Foucault, permit individuals to act upon their 'bodies, souls, thoughts, conduct and ways of being' to transform them, to attain happiness, or perfection, among other things (18). Contemporary psychiatric discourses, for example, call upon citizens to transform themselves via self-regulation, thus lessening the State's disciplinary burden. Since the mid-twentieth century, biopsychiatry has been embraced nationally, and has played a key role in propagating self-disciplining citizens. Biopsychiatric logic is viewed culturally as common sense due to a number of occurrences. The dominant media have enthusiastically celebrated so-called biotechnical successes, such as sheep cloning and the development of better drugs to treat schizophrenia. Hype has also surrounded newer drugs to treat depression (e.g. Prozac) and anxiety (e.g. Paxil), as well as the 'cosmetic' use of antidepressants to allegedly improve personality.5 Citizens, then, are enlisted to trust in psychiatric science to repair mood dysfunction, but also to reveal the 'true' self, occluded by biologically impaired mood.
Suggesting that biopsychiatry's 'knowledge' of the human brain has revealed the human condition and can repair sick selves, these discourses have helped to launch the behavioural health script into the national psyche. The successful marketing of the script was also achieved by the diagnostic philosophy encouraged by revisions of the Diagnostic and Statistical Manual of Mental Disorders (the DSM); these revisions increased the number of affective (mood) and personality diagnoses and broadened diagnostic criteria. The new DSMs 6 institutionalised the pathologisation of common personality and mood distresses as biological or genetic disorders. The texts constitute 'knowledge' of normal personality and behaviour, and press consumers toward biotechnical tools to repair the defunct self. Ian Hacking (1995) suggests that new moral concepts emerge when old ones acquire new connotations, thereby affecting our sense of who we are. The once moral self, known through introspection, is thus transformed via biopsychiatry into a self that is constructed in accordance with scientific 'knowledge'. The State and various private industries have a stake in promoting this Sick Self script. Promoting Diagnosis of the Sick Self Employing the DSM's broad criteria, research by the National Institute of Mental Health (NIMH) contends that a significant percentage of the population is behaviourally ill. The most recent Surgeon General report on Mental Health (from 1999), which also employed broad criteria, argues that a striking 50 million Americans are afflicted with a mental illness each year, most of which are non-major disorders affecting behaviour, personality and mood.7 Additionally, studies suggest that behavioural illness results in lost work days and increases demand for health services, thus constituting a severe financial burden to the State. Such studies consequently provide the State with ample reason to promote behavioural illness. In predicting an epidemic in behavioural illness and a huge increase in mental health service needs, the State has constructed health policy in accordance with the behavioural sickness script. Health policy embraces DSM diagnostic tools that sweep in a wide population by diagnosing risk as illness and links diagnosis with biotechnical recovery methods. Because criteria for these disorders have expanded and diagnoses have become more vague, however, over-diagnosis of the population has become common.8 Depression, for example, is broadly defined to include moods ranging from the blues to suicidal ideation. Yet, the Sick Self script is ubiquitously embraced by NGO, industry, and State discourses, calling for consumer self-scrutiny and strongly promoting psychopharmaceuticals. These activities have been most successful; to wit: personality disorders were among the most common diagnoses of the 1980s, and depression, which was a rare disorder thirty-five years ago, became the most common mental illness in the late 1990s (Healy). Consumer Health Groups & Industry Promotions Health institutions and drug industries promote mood illness and market drug remedies as a means of profit maximisation. Broad-spectrum diagnoses are, by definition, easy to sell to a wide population and create a vast market for recovery products. Pharmaceutical and insurance companies (each a multibillion-dollar industry), an expanding variety of self-help industries, consumer health web sites, and an array of psy-complex workers all have a stake in promoting the broad diagnosis of mood and behavioural disorders.
9 In so doing, consumer groups and the health and pharmaceutical industries not only encourage self-discipline (aligning themselves with State productivity goals), but create a vast, ongoing market for recovery products. Promoting Illness and Recovery So strong is the linkage between illness and recovery that pharmaceutical company Eli Lilly sells Prozac by promoting the broad notion of depression, rather than the drug itself. It does so through depression brochures (advertised on TV) and a web page that discusses depression symptoms and offers a depression quiz, instead of product information. Likewise, Psych Central, a typical informational health site, provides consumers with standard DSM depression definitions and information (from the biopsychiatric-driven American Psychiatric Association (APA) or the NIMH), and liberal behavioural illness quizzes that typically over-diagnose consumers. 10 The Psych Central site also lists a broad range of depression symptoms, while its FAQ link promotes the self-management of mood ailments. For example, the site directs those who believe that they are depressed and want help to contact a physician, obtain a diagnosis, and initiate antidepressant treatment. Such web sites, viewed as a whole, appear to deliver certified knowledge that a 'normal' mood exists, that mood disorders are common, and that abiding citizens should diagnose and treat their mood ailments. Another essential component of the behavioural script is the suggestion that the modern self's mood is interminably sick. Because common mood distresses are fodder for diagnosis, the self is always at risk of illness, and requires vigilant self-scrutiny. The self is never a finished product. Moreover, mood sickness is insidious and quickly spirals from risk to full-blown disorder. 11 As such, behavioural illness requires on-going self-assessment. Finally, because mood sickness threatens social productivity and State financial solvency, a moral overtone is added to the mix -- good citizens are encouraged to treat their mood dysfunctions promptly, for the common good. The script thus constructs citizenship as a motive for behavioural self-scrutiny; as such, it can naturally recommend that individuals, rather than experts, take charge of the surveillance process. The recommendation of self-determined illness is also a sales feature of the script, appealing to the American ethic of individualism -- even, paradoxically, as the script proposes that science best directs us to our selves. Self-Managed Recovery Health institutions and industries that deploy this script recommend not only self-diagnosis, but also self-managed treatment, as the ideal. Health information web sites, for example, tend to displace the expert by encouraging consumers to pre-diagnose their selves (often via on-line quizzes) and to then consult an expert for formal diagnosis and to organise a treatment program. Like governmental health organisations' web sites, these commonly link consumer-driven, broad-spectrum diagnosis to psycho-pharmaceutical treatment, primarily by listing drugs as the first line of treatment, and linking consumers to drug information. Unsurprisingly, pharmaceutical companies support or own many 'informational' sites.
Depression-net.com, for example, is owned by Organon, maker of Remeron, an SSRI in competition with Prozac.12 Still, even sites that receive little or no funding tend to display drugs prominently; for example, Internet Mental Health, which accepts no drug funding, lists drugs immediately after diagnosis on the sidebar. This trend illustrates the extent to which drugs are viewed by consumers as a first step in addressing all types of mood sicknesses. Consumer health sites, geared toward Internet users seeking health care information (estimated to be 43% of the 120 million users), promote the illness-recovery link more aggressively. Dr.koop.com, one of the most visited sites on the Internet, describes itself as 'consumer-focused' and 'interactive'. Yet, the homepage of this site tends to include 'news' stories that relay the success of drugs or report on new biopsychiatric studies in depression or mental health. Some consumer sites such as WebMD prominently display links to drugstores (such as Drugstore.com), many of which are owned in part or entirely by pharmaceutical companies.13 Similar to the common practices of direct-to-consumer advertising, both informational and consumer sites by-pass the expert, promote recovery via drugs, and direct the consumer to a doctor in search of a prescription, rather than health care advice. State, informational and consumer web sites all help to construct certain populations as at-risk for behavioural sickness. The NIMH information page on depression -- uncanny in its likeness to consumer health and pharmaceutical sites -- utilises the DSM definition of depression and recommends the standard regime of diagnosis and biotechnical treatments (highlighting antidepressants) most appropriate for a diagnosis of major, rather than minor, depression. The site also elaborates the broad approach to mood illness, and recommends that women, children and seniors -- groups deemed at-risk by the broad criteria -- be especially scrutinised for depression. By articulating the broad DSM definition of depression, a generalisable 'self' -- anyone suffering common ailments including sadness, lethargy or weight change -- is deemed at risk of depression or other behavioural illness. At the same time, at-risk groups are constructed as populations in need of more urgent scrutiny, namely society's less powerful individuals, rather than middle-aged males. That is, society's decision-makers -- psychiatric researchers, State policy-makers, pharmaceutical CEOs, etc. -- are considered least at risk of having defunct selves and impaired productivity. Selling Mood Sickness These brief examples illustrate the standard presentation of behavioural illness information on the Web and from traditional resources such as mailings, brochures, and consumer manuals. Presenting the ideal self as knowable and achievable with the help of bio-psychiatric science, these discourses encourage citizens to self-scrutinise, self-define, and even self-manage the possibility of mood or behavioural dysfunction.
Because the individual gathers information, determines her pre-diagnosis, and seeks out a recovery technology, the many choices involved in behavioural scrutiny make it appear to be a free and 'democratic' activity. Additionally, as individuals take on the role of the expert, self-diagnosing via questionnaires, the highly disciplinary nature of the behavioural diagnosis appears unthreatening to individual sovereignty. Thus, this technology of the self solves an age-old problem of capitalist democracy -- how to instill citizens' faith in absolute individual liberty (as a source of good government) and, at the same time, achieve the absolute governance of the individual (Miller). Foucault contended that citizens are brought into the social contract of citizenship not simply through social and governmental contracts but by processes of policing that become embedded in our notions of citizenship. The process of self-management recommended by the ubiquitous behavioural script functions smoothly as a technology of surveillance in this era, where the ideal self is known and repaired through biopsychiatric science, the democratic responsibility of a good citizen. The liberal contract has always entailed an exchange of rights for freedoms -- in Rousseau's terms 'making men free by making them subjects.' (Miller xviii) When we make ourselves subjects to ongoing behavioural scrutiny, the resulting Self is not freed; rather, it is constrained by a perpetual sickness. Notes 1 This term is used in a Foucaultian sense, to refer to all those who work under and benefit or profit from the biological model of psychiatry dominant since the 1950s in the U.S. 2 For more discussion, see Ian Hacking, Rewriting the Soul: Multiple Personality and the Sciences of Memory (1995). 3 In his essay 'Technologies of the Self' (1988), Foucault outlines the four major types of technologies that function as practical reason and entice citizens to behave according to constructed social standards. Among these are technologies of production (that permit us to produce things), technologies of sign systems (permitting us to use symbols), and the technologies of power and self mentioned in the above text. Through these technologies, operations of individuals become highly regulated, some visible and some difficult to perceive. The less visible technologies of the self became essential to the smooth functioning of society in the modern era. 4 'Technologies' is used to refer to mechanisms and actions of institutions, or simply social norms and habits, that work, ultimately, to govern the individual, or create behaviour that serves desires of the State and dominant social bodies. 5 Peter Kramer, author of the best-selling book Listening to Prozac (1995), contends that his patients using Prozac often credited the drug with helping their true personalities to surface. 6 The two revisions occurred in 1987 and 1994. 7 Of that group, only five percent suffer a 'severe' form of mental illness (such as schizophrenia, or an extreme form of bipolar or obsessive-compulsive disorder), while the rest suffer less severe behavioural and mood disorders. Similar research (also based on broad criteria) was published throughout the 1990s suggesting an American epidemic of behavioural illness; it was claimed that 17% of the population is neurotic, while 10-15% of the population (and 30-50% of those seeking care) was said to possess a personality disorder.
(Hales and Hales, 1995) 8 The most widely assigned diagnoses in this category today are: depression, multiple personality, adjustment disorder, eating disorders and Attention Deficit Hyperactivity Disorder (ADHD), which have extremely broad criteria, and are easily assigned to a wide segment of the population. 9 The quizzes offered at these sites are standard in psychiatry; the difference here is that these are consumer-conducted. Lilly uses the Zung Self-Assessment Tool, which asks 20 broad questions regarding mood, and overdiagnoses individuals with potential depression. By responding to vague questions such as 'Morning is when I feel the best', 'I notice that I am losing weight', and 'I feel downhearted, blue and sad' with the choice of 'sometimes', individuals are thereby pre-diagnosed with potential depression. (https://secure.prozac.com/Main/zung.jsp) Psych Central uses the Goldberg Inventory, which is similarly broad, consumer-operated, and also tends to overdiagnose. 10 The DSM and other psychiatric texts and consumer manuals commonly suggest that undiagnosed depression will lead, eventually, to full-blown major depression. While a minority of individuals who suffer ongoing episodes of major depression will eventually suffer chronic major depression, it has not been found that minor depression will snowball into major depression or chronic major depression. This, in fact, is one of the many suspicions among researchers that is referred to as fact in psychiatric literature and consumer manuals. A similar case in point is the suggestion that depression is a brain disorder, when in fact, research has not determined biochemistry or genetics to be the 'cause' of major depression. 11 Increasingly, pharmaceutical sites are indistinguishable from consumer sites, as in the case of Bristol-Myers Squibb's depression page (http://www.livinglifebetter.com/src/htdo...), offering a layperson's depression definition and, immediately thereafter, information on its antidepressant Serzone. 12 Like the informational and State sites, these also link consumers to depression information (generally NIMH, FDA or APA research), as well as questionnaires. References American Psychiatric Association. Diagnostic and Statistical Manual of Mental Disorders. 4th ed. Washington, D.C.: American Psychiatric Press, Inc., 1994. Cruikshank, Barbara. The Will to Empower: Democratic Citizens and Other Subjects. Ithaca, NY: Cornell University Press, 1999. Foucault, Michel. Madness and Civilization: A History of Insanity in the Age of Reason. New York: Vintage, 1961. - - - . The Order of Things: An Archaeology of the Human Sciences. New York: Vintage, 1966. - - - . The History of Sexuality: An Introduction, Volume I. New York: Vintage, 1976. - - - . 'Technologies of the Self', Technologies of the Self: A Seminar with Michel Foucault. Ed. Luther Martin, Huck Gutman, and Patrick H. Hutton. Amherst: University of Massachusetts Press, 1988. 16-49. Funicello, Theresa. The Tyranny of Kindness: Dismantling the Welfare System to End Poverty in America. New York: Atlantic Monthly Press, 1993. Hales, Dianne R., and Robert E. Hales. Caring For the Mind: The Comprehensive Guide to Mental Health. New York: Bantam Books, 1995. Healy, David. The Anti-Depressant Era. Cambridge, Mass.: Harvard University Press, 1997. Kramer, Peter D. Listening to Prozac: A Psychiatrist Explores Antidepressant Drugs and the Remaking of the Self. New York: Viking, 1993. Miller, Toby. The Well-Tempered Self: Citizenship, Culture and the Postmodern Subject.
Baltimore: The Johns Hopkins University Press, 1993. - - - . Technologies of Truth: Cultural Citizenship and the Popular Media. Minneapolis: University of Minnesota Press, 1998. Office of the Surgeon General. Mental Health: A Report of the Surgeon General. 1999. <http://www.surgeongeneral.gov/library/me...> Rose, Nikolas. Governing the Soul: The Shaping of the Private Self. London: Routledge, 1990. Links http://www.drugstore.com http://psychcentral.com/library/depression_faq.htm http://www.wikipedia.com/wiki/DSM-IV http://www.nimh.nih.gov/publicat/depression.cfm http://www.livinglifebetter.com/src/htdocs/index.asp?keyword=depression_index http://my.webmd.com http://www.mentalhealth.com http://www.surgeongeneral.gov/library/mentalhealth/home.html http://www.prozac.com http://my.webmd.com/ http://www.a-silver-lining.org/BPNDepth/criteria_d.html#MDD http://psychcentral.com/depquiz.htm
APA, Harvard, Vancouver, ISO, and other styles
30

Grossman, Michele. "Prognosis Critical: Resilience and Multiculturalism in Contemporary Australia." M/C Journal 16, no. 5 (August 28, 2013). http://dx.doi.org/10.5204/mcj.699.

Full text
Abstract:
Introduction Most developed countries, including Australia, have a strong focus on national, state and local strategies for emergency management and response in the face of disasters and crises. This framework can include coping with catastrophic dislocation, service disruption, injury or loss of life in the face of natural disasters such as major fires, floods, earthquakes or other large-impact natural events, as well as dealing with similar catastrophes resulting from human actions such as bombs, biological agents, cyber-attacks targeting essential services such as communications networks, or other crises affecting large populations. Emergency management frameworks for crisis and disaster response are distinguished by their focus on the domestic context for such events; that is, how to manage and assist the ways in which civilian populations, who are for the most part inexperienced and untrained in dealing with crises and disasters, are able to respond and behave in such situations so as to minimise the impacts of a catastrophic event. Even in countries like Australia that demonstrate a strong public commitment to cultural pluralism and social cohesion, ethno-cultural diversity can be seen as a risk or threat to national security and values at times of political, natural, economic and/or social tensions and crises. Australian government policymakers have recently focused, with increasing intensity, on “community resilience” as a key element in countering extremism and enhancing emergency preparedness and response. In some sense, this is the result of a tacit acknowledgement by government agencies that there are limits to what they can do for domestic communities should such a catastrophic event occur, and accordingly, the focus in recent times has shifted to how governments can best help people to help themselves in such situations, a key element of the contemporary “resilience” approach. Yet despite the robustly multicultural nature of Australian society, explicit engagement with Australia’s cultural diversity flickers only fleetingly on this agenda, which continues to pursue approaches to community resilience in the absence of understandings about how these terms and formations may themselves need to be diversified to maximise engagement by all citizens in a multicultural polity. There have been some recent efforts in Australia to move in this direction, for example the Australian Emergency Management Institute (AEMI)’s recent suite of projects with culturally and linguistically diverse (CALD) communities (2006-2010) and the current Australia-New Zealand Counter-Terrorism Committee-supported project on “Harnessing Resilience Capital in Culturally Diverse Communities to Counter Violent Extremism” (Grossman and Tahiri), which I discuss in a longer forthcoming version of this essay (Grossman). Yet the understanding of ethno-cultural identity and difference that underlies much policy thinking on resilience remains problematic for the way in which it invests in a view of the cultural dimensions of community resilience as relic rather than resource – valorising the preservation of and respect for cultural norms and traditions, but silent on what different ethno-cultural communities might contribute toward expanded definitions of both “community” and “resilience” by virtue of the transformative potential and existing cultural capital they bring with them into new national and also translocal settings. 
For example, a primary conclusion of the joint program between AEMI and the Australian Multicultural Commission is that CALD communities are largely “vulnerable” in the context of disasters and emergency management and need to be better integrated into majority-culture models of theorising and embedding community resilience. This focus on stronger national integration and the “vulnerability” of culturally diverse ethno-cultural communities in the Australian context echoes the work of scholars beyond Australia such as McGhee, Mouritsen (Reflections, Citizenship) and Joppke. They argue that the “civic turn” in debates around resurgent contemporary nationalism and multicultural immigration policies privileges civic integration over genuine two-way multiculturalism. This approach sidesteps the transculturational (Ortiz; Welsch; Mignolo; Bennesaieh; Robins; Stein) aspects of contemporary social identities and exchange by paying lip-service to cultural diversity while affirming a neo-liberal construct of civic values and principles as a universalising goal of Western democratic states within a global market economy. It also suggests a superficial tribute to cultural diversity that does not embed diversity comprehensively at the levels of either conceptualising or resourcing different elements of Australian transcultural communities within the generalised framework of “community resilience.” And by emphasising cultural difference as vulnerability rather than as resource or asset, it fails to acknowledge the varieties of resilience capital that many culturally diverse individuals and communities may bring with them when they resettle in new environments, by ignoring the question of what “resilience” actually means to those from culturally diverse communities. In so doing, it also avoids the critical task of incorporating intercultural definitional diversity around the concepts of both “community” and “resilience” used to promote social cohesion and the capacity to recover from disasters and crises. How might we think differently about the broader challenges for multiculturalism itself as a resilient transnational concept and practice? The Concept of Resilience The meanings of resilience vary by disciplinary perspective. While there is no universally accepted definition of the concept, it is widely acknowledged that resilience refers to the capacity of an individual to do well in spite of exposure to acute trauma or sustained adversity (Liebenberg 219). The term originates in the Latin word resilio, meaning ‘to jump back’, and there is general consensus that resilience pertains to an individual’s, community’s or system’s ability to adapt to and ‘bounce back’ from a disruptive event (Mohaupt 63, Longstaff et al. 3). Over the past decade there has been a dramatic rise in interest in the clinical, community and family sciences concerning resilience to a broad range of adversities (Weine 62). While debate continues over which discipline can be credited with first employing resilience as a concept, Mohaupt argues that most of the literature on resilience cites social psychology and psychiatry as the origin of the concept, beginning in the mid-20th century. The pioneer researchers of what became known as resilience research studied the impact of living in dysfunctional families on children. For example, the findings of work by Garmezy, Werner and Smith and Rutter showed that about one third of children in these studies were coping very well despite considerable adversities and traumas.
In asking what it was that prevented the children in their research from being negatively influenced by their home environments, such research provided the basis for future research on resilience. Such work was also ground-breaking for identifying the so-called ‘protective factors’ or resources that individuals can operationalise when dealing with adversity. In essence, protective factors are those conditions in the individual that protect them from the risk of dysfunction and enable recovery from trauma. They mitigate the effects of stressors or risk factors, that is, those conditions that predispose one to harm (Hajek 15). Protective factors include the inborn traits or qualities within an individual, those defining an individual’s environment, and also the interaction between the two. Together, these factors give people the strength, skills and motivation to cope in difficult situations and re-establish (a version of) ‘normal’ life (Gunnestad). Identifying protective factors is important in terms of understanding the particular resources a given sociocultural group has at its disposal, but it is also vital to consider the interconnections between various protective mechanisms, how they might influence each other, and to what degree. An individual, for instance, might display resilience or adaptive functioning in a particular domain (e.g. emotional functioning) but experience significant deficits in another (e.g. academic achievement) (Hunter 2). It is also essential to scrutinise how the interaction between protective factors and risk factors creates patterns of resilience. Finally, a comprehensive understanding of the interrelated nature of protective mechanisms and risk factors is imperative for designing effective interventions and tailored preventive strategies (Weine 65). In short, contemporary thinking about resilience suggests it is neither entirely personal nor strictly social, but an interactive and iterative combination of the two. It is a quality of the environment as much as the individual. For Ungar, resilience is the complex entanglements between “individuals and their social ecologies [that] will determine the degree of positive outcomes experienced” (3). Thinking about resilience as context-dependent is important because research that is too trait-based or actor-centred risks ignoring any structural or institutional forces. A more ecological interpretation of resilience, one that takes a person’s context and environment into account, is vital in order to avoid blaming the victim for any hardships they face, or relieving state and institutional structures from their responsibilities in addressing social adversity, which can “emphasise self-help in line with a neo-conservative agenda instead of stimulating state responsibility” (Mohaupt 67). Nevertheless, Ungar posits that a coherent definition of resilience has yet to be developed that adequately ‘captures the dual focus of the individual and the individual’s social ecology and how the two must both be accounted for when determining the criteria for judging outcomes and discerning processes associated with resilience’ (7). Recent resilience research has consequently prompted a shift away from vulnerability towards protective processes — a shift that highlights the sustained capabilities of individuals and communities under threat or at risk. 
Locating ‘Culture’ in the Literature on Resilience However, an understanding of the role of culture has remained elusive or marginalised within this trend; there has been comparatively little sustained investigation into the applicability of resilience constructs to non-western cultures, or how the resources available for survival might differ from those accessible to western populations (Ungar 4). As such, a growing body of researchers is calling for more rigorous inquiry into culturally determined outcomes that might be associated with resilience in non-western or multicultural cultures and contexts, for example where Indigenous and minority immigrant communities live side by side with their ‘mainstream’ neighbours in western settings (Ungar 2). ‘Cultural resilience’ considers the role that cultural background plays in determining the ability of individuals and communities to be resilient in the face of adversity. For Clauss-Ehlers, the term describes the degree to which the strengths of one’s culture promote the development of coping (198). Culturally-focused resilience suggests that people can manage and overcome stress and trauma based not on individual characteristics alone, but also from the support of broader sociocultural factors (culture, cultural values, language, customs, norms) (Clauss-Ehlers 324). The innate cultural strengths of a culture may or may not differ from the strengths of other cultures; the emphasis here is not so much comparatively inter-cultural as intensively intra-cultural (VanBreda 215). A culturally focused resilience model thus involves “a dynamic, interactive process in which the individual negotiates stress through a combination of character traits, cultural background, cultural values, and facilitating factors in the sociocultural environment” (Clauss-Ehlers 199). In understanding ways of ‘coping and hoping, surviving and thriving’, it is thus crucial to consider how culturally and linguistically diverse minorities navigate the cultural understandings and assumptions of both their countries of origin and those of their current domicile (Ungar 12). Gunnestad claims that people who master the rules and norms of their new culture without abandoning their own language, values and social support are more resilient than those who tenaciously maintain their own culture at the expense of adjusting to their new environment. They are also more resilient than those who forego their own culture and assimilate with the host society (14). Accordingly, if the combination of both valuing one’s culture as well as learning about the culture of the new system produces greater resilience and adaptive capacities, serious problems can arise when a majority tries to acculturate a minority to the mainstream by taking away or not recognising important parts of the minority culture. In terms of resilience, if cultural factors are denied or diminished in accounting for and strengthening resilience – in other words, if people are stripped of what they possess by way of resilience built through cultural knowledge, disposition and networks – they do in fact become vulnerable, because ‘they do not automatically gain those cultural strengths that the majority has acquired over generations’ (Gunnestad 14). Mobilising ‘Culture’ in Australian Approaches to Community Resilience The realpolitik of how concepts of resilience and culture are mobilised is highly relevant here. 
As noted above, when ethnocultural difference is positioned as a risk or a threat to national identity, security and values, this is precisely the moment when vigorously, even aggressively, nationalised definitions of ‘community’ and ‘identity’ that minoritise or disavow cultural diversities come to the fore in public discourse. The Australian evocation of nationalism and national identity, particularly in the way it has framed policy discussion on managing national responses to disasters and threats, has arguably been more muted than some of the European hysteria witnessed recently around cultural diversity and national life. Yet we still struggle with the idea that newcomers to Australia might fall on the surplus rather than the deficit side of the ledger when it comes to identifying and harnessing resilience capital. A brief example of this trend is explored here. From 2006 to 2010, the Australian Emergency Management Institute embarked on an ambitious government-funded four-year program devoted to strengthening community resilience in relation to disasters with specific reference to engaging CALD communities across Australia. The program, Inclusive Emergency Management with CALD Communities, was part of a wider Australian National Action Plan to Build Social Cohesion, Harmony and Security in the wake of the London terrorist bombings in July 2005. Involving CALD community organisations as well as various emergency and disaster management agencies, the program ran various workshops and agency-community partnership pilots, developed national school education resources, and commissioned an evaluation of the program’s effectiveness (Farrow et al.). While my critique here is certainly not aimed at emergency management or disaster response agencies and personnel themselves – dedicated professionals who often achieve remarkable results in emergency and disaster response under extraordinarily difficult circumstances – it is nevertheless important to highlight how the assumptions underlying elements of AEMI’s experience and outcomes reflect the persistent ways in which ethnocultural diversity is rendered as a problem to be surmounted or a liability to be redressed, rather than as an asset to be built upon or a resource to be valued and mobilised. AEMI’s explicit effort to engage with CALD communities in building overall community resilience was important in its tacit acknowledgement that emergency and disaster services were (and often remain) under-resourced and under-prepared in dealing with the complexities of cultural diversity in emergency situations. Despite these good intentions, however, while the program produced some positive outcomes and contributed to crucial relationship building between CALD communities and emergency services within various jurisdictions, it also continued to frame the challenge of working with cultural diversity as a problem of increased vulnerability during disasters for recently arrived and refugee background CALD individuals and communities. This highlights a common feature in community resilience-building initiatives, which is to focus on those who are already ‘robust’ versus those who are ‘vulnerable’ in relation to resilience indicators, and whose needs may require different or additional resources in order to be met. At one level, this is a pragmatic resourcing issue: national agencies understandably want to put their people, energy and dollars where they are most needed in pursuit of a steady-state unified national response at times of crisis. 
Nor should it be argued that at least some CALD groups, particularly those from new arrival and refugee communities, are not vulnerable in at least some of the ways and for some of the reasons suggested in the program evaluation. However, the consistent focus on CALD communities as ‘vulnerable’ and ‘in need’ is problematic, as well as partial. It casts members of these communities as structurally and inherently less able and less resilient in the context of disasters and emergencies: in some sense, as those who, already ‘victims’ of chronic social deficits such as low English proficiency, social isolation and a mysterious unidentified set of ‘cultural factors’, can become doubly victimised in acute crisis and disaster scenarios. In what is by now a familiar trope, the description of CALD communities as ‘vulnerable’ precludes asking questions about what they do have, what they do know, and what they do or can contribute to how we respond to disaster and emergency events in our communities. A more profound problem in this sphere revolves around working out how best to engage CALD communities and individuals within existing approaches to disaster and emergency preparedness and response. This reflects a fundamental but unavoidable limitation of disaster preparedness models: they are innately spatially and geographically bounded, and consequently understand ‘communities’ in these terms, rather than expanding definitions of ‘community’ to include the dimensions of community-as-social-relations. While some good engagement outcomes were achieved locally around cross-cultural knowledge for emergency services workers, the AEMI program fell short of asking some of the harder questions about how emergency and disaster service scaffolding and resilience-building approaches might themselves need to change or transform, using a cross-cutting model of ‘communities’ as both geographic places and multicultural spaces (Bartowiak-Théron and Crehan) in order to be more effective in national scenarios in which cultural diversity should be taken for granted. Toward Acknowledgement of Resilience Capital Most significantly, the AEMI program did not produce any recognition of the ways in which CALD communities already possess resilience capital, or consider how this might be drawn on in formulating stronger community initiatives around disaster and threats preparedness for the future. Of course, not all individuals within such communities, nor all communities across varying circumstances, will demonstrate resilience, and we need to be careful of either overgeneralising or romanticising the kinds and degrees of ‘resilience capital’ that may exist within them. Nevertheless, at least some have developed ways of withstanding crises and adapting to new conditions of living. This is particularly so in connection with individual and group behaviours around resource sharing, care-giving and social responsibility under adverse circumstances (Grossman and Tahiri) – all of which are directly relevant to emergency and disaster response. While some of these resilient behaviours may have been nurtured or enhanced by particular experiences and environments, they can, as the discussion of recent literature above suggests, also be rooted more deeply in cultural norms, habits and beliefs. Whatever their origins, for culturally diverse societies to achieve genuine resilience in the face of both natural and human-made disasters, it is critical to call on the ‘social memory’ (Folke et al.) 
of communities faced with responding to emergencies and crises. Such wellsprings of social memory ‘come from the diversity of individuals and institutions that draw on reservoirs of practices, knowledge, values, and worldviews and is crucial for preparing the system for change, building resilience, and for coping with surprise’ (Adger et al.). Consequently, if we accept the challenge of mapping an approach to cultural diversity as resource rather than relic into our thinking around strengthening community resilience, there are significant gains to be made. For a whole range of reasons, no diversity-sensitive model or measure of resilience should invest in static understandings of ethnicities and cultures; all around the world, ethnocultural identities and communities are in a constant and sometimes accelerated state of dynamism, reconfiguration and flux. But to ignore the resilience capital and potential protective factors that ethnocultural diversity can offer to the strengthening of community resilience more broadly is to miss important opportunities that can help suture the existing disconnects between proactive approaches to intercultural connectedness and social inclusion on the one hand, and reactive approaches to threats, national security and disaster response on the other, undermining the effort to advance effectively on either front. This means that dominant social institutions and structures must be willing to contemplate their own transformation as the result of transcultural engagement, rather than merely insisting, as is often the case, that ‘other’ cultures and communities conform to existing hegemonic paradigms of being and of living. In many ways, this is the most critical step of all. A resilience model and strategy that questions its own culturally informed yet taken-for-granted assumptions and premises, goes out into communities to test and refine these, and returns to redesign its approach based on the new knowledge it acquires, would reflect genuine progress toward an effective transculturational approach to community resilience in culturally diverse contexts. References Adger, W. Neil, Terry P. Hughes, Carl Folke, Stephen R. Carpenter and Johan Rockström. “Social-Ecological Resilience to Coastal Disasters.” Science 309.5737 (2005): 1036-1039. ‹http://www.sciencemag.org/content/309/5737/1036.full›. Bartowiak-Théron, Isabelle, and Anna Corbo Crehan. “The Changing Nature of Communities: Implications for Police and Community Policing.” Community Policing in Australia: Australian Institute of Criminology (AIC) Reports, Research and Policy Series 111 (2010): 8-15. Benessaieh, Afef. “Multiculturalism, Interculturality, Transculturality.” Ed. A. Benessaieh. Transcultural Americas/Amériques Transculturelles. Ottawa: U of Ottawa Press/Les Presses de l’Université d’Ottawa, 2010. 11-38. Clauss-Ehlers, Caroline S. “Sociocultural Factors, Resilience and Coping: Support for a Culturally Sensitive Measure of Resilience.” Journal of Applied Developmental Psychology 29 (2008): 197-212. Clauss-Ehlers, Caroline S. “Cultural Resilience.” Encyclopedia of Cross-Cultural School Psychology. Ed. C. S. Clauss-Ehlers. New York: Springer, 2010. 324-326. Farrow, David, Anthea Rutter and Rosalind Hurworth. Evaluation of the Inclusive Emergency Management with Culturally and Linguistically Diverse (CALD) Communities Program. Parkville, Vic.: Centre for Program Evaluation, U of Melbourne, July 2009. 
‹http://www.ag.gov.au/www/emaweb/rwpattach.nsf/VAP/(9A5D88DBA63D32A661E6369859739356)~Final+Evaluation+Report+-+July+2009.pdf/$file/Final+Evaluation+Report+-+July+2009.pdf›. Folke, Carl, Thomas Hahn, Per Olsson, and Jon Norberg. “Adaptive Governance of Social-Ecological Systems.” Annual Review of Environment and Resources 30 (2005): 441-73. ‹http://arjournals.annualreviews.org/doi/pdf/10.1146/annurev.energy.30.050504.144511›. Garmezy, Norman. “The Study of Competence in Children at Risk for Severe Psychopathology.” The Child in His Family: Children at Psychiatric Risk. Vol. 3. Eds. E. J. Anthony and C. Koupernick. New York: Wiley, 1974. 77-97. Grossman, Michele. “Resilient Multiculturalism? Diversifying Australian Approaches to Community Resilience and Cultural Difference”. Global Perspectives on Multiculturalism in the 21st Century. Eds. B. E. de B’beri and F. Mansouri. London: Routledge, 2014. Grossman, Michele, and Hussein Tahiri. Harnessing Resilience Capital in Culturally Diverse Communities to Counter Violent Extremism. Canberra: Australia-New Zealand Counter-Terrorism Committee, forthcoming 2014. Grossman, Michele. “Cultural Resilience and Strengthening Communities”. Safeguarding Australia Summit, Canberra. 23 Sep. 2010. ‹http://www.safeguardingaustraliasummit.org.au/uploader/resources/Michele_Grossman.pdf›. Gunnestad, Arve. “Resilience in a Cross-Cultural Perspective: How Resilience Is Generated in Different Cultures.” Journal of Intercultural Communication 11 (2006). ‹http://www.immi.se/intercultural/nr11/gunnestad.htm›. Hajek, Lisa J. “Belonging and Resilience: A Phenomenological Study.” Unpublished Master of Science thesis, U of Wisconsin-Stout. Menomonie, Wisconsin, 2003. Hunter, Cathryn. “Is Resilience Still a Useful Concept When Working with Children and Young People?” Child Family Community Australia (CFA) Paper 2. Melbourne: Australian Institute of Family Studies, 2012. Joppke, Christian. “Beyond National Models: Civic Integration Policies for Immigrants in Western Europe”. West European Politics 30.1 (2007): 1-22. Liebenberg, Linda, Michael Ungar, and Fons van de Vijver. “Validation of the Child and Youth Resilience Measure-28 (CYRM-28) among Canadian Youth.” Research on Social Work Practice 22.2 (2012): 219-226. Longstaff, Patricia H., Nicholas J. Armstrong, Keli Perrin, Whitney May Parker, and Matthew A. Hidek. “Building Resilient Communities: A Preliminary Framework for Assessment.” Homeland Security Affairs 6.3 (2010): 1-23. ‹http://www.hsaj.org/?fullarticle=6.3.6›. McGhee, Derek. The End of Multiculturalism? Terrorism, Integration and Human Rights. Maidenhead: Open U P, 2008. Mignolo, Walter. Local Histories/Global Designs: Coloniality, Subaltern Knowledges, and Border Thinking. Princeton: Princeton U P, 2000. Mohaupt, Sarah. “Review Article: Resilience and Social Exclusion.” Social Policy and Society 8 (2009): 63-71. Mouritsen, Per. “The Culture of Citizenship: A Reflection on Civic Integration in Europe.” Ed. R. Zapata-Barrero. Citizenship Policies in the Age of Diversity: Europe at the Crossroad. Barcelona: CIDOB Foundation, 2009: 23-35. Mouritsen, Per. “Political Responses to Cultural Conflict: Reflections on the Ambiguities of the Civic Turn.” Ed. P. Mouritsen and K.E. Jørgensen. Constituting Communities: Political Solutions to Cultural Conflict. London: Palgrave, 2008. 1-30. Ortiz, Fernando. Cuban Counterpoint: Tobacco and Sugar. Trans. Harriet de Onís. Intr. Fernando Coronil and Bronislaw Malinowski. Durham, NC: Duke U P, 1995 [1940]. Robins, Kevin. 
The Challenge of Transcultural Diversities: Final Report on the Transversal Study on Cultural Policy and Cultural Diversity. Culture and Cultural Heritage Department. Strasbourg: Council of Europe Publishing, 2006. Rutter, Michael. “Protective Factors in Children’s Responses to Stress and Disadvantage.” Annals of the Academy of Medicine, Singapore 8 (1979): 324-38. Stein, Mark. “The Location of Transculture.” Transcultural English Studies: Fictions, Theories, Realities. Eds. F. Schulze-Engler and S. Helff. Cross/Cultures 102/ANSEL Papers 12. Amsterdam and New York: Rodopi, 2009. 251-266. Ungar, Michael. “Resilience across Cultures.” British Journal of Social Work 38.2 (2008): 218-235. First published online 2006: 1-18. In-text references refer to the online Advance Access edition ‹http://bjsw.oxfordjournals.org/content/early/2006/10/18/bjsw.bcl343.full.pdf›. VanBreda, Adrian DuPlessis. Resilience Theory: A Literature Review. Erasmuskloof: South African Military Health Service, Military Psychological Institute, Social Work Research & Development, 2001. Weine, Stevan. “Building Resilience to Violent Extremism in Muslim Diaspora Communities in the United States.” Dynamics of Asymmetric Conflict 5.1 (2012): 60-73. Welsch, Wolfgang. “Transculturality: The Puzzling Form of Cultures Today.” Spaces of Culture: City, Nation, World. Eds. M. Featherstone and S. Lash. London: Sage, 1999. 194-213. Werner, Emmy E., and Ruth S. Smith. Vulnerable But Invincible: A Longitudinal Study of Resilient Children and Youth. New York: McGraw Hill, 1982. Notes The concept of ‘resilience capital’ I offer here is in line with one strand of contemporary theorising around resilience – that of resilience as social or socio-ecological capital – but moves beyond the idea of enhancing general social connectedness and community cohesion by emphasising the ways in which culturally diverse communities may already be robustly networked and resourceful within micro-communal settings, with new resources and knowledge both to draw on and to offer other communities or the ‘national community’ at large. In effect, ‘resilience capital’ speaks to the importance of finding ‘the communities within the community’ (Bartowiak-Théron and Crehan 11) and recognising their capacity to contribute to broad-scale resilience and recovery. I am indebted for the discussion of the literature on resilience here to Dr Peta Stephenson, Centre for Cultural Diversity and Wellbeing, Victoria University, who is working on a related project (M. Grossman and H. Tahiri, Harnessing Resilience Capital in Culturally Diverse Communities to Counter Violent Extremism, forthcoming 2014).
APA, Harvard, Vancouver, ISO, and other styles
31

Brien, Donna Lee. "Unplanned Educational Obsolescence: Is the ‘Traditional’ PhD Becoming Obsolete?" M/C Journal 12, no. 3 (July 15, 2009). http://dx.doi.org/10.5204/mcj.160.

Full text
Abstract:
Discussions of the economic theory of planned obsolescence—the purposeful embedding of redundancy into the functionality or other aspect of a product—in the 1980s and 1990s often focused on the impact of such a design strategy on manufacturers, consumers, the market, and, ultimately, profits (see, for example, Bulow; Lee and Lee; Waldman). More recently, assessments of such shortened product life cycles have included calculations of the environmental and other costs of such waste (Claudio; Kondoh; Unruh). Commonly utilised examples are consumer products such as cars, whitegoods and small appliances, fashion clothing and accessories, and, more recently, new technologies and their constituent components. This discourse has been adopted by those who configure workers as human resources, and who speak both of skills (Janßen and Backes-Gellner) and human capital itself (Chauhan and Chauhan) being made obsolete by market forces in both predictable and unplanned ways. This includes debate over whether formal education can assist in developing the skills that make their possessors less liable to become obsolete in the workforce (Dubin; Holtmann; Borghans and de Grip; Gould, Moav and Weinberg). However, aside from periodic expressions of disciplinary angst (as in questions such as whether the Liberal Arts and other disciplines are becoming obsolete), such concerns about obsolescence are rarely found in discussions regarding higher education. Yet, higher education has been subsumed into a culture of commercial service provision that is as driven by markets and profit as the industries that design and deliver consumer goods. McKelvey and Holmén characterise this as a shift “from social institution to knowledge business” in the subtitle of their 2009 volume on European universities, and the past decade has seen many higher educational institutions openly striving to be entrepreneurial. Despite some debate over the functioning of market or market-like mechanisms in higher education (see, for instance, Texeira et al.), the corporatisation of higher education has led inevitably to market segmentation in the products the sector delivers. Such market segmentation results in what are called over-differentiated products, seemingly endless variations in the same product to attempt to increase consumption and attendant sales. Milk is a commonly cited example, with supermarkets today stocking full cream, semi-skimmed, skimmed, lactose-free, soy, rice, goat, GM-free and ‘smart’ (enriched with various vitamins, minerals and proteins) varieties; and many of these available in fresh, UHT, dehydrated and/or organic versions. In the education market, this practice has resulted in a large number of often minutely differentiated, but differently named, degrees and other programs. Where there were once a small number of undergraduate degrees with discipline variety within them (including the Bachelor of Arts and Bachelor of Science awards), students can now graduate with a named qualification in a myriad of discipline and professional areas. The attempt to secure a larger percentage of the potential client pool (who are themselves often seeking to update their own skills and knowledges to avoid workforce obsolescence) has also resulted in a significant increase in the number of postgraduate coursework certificates, diplomas and other qualifications across the sector. The Masters degree has fractured from a research program into a range of coursework, coursework plus research, and research only programs. 
Such proliferation has also affected one of the foundations of the quality and integrity of the higher education system, and one of the last bastions of conventional practice, the doctoral degree. The PhD as ‘Gold-Standard’ Market Leader? The Doctor of Philosophy (PhD) is usually understood as a largely independent discipline-based research project that results in a substantial piece of reporting, the thesis, that makes a “substantial original contribution to knowledge in the form of new knowledge or significant and original adaptation, application and interpretation of existing knowledge” (AQF). As the highest level of degree conferred by most universities, the PhD is commonly understood as indicating the height of formal educational attainment, and has, until relatively recently, been above reproach and alteration. Yet, whereas universities internationally once offered a single doctorate named the PhD, many now offer a number of doctoral level degrees. In Australia, for example, candidates can also complete PhDs by Publication and by Project, as well as practice-led doctorates in, and named Doctorates of/in, Creative Arts, Creative Industries, Laws, Performance and other ‘new’ discipline areas. The Professional Doctorate, introduced into Australia in the early 1990s, has achieved such longevity that it now has its own “first generation” incarnations in (and about) disciplines such as Education, Business, Psychology and Journalism, as well as a contemporary “second generation” version which features professionally-practice-led Mode 2 knowledge production (Maxwell; also discussed in Lee, Brennan and Green 281). The uniquely Australian PhD by Project in the disciplines of architecture, design, business, engineering and education also includes coursework, and is practice and particularly workplace (or community) focused, but unlike the above, does not have to include a research element—although this is not precluded (Usher). A significant number of Australian universities also currently offer a PhD by Publication, known also as the PhD by Published Papers and PhD by Published Works. Introduced in the 1960s in the UK, the PhD by Publication there is today almost exclusively undertaken by academic staff at their own institutions, and usually consists of published work(s), a critical appraisal of that work within the research context, and an oral examination. The named degree is rare in the USA, although the practice of granting PhDs on the basis of prior publications is not unknown. In Australia, an examination of a number of universities that offer the degree reveals no consistency in terms of the framing policies except for the generic Australian Qualifications Framework accreditation statement (AQF), entry requirements and conditions of candidature, or resulting form and examination guidelines. Some Australian universities, for instance, require all externally peer-refereed publications, while others will count works that are self-published. Some require actual publications or works in press, but others count works that are still at submission stage. The UK PhD by Publication shows similar variation, with no consensus on purpose, length or format of this degree (Draper). Across Australia and the UK, some institutions accept previously published work and require little or no campus participation, while others have a significant minimum enrolment period and count only work generated during candidature (see Brien for more detail). 
Despite the plethora of named degrees at doctoral level, many academics continue to support the PhD’s claim to rigor and intellectual attainment. Most often, however, these arguments cite tradition rather than any real assessment of quality. The archaic trappings of conferral—the caps, gowns and various other instruments of distinction—emphasise a narrative in which it is often noted that doctorates were first conferred by the University of Paris in the 12th century and then elsewhere in medieval Europe. However, challenges to this account note that today’s largely independently researched thesis is a relatively recent arrival to educational history, being only introduced into Germany in the early nineteenth century (Bourner, Bowden and Laing; Park 4), the USA in a modified form in the mid-nineteenth century and the UK in 1917 (Jolley 227). The Australian PhD is even more recent, with the first only awarded in 1948 and still relatively rare until the 1970s (Nelson 3; Valadkhani and Ville). Additionally, PhDs in the USA, Canada and Denmark today almost always incorporate a significant taught coursework element (Noble). This is unlike the ‘traditional’ PhD in the UK and Australia, although the UK also currently offers a number of what are known there as ‘taught doctorates’. Somewhat confusingly, while these do incorporate coursework, they still include a significant research component (UKCGE). However, the UK is also adopting what has been identified as an American-inflected model which consists mostly, or largely, of coursework, and which is becoming known as the ‘New Route British PhD’ (Jolley 228). It could be posited that, within such a competitive market environment, which appears to be driven by both the pursuit of novelty and the desire to meet consumer demand, obsolescence necessarily threatens the very existence of the ‘traditional’ PhD. This obsolescence could be seen as especially likely as, alongside the existence of the above-mentioned ‘new’ degrees, the ‘traditional’ research-based PhD at some universities in Australia and the UK in particular is, itself, also in the process of becoming ‘professionalised’, with some (still traditionally-framed) programs nevertheless incorporating workplace-oriented frameworks and/or experiences (Jolley 229; Kroll and Brien) to meet professionally-focused objectives that it is acknowledged cannot be met by producing a research thesis alone. While this emphasis can be seen as operating at the expense of specific disciplinary knowledge (Pole 107; Ball; Laing and Brabazon 265), and criticised for that, this workplace focus has arisen, internationally, as an institutional response to requests from both governments and industry for training in generic skills in university programs at all levels (Manathunga and Wissler). At the same time, the acknowledged unpredictability of the future workplace is driving a cognate move from discipline specific knowledge to what have been described as “problem solving and knowledge management approaches” across all disciplines (Gilbert; Valadkhani and Ville 2). While few query a link between university-level learning and the needs of the workplace, or the motivating belief that the overarching role of higher education is the provision of professional training for its client-students (see Laing and Brabazon for an exception), it also should be noted that a lack of relevance is one of the contributors to dysfunction, and thence to obsolescence. The PhD as Dysfunctional Degree? 
Perhaps, however, it is not competition that threatens the traditional PhD but, rather, its own design flaws. A report in The New York Times in 2007 alerted readers to what many supervisors, candidates, and researchers internationally have recognised for some time: that the PhD may be dysfunctional (Berger). In Australia and elsewhere, attention has focused on the uneven quality of doctoral-level degrees across institutions, especially in relation to their content, rigor, entry and assessment standards, and this has not precluded questions regarding the PhD (AVCC; Carey, Webb, Brien; Neumann; Jolley; McWilliam et al., "Silly"). It should be noted that this important examination of standards has, however, been accompanied by an increase in the awarding of Honorary Doctorates. This practice ranges from the most reputable universities’ recognising individuals’ significant contributions to knowledge, culture and/or society, to wholly disreputable institutions offering such qualifications in return for payment (Starrs). While generally contested in terms of their status, Honorary Doctorates granted to sports, show business and political figures are the most controversial and include an award conferred on puppet Kermit the Frog in 1996 (Jeffries), and some leading institutions including MIT, Cornell University and the London School of Economics and Political Science are distinctive in not awarding Honorary Doctorates. However, while distracting, the Honorary Doctorate itself does not answer all the questions regarding the quality of doctoral programs in general, or the Doctor of Philosophy in particular. The PhD also has high attrition rates: 50 per cent or more across Australia, the USA and Canada (Halse 322; Lovitts and Nelson). For those who remain in the programs, lengthy completion times (known internationally as ‘time-to-degree’) are common in many countries, with averages of 10.5 years to completion in Canada, and from 8.2 to more than 13 years (depending on discipline) in the USA (Berger). The current government performance-based funding model for Australian research higher degrees focuses attention on timely completion, and there is no doubt that, under this system—where universities only receive funding for a minimum period of candidature when those candidates have completed their degrees—more candidates are completing within the required time periods (Cuthbert). Yet, such a focus has distracted from assessment of the quality and outcomes of such programs of study. A detailed survey, based on the theses lodged in Australian libraries, has estimated that at least 51,000 PhD theses were completed in Australia to 2003 (Evans et al. 7). However, little attention has been paid to the consequences of this work, that is, the effects that the generation of these theses has had on either candidates or the nation. There has been no assessment, for instance, of the impact on candidates of undertaking and completing a doctorate on such facets of their lives as their employment opportunities, professional choices and salary levels, nor any effect on their personal happiness or levels of creativity. Nor has there been any real evaluation of the effect of these degrees on GDP, rates of the commercialisation of research, the generation of intellectual property, meeting national agendas in areas such as innovation, productivity or creativity, and/or the quality of the Australian creative and performing arts. 
Government-funded and other Australian studies have, however, noted for at least a decade both that the high numbers of graduates are mismatched to a lack of market demand for doctoral qualifications outside of academia (Kemp), and that an oversupply of doctorally qualified job seekers is driving wages down in some sectors (Jones 26). Even academia is demanding more than a PhD. Within the USA, doctoral graduates of some disciplines (English is an often-cited example) are undertaking second PhDs in their quest to secure an academic position. In Australia, entry-level academic positions increasingly require a scholarly publishing history alongside a doctoral-level qualification and, in common with other quantitative exercises in the UK and in New Zealand, the current Excellence in Research for Australia research evaluation exercise values scholarly publications more than higher degree qualifications. Concluding Remarks: The PhD as Obsolete or Retro-Chic? Disciplines and fields are reacting to this situation in various ways, but the trend appears to be towards increased market segmentation. Despite these charges of PhD dysfunction, there are also dangers in the over-differentiation of higher degrees as a practice. If universities do not adequately resource the professional development and other support for supervisors and all those involved in the delivery of all these degrees, those institutions may find that they have spread the existing skills, knowledge and other institutional assets too thinly to sustain some or even any of these degrees. This could lead to the diminishing quality (and an attendant diminishing perception of the value) of all the higher degrees available in those institutions as well as the reputation of the hosting country’s entire higher education system. As works in progress, the various ‘new’ doctoral degrees can also promote a sense of working on unstable ground for both candidates and supervisors (McWilliam et al., Research Training), and higher degree examiners will necessarily be unfamiliar with expected standards. Candidates are attempting to discern the advantages and disadvantages of each form in order to choose the degree that they believe is right for them (see, for example, Robins and Kanowski), but such assessment is difficult without the benefit of hindsight. Furthermore, not every form may fit the unpredictable future aspirations of candidates or the volatile future needs of the workplace. The rate with which everything once new descends from stylish popularity through stages of unfashionableness to become outdated and, eventually, discarded is increasing. This escalation may result in the discipline-based research PhD becoming seen as archaic and, eventually, obsolete. Perhaps, alternatively, it will lead to newer and more fashionable forms of doctoral study being discarded instead. Laing and Brabazon go further to find that all doctoral level study’s inability to “contribute in a measurable and quantifiable way to social, economic or political change” problematises the very existence of all these degrees (265). Yet, we all know that some objects, styles, practices and technologies that become obsolete are later recovered and reassessed as once again interesting. They rise once again to be judged as fashionable and valuable. Perhaps even if made obsolete, this will be the fate of the PhD or other doctoral degrees? References Australian Qualifications Framework (AQF). “Doctoral Degree”. AQF Qualifications. 4 May 2009 ‹http://www.aqf.edu.au/doctor.htm›. 
Australian Vice-Chancellors’ Committee (AVCC). Universities and Their Students: Principles for the Provision of Education by Australian Universities. Canberra: AVCC, 2002. 4 May 2009 ‹http://www.universitiesaustralia.edu.au/documents/publications/Principles_final_Dec02.pdf›. Ball, L. “Preparing Graduates in Art and Design to Meet the Challenges of Working in the Creative Industries: A New Model For Work.” Art, Design and Communication in Higher Education 1.1 (2002): 10–24. Berger, Joseph. “Exploring Ways to Shorten the Ascent to a Ph.D.” Education. The New York Times, 3 Oct. 2007. 4 May 2009 ‹http://nytimes.com/2007/10/03/education/03education.html›. Borghans, Lex, and Andries de Grip. Eds. The Overeducated Worker?: The Economics of Skill Utilization. Cheltenham, UK: Edward Elgar, 2000. Bourner, T., R. Bowden and S. Laing. “Professional Doctorates in England”. Studies in Higher Education 26 (2001): 65–83. Brien, Donna Lee. “Publish or Perish?: Investigating the Doctorate by Publication in Writing”. The Creativity and Uncertainty Papers: the Refereed Proceedings of the 13th Conference of the Australian Association of Writing Programs. AAWP, 2008. 4 May 2009 ‹http://www.aawp.org.au/creativity-and-uncertainty-papers›. Bulow, Jeremy. “An Economic Theory of Planned Obsolescence.” The Quarterly Journal of Economics 101.4 (Nov. 1986): 729–50. Carey, Janene, Jen Webb, and Donna Lee Brien. “Examining Uncertainty: Australian Creative Research Higher Degrees”. The Creativity and Uncertainty Papers: the Refereed Proceedings of the 13th Conference of the Australian Association of Writing Programs. AAWP, 2008. 4 May 2009 ‹http://www.aawp.org.au/creativity-and-uncertainty-papers›. Chauhan, S. P., and Daisy Chauhan. “Human Obsolescence: A Wake-up Call to Avert a Crisis.” Global Business Review 9.1 (2008): 85–100. Claudio, Luz. “Environmental Impact of the Clothing Industry.” Environmental Health Perspectives 115.9 (Sep. 2007): A449–54. Cuthbert, Denise. “HASS PhD Completions Rates: Beyond the Doom and Gloom”. Council for the Humanities, Arts and Social Sciences, 3 March 2008. 4 May 2009 ‹http://www.chass.org.au/articles/ART20080303DC.php›. Draper, S. W. PhDs by Publication. University of Glasgow, 11 Aug. 2008. 4 May 2009 ‹http://www.psy.gla.ac.uk/~steve/resources/phd.html›. Dubin, Samuel S. “Obsolescence or Lifelong Education: A Choice for the Professional.” American Psychologist 27.5 (1972): 486–98. Evans, Terry, Peter Macauley, Margot Pearson, and Karen Tregenza. “A Brief Review of PhDs in Creative and Performing Arts in Australia”. Proceedings of the Association for Active Researchers Newcastle Mini-Conference, 2–4 October 2003. Melbourne: Australian Association for Research in Education, 2003. 4 May 2009 ‹http://www.aare.edu.au/conf03nc›. Gilbert, R. “A Framework for Evaluating the Doctoral Curriculum”. Assessment and Evaluation in Higher Education 29.3 (2004): 299–309. Gould, Eric D., Omer Moav, and Bruce A. Weinberg. “Skill Obsolescence and Wage Inequality within Education Groups.” The Economics of Skills Obsolescence. Eds. Andries de Grip, Jasper van Loo, and Ken Mayhew. Amsterdam: JAI Press, 2002. 215–34. Halse, Christine. “Is the Doctorate in Crisis?” Nagoya Journal of Higher Education 34 Apr. (2007): 321–37. Holtmann, A.G. “On-the-Job Training, Obsolescence, Options, and Retraining.” Southern Economic Journal 38.3 (1972): 414–17. Janßen, Simon, and Uschi Backes-Gellner. “Skill Obsolescence, Vintage Effects and Changing Tasks.” Applied Economics Quarterly 55.1 (2009): 83–103. Jeffries, Stuart. 
“I’m a Celebrity, Get Me an Honorary Degree”. The Guardian 6 July 2006. 4 May 2009 ‹http://www.guardian.co.uk/music/2006/jul/06/highereducation.popandrock›. Jolley, Jeremy. “Choose your Doctorate.” Journal of Clinical Nursing 16.2 (2007): 225–33. Jones, Elka. “Beyond Supply and Demand: Assessing the Ph.D. Job Market.” Occupational Outlook Quarterly Winter (2002-2003): 22–33. Kemp, D. New Knowledge, New Opportunities: A Discussion Paper on Higher Education Research and Research Training. Canberra: Australian Government Printing Service, 1999. Kondoh, Shinsuke, Keijiro Masui, Mitsuro Hattori, Nozomu Mishima, and Mitsutaka Matsumoto. “Total Performance Analysis of Product Life Cycle Considering the Deterioration and Obsolescence of Product Value.” International Journal of Product Development 6.3–4 (2008): 334–52. Kroll, Jeri, and Donna Lee Brien. “Studying for the Future: Training Creative Writing Postgraduates For Life After Degrees.” Australian Online Journal of Arts Education 2.1 July (2006): 1–13. Laing, Stuart, and Tara Brabazon. “Creative Doctorates, Creative Education? Aligning Universities with the Creative Economy.” Nebula 4.2 (June 2007): 253–67. Lee, Alison, Marie Brennan, and Bill Green. “Re-imagining Doctoral Education: Professional Doctorates and Beyond.” Higher Education Research & Development 28.3 (2009): 275–87. Lee, Ho, and Jonghwa Lee. “A Theory of Economic Obsolescence.” The Journal of Industrial Economics 46.3 (Sep. 1998): 383–401. Lovitts, B. E., and C. Nelson. “The Hidden Crisis in Graduate Education: Attrition from Ph.D. Programs.” Academe 86.6 (2000): 44–50. Manathunga, Catherine, and Rod Wissler. “Generic Skill Development for Research Higher Degree Students: An Australian Example”. International Journal of Instructional Media 30.3 (2003): 233–46. Maxwell, T. W. “From First to Second Generation Professional Doctorate.” Studies in Higher Education 28.3 (2003): 279–91. McKelvey, Maureen, and Magnus Holmén. Ed. Learning to Compete in European Universities: From Social Institution to Knowledge Business. Cheltenham, UK: Edward Elgar Publishing, 2009. McWilliam, Erica, Alan Lawson, Terry Evans, and Peter G. Taylor. “‘Silly, Soft and Otherwise Suspect’: Doctoral Education as Risky Business”. Australian Journal of Education 49.2 (2005): 214–27. 4 May 2009 ‹http://eprints.qut.edu.au/archive/00004171›. McWilliam, Erica, Peter G. Taylor, P. Thomson, B. Green, T. W. Maxwell, H. Wildy, and D. Simmons. Research Training in Doctoral Programs: What Can Be Learned for Professional Doctorates? Evaluations and Investigations Programme 02/8. Canberra: Commonwealth of Australia, 2002. Nelson, Hank. “A Doctor in Every House: The PhD Then Now and Soon”. Occasional Paper GS93/3. Canberra: The Graduate School, Australian National University, 1993. 4 May 2009 ‹http://dspace.anu.edu.au/bitstream/1885/41552/1/GS93_3.pdf›. Neumann, Ruth. The Doctoral Education Experience: Diversity and Complexity. 03/12 Evaluations and Investigations Programme. Canberra: Department of Education, Science and Training, 2003. Noble, K. A. Changing Doctoral Degrees: An International Perspective. Buckingham: Society for Research into Higher Education, 1994. Park, Chris. Redefining the Doctorate: Discussion Paper. York: The Higher Education Academy, 2007. Pole, Christopher. “Technicians and Scholars in Pursuit of the PhD: Some Reflections on Doctoral Study.” Research Papers in Education 15 (2000): 95–111. Robins, Lisa M., and Peter J. Kanowski. “PhD by Publication: A Student’s Perspective”. 
Journal of Research Practice 4.2 (2008). 4 May 2009 ‹http://jrp.icaap.org›. Sheely, Stephen. “The First Among Equals: The PhD—Academic Standard or Historical Accident?”. Advancing International Perspectives: Proceedings of the Higher Education Research and Development Society of Australasia Conference, 1997. 654-57. 4 May 2009 ‹http://www.herdsa.org.au/wp-content/uploads/conference/1997/sheely01.pdf›. Texeira, Pedro, Ben Jongbloed, David Dill, and Alberto Amaral. Eds. Markets in Higher Education: Rhetoric or Reality? Dordrecht, the Netherlands: Kluwer, 2004. UK Council for Graduate Education (UKCGE). Professional Doctorates. Dudley: UKCGE, 2002. Unruh, Gregory C. “The Biosphere Rules.” Harvard Business Review Feb. 2008: 111–17. Usher, R. “A Diversity of Doctorates: Fitness for the Knowledge Economy?”. Higher Education Research & Development 21 (2002): 143–53. Valadkhani, Abbas, and Simon Ville. “A Disciplinary Analysis of the Contribution of Academic Staff to PhD Completions in Australian Universities”. International Journal of Business & Management Education 15.1 (2007): 1–22. Waldman, Michael. “A New Perspective on Planned Obsolescence.” The Quarterly Journal of Economics 108.1 (Feb. 1993): 273–83.
APA, Harvard, Vancouver, ISO, and other styles
32

Munro, Andrew. "Discursive Resilience." M/C Journal 16, no. 5 (August 28, 2013). http://dx.doi.org/10.5204/mcj.710.

Full text
Abstract:
By most accounts, “resilience” is a pretty resilient concept. Or policy instrument. Or heuristic tool. It’s this last that really concerns us here: resilience not as a politics, but rather as a descriptive device for attempts in the humanities—particularly in rhetoric and cultural studies—to adequately describe a discursive event. Or rather, to adequately describe a class of discursive events: those that involve rhetorical resistance by victimised subjects. I’ve argued elsewhere (Munro, Descriptive; Reading) that Peircean semiosis, inflected by a rhetorical postulate of genre, equips us well to closely describe a discursive event. Here, I want briefly to suggest that resilience—“discursive” resilience, to coin a term—might usefully supplement these hypotheses, at least from time to time. To support this suggestion, I’ll signal some uses of resilience before turning briefly to a case study: a sensational Argentine homicide case, which occurred in October 2002, and came to be known as the caso Belsunce. At the time, Argentina was wracked by economic crises and political instability. The imposition of severe restrictions on cash withdrawals from bank deposits had provoked major civil unrest. Between 21 December 2001 and 2 January 2002, Argentines witnessed a succession of five presidents. “Resilient” is a term that readily comes to mind to describe many of those who endured this catastrophic period. To describe the caso Belsunce, however—to describe its constitution and import as a discursive event—we might appeal to some more disciplinary-specific understandings of resilience. Glossing Peircean semiosis as a teleological process, Short notes that “one and the same thing […] may be many different signs at once” (106). Any given sign, in other words, admits of multiple interpretants or uptakes. And so it is with resilience, which is both a keyword in academic disciplines ranging from psychology to ecology and political science, and a buzzword in several corporate domains and spheres of governmental activity. It’s particularly prevalent in the discourses of highly networked post-9/11 Anglophone societies. So what, pray tell, is resilience? To the American Psychological Association, resilience comprises “the process of adapting well in the face of adversity.” To the Resilience Solutions Group at Arizona State University, resilience is “the capacity to recover fully from acute stressors, to carry on in the face of chronic difficulties: to regain one’s balance after losing it.” To the Stockholm Resilience Centre, resilience amounts to the “capacity of a system to continually change and adapt yet remain within critical thresholds,” while to the Resilience Alliance, resilience is similarly “the capacity of a system to absorb disturbance and still retain its basic function and structure” (Walker and Salt xiii). The adjective “resilient” is thus predicated of those entities, individuals or collectivities, which exhibit “resilience”. A “resilient Australia,” for example, is one “where all Australians are better able to adapt to change, where we have reduced exposure to risks, and where we are all better able to bounce back from disaster” (Australian Government). It’s tempting here to synthesise these statements with a sense of “ordinary language” usage to derive a definitional distillate: “resilience” is a capacity attributed to an entity which recovers intact from major injury. 
This capacity is evidenced in a reaction or uptake: a “resilient” entity is one which suffers some insult or disturbance, but whose integrity is held to have been maintained, or even enhanced, by its resistive or adaptive response. A conjecturally “resilient” entity is thus one which would presumably evince resilience if faced with an unrealised aversive event. However, such abstractions ignore how definitional claims do rhetorical work. On any given occasion, how “resilience” and its cognates are construed and what they connote are a function, at least in part, of the purposes of rhetorical agents and the protocols and objects of the disciplines or genres in which these agents put these terms to work. In disciplines operating within the same form of life or sphere of activity—disciplines sharing general conventions and broad objects of inquiry, such as the capacious ecological sciences or the contiguous fields of study within the ambit of applied psychology—resilience acts, at least at times, as something of a “boundary object” (Star and Griesemer). Correlatively, across more diverse and distant fields of inquiry, resilience can work in more seemingly exclusive or contradictory ways (see Handmer and Dovers). Rhetorical aims and disciplinary objects similarly determine the originary tales we are inclined to tell. In the social sciences, the advent of resilience is often attributed to applied psychology, indebted, in turn, to epidemiology (see Seery, Holman and Cohen Silver). In environmental science, by contrast, resilience is typically taken to be a theory born in ecology (indebted to engineering and to the physical sciences, in particular to complex systems theory [see Janssen, Schoon, Ke and Börner]). Having no foundational claim to stake and, moreover, having different purposes and taking different objects, some more recent uptakes of resilience, in, for instance, securitisation studies, allow for its multidisciplinary roots (see Bourbeau; Kaufmann). But if resilience is many things to many people, a couple of commonalities in its range of translations should be drawn out. First, irrespective of its discipline or sphere of activity, talk of resilience typically entails construing an object of inquiry qua system, be that system an individual, a community of circumstance, a state, a socio-ecological unit or some differently delimited entity. This bounded system suffers some insult with no resulting loss of structural, relational, functional or other integrity. Second, resilience is usually marshalled to promote a politics. Resilience talk often consorts with discourses of meliorative action and of readily quantifiable practical effects. When the environmental sciences take the “Earth system” and the dynamics of global change as their objects of inquiry, a postulate of resilience is key to the elaboration and implementation of natural resource management policy. Proponents of socio-ecological resilience see the resilience hypothesis as enabling a demonstrably more enlightened stewardship of the biosphere (see Folke et al.; Holling; Walker and Salt). When applied psychology takes the anomalous situation of disadvantaged, at-risk individuals triumphing over trauma as its declared object of inquiry, a postulate of resilience is key to the positing and identification of personal and environmental resources or protective factors which would enable the overcoming of adversity. 
Proponents of psychosocial resilience see this concept as enabling the elaboration and implementation of interventions to foster individual and collective wellbeing (see Goldstein and Brooks; Ungar). Similarly, when policy think-tanks and government departments and agencies take the apprehension of particular threats to the social fabric as their object of inquiry, a postulate of resilience—or of a lack thereof—is critical to the elaboration and implementation of urban infrastructure, emergency planning and disaster management policies (see Drury et al.; Handmer and Dovers). However, despite its often positive connotations, resilience is well understood as a “normatively open” (Bourbeau 11) concept. This openness is apparent in some theories and practices of resilience. In limnological modelling, for example, eutrophication can result in a lake’s being in an undesirable, albeit resilient, turbid-water state (see Carpenter et al.; Walker and Meyers). But perhaps the negative connotations or indeed perverse effects of resilience are most apparent in some of its political uptakes. Certainly, governmental operationalisations of resilience are coming under increased scrutiny. Chief among the criticisms levelled at the “muddled politics” (Grove 147) of and around resilience is that its mobilisation works to constitute a particular neoliberal subjectivity (see Joseph; Neocleous). By enabling a conservative focus on individual responsibility, preparedness and adaptability, the topos of resilience contributes critically to the development of neoliberal governmentality (Joseph). In a practical sense, this deployment of resilience silences resistance: “building resilient subjects,” observe Evans and Reid (85), “involves the deliberate disabling of political habits. […] Resilient subjects are subjects that have accepted the imperative not to resist or secure themselves from the difficulties they are faced with but instead adapt to their enabling conditions.” It’s this prospect of practical acquiescence that sees resistance at times opposed to resilience (Neocleous). “Good intentions not withstanding,” notes Grove (146), “the effect of resilience initiatives is often to defend and strengthen the political economic status quo.” There’s much to commend in these analyses of how neoliberal uses of resilience constitute citizens as highly accommodating of capital and the state. But such critiques pertain to the governmental mobilisation of resilience in the contemporary “advanced liberal” settings of “various Anglo-Saxon countries” (Joseph 47). There are, of course, other instances—other events in other times and places—in which resilience indisputably sorts with resistance. Such an event is the caso Belsunce, in which a rhetorically resilient journalistic community pushed back, resisting some of the excesses of a corrupt neoliberal Argentine regime. I’ll turn briefly to this infamous case to suggest that a notion of “discursive resilience” might afford us some purchase when it comes to describing discursive events. To be clear: we’re considering resilience here not as an anticipatory politics, but rather as an analytic device to supplement the descriptive tools of Peircean semiosis and a rhetorical postulate of genre. As such, it’s more an instrument than an answer: a program, perhaps, for ongoing work. 
Although drawing on different disciplinary construals of the term, this use of resilience would be particularly indebted to the resilience thinking developed in ecology (see Carpenter et al.; Folke et al.; Holling; Walker et al.; Walker and Salt). Things would, of course, be lost in translation (see Adger; Gallopín): in taking a discursive event, rather than the dynamics of a socio-ecological system, as our object of inquiry, we’d retain some topological analogies while dispensing with, for example, Holling’s four-phase adaptive cycle (see Carpenter et al.; Folke; Gunderson; Gunderson and Holling; Walker et al.). For our purposes, it’s unlikely that descriptions of ecosystem succession need to be carried across. However, the general postulates of ecological resilience thinking may well prove useful to some descriptive projects in the humanities: that a system is a complex series of dynamic relations and functions located at any given time within a basin of attraction (or stability domain or system regime) delimited by thresholds; that it is subject to multiple attractors and follows trajectories describable over varying scales of time and space; that these trajectories are inflected by exogenous and endogenous perturbations to which the system is subject; and that the system either proves itself resilient to these perturbations in its adaptive or resistive response, or transforms, flipping from one domain (or basin) to another. Resilience is fundamentally a question of uptake or response. Hence, when examining resilience in socio-ecological systems, Gallopín notes that it’s useful to consider “not only the resilience of the system (maintenance within a basin) but also coping with impacts produced and taking advantage of opportunities” (300). Argentine society in the early-to-mid 2000s was one such socio-political system, and the caso Belsunce was both one such impact and one such opportunity. Well-connected in the world of finance, 57-year-old former stockbroker Carlos Alberto Carrascosa lived with his 50-year-old sociologist turned charity worker wife, María Marta García Belsunce, close to their relatives in the exclusive gated community of Carmel Country Club, Pilar, Provincia de Buenos Aires, Argentina. At 7:07 pm on Sunday 27 October 2002, Carrascosa called ambulance emergencies, claiming that his wife had slipped and knocked her head while drawing a bath alone that rainy Sunday afternoon. At the time of his call, it transpired, Carrascosa was at home in the presence of intimates. Blood was pooled on the bathroom floor and smeared and spattered on its walls and adjoining areas. María Marta lay lifeless, brain matter oozing from several holes in her left parietal and temporal lobes. This was the moment when Carrascosa, calm and coherent, called emergency services, but didn’t alert the police. Someone, he told the operator, had slipped in the bath and bumped her head. Carrascosa described María Marta as breathing, with a faint pulse, but somehow failed to mention the holes in her head. “A knock with a tap,” a police source told journalist Horacio Cecchi, “really doesn’t compare with the five shots to the head, the spillage of brain matter and the loss of about half a litre of blood suffered by the victim” (Cecchi and Kollmann). Rather than a bathroom tap, María Marta’s head had met with five bullets discharged from a .32-calibre revolver. In effect, reported Cecchi, María Marta had died twice. 
“While perhaps a common conceit in fiction,” notes Cecchi, “in reality, dying twice is, by definition, impossible. María Marta’s two obscure endings seem to unsettle this certainty.” Her cadaver was eventually subjected to an autopsy, and what had been a tale of clumsiness and happenstance was rewritten, reinscribed under the Argentine Penal Code. The autopsy was conducted 36 days after the burial of María Marta; nine days later, she was mentioned for the second time in the mainstream Argentine press. Her reappearance, however, was marked by a shift in rubrics: from a short death notice in La Nación, María Marta was translated to the crime section of Argentina’s dailies. Until his wife’s reappearance in the media, Carrascosa and other relatives had persisted with their “accident” hypothesis. Indeed, they’d taken a range of measures to preclude the sorts of uptakes that might ordinarily be expected to flow, under functioning liberal democratic regimes, from the discovery of a corpse with five projectiles lodged in its head. Subsequently recited as part of Carrascosa’s indictment, these measures were extensively reiterated in media coverage of the case. One of the more notorious actions involved the disposal of the sixth bullet, which was found lying under María Marta. In the course of moving the body of his half-sister, John Hurtig retrieved a small metallic object. This discovery was discussed by a number of family members, including Carrascosa, who had received ballistics training during his four years of naval instruction at the Escuela Nacional de Náutica de la Armada. They determined that the object was a lug or connector rod (“pituto”) used in library shelving: nothing, in any case, to indicate a homicide. With this determination made, the “pituto” was duly wrapped in lavatory paper and flushed down the toilet. This episode occasioned a range of outraged articles in Argentine dailies examining the topoi of privilege, power, corruption and impunity. “Distinguished persons,” notes Viau pointedly, “are so disposed […] that in the midst of all that chaos, they can locate a small, hard, steely object, wrap it in lavatory paper and flush it down the toilet, for that must be how they usually dispose of […] all that rubbish that no longer fits under the carpet.” Most often, though, critical comment was conducted by translating the reporting of the case to the genres of crime fiction. In an article entitled Someone Call Agatha Christie, Quick!, H.A.T. writes that “[s]omething smells rotten in the Carmel Country; a whole pile of rubbish seems to have been swept under its plush carpets.” An exemplary intervention in this vein was the work of journalist and novelist Vicente Battista, for whom the case (María Marta) “synthesizes the best of both traditions of crime fiction: the murder mystery and the hard-boiled novels.” “The crime,” Battista (¿Hubo Otra Mujer?) has Rodolfo observe in the first of his speculative dialogues on the case, “seems to be lifted from an Agatha Christie novel, but the criminal turns out to be a copy of the savage killers that Jim Thompson usually depicts.” Later, in an interview in which he correctly predicted the verdict, Battista expanded on these remarks: This familiar plot brings together the English murder mystery and the American hard-boiled novels. The murder mystery because it has all the elements: the crime takes place in a sealed room. In this instance, sealed not only because it occurred in a house, but also in a country, a sealed place of privilege. 
The victim was a society lady. Burglary is not the motive. In classic murder mystery novels, it was a bit unseemly that one should kill in order to rob. One killed either for a juicy sum of money, or for revenge, or out of passion. In those novels there were neither corrupt judges nor fugitive lawyers. Once Sherlock Holmes […] or Hercule Poirot […] said ‘this is the murderer’, that was that. That’s to say, once fingered in the climactic living room scene, with everyone gathered around the hearth, the perpetrator wouldn’t resist at all. And everyone would be happy because the judges were thought to be upright persons, at least in fiction. […] The violence of the crime of María Marta is part of the hard-boiled novel, and the sealed location in which it takes place, part of the murder mystery (Alarcón). I’ve argued elsewhere (Munro, Belsunce) that the translation of the case to the genres of crime fiction and their meta-analysis was a means by which a victimised Argentine public, represented by a disempowered and marginalised fourth estate, sought some rhetorical recompense. The postulate of resilience, however, might help further to describe and contextualise this notorious discursive event. A disaffected Argentine press finds itself in a stability domain with multiple attractors: on the one hand, an acquiescence to ever-increasing politico-juridical corruption, malfeasance and elitist impunity; on the other, an attractor of increasing contestation, democratisation, accountability and transparency. A discursive event like the caso Belsunce further perturbs Argentine society, threatening to displace it from its democratising trajectory. Unable to enforce due process, Argentina’s fourth estate adapts, doing what, in the circumstances, amounts to the next best thing: it denounces the proceedings by translating the case to the genres of crime fiction. In so doing, it engages a venerable reception history in which the co-constitution of true crime fiction and investigative journalism is exemplified by the figure of Rodolfo Walsh, whose denunciatory works mark a “politicisation of crime” (see Amar Sánchez Juegos; El sueño). Put otherwise, a section of Argentina’s fourth estate bounced back: by making poetics do rhetorical work, it resisted the pull towards what ecology calls an undesirable basin of attraction. Through a show of discursive resilience, these journalists worked to keep Argentine society on a democratising track. References Adger, Neil W. “Social and Ecological Resilience: Are They Related?” Progress in Human Geography 24.3 (2000): 347-64. Alarcón, Cristina. “Lo Único Real Que Tenemos Es Un Cadáver.” 2007. 12 July 2007 ‹http://www.pagina12.com.ar/diario/elpais/subnotas/87986-28144-2007-07-12.html>. Amar Sánchez, Ana María. “El Sueño Eterno de Justicia.” Textos De Y Sobre Rodolfo Walsh. Ed. Jorge Raúl Lafforgue. Buenos Aires: Alianza, 2000. 205-18. ———. Juegos De Seducción Y Traición. Literatura Y Cultura De Masas. Rosario: Beatriz Viterbo, 2000. American Psychological Association. “What Is Resilience?” 2013. 9 Aug. 2013 ‹http://www.apa.org/helpcenter/road-resilience.aspx>. Australian Government. “Critical Infrastructure Resilience Strategy.” 2009. 9 Aug. 2013 ‹http://www.tisn.gov.au/Documents/Australian+Government+s+Critical+Infrastructure+Resilience+Strategy.pdf>. Battista, Vicente. “¿Hubo Otra Mujer?” Clarín 2003. 26 Jan. 2003 ‹http://old.clarin.com/diario/2003/01/26/s-03402.htm>. ———. “María Marta: El Relato Del Crimen.” Clarín 2003. 16 Jan. 
2003 ‹http://old.clarin.com/diario/2003/01/16/o-01701.htm>. Bourbeau, Philippe. “Resiliencism: Premises and Promises in Securitisation Research.” Resilience: International Policies, Practices and Discourses 1.1 (2013): 3-17. Carpenter, Steve, et al. “From Metaphor to Measurement: Resilience of What to What?” Ecosystems 4 (2001): 765-81. Cecchi, Horacio. “Las Dos Muertes De María Marta.” Página 12 (2002). 12 Dec. 2002 ‹http://www.pagina12.com.ar/diario/sociedad/3-14095-2002-12-12.html>. Cecchi, Horacio, and Raúl Kollmann. “Un Escenario Sigilosamente Montado.” Página 12 (2002). 13 Dec. 2002 ‹http://www.pagina12.com.ar/diario/sociedad/3-14122-2002-12-13.html>. Drury, John, et al. “Representing Crowd Behaviour in Emergency Planning Guidance: ‘Mass Panic’ or Collective Resilience?” Resilience: International Policies, Practices and Discourses 1.1 (2013): 18-37. Evans, Brad, and Julian Reid. “Dangerously Exposed: The Life and Death of the Resilient Subject.” Resilience: International Policies, Practices and Discourses 1.2 (2013): 83-98. Folke, Carl. “Resilience: The Emergence of a Perspective for Social-Ecological Systems Analyses.” Global Environmental Change 16 (2006): 253-67. Folke, Carl, et al. “Resilience Thinking: Integrating Resilience, Adaptability and Transformability.” Ecology and Society 15.4 (2010). Gallopín, Gilberto C. “Linkages between Vulnerability, Resilience, and Adaptive Capacity.” Global Environmental Change 16 (2006): 293-303. Goldstein, Sam, and Robert B. Brooks, eds. Handbook of Resilience in Children. New York: Springer Science and Business Media, 2006. Grove, Kevin. “On Resilience Politics: From Transformation to Subversion.” Resilience: International Policies, Practices and Discourses 1.2 (2013): 146-53. Gunderson, Lance H. “Ecological Resilience - in Theory and Application.” Annual Review of Ecology and Systematics 31 (2000): 425-39. Gunderson, Lance H., and C. S. Holling, eds. Panarchy: Understanding Transformations in Human and Natural Systems. Washington: Island, 2002. Handmer, John W., and Stephen R. Dovers. “A Typology of Resilience: Rethinking Institutions for Sustainable Development.” Organization & Environment 9.4 (1996): 482-511. H.A.T. “Urgente: Llamen a Agatha Christie.” El País (2003). 14 Jan. 2003 ‹http://historico.elpais.com.uy/03/01/14/pinter_26140.asp>. Holling, Crawford S. “Resilience and Stability of Ecological Systems.” Annual Review of Ecology and Systematics 4 (1973): 1-23. Janssen, Marco A., et al. “Scholarly Networks on Resilience, Vulnerability and Adaptation within the Human Dimensions of Global Environmental Change.” Global Environmental Change 16 (2006): 240-52. Joseph, Jonathan. “Resilience as Embedded Neoliberalism: A Governmentality Approach.” Resilience: International Policies, Practices and Discourses 1.1 (2013): 38-52. Kaufmann, Mareile. “Emergent Self-Organisation in Emergencies: Resilience Rationales in Interconnected Societies.” Resilience: International Policies, Practices and Discourses 1.1 (2013): 53-68. Munro, Andrew. “The Belsunce Case Judgement, Uptake, Genre.” Cultural Studies Review 13.2 (2007): 190-204. ———. “The Descriptive Purchase of Performativity.” Culture, Theory and Critique 53.1 (2012). ———. “Reading Austin Rhetorically.” Philosophy and Rhetoric 46.1 (2013): 22-43. Neocleous, Mark. “Resisting Resilience.” Radical Philosophy 178 March/April (2013): 2-7. Resilience Solutions Group, Arizona State U. “What Is Resilience?” 2013. 9 Aug. 2013 ‹http://resilience.asu.edu/what-is-resilience>. Seery, Mark D., E. 
Alison Holman, and Roxane Cohen Silver. “Whatever Does Not Kill Us: Cumulative Lifetime Adversity, Vulnerability, and Resilience.” Journal of Personality and Social Psychology 99.6 (2010): 1025-41. Short, Thomas L. “What They Said in Amsterdam: Peirce's Semiotic Today.” Semiotica 60.1-2 (1986): 103-28. Star, Susan Leigh, and James R. Griesemer. “Institutional Ecology, ‘Translations’ and Boundary Objects: Amateurs and Professionals in Berkeley's Museum of Vertebrate Zoology, 1907-39.” Social Studies of Science 19.3 (1989): 387-420. Stockholm Resilience Centre. “What Is Resilience?” 2007. 9 Aug. 2013 ‹http://www.stockholmresilience.org/21/research/what-is-resilience.html>. Ungar, Michael, ed. Handbook for Working with Children and Youth: Pathways to Resilience across Cultures and Contexts. Thousand Oaks: Sage, 2005. Viau, Susana. “Carmel.” Página 12 (2002). 27 Dec. 2002 ‹http://www.pagina12.com.ar/diario/contratapa/13-14651-2002-12-27.html>. Walker, Brian, et al. “Resilience, Adaptability and Transformability in Social-Ecological Systems.” Ecology and Society 9.2 (2004). Walker, Brian, and Jacqueline A. Meyers. “Thresholds in Ecological and Social-Ecological Systems: A Developing Database.” Ecology and Society 9.2 (2004). Walker, Brian, and David Salt. Resilience Thinking: Sustaining Ecosystems and People in a Changing World. Washington: Island, 2006.
APA, Harvard, Vancouver, ISO, and other styles
33

Harrison, Paul. "Remaining Still." M/C Journal 12, no. 1 (February 25, 2009). http://dx.doi.org/10.5204/mcj.135.

Full text
Abstract:
A political minimalism? That would obviously go against the grain of our current political ideology → in fact, we are in an era of political maximalisation (Roland Barthes 200, arrow in original). Barthes’ comment is found in the ‘Annex’ to his 1978 lecture course The Neutral. Despite the three-decade difference I don’t think things have changed that much, certainly not insofar as academic debate about the cultural and social is concerned. At conferences I regularly hear the demand that the speaker or speakers account for the ‘political intent’, ‘worth’ or ‘utility’ of their work, or observe how speakers attempt to pre-empt and disarm such calls through judicious phrasing and citing. Following his diagnosis Barthes (201-206) proceeds to write under the title ‘To Give Leave’. Here he notes the incessant demand placed upon us, as citizens, as consumers, as representative cultural subjects and as biopolitical entities and, in this context, as academics to have and to communicate our allegiances, views and opinions. Echoing the acts, (or rather the ‘non-acts’), of Melville’s Bartleby, Barthes describes the scandalous nature of suspending the obligation of holding views; the apparent immorality of suspending the obligation of being interested, engaged, opinionated, committed – even if one only ever suspends provisionally, momentarily even. For the length of a five-thousand-word essay perhaps. In this short, unfortunately telegraphic and quite speculative essay I want to pause to consider a few gestures or figures of ‘suspension’, ‘decline’ and ‘remaining aside’. What follows is in three parts. First, a comment on the nature of the ‘demand to communicate’ identified by Barthes and its links to longer running moral and practical imperatives within Western understandings of the subject, the social and the political. Second, the most substantial section but still an all too brief account of the apparent ‘passivity’ of the narrator of Imre Kertész’s novel Fatelessness and the ways in which the novel may be read as a reflection on the nature of agency and determination. Third, a very brief conclusion, which poses the question directly: what politics, or what apprehension of politics, could a reflection on stillness and its ‘political minimalism’ offer? 1. For Barthes, (in 1978), one of the factors defining the contemporary intellectual scene was the way in which “politics invades all phenomena, economic, cultural, ethical” coupled with the “radicalization” of “political behaviors” (200), perhaps most notably in the arrogance of political discourse as it assumes the place of a master discourse. Writing in 1991 Bill Readings identified a similar phenomenon. For Readings the category of the political and politically inspired critique were operating by encircling their objects within a presupposed “universal language of political significance into which one might translate everything according to its effectivity”, an approach which has the effect of always making “the political […] the bottom line, the last instance where meaning can be definitively asserted” (quoted in Clark 3) or, we may add, realized. There is, of course, much that could be said here, not least concerning the significant differences in context, (between, for example, the various forms of revolutionary Marxism, Communism and Maoism which seem to preoccupy Barthes and the emancipatory identity and cultural politics which swept through literature departments in the US and beyond in the last two decades of the twentieth century). 
However, it is also possible to suggest that a general grammar and, moreover, a general acceptance of a telos of the political persists. Barthes' (204-206) account of ‘political maximalisation’ is accompanied by a diagnosis of its productivist virility, (be it, in 1978, on the part of the increasingly reduced revolutionary left or the burgeoning neo-liberal right). The antithesis, or, rather, the outside of such an arrangement or frame would not be another political program but rather a certain stammering, a lassitude or dilatoriness. A flaccidness even; “a devirilized image” wherein from the point of view of the (political) actor or critic, “you are demoted to the contemptible mass of the undecided of those who don’t know who to vote for: old, lost ladies whom they brutalize: vote however you want, but vote” (Barthes 204). Hence Barthes is not suggesting a counter-move, a radical refusal, a ‘No’ shouted back to the information-saturated market society. What is truly scandalous, he suggests, is not opposition or refusal but the ‘non-reply’. What is truly scandalous, roguish even, is the decline or deferral and so the provisional suspension of the choice (and the blackmail) of the ‘yes’ or ‘no’, the ‘this’ or the ‘that’, the ‘with us’ or ‘against us’. In Literature and Evil Georges Bataille concludes his essay on Kafka with a comment on such a decline. According to Bataille, the reason why Kafka remains an ambivalent writer for critics, (and especially for those who would seek to enrol his work to political ends), lies precisely in his constant withdrawal; “There was nothing he [Kafka] could have asserted, or in the name of which he could have spoken. What he was, which was nothing, only existed to the extent in which effective activity condemned him” (167). ‘Effective activity’ refers, contextually, to a certain form of Communism but more broadly to the rationalization or systematization intrinsic to any political program, political programs (or ideologies) as such, be they communist, liberal or libertarian. At least insofar as, as implied above, the political is taken to coincide with a certain metaphysics and morality of action and the consequent linking of freedom to work, (a factor common to communist, fascist and liberal political programs), and so to the labour of the progressive self-realization and achievement of the self, the autos or ipse (see Derrida 6-18). Be it via, for example, Marx’s account of humans’ intrinsic ‘capacity for work’ (Arbeitskraft), Heidegger’s account of necessary existential (and ultimately communal) struggle (Kampf), or Weber’s diagnoses of the (Protestant/bourgeois) liberal project to realize human potentiality (see also Agamben Man without Content; François 1-64). Hence what is ‘evil’ in Kafka is not any particular deed but the deferral of deeds; his ambivalence or immorality in the eyes of certain critics being due to the question his writing poses to “the ultimate authority of action” (Bataille 153) and so to the space beyond action onto which it opens. What could this space of ‘worklessness’ or ‘unwork’ look like? This non-virile, anti-heroic space? This would not be a space of ‘inaction’, (a term still too dependent, albeit negatively, on action), but of ‘non-action’; of ‘non-productive’ or non-disclosive action. That is to say, and as a first attempt at definition, ‘action’ or ‘praxis’, if we can still call it that, which does not generate or bring to light any specific positive content. 
As a way to highlight the difficulties and pitfalls, (at least with certain traditions), which stand in the way of thinking such a space, we may consider Giorgio Agamben’s comments on the widespread coincidence of a metaphysics of action with the determination of both the subject, its teleology and its orientation in the world: According to current opinion, all of man’s [sic] doing – that of the artist and the craftsman as well as that of the workman and the politician – is praxis – manifestation of a will that produces a concrete effect. When we say that man has a productive status on earth, we mean, that the status of his dwelling on the earth is a practical one […] This productive doing now everywhere determines the status of man on earth – man understood as the living being (animal) that works (laborans), and, in work, produces himself (Man without Content 68; 70-71 original emphasis). Beyond or before practical being then, that is to say before and beyond the determination of the subject as essentially or intrinsically active and engaged, another space, another dwelling. Maybe nocturnal, certainly one with a different light to that of the day; one not gathered in and by the telos of the ipse or the turning of the autos, an interruption of labour, an unravelling. Remaining still, unravelling together (see Harrison In the absence). 2. Kertész’s novel Sorstalanság was first published in his native Hungary in 1975. It has been translated into English twice, in 1992 as Fateless and in 2004 as Fatelessness. Fatelessness opens in Budapest on the day before György Köves’ – the novel’s fourteen-year-old narrator – father has to report for ‘labour service’. It goes on to recount Köves’ own detention and deportation and the year spent in the camps of Auschwitz-Birkenau, Buchenwald and Zeitz. During this period Köves’ health declines, gradually at first and then rapidly to a moment of near death. He survives and the novel closes with his return to his home town. Köves is, as Kertész has put it in various interviews and as is made clear in the novel, a ‘non-Jewish Jew’; a non-practicing and non-believing Hungarian Jew from a largely assimilated family who neither reads nor speaks Hebrew or Yiddish. While Kertész has insisted that the novel is precisely that, a novel, a work of literature and not an autobiography, we should note that Kertész was himself imprisoned in Buchenwald and Zeitz when fourteen. Not without reservations but for the sake of brevity I shall focus on only one theme in the novel: determination and agency, or what Kertész calls ‘determinacy’. Writing in his journal Galley Boat-Log (Gályanapló) in May 1965 Kertész suggests ‘Novel of Fatelessness’ as a possible title for his work and then reflects on what he means by ‘fate’; the entry is worth quoting at length. The external determinacy, the stigma which constrains our life in a situation, an absurdity, in the given totalitarianism, thwarts us; thus, when we live out the determinacy which is doled out to us as a reality, instead of the necessity which stems from our own (relative) freedom – that is what I call fatelessness. What is essential is that our determinacy should always be in conflict with our natural views and inclinations; that is how fatelessness manifests itself in a chemically pure state. 
The two possible modes of protection: we transform into our determinacy (Kafka’s centipede), voluntarily so to say, and in that way attempt to assimilate our determinacy to our fate; or else we rebel against it, and so fall victim to our determinacy. Neither of these is a true solution, for in both cases we are obliged to perceive our determinacy […] as reality, whilst the determining force, that absurd power, in a way triumphs over us: it gives us a name and turns us into an object, even though we were born for other things. The dilemma of my ‘Muslim’ [Köves]: How can he construct a fate out of his own determinacy? (Galley Boat-Log 98 original emphasis). The dilemma of determinacy, then: how can Köves, who is both determined by and superfluous to the Nazi regime, to wider Hungarian society, to his neighbours and to his family, gain some kind of control over his existence? Throughout Fatelessness people prove repeatedly unable to control their destinies, be it Köves himself, his father, his stepmother, his uncles, his friends from the oil refinery, or even Bandi Citrom, Köves’ mentor in the camps. The case of the ‘Expert’ provides a telescoped example. First appearing when Köves and his friends are arrested, the ‘Expert’ is an imposing figure, well dressed, fluent in German and the director of a factory involved in the war effort (Fatelessness 50). Later at the brickworks, where the Jews who have been rounded up are being held prior to deportation, he appears more dishevelled and slightly less confident. Still, he takes the ‘audacious’ step of addressing a German officer directly (and receives some placatory ‘advice’ as his reward) (68-69). By the time the group arrives at the camp, Köves has difficulty recognising him and, without a word of protest, the ‘Expert’ does not pass the initial selection (88). Köves displays no such initiative with regard to his situation. He is reactive or passive, never active. For Köves events unfold as a series of situations and circumstances which are, he tells himself, essentially reasonable and to which he has to adapt and conform so that he may get on. Nothing more than “given situations with the new givens inherent in them” (259), as he explains near the end of the novel. As Köves' identity papers testify, his life and its continuation are the effect of arbitrary sets of circumstances which he is compelled to live through; “I am not alive on my own account but benefiting the war effort in the manufacturing industry” (29). In his Nobel lecture Kertész described Köves' situation: the hero of my novel does not live his own time in the concentration camps, for neither his time nor his language, not even his own person, is really his. He doesn’t remember; he exists. So he has to languish, poor boy, in the dreary trap of linearity, and cannot shake off the painful details. Instead of a spectacular series of great and tragic moments, he has to live through everything, which is oppressive and offers little variety, like life itself (Heureka! no pagination). Without any wilful or effective action on the part of the narrator and with only ‘the dreary trap of linearity’ where one would expect drama, plot, rationalization or stylization, Fatelessness can be read as an arbitrarily punctuated series of waitings. Köves waiting for his father to leave, waiting in the customs shed, waiting at the brick works, waiting in train carriages, waiting on the ramp, waiting at roll call, waiting in the infirmary. 
Here is the first period of waiting described in the book; it is the day before his father’s departure and he is waiting for his father and stepmother as they go through the accounts at the family shop: I tried to be patient for a bit. Striving to think of Father, and more specifically the fact that he would be going tomorrow and, quite probably, I would not see him for a long time after that; but after a while I grew weary with that notion and then seeing as there was nothing else I could do for my father, I began to be bored. Even having to sit around became a drag, so simply for the sake of a change I stood up to take a drink of water from the tap. They said nothing. Later on, I also made my way to the back, between the planks, in order to pee. On returning I washed my hands at the rusty, tiled sink, then unpacked my morning snack from my school satchel, ate that, and finally took another drink from the tap. They still said nothing. I sat back in my place. After that, I got terribly bored for another absolute age (Fatelessness 9). It is interesting to consider exactly how this passage presages those that will come. Certainly this scene is an effect of the political context: his father and stepmother have to go through the books because of the summons to labour service and because of the racial laws on who may own and profit from a business. However, the specifically familial setting should not be overlooked, particularly when read alongside Kertész’s other novels where, as Madeleine Gustafsson writes, Communist dictatorship is “portrayed almost as an uninterrupted continuation of life in the camp – which in turn [...] is depicted as a continuation of the patriarchal dictatorship of a joyless childhood” (no pagination, see, for example, Kertész Kaddish). Time to turn back to our question: does Fatelessness provide an answer to the ‘dilemma of determinacy’? We should think carefully before answering. As Julia Karolle suggests, the composition of the novel and our search for a logic within it reveal the abuses that reason must endure in order to create any story or history about the Holocaust […]. Ultimately Kertész challenges the reader not to make up for the lack of logic in Fatelessness, but rather to consider the nature of its absence (92 original emphasis). Still, with this point in mind, (and despite what has been said above), the novel does contain a scene in which Köves appears to affirm his existence. In many respects the scene is the culmination of the novel. The camps have been liberated and Köves has returned to Budapest. Finding his father and stepmother’s apartment occupied by strangers, he calls on his Aunt and Uncle Fleischmann and Uncle Steiner. The discussion which follows would repay a slower reading; however, again for the sake of brevity, I shall focus on only a few short excerpts. Köves suggests that everyone took their ‘steps’ towards the events which have unfolded and that prediction and retrospection are false perspectives which give the illusion of order and inevitability whereas, in reality, “everything becomes clear only gradually, sequentially over time, step-by-step” (Fatelessness 249): “They [his Uncles] too had taken their own steps. They too […] had said farewell to my father as if we had already buried him, and even later had squabbled about whether I should take the train or the suburban bus to Auschwitz” (260). Fleischmann and Steiner react angrily, claiming that such an understanding makes the ‘victims’ the ‘guilty ones’. 
Köves responds by saying that they do not understand him and asks that they see that: It was impossible, they must try to understand, impossible to take everything away from me, impossible for me to be neither winner nor loser, for me not to be right and not to be mistaken that I was neither the cause nor effect of anything; they should try to see, I almost pleaded, that I could not swallow that idiotic bitterness, that I should merely be innocent (260-261). Karolle (93-94) suggests that Köves' discussion with his uncles marks the moment where he accepts and affirms his existence and, from this point on, begins to take control of, and responsibility for, his existence. Hence for Karolle the end of the novel depicts an ‘authentic’ moment of self-affirmation as Köves steps forward and refuses to participate in “the factual historical narrative of Auschwitz, to forget what he knows, and to be unequivocally categorized as a victim of history” (95). In distinction to Karolle, Adrienne Kertzer argues that Köves' moment of self-affirmation is, in fact, one of self-deception. Rather than acknowledging that it was “inexplicable luck” and a “series of random acts” (Kertzer 122) which saved his life or that his near death was due to an accident of birth, Köves asserts his personal freedom. Hence – and following István Deák – Kertzer suggests that we should read Fatelessness as a satire, ‘a modern Candide’. A satire on the hope of finding meaning, be it personal or metaphysical, in such experiences and events, the closing scenes of the novel being an ironic reflection on the “desperate desire to see […] life as meaningful” (Kertzer 122). So, while Köves convinces himself of his logic, his uncles say to each other “‘Leave him be! Can’t you see he only wants to talk? Let him talk! Leave him be!’ And talk I did, albeit possibly to no avail and even a little incoherently” (Fatelessness 259). Which are we to choose then? The affirmation of agency (with Karolle) or the diagnosis of determination (with Kertzer)? Karolle and Kertzer give insightful analyses, (and ones which are certainly not limited to the passages quoted above); however, it seems to me that they move too quickly to resolve the ‘dilemma’ presented by Köves, if not by Fatelessness as a whole. Still, we have a little time before having to name and decide Köves’ fate. Kertész’s use of the word ‘hero’ to describe Köves above – ‘the hero of my novel…’ – is, perhaps, more than a little ironic. As Kertész asks (in 1966), how can there be a hero, how can one be heroic, when one is one’s ‘determinacies’? What sense does it make to speak of heroic actions if “man [sic] is no more than his situation”? (Galley Boat-Log 99). Köves’ time, his language, his identity, none are his. There is no place, no hidden reservoir of freedom, from which he may set in motion any efficacious action. All resources have already been corrupted. From Kertész’s journal (in 1975): “The masters of thought and ideologies have ruined my thought processes” (Galley Boat-Log 104). As Lawrence Langer has argued, the grammar of heroics, along with the linked terms ‘virtue’, ‘dignity’, ‘resistance’, ‘survival’ and ‘liberation’, (and the wider narrative and moral economies which these terms indicate and activate), does not survive the events being described. Here the ‘dilemma of determinacy’ becomes the dilemma of how to think and value the human outside or after such a grammar. 
How to think and value the human beyond a grammar of action and so beyond, as Lars Iyer puts it, “the equation of work and freedom that characterizes the great discourses of political modernity” (155). If this is possible. If such a grammar and equation isn’t too all pervasive, if something of the human still remains outside their economy. It may well be that our ability to read Fatelessness depends in large part on what we are prepared to forsake (see Langer 195). How to think the subject and a politics in contretemps, beyond or after the choice between determination or autonomy, passive or active, inaction or action, immoral or virtuous – if only for a moment? Kertész wonders, (in 1966), “perhaps there is something to be salvaged all the same, a tiny foolishness, something ultimately comic and frail that may be a sign of the will to live and still awakens sympathy” (Galley Boat-Log 99). Something, perhaps, which remains to be salvaged from the grammar of humanism, something that would not be reducible to context, to ‘determinacies’, and that, at the same time, does not add up to a (resurrected) agent. ‘A tiny foolishness, something ultimately comic and frail’. The press release announcing that Kertész had been awarded the Nobel prize for literature states that “For Kertész the spiritual dimension of man lies in his inability to adapt to life” (The Swedish Academy no pagination). Despite the difficulties presented by the somewhat over-determined term ‘spiritual’, this line strikes me as remarkably perspicuous. Like Melville’s Bartleby and Bataille’s Kafka before him, Kertész’s Köves’ existence, insofar as he exists, is made up by his non-action. That is to say, his existence is defined not by his actions or his inaction, (both of which are purely reactive and functional), but rather by his irreducibility to either. As commentators and critics have remarked, (and as the quotes given from the text above hopefully illustrate), Köves has an oddly formal and neutral ‘voice’. Köves’ blank, frequently equivocal tone may be read as a sign of his immaturity, his lack of understanding and his naivety. However, I would suggest that before such factors, what characterizes Köves’ mode of address is its reticence to assert or disclose. Köves speaks, he speaks endlessly, but he says nothing or almost nothing - ‘to no avail and even a little incoherently’. Hence where Karolle seeks to recover an ‘intoned self-consciousness’ and Kertzer the repressed determining context, we may find Köves' address. Where Karolle’s and Kertzer’s approaches seek in some way to repair Köves’ words, to supplement them with either an agency to-come or an awareness of a context and, in doing so, pull his words fully into the light, Köves, it seems to me, remains elusive. His existence, insofar as we may speak of it, lies in his ‘inability to adapt to life’. His reserves are not composed of hidden or recoverable sources of agency but lie in his equivocality, in the way he takes leave of and remains aside from the very terms of the dilemma. It is as if with no resources of his own, he has an echo existence. As if still remaining itself were a tiny foolishness, something ultimately comic and frail. 3. Is this it? Is this what we are to be left with in a ‘political minimalism’? It would seem more resignation or failure, turning away or quietism, the conceit of a beautiful soul, than any type of recognisable politics. 
On one level this is correct; however, any such suspension or withdrawal, this moment of stillness where we are, is only ever a moment. However, it is a moment which indicates a certain irreducibility and as such is, I believe, of great significance. Great significance, (or better ‘signifyingness’), even though – and precisely because – it is in itself without value. Being outside efficacy, labour or production, being outside economisation as such, it resides only in its inability to be integrated. What purpose does it serve? None. Or, perhaps, none other than demonstrating the irreducibility of a life, of a singular existence, to any discourse, narrative, identity or ideology, insofar as such structures, in their attempt to comprehend (or apprehend) the existent and put it to use, always and violently fall short. As Theodor Adorno wrote: It is this passing-on and being unable to linger, this tacit assent to the primacy of the general over the particular, which constitutes not only the deception of idealism in hypostasizing concepts, but also its inhumanity, that has no sooner grasped the particular than it reduces it to a thought-station, and finally comes all too quickly to terms with suffering and death (74 emphasis added). This moment of stillness then, of declining and remaining aside, represents, for me, the anarchical and all but silent condition of possibility for all political strategy as such (see Harrison, Corporeal Remains). A condition of possibility which all political strategy carries within itself, more or less well, more or less consciously, as a memory of the finite and corporeal nature of existence. A memory which may always and eventually come to protest against the strategy itself. Strategy itself as strategy; as command, as a calculated and calculating order. And so, and we should be clear about this, such a remaining still is a demonstration. A demonstration not unlike, for example, that of the general anonymous population in José Saramago’s remarkable novel Seeing, who ‘act’ more forcefully through non-action than through any ends-directed action. A demonstration of the kind which Agamben writes about after those in Tiananmen Square in 1989: The novelty of the coming politics is that it will no longer be the struggle for control of the state, but a struggle between the State and the non-State (humanity) […] [who] cannot form a societas because they do not possess any identity to vindicate or bond of belonging for which to seek recognition (Coming Community 85-67; original emphasis). A demonstration like that which sounds through Köves when his health fails in the camps and he finds himself being wheeled on a handcart taken for dead: “a snatch of speech that I was barely able to make out came to my attention, and in that hoarse whispering I recognized even less readily the voice that had once – I could not help recollecting – been so strident: ‘I p … pro … test,’ it muttered” (Fatelessness 187 ellipses in original). The inmate pushing the cart stops and pulls him up by the shoulders, asking with astonishment “Was? Du willst noch leben? [What? You still want to live?] […] and right then I found it odd, since it could not have been warranted and, on the whole, was fairly irrational (187). Acknowledgments: My sincere thanks to the editors of this special issue, David Bissell and Gillian Fuller, for their interest, encouragement and patience. Thanks also to Sadie, especially for her comments on the final section. References: Adorno, Theodor. 
Minima Moralia: Reflections on a Damaged Life. London: Verso, 1974. Agamben, Giorgio. The Coming Community. Minneapolis: U of Minnesota P, 1990. ———. The Man without Content. Stanford: Stanford U P, 1999. Barthes, Roland. The Neutral. New York: Columbia U P, 2005. Bataille, Georges. Literature and Evil. London: Marion Boyars, 1985. Clark, Timothy. The Poetics of Singularity: The Counter-Culturalist Turn in Heidegger, Derrida, Blanchot and the Late Gadamer. Edinburgh: Edinburgh U P, 2005. Deák, István. "Stranger in Hell." New York Review of Books 23 Sep. 2003: 65-68. Derrida, Jacques. Rogues: Two Essays on Reason. Stanford: Stanford U P, 2005. François, Anne-Lise. Open Secrets: The Literature of Uncounted Experience. Stanford: Stanford U P, 2008. Gustafsson, Madeleine. "Imre Kertész: A Medium for the Spirit of Auschwitz." 2003. 6 Mar. 2009 ‹http://nobelprize.org/nobel_prizes/literature/articles/gustafsson/index.html›. Harrison, Paul. "Corporeal Remains: Vulnerability, Proximity, and Living On after the End of the World." Environment and Planning A 40 (2008): 423-445. ———. "In the Absence of Practice." Environment and Planning D: Society and Space forthcoming. Heidegger, Martin. Introduction to Metaphysics. London: Yale U P, 2000. Iyer, Lars. Blanchot's Communism: Art, Philosophy and the Political. Basingstoke: Palgrave Macmillan, 2004. Karolle, Julia. "Imre Kertész Fatelessness as Historical Fiction." Imre Kertész and Holocaust Literature. Ed. Louise O. Vasvári and Steven Tötösy de Zepetnek. West Lafayette: Purdue U P, 2005. 89-96. Kertész, Imre. "Heureka!" Nobel lecture. 2002. 6 Mar. 2009 ‹http://nobelprize.org/nobel_prizes/literature/laureates/2002/kertesz-lecture-e.html›. ———. Fatelessness. London: Vintage, 2004. ———. Kaddish for an Unborn Child. London: Vintage International, 2004. ———. "Galley Boat-Log (Gályanapló): Excerpts." Imre Kertész and Holocaust Literature. Ed. Louise O. Vasvári and Steven Tötösy de Zepetnek. West Lafayette: Purdue U P, 2005. 97-110. Kertzer, Adrienne. "Reading Imre Kertesz in English." Imre Kertész and Holocaust Literature. Ed. Louise O. Vasvári and Steven Tötösy de Zepetnek. West Lafayette: Purdue U P, 2005. 111-124. Langer, Lawrence. Holocaust Testimonies: The Ruins of Memory. London: Yale U P, 1991. Melville, Herman. Bartleby the Scrivener: A Story of Wall Street. New Jersey: Melville House, 2004. Marx, Karl. Capital Volume 1. London: Penguin Books, 1976. Readings, Bill. "The Deconstruction of Politics." Deconstruction: A Reader. Ed. Martin McQuillan. Edinburgh: Edinburgh U P, 2000. 388-396. Saramago, José. Seeing. London: Vintage, 2007. The Swedish Academy. "The Nobel Prize in Literature 2002: Imre Kertész." 2002. 6 Mar. 2009 ‹http://nobelprize.org/nobel_prizes/literature/laureates/2002/press.html›. Weber, Max. The Protestant Ethic and the Spirit of Capitalism. London: Routledge, 1992.
APA, Harvard, Vancouver, ISO, and other styles
34

Peaty, Gwyneth. "Power in Silence: Captions, Deafness, and the Final Girl." M/C Journal 20, no. 3 (June 21, 2017). http://dx.doi.org/10.5204/mcj.1268.

Full text
Abstract:
Introduction: The horror film Hush (2016) has attracted attention since its release due to the uniqueness of its central character—a deaf–mute author who lives in a world of silence. Maddie Young (Kate Siegel) moves into a remote cabin in the woods to recover from a breakup and finish her new novel. Aside from a cat, she is alone in the house, only engaging with loved ones via online messaging or video chats during which she uses American Sign Language (ASL). Maddie cannot hear nor speak, so writing is her primary mode of creative expression, and a key source of information for the audience. This article explores both the presence and absence of text in Hush, examining how textual “captions” of various kinds are both provided and withheld at key moments. As an author, Maddie battles the limits of written language as she struggles with writer’s block. As a person, she fights the limits of silence and isolation as a brutal killer invades her retreat. Accordingly, this article examines how the interplay between silence, text, and sound invites viewers to identify with the heroine’s experience and ultimate triumph. Hush is best described as a slasher—a horror film in which a single (usually male) killer stalks and kills a series of victims with relentless determination (Clover, Men, Women). Slashers are about close, visceral killing—blood and the hard stab of the knife. With her big brown eyes and gentle presence, quiet, deaf Maddie is clearly framed as a lamb to slaughter in the opening scenes. Indeed, throughout Hush, Maddie’s lack of hearing is leveraged to increase suspense and horror. The classic pantomime cry of “He’s behind you!” is taken to dark extremes as the audience watches a nameless man (John Gallagher Jr.) stalk the writer in her isolated house. She is unable to hear him enter the building, unable to sense him looming behind her. Neither does she hear him killing her friend outside on the porch, banging her body loudly against the French doors. And yet, despite her vulnerability, she rises to the challenge. Fighting back against her attacker using a variety of multisensory strategies, Maddie assumes the role of the “Final Girl” in this narrative. As Carol Clover has explained, the Final Girl is a key trope of slasher films, forming part of their essential structure. While others in the film are killed, “she alone looks death in the face; but she alone also finds the strength either to stay the killer long enough to be rescued (ending A) or to kill him herself (ending B)” (Clover, Her Body, Himself). However, reviews and discussions of Hush typically frame Maddie as a Final Girl with a difference. Adding disability into the equation is seen as “revolutionising” the trope (Sheppard) and “updating the Final Girl theory” for a new age (Laird). Indeed, the film presents its Final Girl as simultaneously deaf and powerful—a twist that potentially challenges the dynamics of the slasher and representations of disability more generally. My Weakness, My Strength: The opening sequence of Hush introduces Maddie’s deafness through the use of sound, silence, and text. Following an establishing shot sweeping over the dark forest and down to her solitary cottage, the film opens to warm domesticity. Close-ups of onion, eggs, and garlic being prepared are accompanied by clear, crisp sounds of crackling, bubbling, slicing, and frying. The camera zooms out to focus on Maddie, busy at her culinary tasks. All noises begin to fade. 
The camera focuses on Maddie’s ear as audio is eliminated, replaced by silence. As she continues to cook, the audience experiences her world—a world devoid of sound. These initial moments also highlight the importance of digital communication technologies. Maddie moves smoothly between devices, switching from laptop computer to iPhone while sharing instant messages with a friend. Close-ups of these on-screen conversations provide viewers with additional narrative information, operating as an alternate form of captioning from within the diegesis. Snippets of text from other sources are likewise shown in passing, such as the author’s blurb on the jacket of her previous novel. The camera lingers on this book, allowing viewers to read that Maddie suffered hearing loss and vocal paralysis after contracting bacterial meningitis at 13 years old. Traditional closed captioning or subtitles are thus avoided in favour of less intrusive forms of expositional text that are integrated within the plot. While hearing characters, such as her neighbour and sister, use SimCom (simultaneous communication or sign supported speech) to communicate with her, Maddie signs in silence. Because the filmmakers have elected not to provide captions for her signs in these moments, a—typically non-ASL speaking—hearing audience will inevitably experience disruptions in comprehension and Maddie’s conversations can therefore only be partially understood. This allows for an interesting role reversal for viewers. As Katherine A. Jankowski (32) points out, deaf and hard of hearing audiences have long expressed dissatisfaction with accessing the spoken word on television and film due to a lack of closed captioning. Despite the increasing technological ease of captioning digital media in the 21st century, this barrier to accessibility continues to be an ongoing issue (Ellis and Kent). The hearing community do not share this frustrating background—television programs that include ASL are captioned to ensure hearing viewers can follow the story (see for example Beth Haller’s article on Switched at Birth in this special issue). Hush therefore inverts this dynamic by presenting ASL without captions. Whereas silence is used to draw hearing viewers into Maddie’s experience, her periodic use of ASL pushes them out again. This creates a push–pull dynamic, whereby the hearing audience identify with Maddie and empathise with the losses associated with being deaf and mute, but also realise that, as a result, she has developed additional skills that are beyond their ken. It is worth noting at this point that Maddie is not the first Final Girl with a disability. In the 1967 thriller Wait until Dark, for instance, Audrey Hepburn plays Susy Hendrix, a blind woman trapped in her home by three crooks. Martin F. Norden suggests that this film represented a “step forward” in cinematic representations of disability because its heroine is not simply an innocent victim, but “tough, resilient, and resourceful in her fight against the criminals who have misrepresented themselves to her and have broken into her apartment” (228). Susy’s blindness, at first presented as a source of vulnerability and frustration, becomes her strength in the film’s climax. Bashing out all the lights in the apartment, she forces the men to fight on her terms, in darkness, where she holds the upper hand. In a classic example of Final Girl tenacity, Susy stabs the last of them to death before help arrives. Maddie likewise uses her disability as a tactical advantage. 
An enhanced sense of touch allows her to detect the killer when he sneaks up behind her as she feels the lightest flutter upon the hairs of her neck. She also wields a blaring fire alarm as a weapon, deafening and disorienting her attacker, causing him to drop his knife. The similarities between these films are not coincidental. During an interview, director Mike Flanagan (who co-wrote Hush with wife Siegel) stated that they were directly informed by Wait until Dark. When asked about the choice to make Maddie’s character deaf, he explained that “it kind of happened because Kate and I were out to dinner and we were talking about movies we liked. One of the ones that we stumbled on that we both really liked was Wait Until Dark” (cited in Thurman). In the earlier film, director Terence Young used darkness to blind the audience—at times the screen is completely black and viewers must listen carefully to work out what is happening. Likewise, Flanagan and Siegel use silence to effectively deafen the audience at crucial moments. The viewers are therefore forced to experience the action as the heroines do. You’re Gonna Die Screaming But You Won’t Be Heard: Horror films often depend upon sound design for impact—the most mundane visuals can be made frightening by the addition of a particular noise, effect, or tune. Therefore, in the context of the slasher genre, one of the most distinctive aspects of Hush is the absence of the Final Girl’s vocalisation. A mute heroine is deprived of the most basic expressive tool in the horror handbook—a good scream. “What really won me over,” comments one reviewer, “was the fact that this particular ‘final girl’ isn’t physically able to whinge or scream when in pain–something that really isn’t the norm in slasher/home invasion movies” (Gorman). Yet silence also plays an important part in this genre: “when the wind stops or the footfalls cease, death is near” (Whittington 183). Indeed, Hush’s tagline is “silence can be killer.” The arrival of the killer triggers a deep kind of silence in this particular film, because alternative captions, text, and other communicative techniques (including ASL) cease to be used or useful when the man begins terrorising Maddie. This is not entirely surprising, as the abject failure of technology is a familiar trope in slasher films. As Clover explains, “the emotional terrain of the slasher film is pretechnological” (Her Body, Himself, 198). In Hush, however, the focus on text in this context is notable. There is a sense that written modes of communication are unreliable when it counts. The killer steals her phone, and cuts electricity and Internet access to the house. She attempts to use the neighbours’ Wi-Fi via her laptop, but does not know the password. Quick-thinking Maddie even scrawls backwards messages on her windows, “WON’T TELL. DIDN’T SEE FACE,” she writes in lipstick, “BOYFRIEND COMING HOME.” In response, the killer simply removes his mask: “You’ve seen it now,” he says. They both know there is no boyfriend. The written word has shifted from being central to Maddie’s life, to largely irrelevant. Text cannot save her. It is only by using other strategies (and senses) that Maddie empowers herself to survive. Maddie’s struggles to communicate and take control are integral to the film’s unfolding narrative, and co-writer Siegel notes this was a conscious theme: “A lot of this movie is … a metaphor for feeling unheard. 
It’s a movie about asserting yourself and of course as a female writer I brought a lot to that.” In their reflection on the limits of both verbal and written communication, the writers of Hush owe a debt to another source of inspiration—Joss Whedon’s Buffy the Vampire Slayer television series. Season four, episode ten, also called Hush, was first aired on 14 December 1999 and features a critically acclaimed storyline in which the characters all lose their ability to speak. Voices from all over Sunnydale are stolen by monstrous fairytale figures called The Gentlemen, who use the silence to cut fresh hearts from living victims. Their appearance is heralded by a morbid rhyme: Can’t even shout, can’t even cry The Gentlemen are coming by. Looking in windows, knocking on doors, They need to take seven and they might take yours. Can’t call to mom, can’t say a word, You’re gonna die screaming but you won’t be heard. The theme of being “unheard” is clearly felt in this episode. Buffy and co attempt a variety of methods to compensate for their lost voices, such as hanging message boards around their necks, using basic text-to-voice computer software, and drawing on overhead projector slides. These tools essentially provide the captions for a story unfolding in silence, as no subtitles are provided. As it turns out, in many ways the friends’ non-verbal communication is more effective than their spoken words. Patrick Shade argues that the episode: celebrates the limits and virtues of both the nonverbal and the verbal. … We tend to be most readily aware of verbal means … but “Hush” stresses that we are embodied creatures whose communication consists in more than the spoken word. It reminds us that we have multiple resources we regularly employ in communicating. In a similar way, the film Hush emphasises alternative modes of expression through the device of the mute Final Girl, who must use all of her sensory and intellectual resources to survive. The evening begins with Maddie at leisure, unable to decide how to end her fictional novel. By the finale she is clarity incarnate. She assesses each real-life scene proactively and “writes” the end of the film on her own terms, showing that there is only one way to survive the night—she must fight. Deaf Gain: In his discussion of disability and cinema, Norden explains that the majority of films position disabled people as outsiders and “others” because “filmmakers photograph and edit their work to reflect an able-bodied point of view” (1). The very apparatus of mainstream film, he argues, is designed to embody able-bodied experiences and encourage audience identification with able-bodied characters. He argues this bias results in disabled characters positioned as “objects of spectacle” to be pitied, feared or scorned by viewers. In Hush, however, the audience is consistently encouraged to identify with Maddie. As she fights for her life in the final scenes, sound fades away and the camera assumes a first-person perspective. The man is above, choking her on the floor, and we look up at him through her eyes. As Maddie’s groping hand finds a corkscrew and jabs the spike into his neck, we watch his death through her eyes too. The film thus assists viewers to apprehend Maddie’s strength intimately, rather than framing her as a spectacle or distanced “other” to be pitied. Importantly, it is this very core of perceived vulnerability, yet ultimate strength, that gives Maddie the edge over her attacker in the end. 
In this way, Maddie’s disabilities are represented not solely as a space of limitation or difference, but as a potential wellspring of power. Hence the film supports, to some degree, the move to seeing deafness as gain rather than loss:

Deafness has long been viewed as a hearing loss—an absence, a void, a lack. It is virtually impossible to think of deafness without thinking of loss. And yet Deaf people do not often consider their lives to be defined by loss. Rather, there is something present in the lives of Deaf people, something full and complete. (Bauman and Murray 3)

As Bauman and Murray explain, the shift from “hearing loss” to “deaf gain” involves focusing on what is advantageous and unique about the deaf experience. They use the example of the Swiss national snowboarding team, who hired a deaf coach to boost their performance. The coach noticed the riders were depending too much on sound and used earplugs to teach a multi-sensory approach: “the earplugs forced them to learn to depend on the feel of the snow beneath their boards [and] the snowboarder’s performance improved markedly” (6). This idea that removing sound strengthens other senses is a thread that runs throughout Hush. For example, it is the loss of hearing and speech that is credited with inspiring Maddie’s successful writing career and innovative literary “voice”.

Lennard J. Davis warns that framing people as heroic or empowered as a result of their disabilities can feed counterproductive stereotypes and perpetuate oppressive systems. “Privileging the inherent powers of the deaf or the blind is a form of patronizing,” he argues, because it traps such individuals within the concept of innate difference (106). Disparities between able and disabled people are easier to justify when disabled characters are presented as intrinsically “special” or “noble”, as this suggests inevitable divergence rather than structural inequality. While this is something to keep in mind, Hush skirts the issue by presenting Maddie as a flawed, realistic character. She does not possess superpowers; she makes mistakes and gets injured. In short, she is a fallible human using what resources she has to the best of her abilities. As such, she represents a holistic vision of a disabled heroine rather than an overly glorified stereotype.

Conclusion

Hush is a film about the limits of text, the gaps where language is impossible or insufficient, and the struggle to be heard as a woman with disabilities. It is a film about the difficulties surrounding both verbal and written communication, and our dependence upon them. The absence of closed captions or subtitles, combined with the use of alternative “captioning”—in the form of instant messaging, for instance—grounds the narrative in lived space, rather than providing easy extra-textual solutions. It also poses a challenge to a hearing audience: to cross the border of “otherness” and identify with a deaf heroine.

Returning to the discussion of the Final Girl characterisation, Clover argues that this is a gendered device combining both traditionally feminine and masculine characteristics. The fluidity of the Final Girl is constant: “even during that final struggle she is now weak and now strong, now flees the killer and now charges him, now stabs and is stabbed, now cries out in fear and now shouts in anger” (Her Body, Himself, 221). Men viewing slasher films identify with the Final Girl’s “masculine” traits, and in the process find themselves looking through the eyes of a woman.
In using a deaf character, Hush suggests that an evolution of this dynamic might also occur along the dis/abled boundary line. Maddie is a powerful survivor who shifts between weak and strong, frightened and fierce, but also between disabled and able. This portrayal encourages the audience to identify with her empowered traits and, in the process, to look through the eyes of a disabled woman. Therefore, while slashers—and horror films in general—are not traditionally associated with progressive representations of disabilities, this evolution of the Final Girl may provide a fruitful topic for both research and filmmaking in the future.

References

Bauman, Dirksen, and Joseph J. Murray. “Reframing: From Hearing Loss to Deaf Gain.” Trans. Fallon Brizendine and Emily Schenker. Deaf Studies Digital Journal 1 (2009): 1–10. <http://dsdj.gallaudet.edu/assets/section/section2/entry19/DSDJ_entry19.pdf>.

Clover, Carol J. Men, Women, and Chain Saws: Gender in the Modern Horror Film. New Jersey: Princeton UP, 1992.

———. “Her Body, Himself: Gender in the Slasher Film.” Representations 20 (1987): 187–228.

Davis, Lennard J. Enforcing Normalcy: Disability, Deafness, and the Body. London: Verso, 1995.

Ellis, Katie, and Mike Kent. Disability and New Media. New York: Routledge, 2011.

Gorman, H. “Hush: Film Review.” Scream Horror Magazine (2016). <http://www.screamhorrormag.com/hush-film-review/>.

Jankowski, Katherine A. Deaf Empowerment: Emergence, Struggle, and Rhetoric. Washington: Gallaudet UP, 1997.

Laird, E.E. “Updating the Final Girl Theory.” Medium (2016). <https://medium.com/@TheFilmJournal/updating-the-final-girl-theory-b37ec0b1acf4>.

Norden, M.F. Cinema of Isolation: A History of Physical Disability in the Movies. New Jersey: Rutgers UP, 1994.

Shade, Patrick. “Screaming to Be Heard: Community and Communication in ‘Hush’.” Slayage 6.1 (2006). <http://www.whedonstudies.tv/uploads/2/6/2/8/26288593/shade_slayage_6.1.pdf>.

Sheppard, D. “Hush: Revolutionising the Final Girl.” Eyes on Screen (2016). <https://eyesonscreen.wordpress.com/2016/06/08/hush-revolutionising-the-final-girl/>.

Thurman, T. “‘Hush’ Director Mike Flanagan and Actress Kate Siegel on Their New Thriller!” Interview. Bloody Disgusting (2016). <http://bloody-disgusting.com/interviews/3384092/interview-hush-mike-flanagan-kate-siegel/>.

Whittington, W. “Horror Sound Design.” A Companion to the Horror Film. Ed. Harry M. Benshoff. Oxford: John Wiley & Sons, 2014. 168–185.