Journal articles on the topic 'US Engineering and Housing Support Center'


Consult the top 50 journal articles for your research on the topic 'US Engineering and Housing Support Center.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of each academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles in a wide variety of disciplines and organise your bibliography correctly.

1

Siembieda, William, Laurie Johnson, and Guillermo Franco. "Rebuild Fast but Rebuild Better: Chile's Initial Recovery following the 27 February 2010 Earthquake and Tsunami." Earthquake Spectra 28, no. S1 (2012): 621–41. http://dx.doi.org/10.1193/1.4000025.

Abstract:
The Chilean earthquake and tsunami disaster of 27 February 2010 impacted 12 million people in 900 cities and towns, causing more than US$30 billion in losses. This paper considers how the national government responded to the challenges of coastal and urban reconstruction, and examines the actions taken in the housing, land use mitigation planning, insurance, and risk reduction management sectors. The Chilean government utilized a mixed decentralized model for recovery management, with strong direction from the national-level ministries and subnational planning and housing efforts at the regional and municipal levels. The national recovery plan's guiding principles are used in this paper as a framework for assessing progress. In 12 months, a series of temporary shelter villages and a system of recovery housing subsidies were established; risk-based land use plans were developed in various coastal areas; a finance plan was adopted; changes to the national emergency management agency were made; and rapid payment of insurance claims was completed. Conflicts did arise related to the speed of housing recovery support, the expropriation of land sites as future tsunami protection barriers, and the extent of public participation in recovery plan making.
2

Yamasaki, Eiichi, and Haruo Hayashi. "People Who Cannot Move During a Disaster – Initiatives and Examples in Japan Disaster Victim Support." Journal of Disaster Research 12, no. 1 (2017): 137–46. http://dx.doi.org/10.20965/jdr.2017.p0137.

Abstract:
The main purpose of this paper is to explore the vulnerability of disaster victims from the perspective of immobility, in contrast to the conventional perspective of mobility. What causes immobility in Japan? And how have immobile people been treated? In this article, I will attempt to answer these questions using some concrete examples. Immobile people have been recognized as “people requiring assistance during a disaster” (PRADD). This term helps us understand immobility in Japan. The Sanjou flood (2004) prompted the formulation of the “Guidelines for Evacuation Support of People Requiring Assistance during a Disaster.” The national government has encouraged local governments and residents to be prepared for a disaster using the guidelines. Nevertheless, preparations for disasters have not progressed very well. It was in this context that the Great East Japan Earthquake (GEJE) occurred. During the GEJE, immobility raised the risk of death for PRADD due to the tsunami. After the tsunami, there were also PRADD who could not evacuate to shelters because they were anxious about how life would be there. Now many victims live in temporary housing. There will be people who cannot move to temporary housing in the future. It is likely that they will be mainly PRADD. These cases make it clear that immobility causes vulnerability to disasters. I will also provide an example of how mobility causes vulnerability in a disaster – a stranded commuter or person during the GEJE.
3

Kerwin, Donald, and Mike Nicholson. "Charting a Course to Rebuild and Strengthen the US Refugee Admissions Program (USRAP): Findings and Recommendations from the Center for Migration Studies Refugee Resettlement Survey: 2020." Journal on Migration and Human Security 9, no. 1 (2021): 1–30. http://dx.doi.org/10.1177/2331502420985043.

Abstract:
Executive Summary 1 This report analyzes the US Refugee Admissions Program (USRAP), leveraging data from a national survey of resettlement stakeholders conducted in 2020. 2 The survey examined USRAP from the time that refugees arrive in the United States. Its design and questionnaire were informed by three community gatherings organized by Refugee Council USA in the fall and winter of 2019, extensive input from an expert advisory group, and a literature review. This study finds that USRAP serves important purposes, enjoys extensive community support, and offers a variety of effective services. Overall, the survey finds a high degree of consensus on the US resettlement program’s strengths and objectives, and close alignment between its services and the needs of refugees at different stages of their settlement and integration. Because its infrastructure and community-based resettlement networks have been decimated in recent years, the main challenges of subsequent administrations, Congresses, and USRAP stakeholders will be to rebuild, revitalize, and regain broad and bipartisan support for the program. This article also recommends specific ways that USRAP’s programs and services can be strengthened. Among the study’s findings: 3 Most refugee respondents identified USRAP’s main purpose(s) as giving refugees new opportunities, helping them to integrate, offering hope to refugees living in difficult circumstances abroad, and saving lives. High percentages of refugees reported that the program allowed them to support themselves soon after arrival (92 percent), helped them to integrate (77 percent), and has a positive economic impact on local communities (71 percent). Refugee respondents also reported that the program encourages them to work in jobs that do not match their skills and credentials (56 percent), does not provide enough integration support after three months (54 percent), does not offer sufficient financial help during their first three months (49 percent), and reunites families too slowly (47 percent). Respondents identified the following main false ideas about the program: refugees pose a security risk (84 percent), use too many benefits and drain public finances (83 percent), and take the jobs of the native-born (74 percent). Refugee respondents reported using public benefits to meet basic needs, such as medical care, food, and housing. Non-refugee survey respondents believed at high rates that former refugees (69 percent) and refugee community advocate groups (64 percent) should be afforded a voice in the resettlement process. Non-refugee respondents indicated at high rates that the program’s employment requirements limit the time needed for refugees to learn English (65 percent) and limit their ability to pursue higher education (59 percent). Eighty-six percent of non-refugee respondents indicated that the Reception and Placement program is much too short (56 percent) or a little too short (30 percent). Respondents identified a wide range of persons and institutions as being very helpful to refugees in settling into their new communities: these included resettlement staff, friends, and acquaintances from refugees’ country of origin, members of places of worship, community organizations led by refugees or former refugees, and family members. Refugee respondents identified finding medical care (61 percent), housing (52 percent), and a job (49 percent) as the most helpful services in their first three months in the country. 
Refugees reported that the biggest challenge in their first year was to find employment that matched their educational or skill levels or backgrounds. The needs of refugees and the main obstacles to their successful integration differ by gender, reflecting at least in part the greater childcare responsibilities borne by refugee women. Refugee men reported needing assistance during their first three months in finding employment (68 percent), English Language Learning (ELL) courses (59 percent), and orientation services (56 percent), while refugee women reported needing orientation services (81 percent) and assistance in securing childcare (64 percent), finding ELL courses (53 percent), and enrolling children in school (49 percent). To open-response questions, non-refugee respondents identified as obstacles to the integration of men: digital literacy, (lack of) anti–domestic violence training, the need for more training to improve their jobs, the new public benefit rule, transportation to work, low wages, the need for more mental health services, cultural role adjustment, and lack of motivation. Non-refugee respondents identified as obstacles to the integration of women: lack of childcare and affordable housing, the different cultural roles of women in the United States, lack of affordable driver’s education classes, a shortage of ELL classes for those with low literacy or the illiterate, digital literacy challenges, difficulty navigating their children’s education and school systems, transportation problems, poorly paying jobs, and lack of friendships with US residents. Non-refugee respondents report that refugee children also face unique obstacles to integration, including limited funding or capacity to engage refugee parents in their children’s education, difficulties communicating with refugee families, and the unfamiliarity of teachers and school staff with the cultures and backgrounds of refugee children and families. LGBTQ refugees have many of the same basic needs as other refugees — education, housing, employment, transportation, psychosocial, and others — but face unique challenges in meeting these needs due to possible rejection by refugees and immigrants from their own countries and by other residents of their new communities. Since 2017, the number of resettlement agencies has fallen sharply, and large numbers of staff at the remaining agencies have been laid off. As a result, the program has suffered a loss in expertise, institutional knowledge, language diversity, and resettlement capacity. Resettlement agencies and community-based organizations (CBOs) reported at high rates that to accommodate pre-2017 numbers of refugees, they would need higher staffing levels in employment services (66 percent), general integration and adjustment services (62 percent), mental health care (44 percent) and medical case management (44 percent). Resettlement agencies indicated that they face immense operational and financial challenges, some of them longstanding (like per capita funding and secondary migration), and some related to the Trump administration’s hostility to the program. Section I introduces the article and provides historic context on the US refugee program. Section II outlines the resettlement process and its constituent programs. Section III describes the CMS Refugee Resettlement Survey: 2020. 
Section IV sets forth the study’s main findings, with subsections covering USRAP’s purpose and overall strengths and weaknesses; critiques of the program; the importance of receiving communities to resettlement and integration; the effectiveness of select USRAP programs and services; integration metrics; and obstacles to integration. The article ends with a series of recommendations to rebuild and strengthen this program.
4

Gin, June L., Roger J. Casey, Jeffery L. Quarles, and Aram Dobalian. "Ensuring Continuity of Transitional Housing for Homeless Veterans: Promoting Disaster Resilience among the Veterans Health Administration’s Grant and Per Diem Providers." Journal of Primary Care & Community Health 10 (January 2019): 215013271986126. http://dx.doi.org/10.1177/2150132719861262.

Abstract:
The US Department of Veterans Affairs (VA) has committed significant resources toward eliminating homelessness among veterans as part of its health care mission. The VA Grant and Per Diem (GPD) program funds non-VA, community-based organizations to provide transitional housing and support services to veterans experiencing homelessness. During a disaster, GPD grantee organizations will be especially critical in ensuring the well-being of veterans residing in their programs. Recognizing the need to ensure continued access to this residential care, the VA GPD program implemented a disaster preparedness plan requirement for its grantee organizations in 2013. This study conducted semistructured interviews with leaders of 5 GPD grantee organizations, exploring their perceptions of the preparedness requirement, the assistance they would need to achieve desired preparedness outcomes, and their motivations toward preparedness. Organizations reported being extremely motivated toward improving their disaster preparedness, albeit often for reasons other than the new preparedness requirement, such as disaster risk or partnerships with local government. Two dominant themes in organizations’ identified needs were (1) the need to make preparedness seem as “easy and doable” as possible and (2) the desire to be more thoroughly integrated with partners. These themes suggest the need to develop materials specifically tailored to facilitate preparedness within the GPD nonprofit grantees, an effort currently being led by the VA’s Veterans Emergency Management Evaluation Center (VEMEC).
5

Zeydan, Mithat, Bülent Bostancı, and Burcu Oralhan. "A New Hybrid Decision Making Approach for Housing Suitability Mapping of an Urban Area." Mathematical Problems in Engineering 2018 (October 15, 2018): 1–13. http://dx.doi.org/10.1155/2018/7038643.

Abstract:
In urban planning, housing evaluation of residential areas plays a critical role in promoting economic efficiency. This study produced an evolutionary-based suitability map by combining hybrid Multicriteria Decision Making (MCDM) with a Geographical Information System (GIS) to assess the suitability of housing locations. Suitable locations were modelled and classified from very low to very high suitability. In the first stage, Fuzzy DEMATEL (the Decision Making Trial and Evaluation Laboratory) and the Covariance Matrix Adaptation Evolution Strategy (CMA-ES) under fuzzy conditions were employed as a subjective and an objective (model-based) technique, respectively, to find the criterion weights that are a critical part of decision making. In the second stage, a housing evaluation map was drawn for each of the two approaches, and their performance was classified and measured with the Weighted Linear Combination (WLC) method. The 29 criteria identified were prioritized according to the judgment of urban planning and real estate experts for both Fuzzy DEMATEL and CMA-ES. After the CMA-ES weighting was coded in MATLAB to obtain optimum weights, the collected data for 160 houses were mapped as vector (positional) data and converted to raster (pixel) data in ArcGIS 10.4. CMA-ES-WLC yielded maximization values for 104 alternatives (65% performance), whereas FDEMATEL-WLC yielded maximization values for 56 alternatives (35% performance). The WLC values calculated with the CMA-ES and FDEMATEL weights indicate that the houses with the highest investment suitability are on Alpaslan, Köşk, and Melikgazi streets. The results show that the methodology applied in this Turkish case study is an important and powerful approach to providing decision support for spatial planning.
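
Where the abstract describes ranking the 160 houses with the Weighted Linear Combination (WLC) method, the short sketch below illustrates that step only. The criterion names, weights, and scores are hypothetical placeholders rather than the study's 29 expert-derived criteria; only the WLC recipe itself (min-max normalize each criterion, weight, and sum) is standard.

```python
import numpy as np

# Hypothetical criterion scores for three housing alternatives (rows)
# against three criteria (columns); the study used 160 houses x 29 criteria.
scores = np.array([
    [120_000, 8.0, 300.0],   # house A: price, transit access score, distance to center (m)
    [95_000,  5.5, 850.0],   # house B
    [140_000, 9.0, 150.0],   # house C
])
benefit = np.array([False, True, False])  # True = higher is better; False = lower is better
weights = np.array([0.5, 0.3, 0.2])       # placeholder weights (e.g., from Fuzzy DEMATEL or CMA-ES)

def wlc_suitability(scores, weights, benefit):
    """Weighted Linear Combination: min-max normalize each criterion, then weight and sum."""
    mins, maxs = scores.min(axis=0), scores.max(axis=0)
    norm = (scores - mins) / (maxs - mins)
    norm[:, ~benefit] = 1.0 - norm[:, ~benefit]   # invert cost-type criteria
    return norm @ (weights / weights.sum())       # one suitability score per alternative, in [0, 1]

print(wlc_suitability(scores, weights, benefit))
```

In the study, each alternative's WLC score would then be attached to its location geometry in ArcGIS to produce the raster suitability map described above.
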
6

Alzamora Rumazo, C., A. Pryor, F. Ocampo Mendoza, J. Campos Villareal, J. M. Robledo, and E. Rodríguez Mercado. "Cleaner production in the chemical industry." Water Science and Technology 42, no. 5-6 (2000): 1–7. http://dx.doi.org/10.2166/wst.2000.0487.

Abstract:
A cleaner production demonstration study was developed in 1998 for the chemical industry by the Mexican Center for Cleaner Production with the support of the United States Agency for International Development (USAID). The project's objective was to develop cleaner production assessments for chemical plants by identifying and evaluating process and energy cleaner production opportunities for technical feasibility, economic benefit and environmental impact. Four plants in the chemical industry groups of inorganic and organic chemicals and plastic materials and synthetic resins were involved. The main results are: (1) a reduction of solid toxic residues in the organic chemicals plant of 3,474 kg/year with after-tax savings of US$ 318,304/year; (2) an increase in plant capacity of 56%, and 10% reduction in VOCs emissions in the plasticizers and epoxidated soybean oil plant with after-tax savings of US$ 2,356,000/year; (3) a reduction of 31,150 kg/year of ethylene oxide emissions with after-tax savings of US$ 17,750/year in the polyethylene glycol plant and (4) a reduction of CO2 emissions of 9.21% with after-tax savings of US$ 44,281/year in the inorganic chemicals plant. The principal areas for improvement in the chemical industry are process control and instrumentation, process design, maintenance programs and providing adequate utilities for the plants.
7

Shrubb, Richard. "The UK homelessness epidemic – radical roots needing radical solutions." Praca Socjalna 35, no. 1 (2020): 13–34. http://dx.doi.org/10.5604/01.3001.0014.1172.

Abstract:
Using newspaper articles, government and charity reports, and other secondary sources, this paper looks at the new problem of widespread homelessness brought about by UK austerity economic policy after 2010. It assesses the growth of the problem, due in particular to the re-engineering of welfare benefits. Looking at those who have fallen through the net, the paper focuses on the ability of local authorities to use the law to decline to support those presenting as homeless, including those released from prison. Addressing punitive measures taken by local authorities and law enforcement agencies, it highlights the difficulties faced by those targeted by such agencies. In the final section I look at two contrasting models of policy vis-à-vis homeless people – those in use in the United States and in Finland. The UK neither officially countenances homeless camps, as in the US, nor offers housing as a right, as in Finland. Drawing on an accusation made by Chris Glover in a December 2018 academic paper, I conclude that Friedrich Engels' 1844 concept of social murder has been committed against thousands of people, an act I term 'Classism'. This act of class war against the most vulnerable has left many thousands more homeless or in precarious housing.
8

Laumond, Jean–Paul, Mehdi Benallegue, Justin Carpentier, and Alain Berthoz. "The Yoyo-Man." International Journal of Robotics Research 36, no. 13-14 (2017): 1508–20. http://dx.doi.org/10.1177/0278364917693292.

Abstract:
The paper reports on two results issued from a multidisciplinary research action exploring the motor synergies of anthropomorphic walking. By combining the biomechanical, neurophysiology, and robotics perspectives, it is intended to better understand human locomotion with the ambition to better design bipedal robot architectures. The motivation of the research starts from the simple observation that humans may stumble when following a simple reflex-based locomotion on uneven terrains. The rationale combines two well established results in robotics and neuroscience, respectively: passive robot walkers, which are very efficient in terms of energy consumption, can be modeled by a simple rotating rimless wheel; humans and animals stabilize their head when moving. The seminal hypothesis is then to consider a wheel equipped with a stabilized mass on top of it as a plausible model of bipedal walking. The two results presented in the paper support the hypothesis. From a motion capture data basis of twelve human walkers, we show that the motions of the feet are organized around a geometric center, which is the center of mass, and is surprisingly not at the hip. After introducing a ground texture model that allows us to quantify the stability performance of walker control schemes, we show how compass-like passive walkers are better controlled when equipped with a stabilized 2-degree-of-freedom moving mass on top of them. The center of mass and head then play complementary roles that define what we call the Yoyo-Man. Beyond the two results presented in the paper, the Yoyo-Man model opens new perspectives to explore the computational foundations of anthropomorphic walking.
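
For context on the rimless-wheel analogy invoked in this abstract, the classical impact law for a passive rimless wheel – a standard result from the passive-walking literature, not a formula stated by the authors – can be written as follows, with generic notation:

```latex
% Rimless wheel: point mass m at the hub, spoke length l,
% angle 2\alpha between adjacent spokes, rolling down a slope.
% Angular momentum about the new contact point is conserved at each spoke impact:
%   m l^2 \omega^{+} = m l^2 \omega^{-} \cos(2\alpha)
\[
  \omega^{+} = \omega^{-}\cos(2\alpha)
\]
% so each footfall dissipates a fraction 1 - \cos^{2}(2\alpha) of the kinetic energy,
% and on a constant slope the wheel settles into a steady gait in which this loss
% balances the potential energy gained per step.
```

The Yoyo-Man hypothesis summarized above augments this rimless-wheel model with a stabilized mass carried on top of the wheel.
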
9

Savytskyi, M. "PRYDNIPROVSKA STATE ACADEMY OF CIVIL ENGINEERING AND ARCHITECTURE ON THE WAY OF MODERNIZATION AND TRANSFORMATION INTO “GREEN” UNIVERSITY." Ukrainian Journal of Civil Engineering and Architecture, no. 1 (June 24, 2021): 7–13. http://dx.doi.org/10.30838/j.bpsacea.2312.230221.7.712.

Abstract:
Formulation of the problem. Prydniprovska State Academy of Civil Engineering and Architecture is a recognized educational and scientific center in the field of architecture and construction which has outstanding traditions and achievements and realizes its mission of ensuring the innovative development of Ukraine through infrastructure projects and programs, the creation of fixed assets, and housing and public construction. In 2020 Prydniprovska State Academy of Civil Engineering and Architecture celebrated its 90th anniversary. However, higher engineering and construction education in Yekaterinoslav − Dnipropetrovsk − Dnipro has a history of more than 100 years: Yekaterinoslav Polytechnic Institute (1916−1921); Yekaterinoslav Evening Workers' Construction Technical School (1921−1930); Dnipropetrovsk Construction Institute (DCI, 1930−1935); Dnipropetrovsk Civil Engineering Institute (DCEI, 1935−1994); Prydniprovska State Academy of Civil Engineering and Architecture (PSACEA, since 1994). The history of PSACEA is inextricably linked with the historical events in the country, as well as with the personalities – the rectors who headed the institution and directed its activities. The 1930s–60s were the years of formation of the institution, thanks to the hard work of DCI−DCEI and their leaders. In 1964, Reznichenko P.T. was appointed Rector of DCEI. The years of his leadership of the university (1964−1987) can be called years of development, during which infrastructure facilities were built – educational buildings, dormitories, a swimming pool, a scientific test site, and much more. Rector Bolshakov V.I., who headed DCEI−PSACEA for 31 years (from 1987 to 2018), is associated with the formation of PSACEA as a powerful center of construction science. New socio-economic conditions require the modernization of all areas of PSACEA. The purpose of the article is to explore ways of transforming PSACEA into a center of modern architecture, science, and technology – a green university. Conclusions. Further development of PSACEA should take place through the application and dissemination, by means of engineering and research creative work, of new knowledge, techniques, and technologies, and through education of the younger generation in the spirit of humanism, promoting education, science, and production with the support of government and civil society. The strategic goal of the academy is to become Ukraine's leading architectural and construction university of a European-level, innovative type through integration into the international scientific and educational space, preservation and development of the traditions and achievements of the DCEI−PSACEA school, and creative application of world heritage in basic and applied research, and to transform the academy into a "green" University of Architecture and Civil Engineering whose activities are based on the principles of sustainable development.
10

Warren, Robert, and Donald Kerwin. "Mass Deportations Would Impoverish US Families and Create Immense Social Costs." Journal on Migration and Human Security 5, no. 1 (2017): 1–8. http://dx.doi.org/10.1177/233150241700500101.

Abstract:
Executive Summary1 This paper provides a statistical portrait of the US undocumented population, with an emphasis on the social and economic condition of mixed-status households - that is, households that contain a US citizen and an undocumented resident. It is based primarily on data compiled by the Center for Migration Studies (CMS). Major findings include the following: • There were 3.3 million mixed-status households in the United States in 2014. • 6.6 million US-born citizens share 3 million households with undocumented residents (mostly their parents). Of these US-born citizens, 5.7 million are children (under age 18). • 2.9 million undocumented residents were 14 years old or younger when they were brought to the United States. • Three-quarters of a million undocumented residents are self-employed, having created their own jobs and in the process, creating jobs for many others. • A total of 1.3 million, or 13 percent of the undocumented over age 18, have college degrees. • Of those with college degrees, two-thirds, or 855,000, have degrees in four fields: engineering, business, communications, and social sciences. • Six million undocumented residents, or 55 percent of the total, speak English well, very well, or only English. • The unemployment rate for the undocumented was 6.6 percent, the same as the national rate in January 2014.2 • Seventy-three percent had incomes at or above the poverty level. • Sixty-two percent have lived in the United States for 10 years or more. • Their median household income was $41,000, about $12,700 lower than the national figure of $53,700 in 2014 (US Census Bureau 2015). Based on this profile, a massive deportation program can be expected to have the following major consequences: • Removing undocumented residents from mixed-status households would reduce median household income from $41,300 to $22,000, a drop of $19,300, or 47 percent, which would plunge millions of US families into poverty. • If just one-third of the US-born children of undocumented residents remained in the United States following a mass deportation program, which is a very low estimate, the cost of raising those children through their minority would total $118 billion. • The nation's housing market would be jeopardized because a high percentage of the 1.2 million mortgages held by households with undocumented immigrants would be in peril. • Gross domestic product (GDP) would be reduced by 1.4 percent in the first year, and cumulative GDP would be reduced by $4.7 trillion over 10 years. CMS derived its population estimates for 2014 using a series of statistical procedures that involved the analysis of data collected by the US Census Bureau's American Community Survey (ACS). The privacy of all respondents in the survey is legally mandated, and, for the reasons listed in the Appendix, the identity of undocumented residents cannot be derived from the data. A detailed description of the methodology used to develop the estimates is available at the CMS website.3
11

Miera, Oliver, Katharina L. Schmitt, Hakan Akintuerk, et al. "Antithrombotic therapy in pediatric ventricular assist devices: Multicenter survey of the European EXCOR Pediatric Investigator Group." International Journal of Artificial Organs 41, no. 7 (2018): 385–92. http://dx.doi.org/10.1177/0391398818773040.

Abstract:
Objectives: Mechanical circulatory support for pediatric heart failure patients with the Berlin Heart EXCOR ventricular assist system is the only approved and established bridging strategy for recovery or heart transplantation. In recent years, the burden of thromboembolic events has led to modifications of the recommended antithrombotic therapy. Therefore, we aimed to assess modifications of antithrombotic practice among the European EXCOR Pediatric Investigator Group members. Methods: We sent a questionnaire assessing seven aspects of antithrombotic therapy to 18 European hospitals using the EXCOR device for children. Returned questionnaires were analyzed and identified antithrombotic strategies were descriptively compared to “Edmonton protocol” recommendations developed for the US EXCOR pediatric approval study. Results: Analysis of 18 received surveys revealed substantial deviations from the Edmonton protocol, including earlier start of heparin therapy at 6–12 h postoperatively and in 50% of surveyed centers, monitoring of heparin effectiveness with aPTT assay, administering vitamin K antagonists before 12 months of age. About 39% of centers use higher international normalized ratio targets, and platelet inhibition is changed in 56% including the use of clopidogrel instead of dipyridamole. Significant inter-center variability with multiple deviations from the Edmonton protocol was discovered with only one center following the Edmonton protocol completely. Conclusion: Current antithrombotic practice among European EXCOR users representing the treatment of more than 600 pediatric patients has changed over time with a trend toward a more aggressive therapy. There is a need for systematic evidence-based evaluation and harmonization of developmentally adjusted antithrombotic management practices in prospective studies toward revised recommendations.
12

Pransky, Joanne. "The Pransky interview: Dr William “Red” Whittaker, Robotics Pioneer, Professor, Entrepreneur." Industrial Robot: An International Journal 43, no. 4 (2016): 349–53. http://dx.doi.org/10.1108/ir-04-2016-0124.

Abstract:
Purpose The following paper details a “Q&A interview” conducted by Joanne Pransky, Associate Editor of Industrial Robot Journal, to impart the combined technological, business and personal experience of a prominent, robotic industry engineer-turned successful business leader, regarding the commercialization and challenges of bringing technological inventions to the market while overseeing a company. The paper aims to discuss these issues. Design/methodology/approach The interviewee is Dr William “Red” Whittaker, Fredkin Research Professor of Robotics, Robotics Institute, Carnegie Mellon University (CMU); CEO of Astrobotic Technology; and President of Workhorse Technologies. Dr Whittaker provides answers to questions regarding the pioneering experiences of some of his technological wonders in land, sea, air, underwater, underground and space. Findings As a child, Dr Whittaker built things and made them work and dreamed about space and robots. He has since then turned his dreams, and those of the world, into realities. Dr Whittaker’s formal education includes a BS degree in civil engineering from Princeton and MS and PhD degrees in civil engineering from CMU. In response to designing a robot to cleanup radioactive material at the Three Mile Island nuclear plant, Dr Whittaker established the Field Robotics Center (FRC) in 1983. He is also the founder of the National Robotics Engineering Center, an operating unit within CMU’s Robotics Institute (RI), the world’s largest robotics research and development organization. Dr Whittaker has developed more than 60 robots, breaking new ground in autonomous vehicles, field robotics, space exploration, mining and agriculture. Dr Whittaker’s research addresses computer architectures for robots, modeling and planning for non-repetitive tasks, complex problems of objective sensing in random and dynamic environments and integration of complete robot systems. His current focus is Astrobotic Technology, a CMU spin-off firm that is developing space robotics technology to support planetary missions. Dr Whittaker is competing for the US$20m Google Lunar XPRIZE for privately landing a robot on the Moon. Originality/value Dr Whittaker coined the term “field robotics” to describe his research that centers on robots in unconstrained, uncontrived settings, typically outdoors and in the full range of operational and environmental conditions: robotics in the “natural” world. The Field Robotics Center has been one of the most successful initiatives within the entire robotics industry. As the Father of Field Robotics, Dr Whittaker has pioneered locomotion technologies, navigation and route-planning methods and advanced sensing systems. He has directed over US$100m worth of research programs and spearheaded several world-class robotic explorations and operations with significant outreach, education and technology commercializations. His ground vehicles have driven thousands of autonomous miles. Dr Whittaker won DARPA’s US$2m Urban Challenge. His Humvees finished second and third in the 2005 DARPA’s Grand race Challenge desert race. Other robot projects have included: Dante II, a walking robot that explored an active volcano; Nomad, which searched for meteorites in Antarctica; and Tugbot, which surveyed a 1,800-acre area of Nevada for buried hazards. Dr Whittaker is a member of the National Academy of Engineering. He is a fellow of the American Association for Artificial Intelligence and served on the National Academy of Sciences Space Studies Board. 
Dr Whittaker received the Allen Newell Award for Research Excellence. He received Carnegie Mellon's Teare Award for Teaching Excellence. He received the Joseph Engelberger Award for Outstanding Achievement in Robotics, the Association for the Advancement of Artificial Intelligence's inaugural Feigenbaum Prize for his contributions to machine intelligence, the Institute of Electrical and Electronics Engineers Simon Ramo Medal, the American Society of Civil Engineers Columbia Medal, the Antarctic Service Medal and the American Spirit Honor Medal. Science Digest named Dr Whittaker one of the top 100 US innovators for his work in robotics. He has been recognized by Aviation Week & Space Technology and Design News magazines for outstanding achievement. Fortune named him a "Hero of US Manufacturing". Dr Whittaker has advised 26 PhD students, holds 16 patents and has authored over 200 publications. Dr Whittaker's vision is to drive nanobiologics technology to fulfillment and create nanorobotic agents for enterprise on Earth and beyond (Figure 1).
13

Baumgarten, B., O. Basu, N. Graf, et al. "A Meta-Model of Chemotherapy Planning in the Multi-Hospital/Multi-Trial-Center-Environment of Pediatric Oncology." Methods of Information in Medicine 43, no. 02 (2004): 171–83. http://dx.doi.org/10.1055/s-0038-1633856.

Abstract:
Summary Objective: Chemotherapy planning in pediatric oncology is complex and time-consuming. The correctness of the calculation according to state-of-the-art research is crucial for curing the child. Computer-assistance can be of great value. The objective of our research was to work out a meta-model of chemotherapy planning based on the Unified Modeling Language (UML). The meta-model is used for the development of an application system which serves as a knowledge-acquisition tool for chemotherapy protocols in pediatric oncology as well as for providing protocol-based care. Methods: We applied evolutionary prototyping, software re-engineering techniques and grounded theory, a qualitative method in social research. We repeated the following steps several times over the years: Based on a requirements analysis (i) a meta-model was developed or adapted, respectively (ii). The meta-model served as a basis for implementing evolutionary prototypes (iii). Further requirements were identified (i) from clinical use of the systems. Results: We developed a comprehensive UML-based meta-model for chemotherapy planning in pediatric oncology (chemoMM). We implemented it and introduced evolutionary prototypes (CATIPO and DOSPO) in several medical centers. Systematic validation of the prototypes enabled us to derive a final meta-model which covers the requirements that have turned out to be necessary in clinical routine. Conclusions: We have developed an application system that fits well into clinical routine of pediatric oncology in Germany. Validation results have shown that the implementation of the meta-model chemoMM can adequately support the knowledge acquisition process for protocol-based care.
14

Sato, Shunichi. "Urban Renewal for Earthquake-Proof Systems." Journal of Disaster Research 1, no. 1 (2006): 95–102. http://dx.doi.org/10.20965/jdr.2006.p0095.

Abstract:
In the latter half of the twentieth century we have cities with a population of ten million or more and highly developed rapid transit and freeways. By December 1972, the total population of Tokyo, the Capital of Japan, had grown to 11.6 million. Tokyo, standing with New York City, Shanghai, and London, is now one of the world's largest cities. In the Japan islands, people are moving to bigger cities on a large scale. This may be concluded from the fact that the economic miracle transformed a battered Japan into one of the greatest industrial nations of the world during the last decade. Economic and industrial activity was concentrated in limited areas, especially on the outskirts of large cities which furnished the consumer markets and in the built-up town areas which envelop minor enterprises allied with big industries. As the nation's largest city and its capital, it was only natural that Tokyo's postwar population growth should have outpaced the rest of the country, because it was the center of the world's highest national economic growth. Tokyo also now plays an important role as a center of political power as in it are concentrated the legislative bodies, the judiciary, and the natural administration. The fact that today's national activities in every field including culture and economy are related to the central political activity accerates the centralization of head offices of enterprises in Tokyo where they can best cope with the economic policy of the government. The number of publications from Tokyo, for example, is 80 per cent of the national total. Tokyo is the center of the country. This centralization brings us much benefit and at the same time it exerts an evil influence. Tokyo is suffering from urban problems such as pollution, traffic congestion, housing shortages, etc. which are also major problems in the other big cities in the world. The rapidity of the centralization of people and industries in Tokyo has made matters worse. An administrative report of the Tokyo Metropolitan Government analyzes the situation as follows, "An emergence of super high buildings and coiling freeways in the center of Tokyo has dramatically changed it into a modernized city, but at the same time the change has brought about the by-products of air pollution and traffic jams that threaten our daily life and health. Housing shortages, commuter congestion and rising prices are also detrimental to the goal of a happy citizenry". In November 1972, the World Conference of Great Cities was held in Tokyo; when the Tokyo Declaration was announced stating, "we cannot deny the fact that science and technology which have brought about many benefits to human beings are also having destructive effects in the large cities," it was enough to remind each participant of the seriousness of their urban problems. There is also a saying, "city planning in the twentieth century is a fight against cars and slums." Indeed the city is product of civilized society and it fares well or ill coincidentally with changes in economy and society supported by the civilization. One must not forget that the main host of a city is neither industry nor machinery, but human beings. A city is a settlement designed for human beings. Therefore we must discharge our duty without delay to fight under given conditions for urban reconstruction with co-existing residential, industrial, and commercial zoning making a comfortable city in which to live and work. 
We can easily imagine the dreadful damage an overcrowded Tokyo will suffer during a great earthquake. The experience of ruinous damage brought about by repeated earthquakes in the past tells us that the continuing sprawl and overcrowding of Tokyo will undoubtedly increase the danger. Even the newest scientific technology cannot prevent earthquakes. We must, therefore, recognize that it is not the mischief of nature, but the easygoing attitude of people that brings much of the ruin and damage by earthquakes. That means that peoples' efforts have been the minimum, and so we are now meeting the challenge of reorganization of the functions and structures of Tokyo from the civil engineering point of view with human wisdom, courage, and technology.
15

Sujatha, CN. "Coal Production Analysis using Machine Learning." International Journal for Research in Applied Science and Engineering Technology 9, no. VI (2021): 919–26. http://dx.doi.org/10.22214/ijraset.2021.35130.

Abstract:
Coal will continue to supply a significant share of energy requirements in the US for at least the next several decades. It is essential that accurate information describing the amount, location, and quality of coal resources and reserves be available to meet energy needs. It is likewise important that the US extract its coal resources efficiently, safely, and in an environmentally responsible manner. Renewed federal support for coal-related research, coordinated across agencies and with the active participation of the states and the industrial sector, is a basic element of all of these requirements. In this project we attempt to predict coal production using the various features given in the data set. We implement regression algorithms, identify the best one, and fine-tune its parameters. The existing system uses a plain linear regression model; one of its main issues is that it has no regularization parameter and hence overfits the data. The system also does not provide enough pre-processing, visualization, or Exploratory Data Analysis (EDA). We aim to build regularized regression models such as ridge and lasso regression and to fine-tune their parameters. These models are trained on a data set engineered carefully through feature engineering.
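
A minimal sketch of the ridge/lasso modelling step described above is given below. The CSV path, column names, and alpha grid are placeholders (the abstract does not specify them), and scikit-learn stands in for whatever tooling the authors actually used; the point is only that the regularization strength is tuned rather than fixed.

```python
import pandas as pd
from sklearn.linear_model import Lasso, Ridge
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical data set: one row per mine-year with a numeric production target.
df = pd.read_csv("coal_production.csv")            # placeholder path
X = df.drop(columns=["production_short_tons"])     # placeholder feature columns
y = df["production_short_tons"]                    # placeholder target column

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

results = {}
for name, model in [("ridge", Ridge()), ("lasso", Lasso(max_iter=10_000))]:
    pipe = make_pipeline(StandardScaler(), model)
    # Tune the regularization strength alpha by cross-validated grid search.
    grid = GridSearchCV(pipe, {f"{name}__alpha": [0.01, 0.1, 1.0, 10.0, 100.0]}, cv=5)
    grid.fit(X_train, y_train)
    results[name] = (grid.best_params_, grid.score(X_test, y_test))  # held-out R^2

print(results)
```

The alpha penalty is what addresses the overfitting that the abstract attributes to plain, unregularized linear regression.
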
16

Snyder, Rose, Christopher Leyton, Richard Elkind, et al. "Utilization & Barriers to Curative Therapies Among Adult Aplastic Anemia Patients in a Multiethnic Urban Underserved Cohort." Blood 136, Supplement 1 (2020): 11–12. http://dx.doi.org/10.1182/blood-2020-142704.

Abstract:
Introduction Acquired Aplastic Anemia (AA) is a rare life-threatening immune-mediated bone marrow disorder. AlloHSCT is the only available curative therapy for SAA with a 3 year survival probability for adults between 72-80% in the United States. (D'Souza et all, Biol BMT 2020). The management of AA is complex and requires complicated regimens, recruitment of a BM donor, supportive care and close monitoring of hematopoietic response to therapy. Patients unable to follow closely with their physician or who lack sufficient social support are often deemed inappropriate candidates for BMT. The Bronx is one of the poorest urban counties in the US. 27.4% of Bronx residents live below the poverty line and 59% speak a language other than English at home. The socioeconomic circumstances for many Bronx residents present a multitude of health challenges that lead to poor health outcomes. Delivering the complex diagnosis and treatment strategies required for AA, a rare disease, can be particularly challenging for this population. Through this retrospective cohort study, we sought to find out the rate of utilization of curative therapies among patients with severe aplastic anemia in the Bronx, NY and to identify barriers to their care. We hypothesized that despite several social & financial barriers, SAA patients that can avail of IST +/- AlloHSCT at a tertiary care center will have similar survival trends as the national standard. Methods Our study used a data search tool called Clinical Looking Glass to identify adult patients diagnosed with AA at Montefiore Medical Center (MMC) between 2000 and 2018. Under an IRB approved protocol, we extracted all patients with a bone marrow biopsy performed between 2000 and 2018 and an ICD-9 diagnosis code of AA. Only patients aged 17 and above at the time of the index date (BM biopsy) were included. We also reviewed each chart to ensure the diagnosis of AA was confirmed by the BM biopsy. We performed a retrospective chart review of each patient in our cohort using our electronic medical records. Clinical data collected included patient demographic information, AA classification, date of diagnosis, date of last follow up or date of death, type of therapy received, and identification of socioeconomic barriers to receiving appropriate care. Results Thirty three adult patients (aged 17 and above at time of BM biopsy-confirmed diagnosis) were diagnosed with AA at Montefiore Medical Center between 2000 and 2018. Age at diagnosis ranged from 17-79, with a mean age of 36 and median age 28. Fifty five percent of patients were younger than 30 at the time of diagnosis. Forty two percent of patients were female and 57.6% were male. Fifty two percent of patients were African American or Black, 27.3% were Hispanic, 12.1% were White, and 9.1% were Asian or Asian Indian. Forty two percent of cases were non-severe, 18.2% were severe, and 36.4% were very severe. Additionally, approximately 70% of our cohort was unmarried. Thirty (90%) of the patients were treated with IST (CSA + ATG), and ten (30.3%) were also treated with eltrombopag. Of the 18 patients with severe and very severe disease, seven patients (38.9%) underwent AlloHSCT. Twenty five patients (76%) were noted to be alive at the time of data-cut off for analysis (March 2020), 4 of which were post-AlloHSCT. 45% of patients in our cohort noted significant social & financial barriers to their care. 
Discussion: Our study demonstrates that despite significant socio-economic barriers to care, adult patients with SAA who are treated with IST +/- AlloHSCT when indicated have overall survival that equals the national standard. Notably, our patient cohort was more than 75% Black and Hispanic. Race in the US is strongly correlated with socioeconomic status, education, and health insurance status. Some specific social barriers were identified in provider notes. These included difficulty finding a donor match, recent immigration, housing and financial insecurity, difficulty keeping follow-up appointments due to transportation issues, and lack of adequate health insurance. We believe the first step toward addressing the inequities in AA treatment is the continued acknowledgement of social barriers to care and addressing them in a timely manner. Socio-demographic research should inform health policy and guide interventions to ultimately reduce inequities in access and treatment for rare diseases among some of the most vulnerable in our population. Disclosures: No relevant conflicts of interest to declare.
17

García Díaz, Julián, Nieves Navarro Cano, and Edelmiro Rúa Álvarez. "Determination of the Real Cracking Moment of Two Reinforced Concrete Beams through the Use of Embedded Fiber Optic Sensors." Sensors 20, no. 3 (2020): 937. http://dx.doi.org/10.3390/s20030937.

Abstract:
This article investigates the possibility of applying weldable fiber optic sensors to the corrugated rebar in reinforced concrete structures to detect cracks and measure the deformation of the steel. Arrays were initially designed comprising two weldable fiber optic sensors and one temperature sensor to compensate for thermal effects on the deformation measurements. A series of tests was performed on the structures to evaluate the functioning of the sensors, and the deformation readings from the sensors were stored using dedicated software. Two simply supported reinforced concrete beams were designed for the tests and monitored in the zones of maximum bending moment. Different loading steps were applied to the beams at the center of the span using a loading cylinder, and the applied load was measured with a load cell. Analysis of the rebar deformation measurements obtained by the fiber optic sensors allowed us to determine the moment at which the concrete cracked under the applied loads and the deformation it underwent during the different loading steps. This shows that measuring rebar deformations with weldable fiber optic sensors provides very precise results. Future lines of research will concentrate on determining an expression for the real cracking moment of the concrete.
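
For reference, the elastic estimate that an experimentally determined cracking moment is usually compared against is the classical gross-section formula below; it is not given in the abstract, and the symbols follow common reinforced concrete notation:

```latex
% Cracking moment of the uncracked (gross) concrete section:
\[
  M_{cr} = \frac{f_r \, I_g}{y_t}
\]
% where f_r is the modulus of rupture of the concrete (ACI 318, for example,
% takes f_r = 0.62\sqrt{f'_c} in MPa), I_g is the gross moment of inertia of
% the section, and y_t is the distance from the neutral axis to the extreme
% tension fiber.
```

In tests such as those described above, the measured cracking moment is typically identified from the load step at which the rebar strain versus load curve departs from its initial linear branch.
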
18

Hu, Ya-Han, Chun-Tien Tai, Chih-Fong Tsai, and Min-Wei Huang. "Improvement of Adequate Digoxin Dosage: An Application of Machine Learning Approach." Journal of Healthcare Engineering 2018 (August 19, 2018): 1–9. http://dx.doi.org/10.1155/2018/3948245.

Abstract:
Digoxin is a high-alert medication because of its narrow therapeutic range and high risk of drug-to-drug interactions (DDIs). Approximately 50% of digoxin toxicity cases are preventable, which motivated us to improve the treatment outcomes of digoxin. The objective of this study is to apply machine learning techniques to predict the appropriateness of the initial digoxin dosage. A total of 307 inpatients treated with digoxin between 2004 and 2013 at a medical center in Taiwan were included in the study. Ten independent variables were recorded, including demographic information, laboratory data, and whether the patients had congestive heart failure (CHF). A patient whose serum digoxin concentration was controlled at 0.5–0.9 ng/mL after the initial digoxin dosage was defined as having appropriate use of digoxin; otherwise, the patient was defined as having inappropriate use of digoxin. Weka 3.7.3, an open-source machine learning package, was adopted to develop the prediction models. Six machine learning techniques were considered: decision tree (C4.5), k-nearest neighbors (kNN), classification and regression tree (CART), random forest (RF), multilayer perceptron (MLP), and logistic regression (LGR). In the non-DDI group, the area under the ROC curve (AUC) of RF (0.912) was excellent, followed by that of MLP (0.813), CART (0.791), and C4.5 (0.784); the remaining classifiers performed poorly. For the DDI group, the AUC of RF (0.892) was the best, followed by CART (0.795), MLP (0.777), and C4.5 (0.774); the other classifiers' performances were less than ideal. The decision tree-based approaches and MLP exhibited markedly superior accuracy, regardless of DDI status. Although digoxin is a high-alert medication, its initial dose can be accurately determined using data mining techniques such as decision tree-based and MLP approaches. Developing a dosage decision support system may serve as a supplementary tool for clinicians and also increase drug safety in clinical practice.
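
The study itself used Weka 3.7.3; purely to illustrate the same workflow (train several classifiers and compare them by AUC), a rough scikit-learn equivalent is sketched below. The CSV path and label column are hypothetical, and scikit-learn's estimators are analogues rather than the exact Weka implementations of C4.5, CART, RF, MLP, kNN, and LGR.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeClassifier

# Hypothetical table: one row per patient, binary label "appropriate_dose".
df = pd.read_csv("digoxin_cohort.csv")     # placeholder path
X = df.drop(columns=["appropriate_dose"])
y = df["appropriate_dose"]

models = {
    "decision_tree": DecisionTreeClassifier(),                 # analogue of C4.5/CART
    "knn": make_pipeline(StandardScaler(), KNeighborsClassifier()),
    "random_forest": RandomForestClassifier(n_estimators=200),
    "mlp": make_pipeline(StandardScaler(), MLPClassifier(max_iter=2000)),
    "logistic_regression": make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)),
}

# Compare classifiers by cross-validated area under the ROC curve (AUC),
# the metric reported in the abstract.
for name, model in models.items():
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
    print(f"{name}: mean AUC = {auc:.3f}")
```
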
19

Segura, Peter Paul. "Oliverio O. Segura, MD (1933-2021) Through A Son’s Eyes – A Tribute to Dad." Philippine Journal of Otolaryngology Head and Neck Surgery 36, no. 1 (2021): 73. http://dx.doi.org/10.32412/pjohns.v36i1.1679.

Abstract:
I was born and raised in the old mining town of Barrio DAS (Don Andres Soriano), Lutopan, Toledo City where Atlas Consolidated Mining and Development Corp. (ACMDC) is situated. Dad started his practice in the company’s hospital as an EENT specialist in the early 60’s and was the ‘go to’ EENT Doc not only of nearby towns or cities (including Cebu City) but also the surrounding provinces in the early 70’s. In my elementary years, he was Assistant Director of ACMDC Hospital (we lived just behind in company housing, only a 3-minute walk). I grew interested in what my dad did, sometimes staying in his clinic an hour or so after school, amazed at how efficiently he handled his patients who always felt so satisfied seeing him. At the end of the day, there was always ‘buyot’ (basket) of vegetables, live chickens, freshwater crabs, crayfish, catfish or tilapia. I wondered if he went marketing earlier, but knew he was too busy for that (and mom did that) until I noticed endless lines of patients outside and remembered when he would say: “Being a doctor here - you’ll never go hungry!” I later realized they were PFs (professional fees) of his patients. As a company doctor, Dad received a fixed salary, free housing, utilities, gasoline, schooling for kids and a company car. It was the perfect life! The company even sponsored his further training in Johns-Hopkins, Baltimore, USA.
 
 A family man, he loved us so much and was a bit of a joker too, especially at mealtimes. Dad’s daily routine was from 8 am – 5 pm and changed into his tennis, pelota, or badminton outfit. He was the athlete, winning trophies and medals in local sports matches.
 
 Dad wanted me to go to the University of the Philippines (UP) High School in the city. I thought a change of environment would be interesting, but I would miss my friends. Anyway, I complied and there I started to understand that my dad was not just an EENT practicing in the Mines but was teaching in Cebu Institute of Medicine and Cebu Doctors College of Medicine (CDCM) and was a consultant in most of the hospitals in Cebu City. And still he went back up to the mountains, back to Lutopan, our mining town where our home was. The old ACMDC hospital was replaced with a new state-of-the-art hospital now named ACMDC Medical Center, complete with Burn Unit, Trauma center and an observation deck in the OR for teaching interns from CDCM. Dad enjoyed teaching them. Most of them are consultants today who are so fond of my dad that they always send their regards when they see me.
 
 My dad loved making model airplanes, vehicles, etc. and I realized I had that skill when I was 8 years old and I made my first airplane model. He used to build them out of Balsa wood which is so skillful. I can’t be half the man he was but I realized this hobby enhanced his surgical skills. My dad was so diplomatic and just said to get an engineering course before you become a pilot (most of dads brothers are engineers). I actually gave engineering a go, but after 1 ½ years I realized I was not cut out for it. I actually loved Biology and anything dealing with life and with all the exposure to my dad’s clinic and hospital activities … med school it was!
 
 At this point, my dad was already President of the ORL Central Visayas Chapter and was head of ENT Products and Hearing Center. As a graduate of the UP College of Medicine who finished Otorhinolaryngology residency with an additional year in Ophthalmology as one of the last EENTs to finish in UP PGH in the late 50’s, he hinted that if I finished my medical schooling in CDCM that I consider Otorhinolaryngology as a residency program and that UP-PGH would be a good training center. I ended up inheriting the ORL practice of my dad mostly, who taught me some of Ophthalmology outpatient procedures. Dad showed me clinical and surgical techniques in ENT management especially how to deal with patients beyond being a doctor! You don’t learn this in books but from experience. I learned a lot from my dad. Just so lucky I guess! He actually designed and made his own ENT Treatment Unit, which I’m still using to this day (with some modifications of my own). And he created a certain electrically powered ‘eye magnet’ with the help of my cousin (who’s an engineer now in Chicago) which can attract metallic foreign bodies from within the eyeball to the surface so they can easily be picked out – it really works!
 
 Dad loved to travel in his younger years, especially abroad for conventions or simply for leisure, most of the time with my mom. But as he got older, travel became uncomfortable. His last trip with me was in 2012 for the AAO-HNS Convention in Washington, DC. It was a great time: after the convention we proceeded to a US Navy air show in nearby Virginia, meeting up with my brother, who is retired from the USN. Then we took the train to New York and stayed with my sister, a PICU nurse at NY Presbyterian. Then it was off to Missouri and Ohio, visiting the National Museum of the US Air Force, the largest military aircraft museum in the world.
 
 For years, Dad had been battling heredofamilial hypercholesterolemia, which took its toll on his liver and made him weak and tired, but still he practiced and continued teaching and sharing his knowledge until he retired at the age of 80. By then, my wife and I would take him and my mom out on weekends; he loved to be driven around and to eat in different places. I witnessed how he suffered from his illness in his final years. But he never showed it or complained, and he never even wanted to use a cane! He didn’t want to be a burden to anyone. What affected me most was that my dad passed away and I wasn’t even there. I had helped call for a physician to rush to the house and had oxygen cylinders brought for him, as his end-stage liver cirrhosis was causing cardio-pulmonary complications (non-COVID). Amidst all this, I was the one admitted for 14 days because of COVID-19 pneumonia. My dad passed away peacefully at home as I was being discharged from the hospital. He was 88. I never reached him to say goodbye, and I cried when I got home, still dyspneic and recovering from the viral pneumonia. My loved ones told me that Dad hadn’t wanted me to stress out taking care of him, as I had been doing all along, but instead to rest and recuperate myself. I cried again at that thought. In my view, he was not only a great Physician and Surgeon but also the greatest Dad. He lived a full life and touched so many lives with his treatments, charity services and teaching of new physicians. It is seeing, remembering and carrying on what he showed and taught us that really makes us miss him. I really love and miss my dad, and, with a smile on my face, I see he’s also happy to be with his brothers and sisters who passed on ahead, and that he’s at rest. He was a man content; I remember he always said, “As long as I have a roof over my head and a bed to rest my back, I’m okay!”
APA, Harvard, Vancouver, ISO, and other styles
20

Barnard-Kelly, Katharine D., Diana Naranjo, Shideh Majidi, et al. "Suicide and Self-inflicted Injury in Diabetes: A Balancing Act." Journal of Diabetes Science and Technology 14, no. 6 (2019): 1010–16. http://dx.doi.org/10.1177/1932296819891136.

Full text
Abstract:
Glycemic control in type 1 diabetes mellitus (T1DM) remains a challenge for many, despite the availability of modern diabetes technology. While technologies have proven glycemic benefits and may reduce excess mortality in some populations, both mortality and complication rates remain significantly higher in T1DM than in the general population. Diabetes technology can reduce some burdens of diabetes self-management; however, it may also increase anxiety, stress, and diabetes-related distress. Additional workload associated with diabetes technologies and the dominant focus on metabolic control may come at the expense of quality of life. Diabetes is associated with significantly increased risk of suicidal ideation, self-harm, and suicide. The risk increases for those with diabetes and comorbid mood disorder. For example, the prevalence of depression is significantly higher in people with diabetes than in the general population, and thus, people with diabetes are at even higher risk of suicide. The Centers for Disease Control and Prevention reported a 24% rise in US national suicide rates between 1999 and 2014, the highest in 30 years. In the United Kingdom, 6000 suicides occur annually. Rates of preventable self-injury mortality stand at 29.1 per 100 000 population. Individuals with diabetes have an increased risk of suicide, being three to four times more likely to attempt suicide than the general population. Furthermore, adolescents aged 15 to 19 are most likely to present at emergency departments for self-inflicted injuries (9.6 per 1000 visits), with accidents, alcohol-related injuries, and self-harm being the strongest risk factors for suicide, the second leading cause of death among 10 to 24 year olds. While we have developed tools to improve glycemic control, we must be cognizant that the psychological burden of chronic disease is a significant problem for this vulnerable population. It is crucial to determine the psychosocial and behavioral predictors of uptake and continued use of technology in order to aid the identification of those individuals most likely to realize benefits of any intervention as well as those individuals who may require more support to succeed with technology.
APA, Harvard, Vancouver, ISO, and other styles
21

Grupp, Stephan A., Theodore W. Laetsch, Jochen Buechner, et al. "Analysis of a Global Registration Trial of the Efficacy and Safety of CTL019 in Pediatric and Young Adults with Relapsed/Refractory Acute Lymphoblastic Leukemia (ALL)." Blood 128, no. 22 (2016): 221. http://dx.doi.org/10.1182/blood.v128.22.221.221.

Full text
Abstract:
Abstract A single-center trial of CD19-directed, lentivirally transduced chimeric antigen receptor (CAR) T cells (CTL019) for relapsed and refractory (r/r) B-ALL pediatric patients showed rates of CR >90% with prolonged CAR T cell persistence/CR without further therapy in the majority of patients infused (Maude NEJM 2014). We report here the feasibility, safety and efficacy of the first multicenter global pivotal registration CAR T cell trial. Features of this trial include: i) the first trial in which industry-manufactured cells were provided to all patients; ii) enrollment across 25 centers in the US, EU, Canada, Australia, and Japan; iii) successful transfer and manufacturing of cells in a global supply chain; and iv) successful implementation of cytokine release syndrome (CRS) management across a global trial. All patients had CD19 positive B-ALL with morphologic marrow tumor involvement at registration (>5% blasts), and were either primary refractory; chemo-refractory after first relapse; relapsed after second-line therapy; or ineligible for allogeneic SCT. CTL019 was manufactured from patient PBMC under GMP conditions in the US, at a centralized "sponsor-owned" manufacturing facility, and supplied to all sites. The primary endpoint of overall remission rate (CR+CRi) within 3 months and secondary endpoints (EFS, DOR, OS and safety) were assessed by an independent review committee. Based on preliminary data as of March 2016, 57 patients were enrolled. There were 3 manufacturing failures (5%), 5 patients were not infused due to death or adverse events (9%), and 15 patients were pending infusion at the data cutoff. Following fludarabine/cyclophosphamide lymphodepleting chemotherapy in the majority of the patients, 34 patients (median age 11 [3-23], 50% with prior HSCT) were infused with a single dose of CTL019 at a median dose of 2.9 × 10^6 transduced CTL019 cells/kg (range 0.2 to 4). Among 29 patients reaching D28 prior to the data cutoff, 83% (24/29) achieved CR or CRi by local investigator assessment, all of which were MRD-negative. Two early deaths occurred prior to initial disease assessment, one due to disease progression and one due to intracranial hemorrhage. Two patients did not respond. One patient was in CR by BM at D28, but CSF was not assessed; therefore this patient was classified as an "incomplete" assessment. Safety was managed by a protocol-specified CRS algorithm with no cases of refractory CRS. Using the Penn CRS grading scale, 82% of patients experienced CRS, with 7 grade 3 (21%) and 8 grade 4 (24%) events. 44% of patients with CRS required anti-cytokine therapy; all received tocilizumab with or without other anti-cytokine therapy, with complete resolution of CRS. Besides CRS, the most common grade 3 and 4 non-hematologic AEs were febrile neutropenia (29%), increased bilirubin (21%), increased AST (21%), and hypotension (21%). 21% of patients experienced grade 3 or 4 neuropsychiatric events including confusion, delirium, encephalopathy, agitation and seizure; no cerebral edema was reported. CTL019 in vivo cellular kinetics by qPCR demonstrated transgene persistence in blood in responding patients at and beyond 6 months. Overall exposure (AUC 0-28d) and maximal expansion (Cmax) of CTL019 DNA measured by qPCR were higher in responding compared with non-responding patients. 
In summary, this pivotal global study in pediatric and young adult patients with r/r B-ALL receiving CTL019 confirms a high level of efficacy and a similar safety profile to that shown in the prior single-center experience. Safety was effectively and reproducibly managed by appropriately trained investigators. The study has completed accrual. At the meeting, updated data from a planned formal interim analysis including safety, efficacy (primary and selected secondary endpoints), cellular kinetics, and impact of anti-cytokine therapy will be presented for more than 50 patients infused at 25 global sites. Disclosures Grupp: Jazz Pharmaceuticals: Consultancy; Novartis: Consultancy, Research Funding; Pfizer: Consultancy. Laetsch: Novartis: Consultancy; Loxo Oncology: Consultancy. Bittencourt: Seattle Genetics: Consultancy; Jazz Pharmaceuticals: Consultancy, Other: Educational Grant. Maude: Novartis: Consultancy. Myers: Novartis Pharmaceuticals: Consultancy. Rives: Novartis: Consultancy; Jazz Pharma: Consultancy. Nemecek: Medac, GmbH: Research Funding; Novartis: Consultancy; National Marrow Donor Program: Membership on an entity's Board of Directors or advisory committees. Schlis: Novartis: Honoraria. Martin: Jazz Pharmaceuticals: Other: One time discussion panel; Novartis: Other: Support of clinical trials. Bader: Medac: Consultancy, Research Funding; Riemser: Research Funding; Neovii Biotech: Research Funding; Servier: Consultancy, Honoraria; Novartis: Consultancy, Honoraria. Peters: Novartis: Consultancy; Jazz: Speakers Bureau; Amgen: Consultancy; Pfizer: Consultancy; Medac: Consultancy. Biondi: Novartis: Membership on an entity's Board of Directors or advisory committees, Other: Advisory Board; Cellgene: Other: Advisory Board; BMS: Membership on an entity's Board of Directors or advisory committees. Baruchel: Servier: Consultancy; Novartis: Consultancy; Celgene: Consultancy; Jazz: Consultancy; Baxalta: Research Funding. June: University of Pennsylvania: Patents & Royalties; Johnson & Johnson: Research Funding; Celldex: Consultancy, Equity Ownership; Pfizer: Honoraria; Immune Design: Consultancy, Equity Ownership; Novartis: Honoraria, Patents & Royalties: Immunology, Research Funding; Tmunity: Equity Ownership, Other: Founder, stockholder. Sen: Novartis: Employment. Zhang: Novartis: Employment. Thudium: Novartis: Employment. Wood: Novartis Pharmaceuticals: Employment, Other: Stock. Taran: Novartis: Employment. Pulsipher: Chimerix: Consultancy; Jazz Pharmaceutical: Consultancy; Novartis: Consultancy, Other: Study Steering Committee; Medac: Other: Housing support for conference.
APA, Harvard, Vancouver, ISO, and other styles
22

Baba, H., T. Watanabe, K. Miyata, and H. Matsumoto. "Area Business Continuity Management, A New Approach to Sustainable Local Economy." Journal of Disaster Research 10, no. 2 (2015): 204–9. http://dx.doi.org/10.20965/jdr.2015.p0204.

Full text
Abstract:
The flooding of the Chao Phraya River in Thailand and the Great East Japan Earthquake and Tsunami, both of which occurred in 2011, reminded us of the risks of business disruption and further impacts on national, regional, and global economies through supply chains when disasters occur anywhere in the world. Considering the increasing economic losses attributable to disasters, the fourth session of the Global Platform for Disaster Risk Reduction (2013) aimed to promote resilience and foster new opportunities for public-private partnerships as part of an overall approach to improving risk governance. Furthermore, it highlighted that a growing world requires a new approach to development action, emphasizing the private sector’s role in managing disaster risks. One of the most significant private sector contributions to disaster risk management is the creation of the business continuity plan/planning (BCP) and business continuity management (BCM) systems, which were standardized as ISO 22301 and disseminated in many business enterprises around the world. However, a BCP or BCM system has been neither formulated for nor implemented in most local enterprises in industry-agglomerated areas, even though these are located in areas vulnerable to disasters. Moreover, in the case of large-scale disasters, a business enterprise’s capacity may be too limited to mitigate damages and maintain operations through its own efforts, even if BCPs are prepared. The main reason for this is the disruption of public infrastructure and services. In order to minimize the negative economic impacts or economic losses, particularly in the case of a large-scale disaster that disrupts the fundamental infrastructure in certain areas, it is important to conduct risk assessment on a proper scale and to prepare scenario-based disaster management plans for area-wide damage mitigation. In addition, it is essential to have integrated resource management and strategic recovery plans to support each enterprise’s BCM actions in coordination with public sector activities. Considering this background, the Japan International Cooperation Agency (JICA) and the ASEAN Coordination Center for Humanitarian Assistance on Disaster Management (AHA Center) launched the “Natural Disaster Risk Assessment and Area Business Continuity Plan Formulation for Industrial Agglomerated Areas in the ASEAN Region” project in February 2013. The project introduced the new concept of the Area BCP, which, based on a risk assessment of the area, designates a framework and direction for coordinated damage mitigation measures and recovery actions by stakeholders, including individual enterprises, industrial area managers, local authorities, and infrastructure administrators, to allow business continuation of the industrial area as a whole. The project also established Area BCM as a cyclic process of risk assessment, sharing risk and impact information, determining a common strategy of risk management, developing the Area BCP, implementing and monitoring the planned actions to continuously improve the Area BCM system, and coordinating among stakeholders, in order to improve the capability for effective business continuity of the area. This paper aims to evaluate the progress of the project and to explore lessons from the applied process of Area BCM and its benefits.
APA, Harvard, Vancouver, ISO, and other styles
23

Boden, T. A., M. Krassovski, and B. Yang. "The AmeriFlux data activity and data system: an evolving collection of data management techniques, tools, products and services." Geoscientific Instrumentation, Methods and Data Systems Discussions 3, no. 1 (2013): 59–85. http://dx.doi.org/10.5194/gid-3-59-2013.

Full text
Abstract:
Abstract. The Carbon Dioxide Information Analysis Center (CDIAC) at Oak Ridge National Laboratory (ORNL), USA has provided scientific data management support for the US Department of Energy and international climate change science since 1982. Among the many data archived and available from CDIAC are collections from long-term measurement projects. One current example is the AmeriFlux measurement network. AmeriFlux provides continuous measurements from forests, grasslands, wetlands, and croplands in North, Central, and South America and offers important insight about carbon cycling in terrestrial ecosystems. To successfully manage AmeriFlux data and support climate change research, CDIAC has designed flexible data systems using proven technologies and standards blended with new, evolving technologies and standards. The AmeriFlux data system, composed primarily of a relational database, a PHP-based data interface and an FTP server, offers a broad suite of AmeriFlux data. The data interface allows users to query the AmeriFlux collection in a variety of ways and then subset, visualize and download the data. From the perspective of data stewardship, on the other hand, this system is designed for CDIAC to easily control database content, automate data movement, track data provenance, manage metadata content, and handle frequent additions and corrections. CDIAC and researchers in the flux community developed data submission guidelines to enhance the AmeriFlux data collection, enable automated data processing, and promote standardization across regional networks. Both continuous flux and meteorological data and irregular biological data collected at AmeriFlux sites are carefully scrutinized by CDIAC using established quality-control algorithms before the data are ingested into the AmeriFlux data system. Other tasks at CDIAC include reformatting and standardizing the diverse and heterogeneous datasets received from individual sites into a uniform and consistent network database, generating high-level derived products to meet the current demands from a broad user group, and developing new products in anticipation of future needs. In this paper, we share our approaches to meet the challenges of standardizing, archiving and delivering quality, well-documented AmeriFlux data worldwide to benefit others with similar challenges of handling diverse climate change data, to further heighten awareness and use of an outstanding ecological data resource, and to highlight expanded software engineering applications being used for climate change measurement data.
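The quality-control screening step described above lends itself to a short illustration. The sketch below shows a minimal range check applied to flux records before ingestion; the variable names, plausibility bounds, file layout, and the -9999 missing-value flag are illustrative assumptions for this example, not CDIAC's actual algorithms or formats.

# Minimal sketch (not CDIAC's actual code) of a pre-ingestion range check in the
# spirit of the quality-control screening described above. Variable names, units,
# bounds, and the -9999 missing-value flag are illustrative assumptions.
import csv

PLAUSIBLE_RANGES = {
    "NEE": (-50.0, 50.0),   # net ecosystem exchange, umol CO2 m-2 s-1 (assumed bounds)
    "TA": (-60.0, 60.0),    # air temperature, deg C (assumed bounds)
    "RH": (0.0, 100.0),     # relative humidity, percent
}
MISSING = -9999.0           # common missing-value flag in flux-network files

def screen_record(record):
    """Return (variable, value, reason) tuples for values outside plausible bounds."""
    problems = []
    for var, (lo, hi) in PLAUSIBLE_RANGES.items():
        raw = record.get(var)
        if raw in (None, ""):
            continue
        value = float(raw)
        if value == MISSING:
            continue  # missing values pass through untouched
        if not lo <= value <= hi:
            problems.append((var, value, "outside [%s, %s]" % (lo, hi)))
    return problems

def screen_file(path):
    """Screen a CSV of half-hourly records; return rows flagged for review."""
    flagged = []
    with open(path, newline="") as fh:
        # data rows start on line 2, after the header row
        for row_number, row in enumerate(csv.DictReader(fh), start=2):
            issues = screen_record(row)
            if issues:
                flagged.append((row_number, issues))
    return flagged

A real pipeline of this kind would typically add spike detection, cross-variable consistency checks, and provenance logging before records enter the network database; the point here is only the general shape of a screening pass that separates flagged records from clean ones.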
APA, Harvard, Vancouver, ISO, and other styles
24

Boden, T. A., M. Krassovski, and B. Yang. "The AmeriFlux data activity and data system: an evolving collection of data management techniques, tools, products and services." Geoscientific Instrumentation, Methods and Data Systems 2, no. 1 (2013): 165–76. http://dx.doi.org/10.5194/gi-2-165-2013.

Full text
Abstract:
Abstract. The Carbon Dioxide Information Analysis Center (CDIAC) at Oak Ridge National Laboratory (ORNL), USA has provided scientific data management support for the US Department of Energy and international climate change science since 1982. Among the many data archived and available from CDIAC are collections from long-term measurement projects. One current example is the AmeriFlux measurement network. AmeriFlux provides continuous measurements from forests, grasslands, wetlands, and croplands in North, Central, and South America and offers important insight about carbon cycling in terrestrial ecosystems. To successfully manage AmeriFlux data and support climate change research, CDIAC has designed flexible data systems using proven technologies and standards blended with new, evolving technologies and standards. The AmeriFlux data system, comprised primarily of a relational database, a PHP-based data interface and a FTP server, offers a broad suite of AmeriFlux data. The data interface allows users to query the AmeriFlux collection in a variety of ways and then subset, visualize and download the data. From the perspective of data stewardship, on the other hand, this system is designed for CDIAC to easily control database content, automate data movement, track data provenance, manage metadata content, and handle frequent additions and corrections. CDIAC and researchers in the flux community developed data submission guidelines to enhance the AmeriFlux data collection, enable automated data processing, and promote standardization across regional networks. Both continuous flux and meteorological data and irregular biological data collected at AmeriFlux sites are carefully scrutinized by CDIAC using established quality-control algorithms before the data are ingested into the AmeriFlux data system. Other tasks at CDIAC include reformatting and standardizing the diverse and heterogeneous datasets received from individual sites into a uniform and consistent network database, generating high-level derived products to meet the current demands from a broad user group, and developing new products in anticipation of future needs. In this paper, we share our approaches to meet the challenges of standardizing, archiving and delivering quality, well-documented AmeriFlux data worldwide to benefit others with similar challenges of handling diverse climate change data, to further heighten awareness and use of an outstanding ecological data resource, and to highlight expanded software engineering applications being used for climate change measurement data.
APA, Harvard, Vancouver, ISO, and other styles
25

Carter, Jackie, Rafael Alberto Méndez-Romero, Pete Jones, Vanessa Higgins, and Andre Luiz Silva Samartini. "EmpoderaData: Sharing a successful work-placement data skills training model within Latin America, to develop capacity to deliver the SDGs." Statistical Journal of the IAOS 37, no. 3 (2021): 1009–21. http://dx.doi.org/10.3233/sji-210842.

Full text
Abstract:
EmpoderaData – from the Spanish word empoderar ‘to empower’ – is a partnership research project between the University of Manchester (UK), Fundação Getulio Vargas (Brazil), Universidad del Rosario (Colombia) and Data-Pop Alliance (US and France). The project builds upon a successful data-driven, research-led paid internship programme in the UK (Q-Step) which enables undergraduate social science students to practise data skills through immersion in the workplace. Two-hundred and fifty students have benefited from the Q-Step programme in six years, many graduating into analytical careers in civic society and industry. EmpoderaData aims to build on this experiential learning initiative by developing a data fellowship programme in order to foster and develop data literacy skills in Latin America, led by the need to address society’s most pressing issues and using the framework of the Sustainable Development Goals (SDGs). EmpoderaData Phase 1 explored whether the internship model would have relevance and usefulness within the context of three Latin American case study countries (Brazil, Colombia and Mexico). The team set out to establish a baseline of the state of data literacy and existing training programs in Brazil, Colombia and Mexico. As part of a ‘Big Data for the Common Good’ event, a workshop was held in São Paulo with thirty participants representing data literacy advocacy or policy formation and drawn from civil society, academia, the private and public sector. The main conclusions from this first phase are: (1) the most requested data literacy training need is for basic skills, including introductory statistics, foundation data analysis and methodological skills; (2) paid data fellowship models are acknowledged as a useful intervention; and (3) the notion of a ‘hybrid’ professional to build data literacy capacities for ‘social science’ purposes provides a practical way forward. In the EmpoderaData Phase 2 project our focus was on Colombia to explore the challenges and opportunities of developing a pilot data fellowship model there. Engaging with national, regional and international capacity development efforts, this highlighted a demand for partnerships between universities and organisations working on the social challenges represented by the SDGs. Partnerships ensure that the in-country data literacy pipeline is strengthened in a home-grown, self-sustaining way, producing a steady flow of data literate graduates into the institutions and sectors where critical data skills are most needed. We report on how the EmpoderaData project is exploring working with students studying Science, Technology, Engineering and Mathematics (STEM) degrees at the Universidad del Rosario, to improve the application of statistical methods to the social sciences. The aim is to strengthen STEM skills and develop youth empowerment across Colombia, urban and rural areas, to improve the quality of statistical education at the national level, and support the skills needed to deliver the SDGs. In parallel, the Fundação Getulio Vargas (FGV) Business School in São Paulo agreed to trial the work-placement programme in their undergraduate business and public policy degrees through a programme entitled ‘The FGV Q-Step Center to improve quantitative skills in undergraduate business students’. This two-year-long funded study will enable us to explore the transferability of the internship model from the UK to Brazil. 
The paper will discuss how the programme was established (following the lessons learned from EmpoderaData), explain how this model will be implemented in FGV, especially paying attention to how the curriculum will develop to support it, and how the impact of the programme will be monitored. The knowledge exchange generated from this study will complement the research conducted through the EmpoderaData project. The paper will cover the progress of the EmpoderaData project and FGV-Q-Step Center to date and explore how we are developing these initiatives, the challenges we have faced, and how through partnership working we are developing capacity building in statistical and data skills training.
APA, Harvard, Vancouver, ISO, and other styles
26

Topping, Kenneth C., Haruo Hayashi, William Siembieda, and Michael Boswell. "Special Issue on “Building Local Capacity for Long-term Disaster Resilience” Toward Disaster Resilient Communities." Journal of Disaster Research 5, no. 2 (2010): 127–29. http://dx.doi.org/10.20965/jdr.2010.p0127.

Full text
Abstract:
This special issue of JDR is centered on the theme of “Building Local Capacity for Long-term Disaster Resilience.” Eight papers and one commentary describe challenges in various countries of promoting disaster resilience at local, sub-national, and national levels. Resilience is broadly defined here as the capacity of a community to: 1) survive a major disaster; 2) retain essential structure and functions; and 3) adapt to post-disaster opportunities for transforming community structure and functions to meet new challenges. This working definition is similar to others put forward in the growing literature on resilience. Resilience can also be seen as an element of sustainability. Initially referring only to environmental conditions, the concept of sustainable development was defined as that which meets the needs of present generations while not compromising the ability of future generations to meet their own needs (Brundtland Commission, Our Common Future, 1987). Now, the term sustainability has come to mean the need to preserve all resources for future use, including social, physical, economic, cultural and historical, as well as environmental resources. Disasters destroy resources, making communities less sustainable or even unsustainable. Resilience helps to protect resources, among other things, through coordination of all four disaster management functions: mitigation, preparedness, response, and recovery. Mitigation commonly involves reduction of risks and prevention of disaster losses through long-term sustained actions modifying the environment. Preparedness involves specific preparations for what to do and how to respond during a disaster at the personal, household, and community level. Response means actions taken immediately after a disaster to rescue survivors, conduct evacuation, feed and shelter victims, and restore communications. Recovery involves restoring lives, infrastructure, services, and economic activity, while seeking long-term community improvement. When possible, emphasis should be placed on building local resilience before a disaster, when opportunities are greater for fostering sustainable physical, social, economic, and environmental structures and functions. Waiting until after a disaster to pursue sustainability invites preventable losses and reduces post-disaster resilience and opportunities for improvement. Community resilience involves both “soft” strategies which optimize disaster preparedness and response, and “hard” strategies which mitigate natural and human-caused hazards, thereby reducing disaster losses. Both “soft” and “hard” strategies are undertaken during disaster recovery. In many countries “soft” and “hard” resilience approaches coexist as uncoordinated activities. However, experience suggests that disaster outcomes are better when “soft” and “hard” strategies are purposely coordinated. Thus, “smart” resilience involves coordination of both “soft” and “hard” resilience strategies, i.e., “smart” resilience = “soft” resilience + “hard” resilience. This concept is reflected in papers in Part 1 of this special issue, based on case studies from India, Japan, Mexico, Taiwan, and the US. Additional resilience studies from Japan, the US, and Venezuela will be featured in Part 2 of this special issue. The first group of papers in Part 1 reviews resilience issues in regional and community recovery. 
Chandrasekhar (1) uses a case study of post-disaster recovery in southern India following the 2004 Southeast Asia Tsunami to illustrate the varying effects of a formal stakeholder participatory framework on capacity building. Chen and Wang (2) examine multiple resiliency factors reflected in community recovery case studies from the Taiwan 1999 Chi-Chi Earthquake and debris flow evacuation after Typhoon Morakot of 2009. Kamel (3) compares factors affecting housing recovery following the US Northridge Earthquake and Hurricane Katrina. The second group of papers examines challenges of addressing resiliency at national and sub-national scales. Velazquez (4) examines national factors affecting disaster resilience in Mexico. Topping (5) provides an overview of the U.S. Disaster Mitigation Act of 2000, a nationwide experiment in local resilience capacity building through federal financial incentives encouraging local hazard mitigation planning. Boswell, Siembieda, and Topping (6) describe a new method to evaluate effectiveness of federally funded hazard mitigation projects in the US through California’s State Mitigation Assessment Review Team (SMART) loss reduction tracking system. The final group of papers explores methods of analysis, information dissemination, and pre-event planning. Siembieda (7) presents a model which can be deployed at any geographic level involving timely access to assets in order to reduce pre- and post-disaster vulnerability, as illustrated by community disaster recovery experiences in Central America. Hayashi (8) outlines a new information dissemination system usable at all levels, called "micromedia," which provides individuals with real-time disaster information regardless of their location. Finally, Poland (9) concludes with an invited special commentary addressing the challenges of creating more complete earthquake disaster resilience through pre-event evaluation of post-event needs at the community level, using San Francisco as the laboratory. The Editorial Committee extends its sincere appreciation to both the contributors and the JDR staff for their patience and determination in making this special issue possible. Thanks also to the reviewers for their insightful analytic comments and suggestions. Finally, the Committee wishes to thank Bayete Henderson for his keen and thorough editorial assistance and copy-editing support.
APA, Harvard, Vancouver, ISO, and other styles
27

Savidge, Nicole, Susan K. Parsons, Daqin Mao, Ruth Ann Weidner, Kimberly S. Esham, and Angie Mae Rodday. "Quantifying Social Disadvantage for Patients with Sickle Cell Disease: Implications for Quality of Care." Blood 132, Supplement 1 (2018): 317. http://dx.doi.org/10.1182/blood-2018-99-113558.

Full text
Abstract:
Abstract Background: Socioeconomic disadvantage negatively affects healthcare utilization and disease outcomes. The Area Deprivation Index (ADI) is a well-established method for quantifying socioeconomic disadvantage that combines 17 US Census block indicators of poverty, education, housing, and employment (Singh Am J Public Health. 2003). ADI scores have been shown to be associated with hospital readmission and mortality in the general population and in chronic diseases, such as diabetes (Kind et al. Ann Intern Med. 2014). However, the ADI has not been used in patients with sickle cell disease (SCD), a group that faces health disparities based on race and socioeconomic status and that has high healthcare utilization, including a 41% readmission rate among young adults (Brousseau et al. JAMA. 2010). We applied the ADI to patients with SCD hospitalized for vaso-occlusive crisis (VOC). We described patient, disease, and treatment characteristics by high and low ADI scores. Methods: This retrospective cohort study includes 449 consecutive hospitalizations for VOC among 63 adult patients (≥18 years) with SCD from 2013-2016 at an urban, US-based academic medical center. For this analysis, one hospitalization was randomly selected for each patient. Demographics, including street address, SCD characteristics, complications, treatment, hospital entry and discharge pain scores, length of stay and 30-day readmission were abstracted from electronic medical records (EMR) by trained study staff. History of SCD complications (e.g., acute chest syndrome, avascular necrosis) was reviewed by two hematologists. The 2013 Massachusetts (MA) ADI dataset was used to assign patients an ADI decile value based on the census block corresponding to the patient-reported address in the EMR (https://www.neighborhoodatlas.medicine.wisc.edu/). ADI deciles were calculated using ADI scores from all census blocks in MA ranked from lowest to highest (1-10), where higher scores indicate more deprivation. ADIs were divided into a more disadvantaged group and a less disadvantaged group at the sample median, similar to previous research (Hu et al. Am J Med Qual. 2018). Summary statistics described patient and disease characteristics separately by these groups. Results: Out of the 63 patients in our cohort, the ADI was calculated for the 57 patients who had a valid MA address (90.5%). The median age was 26 years, with 56.1% female and 77.2% black (Table). The majority of patients were publicly insured (28.1% Medicare, 66.7% Medicaid), while only 5.3% were privately insured. The most common genotype was Hemoglobin (Hb) SS (61.4%), followed by Hb SC (22.8%). The median MA ADI rank was 6; 27 patients (47.4%) were classified as less disadvantaged and 30 (52.6%) as more disadvantaged. Among the less disadvantaged group, 88.9% were black and 11.1% were Hispanic, while the more disadvantaged group was 66.7% black and 30.0% Hispanic. The less disadvantaged group had fewer Medicaid patients (59.3%) than the more disadvantaged group (73.3%). Hb SS genotype was more common in the less disadvantaged group (70.4%) than the more disadvantaged group (53.3%), where there were more patients with Hb SC (30.0%). For treatment and management, the less disadvantaged group had more patients prescribed hydroxyurea (74.1%) and home opioids (92.6%), compared to the more disadvantaged group (56.7% hydroxyurea; 80.0% home opioids). Length of hospital stay and pain scores were similar across the two groups. 
The 30-day readmission rate was 14.8% for the less disadvantaged group and 23.3% for the more disadvantaged group. Conclusions: We successfully applied a validated measure of socioeconomic disadvantage to a cohort of hospitalized patients with SCD. The more disadvantaged group included more Hispanic patients, fewer patients with the Hb SS genotype, and more patients with Medicaid insurance. Further, there appeared to be differences in SCD management and readmissions by group, indicating possible disparities and opportunities to improve care and outcomes. Based solely on a patient's address from the EMR, the ADI can be used at the point of care to identify the most vulnerable patients and ensure that they are receiving appropriate levels of social support. Given that SCD is referred to as a disease of double disadvantage, it is crucial to understand the intersection of this debilitating chronic illness within the context of socioeconomic disadvantage. Disclosures Parsons: Seattle Genetics: Research Funding. Rodday: Seattle Genetics: Research Funding.
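The decile ranking and median split described in the Methods can be illustrated with a short sketch. The census-block IDs, scores, and patient labels below are invented for illustration; this is not the study's code and it does not query the Neighborhood Atlas.

# Minimal sketch of the ADI handling described above: rank census-block ADI scores
# into deciles 1-10 (higher = more deprived), then split a patient cohort at the
# sample median into less/more disadvantaged groups. All data here are toy values.
import statistics

def adi_deciles(block_scores):
    """Rank census blocks by raw ADI score into deciles 1-10 (10 = most deprived)."""
    ordered = sorted(block_scores, key=block_scores.get)
    n = len(ordered)
    return {block: min(10, rank * 10 // n + 1) for rank, block in enumerate(ordered)}

def split_cohort(patient_blocks, deciles):
    """Split patients at the sample median decile into less/more disadvantaged groups."""
    patient_decile = {p: deciles[b] for p, b in patient_blocks.items() if b in deciles}
    median = statistics.median(patient_decile.values())
    less = sorted(p for p, d in patient_decile.items() if d <= median)
    more = sorted(p for p, d in patient_decile.items() if d > median)
    return less, more, median

# Toy example: three census blocks (IDs are made up), four patients.
scores = {"250250001001": 12.3, "250250002002": 45.6, "250250003003": 78.9}
deciles = adi_deciles(scores)
patients = {"pt1": "250250001001", "pt2": "250250002002",
            "pt3": "250250003003", "pt4": "250250003003"}
less, more, median = split_cohort(patients, deciles)
print(deciles)     # {'250250001001': 1, '250250002002': 4, '250250003003': 7}
print(less, more)  # ['pt1', 'pt2'] ['pt3', 'pt4']  (split at median decile 5.5)

With real data, the decile lookup would be built from every census block in the state, as in the paper, so that a patient's decile reflects statewide rather than within-cohort deprivation.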
APA, Harvard, Vancouver, ISO, and other styles
28

Hayashi, Haruo. "Long-term Recovery from Recent Disasters in Japan and the United States." Journal of Disaster Research 2, no. 6 (2007): 413–18. http://dx.doi.org/10.20965/jdr.2007.p0413.

Full text
Abstract:
In this issue of Journal of Disaster Research, we introduce nine papers on societal responses to recent catastrophic disasters with special focus on long-term recovery processes in Japan and the United States. As disaster impacts increase, we also find that recovery times take longer and the processes for recovery become more complicated. On January 17th of 1995, a magnitude 7.2 earthquake hit the Hanshin and Awaji regions of Japan, resulting in the largest disaster in Japan in 50 years. In this disaster, which we call the Kobe earthquake hereafter, over 6,000 people were killed and the damage and losses totaled more than 100 billion US dollars. The long-term recovery from the Kobe earthquake disaster took more than ten years to complete. One of the most important responsibilities of disaster researchers has been to scientifically monitor and record the long-term recovery process following this unprecedented disaster and discern the lessons that can be applied to future disasters. The first seven papers in this issue present some of the key lessons our research team learned from studying the long-term recovery following the Kobe earthquake disaster. We have two additional papers that deal with two recent disasters in the United States – the terrorist attacks on the World Trade Center in New York on September 11 of 2001 and the devastation of New Orleans by Hurricane Katrina in 2005 and the subsequent levee failures. These disasters have raised a number of new research questions about long-term recovery that US researchers are studying because of the unprecedented size and nature of these disasters’ impacts. Mr. Mammen’s paper reviews the long-term recovery processes observed at and around the World Trade Center site over the last six years. Ms. Johnson’s paper provides a detailed account of the protracted reconstruction planning efforts in the city of New Orleans to illustrate a set of sufficient and necessary conditions for successful recovery. All nine papers in this issue share a theoretical framework for long-term recovery processes which we developed based first upon the lessons learned from the Kobe earthquake and later expanded through observations made following other recent disasters in the world. The following sections provide a brief description of each paper as an introduction to this special issue. 1. The Need for Multiple Recovery Goals After the 1995 Kobe earthquake, the long-term recovery process began with the formulation of disaster recovery plans by the City of Kobe – the most severely impacted municipality – and an overarching plan by Hyogo Prefecture which coordinated 20 impacted municipalities; this planning effort took six months. Before the Kobe earthquake, as indicated in Mr. Maki’s paper in this issue, Japanese theories about, and approaches to, recovery focused mainly on physical recovery, particularly: the redevelopment plans for destroyed areas; the location and standards for housing and building reconstruction; and the repair and rehabilitation of utility systems. But the lingering problems of some of the recent catastrophes in Japan and elsewhere indicate that there are multiple dimensions of recovery that must be considered. We propose that two other key dimensions are economic recovery and life recovery. The goal of economic recovery is the revitalization of the local disaster-impacted economy, including both major industries and small businesses. The goal of life recovery is the restoration of the livelihoods of disaster victims. 
The recovery plans formulated following the 1995 Kobe earthquake, including the City of Kobe’s and Hyogo Prefecture’s plans, all stressed these two dimensions in addition to physical recovery. The basic structure of both the City of Kobe’s and Hyogo Prefecture’s recovery plans are summarized in Fig. 1. Each plan has three elements that work simultaneously. The first and most basic element of recovery is the restoration of damaged infrastructure. This helps both physical recovery and economic recovery. Once homes and work places are recovered, Life recovery of the impacted people can be achieved as the final goal of recovery. Figure 2 provides a “recovery report card” of the progress made by 2006 – 11 years into Kobe’s recovery. Infrastructure was restored in two years, which was probably the fastest infrastructure restoration ever, after such a major disaster; it astonished the world. Within five years, more than 140,000 housing units were constructed using a variety of financial means and ownership patterns, and exceeding the number of demolished housing units. Governments at all levels – municipal, prefectural, and national – provided affordable public rental apartments. Private developers, both local and national, also built condominiums and apartments. Disaster victims themselves also invested a lot to reconstruct their homes. Eleven major redevelopment projects were undertaken and all were completed in 10 years. In sum, the physical recovery following the 1995 Kobe earthquake was extensive and has been viewed as a major success. In contrast, economic recovery and life recovery are still underway more than 13 years later. Before the Kobe earthquake, Japan’s policy approaches to recovery assumed that economic recovery and life recovery would be achieved by infusing ample amounts of public funding for physical recovery into the disaster area. Even though the City of Kobe’s and Hyogo Prefecture’s recovery plans set economic recovery and life recovery as key goals, there was not clear policy guidance to accomplish them. Without a clear articulation of the desired end-state, economic recovery programs for both large and small businesses were ill-timed and ill-matched to the needs of these businesses trying to recover amidst a prolonged slump in the overall Japanese economy that began in 1997. “Life recovery” programs implemented as part of Kobe’s recovery were essentially social welfare programs for low-income and/or senior citizens. 2. Requirements for Successful Physical Recovery Why was the physical recovery following the 1995 Kobe earthquake so successful in terms of infrastructure restoration, the replacement of damaged housing units, and completion of urban redevelopment projects? There are at least three key success factors that can be applied to other disaster recovery efforts: 1) citizen participation in recovery planning efforts, 2) strong local leadership, and 3) the establishment of numerical targets for recovery. Citizen participation As pointed out in the three papers on recovery planning processes by Mr. Maki, Mr. Mammen, and Ms. Johnson, citizen participation is one of the indispensable factors for successful recovery plans. Thousands of citizens participated in planning workshops organized by America Speaks as part of both the World Trade Center and City of New Orleans recovery planning efforts. 
Although no such workshops were held as part of the City of Kobe’s recovery planning process, citizen participation had been part of the City of Kobe’s general plan update that had occurred shortly before the earthquake. The City of Kobe’s recovery plan is, in large part, an adaptation of the 1995-2005 general plan. On January 13 of 1995, the City of Kobe formally approved its new, 1995-2005 general plan, which had been developed over the course of three years with full citizen participation. City officials responsible for drafting the City of Kobe’s recovery plan later admitted that they were able to prepare the city’s recovery plan in six months because they had the preceding three years of planning for the new general plan with citizen participation. Based on this lesson, Ojiya City compiled its recovery plan based on the recommendations obtained from a series of five stakeholder workshops after the 2004 Niigata Chuetsu earthquake. Fig. 1. Basic structure of recovery plans from the 1995 Kobe earthquake. Fig. 2. "Disaster recovery report card" of the progress made by 2006. Strong leadership In the aftermath of the Kobe earthquake, local leadership had a defining role in the recovery process. Kobe’s former Mayor, Mr. Yukitoshi Sasayama, had been hired to work in Kobe City government as an urban planner, rebuilding Kobe following World War II. He knew the city intimately. When he saw damage in one area on his way to the City Hall right after the earthquake, he knew what levels of damage to expect in other parts of the city. It was he who called for the two-month moratorium on rebuilding in Kobe city on the day of the earthquake. The moratorium provided time for the city to formulate a vision and policies to guide the various levels of government, private investors, and residents in rebuilding. It was quite an unpopular policy when Mayor Sasayama announced it. Citizens expected the city to be focusing on shelters and mass care, not a ban on reconstruction. Based on his experience in rebuilding Kobe following WWII, he was determined not to allow haphazard reconstruction in the city. It took several years before Kobe citizens appreciated the moratorium. Numerical targets Former Governor Mr. Toshitami Kaihara provided some key numerical targets for recovery which were announced in the prefecture and municipal recovery plans. They were: 1) Hyogo Prefecture would rebuild all the damaged housing units in three years, 2) all the temporary housing would be removed within five years, and 3) physical recovery would be completed in ten years. All of these numerical targets were achieved. Having numerical targets was critical to directing and motivating all the stakeholders including the national government’s investment, and it proved to be the foundation for Japan’s fundamental approach to recovery following the 1995 earthquake. 3. Economic Recovery as the Prime Goal of Disaster Recovery In Japan, it is the responsibility of the national government to supply the financial support to restore damaged infrastructure and public facilities in the impacted area as soon as possible. The long-term recovery following the Kobe earthquake is the first time, in Japan’s modern history, that a major rebuilding effort occurred during a time when there was not also strong national economic growth. In contrast, between 1945 and 1990, Japan enjoyed a high level of national economic growth which helped facilitate the recoveries following WWII and other large fires. 
In the first year after the Kobe earthquake, Japan’s national government invested more than US$ 80 billion in recovery. These funds went mainly towards the repair and reconstruction of infrastructure and public facilities. Now, looking back, we can see that these investments also nearly crushed the local economy. Too much money flowed into the local economy over too short a period of time, and it also did not have the “trickle-down” effect that might have been intended. To accomplish numerical targets for physical recovery, the national government awarded contracts to large companies from Osaka and Tokyo. But these large out-of-town contractors also tended to have their own labor and supply chains already intact, and did not use local resources and labor, as might have been expected. Essentially, ten years of housing supply was completed in less than three years, which led to a significant local economic slump. Large amounts of public investment for recovery are not necessarily a panacea for local businesses and local economic recovery, as shown in the following two examples from the Kobe earthquake. A significant national investment was made to rebuild the Port of Kobe to a higher seismic standard, but both its foreign export and import trade never recovered to pre-disaster levels. While the Kobe Port was out of business, both the Yokohama Port and the Osaka Port increased their business, even though many economists initially predicted that the Kaohsiung Port in Chinese Taipei or the Pusan Port in Korea would capture this business. Business stayed at all of these ports even after the reopening of the Kobe Port. Similarly, the Hanshin Railway was severely damaged and it took half a year to resume its operation, but it never regained its pre-disaster ridership. In this case, two other local railway services, the JR and Hankyu lines, maintained their increased ridership even after the Hanshin railway resumed operation. As illustrated by these examples, pre-disaster customers who relied on previous economic output could not necessarily afford to wait for local industries to recover and may have had to take their business elsewhere. Our research suggests that the significant recovery investment made by Japan’s national government may have been a disincentive for new economic development in the impacted area. Government may have been the only significant financial risk-taker in the impacted area during the national economic slowdown. But its focus was on restoring what had been lost rather than promoting new or emerging economic development. Thus, there may have been a missed opportunity to provide incentives or put pressure on major businesses and industries to develop new businesses and attract new customers in return for the public investment. The significant recovery investment by Japan’s national government may have also created an over-reliance of individuals on public spending and government support. As indicated in Ms. Karatani’s paper, individual savings of Kobe’s residents have continued to rise since the earthquake and the number of individuals on social welfare has also decreased below pre-disaster levels. 
Based on our research on economic recovery from the Kobe earthquake, at least two lessons emerge: 1) Successful economic recovery requires coordination among all three recovery goals – Economic, Physical and Life Recovery, and 2) “Recovery indices” are needed to better chart recovery progress in real time and help ensure that the recovery investments are being used effectively. Economic recovery as the prime goal of recovery Physical recovery, especially the restoration of infrastructure and public facilities, may be the most direct and socially accepted provision of outside financial assistance into an impacted area. However, lessons learned from the Kobe earthquake suggest that the sheer amount of such assistance may not be as effective as it should be. Thus, as shown in Fig. 3, economic recovery should be the top priority goal for recovery among the three goals and serve as a guiding force for physical recovery and life recovery. Physical recovery can be a powerful facilitator of post-disaster economic development by upgrading social infrastructure and public facilities in compliance with economic recovery plans. In this way, it is possible to turn a disaster into an opportunity for future sustainable development. Life recovery may also be achieved with a healthy economic recovery that increases tax revenue in the impacted area. In order to achieve this coordination among all three recovery goals, municipalities in the impacted areas should have access to flexible forms of post-disaster financing. The community development block grant program that has been used after several large disasters in the United States provides impacted municipalities with a more flexible form of funding and the ability to better determine what to do and when. The participation of key stakeholders is also an indispensable element of success that enables block grant programs to transform local needs into concrete businesses. In sum, an effective economic recovery combines good coordination of national support to restore infrastructure and public facilities and local initiatives that promote community recovery. Developing Recovery Indices Long-term recovery takes time. As Mr. Tatsuki’s paper explains, periodical social survey data indicates that it took ten years before the initial impacts of the Kobe earthquake were no longer affecting the well-being of disaster victims and the recovery was completed. In order to manage this long-term recovery process effectively, it is important to have some indices to visualize the recovery processes. In this issue, three papers by Mr. Takashima, Ms. Karatani, and Mr. Kimura define three different kinds of recovery indices that can be used to continually monitor the progress of the recovery. Mr. Takashima focuses on electric power consumption in the impacted area as an index for impact and recovery. Chronological change in electric power consumption can be obtained from the monthly reports of power company branches. Daily estimates can also be made by tracking changes in city lights using a satellite called DMSP. Changes in city lights can be a very useful recovery measure, especially at the early stages, since they can be updated daily for anywhere in the world. Ms. Karatani focuses on the chronological patterns of monthly macro-statistics that prefecture and city governments collect as part of their routine monitoring of services and operations. 
For researchers, it is extremely costly and virtually impossible to launch post-disaster projects that collect recovery data continuously for ten years. It is more practical for researchers to utilize data that is already being collected by local governments or other agencies and use this data to create disaster impact and recovery indices. Ms. Karatani found three basic patterns of disaster impact and recovery in the local government data that she studied: 1) Some activities increased soon after the disaster event and then slumped, such as housing construction; 2) Some activities reduced sharply for a period of time after the disaster and then rebounded to previous levels, such as grocery consumption; and 3) Some activities reduced sharply for a while and never returned to previous levels, such as the Kobe Port and Hanshin Railway. Mr. Kimura focuses on the psychology of disaster victims. He developed a “recovery and reconstruction calendar” that clarifies the process that disaster victims undergo in rebuilding their shattered lives. His work is based on the results of random surveys. Despite differences in disaster size and locality, survey data from the 1995 Kobe earthquake and the 2004 Niigata-ken Chuetsu earthquake indicate that the recovery and reconstruction calendar is highly reliable and stable in clarifying the recovery and reconstruction process. Fig. 3. Integrated plan of disaster recovery. 4. Life Recovery as the Ultimate Goal of Disaster Recovery Life recovery starts with the identification of the disaster victims. In Japan, local governments in the impacted area issue a “damage certificate” to disaster victims by household, recording the extent of each victim’s housing damage. After the Kobe earthquake, a total of 500,000 certificates were issued. These certificates, in turn, were used by both public and private organizations to determine victim’s eligibility for individual assistance programs. However, about 30% of those victims who received certificates after the Kobe earthquake were dissatisfied with the results of assessment. This caused long and severe disputes for more than three years. Based on the lessons learned from the Kobe earthquake, Mr. Horie’s paper presents (1) a standardized procedure for building damage assessment and (2) an inspector training system. This system has been adopted as the official building damage assessment system for issuing damage certificates to victims of the 2004 Niigata-ken Chuetsu earthquake, the 2007 Noto-Peninsula earthquake, and the 2007 Niigata-ken Chuetsu Oki earthquake. Personal and family recovery, which we term life recovery, was one of the explicit goals of the recovery plan from the Kobe earthquake, but it was unclear in both recovery theory and practice as to how this would be measured and accomplished. Now, after studying the recovery in Kobe and other regions, Ms. Tamura’s paper proposes that there are seven elements that define the meaning of life recovery for disaster victims. She recently tested this model in a workshop with Kobe disaster victims. The seven elements and victims’ rankings are shown in Fig. 4. Regaining housing and restoring social networks were, by far, the top recovery indicators for victims. Restoration of neighborhood character ranked third. Demographic shifts and redevelopment plans implemented following the Kobe earthquake forced significant neighborhood changes upon many victims. 
Next in line were: having a sense of being better prepared and reducing their vulnerability to future disasters; regaining their physical and mental health; and restoration of their income, job, and the economy. The provision of government assistance also provided victims with a sense of life recovery. Mr. Tatsuki’s paper summarizes the results of four random-sample surveys of residents within the most severely impacted areas of Hyogo Prefecture. These surveys were conducted every two years beginning in 1999. Based on the results of survey data from 1999, 2001, 2003, and 2005, it is our conclusion that life recovery took ten years for victims in the area impacted significantly by the Kobe earthquake. Fig. 5 shows, by comparing the two structural equation models of disaster recovery (from 2003 and 2005), that damage caused by the Kobe earthquake was no longer a determinant of life recovery in the 2005 model. It was still one of the major determinants in the 2003 model, as it was in 1999 and 2001. This is the first time in the history of disaster research that the entire recovery process has been scientifically described. It can be utilized as a resource and provide benchmarks for monitoring the recovery from future disasters. Fig. 4. Ethnographical meaning of “life recovery” obtained from the 5th year review of the Kobe earthquake by the City of Kobe. Fig. 5. Life recovery models of 2003 and 2005. 6. The Need for an Integrated Recovery Plan. The recovery lessons from Kobe and other regions suggest that we need more integrated recovery plans that use physical recovery as a tool for economic recovery, which in turn helps disaster victims. Furthermore, we believe that economic recovery should be the top priority for recovery, and physical recovery should be regarded as a tool for stimulating economic recovery and upgrading social infrastructure (as shown in Fig. 6). With this approach, disaster recovery can help build the foundation for a long-lasting and sustainable community. Figure 6 proposes a more detailed model for a more holistic recovery process. The ultimate goal of any recovery process should be achieving life recovery for all disaster victims. We believe that to get there, both direct and indirect approaches must be taken. Direct approaches include: the provision of funds and goods for victims, for physical and mental health care, and for housing reconstruction. Indirect approaches for life recovery are those which facilitate economic recovery, which also has both direct and indirect approaches. Direct approaches to economic recovery include: subsidies, loans, and tax exemptions. Indirect approaches to economic recovery include, most significantly, the direct projects to restore infrastructure and public buildings. More subtle approaches include: setting new regulations or deregulating, providing technical support, and creating new businesses. A holistic recovery process needs to strategically combine all of these approaches, and there must be collaborative implementation by all the key stakeholders, including local governments, non-profit and non-governmental organizations (NPOs and NGOs), community-based organizations (CBOs), and the private sector. Therefore, community and stakeholder participation in the planning process is essential to achieve buy-in for the vision and desired outcomes of the recovery plan. Securing the required financial resources is also critical to successful implementation.
In thinking of stakeholders, it is important to differentiate between supporting entities and operating agencies. Supporting entities are those organizations that supply the necessary funding for recovery. Japan’s national government and the U.S. federal government were the prime supporting entities in the recovery from the 1995 Kobe earthquake and the 2001 World Trade Center disaster, respectively. In Taiwan, a Buddhist organization and the national government were major supporting entities in the recovery from the 1999 Chi-Chi earthquake. Operating agencies are those organizations that implement various recovery measures. In Japan, local governments in the impacted area are operating agencies, while the national government is a supporting entity. In the United States, community development block grants provide an opportunity for many operating agencies to implement various recovery measures. As Mr. Mammen’s paper describes, many NPOs, NGOs, and CBOs, in addition to local governments, have had major roles in implementing various kinds of programs funded by block grants as part of the World Trade Center recovery. No single organization can provide effective help for all kinds of disaster victims, individually or collectively. The needs of disaster victims may conflict with one another because of their diversity. Their divergent needs can be successfully met by the diversity of operating agencies that have responsibility for implementing recovery measures. In a similar context, block grants made to individual households, such as microfinance, have been a vital recovery mechanism for victims in Thailand who suffered from the 2004 Sumatra earthquake and tsunami disaster. Both disaster victims and government officers at all levels strongly supported microfinance so that disaster victims themselves would become operating agencies for recovery. Empowering individuals in sustainable life recovery is indeed the ultimate goal of recovery. Fig. 6. A holistic recovery policy model.
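To make the idea of a recovery index more concrete, the sketch below computes a simple index of the kind that Mr. Takashima's electric power consumption data could support: monthly consumption normalized against a pre-disaster baseline, where 1.0 means a return to baseline. It is only an illustration; the data values, the 12-month baseline window, and the function name are hypothetical assumptions, not figures or methods from the papers summarized above.

```python
# Illustrative sketch only: a simple recovery index of the kind described above,
# normalizing monthly electricity consumption against a pre-disaster baseline.
# The data values and the 12-month baseline window are hypothetical assumptions,
# not figures from the papers summarized in this abstract.
import numpy as np

def recovery_index(monthly_consumption, disaster_month, baseline_months=12):
    """Return consumption for each month divided by the mean of the
    pre-disaster baseline period (1.0 = back to the pre-disaster level)."""
    series = np.asarray(monthly_consumption, dtype=float)
    baseline = series[max(0, disaster_month - baseline_months):disaster_month].mean()
    return series / baseline

# Hypothetical example: consumption drops sharply in the disaster month
# and climbs back toward the baseline over the following year.
consumption = [100, 102, 99, 101, 100, 98, 60, 70, 80, 88, 93, 97, 100]
index = recovery_index(consumption, disaster_month=6)
print(np.round(index, 2))  # values below 1.0 indicate incomplete recovery
```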
APA, Harvard, Vancouver, ISO, and other styles
29

Zubari, H. K., and A. E. Abdulwahab. "The Role of Sequential Well Testing in Improving Oil Recovery From a Closed Sand Lens in the Bahrain Field." SPE Reservoir Evaluation & Engineering 5, no. 02 (2002): 94–102. http://dx.doi.org/10.2118/77268-pa.

Full text
Abstract:
Summary. The Ac zone refers to the sandstone facies of the Wara formation, which belongs to the Wasia group of the Middle Cretaceous age.* Owing to variable shale to sand ratios, the Ac zone is realized as different isolated sand lenses scattered all over the field. Owing to the complexity of the geology and the flow mechanism, well testing was used to manage the reservoir. The concept of a sequence of well tests on the same well was found to be essential in improving the oil recovery. Conducted on a specific well situated within a closed sand lens, sequential well testing yielded valuable information that assisted in understanding the drive mechanism and the deteriorating reservoir parameters, permeability, and skin. Such a method was crucial in improving the recovery from the sand lens as well as from the single-well-treatment perspective. This paper presents an example of how such a technique aided in improving the oil recovery from a small sand lens. Introduction. Sequential well testing has played an important role in understanding the flow mechanism and the damage evolution in a dynamic sense within a sand lens. Computer-aided analysis of these well tests has resulted in better geological modeling and simulation, which further led to improved oil recovery. Sequential well testing was very effective in understanding the pressure-support mechanism, permeability deterioration, and increase in skin around wells. The tests further revealed the geological boundaries around the well. The geological setting was used as a quality-control parameter to ensure the consistency of all tests. A simulation model was constructed, taking into account well-test results, and it confirmed the strong influence of faulting on the production performance, which in this case is causing poor pressure support and a quick water breakthrough. Based on these results, the model was used to improve the sweep efficiency of the sand lens by exploring different schemes. Discussion. Geology. The Ac zone refers to the sandstone facies of the Wara formation in the Bahrain field. It belongs to the Wasia group of the Middle Cretaceous age. The Wara in the Bahrain field varies from 60 to 95 ft in thickness, and the sand interval varies from 0 to 60 ft. Owing to variable shale to sand ratios, the Ac zone is realized as different isolated sand lenses scattered all over the field. Each sand lens varies in shape and size and acts as a trap for original oil accumulations and transferals from other zones; hence, it must be studied and optimized individually. For the purpose of this paper, a study of a small sand lens is presented. This sand lens (Fig. 1) is located in the northwest area of the Bahrain field. Its net thickness varies from 0 to 32 ft, and permeability varies from 0 md at the edges to 150 md at the center of the lens. The production behavior of anomalous wells (explained later) within the sand lens indicates that the sand body is cut by at least two sealing faults, making at least three isolated compartments. The sand lens is overlain and underlain by shale, making it an almost closed system; this is shown by the poor pressure support caused by faulting and juxtapositions with other zones. Production Behavior. It was observed that three nearby wells located within the same sand lens produce and behave differently. While Well 453 is an oil producer, Wells 376 and 244 are water and high-gas producers, respectively. A conceptual geological model that can provide an explanation for such behavior is shown in Fig. 2.
This model was thought of originally but was changed later based on well tests. Sequential Well Testing. Six buildups were carried out on Well 453, which provided good monitoring of the reservoir and a better understanding of the changes in reservoir behavior in terms of permeability deterioration, damage evolution, and pressure behavior. The main conclusion with regard to the geological setting was that the sand body consisted of noncommunicating compartments; all tests clearly detected nearby combinations of faults and barriers. This allowed us to concentrate on developing the compartments more efficiently at a minimum cost. Table 1 shows the results of these buildups. It can be seen clearly from the table that the skin evolves at the sandface, causing a direct deterioration in the production rate. The increase in skin damage from 2.4 at the beginning of the well's life to 7.1 after 2 years is attributed to a combination of the following: (1) Emulsion blocking. This is a main contributor to the skin problem in the Ac reservoir, as evidenced by the more than 100% improvement seen after the well is treated with a surfactant wash. However, the surfactant wash effect diminishes quickly as emulsion forms again around the wellbore, necessitating periodic treatments. (2) Fines movement into the wellbore and blockage of the pore throats. This was proven by core testing, as shown in Fig. 3.** (3) The swelling of clays caused by water encroachment into the area (water cut has increased from 0 to 25% in 5 years). X-ray diffraction (XRD) analysis of a typical Ac sand core is shown in Table 2. Lab testing concluded that the last two observations are most probably related because the presence of 10 to 20% kaolinite results in the problem of fines mobilization. The loose attachment of kaolinite plates to host grains allows fluid turbulence within a pore to dislodge the delicately attached kaolinite, as shown in Figs. 4a and 4b. The loosened kaolinite plates then migrate to pore throats, where they lodge and act as a check valve.*** This means that skin evolution is triggered by a certain critical high flow rate. Fig. 5a shows the actual permeability deterioration before the initiation of the water-support scheme and the conducting of surfactant wash treatments.
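The abstract does not reproduce the analysis equations behind its buildup results, but the kind of interpretation it describes (estimating permeability and skin from shut-in pressure data) is conventionally done with a Horner plot. The sketch below is a generic illustration of that standard workflow under assumed single-phase radial-flow conditions; every input number is hypothetical, and none of it is taken from the Bahrain field study.

```python
# Illustrative Horner buildup analysis (oilfield units), using the generic
# single-phase radial-flow equations; the input numbers are hypothetical and
# are NOT taken from the Bahrain field study.
import numpy as np

q, B, mu = 500.0, 1.2, 1.5               # rate (STB/D), FVF (RB/STB), viscosity (cp)
h, phi, ct, rw = 30.0, 0.25, 1e-5, 0.3   # thickness (ft), porosity, total compressibility (1/psi), wellbore radius (ft)
tp = 720.0                               # producing time before shut-in (hours)
pwf = 2000.0                             # flowing pressure at shut-in (psi)

# Hypothetical shut-in times (hours) and buildup pressures (psi)
dt  = np.array([1.0, 2.0, 5.0, 10.0, 20.0, 50.0])
pws = np.array([2610.0, 2635.0, 2668.0, 2693.0, 2718.0, 2751.0])

horner = np.log10((tp + dt) / dt)
# Straight-line fit p_ws = p* - m * log10((tp + dt) / dt) on the radial-flow portion
m, p_star = np.polyfit(horner, pws, 1)
m = -m                                   # report the semilog slope as a positive number

k = 162.6 * q * B * mu / (m * h)         # permeability (md)
p_1hr = p_star - m * np.log10((tp + 1.0) / 1.0)
skin = 1.151 * ((p_1hr - pwf) / m
                - np.log10(k / (phi * mu * ct * rw**2)) + 3.23)

print(f"slope m = {m:.1f} psi/cycle, k = {k:.1f} md, skin = {skin:.2f}")
```

Running a sequence of such analyses on successive buildups of the same well (as the paper describes) would reveal whether the computed skin is growing over time, which is the diagnostic the authors use to justify periodic surfactant treatments.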
APA, Harvard, Vancouver, ISO, and other styles
30

Mbarki, Wafa, Moez Bouchouicha, Frederick Tshibasu Tshienda, Eric Moreau, and Mounir Sayadi. "Herniated Lumbar Disc Generation and Classification Using Cycle Generative Adversarial Networks on Axial View MRI." Electronics 10, no. 8 (2021): 982. http://dx.doi.org/10.3390/electronics10080982.

Full text
Abstract:
A frequent cause of lower back pain presenting with leg pain is a herniated lumbar intervertebral disc. A herniation, or herniated lumbar disc, is a change of position of disc material (nucleus pulposus or annulus fibrosus). Usually, the lower back pain goes away within days or weeks. Regular treatment techniques for lower back pain include medication, exercises, relaxation methods, and surgery. Back pain and back problems regularly occur in the lumbar region. The spinal canal is made up of vertebrae; each one protects the spinal nerves. Intervertebral discs and facet joints connect the vertebrae above and below. Groups of muscles and ligaments hold the vertebrae and the discs together. Muscles support the spine and the body weight, and they allow us to move. Pressure can result in excessive wear and tear of the other structures. For example, a common problem in the lower back is disc herniation. In this case, pressure on an intervertebral disc makes its center, the nucleus pulposus, protrude backwards and push against the spinal nerves, leading to lower back pain. Detection and classification are the two most important tasks in computer-aided diagnosis systems. Detection of a herniated lumbar disc from magnetic resonance imaging (MRI) is a very difficult task for radiologists. The extraction of herniated discs has been achieved by different approaches such as active contours, region growing, watershed techniques, thresholding, and deep learning. In this study, to detect intervertebral discs from axial MRIs, we develop a method using generative adversarial networks (GANs), especially the CycleGAN model, to automatically generate and detect intervertebral discs and to classify the type of herniated lumbar disc, such as foraminal or median. We propose to explore the importance of axial view MRI in determining the herniation type. GANs and other generative networks have provided several ways to tackle well-known and challenging problems of medical image analysis, such as segmentation, reconstruction, data simulation, medical image de-noising, and classification. Moreover, their ability to synthesize images and data at unprecedented levels of realism also gives hope that the chronic scarcity of labeled data in the medical field can be resolved with the help of these generative models. In our case, assembling a database that contains enough images is a very difficult task. In this paper, we put forward a new approach based on GANs, in particular CycleGAN, to address the scarcity of lumbar intervertebral disc images. Consequently, the essential objective of our work is to generate and automatically classify the herniation type as foraminal or median using GANs. Our computer-aided diagnosis (CAD) system achieved 97.2% accuracy on our dataset, which represents very high performance relative to the state of the art for work utilizing the GAN technique. Our CAD system is very effective and efficient for classifying herniations of lumbar intervertebral discs. Therefore, the contribution of this study is twofold: first, the use of the CycleGAN model based on convolutional layers to detect and classify the herniation type (median or foraminal) in lumbar intervertebral discs; second, the use of axial view MRI to classify the type of the herniated intervertebral disc. The main objective of this paper is to help radiologists automatically recognize and classify herniated lumbar discs.
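The abstract does not spell out the network architecture or training configuration, so the following is only a minimal PyTorch sketch of the core CycleGAN mechanism it relies on: two generators trained with an adversarial loss and a cycle-consistency loss. The tiny layer sizes, image shape, and loss weight are illustrative assumptions, not the authors' setup, and the herniation classification stage is omitted.

```python
# Minimal sketch of the CycleGAN cycle-consistency idea (PyTorch).
# The toy generators/discriminators and the lambda weight are assumptions for
# illustration only; they are not the architecture used in the paper.
import torch
import torch.nn as nn

def tiny_generator():
    # maps a 1-channel image to a 1-channel image (e.g., MRI domain A <-> B)
    return nn.Sequential(
        nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
        nn.Conv2d(16, 1, 3, padding=1), nn.Tanh())

def tiny_discriminator():
    # PatchGAN-style score map over the input image
    return nn.Sequential(
        nn.Conv2d(1, 16, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
        nn.Conv2d(16, 1, 4, stride=2, padding=1))

G_ab, G_ba = tiny_generator(), tiny_generator()   # A->B and B->A generators
D_b = tiny_discriminator()                        # discriminates real/fake in domain B
adv_loss, cyc_loss, lam = nn.MSELoss(), nn.L1Loss(), 10.0

real_a = torch.randn(4, 1, 64, 64)                # stand-in batch from domain A
real_b = torch.randn(4, 1, 64, 64)                # stand-in batch from domain B

fake_b = G_ab(real_a)                             # translate A -> B
rec_a = G_ba(fake_b)                              # cycle back B -> A
fake_a = G_ba(real_b)                             # translate B -> A
rec_b = G_ab(fake_a)                              # cycle back A -> B

# Generator objective: fool D_b and reconstruct both originals
# (the full CycleGAN adds a symmetric adversarial term with a D_a discriminator).
pred_fake = D_b(fake_b)
loss_G = (adv_loss(pred_fake, torch.ones_like(pred_fake))
          + lam * (cyc_loss(rec_a, real_a) + cyc_loss(rec_b, real_b)))
loss_G.backward()
print(float(loss_G))
```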
APA, Harvard, Vancouver, ISO, and other styles
31

Siembieda, William. "Toward an Enhanced Concept of Disaster Resilience: A Commentary on Behalf of the Editorial Committee." Journal of Disaster Research 5, no. 5 (2010): 487–93. http://dx.doi.org/10.20965/jdr.2010.p0487.

Full text
Abstract:
1. Introduction. This Special Issue (Part 2) expands upon the theme “Building Local Capacity for Long-term Disaster Resilience” presented in Special Issue Part 1 (JDR Volume 5, Number 2, April 2010) by examining the evolving concept of disaster resilience and providing additional reflections upon various aspects of its meaning. Part 1 provided a mixed set of examples of resiliency efforts, ranging from administrative challenges of integrating resilience into recovery to the analysis of hazard mitigation plans directed toward guiding local capability for developing resiliency. Resilience was broadly defined in the opening editorial of Special Issue Part 1 as “the capacity of a community to: 1) survive a major disaster, 2) retain essential structure and functions, and 3) adapt to post-disaster opportunities for transforming community structure and functions to meet new challenges.” In this editorial essay, we first explore in Section 2 the history of resilience and then locate it within current academic and policy debates. Section 3 presents summaries of the papers in this issue. 2. Why is Resilience a Contemporary Theme? There is growing scholarly and policy interest in disaster resilience. In recent years, engineers [1], sociologists [2], geographers [3], economists [4], public policy analysts [5, 6], urban planners [7], hazards researchers [8], governments [9], and international organizations [10] have all contributed to the literature about this concept. Some authors view resilience as a mechanism for mitigating disaster impacts, with framework objectives such as resistance, absorption, and restoration [5]. Others, who focus on resiliency indicators, see it as an early warning system to assess community resiliency status [3, 8]. Recently, it has emerged as a component of social risk management that seeks to minimize social welfare loss from catastrophic disasters [6]. Manyena [11] traces scholarly exploration of resilience as an operational concept back at least five decades. Interest in resilience began in the 1940s with studies of children and trauma in the family and in the 1970s in the ecology literature as a useful framework to examine and measure the impact of assault or trauma on a defined eco-system component [12]. This led to modeling resilience measures for a variety of components within a defined ecosystem, leading to the realization that the systems approach to resiliency is attractive as a cross-disciplinary construct. The ecosystem analogy, however, has limits when applied to disaster studies in that, historically, all catastrophic events have changed the place in which they occurred and a “return to normalcy” does not occur. This is true for modern urban societies as well as traditional agrarian societies. The adoption of “The Hyogo Framework for Action 2005-2015” (also known as The Hyogo Declaration) provides a global linkage and follows the United Nations 1990s International Decade for Natural Disaster Reduction effort.
The 2005 Hyogo Declaration’s definition of resilience is: “The capacity of a system, community or society potentially exposed to hazards to adapt by resisting or changing in order to reach and maintain an acceptable level of functioning and structure.” The proposed measurement of resilience in the Hyogo Declaration is determined by “the degree to which the social system is capable of organizing itself to increase this capacity for learning from past disasters for better future protection and to improve risk reduction measures.” While very broad, this definition contains two key concepts: 1) adaptation, and 2) maintaining acceptable levels of functioning and structure. While adaptation requires certain capacities, maintaining acceptable levels of functioning and structure requires resources, forethought, and normative action. Some of these attributes are now reflected in the 2010 National Disaster Recovery Framework published by the U.S. Federal Emergency Management Agency (FEMA) [13]. With the emergence of this new thinking on resilience related to disasters, it is now a good time to reflect on the concept and assess what has recently been said in the literature. Bruneau et al. [1] offer an engineering sciences definition for community seismic resilience: “The ability of social units (e.g., organizations, communities) to mitigate hazards, contain the effects of disasters when they occur, and carry out recovery activities in ways that minimize social disruption and mitigate the effects of future earthquakes.” Rose [4] writes that resiliency is the ability of a system to recover from a severe shock. He distinguishes two types of resilience: (1) inherent – ability under normal circumstances and (2) adaptive – ability in crisis situations due to ingenuity or extra effort. By opening up resilience to categorization, he provides a pathway to establish multi-disciplinary approaches, something that is presently lacking in practice. Rose is most concerned with business disruption, which can take extensive periods of time to correct. In order to make resource decisions that lower overall societal costs (economic, social, governmental and physical), Rose calls for the establishment of measurements that function as resource decision allocation guides. This has been done in part through risk transfer tools such as private insurance. However, it has not been well adopted by governments in deciding how to allocate mitigation resources. We need to ask why the interest in resilience has grown. Manyena [11] argues that the concept of resilience has gained currency without obtaining clarity of understanding, definition, substance, philosophical dimensions, or applicability to disaster management and sustainable development theory and practice. It is evident that the “emergency management model” does not itself provide sufficient guidance for policymakers since it is too command-and-control-oriented and does not adequately address mitigation and recovery. Also, large disasters are increasingly viewed as major disruptions of the economic and social conditions of a country, state/province, or city. Lowering post-disaster costs (human life, property loss, economic advancement and government disruption) is being taken more seriously by government and civil society. The lessening of costs is not something the traditional “preparedness” stage of emergency management has concerned itself with; this is an existing void in meeting the expanding interests of government and civil society.
The concept of resilience helps further clarify the relationship between risk and vulnerability. If risk is defined as “the probability of an event or condition occurring” [14], then it can be reduced through physical, social, governmental, or economic means, thereby reducing the likelihood of damage and loss. Nothing can be done to stop an earthquake, volcanic eruption, cyclone, hurricane, or other natural event, but the probability of damage and loss from natural and technological hazards can be addressed through structural and non-structural strategies. Vulnerability is the absence of capacity to resist or absorb a disaster impact. Changes in vulnerability can then be achieved by changes in these capacities. In this regard, Franco and Siembieda describe in this issue how coastal cities in Chile had low resilience and high vulnerability to the tsunami generated by the February 2010 earthquake, whereas modern buildings had high resilience and, therefore, were much less vulnerable to the powerful earthquake. We also see how the framework for policy development can change through differing perspectives. Eisner discusses in this issue how local non-governmental social service agencies are building their resilience capabilities to serve target populations after a disaster occurs, becoming self-renewing social organizations and demonstrating what Leonard and Howett [6] term “social resilience.” All of the contributions to this issue illustrate the lowering of disaster impacts and strengthening of capacity (at the household, community or governmental level) for what Alesch [15] terms “post-event viability” – a term reflecting how well a person, business, community, or government functions after a disaster in addition to what they might do prior to a disaster to lessen its impact. Viability might become the definition of recovery if it can be measured or agreed upon. 3. Contents of This Issue. The insights provided by the papers in this issue contribute greater clarity to an understanding of resilience, together with its applicability to disaster management. In these papers we find tools and methods, process strategies, and planning approaches. There are five papers focused on local experiences, three on state (prefecture) experiences, and two on national experiences. The papers in this issue reinforce the concept of resilience as a process, not a product, because it is the sum of many actions. The resiliency outcome is the result of multiple inputs from the level of the individual and, at times, continuing up to the national or international organizational level. Through this exploration we see that the “resiliency” concept accepts that people will come into conflict with natural or anthropogenic hazards. The policy question then becomes how to lower the impact(s) of the conflict through “hard or soft” measures (see the Special Issue Part 1 editorial for a discussion of “hard” vs. “soft” resilience). Local level. Go Urakawa and Haruo Hayashi illustrate how post-disaster operations for public utilities can be problematic because many practitioners have no direct experience in such operations, noting that the formats and methods normally used in recovery depend on personal skills and effort. They describe how these problems are addressed by creating manuals on measures for effectively implementing post-disaster operations. They develop a method to extract priority operations using business impact analysis (BIA) and project management based business flow diagrams (BFD).
Their article effectively illustrates the practical aspects of strengthening the resiliency of public organizations. Richard Eisner presents the framework used to initiate the development and implementation of a process to create disaster resilience in faith-based and community-based organizations that provide services to vulnerable populations in San Francisco, California. A major project outcome is the Disaster Resilience Standard for Community- and Faith-Based Service Providers. This “standard” has general applicability for use by social service agencies in the public and non-profit sectors. Alejandro Linayo addresses the growing issue of technological risk in cities. He argues for the need to understand an inherent conflict between how we occupy urban space and the technological risks created by hazardous chemicals, radiation, oil and gas, and other hazardous materials storage and movement. The paper points out that information and procedural gaps exist in terms of citizen knowledge (the right to know) and local administrative knowledge (missing expertise). Advances and experience accumulated by the Venezuela Disaster Risk Management Research Center in identifying and integrating technological risk treatment for the city of Merida, Venezuela, are highlighted as a way to move forward. L. Teresa Guevara-Perez presents the case that certain urban zoning requirements in contemporary cities encourage and, in some cases, enforce the use of building configurations that have been long recognized by earthquake engineering as seismically vulnerable. Using Western Europe and the Modernist architectural movement, she develops the historical case for understanding discrepancies between urban zoning regulations and seismic codes that have led to vulnerable modern building configurations, and traces the international dissemination of architectural and urban planning concepts that have generated vulnerability in contemporary cities around the world. Jung Eun Kang, Walter Gillis Peacock, and Rahmawati Husein discuss an assessment protocol for Hazard Mitigation Plans applied to 12 coastal hazard zone plans in the state of Texas in the U.S. The components of these plans are systematically examined in order to highlight their respective strengths and weaknesses. The authors describe an assessment tool, the plan quality score (PQS), composed of seven primary components (vision statement, planning process, fact basis, goals and objectives, inter-organizational coordination, policies & actions, and implementation), as well as a component quality score (CQS). State (Prefecture) level. Charles Real presents the Natural Hazard Zonation Policies for Land Use Planning and Development in California in the U.S. California has established state-level policies that utilize knowledge of where natural hazards are more likely to occur to enhance the effectiveness of land use planning as a tool for risk mitigation. Experience in California demonstrates that a combination of education, outreach, and mutually supporting policies that are linked to state-designated natural hazard zones can form an effective framework for enhancing the role of land use planning in reducing future losses from natural disasters. Norio Maki, Keiko Tamura, and Haruo Hayashi present a method for local government stakeholders involved in pre-disaster plan making to describe performance measures through the formulation of desired outcomes.
Through a case study approach, Nara and Kyoto Prefectures’ separate experiences demonstrate how to conduct Strategic Earthquake Disaster Reduction Plans and Action Plans that have deep stakeholder buy-in and outcome measurability. Nara’s plan was prepared from 2,015 stakeholder ideas and Kyoto’s plan was prepared from 1,613 stakeholder ideas. Having a quantitative target for individual objectives ensures the measurability of plan progress. Both jurisdictions have undertaken evaluations of plan outcomes. Sandy Meyer, Eugene Henry, Roy E. Wright and Cynthia A. Palmer present the State of Florida in the U.S. and its experience with pre-disaster planning for post-disaster redevelopment. Drawing upon the lessons learned from the impacts of the 2004 and 2005 hurricane seasons, local governments and state leaders in Florida sought to find a way to encourage behavior that would create greater community resiliency in 2006. The paper presents initial efforts to develop a post-disaster redevelopment plan (PDRP), including the experience of a pilot county. National level. Bo-Yao Lee provides a national perspective: New Zealand’s approach to emergency management, where all hazard risks are addressed through devolved accountability. This contemporary approach advocates collaboration and coordination, aiming to address all hazard risks through the “4Rs” – reduction, readiness, response, and recovery. Lee presents the impact of the Resource Management Act (1991), the Civil Defence Emergency Management Act (2002), and the Building Act (2004) that comprise the key legislation influencing and promoting integrated management for environment and hazard risk management. Guillermo Franco and William Siembieda provide a field assessment of the February 27, 2010, M8.8 earthquake and tsunami event in Chile. The paper presents an initial damage and life-loss review and an assessment of seismic building resiliency, together with the country’s rapid updating of building codes, which have undergone continuous improvement over the past 60 years. The country’s land use planning system and its emergency management system are also described. The role of insurance coverage reveals problems in seismic coverage for homeowners. The unique role of the Catholic Church in providing temporary shelter and the central government’s five-point housing recovery plan are presented. A weakness in the emergency management system’s early tsunami response is noted. Acknowledgements. The Editorial Committee extends its sincere appreciation to both the contributors and the JDR staff for their patience and determination in making Part 2 of this special issue possible. Thanks also to the reviewers for their insightful analytic comments and suggestions. Finally, the Committee wishes to again thank Bayete Henderson for his keen and thorough editorial assistance and copy editing support.
APA, Harvard, Vancouver, ISO, and other styles
32

Yakubu, Bashir Ishaku, Shua’ib Musa Hassan, and Sallau Osisiemo Asiribo. "AN ASSESSMENT OF SPATIAL VARIATION OF LAND SURFACE CHARACTERISTICS OF MINNA, NIGER STATE NIGERIA FOR SUSTAINABLE URBANIZATION USING GEOSPATIAL TECHNIQUES." Geosfera Indonesia 3, no. 2 (2018): 27. http://dx.doi.org/10.19184/geosi.v3i2.7934.

Full text
Abstract:
Rapid urbanization significantly impacts the land cover patterns of the environment, as is evident in the depletion of vegetal reserves and, more generally, in the modification of human climatic systems (Henderson et al., 2017; Kumar, Masago, Mishra, & Fukushi, 2018; Luo & Lau, 2017). This study explores remote sensing classification techniques and other auxiliary data to determine LULCC over a period of 50 years (1967-2016). The LULCC types identified were quantitatively evaluated using a change detection approach applied to the results of a maximum likelihood classification algorithm in GIS. Accuracy assessment results for the LULC classification were found to be between 56 and 98 percent. The change detection analysis revealed changes in the LULC types in Minna from 1976 to 2016. The built-up area increased from 74.82 ha in 1976 to 116.58 ha in 2016. Farmlands increased from 2.23 ha to 46.45 ha and bare surface increased from 120.00 ha to 161.31 ha between 1976 and 2016, resulting in a decline in vegetation, water bodies, and wetlands. The decade of rapid urbanization was found to coincide with the period of increased Public Private Partnership Agreements (PPPA). The increase in farmlands was due to the adoption of urban agriculture, which has an influence on food security and environmental sustainability. The observed increase in built-up areas, farmlands, and bare surfaces has led to a substantial reduction in vegetation and water bodies. The oscillatory nature of the water-body LULCC, which was not particularly consistent with the rates of urbanization, also suggests that, beyond the urbanization process, other factors may influence the LULCC of water bodies in urban settlements.
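As a rough illustration of the generic workflow the abstract describes, the sketch below performs per-pixel Gaussian maximum likelihood classification of two image dates and then cross-tabulates the two class maps into a change matrix. The array shapes, band count, and class labels are hypothetical stand-ins, not the Minna dataset or the authors' exact GIS procedure.

```python
# Minimal NumPy sketch of the generic workflow described above: Gaussian
# maximum likelihood classification of two image dates, then a change matrix.
# Shapes, band counts, and class labels are hypothetical stand-ins.
import numpy as np

def train_gaussians(samples):
    """samples: dict class_id -> (n_pixels, n_bands) array of training spectra."""
    return {c: (x.mean(axis=0), np.cov(x, rowvar=False)) for c, x in samples.items()}

def ml_classify(image, params):
    """image: (rows, cols, n_bands). Assign each pixel to the class with the
    highest Gaussian log-likelihood (equal priors assumed)."""
    pixels = image.reshape(-1, image.shape[-1])
    scores = []
    for c, (mu, cov) in sorted(params.items()):
        inv, (_, logdet) = np.linalg.inv(cov), np.linalg.slogdet(cov)
        d = pixels - mu
        scores.append(-0.5 * (logdet + np.einsum("ij,jk,ik->i", d, inv, d)))
    return np.argmax(np.stack(scores, axis=1), axis=1).reshape(image.shape[:2])

def change_matrix(map_t1, map_t2, n_classes):
    """Cross-tabulate pixel counts from classes at date 1 to classes at date 2."""
    idx = map_t1.ravel() * n_classes + map_t2.ravel()
    return np.bincount(idx, minlength=n_classes**2).reshape(n_classes, n_classes)

# Hypothetical 3-band imagery and two classes (0 = vegetation, 1 = built-up)
rng = np.random.default_rng(0)
train = {0: rng.normal(0.2, 0.05, (200, 3)), 1: rng.normal(0.6, 0.05, (200, 3))}
params = train_gaussians(train)
img_1976 = rng.normal(0.25, 0.1, (50, 50, 3))
img_2016 = rng.normal(0.50, 0.1, (50, 50, 3))
print(change_matrix(ml_classify(img_1976, params), ml_classify(img_2016, params), 2))
```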
 Keywords: Minna, Niger State, Remote Sensing, Land Surface Characteristics
 
 References 
 Akinrinmade, A., Ibrahim, K., & Abdurrahman, A. (2012). Geological Investigation of Tagwai Dams using Remote Sensing Technique, Minna Niger State, Nigeria. Journal of Environment, 1(01), pp. 26-32.
 Amadi, A., & Olasehinde, P. (2010). Application of remote sensing techniques in hydrogeological mapping of parts of Bosso Area, Minna, North-Central Nigeria. International Journal of Physical Sciences, 5(9), pp. 1465-1474.
 Aplin, P., & Smith, G. (2008). Advances in object-based image classification. The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, 37(B7), pp. 725-728.
 Ayele, G. T., Tebeje, A. K., Demissie, S. S., Belete, M. A., Jemberrie, M. A., Teshome, W. M., . . . Teshale, E. Z. (2018). Time Series Land Cover Mapping and Change Detection Analysis Using Geographic Information System and Remote Sensing, Northern Ethiopia. Air, Soil and Water Research, 11, p 1178622117751603.
 Azevedo, J. A., Chapman, L., & Muller, C. L. (2016). Quantifying the daytime and night-time urban heat island in Birmingham, UK: a comparison of satellite derived land surface temperature and high resolution air temperature observations. Remote Sensing, 8(2), p 153.
 Blaschke, T., Hay, G. J., Kelly, M., Lang, S., Hofmann, P., Addink, E., . . . van Coillie, F. (2014). Geographic object-based image analysis–towards a new paradigm. ISPRS Journal of Photogrammetry and Remote Sensing, 87, pp. 180-191.
 Bukata, R. P., Jerome, J. H., Kondratyev, A. S., & Pozdnyakov, D. V. (2018). Optical properties and remote sensing of inland and coastal waters: CRC press.
 Camps-Valls, G., Tuia, D., Bruzzone, L., & Benediktsson, J. A. (2014). Advances in hyperspectral image classification: Earth monitoring with statistical learning methods. IEEE signal processing magazine, 31(1), pp. 45-54.
 Chen, J., Chen, J., Liao, A., Cao, X., Chen, L., Chen, X., . . . Lu, M. (2015). Global land cover mapping at 30 m resolution: A POK-based operational approach. ISPRS Journal of Photogrammetry and Remote Sensing, 103, pp. 7-27.
 Chen, M., Mao, S., & Liu, Y. (2014). Big data: A survey. Mobile networks and applications, 19(2), pp. 171-209.
 Cheng, G., Han, J., Guo, L., Liu, Z., Bu, S., & Ren, J. (2015). Effective and efficient midlevel visual elements-oriented land-use classification using VHR remote sensing images. IEEE transactions on geoscience and remote sensing, 53(8), pp. 4238-4249.
 Cheng, G., Han, J., Zhou, P., & Guo, L. (2014). Multi-class geospatial object detection and geographic image classification based on collection of part detectors. ISPRS Journal of Photogrammetry and Remote Sensing, 98, pp. 119-132.
 Coale, A. J., & Hoover, E. M. (2015). Population growth and economic development: Princeton University Press.
 Congalton, R. G., & Green, K. (2008). Assessing the accuracy of remotely sensed data: principles and practices: CRC press.
 Corner, R. J., Dewan, A. M., & Chakma, S. (2014). Monitoring and prediction of land-use and land-cover (LULC) change Dhaka megacity (pp. 75-97): Springer.
 Coutts, A. M., Harris, R. J., Phan, T., Livesley, S. J., Williams, N. S., & Tapper, N. J. (2016). Thermal infrared remote sensing of urban heat: Hotspots, vegetation, and an assessment of techniques for use in urban planning. Remote Sensing of Environment, 186, pp. 637-651.
 Debnath, A., Debnath, J., Ahmed, I., & Pan, N. D. (2017). Change detection in Land use/cover of a hilly area by Remote Sensing and GIS technique: A study on Tropical forest hill range, Baramura, Tripura, Northeast India. International journal of geomatics and geosciences, 7(3), pp. 293-309.
 Desheng, L., & Xia, F. (2010). Assessing object-based classification: advantages and limitations. Remote Sensing Letters, 1(4), pp. 187-194.
 Dewan, A. M., & Yamaguchi, Y. (2009). Land use and land cover change in Greater Dhaka, Bangladesh: Using remote sensing to promote sustainable urbanization. Applied Geography, 29(3), pp. 390-401.
 Dronova, I., Gong, P., Wang, L., & Zhong, L. (2015). Mapping dynamic cover types in a large seasonally flooded wetland using extended principal component analysis and object-based classification. Remote Sensing of Environment, 158, pp. 193-206.
 Duro, D. C., Franklin, S. E., & Dubé, M. G. (2012). A comparison of pixel-based and object-based image analysis with selected machine learning algorithms for the classification of agricultural landscapes using SPOT-5 HRG imagery. Remote Sensing of Environment, 118, pp. 259-272.
 Elmhagen, B., Destouni, G., Angerbjörn, A., Borgström, S., Boyd, E., Cousins, S., . . . Hambäck, P. (2015). Interacting effects of change in climate, human population, land use, and water use on biodiversity and ecosystem services. Ecology and Society, 20(1)
 Farhani, S., & Ozturk, I. (2015). Causal relationship between CO 2 emissions, real GDP, energy consumption, financial development, trade openness, and urbanization in Tunisia. Environmental Science and Pollution Research, 22(20), pp. 15663-15676.
 Feng, L., Chen, B., Hayat, T., Alsaedi, A., & Ahmad, B. (2017). The driving force of water footprint under the rapid urbanization process: a structural decomposition analysis for Zhangye city in China. Journal of Cleaner Production, 163, pp. S322-S328.
 Fensham, R., & Fairfax, R. (2002). Aerial photography for assessing vegetation change: a review of applications and the relevance of findings for Australian vegetation history. Australian Journal of Botany, 50(4), pp. 415-429.
 Ferreira, N., Lage, M., Doraiswamy, H., Vo, H., Wilson, L., Werner, H., . . . Silva, C. (2015). Urbane: A 3d framework to support data driven decision making in urban development. Visual Analytics Science and Technology (VAST), 2015 IEEE Conference on.
 Garschagen, M., & Romero-Lankao, P. (2015). Exploring the relationships between urbanization trends and climate change vulnerability. Climatic Change, 133(1), pp. 37-52.
 Gokturk, S. B., Sumengen, B., Vu, D., Dalal, N., Yang, D., Lin, X., . . . Torresani, L. (2015). System and method for search portions of objects in images and features thereof: Google Patents.
 Government, N. S. (2007). Niger state (The Power State). Retrieved from http://nigerstate.blogspot.com.ng/
 Green, K., Kempka, D., & Lackey, L. (1994). Using remote sensing to detect and monitor land-cover and land-use change. Photogrammetric engineering and remote sensing, 60(3), pp. 331-337.
 Gu, W., Lv, Z., & Hao, M. (2017). Change detection method for remote sensing images based on an improved Markov random field. Multimedia Tools and Applications, 76(17), pp. 17719-17734.
 Guo, Y., & Shen, Y. (2015). Quantifying water and energy budgets and the impacts of climatic and human factors in the Haihe River Basin, China: 2. Trends and implications to water resources. Journal of Hydrology, 527, pp. 251-261.
 Hadi, F., Thapa, R. B., Helmi, M., Hazarika, M. K., Madawalagama, S., Deshapriya, L. N., & Center, G. (2016). Urban growth and land use/land cover modeling in Semarang, Central Java, Indonesia: Colombo-Srilanka, ACRS2016.
 Hagolle, O., Huc, M., Villa Pascual, D., & Dedieu, G. (2015). A multi-temporal and multi-spectral method to estimate aerosol optical thickness over land, for the atmospheric correction of FormoSat-2, LandSat, VENμS and Sentinel-2 images. Remote Sensing, 7(3), pp. 2668-2691.
 Hegazy, I. R., & Kaloop, M. R. (2015). Monitoring urban growth and land use change detection with GIS and remote sensing techniques in Daqahlia governorate Egypt. International Journal of Sustainable Built Environment, 4(1), pp. 117-124.
 Henderson, J. V., Storeygard, A., & Deichmann, U. (2017). Has climate change driven urbanization in Africa? Journal of development economics, 124, pp. 60-82.
 Hu, L., & Brunsell, N. A. (2015). A new perspective to assess the urban heat island through remotely sensed atmospheric profiles. Remote Sensing of Environment, 158, pp. 393-406.
 Hughes, S. J., Cabral, J. A., Bastos, R., Cortes, R., Vicente, J., Eitelberg, D., . . . Santos, M. (2016). A stochastic dynamic model to assess land use change scenarios on the ecological status of fluvial water bodies under the Water Framework Directive. Science of the Total Environment, 565, pp. 427-439.
 Hussain, M., Chen, D., Cheng, A., Wei, H., & Stanley, D. (2013). Change detection from remotely sensed images: From pixel-based to object-based approaches. ISPRS Journal of Photogrammetry and Remote Sensing, 80, pp. 91-106.
 Hyyppä, J., Hyyppä, H., Inkinen, M., Engdahl, M., Linko, S., & Zhu, Y.-H. (2000). Accuracy comparison of various remote sensing data sources in the retrieval of forest stand attributes. Forest Ecology and Management, 128(1-2), pp. 109-120.
 Jiang, L., Wu, F., Liu, Y., & Deng, X. (2014). Modeling the impacts of urbanization and industrial transformation on water resources in China: an integrated hydro-economic CGE analysis. Sustainability, 6(11), pp. 7586-7600.
 Jin, S., Yang, L., Zhu, Z., & Homer, C. (2017). A land cover change detection and classification protocol for updating Alaska NLCD 2001 to 2011. Remote Sensing of Environment, 195, pp. 44-55.
 Joshi, N., Baumann, M., Ehammer, A., Fensholt, R., Grogan, K., Hostert, P., . . . Mitchard, E. T. (2016). A review of the application of optical and radar remote sensing data fusion to land use mapping and monitoring. Remote Sensing, 8(1), p 70.
 Kaliraj, S., Chandrasekar, N., & Magesh, N. (2015). Evaluation of multiple environmental factors for site-specific groundwater recharge structures in the Vaigai River upper basin, Tamil Nadu, India, using GIS-based weighted overlay analysis. Environmental earth sciences, 74(5), pp. 4355-4380.
 Koop, S. H., & van Leeuwen, C. J. (2015). Assessment of the sustainability of water resources management: A critical review of the City Blueprint approach. Water Resources Management, 29(15), pp. 5649-5670.
 Kumar, P., Masago, Y., Mishra, B. K., & Fukushi, K. (2018). Evaluating future stress due to combined effect of climate change and rapid urbanization for Pasig-Marikina River, Manila. Groundwater for Sustainable Development, 6, pp. 227-234.
 Lang, S. (2008). Object-based image analysis for remote sensing applications: modeling reality–dealing with complexity Object-based image analysis (pp. 3-27): Springer.
 Li, M., Zang, S., Zhang, B., Li, S., & Wu, C. (2014). A review of remote sensing image classification techniques: The role of spatio-contextual information. European Journal of Remote Sensing, 47(1), pp. 389-411.
 Liddle, B. (2014). Impact of population, age structure, and urbanization on carbon emissions/energy consumption: evidence from macro-level, cross-country analyses. Population and Environment, 35(3), pp. 286-304.
 Lillesand, T., Kiefer, R. W., & Chipman, J. (2014). Remote sensing and image interpretation: John Wiley & Sons.
 Liu, Y., Wang, Y., Peng, J., Du, Y., Liu, X., Li, S., & Zhang, D. (2015). Correlations between urbanization and vegetation degradation across the world’s metropolises using DMSP/OLS nighttime light data. Remote Sensing, 7(2), pp. 2067-2088.
 López, E., Bocco, G., Mendoza, M., & Duhau, E. (2001). Predicting land-cover and land-use change in the urban fringe: a case in Morelia city, Mexico. Landscape and urban planning, 55(4), pp. 271-285.
 Luo, M., & Lau, N.-C. (2017). Heat waves in southern China: Synoptic behavior, long-term change, and urbanization effects. Journal of Climate, 30(2), pp. 703-720.
 Mahboob, M. A., Atif, I., & Iqbal, J. (2015). Remote sensing and GIS applications for assessment of urban sprawl in Karachi, Pakistan. Science, Technology and Development, 34(3), pp. 179-188.
 Mallinis, G., Koutsias, N., Tsakiri-Strati, M., & Karteris, M. (2008). Object-based classification using Quickbird imagery for delineating forest vegetation polygons in a Mediterranean test site. ISPRS Journal of Photogrammetry and Remote Sensing, 63(2), pp. 237-250.
 Mas, J.-F., Velázquez, A., Díaz-Gallegos, J. R., Mayorga-Saucedo, R., Alcántara, C., Bocco, G., . . . Pérez-Vega, A. (2004). Assessing land use/cover changes: a nationwide multidate spatial database for Mexico. International Journal of Applied Earth Observation and Geoinformation, 5(4), pp. 249-261.
 Mathew, A., Chaudhary, R., Gupta, N., Khandelwal, S., & Kaul, N. (2015). Study of Urban Heat Island Effect on Ahmedabad City and Its Relationship with Urbanization and Vegetation Parameters. International Journal of Computer & Mathematical Science, 4, pp. 2347-2357.
 Megahed, Y., Cabral, P., Silva, J., & Caetano, M. (2015). Land cover mapping analysis and urban growth modelling using remote sensing techniques in greater Cairo region—Egypt. ISPRS International Journal of Geo-Information, 4(3), pp. 1750-1769.
 Metternicht, G. (2001). Assessing temporal and spatial changes of salinity using fuzzy logic, remote sensing and GIS. Foundations of an expert system. Ecological modelling, 144(2-3), pp. 163-179.
 Miller, R. B., & Small, C. (2003). Cities from space: potential applications of remote sensing in urban environmental research and policy. Environmental Science & Policy, 6(2), pp. 129-137.
 Mirzaei, P. A. (2015). Recent challenges in modeling of urban heat island. Sustainable Cities and Society, 19, pp. 200-206.
 Mohammed, I., Aboh, H., & Emenike, E. (2007). A regional geoelectric investigation for groundwater exploration in Minna area, north west Nigeria. Science World Journal, 2(4)
 Morenikeji, G., Umaru, E., Liman, S., & Ajagbe, M. (2015). Application of Remote Sensing and Geographic Information System in Monitoring the Dynamics of Landuse in Minna, Nigeria. International Journal of Academic Research in Business and Social Sciences, 5(6), pp. 320-337.
 Mukherjee, A. B., Krishna, A. P., & Patel, N. (2018). Application of Remote Sensing Technology, GIS and AHP-TOPSIS Model to Quantify Urban Landscape Vulnerability to Land Use Transformation Information and Communication Technology for Sustainable Development (pp. 31-40): Springer.
 Myint, S. W., Gober, P., Brazel, A., Grossman-Clarke, S., & Weng, Q. (2011). Per-pixel vs. object-based classification of urban land cover extraction using high spatial resolution imagery. Remote Sensing of Environment, 115(5), pp. 1145-1161.
 Nemmour, H., & Chibani, Y. (2006). Multiple support vector machines for land cover change detection: An application for mapping urban extensions. ISPRS Journal of Photogrammetry and Remote Sensing, 61(2), pp. 125-133.
 Niu, X., & Ban, Y. (2013). Multi-temporal RADARSAT-2 polarimetric SAR data for urban land-cover classification using an object-based support vector machine and a rule-based approach. International journal of remote sensing, 34(1), pp. 1-26.
 Nogueira, K., Penatti, O. A., & dos Santos, J. A. (2017). Towards better exploiting convolutional neural networks for remote sensing scene classification. Pattern Recognition, 61, pp. 539-556.
 Oguz, H., & Zengin, M. (2011). Analyzing land use/land cover change using remote sensing data and landscape structure metrics: a case study of Erzurum, Turkey. Fresenius Environmental Bulletin, 20(12), pp. 3258-3269.
 Pohl, C., & Van Genderen, J. L. (1998). Review article multisensor image fusion in remote sensing: concepts, methods and applications. International journal of remote sensing, 19(5), pp. 823-854.
 Price, O., & Bradstock, R. (2014). Countervailing effects of urbanization and vegetation extent on fire frequency on the Wildland Urban Interface: Disentangling fuel and ignition effects. Landscape and urban planning, 130, pp. 81-88.
 Prosdocimi, I., Kjeldsen, T., & Miller, J. (2015). Detection and attribution of urbanization effect on flood extremes using nonstationary flood‐frequency models. Water resources research, 51(6), pp. 4244-4262.
 Rawat, J., & Kumar, M. (2015). Monitoring land use/cover change using remote sensing and GIS techniques: A case study of Hawalbagh block, district Almora, Uttarakhand, India. The Egyptian Journal of Remote Sensing and Space Science, 18(1), pp. 77-84.
 Rokni, K., Ahmad, A., Solaimani, K., & Hazini, S. (2015). A new approach for surface water change detection: Integration of pixel level image fusion and image classification techniques. International Journal of Applied Earth Observation and Geoinformation, 34, pp. 226-234.
 Sakieh, Y., Amiri, B. J., Danekar, A., Feghhi, J., & Dezhkam, S. (2015). Simulating urban expansion and scenario prediction using a cellular automata urban growth model, SLEUTH, through a case study of Karaj City, Iran. Journal of Housing and the Built Environment, 30(4), pp. 591-611.
 Santra, A. (2016). Land Surface Temperature Estimation and Urban Heat Island Detection: A Remote Sensing Perspective. Remote Sensing Techniques and GIS Applications in Earth and Environmental Studies, p 16.
 Shrivastava, L., & Nag, S. (2017). MONITORING OF LAND USE/LAND COVER CHANGE USING GIS AND REMOTE SENSING TECHNIQUES: A CASE STUDY OF SAGAR RIVER WATERSHED, TRIBUTARY OF WAINGANGA RIVER OF MADHYA PRADESH, INDIA.
 Shuaibu, M., & Sulaiman, I. (2012). Application of remote sensing and GIS in land cover change detection in Mubi, Adamawa State, Nigeria. J Technol Educ Res, 5, pp. 43-55.
 Song, B., Li, J., Dalla Mura, M., Li, P., Plaza, A., Bioucas-Dias, J. M., . . . Chanussot, J. (2014). Remotely sensed image classification using sparse representations of morphological attribute profiles. IEEE transactions on geoscience and remote sensing, 52(8), pp. 5122-5136.
APA, Harvard, Vancouver, ISO, and other styles
33

Tan, Huijun, Nathan McNeil, John MacArthur, and Kelly Rodgers. "Evaluation of a Transportation Incentive Program for Affordable Housing Residents." Transportation Research Record: Journal of the Transportation Research Board, February 28, 2021, 036119812199743. http://dx.doi.org/10.1177/0361198121997431.

Full text
Abstract:
This study looks at initial results from the Transportation Wallet for Residents of Affordable Housing pilot program launched by the City of Portland’s Bureau of Transportation. The program provides a set of transportation incentives for low-income participants, including a US$308 prepaid Visa card that could be applied to public transit or other transportation services, a free bike share membership, and access to discounted rates on several services. A survey was conducted with the program’s participants (278 total responses) to understand how they used the Transportation Wallet and how the program helped them use different transport modes to get around. The main findings include: (1) The financial support of this program encouraged some participants to use new mobility services (including Uber/Lyft, bike share, and e-scooter) that they had never used before; (2) the program increased access for participants, helping them make more trips and, for some, get to places they otherwise could not have gone; and (3) transportation fairs, where participants could learn about services and talk to providers, promoted both mode sign-up and mode usage, particularly for new mobility services and a reduced fare transit program. The survey results also point to some opportunities to improve the program. Participant feedback suggests that transportation agencies do more to streamline and educate participants on how to use new mobility services and coordinate different service providers to optimize seamless services for participants. The paper provides insights into the implementation and effectiveness of a transportation financial incentive program for low-income populations.
APA, Harvard, Vancouver, ISO, and other styles
34

McCarthy, G. J., O. E. Manz, R. J. Stevenson, D. J. Hassett, and G. H. Groenewold. "Western Fly Ash Research, Development and Data Center." MRS Proceedings 65 (1985). http://dx.doi.org/10.1557/proc-65-165.

Full text
Abstract:
With financial support from utilities and ash brokers*, the Western Fly Ash Research, Development and Data Center was established under the aegis of the North Dakota Mining and Mineral Resources Research Institute in August of 1985. Research will be performed by the two North Dakota universities in Grand Forks and Fargo. The fundamental objective of the Center is to enhance the knowledge base of the properties (chemical, mineralogical and physical) and reactions of the coal by-products (principally fly ash, but including bottom ash and FGD waste) produced in the Midwestern and Great Plains regions of the US. Most of the study specimens will be high-calcium (ASTM Class C) ash derived from low-rank lignite and subbituminous coals mined in North Dakota, Montana and Wyoming, although ash from other regions and coals is also being studied. The enhanced knowledge base should lead to more widespread utilization of these by-products [1,2] or, where this is necessary, to their safe and cost-effective disposal [3].
APA, Harvard, Vancouver, ISO, and other styles
35

Szuflita-Żurawska, Magdalena, and Anna Wałek. "Solving legal puzzles is not easy – supporting creating Data Management Plans in three scientific disciplines: chemistry, economics, and civil engineering." Septentrio Conference Series, no. 4 (September 22, 2020). http://dx.doi.org/10.7557/5.5576.

Full text
Abstract:
The Open Science Competence Center at the Gdańsk University of Technology Library was established under the Bridge of Data project at the end of 2018. Our main goals include providing support for the academic community on the broad range of issues associated with Open Science, especially Open Research Data. Our team of professionals helps researchers with many topics, such as "what kinds of data you need to share", "how to make your data openly available to others", or "how to create a Data Management Plan" – the last of which has recently been the most popular and demanding service.
 One of the main challenges in supporting academic staff with Data Management Plans is dealing with the legal impediments to providing open access to, and reuse of, research data from publicly funded scientific projects. A lack of understanding of the legal issues involved in opening research is a significant barrier to facilitating Open Science. Much publicly funded research requires a Data Management Plan that, among other items, provides information about ownership and user rights.
 One of the most common activities for scholars is choosing which license (if any) to use when disseminating their scientific output. In many cases, however, settling on the right license for research data is not enough. Academic staff face considerable tension arising from a lack of clarity around legal requirements and obstacles. Researchers' growing need to understand and describe conflicting issues (e.g. patenting) leads them to seek professional, knowledgeable support at the university.
 We examine the most frequent legal issues arising in DMPs from three scientific disciplines: chemistry (e.g. ethical papers), economics (e.g. the data value cycle), and civil engineering (e.g. the complexity of construction data). In our presentation, we introduce the main problems identified and show how mapping and benchmarking recurring problems across these disciplines helps us establish more efficient legal support for researchers.
APA, Harvard, Vancouver, ISO, and other styles
36

Jones, Erick C., Gohar Azeem, Erick C. Jones, et al. "Understanding the Last Mile Transportation Concept Impacting Underserved Global Communities to Save Lives During COVID-19 Pandemic." Frontiers in Future Transportation 2 (September 23, 2021). http://dx.doi.org/10.3389/ffutr.2021.732331.

Full text
Abstract:
Underserved populations can be at risk during times of crisis unless there is strong involvement from government agencies such as local and state health departments and the federal Centers for Disease Control and Prevention (CDC). The COVID-19 pandemic was a crisis of a different proportion, creating a different type of burden on these agencies. Vulnerable communities, including elderly populations and communities of color, have been especially hard hit by the pandemic. This forced agencies to change their strategies and supply chains so that all populations could receive therapeutics. The National Science Foundation [National Science Foundation (NSF) Award Abstract # 2028612] funded the RAID Labs to help federal agencies with these strategies. This paper is based on that NSF-funded grant, which investigated supply chain strategies that would minimize the impact on underserved populations during the pandemic. The study identified the importance of the last mile: the last mile transportation concept was critical in saving lives among underserved populations during the pandemic. The supply chain model maximizes social good by sending drugs or vaccines to the communities that need them the most, regardless of ability to pay. The outcome of this study helped us prioritize the communities that need vaccines the most, informing our supply chain model to shift resources to these areas and demonstrating the value of real-time prioritization in the COVID-19 supply chain. This paper provides information that can be used in our healthcare supply chain model to ensure timely delivery of vaccines and supplies to the most vulnerable COVID-19 patients, so that the overall impact of COVID-19 can be minimized. The use of electric vehicles for last mile transportation can also contribute significantly to fighting climate change.
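A minimal sketch of the prioritization idea described above is given below: communities are ranked by need rather than ability to pay, and a limited stock of doses is allocated to the most vulnerable communities first. The community names, populations, vulnerability scores, and the greedy allocation rule are illustrative assumptions, not elements of the NSF-funded model itself.

communities = [
    {"name": "A", "population": 12_000, "vulnerability": 0.82},
    {"name": "B", "population": 30_000, "vulnerability": 0.41},
    {"name": "C", "population": 8_000,  "vulnerability": 0.77},
]

def allocate_doses(communities, doses_available):
    """Greedy allocation: the most vulnerable communities are served first."""
    allocation = {}
    for c in sorted(communities, key=lambda c: c["vulnerability"], reverse=True):
        give = min(c["population"], doses_available)
        allocation[c["name"]] = give
        doses_available -= give
        if doses_available == 0:
            break
    return allocation

print(allocate_doses(communities, doses_available=25_000))
# -> {'A': 12000, 'C': 8000, 'B': 5000}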
APA, Harvard, Vancouver, ISO, and other styles
37

Barnard-Kelly, Katharine, Ryan Charles Kelly, Daniel Chernavvsky, Rayhan Lal, Lauren Cohen, and Amar Ali. "Feasibility of Spotlight Consultations Tool in Routine Care: Real-World Evidence." Journal of Diabetes Science and Technology, March 12, 2021, 193229682199408. http://dx.doi.org/10.1177/1932296821994088.

Full text
Abstract:
Background: Burnout in people with diabetes and healthcare professionals (HCPs) is at an all-time high. Spotlight AQ, a novel “smart” adaptive patient questionnaire, is designed to improve consultations by rapidly identifying patient priorities and presenting these in the context of best-practice care pathways to aid consultations. We aimed to determine Spotlight AQ’s feasibility in routine care. Materials and Methods: The Spotlight prototype tool was trialed at three centers: two UK primary care centers and one US specialist center (June-September 2020). Participants with type 1 (T1D) or type 2 diabetes (T2D) completed the questionnaire prior to their routine consultations. Results were immediately available and formed the basis of the clinical discussion and decision-making within the clinic visit. Results: A convenience sample of 49 adults took part: n=31 with T1D (n=18 female) and n=18 with T2D (n=10 male, n=4 female, n=4 gender unreported). Each identified two priority concerns. “Psychological burden of diabetes” was the most common priority concern (T1D n=27, 87.1%), followed by “gaining more skills about particular aspects of diabetes” (T1D n=19, 61.3%), “improving support around me” (n=8, 25.8%) and “diabetes-related treatment issues” (n=8, 25.8%). Burden of diabetes was widespread, as was lack of confidence around self-management. Similarly, psychological burden of diabetes was the primary concern for participants with T2D (n=18, 100%), followed by “gaining more skills about aspects of diabetes” (n=7, 38.9%), “improving support around me” (n=7, 38.9%) and “diabetes-related treatment issues” (n=4, 22.2%). Conclusions: Spotlight AQ is acceptable and feasible for use in routine care. Gaining more skills and addressing the psychological burden of diabetes are high-priority areas that must be addressed to reduce high levels of distress.
APA, Harvard, Vancouver, ISO, and other styles
38

Patil, Rajkumar Bhimgonda, Basavraj S. Kothavale, Laxman Yadu Waghmode, and Michael Pecht. "Life cycle cost analysis of a computerized numerical control machine tool: a case study from Indian manufacturing industry." Journal of Quality in Maintenance Engineering ahead-of-print, ahead-of-print (2020). http://dx.doi.org/10.1108/jqme-07-2019-0069.

Full text
Abstract:
Purpose: Life cycle cost (LCC) analysis is one of the key parameters in designing a sustainable product or system. The application of life cycle costing in the manufacturing industries is still limited due to several factors, and a lack of understanding of LCC analysis methodologies is one of the key barriers. This paper presents a generalized framework for LCC analysis of repairable systems using reliability and maintainability principles. Design/methodology/approach: The developed LCC analysis framework and stochastic point processes are applied to the analysis of a typical computerized numerical control turning center (CNCTC), and governing equations for acquisition cost, operation cost, failure cost, support cost and net salvage value are developed. The LCC of the CNCTC is evaluated for the renewal process (RP) and minimal repair process (MRP) approaches. Findings: The LCC analysis of the CNCTC reveals that the acquisition cost is only 7.59% of the LCC, whereas the operation, failure and support costs dominate, contributing nearly 93% of the LCC. The LCC per day for RP is US$1.03 higher than that for MRP. The detailed LCC analysis also identifies the critical components of the CNCTC: spindle motor, spindle motor cooling fan, spindle belt, drawbar, spindle bearing, oil seals, hydraulic hose, solenoid valve, tool holder, lubrication pump motor system, lubrication hose, coolant pump motor system, coolant hose, supply cables, and drive battery. Originality/value: The developed LCC framework for a repairable system can be applied to other repairable systems with appropriate modifications. The LCC analysis of the CNCTC shows that the procurement decision for a product or system should be based on LCC, not only on acquisition cost. Optimum utilization of consumables such as cutting tools, coolant, oil and lubricant can reduce operation cost, and the use of high-efficiency electric motors and recommended consumables can prolong the life of several components of a system. Due consideration of these parameters at the product design stage will therefore decrease failure and support costs and ultimately the LCC.
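As a rough illustration of the cost roll-up described in this abstract, the Python sketch below sums the same five cost categories (acquisition, operation, failure, support, and net salvage value) into a single LCC figure and reports each component's share. All of the figures, the 15-year life, and the function itself are hypothetical placeholders chosen only so that the acquisition share lands near the 7.59% reported above; they are not values from the study.

def life_cycle_cost(acquisition, operation, failure, support, salvage_value):
    # LCC = acquisition + operation + failure + support - net salvage value
    return acquisition + operation + failure + support - salvage_value

costs = {
    "acquisition": 60_000.0,   # purchase and installation (hypothetical)
    "operation":   320_000.0,  # energy, consumables, operator time (hypothetical)
    "failure":     250_000.0,  # repair and downtime costs of failures (hypothetical)
    "support":     170_000.0,  # preventive maintenance and spares (hypothetical)
}
salvage = 10_000.0             # resale/scrap value at end of life (hypothetical)

lcc = life_cycle_cost(costs["acquisition"], costs["operation"],
                      costs["failure"], costs["support"], salvage)

for name, value in costs.items():
    print(f"{name:12s}: {100 * value / lcc:5.1f}% of LCC")
print(f"LCC per day over an assumed 15-year life: US$ {lcc / (15 * 365):.2f}")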
APA, Harvard, Vancouver, ISO, and other styles
39

Tan, X. Gary, Venkata Siva Sai Sujith Sajja, Maria M. D’Souza, et al. "A Methodology to Compare Biomechanical Simulations With Clinical Brain Imaging Analysis Utilizing Two Blunt Impact Cases." Frontiers in Bioengineering and Biotechnology 9 (July 1, 2021). http://dx.doi.org/10.3389/fbioe.2021.654677.

Full text
Abstract:
According to the US Defense and Veterans Brain Injury Center (DVBIC) and Centers for Disease Control and Prevention (CDC), mild traumatic brain injury (mTBI) is a common form of head injury. Medical imaging data provides clinical insight into tissue damage/injury and injury severity, and helps medical diagnosis. Computational modeling and simulation can predict the biomechanical characteristics of such injury, and are useful for development of protective equipment. Integration of techniques from computational biomechanics with medical data assessment modalities (e.g., magnetic resonance imaging or MRI) has not yet been used to predict injury, support early medical diagnosis, or assess effectiveness of personal protective equipment. This paper presents a methodology to map computational simulations with clinical data for interpreting blunt impact TBI utilizing two clinically different head injury case studies. MRI modalities, such as T1, T2, diffusion-weighted imaging (DWI) and apparent diffusion coefficient (ADC), were used for simulation comparisons. The two clinical cases have been reconstructed using finite element analysis to predict head biomechanics based on medical reports documented by a clinician. The findings are mapped to simulation results using image-based clinical analyses of head impact injuries, and modalities that could capture simulation results have been identified. In case 1, the MRI results showed lesions in the brain with skull indentation, while case 2 had lesions in both coup and contrecoup sides with no skull deformation. Simulation data analyses show that different biomechanical measures and thresholds are needed to explain different blunt impact injury modalities; specifically, strain rate threshold corresponds well with brain injury with skull indentation, while minimum pressure threshold corresponds well with coup–contrecoup injury; and DWI has been found to be the most appropriate modality for MRI data interpretation. As the findings from these two cases are substantiated with additional clinical studies, this methodology can be broadly applied as a tool to support injury assessment in head trauma events and to improve countermeasures (e.g., diagnostics and protective equipment design) to mitigate these injuries.
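The abstract's observation that different biomechanical measures and thresholds explain different injury modalities can be pictured with a small sketch like the one below, which checks simulated peak responses against two criteria. The threshold values, units, and classification rule are assumptions made purely for illustration; the abstract does not publish these numbers.

STRAIN_RATE_THRESHOLD = 50.0    # 1/s, hypothetical value
MIN_PRESSURE_THRESHOLD = -90.0  # kPa, hypothetical value (negative pressure)

def classify_injury(peak_strain_rate, min_pressure, skull_indentation):
    """Flag which blunt-impact injury modality the simulated responses point to."""
    findings = []
    if skull_indentation and peak_strain_rate >= STRAIN_RATE_THRESHOLD:
        findings.append("injury with skull indentation (strain-rate criterion)")
    if min_pressure <= MIN_PRESSURE_THRESHOLD:
        findings.append("coup-contrecoup injury (minimum-pressure criterion)")
    return findings or ["below both thresholds"]

# Hypothetical peak responses loosely mirroring the two clinical cases
print(classify_injury(peak_strain_rate=72.0, min_pressure=-40.0, skull_indentation=True))
print(classify_injury(peak_strain_rate=35.0, min_pressure=-120.0, skull_indentation=False))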
APA, Harvard, Vancouver, ISO, and other styles
40

Lane-Fall, Meghan B., Athena Christakos, Gina C. Russell, et al. "Handoffs and transitions in critical care—understanding scalability: study protocol for a multicenter stepped wedge type 2 hybrid effectiveness-implementation trial." Implementation Science 16, no. 1 (2021). http://dx.doi.org/10.1186/s13012-021-01131-1.

Full text
Abstract:
Background: The implementation of evidence-based practices in critical care faces specific challenges, including intense time pressure and patient acuity. These challenges result in evidence-to-practice gaps that diminish the impact of proven-effective interventions for patients requiring intensive care unit support. Research is needed to understand and address implementation determinants in critical care settings. Methods: The Handoffs and Transitions in Critical Care—Understanding Scalability (HATRICC-US) study is a Type 2 hybrid effectiveness-implementation trial of standardized operating room (OR) to intensive care unit (ICU) handoffs. This mixed methods study will use a stepped wedge design with randomized rollout to test the effectiveness of a customized protocol for structuring communication between clinicians in the OR and the ICU. The study will be conducted in twelve ICUs (10 adult, 2 pediatric) based in five United States academic health systems. Contextual inquiry incorporating implementation science, systems engineering, and human factors engineering approaches will guide both protocol customization and identification of protocol implementation determinants. Implementation mapping will be used to select appropriate implementation strategies for each setting. Human-centered design will be used to create a digital toolkit for dissemination of study findings. The primary implementation outcome will be fidelity to the customized handoff protocol (unit of analysis: handoff). The primary effectiveness outcome will be a composite measure of new-onset organ failure cases (unit of analysis: ICU). Discussion: The HATRICC-US study will customize, implement, and evaluate standardized procedures for OR to ICU handoffs in a heterogeneous group of United States academic medical center intensive care units. Findings from this study have the potential to improve postsurgical communication, decrease adverse clinical outcomes, and inform the implementation of other evidence-based practices in critical care settings. Trial registration: ClinicalTrials.gov identifier: NCT04571749. Date of registration: October 1, 2020.
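To make the stepped wedge design concrete, the sketch below assigns each ICU a randomized crossover step: every unit begins under usual care and switches to the customized handoff protocol at its assigned step, so that all units have adopted it by the end of the trial. The number of ICUs (twelve) comes from the abstract; the four steps, the unit labels, and the randomization scheme are assumptions for illustration only.

import random

def stepped_wedge_schedule(units, steps, seed=0):
    """Randomly assign each unit the step at which it crosses over to the intervention."""
    rng = random.Random(seed)
    order = list(units)
    rng.shuffle(order)
    # spread the shuffled units as evenly as possible across the crossover steps
    schedule = {}
    for step in range(steps):
        for unit in order[step::steps]:
            schedule[unit] = step + 1
    return schedule

icus = [f"ICU-{i:02d}" for i in range(1, 13)]  # twelve ICUs, per the abstract
print(stepped_wedge_schedule(icus, steps=4))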
APA, Harvard, Vancouver, ISO, and other styles
41

Dix, R., R. Pal, D. A. Brown, and M. Makhous. "Development of a Pedal Powered Wheelchair." Journal of Medical Devices 3, no. 2 (2009). http://dx.doi.org/10.1115/1.3136756.

Full text
Abstract:
A first student project to put pedals on a wheelchair for exercise and propulsion was unsuccessful. The need remained, and in June of 2005 the “Eureka” event occurred. Seeing a five-year-old on her training-wheel-equipped bicycle suggested that a fifth wheel could be added in the center between the wheelchair's two large rear wheels, and a mast supported by the fifth wheel's axle could extend forward to support a front axle and pedal set. A chain drive completed the propulsion system. There are no pedal-powered wheelchairs currently on the market. Around 2001 a product (EZChair) without retractable pedals was on the market but withdrawn. A team at the University of Buffalo invented and patented a pedal-powered wheelchair in 1993 (US Patent 5,242,179), but it was not commercialized. Also, a Japanese company designed and built a series of fifth-wheel wheelchair designs. Between 2006 and late 2008 we built many prototypes incorporating geometries that permitted retracting the pedal. For compactness, a “Pedalong” with three telescoping tubes was built, but it proved impossible to secure tightly. In the next design, twin telescoping tubes passing above and to the rear of the rear axle provided the desired extension, and a clamp at the front of the outer tube provided tightness of the assembly. In the Northwestern research program (see below), there was some success, but awkwardness in operation prevented commercialization. In October 2008 a major design change was begun, from a fifth wheel in the center to powering the two standard rear wheels. This required a new chain path geometry and the addition of a differential to the drive train. With the new design, user control, arm-powering and braking through the rear wheels are retained, and chair stability is improved. Twelve individuals with chronic post-stroke hemiplegia (>6 months post-stroke event) participated in a study to examine the metabolic energy expended when participants performed a 6-minute walk test, a 6-minute leg-propelled wheelchair trial (using the Pedalong), and a 6-minute arm-propelled wheelchair trial. VO2, VCO2, and distance traveled were measured using a portable metabolic cart system and a wheel-based distance measurement system. The Pedalong and walking trials showed equivalent oxygen consumption levels, but manual pushing was, on average, significantly lower. All three modes (walking, leg-propelled and arm-propelled) resulted in similar distances traveled within the 6-minute period. The leg-propelled trials generated the greatest amount of VCO2 during expiration compared with the other modes. This means that more of the available oxygen is being utilized (metabolized) during the leg-propelled mode, and so a greater number of calories were being burned during this 6-minute test.
APA, Harvard, Vancouver, ISO, and other styles
42

Belsky, Kimberly, and Janice Smiell. "Navigating the Regulatory Pathways and Requirements for Tissue-Engineered Products in the Treatment of Burns in the United States." Journal of Burn Care & Research, December 10, 2020. http://dx.doi.org/10.1093/jbcr/iraa210.

Full text
Abstract:
In the burn treatment landscape, a variety of skin substitutes, human tissue-sourced products, and other products are being developed based on tissue engineering (ie, the combination of scaffolds, cells, and biologically active molecules into functional tissue with the goal of restoring, maintaining, or improving damaged tissue or whole organs) to provide dermal replacement, prevent infection, and prevent or mitigate scarring. Skin substitutes can have a variety of compositions (cellular vs acellular), origins (human, animal, or synthetically derived), and complexities (dermal or epidermal only vs composite). The regulation of tissue-engineered products in the United States occurs by one of several pathways established by the US Food and Drug Administration, including a Biologics License Application, a 510(k) (Class I and Class II devices), Premarket Approval (Class III devices), or a human cells, tissues, and cellular and tissue-based products designation. Key differentiators among these regulatory classifications include the amount and type of data required to support filing. For example, a Biologics License Application requires a clinical trial(s) and evaluation of safety and efficacy by the Center for Biologics Evaluation and Research. Applicable approved biologic products must also comply with submission of advertising and promotional materials per regulations. This review provides a description of, and associated requirements for, the various regulatory pathways for the approval or clearance of tissue-engineered products. Some of the regulatory challenges for commercialization of such products for the treatment of burns will be explored.
APA, Harvard, Vancouver, ISO, and other styles
43

Ramos, Idalia, Ramón A. Rivera, Nicholas J. Pinto, et al. "The Humacao Strange Matter Exhibition: Prem Brings Materials Science and Nanotechnology to Puerto Rican Communities." MRS Proceedings 1105 (2008). http://dx.doi.org/10.1557/proc-1105-oo03-01.

Full text
Abstract:
The “UPRH-PENN Partnership for Research and Education in Materials” (PREM) is sponsored by the Division of Materials Research of the National Science Foundation and since 2004 it has contributed to increasing the participation of Puerto Rican men and women in materials research and education. The program integrates K-12, undergraduates, graduate students, and faculty collaborating through research and education in a partnership between the University of Puerto Rico at Humacao (UPRH) along with UPRC and UPRRP and the University of Pennsylvania Materials Research Science & Engineering Center. The UPRH-PREM has strong links with schools in the eastern region of Puerto Rico and has successfully integrated K-12 students and teachers into the program through workshops, web resources, open houses, a Summer Program and research experiences during the academic year. In an effort to integrate the wider community into the outreach efforts, from October to December 2007 PREM hosted the presentation of the Interactive Materials Science Exhibition “Strange Matter”. “Strange Matter” was designed and produced by the Materials Research Society (MRS) in conjunction with the Ontario Science Centre in Toronto. Funding for the project is being provided by the National Science Foundation and industrial partners 3M, Dow, Ford, Intel, and Alcan. The exhibition brings interactive activities that highlight the fact that materials science is all around us. This exhibition is the first and only one on the island of Puerto Rico and was presented at UPRH's Casa Roig Museum, a historic plantation house located in downtown Humacao. Local scientists complemented the exhibition with live demonstrations and talks to provide deeper explanations and motivate young visitors to study materials. To make the exhibition possible, PREM integrated UPR-Humacao administration, faculty, students, non-teaching workers, Casa Roig staff, schools, Humacao municipality, local businesses and individual citizens. Dozens of students, faculty and other members of the community were mobilized as volunteers to support all aspects of the exhibition.
APA, Harvard, Vancouver, ISO, and other styles
44

Bretag, Tracey. "Editorial Volume 6 (1)." International Journal for Educational Integrity 6, no. 1 (2010). http://dx.doi.org/10.21913/ijei.v6i1.669.

Full text
Abstract:
I am pleased to introduce the next issue of the International Journal for Educational Integrity. This issue includes revised papers from two key conferences in 2009: the 4th Asia Pacific Conference on Educational Integrity (4APCEI, Wollongong University, Australia), and the Center for Academic Integrity Annual International Conference (Washington University, US), as well as two original papers. The issue is truly international, with authors representing the United States, the Ukraine and Australia. Daniel Wueste, Director of the Rutland Institute for Ethics, and Teddi Fishman, Director of the recently renamed International Center for Academic Integrity, provide a framing piece for the issue, with their paper from 4APCEI which explores the limitations of customer service approaches in higher education. Wueste and Fishman, while acknowledging the seductive appeal of likening students to "customers", particularly as part of the "total quality movement", provide a rigorous critique of this potentially dangerous discourse. The authors demonstrate how education differs quite significantly from commerce and argue that “looking to professional practice for help in understanding the educational enterprise holds considerably more promise than looking to business practice”. Wueste and Fishman are forthright in their assertion that education is based on a reciprocal relationship between teacher and learner (rather than a transaction between vendor and vendee), and that intrinsic to this relationship is a shared commitment to integrity.
 
 Following on from Wueste's and Fishman's call for a re-articulation of values in higher education, are two papers from the CAI conference. Joanna Gilmore, Denise Strickland, Briana Timmerman, Michelle Maher (all from the University of South Carolina) and David Feldon (University of Virginia), investigate plagiarism by graduate students. Working with a sample of 113 masters and doctoral students from three university sites, representing technology, engineering, mathematics, or mathematics or science education, the researchers examined students' research proposals and conducted semi-structured interviews. Their key finding was that while plagiarism was a prevalent issue (almost 40% of the proposals contained notable plagiarism), this appeared to be largely unintentional due to a lack of disciplinary enculturation. Notably, this lack of disciplinary enculturation was further compounded for English as a Second Language (ESL) students at the pre-proposal stage, who also had to grapple with cultural differences, English language issues and a variety of other factors.
 
 William Hanson from Anderson University in California uses grounded theory and graph theory based analysis to create a "faculty ethics logic model" based on his research at a small, religiously affiliated university. Hanson sought to operationalise participant realities of the primary forces that drive teaching or resolving ethics issues and discovered that informal elements, rather than formal institutional influence, played a major role in response strategies. In particular, faculty members used existing knowledge, resources/artefacts, goals and beliefs and their actions were shaped by work group influence and collective norms within a Christian framework. Hanson concluded that ethics policy “cannot be wholly forced upon its members… informal institutional principles originate from faculty” and that teachers "must be considered as primary change agents in ethics reform..." This research has important implications in the context of academic integrity, pointing as it does to the central, although often informal role of teachers in nurturing and promoting academic integrity on campus.
 
 Jason Stephens (University of Connecticut), Volodymyr Romakin (Petro Mohyla State University, Ukraine) and Mariya Yukhymenko (University of Connecticut) extend previous studies which have compared cheating behaviours of US undergraduate students with students from other cultures, by investigating academic motivation and misconduct by Ukrainian students. Based on a self-report survey with a sample of 189 students from each country, their study investigated the differences between US and Ukrainian students' task value, goal orientations, moral beliefs and cheating behaviours. Significant differences between the two groups were found, most notably that Ukrainian students reported lower judgements about the wrongfulness of cheating behaviours, and correspondingly higher levels of engagement in cheating behaviour. In particular, academic task value was a significant predictor of cheating beliefs and behaviours for the Ukrainian students: the more useful and interesting the course was perceived to be, the less likely the Ukrainian students were to cheat - a finding which has clear implications for all educators, but particularly those working with Ukrainian students.
 
 The final paper by Australian authors, Robert Kennelly, Anna Maldoni and Doug Davis (University of Canberra) provides appropriate closure to this issue. While Wueste and Fishman opened the issue by exhorting us to re-examine the value and purpose of higher education, Kennelly et al. do just that by reminding readers that educational integrity requires more than a pledge from students not to cheat. All stakeholders, from those at the highest administrative level, to those instructors teaching occasional tutorials, need to be deeply committed to the learning needs of the diverse classroom. International EAL (English as an Additional Language) students in Australian universities have long carried the burden associated with the customer service model of higher education critiqued by Wueste and Fishman. International EAL students pay high tuition fees, have additional expenses and responsibilities to fulfil English language requirements (in most Australian universities, a minimum International English Language Test Score (IELTS) of 6.00 for undergraduate entry), and in many instances, find at arrival that this IELTS score is inadequate for the level of oral and written communication required. Furthermore, with decreasing government funding and the demise of student unions, the level of on-campus services has gradually declined, so that students not only struggle with their academic load, they are often lonely and isolated. The discipline-based approach to academic and language development trialled, evaluated and recommended by Kennelly et al. goes some way to addressing the academic needs of this group of students. Using data from six consecutive semesters, the authors provide compelling evidence that team-taught, disciplined-based support programs have the potential to improve international EAL students' competence in academic and critical literacy skills, while simultaneously building English language proficiency.
 
 I trust you will enjoy this issue of the International Journal for Educational Integrity, and invite you to submit manuscripts for review for Volume 7(1), to be published in mid-2011. Volume 6(2) is being guest edited by Chris Moore and Ruth Walker, on the topic of 'digital technologies and educational integrity' and is due to be published in December this year.
 
 Tracey Bretag, IJEI Editor
 tracey.bretag@unisa.edu.au
APA, Harvard, Vancouver, ISO, and other styles
45

Paull, John. "Beyond Equal: From Same But Different to the Doctrine of Substantial Equivalence." M/C Journal 11, no. 2 (2008). http://dx.doi.org/10.5204/mcj.36.

Full text
Abstract:
A same-but-different dichotomy has recently been encapsulated within the US Food and Drug Administration’s ill-defined concept of “substantial equivalence” (USFDA, FDA). By invoking this concept the genetically modified organism (GMO) industry has escaped the rigors of safety testing that might otherwise apply. The curious concept of “substantial equivalence” grants a presumption of safety to GMO food. This presumption has yet to be earned, and has been used to constrain labelling of both GMO and non-GMO food. It is an idea that well serves corporatism. It enables the claim of difference to secure patent protection, while upholding the contrary claim of sameness to avoid labelling and safety scrutiny. It offers the best of both worlds for corporate food entrepreneurs, and delivers the worst of both worlds to consumers. The term “substantial equivalence” has established its currency within the GMO discourse. As the opportunities for patenting food technologies expand, the GMO recruitment of this concept will likely be a dress rehearsal for the developing debates on the labelling and testing of other techno-foods – including nano-foods and clone-foods. “Substantial Equivalence” “Are the Seven Commandments the same as they used to be, Benjamin?” asks Clover in George Orwell’s “Animal Farm”. By way of response, Benjamin “read out to her what was written on the wall. There was nothing there now except a single Commandment. It ran: ALL ANIMALS ARE EQUAL BUT SOME ANIMALS ARE MORE EQUAL THAN OTHERS”. After this reductionist revelation, further novel and curious events at Manor Farm, “did not seem strange” (Orwell, ch. X). Equality is a concept at the very core of mathematics, but beyond the domain of logic, equality becomes a hotly contested notion – and the domain of food is no exception. A novel food has a regulatory advantage if it can claim to be the same as an established food – a food that has proven its worth over centuries, perhaps even millennia – and thus does not trigger new, perhaps costly and onerous, testing, compliance, and even new and burdensome regulations. On the other hand, such a novel food has an intellectual property (IP) advantage only in terms of its difference. And thus there is an entrenched dissonance for newly technologised foods, between claiming sameness, and claiming difference. The same/different dilemma is erased, so some would have it, by appeal to the curious new dualist doctrine of “substantial equivalence” whereby sameness and difference are claimed simultaneously, thereby creating a win/win for corporatism, and a loss/loss for consumerism. This ground has been pioneered, and to some extent conquered, by the GMO industry. The conquest has ramifications for other cryptic food technologies, that is technologies that are invisible to the consumer and that are not evident to the consumer other than via labelling. Cryptic technologies pertaining to food include GMOs, pesticides, hormone treatments, irradiation and, most recently, manufactured nano-particles introduced into the food production and delivery stream. Genetic modification of plants was reported as early as 1984 by Horsch et al. The case of Diamond v. Chakrabarty resulted in a US Supreme Court decision that upheld the prior decision of the US Court of Customs and Patent Appeal that “the fact that micro-organisms are alive is without legal significance for purposes of the patent law”, and ruled that the “respondent’s micro-organism plainly qualifies as patentable subject matter”. 
This was a majority decision of nine judges, with four judges dissenting (Burger). It was this Chakrabarty judgement that has seriously opened the Pandora’s box of GMOs because patenting rights makes GMOs an attractive corporate proposition by offering potentially unique monopoly rights over food. The rear guard action against GMOs has most often focussed on health repercussions (Smith, Genetic), food security issues, and also the potential for corporate malfeasance to hide behind a cloak of secrecy citing commercial confidentiality (Smith, Seeds). Others have tilted at the foundational plank on which the economics of the GMO industry sits: “I suggest that the main concern is that we do not want a single molecule of anything we eat to contribute to, or be patented and owned by, a reckless, ruthless chemical organisation” (Grist 22). The GMO industry exhibits bipolar behaviour, invoking the concept of “substantial difference” to claim patent rights by way of “novelty”, and then claiming “substantial equivalence” when dealing with other regulatory authorities including food, drug and pesticide agencies; a case of “having their cake and eating it too” (Engdahl 8). This is a clever slight-of-rhetoric, laying claim to the best of both worlds for corporations, and the worst of both worlds for consumers. Corporations achieve patent protection and no concomitant specific regulatory oversight; while consumers pay the cost of patent monopolization, and are not necessarily apprised, by way of labelling or otherwise, that they are purchasing and eating GMOs, and thereby financing the GMO industry. The lemma of “substantial equivalence” does not bear close scrutiny. It is a fuzzy concept that lacks a tight testable definition. It is exactly this fuzziness that allows lots of wriggle room to keep GMOs out of rigorous testing regimes. Millstone et al. argue that “substantial equivalence is a pseudo-scientific concept because it is a commercial and political judgement masquerading as if it is scientific. It is moreover, inherently anti-scientific because it was created primarily to provide an excuse for not requiring biochemical or toxicological tests. It therefore serves to discourage and inhibit informative scientific research” (526). “Substantial equivalence” grants GMOs the benefit of the doubt regarding safety, and thereby leaves unexamined the ramifications for human consumer health, for farm labourer and food-processor health, for the welfare of farm animals fed a diet of GMO grain, and for the well-being of the ecosystem, both in general and in its particularities. “Substantial equivalence” was introduced into the food discourse by an Organisation for Economic Co-operation and Development (OECD) report: “safety evaluation of foods derived by modern biotechnology: concepts and principles”. It is from this document that the ongoing mantra of assumed safety of GMOs derives: “modern biotechnology … does not inherently lead to foods that are less safe … . Therefore evaluation of foods and food components obtained from organisms developed by the application of the newer techniques does not necessitate a fundamental change in established principles, nor does it require a different standard of safety” (OECD, “Safety” 10). This was at the time, and remains, an act of faith, a pro-corporatist and a post-cautionary approach. The OECD motto reveals where their priorities lean: “for a better world economy” (OECD, “Better”). 
The term “substantial equivalence” was preceded by the 1992 USFDA concept of “substantial similarity” (Levidow, Murphy and Carr) and was adopted from a prior usage by the US Food and Drug Agency (USFDA) where it was used pertaining to medical devices (Miller). Even GMO proponents accept that “Substantial equivalence is not intended to be a scientific formulation; it is a conceptual tool for food producers and government regulators” (Miller 1043). And there’s the rub – there is no scientific definition of “substantial equivalence”, no scientific test of proof of concept, and nor is there likely to be, since this is a ‘spinmeister’ term. And yet this is the cornerstone on which rests the presumption of safety of GMOs. Absence of evidence is taken to be evidence of absence. History suggests that this is a fraught presumption. By way of contrast, the patenting of GMOs depends on the antithesis of assumed ‘sameness’. Patenting rests on proven, scrutinised, challengeable and robust tests of difference and novelty. Lightfoot et al. report that transgenic plants exhibit “unexpected changes [that] challenge the usual assumptions of GMO equivalence and suggest genomic, proteomic and metanomic characterization of transgenics is advisable” (1). GMO Milk and Contested Labelling Pesticide company Monsanto markets the genetically engineered hormone rBST (recombinant Bovine Somatotropin; also known as: rbST; rBGH, recombinant Bovine Growth Hormone; and the brand name Prosilac) to dairy farmers who inject it into their cows to increase milk production. This product is not approved for use in many jurisdictions, including Europe, Australia, New Zealand, Canada and Japan. Even Monsanto accepts that rBST leads to mastitis (inflammation and pus in the udder) and other “cow health problems”, however, it maintains that “these problems did not occur at rates that would prohibit the use of Prosilac” (Monsanto). A European Union study identified an extensive list of health concerns of rBST use (European Commission). The US Dairy Export Council however entertain no doubt. In their background document they ask “is milk from cows treated with rBST safe?” and answer “Absolutely” (USDEC). Meanwhile, Monsanto’s website raises and answers the question: “Is the milk from cows treated with rbST any different from milk from untreated cows? No” (Monsanto). Injecting cows with genetically modified hormones to boost their milk production remains a contested practice, banned in many countries. It is the claimed equivalence that has kept consumers of US dairy products in the dark, shielded rBST dairy farmers from having to declare that their milk production is GMO-enhanced, and has inhibited non-GMO producers from declaring their milk as non-GMO, non rBST, or not hormone enhanced. This is a battle that has simmered, and sometimes raged, for a decade in the US. Finally there is a modest victory for consumers: the Pennsylvania Department of Agriculture (PDA) requires all labels used on milk products to be approved in advance by the department. The standard issued in October 2007 (PDA, “Standards”) signalled to producers that any milk labels claiming rBST-free status would be rejected. This advice was rescinded in January 2008 with new, specific, department-approved textual constructions allowed, and ensuring that any “no rBST” style claim was paired with a PDA-prescribed disclaimer (PDA, “Revised Standards”). 
However, parsimonious labelling is prohibited: No labeling may contain references such as ‘No Hormones’, ‘Hormone Free’, ‘Free of Hormones’, ‘No BST’, ‘Free of BST’, ‘BST Free’,’No added BST’, or any statement which indicates, implies or could be construed to mean that no natural bovine somatotropin (BST) or synthetic bovine somatotropin (rBST) are contained in or added to the product. (PDA, “Revised Standards” 3) Difference claims are prohibited: In no instance shall any label state or imply that milk from cows not treated with recombinant bovine somatotropin (rBST, rbST, RBST or rbst) differs in composition from milk or products made with milk from treated cows, or that rBST is not contained in or added to the product. If a product is represented as, or intended to be represented to consumers as, containing or produced from milk from cows not treated with rBST any labeling information must convey only a difference in farming practices or dairy herd management methods. (PDA, “Revised Standards” 3) The PDA-approved labelling text for non-GMO dairy farmers is specified as follows: ‘From cows not treated with rBST. No significant difference has been shown between milk derived from rBST-treated and non-rBST-treated cows’ or a substantial equivalent. Hereinafter, the first sentence shall be referred to as the ‘Claim’, and the second sentence shall be referred to as the ‘Disclaimer’. (PDA, “Revised Standards” 4) It is onto the non-GMO dairy farmer alone, that the costs of compliance fall. These costs include label preparation and approval, proving non-usage of GMOs, and of creating and maintaining an audit trail. In nearby Ohio a similar consumer versus corporatist pantomime is playing out. This time with the Ohio Department of Agriculture (ODA) calling the shots, and again serving the GMO industry. The ODA prescribed text allowed to non-GMO dairy farmers is “from cows not supplemented with rbST” and this is to be conjoined with the mandatory disclaimer “no significant difference has been shown between milk derived from rbST-supplemented and non-rbST supplemented cows” (Curet). These are “emergency rules”: they apply for 90 days, and are proposed as permanent. Once again, the onus is on the non-GMO dairy farmers to document and prove their claims. GMO dairy farmers face no such governmental requirements, including no disclosure requirement, and thus an asymmetric regulatory impost is placed on the non-GMO farmer which opens up new opportunities for administrative demands and technocratic harassment. Levidow et al. argue, somewhat Eurocentrically, that from its 1990s adoption “as the basis for a harmonized science-based approach to risk assessment” (26) the concept of “substantial equivalence” has “been recast in at least three ways” (58). It is true that the GMO debate has evolved differently in the US and Europe, and with other jurisdictions usually adopting intermediate positions, yet the concept persists. Levidow et al. nominate their three recastings as: firstly an “implicit redefinition” by the appending of “extra phrases in official documents”; secondly, “it has been reinterpreted, as risk assessment processes have … required more evidence of safety than before, especially in Europe”; and thirdly, “it has been demoted in the European Union regulatory procedures so that it can no longer be used to justify the claim that a risk assessment is unnecessary” (58). Romeis et al. have proposed a decision tree approach to GMO risks based on cascading tiers of risk assessment. 
However what remains is that the defects of the concept of “substantial equivalence” persist. Schauzu identified that: such decisions are a matter of “opinion”; that there is “no clear definition of the term ‘substantial’”; that because genetic modification “is aimed at introducing new traits into organisms, the result will always be a different combination of genes and proteins”; and that “there is no general checklist that could be followed by those who are responsible for allowing a product to be placed on the market” (2). Benchmark for Further Food Novelties? The discourse, contestation, and debate about “substantial equivalence” have largely focussed on the introduction of GMOs into food production processes. GM can best be regarded as the test case, and proof of concept, for establishing “substantial equivalence” as a benchmark for evaluating new and forthcoming food technologies. This is of concern, because the concept of “substantial equivalence” is scientific hokum, and yet its persistence, even entrenchment, within regulatory agencies may be a harbinger of forthcoming same-but-different debates for nanotechnology and other future bioengineering. The appeal of “substantial equivalence” has been a brake on the creation of GMO-specific regulations and on rigorous GMO testing. The food nanotechnology industry can be expected to look to the precedent of the GMO debate to head off specific nano-regulations and nano-testing. As cloning becomes economically viable, then this may be another wave of food innovation that muddies the regulatory waters with the confused – and ultimately self-contradictory – concept of “substantial equivalence”. Nanotechnology engineers particles in the size range 1 to 100 nanometres – a nanometre is one billionth of a metre. This is interesting for manufacturers because at this size chemicals behave differently, or as the Australian Office of Nanotechnology expresses it, “new functionalities are obtained” (AON). Globally, government expenditure on nanotechnology research reached US$4.6 billion in 2006 (Roco 3.12). While there are now many patents (ETC Group; Roco), regulation specific to nanoparticles is lacking (Bowman and Hodge; Miller and Senjen). The USFDA advises that nano-manufacturers “must show a reasonable assurance of safety … or substantial equivalence” (FDA). A recent inventory of nano-products already on the market identified 580 products. Of these 11.4% were categorised as “Food and Beverage” (WWICS). This is at a time when public confidence in regulatory bodies is declining (HRA). In an Australian consumer survey on nanotechnology, 65% of respondents indicated they were concerned about “unknown and long term side effects”, and 71% agreed that it is important “to know if products are made with nanotechnology” (MARS 22). Cloned animals are currently more expensive to produce than traditional animal progeny. In the course of 678 pages, the USFDA Animal Cloning: A Draft Risk Assessment has not a single mention of “substantial equivalence”. However the Federation of Animal Science Societies (FASS) in its single page “Statement in Support of USFDA’s Risk Assessment Conclusion That Food from Cloned Animals Is Safe for Human Consumption” states that “FASS endorses the use of this comparative evaluation process as the foundation of establishing substantial equivalence of any food being evaluated. 
It must be emphasized that it is the food product itself that should be the focus of the evaluation rather than the technology used to generate cloned animals” (FASS 1). Contrary to the FASS derogation of the importance of process in food production, for consumers both the process and provenance of production is an important and integral aspect of a food product’s value and identity. Some consumers will legitimately insist that their Kalamata olives are from Greece, or their balsamic vinegar is from Modena. It was the British public’s growing awareness that their sugar was being produced by slave labour that enabled the boycotting of the product, and ultimately the outlawing of slavery (Hochschild). When consumers boycott Nestle, because of past or present marketing practices, or boycott produce of USA because of, for example, US foreign policy or animal welfare concerns, they are distinguishing the food based on the narrative of the food, the production process and/or production context which are a part of the identity of the food. Consumers attribute value to food based on production process and provenance information (Paull). Products produced by slave labour, by child labour, by political prisoners, by means of torture, theft, immoral, unethical or unsustainable practices are different from their alternatives. The process of production is a part of the identity of a product and consumers are increasingly interested in food narrative. It requires vigilance to ensure that these narratives are delivered with the product to the consumer, and are neither lost nor suppressed. Throughout the GM debate, the organic sector has successfully skirted the “substantial equivalence” debate by excluding GMOs from the certified organic food production process. This GMO-exclusion from the organic food stream is the one reprieve available to consumers worldwide who are keen to avoid GMOs in their diet. The organic industry carries the expectation of providing food produced without artificial pesticides and fertilizers, and by extension, without GMOs. Most recently, the Soil Association, the leading organic certifier in the UK, claims to be the first organisation in the world to exclude manufactured nonoparticles from their products (Soil Association). There has been the call that engineered nanoparticles be excluded from organic standards worldwide, given that there is no mandatory safety testing and no compulsory labelling in place (Paull and Lyons). The twisted rhetoric of oxymorons does not make the ideal foundation for policy. Setting food policy on the shifting sands of “substantial equivalence” seems foolhardy when we consider the potentially profound ramifications of globally mass marketing a dysfunctional food. If there is a 2×2 matrix of terms – “substantial equivalence”, substantial difference, insubstantial equivalence, insubstantial difference – while only one corner of this matrix is engaged for food policy, and while the elements remain matters of opinion rather than being testable by science, or by some other regime, then the public is the dupe, and potentially the victim. “Substantial equivalence” has served the GMO corporates well and the public poorly, and this asymmetry is slated to escalate if nano-food and clone-food are also folded into the “substantial equivalence” paradigm. Only in Orwellian Newspeak is war peace, or is same different. 
It is time to jettison the pseudo-scientific doctrine of “substantial equivalence”, as a convenient oxymoron, and embrace full disclosure of provenance, process and difference, so that consumers are not collateral in a continuing asymmetric knowledge war. References Australian Office of Nanotechnology (AON). Department of Industry, Tourism and Resources (DITR) 6 Aug. 2007. 24 Apr. 2008 < http://www.innovation.gov.au/Section/Innovation/Pages/ AustralianOfficeofNanotechnology.aspx >.Bowman, Diana, and Graeme Hodge. “A Small Matter of Regulation: An International Review of Nanotechnology Regulation.” Columbia Science and Technology Law Review 8 (2007): 1-32.Burger, Warren. “Sidney A. Diamond, Commissioner of Patents and Trademarks v. Ananda M. Chakrabarty, et al.” Supreme Court of the United States, decided 16 June 1980. 24 Apr. 2008 < http://caselaw.lp.findlaw.com/cgi-bin/getcase.pl?court=US&vol=447&invol=303 >.Curet, Monique. “New Rules Allow Dairy-Product Labels to Include Hormone Info.” The Columbus Dispatch 7 Feb. 2008. 24 Apr. 2008 < http://www.dispatch.com/live/content/business/stories/2008/02/07/dairy.html >.Engdahl, F. William. Seeds of Destruction. Montréal: Global Research, 2007.ETC Group. Down on the Farm: The Impact of Nano-Scale Technologies on Food and Agriculture. Ottawa: Action Group on Erosion, Technology and Conservation, November, 2004. European Commission. Report on Public Health Aspects of the Use of Bovine Somatotropin. Brussels: European Commission, 15-16 March 1999.Federation of Animal Science Societies (FASS). Statement in Support of FDA’s Risk Assessment Conclusion That Cloned Animals Are Safe for Human Consumption. 2007. 24 Apr. 2008 < http://www.fass.org/page.asp?pageID=191 >.Grist, Stuart. “True Threats to Reason.” New Scientist 197.2643 (16 Feb. 2008): 22-23.Hochschild, Adam. Bury the Chains: The British Struggle to Abolish Slavery. London: Pan Books, 2006.Horsch, Robert, Robert Fraley, Stephen Rogers, Patricia Sanders, Alan Lloyd, and Nancy Hoffman. “Inheritance of Functional Foreign Genes in Plants.” Science 223 (1984): 496-498.HRA. Awareness of and Attitudes toward Nanotechnology and Federal Regulatory Agencies: A Report of Findings. Washington: Peter D. Hart Research Associates, 25 Sep. 2007.Levidow, Les, Joseph Murphy, and Susan Carr. “Recasting ‘Substantial Equivalence’: Transatlantic Governance of GM Food.” Science, Technology, and Human Values 32.1 (Jan. 2007): 26-64.Lightfoot, David, Rajsree Mungur, Rafiqa Ameziane, Anthony Glass, and Karen Berhard. “Transgenic Manipulation of C and N Metabolism: Stretching the GMO Equivalence.” American Society of Plant Biologists Conference: Plant Biology, 2000.MARS. “Final Report: Australian Community Attitudes Held about Nanotechnology – Trends 2005-2007.” Report prepared for Department of Industry, Tourism and Resources (DITR). Miranda, NSW: Market Attitude Research Services, 12 June 2007.Miller, Georgia, and Rye Senjen. “Out of the Laboratory and on to Our Plates: Nanotechnology in Food and Agriculture.” Friends of the Earth, 2008. 24 Apr. 2008 < http://nano.foe.org.au/node/220 >.Miller, Henry. “Substantial Equivalence: Its Uses and Abuses.” Nature Biotechnology 17 (7 Nov. 1999): 1042-1043.Millstone, Erik, Eric Brunner, and Sue Mayer. “Beyond ‘Substantial Equivalence’.” Nature 401 (7 Oct. 1999): 525-526.Monsanto. “Posilac, Bovine Somatotropin by Monsanto: Questions and Answers about bST from the United States Food and Drug Administration.” 2007. 24 Apr. 
2008 < http://www.monsantodairy.com/faqs/fda_safety.html >.Organisation for Economic Co-operation and Development (OECD). “For a Better World Economy.” Paris: OECD, 2008. 24 Apr. 2008 < http://www.oecd.org/ >.———. “Safety Evaluation of Foods Derived by Modern Biotechnology: Concepts and Principles.” Paris: OECD, 1993.Orwell, George. Animal Farm. Adelaide: ebooks@Adelaide, 2004 (1945). 30 Apr. 2008 < http://ebooks.adelaide.edu.au/o/orwell/george >.Paull, John. “Provenance, Purity and Price Premiums: Consumer Valuations of Organic and Place-of-Origin Food Labelling.” Research Masters thesis, University of Tasmania, Hobart, 2006. 24 Apr. 2008 < http://eprints.utas.edu.au/690/ >.Paull, John, and Kristen Lyons. “Nanotechnology: The Next Challenge for Organics.” Journal of Organic Systems (in press).Pennsylvania Department of Agriculture (PDA). “Revised Standards and Procedure for Approval of Proposed Labeling of Fluid Milk.” Milk Labeling Standards (2.0.1.17.08). Bureau of Food Safety and Laboratory Services, Pennsylvania Department of Agriculture, 17 Jan. 2008. ———. “Standards and Procedure for Approval of Proposed Labeling of Fluid Milk, Milk Products and Manufactured Dairy Products.” Milk Labeling Standards (2.0.1.17.08). Bureau of Food Safety and Laboratory Services, Pennsylvania Department of Agriculture, 22 Oct. 2007.Roco, Mihail. “National Nanotechnology Initiative – Past, Present, Future.” In William Goddard, Donald Brenner, Sergy Lyshevski and Gerald Iafrate, eds. Handbook of Nanoscience, Engineering and Technology. 2nd ed. Boca Raton, FL: CRC Press, 2007.Romeis, Jorg, Detlef Bartsch, Franz Bigler, Marco Candolfi, Marco Gielkins, et al. “Assessment of Risk of Insect-Resistant Transgenic Crops to Nontarget Arthropods.” Nature Biotechnology 26.2 (Feb. 2008): 203-208.Schauzu, Marianna. “The Concept of Substantial Equivalence in Safety Assessment of Food Derived from Genetically Modified Organisms.” AgBiotechNet 2 (Apr. 2000): 1-4.Soil Association. “Soil Association First Organisation in the World to Ban Nanoparticles – Potentially Toxic Beauty Products That Get Right under Your Skin.” London: Soil Association, 17 Jan. 2008. 24 Apr. 2008 < http://www.soilassociation.org/web/sa/saweb.nsf/848d689047 cb466780256a6b00298980/42308d944a3088a6802573d100351790!OpenDocument >.Smith, Jeffrey. Genetic Roulette: The Documented Health Risks of Genetically Engineered Foods. Fairfield, Iowa: Yes! Books, 2007.———. Seeds of Deception. Melbourne: Scribe, 2004.U.S. Dairy Export Council (USDEC). Bovine Somatotropin (BST) Backgrounder. Arlington, VA: U.S. Dairy Export Council, 2006.U.S. Food and Drug Administration (USFDA). Animal Cloning: A Draft Risk Assessment. Rockville, MD: Center for Veterinary Medicine, U.S. Food and Drug Administration, 28 Dec. 2006.———. FDA and Nanotechnology Products. U.S. Department of Health and Human Services, U.S. Food and Drug Administration, 2008. 24 Apr. 2008 < http://www.fda.gov/nanotechnology/faqs.html >.Woodrow Wilson International Center for Scholars (WWICS). “A Nanotechnology Consumer Products Inventory.” Data set as at Sep. 2007. Woodrow Wilson International Center for Scholars, Project on Emerging Technologies, Sep. 2007. 24 Apr. 2008 < http://www.nanotechproject.org/inventories/consumer >.
APA, Harvard, Vancouver, ISO, and other styles
46

Gao, Xiang. "‘Staying in the Nationalist Bubble’." M/C Journal 24, no. 1 (2021). http://dx.doi.org/10.5204/mcj.2745.

Full text
Abstract:
Introduction The highly contagious COVID-19 virus has presented particularly difficult public policy challenges. The relatively late emergence of effective treatments and vaccines, the structural stresses on health care systems, the lockdowns and the economic dislocations, the evident structural inequalities in affected societies, as well as the difficulty of prevention, have tested social and political cohesion. Moreover, the intrusive nature of many prophylactic measures has led to individual liberty and human rights concerns. As noted by the Victorian (Australia) Ombudsman Report on the COVID-19 lockdown in Melbourne, we may be tempted, during a crisis, to view human rights as expendable in the pursuit of saving human lives. This thinking can lead to dangerous territory. It is not unlawful to curtail fundamental rights and freedoms when there are compelling reasons for doing so; human rights are inherently and inseparably a consideration of human lives. (5) These difficulties have raised issues about the importance of social or community capital in fighting the pandemic. This article discusses the impacts of social and community capital and other factors on the governmental efforts to combat the spread of infectious disease through the maintenance of social distancing and household ‘bubbles’. It argues that in the USA the beneficial effects of social and community capital towards fighting the pandemic, such as the mutual respect and empathy which underpin such public health measures as social distancing, the use of personal protective equipment, and lockdowns, have been undermined because these measures have been transmogrified into a salient aspect of the “culture wars” (Peters). In contrast, states with relatively lower social capital, such as China, have been able to arrest transmission of the disease more effectively because the government was able to generate and personify a nationalist response to the virus and thus generate a more robust social consensus regarding the efforts to combat the disease. Social Capital and Culture Wars The response to COVID-19 required individuals, families, communities, and other types of groups to refrain from extensive interaction – to stay in their bubble. In these situations, especially given the asymptomatic nature of many COVID-19 infections and the serious imposition of lockdowns, social distancing, and isolation, the temptation for individuals to breach public health rules is high. From the perspective of policymakers, the response to fighting COVID-19 is a collective action problem. In studying collective action problems, scholars have paid much attention to the role of social and community capital (Ostrom and Ahn 17-35). Ostrom and Ahn comment that social capital “provides a synthesizing approach to how cultural, social, and institutional aspects of communities of various sizes jointly affect their capacity of dealing with collective-action problems” (24). Social capital is regarded as an evolving social type of cultural trait (Fukuyama; Guiso et al.). Adger argues that social capital “captures the nature of social relations” and “provides an explanation for how individuals use their relationships to other actors in societies for their own and for the collective good” (387).
The most frequently used definition of social capital is the one proffered by Putnam, who regards it as “features of social organization, such as networks, norms and social trust that facilitate coordination and cooperation for mutual benefit” (Putnam, “Bowling Alone” 65). All these studies suggest that social and community capital has at least two elements: “objective associations” and subjective ties among individuals. Objective associations, or social networks, refer to both formal and informal associations that are formed and engaged in on a voluntary basis by individuals and social groups. Subjective ties or norms, on the other hand, primarily stand for trust and reciprocity (Paxton). High levels of social capital have generally been associated with democratic politics and civil societies whose institutional performance benefits from the coordinated actions and civic culture that have been facilitated by high levels of social capital (Putnam, Democracy 167-9). Alternatively, a “good and fair” state and impartial institutions are important factors in generating and preserving high levels of social capital (Offe 42-87). Yet social capital is not limited to democratic civil societies, and research is mixed on whether rising social capital manifests itself in a more vigorous civil society that in turn leads to democratising impulses. Castillo argues that trust levels for institutions that reinforce submission, hierarchy, and cultural conservatism can be high in authoritarian governments, indicating that high levels of social capital do not necessarily lead to democratic civic societies (Castillo et al.). Roßteutscher concludes, after a survey of social capital indicators in authoritarian states, that social capital has little effect on democratisation and may in fact reinforce authoritarian rule: in nondemocratic contexts, however, it appears to throw a spanner in the works of democratization. Trust increases the stability of nondemocratic leaderships by generating popular support, by suppressing regime threatening forms of protest activity, and by nourishing undemocratic ideals concerning governance (752). In China, there has been ongoing debate concerning the presence of civil society and the level of social capital found across Chinese society. If one defines civil society as an intermediate associational realm between the state and the family, populated by autonomous organisations which are separate from the state and formed voluntarily by members of society to protect or extend their interests or values, it is arguable that the PRC had a significant civil society or social capital in the first few decades after its establishment (White). However, most scholars agree that nascent civil society as well as a more salient social and community capital has emerged in China’s reform era. This was evident after the 2008 Sichuan earthquake, where the government welcomed community organising and community-driven donation campaigns for a limited period of time, giving the NGO sector and bottom-up social activism a boost, as evidenced in various policy areas such as disaster relief and rural community development (F. Wu 126; Xu 9). Nevertheless, the CCP and the Chinese state have been effective in maintaining significant control over civil society and autonomous groups without attempting to completely eliminate their autonomy or existence.
The dramatic economic and social changes that have occurred since the 1978 Opening have unsurprisingly engendered numerous conflicts across the society. In response, the CCP and State have adjusted political economic policies to meet the changing demands of workers, migrants, the unemployed, minorities, farmers, local artisans, entrepreneurs, and the growing middle class. Often the demands arising from these groups have resulted in policy changes, including compensation. In other circumstances, where these groups remain dissatisfied, the government will tolerate them (ignore them but allow them to continue their advocacy), or, when the need arises, suppress the disaffected groups (F. Wu 2). At the same time, social organisations and other groups in civil society have often “refrained from open and broad contestation against the regime”, thereby gaining the space and autonomy to achieve their objectives (F. Wu 2). Studies of Chinese social or community capital suggest that a form of modern social capital has gradually emerged as Chinese society has become increasingly modernised and liberalised (despite being non-democratic), and that this social capital has begun to play an important role in shaping social and economic lives at the local level. However, this more modern form of social capital, arising from developmental and social changes, competes with traditional social values and social capital, which stress parochial and particularistic feelings among known individuals, while modern social capital emphasises general trust and reciprocal feelings among both known and unknown individuals. The objective elements of these traditional values are those government-sanctioned, formal mass organisations such as the Communist Youth League and the All-China Federation of Women's Associations, where members are obliged to obey the organisation's leadership. The predominant subjective values are parochial and particularistic feelings among individuals who know one another, such as guanxi and zongzu (Chen and Lu 426). The concept of social capital emphasises that the underlying cooperative values found in individuals and groups within a culture are an important factor in solving collective problems. In contrast, the notion of “culture war” focusses on those values and differences that divide social and cultural groups. Berry defines culture wars as increases in volatility, expansion of polarisation, and conflict between those who are passionate about religiously motivated politics, traditional morality, and anti-intellectualism, and…those who embrace progressive politics, cultural openness, and scientific and modernist orientations. (90) The contemporary culture wars across the world manifest opposition by various groups in society who hold divergent worldviews and ideological positions. Proponents of culture war understand various issues as part of a broader set of religious, political, and moral/normative positions invoked in opposition to “elite”, “liberal”, or “left” ideologies. Within this Manichean universe, opposition to such issues as climate change, Black Lives Matter, same sex rights, prison reform, gun control, and immigration becomes framed in binary terms, and infused with a moral sensibility (Chapman 8-10). In many disputes, the culture war often devolves into an epistemological dispute about the efficacy of scientific knowledge and authority, or a dispute between “practical” and theoretical knowledge. In this environment, even facts can become partisan narratives.
These “cultural” disputes are often how electoral prospects (generally right-wing) are advanced: “not through policies or promises of a better life, but by fostering a sense of threat, a fantasy that something profoundly pure … is constantly at risk of extinction” (Malik). This “zero-sum” social and policy environment makes it difficult to compromise and has serious consequences for social stability or government policy, especially in a liberal democratic society. Of course, from the perspective of cultural materialism such a reductionist approach to culture and political and social values is not unexpected. “Culture” is one of the many arenas in which dominant social groups seek to express and reproduce their interests and preferences. “Culture” in this sense is “material” and is ultimately connected to the distribution of power, wealth, and resources in society. As such, the various policy areas that are understood as part of the “culture wars” are another domain where the various dominant and subordinate groups and interests engaged in conflict express their values and goals. Yet it is unexpected that, despite the pervasiveness of information available to individuals, the pool of information consumed by individuals who view the “culture wars” as a touchstone for political behaviour and a narrative to categorise events and facts is relatively closed. This lack of balance has been magnified by social media algorithms, conspiracy-laced talk radio, and a media ecosystem that frames and discusses issues in a manner that elides into an easily understood “culture war” narrative. From this perspective, the groups (generally right-wing or traditionalist) exist within an information bubble that reinforces political, social, and cultural predilections. American and Chinese Responses to COVID-19 The COVID-19 pandemic first broke out in Wuhan in December 2019. Initially unprepared and unwilling to accept the seriousness of the infection, the Chinese government regrouped from early mistakes and essentially controlled transmission in about three months. This positive outcome has been messaged as an exposition of the superiority of the Chinese governmental system and society both domestically and internationally; a positive, even heroic performance that evidences the populist credentials of the Chinese political leadership and demonstrates national excellence. The recently published White Paper entitled “Fighting COVID-19: China in Action” also summarises China’s “strategic achievement” in the simple language of numbers: in a month, the rising spread was contained; in two months, the daily case increase fell to single digits; and in three months, a “decisive victory” was secured in Wuhan City and Hubei Province (Xinhua). This clear articulation of the positive results has rallied political support. Indeed, a recent survey shows that 89 percent of citizens are satisfied with the government’s information dissemination during the pandemic (C. Wu). As part of the effort, the government extensively promoted the provision of “political goods”, such as law and order, national unity and pride, and shared values. For example, severe punishments were introduced for violence against medical professionals and police, producing and selling counterfeit medications, raising commodity prices, spreading ‘rumours’, and being uncooperative with quarantine measures (Xu).
Additionally, as an extension of the popular anti-corruption campaign, many local political leaders were disciplined or received criminal charges for inappropriate behaviour, abuse of power, and corruption during the pandemic (People.cn, 2 Feb. 2020). Chinese state media also described fighting the virus as a global “competition”. In this competition a nation’s “material power” as well as its “mental strength”, which calls for the highest level of national unity and patriotism, is put to the test. This discourse recalled the global competition in light of the national mythology related to the formation of the Chinese nation, the historical “hardship”, and the “heroic Chinese people” (People.cn, 7 Apr. 2020). Moreover, as the threat of infection receded, it was emphasised that China “won this competition” and that the Chinese people have demonstrated the “great spirit of China” to the world: a result built upon the “heroism of the whole Party, Army, and Chinese people from all ethnic groups” (People.cn, 7 Apr. 2020). In contrast to the Chinese approach of emphasising national public goods as a justification for fighting the virus, the U.S. Trump Administration used nationalism, deflection, and “culture war” discourse to undermine health responses, an unprecedented approach in American public health policy. The seriousness of the disease as well as the statistical evidence of its course through the American population was disputed. The President and various supporters raged against the COVID-19 “hoax”, social distancing, and lockdowns, disparaged public health institutions and advice, and encouraged protesters to “liberate” locked-down states (Russonello). “Our federal overlords say ‘no singing’ and ‘no shouting’ on Thanksgiving”, Representative Paul Gosar, a Republican of Arizona, wrote as he retweeted a Centers for Disease Control list of Thanksgiving safety tips (Weiner). People were encouraged, by way of the White House and Republican leadership, to ignore health regulations and not to comply with social distancing measures and the wearing of masks (Tracy). This encouragement led to threats against proponents of face masks such as Dr Anthony Fauci, one of the nation’s foremost experts on infectious diseases, who required bodyguards because of the many threats on his life. Fauci’s critics, including President Trump, countered Fauci’s promotion of mask wearing by stating accusingly that he once said mask-wearing was not necessary for ordinary people (Kelly). Conspiracy theories as to the safety of vaccinations also grew across the course of the year. As the 2020 election approached, the Administration ramped up efforts to downplay the seriousness of the virus by identifying it with “the media” and illegitimate “partisan” efforts to undermine the Trump presidency. It also intensified its criticism of China as the source of the infection. This political self-centeredness undermined state and federal efforts to slow transmission (Shear et al.). At the same time, Trump chided health officials for moving too slowly on vaccine approvals, repeated charges that high infection rates were due to increased testing, and argued that COVID-19 deaths were exaggerated by medical providers for political and financial reasons. These claims were amplified by various conservative media personalities such as Rush Limbaugh, and Sean Hannity and Laura Ingraham of Fox News.
This “COVID-19 denialism” and the alternative narrative of COVID-19 policy told through the lens of the culture war have resulted in the United States having the highest number of COVID-19 cases and the highest number of COVID-19 deaths. At the same time, the underlying social consensus and social capital that have historically assisted in generating positive public health outcomes have been significantly eroded. According to the Pew Research Center, the share of U.S. adults who say public health officials such as those at the Centers for Disease Control and Prevention are doing an excellent or good job responding to the outbreak decreased from 79% in March to 63% in August, with an especially sharp decrease among Republicans (Pew Research Center 2020). Social Capital and COVID-19 From the perspective of social or community capital, it could be expected that the American response to the pandemic would be more effective than the Chinese response. Historically, the United States has had high levels of social capital, a highly developed public health system, and strong governmental capacity. In contrast, China has a relatively high level of governmental and public health capacity, but the level of social capital has been lower and there is a significant presence of traditional values which emphasise parochial and particularistic feelings. Moreover, the antecedent institutions of social capital, such as weak and inefficient formal institutions (Batjargal et al.), environmental turbulence and resource scarcity, along with the transactional nature of guanxi (gift-giving, information exchange, and relationship dependence), militate against finding a more effective social and community response to the public health emergency. Yet China’s response has been significantly more successful than the United States’. Paradoxically, the American response under the Trump Administration and the Chinese response both relied on an externalisation of both the threat and the justifications for their particular responses. In the American case, President Trump, while downplaying the seriousness of the virus, consistently called it the “China virus” in an effort to deflect responsibility as well as to avert attention away from the public health impacts. As recently as 3 January 2021, Trump tweeted that the number of “China Virus” cases and deaths in the U.S. were “far exaggerated”, while critically citing the Centers for Disease Control and Prevention's methodology: “When in doubt, call it COVID-19. Fake News!” (Bacon). The Chinese Government, meanwhile, has pursued a more aggressive foreign policy across the South China Sea, on the frontier in the Indian sub-continent, and against states such as Australia that have criticised the initial Chinese response to COVID-19. In response to this international criticism, the government reiterated its sovereign rights and emphasised its “victimhood” in the face of “anti-China” foreign forces. Chinese state media also highlighted China as a “victim” of the coronavirus, but also as a target of Western “political manoeuvres” in investigations of the beginning stages of the pandemic. The major difference, however, is that public health policy in the United States was superimposed on other, more fundamental political and cultural cleavages, and part of this externalisation process included the assignation of “otherness” to, and demonisation of, internal political opponents, characterising them as bent on destroying the United States.
This assignation of “otherness” to various internal groups is a crucial element in the culture wars. While this may have been inevitable given the increasingly frayed nature of American society post-2008, such a characterisation has been actively pushed by local, state, and national leadership in the Republican Party and the Trump Administration (Vogel et al.). In such circumstances, minimising health risks and highlighting civil rights concerns due to public health measures, along with assigning blame to the Democratic opposition and foreign states such as China, can have a major impact on public health responses. The result has been that social trust beyond the bubble of one’s immediate circle or those who share similar beliefs is seriously compromised, and the collective action problem presented by COVID-19 remains unsolved. Daniel Aldrich’s study of disasters in Japan, India, and the US demonstrates that pre-existing high levels of social capital lead to stronger resilience and better recovery (Aldrich). Social capital helps coordinate resources and facilitate collective reconstruction, and therefore leads to better recovery (Alesch et al.). Yet there has not been much research on how the pool of social capital first came about and how a disaster may affect the creation and store of social capital. Rebecca Solnit has examined five major disasters and describes how, after these events, survivors reach out and work together to confront the challenges they face, thereby increasing the social capital in the community (Solnit). However, there are studies that have concluded that major disasters can damage the social fabric in local communities (Peacock et al.). The COVID-19 epidemic does not have the intensity and suddenness of other disasters but has had significant knock-on effects in increasing or decreasing social capital, depending on the institutional and social responses to the pandemic. In China, it appears that the positive social capital effects have been partially subsumed into a more generalised patriotic or nationalist affirmation of the government’s policy response. Unlike civil society responses to earlier crises, such as the 2008 Sichuan earthquake, there is less evidence of widespread community organisation and response to combat the epidemic at its initial stages. This suggests better institutional responses to the crisis by the government, but also a high degree of porosity between civil society and a national “imagined community” represented by the national state. The result has been an increased legitimacy for the Chinese government. Alternatively, in the United States the transformation of COVID-19 public health policy into a culture war issue has seriously impeded efforts to combat the epidemic in the short term by undermining the social consensus and social capital necessary to fight such a pandemic. Trust in American institutions is historically low, and President Trump’s untrue contention that President Biden’s election was due to “fraud” has further undermined the legitimacy of the American government, as evidenced by the attack on Congress at the U.S. Capitol on 6 January 2021. As such, the lingering effects the pandemic will have on social, economic, and political institutions will likely reinforce the deep cultural and political cleavages and weaken interpersonal networks in American society. Conclusion The COVID-19 pandemic has devastated global public health and impacted deeply on the world economy.
Unsurprisingly, given the serious economic, social, and political consequences, different government responses have been highly politicised. Various quarantine and infection case tracking methods have caused concern over state power intruding into private spheres. The usage of face masks, social distancing rules, and intra-state travel restrictions have aroused passionate debate over public health restrictions, individual liberty, and human rights. Yet public health responses grounded in higher levels of underlying social capital enhance the effectiveness of public health measures. In China, a country that has generally been associated with lower social capital, it is likely that the relatively strong policy response to COVID-19 will both enhance feelings of nationalism and Chinese exceptionalism and help create and increase the store of social capital. In the United States, the attribution of COVID-19 public health policy as part of the culture wars will continue to impede efforts to control the pandemic while further damaging the store of American community social capital that has assisted public health efforts over the past decades. References Adger, W. Neil. “Social Capital, Collective Action, and Adaptation to Climate Change.” Economic Geography 79.4 (2003): 387-404. Bacon, John. “Coronavirus Updates: Donald Trump Says US 'China Virus' Data Exaggerated; Dr. Anthony Fauci Protests, Draws President's Wrath.” USA Today 3 Jan. 2021. 4 Jan. 2021 <https://www.usatoday.com/story/news/health/2021/01/03/COVID-19-update-larry-king-ill-4-million-december-vaccinations-us/4114363001/>. Berry, Kate A. “Beyond the American Culture Wars.” Regions & Cohesion / Regiones y Cohesión / Régions et Cohésion 7.2 (Summer 2017): 90-95. Castillo, Juan C., Daniel Miranda, and Pablo Torres. “Authoritarianism, Social Dominance and Trust in Public Institutions.” Annual Scientific Meeting of the International Society of Political Psychology, Istanbul, 9-12 July 2011. 2 Jan. 2021 <https://pdfs.semanticscholar.org/>. Chapman, Roger. “Introduction, Culture Wars: Rhetoric and Reality.” Culture Wars: An Encyclopedia of Issues, Viewpoints, and Voices. Eds. Roger Chapman and M.E. Sharpe. 2010. 8-10. Chen, Jie, and Chunlong Lu. “Social Capital in Urban China: Attitudinal and Behavioral Effects on Grassroots Self-Government.” Social Science Quarterly 88.2 (June 2007): 422-442. China's State Council Information Office. “Fighting COVID-19: China in Action.” Xinhuanet 7 June 2020. 2 Sep. 2020 <http://www.xinhuanet.com/english/2020-06/07/c_139120424.htm?bsh_bid=551709954>. Fukuyama, Francis. Trust: The Social Virtues and the Creation of Prosperity. Hamish Hamilton, 1995. Guiso, Luigi, Paola Sapienza, and Luigi Zingales. “Social Capital as Good Culture.” National Bureau of Economic Research Working Paper No. 13712. 2007. 18 Oct. 2017 <http://www.nber.org/papers/w13712.pdf>. Kelly, Mike. “Welcome to the COVID-19 Culture Wars. Why Are We Fighting about Masks?” Yahoo News 4 Dec. 2020 <https://www.msn.com/en-us/news/us/welcome-to-the-COVID-19-culture-wars-why-are-we-fighting-about-masks-mike-kelly/ar-BB1bCOHN>. Malik, Nesrine. “The Right's Culture War Is No Longer a Sideshow to Our Politics – It Is Our Politics.” The Guardian 31 Aug. 2020. 6 Jan. 2021 <https://www.theguardian.com/commentisfree/2020/aug/31/the-rights-culture-war-politics-rightwing-fantasy-elections>. Offe, Carl. “How Can We Trust Our Fellow Citizens?” Democracy and Trust. Ed. M.E. Warren. Cambridge University Press, 1999. 42-87. Ostrom, Elinor, and T.K. Ahn.
“The Meaning of Social Capital and Its Link to Collective Action.” Handbook of Social Capital: The Troika of Sociology, Political Science and Economics. Eds. Gert Tinggaard Svendsen and Gunnar Lind Haase Svendsen. Edward Elgar, 2009. 17–35. Paxton, Pamela. “Is Social Capital Declining in the United States? A Multiple Indicator Assessment.” American Journal of Sociology 105.1 (1999): 88-127. People.cn. “Hubeisheng Huanggangshi chufen dangyuan ganbu 337 ren.” [“337 Party Cadres Were Disciplined in Huanggang, Hubei Province.”] 2 Feb. 2020. 10 Sep. 2020 <http://fanfu.people.com.cn/n1/2020/0130/c64371-31565382.html>. ———. “Zai yiqing fangkong douzheng zhong zhangxian weida zhongguo jingshen.” [“Demonstrating the Great Spirit of China in Fighting the Pandemic.”] 7 Apr. 2020. 9 Sep. 2020 <http://opinion.people.com.cn/n1/2020/0407/c1003-31663076.html>. Peters, Jeremy W. “How Abortion, Guns and Church Closings Made Coronavirus a Culture War.” New York Times 20 Apr. 2020. 6 Jan. 2021 <http://www.nytimes.com/2020/04/20/us/politics/coronavirus-protests-democrats-republicans.html>. Pew Research Center. “Americans Give the U.S. Low Marks for Its Handling of COVID-19, and So Do People in Other Countries.” 21 Sep. 2020. 15 Jan. 2021 <https://www.pewresearch.org/fact-tank/2020/09/21/americans-give-the-u-s-low-marks-for-its-handling-of-covid-19-and-so-do-people-in-other-countries/>. Putnam, Robert D. “Bowling Alone: America’s Declining Social Capital.” Journal of Democracy 6.1 (1995): 65-78. ———. Making Democracy Work: Civic Traditions in Modern Italy. Princeton University Press, 1993. Roßteutscher, Sigrid. “Social Capital Worldwide: Potential for Democratization or Stabilizer of Authoritarian Rule?” American Behavioural Scientist 53.5 (2010): 737–757. Russonello, G. “What’s Driving the Right-Wing Protesters Fighting the Quarantine?” New York Times 17 Apr. 2020. 2 Jan. 2021 <http://www.nytimes.com/2020/04/17/us/politics/poll-watch-quarantine-protesters.html>. Shear, Michael D., Maggie Haberman, Noah Weiland, Sharon LaFraniere, and Mark Mazzetti. “Trump’s Focus as the Pandemic Raged: What Would It Mean for Him?” New York Times 31 Dec. 2020. 2 Jan. 2021 <https://www.nytimes.com/2020/12/31/us/politics/trump-coronavirus.html>. Tracy, Marc. “Anti-Lockdown Protesters Get in Reporters’ (Masked) Faces.” New York Times 13 May 2020. 5 Jan. 2021 <https://www.nytimes.com/2020/05/13/business/media/lockdown-protests-reporters.html>. Victoria Ombudsman. “Investigation into the Detention and Treatment of Public Housing Residents Arising from a COVID-19 ‘Hard Lockdown’ in July 2020.” Dec. 2020. 8 Jan. 2021 <https://assets.ombudsman.vic.gov.au/>. Vogel, Kenneth P., Jim Rutenberg, and Lisa Lerer. “The Quiet Hand of Conservative Groups in the Anti-Lockdown Protests.” New York Times 21 Apr. 2020. 2 Jan. 2021 <http://www.nytimes.com/2020/04/21/us/politics/coronavirus-protests-trump.html>. Weiner, Jennifer. “Fake ‘War on Christmas’ and the Real Battle against COVID-19.” New York Times 7 Dec. 2020. 6 Jan. 2021 <https://www.nytimes.com/2020/12/07/opinion/christmas-religion-COVID-19.html>. White, Gordon. “Civil Society, Democratization and Development: Clearing the Analytical Ground.” Civil Society in Democratization. Eds. Peter Burnell and Peter Calvert. Taylor & Francis, 2004. 375-390. Wu, Cary. “How Chinese Citizens View Their Government’s Coronavirus Response.” The Conversation 5 June 2020. 2 Sep. 2020 <https://theconversation.com/how-chinese-citizens-view-their-governments-coronavirus-response-139176>. Wu, Fengshi. 
“An Emerging Group Name ‘Gongyi’: Ideational Collectivity in China's Civil Society.” China Review 17.2 (2017): 123-150. ———. “Evolving State-Society Relations in China: Introduction.” China Review 17.2 (2017): 1-6. Xu, Bin. “Consensus Crisis and Civil Society: The Sichuan Earthquake Response and State-Society Relations.” The China Journal 71 (2014): 91-108. Xu, Juan. “Wei yiqing fangkong zhulao fazhi diba.” [“Build a Strong Legal ‘Dam’ for Disease Control.”] People.cn 24 Feb. 2020. 10 Sep. 2020 <http://opinion.people.com.cn/n1/2020/0224/c1003-31600409.html>.
APA, Harvard, Vancouver, ISO, and other styles
47

Moore, Christopher Luke. "Digital Games Distribution: The Presence of the Past and the Future of Obsolescence." M/C Journal 12, no. 3 (2009). http://dx.doi.org/10.5204/mcj.166.

Full text
Abstract:
A common criticism of the rhythm video game genre, which includes series like Guitar Hero and Rock Band, is that playing musical simulation games is a waste of time when you could be playing an actual guitar and learning a real skill. A more serious criticism of games cultures draws attention to the degree of e-waste they produce. E-waste or electronic waste includes mobile phones, computers, televisions and other electronic devices, containing toxic chemicals and metals whose landfill, recycling and salvaging all produce distinct environmental and social problems. The e-waste produced by games like Guitar Hero is obvious in the regular flow of merchandise transforming computer and video games stores into simulation music stores, filled with replica guitars, drum kits, microphones and other products whose half-lives are short and whose obsolescence is anticipated in the annual cycles of consumption and disposal. This paper explores the connection between e-waste and obsolescence in the games industry, and argues for the further consideration of consumers as part of the solution to the problem of e-waste. It uses a case study of the PC digital distribution software platform, Steam, to suggest that the digital distribution of games may offer an alternative model to market-driven software and hardware obsolescence, and more generally, that such software platforms might be a place to support cultures of consumption that delay rather than promote hardware obsolescence and its inevitability as e-waste. The question is whether there exists a potential for digital distribution to be a means of not only eliminating the need to physically transport commodities (its current 'green' benefit), but also of supporting consumer practices that further reduce e-waste. The games industry relies on a rapid production and innovation cycle, one that actively enforces hardware obsolescence. Current video game consoles, including the PlayStation 3, the Xbox 360 and Nintendo Wii, are the seventh generation of home gaming consoles to appear within forty years, and each generation is accompanied by an immense international transportation of games hardware, software (in various storage formats) and peripherals. Obsolescence also occurs at the software or content level and is significant because the games industry as a creative industry is dependent on the extensive management of multiple intellectual properties. The computing and video games software industry operates in close partnership with the hardware industry, and as such, software obsolescence directly contributes to hardware obsolescence. The obsolescence of content and the redundancy of the methods of policing its scarcity in the marketplace have been accelerated and altered by the processes of disintermediation, with a range of outcomes (Flew). The music industry is perhaps the most advanced in terms of disintermediation, with digital distribution at the center of the conflict between legitimate and unauthorised access to intellectual property. This points to one issue with the hypothesis that digital distribution can lead to a reduction in hardware obsolescence, as the marketplace leader and key online distributor of music, Apple, is also the major producer of new media technologies and devices that are the paragon of stylistic obsolescence. Stylistic obsolescence, in which fashion changes products across seasons of consumption, has long been observed as the dominant form of scaled industrial innovation (Slade).
Stylistic obsolescence is differentiated from mechanical or technological obsolescence, the deliberate supersedence of products by more advanced designs, better production techniques and other minor innovations. The line between stylistic and technological obsolescence is not always clear, especially as reduced durability has become a powerful market strategy (Fitzpatrick). This occurs where the design of technologies is subsumed within the discourses of manufacturing, consumption and the logic of planned obsolescence, in which the product or parts are intended to fail, degrade or underperform over time. It is especially the case with signature new media technologies such as laptop computers, mobile phones and portable games devices. Gamers are as guilty as other consumer groups in contributing to e-waste as participants in the industry's cycles of planned obsolescence, but some of them complicate discussions over the future of obsolescence and e-waste. Many gamers actively work to forestall the obsolescence of their games: they invest time in the play of older games (“retrogaming”); they donate labor and creative energy to the production of user-generated content as a means of sustaining involvement in gaming communities; and they produce entirely new game experiences for other users, based on existing software and hardware modifications known as 'mods'. With Guitar Hero and other 'rhythm' games it would be easy to argue that the hardware components of this genre have only one future: as waste. Alternatively, we could consider the actual lifespan of these objects (including their impact as e-waste) and the roles they play in the performances and practices of communities of gamers. For example, the Elmo Guitar Hero controller mod, the Tesla coil Guitar Hero controller interface, the Rock Band Speak n' Spellbinder mashup, the multiple and almost sacrilegious Fender guitar hero mods, the Guitar Hero Portable Turntable Mod and MAKE magazine's Trumpet Hero all indicate a significant diversity of user innovation, community formation and individual investment in the post-retail life of computer and video game hardware. Obsolescence is not just a problem for the games industry but for the computing and electronics industries more broadly as direct contributors to the social and environmental cost of electrical waste and obsolete electrical equipment. Planned obsolescence has long been the experience of gamers and computer users, as the basis of a utopian mythology of upgrades (Dovey and Kennedy). For PC users the upgrade pathway is traversed by the consumption of further hardware and software post initial purchase, in a cycle of endless consumption, acquisition and waste (as older parts are replaced and eventually discarded). The accumulation and disposal of these cultural artefacts do not devalue or accrue in space or time at the same rate (Straw), and many users will persist for years, gradually upgrading, delaying obsolescence and even perpetuating the circulation of older cultural commodities. Flea markets and secondhand fairs are popular sites for the purchase of new, recent, old, and recycled computer hardware and peripherals. Such practices and parallel markets support the strategies of 'making do' described by De Certeau, but they also continue the cycle of upgrade and obsolescence, and they are still consumed as part of the promise of the 'new', and the desire for a purchase that will finally 'fix' the users' computer in a state of completion (29).
The planned obsolescence of new media technologies is common, but its success is mixed; for example, support for Microsoft's operating system Windows XP was officially withdrawn in April 2009 (Robinson), but due to the popularity of low-cost PC 'netbooks' outfitted with an optimised XP operating system and a less than enthusiastic response to the 'next generation' Windows Vista, XP continues to be popular. Digital Distribution: A Solution? Gamers may be able to reduce the accumulation of e-waste by supporting the disintermediation of the games retail sector by means of online distribution. Disintermediation is the establishment of a direct relationship between the creators of content and their consumers through products and services offered by content producers (Flew 201). The move to digital distribution has already begun to reduce the need to physically handle commodities, but this currently signals only further support of planned, stylistic and technological obsolescence, increasing the rate at which the commodities for recording, storing, distributing and exhibiting digital content become e-waste. Digital distribution is sometimes overlooked as a potential means for promoting communities of user practice dedicated to e-waste reduction; at the same time it is actively employed to reduce the potential for the unregulated appropriation of content and to restrict post-purchase sales through Digital Rights Management (DRM) technologies. Distributors like Amazon.com continue to pursue commercial opportunities in linking the user to digital distribution of content via exclusive hardware and software technologies. The Amazon e-book reader, the Kindle, operates via a proprietary mobile network using a commercially run version of the wireless 3G protocols. The e-book reader is heavily encrypted with DRM technologies and exclusive digital book formats designed to enforce current copyright restrictions and eliminate second-hand sales, lending, and further post-purchase distribution. The success of this mode of distribution is connected to Amazon's ability to tap both the mainstream market and the consumer demand for the less-than-popular; those books, movies, music and television series that may not have been 'hits' at the time of release. The desire to revisit forgotten niches, such as B-sides, comics, books, and older video games, is, suggests Chris Anderson, linked with so-called “long tail” economics. Recently Webb has queried the economic impact of the Long Tail as a business strategy, but does not deny the underlying dynamics, which suggest that content does not obsolesce in any straightforward way. Niche markets for older content are nourished by participatory cultures and Web 2.0 style online services. A good example of the Long Tail phenomenon is the recent case of the 1971 book A Lion Called Christian, by Anthony Burke and John Rendall, republished after the authors' film of a visit to a resettled Christian in Africa was popularised on YouTube in 2008. Anderson's Long Tail theory suggests that over time a large number of items, each with unique rather than mass histories, will be subsumed as part of a larger community of consumers, including fans, collectors and everyday users with a long term interest in their use and preservation.
If digital distribution platforms can reduce e-waste, they can perhaps be fostered by ensuring that digital consumers have access to morally and ethically aware consumer decisions, but also that they enjoy traditional consumer freedoms, such as the right to sell on and change or modify their property. For it is not only the fixation on the 'next generation' that contributes to obsolescence, but also technologies like DRM systems that discourage second hand sales and restrict modification. The legislative upgrades, patches and amendments to copyright law that have attempted to maintain the law's effectiveness in competing with peer-to-peer networks have supported DRM and other intellectual property enforcement technologies, despite the difficulties that owners of intellectual property have encountered with the effectiveness of DRM systems (Moore, Creative). The games industry continues to experiment with DRM; however, this industry also stands out as one of the few to have significantly incorporated the user within its official modes of production (Moore, Commonising). Is the games industry capable of supporting (or willing to support) a digital delivery system that attempts to minimise or even reverse software and hardware obsolescence? We can try to answer this question by looking in detail at the biggest digital distributor of PC games, Steam. Steam Figure 1: The Steam application user interface, retail section. Steam is a digital distribution system designed for the Microsoft Windows operating system and operated by American video game development company and publisher, Valve Corporation. Steam combines online games retail, DRM technologies and internet-based distribution services with social networking and multiplayer features (in-game voice and text chat, user profiles, etc.) and direct support for major games publishers, independent producers, and communities of user-contributors (modders). Steam, like the iTunes games store, Xbox Live and other digital distributors, provides consumers with direct digital downloads of new, recent and classic titles that can be accessed remotely by the user from any (internet equipped) location. Steam was first packaged with the physical distribution of Half Life 2 in 2004, and the platform's eventual popularity is tied to the success of that game franchise. Steam was not an optional component of the game's installation, and many gamers protested in various online forums, while the platform was treated with suspicion by the global PC games press. It did not help that Steam was at launch everything that gamers take objection to: a persistent and initially 'buggy' piece of software that sits in the PC's operating system and occupies limited memory resources at the cost of hardware performance. Regular updates to the Steam software platform introduced social network features just as mainstream sites like MySpace and Facebook were emerging, and its popularity has undergone rapid subsequent growth. Steam now eclipses competitors with more than 20 million user accounts (Leahy), and Valve Corporation makes it publicly known that Steam collects large amounts of data about its users. This information is available via the public player profile in the community section of the Steam application. It includes the average number of hours the user plays per week, and can even indicate the difficulty the user has in navigating game obstacles.
Valve reports on the number of users on Steam every two hours via its web site, with a population on average of between one and two million simultaneous users (Valve, Steam). We know these users’ hardware profiles because Valve Corporation makes the results of its surveillance public knowledge via the Steam Hardware Survey. Valve’s hardware survey itself conceptualises obsolescence in two ways. First, it uses the results to define the 'cutting edge' of PC technologies, publishing the standards of its own high-end production hardware on the company's blog. Second, the effect of the Survey is to subsequently define obsolescent hardware: for example, in the Survey results for April 2009, we can see that a slight majority of users maintain computers with two central processing units while a significant proportion (almost one third) of users still maintained much older PCs with a single CPU. Both effects of the Survey appear to be well understood by Valve: the Steam Hardware Survey automatically collects information about the community's computer hardware configurations and presents an aggregate picture of the stats on our web site. The survey helps us make better engineering and gameplay decisions, because it makes sure we're targeting machines our customers actually use, rather than measuring only against the hardware we've got in the office. We often get asked about the configuration of the machines we build around the office to do both game and Steam development. We also tend to turn over machines in the office pretty rapidly, at roughly every 18 months. (Valve, Team Fortress) Valve’s support of older hardware might counter perceptions that older PCs have no use and begins to reverse decades of opinion regarding planned and stylistic obsolescence in the PC hardware and software industries. Equally significant to the extension of the lives of older PCs is Steam's support for mods and its promotion of user-generated content. By providing software for mod creation and distribution, Steam maximises what Postigo calls the development potential of fan-programmers. One of the 'payoffs' in the information/access exchange for the user with Steam is the degree to which Valve's End-User Licence Agreement (EULA) permits individuals and communities of 'modders' to appropriate its proprietary game content for use in the creation of new games and games materials for redistribution via Steam. These mods extend the play of the older games, by requiring their purchase via Steam in order for the individual user to participate in the modded experience. If Steam is able to encourage this kind of appropriation and community support for older content, then the potential exists for it to support cultures of consumption and practices of use that collaboratively maintain, extend, and prolong the life and use of games. Further, Steam incorporates the insights of “long tail” economics in a purely digital distribution model, in which the obsolescence of 'non-hit' game titles can be dramatically overturned. Published in November 2007, Unreal Tournament 3 (UT3) by Epic Games was unappreciated in a market saturated with games in the first-person shooter genre. Epic republished UT3 on Steam 18 months later, making the game available to play for free for one weekend, followed by discounted access to new content.
The 2000 per cent increase in players over the game's 'free' trial weekend has translated into enough sales of the game for Epic to no longer consider the release a commercial failure: It’s an incredible precedent to set: making a game a success almost 18 months after a poor launch. It’s something that could only have happened now, and with a system like Steam...Something that silently updates a purchase with patches and extra content automatically, so you don’t have to make the decision to seek out some exciting new feature: it’s just there anyway. Something that, if you don’t already own it, advertises that game to you at an agreeably reduced price whenever it loads. Something that enjoys a vast community who are in turn plugged into a sea of smaller relevant communities. It’s incredibly sinister. It’s also incredibly exciting... (Meer) Clearly concerns exist about Steam's user privacy policy, but this also invites us to think about the economic relationship between gamers and games companies as it is reconfigured through the private contractual relationship established by the EULA which accompanies the digital distribution model. The games industry has established contractual and licensing arrangements with its consumer base in order to support and reincorporate emerging trends in user-generated cultures and other cultural formations within its official modes of production (Moore, "Commonising"). When we consider that Valve gets to tax sales of its virtual goods and can further sell the information farmed from its users to hardware manufacturers, it is reasonable to consider the relationship between the corporation and its gamers as exploitative. Gabe Newell, the Valve co-founder and managing director, conversely believes that people are willing to give up personal information if they feel it is being used to get better services (Leahy). If that sentiment is correct then consumers may be willing to further trade for services that can reduce obsolescence and begin to address the problems of e-waste from the ground up. Conclusion Clearly, there is a potential for digital distribution to be a means of not only eliminating the need to physically transport commodities but also supporting consumer practices that further reduce e-waste. For an industry where only a small proportion of the games made break even, the successful relaunch of older games content indicates Steam's capacity to ameliorate software obsolescence. Digital distribution extends the use of commercially released games by providing disintermediated access to older and user-generated content. For Valve, this occurs within a network of exchange, as access to user-generated content, social networking services, and support for the organisation and coordination of communities of gamers is traded for user information and repeat business. Evidence for whether this will actively translate to an equivalent decrease in the obsolescence of game hardware might be observed with indicators like the Steam Hardware Survey in the future. The degree of potential offered by digital distribution is disrupted by a range of technical, commercial and legal hurdles, the primary of which is the deployment of DRM, as part of a range of techniques designed to limit consumer behaviour post purchase.
While intervention in the form of legislation and radical change to the insidious nature of electronics production is crucial in order to achieve long term reduction in e-waste, the user is currently considered only in terms of 'ethical' consumption and ultimately divested of responsibility through participation in corporate, state and civil recycling and e-waste management operations. The message is either 'careful what you purchase' or 'careful how you throw it away' and, like DRM, ignores the connections between product, producer and user, and the consumer support for environmentally, ethically and socially positive production, distribution, disposal and recycling. This article has adopted a different strategy, one that sees digital distribution platforms like Steam as capable, if not currently active, in supporting community practices that should be seriously considered in conjunction with a range of approaches to the challenge of obsolescence and e-waste. References Anderson, Chris. "The Long Tail." Wired Magazine 12.10 (2004). 20 Apr. 2009 ‹http://www.wired.com/wired/archive/12.10/tail.html›. De Certeau, Michel. The Practice of Everyday Life. Berkeley: U of California P, 1984. Dovey, Jon, and Helen Kennedy. Game Cultures: Computer Games as New Media. London: Open University Press, 2006. Fitzpatrick, Kathleen. The Anxiety of Obsolescence. Nashville: Vanderbilt UP, 2008. Flew, Terry. New Media: An Introduction. South Melbourne: Oxford UP, 2008. Leahy, Brian. "Live Blog: DICE 2009 Keynote - Gabe Newell, Valve Software." The Feed. G4TV 18 Feb. 2009. 16 Apr. 2009 ‹http://g4tv.com/thefeed/blog/post/693342/Live-Blog-DICE-2009-Keynote-–-Gabe-Newell-Valve-Software.html›. Meer, Alec. "Unreal Tournament 3 and the New Lazarus Effect." Rock, Paper, Shotgun 16 Mar. 2009. 24 Apr. 2009 ‹http://www.rockpapershotgun.com/2009/03/16/unreal-tournament-3-and-the-new-lazarus-effect/›. Moore, Christopher. "Commonising the Enclosure: Online Games and Reforming Intellectual Property Regimes." Australian Journal of Emerging Technologies and Society 3.2 (2005). 12 Apr. 2009 ‹http://www.swin.edu.au/sbs/ajets/journal/issue5-V3N2/abstract_moore.htm›. Moore, Christopher. "Creative Choices: Changes to Australian Copyright Law and the Future of the Public Domain." Media International Australia 114 (Feb. 2005): 71–83. Postigo, Hector. "Of Mods and Modders: Chasing Down the Value of Fan-Based Digital Game Modification." Games and Culture 2 (2007): 300-13. Robinson, Daniel. "Windows XP Support Runs Out Next Week." PC Business Authority 8 Apr. 2009. 16 Apr. 2009 ‹http://www.pcauthority.com.au/News/142013,windows-xp-support-runs-out-next-week.aspx›. Straw, Will. "Exhausted Commodities: The Material Culture of Music." Canadian Journal of Communication 25.1 (2000): 175. Slade, Giles. Made to Break: Technology and Obsolescence in America. Cambridge: Harvard UP, 2006. Valve. "Steam and Game Stats." 26 Apr. 2009 ‹http://store.steampowered.com/stats/›. Valve. "Team Fortress 2: The Scout Update." Steam Marketing Message 20 Feb. 2009. 12 Apr. 2009 ‹http://storefront.steampowered.com/Steam/Marketing/message/2269/›. Webb, Richard. "Online Shopping and the Harry Potter Effect." New Scientist 2687 (2008): 52-55. 16 Apr. 2009 ‹http://www.newscientist.com/article/mg20026873.300-online-shopping-and-the-harry-potter-effect.html?page=2›. With thanks to Dr Nicola Evans and Dr Frances Steel for their feedback and comments on drafts of this paper.
APA, Harvard, Vancouver, ISO, and other styles
48

Kozak, Nadine Irène. "Building Community, Breaking Barriers: Little Free Libraries and Local Action in the United States." M/C Journal 20, no. 2 (2017). http://dx.doi.org/10.5204/mcj.1220.

Full text
Abstract:
Image 1: A Little Free Library. Image credit: Nadine Kozak. Introduction Little Free Libraries give people a reason to stop and exchange things they love: books. It seemed like a really good way to build a sense of community. Dannette Lank, Little Free Library steward, Whitefish Bay, Wisconsin, 2013 (Rumage). Against a backdrop of stagnant literacy rates and enduring perceptions of urban decay and the decline of communities in cities (NCES, “Average Literacy”; NCES, “Average Prose”; Putnam 25; Skogan 8), legions of Little Free Libraries (LFLs) have sprung up across the United States between 2009 and the present. LFLs are small, often homemade structures housing books and other physical media for passersby to choose a book to take or leave a book to share with others. People have installed the structures in front of homes, schools, libraries, churches, fire and police stations, community gardens, and in public parks. There are currently 50,000 LFLs around the world, most of which are in the continental United States (Aldrich, “Big”). LFLs encompass building in multiple senses of the term; LFLs are literally tiny buildings to house books and people use the structures for building neighbourhood social capital. The organisation behind the movement cites “building community” as one of its three core missions (Little Free Library). Rowan Moore, theorising humans’ reasons for building, argues desire and emotion are central (16). The LFL movement provides evidence for this claim: stewards erect LFLs based on hope for increased literacy and a desire to build community through their altruistic actions. This article investigates how LFLs build urban community and explores barriers to the endeavour, specifically municipal building and right of way ordinances used in attempts to eradicate the structures. It also examines local responses to these municipal actions and potential challenges to traditional public libraries brought about by LFLs, primarily the decrease of visits to public libraries and the use of LFLs to argue for defunding of publicly provided library services. The work argues that LFLs build community in some places but may threaten other community services. This article employs qualitative content analysis of 261 stewards’ comments about their registered LFLs on the organisation’s website drawn from the two largest cities in a Midwestern state and an interview with an LFL steward in a village in the same state to analyse how LFLs build community. The two cities, located in the state where the LFL movement began, provide a cross section of innovators, early adopters, and late adopters of the book exchanges, determined by their registered charter numbers. Press coverage and municipal documents from six cities across the US gathered through a snowball sample provide data about municipal challenges to LFLs. Blog posts penned by practising librarians furnish some opinions about the movement. This research, while not a representative sample, identifies common themes and issues around LFLs and provides a basis for future research. The act of building and curating an LFL is a representation of shared beliefs about literacy, community, and altruism. Establishing an LFL is an act of civic participation. As Nico Carpentier notes, while some civic participation is macro, carried out at the level of the nation, other participation is micro, conducted in “the spheres of school, family, workplace, church, and community” (17). Ruth H. 
Landman investigates voluntary activities in the city, including community gardening and community bakeries, and argues that the people associated with these projects find themselves in a “denser web of relations” than previously (2). Gretchen M. Herrmann argues that neighbourhood garage sales, although fleeting events, build an enduring sense of community amongst participants (189). Ray Oldenburg contends that people create associational webs in what he calls “great good places”; third spaces separate from home and work (20-21). Little Free Libraries and Community Building Emotion plays a central role in the decision to become an LFL steward, the person who establishes and maintains the LFL. People recount their desire to build a sense of community and share their love of reading with neighbours (Charter 4684; Charter 8212; Charter 9437; Charter 9705; Charter 16561). One steward in the study reported, “I love books and I want to be able to help foster that love in our neighbourhood as well” (Charter 4369). Image 2: A Little Free Library, bench, water fountain, and dog’s water bowl for passersby to enjoy. Image credit: Nadine Kozak. Relationships and emotional ties are central to some people’s decisions to have an LFL. The LFL website catalogues many instances of memorial LFLs, tributes to librarians, teachers, and avid readers. Indeed, the first Little Free Library, built by Todd Bol in 2009, was a tribute to his late mother, a teacher who loved reading (“Our History”). In the two-city study area, ten LFLs are memorials, allowing bereaved families to pass on a loved one’s penchant for sharing books and reading (Charter 1235; Charter 1309; Charter 4604; Charter 6219; Charter 6542; Charter 6954; Charter 10326; Charter 16734; Charter 24481; Charter 30369). In some cases, urban neighbours come together to build, erect, and stock LFLs. One steward wrote: “Those of us who live in this friendly neighborhood collaborated to design[,] build and paint a bungalow themed library” to match the houses in the neighbourhood (Charter 2532). Another noted: “Our neighbor across the street is a skilled woodworker, and offered to build the library for us if we would install it in our yard and maintain it. What a deal!” (Charter 18677). Community organisations also install and maintain LFLs, including 21 in the study population (e.g. Charter 31822; Charter 27155). Stewards report increased communication with neighbours due to their LFLs. A steward noted: “We celebrated the library’s launch on a Saturday morning with neighbors of all ages. We love sitting on our front porch and catching up with the people who stop to check out the books” (Charter 9673). Another exclaimed: within 24 hours, before I had time to paint it, my Little Free Library took on a life of its own. All of a sudden there were lots of books in it and people stopping by. I wondered where these books came from as I had not put any in there. Little kids in the neighborhood are all excited about it and I have met neighbors that I had never seen before. This is going to be fun! (Charter 15981) LFLs build community through social interaction and collaboration. This occurs when neighbours come together to build, install, and fill the structures. The structures also open avenues for conversation between neighbours who had no connection previously. Like Herrmann’s neighbourhood garage sales, LFLs create and maintain social ties between neighbours and link them by the books they share. 
Additionally, when neighbours gather and communicate at the LFL structure, they create a transitory third space for “informal public life”, where people can casually interact at a nearby location (Oldenburg 14, 288). Building Barriers, Creating Community The erection of an LFL in an urban neighbourhood is not, however, always a welcome sight. The news analysis found that LFLs most often come to the attention of municipal authorities via citizen complaints, which lead to investigations and enforcement of ordinances. In Kansas, a neighbour called an LFL an “eyesore” and an “illegal detached structure” (Tapper). In Wisconsin, well-meaning future stewards contacted their village authorities to ask about rules, inadvertently setting off a six-month ban on LFLs (Stingl; Rumage). Resulting from complaints and inquiries, municipalities regulated, and in one case banned, LFLs, thus building barriers to citizens’ desires to foster community and share books with neighbours. Municipal governments use two major areas of established code to remove or prohibit LFLs: ordinances banning unapproved structures in residents’ yards and those concerned with obstructions to right of ways when stewards locate the LFLs between the public sidewalk and street. In the first instance, municipal ordinances prohibit either front yard or detached structures. Controversies over these ordinances and LFLs erupted in Whitefish Bay, Wisconsin, in 2012; Leawood, Kansas, in 2014; Shreveport, Louisiana, in 2015; and Dallas, Texas, in 2015. The Village of Whitefish Bay banned LFLs due to an ordinance prohibiting “front yard structures,” including mailboxes (Sanburn; Stingl). In Leawood, the city council argued that an LFL, owned by a nine-year-old boy, violated an ordinance that forbade the construction of any detached structures without city council permission. In Shreveport, the stewards of an LFL received a cease and desist letter from city council for having an “accessory structure” in the front yard (LaCasse; Burris) and Dallas officials knocked on a steward’s front door, informing her of a similar breach (Kellogg). In the second instance, some urban municipalities argued that LFLs are obstructions that block right of ways. In Lincoln, Nebraska, the public works director noted that the city “uses the area between the sidewalk and the street for snow storage in the winter, light poles, mailboxes, things like that.” The director continued: “And I imagine these little libraries are meant to congregate people like a water cooler, but we don’t want people hanging around near the road by the curb” (Heady). Both Lincoln in 2014 and Los Angeles (LA), California, in 2015, cited LFLs for obstructions. In Lincoln, the city notified the Southminster United Methodist Church that their LFL, located between the public sidewalk and street, violated a municipal ordinance (Sanburn). In LA, the Bureau of Street Services notified actor Peter Cook that his LFL, situated in the right of way, was an “obstruction” that Cook had to remove or the city would levy a fine (Moss). The city agreed at a hearing to consider a “revocable permit” for Cook’s LFL, but later denied its issuance (Condes). Stewards who found themselves in violation of municipal ordinances were able to harness emotion and build outrage over limits to individuals’ ability to erect LFLs. In Kansas, the stewards created a Facebook page, Spencer’s Little Free Library, which received over 31,000 likes and messages of support. 
One comment left on the page reads: “The public outcry will force those lame city officials to change their minds about it. Leave it to the stupid government to rain on everybody’s parade” (“Good”). Children’s author Daniel Handler sent a letter to the nine-year-old steward, writing as Lemony Snicket, “fighting against librarians is immoral and useless in the face of brave and noble readers such as yourself” (Spencer’s). Indeed, the young steward gave a successful speech to city hall arguing that the body should allow the structures because “‘lots of people in the neighborhood used the library and the books were always changing. I think it’s good for Leawood’” (Bauman). Other local LFL supporters also attended council and spoke in favour of the structures (Harper). In LA, Cook’s neighbours started a petition that gathered over 100 signatures, where people left comments including, “No to bullies!” (Lopez). Additionally, neighbours gathered to discuss the issue (Dana). In Shreveport, neighbours left stacks of books in their front yards, without a structure housing them due to the code banning accessory structures. One noted, “I’m basically telling the [Metropolitan Planning Commission] to go sod off” (Friedersdorf; Moss). LFL proponents reacted with frustration and anger at the perceived over-reach of the government toward harmless LFLs. In addition to the actions of neighbours and supporters, the national and local press commented on the municipal constraints. The LFL movement has benefitted from a significant amount of positive press in its formative years, a press willing to publicise and criticise municipal actions to thwart LFL development. Stewards’ struggles against municipal bureaucracies building barriers to LFLs make prime fodder for the news media. Herbert J. Gans argues an enduring value in American news is “the preservation of the freedom of the individual against the encroachments of nation and society” (50). The juxtaposition of well-meaning LFL stewards against municipal councils and committees provided a compelling opportunity to illustrate this value. National media outlets, including Time (Sanburn), Christian Science Monitor (LaCasse), and The Atlantic, drew attention to the issue. Writing in The Atlantic, Conor Friedersdorf critically noted: I wish I was writing this to merely extol this trend [of community building via LFLs]. Alas, a subset of Americans are determined to regulate every last aspect of community life. Due to selection bias, they are overrepresented among local politicians and bureaucrats. And so they have power, despite their small-mindedness, inflexibility, and lack of common sense so extreme that they’ve taken to cracking down on Little Free Libraries, of all things. (Friedersdorf, n.p.) Other columnists mirrored this sentiment. Writing in the LA Times, one commentator sarcastically wrote that city officials were “cracking down on one of the country’s biggest problems: small community libraries where residents share books” (Schaub). Journalists argued this was government overreach on non-issues rather than tackling larger community problems, such as income inequality, homelessness, and aging infrastructure (Solomon; Schaub). The protests and negative press coverage led to, in the case of the municipalities with front yard and detached structure ordinances, détente between stewards and councils as the latter passed amendments permitting and regulating LFLs. 
Whitefish Bay, Leawood, and Shreveport amended ordinances to allow for LFLs, but also to regulate them (Everson; Topil; Siegel). Ordinances about LFLs restricted their number on city blocks, placement on private property, size and height, as well as required registration with the municipality in some cases. Lincoln officials allowed the church to relocate the LFL from the right of way to church property and waived the $500 fine for the obstruction violation (Sanburn). In addition to the amendments, the protests also led to civic participation and community building including presentations to city council, a petition, and symbolic acts of defiance. Through this protest, neighbours create communities—networks of people working toward a common goal. This aspect of community building around LFLs was unintentional but it brought people together nevertheless. Building a Challenge to Traditional Libraries? LFL marketing and communication staff member Margaret Aldrich suggests in The Little Free Library Book that LFLs are successful because they are “gratifyingly doable” projects that can be accomplished by an individual (16). It is this ease of building, erecting, and maintaining LFLs that builds concern as their proliferation could challenge aspects of library service, such as public funding and patron visits. Some professional librarians are in favour of the LFLs and are stewards themselves (Charter 121; Charter 2608; Charter 9702; Charter 41074; Rumage). Others envision great opportunities for collaboration between traditional libraries and LFLs, including the library publicising LFLs and encouraging their construction as well as using LFLs to serve areas without, or far from, a public library (Svehla; Shumaker). While lauding efforts to build community, some professional librarians question the nomenclature used by the movement. They argue the phrase Little Free Libraries is inaccurate as libraries are much more than random collections of books. Instead, critics contend, the LFL structures are closer to book swaps and exchanges than actual libraries, which offer a range of services such as Internet access, digital materials, community meeting spaces, and workshops and programming on a variety of topics (American Library Association; Annoyed Librarian). One university reference and instruction librarian worries about “the general public’s perception and lumping together of little free libraries and actual ‘real’ public libraries” (Hardenbrook). By way of illustration, he imagines someone asking, “‘why do we need our tax money to go to something that can be done for FREE?’” (Hardenbrook). Librarians holding this perspective fear the movement might add to a trend of neoliberalism, limiting or ending public funding for libraries, as politicians believe that the localised, individual solutions can replace publicly funded library services. This is a trend toward what James Ferguson calls “responsibilized” citizens, those “deployed to produce governmentalized results that do not depend on direct state intervention” (172). In other countries, this shift has already begun. In the United Kingdom (UK), governments are devolving formerly public services onto community groups and volunteers. Lindsay Findlay-King, Geoff Nichols, Deborah Forbes, and Gordon Macfadyen trace the impacts of the 2012 Localism Act in the UK, which caused “sport and library asset transfers” (12) to community and volunteer groups who were then responsible for service provision and, potentially, facility maintenance as well. 
Rather than being in charge of a “doable” LFL, community groups and volunteers become the operators of much larger facilities. Recent efforts in the US to privatise library services as governments attempt to cut budgets and streamline services (Streitfeld) ground this fear. Image 3: “Take a Book, Share a Book,” a Little Free Library motto. Image credit: Nadine Kozak. LFLs might have real consequences for public libraries. Another potential unintended consequence of the LFLs is decreasing visits to public libraries, which could provide officials seeking to defund them with evidence that they are no longer relevant or necessary. One LFL steward and avid reader remarked that she had not used her local public library since 2014 because “I was using the Little Free Libraries” (Steward). Academics and librarians must conduct more research to determine what impact, if any, LFLs are having on visits to traditional public libraries. Conclusion Little Free Libraries across the United States, and increasingly in other countries, have generated discussion, promoted collaboration between neighbours, and led to sharing. In other words, they have built communities. This was the intended consequence of the LFL movement. There has, however, also been unplanned community building in response to municipal threats to the structures due to right of way, safety, and planning ordinances. The more threatening concern is not the municipal ordinances used to block LFL development, but rather the trend of privatisation of publicly provided services. While people are celebrating the community built by the LFLs, caution must be exercised lest central institutions of the public and community, traditional public libraries, be lost. Academics and communities ought to consider not just impact on their local community at the street level, but also wider structural concerns so that communities can foster many “great good places”—the Little Free Libraries and traditional public libraries as well. References Aldrich, Margaret. “Big Milestone for Little Free Library: 50,000 Libraries Worldwide.” Little Free Library. Little Free Library Organization. 4 Nov. 2016. 25 Feb. 2017 <https://littlefreelibrary.org/big-milestone-for-little-free-library-50000-libraries-worldwide/>.Aldrich, Margaret. The Little Free Library Book: Take a Book, Return a Book. Minneapolis, MN: Coffee House Press, 2015.Annoyed Librarian. “How to Protect Little Free Libraries.” Library Journal Blog 9 Jul. 2015. 26 Mar. 2017 <http://lj.libraryjournal.com/blogs/annoyedlibrarian/2015/07/09/how-to-protect-little-free-libraries/>.American Library Association. “Public Library Use.” State of America’s Libraries: A Report from the American Library Association (2015). 25 Feb. 2017 <http://www.ala.org/tools/libfactsheets/alalibraryfactsheet06>.Bauman, Caroline. “‘Little Free Libraries’ Legal in Leawood Thanks to 9-year-old Spencer Collins.” The Kansas City Star 7 Jul. 2014. 25 Feb. 2017 <http://www.kansascity.com/news/politics-government/article687562.html>.Burris, Alexandria. “First Amendment Issues Surface in Little Free Library Case.” Shreveport Times 5 Feb. 2015. 25 Feb. 2017 <http://www.shreveporttimes.com/story/news/local/2015/02/05/expert-use-zoning-law-clashes-first-amendment/22922371/>.Carpentier, Nico. Media and Participation: A Site of Ideological-Democratic Struggle. Bristol: Intellect, 2011.Charter 121. “The World Map.” Little Free Library (2017). 26 Mar. 2017 <https://littlefreelibrary.org/ourmap/>.Charter 1235. “The World Map.” Little Free Library (2017). 
26 Mar. 2017 <https://littlefreelibrary.org/ourmap/>.Charter 1309. “The World Map.” Little Free Library (2017). 26 Mar. 2017 <https://littlefreelibrary.org/ourmap/>.Charter 2532. “The World Map.” Little Free Library (2017). 26 Mar. 2017 <https://littlefreelibrary.org/ourmap/>.Charter 2608. “The World Map.” Little Free Library (2017). 26 Mar. 2017 <https://littlefreelibrary.org/ourmap/>.Charter 4369. “The World Map.” Little Free Library (2017). 26 Mar. 2017 <https://littlefreelibrary.org/ourmap/>.Charter 4604. “The World Map.” Little Free Library (2017). 26 Mar. 2017 <https://littlefreelibrary.org/ourmap/>.Charter 4684. “The World Map.” Little Free Library (2017). 26 Mar. 2017 <https://littlefreelibrary.org/ourmap/>.Charter 6219. “The World Map.” Little Free Library (2017). 26 Mar. 2017 <https://littlefreelibrary.org/ourmap/>.Charter 6542. “The World Map.” Little Free Library (2017). 26 Mar. 2017 <https://littlefreelibrary.org/ourmap/>.Charter 6954. “The World Map.” Little Free Library (2017). 26 Mar. 2017 <https://littlefreelibrary.org/ourmap/>.Charter 8212. “The World Map.” Little Free Library (2017). 26 Mar. 2017 <https://littlefreelibrary.org/ourmap/>.Charter 9437. “The World Map.” Little Free Library (2017). 26 Mar. 2017 <https://littlefreelibrary.org/ourmap/>.Charter 9673. “The World Map.” Little Free Library (2017). 26 Mar. 2017 <https://littlefreelibrary.org/ourmap/>.Charter 9702. “The World Map.” Little Free Library (2017). 26 Mar. 2017 <https://littlefreelibrary.org/ourmap/>.Charter 9705. “The World Map.” Little Free Library (2017). 26 Mar. 2017 <https://littlefreelibrary.org/ourmap/>.Charter 10326. “The World Map.” Little Free Library (2017). 26 Mar. 2017 <https://littlefreelibrary.org/ourmap/>.Charter 15981. “The World Map.” Little Free Library (2017). 26 Mar. 2017 <https://littlefreelibrary.org/ourmap/>.Charter 16561. “The World Map.” Little Free Library (2017). 26 Mar. 2017 <https://littlefreelibrary.org/ourmap/>.Charter 16734. “The World Map.” Little Free Library (2017). 26 Mar. 2017 <https://littlefreelibrary.org/ourmap/>.Charter 18677. “The World Map.” Little Free Library (2017). 26 Mar. 2017 <https://littlefreelibrary.org/ourmap/>.Charter 24481. “The World Map.” Little Free Library (2017). 26 Mar. 2017 <https://littlefreelibrary.org/ourmap/>.Charter 27155. “The World Map.” Little Free Library (2017). 26 Mar. 2017 <https://littlefreelibrary.org/ourmap/>.Charter 30369. “The World Map.” Little Free Library (2017). 26 Mar. 2017 <https://littlefreelibrary.org/ourmap/>.Charter 31822. “The World Map.” Little Free Library (2017). 26 Mar. 2017 <https://littlefreelibrary.org/ourmap/>.Charter 41074. “The World Map.” Little Free Library (2017). 26 Mar. 2017 <https://littlefreelibrary.org/ourmap/>.Condes, Yvonne. “Save the Little Library!” MomsLA 10 Aug. 2015. 25 Feb. 2017 <http://momsla.com/save-the-micro-library/>.Dana. “The Tenn-Mann Library Controversy, Part 3.” Read with Dana (30 Jan. 2015). 25 Feb. 2017 <https://readwithdana.wordpress.com/2015/01/30/the-tenn-mann-library-controversy-part-three/>.Everson, Jeff. “An Ordinance to Amend and Reenact Chapter 106 of the Shreveport Code of Ordinances Relative to Outdoor Book Exchange Boxes, and Otherwise Providing with Respect Thereto.” City of Shreveport, Louisiana 9 Oct. 2015. 25 Feb. 2017 <http://ftpcontent4.worldnow.com/ksla/pdf/LFLordinance.pdf>.Ferguson, James. “The Uses of Neoliberalism.” Antipode 41.S1 (2009): 166-84.Findlay-King, Lindsay, Geoff Nichols, Deborah Forbes, and Gordon Macfadyen. 
“Localism and the Big Society: The Asset Transfer of Leisure Centres and Libraries—Fighting Closures or Empowering Communities.” Leisure Studies (2017): 1-13.Friedersdorf, Conor. “The Danger of Being Neighborly without a Permit.” The Atlantic 20 Feb. 2015. 25 Feb. 2017 <https://www.theatlantic.com/national/archive/2015/02/little-free-library-crackdown/385531/>.Gans, Herbert J. Deciding What’s News: A Study of CBS Evening News, NBC Nightly News, Newsweek, and Time. Evanston, IL: Northwestern University Press, 2004.“Good Luck Spencer.” Spencer’s Little Free Library Facebook Page 25 Jun. 2014. 26 Mar. 2017 <https://www.facebook.com/Spencerslittlefreelibrary/photos/pcb.527531327376433/527531260709773/?type=3>.Hardenbrook, Joe. “A Little Rant on Little Free Libraries (AKA Probably an Unpopular Post).” Mr. Library Dude (9 Apr. 2014). 25 Feb. 2017 <https://mrlibrarydude.wordpress.com/2014/04/09/a-little-rant-on-little-free-libraries-aka-probably-an-unpopular-post/>.Harper, Deb. “Minutes.” The Leawood City Council 7 Jul. 2014. <http://www.leawood.org/pdf/cc/min/07-07-14.pdf>. Heady, Chris. “City Wants Church to Move Little Library.” Lincoln Journal Star 9 Jul. 2014. 25 Feb. 2017 <http://journalstar.com/news/local/city-wants-church-to-move-little-library/article_7753901a-42cd-5b52-9674-fc54a4d51f47.html>. Herrmann, Gretchen M. “Garage Sales Make Good Neighbors: Building Community through Neighborhood Sales.” Human Organization 62.2 (2006): 181-191.Kellogg, Carolyn. “Officials Threaten to Destroy a Little Free Library in Texas.” Los Angeles Times (1 Oct. 2015). 25 Feb. 2017 <http://www.latimes.com/books/jacketcopy/la-et-jc-little-free-library-texas-20150930-story.html>.LaCasse, Alexander. “Why Are Some Cities Cracking Down on Little Free Libraries.” Christian Science Monitor (5 Feb. 2015). 25 Feb. 2017 <http://www.csmonitor.com/Books/chapter-and-verse/2015/0205/Why-are-some-cities-cracking-down-on-little-free-libraries>.Landman, Ruth H. Creating the Community in the City: Cooperatives and Community Gardens in Washington, DC Westport, CT: Bergin & Garvey, 1993. Little Free Library. Little Free Library Organization (2017). 25 Feb. 2017 <https://littlefreelibrary.org/>.Lopez, Steve. “Actor’s Curbside Libraries Is a Smash—for Most People.” LA Times 3 Feb. 2015. 25 Feb. 2017 <http://www.latimes.com/local/california/la-me-0204-lopez-library-20150204-column.html>.Moore, Rowan. Why We Build: Power and Desire in Architecture. New York: Harper Design, 2013.Moss, Laura. “City Zoning Laws Target Little Free Libraries.” Mother Nature Network 25 Aug. 2015. 25 Feb. 2017 <http://www.mnn.com/lifestyle/arts-culture/stories/city-zoning-laws-target-little-free-libraries>.National Center for Education Statistics (NCES). Average Literacy and Numeracy Scale Scores of 25- to 65-Year Olds, by Sex, Age Group, Highest Level of Educational Attainment, and Country of Other Education System: 2012, table 604.10. 25 Feb. 2017 <https://nces.ed.gov/programs/digest/d15/tables/dt15_604.10.asp?current=yes>.National Center for Education Statistics (NCES). Average Prose, Document, and Quantitative Literacy Scores of Adults: 1992 and 2003. National Assessment of Adult Literacy. 25 Feb. 2017 <https://nces.ed.gov/naal/kf_demographics.asp>.Oldenburg, Ray. The Great Good Place: Cafés, Coffee Shops, Bookstores, Bars, Hair Salons, and Other Hangouts at the Heart of a Community. New York: Marlowe & Company, 1999.“Our History.” Little Free Library. Little Free Library Organization (2017). 25 Feb. 
2017 <https://littlefreelibrary.org/ourhistory/>.Putnam, Robert D. Bowling Alone: The Collapse and Revival of American Community. New York: Simon & Schuster, 2001.Rumage, Jeff. “Little Free Libraries Now Allowed in Whitefish Bay.” Whitefish Bay Patch (8 May 2013). 25 Feb. 2017 <http://patch.com/wisconsin/whitefishbay/little-free-libraries-now-allowed-in-whitefish-bay>.Sanburn, Josh. “What Do Kansas and Nebraska Have against Small Libraries?” Time 10 Jul. 2014. 25 Feb. 2017 <http://time.com/2970649/tiny-libraries-violating-city-ordinances/>.Schaub, Michael. “Little Free Libraries on the Wrong Side of the Law.” LA Times 4 Feb. 2015. 25 Feb. 2017 <http://www.latimes.com/books/jacketcopy/la-et-jc-little-free-libraries-on-the-wrong-side-of-the-law-20150204-story.html>.Shumaker, David. “Public Libraries, Little Free Libraries, and Embedded Librarians.” The Embedded Librarian (28 April 2014) 26 Mar. 2017 <https://embeddedlibrarian.com/2014/04/28/public-libraries-little-free-libraries-and-embedded-librarians/>.Siegel, Julie. “An Ordinance to Amend Section 16.13 of the Municipal Code with Regard to Exempt Certain Little Free Libraries from Front Yard Setback Requirements.” Village of Whitefish Bay, Wisconsin (5 Aug. 2013).Skogan, Wesley G. Police and Community in Chicago: A Tale of Three Cities. New York: Oxford University Press, 2006.Solomon, Dan. “Dallas Is Regulating ‘Little Free Libraries’ for Some Reason.” Texas Monthly (14 Sept. 2016). 25 Feb. 2017 <http://www.texasmonthly.com/the-daily-post/dallas-regulating-little-free-libraries-reason/>.“Spencer’s Little Free Library.” Facebook 15 Jul. 2014. 25 Feb. 2017 <https://www.facebook.com/Spencerslittlefreelibrary/photos/pcb.527531327376433/527531260709773/?type=3>.Steward, M. Personal Interview. 7 Feb. 2017.Stingl, Jim. “Village Slaps Endnote on Little Libraries.” Milwaukee Journal Sentinel 11 Nov. 2012: 1B, 7B.Streitfeld, David. “Anger as a Private Company Takes over Libraries.” The New York Times (26 Sept. 2010). 25 Feb. 2017 <http://www.nytimes.com/2010/09/27/business/27libraries.html>.Svehla, Louise. “Little Free Libraries—The Possibilities Are Endless.” Public Libraries Online (8 Mar. 2013). 25 Feb. 2017 <http://publiclibrariesonline.org/2013/03/little-free-libraries-the-possibilities-are-endless/>.Tapper, Jake. “Boy Fights Council to Save His Library.” CNN 4 Jul. 2014. 25 Feb. 2017 <http://thelead.blogs.cnn.com/2014/07/04/boy-fights-to-save-his-library/>.Topil, Greg. “Little Free Libraries in Lincoln.” City of Lincoln, Nebraska (n.d.). 25 Feb. 2017 <http://lincoln.ne.gov/City/pworks/engine/row/little-library.htm>.
APA, Harvard, Vancouver, ISO, and other styles
49

Smith, Jenny Leigh. "Tushonka: Cultivating Soviet Postwar Taste." M/C Journal 13, no. 5 (2010). http://dx.doi.org/10.5204/mcj.299.

Full text
Abstract:
During World War II, the Soviet Union’s food supply was in a state of crisis. Hitler’s army had occupied the agricultural heartlands of Ukraine and Southern Russia in 1941 and, as a result, agricultural production for the entire nation had plummeted. Soldiers in the Red Army, who easily ate the best rations in the country, subsisted on a daily allowance of just under a kilogram of bread, supplemented with meat, tea, sugar and butter when and if these items were available. The hunger of the Red Army and its effect on the morale and strength of Europe’s eastern warfront were causes for concern for the Soviet government and its European and American allies. The one country with a food surplus decided to do something to help, and in 1942 the United States agreed to send thousands of pounds of meat, cheese and butter overseas to help feed the Red Army. After receiving several shipments of the all-American spiced canned meat SPAM, the Red Army’s quartermaster put in a request for a more familiar canned pork product, Russian tushonka. Pound for pound, America sent more pigs overseas than soldiers during World War II, in part because pork was in oversupply in the America of the early 1940s. Shipping meat to hungry soldiers and civilians in war-torn countries was a practical way to build business for the U.S. meat industry, which had been in decline throughout the 1930s. As per a Soviet-supplied recipe, the first cans of Lend-Lease tushonka were made in the heart of the American Midwest, at meatpacking plants in Iowa and Ohio (Stettinus 6-7). Government contracts in the meat packing industry helped fuel economic recovery, and meatpackers were in a position to take special request orders like the one for tushonka that came through the lines. Unlike SPAM, which was something of a novelty item during the war, tushonka was a food with a past. The original recipe was based on a recipe for preserved meat that had been a traditional product of the Ural Mountains, preserved in jars with salt and fat rather than by pressure and heat. Thus tushonka was requested—and was mass-produced—not simply as a convenience but also as a traditional and familiar food—a taste of home cooking that soldiers could carry with them into the field. Nikita Khrushchev later claimed that the arrival of tushonka was instrumental in helping the Red Army push back against the Nazi invasion (178). Unlike SPAM and other wartime rations, tushonka did not fade away after the war. Instead, it was distributed to the Soviet civilian population, appearing in charity donations and on the shelves of state shops. Often it was the only meat product available on a regular basis. Salty, fatty, and slightly grey-toned, tushonka was an unlikely hero of the postwar era, but during this period tushonka rose from obscurity to become an emblem of socialist modernity. Because it was shelf stable and could be made from a variety of different cuts of meat, it proved an ideal product for the socialist production lines where supplies and the pace of production were infinitely variable. Unusual in a socialist system of supply, this product shaped production and distribution lines, and even influenced the layout of meatpacking factories and the genetic stocks of the animals that were to be eaten. Tushonka’s initial ubiquity in the postwar Soviet Union had little to do with the USSR’s own hog industry. 
Pig populations as well as their processing facilities had been decimated in the war, and pigs that did survive the Axis invasion had been evacuated East with human populations. Instead, the early presence of tushonka in the pig-scarce postwar Soviet Union had everything to do with Harry Truman’s unexpected September 1945 decision to end all “economically useful” Lend-Lease shipments to the Soviet Union (Martel). By the end of September, canned meat was practically the only product still being shipped as part of Lend-Lease (NARA RG 59). Although the United Nations was supposed to distribute these supplies to needy civilians free of cost, travelers to the Soviet Union in 1946 spotted cans of American tushonka for sale in state shops (Skeoch 231). After American tushonka “donations” disappeared from store shelves, the Soviet Union’s meat syndicates decided to continue producing the product. Between its first appearance during the war in 1943 and the 1957 announcement by Nikita Khrushchev that Soviet policy would restructure all state animal farms to support the mass production of one or several processed meat products, tushonka helped to drive the evolution of the Soviet Union’s meat packing industry. Its popularity with both planners and the public gave it the power to reach into food commodity chains. It is this backward reach and the longer-term impacts of these policies that make tushonka an unusual byproduct of the Cold War era. State planners loved tushonka: it was cheap to make, the logistics of preparing it were not complicated, it was easy to transport, and most importantly, it served as tangible evidence that the state was accomplishing a long-standing goal to get more meat to its citizenry and improve the diet of the average Soviet worker. Tushonka became a highly visible product in the Soviet Union’s much vaunted push to establish a modern food regime intended to rival that of the United States. Because it was shelf-stable, wartime tushonka had served as a practical food for soldiers, but after the war tushonka became an ideal food for workers who had neither the time nor the space to prepare a home-cooked meal with fresh meat. The Soviet state started to produce its own tushonka because it was such an excellent fit for the needs and abilities of the Soviet state—consumer demand was rarely considered by planners in this era. Not only did tushonka fit the look and taste of a modern processed meat product (that is, it was standard in texture and flavor from can to can, and was an obviously industrially processed product), it was also an excellent way to make the most of the predominant kind of meat the Soviet Union had in the 1950s: small scraps of low-grade pork and beef, trimmings leftover from butchering practices that focused on harvesting as much animal fat, rather than muscle, from the carcass in question. Just like tushonka, pork sausages and frozen pelmeny, a meat-filled pasta dumpling, also became winning postwar foods thanks to a happy synergy of increased animal production, better butchering and new food processing machines. As postwar pigs recovered their populations, the Soviet processed meat industry followed suit. One official source listed twenty-six different kinds of meat products being issued in 1964, although not all of these were pork (Danilov). An instructional manual distributed by the meat and milk syndicate demonstrated how meat shops should wrap and display sausages, and listed 24 different kinds of sausages that all needed a special style of tying up. 
Because of packaging shortages, the string that bound the sausage was wrapped in a different way for every type of sausage, and shop assistants were expected to be able to identify sausages based on the pattern of their binding. Pelmeny were produced at every meat factory that processed pork. These were “made from start to finish in a special, automated machine, human hands do not touch them. Which makes them a higher quality and better (prevoskhodnogo) product” (Book of Healthy and Delicious Food). These were foods that became possible to produce economically because of a co-occurring increase in pigs, the new standardized practice of equipping meatpacking plants with large-capacity grinders, freezers or coolers, and the enforcement of a system of grading meat. As the state began to rebuild Soviet agriculture from its near-collapse during the war, the Soviet Union looked to the United States for inspiration. Surprisingly, Soviet planners found some of the United States’ more outdated techniques to be quite valuable for new Soviet hog operations. The most striking of these was the adoption of competing phenotypes in the Soviet hog industry. Most major swine varieties had been developed and described in the 19th century in Germany and Great Britain. Breeds had a tendency to split into two phenotypically distinct groups, and in early 20th century American pig farms, there was strong disagreement as to which style of pig was better suited to industrial conditions of production. Some pigs were “hot-blooded” (in other words, fast maturing and prolific reproducers) while others were a slower “big type” pig (a self-explanatory descriptor). Breeds rarely excelled at both traits and it was a matter of opinion whether speed or size was the most desirable trait to augment. The over-emphasis of either set of qualities damaged survival rates. At their largest, big type pigs resembled small hippopotamuses, and sows were so corpulent they unwittingly crushed their tiny piglets. But the sleeker hot-blooded pigs had a similarly lethal relationship with their young. Sows often produced litters of upwards of a dozen piglets and the stress of tending such a large brood led overwhelmed sows to devour their own offspring (Long). American pig breeders had been forced to navigate between these two undesirable extremes, but by the 1930s, big type pigs were fading in popularity mainly because butter and newly developed plant oils were replacing lard as the cooking fat of preference in American kitchens. The remarkable propensity of the big type to pack on pounds of extra fat was more of a liability than a benefit in this period, as the prices of lard and salt pork plummeted in this decade. By the time U.S. meat packers were shipping cans of tushonka to their Soviet allies across the seas, US hog operations had already developed a strong preference for hot-blooded breeds and research had shifted to building and maintaining lean muscle on these swiftly maturing animals. When Soviet industrial planners hoping to learn how to make more tushonka entered the scene, however, their interpretation of American efficiency was hardly predictable: scientifically nourished big type pigs may have been advantageous to the United States at midcentury, but the Soviet Union’s farms and hungry citizens had a very different list of needs and wants. At midcentury, Soviet pigs were still handicapped by old-fashioned variables such as cold weather, long winters, poor farm organisation and impoverished feed regimens. 
The look of the average Soviet hog operation was hardly industrial. In 1955 the typical Soviet pig was petite, shaggy, and slow to reproduce. In the absence of robust dairy or vegetable oil industries, Soviet pigs had always been valued for their fat rather than their meat, and tushonka had been a byproduct of an industry focused mainly on supplying the country with fat and lard. Until the mid 1950s, the most valuable pig on many Soviet state and collective farms was the nondescript but very rotund “lard and bacon” pig, an inefficient eater that could take upwards of two years to reach full maturity. In searching for a way to serve up more tushonka, Soviet planners became aware that their entire industry needed to be revamped. When the Soviet Union looked to the United States, planners were inspired by the earlier competition between hot-blooded and big type pigs, which Soviet planners thought, ambitiously, they could combine into one splendid pig. The Soviet Union imported new pigs from Poland, Lithuania, East Germany and Denmark, trying valiantly to create hybrid pigs that would exhibit both hot blood and big type. Soviet planners were especially interested in inspiring the Poland-China, an especially rotund specimen, to speed up its life cycle during the mid 1950s. Hybridizing and crossbreeding a Soviet super-pig, no matter how closely laid out on paper, was probably always a socialist pipe dream. However, when the Soviets decided to try to outbreed American hog breeders, they created an infrastructure for pigs and pig breeding that had a dramatic positive impact on hog populations across the country, and the 1950s were marked by a large increase in the number of pigs in the Soviet Union, as well as dramatic increases in the numbers of purebred and scientific hybrids the country developed, all in the name of tushonka. It was not just the genetic stock that received a makeover in the postwar drive to can more tushonka; a revolution in the barnyard also took place, and in less than 10 years pigs were living in new housing stock and eating new feed sources. The most obvious postwar change was in farm layout and the use of building space. In the early 1950s, many collective farms had been consolidated. In 1940 there were a quarter of a million kolkhozii; by 1951 fewer than half that many remained (NARA RG166). Farm consolidation movements most often combined two, three or four collective farms into one economic unit, thus scaling up the average size and productivity of each collective farm and simplifying their administration. While there were originally ambitious plans to re-center farms around new “agro-city” bases with new, modern farm buildings, these projects were ultimately abandoned. Instead, existing buildings were repurposed and the several clusters of farm buildings that had once been the heart of separate villages acquired different uses. For animals this meant new barns and new daily routines. Barns were redesigned and compartmentalized around ideas of gender and age segregation—weaned baby pigs in one area, farrowing sows in another—as well as maximising growth and health. Pigs spent less outside time and more time at the trough. Pigs that were wanted for different purposes (breeding, meat and lard) were kept in different areas, isolated from each other to minimize the spread of disease as well as improve the efficiency of production. 
Much like postwar housing for humans, the new and improved pig barn was a crowded and often chaotic place where the electricity, heat and water functioned only sporadically. New barns were supposed to be mechanised. In some places, mechanisation had helped speed things along, but as one American official viewing a new mechanised pig farm in 1955 noted, “it did not appear to be a highly efficient organisation. The mechanised or automated operations, such as the preparation of hog feed, were eclipsed by the amount of hand labor which both preceded and followed the mechanised portion” (NARA RG166 1961). The American official estimated that by mechanizing, Soviet farms had actually increased the amount of human labor needed for farming operations. The other major environmental change took place away from the barnyard, in new crops the Soviet Union began to grow for fodder. The heart and soul of this project was establishing field corn as a major new fodder crop. Originally intended as a feed for cows that would replace hay, corn quickly became the feed of choice for raising pigs. After a visit by a United States delegation to Iowa and other U.S. farms over the summer of 1955, corn became the centerpiece of Khrushchev’s efforts to raise meat and milk productivity. These efforts were what earned Khrushchev his nickname of kukuruznik, or “corn fanatic.” Since so little of the Soviet Union looks or feels much like the plains and hills of Iowa, adopting corn might seem quixotic, but raising corn was a potentially practical move for a cold country. Unlike the other major fodder crops of turnips and potatoes, corn could be harvested early, while still green but already possessing a high level of protein. Corn provided a “gap month” of green feed during July and August, when grazing animals had eaten the first spring green growth but these same plants had not recovered their biomass. What corn remained in the fields in late summer was harvested and made into silage, and corn made the best silage that had been historically available in the Soviet Union. The high protein content of even silage made from green mass and unripe corn ears prevented them from losing weight in the winter. Thus the desire to put more meat on Soviet tables—a desire first prompted by American food donations of surplus pork from Iowa farmers adapting to agro-industrial reordering in their own country—pushed back into the commodity supply network of the Soviet Union. World War II rations that were well adapted to the uncertainty and poor infrastructure not just of war but also of peacetime were a source of inspiration for Soviet planners striving to improve the diets of citizens. To do this, they purchased and bred more and better animals, inventing breeds and paying attention, for the first time, to the efficiency and speed with which these animals were ready to become meat. Reinventing Soviet pigs pushed even back farther, and inspired agricultural economists and state planners to embrace new farm organizational structures. Pigs meant for the tushonka can spent more time inside eating, and led their lives in a rigid compartmentalization that mimicked emerging trends in human urban society. Beyond the barnyard, a new concern with feed-to weight conversions led agriculturalists to seek new crops; crops like corn that were costly to grow but were a perfect food for a pig destined for a tushonka tin. Thus in Soviet industrialization, pigs evolved. 
No longer simply recyclers of human waste, socialist pigs were consumers in their own right, their newly crafted genetic compositions demanded ever more technical feed sources in order to maximize their own productivity. Food is transformative, and in this case study the prosaic substance of canned meat proved to be unusually transformative for the history of the Soviet Union. In its early history it kept soldiers alive long enough to win an important war, later the requirements for its manufacture re-prioritized muscle tissue over fat tissue in the disassembly of carcasses. This transformative influence reached backwards into the supply lines and farms of the Soviet Union, revolutionizing the scale and goals of farming and meat packing for the Soviet food industry, as well as the relationship between the pig and the consumer. References Bentley, Amy. Eating for Victory: Food Rationing and the Politics of Domesticity. Where: University of Illinois Press, 1998. The Book of Healthy and Delicious Food, Kniga O Vkusnoi I Zdorovoi Pishche. Moscow: AMN Izd., 1952. 161. Danilov, M. M. Tovaravedenie Prodovol’stvennykh Tovarov: Miaso I Miasnye Tovarye. Moscow: Iz. Ekonomika, 1964. Khrushchev, Nikita. Khrushchev Remembers. New York: Little, Brown & Company, 1970. 178. Long, James. The Book of the Pig. London: Upcott Gill, 1886. 102. Lush, Jay & A.L. Anderson, “A Genetic History of Poland-China Swine: I—Early Breed History: The ‘Hot Blood’ versus the ‘Big Type’” Journal of Heredity 30.4 (1939): 149-56. Martel, Leon. Lend-Lease, Loans, and the Coming of the Cold War: A Study of the Implementation of Foreign Policy. Boulder: Westview Press, 1979. 35. National Archive and Records Administration (NARA). RG 59, General Records of the Department of State. Office of Soviet Union affairs, Box 6. “Records relating to Lend Lease with the USSR 1941-1952”. National Archive and Records Administration (NARA). RG166, Records of the Foreign Agricultural Service. Narrative reports 1940-1954. USSR Cotton-USSR Foreign trade. Box 64, Folder “farm management”. Report written by David V Kelly, 6 Apr. 1951. National Archive and Records Administration (NARA). RG 166, Records of the Foreign Agricultural Service. Narrative Reports 1955-1961. Folder: “Agriculture” “Visits to Soviet agricultural installations,” 15 Nov. 1961. Skeoch, L.A. Food Prices and Ration Scale in the Ukraine, 1946 The Review of Economics and Statistics 35.3 (Aug. 1953), 229-35. State Archive of the Russian Federation (GARF). Fond R-7021. The Report of Extraordinary Special State Commission on Wartime Losses Resulting from the German-Fascist Occupation cites the following losses in the German takeover. 1948. Stettinus, Edward R. Jr. Lend-Lease: Weapon for Victory. Penguin Books, 1944.
APA, Harvard, Vancouver, ISO, and other styles
50

Tseng, Emy, and Kyle Eischen. "The Geography of Cyberspace." M/C Journal 6, no. 4 (2003). http://dx.doi.org/10.5204/mcj.2224.

Full text
Abstract:
The Virtual and the Physical The structure of virtual space is a product of the Internet’s geography and technology. Debates around the nature of the virtual — culture, society, economy — often leave out this connection to “fibre”, to where and how we are physically linked to each other. Rather than signaling the “end of geography” (Mitchell 1999), the Internet reinforces its importance with “real world” physical and logical constraints shaping the geography of cyberspace. To contest the nature of the virtual world requires understanding and contesting the nature of the Internet’s architecture in the physical world. The Internet is built on a physical entity – the telecommunications networks that connect computers around the world. In order to participate on the Internet, one needs to connect to these networks. In an information society access to bandwidth determines the haves from the have-nots (Mitchell 1999), and bandwidth depends upon your location and economics. Increasingly, the new generation Internet distributes bandwidth unevenly amongst regions, cities, organizations, and individuals. The speed, type, size and quality of these networks determine the level and nature of participation available to communities. Yet these types of choices, the physical and technical aspects of the network, are the ones least understood, contested and linked to “real world” realities. The Technical is the Political Recently, the US government proposed a Total Information Awareness surveillance system for all digital communications nationally. While technically unworkable on multiple fronts, many believed that the architecture of the Internet simply prevented such data collection, because no physical access points exist through which all data flows. In reality, North America does have central access points – six to be exact – through which all data moves because it is physically impossible to create redundant systems. This simple factor of geography potentially shapes policies on speech, privacy, terrorism, and government-business relations to name just a few. These are not new issues or challenges, but merely new technologies. The geography of infrastructure – from electricity, train and telephone networks to the architectures of freeways, cities and buildings – has always been as much social and political as technical. The technology and the social norms embedded in the network geography (Eischen, 2002) are central to the nature of cyberspace. We may wish for a utopian vision, but the hidden social assumptions in mundane ‘engineering’ questions like the location of fibre or bandwidth quality will shape virtual world. The Changing Landscape of the Internet The original Internet infrastructure is being redesigned and rebuilt. The massive fibre-optic networks of the Internet backbones have been upgraded, and broadband access technologies – cable modem, Digital Subscriber Line (DSL) and now wireless Wi-Fi – are being installed closer to homes and businesses. New network technologies and protocols enable the network to serve up data even faster than before. However, the next generation Internet architecture is quite different from the popular utopian vision described above. The Internet is being restructured as an entertainment and commerce medium, driven by the convergence of telecommunications technologies and commercialization. 
It is moving towards a broadcast model where individual consumers have access to less upstream bandwidth than downstream, with the symmetry of vendor and customer redesigned and built to favor content depending on who provides, requests and receives content. This Internet infrastructure has both physical and logical components – the telecommunications networks that comprise the physical infrastructure and the protocols that comprise the logical infrastructure of the software that runs the Internet. We are in the process of partitioning this infrastructure, both physical and logical, into information conduits of different speeds and sizes. Access to these conduits depends on who and where you are. These emerging Internet infrastructure technologies – Broadband Access Networks, Caching and Content Delivery Networks, Quality of Service and Policy Protocols – are shaped by geographical, economic and social factors in their development, deployment and use. The Geography of Broadband These new broadband networks are being deployed initially in more privileged, densely populated communities in primary cities and their wealthy suburbs (Graham, 2000). Even though many have touted the potential of Wi-Fi networks to bring broadband to underserved areas, initial mappings of wireless deployment show correlation between income and location of hotspots (NYCWireless, 2003). Equally important, the most commonly deployed broadband technologies, cable modem and ADSL, follow a broadcast model by offering more downstream bandwidth than upstream bandwidth. Some cable companies limit upstream bandwidth even further to 256 Kbps in order to discourage subscribers from setting up home servers. The asymmetry of bandwidth leads to asymmetry of information flows where corporations produce information and users content. Internet Infrastructure: Toll Roads and the Priority of Packets The Internet originally was designed around ‘best effort’ service: data flows through the networks as packets, and all packets are treated equally. The TCP/IP protocols that comprise the Internet’s logical infrastructure (Lessig, 101) govern how data is transferred across the physical networks. In the Internet’s original design, each packet is routed to the best path known, with the transport quality level dependent on network conditions. However, network congestion and differing content locations lead to inconsistent levels of quality. In order to overcome Internet “bottlenecks”, technologies such as content caching and Quality of Service (QoS) protocols have been developed that allow large corporate customers to bypass the public infrastructure, partitioning the Internet into publicly and privately accessible data conduits or throughways. Since access is based on payment, these private throughways can be thought of as the new toll roads of the Internet. Companies such as Akamai are deploying private ‘content delivery’ networks. These networks replicate and store content in geographically dispersed servers close to the end users, reducing the distance content data needs to traverse. Large content providers pay these companies to store and serve their content on these networks. Internet Service Providers (ISPs) offer similar services for internal or hosted content. The Internet’s physical infrastructure consists of a system of interconnected networks. The major ISPs’ networks interconnect at Network Access Point (NAPs) the major intersections of the Internet backbone. 
The Internet's physical infrastructure consists of a system of interconnected networks. The major ISPs' networks interconnect at Network Access Points (NAPs), the major intersections of the Internet backbone. Congestion at these public intersection points has resulted in InterNAP building and deploying private network access points (P-NAPs). The deployment maps of Akamai's content delivery network (Akamai, 2000) and InterNAP's P-NAPs (InterNAP, 2000) reveal private infrastructure concentrated in a select group of highly connected U.S. cities (Moss & Townsend, 2000), furthering the advantage these "global cities" (Graham, 1999) have over other cities and regions.

QoS protocols allow ISPs to define differing levels of service by giving preferential treatment to some portion of the network traffic. Smart routers, or policy routers, enable network providers to define policies for the treatment of data packets. These routers can discriminate between packets and prioritize their handling based on destination, source, the ISP, the type of content carried, and so on. Such protocols and policies represent a departure from the original peer-to-peer architecture of "best effort" data equality. The ability to discriminate between and prioritize data traffic is being built into the system, and economic and even political factors can shape the way packets and content flow through the network. For example, during the war on Iraq, Akamai Technologies canceled its service contract with the Arabic news service Al Jazeera (CNET, 2003).
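What policy routing means in practice can be sketched with a toy example. The rules below are hypothetical, not any vendor's actual configuration; the point is simply that the forwarding order is decided by who sent the packet and what it carries rather than by arrival time.

    import heapq
    from dataclasses import dataclass
    from itertools import count

    @dataclass
    class Packet:
        source: str
        content_type: str
        payload: bytes = b""

    # Hypothetical policy: a paying content provider's traffic jumps the queue,
    # bulk video is pushed to the back, everything else gets ordinary treatment.
    PREMIUM_SOURCES = {"big-content-provider.example"}

    def priority(packet):
        if packet.source in PREMIUM_SOURCES:
            return 0   # forwarded first
        if packet.content_type == "video":
            return 2   # deprioritized bulk traffic
        return 1       # ordinary "best effort" traffic

    class PolicyRouter:
        def __init__(self):
            self._queue = []
            self._arrival = count()  # tie-breaker keeps arrival order within a class

        def receive(self, packet):
            heapq.heappush(self._queue, (priority(packet), next(self._arrival), packet))

        def forward_next(self):
            return heapq.heappop(self._queue)[-1]

Under the original best-effort design the router would forward packets strictly in the order they arrived; here the queue itself encodes a commercial policy, and the same mechanism could just as easily encode a political one.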
Technology, Choices and Values

To address the social choices underpinning the seemingly benign technical decisions of the next generation Internet, we need to understand the economic, geographic and social factors guiding its design and deployment. Just as the current architecture of the Internet reflects the values of its original creators, this next generation Internet will reflect our choices and our values. The reality is that decisions with very long-term impacts will be made with or without debate. If any utopian vision of the Internet is to survive, it is crucial to embed the new architectures with specific values by asking difficult questions with no pre-defined or easy answers. These are questions that require social debate and consensus. Is the Internet fundamentally a public or a private space? Who will have access? What information, and whose information, will be accessible? Which values, and whose values, should form the basis of the new infrastructure? Should its construction be subject to market forces alone, or should ideas of social equity and fairness be embedded in the technology?

Technologists, policymakers (at both national and local levels), researchers and the general public all have a part in determining the answers to these questions. Policymakers need to link future competition and innovation with equitable access for all citizens. Urban planners and local governments need to link infrastructure, economic sustainability and equity through public and public-private investments – especially in traditionally marginalized areas. Researchers need to continue mapping the complex interactions of investment in and deployment of infrastructure across the disciplines of economics, technology and urban planning. Technologists need to consider the societal implications of the technologies they build and to inform the policy debates around them. Communities need to link technical issues with local ramifications, contesting and advocating with policymakers and corporations. The ultimate geography of cyberspace will reflect the geography of fibre. Understanding and contesting the present and future reality requires linking mundane technical questions to questions of values in exactly these wider social and political debates.

Works Cited

Akamai. <http://www.akamai.com/service/network.php>

Eischen, Kyle. "The Social Impact of Informational Production: Software Development as an Informational Practice." Center for Global, International and Regional Studies Working Paper #2002-1. UC Santa Cruz, 2002. <http://cgirs.ucsc.edu/publications/workingpapers/>

Graham, Stephen. "Global Grids of Glass: On Global Cities, Telecommunications and Planetary Urban Networks." Urban Studies 36.5-6 (1999).

Graham, Stephen. "Constructing Premium Network Spaces: Reflections on Infrastructure Networks and Contemporary Urban Development." International Journal of Urban and Regional Research 24.1 (2000).

InterNAP. <http://www.internap.com/html/news_05022000.htm>

Junnarkar, Sandeep. "Akamai Ends Al-Jazeera Server Support." CNET News.com, 4 April 2003. <http://news.com.com/1200-1035-995546.php>

Lessig, Lawrence. Code and Other Laws of Cyberspace. New York: Basic Books, 1999.

Mitchell, William. City of Bits. Cambridge, MA: MIT Press, 1999.

Moss, Mitchell L., and Anthony M. Townsend. "The Internet Backbone and the American Metropolis." The Information Society 16.1 (2000): 35-47. <http://www.informationcity.org/research/internet-backbone-american-metropolis/index.htm>

Public Internet Project. "802.11b Survey of NYC." <http://www.publicinternetproject.org/>

Citation reference for this article

MLA Style
Tseng, Emy, and Kyle Eischen. "The Geography of Cyberspace." M/C: A Journal of Media and Culture 6 (2003). <http://www.media-culture.org.au/0308/03-geography.php>

APA Style
Tseng, E., & Eischen, K. (2003, Aug 26). The Geography of Cyberspace. M/C: A Journal of Media and Culture, 6. <http://www.media-culture.org.au/0308/03-geography.php>
APA, Harvard, Vancouver, ISO, and other styles
