To see the other types of publications on this topic, follow the link: Brief and full account of Mr.

Journal articles on the topic 'Brief and full account of Mr'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the top 50 journal articles for your research on the topic 'Brief and full account of Mr.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles in a wide variety of disciplines and organise your bibliography correctly.

1

Bell, Stuart. "The Novel Theology of H. G. Wells." Journal for the History of Modern Theology / Zeitschrift für Neuere Theologiegeschichte 26, no. 2 (October 25, 2019): 104–23. http://dx.doi.org/10.1515/znth-2019-0018.

Full text
Abstract:
“Lambeth Palace is my Washpot. Over Fulham have I cast my breeches.” So declared the novelist and secularist H. G. Wells in a letter to his mistress, Rebecca West, in May 1917. His claim was that, because of him, Britain was “full of theological discussion” and theological books were “selling like hot cakes”. He was lunching with liberal churchmen and dining with bishops. Certainly, the first of the books published during Wells’s short “religious period”, the novel Mr. Britling Sees It Through, had sold very well on both sides of the Atlantic and made Wells financially secure. Geoffrey Studdert Kennedy (“Woodbine Willie”) wrote that, “Everyone ought to read Mr. H. G. Wells’s great novel, Mr. Britling Sees It Through. It is a gallant and illuminating attempt to state the question, and to answer it. His thought has brought him to a very real and living faith in God revealed in Jesus Christ, and has also brought relief to many troubled minds among the officers of the British Army.” Yet, Wells’s God was explicitly a finite God, and his theology was far from orthodox. How can we account for his boast and for the clerical affirmation which he certainly did receive? This article examines and re-evaluates previous accounts of the responses of clergy to Wells’s writing, correcting some narratives. It discusses the way in which many clergy used Mr. Britling as a means by which to engage in a populist way with the question of theodicy, and examines the letters which Wells received from several prominent clerics, locating their responses in the context of their own theological writings. This is shown to be key to understanding the reaction of writers such as Studdert Kennedy to Mr. Britling Sees It Through. Finally, an assessment is made of the veracity of Wells’s boasting to his mistress, concluding that his claims were somewhat exaggerated.

“Lambeth Palace is my Washpot, Over Fulham have I cast my breeches.” With these words the exceptionally successful, decidedly secular-minded and church-critical writer and science-fiction pioneer Herbert George Wells declared to his mistress that, because of him, Britain was “full of theological discussion”. Not without vanity, he credited his strongly autobiographical novel of nearly 450 pages, Mr. Britling Sees It Through, written in September 1916 with the war in view, with the fact that theological books were selling briskly. He was also proud to lunch with liberal clerics and to be invited to dinner by bishops. For a short phase of his life Wells was – or presented himself as – a pious, believing man. The novel he published at that time, Mr. Britling Sees It Through, sold so well both in North America and at home that the author was now definitively financially secure. The Anglican priest and poet Geoffrey Studdert Kennedy, known in the First World War as Woodbine Willie because he offered Woodbine cigarettes to wounded and dying soldiers as they prepared for death, recommended Wells’s “great novel” Mr. Britling with the words: “It is a gallant and illuminating attempt to state the question, and to answer it. His thought has brought him to a very real and living faith in God revealed in Jesus Christ, and has also brought relief to many troubled minds among the officers of the British Army.” Yet H. G. Wells’s God was a decidedly finite God, and his theology was anything but orthodox. How, then, are his evident boasting and the emphatic approval of his novel among the British clerical elites to be explained? The article first examines some older interpretations of leading clerics’ approval of Wells’s novel and calls into question some of the assumptions guiding them. It becomes clear that quite a few Anglican clergy used Mr. Britling to address the contested problem of theodicy in a highly populist manner. The letters of prominent clerics to Wells are also analysed with reference to their own publications. These reactions strongly influenced Studdert Kennedy’s attitude to Mr. Britling Sees It Through. Wells, however, was not especially candid about himself: his self-dramatisation before his mistress was simply embarrassing exaggeration.
APA, Harvard, Vancouver, ISO, and other styles
2

Haridas, Rajesh P., and Peter J. Stanbury. "A Brief Account of Mr Valentine Greatraks." Anaesthesia and Intensive Care 47, no. 3_suppl (September 2019): 44–45. http://dx.doi.org/10.1177/0310057x19854451.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Jha, R. K., S. Bharal, and B. Saha. "A Brief Account of Two Decades of Farmer Field School Implementation in Nepal." Journal of the Plant Protection Society 6 (December 1, 2020): 65–81. http://dx.doi.org/10.3126/jpps.v6i0.36473.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Ahmad, I., S. Kirimani, M. Rashid, and K. Ahmad. "MR Imaging of the Adnexal Masses: A Review." Nepalese Journal of Radiology 1, no. 1 (June 16, 2012): 54–60. http://dx.doi.org/10.3126/njr.v1i1.6326.

Full text
Abstract:
MR (magnetic resonance) imaging is a non-invasive technique for the evaluation of female pelvic masses. Due to its high spatial resolution and excellent tissue contrast, various masses of adnexal origin can be imaged and a confident diagnosis can be made. MRI helps to delineate normal anatomical structures and elucidate pathological lesions. It has high sensitivity and specificity for differentiating benign adnexal masses from malignant ones. This review article gives a brief account of the approach to adnexal masses based on tissue characterization on MR imaging.
APA, Harvard, Vancouver, ISO, and other styles
5

Pletcher, R. H. "Progress in Turbulent Forced Convection." Journal of Heat Transfer 110, no. 4b (November 1, 1988): 1129–44. http://dx.doi.org/10.1115/1.3250615.

Full text
Abstract:
This paper presents a brief account of some recent progress toward the understanding and prediction of turbulent forced convection. The impact of technological advances in electronics and optical methods is pointed out. Coverage includes observations on structure, measurement techniques, experimental results, numerical strategies, turbulence modeling, and large eddy and full simulation.
APA, Harvard, Vancouver, ISO, and other styles
6

Fishman, Marlene, Alan F. Cathers, and Deborah Stamp. "Brief Report: Needle Punctures—Documentation and Incidence Rate Calculation." Infection Control 6, no. 1 (January 1985): 35–36. http://dx.doi.org/10.1017/s0195941700062470.

Full text
Abstract:
Potential hazards of puncture wounds have been well-defined and include transmission of hepatitis B virus, acquired immunodeficiency syndrome, syphilis, malaria, and other infectious diseases. Yet, standard methodology has not been used for statistical comparison. Attack rates have been expressed as needlesticks per full-time equivalents, needlesticks per employee per year, punctures per number of personnel, or punctures per number of hospital beds. These calculations do not account for the amount of time during which an employee is at risk of receiving a needle puncture. Also, numbers alone cannot account for intensity of care, potential exposures, or hours at risk. Nor can numbers provide an estimate of potential risk. A rate is more valuable than numbers because it measures the probability of occurrence. A meaningful incidence rate would be based on uniform data collection and would provide the number of puncture wounds per year for a standardized work period. This is similar in concept to nosocomial infections per patient-days of exposure. We propose the application of standard labor statistics methodology which accounts for man-hours worked, can be readily obtained in health care facilities, and can be modified as described here.
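A rate of the kind the authors advocate is straightforward once hours worked are known. The sketch below uses hypothetical figures and assumes the common labor-statistics convention of 200,000 hours (100 full-time workers for one year) as the standardized work period; the abstract does not state the exact base the authors use.

```python
# Illustrative only: needle-puncture incidence per standardized work period,
# assuming the common labor-statistics base of 200,000 hours
# (100 full-time workers x 2,000 hours/year). All figures are hypothetical.

def puncture_incidence_rate(punctures: int, hours_worked: float,
                            base_hours: float = 200_000.0) -> float:
    """Punctures per `base_hours` of employee time at risk."""
    return punctures * base_hours / hours_worked

# Example: 42 documented punctures over 1.6 million nursing hours in a year.
rate = puncture_incidence_rate(punctures=42, hours_worked=1_600_000)
print(f"{rate:.2f} punctures per 100 FTE-years")  # -> 5.25
```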
APA, Harvard, Vancouver, ISO, and other styles
7

Mann, Ruth E., and Stephen Rollnick. "Motivational Interviewing with a Sex Offender Who Believed He Was Innocent." Behavioural and Cognitive Psychotherapy 24, no. 2 (April 1996): 127–34. http://dx.doi.org/10.1017/s1352465800017392.

Full text
Abstract:
Motivational Interviewing (Miller, 1983; Miller and Rollnick, 1991) is an approach originally developed for problem drinkers but assumed to have wider applications. This paper describes one such application through the case of Mr D, an imprisoned sex offender who was identified under the procedures of the Prison Service Sex Offender Treatment Programme. Mr D was convicted of rape but did not believe that he had committed an offence, although he admitted having had sexual intercourse with the complainant. A full assessment of his offending suggested that he had made cognitive and behavioural errors prior to the act of intercourse and so motivational interviewing was employed to help him decide whether or not to participate in the treatment programme. As a result he decided that he would attend a treatment group. The case study concludes with a brief description of his progress whilst in the group and summarizes the results of the follow-up assessment. The application of motivational interviewing to this particular client group is discussed.
APA, Harvard, Vancouver, ISO, and other styles
8

Jonas, Saran, Giacinto Grieco, Robert Norman, Surah Grumet, and Ilan Kedan. "Mortality, ethnicity, and education in an occupational cohort." Ethnicity and Inequalities in Health and Social Care 7, no. 3 (September 9, 2014): 137–45. http://dx.doi.org/10.1108/eihsc-11-2013-0050.

Full text
Abstract:
Purpose – The purpose of this paper is to investigate the relationship between occupational degree requirement and mortality between ethnic groups in a cohort of urban workers. Design/methodology/approach – The study included 118,606 health-insured full-time workers from the New York City Health and Hospitals Corporation (HHC). Mortality rates (MR) and mortality rate ratios (MRR) were calculated for major ethnic categories. Estimates were adjusted for age, sex, and occupational degree requirement. Findings – Prior to adjustment for degree requirement, mortality rates (MRs) by ethnic groups in the Health and Hospitals Corporation were in line with national estimates: highest for blacks, followed by whites, Hispanics, and Asian/Pacific Islander (APIs). After adjustment, the MR for blacks became comparable to whites (mortality rate ratio (MRR)=1.02). The low-Hispanic MR did not change; the Hispanic advantage persisted (MRR=0.66), as did the API advantage (MRR=0.50). Research limitations/implications – Higher education may not substantially change the MR for Hispanics, and it may only account for a portion of the survival advantage among APIs. The findings also suggest that without reducing the disparity in higher education attainment between blacks and whites, equality in other socioeconomic factors may not abolish the disparity in mortality between these groups. Originality/value – This study bypassed common limitations of ethnic mortality studies, with intrinsic parity for certain socio-economic status factors (full-time employment and health care access) across cohort members and consistent ethnic classification across time-points. This includes a cohort of API workers with complete self-identification of ethnicity, which has not been accomplished by previous investigations.
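For orientation, the crude quantities behind the reported MR and MRR values can be computed as below; the figures are hypothetical, and the study's adjustment for age, sex, and occupational degree requirement is not reproduced here.

```python
# Crude mortality rate (MR) and mortality rate ratio (MRR) for two groups.
# Data are hypothetical; the paper's age/sex/degree-requirement adjustment
# (e.g., via standardization or regression) is not shown.

def mortality_rate(deaths: int, person_years: float, per: float = 100_000.0) -> float:
    """Deaths per `per` person-years of follow-up."""
    return deaths / person_years * per

mr_group = mortality_rate(deaths=240, person_years=80_000)      # 300.0
mr_reference = mortality_rate(deaths=180, person_years=60_000)  # 300.0
mrr = mr_group / mr_reference                                   # 1.00
print(f"MR(group) = {mr_group:.1f}, MR(ref) = {mr_reference:.1f}, MRR = {mrr:.2f}")
```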
APA, Harvard, Vancouver, ISO, and other styles
9

Sun, Bingqiang, George Kattawar, Ping Yang, and Xiaodong Zhang. "A Brief Review of Mueller Matrix Calculations Associated with Oceanic Particles." Applied Sciences 8, no. 12 (December 19, 2018): 2686. http://dx.doi.org/10.3390/app8122686.

Full text
Abstract:
The complete Stokes vector contains much more information than the radiance of light for the remote sensing of the ocean. Unlike the conventional radiance-only radiative transfer simulations, a full Mueller matrix-Stokes vector treatment provides a rigorous and correct approach for solving the transfer of radiation in a scattering medium, such as the atmosphere-ocean system. In fact, radiative transfer simulation without considering the polarization state always gives incorrect results, and the extent of the errors induced depends on the particular application being considered. However, the rigorous approach that fully takes the polarization state into account requires the knowledge of the complete single-scattering properties of oceanic particles with various sizes, morphologies, and refractive indices. For most oceanic particles, the comparisons between simulations and observations have demonstrated that the “equivalent-spherical” approximation is inadequate. We will therefore briefly summarize the advantages and disadvantages of a number of light scattering methods for non-spherical particles. Furthermore, examples for canonical cases with specifically oriented particles and randomly oriented particles will be illustrated.
APA, Harvard, Vancouver, ISO, and other styles
10

Huang, Qiang, Yinglei Yu, Tian Wen, Jianwei Zhang, Zhangjing Yang, Fanlong Zhang, and Hui Zhang. "Segmentation of Brain MR Image Using Modified Student’s t-Mixture Model." Journal of Medical Imaging and Health Informatics 11, no. 10 (October 1, 2021): 2683–94. http://dx.doi.org/10.1166/jmihi.2021.3860.

Full text
Abstract:
In conventional brain image analysis, it is a critical step to segment the brain magnetic resonance (MR) image into three major tissues: Gray Matter (GM), White Matter (WM) and Cerebrospinal Fluid (CSF). The main difficulties in segmenting brain MR images are the partial volume effect, intensity inhomogeneity and noise, which result in a challenging segmentation task. In this paper, we propose a novel modified method, based on the conventional Student’s t-Mixture Model (SMM), for segmentation of brain MR images and correction of the bias field. The advantages of our model are as follows. First, we take into account the influence on the probabilities of the pixels in the adjacent region and take full advantage of the local spatial information and class information. Second, our modified SMM is derived from the traditional finite mixture model (FMM) by adding the bias field correction model; the logarithmic likelihood function of the traditional FMM is revised. Third, the noise and bias field models can easily be combined with the SMM and the EM algorithm. Last but not least, exponential coefficients are employed to control the segmentation details. As a result, our effective and highly accurate method exhibits high robustness on both simulated and real MR image segmentation, compared to the state-of-the-art algorithms.
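The paper's modified Student's t-mixture with bias-field correction is not reproduced here, but the baseline finite-mixture idea it builds on can be sketched with an ordinary Gaussian mixture over voxel intensities. The file name, the use of nibabel for I/O, and the crude mask below are illustrative assumptions only.

```python
# Minimal finite-mixture segmentation of a brain MR volume into three classes.
# This is a plain Gaussian mixture on voxel intensities, i.e. the baseline FMM
# family the paper modifies; it has no spatial smoothing, Student's-t
# robustness, or bias-field correction. File name and mask are hypothetical.
import numpy as np
import nibabel as nib                      # common MR I/O library (assumed available)
from sklearn.mixture import GaussianMixture

img = nib.load("t1_volume.nii.gz")         # hypothetical T1-weighted volume
data = img.get_fdata()
mask = data > 0                            # crude brain mask: nonzero voxels
intensities = data[mask].reshape(-1, 1)

gmm = GaussianMixture(n_components=3, covariance_type="full", random_state=0)
labels = gmm.fit_predict(intensities)      # 0/1/2 (order arbitrary; sort component
                                           # means to map to CSF/GM/WM)

segmentation = np.zeros(data.shape, dtype=np.int8)
segmentation[mask] = labels + 1            # 0 = background
```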
APA, Harvard, Vancouver, ISO, and other styles
11

Wilson, Peter, James Morgan, John W. Funder, Peter J. Fuller, and Morag J. Young. "Mediators of mineralocorticoid receptor-induced profibrotic inflammatory responses in the heart." Clinical Science 116, no. 9 (April 2, 2009): 731–39. http://dx.doi.org/10.1042/cs20080247.

Full text
Abstract:
Coronary, vascular and perivascular inflammation in rats following MR (mineralocorticoid receptor) activation plus salt are well-characterized precursors for the appearance of cardiac fibrosis. Endogenous corticosterone, in the presence of the 11βHSD2 (11β hydroxysteroid dehydrogenase type 2) inhibitor CBX (carbenoxolone) plus salt, produces similar inflammatory responses and tissue remodelling via activation of MR. MR-mediated oxidative stress has previously been suggested to account for these responses. In the present study we thus postulated that when 11βHSD2 is inhibited, endogenous corticosterone bound to unprotected MR in the vessel wall may similarly increase early biomarkers of oxidative stress. Uninephrectomized rats received either DOC (deoxycorticosterone), CBX or CBX plus the MR antagonist EPL (eplerenone) together with 0.9% saline to drink for 4, 8 or 16 days. Uninephrectomized rats maintained on 0.9% saline for 8 days served as controls. After 4 days, both DOC and CBX increased both macrophage infiltration and mRNA expression of the p22phox subunit of NADPH oxidase, whereas CBX, but not DOC, increased expression of the NOX2 (gp91phox) subunit. eNOS [endothelial NOS (NO synthase)] mRNA expression significantly decreased from 4 days for both treatments, and iNOS (inducible NOS) mRNA levels increased after 16 days of DOC or CBX; co-administration of EPL inhibited all responses to CBX. The responses characterized over this time course occurred before measurable increases in cardiac hypertrophy or fibrosis. The findings of the present study support the hypothesis that endogenous corticosterone in the presence of CBX can activate vascular MR to produce both inflammatory and oxidative tissue responses well before the onset of fibrosis, that the two MR ligands induce differential but overlapping patterns of gene expression, and that elevation of NOX2 subunit levels does not appear necessary for full expression of MR-mediated inflammatory and fibrogenic responses.
APA, Harvard, Vancouver, ISO, and other styles
12

Mannan, Sabira K., Keith H. Ruddock, and David S. Wooding. "Fixation Patterns Made during Brief Examination of Two-Dimensional Images." Perception 26, no. 8 (August 1997): 1059–72. http://dx.doi.org/10.1068/p261059.

Full text
Abstract:
Measurements were carried out of saccadic eye movements made during brief (3 s) examination of images which the observer was asked to identify. Each image was identified in three forms: low-pass filtered, high-pass filtered, and unfiltered. The analysis of the eye-movement patterns was based on the locations of fixations made during examination of the images, for which purpose a least-squares measure of similarity between two sets of locations was introduced. It is shown that there is a high degree of similarity between fixations made by the same observer to the different versions of a given image and that for a given image there is a high degree of similarity between fixations made by the eighteen observers who participated in the experiments. The similarities are greater for the initial 1.5 s than for the full viewing period of 3 s. The similarity between the locations of fixations and those of selected image features such as local contrast, high-spatial-frequency content, and edge density was also examined. It is shown that there is only weak similarity between the locations of fixations and those of any given local image feature, and the tendency of observers to fixate centrally on the image is identified as the principal reason for the low similarity values. It is shown that if the nonuniform distribution of eye movements is taken into account, significant similarities are found between the locations of fixations and those of certain image features, such as edge density.
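The abstract does not give the formula for the least-squares similarity measure the authors introduce; purely as an illustration, a stand-in with the same flavour is the symmetrized mean squared nearest-neighbour distance between two sets of fixation coordinates.

```python
# Illustrative (dis)similarity measure between two sets of fixation locations.
# The paper defines its own least-squares index; this symmetrized mean squared
# nearest-neighbour distance is only a stand-in with a similar flavour.
import numpy as np

def fixation_dissimilarity(a: np.ndarray, b: np.ndarray) -> float:
    """a, b: (n, 2) and (m, 2) arrays of fixation coordinates (e.g. pixels).
    Returns a symmetrized mean squared nearest-neighbour distance
    (0 means identical locations; larger means less similar)."""
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(axis=2)  # pairwise squared distances
    return 0.5 * (d2.min(axis=1).mean() + d2.min(axis=0).mean())

a = np.array([[100, 120], [250, 300], [400, 180]], dtype=float)
b = np.array([[105, 118], [255, 310], [390, 175], [50, 60]], dtype=float)
print(fixation_dissimilarity(a, b))
```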
APA, Harvard, Vancouver, ISO, and other styles
13

Löffler, Johannes Ludwig. "From Archangels to Virtual Pilgrims: A Brief History of Papal Digital Mobilization as Soft Power." Religions 12, no. 8 (August 18, 2021): 657. http://dx.doi.org/10.3390/rel12080657.

Full text
Abstract:
The perpetual public display of successful mass mobilization and pilgrimage has become a pillar of papal soft power. During the 20th century, the papacy had repeatedly demonstrated its ability to use new technologies for public communication, media content production and mass mobilization. John Paul II endorsed the establishment of the first Vatican website and an official papal e-mail account, which provided Catholics a new form of communication with the Holy Father. During the pontificate of Benedict XVI, the papacy created several Twitter accounts, which would become the backbone of papal digital mobilization. Francis built on the success of his predecessors as he initiated the modernization of the Holy See’s media department. However, with the growth of the Internet and the stress test of the COVID-19 pandemic, the mechanics of mobilization, pilgrimage and power have considerably changed. With the religious role of the popes taken as a given, the paper looks into the history of papal mobilization, the role of the Internet and why it is not used to its full potential yet.
APA, Harvard, Vancouver, ISO, and other styles
14

Brinkman, Paul D. "John Conrad Hansen (1869–1952) and his scientific illustrations." Archives of Natural History 45, no. 2 (October 2018): 233–44. http://dx.doi.org/10.3366/anh.2018.0516.

Full text
Abstract:
Over the course of his 14-year career at Chicago's Field Museum of Natural History, artist and engraver John Conrad Hansen rendered hundreds of beautiful and accurate scientific illustrations of animals – mostly extinct fossil vertebrates. His principal media were oil paintings, pencil, pen-and-ink and wash drawings. Many of his illustrations have been published in the scientific literature. His oil paintings, on the other hand, were made for display alongside specimens in the Field Museum's exhibits. Despite the quality of Hansen's full-colour reconstructions, few of them have been seen outside the Museum. A small, representative sample of his work is reproduced here, along with a brief account of his troubled life and career.
APA, Harvard, Vancouver, ISO, and other styles
15

Parmentier, Stephan. "A tale of two worlds: a (very) select overview of socio-legal studies in Belgium." International Journal of Law in Context 12, no. 1 (February 23, 2016): 81–97. http://dx.doi.org/10.1017/s174455231500035x.

Full text
Abstract:
Socio-legal studies in Belgium represent a diverse patchwork of many topics studied from many angles. This paper first presents a brief historical account of socio-legal studies and their organisation, in the north and south of the country. It has no ambition to give a full overview of socio-legal studies in Belgium, let alone be exhaustive. It merely focuses on the content and features of two topics that have constituted major strands of research over the last thirty years: courts and dispute processing, and public opinion about law and justice. It ends with some reflections on the nature of Belgian socio-legal research, as well as some recommendations on future orientations.
APA, Harvard, Vancouver, ISO, and other styles
16

Eaton-Evans, James, Janice M. Dulieu-Barton, Edward G. Little, and Ian A. Brown. "A New Approach to Stress Analysis of Vascular Devices Using High Resolution Thermoelastic Stress Analysis." Applied Mechanics and Materials 5-6 (October 2006): 63–70. http://dx.doi.org/10.4028/www.scientific.net/amm.5-6.63.

Full text
Abstract:
Thermoelastic Stress Analysis (TSA) is a non-contacting technique that provides full field stress information and can record high-resolution measurements from small structures. The work presented in this paper summarises the application of TSA to two types of small medical devices that are used to treat diseased arteries; angioplasty balloons and vascular stents. The use of high resolution optics is described along with a calibration methodology that allows quantitative stress measurements to be taken from the balloon structure. A brief account of a study undertaken to characterise the thermoelastic response from Nitinol is also included and it is demonstrated that thermoelastic data can be obtained from a stent at high resolutions.
APA, Harvard, Vancouver, ISO, and other styles
17

Ismail, R. "Contentious Issues Arising from Payments made in Full and Final Settlement." Potchefstroom Electronic Law Journal/Potchefstroomse Elektroniese Regsblad 11, no. 4 (July 4, 2017): 153. http://dx.doi.org/10.17159/1727-3781/2008/v11i4a2788.

Full text
Abstract:
Payments made in full and final settlement have on several occasions presented interpretative difficulties for our judiciary, as will become apparent from this case discussion: Be Bop A Lula Manufacturing & Printing v Kingtex Marketing 2008 3 SA 327 (SCA). The Supreme Court of Appeal reversed the judgments of the trial court and the appeal court (full bench of the Cape Provincial Division), which were in favour of the creditor. In such cases, the essential enquiry is whether an agreement of compromise exists. A transactio or compromise (in the form of a legal agreement) exists where the relevant parties agree to settle previously disputed or uncertain obligations. Like any other agreement, a compromise is based on the contractual rules of offer and acceptance. The first material enquiry in this case, in which the debtor delivered the cheque payment to the creditor (in full and final settlement of the account), is whether (1) an intended offer of compromise existed, or (2) the debtor merely intended to make payment towards an admitted liability. The court in the Be Bop (SCA) case came to the correct finding that an offer of compromise existed. Whilst the judgment is brief, the finding itself gives practical recognition to the principle that admission of liability for a specific amount, accompanied by payment (in full and final settlement), may still be accompanied by an intended offer of compromise, instead of merely making payment towards an admission of liability.
APA, Harvard, Vancouver, ISO, and other styles
18

Newman, Karl, and Neil Walker. "II. Justice and Home Affairs." International and Comparative Law Quarterly 47, no. 1 (January 1998): 231–38. http://dx.doi.org/10.1017/s0020589300061650.

Full text
Abstract:
The addition of “Justice and Home Affairs” (JHA) to the list of subjects covered in Current Developments reflects the growing significance of this area of European law and policy within the overall Treaty framework. In this introductory note, a brief account is given of the history of co-operation between EU member States in JHA matters, culminating in the significant changes announced in the Treaty of Amsterdam in October 1997. It is a historical record which is marked by discontinuity and institutional complexity, full justice to which would require detailed analysis. Here we confine ourselves instead to a broad-brush approach, seeking to highlight the main themes which have characterised JHA co-operation. In future notes particular areas and issues of current interest will be examined more closely.
APA, Harvard, Vancouver, ISO, and other styles
19

Holgado-Tello, Fco Pablo, Pedro J. Amor, Amaia Lasa-Aristu, Fco Javier Domínguez-Sánchez, and Begoña Delgado. "Two new brief versions of the Cognitive Emotion Regulation Questionnaire and its relationships with depression and anxiety." Anales de Psicología 34, no. 3 (August 1, 2018): 458–64. http://dx.doi.org/10.6018/analesps.34.3.306531.

Full text
Abstract:
The Cognitive Emotion Regulation Questionnaire (CERQ) (Garnefski et al., 2001) is a 36-item instrument for measuring cognitive strategies of emotional regulation. There is a brief, 18-item version that measures the same nine strategies as the full version (Garnefski and Kraaij, 2006a). The aim of this study was to develop a brief form of the CERQ, taking into account two different proposals: a 27-item and an 18-item instrument, the latter focusing solely on the assessment of the two general factors obtained in the second-order structure of the original CERQ model and identified in previous studies as adaptive strategies and less adaptive strategies. Participants in the study were 872 individuals aged 18-58 (mean 33.86, SD=8.43). The confirmatory factor analyses yielded adequate overall indices for both versions, together with satisfactory validity. In the discussion, it is argued that the 27-item version is more appropriate for the specific rating of the nine regulation strategies people employ, and we propose the 18-item version as a suitable instrument in clinical contexts for an overall rating of an individual’s cognitive emotion regulation profile. Furthermore, the criterion validity with depression and anxiety remains similar to that of the longer versions.
APA, Harvard, Vancouver, ISO, and other styles
20

Sunderland, Matthew, Philip Batterham, Natacha Carragher, Alison Calear, and Tim Slade. "Developing and Validating a Computerized Adaptive Test to Measure Broad and Specific Factors of Internalizing in a Community Sample." Assessment 26, no. 6 (May 3, 2017): 1030–45. http://dx.doi.org/10.1177/1073191117707817.

Full text
Abstract:
Highly efficient assessments that better account for comorbidity between mood and anxiety disorders (internalizing) are required to identify individuals who are most at risk of psychopathology in the community. The current study examined the efficiency and validity associated with a multidimensional computerized adaptive test (CAT) to measure broad and specific levels of internalizing psychopathology. The sample comprised 3,175 respondents to an online survey. Items from five banks (generalized anxiety, depression, obsessive–compulsive disorder, panic disorder, social anxiety disorder) were jointly calibrated using a bifactor item response theory model. Simulations indicated that an adaptive algorithm could accurately ( rs ≥ 0.90) estimate general internalizing and specific disorder scores using on average 44 items in comparison with the full 133-item bank (67% reduction in items). Scores on the CAT demonstrate convergent and divergent validity with previously validated short severity scales and could significantly differentiate cases of DSM-5 disorder. As such, the CAT validly measures both broad and specific constructs of internalizing disorders in a manner similar to the full item bank and a static brief form but with greater gains in efficiency and, therefore, a reduced degree of respondent burden.
APA, Harvard, Vancouver, ISO, and other styles
21

Schäffner, Christina. "Political Discourse Analysis from the point of view of Translation Studies." Journal of Language and Politics 3, no. 1 (May 27, 2004): 117–50. http://dx.doi.org/10.1075/jlp.3.1.09sch.

Full text
Abstract:
Political discourse very often relies on translation. Political Discourse Analysis (PDA), however, has not yet taken full account of the phenomenon of translation. This paper argues that the disciplines of Translation Studies (TS) and PDA can benefit from closer cooperation. It starts by presenting examples of authentic translations of political texts, commenting on them from the point of view of TS. These examples concern political effects caused by specific translation solutions; the processes by which information is transferred via translation to another culture; and the structure and function of equally valid texts in their respective cultures. After a brief survey of the discipline of Translation Studies, the paper concludes with outlining scope for interdisciplinary cooperation between PDA and TS. This is illustrated with reference to an awareness of product features, multilingual texts, process analysis, and the politics of translation.
APA, Harvard, Vancouver, ISO, and other styles
22

Schroeder, Ralph. "The Dangerous Myth of Populism as a Thin Ideology." Populism 3, no. 1 (February 14, 2020): 13–28. http://dx.doi.org/10.1163/25888072-02021042.

Full text
Abstract:
The idea that populism is a ‘thin ideology’—unlike other full-bodied ‘thick’ ideologies like conservatism or socialism—has come close to being an orthodoxy among populism scholars. This paper challenges that view and argues that it is at best an open question whether populism meets the criteria of a thick ideology, which should be whether it offers a comprehensive program of political change and whether it has staying power. This argument will be made by reference to three countries, the United States, Sweden and India, all of which have recently seen a populist turn. The paper first summarizes debates about populism, ideology and social change. Then it provides a brief account of populism in the three country cases and argues that their populist turns may be a coherent and lasting new departure. The paper concludes with reflections about the broader ramifications of populism as ‘thick’ versus ‘thin’.
APA, Harvard, Vancouver, ISO, and other styles
23

Hainsworth, A. H., R. A. Levis, and R. S. Eisenberg. "Origins of open-channel noise in the large potassium channel of sarcoplasmic reticulum." Journal of General Physiology 104, no. 5 (November 1, 1994): 857–83. http://dx.doi.org/10.1085/jgp.104.5.857.

Full text
Abstract:
Open-channel noise was studied in the large potassium channel of the sarcoplasmic reticulum (SR). Inside-out patches were excised directly from the SR of split skeletal muscle fibers of lobster, with lobster relaxing ringer (LRR) in bath and pipette. The power spectrum of open-channel noise is very low and approximately flat in the 100 Hz-10 kHz frequency range. At 20 degrees C, with an applied voltage of 50 mV, the mean single-channel current (i) is 9 pA (mean single-channel conductance = 180 pS) and the mean power spectral density 1.1 × 10⁻²⁹ A²/Hz. The latter increases nonlinearly with (i), showing a progressively steeper dependence as (i) increases. At 20 mV, the mean power spectral density is almost independent of (i) and approximately 1.4 times that of the Johnson noise calculated for the equivalent ideal resistor with zero net current; at 70 mV it increases approximately in proportion to (i)². The mean power spectral density has a weak temperature dependence, very similar to that of (i), and both are well described by a Q10 of 1.3 throughout the range 3-40 degrees C. Discrete ion transport events are thought to account for a significant fraction of the measured open-channel noise, probably approximately 30-50% at 50 mV. Brief interruptions of the single-channel current, due either to blockage of the open channel by an extrinsic aqueous species, or to intrinsic conformational changes in the channel molecule itself, were a possible additional source of open-channel noise. Experiments in modified bathing solutions indicate, however, that open-channel noise is not affected by any of the identified aqueous species present in LRR. In particular, magnesium ions, the species thought most likely to cause brief blockages, and calcium and hydrogen ions, have no detectable effect. This channel's openings exhibit many brief closings and substates, due to intrinsic gating of the channel. Unresolved brief full closings are calculated to make a negligible contribution (< 1%) to the measured power spectral density. The only significant source of noise due to bandwidth-limited missed events is brief, frequent 80% substates (mean duration 20 microseconds, mean frequency 1,000 s⁻¹), which account for a small part of the measured power spectral density (approximately 14%, at 50 mV, 20 degrees C). We conclude that a large fraction of the measured open-channel noise results from intrinsic conductance fluctuations, with a corner frequency higher than the resolution of our recordings, in the range 10⁴-10⁷ Hz. (ABSTRACT TRUNCATED AT 400 WORDS)
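The comparison with Johnson noise quoted above can be checked from first principles using the one-sided thermal current-noise density S_I = 4kTG of the equivalent ideal resistor; a short sketch reproduces the order of magnitude of the reported values.

```python
# Johnson (thermal) current-noise density of the equivalent ideal resistor,
# S_I = 4*k*T*G, compared with the open-channel figures quoted in the abstract.
k_B = 1.380649e-23        # Boltzmann constant, J/K
T = 293.15                # 20 degrees C in kelvin
G = 180e-12               # mean single-channel conductance, 180 pS

S_johnson = 4 * k_B * T * G            # ~2.9e-30 A^2/Hz
print(f"Johnson noise: {S_johnson:.2e} A^2/Hz")
print(f"Reported mean PSD at 50 mV: 1.1e-29 A^2/Hz "
      f"(~{1.1e-29 / S_johnson:.1f}x Johnson)")
# The ~1.4x Johnson figure quoted at 20 mV corresponds to ~4.1e-30 A^2/Hz.
```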
APA, Harvard, Vancouver, ISO, and other styles
24

Hinton, Martin David. "On Arguments from Ignorance." Informal Logic 38, no. 2 (June 1, 2018): 184–212. http://dx.doi.org/10.22329/il.v38i2.4697.

Full text
Abstract:
The purpose of this paper is twofold: to give a good account of the argument from ignorance, with a presumptive argumentation scheme, and to raise issues on the work of Walton, the nature of abduction and the concept of epistemic closure. First, I offer a brief disambiguation of how the terms 'argument from ignorance' and 'argumentum ad ignorantiam' are used. Second, I show how attempts to embellish this form of reasoning by Douglas Walton and A.J. Kreider have been unnecessary and unhelpful. Lastly, I offer a full and effective account of the argument from ignorance and discuss the lessons of the analysis.

The purpose of this article is twofold: to give a good account of the argument from ignorance, with a presumptive argumentation scheme, and to raise questions about certain aspects of Walton's work, the nature of abductive reasoning and the concept of epistemic closure. First, I offer a brief disambiguation of how the terms 'argument from ignorance' and 'argumentum ad ignorantiam' are used. Second, I show how the attempts by Douglas Walton and A.J. Kreider to embellish this form of reasoning have been neither necessary nor useful. Finally, I offer a complete and useful account of the argument from ignorance and discuss the lessons drawn from the analysis.
APA, Harvard, Vancouver, ISO, and other styles
25

Antsu, Gert. "Tech-Enabled Role Model: From Troublesome Past to Herald of Progress." Diplomatic Ukraine, no. XX (2019): 274–89. http://dx.doi.org/10.37837/2707-7683-2019-19.

Full text
Abstract:
The interview describes the state of relations between Ukraine and Estonia through the lens of the experience of Gert Antsu, Ambassador Extraordinary and Plenipotentiary of the Republic of Estonia to Ukraine. It is mentioned that Estonia has been one of the most consistent partners of Ukraine since the establishment of diplomatic relations between the two countries, both in the past and in recent years, before and after the Revolution of Dignity and the Russian aggression. Estonia undertakes a moral duty to support a friendly nation in these complex circumstances. However, even close political relations did not prevent damage to the economic sphere caused by the aggression of the Russian Federation. The article reports that Estonia's membership in the UN Security Council will not bring substantial changes to relations with Ukraine, since they are friendly and close in any case. Estonia is highly interested in supporting a stable international order, taking into account its small territory and frankly limited military potential. Estonia is dedicated to the struggle against climate change, since it faces the ever-growing effects of such change. Ten years ago, Estonia joined the EU Emissions Trading System, and Mr Ambassador participated in those negotiations. The article reports that e-governance is a positive trend among the policy priorities of Mr Zelenskyi, President of Ukraine. Mr Ambassador has no doubt that Estonia can contribute to the enhancement of the 'Trembita' system, which has become a success story of Estonian and EU assistance to Ukraine. However, to realise the project's full potential, Ukraine has to resolve several issues where Estonia's experience could be useful. The article also mentions that Mr Ambassador studied Ukrainian. It would be hard for him to study it if he were from Belgium, Portugal or Korea and did not understand the peculiarities of the Slavic languages. He has about 30 books in Ukrainian, most of which are fiction. He considers studying the language an evident sign of respect to the receiving state and a useful means of establishing friendly relations with the Ukrainian nation. Studying languages is an integral attribute of a modern person, and Mr Ambassador has no doubt that his successor will also study Ukrainian. Keywords: Estonia, interstate relations, climate change, Ukrainian language, economic relations, e-governance.
APA, Harvard, Vancouver, ISO, and other styles
26

Boyko, Lyudmila. "On combining translator training with foreign language teaching." Slovo.ru: Baltic accent 10, no. 3 (2019): 114–28. http://dx.doi.org/10.5922/2225-5346-2019-3-9.

Full text
Abstract:
The contemporary methodological landscape in translator training (TT) is dominated by competence-based principles whose epistemological roots are found in social constructivism, which asserts learners’ active participation in knowledge accrual. The paper gives a brief account of the status quo of TT and revisits the controversial issue of the appropriateness of combining TT with foreign language teaching (FLT). The author maintains that FLT may, and quite often has to, be part of a TT course, the share of the linguistic component in TT depending on the curriculum design and teaching circumstances. Centred solely on the linguistic aspect of TT, the paper proposes combining training methods that serve the purposes of both TT and FLT. TT practices aimed at developing linguistic and translational competences simultaneously are subdivided into analytical and reinforcement training techniques, the latter being the focus of this paper. The author argues that exercise-type activities beneficial for both TT and FLT can be practiced in full harmony with the competence-based student-centred teaching principles.
APA, Harvard, Vancouver, ISO, and other styles
27

Vertonghen, Jikkemien, and Marianne Dortants. "Report on the workshop "Organising, Managing and Regulating Martial Arts" during the 21st EASM conference." Revista de Artes Marciales Asiáticas 8, no. 2 (January 22, 2014): 480. http://dx.doi.org/10.18002/rama.v8i2.987.

Full text
Abstract:
The present report provides a brief account of a workshop entitled “Organising, Managing and Regulating Martial Arts” organised during the 21st EASM conference held in Istanbul (Turkey) on September 12th, 2013. It was the first scientific workshop with regard to the organisational and policy related aspects of (full contact) martial arts. During this international meeting four scientists described in-depth the recent history and current situation regarding the organisation and regulation of martial arts in their country (i.e., France, Flanders (Belgium), Italy and the Netherlands). The workshop was a unique meeting which provided a good opportunity to obtain a better understanding of the specific situation with regard to the regulation of martial arts in some European countries and to exchange results of current research concerning this topic. Further research could be helpful to gain more insight in dealing with problems related to governance, regulation and management of martial arts within a European context.
APA, Harvard, Vancouver, ISO, and other styles
28

Trappes-Lomax, John. "Hiatus in Vergil and in Horace's Odes." Proceedings of the Cambridge Philological Society 50 (2004): 141–58. http://dx.doi.org/10.1017/s0068673500001085.

Full text
Abstract:
Our discussion will be primarily concerned with hiatus and prosodic hiatus in Vergil; emendations will be proposed at E. 3.6; A. 4.235; 7.226; it will also be suggested that some of the hiatus-free readings to be found in Carolingian and later MSS deserve consideration. Emendations will also be proposed at Horace, Odes 1.28.24; 3.6.10. In order to evaluate Vergilian innovation and Vergilian influence, we will need to give some brief account both of his predecessors and of the other Augustan poets. Vergil's immediate predecessors. Comedy of course has its own rules, and a full discussion would be irrelevant; however it will be seen that some aspects of comic versification can be used to illustrate later practice. The fragmentary Latin poets are also excluded; what little we have of them has been exposed not only to the ordinary accidents of transcription but also to accidental misquotation; it is thus hardly possible to draw any certain conclusions.
APA, Harvard, Vancouver, ISO, and other styles
29

Cohan, Robert. "Reminiscences and Reflections at Eighty." Dance Research 22, no. 2 (October 2004): 101–38. http://dx.doi.org/10.3366/drs.2004.22.2.101.

Full text
Abstract:
The memoirs which follow resulted from three extensive interviews (on 22 and 23 May 2004 in Nîmes, France, and on 23 July 2004 in London); the transcripts were then edited and submitted to Mr Cohan for amendment and approval. The text that follows is a full encapsulation of what was said, apart from a lengthy excursus on orientalism in Miss Graham's work and a shorter one on improvisation in dance. This is a personal account, not a connected history of Mr Cohan's activities: many episodes from a long and varied life in dance were not encompassed in the interviews, notably the dance company Robert Cohan formed after he left the Graham Company for the first time and the work in Broadway musicals that he also undertook at this stage of his performing career. The commentary on the Graham repertory and that of London Contemporary Dance Theatre (LCDT) is also indicative rather than exhaustive. Robert Cohan is aware of major episodes from the Graham years – such as the first Asian Tour – that form no part of this account. Similarly, he does not seek to retrace the ground so amply covered in the history of London Contemporary Dance Theatre. On the other hand, some of the subjects that did come up have been discussed before – usually with some differences of emphasis or detail. But it is worth recalling in this regard that memory can exercise a refining and a condensing, as well, sometimes, as a distorting influence. In establishing the ‘truth’ about any matter it is as useful to have several accounts by the same witness as it is to have one account by several witnesses – just as in an epistolary novel by Richardson, an event looked at and described by the same person several times or by a number of different people can produce a richer version of ‘reality’ than a single ‘definitive’ statement. Although every attempt has been made to rectify errors of minor detail, the decision has been taken not to provide any scholarly notes to the text. This contribution to the journal is best received as a primary historical document. Those seeking a chronological account of the events mentioned in the text, or further guidance on matters of detail, are referred to the standard works.
APA, Harvard, Vancouver, ISO, and other styles
30

Litovsky, Ilya A., and Evgeny A. Mavrychev. "Synthesis of a Radiator in the Frequency Range of 0.9…5.8 GHz." Journal of the Russian Universities. Radioelectronics 22, no. 4 (October 1, 2019): 45–52. http://dx.doi.org/10.32603/1993-8985-2019-22-4-45-52.

Full text
Abstract:
Introduction. In this work, we consider the problem of synthesising a radiator with a 50-Ohm input port in the frequency range of 0.9…5.8 GHz. At present, this frequency range is the most relevant for electromagnetic environment analysis, because information exchange with the on-board equipment of unmanned aerial vehicles is most often realized in this frequency range. Objective. The main objective of this work is the synthesis of a radiator for an ultra-wideband antenna array in the frequency range of 0.9…5.8 GHz. Materials and methods. In this work, the method of full-wave electromagnetic simulation is used for the broadband radiator synthesis. The characteristics of the radiator are optimized by simulation and confirmed by experimental investigation of the radiator model. The antenna radiation pattern measurements are carried out in an anechoic chamber, and the standing wave ratio (SWR) is obtained using a network analyzer. Results. A non-analytical method of parametric optimization of the model, using the SWR < 2 criterion and the latest tools of full-wave electromagnetic simulation, is proposed. Examples of the designed optimized model with the final values of all parameters are reported. The calculated distributions of the electric field over the antenna, calculated radiation patterns at several frequency points, and the calculated SWR of the model are presented. The radiator model is made taking into account the simulation and optimization results. The measured main cross-sections of the radiation pattern and the SWR of the model are shown. Conclusion. In the present work, a broadband radiator model for the frequency range of 0.9…5.8 GHz is designed. The model is machined, and a brief comparative analysis of the calculated and measured antenna characteristics is carried out, demonstrating good agreement. The advantages of the proposed method and the designed radiator model are described. The results of this work are relevant to the tasks of observation, direction finding and signal reception from unmanned aerial vehicles. Key words: ultra-wideband antenna, Vivaldi antenna, microwave range, full-wave electromagnetic simulation.
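As background to the SWR < 2 criterion used above, the standing wave ratio follows directly from the magnitude of the reflection coefficient; a minimal check:

```python
# Relation between reflection coefficient magnitude and standing wave ratio.
# SWR = (1 + |Gamma|) / (1 - |Gamma|); the SWR < 2 criterion corresponds to
# |Gamma| < 1/3, i.e. a return loss better than ~9.5 dB.
import math

def swr(gamma_mag: float) -> float:
    return (1 + gamma_mag) / (1 - gamma_mag)

def return_loss_db(gamma_mag: float) -> float:
    return -20 * math.log10(gamma_mag)

g = 1 / 3
print(swr(g))             # 2.0
print(return_loss_db(g))  # ~9.54 dB
```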
APA, Harvard, Vancouver, ISO, and other styles
31

Baroud, George. "Historicizing Migration and Displacement: Learning from the Early Roman Empire in the Time of the Nation-State. Response to Lachenicht, Susanne. Learning from Past Displacements? The History of Migrations between Historical Specificity, Presentism and Fractured Continuities. Humanities 2018, 7, 36." Humanities 9, no. 2 (April 29, 2020): 36. http://dx.doi.org/10.3390/h9020036.

Full text
Abstract:
My response to Susanne Lachenicht’s thought-provoking article is a brief attempt to take up her call to write histories that lead not to absolute certainties but to more understanding of the complexities of the past. I focus on documentation, border control, and citizenship in the Early Roman Empire to illustrate some of the radically different ways these were conceptualized and practiced in a premodern multiethnic empire like Rome than in a contemporary nation-state today. Passports, for example, and border control as we know it, did not exist, and migration was not tied to citizenship status. But the account I offer is deliberately tentative and full of qualifications to emphasize the real methodological challenges the study of this subject poses on account of fragmentary literary and material records and the numerous difficulties of interpreting these. I conclude by pointing out both the benefits and the limitations of framing history as a discipline from which one can learn. On the one hand, understanding how seemingly universal categories such as ‘citizen’ and ‘migrant’ are dynamic and constructed rather than static and natural can nuance public debates in nation-states which receive high numbers of migrants (like Germany, Lachenicht’s starting point) by countering ahistorical narratives of a monolithic and sedentary identity. On the other hand, knowledge of the past does not necessarily lead to moral edification.
APA, Harvard, Vancouver, ISO, and other styles
32

Oledzka, Aleksandra, and Lynne Ann Barker. "Visuospatial Executive Functions are Improved by Brief Brain Training in Young Rugby Players - Evidence of Far Transfer Test Effects: A Pilot Study." OBM Neurobiology 05, no. 02 (March 15, 2019): 1. http://dx.doi.org/10.21926/obm.neurobiol.2102093.

Full text
Abstract:
Brain training apps are becoming increasingly popular for at-home use and as an adjunct to more traditional therapies. There is uncertainty about whether the effects of brain training transfer to real-world cognition or to performance on other cognitive assessment tests, or are specific only to the brain training app. Executive functions (EFs) are higher-order cognitive processes important for activities of everyday living and autonomous goal-directed behaviour [1]. EFs are associated with frontal brain networks that are susceptible to injury after head trauma and concussion, so it is important to know whether these functions can be trained in a short training period, with transfer effects to general cognitive ability beyond gains on app play; findings so far have been mixed. The present study investigated the efficacy of brief computerised brain training in producing far-transfer effects on performance on standardised clinical tests of cognition in young rugby players with a mixed concussion history, over a 4-week period. Athletes' cognitive ability was assessed at baseline and after the training period on standardised tests to establish whether there were transfer effects. The putative relationship of concussion frequency and severity to baseline cognitive performance was also investigated. Results showed transfer effects from the initial training to selective visuospatial executive functions. There was also a decline over the training period in non-verbal strategy initiation, although ability remained at average levels. Players showed no cognitive deficits at baseline, but correlational analyses and MR results indicated that concussion frequency, not severity, was a significant predictor of some visuospatial executive function scores at baseline. These preliminary findings hold promise for full-scale studies investigating the efficacy of brief brain training and the association between sport-related concussion and cognition.
APA, Harvard, Vancouver, ISO, and other styles
33

Love, Jeff, and Michael Meng. "Heidegger and post-colonial fascism." Nationalities Papers 45, no. 2 (March 2017): 307–20. http://dx.doi.org/10.1080/00905992.2016.1255186.

Full text
Abstract:
Alexander Dugin is considered a fringe figure in contemporary Russia. Yet, his writings exert considerable influence and develop a virulent nationalism that exploits the vocabulary of post-colonial resistance in an unaccustomed way. Dugin should not be ignored, and this article gives a brief account of Dugin's peculiar brand of post-colonial thinking by reference to its central source: Martin Heidegger. Specifically, the article examines how Dugin adapts the anti-metaphysical thinking of Heidegger's most radical work of the 1930s – a thinking that seeks to renew Western thought in an other beginning – to the context of modern Russia as it tries to free itself from Western (American) domination. Dugin aims at nothing less than the creation of a new Russian identity and destiny that will not only save Russia but also, in a nod to Heidegger, renew the Western tradition itself from the “outside.” If Dugin's political project is ambitious, so is his interpretation of Heidegger which attempts to bring out the full radicality of Heidegger's thinking, both as philosophy and as politics.
APA, Harvard, Vancouver, ISO, and other styles
34

Esmaeili, Mohammad Javad. "THE SCIENCES OF THE ANCIENTS AND THEIR DIVISIONS AQSĀM ʿULŪM AL-AWĀʾIL: A TEXT ATTRIBUTED TO AVICENNA, AN EDITION WITH A BRIEF INTRODUCTION." Arabic Sciences and Philosophy 31, no. 2 (August 23, 2021): 183–223. http://dx.doi.org/10.1017/s0957423921000060.

Full text
Abstract:
The famous philosopher and scientist Abū ʿAlī b. Sīnā (d. 428/1037) had an exceptional command of all the subjects on which he wrote. He is especially known for his many writings in logic, philosophy, and medicine. His influence was such that even in Europe, his works on physics, metaphysics and medicine in particular, were widely studied until the beginning of modern times. A keen mind, he had a full understanding of the inner structure of the Islamo-Hellenistic tradition that he perpetuated and in places helped to develop and reshape. This is not only borne out by his many writings, but in some instances also by his explicit accounts of the sciences and their divisions. This article contains an edition of one such account, of which only two copies have been identified so far. It will be argued (against Biesterfeldt) that the text in question is likely to have been written in Bukhārā when Avicenna was still in his early twenties. Moreover, it will be shown that it could very well be that the text was actually copied from his famous Al-ḥāṣil wal-maḥṣūl (Harvest reapings), a philosophical encyclopaedia in twenty volumes long since lost. The absence of algebra and a philosophical rather than a religious foundation of the sciences, finally, are important clues to Avicenna's perspective on the rational sciences early in his career.
APA, Harvard, Vancouver, ISO, and other styles
35

Waldron, Jeremy. "Response to Critics." Review of Politics 67, no. 3 (2005): 495–514. http://dx.doi.org/10.1017/s0034670500034689.

Full text
Abstract:
I am grateful to all the participants in this symposium for the attention they have paid to my arguments in God, Locke, and Equality (GLE) and for the kind things they say about the book. I am grateful, too, to the editors of this Review for offering me the opportunity to respond. In this brief note, I want to answer some of the criticisms that have been made of my interpretation, particularly in regard to Locke's account of the underpinnings of basic equality. I shall not say much about the suggestion which I advanced at the beginning and the end of GLE to the effect that we—even now, in the twenty-first century—ought to take seriously the view that the principle of basic equality requires for its elaboration and support something along the lines of Locke's religious views and that, just as basic equality was not conceived or nurtured on purely secular premises, so it cannot be sustained on purely secular premises. A full elaboration and defense of this suggestion would require much more space than I allotted it in GLE or than I can allot it here. I hope eventually to provide this in a book, which will deal with basic equality directly rather than through the lens of John Locke's work. Here I will discuss this aspect only by way of brief response to the efforts by Professors Zuckert and Reiman to show (not just to say) that basic equality can be supported on purely secular foundations.
APA, Harvard, Vancouver, ISO, and other styles
36

Baker, R. O., F. Kuppe, S. Chugh, R. Bora, S. Stojanovic, and R. Batyck. "Full-Field Modeling Using Streamline-Based Simulation: Four Case Studies." SPE Reservoir Evaluation & Engineering 5, no. 02 (April 1, 2002): 126–34. http://dx.doi.org/10.2118/77172-pa.

Full text
Abstract:
Summary Modern streamline-based reservoir simulators are able to account for actual field conditions such as 3D multiphase flow effects, reservoir heterogeneity, gravity, and changing well conditions. A streamline simulator was used to model four field cases, with approximately 400 wells and 150,000 gridblocks. History-match run times were approximately 1 CPU hour per run, with the final history matches completed in approximately 1 month per field. In all field cases, a high percentage of wells were history matched within the first two to three runs. Streamline simulation not only enables a rapid turnaround time for studies, but it also serves as a different tool in resolving each of the studied fields' unique characteristics. The primary reasons for faster history matching of permeability fields using 3D streamline technology as compared to conventional finite-difference (FD) techniques are as follows: Streamlines clearly identify which producer-injector pairs communicate strongly (flow visualization). Streamlines allow the use of a very large number of wells, thereby substantially reducing the uncertainty associated with outer-boundary conditions. Streamline flow paths indicate that idealized drainage patterns do not exist in real fields. It is therefore unrealistic to extract symmetric elements out of a full field. The speed and efficiency of the method allows the solution of fine-scale and/or full-field models with hundreds of wells. The streamline simulator honors the historical total fluid injection and production volumes exactly because there are no drawdown constraints for incompressible problems. The technology allows for easy identification of regions that require modifications to achieve a history match. Streamlines provide new flow information (i.e., well connectivity, drainage volumes, and well allocation factors) that cannot be derived from conventional simulation methods. Introduction In the past, streamline-based flow simulation was quite limited in its application to field data. Emanuel and Milliken1 showed how hybrid streamtube models were used to history match field data rapidly to arrive at both an updated geologic model and a current oil-saturation distribution for input to FD simulations. FD simulators were then used in forecast mode. Recent advances in streamline-based flow simulators have overcome many of the limitations of previous streamline and streamtube methods.2-6 Streamline-based simulators are now fully 3D and account for multiphase gravity and fluid mobility effects as well as compressibility effects. Another key improvement is that the simulator can now account for changing well conditions due to rate changes, infill drilling, producer-injector conversions, and well abandonments. With advances in streamline methods, the technique is rapidly becoming a common tool to assist in the modeling and forecasting of field cases. As this technology has matured, it is becoming available to a larger group of engineers and is no longer confined to research centers. Published case studies using streamline simulators are now appearing from a broad distribution of sources.7–12 Because of the increasing interest in this technology, our first intent in this paper is to outline a methodology for where and how streamline-based simulation fits in the reservoir engineering toolbox. Our second objective is to provide insight into why we think the method works so well in some cases. 
Finally, we will demonstrate the application of the technology to everyday field situations useful to mainstream exploitation or reservoir engineers, as opposed to specialized or research applications. The Streamline Simulation Method For a more detailed mathematical description of the streamline method, please refer to the Appendix and subsequent references. In brief, the streamline simulation method solves a 3D problem by decoupling it into a series of 1D problems, each one solved along a streamline. Unlike FD simulation, streamline simulation relies on transporting fluids along a dynamically changing streamline-based flow grid, as opposed to the underlying Cartesian grid. The result is that large timestep sizes can be taken without numerical instabilities, giving the streamline method a near-linear scaling in terms of CPU efficiency vs. model size.6 For very large models, streamline-based simulators can be one to two orders of magnitude faster than FD methods. The timestep size in streamline methods is not limited by a classic grid throughput (CFL) condition but by how far fluids can be transported along the current streamline grid before the streamlines need to be updated. Factors that influence this limit include nonlinear effects like mobility, gravity, and well rate changes.5 In real field displacements, historical well effects have a far greater impact on streamline-pattern changes than do mobility and gravity. Thus, the key is determining how much historical data can be upscaled without significantly impacting simulation results. For all cases considered here, 1-year timestep sizes were more than adequate to capture changes in historical data, gravity, and mobility effects. It is worth noting that upscaling historical data also would benefit run times for FD simulations. Where possible, both SL and FD methods would then require similar simulation times. However, only for very coarse grids and specific problems is it possible to take 1-year timestep sizes with FD methods. As the grid becomes finer, CFL limitations begin to dictate the timestep size, which is much smaller than is necessary to honor nonlinearities. This is why streamline methods exhibit larger speed-up factors over FD methods as the number of grid cells increases.
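The core idea described above is that the 3D transport problem is decoupled into 1D problems solved along streamlines. The sketch below is a toy illustration of one such 1D step under stated assumptions (a single streamline parameterised by time of flight, quadratic relative permeabilities, explicit first-order upwinding); it is not the field-scale simulator used in the paper.

```python
# Toy illustration of the 1D transport step used in streamline simulation:
# water saturation is advected along one streamline, parameterised by time of flight (tau).
# Fractional-flow and grid parameters are made up for the example.
import numpy as np

def fractional_flow(sw, mobility_ratio=2.0):
    """Simple Buckley-Leverett fractional flow with quadratic relative permeabilities."""
    krw, kro = sw**2, (1.0 - sw)**2
    return krw / (krw + kro / mobility_ratio)

n_cells, n_steps = 200, 300
d_tau, dt = 1.0, 0.3                      # time-of-flight cell size and time step
sw = np.zeros(n_cells)                    # initial water saturation along the streamline
sw[0] = 1.0                               # injector end kept at full water saturation

for _ in range(n_steps):
    f = fractional_flow(sw)
    # explicit upwind update of the 1D conservation law d(sw)/dt + d(f)/d(tau) = 0
    sw[1:] -= (dt / d_tau) * (f[1:] - f[:-1])
    sw[0] = 1.0
    sw = np.clip(sw, 0.0, 1.0)

print("water front located near cell", int(np.argmax(sw < 0.05)))
```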
APA, Harvard, Vancouver, ISO, and other styles
37

Hayashi, Haruo. "Long-term Recovery from Recent Disasters in Japan and the United States." Journal of Disaster Research 2, no. 6 (December 1, 2007): 413–18. http://dx.doi.org/10.20965/jdr.2007.p0413.

Full text
Abstract:
In this issue of Journal of Disaster Research, we introduce nine papers on societal responses to recent catastrophic disasters with special focus on long-term recovery processes in Japan and the United States. As disaster impacts increase, we also find that recovery times take longer and the processes for recovery become more complicated. On January 17th of 1995, a magnitude 7.2 earthquake hit the Hanshin and Awaji regions of Japan, resulting in the largest disaster in Japan in 50 years. In this disaster, which we call the Kobe earthquake hereafter, over 6,000 people were killed and the damage and losses totaled more than 100 billion US dollars. The long-term recovery from the Kobe earthquake disaster took more than ten years to complete. One of the most important responsibilities of disaster researchers has been to scientifically monitor and record the long-term recovery process following this unprecedented disaster and discern the lessons that can be applied to future disasters. The first seven papers in this issue present some of the key lessons our research team learned from studying the long-term recovery following the Kobe earthquake disaster. We have two additional papers that deal with two recent disasters in the United States – the terrorist attacks on the World Trade Center in New York on September 11, 2001 and the devastation of New Orleans by the 2005 Hurricane Katrina and subsequent levee failures. These disasters have raised a number of new research questions about long-term recovery that US researchers are studying because of the unprecedented size and nature of these disasters’ impacts. Mr. Mammen’s paper reviews the long-term recovery processes observed at and around the World Trade Center site over the last six years. Ms. Johnson’s paper provides a detailed account of the protracted reconstruction planning efforts in the city of New Orleans to illustrate a set of sufficient and necessary conditions for successful recovery. All nine papers in this issue share a theoretical framework for long-term recovery processes which we developed based first upon the lessons learned from the Kobe earthquake and later expanded through observations made following other recent disasters in the world. The following sections provide a brief description of each paper as an introduction to this special issue. 1. The Need for Multiple Recovery Goals After the 1995 Kobe earthquake, the long-term recovery process began with the formulation of disaster recovery plans by the City of Kobe – the most severely impacted municipality – and an overarching plan by Hyogo Prefecture which coordinated 20 impacted municipalities; this planning effort took six months. Before the Kobe earthquake, as indicated in Mr. Maki’s paper in this issue, Japanese theories about, and approaches to, recovery focused mainly on physical recovery, particularly: the redevelopment plans for destroyed areas; the location and standards for housing and building reconstruction; and, the repair and rehabilitation of utility systems. But the lingering problems of some of the recent catastrophes in Japan and elsewhere indicate that there are multiple dimensions of recovery that must be considered. We propose that two other key dimensions are economic recovery and life recovery. The goal of economic recovery is the revitalization of the local disaster-impacted economy, including both major industries and small businesses. The goal of life recovery is the restoration of the livelihoods of disaster victims.
The recovery plans formulated following the 1995 Kobe earthquake, including the City of Kobe’s and Hyogo Prefecture’s plans, all stressed these two dimensions in addition to physical recovery. The basic structure of both the City of Kobe’s and Hyogo Prefecture’s recovery plans is summarized in Fig. 1. Each plan has three elements that work simultaneously. The first and most basic element of recovery is the restoration of damaged infrastructure. This helps both physical recovery and economic recovery. Once homes and workplaces are recovered, life recovery of the impacted people can be achieved as the final goal of recovery. Figure 2 provides a “recovery report card” of the progress made by 2006 – 11 years into Kobe’s recovery. Infrastructure was restored in two years, which was probably the fastest infrastructure restoration ever after such a major disaster; it astonished the world. Within five years, more than 140,000 housing units were constructed using a variety of financial means and ownership patterns, exceeding the number of demolished housing units. Governments at all levels – municipal, prefectural, and national – provided affordable public rental apartments. Private developers, both local and national, also built condominiums and apartments. Disaster victims themselves also invested heavily to reconstruct their homes. Eleven major redevelopment projects were undertaken and all were completed in 10 years. In sum, the physical recovery following the 1995 Kobe earthquake was extensive and has been viewed as a major success. In contrast, economic recovery and life recovery are still underway more than 13 years later. Before the Kobe earthquake, Japan’s policy approaches to recovery assumed that economic recovery and life recovery would be achieved by infusing ample amounts of public funding for physical recovery into the disaster area. Even though the City of Kobe’s and Hyogo Prefecture’s recovery plans set economic recovery and life recovery as key goals, there was no clear policy guidance to accomplish them. Without a clear articulation of the desired end-state, economic recovery programs for both large and small businesses were ill-timed and ill-matched to the needs of these businesses trying to recover amidst a prolonged slump in the overall Japanese economy that began in 1997. “Life recovery” programs implemented as part of Kobe’s recovery were essentially social welfare programs for low-income and/or senior citizens. 2. Requirements for Successful Physical Recovery Why was the physical recovery following the 1995 Kobe earthquake so successful in terms of infrastructure restoration, the replacement of damaged housing units, and completion of urban redevelopment projects? There are at least three key success factors that can be applied to other disaster recovery efforts: 1) citizen participation in recovery planning efforts, 2) strong local leadership, and 3) the establishment of numerical targets for recovery. Citizen participation As pointed out in the three papers on recovery planning processes by Mr. Maki, Mr. Mammen, and Ms. Johnson, citizen participation is one of the indispensable factors for successful recovery plans. Thousands of citizens participated in planning workshops organized by America Speaks as part of both the World Trade Center and City of New Orleans recovery planning efforts.
Although no such workshops were held as part of the City of Kobe’s recovery planning process, citizen participation had been part of the City of Kobe’s general plan update that had occurred shortly before the earthquake. The City of Kobe’s recovery plan is, in large part, an adaptation of the 1995-2005 general plan. On January 13, 1995, the City of Kobe formally approved its new 1995-2005 general plan, which had been developed over the course of three years with full citizen participation. City officials responsible for drafting the City of Kobe’s recovery plan later admitted that they were able to prepare the city’s recovery plan in six months because they had the preceding three years of planning for the new general plan with citizen participation. Based on this lesson, Odiya City compiled its recovery plan based on the recommendations obtained from a series of five stakeholder workshops after the 2004 Niigata Chuetsu earthquake. Fig. 1. Basic structure of recovery plans from the 1995 Kobe earthquake. Fig. 2. “Disaster recovery report card” of the progress made by 2006. Strong leadership In the aftermath of the Kobe earthquake, local leadership had a defining role in the recovery process. Kobe’s former Mayor, Mr. Yukitoshi Sasayama, was hired to work in Kobe City government as an urban planner, rebuilding Kobe following World War II. He knew the city intimately. When he saw damage in one area on his way to the City Hall right after the earthquake, he knew what levels of damage to expect in other parts of the city. It was he who called for the two-month moratorium on rebuilding in Kobe city on the day of the earthquake. The moratorium provided time for the city to formulate a vision and policies to guide the various levels of government, private investors, and residents in rebuilding. It was quite an unpopular policy when Mayor Sasayama announced it. Citizens expected the city to be focusing on shelters and mass care, not a ban on reconstruction. Based on his experience in rebuilding Kobe following WWII, he was determined not to allow haphazard reconstruction in the city. It took several years before Kobe citizens appreciated the moratorium. Numerical targets Former Governor Mr. Toshitami Kaihara provided some key numerical targets for recovery which were announced in the prefectural and municipal recovery plans. They were: 1) Hyogo Prefecture would rebuild all the damaged housing units in three years, 2) all the temporary housing would be removed within five years, and 3) physical recovery would be completed in ten years. All of these numerical targets were achieved. Having numerical targets was critical to directing and motivating all the stakeholders, including the national government’s investment, and it proved to be the foundation for Japan’s fundamental approach to recovery following the 1995 earthquake.
In the first year after the Kobe earthquake, Japan’s national government invested more than US$ 80 billion in recovery. These funds went mainly towards the repair and reconstruction of infrastructure and public facilities. Looking back, we can also see that these investments nearly crushed the local economy. Too much money flowed into the local economy over too short a period of time and it also did not have the “trickle-down” effect that might have been intended. To accomplish numerical targets for physical recovery, the national government awarded contracts to large companies from Osaka and Tokyo. But these large out-of-town contractors also tended to have their own labor and supply chains already intact, and did not use local resources and labor, as might have been expected. Essentially, ten years of housing supply was completed in less than three years, which led to a significant local economic slump. Large amounts of public investment for recovery are not necessarily a panacea for local businesses and local economic recovery, as shown in the following two examples from the Kobe earthquake. A significant national investment was made to rebuild the Port of Kobe to a higher seismic standard, but both its foreign export and import trade never recovered to pre-disaster levels. While the Kobe Port was out of business, both the Yokohama Port and the Osaka Port increased their business, even though many economists initially predicted that the Kaohsiung Port in Chinese Taipei or the Pusan Port in Korea would capture this business. Business stayed at all of these ports even after the reopening of the Kobe Port. Similarly, the Hanshin Railway was severely damaged and it took half a year to resume its operation, but it never regained its pre-disaster ridership. In this case, two other local railway services, the JR and Hankyu lines, maintained their increased ridership even after the Hanshin Railway resumed operation. As illustrated by these examples, pre-disaster customers who relied on previous economic output could not necessarily afford to wait for local industries to recover and may have had to take their business elsewhere. Our research suggests that the significant recovery investment made by Japan’s national government may have been a disincentive for new economic development in the impacted area. Government may have been the only significant financial risk-taker in the impacted area during the national economic slowdown. But its focus was on restoring what had been lost rather than promoting new or emerging economic development. Thus, there may have been a missed opportunity to provide incentives or put pressure on major businesses and industries to develop new businesses and attract new customers in return for the public investment. The significant recovery investment by Japan’s national government may have also created an over-reliance of individuals on public spending and government support. As indicated in Ms. Karatani’s paper, individual savings of Kobe’s residents have continued to rise since the earthquake and the number of individuals on social welfare has also decreased below pre-disaster levels.
Based on our research on economic recovery from the Kobe earthquake, at least two lessons emerge: 1) Successful economic recovery requires coordination among all three recovery goals – Economic, Physical and Life Recovery, and 2) “Recovery indices” are needed to better chart recovery progress in real-time and help ensure that the recovery investments are being used effectively. Economic recovery as the prime goal of recovery Physical recovery, especially the restoration of infrastructure and public facilities, may be the most direct and socially accepted provision of outside financial assistance into an impacted area. However, lessons learned from the Kobe earthquake suggest that the sheer amount of such assistance may not be as effective as it should be. Thus, as shown in Fig. 3, economic recovery should be the top priority goal for recovery among the three goals and serve as a guiding force for physical recovery and life recovery. Physical recovery can be a powerful facilitator of post-disaster economic development by upgrading social infrastructure and public facilities in compliance with economic recovery plans. In this way, it is possible to turn a disaster into an opportunity for future sustainable development. Life recovery may also be achieved with a healthy economic recovery that increases tax revenue in the impacted area. In order to achieve this coordination among all three recovery goals, municipalities in the impacted areas should have access to flexible forms of post-disaster financing. The community development block grant program that has been used after several large disasters in the United States provides impacted municipalities with a more flexible form of funding and the ability to better determine what to do and when. The participation of key stakeholders is also an indispensable element of success that enables block grant programs to transform local needs into concrete businesses. In sum, an effective economic recovery combines good coordination of national support to restore infrastructure and public facilities and local initiatives that promote community recovery. Developing Recovery Indices Long-term recovery takes time. As Mr. Tatsuki’s paper explains, periodic social survey data indicates that it took ten years before the initial impacts of the Kobe earthquake were no longer affecting the well-being of disaster victims and the recovery was completed. In order to manage this long-term recovery process effectively, it is important to have some indices to visualize the recovery processes. In this issue, three papers by Mr. Takashima, Ms. Karatani, and Mr. Kimura define three different kinds of recovery indices that can be used to continually monitor the progress of the recovery. Mr. Takashima focuses on electric power consumption in the impacted area as an index for impact and recovery. Chronological change in electric power consumption can be obtained from the monthly reports of power company branches. Daily estimates can also be made by tracking changes in city lights using a satellite called DMSP. Changes in city lights can be a very useful recovery measure especially at the early stages since they can be updated daily for anywhere in the world. Ms. Karatani focuses on the chronological patterns of monthly macro-statistics that prefectural and city governments collect as part of their routine monitoring of services and operations.
For researchers, it is extremely costly and virtually impossible to launch post-disaster projects that collect recovery data continuously for ten years. It is more practical for researchers to utilize data that is already being collected by local governments or other agencies and use this data to create disaster impact and recovery indices. Ms. Karatani found three basic patterns of disaster impact and recovery in the local government data that she studied: 1) Some activities increased soon after the disaster event and then slumped, such as housing construction; 2) Some activities reduced sharply for a period of time after the disaster and then rebounded to previous levels, such as grocery consumption; and 3) Some activities reduced sharply for a while and never returned to previous levels, such as the Kobe Port and Hanshin Railway. Mr. Kimura focuses on the psychology of disaster victims. He developed a “recovery and reconstruction calendar” that clarifies the process that disaster victims undergo in rebuilding their shattered lives. His work is based on the results of random surveys. Despite differences in disaster size and locality, survey data from the 1995 Kobe earthquake and the 2004 Niigata-ken Chuetsu earthquake indicate that the recovery and reconstruction calendar is highly reliable and stable in clarifying the recovery and reconstruction process. Fig. 3. Integrated plan of disaster recovery. 4. Life Recovery as the Ultimate Goal of Disaster Recovery Life recovery starts with the identification of the disaster victims. In Japan, local governments in the impacted area issue a “damage certificate” to disaster victims by household, recording the extent of each victim’s housing damage. After the Kobe earthquake, a total of 500,000 certificates were issued. These certificates, in turn, were used by both public and private organizations to determine victims’ eligibility for individual assistance programs. However, about 30% of those victims who received certificates after the Kobe earthquake were dissatisfied with the results of assessment. This caused long and severe disputes for more than three years. Based on the lessons learned from the Kobe earthquake, Mr. Horie’s paper presents (1) a standardized procedure for building damage assessment and (2) an inspector training system. This system has been adopted as the official building damage assessment system for issuing damage certificates to victims of the 2004 Niigata-ken Chuetsu earthquake, the 2007 Noto-Peninsula earthquake, and the 2007 Niigata-ken Chuetsu Oki earthquake. Personal and family recovery, which we term life recovery, was one of the explicit goals of the recovery plan from the Kobe earthquake, but it was unclear in both recovery theory and practice as to how this would be measured and accomplished. Now, after studying the recovery in Kobe and other regions, Ms. Tamura’s paper proposes that there are seven elements that define the meaning of life recovery for disaster victims. She recently tested this model in a workshop with Kobe disaster victims. The seven elements and victims’ rankings are shown in Fig. 4. Regaining housing and restoring social networks were, by far, the top recovery indicators for victims. Restoration of neighborhood character ranked third. Demographic shifts and redevelopment plans implemented following the Kobe earthquake forced significant neighborhood changes upon many victims.
Next in line were: having a sense of being better prepared and reducing their vulnerability to future disasters; regaining their physical and mental health; and restoration of their income, job, and the economy. The provision of government assistance also provided victims with a sense of life recovery. Mr. Tatsuki’s paper summarizes the results of four random-sample surveys of residents within the most severely impacted areas of Hyogo Prefecture. These surveys have been conducted every two years since 1999. Based on the results of survey data from 1999, 2001, 2003, and 2005, it is our conclusion that life recovery took ten years for victims in the area impacted significantly by the Kobe earthquake. Fig. 5 shows, by comparing the two structural equation models of disaster recovery (from 2003 and 2005), that damage caused by the Kobe earthquake was no longer a determinant of life recovery in the 2005 model. It was still one of the major determinants in the 2003 model as it was in 1999 and 2001. This is the first time in the history of disaster research that the entire recovery process has been scientifically described. It can be utilized as a resource and provide benchmarks for monitoring the recovery from future disasters. Fig. 4. Ethnographical meaning of “life recovery” obtained from the 5th year review of the Kobe earthquake by the City of Kobe. Fig. 5. Life recovery models of 2003 and 2005. 6. The Need for an Integrated Recovery Plan The recovery lessons from Kobe and other regions suggest that we need more integrated recovery plans that use physical recovery as a tool for economic recovery, which in turn helps disaster victims. Furthermore, we believe that economic recovery should be the top priority for recovery, and physical recovery should be regarded as a tool for stimulating economic recovery and upgrading social infrastructure (as shown in Fig. 6). With this approach, disaster recovery can help build the foundation for a long-lasting and sustainable community. Figure 6 proposes a more detailed model for a more holistic recovery process. The ultimate goal of any recovery process should be achieving life recovery for all disaster victims. We believe that to get there, both direct and indirect approaches must be taken. Direct approaches include: the provision of funds and goods for victims, for physical and mental health care, and for housing reconstruction. Indirect approaches for life recovery are those which facilitate economic recovery, which also has both direct and indirect approaches. Direct approaches to economic recovery include: subsidies, loans, and tax exemptions. Indirect approaches to economic recovery include, most significantly, the direct projects to restore infrastructure and public buildings. More subtle approaches include: setting new regulations or deregulations, providing technical support, and creating new businesses. A holistic recovery process needs to strategically combine all of these approaches, and there must be collaborative implementation by all the key stakeholders, including local governments, non-profit and non-governmental organizations (NPOs and NGOs), community-based organizations (CBOs), and the private sector. Therefore, community and stakeholder participation in the planning process is essential to achieve buy-in for the vision and desired outcomes of the recovery plan. Securing the required financial resources is also critical to successful implementation.
In thinking of stakeholders, it is important to differentiate between supporting entities and operating agencies. Supporting entities are those organizations that supply the necessary funding for recovery. Both Japan’s national government and the federal government in the U.S. were the prime supporting entities in the recovery from the 1995 Kobe earthquake and the 2001 World Trade Center recovery, respectively. In Taiwan, a Buddhist organization and the national government of Taiwan were major supporting entities in the recovery from the 1999 Chi-Chi earthquake. Operating agencies are those organizations that implement various recovery measures. In Japan, local governments in the impacted area are operating agencies, while the national government is a supporting entity. In the United States, community development block grants provide an opportunity for many operating agencies to implement various recovery measures. As Mr. Mammen’s paper describes, many NPOs, NGOs, and/or CBOs in addition to local governments have had major roles in implementing various kinds of programs funded by block grants as part of the World Trade Center recovery. No single organization can provide effective help for all kinds of disaster victims individually or collectively. The needs of disaster victims may conflict with one another because of their diversity. Their divergent needs can be successfully met by the diversity of operating agencies that have responsibility for implementing recovery measures. In a similar context, block grants made to individual households, such as microfinance, have been a vital recovery mechanism for victims in Thailand who suffered from the 2004 Sumatra earthquake and tsunami disaster. Both disaster victims and government officers at all levels strongly supported microfinance so that disaster victims themselves would become operating agencies for recovery. Empowering individuals in sustainable life recovery is indeed the ultimate goal of recovery. Fig. 6. A holistic recovery policy model.
APA, Harvard, Vancouver, ISO, and other styles
38

Cabeza-Gras, O., and V. Jaramillo-García. "Wind energy system in Ambocas-Ecuador: distributed generation and energy quality." Renewable Energy and Power Quality Journal 19 (September 2021): 609–13. http://dx.doi.org/10.24084/repqj19.361.

Full text
Abstract:
In this communication we present the construction of a wind farm (WF) with 10 MW of nominal power. This WF will increase the quantity and quality of electricity in the area of Ambocas, Loja, Ecuador, strengthening a system that suffers many voltage drops. The chosen site is ideal because it is far from population, on a hillside near an existing road, and the wind is persistent with a constant orientation throughout the year. The generated power will be connected to the electricity system at the Portovelo Substation, which is about 12 km from the WF site. We have calculated the expected electricity production over the whole year, taking into account all the data needed to simulate the WF operation realistically. We have also modelled the interconnection of the WF with the substation and its effect on the 69 kV bus. Finally, a brief economic analysis of the project gives an average annual profit of more than USD 3.5 million, excluding taxes, while the investment would be recovered in less than 5 of the 20 years planned for the WF in full operation.
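A minimal sketch of the payback arithmetic implied by the figures above follows. The annual profit and project life come from the abstract; the capital cost is a hypothetical placeholder, since the abstract does not state it, chosen only to show how a sub-5-year payback would be checked.

```python
# Illustrative payback sketch for the reported wind farm figures.
annual_profit_musd = 3.5        # average annual profit, million USD (from the abstract)
capex_musd = 15.0               # HYPOTHETICAL capital cost, million USD (not from the abstract)
project_life_years = 20         # planned operating life, years (from the abstract)

simple_payback_years = capex_musd / annual_profit_musd
lifetime_profit_musd = annual_profit_musd * project_life_years - capex_musd

print(f"simple payback: {simple_payback_years:.1f} years")
print(f"undiscounted lifetime profit: {lifetime_profit_musd:.1f} million USD")
```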
APA, Harvard, Vancouver, ISO, and other styles
39

Taormina, Isabella, Tess Kennedy, Kristina K. Hardy, and Steven J. Hardy. "Adoption of a Multidimensional Approach to Assessing the Impact of Socioeconomic Status on Neurocognitive and Behavioral Outcomes in Pediatric Sickle Cell Disease." Blood 128, no. 22 (December 2, 2016): 3589. http://dx.doi.org/10.1182/blood.v128.22.3589.3589.

Full text
Abstract:
Abstract Introduction: Broad neurocognitive deficits have been documented in children with sickle cell disease (SCD), even in the absence of stroke. These deficits pose significant consequences, as lower cognitive abilities are associated with lower academic achievement. However, there has been limited research examining the relationship between neurocognitive functioning and socioeconomic status (SES) in youth with SCD. Given that children with SCD experience socioeconomic disadvantage at relatively high rates, SES has been posited as one explanation for the high prevalence of neurocognitive issues in SCD, particularly in the case of patients without stroke or those with less severe phenotypes. In order to better understand the role of SES, we sought to evaluate the effects of multiple distinct measures of SES on neurocognitive outcomes in pediatric SCD. Methods: Fifty-nine children with SCD aged 7-16 (M = 10.44, SD = 2.87; 42% male) enrolled in a larger study of the feasibility and efficacy of a computerized cognitive training program. Primary caregivers reported demographic information, including the child's age, gender, and ethnicity, and rated their child's executive functioning difficulties on the Behavior Rating Inventory of Executive Function (BRIEF). Scores on the BRIEF are represented as T scores, where higher scores reflect more problems. Caregivers also reported on multiple measures of SES, including the participant's health insurance type, whether the participant received free-or-reduced lunch at school, and rated the adequacy of household resources (Family Resource Scale; FRS) and their perceived community and national social status (The MacArthur Scale of Subjective Social Status). Children and adolescents completed the Wechsler Intelligence Scale for Children, Fifth Edition (WISC-V). Results: Multiple regression analyses were performed to examine the relationship between SES measures and performance-based and caregiver-reported neurocognitive and behavioral functioning. Controlling for age and gender, having public health insurance significantly predicted lower Full Scale IQs on the WISC-V (R2 = .158, b = -8.609, p = .021), as well as greater impairments on the BRIEF Working Memory (R2 = .219, b = -9.556, p = .014), Organization of Materials (R2 = .166, b = -7.498, p = .011), and Monitor (R2 = .137, b = -6.872, p = .038) subscales. In contrast, having private health insurance significantly predicted higher Full Scale IQs (R2 = .187, b = 10.376, p = 0.007) and fewer problems on the BRIEF Working Memory (R2 = .101, b = 7.868, p = .046), Organization of Materials (R2 = 0.209, b = 9.103, p = .003), and Monitor (R2 = .163, b = 8.231, p = .018) subscales. Additionally, receiving free-or-reduced lunch significantly predicted lower scores on a WISC-V task measuring processing speed (R2 = .316, b = -1.976, p = .006) and a composite indicator of processing speed (R2 = .226, b = -9.849, p = .011). In contrast to hypotheses, higher perceived social status within families' communities on the MacArthur Scale of Subjective Social Status was predictive of lower Full Scale IQs (R2 = .089, b = 1.646, p = .049) and higher perceived social status using the United States as a reference predicted greater impairments on the Plan/Organize (R2 = .169, b = 2.287, p = .011) and Initiate subscales of the BRIEF (R2 = .134, b = 1.839, p = .024). Conclusions: It is feasible to measure SES in multiple ways in clinical trials.
In our study, SES significantly predicted performance-based and parent-reported neurocognitive functioning; however, each measure of SES appeared to account for a unique component of SES and demonstrated unique associations with neurocognitive outcomes. Public insurance was a significant predictor of more caregiver-rated problems with children's working memory, organizational skills, and executive functioning. Children who qualified for free-or-reduced lunch also scored significantly lower on processing speed tasks. Findings support the hypothesis that SES plays an important role in determining neurocognitive and behavioral outcomes and highlight the value of conceptualizing and assessing SES as a multidimensional construct. Researchers and clinicians should routinely assess SES using various measures to enhance detection of neurocognitive difficulties and assist in crafting tailored interventions to mitigate negative consequences of low SES in children with SCD. Disclosures No relevant conflicts of interest to declare.
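The regression approach summarized above, controlling for age and gender while testing an SES indicator, can be reproduced in outline with standard statistical tooling. The sketch below is a generic illustration with a hypothetical data frame and placeholder column names; it is not the authors' analysis code or data.

```python
# Generic sketch of a multiple regression controlling for age and gender, as described above.
# The data frame and column names are hypothetical placeholders, not the study's dataset.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "full_scale_iq":    [95, 102, 88, 110, 97, 105, 91, 99],
    "age":              [8, 12, 9, 14, 10, 11, 7, 13],
    "male":             [1, 0, 1, 0, 1, 0, 0, 1],
    "public_insurance": [1, 0, 1, 0, 1, 0, 1, 0],
})

# OLS with age and gender as covariates; the coefficient on public_insurance is the
# analogue of the reported b values, and rsquared the analogue of the reported R^2.
model = smf.ols("full_scale_iq ~ age + male + public_insurance", data=df).fit()
print(model.params["public_insurance"], model.rsquared)
```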
APA, Harvard, Vancouver, ISO, and other styles
40

Coxson, Darwyn. "Photoinhibition of net photosynthesis in Stereocaulon virgatum and S. tomentosum, a tropical–temperate comparison." Canadian Journal of Botany 65, no. 8 (August 1, 1987): 1707–15. http://dx.doi.org/10.1139/b87-233.

Full text
Abstract:
Stereocaulon tomentosum Fr., collected in pine–lichen woodlands of southwestern Alberta, Canada, and S. virgatum Ach., collected from recent lahar flows on La Soufrière, Guadeloupe (French West Indies), were examined for their response of net photosynthesis and respiration to brief periods of high light exposure at temperatures ranging from 10 to 40 °C. In each species, a 30-min exposure period to photon flux densities of 1500 μmol∙m−2∙s−1 resulted in a significant reduction of subsequently measured rates of net photosynthetic CO2 uptake in those treatment groups which had been held at temperatures at or above 30 °C. These results suggest an altered stability of interactions between light and dark reactions of net photosynthesis at higher temperatures. They also point to the need for close monitoring of field microclimatic conditions during periods when hydrated lichen thalli are exposed to full sun conditions. Selective pressures imposed on hydrated lichen thalli during these periods of insolation shock may well prove of much greater importance in the shaping of net photosynthetic response patterns than has previously been recognized and may account for many of the previously observed disparities between those temperatures prevailing during typical periods of thallus hydration (i.e., under overcast conditions) and those at which maximal rates of net photosynthetic uptake are seen (often a full 10 to 15 °C higher). These interactions appear equally important to lichens of tropical origins and to those of north temperate habitats and suggest common evolutionary drives on thermal acclimation in both environments.
APA, Harvard, Vancouver, ISO, and other styles
41

Câmara-Costa, Hugo, Kim S. Bull, Colin Kennedy, Andreas Wiener, Gabriele Calaminus, Anika Resch, Virginie Kieffer, et al. "Quality of survival and cognitive performance in children treated for medulloblastoma in the PNET 4 randomized controlled trial." Neuro-Oncology Practice 4, no. 3 (February 10, 2017): 161–70. http://dx.doi.org/10.1093/nop/npw028.

Full text
Abstract:
Abstract Background The relationship between direct assessments of cognitive performance and questionnaires assessing quality of survival (QoS) is reported to be weak-to-nonexistent. Conversely, the associations between questionnaires evaluating distinct domains of QoS tend to be strong. This pattern remains understudied. Methods In the HIT-SIOP PNET4 randomized controlled trial, cognitive assessments, including Full Scale, Verbal and Performance IQ, Working Memory, and Processing Speed, were undertaken in 137 survivors of standard-risk medulloblastoma from 4 European countries. QoS questionnaires, including self-reports and/or parent reports of the Behavior Rating Inventory of Executive Function (BRIEF), the Health Utilities Index, the Strengths and Difficulties Questionnaire, and the Pediatric Quality of Life Inventory, were completed for 151 survivors. Correlations among direct cognitive assessments, QoS questionnaires, and clinical data were examined in participants with both assessments available (n = 86). Results Correlations between direct measures of cognitive performance and QoS questionnaires were weak, except for moderate correlations between the BRIEF Metacognition Index (parent report) and working memory (r = .32) and between health status (self-report) and cognitive outcomes (r = .35–.44). Correlations among QoS questionnaires were moderate to strong both for parent and self-report (r = .39–.76). Principal Component Analysis demonstrated that questionnaires and cognitive assessments loaded on 2 separate factors. Conclusions We hypothesize that the strong correlations among QoS questionnaires are partially attributable to the positive/negative polarity of all questions on the questionnaires, coupled with the relative absence of disease-specific questions. These factors may be influenced by respondents’ personality and emotional characteristics, unlike direct assessments of cognitive functioning, and should be taken into account in clinical trials.
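The correlation-and-PCA workflow described above can be sketched as follows. The data below are synthetic, with a two-factor structure built in purely to illustrate how questionnaire and cognitive measures could separate onto distinct components; this is not the PNET4 trial data or its exact analysis.

```python
# Illustrative sketch: correlations among questionnaire and cognitive measures,
# followed by a two-component PCA. Data are synthetic placeholders.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
n = 86  # participants with both assessment types, as in the abstract

# Hypothetical measures: two questionnaire scales sharing one latent factor,
# and two cognitive scores sharing another.
q_latent = rng.normal(size=(n, 1))
c_latent = rng.normal(size=(n, 1))
questionnaires = q_latent + 0.4 * rng.normal(size=(n, 2))
cognitive = c_latent + 0.4 * rng.normal(size=(n, 2))
data = np.hstack([questionnaires, cognitive])

corr = np.corrcoef(data, rowvar=False)      # 4 x 4 correlation matrix
pca = PCA(n_components=2).fit(data)         # two-component solution

print(np.round(corr, 2))
print(np.round(pca.components_, 2))         # loadings of each measure on the 2 components
```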
APA, Harvard, Vancouver, ISO, and other styles
42

Berman, Robert. "Normativity, Equal Opportunity, and the Adjustment Problem in The Just State." Hegel Bulletin 33, no. 01 (2012): 45–56. http://dx.doi.org/10.1017/s026352320000032x.

Full text
Abstract:
The Just State, as Richard Winfield notes at the outset, is the culmination of an ambitious project devoted to limning the contours and content of a systematic philosophy of right. Apart from its value as a substantial contribution to the understanding of Hegel's Philosophy of Right, The Just State offers its own self-standing, content-rich account of political justice. Winfield's book is overflowing with argument and analysis, which calls for careful, detailed consideration, and the brief set of comments that follow cannot possibly do it full justice. It is useful, however, to focus on a specific topic concerning normativity discussed in the ‘Introduction’. In particular, I would like to raise some questions about the principle of equal opportunity and the adjustment problem that arises from it. Before turning to those questions, it might be helpful to offer an overview of the overarching structure of The Just State. The ‘Introduction’ and eight chapters can be clustered thematically into three parts. The ‘Introduction’ has the important task of clearing away the obstacles thrown up by sceptical arguments against the very possibility of a normative theory of political justice. Chapters 1-3 set the stage for the account of political justice proper, which then takes up the remainder of the book, chapters 4-8. So the argument starts, in the ‘Introduction’, with a refutation of the sceptical denial of the project of political philosophy understood as the normative theory of political justice. This is important because the positive upshot of this anti-sceptical argument is the identification of a singular normative criterion: self-determination.
APA, Harvard, Vancouver, ISO, and other styles
43

Mathew, Thomas, Laxman Gurung, Sahar Roshandel, Socrates Munoz, and G. Prakash. "Halotrimethylsilane-Nitrite/Nitrate Salts: Efficient and Versatile Reagent System for Diverse Organic Synthetic Transformations." Synlett 30, no. 09 (March 19, 2019): 1037–47. http://dx.doi.org/10.1055/s-0037-1612105.

Full text
Abstract:
The reagent system comprised of halotrimethylsilane and nitrite or nitrate salts has now been successfully used as an efficient system for a series of versatile synthetic transformations. In recent years, the significance and efficacy of this system for reactions such as nitration of aromatics and olefins, oxidation of thiols to sulfonyl chlorides, ipso-nitrosation/nitration of arylboronic acids, ipso-nitration of α,β-unsaturated carboxylic acids to nitro olefins, etc. have been disclosed. Though the reagent system has not been exploited to its full potential, the reported reactions reveal its advantages as a very safe and convenient system that works under mild conditions. This brief Account reveals various synthetic applications of halotrimethylsilane-nitrite/nitrate salts in organic synthesis hitherto reported.
1 Introduction
2 Reactions Using a Halotrimethylsilane-Nitrate Salt System
2.1 Nitration of Olefins and Aromatics
2.2 One-Pot Preparation of gem-Chloronitroso, gem-Chloronitro, and vic-Dichloro Compounds
2.3 One-Step Conversion of Anilines into Haloarenes
2.4 Deoximation of Aldoximes to Aldehydes and Ketoximes to Ketones
2.5 One-Pot Synthesis of Cyclic/Noncyclic α-Nitroketones from Cyclic/Noncyclic Olefins
2.6 ipso-Nitration of Arylboronic Acids
2.7 ipso-Nitrosation of Arylboronic Acids
2.8 Oxidation of Sulfides and Sulfoxides to Sulfones
2.9 Oxidative Chlorination of Thiols and Disulfides to Sulfonyl Chlorides
2.10 α-Halogenation of Carbonyl Compounds
2.11 Decarboxylative ipso-Nitration and Dibromination of Cinnamic Acid
3 Conclusion
APA, Harvard, Vancouver, ISO, and other styles
44

Agres, Kat R. "Change detection and schematic processing in music." Psychology of Music 47, no. 2 (January 23, 2018): 173–93. http://dx.doi.org/10.1177/0305735617751249.

Full text
Abstract:
Research into vision has highlighted the importance of gist representations in change detection and memory. This article puts forth the hypothesis that schematic processing and gist provide an account for change detection in music as well, where a musical gist is an abstracted memory representation for schematically consistent tones. The present experiments illuminate the content of gist memory representations by testing when listeners can detect single-tone changes in pairs of melodies. In Experiment 1, musicians and non-musicians listened to melodies varying in tonal structure. Less structure resulted in compromised change detection in both groups. Most often, musicians displayed more accurate change detection than non-musicians, but, surprisingly, when schematic processing could not contribute to memory encoding, musicians performed worse than their untrained counterparts. Experiment 2 utilized a full-factorial design to examine tonality, interval of pitch change, metrical position, and rhythm. Tonality had a particularly large effect on performance, with non-scale tones generally aiding change detection. Listeners were unlikely, however, to detect schematically-inconsistent tones when only brief melodic context was available. The results uphold the hypothesis that memory for melodies relies on schematic processing, with change detection dependent upon whether the change alters the schematic gist of the melody.
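Change-detection performance of the kind described above is often summarized with a signal-detection measure such as d'. The sketch below shows that standard computation under hypothetical hit and false-alarm counts; it is a generic illustration, not the scoring procedure reported in the article.

```python
# Generic signal-detection summary for a same/different change-detection task.
# Hit and false-alarm counts are hypothetical; this is not the article's analysis.
from scipy.stats import norm

hits, misses = 42, 8                 # "changed" trials judged correctly / incorrectly
false_alarms, correct_rej = 10, 40   # "unchanged" trials judged "changed" / "unchanged"

# Log-linear correction avoids infinite z-scores when a rate is exactly 0 or 1.
hit_rate = (hits + 0.5) / (hits + misses + 1)
fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rej + 1)

d_prime = norm.ppf(hit_rate) - norm.ppf(fa_rate)
print(f"d' = {d_prime:.2f}")
```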
APA, Harvard, Vancouver, ISO, and other styles
45

Virts, R. A. "Numerical Solution of a Two-Dimensional Problem of Fluid Filtration in a Deformable Porous Medium." Izvestiya of Altai State University, no. 1(117) (March 17, 2021): 88–92. http://dx.doi.org/10.14258/izvasu(2021)1-14.

Full text
Abstract:
The paper considers a two-dimensional mathematical model of filtration of a viscous incompressible fluid in a deformable porous medium. The model is based on the equations of conservation of mass for liquid and solid phases, Darcy’s law, the rheological relationship for a porous medium, and the law of conservation of the balance of forces. In this article, the equation of the balance of forces is taken in full form, i.e. the viscous and elastic properties of the medium are taken into account. The aim of the work is a numerical study of a model initial-boundary value problem. Section 1 gives a statement of the problem and a brief review of the literature on works related to this topic. In Section 2, the original system of equations is transformed. In the case of slow flows, when the convective term can be neglected, a system arises that consists of a second-order parabolic equation for the effective pressure of the medium and a first-order equation for porosity. Section 3 proposes an algorithm for the numerical solution of the resulting initial-boundary value problem. For the numerical implementation, an alternating-direction scheme for the heat equation with variable coefficients is used, as well as a fourth-order Runge-Kutta scheme.
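To illustrate the kind of time integration mentioned above, the sketch below applies a classical fourth-order Runge-Kutta step to a toy first-order porosity evolution equation dφ/dt = f(φ, p). The right-hand side and all coefficients are made-up placeholders, not the constitutive relations from the paper.

```python
# Classical 4th-order Runge-Kutta step, of the kind used for a first-order porosity equation.
# The right-hand side f below is a made-up relaxation law, not the paper's model.
def f(phi, p):
    """Toy relaxation of porosity phi toward a pressure-dependent equilibrium."""
    return -(phi - 0.2 - 0.01 * p)

def rk4_step(phi, p, dt):
    k1 = f(phi, p)
    k2 = f(phi + 0.5 * dt * k1, p)
    k3 = f(phi + 0.5 * dt * k2, p)
    k4 = f(phi + dt * k3, p)
    return phi + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

phi, p, dt = 0.3, 5.0, 0.1   # hypothetical initial porosity, pressure, and time step
for _ in range(50):
    phi = rk4_step(phi, p, dt)
print(round(phi, 4))          # relaxes toward the equilibrium value 0.2 + 0.01 * p = 0.25
```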
APA, Harvard, Vancouver, ISO, and other styles
46

Naidoo, Jay. "Was The Retief-Dingane Treaty a Fake?" History in Africa 12 (1985): 187–210. http://dx.doi.org/10.2307/3171720.

Full text
Abstract:
The entry into the Zulu territory of Natal in 1837 of the Trekker leader Piet Retief; his meeting with the Zulu Chief Dingane; the resultant agreement (Retief recovers some stolen cattle in return for a concession to a part of Natal); the subsequent meeting of the two leaders; the untoward actions of Dingane (the killing in February 1838 of the unsuspecting Retief and his sixty-seven followers, and the mortifying and widespread attacks on all the Trekker encampments in Natal); the gathering of a new contingent of Trekkers; the defeat of Dingane's forces ten months later at ‘Blood River’; and, finally, the discovery in December 1838 (near the identifiable remains of Retief) of the agreement, the title deed to Natal--these events, tragic and dramatic, constitute a brief but special chapter of settler and, notably, of Afrikaner history. The treaty's miraculous recovery, the eyewitness reports of its finding, the long line of historians crediting its authenticity, and the title deed's very genuineness all came under unexpected--and unwelcome--suspicion, scrutiny, and debate in the 1920s, however. To appreciate that debate it is necessary to begin at the beginning. The French naturalist, traveler, and writer Louis A. Delegorgue, who was with the Trekkers during some of the time between 1838 and 1840, was probably one of the first to provide a connected published account--after the discovery of the treaty in December 1838--of the Retief-Dingane encounter. Thereafter Hendrik Cloete, who was sent by the Cape Government as a special commissioner to negotiate with the Volksraad of Natal in May 1843, set out a relatively full account of Retief's misadventures in Natal.
APA, Harvard, Vancouver, ISO, and other styles
47

Nasibullayev, I. Sh. "Application of free software FreeFem++/Gmsh and FreeCAD/CalculiX for simulation of static elasticity problems." Multiphase Systems 15, no. 3-4 (2020): 183–200. http://dx.doi.org/10.21662/mfs2020.3.129.

Full text
Abstract:
The paper discusses the stages of computer numerical simulation of engineering problems and ways to improve the accuracy of simulation; it provides a brief overview of free software for simulating elasticity problems by the finite element method, as well as of trends in the development of free CAD and CAE software. For a successful engineering study, it is necessary to choose a convenient tool that takes into account all the features of the problem being solved. Based on the solution of a test static problem of linear elasticity, two approaches to engineering modeling were demonstrated. The first approach requires programming skills: the full modeling cycle was written in the programming language of the FreeFem++ software. Additionally, a workflow is shown in which the mesh is generated in the Gmsh program and then used in FreeFem++. In the second approach, the full cycle of modeling is carried out through the interface of the FreeCAD program with the built-in CalculiX solver, which does not require programming skills. A way to parameterize the task using the Python interpreter built into FreeCAD is also proposed. The simulation results obtained using both approaches are compared for an object to which an external action is applied, determined by Dirichlet or Neumann boundary conditions, and two types of object fastening are analyzed: rigid embedding (clamping) and constraint by a plane with zero friction. The use of computing resources by various direct and iterative methods is analyzed. Within the framework of the considered test problem of static linear elasticity, the most efficient method in FreeFem++ is the conjugate gradient (CG) iterative method, both in terms of computation time and memory use. In the CalculiX program, the highest calculation speed is provided by the iterative Cholesky method preconditioned with an incomplete Cholesky factorization.
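The solver comparison described above, direct versus conjugate-gradient iterative solution of a symmetric positive-definite system, can be sketched generically as follows. The matrix here is a 2D Laplacian stand-in, not the elasticity stiffness matrix assembled by FreeFem++ or CalculiX, and the snippet is only an illustration of the comparison, not the paper's benchmark.

```python
# Sketch comparing a sparse direct solve with conjugate gradients (CG) on an SPD system.
# The 2D Laplacian below is a stand-in for an elasticity stiffness matrix.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 50
lap_1d = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n))
A = (sp.kron(sp.identity(n), lap_1d) + sp.kron(lap_1d, sp.identity(n))).tocsr()  # SPD matrix
b = np.ones(A.shape[0])

x_direct = spla.spsolve(A, b)        # sparse direct solve
x_cg, info = spla.cg(A, b)           # conjugate gradient iterations; info == 0 means converged

rel_diff = np.linalg.norm(x_direct - x_cg) / np.linalg.norm(x_direct)
print(info, rel_diff)
```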
APA, Harvard, Vancouver, ISO, and other styles
48

The Editors. "Notes from the Editors, January 2015." Monthly Review 66, no. 8 (December 31, 2014): 2. http://dx.doi.org/10.14452/mr-066-08-2015-01_0.

Full text
Abstract:
<div class="buynow"><a title="Back issue of Monthly Review, January 2015 (Volume 66, Number 8)" href="http://monthlyreview.org/back-issues/mr-066-08-2015-01/">buy this issue</a></div>The publication of socialist books in the United States has always encountered serious institutional obstacles. This can be seen in the enormous hurdles that stood in the way of the successful publication 130 years ago of the English translation of Engels&rsquo;s <em>The Condition of the Working Class in England </em>(1845)&mdash;today recognized as the classic account of the impact of the Industrial Revolution on workers. In 1885 Florence Kelley (-Wischnewetzky), the daughter of William D. Kelley, a U.S. Congressman and supporter of Lincoln, translated Engels&rsquo;s book into English. Her initial plan was to publish the translation in the United States with the respected publishing firm of G.P. Putnam &amp; Co. However, Putnam declined to publish it on the grounds that the book was outdated&hellip;and did not apply to U.S. industrialization, where such conditions of class exploitation were supposedly absent.&hellip; It is owing to these difficulties, associated with the U.S. publication of his book, that we have the benefit of some of Engels&rsquo;s more important comments regarding the problem of publishing socialist works in a capitalist society.<p class="mrlink">This article can also be found at the <a href="http://monthlyreview.org/index/volume-66-number-7" title="Vol. 66, No. 7: January 2015" target="_blank"><em>Monthly Review</em> website</a>, where most recent articles are published in full.</p><p class="mrpurchaselink"><a href="http://monthlyreview.org/index/volume-66-number-7" title="Vol. 66, No. 7: January 2015" target="_self">Click here to purchase a PDF version of this article at the <em>Monthly Review</em> website.</a></p>
APA, Harvard, Vancouver, ISO, and other styles
49

Xie, Peihong, Xin Li, and Xuemei Xie. "The integration of corporate non-market and market strategies: why, what, and how." Nankai Business Review International 5, no. 1 (February 25, 2014): 115–32. http://dx.doi.org/10.1108/nbri-01-2014-0003.

Full text
Abstract:
Purpose – This paper aims to systematically examine the key notion of integration of non-market and market strategies in the increasingly popular study of corporate non-market strategies. Design/methodology/approach – This paper is based on a brief literature review of the non-market strategy (NMS) research that shows the existing literature does not offer a clear and systematic account of the key notion of integration. It suggests any systematic account of integration should address at least three interrelated questions, i.e. why, what and how to integrate non-market and market strategies? Findings – For the why question, the authors use a formal model to demonstrate that the essence of the most important type of integration synergy lies in the positive spillover or externality from non-market to market strategies. For the what question, the authors identify the contents of integration at three levels, i.e. the level of non-market environment analysis, the level of NMS choice, and the level of non-market dynamic interactions. For the how question, the authors argue that the combination of non-market and market strategies should be seamless in terms of horizontal, vertical and intentional coordination. Overall, the authors argue, only when the right contents are combined and seamlessly coordinated will there be high synergies from integration of non-market and market strategies. Practical implications – Managers are advised to give non-market strategies full attention. Managers charged with non-market tasks should explore how to seamlessly coordinate non-market and market strategies in order to gain maximal synergies. Originality/value – This paper is the first to examine the key notion of integration in a systematic manner. It is the first to propose a three-question solution to systematic understanding of the notion and the first to propose the seamless coordination concept and its associated three aspects of seamless coordination.
APA, Harvard, Vancouver, ISO, and other styles
50

ALBOROV, Ivan, Fatima TEDEEVA, and Olga BURDZIEVA. "ECOLOGICAL ASPECTS OF THE TECHNOGENIC DEPOSITS PRESERVATION OF NON-FERROUS METALS IN THE NORTH CAUCASUS." Sustainable Development of Mountain Territories 13, no. 2 (June 30, 2021): 265–72. http://dx.doi.org/10.21177/1998-4502-2021-13-2-265-272.

Full text
Abstract:
The article presents the results of research on a comprehensive assessment of technogenic waste deposits located in the North Caucasus region, gives a brief description of the material composition of the technogenic raw materials, and reports element-by-element quantitative reserves of the accumulated secondary georesources with a view to their possible utilization for the production of non-ferrous metals and of raw materials for the manufacture of industrial building materials. The article considers the sanitary and ecological parameters of the functioning of man-made raw material deposits under complex orographic, meteorological and geographical conditions. The critical aspects of preserving technogenic waste deposits under current anthropogenic conditions are identified, as are the risk factors for the removal of toxic and harmful geomaterials into the water area with significant harm to the flora and ichthyofauna. For a deeper assessment of the useful minerals contained in the extracted ore, a unified state register of technogenic waste deposits should be created. Given the high fragmentation of all the above-mentioned process links at present, the owners must take adequate measures, in light of the current sanitary and regulatory requirements, to ensure compliance with the regulatory and environmental requirements of the current Federal Law "On Production and Consumption Waste". The need for complex processing of waste from the mining and processing industries of the North Caucasus is also dictated by environmental considerations: the occupied territories lie in floodplain terraced areas of mountain rivers and are therefore exposed to flooding, which makes the resort, recreational and balneological complexes highly vulnerable to these negative sources of impact. The assessment of the useful components contained in the extracted ore is currently not carried out in full, and the accumulated waste from processing non-ferrous metal ores is used only in small volumes.
APA, Harvard, Vancouver, ISO, and other styles
