
Journal articles on the topic 'Estimativa a priori via blow-up'


Consult the top 44 journal articles for your research on the topic 'Estimativa a priori via blow-up.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

García-Huidobro, Marta, Raúl Manasevich, and Satoshi Tanaka. "Positive Solutions for Systems of Quasilinear Equations with Non-homogeneous Operators and Weights." Advanced Nonlinear Studies 20, no. 2 (May 1, 2020): 293–310. http://dx.doi.org/10.1515/ans-2020-2082.

Abstract:
In this paper we deal with positive radially symmetric solutions for a boundary value problem containing a strongly nonlinear operator. The proof of existence of positive solutions that we give uses the blow-up method as a main ingredient in the search for a priori bounds of solutions. The blow-up argument is an argument by contradiction and uses a sort of scaling, reminiscent of the one used in the theory of minimal surfaces, see [B. Gidas and J. Spruck, A priori bounds for positive solutions of nonlinear elliptic equations, Comm. Partial Differential Equations 6 (1981), 883–901]; the homogeneity of the operators (Laplacian or p-Laplacian) and of the second members (powers or power-like functions) therefore plays a fundamental role in the method. Thus, when the differential operators are no longer homogeneous, and similarly for the second members, applying the blow-up method to obtain a priori bounds of solutions seems an almost impossible task. In spite of this fact, in [M. García-Huidobro, I. Guerra and R. Manásevich, Existence of positive radial solutions for a weakly coupled system via blow up, Abstr. Appl. Anal. 3 (1998), no. 1–2, 105–131], we were able to overcome this difficulty and obtain a priori bounds for a certain (simpler) type of problems. We show in this paper that asymptotically homogeneous functions provide, in the same sense, a nonlinear rescaling that allows us to generalize the blow-up method to our present situation. After the a priori bounds are obtained, the existence of a solution follows from Leray–Schauder topological degree theory.
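For readers new to the technique, the classical rescaling that this abstract generalizes can be sketched for the model problem \(-\Delta u = u^p\); the notation below (\(u_k\), \(M_k\), \(v_k\)) is illustrative only, and the paper replaces this homogeneous scaling with a nonlinear one built from asymptotically homogeneous functions:

```latex
% Gidas--Spruck-type blow-up rescaling (model case, not the paper's setting).
% If solutions u_k had unbounded maxima M_k attained at points x_k, set
\[
  M_k = u_k(x_k) = \|u_k\|_{\infty} \to \infty, \qquad
  v_k(y) = M_k^{-1}\, u_k\!\left(x_k + M_k^{-(p-1)/2}\, y\right),
\]
% so that each rescaled function satisfies
\[
  -\Delta v_k = v_k^{\,p}, \qquad v_k(0) = 1, \qquad 0 \le v_k \le 1,
\]
% and a limit along a subsequence is a bounded positive entire solution,
% contradicting the Liouville theorem when 1 < p < (N+2)/(N-2).
```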
2

Filippucci, Roberta, and Chiara Lini. "Existence results and a priori estimates for solutions of quasilinear problems with gradient terms." Opuscula Mathematica 39, no. 2 (2019): 195–206. http://dx.doi.org/10.7494/opmath.2019.39.2.195.

Abstract:
In this paper we establish a priori estimates and then an existence theorem for positive solutions of a Dirichlet problem on a bounded smooth domain in \(\mathbb{R}^N\) with a nonlinearity involving gradient terms. The existence result is proved without using a Liouville theorem for the limit problem obtained via the usual blow-up method; here we refer to the modified version due to Ruiz. In particular, our existence theorem extends a result by Lorca and Ubilla in two directions: by considering a nonlinearity which includes a power of \(u\) in the gradient term, and by removing the growth condition on the nonlinearity \(f\) at \(u=0\).
3

Humayoo, Mahammad, and Xueqi Cheng. "Parameter Estimation with the Ordered ℓ2 Regularization via an Alternating Direction Method of Multipliers." Applied Sciences 9, no. 20 (October 12, 2019): 4291. http://dx.doi.org/10.3390/app9204291.

Abstract:
Regularization is a popular technique in machine learning for model estimation and for avoiding overfitting. Prior studies have found that modern ordered regularization can be more effective in handling highly correlated, high-dimensional data than traditional regularization. The reason stems from the fact that ordered regularization can reject irrelevant variables and yield an accurate estimation of the parameters. How to scale up ordered regularization problems when facing large-scale training data remains an unanswered question. This paper explores the problem of parameter estimation with the ordered ℓ2-regularization via the Alternating Direction Method of Multipliers (ADMM), called ADMM-Oℓ2. The advantages of ADMM-Oℓ2 include (i) scaling up the ordered ℓ2 to a large-scale dataset, (ii) predicting parameters correctly by excluding irrelevant variables automatically, and (iii) having a fast convergence rate. Experimental results on both synthetic data and real data indicate that ADMM-Oℓ2 can perform better than or comparably to several state-of-the-art baselines.
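As a rough illustration of the ADMM splitting named here, the Python sketch below solves a least-squares problem with an ordered-weight quadratic penalty; `prox_ordered_l2` is a simplified stand-in for the ordered-ℓ2 proximal step (the exact ADMM-Oℓ2 operator is defined in the paper, not here):

```python
import numpy as np

def prox_ordered_l2(v, lam, rho):
    """Approximate prox of 0.5 * sum(lam_i * x_(i)^2) with decreasing weights.

    Pairs the decreasing weights lam with the entries of v ranked by
    magnitude; illustrative only, not the paper's exact operator.
    """
    order = np.argsort(-np.abs(v))           # rank entries by |v|, descending
    x = np.empty_like(v)
    x[order] = v[order] * rho / (rho + lam)   # shrink larger entries more
    return x

def admm_ordered_l2(A, b, lam, rho=1.0, n_iter=200):
    """Scaled-form ADMM for 0.5*||Ax - b||^2 + ordered-L2(z), s.t. x = z."""
    n = A.shape[1]
    x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)
    AtA, Atb = A.T @ A, A.T @ b
    L = np.linalg.cholesky(AtA + rho * np.eye(n))   # factor once, reuse
    for _ in range(n_iter):
        rhs = Atb + rho * (z - u)
        x = np.linalg.solve(L.T, np.linalg.solve(L, rhs))  # x-update
        z = prox_ordered_l2(x + u, lam, rho)               # z-update
        u = u + x - z                                      # dual update
    return z

rng = np.random.default_rng(0)
A = rng.normal(size=(50, 10)); b = A @ rng.normal(size=10)
lam = np.linspace(5.0, 0.5, 10)   # decreasing ordered weights
print(admm_ordered_l2(A, b, lam))
```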
4

Xin, Hua, Zhifang Liu, Yuhlong Lio, and Tzong-Ru Tsai. "Accelerated Life Test Method for the Doubly Truncated Burr Type XII Distribution." Mathematics 8, no. 2 (January 23, 2020): 162. http://dx.doi.org/10.3390/math8020162.

Abstract:
The Burr type XII (BurrXII) distribution is very flexible for modeling and has earned much attention in the past few decades. In this study, the maximum likelihood estimation method and two Bayesian estimation procedures are investigated based on constant-stress accelerated life test (ALT) samples, which are obtained from the doubly truncated three-parameter BurrXII distribution. Because computational difficulties arise for the maximum likelihood estimation method, two Bayesian procedures are suggested to estimate model parameters and lifetime quantiles under the normal use condition. A Markov chain Monte Carlo approach using the Metropolis–Hastings algorithm via Gibbs sampling is built to obtain Bayes estimators of the model parameters and to construct credible intervals. The proposed Bayesian estimation procedures are simple for practical use, and the obtained Bayes estimates are reliable for evaluating the reliability of lifetime products based on ALT samples. Monte Carlo simulations were conducted to evaluate the performance of these two Bayesian estimation procedures. Simulation results show that the second Bayesian estimation procedure outperforms the first in terms of bias and mean squared error when users do not have sufficient knowledge to set up hyperparameters in the prior distributions. Finally, a numerical example about oil-well pumps is used for illustration.
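To show the machinery named in this abstract (Metropolis–Hastings within Gibbs), here is a minimal generic sketch; the toy Gaussian log-posterior stands in for the doubly truncated BurrXII ALT likelihood, which the sketch does not implement:

```python
import numpy as np

def mh_within_gibbs(log_post, theta0, step, n_draws=5000, rng=None):
    """Random-walk Metropolis-Hastings within Gibbs: update one coordinate
    at a time, accepting with probability min(1, posterior ratio)."""
    rng = rng or np.random.default_rng(0)
    theta = np.array(theta0, dtype=float)
    draws = np.empty((n_draws, theta.size))
    lp = log_post(theta)
    for t in range(n_draws):
        for j in range(theta.size):
            prop = theta.copy()
            prop[j] += step[j] * rng.normal()      # per-coordinate proposal
            lp_prop = log_post(prop)
            if np.log(rng.uniform()) < lp_prop - lp:
                theta, lp = prop, lp_prop
        draws[t] = theta
    return draws

# Toy target: independent normals standing in for the BurrXII posterior.
log_post = lambda th: -0.5 * np.sum((th - np.array([1.0, -2.0])) ** 2)
draws = mh_within_gibbs(log_post, [0.0, 0.0], step=[0.5, 0.5])
print(draws[1000:].mean(axis=0))                          # posterior means
print(np.percentile(draws[1000:], [2.5, 97.5], axis=0))   # credible intervals
```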
5

Reuter, M., M. Buchwitz, O. Schneising, J. Heymann, H. Bovensmann, and J. P. Burrows. "A method for improved SCIAMACHY CO2 retrieval in the presence of optically thin clouds." Atmospheric Measurement Techniques Discussions 2, no. 5 (October 8, 2009): 2483–538. http://dx.doi.org/10.5194/amtd-2-2483-2009.

Abstract:
An optimal-estimation-based retrieval scheme for satellite-based measurements of XCO2 (the column-averaged mixing ratio of atmospheric CO2) is presented, enabling accurate retrievals also in the presence of thin clouds. The proposed method is designed to analyze near-infrared nadir measurements of the SCIAMACHY instrument in the CO2 absorption band at 1580 nm and in the O2-A absorption band at around 760 nm. The algorithm accounts for scattering in an optically thin cirrus cloud layer and at aerosols of a default profile. The scattering information is mainly obtained from the O2-A band, and a merged fit-windows approach enables the transfer of information between the O2-A and the CO2 band. Via the optimal estimation technique, the algorithm is able to account for a priori information to further constrain the inversion. Test scenarios of simulated SCIAMACHY sun-normalized radiance measurements are analyzed in order to specify the quality of the proposed method. In contrast to existing algorithms, the systematic errors due to cirrus clouds with optical thicknesses up to 1.0 are reduced to values typically below 4 ppm. This shows that the proposed method has the potential to reduce uncertainties of SCIAMACHY-retrieved XCO2, making this data product useful for surface flux inverse modeling.
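The "optimal estimation" step can be summarized by the standard Rodgers-type cost function below; the symbols (measurement \(\mathbf{y}\), forward model \(F\), prior state \(\mathbf{x}_a\), covariances \(\mathbf{S}_a\) and \(\mathbf{S}_\epsilon\)) are conventional and assumed here, since the abstract does not spell out the retrieval's state vector:

```latex
% Rodgers-type optimal-estimation cost: misfit to the measurement plus
% departure from the a priori, each weighted by its error covariance.
\[
  J(\mathbf{x}) =
    \bigl(\mathbf{y} - F(\mathbf{x})\bigr)^{\mathsf{T}} \mathbf{S}_{\epsilon}^{-1}
    \bigl(\mathbf{y} - F(\mathbf{x})\bigr)
  + \bigl(\mathbf{x} - \mathbf{x}_a\bigr)^{\mathsf{T}} \mathbf{S}_{a}^{-1}
    \bigl(\mathbf{x} - \mathbf{x}_a\bigr).
\]
```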
6

Turner, A. J., D. J. Jacob, K. J. Wecht, J. D. Maasakkers, E. Lundgren, A. E. Andrews, S. C. Biraud, et al. "Estimating global and North American methane emissions with high spatial resolution using GOSAT satellite data." Atmospheric Chemistry and Physics 15, no. 12 (June 30, 2015): 7049–69. http://dx.doi.org/10.5194/acp-15-7049-2015.

Abstract:
We use 2009–2011 space-borne methane observations from the Greenhouse Gases Observing SATellite (GOSAT) to estimate global and North American methane emissions with 4° × 5° and up to 50 km × 50 km spatial resolution, respectively. GEOS-Chem and GOSAT data are first evaluated with atmospheric methane observations from surface and tower networks (NOAA/ESRL, TCCON) and aircraft (NOAA/ESRL, HIPPO), using the GEOS-Chem chemical transport model as a platform to facilitate comparison of GOSAT with in situ data. This identifies a high-latitude bias between the GOSAT data and GEOS-Chem that we correct via quadratic regression. Our global adjoint-based inversion yields a total methane source of 539 Tg a−1 with some important regional corrections to the EDGARv4.2 inventory used as a prior. Results serve as dynamic boundary conditions for an analytical inversion of North American methane emissions using radial basis functions to achieve high resolution of large sources and provide error characterization. We infer a US anthropogenic methane source of 40.2–42.7 Tg a−1, as compared to 24.9–27.0 Tg a−1 in the EDGAR and EPA bottom-up inventories, and 30.0–44.5 Tg a−1 in recent inverse studies. Our estimate is supported by independent surface and aircraft data and by previous inverse studies for California. We find that the emissions are highest in the southern–central US, the Central Valley of California, and Florida wetlands; large isolated point sources such as the US Four Corners also contribute. Using prior information on source locations, we attribute 29–44 % of US anthropogenic methane emissions to livestock, 22–31 % to oil/gas, 20 % to landfills/wastewater, and 11–15 % to coal. Wetlands contribute an additional 9.0–10.1 Tg a−1.
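The quadratic bias correction mentioned here can be sketched in a few lines; the numbers, variable names, and the correction target (a GOSAT-minus-model difference as a function of latitude) are illustrative assumptions, not values from the paper:

```python
import numpy as np

# Fit a latitude-dependent quadratic to the satellite-minus-model bias,
# then subtract the fitted curve from each retrieval.
lat = np.array([-60., -30., 0., 30., 60., 70.])
bias = np.array([3.0, 1.0, 0.2, 1.2, 6.0, 9.0])   # illustrative, ppb

coef = np.polyfit(lat, bias, deg=2)     # b(lat) ~ c2*lat^2 + c1*lat + c0

def debias(xch4, latitude):
    """Remove the fitted latitude-dependent bias from a retrieval."""
    return xch4 - np.polyval(coef, latitude)

print(debias(1800.0, 65.0))             # corrected methane column, ppb
```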
7

Reuter, M., M. Buchwitz, O. Schneising, J. Heymann, H. Bovensmann, and J. P. Burrows. "A method for improved SCIAMACHY CO2 retrieval in the presence of optically thin clouds." Atmospheric Measurement Techniques 3, no. 1 (February 12, 2010): 209–32. http://dx.doi.org/10.5194/amt-3-209-2010.

Abstract:
An optimal-estimation-based retrieval scheme for satellite-based retrievals of XCO2 (the dry-air column-averaged mixing ratio of atmospheric CO2) is presented, enabling accurate retrievals also in the presence of thin clouds. The proposed method is designed to analyze near-infrared nadir measurements of the SCIAMACHY instrument in the CO2 absorption band at 1580 nm and in the O2-A absorption band at around 760 nm. The algorithm accounts for scattering in an optically thin cirrus cloud layer and at aerosols of a default profile. The scattering information is mainly obtained from the O2-A band, and a merged fit-windows approach enables the transfer of information between the O2-A and the CO2 band. Via the optimal estimation technique, the algorithm is able to account for a priori information to further constrain the inversion. Test scenarios of simulated SCIAMACHY sun-normalized radiance measurements are analyzed in order to specify the quality of the proposed method. In contrast to existing algorithms for SCIAMACHY retrievals, the systematic errors due to cirrus clouds with optical thicknesses up to 1.0 are reduced to values below 4 ppm for most of the analyzed scenarios. This shows that the proposed method has the potential to reduce uncertainties of SCIAMACHY-retrieved XCO2, making this data product potentially useful for surface flux inverse modeling.
8

Polanía, Luisa F., Raja Bala, Ankur Purwar, Paul Matts, and Martin Maltz. "Skin Chromophore Estimation from Mobile Selfie Images using Constrained Independent Component Analysis." Electronic Imaging 2020, no. 14 (January 26, 2020): 357–1. http://dx.doi.org/10.2352/issn.2470-1173.2020.14.coimg-357.

Abstract:
Human skin is made up of two primary chromophores: melanin, the pigment in the epidermis giving skin its color; and hemoglobin, the pigment in the red blood cells of the vascular network within the dermis. The relative concentrations of these chromophores provide a vital indicator for skin health and appearance. We present a technique to automatically estimate chromophore maps from RGB images of human faces captured with mobile devices such as smartphones. The ultimate goal is to provide a diagnostic aid for individuals to monitor and improve the quality of their facial skin. A previous method approaches the problem as one of blind source separation, and applies Independent Component Analysis (ICA) in camera RGB space to estimate the chromophores. We extend this technique in two important ways. First, we observe that models for light transport in skin call for source separation to be performed in log spectral reflectance coordinates rather than in RGB. Thus we transform camera RGB to a spectral reflectance space prior to applying ICA. This process involves the use of a linear camera model and Principal Component Analysis to represent skin spectral reflectance as a low-dimensional manifold. The camera model requires knowledge of the incident illuminant, which we obtain via a novel technique that uses the human lip as a calibration object. Second, we address an inherent limitation of ICA, namely that the ordering of the separated signals is random and ambiguous. We incorporate a domain-specific prior model for human chromophore spectra as a constraint in solving ICA. Results on a dataset of mobile camera images show high-quality and unambiguous recovery of chromophores.
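A hedged sketch of the separation step: plain FastICA in log-reflectance space stands in for the paper's constrained ICA with a chromophore-spectrum prior, and the synthetic "skin" data below is purely illustrative:

```python
import numpy as np
from sklearn.decomposition import FastICA

# Two hidden chromophore maps mixed through per-channel absorption
# (a Beer-Lambert-style toy model), then unmixed with FastICA.
rng = np.random.default_rng(0)
melanin = rng.uniform(0.1, 1.0, size=10000)
hemoglobin = rng.uniform(0.1, 1.0, size=10000)
mixing = np.array([[0.8, 0.3], [0.5, 0.6], [0.2, 0.9]])  # 3 channels x 2 sources

log_reflectance = -np.stack([melanin, hemoglobin], axis=1) @ mixing.T
sources = FastICA(n_components=2, random_state=0).fit_transform(log_reflectance)
print(np.corrcoef(sources[:, 0], melanin)[0, 1])   # ordering and sign are
print(np.corrcoef(sources[:, 1], melanin)[0, 1])   # ambiguous, hence the prior
```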
9

Forman, B. A., and S. A. Margulis. "Impact of Covariance Localization on Ensemble Estimation of Surface Downwelling Longwave and Shortwave Radiation Fluxes." Journal of Hydrometeorology 13, no. 4 (August 1, 2012): 1301–16. http://dx.doi.org/10.1175/jhm-d-11-073.1.

Abstract:
Accurate estimates of terrestrial hydrologic states and fluxes are, in large part, dependent on accurate estimates of the spatiotemporal variability and uncertainty of land surface forcings, including downwelling longwave (LW) and shortwave (SW) fluxes. However, such characterization of land surface forcings does not always receive proper attention. This study attempts to better estimate LW and SW fluxes, including their uncertainties, by merging different sources of information while considering horizontal error correlations via implementation of a 2D conditioning procedure within a Bayesian framework. A total of 25 experiments were performed utilizing four different, readily available downwelling radiation products. The localized region of space used to constrain horizontal error correlations was defined using an influence length specified a priori. Quantitative comparisons are made against an independent, ground-based observational network. In general, results suggest moderate improvement in cloudy-sky LW fluxes and modest improvement in clear-sky SW fluxes during certain times of the year when using the 2D framework relative to a more traditional 1D framework, but only up to a certain influence length scale. Beyond this length scale the flux estimates were typically degraded because of the introduction of spurious correlations. The influence length scale that yielded the greatest improvement in LW radiative flux estimation during cloudy-sky conditions, in general, increased with increasing cloud cover. These findings have implications for improving downwelling radiative flux estimation and further enhancing existing Land Data Assimilation System (LDAS) frameworks.
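Covariance localization of the kind described here can be sketched as an elementwise taper on a sample covariance; the exponential taper and the influence length `l_c` below are assumptions for illustration, since the abstract does not name the paper's localization function:

```python
import numpy as np

def localize(cov, coords, l_c):
    """Taper sample covariances by a distance-based factor with influence
    length l_c, damping spurious long-range correlations."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    return cov * np.exp(-d / l_c)

rng = np.random.default_rng(0)
coords = rng.uniform(0, 100, size=(20, 2))   # site locations, km (toy grid)
ens = rng.normal(size=(40, 20))              # 40-member flux ensemble
cov = np.cov(ens, rowvar=False)
print(localize(cov, coords, l_c=25.0).shape)  # (20, 20) localized covariance
```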
10

Oertel, Michael, Christopher Kittel, Jonas Martel, Jan-Henrik Mikesch, Marco Glashoerster, Matthias Stelljes, and Hans Theodor Eich. "Pulmonary Toxicity after Total Body Irradiation—An Underrated Complication? Estimation of Risk via Normal Tissue Complication Probability Calculations and Correlation with Clinical Data." Cancers 13, no. 12 (June 12, 2021): 2946. http://dx.doi.org/10.3390/cancers13122946.

Abstract:
Total body irradiation (TBI) is an essential part of various conditioning regimens prior to allogeneic stem cell transplantation, but it is accompanied by relevant (long-term) toxicities. In the lungs, a complex mechanism induces initial inflammation (pneumonitis) followed by chronic fibrosis. The analysis presented here investigates the occurrence of pulmonary toxicity in a large patient collective and correlates it with data derived from normal tissue complication probability (NTCP) calculations. The clinical data of 335 hemato-oncological patients undergoing TBI were analyzed with a follow-up of 85 months. Overall, 24.8% of all patients displayed lung toxicities, predominantly pneumonia and pulmonary obstructions (13.4% and 6.0%, respectively). NTCP calculations estimated median risks to be 20.3%, 0.6% and 20.4% for overall pneumonitis (both radiological and clinical), symptomatic pneumonitis and lung fibrosis, respectively. These numbers are consistent with real-world data from the literature and further specify radiologically and clinically apparent toxicity rates. Overall, the estimated risk for clinically apparent pneumonitis is very low, corresponding to the probability of non-infectious acute respiratory distress syndrome, although the underlying pathophysiology is not identical. Radiological pneumonitis and lung fibrosis are expected to be more common but require more precise documentation by the transplantation team, radiologists and radiation oncologists.
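The abstract reports NTCP risk estimates without naming the model; a common choice for lung endpoints is the Lyman-Kutcher-Burman (LKB) formalism, sketched below with the Burman et al. pneumonitis parameters (TD50 ≈ 24.5 Gy, m ≈ 0.18, n ≈ 0.87). Whether this paper uses LKB, and these parameter values, are assumptions:

```python
import math

def lkb_ntcp(doses, volumes, td50=24.5, m=0.18, n=0.87):
    """LKB NTCP from a dose-volume histogram: doses in Gy, volumes are the
    fractional volumes of the DVH bins; returns a probability in [0, 1]."""
    total = sum(volumes)
    # generalized equivalent uniform dose (gEUD) over the DVH
    geud = sum(v / total * d ** (1.0 / n) for d, v in zip(doses, volumes)) ** n
    t = (geud - td50) / (m * td50)
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))   # standard normal CDF

# e.g. a crude two-bin DVH for a 12 Gy TBI lung dose distribution
print(lkb_ntcp(doses=[10.0, 12.0], volumes=[0.4, 0.6]))
```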
11

Turner, A. J., D. J. Jacob, K. J. Wecht, J. D. Maasakkers, S. C. Biraud, H. Boesch, K. W. Bowman, et al. "Estimating global and North American methane emissions with high spatial resolution using GOSAT satellite data." Atmospheric Chemistry and Physics Discussions 15, no. 4 (February 18, 2015): 4495–536. http://dx.doi.org/10.5194/acpd-15-4495-2015.

Abstract:
We use 2009–2011 space-borne methane observations from the Greenhouse Gases Observing SATellite (GOSAT) to constrain global and North American inversions of methane emissions with 4° × 5° and up to 50 km × 50 km spatial resolution, respectively. The GOSAT data are first evaluated with atmospheric methane observations from surface networks (NOAA, TCCON) and aircraft (NOAA/DOE, HIPPO), using the GEOS-Chem chemical transport model as a platform to facilitate comparison of GOSAT with in situ data. This identifies a high-latitude bias between the GOSAT data and GEOS-Chem that we correct via quadratic regression. The surface and aircraft data are subsequently used for independent evaluation of the methane source inversions. Our global adjoint-based inversion yields a total methane source of 539 Tg a−1 and points to a large East Asian overestimate in the EDGARv4.2 inventory used as a prior. Results serve as dynamic boundary conditions for an analytical inversion of North American methane emissions using radial basis functions to achieve high resolution of large sources and provide full error characterization. We infer a US anthropogenic methane source of 40.2–42.7 Tg a−1, as compared to 24.9–27.0 Tg a−1 in the EDGAR and EPA bottom-up inventories, and 30.0–44.5 Tg a−1 in recent inverse studies. Our estimate is supported by independent surface and aircraft data and by previous inverse studies for California. We find that the emissions are highest in the South-Central US, the Central Valley of California, and Florida wetlands; large isolated point sources such as the US Four Corners also contribute. We attribute 29–44% of US anthropogenic methane emissions to livestock, 22–31% to oil/gas, 20% to landfills/waste water, and 11–15% to coal, with an additional 9.0–10.1 Tg a−1 source from wetlands.
12

Yu, Evan Y., Fenghai Duan, Mark Muzi, Jeremy Gorelick, Bennett Chin, Joshi J. Alumkal, Mary-Ellen Taplin, et al. "Correlation of 18F-fluoride PET response to dasatinib in castration-resistant prostate cancer bone metastases with progression-free survival: Preliminary results from ACRIN 6687." Journal of Clinical Oncology 31, no. 15_suppl (May 20, 2013): 5003. http://dx.doi.org/10.1200/jco.2013.31.15_suppl.5003.

Abstract:
5003 Background: Dasatinib is a SRC kinase inhibitor that decreases bone turnover in men with metastatic castration-resistant prostate cancer (mCRPC). 18F-fluoride PET was used to evaluate differential response between normal and tumor bone to dasatinib. Methods: Patients with bone mCRPC underwent dynamic 18F-fluoride PET imaging prior to and 12 weeks after dasatinib treatment. Up to 5 bone metastases with matching normal bone regions were selected for analysis by SUVmax, Ki, K1, and Patlak flux. Their pre-treatment values and changes from pre-treatment to post-treatment were evaluated via generalized estimating equations to predict skeletal-related events (SRE) and via Cox proportional hazards modeling to predict progression-free survival (PFS) with Prostate Cancer Working Group 2 criteria, overall survival and time to SRE. Results: Eighteen patients treated with dasatinib underwent baseline 18F-fluoride PET imaging; 12 had follow-up scans allowing assessment of changes due to therapy. Median age for all patients was 69 (range 48–86) years. A significant decrease in SUVmax (p=0.0002) occurred in bone metastases with dasatinib, while significant increases in Patlak flux (p=0.0033) occurred in normal bone. Significant differences in changes in tumor bone compared to normal bone in response to dasatinib were noted for SUVmax (p<0.0001). Of 18 patients, 17 had either met progression criteria or died by the time of this analysis. Decreases in tumor bone SUVmax (p=0.019), Ki (p=0.022), and Patlak flux (p=0.034) from pre-treatment to post-treatment correlate with longer PFS. Conclusions: 18F-fluoride PET indicates a differential effect of dasatinib on tumor compared to normal bone in men with mCRPC. In patients undergoing pre- and post-dasatinib 18F-fluoride PET imaging, a decrease in bone mCRPC fluoride uptake in response to treatment correlates with PFS. Clinical trial information: NCT00936975.
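The Cox proportional-hazards step can be illustrated with the lifelines library; the column names and the tiny dataset below are invented placeholders, not the ACRIN 6687 data:

```python
import pandas as pd
from lifelines import CoxPHFitter

# Relate the pre-to-post change in a PET parameter (hypothetical
# "delta_suvmax" column) to progression-free survival.
df = pd.DataFrame({
    "pfs_months":   [3.1, 7.4, 12.0, 5.2, 9.8, 2.5, 15.1, 6.6],
    "progressed":   [1, 1, 0, 1, 1, 1, 0, 1],
    "delta_suvmax": [-0.5, -2.1, -4.0, 0.3, -1.7, 1.1, -3.2, -0.2],
})
cph = CoxPHFitter()
cph.fit(df, duration_col="pfs_months", event_col="progressed")
cph.print_summary()   # hazard ratio for delta_suvmax
```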
13

Brown, Doug. "Estimating the composition of a forest seed bank: a comparison of the seed extraction and seedling emergence methods." Canadian Journal of Botany 70, no. 8 (August 1, 1992): 1603–12. http://dx.doi.org/10.1139/b92-202.

Abstract:
The composition of a forest seed bank was estimated using two methods: (i) seed extraction, i.e., the physical separation of the seeds from the soil via flotation in a salt solution, and (ii) seedling emergence, i.e., the germination of seedlings from soil samples incubated under greenhouse conditions for 5 months. The extraction method predicted a density of 12 500 seeds∙m−2, while the emergence method detected 3800 emergents∙m−2. There was considerable disparity in the species composition derived from the two methods. The extraction method identified 102 different taxa, with 22 species making up 99% of the seeds and 5.6 ± 0.2 species per sample. In contrast, the emergence technique identified fewer species (60) but had more species per sample (7.6 ± 0.2). Eleven species made up 99% of the emergents. Verbascum thapsus represented 34% of the seedlings in the emergence study but only 1% of the extracted seeds. Members of the Polygonaceae represented 19% of the extracted seeds but less than 1% of the seedling emergents. No tree or shrub species were found with the emergence method, although they represented 8% of the extracted seeds. There was a poor correlation between the estimates of species number, seed density, and diversity obtained from the two methods. The seed extraction method had considerably higher variability for these parameters. It is apparent from this study that the seedling emergence and seed extraction methodologies do not produce similar estimates of seed bank composition. The differences are such that comparisons should not be drawn between studies using the different methods. Careful consideration should be given to both the objectives of the seed bank study and the relevant literature prior to the selection of an appropriate method. Key words: seed bank, method, composition, diversity, density, sample number.
14

McGillion, Michael H., Shaunattonie Henry, Jason W. Busse, Carley Ouellette, Joel Katz, Manon Choinière, Andre Lamy, et al. "Examination of psychological risk factors for chronic pain following cardiac surgery: protocol for a prospective observational study." BMJ Open 9, no. 2 (February 2019): e022995. http://dx.doi.org/10.1136/bmjopen-2018-022995.

Abstract:
Introduction: Approximately 400 000 Americans and 36 000 Canadians undergo cardiac surgery annually, and up to 56% will develop chronic postsurgical pain (CPSP). The primary aim of this study is to explore the association of pain-related beliefs and gender-based pain expectations on the development of CPSP. Secondary goals are to: (A) explore risk factors for poor functional status and patient-level cost of illness from a societal perspective up to 12 months following cardiac surgery; and (B) determine the impact of CPSP on quality-adjusted life years (QALYs) borne by cardiac surgery, in addition to the incremental cost for one additional QALY gained, among those who develop CPSP compared with those who do not. Methods and analyses: In this prospective cohort study, 1250 adults undergoing cardiac surgery, including coronary artery bypass grafting and open-heart procedures, will be recruited over a 3-year period. Putative risk factors for CPSP will be captured prior to surgery, at postoperative day 3 (in hospital) and day 30 (at home). Outcome data will be collected via telephone interview at 6-month and 12-month follow-up. We will employ generalised estimating equations to model the primary (CPSP) and secondary outcomes (function and cost) while adjusting for prespecified model covariates. QALYs will be estimated by converting data from the Short Form-12 (version 2) to a utility score. Ethics and dissemination: This protocol has been approved by the responsible bodies at each of the hospital sites, and study enrolment began May 2015. We will disseminate our results through CardiacPain.Net, a web-based knowledge dissemination platform, presentation at international conferences and publications in scientific journals. Trial registration number: NCT01842568.
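A generalised estimating equation of the kind planned here can be sketched with statsmodels; the variables, cluster structure, and data below are illustrative placeholders, not the study's actual covariates:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# GEE for a binary outcome (CPSP at follow-up) with repeated measures
# clustered on patient and an exchangeable working correlation.
rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "patient": np.repeat(np.arange(n // 2), 2),   # two visits per patient
    "cpsp": rng.integers(0, 2, n),
    "pain_beliefs": rng.normal(size=n),
    "female": rng.integers(0, 2, n),
})
model = smf.gee("cpsp ~ pain_beliefs + female", groups="patient", data=df,
                family=sm.families.Binomial(),
                cov_struct=sm.cov_struct.Exchangeable())
print(model.fit().summary())
```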
15

Qian, Chao, Yang Yu, Ke Tang, Yaochu Jin, Xin Yao, and Zhi-Hua Zhou. "On the Effectiveness of Sampling for Evolutionary Optimization in Noisy Environments." Evolutionary Computation 26, no. 2 (June 2018): 237–67. http://dx.doi.org/10.1162/evco_a_00201.

Abstract:
In real-world optimization tasks, the objective (i.e., fitness) function evaluation is often disturbed by noise due to a wide range of uncertainties. Evolutionary algorithms are often employed in noisy optimization, where reducing the negative effect of noise is a crucial issue. Sampling is a popular strategy for dealing with noise: to estimate the fitness of a solution, it evaluates the fitness multiple times independently and then uses the sample average to approximate the true fitness. Obviously, sampling can make the fitness estimation closer to the true value, but it also increases the estimation cost. Previous studies mainly focused on empirical analysis and design of efficient sampling strategies, while the impact of sampling is unclear from a theoretical viewpoint. In this article, we show that sampling can speed up noisy evolutionary optimization exponentially via rigorous running time analysis. For the (1+1)-EA solving the OneMax and the LeadingOnes problems under prior (e.g., one-bit) or posterior (e.g., additive Gaussian) noise, we prove that, under a high noise level, the running time can be reduced from exponential to polynomial by sampling. The analysis also shows that a gap of one in the sample size can lead to an exponential difference in the expected running time, cautioning for a careful selection of the sample size. We further prove by using two illustrative examples that sampling can be more effective for noise handling than parent populations and threshold selection, two strategies that have been shown to be robust to noise. Finally, we also show that sampling can be ineffective when noise does not bring a negative impact.
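The setting analysed here (not the proofs) can be reproduced in a few lines: a (1+1)-EA on OneMax under one-bit prior noise, where each fitness evaluation is repeated k times and averaged. The parameter values are illustrative:

```python
import random

def noisy_onemax(x, p_noise=0.9):
    """OneMax under one-bit prior noise: with prob. p_noise, evaluate a copy
    of x with one uniformly chosen bit flipped."""
    y = list(x)
    if random.random() < p_noise:
        i = random.randrange(len(y)); y[i] = 1 - y[i]
    return sum(y)

def sampled_fitness(x, k):
    return sum(noisy_onemax(x) for _ in range(k)) / k   # average of k samples

def one_plus_one_ea(n=30, k=10, max_evals=200000):
    random.seed(0)
    x = [random.randint(0, 1) for _ in range(n)]
    fx, evals = sampled_fitness(x, k), k
    while sum(x) < n and evals < max_evals:
        y = [b if random.random() >= 1 / n else 1 - b for b in x]  # flip ~1 bit
        fy = sampled_fitness(y, k); evals += k
        if fy >= fx:                     # accept if not worse on average
            x, fx = y, fy
    return sum(x), evals

print(one_plus_one_ea())   # (ones found, evaluations used)
```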
16

Molnar, Christoph, Almut Scherer, Xenofon Baraliakos, Manouk de Hooge, Raphael Micheroli, Pascale Exer, Rudolf O. Kissling, et al. "TNF blockers inhibit spinal radiographic progression in ankylosing spondylitis by reducing disease activity: results from the Swiss Clinical Quality Management cohort." Annals of the Rheumatic Diseases 77, no. 1 (September 22, 2017): 63–69. http://dx.doi.org/10.1136/annrheumdis-2017-211544.

Abstract:
Objectives: To analyse the impact of tumour necrosis factor inhibitors (TNFis) on spinal radiographic progression in ankylosing spondylitis (AS). Methods: Patients with AS in the Swiss Clinical Quality Management cohort with up to 10 years of follow-up and radiographic assessments every 2 years were included. Radiographs were scored by two readers according to the modified Stoke Ankylosing Spondylitis Spine Score (mSASSS) with known chronology. The relationship between TNFi use before a 2-year radiographic interval and progression within the interval was investigated using binomial generalised estimating equation models with adjustment for potential confounding and multiple imputation of missing values. Ankylosing Spondylitis Disease Activity Score (ASDAS) was regarded as mediating the effect of TNFi on progression and added to the model in a sensitivity analysis. Results: A total of 432 patients with AS contributed data for 616 radiographic intervals. Radiographic progression was defined as an increase of ≥2 mSASSS units in 2 years. Mean (SD) mSASSS increase was 0.9 (2.6) units in 2 years. Prior use of TNFi reduced the odds of progression by 50% (OR 0.50, 95% CI 0.28 to 0.88) in the multivariable analysis. While no direct effect of TNFi on progression was present in an analysis including time-varying ASDAS (OR 0.61, 95% CI 0.34 to 1.08), the indirect effect, via a reduction in ASDAS, was statistically significant (OR 0.75, 95% CI 0.59 to 0.97). Conclusion: TNFis are associated with a reduction of spinal radiographic progression in patients with AS. This effect seems mediated through the inhibiting effect of TNFi on disease activity.
17

Khan, Ammara, Ayesha Afzal, Abdul Rauf, and Akbar Waheed. "Comparative efficacy of melatonin in attenuation of endotoxin/LPS induced hepatotoxicity in BALB/c mice." International Journal of Basic & Clinical Pharmacology 7, no. 6 (May 22, 2018): 1191. http://dx.doi.org/10.18203/2319-2003.ijbcp20182105.

Abstract:
Background: Sepsis is characterized by an overwhelming surge of cytokines and oxidative stress attributable to one of many factors, with gram-negative bacteria commonly implicated. Despite major expansion and elaboration of sepsis pathophysiology and therapeutic approaches, the death rate remains very high in septic patients due to multiple organ damage, including hepatotoxicity. The present study aimed to ascertain the efficacy of melatonin (10 mg/kg i.p.) and its comparability with dexamethasone (3 mg/kg i.p.), delivered separately and in combination, in endotoxin-induced hepatotoxicity. Methods: The number of animals in each group was six. Endotoxin/LPS-induced hepatotoxicity was reproduced in mice by giving E. coli LPS intraperitoneally. The preventive role was assessed by giving the experimental agent half an hour prior to LPS injection, whereas the therapeutic potential was examined via delivery after LPS. The extent of liver damage was judged via estimation of serum alanine aminotransferase (ALT) and aspartate aminotransferase (AST), along with histopathological examination of liver tissue. Results: Melatonin was successful in the prevention (Group 3) and treatment (Group 4) of LPS-invoked hepatotoxicity, as evident from the lessening of augmented ALT (p≤0.01) and AST (p≤0.01) along with restoration of pathological changes in liver sections (p≤0.05). Dexamethasone given before (Group 5) and after LPS (Group 6) significantly (p≤0.05) attenuated LPS-generated liver injury. Combination therapy with dexamethasone in conjunction with melatonin (Group 7) after LPS administration attenuated LPS-evoked hepatic dysfunction to a statistically significant degree; however, the result was comparable to single-agent therapy. Conclusions: Melatonin showed promising results in endotoxin-induced hepatotoxicity and can be used as a therapeutic adjunct to conventional treatment strategies in sepsis-induced liver failure. Combination therapy, however, generated no synergistic results.
18

Jog, Neelakshi R., Kendra A. Young, Melissa E. Munroe, Michael T. Harmon, Joel M. Guthridge, Jennifer A. Kelly, Diane L. Kamen, et al. "Association of Epstein-Barr virus serological reactivation with transitioning to systemic lupus erythematosus in at-risk individuals." Annals of the Rheumatic Diseases 78, no. 9 (June 19, 2019): 1235–41. http://dx.doi.org/10.1136/annrheumdis-2019-215361.

Abstract:
Objective: Systemic lupus erythematosus (SLE) is a systemic autoimmune disease with unknown aetiology. Epstein-Barr virus (EBV) is an environmental factor associated with SLE. EBV maintains latency in B cells with frequent reactivation measured by antibodies against viral capsid antigen (VCA) and early antigen (EA). In this study, we determined whether EBV reactivation and single nucleotide polymorphisms (SNPs) in EBV-associated host genes are associated with SLE transition. Methods: SLE patient relatives (n=436) who did not have SLE at baseline were recontacted after 6.3 (±3.9) years and evaluated for interim transitioning to SLE (≥4 cumulative American College of Rheumatology criteria); 56 (13%) transitioned to SLE prior to the follow-up visit. At both visits, detailed demographic, environmental and clinical information and blood samples were obtained. Antibodies against viral antigens were measured by ELISA. SNPs in the IL10, CR2, TNFAIP3 and CD40 genes were typed by ImmunoChip. Generalised estimating equations were used to test associations between viral antibody levels and transitioning to SLE. Results: Mean baseline VCA IgG (4.879±1.797 vs 3.866±1.795, p=0.0003) and EA IgG (1.192±1.113 vs 0.7774±0.8484, p=0.0236) levels were higher in transitioned compared with autoantibody-negative non-transitioned relatives. Increased VCA IgG and EA IgG were associated with transitioning to SLE (OR 1.28, 95% CI 1.07 to 1.53, p=0.007 and OR 1.43, 95% CI 1.06 to 1.93, p=0.02, respectively). Significant interactions were observed between CD40 variant rs48100485 and VCA IgG levels, and between IL10 variant rs3024493 and VCA IgA levels, in transitioning to SLE. Conclusion: Heightened serologic reactivation of EBV increases the probability of transitioning to SLE in unaffected SLE relatives.
19

SenGupta, Swapnanil. "An empirical analysis on the impact of non-performing loans on investment and economic growth and the role of political governance." Indian Journal of Economics and Development 8 (December 9, 2020): 1–21. http://dx.doi.org/10.17485/ijed/v8.27.

Abstract:
Objective: To empirically analyze the link between nonperforming loans (NPLs) and investment, along with the role of political governance. The estimation technique used is the fixed effects model, including both country and time fixed effects. Methods: The dataset consists of a panel of 103 countries with annual data over the period from 2000 to 2017. A unique composite political governance index has been prepared by combining the six existing governance indicators via Principal Component Analysis (PCA). Findings: It is found that NPL has a significant negative impact whereas governance has a significant positive impact on investment, as per expectations. However, it is found that the negative impact of NPL on investment gets stronger in the presence of good governance. This is a paradoxical result, and further attempts have been made to rationalize the outcome. Applications: The study empirically proves the theory of negative impacts of NPL on investment in the economy. Furthermore, the role of political governance has been scrutinized. No prior works have been carried out on this topic. The paradoxical result in this study has opened up new areas for research. An extensive literature review has been provided along with a detailed discussion of possible measures to tackle the problems. JEL Classification: C3, E6, G0. Keywords: NPL; investment; political governance institutions; fixed effects model; composite political governance index
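Building a PCA-based composite index of the kind described here takes only a few lines; the random data below stands in for the six governance indicators, and using the first principal component as the index is the usual convention:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
indicators = rng.normal(size=(103, 6))   # 103 countries x 6 indicators (toy)

z = StandardScaler().fit_transform(indicators)   # standardise each indicator
pca = PCA(n_components=1)
index = pca.fit_transform(z).ravel()             # first PC = composite index
# Note: the sign of a principal component is arbitrary; orient it so that
# higher values mean better governance before interpreting coefficients.
print(pca.explained_variance_ratio_, index[:5])
```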
20

Steiner, A. K., G. Kirchengast, and H. P. Ladreiter. "Inversion, error analysis, and validation of GPS/MET occultation data." Annales Geophysicae 17, no. 1 (January 31, 1999): 122–38. http://dx.doi.org/10.1007/s00585-999-0122-5.

Abstract:
The global positioning system meteorology (GPS/MET) experiment was the first practical demonstration of global navigation satellite system (GNSS)-based active limb sounding employing the radio occultation technique. This method measures, as principal observable and with millimetric accuracy, the excess phase path (relative to propagation in vacuum) of GNSS-transmitted radio waves caused by refraction during passage through the Earth's neutral atmosphere and ionosphere in limb geometry. It shows great potential utility for weather and climate system studies in providing a unique combination of global coverage, high vertical resolution and accuracy, long-term stability, and all-weather capability. We first describe our GPS/MET data processing scheme from excess phases via bending angles to the neutral atmospheric parameters refractivity, density, pressure and temperature. Special emphasis is given to ionospheric correction methodology and the inversion of bending angles to refractivities, where we introduce a matrix inversion technique (instead of the usual integral inversion). The matrix technique is shown to lead to identical results as integral inversion but is more directly extendable to inversion by optimal estimation. The quality of GPS/MET-derived profiles is analyzed with an error estimation analysis employing a Monte Carlo technique. We consider statistical errors together with systematic errors due to upper-boundary initialization of the retrieval by a priori bending angles. Perfect initialization and properly smoothed statistical errors allow for better than 1 K temperature retrieval accuracy up to the stratopause. No initialization and statistical errors yield better than 1 K accuracy up to 30 km but less than 3 K accuracy above 40 km. Given imperfect initialization, biases >2 K propagate down to below 30 km height in unfavorable realistic cases. Furthermore, results of a statistical validation of GPS/MET profiles through comparison with atmospheric analyses of the European Centre for Medium-range Weather Forecasts (ECMWF) are presented. The comparisons indicate the high utility of the occultation data in that very good agreement of upper troposphere/lower stratosphere temperature (better than 1.5 K rms, <0.5 K bias) is found for a region (Europe+USA) where the ECMWF analyses are known to be good, but poorer agreement for a region (Southern Pacific) where the analyses are known to be degraded. Key words: atmospheric composition and structure (pressure, density and temperature); meteorology and atmospheric dynamics (instruments and techniques); radio science (remote sensing)
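The "matrix inversion technique (instead of the usual integral inversion)" can be illustrated schematically: discretise the Abel-type forward integral linking the profile to the bending angles as alpha = K f, then solve the resulting triangular linear system. The midpoint kernel below is a toy discretisation under assumed notation, not the paper's operator:

```python
import numpy as np

m = 80
a = np.linspace(1.0, 2.0, m)            # impact-parameter grid
da = a[1] - a[0]
mid = a + da / 2.0                      # midpoints avoid the 1/sqrt(0) singularity
K = np.zeros((m, m))
for i in range(m):
    for j in range(i, m):               # only layers at or above a[i] contribute
        K[i, j] = da / np.sqrt(mid[j] ** 2 - a[i] ** 2)

f_true = np.exp(-(a - 1.0) / 0.2)       # synthetic refractivity-like profile
alpha = K @ f_true                      # forward model: bending angles
f_rec = np.linalg.solve(K, alpha)       # upper-triangular, well-posed inversion
print(np.max(np.abs(f_rec - f_true)))   # ~0: exact for the discretised system
```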
21

Ruggeri, Kai, Áine Maguire, Susanne Schmitz, Elisa Haller, Cathal Walsh, Jack Bowden, Isla Kuhn, Ayesha Khan, Gordon Cook, and Michael O'Dwyer. "Estimating the Relative Effectiveness of Treatments in Relapsed/Refractory Multiple Myeloma through a Systematic Review and Network Meta-Analysis." Blood 126, no. 23 (December 3, 2015): 2103. http://dx.doi.org/10.1182/blood.v126.23.2103.2103.

Abstract:
Introduction: Recently introduced treatments have improved survival outcomes in relapsed or refractory multiple myeloma (rrMM), with positive outlooks from ongoing trials. However, evidence on relative effectiveness to inform best practice is lacking due to the paucity of head-to-head clinical trials. This systematic review aimed to compare all treatments in rrMM via a mixed treatment comparison (MTC), taking into account prior lines of therapy. Novel statistical techniques using a network meta-analysis (NMA) were developed to estimate relative effectiveness. Methods: A literature search was conducted in August 2014 and repeated in December 2014. Randomised control trials (RCTs) were included if they reported median duration of progression-free survival (PFS), overall survival (OS) or time to progression (TTP) as primary or secondary rrMM treatment outcomes. A Bayesian NMA using non-informative prior distributions was fitted in the software application R using JAGS. Such models allow for the estimation of all pairwise comparisons within a connected network of evidence. Fixed effects were assumed, as each direct comparison in the network is informed by a maximum of two trials, which does not allow for the estimation of a heterogeneity parameter. Considerable heterogeneity was observed across studies. In particular, the number of prior treatment lines among patients recruited into trials in rrMM varied markedly and was often not reported in enough detail to include this potentially important variable in the analysis. As a result, trials conducted in heavily pre-treated patient populations (3 or more prior lines of therapy) were excluded from the primary analysis. Results: A total of 24 RCTs reporting relevant outcomes for 20 different treatment regimens were identified for data extraction. It was not possible to link all 20 regimens within a single evidence network, but the majority (16) were incorporated within two networks (see figure 1). As a result, the analysis estimated all pairwise comparisons within each of the networks; it is not possible to draw conclusions for comparisons across networks. Results are presented in figure 2 for the yellow (larger) and blue (smaller) networks. Three studies were excluded from the presented analysis as median PFS had not yet been reached at follow-up. Median follow-up across all identified studies ranged from 5.59 to 36 months. Within the yellow network, carfilzomib in combination with lenalidomide and dexamethasone was the most effective treatment, followed by lenalidomide and dexamethasone and then bortezomib. In the smaller blue evidence network, bortezomib in combination with dexamethasone and panobinostat was the most effective treatment. Discussion: Decision-making to optimise patient care, supported by clinical guidelines, requires evidence-based assessment of available treatments. In practice this is often restricted to a series of pairwise comparisons of treatments, such that drawing appropriate inferences across all available options is not possible. To our knowledge, the application of NMA and the results presented here are the first of their kind in rrMM. Previous meta-analyses have been reported for individual treatments, but not for all available options. Our analysis appears broadly consistent with evidence from clinical trials for licensed treatments which are now the established standard of care in rrMM. One limitation of our analysis relates to the published evidence on prior treatment lines in rrMM patients.
Fitting a meta-regression to explain heterogeneity may lead to confounding of the treatment effect given the available evidence. Patient-level data would allow for a more reliable analysis of the effect of prior treatment lines on PFS. A further consideration is the level of evidence included within our analysis. NMA typically focuses on data drawn from RCTs, though methods are available to allow for the inclusion of non-RCT evidence. A great deal of published evidence of this sort is available in the rrMM setting, and future analytical approaches should explore how the inclusion of this evidence affects our findings. Figure 1. Reduced RCT evidence network (red links: trials not reporting OS outcomes). Figure 2. Odds ratios and 95% credible intervals for pairwise comparisons A versus B; significant differences shaded in green. Estimates below 1 favour drug A, estimates above 1 favour drug B. Disclosures: Ruggeri: Cogentia UK: Research Funding. Maguire: Cogentia Healthcare Consulting: Research Funding. Cook: Celgene: Consultancy, Research Funding, Speakers Bureau; BMS: Consultancy; Sanofi: Consultancy, Speakers Bureau; Amgen: Consultancy, Speakers Bureau; Takeda Oncology: Consultancy, Research Funding, Speakers Bureau; Janssen: Consultancy, Research Funding, Speakers Bureau. O'Dwyer: Celgene: Honoraria, Research Funding.
22

Longuevergne, L., C. R. Wilson, B. R. Scanlon, and J. F. Crétaux. "GRACE water storage estimates for the Middle East and other regions with significant reservoir and lake storage." Hydrology and Earth System Sciences 17, no. 12 (December 5, 2013): 4817–30. http://dx.doi.org/10.5194/hess-17-4817-2013.

Abstract:
While GRACE (Gravity Recovery and Climate Experiment) satellites are increasingly being used to monitor total water storage (TWS) changes globally, the impact of the spatial distribution of water storage within a basin is generally ignored but may be substantial. In many basins, water is often stored in reservoirs or lakes, flooded areas, small aquifer systems, and other localized regions with areas typically below GRACE resolution (~200 000 km2). The objective of this study was to assess the impact of nonuniform water storage distribution on GRACE estimates of TWS changes as basin-wide averages, focusing on surface water reservoirs and using a priori information on reservoir storage from radar altimetry. The analysis included numerical experiments testing effects of the location and areal extent of the localized mass (reservoirs) within a basin on basin-wide average water storage changes, and application to the lower Nile (Lake Nasser) and Tigris–Euphrates basins as examples. Numerical experiments show that by assuming uniform mass distribution, GRACE estimates may under- or overestimate basin-wide average water storage by up to a factor of ~2, depending on reservoir location and areal extent. Although reservoirs generally cover less than 1% of the basin area, and their spatial extent may be unresolved by GRACE, reservoir storage may dominate water storage changes in some basins. For example, reservoir storage accounts for ~95% of seasonal water storage changes in the lower Nile and 10% in the Tigris–Euphrates. Because reservoirs are used to mitigate droughts and buffer against climate extremes, their influence on interannual timescales can be large. For example, the TWS decline during the 2007–2009 drought in the Tigris–Euphrates basin measured by GRACE was ~93 km3. Actual reservoir storage from satellite altimetry was limited to 27 km3, but its apparent impact on GRACE reached 45 km3, i.e., 50% of the GRACE trend. Therefore, the actual impact of reservoirs would have been greatly underestimated (27 km3) if reservoir storage changes were assumed uniform in the basin. Consequently, the estimated groundwater contribution from GRACE would have been largely overestimated in this region if the actual distribution of water were not explicitly taken into account. Effects of point masses on GRACE estimates are not easily accounted for via simple multiplicative scaling, but in many cases independent information may be available to improve estimates. Accurate estimation of the reservoir contribution is critical, especially when separating groundwater storage changes from GRACE total water storage (TWS) changes. Because the influence of spatially concentrated water storage – and more generally water distribution – is significant, GRACE estimates will be improved by combining independent water mass spatial distribution information with GRACE observations, even when reservoir storage is not the dominant mechanism. In this regard, data from the upcoming Surface Water Ocean Topography (SWOT) satellite mission should be an especially important companion to GRACE-FO (Follow-On) observations.
23

Benjamin-Neelon, Sara, Tiange Liu, Eve S. Puffer, Liz Turner, Daniel Zaltz, Andrew Thorne-Lyman, and Sherryl Broverman. "A Garden-Based Intervention to Improve Dietary Diversity in Kenyan School Children: Results from a Natural Experiment." Current Developments in Nutrition 4, Supplement_2 (May 29, 2020): 810. http://dx.doi.org/10.1093/cdn/nzaa053_015.

Abstract:
Objectives: School gardens may improve child diet, but little is known about their effectiveness in rural areas in low-income countries. We evaluated the ability of school gardens to improve child diet in rural Kenya. We hypothesized that children in intervention schools would improve their dietary diversity and specifically their produce intake. Methods: A non-governmental organization installed gardens in 2 primary schools. We selected 2 geographically proximal additional schools as comparisons. We conducted baseline assessments in 2013, prior to garden installation, and follow-up assessments a year later in 2014 in all 4 schools. We measured child dietary intake via a single 24-hour recall. We calculated dietary diversity using the women's dietary diversity score (WDDS) (continuous) and also examined each of the 10 food group components, defined as adequate at ≥15 g (binary). We conducted marginal linear or logistic regression models using a generalized estimating equation and included an exposure × time interaction to assess differences in outcomes between intervention and comparison schools from baseline to follow-up. We controlled for child age, gender, and orphan status. Results: We assessed 855 children (n = 438 intervention; n = 417 comparison) at baseline and 688 children (n = 383 intervention; n = 305 comparison) at follow-up. Children in intervention schools were 51.8% male, compared to 56.5% in comparison schools. Mean (standard deviation) age was 11.6 (2.1) years in intervention and 11.8 (2.3) years in comparison schools. All children's WDDS worsened post-intervention. In adjusted difference-in-difference analyses, WDDS did not differ in intervention vs. comparison schools pre- to post-intervention (β 0.04, CI −0.19, 0.27). However, we observed less of a decrease in meeting adequate intake for pulses (OR 2.18, CI 1.18, 4.01) and other fruits (OR 1.55, CI 1.00, 2.40) in intervention versus comparison schools. Conversely, children in comparison schools had less of a decrease in meat, poultry, and fish compared to children in intervention schools (OR 0.67, CI 0.45, 0.99). Conclusions: Children's WDDS worsened in all 4 schools, likely due to a severe drought that affected the region in 2014. We observed some differences in intervention vs. comparison children, but cannot attribute these improvements to school gardens. Funding Sources: Duke Global Health Institute.
24

Singh, H., S. Derksen, M. Sirski, S. McCulloch, and L. M. Lix. "A81 POST COLONOSCOPY COLORECTAL CANCERS IN MANITOBA: A POPULATION-BASED ANALYSIS." Journal of the Canadian Association of Gastroenterology 3, Supplement_1 (February 2020): 95–96. http://dx.doi.org/10.1093/jcag/gwz047.080.

Abstract:
Background: Recent consensus guidelines from the World Endoscopy Organization (WEO) recommend that all jurisdictions report unadjusted rates of post-colonoscopy (PC) colorectal cancers (CRC). Until recently, prior reports have mostly focused on PC-CRC in the CRC screening age groups. Aims: We evaluated the rate and predictors of PC-CRC in the adult population of the province of Manitoba from 1990 to 2016. Methods: Individuals 18+ years at CRC diagnosis were identified from the Manitoba Cancer Registry. Colonoscopies in the 3 years preceding CRC diagnosis were identified via linkage to Manitoba Health (MH) physician billing claims. CRCs were classified, based on WEO recommendations, as: (1) detected CRC (colonoscopy up to 6 months before CRC diagnosis) and (2) PC-CRC-3y (colonoscopy 6–36 months before CRC diagnosis). Generalized linear models with generalized estimating equations (to adjust for clustering within endoscopy physicians) were used to test for differences in rates over 3 time intervals (1990/91–1999/00; 2000/01–2009/10; 2010/11–Dec 31, 2016) and by provincial region of performance of colonoscopy, and to identify other associations from the MH data. Results: Overall, 10.5% of the 16,639 CRCs diagnosed in the study period with colonoscopy in the preceding 3 years were PC-CRC-3y. CRCs diagnosed between April 2000 and March 2010 were more likely to be PC-CRC-3y than those diagnosed between April 2010 and December 2016 (odds ratio [OR] 1.18; 95% confidence interval [CI]: 1.03–1.37). Female sex (OR for male: 0.86; 95% CI: 0.77–0.94), IBD diagnosis (OR 3.04; 95% CI: 2.56–4.52), prior CRC (OR 5.41; 95% CI: 4.61–6.34), prior colonoscopy (OR 2.10; 95% CI: 1.88–2.36), diverticulosis (OR 2.39; 95% CI: 2.16–2.6), and colonoscopy by a GP (OR 1.62; 95% CI: 1.16–2.26 vs. surgeons) were associated with increased odds of PC-CRC-3y. There were no regional differences, and no effect of colonoscopy volume or age greater than 75 (or lower than 50). Conclusions: In Manitoba, the PC-CRC-3y rate decreased slightly in recent years. The large number of PC-CRC-3y cases, along with only a slight decrease in rates over the years, supports calls for root cause analysis to evaluate individual cases of PC-CRC. An initial focus could be the groups with increased risk of PC-CRC. Funding Agencies: Manitoba Health
25

Kumar, Anjani, Naveen Kumar Vishvakarma, Abhishek Tyagi, Alok Chandra Bharti, and Sukh Mahendra Singh. "Anti-neoplastic action of aspirin against a T-cell lymphoma involves an alteration in the tumour microenvironment and regulation of tumour cell survival." Bioscience Reports 32, no. 1 (October 10, 2011): 91–104. http://dx.doi.org/10.1042/bsr20110027.

Abstract:
The present study explores the potential of the anti-neoplastic action of aspirin in a transplantable murine tumour model of a spontaneously originated T-cell lymphoma designated as Dalton's lymphoma. The antitumour action of aspirin administered to tumour-bearing mice through oral and/or intraperitoneal (intratumoral) routes was measured via estimation of survival of tumour-bearing mice, tumour cell viability, tumour progression and changes in the tumour microenvironment. Intratumour administration of aspirin, examined to assess its therapeutic potential, resulted in retardation of tumour progression in tumour-bearing mice. Oral administration of aspirin to mice as a prophylactic measure prior to tumour transplantation further primed the anti-neoplastic action of aspirin administered at the tumour site. The anti-neoplastic action of aspirin was associated with a decline in tumour cell survival, augmented induction of apoptosis and nuclear shrinkage. Tumour cells of aspirin-treated mice were found arrested in the G0/G1 phase of the cell cycle and showed nuclear localization of cyclin B1. Intratumoral administration of aspirin was accompanied by alterations in the biophysical, biochemical and immunological composition of the tumour microenvironment with respect to pH, level of dissolved O2, glucose, lactate, nitric oxide, IFNγ (interferon γ), IL-4 (interleukin-4), IL-6 and IL-10, whereas the TGF-β (transforming growth factor-β) level was unaltered. Tumour cells obtained from aspirin-treated tumour-bearing mice demonstrated an altered expression of the pH regulators monocarboxylate transporter-1 and V-ATPase along with alteration in the level of cell survival regulatory molecules such as survivin, vascular endothelial growth factor, heat-shock protein 70, glucose transporter-1, SOCS-5 (suppressor of cytokine signalling-5), HIF-1α (hypoxia-inducible factor-1α) and PUMA (p53 up-regulated modulator of apoptosis). The study demonstrates a possible indirect involvement of the tumour microenvironment in addition to a direct but limited anti-neoplastic action of aspirin in the retardation of tumour growth.
26

Jawa, Raagini, Michael Stein, Bradley Anderson, Jane M. Liebschutz, Catherine Stewart, Julia Keosaian, and Joshua A. Barocas. "1549. Association of Skin Infections with Sharing of Injection Drug Preparation Equipment among People who Inject Drugs." Open Forum Infectious Diseases 7, Supplement_1 (October 1, 2020): S775. http://dx.doi.org/10.1093/ofid/ofaa439.1729.

Abstract:
Background: Sharing needles and injection drug preparation equipment (IDPE) among people who inject drugs (PWID) is a well-established risk factor for viral transmission. Shared needles and IDPE may be a reservoir for bacteria and serve as a nidus for skin and soft tissue infections (SSTI). Given the rising rates of SSTIs in PWID, we investigated the association of needle and IDPE sharing with history and incidence of SSTI in a cohort of PWID. Methods: Hospitalized PWID with active injection drug use were recruited to a randomized controlled trial of a risk reduction intervention aimed at reducing bacterial and viral infections. A subset of participants (N=252) who injected drugs was included in the analysis. The primary dependent variable in this cohort study was self-reported incidence of SSTI one year post-hospitalization. We assessed three self-reported independent variables from baseline enrollment: (1) sharing needles, (2) sharing IDPE, and (3) sharing needles or IDPE, and compared each of these groups to persons who reported not sharing, using univariate and multilevel Poisson regression models to estimate the adjusted effect of baseline sharing on the incidence of SSTI during follow-up. Results: Participant characteristics: mean age 37.9 years; 58% male; 90% primarily injected opioids; 43% injected with others; 13% shared IDPE only; 50% shared needles or IDPE. In general, persons who shared IDPE only, compared with those who did not share, were younger, more likely to be female and Caucasian, less likely to primarily inject opioids, and had a higher mean score on the knowledge scale. We found no significant differences in prior self-reported SSTI. Adjusting for randomization to the behavioral intervention arm for skin cleaning, persons who shared needles only, and persons who shared needles or IDPE, had a higher incidence of SSTI compared with persons who did not share (IRR 1.90, 95% CI 1.03–3.51, p=0.04; IRR 2.14, 95% CI 1.23–3.72, p=0.007). Persons who shared IDPE only did not have a statistically significantly higher incidence of SSTI compared with persons who did not share (IRR 1.3, 95% CI 0.89–1.95, p=0.157). Conclusion: In this cohort of hospitalized PWID with active injection drug use, we found a significant association between baseline sharing of needles or IDPE, but not of IDPE only, and the incidence of self-reported SSTI. Disclosures: All authors: no reported disclosures.
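As an orienting illustration of how incidence rate ratios (IRRs) like those above are obtained, here is a minimal single-level Poisson sketch with a follow-up-time offset; the published analysis used a multilevel model, and the file and column names below are hypothetical.

# Minimal single-level sketch of a Poisson incidence model (the study used a
# multilevel model). File and column names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("pwid_cohort.csv")  # hypothetical baseline + follow-up extract

model = smf.glm(
    "ssti_count ~ shared_needles_or_idpe + skin_cleaning_arm",
    data=df,
    family=sm.families.Poisson(),
    offset=np.log(df["followup_years"]),  # person-time enters as an offset
)
result = model.fit()
print(np.exp(result.params))  # exponentiated coefficients are IRRs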
27

Ghosh, Toshi, Wilson I. Gonsalves, Dragan Jevremovic, S. Vincent Rajkumar, Michael M. Timm, William Morice, Angela Dispenzieri, et al. "The Prognostic Significance of Polyclonal Bone Marrow Plasma Cells in Patients with Actively Relapsing Multiple Myeloma." Blood 128, no. 22 (December 2, 2016): 1194. http://dx.doi.org/10.1182/blood.v128.22.1194.1194.

Abstract:
Background: Prior studies suggest that the presence of >5% polyclonal plasma cells (pPCs) among total plasma cells (PCs) within the bone marrow (BM) is associated with longer progression-free survival, higher response rates, and a lower frequency of high-risk cytogenetic abnormalities in patients with newly diagnosed multiple myeloma (MM). However, the incidence and prognostic utility of this factor in patients with relapsed and/or refractory MM have not been previously evaluated. Thus, we evaluated the prognostic value of quantifying the percentage of pPCs among the total PCs in the BM of patients with actively relapsing MM. Methods: We evaluated all MM patients with actively relapsing disease (biochemical and/or symptomatic) seen at the Mayo Clinic, Rochester, from 2012 to 2013, who had BM samples evaluated by seven-color multiparametric flow cytometry. All patients had at least 24 months of follow-up from the date of flow evaluation. Cell surface antigens were assessed by direct immunofluorescence antibodies for CD45, CD19, CD38, CD138, cytoplasmic kappa and lambda Ig light chains, and DAPI nuclear stain. The flow cytometry data were collected using Becton Dickinson FACSCanto II instruments analyzing 150,000 events (cells); these data were then analyzed by multi-parameter analysis using the BD FACSDiva software. PCs were selectively analyzed through combinatorial gating using light scatter properties and CD38, CD138, CD19, and CD45. Clonal PCs were separated from pPCs based on the differential expression of CD45, CD19, DAPI (in non-diploid cases), and immunoglobulin light chains. The percentage of pPCs was calculated among total PCs detected. Survival analysis was performed by the Kaplan-Meier method and differences were assessed using the log-rank test. Results: There were 180 consecutive patients with actively relapsing MM who had BM biopsies analyzed via flow cytometry as part of their routine clinical evaluation. The median age of this group was 65 years (range: 40–87); 52% were male. At the time of this analysis, 104 patients had died, and the 2-year overall survival (OS) rate for the cohort was 58%. The median number of therapies received was 4 (range: 1–15). Of these patients, 61% had received a prior ASCT, and almost all (99%) had received prior regimens containing either immunomodulators or proteasome inhibitors. There were 55 (30%) patients with >5% pPCs among the total PCs in their BM. The median percentage of pPCs among total PCs in these 55 patients was 33% (range: 5–99). The median OS for those with >5% pPCs was not reached, compared with 22 months for those with <5% pPCs (P = 0.028; Figure 1). Patients with <5% pPCs had a higher likelihood of high-risk FISH cytogenetics compared with the rest of the patients. In a univariate analysis, an increasing percentage of pPCs was associated with improved OS, while a higher labeling index, a greater number of prior therapies, and the presence of high-risk FISH cytogenetics were associated with worse OS. In a multivariate analysis, only the increasing percentage of pPCs (P = 0.006), higher labeling index (P = 0.0002) and number of prior therapies (P = 0.003) retained statistical significance. Conclusion: Quantitative estimation of the percentage of pPCs among the total PCs in the BM of patients with actively relapsing MM was determined to be a predictor of OS. As such, this parameter is able to identify a group of patients with actively relapsing MM who have a particularly poor outcome.
Further studies evaluating its biological significance are warranted. Figure 1. Kaplan-Meier curve comparing OS between patients with ≥5% pPCs and <5% pPCs among the total PCs in their BM. Disclosures: Kapoor: Celgene: Research Funding; Amgen: Research Funding; Takeda: Research Funding. Gertz: Prothena Therapeutics: Research Funding; Novartis: Research Funding; Alnylam Pharmaceuticals: Research Funding; Research to Practice: Honoraria, Speakers Bureau; Med Learning Group: Honoraria, Speakers Bureau; Celgene: Honoraria; NCI Frederick: Honoraria; Sandoz Inc: Honoraria; GSK: Honoraria; Ionis: Research Funding; Annexon Biosciences: Research Funding. Kumar: AbbVie: Research Funding; Noxxon Pharma: Consultancy, Research Funding; Celgene: Consultancy, Research Funding; Janssen: Consultancy, Research Funding; Array BioPharma: Consultancy, Research Funding; Sanofi: Consultancy, Research Funding; Onyx: Consultancy, Research Funding; Skyline: Honoraria, Membership on an entity's Board of Directors or advisory committees; Millennium: Consultancy, Research Funding; Kesios: Consultancy; Glycomimetics: Consultancy; BMS: Consultancy.
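The survival comparison above is a standard Kaplan-Meier/log-rank workflow; the sketch below shows one way to reproduce that kind of analysis with the open-source lifelines library. It is not the authors' code, and the file and column names are hypothetical.

# Minimal Kaplan-Meier / log-rank sketch (not the authors' code). File and
# column names are hypothetical.
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

df = pd.read_csv("relapsed_mm_cohort.csv")
high = df[df["pct_ppc"] >= 5]  # >=5% polyclonal plasma cells
low = df[df["pct_ppc"] < 5]

kmf = KaplanMeierFitter()
kmf.fit(high["os_months"], event_observed=high["died"], label=">=5% pPCs")
ax = kmf.plot_survival_function()
kmf.fit(low["os_months"], event_observed=low["died"], label="<5% pPCs")
kmf.plot_survival_function(ax=ax)

# Log-rank test for a difference between the two OS curves
res = logrank_test(high["os_months"], low["os_months"],
                   event_observed_A=high["died"], event_observed_B=low["died"])
print(res.p_value)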
28

Campbell, C. A., R. P. Zentner, P. Basnyat, R. De Jong, R. Lemke, and R. Desjardins. "Nitrogen mineralization under summer fallow and continuous wheat in the semiarid Canadian prairie." Canadian Journal of Soil Science 88, no. 5 (November 1, 2008): 681–96. http://dx.doi.org/10.4141/cjss07115.

Abstract:
The ability of soils to provide a portion of the N required by crops via N mineralization of organic matter is of economic and environmental importance. Over a 40-yr period (1967–2006), soil NO3-N and plant-N measurements were made under summer fallow and in systems cropped to spring wheat (Triticum aestivum L.), on a medium-textured Orthic Brown Chernozem (Aridic Haploboroll), at Swift Current, Saskatchewan. These values were used to estimate net N mineralization (Nmin). Each year, above-ground plant N was measured at harvest and soil NO3-N was measured before seeding, soon after harvest, and just prior to freeze-up in October. Also, in the first 18 yr of this study NO3-N and above-ground plant N were measured eight times between spring and fall in selected treatments; these data were used to make a more detailed estimate of Nmin. In a third experiment, conducted on the same soil at a nearby site in 1975, many small lysimeters were sampled six times between spring and harvest of spring wheat. We used this lysimeter study to assess the effect of N fertilizer rate and soil water on net Nmin. Results from the more frequent sampling were more plausible than those from sampling at three different times per year. On average, net Nmin in the 20-mo summer fallow period was about 118 kg ha-1 (15 kg ha-1 between harvest and the first spring, 93 kg ha-1 between the first spring and second fall, and 10 kg ha-1 between the second fall and seeding). The average net Nmin under a wheat crop between spring and fall was between 53 and 63 kg ha-1. Net Nmin increased with water, but excessive water appeared to reduce apparent net Nmin, probably due to leaching and denitrification losses of N, which were not assessed in our estimation of Nmin. Regression analysis was used to show a positive association between net Nmin and precipitation, between spring and fall, for most of the systems examined. There was evidence that tillage promotes N mineralization. At normal rates of N fertilizer (i.e., < 100 kg ha-1), fertilizer had no effect on Nmin. Net Nmin was directly proportional to fallow frequency, averaging 68, 83, and 90 kg ha-1 yr-1 for continuous wheat, fallow-wheat-wheat, and fallow-wheat rotations, respectively. Although our results may only be applicable to medium-textured soils of similar organic matter content in the Brown and Dark Brown Chernozemic soil zones, they provide data and information against which process-based models can be tested. They also provide useful first approximations of Nmin measured under field conditions where few long-term data currently exist. Key words: N mineralization, plant-N, fertilizer-N, crop rotation, irrigation, tillage
29

Moskowitz, Craig H., Michelle A. Fanale, Bijal D. Shah, Ranjana H. Advani, Robert Chen, Stella Kim, Ana Kostic, Tina Liu, Joanna Peng, and Andres Forero-Torres. "A Phase 1 Study of Denintuzumab Mafodotin (SGN-CD19A) in Relapsed/Refractory B-Lineage Non-Hodgkin Lymphoma." Blood 126, no. 23 (December 3, 2015): 182. http://dx.doi.org/10.1182/blood.v126.23.182.182.

Abstract:
Background: Denintuzumab mafodotin (SGN-CD19A) is a novel antibody-drug conjugate (ADC) composed of a humanized anti-CD19 monoclonal antibody conjugated to the microtubule-disrupting agent monomethyl auristatin F (MMAF) via a maleimidocaproyl linker. CD19 is a B-cell-specific marker expressed in the vast majority of patients (pts) with B-cell non-Hodgkin lymphoma (NHL). Methods: An ongoing phase 1, dose-escalation study is investigating the safety, tolerability, pharmacokinetics (PK), and antitumor activity of denintuzumab mafodotin in pts with relapsed or refractory (R/R) B-cell NHL (NCT01786135). Eligible pts were ≥12 yrs of age and were R/R to ≥1 prior systemic regimen; pts with diffuse large B-cell lymphoma (DLBCL) or follicular lymphoma grade 3 (FL3) had also received intensive salvage therapy ± autologous stem cell transplant (ASCT), unless they refused or were ineligible. Denintuzumab mafodotin was administered IV every 3 weeks (q3wk; 0.5–6 mg/kg) for dose escalation and every 6 weeks (q6wk; 3 mg/kg) in a subsequent expansion cohort. A modified continual reassessment method was used for dose allocation and maximum tolerated dose (MTD) estimation in the q3wk dosing schedule. Archived tissue was collected to assess potential biomarkers of response. Results: To date, 62 pts have been treated, including 53 pts (85%) with DLBCL (of whom 16 had transformed DLBCL), 5 (8%) with mantle cell lymphoma, and 3 (5%) with FL3. Median age was 65 yrs (range, 28–81). Pts had received a median of 2 prior systemic therapies (range, 1–6); 15 pts (24%) had prior ASCT. Thirty-seven pts (60%) were refractory to the most recent prior therapy. Fifty-two pts were treated in the q3wk schedule (0.5–6 mg/kg), and 10 pts were treated with 3 mg/kg q6wk. Five pts remain on treatment (2 q3wk pts, 3 q6wk pts). Overall, 20 (33%) of 60 efficacy-evaluable pts achieved objective responses, including 13 (22%) with CRs. Eighteen of the 20 objective responses were achieved by the end of Cycle 2 (15 q3wk pts, 3 q6wk pts).

Table. Best clinical response by dosing schedule and disease status, n (%):

                           Q3wk dosing (N=51)            Q6wk dosing (N=9)
                           Relapsed(a)   Refractory(b)   Relapsed(a)    Refractory(b)
                           N=22          N=29            N=3            N=6
Complete remission (CR)    7 (32)        3 (10)          3 (100)        -
Partial remission (PR)     4 (18)        3 (10)          -              -
Stable disease (SD)        6 (27)        7 (24)          -              3 (50)
Progression                5 (23)        16 (55)         -              3 (50)
ORR (CR+PR), % (95% CI)    50 (28, 72)   21 (8, 40)      100 (29, 100)  -
CR rate, % (95% CI)        32 (14, 55)   10 (2, 27)      100 (29, 100)  -

ORR = objective response rate. (a) Best response of CR/PR with most recent prior therapy. (b) Best response of SD/PD with most recent prior therapy.

Median duration of objective response in the q3wk schedule was 39 wks for relapsed pts (95% CI: 11.6, - [range, 0.1+ to 73+ wks]) and 41 wks for refractory pts (95% CI: 13.7, 67 [range, 13.7 to 67 wks]); this included 2 pts who maintained their responses for >15 mos. Data for the q6wk schedule are not yet mature. The MTD was not reached at 0.5–6 mg/kg q3wk, and only 1 DLT was observed (G3 keratopathy at 3 mg/kg). Toxicity profiles were similar across both dosing schedules; the most frequently reported adverse events (AEs) were blurry vision (65%), dry eye (52%), fatigue and keratopathy (35% each), constipation (29%), photophobia (27%), and nausea (26%). Ocular symptoms and corneal exam findings consistent with superficial microcystic keratopathy were observed in 52 pts (84%); symptoms were less severe than the associated corneal exam findings.
Keratopathy was managed with topical steroids and dose modifications, and improved/resolved within a median of ~5 wks (range, 1–17) in pts for whom there was sufficient follow-up. ADC PK demonstrated a mean terminal half-life of ~2 wks, and accumulation was observed following multiple dose administrations in both schedules. Conclusions: Denintuzumab mafodotin is generally well tolerated and demonstrates encouraging activity with durable responses in heavily pre-treated pts with B-cell NHL. In relapsed pts, 56% achieved objective responses, with a CR rate of 40% across both the q3wk and q6wk schedules. The low rate of myelosuppression and neuropathy suggests that denintuzumab mafodotin could be incorporated into novel combination regimens in earlier lines of therapy. A randomized phase 2 trial is being initiated to evaluate RICE (rituximab, ifosfamide, carboplatin, etoposide) ± denintuzumab mafodotin pre-ASCT as second-line treatment for pts with DLBCL. Disclosures: Moskowitz: Seattle Genetics, Inc.: Consultancy, Research Funding; Merck: Research Funding; Genentech: Research Funding. Off Label Use: Denintuzumab mafodotin (SGN-CD19A) is not approved for use. Fanale: Seattle Genetics, Inc.: Consultancy, Honoraria, Other: Travel expenses, Research Funding. Shah: Janssen: Speakers Bureau; Seattle Genetics: Research Funding; DeBartolo Institute for Personalized Medicine: Research Funding; Rosetta Genomics: Research Funding; Acetylon Pharmaceuticals, Inc.: Membership on an entity's Board of Directors or advisory committees; Plexus Communications: Honoraria; Spectrum: Speakers Bureau; Pharmacyclics: Speakers Bureau; Bayer: Honoraria; Celgene: Consultancy, Membership on an entity's Board of Directors or advisory committees, Speakers Bureau; SWOG: Consultancy; NCCN: Consultancy. Chen: Genentech: Consultancy, Speakers Bureau; Millennium: Consultancy, Research Funding, Speakers Bureau; Seattle Genetics, Inc.: Consultancy, Other: Travel expenses, Research Funding, Speakers Bureau. Kim: Bayer: Consultancy; Seattle Genetics, Inc.: Consultancy, Research Funding; Eli Lilly: Consultancy. Kostic: Seattle Genetics, Inc.: Employment, Equity Ownership. Liu: Seattle Genetics, Inc.: Employment, Equity Ownership, Other: Travel expenses. Peng: Seattle Genetics, Inc.: Employment, Equity Ownership. Forero-Torres: Seattle Genetics, Inc.: Research Funding.
30

Moskowitz, Craig H., Andres Forero-Torres, Bijal D. Shah, Ranjana Advani, Paul Hamlin, Stella Kim, Ana Kostic, Larissa Sandalic, Baiteng Zhao, and Michelle A. Fanale. "Interim Analysis of a Phase 1 Study of the Antibody-Drug Conjugate SGN-CD19A in Relapsed or Refractory B-Lineage Non-Hodgkin Lymphoma." Blood 124, no. 21 (December 6, 2014): 1741. http://dx.doi.org/10.1182/blood.v124.21.1741.1741.

Abstract:
Background CD19, a B-cell specific marker, is expressed in the majority of patients with B-cell non-Hodgkin lymphoma (NHL). SGN-CD19A is a novel antibody-drug conjugate (ADC) composed of a humanized anti-CD19 monoclonal antibody conjugated to the microtubule-disrupting agent monomethyl auristatin F (MMAF) via a maleimidocaproyl linker. Methods This ongoing phase 1, open-label, dose-escalation study investigates the safety, tolerability, pharmacokinetics, and antitumor activity of SGN-CD19A in patients with relapsed or refractory B-cell NHL (NCT 01786135). Eligible patients are ≥12 years of age and must have a confirmed diagnosis of diffuse large B-cell lymphoma (DLBCL), including transformed follicular histology; mantle cell lymphoma (MCL); follicular lymphoma grade 3 (FL3); Burkitt lymphoma; or B-cell lymphoblastic lymphoma. Patients must be relapsed or refractory to at least 1 prior systemic regimen. Patients with DLBCL or FL3 must have also received intensive salvage therapy with or without autologous stem cell transplant (SCT), unless they refused or were deemed ineligible. A modified continual reassessment method is used for dose allocation and maximum tolerated dose (MTD) estimation. SGN-CD19A is administered IV on Day 1 of 21-day cycles (0.5–6 mg/kg). Response is assessed with CT and PET scans according to the Revised Response Criteria for Malignant Lymphoma (Cheson 2007). Results To date, 44 patients have been treated: 39 patients (89%) with DLBCL (including 10 with transformed DLBCL), 4 (9%) with MCL, and 1 (2%) with FL3. Median age was 65 years (range, 33–81). Patients had a median of 2 prior systemic therapies (range, 1–7), and 10 patients (23%) had autologous SCT. Twenty-six patients (59%) were refractory to their most recent prior therapy, and 18 (41%) were relapsed. Patients received a median of 3 cycles of treatment (range, 1–12) at doses from 0.5–6 mg/kg. Eleven patients (25%) remain on treatment, and 33 have discontinued treatment (18 due to progressive disease [PD], 5 for investigator decision, 5 for adverse events [AE], 4 because of patient decision/non-AE, and 1 for SCT). No dose-limiting toxicity (DLT) in Cycle 1 has been reported. Treatment-emergent AEs reported in ≥20% of patients were blurred vision (59%), dry eye (39%), fatigue (39%), constipation (32%), keratopathy (23%), and pyrexia (20%). Corneal exam findings consistent with superficial microcystic keratopathy were observed in 25 patients (57%) and were mostly Grade 1/2. Grade 3/4 corneal AEs were observed in 4 patients at the higher doses; the majority resolved or improved to Grade 1/2 at last follow-up. Corneal AEs were treated with ophthalmic steroids, and during the trial steroid eye drop prophylaxis was instituted with each dose of study drug. SGN-CD19A ADC plasma exposures were approximately dose-proportional. Accumulation was observed following multiple dose administrations, consistent with a mean terminal half-life of about 2 weeks, suggesting less frequent dosing might be possible. In the 43 efficacy-evaluable patients, the objective response rate (ORR) is 30% (95% CI [17, 46]), including 7 complete responses (CRs; 16%) and 6 partial responses (PRs; 14%). Of the 13 patients with an objective response, 8 are still on study with follow-up times of 0.1–31 weeks; 2 are no longer on study; and 3 had subsequent PD or death with response durations of 14, 19, and 31 weeks.
Table. Best clinical response by disease status relative to most recent therapy, n (%):

                           Relapsed (N=17)   Refractory (N=26)   Total (N=43)
CR                         5 (29)            2 (8)               7 (16)
PR                         4 (24)            2 (8)               6 (14)
SD                         4 (24)            9 (35)              13 (30)
PD                         4 (24)            13 (50)             17 (40)
ORR (CR + PR), (95% CI)    53 (28, 77)       15 (4, 35)          30 (17, 46)

Conclusions To date, SGN-CD19A has shown evidence of clinical activity with an ORR of 30% and a CR rate of 16%. Enrollment in the trial is ongoing to further refine optimal dose and schedule. SGN-CD19A is generally well tolerated. No DLTs have been observed at tested dose levels. Observed ocular AEs are manageable with steroid eye drops and dose modifications. The high response rate (53%) in relapsed patients and low rate of bone marrow suppression or neuropathy suggest that SGN-CD19A could be incorporated into novel combination regimens in earlier lines of therapy. Disclosures: Moskowitz: Merck: Research Funding; Genentech: Research Funding; Seattle Genetics, Inc.: Consultancy, Research Funding. Off Label Use: SGN-CD19A is an investigational agent being studied in patients with B-cell malignancies. SGN-CD19A is not approved for use. Forero-Torres: Seattle Genetics, Inc.: Research Funding, Speakers Bureau. Shah: Pharmacyclics: Speakers Bureau; SWOG: Consultancy; Celgene: Consultancy, Speakers Bureau; NCCN: Consultancy; Seattle Genetics, Inc.: Research Funding; Janssen: Speakers Bureau. Advani: Janssen Pharmaceuticals: Research Funding; Genentech: Research Funding; Pharmacyclics: Research Funding; Celgene: Research Funding; Takeda International Pharmaceuticals Co.: Research Funding; Seattle Genetics, Inc.: Research Funding, Other: Travel expenses. Hamlin: Seattle Genetics, Inc.: Consultancy, Research Funding. Kim: Bayer: Consultancy; Eli Lilly: Consultancy; Seattle Genetics, Inc.: Consultancy, Research Funding. Kostic: Seattle Genetics, Inc.: Employment, Equity Ownership. Sandalic: Seattle Genetics, Inc.: Employment, Equity Ownership. Zhao: Seattle Genetics, Inc.: Employment, Equity Ownership. Fanale: Seattle Genetics, Inc.: Consultancy, Honoraria, Research Funding, Other: Travel expenses.
31

Wen, Susanna, Michelle Richardson, Ryo Kita, Donna Stram, and Hiroomi Tada. "An Observational Study to Collect and Assess Tissue Samples from Subjects with One of Three Neoplastic Conditions (ANSWer)." Blood 136, Supplement 1 (November 5, 2020): 44–45. http://dx.doi.org/10.1182/blood-2020-140185.

Abstract:
Background: Multiple studies have demonstrated that established human cancer cell lines are inadequate to model intra- and inter-patient tumor heterogeneity and the resulting mechanisms of disease resistance and progression. Ex vivo drug sensitivity screening (DSS) on primary patient samples may provide a unique approach to overcome these limitations, unlocking insights into drug resistance and identifying novel combination therapies that overcome resistance. Notable Labs has built an automated, flow-cytometry-based, high-throughput DSS platform that enables testing hundreds of drugs and combinations on individual patient samples at scale. In addition, custom-built software facilitates data analysis and delivery of results within clinically actionable turnaround times.1 This technology platform is in use in the ANSWer study to obtain evidence for the potential clinical utility of DSS in clinical practice and in the management of hematologic malignancies. Study Design and Methods: This is a prospective, multicenter, observational study with collection of de-identified biospecimens and matched clinical data from up to 1000 participants from clinical networks in the United States and Canada. Clinical information, demographics, and medical data relevant to cancer status are collected from all participants and their medical record at baseline (at study entry and time of baseline biospecimen collection) and at subsequent visits per patient consent, for up to 1 year. The primary assessment is the establishment of a tumor registry with annotated clinical outcomes. Exploratory assessments include correlation of ex vivo functional testing results with clinical outcomes, as well as identification of potential biomarkers that correlate responses with genotype and/or phenotype.

Inclusion Criteria:
- Provide written informed consent;
- Age ≥ 18 years, male or female, of any race;
- Suspected hematologic malignancy in need of starting an active anti-cancer therapy*: acute myelogenous leukemia (AML)(a), multiple myeloma (MM), myelodysplastic syndrome (MDS)(a), lymphoma, acute lymphocytic leukemia (ALL), chronic lymphocytic leukemia (CLL), chronic myelogenous leukemia (CML), myeloproliferative neoplasm (MPN)(a), or other (upon review and approval by the medical monitor);
- Intent to start anti-cancer therapy within 21 days of biospecimen collection;
- ≥ 7 days from last anti-cancer therapy;
- Any number of prior therapies;
- Subject's cohort is currently open(a).

*Supportive care agents, including erythropoiesis-stimulating agents (ESAs) such as EPO, Procrit, and Aranesp; granulocyte colony stimulating factor (GCSF); hydroxyurea (Hydrea); and luspatercept (Reblozyl), are not considered anti-cancer therapy for this study. (a) Cohorts currently open and enrolling: AML, MDS, MPN.

Exclusion Criteria:
- Unwilling or unable to give consent;
- Disease is in remission;
- Subject's cohort is not open at the time of consent;
- Subject is restarting an ongoing treatment regimen after a dose interruption.

Statistical Methods: For the primary assessment, descriptive statistics will be used to summarize baseline patient characteristics, therapies, clinical outcomes, and collection of biospecimens. For exploratory analyses, hierarchical clustering using Euclidean distance metrics and Ward minimum variance will be used to identify patient clusters with distinct ex vivo drug sensitivity patterns within a particular treatment cohort.
The correlation between clinical response and drug sensitivity will be assessed using the area under the receiver-operator curve (AUROC), with bootstrapping to test for significance. A generalized estimating equation model will be used to identify associations between additional exploratory biomarkers, such as somatic mutations identified via NGS, and ex vivo sensitivity to various drug classes within specific treatment cohorts. 1. Spinner M, et al. Ex Vivo Drug Screening Defines Novel Drug Sensitivity Patterns for Informing Personalized Therapy in Myeloid Neoplasms. Blood Adv. 2020;4(12):2768-2778. DOI: 10.1182/bloodadvances.2020001934. The ANSWer Study is currently enrolling at multiple sites across the US, with plans to expand in the US and Canada (Figure). Disclosures: Wen: Notable Labs: Current Employment, Current equity holder in private company. Richardson: Notable Labs: Current Employment, Current equity holder in private company. Kita: Notable Labs: Current Employment, Current equity holder in private company. Tada: Notable Labs: Current Employment, Current equity holder in private company.
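As a concrete illustration of the exploratory clustering step named above, the fragment below is a minimal sketch using SciPy's Ward-linkage hierarchical clustering on a stand-in drug-response matrix; it is not Notable Labs' pipeline, and the matrix shape and cluster count are arbitrary choices.

# Minimal sketch of Ward-linkage hierarchical clustering of ex vivo drug
# sensitivity profiles (not Notable Labs' pipeline). The matrix is stand-in data.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
X = rng.random((40, 120))  # 40 patients x 120 drug-sensitivity readouts

# Euclidean distances with Ward minimum-variance linkage
Z = linkage(X, method="ward", metric="euclidean")

# Cut the dendrogram into a fixed number of clusters (4 is arbitrary here)
labels = fcluster(Z, t=4, criterion="maxclust")
print(labels)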
32

Gill, Sartaj S., Mark Doyle, Diane V. Thompson, Ronald Williams, June Yamrozik, Saundra B. Grant, and Robert W. W. Biederman. "Can 3D RVEF be Prognostic for the Non-Ischemic Cardiomyopathy Patient but not the Ischemic Cardiomyopathy Patient? A Cardiovascular MRI Study." Diagnostics 9, no. 1 (January 23, 2019): 16. http://dx.doi.org/10.3390/diagnostics9010016.

Abstract:
Background: While left ventricular ejection fraction (LVEF) has been shown to have prognostic value in ischemic cardiomyopathy (ICMX) patients, right ventricular ejection fraction (RVEF) has not been systematically evaluated in either ICMX or non-ischemic cardiomyopathy (NICMX) patients. Moreover, accurate estimation of RVEF is problematic due to the geometry of the right ventricle (RV). Over the years, there have been improvements in the resolution, image acquisition and post-processing software for cardiac magnetic resonance imaging (CMR), such that CMR has become the "gold standard" for measuring RV volumetrics and RVEF. We hypothesized that CMR-defined RVEF, more so than LVEF, might have prognostic capability in ischemic and non-ischemic cardiomyopathy (ICMX and NICMX) patients. Methods: Patients who underwent CMR at our institution between January 2005 and October 2012 were retrospectively selected if three-dimensional (3D) LVEF < 35%. Patients were further divided into ICMX and NICMX groups. An electronic medical record (EMR) database inquiry determined all-cause mortality and major adverse cardiovascular events (MACE). Additionally, a Social Security Death Index database inquiry was performed to determine all-cause mortality in patients who were lost to follow-up. Patients were sub-grouped on the basis of 3D RVEF ≥ 20%. Separately, patients were sub-grouped by LVEF ≥ 20% in both ICMX and NICMX cases. A cut-off of ≥20% was chosen for RVEF based on the results of prior studies showing significance in Kaplan-Meier (KM) survival curves. Cumulative event rates were estimated for each subgroup using KM analysis and were compared using the log-rank test. The 3D RV/LVEFs were compared to all-cause mortality and MACE. ICMX patients were defined using the World Health Organization (WHO) criteria. Results: From a 7000-patient CMR database, 753 heart failure patients were selected. Eighty-seven patients met the WHO definitions of ICMX and NICMX (43 ICMX and 44 NICMX). The study patients were followed for a median of 3 years (interquartile range, IQR, 1.5–6.5 years). The mean age of patients was 58 ± 13 years; 79% were male. In ICMX, mean 3D LVEF was 21% ± 6% and mean 3D RVEF was 38% ± 14%, while for NICMX, mean 3D LVEF was 16% ± 6% and mean 3D RVEF was 30% ± 14% (p < 0.005 for intra- and inter-group comparison). It should be noted that LVEF < RVEF in both groups and that each ejection fraction (EF) in NICMX was less than the corresponding EF in ICMX. Overall mortality was higher in ICMX than NICMX (12/40, 30% vs. 7/43, 16%; p < 0.05). Patients were stratified based on both RVEF and LVEF with a threshold of EF ≥ 20%, separately. RVEF, but not LVEF, was a significant predictor of death for NICMX (χ2 = 8; p < 0.005), while LVEF did not predict death in ICMX (χ2 = 2, p = not significant). Similarly, time to MACE was predicted by RVEF for NICMX (χ2 = 9; p < 0.005) but not by LVEF in ICMX (χ2 = 1; p = NS). Importantly, RVEF, while predictive of MACE in NICMX, did not emerge as a predictor of survival or MACE in ICMX. Conclusions: By 3D CMR in non-ischemic CMX patients, RVEF has important value in predicting death and time to first MACE, while 3D LVEF is far less predictive.
33

Borate, Uma, Amir T. Fathi, Bijal D. Shah, Daniel J. DeAngelo, Lewis B. Silverman, Todd Michael Cooper, Tina M. Albertson, et al. "A First-In-Human Phase 1 Study Of The Antibody-Drug Conjugate SGN-CD19A In Relapsed Or Refractory B-Lineage Acute Leukemia and Highly Aggressive Lymphoma." Blood 122, no. 21 (November 15, 2013): 1437. http://dx.doi.org/10.1182/blood.v122.21.1437.1437.

Abstract:
Background: CD19, a member of the immunoglobulin superfamily, is a B-cell specific marker that is found on B cells as early as the pro-B cell stage. CD19 is maintained upon malignant transformation and is expressed in the majority of patients with B-lineage leukemia and non-Hodgkin lymphoma (NHL). SGN-CD19A is a novel antibody-drug conjugate composed of a humanized anti-CD19 monoclonal antibody conjugated to the microtubule-disrupting agent monomethyl auristatin F (MMAF) via a maleimidocaproyl (mc) linker. Upon binding to CD19, SGN-CD19A internalizes and releases cys-mcMMAF, which binds to tubulin and induces G2/M arrest and apoptosis in the targeted cells. Methods: A first-in-human, phase 1, open-label, dose-escalation study has been initiated to investigate the safety, tolerability, pharmacokinetics (PK), and antitumor activity of SGN-CD19A in adult and pediatric patients with relapsed or refractory (R/R) B-cell leukemia or highly aggressive B-cell lymphoma (CT.gov NCT01786096). Eligible patients must have a pathologically confirmed diagnosis of B-cell acute leukemia (B-ALL), Burkitt leukemia or lymphoma, or B-cell lymphoblastic lymphoma (B-LBL), and be R/R to at least 1 (adults) or 2 (pediatric) prior systemic regimens. A modified continual reassessment method is being used for dose allocation and maximum tolerated dose (MTD) estimation. SGN-CD19A is administered IV on Days 1 and 8 of 21-day cycles at up to 7 cohort-specific doses (0.3–2.3 mg/kg). Results: Thirteen patients (11 adults, 2 pediatric) with R/R leukemia (9 B-ALL) or lymphoma (3 B-LBL, 1 Burkitt lymphoma) have been treated in this ongoing study. Adults (73% female) have a median age of 60 years (range, 26–74) and have received a median of 2 prior systemic therapies (range, 1–6). Four of the 11 adults (36%) have also received an allogeneic stem cell transplant (SCT). The pediatric patients, two females 13 and 14 years old, have each received 3 prior systemic therapies; one of the pediatric patients has also received 2 allogeneic SCTs. To date, patients have been treated at 0.3 mg/kg (2 patients), 0.6 mg/kg (3 patients), 1.0 mg/kg (3 patients), and 1.3 mg/kg (5 patients). The maximum number of cycles received by a patient is 7. Four patients remain on treatment and 9 patients have discontinued treatment (7 due to progressive disease, 1 because of investigator decision, and 1 due to death). One patient with B-ALL treated at 1.0 mg/kg developed cardiac arrest in the setting of pre-existing electrolyte abnormalities and died 7 days after the first dose of SGN-CD19A; although this event was considered unrelated to study drug by the investigator, a possible relationship could not be excluded due to temporal association. Treatment-emergent adverse events reported for ≥10% of adult patients were nausea (64%); fatigue and pyrexia (55% each); chills (36%); headache (27%); and dyspnea, hypertension, oral pain, thrombocytopenia, tumor lysis syndrome, and vomiting (18% each). Drug-related AEs in adult patients were pyrexia (55%); nausea (45%); chills (36%); fatigue (27%); and headache, oral pain, and blurred vision (9% each). Drug-related AEs reported for the pediatric patients were abdominal pain, cough, diarrhea, dyspepsia, hyperuricemia, nausea, peripheral neuropathy, pruritus, pyrexia, tachycardia, and urticaria (all Grade 1 or 2, each in one patient). Preliminary data demonstrate rapid clearance of the antibody-drug conjugate at low doses in patients with leukemia, suggesting target-mediated drug disposition.
To date, best responses for patients with lymphoma are stable disease (2 patients) and progressive disease (2 patients). Best responses for the 8 leukemia patients with available response assessments are complete remission (1 adult at 1.3 mg/kg); resistant disease with clinical benefit, i.e., improvement in leukemia-related symptoms (4 patients); and progressive disease (3 patients). Conclusions: MTDs have not yet been identified for adult or pediatric patients and dose escalation continues in both populations. Antitumor activity has been observed, including 1 complete remission in a heavily pretreated B-ALL patient. Nonlinear clearance of the antibody-drug conjugate in leukemia patients suggests target-mediated disposition. Updated safety, PK, and response data will be presented at the meeting. A second trial is evaluating SGN-CD19A every 3 weeks in aggressive B-cell NHL (CT.gov NCT01786135). Disclosures: Borate: Seattle Genetics, Inc.: Research Funding; Genoptix: Consultancy. Fathi: Millennium: Research Funding; Seattle Genetics, Inc.: Advisory/Scientific board membership Other, Research Funding; Agios: Membership on an entity's Board of Directors or advisory committees; Teva: Membership on an entity's Board of Directors or advisory committees. Shah: Seattle Genetics, Inc.: Research Funding; NCCN: Membership on an entity's Board of Directors or advisory committees; SWOG: Membership on an entity's Board of Directors or advisory committees; Celgene: Speakers Bureau; Janssen/Pharmacyclics: Speakers Bureau. DeAngelo: Seattle Genetics, Inc.: Research Funding. Silverman: Seattle Genetics, Inc.: Advisory/scientific board membership Other. Cooper: Seattle Genetics, Inc.: Research Funding. Albertson: Seattle Genetics, Inc.: Employment, Equity Ownership. O'Meara: Seattle Genetics, Inc.: Employment, Equity Ownership. Sandalic: Seattle Genetics, Inc.: Employment, Equity Ownership. Stevison: Seattle Genetics, Inc.: Employment, Equity Ownership. Chen: Seattle Genetics, Inc.: Consultancy, Research Funding, Speakers Bureau, Other: Travel expenses.
34

Fathi, Amir T., Uma Borate, Daniel J. DeAngelo, Maureen M. O'Brien, Tanya Trippett, Bijal D. Shah, Gregory A. Hale, et al. "A Phase 1 Study of Denintuzumab Mafodotin (SGN-CD19A) in Adults with Relapsed or Refractory B-Lineage Acute Leukemia (B-ALL) and Highly Aggressive Lymphoma." Blood 126, no. 23 (December 3, 2015): 1328. http://dx.doi.org/10.1182/blood.v126.23.1328.1328.

Abstract:
Background: Denintuzumab mafodotin (SGN-CD19A) is an antibody-drug conjugate (ADC) composed of a humanized anti-CD19 monoclonal antibody conjugated to the microtubule-disrupting agent monomethyl auristatin F (MMAF) via a maleimidocaproyl linker. CD19 is a B cell-specific marker that is expressed in nearly all patients (pts) with B-lineage acute leukemia or lymphoma. Methods: A phase 1 dose-escalation study is ongoing to evaluate the safety, tolerability, pharmacokinetics, and antitumor activity of denintuzumab mafodotin in pts with relapsed or refractory (R/R) B-ALL, B-cell lymphoblastic lymphoma (B-LBL), or Burkitt leukemia/lymphoma (NCT01786096). Eligible pts are ≥1 year of age and are R/R to ≥1 prior systemic regimen; patients with Philadelphia chromosome-positive (Ph+) disease must have failed a 2nd-generation TKI. A modified continual reassessment method was used for dose allocation and maximum tolerated dose (MTD) estimation. The study evaluated 2 dosing schedules: first weekly (Days 1 and 8 of 21-day cycles) and then once every 3 weeks (q3wk). This report presents data from the adult subset of pts in the study (≥18 years). Results: To date, 71 adult pts with R/R B-ALL (n=59), B-LBL (n=6) or Burkitt leukemia/lymphoma (n=6) have been treated; median age is 45 years (range 18−77). Pts received a median of 2 prior therapies (range 1−8); 20 pts (28%) had prior allogeneic stem cell transplant. On the weekly schedule (0.3−3 mg/kg), 40 pts received a median of 2 cycles (range 1−27); 2 pts remain on treatment. On the q3wk schedule (4−6 mg/kg), 31 pts received a median of 3 cycles (range 1−6); 4 pts remain on treatment. An MTD was identified at 5 mg/kg q3wk and was not reached with weekly dosing. Best clinical responses to date for pts with B-ALL (per Cheson 2003) are summarized below:

Table 1. Best clinical response in efficacy-evaluable adult pts with B-ALL, n (%):

                                              Weekly dosing (N=32)   Q3wk dosing (N=23)
Complete remission (CR)                       6 (19)                 3 (13)
CR with incomplete platelet recovery (CRp)    -                      3 (13)
CR with incomplete blood recovery (CRi)       -                      2 (9)
Partial remission (PR)                        1 (3)                  -
Resistant disease with clinical benefit       15 (47)                12 (52)
Resistant disease without clinical benefit    2 (6)                  -
Progression                                   8 (25)                 3 (13)
CRc (CR+CRi+CRp), % (95% CI)                  19 (7, 36)             35 (16, 57)

In the q3wk schedule, the CRc rate was similar at 4, 5, and 6 mg/kg. The median duration of response across schedules is currently 27 weeks (95% CI 7, −). Of 12 pts with CRc and available minimal residual disease (MRD) assessment, 7 were MRD negative. Three patients with MRD-negative CRs have been in remission for >1 year, 2 of whom have been on continuous treatment for 19 and 22 months. In the subset of pts with Ph+ B-ALL, 4 of 8 pts achieved CR and 1 pt a PR. In 6 pts with Burkitt leukemia/lymphoma, 1 achieved a CR. In 6 pts with B-LBL, objective responses were 1 CR and 2 PR. The adverse event (AE) profiles were similar across both dosing schedules; the most frequently reported AEs were pyrexia (54%), nausea (52%), fatigue (51%), headache (44%), chills (38%), vomiting (37%), blurred vision (35%), and anemia (34%). Ocular symptoms and corneal exam findings consistent with superficial microcystic keratopathy were observed in 40 pts (56%); symptoms were less severe than the associated corneal exam findings. Keratopathy was managed with topical steroids and dose modifications, and improved/resolved within a median of ~3 wks (range 1-17) in pts with sufficient follow-up.
ADC exposures increased with dose, and in leukemia patients, target-mediated disposition was observed that diminished with higher doses in the q3wk schedule. Post treatment, flow cytometry data demonstrate that unbound CD19 on peripheral blasts inversely correlates with ADC concentration in circulation and is present in the majority of evaluable patients at the end of the q3wk cycle. Conclusions: Denintuzumab mafodotin is generally well tolerated and demonstrates activity in heavily pretreated adult pts with B-ALL and B-lineage highly aggressive lymphomas, including durable MRD-negative responses. The results of this trial indicate that the q3wk schedule, with a CRc rate of 35% in B-ALL, warrants further clinical investigation. Based on the encouraging responses observed to date in Ph+ B-ALL (CRs in 4 of 8 pts), an expansion cohort of these pts is currently enrolling on the q3wk schedule. Disclosures: Fathi: Exelixis: Research Funding; Agios: Membership on an entity's Board of Directors or advisory committees; Merck: Membership on an entity's Board of Directors or advisory committees; Seattle Genetics: Membership on an entity's Board of Directors or advisory committees, Research Funding; Ariad: Consultancy; Takeda Pharmaceuticals International Co.: Research Funding. Off Label Use: SGN-CD19A is an investigational agent being studied in patients with B-cell malignancies. SGN-CD19A is not approved for use. Borate: Seattle Genetics: Research Funding; Genoptix: Consultancy; Alexion: Speakers Bureau; Gilead: Speakers Bureau; Novartis: Speakers Bureau; Amgen: Speakers Bureau. DeAngelo: Agios: Consultancy; Incyte: Consultancy; Celgene: Consultancy; Amgen: Consultancy; Pfizer: Consultancy; Bristol Myers Squibb: Consultancy; Ariad: Consultancy; Novartis: Consultancy. O'Brien: Seattle Genetics, Inc.: Research Funding. Trippett: OSI Pharmaceuticals: Research Funding; Seattle Genetics, Inc.: Research Funding. Shah: NCCN: Consultancy; SWOG: Consultancy; Seattle Genetics: Research Funding; Acetylon Pharmaceuticals, Inc.: Membership on an entity's Board of Directors or advisory committees; Pharmacyclics: Speakers Bureau; Bayer: Honoraria; Celgene: Consultancy, Membership on an entity's Board of Directors or advisory committees, Speakers Bureau; Janssen: Speakers Bureau; DeBartolo Institute for Personalized Medicine: Research Funding; Rosetta Genomics: Research Funding; Plexus Communications: Honoraria; Spectrum: Speakers Bureau. Hale: Seattle Genetics, Inc.: Research Funding; Hyundai: Research Funding; V Foundation: Research Funding. Silverman: Seattle Genetics, Inc.: Research Funding. Pauly: Seattle Genetics, Inc.: Research Funding. Kim: Bayer: Consultancy; Seattle Genetics, Inc.: Consultancy, Research Funding; Eli Lilly: Consultancy. Kostic: Seattle Genetics, Inc.: Employment, Equity Ownership. Huang: Seattle Genetics, Inc.: Employment, Equity Ownership. Pan: Seattle Genetics, Inc.: Employment, Equity Ownership. Chen: Genentech: Consultancy, Speakers Bureau; Seattle Genetics, Inc.: Consultancy, Other: Travel expenses, Research Funding, Speakers Bureau; Millennium: Consultancy, Research Funding, Speakers Bureau.
35

Pan, Yongpeng, Zhenxue Chen, Xianming Li, and Weikai He. "Single-Image Dehazing via Dark Channel Prior and Adaptive Threshold." International Journal of Image and Graphics, March 8, 2021, 2150053. http://dx.doi.org/10.1142/s0219467821500534.

Abstract:
In hazy weather, outdoor image quality is degraded and image contrast is reduced, which lowers the performance of computer vision systems such as target recognition. The traditional algorithm based on the dark channel principle has two weaknesses. First, restored images show obvious color distortion in the sky region. Second, white regions in the scene easily bias the estimated atmospheric light. To solve these problems, this paper proposes a single-image dehazing and image segmentation method via the dark channel prior (DCP) and an adaptive threshold. The sky region of a hazy image is relatively bright and therefore does not satisfy the DCP. The sky is separated by the adaptive threshold, and the scenery and sky regions are then dehazed separately. To avoid interference from white objects in estimating atmospheric light, we estimate the atmospheric light from the separated sky region. The proposed algorithm overcomes the inability of DCP-based methods to effectively process hazy images containing sky regions, while avoiding the effect of white objects on atmospheric light estimation. Experimental results show the feasibility and effectiveness of the improved algorithm.
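For orientation, the fragment below is a minimal numpy/OpenCV sketch of the classic dark-channel-prior recovery pipeline (He et al.) on which this paper builds; it is not the authors' adaptive-threshold variant, and omega, t0, and the 15x15 patch size are conventional choices rather than values from the paper.

# Minimal sketch of classic dark-channel-prior dehazing (He et al.), not the
# authors' adaptive-threshold variant. Parameters are conventional defaults.
import cv2
import numpy as np

def dark_channel(img, patch=15):
    # Per-pixel minimum over color channels, then a minimum filter (erosion)
    # over a patch neighborhood.
    min_rgb = img.min(axis=2)
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (patch, patch))
    return cv2.erode(min_rgb, kernel)

def estimate_atmospheric_light(img, dark):
    # Average the image over the brightest 0.1% of dark-channel pixels.
    n = max(1, dark.size // 1000)
    idx = np.argsort(dark.ravel())[-n:]
    return img.reshape(-1, 3)[idx].mean(axis=0)

def dehaze(img, omega=0.95, t0=0.1):
    img = img.astype(np.float64) / 255.0
    A = estimate_atmospheric_light(img, dark_channel(img))
    # Transmission map from the dark channel of the A-normalized image.
    t = 1.0 - omega * dark_channel(img / A)
    t = np.clip(t, t0, 1.0)[..., None]
    J = (img - A) / t + A  # scene radiance recovery: J = (I - A)/t + A
    return np.clip(J * 255, 0, 255).astype(np.uint8)

result = dehaze(cv2.imread("hazy.png"))  # hypothetical input file
cv2.imwrite("dehazed.png", result)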
36

Lakitan, Benyamin, Kartika Kartika, Susilawati Susilawati, and Andi Wijaya. "Acclimating leaf celery plant (Apium graveolens) via bottom wet culture for increasing its adaptability to tropical riparian wetland ecosystem." Biodiversitas Journal of Biological Diversity 22, no. 1 (December 26, 2020). http://dx.doi.org/10.13057/biodiv/d220139.

Abstract:
Abstract. Lakitan B, Kartika, Susilawati, Wijaya A. 2021. Acclimating leaf celery plant (Apium graveolens) via bottom wet culture to increase its adaptability to the tropical riparian wetland ecosystem. Biodiversitas 22: 320-328. Bottom-wet culture was set up for acclimating leaf celery plants prior to cultivation under shallow water table conditions. The aim of this research was to evaluate the adaptability of leaf celery plants to the riparian wetland ecosystem. Leaf celery was selected as a potential candidate since the natural habitat of its wild relatives is marshland. Shading at 0%, 20%, and 60% was applied to reduce tropical sunlight intensity. Results of this study indicated that soil moisture was significantly increased in plants exposed to 60% shading, but leaf SPAD value was not significantly affected. Leaf celery is a perennial vegetable that can be frequently harvested. Weekly harvesting was rewarded with optimum yield and good quality leaves, i.e., high SPAD values (45.73 to 51.89). Delaying harvest to 3 weeks increased total yield, but 52.12% of the harvested leaves were non-marketable. Mother plants of leaf celery produced suckers, but the number of suckers was only moderately correlated with yield (R² = 0.56). Plants exposed to 60% shading produced significantly fewer suckers (9.00) than those exposed to full sunlight (12.46) and 20% shading (12.88). Use of a zero-intercept linear regression model, with length of leaf midrib (LLM) × leaf wingspan (LWS) as predictor, resulted in a geometrically based and accurate leaf area estimation model (LA = 0.3431 (LLM × LWS); R² = 0.87) for the compound leaves of the leaf celery plant. In conclusion, the most crucial factor in optimizing the quantity and quality of yield was weekly harvesting focused on marketable-size leaves.
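As a concrete illustration of the reported allometric model, here is a tiny worked example; the 0.3431 coefficient comes from the abstract, while the sample measurements are made-up values.

# Worked example of the reported leaf area model LA = 0.3431 * (LLM * LWS).
# The coefficient is from the abstract; the measurements are made up.
def leaf_area(llm_cm: float, lws_cm: float) -> float:
    """Estimate compound-leaf area (cm^2) from midrib length and wingspan (cm)."""
    return 0.3431 * llm_cm * lws_cm

print(leaf_area(10.0, 8.0))  # about 27.4 cm^2 for a 10 cm x 8 cm leaf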
37

Dijkshoorn, Pieter, Piero Mori, and Mattia Cappelletti Zaffaroni. "High Resolution Site Characterization as key element for proper design and cost estimation of groundwater remediation." Acque Sotterranee - Italian Journal of Groundwater 3, no. 4 (December 30, 2014). http://dx.doi.org/10.7343/as-092-14-0119.

Abstract:
Substantial amounts of money are spent each year on cleaning up groundwater contamination caused by historical industrial site activities. Too often, however, remedial objectives are not achieved within the anticipated time frame. Moreover, remedial budgets estimated prior to the start of remediation turn out to be largely insufficient to meet the remedial objectives. This very common situation creates significant problems for all the stakeholders involved in the remediation project. The failure to meet remedial regulatory closure criteria, or the exceedance of remedial budgets, is often due to an incomplete conceptual site model. Having conducted high resolution site characterization programs at numerous sites where remediation was previously conducted, ERM has found several recurring themes:
• Missed source areas and plumes;
• Inadequate understanding of source area and plume architectures (i.e., three-dimensional contaminant distribution);
• Inadequate understanding of the effects of site (hydro)geologic conditions on the ability to access contamination (i.e., via remedial additive injections or groundwater/soil gas extraction).
This paper explains why remediations often fail and what the alternatives are to prevent these failures (and budget overruns). More specifically, it focuses on alternative investigation methods and approaches that help produce a more complete (high resolution) conceptual site model. This more complete conceptual site model in turn supports a more focused remedial design with higher remedial efficiency. As a minimum, it removes much of the (financial) uncertainty from decision making when selecting a remedial alternative. Contaminants that have a greater density than water are known to be more complex in terms of both investigation and remediation; therefore, they are the main focus of this paper.
38

Imanishi, Kazutoshi, Makiko Ohtani, and Takahiko Uchide. "Driving stress and seismotectonic implications of the 2013 Mw5.8 Awaji Island earthquake, southwestern Japan, based on earthquake focal mechanisms before and after the mainshock." Earth, Planets and Space 72, no. 1 (October 22, 2020). http://dx.doi.org/10.1186/s40623-020-01292-1.

Abstract:
The driving stress of the 2013 Mw 5.8 reverse-faulting Awaji Island earthquake, southwest Japan, was investigated using focal mechanism solutions of earthquakes before and after the mainshock. Seismic records from regional high-sensitivity seismic stations were used. Further, the stress tensor inversion method was applied to infer the stress fields in the source region. The results of the stress tensor inversion and the slip tendency analysis revealed that the stress field within the source region deviates from that of the surrounding area, locally containing a reverse-faulting component with ENE–WSW compression. This local fluctuation in the stress field is key to producing reverse-faulting earthquakes. Existing knowledge of regional-scale stress (tens to hundreds of km) cannot predict the occurrence of the Awaji Island earthquake, emphasizing the importance of estimating local-scale (< tens of km) stress information. It is possible that the local-scale stress heterogeneity was formed by local tectonic movement, i.e., the formation of flexures in combination with recurring deep aseismic slip. The coseismic Coulomb stress change induced by the disastrous 1995 Mw 6.9 Kobe earthquake increased along the fault plane of the Awaji Island earthquake; however, the postseismic stress change was negative. We concluded that the gradual stress build-up due to interseismic plate locking along the Nankai trough overcame the postseismic stress reduction within a few years, pushing the Awaji Island earthquake fault over its failure threshold in 2013. The observation that the earthquake occurred in response to interseismic plate locking has an important implication for seismotectonics in southwest Japan, facilitating further research on the causal relationship between inland earthquake activity and Nankai trough earthquakes. Furthermore, this study highlighted that the dataset before the mainshock may not contain sufficient information to reflect the stress field in the source region, owing to the lack of earthquakes there: the earthquake fault is generally locked prior to the mainshock. Further research is needed on estimating the stress field in the vicinity of an earthquake fault from seismicity before the mainshock alone.
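The Coulomb stress arguments above rest on the standard Coulomb failure stress change on a receiver fault. As a reminder, in LaTeX notation (this is the textbook definition, not a formula quoted from the paper):

\Delta \mathrm{CFS} = \Delta \tau_s + \mu' \, \Delta \sigma_n

where \Delta\tau_s is the change in shear stress resolved in the slip direction, \Delta\sigma_n is the change in fault-normal stress (tension positive), and \mu' is the effective friction coefficient. A positive \Delta\mathrm{CFS} moves the receiver fault toward failure, which is the sense in which the 1995 Kobe earthquake "increased" stress on the 2013 fault plane.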
39

Sincovich, Alanna. "795 Contribution of school meals provision/receipt to "catch up growth" in young children: A counterfactual approach." International Journal of Epidemiology 50, Supplement_1 (September 1, 2021). http://dx.doi.org/10.1093/ije/dyab168.613.

Abstract:
Background: The Early Childhood Education (ECE) Program, funded by the World Bank (USD 28 million), seeks to support expansion of quality ECE services with the objective of improving the development of children aged 3-5 years in disadvantaged communities in Lao PDR. The program targets an estimated 60,000 beneficiaries, including children from 0-5 years as well as their caregivers and wider communities throughout Northern Laos. Specifically, the program supports three different community-based interventions, the impact of which is being evaluated through three separate pragmatic clustered randomised controlled trials (RCTs) (Figure 1). Following completion of baseline data collection, program interventions commenced in November 2016, with endline due to be complete in April 2020. With an early education focus, improving children's nutrition was not a key aim of the ECE program. However, midline results indicated a considerable reduction in the prevalence of chronic undernutrition (i.e., stunting) amongst panel children between baseline and midline. Separate to the ECE program, school meal programs have been delivered in selected villages throughout Northern Laos; however, due to sample limitations (i.e., the number of schools with enough children and the intervention budget) the condition of school meals could not be accommodated in the RCT design. For schools with a school meal program, school meals were expanded to include children aged 3-4 years attending CCDG and MAT interventions. Of methodological importance, the school meal program was a precursor to the ECE program and not all schools in the sample received the school meal program. Stunting, being too short for one's age, is the result of chronic or recurrent undernutrition. Approximately 150 million children under 5 years (22%) experience stunting and associated cognitive impairments throughout the life course. Historically, the negative impacts of stunting have been thought to be irreversible after two years of age. However, there is dispute as to whether children who experience stunting before age two have the potential to "catch up" in height to their non-stunted peers, and if so, whether those children also show cognitive catch-up. Although provision of school meals was not included in the ECE study design, the data collected provide an opportunity to contribute to the debate around both physical and cognitive catch-up growth. Evidence pertaining to this debate has the potential to inform early development and education strategies internationally. Traditionally, observational (i.e., non-randomised) data have been considered to exist within the domain of association and prediction, with causal inference reserved for studies with experimental (interventional or natural) design. More recently, advances in quantitative disciplines have brought the development of new analytical techniques to draw causal conclusions from data collected in non-randomised settings (under a strict set of assumptions), supplementing traditional trial methods for causal estimation. These methods are grounded in the "potential outcomes" framework, and make use of "counterfactuals" and hypothetical trials to emulate experimental design within observational data. A counterfactual is a "what-if" statement drawing on the idea of causal contrasts within the same individual, e.g., if both exposure and non-exposure to a treatment could be observed for the same person, with all else held constant (i.e., both of their potential outcomes, Y1 and Y0).
Although it is not possible to observe both potential outcomes for an individual in the real world, this counterfactual approach employs sophisticated propensity weighting techniques and population-level estimation to generate “exchangeable” groups of individuals, thus mimicking the randomisation and causal interpretability of a traditional trial. The present study seeks to utilise the aforementioned techniques in combination with comprehensive observational panel data collected through the ECE Program to determine if provision/receipt of school meals contributed to catch up in physical and cognitive growth of young children in Lao PDR. Methods Study sample The study sample included children at baseline and midline (n = 5680), who had an average age of 67 months at midline (5.6 years old), ranging from 37-90 months. Due to missing data in some of the key study variables (e.g. lack of a known specific birth date), the final analysis sample included 4,948 children, representing 87% of the total sample of panel children. Exposure, outcome and confounders The key exposure was a binary variable measured via caretaker questionnaire, asking whether their child had received meals from their ECE facility or primary school in the last week. Children’s height and weight were measured at baseline and midline, and caretakers provided children’s dates of birth. Height-for-age z-scores (HAZ) were calculated using the WHO growth standards. The difference in HAZ scores from baseline to midline data collection was used as the key outcome of this study, with a binary variable indicating whether a child was stunted employed as a secondary outcome. Several variables were identified a priori as confounding the relationship between receipt of school meals and changes in HAZ scores/stunting, including child age, gender, and ethnicity, household socio-economic position, caretaker’s education, and village accessibility. Statistical analysis Descriptive statistics (prevalence) were computed for the outcome (stunted; yes, no) across each wave (baseline and midline), and stratified by child’s age at baseline. To determine the effect of receipt of school meals at baseline on stunting at midline, weighted linear and logistic regressions were employed, using stabilised inverse probability of treatment weighting (SIPTW) to control for observed confounding. The SIPTW can be represented as in equation 1: \(SIPTW = P(A=a)/P(A=a \mid C)\), where A is the exposure (receipt of school meals) and C is a vector of observed confounders. The SIPTW approach makes use of propensity score weighting to generate a pseudo-population where the exposure is no longer related to the observed confounders, thus making the exposure groups “exchangeable” on these characteristics (mimicking randomisation). The final outcome models are represented in equations 2 and 3: \(E(Y) = \beta_0 + \beta_1 A + \beta_2 \Delta_{\text{age}} + \beta_3 S_{\text{baseline}} + \varepsilon\) (equation 2) and \(\operatorname{logit}(p) = \beta_0 + \beta_1 A + \beta_2 \Delta_{\text{age}} + \beta_3 S_{\text{baseline}}\) (equation 3), where \(E(Y)\) is the mean HAZ difference score, \(p\) is the probability a child is stunted, \(\beta_0\) is the intercept, \(\beta_1\) is the coefficient for receipt of school meals, \(\beta_2\) is the coefficient for the age difference between baseline and midline, \(\beta_3\) is the coefficient for stunting at baseline, and \(\varepsilon\) is an error term with a Gaussian distribution in equation 2 and a binomial distribution in equation 3. Results Table 1 shows that from baseline to midline there was a reduction in stunting of 8.43%. Table 2 indicates that the biggest reduction in stunting was observed for children aged 4 years (12.0%), with smaller reductions observed for children aged 3 years (6.4%) and 2 years (6.2%).
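A minimal sketch of the stabilised weighting and weighted outcome models described above, assuming a pandas DataFrame with hypothetical column names (meals, haz_diff, stunted_mid, age_diff, stunted_base, and numerically encoded confounders); this illustrates the general SIPTW recipe, not the study's actual analysis code.

import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical column names; confounders assumed already numerically encoded.
CONFOUNDERS = ["age", "gender", "ethnicity", "ses", "caretaker_edu", "village_access"]

def stabilised_iptw(df: pd.DataFrame) -> np.ndarray:
    """Equation 1: SIPTW = P(A = a) / P(A = a | C)."""
    X = sm.add_constant(df[CONFOUNDERS])
    propensity = sm.Logit(df["meals"], X).fit(disp=0).predict(X)  # P(A=1 | C)
    p_marginal = df["meals"].mean()                               # marginal P(A=1)
    return np.where(df["meals"] == 1,
                    p_marginal / propensity,
                    (1 - p_marginal) / (1 - propensity))

def weighted_outcome_models(df: pd.DataFrame):
    w = stabilised_iptw(df)
    X = sm.add_constant(df[["meals", "age_diff", "stunted_base"]])
    # Equation 2: weighted linear model for the change in HAZ score.
    linear = sm.WLS(df["haz_diff"], X, weights=w).fit()
    # Equation 3: weighted logistic model for stunting at midline.
    logistic = sm.GLM(df["stunted_mid"], X,
                      family=sm.families.Binomial(), freq_weights=w).fit()
    # The odds ratio for school meals is np.exp(logistic.params["meals"]).
    return linear, logistic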
The receipt of school meals is hypothesised to be a potential driver of the observed reductions in stunting from baseline to midline. Table 3 presents the distribution of children receiving school meals and reductions in stunting from baseline to midline, by child age. This indicates a possible relationship between the receipt of school meals and a reduction in stunting, with reductions in stunting larger in age groups with a higher proportion of children receiving school meals. This analysis is descriptive, however, and does not take into account other characteristics that could also influence receipt of school meals and reductions in stunting, such as poverty. Regression analysis To isolate the relationship between receipt of school meals and reductions in stunting we employed linear and logistic regression models, weighted by SIPTW. This regression analysis indicates that receipt of a school meal at baseline results in a 0.07 standard deviation increase in height-for-age z-score at midline (Table 4). Results of the logistic regression indicate that children who received a school meal had 0.77 times the odds of being stunted at midline, after controlling for confounding, time between measurement at baseline and midline, and baseline stunting (Table 5). Methodological issue – effect interpretation In RCTs, the most common treatment effects calculated are the average treatment effect (ATE) and the average treatment effect on the treated (ATT). These are also known as intent-to-treat (ITT) and as-treated (AT) effects. The former indicates the effect of being assigned to a treatment on the outcome, whereas the latter indicates the effect of actually being treated on the outcome. As this study is attempting to replicate an RCT, a key question is: what is the most relevant effect estimate? Our results can be understood to be the ATT effect, as they represent the effect of receiving a school meal on stunting outcomes. In the RCT world, ATT/AT effects have been criticised due to potential external factors that affect choice of treatment, so the question is whether the present analysis has adequately accounted for the observed and unobserved confounding factors that affect self-selection into the treatment group, i.e. receipt of school meals. Alternatively, the ATE/ITT effect is often considered closer to a real-world effect of a treatment; however, it is strongly influenced by adherence. This can affect the generalisability of results, as different trials may have different adherence patterns. In relation to our trial, this effect could be estimated using aggregated village-level data with the exposure re-coded to represent provision of school meals at ECE and school facilities within a village. However, school meals were provided in villages by multiple providers, so school meal provision is difficult to define in a uniform way. Finally, it may be appropriate to instead analyse the condition of school meals as a subgroup analysis within the original RCT design, despite the trial not having been powered to detect this effect. Conclusions The key conclusions of this abstract can be summarised in terms of the central methodological questions that we are seeking to answer in collaboration with experts in the ECR workshop. Key messages Data collected through the ECE Program in Laos provide an opportunity to contribute to the debate around physical and cognitive “catch up growth” in stunted children.
Using a counterfactual approach, our estimated as-treated effect indicated that stunted children aged 2-4 years at baseline who received school meals had lower odds of being stunted at midline (approximately two years later); however, we question whether an intent-to-treat or sub-group analysis would be better suited, considering the effects of interest and the limitations in estimating them.
APA, Harvard, Vancouver, ISO, and other styles
40

Song, Ci, Zheng Chang, Patrik Magnusson, Erik Ingelsson, and Nancy L. Pedersen. "Abstract MP066: The Role of Environmental Factors in Modifying Heritability of Coronary Heart Disease." Circulation 125, suppl_10 (March 13, 2012). http://dx.doi.org/10.1161/circ.125.suppl_10.amp066.

Full text
Abstract:
Introduction Coronary heart disease (CHD) is affected by both genetic and environmental factors. These classes of factors may act independently or interactively (gene-environment interactions). Recent large genome-wide association (GWA) studies have identified several genetic loci associated with CHD. However, few studies have considered the modifying effects of specific environmental factors on these associations, and the full extent of how those environmental factors affect the additive genetic component of CHD is not known. Hypothesis Our hypothesis was that modifiable environmental factors including smoking, alcohol intake, physical activity and body mass index (BMI) would affect CHD risk via interactions with genetic factors. Methods The environmental factors studied were smoking (ever regular smoker, never regular smoker); alcohol intake (never-drinking, drinking); physical activity (sedentary vs. non-sedentary); and highest BMI during lifetime (normal [less than 25 kg/m²], overweight [25 to 30 kg/m²], and obesity [more than 30 kg/m²]). All same-sex twins born before 1958 in the Swedish Twin Registry were included in the current study. We identified 6,133 incident CHD events as the first CHD onset without any prior myocardial infarction, stroke or heart failure. We estimated heritability of CHD conditioned on the above-mentioned environmental factors in up to 26,901 twin pairs with environmental information available. Parameter estimation and model fitting were conducted with structural equation modeling as implemented in the software package Mx. Preliminary results The overall heritability of CHD in our sample was 36% (adjusted for age and sex). Smoking and overweight/obesity significantly modified the additive genetic component of CHD (P<0.001 and P=0.038, respectively). Thus, heritability of CHD differed between smokers (29.0%) and non-smokers (42.5%). Further, heritability of CHD also differed among different categories of BMI: 29.9% among normal-weight, 27.1% among overweight and 23.6% among obese individuals. Neither alcohol consumption nor physical activity had any significant modifying effect on heritability of CHD (P=0.33 and 0.16, respectively). Conclusion In our nation-wide twin study of 26,901 twin pairs, purported measures of the environment affected heritability of CHD incidence. We demonstrated that smoking, as well as overweight and obesity, modify heritability estimates, potentially indicating that genetic factors play a more prominent role in disease development in the absence of important environmental factors. Our results suggest that increased understanding of gene-environment interactions will be important for a full understanding of the etiology of CHD.
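The abstract fits full structural equation (ACE-type) models in Mx; purely as an illustration of the underlying twin-study logic, a back-of-envelope heritability estimate can be obtained from monozygotic and dizygotic twin-pair correlations via Falconer's formula. The function and data names below are hypothetical, and this shortcut omits the liability-threshold machinery a real twin analysis of a binary outcome like CHD would need.

import numpy as np

def falconer_h2(mz_pairs: np.ndarray, dz_pairs: np.ndarray) -> float:
    """Falconer's estimate: h^2 = 2 * (r_MZ - r_DZ).

    Each argument is an (n_pairs, 2) array holding a phenotype or
    liability score for the two members of each twin pair.
    """
    r_mz = np.corrcoef(mz_pairs[:, 0], mz_pairs[:, 1])[0, 1]
    r_dz = np.corrcoef(dz_pairs[:, 0], dz_pairs[:, 1])[0, 1]
    return 2.0 * (r_mz - r_dz)

# Gene-environment interaction is probed by estimating heritability within
# strata of an environmental factor and comparing, e.g.:
# h2_smokers = falconer_h2(mz_smokers, dz_smokers)
# h2_nonsmokers = falconer_h2(mz_nonsmokers, dz_nonsmokers)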
APA, Harvard, Vancouver, ISO, and other styles
41

Thinh, Nguyen Hong, Tran Hoang Tung, and Le Vu Ha. "Depth-aware salient object segmentation." VNU Journal of Science: Computer Science and Communication Engineering 36, no. 2 (October 7, 2020). http://dx.doi.org/10.25073/2588-1086/vnucsce.217.

Full text
Abstract:
Object segmentation is an important task which is widely employed in many computer vision applications such as object detection, tracking, recognition, and retrieval. It can be seen as a two-phase process: object detection and segmentation. Object segmentation becomes more challenging when there is no prior knowledge about the object in the scene. In such conditions, visual attention analysis via saliency mapping may offer a means to predict the object location by using visual contrast, local or global, to identify regions that draw strong attention in the image. However, in situations such as a cluttered background, a highly varied object surface, or shadow, conventional salient object segmentation approaches based on a single image feature such as color or brightness have been shown to be insufficient for the task. This work proposes a new salient object segmentation method which uses a depth map obtained from the input image to enhance the accuracy of saliency mapping. A deep learning-based method is employed for depth map estimation. Our experiments showed that the proposed method outperforms other state-of-the-art object segmentation algorithms in terms of recall and precision. Keywords: saliency map, depth map, deep learning, object segmentation.
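The paper's own pipeline uses a deep network for depth estimation; purely as a hypothetical sketch of the fusion idea, the snippet below modulates a classic spectral-residual saliency map with a normalised depth map, weighting nearer regions as more salient. It is not the authors' algorithm, and the blending weight alpha is an invented parameter.

import numpy as np
from scipy.ndimage import gaussian_filter

def spectral_residual_saliency(gray: np.ndarray) -> np.ndarray:
    """Spectral-residual saliency on a 2-D grayscale float image."""
    f = np.fft.fft2(gray)
    log_amp = np.log(np.abs(f) + 1e-8)
    phase = np.angle(f)
    residual = log_amp - gaussian_filter(log_amp, sigma=3)  # strip the smooth spectrum
    sal = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
    sal = gaussian_filter(sal, sigma=3)
    return (sal - sal.min()) / (np.ptp(sal) + 1e-8)

def depth_weighted_saliency(gray: np.ndarray, depth: np.ndarray,
                            alpha: float = 0.5) -> np.ndarray:
    """Fuse image saliency with a 'nearness' prior from an estimated depth map."""
    sal = spectral_residual_saliency(gray)
    near = 1.0 - (depth - depth.min()) / (np.ptp(depth) + 1e-8)  # near = 1, far = 0
    return alpha * sal + (1.0 - alpha) * sal * near

# A binary object mask could then be obtained by thresholding the fused map:
# mask = depth_weighted_saliency(gray, depth) > 0.5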
APA, Harvard, Vancouver, ISO, and other styles
42

Lithgow, Kirstie, and Bernard Corenblum. "Polyuria: A Pathophysiologic Approach." Canadian Journal of General Internal Medicine 12, no. 2 (September 11, 2017). http://dx.doi.org/10.22374/cjgim.v12i2.247.

Full text
Abstract:
A 22-year-old man presented with a 3-week history of increased thirst, polydipsia, and polyuria. He described consuming large volumes of water and waking up multiple times throughout the night to drink and urinate. He also endorsed symptoms of fatigue and frequent headaches. Prior to this, he had been well. There was no history of diuretic use, lithium use, or renal disease. There was no prior head trauma, cranial irradiation, or intracranial pathology. He denied consumption of nutritional or protein supplements. Clinical exam revealed a well-appearing young man with normal heart rate and blood pressure. Visual fields and general neurologic exam were grossly normal. Baseline investigations revealed serum sodium ranging from 141–142 mmol/L (reference range 133–145 mmol/L), creatinine 92 umol/L (50–120 umol/L), random glucose 5.4 mmol/L (3.3–11.0 mmol/L), potassium 4.0 mmol/L (3.3–5.1 mmol/L) and ionized calcium 1.25 mmol/L (1.15–1.35 mmol/L). A 24-hour urine collection was arranged, and returned a urine volume of 5.6 L (normal less than 3 litres/24 hours). Further investigations revealed a serum sodium of 142 mmol/L, serum osmolality 306 mmol/kg (280–300 mmol/kg), and urine osmolality of 102 mmol/kg (50–1200 mmol/kg). AM cortisol was 372 nmol/L (200–690 nmol/L). These results demonstrated an inability to concentrate the urine, despite the physiologic stimulus of hyperosmolarity. Based on this, a presumptive diagnosis of diabetes insipidus was made. The patient was instructed to drink as much as he needed to satiate his thirst, and to avoid fluid restriction. The patient was started on DDAVP intranasal spray, which provided immediate relief from his symptoms. Magnetic resonance imaging of the brain revealed an unremarkable pituitary gland with abnormal thickening of the pituitary stalk and loss of the posterior pituitary bright spot. This confirmed the diagnosis of central diabetes insipidus, presumed secondary to infiltrative disease affecting the pituitary stalk. Introduction Polyuria is defined as inappropriately high urine output relative to effective arterial blood volume and serum sodium. In adults, polyuria can be objectively quantified as urine output in excess of 3–3.5 L per day with a low urine osmolality (<300 mmol/kg).2 Daily urine output is dependent on 2 major factors. The first is the amount of daily solute excretion, and the second is the urine concentrating capability of the nephron.3 Disturbances in either of these factors can occur by many different mechanisms, and can lead to a diuresis. This diuresis can be driven either by solute (solute diuresis), water (water diuresis) or a combination of these processes.4 A diagnostic algorithm for polyuria is outlined in Figure 1 (Diagnostic Approach to Polyuria). Solute Diuresis Daily solute intake varies between individuals, but typically averages about 10 mmol/kg or 500–800 mmol/day.2,3 Solute diuresis is the result of a higher solute load that exceeds the usual solute excretion.4 Higher solute loads can be a consequence of either increased solute intake or increased solute generation through metabolism. High solute intake can occur from intravenous fluids, enteral or parenteral nutrition, and any other sources of exogenous protein, glucose, bicarbonate, or sugar alcohols.2,4 Metabolic processes leading to increased solute generation include hyperglycemia and azotemia.2,4 Increased solute excretion drives urine output in a linear fashion.3 Furthermore, solute diuresis impairs the ability of the kidney to concentrate urine.
Typically, in a pure solute diuresis, urine concentration is between 300 and 500 mmol/kg.2,4 The specific cause of solute diuresis can be further delineated with estimation of the urine electrolyte solute over 24 hours: 2 × (urine [Na] + urine [K]) × 24-hour urine volume.4 Values greater than 600 mmol/day suggest electrolytes are the solutes driving the diuresis, while values less than 600 mmol/day imply that the diuresis is due to a non-electrolyte solute, typically glucose or urea. Water Diuresis Water diuresis can occur due to excessive amounts of free water consumption (primary polydipsia) or impaired secretion of, or response to, ADH (diabetes insipidus). In both cases, urine osmolality should be less than 100 mmol/kg.2 Primary polydipsia is characterized by excessive water consumption. This can be a result of compulsive water drinking (often observed in psychiatric disorders) or a defect in the thirst centre of the hypothalamus due to an infiltrative disease process.5,6 The osmotic threshold for ADH release occurs at 280–290 mmol/kg. Failure to maximally concentrate the urine (1000–1200 mmol/kg in healthy kidneys) when serum osmolality rises above the osmotic threshold suggests diabetes insipidus.3 Diabetes insipidus (DI) can result from either insufficient ADH secretion from the posterior pituitary (central DI) or ADH resistance (nephrogenic DI).1 Central DI can be caused by both congenital and acquired conditions known to affect the hypothalamic-neurohypophyseal system7,8 (Table 1). Polyuria occurs when 80% or more of the ADH-secreting neurons are damaged.7 Metastatic disease has a predilection for the posterior pituitary, as its blood supply is derived from the systemic circulation, in contrast to the anterior pituitary, which is supplied by the hypophyseal portal system.9 Rapid onset of polydipsia and polyuria in a patient older than 50 years of age should therefore raise immediate suspicion for metastatic disease.9 Treatment of adrenal insufficiency may “unmask” or exacerbate central DI, as normalization of blood pressure following glucocorticoid replacement inhibits ADH release.10 In the pregnant state, ADH degradation is increased due to placental production of vasopressinase. Any mechanism of hepatic dysfunction that occurs in pregnancy (pre-eclampsia, HELLP, acute fatty liver) will augment this normal physiology by reducing vasopressinase clearance, and can subsequently lead to transient DI.11 In nephrogenic DI, ADH is present but the kidneys are unable to respond appropriately.8 In normal physiology, ADH acts to concentrate the urine via activation of the vasopressin V2 receptor, which leads to insertion of aquaporin-2 water channels in the collecting duct.3,12 Nephrogenic DI can be primary (genetic) or secondary (acquired). Primary nephrogenic DI occurs as a result of genetic mutations affecting either the vasopressin 2 receptor or aquaporin-2 water channels; typically, such conditions present in infancy.12 Secondary nephrogenic DI can occur by a variety of mechanisms; the most common is chronic lithium administration. Lithium enters the principal cell in the collecting duct via epithelial sodium channels, and is thought to impair urinary concentrating ability via reduction in the number of principal cells and interference in signalling pathways involving aquaporin.12,13 Hypercalcemia, hypokalemia, obstructive uropathy, and pregnancy can lead to transient nephrogenic DI.12,13
Hypercalcemia can lead to nephrogenic DI by causing a renal concentrating defect when calcium levels are persistently above 2.75 mmol/L.14 Increased hydrostatic pressure from obstructive uropathy may lead to suppression of aquaporin-2 expression, resulting in transient nephrogenic DI.12 Nephrogenic DI can be caused by various renal diseases due to impairment of renal concentrating mechanisms, even before glomerular filtration rate is impaired. Polycystic kidney disease causes anatomic disruption of the medullary architecture. Polyuria in sickle cell disease results from a similar mechanism, as sickling in the vasa recta interferes with the countercurrent exchange mechanisms.16 Infiltrative renal diseases, including amyloidosis and Sjögren's syndrome, impair renal tubular function due to amyloid deposition and lymphocytic infiltration.17,18 Mixed Water-Solute Diuresis In some cases, polyuria can be caused by a combination of both mechanisms. The linear relationship between solute excretion and urine output described above is strongly influenced by ADH. In the setting of a solute diuresis, absence or deficiency of ADH can augment the degree of polyuria quite dramatically.14,19 Clinical examples of mixed diuresis include concurrent loading of both water and solute, chronic renal failure or infiltrative renal disease, relief of prolonged urinary obstruction, and partial DI.2,4 Typically in such scenarios, urine osmolality ranges from 100–300 mmol/kg.2 Conclusion Polyuria has a broad range of causes and can be a diagnostic challenge for clinicians. Understanding the pathophysiology that underpins the different mechanisms of polyuria is essential to appropriate workup, diagnosis, and treatment of this condition. When a patient presents with this complaint, the first step is to quantify the 24-hour urine volume. We recommend referral to endocrinology when there is evidence of hypothalamic or pituitary disease, when a water deprivation test is required, or in cases where the diagnosis is unclear. Disclosure Funding sources: None. Conflicts of interest: None. References 1. Leung AK, Robson WL, Halperin ML. Polyuria in childhood. Clin Pediatr (Phila) 1991;30(11):634–40. 2. Bhasin B, Velez JC. Evaluation of polyuria: the roles of solute loading and water diuresis. Am J Kidney Dis 2016;67(3):507–11. 3. Rennke HG, Denker BM. Renal pathophysiology: the essentials. 4th ed. Philadelphia: Wolters Kluwer/Lippincott Williams & Wilkins; 2014. 4. Oster JR, Singer I, Thatte L, Grant-Taylor I, Diego JM. The polyuria of solute diuresis. Arch Intern Med 1997;157(7):721–9. 5. Grois N, Fahrner B, Arceci RJ, Henter JI, McClain K, Lassmann H, et al. Central nervous system disease in Langerhans cell histiocytosis. J Pediatr 2010;156(6):873–81, 81.e1. 6. Stuart CA, Neelon FA, Lebovitz HE. Disordered control of thirst in hypothalamic-pituitary sarcoidosis. N Engl J Med 1980;303(19):1078–82. 7. Di Iorgi N, Napoli F, Allegri AE, Olivieri I, Bertelli E, Gallizia A, et al. Diabetes insipidus--diagnosis and management. Horm Res Paediatr 2012;77(2):69–84. 8. Mahzari M, Liu D, Arnaout A, Lochnan H. Immune checkpoint inhibitor therapy associated hypophysitis. Clin Med Ins Endocrin Diabet 2015;8:21–8. 9. Hermet M, Delévaux I, Trouillier S, André M, Chazal J, Aumaître O. Diabète insipide révélateur de métastases hypophysaires : quatre observations et revue de la littérature. La Revue de Médecine Interne 2009;30(5):425–9. 10. Martin MM. Coexisting anterior pituitary and neurohypophyseal insufficiency: A syndrome with diagnostic implication.
Arch Intern Med 1969;123(4):409–16. 11. Aleksandrov N, Audibert F, Bedard MJ, Mahone M, Goffinet F, Kadoch IJ. Gestational diabetes insipidus: a review of an underdiagnosed condition. J Obstet Gynaecol Can 2010;32(3):225–31. 12. Bockenhauer D, Bichet DG. Pathophysiology, diagnosis and management of nephrogenic diabetes insipidus. Nat Rev Nephrol 2015;11(10):576–88. 13. Grünfeld JP, Rossier BC. Lithium nephrotoxicity revisited. Nat Rev Nephrol 2009;5(5):270. 14. Rose BD, Post TW. Clinical physiology of acid-base and electrolyte disorders. 5th ed. New York: McGraw-Hill, Medical Pub. Division; 2001, 754. 15. Gabow PA, Kaehny WD, Johnson AM, Duley IT, Manco-Johnson M, Lezotte DC, et al. The clinical utility of renal concentrating capacity in polycystic kidney disease. Kidney Int 35(2):675–80. 16. Hatch FE, Culbertson JW, Diggs LW. Nature of the renal concentrating defect in sickle cell disease. J Clin Invest 1967;46(3):336–45. 17. Carone FA, Epstein FH. Nephrogenic diabetes insipidus caused by amyloid disease: Evidence in man of the role of the collecting ducts in concentrating urine. Am J Med 1960;29(3):539–44. 18. Shearn MA, Tu W-H. Nephrogenic diabetes insipidus and other defects of renal tubular function in Sjögren's syndrome. Am J Med 1965;39(2):312–8. 19. Rennke HG, Denker BM. Renal pathophysiology: the essentials. 4th ed. Philadelphia: Wolters Kluwer/Lippincott Williams & Wilkins; 2014. Figure 3.7, effects of ADH and solute excretion on urine volume, 88.
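A small worked example of the 24-hour urine electrolyte solute estimate and the rough diagnostic cut-offs discussed in the abstract above; all patient numbers below are invented for illustration.

def urine_electrolyte_solute(na_mmol_l: float, k_mmol_l: float,
                             volume_l_per_day: float) -> float:
    """Estimated 24-h electrolyte solute output: 2 x ([Na] + [K]) x volume."""
    return 2 * (na_mmol_l + k_mmol_l) * volume_l_per_day

def classify_diuresis(urine_osm_mmol_kg: float, electrolyte_mmol_day: float) -> str:
    """Apply the approximate thresholds from the discussion above."""
    if urine_osm_mmol_kg < 100:
        return "water diuresis (primary polydipsia or diabetes insipidus)"
    if urine_osm_mmol_kg < 300:
        return "mixed water-solute diuresis"
    if electrolyte_mmol_day > 600:
        return "solute diuresis, electrolyte-driven"
    return "solute diuresis, non-electrolyte (e.g. glucose or urea)"

# Invented example: urine Na 60 mmol/L, K 30 mmol/L, volume 5 L/day gives
# 2 * (60 + 30) * 5 = 900 mmol/day of electrolyte solute, pointing to an
# electrolyte-driven solute diuresis if urine osmolality is 300-500 mmol/kg.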
APA, Harvard, Vancouver, ISO, and other styles
43

Conti, Olivia. "Disciplining the Vernacular: Fair Use, YouTube, and Remixer Agency." M/C Journal 16, no. 4 (August 11, 2013). http://dx.doi.org/10.5204/mcj.685.

Full text
Abstract:
Introduction The research from which this piece derives explores political remix video (PRV), a genre in which remixers critique dominant discourses and power structures through guerrilla remixing of copyrighted footage (“What Is Political Remix Video?”). Specifically, I examined the works of political video remixer Elisa Kreisinger, whose queer remixes of shows such as Sex and the City and Mad Men received considerable attention from 2010 to the present. As a rhetoric scholar, I am attracted not only to the ways that remix functions discursively but also to the ways in which remixers are constrained in their ability to argue, and what recourse they have in these situations of legal and technological constraint. Ultimately, many of these struggles play out on YouTube. This is unsurprising: many studies of YouTube and other user-generated content (UGC) platforms focus on the fact that commercial sites cannot constitute utopian, democratic, or free environments (Hilderbrand; Hess; Van Dijck). However, I find that, contrary to popular belief, YouTube’s commercial interests are not the primary factor limiting remixer agency. Rather, United States copyright law as enacted on YouTube has the most potential to inhibit remixers. This has led many remixers to become advocates for fair use, the provision in the Copyright Act of 1976 that allows for limited use of copyrighted content. With this in mind, I decided to delve more deeply into the framing of fair use by remixers and other advocates such as the Electronic Frontier Foundation (EFF) and the Center for Social Media. In studying discourses of fair use as they play out in the remix community, I find that the framing of fair use bears a striking similarity to what rhetoric scholars have termed vernacular discourse—a discourse emanating from a small segment of the larger civic community (Ono and Sloop 23). The vernacular is often framed as that which integrates the institutional or mainstream while simultaneously asserting its difference through appropriation and subversion. A video qualifies as fair use if it juxtaposes source material in a new way for the purposes of critique. In turn, a vernacular text asserts its “vernacularity” by taking up parts of pre-existing dominant institutional discourses in a way that resonates with a smaller community. My argument is that this tension between institutional and vernacular gives political remix video a multivalent argument—one that presents itself both in the text of the video itself as well as in the video’s status as a fair use of copyrighted material. Just as fair use represents the assertion of creator agency against unfair copyright law, vernacular discourse represents the assertion of a localised community within a world dominated by institutional discourses. In this way, remixers engage rights holders and other institutions in a pleasurable game of cat and mouse, a struggle to expose the boundaries of draconian copyright law. YouTube’s Commercial Interests YouTube’s commercial interests operate at a level potentially invisible to the casual user. While users provide YouTube with content, they also provide the site with data—both metadata culled from their navigations of the site (page views, IP addresses) as well as member-provided data (such as real name and e-mail address). YouTube mines this data for a number of purposes—anything from interface optimisation to targeted advertising via Google’s AdSense.
Users also perform a certain degree of labour to keep the site running smoothly, such as reporting videos that violate the Terms of Service, giving videos the thumbs up or thumbs down, and reporting spam comments. As such, users involved in YouTube’s participatory culture are also necessarily involved in the site’s commercial interests. While there are legitimate concerns regarding the privacy of personal information, especially after Google introduced policies in 2012 to facilitate a greater flow of information across all of their subsidiaries, it does not seem that this has diminished YouTube’s popularity (“Google: Privacy Policy”).Despite this, some make the argument that users provide the true benefit of UGC platforms like YouTube, yet reap few rewards, creating an exploitative dynamic (Van Dijck, 46). Two assumptions seem to underpin this argument: the first is that users do not desire to help these platforms prosper, the second is that users expect to profit from their efforts on the website. In response to these arguments, it’s worth calling attention to scholars who have used alternative economic models to account for user-platform coexistence. This is something that Henry Jenkins addresses in his recent book Spreadable Media, largely by focusing on assigning alternate sorts of value to user and fan labour—either the cultural worth of the gift, or the satisfaction of a job well done common to pre-industrial craftsmanship (61). However, there are still questions of how to account for participatory spaces in which labours of love coexist with massively profitable products. In service of this point, Jenkins calls up Lessig, who posits that many online networks operate as hybrid economies, which combine commercial and sharing economies. In a commercial economy, profit is the primary consideration, while a sharing economy is composed of participants who are there because they enjoy doing the work without any expectation of compensation (176). The strict separation between the two economies is, in Lessig’s estimation, essential to the hybrid economy’s success. While it would be difficult to incorporate these two economies together once each had been established, platforms like YouTube have always operated under the hybrid principle. YouTube’s users provide the site with its true value (through their uploading of content, provision of metadata, and use of the site), yet users do not come to YouTube with these tasks in mind—they come to YouTube because it provides an easy-to-use platform by which to share amateur creativity, and a community with whom to interact. Additionally, YouTube serves as the primary venue where remixers can achieve visibility and viral status—something Elisa Kreisinger acknowledged in our interviews (2012). However, users who are not concerned with broad visibility as much as with speaking to particular viewers may leave YouTube if they feel that the venue does not suit their content. Some feminist fan vidders, for instance, have withdrawn from YouTube due to what they perceived as a community who didn’t understand their work (Kreisinger, 2012). Additionally, Kreisinger ended up garnering many more views of her Queer Men remix on Vimeo due simply to the fact that the remix’s initial upload was blocked via YouTube’s Content ID feature. By the time Kreisinger had argued her case with YouTube, the Vimeo link had become the first stop for those viewing and sharing the remix, which received 72,000 views to date (“Queer Men”). 
Fair Use, Copyright, and Content ID This instance points to the challenge that remixers face when dealing with copyright on YouTube, a site whose processes are not designed to accommodate fair use. Specifically, Title II, Section 512 of the DMCA (the Digital Millennium Copyright Act, passed in 1998) states that certain websites may qualify as “safe harbours” for copyright infringement if users upload the majority of the content to the site, or if the site is an information location service. These sites are insulated from copyright liability as long as they cooperate to some extent with rights holders. A common objection to Section 512 is that it requires media rights holders to police safe harbours in search of infringing content, rather than placing the onus on the platform provider (Meyers 939). In order to cooperate with Section 512 and rights holders, YouTube initiated the Content ID system in 2007. This system offers rights holders the ability to find and manage their content on the site by creating archives of footage against which user uploads are checked, allowing rights holders to automatically block, track, or monetise uses of their content (it is also worth noting that rights holders can make these responses country-specific) (“How Content ID Works”). At the current time, YouTube has over 15 million reference files against which it checks uploads (“Statistics - YouTube”). Thus, it’s fairly common for uploaded work to get flagged as a violation, especially when that work is a remix of popular institutional footage. If an upload is flagged by the Content ID system, the user can dispute the match, at which point the rights holder has the opportunity to either allow the video through, or to issue a DMCA takedown notice. They can also sue at any point during this process (“A Guide to YouTube Removals”). Content ID matches are relatively easy to dispute and do not generally require legal intervention. However, disputing these automatic takedowns requires users to be aware of their rights to fair use, and requires rights holders to acknowledge a fair use (“YouTube Removals”). This is only compounded by the fact that fair use is not a clearly defined right, but rather a vague provision relying on a balance between four factors: the purpose of the use, character of the work, the amount used, and the effect on the market value of the original (“US Copyright Office–Fair Use”). As Aufderheide and Jaszi observed in 2008, the rejection of videos for Content ID matches combined with the vagaries of fair use has a chilling effect on user-generated content. Rights Holders versus Remixers Rights holders’ objections to Section 512 illustrate the ruling power dynamic in current intellectual property disputes: power rests with institutional rights-holding bodies (the RIAA, the MPAA) who assert their dominance over DMCA safe harbours such as YouTube (who must cooperate to stay in business) who, in turn, exert power over remixers (the lowest on the food chain, so to speak). Beyond the observed chilling effect of Content ID, remix on YouTube is shot through with discursive struggle between these rights-holding bodies and remixers attempting to express themselves and reach new communities. However, this has led political video remixers to become especially vocal when arguing for their uses of content. For instance, in the spring of 2009, Elisa Kreisinger curated a show entitled “REMOVED: The Politics of Remix Culture” in which blocked remixes screened alongside the remixers’ correspondence with YouTube.
Kreisinger writes that each of these exchanges illustrates the dynamic between rights holders and remixers: “Your video is no longer available because FOX [or another rights-holding body] has chosen to block it” (“Remixed/Removed”). Additionally, as Jenkins notes, even Content ID on YouTube is only made available to the largest rights holders—smaller companies must still go through an official DMCA takedown process to report infringement (Spreadable 51). In sum, though recent technological developments may give the appearance of democratising access to content, when it comes to policing UGC, technology has made it easier for the largest rights holders to stifle the creation of content. Additionally, it has been established that rights holders do occasionally use takedowns abusively, and recent court cases—specifically Lenz v. Universal Music Corp.—have established the need for rights holders to assess fair use in order to make a “good faith” assertion that users intend to infringe copyright prior to issuing a takedown notice. However, as Joseph M. Miller notes, the ruling fails to rebalance the burdens and incentives between rights holders and users (1723). This means that while rights holders are supposed to take fair use into account prior to issuing takedowns, there is no process in place that either effectively punishes rights holders who abuse copyright, or allows users to defend themselves without the possibility of massive financial loss (1726). As such, the system currently in place does not disallow or discourage features like Content ID, though cases like Lenz v. Universal indicate a push towards rebalancing the burden of determining fair use. In an effort to turn the tables, many have begun arguing for users’ rights and attempting to parse fair use for the layperson. The Electronic Frontier Foundation (EFF), for instance, has espoused an “environmental rhetoric” of fair use, casting intellectual property as a resource for users (Postigo 1020). Additionally, they have created practical guidelines for UGC creators dealing with DMCA takedowns and Content ID matches on YouTube. The Center for Social Media has also produced a number of fair use guides tailored to different use cases, one of which targeted online video producers. All of these efforts have a common goal: to educate content creators about the fair use of copyrighted content, and then to assert their use as fair in opposition to large rights-holding institutions (though they caution users against unfair uses of content or making risky legal moves that could lead to lawsuits). In relation to remix specifically, this means that remixers must differentiate themselves from institutional, commercial content producers, standing up both for the argument contained in their remix as well as their fair use of copyrighted content. In their “Code of Best Practices for Fair Use in Online Video,” the Center for Social Media notes that an online video qualifies as a fair use if (among other things) it critiques copyrighted material and if it “recombines elements to make a new work that depends for its meaning on (often unlikely) relationships between the elements” (8). These two qualities are also two of the defining qualities of political remix video. For instance, they write that work meets the second criterion if it creates “new meaning by juxtaposition,” noting that in these cases “the recombinant new work has a cultural identity of its own and addresses an audience different from those for which its components were intended” (9).
Remixes that use elements of familiar sources in unlikely combinations, such as those made by Elisa Kreisinger, generally seek to reach an audience who are familiar with the source content, but also object to it. Sex and the City, for instance, while it initially seemed willing to take on previously “taboo” topics in its exploration of dating in Manhattan, ended with each of the heterosexual characters paired with an opposite-sex partner, and forays from this heteronormative narrative were contained either within one-off episodes or in tokenised gay characters. For this reason, Kreisinger noted that the intended audience for Queer Carrie was the queer and feminist viewers of Sex and the City who felt that the show was overly normative and exclusionary (Kreisinger, Art:21). As a result, the target audience of these remixes is different from the target audience of the source material—though the full nuance of the argument is best understood by those familiar with the source. Thus, the remix affirms the segment of the viewing community who saw only tokenised representations of their identity in the source text, and in so doing offers a critique of the original’s heteronormative focus. Fair Use and the Vernacular Vernacular discourse, as broadly defined by Kent A. Ono and John M. Sloop, refers to discourses that “emerge from discussions between members of self-identified smaller communities within the larger civic community.” It operates partially through appropriating dominant discourses in ways better suited to the vernacular community, through practices of pastiche and cultural syncretism (23). In an effort to better describe the intricacies of this type of discourse, Robert Glenn Howard theorised a hybrid “dialectical vernacular” that oscillates between institutional and vernacular discourse. This hybridity arises from the fact that the institutional and the vernacular are fundamentally inseparable, the vernacular establishing its meaning by asserting itself against the institutional (Howard, Toward 331). When put into use online, this notion of a “dialectical vernacular” is particularly interesting as it refers not only to the content of vernacular messages but also to their means of production. Howard notes that discourse embodying the dialectical vernacular is by nature secondary to institutional discourse, that the institutional must be clearly “structurally prior” (Howard, Vernacular 499). With this in mind it is unsurprising that political remix video—which asserts its secondary nature by calling upon pre-existing copyrighted content while simultaneously reaching out to smaller segments of the civic community—would qualify as a vernacular discourse. The notion of an institutional source’s structural prevalence also echoes throughout work on remix, both in practical guides such as the Center for Social Media’s “Best Practices” as well as in more theoretical takes on remix, like Eduardo Navas’ essay “Turbulence: Remixes + Bonus Beats,” in which he writes that: In brief, the remix when extended as a cultural practice is a second mix of something pre-existent; the material that is mixed for a second time must be recognized, otherwise it could be misunderstood as something new, and it would become plagiarism […] Without a history, the remix cannot be Remix. An elegant theoretical concept, this becomes muddier when considered in light of copyright law.
If the history of remix is what gives it its meaning—the source text from which it is derived—then it is this same history that makes a fair use remix vulnerable to DMCA takedowns and other forms of discipline on YouTube. However, as per the criteria outlined by the Center for Social Media, it is also from this ironic juxtaposition of institutional sources that the remix object establishes its meaning, and thus its vernacularity. In this sense, the force of a political remix video’s argument is in many ways dependent on its status as an object in peril: vulnerable to the force of a law that has not yet swung in its favor, yet subversive nonetheless. With this in mind, YouTube and other UGC platforms represent a fraught layer of mediation between institutional and vernacular. As a site for the sharing of amateur video, YouTube has the potential to affirm small communities as users share similar videos, follow one particular channel together, or comment on videos posted by people in their networks. However, YouTube’s interface (rife with advertisements, constantly reminding users of its affiliation with Google) and cooperation with rights holders establish it as an institutional space. As such, remixes on the site are already imbued with the characteristic hybridity of the dialectical vernacular. This is especially true when the remixers (as in the case of PRV) have made the conscious choice to advocate for fair use at the same time that they distribute remixes dealing with other themes and resonating with other communities. Conclusion Political remix video sits at a fruitful juncture with regard to copyright as well as vernacularity. Like almost all remix, it makes its meaning through juxtaposing sources in a unique way, calling upon viewers to think about familiar texts in a new light. This creation invokes a new audience—a quality that makes it both vernacular and also a fair use of content. Given that PRV is defined by the “guerrilla” use of copyrighted footage, it has the potential to stand as a political statement outside of the thematic content of the remix simply due to the nature of its composition. This gives PRV tremendous potential for multivalent argument, as a video can simultaneously represent a marginalised community while advocating for copyright reform. This is only reinforced by the fact that many political video remixers have become vocal in advocating for fair use, asserting the strength of their community and their common goal. In addition to this argumentative richness, PRV’s relation to fair use and vernacularity exposes the complexity of the remix form: it continually oscillates between institutional affiliations and smaller vernacular communities. However, the hybridity of these remixes produces tension, much of which manifests on YouTube, where videos are easily responded to and challenged by both institutional and vernacular authorities. In addition, a tension exists in the remix text itself between the source and the new, remixed message. Further research should attend to these areas of tension, while also exploring the tenacity of the remix community and their ability to advocate for themselves while circumventing copyright law. References “About Political Remix Video.” Political Remix Video. 15 Feb. 2012. ‹http://www.politicalremixvideo.com/what-is-political-remix/›. Aufderheide, Patricia, and Peter Jaszi. Reclaiming Fair Use: How to Put Balance Back in Copyright. Chicago: U of Chicago P, 2008.
Kindle. “Code of Best Practices for Fair Use in Online Video.” The Center for Social Media, 2008. Van Dijck, José. “Users like You? Theorizing Agency in User-Generated Content.” Media Culture Society 31 (2009): 41-58. “A Guide to YouTube Removals.” The Electronic Frontier Foundation, 15 June 2013 ‹https://www.eff.org/issues/intellectual-property/guide-to-YouTube-removals›. Hilderbrand, Lucas. “YouTube: Where Cultural Memory and Copyright Converge.” Film Quarterly 61.1 (2007): 48-57. Howard, Robert Glenn. “The Vernacular Web of Participatory Media.” Critical Studies in Media Communication 25.5 (2008): 490-513. Howard, Robert Glenn. “Toward a Theory of the World Wide Web Vernacular: The Case for Pet Cloning.” Journal of Folklore Research 42.3 (2005): 323-60. “How Content ID Works.” YouTube. 21 June 2013. ‹https://support.google.com/youtube/answer/2797370?hl=en›. Jenkins, Henry, Sam Ford, and Joshua Green. Spreadable Media: Creating Value and Meaning in a Networked Culture. New York: New York UP, 2013. Jenkins, Henry. Convergence Culture: Where Old and New Media Collide. New York: New York UP, 2006. Kreisinger, Elisa. Interview with Nick Briz. Art:21. Art:21, 30 June 2011. 21 June 2013. Kreisinger, Elisa. “Queer Video Remix and LGBTQ Online Communities.” Transformative Works and Cultures 9 (2012). 19 June 2013 ‹http://journal.transformativeworks.org/index.php/twc/article/view/395/264›. Kreisinger, Elisa. Pop Culture Pirate. ‹http://www.popculturepirate.com/›. Lessig, Lawrence. Remix: Making Art and Commerce Thrive in the Hybrid Economy. New York: Penguin Books, 2008. PDF. Meyers, B.G. “Filtering Systems or Fair Use? A Comparative Analysis of Proposed Regulations for User-Generated Content.” Cardozo Arts & Entertainment Law Journal 26.3: 935-56. Miller, Joseph M. “Fair Use through the Lenz of § 512(c) of the DMCA: A Preemptive Defense to a Premature Remedy?” Iowa Law Review 95 (2009-2010): 1697-1729. Navas, Eduardo. “Turbulence: Remixes + Bonus Beats.” New Media Fix 1 Feb. 2007. 10 June 2013 ‹http://newmediafix.net/Turbulence07/Navas_EN.html›. Ono, Kent A., and John M. Sloop. Shifting Borders: Rhetoric, Immigration and California’s Proposition 187. Philadelphia: Temple UP, 2002. “Privacy Policy – Policies & Principles.” Google. 19 June 2013 ‹http://www.google.com/policies/privacy/›. Postigo, Hector. “Capturing Fair Use for The YouTube Generation: The Digital Rights Movement, the Electronic Frontier Foundation, and the User-Centered Framing of Fair Use.” Information, Communication & Society 11.7 (2008): 1008-27. “Statistics – YouTube.” YouTube. 21 June 2013 ‹http://www.youtube.com/yt/press/statistics.html›. “US Copyright Office: Fair Use.” U.S. Copyright Office. 19 June 2013 ‹http://www.copyright.gov/fls/fl102.html›. “YouTube Help.” YouTube FAQ. 19 June 2013 ‹http://support.google.com/youtube/?hl=en&topic=2676339&rd=2›.
APA, Harvard, Vancouver, ISO, and other styles
44

Proctor, Devin. "Wandering in the City: Time, Memory, and Experience in Digital Game Space." M/C Journal 22, no. 4 (August 14, 2019). http://dx.doi.org/10.5204/mcj.1549.

Full text
Abstract:
As I round the corner from Church Street onto Vesey, I am abruptly met with the façade of St. Paul’s Chapel and by the sudden memory of two things, both of which have not yet happened. I think about how, in a couple of decades, the area surrounding me will be burnt to the ground. I also recall how, just after the turn of the twenty-first century, the area will again crumble onto itself. It is 1759, and I—via my avatar—am wandering through downtown New York City in the videogame space of Assassin’s Creed: Rogue (AC:R). These spatial and temporal memories stem from the fact that I have previously (that is, earlier in my life) played an AC game set in New York City during the War for Independence (later in history), wherein the city’s lower west side burns at the hands of the British. Years before that (in my biographical timeline, though much later in history) I watched from twenty-something blocks north of here as flames erupted from the twin towers of the World Trade Center. Complicating the situation further, Michel de Certeau strolls with me in spirit, pondering observations he will make from almost this exact location (though roughly 1,100 feet higher up) 220 years from now, around the time I am being born. Perhaps the oddest aspect of this convoluted and temporally layered experience is the fact that I am not actually at the corner of Church and Vesey in 1759 at all, but rather on a couch, in Virginia, now. This particular type of sudden arrival at a space is only possible when it is not planned. Prior to the moment described above, I had finished a “mission” in the game that involved my coming to the city, so I decided I would just walk around a bit in the newly discovered digital New York of 1759. I wanted to take it in. I wanted to wander. Truly Being-in-a-place means attending to the interconnected Being-ness and Being-with-ness of all of the things that make up that place (Heidegger; Haraway). Conversely, to travel to or through a place entails a type of focused directionality toward a place that you are not currently Being in. Wandering, however, demands eschewing both, neither driven by an incessant goal, nor stuck in place by introspective ruminations. Instead, wandering is perhaps best described as a sort of mobile openness. A wanderer is not quite Benjamin’s flâneur, characterised by an “idle yet assertive negotiation of the street” (Coates 28), but also, I would argue, not quite de Certeau’s “Wandersmänner, whose bodies follow the thicks and thins of an urban ‘text’ they write without being able to read it” (de Certeau 93). Wandering requires a concerted effort at non-intentionality. That description may seem to fold in on itself, to be sure, but as the spaces around us are increasingly “canalized” (Rabinow and Foucault) and designed with specific trajectories and narratives in mind, inaction leads to the unconscious enacting of an externally derived intention; whereas any attempt to subvert that design is itself a wholly intentional act. This is why wandering is so difficult. It requires shedding layers. It takes practice, like meditation. In what follows, I will explore the possibility of revelatory moments enabled by the shedding of these layers of intention through my own experience in digital space (maybe the most designed and canalized spaces we inhabit). I come to recognise, as I disavow the designed narrative of game space, that it takes on other meanings, becomes another space.
I find myself Being-there in a way that transcends the digital as we understand it, experiencing space that reaches into the past and future, into memory and fiction. Indeed, wandering is liminal, betwixt fixed points, spaces, and times, and the text you are reading will wander in this fashion—between the digital and the physical, between memory and experience, and among multiple pasts and the present—to arrive at a multilayered subjective sense of space, a palimpsest of placemaking.

Before charging fully into digital time travel, however, we must attend to the business of context. In this case, this means addressing why I am talking about videogame space in de Certeaudian terms. Beginning as early as 1995, videogame theorists have employed de Certeau’s notion of “spatial stories” in their assertions that games allow players to construct the game’s narrative by travelling through and “colonizing” the space (Fuller and Jenkins). Most of the scholarship involving de Certeau and videogames, however, has been relegated to the concepts of “map/tour” in looking at digital embodiment within game space as experiential representatives of the place/space binary. Maps verbalise spatial experience in place terms, such as “it’s at the corner of this and that street”, whereas tours express the same in terms of movement through space, as in “turn right at the red house”. Videogames complicate this because “mapping is combined with touring when moving through the game-space” (Lammes).

In “Games as Inhabited Spaces”, Bernadette Flynn moves beyond the map/tour dichotomy to argue that spatial theories can approach videogaming in a way no other viewpoint can, because neither narrative nor mechanics of play can speak to the “space” of a game. Thus, Flynn’s work is “focused on completely reconceiving gameplay as fundamentally configured with spatial practice” (59) through de Certeau’s concepts of “strategic” and “tactical” spatial use. Flynn explains:

The ability to forge personal directions from a closed simulation links to de Certeau’s notion of tactics, where users can create their own trajectories from the formal organizations of space. For de Certeau, tactics are related to how people individualise trajectories of movement to create meaning and transformations of space. Strategies on the other hand, are more akin to the game designer’s particular matrix of formal structures, arrangements of time and space which operate to control and constrain gameplay. (59)

Flynn takes much of her reading of de Certeau from Lev Manovich, who argues that a game designer “uses strategies to impose a particular matrix of space, time, experience, and meaning on his viewers; they, in turn, use ‘tactics’ to create their own trajectories […] within this matrix” (267). Manovich believes de Certeau’s theories offer a salient model for thinking about “the ways in which computer users navigate through computer spaces they did not design” (267). In Flynn’s and Manovich’s estimation, simply moving through digital space is a tactic, a subversion of its strategic and linear design.

The views of game space as tactical have historically (and paradoxically) treated the subject of videogames from a strategic perspective, as a configurable space to be “navigated through”, as a way of attaining a certain goal.
Dan Golding takes up this problem, distancing our engagement from the design and calling for a de Certeaudian treatment of videogame space “from below”, where “the spatial diegesis of the videogame is affordance based and constituted by the skills of the player”, including those accrued outside the game space (Golding 118). Similarly, Darshana Jayemanne adds a temporal element with the idea that these spatial constructions are happening alongside a “complexity” and “proliferation of temporal schemes” (Jayemanne 1, 4; see also Nikolchina). Building from Golding and Jayemanne, I illustrate here a space wherein the player, not the game, is at the fulcrum of both spatial and temporal complexity, by adding the notion that—along with skill and experience—players bring space and time with them into the game.

Viewed with the above understanding of strategies, tactics, skill, and temporality, the act of wandering in a videogame seems inherently subversive: on one hand, by undergoing a destination-less exploration of game space, I am rejecting the game’s spatial narrative trajectory; on the other, I am eschewing both skill accrual and temporal insistence to attempt a sense of pure Being-in-the-game. Such rebellious freedom, however, is part of the design of this particular game space. AC:R is a “sand box” game, which means it involves a large environment that can be traversed in a non-linear fashion, allowing, supposedly, for more freedom and exploration. Indeed, much of the gameplay involves slowly making more space available for investigation in an outward—rather than unidirectional—course. A player opens up these new spaces by “synchronising a viewpoint”, which can only be done by climbing to the top of specific landmarks. One of the fundamental elements of the AC franchise is an acrobatic, free-running, parkour style of engagement with a player’s surroundings, “where practitioners weave through urban environments, hopping over barricades, debris, and other obstacles” (Laviolette 242), climbing walls and traversing rooftops in a way unthinkable (and probably illegal) in our everyday lives. People scaling buildings in major metropolitan areas outside of videogame space tend to get arrested, if they survive the climb. Possibly, these renegade climbers are seeking what de Certeau describes as the “voluptuous pleasure […] of ‘seeing the whole,’ of looking down on, totalizing the most immoderate of human texts” (92)—what he experienced, looking down from the top of the World Trade Center in the late 1970s.

***

On digital ground level, back in 1759, I look up to the top of St. Paul’s bell tower and crave that pleasure, so I climb. As I make my way up, Non-Player Characters (NPCs)—the townspeople and trader avatars who make up the interactive human scenery of the game—shout things such as “You’ll hurt yourself” and “I say! What on earth is he doing?” This is the game’s way of convincing me that I am enacting agency and writing my own spatial story. I seem to be deploying “tricky and stubborn procedures that elude discipline without being outside the field in which it is exercised” (de Certeau 96), when I am actually following the program the way I am supposed to. If I were not meant to climb the tower, I simply would not be able to. The fact that game developers go to the extent of recording dialogue to shout at me when I do this proves that they expect my transgression.
This is part of the game’s “semi-social system”: a collection of in-game social norms that—to an extent—reflect the cultural understandings of outside non-digital society (Atkinson and Willis). These norms are enforced through social pressures and expectations in the game such that “these relative imperatives and influences, appearing to present players with ‘unlimited’ choices, [frame] them within the parameters of synthetic worlds whose social structure and assumptions are distinctly skewed in particular ways” (408). By using these semi-social systems, games communicate to players that performing a particular act is seen as wrong or scandalous by the in-game society (and therefore subversive), even when the action is necessary for the continuation of the spatial story.

When I reach the top of the bell tower, I am able to “synchronise the viewpoint”—that is, unlock the map of this area of the city. Previously, I did not have access to an overhead view of the area, but now that I have indulged in de Certeau’s pleasure of “seeing the whole”, I can see not only the tactical view from the street, but also the strategic bird’s-eye view from above. From the top, looking out over the city—now The City, a conceivable whole rather than a collection of streets—it is difficult to picture the neighbourhood engulfed in flames. The stair-step Dutch-inspired rooflines still recall the very recent change from New Amsterdam to New York, but in thirty years’ time, they will all be torched and rebuilt, replaced with colonial Tudor boxes. I imagine myself as an eighteenth-century de Certeau, surveying pre-ruination New York City. I wonder how his thoughts would have changed if his viewpoint were coloured with knowledge of the future. Standing atop the very symbol of global power and wealth—a duo-lith that would exist for less than three decades—would his pleasure have been less “voluptuous”? While de Certeau considers the viewer from above like Icarus, whose “elevation transfigures him into a voyeur” (92), I identify more with Daedalus, preoccupied with impending disaster. I swan-dive from the tower into a hay cart, returning to the bustle of the street below.

As I wander amongst the people of digital 1759 New York, the game continuously makes phatic advances at me. I bump into others on the street and they drop boxes they are carrying, or stumble to the side. Partially overheard conversations between townspeople—“… what with all these new taxes …”, “… but we’ve got a fine regiment here …”—both underscore the historical context of the game and imply that this is a world that exists even when I am not there. These characters and their conversations are as much a part of the strategic makeup of the city as the buildings are. They are the text, not the writers nor the readers. I am the only writer of this text, but I am merely transcribing a pre-programmed narrative. So, I am not an author, but rather a stenographer. For this short moment, though, I am allowed by the game to believe that I am making the choice not to transcribe; there are missions to complete, and I am ignoring them.
I am taking in the city, forgetting—just as the design intends—that I am the only one here, the only person in the entire world, indeed, the person for whom this world exists.

While wandering, I also experience conflicts and mergers between what Maurice Halbwachs has called historical, autobiographical, and collective memory types: respectively, these are memories created according to historical record, through one’s own life experience, and by the way a society tends to culturally frame and recall “important” events. De Certeau describes a memorable place as a “palimpsest, [where] subjectivity is already linked to the absence that structures it as existence” (109). Wandering through AC:R’s virtual representation of 1759 downtown New York, I am experiencing this palimpsest in multiple layers, activating my Halbwachsian memories, which influence one another in the creation of my subjectivity. This is the “absence” de Certeau speaks of. My visions of Revolutionary New York ablaze tug at me from beneath a veneer of peaceful Dutch architecture: two warring historical memory constructs. Simultaneously, this old world is painted on top of my autobiographical memories as a New Yorker for thirteen years, loudly ordering corned beef with Russian dressing at the deli that will be on this corner. Somewhere sandwiched between these layers hides a portrait of September 11th, 2001, painted either by collective memory or autobiographical memory, or, more likely, a collage of both. A plane entering a building. Fire. Seen by my eyes, and then re-seen countless times through the same televised imagery through which the rest of the world outside our small downtown village saw it. Which images are from media, and which from memory?

Above, as if presiding over the scene, Michel de Certeau hangs in the air at the collision site, suspended 1,000 feet above the North Pool of the 9/11 Memorial, rapt in “voluptuous pleasure”. And below, amid the colonists in their tricorns and waistcoats, people in grey ash-covered suits—ambulatory statues; golems—slowly and silently march ever uptown-wards. Dutch and Tudor town homes stretch skyward and transform into art-deco and glass monoliths. These multiform strata, like so many superimposed transparent maps, ground me in the idea of New York, creating the “fragmentary and inward-turning histories” (de Certeau 108) that give place to my subjectivity, allowing me to Be-there—even though, technically, I am not.

My conscious decision to ignore the game’s narrative and wander has made this moment possible. While I understand that this is entirely part of the intended gameplay, I also know that the design cannot possibly account for the particular way in which I experience the space. And this is the fundamental point I am asserting here: that—along with the strategies and temporal complexities of the design and the tactics and skills of those on the ground—we bring into digital space our own temporal and experiential constructions that allow us to Be-in-the-game in ways not anticipated by its strategic design. Non-digital virtuality—in the tangled forms of autobiographical, historic, and collective memory—reaches into digital space, transforming the experience. Further, this changed game-experience becomes a part of my autobiographical “prosthetic memory” that I carry with me (Landsberg). When I visit New York in the future, and I inevitably find myself abruptly met with the façade of St. Paul’s Chapel as I round the corner of Church Street and Vesey, I will be brought back to this moment.
Will I continue to wander, or will I—if just for a second—entertain the urge to climb?

***

After the recent near destruction by fire of Notre-Dame, a different game in the AC franchise was offered as a free download, because it is set in revolutionary Paris and includes a very detailed and interactive version of the cathedral. Perhaps right now, on sundry couches in various geographical locations, people are wandering there: strolling along the Seine, re-experiencing time they once spent there; overhearing tense conversations about regime change along the Champs-Élysées that sound disturbingly familiar; or scaling the bell tower of the Notre-Dame Cathedral itself—site of revolution, desecration, destruction, and future rebuilding—to reach the pleasure of seeing the strategic whole at the top. And maybe, while they are up there, they will glance south-southwest to the 15th arrondissement, where de Certeau lies, enjoying some voluptuous Icarian viewpoint as-yet unimagined.

References

Atkinson, Rowland, and Paul Willis. “Transparent Cities: Re‐Shaping the Urban Experience through Interactive Video Game Simulation.” City 13.4 (2009): 403–417. DOI: 10.1080/13604810903298458.

Benjamin, Walter. The Arcades Project. Trans. Howard Eiland and Kevin McLaughlin. Ed. Rolf Tiedemann. Cambridge, Mass.: Belknap Press, 2002.

Coates, Jamie. “Key Figure of Mobility: The Flâneur.” Social Anthropology 25.1 (2017): 28–41. DOI: 10.1111/1469-8676.12381.

De Certeau, Michel. The Practice of Everyday Life. Trans. Steven Rendall. Berkeley: University of California Press, 1984.

Flynn, Bernadette. “Games as Inhabited Spaces.” Media International Australia, Incorporating Culture and Policy 110 (2004): 52–61. DOI: 10.1177/1329878X0411000108.

Fuller, Mary, and Henry Jenkins. “Nintendo and New World Travel Writing: A Dialogue.” CyberSociety: Computer-Mediated Communication and Community. Ed. Steve Jones. Thousand Oaks: Sage, 1994. 57–72. <https://contentstore.cla.co.uk/secure/link?id=7dc700b8-cb87-e611-80c6-005056af4099>.

Golding, Daniel. “Putting the Player Back in Their Place: Spatial Analysis from Below.” Journal of Gaming & Virtual Worlds 5.2 (2013): 117–30. DOI: 10.1386/jgvw.5.2.117_1.

Halbwachs, Maurice. The Collective Memory. New York: Harper & Row, 1980.

Haraway, Donna. Staying with the Trouble: Making Kin in the Chthulucene. Durham: Duke University Press Books, 2016.

Heidegger, Martin. Existence and Being. Chicago: Henry Regnery Company, 1949.

Jayemanne, Darshana. “Chronotypology: A Comparative Method for Analyzing Game Time.” Games and Culture (2019): 1–16. DOI: 10.1177/1555412019845593.

Lammes, Sybille. “Playing the World: Computer Games, Cartography and Spatial Stories.” Aether: The Journal of Media Geography 3 (2008): 84–96. DOI: 10.1080/10402659908426297.

Landsberg, Alison. Prosthetic Memory: The Transformation of American Remembrance in the Age of Mass Culture. New York: Columbia University Press, 2004.

Laviolette, Patrick. “The Neo-Flâneur amongst Irresistible Decay.” Playgrounds and Battlefields: Critical Perspectives of Social Engagement. Eds. Martínez Jüristo and Klemen Slabina. Tallinn: Tallinn University Press, 2014. 243–71.

Manovich, Lev. The Language of New Media. Cambridge, Mass.: MIT Press, 2002.

Nikolchina, Miglena. “Time in Video Games: Repetitions of the New.” Differences 28.3 (2017): 19–43. DOI: 10.1215/10407391-4260519.

Rabinow, Paul, and Michel Foucault. “Interview with Michel Foucault on Space, Knowledge and Power.” Skyline (March 1982): 17–20.
APA, Harvard, Vancouver, ISO, and other styles
