
Journal articles on the topic 'Minimum likelihood ratio'


Consult the top 50 journal articles for your research on the topic 'Minimum likelihood ratio.'


You can also download the full text of each publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Li, Chen, and Xiaohu Li. "Likelihood ratio order of sample minimum from heterogeneous Weibull random variables." Statistics & Probability Letters 97 (February 2015): 46–53. http://dx.doi.org/10.1016/j.spl.2014.10.019.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Fu, Y. X., and W. H. Li. "Maximum likelihood estimation of population parameters." Genetics 134, no. 4 (August 1, 1993): 1261–70. http://dx.doi.org/10.1093/genetics/134.4.1261.

Abstract:
One of the most important parameters in population genetics is theta = 4*Ne*mu, where Ne is the effective population size and mu is the rate of mutation per gene per generation. We study two related problems, using the maximum likelihood method and the theory of coalescence. One problem is the potential improvement of accuracy in estimating the parameter theta over existing methods and the other is the estimation of the parameter lambda, which is the ratio of two thetas. The minimum variances of estimates of the parameter theta are derived under two idealized situations. These minimum variances serve as the lower bounds of the variances of all possible estimates of theta in practice. We then show that Watterson's estimate of theta based on the number of segregating sites is asymptotically an optimal estimate of theta. However, for a finite sample of sequences, substantial improvement over Watterson's estimate is possible when theta is large. The maximum likelihood estimate of lambda = theta_1/theta_2 is obtained and the properties of the estimate are discussed.
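Watterson's estimate mentioned in this abstract is simple to compute: with S segregating sites in a sample of n sequences, theta_W = S / a_n, where a_n = 1 + 1/2 + ... + 1/(n-1). A minimal sketch, using illustrative values that are not from the paper:

```python
def watterson_theta(num_segregating_sites, sample_size):
    """Watterson's estimator of theta = 4*Ne*mu: the number of segregating
    sites S divided by the harmonic number a_n = sum_{i=1}^{n-1} 1/i."""
    a_n = sum(1.0 / i for i in range(1, sample_size))
    return num_segregating_sites / a_n

# Illustrative values (not from the paper): 25 segregating sites, 10 sequences
theta_w = watterson_theta(25, 10)   # about 8.84
```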
3

Pusadan, Mohammad Yazdi, Joko Lianto Buliali, and Raden Venantius Hari Ginardi. "Anomaly detection on flight route using similarity and grouping approach based-on automatic dependent surveillance-broadcast." International Journal of Advances in Intelligent Informatics 5, no. 3 (November 30, 2019): 285. http://dx.doi.org/10.26555/ijain.v5i3.232.

Abstract:
Flight anomaly detection is used to determine abnormal state data on the flight route. This study focused on two groups: general aviation habits (C1) and anomalies (C2). Groups C1 and C2 are obtained through a similarity test with references. The methods used are: 1) normalizing the training data form, 2) forming the training segments, 3) calculating the log-likelihood value and determining the maximum log-likelihood (C1) and minimum log-likelihood (C2) values, 4) determining the percentage of data based on criteria C1 and C2 by grouping with SVM, KNN, and K-means, and 5) testing with the log-likelihood ratio. The results achieved in each segment are: the log-likelihood value in C1 latitude is -15.97 and in C1 longitude is -16.97. On the other hand, the log-likelihood value in C2 latitude is -19.3 (maximum) and -20.3 (minimum), and in C2 longitude is -21.2 (maximum) and -24.8 (minimum). The largest percentage value in C1 is 96%, while the largest in C2 is 10%. Thus, the highest potential anomaly rate is 10%, and the smallest is 3%. Performance is also tested with the F-measure to obtain accuracy and precision.
4

Buszkiewicz, James H., Heather D. Hill, and Jennifer J. Otten. "Association of State Minimum Wage Rates and Health in Working-Age Adults Using the National Health Interview Survey." American Journal of Epidemiology 190, no. 1 (February 10, 2020): 21–30. http://dx.doi.org/10.1093/aje/kwaa018.

Abstract:
States adopt minimum wages to improve workers’ economic circumstances and well-being. Many studies, but not all, find evidence of health benefits from higher minimum wages. This study used a rigorous “triple difference” strategy to identify the associations between state minimum wages and adult obesity, body mass index (weight (kg)/height (m)2), hypertension, diabetes, fair or poor health, and serious psychological distress. National Health Interview Survey data (United States, 2008–2015) on adults aged 25–64 years (n = 131,430) were linked to state policies to estimate the prevalence odds ratio or mean difference in these outcomes associated with a $1 increase in current and 2-year lagged minimum wage among less-educated adults overall and by sex, race/ethnicity, and age. In contrast to prior studies, there was no association between current minimum wage and health; however, 2-year lagged minimum wage was positively associated with the likelihood of obesity (prevalence odds ratio = 1.08, 95% confidence interval: 1.00, 1.16) and with elevated body mass index (mean difference = 0.27, 95% confidence interval: 0.04, 0.49). In subgroup models, current and 2-year lagged minimum wage were associated with a higher likelihood of obesity among male and non-White or Hispanic adults. The associations with hypertension also varied by sex and the timing of the exposure.
5

Rosinsky, Philip J., Jeffery W. Chen, Mitchell J. Yelton, Ajay C. Lall, David R. Maldonado, Mitchell B. Meghpara, Jacob Shapira, and Benjamin G. Domb. "Does failure to meet threshold scores for mHHS and iHOT-12 correlate to secondary operations following hip arthroscopy?" Journal of Hip Preservation Surgery 7, no. 2 (April 14, 2020): 272–80. http://dx.doi.org/10.1093/jhps/hnaa015.

Abstract:
The purpose of this study was to determine (i) if failing to achieve a patient-reported outcome (PRO) threshold at 1 year was associated with secondary operations at minimum 2-year follow-up and (ii) what outcome measure and threshold has the highest association with future surgeries. Inclusion criteria for this study were cases of primary hip arthroscopy between July 2014 and April 2017. Included patients had recorded pre-operative and 1-year post-operative modified Harris Hip Score (mHHS) and 12-item international Hip Outcome Tool (iHOT-12) scores. Patients were classified based on their ability to achieve minimal clinical important difference (MCID), substantial clinical benefit (SCB) and patient acceptable symptom state (PASS) for each PRO and the status of secondary operations at minimum 2-year follow-up. The sensitivity, specificity, accuracy, positive likelihood ratio and negative likelihood ratio for these thresholds were calculated. Of 425 eligible cases, 369 (86.8%) had minimum 2-year follow-up. Of the included patients, 28 underwent secondary operations (7.59%), with 14 undergoing secondary arthroscopies (3.79%) and 14 converting to total hip arthroplasty (3.79%). For mHHS, 267 (72.4%), 173 (46.9%) and 277 (75.1%) hips met MCID, SCB and PASS, respectively. For iHOT-12, 234 (63.4%), 218 (59.1%) and 280 (75.9%) hips met the respective thresholds. The highest specificity, sensitivity and accuracy were identified for iHOT-12 MCID (0.79), iHOT-12 PASS (0.79) and iHOT-12 MCID (0.77), respectively. Patients not attaining MCID and PASS for mHHS and iHOT-12 at 1-year post-operatively are at increased risk of secondary operation. The most accurate threshold associated with secondary operation (0.77) is not achieving iHOT-12 MCID. Level of evidence: retrospective case series: level IV.
6

Huang, Kou‐Yuan, and King‐sun Fu. "Syntactic pattern recognition for the classification of Ricker wavelets." GEOPHYSICS 50, no. 10 (October 1985): 1548–55. http://dx.doi.org/10.1190/1.1441845.

Abstract:
Syntactic pattern recognition techniques are applied to the analysis of one‐dimensional seismic traces for classification of Ricker wavelets. The system for one‐dimensional seismic analysis includes a likelihood ratio test, optimal amplitude‐dependent encoding, probability of detecting the signal involved in the global and local detection, plus minimum‐distance and nearest‐neighbor classification rules. The relation between error probability and Levenshtein distance is proposed.
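The Levenshtein distance this abstract relates to error probability is the standard edit distance; a minimal dynamic-programming sketch, independent of the paper's seismic application:

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance: the minimum number of
    insertions, deletions, and substitutions turning a into b."""
    prev = list(range(len(b) + 1))          # distances from "" to prefixes of b
    for i, ca in enumerate(a, start=1):
        curr = [i]                          # distance from a[:i] to ""
        for j, cb in enumerate(b, start=1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[len(b)]

d = levenshtein("kitten", "sitting")   # 3
```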
7

Turovsky, A. L., and O. V. Drobik. "PROCEDURE FOR EVALUATION OF THE SUPPORTING FREQUENCY SIGNAL OF THE SATELLITE COMMUNICATION SYSTEM IN CONTINUOUS MODE." Radio Electronics, Computer Science, Control, no. 2 (June 26, 2021): 28–38. http://dx.doi.org/10.15588/1607-3274-2021-2-3.

Abstract:
Context. One of the features of satellite communication systems is the advantageous use, during signal reception in continuous mode, of phase modulation for signals intended to carry useful information. The use of this type of modulation requires solving the problem of estimating the carrier frequency of the signal, and the estimation itself reduces to estimating the frequency of the maximum in the spectrum of a fragment of a sinusoidal signal against a background of additive Gaussian noise. The article considers the process of estimating the carrier frequency of a signal received by a satellite communication system in continuous mode according to the maximum likelihood rule. Objective. Development of a procedure for estimating the carrier frequency of a signal received by a satellite communication system in continuous mode according to the maximum likelihood rule. Method. The procedure proposed in the work, and the algorithm developed on its basis, make it possible to estimate the carrier frequency according to the maximum likelihood rule, taking into account uncertainty of all signal parameters in a satellite communication system operating in continuous mode. Results. For practical introduction of this algorithm into operating satellite communication schemes, schemes for its hardware realization are offered. To illustrate the limits of the minimum limiting variance of the carrier frequency estimate, the paper presents dependencies that allow comparing the minimum limiting variance defined by the lower Cramer-Rao bound with the minimum limiting variance determined taking all signal parameters into account. Conclusions.
Analysis of these dependences showed that, in real conditions, the minimum dispersion of the carrier-frequency estimate obtained by the maximum likelihood rule in a satellite communication system operating in continuous mode with uncertainty of all signal parameters may differ significantly from the minimum dispersion given by the lower Cramer-Rao bound. Prospective research, development, and creation of algorithms and techniques for estimating the carrier frequency at the minimum limiting variance under uncertainty of all parameters of the received signal should aim to bring that minimum limiting variance as close as possible to the lower Cramer-Rao bound that applies when the other signal parameters are known.
8

Tam, Vincent H., Amy N. Schilling, Shadi Neshat, Keith Poole, David A. Melnick, and Elizabeth A. Coyle. "Optimization of Meropenem Minimum Concentration/MIC Ratio To Suppress In Vitro Resistance of Pseudomonas aeruginosa." Antimicrobial Agents and Chemotherapy 49, no. 12 (December 2005): 4920–27. http://dx.doi.org/10.1128/aac.49.12.4920-4927.2005.

Abstract:
Suppression of resistance in a dense Pseudomonas aeruginosa population has previously been shown with optimized quinolone exposures. However, the relevance to β-lactams is unknown. We investigated the bactericidal activity of meropenem and its propensity to suppress P. aeruginosa resistance in an in vitro hollow-fiber infection model (HFIM). Two isogenic strains of P. aeruginosa (wild type and an AmpC stably derepressed mutant [MIC = 1 mg/liter]) were used. An HFIM inoculated with approximately 1 × 10^8 CFU/ml of bacteria was subjected to various meropenem exposures. Maintenance doses were given every 8 h to simulate the maximum concentration achieved after a 1-g dose in all regimens, but escalating unbound minimum concentrations (Cmins) were simulated with different clearances. Serial samples were obtained over 5 days to quantify the meropenem concentrations, the total bacterial population, and subpopulations with reduced susceptibilities to meropenem (>3× the MIC). For both strains, a significant bacterial burden reduction was seen with all regimens at 24 h. Regrowth was apparent after 3 days, with the Cmin/MIC ratio being ≤1.7 (time above the MIC, 100%). Selective amplification of subpopulations with reduced susceptibilities to meropenem was suppressed with a Cmin/MIC of ≥6.2 or by adding tobramycin to meropenem (Cmin/MIC = 1.7). Investigations that were longer than 24 h and that used high inocula may be necessary to fully evaluate the relationship between drug exposures and the likelihood of resistance suppression. These results suggest that the Cmin/MIC of meropenem can be optimized to suppress the emergence of non-plasmid-mediated P. aeruginosa resistance. Our in vitro data support the use of an extended duration of meropenem infusion for the treatment of severe nosocomial infections in combination with an aminoglycoside.
9

Juskowiak, Jochen, and Bernd Bertsche. "Application and Simulation Study of Stress-Dependent Weibull Lifetime Models." International Journal of Reliability, Quality and Safety Engineering 23, no. 02 (April 2016): 1650008. http://dx.doi.org/10.1142/s021853931650008x.

Abstract:
Different Weibull lifetime models are presented whose scale, shape and minimum lifetime parameters are stress-dependent. This allows describing and predicting the lifetime of products with a Weibull distribution more accurately wherever stress-dependence applies to the failure mechanism. For instance, this is the case for failures due to fatigue, on which this paper focusses. The proposed procedure encompasses a two-step maximum likelihood estimation and a Fisher matrix (FM) confidence bounds calculation, followed by a model evaluation. This model evaluation is conducted by means of a general plausibility check (PC), likelihood ratio test (LRT) and Bayesian information criterion (BIC). Their applicability to accelerated life test data is discussed and validated using test data. Finally, a simulation study confirms a wide range of applicability.
10

Erfiyani, S., and Amira Permatasari Tarigan. "KETEPATAN PEMERIKSAAN RADIOLOGI DAN BTA APUSAN LANGSUNG DENGAN KULTUR DALAM DIAGNOSIS TUBERKULOSIS PARU DI MEDAN." Jurnal Ilmiah PANNMED (Pharmacist, Analyst, Nurse, Nutrition, Midwivery, Environment, Dentist) 9, no. 3 (January 28, 2019): 238–44. http://dx.doi.org/10.36911/pannmed.v9i3.214.

Abstract:
Lung tuberculosis is one of the transmitted diseases caused by Mycobacterium tuberculosis through droplet nuclei. Diagnosis of lung tuberculosis is usually made by examining direct sputum smears, thorax photos, and culture. The research was a descriptive study using a diagnostic test, and the samples consisted of sputum and thorax photos. The sputum was taken on the spot, in the morning, and on the spot again, and was examined using the Ziehl-Neelsen method and culture. The objective of the research was to find out the effectiveness of direct BTA smear and radiology examination compared with culture. The samples, which fulfilled the inclusion criteria, were taken from the private practices of tuberculosis specialists in Medan and from BP4, Medan; radiology examination and three phlegm collections for direct BTA smear and culture were performed, after which the radiology and direct BTA smear diagnostic tests were compared with culture. The 60 samples examined with the direct smear method indicated a sensitivity of 59.38%, specificity of 92.86%, positive predictive value of 90.48%, negative predictive value of 66.7%, positive likelihood ratio of 8.31, and negative likelihood ratio of 0.44. The radiology method indicated a sensitivity of 62.63%, specificity of 82.14%, positive predictive value of 80.77%, negative predictive value of 67.65%, positive likelihood ratio of 3.67, and negative likelihood ratio of 0.42. Of the 21 positive BTA samples, 4 (6.7%) showed minimum lung damage, 9 (15.0%) moderate damage, and 5 (8.3%) wide damage.
The results showed that the radiology examination had a higher sensitivity than the direct smear method when compared with culture as the gold standard in BTA examination. The greater clinical benefit of the direct smear method can be seen from its positive predictive value, which was higher than that of the radiology method, so it is recommended that both examination techniques be used to diagnose lung tuberculosis.
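The sensitivity, specificity, predictive values, and likelihood ratios reported above all follow from a 2×2 confusion table: LR+ = sensitivity/(1 − specificity) and LR− = (1 − sensitivity)/specificity. A sketch using hypothetical counts chosen to reproduce the direct-smear figures (the actual counts are not given in the abstract):

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard 2x2 diagnostic-test summaries against a gold standard."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    ppv = tp / (tp + fp)          # positive predictive value
    npv = tn / (tn + fn)          # negative predictive value
    lr_pos = sens / (1 - spec)    # positive likelihood ratio
    lr_neg = (1 - sens) / spec    # negative likelihood ratio
    return dict(sensitivity=sens, specificity=spec,
                ppv=ppv, npv=npv, lr_pos=lr_pos, lr_neg=lr_neg)

# Hypothetical counts (60 samples total) consistent with the direct-smear
# percentages quoted above: 19 TP, 2 FP, 13 FN, 26 TN
m = diagnostic_metrics(19, 2, 13, 26)   # sensitivity ~0.594, LR+ ~8.31
```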
11

Jhwueng, Dwueng-Chwuan, and Brian C. O’Meara. "On the Matrix Condition of Phylogenetic Tree." Evolutionary Bioinformatics 16 (January 2020): 117693432090172. http://dx.doi.org/10.1177/1176934320901721.

Abstract:
Phylogenetic comparative analyses use trees of evolutionary relationships between species to understand their evolution and ecology. A phylogenetic tree of n taxa can be algebraically transformed into an n × n square symmetric phylogenetic covariance matrix C, where each element Cij in C represents the affinity between extant species i and extant species j. This matrix C is used internally in several comparative methods: for example, it is often inverted to compute the likelihood of the data under a model. However, if the matrix is ill-conditioned (i.e., if its condition number, defined by the ratio of the maximum eigenvalue of C to the minimum eigenvalue of C, is too high), this inversion may not be stable, and thus neither will be the calculation of the likelihood or of parameter estimates based on optimizing the likelihood. We investigate this potential issue and propose several methods to attempt to remedy it.
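The condition number described in this abstract is straightforward to compute; a sketch with a toy symmetric covariance matrix (illustrative values, not derived from a real phylogenetic tree):

```python
import numpy as np

def condition_number(C):
    """Ratio of the largest to the smallest eigenvalue of a symmetric
    positive-definite covariance matrix; large values signal an
    ill-conditioned matrix whose inverse is numerically unstable."""
    eig = np.linalg.eigvalsh(C)   # eigenvalues in ascending order
    return eig[-1] / eig[0]

# Toy 3-taxon covariance: taxa 1 and 2 share a long branch, so their
# rows are nearly identical and the matrix is poorly conditioned
C = np.array([[1.0, 0.9, 0.1],
              [0.9, 1.0, 0.1],
              [0.1, 0.1, 1.0]])
kappa = condition_number(C)   # roughly 19
```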
12

Fernández-Martínez, Juan Luis, Enrique J. deAndrés-Galiana, Ana Cernea, Francisco Javier Fernández-Ovies, and Manuel Menéndez. "Sampling Defective Pathways in Parkinson Disease." Journal of Medical Informatics and Decision Making 1, no. 1 (January 22, 2019): 37–52. http://dx.doi.org/10.14302/issn.2641-5526.jmid-18-2529.

Abstract:
Discrimination of case-control status based on gene expression differences has potential to identify novel pathways relevant to neurodegenerative diseases including Parkinson’s disease (PD). In this paper we applied two different novel algorithms to predict dysregulated pathways of gene expression across several different regions of the brain in PD and controls. The Fisher’s ratio sampler uses the Fisher’s ratio of the most discriminatory genes as the prior probability distribution to sample the genetic networks, and their likelihood (accuracy) was established via Leave-One-Out Cross-Validation (LOOCV). The holdout sampler finds the minimum-scale signatures corresponding to different random holdouts, establishing their likelihood using the validation dataset in each holdout. Phenotype prediction problems are, by their genesis, highly underdetermined. We used both approaches to sample different lists of genes that optimally discriminate PD from controls and subsequently used gene ontology to identify pathways affected by disease. Both algorithms identified common pathways of Insulin signaling, FOXA1 Transcription Factor Network, HIF-1 Signaling, p53 Signaling and Chromatin Regulation/Acetylation. This analysis provides new therapeutic targets to treat PD.
13

Zhao, Gaoli, Jianping Wang, Junping Song, and Wei Chen. "A novel signal detection algorithm of multiple-input multiple-output Vertical-Bell Laboratories Layered Space-Time for underwater acoustic networks based on the improved minimum mean square error." International Journal of Distributed Sensor Networks 16, no. 12 (December 2020): 155014772097989. http://dx.doi.org/10.1177/1550147720979898.

Abstract:
Multiple-input multiple-output is a commonly used technology supporting high-rate transmission over frequency-selective fading channels with multiple antennas. Vertical-Bell Laboratories Layered Space-Time is a detection method for a multiple-input multiple-output system, which establishes a direct correspondence between antennas and layers. Studies demonstrate that multiple-input multiple-output Vertical-Bell Laboratories Layered Space-Time is a meaningful way to build underwater acoustic networks of high performance. However, considering the hardware constraints and energy consumption, achieving a trade-off between the bit error ratio and complexity is a crucial issue for underwater acoustic networks built on multiple-input multiple-output Vertical-Bell Laboratories Layered Space-Time systems. This article proposes a novel signal detection algorithm for multiple-input multiple-output Vertical-Bell Laboratories Layered Space-Time. First, we address the unitary matrix of the underwater acoustic channel by LDL^H decomposition. Second, we order the detection sequence based on the permutation matrix. Third, we detail the implementation of interference cancelation and slice processing. Finally, we perform experiments comparing the bit error ratio, energy consumption, processing delay, and complexity of the proposed algorithm with the zero-forcing, minimum mean square error, and maximum likelihood Vertical-Bell Laboratories Layered Space-Time detectors. Results indicate that our algorithm maintains a bit error ratio and processing delay close to those of the maximum likelihood Vertical-Bell Laboratories Layered Space-Time algorithm while reducing energy consumption, which achieves a good trade-off between performance and complexity. This work supports the construction of underwater acoustic networks based on the multiple-input multiple-output Vertical-Bell Laboratories Layered Space-Time system.
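For context on the baseline the paper improves, a generic linear MMSE MIMO detector estimates the transmitted vector as x_hat = (H^H H + σ²I)⁻¹ H^H y. A minimal numpy sketch of that standard detector (not the paper's improved algorithm; channel and symbols are illustrative):

```python
import numpy as np

def mmse_detect(H, y, noise_var):
    """Linear MMSE estimate of x from y = H x + n:
    x_hat = (H^H H + sigma^2 I)^(-1) H^H y."""
    nt = H.shape[1]
    W = np.linalg.solve(H.conj().T @ H + noise_var * np.eye(nt),
                        H.conj().T)
    return W @ y

rng = np.random.default_rng(0)
# 4 receive antennas, 2 transmit antennas, Rayleigh-like random channel
H = (rng.standard_normal((4, 2)) + 1j * rng.standard_normal((4, 2))) / np.sqrt(2)
x = np.array([1 + 1j, -1 - 1j])      # QPSK-like symbols
y = H @ x                            # noiseless received vector for the demo
x_hat = mmse_detect(H, y, 1e-6)      # near zero-forcing at tiny noise power
```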
14

Cheng, S., M. Dietrich, S. Finnigan, A. Sandler, J. Crites, L. Ferranti, A. Wu, and D. Dilts. "A sense of urgency: Evaluating the link between clinical trial development time and the accrual performance of CTEP-sponsored studies." Journal of Clinical Oncology 27, no. 18_suppl (June 20, 2009): CRA6509. http://dx.doi.org/10.1200/jco.2009.27.18_suppl.cra6509.

Abstract:
CRA6509 Background: Post-activation barriers to oncology clinical trial accruals are well documented; however, potential barriers prior to trial opening are not. We investigate one such barrier: trial development time. Methods: National Cancer Institute Cancer Therapy Evaluation Program (NCI-CTEP) sponsored trials for all therapeutic, non-pediatric phase I, I/II, II, and III studies activated in an eight-year period (2000–2007) were investigated (n=553). Successful trials were those achieving 100% of minimum accrual goal. Time to open a study was the calendar time from initial CTEP submission to trial activation. Multivariable logistic regression analysis was used to calculate unadjusted and adjusted odds ratios, controlling for study phase and size of expected accruals. Results: 40.0 percent (n=221) of CTEP-approved oncology trials failed to achieve minimum accrual goals, with 49.2 percent (n=30) of phase III trials failing to achieve at least 25 percent of accrual goals. A total of 8,723 patients (17.0% of accruals) accrued to those studies that were unable to achieve the projected minimum accrual goal. Trials requiring 9–12 months of development were significantly more likely to achieve accrual goals (odds ratio, 1.94; 95% CI, 1.06 to 3.52, P=0.031) than trials requiring the median time (15–18 months); trials that exceeded 27 months of development time were significantly less likely to achieve accrual goals (odds ratio, 0.14; 95% CI, 0.04 to 0.54, P=0.004). Conclusions: A large percentage of oncology clinical trials do not achieve minimum projected accruals. Trial development time appears to be one important predictor of the likelihood of successfully achieving the minimum accrual goals. [Table: see text] No significant financial relationships to disclose.
15

Cathcart, Mark J., Hsiao Yen Lok, Alexander J. McNeil, and Steven Morrison. "CALCULATING VARIABLE ANNUITY LIABILITY “GREEKS” USING MONTE CARLO SIMULATION." ASTIN Bulletin 45, no. 2 (January 5, 2015): 239–66. http://dx.doi.org/10.1017/asb.2014.31.

Abstract:
The implementation of hedging strategies for variable annuity products requires the calculation of market risk sensitivities (or “Greeks”). The complex, path-dependent nature of these products means that these sensitivities are typically estimated by Monte Carlo methods. Standard market practice is to use a “bump and revalue” (BnR) method in which sensitivities are approximated by finite differences. As well as requiring multiple valuations of the product, this approach is often unreliable for higher-order Greeks, such as gamma, and alternative pathwise (PW) and likelihood-ratio estimators should be preferred. This paper considers a stylized guaranteed minimum withdrawal benefit product in which the reference equity index follows a Heston stochastic volatility model in a stochastic interest rate environment. The complete set of first-order sensitivities with respect to index value, volatility and interest rate and the most important second-order sensitivities are calculated using PW, likelihood-ratio and mixed methods. It is observed that the PW method delivers the best estimates of first-order sensitivities while mixed estimation methods deliver considerably more accurate estimates of second-order sensitivities; moreover, there are significant computational gains involved in using PW and mixed estimators rather than simple BnR estimators when many Greeks have to be calculated.
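The pathwise idea is easiest to see in a much simpler setting than the paper's Heston/GMWB product: for a European call under geometric Brownian motion, the pathwise delta estimator averages e^(-rT) · 1{S_T > K} · S_T/S_0 over simulated paths, with no bumping. A sketch under single-factor Black-Scholes assumptions and illustrative parameters:

```python
import math
import random

def mc_call_delta_pathwise(s0, k, r, sigma, t, n_paths, seed=1):
    """Pathwise Monte Carlo estimate of a European call delta under GBM:
    dC/dS0 = E[ e^{-rT} * 1{S_T > K} * S_T / S0 ]."""
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(n_paths):
        z = rng.gauss(0.0, 1.0)
        st = s0 * math.exp((r - 0.5 * sigma**2) * t + sigma * math.sqrt(t) * z)
        if st > k:
            acc += math.exp(-r * t) * st / s0
    return acc / n_paths

# At-the-money call: the closed-form Black-Scholes delta here is about 0.579
delta = mc_call_delta_pathwise(100.0, 100.0, 0.02, 0.2, 1.0, 200_000)
```

Unlike bump-and-revalue, a single set of paths yields the sensitivity, which is one source of the computational gains the paper reports.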
16

Kim, Hye Jeong, In Ho Choi, So-Young Jin, Hyeong Kyu Park, Dong Won Byun, Kyoil Suh, and Myung Hi Yoo. "Efficacy of Shear-Wave Elastography for Detecting Postoperative Cervical Lymph Node Metastasis in Papillary Thyroid Carcinoma." International Journal of Endocrinology 2018 (September 4, 2018): 1–6. http://dx.doi.org/10.1155/2018/9382649.

Abstract:
Aims. To evaluate shear-wave elastography (SWE) as a tool to detect postoperative cervical lymph node (LN) metastasis in patients with papillary thyroid carcinoma (PTC). Methods. Forty-three LNs of 43 patients with PTC undergoing ultrasound (US) and SWE before ultrasound-guided fine-needle aspiration (FNA) for the evaluation of postoperative cervical LN recurrences were analyzed. The mean (EMean), minimum (EMin), maximum (EMax), and standard deviation (ESD) of SWE elasticity indices were measured. Results. Among 43 indeterminate or suspicious LNs, 12 were malignant and 31 were benign. The EMean, EMin, EMax, and ESD values were significantly higher in malignant LNs than in benign LNs (EMean: 37.1 kPa in malignant versus 11.8 kPa in benign LNs, P<0.001; EMin: 11.3 kPa versus 5.1 kPa, P=0.046; EMax: 50.5 kPa versus 23.7 kPa, P<0.001; and ESD: 7.8 kPa versus 4.1 kPa, P=0.006). EMax had the highest accuracy (93.0%) when applied with a cut-off value of 37.5 kPa. It had a positive likelihood ratio of 25.83 and a diagnostic odds ratio of 150.0. Conclusions. The shear elasticity index of EMax, with higher likelihood ratios for malignant LNs, may help identify postoperative cervical LN metastasis in PTC patients with indeterminate or suspicious LNs.
17

Zhang, Ning, Jun Tang, Shuang Wan, and Wei Zhu. "Three-Dimensional Extensions of Resolvability of Two Point Targets in MIMO Radar Using a Hypothesis Test Formulation." International Journal of Antennas and Propagation 2014 (2014): 1–9. http://dx.doi.org/10.1155/2014/693947.

Abstract:
Multiple-input multiple-output (MIMO) radar has been extensively studied in recent years, due to its outstanding performance in various aspects. Resolution and estimation of the targets’ parameters are central capabilities of a MIMO radar system. In this paper, the resolution performance of MIMO radar in terms of the three-dimensional statistical resolution limit (3D SRL) is investigated, where the three dimensions refer to the direction of arrival (DOA), direction of departure (DOD), and Doppler frequency of a target. The scenario of two closely spaced targets corrupted by broadband noise with/without clutter sources is considered and the closed-form expression of the resolution threshold, which is the minimum signal-to-noise ratio (SNR) required for a given resolution performance, is derived using a generalized likelihood ratio test (GLRT) formulation. Finally, both theoretical and numerical analyses are provided for some insights regarding the resolution limit.
18

Hossein-Zadeh, Navid Ghavi. "Modelling lactation curve for milk fat to protein ratio in Iranian buffaloes (Bubalus bubalis) using non-linear mixed models." Journal of Dairy Research 83, no. 3 (August 2016): 334–40. http://dx.doi.org/10.1017/s0022029916000340.

Abstract:
The aim of this study was to compare seven non-linear mathematical models (Brody, Wood, Dhanoa, Sikka, Nelder, Rook and Dijkstra) to examine their efficiency in describing the lactation curves for milk fat to protein ratio (FPR) in Iranian buffaloes. Data were 43 818 test-day records for FPR from the first three lactations of Iranian buffaloes which were collected on 523 dairy herds in the period from 1996 to 2012 by the Animal Breeding Center of Iran. Each model was fitted to monthly FPR records of buffaloes using the non-linear mixed model procedure (PROC NLMIXED) in SAS and the parameters were estimated. The models were tested for goodness of fit using Akaike's information criterion (AIC), Bayesian information criterion (BIC) and log maximum likelihood (−2 Log L). The Nelder and Sikka mixed models provided the best fit of lactation curve for FPR in the first and second lactations of Iranian buffaloes, respectively. However, Wood, Dhanoa and Sikka mixed models provided the best fit of lactation curve for FPR in the third parity buffaloes. Evaluation of first, second and third lactation features showed that all models, except for Dijkstra model in the third lactation, under-predicted test time at which daily FPR was minimum. On the other hand, minimum FPR was over-predicted by all equations. Evaluation of the different models used in this study indicated that non-linear mixed models were sufficient for fitting test-day FPR records of Iranian buffaloes.
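The AIC and BIC used for model comparison in this abstract are simple functions of the maximized log-likelihood L and the number of parameters k. A sketch with hypothetical fit values (not the paper's numbers):

```python
import math

def aic(log_lik, k):
    """Akaike information criterion: 2k - 2 ln L (lower is better)."""
    return 2 * k - 2 * log_lik

def bic(log_lik, k, n):
    """Bayesian information criterion: k ln n - 2 ln L (lower is better)."""
    return k * math.log(n) - 2 * log_lik

# Hypothetical fits: a 3- and a 4-parameter lactation model with
# log-likelihoods -1520.4 and -1518.9 on n = 1000 test-day records
aic_3, aic_4 = aic(-1520.4, 3), aic(-1518.9, 4)
bic_3, bic_4 = bic(-1520.4, 3, 1000), bic(-1518.9, 4, 1000)
# With these values AIC favors the richer model, while BIC's stronger
# penalty (k ln n instead of 2k) favors the simpler one: the criteria
# can disagree, which is why studies report both.
```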
19

Djeumou, Brice, Samson Lasaulce, and Antoine O. Berthet. "Combining Coded Signals with Arbitrary Modulations in Orthogonal Relay Channels." Research Letters in Communications 2008 (2008): 1–4. http://dx.doi.org/10.1155/2008/287320.

Full text
Abstract:
We consider a relay channel for which the following assumptions are made. (1) The source-destination and relay-destination channels are orthogonal (frequency division relay channel). (2) The relay implements the decode-and-forward protocol. (3) The source and relay implement the same channel encoder, namely, a convolutional encoder. (4) They can use arbitrary and possibly different modulations. In this framework, we derive the best combiner in the sense of the maximum likelihood (ML) at the destination and the branch metrics of the trellis associated with its channel decoder for the ML combiner and also for the maximum ratio combiner (MRC), cooperative-MRC (C-MRC), and the minimum mean-square error (MMSE) combiner.
APA, Harvard, Vancouver, ISO, and other styles
20

Hoffmann, Jeffrey. "Preemption and the MLR Provision of the Affordable Care Act." American Journal of Law & Medicine 40, no. 2-3 (June 2014): 280–97. http://dx.doi.org/10.1177/009885881404000207.

Full text
Abstract:
This Note focuses on the medical loss ratio provision (“MLR Provision”) of the Patient Protection and Affordable Care Act (ACA). The MLR Provision states that health insurance companies must spend at least a certain percentage of their premium revenue on “activities that improve healthcare quality” (in other words, meet a minimum threshold medical loss ratio) and comply with reporting requirements determined by the Secretary of the United States Department of Health and Human Services (HHS). Because states have historically had authority over the regulation of health insurance, there is an outstanding question as to whether or not the MLR Provision has legal authority to preempt conflicting state MLR regulations. Part II of this Note outlines the major requirements in the MLR Provision and discusses the history of MLR regulation in the United States. Part III discusses the likelihood that the courts will soon resolve the question of preemption regarding the MLR Provision.
APA, Harvard, Vancouver, ISO, and other styles
21

Sheikh, Nurnabi, Raisul Akram, Nausad Ali, SM Raysul Haque, Shabareen Tisha, Rashidul Alam Mahumud, Abdur Razzaque Sarker, and Marufa Sultana. "Infant and young child feeding practice, dietary diversity, associated predictors, and child health outcomes in Bangladesh." Journal of Child Health Care 24, no. 2 (June 3, 2019): 260–73. http://dx.doi.org/10.1177/1367493519852486.

Full text
Abstract:
The aim of this study was to explore the association of Infant and Young Child Feeding (IYCF) practices with health and nutritional status among children aged 0–23 months and to investigate the predictors of minimum acceptable diets (MADs) using Bangladesh Demographic and Health Survey (BDHS) data. Binary logistic regression models were performed to assess the association between IYCF and child health and to determine the influential predictors for MAD. About 55% of mothers reported exclusive breastfeeding; 65% introduced solid, semisolid, or soft foods for their child; and 27% maintained minimum dietary diversity (MDD). About 64% of children received the recommended minimum meal frequency (MMF) and 23% received the recommended MAD. The likelihood of wasting was .22 times lower for children who received MDD and MMF. MDD and MAD were associated with a lower probability of underweight among children (adjusted odds ratio [AOR] = .73 and AOR = .81, respectively). Early initiation and continuation of breastfeeding were significantly associated with a reduction in diarrhea prevalence among young children. The findings of the study generated imperative evidence related to dietary diversity, associated factors, and child health outcomes. Policy should focus on the improvement of IYCF practices and complementary food diversity by taking initiatives for designing and implementing effective interventions to tackle childhood morbidity.
APA, Harvard, Vancouver, ISO, and other styles
22

Burgmann, Heinz, Brigitte Stoiser, Gottfried Heinz, Peter Schenk, Petra Apfalter, Konstantin Zedtwitz-Liebenstein, Michael Frass, and Yehuda Carmeli. "Likelihood of Inadequate Treatment A Novel Approach to Evaluating Drug-Resistance Patterns." Infection Control & Hospital Epidemiology 30, no. 7 (July 2009): 672–77. http://dx.doi.org/10.1086/598245.

Full text
Abstract:
Objective. To provide a novel way to predict the likelihood that antibiotic therapy will result in prompt, adequate therapy on the basis of local microbiological data. Design and Setting. Prospective study conducted at 3 medical intensive care units at the Viennese General Hospital, a tertiary care medical university teaching hospital in Vienna, Austria. Patients. One hundred one patients who received mechanical ventilation and who met the criteria for having ventilator-associated pneumonia. Design. Fiberoptic bronchoscopic examination was performed, and bronchoalveolar samples were collected. Samples were analyzed immediately by a single technician. Minimum inhibitory concentrations were determined for imipenem, cephalosporins (cefepime and cefpirome), ciprofloxacin, and piperacillin-tazobactam, and drug resistance rates were calculated. These drug resistance rates were translated into the likelihood of inadequate therapy (LIT; the frequency of inadequately treated patients per antibiotic and drug-resistant strain), cumulative LIT (the cumulative frequency of inadequately treated patients), and syndrome-specific LIT. Results. Among the 101 bronchoalveolar samples, culture yielded significant (at least 1 × 10⁴ colony-forming units per mL) polymicrobial findings for 34 and significant monomicrobial findings for 31; 36 culture results were negative. Of the isolates from patients with ventilator-associated pneumonia who had monomicrobial culture findings, 33% were gram-positive bacteria and 20% were gram-negative bacteria. LIT suggested that 1 of 2 patients was treated inadequately for Pseudomonas aeruginosa infection. The LIT for patients with ventilator-associated pneumonia revealed that the rank order of antibiotics for appropriate therapy was (1) imipenem, (2) cephalosporins, (3) ciprofloxacin, and (4) piperacillin-tazobactam. These calculations were based solely on microbiological data. Conclusions. The novel ratio LIT may help clinicians use microbiological data on drug resistance to predict which antimicrobial agents will provide adequate therapy. In daily practice, this new approach may be helpful for choosing adequate antimicrobial therapy.
APA, Harvard, Vancouver, ISO, and other styles
23

COHEN, SHIMON, and NATHAN INTRATOR. "ON DIFFERENT MODEL SELECTION CRITERIA IN A FORWARD AND BACKWARD REGRESSION HYBRID NETWORK." International Journal of Pattern Recognition and Artificial Intelligence 18, no. 05 (August 2004): 847–65. http://dx.doi.org/10.1142/s0218001404003447.

Full text
Abstract:
An assessment of the performance of a hybrid network with different model selection criteria is considered. These criteria are used in an automatic model selection algorithm for constructing a hybrid network of radial and perceptron hidden units for regression. A forward step builds the full hybrid network; a model selection criterion is used for controlling the network-size and another criterion is used for choosing the appropriate hidden unit for different regions of input space. This is followed by a conservative pruning step using Likelihood Ratio Test, Bayesian or Minimum Description Length, which leads to robust estimators with low variance. The result is a small architecture that performs well on difficult, realistic, benchmark data-sets of high dimensionality and small training size. Best results are obtained by using the Bayesian approach for the model selection.
APA, Harvard, Vancouver, ISO, and other styles
24

Costa-Moreira, Pedro, Filipe Vilas-Boas, Diana Martins, Pedro Moutinho-Ribeiro, Susana Lopes, Joanne Lopes, Helena Barroca, and Guilherme Macedo. "Use of suction during endoscopic ultrasound-guided fine needle biopsy of solid pancreatic lesions with a Franseen-tip needle: a pilot comparative trial." Endoscopy International Open 09, no. 03 (February 19, 2021): E401—E408. http://dx.doi.org/10.1055/a-1336-3170.

Full text
Abstract:
Abstract Background and study aims The utility of suction during endoscopic ultrasound (EUS) fine-needle biopsy (FNB) using a Franseen-tip needle remains unclear and has not been evaluated in randomized trials. We designed a randomized crossover trial to compare the diagnostic yield during EUS-FNB using a 22G Franseen-tip needle, with and without standard suction. Patients and methods Consecutive patients undergoing EUS-guided sampling of solid pancreatic lesions were recruited. A minimum of two passes were performed for each case: one with 20-mL syringe suction (S+) and another without (S–). The order of passes was randomized and the pathologist blinded. The endpoints were the diagnostic yield and the impact of blood contamination on the diagnosis. Results Fifty consecutive patients were enrolled. The overall diagnostic accuracy was 84 %. A diagnosis of malignancy was obtained in 70 samples: 36 in the S+ group and 34 in the S– group. A statistically significant difference was seen in the diagnostic accuracy (S+: 78 % vs. S–: 72 %, P < 0.01) and blood contamination (S+: 68 %; S–: 44 %, P < 0.01). The sensitivity, specificity, negative likelihood ratio, and positive likelihood ratio for S+ vs. S– samples were 76.6 % vs. 73.9 %, 100 % vs. 100 %, 0.23 vs. 0.26, and NA vs. NA, respectively. A negative impact of blood contamination on the overall diagnostic yield was not seen, even in samples where suction was used (OR 0.36, P = 0.15). Conclusions We found a higher diagnostic yield with the use of suction. It was associated with a higher degree of sample blood contamination that did not affect the diagnostic performance.
APA, Harvard, Vancouver, ISO, and other styles
25

Khan, Imran, Mohammad Zafar, Majid Ashraf, and Sunghwan Kim. "Computationally Efficient Channel Estimation in 5G Massive Multiple-Input Multiple-output Systems." Electronics 7, no. 12 (December 3, 2018): 382. http://dx.doi.org/10.3390/electronics7120382.

Full text
Abstract:
Traditional channel estimation algorithms such as minimum mean square error (MMSE) are widely used in massive multiple-input multiple-output (MIMO) systems, but require a matrix inversion operation and an enormous amount of computations, which result in high computational complexity and make them impractical to implement. To overcome the matrix inversion problem, we propose a computationally efficient hybrid steepest descent Gauss–Seidel (SDGS) joint detection, which directly estimates the user’s transmitted symbol vector, and can quickly converge to obtain an ideal estimation value with a few simple iterations. Moreover, signal detection performance was further improved by utilizing the bit log-likelihood ratio (LLR) for soft channel decoding. Simulation results showed that the proposed algorithm had better channel estimation performance, which improved the signal detection by 31.68% while the complexity was reduced by 45.72%, compared with the existing algorithms.
APA, Harvard, Vancouver, ISO, and other styles
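The motivation in the entry above is replacing the explicit matrix inversion of MMSE-style detection with cheap iterations. As a hedged illustration of that idea, the sketch below implements a plain Gauss–Seidel solver for A x = b; it is a generic stand-in, not the paper's hybrid steepest-descent Gauss–Seidel (SDGS) detector, and the example matrix is an assumption.

```python
import numpy as np

def gauss_seidel(A, b, iters=100, tol=1e-10):
    """Solve A x = b iteratively, avoiding an explicit matrix inverse.

    Converges for diagonally dominant (or Hermitian positive-definite)
    matrices such as those arising in MMSE-style detection filters."""
    x = np.zeros_like(b, dtype=float)
    for _ in range(iters):
        x_old = x.copy()
        for i in range(A.shape[0]):
            # Use the freshest values for entries already updated this sweep.
            s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x_old[i + 1:]
            x[i] = (b[i] - s) / A[i, i]
        if np.linalg.norm(x - x_old, np.inf) < tol:
            break
    return x

# Small diagonally dominant system as a stand-in for an MMSE filtering matrix.
A = np.array([[10.0, 2.0, 1.0],
              [2.0, 12.0, 3.0],
              [1.0, 3.0, 9.0]])
b = np.array([7.0, 5.0, 3.0])
x_gs = gauss_seidel(A, b)
```

Each sweep costs O(n²) instead of the O(n³) of a direct inverse, which is the complexity saving such detectors exploit at massive-MIMO dimensions.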
26

Carvalho, Lídia Raquel de, Sheila Zambello de Pinho, and Martha Maria Mischan. "Methods to verify parameter equality in nonlinear regression models." Scientia Agricola 67, no. 2 (April 2010): 218–22. http://dx.doi.org/10.1590/s0103-90162010000200014.

Full text
Abstract:
In biologic experiments, in which growth curves are adjusted to sample data, treatments applied to the experimental material can affect the parameter estimates. In these cases the interest is to compare the growth functions, in order to distinguish treatments. Three methods that verify the equality of parameters in nonlinear regression models were compared: (i) developed by Carvalho in 1996, performing ANOVA on estimates of parameters of individual fits; (ii) suggested by Regazzi in 2003, using the likelihood ratio method; and (iii) constructing a pooled variance from individual variances. The parametric tests, F and Tukey, were employed when the parameter estimators came close to possessing the properties of linear-model estimators, that is, unbiasedness, normal distribution, and minimum variance. The first and second methods presented similar results, but the third method is simpler in calculations and uses all information contained in the original data.
APA, Harvard, Vancouver, ISO, and other styles
27

Toksanbaeva, Mairash, and Raisa Popova. "Characteristics of industries determining high differentiation of wages in Russia." Population 24, no. 3 (September 24, 2021): 162–74. http://dx.doi.org/10.19181/population.2021.24.3.13.

Full text
Abstract:
The problem of reducing wage differentiation in Russia remains relevant. The fund ratio is constantly decreasing but still exceeds world standards. Its reduction through growth of the minimum wage has already gone beyond acceptable boundaries: beyond them, qualification-based wage differences decrease and have begun to contradict one of the fundamental functions of remuneration, namely the incentive function. This raises the question of finding opportunities to reduce the fund ratio at the expense of wages in the 10th decile of the distribution. Wages in this decile were studied by industry (types of economic activity). Industries in which earnings in the 10th decile are higher than the average for the decile were selected for analysis. These industry earnings affect the value of the fund ratio, limiting its decline. The characteristics of the selected industries are reviewed: qualifications of personnel, the financial and economic situation of enterprises, their form of ownership, and the sectoral fund ratio. It has been established that the excess of the fund ratio over world standards is, in all likelihood, driven mainly by three industries: activities in the field of information and communication; financial and insurance activities; and professional, scientific and technical activities. Administrative regulation of wages should be applied to the segments of these industries where high wages are not consistent with the quality of services provided.
APA, Harvard, Vancouver, ISO, and other styles
28

Hahn, Jinyong, and Geert Ridder. "Non-Standard Tests through a Composite Null and Alternative in Point-Identified Parameters." Journal of Econometric Methods 4, no. 1 (January 1, 2015): 1–28. http://dx.doi.org/10.1515/jem-2014-0006.

Full text
Abstract:
We propose a new approach to statistical inference on parameters that depend on population parameters in a non-standard way. As examples we consider a parameter that is interval identified and a parameter that is the maximum (or minimum) of population parameters. In both examples we transform the inference problem into a test of a composite null against a composite alternative hypothesis involving point identified population parameters. We use standard tools in this testing problem. This setup substantially simplifies the conceptual basis of the inference problem. By inverting the Likelihood Ratio test statistic for the composite null and composite alternative inference problem, we obtain a closed form expression for the confidence interval that does not require any tuning parameter and is uniformly valid. We use our method to derive a confidence interval for a regression coefficient in a multiple linear regression with an interval censored dependent variable.
APA, Harvard, Vancouver, ISO, and other styles
29

Brill, Keith F., and Fedor Mesinger. "Applying a General Analytic Method for Assessing Bias Sensitivity to Bias-Adjusted Threat and Equitable Threat Scores." Weather and Forecasting 24, no. 6 (December 1, 2009): 1748–54. http://dx.doi.org/10.1175/2009waf2222272.1.

Full text
Abstract:
Abstract Bias-adjusted threat and equitable threat scores were designed to account for the effects of placement errors in assessing the performance of under- or overbiased forecasts. These bias-adjusted performance measures exhibit bias sensitivity. The critical performance ratio (CPR) is the minimum fraction of added forecasts that are correct for a performance measure to indicate improvement if bias is increased. In the opposite case, the CPR is the maximum fraction of removed forecasts that are correct for a performance measure to indicate improvement if bias is decreased. The CPR is derived here for the bias-adjusted threat and equitable threat scores to quantify bias sensitivity relative to several other measures of performance including conventional threat and equitable threat scores. The CPR for a bias-adjusted equitable threat score may indicate the likelihood of preserving or increasing the conventional equitable threat score if forecasts are bias corrected based on past performance.
APA, Harvard, Vancouver, ISO, and other styles
30

Blain, Gabriel Constantino. "Incorporating climate trends in the stochastic modeling of extreme minimum air temperature series of Campinas, state of São Paulo, Brazil." Bragantia 70, no. 4 (2011): 952–57. http://dx.doi.org/10.1590/s0006-87052011000400031.

Full text
Abstract:
Under the hypothesis that the presence of climate trends in the annual extreme minimum air temperature series of Campinas (Tminabs; 1891-2010; 22º54'S; 47º05'W; 669 m) may no longer be neglected, the aim of the work was to describe the probabilistic structure of this series based on the general extreme value distribution (GEV) with parameters estimated as a function of a time covariate. The results obtained by applying the likelihood ratio test and the percentile-percentile and quantile-quantile plots have indicated that the use of a time-dependent model provides a feasible description of the process under evaluation. In this non-stationary GEV model the parameters of location and scale were expressed as time-dependent functions. The shape parameter remained constant. It was also verified that although this non-stationary model has indicated an average increase in the values of the analyzed data, it does not allow us to conclude that the region of Campinas is now free from frost occurrence, since this same model also reveals an increasing trend in the dispersion of the variable under evaluation. However, since the parameters of location and scale of this probabilistic model are significantly conditioned on time, the presence of climate trends in the analyzed time series is proven.
APA, Harvard, Vancouver, ISO, and other styles
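The time-dependent GEV described in the entry above (location and scale expressed as functions of time, constant shape, compared against the stationary model by a likelihood ratio test) can be sketched with a custom likelihood. The snippet below fits such a model to synthetic data and forms the likelihood-ratio statistic against the stationary fit. The data, trend values, and parameterization (μ(t) = μ0 + μ1·t, log σ(t) = s0 + s1·t) are illustrative assumptions, not the Campinas series; note that SciPy's `genextreme` shape parameter is c = −ξ.

```python
import numpy as np
from scipy.stats import genextreme
from scipy.optimize import minimize

# Synthetic annual-extreme-style series with a linear trend in location
# (an illustrative stand-in, not the Campinas Tminabs data).
t = np.arange(120, dtype=float)
x = genextreme.rvs(c=-0.1, loc=10.0 + 0.02 * t, scale=1.5,
                   size=t.size, random_state=42)

# Stationary fit (SciPy parameterization: c = -xi).
c0, loc0, scale0 = genextreme.fit(x)
nll_stat = -genextreme.logpdf(x, c0, loc=loc0, scale=scale0).sum()

# Non-stationary model: mu(t) = mu0 + mu1*t, log sigma(t) = s0 + s1*t,
# constant shape.
def nll(p):
    mu0, mu1, s0, s1, c = p
    mu = mu0 + mu1 * t
    sigma = np.exp(s0 + s1 * t)
    val = -genextreme.logpdf(x, c, loc=mu, scale=sigma).sum()
    return val if np.isfinite(val) else 1e10

# Start from the stationary optimum with zero trends, so the
# non-stationary fit can only improve (the models are nested).
p0 = np.array([loc0, 0.0, np.log(scale0), 0.0, c0])
res = minimize(nll, p0, method="Nelder-Mead",
               options={"maxiter": 5000, "xatol": 1e-8, "fatol": 1e-10})

# Likelihood-ratio statistic for the nested comparison (non-negative).
lr_stat = 2.0 * (nll_stat - res.fun)
```

With 2 extra parameters, lr_stat would be referred to a χ² distribution with 2 degrees of freedom to decide whether the time-dependent model is warranted.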
31

Bezrukova, A. A., M. I. Yarmolinskaya, O. V. Sazonova, N. V. Spiridonova, and M. N. Komarova. "The relationship between breastfeeding and maternal education and family income." Kazan medical journal 102, no. 3 (June 10, 2021): 355–61. http://dx.doi.org/10.17816/kmj2021-355.

Full text
Abstract:
Aim. To assess the relationship between breastfeeding and maternal education and family income in Samara. Methods. A survey was conducted among 174 mothers in the children's outpatient clinic to which their children were assigned at the place of residence. Information on breastfeeding and its duration, mother's education, and family income was obtained by questionnaire and by copying data from outpatient records. Pearson's chi-square test, the Mann–Whitney U test, and logistic regression were used in the statistical processing of the data. Results. There was a positive association of maternal education (χ²=11.25; p=0.024) and income (χ²=11.5; p=0.022) with breastfeeding practices. Higher education, compared with specialized secondary or secondary education, increased the likelihood of breastfeeding for more than 6 months with an odds ratio of 2.6 (95% confidence interval 1.18–5.73; p=0.018). The median (Q1–Q3) breastfeeding duration was 4.0 months (Q1–Q3 2.0–14.0 months) for mothers with secondary or specialized secondary education and 12.0 months (Q1–Q3 4.0–18.0 months; p=0.012) for mothers with higher education. It was found that 67.7% of women with incomes above the subsistence level per family member ($150) breastfed for more than 7 months, compared with only 43.9% of women with incomes below the subsistence level (p=0.011). The likelihood of breastfeeding for more than 6 months with household income above one minimum wage per family member, compared with household income below it, increases with an odds ratio of 2.18 (95% confidence interval 1.03–4.60; p=0.041). There was no relationship between lactation cessation and maternal education or family income. Conclusion. There is a positive relationship between mother's education and family income and the practice of breastfeeding.
APA, Harvard, Vancouver, ISO, and other styles
32

Nagel, E., S. Czesla, J. H. M. M. Schmitt, S. Dreizler, G. Anglada-Escudé, E. Rodríguez, I. Ribas, et al. "The CARMENES search for exoplanets around M dwarfs." Astronomy & Astrophysics 622 (February 2019): A153. http://dx.doi.org/10.1051/0004-6361/201834569.

Full text
Abstract:
We report the detection of a Neptune-mass exoplanet around the M4.0 dwarf GJ 4276 (G 232-070) based on radial velocity (RV) observations obtained with the CARMENES spectrograph. The RV variations of GJ 4276 are best explained by the presence of a planetary companion that has a minimum mass of mb sin i ≈ 16 M⊕ on a Pb = 13.35 day orbit. The analysis of the activity indicators and spectral diagnostics exclude stellar induced RV perturbations and prove the planetary interpretation of the RV signal. We show that a circular single-planet solution can be excluded by means of a likelihood ratio test. Instead, we find that the RV variations can be explained either by an eccentric orbit or interpreted as a pair of planets on circular orbits near a period ratio of 2:1. Although the eccentric single-planet solution is slightly preferred, our statistical analysis indicates that none of these two scenarios can be rejected with high confidence using the RV time series obtained so far. Based on the eccentric interpretation, we find that GJ 4276 b is the most eccentric (eb = 0.37) exoplanet around an M dwarf with such a short orbital period known today.
APA, Harvard, Vancouver, ISO, and other styles
33

Lee, You-Bin, Young-Lyun Oh, Jung-Hee Shin, Sun-Wook Kim, Jae-Hoon Chung, Yong-Ki Min, Soo-Yeon Hahn, and Tae-Hyuk Kim. "Comparison of Four Ultrasonography-Based Risk Stratification Systems in Thyroid Nodules with Nondiagnostic/Unsatisfactory Cytology: A Real-World Study." Cancers 13, no. 8 (April 18, 2021): 1948. http://dx.doi.org/10.3390/cancers13081948.

Full text
Abstract:
We compared American Thyroid Association (ATA) guidelines, Korean (K)-Thyroid Imaging, Reporting and Data Systems (TIRADS), EU-TIRADS, and American College of Radiology (ACR) TIRADS in diagnosing malignancy for thyroid nodules with nondiagnostic/unsatisfactory cytology. Among 1143 nondiagnostic/unsatisfactory aspirations from April 2011 to March 2016, malignancy was detected in 39 of 89 excised nodules. The minimum malignancy rate was 7.82% in EU-TIRADS 5 and 1.87–3.00% in EU-TIRADS 3–4. In the other systems, the minimum malignancy rate was 14.29–16.19% in category 5 and ≤3% in the remaining categories. Although the EU-TIRADS category ≥ 5 exhibited the highest positive likelihood ratio (LR) of only 2.214, category ≥ 5 in the other systems yielded the highest positive LR of >5. Receiver operating characteristic (ROC) curves of all systems to predict malignancy were located statistically above the diagonal nondiscrimination line (P for ROC curve: EU-TIRADS, 0.0022; all others, 0.0001). The areas under the ROC curve (AUCs) were not significantly different among the four systems. The ATA guidelines, K-TIRADS, and ACR TIRADS may be useful to guide management for nondiagnostic/unsatisfactory nodules. The EU-TIRADS, although also useful, exhibited inferior performance in predicting malignancy for nondiagnostic/unsatisfactory nodules in Korea, an iodine-sufficient area.
APA, Harvard, Vancouver, ISO, and other styles
34

Mantovani, Alessandra Madia, Scott Duncan, Jamile Sanches Codogno, Manoel Carlos Spiguel Lima, and Rômulo Araújo Fernandes. "Different Amounts of Physical Activity Measured by Pedometer and the Associations With Health Outcomes in Adults." Journal of Physical Activity and Health 13, no. 11 (November 2016): 1183–91. http://dx.doi.org/10.1123/jpah.2015-0730.

Full text
Abstract:
Background: Physical activity level is an important tool to identify individuals predisposed to developing chronic diseases, which represent a major concern worldwide. Objective: To identify correlates of daily step counts measured using pedometers, as well as analyze the associations between health outcomes and 3 different amounts of daily physical activity. Methods: The sample comprised 278 participants (126 men and 153 women) with a mean age of 46.51 ± 9.02 years. Physical activity was assessed using pedometers for 7 consecutive days, and 3 amounts of daily physical activity were considered: ≥10,000 steps/day, ≥7500 steps/day, and <5000 steps/day. Sleep quality was assessed through a questionnaire, and dual-energy x-ray absorptiometry was used to measure body fat. Sociodemographic and anthropometric data were also collected. Results: The percentages of adults achieving at least 10,000 and 7500 steps/day on a minimum of 5 days of the evaluated week were 12.9% and 30.9%, respectively. Adults who reached ≥7500 steps/day had a lower likelihood of being obese (odds ratio [OR] = 0.38, 95% confidence interval [CI], 0.17–0.85) and reporting worse sleep quality (OR = 0.58, 95% CI, 0.34–0.99). Adults who reached <5000 steps/day had a higher likelihood of reporting worse sleep quality (OR = 2.11, 95% CI, 1.17–3.82). Conclusion: Physical activity in adulthood, as measured by pedometer, constituted a behavior related to lower adiposity and better sleep quality.
APA, Harvard, Vancouver, ISO, and other styles
35

Kuderer, N. M., C. W. Francis, J. Crawford, D. C. Dale, D. A. Wolff, E. Culakova, M. S. Poniewierski, and G. H. Lyman. "A prediction model for chemotherapy-associated thrombocytopenia in cancer patients." Journal of Clinical Oncology 24, no. 18_suppl (June 20, 2006): 8616. http://dx.doi.org/10.1200/jco.2006.24.18_suppl.8616.

Full text
Abstract:
8616 Background: Thrombocytopenia (TP) can lead to serious complications, however, little is known about the incidence and risk factors for chemotherapy-associated TP. A prospective, nationwide cohort study was undertaken to better define the impact of TP in cancer treatment. Methods: 2,842 patients with cancer of the breast, lung, colon, ovary or lymphoma initiating a new chemotherapy regimen have been prospectively enrolled at 115 randomly selected US community oncology practices between 2002 and 2005. Risk factors for chemotherapy-associated TP were identified, a multivariate logistic regression model based on pretreatment characteristics was developed, and test performance characteristics were estimated. Results: Over a median of 3 cycles of chemotherapy, minimum recorded platelet counts were: ≥150K in 53% of patients; 100–150K in 26%; 75–100K in 8%; 50–75K in 6% and <50K in 7%. Significant independent predictive factors for platelets <75K include type of cancer (P<.0001), type of chemotherapy including gemcitabine-based (P<.0001), anthracycline-based (P<.0001) and platinum-based (P<.0001) regimens, prior chemotherapy (P<.0001) or surgery (P=.005), age (P=.015), Caucasian ethnicity (P=.022), body surface area (P=.0001), planned relative dose intensity ≥85% (P=.082), diabetes (P=.018), pulmonary disease (P=.011), abnormal baseline platelets (P<.0001), hematocrit (P=0.030), alkaline phosphatase (P=.072) or albumin (P=.017). Model fit was good (Chi-square, P<.0001), R2 = 0.735 and c-statistic = 0.816 [95% CI: 0.792–0.840, P<.0001]. Model test performance characteristics [95% CI] at a ≥20% risk of TP include: sensitivity 56% [51–61]; specificity 88% [87–89]; likelihood ratio positive 4.63 [4.02–5.33]; likelihood ratio negative 0.50 [0.45–0.57]; and diagnostic odds ratio 9.22 [7.23–11.75]. Validation of the model is underway. 
Conclusions: This prediction model based on pretreatment factors identifies with high specificity patients at risk for clinically important chemotherapy-associated thrombocytopenia early in the treatment course. It may provide a valuable tool for guiding chemotherapy and new supportive care measures. [Table: see text]
APA, Harvard, Vancouver, ISO, and other styles
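The test performance characteristics reported in the entry above obey standard identities, so they can be cross-checked from the published sensitivity (56%) and specificity (88%): LR+ = sens/(1 − spec), LR− = (1 − sens)/spec, and the diagnostic odds ratio DOR = LR+/LR−. Recomputing from the rounded values gives numbers close to the reported 4.63, 0.50, and 9.22; the small gaps come from rounding of the published sensitivity and specificity, not from any error in the abstract.

```python
# Standard diagnostic-test identities, applied to the rounded
# sensitivity (56%) and specificity (88%) from the abstract.
sens, spec = 0.56, 0.88

lr_pos = sens / (1.0 - spec)      # positive likelihood ratio, ~4.67
lr_neg = (1.0 - sens) / spec      # negative likelihood ratio, 0.50
dor = lr_pos / lr_neg             # diagnostic odds ratio, ~9.33

print(lr_pos, lr_neg, dor)
```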
36

French, RJ. "Changes in Cropping Systems at the Boundaries of the Pastoral and Cropping Zones in Southern Australia." Rangeland Journal 15, no. 1 (1993): 117. http://dx.doi.org/10.1071/rj9930117.

Full text
Abstract:
A history of cropping at the margins between arable and pastoral lands is examined. Assessment is made of the climatic factors that caused the abandonment of cropping. These criteria are then used to assess the likelihood that future cropping will persist along the present pastoral margins in different states of Australia. A minimum requirement is that the ratio of water use to evaporation in the growing season should exceed 0.3. An analysis of past climatic data should also be made to identify sequences of years when rainfall was both above and below average. Periods of above-average rainfall can lead to undue optimism for future cropping. Simple climatic models are required so that farmers can use them to predict the rainfall in the growing season and thereby make appropriate management decisions. Farming is a complex technical and financial business and farmers will need skills in monitoring, measuring and recording factors that influence their livelihood.
APA, Harvard, Vancouver, ISO, and other styles
37

Alviandi, Widayat, Brashto Bramantyo, Jenny Bashiruddin, and Novra Widayanti. "Peran Tes Dynamic Visual Acuity dan Tes Kalori dalam Menilai Gangguan Keseimbangan pada Pasien Tuberkulosis yang Mendapat Terapi Streptomisin." Oto Rhino Laryngologica Indonesiana 48, no. 2 (January 30, 2019): 112. http://dx.doi.org/10.32637/orli.v48i2.249.

Full text
Abstract:
Background: Impaired balance is a side effect of streptomycin administration that can decrease quality of life. To date, no study has examined balance disorders in tuberculosis (TB) patients receiving streptomycin therapy using the dynamic visual acuity (DVA) test and the caloric test. Objective: This study aimed to assess the accuracy of balance examination with DVA in TB patients receiving streptomycin, compared with electronystagmography (ENG). Methods: A pre-experimental study was conducted to determine changes in vestibular function test results in a group of TB patients before and after 56 doses of streptomycin, or whenever subjects complained of balance disorders, using the DVA test and the caloric test during April–October 2014. A diagnostic test design was used to compare the two measurement methods after treatment. Results: After therapy, vestibular weakness was found in 31 (77.5%) of 40 subjects by caloric examination and in 30 (75%) by DVA examination. The mean caloric value was 93.5±32.07°/sec before therapy and 82.30±38.43°/sec after therapy, a change of -11.25±50.55°/sec. The median DVA value was 0 lines (minimum 0–maximum 2) before therapy and 3 lines (minimum 0–maximum 6) after therapy, a change of 3 lines (minimum 0–maximum 5). DVA examination had a sensitivity of 83%, a specificity of 27%, a positive predictive value of 17%, a negative predictive value of 90%, a positive likelihood ratio of 1.13, and a negative likelihood ratio of 0.63, with the caloric examination as the gold standard. Conclusion: The DVA test can be used as a screening tool for bilateral peripheral vestibular weakness in tuberculosis patients receiving streptomycin therapy.
38

Lei, Zhang, Cong Feng Liu, Wei Jiang, Gui Zhou Xu, and Ning Dai. "Performance Analysis of OFDM Channel System." Advanced Materials Research 268-270 (July 2011): 1438–46. http://dx.doi.org/10.4028/www.scientific.net/amr.268-270.1438.

Abstract:
OFDM is a promising digital communications technique for high data rate transmissions. In this paper, we analyze the performance of mobile OFDM systems in the presence of channel estimation error. A new channel estimation error model is presented. Based on this model, new and simple expressions for the average bit error probability of M-QAM OFDM systems are derived for zero-forcing, minimum mean square error, and maximum likelihood receivers. A complex correlation coefficient is used to quantify the quality of the channel estimation scheme. We show that if the complex correlation coefficient is a function of the signal-to-noise ratio, then the inter-carrier interference caused by channel variations is the dominant source of performance degradation. On the other hand, if the channel estimation scheme produces a constant value for the complex correlation coefficient (i.e., estimation quality is not a function of the signal-to-noise ratio), then the channel estimation error causes an error floor that is larger than the one caused by the inter-carrier interference and is itself a dominant cause of degradation. Finally, channel estimation using pilot symbols is discussed in terms of different pilot-symbol placement schemes.
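For a single OFDM subcarrier the frequency-domain model reduces to Y = H·X + N, so the zero-forcing and MMSE receivers become one-tap scalar equalizers. A hedged sketch of the two estimators under an imperfect channel estimate (pure Python with complex numbers; all numeric values are illustrative, not from the paper):

```python
# One-tap equalization of a single OFDM subcarrier: Y = H*X + N.
H_true = 0.8 + 0.6j          # actual channel gain (illustrative)
H_est  = 0.78 + 0.63j        # imperfect channel estimate
X      = 1 + 1j              # transmitted QAM symbol
N      = 0.05 - 0.02j        # additive noise sample
sigma2 = 0.01                # noise variance assumed by the MMSE equalizer

Y = H_true * X + N

# Zero-forcing: invert the estimated channel directly.
x_zf = Y / H_est

# MMSE: regularize the inversion by the noise variance.
x_mmse = (H_est.conjugate() * Y) / (abs(H_est) ** 2 + sigma2)

print(abs(x_zf - X), abs(x_mmse - X))  # residual error from estimation error plus noise
```

The residual symbol error persists even at high SNR when H_est differs from H_true, which is the error-floor mechanism the abstract describes.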
39

Lee, Hyunil, Sangcheol Lee, Dokyung Kim, Weonmin Cho, Sungtan Cho, Siyeong Yoon, and Soonchul Lee. "Effect of Dosing Interval on Compliance of Osteoporosis Patients on Bisphosphonate Therapy: Observational Study Using Nationwide Insurance Claims Data." Journal of Clinical Medicine 10, no. 19 (September 24, 2021): 4350. http://dx.doi.org/10.3390/jcm10194350.

Abstract:
Only a few studies are available on the effect of the dosing interval of bisphosphonate on drug compliance. We analyzed the data of patients who were newly prescribed bisphosphonate using a national insurance claims database. Drug compliance was assessed by calculating the medication possession ratio (MPR) over a minimum of a 1-year follow-up. This analysis included 281,996 new bisphosphonate users with a mean age of 68.9 years (92% women). The patients were divided into daily, weekly, monthly, 3-monthly, and switch groups (who changed the drug to other dosing intervals). The average MPR was the highest in the switch group (66%), and the longer the dosing interval, the higher the compliance (3-monthly, 56% vs. daily, 37%). “Non-compliant” was defined as an MPR under 80%. Various factors possibly associated with “non-compliant” MPR were investigated using multiple regression analysis. Multivariate analysis showed that male patients were more likely to be non-compliant with pharmacotherapy than female patients, with an odds ratio of 1.389. Younger patients (age 60–69) had a significantly lower likelihood of being non-compliant than older patients (age 80+). Longer dosing intervals are recommended to improve compliance, with special attention to older and male patients.
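The medication possession ratio used above is simply the fraction of the follow-up period covered by dispensed supply. A minimal illustration (the 80% cut-off is the study's definition of compliance; the refill data are invented):

```python
def medication_possession_ratio(days_supplied, days_in_period):
    """MPR: total days of drug supplied divided by days in the observation period, capped at 1."""
    return min(sum(days_supplied), days_in_period) / days_in_period

# Hypothetical patient: four bisphosphonate refills over a 1-year follow-up
mpr = medication_possession_ratio([90, 90, 30, 30], 365)
compliant = mpr >= 0.80   # study's threshold: MPR under 80% counts as "non-compliant"
print(round(mpr, 2), compliant)  # 0.66 False
```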
40

Hall, Ronald G., Mark A. Swancutt, Claudia Meek, Richard D. Leff, and Tawanda Gumbo. "Ethambutol Pharmacokinetic Variability Is Linked to Body Mass in Overweight, Obese, and Extremely Obese People." Antimicrobial Agents and Chemotherapy 56, no. 3 (December 12, 2011): 1502–7. http://dx.doi.org/10.1128/aac.05623-11.

Abstract:
We conducted a prospective study of 18 adult volunteers (male-to-female ratio of 1) whose body mass index fell into categories of <25, 25 to 40, or >40 kg/m², who received a single oral dose of 1,600 mg ethambutol. Only individuals with normal renal function were recruited. The minimum body mass (M) was 45.6 kg, the median was 90.8 kg, and the maximum was 160.4 kg. Ethambutol pharmacokinetics were best described by a two-compartment model. Inclusion of weight as a covariate dramatically improved the model, with a relative likelihood approaching infinity. The typical clearance was 42.6 liters/h. Ethambutol systemic clearance was proportional to (M/45.6)^(3/4) and thus obeyed fractal geometry-based laws. This means that the area under the concentration-time curve (AUC) actually decreased for obese patients compared to that for leaner patients, reducing the chance of concentration-dependent toxicity. On the other hand, such reduced AUCs could lead to therapy failure. Thus, new and individualized ethambutol dosing regimens need to be designed for obese and extremely obese patients.
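The three-quarter-power relationship can be written as CL(M) = CL_ref · (M/45.6)^(3/4). A sketch, assuming the typical clearance of 42.6 liters/h is the value at the reference minimum mass of 45.6 kg (an assumption for illustration; the abstract does not state the reference point explicitly):

```python
def ethambutol_clearance(mass_kg, cl_ref=42.6, m_ref=45.6):
    """Fractal (3/4-power) allometric scaling of clearance with body mass."""
    return cl_ref * (mass_kg / m_ref) ** 0.75

# Scaling factor between the study's median (90.8 kg) and minimum (45.6 kg) mass
factor = (90.8 / 45.6) ** 0.75
print(round(factor, 2))  # 1.68
```

Roughly doubling body mass raises clearance by about 68% under this model, which is why the AUC falls, and toxicity risk with it, in heavier patients.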
41

Schoettler, Jochen J., Thomas Kirschning, Michael Hagmann, Bianka Hahn, Anna-Meagan Fairley, Franz-Simon Centner, Verena Schneider-Lindner, et al. "Maintaining oxygen delivery is crucial to prevent intestinal ischemia in critical ill patients." PLOS ONE 16, no. 7 (July 9, 2021): e0254352. http://dx.doi.org/10.1371/journal.pone.0254352.

Abstract:
Background: Intestinal ischemia is a common complication with obscure pathophysiology in critically ill patients. Since insufficient delivery of oxygen is a suspected cause, we investigated the influence of oxygen delivery, hemoglobin, arterial oxygen saturation, cardiac index, and the systemic vascular resistance index on the development of intestinal ischemia. Furthermore, we evaluated the predictive power of elevated lactate levels for the diagnosis of intestinal ischemia. Methods: In a retrospective case-control study, data (mean oxygen delivery, minimum oxygen delivery, systemic vascular resistance index) of critically ill patients from 02/2009–07/2017 were analyzed using a proportional hazards model. General model fit and linearity were tested by likelihood ratio tests. The components of oxygen delivery (hemoglobin, arterial oxygen saturation, and cardiac index) were individually tested in models. Results: 59 out of 874 patients developed intestinal ischemia. A mean oxygen delivery less than 250 ml/min/m² (LRT vs. null model: p = 0.018; LRT for non-linearity: p = 0.012) as well as a minimum oxygen delivery less than 400 ml/min/m² (LRT vs. null model: p = 0.016; LRT for linearity: p = 0.019) were associated with increased risk of developing intestinal ischemia. We found no significant influence of hemoglobin, arterial oxygen saturation, cardiac index, or systemic vascular resistance index. Receiver operating characteristic analysis for elevated lactate levels, pH, CO2, and central venous saturation performed poorly, with areas under the curve of 0.5324, 0.52, 0.6017, and 0.6786, respectively. Conclusion: There was a significant correlation of mean and minimum oxygen delivery with the incidence of intestinal ischemia for values below 250 ml/min/m² and 400 ml/min/m², respectively. Neither hemoglobin, arterial oxygen saturation, cardiac index, systemic vascular resistance index, nor elevated lactate levels could be identified as individual risk factors.
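The model comparisons above rest on the standard likelihood ratio test: twice the log-likelihood difference between nested models is referred to a chi-squared distribution. A generic sketch (not the study's data; the log-likelihood values are invented, and one degree of freedom is assumed for a single added covariate):

```python
def likelihood_ratio_test(loglik_null, loglik_full):
    """LRT statistic for nested models: 2 * (ll_full - ll_null)."""
    return 2.0 * (loglik_full - loglik_null)

# Invented log-likelihoods: null model vs. a model adding an oxygen-delivery term
stat = likelihood_ratio_test(loglik_null=-412.3, loglik_full=-409.1)

# 95% critical value of chi-squared with 1 df is about 3.84
significant = stat > 3.84
print(round(stat, 1), significant)  # 6.4 True
```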
42

Zakeri, Issa, Anne L. Adolph, Maurice R. Puyau, Firoz A. Vohra, and Nancy F. Butte. "Application of cross-sectional time series modeling for the prediction of energy expenditure from heart rate and accelerometry." Journal of Applied Physiology 104, no. 6 (June 2008): 1665–73. http://dx.doi.org/10.1152/japplphysiol.01163.2007.

Abstract:
Accurate estimation of energy expenditure (EE) in children and adolescents is required for a better understanding of physiological, behavioral, and environmental factors affecting energy balance. Cross-sectional time series (CSTS) models, which account for correlation structure of repeated observations on the same individual, may be advantageous for prediction of EE. CSTS models for prediction of minute-by-minute EE and, hence, total EE (TEE) from heart rate (HR), physical activity (PA) measured by accelerometry, and observable subject variables were developed in 109 children and adolescents by use of Actiheart and 24-h room respiration calorimetry. CSTS models based on HR, PA, time-invariant covariates, and interactions were developed. These dynamic models involve lagged and lead values of HR and lagged values of PA for better description of the series of minute-by-minute EE. CSTS models with random intercepts and random slopes were investigated. For comparison, likelihood ratio tests were used. Log likelihood increased substantially when random slopes for HR and PA were added. The population-specific model uses HR and 1- and 2-min lagged and lead values of HR, HR2, and PA and 1- and 2-min lagged values of PA, PA2, age, age2, sex, weight, height, minimum HR, sitting HR, HR × height, HR × weight, HR × age, PA × weight, and PA × sex interactions (P < 0.001). Prediction error for TEE was 0.9 ± 10.3% (mean ± SD). Errors were not correlated with age, weight, height, or body mass index. CSTS modeling provides a useful predictive model for EE and, hence, TEE in children and adolescents on the basis of HR and PA and other observable explanatory subject characteristics of age, sex, weight, and height.
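The lagged and lead terms in the CSTS model are just shifted copies of the minute-by-minute series. A small helper showing how such features can be built (pure Python; the padding convention at the series edges is an arbitrary choice here, not taken from the paper):

```python
def shift(series, k):
    """Shift a series by k minutes: k > 0 gives lagged values, k < 0 gives lead values.
    Edge positions with no neighbour are filled with None."""
    n = len(series)
    return [series[i - k] if 0 <= i - k < n else None for i in range(n)]

hr = [72, 75, 80, 78, 74]   # minute-by-minute heart rate (invented)
hr_lag1 = shift(hr, 1)      # HR one minute earlier
hr_lead1 = shift(hr, -1)    # HR one minute later
print(hr_lag1)   # [None, 72, 75, 80, 78]
print(hr_lead1)  # [75, 80, 78, 74, None]
```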
43

REA, E., J. LAFLÈCHE, S. STALKER, B. K. GUARDA, H. SHAPIRO, I. JOHNSON, S. J. BONDY, R. UPSHUR, M. L. RUSSELL, and M. ELIASZIW. "Duration and distance of exposure are important predictors of transmission among community contacts of Ontario SARS cases." Epidemiology and Infection 135, no. 6 (January 12, 2007): 914–21. http://dx.doi.org/10.1017/s0950268806007771.

Abstract:
We report attack rates and contact-related predictors among community contacts of severe acute respiratory syndrome (SARS) cases from the 2003 Toronto-area outbreak. Community contact data were extracted from public health records for single, well-defined exposures to a SARS case. In total, 8662 community-acquired exposures resulted in 61 probable cases; a crude attack rate of 0·70% [95% confidence interval (CI) 0·54–0·90]. Persons aged 55–69 years were at higher risk of acquiring SARS (1·14%) than those either younger (0·60%) or older (0·70%). In multivariable analysis, exposures of at least 30 min at a distance of ⩽1 m increased the likelihood of becoming a SARS case 20·4-fold (95% CI 11·8–35·1). Risk related to duration of illness in the source case at the time of exposure was greatest for an illness duration of 7–10 days (rate ratio 3·4, 95% CI 1·9–6·1). Longer and closer-proximity exposures incurred the highest rate of disease. Separate measures of time and distance from source cases should be added to minimum datasets for the assessment of interventions for SARS and other emerging diseases.
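The crude attack rate is cases over exposures; a normal-approximation interval reproduces the reported figures closely (the paper's exact CI method is not stated, so the interval here can differ slightly in the last digit):

```python
import math

cases, exposures = 61, 8662
rate = cases / exposures                            # crude attack rate
se = math.sqrt(rate * (1 - rate) / exposures)       # binomial standard error
ci_low, ci_high = rate - 1.96 * se, rate + 1.96 * se
print(f"{rate:.2%} (95% CI {ci_low:.2%}-{ci_high:.2%})")  # 0.70% (95% CI 0.53%-0.88%)
```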
44

Yang, Aotian, David Miller, and Qing Pan. "Constrained maximum entropy models to select genotype interactions associated with censored failure times." Journal of Bioinformatics and Computational Biology 16, no. 06 (December 2018): 1840024. http://dx.doi.org/10.1142/s0219720018400243.

Abstract:
We propose a novel screening method targeting genotype interactions associated with disease risks. The proposed method extends the maximum entropy conditional probability model to address disease occurrences over time. Continuous occurrence times are grouped into intervals. The model estimates the conditional distribution over the disease occurrence intervals given individual genotypes by maximizing the corresponding entropy subject to constraints linking genotype interactions to time intervals. The EM algorithm is employed to handle observations with uncertainty, for which the disease occurrence is censored. Stepwise greedy search is proposed to screen a large number of candidate constraints. The minimum description length is employed to select the optimal set of constraints. Extensive simulations show that five or so quantile-dependent intervals are sufficient to categorize disease outcomes into different risk groups. Performance depends on sample size, number of genotypes, and minor allele frequencies. The proposed method outperforms the likelihood ratio test, Lasso, and a previous maximum entropy method with only binary (disease occurrence, non-occurrence) outcomes. Finally, a GWAS study for type 1 diabetes patients is used to illustrate our method. Novel one-genotype and two-genotype interactions associated with neuropathy are identified.
45

Sur, Samarendra Nath, Rabindranath Bera, Akash Kumar Bhoi, Mahaboob Shaik, and Gonçalo Marques. "Capacity Analysis of Lattice Reduction Aided Equalizers for Massive MIMO Systems." Information 11, no. 6 (June 4, 2020): 301. http://dx.doi.org/10.3390/info11060301.

Abstract:
Massive multi-input-multi-output (MIMO) systems are the future of communication systems. The proper design of a MIMO system requires an appropriate choice of detection algorithm. At the same time, lattice reduction (LR)-aided equalizers have been well investigated for MIMO systems, and many studies have been carried out on the Korkine–Zolotareff (KZ) and Lenstra–Lenstra–Lovász (LLL) algorithms. This paper presents an analysis of the channel capacity of the massive MIMO system. The mathematical calculations included in this paper correspond to the channel correlation effect on the channel capacity. In addition, the achievable gain over the linear receiver is highlighted. All the calculations were further verified through simulation. The simulated results show the performance comparison between zero forcing (ZF), minimum mean squared error (MMSE), and integer forcing (IF) receivers with log-likelihood ratio (LLR)-ZF, LLR-MMSE, KZ-ZF, and KZ-MMSE. The main objective of this work is to show that, when a lattice reduction algorithm is combined with a conventional linear MIMO receiver, it improves the capacity tremendously. This is demonstrated here, as the KZ-MMSE receiver outperforms its counterparts by a significant margin.
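The linear receivers being compared differ only in how the channel matrix is inverted. A minimal NumPy sketch of ZF versus MMSE detection for a small MIMO system (an illustrative 2×2 channel, not the paper's setup; a lattice-reduction-aided receiver would first multiply H by a unimodular matrix before this step):

```python
import numpy as np

rng = np.random.default_rng(0)
H = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))  # Rayleigh-like channel
x = np.array([1 + 1j, -1 - 1j])                             # transmitted QPSK symbols
sigma2 = 0.05
n = np.sqrt(sigma2 / 2) * (rng.normal(size=2) + 1j * rng.normal(size=2))
y = H @ x + n

# Zero forcing: pseudo-inverse of the channel (amplifies noise when H is ill-conditioned)
x_zf = np.linalg.pinv(H) @ y

# MMSE: regularized inverse, trading residual interference against noise enhancement
x_mmse = np.linalg.solve(H.conj().T @ H + sigma2 * np.eye(2), H.conj().T @ y)

print(np.round(x_zf, 2), np.round(x_mmse, 2))
```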
46

Sahu, Sima, Harsh Vikram Singh, Basant Kumar, and Amit Kumar Singh. "A Bayesian Multiresolution Approach for Noise Removal in Medical Magnetic Resonance Images." Journal of Intelligent Systems 29, no. 1 (January 10, 2018): 189–201. http://dx.doi.org/10.1515/jisys-2017-0402.

Abstract:
A Bayesian approach using wavelet coefficient modeling is proposed for de-noising additive white Gaussian noise in medical magnetic resonance imaging (MRI). In a parallel acquisition process, the magnetic resonance image is affected by white Gaussian noise, which is additive in nature. A normal inverse Gaussian probability distribution function is taken for modeling the wavelet coefficients. A Bayesian approach is implemented for filtering the noisy wavelet coefficients. The maximum likelihood estimator and median absolute deviation estimator are used to find the signal parameters, signal variances, and noise variances of the distribution. The minimum mean square error estimator is used for estimating the true wavelet coefficients. The proposed method is simulated on MRI. Performance and image quality parameters show that the proposed method reduces noise more effectively than other state-of-the-art methods. The proposed method provides 8.83%, 2.02%, 6.61%, and 30.74% improvement in peak signal-to-noise ratio, structure similarity index, Pratt’s figure of merit, and Bhattacharyya coefficient, respectively, over existing well-accepted methods. Its effectiveness is further evaluated using the mean squared difference (MSD), which measures the degree of dissimilarity: the MSD of 0.000324 for the proposed method is lower than that of the other methods tested. Experimental results show that the proposed method achieves better signal-to-noise ratio performance than other tested de-noising methods.
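Peak signal-to-noise ratio, one of the quality metrics used above, is a one-line computation from the mean squared error. A standard sketch (NumPy; the tiny example images are invented):

```python
import numpy as np

def psnr(reference, test, peak=255.0):
    """Peak signal-to-noise ratio in dB between two images of equal shape."""
    mse = np.mean((reference.astype(float) - test.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(peak ** 2 / mse)

ref = np.array([[100, 110], [120, 130]], dtype=np.uint8)
noisy = np.array([[102, 108], [121, 128]], dtype=np.uint8)
print(round(psnr(ref, noisy), 2))  # 43.01
```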
47

Walden, A. T. "Non‐Gaussian reflectivity, entropy, and deconvolution." GEOPHYSICS 50, no. 12 (December 1985): 2862–88. http://dx.doi.org/10.1190/1.1441905.

Abstract:
Standard deconvolution techniques assume that the wavelet is minimum phase but generally make no assumptions about the amplitude distribution of the primary reflection coefficient sequence. For a white reflection sequence, the assumption of a Gaussian distribution means that recovery of the true phase of the wavelet is impossible; however, a non-Gaussian distribution in theory allows recovery of the phase. It is generally recognized that primary reflection coefficients typically have a non-Gaussian amplitude distribution. Deconvolution techniques that assume whiteness but seek to exploit the non-Gaussianity include Wiggins’ minimum entropy deconvolution (MED), Claerbout’s parsimonious deconvolution, and Gray’s variable norm deconvolution. These methods do not assume minimum phase. The deconvolution filter is defined by the maximization of a function called the objective. I examine these and other MED-type deconvolution techniques. Maximizing the objective by setting derivatives to zero results in most cases in a deconvolution filter which is the solution of a highly nonlinear Toeplitz matrix equation. Wiggins’ original iterative approach to the solution is suitable for some methods, while for other methods straightforward iterative perturbation approaches may be used instead. The likely effects of the nonlinearities on noise are shown to be extremely varied. When the form of an objective remains constant with iteration, the most general description of the method is likelihood ratio maximization; when the form changes, a method seeks to maximize relative entropy at each iteration. I emphasize a simple and useful link between the three methods and the use of M-estimators in robust statistics. In attempting to assess the accuracy of the techniques, the choice between different families of distributions for modeling the distribution of reflection coefficients is important.
The results provide important insights into methods of constructing and understanding the statistical implications and behavior of a chosen nonlinearity. A new objective is introduced to illustrate this, and a few particular preferences expressed. The methods are compared with the zero‐memory nonlinear deconvolution approach of Godfrey and Rocca (1981); for their approach, two distinctly different yet statistically comparable models for reflection coefficients are seen to give surprisingly similarly shaped nonlinearities. Finally, it is shown that each MED‐type method can be viewed as the minimization of a particular configurational entropy expression, where some suitable ratio plays the role of a probability.
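Wiggins' MED objective mentioned above is the varimax norm, a normalized fourth moment that grows as the deconvolved output becomes spikier. A small sketch (pure Python; the two example sequences are invented to contrast a spiky output with a diffuse one):

```python
def varimax_norm(y):
    """Wiggins' MED objective: sum(y^4) / (sum(y^2))^2, maximized for spiky (sparse) outputs."""
    s2 = sum(v * v for v in y)
    s4 = sum(v ** 4 for v in y)
    return s4 / (s2 * s2)

spiky = [0.0, 0.0, 1.0, 0.0, 0.0]          # energy concentrated in one sample
diffuse = [0.45, 0.45, 0.45, 0.45, 0.45]   # energy spread evenly over n = 5 samples
print(varimax_norm(spiky), varimax_norm(diffuse))  # 1.0 vs ~0.2 (= 1/n for a constant sequence)
```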
48

Colquhoun, David. "The reproducibility of research and the misinterpretation of p -values." Royal Society Open Science 4, no. 12 (December 2017): 171085. http://dx.doi.org/10.1098/rsos.171085.

Abstract:
We wish to answer this question: If you observe a ‘significant’ p-value after doing a single unbiased experiment, what is the probability that your result is a false positive? The weak evidence provided by p-values between 0.01 and 0.05 is explored by exact calculations of false positive risks. When you observe p = 0.05, the odds in favour of there being a real effect (given by the likelihood ratio) are about 3:1. This is far weaker evidence than the odds of 19 to 1 that might, wrongly, be inferred from the p-value. And if you want to limit the false positive risk to 5%, you would have to assume that you were 87% sure that there was a real effect before the experiment was done. If you observe p = 0.001 in a well-powered experiment, it gives a likelihood ratio of almost 100:1 odds on there being a real effect. That would usually be regarded as conclusive. But the false positive risk would still be 8% if the prior probability of a real effect were only 0.1. And, in this case, if you wanted to achieve a false positive risk of 5% you would need to observe p = 0.00045. It is recommended that the terms ‘significant’ and ‘non-significant’ should never be used. Rather, p-values should be supplemented by specifying the prior probability that would be needed to produce a specified (e.g. 5%) false positive risk. It may also be helpful to specify the minimum false positive risk associated with the observed p-value. Despite decades of warnings, many areas of science still insist on labelling a result of p < 0.05 as ‘statistically significant’. This practice must contribute to the lack of reproducibility in some areas of science. This is before you get to the many other well-known problems, like multiple comparisons, lack of randomization and p-hacking. Precise inductive inference is impossible and replication is the only way to be sure. Science is endangered by statistical misunderstanding, and by senior people who impose perverse incentives on scientists.
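The arithmetic behind these false positive risks is Bayes' rule in odds form. A sketch reproducing the abstract's p = 0.001 example (likelihood ratio of about 100, prior probability 0.1; the function name is illustrative):

```python
def false_positive_risk(likelihood_ratio, prior_prob):
    """Probability the effect is not real, given the likelihood ratio in favour
    of a real effect and the prior probability of a real effect."""
    prior_odds = prior_prob / (1 - prior_prob)
    posterior_odds = likelihood_ratio * prior_odds   # Bayes' rule in odds form
    return 1 / (1 + posterior_odds)

# p = 0.001 in a well-powered experiment: likelihood ratio of almost 100:1
print(round(false_positive_risk(100, 0.1) * 100))  # 8 (percent), as stated in the abstract
```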
49

Patel, S. I., D. Kukafka, C. Antonescu, D. Combs, J. Lee-Iannotti, S. F. Quan, and S. Parthasarathy. "1188 Test Characteristics of a Machine Learned Electronic Medical Record Extractable Tool for OSA Case Identification in a Community-Based Population." Sleep 43, Supplement_1 (April 2020): A454. http://dx.doi.org/10.1093/sleep/zsaa056.1182.

Abstract:
Introduction: Obstructive sleep apnea (OSA) is a significantly underdiagnosed medical condition. A machine learning method known as SLIM (Supersparse Linear Integer Models), which can be extracted from the Electronic Health Record (EHR), has been found to be superior to patient-reported sleep-related symptoms for diagnosing OSA. This evaluation, however, was previously validated in a laboratory-based population. Our aim was to determine the test characteristics of the EHR-extractable SLIM tool in a community-based population. Methods: Subjects who participated in the Sleep Heart Health Study (SHHS) were included in this analysis. Variable definitions of OSA were determined using an Apnea Hypopnea Index (AHI) threshold of 5 per hour, 15 per hour, or the presence of any comorbidity (hypertension, ischemic heart disease, stroke, mood disorders, impairment of cognition, or sleepiness) when the AHI was between 5 and 15 per hour. Variable hypopnea definitions based upon degree of oxygen desaturation and associated arousals were considered. Results: In the SHHS dataset, the receiver operating characteristic (ROC) for a SLIM score threshold of 9 for men and 5 for women was good when OSA was defined by AHI > 5 per hour (hypopneas with either > 3% oxygen desaturation or arousals). Specifically, the ROC was 0.72 (95% confidence interval [CI] 0.70; 0.74) with a positive predictive value (PPV) of 0.98 and a likelihood ratio of a positive test (LR+) of 11.3. The LR+ (6.0) and PPV (0.92) were also good when an AHI threshold of 5 per hour was adopted with hypopneas scored using a minimum 3% oxygen desaturation alone. Similarly, the ROC was good at 0.74 (95% CI 0.73; 0.76) with a PPV of 0.98 and an LR+ of 11.3. The LR+ (8.9) and PPV (0.81) were also good in the presence of comorbidities when the AHI was 5 to 15 per hour using > 4% oxygen desaturation alone.
Conclusion: The EHR-extractable tool can be an actionable tool for case identification of patients needing a referral for a sleep study in a community-based population. Such an approach could facilitate an automated, rather than manual, OSA screening approach aimed at managing population health. Support: HL138377
50

Housman, Seth T., Amira A. Bhalodi, Ashley Shepard, James Nugent, and David P. Nicolau. "Vancomycin Tissue Pharmacokinetics in Patients with Lower-Limb Infections via In Vivo Microdialysis." Journal of the American Podiatric Medical Association 105, no. 5 (September 1, 2015): 381–88. http://dx.doi.org/10.7547/14-033.

Abstract:
Background: Vancomycin is a common treatment option for skin and skin structure infections caused by methicillin-resistant Staphylococcus aureus (MRSA). Given the increasing prevalence of MRSA, vancomycin is widely used as empirical therapy. In patients with lower-limb infections, antimicrobial penetration is often reduced because of decreased vascular perfusion. In this study, we evaluated the tissue concentrations of vancomycin in hospitalized patients with lower-limb infections. Methods: An in vivo microdialysis catheter was inserted near the margin of the wound and was perfused with lactated Ringer's solution. Tissue and serum samples were obtained after steady state for one dosing interval. Tissue concentrations were corrected for percentage of in vivo recovery using the retrodialysis technique. Results: Nine patients were enrolled (mean ± SD: age, 54 ± 19 years; weight, 105.6 ± 31.5 kg). Patients received a mean of 12.8 mg/kg of vancomycin every 12 hours (n = 7), every 8 hours (n = 1), or every 24 hours (n = 1). Mean ± SD steady-state trough vancomycin concentrations in serum and tissue were 11.1 ± 3.3 and 6.0 ± 2.6 μg/mL. The mean ± SD 24-hour free drug areas under the curve for serum and wound were 283.7 ± 89.4 and 232.8 ± 75.7 μg*h/mL, respectively. The mean ± SD tissue penetration ratio was 0.8 ± 0.2. Conclusions: These data suggest that against MRSA with minimum inhibitory concentrations of 1 μg/mL or less, vancomycin achieved blood pharmacodynamic targets required for the likelihood of success. Reduced concentrations may contribute to poor outcomes and the development of resistance. As other literature suggests, alternative agents may be needed when the pathogen of interest has a minimum inhibitory concentration greater than 1 μg/mL.
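The pharmacodynamic target behind this conclusion is the ratio of the 24-hour area under the curve to the pathogen's MIC. A sketch using the serum free-drug AUC reported above; the free-drug threshold of 200 here is an assumption for illustration (roughly half the commonly cited total-drug AUC24/MIC target of 400, given vancomycin's approximately 50% protein binding):

```python
def auc_mic_ratio(auc_24h, mic):
    """Vancomycin exposure index: 24-h AUC divided by the pathogen MIC."""
    return auc_24h / mic

# Reported mean serum free-drug AUC against an MRSA isolate with MIC = 1 ug/mL
ratio = auc_mic_ratio(283.7, 1.0)
target_met = ratio >= 200   # assumed free-drug threshold, see lead-in
print(ratio, target_met)    # 283.7 True

# The same exposure at MIC = 2 ug/mL halves the ratio, illustrating why
# alternative agents may be needed above an MIC of 1 ug/mL
print(auc_mic_ratio(283.7, 2.0) >= 200)  # False
```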