Journal articles on the topic 'Strictly consistent scoring function'

Consult the top 50 journal articles for your research on the topic 'Strictly consistent scoring function.'

1

Fissler, Tobias, Jana Hlavinová, and Birgit Rudloff. "Elicitability and identifiability of set-valued measures of systemic risk." Finance and Stochastics 25, no. 1 (2020): 133–65. http://dx.doi.org/10.1007/s00780-020-00446-z.

Abstract:
Identification and scoring functions are statistical tools to assess the calibration of risk measure estimates and to compare their performance with other estimates, e.g. in backtesting. A risk measure is called identifiable (elicitable) if it admits a strict identification function (strictly consistent scoring function). We consider measures of systemic risk introduced in Feinstein et al. (SIAM J. Financial Math. 8:672–708, 2017). Since these are set-valued, we work within the theoretical framework of Fissler et al. (preprint, available online at arXiv:1910.07912v2, 2020) for forecast evaluation of set-valued functionals. We construct oriented selective identification functions, which induce a mixture representation of (strictly) consistent scoring functions. Their applicability is demonstrated with a comprehensive simulation study.
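Since this terminology heads the whole list, it may help to recall the standard point-valued definitions that the paper generalizes to set-valued functionals. In the sketch below, $T$ is a statistical functional on a class of distributions $\mathcal{F}$ and $Y \sim F$; the notation is generic rather than taken from the paper.

```latex
% T is elicitable if a strictly consistent scoring function S exists,
% and identifiable if a strict identification function V exists.
\[
\mathbb{E}_F\!\left[S\bigl(T(F),Y\bigr)\right] \;\le\; \mathbb{E}_F\!\left[S(x,Y)\right]
\quad \text{for all } x \text{ and all } F \in \mathcal{F},
\]
with equality only if $x = T(F)$ (strict consistency), and
\[
\mathbb{E}_F\!\left[V(x,Y)\right] = 0 \;\Longleftrightarrow\; x = T(F)
\quad \text{for all } F \in \mathcal{F}
\quad \text{(strict identification).}
\]
```

The contribution of the paper is to make sense of these notions when $T(F)$ is a set rather than a point.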
2

Hao, Zhenzhu. "Design and Implementation of Athlete’s Knee Health Monitoring System Based on Cloud Computing." Journal of Sensors 2022 (May 13, 2022): 1–8. http://dx.doi.org/10.1155/2022/4778376.

Abstract:
To support the health care of athletes' knee joints, a cloud-based system was developed to monitor the health of the human knee joint in real time. The system uses a depth camera to collect data on lower-limb alignment and obtains the spatial coordinates of the lower limbs through deep learning; it then analyses and processes the video sequence of lower-limb alignment, including wavelet decomposition and reconstruction of the alignment information, and finally derives monitoring results with a knee-joint scoring method. The hardware and software design was completed, and a neural-network-based method for extracting the coordinates of the main joint points of the human body was implemented. The health monitoring algorithm produces the knee-joint health status of the subjects through the evaluation system, and its reliability was verified by comparison with the subjects' actual health status. Experiments show that the monitoring error of the system is less than 10%, with an overall error of only 6%. The subjects' KSS standard scores are consistent with the system's monitoring evaluation, and the score trends are essentially the same. For cases in which the overall system score is lower than the KSS standard, discussion with orthopaedic experts suggests that the KSS measurements involve subjective self-assessment by the subjects, whereas the system's monitoring algorithm is stricter and more objective. The results demonstrate that the system designed in this paper can effectively monitor the health of athletes' knee joints.
3

Smith, Zachary J., and J. Eric Bickel. "Additive Scoring Rules for Discrete Sample Spaces." Decision Analysis 17, no. 2 (2020): 115–33. http://dx.doi.org/10.1287/deca.2019.0398.

Abstract:
In this paper, we develop strictly proper scoring rules that may be used to evaluate the accuracy of a sequence of probabilistic forecasts. In practice, when forecasts are submitted for multiple uncertainties, competing forecasts are ranked by their cumulative or average score. Alternatively, one could score the implied joint distributions. We demonstrate that these measures of forecast accuracy disagree under some commonly used rules. Furthermore, and most importantly, we show that forecast rankings can depend on the selected scoring procedure. In other words, under some scoring rules, the relative ranking of probabilistic forecasts does not depend solely on the information content of those forecasts and the observed outcome. Instead, the relative ranking of forecasts is a function of the process by which those forecasts are evaluated. As an alternative, we describe additive and strongly additive strictly proper scoring rules, which have the property that the score for the joint distribution is equal to a sum of scores for the associated marginal and conditional distributions. We give methods for constructing additive rules and demonstrate that the logarithmic score is the only strongly additive rule. Finally, we connect the additive properties of scoring rules with analogous properties for a general class of entropy measures.
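The "strongly additive" property that the abstract reserves for the logarithmic score follows from the probability chain rule: log p(x, y) = log p(x) + log p(y | x), so the joint log score equals the sum of the marginal and conditional log scores. A minimal numerical check, with a made-up two-variable forecast rather than anything from the paper:

```python
import numpy as np

# Hypothetical joint forecast over two binary variables X and Y (rows: x, cols: y).
p_joint = np.array([[0.30, 0.10],
                    [0.15, 0.45]])

p_x = p_joint.sum(axis=1)                 # marginal forecast for X
p_y_given_x = p_joint / p_x[:, None]      # conditional forecast for Y given X

x_obs, y_obs = 1, 0                       # an observed outcome

# Logarithmic score (positively oriented): log of the probability assigned to the outcome.
score_joint = np.log(p_joint[x_obs, y_obs])
score_sum = np.log(p_x[x_obs]) + np.log(p_y_given_x[x_obs, y_obs])

print(score_joint, score_sum)             # identical up to floating-point error
assert np.isclose(score_joint, score_sum)
```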
4

Carvalho, Arthur. "Tailored proper scoring rules elicit decision weights." Judgment and Decision Making 10, no. 1 (2015): 86–96. http://dx.doi.org/10.1017/s193029750000320x.

Abstract:
Proper scoring rules are scoring methods that incentivize honest reporting of subjective probabilities, where an agent strictly maximizes his expected score by reporting his true belief. The implicit assumption behind proper scoring rules is that agents are risk neutral. Such an assumption is often unrealistic when agents are human beings. Modern theories of choice under uncertainty based on rank-dependent utilities assert that human beings weight nonlinear utilities using decision weights, which are differences between weighting functions applied to cumulative probabilities. In this paper, I investigate the reporting behavior of an agent with a rank-dependent utility when he is rewarded using a proper scoring rule tailored to his utility function. I show that such an agent misreports his true belief by reporting a vector of decision weights. My findings thus highlight the risk of utilizing proper scoring rules without prior knowledge about all the components that drive an agent's attitude towards uncertainty. On the positive side, I discuss how tailored proper scoring rules can effectively elicit weighting functions. Moreover, I show how to obtain an agent's true belief from his misreported belief once the weighting functions are known.
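The qualitative effect described here is easy to reproduce in a toy setting. The sketch below uses a binary event, linear utility, and a single Prelec weighting function applied directly to the event probability, which is a deliberate simplification of the paper's rank-dependent framework; the weighting function and its parameter are assumptions for illustration only.

```python
import numpy as np

def prelec_w(p, alpha=0.65):
    """Prelec probability-weighting function (a common parametric choice; assumed here)."""
    return np.exp(-(-np.log(p)) ** alpha)

q = 0.30                                   # true belief that the event occurs
reports = np.linspace(0.001, 0.999, 999)

# Expected quadratic loss (negatively oriented Brier score) for each possible report.
risk_neutral = q * (1 - reports) ** 2 + (1 - q) * reports ** 2
weighted = prelec_w(q) * (1 - reports) ** 2 + (1 - prelec_w(q)) * reports ** 2

print("risk-neutral optimum:", reports[np.argmin(risk_neutral)])   # ~= q = 0.30
print("with weighting:      ", reports[np.argmin(weighted)])       # ~= w(q), not q
```

Under risk neutrality the expected-loss minimizer coincides with the true belief, which is exactly strict propriety; with probability weighting the reported optimum drifts towards the decision weight.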
5

Wang, Changwei, Yama Aman, Xiaoxi Ji, and Yirong Mo. "Tetrel bonding interaction: an analysis with the block-localized wavefunction (BLW) approach." Physical Chemistry Chemical Physics 21, no. 22 (2019): 11776–84. http://dx.doi.org/10.1039/c9cp01710k.

Abstract:
In this study, fifty-one iconic tetrel bonding complexes were studied using the block localized wave function (BLW) method which can derive the self-consistent wavefunction for an electron-localized (diabatic) state where charge transfer is strictly deactivated.
6

Wang, Steve S., and Daniel J. Ehrlich. "Image-Based Phenotypic Screening with Human Primary T Cells Using One-Dimensional Imaging Cytometry with Self-Tuning Statistical-Gating Algorithms." SLAS DISCOVERY: Advancing the Science of Drug Discovery 22, no. 8 (2017): 985–94. http://dx.doi.org/10.1177/2472555217705953.

Abstract:
The parallel microfluidic cytometer (PMC) is an imaging flow cytometer that operates on statistical analysis of low-pixel-count, one-dimensional (1D) line scans. It is highly efficient in data collection and operates on suspension cells. In this article, we present a supervised automated pipeline for the PMC that minimizes operator intervention by incorporating multivariate logistic regression for data scoring. We test the self-tuning statistical algorithms in a human primary T-cell activation assay in flow using nuclear factor of activated T cells (NFAT) translocation as a readout and readily achieve an average Z′ of 0.55 and strictly standardized mean difference of 13 with standard phorbol myristate acetate/ionomycin induction. To implement the tests, we routinely load 4 µL samples and can readout 3000 to 9000 independent conditions from 15 mL of primary human blood (buffy coat fraction). We conclude that the new technology will support primary-cell protein-localization assays and “on-the-fly” data scoring at a sample throughput of more than 100,000 wells per day and that it is, in principle, consistent with a primary pharmaceutical screen.
7

Asperó, David, and Philip D. Welch. "Bounded Martin's Maximum, weak Erdős cardinals, and ψAC." Journal of Symbolic Logic 67, no. 3 (2002): 1141–52. http://dx.doi.org/10.2178/jsl/1190150154.

Abstract:
We prove that a form of the Erdős property (consistent with V = L[Hω2] and strictly weaker than the Weak Chang's Conjecture at ω1), together with Bounded Martin's Maximum implies that Woodin's principle ψAC holds, and therefore . We also prove that ψAC implies that every function f: ω1 → ω1 is bounded by some canonical function on a club and use this to produce a model of the Bounded Semiproper Forcing Axiom in which Bounded Martin's Maximum fails.
8

Alonso, Ricardo N., Maria B. Eizaguirre, Berenice Silva, et al. "Brain Function Assessment of Patients with Multiple Sclerosis in the Expanded Disability Status Scale." International Journal of MS Care 22, no. 1 (2020): 31–35. http://dx.doi.org/10.7224/1537-2073.2018-084.

Abstract:
Abstract Background: There is no consensus regarding assessment of the brain function functional system (FS) of the Expanded Disability Status Scale (EDSS) in patients with multiple sclerosis (MS). We sought to describe brain function FS assessment criteria used by Argentinian neurologists and, based on the results, propose redefined brain function FS criteria. Methods: A structured survey was conducted of 113 Argentinian neurologists. Considering the survey results, we decided to redefine the brain function FS scoring using the Brief International Cognitive Assessment for MS (BICAMS) battery. For 120 adult patients with MS we calculated the EDSS score without brain function FS (basal EDSS) and compared it with the EDSS score after adding the modified brain function FS (modified EDSS). Results: Of the 93 neurologists analyzed, 14% reported that they did not assess brain function FS, 35% reported that they assessed it through a nonstructured interview, and the remainder used other tools. Significant differences were found in EDSS scores before and after the inclusion of BICAMS (P < .001). Redefining the brain function FS, 15% of patients modified their basal EDSS score, as did 20% of those with a score of 4.0 or less. Conclusions: The survey results show the importance of unifying the brain function FS scoring criteria in calculating the EDSS score. While allowing more consistent brain function FS scoring, including the modified brain function FS led to a change in EDSS score in many patients, particularly in the lower range of EDSS scores. Considering the relevance of the EDSS for monitoring patients with MS and for decision making, it is imperative to further validate the modified brain function FS scoring.
9

MONTALBÁN, ANTONIO, and JAMES WALSH. "ON THE INEVITABILITY OF THE CONSISTENCY OPERATOR." Journal of Symbolic Logic 84, no. 1 (2019): 205–25. http://dx.doi.org/10.1017/jsl.2018.65.

Abstract:
We examine recursive monotonic functions on the Lindenbaum algebra of $EA$. We prove that no such function sends every consistent φ to a sentence with deductive strength strictly between φ and $\varphi \wedge \mathrm{Con}(\varphi)$. We generalize this result to iterates of consistency into the effective transfinite. We then prove that for any recursive monotonic function f, if there is an iterate of $\mathrm{Con}$ that bounds f everywhere, then f must be somewhere equal to an iterate of $\mathrm{Con}$.
10

Rusidawati, Rusidawati, Aprida Siska Lestia, and Saman Abdurrahman. "KARAKTERISTIK UKURAN RISIKO DISTORSI." EPSILON: JURNAL MATEMATIKA MURNI DAN TERAPAN 16, no. 1 (2022): 40. http://dx.doi.org/10.20527/epsilon.v16i1.5175.

Abstract:
Insurance is a transfer of risk from the insured to the insurer. Insurance companies are generally grouped into two types, life insurance and general insurance, and risk in general insurance is quantified using risk measures. In risk management, one method of constructing risk measures is based on a distortion function. The purpose of this study is to prove theorems on the coherence and consistency properties of distortion risk measures. The study explains how a distortion risk measure is constructed from a distortion function, shows that coherence holds when the distortion function is concave, establishes the consistency of distortion risk measures with second-order stochastic dominance, and verifies the coherence and consistency of several specific distortion risk measures. The results show that a concave distortion function is a necessary and sufficient condition for coherence, and a strictly concave distortion function is a necessary and sufficient condition for strict ordering consistent with second-order stochastic dominance.
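For reference, the construction the abstract refers to is usually written as follows, with $g:[0,1]\to[0,1]$ non-decreasing, $g(0)=0$, $g(1)=1$, and $S_X(x)=\mathbb{P}(X>x)$ the survival function (generic notation, not taken from the paper):

```latex
% Distortion risk measure generated by the distortion function g.
\[
\rho_g(X) \;=\; \int_0^{\infty} g\bigl(S_X(x)\bigr)\,dx
\;+\; \int_{-\infty}^{0} \Bigl[ g\bigl(S_X(x)\bigr) - 1 \Bigr]\,dx .
\]
```

The classical results being re-proved are that $\rho_g$ is coherent for every risk $X$ exactly when $g$ is concave, and, as the abstract states, that strict concavity of $g$ is what yields strict ordering consistency with second-order stochastic dominance.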
11

Roszkowska, Ewa, and Tomasz Wachowicz. "The Multi-Criteria Negotiation Analysis Based on the Membership Function." Studies in Logic, Grammar and Rhetoric 37, no. 1 (2014): 195–217. http://dx.doi.org/10.2478/slgr-2014-0025.

Abstract:
In this paper we propose a multi-criteria model based on the fuzzy-preferences approach which can be implemented in the prenegotiation phase to evaluate negotiation packages. The applicability of several multi-criteria ranking methods for building a scoring function for negotiation packages is discussed. The first is the Simple Additive Weighting (SAW) technique, which determines the partial satisfaction from each negotiation issue and aggregates these values using the issue weights. The second is the class of Distance Based Methods (DBM), with an extension based on distances to the ideal or anti-ideal package, i.e. the TOPSIS procedure. In our approach the negotiator's preferences over the issues are represented by fuzzy membership functions, and a selected multi-criteria decision-making method is then adopted to determine the global rating of each package. The membership functions are used here as equivalents of utility functions spread over the negotiation issues, which lets us compare different types of data. One key advantage of the proposed approach is its usefulness for building a general scoring function in ill-structured negotiation problems, i.e. situations in which the problem itself as well as the negotiator's preferences cannot be precisely defined and the available information is uncertain, subjective and vague. Secondly, all proposed variants of the scoring function produce consistent rankings: adding (or removing) new packages does not result in rank reversal.
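As a rough illustration of the SAW variant described above (with made-up issues, weights, and linear membership functions; none of these values come from the paper):

```python
import numpy as np

# Hypothetical negotiation issues with weights summing to 1 (assumed values).
weights = {"price": 0.5, "delivery_days": 0.3, "warranty_months": 0.2}

# Issue ranges (worst, best) from the negotiator's point of view -- assumed.
ranges = {"price": (120.0, 80.0), "delivery_days": (30.0, 7.0), "warranty_months": (6.0, 24.0)}

def membership(value, worst, best):
    """Linear membership (satisfaction) function: 0 at the worst value, 1 at the best."""
    return float(np.clip((value - worst) / (best - worst), 0.0, 1.0))

def saw_score(package):
    """Simple Additive Weighting: weighted sum of the membership values."""
    return sum(w * membership(package[k], *ranges[k]) for k, w in weights.items())

packages = {
    "A": {"price": 95.0, "delivery_days": 14.0, "warranty_months": 12.0},
    "B": {"price": 110.0, "delivery_days": 10.0, "warranty_months": 24.0},
}
for name, pkg in packages.items():
    print(name, round(saw_score(pkg), 3))
```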
12

Merkl, Rainer. "AMIGOS: A Method for the Inspection of Genomic Organisation or Structure and its Application to Characterise Conserved Gene Arrangements." In Silico Biology: Journal of Biological Systems Modeling and Multi-Scale Simulation 6, no. 4 (2006): 281–306. https://doi.org/10.3233/isb-00242.

Abstract:
In order to identify and to characterise gene clusters conserved in microbial genomes, the algorithm AMIGOS was developed. It is based on a categorisation of genes using a predefined set of gene functions (GFs). After the categorisation of all genes of a genome and based on their location on a replicon, distances between GFs were determined and stored in genome-specific matrices. These matrices were used to identify GF clusters like those strictly conserved in 13 archaeal, in 47 bacterial genomes and in the combination of the sets. Within the combined set of these 60 microbial genomes, there exist only two strictly conserved clusters harbouring two ribosomal genes each, namely those for L4, L23 and L22, L29. In order to characterise less strictly conserved GF clusters, content of genomes i.e. matrices were analysed pairwise. Resulting clusters were merged to (meta-) clusters if their content overlapped. A scoring system named cons $_{CL}$ was developed. It quantifies conservedness of cluster membership for individual GFs. For the genome of Escherichia coli it was shown that a grouping of cluster elements on cons $_{CL}$ values dissected the clusters into smaller sets. These sets were frequently overlapped by known transcriptional units (TUs). This finding justifies the usage of cons $_{CL}$ scores to predict TU membership of genes. In addition, cons $_{CL}$ values provide a sound basis for non-homologous gene annotation. Based on cons $_{CL}$ values, examples of conserved clusters containing annotated genes and single ones with unknown function are given.
13

Knight, John L., and Jun Yu. "EMPIRICAL CHARACTERISTIC FUNCTION IN TIME SERIES ESTIMATION." Econometric Theory 18, no. 3 (2002): 691–721. http://dx.doi.org/10.1017/s026646660218306x.

Abstract:
Because the empirical characteristic function (ECF) is the Fourier transform of the empirical distribution function, it retains all the information in the sample but can overcome difficulties arising from the likelihood. This paper discusses an estimation method via the ECF for strictly stationary processes. Under some regularity conditions, the resulting estimators are shown to be consistent and asymptotically normal. The method is applied to estimate the stable autoregressive moving average (ARMA) models. For the general stable ARMA model for which the maximum likelihood approach is not feasible, Monte Carlo evidence shows that the ECF method is a viable estimation method for all the parameters of interest. For the Gaussian ARMA model, a particular stable ARMA model, the optimal weight functions and estimating equations are given. Monte Carlo studies highlight the finite sample performances of the ECF method relative to the exact and conditional maximum likelihood methods.
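The core idea of ECF estimation is to match the empirical characteristic function to the model characteristic function over a set of frequencies. The sketch below does this in a deliberately simplified i.i.d. Gaussian setting with an unweighted L2 distance; the paper treats strictly stationary processes and stable ARMA models and derives optimal weight functions, none of which is reproduced here.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
y = rng.normal(loc=1.5, scale=2.0, size=2000)   # simulated data (i.i.d. case for simplicity)

t_grid = np.linspace(0.1, 1.0, 10)              # frequencies at which the ECF is evaluated

def ecf(t, x):
    """Empirical characteristic function: (1/n) * sum_j exp(i t x_j)."""
    return np.exp(1j * np.outer(t, x)).mean(axis=1)

def gaussian_cf(t, mu, sigma):
    """Characteristic function of N(mu, sigma^2)."""
    return np.exp(1j * t * mu - 0.5 * (sigma * t) ** 2)

def objective(theta):
    mu, log_sigma = theta
    diff = ecf(t_grid, y) - gaussian_cf(t_grid, mu, np.exp(log_sigma))
    return np.sum(np.abs(diff) ** 2)            # plain L2 distance (a simple, non-optimal choice)

res = minimize(objective, x0=[0.0, 0.0], method="Nelder-Mead")
print(res.x[0], np.exp(res.x[1]))               # estimates of mu and sigma
```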
14

Fehr, Lawrence A., Shelley M. Fischer, and Leighton E. Stamps. "Mock Jurors' Behavior: Sentencing as a Function of Jurors' Guilt." Psychological Reports 60, no. 3 (1987): 727–31. http://dx.doi.org/10.2466/pr0.1987.60.3.727.

Abstract:
Fictitious court cases involving rape and assault were presented to 98 college women to determine whether the tendency to deal harshly with alleged criminals is dependent upon certain personality characteristics of the mock jurors. Three measures of jurors' guilt were used to detect the presence or absence of such trends. Subjects scoring high on guilt dealt less harshly with alleged criminals than subjects low in guilt. This trend was consistent for all three independent scales of the Mosher Guilt Scales (sex guilt, hostility guilt, and morality guilt). It was concluded that those who tend to find fault with themselves are sympathetic to the legal problems of others.
15

Rombaux, Ph, C. Huart, and A. Mouraux. "Assessment of chemosensory function using electroencephalographic techniques." Rhinology journal 50, no. 1 (2012): 13–21. http://dx.doi.org/10.4193/rhino11.126.

Abstract:
Electroencephalographic techniques are widely used to provide an objective evaluation of chemosensory function and to explore neural mechanisms related to the processing of chemosensory events. The most popular technique to evaluate brain responses to chemosensory stimuli is across-trial time-domain averaging to reveal chemosensory event-related potentials (CSERP) embedded within the ongoing EEG. Nevertheless, this technique has a poor signal-to-noise ratio and cancels out stimulus-induced changes in the EEG signal that are not strictly phase-locked to stimulus onset. The fact that consistent CSERP are not systematically identifiable in healthy subjects currently constitutes a major limitation to the use of this technique for the diagnosis of chemosensory dysfunction. Here, we describe the different techniques related to the recording and identification of CSERP, discuss some of their limitations, and propose some novel signal processing methods which could be used to enhance the signal-to-noise ratio of chemosensory event-related brain responses.
16

Zhou, Zhongbao, Xianghui Liu, Helu Xiao, TianTian Ren, and Wenbin Liu. "Time-Consistent Strategies for Multi-Period Portfolio Optimization with/without the Risk-Free Asset." Mathematical Problems in Engineering 2018 (September 6, 2018): 1–20. http://dx.doi.org/10.1155/2018/7563093.

Abstract:
The pre-commitment and time-consistent strategies are the two most representative investment strategies for the classic multi-period mean-variance portfolio selection problem. In this paper, we revisit the case in which there exists one risk-free asset in the market and prove that the time-consistent solution is equivalent to the optimal open-loop solution for the classic multi-period mean-variance model. Then, we further derive the explicit time-consistent solution for the classic multi-period mean-variance model only with risky assets, by constructing a novel Lagrange function and using backward induction. Also, we prove that the Sharpe ratio with both risky and risk-free assets strictly dominates that of only with risky assets under the time-consistent strategy setting. After the theoretical investigation, we perform extensive numerical simulations and out-of-sample tests to compare the performance of pre-commitment and time-consistent strategies. The empirical studies shed light on the important question: what is the primary motivation of using the time-consistent investment strategy.
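For readers cross-referencing this entry with the other mean-variance papers in the list, the reason a pre-commitment/time-consistent split exists at all can be stated in one line: the variance of terminal wealth does not satisfy the tower property, so Bellman's principle cannot be applied to the criterion directly. Schematically (generic notation, not the paper's):

```latex
\[
\max_{\pi}\;\; \mathbb{E}\bigl[W_T\bigr] \;-\; \frac{\gamma}{2}\,\mathrm{Var}\bigl[W_T\bigr],
\qquad
\mathrm{Var}\bigl[W_T\bigr] \;=\; \mathbb{E}\bigl[\mathrm{Var}(W_T \mid \mathcal{F}_t)\bigr]
\;+\; \mathrm{Var}\bigl(\mathbb{E}[W_T \mid \mathcal{F}_t]\bigr).
\]
```

The second (outer-variance) term couples the time-$t$ problem to decisions made at other dates, which is why the statically optimal strategy re-derived at a later date generally disagrees with the one fixed at time 0.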
17

Cascio, Elizabeth U., and Diane Whitmore Schanzenbach. "First in the Class? Age and the Education Production Function." Education Finance and Policy 11, no. 3 (2016): 225–50. http://dx.doi.org/10.1162/edfp_a_00191.

Abstract:
We estimate the effects of relative age in kindergarten using data from an experiment where children of the same age were randomly assigned to different kindergarten classmates. We exploit the resulting experimental variation in relative age in conjunction with variation in expected kindergarten entry age based on birthdate to account for negative selection of some of the older school entrants. We find that, holding constant own age, having older classmates on average improves educational outcomes, increasing test scores up to eight years after kindergarten, and raising the probability of taking a college-entry exam. These findings suggest that delaying kindergarten entry, or so-called academic “redshirting,” does not harm other children—and may in fact benefit them—consistent with positive spillovers from higher-scoring or better-behaved peers.
18

Amstutz, Samuel, Charles Dapogny, and Alex Ferrer. "A consistent approximation of the total perimeter functional for topology optimization algorithms." ESAIM: Control, Optimisation and Calculus of Variations 28 (2022): 18. http://dx.doi.org/10.1051/cocv/2022005.

Abstract:
This article revolves around the total perimeter functional, one particular version of the perimeter of a shape Ω contained in a fixed computational domain D measuring the total area of its boundary ∂Ω, as opposed to its relative perimeter, which only takes into account the regions of ∂Ω strictly inside D. We construct and analyze approximate versions of the total perimeter which make sense for general “density functions” u, as generalized characteristic functions of shapes. Their use in the context of density-based topology optimization is particularly convenient insofar as they do not involve the gradient of the optimized function u. Two different constructions are proposed: while the first one involves the convolution of the function u with a smooth mollifier, the second one is based on the resolution of an elliptic boundary-value problem featuring Robin boundary conditions. The “consistency” of these approximations with the original notion of total perimeter is appraised from various points of view. At first, we prove the pointwise convergence of our approximate functionals, then the convergence of their derivatives, as the level of smoothing tends to 0, when the considered density function u is the characteristic function of a “regular enough” shape Ω ⊂ D. Then, we focus on the Γ-convergence of the second type of approximate total perimeter functional, that based on elliptic regularization. Several numerical examples are eventually presented in two and three space dimensions to validate our theoretical findings and demonstrate the efficiency of the proposed functionals in the context of structural optimization.
19

Shabbir, Tayyeb. "Mincerian Earnings Function for Pakistan." Pakistan Development Review 33, no. 1 (1994): 1–18. http://dx.doi.org/10.30541/v33i1pp.1-18.

Abstract:
Due to its central role in various debates about the determinants of individual earnings, the Mincerian earnings function (MEF) as given in Mincer (1974) has attracted the attention of many economists. The MEF has been estimated virtually for every country except Pakistan, where a necessary condition has been missing, i.e., national level data on the exact number of years of schooling completed has not been available; instead, in a majority of the relevant micro-level surveys, schooling has been measured only in terms of a 'categorical' variable with possible values being 'Primary and Incomplete Middle', 'Middle and Incomplete Matric', etc. At best, this data deficiency has restricted the existing estimated earnings functions to what we refer to as the 'Dummies earnings functions' (DEF) since they are constrained to specify schooling in terms of a set of dichotomous dummy variables. Using nationally representative data on male earners, this study tries to fill the above gap by estimating the MEF both in its 'strict' as well as the 'extended' forms. In terms of the 'strict' MEF, i.e., the one analogous to Mincer's (1974) specification which essentially treats earnings as a function of schooling and job-market experience, the main findings are that the marginal rate of return to schooling is 8 percent, the experience-earnings profile is consistent with the pattern suggested by the human capital theory and as much as 41 percent of the variance in log earnings is accounted for by the strictly defined MEF. By and large, these findings are consistent with those implied by estimated MEFs for comparable LDCs. Further, the present study also estimates the 'extended' MEF, whose specification supplements that of the 'strict' MEF by adding variables to control for urban vs rural background, occupational categories, employment status, and provincial heterogeneity. The 'extended' MEFs are also estimated separately for urban and rural samples and for each province. Formal 'Chow-type F tests' conducted to test for homogeneity of the parameters of the MEF across different sub-samples reveal 'pervasive' segmentation across the above strata.
20

Albrecht, Duane G., and Wilson S. Geisler. "Motion selectivity and the contrast-response function of simple cells in the visual cortex." Visual Neuroscience 7, no. 6 (1991): 531–46. http://dx.doi.org/10.1017/s0952523800010336.

Abstract:
The responses of simple cells were recorded from the visual cortex of cats, as a function of the position and contrast of counterphase and drifting grating patterns, to assess whether direction selectivity can be accounted for on the basis of linear summation. The expected responses to a counterphase grating, given a strictly linear model, would be the sum of the responses to the two drifting components. The measured responses were not consistent with the linear prediction. For example, nearly all cells showed two positions where the responses approached zero (i.e. two "null phase positions"); this was true, even for the most direction selective cells. However, the measured responses were consistent with the hypothesis that direction selectivity is a consequence of the linear spatiotemporal receptive-field structure, coupled with the nonlinearities revealed by the contrast-response function: contrast gain control, halfwave rectification, and expansive exponent. When arranged in a particular sequence, each of these linear and nonlinear mechanisms performs a useful function in a general model of simple cells. The linear spatiotemporal receptive field initiates stimulus selectivity (for direction, orientation, spatial frequency, etc.). The expansive response exponent enhances selectivity. The contrast-set gain control maintains selectivity (over a wide range of contrasts, in spite of the limited dynamic response range and steep slope of the contrast-response function). Rectification conserves metabolic energy.
21

Foster, Elinor R., and Jessica A. Downs. "Methylation of H3 K4 and K79 is not strictly dependent on H2B K123 ubiquitylation." Journal of Cell Biology 184, no. 5 (2009): 631–38. http://dx.doi.org/10.1083/jcb.200812088.

Abstract:
Covalent modifications of histone proteins have profound consequences on chromatin structure and function. Specific modification patterns constitute a code read by effector proteins. Studies from yeast found that H3 trimethylation at K4 and K79 is dependent on ubiquitylation of H2B K123, which is termed a “trans-tail pathway.” In this study, we show that a strain unable to be ubiquitylated on H2B (K123R) is still proficient for H3 trimethylation at both K4 and K79, indicating that H3 methylation status is not solely dependent on H2B ubiquitylation. However, additional mutations in H2B result in loss of H3 methylation when combined with htb1-K123R. Consistent with this, we find that the original strain used to identify the trans-tail pathway has a genomic mutation that, when combined with H2B K123R, results in defective H3 methylation. Finally, we show that strains lacking the ubiquitin ligase Bre1 are defective for H3 methylation, suggesting that there is an additional Bre1 substrate that in combination with H2B K123 facilitates H3 methylation.
22

Schneider, Nadine, Gudrun Lange, Sally Hindle, Robert Klein, and Matthias Rarey. "A consistent description of HYdrogen bond and DEhydration energies in protein–ligand complexes: methods behind the HYDE scoring function." Journal of Computer-Aided Molecular Design 27, no. 1 (2012): 15–29. http://dx.doi.org/10.1007/s10822-012-9626-2.

23

EDGECOMBE, GREGORY D. "Anatomical nomenclature: homology, standardization and datasets." Zootaxa 1950, no. 1 (2008): 87–95. http://dx.doi.org/10.11646/zootaxa.1950.1.8.

Abstract:
Strictly homology-based character names have the benefit of a consistent, evolutionary basis but must overcome practical problems in terms of the function that names serve as tools for communication. Character names should be fixed at the level of primary (rather than secondary) homology in order to maintain nomenclatural stability between competing phylogenies and to allow characters to potentially re-optimize with the addition of data. Inconsistent rules determine the priority of names for anatomical structures, in marked contrast to the stability and clarity provided by Codes for taxonomic nomenclature. Standardized anatomical nomenclature is amenable to a web-based, ontology-driven framework. Imagery and associated metadata linked to phylogenetic datasets facilitate character documentation, nomenclatural stability, and repeatability without requiring a formal process of typification.
24

Nicholls, Peter K., Peter G. Stanton, Justin L. Chen, et al. "Activin Signaling Regulates Sertoli Cell Differentiation and Function." Endocrinology 153, no. 12 (2012): 6065–77. http://dx.doi.org/10.1210/en.2012-1821.

Abstract:
Abstract Throughout development, activin A signaling stimulates proliferation and inhibits differentiation of testicular Sertoli cells. A decline in activin levels at puberty corresponds with the differentiation of Sertoli cells that is required to sustain spermatogenesis. In this study, we consider whether terminally differentiated Sertoli cells can revert to a functionally immature phenotype in response to activin A. To increase systemic activin levels, the right tibialis anterior muscle of 7-wk-old C57BL/6J mice was transduced with an adeno-associated virus (rAAV6) expressing activin A. We show that chronic activin signaling reduces testis mass by 23.5% compared with control animals and induces a hypospermatogenic phenotype, consistent with a failure of Sertoli cells to support spermatogenesis. We use permeability tracers and transepithelial electrical resistance measurements to demonstrate that activin potently disrupts blood-testis-barrier function in adult mice and ablates tight junction formation in differentiated primary Sertoli cells, respectively. Furthermore, increased activin signaling reinitiates a program of cellular proliferation in primary Sertoli cells as determined by 5-ethynyl-2′-deoxyuridine incorporation. Proliferative cells reexpress juvenile markers, including cytokeratin-18, and suppress mature markers, including claudin-11. Thus, activin A is the first identified factor capable of reprogramming Sertoli cells to an immature, dedifferentiated phenotype. This study indicates that activin signaling must be strictly controlled in the adult in order to maintain Sertoli cell function in spermatogenesis.
25

Sinha, A. A., C. Guidos, K. C. Lee, and E. Diener. "Functions of accessory cells in B cell responses to thymus-independent antigens." Journal of Immunology 138, no. 12 (1987): 4143–49. http://dx.doi.org/10.4049/jimmunol.138.12.4143.

Abstract:
Abstract The functions of adherent accessory (A) cells in thymus-independent (TI) B cell activation were investigated using homogeneous A cell lines with distinct cell surface and functional characteristics, as well as inhibitors of antigen processing and interleukin 1 (IL 1) secretion. B cell responses to both type 1 and type 2 TI antigens were found to be strictly A cell dependent. Only A cells capable of IL 1 secretion could restore responsiveness in A cell-depleted spleen cells, regardless of Ia expression or antigen-processing capability. Moreover, recombinant IL 1 completely replaced A cell function in B cell responses to both TI 1 and TI 2 antigens. Finally, T cell depletion did not diminish the reconstitution by IL 1. Thus in contrast to T cell activation, IL 1 secretion is the only A cell function required in TI B cell activation, and the data are consistent with a direct role for IL 1 in B cell activation.
26

Zhang, Yumo. "Dynamic Optimal Mean-Variance Portfolio Selection with a 3/2 Stochastic Volatility." Risks 9, no. 4 (2021): 61. http://dx.doi.org/10.3390/risks9040061.

Abstract:
This paper considers a mean-variance portfolio selection problem when the stock price has a 3/2 stochastic volatility in a complete market. Specifically, we assume that the stock price and the volatility are perfectly negatively correlated. By applying a backward stochastic differential equation (BSDE) approach, closed-form expressions for the statically optimal (time-inconsistent) strategy and the value function are derived. Due to the time-inconsistency of the mean-variance criterion, a dynamic formulation of the problem is presented. We obtain the dynamically optimal (time-consistent) strategy explicitly, which is shown to keep the wealth process strictly below the target (expected terminal wealth) before the terminal time. Finally, we provide numerical studies to show the impact of the main model parameters on the efficient frontier and illustrate the differences between the two optimal wealth processes.
27

Berlin, V., C. A. Styles, and G. R. Fink. "BIK1, a protein required for microtubule function during mating and mitosis in Saccharomyces cerevisiae, colocalizes with tubulin." Journal of Cell Biology 111, no. 6 (1990): 2573–86. http://dx.doi.org/10.1083/jcb.111.6.2573.

Abstract:
BIK1 function is required for nuclear fusion, chromosome disjunction, and nuclear segregation during mitosis. The BIK1 protein colocalizes with tubulin to the spindle pole body and mitotic spindle. Synthetic lethality observed in double mutant strains containing a mutation in the BIK1 gene and in the gene for alpha- or beta-tubulin is consistent with a physical interaction between BIK1 and tubulin. Furthermore, over- or underexpression of BIK1 causes aberrant microtubule assembly and function, bik1 null mutants are viable but contain very short or undetectable cytoplasmic microtubules. Spindle formation often occurs strictly within the mother cell, probably accounting for the many multinucleate and anucleate bik1 cells. Elevated levels of chromosome loss in bik1 cells are indicative of defective spindle function. Nuclear fusion is blocked in bik1 x bik1 zygotes, which have truncated cytoplasmic microtubules. Cells overexpressing BIK1 initially have abnormally short or nonexistent spindle microtubules and long cytoplasmic microtubules. Subsequently, cells lose all microtubule structures, coincident with the arrest of division. Based on these results, we propose that BIK1 is required stoichiometrically for the formation or stabilization of microtubules during mitosis and for spindle pole body fusion during conjugation.
28

Yuan, C., and B. Malone. "Learning Optimal Bayesian Networks: A Shortest Path Perspective." Journal of Artificial Intelligence Research 48 (October 16, 2013): 23–65. http://dx.doi.org/10.1613/jair.4039.

Abstract:
In this paper, learning a Bayesian network structure that optimizes a scoring function for a given dataset is viewed as a shortest path problem in an implicit state-space search graph. This perspective highlights the importance of two research issues: the development of search strategies for solving the shortest path problem, and the design of heuristic functions for guiding the search. This paper introduces several techniques for addressing the issues. One is an A* search algorithm that learns an optimal Bayesian network structure by only searching the most promising part of the solution space. The others are mainly two heuristic functions. The first heuristic function represents a simple relaxation of the acyclicity constraint of a Bayesian network. Although admissible and consistent, the heuristic may introduce too much relaxation and result in a loose bound. The second heuristic function reduces the amount of relaxation by avoiding directed cycles within some groups of variables. Empirical results show that these methods constitute a promising approach to learning optimal Bayesian network structures.
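The shortest-path formulation can be sketched compactly: search states are subsets of variables already placed in a topological order, an edge adds one more variable with the best parent set drawn from the already-placed variables, and the first heuristic simply drops the acyclicity constraint. The local scores below are made-up placeholders (in practice they are BIC/MDL-style costs precomputed from data), and the code is a schematic illustration rather than the authors' implementation.

```python
import heapq
from itertools import count

# Hypothetical precomputed local scores (lower is better): scores[X][parents] = cost.
scores = {
    "A": {frozenset(): 10.0, frozenset({"B"}): 9.0, frozenset({"C"}): 8.5, frozenset({"B", "C"}): 8.4},
    "B": {frozenset(): 12.0, frozenset({"A"}): 7.0, frozenset({"C"}): 11.0, frozenset({"A", "C"}): 6.8},
    "C": {frozenset(): 9.0,  frozenset({"A"}): 8.8, frozenset({"B"}): 6.0, frozenset({"A", "B"}): 5.9},
}
variables = frozenset(scores)

def best_score(x, allowed):
    """Cheapest parent set for x drawn only from `allowed`."""
    return min(c for ps, c in scores[x].items() if ps <= allowed)

def heuristic(placed):
    """Relaxed acyclicity: each unplaced variable takes its best parents among all other variables."""
    return sum(best_score(x, variables - {x}) for x in variables - placed)

def astar():
    tie = count()
    start = frozenset()
    frontier = [(heuristic(start), 0.0, next(tie), start, [])]
    best_g = {start: 0.0}
    while frontier:
        f, g, _, placed, order = heapq.heappop(frontier)
        if placed == variables:
            return g, order                          # optimal cost and a compatible variable ordering
        for x in variables - placed:
            g2 = g + best_score(x, placed)           # x may only use already-placed variables as parents
            nxt = placed | {x}
            if g2 < best_g.get(nxt, float("inf")):
                best_g[nxt] = g2
                heapq.heappush(frontier, (g2 + heuristic(nxt), g2, next(tie), nxt, order + [x]))

print(astar())
```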
29

Zhen, Tao, Erika Mijin Kwon, R. Katherine Hyde, et al. "Runx1 Is Strictly Required for Cbfb-MYH11 Induced Leukemia Development." Blood 128, no. 22 (2016): 2722. http://dx.doi.org/10.1182/blood.v128.22.2722.2722.

Abstract:
Inversion of chromosome 16 is a consistent finding in patients with acute myeloid leukemia subtype M4 with eosinophilia (AML M4Eo), which generates a CBFB-MYH11 fusion gene. The prevailing hypothesis for the mechanism of leukemia development by CBFbeta-SMMHC, the fusion protein encoded by CBFB-MYH11, is that CBFbeta-SMMHC is a dominant negative repressor of RUNX1, a transcription factor that physically interacts with CBFbeta and CBFbeta-SMMHC. If this hypothesis is correct, reducing RUNX1 activity should facilitate leukemogenesis by CBFB-MYH11. In fact, loss-of-function mutations in RUNX1 are common in human AML, but not in inv(16) AML. However, we previously demonstrated that CBFB-MYH11 has RUNX1-repression independent functions (Hyde et al., Blood 115:1433, 2010). Moreover, we recently showed that a dominant negative allele of Runx1, Runx1-lz, delayed leukemogenesis by CBFB-MYH11 in a mouse model (Hyde et al., Leukemia 29:1771, 2015). These findings challenge the RUNX1-repression model for CBFbeta-SMMHC mediated leukemogenesis. However, our previous findings are not conclusive since the Runx1+/lz mice used in the previous study have one wild-type Runx1 allele, and still retain some Runx1 function. To definitively address this question, we crossed Cre-based conditional Runx1 knockout mice (Runx1f/f) with Cre-based conditional Cbfb-MYH11 knockin mice (Cbfb+/56M) and Mx1-Cre mice to generate Runx1f/f, Mx1-Cre, Cbfb+/56M mice, which express CBFbeta-SMMHC but not Runx1 after pIpC (poly I:C) treatment to induce Cre expression. Runx1f/f, Mx1-Cre, Cbfb+/56M mice had more severe platelet deficiencies and higher numbers of Lin-/Sca1-/C-kit+ progenitors and Lin-/Sca1+/C-kit+ hematopoietic stem cells in the bone marrow when compared with Runx1f/f, Mx1-Cre mice. Unexpectedly, Runx1f/f, Mx1-Cre, Cbfb+/56M mice also developed severe macrocytic anemia within two weeks after pIpC induction, which was lethal in about 1/3 of the mice. However, none of the Runx1f/f, Mx1-Cre, Cbfb+/56M mice developed leukemia up to one year after pIpC treatment. In contrast, all Mx1-Cre, Cbfb+/56M mice developed leukemia with an average survival of 4 months, as reported previously. These results suggest that Runx1 is strictly required for Cbfb-MYH11 induced leukemogenesis. To further study the mechanism of leukemogenesis, we performed RNA-Seq on C-kit+ bone marrow cells isolated from mice two weeks after pIpC treatment, to explore the global gene expression changes caused by Runx1 knockout in Cbfb-MYH11 expressing mice. Our preliminary data analysis showed that 1688 genes were differentially expressed (Padj ≤0.05, FC ≥ 2) between Runx1f/f, Mx1-Cre, Cbfb+/56M and Mx1-Cre, Cbfb+/56M mice. Interestingly, many of these genes (48%) are Runx1 target genes. The above results suggest that mis-regulating the expression of Runx1 target genes contributes to leukemogenesis by CBFbeta-SMMHC. Disclosures: No relevant conflicts of interest to declare.
30

Lemarquand, Alice, Pierre Jannot, Léo Kammerlocher, et al. "A new trauma severity scoring system adapted to wearable monitoring: A pilot study." PLOS ONE 20, no. 3 (2025): e0318290. https://doi.org/10.1371/journal.pone.0318290.

Abstract:
Wearable technologies represent a strong development axis for various medical applications and these devices are increasingly used in daily life as illustrated by smart watches’ popularisation. Combined with new data processing methods, it constitutes a promising opportunity for telemonitoring, triage in mass casualty situations, or early diagnosis after a traffic or sport accident. An approach to processing the physiological data is to develop severity scoring systems to quantify the critical level of an individual’s health status. However, the existing severity scores require a human evaluation. A first version of a severity scoring system adapted to continuous and real-time wearable monitoring is proposed in this article. The focus is made on three physiological parameters straightforwardly measurable with wrist-wearables: heart rate, respiratory rate, and SpO2, which may be enough to characterise continuously hemodynamic and respiratory status. Intermediate score functions corresponding to each physiological parameter have been established using a sigmoid model. The boundary conditions have been defined based on a survey conducted among 54 health professionals. An adapted function has also been developed to merge the three intermediate scores into a global score. The scores are associated with a triage tricolour code: green for a low-priority casualty, orange for a delayable one, red for an urgent one. Preliminary confrontation of the new severity scoring system with real data has been carried out using a database of 84 subjects admitted to the intensive care unit. Colour classification by the new scoring system was compared with independent physicians’ direct evaluation as a reference. The prediction success rate values 74% over the entire database. Two examples of continuous monitoring over time are also given. The new score has turned out to be consistent, and may be easily upgraded with the integration of additional vital signs monitoring or medical information.
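The structure described here, sigmoid intermediate scores per vital sign merged into a global score and mapped to a triage colour, can be sketched as follows. All numeric boundaries, the aggregation rule, and the colour thresholds below are illustrative assumptions, not the survey-calibrated values from the paper.

```python
import numpy as np

def sigmoid_score(x, centre, width, falling=False):
    """Intermediate severity score in [0, 1] from a sigmoid; 1 = critical."""
    s = 1.0 / (1.0 + np.exp(-(x - centre) / width))
    return 1.0 - s if falling else s

def global_score(hr, rr, spo2):
    # Illustrative boundary choices (NOT the survey-derived values from the paper).
    s_hr = max(sigmoid_score(hr, 130, 8), sigmoid_score(hr, 45, 5, falling=True))
    s_rr = max(sigmoid_score(rr, 28, 3), sigmoid_score(rr, 8, 2, falling=True))
    s_spo2 = sigmoid_score(spo2, 90, 3, falling=True)
    # Simple aggregation: the worst of the three intermediate scores dominates.
    return max(s_hr, s_rr, s_spo2)

def triage_colour(score, orange=0.4, red=0.7):
    return "red" if score >= red else "orange" if score >= orange else "green"

s = global_score(hr=118, rr=26, spo2=89)
print(round(s, 2), triage_colour(s))
```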
31

Francovitch, R. J., and W. J. George. "A Dual Function Exposure System for Acute Exposure of Mice to Chemical Vapors." Journal of the American College of Toxicology 4, no. 1 (1985): 63–69. http://dx.doi.org/10.3109/10915818509014505.

Abstract:
A new and innovative portable system, allowing for the exposure of up to 24 mice to chemical vapors for a period of at least 4 hours, is presented. With this apparatus, mice can be selectively exposed to a chemical agent strictly by the dermal route or by inhalation. Specially designed animal cassettes or holders containing slots in the rear portion were developed for use in this system. These slotted areas in the holders provide adequate ventilation and permit mice to be exposed dermally to test substances. Operating parameters of the exposure system were evaluated with test atmospheres of ethylene dichloride (EDC). Distribution of vapor in the chamber was tested for 2 different concentrations of EDC, 500 ppm and 1200 ppm. Vapor distribution was found to be uniform throughout the chamber for both concentrations. The capability of the exposure system to maintain a consistent desired test atmosphere over a 4-hour period was also investigated. EDC was used at concentrations of 900 and 1000 ppm, and it was demonstrated that the EDC concentration varied by no more than ±8% over the test period. Thus, this exposure chamber represents an inexpensive, uniform, portable system for conducting inhalation or dermal exposure studies with selected contaminants that might represent a potential chemical hazard.
32

Naveed, Waleed Afzal, Qian Liu, Congcong Lu, and Xiaolei Huang. "Symbiotic Bacterial Communities of Insects Feeding on the Same Plant Lineage: Distinct Composition but Congruent Function." Insects 15, no. 3 (2024): 187. http://dx.doi.org/10.3390/insects15030187.

Abstract:
The health and diversity of plant-feeding insects are strictly linked to their host plants and mutualistic symbionts. However, the study of bacterial symbionts within different insects on the same plant lineage is very limited. This study aimed to investigate the bacterial diversity in insect samples that exclusively feed on Bambusa, representing three insect orders, Hemiptera, Lepidoptera, and Blattodea, each exhibiting distinct dietary preferences. The bacterial community was predominantly composed of Proteobacteria, Spirochaetota, Cyanobacteria, Firmicutes, and Bacteroidota. The study found significant variations in symbiotic organisms among three insect orders: hemipterans had Buchnera, lepidopterans had Acinetobacter, and blattodean had Treponema. Furthermore, the dietary preferences of these insects played a pivotal role in shaping the symbiotic relationship of insects. Proteobacteria are prevalent in sap feeders, Spirochaetota dominate in stem feeders, and Cyanobacteria are abundant in leaf feeders. Seasonal influences also affect bacterial symbionts in P. bambucicola, with Serratia present exclusively in winter. We also observed that the bacterial composition varies across all samples, but their core functions appear to be consistent. This highlights the complex relationship between host phylogeny and diet, with phylogeny being the primary driver, shaping adaptations to specialized diets.
33

Rousseau, Denis-Didier. "Climatic transfer function from quaternary molluscs in European loess deposits." Quaternary Research 36, no. 2 (1991): 195–209. http://dx.doi.org/10.1016/0033-5894(91)90025-z.

Abstract:
Correspondence and multiple regression analysis of terrestrial molluscs in the loess sections of Achenheim (Alsace, France) has permitted the reconstruction of climatic variations during the last three glacial-interglacial cycles back to 339,000 yr B.P. The sequence has been dated according to the SPECMAP chronology of Imbrie et al. (1984) and the fossil faunas have been calibrated in relation to recent assemblages sampled in defined ecological conditions in Sweden and France. Transfer functions that relate the abundances of different species to climate allow the reconstruction of temperature and precipitation. Estimates for the coldest (February) and warmest (August) months in present-day Alsace were obtained and variations in temperature between −13° and 2°C in winter and 10° and 17°C in summer were determined. These results are consistent with those yielded by transfer functions using other continental fossils. Estimates differ for past precipitation. Summer precipitation is always less than present (with values between 50 and 78 mm, while modern August values are 76 mm higher). Winter estimates are always higher than the present mean (between 76 and 33 mm, while the recent February value is 34 mm). Comparisons between cycles show that the climatic patterns described for one cycle cannot be strictly applied to the others. Comparisons have been made with the pollen stratigraphy of La Grande Pile, the nearest quantified sequence to Achenheim, and with some Atlantic cores in order to study the magnitude of deviations from modern mean values of the climatic parameters.
34

Liu, Qing, David Pitt, and Xueyuan Wu. "On the prediction of claim duration for income protection insurance policyholders." Annals of Actuarial Science 8, no. 1 (2013): 42–62. http://dx.doi.org/10.1017/s1748499513000134.

Abstract:
This paper explores how we can apply various modern data mining techniques to better understand Australian Income Protection Insurance (IPI). We provide a fast and objective method of scoring claims into different portfolios using available rating factors. Results from fitting several prediction models are compared based on not only the conventional loss prediction error function, but also a modified loss function. We demonstrate that the prediction power of all the data mining methods under consideration is clearly evident using a misclassification plot. We also point out that this predictability can be masked by looking at just the conventional prediction error function. We then suggest using the stepwise regression technique to reduce the number of variables used in the data mining methods. Apart from this variable selection method, we also look at principal components analysis to increase understanding of the rating factors that drive claim durations of insured lives. We also discuss and compare how different variable combining techniques can be used to weight available predicting variables. One interesting outcome we discover is that principal components analysis and the weighted combination prediction model together provide very consistent results on identifying the most significant variables for explaining claim durations.
35

Choi, Hayeon, Youngkyoung Koo, and Sangsoo Park. "Modeling the Power Consumption of Function-Level Code Relocation for Low-Power Embedded Systems." Applied Sciences 9, no. 11 (2019): 2354. http://dx.doi.org/10.3390/app9112354.

Abstract:
The problems associated with the battery life of embedded systems were addressed by focusing on memory components that are heterogeneous and are known to meaningfully affect the power consumption and have not been fully exploited thus far. Our study establishes a model that predicts and orders the efficiency of function-level code relocation. This is based on extensive code profiling that was performed on an actual system to discover the impact and was achieved by using function-level code relocation between the different types of memory, i.e., flash memory and static RAM, to reduce the power consumption. This was accomplished by grouping the assembly instructions to evaluate the distinctive power reduction efficiency depending on function code placement. As a result of the profiling, the efficiency of the function-level code relocation was the lowest at 11.517% for the branch and control groups and the highest at 12.623% for the data processing group. Further, we propose a prior relocation-scoring model to estimate the effective relocation order among functions in a program. To demonstrate the effectiveness of the proposed model, benchmarks in the MiBench benchmark suite were selected as case studies. The experimental results are consistent in terms of the scored outputs produced by the proposed model and measured power reduction efficiencies.
36

Zhang, Han, Yibei Li, and Xiaoming Hu. "Discrete-time inverse linear quadratic optimal control over finite time-horizon under noisy output measurements." Control Theory and Technology 19, no. 4 (2021): 563–72. http://dx.doi.org/10.1007/s11768-021-00066-8.

Abstract:
In this paper, the problem of inverse quadratic optimal control over a finite time-horizon for discrete-time linear systems is considered. Our goal is to recover the corresponding quadratic objective function using noisy observations. First, the identifiability of the model structure for the inverse optimal control problem is analyzed under a relative-degree assumption and we show the model structure is strictly globally identifiable. Next, we study the inverse optimal control problem whose initial state distribution and observation noise distribution are unknown, yet exact observations of the initial states are available. We formulate the problem as a risk minimization problem and approximate it using an empirical average. It is further shown that the solution to the approximated problem is statistically consistent under the assumption of relative degrees. We then study the case where exact observations of the initial states are not available, yet the observation noises are known to be white Gaussian and the distribution of the initial state is also Gaussian (with unknown mean and covariance). An EM algorithm is used to estimate the parameters in the objective function. The effectiveness of our results is demonstrated by numerical examples.
37

Granneman, Sander, Madhusudan R. Nandineni, and Susan J. Baserga. "The Putative NTPase Fap7 Mediates Cytoplasmic 20S Pre-rRNA Processing through a Direct Interaction with Rps14." Molecular and Cellular Biology 25, no. 23 (2005): 10352–64. http://dx.doi.org/10.1128/mcb.25.23.10352-10364.2005.

Abstract:
ABSTRACT One of the proteins identified as being involved in ribosome biogenesis by high-throughput studies, a putative P-loop-type kinase termed Fap7 (YDL166c), was shown to be required for the conversion of 20S pre-rRNA to 18S rRNA. However, the mechanism underlying this function has remained unclear. Here we demonstrate that Fap7 is strictly required for cleavage of the 20S pre-rRNA at site D in the cytoplasm. Genetic depletion of Fap7 causes accumulation of only the 20S pre-rRNA, which could be detected not only in 43S preribosomes but also in 80S-sized complexes. Fap7 is not a structural component of 43S preribosomes but likely transiently interacts with them by directly binding to Rps14, a ribosomal protein that is found near the 3′ end of the 18S rRNA. Consistent with an NTPase activity, conserved residues predicted to be required for nucleoside triphosphate (NTP) hydrolysis are essential for Fap7 function in vivo. We propose that Fap7 mediates cleavage of the 20S pre-rRNA at site D by directly interacting with Rps14 and speculate that it is an enzyme that functions as an NTP-dependent molecular switch in 18S rRNA maturation.
APA, Harvard, Vancouver, ISO, and other styles
38

Rogers, W. Erick, Alexander V. Babanin, and David W. Wang. "Observation-Consistent Input and Whitecapping Dissipation in a Model for Wind-Generated Surface Waves: Description and Simple Calculations." Journal of Atmospheric and Oceanic Technology 29, no. 9 (2012): 1329–46. http://dx.doi.org/10.1175/jtech-d-11-00092.1.

Full text
Abstract:
A new wind-input term and a new wave-breaking dissipation term for phase-averaged spectral models of wind-generated surface waves are presented. Both are based on recent field observations in Lake George, New South Wales, Australia, at moderate-to-strong wind-wave conditions. The respective parameterizations are built on quantitative measurements and incorporate new observed physical features, which until very recently were missing in source terms employed in operational models. Two novel features of the wind-input source function are those that account for the effects of full airflow separation (and therefore relative reduction of the input at strong wind forcing) and for nonlinear behavior of this term. The breaking term also incorporates two new features evident from observational studies: the dissipation consists of two parts (a strictly local dissipation term and a cumulative term), and there is a threshold for wave breaking, below which no breaking occurs. Four variants of the dissipation term are selected for evaluation, with minimal calibration to each. These four models are evaluated using simple calculations herein. Results are generally favorable. Evaluation for more complex situations will be addressed in a forthcoming paper.
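Schematically, the two-part breaking term described here can be summarized as follows; the coefficients and exact functional dependencies stand in for the calibrated expressions of the underlying parameterizations and are not quoted from the paper.

\Delta(f) = \max\{0,\; F(f) - F_T(f)\}, \qquad S_{ds}(f) \;\propto\; -\Big[\, a_1\, f\, \Delta(f) \;+\; a_2 \int_{0}^{f} \Delta(f')\, \mathrm{d}f' \,\Big],

where F_T(f) is the spectral threshold below which no breaking occurs, the first (strictly local) term acts at each frequency exceeding the threshold, and the second (cumulative) term represents dissipation at frequency f induced by the breaking of longer waves.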
APA, Harvard, Vancouver, ISO, and other styles
39

Dombi, József, Ana Vranković Lacković, and Jonatan Lerga. "A New Insight into Entropy Based on the Fuzzy Operators, Applied to Useful Information Extraction from Noisy Time-Frequency Distributions." Mathematics 11, no. 3 (2023): 505. http://dx.doi.org/10.3390/math11030505.

Full text
Abstract:
In this paper, we study the connections between generalized mean operators and entropies, where the mean value operators are related to the strictly monotone logical operators of fuzzy theory. Here, we propose a new entropy measure based on the family of generalized Dombi operators. Namely, this measure is obtained by using the Dombi operator as a generator function in the general solution of the bisymmetric functional equation. We show how the proposed entropy can be used in a fuzzy system where the performance is consistent in choosing the best alternative in the Multiple Attribute Decision-Making Problem. This newly defined entropy was also applied to the problem of extracting useful information from time-frequency representations of noisy, nonstationary, and multicomponent signals. The denoising results were compared to Shannon and Rényi entropies. The proposed entropy measure is shown to significantly outperform the competing ones in terms of denoising classification accuracy and the F1-score due to its sensitivity to small changes in the probability distribution.
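As a reading aid (not code from the paper), the generalized-mean construction behind such entropies can be sketched as a quasi-arithmetic mean of the information content, with Shannon and Rényi entropies recovered by particular generator functions; the Dombi-operator generator used by the authors is not reproduced here.

import math

def quasi_arithmetic_entropy(p, f, f_inv):
    """f_inv of the probability-weighted mean of f applied to the information -log(p_i)."""
    return f_inv(sum(pi * f(-math.log(pi)) for pi in p if pi > 0))

def shannon(p):
    # the identity generator recovers Shannon entropy
    return quasi_arithmetic_entropy(p, lambda x: x, lambda y: y)

def renyi(p, alpha):
    # an exponential generator recovers the Renyi entropy of order alpha (alpha != 1)
    f = lambda x: math.exp((1.0 - alpha) * x)
    f_inv = lambda y: math.log(y) / (1.0 - alpha)
    return quasi_arithmetic_entropy(p, f, f_inv)

p = [0.7, 0.2, 0.1]   # e.g. a normalized slice of a time-frequency distribution
print(shannon(p), renyi(p, 2.0))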
APA, Harvard, Vancouver, ISO, and other styles
40

Fang, Rui, Bo Zhang, Hao Wang, Weihan Chen, and Xiaoyin Wang. "Optimized Design of Scoring Criteria for Large-Scale Innovation Competitions." Academic Journal of Science and Technology 9, no. 1 (2024): 50–58. http://dx.doi.org/10.54097/h3w23z80.

Full text
Abstract:
A reasonable and reliable evaluation-score calculation scheme is currently a top priority in large-scale innovation competitions. Because of the disparity among individual experts, the existing standard-score calculation model, which is based only on each expert's own scoring behaviour, cannot fully reflect the comprehensive level of participants, introducing a certain degree of error into the evaluation results. From the perspectives of individual and group decision-making, this paper improves existing models and introduces the concept of modified scores. To verify the feasibility of the model, four schemes were designed based on an analysis of the data distribution, and consistency and difference factors were applied for comparison. The conclusion is that the scheme adopting the new standard-score calculation model ranked top, indicating that the new calculation model is more reliable. Next, because the correction factor is volatile and data-dependent, some poor actual results may receive high correction scores. To rectify this, a power exponent is added to the correction factor and a sign function is adopted to determine whether correction scores are positive or negative. To verify the feasibility of this model, four sets of controlled experiments were designed, introducing two factors, the power exponent and a reconsideration bonus, together with a ranking consistency test based on the condition that the award order agreed by the experts was consistent. In the end, the scheme that introduced both the power exponent and reconsideration bonus points achieved a ranking consistency rate of 74%, an increase of 30% over the original model, indicating that the modification is reasonable. The expert evaluation standard-score calculation model established in this paper comprehensively considers individual and group decision-making and provides a reasonable correction for differential scoring between experts. At the same time, the scheme provides a reference for further optimizing large-scale innovation competition plans in the future.
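A minimal sketch of the kind of correction described, a standard score adjusted by a sign-preserving, power-damped correction factor plus an optional reconsideration bonus, is given below; the form of the correction factor and all numbers are hypothetical, since the paper's exact formulas are not reproduced in the abstract.

import math
import statistics

def corrected_standard_score(raw, expert_scores, correction, p=0.5, bonus=0.0):
    """Standard score plus a sign-preserving, power-damped correction and an optional bonus."""
    z = (raw - statistics.mean(expert_scores)) / statistics.pstdev(expert_scores)
    damped = math.copysign(abs(correction) ** p, correction)  # sign function keeps the direction
    return z + damped + bonus

# Hypothetical expert whose raw scores range from 60 to 90 and who gave this entry 85:
print(corrected_standard_score(85, [60, 70, 75, 80, 85, 90], correction=0.36, p=0.5, bonus=1.0))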
APA, Harvard, Vancouver, ISO, and other styles
41

Bhattacharya, Debswapna. "refineD: improved protein structure refinement using machine learning based restrained relaxation." Bioinformatics 35, no. 18 (2019): 3320–28. http://dx.doi.org/10.1093/bioinformatics/btz101.

Full text
Abstract:
Motivation: Protein structure refinement aims to bring moderately accurate template-based protein models closer to the native state through conformational sampling. However, guiding the sampling towards the native state by effectively using restraints remains a major issue in structure refinement. Results: Here, we develop a machine learning based restrained relaxation protocol that uses deep discriminative learning based binary classifiers to predict multi-resolution probabilistic restraints from the starting structure and subsequently converts these restraints into additional scoring terms integrated into the Rosetta all-atom energy function during structure refinement. We use four restraint resolutions as adopted in GDT-HA (0.5, 1, 2 and 4 Å), centered on the Cα atom of each residue, which are predicted by an ensemble of four deep discriminative classifiers trained using combinations of sequence and structure-derived features as well as several energy terms from the Rosetta centroid scoring function. The proposed method, refineD, has been found to produce consistent and substantial structural refinement through the use of cumulative and non-cumulative restraints on 150 benchmarking targets. refineD outperforms the unrestrained relaxation strategy, relaxation restrained to the starting structure using the FastRelax application of Rosetta, the atomic-level energy minimization based ModRefiner method, and the molecular dynamics (MD) simulation based FG-MD protocol. Furthermore, by adjusting restraint resolutions, the method addresses the tradeoff that exists between degree and consistency of refinement. These results demonstrate a promising new avenue for improving the accuracy of template-based protein models by effectively guiding conformational sampling during structure refinement through the use of machine learning based restraints. Availability and implementation: http://watson.cse.eng.auburn.edu/refineD/. Supplementary information: Supplementary data are available at Bioinformatics online.
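The core idea, converting per-residue classifier probabilities at the four GDT-HA thresholds into restraints that bias relaxation toward the starting Cα positions, can be sketched as below. The flat-bottom quadratic form, weights and names are illustrative assumptions; refineD itself injects its restraints into the Rosetta all-atom energy function rather than using this standalone penalty.

RESOLUTIONS = (0.5, 1.0, 2.0, 4.0)  # restraint radii in angstroms, as adopted in GDT-HA

def restraint_penalty(displacement, probs, weight=1.0):
    """Sum of flat-bottom quadratic penalties for one residue's C-alpha atom.

    displacement : current distance (angstroms) of the C-alpha from its starting position
    probs        : classifier probabilities that the residue should stay within
                   0.5, 1, 2 and 4 angstroms of the start (one per resolution)
    """
    penalty = 0.0
    for radius, p in zip(RESOLUTIONS, probs):
        excess = max(0.0, displacement - radius)   # zero inside the flat bottom
        penalty += weight * p * excess ** 2        # stronger pull when the classifier is confident
    return penalty

# Hypothetical residue predicted to stay close to its starting coordinates:
print(restraint_penalty(1.7, probs=(0.9, 0.8, 0.6, 0.3)))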
APA, Harvard, Vancouver, ISO, and other styles
42

Barrett, LE, SJ Cano, JP Zajicek, and JC Hobart. "Can the ABILHAND handle manual ability in MS?" Multiple Sclerosis Journal 19, no. 6 (2012): 806–15. http://dx.doi.org/10.1177/1352458512462919.

Full text
Abstract:
Background: Hand dysfunction is common in multiple sclerosis (MS). Recent interest has focused on incorporating patient-reported outcome (PRO) instruments into clinical trials. Nevertheless, examinations of existing manual ability measures in MS are rare. Objectives: The objective of this paper is to evaluate the 23-item ABILHAND, developed for use after stroke, in people with MS, comparing the findings from two psychometric approaches. Methods: We analysed ABILHAND data from 300 people with MS using: 1) traditional psychometric methods (data completeness, scaling assumptions, reliability, internal and external construct validity); and 2) Rasch measurement methods (including targeting, item response category ordering, data fit to the Rasch model, spread of item locations, item scoring bias, item stability, reliability, person response validity). Results: Traditional psychometric methods implied ABILHAND was reliable and valid in this sample. Rasch measurement methods supported this finding. The three-category scoring function worked as intended and item fit to Rasch model expectations was acceptable. The 23 items (location range −3.16 to +2.73 logits) mapped a continuum of manual ability. Reliability was high (Person Separation Index (PSI) = 0.95). Conclusion: Both psychometric evaluations supported ABILHAND as a robust manual ability PRO measure for MS. Rasch measurement methods were more informative and, consistent with their role of detecting anomalies, identified ways of further advancing ABILHAND's measurement performance to reduce any potential for type II errors in clinical trials.
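For reference, the polytomous Rasch (rating scale) model underlying a three-category item has the standard textbook form below, with person ability \theta_n, item difficulty \delta_i and category thresholds \tau_k; this is a general formulation, not an equation quoted from the paper.

P(X_{ni} = x) = \frac{\exp \sum_{k=0}^{x} (\theta_n - \delta_i - \tau_k)}{\sum_{j=0}^{m} \exp \sum_{k=0}^{j} (\theta_n - \delta_i - \tau_k)}, \qquad \tau_0 \equiv 0, \quad x = 0, 1, \dots, m,

with m = 2 for a three-category scoring function.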
APA, Harvard, Vancouver, ISO, and other styles
43

Ray, Shannon, Paul M. Alsing, Carlo Cafaro, and H. S. Jacinto. "A Differential-Geometric Approach to Quantum Ignorance Consistent with Entropic Properties of Statistical Mechanics." Entropy 25, no. 5 (2023): 788. http://dx.doi.org/10.3390/e25050788.

Full text
Abstract:
In this paper, we construct the metric tensor and volume for the manifold of purifications associated with an arbitrary reduced density operator ρS. We also define a quantum coarse-graining (CG) to study the volume where macrostates are the manifolds of purifications, which we call surfaces of ignorance (SOI), and microstates are the purifications of ρS. In this context, the volume functions as a multiplicity of the macrostates that quantifies the amount of information missing from ρS. Using examples where the SOI are generated using representations of SU(2), SO(3), and SO(N), we show two features of the CG: (1) A system beginning in an atypical macrostate of smaller volume evolves to macrostates of greater volume until it reaches the equilibrium macrostate in a process in which the system and environment become strictly more entangled, and (2) the equilibrium macrostate takes up the vast majority of the coarse-grained space especially as the dimension of the total system becomes large. Here, the equilibrium macrostate corresponds to a maximum entanglement between the system and the environment. To demonstrate feature (1) for the examples considered, we show that the volume behaves like the von Neumann entropy in that it is zero for pure states, maximal for maximally mixed states, and is a concave function with respect to the purity of ρS. These two features are essential to typicality arguments regarding thermalization and Boltzmann’s original CG.
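The entropic behaviour invoked here, zero for pure states and maximal for the maximally mixed state, is easy to verify numerically for the von Neumann entropy; a small self-contained check (not taken from the paper) follows.

import numpy as np

def von_neumann_entropy(rho):
    """S(rho) = -Tr(rho log rho), computed from the eigenvalues of the density matrix."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]          # discard numerically zero eigenvalues
    return float(-np.sum(evals * np.log(evals)))

pure = np.array([[1.0, 0.0], [0.0, 0.0]])   # pure qubit state
mixed = np.eye(2) / 2.0                     # maximally mixed qubit state
print(von_neumann_entropy(pure))            # ~0.0
print(von_neumann_entropy(mixed))           # ~log 2, approximately 0.693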
APA, Harvard, Vancouver, ISO, and other styles
44

Bonat, Wagner H., Ricardo R. Petterle, John Hinde, and Clarice GB Demétrio. "Flexible quasi-beta regression models for continuous bounded data." Statistical Modelling 19, no. 6 (2018): 617–33. http://dx.doi.org/10.1177/1471082x18790847.

Full text
Abstract:
We propose a flexible class of regression models for continuous bounded data based on second-moment assumptions. The mean structure is modelled by means of a link function and a linear predictor, while the mean and variance relationship has the form [Formula: see text], where [Formula: see text], [Formula: see text] and [Formula: see text] are the mean, dispersion and power parameters respectively. The models are fitted by using an estimating function approach where the quasi-score and Pearson estimating functions are employed for the estimation of the regression and dispersion parameters respectively. The flexible quasi-beta regression model can automatically adapt to the underlying bounded data distribution by the estimation of the power parameter. Furthermore, the model can easily handle data with exact zeroes and ones in a unified way and has the Bernoulli mean and variance relationship as a limiting case. The computational implementation of the proposed model is fast, relying on a simple Newton scoring algorithm. Simulation studies, using datasets generated from simplex and beta regression models show that the estimating function estimators are unbiased and consistent for the regression coefficients. We illustrate the flexibility of the quasi-beta regression model to deal with bounded data with two examples. We provide an R implementation and the datasets as supplementary materials.
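Since the rendered formulas are missing from this abstract, one common way to write such a second-moment specification, offered here only as a hedged reading and not as the paper's exact notation, is

g(\mu_i) = x_i^{\top} \beta, \qquad \operatorname{Var}(Y_i) = \phi \, \{ \mu_i (1 - \mu_i) \}^{p},

where \mu_i, \phi and p are the mean, dispersion and power parameters; setting \phi = 1 and p = 1 recovers the Bernoulli mean and variance relationship mentioned as the limiting case.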
APA, Harvard, Vancouver, ISO, and other styles
45

Piasecki, Krzysztof, and Ewa Roszkowska. "On Application of Ordered Fuzzy Numbers in Ranking Linguistically Evaluated Negotiation Offers." Advances in Fuzzy Systems 2018 (November 1, 2018): 1–12. http://dx.doi.org/10.1155/2018/1569860.

Full text
Abstract:
The main purpose of this paper is to investigate the application potential of ordered fuzzy numbers (OFN) to support the evaluation of negotiation offers. The Simple Additive Weighting (SAW) and the Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS) methods are extended to the case where linguistic evaluations are represented by OFNs. We study the applicability of OFNs for the linguistic evaluation of negotiation options and also provide the theoretical foundations of SAW and TOPSIS for constructing a scoring function for negotiation offers. We show that the proposed framework allows us to represent the negotiation information in a more direct and adequate way, especially in ill-structured negotiation problems, allows for holistic evaluation of negotiation offers, and produces consistent rankings, even when new packages are added or removed. An example is presented in order to demonstrate the usefulness of the presented fuzzy numerical approach in the evaluation of negotiation offers.
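As a reminder of the crisp baseline that the OFN extension generalizes, the Simple Additive Weighting score of an offer is a weighted sum of normalized criterion ratings; a minimal sketch with made-up weights and ratings follows.

def saw_score(ratings, weights):
    """Simple Additive Weighting: weighted sum of normalized criterion ratings in [0, 1]."""
    assert abs(sum(weights) - 1.0) < 1e-9, "criterion weights should sum to one"
    return sum(w * r for w, r in zip(weights, ratings))

offers = {"package_A": [0.8, 0.4, 0.9], "package_B": [0.6, 0.7, 0.5]}  # hypothetical offers
weights = [0.5, 0.3, 0.2]                                              # hypothetical weights
ranking = sorted(offers, key=lambda name: saw_score(offers[name], weights), reverse=True)
print(ranking)  # ['package_A', 'package_B'] for these numbers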
APA, Harvard, Vancouver, ISO, and other styles
46

Shulga, Dmitry A., Nikita N. Ivanov, and Vladimir A. Palyulin. "In Silico Structure-Based Approach for Group Efficiency Estimation in Fragment-Based Drug Design Using Evaluation of Fragment Contributions." Molecules 27, no. 6 (2022): 1985. http://dx.doi.org/10.3390/molecules27061985.

Full text
Abstract:
The notion of the contribution of a specific group to an organic molecule's property and/or activity is common in our thinking, yet it is not strictly correct due to the inherent non-additivity of free energy with respect to the molecular fragments composing a molecule. The fragment-based drug discovery (FBDD) approach has proven fruitful in addressing this notion. The main difficulty of FBDD, however, lies in its reliance on low-throughput and expensive experimental means of determining the binding of fragment-sized molecules. In this article we propose a way to enhance the throughput and availability of FBDD methods by judiciously using an in silico means of assessing the contribution of fragments of the molecule in question to the ligand-receptor binding energy, using a previously developed in silico Reverse Fragment Based Drug Discovery (R-FBDD) approach. It is shown that the proposed structure-based drug discovery (SBDD) type of approach fills a vacant niche among the existing in silico approaches, which mainly stem from ligand-based drug discovery (LBDD) counterparts. In order to illustrate the applicability of the approach, our work retrospectively repeats the findings of an FBDD hit-to-lead project devoted to the experimentally based determination of additive group efficiency (GE), an analog of ligand efficiency (LE) for a group in the molecule, using the Free-Wilson (FW) decomposition. It is shown that by using our in silico approach to evaluate the fragment contributions of a ligand and to estimate GE, one can arrive at decisions similar to those made using the experimentally determined activity-based FW decomposition. It is also shown that the approach is rather robust to the choice of scoring function, provided the latter demonstrates decent scoring power. We argue that the proposed approach to in silico assessment of GE has a wider applicability domain and expect that it will be widely applicable to enhance the net throughput of drug discovery based on the FBDD paradigm.
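For orientation, ligand efficiency and its group-level analogue are commonly defined per heavy atom; the schematic definitions below follow common usage and are not quoted from the paper.

\mathrm{LE} = \frac{-\Delta G_{\mathrm{bind}}}{N_{\mathrm{heavy}}}, \qquad \mathrm{GE} = \frac{-\Delta\Delta G_{\mathrm{bind}}}{\Delta N_{\mathrm{heavy}}},

where \Delta\Delta G_{\mathrm{bind}} is the change in binding free energy on adding the group and \Delta N_{\mathrm{heavy}} is the number of heavy atoms the group contributes.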
APA, Harvard, Vancouver, ISO, and other styles
47

Kukadia, Saanchi, Oliver B. Hansen, Stephanie K. Eble, et al. "Correlation of the Modified Magnetic Resonance Observation of Cartilage Repair Tissue (MOCART) Score with Patient-Reported Outcome Scores for Postoperative Assessment of Ankle Osteochondral Lesions." Foot & Ankle Orthopaedics 7, no. 4 (2022): 2473011421S0073. http://dx.doi.org/10.1177/2473011421s00733.

Full text
Abstract:
Category: Ankle; Arthroscopy; Trauma Introduction/Purpose: The MOCART scoring system is commonly used in both the knee and ankle literature to quantitatively assess cartilage repairs on MRI. For both the knee and ankle, MOCART scores have demonstrated little ability to correlate with clinical outcomes such as survey scores. The system also suffers from issues with repeatability and reproducibility of individual scores. This study seeks to analyze the correlation between MOCART scores and PROMIS scores obtained from the same time period from patients undergoing surgical management of an osteochondral lesion of the talus. It also seeks to determine MOCART's intra-rater reliability by analyzing multiple independent scoring attempts by the same radiologist. We hypothesized that MOCART scores would correlate with PROMIS outcomes and be repeatable for a given rater. Methods: Patients treated for an osteochondral lesion of the talus by a single surgeon in our department were screened for the existence of preoperative and postoperative MRI and survey scores completed within five months of one another. Each MRI was scored using the MOCART system by one radiologist fellowship-trained in musculoskeletal radiology on two separate occasions, with at least one week between scoring attempts. Each MOCART category and the overall score were compared with each PROMIS category. We also compared the presence of cysts and edema, as noted by the raters, with each PROMIS category. Results: MOCART scores were found to be repeatable between scoring attempts for individual categories and especially for the overall score. Preoperative MOCART scores correlated positively with preoperative PROMIS scores for the Physical Function (r = 0.0173), Pain Interference (r = 0.1093), and Depression (r = 0.0812) domains. Postoperative MOCART scores correlated positively with postoperative PROMIS scores for the Physical Function (r = 0.1639) and Global Physical Health (r = 0.2152) domains. Postoperative MOCART scores did not correlate positively with change in PROMIS scores, nor did change in MOCART scores correlate positively with change in PROMIS scores, as we had expected. One significant correlation existed between postoperative MOCART score and pre- to postoperative change in PROMIS Global Mental Health, but the correlation was negative (r = -0.527; p = 0.044). The presence of cysts and edema likewise did not demonstrate any consistent pattern. Conclusion: While the MOCART score may be repeatable for a given reader, it faces significant issues with correlation to PROMIS outcomes. This has been noted before for certain surgical techniques and other outcomes measurements. We find that this pattern holds true more broadly when looking at a range of methods for the treatment of osteochondral lesions. While quantitative evaluation of MRI is important for better understanding cartilage repair techniques, problems with the MOCART system should be acknowledged and solutions considered.
APA, Harvard, Vancouver, ISO, and other styles
48

Macdonald, K. I., A. Gipsman, A. Magit, et al. "Endoscopic sinus surgery in patients with cystic fibrosis: a systematic review and meta-analysis of pulmonary function." Rhinology journal 50, no. 4 (2012): 360–69. http://dx.doi.org/10.4193/rhino.11.271.

Full text
Abstract:
Introduction: The role of endoscopic sinus surgery (ESS) in patients with cystic fibrosis (CF) is not clearly defined. Objective: To perform a systematic review of subjective and objective outcomes of ESS in CF. Methods: A systematic review was performed using the keywords 'sinusitis,' 'sinus surgery,' 'nasal polyps' and 'cystic fibrosis.' The quality of papers was assessed using the NICE scoring scale. Outcomes included safety, subjective symptoms, objective endoscopy scores, days spent in hospital, courses of antibiotics, and pulmonary function tests (PFTs). Results: Nineteen studies involving 586 patients were included in the review. There were four prospective cohort trials, and three were rated as good quality. There were no major complications attributable to ESS. There was consistent evidence in four cohort studies of improved sinonasal symptoms, including nasal obstruction, facial pain, headaches, rhinorrhea and olfaction. Three studies reported conflicting results in post-operative endoscopy scores. Three studies showed a decrease in days spent in hospital, and two showed a significant decrease in courses of intravenous antibiotics. A recent study, however, did not show a difference in either days spent in hospital or courses of antibiotics. Pulmonary function tests were not improved by ESS in six cohort trials, and one small study found significant improvement. A meta-analysis of FEV1 scores confirmed no significant difference. Conclusion: The most consistent findings of this review were that ESS in patients with CF is safe, produces symptomatic benefit, and does not consistently improve PFTs. There were more conflicting results with regards to endoscopy scores, days spent in hospital, and courses of intravenous antibiotics. Future prospective studies, utilizing validated quality of life, symptom and endoscopy scales, are needed to further elucidate the role of ESS in the management of chronic rhinosinusitis in CF patients.
APA, Harvard, Vancouver, ISO, and other styles
50

Zhang, Yumo. "Dynamic Optimal Mean-Variance Investment with Mispricing in the Family of 4/2 Stochastic Volatility Models." Mathematics 9, no. 18 (2021): 2293. http://dx.doi.org/10.3390/math9182293.

Full text
Abstract:
This paper considers an optimal investment problem with mispricing in the family of 4/2 stochastic volatility models under the mean–variance criterion. The financial market consists of a risk-free asset, a market index and a pair of mispriced stocks. By applying linear–quadratic stochastic control theory and solving the corresponding Hamilton–Jacobi–Bellman equation, explicit expressions for the statically optimal (pre-commitment) strategy and the corresponding optimal value function are derived. Moreover, a necessary verification theorem is provided based on an assumption on the model parameters and the investment horizon. Due to the time inconsistency under the mean–variance criterion, we give a dynamic formulation of the problem and obtain the closed-form expression of the dynamically optimal (time-consistent) strategy. This strategy is shown to keep the wealth process strictly below the target (expected terminal wealth) before the terminal time. Results on the special case without mispricing are included. Finally, some numerical examples are given to illustrate the effects of model parameters on the efficient frontier and the difference between static and dynamic optimality.
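For context, the mean–variance criterion whose time inconsistency drives the distinction between the static (pre-commitment) and dynamic (time-consistent) solutions can be written generically as

\max_{\pi} \; \mathbb{E}\big[X_T^{\pi}\big] - \frac{\gamma}{2} \operatorname{Var}\big[X_T^{\pi}\big],

where X_T^{\pi} is terminal wealth under strategy \pi and \gamma > 0 is a risk-aversion parameter; because the variance term is not an iterated conditional expectation, the Bellman optimality principle does not apply directly, which is why the statically optimal and dynamically optimal strategies differ. The paper's specific parameterization within the 4/2 model is not reproduced here.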
APA, Harvard, Vancouver, ISO, and other styles