A selection of scholarly literature on the topic "Estimated misclassification costs (EMC)"


Consult the lists of current articles, books, dissertations, conference abstracts and other scholarly sources on the topic "Estimated misclassification costs (EMC)".


Journal articles on the topic "Estimated misclassification costs (EMC)":

1

Godolphin, Peter J., Philip M. Bath, Ale Algra, Eivind Berge, John Chalmers, Misha Eliasziw, Graeme J. Hankey, et al. "Cost-benefit of outcome adjudication in nine randomised stroke trials." Clinical Trials 17, no. 5 (July 10, 2020): 576–80. http://dx.doi.org/10.1177/1740774520939231.

Abstract:
Background Central adjudication of outcomes is common for randomised trials and should control for differential misclassification. However, few studies have estimated the cost of the adjudication process. Methods We estimated the cost of adjudicating the primary outcome in nine randomised stroke trials (25,436 participants). The costs included adjudicators’ time, direct payments to adjudicators, and co-ordinating centre costs (e.g. uploading cranial scans and general set-up costs). The number of events corrected after adjudication was our measure of benefit. We calculated cost per corrected event for each trial and in total. Results The primary outcome in all nine trials was either stroke or a composite that included stroke. In total, the adjudication process associated with this primary outcome cost in excess of £100,000 for a third of the trials (3/9). Mean cost per event corrected by adjudication was £2295.10 (SD: £1482.42). Conclusions Central adjudication is a time-consuming and potentially costly process. These costs need to be considered when designing a trial and should be evaluated alongside the potential benefits adjudication brings to determine whether they outweigh this expense.
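The trials' measure of benefit reduces to a simple ratio of adjudication cost to events corrected. A minimal sketch in Python, using purely hypothetical figures rather than the trial data:

```python
def cost_per_corrected_event(total_adjudication_cost, events_corrected):
    """Mean cost per event corrected by central adjudication: the total
    cost of the adjudication process divided by the number of
    primary-outcome events that adjudication changed."""
    if events_corrected == 0:
        raise ValueError("no events corrected: cost per event is undefined")
    return total_adjudication_cost / events_corrected

# Hypothetical example: 100,000 GBP of adjudication costs (adjudicator
# time, direct payments, co-ordinating centre costs) correcting 40 events.
print(cost_per_corrected_event(100_000, 40))  # 2500.0 GBP per corrected event
```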
2

McKee, Thomas E. "A Meta-Learning Approach to Predicting Financial Statement Fraud." Journal of Emerging Technologies in Accounting 6, no. 1 (January 1, 2009): 5–26. http://dx.doi.org/10.2308/jeta.2009.6.1.5.

Abstract:
ABSTRACT: An “ultimate learning algorithm” is one that produces models that closely match the real world’s underlying distribution of functions. To try to create such an algorithm, researchers typically employ manual algorithm design with cross-validation. It has been shown that cross-validation is not a viable way to construct an ultimate learning algorithm. For machine learning researchers, “meta-learning” should be more desirable than manual algorithm design with cross-validation. Meta-learning is concerned with gaining knowledge about learning methodologies. One meta-learning approach involves evaluating the suitability of various algorithms for a learning task in order to select an appropriate algorithm. An alternative approach is to incorporate predictions from base algorithms as features to be evaluated by subsequent algorithms. This paper reports on exploratory research that implemented the latter approach as a three-layer stacked generalization model using neural networks, logistic regression, and classification tree algorithms to predict all categories of financial fraud. The purpose was to see if this form of meta-learning offered significant benefits for financial fraud prediction. Fifteen possible financial fraud predictors were identified based on a theoretical fraud model from prior research. Only public data for these possible predictors were obtained from U.S. Securities and Exchange Commission filings from the period 1995–2002 for a sample of 50 fraud and 50 non-fraud companies. These data were selected for the year prior to when the fraud was initiated. These variables were used to create a variety of neural network, logistic regression, and classification tree models while using holdout sample and cross-validation techniques. A 71.4 percent accurate neural network model was then stacked into a logistic regression model, increasing the prediction accuracy to 76.5 percent. 
The logistic regression model was subsequently stacked into a classification tree model to achieve an 83 percent accuracy rate. These results compared favorably to two prior neural network studies, also employing only public data, which achieved 63 percent accuracy rates. Model results were also analyzed via probability-adjusted overall error rates, relative misclassification costs, and receiver operating characteristics. The increase in classification accuracy from 71 percent to 83 percent, the decline in estimated overall error rate from 0.0057 to 0.0035, and the decline in relative misclassification costs from 2.79 to 0.58 suggest that benefits were achieved by the meta-learning stacking approach. Further research into the meta-learning stacking approach appears warranted.
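The relative misclassification cost figures above can be read as expected misclassification costs (EMC): error rates weighted by class priors and by the cost of each error type. A generic sketch of that calculation, with the fraud prior and cost ratio chosen purely for illustration (they are not the paper's parameters):

```python
def expected_misclassification_cost(fn_rate, fp_rate, prior_fraud,
                                    cost_fn, cost_fp):
    """EMC = P(fraud) * P(miss | fraud) * cost of a miss
           + P(non-fraud) * P(false alarm | non-fraud) * cost of a false alarm."""
    return (prior_fraud * fn_rate * cost_fn
            + (1.0 - prior_fraud) * fp_rate * cost_fp)

# Illustrative only: a missed fraud assumed 10x as costly as a false
# alarm, with a 1% prior probability of fraud.
single_model = expected_misclassification_cost(0.30, 0.10, 0.01, 10.0, 1.0)
stacked_model = expected_misclassification_cost(0.15, 0.08, 0.01, 10.0, 1.0)
print(stacked_model < single_model)  # True: lower error rates give lower EMC
```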
3

Crespo, Carlos, Markus Linhart, Juan Acosta, David Soto-Iglesias, Mikel Martínez, Beatriz Jáuregui, Áurea Mira, et al. "Optimisation of cardiac resynchronisation therapy device selection guided by cardiac magnetic resonance imaging: Cost-effectiveness analysis." European Journal of Preventive Cardiology 27, no. 6 (September 5, 2019): 622–32. http://dx.doi.org/10.1177/2047487319873149.

Abstract:
Background A recent study showed that the presence and characteristics of myocardial scar could independently predict appropriate implantable cardioverter-defibrillator therapies and the risk of sudden cardiac death in patients receiving a de novo cardiac resynchronisation device. Design The aim was to evaluate the cost-effectiveness of cardiac magnetic resonance imaging-based algorithms versus clinical practice in the decision-making process for the implantation of a cardiac resynchronisation device pacemaker versus cardiac resynchronisation device implantable cardioverter-defibrillator device in heart failure patients with indication for cardiac resynchronisation therapy. Methods An incidental Markov model was developed to simulate the lifetime progression of a heart failure patient cohort. Key health variables included in the model were New York Heart Association functional class, hospitalisations, sudden cardiac death and total mortality. The analysis was done from the healthcare system perspective. Costs (€2017), survival and quality-adjusted life years were assessed. Results At 5-year follow-up, algorithm I reduced mortality by 39% in patients with a cardiac resynchronisation device pacemaker who were underprotected due to misclassification by clinical protocol. This approach had the highest quality-adjusted life years (algorithm I 3.257 quality-adjusted life years; algorithm II 3.196 quality-adjusted life years; clinical protocol 3.167 quality-adjusted life years) and the lowest lifetime costs per patient (€20,960, €22,319 and €28,447, respectively). Algorithm I would improve results for three subgroups: non-ischaemic, New York Heart Association class III–IV and ≥65 years old. Furthermore, implementing this approach could generate an estimated €702 million in health system savings annually in European Society of Cardiology countries. 
Conclusion The application of cardiac magnetic resonance imaging-based algorithms could improve survival and quality-adjusted life years at a lower cost than current clinical practice (dominant strategy) used for assigning cardiac resynchronisation device pacemakers and cardiac resynchronisation device implantable cardioverter-defibrillators to heart failure patients.
4

Belforte, G., B. Bona, C. Cravetto, S. Frediani, M. Milanese, M. Molino, L. Saitta, and R. Tempo. "Selection and Assessment of Laboratory Tests for the Evaluation of Liver Functional Impairment." Methods of Information in Medicine 24, no. 01 (January 1985): 39–45. http://dx.doi.org/10.1055/s-0038-1635355.

Abstract:
Summary: A large number of biochemical tests is at present available for the evaluation of the different aspects of the liver function. As a consequence, the requirement of laboratory investigations is generally increasing in clinical practice. However, the usefulness of such a tendency in liver investigation is questionable: the doubt concerns both the diagnostic efficacy of new individual tests and the opportunity of a progressive increase in tests requirement. This paper deals with the problem of selecting liver tests on the basis of their information content with respect to different aspects of liver function. The study was performed on 288 normal and pathological subjects. Two different methods were used: a classical statistical feature selection method, and a method based on the probability theory developed in the framework of the fuzzy sets theory. The ranking of the tests was based on the estimated misclassification rate. The results obtained appear quite relevant from a practical point of view: no effective improvement in liver functional assessment is reached by using more than 3 or 4 selected tests among the many available for each of the functional aspects considered. A reasonable compromise between diagnostic efficacy, feasibility, and costs indicates that the following groups of tests could be proposed for the functional assessment of liver diseases in clinical routine: Albumin, Pseudo-cholinesterase, and Prothrombin Time for biosynthesis and metabolism; Bilirubin, Gamma-glutamyl-transpeptidase, Bile Acids (fasting) for cholestasis; Aspartate Aminotransferase, Alanine Aminotransferase, Isocitrate-dehydrogenase for cytolysis; Gamma-globulin, G, A and M Immunoglobulin for reactivity.
5

Dzik-Walczak, Aneta, and Mateusz Heba. "An implementation of ensemble methods, logistic regression, and neural network for default prediction in Peer-to-Peer lending." Zbornik radova Ekonomskog fakulteta u Rijeci: časopis za ekonomsku teoriju i praksu/Proceedings of Rijeka Faculty of Economics: Journal of Economics and Business 39, no. 1 (June 30, 2021): 163–97. http://dx.doi.org/10.18045/zbefri.2021.1.163.

Abstract:
Credit scoring has become an important issue because competition among financial institutions is intense and even a small improvement in predictive accuracy can result in significant savings. Financial institutions are looking for optimal strategies using credit scoring models. Therefore, credit scoring tools are extensively studied. As a result, various parametric statistical methods, non-parametric statistical tools and soft computing approaches have been developed to improve the accuracy of credit scoring models. In this paper, different approaches are used to classify customers into those who repay the loan and those who default on a loan. The purpose of this study is to investigate the performance of two credit scoring techniques, the logistic regression model estimated on categorized variables modified with the use of WOE (Weight of Evidence) transformation, and neural networks. We also combine multiple classifiers and test whether ensemble learning has better performance. To evaluate the feasibility and effectiveness of these methods, the analysis is performed on Lending Club data. In addition, we investigate Peer-to-peer lending, also called social lending. From the results, it can be concluded that the logistic regression model can provide better performance than neural networks. The proposed ensemble model (a combination of logistic regression and neural network by averaging the probabilities obtained from both models) has higher AUC, Gini coefficient and Kolmogorov-Smirnov statistics compared to other models. Therefore, we can conclude that the ensemble model allows to successfully reduce the potential risks of losses due to misclassification costs.
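The proposed ensemble simply averages the default probabilities from the two base models and is then judged by AUC. A plain-Python sketch, assuming the standard rank-based AUC estimate (this is not the authors' code, and the toy scores are invented):

```python
def ensemble_average(probs_logit, probs_nn):
    """Combine two classifiers by averaging their predicted default
    probabilities observation by observation."""
    return [(a + b) / 2.0 for a, b in zip(probs_logit, probs_nn)]

def auc(scores, labels):
    """Rank-based AUC: the probability that a randomly chosen defaulter
    (label 1) receives a higher score than a randomly chosen
    non-defaulter (label 0), counting ties as half."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy example: four loans, two of which defaulted.
combined = ensemble_average([0.9, 0.2, 0.6, 0.4], [0.7, 0.3, 0.8, 0.2])
print(auc(combined, [1, 0, 1, 0]))  # 1.0 on this toy sample
```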
6

Chan, Jocelyn, Heather F. Gidding, Christopher C. Blyth, Parveen Fathima, Sanjay Jayasinghe, Peter B. McIntyre, Hannah C. Moore, et al. "Levels of pneumococcal conjugate vaccine coverage and indirect protection against invasive pneumococcal disease and pneumonia hospitalisations in Australia: An observational study." PLOS Medicine 18, no. 8 (August 3, 2021): e1003733. http://dx.doi.org/10.1371/journal.pmed.1003733.

Abstract:
Background There is limited empiric evidence on the coverage of pneumococcal conjugate vaccines (PCVs) required to generate substantial indirect protection. We investigate the association between population PCV coverage and indirect protection against invasive pneumococcal disease (IPD) and pneumonia hospitalisations among undervaccinated Australian children. Methods and findings Birth and vaccination records, IPD notifications, and hospitalisations were individually linked for children aged <5 years, born between 2001 and 2012 in 2 Australian states (New South Wales and Western Australia; 1.37 million children). Using Poisson regression models, we examined the association between PCV coverage, in small geographical units, and the incidence of (1) 7-valent PCV (PCV7)-type IPD; (2) all-cause pneumonia; and (3) pneumococcal and lobar pneumonia hospitalisation in undervaccinated children. Undervaccinated children received <2 doses of PCV at <12 months of age and no doses at ≥12 months of age. Potential confounding variables were selected for adjustment a priori with the assistance of a directed acyclic graph. There were strong inverse associations between PCV coverage and the incidence of PCV7-type IPD (adjusted incidence rate ratio [aIRR] 0.967, 95% confidence interval [CI] 0.958 to 0.975, p-value < 0.001), and pneumonia hospitalisations (all-cause pneumonia: aIRR 0.991 95% CI 0.990 to 0.994, p-value < 0.001) among undervaccinated children. Subgroup analyses for children <4 months old, urban, rural, and Indigenous populations showed similar trends, although effects were smaller for rural and Indigenous populations. Approximately 50% coverage of PCV7 among children <5 years of age was estimated to prevent up to 72.5% (95% CI 51.6 to 84.4) of PCV7-type IPD among undervaccinated children, while 90% coverage was estimated to prevent 95.2% (95% CI 89.4 to 97.8). 
The main limitations of this study include the potential for differential loss to follow-up, geographical misclassification of children (based on residential address at birth only), and unmeasured confounders. Conclusions In this study, we observed substantial indirect protection at lower levels of PCV coverage than previously described—challenging assumptions that high levels of PCV coverage (i.e., greater than 90%) are required. Understanding the association between PCV coverage and indirect protection is a priority since the control of vaccine-type pneumococcal disease is a prerequisite for reducing the number of PCV doses (from 3 to 2). Reduced dose schedules have the potential to substantially reduce program costs while maintaining vaccine impact.
7

Perera, Rafael, Richard Stevens, Jeffrey K. Aronson, Amitava Banerjee, Julie Evans, Benjamin G. Feakins, Susannah Fleming, et al. "Long-term monitoring in primary care for chronic kidney disease and chronic heart failure: a multi-method research programme." Programme Grants for Applied Research 9, no. 10 (August 2021): 1–218. http://dx.doi.org/10.3310/pgfar09100.

Abstract:
Background Long-term monitoring is important in chronic condition management. Despite considerable costs of monitoring, there is no or poor evidence on how, what and when to monitor. The aim of this study was to improve understanding, methods, evidence base and practice of clinical monitoring in primary care, focusing on two areas: chronic kidney disease and chronic heart failure. Objectives The research questions were as follows: does the choice of test affect better care while being affordable to the NHS? Can the number of tests used to manage individuals with early-stage kidney disease, and hence the costs, be reduced? Is it possible to monitor heart failure using a simple blood test? Can this be done using a rapid test in a general practitioner consultation? Would changes in the management of these conditions be acceptable to patients and carers? Design Various study designs were employed, including cohort, feasibility study, Clinical Practice Research Datalink analysis, seven systematic reviews, two qualitative studies, one cost-effectiveness analysis and one cost recommendation. Setting This study was set in UK primary care. Data sources Data were collected from study participants and sourced from UK general practice and hospital electronic health records, and worldwide literature. Participants The participants were NHS patients (Clinical Practice Research Datalink: 4.5 million patients), chronic kidney disease and chronic heart failure patients managed in primary care (including 750 participants in the cohort study) and primary care health professionals. Interventions The interventions were monitoring with blood and urine tests (for chronic kidney disease) and monitoring with blood tests and weight measurement (for chronic heart failure). Main outcome measures The main outcomes were the frequency, accuracy, utility, acceptability, costs and cost-effectiveness of monitoring. 
Results Chronic kidney disease: serum creatinine testing has increased steadily since 1997, with most results being normal (83% in 2013). Increases in tests of creatinine and proteinuria correspond to their introduction as indicators in the Quality and Outcomes Framework. The Chronic Kidney Disease Epidemiology Collaboration equation had 2.7% greater accuracy (95% confidence interval 1.6% to 3.8%) than the Modification of Diet in Renal Disease equation for estimating glomerular filtration rate. Estimated annual transition rates to the next chronic kidney disease stage are ≈ 2% for people with normal urine albumin, 3–5% for people with microalbuminuria (3–30 mg/mmol) and 3–12% for people with macroalbuminuria (> 30 mg/mmol). Variability in estimated glomerular filtration rate-creatinine leads to misclassification of chronic kidney disease stage in 12–15% of tests in primary care. Glycaemic-control and lipid-modifying drugs are associated with a 6% (95% confidence interval 2% to 10%) and 4% (95% confidence interval 0% to 8%) improvement in renal function, respectively. Neither estimated glomerular filtration rate-creatinine nor estimated glomerular filtration rate-Cystatin C have utility in predicting rate of kidney function change. Patients viewed phrases such as ‘kidney damage’ or ‘kidney failure’ as frightening, and the term ‘chronic’ was misinterpreted as serious. Diagnosis of asymptomatic conditions (chronic kidney disease) was difficult to understand, and primary care professionals often did not use ‘chronic kidney disease’ when managing patients at early stages. General practitioners relied on Clinical Commissioning Group or Quality and Outcomes Framework alerts rather than National Institute for Health and Care Excellence guidance for information. 
Cost-effectiveness modelling did not demonstrate a tangible benefit of monitoring kidney function to guide preventative treatments, except for individuals with an estimated glomerular filtration rate of 60–90 ml/minute/1.73 m2, aged < 70 years and without cardiovascular disease, where monitoring every 3–4 years to guide cardiovascular prevention may be cost-effective. Chronic heart failure: natriuretic peptide-guided treatment could reduce all-cause mortality by 13% and heart failure admission by 20%. Implementing natriuretic peptide-guided treatment is likely to require predefined protocols, stringent natriuretic peptide targets, relative targets and being located in a specialist heart failure setting. Remote monitoring can reduce all-cause mortality and heart failure hospitalisation, and could improve quality of life. Diagnostic accuracy of point-of-care N-terminal prohormone of B-type natriuretic peptide (sensitivity, 0.99; specificity, 0.60) was better than point-of-care B-type natriuretic peptide (sensitivity, 0.95; specificity, 0.57). Within-person variation estimates for B-type natriuretic peptide and weight were as follows: coefficient of variation, 46% and coefficient of variation, 1.2%, respectively. Point-of-care N-terminal prohormone of B-type natriuretic peptide within-person variability over 12 months was 881 pg/ml (95% confidence interval 380 to 1382 pg/ml), whereas between-person variability was 1972 pg/ml (95% confidence interval 1525 to 2791 pg/ml). For individuals, monitoring provided reassurance; future changes, such as increased testing, would be acceptable. Point-of-care testing in general practice surgeries was perceived positively, reducing waiting time and anxiety. Community heart failure nurses had greater knowledge of National Institute for Health and Care Excellence guidance than general practitioners and practice nurses. 
Health-care professionals believed that the cost of natriuretic peptide tests in routine monitoring would outweigh potential benefits. The review of cost-effectiveness studies suggests that natriuretic peptide-guided treatment is cost-effective in specialist settings, but with no evidence for its value in primary care settings. Limitations No randomised controlled trial evidence was generated. The pathways to the benefit of monitoring chronic kidney disease were unclear. Conclusions It is difficult to ascribe quantifiable benefits to monitoring chronic kidney disease, because monitoring is unlikely to change treatment, especially in chronic kidney disease stages G3 and G4. New approaches to monitoring chronic heart failure, such as point-of-care natriuretic peptide tests in general practice, show promise if high within-test variability can be overcome. Future work The following future work is recommended: improve general practitioner–patient communication of early-stage renal function decline, and identify strategies to reduce the variability of natriuretic peptide. Study registration This study is registered as PROSPERO CRD42015017501, CRD42019134922 and CRD42016046902. Funding This project was funded by the National Institute for Health Research (NIHR) Programme Grants for Applied Research programme and will be published in full in Programme Grants for Applied Research; Vol. 9, No. 10. See the NIHR Journals Library website for further project information.
8

Schilhabel, Anke, Henrik Knecht, Anton W. Langerak, Jacques J. M. van Dongen, Michael Kneba, Jill Ray, Elizabeth Punnoose, et al. "Analytical Validation of Patient-Specific PCR-Based MRD Assessment for Use As a Primary Endpoint in CLL Clinical Trials." Blood 126, no. 23 (December 3, 2015): 2924. http://dx.doi.org/10.1182/blood.v126.23.2924.2924.

Abstract:
Abstract Introduction. Minimal residual disease (MRD) is an objective measure of disease status defined by the number of leukemic cells in the blood or bone marrow of leukemic patients. In recent clinical studies of chronic lymphocytic leukemia (CLL), undetectable MRD levels (< 1 tumor cell/10,000 leukocytes) have been shown to correlate with prolonged progression free survival (PFS) and overall survival, independent of treatment or known risk factors. MRD assessment has been proposed as an alternative to PFS as a primary endpoint in frontline CLL pivotal studies to evaluate the efficacy of novel therapies at an earlier time-point. Thorough standardization and validation are needed to use MRD as a primary surrogate endpoint. Allele specific oligonucleotide (ASO)-PCR of immunoglobulin (IG) gene rearrangements is a method for quantifying MRD using patient-specific PCR primers and has been standardized by the EuroMRD Consortium (www.EuroMRD.org). Given that each patient has individualized PCR primers designed for their leukemic clone, this posed a unique challenge for the analytical validation studies to demonstrate that the assays are uniform in their reproducibility and analytical sensitivity to measure MRD across patients with CLL. Here we report a comprehensive, IVD-guided analytical validation of the ASO-PCR technique according to the guidance of regulatory authorities. We provide evidence that the ASO-PCR methodology can reproducibly measure MRD to the required threshold of 10-4, across patients with CLL. Results. Performance of ASO-PCR was assessed using a combination of retrospective data from the CLL11 clinical trial and prospectively performed experiments. Patient assays from 60 CLL patients were tested in two EuroMRD laboratories to demonstrate linearity across the measurement range of 10-1 to 10-5, and a limit of detection of 6.3x10-5, which is below the cut-off of 10-4 used for defining MRD negativity. 
Concordance of the method to an orthogonal method was determined from the previously published comparison of flow cytometry with ASO-PCR (Boettcher et al., Leukemia 2009; 23: 2007) with 93.8% overall agreement between both methods (n=452). Agreement of MRD status was >97% when comparing individually designed ASO primers for the same patient within the lab. The overall agreement between the two different laboratories using independently designed ASO-PCR assays was 93.5%. Precision was assessed above and below the threshold of 10-4 using ASO-PCR assays of 3 individual patient samples diluted to appropriate MRD levels listed in Table 1. The experiment was designed to mimic sources of variation by evaluating MRD samples over the course of the clinical study (3 days x 2 operators x 3 patients x 2 laboratories x 3 replicates). Overall variability was estimated using a mixed effects model including fixed patient effects and random effects for operator and day. Based on the known MRD distribution of frontline CLL patients, we estimate acceptable overall variability on the order of 80% CV at lower concentrations (≤ 3.2x10-4) and 40% CV at higher concentrations (> 3.2x10-4). This precision estimate provides reasonable misclassification rates (< 5%) due to the fact that the majority of patients had MRD levels either well above or below the 10-4 level. Experiments also addressed stability of patient specimens and critical assay components.

Table 1. Precision of ASO-PCR results obtained at MRD levels 10-2 to 10-5 (estimated total CV (%) averaged across 3 patients):

MRD level   Kiel     Erasmus
1.00E-02    8.70     34.79
1.00E-03    10.02    35.74
3.20E-04    15.82    33.27
1.00E-04    31.70    36.38
3.20E-05    89.89    78.80
1.00E-05    258.06   277.27

Conclusion. The analytical validation studies described here provide evidence that the ASO-PCR methodology, standardized by EuroMRD, performs well to reproducibly detect MRD status across CLL patients at the threshold of 10-4.
These studies serve as an example for the validation of personalized, patient-specific quantitative clinical assays for use as a primary endpoint in clinical trials. The authors would like to acknowledge the valuable work of the following people who contributed to this work: M. Brüggemann (UKSH Kiel), R. Raja, C. Cox, W. Darbonne, R. Desai, and K. Trunzer. Disclosures Langerak: DAKO: Patents & Royalties: Licensing of IP and Patent on Split-Signal FISH. Royalties for Dept. of Immunology, Erasmus MC, Rotterdam, NL; InVivoScribe: Patents & Royalties: Licensing of IP and Patent on BIOMED-2-based methods for PCR-based Clonality Diagnostics; Roche: Other: Lab services in the field of MRD diagnostics provided by Dept of Immunology, Erasmus MC (Rotterdam). van Dongen: BD Biosciences (cont'd): Other: Laboratory Services in the field of technical validation of EuroFlow-OneFlow antibody tubes in dried format. The Laboratory Services are provided by the Laboratory of Medical Immunology, Dept. of Immunology, Erasmus MC, Rotterdam, NL; Cytognos: Patents & Royalties: Licensing of IP on Infinicyt software, Patents on EuroFlow-based flowcytometric Diagnosis and Classification of hematological malignancies, Patents on MRD diagnostics, and Patents on PID diagnostics; Cytognos (continued): Patents & Royalties: Royalty income for EuroFlow Consortium. The Infinicyt software is provided to all EuroFlow members free-of-charge. Licensing of Patent on detection of IgE+ B-cells in allergic diseases. Royalties for Dept. of Immunology, Erasmus MC, Rotterdam, NL; DAKO: Patents & Royalties: Licensing of IP and Patent on Split-Signal FISH. Royalties for Dept. of Immunology, Erasmus MC, Rotterdam, NL; InVivoScribe: Patents & Royalties: Licensing of IP and Patent on BIOMED-2-based methods for PCR-based Clonality Diagnostics.
Royalty income for EuroClonality-BIOMED-2 Consortium; Immunostep: Patents & Royalties: Licensing of IP and Patents on immunobead-based detection of fusion proteins in acute leukemias and other tumors. Royalties for Dept. of Immunology, Erasmus MC and for EuroFlow Consortium; BD Biosciences: Other: Educational Services: Educational Lectures and Educational Workshops (+ related travelling costs). The lectures and workshops fully focus on the scientific achievements of the EuroFlow Consortium (no advertisement of products of BD Biosciences); Patents & Royalties: Licensing of IP and Patent on EuroFlow-based flowcytometric Diagnosis and Classification of hematological malignancies; Royalty income for EuroFlow Consortium; Roche: Consultancy, Other: Laboratory Services in the field of MRD diagnostics, provided by the Laboratory of Medical Immunology, Dept. of Immunology, Erasmus MC, Rotterdam, NL. Ray: Genentech, Inc.: Employment. Punnoose: Genentech, Inc.: Employment. Kim: Genentech, Inc.: Employment. Haberberger: Genentech, Inc.: Employment. Bernaards: Roche: Employment. Zhu: Genentech, Inc.: Employment. Lewin-Koh: Genentech, Inc.: Employment. Ritgen: Roche: Membership on an entity's Board of Directors or advisory committees, Research Funding.
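The total CV figures in Table 1 express assay precision as a coefficient of variation at each MRD level. The study estimated variability with a mixed-effects model (operator and day as random effects); the plain sample CV below, computed on invented replicate values, is only a simplified stand-in for that analysis:

```python
import statistics

def total_cv_percent(replicate_mrd_values):
    """Coefficient of variation in percent: sample standard deviation of
    replicate MRD measurements divided by their mean.
    (Simplified; the study's total CV came from a mixed-effects model.)"""
    mean = statistics.mean(replicate_mrd_values)
    return 100.0 * statistics.stdev(replicate_mrd_values) / mean

# Invented replicate measurements around a nominal 1e-4 MRD level:
print(round(total_cv_percent([1.0e-4, 1.3e-4, 0.8e-4]), 1))  # 24.4
```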
9

Baretta, A., M. A. Bisotti, A. Palazzin, J. Cano, J. Gomis-Tena, L. Romero, J. Saiz, and R. Bursi. "A web-based tool for the early identification and real time assessment of drug-induced proarrhythmic and torsade de pointes safety risk." European Heart Journal 41, Supplement_2 (November 1, 2020). http://dx.doi.org/10.1093/ehjci/ehaa946.3398.

Abstract:
Abstract Introduction It is well recognized that early identification of drug-induced proarrhythmic safety risks is crucial to drug development for ethical, animal sparing and costs reduction considerations. The availability, however, of easily accessible, user-friendly tools for real time assessments of the proarrhythmic potential of chemical compounds has been lacking. The novel Tx index, implemented in the presented web-based tool, was applied to a dataset of 84 compounds. Materials and methods The tool is based on 206,766 cellular simulations of compound-induced effects on Action Potential Duration (APD) in isolated endocardial, midmyocardial, and epicardial cells and on 7,072 tissue simulations on QT prolongation in a virtual tissue. Simulations were performed by blocking the slow and the fast components of the delayed rectifier current (IKs and IKr, respectively) and the L-type calcium current (ICaL) at different levels. Based on these simulations, four Tx indices were defined as the ratio of drug concentration leading to a 10% prolongation of the APDendo, APDmid, APDepi or QT over the maximum Effective Free Therapeutic Plasma Concentration (EFTPC), respectively. A dataset of 44 non-torsadogenic and 40 torsadogenic drug compounds was used to validate the performance of the tool. The workflow of the web-based tool was built on the cloud environment, in compliance with the highest standards of security and privacy. hERG test (positive response: hERG pIC50 >6) was applied to the 84 compounds to compare performances. Results Receiver operating characteristic (ROC) curves were constructed on the four estimated Tx indices for each compound in the dataset to enable the identification of torsadogenic potential cut-off values. These were identified as 8, 8, and 6.4 for Tx-APDendo, Tx-APDmid, Tx-APDepi and as 9.2 for Tx-QT, respectively.
The classification of the 84 compounds resulted in an accuracy ranging between 87% and 88% for the four Tx indices Tx-APDendo, Tx-APDmid, Tx-APDepi and Tx-QT. Discussion and conclusion hERG block exhibits poor performance. When applying the hERG test to the 84 compounds, it exhibited a TPR of 55%, a TNR of 89%, and an A of 73%, in close agreement with previous studies. In comparison, the in silico Tx tests described in this study yield TPRs of 85%, TNRs of 86–89% and As of 86–87%. This method does not include drug effects on Na+ channels, which is related to the misclassification of 3 compounds (quetiapine, ranolazine, and lamotrigine – significant Na+ channels blockers at EFTPC). Future work will include this channel. The presented web-based tool is a highly innovative method for an accurate torsadogenic risk assessment. Each assessment required only a few seconds of computational time. Illustration workflow of the web tool Funding Acknowledgement Type of funding source: None
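The abstract above describes a simple threshold scheme: a compound is flagged as potentially torsadogenic from its Tx index relative to a ROC-derived cut-off, a hERG test is positive when pIC50 > 6, and performance is summarised as TPR, TNR and accuracy. A minimal sketch of that scheme follows; the function names, the flagging direction (Tx at or below the cut-off counts as positive, since a low Tx means prolongation occurs near therapeutic exposure) and the example values are illustrative assumptions, not taken from the published tool:

```python
def tx_positive(tx_index, cutoff):
    # Assumed direction: flag a compound when its Tx index (concentration
    # causing 10% APD/QT prolongation divided by EFTPC) is at or below
    # the ROC-derived cut-off (e.g. 8 for Tx-APDendo, 9.2 for Tx-QT).
    return tx_index <= cutoff

def herg_positive(pic50):
    # hERG test as stated in the abstract: positive when pIC50 > 6.
    return pic50 > 6

def classification_metrics(predictions, labels):
    # TPR, TNR and accuracy from boolean predictions and ground-truth
    # labels (True = torsadogenic), as used to compare the Tx and hERG
    # tests on the 84-compound dataset.
    tp = sum(p and y for p, y in zip(predictions, labels))
    tn = sum((not p) and (not y) for p, y in zip(predictions, labels))
    positives = sum(labels)
    negatives = len(labels) - positives
    tpr = tp / positives
    tnr = tn / negatives
    accuracy = (tp + tn) / len(labels)
    return tpr, tnr, accuracy
```

With such helpers, the reported comparison reduces to computing `classification_metrics` once for the hERG predictions and once for each Tx index over the same 84 compounds.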

Dissertations on the topic "Estimated misclassification costs (EMC)":

1

Detterfelt, Sebastian, and Isak Björkman. "Om att bedöma formler för att formulera bedömningar : En kvantitativ studie om precisionen i revisorers fortlevnadsbedömningar och konkursprediktionsmodeller." Thesis, Linköpings universitet, Institutionen för ekonomisk och industriell utveckling, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-176943.

Full text of source
Styles: APA, Harvard, Vancouver, ISO, etc.
Abstract:
Auditors in Sweden have been found restrictive in issuing going concern opinions to subsequently bankrupt companies. There is a lack of studies examining whether bankruptcy prediction models make better predictions than auditors in Sweden when estimated misclassification costs (EMC) are considered. The purpose of this study is to compare the precision of auditors' going concern opinions with that of accounting-based bankruptcy prediction models, and to examine the relation between going concern opinions, accounting measures and bankruptcies. The study is quantitative, with a deductive approach and a comparative design, using secondary data from the annual reports of Swedish private limited companies. Our results show that auditors in Sweden remain restrictive in issuing going concern opinions, but that their evaluations are to a large degree more precise than the bankruptcy prediction models for companies reporting under the K2 framework. However, when the accounting numbers become more sophisticated under the K3 framework, bankruptcy prediction models may be more precise. The study highlights going concern opinions and the outcomes of bankruptcy prediction models from a misclassification-cost perspective, and contributes knowledge about the advantages and limitations of using bankruptcy prediction models in auditing.
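Comparing auditors with prediction models "when estimated misclassification costs (EMC) are considered" typically means weighting the two error types differently: a missed warning to a firm that later goes bankrupt (Type II) is usually costlier than a false going-concern warning to a surviving firm (Type I). A minimal sketch of that weighting, under the common formulation with the Type I cost normalised to 1 (an assumption; the thesis's exact specification may differ):

```python
def estimated_misclassification_cost(p_bankrupt, type2_rate, type1_rate, cost_ratio):
    # EMC in the usual bankruptcy-prediction form (assumed, not taken
    # verbatim from the thesis):
    #   Type I error  = false going-concern warning to a surviving firm,
    #   Type II error = missed warning to a firm that later goes bankrupt.
    # cost_ratio is the cost of a Type II error relative to a Type I
    # error, whose cost is normalised to 1.
    p_survive = 1.0 - p_bankrupt
    return (p_bankrupt * type2_rate * cost_ratio
            + p_survive * type1_rate * 1.0)
```

Under this framing, the classifier with the lower EMC is preferred, so a restrictive auditor (low Type I rate, high Type II rate) can still lose to a model once the cost ratio is large enough.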
