Academic literature on the topic "Claim detection"

Create an accurate citation in APA, MLA, Chicago, Harvard, and other styles


Consult the thematic lists of articles, books, theses, conference proceedings, and other academic sources on the topic "Claim detection".

Next to each source in the reference list there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Claim detection"

1

Bandari, Sai Santosh Goud. "Machine Learning (ML) based Anomaly Detection in Insurance Industries." Journal of Information Systems Engineering and Management 10, no. 32s (2025): 13–21. https://doi.org/10.52783/jisem.v10i32s.5182.

Full text
Abstract
Handling claims presents significant difficulties for the insurance sector particularly in cases of duplicate claims, missing information, and false claims. Conventional manual techniques are prone to mistakes and inefficiencies, which substantially raises running expenses. This work presents an automated machine learning (ML) based solution for these problems. DBSCAN Clustering, Isolation Forest Classifier, and Random Forest Classifier are three specific ML techniques applied here. Early intervention is possible with the Random Forest Classifier as it can detect claims with lacking proof. While DBSCAN Clustering combines like data points to assist uncover and control duplicate claims, the Isolation Forest Classifier detects fraudulent claims by identifying abnormalities in the data. Using a big dataset, the suggested fix demonstrated significant performance and accuracy benefits in claim processing. Results demonstrate the ML models lower operational costs, less hand-made intervention, and better fraud detection. Reducing delays and mistakes in claim processing benefits the automated method in increasing client satisfaction as well. By automating major portions of claim processing, this paper shows the possibilities of ML in changing the insurance industry and generating cost savings, higher efficiency, and fraud protection. ML technology will become increasingly important in increasing the accuracy and efficiency of claim processing as the sector maintains its digital transformation under progress.
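As a rough illustration of how the three model families named in this abstract are typically combined, the sketch below wires up an Isolation Forest, DBSCAN and a Random Forest with scikit-learn on synthetic claim features; the data, feature count and labels are hypothetical, not the paper's.

```python
# A minimal sketch, not the authors' code: synthetic claim features stand in
# for real insurance data.
import numpy as np
from sklearn.ensemble import IsolationForest, RandomForestClassifier
from sklearn.cluster import DBSCAN
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))                          # hypothetical claim features
y_missing_docs = (rng.random(500) < 0.1).astype(int)   # label: claim lacks proof

X_std = StandardScaler().fit_transform(X)

# Isolation Forest: flag anomalous (potentially fraudulent) claims.
fraud_flags = IsolationForest(random_state=0).fit_predict(X_std)   # -1 = anomaly

# DBSCAN: claims falling into the same dense cluster are duplicate candidates.
duplicate_groups = DBSCAN(eps=0.5, min_samples=3).fit_predict(X_std)

# Random Forest: supervised detection of claims with missing evidence.
clf = RandomForestClassifier(random_state=0).fit(X_std, y_missing_docs)
print(fraud_flags[:10], duplicate_groups[:10], clf.predict(X_std[:10]))
```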
2

Porkodi, K. P. "Fraud Claim Detection Using Spark." IJIERT - International Journal of Innovations in Engineering Research and Technology 4, no. 2 (2017): 10–13. https://doi.org/10.5281/zenodo.1462257.

Full text
Abstract
Objective: To reduce fraud claims in health insurance companies and to improve outcomes in the health care industry. Analysis: In the existing system, Apache Hadoop and Apache Hive are used for processing data; this is a batch processing system. In the proposed system, Apache Spark is used for processing streaming data. Findings: The EHR record is used as the data source; it contains a unique id for patients worldwide, so it is very easy to detect a fraud claim with the help of the patient id. Apache Spark processes streaming data on a regular basis, whereas in the existing system Apache Hadoop and Apache Hive take hours to process the stored data. Improvement: A rule-based machine learning model is used for detecting and automating fraud claims, and Apache Spark is used for fast data processing, so the approach is more accurate and fast. https://www.ijiert.org/paper-details?paper_id=140995
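The rule-based streaming idea described in this abstract can be sketched with PySpark Structured Streaming as below; the input path, schema and the duplicate-patient-id rule are hypothetical stand-ins, not the paper's actual pipeline.

```python
# A rough sketch under stated assumptions, not the paper's implementation.
from pyspark.sql import SparkSession
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

spark = SparkSession.builder.appName("fraud-claim-rules").getOrCreate()

schema = StructType([
    StructField("patient_id", StringType()),
    StructField("claim_id", StringType()),
    StructField("amount", DoubleType()),
])

# Stream newly arriving claim files from a (hypothetical) landing directory.
claims = spark.readStream.schema(schema).json("/data/incoming_claims")

# Rule: more than one claim per patient id in the stream is flagged for review.
flagged = claims.groupBy("patient_id").count().filter("count > 1")

query = (flagged.writeStream
         .outputMode("complete")   # aggregations require complete/update mode
         .format("console")
         .start())
query.awaitTermination()
```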
3

Agarwal, Shashank. "An Intelligent Machine Learning Approach for Fraud Detection in Medical Claim Insurance: A Comprehensive Study." Scholars Journal of Engineering and Technology 11, no. 09 (2023): 191–200. http://dx.doi.org/10.36347/sjet.2023.v11i09.003.

Full text
Abstract
Medical claim insurance fraud poses a significant challenge for insurance companies and the healthcare system, leading to financial losses and reduced efficiency. In response to this issue, we present an intelligent machine- learning approach for fraud detection in medical claim insurance to enhance fraud detection accuracy and efficiency. This comprehensive study investigates the application of advanced machine learning algorithms for identifying fraudulent claims within the insurance domain. We thoroughly evaluate several candidate algorithms to select an appropriate machine learning algorithm, considering the unique characteristics of medical claim insurance data. Our chosen algorithm demonstrates superior capabilities in handling fraud detection tasks and is the foundation for our proposed intelligent approach. Our proposed approach incorporates domain knowledge and expert rules, augmenting the machine learning algorithm to address the intricacies of fraud detection within the insurance context. We introduce modifications to the algorithm, further enhancing its performance in detecting fraudulent medical claims. Through an extensive experimental setup, we evaluate the performance of our intelligent machine-learning approach. The results indicate significant accuracy, precision, recall, and F1-score improvements compared to traditional fraud detection methods. Additionally, we conduct a comparative analysis with other machine learning algorithms, affirming the superiority of our approach in this domain. The discussion section offers insights into the interpretability of the experimental findings and highlights the strengths and limitations of our approach. We conclude by emphasizing the significance of our research for the insurance industry and the potential impact on the healthcare system's efficiency and cost-effectiveness.
4

Prakosa, Hendri Kurniawan, and Nur Rokhman. "Anomaly Detection in Hospital Claims Using K-Means and Linear Regression." IJCCS (Indonesian Journal of Computing and Cybernetics Systems) 15, no. 4 (2021): 391. http://dx.doi.org/10.22146/ijccs.68160.

Full text
Abstract
BPJS Kesehatan, which has been in existence for almost a decade, is still experiencing a deficit in the process of guaranteeing participants. One of the factors that causes this is a discrepancy in the claim process which tends to harm BPJS Kesehatan, for example by increasing the diagnostic coding so that the claim becomes bigger, making double claims, or even recording false claims. Under government regulations, these actions constitute fraud. Fraud can be detected by looking at the anomalies that appear in the claim data. This research aims to determine anomalies in hospital claims to BPJS Kesehatan. The data used is BPJS claim data for 2015-2016, while the algorithm used is a combination of the K-Means algorithm and Linear Regression. For optimal clustering results, the density canopy algorithm was used to determine the initial centroids. Evaluation using the silhouette index resulted in a value of 0.82 with 5 clusters, and RMSE values from simple linear regression modeling of 0.49 for billing costs and 0.97 for length of stay. On that basis, there are 435 anomaly points out of 10,000 records, or 4.35%. It is hoped that with their identification, more effective follow-up can be carried out.
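A minimal sketch of the K-Means plus linear-regression anomaly flagging this abstract describes is shown below, using scikit-learn on synthetic claims; the features, thresholds and cluster count are assumptions, and the density-canopy centroid initialisation is omitted.

```python
# A minimal sketch on synthetic data; the BPJS claim fields are hypothetical stand-ins.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LinearRegression
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(1)
length_of_stay = rng.integers(1, 15, size=1000).astype(float)
billing_cost = 500 * length_of_stay + rng.normal(0, 300, size=1000)
X = np.column_stack([length_of_stay, billing_cost])

km = KMeans(n_clusters=5, n_init=10, random_state=1).fit(X)
print("silhouette:", silhouette_score(X, km.labels_))

# Within each cluster, regress billing cost on length of stay and flag
# claims whose residual exceeds a threshold (here three standard deviations).
anomalies = []
for c in range(5):
    idx = np.where(km.labels_ == c)[0]
    reg = LinearRegression().fit(X[idx, :1], X[idx, 1])
    resid = X[idx, 1] - reg.predict(X[idx, :1])
    anomalies.extend(idx[np.abs(resid) > 3 * resid.std()])
print("flagged claims:", len(anomalies))
```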
5

Goutham, Bilakanti. "Enhancing Claim Processing Efficiency with Generative AI." International Journal of Leading Research Publication 3, no. 1 (2022): 1–11. https://doi.org/10.5281/zenodo.15196823.

Full text
Abstract
The use of Generative AI in claim processing, via diversified intake channels, including emails, faxed submissions, and intake channels that are call-center in nature. Under traditional claim processing, there is always a great amount of manual labor in obtaining, validating, and processing information vis-a-vis claims, which results in inefficiencies and delays. Advanced AI models such as NLP and GANs are used to automate data extraction, detection of anomalies, and decision-making, thereby reducing the processing time and the operational cost of processing claims. Automated intelligence increases precision with fewer human errors, enhanced detection of fraud, and quicker approvals. Not only does the process optimize efficiency but also enhance customer satisfaction through quicker claims settlement. Employing machine learning techniques enables ongoing model enhancement, responding to new claim behaviors and regulatory requirements. Insurance companies and banks can greatly enhance compliance, mitigate risk, and enhance transparency by using real-time AI-powered insights. The article illustrates the transformative effect of Generative AI on making claim processing activities streamlined, scalable, and efficient in many industries.
6

Ikuomola, A. J., and O. E. Ojo. "An Effective Health Care Insurance Fraud and Abuse Detection System." Journal of Natural Sciences Engineering and Technology 15, no. 2 (2017): 1–12. http://dx.doi.org/10.51406/jnset.v15i2.1662.

Full text
Abstract
Due to the complexity of the processes within healthcare insurance systems and the large number of participants involved, it is very difficult to supervise the systems for fraud. The healthcare service providers’ fraud and abuse has become a serious problem. The practices such as billing for services that were never rendered, performing unnecessary medical services and misrepresenting non-covered treatment as covered treatments etc. not only contribute to the problem of rising health care expenditure but also affect the health of the patients. Traditional methods of detecting health care fraud and abuse are time-consuming and inefficient. In this paper, the health care insurance fraud and abuse detection system (HECIFADES) was proposed. The HECIFADES consist of six modules namely: claim, augment claim, claim database, profile database, profile updater and updated profiles. The system was implemented using Visual Studio 2010 and SQL. After testing, it was observed that HECIFADES was indeed an effective system for detecting fraudulent activities and yet very secured way for generating medical claims. It also improves the quality and mitigates potential payment risks and program vulnerabilities.
7

Faseela, V. S., and P. Thangam. "A Review on Health Insurance Claim Fraud Detection." International Journal of Engineering Research & Science 4, no. 9 (2018): 26–28. https://doi.org/10.5281/zenodo.1441226.

Full text
Abstract
Anomaly or outlier detection is one of the applications of data mining, and its major use is fraud detection. Health care fraud leads to substantial losses of money each year in many countries, so effective fraud detection is important for reducing the cost of the health care system. This paper reviews the various approaches used for detecting fraudulent activities in health insurance claim data. The approaches reviewed in this paper are Hierarchical Hidden Markov Models and Non-negative Matrix Factorization. The data mining goals achieved and functions performed by these approaches are also given.
8

Nortey, Ezekiel N. N., Reuben Pometsey, Louis Asiedu, Samuel Iddi, and Felix O. Mettle. "Anomaly Detection in Health Insurance Claims Using Bayesian Quantile Regression." International Journal of Mathematics and Mathematical Sciences 2021 (February 23, 2021): 1–11. http://dx.doi.org/10.1155/2021/6667671.

Full text
Abstract
Research has shown that current health expenditure in most countries, especially in sub-Saharan Africa, is inadequate and unsustainable. Yet, fraud, abuse, and waste in health insurance claims by service providers and subscribers threaten the delivery of quality healthcare. It is therefore imperative to analyze health insurance claim data to identify potentially suspicious claims. Typically, anomaly detection can be posited as a classification problem that requires the use of statistical methods such as mixture models and machine learning approaches to classify data points as either normal or anomalous. Additionally, health insurance claim data are mostly associated with problems of sparsity, heteroscedasticity, multicollinearity, and the presence of missing values. The analyses of such data are best addressed by adopting more robust statistical techniques. In this paper, we utilized the Bayesian quantile regression model to establish the relations between claim outcome of interest and subject-level features and further classify claims as either normal or anomalous. An estimated model component is assumed to inherently capture the behaviors of the response variable. A Bayesian mixture model, assuming a normal mixture of two components, is used to label claims as either normal or anomalous. The model was applied to health insurance data captured on 115 people suffering from various cardiovascular diseases across different states in the USA. Results show that 25 out of 115 claims (21.7%) were potentially suspicious. The overall accuracy of the fitted model was assessed to be 92%. Through the methodological approach and empirical application, we demonstrated that the Bayesian quantile regression is a viable model for anomaly detection.
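As a rough, non-Bayesian analogue of the approach this abstract describes, the sketch below pairs median (quantile) regression with a two-component Gaussian mixture on the residuals; the data, features and this "regress, then label" wiring are illustrative assumptions, not the authors' model.

```python
# Not the paper's Bayesian model: a frequentist sketch with the same structure.
import numpy as np
import statsmodels.api as sm
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(2)
age = rng.uniform(30, 80, 300)
claim_amount = 50 * age + rng.normal(0, 200, 300)
claim_amount[:20] += 3000          # inject some suspiciously large claims

X = sm.add_constant(age)
fit = sm.QuantReg(claim_amount, X).fit(q=0.5)   # median regression
resid = claim_amount - fit.predict(X)

# Two-component mixture on residuals; the smaller, high-residual component
# is treated as "potentially suspicious".
labels = GaussianMixture(n_components=2, random_state=0).fit_predict(resid.reshape(-1, 1))
print("component sizes:", np.bincount(labels))
```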
9

Jampani, Siva Krishna. "Fraud Detection in Insurance Claims Using AI." Journal of Scientific and Engineering Research 6, no. 1 (2019): 302–10. https://doi.org/10.5281/zenodo.14637405.

Full text
Abstract
The insurance industry has faced issues with fraudulent claims, which have resulted in financial losses and operational inefficiencies. Integrating Artificial Intelligence offers a transformative way of detecting fraud by analyzing patterns in claim histories and customer profiles, along with external datasets. The use of AI-driven techniques, such as machine learning algorithms, natural language processing, and anomaly detection models, now allows insurers to detect fraud with greater precision and efficiency. These systems use supervised and unsupervised learning methods for outlier detection, classification of risky claims, and reducing false positives. Dynamic adaptability to the AI solutions has proved newly hatched fraud tactics, providing resilience against evolving threats. The present article investigates how fraud in insurance is being revolutionized by AI, focusing on health, auto, and property insurance claims by examining in-depth the implications in facilitating the industry with increased trust and better operational efficiency.
10

Bakeyalakshmi, P., and S. K. Mahendran. "Enhanced replica detection scheme for efficient analysis of intrusion detection in MANET." International Journal of Engineering & Technology 7, no. 1.1 (2017): 565. http://dx.doi.org/10.14419/ijet.v7i1.1.10169.

Full text
Abstract
Nowadays, detection scheme of intrusion is placing a major role for efficient access and analysis in Mobile Ad-hoc network (MANET). In the past, the detection scheme of Intrusion was used to identify the efficiency of the network and in maximum systems it performs with huge rate of false alarm. In this paper, an Effective approach of the Enhanced Replica Detection scheme (ERDS) based on Sequential Probability Ratio Test (SPRT) is proposed to detect the malicious actions and to have a secure path without claim in an efficient manner. Also, provides strategies to avoid attacker and to provide secure communication. In order to have an efficient analysis of intrusion detection the proposed approach is implemented based on the anomaly. To achieve this, the detection scheme is established based on SPRT and demonstrated the performances of detection with less claim. The simulation results of control overhead, packet delivery ratio, efficient detection, energy consumption and average claims are carried out for the analysis of performance to show the improvement than the existing by using the network simulator tool. Also, the performance of the proposed system illustrated the detection of intrusion in the normal and attacker states of the network.
More sources

Theses on the topic "Claim detection"

1

Alamri, Abdulaziz. "The detection of contradictory claims in biomedical abstracts." Thesis, University of Sheffield, 2016. http://etheses.whiterose.ac.uk/15893/.

Full text
Abstract
Research claims in the biomedical domain are not always consistent, and may even be contradictory. This thesis explores contradictions between research claims in order to determine whether or not it is possible to develop a solution to automate the detection of such phenomena. Such a solution will help decision-makers, including researchers, to alleviate the effects of contradictory claims on their decisions. This study develops two methodologies to construct corpora of contradictions. The first methodology utilises systematic reviews to construct a manually-annotated corpus of contradictions. The second methodology uses a different approach to construct a corpus of contradictions which does not rely on human annotation. This methodology is proposed to overcome the limitations of the manual annotation approach. Moreover, this thesis proposes a pipeline to detect contradictions in abstracts. The pipeline takes a question and a list of research abstracts which may contain answers to it. The output of the pipeline is a list of sentences extracted from abstracts which answer the question, where each sentence is annotated with an assertion value with respect to the question. Claims which feature opposing assertion values are considered as potentially contradictory claims. The research demonstrates that automating the detection of contradictory claims in research abstracts is a feasible problem.
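The pipeline this abstract describes can be sketched structurally as below; classify_assertion() is a hypothetical stand-in for the thesis' sentence-level assertion classifier, and the question and sentences are invented.

```python
# A minimal structural sketch, not the thesis' actual pipeline or model.
from itertools import combinations

def classify_assertion(sentence: str) -> str:
    """Toy rule: label a sentence 'negative' if it contains simple negation cues."""
    return "negative" if any(w in sentence.lower() for w in ("no ", "not ", "fails")) else "positive"

question = "Does drug X reduce blood pressure?"
answer_sentences = [
    "Drug X significantly reduced blood pressure in the trial.",
    "Drug X did not lower blood pressure compared with placebo.",
]

labelled = [(s, classify_assertion(s)) for s in answer_sentences]
# Claims with opposing assertion values are flagged as potentially contradictory.
contradictions = [(a, b) for (a, la), (b, lb) in combinations(labelled, 2) if la != lb]
print(contradictions)
```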
2

Yang, Li. "A comparison of unsupervised learning techniques for detection of medical abuse in automobile claims." California State University, Long Beach, 2013.

Search full text
3

Roberts, Terisa. "The use of credit scorecard design, predictive modelling and text mining to detect fraud in the insurance industry." Thesis, North-West University, 2011. http://hdl.handle.net/10394/10347.

Full text
Abstract
The use of analytical techniques for fraud detection and the design of fraud detection systems have been topics of several research projects in the past and have seen varying degrees of success in their practical implementation. In particular, several authors regard the use of credit risk scorecards for fraud detection as a useful analytical detection tool. However, research on analytical fraud detection for the South African insurance industry is limited. Furthermore, real world restrictions like the availability and quality of data elements, highly unbalanced datasets, interpretability challenges with complex analytical techniques and the evolving nature of insurance fraud contribute to the on-going challenge of detecting fraud successfully. Insurance organisations face financial instability from a global recession, tighter regulatory requirements and consolidation of the industry, which implore the need for a practical and effective fraud strategy. Given the volumes of structured and unstructured data available in data warehouses of insurance organisations, it would be sensible for an effective fraud strategy to take into account data-driven methods and incorporate analytical techniques into an overall fraud risk assessment system. Having said that, the complexity of the analytical techniques, coupled with the effort required to prepare the data to support it, should be carefully considered as some studies found that less complex algorithms produce equal or better results. Furthermore, an over reliance on analytical models can underestimate the underlying risk, as observed with credit risk at financial institutions during the financial crisis. An attractive property of the structure of the probabilistic weights-of-evidence (WOE) formulation for risk scorecard construction is its ability to handle data issues like missing values, outliers and rare cases. It is also transparent and flexible in allowing the re-adjustment of the bins based on expert knowledge or other business considerations. The approach proposed in the study is to construct fraud risk scorecards at entity level that incorporate sets of intrinsic and relational risk factors to support a robust fraud risk assessment. The study investigates the application of an integrated Suspicious Activity Assessment System (SAAS) empirically using real-world South African insurance data. The first case study uses a data sample of short-term insurance claims data and the second a data sample of life insurance claims data. Both case studies show promising results. The contributions of the study are summarised as follows: The study identified several challenges with the use of an analytical approach to fraud detection within the context of the South African insurance industry. The study proposes the development of fraud risk scorecards based on WOE measures for diagnostic fraud detection, within the context of the South African insurance industry, and the consideration of alternative algorithms to determine split points. To improve the discriminatory performance of the fraud risk scorecards, the study evaluated the use of analytical techniques, such as text mining, to identify risk factors. In order to identify risk factors from large sets of data, the study suggests the careful consideration of both the types of information as well as the types of statistical techniques in a fraud detection system. 
The types of information refer to the categories of input data available for analysis, translated into risk factors, and the types of statistical techniques refer to the constraints and assumptions of the underlying statistical techniques. In addition, the study advocates the use of an entity-focused approach to fraud detection, given that fraudulent activity typically occurs at an entity or group of entities level.
PhD, Operational Research, North-West University, Vaal Triangle Campus, 2011.
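The weights-of-evidence (WOE) building block this thesis relies on can be illustrated with the short sketch below; the claim table, fraud rate and bin count are hypothetical, and the binning is a plain quantile split rather than the expert-adjusted bins the study describes.

```python
# A minimal WOE sketch for one binned risk factor; data are synthetic.
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)
df = pd.DataFrame({
    "claim_amount": rng.gamma(2.0, 5000, 2000),
    "fraud": rng.random(2000) < 0.05,
})
df["bin"] = pd.qcut(df["claim_amount"], q=5)

grp = df.groupby("bin", observed=True)["fraud"].agg(["sum", "count"])
grp["non_fraud"] = grp["count"] - grp["sum"]
# WOE = ln( share of non-fraud cases in the bin / share of fraud cases in the bin )
grp["woe"] = np.log(
    (grp["non_fraud"] / grp["non_fraud"].sum()) /
    (grp["sum"] / grp["sum"].sum())
)
print(grp[["woe"]])
```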
4

Ceglia, Cesarina. "A comparison of parametric and non-parametric methods for detecting fraudulent automobile insurance claims." Thesis, California State University, Long Beach, 2016. http://pqdtopen.proquest.com/#viewpdf?dispub=10147317.

Full text
Abstract
Fraudulent automobile insurance claims are not only a loss for insurance companies, but also for their policyholders. In order for insurance companies to prevent significant loss from false claims, they must raise their premiums for the policyholders. The goal of this research is to develop a decision making algorithm to determine whether a claim is classified as fraudulent based on the observed characteristics of a claim, which can in turn help prevent future loss. The data includes 923 cases of false claims, 14,497 cases of true claims and 33 describing variables from the years 1994 to 1996. To achieve the goal of this research, parametric and nonparametric methods are used to determine what variables play a major role in detecting fraudulent claims. These methods include logistic regression, the LASSO (least absolute shrinkage and selection operator) method, and Random Forests. This research concluded that a non-parametric Random Forests model classified fraudulent claims with the highest accuracy and best balance between sensitivity and specificity. Variable selection and importance are also implemented to improve the performance at which fraudulent claims are accurately classified.
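A minimal sketch of the comparison this abstract describes (L1-penalised logistic regression versus a Random Forest on an imbalanced claims dataset) is given below; the synthetic data stands in for the 1994-1996 claims and is not the thesis' dataset.

```python
# A sketch under stated assumptions: ~6% positive class, 33 features, as a stand-in.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import balanced_accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, n_features=33, weights=[0.94], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

lasso = LogisticRegression(penalty="l1", solver="liblinear", class_weight="balanced")
forest = RandomForestClassifier(n_estimators=300, class_weight="balanced", random_state=0)

for name, model in [("LASSO logistic", lasso), ("Random Forest", forest)]:
    model.fit(X_tr, y_tr)
    print(name, balanced_accuracy_score(y_te, model.predict(X_te)))
```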
5

Azu, Irina Mateko. "Creating a green baloney detection kit for green claims made in the CNW report: Dust to Dust: the energy cost of new vehicles: from concept to disposal." Thesis, Massachusetts Institute of Technology, 2008. http://hdl.handle.net/1721.1/45787.

Full text
Abstract
Thesis (S.B.)--Massachusetts Institute of Technology, Dept. of Mechanical Engineering, 2008. Includes bibliographical references (p. 16).
In order to assess the veracity of a green claim made by CNW marketing research Inc., I created a green baloney detection kit. It will serve as a guiding post by which anyone can assess the potential environmental impact of any action taken on the basis of the claims made by CNW in their dust to dust report. In their report they state that after doing an extensive life cycle analysis of several cars sold in the United States in 2005, they found that high fuel economy did not necessarily correlate to a smaller environmental impact, but rather the biggest contribution to the environmental impact of automobiles is in their end-of-life disposal. My green baloney detection kit will be an adaptation of Carl Sagan's original baloney detection kit, which is a series of probes which serve as a pillar for detecting fallacious arguments or claims. My enquiries show that the Dust to Dust report does not pass the green baloney detection kit and with it nontechnical environmentally conscious automotive consumers can determine that the claims made by CNW are not scientifically sound and so their decisions should be based on those claims.
by Irina Mateko Azu. S.B.
6

Ben, Gamra Siwar. "Contribution à la mise en place de réseaux profonds pour l'étude de tableaux par le biais de l'explicabilité : Application au style Tenebrisme ("clair-obscur")." Electronic Thesis or Diss., Littoral, 2023. http://www.theses.fr/2023DUNK0695.

Full text
Abstract
Face detection in Tenebrism ("chiaroscuro") paintings is of growing interest to art historians and researchers in order to estimate the illuminant location and thereby answer several technical questions. Deep learning is attracting increasing interest due to its high performance. An optimized Faster Region-based Convolutional Neural Network has demonstrated its ability to effectively address the challenges and deliver promising face detection results on Tenebrism paintings. However, deep neural networks are often characterized as "black boxes" because of the inherent complexity and non-linearity of their architectures. To tackle these issues, eXplainable Artificial Intelligence (XAI) is becoming an active research area for understanding deep models. We therefore propose a novel iterative XAI method based on guided perturbations to explain the model's predictions.
7

Mukkananchery, Abey. "Iterative Methods for the Reconstruction of Tomographic Images with Unconventional Source-detector Configurations." VCU Scholars Compass, 2005. http://scholarscompass.vcu.edu/etd/1244.

Full text
Abstract
X-ray computed tomography (CT) holds a critical role in current medical practice for the evaluation of patients, particularly in the emergency department and intensive care units. Expensive high resolution stationary scanners are available in radiology departments of most hospitals. In many situations however, a small, inexpensive, portable CT unit would be of significant value. Several mobile or miniature CT scanners are available, but none of these systems have the range, flexibility or overall physical characteristics of a truly portable device. The main challenge is the design of a geometry that optimally trades image quality for system size. The goal of this work has been to develop analysis tools to help simulate and evaluate novel system geometries. To test the tools we have developed, three geometries have been considered in the thesis, namely, parallel projections, clam-shell and parallel plate geometries. The parallel projections geometry is commonly used in reconstruction of images by filtered back projection technique. A clam-shell structure consists of two semi-cylindrical braces that fold together over the patient's body and connect at the top. A parallel plate structure uses two fixed flat or curved plates on either side of the patient's body and image from fixed sources/detectors that are gated on and off so as to step the X-ray field through the body. The parallel plate geometry has been found to be the least reliable of the three geometries investigated, with the parallel projections geometry being the most reliable. For the targeted application, the clam-shell geometry seems to be the solution with more chances to succeed in the short term. We implemented the Van Cittert iterative technique for the reconstruction of images from projections. The thesis discusses a number of variations on the algorithm, such as the use of the Conjugate Gradient Method, several choices for the initial guess, and the incorporation of a priori information to handle the reconstruction of images with metal inserts.
8

Chen, Yan. "Comparisons and Applications of Quantitative Signal Detections for Adverse Drug Reactions (ADRs): An Empirical Study Based on the Food and Drug Administration (FDA) Adverse Event Reporting System (AERS) and a Large Medical Claims Database." University of Cincinnati / OhioLINK, 2008. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1203534085.

Full text
9

Chen, Yan. "Comparisons and applications of quantitative signal detections for adverse drug reactions (ADRs): an empirical study based on the Food and Drug Administration (FDA) Adverse Event Reporting System (AERS) and a large medical claims database." Cincinnati, Ohio: University of Cincinnati, 2008. http://www.ohiolink.edu/etd/view.cgi?acc_num=ucin1203534085.

Full text
Abstract
Thesis (Ph.D. of Pharmacy Practice and Administrative Sciences)--University of Cincinnati, 2008. Advisor: Jeff Guo, PhD. Title from electronic thesis title page (viewed May 9, 2008). Keywords: data mining algorithms; adverse drug reactions; adverse event reporting system; signal detection; case-control study; antipsychotic; bipolar disorder. Includes abstract. Includes bibliographical references.
10

Baracchi, Daniele. "Novel neural networks for structured data." Doctoral thesis, 2018. http://hdl.handle.net/2158/1113665.

Full text
Abstract
Complex relational structures are used to represent data in many scientific fields such as chemistry, bioinformatics, natural language processing and social network analysis. It is often desirable to classify these complex objects, a problem which is increasingly being dealt with using machine learning approaches. While a number of algorithms have been shown to be effective in solving this task for graphs of moderate size, dealing with large structures still poses significant challenges due to the difficulty in scaling exhibited by the existing techniques. In this thesis we introduce a framework to approach supervised learning problems on structured data by extending the R-convolution concept used in graph kernels. We represent a graph (or, more generally, a relational structure) as a hierarchy of objects and we define how to unroll a template neural network on it. This approach is able to outperform state-of-the-art methods on large social network datasets, while at the same time being competitive on small chemobiological datasets. We also introduce a lossless compression algorithm for the hierarchical decompositions that improves the temporal complexity of our approach by exploiting symmetries in the input data. Another contribution of this thesis is an application of the aforementioned framework to the context-dependent claim detection task. Claim detection is the assessment of whether a sentence contains a claim, i.e. the thesis, or conclusion, of an argument; in particular we focus on context-dependent claims, where the context (i.e. the topic of the argument) is a determining factor in classifying a sentence. We show how our framework is able to take advantage of contextual information in a straightforward way and we present some preliminary results that indicate this approach is viable on real world datasets. A third contribution is a machine learning approach to aortic size normalcy assessment. The definition of normalcy is crucial when dealing with thoracic aortas, as a dilatation of the aortic diameter often precedes serious disease. We build a new estimator based on OC-SVM fitted on a cohort of 1024 healthy individuals aged 5 to 89 years, and we compare its results to those obtained on the same set of subjects by an approach based on linear regression. As a further novelty, we also build a second estimator that combines the diameters measured at multiple levels in order to assess the normalcy of the overall shape of the aorta.

Books on the topic "Claim detection"

1

Caldwell, Laura. Claim of innocence. Mira Books, 2011.

Search full text
2

Hansen, Joseph. Death Claims. No Exit Press, 1996.

Search full text
3

Hansen, Joseph. Death Claims. Alyson Books, 2001.

Search full text
4

Kiker, Douglas. Murder on Clam Pond. Random House, 1986.

Search full text
5

Kiker, Douglas. Murder on Clam Pond. Thorndike Press, 1986.

Search full text
6

Pronzini, Bill. Crazybone: A "nameless detective" novel. Thorndike Press, 2000.

Search full text
7

Harris, Charlaine. Crimes au clair de lune: Une anthologie de nouvelles inédites. Édition du Club Québec loisirs, 2011.

Search full text
8

Pronzini, Bill. Crazy bone: A "nameless detective" novel. Carroll & Graf, 2000.

Search full text
9

Phelan, Twist. Family claims: A Pinnacle Peak mystery. Poisoned Pen Press, 2006.

Search full text
10

Holtschlag, David J. Detection of conveyance changes in St. Clair River using historical water-level and flow data with inverse one-dimensional hydrodynamic modeling. U.S. Dept. of the Interior, U.S. Geological Survey, 2009.

Search full text
More sources

Book chapters on the topic "Claim detection"

1

Cheema, Gullal S., Eric Müller-Budack, Christian Otto, and Ralph Ewerth. "Claim Detection in Social Media." In Event Analytics across Languages and Communities. Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-64451-1_11.

Full text
Abstract
In recent years, the problem of misinformation on the web has become widespread across languages, countries and various social media platforms. One problem central to stopping the spread of misinformation is identifying claims and prioritising them for fact-checking. Although there has been much work on automated claim detection from text recently, the role of images and their variety still need to be explored. As posts and content shared on social media are often multimodal, it has become crucial to view the problem of misinformation and fake news from a multimodal perspective. In this chapter, first, we present an overview of existing claim detection methods and their limitations; second, we present a unimodal approach to identify check-worthy claims; third, and lastly, we introduce a dataset that takes both the image and text into account for detecting claims and benchmark recent multimodal models on the task.
2

Duan, Xueyu, Mingxue Liao, Xinwei Zhao, Wenda Wu, and Pin Lv. "An Unsupervised Joint Model for Claim Detection." In Communications in Computer and Information Science. Springer Singapore, 2019. http://dx.doi.org/10.1007/978-981-13-7983-3_18.

Full text
3

Pecher, Branislav, Ivan Srba, Robert Moro, Matus Tomlein, and Maria Bielikova. "FireAnt: Claim-Based Medical Misinformation Detection and Monitoring." In Machine Learning and Knowledge Discovery in Databases. Applied Data Science and Demo Track. Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-67670-4_38.

Full text
4

Peddireddy, Bhargavi, P. V. Rohith Kumar Reddy, and B. Srisatya Kapardi. "Health Insurance Claim Fraud Detection Using Artificial Neural Networks." In Cognitive Science and Technology. Springer Nature Singapore, 2025. https://doi.org/10.1007/978-981-97-9266-5_13.

Full text
5

Lippi, Marco, Francesca Lagioia, Giuseppe Contissa, Giovanni Sartor, and Paolo Torroni. "Claim Detection in Judgments of the EU Court of Justice." In Lecture Notes in Computer Science. Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-030-00178-0_35.

Full text
6

Hafid, Salim, Wassim Ammar, Sandra Bringay, and Konstantin Todorov. "Cite-worthiness Detection on Social Media: A Preliminary Study." In Lecture Notes in Computer Science. Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-65794-8_2.

Full text
Abstract
Detecting cite-worthiness in text is seen as the problem of flagging a missing reference to a scientific result (an article or a dataset) that should come to support a claim formulated in the text. Previous work has taken interest in this problem in the context of scientific literature, motivated by the need to allow for reference recommendation for researchers and flag missing citations in scientific work. In this preliminary study, we extend this idea towards the context of social media. As scientific claims are often made to support various arguments in societal debates on the Web, it is crucial to flag non-referenced or unsupported claims that relate to science, as this promises to contribute to improving the quality of the debates online. We experiment with baseline models, initially tested on scientific literature, by applying them on the SciTweets dataset which gathers science-related claims from X. We show that models trained on scientific papers struggle to detect cite-worthy text from X, we discuss implications of such results and argue for the necessity to train models on social media corpora for satisfactory flagging of missing references on social media. We make our data publicly available to encourage further research on cite-worthiness detection on social media.
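A cite-worthiness baseline of the kind such studies typically start from can be sketched as below (TF-IDF features plus logistic regression); the example tweets, labels and pipeline are illustrative assumptions, not the SciTweets setup itself.

```python
# A minimal text-classification sketch; training examples are made up.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "A new study shows vitamin D cuts flu risk by 40%",        # cite-worthy claim
    "Good morning everyone, coffee time!",                     # not cite-worthy
    "Researchers found that the vaccine reduces transmission", # cite-worthy claim
    "Can't wait for the weekend",                               # not cite-worthy
]
labels = [1, 0, 1, 0]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(texts, labels)
print(clf.predict(["Scientists claim this drug lowers blood pressure"]))
```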
7

Allein, Liesbeth, and Marie-Francine Moens. "Checkworthiness in Automatic Claim Detection Models: Definitions and Analysis of Datasets." In Disinformation in Open Online Media. Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-61841-4_1.

Full text
8

Picardi, Ilenia, Luca Serafini, and Marco Serino. "Disentangling Discursive Spaces of Knowledge Refused by Science: An Analysis of the Epistemic Structures in the Narratives Repertoires on Health During the Covid-19 Pandemic." In Manufacturing Refused Knowledge in the Age of Epistemic Pluralism. Springer Nature Singapore, 2024. http://dx.doi.org/10.1007/978-981-99-7188-6_6.

Full text
Abstract
This chapter provides an understanding of the social configurations with which Refused Knowledge Communities (RKCs) attribute credibility to knowledge about healthcare and wellbeing. This study focuses on how RKCs enrol knowledge claims and heterogeneous actors to build, maintain and legitimise forms of knowledge refused by science. The analysis relies on empirical materials related to the online discourses shared in the Alkaline Water (AW) and Five Biological Laws (5BLs) RKCs from January 2020 to December 2021—a time span characterised by the emergence of the Covid-19 pandemic and the management of the related health crisis—by identifying in each RKC distinct claims of refused knowledge and the actors that sustain these claims. Through a combination of qualitative analysis and network-analytic techniques, we examine the epistemic structures of the AW and 5BLs RKCs and formalise the connections between claims and actors within each RKC by a two-mode network in which claims are connected to actors. By means of community detection, we provide a visual analysis of the configuration of claim–actor connections, while using betweenness centrality scores to denote ‘flexible’ objects that link diverse sub-groups of nodes—that is, claims or actors that act as ‘boundary objects’ within these complex social worlds.
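The two-mode claim-actor network analysis this abstract describes can be sketched with networkx as below; the claims, actors and edges are invented placeholders, and greedy modularity is used as one possible community-detection choice.

```python
# A minimal sketch of a bipartite claim-actor graph; all node names are invented.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

edges = [
    ("claim: alkaline water boosts immunity", "actor: wellness blog"),
    ("claim: alkaline water boosts immunity", "actor: influencer A"),
    ("claim: 5BL explains disease onset", "actor: forum moderator"),
    ("claim: 5BL explains disease onset", "actor: influencer A"),
]
G = nx.Graph()
G.add_edges_from(edges)

communities = greedy_modularity_communities(G)   # sub-groups of claims and actors
betweenness = nx.betweenness_centrality(G)       # "boundary" nodes score highest
print(len(communities), max(betweenness, key=betweenness.get))
```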
9

Iskender, Neslihan, Robin Schaefer, Tim Polzehl, and Sebastian Möller. "Argument Mining in Tweets: Comparing Crowd and Expert Annotations for Automated Claim and Evidence Detection." In Natural Language Processing and Information Systems. Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-80599-9_25.

Full text
10

Tai, Hsueh-Yung (Mary). "Applications of Big Data and Artificial Intelligence." In Digital Health Care in Taiwan. Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-05160-9_11.

Full text
Abstract
This chapter introduces the application of National Health Insurance (NHI) big data in creating digital claim review tools and artificial intelligence (AI) training to improve review efficacy. By analyzing big data in the NHI medical information system, the National Health Insurance Administration (NHIA) can detect abnormal or unusual claims and efficiently reduce medical waste. AI models are further generated with the NHI big data to identify duplicated medical images and monitor the quality of uploaded images and test results from medical institutions. The NHIA also seeks external resources to explore the possibilities of diverse AI applications. Its big data have been applied to create an AI-based COVID-19 detection platform used by medical centers. Within it, high-risk patients’ X-ray images can be detected automatically and then an alert message is sent to doctors, thus preventing nosocomial COVID-19 infections. Besides a convenient digital claims system, the NHIA also provides contracted institutions with useful reminders, references, and graphic functions with figures and/or tables to help the quality of their self-management.

Conference proceedings on the topic "Claim detection"

1

Kotitsas, Sotiris, Panagiotis Kounoudis, Eleni Koutli, and Haris Papageorgiou. "Leveraging fine-tuned Large Language Models with LoRA for Effective Claim, Claimer, and Claim Object Detection." In Proceedings of the 18th Conference of the European Chapter of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, 2024. https://doi.org/10.18653/v1/2024.eacl-long.156.

Full text
2

Saha, Diya, Manjira Sinha, and Tirthankar Dasgupta. "EnClaim: A Style Augmented Transformer Architecture for Environmental Claim Detection." In Proceedings of the 1st Workshop on Natural Language Processing Meets Climate Change (ClimateNLP 2024). Association for Computational Linguistics, 2024. http://dx.doi.org/10.18653/v1/2024.climatenlp-1.9.

Full text
3

Kaushik, Priyanka, Saurabh Pratap Singh Rathore, Anand Singh Bisen, and Rachna Rathore. "Enhancing Insurance Claim Fraud Detection Through Advanced Data Analytics Techniques." In 2024 IEEE Region 10 Symposium (TENSYMP). IEEE, 2024. http://dx.doi.org/10.1109/tensymp61132.2024.10752284.

Full text
4

Majer, Laura, and Jan Šnajder. "Claim Check-Worthiness Detection: How Well do LLMs Grasp Annotation Guidelines?" In Proceedings of the Seventh Fact Extraction and VERification Workshop (FEVER). Association for Computational Linguistics, 2024. http://dx.doi.org/10.18653/v1/2024.fever-1.27.

Full text
5

Ni, Jingwei, Minjing Shi, Dominik Stammbach, Mrinmaya Sachan, Elliott Ash, and Markus Leippold. "AFaCTA: Assisting the Annotation of Factual Claim Detection with Reliable LLM Annotators." In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, 2024. http://dx.doi.org/10.18653/v1/2024.acl-long.104.

Full text
6

Shah, Agam, Arnav Hiray, Pratvi Shah, et al. "Numerical Claim Detection in Finance: A New Financial Dataset, Weak-Supervision Model, and Market Analysis." In Proceedings of the Seventh Fact Extraction and VERification Workshop (FEVER). Association for Computational Linguistics, 2024. http://dx.doi.org/10.18653/v1/2024.fever-1.21.

Full text
7

Salazar, Armida P., Rodolfo C. Raga, and Susan S. Caluya. "Detecting Anomalies in Medical Claims with Clustering Algorithm." In 2024 Asia Pacific Conference on Innovation in Technology (APCIT). IEEE, 2024. http://dx.doi.org/10.1109/apcit62007.2024.10673480.

Full text
8

Hardalov, Momchil, Anton Chernyavskiy, Ivan Koychev, Dmitry Ilvovsky, and Preslav Nakov. "CrowdChecked: Detecting Previously Fact-Checked Claims in Social Media." In Proceedings of the 2nd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 12th International Joint Conference on Natural Language Processing (Volume 1: Long Papers). Association for Computational Linguistics, 2022. http://dx.doi.org/10.18653/v1/2022.aacl-main.22.

Full text
9

P., Archana Reddy, Divya Jyothi G., Velumury Varshita, Chennupati Akshitha, and Aiswariya Milan K. "Understanding Graph Neural Networks Models for Healthcare Fraud Detection in Insurance Claims." In 2025 International Conference on Intelligent and Innovative Technologies in Computing, Electrical and Electronics (IITCEE). IEEE, 2025. https://doi.org/10.1109/iitcee64140.2025.10915253.

Full text
10

Abo El-Enen, Mohamed Ahmed, Dina Tbaishat, Mustafa AbdulRazek, Amril Nazir, Reem Muhammad, and Ahmed T. Sahlol. "Generative AI with Big Data for Better Detection of Fraud in Medical Claims." In 2024 IEEE International Conference on E-health Networking, Application & Services (HealthCom). IEEE, 2024. https://doi.org/10.1109/healthcom60970.2024.10880817.

Full text

Reports on the topic "Claim detection"

1

Harman. PR-364-11706-R01 Testing In-Situ Coriolis Meter Verification Technology Detecting Corrosion and Erosion. Pipeline Research Council International, Inc. (PRCI), 2015. http://dx.doi.org/10.55274/r0010855.

Full text
Abstract
Coriolis flow meter diagnostics have made numerous advances in the past five years. Manufacturers claim that they can detect corrosion build-up and Coriolis tube erosion with their diagnostic software. This manufacturer-blinded study provides diagnostic results from a Coriolis meter flow tested in water and in air at three different test pressures. Using wax, the meter was fouled to three different thicknesses, and eroded using sand-laden air. Wax-fouled and eroded flow performance and diagnostic results are compared to baseline data to substantiate and refute manufacturers' claims.
2

CT Lung Densitometry, Consensus QIBA Profile. Chair Charles Hatt and Miranda Kirby. Radiological Society of North America (RSNA)/Quantitative Imaging Biomarkers Alliance (QIBA), 2020. https://doi.org/10.1148/qiba/20200904.

Full text
Abstract
The goal of a QIBA Profile is to achieve a repeatable and useful level of performance for measures of lung density from quantitative CT using the RA-950 HU and Perc15 biomarkers of emphysema. Please see Appendix C for more detailed information on the calculation of and rationale for RA-950 HU and Perc15 as the biomarkers of choice. The Claim (Section 2) describes the performance in terms of bias and precision of RA-950 HU and Perc15 for detecting change in lung density. The Activities (Section 3) describe how to generate RA-950 HU and Perc15 for longitudinal studies of the change in lung density. Requirements are placed on the Actors that participate in those activities as necessary to achieve the Claim in Section 2. Assessment Procedures (Section 4) for evaluating specific requirements are defined as needed. This QIBA Profile (CT: Lung Densitometry) addresses RA-950 HU and Perc15 for longitudinal studies which are often used as biomarkers of emphysema progression in chronic obstructive pulmonary disease (COPD) or as a response to cessation of smoking and possible future treatment approaches. It places requirements on Acquisition Devices, Physicists, Technologists, Clinicians, Statisticians, Reconstruction Software and Image Analysis Software involved in Product Qualification, Staff Qualification, Periodic Quality Assurance, Subject Handling, Protocol Design, Image Data Acquisition, Image Data Reconstruction, Image QA, Image Distribution and Image Analysis. The requirements are focused on achieving negligible bias and avoiding unnecessary variability of the RA-950 HU and Perc15 measurements by compensating for variations in CT number due to inconsistency of lung inflation volume and calibration of the CT scanner, and vendor-specific bias due to CT scanner make and model. To meet the claims, scanner calibration is performed using a well characterized imaging phantom ideally containing lung equivalent density foams as described in Section 4.1. The clinical performance targets are to achieve bias and repeatability such that a change in RA950 HU of ≥ 3.7% of the normalized lung volume, or a change in Perc15 of ≥ 11 HU after lung volume adjustment can be accepted as indicative of a true change (with 95% confidence). This document is intended to help clinicians basing decisions on these biomarkers, imaging staff generating these biomarkers, vendor staff developing related products, purchasers of such products and investigators designing trials with imaging endpoints. Note that this document only states requirements to achieve the claim, not “requirements for standard of care.” Conformance to this Profile is less important than providing appropriate patient care. The compilation of this document represents the efforts of many individuals over a several years of effort, some but not all of whom are acknowledged in Appendix A. QIBA Profiles addressing other imaging biomarkers using CT, MRI, PET and Ultrasound can be found at qibawiki.rsna.org.
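The two biomarkers named in this profile can be computed from a lung HU histogram as in the sketch below; the voxel values are synthetic, and the segmentation, lung-volume adjustment and scanner calibration the profile requires are out of scope here.

```python
# A minimal numpy sketch of RA-950 and Perc15 from hypothetical lung-voxel HU values.
import numpy as np

rng = np.random.default_rng(4)
lung_hu = rng.normal(-870, 40, size=200_000)     # synthetic lung voxels in HU

ra950 = 100.0 * np.mean(lung_hu < -950)          # % of lung volume below -950 HU
perc15 = np.percentile(lung_hu, 15)              # 15th percentile of the HU histogram
print(f"RA-950: {ra950:.1f}%  Perc15: {perc15:.0f} HU")
```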
3

Gao, Krishnamurthy, and McNealy. L52313 Performance Improvements of Current ILI Technologies for Mechanical Damage Detection Phase 2. Pipeline Research Council International, Inc. (PRCI), 2009. http://dx.doi.org/10.55274/r0010681.

Full text
Abstract
This final report provides a comprehensive and in-depth review of the current status of in-line inspection technologies including, but not limited to, Magnetic (Axial MFL, Circumferential MFL), and Geometrical (Caliper) methods in terms of their capabilities, limitations and potentials in detection, discrimination and characterization of various forms of pipeline mechanical damage, such as dents, dents with corrosion, and dents with cracks, gouges and dents with gouges. Capabilities of current technologies presented herein are based on validation data supplied by PRCI members. This report reviews and summarizes research regarding the capabilities of current in-line inspection (ILI) based technologies for the detection and discrimination of mechanical damage conducted by Blade Energy Partners in cooperation with participating ILI vendors and PRCI member pipeline operators. This research was conducted in two phases. The first identified the current deployed ILI technologies, vendor claims for capability and performance determination with validation data provided by the ILI vendors. Standardized performance measures were also developed and applied. The Phase I research was presented in an incomplete report, PR-328-063502-R01.
4

99mTc SPECT-CT, Consensus QIBA Profile. Chair Yuni Dewaraja and Robert Miyaoka. Radiological Society of North America (RSNA)/Quantitative Imaging Biomarkers Alliance (QIBA), 2019. https://doi.org/10.1148/qiba/20191021.

Full text
Abstract
The quantification of 99mTc labeled biomarkers can add unique value in many different settings, ranging from clinical trials of investigation new drugs to the treatment of individual patients with marketed therapeutics. For example, goals of precision medicine include using companion radiopharmaceutical diagnostics as just-in-time, predictive biomarkers for selecting patients to receive targeted treatments, customizing doses of internally administered radiotherapeutics, and assessing responses to treatment. This Profile describes quantitative outcome measures that represent proxies of target concentration or target mass in topographically specific volumes of interest (VOIs). These outcome measures are usually expressed as the percent injected dose (i.e., radioactivity) per mL of tissue (%ID/mL), a standard uptake value ratio (SUVr), or a target-to-background ratio (TBR). In this profile, targeting is not limited to any single mechanism of action. Targeting can be based on interaction with a cell surface protein, an intracellular complex after diffusion, protein-mediated transport, endocytosis, or mechanical trapping in a capillary bed, as in the case of transarterial administration of embolic microspheres. Regardless, the profile focuses on quantification in well-defined volumes of interest. Technetium-99m based dopamine transporter imaging agents, such as TRODAT, are nearly direct links with some aspects of the predecessor profile on 123I-ioflupane for neurodegenerative disorders. (See www.qibawiki.rsna.org ) Cancer is often a base case of convenience for new material in this profile, but the intent is to create methods that can be useful in other therapeutic areas where the diseases are characterized by spatially-limited anatomical volumes, such as lung segments, or multifocal aggregations of targets, such as white blood cell surface receptors on pulmonary nodules in patients with sarcoidosis. Neoplastic masses that can be measured with x-ray computed tomography (CT) or magnetic resonance imaging (MRI) are the starting point. However, the intent is to create a profile that can be extrapolated to diseases in other therapeutic areas that are also associated with focal, or multi-focal pathology, such as pulmonary granulomatous diseases of autoimmune or infectious etiology, non-oncological diseases of organs such as polycystic kidney disease, and the like. The criteria for measurability are based on the current resolution of most SPECT-CT systems in clinical practice, and are independent of criteria for measurability in other contexts. For this SPECT profile, conformance requires that a “small” VOI must be greater than 30 mL to be measurable. It is understood that much smaller VOIs can sometimes exhibit high conspicuity on SPECT, but these use cases are beyond the scope of this profile and will not be tested for conformance in this version. It is left to individual stakeholders to show the extent to which they can achieve conformance when measuring VOIs less than 30 mL. The detection of smaller changes during clinical trials of large groups can be achieved by referring to the QIBA companion guidance on powering trials. 
The Claims (Section 2) asserts that compliance with the specifications described in this Profile will produce cross sectional estimates of the concentration of radioactivity [kBq/mL] in a volume of interest (VOI) or a target-to-background ratio (TBR) within a defined confidence interval (CI), and distinguish true biological change from system variance (i.e., measurement error) in individual patients or clinical trials of many patients who will be studied longitudinally with 99mTc SPECT agents. Both claims are founded on observations that target density varies between patients with the same disease as well as within patients with multi-focal disease. The Activities (Section 3) describes the requirements that are placed on the Actors who need to achieve the Claim. Section 3 specifies what the actors must do in order to estimate the amount of radioactivity in a volume of interest, expressed in kBq/mL (ideal) or as a TBR (acceptable) within a 95% CI surrounding the true value. Measurands such as %ID/mL are targets for nonclinical studies in animal models that use terminal sacrifice to establish ground truth for imaging studies. TBRs can be precarious, as the assumptions that depend on the physiology of the background regions matching the volume of interest can be hard to accept sometimes. It is up to each individual stakeholder to qualify the background regions used in their own use case. This profile qualifies only a few in some very limited contexts as examples. The Assessment Procedures (Section 4) for evaluating specific requirements are defined as needed. The requirements are focused on achieving sufficient accuracy and avoiding unnecessary variability of the measurements. The clinical performance target is to achieve a 95% confidence interval for concentration in units of kBq/mL (kilobequerels per milliliter) or %ID/mL (percent injected dose per milliliter) or TBR with both a reproducibility and a repeatability of +/- 8% within a single individual under zero-biological-change conditions. This document is intended to help clinicians basing decisions on these biomarkers, imaging staffs generating measurements of these biomarkers, vendors who are developing related products, purchasers of such products, and investigators designing trials. Note that this document only states requirements to achieve the claims, not “requirements on standard of care” nor compliance with any particular protocol for treating participants in clinical trial settings. Conformance to this Profile is secondary to properly caring for patients or adhering to the requirements of a protocol. QIBA Profiles addressing other imaging biomarkers using CT, MRI, PET and Ultrasound can be found at www.qibawiki.rsna.org.
5

McNealy. L52295 Fundamentals and Performance Improvements of ILI Technologies for Mechanical Damage Inspection. Pipeline Research Council International, Inc. (PRCI), 2008. http://dx.doi.org/10.55274/r0010677.

Full text
Abstract
The objective of this effort is to assist the pipeline industry in selecting ILI technologies that are best suited for detecting and sizing the types of mechanical damage that may pose integrity concerns, and/or are required to be addressed by the existing Regulatory Rules. The practical need is driven by both the recent changes in Regulatory requirements, vis-à-vis mechanical damage, and the latest developments of ILI technologies aiming to detect and size such damage. Based on the information provided by the six participating ILI vendors, and taking full advantage of extensive previous work, this report presents the results of Phase I of the Project, including updated capabilities and deficiencies of the current MD ILI technologies, performance claims, and supporting validation data. In addition, this Phase 1 report details the fundamental approaches embodied within each technology, then analyzes validation data and derives performance conclusions from available data. Finally, this report identifies further testing to be conducted within Phase II of the Project.
6

Jarram, Paul, Phil Keogh, and Dave Tweddle. PR-478-143723-R01 Evaluation of Large Stand Off Magnetometry Techniques. Pipeline Research Council International, Inc. (PRCI), 2015. http://dx.doi.org/10.55274/r0010841.

Full text
Abstract
Monitoring the integrity of buried ageing ferromagnetic pipelines is a significant problem for infrastructure operators. Typically inspection relies on pig surveys, lDCVG, CIPS and contact NDT methods that often require pipes to be uncovered and often at great expense. This report contains the results of trials carried out on a controlled test bed using a novel remote sensing technique known as Stress Concentration Tomography (SCT) which claims to be capable of detecting corrosion, metal defects and the effects of ground movement by mapping variations in the earth's magnetic field around pipelines. The physical law upon which SCT has been engineered is Magnetostriction which is the process by which internal domains inside the structure of ferromagnetic materials, such as carbon steel alloys, create magnetic fields when subjected to mechanical stress. This report contains the results of controlled trials of the technology which potentially offers pipeline operators, particularly those of non-piggable pipelines, the benefit of considerable inspection cost savings since it is a non-invasive technique and no modification to the line or its operational parameters is required.