To see the other types of publications on this topic, follow the link: Claim detection.

Journal articles on the topic 'Claim detection'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 journal articles for your research on the topic 'Claim detection.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Bandari, Sai Santosh Goud. "Machine Learning (ML) based Anomaly Detection in Insurance Industries." Journal of Information Systems Engineering and Management 10, no. 32s (2025): 13–21. https://doi.org/10.52783/jisem.v10i32s.5182.

Abstract:
Handling claims presents significant difficulties for the insurance sector, particularly in cases of duplicate claims, missing information, and false claims. Conventional manual techniques are prone to mistakes and inefficiencies, which substantially raises running expenses. This work presents an automated machine learning (ML) based solution to these problems, applying three specific ML techniques: DBSCAN clustering, the Isolation Forest classifier, and the Random Forest classifier. The Random Forest classifier enables early intervention by detecting claims that lack supporting evidence. DBSCAN clustering groups similar data points to help uncover and control duplicate claims, while the Isolation Forest classifier detects fraudulent claims by identifying anomalies in the data. Evaluated on a large dataset, the proposed solution demonstrated significant performance and accuracy benefits in claim processing. Results show that the ML models lower operational costs, reduce manual intervention, and improve fraud detection. By reducing delays and errors in claim processing, the automated approach also increases client satisfaction. By automating major portions of claim processing, this paper shows the potential of ML to change the insurance industry and to generate cost savings, higher efficiency, and fraud protection. ML technology will become increasingly important for the accuracy and efficiency of claim processing as the sector continues its digital transformation.
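
The three-model split described in this abstract maps naturally onto off-the-shelf scikit-learn components. The sketch below is a minimal illustration of that idea rather than the authors' implementation; the input file, feature columns, contamination rate, and the missing_evidence_label column are all assumptions made for the example.

```python
# Hypothetical sketch of the three-model pipeline described above; column
# names, thresholds, and the label column are illustrative assumptions.
import pandas as pd
from sklearn.cluster import DBSCAN
from sklearn.ensemble import IsolationForest, RandomForestClassifier
from sklearn.preprocessing import StandardScaler

claims = pd.read_csv("claims.csv")                      # assumed input file
features = ["claim_amount", "num_documents", "days_to_file"]
X = StandardScaler().fit_transform(claims[features])

# 1) Isolation Forest flags claims that look anomalous (possible fraud).
iso = IsolationForest(contamination=0.02, random_state=0)
claims["fraud_flag"] = iso.fit_predict(X) == -1

# 2) DBSCAN groups near-identical records; records sharing a dense cluster
#    are candidate duplicates, noise points are labelled -1.
claims["dup_cluster"] = DBSCAN(eps=0.3, min_samples=2).fit_predict(X)

# 3) Random Forest predicts claims lacking supporting evidence, trained on a
#    historical label column assumed to exist in the data.
rf = RandomForestClassifier(n_estimators=200, random_state=0)
rf.fit(X, claims["missing_evidence_label"])
claims["missing_evidence_pred"] = rf.predict(X)

print(claims[["fraud_flag", "dup_cluster", "missing_evidence_pred"]].head())
```
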
2

Porkodi, K. P. "Fraud Claim Detection Using Spark." IJIERT - International Journal of Innovations in Engineering Research and Technology 4, no. 2 (2017): 10–13. https://doi.org/10.5281/zenodo.1462257.

Abstract:
Objective: To reduce fraudulent claims in health insurance companies and to improve outcomes in the health care industry. Analysis: The existing system uses Apache Hadoop and Apache Hive for processing data, which is a batch processing approach. The proposed system uses Apache Spark for processing streaming data. Findings: EHR records are used as the data source; they contain a unique ID for each patient, so fraudulent claims can easily be detected with the help of the patient ID. Apache Spark processes streaming data on a regular basis, whereas the existing Apache Hadoop and Apache Hive setup takes hours to process the stored data. Improvement: A rule-based machine learning model is used to detect fraudulent claims automatically, and Apache Spark is used for fast data processing, so the approach is both more accurate and faster. https://www.ijiert.org/paper-details?paper_id=140995
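
As a rough illustration of the Spark-based, rule-driven detection the abstract describes, the snippet below flags repeated claims per patient ID with the PySpark DataFrame API. The input path, field names, and the duplicate rule are assumptions, and the same transformations could equally be applied to a streaming DataFrame created with spark.readStream.

```python
# Illustrative PySpark sketch of a rule-based duplicate/fraud check keyed on
# the patient ID; field names and the rule itself are assumptions.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.window import Window

spark = SparkSession.builder.appName("claim-fraud-rules").getOrCreate()
claims = spark.read.json("ehr_claims.json")     # assumed EHR-derived claim feed

# Rule: more than one claim from the same patient for the same procedure on
# the same service date is flagged as a potential duplicate/fraudulent claim.
w = Window.partitionBy("patient_id", "procedure_code", "service_date")
flagged = (claims
           .withColumn("claim_count", F.count("*").over(w))
           .withColumn("fraud_suspect", F.col("claim_count") > 1))

flagged.filter("fraud_suspect").show()
```
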
3

Agarwal, Shashank. "An Intelligent Machine Learning Approach for Fraud Detection in Medical Claim Insurance: A Comprehensive Study." Scholars Journal of Engineering and Technology 11, no. 09 (2023): 191–200. http://dx.doi.org/10.36347/sjet.2023.v11i09.003.

Abstract:
Medical claim insurance fraud poses a significant challenge for insurance companies and the healthcare system, leading to financial losses and reduced efficiency. In response to this issue, we present an intelligent machine-learning approach for fraud detection in medical claim insurance to enhance fraud detection accuracy and efficiency. This comprehensive study investigates the application of advanced machine learning algorithms for identifying fraudulent claims within the insurance domain. We thoroughly evaluate several candidate algorithms to select an appropriate machine learning algorithm, considering the unique characteristics of medical claim insurance data. Our chosen algorithm demonstrates superior capabilities in handling fraud detection tasks and is the foundation for our proposed intelligent approach. Our proposed approach incorporates domain knowledge and expert rules, augmenting the machine learning algorithm to address the intricacies of fraud detection within the insurance context. We introduce modifications to the algorithm, further enhancing its performance in detecting fraudulent medical claims. Through an extensive experimental setup, we evaluate the performance of our intelligent machine-learning approach. The results indicate significant accuracy, precision, recall, and F1-score improvements compared to traditional fraud detection methods. Additionally, we conduct a comparative analysis with other machine learning algorithms, affirming the superiority of our approach in this domain. The discussion section offers insights into the interpretability of the experimental findings and highlights the strengths and limitations of our approach. We conclude by emphasizing the significance of our research for the insurance industry and the potential impact on the healthcare system's efficiency and cost-effectiveness.
4

Prakosa, Hendri Kurniawan, and Nur Rokhman. "Anomaly Detection in Hospital Claims Using K-Means and Linear Regression." IJCCS (Indonesian Journal of Computing and Cybernetics Systems) 15, no. 4 (2021): 391. http://dx.doi.org/10.22146/ijccs.68160.

Abstract:
BPJS Kesehatan, which has been in existence for almost a decade, is still experiencing a deficit in the process of guaranteeing participants. One of the contributing factors is a discrepancy in the claim process that tends to harm BPJS Kesehatan, for example by upcoding the diagnosis so that the claim becomes larger, making double claims, or even recording false claims. Under government regulations, these actions constitute fraud. Fraud can be detected by looking at the anomalies that appear in the claim data. This research aims to determine anomalies in hospital claims to BPJS Kesehatan. The data used are BPJS claim data for 2015-2016, and the algorithm used is a combination of the K-Means algorithm and Linear Regression. For optimal clustering results, the density canopy algorithm was used to determine the initial centroids. Evaluation using the silhouette index resulted in a value of 0.82 with 5 clusters, and RMSE values from simple linear regression modelling of 0.49 for billing costs and 0.97 for length of stay. On this basis, there are 435 anomaly points out of 10,000 records, or 4.35%. It is hoped that with these identified, more effective follow-up can be carried out.
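
A simplified version of the K-Means plus linear-regression pipeline can be sketched as follows. The density-canopy centroid seeding is omitted, and the feature names, cluster count, and residual threshold are illustrative assumptions rather than the paper's exact configuration.

```python
# Rough sketch of per-cluster regression residuals as an anomaly signal;
# all column names and the 2*RMSE threshold are illustrative assumptions.
import numpy as np
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.linear_model import LinearRegression
from sklearn.metrics import silhouette_score

claims = pd.read_csv("bpjs_claims.csv")                 # assumed input
X = claims[["length_of_stay", "severity_level"]].values
y = claims["billing_cost"].values

km = KMeans(n_clusters=5, random_state=0).fit(X)
print("silhouette:", silhouette_score(X, km.labels_))

# Within each cluster, regress billing cost on the features and flag points
# whose absolute residual exceeds twice the cluster RMSE.
claims["anomaly"] = False
for c in np.unique(km.labels_):
    idx = km.labels_ == c
    reg = LinearRegression().fit(X[idx], y[idx])
    resid = y[idx] - reg.predict(X[idx])
    rmse = np.sqrt(np.mean(resid ** 2))
    claims.loc[idx, "anomaly"] = np.abs(resid) > 2 * rmse

print("fraction flagged as anomalous:", claims["anomaly"].mean())
```
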
5

Goutham, Bilakanti. "Enhancing Claim Processing Efficiency with Generative AI." International Journal of Leading Research Publication 3, no. 1 (2022): 1–11. https://doi.org/10.5281/zenodo.15196823.

Abstract:
This article examines the use of Generative AI in claim processing across diversified intake channels, including emails, faxed submissions, and call-center interactions. Under traditional claim processing, there is a great amount of manual labor in obtaining, validating, and processing claim information, which results in inefficiencies and delays. Advanced AI models such as NLP and GANs are used to automate data extraction, anomaly detection, and decision-making, thereby reducing the processing time and operational cost of handling claims. Automated intelligence increases precision through fewer human errors, enhanced fraud detection, and quicker approvals. The process not only optimizes efficiency but also enhances customer satisfaction through quicker claims settlement. Employing machine learning techniques enables ongoing model enhancement, responding to new claim behaviors and regulatory requirements. Insurance companies and banks can greatly improve compliance, mitigate risk, and enhance transparency by using real-time AI-powered insights. The article illustrates the transformative effect of Generative AI in making claim processing activities streamlined, scalable, and efficient across many industries.
6

Ikuomola, A. J., and O. E. Ojo. "An Effective Health Care Insurance Fraud and Abuse Detection System." Journal of Natural Sciences Engineering and Technology 15, no. 2 (2017): 1–12. http://dx.doi.org/10.51406/jnset.v15i2.1662.

Abstract:
Due to the complexity of the processes within healthcare insurance systems and the large number of participants involved, it is very difficult to supervise the systems for fraud. Fraud and abuse by healthcare service providers have become a serious problem. Practices such as billing for services that were never rendered, performing unnecessary medical services, and misrepresenting non-covered treatments as covered treatments not only contribute to the problem of rising health care expenditure but also affect the health of patients. Traditional methods of detecting health care fraud and abuse are time-consuming and inefficient. In this paper, the health care insurance fraud and abuse detection system (HECIFADES) is proposed. HECIFADES consists of six modules, namely: claim, augment claim, claim database, profile database, profile updater, and updated profiles. The system was implemented using Visual Studio 2010 and SQL. After testing, it was observed that HECIFADES was indeed an effective system for detecting fraudulent activities and a very secure way of generating medical claims. It also improves quality and mitigates potential payment risks and program vulnerabilities.
7

Faseela, V. S., and P. Thangam. "A Review on Health Insurance Claim Fraud Detection." International Journal of Engineering Research & Science 4, no. 9 (2018): 26–28. https://doi.org/10.5281/zenodo.1441226.

Abstract:
Anomaly or outlier detection is one of the applications of data mining, and its major use is fraud detection. Health care fraud leads to substantial losses of money each year in many countries, so effective fraud detection is important for reducing the cost of the health care system. This paper reviews the various approaches used for detecting fraudulent activities in health insurance claim data. The approaches reviewed in this paper are Hierarchical Hidden Markov Models and Non-Negative Matrix Factorization. The data mining goals achieved and the functions performed in these approaches are also given in this paper.
8

Nortey, Ezekiel N. N., Reuben Pometsey, Louis Asiedu, Samuel Iddi, and Felix O. Mettle. "Anomaly Detection in Health Insurance Claims Using Bayesian Quantile Regression." International Journal of Mathematics and Mathematical Sciences 2021 (February 23, 2021): 1–11. http://dx.doi.org/10.1155/2021/6667671.

Abstract:
Research has shown that current health expenditure in most countries, especially in sub-Saharan Africa, is inadequate and unsustainable. Yet, fraud, abuse, and waste in health insurance claims by service providers and subscribers threaten the delivery of quality healthcare. It is therefore imperative to analyze health insurance claim data to identify potentially suspicious claims. Typically, anomaly detection can be posited as a classification problem that requires the use of statistical methods such as mixture models and machine learning approaches to classify data points as either normal or anomalous. Additionally, health insurance claim data are mostly associated with problems of sparsity, heteroscedasticity, multicollinearity, and the presence of missing values. The analyses of such data are best addressed by adopting more robust statistical techniques. In this paper, we utilized the Bayesian quantile regression model to establish the relations between claim outcome of interest and subject-level features and further classify claims as either normal or anomalous. An estimated model component is assumed to inherently capture the behaviors of the response variable. A Bayesian mixture model, assuming a normal mixture of two components, is used to label claims as either normal or anomalous. The model was applied to health insurance data captured on 115 people suffering from various cardiovascular diseases across different states in the USA. Results show that 25 out of 115 claims (21.7%) were potentially suspicious. The overall accuracy of the fitted model was assessed to be 92%. Through the methodological approach and empirical application, we demonstrated that the Bayesian quantile regression is a viable model for anomaly detection.
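
The paper's model is a Bayesian quantile regression with a two-component normal mixture for labelling claims. As a loose classical stand-in only, the sketch below fits frequentist quantile regression with statsmodels and flags claims lying above the estimated 90th conditional quantile; the column names and the quantile level are assumptions.

```python
# Classical stand-in for the quantile-regression idea (not the paper's
# Bayesian model): flag claims above the 90th conditional quantile.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("cardio_claims.csv")                   # assumed input
X = sm.add_constant(df[["age", "num_procedures", "length_of_stay"]])
y = df["claim_amount"]

q90 = sm.QuantReg(y, X).fit(q=0.90)                     # upper conditional quantile
df["suspicious"] = y > q90.predict(X)
print("share flagged as potentially suspicious:", df["suspicious"].mean())
```
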
9

Siva, Krishna Jampani. "Fraud Detection in Insurance Claims Using AI." Journal of Scientific and Engineering Research 6, no. 1 (2019): 302–10. https://doi.org/10.5281/zenodo.14637405.

Abstract:
The insurance industry has faced issues with fraudulent claims, which have resulted in financial losses and operational inefficiencies. Integrating Artificial Intelligence offers a transformative way of detecting fraud by analyzing patterns in claim histories and customer profiles, along with external datasets. AI-driven techniques such as machine learning algorithms, natural language processing, and anomaly detection models now allow insurers to detect fraud with greater precision and efficiency. These systems use supervised and unsupervised learning methods for outlier detection, classification of risky claims, and reduction of false positives. The dynamic adaptability of AI solutions has proved effective against newly emerging fraud tactics, providing resilience against evolving threats. The present article investigates how fraud detection in insurance is being revolutionized by AI, focusing on health, auto, and property insurance claims, and examines in depth how it provides the industry with increased trust and better operational efficiency.
10

Bakeyalakshmi, P., and S. K. Mahendran. "Enhanced replica detection scheme for efficient analysis of intrusion detection in MANET." International Journal of Engineering & Technology 7, no. 1.1 (2017): 565. http://dx.doi.org/10.14419/ijet.v7i1.1.10169.

Abstract:
Nowadays, intrusion detection schemes play a major role in efficient access and analysis in mobile ad hoc networks (MANETs). In the past, intrusion detection schemes were used to assess the efficiency of the network, and in most systems they performed with a high rate of false alarms. In this paper, an effective Enhanced Replica Detection Scheme (ERDS) based on the Sequential Probability Ratio Test (SPRT) is proposed to detect malicious actions and to maintain a secure path with fewer claims in an efficient manner. It also provides strategies to avoid attackers and to ensure secure communication. For an efficient analysis of intrusion detection, the proposed approach is implemented based on anomaly detection. To achieve this, the detection scheme is established on the SPRT, and detection performance is demonstrated with fewer claims. Simulation results for control overhead, packet delivery ratio, detection efficiency, energy consumption, and average claims are analysed using a network simulator tool to show the improvement over existing approaches. The performance of the proposed system also illustrates intrusion detection in both the normal and attacker states of the network.
11

Bokaei, Mohammad Hadi, Minoo Nassajian, Mojgan Farhoodi, and Mona Davoudi Shamsi. "Claim Detection in Persian Twitter Posts." International Journal of Information and Communication Technology Research 16, no. 3 (2024): 25–34. https://doi.org/10.61186/itrc.16.3.25.

12

Abumansour, Amani S., and Arkaitz Zubiaga. "Check-worthy claim detection across topics for automated fact-checking." PeerJ Computer Science 9 (May 16, 2023): e1365. http://dx.doi.org/10.7717/peerj-cs.1365.

Abstract:
An important component of an automated fact-checking system is the claim check-worthiness detection system, which ranks sentences by prioritising them based on their need to be checked. Despite a body of research tackling the task, previous research has overlooked the challenging nature of identifying check-worthy claims across different topics. In this article, we assess and quantify the challenge of detecting check-worthy claims for new, unseen topics. After highlighting the problem, we propose the AraCWA model to mitigate the performance deterioration when detecting check-worthy claims across topics. The AraCWA model enables boosting the performance for new topics by incorporating two components for few-shot learning and data augmentation. Using a publicly available dataset of Arabic tweets consisting of 14 different topics, we demonstrate that our proposed data augmentation strategy achieves substantial improvements across topics overall, where the extent of the improvement varies across topics. Further, we analyse the semantic similarities between topics, suggesting that the similarity metric could be used as a proxy to determine the difficulty level of an unseen topic prior to undertaking the task of labelling the underlying sentences.
13

Annaboina, Krishna, Samala Prasoona, Chada Ashritha, and Pesara Chakradhar Reddy. "Fraud Detection in Medical Insurance Claim Systems using Machine Learning." International Journal of Scientific Research in Engineering and Management 09, no. 01 (2025): 1–9. https://doi.org/10.55041/ijsrem40522.

Abstract:
Fraud detection in medical insurance claim systems is crucial for preserving healthcare service integrity and minimizing financial losses. This study explores the application of Support Vector Machines (SVM) enhanced by GridSearchCV for hyperparameter optimization, aiming to detect fraudulent claims effectively. The research methodology involves preprocessing a comprehensive medical insurance claims dataset, focusing on extensive feature selection and engineering to improve model performance. GridSearchCV is utilized to conduct an exhaustive search over specified parameter ranges, identifying the optimal hyperparameters for the SVM model. To evaluate the model's effectiveness, metrics such as accuracy, precision, recall, and F1-score are employed. The results indicate that the optimized SVM model significantly improves the detection of fraudulent claims, outperforming baseline models. This study underscores the efficacy of integrating SVM with GridSearchCV in developing robust fraud detection systems, contributing to more reliable and efficient processing of medical insurance claims. Keywords – Fraud detection, Machine learning, Support Vector Machines (SVM), GridSearchCV, Hyperparameter optimization, Model accuracy, Evaluation metrics, Accuracy, Precision.
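
The SVM-plus-GridSearchCV setup described here is straightforward to reproduce in scikit-learn. The minimal sketch below assumes a CSV of engineered features with an is_fraud label; the parameter grid and scoring metric are illustrative choices, not those reported by the authors.

```python
# Minimal SVM tuning with GridSearchCV; file name, label column, grid, and
# scoring choice are all assumptions made for the example.
import pandas as pd
from sklearn.metrics import classification_report
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

df = pd.read_csv("medical_claims.csv")                  # assumed input
X, y = df.drop(columns=["is_fraud"]), df["is_fraud"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2,
                                          stratify=y, random_state=0)

pipe = make_pipeline(StandardScaler(), SVC())
grid = {"svc__C": [0.1, 1, 10, 100],
        "svc__gamma": ["scale", 0.01, 0.001],
        "svc__kernel": ["rbf", "linear"]}
search = GridSearchCV(pipe, grid, scoring="f1", cv=5)
search.fit(X_tr, y_tr)

print("best params:", search.best_params_)
print(classification_report(y_te, search.predict(X_te)))
```
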
14

Goutham, Bilakanti. "Automated Healthcare Claim Processing with AWS AI." International Journal of Leading Research Publication 5, no. 1 (2024): 1–12. https://doi.org/10.5281/zenodo.15196969.

Abstract:
The integration of artificial intelligence (AI) and cloud-based solutions is revolutionizing healthcare claims processing by enhancing efficiency, accuracy, and security. This study explores an AWS-powered system that leverages Amazon Textract, AWS Step Functions, and Amazon Rekognition to automate document extraction, validation, and fraud detection. By eliminating manual processes, the AI-driven system significantly reduces claim processing time, minimizes errors, and ensures compliance with regulatory standards. Machine learning algorithms improve data classification and enhance decision-making, leading to faster reimbursements and reduced administrative burdens for healthcare payers and providers. The incorporation of AWS Step Functions optimizes workflow automation, while Amazon Rekognition strengthens fraud detection by identifying inconsistencies in submitted claims. The cloud-based architecture ensures scalability, reliability, and secure data management, facilitating seamless claim approvals. Furthermore, AI models continuously adapt to evolving fraud patterns, reinforcing system integrity and trust. This study highlights the impact of AI in streamlining claims processing, reducing costs, and enhancing operational efficiency in healthcare. The findings contribute to the ongoing transformation of healthcare administration through intelligent automation, positioning AI as a key enabler of future-ready healthcare systems.
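
For the document-extraction step the abstract mentions, a call to Amazon Textract through boto3 might look like the hedged sketch below. The bucket, object name, and region are placeholders, and the Step Functions routing and Rekognition checks the study describes are not shown.

```python
# Hedged sketch: extract text and form data from a scanned claim image with
# Amazon Textract; names are placeholders, downstream validation not shown.
import boto3

textract = boto3.client("textract", region_name="us-east-1")

response = textract.analyze_document(
    Document={"S3Object": {"Bucket": "claims-intake-bucket",
                           "Name": "claim-form-001.png"}},
    FeatureTypes=["FORMS", "TABLES"],
)

# Each detected block carries its type, text, and confidence score, which a
# workflow could route to automated validation or manual review.
for block in response["Blocks"]:
    if block["BlockType"] == "LINE":
        print(f'{block["Confidence"]:.1f}%  {block["Text"]}')
```
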
15

Mahaboobsubani, Shaik. "Intelligent Automation for Insurance Claims Processing." International Journal of Innovative Research in Engineering & Multidisciplinary Physical Sciences 7, no. 4 (2019): 1–9. https://doi.org/10.5281/zenodo.14352226.

Abstract:
This paper examines the transformational effect of intelligent automation through machine learning models on insurance claim processing. Traditional manually operated workflows within claims processing are vulnerable to inefficiency, inaccuracies, and fraudulent cases. Intelligent automation overcomes these challenges by streamlining processes, offering rapid claim validation and more accurate detection of fraud cases. The research illustrates an integrated claims-processing mechanism that leverages predictive analytics and anomaly detection to filter out fraudulent patterns. Accuracy rates, time-to-resolution, and cost savings have seen significant enhancement compared to a purely manual process. Empirical analysis suggests that automated systems reduce processing time by as much as 70% and can detect fraudulent claims with an accuracy above 95%. In addition, intelligent systems provide scalability, flexibility to policy changes within insurance, and better customer satisfaction by accelerating the processing of genuine claims. The results highlight how ML-driven automation can make claims management more efficient, reliable, and trustworthy in the insurance industry.
16

Wicaksono, Ridwan Lazuardy Bimo, and Aletta Divna Valensia Rohman. "Analysis Automobile Insurance Fraud Claim Using Decision Tree and Random Forest Method." International Journal of Global Operations Research 5, no. 4 (2024): 231–38. https://doi.org/10.47194/ijgor.v5i4.337.

Abstract:
Insurance fraud, particularly in the automobile sector, poses significant financial risks to insurance companies. This study aims to analyze fraudulent claims in automobile insurance using Decision Tree and Random Forest methods. A dataset consisting of 10,000 entries was utilized, containing variables such as vehicle type, claim amount, and claim status. The Decision Tree method was employed for its interpretability, while Random Forest was used for its superior accuracy. Results indicated that the Random Forest model outperformed the Decision Tree model, achieving an accuracy of 51.37% compared to 50.47%. This research highlights the effectiveness of machine learning techniques in detecting insurance fraud and provides insights for insurers to enhance their fraud detection systems.
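
The head-to-head comparison reported above can be reproduced with a few lines of scikit-learn. The sketch below assumes a CSV with a claim-status label and two example features, which are stand-ins rather than the study's actual variables.

```python
# Train a Decision Tree and a Random Forest on the same claim data and
# compare test accuracy; column names and split are assumptions.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

df = pd.read_csv("auto_claims.csv")                     # assumed input
X = pd.get_dummies(df[["vehicle_type", "claim_amount"]])
y = df["claim_status"]                                  # fraud / legitimate
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=42)

models = [("decision tree", DecisionTreeClassifier(random_state=42)),
          ("random forest", RandomForestClassifier(n_estimators=100,
                                                   random_state=42))]
for name, model in models:
    model.fit(X_tr, y_tr)
    print(name, "accuracy:", accuracy_score(y_te, model.predict(X_te)))
```
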
17

Saravanakumar, V. "Detecting and Preventing Fraud in Insurance Claims by using Artificial Intelligence." ComFin Research 13, S1-i1-Mar (2025): 161–65. https://doi.org/10.34293/commerce.v13is1-i1-mar.8673.

Abstract:
Insurance companies have suffered financial losses as a result of fraudulent activities. People in the insurance industry believe that fraud exists everywhere and that it is being discovered across lines of business. Fraud in insurance claims is a significant issue for insurance companies, and it has gotten worse in recent years. Most people recognize that traditional fraud detection methods are not enough to detect and prevent fraudulent claims in insurance. This paper explores current trends in the insurance sector, particularly in identifying fraudulent activities in claims, and the use of artificial intelligence to detect and prevent deceitful claims. The paper highlights the different kinds of fraud and their impact on insurance companies and policyholders; the traditional methods used to detect and prevent fraudulent claims, along with their problems and limited capabilities; the emergence of artificial intelligence and its impact on various sectors; emerging AI-based fraud detection methods and their capabilities; the benefits gained by using such methods; and future trends in the insurance sector.
18

Subbian, Rajkumar Govindaswamy. "Enhancing Operational Efficiency in Claims Processing Through Technology." Asian Journal of Research in Computer Science 18, no. 3 (2025): 456–66. https://doi.org/10.9734/ajrcos/2025/v18i3604.

Abstract:
AIM: To analyze the impact of Technology-Driven Claims Automation, with a focus on real-time fraud detection and enhancing accuracy in claims intake and validation. This study explores how advanced technologies such as AI, machine learning, and automation streamline claims workflows, reduce processing time, and enhance decision-making accuracy while mitigating fraudulent activities in real-time. Study Design: A quasi-experimental design was employed to assess the effectiveness of Technology-Driven Claims Automation. The study analyzed pre- and post-implementation performance metrics, such as Claim Cycle Time, Claims Straight-Through Processing (STP) Rate, and Fraud Detection Rate. Place and Duration of Study: This study was conducted at Global Insurance Systems over a 16-week period from April to September 2024, involving all applications, tools and software used in Claims. Methodology: The methodology for enhancing claims processing leverages technology advancements in AI, automation, and predictive analytics to improve efficiency, accuracy, and fraud detection in Property & Casualty (P&C) insurance. It involves automated claims intake and processing, claims document verification, claims triaging and claim adjudication. Automated claims intake and processing eliminates manual data entry by using AI-powered chatbots, RPA, and cloud-based integrations, enabling policyholders to submit claims via self-service portals while AI validates and processes the information. Claims document verification applies OCR, NLP, and blockchain-based authentication to instantly extract, validate, and cross-check information from policyholder documents, invoices, and reports, improving accuracy and preventing fraud. Claims triaging utilizes OCR, machine learning, and computer vision to classify claims based on severity, risk, and potential fraud, ensuring legitimate claims are fast-tracked while suspicious cases are flagged for review. Multi-step workflow automation in claims adjudication integrates rule-based decision engines and predictive analytics to verify policy coverage, assess fraud risks, and automate approvals or payouts, reducing human intervention and processing time. Conclusion: The introduction of AI, automation, and predictive analytics in claims processing has significantly improved efficiency, fraud detection, and accuracy in the P&C insurance industry. In our study, it reduced Claim Cycle Time by 50%, increased Fraud Detection Accuracy by 25%, reduced operational costs by 40%, and increased Customer Satisfaction by 35%. By leveraging automated claims intake, triaging, adjudication workflows, and document verification, insurers can streamline operations, reduce manual intervention, and enhance customer experience. These advancements enable faster settlements, better risk assessment, and improved compliance with regulatory standards. As technology continues to evolve, embracing AI-driven solutions will be essential for insurers to stay competitive, minimize fraud, and deliver seamless, real-time claims processing.
19

Liu, Yuxuan, Hongda Sun, Wenya Guo, et al. "BiDeV: Bilateral Defusing Verification for Complex Claim Fact-Checking." Proceedings of the AAAI Conference on Artificial Intelligence 39, no. 1 (2025): 541–49. https://doi.org/10.1609/aaai.v39i1.32034.

Abstract:
Complex claim fact-checking performs a crucial role in disinformation detection. However, existing fact-checking methods struggle with claim vagueness, specifically in effectively handling latent information and complex relations within claims. Moreover, evidence redundancy, where non-essential information complicates the verification process, remains a significant issue. To tackle these limitations, we propose Bilateral Defusing Verification (BiDeV), a novel fact-checking working-flow framework integrating multiple role-played LLMs to mimic the human-expert fact-checking process. BiDeV consists of two main modules: Vagueness Defusing identifies latent information and resolves complex relations to simplify the claim, and Redundancy Defusing eliminates redundant content to enhance the evidence quality. Extensive experimental results on two widely used challenging fact-checking benchmarks (Hover and Feverous-s) demonstrate that our BiDeV can achieve the best performance under both gold and open settings. This highlights the effectiveness of BiDeV in handling complex claims and ensuring precise fact-checking.
20

Shreedharan, Krishna Kumar. "Automated Claims Processing in Guidewire ClaimCenter: Enhancing Efficiency and Accuracy in the Insurance Industry." Asian Journal of Research in Computer Science 18, no. 3 (2025): 232–38. https://doi.org/10.9734/ajrcos/2025/v18i3589.

Abstract:
Aims: This study explores the benefits, challenges, and future trends in the implementation of automated claims processing in Guidewire ClaimCenter, a leading software platform for insurance providers, and provides insights to help insurers who intend to implement automated claims processing in Guidewire ClaimCenter. Study Design: Mixed-methods approach, combining both qualitative and quantitative research to provide a comprehensive analysis of automation in Guidewire's claims processing. Place and Duration of Study: Analysis between February 2024 and September 2024, based on data from North America, Europe, and Asia-Pacific insurance markets as documented in vendor case studies, expert interviews, customer testimonials, and industry reports. Methodology: Reviewed Guidewire product documentation, industry reports, whitepapers, and research papers related to digital transformation in insurance. Case study analysis from insurance companies that have implemented Guidewire ClaimCenter for claims automation, like Nationwide, AXA, and Liberty Mutual. Key performance indicators analyzed include claims processing time, cost savings, fraud detection, and customer experience improvements. Quantitative analysis included an online survey targeting about 100 professionals, such as claim adjusters, underwriters, and claim managers in insurance companies using Guidewire ClaimCenter, with questions focused on customer satisfaction, efficiency improvements, and cost savings. Results: Automated processing of claims in Guidewire ClaimCenter resulted in up to a 50% reduction in claim settlement time for standard claims, enhanced fraud detection, improved customer satisfaction, and reduced adjuster workload by up to 30%. Challenges with automated claims processing include integration complexities, workforce adaptation, AI limitations, and handling complex claims. Conclusion: Guidewire ClaimCenter's automation capabilities are transforming claims processing by enhancing efficiency, reducing costs, and improving policyholder satisfaction. Case studies from insurers demonstrate how Guidewire's automation has led to faster settlements, reduced fraud, and improved customer experiences. Insurers investing in Guidewire ClaimCenter's automation capabilities will be well-positioned to stay competitive in the evolving digital landscape.
21

Lomas, Dennis. "Representation of basic kinds: Not a case of evolutionary internalization of universal regularities." Behavioral and Brain Sciences 24, no. 4 (2001): 686–87. http://dx.doi.org/10.1017/s0140525x01500084.

Abstract:
Shepard claims that “evolutionary internalization of universal regularities in the world” takes place. His position is interesting and seems plausible with regard to “default” motion detection and aspects of colour constancy which he addresses. However, his claim is not convincing with regard to object recognition. [Shepard]
22

Rahayu, Tiny, Mia Rahma Tika, and Sapta Lestariyowidodo. "Analysis Of Outside Claim Fragmentation On BPJS Claims In Hospital." KESANS : International Journal of Health and Science 1, no. 1 (2021): 22–27. http://dx.doi.org/10.54543/kesans.v1i1.6.

Abstract:
The Social Security Organizing Agency (BPJS) has provisions regarding fraud, one form of which is the breaking up of service episodes in a way that is not in accordance with medical indications (service unbundling or fragmentation). This is done intentionally by health care providers in Referral Follow-Up Health Facilities (FKRTL) to obtain financial benefits from the Health Insurance program in the National Social Security System through fraudulent acts that are not in accordance with the provisions of the laws and regulations. The purpose of this study is to analyze the occurrence of fragmentation in Hospital X. The research uses quantitative methods on the data obtained. The results show that the hospital carried out fragmentation in 33 files in February and 24 files in March, with fragmentation found in mountax services. The hospital argued that it did not experience losses because of the claim package arranged by the Health Insurance Organizing Agency (BPJS), whereas BPJS prohibits fragmentation because it constitutes fraud. In this study, fragmentation could occur because standard operating procedures were not applied properly; in accordance with PERMENKES Number 16 of 2020, the hospital must have a fraud prevention team in order to conduct early detection.
23

Ncube, Hopewell Bongani, Belinda Mutunhu, Sibusisiwe Dube, and Kudakwashe Maguraushe. "Blockchain-Based Fraud Detection System for Healthcare Insurance Claims." European Conference on Innovation and Entrepreneurship 19, no. 1 (2024): 540–47. http://dx.doi.org/10.34190/ecie.19.1.2558.

Abstract:
Healthcare fraud is a huge concern that affects not only the financial viability of insurance companies but also the well-being of patients who may receive compromised care due to fraudulent acts. Addressing this issue demands novel solutions that can detect and prevent fraudulent conduct in healthcare insurance claims. The project intends to establish an automated fraud detection system using blockchain technology, which has advantages such as security, transparency, and data immutability. By leveraging blockchain's decentralized ledger, the system creates a tamper-proof platform for processing healthcare insurance claims, preventing fraudulent alterations and enhancing trust in the integrity of the claims process. Ethereum's blockchain platform and smart contracts play a critical role in ensuring the secure recording of transactions while preventing retroactive alterations. Moreover, an on-chain database is employed to manage relevant claim data, thereby safeguarding its integrity and ensuring accessibility. The decentralized nature of blockchain technology brings additional advantages by eliminating the need for intermediaries, thereby reducing administrative costs and streamlining the claim processing workflow. The adoption of methodologies such as Personal Extreme Programming (PXP) and Design Science Research Methodology (DSRM) fortifies the project's framework. PXP facilitates continuous improvement through incremental and iterative development, while DSRM ensures a structured approach to problem-solving, yielding reliable results. Through rigorous testing and validation, the automated fraud detection system enhances the efficiency and accuracy of fraud identification in healthcare insurance claims. By combining blockchain technology with methodological frameworks, this project offers a promising solution to combat healthcare fraud, safeguarding insurance systems' integrity and ensuring quality care for patients. Future iterations will focus on expanding the system's capabilities and refining its algorithms to counter the evolving fraudulent tactics prevalent in the healthcare industry.
24

Ricchetti-Masterson, Kristen, Molly Aldridge, John Logie, Nittaya Suppapanya, and Suzanne F. Cook. "Exploring Methods to Measure the Prevalence of Ménière's Disease in the US Clinformatics™ Database, 2010-2012." Audiology and Neurotology 21, no. 3 (2016): 172–77. http://dx.doi.org/10.1159/000441963.

Abstract:
Recent studies on the epidemiology of the inner-ear disorder Ménière's disease (MD) use disparate methods for sample selection, case identification and length of observation. Prevalence estimates vary geographically from 17 to 513 cases per 100,000 people. We explored the impact of case detection strategies and observation periods in estimating the prevalence of MD in the USA, using data from a large insurance claims database. Using case detection strategies of ≥1, ≥2 and ≥3 ICD-9 claim codes for MD within a 1-year period, the 2012 prevalence estimates were 66, 27 and 14 cases per 100,000 people, respectively. For ≥1, ≥2 and ≥3 insurance claims within a 3-year observation period, the prevalence estimates were 200, 104 and 66 cases per 100,000 people, respectively. Estimates based on a single claim are likely to overestimate prevalence; this conclusion is aligned with the American Academy of Otolaryngology-Head and Neck Foundation criteria requiring ≥2 definitive episodes for a definite diagnosis, and it has implications for future epidemiologic research. We believe estimates for ≥2 claims may be a more conservative estimate of the prevalence of MD, and multiyear estimates may be needed to allow for adequate follow-up time.
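
The thresholded case-detection strategies compared in this study reduce to a group-and-count over member-level claims. The toy pandas sketch below assumes a per-claim table and an enrolled-population figure, both invented for illustration.

```python
# Toy prevalence calculation for >=1, >=2 and >=3 MD claim codes per member;
# the file, column names, and enrolled population are assumptions.
import pandas as pd

claims = pd.read_csv("md_claims.csv")       # one row per MD-coded claim
enrolled = 2_500_000                        # assumed enrolled population

counts = claims.groupby("member_id").size()
for k in (1, 2, 3):
    cases = (counts >= k).sum()
    prevalence = cases / enrolled * 100_000
    print(f">= {k} claim codes: {cases} cases, {prevalence:.0f} per 100,000")
```
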
25

Machireddy, Jeshwanth Reddy. "Machine Learning and Automation in Healthcare Claims Processing." Journal of Artificial Intelligence General science (JAIGS) ISSN:3006-4023 6, no. 1 (2024): 686–701. https://doi.org/10.60087/jaigs.v6i1.335.

Abstract:
Healthcare systems have evolved rapidly, driven by the need for efficient and accurate claims processing that reduces fraud and errors and increases operational efficiency. The existing traditional manual methods are error-prone, time-consuming, and costlier in terms of administration. This chapter provides insight into how Machine Learning (ML) and Automation are fundamentally changing the way healthcare claims are managed. We explore essential ML techniques in the form of predictive analytics, anomaly detection, natural language processing (NLP), and robotic process automation (RPA) that further enhance claims validation, fraud detection, and adjudication. The chapter examines various case studies and tangible implementations that have reported better claim accuracy, shorter processing times, and increased fraud deterrence. Moreover, we discuss challenges including data privacy, ethical considerations, and regulatory compliance in AI-driven automation. This will not only lead to flawless operations and quicker settlement of claims but will also reduce operational costs and, in turn, improve patient satisfaction with the quality of care.
26

Chethan, Chinthaprthi. "Automated Vehicle Damage Detection and Cost Estimator for Insurance Companies." International Journal for Research in Applied Science and Engineering Technology 13, no. 3 (2025): 2963–68. https://doi.org/10.22214/ijraset.2025.67989.

Abstract:
Using state-of-the-art deep learning algorithms, the "Automated Vehicle Damage Detection & Cost Estimator for Insurance Companies" project seeks to transform the vehicle insurance industry by automating the damage assessment process. This AI-powered system uses YOLOv5, one of the most sophisticated object identification models available, to effectively analyse and evaluate vehicle damage, including that of two-wheelers and four-wheelers. The model can precisely identify, categorize, and assess various forms of damage, such as scratches, dents, cracks, and shattered pieces, because it has been thoroughly trained on a vast dataset of car damages. This high degree of accuracy lowers the possibility of errors in computations and false claims by guaranteeing that even small damages are accurately identified. Insurance firms may improve the speed of their claims processing, reduce human error, and produce accurate repair cost estimates based on real-time damage analysis by putting deep learning and computer vision technology into practice. By automating the evaluation process, the system will do away with the necessity for manual inspections, which are frequently laborious, arbitrary, and prone to errors. Additionally, the system would forecast repair costs based on variables like labor prices, spare parts, and vehicle depreciation by integrating repair cost estimating models. This will enable insurance companies to give equitable and transparent claim settlements. The entire insurance process will be greatly streamlined by this cutting-edge technology, which will not only speed up damage assessment but also result in quicker claim settlements and higher customer satisfaction. Owners of vehicles may now upload photos using a mobile application, and the system will automatically assess the damage and provide an immediate cost estimate, eliminating the need for them to wait days for damage assessment. By guaranteeing impartial, data-driven, and equitable claim handling, this automation-driven strategy would not only lower operating expenses for insurance companies but also foster confidence between policyholders and insurers.
27

Aditya, Kurniawan, Widodo Sri, and Maulindar Joni. "Fraud Detection in Health Insurance Claims Based on Artificial Intelligence (AI)." Engineering and Technology Journal 10, no. 02 (2025): 3735–39. https://doi.org/10.5281/zenodo.14788653.

Abstract:
Fraud in health insurance claims has become a significant problem affecting the provision of healthcare globally. In addition to the financial losses incurred, patients who actually need medical treatment also suffer, because healthcare providers are not paid on time as a result of delays in the manual scrutiny of their claims. Health insurance claim fraud is perpetrated by service providers, insurance customers, and insurance companies. A decision support system (DSS) for accurate claims processing that can automatically detect fraud is therefore urgently needed. The purpose of this research is to create a machine learning model that can detect fraud in health insurance claims based on artificial intelligence (AI). The method used is deep learning, and the model achieved an accuracy of 86%.
28

Pushpak, Subodh Nath. "Quantum Machine Learning Technique for Insurance Claim Fraud Detection with Quantum Feature Selection." Journal of Information Systems Engineering and Management 10, no. 8s (2025): 750–56. https://doi.org/10.52783/jisem.v10i8s.1193.

Abstract:
This paper demonstrates a novel use of quantum machine learning (QML) algorithms for detecting fraudulent activities in the home insurance sector. Utilizing actual data and IBM Quantum processors through the Qiskit software stack, the study introduces an innovative method for selecting quantum features that is specifically designed to accommodate the limitations of Noisy Intermediate-Scale Quantum (NISQ) technology by using the Quantum Support Vector Machine (QSVM) in conjunction with traditional machine learning techniques. A comprehensive comparison was conducted to evaluate their effectiveness in detecting fraud, and indicators such as accuracy, recall, and false positive rate are carefully analyzed. Despite the constraints of current quantum technology, QSVM shows excellent accuracy, especially on limited datasets, indicating its potential to enhance insurance fraud detection. The research emphasizes the crucial role of feature selection in optimizing QML algorithms for fraud detection tasks and investigates the capacities of hybrid quantum/classical machine learning ensembles. Future research directions include expanding this study to actual hardware implementations to verify its practical feasibility. The work enhances financial security in the insurance business by using quantum computing technology in fraud detection approaches, and it establishes the feasibility and efficacy of using quantum resources to solve difficult real-world issues, setting a foundation for the broader application of QML in fraud detection and other fields.
29

Hapsari, Luthfia Nurma, and Nur Rokhman. "Anomaly Detection of Hospital Claim Using Support Vector Regression." IJCCS (Indonesian Journal of Computing and Cybernetics Systems) 18, no. 1 (2024): 1. http://dx.doi.org/10.22146/ijccs.91857.

Abstract:
BPJS Kesehatan plays a crucial role in providing affordable access to healthcare services and reducing individual financial burdens. However, deficit issues can disrupt the sustainability of the program, making anomaly detection highly important to conduct. Previous research on unsupervised anomaly detection in BPJS Kesehatan revealed a limitation with Simple Linear Regression (SLR), which only accommodates linear relationships among independent variables and the target variable of BPJS Kesehatan claim values. Minister of Health Regulation No. 52 of 2016 identified eight influential non-linear independent variables, leading to the proposal of Support Vector Regression (SVR) to address SLR's shortcomings. Research findings demonstrate SVR's superior anomaly detection performance over SLR. Interestingly, the SVR model excels in anomaly detection but lacks in prediction. Optimal tuning of SVR hyperparameters (C=9, epsilon=90, gamma=0.009, residual anomaly definition > 0.5*RMSE for both datasets) yields impressive metrics: Accuracy=0.97, Precision=0.84, Recall=0.97, and F1-Score=0.90. The anomaly detection results are expected to greatly support the sustainability of the BPJS Kesehatan program in Indonesia.
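
Using the hyperparameters reported in the abstract (C=9, epsilon=90, gamma=0.009) and the residual rule of > 0.5 * RMSE, a bare-bones SVR anomaly flagger could look like the sketch below; the feature columns and input file are assumptions.

```python
# SVR residual-based anomaly flagging with the hyperparameters quoted above;
# feature columns and input file are illustrative assumptions.
import numpy as np
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

df = pd.read_csv("bpjs_claims.csv")                     # assumed input
X = StandardScaler().fit_transform(
    df[["length_of_stay", "severity", "procedure_count"]])
y = df["claim_value"].values

svr = SVR(kernel="rbf", C=9, epsilon=90, gamma=0.009).fit(X, y)
resid = np.abs(y - svr.predict(X))
rmse = np.sqrt(np.mean(resid ** 2))
df["anomaly"] = resid > 0.5 * rmse                      # the reported residual rule
print(df["anomaly"].sum(), "claims flagged as anomalous")
```
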
30

Sufi, Fahim, and Musleh Alsulami. "Quantifying Truthfulness: A Probabilistic Framework for Atomic Claim-Based Misinformation Detection." Mathematics 13, no. 11 (2025): 1778. https://doi.org/10.3390/math13111778.

Abstract:
The increasing sophistication and volume of misinformation on digital platforms necessitate scalable, explainable, and semantically granular fact-checking systems. Existing approaches typically treat claims as indivisible units, overlooking internal contradictions and partial truths, thereby limiting their interpretability and trustworthiness. This paper addresses this gap by proposing a novel probabilistic framework that decomposes complex assertions into semantically atomic claims and computes their veracity through a structured evaluation of source credibility and evidence frequency. Each atomic unit is matched against a curated corpus of 11,928 cyber-related news entries using a binary alignment function, and its truthfulness is quantified via a composite score integrating both source reliability and support density. The framework introduces multiple aggregation strategies—arithmetic and geometric means—to construct claim-level veracity indices, offering both sensitivity and robustness. Empirical evaluation across eight cyber misinformation scenarios—encompassing over 40 atomic claims—demonstrates the system’s effectiveness. The model achieves a Mean Squared Error (MSE) of 0.037, Brier Score of 0.042, and a Spearman rank correlation of 0.88 against expert annotations. When thresholded for binary classification, the system records a Precision of 0.82, Recall of 0.79, and an F1-score of 0.805. The Expected Calibration Error (ECE) of 0.068 further validates the trustworthiness of the score distributions. These results affirm the framework’s ability to deliver interpretable, statistically reliable, and operationally scalable misinformation detection, with implications for automated journalism, governmental monitoring, and AI-based verification platforms.
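
The scoring scheme described here, composite atomic-claim scores aggregated with arithmetic and geometric means, can be illustrated in plain Python. The snippet below is a toy reduction of the idea: the matching function, blending weight, and corpus entries are invented stand-ins for the paper's binary alignment function and curated corpus.

```python
# Toy veracity scoring: each atomic claim gets a composite score from source
# reliability and support frequency; claim-level indices use arithmetic and
# geometric means. All data and weights are invented for illustration.
from math import prod

corpus = [
    {"text": "bank X reported a ransomware incident", "reliability": 0.9},
    {"text": "bank X services were restored within a day", "reliability": 0.7},
]

def supports(evidence: dict, atomic_claim: str) -> bool:
    """Naive keyword-overlap stand-in for the binary alignment function."""
    return all(word in evidence["text"] for word in atomic_claim.split()[:3])

def atomic_score(atomic_claim: str, alpha: float = 0.5) -> float:
    """Blend mean source reliability with evidence frequency."""
    hits = [e for e in corpus if supports(e, atomic_claim)]
    if not hits:
        return 0.0
    reliability = sum(e["reliability"] for e in hits) / len(hits)
    frequency = len(hits) / len(corpus)
    return alpha * reliability + (1 - alpha) * frequency

atomic_claims = ["bank X reported a ransomware incident",
                 "bank X lost all customer data"]
scores = [atomic_score(c) for c in atomic_claims]

arithmetic_index = sum(scores) / len(scores)        # sensitive aggregate
geometric_index = prod(scores) ** (1 / len(scores)) # collapses if any claim is unsupported
print(scores, arithmetic_index, geometric_index)
```
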
31

Harrag, Fouzi, and Mohamed Khalil Djahli. "Arabic Fake News Detection: A Fact Checking Based Deep Learning Approach." ACM Transactions on Asian and Low-Resource Language Information Processing 21, no. 4 (2022): 1–34. http://dx.doi.org/10.1145/3501401.

Abstract:
Fake news stories can polarize society, particularly during political events, and they undermine confidence in the media in general. Current NLP systems still lack the ability to properly interpret and classify Arabic fake news. Given the high stakes involved, determining truth in social media has recently become an emerging research area that is attracting tremendous attention. Our literature review indicates that applying state-of-the-art approaches to news content addresses some challenges in detecting fake news characteristics, which need auxiliary information to make a clear determination. Moreover, 'social-context-based' and 'propagation-based' approaches can be either an alternative or a complementary strategy to content-based approaches. The main goal of our research is to develop a model capable of automatically detecting truth given an Arabic news item or claim. In particular, we propose a deep neural network approach that can classify fake and real news claims by exploiting Convolutional Neural Networks. Our approach attempts to solve the problem from the fact-checking perspective, where the fact-checking task involves predicting whether a given news text claim is factually authentic or fake. We opt to use a balanced Arabic corpus to build our model because it unifies stance detection, stance rationale, relevant document retrieval, and fact-checking. The model is trained on different well-selected attributes. An extensive evaluation has been conducted to demonstrate the ability of the fact-checking task to detect Arabic fake news. Our model outperforms state-of-the-art approaches when applied to the same Arabic dataset, with a highest accuracy of 91%.
32

Sahana, Munavalli, and M. Hatture Sanjeevakumar. "Fraud Detection in Healthcare System using Symbolic Data Analysis." International Journal of Innovative Technology and Exploring Engineering (IJITEE) 10, no. 9 (2021): 1–7. https://doi.org/10.35940/ijitee.H9269.0710921.

Abstract:
In the era of digitization, fraud is found in all categories of health insurance. It is committed through deliberate deception or misrepresentation in order to obtain an undeserved benefit in the form of health expenditures. Big data analysis can be utilized to recognize fraud in large sets of insurance claim data. Based on a few cases that are known or suspected to be fraudulent, the anomaly detection technique computes the likelihood of each record being fraudulent by examining previous insurance claims. Investigators can then take a closer look at the cases that have been flagged by the data mining software. One of the issues is the abuse of medical insurance systems, and manual detection of fraud in the healthcare industry is strenuous work. Fraud and abuse in the health care system have become a significant concern within health insurance organizations in recent years because of growing revenue losses; handling medical claims has become an exhausting manual task, carried out by a few clinical experts who are responsible for approving, adjusting, or rejecting the requested reimbursements within a limited period. Standard data mining techniques no longer sufficiently address the complexity of the real world. Symbolic Data Analysis, a newer kind of data analysis, is therefore used, as it allows the complexity of the real world to be represented and fraud in the dataset to be recognized.
33

Glanz, J. "Papers Face Off Over Claim Of Neutrino Mass Detection." Science 269, no. 5231 (1995): 1671–72. http://dx.doi.org/10.1126/science.269.5231.1671.

34

Viaene, S., G. Dedene, and R. Derrig. "Auto claim fraud detection using Bayesian learning neural networks." Expert Systems with Applications 29, no. 3 (2005): 653–66. http://dx.doi.org/10.1016/j.eswa.2005.04.030.

35

Sari, Panca Oktavia Candra, and Suharjito Suharjito. "Outlier Detection in Inpatient Claims Using DBSCAN and K-Means." JURNAL TEKNIK INFORMATIKA 15, no. 1 (2022): 1–10. http://dx.doi.org/10.15408/jti.v15i1.25682.

Abstract:
Health insurance helps people obtain quality and affordable health services. In the claim billing process, codes are input to the system manually, which can lead to errors and to claims being suspected of fraud. Claims suspected of fraud are traced manually to find incorrect inputs, and the increasing volume of claims reduces the accuracy of this tracing while consuming time and energy. As an effort to prevent and reduce the occurrence of fraud, this study aims to determine patterns in the data associated with fraud based on the formation of data groupings. The data were prepared by combining claims for inpatient bills and patient bills from hospitals in 2020. Two methods were used in this study to form clusters, DBSCAN and K-Means, and the Local Outlier Factor (LOF) was added to identify outliers within the clusters. The experimental results show that both methods can detect outlier data and distribute the outliers across the formed clusters. The variables with the greatest effect on a record becoming an outlier are the length of stay, the claim code, and the patient's condition when discharged from the hospital. The accuracy of K-Means is 0.391, slightly higher than that of DBSCAN at 0.389.
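
A compact version of the clustering-plus-LOF setup can be assembled with scikit-learn as shown below; the feature columns (treated here as numeric), cluster counts, and DBSCAN/LOF parameters are illustrative assumptions.

```python
# Cluster inpatient claims with K-Means and DBSCAN, then score outliers with
# Local Outlier Factor; features and parameters are assumptions.
import pandas as pd
from sklearn.cluster import DBSCAN, KMeans
from sklearn.neighbors import LocalOutlierFactor
from sklearn.preprocessing import StandardScaler

df = pd.read_csv("inpatient_claims.csv")                # assumed input
X = StandardScaler().fit_transform(
    df[["length_of_stay", "claim_code", "discharge_condition"]])

df["kmeans_cluster"] = KMeans(n_clusters=4, random_state=0).fit_predict(X)
df["dbscan_cluster"] = DBSCAN(eps=0.5, min_samples=5).fit_predict(X)

# LOF labels points as -1 (outlier) or 1 (inlier) in the same feature space.
df["lof_outlier"] = LocalOutlierFactor(n_neighbors=20).fit_predict(X) == -1
print(df.groupby("kmeans_cluster")["lof_outlier"].mean())
```
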
36

Sowah, Robert A., Marcellinus Kuuboore, Abdul Ofoli, et al. "Decision Support System (DSS) for Fraud Detection in Health Insurance Claims Using Genetic Support Vector Machines (GSVMs)." Journal of Engineering 2019 (September 2, 2019): 1–19. http://dx.doi.org/10.1155/2019/1432597.

Abstract:
Fraud in health insurance claims has become a significant problem whose rampant growth has deeply affected the global delivery of health services. In addition to financial losses incurred, patients who genuinely need medical care suffer because service providers are not paid on time as a result of delays in the manual vetting of their claims and are therefore unwilling to continue offering their services. Health insurance claims fraud is committed through service providers, insurance subscribers, and insurance companies. The need for the development of a decision support system (DSS) for accurate, automated claim processing to offset the attendant challenges faced by the National Health Insurance Scheme cannot be overstated. This paper utilized the National Health Insurance Scheme claims dataset obtained from hospitals in Ghana for detecting health insurance fraud and other anomalies. Genetic support vector machines (GSVMs), a novel hybridized data mining and statistical machine learning tool providing a set of sophisticated algorithms for the automatic detection of fraudulent claims in these health insurance databases, are used. The experimental results have proven that the GSVM possessed better detection and classification performance when applied using SVM kernel classifiers. Three GSVM classifiers were evaluated and their results compared. Experimental results show a significant reduction in computational time on claims processing while increasing classification accuracy via the various SVM classifiers (linear (80.67%), polynomial (81.22%), and radial basis function (RBF) kernel (87.91%)).
APA, Harvard, Vancouver, ISO, and other styles
37

Aragani, Venu Madhav. "Revolutionizing Insurance Through AI and Data Analytics: Innovating Policy Underwriting and Claims Management for the Digital Era." FMDB Transactions on Sustainable Computer Letters 2, no. 3 (2024): 176–85. https://doi.org/10.69888/ftscl.2024.000243.

Full text
Abstract:
This study examines how AI and data analytics can transform insurance, focusing on how AI may affect underwriting and claims administration. It uses AI to improve underwriting accuracy, claim processing speed, fraud detection, and operating efficiency. An example dataset of insurance claims, underwriting reports, and customer satisfaction indicators is used to measure AI's impact on traditional insurance operations, covering underwriting accuracy, claims-processing time, fraud detection, client satisfaction, and efficiency on conventional and AI-supported insurance platforms. Pandas and NumPy supported the computational analysis, and Mathematica was used to display the statistics graphically for deeper modelling and simulation. Underwriting accuracy rose from 80% to 100%, claims processing time fell from 30 to 18 days, and fraud detection accuracy rose from 75% to 92%. Additionally, AI procedures increased operational efficiency by 30% and customer satisfaction by 12%. These findings show that AI improves insurance and service processes and boosts customer satisfaction, putting AI at the heart of modernizing the insurance sector. The study demonstrates that AI improves insurance accuracy, efficiency, and customer experience.
APA, Harvard, Vancouver, ISO, and other styles
38

Jenita, Mary Arockiam, and Claret Seraphim Pushpanathan Angelin. "MapReduce-iterative support vector machine classifier: novel fraud detection systems in healthcare insurance industry." International Journal of Electrical and Computer Engineering (IJECE) 13, no. 1 (2023): 756–69. https://doi.org/10.11591/ijece.v13i1.pp756-769.

Full text
Abstract:
Fraud in healthcare insurance claims is one of the significant research challenges affecting the growth of healthcare services. Healthcare fraud is committed through subscribers, companies, and providers. A decision support system is developed to automate the processing of claim data from service providers and to offset patients' challenges. In this paper, a novel hybridized big data and statistical machine learning technique, named MapReduce-based iterative support vector machine (MR-ISVM), is proposed to provide a set of sophisticated steps for the automatic detection of fraudulent claims in health insurance databases. The experimental results show that the MR-ISVM classifier outperforms other support vector machine (SVM) kernel classifiers in classification and detection. The results also show a positive impact in reducing the computational time for processing healthcare insurance claims without compromising classification accuracy. The proposed MR-ISVM classifier achieves 87.73% accuracy, compared with 75.3% for the linear kernel and 79.98% for the radial basis function kernel.
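The following single-machine sketch illustrates the map/reduce idea behind an iterative SVM on assumed synthetic data: each partition trains a local SVM ("map"), the support vectors are pooled ("reduce"), and a global SVM is retrained on the pooled set. It is not the authors' MR-ISVM implementation.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC

# Synthetic stand-in for a claims dataset.
X, y = make_classification(n_samples=6000, n_features=20, random_state=0)
partitions = np.array_split(np.arange(len(X)), 4)   # stand-in for HDFS input splits

# "Map": train a local SVM per partition and keep only its support vectors.
sv_X, sv_y = [], []
for idx in partitions:
    local = SVC(kernel="rbf", gamma="scale").fit(X[idx], y[idx])
    sv_X.append(X[idx][local.support_])
    sv_y.append(y[idx][local.support_])

# "Reduce": pool the support vectors and retrain a global SVM on them.
X_sv, y_sv = np.vstack(sv_X), np.concatenate(sv_y)
global_svm = SVC(kernel="rbf", gamma="scale").fit(X_sv, y_sv)
print("pooled support vectors:", len(X_sv), "| accuracy on full data:", global_svm.score(X, y))
```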
APA, Harvard, Vancouver, ISO, and other styles
39

Ulianov, Policarpo Yoshin. "The CAT’s race: Who will claim this Nobel prize?" Physics & Astronomy International Journal 8, no. 4 (2024): 210–15. http://dx.doi.org/10.15406/paij.2024.08.00350.

Full text
Abstract:
This article explores the competitive scientific endeavor to detect the Cosmic FM Background (CFMB), a predicted radiation whose discovery could substantiate the Small Bang model and potentially challenge the prevailing Big Bang paradigm in cosmology. We delve into the theoretical foundation of CFMB, detailing methodologies for its detection and discussing the profound implications such a discovery would hold for astrophysics. The detection of CFMB would not only shift current cosmological theories but also pave the way for new understanding of the universe’s earliest moments.
APA, Harvard, Vancouver, ISO, and other styles
40

Le, Van Nhat Thang, Jae-Gon Kim, Yeon-Mi Yang, and Dae-Woo Lee. "Evaluating the Checklist for Artificial Intelligence in Medical Imaging (CLAIM)-Based Quality of Reports Using Convolutional Neural Network for Odontogenic Cyst and Tumor Detection." Applied Sciences 11, no. 20 (2021): 9688. http://dx.doi.org/10.3390/app11209688.

Full text
Abstract:
This review aimed to explore whether studies employing a convolutional neural network (CNN) for odontogenic cyst and tumor detection follow the methodological reporting recommendations of the Checklist for Artificial Intelligence in Medical Imaging (CLAIM). We retrieved CNN studies using panoramic and cone-beam computed tomographic images from inception to April 2021 in PubMed, EMBASE, Scopus, and Web of Science. The included studies were assessed according to the CLAIM. Among the 55 studies yielded, 6 CNN studies for odontogenic cyst and tumor detection were included. Judged against the CLAIM items, the abstract, methods, results, and discussion sections across the included studies were insufficiently described. The problem areas included item 2 in the abstract; items 6–9, 11–18, 20, 21, 23, 24, 26–31 in the methods; items 33, 34, 36, 37 in the results; item 38 in the discussion; and items 40–41 in “other information.” The CNN reports for odontogenic cyst and tumor detection were evaluated as low quality. Inadequate reporting reduces the robustness, comparability, and generalizability of a CNN study for dental radiograph diagnostics. The CLAIM is accepted as a good guideline for study design to improve the reporting quality of artificial intelligence studies in the dental field.
APA, Harvard, Vancouver, ISO, and other styles
41

Sangishetty, Akanksha. "Revolutionizing Insurance Fraud Detection: A Data-Driven Approach for Enhanced Accuracy and Efficiency." Revolutionizing Insurance Fraud Detection: A Data-Driven Approach for Enhanced Accuracy and Efficiency 8, no. 10 (2023): 9. https://doi.org/10.5281/zenodo.10033870.

Full text
Abstract:
Fraudulent activities are increasingly prevalent across various sectors, imposing significant financial burdens on the insurance industry, estimated to cost billions annually. Insurance fraud, a deliberate and illicit act for financial gain, has emerged as a critical challenge faced by insurance companies worldwide. Often, the root cause of this issue can be traced back to shortcomings in the investigation of fraudulent claims. The repercussions of insurance fraud are extensive, leading to substantial financial losses and billions in avoidable expenses for the industry. This, in turn, necessitates the adoption of technology-driven solutions to combat fraudulent activities, offering policyholders a trustworthy and secure environment while substantially reducing fraudulent claims. The financial impact of these fraudulent activities, covered by increasing policy premiums, ultimately affects society at large. Conventional claim investigation procedures have faced criticism for their time-consuming and labor-intensive nature, often yielding unreliable outcomes. Consequently, this research leverages the Random Forest Classifier to develop a machine learning-based framework for fraud detection. Our study showcases the practical application of data analytics and machine learning techniques in automating the assessment of insurance claims, with a specific focus on the automatic identification of erroneous claims. Additionally, our system has the potential to generate heuristics for detecting fraud indicators. As a result, this approach positively contributes to the insurance industry by enhancing both the reputation of insurance firms and the satisfaction of customers. Keywords: Insurance Fraud Detection, Support Vector Machine, Random Forest Classifier, Fraud Prevention, Customer Satisfaction.
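A minimal sketch of the Random Forest workflow described above follows; the file name, columns, and hyperparameters are placeholders, not the author's configuration.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Hypothetical labelled claims table with a binary "is_fraud" column.
claims = pd.read_csv("insurance_claims.csv")
X = pd.get_dummies(claims.drop(columns=["is_fraud"]))   # encode categorical fields
y = claims["is_fraud"]
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=42)

model = RandomForestClassifier(n_estimators=300, random_state=42)
model.fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))

# Feature importances can serve as simple heuristics for fraud indicators.
print(pd.Series(model.feature_importances_, index=X.columns).nlargest(10))
```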
APA, Harvard, Vancouver, ISO, and other styles
42

Atanasious, Mohab Mahdy Helmy, Valentina Becchetti, Alessandro Giuseppi, et al. "An Insurtech Platform to Support Claim Management Through the Automatic Detection and Estimation of Car Damage from Pictures." Electronics 13, no. 22 (2024): 4333. http://dx.doi.org/10.3390/electronics13224333.

Full text
Abstract:
Claims management is a complex process through which an insurance company or responsible entity addresses and handles compensation requests from policyholders who have suffered damage or losses. This process entails several stages, including the notification of the claim, damage assessment, settlement of compensation, and, if necessary, dispute resolution. Fair, transparent, and timely claims management is crucial for maintaining policyholders’ trust while also limiting the financial impact on the insurer. Technological innovations, such as the use of artificial intelligence and automation, are positively influencing this sector, enabling faster and more effective claims management. This study reports on Insoore AI, an insurtech solution that aims to automate a portion of claims management by integrating a computer vision solution based on some of the latest developments in deep learning to automatically recognize and localize car damage from user-provided pictures.
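The sketch below shows one common way to approach such a task, fine-tuning a pretrained convolutional network for car-damage classification with PyTorch/torchvision (version 0.13 or later assumed); it covers classification only, omits localization, and is not the Insoore AI pipeline. The dataset layout and class names are assumptions.

```python
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

# Standard ImageNet preprocessing for a pretrained backbone.
transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
# Assumed folder layout: car_damage/train/<class_name>/*.jpg
train_set = datasets.ImageFolder("car_damage/train", transform=transform)
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

# Replace the classifier head to match the number of damage classes.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(train_set.classes))

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()
model.train()
for images, labels in loader:          # a single pass shown for brevity
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```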
APA, Harvard, Vancouver, ISO, and other styles
43

Mary Arockiam, Jenita, and Angelin Claret Seraphim Pushpanathan. "MapReduce-iterative support vector machine classifier: novel fraud detection systems in healthcare insurance industry." International Journal of Electrical and Computer Engineering (IJECE) 13, no. 1 (2023): 756. http://dx.doi.org/10.11591/ijece.v13i1.pp756-769.

Full text
Abstract:
Fraud in healthcare insurance claims is one of the significant research challenges affecting the growth of healthcare services. Healthcare fraud is committed through subscribers, companies, and providers. A decision support system is developed to automate the processing of claim data from service providers and to offset patients' challenges. In this paper, a novel hybridized big data and statistical machine learning technique, named MapReduce-based iterative support vector machine (MR-ISVM), is proposed to provide a set of sophisticated steps for the automatic detection of fraudulent claims in health insurance databases. The experimental results show that the MR-ISVM classifier outperforms other support vector machine (SVM) kernel classifiers in classification and detection. The results also show a positive impact in reducing the computational time for processing healthcare insurance claims without compromising classification accuracy. The proposed MR-ISVM classifier achieves 87.73% accuracy, compared with 75.3% for the linear kernel and 79.98% for the radial basis function kernel.
APA, Harvard, Vancouver, ISO, and other styles
44

Khadse, Prof D. B. "Health Care Provider Fraudulent Detection Using Machine Learning." International Journal of Scientific Research in Engineering and Management 08, no. 09 (2024): 1–16. http://dx.doi.org/10.55041/ijsrem37654.

Full text
Abstract:
Healthcare fraud is a serious problem that affects the financial health and trust in healthcare systems around the world. This research paper focuses on using machine learning to detect fraudulent activities by healthcare providers. We analyze large amounts of data from Medicare claims to find unusual patterns that may indicate fraud. By using different machine learning methods, such as decision trees and random forests, we create a model that can accurately separate legitimate claims from fraudulent ones. To tackle the challenge of imbalanced data, we apply techniques like oversampling, which helps improve our model's performance. Our results show that this machine learning approach significantly enhances the accuracy and reliability of fraud detection compared to traditional methods. Additionally, our findings provide valuable insights for healthcare administrators and policymakers, helping them take action against fraud more effectively. By incorporating these advanced techniques into existing systems, we aim to support efforts to protect healthcare resources and improve patient care. This research not only adds to the understanding of fraud detection in healthcare but also offers practical solutions to fight against it effectively. Keywords: Provider Review, Insurance Claim Detection, Supervised Machine Learning, Support Vector Machine, Random Forest.
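As a hedged illustration of the imbalance handling mentioned above, the sketch below oversamples the minority class before training tree-based classifiers; the file name and label column are assumptions, and the labels are assumed to be binary (0/1).

```python
import pandas as pd
from imblearn.over_sampling import RandomOverSampler
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

# Hypothetical Medicare-style claims table with a binary "potential_fraud" label.
df = pd.read_csv("medicare_claims.csv")
X = pd.get_dummies(df.drop(columns=["potential_fraud"]))
y = df["potential_fraud"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Oversample only the training split so the test set stays untouched.
X_bal, y_bal = RandomOverSampler(random_state=0).fit_resample(X_tr, y_tr)

for clf in (DecisionTreeClassifier(random_state=0), RandomForestClassifier(random_state=0)):
    clf.fit(X_bal, y_bal)
    print(type(clf).__name__, "F1:", f1_score(y_te, clf.predict(X_te)))
```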
APA, Harvard, Vancouver, ISO, and other styles
45

S. Kaddi, Shweta, and Malini M. Patil. "Ensemble learning based health care claim fraud detection in an imbalance data environment." Indonesian Journal of Electrical Engineering and Computer Science 32, no. 3 (2023): 1686. http://dx.doi.org/10.11591/ijeecs.v32.i3.pp1686-1694.

Full text
Abstract:
Healthcare fraud has become a common encounter in the healthcare finance industry. The financial security of healthcare payers and providers is seriously impacted by healthcare fraud. Fraudulent healthcare claims result when incorrect or exaggerated medical services are invoiced for reimbursement. The effective operation of the healthcare system depends on the detection of such fraudulent actions. This paper develops a healthcare claim fraud detection method based on ensemble learning. The performance of a stacked ensemble learning algorithm is compared with that of the multi-layer perceptron (MLP) classifier, support vector classifier (SVC), logistic regression (LR), and decision tree (DT) algorithms. Because healthcare data are imbalanced, normal transactions significantly outnumber fraudulent ones, and a machine learning (ML) algorithm performs poorly because the imbalance biases it toward the majority class. As a result, the data are upsampled using the synthetic minority oversampling technique (SMOTE) to provide balanced data. The experimental results show that, for the identification of healthcare claim fraud, the ensemble learning strategy greatly outperforms single learning algorithms. The stacked ensemble achieves the highest area under the receiver-operating characteristic curve (AUC-ROC) among the compared algorithms, and the AUC-ROC results are found to be adequate for all approaches.
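A minimal sketch of the SMOTE-plus-stacking idea follows; the base learners, file name, and label column are assumptions rather than the paper's exact configuration.

```python
import pandas as pd
from imblearn.over_sampling import SMOTE
from sklearn.ensemble import StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Hypothetical claims table with a binary "fraud" label.
df = pd.read_csv("health_claims.csv")
X = pd.get_dummies(df.drop(columns=["fraud"]))
y = df["fraud"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Balance only the training data with SMOTE.
X_bal, y_bal = SMOTE(random_state=0).fit_resample(X_tr, y_tr)

stack = StackingClassifier(
    estimators=[("mlp", MLPClassifier(max_iter=500)),
                ("svc", SVC(probability=True)),
                ("dt", DecisionTreeClassifier())],
    final_estimator=LogisticRegression(max_iter=1000),
)
stack.fit(X_bal, y_bal)
print("AUC-ROC:", roc_auc_score(y_te, stack.predict_proba(X_te)[:, 1]))
```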
APA, Harvard, Vancouver, ISO, and other styles
46

Berendt, Bettina, Peter Burger, Rafael Hautekiet, Jan Jagers, Alexander Pleijter, and Peter Van Aelst. "FactRank: Developing automated claim detection for Dutch-language fact-checkers." Online Social Networks and Media 22 (March 2021): 100113. http://dx.doi.org/10.1016/j.osnem.2020.100113.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Patil, Vaishnavi. "Fraud Detection and Analysis for Insurance Claim Using Machine Learning." International Journal for Research in Applied Science and Engineering Technology 11, no. 5 (2023): 5559–65. http://dx.doi.org/10.22214/ijraset.2023.52875.

Full text
Abstract:
Insurance fraud is illegal conduct carried out deliberately in order to profit financially. It is currently the most serious issue that numerous insurance companies throughout the world are facing. In the majority of cases, one or more gaps in the investigation of false claims have been identified as the primary factor. As a result, the need to use computer tools to stop fraudulent activities has increased, providing customers with a dependable and stable environment while significantly lowering fraudulent claims. We demonstrate the results of our research by automating the evaluation of insurance claims using a variety of data methodologies, where the detection of erroneous claims is performed automatically using data analytics and machine learning techniques.
APA, Harvard, Vancouver, ISO, and other styles
48

Bach, Mirjana Pejić, Ksenija Dumičić, Berislav Žmuk, Tamara Ćurlin, and Jovana Zoroja. "Data mining approach to internal fraud in a project-based organization." International Journal of Information Systems and Project Management 8, no. 2 (2021): 81–101. http://dx.doi.org/10.12821/ijispm080204.

Full text
Abstract:
Data mining is an efficient methodology for uncovering and extracting information from large databases, which is widely used in different areas, e.g., customer relationship management, financial fraud detection, healthcare management, and manufacturing. Data mining has been successfully used in various fraud detection and prevention areas, such as credit card fraud, taxation fraud, and fund transfer fraud. However, there is insufficient research on the usage of data mining for fraud related to internal control. In order to increase awareness of data mining's usefulness in internal control, we developed a case study in a project-based organization. We analyze a dataset of working-hour claims for projects using two data mining techniques, the chi-square automatic interaction detection (CHAID) decision tree and link analysis, in order to describe the characteristics of fraudulent working-hour claims and to develop a model for the automatic detection of potentially fraudulent ones. The results indicate that the following characteristics of suspected working-hour claims were the most significant: the sector of the customer, the origin and level of expertise of the consultant, and the cost of the consulting services. Our research contributes to the area of internal control supported by data mining, with the goal of preventing fraudulent working-hour claims in project-based organizations.
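The sketch below uses scikit-learn's CART decision tree as a stand-in for CHAID (which scikit-learn does not provide) to derive human-readable rules over working-hour claim attributes; the file and column names are assumed from the variables listed in the abstract.

```python
import pandas as pd
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical export of working-hour claims with a binary "suspected_fraud" label.
claims = pd.read_csv("working_hour_claims.csv")
features = ["customer_sector", "consultant_origin", "expertise_level", "service_cost"]
X = pd.get_dummies(claims[features])   # one-hot encode categorical attributes
y = claims["suspected_fraud"]

# A shallow tree keeps the rules interpretable for internal-control review.
tree = DecisionTreeClassifier(max_depth=4, min_samples_leaf=20, random_state=0).fit(X, y)
print(export_text(tree, feature_names=list(X.columns)))   # human-readable fraud rules
```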
APA, Harvard, Vancouver, ISO, and other styles
49

Kronfeld, D. S., K. H. Treiber, T. M. Hess, and R. C. Boston. "Insulin resistance in the horse: Definition, detection, and dietetics." Journal of Animal Science 83, suppl_13 (2005): E22–E31. http://dx.doi.org/10.2527/2005.8313_supple22x.

Full text
Abstract:
Abstract Specific quantitative methods for determining insulin resistance have been applied to obesity, activity/inactivity, reproductive efficiency, and exercise in horses, but only nonspecific indications have implicated insulin resistance as a risk factor or component of equine diseases. Insulin resistance derives from insulin insensitivity at the cell surface, which regulates glucose availability inside the cell, or from insulin ineffectiveness due to disruption of glucose metabolism inside the cell. Interplay of insensitivity and ineffectiveness should be considered in regard to patterns of disease, such as laminitis. Detection of insulin insensitivity is made weakly on the basis of fasting hyperinsulinemia, more strongly with a statistically validated surrogate, such as the logarithm of the reciprocal of basal insulinemia, or best by a specific quantitative method. Subjects found to be at risk can be managed to improve their insulin sensitivity by dietetics. Claims for dietetic prevention of a disease should be distinguished from claims for avoidance of a dietary risk factor. The evidence required for a claim of prevention is a controlled intervention trial as for a therapeutic drug, according to the U.S. FDA. In contrast, the evidence required for a claim of avoidance is association revealed by population studies plus causation shown by mechanistic experiments, as formulated in the Surgeon General of the Public Health Office's (1988) Report on Nutrition and Health. In this view, no appropriate evidence is available for the prevention or treatment of insulin resistance in an equine disease. Evidence is available, however, to justify avoidance of high-glycemic feeds, such as high starch intakes in grains, clover, and alfalfa, and high fructan intakes in grasses, to decrease the risk of acute digestive disturbances associated with rapid fermentation, and chronic metabolic disorders associated with insulin resistance. During submaximal exercise, high-glycemic meals have been shown to increase glucose utilization immediately. On the other hand, chronic adaptation to feeds that exchange corn oil and fiber sources for sources of sugar and starch confers benefits to athletic performance that may be due to several aspects of fat adaptation, including the regulation of insulin sensitivity, as well as glycolysis and lipid oxidation by signals from insulin receptors. Information regarding insulin resistance suggests methods for protecting health and promoting horse performance.
APA, Harvard, Vancouver, ISO, and other styles
50

Sheikh Zeeshan Rehman, Syed Sameer Hashmi, Shaik Abdul Gaffar, and Dr. Surya Mukhi. "Medical Insurance Claim Using Face Id." International Journal of Information Technology and Computer Engineering 13, no. 2s (2025): 336–42. https://doi.org/10.62647/ijitce2025v13i2spp336-342.

Full text
Abstract:
To protect healthcare providers' finances and stop systematic abuse, it is essential to verify the veracity of medical insurance claims. In order to identify irregularities and fraudulent activity in health insurance claims, this study investigates a hybrid machine learning architecture that combines Support Vector Machines (SVM), Decision Trees, and Random Forest classifiers. To refine the input variables, a carefully selected dataset was subjected to extensive preprocessing, which included feature transformation, data normalization, and sophisticated dimensionality reduction techniques. GridSearchCV was used for hyperparameter tuning, systematically finding the parameter combinations that maximize the prediction performance of each model. Metrics including accuracy, precision, recall, and F1-score were used to assess the effectiveness of the classifiers. The results showed that the tuned ensemble models, Random Forest in particular, had better detection capability than more conventional approaches.
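As a rough illustration of the tuning procedure described above, the sketch below runs GridSearchCV over SVM, Decision Tree, and Random Forest classifiers; the file, columns, and parameter grids are assumptions, not the study's actual configuration.

```python
import pandas as pd
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import Pipeline
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report

# Hypothetical claims table with a binary "is_fraud" label.
df = pd.read_csv("medical_claims.csv")
X = pd.get_dummies(df.drop(columns=["is_fraud"]))
y = df["is_fraud"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=1)

# One small grid search per model family; the grids are illustrative only.
searches = {
    "svm": GridSearchCV(Pipeline([("scale", StandardScaler()), ("clf", SVC())]),
                        {"clf__C": [0.1, 1, 10], "clf__kernel": ["linear", "rbf"]}, cv=5),
    "tree": GridSearchCV(DecisionTreeClassifier(), {"max_depth": [3, 5, 10]}, cv=5),
    "forest": GridSearchCV(RandomForestClassifier(), {"n_estimators": [100, 300]}, cv=5),
}
for name, search in searches.items():
    search.fit(X_tr, y_tr)
    print(name, search.best_params_)
    print(classification_report(y_te, search.predict(X_te)))
```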
APA, Harvard, Vancouver, ISO, and other styles