Academic literature on the topic 'Modelli diagnostici ML'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Select a source type:

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Modelli diagnostici ML.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Modelli diagnostici ML"

1

Bhore, Priyanka. "Advancing Diagnostic Accuracy and Efficiency through Machine Learning Integration in Healthcare." International Journal for Research in Applied Science and Engineering Technology 13, no. 1 (2025): 1967–73. https://doi.org/10.22214/ijraset.2025.66707.

Full text
Abstract:
Machine learning (ML) has the potential to transform healthcare by improving the accuracy and efficiency of medical diagnoses. This project showcases the use of ML in healthcare through a DenseNet121 model designed to classify chest X-ray images into four categories: Pneumonia, Atelectasis, Pneumothorax, and No Finding. Utilizing the DenseNet121 architecture, recognized for its strong feature extraction abilities, the model was trained on a dataset of chest X-ray images along with relevant metadata. The goal was to accurately identify these conditions, thereby assisting healthcare professionals in their clinical decision-making. During the training phase, the model's performance was closely monitored with real-time validation to avoid overfitting. After training, the model's predictions on the test set were assessed using ROC curves and AUC scores, showing strong performance in classifying the targeted conditions. Incorporating ML models like DenseNet121 into clinical environments can greatly improve diagnostic accuracy and efficiency. These models can collaborate with hospitals to deliver quick and dependable diagnoses, alleviating some of the pressures on healthcare professionals and potentially leading to improved patient outcomes. The successful application of ML in this project underscores its promise as a valuable asset in the evolution of healthcare.
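The evaluation step this abstract describes — per-class ROC curves and AUC scores for a multi-class chest X-ray classifier — can be sketched with scikit-learn. The class names follow the abstract; the scores below are randomly generated stand-ins, not the paper's DenseNet121 outputs:

```python
import numpy as np
from sklearn.metrics import roc_auc_score
from sklearn.preprocessing import label_binarize

CLASSES = ["Pneumonia", "Atelectasis", "Pneumothorax", "No Finding"]

rng = np.random.default_rng(0)
n = 500
y_true = rng.integers(0, 4, size=n)             # ground-truth labels (synthetic)
# Synthetic softmax-like scores, mildly correlated with the truth
logits = rng.normal(size=(n, 4))
logits[np.arange(n), y_true] += 1.5             # make the true class more likely
y_score = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)

# One-vs-rest AUC per class, as in the abstract's ROC/AUC evaluation
y_bin = label_binarize(y_true, classes=[0, 1, 2, 3])
aucs = roc_auc_score(y_bin, y_score, average=None)
for name, auc in zip(CLASSES, aucs):
    print(f"{name}: AUC = {auc:.3f}")
```

Per-class AUC (rather than a single pooled accuracy) is what lets a reader see whether the model is strong on, say, Pneumonia but weak on Atelectasis.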
APA, Harvard, Vancouver, ISO, and other styles
2

Özer, İlyas. "Utilizing Machine Learning for Enhanced Diagnosis and Management of Pediatric Appendicitis: A Multilayer Neural Network Approach." Aintelia Science Notes 2, no. 2 (2023): 18–24. https://doi.org/10.5281/zenodo.10473089.

Full text
Abstract:
This study focuses on pediatric appendicitis, a leading cause of hospital admissions due to abdominal pain in children, characterized by a substantial risk of perforation, especially in younger patients. Traditional diagnostic methods, while effective, often lack specificity and are supplemented by varying laboratory and imaging techniques. This research introduces a novel application of machine learning (ML), specifically a multi-output neural network model, to address the complexities of diagnosing appendicitis, determining its severity, and guiding management strategies in pediatric cases. The model, with its unique architecture, has been trained and tested on a comprehensive dataset from Children’s Hospital St. Hedwig in Regensburg, Germany, which includes a wide array of clinical data and ultrasound images. The results demonstrate remarkable accuracy in classifying management approaches, severity levels, and diagnosis, highlighting the model's potential in supporting clinical decision-making. While not a replacement for clinical judgment, this model serves as a promising tool in the ongoing efforts to improve pediatric appendicitis care, offering a glimpse into the future of AI-enhanced medical diagnostics and treatment planning.
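The multi-output idea above — one model jointly predicting diagnosis, severity, and management — can be sketched with scikit-learn's `MultiOutputClassifier`. All features, targets, and their relationships below are illustrative assumptions, not the St. Hedwig dataset:

```python
import numpy as np
from sklearn.multioutput import MultiOutputClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

# Synthetic clinical features; three hypothetical targets per patient:
# diagnosis (yes/no), severity (3 levels), management (2 strategies)
X, diagnosis = make_classification(n_samples=800, n_features=12, n_informative=6,
                                   n_classes=2, random_state=0)
rng = np.random.default_rng(0)
severity = np.where(diagnosis == 1, rng.integers(1, 3, len(diagnosis)), 0)
management = (severity > 1).astype(int)
Y = np.column_stack([diagnosis, severity, management])

X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.25, random_state=0)
clf = MultiOutputClassifier(RandomForestClassifier(random_state=0)).fit(X_tr, Y_tr)
acc_per_target = (clf.predict(X_te) == Y_te).mean(axis=0)
print(dict(zip(["diagnosis", "severity", "management"], acc_per_target.round(3))))
```

A true multi-output neural network shares hidden layers across the heads; the wrapper above fits one forest per target, which illustrates the interface but not the parameter sharing.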
APA, Harvard, Vancouver, ISO, and other styles
3

Thamodharan, A. "Advanced Predictive Modeling for Early Detection of Diabetes Insipidus: Leveraging Machine Learning Algorithms to Enhance Diagnostic Accuracy and Personalized Treatment Pathways." International Scientific Journal of Engineering and Management 04, no. 04 (2025): 1–7. https://doi.org/10.55041/isjem03046.

Full text
Abstract:
Diabetes Insipidus (DI) is a rare disorder characterized by the inability to concentrate urine, leading to frequent urination and excessive thirst. Early detection of DI is crucial for timely treatment, as delayed diagnosis can result in complications such as dehydration, electrolyte imbalances, and kidney damage. This paper explores the application of advanced predictive modeling techniques, particularly machine learning (ML) algorithms, to enhance the early detection and diagnosis of Diabetes Insipidus. Traditional diagnostic approaches, such as water deprivation tests and serum osmolality measurements, often require invasive procedures and are time-consuming. In contrast, ML-based models offer an opportunity to leverage clinical data for non-invasive, rapid, and accurate predictions, thereby improving diagnostic efficiency and patient outcomes. The paper reviews the various ML algorithms employed in the detection of DI, including decision trees, random forests, support vector machines (SVM), and deep learning methods. A significant focus is placed on feature engineering techniques, which help identify the most relevant clinical and laboratory parameters for the predictive models. Additionally, the integration of electronic health records (EHR) data, such as age, gender, history of dehydration, urine output, and serum electrolyte levels, is explored as a means to enhance the model's accuracy and robustness. Keywords: Diabetes Insipidus, Early Detection, Predictive Modeling, Machine Learning, Personalized Treatment, Diagnostic Accuracy, Feature Engineering, Electronic Health Records, Decision Trees, Support Vector Machines, Deep Learning, Cross-Validation, Model Evaluation.
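The EHR-driven approach this abstract describes can be sketched as a random forest on tabular clinical features. The feature names (age, urine output, serum sodium), the generating process, and the labels below are all illustrative assumptions, not the paper's data:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(42)
n = 1000
# Hypothetical EHR-style features: age, 24h urine output (L), serum sodium (mmol/L)
age = rng.uniform(5, 80, n)
urine = rng.normal(2.0, 0.8, n).clip(0.5, None)
sodium = rng.normal(140, 4, n)
# Synthetic label: polyuria plus hypernatremia raises DI likelihood
risk = 0.8 * (urine - 2.0) + 0.3 * (sodium - 140) + rng.normal(0, 1, n)
y = (risk > 1.0).astype(int)
X = np.column_stack([age, urine, sodium])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0,
                                          stratify=y)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
acc = accuracy_score(y_te, clf.predict(X_te))
print(f"held-out accuracy: {acc:.2f}")
```

On a real cohort, `clf.feature_importances_` would give a first cut at the feature-engineering question the paper emphasizes — which clinical parameters carry the diagnostic signal.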
APA, Harvard, Vancouver, ISO, and other styles
4

Ding, Yueheng. "Advances and Challenges in Machine Learning for Diabetes Prediction: A Comprehensive Review." Applied and Computational Engineering 109, no. 1 (2024): 75–80. http://dx.doi.org/10.54254/2755-2721/109/20241437.

Full text
Abstract:
Diabetes mellitus is a prevalent and severe metabolic disorder that poses significant health risks globally, leading to substantial healthcare burdens. In recent years, advancements in artificial intelligence (AI) have markedly enhanced the accuracy and efficiency of diabetes outcome prediction by machine learning (ML), offering a promising approach for early intervention and treatment. This paper evaluates several advanced ML models, including Random Forest (RF), Support Vector Machine (SVM), and Neural Network techniques. Each model's strengths and limitations are discussed, highlighting the improvements in predictive performance and diagnostic precision. Despite these advancements, the field faces ongoing challenges related to ethical considerations and data scale, which impact ML application in healthcare from both technical and moral aspects. Future efforts should focus on these challenges by promoting data sharing and integration while safeguarding privacy. Through these endeavors, we aim to advance the field of diabetes prediction and improve patient care.
APA, Harvard, Vancouver, ISO, and other styles
5

Kadhim, Dhuha Abdalredha, and Mazin Abed Mohammed. "Advanced Machine Learning Models for Accurate Kidney Cancer Classification Using CT Images." Mesopotamian Journal of Big Data 2025 (January 10, 2025): 1–25. https://doi.org/10.58496/mjbd/2025/001.

Full text
Abstract:
Kidney cancer, particularly renal cell carcinoma (RCC), poses significant challenges in early and accurate diagnosis due to the complexity of tumor characteristics in computed tomography (CT) images. Traditional diagnostic approaches often struggle with variability in data and lack the precision required for effective clinical decision-making. This study aims to develop and evaluate machine learning (ML) models for the accurate classification of kidney cancer using CT images, focusing on improving diagnostic precision and addressing potential challenges of overfitting and dataset heterogeneity. Two ML models, Support Vector Machines (SVM) and Multi-Layer Perceptrons (MLP), were employed for classification. Key attribute extraction techniques, including the gray-level co-occurrence matrix (GLCM) and Gabor filters, were utilized to capture texture and structural features of CT images. Data normalization and preprocessing ensured consistency and enhanced model reliability. The SVM model achieved an accuracy of 93%, while the MLP model demonstrated superior performance with a 99.64% accuracy rate. These results highlight the MLP model's ability to capture complex patterns in the data. However, the exceptional accuracy of the MLP model raises concerns about potential overfitting, warranting further evaluation on more diverse datasets. This study underscores the potential of ML techniques, particularly MLP, in enhancing the accuracy of kidney cancer diagnosis. Integrating such advanced ML models into clinical workflows could significantly improve patient outcomes.
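The GLCM texture features this abstract mentions are usually computed with scikit-image's `graycomatrix`/`graycoprops`; the minimal NumPy version below shows what the matrix actually counts. The input image is a random stand-in, not a CT slice:

```python
import numpy as np

def glcm(img, levels=8, dx=1, dy=0):
    """Normalized gray-level co-occurrence matrix for one pixel offset."""
    q = (img.astype(float) / img.max() * (levels - 1)).astype(int)  # quantize
    m = np.zeros((levels, levels))
    h, w = q.shape
    for i in range(h - dy):
        for j in range(w - dx):
            m[q[i, j], q[i + dy, j + dx]] += 1   # count co-occurring gray pairs
    return m / m.sum()

def glcm_features(m):
    i, j = np.indices(m.shape)
    return {
        "contrast":    float((m * (i - j) ** 2).sum()),
        "homogeneity": float((m / (1.0 + (i - j) ** 2)).sum()),
        "energy":      float((m ** 2).sum()),
    }

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64))   # stand-in for a CT patch
feats = glcm_features(glcm(img))
print(feats)
```

A vector of such features per image (over several offsets and angles, plus Gabor responses) is what would be fed to the SVM or MLP classifier in a pipeline like the paper's.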
APA, Harvard, Vancouver, ISO, and other styles
6

Al-Batah, Mohammad, Mowafaq Salem Alzboon, and Muhyeeddin Alqaraleh. "Superior Classification of Brain Cancer Types Through Machine Learning Techniques Applied to Magnetic Resonance Imaging." Data and Metadata 4 (January 1, 2025): 472. http://dx.doi.org/10.56294/dm2025472.

Full text
Abstract:
Brain cancer remains one of the most challenging medical conditions due to its intricate nature and the critical functions of the brain. Effective diagnostic and treatment strategies are essential, particularly given the high stakes involved in early detection. Magnetic Resonance (MR) imaging has emerged as a crucial modality for the identification and monitoring of brain tumors, offering detailed insights into tumor morphology and behavior. Recent advancements in artificial intelligence (AI) and machine learning (ML) have revolutionized the analysis of medical imaging, significantly enhancing diagnostic precision and efficiency. This study classifies three primary brain tumor types—glioma, meningioma, and general brain tumors—utilizing a comprehensive dataset comprising 15,000 MR images obtained from Kaggle. We evaluated the performance of six distinct machine learning models: K-Nearest Neighbors (KNN), Neural Networks, Logistic Regression, Support Vector Machine (SVM), Decision Trees, and Random Forests. Each model's effectiveness was assessed through multiple metrics, including classification accuracy (CA), Area Under the Curve (AUC), F1 score, precision, and recall. Our findings reveal that KNN and Neural Networks achieved remarkable classification accuracies of 98.5% and 98.4%, respectively, significantly surpassing the performance of other evaluated models. These results underscore the promise of ML algorithms, particularly KNN and Neural Networks, in improving the diagnostic process for brain cancer through MR imaging. Future research will focus on validating these models with real-world clinical data, aiming to refine and enhance diagnostic methodologies, thus contributing to the development of more accurate, efficient, and accessible tools for brain cancer diagnosis and management.
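A model bake-off like the one described — several classifiers scored on the same data with cross-validated accuracy — can be sketched as follows. Scikit-learn's small digits dataset stands in for the MR images, so the numbers are illustrative only:

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier

X, y = load_digits(return_X_y=True)   # toy image data standing in for MR scans

models = {
    "KNN":                 KNeighborsClassifier(n_neighbors=3),
    "Logistic Regression": LogisticRegression(max_iter=5000),
    "SVM":                 SVC(),
    "Decision Tree":       DecisionTreeClassifier(random_state=0),
    "Random Forest":       RandomForestClassifier(n_estimators=200, random_state=0),
}

# 5-fold cross-validated accuracy for each model, reported best-first
scores = {name: cross_val_score(m, X, y, cv=5).mean() for name, m in models.items()}
for name, acc in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{name:20s} mean CV accuracy = {acc:.3f}")
```

For a study like the one above, `cross_validate` with `scoring=["accuracy", "roc_auc_ovr", "f1_macro", "precision_macro", "recall_macro"]` would reproduce the full metric table rather than accuracy alone.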
APA, Harvard, Vancouver, ISO, and other styles
7

Hasan, Aseel, and Mahdi Mazinani. "DETECTION OF KERATOCONUS DISEASE DEPENDING ON CORNEAL TOPOGRAPHY USING DEEP LEARNING." Kufa Journal of Engineering 16, no. 1 (2025): 463–78. https://doi.org/10.30572/2018/kje/160125.

Full text
Abstract:
Keratoconus is a disease to whose diagnosis and management ML has contributed substantially. It is not a widely prevalent disease, and a research gap exists owing to the absence of standardized datasets for model training and evaluation. This work presents a novel dataset, which strengthens the CNN model's resilience and creates standards for assessing keratoconus diagnostic techniques. The research relies on data from patients examined at Jenna Ophthalmic Center in Baghdad. The proposed system works in three stages: pre-processing, feature extraction, and classification with machine learning algorithms including NB, KNN, ADA, DT, and CNN deep learning. The pre-processing stage involves cropping images to retain the relevant maps, which were subjected to contrast enhancement to improve image quality. The pre-processed data is then fed into machine learning (ML) algorithms and convolutional neural network (CNN) models, by which the four corneal maps were analyzed. The precision of the ML methods was quantified, yielding a precision score of 0.79 for the AdaBoost algorithm and an impressive score of 0.99 for the proposed CNN model, exemplifying its high accuracy and ability to surpass all machine learning approaches. Applying PCA for feature extraction before utilizing traditional ML algorithms and CNN helps achieve high-accuracy results.
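The PCA-before-classification step mentioned at the end of the abstract can be sketched as a scikit-learn pipeline; the synthetic data below stands in for flattened corneal-map features and is not the clinical dataset:

```python
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import cross_val_score
from sklearn.datasets import make_classification

# Synthetic high-dimensional features standing in for flattened corneal maps
X, y = make_classification(n_samples=400, n_features=200, n_informative=15,
                           random_state=0)

pipe = make_pipeline(
    StandardScaler(),
    PCA(n_components=20),          # compress 200 features to 20 components
    AdaBoostClassifier(random_state=0),
)
acc = cross_val_score(pipe, X, y, cv=5).mean()
print(f"mean CV accuracy with PCA front-end: {acc:.3f}")
```

Wrapping PCA inside the pipeline matters: the components are then re-fit on each training fold, so no information from the held-out fold leaks into the feature extraction.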
APA, Harvard, Vancouver, ISO, and other styles
8

Patel, Sumir, Veysel Kocaman, Mehmet Burak Sayici, and Nikhil Patel. "Auto-machine learning for opportunistic thyroid nodule detection in lung cancer screening chest CT." Journal of Clinical Oncology 42, no. 16_suppl (2024): e13639-e13639. http://dx.doi.org/10.1200/jco.2024.42.16_suppl.e13639.

Full text
Abstract:
e13639 Background: Automated Machine Learning (Auto-ML) in medical imaging is a process that allows non-experts to utilize machine learning techniques, opening the door for non-coder physician-driven exploitation of the technology. Auto-ML was applied for opportunistic detection of thyroid nodules in the context of low-dose lung cancer screening chest CT, facilitated by an innovative platform integration. By leveraging scans originally intended for lung cancer screening, suspicious appearing asymptomatic thyroid nodules can also be screened for where technically feasible. Methods: CT scans from the National Lung Screening Trial (NLST) dataset were utilized. This dataset contains low-dose chest CT examinations initially used for lung cancer screening. Annotation was carried out within a no-code web-based platform, Gesund.ai. A board-certified diagnostic radiologist annotated 100 CT examinations. A case was labeled "suspicious" if it presented a thyroid nodule of 1 cm or greater in diameter, while cases with no such nodules were deemed "not suspicious." The Auto-ML feature was then engaged within the software platform for parameter adjustments to refine the model without the need for manual coding. This phase was characterized by an iterative exploration of model performance, facilitated by the platform’s robust validation tools. Results: Upon training and validation, the Auto-ML model, cultivated from 100 annotated cases and assessed against a separate set of 130 cases, demonstrated a discernment accuracy of 0.51 in identifying "suspicious" thyroid nodules with a sensitivity and precision of 0.51 and 0.62 respectively. The AUC for this class stands at 0.69, indicating a moderate ability to distinguish between the presence and absence of thyroid nodules. 
This metric, while foundational, offers insight into the model's potential efficacy in real-world diagnostic scenarios, reflecting the initial capabilities of the platform's Auto-ML integration in enhancing diagnostic processes. Conclusions: The findings from this study highlight the potential of Auto-ML in revolutionizing opportunistic screening for thyroid nodules via lung cancer CT. The initial accuracy rate underscores the necessity for further refinement and validation of the model, but also demonstrates the promising potential of Auto-ML. Institutional Auto-ML would include extraction of appropriate studies through the no-code software platform integrated directly within the PACS framework. This process would ensure data immobility and foster a secure environment for dataset creation. Annotation would be carried out by multiple physician imagers within the same facility, enhancing the data’s accuracy and comprehensiveness specific for the institution’s patient population. This exploration highlights the strategic role of Auto-ML in advancing cancer patient care through innovative technological integration.
APA, Harvard, Vancouver, ISO, and other styles
9

Kumar Singh, Siddhanta, and Anand Sharma. "Revving up insights: machine learning-based classification of OBD II data and driving behavior analysis using g-force metrics." Bulletin of Electrical Engineering and Informatics 14, no. 3 (2025): 2188–97. https://doi.org/10.11591/eei.v14i3.9398.

Full text
Abstract:
This research work uses machine learning (ML) approaches to classify on-board diagnostics II (OBD II) data and g-force measures to provide a thorough analysis of driving behavior. The research paper effectively demonstrates the classification of driving behaviours using OBD II and g-force data. Driving behaviours are analyzed by using ML algorithms such as random forest (RF), AdaBoost, and K-nearest neighbors (KNN). The analysis goes beyond a summary by discussing how OBD II data, g-force metrics, and the algorithms interrelate to classify ten distinct driving behaviors (e.g., weaving, swerving, and sideslipping). The RF classifier achieved the highest accuracy, which reinforces the strength of the chosen models. The inclusion of comparisons with other techniques supports arguments about the model's performance. The related works section connects the references to the central topic by highlighting prior approaches and research studies related to OBD II and driver behaviour analysis. The goals of this study are improving the accuracy of driving behaviour classification, with implications for traffic safety, driver education, and insurance sectors.
APA, Harvard, Vancouver, ISO, and other styles
10

S, Suresh, and Dhanalakshmi S. "Tuberculosis prediction: performance analysis of machine learning models for early diagnosis and screening using symptom severity level data." International Journal of Basic and Applied Sciences 14, no. 1 (2025): 435–44. https://doi.org/10.14419/parmkr90.

Full text
Abstract:
Tuberculosis (TB) remains a formidable issue for worldwide public health and calls for swift and exact diagnostic strategies to achieve the best health results for those affected. A methodical machine learning (ML) sequence was diligently followed, featuring data preprocessing, feature choice, encoding, and the training of the model in a logical order. A detailed investigation was performed on six unique machine learning architectures, comprising the ANN, SVM, Decision Tree, Random Forest, XGBoost, and Logistic Regression, closely analyzing their key performance measures essential for measuring their effectiveness, including accuracy, precision, recall, F1-score, and AUC-ROC, hence providing an extensive view of their attributes and feasible uses across different sectors. The matter of class imbalance was diligently approached through the execution of the Synthetic Minority Over-sampling Technique (SMOTE), and the model's performance was scrutinized using 5-Fold Cross-Validation to affirm both consistency and relevance of the conclusions. Achieving a stellar accuracy of 99.55%, an impeccable recall of 100%, and a noteworthy F1-score of 99.54%, the ANN model is hailed as the premier model for tuberculosis forecasting. The Random Forest and SVM models also illustrated robust predictive performance, evidenced by elevated accuracy and AUC-ROC scores. In a contrasting view, Logistic Regression provided the least successful outcomes, suggesting that linear models could be inadequately matched to the attributes of this dataset. This study elucidates the efficacy of machine learning methodologies in the diagnostics of TB and emphasizes the critical role of symptom analysis and data-informed decision-making within the healthcare sector.
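The imbalance handling this abstract describes pairs SMOTE with 5-fold cross-validation; the key discipline is resampling only the training fold, never the held-out fold. SMOTE itself lives in the separate `imbalanced-learn` package, so the sketch below uses a simple random-oversampling stand-in to show the fold-wise pattern on synthetic data:

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.datasets import make_classification

# Imbalanced synthetic data standing in for the TB symptom dataset
X, y = make_classification(n_samples=600, weights=[0.9, 0.1], random_state=0)

def oversample(X, y, rng):
    """Random oversampling of the minority class (stand-in for SMOTE)."""
    minority = np.flatnonzero(y == 1)
    extra = rng.choice(minority, size=(y == 0).sum() - minority.size, replace=True)
    idx = np.concatenate([np.arange(len(y)), extra])
    return X[idx], y[idx]

rng = np.random.default_rng(0)
f1s = []
for tr, te in StratifiedKFold(n_splits=5, shuffle=True, random_state=0).split(X, y):
    X_tr, y_tr = oversample(X[tr], y[tr], rng)   # resample the training fold only
    clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    f1s.append(f1_score(y[te], clf.predict(X[te])))

print(f"mean minority-class F1 over 5 folds: {np.mean(f1s):.3f}")
```

Resampling before the split instead would place synthetic near-duplicates of test samples into the training set and inflate every reported metric.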
APA, Harvard, Vancouver, ISO, and other styles
More sources

Dissertations / Theses on the topic "Modelli diagnostici ML"

1

Navicelli, Andrea, Mario Tucci, and Filippo De Carlo. "Analisi ed applicazione di modelli diagnostici e prognostici per guasti e prestazioni di componenti di impianti industriali nell’era I4.0." Doctoral thesis, 2021. http://hdl.handle.net/2158/1234822.

Full text
Abstract:
The fundamental role that maintenance plays in the operating costs and productivity of industrial plants has led companies and researchers to shift their interest to this issue. The latest frontier of innovation in the maintenance field, made possible also by the advent of the fourth industrial revolution, which promotes the sensorisation and interconnection of all plant machinery, is predictive maintenance. It aims to obtain an accurate forecast of the useful life of industrial plants' components in order to optimise the scheduling of interventions in the field. The study starts from an accurate review of the scientific literature concerning the diagnostic and prognostic techniques applied to industrial plant components, necessary to understand the different models developed according to the type of component and failure mode under analysis. Subsequently the focus shifts to the maintenance 4.0 concept in order to map all the characteristics associated with the Industry 4.0 paradigm and their possible applications to maintenance operations. The study then led to the design, development and validation of the methodologies necessary for the real-time application of advanced diagnostic and prognostic models, both statistical and machine learning, necessary for the field implementation of a predictive maintenance system. 
Thanks to the application of the proposed methodologies to a case study, it was possible not only to validate the proposed models but also to define the IT architecture necessary for their correct implementation on the plant's Distributed Control System (DCS) according to the type of component and the fault under analysis. The tested and validated models showed high diagnostic performance, especially the machine learning models based on Support Vector Machines (SVM). Ultimately, this thesis shows in detail all the steps necessary for the development of an effective predictive maintenance system in the plant: starting from the analysis of failure modes and component sensorisation, then moving on to the development of real-time diagnostic and prognostic models, up to the build-up of the interface for visualising the results of the analyses carried out, also analysing the IT architecture necessary for its correct operation.
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Modelli diagnostici ML"

1

Jan, Saifullah, Aiman, Bilal Khan, and Muhammad Arshad. "Exploring COVID-19 Classification and Object Detection Strategies." In Deep Cognitive Modelling in Remote Sensing Image Processing. IGI Global, 2024. http://dx.doi.org/10.4018/979-8-3693-2913-9.ch009.

Full text
Abstract:
The overlapping imaging characteristics of COVID-19 viral pneumonia and non-COVID-19 viral pneumonia chest X-rays (CXRs) make differentiation difficult for radiologists. Machine learning (ML) has demonstrated promising outcomes in a range of medical sectors, enhancing diagnostic accuracy through its interaction with radiological tests. The potential contribution of ML models in assisting radiologists in discriminating COVID-19 from non-COVID-19 viral pneumonia from CXRs, on the other hand, deserves further examination and exploration. The goal of this study is to empirically assess ML models' capacity to classify X-ray images into COVID-19, pneumonia, and normal cases. The study evaluates the efficacy of K-nearest Neighbor (KNN), random forest (RF), AdaBoost (AB), and neural networks (NN) with various hidden neuron configurations using a wide range of performance measures. These metrics evaluate the area under the curve (AUC), classification accuracy (CA), F1 score (F1), precision, and recall, resulting in a comprehensive evaluation technique. ROC analysis is used to gain a thorough knowledge of the models' discriminating skills. The results show that NN models, particularly those with 100 and 150 hidden neurons, outperform in all criteria, proving their ability to reliably categorize medical disorders. Notably, the study emphasizes the difficulties in separating COVID-19 from pneumonia, emphasizing the importance of strong classification methods. While the study provides useful insights, its drawbacks include the use of a single dataset, the absence of more sophisticated deep learning architectures, and a lack of interpretability analyses. Nonetheless, the study adds to the developing area of medical picture categorization, directing future attempts to improve diagnosis accuracy and widen the use of machine learning in healthcare. 
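The hidden-neuron sweep this abstract reports can be sketched with scikit-learn's `MLPClassifier`; the digits dataset stands in for the CXR images, and the hidden-layer sizes mirror the configurations mentioned above:

```python
from sklearn.datasets import load_digits
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

X, y = load_digits(return_X_y=True)   # toy image data standing in for CXRs

# Compare single-hidden-layer networks of increasing width
accs = {}
for hidden in (50, 100, 150):
    mlp = MLPClassifier(hidden_layer_sizes=(hidden,), max_iter=300, random_state=0)
    accs[hidden] = cross_val_score(mlp, X, y, cv=3).mean()
    print(f"{hidden:3d} hidden neurons: mean CV accuracy = {accs[hidden]:.3f}")
```

On real CXR data the same loop would also log AUC, F1, precision, and recall per configuration, matching the evaluation grid the chapter describes.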
The findings highlight the utility of NN models in medical diagnostics and pave the way for future study in this vital area of technology and healthcare.
APA, Harvard, Vancouver, ISO, and other styles
2

Ramkumar, P., and Sivaprakash C. "Machine Learning Techniques for Automatic Diagnosis of Glaucoma Detection." In Advances in Healthcare Information Systems and Administration. IGI Global, 2025. https://doi.org/10.4018/979-8-3693-7888-5.ch005.

Full text
Abstract:
Glaucoma is a primary cause of permanent blindness, and early and precise detection is essential to prevent significant visual loss. In the context of diagnosing glaucoma, this abstract focuses on three machine learning techniques: decision trees, linear regression, and support vector machines (SVM). Despite being primarily utilised for predictive modelling, linear regression has been modified to diagnose glaucoma by examining the correlation between continuous risk factors and diagnostic results, which helps identify high-risk patients early on. Because SVMs are good at handling high-dimensional data, they can be used to distinguish healthy from glaucomatous eyes by identifying the ideal hyperplane that maximises the margin between the two classes in the feature space. Compared to conventional methods, these ML techniques greatly improve diagnostic accuracy, consistency, and speed; nonetheless, issues such as the requirement for large, diverse datasets and the need to guarantee reliable results remain.
APA, Harvard, Vancouver, ISO, and other styles
3

Prasad, G. "Machine Learning-Based Solutions for Aerospace Engineering." In Innovative Machine Learning Applications in the Aerospace Industry. IGI Global, 2025. https://doi.org/10.4018/979-8-3693-7525-9.ch002.

Full text
Abstract:
The incorporation of machine learning (ML) in aircraft engineering has transformed the design, analysis, and operation of intricate aerospace systems. This study examines the present and developing applications of machine learning techniques in critical domains like aircraft design optimisation, defect detection and diagnostics, flight control systems, and predictive maintenance. Utilising extensive information from simulations, sensors, and real-time operations, machine learning models facilitate more efficient decision-making, improved system reliability, and decreased operational costs. Moreover, progress in deep learning, reinforcement learning, and neural networks is being progressively utilised for applications spanning aerodynamic modelling to autonomous flight control. This study emphasises the difficulties related to data quality, interpretability, and model validation in safety-critical aircraft contexts.
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Modelli diagnostici ML"

1

Korish, M., M. Ibrahim, L. Tealdi, and A. Al Hanaee. "Real-Time Production Optimization: A Machine Learning Approach to Virtual Flow Metering." In GOTECH. SPE, 2025. https://doi.org/10.2118/224561-ms.

Full text
Abstract:
Abstract This study investigates the application of machine learning (ML) techniques for virtual flow metering (VFM) in oil wells. To develop a robust and accurate VFM model, the new technique was tested on two different fields: one offshore Egypt and the other onshore Iraq. The resulting comprehensive dataset included detailed production data, well parameters, and operational information. Various ML algorithms, including Random Forest, Support Vector Regression, and Artificial Neural Networks, were rigorously tested and compared to identify the optimal model for VFM. The selected model was then calibrated against production test data to ensure accurate and reliable predictions. The calibrated ML model enables the daily verification of production allocation, providing real-time insights into the performance of individual wells. By continuously monitoring production rates, anomalies such as production declines, unexpected shutdowns, or changes in reservoir behavior can be promptly detected. These insights facilitate efficient decision-making, allowing for timely interventions and optimization of production strategies. Additionally, the ML-based VFM model supports accurate reservoir allocation by providing reliable estimates of individual well contributions, enabling informed decisions on production allocation and field development planning. After rigorous calibration, the model's performance was further validated through a blind test, in which it was applied to a new set of wells without prior training. The results of the blind test confirmed the model's usability and its ability to provide accurate predictions. Moreover, the daily application of the model has demonstrated reasonable agreement with well test data, further solidifying its reliability and practical value. The findings of this study demonstrate the significant potential of ML-based VFM to enhance the efficiency and effectiveness of oil production operations in the Egyptian and Iraqi assets.
By leveraging advanced ML techniques and a comprehensive dataset, this approach offers a reliable and cost-effective solution for improving production performance diagnostics, optimizing reservoir management, and ultimately maximizing oil recovery.
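The model-selection step described in this abstract (training several candidate regressors and comparing them before calibration) can be sketched in a few lines. This is an illustrative example only, not the authors' code: the well data are synthetic, and the input features (wellhead pressure, temperature, choke opening) are assumptions about what such a VFM model might consume.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.svm import SVR
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 500
# Hypothetical inputs: wellhead pressure, temperature, and choke opening
X = rng.uniform([50.0, 40.0, 10.0], [300.0, 120.0, 100.0], size=(n, 3))
# Synthetic oil rate standing in for measured test-separator rates
y = 2.0 * X[:, 0] + 0.5 * X[:, 1] + 3.0 * X[:, 2] + rng.normal(0.0, 20.0, n)

# The three algorithm families named in the abstract
candidates = {
    "Random Forest": RandomForestRegressor(n_estimators=100, random_state=0),
    "SVR": SVR(C=100.0),
    "ANN": MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0),
}
for name, model in candidates.items():
    r2 = cross_val_score(model, X, y, cv=3, scoring="r2").mean()
    print(f"{name}: mean cross-validated R^2 = {r2:.3f}")
```

Cross-validated R^2 is used here as the comparison metric; in practice the winning model would then be calibrated against periodic well-test data, as the study describes.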
APA, Harvard, Vancouver, ISO, and other styles
2

Hassani, H., A. Shahbazi, A. Yusifov, et al. "Calculation of Production Back Allocation Using Machine Learning Algorithms." In GOTECH. SPE, 2024. http://dx.doi.org/10.2118/219392-ms.

Full text
Abstract:
Abstract The petroleum industry is reliant on precise and efficient back allocation, a process that calculates individual well production rates from shared facilities or multi-well platforms. Especially in mature facilities and legacy assets, traditional measurement techniques often fail to provide the necessary accuracy due to a lack of pre-installed flow meters and individual measurement mechanisms. Furthermore, these methods frequently require additional interventions, a factor that could potentially defer production, incur significant costs, and require extensive supply chain management. Despite these challenges, back allocation remains an essential process for effective reservoir, well, and field production performance management, as well as for ensuring accurate revenue allocation throughout a field's economic life cycle. This study explores the application of machine learning (ML) as an advanced, data-driven approach to overcome the complexities of back allocation and streamline the process with increased accuracy and reduced interventions. Three ML models were implemented for this purpose, namely XGBoost, Random Forest, and LightGBM. These models were designed to predict individual well oil rates based on various parameters easily obtainable at the wellhead, including wellhead pressure, temperature, and choke size. A meticulous preprocessing stage ensured the data were well suited to the ML models, and hyperparameter tuning was applied to enhance model accuracy and performance. Of the three models, XGBoost showed remarkable performance, producing high R2 scores of 0.97, 0.98, 0.96, and 0.96 for the four wells studied. These scores underscore the model's strong capability to predict individual well oil rates with high precision, highlighting the potential of ML in addressing complex problems in the oil and gas sector.
The study's findings present a promising advancement in the application of ML, particularly XGBoost, for accurate back allocation in combined production systems. The model's superior performance and prediction accuracy pave the way for improved decision-making related to reservoir management, well diagnostics, and cost optimization. The utilization of ML for back allocation holds considerable promise for boosting operational efficiency and profitability in oil and gas production systems. Looking ahead, further research will seek to apply the model on a larger scale and test its efficiency across varied field conditions and scenarios. This investigation will help to further validate the substantial advantages of employing ML methodologies in the petroleum industry.
APA, Harvard, Vancouver, ISO, and other styles
3

Kayode, Babatope O., Karl D. Stephen, and Abdullah Kaba. "Application of Data Science Algorithms to Establish a Novel Parameterization Approach for Static and Dynamic Models." In SPE Symposium: Leveraging Artificial Intelligence to Shape the Future of the Energy Industry. SPE, 2023. http://dx.doi.org/10.2118/214476-ms.

Full text
Abstract:
Abstract Numerical simulation results are the basis of numerous oil and gas field developments. These numerical simulation (dynamic) models are built on 3D geological models, which are constructed using core and log data obtained from wells. This paper describes the application of artificial intelligence (AI) algorithms for the parameterization of static and dynamic modeling processes. Accordingly, a hypothetical 3D geological model was created, and porosity and permeability were distributed using sequential Gaussian simulation. Then, petrophysical rock types (PRTs) were defined in the 3D space as a function of porosity and permeability using a hypothetical Winland R35 equation. Finally, hypothetical saturation-height functions (SHFs) were defined for the different PRTs to populate water saturation in the 3D geological model. Subsequently, some wells were randomly defined in the 3D model to obtain the logs of porosity, permeability, SHF, PRT, repeat formation tester (RFT) pressure, and datum pressures used in this study. A multivariate Gaussian regression was applied for anomaly detection while filtering the core porosity and permeability data. Subsequently, a fixed-window average was used to detect the boundaries of core data stationarity and propose the optimum reservoir zonation required to describe the internal heterogeneities of the reservoir. Then, we deployed the k-means clustering algorithm to determine the PRTs and SHFs based on the core and log data derived from the hypothetical geological model. Finally, we used clustering-based pattern recognition to group well datum pressures into homogeneous sets and create a connected reservoir region (CRR) map to be used as an input to the 3D permeability distribution. Our results demonstrate the value of additional diagnostics that can be used in conjunction with the traditional semi-log plot of porosity and permeability.
This additional diagnostic approach is a semi-log plot of permeability versus depth, which can help check whether intra-reservoir heterogeneities observable in core data have been preserved in the 3D model. In our case, a 3D model created using the core and log data from the hypothetical model and honoring the internal reservoir architecture resulted in a better history match regarding the hypothetical geo-model's RFT pressure signature. Our results further demonstrate that PRT and SHF derived from k-means clustering are sufficiently similar to those of the hypothetical model. Time series anomaly filtering of pressures helped detect incorrect well data that may otherwise have gone unnoticed. Using the nearest-neighbor property distribution resulted in a geological model whose diagnostic plots indicated an excellent match with core data and allowed a better assessment of modeling uncertainties. The ML approaches presented in this study could help obtain data-derived PRT and SHF to complement Winland's interpretation when Mercury Injection Capillary Pressure (MICP) experiments are limited or unavailable, saving both time and cost. Using the fixed window averaging helps optimize the geological model zone assessment, resulting in a better intra-reservoir architecture. Finally, we derive insights into a more efficient core acquisition plan.
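One step from this workflow, deriving rock types by k-means clustering of core porosity and permeability, can be sketched as follows. This is an illustrative assumption-laden example, not the authors' implementation: the core data are synthetic, and the choice of three clusters is arbitrary.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)
# Three synthetic rock types, each with its own porosity / log-permeability trend
phi = np.concatenate([rng.normal(m, 0.02, 200) for m in (0.10, 0.18, 0.26)])
logk = np.concatenate([rng.normal(m, 0.3, 200) for m in (0.5, 1.5, 2.8)])

# Standardize so porosity and log10(permeability) contribute comparably
features = StandardScaler().fit_transform(np.column_stack([phi, logk]))

km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(features)
prt = km.labels_  # cluster index plays the role of a petrophysical rock type
for c in range(3):
    mask = prt == c
    print(f"PRT {c}: mean phi = {phi[mask].mean():.3f}, "
          f"mean log10(k) = {logk[mask].mean():.2f}, n = {mask.sum()}")
```

In the study, each resulting cluster would then be assigned its own saturation-height function, complementing a Winland-style interpretation when MICP data are scarce.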
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Modelli diagnostici ML"

1

Malkinson, Mertyn, Irit Davidson, Moshe Kotler, and Richard L. Witter. Epidemiology of Avian Leukosis Virus-subtype J Infection in Broiler Breeder Flocks of Poultry and its Eradication from Pedigree Breeding Stock. United States Department of Agriculture, 2003. http://dx.doi.org/10.32747/2003.7586459.bard.

Full text
Abstract:
Objectives 1. Establish diagnostic procedures to identify tolerant carrier birds based on a) isolation of ALV-J from blood, b) detection of group-specific antigen in cloacal swabs and egg albumen. Application of these procedures to broiler breeder flocks with the purpose of removing virus-positive birds from the breeding program. 2. Survey the ALV-J infection status of foundation lines to estimate the feasibility of the eradication program. 3. Investigate virus transmission through the embryonated egg (vertical) and between chicks in the early post-hatch period (horizontal). Establish a model for limiting horizontal spread by analyzing parameters operative in the hatchery and brooder house. 4. Compare the pathogenicity of ALV-J isolates for broiler chickens. 5. Determine whether ALV-J poses a human health hazard by examining its replication in mammalian and human cells. Revisions. The eradication objective had to be terminated in the second year following the closing down of the Poultry Breeders Union (PBU) in Israel. This meant that their foundation flocks ceased to be available for selection. Instead, the following topics were investigated: a) comparison of commercial breeding flocks with and without myeloid leukosis (matched controls) for viremia and serum antibody levels; b) pathogenicity of Israeli isolates for turkey poults; c) improvement of a diagnostic ELISA kit for measuring ALV-J antibodies. Background. ALV-J, a novel subgroup of the avian leukosis virus family, was first isolated in 1988 from broiler breeders presenting myeloid leukosis (ML). The extent of its spread among commercial breeding flocks was not appreciated until the disease appeared in the USA in 1994, when it affected several major breeding companies almost simultaneously.
In Israel, ML was diagnosed in 1996 and was traced to grandparent flocks imported in 1994-5; by 1997-8, ML was present in one third of the commercial breeding flocks. It was then realized that ALV-J transmission was following a similar pattern to that of other exogenous ALVs, but because of its unusual genetic composition, the virus was able to establish an extended tolerant state in infected birds. Although losses from ML in affected flocks were only somewhat higher than normal, both immunosuppression and depressed growth rates were encountered in affected broiler flocks, reducing their profitability. Conclusions. As a result of the contraction in the number of international primary broiler breeders and the exchange of male and female lines among them, ALV-J contamination of broiler breeder flocks affected the broiler industry worldwide within a short time span. The Israeli national breeding company (PBU) played out this scenario and presented us with an opportunity to apply existing information to contain the virus. This BARD project, based on the Israeli experience and with the aid of the ADOL collaborative effort, has managed to offer solutions for identifying and eliminating infected birds based on exhaustive virological and serological tests. The analysis of factors that determine the efficiency of horizontal transmission of the virus in the hatchery resulted in the workable solution of raising young chicks in small groups through the brooder period. These results were made available to primary breeders as a strategy for reducing viral transmission. Based on phylogenetic analysis of selected Israeli ALV-J isolates, these could be divided into two groups that reflected the countries of origin of the grandparent stock. Implications. The availability of a simple and reliable means of screening day-old chicks for vertical transmission is highly desirable in countries that rely on imported breeding stock for their broiler industry.
The possibility that ALV-J may be transmitted to human consumers of broiler meat was discounted experimentally.
APA, Harvard, Vancouver, ISO, and other styles
2

99mTc SPECT-CT, Consensus QIBA Profile. Chair Yuni Dewaraja and Robert Miyaoka. Radiological Society of North America (RSNA)/Quantitative Imaging Biomarkers Alliance (QIBA), 2019. https://doi.org/10.1148/qiba/20191021.

Full text
Abstract:
The quantification of 99mTc-labeled biomarkers can add unique value in many different settings, ranging from clinical trials of investigational new drugs to the treatment of individual patients with marketed therapeutics. For example, goals of precision medicine include using companion radiopharmaceutical diagnostics as just-in-time, predictive biomarkers for selecting patients to receive targeted treatments, customizing doses of internally administered radiotherapeutics, and assessing responses to treatment. This Profile describes quantitative outcome measures that represent proxies of target concentration or target mass in topographically specific volumes of interest (VOIs). These outcome measures are usually expressed as the percent injected dose (i.e., radioactivity) per mL of tissue (%ID/mL), a standard uptake value ratio (SUVr), or a target-to-background ratio (TBR). In this Profile, targeting is not limited to any single mechanism of action. Targeting can be based on interaction with a cell surface protein, an intracellular complex after diffusion, protein-mediated transport, endocytosis, or mechanical trapping in a capillary bed, as in the case of transarterial administration of embolic microspheres. Regardless, the Profile focuses on quantification in well-defined volumes of interest. Technetium-99m-based dopamine transporter imaging agents, such as TRODAT, link nearly directly to some aspects of the predecessor profile on 123I-ioflupane for neurodegenerative disorders (see www.qibawiki.rsna.org). Cancer is often a base case of convenience for new material in this Profile, but the intent is to create methods that can be useful in other therapeutic areas where the diseases are characterized by spatially limited anatomical volumes, such as lung segments, or multifocal aggregations of targets, such as white blood cell surface receptors on pulmonary nodules in patients with sarcoidosis.
Neoplastic masses that can be measured with x-ray computed tomography (CT) or magnetic resonance imaging (MRI) are the starting point. However, the intent is to create a profile that can be extrapolated to diseases in other therapeutic areas that are also associated with focal or multifocal pathology, such as pulmonary granulomatous diseases of autoimmune or infectious etiology, non-oncological diseases of organs such as polycystic kidney disease, and the like. The criteria for measurability are based on the current resolution of most SPECT-CT systems in clinical practice and are independent of criteria for measurability in other contexts. For this SPECT Profile, conformance requires that a “small” VOI be greater than 30 mL to be measurable. It is understood that much smaller VOIs can sometimes exhibit high conspicuity on SPECT, but these use cases are beyond the scope of this Profile and will not be tested for conformance in this version. It is left to individual stakeholders to show the extent to which they can achieve conformance when measuring VOIs smaller than 30 mL. The detection of smaller changes during clinical trials of large groups can be achieved by referring to the QIBA companion guidance on powering trials. The Claims (Section 2) assert that compliance with the specifications described in this Profile will produce cross-sectional estimates of the concentration of radioactivity [kBq/mL] in a volume of interest (VOI) or a target-to-background ratio (TBR) within a defined confidence interval (CI), and distinguish true biological change from system variance (i.e., measurement error) in individual patients or in clinical trials of many patients studied longitudinally with 99mTc SPECT agents. Both claims are founded on observations that target density varies between patients with the same disease as well as within patients with multi-focal disease.
The Activities (Section 3) describe the requirements placed on the Actors who need to achieve the Claim. Section 3 specifies what the Actors must do in order to estimate the amount of radioactivity in a volume of interest, expressed in kBq/mL (ideal) or as a TBR (acceptable), within a 95% CI surrounding the true value. Measurands such as %ID/mL are targets for nonclinical studies in animal models that use terminal sacrifice to establish ground truth for imaging studies. TBRs can be precarious, as the assumption that the physiology of the background region matches that of the volume of interest can sometimes be hard to accept. It is up to each individual stakeholder to qualify the background regions used in their own use case. This Profile qualifies only a few, in some very limited contexts, as examples. The Assessment Procedures (Section 4) for evaluating specific requirements are defined as needed. The requirements are focused on achieving sufficient accuracy and avoiding unnecessary variability of the measurements. The clinical performance target is to achieve a 95% confidence interval for concentration in units of kBq/mL (kilobecquerels per milliliter), %ID/mL (percent injected dose per milliliter), or TBR with both a reproducibility and a repeatability of +/- 8% within a single individual under zero-biological-change conditions. This document is intended to help clinicians basing decisions on these biomarkers, imaging staff generating measurements of these biomarkers, vendors developing related products, purchasers of such products, and investigators designing trials. Note that this document only states requirements to achieve the claims, not “requirements on standard of care” nor compliance with any particular protocol for treating participants in clinical trial settings. Conformance to this Profile is secondary to properly caring for patients or adhering to the requirements of a protocol.
QIBA Profiles addressing other imaging biomarkers using CT, MRI, PET and Ultrasound can be found at www.qibawiki.rsna.org.
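The outcome measures named in this Profile (%ID/mL and TBR) are simple ratios, and a toy calculation makes their definitions concrete. All activity values and the VOI volume below are invented for illustration; they are not from the Profile.

```python
# Hypothetical inputs for one target VOI
injected_dose_kbq = 740_000.0      # administered activity, kBq
voi_activity_kbq = 1_850.0         # decay-corrected activity measured in the VOI
voi_volume_ml = 35.0               # VOI volume, above the 30 mL measurability floor
background_conc_kbq_ml = 8.0       # concentration in a qualified background region

# Concentration in the VOI (kBq/mL)
target_conc = voi_activity_kbq / voi_volume_ml

# Percent injected dose per mL of tissue (%ID/mL)
pct_id_per_ml = 100.0 * (voi_activity_kbq / injected_dose_kbq) / voi_volume_ml

# Target-to-background ratio (TBR)
tbr = target_conc / background_conc_kbq_ml

print(f"concentration = {target_conc:.2f} kBq/mL")
print(f"%ID/mL = {pct_id_per_ml:.5f}")
print(f"TBR = {tbr:.2f}")
```

Under the Profile's performance target, repeated measurements of such a concentration in the same subject under zero-biological-change conditions should agree within +/- 8% at 95% confidence.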
APA, Harvard, Vancouver, ISO, and other styles