Academic literature on the topic 'K-fold validation'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'K-fold validation.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "K-fold validation"

1

Li, Jessie. "Asymptotics of K-Fold Cross Validation." Journal of Artificial Intelligence Research 78 (November 14, 2023): 491–526. http://dx.doi.org/10.1613/jair.1.13974.

Abstract:
This paper investigates the asymptotic distribution of the K-fold cross validation error in an i.i.d. setting. As the number of observations n goes to infinity while keeping the number of folds K fixed, the K-fold cross validation error is √ n-consistent for the expected out-of-sample error and has an asymptotically normal distribution. A consistent estimate of the asymptotic variance is derived and used to construct asymptotically valid confidence intervals for the expected out-of-sample error. A hypothesis test is developed for comparing two estimators’ expected out-of-sample errors and a su
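
To make the objects in this abstract concrete, the display below writes the K-fold cross-validation error and the limit statement in our own notation (the symbols are ours, not the paper's): the n i.i.d. observations are split into folds I_1, ..., I_K, f̂_{-k} denotes the model fitted without fold k, L is the loss, and Err is the expected out-of-sample error.

```latex
\[
\widehat{\mathrm{CV}}_{K}
  = \frac{1}{K}\sum_{k=1}^{K}\frac{1}{\lvert I_k\rvert}\sum_{i\in I_k}
    L\bigl(y_i,\hat f_{-k}(x_i)\bigr),
\qquad
\sqrt{n}\,\bigl(\widehat{\mathrm{CV}}_{K}-\mathrm{Err}\bigr)
  \xrightarrow{\;d\;} \mathcal{N}\bigl(0,\sigma^{2}\bigr).
\]
```

A consistent estimate of σ² then yields the asymptotically valid confidence interval for Err that the abstract describes.
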
2

Her Lee, Torng, and Griffin Msefula. "Predicting GDP with Machine Learning Technique." International Journal of Business & Management Studies 05, no. 08 (2024): 35–46. http://dx.doi.org/10.56734/ijbms.v5n8a5.

Abstract:
This paper proposes a method for reducing model errors in regressions when modelling macroeconomic variables by using machine learning algorithms and traditional time series regression models. In this paper, machine learning models are subjected to repeated k-fold cross validation and hyperparameter tuning. The linear model uses repeated k-fold cross validation, on the other hand, the traditional time series model Mixed Data Sampling Auto Regressive Distribution Lag model is run without repeated k-fold cross validation and hyperparameter tuning. The results show that integrating repeated k-fol
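
As a rough illustration of the procedure this abstract describes, the sketch below runs repeated k-fold cross-validation together with a small hyperparameter grid search in scikit-learn; the dataset, model, grid, and fold settings are placeholder choices of ours, not the paper's setup.

```python
# Hedged sketch: repeated k-fold CV wrapped around a grid search.
from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge
from sklearn.model_selection import GridSearchCV, RepeatedKFold

X, y = make_regression(n_samples=200, n_features=10, noise=5.0, random_state=0)

cv = RepeatedKFold(n_splits=10, n_repeats=5, random_state=0)   # 10-fold CV repeated 5 times
search = GridSearchCV(
    Ridge(),
    param_grid={"alpha": [0.01, 0.1, 1.0, 10.0]},
    scoring="neg_mean_squared_error",
    cv=cv,
)
search.fit(X, y)
print(search.best_params_, -search.best_score_)  # chosen alpha and its mean CV MSE over all 50 splits
```
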
3

Nti, Isaac Kofi, Owusu Nyarko-Boateng, and Justice Aning. "Performance of Machine Learning Algorithms with Different K Values in K-fold Cross-Validation." International Journal of Information Technology and Computer Science 13, no. 6 (2021): 61–71. http://dx.doi.org/10.5815/ijitcs.2021.06.05.

Abstract:
The numerical value of k in a k-fold cross-validation training technique of machine learning predictive models is an essential element that impacts the model’s performance. A right choice of k results in better accuracy, while a poorly chosen value for k might affect the model’s performance. In literature, the most commonly used values of k are five (5) or ten (10), as these two values are believed to give test error rate estimates that suffer neither from extremely high bias nor very high variance. However, there is no formal rule. To the best of our knowledge, few experimental studies attemp
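
The question the abstract poses, namely how the chosen value of k moves the estimate, can be probed directly; in the sketch below the model, dataset, and candidate k values are illustrative assumptions of ours, not the authors' experimental design.

```python
# Hedged sketch: the same classifier evaluated under several values of k.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold, cross_val_score

X, y = load_breast_cancer(return_X_y=True)
model = LogisticRegression(max_iter=5000)

for k in (3, 5, 10, 20):
    scores = cross_val_score(model, X, y, cv=KFold(n_splits=k, shuffle=True, random_state=0))
    print(f"k={k:2d}  mean accuracy={scores.mean():.3f}  std={scores.std():.3f}")
```
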
4

Soper, Daniel S. "Greed Is Good: Rapid Hyperparameter Optimization and Model Selection Using Greedy k-Fold Cross Validation." Electronics 10, no. 16 (2021): 1973. http://dx.doi.org/10.3390/electronics10161973.

Abstract:
Selecting a final machine learning (ML) model typically occurs after a process of hyperparameter optimization in which many candidate models with varying structural properties and algorithmic settings are evaluated and compared. Evaluating each candidate model commonly relies on k-fold cross validation, wherein the data are randomly subdivided into k folds, with each fold being iteratively used as a validation set for a model that has been trained using the remaining folds. While many research studies have sought to accelerate ML model selection by applying metaheuristic and other search metho
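
The sketch below is our loose paraphrase of the greedy idea as the abstract presents it: spend each additional fold evaluation on the candidate whose running mean looks best, instead of scoring every candidate on all k folds. It is not Soper's reference implementation, and the dataset, candidates, and budget are assumptions of ours.

```python
# Hedged sketch of greedy k-fold model selection under a fixed evaluation budget.
import numpy as np
from sklearn.base import clone
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold

X, y = load_breast_cancer(return_X_y=True)
candidates = {c: LogisticRegression(C=c, max_iter=5000) for c in (0.01, 0.1, 1.0, 10.0)}
folds = list(KFold(n_splits=5, shuffle=True, random_state=0).split(X, y))
scores = {c: [] for c in candidates}   # fold scores accumulated so far, per candidate
budget = 12                            # total number of fold evaluations allowed

def run_fold(model, fold_idx):
    train, test = folds[fold_idx]
    return clone(model).fit(X[train], y[train]).score(X[test], y[test])

for c in candidates:                                   # seed: one fold per candidate
    scores[c].append(run_fold(candidates[c], 0))
for _ in range(budget - len(candidates)):              # greedy: feed the current leader
    best = max(scores, key=lambda c: np.mean(scores[c]))
    if len(scores[best]) == len(folds):                # leader already fully evaluated
        break
    scores[best].append(run_fold(candidates[best], len(scores[best])))

print({c: (len(s), round(float(np.mean(s)), 3)) for c, s in scores.items()})
```
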
5

Wong, Tzu-Tsung, and Po-Yang Yeh. "Reliable Accuracy Estimates from k-Fold Cross Validation." IEEE Transactions on Knowledge and Data Engineering 32, no. 8 (2020): 1586–94. http://dx.doi.org/10.1109/tkde.2019.2912815.

6

Verak, Nutveesa, Phaklen Ehkan, Ruzelita Ngadiran, et al. "Comparative Analysis of ELM-Based Blind Image Quality Assessment with K-Fold Cross-Validation." Journal of Advanced Research in Applied Sciences and Engineering Technology 64, no. 4 (2025): 213–22. https://doi.org/10.37934/araset.64.4.213-222.

Abstract:
Extreme Learning Machine (ELM) is a potent algorithm for training single hidden layer feedforward neural networks (SLFNs), celebrated for its rapid convergence and promising performance. This study delves into the efficacy of ELM in image quality assessment, examining the impact of K-fold Cross-Validation. Experimental findings reveal that K-fold Cross-Validation enhances the ELM model's robustness, resulting in strong correlations between predicted and actual quality scores. However, for large databases, ELM without K-fold Cross-Validation offers slightly lower performance but significantly s
7

Sibuea, Nuraini, Syamsudhuha Syamsudhuha, Arisman Adnan, and Divo Dharma Silalahi. "Robust Method with Cross-Validation in Partial Least Square Regression." Journal of Mathematics, Computations and Statistics 8, no. 1 (2025): 69–79. https://doi.org/10.35580/jmathcos.v8i1.4766.

Abstract:
Partial Least Squares Regression (PLSR) is a multivariate analysis technique used to handle data with highly correlated predictor variables or when the number of predictor variables exceeds the number of samples. PLSR is not robust to outliers, which can disrupt the stability and accuracy of the model. Cross-validation is an important approach to improve model reliability, particularly in data that contains outliers. This study aims to evaluate the effectiveness of K-fold cross-validation and nested cross-validation in a PLSR model using NIRS data from oil palm plantation soil that contains ou
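
For readers unfamiliar with the schemes being compared, the sketch below shows a minimal nested cross-validation around a PLS regression in scikit-learn: the inner k-fold loop picks the number of components, and the outer loop estimates generalization error. The synthetic data, grid, and fold counts are our assumptions, not the study's NIRS pipeline.

```python
# Hedged sketch: nested CV (inner loop tunes, outer loop estimates error) for PLSR.
from sklearn.cross_decomposition import PLSRegression
from sklearn.datasets import make_regression
from sklearn.model_selection import GridSearchCV, KFold, cross_val_score

X, y = make_regression(n_samples=120, n_features=50, n_informative=8, noise=2.0, random_state=0)

inner = KFold(n_splits=5, shuffle=True, random_state=1)   # tunes n_components
outer = KFold(n_splits=5, shuffle=True, random_state=2)   # estimates generalization error

tuned = GridSearchCV(PLSRegression(), {"n_components": [2, 4, 6, 8, 10]},
                     scoring="neg_mean_squared_error", cv=inner)
outer_scores = cross_val_score(tuned, X, y, scoring="neg_mean_squared_error", cv=outer)
print("nested-CV estimate of MSE:", -outer_scores.mean())
```
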
8

Alptekin, Ahmet, and Olcay Kursun. "Miss One Out: A Cross-Validation Method Utilizing Induced Teacher Noise." International Journal of Pattern Recognition and Artificial Intelligence 27, no. 07 (2013): 1351003. http://dx.doi.org/10.1142/s0218001413510038.

Abstract:
Leave-one-out (LOO) and its generalization, K-Fold, are among most well-known cross-validation methods, which divide the sample into many folds, each one of which is, in turn, left out for testing, while the other parts are used for training. In this study, as an extension of this idea, we propose a new cross-validation approach that we called miss-one-out (MOO) that mislabels the example(s) in each fold and keeps this fold in the training set as well, rather than leaving it out as LOO does. Then, MOO tests whether the trained classifier can correct the erroneous label of the training sample.
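
What follows is a very rough sketch of the miss-one-out idea as we read it from this abstract: flip the labels of each fold, keep that fold in the training set, and then check whether the trained classifier recovers the original labels. It is our simplified binary-classification interpretation, not the authors' code.

```python
# Hedged sketch of miss-one-out (MOO) style evaluation with induced label noise.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold

X, y = load_breast_cancer(return_X_y=True)
corrected = 0

for _, fold_idx in KFold(n_splits=10, shuffle=True, random_state=0).split(X):
    y_noisy = y.copy()
    y_noisy[fold_idx] = 1 - y_noisy[fold_idx]            # induce "teacher noise" in this fold only
    model = LogisticRegression(max_iter=5000).fit(X, y_noisy)
    corrected += np.sum(model.predict(X[fold_idx]) == y[fold_idx])   # did it fix the flipped labels?

print("fraction of mislabeled examples corrected:", corrected / len(y))
```
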
9

Lin, Zeyang, Jun Lai, Xiliang Chen, Lei Cao, and Jun Wang. "Curriculum Reinforcement Learning Based on K-Fold Cross Validation." Entropy 24, no. 12 (2022): 1787. http://dx.doi.org/10.3390/e24121787.

Abstract:
With the continuous development of deep reinforcement learning in intelligent control, combining automatic curriculum learning and deep reinforcement learning can improve the training performance and efficiency of algorithms from easy to difficult. Most existing automatic curriculum learning algorithms perform curriculum ranking through expert experience and a single network, which has the problems of difficult curriculum task ranking and slow convergence speed. In this paper, we propose a curriculum reinforcement learning method based on K-Fold Cross Validation that can estimate the relativit
10

Wang, Ju E., and Jian Zhong Qiao. "Parameter Selection of SVR Based on Improved K-Fold Cross Validation." Applied Mechanics and Materials 462-463 (November 2013): 182–86. http://dx.doi.org/10.4028/www.scientific.net/amm.462-463.182.

Abstract:
This article firstly uses svm to forecast cashmere price time series. The forecasting result mainly depends on parameter selection. The normal parameter selection is based on k-fold cross validation. The k-fold cross validation is suitable for classification. In this essay, k-fold cross validation is improved to ensure that only the older data can be used to forecast latter data to improve prediction accuracy. This essay trains the cashmere price time series data to build mathematical model based on SVM. The selection of the model parameters are based on improved cross validation. The price of
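
The modification described here (allowing only older data to forecast later data) corresponds to forward-chaining splits rather than shuffled folds. The sketch below shows that general idea with scikit-learn's TimeSeriesSplit and an SVR on a synthetic price-like series; it is not the article's exact improved procedure.

```python
# Hedged sketch: forward-chaining CV so training data always precede the test data.
import numpy as np
from sklearn.model_selection import TimeSeriesSplit, cross_val_score
from sklearn.svm import SVR

rng = np.random.default_rng(0)
prices = np.cumsum(rng.normal(size=300)) + 100                      # stand-in price series
X = np.array([prices[i:i + 5] for i in range(len(prices) - 5)])     # 5 lagged values as features
y = prices[5:]                                                      # next value to forecast

scores = cross_val_score(SVR(C=10.0), X, y, cv=TimeSeriesSplit(n_splits=5),
                         scoring="neg_mean_absolute_error")
print("MAE per forward-chaining fold:", -scores)
```
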

Dissertations / Theses on the topic "K-fold validation"

1

Sood, Radhika. "Comparative Data Analytic Approach for Detection of Diabetes." University of Cincinnati / OhioLINK, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1544100930937728.

2

Ording, Marcus. "Context-Sensitive Code Completion : Improving Predictions with Genetic Algorithms." Thesis, KTH, Skolan för informations- och kommunikationsteknik (ICT), 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-205334.

Abstract:
Within the area of context-sensitive code completion there is a need for accurate predictive models in order to provide useful code completion predictions. The traditional method for optimizing the performance of code completion systems is to empirically evaluate the effect of each system parameter individually and fine-tune the parameters. This thesis presents a genetic algorithm that can optimize the system parameters with a degree-of-freedom equal to the number of parameters to optimize. The study evaluates the effect of the optimized parameters on the prediction quality of the studied code
3

Piják, Marek. "Klasifikace emailové komunikace." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2018. http://www.nusl.cz/ntk/nusl-385889.

Abstract:
This diploma's thesis is based around creating a classifier, which will be able to recognize an email communication received by Topefekt.s.r.o on daily basis and assigning it into classification class. This project will implement some of the most commonly used classification methods including machine learning. Thesis will also include evaluation comparing all used methods.
4

Birba, Delwende Eliane. "A Comparative study of data splitting algorithms for machine learning model selection." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-287194.

Abstract:
Data splitting is commonly used in machine learning to split data into a train, test, or validation set. This approach allows us to find the model hyper-parameter and also estimate the generalization performance. In this research, we conducted a comparative analysis of different data partitioning algorithms on both real and simulated data. Our main objective was to address the question of how the choice of data splitting algorithm can improve the estimation of the generalization performance. Data splitting algorithms used in this study were variants of k-fold, Kennard-Stone, SPXY ( sample set
5

Martins, Natalie Henriques. "Modelos de agrupamento e classificação para os bairros da cidade do Rio de Janeiro sob a ótica da Inteligência Computacional: Lógica Fuzzy, Máquinas de Vetores Suporte e Algoritmos Genéticos." Universidade do Estado do Rio de Janeiro, 2015. http://www.bdtd.uerj.br/tde_busca/arquivo.php?codArquivo=9502.

Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior. From 2011 onward, events of great significance for the city of Rio de Janeiro have taken place and will continue to take place, such as the United Nations Rio+20 conference and sporting events of major worldwide importance (the FIFA World Cup, the Olympic and Paralympic Games). These events attract financial resources to the city, as well as generating jobs, improving infrastructure, and raising real-estate values, both of land and of buildings. When choosing a residential property in a particular neighborhood, one does not assess
6

Luo, Shan. "Advanced Statistical Methodologies in Determining the Observation Time to Discriminate Viruses Using FTIR." Digital Archive @ GSU, 2009. http://digitalarchive.gsu.edu/math_theses/86.

Abstract:
Fourier transform infrared (FTIR) spectroscopy, one method of electromagnetic radiation for detecting specific cellular molecular structure, can be used to discriminate different types of cells. The objective is to find the minimum time (choice among 2 hour, 4 hour and 6 hour) to record FTIR readings such that different viruses can be discriminated. A new method is adopted for the datasets. Briefly, inner differences are created as the control group, and Wilcoxon Signed Rank Test is used as the first selecting variable procedure in order to prepare the next stage of discrimination. In the seco
7

Tandan, Isabelle, and Erika Goteman. "Bank Customer Churn Prediction : A comparison between classification and evaluation methods." Thesis, Uppsala universitet, Statistiska institutionen, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-411918.

Abstract:
This study aims to assess which supervised statistical learning method; random forest, logistic regression or K-nearest neighbor, that is the best at predicting banks customer churn. Additionally, the study evaluates which cross-validation set approach; k-Fold cross-validation or leave-one-out cross-validation that yields the most reliable results. Predicting customer churn has increased in popularity since new technology, regulation and changed demand has led to an increase in competition for banks. Thus, with greater reason, banks acknowledge the importance of maintaining their customer base
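
The evaluation question the abstract raises, k-fold versus leave-one-out cross-validation for the same classifier, looks like this in scikit-learn; the toy dataset and classifier below are stand-ins of ours, not the bank-churn data of the thesis.

```python
# Hedged sketch: the same model scored under 10-fold CV and under leave-one-out CV.
from sklearn.datasets import load_iris
from sklearn.model_selection import KFold, LeaveOneOut, cross_val_score
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)
model = KNeighborsClassifier(n_neighbors=5)

kfold_scores = cross_val_score(model, X, y, cv=KFold(n_splits=10, shuffle=True, random_state=0))
loo_scores = cross_val_score(model, X, y, cv=LeaveOneOut())
print("10-fold mean accuracy:", kfold_scores.mean())
print("LOO mean accuracy:    ", loo_scores.mean())
```
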
8

Radeschnig, David. "Modelling Implied Volatility of American-Asian Options : A Simple Multivariate Regression Approach." Thesis, Mälardalens högskola, Akademin för utbildning, kultur och kommunikation, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-28951.

Abstract:
This report focus upon implied volatility for American styled Asian options, and a least squares approximation method as a way of estimating its magnitude. Asian option prices are calculated/approximated based on Quasi-Monte Carlo simulations and least squares regression, where a known volatility is being used as input. A regression tree then empirically builds a database of regression vectors for the implied volatility based on the simulated output of option prices. The mean squared errors between imputed and estimated volatilities are then compared using a five-folded cross-validation test a
9

Bodin, Camilla. "Automatic Flight Maneuver Identification Using Machine Learning Methods." Thesis, Linköpings universitet, Reglerteknik, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-165844.

Abstract:
This thesis proposes a general approach to solve the offline flight-maneuver identification problem using machine learning methods. The purpose of the study was to provide means for the aircraft professionals at the flight test and verification department of Saab Aeronautics to automate the procedure of analyzing flight test data. The suggested approach succeeded in generating binary classifiers and multiclass classifiers that identified six flight maneuvers of different complexity from real flight test data. The binary classifiers solved the problem of identifying one maneuver from flight tes
10

Yeh, Po-Yang (葉柏揚). "A Study on the Appropriateness of Repeating K-fold Cross Validation." Thesis, 2017. http://ndltd.ncl.edu.tw/handle/6jc74q.

Abstract:
Master's thesis, National Cheng Kung University, Department of Industrial and Information Management, 105. K-fold cross validation is a popular approach for evaluating the performance of classification algorithms. The variance of the accuracy estimate resulting from this approach is generally relatively large for conservative inference. Several studies have therefore suggested repeatedly performing K-fold cross validation to reduce the variance. Most of them did not consider the correlation among the repetitions of K-fold cross validation, and hence the variance could be underestimated. The purpose of this thesis is to study the appropriateness of repeating K-fold cros
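
The concern raised in this abstract is that repetitions of K-fold cross validation reuse the same observations, so their accuracy estimates are correlated and the spread across repetitions can understate the real uncertainty. The sketch below (our illustration, not the thesis experiments) simply exposes those per-repetition means so the small between-repetition spread is visible.

```python
# Hedged sketch: per-repetition mean accuracies of repeated 10-fold CV on one dataset.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score

X, y = load_breast_cancer(return_X_y=True)
model = LogisticRegression(max_iter=5000)

rep_means = []
for seed in range(10):                         # 10 repetitions, each with a different shuffle
    cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=seed)
    rep_means.append(cross_val_score(model, X, y, cv=cv).mean())

rep_means = np.asarray(rep_means)
print("per-repetition means:", np.round(rep_means, 4))
print("spread across repetitions (std):", rep_means.std())
```
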

Book chapters on the topic "K-fold validation"

1

Kjeldsberg, Fabian, Ziaul Haque Munim, Morten Bustgaard, Sahil Bhagat, Emilia Lindroos, and Per Haavardtun. "Sensitivity of Predictive Performance Assessment Accuracy in Varying k-fold Cross Validation." In Lecture Notes in Networks and Systems. Springer Nature Switzerland, 2025. https://doi.org/10.1007/978-3-031-84170-5_7.

Abstract:
Abstract In machine learning (ML) applications, cross-validation (CV) allows greater generalizability of a trained algorithm over out-of-sample or new data. This study explores the accuracy of trained ML algorithms in predicting student performance in a maritime simulator exercise scenario in four different k-fold CVs. Three, five, eight, and ten-fold CVs were trained using a cloud-ML platform. Three top-performing ML algorithms were evaluated considering log loss, accuracy, and area under the curve (AUC). The results indicate higher predictive accuracy with increasing k in CV folds. Consideri
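
A minimal way to reproduce the shape of this experiment, scoring one classifier with log loss, accuracy, and AUC under 3-, 5-, 8- and 10-fold cross-validation, is sketched below; the classifier and dataset are assumptions of ours and unrelated to the maritime-simulator data used in the chapter.

```python
# Hedged sketch: how the three metrics move as the number of CV folds changes.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold, cross_validate

X, y = load_breast_cancer(return_X_y=True)
clf = RandomForestClassifier(n_estimators=200, random_state=0)

for k in (3, 5, 8, 10):
    cv = StratifiedKFold(n_splits=k, shuffle=True, random_state=0)
    res = cross_validate(clf, X, y, cv=cv, scoring=["neg_log_loss", "accuracy", "roc_auc"])
    print(f"k={k:2d}  log loss={-res['test_neg_log_loss'].mean():.3f}  "
          f"accuracy={res['test_accuracy'].mean():.3f}  AUC={res['test_roc_auc'].mean():.3f}")
```
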
2

Torres-Sospedra, Joaquín, Carlos Hernández-Espinosa, and Mercedes Fernández-Redondo. "Improving Adaptive Boosting with k-Cross-Fold Validation." In Lecture Notes in Computer Science. Springer Berlin Heidelberg, 2006. http://dx.doi.org/10.1007/11816157_46.

3

Chowdhury, Pinaki Roy, and K. K. Shukla. "On Generalization and K-Fold Cross Validation Performance of MLP Trained with EBPDT." In Advances in Soft Computing — AFSS 2002. Springer Berlin Heidelberg, 2002. http://dx.doi.org/10.1007/3-540-45631-7_47.

4

Jiang, Ping, Zhigang Zeng, Jiejie Chen, and Tingwen Huang. "Generalized Regression Neural Networks with K-Fold Cross-Validation for Displacement of Landslide Forecasting." In Advances in Neural Networks – ISNN 2014. Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-12436-0_59.

5

Nguyen, Tran-Trung, and Phu-Cuong Nguyen. "K-Fold Cross-Validation Technique for Predicting Ultimate Compressive Strength of Circular CFST Columns." In Lecture Notes in Civil Engineering. Springer Nature Singapore, 2022. http://dx.doi.org/10.1007/978-981-19-3303-5_79.

6

Tilekar, Pruthvi, Purnima Singh, Nagnath Aherwadi, Sagar Pande, and Aditya Khamparia. "Breast Cancer Detection Using Image Processing and CNN Algorithm with K-Fold Cross-Validation." In Proceedings of Data Analytics and Management. Springer Singapore, 2021. http://dx.doi.org/10.1007/978-981-16-6285-0_39.

7

Handa, Disha, and Kajal Rai. "Comparative Analysis of KNN Classifier with K-Fold Cross-Validation in Acoustic-Based Gender Recognition." In Lecture Notes in Networks and Systems. Springer Nature Singapore, 2022. http://dx.doi.org/10.1007/978-981-19-1412-6_34.

8

Peco Chacon, Ana Maria, and Fausto Pedro García Márquez. "Support Vector Machine and K-fold Cross-validation to Detect False Alarms in Wind Turbines." In International Series in Operations Research & Management Science. Springer International Publishing, 2023. http://dx.doi.org/10.1007/978-3-031-16620-4_6.

9

Peco Chacón, Ana María, and Fausto Pedro García Márquez. "Ensembles Learning Algorithms with K-Fold Cross Validation to Detect False Alarms in Wind Turbines." In Proceedings of the Sixteenth International Conference on Management Science and Engineering Management – Volume 1. Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-10388-9_33.

10

Upadhyaya, Anushka, and Rahul. "Integrating K-fold cross-validation with convolutional neural networks for plant species and pathogen detection." In Intelligent Computing and Communication Techniques. CRC Press, 2025. https://doi.org/10.1201/9781003530190-97.


Conference papers on the topic "K-fold validation"

1

Wang, Juan, Yan Zhang, Zhongying Wei, Jinghui Zhang, and Xiangmin Xie. "Harmonic Current Probabilistic Modeling Based on K-Fold Cross-Validation Kernel Density Estimation." In 2024 21st International Conference on Harmonics and Quality of Power (ICHQP). IEEE, 2024. https://doi.org/10.1109/ichqp61174.2024.10768699.

2

Balamurali, A., and K. Vijaya Kumar. "Early Detection and Classification of Type-2 Diabetes Using Stratified k-Fold Validation." In 2024 International Conference on Innovative Computing, Intelligent Communication and Smart Electrical Systems (ICSES). IEEE, 2024. https://doi.org/10.1109/icses63760.2024.10910675.

3

Hikmawati, Nina Kurnia, Yudi Ramdhani, and Doni Purnama Alamsyah. "Optimizing Forest Fire Detection Using PSO, Neural Networks, and k-Fold Cross-Validation." In 2024 Ninth International Conference on Informatics and Computing (ICIC). IEEE, 2024. https://doi.org/10.1109/icic64337.2024.10956756.

4

Nabih, Adil, Imane Moufid, Labiba Bousmaki, Driss Serrou, Ismail Lagrat, and Oussama Bouazaoui. "Optimizing the Validation Process of Resistance Spot Welds in the Automotive Industry Using TOPSIS and K-Fold Cross-Validation." In 2024 10th International Conference on Optimization and Applications (ICOA). IEEE, 2024. http://dx.doi.org/10.1109/icoa62581.2024.10753717.

5

Gupta, Nikita, Kuldeep Singh Jadon, Preeti Soni, Nitin Gupta, and Sanjay Kumar Dhurandher. "Heart Disease Prediction Evaluation of Machine Learning Models with PSO-Optimized K-Fold Cross-Validation." In 2024 IEEE Region 10 Symposium (TENSYMP). IEEE, 2024. http://dx.doi.org/10.1109/tensymp61132.2024.10752215.

6

Arani, Dinesh Ashok, and Rajesh Kumar Upadhyay. "Time-Series Classification Using a CNN-LSTM Model with Attention, Autoencoder, and K-Fold Cross-Validation." In 2025 International Conference on Data Science, Agents & Artificial Intelligence (ICDSAAI). IEEE, 2025. https://doi.org/10.1109/icdsaai65575.2025.11011741.

7

Jain, Eshika, and Ravi Kumar. "Automated Strawberry Leaf Disease Detection Using ResNet-50 with K-Fold Cross-Validation: A Novel Approach." In 2024 9th International Conference on Communication and Electronics Systems (ICCES). IEEE, 2024. https://doi.org/10.1109/icces63552.2024.10859740.

8

Jagadeeswari, M., S. Saranya, S. Joshika, T. Jananisri, and U. Jeeva Meena. "Rolling Bearing Fault Diagnosis using Support Vector Machine Algorithm with K-Fold Cross Validation and Gaussian Kernel." In 2024 International Conference on Smart Systems for Electrical, Electronics, Communication and Computer Engineering (ICSSEECC). IEEE, 2024. http://dx.doi.org/10.1109/icsseecc61126.2024.10649498.

9

Saputra, Apry Aditya, Khakam Ma'ruf, Rizal Justian Setiawan, Darmono, and Nur Azizah. "Enhancing Predictive Accuracy of LSTM Neural Networks for Diabetes Risk through K-Fold Cross-Validation: Comparison with K-Nearest Neighbors and Expert Systems." In 2024 International Conference on Decision Aid Sciences and Applications (DASA). IEEE, 2024. https://doi.org/10.1109/dasa63652.2024.10836564.

10

Shujaaddeen, Abeer Abdullah, Fadl Mutaher Ba-Alwi, Ammar T. Zahary, Ghaleb Al-Gaphari, Abdulkader M. Al-Badani, and Ayman Alsabry. "Enhancing a Random Forest Model Based on Single Rule Reduction for Tax Evasion Depends on the Values of K in K-Fold Validation Technique." In 2024 1st International Conference on Emerging Technologies for Dependable Internet of Things (ICETI). IEEE, 2024. https://doi.org/10.1109/iceti63946.2024.10777271.
