To see the other types of publications on this topic, follow the link: Candidate process selection machine learning.

Journal articles on the topic 'Candidate process selection machine learning'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Select a source type:

Consult the top 50 journal articles for your research on the topic 'Candidate process selection machine learning.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Goretzko, David, and Laura Sophia Finja Israel. "Pitfalls of Machine Learning-Based Personnel Selection." Journal of Personnel Psychology 21, no. 1 (2022): 37–47. http://dx.doi.org/10.1027/1866-5888/a000287.

Full text
Abstract:
In recent years, machine learning (ML) modeling (often referred to as artificial intelligence) has become increasingly popular for personnel selection purposes. Numerous organizations use ML-based procedures to screen large candidate pools, while some companies try to automate the hiring process as far as possible. Since ML models can handle large sets of predictor variables and are therefore able to incorporate many different data sources (often more than common procedures can consider), they promise higher predictive accuracy and objectivity in selecting the best candidate than traditional personnel selection processes. However, some pitfalls and challenges have to be taken into account when using ML for an issue as sensitive as personnel selection. In this paper, we address these major challenges – namely the definition of a valid criterion, transparency regarding collected data and decision mechanisms, algorithmic fairness, changing data conditions, and adequate performance evaluation – and discuss some recommendations for implementing fair, transparent, and accurate ML-based selection algorithms.
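One concrete check for the algorithmic-fairness pitfall the abstract raises is the four-fifths (80%) rule used in adverse-impact analysis: compare selection rates across applicant groups. A minimal sketch, using invented group labels and decisions rather than anything from the paper:

```python
from collections import Counter

def selection_rates(decisions):
    """decisions: list of (group, selected) pairs -> selection rate per group."""
    totals, hits = Counter(), Counter()
    for group, selected in decisions:
        totals[group] += 1
        if selected:
            hits[group] += 1
    return {g: hits[g] / totals[g] for g in totals}

def impact_ratio(decisions):
    """Ratio of the lowest group selection rate to the highest.
    Values below 0.8 fail the four-fifths rule of thumb."""
    rates = selection_rates(decisions)
    return min(rates.values()) / max(rates.values())

# Hypothetical screening outcomes: group A selected 40/100, group B 20/100.
decisions = [("A", True)] * 40 + [("A", False)] * 60 + \
            [("B", True)] * 20 + [("B", False)] * 80
print(round(impact_ratio(decisions), 2))  # 0.5
```

An impact ratio below 0.8 is commonly treated as a red flag warranting a closer audit of the selection model, not as proof of unfairness on its own.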
APA, Harvard, Vancouver, ISO, and other styles
2

Tayal, Sandeep, Taniya Sharma, Shivansh Singhal, and Anurag Thakur. "Resume Screening using Machine Learning." International Journal of Scientific Research in Computer Science, Engineering and Information Technology 10, no. 2 (2024): 602–6. http://dx.doi.org/10.32628/cseit2410275.

Abstract:
This study explores the utilization of Machine Learning (ML) and Natural Language Processing (NLP) in automating the resume screening process. Traditional methods, often manual and subjective, fail to efficiently manage the volume and variety of resumes. By employing NLP techniques like named entity recognition and part-of-speech tagging, coupled with ML classifiers such as K-Nearest Neighbors and Support Vector Machines, we propose a system that enhances the precision of candidate selection while significantly reducing time and effort.
3

Lyon, R. J. "Fifty Years of Candidate Pulsar Selection - What next?" Proceedings of the International Astronomical Union 13, S337 (2017): 25–28. http://dx.doi.org/10.1017/s1743921317007682.

Abstract:
For fifty years astronomers have been searching for pulsar signals in observational data. Throughout this time the process of choosing detections worthy of investigation, so-called ‘candidate selection’, has been effective, yielding thousands of pulsar discoveries. Yet in recent years technological advances have permitted the proliferation of pulsar-like candidates, straining our candidate selection capabilities and ultimately reducing selection accuracy. To overcome such problems, we now apply ‘intelligent’ machine learning tools. Whilst these have achieved success, candidate volumes continue to increase, and our methods have to evolve to keep pace with the change. This talk considers how to meet this challenge as a community.
4

B, Sandeep. "JobOrbit: Intelligent Recruitment System." International Journal for Research in Applied Science and Engineering Technology 12, no. 12 (2024): 1128–31. https://doi.org/10.22214/ijraset.2024.66001.

Abstract:
This project focuses on developing an Intelligent Recruitment System leveraging advanced machine learning techniques to revolutionize the recruitment process. The system automates key tasks, such as analyzing resumes, extracting essential details, and matching candidate profiles with job requirements. By employing algorithms for natural language processing (NLP) and predictive analytics, the system evaluates a candidate's suitability for specific roles based on their skills, experience, and qualifications. Additionally, it offers tailored job recommendations to candidates, enhancing their job search experience while enabling employers to identify the best-fit talent efficiently. The Intelligent Recruitment System not only reduces manual effort but also ensures accuracy and fairness in candidate selection, ultimately transforming traditional hiring practices into a smarter and more efficient process.
5

Tsiutsiura, Svilana, Andrii Yerukaiev, and Valery Andrusenko. "Optimization of personnel selection processes: the perfect candidates search algorithm." Management of Development of Complex Systems, no. 58 (June 28, 2024): 86–92. http://dx.doi.org/10.32347/2412-9933.2024.58.86-92.

Abstract:
The article discusses an algorithmic approach to the selection of ideal candidates, using criteria analysis and an automated selection process, to optimize personnel selection processes and increase the efficiency of personnel management. The effectiveness of this method, as well as its practical applications in recruiting and internal transfer of employees, were studied. In today's world, where the competition in the labor market is extremely high, finding the ideal candidates becomes a key task for many companies. Choosing the right candidate for a job vacancy can have a significant impact on the success of a business, and automating the candidate selection process allows employers to significantly save time and resources on manual selection. Therefore, there is a need to research various methods and algorithms of personnel selection to achieve the maximum efficiency of process automation. Scientific studies and publications related to the selection of candidates and the application of algorithms in this area reflect the multifaceted and complex nature of the recruitment process. Previous research has addressed a wide range of issues, including the development of selection criteria, psychological tests, career trajectory analysis, and the effectiveness of various methods and approaches to candidate selection. Some research focuses on the development of machine learning and artificial intelligence algorithms to automate the candidate selection process, in particular by analyzing CVs, professional skills and testimonials. Other research examines the role of social media in recruitment and the development of algorithms to analyze candidate social media profiles. In addition, research has been conducted to examine the impact of various factors, such as cultural and social differences, on the candidate selection process. These studies provide a valuable contribution to the understanding of the most effective recruitment strategies and methods in different settings. 
A general trend in previous research is an attempt to ensure the objectivity, efficiency and innovation of the candidate selection process by developing new algorithms and methods that take into account a wide range of factors and business needs.
6

Neelam, Yadav, and P. Panda Supriya. "Developing standard criteria for robotic process automation candidate process selection." IAES International Journal of Artificial Intelligence (IJ-AI) 13, no. 4 (2024): 4291–300. https://doi.org/10.11591/ijai.v13.i4.pp4291-4300.

Abstract:
Robotic process automation (RPA) is a cutting-edge technology that provides software robots that repeat and mimic tasks previously performed by a human user. The use of software robots is encouraging because of their cost efficiency and easy implementation. Selecting and prioritizing a candidate process for automation is always challenging, as not all business processes in an organization are equally suitable for RPA implementation. Various studies have highlighted several criteria found in the literature for determining, prioritizing, and selecting a business process for RPA. Nevertheless, there are no set standards for evaluating and analyzing a certain process or its tasks to determine whether they may be automated using RPA. This paper aims to develop standard criteria and propose a consistent model to select and prioritize candidate processes for RPA projects. Surveys among subject matter experts (SMEs) are used to validate these criteria's applicability in the context of RPA. Principal component analysis (PCA) and correlation are used to identify the top 20 criteria. The naïve Bayes algorithm is applied to the collected data for decision-making. The developed multi-criteria model exhibits strong precision and recall measures, with training and validation accuracy of 96% and 90%, respectively.
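The criteria-ranking step described above can be illustrated with a toy correlation ranking: score each criterion by the strength of its correlation with a "good RPA candidate" label. The criterion names, SME-style ratings, and labels below are invented for illustration; the paper's actual pipeline uses PCA and naïve Bayes on real survey data.

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical SME ratings: each column position is one business process.
criteria = {
    "rule_based":      [5, 5, 5, 1, 1, 1],
    "input_stability": [4, 5, 4, 1, 2, 1],
    "exception_rate":  [1, 2, 1, 5, 4, 5],  # high exception rates hurt RPA fit
}
suitable = [1, 1, 1, 0, 0, 0]  # 1 = good RPA candidate

# Rank criteria by absolute correlation with the suitability label.
ranked = sorted(criteria, key=lambda c: abs(pearson(criteria[c], suitable)),
                reverse=True)
print(ranked)
```

In practice such a ranking would feed a proper classifier rather than stand alone, but it shows why correlation is a natural first filter for a large criteria pool.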
7

Shankar Pandey, Vishnu. "Optimizing Talent Acquisition Using Machine Learning Algorithms in Maruti Suzuki India Limited." INTERNATIONAL JOURNAL OF SCIENTIFIC RESEARCH IN ENGINEERING AND MANAGEMENT 09, no. 06 (2025): 1–9. https://doi.org/10.55041/ijsrem50167.

Abstract:
In today’s competitive business environment, acquiring the right talent efficiently is critical for organizational success. Traditional recruitment methods often rely on subjective judgment, leading to inefficiencies and potential biases. This study aims to optimize the talent acquisition process at Maruti Suzuki India Limited by leveraging machine learning algorithms to enhance decision-making, reduce hiring time, and improve candidate-job fit. The research explores various machine learning techniques, such as logistic regression, decision trees, random forests, and support vector machines, to predict candidate success and retention based on historical recruitment data. Data was collected from Maruti Suzuki's internal HR systems and analyzed for patterns related to qualifications, experience, skill sets, and interview performance. Feature engineering and model evaluation were conducted to identify the most accurate and interpretable model. The findings demonstrate that machine learning can significantly streamline the recruitment process, offering actionable insights into candidate selection, improving objectivity, and reducing human bias. The study concludes with recommendations for implementing a data-driven recruitment model that aligns with Maruti Suzuki's strategic HR goals. This research contributes to the growing field of HR analytics and provides a practical framework for deploying AI-driven talent acquisition solutions in the Indian automotive industry.
8

Korrapati, Lakshmi Naga Vishnu Babu. "A Machine Learning Approach for Automation of Resume Recommendation System." International Journal for Research in Applied Science and Engineering Technology 10, no. 6 (2022): 4387–93. http://dx.doi.org/10.22214/ijraset.2022.43978.

Abstract:
Finding qualified candidates for a vacant post can be difficult, notably if there are many applications. It can stifle team growth by making it difficult to hire the right person at the right time. "A Resume Recommendation System" has the potential to significantly simplify the time-consuming process of fair screening and shortlisting. It would certainly enhance candidate selection and decision-making. This system can handle a large number of resumes by first classifying them using multiple classifiers and then recommending them based on the job description. We offer research to improve data accuracy and completeness for resource matching by combining unstructured data sources and incorporating text mining algorithms. Our method identifies categories by extracting and learning new patterns from employee resumes.
9

Zhu, J. T., C. Lin, H. B. Xiao, J. H. Fan, D. Bastieri, and G. G. Wang. "Exploring TeV Candidates of Fermi Blazars through Machine Learning." Astrophysical Journal 950, no. 2 (2023): 123. http://dx.doi.org/10.3847/1538-4357/acca85.

Abstract:
In this work, we make use of a supervised machine-learning algorithm based on Logistic Regression (LR) to select TeV blazar candidates from the 4FGL-DR2/4LAC-DR2, 3FHL, 3HSP, and 2BIGB catalogs. LR constructs a hyperplane based on a selection of optimal parameters, named features, and hyperparameters whose values control the learning process and determine the values of features that a learning algorithm ends up learning, to discriminate TeV blazars from non-TeV blazars. In addition, it gives the probability (or logistic) that a source may be considered a TeV blazar candidate. Non-TeV blazars with logistics greater than 80% are considered high-confidence TeV candidates. Using this technique, we identify 40 high-confidence TeV candidates from the 4FGL-DR2/4LAC-DR2 blazars and we build the feature hyperplane to distinguish TeV and non-TeV blazars. We also calculate the hyperplanes for the 3FHL, 3HSP, and 2BIGB. Finally, we construct the broadband spectral energy distributions for the 40 candidates, testing for their detectability with various instruments. We find that seven of them are likely to be detected by existing or upcoming IACT observatories, while one could be observed with extensive air shower particle detector arrays.
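The 80% logistic threshold described above amounts to scoring each source with a fitted sigmoid and keeping those whose probability exceeds 0.8. A minimal sketch with made-up coefficients and toy feature values (the paper's real feature set and fitted hyperplane are not reproduced here):

```python
import math

def logistic(features, weights, bias):
    """P(TeV candidate) from a linear score passed through the sigmoid."""
    z = bias + sum(w * x for w, x in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical, already-fitted coefficients over two standardized toy
# features (imagine synchrotron peak frequency and gamma-ray photon index).
weights, bias = [2.0, -1.5], 0.3

blazars = {"J0001": [1.8, -0.4], "J0002": [-0.6, 1.1]}
candidates = {name: logistic(x, weights, bias) >= 0.80
              for name, x in blazars.items()}
print(candidates)  # {'J0001': True, 'J0002': False}
```

Everything on one side of the implied hyperplane (z such that sigmoid(z) >= 0.8, i.e. z >= ln 4) is flagged as a high-confidence candidate.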
10

Villarroel, Beatriz, Kristiaan Pelckmans, Enrique Solano, et al. "Launching the VASCO Citizen Science Project." Universe 8, no. 11 (2022): 561. http://dx.doi.org/10.3390/universe8110561.

Abstract:
The Vanishing & Appearing Sources during a Century of Observations (VASCO) project investigates astronomical surveys spanning a time interval of 70 years, searching for unusual and exotic transients. We present herein the VASCO Citizen Science Project, which can identify unusual candidates driven by three different approaches: hypothesis, exploratory, and machine learning, which is particularly useful for SETI searches. To address the big data challenge, VASCO combines three methods: the Virtual Observatory, user-aided machine learning, and visual inspection through citizen science. Here we demonstrate the citizen science project and its improved candidate selection process, and we give a progress report. We also present the VASCO citizen science network led by amateur astronomy associations mainly located in Algeria, Cameroon, and Nigeria. At the moment of writing, the citizen science project has carefully examined 15,593 candidate image pairs in the data (ca. 10% of the candidates), and has so far identified 798 objects classified as “vanished”. The most interesting candidates will be followed up with optical and infrared imaging, together with the observations by the most potent radio telescopes.
11

Rasal, Pratham. "Resume Parser Analysis Using Machine Learning and Natural Language Processing." International Journal for Research in Applied Science and Engineering Technology 11, no. 5 (2023): 2840–44. http://dx.doi.org/10.22214/ijraset.2023.52202.

Abstract:
With the rise of online job application processes, submitting a resume has become easier than ever before. As a result, a larger number of individuals are impacted by this change. Many organizations still accept resumes by mail, which can cause challenges for their human resources departments. Sorting through a large number of applications to find the most suitable candidates can be a time-consuming process for these agencies. Job applicants submit resumes in a wide range of formats, including various fonts, font sizes, colors, and other design elements. Human resources departments are responsible for reviewing each application and selecting the most qualified candidate for the job. My suggestion is to incorporate natural language processing techniques into the project's parser to assist the human resources department or recruiting manager in analyzing the information provided in resumes. This approach can involve using keyword matching and other natural language processing methods to identify the most suitable candidates and obtain the most effective resumes. By doing so, the organization can improve its recruitment process and identify the best candidates for the job.
12

Sripriya Akondi, Vishnu, Vineetha Menon, Jerome Baudry, and Jana Whittle. "Novel Big Data-Driven Machine Learning Models for Drug Discovery Application." Molecules 27, no. 3 (2022): 594. http://dx.doi.org/10.3390/molecules27030594.

Abstract:
Most contemporary drug discovery projects start with a ‘hit discovery’ phase where small chemicals are identified that have the capacity to interact, in a chemical sense, with a protein target involved in a given disease. To assist and accelerate this initial drug discovery process, ‘virtual docking calculations’ are routinely performed, where computational models of proteins and computational models of small chemicals are evaluated for their capacities to bind together. In cutting-edge, contemporary implementations of this process, several conformations of protein targets are independently assayed in parallel ‘ensemble docking’ calculations. Some of these protein conformations, a minority of them, will be capable of binding many chemicals, while other protein conformations, the majority of them, will not be able to do so. This fact that only some of the conformations accessible to a protein will be ‘selected’ by chemicals is known as the ‘conformational selection’ process in biology. This work describes a machine learning approach to characterize and identify the properties of protein conformations that will be selected by (i.e., bind to) chemicals and classified as potential binding drug candidates, unlike the remaining non-binding protein conformations. This work also addresses the class imbalance problem through advanced machine learning techniques that maximize the prediction rate of potential protein molecular conformations for the test case proteins ADORA2A (Adenosine A2a Receptor) and OPRK1 (Opioid Receptor Kappa 1), and subsequently reduces the failure rates and hastens the drug discovery process.
13

M.L, Pravesh, Samhita R, Mythili M, and Suprith S. "RESUME ANALYZER." International Research Journal of Computer Science 9, no. 8 (2022): 250–55. http://dx.doi.org/10.26562/irjcs.2022.v0908.19.

Abstract:
Resume analysis is the process in which a machine analyses a resume against the requirements of a job description. With the flood of resumes received by companies, it is neither effective nor feasible for a person to go through every resume to select a candidate, so automated analyzers have become very popular among companies for candidate selection. The main objective of the project is to match the requirements and skills from a job description to the resumes received. This gives an instantaneous result on whether a resume is accepted or rejected. The end process allows the company to select candidates without the involvement of a third party and is thus also cost effective. A large number of resumes can be sorted in this project using various classifiers. Following classification, the top n candidates are ranked against the job description using content-based recommendation and cosine similarity. The project employs k-NN to determine which CVs are most similar to the supplied job description. Through machine learning, the system evaluates a resume for a particular position using NLP.
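The content-based ranking step (cosine similarity between a job description and each resume) can be sketched with plain term-frequency vectors. The paper itself uses k-NN and richer NLP preprocessing; the resumes and job description below are toy data for illustration only.

```python
import math
import re
from collections import Counter

def tf_vector(text):
    """Bag-of-words term-frequency vector over lowercase word tokens."""
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a.keys() & b.keys())
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def top_n(job_description, resumes, n=2):
    """Rank resume ids by cosine similarity to the job description."""
    jd = tf_vector(job_description)
    scored = sorted(resumes.items(),
                    key=lambda kv: cosine(jd, tf_vector(kv[1])),
                    reverse=True)
    return [name for name, _ in scored[:n]]

jd = "python machine learning nlp"
resumes = {
    "r1": "java spring hibernate backend",
    "r2": "python nlp machine learning projects",
    "r3": "python data analysis",
}
print(top_n(jd, resumes))  # ['r2', 'r3']
```

A production system would replace raw term frequencies with TF-IDF (or embeddings) so that common words do not dominate the similarity score.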
14

Pal, Riya, Shahrukh Shaikh, Swaraj Satpute, and Sumedha Bhagwat. "Resume Classification using various Machine Learning Algorithms." ITM Web of Conferences 44 (2022): 03011. http://dx.doi.org/10.1051/itmconf/20224403011.

Abstract:
With the onset of the pandemic, everything has gone online, and individuals have been compelled to work from home. There is a need to automate the hiring process in order to enhance efficiency and decrease manual labour that may be done electronically. If resume categorization were done online, it would significantly reduce paperwork and human error. The recruiting process has several steps, but the first is resume categorization and verification. Automating this first stage would greatly assist the interview process in terms of speedy applicant selection. Classification of resumes will be performed using machine learning algorithms such as Naïve Bayes, Random Forest, and SVM, which will aid in the extraction of skills and show diverse capabilities under appropriate job profile classes. While the abilities are being extracted, an appropriate job profile may be retrieved from the categorised and pre-processed data and shown on the interviewer’s screen. During video interviews, this will aid the interviewer in the selection of candidates.
15

Im, Yunjeong, Gyuwon Song, and Minsang Cho. "Perceiving Conflict of Interest Experts Recommendation System Based on a Machine Learning Approach." Applied Sciences 13, no. 4 (2023): 2214. http://dx.doi.org/10.3390/app13042214.

Abstract:
Academic societies and funding bodies that conduct peer reviews need to select the best reviewers in each field to ensure publication quality. Conventional approaches for reviewer selection focus on evaluating expertise based on research relevance by subject or discipline. An improved conflict-of-interest (CoI)-aware reviewer recommendation process that combines five expertise indices and graph analysis techniques is proposed in this paper. This approach collects metadata from the academic database and extracts candidates based on research field similarities utilizing text mining; then, the candidate scores are calculated and ranked through a professionalism index-based analysis. The highly connected subgraphs (HCS) algorithm is used to cluster similar researchers based on their association or intimacy in the researcher network. The proposed method is evaluated using root mean square error (RMSE) indicators for matching the field of publication and research fields of the recommended experts, using keywords of papers published in Korean journals over the past five years. The results show that the system configures a group of Top-K reviewers with an RMSE of 0.76. The proposed method can be applied to academic societies and national research management systems to realize fair and efficient screening and management.
16

Su, Shengshuai, Na Zhang, Peng Wang, et al. "Investigation and Optimization of EOR Screening by Implementing Machine Learning Algorithms." Applied Sciences 13, no. 22 (2023): 12267. http://dx.doi.org/10.3390/app132212267.

Abstract:
Enhanced oil recovery (EOR) is a complex process with high investment cost that involves multiple disciplines, including reservoir engineering, chemical engineering, and geological engineering. Finding the most suitable EOR technique for a candidate reservoir is time consuming and critical for reservoir engineers. The objective of this research is to propose a new methodology to help engineers make fast and scientific decisions in the EOR selection process by applying machine learning algorithms to worldwide EOR projects. First, worldwide EOR project information was collected from oil companies, the extensive literature, and reports. Then, exploratory data analysis methods were employed to reveal the distribution and relationships among different reservoir/fluid parameters. Random forest, artificial neural networks, naïve Bayes, support vector machines, and decision trees were applied to the dataset to establish classification models, and five-fold cross-validation was performed to make full use of the dataset and ensure the performance of the models. Utilizing random search, we optimized the models' hyperparameters to achieve optimal classification results. The results show that the random forest classification model has the highest accuracy: the accuracy of the test set increased from 88.54% without the optimization process to 91.15% with it, an improvement of 2.61 percentage points. The prediction accuracy in the three categories of thermal flooding, gas injection, and chemical flooding was 100%, 96.51%, and 88.46%, respectively. The results also show that the established RF classification model has good capability to recommend an EOR technique for a new candidate oil reservoir.
17

Vasuki, M. "AI-Powered Campus Placement Portal for Efficient Recruitment." INTERNATIONAL JOURNAL OF SCIENTIFIC RESEARCH IN ENGINEERING AND MANAGEMENT 09, no. 06 (2025): 1–9. https://doi.org/10.55041/ijsrem50057.

Abstract:
The campus placement process, which is vital for moving from education to work, often stalls due to ineffective candidate shortlisting, long waits for communication, and manual administrative effort. As a result of these problems, hiring can take more time, candidates may end up in roles that are not a good fit for them, and the process may lack transparency. This project addresses these challenges by developing a Campus Placement Portal with XGBoost-based ML techniques for more accurate and efficient selection of candidates. The portal brings students, employers, and academic institutions together in one place, which helps simplify the whole campus recruitment process. Using predictive analytics and data-driven decisions, the system shortlists the best candidates by matching their skills, work history, and fit for the job. With this method, employers and students can monitor the matching process as it happens, which also speeds it up. The use of machine learning in campus recruiting greatly improves efficiency and sets a new standard for future campus placement processes. The system aims to place students in good jobs and help employers easily hire the right candidates. Keywords: Campus Placement Portal, XGBoost, Recruitment Process, Candidate Shortlisting, Predictive Analytics, Student Profiling, AI in Recruitment
18

Bhavsar, S. A. "Automated Resume Parsing using Named Entity Recognition." International Scientific Journal of Engineering and Management 04, no. 05 (2025): 1–7. https://doi.org/10.55041/isjem03365.

Abstract:
The traditional hiring process often involves manually reviewing numerous resumes, making recruitment time-consuming and costly. To address this challenge, we propose an Automated Resume Parsing System using Named Entity Recognition (NER), an advanced Natural Language Processing (NLP) technique. Our system efficiently extracts key information, such as candidate names, skills, education, and work experience, from unstructured resume data, enabling structured representation and faster decision-making. By automating resume screening, our approach significantly reduces hiring costs and minimizes recruiter workload while improving accuracy in candidate selection. Furthermore, it enhances the efficiency of applicant shortlisting by filtering out irrelevant job applications. The system leverages machine learning models trained on diverse resume datasets to improve extraction accuracy and adaptability to various resume formats. Additionally, it integrates with applicant tracking systems (ATS) for seamless recruitment workflow automation. Experimental results demonstrate that our system achieves high precision in entity recognition, making it a valuable tool for modern recruitment platforms. The proposed solution not only optimizes the hiring process but also contributes to fair and unbiased candidate evaluation. Key Words: Automated Resume Parsing, Named Entity Recognition, Natural Language Processing, Recruitment Automation, Applicant Tracking System, Resume Screening, Machine Learning
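The extraction step described above can be illustrated with a rule-based stand-in. A production system would use a trained NER model, as the abstract describes; the field names and regex patterns below are simplified assumptions on toy input.

```python
import re

# A trained NER model (e.g. spaCy) would be used in practice; this
# rule-based sketch only illustrates the structured-extraction step.
PATTERNS = {
    "email": r"[\w.+-]+@[\w-]+\.[\w.]+",
    "phone": r"\+?\d[\d\s-]{8,}\d",
    "skills": r"\b(Python|Java|SQL|NLP|Excel)\b",
}

def parse_resume(text):
    """Return a dict mapping each field to all matches found in the text."""
    return {field: re.findall(pattern, text)
            for field, pattern in PATTERNS.items()}

resume = "Jane Doe, jane.doe@example.com, +1 555 123 4567. Skills: Python, SQL, NLP."
print(parse_resume(resume))
```

Regex rules break quickly on real-world resume variety, which is exactly why the paper trains ML models on diverse resume datasets instead.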
19

Soper, Daniel S. "Greed Is Good: Rapid Hyperparameter Optimization and Model Selection Using Greedy k-Fold Cross Validation." Electronics 10, no. 16 (2021): 1973. http://dx.doi.org/10.3390/electronics10161973.

Abstract:
Selecting a final machine learning (ML) model typically occurs after a process of hyperparameter optimization in which many candidate models with varying structural properties and algorithmic settings are evaluated and compared. Evaluating each candidate model commonly relies on k-fold cross validation, wherein the data are randomly subdivided into k folds, with each fold being iteratively used as a validation set for a model that has been trained using the remaining folds. While many research studies have sought to accelerate ML model selection by applying metaheuristic and other search methods to the hyperparameter space, no consideration has been given to the k-fold cross validation process itself as a means of rapidly identifying the best-performing model. The current study rectifies this oversight by introducing a greedy k-fold cross validation method and demonstrating that greedy k-fold cross validation can vastly reduce the average time required to identify the best-performing model when given a fixed computational budget and a set of candidate models. This improved search time is shown to hold across a variety of ML algorithms and real-world datasets. For scenarios without a computational budget, this paper also introduces an early stopping algorithm based on the greedy cross validation method. The greedy early stopping method is shown to outperform a competing, state-of-the-art early stopping method both in terms of search time and the quality of the ML models selected by the algorithm. Since hyperparameter optimization is among the most time-consuming, computationally intensive, and monetarily expensive tasks in the broader process of developing ML-based solutions, the ability to rapidly identify optimal machine learning models using greedy cross validation has obvious and substantial benefits to organizations and researchers alike.
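The greedy k-fold idea (spending a fixed budget of fold evaluations on whichever candidate currently looks best, rather than fully cross-validating every model) can be sketched as follows. The fold scores are a toy lookup table, not the paper's implementation or datasets.

```python
import statistics

def greedy_kfold(candidates, fold_score, k, budget):
    """Spend `budget` fold evaluations, always extending the candidate
    whose running mean score is currently best (ties: fewer folds done).
    `fold_score(model, fold)` returns that model's score on one fold."""
    scores = {m: [] for m in candidates}
    # Seed: one fold per candidate so every running mean exists.
    for m in candidates:
        scores[m].append(fold_score(m, 0))
        budget -= 1
    while budget > 0:
        live = [m for m in candidates if len(scores[m]) < k]
        if not live:
            break
        best = max(live, key=lambda m: (statistics.mean(scores[m]),
                                        -len(scores[m])))
        scores[best].append(fold_score(best, len(scores[best])))
        budget -= 1
    return max(candidates, key=lambda m: statistics.mean(scores[m]))

# Toy per-fold scores: model "b" is consistently better across folds.
table = {"a": [0.70, 0.72, 0.71, 0.69, 0.70],
         "b": [0.80, 0.82, 0.81, 0.83, 0.80],
         "c": [0.60, 0.61, 0.59, 0.62, 0.60]}
winner = greedy_kfold(list(table), lambda m, f: table[m][f], k=5, budget=9)
print(winner)  # b
```

With a budget of 9 fold evaluations, the sketch fully validates the promising model and barely touches the weak ones, which is the source of the speedup the abstract reports.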
20

Narnaware, Monu. "College Admission Prediction using Machine Learning." International Journal for Research in Applied Science and Engineering Technology 11, no. 11 (2023): 520–23. http://dx.doi.org/10.22214/ijraset.2023.56539.

Abstract:
Students struggle mightily to get accepted into the college of their choice. The present engineering admission process is challenging when it comes to picking the right college based on test results and areas of interest. The candidates' ability to complete the application form accurately depends on their selection, which may differ depending on their academic performance and entrance exam results. Numerous colleges provide a variety of engineering courses. Students find it difficult to arrange and list the appropriate institutions of their choice for courses based on their performance scores. The college admission predictor uses historical college cut-off data to determine the most likely colleges.
APA, Harvard, Vancouver, ISO, and other styles
21

Kulkarni, Niranjan. "AI-Powered Resume Parsing for Efficient Recruitment." International Journal of Scientific Research in Engineering and Management 09, no. 03 (2025): 1–9. https://doi.org/10.55041/ijsrem43332.

Full text
Abstract:
The incorporation of Artificial Intelligence (AI) into Business Process Management Systems (BPMS) has significantly transformed multiple industries, particularly human resource management. A notable innovation in this domain is the AI-driven Resume Parser, which enhances the recruitment process by automating resume evaluation and candidate selection. Conventional hiring methods can be inefficient, requiring extensive time and effort while being susceptible to human bias, making it challenging to identify the best candidates effectively. The proposed system utilizes Natural Language Processing (NLP) and Machine Learning (ML) to extract, classify, and organize essential resume details, allowing recruiters to make informed, data-driven hiring decisions. This research explores the implementation of AI-driven resume parsing, highlighting its role in enhancing efficiency, accuracy, and fairness in recruitment. The system can process resumes in multiple formats and handle large-scale applicant pools. However, challenges such as contextual ambiguity, data privacy concerns, and algorithmic bias necessitate human oversight to ensure ethical and reliable decision-making. As organizations increasingly adopt AI-driven automation to optimize business processes, the AI Resume Parser represents a transformative solution that enhances recruitment efficiency while reducing operational workload. Key Words: Artificial Intelligence, Business Process Management, AI in Recruitment, Resume Parsing, Natural Language Processing, Machine Learning, HR Automation, Data Extraction, Applicant Tracking System, Recruitment Optimization.
APA, Harvard, Vancouver, ISO, and other styles
22

Rasika, Patil. "Transitioning from Workday Recruiting to Eightfold ATS: Implementation Strategies and Best Practices." International Journal of Leading Research Publication 3, no. 11 (2022): 1–8. https://doi.org/10.5281/zenodo.14646753.

Full text
Abstract:
Organizations face numerous challenges when transitioning from one Applicant Tracking System (ATS) to another, especially when moving to more advanced, AI-powered systems. This paper outlines the process of migrating from Workday Recruiting to Eightfold ATS, with a focus on implementation phases, strategies for a smooth transition, and best practices for leveraging Eightfold's capabilities to enhance the recruitment process. Additionally, lessons learned from migration are discussed. Eightfold’s AI-driven platform is designed to modernize recruitment through machine learning and deep analytics, improving candidate selection and optimizing recruitment operations.
APA, Harvard, Vancouver, ISO, and other styles
23

Chung, Wu-Chun, Yan-Hui Lin, and Sih-Han Fang. "FedISM: Enhancing Data Imbalance via Shared Model in Federated Learning." Mathematics 11, no. 10 (2023): 2385. http://dx.doi.org/10.3390/math11102385.

Full text
Abstract:
Considering the sensitivity of data in medical scenarios, federated learning (FL) is suitable for applications that require data privacy. Medical personnel can use the FL framework for machine learning to assist in analyzing large-scale data that are protected within the institution. However, not all clients have the same distribution of datasets, so data imbalance problems occur among clients. The main challenge is to overcome the performance degradation caused by low accuracy and the inability to converge the model. This paper proposes the FedISM method to enhance performance in the case of non-independent and identically distributed (non-IID) data. FedISM exploits a shared model trained on a candidate dataset before performing FL among clients. The Candidate Selection Mechanism (CSM) is proposed to effectively select the most suitable candidate among clients for training the shared model. Based on the proposed approaches, FedISM not only trains the shared model without sharing any raw data, but it also provides an optimal solution through the selection of the best shared model. To evaluate performance, the proposed FedISM was applied to classify coronavirus disease (COVID), pneumonia, normal, and viral pneumonia cases in the experiments. The Dirichlet process was also used to simulate a variety of imbalanced data distributions. Experimental results show that FedISM improves accuracy by up to 25%, which matters as privacy concerns regarding patient data are rising among medical institutions.
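The abstract does not spell out how the Candidate Selection Mechanism scores clients. One plausible sketch, offered purely as an illustration (the paper's actual CSM criterion may differ), is to prefer the client whose local label distribution is least imbalanced:

```python
import math
from collections import Counter

def pick_shared_model_candidate(client_labels):
    """Hypothetical candidate-selection step: choose the client whose label
    distribution has the highest entropy, i.e. the least class-imbalanced
    local dataset, to train the shared model.
    `client_labels` maps client id -> list of class labels held locally."""
    def entropy(labels):
        n = len(labels)
        return -sum(c / n * math.log(c / n) for c in Counter(labels).values())
    return max(client_labels, key=lambda cid: entropy(client_labels[cid]))
```

Under this rule, a client holding a 50/50 class split would be preferred over one holding a 90/10 split.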
APA, Harvard, Vancouver, ISO, and other styles
24

Hao, Yuhan, Gary M. Weiss, and Stuart M. Brown. "Identification of Candidate Genes Responsible for Age-related Macular Degeneration using Microarray Data." International Journal of Service Science, Management, Engineering, and Technology 9, no. 2 (2018): 33–60. http://dx.doi.org/10.4018/ijssmet.2018040102.

Full text
Abstract:
A DNA microarray can measure the expression of thousands of genes simultaneously, and this enables us to study the molecular pathways underlying Age-related Macular Degeneration. Previous studies have not determined which genes are responsible for the process of AMD. The authors address this deficiency by applying modern data mining and machine learning feature selection algorithms to the AMD microarray dataset. In this paper four methods are utilized to perform feature selection: Naïve Bayes, Random Forest, Random Lasso, and Ensemble Feature Selection. Functional Annotation of 20 final selected genes suggests that most of them are responsible for signal transduction in an individual cell or between cells. The top seven genes, five protein-coding genes and two non-coding RNAs, are explored from their signaling pathways, functional interactions and associations with retinal pigment epithelium cells. The authors conclude that Pten/PI3K/Akt pathway, NF-kappaB pathway, JNK cascade, Non-canonical Wnt Pathway, and two biological processes of cilia are likely to play important roles in AMD pathogenesis.
APA, Harvard, Vancouver, ISO, and other styles
25

Riyadi, Agus, Mate Kovacs, Uwe Serdült, and Victor Kryssanov. "Benchmarking with a Language Model Initial Selection for Text Classification Tasks." Machine Learning and Knowledge Extraction 7, no. 1 (2025): 3. https://doi.org/10.3390/make7010003.

Full text
Abstract:
Globally recognized concerns about AI’s environmental implications have resulted in a growing awareness of the need to reduce AI carbon footprints, as well as to carry out AI processes responsibly and in an environmentally friendly manner. Benchmarking, a critical step when evaluating AI solutions with machine learning models, particularly with language models, has recently become a focal point of research aimed at reducing AI carbon emissions. Contemporary approaches to AI model benchmarking, however, do not enforce (nor do they assume) a model initial selection process. Consequently, modern model benchmarking is no different from a “brute force” testing of all candidate models before the best-performing one can be deployed. Obviously, the latter approach is inefficient and environmentally harmful. To address the carbon footprint challenges associated with language model selection, this study presents an original benchmarking approach with a model initial selection on a proxy evaluative task. The proposed approach, referred to as Language Model-Dataset Fit (LMDFit) benchmarking, is devised to complement the standard model benchmarking process with a procedure that eliminates underperforming models from computationally extensive and, therefore, environmentally unfriendly tests. The LMDFit approach draws parallels from the organizational personnel selection process, where job candidates are first evaluated with a number of basic skill assessments before they are hired, thus mitigating the consequences of hiring unfit candidates for the organization. LMDFit benchmarking compares candidate model performances on a small target-task dataset to disqualify less-relevant models from further testing. A semantic similarity assessment of random texts is used as the proxy task for the initial selection, and the approach is explicated in the context of various text classification assignments.
Extensive experiments across eight text classification tasks (both single- and multi-class) from diverse domains are conducted with seven popular pre-trained language models (both general-purpose and domain-specific). The results obtained demonstrate the efficiency of the proposed LMDFit approach in terms of the overall benchmarking time as well as estimated emissions (a 37% reduction, on average) in comparison to the conventional benchmarking process.
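Stripped to its essentials, the prescreening step is a filter: score every candidate on the cheap proxy task and drop those far behind the leader before the expensive benchmark. The threshold rule below is an assumed placeholder, not the criterion used in the paper:

```python
def lmdfit_prescreen(candidates, proxy_score, keep_ratio=0.9):
    """Keep only candidates whose proxy-task score is within `keep_ratio`
    of the best proxy score; the survivors proceed to full benchmarking.
    `proxy_score(model)` runs the cheap proxy evaluation for one model."""
    scored = {m: proxy_score(m) for m in candidates}
    best = max(scored.values())
    return [m for m, s in scored.items() if s >= keep_ratio * best]
```

Every model eliminated here saves a full benchmark run, which is the source of the reported time and emissions reductions.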
APA, Harvard, Vancouver, ISO, and other styles
26

Demir, Cüneyd, Merdin Danışmaz, and Mustafa Bozdemir. "Intelligent Selection of Mobility Systems For Unmanned Ground Vehicles Through Machine Learning." Revista de Gestão Social e Ambiental 19, no. 3 (2025): e011590. https://doi.org/10.24857/rgsa.v19n3-036.

Full text
Abstract:
Objective: The primary objective of this study is to enhance the selection process of mobility systems for unmanned ground vehicles (UGVs) by leveraging machine learning techniques. Specifically, it aims to identify the most suitable mobility systems that align with mission requirements and user needs while optimizing performance across diverse terrains. Theoretical Framework: This research is grounded in theories of systems engineering and decision-making processes related to vehicle design. It builds on the premise that mobility systems are key determinants of vehicle performance, affecting aspects such as energy efficiency, maneuverability, and load-carrying capacity. The integration of machine learning within the design process represents a shift from traditional methodologies, facilitating a data-driven approach to system selection. Method: The study employed a machine learning framework to analyze UGV mobility systems by addressing feedback from five key questions. Various classification algorithms were utilized, including Random Forest, Naive Bayes, Support Vector Machines, and k-Nearest Neighbors. The performance of these algorithms was evaluated based on accuracy metrics such as precision, recall, and F1 scores, allowing for a comprehensive assessment of their efficacy in predicting suitable mobility systems. Results and Discussion: The findings highlight that the Random Forest algorithm outperformed the others with an accuracy of 98.7%, indicating its effectiveness in classifying suitable mobility systems for UGVs. The research discusses the implications of employing machine learning in this context, suggesting that it can streamline the design process by quickly identifying strong candidates for mobility systems. Challenges associated with the complexity of UGV parameters and the importance of tailored mobility solutions are also explored.
Research Implications: This study underscores the significance of integrating machine learning into the design and selection of UGV mobility systems, offering a new perspective on improving operational effectiveness. It provides insights for engineers and researchers in the field of unmanned systems, suggesting a paradigm shift that prioritizes data-driven decision-making over traditional approaches. Originality/Value: This research contributes original insights by introducing a novel approach to mobility system selection for UGVs through machine learning. It adds value by demonstrating the potential for increased accuracy and efficiency in system design, which could lead to enhanced mission success and reduced costs associated with design modifications. The study bridges a gap in existing literature by combining mobility system analysis with advanced computational techniques, paving the way for future advancements in unmanned vehicle design.
APA, Harvard, Vancouver, ISO, and other styles
27

Montesinos-López, Osval A., Arron H. Carter, David Alejandro Bernal-Sandoval, Bernabe Cano-Paez, Abelardo Montesinos-López, and José Crossa. "A Comparison between Three Tuning Strategies for Gaussian Kernels in the Context of Univariate Genomic Prediction." Genes 13, no. 12 (2022): 2282. http://dx.doi.org/10.3390/genes13122282.

Full text
Abstract:
Genomic prediction is revolutionizing plant breeding since candidate genotypes can be selected without the need to measure their trait in the field. When a reference population contains both phenotypic and genotypic information, it is trained by a statistical machine learning method that is subsequently used for making predictions of breeding or phenotypic values of candidate genotypes that were only genotyped. Nevertheless, the successful implementation of the genomic selection (GS) methodology depends on many factors. One key factor is the type of statistical machine learning method used since some are unable to capture nonlinear patterns available in the data. While kernel methods are powerful statistical machine learning algorithms that capture complex nonlinear patterns in the data, their successful implementation strongly depends on the careful tuning process of the involved hyperparameters. As such, in this paper we compare three methods of tuning (manual tuning, grid search, and Bayesian optimization) for the Gaussian kernel under a Bayesian best linear unbiased predictor model. We used six real datasets of wheat (Triticum aestivum L.) to compare the three strategies of tuning. We found that if we want to obtain the major benefits of using Gaussian kernels, it is very important to perform a careful tuning process. The best prediction performance was observed when the tuning process was performed with grid search and Bayesian optimization. However, we did not observe relevant differences between the grid search and Bayesian optimization approach. The observed gains in terms of prediction performance were between 2.1% and 27.8% across the six datasets under study.
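Grid search over the Gaussian kernel bandwidth can be illustrated with a simple validation loop. The Nadaraya-Watson (kernel-weighted average) predictor below is a stand-in for the paper's Bayesian best linear unbiased predictor model, which is not reproduced here; all names are illustrative:

```python
import numpy as np

def gaussian_kernel(X1, X2, h):
    """K[i, j] = exp(-||x1_i - x2_j||^2 / (2 h^2))."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * h ** 2))

def grid_search_bandwidth(X_tr, y_tr, X_val, y_val, grid):
    """Pick the bandwidth h that minimizes validation MSE of a
    kernel-weighted average predictor."""
    best_h, best_mse = None, float("inf")
    for h in grid:
        K = gaussian_kernel(X_val, X_tr, h)
        pred = (K @ y_tr) / K.sum(axis=1)   # weighted average of training y
        mse = float(((pred - y_val) ** 2).mean())
        if mse < best_mse:
            best_h, best_mse = h, mse
    return best_h, best_mse
```

Bayesian optimization would replace the exhaustive loop with a surrogate model that proposes the next h to try, but, as the abstract notes, both careful strategies tend to land on similarly good hyperparameters.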
APA, Harvard, Vancouver, ISO, and other styles
28

Patel, Anjali, Kiran Thakor, and Megha Patel. "Predictive Modeling for ATME-TOX Properties of Drug Using Machine Learning: A Review." SPU Journal of Science, Technology and Management Research 1, no. 1 (2024): 39–44. https://doi.org/10.63766/spujstmr.24.000007.

Full text
Abstract:
This survey paper comprehensively explores the landscape of predictive modeling for Absorption, Distribution, Metabolism, Excretion, and Toxicity (ADMET) properties of drugs through the lens of machine learning (ML) techniques. The review encompasses an extensive analysis of methodologies, datasets, advancements in ML algorithms, and their applications in drug discovery and development. Beginning with an overview of the significance of ADMET properties in drug development, the survey delves into various datasets utilized for modeling, encompassing chemical descriptors, biological activities, physicochemical properties, and toxicity endpoints. It scrutinizes the intricacies of feature engineering, emphasizing the importance of selecting informative features for accurate predictions. The survey critically evaluates an array of ML algorithms employed in predictive modeling, ranging from traditional methods to state-of-the-art deep learning architectures. It highlights the strengths, limitations, and applications of these algorithms in predicting ADMET properties, emphasizing the need for robust experimental design and validation protocols. Challenges such as interpretability, data quality, and integration of domain knowledge are addressed, underscoring the significance of standardized frameworks for ensuring the reproducibility and generalizability of predictive models. Furthermore, the survey showcases successful applications of ML-based predictive modeling in optimizing drug candidate selection, mitigating toxicity risks, and expediting the drug discovery process.
APA, Harvard, Vancouver, ISO, and other styles
29

Yang, Chunyu. "Research on loan approval and credit risk based on the comparison of Machine learning models." SHS Web of Conferences 181 (2024): 02003. http://dx.doi.org/10.1051/shsconf/202418102003.

Full text
Abstract:
Nowadays, the home loan is a frequently accessed component of people’s financing activities. Home buyers want to increase the probability of loan acceptance, while banks seek to lend money to low-risk customers. This paper compared and examined machine learning models that loan applicants can select when evaluating their probability of success, and it introduced the recommended models for the problem with explanations of how to use the selected model. Six candidate models, including logistic regression, decision tree, random forest, support vector machine (SVM), AdaBoost, and neural network, were selected. The model selection process focused on each model’s accuracy on test data as well as its interpretability. The models’ results were interpreted to derive optimal strategies to be undertaken by both debtors and creditors. Through comparison between these models, logistic regression was the best in terms of interpretability and accuracy. Nonetheless, the other models could bolster the decision-making process through examination of their confusion matrices and the fitted importance of predictors in each model. This paper revealed practical implications of machine learning theories for home loan approval and credit risk and aimed to help decision making for both debtors and creditors.
APA, Harvard, Vancouver, ISO, and other styles
30

Yuskovych-Zhukovska, Valentyna, and Oleg Bogut. "An Intellectual Information System for Rank-Based Selection of Web Programmers." Herald of Khmelnytskyi National University. Technical Sciences 345, no. 6(2) (2024): 11–20. https://doi.org/10.31891/2307-5732-2024-345-6-1.

Full text
Abstract:
The rapid growth of the digital economy and the increasing demand for high-quality web applications have intensified the need for skilled web programmers. The selection and evaluation of these professionals pose significant challenges, particularly for organizations seeking to balance technical proficiency, team collaboration, and alignment with project objectives. Traditional hiring methods often fail to address the complexities of evaluating candidates' multifaceted skills, leading to inefficiencies in recruitment processes and suboptimal project outcomes. As a result, the development of intellectual information systems for automated and objective evaluation of web programmers has emerged as a crucial area of study. This article presents the conceptual framework of an intellectual information system for rank-based selection of web programmers. The proposed system integrates principles of artificial intelligence, machine learning, and multi-criteria decision analysis to ensure objective, transparent, and efficient evaluations. The core of the system is a multi-dimensional model that assesses candidates based on technical expertise, problem-solving skills, programming efficiency, adherence to coding standards, and soft skills such as communication and adaptability. A critical component of the system is its ability to dynamically weigh these criteria based on the specific requirements of a given role or project. The primary goal of this research is to design a system that facilitates rank-based selection through objective scoring mechanisms, enabling organizations to identify candidates best suited to their specific requirements. By employing advanced data analytics, the system is capable of generating detailed profiles for each candidate, offering insights into their technical and behavioral competencies. Additionally, the system supports integration with corporate learning management systems (LMS) to provide targeted training recommendations for skill enhancement. 
The intellectual information system proposed in this article represents a significant advancement in corporate IT education and human resource management. By automating and standardizing the evaluation process, the system not only reduces the time and cost associated with recruitment but also ensures a higher degree of precision in candidate selection. This innovation has the potential to transform the hiring landscape, fostering a more data-driven, equitable, and efficient approach to workforce development in the web programming industry.
APA, Harvard, Vancouver, ISO, and other styles
31

Narwade, Rutuja, Srujami Palkar, Isha Zade, and Nidhi Sanghavi. "Personality Prediction with CV Analysis." International Journal for Research in Applied Science and Engineering Technology 10, no. 4 (2022): 970–74. http://dx.doi.org/10.22214/ijraset.2022.41359.

Full text
Abstract:
When it comes to demography, personality plays a crucial role in deciphering a person’s caliber and work ethic. An individual’s personality becomes a vital resource for the organization that he/she works for. One way to adjudicate an individual’s personality is to frame a questionnaire or by analysis of their resume (CV). In the traditional sense, when it comes to hiring an individual, employers manually filter through the applicants’ CVs as per the job description. In this paper, we render a system that motorizes the eligibility check and aptitude evaluation of a candidate in the shortlisting strategy. To overcome the predicaments encountered in the traditional procedure, a web application has been curated for aptitude analysis (personality evaluation) and CV analysis. The system’s primary aim is to analyze the professional ability of the candidate based on the uploaded CV and the prepared questionnaire. The system employs Natural Language Processing (NLP) for the CV analysis and Machine Learning (ML) for the personality evaluation. The output of the curated system aids in applicant filtering. Further, the resulting scores help in evaluating the qualities of the applicant such as the kind of mindset he/she has and the skills he/she has accumulated over time. This approach has been proposed keeping in mind the hurdles encountered while analyzing an applicant during the hiring process and aims in providing a seamless system that will be able to aid in making a fair decision in the selection process. Keywords: Personality Prediction, CV Analysis, Machine Learning, Natural Language Processing, Big Five Personality Model, Psychometric Analysis, Hiring, and Selection.
APA, Harvard, Vancouver, ISO, and other styles
32

Norman, Christopher R., Elizabeth Gargon, Mariska M. G. Leeflang, Aurélie Névéol, and Paula Williamson. "Evaluation of an automatic article selection method for timelier updates of the Comet Core Outcome Set database." Database (Oxford) 2019 (November 7, 2019): baz109. https://doi.org/10.1093/database/baz109.

Full text
Abstract:
Curated databases of scientific literature play an important role in helping researchers find relevant literature, but populating such databases is a labour intensive and time-consuming process. One such database is the freely accessible Comet Core Outcome Set database, which was originally populated using manual screening in an annually updated systematic review. In order to reduce the workload and facilitate more timely updates we are evaluating machine learning methods to reduce the number of references needed to screen. In this study we have evaluated a machine learning approach based on logistic regression to automatically rank the candidate articles. Data from the original systematic review and its four first review updates were used to train the model and evaluate performance. We estimated that using automatic screening would yield a workload reduction of at least 75% while keeping the number of missed references around 2%. We judged this to be an acceptable trade-off for this systematic review, and the method is now being used for the next round of the Comet database update.
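The headline numbers (roughly 75% workload reduction with about 2% missed references) correspond to ranking candidates by classifier score and screening from the top until the target recall is reached. A sketch of that calculation, with illustrative names:

```python
import numpy as np

def workload_reduction_at_recall(scores, labels, target_recall=0.98):
    """Rank references by classifier score (descending) and find the
    smallest prefix that captures `target_recall` of the relevant ones;
    return the fraction of the collection that can be left unscreened."""
    ranked = np.asarray(labels)[np.argsort(-np.asarray(scores))]
    needed = int(np.ceil(target_recall * ranked.sum()))
    cutoff = int(np.argmax(np.cumsum(ranked) >= needed)) + 1
    return 1.0 - cutoff / len(ranked)
```

The better the model concentrates relevant references near the top of the ranking, the smaller the prefix that must be screened manually.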
APA, Harvard, Vancouver, ISO, and other styles
33

Agbasiere, Chinyere Linda, and Goodness Rex Nze-Igwe. "Algorithmic Fairness in Recruitment: Designing AI-Powered Hiring Tools to Identify and Reduce Biases in Candidate Selection." Path of Science 11, no. 4 (2025): 5001. https://doi.org/10.22178/pos.116-10.

Full text
Abstract:
The study looks into how artificial intelligence (AI) affects hiring procedures, focusing on the fairness of the algorithms that drive these tools. AI has improved the efficiency of the hiring process, yet its use can result in institutionalised discrimination. The AI systems used for recruitment, which base evaluations on past performance data, have the potential to discriminate unintentionally against minority candidates as well as women. The ability of AI systems to decrease human biases during recruitment faces major challenges, as Amazon's discriminatory resume screening demonstrates how systemic bias can be perpetuated. This paper discusses the origins of algorithmic bias, including biased training records, label definition, and feature selection, and suggests debiasing methods. Methods such as reweighting, adversarial debiasing, and fairness-aware algorithms are assessed for suitability in developing unbiased AI hiring systems. A quantitative approach is used in the research, web scraping data from extensive secondary sources to assess these biases and their mitigation measures. A Fair Machine Learning (FML) theoretical framework is utilised, which introduces fairness constraints into machine learning models so that hiring models do not perpetuate present discrimination. The ethical, legal, and organisational ramifications of using AI for recruitment are further examined under GDPR and Equal Employment Opportunity law provisions. By investigating HR practitioners' experiences and AI-based recruitment data, the study aims to develop guidelines for designing open, accountable, and equitable AI-based hiring processes. The findings emphasise the value of human oversight and the necessity of regular audits to guarantee equity in AI hiring software and, consequently, encourage diversity and equal opportunity during employment.
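Of the debiasing methods mentioned above, reweighting is the simplest to show concretely. The sketch below follows the common reweighing formulation (each group-outcome cell is weighted by P(group)P(label)/P(group, label)); the variable names are my own, not the paper's:

```python
from collections import Counter

def reweighing(groups, labels):
    """Assign each (protected group, outcome) cell the weight
    P(group) * P(label) / P(group, label), so that after weighting the
    outcome is statistically independent of the protected attribute."""
    n = len(labels)
    pg, py = Counter(groups), Counter(labels)
    pgy = Counter(zip(groups, labels))
    return [(pg[g] / n) * (py[y] / n) / (pgy[(g, y)] / n)
            for g, y in zip(groups, labels)]
```

Cells that are over-represented relative to independence (e.g. one group receiving positive outcomes disproportionately often) get weights below 1, and under-represented cells get weights above 1, before the weights are fed into a standard training loop.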
APA, Harvard, Vancouver, ISO, and other styles
34

Sinha, Arvind Kumar, Md Amir Khusru Akhtar, and Mohit Kumar. "Automated Resume Parsing and Job Domain Prediction using Machine Learning." Indian Journal of Science and Technology 16, no. 26 (2023): 1967–74. https://doi.org/10.17485/IJST/v16i26.880.

Full text
Abstract:
Objectives: This study aims to develop an efficient approach for parsing resumes and predicting job domains using natural language processing (NLP) techniques and named entity recognition to enhance the resume screening process for recruiters. Methods: The proposed approach involves preprocessing steps, such as cleaning, tokenization, stop-word removal, stemming, and lemmatization, implemented with the PyMuPDF and doc2text Python modules. Regular expressions and the spaCy library are utilized for entity recognition and name extraction. The model achieved a prediction accuracy of 92.08% and an F1-score of 0.92 on a dataset of 1000 resumes. An ablation experiment assessed the contributions of different factors. Findings: The approach demonstrated a high prediction accuracy of 92.08% and an F1-score of 0.92 for job domain prediction, effectively identifying relevant job domains from resumes. Evaluations on individual job domains showed excellent precision and recall scores, validating its applicability. Preprocessing techniques significantly improved accuracy, while the integration of regular expressions and spaCy enhanced the model's performance. This approach automates resume screening, reducing recruiters' workload, saving time and effort, and improving candidate selection and the hiring process. Novelty: This study introduces a novel approach combining NLP techniques, regular expressions, and entity recognition for resume parsing and job domain prediction. This integration enhances accuracy and efficiency, offering a unique solution for resume screening. Keywords: Resume parsing; Job domain prediction; Entity recognition; Machine learning; Natural Language Processing
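As a flavor of the regular-expression extraction step, the fragment below pulls contact details out of raw resume text. The patterns are simplified illustrations; the paper's pipeline additionally uses PyMuPDF, doc2text, and spaCy NER, none of which is reproduced here:

```python
import re

# Simplified illustrative patterns, not production-grade validators.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE = re.compile(r"\+?\d[\d\s-]{8,}\d")

def extract_contacts(text):
    """Extract e-mail addresses and phone numbers from raw resume text."""
    return {"emails": EMAIL.findall(text),
            "phones": [p.strip() for p in PHONE.findall(text)]}
```

A fuller pipeline would run such extractors after PDF-to-text conversion and pass the remaining text to an NER model for names, skills, and organizations.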
APA, Harvard, Vancouver, ISO, and other styles
35

Amulya, A. "Smart Hire Based System." International Journal of Scientific Research in Engineering and Management 08, no. 04 (2024): 1–5. http://dx.doi.org/10.55041/ijsrem31773.

Full text
Abstract:
An efficient and effective hiring process is a step-by-step procedure for hiring a new employee, whereby an organization identifies its talent needs, recruits from its talent pool, and eventually hires the most qualified candidates. Most companies have their own hiring processes; what follows are the most common steps across industries, regardless of company size. Keep in mind, however, that the specific details of the hiring process are unique to each company. Our model automates the interview process using machine learning in order to simplify it by extracting the key words, skills, and passions from the profile and sending mails to the applicants. In most organisations, recruitment is a complex and time-consuming process, since they receive a large number of job applications for each opening. Managing those applications manually would require a great amount of labour, time, and money. Further, the HR personnel or the recruitment team must manually shortlist the incoming candidates and schedule interviews for them accordingly; this lengthy procedure makes the organization's operations inefficient. Therefore, we planned to build a web application with the aim of automating the interview process. The application includes the following capabilities: candidate personality prediction using machine learning, an automated resume parser, handling of interview scheduling, video recording in the browser, assessment of the candidate's confidence level and other personality traits using video (eye movements and facial emotions) and tone (speech) analysis, keeping track of recruitment data, and sending mail to selected/rejected candidates automatically in one click. 
This system will consequently make the hiring process much faster and smoother, allow recruitment in larger numbers, generate concise insights, and provide a summary of the candidate's profile that includes the resume, responses to questions, technical competencies, personality traits, and the video and tone analysis results.
APA, Harvard, Vancouver, ISO, and other styles
36

Abulail, Rawan Nassri. "Enhanced Fitness Proportionate Selection Algorithm for Parent Selection in Genetic Algorithms." Journal of Internet Services and Information Security 15, no. 1 (2025): 257–70. https://doi.org/10.58346/jisis.2025.i1.016.

Full text
Abstract:
A genetic algorithm is an evolutionary algorithm that models and simulates biological evolution and genetics to reach high-quality solutions for search and optimization problems. Genetic algorithms are applied in many areas, such as machine learning, feature selection, engineering design, and function optimization. Three main operators are applied in each generation's reproduction process. The first is selection, which is applied to the current population to choose the candidate parents that will mate and recombine to produce the next generation (offspring). The second is crossover, which is applied to the selected parents to create new individuals (offspring) that inherit traits from both parents by combining the parents' chromosomes. The last is mutation, which is applied to the new offspring after crossover and randomly changes the value of a chromosome gene. In this research, the selection process is examined in detail, and fitness proportionate selection (FPS) is presented as one of the most popular selection methods. The main problem of FPS concerns the candidate parents that mate and recombine to reproduce the next generation: in some cases, a strong individual can mate with a weak one and, as a consequence of trait exchange, produce offspring with lower-quality traits than the strong parent. The researcher proposes an enhancement of the FPS algorithm to ensure that strong parents mate with strong parents, reproduce strong offspring, and propagate their strong traits to subsequent generations. The enhancement adds a step to the standard FPS that sorts the selected individuals in ascending or descending order after the selection process and before the crossover and mutation phases.
The researcher conducted three experiments to demonstrate the improvement in fitness value resulting from this additional step. The experiments were performed with three different population sizes over 100 generations, and the fitness score was measured in each generation to track its evolution over the GA iterations. The results clearly showed that sorting the individuals after selection yielded better fitness scores than the standard FPS.
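The sorted-selection enhancement described in this abstract can be sketched in a few lines of Python (an illustrative reconstruction, not the authors' code; the fitness function and population representation are placeholders):

```python
import random

def fitness_proportionate_selection(population, fitness, k):
    """Standard FPS (roulette wheel): draw k parents with probability
    proportional to their fitness."""
    total = sum(fitness(ind) for ind in population)
    weights = [fitness(ind) / total for ind in population]
    return random.choices(population, weights=weights, k=k)

def enhanced_fps(population, fitness, k):
    """The enhancement from the abstract: sort the selected parents by
    fitness before crossover, so strong parents are paired with strong
    parents and propagate their traits."""
    parents = fitness_proportionate_selection(population, fitness, k)
    return sorted(parents, key=fitness, reverse=True)
```

Pairing consecutive entries of the sorted list for crossover then mates individuals of similar strength, which is the effect the enhancement aims for.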
APA, Harvard, Vancouver, ISO, and other styles
37

Bhardwaj, Trived, Jitendra Kumar, Divyanshi Chauhan, and Babita Sharma. "Resume Genics." International Journal for Research in Applied Science and Engineering Technology 13, no. 3 (2025): 3309–14. https://doi.org/10.22214/ijraset.2025.68077.

Full text
Abstract:
Resume Genics is a website that helps job seekers create a strong resume. Nowadays, hundreds or thousands of people apply for a single job vacancy, so applicants need a resume that stands out in the screening process. The website increases the chances of selection by giving resumes a professional polish. It uses algorithms and machine learning to automate the scanning, analysis, and ranking of resumes against job requirements. Resume Genics provides better formats and templates, uses professional language, and reduces human error. It receives a resume as input, processes it, and produces a score or ranking that indicates how well qualified the candidate is, making it easier for recruiters to shortlist applicants. It thus helps job seekers pass the resume-screening stage and qualify for their dream company.
APA, Harvard, Vancouver, ISO, and other styles
38

Kabirov, Danil M., and Olga A. Denisova. "A PLATFORM FOR AUTOMATING INTERVIEWS USING ARTIFICIAL INTELLIGENCE." Electrical and data processing facilities and systems 21, no. 1 (2025): 130–40. https://doi.org/10.17122/1999-5458-2025-21-1-130-140.

Full text
Abstract:
Relevance. The relevance of developing a platform for automating interviews using artificial intelligence is determined by a number of factors. First, the concept of this product touches on two important areas, information technology and human resource management, and many industries actively need to replace outdated methods of organizing HR processes with innovative solutions using artificial intelligence. Second, the work studies the possibilities of integrating various machine learning methods into recruitment processes, which has been a topical subject for the past five years. Traditional recruitment often faces many difficulties, from the high complexity of candidate selection processes to the difficulty of assessing candidates' professional and personal qualities. In this regard, the search for and implementation of innovative solutions aimed at optimizing and improving the recruitment process comes to the fore. Aim of research. To develop a web platform for automating the recruitment process using artificial intelligence technologies that improves hiring efficiency and the accuracy of candidate evaluation. Research methods. Analysis of existing business processes; software development using Next.js and Prisma; and machine learning algorithms for generating test tasks, facial recognition, and analysis of candidates' emotional state. Results. A web platform has been developed that successfully automates the recruitment process in the oil and gas industry by integrating artificial intelligence technologies for generating test tasks, face recognition, and analysis of candidates' emotional state. The platform provides high performance and scalability through modern technologies such as Next.js, Prisma, WebSockets, and peer-to-peer connections.
The implemented functionality, including the creation and publication of vacancies, user registration, and online interviews, significantly increases the efficiency and accuracy of candidate evaluation, making the platform competitive in the market.
APA, Harvard, Vancouver, ISO, and other styles
39

Wahidin, Didin, Okke Rosmaladewi, and Janaenah. "Implementasi Artificial Intelligence dalam Proses Rekrutmen dan Seleksi : Studi Kasus pada PT Saripetejo Sukabumi." Jurnal Indragiri Penelitian Multidisiplin 4, no. 3 (2024): 84–88. https://doi.org/10.58707/jipm.v4i3.1074.

Full text
Abstract:
This research explores the implementation of Artificial Intelligence (AI) in the recruitment and selection processes at PT Saripetejo Sukabumi. Utilizing a qualitative approach with a case study method, the study aims to analyze digital transformation in human resource management. The research focuses on identifying the AI system architecture, organizational implications, and implementation challenges. Data collection was conducted through in-depth interviews, participatory observation, and document analysis. The findings reveal that the AI system comprises four key components: candidate filtering algorithms, machine learning-based competency analysis, automatic psychometric assessment, and an intelligent virtual interview platform. AI implementation increases recruitment process efficiency by up to 60% with a candidate matching accuracy of 75%. The study uncovers that successful AI integration requires a holistic approach considering technological, ethical, and human resource development aspects. The findings provide theoretical and practical contributions to understanding digital transformation in contemporary recruitment practices.
APA, Harvard, Vancouver, ISO, and other styles
40

Simion, Petronela Cristina, Mirona Ana Maria Popescu, Iustina Cristina Costea-Marcu, and Iuliana Grecu. "Human Resource Management in Modern Society." Advances in Science and Technology 110 (September 27, 2021): 25–30. http://dx.doi.org/10.4028/www.scientific.net/ast.110.25.

Full text
Abstract:
Recruitment is one of the main pillars of a properly functioning, healthy organization that meets its objectives, and it is a process coordinated by the human resources department together with the respective managers for each position. The recruitment process consists of promoting vacancies through the most appropriate channels, means, and tools to maximize the attraction of the most suitable candidates, ensuring addressability and accessibility within the target group of potential applicants who meet the conditions and criteria set out in the profile of the ideal candidate. There is an acute need for skills in a labor market that demands competitiveness as soon as possible. The most important thing that managers and leaders do in an organization is to hire the right people for the right job. At present, the recruitment process has become very dynamic and is constantly changing. We are currently facing an impressive increase in the use of technology and automation in almost every aspect of the recruitment industry. Undoubtedly, the automation of the recruitment process and the integration of artificial intelligence and machine learning algorithms into these systems has brought a number of benefits and changed the way candidates are selected. The authors of the article conduct a bibliographic study to illustrate the current state of human resources management and the methods adopted to optimize it. The latest trends in the field are reviewed and the main emerging challenges are presented, along with the means used to retain employees within organizations and increase their productivity. The collected data are interpreted, and a systematization of the recruitment process is proposed using process modeling, for easier implementation.
In conclusion, the role of recruiters will change with the wider adoption of solutions that automate the search for and selection of candidates. Although the benefits of recruiting automation could outweigh the arguments against it, it is prudent for recruiters to combine technology with the human factor when selecting the right candidate. The most important strategic HR challenge for companies is to maintain a high level of employee involvement, which is difficult to achieve through technology alone.
APA, Harvard, Vancouver, ISO, and other styles
41

G. Venkateshwaran,. "Artificial Intelligence in HR: Transforming Recruitment and Selection in IT Industry." Journal of Information Systems Engineering and Management 10, no. 17s (2025): 38–45. https://doi.org/10.52783/jisem.v10i17s.2705.

Full text
Abstract:
The integration of Artificial Intelligence (AI) in Human Resource Management (HRM) has revolutionized traditional recruitment and selection processes, particularly in the IT industry, where rapid technological advancements demand a skilled and dynamic workforce. AI-driven recruitment systems leverage machine learning, natural language processing (NLP), and predictive analytics to enhance efficiency, reduce hiring biases, and improve decision-making. This study explores the transformative role of AI in recruitment and selection within the IT sector, highlighting its benefits, challenges, and future implications. AI-powered tools have streamlined various stages of the hiring process, from resume screening to candidate assessment. Automated applicant tracking systems (ATS) equipped with AI algorithms can efficiently scan thousands of resumes, surfacing the most relevant candidates based on predefined criteria. Additionally, AI-driven chatbots and virtual assistants engage with applicants, provide real-time responses, schedule interviews, and enhance the candidate experience. These tools reduce the time-to-hire and improve the quality of recruitment by identifying the best-fit candidates based on skills, experience, and cultural alignment. Another critical advantage of AI in recruitment is its potential to reduce human biases. Traditional hiring processes are often influenced by unconscious biases related to gender, ethnicity, or educational background. AI-driven assessments focus on skills-based evaluation, utilizing predictive analytics to match candidates with job roles based on competencies rather than demographic factors. Furthermore, video interview analysis tools can assess verbal and non-verbal cues using facial recognition and speech analysis, helping recruiters make data-driven hiring decisions. Despite these benefits, AI in recruitment comes with challenges. Significant concerns include data privacy and ethical considerations.
AI algorithms rely on vast datasets, raising questions about data security, transparency, and fairness. If trained on biased historical data, AI systems may perpetuate discrimination rather than eliminate it. Ensuring algorithmic fairness and regulatory compliance, such as adhering to General Data Protection Regulation (GDPR) and other data protection laws, is crucial for ethical AI deployment in HR. Additionally, the human touch in recruitment remains essential. While AI can handle administrative tasks and initial screenings, final hiring decisions still require human judgment. Organizations must strike a balance between AI automation and human intuition to ensure a holistic approach to talent acquisition. HR professionals must also undergo upskilling to effectively leverage AI tools and interpret AI-driven insights. The future of AI in HRM will see advancements in predictive hiring, sentiment analysis, and skill gap analysis. AI-powered platforms will not only match candidates to current job roles but also predict future skill requirements and recommend personalized learning paths for employees. In the IT industry, where skills evolve rapidly, AI-driven workforce planning will play a crucial role in talent retention and upskilling initiatives. This paper analyzes the impact of AI on recruitment and selection in the IT industry, focusing on efficiency, bias reduction, candidate experience, and decision-making improvements. It examines AI-driven tools like ATS, chatbots, and predictive analytics, discusses ethical concerns, and explores the future role of AI in workforce planning and talent acquisition.
APA, Harvard, Vancouver, ISO, and other styles
42

Zheng, Zilai, Takehiro Morimoto, and Yuji Murayama. "A GIS-Based Bivariate Logistic Regression Model for the Site-Suitability Analysis of Parcel-Pickup Lockers: A Case Study of Guangzhou, China." ISPRS International Journal of Geo-Information 10, no. 10 (2021): 648. http://dx.doi.org/10.3390/ijgi10100648.

Full text
Abstract:
The site-suitability analysis (SSA) of parcel-pickup lockers (PPLs) is becoming a critical problem in last-mile logistics. Most studies have focused on the site-selection problem of identifying the best site among given potential sites in a specific area, while few have solved the site-search problem of determining the boundary of the suitable area. In this study, a GIS-based bivariate logistic regression (LR) model was developed for suitability classification using supervised machine learning (ML). Eight crucial factors were selected from 27 candidate variables using stepwise methods with a training dataset in the best LR model. Proximity to residential buildings was more important than proximity to various commercial buildings, transport services, and roads. Among the four types of residential buildings, the most crucial factor was proximity to residential quarters. A test dataset was employed for validation, showing that the best LR model had excellent performance. The results identified the suitable areas for PPLs, accounting for 8% of the total area of Guangzhou (GZ). A decision-maker can focus on these suitable areas as site-selection ranges for PPLs, which significantly reduces the difficulty of analysis and time costs. This method can quickly decompose a large-scale area into several small-scale suitable areas, with relevance to the problem of selecting sites from various candidate sites.
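A minimal stand-in for the suitability-classification step, binary logistic regression with a probability threshold, can be sketched in pure NumPy. This is an illustrative reconstruction, not the paper's GIS pipeline; the features, labels, and threshold are placeholders:

```python
import numpy as np

def fit_logistic(X, y, lr=0.1, steps=2000):
    """Minimal logistic regression trained by gradient descent; a
    stand-in for the paper's GIS-based LR suitability model."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # sigmoid probabilities
        w -= lr * (X.T @ (p - y)) / len(y)      # gradient of the log-loss
        b -= lr * np.mean(p - y)
    return w, b

def predict_suitable(X, w, b, threshold=0.5):
    """Mark a cell as suitable when its predicted probability
    reaches the threshold."""
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    return p >= threshold
```

Each row of `X` would hold the proximity-style factors for one map cell, and the boolean output forms the suitability mask over the study area.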
APA, Harvard, Vancouver, ISO, and other styles
43

Poliszczuk, Artem, Agnieszka Pollo, Katarzyna Małk, et al. "Active galactic nuclei catalog from the AKARI NEP-Wide field." Astronomy & Astrophysics 651 (July 2021): A108. http://dx.doi.org/10.1051/0004-6361/202040219.

Full text
Abstract:
Context. The north ecliptic pole (NEP) field provides a unique set of panchromatic data that are well suited for active galactic nuclei (AGN) studies. The selection of AGN candidates is often based on mid-infrared (MIR) measurements. Such methods, despite their effectiveness, strongly reduce the breadth of the resulting catalogs due to the MIR detection condition. Modern machine learning techniques can solve this problem by finding similar selection criteria using only optical and near-infrared (NIR) data. Aims. The aim of this study is to create a reliable AGN candidate catalog from the NEP field using a combination of optical SUBARU/HSC and NIR AKARI/IRC data and, consequently, to develop an efficient alternative to the MIR-based AKARI/IRC selection technique. Methods. We tested a set of supervised machine learning algorithms for efficient AGN selection. The best models were compiled into a majority voting scheme, which used the most popular classification result to produce the final AGN catalog. An additional analysis of the catalog's properties was performed via spectral energy distribution fitting with the CIGALE software. Results. The obtained catalog of 465 AGN candidates (out of 33 119 objects) is characterized by 73% purity and 64% completeness. The new classification shows suitable consistency with the MIR-based selection. Moreover, 76% of the obtained catalog can be found solely with the new method, owing to the lack of MIR detection for most of the new AGN candidates. The training data, codes, and final catalog are available via the GitHub repository; the final catalog of AGN candidates is also available via the CDS service. Conclusions. The new selection methods presented in this paper prove to be a better alternative to MIR color AGN selection.
Machine learning techniques not only show similar effectiveness, but also involve less demanding optical and NIR observations, substantially increasing the extent of available data samples.
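The majority-voting step and the purity/completeness figures quoted above can be illustrated with a small sketch (hypothetical, not the authors' pipeline; labels 1/0 stand for AGN candidate / other):

```python
import numpy as np

def majority_vote(predictions):
    """Combine 0/1 labels from several classifiers: each object keeps
    the most frequent class (1 = AGN candidate). Use an odd number of
    models to avoid ties."""
    votes = np.asarray(predictions)          # shape (n_models, n_objects)
    return (votes.mean(axis=0) >= 0.5).astype(int)

def purity_completeness(pred, truth):
    """Purity is the precision of the AGN class; completeness its recall."""
    tp = np.sum((pred == 1) & (truth == 1))
    purity = tp / max(np.sum(pred == 1), 1)
    completeness = tp / max(np.sum(truth == 1), 1)
    return purity, completeness
```

Against a spectroscopic truth sample, these two metrics characterize a catalog exactly as the 73% purity / 64% completeness numbers do in the abstract.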
APA, Harvard, Vancouver, ISO, and other styles
44

Chuang, Yen-Ching, Shu-Kung Hu, James J. H. Liou, and Gwo-Hshiung Tzeng. "A DATA-DRIVEN MADM MODEL FOR PERSONNEL SELECTION AND IMPROVEMENT." Technological and Economic Development of Economy 26, no. 4 (2020): 751–84. http://dx.doi.org/10.3846/tede.2020.12366.

Full text
Abstract:
Personnel selection and human resource improvement are characteristically multiple-attribute decision-making (MADM) problems. Previously developed MADM models have principally depended on experts' judgements as input for the derivation of solutions. However, the subjectivity of the experts' experience can have a negative influence on this type of decision-making process. With the arrival of today's data-based decision-making environment, we develop a data-driven MADM model, which integrates machine learning and MADM methods, to help managers select personnel more objectively and to support their competency improvement. First, RST, a machine learning tool, is applied to obtain the initial influential significance-relation matrix from real assessment data. Subsequently, the DANP method is used to derive an influential significance-network relation map and influential weights from the initial matrix. Finally, the PROMETHEE-AS method is applied to assess the gap between the aspiration and current levels for every candidate. An example was carried out using performance data with evaluation attributes obtained from the human resource department of a Chinese food company. The results revealed that the data-driven MADM model enables human resource managers to resolve the issues of personnel selection and improvement simultaneously and can be applied in the era of big data analytics.
APA, Harvard, Vancouver, ISO, and other styles
45

Duchanoy, Carlos A., Hiram Calvo, and Marco A. Moreno-Armendáriz. "ASAMS: An Adaptive Sequential Sampling and Automatic Model Selection for Artificial Intelligence Surrogate Modeling." Sensors 20, no. 18 (2020): 5332. http://dx.doi.org/10.3390/s20185332.

Full text
Abstract:
Surrogate Modeling (SM) is often used to reduce the computational burden of time-consuming system simulations. Continuous advances in Artificial Intelligence (AI) and the spread of embedded sensors have led to the creation of Digital Twins (DT), Design Mining (DM), and Soft Sensors (SS). These methodologies represent a new challenge for the generation of surrogate models, since they require elaborate artificial intelligence algorithms while minimizing the number of physical experiments performed. To reduce the number of assessments of a physical system, several adaptive sequential sampling methodologies have been developed; however, they are for the most part limited to Kriging models and Kriging-model-based Monte Carlo simulation. In this paper, we integrate a distinct adaptive sampling methodology into an automated machine learning (AutoML) methodology to help with model selection while minimizing system evaluations and maximizing performance for surrogate models based on artificial intelligence algorithms. In each iteration, the framework uses a grid search algorithm to determine the best candidate models and performs leave-one-out cross-validation to calculate the performance at each sampled point. A Voronoi diagram is applied to partition the sampling region into local cells, and the Voronoi vertices are considered as new candidate points. The performance at the sampled points is used to estimate the model's accuracy at a set of candidate points, in order to select those that will most improve the model's accuracy. The number of candidate models is then reduced. Finally, the performance of the framework is tested on two examples to demonstrate the applicability of the proposed method.
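The candidate-model scoring idea (grid search over candidate surrogates scored by leave-one-out cross-validation) can be sketched as follows; the 1-D polynomial candidates are an assumption for illustration, not the AI surrogates used in the paper:

```python
import numpy as np

def loo_error(model_fit, X, y):
    """Leave-one-out cross-validation: refit on all-but-one sample and
    average the squared error at the held-out points."""
    errs = []
    for i in range(len(y)):
        mask = np.arange(len(y)) != i
        predict = model_fit(X[mask], y[mask])
        errs.append((predict(X[i]).item() - y[i]) ** 2)
    return float(np.mean(errs))

def poly_fit(degree):
    """Factory for simple 1-D polynomial surrogate candidates."""
    def fit(X, y):
        coeffs = np.polyfit(X.ravel(), y, degree)
        return lambda x: np.polyval(coeffs, x)
    return fit

def select_model(degrees, X, y):
    """Grid search over candidate surrogates: keep the lowest LOO error."""
    return min(degrees, key=lambda d: loo_error(poly_fit(d), X, y))
```

The same scoring loop would apply to any model family; only `poly_fit` needs replacing with the surrogate of interest.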
APA, Harvard, Vancouver, ISO, and other styles
46

Obi, E. D., J. A. Yentumi, O. O. Ajayi ., O. I. Omotuyi, and S. B. Ashimolowo. "SkyNet For Drugs: A Novel User-Centric Approach to Drug Discovery." Advances in Multidisciplinary & Scientific Research Journal Publication 12, no. 4 (2024): 19–36. http://dx.doi.org/10.22624/aims/digital/v12n4p3.

Full text
Abstract:
Traditional drug discovery is an expensive and laborious multi-step process that requires a detailed understanding of disease pathobiology, potential drug target characterization, synthesis, experimental evaluation, and optimization of putative drug candidates as a pretext for clinical evaluation which often does not translate into success. With the advent of whole genome sequencing, machine learning, and artificial intelligence, drug discovery and development are now enhanced both in speed and precision. SkyNet For Drugs (Skynet4D) is an AI-driven platform that automates the preclinical processes by integrating drug-target retrieval based on patient's diagnostic medical reports, and chemical database/ADMETox screening leading to the selection of a potential drug candidate. Skynet4D framework utilizes a combination of Natural Language Processing (NLP-GenAI), custom molecular docking tools (DiffDock®-GNINA®) for high-throughput virtual screening (HTVS), and an AI-enabled ADMETox (ADMETAi®) tool to generate drug candidates with a high chance of being effective within clinical settings. In this paper, the proof-of-concept for Skynet4D was presented with a medical diagnostic report of a COVID-19 patient; leading to the prediction of Severe Acute Respiratory Syndrome Coronavirus 2 (SARS‑CoV‑2) main protease, betaCoV_Nsp5_Mpro as putative targets. The target sequences retrieval, 3D structures retrieval/generation, putative ligand search (COCONUT, LOTUS, ChEMBL, etc.), and ranking were done autonomously within the Skynet4D pipeline. DiffDock docking confidence score of ≥ 1 and a GNINA binding affinity score of ≤ -6.0 kcal/mol, signaled suitable ligands for selection within the DiffDock®-GNINA®. The best-ranked compounds were filtered with ADMETAi® which ultimately gave Lansoprazole as the candidate compound. 
Keywords: Drug discovery, Computer-aided drug discovery, Structure-based drug design, Skynet4D, High-throughput virtual screening, Patient-centric drug discovery
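The threshold-based ligand shortlisting quoted in the abstract (DiffDock confidence score of at least 1 and GNINA binding affinity of at most -6.0 kcal/mol) amounts to a simple filter; the dictionary keys below are hypothetical, not the Skynet4D data model:

```python
def shortlist_ligands(candidates):
    """Keep only docking results that pass both cutoffs quoted above:
    DiffDock confidence >= 1 and GNINA affinity <= -6.0 kcal/mol."""
    return [c for c in candidates
            if c["confidence"] >= 1 and c["affinity"] <= -6.0]
```

Compounds passing this filter would then proceed to ADMETox screening, as described in the pipeline.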
APA, Harvard, Vancouver, ISO, and other styles
47

Alzaqebah, Malek, Sana Jawarneh, Rami Mustafa A. Mohammad, et al. "Hybrid feature selection method based on particle swarm optimization and adaptive local search method." International Journal of Electrical and Computer Engineering (IJECE) 11, no. 3 (2021): 2414. http://dx.doi.org/10.11591/ijece.v11i3.pp2414-2422.

Full text
Abstract:
Machine learning has been extensively examined, with data classification the most popularly researched subject. Prediction accuracy is affected by the data provided to the classification algorithm. Meanwhile, utilizing a large amount of data may incur costs, especially in data collection and preprocessing. Studies on feature selection have mainly aimed to establish techniques that decrease the number of features (attributes) used in classification while still relying on data that yield accurate predictions. Hence, a particle swarm optimization (PSO) algorithm is suggested in the current article for selecting the ideal set of features. The PSO algorithm has proved superior in different domains at exploring the search space, while local search algorithms are good at exploiting search regions. Thus, we propose hybridizing the PSO algorithm with an adaptive local search technique that works based on the current PSO search state and is used for accepting candidate solutions. This combination balances local intensification with global diversification of the search process. The suggested algorithm surpasses the original PSO algorithm and other comparable approaches in terms of performance.
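A heavily simplified sketch of binary PSO for feature selection with a local-search step might look as follows. This is illustrative only: the paper's adaptive acceptance criterion based on the PSO search state is reduced here to a greedy bit-flip hill climb, and the scoring function is a placeholder for classifier accuracy:

```python
import random

def fitness(mask, score_fn):
    """Score a feature subset; a small penalty prefers fewer features."""
    if not any(mask):
        return 0.0
    return score_fn(mask) - 0.01 * sum(mask)

def binary_pso(n_features, score_fn, n_particles=10, iters=30, seed=0):
    """Minimal binary PSO for feature selection (a sketch, not the
    hybrid algorithm of the paper)."""
    rnd = random.Random(seed)
    swarm = [[rnd.random() < 0.5 for _ in range(n_features)]
             for _ in range(n_particles)]
    best = max(swarm, key=lambda m: fitness(m, score_fn))
    for _ in range(iters):
        for i, mask in enumerate(swarm):
            # "Velocity" step: pull each bit toward the global best
            # with some probability, otherwise keep the particle's bit.
            swarm[i] = [b if rnd.random() < 0.7 else m
                        for m, b in zip(mask, best)]
            if fitness(swarm[i], score_fn) > fitness(best, score_fn):
                best = list(swarm[i])
        # Local search: greedily flip each bit of the incumbent best.
        for j in range(n_features):
            trial = list(best)
            trial[j] = not trial[j]
            if fitness(trial, score_fn) > fitness(best, score_fn):
                best = trial
    return best
```

In a real setting `score_fn` would train and evaluate a classifier on the selected features, and the acceptance of local-search moves would adapt to the swarm's search state as the paper describes.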
APA, Harvard, Vancouver, ISO, and other styles
48

Charlet, Jean, and Licong Cui. "Knowledge Representation and Management: 2023 Highlights and the Rise of Knowledge Graph Embeddings." Yearbook of Medical Informatics 33, no. 01 (2024): 223–26. https://doi.org/10.1055/s-0044-1800748.

Full text
Abstract:
Summary. Objectives: We aim to identify, select, and summarize the best papers published in 2023 for the Knowledge Representation and Management (KRM) section of the International Medical Informatics Association (IMIA) Yearbook. Methods: We performed PubMed queries and adhered to the IMIA Yearbook guidelines for conducting a biomedical informatics literature review to select the best papers in KRM published in 2023. Results: Our search yielded a total of 1,666 publications from PubMed. From these, we identified 15 papers as potential candidates for the best papers, and three of them were finally selected as the best papers in the KRM section. The candidate best papers covered three main topics: knowledge graphs, knowledge interoperability, and ontology. Notably, two of the three selected best papers explored the potential of knowledge graph embeddings for predicting intensive care unit readmissions and measuring disease distances, respectively. Conclusions: The selection process for the best papers in the KRM section for 2023 showcased a wide spectrum of topics, with knowledge graph embeddings emerging as a promising area for supporting machine learning applications in biomedicine.
APA, Harvard, Vancouver, ISO, and other styles
49

Dangeti, Appalaraju, Deekshi Gladiola Bynagari, and Krishnaveni Vydani. "Revolutionizing Drug Formulation: Harnessing Artificial Intelligence and Machine Learning for Enhanced Stability, Formulation Optimization, and Accelerated Development." International Journal of Pharmaceutical Sciences and Medicine 8, no. 8 (2023): 18–29. http://dx.doi.org/10.47760/ijpsm.2023.v08i08.003.

Full text
Abstract:
The integration of Artificial Intelligence (AI) and Machine Learning (ML) in the field of pharmaceutical drug formulation has sparked a paradigm shift in the way drug stability is predicted, formulations are optimized, and drug development is expedited. This review delves into the transformative impact of AI and ML techniques on pharmaceutical research and development. It highlights how predictive models driven by AI algorithms are effectively simulating drug degradation pathways and stability profiles, enabling scientists to make informed decisions during formulation design. Moreover, the utilization of ML algorithms to analyze vast datasets has led to the discovery of optimal formulations by identifying critical relationships between formulation variables, excipients, and drug properties. This approach not only reduces experimentation time and costs but also enhances the likelihood of developing robust and effective drug products. Furthermore, AI-powered drug development platforms are shortening the timeline for candidate selection, preclinical evaluations, and clinical trials, thereby accelerating the entire drug development process. This article explores the evolving landscape of AI and ML in drug formulation, discusses challenges, and anticipates future prospects in this transformative field.
APA, Harvard, Vancouver, ISO, and other styles
50

DAVYDENKO, Nina. "META-MODEL OF DESIGN OF INFORMATION TECHNOLOGY PROTOTYPE OF CLASSIFICATION OF OBJECTS BY IMAGE SHAPE USING GMDH NEURAL NETWORKS IN THE NOTATION OF A UNIFIED MODELING LANGUAGE." Herald of Khmelnytskyi National University. Technical sciences 311, no. 4 (2022): 78–81. http://dx.doi.org/10.31891/2307-5732-2022-311-4-78-81.

Full text
Abstract:
The article is devoted to the development of intelligent monitoring systems for technological processes based on machine vision. It proposes principles for the object-oriented formalization of the process of designing an information technology for classifying objects by the geometric shape of the image obtained from a machine vision system. The proposed information technology is based on machine learning and provides for the selection of the best classifier model structure from a set of candidate models. Candidate models are constructed using the group method of data handling (GMDH), based on the principles of model self-organization. The contour of the object's image obtained from the intelligent monitoring system is the input information. The input data set contains a set of morphometric parameters that describe the geometric shape of the figure formed by the contour of the object's image, as well as the label of the class to which the object belongs. The formation of the input data set is implemented in the «Image Processing» block. The decision rule for classification is built in the model synthesis block of the classification information technology. GMDH neural networks were used as the model synthesis algorithm. The best model structure is chosen by a set of criteria. The information technology for constructing the classifier model is implemented by supplementing the GMDH model synthesis block with a «Class of models» block, which selects the class of functions for building models, and a «Verification of models» block, which selects the best model structure. The meta-model of the design process was constructed using the Unified Modeling Language. The functional meta-model is represented by a use case diagram.
APA, Harvard, Vancouver, ISO, and other styles