
Journal articles on the topic 'AI-Optimized Routing'


Consult the top 50 journal articles for your research on the topic 'AI-Optimized Routing.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Press it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Wobiageri, Ndidi Abidde, Eyidia Nkechinyere, and Iyaminapu Iyoloma Collins. "Optimization of Wireless Mesh Networks for Disaster Response Communication." International Journal of Current Science Research and Review 08, no. 03 (2025): 1312–19. https://doi.org/10.5281/zenodo.15040499.

Abstract:
Wireless Mesh Networks (WMNs) have emerged as a resilient and adaptable solution for disaster response communication, offering self-healing and self-organizing capabilities that ensure uninterrupted connectivity in emergency scenarios. Traditional communication infrastructures often fail due to network congestion, power outages, and physical damage during disasters, necessitating an optimized approach for rapid and reliable data transmission. This study presents an AI-optimized WMN framework aimed at enhancing network performance by improving packet delivery ratio (PDR), reducing end-to-end delay, optimizing energy consumption, increasing network throughput, and strengthening security. Simulations conducted in MATLAB Simulink compare the performance of AI-optimized routing with conventional protocols such as AODV (Ad hoc On-Demand Distance Vector) and OLSR (Optimized Link State Routing). Results demonstrate that AI-optimized routing achieves a 15.5% higher PDR, 43% lower delay, 49% increased throughput, and 30% reduced energy consumption compared to traditional approaches. Furthermore, an AI-driven Intrusion Detection System (IDS) improves network security by increasing attack detection accuracy to 94.6% while reducing false positive rates to 5.2%. The findings highlight the significance of AI-based routing optimization in disaster scenarios, ensuring robust, energy-efficient, and secure communication for first responders and affected communities. Future research will explore hybrid AI-blockchain security mechanisms, 5G and satellite network integration, and real-world experimental validation to further enhance WMN resilience in extreme disaster conditions.
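The abstract does not name the AI technique behind its routing layer; a common choice in comparable WMN studies is reinforcement learning over link metrics. The minimal sketch below (topology, reward shaping, and hyperparameters are all invented for illustration) shows how Q-learning can steer next-hop selection toward low-delay, energy-healthy paths, the trade-off the paper's PDR/delay/energy results revolve around.

```python
import random

# Hypothetical mesh: node -> {neighbor: (delay_ms, residual_energy)}.
# Neither the topology nor the reward weights come from the paper.
LINKS = {
    "A": {"B": (12, 0.9), "C": (30, 0.7)},
    "B": {"D": (15, 0.8)},
    "C": {"D": (10, 0.4)},
    "D": {},
}
Q = {(n, nxt): 0.0 for n, nbrs in LINKS.items() for nxt in nbrs}
ALPHA, GAMMA, EPS = 0.3, 0.9, 0.2

def reward(delay_ms, energy):
    # Penalize delay, favor forwarding through energy-healthy nodes.
    return -delay_ms + 20.0 * energy

def step(node):
    nbrs = list(LINKS[node])
    if random.random() < EPS:                      # explore
        nxt = random.choice(nbrs)
    else:                                          # exploit best known hop
        nxt = max(nbrs, key=lambda n: Q[(node, n)])
    delay, energy = LINKS[node][nxt]
    future = max((Q[(nxt, n)] for n in LINKS[nxt]), default=0.0)
    Q[(node, nxt)] += ALPHA * (reward(delay, energy) + GAMMA * future - Q[(node, nxt)])
    return nxt

for _ in range(500):                               # repeated A -> D deliveries
    node = "A"
    while node != "D":
        node = step(node)

print({k: round(v, 1) for k, v in Q.items()})      # learns the A-B-D path
```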
2

Krishna Kurrapati, Vamshi. "Method and System for Determining an Optimized Cable Routing Plan Using AI and Machine Learning." International Journal of Science and Research (IJSR) 14, no. 3 (2025): 1496–501. https://doi.org/10.21275/sr25320195324.

3

Nelloru, Nageswara Rao. "AI-Orchestrated Claims Routing in Modernized Insurance Core Systems." European Journal of Computer Science and Information Technology 13, no. 47 (2025): 95–102. https://doi.org/10.37745/ejcsit.2013/vol13n4795102.

Abstract:
This article explores the transformative impact of AI-orchestrated claims routing in modernized insurance core systems, focusing on the integration of machine learning models and automated decision engines. The article examines how AI-driven systems have revolutionized traditional claims processing through enhanced triage mechanisms, sophisticated business rules integration, and predictive analytics. The article demonstrates significant improvements in claims processing efficiency, fraud detection, and resource allocation through cloud-based architectures and API-driven integration. The article highlights how automated systems have reduced processing times, improved accuracy in claim classification, and optimized adjuster workload distribution while maintaining regulatory compliance. The article also addresses the operational benefits of AI implementation, including reduced costs, enhanced customer satisfaction, and improved fraud detection capabilities, providing compelling evidence for the effectiveness of AI-driven claims management systems in modern insurance operations.
4

Gowda, Yashas R., Disha S, Shashank K, and Prasad PS. "AI-Driven Progressive Web Application for Enhancing Emergency Healthcare Decision-Making." INTERNATIONAL JOURNAL OF SCIENTIFIC RESEARCH IN ENGINEERING AND MANAGEMENT 08, no. 12 (2024): 1–3. https://doi.org/10.55041/ijsrem40129.

Abstract:
The "Emergency Lifeline: Instant Hospital Info for Critical Moments" mobile application aims to revolutionize emergency healthcare by providing real-time hospital information, medical guidance, and optimized routing during critical situations. The application integrates advanced technologies such as Artificial Intelligence (AI), Geolocation, Telemedicine, and Multilingual Support to offer a comprehensive emergency solution. By leveraging AI for symptom analysis and hospital recommendations, GPS for real- time location tracking and optimized routing, and telemedicine for remote consultations, the app ensures that users can access medical assistance quickly and effectively. The inclusion of multilingual support, along with voice recognition and text-to- speech capabilities, ensures accessibility for a diverse user base, including those with disabilities or literacy challenges. This innovative approach enables users to make informed decisions, minimize response times, and improve healthcare outcomes in emergency scenarios. The system's design focuses on user experience, data security, and collaboration with healthcare providers to ensure accurate, up-to-date information. Overall, the app aims to enhance the efficiency and accessibility of emergency healthcare, making it a vital tool for individuals facing critical health situations. Keyword - Emergency healthcare, mobile application, Artificial Intelligence (AI), Geolocation, Telemedicine, Multilingual support, Real-time hospital information, Symptom analysis, Routing optimization, Text-to-speech, Voice recognition, Healthcare accessibility, Critical moments, Medical guidance, User experience, Data security.
5

Akram, Elentably. "Using Artificial Intelligence to Reduce Carbon Emissions from the Saudi Commercial Fleet." Global Journal of Arts Humanity and Social Sciences 4, no. 12 (2024): 1040–44. https://doi.org/10.5281/zenodo.14413059.

Abstract:
The Saudi commercial fleet contributes significantly to the nation's carbon footprint. This paper explores the potential of Artificial Intelligence (AI) to optimize fleet operations and reduce associated emissions. We delve into various AI applications, including predictive maintenance, optimized routing, and intelligent speed control, highlighting their efficacy in minimizing fuel consumption and maximizing operational efficiency. The paper also discusses challenges and opportunities associated with AI implementation in the Saudi context, considering the specific requirements of the local fleet and infrastructure. Ultimately, integrating AI into fleet management systems offers a promising pathway towards achieving sustainability goals and reducing the environmental impact of commercial transport in Saudi Arabia.  
6

Begović, Muhamed, Samir Čaušević, Belma Memić, and Adisa Hasković. "AI-aided Traffic Differentiated QoS Routing and Dynamic Offloading in Distributed Fragmentation Optimized SDN-IoT." International Journal of Engineering Research and Technology 13, no. 8 (2020): 1880. http://dx.doi.org/10.37624/ijert/13.8.2020.1880-1895.

7

Almarri, Safiah, Hur Al Safwan, Shahd Al Qisoom, Soufien Gdaim, and Abdelkrim Zitouni. "Optimized Wireless Sensor Network Architecture for AI-Based Wildfire Detection in Remote Areas." Fire 8, no. 7 (2025): 245. https://doi.org/10.3390/fire8070245.

Abstract:
Wildfires are complex natural disasters that significantly impact ecosystems and human communities. The early detection and prediction of forest fire risk are necessary for effective forest management and resource protection. This paper proposes an innovative early detection system based on a wireless sensor network (WSN) composed of interconnected Arduino nodes arranged in a hybrid circular/star topology. This configuration reduces the number of required nodes by 53–55% compared to conventional Mesh 2D topologies while enhancing data collection efficiency. Each node integrates temperature/humidity sensors and uses ZigBee communication for the real-time monitoring of wildfire risk conditions. This optimized topology ensures 41–81% lower latency and 50–60% fewer hops than conventional Mesh 2D topologies. The system also integrates artificial intelligence (AI) algorithms (multiclass logistic regression) to process sensor data and predict fire risk levels with 99.97% accuracy, enabling proactive wildfire mitigation. Simulations for a 300 m radius area show the non-dense hybrid topology is the most energy-efficient, outperforming dense and Mesh 2D topologies. Additionally, the dense topology achieves the lowest packet loss rate (PLR), reducing losses by up to 80.4% compared to Mesh 2D. Adaptive routing, dynamic round-robin arbitration, vertical tier jumps, and GSM connectivity ensure reliable communication in remote areas, providing a cost-effective solution for wildfire mitigation and broader environmental monitoring.
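A minimal sketch of the classification stage follows, assuming synthetic temperature/humidity readings and illustrative risk thresholds; the paper's model is trained on real node data and reports 99.97% accuracy, which this toy example does not claim to reproduce.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic (temperature C, relative humidity %) readings; labels follow
# made-up thresholds purely for illustration (0 = low, 1 = moderate, 2 = high).
rng = np.random.default_rng(0)
temp = rng.uniform(10, 50, 600)
hum = rng.uniform(5, 90, 600)
risk = np.where((temp > 38) & (hum < 20), 2,
                np.where((temp > 30) & (hum < 40), 1, 0))

X = np.column_stack([temp, hum])
clf = LogisticRegression(max_iter=1000).fit(X, risk)  # multiclass by default

print(clf.predict([[44.0, 12.0]]))                 # hot and dry -> high risk
print(clf.predict_proba([[22.0, 70.0]]).round(3))  # cool and humid -> low risk
```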
8

Arunkumar, Paramasivan. "AI for Seamless Cross-Border Transactions A New Era for Global Card Services." International Journal of Leading Research Publication 5, no. 5 (2024): 1–11. https://doi.org/10.5281/zenodo.14551534.

Abstract:
Artificial Intelligence (AI) is revolutionizing international payment systems, paving the way for faster, more secure, and cost-effective cross-border transactions. Traditional cross-border payment systems have been plagued by delays, high transaction fees, and complex regulatory challenges, often resulting in inefficient global card services. AI technologies, such as machine learning, natural language processing, and data analytics, are now enabling seamless and optimized payment processing by automating currency conversion, detecting fraud, and optimizing payment routing in real-time. These advancements ensure a smoother, quicker experience for consumers and businesses alike, reducing operational costs and increasing security. AI’s ability to process vast amounts of data allows it to identify and mitigate fraud risks, providing enhanced protection in the international payment landscape. By addressing the limitations of traditional systems, AI is not only improving the efficiency of global card services but is also making cross-border payments more accessible and transparent, ultimately benefiting consumers worldwide. The future of cross-border transactions promises a more seamless and interconnected global economy, driven by AI’s transformative potential.
9

Singh, Sukhdeep. "Enhancing Road-Based Supply Chain Efficiency in the US: Integrating AI/ML-Based C3I Systems with IoT-Enabled Trucks." International Journal of Supply Chain Management 13, no. 4 (2024): 1–10. http://dx.doi.org/10.59160/ijscm.v13i4.6253.

Abstract:
This paper explores the integration of AI/ML-based C3I systems with IoT-enabled trucks to enhance the efficiency of road-based supply chains in the US. By examining the components and benefits of these technologies, as well as their implementation strategies and real-world applications, this paper provides a comprehensive overview of how these advancements can transform supply chain logistics. The author has been at the forefront of crafting an IoT sensor-based Fleet and Transportation Management System for one of the top 10 telematics companies. It utilizes AI and ML to provide strategic and operational insights within a logistics environment for the complete paradigm ranging from SMB to enterprise level companies. The discussion covers operational benefits such as reduced downtime and optimized routing, and strategic advantages including improved decision-making and increased resilience against disruptions. Ultimately, the goal is to highlight how these innovations can drive significant improvements in the performance and reliability of road-based supply chains, leading to lower costs and better profits.
10

Singh, Tushar. "Challenges of Last-Mile Delivery in E-Commerce Logistics." INTERNATIONAL JOURNAL OF SCIENTIFIC RESEARCH IN ENGINEERING AND MANAGEMENT 09, no. 06 (2025): 1–9. https://doi.org/10.55041/ijsrem49613.

Abstract:
The last-mile delivery segment in e-commerce logistics—the final stretch from distribution center to customer doorstep—presents critical operational challenges, especially in the Indian context. This research investigates key barriers including infrastructure gaps, rising customer expectations, and insufficient technological integration. Employing a mixed-method design, data were collected via surveys (n=356) and interviews (n=44). Statistical analysis revealed strong correlations between technology use and delivery reliability, while infrastructural issues were inversely related to customer satisfaction. The study proposes strategic solutions such as AI-driven routing, hyperlocal hubs, and public-private partnerships. Findings suggest that last-mile logistics, if optimized, can provide significant competitive advantage in India’s dynamic e-commerce market.
11

Mamoun Yaqoub Mimoun Alhammali and Abdulghader Basheer Murad Jalgum. "Enhanced Multi-path Energy-Efficient Routing with Dynamic Load Balancing for MANETs Using Bellman-Ford Algorithm." International Journal of Information Technology and Computer Engineering 12, no. 4 (2024): 243–58. https://doi.org/10.62647/ijitce.2024.v12.i4.pp243-258.

Abstract:
Mobile Ad Hoc Networks (MANETs) play a crucial role in modern communication, particularly in dynamic and infrastructure-less environments such as disaster recovery, military operations, and IoT-based applications. However, traditional routing protocols like AODV, DSR, and DSDV face significant challenges, including high energy consumption, frequent route failures, and inefficient load balancing. The Bellman-Ford algorithm, despite its robustness in shortest-path computations, lacks energy-awareness, leading to rapid node depletion and network instability. To address these limitations, this study proposes an Enhanced Multi-path Energy-Efficient Routing (EMEER) with Dynamic Load Balancing, incorporating an optimized Bellman-Ford algorithm. The key novelties of this approach include an energy-aware routing metric that considers residual energy levels, a multi-path selection strategy to enhance fault tolerance, and a dynamic load-balancing mechanism that distributes traffic to prevent congestion. These enhancements collectively improve network reliability and longevity. The proposed EMEER protocol is implemented in a simulated MANET environment using NS-3, and its performance is evaluated against existing protocols. The results demonstrate that EMEER achieves a 25.6% reduction in energy consumption, a 17.3% improvement in packet delivery ratio (PDR), and a 31.8% increase in network lifetime compared to traditional approaches. Furthermore, it significantly reduces end-to-end delay and routing overhead. By integrating energy efficiency with adaptive load balancing, this study contributes to the development of more sustainable and resilient MANET architectures. Future work will explore the integration of AI-driven optimization techniques, inspired by hybrid metaheuristic models used in edge computing, to further enhance routing adaptability.
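The sketch below shows one plausible reading of an energy-aware Bellman-Ford relaxation: each hop's cost is inflated as the downstream node's residual energy falls. The composite metric, penalty weight, and toy topology are assumptions, not EMEER's actual formulation, which additionally performs multi-path selection and dynamic load balancing.

```python
# A minimal energy-aware Bellman-Ford sketch under stated assumptions.
INF = float("inf")

def energy_aware_cost(link_cost, residual_energy, alpha=0.5):
    # Penalize routes through depleted nodes: cost grows as the downstream
    # node's residual energy (0..1) approaches zero.
    return link_cost * (1 + alpha / max(residual_energy, 1e-6))

def bellman_ford(nodes, edges, energy, src):
    dist = {n: INF for n in nodes}
    prev = {n: None for n in nodes}
    dist[src] = 0.0
    for _ in range(len(nodes) - 1):            # classic |V|-1 relaxation rounds
        for u, v, w in edges:
            c = energy_aware_cost(w, energy[v])
            if dist[u] + c < dist[v]:
                dist[v], prev[v] = dist[u] + c, u
    return dist, prev

nodes = ["s", "a", "b", "t"]
edges = [("s", "a", 1), ("s", "b", 1), ("a", "t", 1), ("b", "t", 1)]
energy = {"s": 1.0, "a": 0.9, "b": 0.2, "t": 1.0}   # node b nearly depleted
dist, prev = bellman_ford(nodes, edges, energy, "s")
print(prev["t"])  # routes via the healthier node "a"
```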
12

Alladi, Ramesh. "Programmable NLU Powered 3-in-1 Bot: Video, Voice, Text." INTERNATIONAL JOURNAL OF SCIENTIFIC RESEARCH IN ENGINEERING AND MANAGEMENT 09, no. 06 (2025): 1–9. https://doi.org/10.55041/ijsrem49760.

Abstract:
This study introduces a Programmable NLU-Powered 3-in-1 ChatBot, a conversational AI framework designed to enhance response precision and contextual relevance through the integration of multiple specialized language models. The system comprises three distinct large language models (LLMs), each optimized for specific interaction domains: general conversation, problem-solving, and technical inquiry. A fourth LLM operates as a natural language understanding (NLU) controller, responsible for analyzing user input and determining the most appropriate model to handle the query. Developed using a modular architecture, the chatbot supports real-time interaction, session-level context retention, and adjustable response creativity. By employing a model selection mechanism guided by semantic analysis, the framework ensures that responses are not only contextually aligned but also domain-appropriate. This study highlights the effectiveness of combining specialized LLMs with an intelligent routing layer to improve the adaptability and accuracy of multi-domain conversational systems. The study represents a significant step toward redefining virtual assistant capabilities by offering an engaging, multi-modal interface that enhances user interaction and information retention. Key Words: Natural Language Processing (NLP), Multi-Model ChatBot, Large Language Models (LLMs), Domain-Specific Language Models, Conversational AI.
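A toy version of the routing layer is sketched below: a lightweight controller picks the specialist model whose domain best matches the query. The keyword scoring and model stubs are placeholders; the paper uses a fourth LLM as the NLU controller rather than cue words.

```python
# Hypothetical specialist models, stubbed as functions for illustration.
SPECIALISTS = {
    "general": lambda q: f"[general model] {q}",
    "problem_solving": lambda q: f"[solver model] {q}",
    "technical": lambda q: f"[technical model] {q}",
}

# Placeholder cue words standing in for the paper's NLU controller.
DOMAIN_CUES = {
    "problem_solving": {"solve", "calculate", "optimize", "prove"},
    "technical": {"api", "error", "install", "stack", "compile"},
}

def route(query: str) -> str:
    words = set(query.lower().split())
    # Pick the domain with the most cue-word overlap; fall back to general chat.
    domain = max(DOMAIN_CUES, key=lambda d: len(words & DOMAIN_CUES[d]))
    if not words & DOMAIN_CUES[domain]:
        domain = "general"
    return SPECIALISTS[domain](query)

print(route("How do I fix this compile error in my API client?"))
```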
13

R, Suganya, Balaramesh P, Kannadhasan S, Chandrasekhar V, Arun Raj S R, and Dhilipkumar R. "Wireless Sensor Networks in Environmental Monitoring Applications Challenges and Future Directions." ITM Web of Conferences 76 (2025): 03002. https://doi.org/10.1051/itmconf/20257603002.

Abstract:
Current research on WSNs shows limitations such as high energy consumption, scalability issues, security risks and a lack of multi-environment integration. To tackle these challenges, this research proposes an enhanced, optimized WSN framework integrating blockchain-encrypted security, artificial intelligence (AI)-based adaptive routing, and hybrid edge–cloud computing. The proposed system, consisting of terrestrial, aerial and underwater WSNs, ensures high-quality integration for continuous environmental monitoring due to its seamless, energy-efficient and scalable nature. Also, long-range communication protocols (LoRaWAN & NB-IoT) are included to boost data transmission from distant regions. In this paper, we introduce a new UAV-assisted, solar-powered sensor network solution for coverage extension and operational cost reduction. This research further investigates AI-based energy-conscious data aggregation methods to reduce power utilization and enhance sensor lifespan. We perform real-world validation to show the practicality of the proposed system in monitoring air pollutants, water quality and climate parameters. As a result, the recommended framework greatly improves data reliability, scalability, and real-time decision-making capacities; it can therefore serve as an applicable model for modern environmental monitoring.
14

Yang, Jiaqi, and Oksana Kudriavtseva. "Enhancing Decision-Making Efficiency Through Production Process Diagnostics." American Journal of Financial Technology and Innovation 3, no. 1 (2025): 89–95. https://doi.org/10.54536/ajfti.v3i1.4038.

Abstract:
In response to fragmented approaches in green manufacturing research, this study proposes an integrated decision-support framework that unifies production process diagnosis, multi-resource optimization, and data-driven analytics to enhance sustainability in complex manufacturing systems. Combining theoretical modeling (e.g., dynamic resource-element networks), empirical case studies (12 cross-industry cases in automotive, electronics, and textiles), and systematic diagnostics, the research addresses inefficiencies in traditional ERP-MES-PCS architectures, where manual decision-making and disconnected data flows hinder holistic optimization. Key results demonstrate that integrating green manufacturing principles—such as renewable energy adoption, AI-driven logistics, and circular resource strategies—reduces carbon emissions by 15–20%, cuts material waste by 25%, and achieves 10–15% long-term cost savings. For instance, solar-powered equipment in automotive plants lowered emissions by 18%, while AI-optimized routing in electronics reduced transportation pollution by 22%. The framework establishes actionable benchmarks (e.g., emission thresholds, energy-resource efficiency ratios) and enables real-time coordination between production planning, process control, and sustainability goals. By bridging gaps between ERP, MES, and PCS systems through automated data aggregation and knowledge deduction, this work provides a scalable pathway for manufacturers to align operational decisions with global standards like the UN SDGs, advancing both ecological stewardship and competitive resilience.
15

Kaviarasan, S., and R. Srinivasan. "A Novel Spider Monkey Optimized Fuzzy C-Means Algorithm (SMOFCM) for Energy-Based Cluster-Head Selection in WSNs." International Journal of Electrical and Electronics Research 11, no. 1 (2023): 169–75. http://dx.doi.org/10.37391/ijeer.110124.

Abstract:
AI deployments are becoming increasingly complex and widespread, making energy efficiency in Wireless Sensor Network (WSN)-based Internet of Things (IoT) systems a highly difficult problem to solve. In energy-constrained networks, cluster-based hierarchical routing protocols are a very efficient technique for transferring data between nodes. In this paper, a novel Spider Monkey Optimized Fuzzy C-Means Algorithm (SMOFCM) is proposed to improve network lifetime and reduce energy consumption. The proposed SMOFCM technique makes use of the Fuzzy C-means clustering framework to build up the cluster formation, and the Spider Monkey Optimization technique to select the Cluster Head (CH). MATLAB was used to model the proposed SMOFCM. The proposed framework's network lifetime, number of alive nodes (NAN), energy consumption, throughput, and residual energy are compared to those of more established frameworks like LEACH, K-MEANS, DRESEP, and SMOTECP. The SMOFCM technique improves network lifetime by 11.95%, 7.59%, 4.97% and 3.83% over LEACH, K-MEANS, DRESEP, and SMOTECP, respectively. According to experimental findings, the proposed SMOFCM technique outperforms the existing models.
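The sketch below illustrates the two stages under stated assumptions: a standard Fuzzy C-Means update forms the clusters, and a greedy energy/centrality score stands in for the Spider Monkey Optimization step when picking each cluster head. Node positions, energies, and weights are synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)
pos = rng.uniform(0, 100, (30, 2))      # sensor node coordinates
energy = rng.uniform(0.2, 1.0, 30)      # residual energy per node
K, m = 3, 2.0                           # clusters, fuzzifier

# Fuzzy C-Means cluster formation (standard membership/center updates).
U = rng.dirichlet(np.ones(K), size=30)  # membership matrix, rows sum to 1
for _ in range(50):
    um = U ** m
    centers = (um.T @ pos) / um.sum(axis=0)[:, None]
    d = np.linalg.norm(pos[:, None, :] - centers[None, :, :], axis=2) + 1e-9
    U = 1.0 / (d ** (2 / (m - 1)) * (1.0 / d ** (2 / (m - 1))).sum(axis=1, keepdims=True))

labels = U.argmax(axis=1)

# Cluster-head selection: the paper tunes this with Spider Monkey Optimization;
# as a simple stand-in, pick per cluster the node with the best
# residual-energy / distance-to-centroid trade-off.
for k in range(K):
    members = np.where(labels == k)[0]
    score = energy[members] - 0.01 * np.linalg.norm(pos[members] - centers[k], axis=1)
    print(f"cluster {k}: head = node {members[score.argmax()]}")
```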
16

Favour Uche Ojika, Wilfred Oseremen Owobu, Olumese Anthony Abieba, Oluwafunmilayo Janet Esan, Bright Chibunna Ubamadu, and Andrew Ifesinachi Daraojimba. "AI-Enhanced Knowledge Management Systems: A Framework for Improving Enterprise Search and Workflow Automation through NLP and TensorFlow." Computer Science & IT Research Journal 6, no. 3 (2025): 201–30. https://doi.org/10.51594/csitrj.v6i3.1884.

Abstract:
In the era of digital transformation, organizations are increasingly adopting artificial intelligence (AI) to enhance knowledge management systems (KMS) and gain a competitive edge. This paper proposes a novel framework for AI-enhanced knowledge management that leverages Natural Language Processing (NLP) and TensorFlow to improve enterprise search capabilities and workflow automation. Traditional KMS often struggle with unstructured data, inefficient information retrieval, and fragmented workflows, leading to reduced productivity and decision-making inefficiencies. By integrating advanced NLP algorithms with TensorFlow’s scalable machine learning capabilities, the proposed framework addresses these challenges through intelligent content classification, semantic search, and automated knowledge extraction. The framework begins with data ingestion from diverse sources, including emails, reports, and databases, which are processed using NLP techniques such as named entity recognition, sentiment analysis, and topic modeling. TensorFlow models are then employed to train and fine-tune neural networks for document classification and intent recognition, enabling contextual understanding and prioritization of enterprise content. The system supports a dynamic knowledge graph that interlinks related concepts, documents, and workflows, facilitating real-time, query-responsive search and content recommendation. Moreover, the framework incorporates workflow automation by integrating AI models that identify repetitive tasks and suggest optimized processes using predictive analytics. This reduces manual effort, enhances task routing, and supports intelligent alerts and decision support mechanisms. A case study in a mid-sized enterprise demonstrates a 35% improvement in knowledge retrieval time and a 28% reduction in workflow execution delays after implementation. The proposed AI-enhanced KMS offers a scalable, adaptive solution for managing organizational knowledge in real-time, thus supporting knowledge workers with timely, relevant, and context-aware insights. It emphasizes the role of NLP for linguistic comprehension and TensorFlow for deep learning-based model optimization, providing a robust foundation for future enterprise intelligence systems. The research contributes to the growing field of AI in enterprise settings, highlighting the potential of integrated technologies to redefine knowledge access and operational efficiency. Keywords: Artificial Intelligence, Knowledge Management Systems, Enterprise Search, Workflow Automation, Natural Language Processing, TensorFlow, Semantic Search, Knowledge Graph, Machine Learning, Information Retrieval.
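As a scale model of the classification component, the sketch below trains a tiny TensorFlow/Keras text classifier on three made-up enterprise documents. The labels, vocabulary size, and architecture are illustrative only, not the paper's framework, which adds knowledge graphs, intent recognition, and workflow automation on top.

```python
import tensorflow as tf

# Three toy documents with invented department labels.
docs = tf.constant([
    "quarterly revenue report attached",
    "server outage ticket escalation",
    "new hire onboarding checklist",
])
labels = tf.constant([0, 1, 2])  # 0=finance, 1=it_support, 2=hr

vectorize = tf.keras.layers.TextVectorization(max_tokens=5000, output_sequence_length=16)
vectorize.adapt(docs)

model = tf.keras.Sequential([
    vectorize,                                     # raw strings -> token ids
    tf.keras.layers.Embedding(5000, 32),
    tf.keras.layers.GlobalAveragePooling1D(),      # simple bag-of-embeddings
    tf.keras.layers.Dense(3, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.fit(docs, labels, epochs=30, verbose=0)

print(model.predict(tf.constant(["disk failure on mail server"])).argmax())
```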
17

Li, Limiao, Fangfang Gou, Huiyun Long, Keke He, and Jia Wu. "Effective Data Optimization and Evaluation Based on Social Communication with AI-Assisted in Opportunistic Social Networks." Wireless Communications and Mobile Computing 2022 (July 1, 2022): 1–14. http://dx.doi.org/10.1155/2022/4879557.

Abstract:
Billions of people around the world send and receive data over online networks daily. Sufficient and redundant data are transmitted over social platforms with AI assistance in 5G networks. In opportunistic social networks, the main challenge faced by traditional methods is that numerous user nodes participate in data transmission, causing a lot of message copy redundancy and node cache consumption. As a result, the transmission delay of the algorithm is high, the node energy consumption is too large, and even information is lost. To solve these problems, this study establishes an artificial intelligence-based optimization multiple evaluation method. The main purpose of this method is to avoid information loss caused by data loss when reducing data noise, reasonably select communication nodes in opportunistic social network scenarios, optimize data transmission performance, and avoid network congestion. Moreover, our method can effectively identify and exclude potential malicious nodes, reducing the situation that packets are intercepted and discarded. The experiment confirms that the optimized transmission evaluation scheme can effectively reduce routing overheads and energy consumption of a user node, improve the delivery ratio of node data transmission, and ensure the reliability and security of data transmission.
18

Bragatto, Tommaso, Mohammad Ghoreishi, Francesca Santori, et al. "Moving Towards Electrified Waste Management Fleet: State of the Art and Future Trends." Energies 18, no. 8 (2025): 1992. https://doi.org/10.3390/en18081992.

Abstract:
Efficient waste management remains critical to achieving sustainable urban development, addressing challenges related to resource conservation, environmental preservation, and carbon emissions reduction. This review synthesizes advancements in waste management technologies, focusing on three transformative areas: optimization techniques, the integration of electric vehicles (EVs), and the adoption of smart technologies. Optimization methodologies, such as vehicle routing problems (VRPs) and dynamic scheduling, have demonstrated significant improvements in operational efficiency and emissions reduction. The integration of EVs has emerged as a sustainable alternative to traditional diesel fleets, reducing greenhouse gas emissions while addressing infrastructure and economic challenges. Additionally, the application of smart technologies, including Internet of Things (IoT), artificial intelligence (AI), and the Geographic Information System (GIS), has revolutionized waste monitoring and decision-making, enhancing the alignment of waste systems with circular economy principles. Despite these advancements, barriers such as high costs, technological complexities, and geographic disparities persist, necessitating scalable, inclusive solutions. This review highlights the need for interdisciplinary research, policy standardization, and global collaboration to overcome these challenges. The findings provide actionable insights for policymakers, municipalities, and businesses, enabling data-driven decision-making, optimized waste collection, and enhanced sustainability strategies in modern waste management systems.
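To make the VRP structure concrete, here is a minimal capacitated-routing sketch using a nearest-neighbor heuristic. The depot, bin demands, and capacity are invented, and production systems (including those this review surveys) use far stronger solvers plus EV-specific constraints such as range and charging windows.

```python
from math import dist

DEPOT = (0.0, 0.0)
# stop -> ((x, y), demand in kg of waste); values are illustrative only.
STOPS = {"b1": ((2, 3), 40), "b2": ((5, 1), 70), "b3": ((6, 5), 30), "b4": ((1, 6), 60)}
CAPACITY = 100  # kg per trip before the vehicle must return to the depot

def plan_routes():
    remaining, routes = dict(STOPS), []
    while remaining:
        load, here, route = 0, DEPOT, []
        while True:
            feasible = {k: v for k, v in remaining.items() if load + v[1] <= CAPACITY}
            if not feasible:
                break
            k = min(feasible, key=lambda k: dist(here, feasible[k][0]))  # nearest stop
            route.append(k)
            load += remaining[k][1]
            here = remaining.pop(k)[0]
        routes.append(route)
    return routes

print(plan_routes())  # -> [['b1', 'b4'], ['b2', 'b3']] for this data
```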
19

Parikh, Raj, and Khushi Sandip Parikh. "Mathematical Foundations of AI-Based Secure Physical Design Verification." Indian Journal of VLSI Design 5, no. 1 (2025): 1–7. https://doi.org/10.54105/ijvlsid.a1230.05010325.

Abstract:
Concerns about hardware security are raised by the increasing dependence on third-party Semiconductor Intellectual Property in system-on-chip design, especially during physical design verification. Traditional rule-based verification methods, such as Design Rule Checking (DRC) and Layout vs. Schematic (LVS) checking, together with side-channel analysis, have shown apparent deficiencies in dealing with new forms of threat. The impossibility of distinguishing dependable from malicious insertions in ICs makes it hard to prevent such dangers as hardware Trojans (HTs); side-channel vulnerabilities remain everywhere, and modifications at various stages of the manufacturing process can be hard to detect. This thesis addresses these security challenges by defining a theoretical AI-driven framework for secure physical design verification that couples graph neural network models (GNNs) and probabilistic modeling with constraints optimized to maximize IC security. This approach views physical design verification as graph-based machine learning: GNNs identify unauthorized modifications or discrepancies between the layout and circuit netlist through the acquisition of behavioral metrics and structural feature extraction of netlist data. A probabilistic DRC model is derived after processing some learning data using recurrent algorithms. This model departs from the rigid rules of traditional deterministic DRC in that it uses machine learning-based predictions to estimate the likelihood that design rules will be violated. Also, we model the mathematical foundations of secure routing as a constrained pathfinding problem in which moves are optimized to avoid sources of security problems, such as crosstalk-induced leakage and electromagnetic side-channel threats. Lagrange multipliers and Karush-Kuhn-Tucker (KKT) conditions are included in verification to maintain security constraints while ensuring efficient use of resources. Then, HT detection is reformulated as GNN-based node embeddings, whose information propagation throughout the circuit graph picks up modifications at boundary nodes and those less deep in the structure. As an alternative to experience-based anomaly detection proposed in earlier work, a theoretical softmax-based anomaly classification framework is put forward here to model HT insertion probabilities, gathering acceptable anomalies at various levels of circuit design from RTL-level to Gate-level as necessary. The capturing of side-channel signals becomes the focus of a deep learning-based theoretical run-time anomaly detection model, aiming at power and electromagnetic (EM) leakage patterns so that all potential threats can be detected early on. This theoretical framework provides a conceptual methodology for scalable, automated, and robust security verification in modern ICs through graph-based learning and constrained optimization methods. It lays a foundation to advance secure semiconductor designs further using AI-driven techniques without recourse to benchmarks or empirical validations.
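The "secure routing as constrained pathfinding" idea can be miniaturized as below: edge cost combines wirelength with a leakage-risk term weighted by a Lagrange-style multiplier, so raising the multiplier steers the route away from risky segments. The grid, risk values, and weighting are illustrative, not the paper's formulation.

```python
import heapq

# node -> [(neighbor, wirelength, leakage_risk)]; all values invented.
EDGES = {
    "s": [("a", 1.0, 0.9), ("b", 1.5, 0.1)],
    "a": [("t", 1.0, 0.8)],
    "b": [("t", 1.2, 0.1)],
    "t": [],
}

def secure_route(src, dst, lam):
    # Uniform-cost search on the penalized cost: length + lam * risk.
    pq, seen = [(0.0, src, [src])], set()
    while pq:
        cost, node, path = heapq.heappop(pq)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, length, risk in EDGES[node]:
            heapq.heappush(pq, (cost + length + lam * risk, nxt, path + [nxt]))
    return None

print(secure_route("s", "t", lam=0.0))  # shortest wirelength: s-a-t
print(secure_route("s", "t", lam=5.0))  # risk-penalized: switches to s-b-t
```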
20

Deepak Prasad Baloni. "Restructuring Communication Architecture: Enhancing Vertical And Cross-Functional Efficiency In Psus." European Economic Letters (EEL) 15, no. 2 (2025): 4401–14. https://doi.org/10.52783/eel.v15i2.3284.

Abstract:
Indian Public Sector Undertakings (PSUs) have historically suffered from archaic, rigid hierarchical modes of communication, which have caused inefficiencies, reduced agility, and decision-making lags within the organization. These systems obstruct vertical (both top-down and bottom-up) and cross-functional interdepartmental communication, which in turn reduces organizational productivity and responsiveness. This research proposes an optimized communication framework which restructures workflows for PSUs and employs digital interfaces, decentralized communication nodes, and digital platforms to improve information dissemination and access at all levels within the organization. This study utilized a mixed-method methodology consisting of structured interviews with senior PSU officials alongside an employee survey and process audit within three representative PSUs. The study presents a new layered model with real-time dashboards, AI message routing, collaborative interfacing, and other tools designed to remove siloed interdepartmental frameworks. The findings highlighted measurable enhancement in communication clarity, diminished information bottlenecks, heightened synergy, and improved inter-departmental collaboration. The model is also capable of sustaining feedback loops that foster accountability and transparency. This digitally driven communication framework is suggested as a transformative solution for PSUs pursuing operational excellence and sustained competitive advantage. The research supports policy-level innovations toward responsive governance and recommends scalable digital solutions aligned with future-ready governance.
21

Izmaylov, Maxim K. "Artificial intelligence for optimized routine administrative tasks: Opportunities, challenges and prospects." Вестник Пермского университета Серия «Экономика» = Perm University Herald ECONOMY 19, no. 4 (2024): 395–408. https://doi.org/10.17072/1994-9960-2024-4-395-408.

Abstract:
Introduction. Artificial intelligence (AI) is a field of computer science aimed at creating systems to perform tasks that require intellectual abilities. The digital economy combined with AI creates new opportunities, including the optimization of routine administrative tasks executed by the heads of modern commercial organizations. This establishes the relevance of the present research. Purpose. The purpose is to study trends in AI development for optimized routine administrative tasks and outline perspective areas for its further development. Methods. The study refers to general traditional scientific methods: deduction, analysis, systematization. Results. The author explores the trends in the development of artificial intelligence, and reviews and describes the technologies and methods of its use in management. The use of AI technologies in Russian enterprises is also examined. Conclusions. The study reveals that the introduction of AI technologies to deal with routine administrative tasks provides an opportunity to simplify and accelerate the work of employees and improve production efficiency. This is leading many Russian companies to start using AI in their activities or to further integrate it into the management process.
22

Apple, Jim, Paul Chang, Aran Clauson, et al. "Green Driver: AI in a Microcosm." Proceedings of the AAAI Conference on Artificial Intelligence 25, no. 1 (2011): 1311–16. http://dx.doi.org/10.1609/aaai.v25i1.7798.

Abstract:
The Green Driver app is a dynamic routing application for GPS-enabled smartphones. Green Driver combines client GPS data with real-time traffic light information provided by cities to determine optimal routes in response to driver route requests. Routes are optimized with respect to travel time, with the intention of saving the driver both time and fuel, and rerouting can occur if warranted. During a routing session, client phones communicate with a centralized server that both collects GPS data and processes route requests. All relevant data are anonymized and saved to databases for analysis; statistics are calculated from the aggregate data and fed back to the routing engine to improve future routing. Analyses can also be performed to discern driver trends: where do drivers tend to go, how long do they stay, when and where does traffic congestion occur, and so on. The system uses a number of techniques from the field of artificial intelligence. We apply a variant of A* search for solving the stochastic shortest path problem in order to find optimal driving routes through a network of roads given light-status information. We also use dynamic programming and hidden Markov models to determine the progress of a driver through a network of roads from GPS data and light-status data. The Green Driver system is currently deployed for testing in Eugene, Oregon, and is scheduled for large-scale deployment in Portland, Oregon, in Spring 2011.
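A simplified flavor of the routing core is sketched below: A* over a toy road graph where each intersection contributes an expected wait derived from light status, and the heuristic (free-flow straight-line time) stays admissible because waits are non-negative. Coordinates, speeds, and the wait model are invented; the paper solves a richer stochastic shortest-path variant.

```python
import heapq
from math import dist

COORDS = {"s": (0, 0), "x": (1, 0), "y": (0, 1), "t": (1, 1)}
ROADS = {"s": ["x", "y"], "x": ["t"], "y": ["t"], "t": []}
EXPECTED_WAIT = {"x": 25.0, "y": 2.0, "t": 0.0}   # seconds, from light status
SPEED = 0.1                                       # distance units per second

def heuristic(n):
    # Admissible lower bound: free-flow straight-line travel time, no waits.
    return dist(COORDS[n], COORDS["t"]) / SPEED

def a_star(src="s", dst="t"):
    pq, best = [(heuristic(src), 0.0, src, [src])], {}
    while pq:
        f, g, node, path = heapq.heappop(pq)
        if node == dst:
            return g, path
        if best.get(node, float("inf")) <= g:
            continue
        best[node] = g
        for nxt in ROADS[node]:
            g2 = g + dist(COORDS[node], COORDS[nxt]) / SPEED + EXPECTED_WAIT[nxt]
            heapq.heappush(pq, (g2 + heuristic(nxt), g2, nxt, path + [nxt]))
    return None

print(a_star())  # (22.0, ['s', 'y', 't']): same distance, far less red-light wait
```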
23

George, A. Shaji, T. Baskar, P. Balaji Srikaanth, and Digvijay Pandey. "Innovative Traffic Management for Enhanced Cybersecurity in Modern Network Environments." Partners Universal International Research Journal (PUIRJ) 03, no. 04 (2024): 1–13. https://doi.org/10.5281/zenodo.14480018.

Abstract:
As enterprise networks evolve to support new paradigms like cloud computing and mobile access, the traditional classification of traffic flows into north-south (client-server) and east-west (server-server) is no longer adequate. The proliferation of virtualization, microservices and distributed applications has led to explosive growth in lateral east-west traffic, which now accounts for over 75% of data center flows. If the infrastructure is not built properly, this extreme change exposes networks to higher cybersecurity threats. This paper analyzes modern data center and business network designs in-depth, examining traffic patterns and newly developing attack routes. Using real-world case studies and network simulation, we demonstrate how flat L2 network fabrics lead to excessive broadcast traffic, DHCP exhaustion, MAC table overflows and lack of segmentation - all factors that can be exploited in cyber-attacks. Comparative analysis shows that legacy network designs optimized for north-south traffic fall short in securing dense east-west flows. To address these vulnerabilities, we explore innovative traffic management approaches like hierarchical L3 fabrics which provide logical segmentation, routing controls and bandwidth optimization. Novel data plane detection and response technologies can also enforce identity and security policy for lateral traffic. Using quantified metrics like latency, throughput, and attack success rates, we showcase the significant security and performance gains of proposed techniques over traditional solutions. Additionally, we examine upcoming paradigms like intent-based networking and zero trust architectures which offer integrated visibility, micro-segmentation, and granular policy control across modern hybrid environments. With extensive simulation modeling, our research demonstrates up to 2x improvement in detecting and containing threats with these emerging approaches. We also highlight additional innovations around encryption, AI-based analytics and smart network adaptability needed to future-proof security as traffic patterns continue evolving. Finally, this work provides specific understanding of contemporary network traffic properties, their security consequences, quantitative comparison of present against proposed remedies, and ideas for creative management approaches. Organizations can achieve strong cybersecurity for changing enterprise traffic by radically reevaluating L2/L3 data center fabrics, leveraging new data plane capabilities, and implementing developing intent-based secure network concepts.
24

Haferlach, Claudia, Siegfried Hänselmann, Wencke Walter, et al. "Artificial Intelligence Substantially Supports Chromosome Banding Analysis Maintaining Its Strengths in Hematologic Diagnostics Even in the Era of Newer Technologies." Blood 136, Supplement 1 (2020): 47–48. http://dx.doi.org/10.1182/blood-2020-137463.

Abstract:
Background: Chromosome banding analysis (CBA) is one of the most important techniques in diagnostics and prognostication in hematologic neoplasms. CBA is still a challenging method with very labor-intensive wet lab processes and karyotyping that requires highly skilled and experienced specialists for tumor cytogenetics. Short turnaround times (TAT) are becoming increasingly important to enable genetics-based treatment stratification at diagnosis. Aim: Improve TAT and quality of CBA by automated wet lab processes and AI-based algorithms for automatic karyotyping. Methods: In the last 15 years the CBA workflow has gradually been automated with focus on the wet lab and metaphase capturing processes. Now, a retrospective unselected digital data set of 100,000 manually arranged karyograms (KG) with normal karyotype (NKG) from routine diagnostics was used to train a deep neural network (DNN) classifier to automatically determine the class/number and orientation of the respective chromosomes (AI based classifier normal, AI-CN). With a total of 6 Mio parameters, the DNN uses two distinct output layers to simultaneously predict the chromosome number (24 classes) and the angle that is required to rotate the chromosome in its correct, vertical position (360 classes). Training of the DNN took 16 days on a Nvidia RTX 2080 Ti graphic card with 4352 cores. AI-CN was implemented into the routine workflow (including ISO 15189) after 7 months of development and intensive testing. Results: The AI-CN was tested by highly experienced staff in an independent prospective validation set of 500 NKG: 22,675/23,000 chromosomes (98.6%) were correctly assigned by AI-CN. In 369/500 (73.8%) of cells all chromosomes were correctly assigned, in an additional 20% only 2 chromosomes were interchanged. The chromosomes accounting for the majority of misclassifications were chromosomes 14 and 15 as well as 4 and 5, which are difficult to distinguish in poor quality metaphases also for humans. The 1st AI-CN was implemented into routine diagnostics in August 2019 and the 2nd AI-CN - optimized for chromosome orientation - was used since November 2019. Since then more than 17,500 cases have been processed with AI-CN (>350,000 metaphases) in routine diagnostics resulting in the following benefits: 1) Reduced working time: an experienced cytogeneticist needs - depending on chromosome quality - between 1 and 3 minutes to arrange a KG, while AI-CN needs only 1 second and the cytogeneticist about 30 seconds to review the KG. 2) Shorter TAT: The proportion of cases reported within 5 days increased from 30% before AI-CN (2019) to 36% with AI-CN1 (2019) and 45% with AI-CN2 (2019/2020), while the proportion of cases reported >7 days was reduced to 28%, 21%, and 17%, respectively (figure). Using AI-CN for aberrant karyotypes results in correct assignment of normal chromosomes and thus also correct KG in cases with solely numerical chromosome abnormalities. Derivative chromosomes derived from structural abnormalities (SA) that differ clearly from any normal chromosome are not automatically assigned but are left out for manual classification. Thus, even in cases with SA, using AI-CN saves time. To allow AI based SA assignment, two additional classifiers normal/aberrant (CNA) were built: AI-CNA1 was trained on 54,634 KG encompassing 10 different SA (AKG) and 100,000 NKG and AI-CNA2 was trained on all AKG and an equal number of NKG. First validation tests are promising and optimization is ongoing.
Once the CNA has been optimized, a standardized high quality of chromosome aberration detection is feasible. A fully automated separation of chromosomes is currently in progress and will reduce the TAT by another 12-24 hours. In a fully automated workflow the detection of small subclones can be further optimized by increasing today's standard of 20 metaphases to several hundred, even without any delay in TAT and need for additional personnel. Conclusions: Implementation of AI in CBA substantially improves the quality of results and shortens turnaround times even in comparison to highly trained and experienced cytogeneticists. In the majority of cases a complete karyotype analysis can be guaranteed within 3 to 7 days, allowing CBA based treatment strategies at diagnosis. This fully automated workflow can be implemented worldwide, is rapidly scalable, can be performed cloud based and will in the near future require fewer experienced tumor cytogeneticists.
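The dual-output architecture can be sketched in a few lines of Keras: one shared backbone feeding a 24-way chromosome-class head and a 360-way rotation head, mirroring the two output layers described above. The input size, backbone, and training setup below are placeholders, not the authors' 6-million-parameter network.

```python
import tensorflow as tf

inp = tf.keras.Input(shape=(64, 64, 1))            # grayscale chromosome crop
x = tf.keras.layers.Conv2D(16, 3, activation="relu")(inp)
x = tf.keras.layers.MaxPooling2D()(x)
x = tf.keras.layers.Conv2D(32, 3, activation="relu")(x)
x = tf.keras.layers.GlobalAveragePooling2D()(x)

# Two heads trained jointly from one shared feature extractor.
chromosome = tf.keras.layers.Dense(24, activation="softmax", name="chromosome")(x)
angle = tf.keras.layers.Dense(360, activation="softmax", name="angle")(x)

model = tf.keras.Model(inp, [chromosome, angle])
model.compile(
    optimizer="adam",
    loss={"chromosome": "sparse_categorical_crossentropy",
          "angle": "sparse_categorical_crossentropy"},
)
model.summary()
```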
25

Ramachandran, P. "HR’s Perspective Towards the Artificial Intelligence in the Selected Sector of Tamil Nadu." INTERNATIONAL JOURNAL OF SCIENTIFIC RESEARCH IN ENGINEERING AND MANAGEMENT 09, no. 04 (2025): 1–9. https://doi.org/10.55041/ijsrem46617.

Abstract:
Artificial Intelligence (AI) is revolutionizing Human Resource Management (HRM) by improving operational efficiency and decision-making accuracy. AI technologies, which include high-speed computation, vast data sets, and advanced algorithms, enable HR departments to automate routine tasks, optimize workflows, and enhance employee experiences. AI applications in HRM span recruitment, payroll management, talent acquisition, employee engagement, and data-driven decision-making. By automating administrative processes, AI frees HR professionals to focus on strategic activities that add organizational value. Additionally, AI-driven analytics provide insights into workforce trends, facilitating data-informed decisions and helping organizations plan their workforce more effectively. In recruitment, AI tools streamline candidate screening, reduce bias, and accelerate hiring, while personalized learning and development programs enhance employee growth. As AI continues to advance, its integration into HRM offers significant competitive advantages, making it an indispensable tool for modern HR functions. Organizations leveraging AI in HR can achieve optimized cost efficiency, improve employee retention, and drive organizational success in the digital age. Keywords: Artificial Intelligence, Human Resource Management, Recruitment, Talent Acquisition, Payroll Management, Employee Engagement, Data-Driven Decision Making, AI Analytics, Automation, Workforce Planning, Learning and Development, Strategic HR Functions, Digital Transformation
26

Mucherla, Sampath, and Sachin More. "Artificial Intelligence in ERP: Unlocking New Horizons in Supply Chain Forecasting and Resource Optimization." International Journal of Supply Chain Management 10, no. 1 (2025): 1–10. https://doi.org/10.47604/ijscm.3234.

Abstract:
Purpose: This article aims to evaluate the potential of Artificial Intelligence (AI) in ERP systems to enhance supply chain forecasting and resource optimization. It explores how AI can improve forecast accuracy, automate routine tasks, and optimize resource allocation within ERP systems. By analyzing the benefits and challenges of AI integration, the document provides insights for organizations seeking to leverage AI for enhanced supply chain management. Methodology: The methodology employed in the document involves both qualitative and quantitative research techniques. Quantitative data, such as forecasting accuracy, inventory turnover, and cost, is gathered from real-time ERP system data logs to measure ERP performance before and after AI implementation. Qualitative data is collected through interviews with ERP system administrators and supply chain managers to gain insights on user experience and challenges faced in the supply chain function and during AI integration. The research also discusses the selection of suitable machine learning models and their implementation methodology, including data preprocessing, training, and testing phases. Performance metrics, such as Mean Absolute Percentage Error (MAPE), are used to assess the improvements achieved through AI integration. Findings: The study found that AI integration in ERP systems significantly improved forecasting accuracy by 20%. This was attributed to AI's ability to analyze vast amounts of data and identify patterns that traditional ERP systems cannot surface without significant manual work. Inventory turnover ratio increased by 33%, indicating faster movement of stock and reduced holding costs. This was due to AI's improved demand forecasting and real-time inventory adjustments. Operational costs were reduced by 15% due to automation of routine tasks, optimized resource allocation, and minimized waste in production and logistics. Unique Contribution to Theory, Practice and Policy: The research supports existing literature and case studies, confirming AI's potential to revolutionize ERP systems and supply chain management. The findings support existing literature on the potential of AI in supply chain management, specifically in forecasting and resource optimization. The research demonstrates the tangible benefits of AI integration, such as improved forecasting accuracy, optimized resource allocation, and reduced operational costs. The discussion on potential challenges, such as data security and algorithmic bias, helps organizations anticipate and address these issues proactively. The findings can inform government policies and industry regulations related to AI adoption in ERP systems and supply chain management. The emphasis on addressing algorithmic bias and data security concerns encourages responsible and ethical AI implementation. The research highlights the transformative potential of AI, encouraging businesses and policymakers to invest in AI-driven solutions for enhanced supply chain resilience and competitiveness.
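Since MAPE is the cited accuracy metric, a small worked example may help; the demand figures below are invented purely to show how a forecast-accuracy improvement registers in the metric.

```python
import numpy as np

def mape(actual, forecast):
    """Mean Absolute Percentage Error, the accuracy metric cited in the study."""
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    return 100.0 * np.mean(np.abs((actual - forecast) / actual))

# Illustrative demand figures only: a baseline forecast vs. an AI-assisted one.
actual = [120, 135, 150, 160, 170]
erp_only = [100, 110, 160, 140, 200]
with_ai = [118, 130, 148, 163, 168]

print(f"ERP-only MAPE: {mape(actual, erp_only):.1f}%")     # ~14.4%
print(f"AI-assisted MAPE: {mape(actual, with_ai):.1f}%")   # ~2.0%
```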
27

Wang, Weihan. "Artificial Intelligence in Strategic Business Decisions: Enhancing Market Competitiveness." Advances in Economics, Management and Political Sciences 117, no. 1 (2024): 87–93. http://dx.doi.org/10.54254/2754-1169/117/20241987.

Abstract:
Integrating Artificial Intelligence (AI) into strategic decision-making is essential for enhancing market competitiveness in today's dynamic business environment. AI technologies such as machine learning, natural language processing (NLP), and predictive analytics optimize operations, personalize customer experiences, and drive product innovation. Machine learning algorithms analyze vast data to uncover patterns, aiding better decision-making. Predictive analytics forecasts market trends and consumer behaviors, allowing companies to anticipate demand and streamline supply chains, reducing risks like overproduction and stockouts. NLP-powered chatbots improve customer interactions by handling routine inquiries, freeing human agents for complex issues, and enabling personalized marketing. AI also accelerates product development by analyzing market data and consumer feedback, simulating scenarios, and predicting outcomes. Operational efficiency is enhanced through automation and optimized workflows, saving costs and increasing productivity. Despite these benefits, challenges such as data privacy, algorithmic bias, significant investment, and a shift to a data-driven culture must be managed. Effective AI integration offers significant competitive advantages, positioning companies to leverage predictive analytics, personalized customer interactions, and operational efficiency for growth and innovation.
APA, Harvard, Vancouver, ISO, and other styles
29

Anagnostopoulou, Afroditi, Dimitrios Tolikas, Evangelos Spyrou, Attila Akac, and Vassilios Kappatos. "The Analysis and AI Simulation of Passenger Flows in an Airport Terminal: A Decision-Making Tool." Sustainability 16, no. 3 (2024): 1346. http://dx.doi.org/10.3390/su16031346.

Full text
Abstract:
In this paper, a decision-making tool is proposed that can utilize different strategies to deal with passenger flows in airport terminals. A simulation model has been developed to investigate these strategies, which can be updated and modified based on the current requirements of an airport terminal. The proposed tool could help airport managers and relevant decision makers proactively mitigate potential risks and evaluate crowd management strategies. The aim is to eliminate risk factors due to overcrowding and minimize passenger waiting times within the terminal to provide a seamless, safe and satisfying travel experience. Overcrowding in certain areas of the terminal makes it difficult for passengers to move freely and increases the risk of accidents (especially in the event of an emergency), security problems and service interruptions. In addition, long queues can lead to frustration among passengers and increase potential conflicts or stress-related incidents. Based on the derived results, the optimized routing of passengers using modern technological solutions is the most promising crowd management strategy for a sample airport that can handle 800 passengers per hour.
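Strategy comparisons of this kind rest on discrete-event simulation of passenger queues. Below is a minimal sketch in that spirit, assuming Poisson arrivals at the cited load of 800 passengers per hour; the number of service desks, mean service time, and horizon are invented parameters, not the paper's model:

```python
import random

def simulate_waits(arrival_rate_per_hr=800, servers=10,
                   mean_service_min=0.6, horizon_min=120, seed=7):
    """Single FIFO queue with multiple servers; returns mean wait in minutes."""
    random.seed(seed)
    t, waits = 0.0, []
    free_at = [0.0] * servers  # time at which each desk becomes free
    while t < horizon_min:
        t += random.expovariate(arrival_rate_per_hr / 60.0)  # next arrival
        i = min(range(servers), key=lambda k: free_at[k])    # earliest-free desk
        start = max(t, free_at[i])
        waits.append(start - t)
        free_at[i] = start + random.expovariate(1.0 / mean_service_min)
    return sum(waits) / len(waits)

print(f"mean wait: {simulate_waits():.2f} min")
```

Re-running with different desk counts or routing rules is how such a model lets managers compare crowd-management strategies before deploying them.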
APA, Harvard, Vancouver, ISO, and other styles
30

Liu, Xiaoyang. "Navigating the AI Energy Challenge: A Sociotechnical Framework and Strategic Solutions for Sustainable Artificial Intelligence." SHS Web of Conferences 218 (2025): 01025. https://doi.org/10.1051/shsconf/202521801025.

Full text
Abstract:
Artificial intelligence is at the intersection of innovation and escalating energy demands. This paper addresses the AI energy paradox through an integrated sociotechnical framework that combines technological architectures, organizational practices, and adaptive governance. Comprehensive case analyses reveal critical leverage points where targeted interventions boost performance while significantly reducing energy consumption. Our findings challenge conventional views of inherent efficiency–performance trade-offs, showing that these limitations largely stem from outdated design choices. We propose a balanced strategy: deploy mid-scale models for routine, high-efficiency tasks (e.g., dataset processing and rapid document summarization) and reserve high-capacity models with advanced reasoning for complex challenges. By aligning optimized hardware architectures with strategic policy measures, our approach offers considerable economic, operational, and environmental benefits. Furthermore, our analysis highlights an urgent need for innovative, energy-conscious AI development strategies. This roadmap empowers researchers, practitioners, and policymakers to harness AI’s transformative potential while ensuring ethical and sustainable development for current and future generations.
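The proposed strategy, routing routine work to mid-scale models and reserving high-capacity models for complex reasoning, can be made concrete with a tiny dispatcher. A hedged sketch follows; the model names, the `estimate_complexity` heuristic, and the threshold are all invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class Model:
    name: str
    joules_per_token: float  # illustrative energy cost

SMALL = Model("mid-scale-summarizer", joules_per_token=0.2)
LARGE = Model("high-capacity-reasoner", joules_per_token=3.0)

def estimate_complexity(task: str) -> float:
    """Toy heuristic: long prompts with reasoning keywords score higher."""
    keywords = ("prove", "derive", "multi-step", "plan", "debug")
    score = min(len(task) / 2000, 1.0)
    score += 0.5 * sum(k in task.lower() for k in keywords)
    return min(score, 1.0)

def route(task: str, threshold: float = 0.5) -> Model:
    return LARGE if estimate_complexity(task) >= threshold else SMALL

print(route("Summarize this dataset of 300 support tickets.").name)
# -> mid-scale-summarizer (routine task stays on the cheaper model)
```

The energy saving comes from the asymmetry in per-token cost: most traffic is routine, so most tokens are served by the cheaper model.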
APA, Harvard, Vancouver, ISO, and other styles
31

M, Ms Pushpalatha. "Virtual Banker at your Fingertips Using Chatbot System." INTERANTIONAL JOURNAL OF SCIENTIFIC RESEARCH IN ENGINEERING AND MANAGEMENT 09, no. 01 (2025): 1–9. https://doi.org/10.55041/ijsrem40744.

Full text
Abstract:
The Virtual Banker system, through AI-powered chatbots, real-time announcements, and video call support, ensures a smooth and interactive banking experience for its users. Improvements in this system include a responsive user interface, integration of AI and APIs to provide dynamically interactive responses, and secure backend connectivity to fetch data in real time. Further features include dynamic announcement management, live chat escalation, and analytics-driven improvements. With scalability, optimized performance, and robust security, the system offers a comprehensive digital banking solution. The platform also features interactive, AI-augmented learning tools, including AI chatbots, word-definition features, and content-understanding mechanisms, allowing users to engage with complex concepts effectively; integrated quizzes, Q&A sessions, and other testing formats reinforce learning and let candidates gauge their understanding of key topics. This paper examines the implementation and impact of chatbot systems in the banking sector, highlighting their ability to provide 24/7 customer service, streamline routine banking operations, and enhance user experiences. Key benefits include cost efficiency for banks, reduced wait times for customers, and improved operational scalability. However, challenges such as data security concerns, customer resistance to AI, and the need for continuous technological upgrades remain significant barriers to widespread adoption. Index Terms - Virtual banker, chatbot systems, artificial intelligence (AI) in banking, natural language processing (NLP), financial technology (FinTech), customer service automation, banking operations, transaction efficiency, digital banking solutions, machine learning in banking, AI-powered financial assistance, banking security and privacy, user experience in banking, predictive analytics in FinTech, and voice-enabled banking chatbots
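The live-escalation behavior described above amounts to intent classification plus a confidence gate. A minimal sketch of such a gate; the intents, keywords, and threshold are illustrative assumptions, not the paper's implementation:

```python
INTENT_KEYWORDS = {
    "balance": ["balance", "account summary"],
    "card_block": ["lost card", "stolen", "block my card"],
    "announcements": ["rates", "holiday hours", "new feature"],
}

def classify(message: str):
    """Return (intent, confidence) from naive keyword matching."""
    msg = message.lower()
    hits = {intent: sum(k in msg for k in kws)
            for intent, kws in INTENT_KEYWORDS.items()}
    best = max(hits, key=hits.get)
    total = sum(hits.values())
    return (best, hits[best] / total) if total else (None, 0.0)

def route(message: str, threshold: float = 0.6) -> str:
    intent, conf = classify(message)
    if intent is None or conf < threshold:
        return "escalate_to_live_agent"  # low confidence -> human takes over
    return f"bot_handles:{intent}"

print(route("Someone stole my wallet, block my card!"))  # bot_handles:card_block
print(route("I want to dispute a mortgage clause"))      # escalate_to_live_agent
```

Production systems replace the keyword matcher with an NLP classifier, but the escalation gate works the same way.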
APA, Harvard, Vancouver, ISO, and other styles
32

Yadav, Gaurav, and Mohammad Ubaidullah Bokhari. "Significance of Artificial Intelligence in Mental Health Detection for Overall Development." International Journal for Research in Applied Science and Engineering Technology 11, no. 8 (2023): 1461–66. http://dx.doi.org/10.22214/ijraset.2023.55390.

Full text
Abstract:
Abstract: With the rapid development of human civilization, demands on the human brain are increasing at the same fast pace, which is leading to mental health issues among groups ranging from young students to working professionals. Mental health receives far less attention in the lives of today's generation than physical health. One concern of which very few are aware is that mental health problems can lead directly to several physical problems and terminal diseases. Such conditions can adversely affect a person's normal life in various ways, disrupting daily routine, lowering the standard of living, and straining relationships. Early-stage detection is critically important, but it is often delayed by mishandling or ignorance of the situation, which is where artificial intelligence can play a vital role. In today's era, much can be predicted using complex mathematical models, and such prediction can be greatly accelerated when integrated with computer science and programming concepts. Numerous studies in artificial intelligence and machine learning target early-stage detection of mental health issues with optimized accuracy, which can lead directly to the early treatment required for the patient concerned, with the help of a counsellor or a psychiatrist if needed. This study reviews the implementation of AI for early detection of mental health issues, focusing on AI and its subfields, such as machine learning, for early-stage detection with optimized accuracy. For a nation like India, which is on its way to becoming a developed nation, implementing AI-based technology will accelerate the development phase and support overall sustainable development.
APA, Harvard, Vancouver, ISO, and other styles
33

Hu, Weiwei, Yulong Liu, Jian Dong, et al. "Evaluation of a Machine Learning Model Based on Laboratory Parameters for the Prediction of Influenza A and B in Chongqing, China: Multicenter Model Development and Validation Study." Journal of Medical Internet Research 27 (May 15, 2025): e67847. https://doi.org/10.2196/67847.

Full text
Abstract:
Background Influenza viruses are major pathogens responsible for acute respiratory infections in humans, which present with symptoms such as fever, cough, sore throat, muscle pain, and fatigue. While molecular diagnostics remain the gold standard, their limited accessibility in resource-poor settings underscores the need for rapid, cost-effective alternatives. Routine blood parameters offer promising predictive value but lack integration into intelligent diagnostic systems for influenza subtyping. Objective This study aimed to develop a machine learning model using 24 routine blood parameters to predict influenza A and B infections and validate a deployable diagnostic tool for low-resource clinical settings. Methods In this multicenter retrospective study, 6628 adult patients (internal cohort: n=2951; external validation: n=3677) diagnosed with influenza A virus infection (A+ group), influenza B virus infection (B+ group), or those presenting with influenza-like symptoms but testing negative for both viruses (A–/B– group) were enrolled from 3 hospitals between January 2023 and May 2024. The CatBoost (CATB) algorithm was optimized via 5-fold cross-validation and random grid search using 24 routine blood parameters. Model performance was evaluated using metrics such as the area under the curve (AUC), accuracy, specificity, sensitivity, positive predictive value, negative predictive value, and F1-score across internal testing and external validation cohorts, with Shapley additive explanations analysis identifying key biomarkers. The Artificial Intelligence Prediction of Influenza A and B (AI-Lab) tool was subsequently developed on the basis of the best-performing model. Results In the internal testing cohort, 7 models (K-nearest neighbors, naïve Bayes, decision tree, random forest, extreme gradient boosting, gradient-boosting decision tree, and CatBoost) were evaluated. The AUC values for diagnosing influenza A ranged from 0.788 to 0.923, and those for influenza B from 0.672 to 0.863. The CATB-based AI-Lab model achieved superior performance in influenza A detection (AUC 0.923, 95% CI 0.897-0.947) and influenza B (AUC 0.863, 95% CI 0.814-0.911), significantly outperforming conventional models (K-nearest neighbors, RF, and XGBoost; all P<.001). During external validation, AI-Lab demonstrated high performance, achieving an accuracy of 0.913 for differentiating the A+ group from the A–/B– group and 0.939 for distinguishing the B+ group from the A–/B– group. Conclusions The CATB-based AI-Lab tool demonstrated high diagnostic accuracy for influenza A and B subtyping using routine laboratory data, achieving performance comparable to rapid antigen testing. By enabling timely subtype differentiation without specialized equipment, this system addresses critical gaps in managing influenza outbreaks, particularly in resource-constrained regions.
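The modeling recipe described, CatBoost tuned by random grid search under 5-fold cross-validation and judged by AUC, looks roughly like the following sketch. The feature matrix, labels, and parameter grid are synthetic placeholders; this is the generic technique, not the study's code:

```python
import numpy as np
from catboost import CatBoostClassifier
from sklearn.model_selection import StratifiedKFold, ParameterSampler
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 24))    # 24 routine blood parameters (synthetic)
y = rng.integers(0, 2, size=600)  # 1 = influenza A positive (synthetic)

grid = {"depth": [4, 6, 8], "learning_rate": [0.03, 0.1], "iterations": [200, 500]}
best_auc, best_params = -1.0, None
for params in ParameterSampler(grid, n_iter=5, random_state=0):
    aucs = []
    for tr, te in StratifiedKFold(5, shuffle=True, random_state=0).split(X, y):
        model = CatBoostClassifier(**params, verbose=False, random_seed=0)
        model.fit(X[tr], y[tr])
        aucs.append(roc_auc_score(y[te], model.predict_proba(X[te])[:, 1]))
    if np.mean(aucs) > best_auc:
        best_auc, best_params = float(np.mean(aucs)), params

print(f"best CV AUC={best_auc:.3f} with {best_params}")
```

On real cohorts, the winning configuration would then be refit on all training data and locked before external validation, as the abstract describes.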
APA, Harvard, Vancouver, ISO, and other styles
34

Salib, Christian, Swati Bhardwaj, Shafinaz Hussein, et al. "Digital AI in Hematology - Integration of the Scopio Labs x100 Scanner with Newly Implemented AI Capabilities into Routine Clinical Workflow." Blood 138, Supplement 1 (2021): 4932. http://dx.doi.org/10.1182/blood-2021-148821.

Full text
Abstract:
Abstract BACKGROUND Digital pathology and artificial intelligence (AI) are areas of growing interest in pathology. A number of institutions have already integrated digital imaging into routine workflow, relying on AI algorithms for the detection of various cancers and for mitotic activity quantification. Despite the use of whole slide imaging (WSI) for tissue evaluation, the field of hematology has lagged behind. While many hospitals rely on limited technologies for automated peripheral blood evaluation (e.g., Cellavision™), the Scopio Labs™ X100 digital scanner provides high-resolution, oil-immersion-level dynamic images of large scanned areas (https://scopiolabs.com/hematology/). With recent FDA clearance and newly implemented AI capabilities, the Scopio Labs scanner allows for clear and accurate cytomorphologic characterization and cell quantification for peripheral blood smears (PBS). To this end, we aimed to be one of the few pioneering institutions in the United States to adopt this technology early and implement it into our routine workflow as a 'hub and spoke' model for optimized case assessment, data sharing, and result reporting across multiple satellite locations within our hospital health system. DESIGN A Scopio X100 digital scanner was deployed at our main hospital site, with a secondary scanner anticipated for installation at a satellite laboratory. PBS flagged for hematopathologist review from two satellite laboratories were scanned, and full-field digitalized slides were evaluated by hematopathologists following AI automated analyses. RESULTS 311 peripheral smears have been scanned since April 2021, and representative slides were digitalized at 100x magnification (Figure 1, weblink: https://demo.scopiolabs.com/#/view_scan/9231acaf-f898-4649-950d-a41c26c2baaa) with rapid monolayer, monolayer, full-field, and full-field cytopenia scan options available. The automated AI capabilities classified cells into lineage-specific categories with quantification based on cytomorphologic features (Figure 2). Other AI features include additional cell assignment; cell annotation and comments accessible to all users; finalized report PDF generation, export, and upload into our current PowerPath™ software with linkage to the corresponding flow cytometry and bone marrow biopsy reports; and the ability to share digitalized slides with clinicians, laboratory personnel, and trainees using uniquely generated weblinks. Images can be used for lectures and tumor boards. Additionally, an 80-case study set for PBS was created for medical student, resident, and fellow teaching purposes, including cases displaying acute B-cell lymphoblastic leukemia (B-ALL), acute myelomonocytic leukemia (AMML), hypersegmented neutrophils in COVID-19(+) patients, myelodysplastic syndrome (MDS), atypical lymphocytes, hemoglobinopathies, platelet disorders, and various lymphomas. Overall improvements were made to the following areas: CLINICAL WORK/DIAGNOSIS 1. Time-saving due to pre-categorization of cells into lineage-specific groups for pathologist review 2. Minimizes subjectivity in cell counting and cellularity assessment EDUCATION 1. Case-based collection with flow and molecular data maintained 2. Efficient case retrieval with retained annotations/comments for teaching purposes 3. Wide array of digitalized images for hematology atlas and publications ARCHIVING 1. Collection of reference images (intra/inter-departmental) for an array of morphological entities for clinical reference and refined diagnosis (e.g., Bethesda reference images for pap by ASC) 2. Digital catalogue for long-term case follow-up and retrospective review CONCLUSION The Scopio Labs X100 digital system provides an efficient and cost-effective web-based tool to streamline clinical workflow and enhance PBS evaluation. With its recent AI capabilities of cell quantification, lineage assignment, and report generation, we aim to continue our efforts to fully integrate Scopio Labs into our routine daily clinical workflow for reviewing PBS specimens. CONFLICT OF INTEREST STATEMENT The authors have nothing to disclose with regard to the submitted work. Figure 1. Disclosures: No relevant conflicts of interest to declare.
APA, Harvard, Vancouver, ISO, and other styles
35

Mahipal Reddy Yalla. "Demystifying infrastructure automation: Evolving from scripts to self-healing systems." World Journal of Advanced Research and Reviews 26, no. 2 (2025): 3682–89. https://doi.org/10.30574/wjarr.2025.26.2.1987.

Full text
Abstract:
Infrastructure automation has undergone a revolutionary transformation from rudimentary scripting tools to sophisticated AI-driven platforms, fundamentally reshaping enterprise IT operations and competitive dynamics. This evolution began with basic automation scripts in the 1990s, which offered limited coverage and required extensive maintenance, before progressing through several distinct technological epochs. The emergence of configuration management platforms between 2005 and 2012 introduced the transformative "infrastructure as code" paradigm, enabling version-controlled deployments and reducing configuration drift by over 80%. Cloud orchestration and containerization subsequently accelerated this progression, with enterprises achieving deployment time reductions exceeding 94% and dramatic improvements in operational efficiency. The integration of artificial intelligence represents the latest evolutionary stage, with AIOps platforms detecting anomalies before conventional tools and autonomously resolving routine incidents with exceptional accuracy. Beyond technical benefits, these advancements deliver substantial business value, including accelerated time-to-market, dramatically reduced operational costs, enhanced resilience, improved scalability, and optimized talent utilization. Organizations leveraging advanced automation demonstrate significantly higher profit margins, market share growth, and innovation throughput compared to traditional counterparts. As infrastructure environments continue to increase in complexity and scale, AI-driven automation has become not merely a technological advancement but a strategic business imperative essential for maintaining competitive advantages in rapidly evolving digital markets.
APA, Harvard, Vancouver, ISO, and other styles
36

Kumar, Mohan B. M. "Leveraging Artificial Intelligence for Enhancing Customer Relationship Management in Ujjivan Small Finance Bank." Journal of Research & Development 17, no. 1 (2025): 93–102. https://doi.org/10.5281/zenodo.14960476.

Full text
Abstract:
Abstract: Ujjivan Small Finance Bank has effectively integrated advanced artificial intelligence (AI) technologies, such as AI-powered chatbots, virtual assistants, and predictive analytics, to revolutionize customer relationship management (CRM), offering round-the-clock support with personalized, instant responses to customer inquiries and thereby enhancing satisfaction and engagement across diverse customer segments. The bank simultaneously utilizes AI-driven insights to understand behavioral patterns, tailor marketing strategies, and develop customized financial products that align with customer needs, further strengthening relationships and loyalty. This transformation extends to operational efficiency: the bank adopted CRMNEXT for a unified customer view and seamless service delivery, enabling faster issue resolution and more accurate targeting of solutions, and incorporated machine learning algorithms into its banking security systems to analyze transaction patterns and detect fraudulent activities in real time, safeguarding customer accounts and building trust in its services. The AI-enabled systems also automate routine banking processes such as document verification, data entry, and customer onboarding, significantly reducing human errors, expediting workflows, and freeing staff to focus on complex, high-value tasks, contributing to an overall enhanced banking experience. Furthermore, Ujjivan has successfully applied AI to optimize its credit and loan approval processes by analyzing vast datasets, including credit histories and spending patterns, to make more accurate creditworthiness assessments, improving loan disbursal timelines and offering better service to both individual and small business customers while positioning itself as a customer-first financial institution. These initiatives align with the bank's goal of leveraging cutting-edge technology to deliver exceptional services in an increasingly digital-first world, cementing its leadership in innovative banking practices. Through the seamless integration of AI across its CRM framework, Ujjivan has not only optimized its internal operations but also demonstrated a strong commitment to meeting the dynamic needs of its growing customer base by ensuring convenience, security, and personalized services, establishing a scalable model of success that combines technological innovation with a customer-centric approach and exemplifying the transformative potential of AI in driving operational excellence and fostering sustainable growth in India's competitive small finance banking landscape.
APA, Harvard, Vancouver, ISO, and other styles
37

Salto-Tellez, Manuel, Yasmine Makhlouf, Stephanie Craig, Paul O'Reilly, Perry Maxwell, and Jacqueline James. "Abstract A23: True-T – Improved prediction by holistic artificial intelligence-based quantification of T-cell response." Cancer Immunology Research 10, no. 12_Supplement (2022): A23. http://dx.doi.org/10.1158/2326-6074.tumimm22-a23.

Full text
Abstract:
Abstract Introduction - Seminal work from Galon et al [PMID: 16371631] found immunohistochemical (IHC) quantification of the immune response to be prognostic in colorectal cancer (CRC). Although numerous systems for scoring T-cell subsets have been published [reviewed in PMID: 35758208] and this type of scoring is acknowledged as a bona fide diagnostic test [PMID: 32320495], it is not commonly used in general diagnostic routine. Our group reported recently that a digital pathology (DP) approach to scoring of CD3/CD4/CD8 in more than 1,500 patients was prognostic of survival in CRC Stage II & III and predictive of chemotherapy response in CRC Stage IV [PMID: 32684627]. Hypothesis - An artificial intelligence (AI)-based approach to DP analysis of CD3/CD4/CD8 provides unbiased prognostic and predictive advantage in CRC. Materials & Methods - 3,123 whole slide images from 1,041 patient samples from 4 institutions in the context of the PathLAKE UK DP Consortium (Queen's University Belfast, University of Oxford, University of Nottingham and University Hospital Coventry and Warwick). These samples represented 4 different clones [LN10 (CD3), 4B12 (CD4), 4B11 (CD8) and SP3 (CD4)] from 3 different companies, and were scanned with the Aperio AT2 scanner. Ninety-six experiments for validation and verification were carried out with open-source AI systems following protocols and pathways described before [PMID: 35626427 & PMID: 34359723]. Results - We developed a combined CD3/CD4/CD8 AI scoring tool (True-T) which can quantify CD3/CD4/CD8-expressing cells with an accuracy ranging between 96.94 and 99.26, a sensitivity range of 79.27 to 85.22, and a specificity of 98.96 to 99.40. Using this novel AI method of immune cell classification, previous study findings were replicated across four UK centers using an independent cohort of CRC patients and antibody clones optimized for routine clinical diagnostics (p=0.002; HR: 2.21; 95% CI: 1.24-3.94 versus p=0.006; HR: 1.84; 95% CI: 1.54-2.21 in the current and former study [PMID: 32684627], respectively). We have now integrated True-T status into an easy-to-use graphical user interface (GUI), which includes key determinants of CRC prognosis from within a population-representative cohort (n>600), in order to model the impact of a high or low True-T score together with other variables of clinical significance. Conclusion - True-T shows that a holistic AI-based quantification of T-cell response potentially improves prediction of patient prognosis over other in-silico quantitative methods in CRC and can be implemented in routine diagnostics in a seamless manner with an easy-to-use GUI. Citation Format: Manuel Salto-Tellez, Yasmine Makhlouf, Stephanie Craig, Paul O'Reilly, Perry Maxwell, Jacqueline James. True-T – Improved prediction by holistic artificial intelligence-based quantification of T-cell response [abstract]. In: Proceedings of the AACR Special Conference: Tumor Immunology and Immunotherapy; 2022 Oct 21-24; Boston, MA. Philadelphia (PA): AACR; Cancer Immunol Res 2022;10(12 Suppl):Abstract nr A23.
APA, Harvard, Vancouver, ISO, and other styles
38

Sunil Kumar. "Integration of Machine Learning and Data Science for Optimized Decision-Making in Computer Applications and Engineering." Journal of Information Systems Engineering and Management 10, no. 45s (2025): 748–59. https://doi.org/10.52783/jisem.v10i45s.8990.

Full text
Abstract:
In the rapidly evolving landscape of computer applications and engineering, the integration of machine learning (ML) and data science has emerged as a transformative force in optimizing decision-making processes. This paper explores the synergetic convergence of these domains, emphasizing their potential to enhance efficiency, accuracy, and scalability in computational systems. As engineering challenges become increasingly complex, the ability to process and analyze vast, high-dimensional datasets in real-time is critical. Machine learning algorithms, when effectively harnessed through the analytical rigor of data science, enable predictive insights and adaptive systems capable of autonomous learning and continual improvement. The study investigates how ML techniques—ranging from supervised learning models like decision trees and support vector machines to unsupervised methods such as clustering and dimensionality reduction—can be applied to diverse engineering domains including structural analysis, signal processing, network optimization, and intelligent automation. Simultaneously, it assesses the role of data science workflows—comprising data acquisition, cleaning, transformation, and visualization—in providing a robust foundation for these ML models to perform optimally. Through case-driven illustrations, the paper highlights scenarios where integrated frameworks have led to significant performance enhancements, such as predictive maintenance in manufacturing, energy-efficient routing in communication networks, and adaptive control in robotics. Furthermore, the research addresses the computational and ethical challenges associated with such integrations, including data sparsity, model interpretability, and decision accountability. The need for explainable AI (XAI) is underscored, especially in critical systems where decision-making transparency is essential for regulatory and safety compliance. The paper also evaluates the effectiveness of hybrid models that combine domain-specific knowledge with data-driven learning to overcome the limitations of traditional engineering heuristics. Ultimately, the research advocates for a paradigm shift wherein machine learning and data science are not viewed as supplementary tools, but as integral components of modern engineering decision architectures. This interdisciplinary approach fosters not only technical innovation but also informed, agile, and sustainable problem-solving methodologies. By systematically unpacking the theoretical foundations and practical implications of this integration, the study contributes to the evolving discourse on intelligent systems design, offering valuable guidance for researchers, engineers, and decision-makers committed to advancing the frontiers of computational engineering.
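As one concrete instance of the integration this abstract advocates, the sketch below pairs a routine data-science step (feature assembly) with a supervised model for a predictive-maintenance decision. The sensor features, failure rule, and labels are synthetic placeholders invented for illustration:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import classification_report

rng = np.random.default_rng(1)
n = 1000
# Synthetic sensor features: vibration RMS, bearing temperature, motor load.
X = np.column_stack([
    rng.gamma(2.0, 1.0, n),    # vibration
    rng.normal(60, 8, n),      # temperature (degC)
    rng.uniform(0.2, 1.0, n),  # load fraction
])
# Failures are more likely at high vibration and temperature (toy rule).
p_fail = 1 / (1 + np.exp(-(0.8 * X[:, 0] + 0.1 * (X[:, 1] - 60) - 3)))
y = rng.random(n) < p_fail

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=1)
clf = DecisionTreeClassifier(max_depth=4, random_state=1).fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te)))
```

The same pattern, cleaned features in, validated predictions out, underlies the other applications the abstract lists, from network routing to adaptive control.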
APA, Harvard, Vancouver, ISO, and other styles
39

Islam, Naimul, Tipon Tanchangya, Kamron Naher, et al. "Revolutionizing supply chains: The role of emerging technologies in digital transformation." Financial Risk and Management Reviews 11, no. 1 (2025): 72–102. https://doi.org/10.18488/89.v11i1.4143.

Full text
Abstract:
The main objectives of the study are to provide a comprehensive overview of emerging technological solutions, their applications, and their impacts on supply chain digital transformation. This is qualitative research based on secondary data. The study identified five effective applications of the individual solutions. AI provides effective insights, demand forecasting, warehouse automation, transportation and route optimization, supplier selection and management, and predictive maintenance. Blockchain enables tracking and transparency, enhancing traceability, cutting down on counterfeiting, encouraging sustainable and ethical sourcing, and facilitating smart payments. Business intelligence ensures improved communication, expense monitoring, inventory management, tracking of key performance indicators, and optimized visualization. Data science facilitates demand prediction, route enhancement, inventory management, hazard assessment, and supplier administration. IoT enables shipment and delivery tracking, warehouse capacity monitoring, inventory management, storage condition monitoring, and routine optimization and automation. RFID is effective for warehouse management, inventory management, freight transportation, supply chain visibility, and retail management. Together, these emerging technologies promote a more integrated, adaptable, and resilient supply chain landscape, address significant challenges, and open doors to future innovations. The results suggest that by adopting these emerging technologies within the supply chain context, business executives can increase efficiency and enhance firm value.
APA, Harvard, Vancouver, ISO, and other styles
40

Anant, Krisha, Juanita Hernandez Lopez, Debbie Bennett, and Aimilia Gastounioti. "Abstract B078: Artificial-intelligence-driven breast density assessment in the transition from full-field digital mammograms to digital breast tomosynthesis." Cancer Research 84, no. 3_Supplement_1 (2024): B078. http://dx.doi.org/10.1158/1538-7445.advbc23-b078.

Full text
Abstract:
Abstract Introduction: To enhance reproducibility and robustness in mammographic density assessment, various artificial intelligence (AI) models have been proposed to automatically classify mammographic images into BI-RADS density categories. Despite their promising performance, density AI models have so far been assessed primarily in traditional full-field digital mammography (FFDM) images. Our study aims to assess the potential of AI in breast density assessment in FFDM versus the newer synthetic mammography (SM) images acquired with digital breast tomosynthesis. Methods: We retrospectively analyzed negative (BI-RADS 1 or 2) routine mammographic screening exams (Selenia or Selenia Dimensions; Hologic) acquired at sites within the Barnes-Jewish/Christian (BJC) Healthcare network in St. Louis, MO from 2015 to 2018. BI-RADS breast density assessments of radiologists were obtained from BJC's mammography reporting software (Magview 7.1). For each mammographic imaging modality, a balanced dataset of 4,000 women was selected so that there were equal numbers of women in each of the four BI-RADS density categories, and each woman had at least one mediolateral oblique (MLO) and one craniocaudal (CC) view per breast in that modality. Previously validated pre-processing steps were applied to all FFDM and SM images to standardize image orientation and intensity. Images were then split into training, validation, and test sets at ratios of 80%, 10%, and 10%, respectively, while maintaining the distribution of breast density categories and ensuring that all images of the same woman appear in only one set. Our AI model was based on the widely used ResNet50 architecture and was designed to accept a mammographic image as input and predict the BI-RADS breast density category that the image belongs to. The AI model was optimized, trained, and evaluated separately for each mammographic imaging modality. We report the AI model's predictive accuracy on the test set for each modality, for both views as well as separately for CC and MLO; accuracy differences in FFDM versus SM were assessed via bootstrapping. Results: A batch size of 32, a learning rate of 1e-6, and the Adam optimizer were chosen as the optimal hyperparameters for our AI model. Using the same hyperparameters, the AI model demonstrated substantially higher accuracy on the test set for FFDM than for SM (FFDM: accuracy = 71% ± 4.5% versus SM: accuracy = 66% ± 4.2%; p-value<0.001 for comparison). A similar conclusion held when CC and MLO views were evaluated separately (accuracy = 72% ± 4.6% versus 66% ± 4.3% for CC; accuracy = 69% ± 4.5% versus 62% ± 4.3% for MLO; p-value<0.001 for both comparisons). Conclusions: AI performance in BI-RADS breast density assessment was significantly higher on FFDM versus SM, even under the same AI model design, dataset size, and training process. Our preliminary findings suggest that further AI optimizations and adaptations may be needed as we translate AI models from FFDM to the newer SM format acquired with digital breast tomosynthesis. Citation Format: Krisha Anant, Juanita Hernandez Lopez, Debbie Bennett, Aimilia Gastounioti. Artificial-intelligence-driven breast density assessment in the transition from full-field digital mammograms to digital breast tomosynthesis [abstract]. In: Proceedings of the AACR Special Conference in Cancer Research: Advances in Breast Cancer Research; 2023 Oct 19-22; San Diego, California. Philadelphia (PA): AACR; Cancer Res 2024;84(3 Suppl_1):Abstract nr B078.
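The described classifier, a ResNet50 mapping one mammographic image to one of four BI-RADS density categories, trained with Adam at the reported 1e-6 learning rate and batch size 32, corresponds roughly to the following PyTorch sketch. Dataset wiring is omitted and the tensors are dummies; this illustrates the setup, not the authors' code:

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 4  # BI-RADS density categories A-D

model = models.resnet50(weights=None)  # architecture only; no pretraining claim
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-6)
criterion = nn.CrossEntropyLoss()

# Dummy batch standing in for preprocessed FFDM or SM images (batch size 32).
images = torch.randn(32, 3, 224, 224)
labels = torch.randint(0, NUM_CLASSES, (32,))

model.train()
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print(f"one training step, loss={loss.item():.3f}")
```

Training the identical setup once on FFDM and once on SM, then comparing test accuracy, is the modality comparison the study performs.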
APA, Harvard, Vancouver, ISO, and other styles
41

Kemothi, Swati, Santosh Singh, and Pooja Varma. "Personalized Medical Diet Recommendations for Disease Management and Improved Patient Outcomes." Seminars in Medical Writing and Education 2 (December 30, 2023): 127. https://doi.org/10.56294/mw2023127.

Full text
Abstract:
Personalized medical diets play a crucial role in disease management by tailoring diet recommendations to routine data, genetic factors, and specific medical conditions. This research introduces the Intelligent Nutcracker Optimized Effective Decision Tree (INO-EDT) model, designed to provide individualized nutritional guidance for managing chronic illnesses, particularly diabetes and heart disease. Medical files, questionnaires, wearable devices, and food journals serve as sources of patient data, which is standardized and cleaned to ensure accuracy and stability. Machine learning (ML) techniques analyze individual patient profiles to develop personalized nutrition plans that are effective, sustainable, and adaptable. The INO-EDT model incorporates a nutcracker-inspired optimization technique to enhance decision tree accuracy, fine-tuning diet recommendations based on patient-specific factors. This optimization ensures appropriate dietary interventions and enhances their efficacy in disease management. The results confirm that the INO-EDT model was highly accurate (98.40%), demonstrating its ability to generate sound, data-backed dietary advice. By optimizing personalized nutritional interventions, the INO-EDT model enables healthcare providers to offer more effective, patient-centered solutions, reducing complications connected with chronic diseases. This approach enhances patient outcomes by integrating intelligent algorithms that consider multiple health parameters to create a customized diet strategy. The results highlight the potential of AI-driven dietary recommendation systems in enhancing disease management, improving adherence to medical diet plans, and elevating overall quality of life. Future research will aim to expand the model's capabilities by integrating additional health markers for broader clinical applications.
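The paper's nutcracker-inspired optimizer is bespoke, but the general pattern, searching decision-tree hyperparameters to maximize cross-validated accuracy, can be sketched with an ordinary random search standing in for the metaheuristic. All features and labels below are synthetic, and the search is explicitly not the paper's algorithm:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(3)
X = rng.normal(size=(500, 12))                 # synthetic patient features
y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)  # toy dietary-response label

best_score, best_params = -1.0, None
for _ in range(30):  # random search as a stand-in for the nutcracker optimizer
    params = {
        "max_depth": int(rng.integers(2, 12)),
        "min_samples_leaf": int(rng.integers(1, 20)),
        "criterion": str(rng.choice(["gini", "entropy"])),
    }
    score = cross_val_score(DecisionTreeClassifier(**params, random_state=0),
                            X, y, cv=5, scoring="accuracy").mean()
    if score > best_score:
        best_score, best_params = score, params

print(f"best CV accuracy={best_score:.3f} with {best_params}")
```

A population-based metaheuristic explores the same parameter space more systematically, which is where a model like INO-EDT would claim its advantage.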
APA, Harvard, Vancouver, ISO, and other styles
42

Khan, Muhammad Navaid. "Technology Focus: Reservoir Surveillance (September 2024)." Journal of Petroleum Technology 76, no. 09 (2024): 80–81. http://dx.doi.org/10.2118/0924-0080-jpt.

Full text
Abstract:
In today’s oil and gas operations, surveillance technologies have undergone a revolution, reshaping how facilities, wells, and reservoirs are monitored. These advancements not only have increased the scale at which these technologies are deployed but also have led to an unprecedented influx of data. The sheer volume of data, however, poses a significant challenge to traditional analytical methods, overwhelming their capacity to derive actionable insights effectively. To address this challenge, the industry is rapidly advancing toward automated solutions powered by artificial-intelligence (AI)-driven analytics. These systems automate data ingestion and use machine-learning algorithms to sift through massive data sets, identifying anomalies and prioritizing actionable insights. By automating routine surveillance tasks, engineers can focus on critical actions that deliver substantial operational benefits. For example, in paper IPTC 23912, operators successfully optimized production operations by harnessing real-time field data through smart systems, effectively managing operations with complex near-critical fluids. Similarly, in paper SPE 218470, researchers proposed a novel workflow integrating virtual flowmetering and permanent downhole gauge data for pattern recognition to enhance real-time monitoring and decision-making in petroleum and geothermal industries. Nevertheless, ensuring effective surveillance of operational assets requires a strategic approach. A valuable tool in crafting such strategies is the value of information (VOI) assessment. This method systematically evaluates how acquiring specific information can influence decision-making and operational outcomes. For instance, paper SPE 215318 highlights a field operator’s systematic approach to VOI assessment, aiming to optimize daily operations and guide future development activities. In essence, while surveillance technologies have inundated operators with unprecedented data flows, advancements in automation and AI-driven analytics offer the promise of unlocking this data’s true potential. By embracing these technologies, the oil and gas industry can navigate the complexities of the modern energy landscape with greater agility, precision, and cost-effectiveness. Recommended additional reading at OnePetro: www.onepetro.org. SPE 215119 Surveillance, Analysis, and Optimization During Active Drilling Campaign by Yanfen Zhang, Chevron, et al. OTC 35413 New Opportunities in Well and Reservoir Surveillance Using Multiple Downhole Pressure Gauges in Deepwater Injector Wells by Piyush Pankaj, ExxonMobil, et al. OTC 34863 Digital Twin for Oil-Rim Management Using Early Warning System and Exception-Based Surveillance, Offshore Malaysia by M. Mahamad Amir, Petronas, et al.
APA, Harvard, Vancouver, ISO, and other styles
43

Ling, Alexander L., Joshua D. Bernstock, Erickson Torio, et al. "Abstract 3683: AI assisted MRI volumetrics improve recurrent glioblastoma patient stratification following immunotherapy treatment." Cancer Research 85, no. 8_Supplement_1 (2025): 3683. https://doi.org/10.1158/1538-7445.am2025-3683.

Full text
Abstract:
Abstract Background: MRI-based monitoring of tumor progression after treatment plays a critical role in patient care decisions and clinical trial monitoring for glioblastoma patients. However, traditional MRI assessments of glioblastoma tumor burden (i.e. RANO-based criteria) can be significantly confounded by inflammatory pseudo-progression—especially in the context of immunotherapy treatment. Methods: In an effort to improve MRI assessments of tumor burden in glioblastoma patients, we performed volumetric tumor segmentation of >1000 MRI timepoints from 41 recurrent high-grade glioma patients who underwent oncolytic herpesvirus treatment. Volumetric assessments were correlated with patient outcomes and clinical covariates to assess their utility in stratifying patient outcomes relative to RANO analysis. Furthermore, to reduce the cost and time required for tumor segmentation, we leveraged our dataset and the Brain Tumor Segmentation (BraTS) 2023 challenge dataset to train a convolutional neural network to automate glioblastoma tumor segmentation. Automated segmentation volumes were validated against manually segmented tumor volumes and optimized for high-throughput processing. Results: While routine RANO assessment of tumor progression in the months following therapy failed to separate short- and long-surviving patients, volumetric assessment enabled clear stratification of patients into short, medium, and long-surviving groups using only a few months of post-treatment imaging follow up. Furthermore, automated segmentation volumes showed a high concordance with manual segmentation volumes in most patients, with deviations between the two being largely restricted to misclassification of resection cavities. In most cases, such errors could be easily detected and corrected, enabling rapid and accurate segmentation. Conclusions: Volumetric tumor segmentation significantly improves stratification of recurrent glioblastoma patients following immunotherapy treatment versus traditional RANO classification. While such segmentation is time and skill intensive when performed manually, AI-assisted segmentation can produce accurate segmentation volumes with greatly reduced time and skill. Surgical artefacts remain challenging for automated segmentation approaches; however, we expect these limitations to be overcome as we improve our training datasets. We expect this to enable rapid and cheap volumetric analyses capable of improving glioblastoma patient monitoring and, ultimately, providing clinicians with more accurate information when making patient care decisions. Citation Format: Alexander L. Ling, Joshua D. Bernstock, Erickson Torio, Naoyuki Shono, Junfeng Liu, Ana Montalvo Landivar, Nafisa Masud, Ethan Chen, E. Antonio Chiocca. AI assisted MRI volumetrics improve recurrent glioblastoma patient stratification following immunotherapy treatment [abstract]. In: Proceedings of the American Association for Cancer Research Annual Meeting 2025; Part 1 (Regular Abstracts); 2025 Apr 25-30; Chicago, IL. Philadelphia (PA): AACR; Cancer Res 2025;85(8_Suppl_1):Abstract nr 3683.
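Once a segmentation exists, the volumetric assessment itself reduces to counting labeled voxels and scaling by voxel size. A minimal sketch of that step; the mask shape, spacing, and label convention are illustrative assumptions:

```python
import numpy as np

def tumor_volume_ml(mask: np.ndarray, spacing_mm=(1.0, 1.0, 1.0), label=1) -> float:
    """Volume of `label` in a 3D segmentation mask, in milliliters.

    spacing_mm is the voxel size (z, y, x) taken from the MRI header.
    """
    voxel_mm3 = float(np.prod(spacing_mm))
    return float((mask == label).sum()) * voxel_mm3 / 1000.0  # mm^3 -> mL

# Synthetic 3D mask with a small labeled block standing in for tumor.
mask = np.zeros((64, 128, 128), dtype=np.uint8)
mask[20:30, 40:60, 40:60] = 1
print(f"{tumor_volume_ml(mask, spacing_mm=(2.0, 1.0, 1.0)):.1f} mL")  # 8.0 mL
```

Tracking this number across serial timepoints is what allows the trajectory-based stratification the abstract reports, in contrast to RANO's two-dimensional measurements.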
APA, Harvard, Vancouver, ISO, and other styles
44

Artemov, V. A. "Innovative Approaches to Integrating Artificial Intelligence Technologies into Human Resource Management Processes." Vestnik Akademii prava i upravleniya, no. 2(83) (May 26, 2025): 101–11. https://doi.org/10.47629/2074-9201_2025_2_101_111.

Full text
Abstract:
The active implementation of artificial intelligence (AI) mechanisms in the field of personnel management helps to accelerate routine operations and make more accurate decisions. The study examines how machine learning and neural network algorithms can be used to automate recruitment, analyze personnel data, evaluate effectiveness, and organize personalized training programs. EdTech elements are becoming widespread thanks to adaptive learning trajectories and simulation capabilities that increase employee motivation and allow skills to be practiced in a virtual environment without risk to the business. The corporate sector is already demonstrating successful examples of AI integration: Rostelecom, X5 Group, and Lukoil have optimized job-closing times, reduced recruiting costs, and expanded access to relevant educational resources. The development of generative models (GPT-4 and similar) contributes to the emergence of content that is practically indistinguishable from human-made content. Analytical tools based on big data are increasingly being used to predict staff turnover, plan careers, and assess the organizational climate. The analysis draws on several theoretical approaches, including Diffusion of Innovation (Rogers), the Technology Acceptance Model (TAM), the Unified Theory of Acceptance and Use of Technology (UTAUT), the Task-Technology Fit Model (TTF), and the Theory of Planned Behavior (TPB). These models help identify key factors influencing the introduction and adoption of intelligent technologies, as well as potential barriers related to staff distrust and the complexity of processing large amounts of data. A methodology for evaluating results is considered, which includes quantitative and qualitative criteria, ROI, and social performance indicators. The study concludes that an integrated approach is needed: a combination of technological readiness, competent communication, and measures to increase confidence in AI solutions guarantees positive results in HR practices.
APA, Harvard, Vancouver, ISO, and other styles
45

Srisuwananukorn, Andrew, Giuseppe Gaetano Loscocco, Andrew T. Kuykendall, et al. "Interpretable Artificial Intelligence (AI) Differentiates Prefibrotic Primary Myelofibrosis (prePMF) from Essential Thrombocythemia (ET): A Multi-Center Study of a New Clinical Decision Support Tool." Blood 142, Supplement 1 (2023): 901. http://dx.doi.org/10.1182/blood-2023-173877.

Full text
Abstract:
Introduction - Overlapping clinical, molecular, and histopathological characteristics pose challenges in differentiating prePMF from ET. The median overall survival, however, differs significantly between prePMF and ET (11.9 vs 22.2 years, Jeryczynski, 2017). This difference in survival highlights the need to distinguish between these two myeloproliferative neoplasms (MPNs) to select disease-specific therapeutic options. This area of unmet need often requires expert assessment at high-volume academic institutions to render a definitive diagnosis. Our aim in this study is to develop and validate a biologically motivated AI algorithm to rapidly, accurately, and inexpensively diagnose prePMF and ET directly from diagnostic bone marrow (BM) biopsy digital whole-slide images (WSI). Methods - Patients with a clinical/histopathological diagnosis of prePMF or ET as determined by the International Consensus Classification of Myeloid Neoplasms were identified at the University of Florence, Italy (Florence) between 06/2007 and 05/2023 and Moffitt Cancer Center, Tampa, FL (Moffitt) between 01/2013 and 01/2022. Diagnostic H&E-stained BM biopsy slides were digitized using Aperio AT2 slide scanners (Leica Biosystems, Deer Park, IL) at each institution. The training cohort comprised 200 (100 prePMF / 100 ET) patients from Florence, and the external test cohort comprised 26 (6 prePMF / 20 ET) patients from Moffitt. In total, the resultant model was trained on 32,226 patient-derived WSI. Our chosen pretrained neural network, RetCCL, was previously trained on 32,000 diagnostic WSIs to represent a histologically informed model (Wang, 2023). BM WSI were tessellated into representative image tiles extracted at 10x magnification (302 microns per image dimension) for model training. Finally, a prediction on each patient's WSI was calculated by attention-based multiple instance learning, a method that automatically assigns a numeric weight to each image portion representing its relative importance to the classification task. Model performance was assessed using the area under the receiver operating characteristic curve (AUC). The cutoff threshold for diagnosis classification was determined by maximizing Youden's Index. For qualitative assessment, attention scores were plotted as a heatmap across the BM WSI and reviewed for morphological features by an expert hematopathologist. Custom scripts were written using our open-source AI framework, Slideflow (Dolezal, 2021). Model development was performed on the Minerva High Performance Computer at Mount Sinai Hospital. Evaluation time on a single WSI was estimated using a consumer-grade computer with an NVIDIA RTX 3080 graphics processing unit. Results - Within the training cohort, 5-fold cross-validation resulted in a mean AUC of 0.90 with a standard deviation of 0.04. A final locked model re-trained on the entire training cohort achieved an AUC of 0.90 upon evaluation of the test cohort (Figure 1). We optimized the classification threshold to balance sensitivity and specificity; the final diagnostic classification accuracy on the test cohort was 92.3%, with a sensitivity and specificity for prePMF diagnosis of 66.6% and 100%, respectively. Upon review of the slides with the highest prediction value per class, attention heatmaps highlighted the model's reliance on areas of cellular marrow without reliance on image artifacts or background (Figure 2). Using affordable consumer-grade hardware, evaluation of a previously unseen WSI was completed in approximately 6.1 seconds (4.9 for preprocessing and 1.2 for evaluation). Conclusion - We developed a novel AI model with high accuracy for distinguishing between prePMF and ET in distinct clinical cohorts. To our knowledge, this study represents the largest image-based AI study within MPNs with external validation. Our proposed model may assist clinicians in appropriately identifying patient cohorts who would benefit from disease-specific therapies or enrollment onto clinical trials. We envision that such a high-speed, low-cost algorithm may reliably distinguish prePMF from ET patients with high specificity, can be democratized to the MPN clinical community in routine practice, and can drive clinical trial accrual for biologically rational novel therapeutics.
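Attention-based multiple instance learning, as invoked above, scores every tile, softmax-normalizes the scores into attention weights, and classifies the weighted average of tile features. A compact PyTorch sketch of that mechanism follows; the feature dimension and tile count are arbitrary, and this is the generic technique, not the study's code:

```python
import torch
import torch.nn as nn

class AttentionMIL(nn.Module):
    """Attention-MIL head: a bag of tile features (N, D) -> slide-level logits."""
    def __init__(self, feat_dim=2048, hidden=256, n_classes=2):
        super().__init__()
        self.attn = nn.Sequential(nn.Linear(feat_dim, hidden), nn.Tanh(),
                                  nn.Linear(hidden, 1))
        self.classifier = nn.Linear(feat_dim, n_classes)

    def forward(self, tiles):  # tiles: (num_tiles, feat_dim)
        a = torch.softmax(self.attn(tiles), dim=0)  # (num_tiles, 1) weights
        slide_feat = (a * tiles).sum(dim=0)         # attention-weighted average
        return self.classifier(slide_feat), a.squeeze(-1)

# One bag of 500 tile features, e.g. from a frozen RetCCL-style extractor.
tiles = torch.randn(500, 2048)
logits, attention = AttentionMIL()(tiles)
print(logits.shape, attention.shape)  # torch.Size([2]) torch.Size([500])
```

The per-tile attention weights are exactly what gets rendered as the heatmaps the abstract describes, which is what makes the model's decisions reviewable by a hematopathologist.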
APA, Harvard, Vancouver, ISO, and other styles
46

Wang, Qingbo, Xinyue Wang, Jing Tao, et al. "Abstract 5914: Development of immunohistochemistry detection kits for claudin family: CLDN6, CLDN9, and CLDN18.2." Cancer Research 85, no. 8_Supplement_1 (2025): 5914. https://doi.org/10.1158/1538-7445.am2025-5914.

Full text
Abstract:
Background: Claudins are a family of integral membrane proteins that are critical for the formation of tight junctions in epithelial and endothelial cells. Aberrant expression of Claudin family members, such as CLDN6, CLDN9, and the splice variant CLDN18.2, is implicated in the progression of various cancers, making them valuable biomarkers for tumor diagnosis, prognosis, and therapeutic targeting. An antibody drug targeting CLDN18.2 was recently approved by the FDA for the treatment of advanced gastric cancer. Despite their clinical potential, routine assessment of these proteins in cancer diagnostics is limited by the lack of highly specific detection tools. We therefore developed novel immunohistochemistry (IHC) detection kits for CLDN6, CLDN9, and CLDN18.2 to facilitate their use as diagnostic and therapeutic biomarkers. Methods: We used AI-designed peptides from the CLDN6, CLDN9, and CLDN18.2 proteins conjugated with keyhole limpet hemocyanin (KLH) as immunogens to immunize BALB/c mice. The initial IHC antibody candidates were identified by ELISA and further optimized using various structurally facilitated AI approaches. The selected antibodies were evaluated by IHC in formalin-fixed paraffin-embedded (FFPE) CHO-K1 cells expressing full-length CLDN6, 9, or 18.2, or CLDN 3, 4, or 18.1. These IHC antibodies were also subjected to ELISA, Western blotting, and tumor tissue microarray IHC staining to determine their features as IVD products. Furthermore, we conducted intra-day, inter-day, and inter-operator studies to assess the consistency of IHC staining under the conditions of our established system. Results: The final IHC antibody clones obtained for CLDN6, CLDN9, and CLDN18.2 were #7, #10, and #9B3, respectively. ELISA tests showed that they all had good binding affinity. Tumor tissue microarray staining showed that these antibodies can specifically stain the target protein in paraffin samples from testicular, endometrial, gastric cancer, and other tumor tissues. The WB assay and IHC staining results confirmed their binding specificity. We established a preliminary reaction system for the IHC assay with final working concentrations of 0.25 μg/ml, 0.25 μg/ml, and 20 μg/ml for #7, #10, and #9B3, respectively. The three IHC assays showed good intra-day, inter-day, and inter-operator reproducibility under the current reaction system, demonstrating good robustness as IHC diagnostic products. Conclusions: We successfully developed IHC antibodies against CLDN6, CLDN9, and CLDN18.2. These antibodies were incorporated into IHC diagnostic kits for clinical and research applications. The IHC kits are sensitive and robust for specific detection in FFPE tissues. They hold promise in enhancing cancer diagnosis, identifying patients for targeted therapies, and improving clinical stratification among patients with Claudin-associated disorders. Citation Format: Qingbo Wang, Xinyue Wang, Jing Tao, Tongyu Xiao, Xiao Wang, Minyang Li, Xiao Lv, Wenwen Zhao, Jiayue Geng, Yixin Liu, Zhanjiao Yu, Yuanhao Li. Development of immunohistochemistry detection kits for claudin family: CLDN6, CLDN9, and CLDN18.2 [abstract]. In: Proceedings of the American Association for Cancer Research Annual Meeting 2025; Part 1 (Regular Abstracts); 2025 Apr 25-30; Chicago, IL. Philadelphia (PA): AACR; Cancer Res 2025;85(8_Suppl_1):Abstract nr 5914.
APA, Harvard, Vancouver, ISO, and other styles
47

Wu, Wei, Lauren Cech, Victor Olivas, Aubhishek Zaman, Daniel Lucas Kerr, and Trever G. Bivona. "Deep learning-based characterization of the drug tolerant persister cell state in lung cancer." JCO Global Oncology 9, Supplement_1 (2023): 141. http://dx.doi.org/10.1200/go.2023.9.supplement_1.141.

Full text
Abstract:
141 Background: Lung cancer is the leading cause of cancer-related mortality globally. Targeted therapies improve the clinical outcome of cancer treatment; however, a subpopulation of cancer cells survives initial therapy and evolves into drug tolerant persister cells (DTPs) that maintain a residual disease reservoir. Residual disease precedes acquired resistance and tumor progression; therefore, identifying and eliminating DTPs could benefit future treatment paradigms. We have shown that the Hippo pathway effector YAP1 (Yes-Associated Protein 1) is activated in oncogene-driven lung cancers when cancer cells are exposed to various targeted therapies, such as EGFR, ALK, and RAS inhibitors. YAP1 transcriptional activation during targeted therapy is characterized by increased nuclear localization and interaction with transcription factors. This activation promotes the expression of genes involved in cell survival, cellular plasticity, and metabolic reprogramming at the residual disease state. We hypothesized that detection of intrinsic or acquired persister cells may aid the development of optimized treatment regimens. In this study, we are building image-based deep learning models to identify YAP1-activation-mediated DTPs from histologically stained slides. Methods: H&E- or YAP-stained immunohistochemistry (IHC) images from clinical lung cancer and patient-derived tumor xenograft samples were collected throughout targeted therapy and were annotated with semi-automation using high-performance computing clusters. A deep learning model (U-Net algorithm) was used for image segmentation, training, validation, and testing. Results: 1638 images were annotated, and over 80,000 patches from these images for YAP-positive cells (or regions of interest) comprised the training dataset with semi-supervised automation. Subsequently, we built a customized deep learning model to detect YAP-mediated DTP cell states from whole histopathological image slides. The model achieved accuracies of 0.8238 and 0.9091 in training, and 0.8040 and 0.8949 in validation datasets, for two different annotations, respectively. On a test dataset, the model obtained 0.81 and 0.902 accuracy for the two annotations, respectively. Conclusions: We have constructed a deep learning convolutional neural network model to infer the presence of the YAP1-activation-mediated drug tolerant persister cell state prior to or during targeted treatment of lung cancer. Implementing our AI-based model into routine lung cancer care in the future could identify patient subpopulations with YAP1-activated tumors who would most benefit from receiving YAP1-targeted small molecule inhibitors.
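Segmentation with a U-Net, as used above, pairs a contracting encoder with an expanding decoder joined by skip connections. Below is a miniature PyTorch version with two scales only; channel counts and input size are arbitrary, and it sketches the architecture family rather than the authors' model:

```python
import torch
import torch.nn as nn

def block(c_in, c_out):
    return nn.Sequential(nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU())

class MiniUNet(nn.Module):
    def __init__(self, in_ch=3, n_classes=1):
        super().__init__()
        self.enc1, self.enc2 = block(in_ch, 32), block(32, 64)
        self.pool = nn.MaxPool2d(2)
        self.up = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec = block(64, 32)                 # 32 (skip) + 32 (upsampled)
        self.head = nn.Conv2d(32, n_classes, 1)  # per-pixel logits

    def forward(self, x):
        s1 = self.enc1(x)              # full-resolution features
        s2 = self.enc2(self.pool(s1))  # half-resolution features
        up = self.up(s2)               # upsample back to full resolution
        return self.head(self.dec(torch.cat([up, s1], dim=1)))

x = torch.randn(1, 3, 128, 128)  # one stained-image patch (dummy)
print(MiniUNet()(x).shape)       # torch.Size([1, 1, 128, 128])
```

The per-pixel logits are thresholded into a mask of YAP-positive regions, and training proceeds patch-by-patch exactly as the 80,000-patch dataset in the abstract suggests.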
APA, Harvard, Vancouver, ISO, and other styles
48

Li, Guangqi, Zhen Lei, Hu Liao, and Xuelei Ma. "DTx-based cardio-oncology rehabilitation for lung cancer survivors: A randomized controlled trial." Journal of Clinical Oncology 42, no. 16_suppl (2024): 12024. http://dx.doi.org/10.1200/jco.2024.42.16_suppl.12024.

Full text
Abstract:
12024 Background: Cardio-oncology rehabilitation holds significant importance for lung cancer survivors, particularly those diagnosed at an early stage requiring lobectomy, for whom peak oxygen uptake (VO2PEAK) stands out as a robust prognostic predictor. Home-based cardiac telerehabilitation serves as a substitute for traditional center-based rehabilitation, demonstrating higher participation and completion rates. Despite their potential benefits, wearable devices and mobile apps, which offer tailored video guides, real-time monitoring, individualized safety alerts, and exercise-specific feedback, have not gained widespread adoption in clinical practice. Here, we present findings on the efficacy, safety, and compliance of a 3-month cardio-oncology rehabilitation program based on digital therapeutics (DTx) for lung cancer survivors. Methods: Early-stage lung cancer survivors post-lobectomy, not requiring radiotherapy or chemotherapy, were randomly assigned to receive cardiac telerehabilitation or usual care for 5 months. For pts in the telerehabilitation group, exercise prescriptions with video guides and real-time heart rate (HR) monitoring were implemented using the R Plus Health APP, a software-as-a-medical-device approved by the CFDA for remote cardiac rehabilitation. The AI-driven prescriptions were generated based on cardiopulmonary exercise testing (CPET), modified by physiologists, and dynamically optimized according to feedback. Pts in the usual care group received routine instructions. Outcome measurements included VO2PEAK (primary outcome), FEV1, DLCO, cardiac function, safety, compliance, and scales assessing symptoms (MDASI), psychology (HADS), sleep (PSQI), fatigue (MFSI-SF), and quality of life (QLQ-C30). Results: 40 of 47 (85%) pts completed the trial (22 in the telerehabilitation group and 18 in the usual care group). Pts in the telerehabilitation group exercised an average of 3.3 times per week. The average exercise duration was 151.4 min per week, with an average effective exercise duration (time during which the required heart rate was reached) of 92.3 min per week. The average prescription compliance rate (effective exercise duration/lower limit of prescribed duration) was 101.2%. Cardiac telerehabilitation was associated with a greater average VO2PEAK improvement (3.660±3.232 vs 1.088±3.230 mL/Kg/min, p=0.023), greater alleviation of affective interference (-0.883±1.500 vs 0.208±1.222, p=0.048), larger quality-of-life improvement (16.250±23.017 vs 1.042±13.903, p=0.027), greater reductions in fatigue (-0.889±15.960 vs 1.389±12.087, p=0.024) and daytime dysfunction (-0.550±0.686 vs 0.000±0.516, p=0.015), and more anxiety relief (-0.314±0.444 vs -0.054±0.295, p=0.048) compared with usual care. Other efficacy outcomes did not show significant differences between the two groups. No exercise-related adverse events occurred during the intervention. Conclusions: DTx-based cardio-oncology rehabilitation demonstrated improvements in cardiorespiratory fitness and reductions in affective interference and anxiety among lung cancer survivors, alongside high compliance and safety. Clinical trial information: ChiCTR2200064000.
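The abstract defines prescription compliance as effective exercise duration divided by the lower limit of the prescribed duration; the minimal Python sketch below applies that definition, with a hypothetical 90-minute weekly prescription floor (the abstract reports only the resulting averages).

```python
# Prescription-compliance arithmetic as defined in the abstract:
# compliance = effective exercise duration / lower limit of prescribed duration.
# The 90-minute prescription floor is a hypothetical illustration; the study
# reports 92.3 effective min/week and an average compliance of 101.2%.

effective_minutes_per_week = 92.3   # time spent at or above the target heart rate
prescribed_lower_limit_min = 90.0   # hypothetical weekly prescription floor

compliance_rate = effective_minutes_per_week / prescribed_lower_limit_min
print(f"prescription compliance: {compliance_rate:.1%}")  # ~102.6% under these assumptions
```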
APA, Harvard, Vancouver, ISO, and other styles
49

Kaproth, M., H. Rycroft, G. Gilbert, et al. "15 EFFECT OF SEMEN THAW METHOD ON CONCEPTION RATE IN DAIRY HEIFER HERDS." Reproduction, Fertility and Development 17, no. 2 (2005): 157. http://dx.doi.org/10.1071/rdv17n2ab15.

Full text
Abstract:
Semen processed with procedures permitting a flexible thaw method is used to breed millions of cows yearly. “Pocket thawing” is widely used as an alternative to warm-water thawing with such semen. To pocket thaw, a straw is retrieved from cryostorage, immediately wrapped in a folded paper towel, and moved to a thermally protected pocket for 2 to 3 min of thawing before AI gun loading. Published field data comparing this thaw method with warm-water thawing for semen prepared to permit flexible thawing are lacking. We measured the effect of warm-water or pocket thawing on conception rate in four dairy heifer herds using semen prepared with methods previously optimized for flexible-thawing success. Semen processing (Anderson S et al. 1994 J. Dairy Sci. 77, 2302–2307) includes two-step whole-milk extension, static vapor tank freezing (0.5-mL straws), and IMV Digitcool mechanical freezing (0.25-mL straws). It is unclear which specific processing steps permit flexible thawing; these procedures were developed from breeding results in decades of field trials by professional inseminators using both pocket and warm-water thawing. Semen prepared from each of 12 sires produced equal straw units at 10 and 15 million total sperm per straw, in both 0.5- and 0.25-mL straw packages. Professional inseminators used each combination evenly over 16 months. Additional commercial semen (55% of total) from the same source was used. The thaw methods alternated weekly. The effect of thaw method on conception status, from 70-day non-return data for 11,215 services (67.6% conception rate), was estimated by a generalized linear mixed model. Neither thaw method nor total sperm per straw significantly affected conception rate (P = 0.658 and 0.769, respectively). Bull, herd, inseminator within herd, year, season, and straw size did significantly affect conception rate (P < 0.05). No interactions of thaw method with herd, sperm number, season, or straw package size were significant (P = 0.297, 0.526, 0.365, and 0.723, respectively). This suggests that if semen has been prepared with procedures specific to flexible thawing, it can be either pocket thawed or warm-water thawed within a range of herdsman or inseminator practices, seasons, or straw packaging choices. Even at 10 million, the lowest total sperm per straw, pocket thawing was as successful as warm-water thawing. As expected, we generally observe that in vitro sperm quality is maximal for rapidly thawed straws, with slower thawing yielding lower values. However, while conventional measures of in vitro semen quality improve with fast thaw rates, these measures do not appear to correspond to higher in vivo fertility for semen intentionally prepared to be flexibly thawed. We conclude that, for semen prepared with procedures that permit flexible thawing, the thaw method, whether pocket or warm-water, does not affect conception under commercial conditions with routine semen handling. We thank the herd owners and their staff, the inseminators, and Hap Allen, Ron Hunt, Gordon Nickerson, and Bryan Krick of Genex for their help and cooperation.
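The study fit a generalized linear mixed model to binary conception outcomes; as a simplified sketch of that style of analysis, the Python snippet below fits a fixed-effects logistic regression with statsmodels on synthetic placeholder data (treating herd as a fixed rather than random effect, and omitting the other covariates the authors modeled).

```python
# Fixed-effects logistic regression sketch approximating the abstract's
# analysis of 70-day non-return conception data. The original used a
# generalized linear mixed model; this simplification uses fixed effects only,
# and the data below are synthetic placeholders, not the trial's records.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "thaw": rng.choice(["pocket", "warm_water"], n),
    "herd": rng.choice(["A", "B", "C", "D"], n),
    "sperm_millions": rng.choice([10, 15], n),
})
# Simulate a ~67.6% baseline conception rate with no true thaw effect,
# mirroring the null result reported for thaw method.
df["conceived"] = (rng.random(n) < 0.676).astype(int)

model = smf.logit("conceived ~ C(thaw) + C(herd) + C(sperm_millions)", data=df).fit(disp=False)
print(model.summary().tables[1])  # the thaw coefficient should be near zero here
```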
APA, Harvard, Vancouver, ISO, and other styles
50

Abidde, Wobiageri Ndidi, Nkechinyere Eyidia, and Collins Iyaminapu Iyoloma. "Optimization of Wireless Mesh Networks for Disaster Response Communication." International Journal of Current Science Research and Review 08, no. 03 (2025). https://doi.org/10.47191/ijcsrr/v8-i3-35.

Full text
Abstract:
Wireless Mesh Networks (WMNs) have emerged as a resilient and adaptable solution for disaster response communication, offering self-healing and self-organizing capabilities that ensure uninterrupted connectivity in emergency scenarios. Traditional communication infrastructures often fail due to network congestion, power outages, and physical damage during disasters, necessitating an optimized approach for rapid and reliable data transmission. This study presents an AI-optimized WMN framework aimed at enhancing network performance by improving packet delivery ratio (PDR), reducing end-to-end delay, optimizing energy consumption, increasing network throughput, and strengthening security. Simulations conducted in MATLAB Simulink compare the performance of AI-optimized routing with conventional protocols such as AODV (Ad hoc On-Demand Distance Vector) and OLSR (Optimized Link State Routing). Results demonstrate that AI-optimized routing achieves a 15.5% higher PDR, 43% lower delay, 49% increased throughput, and 30% reduced energy consumption compared to traditional approaches. Furthermore, an AI-driven Intrusion Detection System (IDS) improves network security by increasing attack detection accuracy to 94.6% while reducing false positive rates to 5.2%. The findings highlight the significance of AI-based routing optimization in disaster scenarios, ensuring robust, energy-efficient, and secure communication for first responders and affected communities. Future research will explore hybrid AI-blockchain security mechanisms, 5G and satellite network integration, and real-world experimental validation to further enhance WMN resilience in extreme disaster conditions.
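The abstract's headline metrics (PDR, end-to-end delay, throughput) have standard definitions; the Python sketch below computes them over a hypothetical per-packet simulation log. The log format and values are assumptions for illustration, since the abstract reports only aggregate improvements.

```python
# Metric definitions behind the reported WMN results, computed over a
# hypothetical per-packet log. Field layout and values are assumptions;
# the study's simulations were run in MATLAB Simulink.
packets = [
    # (delivered, send_time_s, recv_time_s, size_bits)
    (True, 0.00, 0.04, 8000),
    (True, 0.10, 0.13, 8000),
    (False, 0.20, None, 8000),  # dropped packet
    (True, 0.30, 0.35, 8000),
]

sent = len(packets)
delivered = [p for p in packets if p[0]]

pdr = len(delivered) / sent                                          # packet delivery ratio
avg_delay = sum(r - s for _, s, r, _ in delivered) / len(delivered)  # mean end-to-end delay
duration = max(r for _, _, r, _ in delivered) - min(s for _, s, _, _ in packets)
throughput = sum(b for *_, b in delivered) / duration                # bits per second

print(f"PDR: {pdr:.1%}, mean delay: {avg_delay*1000:.0f} ms, "
      f"throughput: {throughput/1000:.1f} kbps")
```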
APA, Harvard, Vancouver, ISO, and other styles