
Journal articles on the topic 'Frameworks of FL'

Consult the top 50 journal articles for your research on the topic 'Frameworks of FL.'


1

Byrnes, Heidi. "Of frameworks and the goals of collegiate foreign language education: critical reflections." Applied Linguistics Review 3, no. 1 (2012): 1–24. http://dx.doi.org/10.1515/applirev-2012-0001.

Abstract:
The paper suggests that among the reasons collegiate foreign language (FL) programs in the United States (and most likely elsewhere) have difficulty ensuring that their students attain the upper-level multiple literacies necessary for sophisticated work with FL oral and written texts may be that the prevailing frameworks for capturing FL performance, development, and assessment are insufficient for envisioning such textually oriented learning goals. The result of this mismatch between dominant frameworks, typically associated with communicative language teaching, and the goals of literary cultural studies programs as humanities programs is that collegiate FL departments and their faculty members face serious obstacles in creating the coherent, comprehensive, and principled curricula needed to overcome what are already extraordinary challenges in an educational environment that provides little support for long-term, sustained efforts at language development toward advanced multiple literacies. The paper traces these links by examining three such frameworks in the United States: the Proficiency framework of the 1980s, based on the ACTFL oral proficiency interview; the Standards framework of the 1990s, part of a more general standards movement in U.S. education; and the most recent document, by the Modern Language Association (MLA), which focuses on the need for new curricular structures in collegiate FL education. Specifically, it provides an overview of the U.S. educational landscape with an eye toward the considerable influence such frameworks can have in the absence of a comprehensive language education policy; lays out key characteristics that would be necessary for a viable approach to collegiate FL education; probes the complex effects the three frameworks have had in collegiate FL programs; and explores how one department sought to counteract their detrimental influence in order to affirm and realize a humanistically oriented approach to FL education. The paper concludes with overall observations about the increasing power of frameworks to set educational goals and ways to counteract their potentially unwelcome consequences.
2

I., Venkata Dwaraka Srihith. "Federated Frameworks: Pioneering Secure and Decentralized Authentication Systems." Journal of Advances in Computational Intelligence Theory 7, no. 1 (2024): 31–40. https://doi.org/10.5281/zenodo.13968684.

Abstract:
Federated Learning (FL) is an innovative machine learning approach that lets multiple devices work together to train models without sharing sensitive data. By keeping data on the device, FL not only boosts privacy and security but also helps improve models collectively. Recent research has looked into how Blockchain technology could strengthen FL, tackling existing security issues. Blockchain adds a safeguard against threats like data tampering or unauthorized access, and makes systems more transparent and fairer by improving how records and rewards are managed. By blending Blockchain with FL, we get a more trusted and secure way to collaborate on machine learning, bringing both privacy and efficiency to a whole new level.
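The tamper-resistance that blockchain adds to FL, as described in this abstract, can be illustrated with a minimal hash-chained ledger of aggregated model updates. This is an illustrative sketch rather than the paper's actual protocol; the `UpdateLedger` class and its methods are hypothetical names.

```python
import hashlib
import json

def block_hash(update, prev_hash):
    """Hash a model update together with the previous block's hash."""
    payload = json.dumps({"update": update, "prev": prev_hash}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

class UpdateLedger:
    """Toy append-only ledger: each round's aggregated update is chained to
    the previous record, so tampering with any past record is detectable."""
    def __init__(self):
        self.chain = []  # list of (update, hash) pairs

    def append(self, update):
        prev = self.chain[-1][1] if self.chain else "genesis"
        self.chain.append((update, block_hash(update, prev)))

    def verify(self):
        prev = "genesis"
        for update, h in self.chain:
            if block_hash(update, prev) != h:
                return False
            prev = h
        return True

ledger = UpdateLedger()
ledger.append([0.1, -0.2])    # round-1 aggregated weights
ledger.append([0.05, -0.15])  # round-2
assert ledger.verify()
ledger.chain[0] = ([9.9, 9.9], ledger.chain[0][1])  # tamper with round 1
assert not ledger.verify()
```

Real blockchain-based FL systems add consensus and incentives on top, but the chained-hash integrity check is the core tamper-evidence mechanism.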
3

Rajendran, Suraj, Zhenxing Xu, Weishen Pan, Arnab Ghosh, and Fei Wang. "Data heterogeneity in federated learning with Electronic Health Records: Case studies of risk prediction for acute kidney injury and sepsis diseases in critical care." PLOS Digital Health 2, no. 3 (2023): e0000117. http://dx.doi.org/10.1371/journal.pdig.0000117.

Abstract:
With the wider availability of healthcare data such as Electronic Health Records (EHR), more and more data-driven approaches have been proposed to improve the quality of care delivery. Predictive modeling, which aims at building computational models for predicting clinical risk, is a popular research topic in healthcare analytics. However, concerns about the privacy of healthcare data may hinder the development of effective predictive models that are generalizable, because this often requires rich, diverse data from multiple clinical institutions. Recently, federated learning (FL) has demonstrated promise in addressing this concern. However, data heterogeneity from different local participating sites may affect the prediction performance of federated models. Because of the high prevalence of acute kidney injury (AKI) and sepsis among patients admitted to intensive care units (ICU), the early prediction of these conditions based on AI is an important topic in critical care medicine. In this study, we take AKI and sepsis onset risk prediction in the ICU as two examples to explore the impact of data heterogeneity in the FL framework and to compare performance across frameworks. We built predictive models based on local, pooled, and FL frameworks using EHR data across multiple hospitals. The local framework used only data from each site itself. The pooled framework combined data from all sites. In the FL framework, each local site did not have access to other sites' data; a model was updated locally, and its parameters were shared with a central aggregator, which updated the federated model's parameters and then shared them back with each site. We found that models built within an FL framework outperformed their local counterparts. We then analyzed variable-importance discrepancies across sites and frameworks, and explored potential sources of the heterogeneity within the EHR data: the different distributions of demographic profiles, medication use, and site information contributed to data heterogeneity.
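The FL aggregation loop described above (local updates shared with a central aggregator, which updates the federated model and redistributes it) can be sketched as one round of parameter averaging. Weighting by local cohort size follows the common FedAvg convention and is an assumption here, since the abstract does not state the aggregation rule; `fedavg` is a hypothetical helper.

```python
def fedavg(client_params, client_sizes):
    """Aggregate client model parameters, weighted by local dataset size."""
    total = sum(client_sizes)
    dim = len(client_params[0])
    return [sum(p[i] * n for p, n in zip(client_params, client_sizes)) / total
            for i in range(dim)]

# Three hospitals' locally trained parameters and their cohort sizes.
params = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
sizes = [100, 100, 200]
global_params = fedavg(params, sizes)
print(global_params)  # [3.5, 4.5]
```

The aggregator never sees patient records, only these parameter vectors, which is what lets each site keep its EHR data local.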
4

Gufran, Danish, and Sudeep Pasricha. "FedHIL: Heterogeneity Resilient Federated Learning for Robust Indoor Localization with Mobile Devices." ACM Transactions on Embedded Computing Systems 22, no. 5s (2023): 1–24. http://dx.doi.org/10.1145/3607919.

Abstract:
Indoor localization plays a vital role in applications such as emergency response, warehouse management, and augmented reality experiences. By deploying machine learning (ML) based indoor localization frameworks on their mobile devices, users can localize themselves in a variety of indoor and subterranean environments. However, achieving accurate indoor localization can be challenging due to heterogeneity in the hardware and software stacks of mobile devices, which can result in inconsistent and inaccurate location estimates. Traditional ML models also heavily rely on initial training data, making them vulnerable to degradation in performance with dynamic changes across indoor environments. To address the challenges due to device heterogeneity and lack of adaptivity, we propose a novel embedded ML framework called FedHIL. Our framework combines indoor localization and federated learning (FL) to improve indoor localization accuracy in device-heterogeneous environments while also preserving user data privacy. FedHIL integrates a domain-specific selective weight adjustment approach to preserve the ML model's performance for indoor localization during FL, even in the presence of extremely noisy data. Experimental evaluations in diverse real-world indoor environments and with heterogeneous mobile devices show that FedHIL outperforms state-of-the-art FL and non-FL indoor localization frameworks. FedHIL is able to achieve 1.62× better localization accuracy on average than the best performing FL-based indoor localization framework from prior work.
5

Mar’i, Farhanna, and Ahmad Afif Supianto. "A conceptual approach of optimization in federated learning." Indonesian Journal of Electrical Engineering and Computer Science 37, no. 1 (2025): 288. http://dx.doi.org/10.11591/ijeecs.v37.i1.pp288-299.

Abstract:
Federated learning (FL) is an emerging approach to distributed learning from decentralized data, designed with privacy concerns in mind. FL has been successfully applied in several fields, such as the internet of things (IoT), human activity recognition (HAR), and natural language processing (NLP), showing remarkable results. However, the development of FL in real-world applications still faces several challenges. Recent optimizations of FL have been made to address these issues and enhance the FL settings. In this paper, we categorize the optimization of FL into five main challenges: Communication Efficiency, Heterogeneity, Privacy and Security, Scalability, and Convergence Rate. We provide an overview of various optimization frameworks for FL proposed in previous research, illustrated with concrete examples and applications based on these five optimization goals. Additionally, we propose two optional integrated conceptual frameworks (CFs) for optimizing FL by combining several optimization methods to achieve the best implementation of FL that addresses the five challenges.
6

Mar'i, Farhanna, and Ahmad Afif Supianto. "A conceptual approach of optimization in federated learning." Indonesian Journal of Electrical Engineering and Computer Science 37, no. 1 (2025): 288–99. https://doi.org/10.11591/ijeecs.v37.i1.pp288-299.

Abstract:
Federated learning (FL) is an emerging approach to distributed learning from decentralized data, designed with privacy concerns in mind. FL has been successfully applied in several fields, such as the internet of things (IoT), human activity recognition (HAR), and natural language processing (NLP), showing remarkable results. However, the development of FL in real-world applications still faces several challenges. Recent optimizations of FL have been made to address these issues and enhance the FL settings. In this paper, we categorize the optimization of FL into five main challenges: Communication Efficiency, Heterogeneity, Privacy and Security, Scalability, and Convergence Rate. We provide an overview of various optimization frameworks for FL proposed in previous research, illustrated with concrete examples and applications based on these five optimization goals. Additionally, we propose two optional integrated conceptual frameworks (CFs) for optimizing FL by combining several optimization methods to achieve the best implementation of FL that addresses the five challenges.
7

Wang, Hanjing. "The practical applications of federated learning across various domains." Applied and Computational Engineering 87, no. 1 (2024): 154–61. http://dx.doi.org/10.54254/2755-2721/87/20241582.

Abstract:
With the advancement of artificial intelligence technology, a vast amount of data is transmitted during the model training process, significantly increasing the risk of data leakage. In an era where data privacy is highly valued, protecting data from leakage has become an urgent issue. Federated Learning (FL) has thus been proposed and applied across various fields. This paper presents the applications of FL in five key areas: healthcare, urban transportation, computer vision, Industrial Internet of Things (IIoT), and 5G networks. This paper discusses the feasibility of implementing FL for privacy protection in the aforementioned five real-world application scenarios and analyzes its accuracy and efficiency. Additionally, it compares the FL framework with traditional frameworks, exploring the improvements FL has made in terms of privacy protection and performance, as well as the existing shortcomings of the FL framework. Further discussions are provided on potential future improvements. Moreover, this paper offers an outlook on current research trends and the developmental prospects in this research field.
8

Albtosh, Luay Bahjat. "Harnessing the power of federated learning to advance technology." World Journal of Advanced Research and Reviews 23, no. 3 (2024): 1302–12. http://dx.doi.org/10.30574/wjarr.2024.23.3.2768.

Abstract:
Federated Learning (FL) has emerged as a transformative paradigm in machine learning, advocating for decentralized, privacy-preserving model training. This study provides a comprehensive evaluation of contemporary FL frameworks – TensorFlow Federated (TFF), PySyft, and FedJAX – across three diverse datasets: CIFAR-10, IMDb reviews, and the UCI Heart Disease dataset. Our results demonstrate TFF's superior performance on image classification tasks, while PySyft excels in both efficiency and privacy for textual data. The study underscores the potential of FL in ensuring data privacy and model performance yet emphasizes areas warranting improvement. As the volume of edge devices escalates and the need for data privacy intensifies, refining and expanding FL frameworks become essential for future machine learning deployments.
9

Chia, Harmon Lee Bruce. "Harnessing the power of federated learning to advance technology." Advances in Engineering Innovation 2, no. 1 (2023): 44–47. http://dx.doi.org/10.54254/2977-3903/2/2023020.

Abstract:
Federated Learning (FL) has emerged as a transformative paradigm in machine learning, advocating for decentralized, privacy-preserving model training. This study provides a comprehensive evaluation of contemporary FL frameworks – TensorFlow Federated (TFF), PySyft, and FedJAX – across three diverse datasets: CIFAR-10, IMDb reviews, and the UCI Heart Disease dataset. Our results demonstrate TFF's superior performance on image classification tasks, while PySyft excels in both efficiency and privacy for textual data. The study underscores the potential of FL in ensuring data privacy and model performance, yet emphasizes areas warranting improvement. As the volume of edge devices escalates and the need for data privacy intensifies, refining and expanding FL frameworks become essential for future machine learning deployments.
10

Albtosh, Luay Bahjat. "Harnessing the power of federated learning to advance technology." World Journal of Advanced Research and Reviews 23, no. 3 (2024): 1303–12. https://doi.org/10.5281/zenodo.14945174.

Abstract:
Federated Learning (FL) has emerged as a transformative paradigm in machine learning, advocating for decentralized, privacy-preserving model training. This study provides a comprehensive evaluation of contemporary FL frameworks – TensorFlow Federated (TFF), PySyft, and FedJAX – across three diverse datasets: CIFAR-10, IMDb reviews, and the UCI Heart Disease dataset. Our results demonstrate TFF's superior performance on image classification tasks, while PySyft excels in both efficiency and privacy for textual data. The study underscores the potential of FL in ensuring data privacy and model performance, yet emphasizes areas warranting improvement. As the volume of edge devices escalates and the need for data privacy intensifies, refining and expanding FL frameworks become essential for future machine learning deployments.
11

Ngoupayou Limbepe, Zounkaraneni, Keke Gai, and Jing Yu. "Blockchain-Based Privacy-Enhancing Federated Learning in Smart Healthcare: A Survey." Blockchains 3, no. 1 (2025): 1. https://doi.org/10.3390/blockchains3010001.

Abstract:
Federated learning (FL) has emerged as an efficient machine learning (ML) method with crucial privacy protection features. It is adapted for training models in Internet of Things (IoT)-related domains, including smart healthcare systems (SHSs), where the introduction of IoT devices and technologies can raise various security and privacy concerns. However, as FL cannot solely address all privacy challenges, privacy-enhancing technologies (PETs) and blockchain are often integrated to enhance privacy protection in FL frameworks within SHSs. Critical questions remain regarding how these technologies are integrated with FL and how they contribute to enhancing privacy protection in SHSs. This survey addresses these questions by investigating the recent advancements in the combination of FL with PETs and blockchain for privacy protection in smart healthcare. First, this survey emphasizes the critical integration of PETs into the FL context. Second, to address the challenge of integrating blockchain into FL, it examines three main technical dimensions: blockchain-enabled model storage, blockchain-enabled aggregation, and blockchain-enabled gradient upload within FL frameworks. This survey further explores how these technologies collectively ensure the integrity and confidentiality of healthcare data, highlighting their significance in building a trustworthy SHS that safeguards sensitive patient information.
12

Yan, Lei, Lei Wang, Guanjun Li, Jingwei Shao, and Zhixin Xia. "Secure Dynamic Scheduling for Federated Learning in Underwater Wireless IoT Networks." Journal of Marine Science and Engineering 12, no. 9 (2024): 1656. http://dx.doi.org/10.3390/jmse12091656.

Abstract:
Federated learning (FL) is a distributed machine learning approach that can enable Internet of Things (IoT) edge devices to collaboratively learn a machine learning model without explicitly sharing local data in order to achieve data clustering, prediction, and classification in networks. In previous works, some online multi-armed bandit (MAB)-based FL frameworks were proposed to enable dynamic client scheduling for improving the efficiency of FL in underwater wireless IoT networks. However, the security of online dynamic scheduling, which is especially essential for underwater wireless IoT, is increasingly being questioned. In this work, we study secure dynamic scheduling for FL frameworks that can protect against malicious clients in underwater FL-assisted wireless IoT networks. Specifically, in order to jointly optimize the communication efficiency and security of FL, we employ MAB-based methods and propose upper-confidence-bound-based smart contracts (UCB-SCs) and upper-confidence-bound-based smart contracts with a security prediction model (UCB-SCPs) to address the optimal scheduling scheme over time-varying underwater channels. Then, we give the upper bounds of the expected performance regret of the UCB-SC policy and the UCB-SCP policy; these upper bounds imply that the regret of the two proposed policies grows logarithmically over communication rounds under certain conditions. Our experiment shows that the proposed UCB-SC and UCB-SCP approaches significantly improve the efficiency and security of FL frameworks in underwater wireless IoT networks.
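The upper-confidence-bound scheduling idea behind the UCB-SC policy can be sketched as a standard UCB1 selection rule over clients: each client's empirical reward (e.g. observed channel quality) is augmented with an exploration bonus that shrinks as the client is scheduled more often. This is a generic MAB sketch under assumed reward definitions, not the paper's exact UCB-SC/UCB-SCP construction; `ucb_select` and `true_quality` are hypothetical names.

```python
import math

def ucb_select(counts, rewards, t, k):
    """Pick the k clients with the highest UCB1 score:
    empirical mean reward plus an exploration bonus."""
    scores = []
    for n, r in zip(counts, rewards):
        if n == 0:
            scores.append(float("inf"))  # try every client at least once
        else:
            scores.append(r / n + math.sqrt(2 * math.log(t) / n))
    return sorted(range(len(counts)), key=lambda i: scores[i], reverse=True)[:k]

# Simulate 100 scheduling rounds with a fixed (hypothetical) quality per client.
true_quality = [0.2, 0.9, 0.5]
counts = [0, 0, 0]
rewards = [0.0, 0.0, 0.0]
for t in range(1, 101):
    for i in ucb_select(counts, rewards, t, k=1):
        counts[i] += 1
        rewards[i] += true_quality[i]
print(counts)  # the best client (index 1) is scheduled most often
```

The logarithmic-regret guarantee cited in the abstract comes from the `sqrt(2 ln t / n)` bonus: suboptimal clients are scheduled only O(log t) times under standard MAB assumptions.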
13

Kholod, Ivan, Evgeny Yanaki, Dmitry Fomichev, et al. "Open-Source Federated Learning Frameworks for IoT: A Comparative Review and Analysis." Sensors 21, no. 1 (2020): 167. http://dx.doi.org/10.3390/s21010167.

Abstract:
The rapid development of Internet of Things (IoT) systems has led to the problem of managing and analyzing the large volumes of data that they generate. Traditional approaches that involve collecting data from IoT devices into one centralized repository for further analysis are not always applicable due to the large amount of collected data, the use of communication channels with limited bandwidth, security and privacy requirements, etc. Federated learning (FL) is an emerging approach that allows one to analyze data directly at the data sources and to federate the results of each analysis to yield a result equivalent to that of traditional centralized data processing. FL is being actively developed, and currently, there are several open-source frameworks that implement it. This article presents a comparative review and analysis of the existing open-source FL frameworks, including their applicability in IoT systems. The authors evaluated the following features of the frameworks: ease of use and deployment, development, analysis capabilities, accuracy, and performance. Three different data sets were used in the experiments—two signal data sets of different volumes and one image data set. To model low-power IoT devices, computing nodes with small resources were defined in the testbed. The research results revealed FL frameworks that could be applied in IoT systems now, but with certain restrictions on their use.
14

Nguen, Thi Ha Mi. "Application of COSO and COBIT Frameworks for the Purpose of Organization of Internal Control." Auditor 7, no. 5 (2021): 15–23. http://dx.doi.org/10.12737/1998-0701-2021-7-5-15-23.

Abstract:
This article analyzes the relevance of these frameworks for the purposes of organizing internal control and proposes a scheme for integrating the COBIT and COSO frameworks: the theses of the two frameworks are interconnected through the concepts of «process» and «information flows». The article also considers the application of the integrated concept in the context of the implementation of the principles of sustainable development in the activities of the organization.
15

Novikova, Evgenia, Dmitry Fomichov, Ivan Kholod, and Evgeny Filippov. "Analysis of Privacy-Enhancing Technologies in Open-Source Federated Learning Frameworks for Driver Activity Recognition." Sensors 22, no. 8 (2022): 2983. http://dx.doi.org/10.3390/s22082983.

Abstract:
Wearable devices and smartphones that are used to monitor the activity and state of the driver collect a lot of sensitive data, such as audio, video, location, and even health data. The analysis and processing of such data require observing strict legal requirements for personal data security and privacy. The federated learning (FL) computation paradigm has been proposed as a privacy-preserving computational model that allows securing the privacy of the data owner. However, it still has no formal proof of privacy guarantees, and recent research showed that attacks targeting both model integrity and the privacy of data owners can be performed at all stages of the FL process. This paper focuses on the analysis of the privacy-preserving techniques adopted for FL and presents a comparative review and analysis of their implementations in open-source FL frameworks. The authors evaluated their impact on the overall training process in terms of global model accuracy, training time, and network traffic generated during the training process in order to assess their applicability to driver's state and behaviour monitoring. As the usage scenario, the authors considered the case of driver activity monitoring using data from smartphone sensors. The experiments showed that the current implementation of privacy-preserving techniques in open-source FL frameworks limits the practical application of FL to cross-silo settings.
16

Matschinske, Julian, Julian Späth, Mohammad Bakhtiari, et al. "The FeatureCloud Platform for Federated Learning in Biomedicine: Unified Approach." Journal of Medical Internet Research 25 (July 12, 2023): e42621. http://dx.doi.org/10.2196/42621.

Abstract:
Background: Machine learning and artificial intelligence have shown promising results in many areas and are driven by the increasing amount of available data. However, these data are often distributed across different institutions and cannot be easily shared owing to strict privacy regulations. Federated learning (FL) allows the training of distributed machine learning models without sharing sensitive data. However, implementing FL is time-consuming and requires advanced programming skills and complex technical infrastructures.
Objective: Various tools and frameworks have been developed to simplify the development of FL algorithms and provide the necessary technical infrastructure. Although there are many high-quality frameworks, most focus only on a single application case or method. To our knowledge, there are no generic frameworks, meaning that the existing solutions are restricted to a particular type of algorithm or application field. Furthermore, most of these frameworks provide an application programming interface that requires programming knowledge. There is no collection of ready-to-use FL algorithms that are extendable and allow users (eg, researchers) without programming knowledge to apply FL. A central FL platform for both FL algorithm developers and users does not exist. This study aimed to address this gap and make FL available to everyone by developing FeatureCloud, an all-in-one platform for FL in biomedicine and beyond.
Methods: The FeatureCloud platform consists of 3 main components: a global frontend, a global backend, and a local controller. Our platform uses Docker to separate the locally acting components of the platform from the sensitive data systems. We evaluated our platform using 4 different algorithms on 5 data sets for both accuracy and runtime.
Results: FeatureCloud removes the complexity of distributed systems for developers and end users by providing a comprehensive platform for executing multi-institutional FL analyses and implementing FL algorithms. Through its integrated artificial intelligence store, federated algorithms can easily be published and reused by the community. To secure sensitive raw data, FeatureCloud supports privacy-enhancing technologies to secure the shared local models and assures high standards in data privacy to comply with the strict General Data Protection Regulation. Our evaluation shows that applications developed in FeatureCloud can produce highly similar results compared with centralized approaches and scale well for an increasing number of participating sites.
Conclusions: FeatureCloud provides a ready-to-use platform that integrates the development and execution of FL algorithms while reducing complexity to a minimum and removing the hurdles of federated infrastructure. Thus, we believe that it has the potential to greatly increase the accessibility of privacy-preserving and distributed data analyses in biomedicine and beyond.
17

Dalimarta, Fahmy Ferdian, Nina Faoziyah, and Doni Setiawan. "A Novel Privacy-Preserving Algorithm for Secure Data Sharing in Federated Learning Frameworks." Journal of Computer Networks, Architecture and High Performance Computing 7, no. 1 (2025): 223–34. https://doi.org/10.47709/cnahpc.v7i1.5385.

Abstract:
Federated Learning (FL) has emerged as a promising paradigm for the collaborative training of machine learning models across decentralized devices while preserving data privacy. However, ensuring data security and privacy during model updates remains a critical challenge, particularly in scenarios that involve sensitive data. This study proposes a novel Privacy-Preserving Algorithm (PPA-FL) designed to enhance data security and mitigate privacy leakage risks in FL frameworks. The algorithm integrates advanced encryption techniques, such as homomorphic encryption, with differential privacy to secure model updates without compromising the utility. Furthermore, it incorporates a dynamic noise-adjustment mechanism to adaptively balance privacy and model accuracy. Extensive experiments on benchmark datasets demonstrate that PPA-FL achieves a competitive trade-off between privacy protection and model performance compared to existing methods. The proposed approach is computationally efficient and scalable, making it suitable for real-world applications in healthcare, finance, and the IoT environment. This research contributes to advancing secure data-sharing practices in federated learning, fostering the broader adoption of privacy-preserving machine learning solutions.
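The clip-then-add-noise pattern that differential privacy contributes to PPA-FL, together with the dynamic noise-adjustment idea, can be sketched as follows. The clipping norm, noise scale, and decay schedule are all assumed values for illustration; the paper's actual mechanism (and its combination with homomorphic encryption) is not reproduced here.

```python
import random

def clip(update, c):
    """Clip an update to L2 norm at most c, bounding each client's influence."""
    norm = sum(u * u for u in update) ** 0.5
    if norm > c:
        return [u * c / norm for u in update]
    return update

def dp_noisy_update(update, clip_norm=1.0, sigma=0.5, rng=random):
    """Clip, then add Gaussian noise scaled to the clipping norm."""
    clipped = clip(update, clip_norm)
    return [u + rng.gauss(0.0, sigma * clip_norm) for u in clipped]

def noise_schedule(round_idx, sigma0=1.0, decay=0.9):
    """Hypothetical dynamic adjustment: shrink the noise scale as
    training stabilizes, trading privacy budget for accuracy."""
    return sigma0 * (decay ** round_idx)

# A client's raw update, privatized over three rounds with decaying noise.
round_update = [3.0, 4.0]
for r in range(3):
    noisy = dp_noisy_update(round_update, clip_norm=1.0, sigma=noise_schedule(r))
```

Clipping is what makes the noise scale meaningful: without a bound on each update's norm, no finite amount of noise yields a differential-privacy guarantee.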
18

Du, Weidong, Min Li, Yiliang Han, Xu An Wang, and Zhaoying Wei. "A Homomorphic Signcryption-Based Privacy Preserving Federated Learning Framework for IoTs." Security and Communication Networks 2022 (September 22, 2022): 1–10. http://dx.doi.org/10.1155/2022/8380239.

Abstract:
Federated learning (FL) enables clients to train a machine learning model collaboratively by aggregating only their model parameters, which makes it very useful in empowering IoT systems with intelligence. To prevent private information from leaking through parameters during aggregation, many FL frameworks use homomorphic encryption to protect clients' parameters. However, a secure federated learning framework should not only protect the privacy of the parameters but also guarantee the integrity of the aggregated results. In this paper, we propose an efficient homomorphic signcryption framework that can encrypt and sign the parameters in one go. Thanks to the additive homomorphic property of our framework, it allows aggregating the signcryptions of parameters securely. Thus, our framework can both verify the integrity of the aggregated results and protect the privacy of the parameters. Moreover, we employ a blinding technique to resist collusion attacks between internal curious clients and the server, and leverage the Chinese Remainder Theorem to improve efficiency. Finally, we simulate our framework in FedML. Extensive experimental results on four benchmark datasets demonstrate that our framework can protect privacy without compromising model performance, and our framework is more efficient than similar frameworks.
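The blinding technique mentioned above can be illustrated with pairwise additive masks: each pair of clients agrees on a random mask that one adds and the other subtracts, so individual updates are hidden from the server while the aggregate is preserved. This is a generic secure-aggregation sketch (masks derived from a shared seed for simplicity), not the paper's signcryption scheme; `pairwise_masks` is a hypothetical helper.

```python
import random

def pairwise_masks(n_clients, dim, seed=0):
    """Generate cancelling pairwise masks: client i adds mask(i, j) and
    client j subtracts it, so the sum over all clients is unchanged."""
    rng = random.Random(seed)
    masks = [[0.0] * dim for _ in range(n_clients)]
    for i in range(n_clients):
        for j in range(i + 1, n_clients):
            m = [rng.uniform(-1, 1) for _ in range(dim)]
            for d in range(dim):
                masks[i][d] += m[d]
                masks[j][d] -= m[d]
    return masks

updates = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
masks = pairwise_masks(3, 2)
blinded = [[u + m for u, m in zip(upd, msk)] for upd, msk in zip(updates, masks)]
# The server sees only blinded vectors, yet their sum equals the true sum.
total = [sum(col) for col in zip(*blinded)]
print(total)  # ≈ [9.0, 12.0]
```

A colluding server would need the masks of all other clients to unblind any single update, which is the property the paper's blinding step relies on.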
19

R., Sanjana, Nikesh M., Bhuvaneshwari M., Bharathi M., and Aditya Sai Srinivas T. "United Intelligence: Federated Learning for the Future of Technology." Research and Applications of Web Development and Design 8, no. 1 (2024): 1–7. https://doi.org/10.5281/zenodo.13933081.

Abstract:
Federated Learning (FL) is rapidly transforming how we approach machine learning by offering a decentralized, privacy-first way to train models. Instead of sending data to a central server, FL enables devices to collaborate and learn without ever sharing sensitive information, making it a game-changer for privacy-conscious applications. In this study, we dive deep into three leading FL frameworks – TensorFlow Federated (TFF), PySyft, and FedJAX – testing them on datasets like CIFAR-10 for image classification, IMDb reviews for sentiment analysis, and the UCI Heart Disease dataset for medical predictions. Our results show that TFF shines in image-related tasks with strong performance, while PySyft stands out for efficiently handling text data while keeping privacy intact. This research highlights FL's promise in balancing data security with model performance, though challenges like communication delays and scaling still need to be tackled. As more devices connect and privacy concerns grow, improving these frameworks will be key to the future of machine learning innovation.
APA, Harvard, Vancouver, ISO, and other styles
20

Wu, Lang, Weijian Ruan, Jinhui Hu, and Yaobin He. "A Survey on Blockchain-Based Federated Learning." Future Internet 15, no. 12 (2023): 400. http://dx.doi.org/10.3390/fi15120400.

Full text
Abstract:
Federated learning (FL) and blockchains exhibit significant commonality, complementarity, and alignment in various aspects, such as application domains, architectural features, and privacy protection mechanisms. In recent years, there have been notable advancements in combining these two technologies, particularly in data privacy protection, data sharing incentives, and computational performance. Although there are some surveys on blockchain-based federated learning (BFL), these surveys predominantly focus on the BFL framework and its classifications, yet lack in-depth analyses of the pivotal issues addressed by BFL. This work aims to assist researchers in understanding the latest research achievements and development directions in the integration of FL with blockchains. Firstly, we introduced the relevant research in FL and blockchain technology and highlighted the existing shortcomings of FL. Next, we conducted a comparative analysis of existing BFL frameworks, delving into the significant problems in the realm of FL that the combination of blockchain and FL addresses. Finally, we summarized the application prospects of BFL technology in various domains such as the Internet of Things, Industrial Internet of Things, Internet of Vehicles, and healthcare services, as well as the challenges that need to be addressed and future research directions.
APA, Harvard, Vancouver, ISO, and other styles
21

Aishwarya, M., J. Umesh Chandra, M. Farhan Ali, M. Bharathi, and T. Aditya Sai Srinivas. "Secure and Scalable AI: Insights into Federated Learning Algorithms and Platforms." Journal of Communication Engineering and VLSI Design 2, no. 2 (2024): 15–27. http://dx.doi.org/10.48001/jocevd.2024.2215-27.

Full text
Abstract:
This paper takes a closer look at the rapidly advancing field of Federated Learning (FL), a decentralized machine learning approach that focuses on preserving data privacy by training models across multiple devices. It highlights key algorithms like Federated Averaging (FedAvg) and its more refined versions Hierarchical Federated Averaging (HierFAVG) and Federated Matched Averaging (FedMA) which improve model aggregation techniques. The discussion extends to both Horizontal and Vertical Federated Learning (HFL and VFL), illustrating how they handle data partitioning and communication differently. Additionally, the paper reviews prominent FL frameworks and simulators, including TensorFlow Federated (TFF), PySyft, Flower, and FedML, emphasizing their roles in facilitating experiments and ensuring scalability. Critical features like data distribution, communication topologies, and security measures in FL simulators are explored. Ultimately, the paper offers a comprehensive overview of FL frameworks, algorithms, and architectures, showcasing their ability to advance distributed AI while tackling the challenges of data diversity and privacy.
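The Federated Averaging (FedAvg) rule named in the abstract above can be sketched in a few lines. This is a toy illustration on plain Python lists, not the API of any of the frameworks the paper reviews:

```python
def fedavg(client_weights, client_sizes):
    """Weighted average of client parameter vectors (FedAvg):
    each client is weighted by its local dataset size."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    global_w = [0.0] * dim
    for w, n in zip(client_weights, client_sizes):
        for k in range(dim):
            global_w[k] += (n / total) * w[k]
    return global_w

# Two clients: one trained on 100 samples, one on 300.
w_global = fedavg([[1.0, 2.0], [3.0, 4.0]], [100, 300])
# 0.25 * [1, 2] + 0.75 * [3, 4] = [2.5, 3.5]
```

Variants such as HierFAVG and FedMA mentioned in the abstract change how and where this aggregation happens, but the size-weighted average is the common baseline.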
APA, Harvard, Vancouver, ISO, and other styles
22

Li, Youpeng, Xuyu Wang, and Lingling An. "Hierarchical Clustering-based Personalized Federated Learning for Robust and Fair Human Activity Recognition." Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 7, no. 1 (2022): 1–38. http://dx.doi.org/10.1145/3580795.

Full text
Abstract:
Currently, federated learning (FL) can enable users to collaboratively train a global model while protecting the privacy of user data, which has been applied to human activity recognition (HAR) tasks. However, in real HAR scenarios, deploying an FL system needs to consider multiple aspects, including system accuracy, fairness, robustness, and scalability. Most existing FL frameworks aim to solve specific problems while ignoring other properties. In this paper, we propose FedCHAR, a personalized FL framework with a hierarchical clustering method for robust and fair HAR, which not only improves the accuracy and the fairness of model performance by exploiting the intrinsically similar relationship between users but also enhances the robustness of the system by identifying malicious nodes through clustering in attack scenarios. In addition, to enhance the scalability of FedCHAR, we also propose FedCHAR-DC, a scalable and adaptive FL framework which is featured by dynamic clustering and adapting to the addition of new users or the evolution of datasets for realistic FL-based HAR scenarios. We conduct extensive experiments to evaluate the performance of FedCHAR on seven datasets of different sizes. The results demonstrate that FedCHAR could obtain better performance on different datasets than the other five state-of-the-art methods in terms of accuracy, robustness, and fairness. We further validate that FedCHAR-DC exhibits satisfactory scalability on three large-scale datasets regardless of the number of participants.
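The clustering intuition behind FedCHAR-style frameworks can be sketched as grouping clients by the similarity of their model updates, so that intrinsically similar users share a cluster while outlying (potentially malicious) updates end up isolated. The naive single-link grouping and the threshold below are illustrative assumptions, not the paper's actual hierarchical clustering algorithm:

```python
def cosine(u, v):
    """Cosine similarity of two update vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = sum(a * a for a in u) ** 0.5
    nv = sum(b * b for b in v) ** 0.5
    return dot / (nu * nv)

def cluster_updates(updates, threshold=0.9):
    """Greedy grouping: a client joins the first cluster whose members are all
    cosine-similar to it; otherwise it starts a new cluster."""
    clusters = []
    for cid, u in updates.items():
        for cl in clusters:
            if all(cosine(u, updates[o]) >= threshold for o in cl):
                cl.append(cid)
                break
        else:
            clusters.append([cid])
    return clusters

# Clients "a" and "b" push the model in similar directions; "c" pushes the
# opposite way (e.g. a poisoned update) and is isolated in its own cluster.
updates = {"a": [1.0, 1.0], "b": [0.9, 1.1], "c": [-1.0, -1.0]}
clusters = cluster_updates(updates)
```

Personalized FL systems then aggregate within clusters rather than globally, which is how both fairness (similar users share a model) and robustness (outliers are quarantined) are pursued.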
APA, Harvard, Vancouver, ISO, and other styles
23

Jerkovic, Filip, Nurul I. Sarkar, and Jahan Ali. "Smart Grid IoT Framework for Predicting Energy Consumption Using Federated Learning Homomorphic Encryption." Sensors 25, no. 12 (2025): 3700. https://doi.org/10.3390/s25123700.

Full text
Abstract:
Homomorphic Encryption (HE) introduces new dimensions of security and privacy within federated learning (FL) and internet of things (IoT) frameworks that allow preservation of user privacy when handling data for FL occurring in Smart Grid (SG) technologies. In this paper, we propose a novel SG IoT framework to provide a solution for predicting energy consumption while preserving user privacy in a smart grid system. The proposed framework is based on the integration of FL, edge computing, and HE principles to provide a robust and secure framework to conduct machine learning workloads end-to-end. In the proposed framework, edge devices are connected to each other using P2P networking, and the data exchanged between peers is encrypted using Cheon–Kim–Kim–Song (CKKS) fully HE. The results obtained show that the system can predict energy consumption as well as preserve user privacy in SG scenarios. The findings provide an insight into the SG IoT framework that can help network researchers and engineers contribute further towards developing a next-generation SG IoT system.
APA, Harvard, Vancouver, ISO, and other styles
24

Pujari, Mangesh, and Anil Kumar Pakina. "EdgeAI for Privacy-Preserving AI: The Role of Small LLMs in Federated Learning Environments." International Journal of Engineering and Computer Science 13, no. 10 (2024): 26589–601. https://doi.org/10.18535/ijecs.v13i10.4889.

Full text
Abstract:
Privacy considerations in artificial intelligence (AI) have led to the popularization of federated learning (FL) as a decentralized training paradigm. FL allows collaborative model training without requiring the exchange of private data. However, the adoption of FL on edge devices faces major challenges due to limited computational resources, networks, and energy efficiency. This paper analyzes the role of small language models (SLMs) in FL frameworks, with a focus on their promise for enabling intelligent, privacy-preserving architectures on edge devices. SLMs make robust local inference possible while exposing less data. This research investigates the performance of SLMs in different TinyML applications, such as natural language understanding and anomaly detection, along with the inherent security vulnerabilities of SLMs in federated learning environments under various attack scenarios. Furthermore, effective countermeasures are proposed. Finally, the policy implications of adopting SLMs for privacy-sensitive domains are covered, advocating for governance frameworks that balance innovation and data protection.
APA, Harvard, Vancouver, ISO, and other styles
25

Elshair, Ismail M., Tariq Jamil Saifullah Khanzada, Muhammad Farrukh Shahid, and Shahbaz Siddiqui. "Evaluating Federated Learning Simulators: A Comparative Analysis of Horizontal and Vertical Approaches." Sensors 24, no. 16 (2024): 5149. http://dx.doi.org/10.3390/s24165149.

Full text
Abstract:
Federated learning (FL) is a decentralized machine learning approach whereby each device is allowed to train local models, eliminating the requirement for centralized data collection and ensuring data privacy. Unlike typical centralized machine learning, collaborative model training in FL involves aggregating updates from various devices without sending raw data. This ensures data privacy and security while enabling collective learning from distributed data sources. FL models exhibit high efficacy in terms of privacy protection, scalability, and robustness, contingent upon successful communication and collaboration among devices. This paper explores both decentralized and centralized topologies in the context of FL. In this respect, we evaluated in detail four widely used end-to-end FL frameworks: FedML, Flower, Flute, and PySyft. We specifically focused on vertical and horizontal FL systems using a logistic regression model aggregated by the FedAvg algorithm. Specifically, we conducted experiments on two image datasets, MNIST and Fashion-MNIST, to evaluate their efficiency and performance. Our paper provides initial findings on how to effectively combine horizontal and vertical solutions to address common difficulties, such as managing model synchronization and communication overhead. Our research highlights the trade-offs that exist in the performance of several simulation frameworks for federated learning.
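The horizontal/vertical distinction discussed in this abstract can be illustrated with a toy data split. The rows and feature values are hypothetical and independent of the surveyed frameworks:

```python
# A toy dataset: 4 samples (rows), 3 features (columns).
rows = [
    [1, 10, 100],
    [2, 20, 200],
    [3, 30, 300],
    [4, 40, 400],
]

# Horizontal FL: clients hold different SAMPLES with the same feature space.
horizontal = [rows[:2], rows[2:]]  # client A: samples 0-1, client B: samples 2-3

# Vertical FL: clients hold different FEATURES for the same samples.
vertical = [
    [r[:1] for r in rows],  # client A: feature 0 for every sample
    [r[1:] for r in rows],  # client B: features 1-2 for every sample
]
```

Horizontal systems can aggregate whole models (e.g. with FedAvg), whereas vertical systems must coordinate per-sample intermediate results, which is one source of the synchronization and communication-overhead trade-offs the paper examines.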
APA, Harvard, Vancouver, ISO, and other styles
26

Li, Boyuan, Shengbo Chen, and Zihao Peng. "New Generation Federated Learning." Sensors 22, no. 21 (2022): 8475. http://dx.doi.org/10.3390/s22218475.

Full text
Abstract:
With the development of the Internet of things (IoT), federated learning (FL) has received increasing attention as a distributed machine learning (ML) framework that does not require data exchange. However, current FL frameworks follow an idealized setup in which the task size is fixed and the storage space is unlimited, which is impossible in the real world. In fact, new classes of these participating clients always emerge over time, and some samples are overwritten or discarded due to storage limitations. We urgently need a new framework to adapt to the dynamic task sequences and strict storage constraints in the real world. Continuous learning or incremental learning is the ultimate goal of deep learning, and we introduce incremental learning into FL to describe a new federated learning framework. New generation federated learning (NGFL) is probably the most desirable framework for FL, in which, in addition to the basic task of training the server, each client needs to learn its private tasks, which arrive continuously independent of communication with the server. We give a rigorous mathematical representation of this framework, detail several major challenges faced under this framework, and address the main challenges of combining incremental learning with federated learning (aggregation of heterogeneous output layers and the task transformation mutual knowledge problem), and show the lower and upper baselines of the framework.
APA, Harvard, Vancouver, ISO, and other styles
27

Dey, Subarna, Asamanjoy Bhunia, Dolores Esquivel, and Christoph Janiak. "Covalent triazine-based frameworks (CTFs) from triptycene and fluorene motifs for CO2 adsorption." Journal of Materials Chemistry A 4, no. 17 (2016): 6259–63. http://dx.doi.org/10.1039/c6ta00638h.

Full text
Abstract:
Two microporous CTFs with triptycene (TPC) and fluorene (FL) have been synthesized through a mild AlCl₃-catalyzed Friedel–Crafts reaction, with the highest surface area of up to 1668 m² g⁻¹ for non-ionothermal CTFs. CTF-TPC and CTF-FL show an excellent carbon dioxide uptake capacity of up to 4.24 mmol g⁻¹ at 273 K and 1 bar.
APA, Harvard, Vancouver, ISO, and other styles
28

Jia, Yongzhe, Xuyun Zhang, Amin Beheshti, and Wanchun Dou. "FedLPS: Heterogeneous Federated Learning for Multiple Tasks with Local Parameter Sharing." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 11 (2024): 12848–56. http://dx.doi.org/10.1609/aaai.v38i11.29181.

Full text
Abstract:
Federated Learning (FL) has emerged as a promising solution in Edge Computing (EC) environments to process the proliferation of data generated by edge devices. By collaboratively optimizing the global machine learning models on distributed edge devices, FL circumvents the need for transmitting raw data and enhances user privacy. Despite practical successes, FL still confronts significant challenges including constrained edge device resources, multiple tasks deployment, and data heterogeneity. However, existing studies focus on mitigating the FL training costs of each single task whereas neglecting the resource consumption across multiple tasks in heterogeneous FL scenarios. In this paper, we propose Heterogeneous Federated Learning with Local Parameter Sharing (FedLPS) to fill this gap. FedLPS leverages principles from transfer learning to facilitate the deployment of multiple tasks on a single device by dividing the local model into a shareable encoder and task-specific encoders. To further reduce resource consumption, a channel-wise model pruning algorithm that shrinks the footprint of local models while accounting for both data and system heterogeneity is employed in FedLPS. Additionally, a novel heterogeneous model aggregation algorithm is proposed to aggregate the heterogeneous predictors in FedLPS. We implemented the proposed FedLPS on a real FL platform and compared it with state-of-the-art (SOTA) FL frameworks. The experimental results on five popular datasets and two modern DNN models illustrate that the proposed FedLPS significantly outperforms the SOTA FL frameworks by up to 4.88% and reduces the computational resource consumption by 21.3%. Our code is available at: https://github.com/jyzgh/FedLPS.
APA, Harvard, Vancouver, ISO, and other styles
29

Sabuhi, Mikael, Petr Musilek, and Cor-Paul Bezemer. "Micro-FL: A Fault-Tolerant Scalable Microservice-Based Platform for Federated Learning." Future Internet 16, no. 3 (2024): 70. http://dx.doi.org/10.3390/fi16030070.

Full text
Abstract:
As the number of machine learning applications increases, growing concerns about data privacy expose the limitations of traditional cloud-based machine learning methods that rely on centralized data collection and processing. Federated learning emerges as a promising alternative, offering a novel approach to training machine learning models that safeguards data privacy. Federated learning facilitates collaborative model training across various entities. In this approach, each user trains models locally and shares only the local model parameters with a central server, which then generates a global model based on these individual updates. This approach ensures data privacy since the training data itself is never directly shared with a central entity. However, existing federated machine learning frameworks are not without challenges. In terms of server design, these frameworks exhibit limited scalability with an increasing number of clients and are highly vulnerable to system faults, particularly as the central server becomes a single point of failure. This paper introduces Micro-FL, a federated learning framework that uses a microservices architecture to implement the federated learning system. It demonstrates that the framework is fault-tolerant and scalable, showing its ability to handle an increasing number of clients. A comprehensive performance evaluation confirms that Micro-FL proficiently handles component faults, enabling a smooth and uninterrupted operation.
APA, Harvard, Vancouver, ISO, and other styles
30

Han, Bing, Qiang Fu, and Xinliang Zhang. "Towards Privacy-Preserving Federated Neuromorphic Learning via Spiking Neuron Models." Electronics 12, no. 18 (2023): 3984. http://dx.doi.org/10.3390/electronics12183984.

Full text
Abstract:
Federated learning (FL) has been broadly adopted in both academia and industry in recent years. As a bridge to connect the so-called “data islands”, FL has contributed greatly to promoting data utilization. In particular, FL enables disjoint entities to cooperatively train a shared model, while protecting each participant’s data privacy. However, current FL frameworks cannot offer privacy protection and reduce the computation overhead at the same time. Therefore, its implementation in practical scenarios, such as edge computing, is limited. In this paper, we propose a novel FL framework with spiking neuron models and differential privacy, which simultaneously provides theoretically guaranteed privacy protection and achieves low energy consumption. We model the local forward propagation process in a discrete way similar to nerve signal travel in the human brain. Since neurons only fire when the accumulated membrane potential exceeds a threshold, spiking neuron models require significantly lower energy compared to traditional neural networks. In addition, to protect sensitive information in model gradients, we add differentially private noise in both the local training phase and the server aggregation phase. Empirical evaluation results show that our proposal can effectively reduce the accuracy of membership inference attacks and property inference attacks, while maintaining a relatively low energy cost. For example, the attack accuracy of a membership inference attack drops to 43% in some scenarios. As a result, our proposed FL framework can work well in large-scale cross-device learning scenarios.
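Differentially private noise addition of the kind described here is commonly realized with the Gaussian mechanism: clip each gradient to bound its sensitivity, then add noise proportional to that bound. The sketch below shows this standard clip-then-noise step; it is a generic illustration with assumed parameter names, not the paper's exact procedure:

```python
import random

def gaussian_mechanism(grads, clip_norm, sigma, seed=0):
    """Clip a gradient vector to L2 norm <= clip_norm, then add Gaussian
    noise with standard deviation sigma * clip_norm to each coordinate."""
    rng = random.Random(seed)
    norm = sum(g * g for g in grads) ** 0.5
    scale = min(1.0, clip_norm / norm) if norm > 0 else 1.0
    clipped = [g * scale for g in grads]
    return [g + rng.gauss(0.0, sigma * clip_norm) for g in clipped]

# [3, 4] has L2 norm 5, so it is scaled to norm 1 before noise is added.
noisy = gaussian_mechanism([3.0, 4.0], clip_norm=1.0, sigma=0.5)
```

Clipping bounds how much any one sample (or client) can move the model, which is what makes the added noise yield a formal differential-privacy guarantee.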
APA, Harvard, Vancouver, ISO, and other styles
31

Zhang, Yanting, Jianwei Liu, Zhenyu Guan, Bihe Zhao, Xianglun Leng, and Song Bian. "ARMOR: Differential Model Distribution for Adversarially Robust Federated Learning." Electronics 12, no. 4 (2023): 842. http://dx.doi.org/10.3390/electronics12040842.

Full text
Abstract:
In this work, we formalize the concept of differential model robustness (DMR), a new property for ensuring model security in federated learning (FL) systems. For most conventional FL frameworks, all clients receive the same global model. If there exists a Byzantine client who maliciously generates adversarial samples against the global model, the attack will be immediately transferred to all other benign clients. To address the attack transferability concern and improve the DMR of FL systems, we propose the notion of differential model distribution (DMD) where the server distributes different models to different clients. As a concrete instantiation of DMD, we propose the ARMOR framework utilizing differential adversarial training to prevent a corrupted client from launching white-box adversarial attack against other clients, for the local model received by the corrupted client is different from that of benign clients. Through extensive experiments, we demonstrate that ARMOR can significantly reduce both the attack success rate (ASR) and average adversarial transfer rate (AATR) across different FL settings. For instance, for a 35-client FL system, the ASR and AATR can be reduced by as much as 85% and 80% over the MNIST dataset.
APA, Harvard, Vancouver, ISO, and other styles
32

Wu, Xueyang, Hengguan Huang, Youlong Ding, Hao Wang, Ye Wang, and Qian Xu. "FedNP: Towards Non-IID Federated Learning via Federated Neural Propagation." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 9 (2023): 10399–407. http://dx.doi.org/10.1609/aaai.v37i9.26237.

Full text
Abstract:
Traditional federated learning (FL) algorithms, such as FedAvg, fail to handle non-i.i.d. data because they learn a global model by simply averaging biased local models that are trained on non-i.i.d. local data, therefore failing to model the global data distribution. In this paper, we present a novel Bayesian FL algorithm that successfully handles such a non-i.i.d. FL setting by enhancing the local training task with an auxiliary task that explicitly estimates the global data distribution. One key challenge in estimating the global data distribution is that the data are partitioned in FL, and therefore the ground-truth global data distribution is inaccessible. To address this challenge, we propose an expectation-propagation-inspired probabilistic neural network, dubbed federated neural propagation (FedNP), which efficiently estimates the global data distribution given non-i.i.d. data partitions. Our algorithm is sampling-free and end-to-end differentiable, can be applied with any conventional FL framework, and learns richer global data representations. Experiments on both image classification tasks with synthetic non-i.i.d. image data partitions and real-world non-i.i.d. speech recognition tasks demonstrate that our framework effectively alleviates the performance deterioration caused by non-i.i.d. data.
APA, Harvard, Vancouver, ISO, and other styles
33

Ankit, Chauhan. "Decentralized AI Model Training and Inference Using Blockchain for Privacy-Preserving Federated Learning." Journal of Research and Innovation in Technology, Commerce and Management 2, no. 6 (2025): 2654–61. https://doi.org/10.5281/zenodo.15606610.

Full text
Abstract:
Federated Learning (FL) has emerged as a promising approach to training machine learning models across distributed devices while preserving data privacy by avoiding centralized data collection. However, traditional FL frameworks rely on a central server to aggregate model updates, introducing vulnerabilities such as single-point failures, lack of transparency, and susceptibility to adversarial attacks like model poisoning. To address these challenges, this paper proposes a decentralized AI training and inference framework that integrates blockchain technology with FL to enhance security, privacy, and trust. Our framework leverages smart contracts to automate model aggregation, decentralized storage for secure weight distribution, and cryptographic techniques such as homomorphic encryption and zero-knowledge proofs to ensure privacy-preserving validation. By eliminating the need for a central authority, our approach enhances robustness against malicious actors while maintaining model accuracy comparable to traditional FL. Additionally, we introduce a consensus mechanism that verifies participant contributions, ensuring fairness and auditability. Experimental evaluations on benchmark datasets demonstrate that our framework achieves competitive performance while significantly improving privacy and resistance to attacks. This work bridges the gap between decentralized AI and federated learning, offering a scalable and secure solution for privacy-sensitive applications in healthcare, finance, and IoT. Future research directions include optimizing blockchain scalability and exploring incentive mechanisms for sustainable participation.
APA, Harvard, Vancouver, ISO, and other styles
34

Elmorsy, Esraa S., Ayman Mahrous, Wael A. Amer, and Mohamad M. Ayad. "Nitrogen-Doped Carbon Dots in Zeolitic Imidazolate Framework Core-Shell Nanocrystals: Synthesis and Characterization." Solid State Phenomena 336 (August 30, 2022): 81–87. http://dx.doi.org/10.4028/p-206xsy.

Full text
Abstract:
Metal-organic frameworks (MOFs) have exciting properties and promising applications in different fields. In this work, novel zeolitic imidazolate frameworks (ZIFs) have been synthesized by encapsulating N-doped carbon quantum dots (N-CDs) with blue fluorescence (FL) into the zeolitic imidazolate framework core-shell structure (ZIF-8@ZIF-67). The functionalized core-shell MOFs maintained their crystal structure and morphology and showed enhanced UV-vis absorbance. The properties of these new composites exhibit excellent potential for different applications including sensing, photo-catalysis, and selective adsorption.
APA, Harvard, Vancouver, ISO, and other styles
35

Shaheen, Momina, Muhammad Shoaib Farooq, Tariq Umer, and Byung-Seo Kim. "Applications of Federated Learning; Taxonomy, Challenges, and Research Trends." Electronics 11, no. 4 (2022): 670. http://dx.doi.org/10.3390/electronics11040670.

Full text
Abstract:
The federated learning technique (FL) supports the collaborative training of machine learning and deep learning models for edge network optimization. However, a complex edge network with heterogeneous devices having different constraints can affect FL performance, which poses a problem in this area. Accordingly, recent research has designed new frameworks and approaches to improve federated learning processes. The purpose of this study is to provide an overview of the FL technique and its applicability in different domains. The key focus of the paper is to produce a systematic literature review of recent research studies that clearly describes the adoption of FL in edge networks. The search procedure was performed from April 2020 to May 2021, with an initial total of 7546 papers published between 2016 and 2020. The systematic literature review synthesizes and compares the algorithms, models, and frameworks of federated learning. Additionally, we have presented the scope of FL applications in different industries and domains. Careful investigation of the studies revealed that 25% used FL in IoT and edge-based applications, 30% implemented the FL concept in the health industry, 10% for NLP, 10% for autonomous vehicles, 10% for mobile services, 10% for recommender systems, and 5% for FinTech. A taxonomy is also proposed on implementing FL for edge networks in different domains. Moreover, another novelty of this paper is that the datasets used for the implementation of FL are discussed in detail to provide researchers an overview of the distributed datasets which can be used for employing FL techniques. Lastly, this study discusses the current challenges of implementing the FL technique.
We have found that the areas of medical AI, IoT, edge systems, and the autonomous industry can adopt FL in many of their sub-domains; however, the challenges these domains can encounter are statistical heterogeneity, system heterogeneity, data imbalance, resource allocation, and privacy.
APA, Harvard, Vancouver, ISO, and other styles
36

Lin, Zu-Jin, Jian Lü, Maochun Hong, and Rong Cao. "Metal–organic frameworks based on flexible ligands (FL-MOFs): structures and applications." Chem. Soc. Rev. 43, no. 16 (2014): 5867–95. http://dx.doi.org/10.1039/c3cs60483g.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Liu, Zelei, Yuanyuan Chen, Yansong Zhao, et al. "Contribution-Aware Federated Learning for Smart Healthcare." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 11 (2022): 12396–404. http://dx.doi.org/10.1609/aaai.v36i11.21505.

Full text
Abstract:
Artificial intelligence (AI) is a promising technology to transform the healthcare industry. Due to the highly sensitive nature of patient data, federated learning (FL) is often leveraged to build models for smart healthcare applications. Existing deployed FL frameworks cannot address the key issues of varying data quality and heterogeneous data distributions across multiple institutions in this sector. In this paper, we report our experience developing and deploying the Contribution-Aware Federated Learning (CAFL) framework for smart healthcare. It provides an efficient and accurate approach to fairly evaluate FL participants' contribution to model performance without exposing their private data, and improves the FL model training protocol to allow the best performing intermediate models to be distributed to participants for FL training. Since its deployment in Yidu Cloud Technology Inc. in March 2021, CAFL has served 8 well-established medical institutions in China to build healthcare decision support models. It can perform contribution evaluations 2.84 times faster than the best existing approach, and has improved the average accuracy of the resulting models by 2.62% compared to the previous system (which is significant in industrial settings). To our knowledge, it is the first contribution-aware federated learning successfully deployed in the healthcare industry.
APA, Harvard, Vancouver, ISO, and other styles
38

Alhafiz, Fatimah, and Abdullah Basuhail. "The Data Heterogeneity Issue Regarding COVID-19 Lung Imaging in Federated Learning: An Experimental Study." Big Data and Cognitive Computing 9, no. 1 (2025): 11. https://doi.org/10.3390/bdcc9010011.

Full text
Abstract:
Federated learning (FL) has emerged as a transformative framework for collaborative learning, offering robust model training across institutions while ensuring data privacy. In the context of making a COVID-19 diagnosis using lung imaging, FL enables institutions to collaboratively train a global model without sharing sensitive patient data. A central manager aggregates local model updates to compute global updates, ensuring secure and effective integration. The global model’s generalization capability is evaluated using centralized testing data before dissemination to participating nodes, where local assessments facilitate personalized adaptations tailored to diverse datasets. Addressing data heterogeneity, a critical challenge in medical imaging, is essential for improving both global performance and local personalization in FL systems. This study emphasizes the importance of recognizing real-world data variability before proposing solutions to tackle non-independent and non-identically distributed (non-IID) data. We investigate the impact of data heterogeneity on FL performance in COVID-19 lung imaging across seven distinct heterogeneity settings. By comprehensively evaluating models using generalization and personalization metrics, we highlight challenges and opportunities for optimizing FL frameworks. The findings provide valuable insights that can guide future research toward achieving a balance between global generalization and local adaptation, ultimately enhancing diagnostic accuracy and patient outcomes in COVID-19 lung imaging.
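Non-IID client partitions of the kind this study evaluates are often simulated by drawing per-client label proportions from a Dirichlet distribution, where a smaller concentration parameter yields more skewed clients. The sketch below is a generic illustration of that technique; the function name, parameters, and toy labels are assumptions, not the paper's experimental setup:

```python
import random

def dirichlet_partition(labels, n_clients, alpha, seed=0):
    """Split sample indices across clients: for each class, draw client
    proportions from Dir(alpha) (via normalized Gamma draws) and slice the
    class's shuffled indices accordingly. Smaller alpha -> more non-IID."""
    rng = random.Random(seed)
    clients = [[] for _ in range(n_clients)]
    for c in sorted(set(labels)):
        idx = [i for i, y in enumerate(labels) if y == c]
        rng.shuffle(idx)
        props = [rng.gammavariate(alpha, 1.0) for _ in range(n_clients)]
        total = sum(props)
        bounds, acc = [0], 0.0
        for p in props[:-1]:
            acc += p / total
            bounds.append(int(acc * len(idx)))
        bounds.append(len(idx))
        for k in range(n_clients):
            clients[k].extend(idx[bounds[k]:bounds[k + 1]])
    return clients

labels = [i % 3 for i in range(300)]  # toy dataset: 3 balanced classes
parts = dirichlet_partition(labels, n_clients=5, alpha=0.5)
```

Sweeping `alpha` (e.g. from 100 down to 0.1) produces a spectrum from near-IID to highly skewed partitions, which is how heterogeneity settings like the seven in this study are typically constructed.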
APA, Harvard, Vancouver, ISO, and other styles
39

Guo, Jinnan, Peter Pietzuch, Andrew Paverd, and Kapil Vaswani. "Trustworthy AI using Confidential Federated Learning." Queue 22, no. 2 (2024): 87–107. http://dx.doi.org/10.1145/3665220.

Full text
Abstract:
The principles of security, privacy, accountability, transparency, and fairness are the cornerstones of modern AI regulations. Classic FL was designed with a strong emphasis on security and privacy, at the cost of transparency and accountability. Confidential federated learning (CFL) addresses this gap with a careful combination of FL with trusted execution environments (TEEs) and commitments. In addition, CFL brings other desirable security properties, such as code-based access control, model confidentiality, and protection of models during inference. Recent advances in confidential computing, such as confidential containers and confidential GPUs, mean that existing FL frameworks can be extended seamlessly to support CFL with low overheads. For these reasons, CFL is likely to become the default mode for deploying FL workloads.
APA, Harvard, Vancouver, ISO, and other styles
40

Méndez Prado, Silvia Mariela, Marlon José Zambrano Franco, Susana Gabriela Zambrano Zapata, Katherine Malena Chiluiza García, Patricia Everaert, and Martin Valcke. "A Systematic Review of Financial Literacy Research in Latin America and The Caribbean." Sustainability 14, no. 7 (2022): 3814. http://dx.doi.org/10.3390/su14073814.

Full text
Abstract:
Several well-known studies have remarked on the low financial literacy (FL) levels in Latin America and the Caribbean (LAC), which represent a problem in an economic context of change and uncertainty. This fact gives us the opportunity to evaluate the current state of literature related to FL in the region. The main list of identified keywords allowed the PRISMA methodology to guide the systematic literature review and analysis procedure. During 2016–2022, the FL search yielded around 4500 FL manuscripts worldwide, but only 65 articles were related to the scope of our analysis (which involved looking at LAC countries). Being the first review from an LAC country about all LAC countries, the findings highlight a lack of FL research focus on regional needs, gender gaps affecting women, and conceptual frameworks used to develop efficient educational program interventions. Most studies in this review build on the OECD definition of FL, but the financial attitude dimension often seems to be omitted from the analyses. These findings open the discussion about efficient policy design concerning FL development in LAC.
41

Usman, Muhammad, Mario Luca Bernardi, and Marta Cimitile. "Introducing a Quality-Driven Approach for Federated Learning." Sensors 25, no. 10 (2025): 3083. https://doi.org/10.3390/s25103083.

Full text
Abstract:
The advancement of pervasive systems has made distributed real-world data across multiple devices increasingly valuable for training machine learning models. Traditional centralized learning approaches face limitations such as data security concerns and computational constraints. Federated learning (FL) provides privacy benefits but is hindered by challenges like data heterogeneity (Non-IID distributions) and noise heterogeneity (mislabeling and inconsistencies in local datasets), which degrade model performance. This paper proposes a model-agnostic, quality-driven approach, called DQFed, for training machine learning models across distributed and diverse client datasets while preserving data privacy. The DQFed framework demonstrates improvements in accuracy and reliability over existing FL frameworks. By effectively addressing class imbalance and noise heterogeneity, DQFed offers a robust and versatile solution for federated learning applications in diverse fields.
42

Naseh, David, Mahdi Abdollahpour, and Daniele Tarchi. "Real-World Implementation and Performance Analysis of Distributed Learning Frameworks for 6G IoT Applications." Information 15, no. 4 (2024): 190. http://dx.doi.org/10.3390/info15040190.

Full text
Abstract:
This paper explores the practical implementation and performance analysis of distributed learning (DL) frameworks on various client platforms, responding to the dynamic landscape of 6G technology and the pressing need for a fully connected distributed intelligence network for Internet of Things (IoT) devices. The heterogeneous nature of clients and data presents challenges for effective federated learning (FL) techniques, prompting our exploration of federated transfer learning (FTL) on Raspberry Pi, Odroid, and virtual machine platforms. Our study provides a detailed examination of the design, implementation, and evaluation of the FTL framework, specifically adapted to the unique constraints of various IoT platforms. By measuring the accuracy of FTL across diverse clients, we reveal its superior performance over traditional FL, particularly in terms of faster training and higher accuracy, due to the use of transfer learning (TL). Real-world measurements further demonstrate improved resource efficiency with lower average load, memory usage, temperature, power, and energy consumption when FTL is implemented compared to FL. Our experiments also showcase FTL’s robustness in scenarios where users leave the server’s communication coverage, resulting in fewer clients and less data for training. This adaptability underscores the effectiveness of FTL in environments with limited data, clients, and resources, contributing valuable information to the intersection of edge computing and DL for the 6G IoT.
43

Olagunju, Funminiyi. "Federated Learning in the Era of Data Privacy: An Exhaustive Survey of Privacy Preserving Techniques, Legal Frameworks, and Ethical Considerations." International Journal of Future Engineering Innovations 2, no. 3 (2025): 153–60. https://doi.org/10.54660/ijfei.2025.2.3.153-160.

Full text
Abstract:
Federated Learning (FL) has emerged as a transformative approach to decentralized machine learning, enabling model training across multiple devices without centralizing sensitive data. While FL inherently supports privacy, growing concerns around data security, regulatory compliance, and ethical accountability have led to the development of advanced privacy-preserving mechanisms. This systematic review, conducted in adherence with PRISMA guidelines, explores the landscape of privacy-enhancing techniques, legal regulations, and ethical implications associated with Federated Learning. We sourced peer-reviewed literature from 2016 to 2024 across major scientific databases, including IEEE Xplore, SpringerLink, and ACM Digital Library. The review identifies and categorizes approaches such as Differential Privacy, Homomorphic Encryption, and Secure Multi-party Computation. We further evaluate the alignment of FL practices with legal standards such as GDPR, HIPAA, and CCPA, and highlight ethical considerations including fairness, transparency, and user consent. Our analysis reveals critical gaps in interdisciplinary integration, particularly the need for frameworks that simultaneously meet technical robustness, legal compliance, and ethical accountability. We propose directions for future research, emphasizing a holistic approach that incorporates multi-stakeholder engagement to realize trustworthy and scalable FL systems.
44

Zhang, Yu, Xiaowei Peng, and Hequn Xian. "pFedBASC: Personalized Federated Learning with Blockchain-Assisted Semi-Centralized Framework." Future Internet 16, no. 5 (2024): 164. http://dx.doi.org/10.3390/fi16050164.

Full text
Abstract:
As network technology advances, there is an increasing need for a trusted new-generation information management system. Blockchain technology provides a decentralized, transparent, and tamper-proof foundation. Meanwhile, data islands have become a significant obstacle for machine learning applications. Although federated learning (FL) ensures data privacy protection, server-side security concerns persist. Traditional methods have employed a blockchain system in FL frameworks to maintain a tamper-proof global model database. In this context, we propose a novel personalized federated learning (pFL) with blockchain-assisted semi-centralized framework, pFedBASC. This approach, tailored for the Internet of Things (IoT) scenarios, constructs a semi-centralized IoT structure and utilizes trusted network connections to support FL. We concentrate on designing the aggregation process and FL algorithm, as well as the block structure. To address data heterogeneity and communication costs, we propose a pFL method called FedHype. In this method, each client is assigned a compact hypernetwork (HN) alongside a normal target network (TN) whose parameters are generated by the HN. Clients pull together other clients’ HNs for local aggregation to personalize their TNs, reducing communication costs. Furthermore, FedHype can be integrated with other existing algorithms, enhancing its functionality. Experimental results reveal that pFedBASC effectively tackles data heterogeneity issues while maintaining positive accuracy, communication efficiency, and robustness.
45

Balaji, Soundararajan. "Designing Federated Learning Systems for Collaborative Financial Analytics." International Journal of Leading Research Publication 5, no. 2 (2024): 1–12. https://doi.org/10.5281/zenodo.15051139.

Full text
Abstract:
Federated Learning (FL) has emerged as a transformative paradigm for privacy-preserving collaborative machine learning, particularly in the financial sector, where data privacy and regulatory compliance are paramount. By enabling decentralized model training across distributed datasets without centralized data aggregation, FL addresses critical challenges in financial analytics, such as fraud detection, risk assessment, credit scoring, and cross-institutional insights. We will explore the principles, applications, and challenges of FL in finance, emphasizing its potential to enhance model robustness, ensure data sovereignty, and comply with stringent regulations like GDPR and anti-money laundering frameworks. Key challenges include data heterogeneity, secure aggregation techniques, regulatory alignment, and resistance to adversarial attacks. Case studies from banking, regulatory bodies, and financial intermediaries illustrate successful implementations, underscoring FL's capacity to unlock collaborative insights while preserving confidentiality. The study concludes with design principles for scalable, secure FL systems and highlights future directions for adoption in global financial ecosystems.
46

Nagaraj Naik, Vikranth B M. "Federated Learning: A Comprehensive Survey on Types, Applications, Challenges, and Future Directions." Communications on Applied Nonlinear Analysis 32, no. 9s (2025): 2089–99. https://doi.org/10.52783/cana.v32.4450.

Full text
Abstract:
Federated Learning (FL) is a machine learning (ML) paradigm in which multiple parties jointly train a model without sharing their raw data, enabling privacy-preserving predictive analytics with low data transmission cost. This work surveys the basic concepts of FL, including its three categories: horizontal FL, vertical FL, and federated transfer learning, each suited to a different type of data partitioning. FL is therefore well suited to applications in healthcare, finance, edge computing, and IoT devices, where privacy-aware AI development is essential. FL, however, has its own challenges, including communication overhead, computational costs, data heterogeneity, security, and fairness. The decentralized nature of the clients requires frequent exchanges of model updates, imposing bandwidth and latency constraints that can quietly hinder scalability. Furthermore, an ongoing challenge in FL concerns fairness, i.e., reducing potential biases that emerge from unbalanced data distributions. Future FL research is likely to focus on communication efficiency, for example through adaptive compression mechanisms and zero-aggregation. To improve accuracy in heterogeneous environments, personalized federated learning is also gaining ground, tailoring models to each client while leveraging global knowledge. Additionally, continuing breakthroughs in federated reinforcement learning could further broaden the applicability of FL to dynamic, autonomous decision-making systems. Addressing these challenges will be key to creating scalable, secure, and efficient FL frameworks for mainstream integration into privacy-sensitive fields. This survey presents a thorough introduction to FL, covering its advantages, its disadvantages, and future research trends.
As FL overcomes essential technical constraints and refines its algorithmic architecture, it could transform the decentralized AI space and foster innovation in various fields.
47

Moshawrab, Mohammad, Mehdi Adda, Abdenour Bouzouane, Hussein Ibrahim, and Ali Raad. "A Maneuver in the Trade-Off Space of Federated Learning Aggregation Frameworks Secured with Polymorphic Encryption: PolyFLAM and PolyFLAP Frameworks." Electronics 13, no. 18 (2024): 3716. http://dx.doi.org/10.3390/electronics13183716.

Full text
Abstract:
Maintaining user privacy in machine learning is a critical concern due to the implications of data collection. Federated learning (FL) has emerged as a promising solution by sharing trained models rather than user data. However, FL still faces several challenges, particularly in terms of security and privacy, such as vulnerability to inference attacks. There is an inherent trade-off between communication traffic across the network and computational costs on the server or client, which this paper aims to address by maneuvering between these performance parameters. To tackle these issues, this paper proposes two complementary frameworks: PolyFLAM (“Polymorphic Federated Learning Aggregation of Models”) and PolyFLAP (“Polymorphic Federated Learning Aggregation of Parameters”). These frameworks provide two options to suit the needs of users, depending on whether they prioritize reducing communication across the network or lowering computational costs on the server or client. PolyFLAM reduces computational costs by exchanging entire models, eliminating the need to rebuild models from parameters. In contrast, PolyFLAP reduces communication costs by transmitting only model parameters, which are smaller in size compared to entire models. Both frameworks are supported by polymorphic encryption, ensuring privacy is maintained even in cases of key leakage. Furthermore, these frameworks offer five different machine learning models, including support vector machines, logistic regression, Gaussian naïve Bayes, stochastic gradient descent, and multi-layer perceptron, to cover as many real-life problems as possible. The evaluation of these frameworks with simulated and real-life datasets demonstrated that they can effectively withstand various attacks, including inference attacks that aim to compromise user privacy by capturing exchanged models or parameters.
48

Myakala, Praveen Kumar, Prudhvi Naayini, and Srikanth Kamatala. "A Survey on Federated Learning for TinyML: Challenges, Techniques, and Future Directions." Partners Universal International Innovation Journal (PUIIJ) 03, no. 02 (2025): 97–114. https://doi.org/10.5281/zenodo.15240508.

Full text
Abstract:
The convergence of Federated Learning (FL) and Tiny Machine Learning (TinyML) represents a transformative step toward enabling intelligent and privacy-preserving applications on resource-constrained edge devices. TinyML focuses on deploying lightweight machine learning models on microcontrollers and other low-power devices, whereas FL facilitates decentralized learning across distributed datasets without compromising user privacy. This survey provides a comprehensive review of the current state of research at the intersection of FL and TinyML, exploring model optimization techniques such as quantization, pruning, and knowledge distillation, as well as communication-efficient algorithms such as federated averaging and gradient sparsification. Key challenges, including ensuring energy efficiency, scalability, and security in FL-TinyML systems, are highlighted. Real-world applications, such as revolutionizing personalized healthcare, enabling smarter IoT devices, and advancing industrial automation, demonstrate the transformative potential of FL-TinyML to drive innovations in edge intelligence. This survey provides a timely and essential guide to the emerging field of FL-TinyML, paving the way for future research and development. Finally, this study identifies open research questions and proposes future directions, including hybrid optimization approaches, standardized evaluation frameworks, and the integration of blockchain for decentralized trust management.
49

Qi, Tao, Huili Wang, and Yongfeng Huang. "Towards the Robustness of Differentially Private Federated Learning." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 18 (2024): 19911–19. http://dx.doi.org/10.1609/aaai.v38i18.29967.

Full text
Abstract:
Robustness and privacy protection are two important factors of trustworthy federated learning (FL). Existing FL works usually secure data privacy by perturbing local model gradients via the differential privacy (DP) technique, or defend against poisoning attacks by filtering the local gradients in the outlier of the gradient distribution before aggregation. However, these two issues are often addressed independently in existing works, and how to secure federated learning in both privacy and robustness still needs further exploration. In this paper, we unveil that although DP noisy perturbation can improve the learning robustness, DP-FL frameworks are not inherently robust and are vulnerable to a carefully-designed attack method. Furthermore, we reveal that it is challenging for existing robust FL methods to defend against attacks on DP-FL. This can be attributed to the fact that the local gradients of DP-FL are perturbed by random noise, and the selected central gradients inevitably incorporate a higher proportion of poisoned gradients compared to conventional FL. To address this problem, we further propose a new defense method for DP-FL (named Robust-DPFL), which can effectively distinguish poisoned and clean local gradients in DP-FL and robustly update the global model. Experiments on three benchmark datasets demonstrate that baseline methods cannot ensure task accuracy, data privacy, and robustness simultaneously, while Robust-DPFL can effectively enhance the privacy protection and robustness of federated learning meanwhile maintain the task performance.
50

Zamboni, Camilla. "Language, Play, Storytelling: Tabletop Role-Playing Games in the Italian L2 Classroom." Italica 101, no. 1 (2024): 149–70. https://doi.org/10.5406/23256672.101.1.09.

Full text
Abstract:
Within the larger context of gamification and game-based learning, in this article I argue that tabletop role-playing games (TTRPGs) in particular, which are centered around storytelling and communication, are effective tools that second and foreign language (L2/FL) instructors can use in their pedagogical planning, both as frameworks for thinking about language learning and as practical affordances to leverage in class interaction. I will define what TTRPGs are and how they can be beneficial for language learning; I will discuss ways to implement TTRPGs in our curricula in various forms and degrees, following the gameful framework theorized by Jonathon Reinhardt; finally, I will present a micro tabletop role-playing game that I recently developed with Alessia Caviglia, Planétes, which was designed with gameful affordances for L2/FL learning in mind and is a preliminary exploration of "game-based L2TL."