Journal articles on the topic 'Client-Side Optimization'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the top 50 journal articles for your research on the topic 'Client-Side Optimization.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles across a wide variety of disciplines and organise your bibliography correctly.

1

Maniezzo, Vittorio, Marco A. Boschetti, Antonella Carbonaro, Moreno Marzolla, and Francesco Strappaveccia. "Client-side Computational Optimization." ACM Transactions on Mathematical Software 45, no. 2 (2019): 1–16. http://dx.doi.org/10.1145/3309549.

2

Feng, Wei Chang. "Optimization Project on Collecting and Integrating Video Resources." Applied Mechanics and Materials 427-429 (September 2013): 2462–65. http://dx.doi.org/10.4028/www.scientific.net/amm.427-429.2462.

Abstract:
The E-Box multimedia system was developed to exploit the rich audio and video resources on the Internet. On the server side, it automatically searches for and integrates network video and audio resources and sends them to the client side, where the user watches them as real-time broadcast TV, operated entirely by remote control; in short, it is a very easy-to-use multimedia system. This article introduces its infrastructure and main technical ideas, along with details of the server side and the client side. It also comprehensively elaborates the improvements made to the collection and integration of video resources.
3

Nasenok, Kyrylo, and Maria Voitsekhovska. "Client-side rendering issues in the modern worldwide network." Technical sciences and technologies, no. 4 (38) (December 30, 2024): 197–207. https://doi.org/10.25140/2411-5363-2024-4(38)-197-207.

Abstract:
Client-side rendering is an approach to rendering web applications, allowing content to be processed and displayed directly in a browser. This method enables web developers to create modular and component-based code that is easily extendable, reusable, and simplifies application maintenance. Client-side rendering has revolutionized the web industry, as evidenced by its widespread adoption: as of 2024, approximately 9.5 million websites, or 8% of all active websites worldwide, use this approach, handling approximately 17% of total global web traffic. Despite its advantages, client-side rendering has certain limitations. It can affect various aspects of security and SEO optimization due to increased vulnerability to attacks and challenges in search engine indexing. The most significant drawback of this approach is the substantial increase in the size of files required for complete application loading and rendering. While this is not critical for smaller projects, it can be a significant strain on network resources for large, high-traffic sites with millions of daily visitors. This requires careful attention to content optimization and the use of additional tools to maintain stable application performance. The problem highlights the growth of global web traffic in recent years and the need to optimize this traffic, as it grows faster than the physical communications infrastructure that carries it around the world. It also underscores that while client-side rendering enhances development ease, maintainability, and application performance, it introduces new challenges such as increased application file size, SEO issues, and resource allocation difficulties. This article provides an overview of current issues with client-side rendering and their impact on the performance and functionality of web applications. It analyses the most common client-side rendering issues, including challenges with search engines, usability on low-performance devices, and the large file sizes required to render and display the application. It also examines practices and approaches for addressing these issues. Future research should focus on optimising existing solutions and migrating current projects to technologies that address client-side rendering challenges, such as server-side rendering and static page generation. In addition, it is important to investigate potential migration difficulties, as these methods require more server-side processing, which adds additional semantics, configuration and deployment work to the project.
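For readers who want a concrete picture of the bundle-size mitigation this abstract alludes to, the following is a minimal, illustrative TypeScript sketch of route-level code splitting with dynamic import(); the module paths and the render() contract are assumptions, not something taken from the cited article.

// Hypothetical lazily loaded views: each import() becomes a separate chunk under
// bundlers such as webpack or Vite, so the initial payload stays small.
type ViewModule = { render: (root: HTMLElement) => void };

const routes: Record<string, () => Promise<ViewModule>> = {
  "/": () => import("./views/home"),            // assumed module exporting render()
  "/reports": () => import("./views/reports"),  // heavy view, fetched only when visited
};

async function navigate(path: string): Promise<void> {
  const load = routes[path] ?? routes["/"];
  const view = await load();                    // network cost paid on first visit only
  view.render(document.getElementById("app")!);
}

window.addEventListener("popstate", () => navigate(location.pathname));
void navigate(location.pathname);
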
4

Sidorov, Denys. "COMPARATIVE ANALYSIS OF SERVER-SIDE RENDERING AND CLIENT-SIDE RENDERING IN FRONTEND WEB APPLICATIONS." Annali d'Italia 60 (October 25, 2024): 78–81. https://doi.org/10.5281/zenodo.13994015.

Abstract:
This article presents a comparative analysis of server-side rendering (SSR) and client-side rendering (CSR). It examines the impact of rendering methods on performance and user experience (UX). The main advantages and principles of each approach are discussed, including download speed, interactivity, and SEO optimization. The importance of choosing an approach based on specific project requirements is addressed, as well as the possibilities of hybrid rendering as a means of combining the best features of both methods.
5

Huang, He, Jiaxin Liu, Ling Gao, and Lin Ning. "The output flexibility optimization control of load-side wind turbine considering client demand response." Journal of Physics: Conference Series 2876, no. 1 (2024): 012036. http://dx.doi.org/10.1088/1742-6596/2876/1/012036.

Abstract:
This study focuses on optimizing the output flexibility control of load-side wind turbines while considering client demand response. Taking the potential and role of client demand response into account, a load-side wind turbine output objective function was constructed, client demand response constraints were designed, and a two-layer model for flexible optimization control of load-side wind turbine output was built. The upper layer is the power allocation layer, whose core objective is to achieve optimal power distribution among wind turbines under various constraint conditions. The lower layer is the frequency control layer, which optimizes and controls the dynamic frequency of load-side wind turbine output. The experimental results show that the optimized control strategy not only enhances the efficiency of wind turbines but also effectively reduces energy loss and improves the stability and reliability of the power grid. Through precise calculation and real-time adjustment of the two-layer model, the output of wind turbines can be predicted and controlled more accurately, thereby ensuring the supply-demand balance of the power grid and reducing potential risks caused by frequency fluctuations and power offsets.
6

A Jartarghar, Harish, Girish Rao Salanke, Ashok Kumar A.R, Sharvani G.S, and Shivakumar Dalali. "React Apps with Server-Side Rendering: Next.js." Journal of Telecommunication, Electronic and Computer Engineering (JTEC) 14, no. 4 (2022): 25–29. http://dx.doi.org/10.54554/jtec.2022.14.04.005.

Abstract:
Web applications are developed using a variety of web frameworks, and developers can pick from a wide range of them when building an application. The React.js library provides flexibility for building reusable User Interface (UI) components. It uses a client-side rendering approach, which loads the HTML content using JavaScript. Client-side rendering causes the page to load slowly, and the client communicates with the server only for run-time data. The Next.js framework solves this problem by using server-side rendering: when the browser requests a web page, the server processes it by fetching the user's specific data and sending the rendered page back to the browser over the internet. Next.js also helps search engines crawl the site, leading to better Search Engine Optimization (SEO).
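As a rough illustration of the server-side rendering flow described in this abstract, here is a minimal Next.js page sketch in TypeScript; the product API URL and data shape are hypothetical.

// pages/products/[id].tsx: data is fetched on the server for every request,
// so the browser (and search engine crawlers) receive fully rendered HTML.
import type { GetServerSideProps } from "next";

type Product = { id: string; name: string; price: number };

export const getServerSideProps: GetServerSideProps<{ product: Product }> = async (ctx) => {
  const id = ctx.params?.id as string;
  const res = await fetch(`https://api.example.com/products/${id}`); // hypothetical API
  const product: Product = await res.json();
  return { props: { product } };
};

export default function ProductPage({ product }: { product: Product }) {
  return <h1>{product.name} ({product.price} USD)</h1>;
}
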
7

Sidorov, Denys. "ANALYSIS OF STRATEGIES FOR MOBILE OPTIMIZATION IN FRONTEND DEVELOPMENT." Deutsche internationale Zeitschrift für zeitgenössische Wissenschaft 92 (November 18, 2024): 110–12. https://doi.org/10.5281/zenodo.14181578.

Abstract:
The article analyzes mobile optimization strategies in frontend development aimed at enhancing the performance and usability of mobile applications. It examines architectural approaches such as modular and microservices structures, as well as rendering methods (server-side and client-side) and their impact on load speed and interface responsiveness. Special attention is given to application state management methods, responsive design, and adherence to accessibility standards, which together improve the user experience for a wide audience.
8

Aditya Kappagantula. "Optimizing JavaScript application performance: A comprehensive guide." International Journal of Science and Research Archive 14, no. 1 (2025): 1279–303. https://doi.org/10.30574/ijsra.2025.14.1.0234.

Abstract:
Modern JavaScript applications have evolved significantly, presenting both opportunities and challenges in web development. This comprehensive article explores various optimization strategies across JavaScript loading, code-level improvements, network efficiency, CSS optimization, data management, and performance monitoring. The article examines how modern frameworks impact application complexity and discusses practical approaches to maintain optimal performance while delivering feature-rich experiences. Through an examination of client-side and server-side optimization techniques, caching strategies, and monitoring systems, this research provides insights into creating high-performance web applications that meet contemporary user expectations while managing technical complexity.
9

Zhang, Yuan, Yuan He, Hai Wei Wu, et al. "Rendering optimization method and implementation of power grid WebGIS based on Web Worker." Journal of Physics: Conference Series 2418, no. 1 (2023): 012077. http://dx.doi.org/10.1088/1742-6596/2418/1/012077.

Abstract:
In power grid WebGIS applications, the graphical display of power grid equipment is one of the most basic functions. However, the massive data brought by the growing number of power grid devices creates difficulties for both server-side storage and client-side rendering. Targeting the characteristics of power grid GIS data, this paper presents a Web Worker-based power grid WebGIS rendering optimization method and implements it in a program. The method improves client rendering speed and optimizes the display effect by marking display rules after data vectorization, simplifying line data, generating vector tiles dynamically, and dodging station labels. Test results show that this method can effectively improve the fluency and user experience of existing power grid GIS applications.
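The general pattern behind such Web Worker offloading can be sketched in TypeScript as follows; the message shape, the naive thinning step, and the drawLines() helper are illustrative assumptions rather than the paper's actual simplification algorithm.

// main.ts: hand heavy geometry processing to a worker so the map UI thread stays responsive.
declare function drawLines(lines: number[][][]): void;   // hypothetical rendering helper
declare const rawPowerLines: number[][][];                // polylines as arrays of [x, y] points

const worker = new Worker(new URL("./simplify.worker.ts", import.meta.url), { type: "module" });
worker.onmessage = (e: MessageEvent<number[][][]>) => drawLines(e.data);
worker.postMessage(rawPowerLines);

// simplify.worker.ts: runs off the main thread.
self.onmessage = (e: MessageEvent<number[][][]>) => {
  // Naive thinning: keep every second vertex plus the endpoint (placeholder for real simplification).
  const simplified = e.data.map(line => line.filter((_, i) => i % 2 === 0 || i === line.length - 1));
  (self as unknown as Worker).postMessage(simplified);    // send the thinned lines back
};
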
10

Diksikumari Suthar. "A Hybrid MCDA Optimization Approach for Image Compression in Web Performance Enhancement." Journal of Information Systems Engineering and Management 10, no. 40s (2025): 549–58. https://doi.org/10.52783/jisem.v10i40s.7327.

Abstract:
Introduction: The surge in demand for modern applications that rely on high-resolution images creates a web performance optimization problem. Most traditional client-side and server-side image optimization processes tend to overlook a complete solution that takes load time, response time, and bandwidth usage into account. This research proposes an adaptive decision-making paradigm for image compression based on a hybrid optimization framework combining Multi-Criteria Decision Analysis (MCDA), using Entropy Weighting and TOPSIS, with Optimization Theory (the Lagrange Multiplier Method). Analysis of real-world datasets shows that the hybrid optimization framework, with its integrated strategies, outperforms standalone methods, achieving an efficiency ranking of over 92% in the performance evaluation and validating the principles of the proposed framework. Removing these modifications eliminates the claimed improvement, and ANOVA statistical tests confirm that the improvements are statistically significant. Machine-learning-based dynamic image adaptation algorithms will be included in future work.
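To make the ranking mechanics concrete, here is a small TypeScript sketch of entropy-weighted TOPSIS over a decision matrix; the candidate compression settings and criteria values are made up, and all criteria are treated as benefit-type (higher is better), which is a simplification of the paper's framework.

type Matrix = number[][]; // rows = alternatives (compression settings), columns = criteria

function entropyWeights(x: Matrix): number[] {
  const m = x.length;
  const k = 1 / Math.log(m);
  const colSums = x[0].map((_, j) => x.reduce((s, row) => s + row[j], 0));
  const diversity = x[0].map((_, j) => {
    const entropy = -k * x.reduce((s, row) => {
      const p = row[j] / colSums[j];
      return s + (p > 0 ? p * Math.log(p) : 0);
    }, 0);
    return 1 - entropy; // higher divergence means a more informative criterion
  });
  const total = diversity.reduce((a, b) => a + b, 0);
  return diversity.map(d => d / total);
}

function topsis(x: Matrix, w: number[]): number[] {
  const norms = x[0].map((_, j) => Math.sqrt(x.reduce((s, row) => s + row[j] ** 2, 0)));
  const v = x.map(row => row.map((xij, j) => (xij / norms[j]) * w[j])); // weighted, normalized
  const best = x[0].map((_, j) => Math.max(...v.map(r => r[j])));
  const worst = x[0].map((_, j) => Math.min(...v.map(r => r[j])));
  return v.map(row => {
    const dPlus = Math.sqrt(row.reduce((s, vij, j) => s + (vij - best[j]) ** 2, 0));
    const dMinus = Math.sqrt(row.reduce((s, vij, j) => s + (vij - worst[j]) ** 2, 0));
    return dMinus / (dPlus + dMinus); // closeness coefficient: higher ranks better
  });
}

// Illustrative criteria: inverse load time, inverse response time, inverse bandwidth use.
const candidates: Matrix = [
  [0.80, 0.70, 0.90],
  [0.60, 0.90, 0.70],
  [0.90, 0.60, 0.60],
];
console.log(topsis(candidates, entropyWeights(candidates)));
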
11

Wang, Xiujuan, Kangmiao Chen, Keke Wang, Zhengxiang Wang, Kangfeng Zheng, and Jiayue Zhang. "FedKG: A Knowledge Distillation-Based Federated Graph Method for Social Bot Detection." Sensors 24, no. 11 (2024): 3481. http://dx.doi.org/10.3390/s24113481.

Abstract:
Malicious social bots pose a serious threat to social network security by spreading false information and guiding bad opinions in social networks. The singularity and scarcity of single organization data and the high cost of labeling social bots have given rise to the construction of federated models that combine federated learning with social bot detection. In this paper, we first combine the federated learning framework with the Relational Graph Convolutional Neural Network (RGCN) model to achieve federated social bot detection. A class-level cross entropy loss function is applied in the local model training to mitigate the effects of the class imbalance problem in local data. To address the data heterogeneity issue from multiple participants, we optimize the classical federated learning algorithm by applying knowledge distillation methods. Specifically, we adjust the client-side and server-side models separately: training a global generator to generate pseudo-samples based on the local data distribution knowledge to correct the optimization direction of client-side classification models, and integrating client-side classification models’ knowledge on the server side to guide the training of the global classification model. We conduct extensive experiments on widely used datasets, and the results demonstrate the effectiveness of our approach in social bot detection in heterogeneous data scenarios. Compared to baseline methods, our approach achieves a nearly 3–10% improvement in detection accuracy when the data heterogeneity is larger. Additionally, our method achieves the specified accuracy with minimal communication rounds.
12

Jain, Vivek. "The Importance of SEO in Modern JavaScript Frameworks." International Journal on Science and Technology 13, no. 4 (2022): 1–8. https://doi.org/10.5281/zenodo.14752339.

Abstract:
Modern JavaScript frameworks, such as React, Angular, and Vue.js, have revolutionized web development by enabling fast, dynamic, and interactive user experiences. However, these frameworks often face challenges with search engine optimization (SEO) due to their reliance on client-side rendering (CSR). This paper explores the critical role of SEO in modern JavaScript frameworks, analyzes technical challenges, and presents solutions such as server-side rendering (SSR), static site generation (SSG), and hybrid approaches. Practical examples and case studies demonstrate how developers can build SEO-friendly JavaScript-based applications.
13

Zhao, Xiaoyi, Xinchen Zhang, and Shasha Wang. "Research on Federated Learning Algorithms Driven by Data Heterogeneity." Frontiers in Computing and Intelligent Systems 11, no. 3 (2025): 127–30. https://doi.org/10.54097/0htgq955.

Abstract:
Federated Learning as a distributed machine learning paradigm enables collaborative modeling among multiple participants while preserving data privacy. However, challenges such as model convergence difficulties and low communication efficiency caused by client-side data heterogeneity remain critical bottlenecks hindering its practical applications. This paper constructs a three-dimensional analytical framework encompassing "client-local optimization, server aggregation strategies, and global convergence guarantees" based on mathematical characterization of data heterogeneity. Through systematic analysis of core research achievements, we reveal evolutionary patterns of key technical approaches including dynamic learning rate adaptation, gradient correction mechanisms, and heterogeneity-aware regularization. The study further identifies three fundamental challenges: multi-objective optimization dilemmas, inadequate adaptability to dynamic data drift, and theory-practice gaps. Future breakthroughs should focus on cross-modal knowledge transfer architectures and trusted federated learning mechanisms to enable reliable algorithm deployment in open environments.
14

Wang, Hanzhang, Wei Peng, Wenwen Wang, Yunping Lu, Pen-Chung Yew, and Weihua Zhang. "JavART: A Lightweight Rule-Based JIT Compiler using Translation Rules Extracted from a Learning Approach." Proceedings of the ACM on Programming Languages 9, OOPSLA1 (2025): 113–42. https://doi.org/10.1145/3720418.

Abstract:
The balance between the compilation/optimization time and the produced code quality is very important for Just-In-Time (JIT) compilation. Time-consuming optimizations can cause delayed deployment of the optimized code, and thus more execution time needs to be spent either in interpretation or in less-optimized code, leading to a performance drag. Such a performance drag can be detrimental to mobile and client-side devices such as those running Android, where applications are often short-running, frequently restarted and updated. To tackle this issue, this paper presents a lightweight learning-based, rule-guided dynamic compilation approach to generate good-quality native code directly without the need to go through the interpretive phase and the first-level optimization at runtime. Different from existing JIT compilers, the compilation process is driven by translation rules, which are automatically learned offline by taking advantage of existing JIT compilers. We have implemented a prototype of our approach based on Android 14 to demonstrate the feasibility and effectiveness of such a lightweight rule-based approach using several real-world applications. Results show that, compared to the default mode running with the interpreter and two tiers of JIT compilers, our prototype can achieve a 1.23× speedup on average. Our proposed compilation approach can also generate native code 5.5× faster than the existing first-tier JIT compiler in Android, with the generated code running 6% faster. We also implement and evaluate our approach on a client-side system running Hotspot JVM, and the results show an average of 1.20× speedup.
15

Yakovliev, M., and K. Filonenko. "EFFECTS OF LOADING SPEED ON THE SITE TRAFFIC CONVERSION." Системи управління, навігації та зв’язку. Збірник наукових праць 5, no. 57 (2019): 92–94. http://dx.doi.org/10.26906/sunz.2019.5.092.

Abstract:
The subject of this article is the impact of site loading speed on user conversions. The goal is to determine the optimal site loading speed for users and to guide further site optimization. The task is to identify and optimize the components that take the longest to load. The methods used are: optimization of the server side of the site; configuration of the Apache and Nginx servers; server-side gzip compression; use of CDNs to deliver popular JavaScript libraries; server-side caching settings; database optimization; optimization of TCP, TLS, and HTTP/2; and client-side optimization. The following results were obtained: using the methods described above on a working project, the site loading speed was improved by 40%. Over a period of 7 days, with the same amount of traffic, the conversion of users to customers increased from 7% to 15%, and the number of failures on the first visit to the site decreased by 30%. Conclusions: the work confirmed the correlation between site loading speed and the conversion of users to customers. By optimizing site loading speed, user conversion was doubled and the percentage of failures on the first visit to the site was reduced.
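The authors tuned Apache/Nginx directly; as a rough equivalent, the TypeScript/Node sketch below shows two of the listed measures (server-side gzip compression and long-lived caching headers for static assets) using the express and compression packages. The routes and cache lifetimes are assumptions for illustration only.

import express from "express";
import compression from "compression";

const app = express();
app.use(compression()); // gzip/deflate responses above the default size threshold

// Long-lived caching for fingerprinted static assets (JS/CSS bundles, images).
app.use("/static", express.static("public", { maxAge: "30d", immutable: true }));

app.get("/", (_req, res) => {
  res.set("Cache-Control", "no-cache"); // HTML is revalidated on every request
  res.send("<!doctype html><html><body>Hello</body></html>");
});

app.listen(3000);
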
16

Nursuwars, Firmansyah Maulana Sugiartana, Rahmi Nur Shofa, Asep Andang, and Nurul Hiron. "IoT APIs: Time Response Optimization in Edge Computing Data Communication for Power Phase Detection System." E3S Web of Conferences 500 (2024): 01013. http://dx.doi.org/10.1051/e3sconf/202450001013.

Abstract:
The IoT-based phase detection system is one of the important innovations in monitoring and managing modern electrical systems. However, challenges arise in presenting real-time data communication in the context of edge computing through the use of APIs. The problem that arises is the length of response time required in the data communication process, which can hamper the efficiency and accuracy of the system. The main objective of this research is to design and implement an effective strategy to reduce response time in API-based IoT data communication in phase detection systems. The method adopted includes a thorough analysis of existing communication processes and the development of optimized algorithms to speed up response times. This research approach involves measuring the response time before and after implementing an optimized algorithm on the client side, which in this case is represented by an Arduino device. Experiments were conducted using realistic data communication scenarios to validate the effectiveness of the proposed approach. The experimental results show that by optimizing the communication algorithm on the client side, the response time in IoT data communications can be significantly reduced. The response time which originally reached 4 seconds, was successfully reduced to only 0.8 seconds after the implementation of an optimized algorithm. This result has the potential to increase the operational efficiency of the system and expand the application of this technology in a variety of applications that require a fast response time.
17

Buil, Roman, Jesica de Armas, Daniel Riera, and Sandra Orozco. "Optimization of the Real-Time Response to Roadside Incidents through Heuristic and Linear Programming." Mathematics 9, no. 16 (2021): 1982. http://dx.doi.org/10.3390/math9161982.

Abstract:
This paper presents a solution for a real-world roadside assistance problem. Roadside incidents can happen at any time. Depending on the type of incident, a specific resource from the roadside assistance company can be sent on site. The problem of allocating resources to these roadside incidents can be stated as a multi-objective function and a large set of constraints, including priorities and preferences, resource capacities and skills, calendars, and extra hours. The request from the client is to have a real-time response and to attempt to use only open source tools. The optimization objectives to consider are the minimization of the operational costs and the minimization of the time to arrive at each incident. In this work, an innovative approach to near-optimally solving this problem in real-time is proposed, combining a heuristic approach and linear programming. The results show the great potential of this approach: operational costs were reduced by 19%, the use of external providers was reduced to half, and the productivity of the resources owned by the client was significantly increased.
18

Garg, Tashi. "UI Performance Optimization: The Interplay of Caching and." European Journal of Computer Science and Information Technology 13, no. 42 (2025): 83–92. https://doi.org/10.37745/ejcsit.2013/vol13n428392.

Abstract:
User interface performance directly impacts digital product success in competitive markets, with responsiveness influencing engagement, retention, and conversion metrics. This article addresses critical challenges in delivering smooth experiences across variable network conditions through two complementary optimization strategies: caching and pagination. The discussion demonstrates how effective implementation of these techniques creates interfaces that feel consistently responsive despite technical constraints. Client-side caching establishes immediate content availability through browser storage mechanisms, while server-side caching architectures optimize initial page loads through multi-tiered approaches. Strategic pagination patterns balance data volume management with intuitive user experiences, demonstrating how cursor-based techniques enhance both performance and usability. Visual feedback mechanisms bridge the gap between actual and perceived performance through skeleton screens, optimistic updates, and offline-first designs. The article highlights the psychological dimensions of performance perception, establishing how thoughtful interface design can extend user patience thresholds and maintain engagement during inevitable processing delays. By integrating these strategies within a comprehensive framework, developers can create interfaces that maintain data integrity and usability while delivering the immediate responsiveness users expect. The increasing complexity of modern web applications requires this balanced approach to performance optimization, addressing both technical efficiency and user perception to create experiences that feel inherently responsive regardless of actual network conditions.
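A minimal TypeScript sketch of the cursor-based pagination pattern discussed above, paired with a small client-side cache keyed by cursor; the /api/items endpoint, page size, and response shape are assumptions, not taken from the article.

type Page<T> = { items: T[]; nextCursor: string | null };

const pageCache = new Map<string, Page<unknown>>(); // visited pages stay available for instant back-navigation

async function fetchPage<T>(cursor: string | null): Promise<Page<T>> {
  const key = cursor ?? "first";
  const cached = pageCache.get(key);
  if (cached) return cached as Page<T>;

  const url = `/api/items?limit=20${cursor ? `&cursor=${encodeURIComponent(cursor)}` : ""}`;
  const page = (await (await fetch(url)).json()) as Page<T>;
  pageCache.set(key, page);
  return page;
}

// Usage: load the first page, then follow nextCursor for "load more".
const first = await fetchPage<{ id: string; title: string }>(null);
if (first.nextCursor) await fetchPage(first.nextCursor);
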
19

Kumar, Ravi Raushan, and Prof Anu Priya. "SEO Optimization in Web Development: How Next.js Helps." International Journal for Research in Applied Science and Engineering Technology 13, no. 5 (2025): 7011–18. https://doi.org/10.22214/ijraset.2025.71854.

Abstract:
This research paper examines how Next.js enhances Search Engine Optimization (SEO) by improving site performance, optimizing content delivery, and ensuring efficient indexing. By utilizing Next.js, developers can create fast, scalable, and SEO-friendly web applications, making it an effective framework for businesses looking to enhance their online presence. SEO is critical for increasing a website's visibility and ranking on search engines like Google. Traditional React-based applications that rely on client-side rendering (CSR) often encounter challenges such as delayed content rendering, slow page loads, and ineffective indexing. These issues hinder search engines from accurately crawling and ranking content, ultimately harming organic traffic and user engagement. Next.js, a powerful React framework developed by Vercel, addresses these challenges through advanced rendering techniques such as Server-Side Rendering (SSR), Static Site Generation (SSG), and Incremental Static Regeneration (ISR). These features lead to faster page loads, improved search engine indexing, and a seamless user experience, all of which contribute to higher search rankings. Furthermore, Next.js enhances metadata management with the next/head component, supports structured data implementation for rich search results, and optimizes images using the next/image component. By reducing Time to First Byte (TTFB) and improving Core Web Vitals such as First Contentful Paint (FCP) and Cumulative Layout Shift (CLS), Next.js significantly boosts SEO performance. This paper discusses how Next.js improves SEO by enhancing web performance, optimizing content delivery, and ensuring efficient indexing. It emphasizes the framework's ability to build high-performing, scalable web applications that rank well in search engines while delivering an exceptional user experience. Keywords: Next.js, Search Engine Optimization (SEO), Server-Side Rendering, Static Site Generation, Web Performance.
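As an illustration of the metadata handling and Incremental Static Regeneration mentioned in this abstract, here is a minimal Next.js page sketch; the CMS URL, the Article shape, and the 60-second revalidation window are assumptions.

// pages/articles/[slug].tsx: statically generated, re-built in the background at most
// once a minute, with per-page metadata supplied through next/head.
import Head from "next/head";
import type { GetStaticPaths, GetStaticProps } from "next";

type Article = { slug: string; title: string; summary: string };

export const getStaticPaths: GetStaticPaths = async () => ({ paths: [], fallback: "blocking" });

export const getStaticProps: GetStaticProps<{ article: Article }> = async (ctx) => {
  const slug = ctx.params?.slug as string;
  const res = await fetch(`https://cms.example.com/articles/${slug}`); // hypothetical CMS
  const article: Article = await res.json();
  return { props: { article }, revalidate: 60 };
};

export default function ArticlePage({ article }: { article: Article }) {
  return (
    <>
      <Head>
        <title>{article.title}</title>
        <meta name="description" content={article.summary} />
      </Head>
      <h1>{article.title}</h1>
    </>
  );
}
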
20

Sun, Lin Jun. "Development of Travel Reservation System for Mobile Platform." Applied Mechanics and Materials 644-650 (September 2014): 3099–102. http://dx.doi.org/10.4028/www.scientific.net/amm.644-650.3099.

Abstract:
To overcome deficiencies in mobile tourism applications, such as underutilization of mobile device capabilities and poor user experience, a low-cost, cross-platform mobile travel reservation system was designed. Based on client-side business process optimization, the mobile app was built using HTML5, CSS3, and jQuery. Additionally, the system uses asynchronous Ajax interaction to improve response time. Practical application of the system shows that it works equally well on multiple mobile platforms. Its flat UI design, convenient operation, and good user experience broaden its application prospects.
21

Shaheen, Momina, Muhammad Shoaib Farooq, and Tariq Umer. "Reduction in Data Imbalance for Client-Side Training in Federated Learning for the Prediction of Stock Market Prices." Journal of Sensor and Actuator Networks 13, no. 1 (2023): 1. http://dx.doi.org/10.3390/jsan13010001.

Abstract:
The approach of federated learning (FL) addresses significant challenges, including access rights, privacy, security, and the availability of diverse data. However, edge devices produce and collect data in a non-independent and identically distributed (non-IID) manner. Therefore, it is possible that the number of data samples may vary among the edge devices. This study elucidates an approach for implementing FL to achieve a balance between training accuracy and imbalanced data. This approach entails the implementation of data augmentation in data distribution by utilizing class estimation and by balancing on the client side during local training. Secondly, simple linear regression is utilized for model training at the client side to manage the optimal computation cost to achieve a reduction in computation cost. To validate the proposed approach, the technique was applied to a stock market dataset comprising stocks (AAL, ADBE, ASDK, and BSX) to predict the day-to-day values of stocks. The proposed approach has demonstrated favorable results, exhibiting a strong fit of 0.95 and above with a low error rate. The R-squared values, predominantly ranging from 0.97 to 0.98, indicate the model’s effectiveness in capturing variations in stock prices. Strong fits are observed within 75 to 80 iterations for stocks displaying consistently high R-squared values, signifying accuracy. On the 100th iteration, the declining MSE, MAE, and RMSE (AAL at 122.03, 4.89, 11.04, respectively; ADBE at 457.35, 17.79, and 21.38, respectively; ASDK at 182.78, 5.81, 13.51, respectively; and BSX at 34.50, 4.87, 5.87, respectively) values corroborated the positive results of the proposed approach with minimal data loss.
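The client-side model this abstract mentions is simple linear regression; the closed-form fit can be sketched in a few lines of TypeScript. The day indices and closing prices below are made-up values, not data from the study.

function fitLine(x: number[], y: number[]): { slope: number; intercept: number } {
  const n = x.length;
  const meanX = x.reduce((a, b) => a + b, 0) / n;
  const meanY = y.reduce((a, b) => a + b, 0) / n;
  let num = 0;
  let den = 0;
  for (let i = 0; i < n; i++) {
    num += (x[i] - meanX) * (y[i] - meanY); // covariance term
    den += (x[i] - meanX) ** 2;             // variance term
  }
  const slope = num / den;
  return { slope, intercept: meanY - slope * meanX };
}

// Example: forecast the next day's closing price from the day index.
const days = [1, 2, 3, 4, 5];
const close = [14.2, 14.5, 14.4, 14.9, 15.1];
const { slope, intercept } = fitLine(days, close);
console.log("day 6 forecast:", slope * 6 + intercept);
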
22

Ganesh, Anand. "Efficient Cross-Platform Application Development for Gaming Ecosystems." International Scientific Journal of Engineering and Management 04, no. 02 (2025): 1–7. https://doi.org/10.55041/isjem02251.

Abstract:
This paper explores cross-platform application development optimizations for console and PC, highlighting the unique challenges and solutions necessary to deliver seamless user experiences across a diverse range of devices. On the development side, the limited tool sets available on non-PC platforms require a streamlined workflow centered around PC-based development, while ensuring that testing on the actual devices accounts for discrepancies in aspects like resolution, aspect ratio, and pixels per inch. Remote network inspectors and debuggers are crucial for identifying and resolving platform-specific issues. Memory management, graphical constraints, and power limitations on consoles and hand-held devices are also discussed. On the client side, the benefits of a "write once, run anywhere" approach are highlighted, particularly through the use of reusable React components and React Native's custom native modules, enabling faster development cycles and consistent user interfaces across platforms. Additionally, server-side optimization through microservices architecture is emphasized, enabling the efficient abstraction of common business logic, such as authorization, cart checkout, and purchases, while also offering customized front doors to accommodate platform-specific experiences. The paper outlines the key strategies to enhance cross-platform development, ensuring quick, efficient, and scalable solutions for both development and operational aspects of console, PC, and hand-held applications. Index Terms: Gaming e-commerce, Digital storefronts, Engineering, Cross-platform.
23

Bahtiar Semma, Andi, Mukti Ali, Muh Saerozi, Mansur Mansur, and Kusrini Kusrini. "Cloud computing: google firebase firestore optimization analysis." Indonesian Journal of Electrical Engineering and Computer Science 29, no. 3 (2023): 1719–28. http://dx.doi.org/10.11591/ijeecs.v29.i3.pp1719-1728.

Abstract:
Cloud computing is a new paradigm that provides end users with a secure, personalized, dynamic computing environment with guaranteed service quality. One popular solution is Google Cloud Firestore, a global-scale "not only structured query language" (NoSQL) document database for mobile and web apps. Recent research on cloud-based NoSQL databases often discusses how they differ from SQL databases and compares their performance. However, using cloud-based NoSQL databases such as Firestore is tricky without a scientific comparison methodology, and it requires analysis of how their particular systems work. This study aims to discover the best design for optimizing data read cost, response size, and response time for the Cloud Firestore database. In this study, we develop a grade point average (GPA) report mock application to assess data reads based on our institution's needs. The application consists of three functions: adding graduates' GPAs and names, and viewing the ten highest GPAs, the GPA average, and the total number of graduated students. The findings indicate that aggregating data on the client side, or using a Google Cloud Function trigger, and then updating the aggregated data in one transaction significantly reduces the document read count (cost), response size, and time.
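The aggregation idea described here (keep a single summary document current so the report costs one read) can be sketched with the Firebase JavaScript SDK as follows; the collection, document, and field names are assumptions rather than the authors' schema, and writing the individual student record is omitted.

import { initializeApp } from "firebase/app";
import { doc, getFirestore, runTransaction } from "firebase/firestore";

const db = getFirestore(initializeApp({ projectId: "demo-gpa-report" })); // placeholder config

export async function addGraduate(name: string, gpa: number): Promise<void> {
  const summaryRef = doc(db, "reports", "summary");
  await runTransaction(db, async (tx) => {
    const snap = await tx.get(summaryRef);
    const data = snap.exists()
      ? (snap.data() as { total: number; gpaSum: number; top: { name: string; gpa: number }[] })
      : { total: 0, gpaSum: 0, top: [] as { name: string; gpa: number }[] };
    const top = [...data.top, { name, gpa }].sort((a, b) => b.gpa - a.gpa).slice(0, 10);
    // One summary write per graduate; the report later needs only a single document read.
    tx.set(summaryRef, { total: data.total + 1, gpaSum: data.gpaSum + gpa, top });
  });
}
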
24

Jain, Vivek. "A COMPARATIVE ANALYSIS OF SINGLE PAGE APPLICATIONS (SPAS) AND MULTI PAGE APPLICATIONS (MPAS)." International Journal of Core Engineering & Management 7, no. 04 (2022): 271–77. https://doi.org/10.5281/zenodo.14956673.

Abstract:
In modern web development, the choice between Single Page Applications (SPAs) and Multi Page Applications (MPAs) plays a crucial role in determining application performance, user experience, and scalability. SPAs offer a seamless, app-like experience by dynamically updating content without requiring full-page reloads, leveraging client-side rendering (CSR) and technologies like React, Angular, and Vue.js. Conversely, MPAs follow a traditional multi-page architecture, where each user interaction triggers a fullpage request, often relying on server-side rendering (SSR) to manage content delivery efficiently. This paper presents a comparative analysis of SPAs and MPAs, focusing on key performance metrics such as page load speed, time-to-first-byte (TTFB), interactivity (TTI), SEO-friendliness, scalability, and security. We evaluate these architectures through real-world case studies, examining their advantages and trade-offs in different scenarios, including e-commerce platforms, enterprise dashboards, and content-heavy websites. Our study utilizes industry-standard tools like Google Lighthouse, WebPageTest, and GTmetrix to benchmark the performance of SPAs and MPAs under various network conditions and user behaviors. The results indicate that while SPAs provide a highly responsive and engaging user experience, they often suffer from initial load delays, SEO challenges, and increased client-side resource consumption. In contrast, MPAs excel in SEO optimization, accessibility, and security, but can introduce higher server load and navigation delays due to frequent full-page reloads. To bridge the gap between these architectures, we also explore hybrid approaches, including Progressive Web Applications (PWAs) and Server-Side Rendered (SSR) SPAs, which combine the best of both worlds. We provide implementation guidelines and best practices for developers to select the right architecture based on project requirements, performance goals, and scalability considerations. The findings of this paper serve as a decision-making framework for developers, product managers, and businesses aiming to build efficient, scalable, and user-friendly web applications in a rapidly evolving digital landscape.
25

Chen, Zhiwei, Hu Xie, Wenxin Guo, Ruifeng Zhao, and Yang Liu. "Visual Analysis of Blockchain Energy Storage Scheduling considering the Optimal Scheduling of User-Side Source and Storage Resources." Mobile Information Systems 2022 (June 28, 2022): 1–13. http://dx.doi.org/10.1155/2022/8369121.

Abstract:
With the rapid development of Internet technology, the problems of client-side source and storage resources have gradually been exposed. Because user-side source and storage resources have small capacities, are unevenly distributed, and belong to diverse entities, blockchain energy storage is currently difficult to schedule, and user-side source and storage resources cannot be included in power scheduling optimization, leaving these resources unused. To utilize user-side resources effectively, this paper proposes a blockchain energy storage scheduling visualization system (BESSVS) that takes the optimal scheduling of user-side source and storage resources into account. The BESSVS can coordinate and optimize the management and control of decentralized power resources and load resources, and it effectively combines the Internet of Things with the corresponding power plant energy storage for optimal scheduling. The design of the blockchain energy storage scheduling visualization system is carried out mainly from the perspectives of the system body and the data information structure. The advantages of blockchain in data storage, information security, data interoperability, and related areas are introduced into the economic scheduling of blockchain energy storage. This supports stable, transparent scheduling of information and also improves the data security and storage security of the system. Finally, the feasibility and practicability of the method are verified by an example.
27

ShivaKrishna Deepak Veeravalli. "Leveraging Asynchronous Processing Tools in Salesforce: A Comprehensive Analysis." International Journal of Scientific Research in Computer Science, Engineering and Information Technology 11, no. 1 (2025): 946–55. https://doi.org/10.32628/cseit251112106.

Abstract:
This article provides a comprehensive analysis of asynchronous processing tools within the Salesforce platform, exploring their significance in enhancing system performance, scalability, and user experience. It delves into a wide array of server-side and client-side asynchronous processing options, including Asynchronous Apex, Platform Events, Asynchronous Flows, and Lightning Actions, offering detailed insights into their functionalities, use cases, and implementation considerations. The article presents a comparative analysis of these tools, discussing their performance metrics, implementation complexities, and scalability factors. It also outlines best practices for implementing asynchronous processing in Salesforce, addressing key aspects such as resource optimization, platform limit mitigation, and data integrity. Furthermore, the article examines the challenges and limitations associated with asynchronous processing in Salesforce, and explores future trends and developments in this domain. By providing a thorough examination of Salesforce's asynchronous processing capabilities, this article serves as a valuable resource for developers, architects, and decision-makers seeking to leverage these tools effectively in building robust, efficient, and scalable Salesforce applications.
28

Mu, Shengdong, Boyu Liu, Chaolung Lien, and Nedjah Nadia. "Optimization of Personal Credit Evaluation Based on a Federated Deep Learning Model." Mathematics 11, no. 21 (2023): 4499. http://dx.doi.org/10.3390/math11214499.

Abstract:
Financial institutions utilize data for the intelligent assessment of personal credit. However, the privacy of financial data is gradually increasing, and the training data of a single financial institution may exhibit problems regarding low data volume and poor data quality. Herein, by fusing federated learning with deep learning (FL-DL), we innovatively propose a dynamic communication algorithm and an adaptive aggregation algorithm as means of effectively solving the following problems, which are associated with personal credit evaluation: data privacy protection, distributed computing, and distributed storage. The dynamic communication algorithm utilizes a combination of fixed communication intervals and constrained variable intervals, which enables the federated system to utilize multiple communication intervals in a single learning task; thus, the performance of personal credit assessment models is enhanced. The adaptive aggregation algorithm proposes a novel aggregation weight formula. This algorithm enables the aggregation weights to be automatically updated, and it enhances the accuracy of individual credit assessment by exploiting the interplay between global and local models, which entails placing an additional but small computational burden on the powerful server side rather than on the resource-constrained client side. Finally, with regard to both algorithms and the FL-DL model, experiments and analyses are conducted using Lending Club financial company data; the results of the analysis indicate that both algorithms outperform the algorithms that are being compared and that the FL-DL model outperforms the advanced learning model.
29

Cho, Mingyu, Woohyuk Chung, Jincheol Ha, Jooyoung Lee, Eun-Gyeol Oh, and Mincheol Son. "FRAST: TFHE-Friendly Cipher Based on Random S-Boxes." IACR Transactions on Symmetric Cryptology 2024, no. 3 (2024): 1–43. http://dx.doi.org/10.46586/tosc.v2024.i3.1-43.

Abstract:
A transciphering framework, also known as hybrid homomorphic encryption, is a practical method of combining a homomorphic encryption (HE) scheme with a symmetric cipher in the client-server model to reduce computational and communication overload on the client side. As a server homomorphically evaluates a symmetric cipher in this framework, new design rationales are required for “HE-friendly” ciphers that take into account the specific properties of the HE schemes. In this paper, we propose a new TFHE-friendly cipher, dubbed FRAST, with a TFHE-friendly round function based on a random S-box to minimize the number of rounds. The round function of FRAST can be efficiently evaluated in TFHE by a new optimization technique, dubbed double blind rotation. Combined with our new WoP-PBS method, the double blind rotation allows computing multiple S-box calls in the round function of FRAST at the cost of a single S-box call. In this way, FRAST enjoys 2.768 (resp. 10.57) times higher throughput compared to Kreyvium (resp. Elisabeth) for TFHE keystream evaluation in the offline phase of the transciphering framework at the cost of slightly larger communication overload.
30

Zantalis, Fotios, and Grigorios Koulouras. "Data-Bound Adaptive Federated Learning: FedAdaDB." IoT 6, no. 3 (2025): 35. https://doi.org/10.3390/iot6030035.

Abstract:
Federated Learning (FL) enables decentralized Machine Learning (ML), focusing on preserving data privacy, but faces a unique set of optimization challenges, such as dealing with non-IID data, communication overhead, and client drift. Adaptive optimizers like AdaGrad, Adam, and Adam variations have been applied in FL, showing good results in convergence speed and accuracy. However, it can be quite challenging to combine good convergence, model generalization, and stability in an FL setup. Data-bound adaptive methods like AdaDB have demonstrated promising results in centralized settings by incorporating dynamic, data-dependent bounds on Learning Rates (LRs). In this paper, FedAdaDB is introduced, which is an FL version of AdaDB aiming to address the aforementioned challenges. FedAdaDB uses the AdaDB optimizer at the server-side to dynamically adjust LR bounds based on the aggregated client updates. Extensive experiments have been conducted comparing FedAdaDB with FedAvg and FedAdam on three different datasets (EMNIST, CIFAR100, and Shakespeare). The results show that FedAdaDB consistently offers better and more robust outcomes, in terms of the measured final validation accuracy across all datasets, for a trade-off of a small delay in the convergence speed at an early stage.
31

Fitriyadi, Farid, Muhammad Daffa Arzeta N, and Farkhod Meliev. "A JavaScript-Based Genetic Algorithm for Real-Time Route Optimization: Toward Lightweight Web Integration in Healthcare and Logistics." Journal of Intelligent Computing & Health Informatics 6, no. 1 (2024): 11. https://doi.org/10.26714/jichi.v6i1.15777.

Abstract:
Efficient route optimization is essential in healthcare and logistics systems, where real-time decision-making significantly affects operational effectiveness. This paper introduces a lightweight implementation of a genetic algorithm (GA) in JavaScript, designed to solve the shortest route problem as a variant of the Traveling Salesman Problem (TSP). The algorithm operates entirely in the browser console, demonstrating the potential of client-side computation for fast, portable optimization. The GA framework integrates tournament selection, two-point ordered crossover, and swap mutation to evolve route solutions over 200 generations. Tested on a synthetic 11-city dataset, the algorithm achieved near-optimal performance with an average deviation of 4.28% from the known optimum and an average runtime of 1.26 seconds. Convergence occurred around generation 138 across five independent runs, indicating stable and consistent behavior despite stochastic initialization. While no graphical user interface was developed in this study, the use of native JavaScript allows future integration with interactive web applications and mobile dashboards. Comparative references suggest the algorithm performs competitively with existing metaheuristics under similar problem sizes. These findings highlight the feasibility of browser-based optimization as a foundation for accessible, real-time routing tools in decentralized healthcare and transport settings.
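The three operators named in this abstract (tournament selection, two-point ordered crossover, and swap mutation) can be sketched in TypeScript as below; the tournament size and mutation rate are assumed defaults, not the authors' exact parameters.

type Tour = number[]; // a permutation of city indices

function tournament(pop: Tour[], tourLength: (t: Tour) => number, k = 3): Tour {
  let best = pop[Math.floor(Math.random() * pop.length)];
  for (let i = 1; i < k; i++) {
    const rival = pop[Math.floor(Math.random() * pop.length)];
    if (tourLength(rival) < tourLength(best)) best = rival; // shorter tour wins
  }
  return best;
}

function orderedCrossover(a: Tour, b: Tour): Tour {
  const n = a.length;
  const cuts = [Math.floor(Math.random() * n), Math.floor(Math.random() * n)].sort((x, y) => x - y);
  const [i, j] = cuts;
  const child: number[] = new Array(n).fill(-1);
  for (let p = i; p <= j; p++) child[p] = a[p];      // copy a slice from parent A
  let fill = (j + 1) % n;
  for (let p = 0; p < n; p++) {                      // fill the rest in parent B's order
    const city = b[(j + 1 + p) % n];
    if (!child.includes(city)) {
      child[fill] = city;
      fill = (fill + 1) % n;
    }
  }
  return child;
}

function swapMutation(t: Tour, rate = 0.05): Tour {
  const out = [...t];
  if (Math.random() < rate) {
    const i = Math.floor(Math.random() * out.length);
    const j = Math.floor(Math.random() * out.length);
    [out[i], out[j]] = [out[j], out[i]];
  }
  return out;
}

A generation then consists of repeatedly selecting two parents by tournament, producing a child with the ordered crossover, and applying the swap mutation before adding the child to the next population.
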
32

Guillen, Luis, Satoru Izumi, Toru Abe, and Takuo Suganuma. "SAND/3: SDN-Assisted Novel QoE Control Method for Dynamic Adaptive Streaming over HTTP/3." Electronics 8, no. 8 (2019): 864. http://dx.doi.org/10.3390/electronics8080864.

Abstract:
Dynamic Adaptive Streaming over HTTP (DASH) is a widely used standard for video content delivery. Video traffic, most of which is generated from mobile devices, is shortly to become the most significant part of Internet traffic. Current DASH solutions only consider either client- or server-side optimization, leaving other components in DASH (e.g., at the transport layer) to default solutions that cause a performance bottleneck. In that regard, although it is assumed that HTTP must necessarily be transported on top of TCP, with the latest introduction of HTTP/3, it is time to re-evaluate its effects on DASH. The most substantial change in HTTP/3 is having Quick UDP Internet Connections (QUIC) as its primary underlying transport protocol. However, little is still known about the effects on standard DASH client-based adaptation algorithms when exposed to the future HTTP/3. In this paper, we present SAND/3, an SDN (Software Defined Networking)-based Quality of Experience (QoE) control method for DASH over HTTP/3. Since the official deployment of HTTP/3 has not been released yet, we used the current implementation of Google QUIC. Preliminary results show that, by applying SAND/3, which combines information from different layers orchestrated by SDN to select the best QoE, we can obtain steadier media throughput, reduce the number of quality shifts by at least 40%, increase the amount of downloaded content by at least 20%, and minimize video interruptions compared to the current implementations, regardless of the client adaptation algorithm.
33

Arunambika T. and Senthil Vadivu P. "OCEDS." International Journal of Distributed Systems and Technologies 12, no. 3 (2021): 48–63. http://dx.doi.org/10.4018/ijdst.2021070103.

Abstract:
Many organizations require handling a massive quantity of data. The rapid growth of data in size leads to the demand for a new large space for storage. It is impossible to store bulk data individually. The data growth issues compel organizations to search novel cost-efficient ways of storage. In cloud computing, reducing an execution cost and reducing a storage price are two of several problems. This work proposed an optimal cost-effective data storage (OCEDS) algorithm in cloud data centres to deal with this problem. Storing the entire database in the cloud on the cloud client is not the best approach. It raises processing costs on both the customer and the cloud service provider. Execution and storage cost optimization is achieved through the proposed OCEDS algorithm. Cloud CSPs present their clients profit-maximizing services while clients want to reduce their expenses. The previous works concentrated on only one side of cost optimization (CSP point of view or consumer point of view), but this OCEDS reduces execution and storage costs on both sides.
34

Sri, Rama Chandra Charan Teja Tadi. "Expanding Web Development Horizons: Integrating WebAssembly with React, Vue, and Angular." Journal of Scientific and Engineering Research 8, no. 10 (2021): 250–61. https://doi.org/10.5281/zenodo.15075098.

Abstract:
WebAssembly, or Wasm, is the web technology advancement that allows frameworks like React, Vue, and Angular to interoperate. Its binary instruction form will enable developers to get native-quality performance with high performance using C, C++, and Rust. With universal support in leading browsers, WebAssembly facilitates complex web programs to execute almost at native pace, greatly enhancing client-side performance. Its integration with current JavaScript environments, including Angular, React, and Vue, allows for developers to easily enhance key aspects of an application, such as memory usage and reducing execution time. Apart from performance enhancement, WebAssembly upholds security levels with its sandboxing feature, whereby harmful code is separated. It also simplifies development by its ability to execute code written in any programming language without a struggle within the web environment. With further progress in web development, the use of WebAssembly, together with existing frameworks, facilitates innovation and enables developers to build more powerful, efficient, and secure applications.
35

Venkata, Padma Kumar Vemuri. "Solutions for Integrating Charts into Email Communications." INTERNATIONAL JOURNAL OF INNOVATIVE RESEARCH AND CREATIVE TECHNOLOGY 8, no. 1 (2022): 1–6. https://doi.org/10.5281/zenodo.15087550.

Abstract:
The integration of visual charts within email communications provides significant advantages in capturing recipient attention, simplifying complex datasets, and enhancing the clarity and memorability of key messages. This research report explores various reliable techniques for incorporating charts into emails, highlighting embedding static images as the most universally compatible method. Utilizing HTML image tags ensures broad compatibility across email platforms, while leveraging online chart generation APIs, such as QuickChart.io and Image-Charts, simplifies the creation and embedding process by managing server-side rendering complexities. The choice of image format is crucial, with PNG recommended for detailed graphics and clarity preservation, JPEG beneficial for complex, photograph-like charts prioritizing smaller file sizes, and GIF suitable for simple animations or graphics with limited colors. Advanced considerations, including mobile optimization, cross-client compatibility, and accessibility through effective alternative text descriptions, are discussed as best practices to enhance user experience and inclusivity. Finally, the limitations of embedding interactive charts directly in emails due to email client restrictions are acknowledged, recommending alternative approaches like linking static chart images to interactive online versions. Overall, the report provides comprehensive guidance for effectively employing visual charts in email communications, ensuring messages stand out, engage recipients, and clearly convey critical data and insights.
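As a concrete illustration of the static-image technique described above, the TypeScript sketch below builds a QuickChart.io image URL from a Chart.js configuration and embeds it in an HTML img tag with alternative text; the sales figures and dimensions are illustrative.

const chartConfig = {
  type: "bar",
  data: {
    labels: ["Q1", "Q2", "Q3", "Q4"],
    datasets: [{ label: "Sales", data: [120, 190, 170, 220] }],
  },
};

// QuickChart renders the Chart.js configuration server-side and returns a static PNG.
const chartUrl =
  "https://quickchart.io/chart?w=600&h=300&c=" + encodeURIComponent(JSON.stringify(chartConfig));

const emailHtml = `
  <img src="${chartUrl}" width="600" height="300"
       alt="Bar chart of quarterly sales: Q1 120, Q2 190, Q3 170, Q4 220" />
`;
console.log(emailHtml);
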
36

Zhang, Liaoyan. "Optimization of an Intelligent Music-Playing System Based on Network Communication." Complexity 2021 (May 6, 2021): 1–11. http://dx.doi.org/10.1155/2021/9943795.

Abstract:
Streaming media server is the core system of audio and video application in the Internet; it has a wide range of applications in music recommendation. As song libraries and users of music websites and APPs continue to increase, user interaction data are generated at an increasingly fast rate, making the shortcomings of the original offline recommendation system and the advantages of the real-time streaming recommendation system more and more obvious. This paper describes in detail the working methods and contents of each stage of the real-time streaming music recommendation system, including requirement analysis, overall design, implementation of each module of the system, and system testing and analysis, from a practical scenario. Moreover, this paper analyzes the current research status and deficiencies in the field of music recommendation by analyzing the user interaction data of real music websites. From the actual requirements of the system, the functional and performance goals of the system are proposed to address these deficiencies, and then the functional structure, general architecture, and database model of the system are designed, and how to interact with the server side and the client side is investigated. For the implementation of data collection and statistics module, this paper adopts Flume and Kafka to collect user behavior data and uses Spark Streaming and Redis to count music popularity trends and support efficient query. The recommendation engine module in this paper is designed and optimized using Spark to implement incremental matrix decomposition on data streams, online collaborative topic model, and improved item-based collaborative filtering algorithm. In the system testing section, the functionality and performance of the system are tested, and the recommendation engine is tested with real datasets to show the discovered music themes and analyze the test results in detail.
APA, Harvard, Vancouver, ISO, and other styles
37

Carpenter, Chris. "Optimization Process Maximizes Financial, Environmental Benefits in LNG Breakwater." Journal of Petroleum Technology 73, no. 09 (2021): 55–56. http://dx.doi.org/10.2118/0921-0055-jpt.

Full text
Abstract:
This article, written by JPT Technology Editor Chris Carpenter, contains highlights of paper OTC 31284, “Greater Tortue Ahmeyim Project for BP In Mauritania and Senegal: Breakwater Design and Local Content Optimizations,” by Alexis Replumaz, Yann Julien, and Damien Bellengier, Eiffage Génie Civil Marine, prepared for the 2021 Offshore Technology Conference, originally scheduled to be held in Houston, 4–7 May. The paper has not been peer reviewed. Copyright 2021 Offshore Technology Conference. Reproduced by permission. During summer 2017, the authors’ company was invited by BP to bid for the construction of a concrete caisson breakwater protecting an offshore liquefied natural gas (LNG) floating terminal at a water depth of 33 m on the Mauritanian/Senegalese maritime border. As a result of subsequent front-end engineering design (FEED) studies, including 3D model testing, the company was able to reduce the amount of concrete required by 40% compared with the initial design, leading to financial and environmental benefits. Introduction The BP Tortue development comprises a subsea production system tied back to a pretreatment floating, production, storage, and offloading (FPSO) unit, which transfers gas to a near-shore hub for LNG production and export. Phase 1 will provide sales gas production and domestic supply and will generate approximately 2.5 mtpa of LNG to Mauritania and Senegal. The Phase 1 FPSO, in 100–130 m of water, will process inlet gas from the subsea wells located across several drill centers by separating condensate from the gas stream and exporting conditioned gas to a hub, where LNG processing and export will occur. The hub, 10 km from shore, comprises a breakwater to protect marine operations, including LNG processing and carrier loading. A single floating LNG vessel will condition the gas for LNG export. Hub construction began early in 2019 and should be completed in 2021 for a first-gas target in 2022. The breakwater design was conceived during the bidding stage of the project at the end of 2017 by proposing an alternative design for the breakwater adapted to project-specific conditions and regional facilities. The design has been improved continuously and optimized during the FEED stage based on a collaborative approach between the client and the contractor. Client Preliminary Design Optimizations During pre-FEED and bidding stages, the client performed an intensive geotechnical campaign based on several shallow and deep boreholes and a large-area geophysical survey. In water depths greater than 18 m along the maritime boundary between Mauritania and Senegal, a significant layer of soft soil exists, except around the outcrop located on the west side (10–11 km offshore in approximately 33 m of water). Although rock quantities could be slightly higher in the western location, the reduction of the dredging quantities and the reduction of the effect on the nearby coastal community of Saint Louis (lighting, noise, and vessel traffic) led to selection of this location for the hub terminal. The initial breakwater type was a rubble-mound structure. However, a composite breakwater (caisson on berm foundation) allowed for optimization of dredging and rock quantities. The change in breakwater type allowed a rock-quantity drop from 5.8 million to 1.1 million m3.
APA, Harvard, Vancouver, ISO, and other styles
38

Maksymiv, M. R., and T. Y. Rak. "BUILDING AND OPTIMIZING LIGHTWEIGHT GENERATIVE ADVERSARIAL NEURAL NETWORKS TO ENHANCE VIDEO QUALITY IN THE CLIENT DEVICES USING WEBGPU." Computer systems and network 7, no. 1 (2025): 186–94. https://doi.org/10.23939/csn2025.01.186.

Full text
Abstract:
The paper addresses the task of improving the quality of digital video both in cloud environments and on the client side, using generative adversarial networks (GANs) adapted to run in the browser. A method is proposed that uses WebGPU for accelerated execution of convolutional computations, which makes it possible to increase the resolution and improve the quality of low-quality video in real time without placing a significant load on servers. Optimization of the neural network includes the use of pruning and knowledge distillation, which made it possible to reduce the size of the model by 40–60% without significant loss of quality. The results of the experiments showed that the proposed method increases browser video-processing performance by 2–4 times compared with models based on the WebGL interface. The video quality assessment showed an improvement in PSNR and an increase in SSIM compared with traditional upscaling methods. The proposed approach can be integrated into streaming services and web applications, reducing the load on computer networks and providing a better user experience at lower cost for cloud and server computing. Key words: generative adversarial networks, high-definition video, high-resolution imaging, image super-resolution, optimization models, neural network
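For readers unfamiliar with the browser API involved, the sketch below shows the WebGPU compute path such a system relies on (adapter, device, shader module, compute pipeline, dispatch, readback), reduced to a trivial element-wise ReLU kernel standing in for the actual convolutional layers of the GAN. It assumes a WebGPU-capable browser and the @webgpu/types definitions; it is not the authors' model or code.

```typescript
// Sketch of the WebGPU compute path, reduced to an element-wise ReLU kernel.
const wgsl = /* wgsl */ `
  @group(0) @binding(0) var<storage, read>       input  : array<f32>;
  @group(0) @binding(1) var<storage, read_write> output : array<f32>;

  @compute @workgroup_size(64)
  fn main(@builtin(global_invocation_id) id : vec3<u32>) {
    let i = id.x;
    if (i < arrayLength(&input)) {
      output[i] = max(input[i], 0.0);
    }
  }`;

async function runOnGpu(data: Float32Array): Promise<Float32Array> {
  const adapter = await navigator.gpu.requestAdapter();
  if (!adapter) throw new Error("WebGPU not available");
  const device = await adapter.requestDevice();

  const size = data.byteLength;
  const input = device.createBuffer({ size, usage: GPUBufferUsage.STORAGE | GPUBufferUsage.COPY_DST });
  const output = device.createBuffer({ size, usage: GPUBufferUsage.STORAGE | GPUBufferUsage.COPY_SRC });
  const readback = device.createBuffer({ size, usage: GPUBufferUsage.COPY_DST | GPUBufferUsage.MAP_READ });
  device.queue.writeBuffer(input, 0, data);

  const pipeline = device.createComputePipeline({
    layout: "auto",
    compute: { module: device.createShaderModule({ code: wgsl }), entryPoint: "main" },
  });
  const bindGroup = device.createBindGroup({
    layout: pipeline.getBindGroupLayout(0),
    entries: [
      { binding: 0, resource: { buffer: input } },
      { binding: 1, resource: { buffer: output } },
    ],
  });

  // Encode the compute pass and copy the result back for reading on the CPU.
  const encoder = device.createCommandEncoder();
  const pass = encoder.beginComputePass();
  pass.setPipeline(pipeline);
  pass.setBindGroup(0, bindGroup);
  pass.dispatchWorkgroups(Math.ceil(data.length / 64));
  pass.end();
  encoder.copyBufferToBuffer(output, 0, readback, 0, size);
  device.queue.submit([encoder.finish()]);

  await readback.mapAsync(GPUMapMode.READ);
  return new Float32Array(readback.getMappedRange().slice(0));
}
```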
APA, Harvard, Vancouver, ISO, and other styles
39

Maulana, Irvan Dhimas, and Yeremia Alfa Susetyo. "Implementasi Fetch API dalam pengembangan Backend Website Daftar Film dengan Next.JS." Kesatria : Jurnal Penerapan Sistem Informasi (Komputer dan Manajemen) 6, no. 1 (2025): 187–96. https://doi.org/10.30645/kesatria.v6i1.560.

Full text
Abstract:
This study aims to improve the performance of a film list website by implementing the Next.JS framework together with its Fetch API to address challenges in search efficiency and in providing accurate, up-to-date information to users. The amount of outdated and irrelevant data on film websites often prevents users from finding current movie information. This research develops an application using Next.JS, integrated with the Fetch API, to retrieve data dynamically using Server-Side Rendering (SSR) and Static Site Generation (SSG), improving communication between server and client while providing a responsive and SEO-friendly user experience. Testing with Lighthouse and Chrome DevTools shows improved performance, with an application score of 92 on the Vercel platform and 82 locally. Cache optimization on Vercel also reduced the data transfer size from 2.2 MB to 0.27 MB, significantly speeding up load times and stabilizing the application. These results indicate that the application successfully delivers relevant and up-to-date information with high speed and stable performance. However, this study is limited in terms of testing devices and focuses only on the Vercel hosting platform.
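As a generic illustration of the fetching pattern described above, the following Next.js (Pages Router) TypeScript sketch retrieves a film list on the server with the Fetch API and regenerates the static page periodically (SSG with revalidation). The endpoint URL and response shape are placeholders, not the study's actual API.

```typescript
// Generic Next.js (Pages Router) sketch: fetch a film list at build time and
// revalidate it periodically. The API URL and data shape are placeholders.
import type { GetStaticProps, NextPage } from "next";

type Film = { id: number; title: string; releaseDate: string };

const FilmList: NextPage<{ films: Film[] }> = ({ films }) => (
  <ul>
    {films.map(f => (
      <li key={f.id}>
        {f.title} ({f.releaseDate})
      </li>
    ))}
  </ul>
);

export const getStaticProps: GetStaticProps<{ films: Film[] }> = async () => {
  // Fetch API call executed on the server, so the page ships pre-rendered HTML.
  const res = await fetch("https://example.com/api/films/now-playing");
  const films: Film[] = await res.json();
  // Regenerate the static page at most once per hour to keep data current.
  return { props: { films }, revalidate: 3600 };
};

export default FilmList;
```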
APA, Harvard, Vancouver, ISO, and other styles
40

Gajula, Viswanath, and Rajathy R. "An explorative optimization algorithm for sparse scheduling in-home energy management with smart grid." Circuit World 46, no. 4 (2020): 335–46. http://dx.doi.org/10.1108/cw-06-2019-0057.

Full text
Abstract:
Purpose Electricity utilization at peak hours may differ across administrative regions, for example industrial areas, commercial areas, and residential zones. This paper introduces a demand-side load management (DSM) strategy, one of the applications of the smart grid (SG), that is capable of controlling loads within a residential building so that consumer satisfaction is maximized at minimum cost. Design/methodology/approach In this paper, a heuristic-algorithm-based energy management controller is designed for a residential area in a SG. The Antlion Optimization technique is used for DSM techniques such as load shifting, peak clipping, and valley filling in the residential sector over 24 h, with the help of a stochastic function to model the random distribution of the load. Findings The proposed algorithm offered the greatest satisfaction and the least expense for consumers when compared with the traditional cost, by taking into account individual consumer preferences for the loads and the ideal scheduling time for each load, which is obtained from rebuilding the trap. Originality/value Simulation results demonstrate that the cost incurred by users under the proposed DSM techniques is satisfactory in comparison with existing approaches.
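The load-shifting objective behind such DSM schemes can be illustrated with a much simpler rule than the Antlion Optimizer used in the paper: move each shiftable load to the cheapest feasible hours and compare the electricity cost with the unshifted schedule. The TypeScript sketch below does exactly that, with illustrative tariffs and appliances.

```typescript
// Greedy stand-in for the load-shifting objective; the paper itself uses
// Antlion Optimization rather than this simple rule.
type Load = { name: string; kw: number; hours: number; requestedStart: number };

const windowCost = (prices: number[], start: number, len: number): number =>
  prices.slice(start, start + len).reduce((a, b) => a + b, 0);

// Cost if every load runs at its requested start hour (no DSM).
function baselineCost(prices: number[], loads: Load[]): number {
  return loads.reduce((c, l) => c + l.kw * windowCost(prices, l.requestedStart, l.hours), 0);
}

// Greedy load shifting: run each load in the cheapest feasible window.
function shiftedCost(prices: number[], loads: Load[]): number {
  return loads.reduce((c, l) => {
    let best = Infinity;
    for (let start = 0; start + l.hours <= prices.length; start++) {
      best = Math.min(best, windowCost(prices, start, l.hours));
    }
    return c + l.kw * best;
  }, 0);
}

// Hourly tariff over 24 h with an evening peak, and two shiftable appliances.
const tariff = Array.from({ length: 24 }, (_, h) => (h >= 17 && h <= 21 ? 0.3 : 0.12));
const appliances: Load[] = [
  { name: "washing machine", kw: 1.5, hours: 2, requestedStart: 18 },
  { name: "EV charger", kw: 7.0, hours: 3, requestedStart: 17 },
];

console.log("cost at requested times:", baselineCost(tariff, appliances).toFixed(2));
console.log("cost after load shifting:", shiftedCost(tariff, appliances).toFixed(2));
```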
APA, Harvard, Vancouver, ISO, and other styles
41

EL-SANA, JIHAD, and NETA SOKOLOVSKY. "VIEW-DEPENDENT RENDERING FOR LARGE POLYGONAL MODELS OVER NETWORKS." International Journal of Image and Graphics 03, no. 02 (2003): 265–90. http://dx.doi.org/10.1142/s0219467803001007.

Full text
Abstract:
In this paper we present a novel approach that enables the rendering of large shared datasets at interactive rates using inexpensive workstations. Our algorithm is based on view-dependent rendering and client-server technology: servers host large datasets and manage the selection of the various levels of detail, while clients receive blocks of update operations which are used to generate the appropriate level of detail in an incremental manner. We assume that servers are capable machines in terms of storage capacity and computational power, and clients are inexpensive workstations with limited 3D rendering capabilities. For optimization purposes we have developed two similar approaches, one for local area networks and the other for wide area networks. For the second approach we made several changes to adapt to the limitations of wide area networks. To avoid network latency we developed two powerful mechanisms that cache the adapt-operation blocks on the clients' side and predict the future view parameters of clients based on their recent behavior. Our approach dramatically reduces the amount of memory used by each client and by the entire computing system, since the dataset is stored only once in the local memory of the server. In addition, it decreases the load on the network as a result of the incremental updates contributed by view-dependent rendering.
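The view-parameter prediction mechanism mentioned above can be sketched very simply: extrapolate the next camera position from the two most recent samples and use it to warm a client-side cache of update-operation blocks. The TypeScript fragment below is such a sketch; the block-id scheme and the fetch endpoint are assumptions, not the authors' implementation.

```typescript
// Simplified sketch of view-parameter prediction plus a client-side block cache.
type Vec3 = [number, number, number];

function predictNextView(prev: Vec3, curr: Vec3, lookaheadFrames = 1): Vec3 {
  // Constant-velocity assumption: next ≈ curr + (curr - prev) * lookahead.
  return [
    curr[0] + (curr[0] - prev[0]) * lookaheadFrames,
    curr[1] + (curr[1] - prev[1]) * lookaheadFrames,
    curr[2] + (curr[2] - prev[2]) * lookaheadFrames,
  ];
}

// Client-side cache of update-operation blocks keyed by block id, so repeated
// visits to the same region do not hit the server (or the WAN) again.
const blockCache = new Map<string, ArrayBuffer>();

async function prefetchBlock(blockId: string): Promise<ArrayBuffer> {
  const cached = blockCache.get(blockId);
  if (cached) return cached;
  const res = await fetch(`/blocks/${blockId}`); // hypothetical server endpoint
  const buf = await res.arrayBuffer();
  blockCache.set(blockId, buf);
  return buf;
}

// Example: predict where the viewer will be next frame and warm the cache.
const predicted = predictNextView([0, 0, 10], [0, 0, 9.5]);
void prefetchBlock(`lod-${Math.round(predicted[2])}`);
```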
APA, Harvard, Vancouver, ISO, and other styles
42

Xia, Zhixin, Xiaolei Yang, Afei Li, Yongshan Liu, and Siyuan He. "Research on Information Security Transmission of Port Multi-Thread Equipment Based on Advanced Encryption Standard and Preprocessing Optimization." Applied Sciences 14, no. 24 (2024): 11887. https://doi.org/10.3390/app142411887.

Full text
Abstract:
Based on a C/S multithreaded control framework, this article uses AES encryption technology; by customizing its S-boxes and applying differential diffusion to them, it improves the randomness of the ciphertexts and the resistance to differential attacks, and reduces the likelihood of leakage during data computation. On this basis, in order to reduce the cost overhead introduced by AES encryption, this paper applies a precomputation method that optimizes the S-boxes and MixColumn matrices within the multithreaded control framework, which improves the computation rate of AES and thus the efficiency of information transmission in the multithreaded control process. In addition, by using the TLS protocol, an authentication module is set up on the client and server sides, which effectively defends against various attacks on data transmission by external users. The experimental results indicate that after optimization of the multithreaded C/S architecture, the average transmission delay was reduced by 49.1%, throughput rose by 96.4%, and the speedup ratio reached 1.96.
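The precomputation idea referred to in the abstract (optimizing the S-boxes and MixColumn matrices) is commonly realized by fusing SubBytes and MixColumns into a single lookup table. The TypeScript sketch below builds such a table for an arbitrary, possibly customized, 256-entry S-box; it is an illustration of the general technique, not the paper's exact tables or code.

```typescript
// Fuse SubBytes and MixColumns into one lookup table ("T-table") so one AES
// round column costs four table lookups and XORs. The S-box is a parameter,
// so a customized S-box can be plugged in.
const xtime = (a: number): number =>
  ((a << 1) ^ ((a & 0x80) ? 0x1b : 0)) & 0xff; // multiply by 2 in GF(2^8)

function buildTTable(sbox: Uint8Array): Uint32Array {
  if (sbox.length !== 256) throw new Error("S-box must have 256 entries");
  const t = new Uint32Array(256);
  for (let x = 0; x < 256; x++) {
    const s = sbox[x];
    const s2 = xtime(s);  // 02·s
    const s3 = s2 ^ s;    // 03·s = 02·s XOR 01·s
    // Pack the MixColumns contributions {02·s, 01·s, 01·s, 03·s} into one word.
    t[x] = ((s2 << 24) | (s << 16) | (s << 8) | s3) >>> 0;
  }
  return t;
}

// Usage: build the table once at startup; each round then reuses it.
// (An identity "S-box" is used here only to keep the example self-contained.)
const demoSbox = Uint8Array.from({ length: 256 }, (_, i) => i);
const t0 = buildTTable(demoSbox);
console.log(t0[0x01].toString(16)); // packed contributions of byte value 0x01
```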
APA, Harvard, Vancouver, ISO, and other styles
43

Auer, Michael, and Alexander Zipf. "3D WebGIS: From Visualization to Analysis. An Efficient Browser-Based 3D Line-of-Sight Analysis." ISPRS International Journal of Geo-Information 7, no. 7 (2018): 279. http://dx.doi.org/10.3390/ijgi7070279.

Full text
Abstract:
3D WebGIS systems have been mentioned in the literature almost since the beginning of the graphical web era in the late 1990s. The potential use of 3D WebGIS is linked to a wide range of scientific and application domains, such as planning, controlling, tracking or simulation in crisis management, military mission planning, urban information systems, energy facilities or cultural heritage management, just to name a few. Nevertheless, many applications or research prototypes entitled 3D WebGIS or similar are mainly about 3D visualization of GIS data or the visualization of analysis results, rather than about performing the 3D analysis itself online. This research paper aims to take a step toward web-based 3D geospatial analysis. It describes how to overcome speed and memory restrictions in web-based data management by adapting optimization strategies developed earlier for web-based 3D visualization. These are applied in a holistic way in the context of a fully 3D line-of-sight computation over several layers with split (tiled) and unsplit (static) data sources. Different optimization approaches are combined and evaluated to enable efficient client-side analysis and real 3D WebGIS functionality using new web technologies such as HTML5 and WebGL.
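The core of a line-of-sight computation is easy to show in isolation. The TypeScript sketch below samples the segment between an observer and a target over a height grid and reports whether the terrain blocks the view; it is only the elementary test, not the tiled, multi-layer, WebGL-accelerated analysis the paper develops.

```typescript
// Minimal line-of-sight test over a regular height grid.
type Point3 = { x: number; y: number; z: number };

function terrainHeight(grid: number[][], x: number, y: number): number {
  // Nearest-neighbour sampling; a real system would interpolate.
  const row = Math.min(grid.length - 1, Math.max(0, Math.round(y)));
  const col = Math.min(grid[0].length - 1, Math.max(0, Math.round(x)));
  return grid[row][col];
}

function hasLineOfSight(grid: number[][], from: Point3, to: Point3, samples = 100): boolean {
  for (let i = 1; i < samples; i++) {
    const t = i / samples;
    const x = from.x + (to.x - from.x) * t;
    const y = from.y + (to.y - from.y) * t;
    const sightZ = from.z + (to.z - from.z) * t;
    if (terrainHeight(grid, x, y) > sightZ) return false; // terrain blocks the ray
  }
  return true;
}

// Tiny 4x4 height grid with a ridge in the middle.
const dem = [
  [0, 0, 0, 0],
  [0, 5, 5, 0],
  [0, 5, 5, 0],
  [0, 0, 0, 0],
];
console.log(hasLineOfSight(dem, { x: 0, y: 0, z: 2 }, { x: 3, y: 3, z: 2 })); // false
console.log(hasLineOfSight(dem, { x: 0, y: 0, z: 8 }, { x: 3, y: 3, z: 8 })); // true
```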
APA, Harvard, Vancouver, ISO, and other styles
44

Sakhbieva, Amina I., Malika D. Yusupova, and Zarema A. Magazieva. "FRONTEND PERFORMANCE AND ITS CONTRIBUTION TO THE USER’S MACROECONOMIC «COST OF WAITING»." EKONOMIKA I UPRAVLENIE: PROBLEMY, RESHENIYA 5/13, no. 158 (2025): 6–13. https://doi.org/10.36871/ek.up.p.r.2025.05.13.001.

Full text
Abstract:
The article examines the impact of frontend performance of web applications on user behavior and its macroeconomic consequences through the prism of the “cost of waiting” concept. The author analyzes key performance metrics (LCP, INP, CLS, etc.) that determine the speed of rendering and interactivity of interfaces, and demonstrates their relationship with user metrics — conversion, bounces, session depth. Based on empirical data from A/B tests, reports from large IT companies and scientific research, the economic feasibility of investing in optimizing the client side of web applications is substantiated. Particular attention is paid to the aggregation of individual time losses of users and their extrapolation at the macroeconomic level using theories of the alternative cost of time and search costs. The article proposes a methodology for calculating the economic effects of improving frontend indicators and provides recommendations for the implementation of optimization practices. The work demonstrates that even a slight reduction in interface delays can lead to a significant economic effect both at the level of an individual business and on the scale of the digital economy as a whole.
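The metrics and the aggregation argument can both be made concrete in a few lines. The TypeScript sketch below captures LCP and CLS with the standard PerformanceObserver API and then computes a back-of-envelope "cost of waiting"; the traffic volume, delay, and value-of-time figures are purely illustrative assumptions.

```typescript
// Browser-side sketch: capture LCP and CLS, then aggregate an illustrative
// "cost of waiting". All numbers below are assumptions.
new PerformanceObserver((list) => {
  const entries = list.getEntries();
  const lcp = entries[entries.length - 1]; // the last candidate is the final LCP
  console.log("LCP (ms):", lcp.startTime);
}).observe({ type: "largest-contentful-paint", buffered: true });

let cls = 0;
new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    const shift = entry as unknown as { value: number; hadRecentInput: boolean };
    if (!shift.hadRecentInput) cls += shift.value; // ignore shifts caused by user input
  }
  console.log("CLS so far:", cls.toFixed(3));
}).observe({ type: "layout-shift", buffered: true });

// Aggregated "cost of waiting": sessions per day x avoidable delay x value of time.
const sessionsPerDay = 1_000_000;   // assumed traffic
const avoidableDelaySeconds = 0.5;  // assumed improvement from optimization
const valueOfTimePerHour = 10;      // assumed opportunity cost, currency units
const dailyCost =
  (sessionsPerDay * avoidableDelaySeconds / 3600) * valueOfTimePerHour;
console.log("Estimated daily cost of waiting:", dailyCost.toFixed(0));
```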
APA, Harvard, Vancouver, ISO, and other styles
45

Zhang, Qihua, Zhihui Li, and Mingjun Liu. "Research on the Relationship between Motion Performance and User Experience of Golf Virtual Simulation Putting Simulator." Mathematical Problems in Engineering 2022 (November 26, 2022): 1–10. http://dx.doi.org/10.1155/2022/1616636.

Full text
Abstract:
This paper designs and develops a virtual golf simulation putting simulator based on existing computer technology and conducts in-depth research and analysis of the relationship between its motion performance and user experience. The network architecture of the distributed virtual golf simulation system and the scene data management model are established, on the basis of which the server-side system design and the client-side network communication module design are carried out. In the requirements analysis, functional requirements such as building VR scenes, data communication, and recognition models, and non-functional requirements such as system security and ease of use, are analyzed; in the outline design, the hardware equipment and logical architecture of the automatic user-experience optimization system are described; in the detailed design, the functional modules of the system are designed in detail, including VR induction experience, user-experience identification from physiological signal datasets, data communication, and the optimization strategy, and important class diagrams and flowcharts are given. The intervention effects of positive thinking training on sports performance and on improving athletes' attention and receptivity have been verified and recognized by coaches and athletes. With the putting simulator, the experimental class had higher hole-in-hole parameters than the control class, a highly significant difference. In a virtual scene, the more detailed information a 3D model contains, the more polygons the model needs, so the computer must draw many polygons per frame, which has a great impact on the real-time performance of scene drawing. The parameters of the 5-yard chip-and-shoot in the experimental class were higher than those in the control class, and there was a very significant difference between the parameters of the 15-yard chip-and-shoot in the experimental class and those in the control class. The experimental results show that the model optimization processing method and rendering acceleration technology proposed in this paper can largely improve the rendering efficiency of 3D virtual scenes.
APA, Harvard, Vancouver, ISO, and other styles
46

Sabate, Charito Dela Cruz, and Mirador Labrador. "Development of geo-referenced agricultural map and management information system for Samar Island." Indonesian Journal of Electrical Engineering and Computer Science 26, no. 3 (2022): 1718. http://dx.doi.org/10.11591/ijeecs.v26.i3.pp1718-1724.

Full text
Abstract:
This study focuses on the developed web-based information system which allows local government units (LGUs) in Samar Island and other agricultural sectors to have access to updated farmers' information and the soil nutrient, soil fertility status, and crop suitability of soil in the different municipalities of Samar. A geo-referenced agricultural map and management system can be an important tool at both operational and policy levels by accelerating the supply of up-to-date data, supporting the implementation of different types of projects targeting objectives such as municipal development, regional economic development, agricultural development, sustainable resource management, good governance, and many others. The management information system was developed using MySQL. The server-side language used in this study was PHP and the client-side language used was hypertext markup language (HTML). The map was created using Mapbox GL JS; geo-referenced map layers are saved to the local server as an array map for direct access via the front-end web application. With the use of the system, LGUs in Samar Island and other agricultural sectors can implement projects more efficiently because farmers' information and the nutrient capacity of the soil are readily available.
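A generic Mapbox GL JS snippet shows what serving such geo-referenced layers to the front end typically looks like. In the TypeScript sketch below, the access token, map style, coordinates, data URL, and attribute names are placeholders, not the system's actual configuration.

```typescript
// Generic Mapbox GL JS sketch: load a geo-referenced GeoJSON layer served by
// the application and colour it by an attribute.
import mapboxgl from "mapbox-gl";

mapboxgl.accessToken = "YOUR_MAPBOX_TOKEN";

const map = new mapboxgl.Map({
  container: "map",                              // id of a <div> in the page
  style: "mapbox://styles/mapbox/streets-v11",
  center: [124.8, 11.8],                         // roughly Samar Island
  zoom: 8,
});

map.on("load", () => {
  // Layer exported from the server-side database as GeoJSON (hypothetical URL).
  map.addSource("soil-fertility", {
    type: "geojson",
    data: "/data/soil-fertility.geojson",
  });

  map.addLayer({
    id: "soil-fertility-fill",
    type: "fill",
    source: "soil-fertility",
    paint: {
      // Colour municipalities by a "fertility" property on each feature.
      "fill-color": [
        "match", ["get", "fertility"],
        "high", "#2e7d32",
        "medium", "#fdd835",
        "low", "#c62828",
        "#9e9e9e",
      ],
      "fill-opacity": 0.6,
    },
  });
});
```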
APA, Harvard, Vancouver, ISO, and other styles
47

Affan, Usman Ibnu, Khairan Marzuki, and Lalu Zazuli Azhar Mardedi. "Bandwith Optimization on Hotspot using PCQ Method And L2tp VPN Routing for Online Game Latency." International Journal of Engineering and Computer Science Applications (IJECSA) 1, no. 2 (2022): 65–76. http://dx.doi.org/10.30812/ijecsa.v1i2.2379.

Full text
Abstract:
L2TP (Layer 2 Tunneling Protocol) VPN is one of the services available on Mikrotik. L2TP is a development of PPTP combined with L2F, and the network security protocol and encryption used for authentication are the same as in PPTP. However, L2TP communicates over UDP port 1701, and for better security L2TP is combined with IPSec as L2TP/IPSec. An example of its use is the Windows operating system, which by default uses L2TP/IPSec. The consequence in terms of configuration is that it is not as simple as PPTP: the client side must also support IPSec when implementing L2TP/IPSec. In terms of encryption, L2TP/IPSec naturally has a higher level of security than PPTP, which uses MPPE. Traffic passing through the L2TP tunnel will experience overhead. The L2TP protocol is more firewall-friendly than other types of VPN such as PPTP, which is a big advantage because most firewalls do not support GRE. However, L2TP does not provide encryption on its own, so it requires additional services for higher security; the authors therefore conclude that it is easier to configure for online games. An online game is a type of computer game that is growing in popularity and requires a computer network, usually the internet or Wi-Fi, together with current technology such as modems and cable connections. Therefore, internet service providers (ISPs) must provide stable and fast internet quality. The bandwidth needs of online games must be supported by an internet network that ensures the speed and stability of the connection, especially the stability of the latency of the online game itself.
APA, Harvard, Vancouver, ISO, and other styles
48

Sai Srinivas, T. Aditya, M. Bhuvaneswari, and M. Bharathi. "Optimizing On-Device AI: Overcoming Resource Constraints in Federated Learning for IoT." Journal of IoT-based Distributed Sensor Networks 1, no. 2 (2024): 10–20. http://dx.doi.org/10.46610/jibdsn.2024.v01i02.002.

Full text
Abstract:
Federated Learning (FL) is revolutionizing privacy in distributed IoT systems by eliminating the need to share raw data. However, it has its challenges. On the client side, attackers can tamper with data or inject false information, leading to what's known as backdoor poisoning attacks. Meanwhile, central servers can compromise data integrity and privacy by manipulating updates and extracting sensitive information from gradients. This is particularly problematic in IoT networks where user privacy is paramount. Innovative techniques like differential privacy and secure aggregation are being developed to tackle these issues and protect user data. Communication and learning convergence also pose significant hurdles due to uneven data distribution and the varied capabilities of IoT devices. To address this, new communication protocols and optimization algorithms are being implemented. Resource management is another critical area, given the limited computational power of many IoT devices. Solutions like resource-aware FL architectures and optimized AI models are emerging to ease these constraints. Additionally, advancements in AI hardware and lightweight training strategies are making deploying AI on IoT sensors more feasible. Finally, adopting standards such as ETSI Multi-access Edge Computing (MEC) and modern communication protocols is essential for the widespread deployment of FL-IoT systems, ensuring they are secure, efficient, and interoperable.
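The aggregation step at the heart of FL is compact enough to show directly. The TypeScript sketch below implements a minimal federated-averaging (FedAvg) round, weighting each client's update by its local sample count; it covers only the aggregation core, not the differential-privacy or secure-aggregation defenses the abstract discusses.

```typescript
// Minimal federated-averaging (FedAvg) step: the server combines client model
// updates weighted by how many local samples each client trained on.
type ClientUpdate = { weights: number[]; numSamples: number };

function federatedAverage(updates: ClientUpdate[]): number[] {
  const totalSamples = updates.reduce((s, u) => s + u.numSamples, 0);
  const dim = updates[0].weights.length;
  const global = new Array<number>(dim).fill(0);
  for (const u of updates) {
    const w = u.numSamples / totalSamples; // contribution proportional to data size
    for (let i = 0; i < dim; i++) global[i] += w * u.weights[i];
  }
  return global;
}

// Three IoT clients with unevenly distributed data.
const round = federatedAverage([
  { weights: [0.10, 0.20], numSamples: 500 },
  { weights: [0.12, 0.18], numSamples: 1500 },
  { weights: [0.08, 0.25], numSamples: 1000 },
]);
console.log("new global weights:", round);
```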
APA, Harvard, Vancouver, ISO, and other styles
49

Deha, Syafaat, Malldi Saesar, Syifa Aulia Sakira Nur Rahman, et al. "Interoperabilitas NDN-DPDK dan Forwarding NFD dalam Video Streaming berbasis NDN pada Virtual Machine." Syntax Literate ; Jurnal Ilmiah Indonesia 8, no. 7 (2023): 8478–95. http://dx.doi.org/10.36418/syntax-literate.v8i7.13187.

Full text
Abstract:
The Named Data Network (NDN) is a future network concept that addresses some of the issues present in the current internet architecture. One of the main advantages of NDN is the use of content caching at each router, enabling faster and more efficient data access. As internet network technology develops, the demand for video content continues to increase every year. Therefore, video optimization and delivery require high access speeds so that clients can run content from the server seamlessly. This research incorporates NDN-DPDK (Named Data Network - Data Plane Development Kit), which is designed as hardware acceleration, and uses NDN's default forwarder, the NDN Forwarding Daemon (NFD), which allows a wide variety of experiments with the NDN architecture. The implementation and analysis of video streaming performance use Quality of Service (QoS) parameters such as RTT, throughput, and startup delay to measure the quality of video streaming on the network side. In addition, CPU measurements are made on the router closest to the client to evaluate the traffic load required by the forwarder to transmit data. The implementation and analysis in this study obtained video streaming quality on the second accessor with an RTT of 4 seconds, throughput above 0.4 MBps, a video startup delay of 8 seconds, and 100% CPU usage.
APA, Harvard, Vancouver, ISO, and other styles
50

S, Supreeth, and Kirankumari Patil. "Hybrid Genetic Algorithm and Modified-Particle Swarm Optimization Algorithm (GA-MPSO) for Predicting Scheduling Virtual Machines in Educational Cloud Platforms." International Journal of Emerging Technologies in Learning (iJET) 17, no. 07 (2022): 208–25. http://dx.doi.org/10.3991/ijet.v17i07.29223.

Full text
Abstract:
Cloud computing is expanding gradually as the number of educational applications rapidly increases. To access educational cloud services, internet connectivity is critically important, and the cloud environment relies on one basic technology to manage physical servers effectively: virtualization. In cloud computing, the data centers host numerous Virtual Machines (VMs) on top of the servers. Due to the rapid growth of educational platforms, the computational workload of the VMs is increasing. In cloud educational platforms, IT resources are provisioned over the network to execute jobs. Since the data generated on the client side is dynamic in nature, it is difficult to allocate computational resources efficiently. To enhance energy efficiency and provide resources in an optimized way, a VM scheduling mechanism based on a hybrid Genetic Algorithm-Modified Particle Swarm Optimization (GA-MPSO) is proposed in this work to achieve QoS goals such as reduced energy consumption, SLA violation, and cost over heterogeneous environments. The hybrid GA-MPSO develops the optimal range and improves the best range of scheduling the virtual resources to VMs from Physical Machines (PMs). Compared with other VM scheduling algorithms, the proposed approach achieves an energy consumption of 105 kWh and an SLA violation rate of 0.08%, reduces the migration count to 2122, and incurs an overall cost of $2567.68. The different VM scheduling methods are evaluated against these results, which show that the hybrid GA-MPSO method outperforms the existing algorithms.
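To give a feel for the swarm-based half of such schedulers, the sketch below implements one generic particle swarm optimization step over a real-valued assignment vector. It is not the paper's GA-MPSO hybrid, and the fitness function is a placeholder standing in for the combined energy, SLA, migration, and cost objective.

```typescript
// Generic PSO step: each particle encodes a candidate assignment vector and
// moves toward its personal best and the global best.
type Particle = { position: number[]; velocity: number[]; best: number[]; bestFitness: number };

const fitness = (pos: number[]): number =>
  pos.reduce((s, x) => s + x * x, 0); // placeholder objective to minimize

function psoStep(swarm: Particle[], globalBest: number[], w = 0.7, c1 = 1.5, c2 = 1.5): number[] {
  let gBest = globalBest;
  let gBestFit = fitness(globalBest);
  for (const p of swarm) {
    for (let i = 0; i < p.position.length; i++) {
      const r1 = Math.random();
      const r2 = Math.random();
      p.velocity[i] =
        w * p.velocity[i] +
        c1 * r1 * (p.best[i] - p.position[i]) +
        c2 * r2 * (gBest[i] - p.position[i]);
      p.position[i] += p.velocity[i];
    }
    const f = fitness(p.position);
    if (f < p.bestFitness) {
      p.best = [...p.position];
      p.bestFitness = f;
    }
    if (f < gBestFit) {
      gBest = [...p.position];
      gBestFit = f;
    }
  }
  return gBest;
}

// Two particles over a 3-dimensional assignment vector, iterated 50 times.
const swarm: Particle[] = Array.from({ length: 2 }, () => {
  const position = Array.from({ length: 3 }, () => Math.random() * 4 - 2);
  return { position, velocity: [0, 0, 0], best: [...position], bestFitness: fitness(position) };
});
let globalBest = [...swarm[0].best];
for (let iter = 0; iter < 50; iter++) globalBest = psoStep(swarm, globalBest);
console.log("best assignment vector found:", globalBest);
```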
APA, Harvard, Vancouver, ISO, and other styles