Journal articles on the topic 'Computer networks Real-time data processing. Adaptive computing systems. Electronic data processing'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 journal articles for your research on the topic 'Computer networks Real-time data processing. Adaptive computing systems. Electronic data processing.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Zhou, Chengcheng, Qian Liu, and Ruolei Zeng. "Novel Defense Schemes for Artificial Intelligence Deployed in Edge Computing Environment." Wireless Communications and Mobile Computing 2020 (August 3, 2020): 1–20. http://dx.doi.org/10.1155/2020/8832697.

Abstract:
The last few years have seen the great potential of artificial intelligence (AI) technology to efficiently and effectively deal with the incredible deluge of data generated by Internet of Things (IoT) devices. If all this massive data is transferred to the cloud for intelligent processing, it not only places considerable strain on network bandwidth but also cannot meet the needs of AI applications that require fast, real-time responses. To meet this requirement, mobile or multiaccess edge computing (MEC) is receiving substantial interest, and its importance is becoming more prominent. However, with the emergence of edge intelligence, AI also faces serious security threats in AI model training, AI model inference, and private data. This paper provides three novel defense strategies to tackle malicious attacks in these three aspects. First of all, we introduce a cloud-edge collaborative antiattack scheme to realize reliable incremental updating of AI by ensuring the security of the data generated in the training phase. Furthermore, we propose an edge-enhanced defense strategy based on an adaptive traceability and punishment mechanism to effectively and radically solve the security problem in the inference stage of the AI model. Finally, we establish a system model based on chaotic encryption with the three-layer architecture of MEC to effectively guarantee the security and privacy of the data during the construction of AI models. The experimental results of these three countermeasures verify the correctness of the conclusions and the feasibility of the methods.
2

Yadav, Rahul, and Weizhe Zhang. "MeReg: Managing Energy-SLA Tradeoff for Green Mobile Cloud Computing." Wireless Communications and Mobile Computing 2017 (2017): 1–11. http://dx.doi.org/10.1155/2017/6741972.

Abstract:
Mobile cloud computing (MCC) provides various cloud computing services to mobile users. The rapid growth of MCC users requires large-scale MCC data centers to provide them with data processing and storage services. The growth of these data centers directly impacts electrical energy consumption, which affects businesses as well as the environment through carbon dioxide (CO2) emissions. Moreover, a large amount of energy is wasted keeping servers running during low workload. To reduce the energy consumption of mobile cloud data centers, an energy-aware host overload detection algorithm and virtual machine (VM) selection algorithms for VM consolidation are required when host underload or overload is detected. After allocating resources to all VMs, underloaded hosts should be switched to energy-saving mode to minimize power consumption. To address this issue, we propose an adaptive heuristic energy-aware algorithm, which creates an upper CPU utilization threshold from recent CPU utilization history to detect overloaded hosts, together with dynamic VM selection algorithms to consolidate VMs from overloaded or underloaded hosts. The goal is to minimize total energy consumption and maximize Quality of Service, including the reduction of service level agreement (SLA) violations. The CloudSim simulator is used to validate the algorithm, and simulations are conducted on real workload traces from 10 different days, as provided by PlanetLab.
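To make the adaptive threshold idea concrete, here is a minimal sketch (Python) of one plausible reading of the overload detector: the upper utilization threshold is derived from the median absolute deviation of a recent utilization window, so the threshold tightens when utilization is volatile. The safety parameter s and the window length are illustrative assumptions, not values from the paper.

    import statistics

    def upper_threshold(history, s=2.5):
        # Adapt the threshold to recent volatility: the more the CPU
        # utilization fluctuates, the lower (safer) the threshold.
        med = statistics.median(history)
        mad = statistics.median(abs(u - med) for u in history)
        return 1.0 - s * mad

    def is_overloaded(history, current_util):
        # Flag the host as overloaded when current utilization
        # exceeds the adaptive upper threshold.
        return current_util > upper_threshold(history)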
3

Zhang, Xuanyu, Yining Gao, Guangyi Xiao, Bo Feng, and Wenshu Chen. "A Real-Time Garbage Truck Supervision and Data Statistics Method Based on Object Detection." Wireless Communications and Mobile Computing 2020 (October 10, 2020): 1–9. http://dx.doi.org/10.1155/2020/8827310.

Abstract:
Garbage classification is difficult to supervise during collection and transportation. This paper proposes a computer vision-based method for intelligent supervision and workload statistics of garbage trucks. In terms of hardware, we deploy a camera and an NPU-equipped image processing unit alongside the original on-board computing and communication equipment. In terms of software, we run the YOLOv3-tiny algorithm on the image processing unit to perform real-time object detection on garbage truck operations, collect statistics on the color, specifications, and quantity of the garbage bins emptied by the truck, and upload the results to a server for recording and display. The proposed method has low deployment and maintenance costs while maintaining excellent accuracy and real-time performance, giving it good commercial application value.
4

B. A. Alaasam, Ameer. "The Challenges and Prerequisites of Data Stream Processing in Fog Environment for Digital Twin in Smart Industry." International Journal of Interactive Mobile Technologies (iJIM) 15, no. 15 (August 11, 2021): 126. http://dx.doi.org/10.3991/ijim.v15i15.24181.

Abstract:
Smart industry systems integrate historical and current data from sensors with physical and digital systems to control product states. For example, a Digital Twin (DT) system predicts the future state of physical assets using live simulation and controls the current state through real-time feedback. These systems rely on the ability to process big data streams to provide real-time responses. For example, it is estimated that one autonomous vehicle (AV) could produce 30 terabytes of data per day; AVs will not be on the road until there is an effective way to manage this big data and solve latency challenges. Cloud computing fails the latency challenge, while fog computing addresses it by moving parts of the computation from the cloud to the edge of the network, near the asset, to reduce latency. This work studies the challenges of data stream processing for DT in a fog environment. The challenges include fog architecture, the necessity of loosely coupled design, virtual machines versus containers, stateful versus stateless operations, stream processing tools, and live migration between fog nodes. The work also proposes a fog computing architecture and provides a vision of the prerequisites for meeting these challenges.
5

Jayasinghe, Upul, Gyu Myoung Lee, Áine MacDermott, and Woo Seop Rhee. "TrustChain: A Privacy Preserving Blockchain with Edge Computing." Wireless Communications and Mobile Computing 2019 (July 8, 2019): 1–17. http://dx.doi.org/10.1155/2019/2014697.

Abstract:
Recent advancements in the Internet of Things (IoT) have enabled the collection, processing, and analysis of various forms of data, including personal data, from billions of objects to generate valuable knowledge and more innovative services for stakeholders. Yet, this paradigm continuously suffers from numerous security and privacy concerns, mainly due to its massive scale, distributed nature, and scarcity of resources towards the edge of IoT networks. Interestingly, blockchain-based techniques offer strong countermeasures to protect data from tampering while supporting the distributed nature of the IoT. However, the enormous energy consumption required to verify each block of data makes blockchain difficult to use with resource-constrained IoT devices and real-time IoT applications. Moreover, although it secures data from alteration, its public ledger system can expose the privacy of stakeholders. Edge computing approaches suggest a potential alternative to centralized processing, populating real-time applications at the edge and reducing the privacy concerns associated with cloud computing. Hence, this paper proposes a novel privacy-preserving blockchain called TrustChain, which combines the power of blockchains with trust concepts to eliminate issues associated with traditional blockchain architectures. This work investigates how TrustChain can be deployed in the edge computing environment with different levels of absorption to eliminate the delays and privacy concerns associated with centralized processing and to preserve resources in IoT networks.
6

Fu, Chao, Qing Lv, and Reza G. Badrnejad. "Fog computing in health management processing systems." Kybernetes 49, no. 12 (January 4, 2020): 2893–917. http://dx.doi.org/10.1108/k-09-2019-0621.

Abstract:
Purpose. Fog computing (FC) is a new field of research that has emerged as a complement to the cloud; it can mitigate problems inherent to the cloud computing (CC) and Internet of Things (IoT) model such as unreliable latency, bandwidth constraints, security, and mobility. Because there is no comprehensive study of FC techniques in health management processing systems, this paper surveys and analyzes the existing techniques systematically and offers suggestions for upcoming work.
Design/methodology/approach. The paper complies with the methodological requirements of systematic literature reviews (SLR). It investigates the newest systems and studies their practical techniques in detail. The applications of FC in health management systems are categorized into three major groups: review articles, data analysis, and framework and model mechanisms.
Findings. The results indicate that despite the popularity of FC for its real-time processing, low latency, dynamic configuration, scalability, low reaction time (less than a second), high bandwidth, battery life, and network traffic, a few issues remain unanswered, such as security. The most recent research has focused on improvements in remote monitoring of patients, such as lower latency and rapid response. The results also show the use of qualitative methodology and case studies in applying FC to health management systems. While FC studies are growing in the clinical field, CC studies are decreasing.
Research limitations/implications. This study aims to be comprehensive, but there are some limitations. It surveys only articles mined according to a keyword exploration of FC health, FC health care, FC health big data, and FC health management system. Fog-based applications in health management systems may not be published under these keywords. Moreover, publications written in non-English languages were ignored, and some important research may be printed in a language other than English.
Practical implications. The results of this survey will be valuable for academicians and can provide visions into future research areas in this domain. The survey helps hospitals and related industries identify FC needs. Moreover, the disadvantages and advantages of the surveyed systems are studied, and their key issues are emphasized to develop more effective FC in health management processing mechanisms over IoT in the future.
Originality/value. Previous literature reviews in this field have used a simple literature review to find the tasks and challenges. In this study, for the first time, FC in health management processing systems is examined in a systematic review focused on the mediating role of the IoT, thereby providing a novel contribution. An SLR is conducted to find more specific answers to the proposed research questions and helps reduce implicit researcher bias. Through broad search strategies, predefined search strings, and uniform inclusion and exclusion criteria, SLR effectively forces researchers to search for studies beyond their subject areas and networks.
7

Ma, Xingmin, Shenggang Xu, Fengping An, and Fuhong Lin. "A Novel Real-Time Image Restoration Algorithm in Edge Computing." Wireless Communications and Mobile Computing 2018 (August 9, 2018): 1–13. http://dx.doi.org/10.1155/2018/3610482.

Abstract:
Owing to its high processing complexity, image restoration has typically been performed offline and is hard to apply in real-time production settings. The development of edge computing provides a new solution for real-time image restoration: the original image can be uploaded to an edge node, processed in real time, and the results returned to users immediately. However, the processing capacity of an edge node is still limited, which requires a lightweight image restoration algorithm. This paper proposes a novel real-time image restoration algorithm for edge computing. Firstly, 10 classical functions are used to determine the population size and maximum iteration count of the traction fruit fly optimization algorithm (TFOA). Secondly, TFOA is used to optimize the parameters of the least squares support vector regression (LSSVR) kernel function, with the image restoration error function serving as the fitness function of TFOA. Thirdly, the LSSVR algorithm is used to restore the image. During restoration, the training process establishes a mapping relationship between the degraded image and the adjacent pixels of the original image; once the relationship is established, the degraded image can be restored using it. Comparative experiments and analysis show that the proposed method meets the requirements of real-time image restoration, speeding up restoration and improving image quality.
8

Li, Xianwei, and Baoliu Ye. "Latency-Aware Computation Offloading for 5G Networks in Edge Computing." Security and Communication Networks 2021 (September 22, 2021): 1–15. http://dx.doi.org/10.1155/2021/8800234.

Abstract:
With the development of the Internet of Things, massive computation-intensive tasks are generated by mobile devices whose limited computing and storage capacity leads to poor quality of service. Edge computing, an effective computing paradigm, was proposed for efficient and real-time data processing by providing computing resources at the edge of the network. The deployment of 5G promises to speed up data transmission but also further increases the volume of tasks to be offloaded. However, how to transfer data or tasks to edge servers in 5G for processing with high response efficiency remains a challenge. In this paper, a latency-aware computation offloading method for 5G networks is proposed. Firstly, the latency and energy consumption models of edge computation offloading in 5G are defined. Then a fine-grained computation offloading method is employed to reduce the overall completion time of the tasks. The approach is further extended to solve the multiuser computation offloading problem. To verify the effectiveness of the proposed method, extensive simulation experiments were conducted. The results show that the proposed offloading method can effectively reduce the execution latency of the tasks.
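As a rough illustration of the kind of latency and energy models the abstract refers to, the sketch below (Python) uses the textbook formulation: local latency is CPU cycles over local frequency, and offload latency is uplink transmission time plus edge execution time. The variable names and the assumption that the result size is negligible are ours, not the paper's.

    def local_latency(cycles, f_local_hz):
        # Execute the task on the device itself.
        return cycles / f_local_hz

    def offload_latency(data_bits, rate_bps, cycles, f_edge_hz):
        # Uplink transmission plus execution on the edge server;
        # the downlink of the (small) result is ignored here.
        return data_bits / rate_bps + cycles / f_edge_hz

    def offload_energy(data_bits, rate_bps, p_tx_w, p_idle_w, cycles, f_edge_hz):
        # Energy spent transmitting plus idling while the edge computes.
        return p_tx_w * (data_bits / rate_bps) + p_idle_w * (cycles / f_edge_hz)

    def should_offload(cycles, data_bits, f_local_hz, f_edge_hz, rate_bps):
        # Offload whenever the remote path finishes sooner.
        return offload_latency(data_bits, rate_bps, cycles, f_edge_hz) \
               < local_latency(cycles, f_local_hz)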
9

Wang, Siye, Ziwen Cao, Yanfang Zhang, Weiqing Huang, and Jianguo Jiang. "A Temporal and Spatial Data Redundancy Processing Algorithm for RFID Surveillance Data." Wireless Communications and Mobile Computing 2020 (February 24, 2020): 1–12. http://dx.doi.org/10.1155/2020/6937912.

Abstract:
The Radio Frequency Identification (RFID) data acquisition rate used for monitoring is so high that the RFID data stream contains a large amount of redundant data, which increases system overhead. To balance the accuracy and real-time performance of monitoring, redundant RFID data must be filtered out. We propose an algorithm called the Time-Distance Bloom Filter (TDBF) that takes into account both the read time and read distance of RFID tags, greatly reducing data redundancy. In addition, we propose a metric for evaluating filter performance. In experiments, the TDBF algorithm achieved a performance score of 5.2, while the Time Bloom Filter (TBF) scored only 0.03, indicating that TDBF achieves a lower false negative rate, lower false positive rate, and higher data compression rate. Furthermore, in a dynamic scenario, the TDBF algorithm can filter data according to the actual scenario requirements.
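The following sketch (Python) shows one way a time- and distance-aware Bloom filter could be realized: reads are bucketed by tag, time window, and distance band before hashing, so repeated nearby reads collide and are reported as redundant. Bucket sizes, filter width, and hash count are illustrative assumptions rather than the TDBF parameters from the paper.

    import hashlib

    class TimeDistanceBloomFilter:
        def __init__(self, m_bits=1 << 20, k=4, t_window_s=5.0, d_bucket_m=0.5):
            self.bits = bytearray(m_bits // 8)
            self.m, self.k = m_bits, k
            self.t_window_s, self.d_bucket_m = t_window_s, d_bucket_m

        def _indexes(self, key):
            # k bit positions, each derived from a salted SHA-256 digest.
            for i in range(self.k):
                h = hashlib.sha256(f"{i}:{key}".encode()).digest()
                yield int.from_bytes(h[:8], "big") % self.m

        def _bucket(self, tag_id, t_s, dist_m):
            # Quantize time and distance so nearby reads collide on purpose.
            return (tag_id, int(t_s // self.t_window_s), int(dist_m // self.d_bucket_m))

        def redundant(self, tag_id, t_s, dist_m):
            # True if an equivalent read was already recorded;
            # records the read either way.
            idx = list(self._indexes(self._bucket(tag_id, t_s, dist_m)))
            hit = all((self.bits[i // 8] >> (i % 8)) & 1 for i in idx)
            for i in idx:
                self.bits[i // 8] |= 1 << (i % 8)
            return hit

A stream filter would simply drop every read for which redundant(...) returns True and forward the rest.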
10

Huo, Yan, Chengtao Yong, and Yanfei Lu. "Re-ADP: Real-Time Data Aggregation with Adaptive ω-Event Differential Privacy for Fog Computing." Wireless Communications and Mobile Computing 2018 (July 8, 2018): 1–13. http://dx.doi.org/10.1155/2018/6285719.

Abstract:
In the Internet of Things (IoT), aggregation and release of real-time data can often be used to mine useful information, making human lives more convenient and efficient. However, privacy disclosure is one of the most pressing issues because aggregated data usually carries users' sensitive information. Various data encryption technologies have therefore emerged to achieve privacy preservation, but they may introduce complicated computing and high communication overhead, and they do not protect endless data streams. Considering these challenges, we propose a real-time stream data aggregation framework with adaptive ω-event differential privacy (Re-ADP). Based on adaptive ω-event differential privacy, the framework can protect data collected by sensors over any dynamic window of ω time stamps in an infinite stream. It is designed for the fog computing architecture, which dramatically extends cloud computing to the edge of networks. In our proposed framework, fog servers send only aggregated, secured data to cloud servers, which relieves the computing overhead of cloud servers, improves communication efficiency, and protects data privacy. Finally, experimental results demonstrate that our framework outperforms existing methods and improves data availability with stronger privacy preservation.
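For intuition, the sketch below (Python) shows the uniform-allocation baseline for ω-event differential privacy: each timestamp spends ε/ω of the budget, so any window of ω consecutive releases spends at most ε. Re-ADP's adaptive allocation is more elaborate; this only illustrates the mechanism the framework builds on. All parameter values are assumptions.

    import random

    class OmegaEventUniform:
        def __init__(self, eps=1.0, omega=10, sensitivity=1.0):
            # Each release gets eps/omega of the budget, so the
            # per-release Laplace scale is sensitivity * omega / eps.
            self.scale = sensitivity * omega / eps

        def release(self, true_aggregate):
            # Laplace(0, scale) noise: the difference of two i.i.d.
            # exponential variables with mean `scale` is Laplace.
            noise = (random.expovariate(1.0 / self.scale)
                     - random.expovariate(1.0 / self.scale))
            return true_aggregate + noise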
11

Chang, Ray-I., Yu-Hsien Chu, Chia-Hui Wang, and Niang-Ying Huang. "Video-Like Lossless Compression of Data Cube for Big Data Query in Wireless Sensor Networks." WSEAS TRANSACTIONS ON COMMUNICATIONS 20 (August 10, 2021): 139–45. http://dx.doi.org/10.37394/23204.2021.20.19.

Abstract:
Wireless Sensor Networks (WSNs) contain many sensor nodes placed in a chosen spatial area to temporally monitor environmental changes. As the sensor data is big, it should be well organized and stored in cloud servers to support efficient data query. In this paper, we first organize the streamed sensor data into "data cubes" to enhance data compression by video-like lossless compression (VLLC). With the layered tree structure of WSNs, compression can be done on the aggregation nodes of edge computing. An algorithm is then designed to organize and store these VLLC data cubes in cloud servers to support cost-effective big data queries with parallel processing. Our experiments use real-world sensor data. Results show that our method saves 94% construction time and 79% storage space while achieving the same retrieval time in data queries, compared with the well-known database MySQL.
12

Chen, Pengpeng, Hongjin Lv, Shouwan Gao, Qiang Niu, and Shixiong Xia. "A Real-Time Taxicab Recommendation System Using Big Trajectories Data." Wireless Communications and Mobile Computing 2017 (2017): 1–18. http://dx.doi.org/10.1155/2017/5414930.

Abstract:
Carpooling is becoming an increasingly significant transport choice because it can provide additional service options, ease traffic congestion, and reduce total vehicle exhaust emissions. Although some recommendation systems have recently offered taxicab carpooling services, they cannot fully utilize and understand the known information and the essence of carpooling. This study proposes a novel recommendation algorithm, called VOT, which provides either a vacant or an occupied taxicab in response to a passenger's request. VOT recommends the closest vacant taxicab to passengers; otherwise, it infers the destinations of occupied taxicabs by similarity comparison and clustering algorithms and recommends an occupied taxicab heading to a nearby destination. Using Spark, an efficient big data processing framework, we greatly improve the efficiency of large-scale data processing. This study evaluates VOT with a real-world dataset that contains GPS data from 14,747 taxicabs. Results show that the proportion of forecasted destinations within 900 m of the actual destinations reaches 90.29%. The total mileage to deliver all passengers is significantly reduced (47.84% on average). Specifically, the reduction in total mileage during nonrush hours outperforms other systems by 35%. VOT performs similarly to other systems in actual detour ratio, and even better during rush hours.
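A toy illustration of the recommendation rule described above (Python): return the nearest vacant cab when one exists, otherwise fall back to an occupied cab whose inferred destination lies near the passenger. The data layout, the haversine distance, and the reuse of the 900 m figure as a detour bound are our assumptions for illustration only.

    import math

    def haversine_km(lat1, lon1, lat2, lon2):
        # Great-circle distance between two (lat, lon) points.
        r = 6371.0
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dl = math.radians(lon2 - lon1)
        a = (math.sin((p2 - p1) / 2) ** 2
             + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2)
        return 2 * r * math.asin(math.sqrt(a))

    def recommend(passenger, vacant, occupied_with_dest, detour_km=0.9):
        # passenger: (lat, lon); cabs: dicts with "pos" and, for
        # occupied cabs, an inferred "dest" (lat, lon).
        if vacant:
            return min(vacant, key=lambda c: haversine_km(*passenger, *c["pos"]))
        near = [c for c in occupied_with_dest
                if haversine_km(*passenger, *c["dest"]) <= detour_km]
        return min(near, key=lambda c: haversine_km(*passenger, *c["pos"])) if near else None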
13

Yu, Zhenhao, Fang Liu, Yinquan Yuan, Sihan Li, and Zhengying Li. "Signal Processing for Time Domain Wavelengths of Ultra-Weak FBGs Array in Perimeter Security Monitoring Based on Spark Streaming." Sensors 18, no. 9 (September 4, 2018): 2937. http://dx.doi.org/10.3390/s18092937.

Abstract:
To detect perimeter intrusion accurately and quickly, stream computing technology was used to improve real-time data processing in perimeter intrusion detection systems. Based on the traditional density-based spatial clustering of applications with noise (T-DBSCAN) algorithm, which depends on manual adjustment of neighborhood parameters, an adaptive-parameter DBSCAN (AP-DBSCAN) method that can perform unsupervised calculations is proposed. The proposed AP-DBSCAN method was implemented on a Spark Streaming platform to handle data stream collection and real-time analysis, as well as to judge and identify the different types of intrusion. A number of sensing and processing experiments were conducted, and the experimental data indicated that the proposed AP-DBSCAN method on the Spark Streaming platform exhibited good calibration of the adaptive parameters and the same accuracy as the T-DBSCAN method without manual setting of neighborhood parameters, in addition to achieving good performance in perimeter intrusion detection systems.
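The adaptive-parameter idea can be illustrated with a common heuristic (not necessarily the AP-DBSCAN rule from the paper): estimate eps from the knee of the sorted k-distance curve and set min_samples = k. A minimal sketch using scikit-learn, under those assumptions:

    import numpy as np
    from sklearn.cluster import DBSCAN
    from sklearn.neighbors import NearestNeighbors

    def adaptive_dbscan(points, k=4):
        # Sorted distance to each point's k-th nearest neighbor.
        nn = NearestNeighbors(n_neighbors=k).fit(points)
        dists, _ = nn.kneighbors(points)
        kdist = np.sort(dists[:, -1])
        # Crude knee detection: index of the largest second difference.
        knee = int(np.argmax(np.diff(kdist, 2))) + 1
        eps = float(kdist[knee])
        return DBSCAN(eps=eps, min_samples=k).fit_predict(points)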
14

Zhao, Xiao-ping, Yong-hong Zhang, and Fan Shao. "A Multifault Diagnosis Method of Gear Box Running on Edge Equipment." Security and Communication Networks 2020 (August 3, 2020): 1–13. http://dx.doi.org/10.1155/2020/8854236.

Abstract:
In recent years, a large number of edge computing devices have been used to monitor the operating state of industrial equipment and perform fault diagnosis analysis, so the fault diagnosis algorithm running on the edge device is particularly important. With the increase in the number of monitoring points and in sampling frequency, mechanical health monitoring has entered the era of big data. Edge computing can process and analyze data in real time or faster, moving data processing closer to the source rather than to an external data center or cloud, which shortens the delay time. After quantizing the deep learning model to 8 and 16 bits, there is no obvious loss of accuracy compared with the original floating-point model, which shows that the model can be deployed and run for inference on edge devices while preserving real-time performance. Compared with server deployment, using edge devices not only reduces costs but also makes deployment more flexible.
15

Choi, Jongmoo, Bumjong Jung, Yongjae Choi, and Seiil Son. "An Adaptive and Integrated Low-Power Framework for Multicore Mobile Computing." Mobile Information Systems 2017 (2017): 1–11. http://dx.doi.org/10.1155/2017/9642958.

Abstract:
Employing multicore processors in mobile computing, such as smartphones and IoT (Internet of Things) devices, is a double-edged sword. It provides the ample computing capability required by recent intelligent mobile services, including voice recognition, image processing, big data analysis, and deep learning. However, it consumes a great deal of power, creating thermal hot spots and putting pressure on the energy resources of a mobile device. In this paper, we propose a novel framework that integrates two well-known low-power techniques, DPM (Dynamic Power Management) and DVFS (Dynamic Voltage and Frequency Scaling), for energy efficiency in multicore mobile systems. The key feature of the proposed framework is adaptability: by monitoring online resource usage such as CPU utilization and power consumption, the framework can orchestrate diverse DPM and DVFS policies according to workload characteristics. Experiments on real implementations using three mobile devices show that it can reduce power consumption by 22% to 79% while negligibly affecting the performance of workloads.
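A skeleton of how such an integrated DPM/DVFS policy loop might look (Python, over assumed platform hooks read_utilization, set_frequency, and enter_idle_state; the frequency table and thresholds are illustrative, not the paper's):

    import time

    def govern(read_utilization, set_frequency, enter_idle_state,
               freqs_khz=(422400, 652800, 1036800, 1497600),
               up=0.85, down=0.30, period_s=0.1):
        level = 0
        while True:
            util = read_utilization()          # 0.0 .. 1.0
            if util > up and level < len(freqs_khz) - 1:
                level += 1                     # DVFS: step frequency up under load
            elif util < down and level > 0:
                level -= 1                     # DVFS: step frequency down when idleish
            set_frequency(freqs_khz[level])
            if util == 0.0:
                enter_idle_state()             # DPM: power-gate the fully idle core
            time.sleep(period_s)

An adaptive framework like the one described would additionally switch between such policies (e.g., more aggressive scaling for bursty workloads) based on the monitored power and utilization profile.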
16

Bruhn, Fredrik C., Nandinbaatar Tsog, Fabian Kunkel, Oskar Flordal, and Ian Troxel. "Enabling radiation tolerant heterogeneous GPU-based onboard data processing in space." CEAS Space Journal 12, no. 4 (June 15, 2020): 551–64. http://dx.doi.org/10.1007/s12567-020-00321-9.

Abstract:
The last decade has seen a dramatic increase in small satellite missions for commercial, public, and government intelligence applications. Given the rapid commercialization of constellation-driven services in Earth observation, situational domain awareness, communications including machine-to-machine interfaces, exploration, etc., small satellites represent an enabling technology for a large growth market generating truly big data. Examples of modern sensors that can generate very large amounts of data are optical sensors, hyperspectral imagers, Synthetic Aperture Radar (SAR), and infrared imagers. Traditional handling and downloading of big data from space requires large onboard mass storage and a high-bandwidth downlink, with a trend towards optical links. Many missions and applications can benefit significantly from onboard cloud computing, similar to Earth-based cloud services, enabling space systems to provide near real-time data and low-latency distribution of critical and time-sensitive information to users. In addition, downlink capacity can be used more effectively by applying more onboard processing to reduce the data and create high-value information products. This paper discusses current implementations and a roadmap for leveraging high-performance computing tools and methods on small satellites with radiation-tolerant hardware. This includes runtime analysis with benchmarks of convolutional neural networks and matrix multiplications using industry-standard tools (e.g., TensorFlow and PlaidML). In addition, a half-CubeSat-volume (0.5U, 10 × 10 × 5 cm³) cloud computing solution, called SpaceCloud™ iX5100, based on AMD 28 nm APU technology, is presented as an example of a heterogeneous computing solution. An evaluation of the AMD 14 nm Ryzen APU is presented as a candidate for future advanced onboard processing for space vehicles.
17

Xu, Jianwen, Kaoru Ota, and Mianxiong Dong. "Real-Time Awareness Scheduling for Multimedia Big Data Oriented In-Memory Computing." IEEE Internet of Things Journal 5, no. 5 (October 2018): 3464–73. http://dx.doi.org/10.1109/jiot.2018.2802913.

18

Kim, Svetlana, Jieun Kang, and YongIk Yoon. "Linked-Object Dynamic Offloading (LODO) for the Cooperation of Data and Tasks on Edge Computing Environment." Electronics 10, no. 17 (September 3, 2021): 2156. http://dx.doi.org/10.3390/electronics10172156.

Abstract:
With the evolution of the Internet of Things (IoT), edge computing technology is used to efficiently process the rapidly increasing data from various IoT devices. Edge computing offloading reduces data processing time and bandwidth usage by processing data in real time on the device where the data is generated or on a nearby server. Previous studies have proposed offloading between IoT devices through local-edge collaboration from resource-constrained edge servers. However, they did not consider nearby edge servers in the same layer with available computing resources. Consequently, quality of service (QoS) degrades due to the restricted resources of edge computing, and execution latency rises due to congestion. Finding an optimal target server for offloaded tasks in a rapidly changing dynamic environment is still challenging. Therefore, a new cooperative offloading method to control edge computing resources is needed to efficiently allocate limited resources between distributed edges. This paper proposes the LODO (linked-object dynamic offloading) algorithm, which provides an ideal balance between edges by considering ready and running states. The LODO algorithm carries out the tasks in its list in order of the correlation between data and tasks, through linked objects. Furthermore, dynamic offloading considers the running status of all cooperating terminals and schedules task distribution accordingly. This can decrease the average delay time and average power consumption of terminals. In addition, the resource shortage problem can be settled by distributing task processing.
19

Rathee, Geetanjali, Adel Khelifi, and Razi Iqbal. "Artificial Intelligence- (AI-) Enabled Internet of Things (IoT) for Secure Big Data Processing in Multihoming Networks." Wireless Communications and Mobile Computing 2021 (August 11, 2021): 1–9. http://dx.doi.org/10.1155/2021/5754322.

Abstract:
Automated techniques enabled by Artificial Neural Networks (ANN), the Internet of Things (IoT), and cloud-based services affect the real-time analysis and processing of information in a variety of applications. In addition, multihoming is a type of network that combines various types of networks into a single environment while managing a huge amount of data. Nowadays, big data processing and monitoring in multihoming networks receive little attention with respect to reducing security risk and maintaining efficiency while processing or monitoring the information. The use of AI-based systems in multihoming big data with IoT- and AI-integrated systems may bring benefits in various aspects. Although multihoming security issues and their analysis have been well studied by various scientists and researchers, little attention has been paid to big data security processing in multihoming, especially using automated techniques and systems. The aim of this paper is to propose an IoT-based artificial network to process and compute big data while ensuring secure communication in a multihoming network using the Bayesian Rule (BR) and Levenberg-Marquardt (LM) algorithms. Further, the efficiency and effect of the AI-assisted mechanism on multihoming information processing are evaluated over various parameters such as classification accuracy, classification time, specificity, sensitivity, ROC, and F-measure.
20

Gong, Changqing, Mengfei Li, Liang Zhao, Zhenzhou Guo, and Guangjie Han. "Homomorphic Evaluation of the Integer Arithmetic Operations for Mobile Edge Computing." Wireless Communications and Mobile Computing 2018 (November 15, 2018): 1–13. http://dx.doi.org/10.1155/2018/8142102.

Abstract:
With the rapid development of 5G networks and the Internet of Things (IoT), many mobile and IoT devices generate massive amounts of multisource heterogeneous data, and effective processing of such data becomes an urgent problem. Traditional centralised models of cloud computing struggle to process multisource heterogeneous data effectively. Mobile edge computing (MEC) emerges as a new technology to optimise applications and cloud computing systems. However, features of MEC such as content perception, real-time computing, and parallel processing make the data security and privacy issues of the cloud computing environment even more prominent. Protecting sensitive data through traditional encryption is very secure, but it makes it impossible for the MEC to compute on the encrypted data. Fully homomorphic encryption (FHE) overcomes this limitation: FHE can be used to compute on ciphertext directly. Therefore, we propose ciphertext arithmetic operations that implement integer homomorphic encryption to ensure data privacy and computability. Our scheme follows the integer operation rules for complement, addition, subtraction, multiplication, and division. First, we use Boolean polynomials (BP) containing logical AND and XOR operations to represent the rules. Second, we convert the BP into homomorphic polynomials (HP) to perform ciphertext operations. Then we optimise our scheme: we divide the ciphertext vector of the integer encryption into subvectors of length 2 and increase the length of the FHE private key to support an additional 3-multiplication level. We test our optimised scheme on DGHV and CMNT. The number of ciphertext refreshes in the optimised scheme is reduced by 2/3 compared to the original scheme, and the time overhead is reduced by 1/3. We also examine our scheme on CNT without bootstrapping; the time overhead of the optimised scheme over DGHV and CMNT is close to that of the original scheme over CNT.
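The Boolean-polynomial encoding of integer arithmetic can be sketched as a ripple-carry adder over encrypted bits, where hxor and hand stand in for the scheme's homomorphic XOR and AND (placeholders for DGHV-style ciphertext addition and multiplication; not the paper's actual API). A minimal sketch, assuming least-significant-bit-first ciphertext vectors of equal length:

    def ripple_carry_add(a_bits, b_bits, hxor, hand):
        # Full adder per bit position: sum = a ^ b ^ carry,
        # carry_out = (a & b) ^ ((a ^ b) & carry).
        out, carry = [], None
        for a, b in zip(a_bits, b_bits):
            s = hxor(a, b)
            out.append(s if carry is None else hxor(s, carry))
            carry = hand(a, b) if carry is None else hxor(hand(a, b), hand(s, carry))
        out.append(carry)  # final carry becomes the top bit
        return out

Each AND consumes multiplicative depth, which is why the abstract's optimisation of supporting an extra multiplication level and reducing ciphertext refreshes matters.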
21

Smith, Kristofer R., Hang Liu, Li-Tse Hsieh, Xavier de Foy, and Robert Gazda. "Wireless Adaptive Video Streaming with Edge Cloud." Wireless Communications and Mobile Computing 2018 (December 5, 2018): 1–13. http://dx.doi.org/10.1155/2018/1061807.

Abstract:
Wireless data traffic, especially video traffic, continues to increase at a rapid rate. Innovative network architectures and protocols are needed to improve the efficiency of data delivery and the quality of experience (QoE) of mobile users. Mobile edge computing (MEC) is a new paradigm that integrates computing capabilities at the edge of the wireless network. This paper presents a computation-capable and programmable wireless access network architecture to enable more efficient and robust video content delivery based on the MEC concept. It incorporates in-network data processing and communications under a unified software-defined networking platform. To address the multiple resource management challenges that arise in exploiting such integration, we propose a framework to optimize the QoE for multiple video streams, subject to wireless transmission capacity and in-network computation constraints. We then propose two simplified algorithms for resource allocation. The evaluation results demonstrate the benefits of the proposed algorithms for the optimization of video content delivery.
22

Khan, Nauman Ahmad, Jean-Christophe Nebel, Souheil Khaddaj, and Vesna Brujic-Okretic. "Scalable System for Smart Urban Transport Management." Journal of Advanced Transportation 2020 (September 16, 2020): 1–13. http://dx.doi.org/10.1155/2020/8894705.

Abstract:
Efficient management of smart transport systems requires the integration of various sensing technologies, as well as fast processing of a high volume of heterogeneous data, in order to perform smart analytics of urban networks in real time. However, dynamic response that relies on intelligent demand-side transport management is particularly challenging due to the increasing flow of transmitted sensor data. In this work, a novel smart service-driven, adaptable middleware architecture is proposed to acquire, store, manipulate, and integrate information from heterogeneous data sources in order to deliver smart analytics aimed at supporting strategic decision-making. The architecture offers adaptive and scalable data integration services for acquiring and processing dynamic data, delivering fast response time, and offering data mining and machine learning models for real-time prediction, combined with advanced visualisation techniques. The proposed solution has been implemented and validated, demonstrating its ability to provide real-time performance on the existing, operational, and large-scale bus network of a European capital city.
23

Xu, Zhanyang, Wentao Liu, Jingwang Huang, Chenyi Yang, Jiawei Lu, and Haozhe Tan. "Artificial Intelligence for Securing IoT Services in Edge Computing: A Survey." Security and Communication Networks 2020 (September 14, 2020): 1–13. http://dx.doi.org/10.1155/2020/8872586.

Abstract:
With the explosive growth of data generated by Internet of Things (IoT) devices, the traditional cloud computing model, which transfers all data to the cloud for processing, has gradually failed to meet the real-time requirements of IoT services due to high network latency. Edge computing (EC), as a new computing paradigm, shifts data processing from the cloud to the edge nodes (ENs), greatly improving the Quality of Service (QoS) for IoT applications with low-latency requirements. However, compared to endpoint devices such as smartphones or computers, distributed ENs are more vulnerable to attacks because of their restricted computing resources and storage. In a context where security and privacy preservation have become urgent issues for EC, the great progress in artificial intelligence (AI) opens many possible windows to address the security challenges. The powerful learning ability of AI enables a system to identify malicious attacks more accurately and efficiently, and, to a certain extent, transferring model parameters instead of raw data avoids privacy leakage. In this paper, a comprehensive survey of the contribution of AI to IoT security in EC is presented. First, the research status and some basic definitions are introduced. Next, the IoT service framework with EC is discussed. The survey of privacy preservation and blockchain for edge-enabled IoT services with AI is then presented. In the end, open issues and challenges in the application of AI to IoT services based on EC are discussed.
24

Huang, Jie, Fengwei Zhu, Zejun Huang, Jian Wan, and Yongjian Ren. "Research on Real-Time Anomaly Detection of Fishing Vessels in a Marine Edge Computing Environment." Mobile Information Systems 2021 (May 4, 2021): 1–15. http://dx.doi.org/10.1155/2021/5598988.

Abstract:
Fishing vessel monitoring systems (VMSs) play an important role in ensuring the safety of fishing vessel operations. Traditional VMSs use a centralized cloud computing model, in which the storage, processing, and visualization of all fishing vessel data are completed in the monitoring center. Due to the limitations of maritime communications, the data generated by fishing vessels cannot be fully utilized, and communication delays lead to inadequate warnings when fishing vessels behave abnormally. In this paper, we present a real-time anomaly detection model (RADM) for fishing vessels based on edge computing. The model runs in the edge layer, making full use of the information from moving edge nodes and nearby nodes, and combines a historical trajectory extraction detection model with an online anomaly detection model. The historical trajectory extraction model mines frequent patterns in historical trajectories through multifeature clustering and identifies trajectories that differ from the frequent patterns as anomalies. The online anomaly detection algorithm detects anomalous behavior in specific scenarios based on spatiotemporal neighborhood similarity and reduces the impact of anomaly evolution. Experiments show that RADM is more effective than traditional methods in real-time anomaly detection of fishing vessels, providing a new method for upgrading the technology of traditional VMSs.
25

Li, Cailing, and Wenjun Li. "Automatic Classification Algorithm for Multisearch Data Association Rules in Wireless Networks." Wireless Communications and Mobile Computing 2021 (March 17, 2021): 1–9. http://dx.doi.org/10.1155/2021/5591387.

Abstract:
To realize efficient data processing in wireless networks, this paper designs an automatic classification algorithm for multisearch data association rules. The algorithm starts from the mining of multisearch data association rules and covers the discretization of continuous attributes of multisearch data, the generation of fuzzy classification rules, and the design of an association rule classifier; automatic classification is then completed using the mining results. Experimental results show that the algorithm offers small classification error, good real-time performance, high coverage, and high feasibility.
26

Rego, Paulo A. L., Fernando A. M. Trinta, Masum Z. Hasan, and Jose N. de Souza. "Enhancing Offloading Systems with Smart Decisions, Adaptive Monitoring, and Mobility Support." Wireless Communications and Mobile Computing 2019 (April 21, 2019): 1–18. http://dx.doi.org/10.1155/2019/1975312.

Abstract:
Mobile cloud computing is an approach for mobile devices with processing and storage limitations to take advantage of remote resources that assist in performing computationally intensive or data-intensive tasks. The migration of tasks or data is commonly referred to as offloading, and its proper use can bring benefits such as performance improvement or reduced power consumption on mobile devices. In this paper, we face three challenges for any offloading solution: the decision of when and where to perform offloading, the decision of which metrics must be monitored by the offloading system, and the support for user’s mobility in a hybrid environment composed of cloudlets and public cloud instances. We introduce novel approaches based on machine learning and software-defined networking techniques for handling these challenges. In addition, we present details of our offloading system and the experiments conducted to assess the proposed approaches.
27

Jiang, Jielin, Xing Zhang, and Shengjun Li. "A Task Offloading Method with Edge for 5G-Envisioned Cyber-Physical-Social Systems." Security and Communication Networks 2020 (August 7, 2020): 1–9. http://dx.doi.org/10.1155/2020/8867094.

Abstract:
Recently, Cyber-Physical-Social Systems (CPSS) have been introduced as a new information-physical system that enables personnel and organizations to control physical entities in a reliable, real-time, secure, and collaborative manner through cyberspace. Moreover, with the maturity of edge computing technology, the data generated by physical entities in CPSS are usually sent to edge computing nodes for effective processing. Nevertheless, it remains a challenge to ensure that edge nodes maintain load balance while minimizing completion time in the event of an edge node outage. Given these problems, a Unique Task Offloading Method (UTOM) for CPSS is designed in this paper. Technically, the system model is constructed first, and a multi-objective problem is defined. Afterward, the improved Strength Pareto Evolutionary Algorithm (SPEA2) is utilized to generate feasible solutions to the above problem, with the aims of optimizing propagation time and achieving load balance. Furthermore, a normalization method is leveraged to produce standard data and select the global optimal solution. Finally, several necessary experiments on UTOM are introduced in detail.
28

Alazeb, Abdulwahab, Brajendra Panda, Sultan Almakdi, and Mohammed Alshehri. "Data Integrity Preservation Schemes in Smart Healthcare Systems That Use Fog Computing Distribution." Electronics 10, no. 11 (May 30, 2021): 1314. http://dx.doi.org/10.3390/electronics10111314.

Abstract:
The volume of data generated worldwide is rapidly growing. Cloud computing, fog computing, and the Internet of things (IoT) technologies have been adapted to compute and process this high data volume. In coming years information technology will enable extensive developments in the field of healthcare and offer health care providers and patients broadened opportunities to enhance their healthcare experiences and services owing to heightened availability and enriched services through real-time data exchange. As promising as these technological innovations are, security issues such as data integrity and data consistency remain widely unaddressed. Therefore, it is important to engineer a solution to these issues. Developing a damage assessment and recovery control model for fog computing is critical. This paper proposes two models for using fog computing in healthcare: one for private fog computing distribution and one for public fog computing distribution. For each model, we propose a unique scheme to assess the damage caused by malicious attack, to accurately identify affected transactions and recover damaged data if needed. A transaction-dependency graph technique is used for both models to observe and monitor all transactions in the whole system. We conducted a simulation study to assess the applicability and efficacy of the proposed models. The evaluation rendered these models practicable and effective.
29

Wang, Yiming, and Xidan Gong. "Optimization of Data Processing System for Exercise and Fitness Process Based on Internet of Things." Wireless Communications and Mobile Computing 2021 (July 6, 2021): 1–11. http://dx.doi.org/10.1155/2021/7132301.

Abstract:
In the digital network era, people have higher requirements for physical fitness. In the future, physical fitness will require not only good fitness equipment and a good fitness environment but also more convenient and intelligent health management, service guidance, social entertainment, and other refined fitness services. Innovation in sports and fitness equipment for the digital network era will certainly depend on the development of information and network technology. Based on cutting-edge Internet of Things (IoT) technology, this thesis focuses on the development and application of a new generation of digital fitness equipment adapted to future development, advocating the new concept of seamless integration of fitness exercise and information services through human-oriented, systematic design thinking, and providing implementable solutions to make public fitness scientific, convenient, and part of daily life. Guided by the goal of fully meeting the diversified needs of fitness users, this thesis uses modern science and technology, especially IoT technology, to study the design and application of IoT digital fitness equipment, using a variety of research methods to explore functional design and application; the goal is to create a more intelligent and three-dimensional IoT fitness service model. Through applied research on intelligent devices in IoT fitness equipment, the functions of identity identification, environment perception, and data transmission are realized faster. Intelligent devices can become the interaction channel between fitness service personnel, fitness equipment, and fitness users, and can also reduce the development cost of IoT fitness equipment. The construction of an IoT fitness cloud service platform and data management system integrates IoT, cloud computing, mobile communication, and other technologies to make the supply of IoT fitness services remote, real-time, and diversified. While providing convenient and value-added fitness services for fitness users, it also brings sustainable development space for the health service industry.
30

Kryukov, Ya V., D. A. Pokamestov, E. V. Rogozhnikov, S. A. Novichkov, and D. V. Lakontsev. "Analysis of Computational Complexity and Processing Time Evaluation of the Protocol Stack in 5G New Radio." Proceedings of Tomsk State University of Control Systems and Radioelectronics 23, no. 3 (September 25, 2020): 31–37. http://dx.doi.org/10.21293/1818-0442-2020-23-3-31-37.

Abstract:
Radio access networks for 5G New Radio mobile communication systems are currently being actively deployed. Network architectures are developing rapidly, with a significant part of their functions performed in the virtual cloud space of a personal computer. The computing power of the computer must be sufficient to execute the network protocols in real time, and to reduce the cost of deploying 5G NR networks, the configuration of each remote computer must be optimally matched to the scale of the particular network. An urgent research direction is therefore the assessment of the execution time of the 5G NR protocol stack on various computer configurations and the development of a mathematical model for data analysis, approximation of dependencies, and making recommendations. In this paper, the authors provide an overview of the main 5G NR network architectures, as well as a description of the methods and tools that can be used to estimate the computational complexity of the 5G NR protocol stack. The final section analyzes the computational complexity of the protocol stack, based on experiments performed by colleagues at partner institutions.
31

Makrani, Hosein Mohamamdi, Hossein Sayadi, Najmeh Nazari, Sai Mnoj Pudukotai Dinakarrao, Avesta Sasan, Tinoosh Mohsenin, Setareh Rafatirad, and Houman Homayoun. "Adaptive Performance Modeling of Data-intensive Workloads for Resource Provisioning in Virtualized Environment." ACM Transactions on Modeling and Performance Evaluation of Computing Systems 5, no. 4 (March 2021): 1–24. http://dx.doi.org/10.1145/3442696.

Abstract:
The processing of data-intensive workloads is a challenging and time-consuming task that often requires massive infrastructure to ensure fast data analysis. The cloud platform is the most popular and powerful scale-out infrastructure for big data analytics, eliminating the need to maintain expensive, high-end computing resources at the user side. The performance and cost of such infrastructure depend on the overall server configuration, such as the processor, memory, network, and storage. In addition to the cost of owning or maintaining the hardware, heterogeneity in server configuration further expands the selection space, leading to non-convergence. The challenge is further exacerbated by the dependency of the application's performance on the underlying hardware. Despite increasing interest in resource provisioning, little work has been done to develop accurate and practical models that proactively predict the performance of data-intensive applications for a given server configuration and provision a cost-optimal configuration online. In this work, through comprehensive real-system empirical analysis of performance, we address these challenges by introducing ProMLB: a proactive machine-learning-based methodology for resource provisioning. We first characterize diverse types of data-intensive workloads across different types of server architectures. The characterization helps accurately capture applications' behavior and train a model to predict their performance. ProMLB then builds a set of cross-platform performance models for each application and, based on the developed predictive model, uses an optimization technique to identify a close-to-optimal configuration that minimizes the product of execution time and cost. Compared to an oracle scheduler, ProMLB achieves 91% accuracy in application-resource matching. On average, ProMLB improves performance and resource utilization by 42.6% and 41.1%, respectively, compared to a baseline scheduler, and improves performance per cost by 2.5× on average.
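A compressed sketch (Python, scikit-learn) of the proactive-provisioning loop the abstract describes: learn a runtime model from (workload, configuration) features, then score candidate configurations by predicted runtime times cost. The feature layout, the random-forest choice, and hourly pricing are our assumptions; ProMLB's actual models and optimizer are richer.

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    def provision(train_features, train_runtime_s, candidate_features, price_per_hour):
        # Train a cross-platform runtime predictor on past runs.
        model = RandomForestRegressor(n_estimators=200, random_state=0)
        model.fit(train_features, train_runtime_s)
        runtime = model.predict(candidate_features)
        cost = runtime / 3600.0 * np.asarray(price_per_hour)
        # Objective from the abstract: execution time x cost.
        return int(np.argmin(runtime * cost))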
32

Si, Pengbo, Fei Wang, Enchang Sun, and Yuzhao Su. "BEI-TAB: Enabling Secure and Distributed Airport Baggage Tracking with Hybrid Blockchain-Edge System." Wireless Communications and Mobile Computing 2021 (September 23, 2021): 1–12. http://dx.doi.org/10.1155/2021/2741435.

Abstract:
Global air transport carries about 4.3 billion pieces of baggage each year, and up to 56 percent of travellers would prefer real-time baggage tracking information throughout their trip. However, traditional baggage tracking is generally based on optical scanning and centralized storage systems, which suffer from low efficiency and information leakage. In this paper, a blockchain and edge computing-based Internet of Things (IoT) system for tracking of airport baggage (BEI-TAB) is proposed. Through the combination of radio frequency identification (RFID) technology and blockchain, real-time baggage processing information is automatically stored in the blockchain. In addition, we deploy the Interplanetary File System (IPFS) at edge nodes with ciphertext-policy attribute-based encryption (CP-ABE) to store basic baggage information; only the hash values returned by the IPFS network are kept in the blockchain, enhancing the scalability of the system. Furthermore, a multichannel scheme is designed to physically isolate data and to rapidly process multiple types of data and business requirements in parallel. To the best of our knowledge, this is the first architecture that integrates RFID, IPFS, and CP-ABE with blockchain technologies to facilitate secure, decentralized, and real-time storage and sharing of baggage tracking data. We have deployed a testbed with both software and hardware to evaluate the proposed system, considering transaction processing time and speed. In addition, based on the characteristics of consortium blockchains, we improved the practical Byzantine fault tolerance (PBFT) consensus protocol, introducing a node credit score mechanism combined with a simplified consistency protocol. Experimental results show that the credit-score-based PBFT consensus (CSPBFT) can shorten transaction delay and improve the long-term running efficiency of the system.
33

Jiang, Linjun, Hailun Xia, and Caili Guo. "A Model-Based System for Real-Time Articulated Hand Tracking Using a Simple Data Glove and a Depth Camera." Sensors 19, no. 21 (October 28, 2019): 4680. http://dx.doi.org/10.3390/s19214680.

Abstract:
Tracking detailed hand motion is a fundamental research topic in the area of human-computer interaction (HCI) and has been widely studied for decades. Existing solutions with single-model inputs either require tedious calibration, are expensive, or lack sufficient robustness and accuracy due to occlusions. In this study, we present a real-time system that reconstructs exact hand motion by iteratively fitting a triangular mesh model to the absolute measurements of the hand from a depth camera under the robust restriction of a simple data glove. We redefine and simplify the function of the data glove to lighten its limitations, i.e., tedious calibration, cumbersome equipment, and hampered movement, and keep our system lightweight. For accurate hand tracking, we introduce a new set of degrees of freedom (DoFs), a shape adjustment term for personalizing the triangular mesh model, and an adaptive collision term to prevent self-intersection. For efficiency, we extract a strong pose-space prior from the data glove to narrow the pose search space. We also present a simplified approach for computing tracking correspondences without loss of accuracy, to reduce computation cost. Quantitative experiments show comparable or better accuracy than the state of the art, with about 40% improvement in robustness. Moreover, our system runs independently of the Graphics Processing Unit (GPU) and reaches 40 frames per second (FPS) at about 25% Central Processing Unit (CPU) usage.
APA, Harvard, Vancouver, ISO, and other styles
34

Molokomme, Daisy Nkele, Chabalala S. Chabalala, and Pitshou N. Bokoro. "A Review of Cognitive Radio Smart Grid Communication Infrastructure Systems." Energies 13, no. 12 (June 23, 2020): 3245. http://dx.doi.org/10.3390/en13123245.

Full text
Abstract:
The cognitive smart grid (SG) communication paradigm aims to mitigate quality of service (QoS) issues in the obsolete communication architecture associated with the conventional electrical grid. This paradigm entails the integration of advanced information and communication technologies (ICTs) into power grids, enabling a two-way flow of information. However, due to the exponential increase in wireless applications and services, also driven by the deployment of Internet of Things (IoT) smart devices, SG communication systems are expected to handle large volumes of data. As a result, the operation of SG networks is confronted with the major challenge of managing and processing data in a reliable and secure manner. Existing works in the literature have proposed architectures aiming to mitigate the underlying QoS issues such as latency, bandwidth, data congestion, and energy efficiency. In addition, a variety of communication technologies have been analyzed for their capacity to support stringent QoS requirements in diverse SG environments. Notwithstanding this, a standard architecture designed to mitigate the aforementioned issues for SG networks remains a work in progress. The main objective of this paper is to investigate emerging technologies such as cognitive radio networks (CRNs) as part of Fifth-Generation (5G) mobile technology for reliable communication in SG networks. Furthermore, a hybrid architecture based on the combination of fog computing and cloud computing is proposed. In this architecture, real-time latency-sensitive information is given high priority, with fog edge-based servers deployed in close proximity to home area networks (HANs) for preprocessing and analysis of information collected from smart IoT devices. In comparison to recent works in the literature, which are mainly based on CRNs and 5G separately, the architecture proposed in this paper combines CRNs and 5G for reliable and efficient communication in SG networks.
APA, Harvard, Vancouver, ISO, and other styles
35

Shi, Lei, Jing Xu, Lunfei Wang, Jie Chen, Zhifeng Jin, Tao Ouyang, Juan Xu, and Yuqi Fan. "Multijob Associated Task Scheduling for Cloud Computing Based on Task Duplication and Insertion." Wireless Communications and Mobile Computing 2021 (April 28, 2021): 1–13. http://dx.doi.org/10.1155/2021/6631752.

Full text
Abstract:
With the emergence and development of various computer technologies, many jobs processed in cloud computing systems consist of multiple associated tasks that must follow execution-order constraints. The tasks of each job can be assigned to different nodes for execution, and the relevant data are transmitted between nodes to complete the job processing. The computing and communication capabilities of nodes may differ due to processor heterogeneity, and hence a task scheduling algorithm is of great significance for job processing performance. An efficient task scheduling algorithm can make full use of resources and improve the performance of job processing. Existing work on associated task scheduling for multiple jobs leaves room for performance improvement. Therefore, this paper studies the problem of multijob associated task scheduling with the goal of minimizing the jobs' makespan. We propose a task Duplication and Insertion algorithm based on List Scheduling (DILS) that incorporates dynamic finish time prediction, task replication, and task insertion. The algorithm dynamically schedules tasks by predicting their completion times according to previously scheduled tasks, replicates tasks on different nodes to reduce transmission time, and inserts tasks into idle time slots to speed up execution. Experimental results demonstrate that our algorithm can effectively reduce the jobs' makespan.
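To make the list-scheduling-with-insertion idea concrete, here is a minimal Python sketch that assigns each ready task to the node minimizing its earliest finish time, reusing idle slots; the data model (per-node durations, a precedence map) is an illustrative assumption, and communication delays and task duplication are omitted for brevity.

def earliest_start(node_slots, ready_time, duration):
    # Find the earliest idle gap on a node that fits the task (insertion policy).
    prev_end = 0.0
    for start, end in sorted(node_slots):
        if max(prev_end, ready_time) + duration <= start:
            return max(prev_end, ready_time)
        prev_end = max(prev_end, end)
    return max(prev_end, ready_time)

def schedule(tasks, nodes, duration, deps):
    # tasks: topologically ordered ids; deps: task -> set of predecessor ids;
    # duration[t][n]: execution time of task t on node n.
    finish, slots = {}, {n: [] for n in nodes}
    for t in tasks:
        ready = max((finish[p] for p in deps[t]), default=0.0)
        best = min(nodes, key=lambda n: earliest_start(slots[n], ready, duration[t][n]) + duration[t][n])
        s = earliest_start(slots[best], ready, duration[t][best])
        slots[best].append((s, s + duration[t][best]))
        finish[t] = s + duration[t][best]
    return finish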
APA, Harvard, Vancouver, ISO, and other styles
36

Senthilkumar, G., and M. P. Chitra. "An ensemble dynamic optimization based inverse adaptive heuristic critic in IaaS cloud computing for resource allocation." Journal of Intelligent & Fuzzy Systems 39, no. 5 (November 19, 2020): 7521–35. http://dx.doi.org/10.3233/jifs-200823.

Full text
Abstract:
In recent years, with the increase in computer and mobile users, data storage has become a priority in all fields. Large- and small-scale businesses today thrive on their data and spend huge amounts of money to maintain it. Cloud storage provides on-demand availability of IT services via large distributed data centers over high-speed networks. Network virtualization is considered a recent development in cloud computing that emerges as a multifaceted route toward the future Internet by facilitating shared resources. Virtual network provisioning is a major challenge, as it gives rise to NP-hard problems such as minimizing workflow processing time under resource constraints. To cope with these challenges, our work proposes an ensemble dynamic optimization based on an Inverse Adaptive Heuristic Critic (IAHC) for virtual network provisioning in cloud computing. Our approach learns from expert observation and provides an approximate solution when various workflows arrive online at various time windows (WT). It also provides an optimal policy for predicting the effect of allocating resources to one task in both present and future time windows. In addition, it avoids high sample complexity and controls cost while scaling up resource provisioning. Therefore, our work achieves an adequate resource allocation policy, reduces cost as well as energy consumption, and deals with real-time uncertainties in virtual network provisioning.
APA, Harvard, Vancouver, ISO, and other styles
37

Zhang, Zhichao, Hui Chen, Xiaoqing Yin, and Jinsheng Deng. "EAWNet: An Edge Attention-Wise Objector for Real-Time Visual Internet of Things." Wireless Communications and Mobile Computing 2021 (July 10, 2021): 1–15. http://dx.doi.org/10.1155/2021/7258649.

Full text
Abstract:
With the upgrading of high-performance image processing platforms and visual Internet of Things (VIOT) sensors, VIOT is widely used in intelligent transportation, autopilot, military reconnaissance, public safety, and other fields. However, outdoor VIOT systems are very sensitive to weather and to the unbalanced scales of latent objects. The performance of supervised learning is often limited by the disturbance of abnormal data, and it is difficult to collect all classes from limited historical instances. Therefore, for anomaly detection in images, fast and accurate artificial intelligence-based object detection has become a research hot spot in the field of the intelligent visual Internet of Things. To this end, we propose an efficient and accurate deep learning framework for real-time and dense object detection in VIOT named the Edge Attention-wise Convolutional Neural Network (EAWNet), with three main features. First, it can identify remote aerial and everyday scenery objects quickly and accurately despite unbalanced categories. Second, edge priors and rotated anchors are adopted to enhance detection efficiency in edge computing networks. Third, EAWNet uses an edge-sensing object structure, makes full use of an attention mechanism to dynamically screen different kinds of objects, and performs target recognition on multiple scales. The edge recovery effect and target detection performance for long-distance aerial objects are significantly improved. We explore the efficiency of various architectures and fine-tune the training process using various backbone and data augmentation strategies to increase the variety of the training data and overcome the size limitation of the input images. Extensive experiments and comprehensive evaluation on COCO and the large-scale DOTA dataset prove the effectiveness of this framework, which achieves state-of-the-art performance in real-time VIOT object detection.
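The attention-based screening of feature channels that the abstract mentions can be illustrated with a generic squeeze-and-excitation-style block in PyTorch; this is a textbook channel-attention module, not EAWNet's actual architecture.

import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)          # squeeze spatial dims
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),                            # per-channel weights in (0, 1)
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                                 # reweight feature channels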
APA, Harvard, Vancouver, ISO, and other styles
38

Jiang, Dazhi, Zhihui He, Yingqing Lin, Yifei Chen, and Linyan Xu. "An Improved Unsupervised Single-Channel Speech Separation Algorithm for Processing Speech Sensor Signals." Wireless Communications and Mobile Computing 2021 (February 27, 2021): 1–13. http://dx.doi.org/10.1155/2021/6655125.

Full text
Abstract:
As network-supporting devices and sensors in the Internet of Things advance rapidly, vast amounts of real-world data are generated for intelligent applications. Speech sensor networks, an important part of the Internet of Things, have numerous application needs. Indeed, the sensor data can further help intelligent applications to provide higher quality services, whereas this data may contain considerable noise. Accordingly, speech signal processing methods are urgently needed to acquire low-noise, usable speech data. Blind source separation and enhancement are among the representative techniques. However, in complex unsupervised environments where only a single-channel signal is available, separating single-channel, multiperson mixed speech poses many technical challenges. For this reason, this study develops an unsupervised speech separation method, CNMF+JADE, i.e., a hybrid method combining Convolutional Non-Negative Matrix Factorization with Joint Approximate Diagonalization of Eigenmatrices. Moreover, an adaptive wavelet transform-based speech enhancement technique is proposed, capable of adaptively and effectively enhancing the separated speech signal. The proposed method is aimed at yielding a general and efficient speech processing algorithm for the data acquired by speech sensors. As revealed by the experimental results on the TIMIT speech sources, the proposed method can effectively extract the target speaker from mixed speech with a tiny training sample. The algorithm is highly general and robust, capable of technically supporting the processing of speech signals acquired by most speech sensors.
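As a hedged illustration of the adaptive wavelet-based enhancement step, the following PyWavelets sketch applies soft thresholding with a noise level estimated from the finest detail band; the wavelet choice, decomposition level, and universal-threshold rule are our assumptions, not the paper's.

import numpy as np
import pywt

def wavelet_denoise(signal, wavelet="db4", level=4):
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    # Estimate noise from the finest detail band (median absolute deviation).
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thresh = sigma * np.sqrt(2 * np.log(len(signal)))  # universal threshold
    denoised = [coeffs[0]] + [pywt.threshold(c, thresh, mode="soft")
                              for c in coeffs[1:]]
    return pywt.waverec(denoised, wavelet)[: len(signal)]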
APA, Harvard, Vancouver, ISO, and other styles
39

Assiroj, Priati, Harco Leslie Hendric Spits Warnars, Edi Abdurachman, Achmad Imam Kistijantoro, and Antoine Doucet. "The influence of data size on a high-performance computing memetic algorithm in fingerprint dataset." Bulletin of Electrical Engineering and Informatics 10, no. 4 (August 1, 2021): 2110–18. http://dx.doi.org/10.11591/eei.v10i4.2760.

Full text
Abstract:
The fingerprint is one kind of biometric. This unique biometric data has to be processed efficiently and securely, and the problem gets more complicated as data grows. This work processes image fingerprint data with a memetic algorithm, a simple and reliable algorithm. In order to achieve the best result, we run this algorithm in a parallel environment by utilizing the multithreading feature of the processor. We propose a high-performance computing memetic algorithm (HPCMA) to process a 7200-image fingerprint dataset, divided into fifteen specimens according to image specifications to capture the detail of each image. Combining specimens generates new data variations. The algorithm runs on two different operating systems, Windows 7 and Windows 10; we then measure the influence of data size on the processing time, speedup, and efficiency of HPCMA using simple linear regression. The results show that data size strongly influences processing time (explaining more than 90% of its variation), speedup (more than 30%), and efficiency (more than 19%).
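For reference, the parallel-performance metrics measured above are conventionally computed as follows; the runtimes in the example are illustrative, not the paper's measurements.

def speedup(t_serial, t_parallel):
    # Ratio of serial runtime to parallel runtime.
    return t_serial / t_parallel

def efficiency(t_serial, t_parallel, n_threads):
    # Speedup normalized by the number of threads used.
    return speedup(t_serial, t_parallel) / n_threads

# Example: 120 s serial vs. 20 s on 8 threads -> speedup 6.0, efficiency 0.75.
print(speedup(120, 20), efficiency(120, 20, 8))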
APA, Harvard, Vancouver, ISO, and other styles
40

Lagovsky, B. A., and E. Ya Rubinovich. "Algorithms for Digital Processing of Measurement Data Providing Angular Superresolution." Mekhatronika, Avtomatizatsiya, Upravlenie 22, no. 7 (July 8, 2021): 349–56. http://dx.doi.org/10.17587/mau.22.349-356.

Full text
Abstract:
Ill-posed one- and two-dimensional inverse problems of reconstructing images of objects with angular resolution exceeding the Rayleigh criterion are considered. The technique is based on solving inverse problems of signal source reconstruction described by Fredholm integral equations. Algebraic methods and algorithms for processing data obtained by measuring systems in order to achieve angular superresolution are presented. Angular superresolution makes it possible to detail images of objects and, on this basis, to solve problems of their recognition and identification. The efficiency of algorithms based on the developed algebraic methods and their modifications in parameterizing the inverse problems under study and reconstructing approximate images of objects of various types is shown. It is also shown that the noise immunity of the obtained solutions exceeds that of many known approaches. The results of numerical experiments demonstrate the possibility of obtaining images with a resolution exceeding the Rayleigh criterion by 2-6 times at small signal-to-noise ratios. Ways of further increasing the degree of superresolution based on intelligent analysis of measurement data are described. Based on preliminary information about the signal source, the algorithms make it possible to consistently increase the effective angular resolution up to the maximum achievable for the problem being solved; the secondary data-processing algorithms required for this are described. It is found that the proposed symmetrization algorithm improves the quality and stability of solutions to the inverse problems under consideration. The examples demonstrate the successful application of modified algebraic methods and algorithms for obtaining images of the objects under study in the presence of a priori information about the solution. The results of numerical studies show that the presented methods of digital processing of received signals can restore the angular coordinates of individual objects and their elements with superresolution and good accuracy. The adequacy and stability of the solutions were verified through numerical experiments on a mathematical model. It was shown that the stability of the solutions, especially at a significant level of random components, is higher than that of many other methods. The limiting possibilities of increasing the effective angular resolution and the accuracy of image reconstruction of signal sources, depending on the level of random components in the data, are found. The effective angular resolution achieved is 2-10 times higher than the Rayleigh criterion. The minimum signal-to-noise ratio required to obtain adequate solutions with superresolution is 13-16 dB for the described methods, which is significantly less than for known methods. The relative simplicity of the presented methods allows the use of inexpensive computing devices operating in real time.
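For orientation, the direct problem underlying this work has the generic form of a Fredholm integral equation of the first kind (our notation, not the authors'):

\[ u(\theta) = \int_{-\alpha}^{\alpha} K(\theta, \varphi)\, I(\varphi)\, d\varphi, \]

where \(u(\theta)\) is the measured signal, \(K(\theta,\varphi)\) is the kernel determined by the antenna pattern, and \(I(\varphi)\) is the unknown angular source distribution; superresolution amounts to stably inverting this ill-posed equation for \(I\).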
APA, Harvard, Vancouver, ISO, and other styles
41

Marquez-Viloria, David, Luis Castano-Londono, and Neil Guerrero-Gonzalez. "A Modified KNN Algorithm for High-Performance Computing on FPGA of Real-Time m-QAM Demodulators." Electronics 10, no. 5 (March 9, 2021): 627. http://dx.doi.org/10.3390/electronics10050627.

Full text
Abstract:
A methodology for scalable and concurrent real-time implementation of highly recurrent algorithms is presented and experimentally validated using the AWS-FPGA. This paper presents a parallel implementation of a KNN algorithm focused on m-QAM demodulators, using high-level synthesis for fast prototyping, parameterization, and scalability of the design. The proposed design shows the successful implementation of the KNN algorithm for interchannel interference mitigation in a 3 × 16 Gbaud 16-QAM Nyquist WDM system. Additionally, we present a modified version of the KNN algorithm in which comparisons among data symbols are reduced by identifying the closest neighbor using the rule of 8-connected clusters used in image processing. The real-time implementation of the modified KNN on a Xilinx Virtex UltraScale+ VU9P AWS-FPGA board was compared with results obtained in previous work using the same data from the same experimental setup but with offline DSP in Matlab. The results show that the difference is negligible below the FEC limit. Additionally, the modified KNN reduces the number of operations by 43 to 75 percent, depending on the symbol's position in the constellation, achieving a 47.25% reduction in total computational time for 100 K input symbols processed on 20 parallel cores compared to the original KNN algorithm.
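The 8-connected-neighbor idea can be sketched as follows: snap the received symbol to the constellation grid and compare it only against the surrounding 8-connected points instead of all m points. The grid levels and spacing below follow the usual 16-QAM convention and are our assumptions, not the paper's code.

def candidate_neighbors(z, levels=(-3, -1, 1, 3)):
    # Return the 8-connected candidate constellation points around z.
    snap = lambda v: min(levels, key=lambda l: abs(l - v))
    i0, q0 = snap(z.real), snap(z.imag)
    return {complex(i0 + di, q0 + dq)
            for di in (-2, 0, 2) for dq in (-2, 0, 2)
            if (i0 + di) in levels and (q0 + dq) in levels}

def demodulate(z):
    # Compare against at most 9 candidates instead of all 16 points.
    return min(candidate_neighbors(z), key=lambda c: abs(c - z))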
APA, Harvard, Vancouver, ISO, and other styles
42

Qin, Zhenquan, Zanping Cheng, Chuan Lin, Zhaoyi Lu, and Lei Wang. "Optimal Workload Allocation for Edge Computing Network Using Application Prediction." Wireless Communications and Mobile Computing 2021 (March 25, 2021): 1–13. http://dx.doi.org/10.1155/2021/5520455.

Full text
Abstract:
By deploying edge servers at the network edge, a mobile edge computing network strengthens real-time processing ability near the end devices and relieves the huge load pressure on the core network. Considering the limited computing and storage resources on the edge server side, the workload allocation among edge servers for each Internet of Things (IoT) application affects the response time of the application's requests. Hence, when the access devices of an edge server are deployed densely, workload allocation becomes a key factor affecting the quality of user experience (QoE). To solve this problem, this paper proposes an edge workload allocation scheme that uses an application prediction (AP) algorithm to minimize response delay; the problem is proved to be NP-hard. First, in the application prediction model, a long short-term memory (LSTM) method is proposed to predict the tasks of future access devices. Second, based on the prediction results, edge workload allocation is divided into two subproblems: task assignment and resource allocation. Using historical execution data, we can solve the problem in linear time. The simulation results show that the proposed AP algorithm can effectively reduce the response delay of the devices and the average completion time of the task sequence, approaching the theoretically optimal allocation results.
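As a hedged sketch of the prediction step, the following PyTorch model forecasts the next-step workload from a sliding window of history; the layer sizes and one-step-ahead setup are illustrative assumptions.

import torch
import torch.nn as nn

class WorkloadLSTM(nn.Module):
    def __init__(self, n_features=1, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_features)

    def forward(self, x):             # x: (batch, window, n_features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])  # predict the next time step

model = WorkloadLSTM()
history = torch.randn(8, 20, 1)       # 8 windows of 20 past load samples
next_load = model(history)            # (8, 1) predicted next-step load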
APA, Harvard, Vancouver, ISO, and other styles
43

Tiwari, Vivek, and Basant Tiwari. "A Data Driven Multi-Layer Framework of Pervasive Information Computing System for eHealthcare." International Journal of E-Health and Medical Communications 10, no. 4 (October 2019): 66–85. http://dx.doi.org/10.4018/ijehmc.2019100106.

Full text
Abstract:
In the last decade, significant advancements in telecommunications and informatics have been made, greatly boosting mobile communications, wireless networks, and pervasive computing. These advancements enable healthcare applications that improve people's quality of life, and continuous observation of patients and elderly individuals for their wellbeing now seems feasible. Such pervasive arrangements enable medical experts to analyse a patient's current status, minimise reaction time, and improve availability and scalability of care. Plenty of remote patient monitoring models are found in the literature, but most are designed with limited scope and lack an overall unified, complete model covering all state-of-the-art functionalities. In this regard, remote patient monitoring systems (RPMS) play an important role by using wearable devices to monitor the patient's physiological condition. RPMS also enable the capture of related videos, images, and frames. RPMS must not only capture various sorts of patient-related information but also facilitate analytics, transformation, security, alerts, accessibility, etc. In this view, RPMS must address broad issues such as wearability, adaptability, interoperability, integration, security, and network efficiency. This article proposes a data-driven multi-layer architecture for pervasive remote patient monitoring that incorporates these issues. The system is divided into five fundamental layers: the data acquisition layer, the data pre-processing layer, the network and data transfer layer, the data management layer, and the data access layer. It enables real-time patient care while using the network infrastructure efficiently. A detailed discussion of various security issues is carried out. Moreover, standard deviation-based data reduction and a machine-learning-based data access policy are also proposed.
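The standard deviation-based data reduction can be sketched as a simple filter that forwards a reading only when it deviates from the recent mean by more than k standard deviations; the window size and k below are illustrative choices, not values from the article.

from collections import deque
import statistics

def reducer(stream, window=30, k=2.0):
    recent = deque(maxlen=window)
    for reading in stream:
        if len(recent) >= 2:
            mu = statistics.mean(recent)
            sd = statistics.stdev(recent)
            if sd > 0 and abs(reading - mu) <= k * sd:
                recent.append(reading)
                continue          # redundant sample: suppress transmission
        recent.append(reading)
        yield reading             # significant sample: transmit upstream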
APA, Harvard, Vancouver, ISO, and other styles
44

Tran, Minh-Ngoc, and Younghan Kim. "Named Data Networking Based Disaster Response Support System over Edge Computing Infrastructure." Electronics 10, no. 3 (February 1, 2021): 335. http://dx.doi.org/10.3390/electronics10030335.

Full text
Abstract:
After a disaster happens, effective communication and information sharing between emergency response team members play a crucial role in a successful disaster response phase. With dedicated roles and missions assigned to responders, role-based communication is a pivotal feature that an emergency communication network needs to support. Previous works have shown that Named Data Networking (NDN) has many advantages over traditional IP-based networks in providing this feature. However, these studies are only simulation-based. To apply NDN in disaster scenarios, a real implementation of a deployment architecture over the infrastructure existing during the disaster should be considered. Not only should the architecture ensure efficient emergency communication, but it should also deal with other disaster-related challenges such as responder mobility, intermittent network connectivity, and the possibility of node replacement due to disaster damage. In this paper, we designed and implemented an NDN-based disaster response support system over Edge Computing infrastructure, with KubeEdge as the chosen edge platform, to solve the above issues. Our proof-of-concept evaluation shows that the architecture achieves efficient role-based communication support, fast mobility handover, quick network convergence in case of node replacement, and loss-free information exchange between responders and the management center in the cloud.
APA, Harvard, Vancouver, ISO, and other styles
45

Rivera-Acosta, Miguel, Juan Manuel Ruiz-Varela, Susana Ortega-Cisneros, Jorge Rivera, Ramón Parra-Michel, and Pedro Mejia-Alvarez. "Spelling Correction Real-Time American Sign Language Alphabet Translation System Based on YOLO Network and LSTM." Electronics 10, no. 9 (April 27, 2021): 1035. http://dx.doi.org/10.3390/electronics10091035.

Full text
Abstract:
In this paper, we present a novel approach that aims to solve one of the main challenges in hand gesture recognition tasks in static images: compensating for the accuracy lost when trained models are used to interpret completely unseen data. The model presented here consists of two main data-processing stages. A deep neural network (DNN) is used for handshape segmentation and classification, for which multiple architectures and input image sizes were tested and compared to derive the best model in terms of accuracy and processing time. For the experiments presented in this work, the DNN models were trained with 24,000 images of 24 signs from the American Sign Language alphabet and fine-tuned with 5200 images of 26 generated signs. The system was tested in real time with a community of 10 persons, yielding a mean average precision of 81.74% and a processing rate of 61.35 frames per second. As a second data-processing stage, a bidirectional long short-term memory neural network was implemented and analyzed to add spelling correction capability to our system; it scored a training accuracy of 98.07% with a dictionary of 370 words, thus increasing robustness on completely unseen data, as shown in our experiments.
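As a hedged sketch of the second stage, a character-level bidirectional LSTM that maps a recognized letter sequence to a corrected one could look as follows in PyTorch; the vocabulary size, dimensions, and per-position output head are our assumptions, not the paper's configuration.

import torch
import torch.nn as nn

class SpellCorrector(nn.Module):
    def __init__(self, n_chars=27, emb=32, hidden=64):
        super().__init__()
        self.emb = nn.Embedding(n_chars, emb)
        self.lstm = nn.LSTM(emb, hidden, batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, n_chars)   # per-position corrected letter

    def forward(self, letter_ids):                  # (batch, seq_len) of letter indices
        h, _ = self.lstm(self.emb(letter_ids))
        return self.out(h)                          # (batch, seq_len, n_chars) logits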
APA, Harvard, Vancouver, ISO, and other styles
46

Rampérez, Víctor, Javier Soriano, and David Lizcano. "A Multidomain Standards-Based Fog Computing Architecture for Smart Cities." Wireless Communications and Mobile Computing 2018 (September 26, 2018): 1–14. http://dx.doi.org/10.1155/2018/4019858.

Full text
Abstract:
Many of the problems arising from rapid urbanization and urban population growth can be solved by making cities "smart". These smart cities are supported by large networks of interconnected and widely geo-distributed devices, known as the Internet of Things (IoT), that generate large volumes of data. Traditionally, cloud computing has been the technology used to support this infrastructure; however, it cannot meet some of the essential requirements of smart cities, such as low latency, mobility support, location awareness, bandwidth cost savings, and the geo-distributed nature of such IoT systems. To solve these problems, the fog computing paradigm proposes extending cloud computing models to the edge of the network. However, most of the proposed architectures and frameworks are based on their own private data models and interfaces, which severely reduce the openness and interoperability of these solutions. To address this problem, we propose a standards-based fog computing architecture designed to be an open and interoperable solution. The proposed architecture moves stream processing tasks to the edge of the network through the use of lightweight context brokers and Complex Event Processing (CEP) to reduce latency. Moreover, to interconnect the different smart city domains, we propose a Context Broker based on a publish/subscribe middleware specially designed to be elastic and low-latency and to exploit the context information of these environments. Additionally, we validate our architecture through a real smart city use case, showing how the proposed architecture can successfully meet smart city requirements by taking advantage of the fog computing approach. Finally, we analyze the performance of the proposed Context Broker based on microbenchmarking results for latency, throughput, and scalability.
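The context-broker pattern at the core of this architecture can be illustrated with a toy publish/subscribe broker; the topic names and callback interface below are illustrative, not the paper's API.

from collections import defaultdict

class ContextBroker:
    def __init__(self):
        self.subscribers = defaultdict(list)   # topic -> list of callbacks

    def subscribe(self, topic, callback):
        self.subscribers[topic].append(callback)

    def publish(self, topic, event):
        # Deliver the context event to every subscriber of the topic.
        for cb in self.subscribers[topic]:
            cb(event)

broker = ContextBroker()
broker.subscribe("traffic/zoneA", lambda e: print("CEP rule input:", e))
broker.publish("traffic/zoneA", {"speed_avg": 12.4, "ts": 1625050000})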
APA, Harvard, Vancouver, ISO, and other styles
47

Lücking, Markus, Felix Kretzer, Niclas Kannengießer, Michael Beigl, Ali Sunyaev, and Wilhelm Stork. "When Data Fly: An Open Data Trading System in Vehicular Ad Hoc Networks." Electronics 10, no. 6 (March 11, 2021): 654. http://dx.doi.org/10.3390/electronics10060654.

Full text
Abstract:
Communication between vehicles and their environment (i.e., vehicle-to-everything or V2X communication) in vehicular ad hoc networks (VANETs) has become particularly important for smart cities. However, economic challenges, such as the cost incurred by data sharing (e.g., due to power consumption), hinder the integration of data sharing in open systems into smart city applications, such as dynamic environmental zones. Moving from open data sharing to open data trading can address the economic challenges and incentivize vehicle drivers to share their data. In this context, integrating distributed ledger technology (DLT) into open systems for data trading is promising for reducing the transaction cost of payments in data trading, avoiding dependencies on third parties, and guaranteeing openness. However, because the integration of DLT conflicts with the short communication time available between fast-moving objects in VANETs, it remains unclear how open data trading in VANETs using DLT should be designed to be viable. In this work, we present a system design for data trading in VANETs using DLT. We measure the communication time required for data trading between a vehicle and a roadside unit in a real scenario and estimate the associated cost. Our results show that the proposed system design is technically feasible and economically viable.
APA, Harvard, Vancouver, ISO, and other styles
48

Wang, Guoqing, He Chen, and Yizhuang Xie. "An Efficient Dual-Channel Data Storage and Access Method for Spaceborne Synthetic Aperture Radar Real-Time Processing." Electronics 10, no. 6 (March 12, 2021): 662. http://dx.doi.org/10.3390/electronics10060662.

Full text
Abstract:
With the development of remote sensing technology and very-large-scale integration (VLSI) technology, the real-time processing of spaceborne Synthetic Aperture Radar (SAR) has greatly improved the ability of Earth observation. However, the characteristics of external memory have made matrix transposition a technical bottleneck that limits the real-time performance of SAR imaging systems. To solve this problem, this paper combines an optimized data mapping method with a reasonable hardware architecture to implement a data controller based on a Field-Programmable Gate Array (FPGA). First, this paper proposes an optimized dual-channel data storage and access method that improves two-dimensional data access efficiency. Then, a hardware architecture is designed with a register manager, a simplified address generator, and a dual-channel Double-Data-Rate Three Synchronous Dynamic Random-Access Memory (DDR3 SDRAM) access mode. Finally, the proposed data controller is implemented on a Xilinx XC7VX690T FPGA chip. The experimental results show that the proposed data controller achieves a read efficiency of 80% in both the range and azimuth directions and a write efficiency of 66% in both directions. A comparison with recent implementations shows that the proposed data controller has higher data bandwidth, is more flexible in its design, and is suitable for spaceborne scenarios.
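One common way to make matrix transposition (the "corner turn") friendly to external memory is block-interleaved address mapping, which the following sketch illustrates; the tile size and flat address layout are our assumptions, not the paper's mapping.

TILE = 64  # elements per tile edge; tuned to the DRAM burst/row size

def tiled_address(row, col, n_cols):
    # Store the matrix in square tiles so both row-wise (range) and
    # column-wise (azimuth) accesses stay inside long contiguous bursts.
    tiles_per_row = n_cols // TILE          # assumes n_cols is a multiple of TILE
    tile_id = (row // TILE) * tiles_per_row + (col // TILE)
    offset = (row % TILE) * TILE + (col % TILE)
    return tile_id * TILE * TILE + offset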
APA, Harvard, Vancouver, ISO, and other styles
49

Huang, Binbin, Zhongjin Li, Yunqiu Xu, Linxuan Pan, Shangguang Wang, Haiyang Hu, and Victor Chang. "Deep Reinforcement Learning for Performance-Aware Adaptive Resource Allocation in Mobile Edge Computing." Wireless Communications and Mobile Computing 2020 (July 2, 2020): 1–17. http://dx.doi.org/10.1155/2020/2765491.

Full text
Abstract:
Mobile edge computing (MEC) makes it possible to provide relatively rich computing resources in close proximity to mobile users, enabling resource-limited mobile devices to offload workloads to nearby edge servers and thereby greatly reducing the processing delay of various mobile applications and the energy consumption of mobile devices. Despite these advantages, when a large number of mobile users simultaneously offload their computation tasks to an edge server, the limited computation and communication resources of the edge server mean that inefficient resource allocation fails to make full use of, and wastes, those resources, resulting in low system performance (the weighted sum of the number of processed tasks, the number of punished tasks, and the number of dropped tasks). Therefore, effectively allocating the computing and communication resources to multiple mobile users is a challenging problem. To cope with this problem, we propose a performance-aware resource allocation (PARA) scheme, the goal of which is to maximize long-term system performance. More specifically, we first build the multiuser resource allocation architecture for computing workloads and transmitting result data to mobile devices. Then, we formulate the multiuser resource allocation problem as a Markov Decision Process (MDP). To solve this problem, a PARA scheme based on a deep deterministic policy gradient (DDPG) is adopted to derive the optimal resource allocation policy. Finally, extensive simulation experiments demonstrate the effectiveness of the PARA scheme.
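A compact PyTorch sketch of the DDPG-style update at the heart of a scheme like PARA follows; state and action sizes, network shapes, and hyperparameters are illustrative assumptions, and target networks and replay buffers are omitted for brevity.

import torch
import torch.nn as nn

state_dim, action_dim = 8, 4
actor = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(),
                      nn.Linear(64, action_dim), nn.Sigmoid())  # allocation fractions
critic = nn.Sequential(nn.Linear(state_dim + action_dim, 64), nn.ReLU(),
                       nn.Linear(64, 1))                        # Q(s, a)
opt_a = torch.optim.Adam(actor.parameters(), lr=1e-4)
opt_c = torch.optim.Adam(critic.parameters(), lr=1e-3)

def ddpg_step(s, a, r, s_next, gamma=0.99):
    # Critic: regress Q(s, a) toward the bootstrapped target.
    with torch.no_grad():
        target = r + gamma * critic(torch.cat([s_next, actor(s_next)], dim=1))
    q = critic(torch.cat([s, a], dim=1))
    loss_c = nn.functional.mse_loss(q, target)
    opt_c.zero_grad(); loss_c.backward(); opt_c.step()
    # Actor: ascend the critic's value of its own actions.
    loss_a = -critic(torch.cat([s, actor(s)], dim=1)).mean()
    opt_a.zero_grad(); loss_a.backward(); opt_a.step()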
APA, Harvard, Vancouver, ISO, and other styles
50

Chi, Chuanxiu, Yingjie Wang, Yingshu Li, and Xiangrong Tong. "Multistrategy Repeated Game-Based Mobile Crowdsourcing Incentive Mechanism for Mobile Edge Computing in Internet of Things." Wireless Communications and Mobile Computing 2021 (January 25, 2021): 1–18. http://dx.doi.org/10.1155/2021/6695696.

Full text
Abstract:
With the advent of the Internet of Things (IoT) era, various applications have imposed higher requirements on data transmission bandwidth and real-time data processing. Mobile edge computing (MEC) can greatly alleviate the pressure on network bandwidth and improve response speed by effectively using the device resources of the mobile edge, and research on mobile crowdsourcing in edge computing has become a hot spot. Hence, we studied resource utilization between edge mobile devices, namely, crowdsourcing scenarios in mobile edge computing. We aimed to design an incentive mechanism that ensures the long-term participation of users and high task quality. This paper designs a long-term incentive mechanism based on game theory, which encourages participants to provide continuous, high-quality data to mobile crowdsourcing systems. The multistrategy repeated game-based incentive mechanism (MSRG incentive mechanism) is proposed to guide participants toward long-term participation and high-quality data. The proposed mechanism regards the interaction between the worker and the requester as a repeated game and obtains a long-term incentive based on historical information and a discount factor. In addition, evolutionary game theory and the Wright-Fisher model from biology are used to analyze the evolution of participants' strategies. The optimal discount factor is found within the range of discount factors admitted by the repeated game. Finally, simulation experiments verify the existing crowdsourcing dilemma and the effectiveness of the proposed incentive mechanism. The results show that the MSRG incentive mechanism has a long-term incentive effect for participants in mobile crowdsourcing systems.
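The quantity a discount-factor-based repeated-game mechanism reasons about is the discounted payoff stream, sketched below; the per-round payoffs and delta value are illustrative, not the paper's parameters.

def discounted_payoff(stage_payoffs, delta):
    # Present value of a stream of stage-game payoffs with discount factor delta.
    return sum(p * delta**t for t, p in enumerate(stage_payoffs))

# Cooperating forever at payoff 3 vs. defecting once for 5 and then earning 1:
coop = discounted_payoff([3] * 50, delta=0.9)
defect = discounted_payoff([5] + [1] * 49, delta=0.9)
print(coop > defect)   # with delta = 0.9, long-run cooperation dominates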
APA, Harvard, Vancouver, ISO, and other styles