
Journal articles on the topic 'Application to low power graphs algorithm'

Consult the top 50 journal articles for your research on the topic 'Application to low power graphs algorithm.'

1

Min, Seunghwan, Sung Gwan Park, Kunsoo Park, Dora Giammarresi, Giuseppe F. Italiano, and Wook-Shin Han. "Symmetric continuous subgraph matching with bidirectional dynamic programming." Proceedings of the VLDB Endowment 14, no. 8 (April 2021): 1298–310. http://dx.doi.org/10.14778/3457390.3457395.

Abstract:
In many real datasets such as social media streams and cyber data sources, graphs change over time through a graph update stream of edge insertions and deletions. Detecting critical patterns in such dynamic graphs plays an important role in various application domains such as fraud detection, cyber security, and recommendation systems for social networks. Given a dynamic data graph and a query graph, the continuous subgraph matching problem is to find all positive matches for each edge insertion and all negative matches for each edge deletion. The state-of-the-art algorithm TurboFlux uses a spanning tree of the query graph for filtering. However, using the spanning tree may have low pruning power because it does not take into account all edges of the query graph. In this paper, we present a symmetric and much faster algorithm, SymBi, which maintains an auxiliary data structure based on a directed acyclic graph instead of a spanning tree; this structure stores the intermediate results of bidirectional dynamic programming between the query graph and the dynamic graph. Extensive experiments with real and synthetic datasets show that SymBi outperforms the state-of-the-art algorithm by up to three orders of magnitude in terms of elapsed time.
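As a much simpler illustration of the continuous-matching task described above (this is naive backtracking, not SymBi's DAG-based index), the sketch below enumerates the new matches created by a single edge insertion. The `query` and `data` adjacency dicts are hypothetical, and `data` is assumed to already contain the inserted edge.

```python
def find_new_matches(query, data, new_edge):
    """Enumerate injective matches of `query` that use `new_edge`.
    query/data: dict vertex -> set of neighbours (undirected).
    Call after the insertion has been applied to `data`."""
    u, v = new_edge
    results = []
    qverts = list(query)

    def extend(mapping):
        if len(mapping) == len(qverts):
            results.append(dict(mapping))
            return
        q = next(x for x in qverts if x not in mapping)
        # candidates must be adjacent to every already-mapped neighbour of q
        cands = None
        for qn in query[q]:
            if qn in mapping:
                adj = data[mapping[qn]]
                cands = adj if cands is None else cands & adj
        if cands is None:
            cands = set(data)
        for c in cands:
            if c in mapping.values():
                continue  # keep the mapping injective
            mapping[q] = c
            extend(mapping)
            del mapping[q]

    # force the new data edge to be used by seeding it on every query edge
    for a in query:
        for b in query[a]:
            extend({a: u, b: v})
    # deduplicate mappings found via different seed edges
    uniq = {tuple(sorted(m.items())) for m in results}
    return [dict(t) for t in uniq]
```

For a triangle query, inserting the edge that closes a data triangle reports exactly the newly created matches; real continuous-matching systems avoid this per-edge re-enumeration with incremental indexes.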
2

Mittal, Varsha, Durgaprasad Gangodkar, and Bhaskar Pant. "K-Graph: Knowledgeable Graph for Text Documents." Journal of KONBiN 51, no. 1 (March 1, 2021): 73–89. http://dx.doi.org/10.2478/jok-2021-0006.

Abstract:
Graph databases are applied in many applications, including science and business, due to their low complexity, low overheads, and low time complexity. Graph-based storage offers the advantage of capturing semantic and structural information rather than simply using the Bag-of-Words technique. An approach called Knowledgeable graphs (K-Graph) is proposed to capture semantic knowledge. Documents are stored using graph nodes. Using weighted subgraphs, frequent subgraphs are extracted and stored in the Fast Embedding Referral Table (FERT). The table is maintained at different levels according to the headings and subheadings of the documents, which reduces the memory overhead and the retrieval and access time of the needed subgraphs. The authors propose an approach that reduces data redundancy to a large extent. On real-world datasets, K-Graph's performance and power usage are threefold better than those of current methods. Ninety-nine per cent accuracy demonstrates the robustness of the proposed algorithm.
3

Tafesse, Bisrat, and Venkatesan Muthukumar. "Framework for Simulation of Heterogeneous MpSoC for Design Space Exploration." VLSI Design 2013 (July 11, 2013): 1–16. http://dx.doi.org/10.1155/2013/936181.

Abstract:
Due to the ever-growing requirements in high-performance data computation, multiprocessor systems have been proposed to solve the bottlenecks of uniprocessor systems. Developing efficient multiprocessor systems requires effective exploration of design choices like application scheduling, mapping, and architecture design. Fault tolerance in multiprocessors also needs to be addressed. With the advent of nanometer-process technology for chip manufacturing, realization of multiprocessors on SoC (MpSoC) is an active field of research. Developing efficient low-power, fault-tolerant task scheduling and mapping techniques for MpSoCs requires optimized algorithms that consider the various scenarios inherent in multiprocessor environments. There is therefore a need for a simulation framework to explore and evaluate new algorithms on multiprocessor systems. This work proposes a modular framework for the exploration and evaluation of various design algorithms for MpSoC systems. It also proposes new multiprocessor task scheduling and mapping algorithms for MpSoCs, which are evaluated using the developed simulation framework. The paper further proposes a dynamic fault-tolerant (FT) scheduling and mapping algorithm for robust application processing. The proposed algorithms treat power as one of the design constraints. The framework for heterogeneous multiprocessor simulation was developed using the SystemC/C++ language. Various design variations were implemented and evaluated using standard task graphs, and performance evaluation metrics are discussed for various design scenarios.
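To make the scheduling-and-mapping idea concrete, here is a hedged sketch of power-aware list scheduling of a task graph on a heterogeneous multiprocessor. The cost ordering (finish time first, energy as tie-break) and all names are illustrative choices, not the paper's algorithms.

```python
def list_schedule(tasks, deps, exec_time, power, n_proc):
    """Greedy list scheduler for a task graph on a heterogeneous MpSoC.
    tasks: task ids in topological order
    deps: dict task -> set of predecessor tasks
    exec_time[t][p], power[t][p]: time / power of task t on processor p
    Returns (schedule dict task -> (proc, start, finish), total_energy)."""
    proc_free = [0.0] * n_proc
    finish = {}
    schedule = {}
    energy = 0.0
    for t in tasks:
        ready = max((finish[d] for d in deps.get(t, ())), default=0.0)
        best = None
        for p in range(n_proc):
            start = max(ready, proc_free[p])
            end = start + exec_time[t][p]
            e = exec_time[t][p] * power[t][p]
            key = (end, e)  # makespan first, energy as tie-break
            if best is None or key < best[0]:
                best = (key, p, start, end, e)
        _, p, start, end, e = best
        proc_free[p] = end
        finish[t] = end
        schedule[t] = (p, start, end)
        energy += e
    return schedule, energy
```

Real MpSoC schedulers also model communication delays, voltage/frequency scaling, and fault recovery, all of which this sketch omits.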
4

Lakshmi, B., and A. S. Dhar. "CORDIC Architectures: A Survey." VLSI Design 2010 (March 31, 2010): 1–19. http://dx.doi.org/10.1155/2010/794891.

Abstract:
In the last decade, the CORDIC algorithm has drawn wide attention from academia and industry for various applications such as DSP, biomedical signal processing, software-defined radio, neural networks, and MIMO systems, to mention just a few. It is an iterative algorithm, requiring simple shift and addition operations, for hardware realization of basic elementary functions. Since CORDIC is used as a building block in various single-chip solutions, the critical aspects to be considered are high speed, low power, and low area for achieving reasonable overall performance. In this paper, we first classify the CORDIC algorithm based on the number system and discuss its importance in the implementation of the CORDIC algorithm. Then, we present a systematic and comprehensive taxonomy of rotational CORDIC algorithms, which are subsequently discussed in depth. Special attention has been devoted to the higher-radix and flat techniques proposed in the literature for reducing the latency. Finally, a detailed comparison of various algorithms is presented, which can provide first-order information to designers looking either for further performance improvement or for selection of a rotational CORDIC for a specific application.
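Since the survey centers on the shift-and-add iteration itself, a minimal floating-point model of rotation-mode CORDIC may help. This is a textbook sketch (real hardware uses fixed-point arithmetic and a precomputed gain constant), not any specific architecture from the survey.

```python
import math

def cordic_sin_cos(theta, iterations=32):
    """Rotation-mode CORDIC: approximate (cos(theta), sin(theta)) using
    only shifts, adds, and a small table of arctangents.
    Valid for |theta| within the CORDIC convergence range (~1.74 rad)."""
    angles = [math.atan(2.0 ** -i) for i in range(iterations)]
    # scale factor K = prod 1/sqrt(1 + 2^-2i); pre-scaling x by K makes
    # the final vector length 1, so no post-multiplication is needed
    k = 1.0
    for i in range(iterations):
        k /= math.sqrt(1.0 + 2.0 ** (-2 * i))
    x, y, z = k, 0.0, theta
    for i in range(iterations):
        d = 1.0 if z >= 0.0 else -1.0
        # simultaneous update: the right-hand side uses the old x and y
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * angles[i]
    return x, y
```

In hardware, the multiplications by 2^-i become wire shifts, which is exactly why CORDIC is attractive for low-power, low-area designs.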
5

CHOI, YOONSEO, and TAEWHAN KIM. "BINDING ALGORITHM FOR POWER OPTIMIZATION BASED ON NETWORK FLOW METHOD." Journal of Circuits, Systems and Computers 11, no. 03 (June 2002): 259–71. http://dx.doi.org/10.1142/s0218126602000422.

Abstract:
We propose an efficient binding algorithm for power optimization in behavioral synthesis. Prior work has shown that several binding problems for low power can be formulated as multi-commodity flow problems (due to the iterative execution of the data flow graph) and solved optimally. However, since the multi-commodity flow problem is NP-hard, that application is limited to small-sized problems. To overcome the limitation, we address the problem of how to effectively exploit the efficiency of flow computations in a network so that the approach is applicable to practical designs while producing close-to-optimal results. To this end, we propose a two-step procedure, which (1) determines a feasible binding solution by partially utilizing the computation steps for finding a maximum flow of minimum cost in a network and then (2) refines it iteratively. Experiments with a set of benchmark examples show that the proposed algorithm saves run time significantly while maintaining close-to-optimal bindings in most practical designs.
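The abstract frames low-power binding as a flow problem. As a generic, hedged illustration (a plain successive-shortest-path min-cost max-flow, not the paper's two-step heuristic), the sketch below can solve a toy binding instance where each operation is assigned to one functional unit and edge costs stand in for estimated switching power; the node numbering in the usage note is invented.

```python
def min_cost_flow(n, edges, s, t, maxf=float('inf')):
    """Successive-shortest-path min-cost max-flow with Bellman-Ford.
    edges: list of (u, v, capacity, cost). Returns (flow, cost)."""
    graph = [[] for _ in range(n)]
    def add(u, v, cap, cost):
        graph[u].append([v, cap, cost, len(graph[v])])
        graph[v].append([u, 0, -cost, len(graph[u]) - 1])
    for u, v, cap, cost in edges:
        add(u, v, cap, cost)
    flow = cost = 0
    while flow < maxf:
        dist = [float('inf')] * n
        dist[s] = 0
        prev = [None] * n  # (node, edge index into graph[node])
        for _ in range(n - 1):  # Bellman-Ford on the residual graph
            updated = False
            for u in range(n):
                if dist[u] == float('inf'):
                    continue
                for i, (v, cap, c, _) in enumerate(graph[u]):
                    if cap > 0 and dist[u] + c < dist[v]:
                        dist[v] = dist[u] + c
                        prev[v] = (u, i)
                        updated = True
            if not updated:
                break
        if dist[t] == float('inf'):
            break
        # find bottleneck along the shortest path, then push flow
        f, v = maxf - flow, t
        while v != s:
            u, i = prev[v]
            f = min(f, graph[u][i][1])
            v = u
        v = t
        while v != s:
            u, i = prev[v]
            graph[u][i][1] -= f
            graph[graph[u][i][0]][graph[u][i][3]][1] += f
            v = u
        flow += f
        cost += f * dist[t]
    return flow, cost
```

For example, with source 0, operations 1 and 2, functional units 3 and 4, sink 5, unit capacities of one, and op-to-unit costs [[4, 1], [2, 3]], the minimum-cost assignment has total cost 3 (op 1 to unit 4, op 2 to unit 3).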
6

Durcek, Viktor, Michal Kuba, and Milan Dado. "Investigation of random-structure regular LDPC codes construction based on progressive edge-growth and algorithms for removal of short cycles." Eastern-European Journal of Enterprise Technologies 4, no. 9(112) (August 31, 2021): 46–53. http://dx.doi.org/10.15587/1729-4061.2021.225852.

Abstract:
This paper investigates the construction of random-structure LDPC (low-density parity-check) codes using the Progressive Edge-Growth (PEG) algorithm and two proposed algorithms for removing short cycles (the CB1 and CB2 algorithms; CB stands for Cycle Break). Progressive Edge-Growth is an algorithm for computer-based design of random-structure LDPC codes, the role of which is to generate a Tanner graph (a bipartite graph which represents a parity-check matrix of an error-correcting channel code) with as few short cycles as possible. Short cycles in Tanner graphs of LDPC codes, especially the shortest ones with a length of 4 edges, can degrade the performance of the decoding algorithm, because after a certain number of decoding iterations the information sent through their edges is no longer independent. The main contribution of this paper is the unique approach to removing short cycles in the form of the CB2 algorithm, which erases edges from the code's parity-check matrix without decreasing the minimum Hamming distance of the code. The two cycle-removing algorithms can be used to improve the error-correcting performance of PEG-generated (or any other) LDPC codes, and achieved results are provided. All these algorithms were used to create a PEG LDPC code which rivals the best-known PEG-generated LDPC code with similar parameters provided by one of the founders of LDPC codes. The methods for generating the mentioned error-correcting codes are described, along with simulations which compare the error-correcting performance of the original codes generated by the PEG algorithm, the PEG codes processed by either the CB1 or CB2 algorithm, and the external PEG code published by one of the founders of LDPC codes.
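A length-4 cycle in a Tanner graph corresponds to two parity-check rows that share ones in two columns, which makes detection easy to sketch. The naive cycle-break below simply clears one entry per cycle; unlike the paper's CB2, it makes no attempt to preserve the code's minimum Hamming distance.

```python
from itertools import combinations

def four_cycles(H):
    """Find length-4 cycles in the Tanner graph of parity-check matrix H.
    A 4-cycle exists iff two rows share ones in two (or more) columns.
    Returns a list of (row_i, row_j, col_a, col_b)."""
    supports = [set(j for j, bit in enumerate(row) if bit) for row in H]
    cycles = []
    for i, j in combinations(range(len(H)), 2):
        common = sorted(supports[i] & supports[j])
        for a, b in combinations(common, 2):
            cycles.append((i, j, a, b))
    return cycles

def break_cycles(H):
    """Naive cycle-break: clear one entry of each detected 4-cycle.
    (CB2 additionally guards the minimum Hamming distance.)"""
    H = [row[:] for row in H]
    while True:
        cyc = four_cycles(H)
        if not cyc:
            return H
        i, j, a, b = cyc[0]
        H[i][a] = 0  # remove one edge of the cycle from the Tanner graph
```

Running it on a small matrix with one 4-cycle removes exactly one edge and leaves a girth-free (for length 4) Tanner graph.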
7

Geoff Rideout, D., Jeffrey L. Stein, and Loucas S. Louca. "Systematic Identification of Decoupling in Dynamic System Models." Journal of Dynamic Systems, Measurement, and Control 129, no. 4 (October 24, 2006): 503–13. http://dx.doi.org/10.1115/1.2745859.

Abstract:
This paper proposes a technique to quantitatively and systematically search for decoupling among elements of a dynamic system model, and to partition models in which decoupling is found. The method can validate simplifying assumptions based on decoupling, determine when decoupling breaks down due to changes in system parameters or inputs, and indicate required model changes. A high-fidelity model is first generated using the bond graph formalism. The relative contributions of the terms of the generalized Kirchhoff loop and node equations are computed by calculating and comparing a measure of their power flow. Negligible aggregate bond power at a constraint equation node indicates an unnecessary term, which is then removed from the model by replacing the associated bond by a modulated source of generalized effort or flow. If replacement of all low-power bonds creates separate bond graphs that are joined by modulating signals, then the model can be partitioned into driving and driven subsystems. The partitions are smaller than the original model, have lower-dimension design variable vectors, and can be simulated separately or in parallel. The partitioning algorithm can be employed alongside existing automated modeling techniques to facilitate efficient, accurate simulation-based design of dynamic systems.
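The power-flow measure at the heart of this method can be sketched numerically: integrate the absolute power carried by each bond over a simulation window and flag bonds whose share of total activity falls below a threshold. The function names, the rectangular integration, and the 1% threshold below are illustrative assumptions, not the paper's exact metric.

```python
def bond_activity(power_series, dt):
    """Activity of a bond: integral of |power| over the window
    (rectangular rule on sampled instantaneous power)."""
    return sum(abs(p) for p in power_series) * dt

def negligible_bonds(power_table, dt, threshold=0.01):
    """Rank bonds by activity and flag those below `threshold` of total.
    power_table: dict bond_name -> list of instantaneous power samples.
    Flagged bonds are candidates for replacement by modulated sources."""
    act = {b: bond_activity(p, dt) for b, p in power_table.items()}
    total = sum(act.values()) or 1.0
    return {b: a / total for b, a in act.items() if a / total < threshold}
```

A bond carrying microwatts next to one carrying watts would be flagged, mirroring the paper's criterion that negligible aggregate bond power marks an unnecessary term.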
8

Dai, Lan, and Chengying Chen. "A 69-dB SNR 89-μW AGC for Multifrequency Signal Processing Based on Peak-Statistical Algorithm and Judgment Logic." VLSI Design 2016 (December 29, 2016): 1–7. http://dx.doi.org/10.1155/2016/6708253.

Abstract:
A novel peak-statistical algorithm and judgment logic (PSJ) for the multifrequency-signal automatic gain control (AGC) loop in a hearing aid SoC is proposed in this paper. Under multifrequency-signal conditions, it tracks amplitude changes and gathers statistics on them; the judgment is then made and the circuit gain is controlled precisely. The AGC circuit is implemented in a 0.13 μm 1P8M CMOS mixed-signal technology. Meanwhile, a low-power circuit topology and a noise-optimizing technique are adopted to improve the signal-to-noise ratio (SNR) of the circuit. Under a 1 V voltage supply, the peak SNR achieves 69.2 dB and the total harmonic distortion (THD) is 65.3 dB with 89 μW power consumption.
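A toy software model of peak-tracking gain control may clarify the loop's behavior. The window size, dB step, and thresholds below are invented, and the actual PSJ logic is mixed-signal hardware, so treat this purely as a behavioral sketch.

```python
def agc_gains(samples, window, target_peak, step_db=1.0):
    """Sketch of a peak-statistical AGC loop: per window, measure the
    peak of the gain-scaled input and step the gain (in dB) toward the
    target peak amplitude. Returns the gain (dB) after each window."""
    gain_db = 0.0
    gains = []
    for start in range(0, len(samples), window):
        g = 10.0 ** (gain_db / 20.0)
        peak = max(abs(x) * g for x in samples[start:start + window])
        if peak > target_peak:
            gain_db -= step_db      # attack: reduce gain on overshoot
        elif peak < 0.9 * target_peak:
            gain_db += step_db      # release: raise gain on weak signal
        gains.append(gain_db)
    return gains
```

Feeding a weak constant signal makes the gain ramp up one step per window until the scaled peak reaches the target band.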
9

Ibrahim, Atef, Fayez Gebali, Yassine Bouteraa, Usman Tariq, Tariq Ahamad, and Waleed Nazih. "Low-Space Bit-Parallel Systolic Structure for AOP-Based Multiplier Suitable for Resource-Constrained IoT Edge Devices." Mathematics 10, no. 5 (March 4, 2022): 815. http://dx.doi.org/10.3390/math10050815.

Abstract:
Security and privacy issues with IoT edge devices hinder the application of IoT technology in many applications. Applying cryptographic protocols to edge devices is the perfect solution to security issues. Implementing these protocols on edge devices represents a significant challenge due to their limited resources. Finite-field multiplication is the core operation for most cryptographic protocols, and its efficient implementation has a remarkable impact on their performance. This article offers an efficient low-area and low-power one-dimensional bit-parallel systolic implementation for field multiplication in GF(2n) based on an irreducible all-one polynomial (AOP). We represented the adopted multiplication algorithm in the bit-level form to be able to extract its dependency graph (DG). We choose to apply specific scheduling and projection vectors to the DG to extract the bit-parallel systolic multiplier structure. In contrast with most of the previously published parallel structures, the proposed one has an area complexity of the order O(n) compared to the area complexity of the order of O(n2) for most parallel multiplier structures. The complexity analysis of the proposed multiplier structure shows that it exhibits a meaningful reduction in area compared to most of the compared parallel multipliers. To confirm the results of the complexity analysis, we performed an ASIC implementation of the proposed and the existing efficient multiplier structures using an ASIC CMOS library. The obtained ASIC synthesis report shows that the proposed multiplier structure displays significant savings in terms of its area, power consumption, area-delay product (ADP), and power-delay product (PDP). It offers average savings in space of nearly 33.7%, average savings in power consumption of 39.3%, average savings in ADP of 24.8%, and savings in PDP of 31.2% compared to the competitive existing multiplier structures. 
The achieved results make the proposed multiplier structure more suitable for utilization in resource-constrained devices such as IoT edge devices, smart cards, and other compact embedded devices.
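The arithmetic that the systolic array parallelizes can be sketched in software: bit-level multiplication in GF(2^n) reduced modulo the all-one polynomial. This is a plain shift-and-XOR model of the field operation, not the systolic structure itself.

```python
def gf_mul_aop(a, b, n):
    """Bit-level multiplication in GF(2^n) modulo the all-one polynomial
    p(x) = x^n + x^(n-1) + ... + x + 1, which is irreducible when n+1 is
    prime and 2 is a primitive root mod n+1 (e.g. n = 4, 10, 12).
    a, b are integers < 2^n whose bits are polynomial coefficients."""
    # carry-free polynomial multiplication over GF(2)
    prod = 0
    for i in range(n):
        if (b >> i) & 1:
            prod ^= a << i
    # reduce modulo p(x): clear each bit k >= n by XORing p(x) << (k-n)
    p = (1 << (n + 1)) - 1   # bitmask of p(x): n+1 one-bits
    for k in range(2 * n - 2, n - 1, -1):
        if (prod >> k) & 1:
            prod ^= p << (k - n)
    return prod & ((1 << n) - 1)
```

A quick sanity check for n = 4: the nonzero elements form a multiplicative group of order 15, so a^15 = 1 for every nonzero a, which holds only if the modulus really is irreducible.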
10

Gulakhmadov, Aminjon, Salima Asanova, Damira Asanova, Murodbek Safaraliev, Alexander Tavlintsev, Egor Lyukhanov, Sergey Semenenko, and Ismoil Odinaev. "Power Flows and Losses Calculation in Radial Networks by Representing the Network Topology in the Hierarchical Structure Form." Energies 15, no. 3 (January 21, 2022): 765. http://dx.doi.org/10.3390/en15030765.

Abstract:
This paper proposes a structured hierarchical-multilevel approach to calculating the power flows and losses of electricity in radial electrical networks with different nominal voltages at given loads and voltages of the power source. The studied electrical networks are characterized by high dimensionality and dynamic development, but also by incomplete and unreliable state information. The approach is based on representing the initial network graph as a hierarchical-multilevel structure, divided into two stages with rated voltages Unom≤35 kV and Unom≥35 kV, and uses the traditional (manual) engineering two-stage method, where the calculation is performed in a sequence from bottom to top (stage 1) and from top to bottom (stage 2), moving along the structure of the network. This approach yields an algorithm for computer implementation which is characterized by universality (for an arbitrary network configuration and complexity), high performance, and low computer-memory requirements.
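The bottom-up/top-down sequence resembles the classical backward/forward sweep for radial feeders, which can be sketched for a single voltage level (the paper's hierarchical multilevel splitting is not modeled here, and the per-unit example values are invented).

```python
def backward_forward_sweep(tree, z, loads, v_source, iters=30):
    """Backward/forward sweep power flow on a radial (tree) network.
    tree: dict node -> list of children, rooted at node 0 (the source)
    z[n]: complex impedance of the branch feeding node n
    loads[n]: complex power S = P + jQ drawn at node n
    Returns dict node -> complex voltage."""
    # depth-first order: parents always precede children
    order, parent = [0], {}
    i = 0
    while i < len(order):
        for c in tree.get(order[i], []):
            parent[c] = order[i]
            order.append(c)
        i += 1
    v = {n: complex(v_source) for n in order}  # flat start
    for _ in range(iters):
        # backward sweep (bottom to top): accumulate branch currents,
        # with load current I = conj(S / V) at each node
        current = {n: (loads.get(n, 0) / v[n]).conjugate() for n in order}
        for n in reversed(order[1:]):
            current[parent[n]] += current[n]
        # forward sweep (top to bottom): apply branch voltage drops
        for n in order[1:]:
            v[n] = v[parent[n]] - z[n] * current[n]
    return v
```

At convergence each node voltage satisfies the fixed-point relation V = V_parent - Z * conj(S / V), and load-bus voltages drop below the source voltage as expected.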
11

Kumar V., Shiva, Rajashree V. Biradar, and V. C. Patil. "Design and Performance Analysis of Hybrid Energy Harvesting and WSN Application for More Life Time and High Throughput." International Journal of Circuits, Systems and Signal Processing 16 (January 17, 2022): 686–98. http://dx.doi.org/10.46300/9106.2022.16.85.

Abstract:
The technology of wireless sensor-actuator networks (WSANs) is widely employed in IoT applications due to its wireless nature, requiring no wired infrastructure. Battery-driven wireless systems can easily and efficiently reconfigure the existing devices and sensors in manufacturing units without employing any cable for power or communication. Wireless sensor-actuator networks based on IEEE 802.15.4 consume significantly less power. These networks are designed and built cost-effectively, considering battery capacity and expense, so that they can be employed in many applications. A typical wireless Autonomous Scheduling and Distributed Graph Routing (DDSR) application has illustrated the reliability of its basic approaches for almost ten years; it provides accurate routing and time-slotted channel hopping, thereby ensuring reliable low-power wireless communication at the processing site. As officially declared, industry is experiencing its fourth industrialization, and there is a huge requirement for sensor nodes linked via WSANs at industrial sites. High computational complexity is another drawback of the existing WSAN standards, caused by their highly centralized traffic management systems, which significantly improve the consistency and accessibility of network operations at the expense of optimization. This research work studies efficient wireless DGR network management and introduces an alternative to DDSR by enabling the sensor nodes to determine their own data-traffic routes for data transmission. Compared to the two physical routing protocols above, the proposed technique can drastically improve network performance, throughput, and energy consumption under various aspects.
Energy harvesting (EH) plays a significant role in the implementation of large IoT deployments. An efficient energy-harvesting approach eliminates the need for repeated replacement of power sources, thereby providing a near-perpetual working environment for the network. The structural concept of routing protocols designed for wireless-sensor-based IoT applications has been transformed from "energy-aware" to "energy-harvesting-aware" because of developments in energy-harvesting techniques. The main objective of this research work is to propose an energy-harvesting-aware routing protocol for various IoT networks in the case of acoustic energy sources. A novel routing algorithm called Autonomous Scheduling and Distributed Graph Routing (DDSR) has been developed and significantly improved by incorporating a new "energy back-off" factor. The proposed algorithm, when integrated with various energy-harvesting techniques, enhances node longevity, network quality of service under increased differential traffic, and factors influencing energy accessibility. The research work analyses system performance under various energy-harvesting constraints. Compared to previous routing protocols, the proposed algorithm achieves very good energy efficiency in distributed IoT networks while fulfilling QoS requirements.
12

Chauhan, Ankit, Tobias Friedrich, and Ralf Rothenberger. "Greed is Good for Deterministic Scale-Free Networks." Algorithmica 82, no. 11 (June 19, 2020): 3338–89. http://dx.doi.org/10.1007/s00453-020-00729-z.

Abstract:
Large real-world networks typically follow a power-law degree distribution. To study such networks, numerous random graph models have been proposed. However, real-world networks are not drawn at random. Therefore, Brach et al. (27th Symposium on Discrete Algorithms (SODA), pp 1306–1325, 2016) introduced two natural deterministic conditions: (1) a power-law upper bound on the degree distribution (PLB-U) and (2) power-law neighborhoods, that is, the degree distribution of neighbors of each vertex is also upper bounded by a power law (PLB-N). They showed that many real-world networks satisfy both properties and exploited them to design faster algorithms for a number of classical graph problems. We complement their work by showing that some well-studied random graph models exhibit both of the mentioned PLB properties. PLB-U and PLB-N hold with high probability for Chung–Lu Random Graphs and Geometric Inhomogeneous Random Graphs and almost surely for Hyperbolic Random Graphs. As a consequence, all results of Brach et al. also hold with high probability or almost surely for those random graph classes. In the second part we study three classical NP-hard optimization problems on PLB networks. It is known that on general graphs with maximum degree Δ, a greedy algorithm, which chooses nodes in the order of their degree, only achieves an Ω(ln Δ)-approximation for Minimum Vertex Cover and Minimum Dominating Set, and an Ω(Δ)-approximation for Maximum Independent Set. We prove that the PLB-U property with β > 2 suffices for the greedy approach to achieve a constant-factor approximation for all three problems. We also show that these problems are APX-hard even if PLB-U, PLB-N, and an additional power-law lower bound on the degree distribution hold. Hence, a PTAS cannot be expected unless P = NP. Furthermore, we prove that all three problems are in APX if the PLB-U property holds.
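The degree-ordered greedy the abstract refers to is short enough to sketch for Minimum Dominating Set. This is the generic textbook heuristic, shown only to make the analyzed algorithm concrete; nothing here reproduces the paper's PLB analysis.

```python
def degree_greedy_dominating_set(adj):
    """Degree-ordered greedy for Minimum Dominating Set: scan vertices
    from highest to lowest degree, keeping any vertex that dominates a
    not-yet-dominated vertex.
    adj: dict vertex -> set of neighbours. Returns the dominating set."""
    undominated = set(adj)
    dom = set()
    for u in sorted(adj, key=lambda w: len(adj[w]), reverse=True):
        if undominated & ({u} | adj[u]):
            dom.add(u)
            undominated -= {u} | adj[u]
        if not undominated:
            break
    return dom
```

On a star graph the high-degree hub dominates everything, which is the favorable structure that power-law degree bounds make common.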
13

Hansson, Andreas, Kees Goossens, and Andrei Rădulescu. "A Unified Approach to Mapping and Routing on a Network-on-Chip for Both Best-Effort and Guaranteed Service Traffic." VLSI Design 2007 (June 4, 2007): 1–16. http://dx.doi.org/10.1155/2007/68432.

Abstract:
One of the key steps in Network-on-Chip-based design is spatial mapping of cores and routing of the communication between those cores. Known solutions to the mapping and routing problems first map cores onto a topology and then route communication, using separate and possibly conflicting objective functions. In this paper, we present a unified single-objective algorithm, called Unified MApping, Routing, and Slot allocation (UMARS+). As the main contribution, we show how to couple path selection, mapping of cores, and channel time-slot allocation to minimize the network required to meet the constraints of the application. The time-complexity of UMARS+ is low and experimental results indicate a run-time only 20% higher than that of path selection alone. We apply the algorithm to an MPEG decoder System-on-Chip, reducing area by 33%, power dissipation by 35%, and worst-case latency by a factor four over a traditional waterfall approach.
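To illustrate how mapping and routing costs can be coupled in one objective (the abstract's central point), here is a toy greedy placement with deterministic XY routing on a mesh. This is not UMARS+; the cost function, tile ordering, and names are invented for the sketch.

```python
def xy_route(src, dst):
    """Deterministic X-then-Y routing on a 2D mesh.
    Returns the list of links ((x, y) -> (x', y')) traversed."""
    (x, y), (dx, dy) = src, dst
    path = []
    while x != dx:
        nx = x + (1 if dx > x else -1)
        path.append(((x, y), (nx, y)))
        x = nx
    while y != dy:
        ny = y + (1 if dy > y else -1)
        path.append(((x, y), (x, ny)))
        y = ny
    return path

def map_cores(comm, mesh_w, mesh_h):
    """Greedy coupled mapping: place cores in decreasing order of total
    traffic, each on the free tile minimizing bandwidth-weighted hop
    count (over XY routes) to already-placed communication partners.
    comm: dict (core_a, core_b) -> bandwidth."""
    traffic = {}
    for (a, b), bw in comm.items():
        traffic[a] = traffic.get(a, 0) + bw
        traffic[b] = traffic.get(b, 0) + bw
    tiles = [(x, y) for x in range(mesh_w) for y in range(mesh_h)]
    placement = {}
    for core in sorted(traffic, key=traffic.get, reverse=True):
        best_tile, best_cost = None, None
        for t in tiles:
            if t in placement.values():
                continue
            cost = 0
            for (a, b), bw in comm.items():
                other = b if a == core else a if b == core else None
                if other in placement:
                    cost += bw * len(xy_route(t, placement[other]))
            if best_cost is None or cost < best_cost:
                best_tile, best_cost = t, cost
        placement[core] = best_tile
    return placement
```

Heavily communicating cores end up on adjacent tiles, which is the effect a unified mapping-and-routing objective is after; UMARS+ additionally allocates time slots for guaranteed-service traffic.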
14

DA SILVA, MARIANA O., GUSTAVO A. GIMENEZ-LUGO, and MURILO V. G. DA SILVA. "VERTEX COVER IN COMPLEX NETWORKS." International Journal of Modern Physics C 24, no. 11 (October 14, 2013): 1350078. http://dx.doi.org/10.1142/s0129183113500782.

Abstract:
A Minimum Vertex Cover is the smallest set of vertices whose removal leaves a graph with no edges, i.e., a set containing at least one endpoint of every edge. In this paper, we perform experiments on a number of graphs from standard complex-network databases addressing the problem of finding a "good" vertex cover (finding an optimum is an NP-hard problem). In particular, we take advantage of the ubiquitous power-law distribution present in many complex networks. In our experiments, we show that running a greedy algorithm on a power-law graph yields a very small vertex cover, typically about 1.02 times the theoretical optimum. This is an interesting practical result, since theoretically we know that: (1) in a general graph on n vertices, a greedy approach cannot guarantee a factor better than ln n; (2) the best approximation algorithm known at the moment is very involved and has a much larger factor of [Formula: see text]. In fact, in the context of approximation within a constant factor, it is conjectured that there is no (2 – ϵ)-approximation algorithm for the problem; (3) even restricted to power-law graphs and probabilistic guarantees, the best known approximation rate is 1.5.
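An experiment like the one described needs two pieces: the greedy cover and a lower bound to compare against. The sketch below pairs degree-greedy vertex cover with the standard maximal-matching bound (any vertex cover must contain an endpoint of each matching edge, so |M| is a lower bound on the optimum); the exact greedy variant and datasets of the paper are not reproduced.

```python
from collections import defaultdict

def greedy_vertex_cover(edges):
    """Degree-greedy vertex cover: repeatedly add the vertex covering
    the most still-uncovered edges."""
    uncovered = set(map(frozenset, edges))
    cover = set()
    while uncovered:
        deg = defaultdict(int)
        for e in uncovered:
            for v in e:
                deg[v] += 1
        best = max(deg, key=deg.get)
        cover.add(best)
        uncovered = {e for e in uncovered if best not in e}
    return cover

def matching_lower_bound(edges):
    """Size of a greedily built maximal matching: a lower bound on the
    optimum vertex cover size."""
    matched = set()
    m = 0
    for u, v in edges:
        if u not in matched and v not in matched:
            matched |= {u, v}
            m += 1
    return m
```

The ratio `len(cover) / matching_lower_bound(edges)` gives an upper estimate of the greedy's approximation factor on any instance; on power-law-like graphs with dominant hubs it tends to be close to 1.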
15

Du, Zhihui, Oliver Alvarado Rodriguez, Joseph Patchett, and David A. Bader. "Interactive Graph Stream Analytics in Arkouda." Algorithms 14, no. 8 (July 21, 2021): 221. http://dx.doi.org/10.3390/a14080221.

Abstract:
Data from emerging applications, such as cybersecurity and social networking, can be abstracted as graphs whose edges are updated sequentially in the form of a stream. The challenging problem of interactive graph stream analytics is providing quick responses to end-user queries on terabyte-scale and larger graph stream data. In this paper, a succinct and efficient double-index data structure is designed to build the sketch of a graph stream to meet general queries. A single-pass stream model, which includes general sketch building, distributed sketch-based analysis algorithms, and regression-based approximation solution generation, is developed, and a typical graph algorithm, triangle counting, is implemented to evaluate the proposed method. Experimental results on power-law and normal-distribution graph streams show that our method can generate accurate results (mean relative error less than 4%) with high performance. All our methods and code have been implemented in an open-source framework, Arkouda, and are available from our GitHub repository, Bader-Research. This work provides the large and rapidly growing Python community with a powerful way to handle terabyte-scale and larger graph stream data using their laptops.
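The triangle-counting primitive the paper evaluates can be sketched exactly for a single machine (Arkouda's version is distributed and sketch-based; this naive form is only the underlying identity: an inserted or deleted edge (u, v) changes the count by the number of common neighbours of u and v).

```python
def stream_triangles(edge_stream):
    """Exact incremental triangle counting over an insert/delete stream.
    edge_stream: iterable of ('+', u, v) or ('-', u, v); a '-' update
    assumes the edge is currently present.
    Yields the running triangle count after each update."""
    adj = {}
    count = 0
    for op, u, v in edge_stream:
        # common neighbours of u and v decide the count delta either way
        delta = len(adj.get(u, set()) & adj.get(v, set()))
        if op == '+':
            count += delta
            adj.setdefault(u, set()).add(v)
            adj.setdefault(v, set()).add(u)
        else:
            adj[u].discard(v)
            adj[v].discard(u)
            count -= delta
        yield count
```

Closing a path 1-2-3 with the edge (1, 3) raises the count to one, and deleting that edge drops it back to zero.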
16

McClay, Wilbert. "A Magnetoencephalographic/Encephalographic (MEG/EEG) Brain-Computer Interface Driver for Interactive iOS Mobile Videogame Applications Utilizing the Hadoop Ecosystem, MongoDB, and Cassandra NoSQL Databases." Diseases 6, no. 4 (September 28, 2018): 89. http://dx.doi.org/10.3390/diseases6040089.

Abstract:
In Phase I, we collected data on five subjects, yielding over 90% positive performance on magnetoencephalographic (MEG) mid- and post-movement activity. In addition, a driver was developed that substituted the actions of the brain-computer interface (BCI) for mouse button presses for real-time use in visual simulations. The process was interfaced to a flight visualization demonstration: utilizing left or right brainwave thought movement, the user experiences the aircraft turning in the chosen direction, either in the simulation or in the iOS Mobile Warfighter videogame application. The BCI's data analytics of a subject's MEG brain waves and flight-visualization videogame performance were stored and analyzed using the Hadoop Ecosystem as a quick-retrieval data warehouse. The Phase II portion of the project involves the Emotiv electroencephalographic (EEG) wireless brain-computer interfaces (BCIs), which allow people to establish a novel communication channel between the human brain and a machine, in this case an iOS mobile application. The EEG BCI utilizes advanced and novel machine learning algorithms, as well as the Spark Directed Acyclic Graph (DAG), the Cassandra NoSQL database environment, and the competitor NoSQL MongoDB database, for housing BCI analytics of subjects' responses and users' intent, illustrated for both MEG and EEG brainwave signal acquisition. The wireless EEG signals acquired from OpenVibe and the Emotiv EPOC headset can be connected via Bluetooth to an iPhone utilizing a thin-client architecture. NoSQL databases were chosen because of their schema-less architecture and MapReduce computational paradigm for housing a user's brain signals from each referencing sensor.
Thus, in the near future, if multiple users are playing over an online network connection and an MEG/EEG sensor fails, or if the connection between the smartphone and the webserver is lost due to low battery power or failed data transmission, it will not nullify the NoSQL document-oriented (MongoDB) or column-oriented (Cassandra) databases. Additionally, NoSQL databases have fast querying and indexing methodologies, which are well suited to online game analytics and technology. In Phase II, we collected data on five MEG subjects, yielding over 90% positive performance on iOS mobile applications with Objective-C and C++. However, on EEG signals from three subjects with the Emotiv wireless headsets and (n < 10) subjects from the OpenVibe EEG database, the Variational Bayesian Factor Analysis (VBFA) algorithm yielded below 60% performance; we are currently extending the VBFA algorithm to the time-frequency domain (VBFA-TF) to enhance EEG performance in the near future. The novel usage of the NoSQL databases Cassandra and MongoDB was the primary enhancement of the Phase II MEG/EEG brain-signal data acquisition, queries, and rapid analytics, with MapReduce and Spark DAG demonstrating future implications for next-generation biometric MEG/EEG NoSQL databases.
17

Hu, Jinlong, Junjie Liang, and Shoubin Dong. "iBGP: A Bipartite Graph Propagation Approach for Mobile Advertising Fraud Detection." Mobile Information Systems 2017 (2017): 1–12. http://dx.doi.org/10.1155/2017/6412521.

Abstract:
Online mobile advertising plays a vital financial role in supporting free mobile apps, but detecting malicious app publishers who generate fraudulent actions on the advertisements hosted on their apps is difficult, since fraudulent traffic often mimics behaviors of legitimate users and evolves rapidly. In this paper, we propose a novel bipartite-graph-based propagation approach, iBGP, for mobile app advertising fraud detection in large advertising systems. We exploit the characteristics of mobile advertising users' behavior, identify two persistent patterns (power-law distribution and pertinence), and propose an automatic initial-score-learning algorithm that formulates both concepts to learn the initial scores of non-seed nodes. We propose a weighted graph propagation algorithm to propagate the scores of all nodes in the user-app bipartite graphs until convergence. To extend our approach to large-scale settings, we decompose the objective function of the initial-score-learning model into separate one-dimensional problems and parallelize the whole approach on an Apache Spark cluster. iBGP was applied to a large synthetic dataset and a large real-world mobile advertising dataset; experiment results demonstrate that iBGP significantly outperforms other popular graph-based propagation methods.
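A generic damped, weighted score propagation on a user-app bipartite graph may clarify the propagation step (this is not iBGP itself; the damping factor, update rule, and node names are invented, and iBGP's learned initial scores are replaced by given seeds).

```python
def propagate_scores(edges, init, iters=50, damping=0.85):
    """Weighted score propagation on a user-app bipartite graph.
    edges: list of (user, app, weight); init: dict node -> initial score
    in [0, 1] (e.g. 1.0 for known-fraud seeds), covering every node.
    Each node is pulled toward the weighted mean of its neighbours'
    scores, anchored to its own initial score by the damping factor."""
    nbrs = {}
    for u, a, w in edges:
        nbrs.setdefault(u, []).append((a, w))
        nbrs.setdefault(a, []).append((u, w))
    score = dict(init)
    for _ in range(iters):
        new = {}
        for n, nb in nbrs.items():
            total = sum(w for _, w in nb)
            mean = sum(w * score[m] for m, w in nb) / total
            new[n] = damping * mean + (1 - damping) * init[n]
        score = new
    return score
```

With one fraudulent seed user, apps connected to that user end up with higher scores than apps connected only to clean users, which is the qualitative behavior such propagation schemes rely on.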
APA, Harvard, Vancouver, ISO, and other styles
18

Cho, Koon-Shik, and Jun-Dong Cho. "Low Power Digital Multimedia Telecommunication Designs." VLSI Design 12, no. 3 (January 1, 2001): 301–15. http://dx.doi.org/10.1155/2001/43078.

Full text
Abstract:
The increasing prominence of wireless multimedia systems and the need to limit power in very-high-density VLSI chips have led to rapid and innovative developments in low-power design. Power reduction has emerged as a significant design constraint in VLSI design. The need for wireless multimedia systems leads to much higher power consumption than traditional portable applications. This paper presents possible optimization techniques to reduce the energy consumption of wireless multimedia communication systems. Four topics are presented in the wireless communication systems subsection, dealing with architectures such as PN acquisition, the parallel correlator, the matched filter, and channel coding. Two topics cover the IDCT and motion estimation in multimedia applications. These topics consider algorithms and architectures for low-power design, such as using a hybrid architecture in PN acquisition, analyzing the algorithm and optimizing sample storage in the parallel correlator, using a complex matched filter in which analog operational circuits are controlled by digital signals, adopting bit-serial arithmetic for the ACS operation in the Viterbi decoder, using CRC to adaptively terminate the SOVA iteration in the turbo decoder, using codesign in the RS codec, disabling processing elements as soon as the distortion values become greater than the minimum distortion value in motion estimation, and exploiting the relative occurrence of zero-valued DCT coefficients in the IDCT.
APA, Harvard, Vancouver, ISO, and other styles
19

Yu, Liren, Jiaming Xu, and Xiaojun Lin. "The Power of D-hops in Matching Power-Law Graphs." Proceedings of the ACM on Measurement and Analysis of Computing Systems 5, no. 2 (June 2021): 1–43. http://dx.doi.org/10.1145/3460094.

Full text
Abstract:
This paper studies seeded graph matching for power-law graphs. Assume that two edge-correlated graphs are independently edge-sampled from a common parent graph with a power-law degree distribution. A set of correctly matched vertex-pairs is chosen at random and revealed as initial seeds. Our goal is to use the seeds to recover the remaining latent vertex correspondence between the two graphs. Departing from the existing approaches that focus on the use of high-degree seeds in 1-hop neighborhoods, we develop an efficient algorithm that exploits the low-degree seeds in suitably-defined D-hop neighborhoods. Specifically, we first match a set of vertex-pairs with appropriate degrees (which we refer to as the first slice) based on the number of low-degree seeds in their D-hop neighborhoods. This approach significantly reduces the number of initial seeds needed to trigger a cascading process to match the rest of the graphs. Under the Chung-Lu random graph model with n vertices, max degree Θ(√n), and power-law exponent 2 < β < 3, we show that as soon as D > (4−β)/(3−β), by optimally choosing the first slice, with high probability our algorithm can correctly match a constant fraction of the true pairs without any error, provided with only Ω((log n)^(4−β)) initial seeds. Our result achieves an exponential reduction in the seed size requirement, as the best previously known result requires n^(1/2+ε) seeds (for any small constant ε > 0). Performance evaluation with synthetic and real data further corroborates the improved performance of our algorithm.
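The basic primitive the abstract relies on, counting how many seed vertices lie within D hops of each vertex, can be computed with a breadth-first search from each seed. This is an illustrative sketch under assumed data structures (adjacency dict), not the authors' algorithm:

```python
from collections import deque

# Illustrative D-hop seed counting: for each vertex, count the seeds
# within d hops. This signature is what the "first slice" matching
# in the abstract compares across the two graphs.

def d_hop_seed_count(adj, seeds, d):
    """adj: dict vertex -> list of neighbors. Returns a dict
    vertex -> number of seeds within d hops (including the seed itself)."""
    counts = {v: 0 for v in adj}
    for s in seeds:
        # BFS out to depth d from each seed.
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            if dist[u] == d:
                continue            # do not expand beyond depth d
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    q.append(w)
        for v in dist:
            counts[v] += 1
    return counts
```

On a path graph 0-1-2-3 with a single seed at vertex 0 and d = 2, only vertices 0, 1, and 2 see the seed.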
APA, Harvard, Vancouver, ISO, and other styles
20

Shi, Ke, Lin Zhang, Zhiying Qi, Kang Tong, and Hongsheng Chen. "Transmission Scheduling of Periodic Real-Time Traffic in IEEE 802.15.4e TSCH-Based Industrial Mesh Networks." Wireless Communications and Mobile Computing 2019 (September 22, 2019): 1–12. http://dx.doi.org/10.1155/2019/4639789.

Full text
Abstract:
Time-slotted channel hopping (TSCH) is part of the emerging IEEE 802.15.4e standard to enable deterministic low-power mesh networking, which offers high reliability and low latency for wireless industrial applications. Nonetheless, the standard only provides a framework; it does not mandate a specific scheduling mechanism for time and frequency slot allocation. This paper focuses on a centralized scheme to schedule multiple concurrent periodic real-time flows in TSCH networks with mesh topology. In our scheme, each flow is assigned a dynamic priority based on its deadline and the hops remaining to reach the destination. A maximum matching algorithm is utilized to find conflict-free links, which provides more chances to transfer high-priority flows at each time slot. Frequency allocation is implemented by graph coloring to make the finally selected links interference free. Simulation results show that our algorithm clearly outperforms the existing algorithms on the deadline satisfaction ratio with a similar radio duty cycle.
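The priority idea described above can be sketched per time slot: rank flows by a laxity derived from their deadline and remaining hops, then pick node-disjoint links in that order. This is a hedged illustration; the laxity formula, the field names, and the greedy selection (a simplification of the maximum matching the paper uses) are assumptions:

```python
# Toy one-slot scheduler: lower laxity = more urgent flow.
# Two links conflict when they share a node (half-duplex radios).

def schedule_slot(flows, now):
    """flows: list of dicts with 'link' (u, v), 'deadline', 'hops_left'.
    Returns a conflict-free list of links, urgent flows first."""
    # Laxity: slack before the deadline given the hops still needed.
    ranked = sorted(flows, key=lambda f: (f['deadline'] - now) - f['hops_left'])
    busy, chosen = set(), []
    for f in ranked:
        u, v = f['link']
        if u not in busy and v not in busy:
            chosen.append(f['link'])
            busy.update((u, v))
    return chosen
```

With three flows sharing node B, the two most urgent node-disjoint links win the slot and the conflicting one waits.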
APA, Harvard, Vancouver, ISO, and other styles
21

Sikandar, Saleha, Naveed Khan Baloch, Fawad Hussain, Waqar Amin, Yousaf Bin Zikria, and Heejung Yu. "An Optimized Nature-Inspired Metaheuristic Algorithm for Application Mapping in 2D-NoC." Sensors 21, no. 15 (July 28, 2021): 5102. http://dx.doi.org/10.3390/s21155102.

Full text
Abstract:
Mapping application task graphs onto intellectual property (IP) cores in a network-on-chip (NoC) is a non-deterministic polynomial-time hard problem. Network performance mainly depends on an effective and efficient mapping technique and on the optimization of performance and cost metrics, chiefly power, reliability, area, thermal distribution, and delay. A state-of-the-art mapping technique for NoC, the sailfish optimization algorithm (SFOA), is introduced. The proposed algorithm minimizes the power dissipation of the NoC empirically by applying a shared k-nearest-neighbor clustering approach, and it gives quicker mapping over the six standard benchmarks considered. The experimental results indicate that the proposed technique outperforms other existing nature-inspired metaheuristic approaches, especially on large application task graphs.
APA, Harvard, Vancouver, ISO, and other styles
22

GHAVAMI, BEHNAM, HOSSEIN PEDRAM, and AREZOO SALARPOUR. "LEAKAGE POWER REDUCTION OF ASYNCHRONOUS PIPELINES." Journal of Circuits, Systems and Computers 20, no. 02 (April 2011): 207–22. http://dx.doi.org/10.1142/s0218126611007207.

Full text
Abstract:
With CMOS technology scaling, leakage power is expected to become a significant portion of the total power. A dual-threshold CMOS circuit, which has both high- and low-threshold transistors in a single chip, can be used to deal with the leakage problem in high-performance applications. This paper presents a dual-threshold voltage technique for reducing the leakage power dissipation of quasi-delay-insensitive asynchronous pipelines while still maintaining the high performance of these circuits. We exploited the dependency graph model to produce a formal performance analysis. To reduce leakage power, an efficient algorithm for selecting templates of a pipeline and assigning them a high threshold voltage is proposed. The results obtained indicate that our proposed technique can achieve on average 40% savings in leakage power with no performance penalty.
APA, Harvard, Vancouver, ISO, and other styles
23

Saponara, Sergio, and Luca Fanucci. "Homogeneous and Heterogeneous MPSoC Architectures with Network-On-Chip Connectivity for Low-Power and Real-Time Multimedia Signal Processing." VLSI Design 2012 (August 14, 2012): 1–17. http://dx.doi.org/10.1155/2012/450302.

Full text
Abstract:
Two multiprocessor system-on-chip (MPSoC) architectures are proposed and compared in the paper with reference to audio and video processing applications. One architecture exploits a homogeneous topology; it consists of 8 identical tiles, each made of a 32-bit RISC core enhanced by a 64-bit DSP coprocessor with local memory. The other MPSoC architecture exploits a heterogeneous-tile topology with on-chip distributed memory resources; the tiles act as application specific processors supporting a different class of algorithms. In both architectures, the multiple tiles are interconnected by a network-on-chip (NoC) infrastructure, through network interfaces and routers, which allows parallel operations of the multiple tiles. The functional performances and the implementation complexity of the NoC-based MPSoC architectures are assessed by synthesis results in submicron CMOS technology. Among the large set of supported algorithms, two case studies are considered: the real-time implementation of an H.264/MPEG AVC video codec and of a low-distortion digital audio amplifier. The heterogeneous architecture ensures a higher power efficiency and a smaller area occupation and is more suited for low-power multimedia processing, such as in mobile devices. The homogeneous scheme allows for a higher flexibility and easier system scalability and is more suited for general-purpose DSP tasks in power-supplied devices.
APA, Harvard, Vancouver, ISO, and other styles
24

Raghunathan, Shriram, Sumeet K. Gupta, Himanshu S. Markandeya, Pedro P. Irazoqui, and Kaushik Roy. "Ultra Low-Power Algorithm Design for Implantable Devices: Application to Epilepsy Prostheses." Journal of Low Power Electronics and Applications 1, no. 1 (May 12, 2011): 175–203. http://dx.doi.org/10.3390/jlpea1010175.

Full text
APA, Harvard, Vancouver, ISO, and other styles
25

S, Skandha Deepsita, Dhayala Kumar M, and Noor Mahammad SK. "Energy Efficient Error Resilient Multiplier Using Low-power Compressors." ACM Transactions on Design Automation of Electronic Systems 27, no. 3 (May 31, 2022): 1–26. http://dx.doi.org/10.1145/3488837.

Full text
Abstract:
Approximate hardware design can save substantial energy at the cost of errors incurred in the design. This article proposes an approximate algorithm for low-power compressors, utilized to build approximate multipliers with low energy and acceptable error profiles, and presents two design approaches (DA1 and DA2) for higher bit-size approximate multipliers. The proposed DA1 multiplier has no carry propagation from LSB to MSB, resulting in a very high-speed design, and its delay, power, and energy do not grow exponentially with the multiplier size (n). It can be observed that the maximum number of combinations lies within a threshold Error Distance of 5% of the maximum value possible for any particular multiplier of size n. The proposed 4-bit DA1 multiplier consumes only 1.3 fJ of energy, which is 87.9%, 78%, 94%, 67.5%, and 58.9% less when compared to the M1, M2, LxA, MxA, and accurate designs, respectively. The DA2 approach is a recursive method, i.e., an n-bit multiplier built with n/2-bit sub-multipliers. The proposed 8-bit multiplication has 92% energy savings with a Mean Relative Error Distance (MRED) of 0.3 for the DA1 approach and at least 11% to 40% energy savings with an MRED of 0.08 for the DA2 approach. The proposed multipliers are employed in the DCT image processing algorithm, and the quality is evaluated: the standard PSNR metric is 55 dB for light approximation and 35 dB for maximum approximation.
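The error metric quoted above, Mean Relative Error Distance (MRED), is easy to compute exhaustively for small operand widths. The truncation-based approximate multiplier below is a stand-in for illustration only, not the authors' compressor-based design:

```python
# MRED of a toy approximate multiplier, averaged over all nonzero
# operand pairs of the given bit width.

def approx_mul(a, b, drop_bits=2):
    """Toy approximation: zero the low-order bits of one operand
    before multiplying (drop_bits=0 recovers the exact product)."""
    return ((a >> drop_bits) << drop_bits) * b

def mred(bits=4, drop_bits=2):
    total, count = 0.0, 0
    for a in range(1, 2 ** bits):
        for b in range(1, 2 ** bits):
            exact = a * b
            total += abs(approx_mul(a, b, drop_bits) - exact) / exact
            count += 1
    return total / count
```

With no dropped bits the MRED is exactly zero; dropping bits trades accuracy for (in hardware) energy, which is the trade-off the abstract quantifies.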
APA, Harvard, Vancouver, ISO, and other styles
26

Geevarghese, Abraham Chavacheril, and Madheswaran Muthusamy. "FPGA implementation of IFFT architecture with enhanced pruning algorithm for low power application." Microprocessors and Microsystems 71 (November 2019): 102840. http://dx.doi.org/10.1016/j.micpro.2019.06.010.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Chang, K. C., and T. F. Chen. "Low-power algorithm for automatic topology generation for application-specific networks on chips." IET Computers & Digital Techniques 2, no. 3 (2008): 239. http://dx.doi.org/10.1049/iet-cdt:20070049.

Full text
APA, Harvard, Vancouver, ISO, and other styles
28

Garbaya, Amel, Mouna Kotti, Mourad Fakhfakh, and Esteban Tlelo-Cuautle. "Surrogate Assisted Optimization for Low-Voltage Low-Power Circuit Design." Journal of Low Power Electronics and Applications 10, no. 2 (June 16, 2020): 20. http://dx.doi.org/10.3390/jlpea10020020.

Full text
Abstract:
Low-voltage low-power (LVLP) circuit design and optimization is a hard and time-consuming task. In this study, we are interested in the application of a newly proposed meta-modelling technique to alleviate such burdens. Kriging-based surrogate models of the circuits' performances were constructed and then used within a metaheuristic-based optimization kernel in order to optimize the circuits' sizing. The JAYA algorithm was used for this purpose. Three topologies of CMOS current conveyors (CCII) were considered to showcase the proposed approach. The achieved performances were compared to those obtained using conventional LVLP circuit sizing techniques, and we show that our approach offers interesting results.
APA, Harvard, Vancouver, ISO, and other styles
29

Łukaszewski, Artur, Łukasz Nogal, and Marcin Januszewski. "The Application of the Modified Prim’s Algorithm to Restore the Power System Using Renewable Energy Sources." Symmetry 14, no. 5 (May 16, 2022): 1012. http://dx.doi.org/10.3390/sym14051012.

Full text
Abstract:
The recent trends in the development of power systems are focused on Self-Healing Grid technology fusing renewable energy sources. In the event of a failure of the power system, automated distribution grids should continue to supply energy to consumers. Unfortunately, there is currently a limited number of algorithms for rebuilding a power system with renewable energy sources. It is possible to solve this problem by implementing restoration algorithms based on graph theory. This article presents a new modification of Prim's algorithm, adapted to operate on a power grid containing several power sources, including renewable energy sources. This solution is unique because Prim's algorithm is ultimately dedicated to single-source graph topologies, while the proposed solution is adapted to multi-source topologies. In the algorithm, the power system is modeled by adjacency matrices; the adjacency matrices for the considered undirected graphs are symmetric. The novel logic is based on an original method of determining weights depending on active power, reactive power, and active power losses. The developed solution was verified by performing a simulation on a test model of a distribution grid powered by a renewable energy source. The control logic concept was compared with reference algorithms chosen from the graph-theoretic approaches available in the scientific literature. The conducted research confirmed the effectiveness and validity of the novel restoration strategy. The presented algorithm may be applied as restoration logic dedicated to power distribution systems.
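For reference, the classical single-source Prim's algorithm that the article modifies can be run directly on a symmetric weight matrix. This baseline sketch uses plain numeric weights; the paper derives its weights from active power, reactive power, and losses, and extends the logic to multiple sources:

```python
import math

# Baseline Prim's algorithm on a symmetric n x n weight matrix,
# math.inf marking node pairs with no line between them.

def prim(w, source=0):
    """Returns the edge set (u, v) of a minimum spanning tree,
    with u the vertex already in the tree when v was added."""
    n = len(w)
    in_tree = {source}
    edges = set()
    while len(in_tree) < n:
        # Cheapest edge leaving the current tree.
        best = min(
            ((w[u][v], u, v) for u in in_tree for v in range(n)
             if v not in in_tree and w[u][v] < math.inf),
            default=None)
        if best is None:        # remaining vertices unreachable
            break
        _, u, v = best
        edges.add((u, v))
        in_tree.add(v)
    return edges
```

On a three-node triangle with line weights 1, 2, and 3, the tree keeps the two cheapest lines, which is exactly the restoration skeleton such algorithms produce.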
APA, Harvard, Vancouver, ISO, and other styles
30

Długosz, Zofia, Michał Rajewski, Rafał Długosz, and Tomasz Talaśka. "A Novel, Low Computational Complexity, Parallel Swarm Algorithm for Application in Low-Energy Devices." Sensors 21, no. 24 (December 17, 2021): 8449. http://dx.doi.org/10.3390/s21248449.

Full text
Abstract:
In this work, we propose a novel metaheuristic algorithm that evolved from the conventional particle swarm optimization (PSO) algorithm for application in miniaturized devices and systems that require low energy consumption. The modifications allowed us to substantially reduce the computational complexity of the PSO algorithm, translating into reduced energy consumption in hardware implementation. This is a paramount feature in the devices used, for example, in wireless sensor networks (WSNs) or wireless body area networks (WBANs), in which particular devices have limited access to a power source. Various swarm algorithms are widely used in solving problems that require searching for an optimal solution among a number of co-occurring sub-optimal solutions, which makes hardware implementation worthy of consideration. However, hardware implementation of the conventional PSO algorithm is a challenging task; one issue is an efficient implementation of the randomization function. In this work, we propose novel methods to work around this problem: we replaced the block responsible for generating random values with deterministic methods that differentiate the trajectories of particular particles in the swarm. Comprehensive investigations in the software model of the modified algorithm have shown that its performance is comparable with, or even surpasses, the conventional PSO algorithm in a multitude of scenarios. The proposed algorithm was tested with numerous fitness functions to verify its flexibility and adaptability to different problems. The paper also presents the hardware implementation of the selected blocks that modify the algorithm. In particular, we focused on reducing the hardware complexity and achieving high-speed operation while reducing energy consumption.
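The idea of replacing random coefficients with deterministic, per-particle constants can be shown in a minimal 1-D PSO. This sketch is an assumption-laden illustration in the spirit of, but not identical to, the paper's modification: the fixed factors r1, r2 and the parameter values are mine, not the authors':

```python
# Minimal deterministic PSO: each particle gets fixed coefficient
# factors instead of U(0,1) draws, so trajectories still differ
# without any random number generator in the loop.

def pso(f, starts, iters=200, w=0.6, c1=1.2, c2=1.2):
    pos = list(starts)
    vel = [0.0] * len(pos)
    pbest = list(pos)
    gbest = min(pos, key=f)
    for _ in range(iters):
        for i in range(len(pos)):
            # Deterministic stand-ins for the usual random draws:
            # a distinct, fixed pair of factors per particle.
            r1 = (i + 1) / (len(pos) + 1)
            r2 = 1.0 - r1
            vel[i] = (w * vel[i]
                      + c1 * r1 * (pbest[i] - pos[i])
                      + c2 * r2 * (gbest - pos[i]))
            pos[i] += vel[i]
            if f(pos[i]) < f(pbest[i]):
                pbest[i] = pos[i]
        gbest = min(pbest + [gbest], key=f)
    return gbest
```

On a convex quadratic the swarm still homes in on the minimum, which is the point of the paper's claim that determinism need not hurt quality.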
APA, Harvard, Vancouver, ISO, and other styles
31

Yan, Tai Shan, Guan Qi Guo, Wu Li, and Wei He. "An Improved Neural Network Algorithm and its Application in Fault Diagnosis." Advanced Materials Research 765-767 (September 2013): 2355–58. http://dx.doi.org/10.4028/www.scientific.net/amr.765-767.2355.

Full text
Abstract:
Aiming at the BP neural network algorithm's limitations, such as easily falling into local minima and low convergence speed, an improved BP algorithm with twofold adaptive adjustment of training parameters (the TA-BP algorithm) is proposed. Besides the adaptive adjustment of the training rate and momentum factor, this algorithm can obtain an appropriate permitted convergence error by adaptive adjustment during training. The TA-BP algorithm was applied to fault diagnosis of power transformers, and a fault diagnosis model for power transformers was founded on a neural network. The illustrative results show that this algorithm is better than the traditional BP algorithm in both convergence speed and precision, enabling fast and accurate diagnosis of power transformer faults.
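The adaptive training-rate idea behind such schemes can be shown on a one-parameter problem: grow the step size while the error keeps falling, back off (and discard the step) when it rises. The 1.05/0.7 factors and the toy objective are illustrative assumptions, not values from the paper:

```python
# Adaptive-rate gradient descent: a scalar stand-in for the
# training-rate adaptation used in improved BP training.

def adaptive_gd(grad_f, f, x0, lr=0.1, iters=100):
    x, prev_err = x0, f(x0)
    for _ in range(iters):
        step = x - lr * grad_f(x)
        err = f(step)
        if err < prev_err:
            x, prev_err = step, err
            lr *= 1.05          # error fell: speed up
        else:
            lr *= 0.7           # error rose: back off, keep old x
    return x
```

The rule is self-regulating: the rate creeps up until a step overshoots, then shrinks, so no hand-tuned fixed rate is needed.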
APA, Harvard, Vancouver, ISO, and other styles
32

Cao, Huazhen, Tao Yu, Xiaoshun Zhang, Bo Yang, and Yaxiong Wu. "Reactive Power Optimization of Large-Scale Power Systems: A Transfer Bees Optimizer Application." Processes 7, no. 6 (May 31, 2019): 321. http://dx.doi.org/10.3390/pr7060321.

Full text
Abstract:
A novel transfer bees optimizer for reactive power optimization in a large-scale power system was developed in this paper. Q-learning was adopted to construct the learning mode of the bees, improving their intelligence through task division and cooperation. Behavior transfer was introduced, and prior knowledge of the source task was used to process the new task according to its similarity to the source task, so as to accelerate the convergence of the transfer bees optimizer. Moreover, the solution space was decomposed into multiple low-dimensional solution spaces via associated state-action chains. The performance of the transfer bees optimizer for reactive power optimization was assessed; simulation results showed that the convergence of the proposed algorithm was more stable and faster, and that the algorithm was about 4 to 68 times faster than traditional artificial intelligence algorithms.
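The learning rule the bees adopt is the standard tabular Q-learning update. The sketch below applies it in repeated sweeps over a stored transition table (a batch flavor chosen so the toy is deterministic); the two-state chain is an illustrative assumption, not the reactive power task itself:

```python
# Tabular Q-learning by exhaustive sweeps over known transitions:
# Q(s,a) += alpha * (r + gamma * max_b Q(s',b) - Q(s,a)).

def q_learning(transitions, sweeps=200, alpha=0.5, gamma=0.9):
    """transitions: dict (state, action) -> (next_state, reward).
    States with no outgoing actions are terminal."""
    q = {sa: 0.0 for sa in transitions}
    for _ in range(sweeps):
        for (s, a), (nxt, r) in transitions.items():
            future = max((q[(nxt, b)] for (st, b) in q if st == nxt),
                         default=0.0)
            q[(s, a)] += alpha * (r + gamma * future - q[(s, a)])
    return q
```

On a chain where action 'b' earns reward 1 and action 'a' earns nothing, the learned Q-values rank 'b' above 'a' from the start state.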
APA, Harvard, Vancouver, ISO, and other styles
33

Ryu, Junghun, Eric Noel, and K. Wendy Tang. "Distributed and Fault-Tolerant Routing for Borel Cayley Graphs." International Journal of Distributed Sensor Networks 8, no. 10 (October 1, 2012): 124245. http://dx.doi.org/10.1155/2012/124245.

Full text
Abstract:
We explore the use of a pseudorandom graph family, the Borel Cayley graph (BCG) family, as the network topology with thousands of nodes operating in a packet switching environment. BCGs are known to be an efficient topology in interconnection networks because of their small diameters, short average path lengths, and low-degree connections. However, the application of BCGs is hindered by a lack of size flexibility and fault-tolerant routing. We propose a fault-tolerant routing algorithm for BCGs. Our algorithm exploits the vertex-transitivity property of Borel Cayley graphs and relies on extra information to reflect topology changes. Our results show that the proposed method supports good reachability and a small end-to-end delay under various link failure scenarios.
APA, Harvard, Vancouver, ISO, and other styles
34

Guo, Yixuan. "Financial Market Sentiment Prediction Technology and Application Based on Deep Learning Model." Computational Intelligence and Neuroscience 2022 (March 4, 2022): 1–10. http://dx.doi.org/10.1155/2022/1988396.

Full text
Abstract:
In the real world, a variety of situations require strategy control; reinforcement learning, a method for studying the decision-making and behavioral strategies of intelligent agents, has accumulated substantial research and empirical evidence on its functions and roles and is widely recognized by scholars. Combining reinforcement learning with sentiment analysis is an important theoretical research direction, but so far there is still relatively little research work on it, and existing work suffers from poor application effectiveness and low accuracy. Therefore, in this study, we use features related to sentiment analysis and deep reinforcement learning and apply various algorithms for optimization to deal with the above problems. Using the characteristics of the stock trading market, we design a sentiment analysis method incorporating knowledge graphs; a deep reinforcement learning investment trading strategy algorithm combining this sentiment analysis with knowledge graphs is used in the subsequent experiments. The deep reinforcement learning system combining sentiment analysis and knowledge graphs implemented in this study is not only analyzed theoretically but also evaluated on simulated data from the stock exchange market for experimental comparison and analysis. The experimental results illustrate that the deep reinforcement learning algorithm combining sentiment analysis and knowledge graphs can achieve better gains than existing traditional reinforcement learning algorithms and has better practical application value.
APA, Harvard, Vancouver, ISO, and other styles
35

Zhu, Yufei, Zuocheng Xing, Zerun Li, Yang Zhang, and Yifan Hu. "High Area-Efficient Parallel Encoder with Compatible Architecture for 5G LDPC Codes." Symmetry 13, no. 4 (April 16, 2021): 700. http://dx.doi.org/10.3390/sym13040700.

Full text
Abstract:
This paper presents a novel parallel quasi-cyclic low-density parity-check (QC-LDPC) encoding algorithm with low complexity, which is compatible with 5th generation (5G) new radio (NR). Based on this algorithm, we propose a highly area-efficient parallel encoder with a compatible architecture. The proposed encoder has the advantages of parallel encoding and pipelined operations. Furthermore, it is designed as a configurable encoding structure that is fully compatible with the different base graphs of 5G LDPC, so the encoder architecture adapts flexibly to various 5G LDPC codes. The proposed encoder was synthesized in a 65 nm CMOS technology. Following the encoder architecture, we implemented nine encoders for the distributed lifting sizes of the two base graphs. The experimental results show that the encoder has high performance and significant area efficiency, better than related prior art. This work comprises a complete encoding algorithm and the compatible encoders, fully compatible with the different base graphs of 5G LDPC codes, and therefore adapts flexibly to various 5G application scenarios.
APA, Harvard, Vancouver, ISO, and other styles
36

Alistarh, Dan, Giorgi Nadiradze, and Amirmojtaba Sabour. "Dynamic Averaging Load Balancing on Cycles." Algorithmica 84, no. 4 (December 24, 2021): 1007–29. http://dx.doi.org/10.1007/s00453-021-00905-9.

Full text
Abstract:
We consider the following dynamic load-balancing process: given an underlying graph G with n nodes, in each step t ≥ 0, a random edge is chosen, one unit of load is created, and placed at one of the endpoints. In the same step, assuming that loads are arbitrarily divisible, the two nodes balance their loads by averaging them. We are interested in the expected gap between the minimum and maximum loads at nodes as the process progresses, and its dependence on n and on the graph structure. Peres et al. (Random Struct Algorithms 47(4):760–775, 2015) studied the variant of this process where the unit of load is placed at the least loaded endpoint of the chosen edge, and the averaging is not performed. In the case of dynamic load balancing on the cycle of length n, the only known upper bound on the expected gap is of order O(n log n), following from the majorization argument due to the same work. In this paper, we leverage the power of averaging and provide an improved upper bound of O(√n log n). We introduce a new potential analysis technique, which enables us to bound the difference in load between k-hop neighbors on the cycle, for any k ≤ n/2. We complement this with a "gap covering" argument, which bounds the maximum value of the gap by bounding its value across all possible subsets of a certain structure, and recursively bounding the gaps within each subset. We also show that our analysis can be extended to the specific instance of Harary graphs. On the other hand, we prove that the expected second moment of the gap is lower bounded by Ω(n). Additionally, we provide experimental evidence that our upper bound on the gap is tight up to a logarithmic factor.
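The process described above is simple enough to simulate directly on the cycle: pick a random edge, drop one unit of load at one endpoint, then average the two endpoints. A small sketch (parameter choices are illustrative):

```python
import random

# Direct simulation of the averaging load-balancing process on the
# cycle of length n, returning the max-min load gap after `steps`.

def simulate_gap(n, steps, seed=0):
    rng = random.Random(seed)
    load = [0.0] * n
    for _ in range(steps):
        u = rng.randrange(n)
        v = (u + 1) % n              # the cycle edge (u, u+1)
        load[rng.choice((u, v))] += 1.0   # one unit at a random endpoint
        avg = (load[u] + load[v]) / 2.0   # then the two nodes average
        load[u] = load[v] = avg
    return max(load) - min(load)
```

Running this for growing n is how one would reproduce the paper's experimental observation that the gap grows far slower than n.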
APA, Harvard, Vancouver, ISO, and other styles
37

Manjula, S., R. Karthikeyan, S. Karthick, N. Logesh, and M. Logeshkumar. "Optimized Design of Low Power Complementary Metal Oxide Semiconductor Low Noise Amplifier for Zigbee Application." Journal of Computational and Theoretical Nanoscience 18, no. 4 (April 1, 2021): 1327–30. http://dx.doi.org/10.1166/jctn.2021.9387.

Full text
Abstract:
An optimized high-gain, low-power low noise amplifier (LNA) is presented using a 90 nm CMOS process at a frequency of 2.4 GHz for Zigbee applications. To achieve the desired design specifications, the LNA is optimized by particle swarm optimization (PSO). The PSO is successfully implemented to optimize the noise figure (NF) while satisfying all design specifications, such as gain, power dissipation, linearity, and stability. The PSO algorithm is developed in MATLAB to optimize the LNA parameters, and the LNA with optimized parameters is simulated using the Advanced Design System (ADS) simulator. The LNA with optimized parameters produces 21.470 dB of voltage gain and a 1.031 dB noise figure at 1.02 mW power consumption with a 1.2 V supply voltage. The comparison of the designed LNA with and without PSO proves that the optimization improves the LNA results while satisfying all design constraints.
APA, Harvard, Vancouver, ISO, and other styles
38

Chentouf, Mohamed, and Zine El Abidine Alaoui Ismaili. "A Novel Net Weighting Algorithm for Power and Timing-Driven Placement." VLSI Design 2018 (October 18, 2018): 1–9. http://dx.doi.org/10.1155/2018/3905967.

Full text
Abstract:
Nowadays, many new low-power ASIC applications have emerged. This market trend has made the designer's task of meeting the timing and routability requirements within the power budget more challenging. One of the major sources of power consumption in modern integrated circuits (ICs) is the interconnect. In this paper, we present a novel Power and Timing-Driven global Placement (PTDP) algorithm. Its principle is to wrap a commercial timing-driven placer with a net weighting mechanism that calculates net weights based on their timing and power consumption. The newly calculated weight is used to drive the placement engine to place the cells connected by power- or timing-critical nets close to each other, thereby reducing the parasitic capacitances of the interconnects and, by consequence, improving the timing and power consumption of the design. This approach not only improves the design's power consumption but also facilitates routability, with only a minor impact on the timing closure of a few designs. The experiments were carried out on 40 industrial designs of different nodes, sizes, and complexities and demonstrate that the proposed algorithm achieves significant improvements in Quality of Results (QoR) compared with a commercial timing-driven placement flow. We effectively reduce the interconnect power by an average of 11.5%, which leads to a total power improvement of 5.4%, timing improvements of 9.4% in Worst Negative Slack (WNS) and 13.7% in Total Negative Slack (TNS), and a 3.2% total wirelength reduction.
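A net weighting mechanism of this kind can be sketched as a function that blends a timing-criticality term with a power-criticality term into a single placement weight. The blend, the normalization, and the [1, 2] range below are illustrative assumptions, not the paper's actual formula:

```python
# Hypothetical combined timing/power net weight: nets that are
# timing-critical (slack near the worst negative slack) or
# power-hungry get a larger weight, pulling their cells together.

def net_weight(slack, worst_slack, switch_power, max_power, alpha=0.5):
    """Returns a weight in [1, 2]. slack < 0 means a timing violation;
    worst_slack is the most negative slack in the design."""
    timing_crit = max(0.0, slack / worst_slack) if worst_slack < 0 else 0.0
    power_crit = switch_power / max_power if max_power > 0 else 0.0
    return 1.0 + alpha * timing_crit + (1 - alpha) * power_crit
```

A net at the worst slack with the highest switching power gets the maximum weight, while a safe, quiet net keeps the neutral weight 1.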
APA, Harvard, Vancouver, ISO, and other styles
39

Mou, Yuanju, Zhizhong Lv, Liang Ge, Xiaoting Xiao, and Zhengyin Wang. "Noise Elimination of Low-Voltage Power Line Communication Channel using Time-Frequency Peak Filtering Algorithm." International Journal of Circuits, Systems and Signal Processing 15 (May 17, 2021): 439–52. http://dx.doi.org/10.46300/9106.2021.15.48.

Full text
Abstract:
There is a great deal of noise in the low-voltage power line communication (LVPLC) channel, which seriously degrades the LVPLC system. The noise in the low-voltage power line can be divided into general background noise and random pulse noise. These two noise types cause serious interference in LVPLC-based communication and reduce the signal-to-noise ratio of the LVPLC system, so that communication quality cannot meet requirements. To ensure communication quality, this paper uses the time-frequency peak filtering algorithm to eliminate the noise of the LVPLC channel in an experimental environment. Firstly, this paper studies the noise characteristics based on measured LVPLC channel noise. Secondly, a memory noise model is established, and the time-frequency peak filtering algorithm is used to eliminate the noise. To analyze its denoising effect, the time-frequency peak filtering algorithm is simulated. Finally, the application effect of the algorithm is verified by experimental tests. The simulation and application results show that the time-frequency peak filtering algorithm can improve the signal-to-noise ratio by about 5 dB in the actual noise environment of the LVPLC channel, can adapt to the changeable noise environment of the LVPLC channel, and has a good noise suppression effect and good application value. The application in a solar panel data transmission system shows that the time-frequency peak filtering algorithm can meet the communication performance requirements of the laboratory, reducing the bit error rate by about 2% under general background noise interference and by about 3% under pulse interference, and improving the transmission quality of the LVPLC system.
APA, Harvard, Vancouver, ISO, and other styles
40

McVay, D. A., and J. P. Spivey. "Optimizing Gas-Storage Reservoir Performance." SPE Reservoir Evaluation & Engineering 4, no. 03 (June 1, 2001): 173–78. http://dx.doi.org/10.2118/71867-pa.

Full text
Abstract:
Summary
As gas storage becomes increasingly important in managing the nation's gas supplies, there is a need to develop more gas-storage reservoirs and to manage them more efficiently. Using computer reservoir simulation to rigorously predict gas-storage reservoir performance, we present specific procedures for efficient optimization of gas-storage reservoir performance for two different problems. The first is maximizing working gas volume and peak rates for a particular configuration of reservoir, well, and surface facilities. We present a new, simple procedure to determine the maximum performance with a minimal number of simulation runs. The second problem is minimizing the cost to satisfy a specific production and injection schedule, which is derived from the working gas volume and peak rate requirements. We demonstrate a systematic procedure to determine the optimum combination of cushion gas volume, compression horsepower, and number and locations of wells. The use of these procedures is illustrated through application to gas-reservoir data.
Introduction
With the unbundling of the natural gas industry as a result of Federal Energy Regulatory Commission (FERC) Order 636, the role of gas storage in managing the nation's gas supplies has increased in importance. In screening reservoirs to determine potential gas-storage reservoir candidates, it is often desirable to determine the maximum storage capacity for specific reservoirs. In designing the conversion of producing fields to storage or the upgrading of existing storage fields, it is beneficial to determine the optimum combination of wells, cushion gas, and compression facilities that minimizes investment. A survey of the petroleum literature found little discussion of simulation-based methodologies for achieving these two desired outcomes. Duane [1] presented a graphical technique for optimizing gas-storage field design.
This method allowed the engineer to minimize the total field-development cost for a desired peak-day rate and cyclic capacity (working gas capacity). To use the method, the engineer would prepare a series of field-design optimization graphs for different compressor intake pressures. Each graph consists of a series of curves corresponding to different peak-day rates. Each curve, in turn, shows the number of wells required to deliver the given peak-day rate as a function of the gas inventory level. Thus, the tradeoff between compression horsepower costs, well costs, and cushion gas costs could be examined to determine the optimum design in terms of minimizing the total field-development cost. Duane's method implicitly assumes that boundary-dominated flow will prevail throughout the reservoir. Henderson et al.2 presented a case history of storage-field-design optimization with a single-phase, 2D numerical model of the reservoir. They varied well placement and well schedules in their study to reduce the number of wells necessary to meet the desired demand schedule. They used a trial-and-error method and stated that the results were preliminary. They found that wells in the poorest portion of the field should be used to meet demand at the beginning of the withdrawal period. Additional wells were added over time to meet the demand schedule. The wells in the best part of the field were held in reserve to meet the peak-day requirements, which occurred at the end of the withdrawal season. Coats3 presented a method for locating new wells in a heterogeneous field. His objective was to determine the optimum drilling program to maintain a contractual deliverability during field development. He provided a discussion of whether wells should be spaced closer together in areas of high kh or in areas of low kh. He found that when φh is essentially uniformly distributed, the wells should be closer together in low kh areas.
On the other hand, if the variation in kh is largely caused by variations in h, or if porosity is highly correlated with permeability, wells should be closer together in areas of high kh. Coats' method assumes boundary-dominated flow throughout the reservoir. Wattenbarger4 used linear programming to solve the problem of determining the withdrawal schedule on a well-by-well basis that would maximize the total seasonal production, subject to constraints such as fixed demand schedule and minimum wellbore pressure. Van Horn and Wienecke5 solved the gas-storage-design optimization problem with a Fibonacci search algorithm. They expressed the investment requirement for a storage field in terms of four variables: cushion gas, number of wells, purification equipment, and compressor horsepower. They chose as the optimum design the combination of these four variables that minimized investment cost. The authors used an empirical backpressure equation, combined with a simplified gas material-balance equation, as the reservoir model. In this paper we present systematic, simulation-based methodologies for optimizing gas-storage reservoir performance for two different problems. The first is maximizing working gas volume and peak rates for a particular configuration of reservoir, well, and surface facilities. The second problem is minimizing the cost to satisfy a specific production and injection schedule, which is derived from the working gas volume and peak rate requirements. Constructing the Reservoir Model To optimize gas-storage reservoir performance, a model of the reservoir is required. We prefer to use the simplest model that is able to predict storage-reservoir performance as a function of the number and locations of wells, compression horsepower, and cushion gas volume.
Although models combining material balance with analytical or empirical deliverability equations may be used in certain situations, a reservoir-simulation model is usually best, owing to its flexibility and its ability to handle well interference and complex reservoirs accurately. It is important to calibrate the model against historical production and pressure data; we must show that the model reproduces past reservoir performance accurately before we can use it to predict future performance with reliability. However, even calibrating the model by history matching past performance may not be adequate. It is our experience that information obtained during primary depletion of a reservoir is often not adequate to predict its performance under storage operations. Primary production over many years may mask layered or dual-porosity behavior that significantly affects the ability of the reservoir to deliver large volumes of gas within a 4- or 5-month period. Wells and Evans6 presented a case history of the Loop gas storage field, which exhibited this behavior. It may be necessary to implement a program of coring, logging, pressure-transient testing, and/or simulated storage production/injection testing to characterize the reservoir accurately.
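The second optimization problem the authors pose, finding the cheapest combination of wells, compression horsepower and cushion gas that still meets a deliverability target, can be sketched as a brute-force search over the design space. The cost coefficients and the deliverability model below are hypothetical placeholders, not the paper's simulation-based procedure:

```python
from itertools import product

# Hypothetical cost coefficients and deliverability model (illustrative
# placeholders, not the paper's simulation-based figures)
WELL_COST = 1.5e6        # $ per well
HP_COST = 800.0          # $ per horsepower of compression
CUSHION_COST = 2.0       # $ per Mcf of cushion gas

def peak_deliverability(n_wells, horsepower, cushion_mcf):
    """Toy model: wells, compression and cushion gas (higher reservoir
    pressure) all raise the deliverable peak-day rate, Mcf/D."""
    return n_wells * 900.0 * (1.0 + horsepower / 5e4) * (cushion_mcf / 1e6) ** 0.5

def optimize_design(required_peak_mcfd):
    """Exhaustive search: keep the cheapest design that still meets
    the peak-day requirement."""
    best = None
    for n, hp, cushion in product(range(5, 41, 5),
                                  range(10_000, 60_001, 10_000),
                                  range(1_000_000, 5_000_001, 1_000_000)):
        if peak_deliverability(n, hp, cushion) < required_peak_mcfd:
            continue
        cost = n * WELL_COST + hp * HP_COST + cushion * CUSHION_COST
        if best is None or cost < best[0]:
            best = (cost, n, hp, cushion)
    return best

cost, n, hp, cushion = optimize_design(50_000.0)
print(n, hp, cushion, round(cost))
```

In practice each candidate design would be scored by a reservoir-simulation run rather than a closed-form formula, which is why the paper emphasises procedures that minimise the number of runs.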
APA, Harvard, Vancouver, ISO, and other styles
41

Ismail, Mohamed, Imran Ahmed, and Justin Coon. "Low Power Decoding of LDPC Codes." ISRN Sensor Networks 2013 (January 17, 2013): 1–12. http://dx.doi.org/10.1155/2013/650740.

Full text
Abstract:
Wireless sensor networks are used in many diverse application scenarios that require the network designer to trade off different factors. Two such factors of importance in many wireless sensor networks are communication reliability and battery life. This paper describes an efficient, low complexity, high throughput channel decoder suited to decoding low-density parity-check (LDPC) codes. LDPC codes have demonstrated excellent error-correcting ability such that a number of recent wireless standards have opted for their inclusion. Hardware realisation of practical LDPC decoders is a challenging area especially when power efficient solutions are needed. Implementation details are given for an LDPC decoding algorithm, termed adaptive threshold bit flipping (ATBF), designed for low complexity and low power operation. The ATBF decoder was implemented in 90 nm CMOS at 0.9 V using a standard cell design flow and was shown to operate at 250 MHz achieving a throughput of 252 Gb/s/iteration. The decoder area was 0.72 mm² with a power consumption of 33.14 mW and a very small energy/decoded bit figure of 1.3 pJ.
APA, Harvard, Vancouver, ISO, and other styles
42

Lakshminarayana, Subhash, Saurav Sthapit, and Carsten Maple. "Application of Physics-Informed Machine Learning Techniques for Power Grid Parameter Estimation." Sustainability 14, no. 4 (February 11, 2022): 2051. http://dx.doi.org/10.3390/su14042051.

Full text
Abstract:
Power grid parameter estimation involves the estimation of unknown parameters, such as the inertia and damping coefficients, from the observed dynamics. In this work, we present physics-informed machine learning algorithms for the power system parameter estimation problem. First, we propose a novel algorithm to solve the parameter estimation based on the Sparse Identification of Nonlinear Dynamics (SINDy) approach, which uses sparse regression to infer the parameters that best describe the observed data. We then compare its performance against another benchmark algorithm, namely, the physics-informed neural networks (PINN) approach applied to parameter estimation. We perform extensive simulations on IEEE bus systems to examine the performance of the aforementioned algorithms. Our results show that the SINDy algorithm outperforms the PINN algorithm in estimating the power grid parameters over a wide range of system parameters (including high and low inertia systems) and power grid architectures. In particular, in the case of slow-dynamics systems, the proposed SINDy algorithm outperforms the PINN algorithm, which struggles to determine the parameters accurately. Moreover, it is extremely efficient computationally and so takes significantly less time than the PINN algorithm, thus making it suitable for real-time parameter estimation. Furthermore, we present an extension of the SINDy algorithm to a scenario where the operator does not have exact knowledge of the underlying system model. We also present a decentralised implementation of the SINDy algorithm which only requires limited information exchange between the neighbouring nodes of a power grid.
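The core of the SINDy approach, sequentially thresholded least squares over a library of candidate terms, can be sketched on a toy swing-equation-style model p = M·dω/dt + D·ω. The synthetic data, candidate library and threshold below are illustrative assumptions, not the paper's IEEE bus system setup:

```python
import random

def solve(A, b):
    """Gaussian elimination with partial pivoting for small dense systems."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def lstsq(Theta, y):
    """Least squares via the normal equations: (Theta^T Theta) x = Theta^T y."""
    n = len(Theta[0])
    A = [[sum(row[i] * row[j] for row in Theta) for j in range(n)] for i in range(n)]
    b = [sum(row[i] * yi for row, yi in zip(Theta, y)) for i in range(n)]
    return solve(A, b)

def sindy(Theta, y, threshold=0.1, iters=5):
    """Sequentially thresholded least squares, the core of SINDy:
    fit, drop small coefficients, refit on the surviving terms."""
    m = len(Theta[0])
    active = list(range(m))
    keep = []
    for _ in range(iters):
        x = lstsq([[row[j] for j in active] for row in Theta], y)
        keep = [(j, xj) for j, xj in zip(active, x) if abs(xj) >= threshold]
        active = [j for j, _ in keep]
    coeffs = [0.0] * m
    for j, xj in keep:
        coeffs[j] = xj
    return coeffs

# Synthetic swing-equation-style data: p = M*dw/dt + D*w (true M = 2.0, D = 0.5)
random.seed(0)
M_true, D_true = 2.0, 0.5
samples = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(50)]
Theta = [[a, w, w * w] for a, w in samples]      # candidate library: dw/dt, w, w^2
y = [M_true * a + D_true * w for a, w in samples]
print([round(c, 3) for c in sindy(Theta, y)])    # → [2.0, 0.5, 0.0]
```

The spurious quadratic term is pruned by the threshold, which is exactly the sparsity mechanism that lets SINDy recover physically meaningful inertia and damping coefficients.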
APA, Harvard, Vancouver, ISO, and other styles
43

Zhang, Tao, Rui Zhang, Yue Jun Liu, and Cheng Yang. "The Application of M-Ary Quantum Evolutionary Algorithm in Bit and Power Allocation for OFDM System over the Power Line." Advanced Materials Research 219-220 (March 2011): 1085–88. http://dx.doi.org/10.4028/www.scientific.net/amr.219-220.1085.

Full text
Abstract:
Adaptive resource allocation in orthogonal frequency division multiplexing (OFDM) systems is a very promising technique to improve spectral efficiency. Based on the actual transmission behavior of the channel environment over low-voltage power lines, an M-ary Quantum Evolutionary Algorithm (MQEA) is proposed to allocate bits and power across the sub-carrier channels, maximizing the total data rate of the system in a downlink transmission under total power and bit-error-rate (BER) constraints. Simulation results show that the proposed algorithm achieves a higher data rate.
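The MQEA itself is not reproduced here; as a classical baseline for the same bit- and power-allocation problem, a Hughes-Hartogs-style greedy loader can be sketched as below. The SNR-gap cost model and the channel gains are assumed for illustration:

```python
import heapq

GAP = 9.8     # assumed SNR-gap constant tying the BER target to required power

def incremental_power(bits, gain):
    """Extra power needed to carry one more bit on a subcarrier with the
    given channel gain, under a simple SNR-gap model."""
    return GAP * (2 ** (bits + 1) - 2 ** bits) / gain

def greedy_bit_loading(gains, power_budget, max_bits=8):
    """Hughes-Hartogs-style loading: always give the next bit to the
    subcarrier where it costs the least extra power."""
    bits = [0] * len(gains)
    used = 0.0
    heap = [(incremental_power(0, g), i) for i, g in enumerate(gains)]
    heapq.heapify(heap)
    while heap:
        cost, i = heapq.heappop(heap)
        if used + cost > power_budget:
            break                      # cheapest next step unaffordable: stop
        used += cost
        bits[i] += 1
        if bits[i] < max_bits:
            heapq.heappush(heap, (incremental_power(bits[i], gains[i]), i))
    return bits, used

gains = [4.0, 1.0, 0.25]               # best to worst subcarrier
bits, used = greedy_bit_loading(gains, power_budget=100.0)
print(bits, round(used, 2))            # → [4, 2, 0] 66.15
```

Note how the worst subcarrier receives no bits at all under this budget; evolutionary methods such as MQEA search the same allocation space without the greedy method's step-by-step structure.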
APA, Harvard, Vancouver, ISO, and other styles
44

Jiao, Shangbin, Chen Wang, Rui Gao, Yuxing Li, and Qing Zhang. "Harris Hawks Optimization with Multi-Strategy Search and Application." Symmetry 13, no. 12 (December 8, 2021): 2364. http://dx.doi.org/10.3390/sym13122364.

Full text
Abstract:
In the basic HHO algorithm, the probability of choosing between different search methods is symmetric: about 0.5 in the interval from 0 to 1. The optimal solution from the previous iteration affects the current solution, the linear search for prey leads to a single search result, and the optimal position is updated infrequently. These factors limit the Harris Hawks optimization algorithm: it falls into local optima easily, and its convergence efficiency is low. Inspired by the prey hunting behavior of Harris's hawk, a multi-strategy search Harris Hawks optimization algorithm is proposed, and the least squares support vector machine (LSSVM) optimized by the proposed algorithm was used to model the reactive power output of a synchronous condenser. Firstly, we select the Gauss chaotic mapping method, the best of seven commonly used chaotic population initialization methods, to improve accuracy. Secondly, an optimal neighborhood perturbation mechanism is introduced to avoid premature convergence of the algorithm. Simultaneously, an adaptive weight and a variable spiral search strategy are designed to simulate the prey hunting behavior of the Harris hawk, improving the convergence speed and enhancing the global search ability of the improved algorithm. Numerical experiments are conducted on the classical 23 test functions and the CEC2017 test function set. The results show that the proposed algorithm outperforms the Harris Hawks optimization algorithm and other intelligent optimization algorithms in terms of convergence speed, solution accuracy and robustness, and that the synchronous condenser reactive power output model established by the improved-algorithm-optimized LSSVM has good accuracy and generalization ability.
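Chaotic-map population initialization of the kind the authors select can be sketched as follows; the specific map form (the Gauss/mouse map x_{k+1} = frac(1/x_k)) and the seed are assumptions, since the paper's exact formulation is not given here:

```python
def gauss_map_sequence(x0, n):
    """Gauss/mouse chaotic map: x_{k+1} = frac(1 / x_k), values in [0, 1)."""
    seq, x = [], x0
    for _ in range(n):
        x = 0.0 if x == 0.0 else (1.0 / x) % 1.0
        seq.append(x)
    return seq

def chaotic_population(pop_size, dim, lower, upper, x0=0.7):
    """Map a chaotic sequence into the search bounds instead of drawing
    uniform random numbers for the initial population."""
    seq = gauss_map_sequence(x0, pop_size * dim)
    return [[lower + seq[i * dim + j] * (upper - lower) for j in range(dim)]
            for i in range(pop_size)]

pop = chaotic_population(pop_size=5, dim=3, lower=-10.0, upper=10.0)
print(len(pop), all(-10.0 <= v <= 10.0 for row in pop for v in row))   # → 5 True
```

The motivation is that chaotic sequences cover the search space more evenly than a short run of pseudo-random draws, improving the diversity of the initial population.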
APA, Harvard, Vancouver, ISO, and other styles
45

Li, Jun, Lin Zhang, Mei Juan Liu, Ge Xin Xing, Wei Wei Li, Yi Qiang Zhao, Xin Xin Gu, and Xiao Hua Zhao. "The Design and Application of SVG for Low Voltage Distribution Networks." Applied Mechanics and Materials 448-453 (October 2013): 2097–104. http://dx.doi.org/10.4028/www.scientific.net/amm.448-453.2097.

Full text
Abstract:
SVG can compensate reactive power deficiency, suppress harmonics, mitigate three-phase imbalance and improve power quality flexibly. In the past, very few small-volume SVG products were available for low-voltage distribution networks, and generic SVG products are very expensive, thus not suitable for low-voltage distribution networks. Therefore, it is an urgent task to design a new generation of distribution-network SVG products that offer good value for money. This paper presents an SVG digital controller based on the TMS320F28335 DSP chip, whose fast, powerful computing and parallel operation capability can satisfy the real-time, multifunction and multiple-objective coordinated control of SVG. Applying instantaneous reactive power theory and adopting a direct current-control mode, an enhanced filtering algorithm for the instantaneous sampled values is proposed. An automatic bi-directional compensation control strategy effectively reduces voltage variation at the user side. Its effectiveness is verified by an engineering project.
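The instantaneous reactive power (p-q) computation at the heart of such a controller can be sketched as below; the power-invariant Clarke transform and one common sign convention for q are assumed:

```python
import math

def clarke(a, b, c):
    """Power-invariant Clarke transform of three-phase quantities."""
    alpha = math.sqrt(2.0 / 3.0) * (a - 0.5 * b - 0.5 * c)
    beta = math.sqrt(2.0 / 3.0) * (math.sqrt(3.0) / 2.0) * (b - c)
    return alpha, beta

def instantaneous_pq(va, vb, vc, ia, ib, ic):
    """p-q theory: p = v_alpha*i_alpha + v_beta*i_beta and
    q = v_beta*i_alpha - v_alpha*i_beta (one common sign convention)."""
    v_alpha, v_beta = clarke(va, vb, vc)
    i_alpha, i_beta = clarke(ia, ib, ic)
    p = v_alpha * i_alpha + v_beta * i_beta
    q = v_beta * i_alpha - v_alpha * i_beta
    return p, q

# Balanced three-phase set, current lagging voltage by 30 degrees: p and q are
# constant in time, p = 3*V*I*cos(phi) and q = 3*V*I*sin(phi) for RMS V, I.
V, I, phi = 230.0 * math.sqrt(2), 10.0 * math.sqrt(2), math.radians(30)

def phase(amp, shift, t, w=2 * math.pi * 50):
    return amp * math.cos(w * t + shift)

t = 0.004
va, vb, vc = (phase(V, s, t) for s in (0.0, -2 * math.pi / 3, 2 * math.pi / 3))
ia, ib, ic = (phase(I, s - phi, t) for s in (0.0, -2 * math.pi / 3, 2 * math.pi / 3))
p, q = instantaneous_pq(va, vb, vc, ia, ib, ic)
print(round(p), round(q))    # → 5976 3450
```

In a real controller these p and q values would be filtered (the paper's enhanced filtering step) and fed to the current regulator that drives the compensating inverter.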
APA, Harvard, Vancouver, ISO, and other styles
46

Predus, Marius Florian. "Prediction of power cables failures using a software application." Studia Universitatis Babeș-Bolyai Engineering 65, no. 1 (November 20, 2020): 153–62. http://dx.doi.org/10.24193/subbeng.2020.1.17.

Full text
Abstract:
This paper analyses the electrical performance of power supply cables in operation by investigating previous faults and forecasting future faults using the EasyFit Professional 5.6 software program. The calculation of the maximum operating time until the first fault occurs is based on an algorithm for estimating the parameters entered into the application, namely the intervals of fault-free operation between two successive faults. The case study presented in the paper analyses the probability of failure of a medium-voltage power line under the administration of a distribution operator, based on information collected during maintenance work on medium- and low-voltage installations in the analysed area.
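As a minimal stand-in for the distribution fitting EasyFit performs, a maximum-likelihood exponential model of the times between faults yields a mean time between failures and a failure probability over a planning horizon. The interval data below are hypothetical:

```python
import math

def fit_exponential(tbf_days):
    """Maximum-likelihood rate of an exponential time-between-failures
    model: lambda = 1 / sample mean."""
    return len(tbf_days) / sum(tbf_days)

def prob_failure_within(rate, t_days):
    """Exponential CDF: probability of at least one failure before t."""
    return 1.0 - math.exp(-rate * t_days)

# Hypothetical intervals (days) of fault-free operation between faults
tbf = [120, 200, 90, 310, 150, 220]
lam = fit_exponential(tbf)
print(round(1 / lam, 1), round(prob_failure_within(lam, 365), 3))   # → 181.7 0.866
```

Tools like EasyFit extend this idea by fitting and ranking many candidate distributions (Weibull, lognormal, and so on) rather than assuming the exponential form.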
APA, Harvard, Vancouver, ISO, and other styles
47

Ghavami, Behnam. "Spatial correlation-aware statistical dual-threshold voltage design of template-based asynchronous circuits." COMPEL - The international journal for computation and mathematics in electrical and electronic engineering 37, no. 3 (May 8, 2018): 1189–203. http://dx.doi.org/10.1108/compel-03-2016-0118.

Full text
Abstract:
Purpose Power consumption is a top priority in high-performance asynchronous circuit design today. The purpose of this study is to provide a spatial correlation-aware statistical dual-threshold voltage design method for low-power design of template-based asynchronous circuits. Design/methodology/approach In this paper, the authors proposed a statistical dual-threshold voltage design of template-based asynchronous circuits considering process variations with spatial correlation. The utilized circuit model is an extended Timed Petri-Net which captures the dynamic behavior of the asynchronous circuit with statistical delay and power values. To have a more comprehensive framework, the authors model the spatial correlation information of the circuit. The authors applied a genetic optimization algorithm that uses a two-dimensional graph to calculate the power and performance of each threshold voltage assignment. Findings Experimental results show that using this statistically aware optimization, leakage power of asynchronous circuits can be reduced up to 3X. The authors also show that the spatial correlation may lead to large errors if not being considered in the design of dual-threshold-voltage asynchronous circuits. Originality/value The proposed framework is the scheme giving a low-power design of asynchronous circuits compared to other schemes. The comparison exhibits that the proposed method has better results in terms of performance and power. To consider the process variations with spatial correlation, the authors apply the principle component analysis method to transform the correlated variables into uncorrelated ones.
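The genetic optimization over threshold-voltage assignments can be sketched as a toy single-path problem; the per-gate delay/leakage numbers, the high-Vt scaling factors and the delay model below are illustrative assumptions, not the paper's statistical Timed Petri-Net model:

```python
import random

random.seed(42)

# Toy per-gate (delay, leakage) at low Vt; high Vt scales both
GATES = [(1.0, 10.0), (1.5, 14.0), (0.8, 8.0), (1.2, 12.0), (2.0, 20.0)]
HVT_DELAY, HVT_LEAK = 1.5, 0.2     # high Vt: slower but far less leaky
DELAY_BUDGET = 8.5                 # single-path timing constraint

def evaluate(assign):
    """Path delay and total leakage for a high-Vt assignment bitstring."""
    delay = sum(d * (HVT_DELAY if hvt else 1.0) for (d, _), hvt in zip(GATES, assign))
    leak = sum(l * (HVT_LEAK if hvt else 1.0) for (_, l), hvt in zip(GATES, assign))
    return delay, leak

def fitness(assign):
    delay, leak = evaluate(assign)
    return leak + (1e6 if delay > DELAY_BUDGET else 0.0)   # penalise timing misses

def genetic_vt_assignment(pop_size=20, generations=60, mut=0.1):
    n = len(GATES)
    pop = [[random.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        survivors = pop[: pop_size // 2]                   # elitist selection
        children = []
        while len(children) < pop_size - len(survivors):
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, n)                   # one-point crossover
            children.append([g ^ (random.random() < mut)   # bit-flip mutation
                             for g in a[:cut] + b[cut:]])
        pop = survivors + children
    return min(pop, key=fitness)

best = genetic_vt_assignment()
delay, leak = evaluate(best)
print(best, delay <= DELAY_BUDGET)
```

The paper's framework replaces these deterministic delay and leakage numbers with statistical, spatially correlated distributions, which is what makes correlation-aware evaluation of each chromosome necessary.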
APA, Harvard, Vancouver, ISO, and other styles
48

Cao, Hailin, Wang Zhu, Wenjuan Feng, and Jin Fan. "Robust Beamforming Based on Graph Attention Networks for IRS-Assisted Satellite IoT Communications." Entropy 24, no. 3 (February 24, 2022): 326. http://dx.doi.org/10.3390/e24030326.

Full text
Abstract:
Satellite communication is expected to play a vital role in realizing Internet of Remote Things (IoRT) applications. This article considers an intelligent reflecting surface (IRS)-assisted downlink low Earth orbit (LEO) satellite communication network, where IRS provides additional reflective links to enhance the intended signal power. We aim to maximize the sum-rate of all the terrestrial users by jointly optimizing the satellite’s precoding matrix and IRS’s phase shifts. However, it is difficult to directly acquire the instantaneous channel state information (CSI) and optimal phase shifts of IRS due to the high mobility of LEO and the passive nature of reflective elements. Moreover, most conventional solution algorithms suffer from high computational complexity and are not applicable to these dynamic scenarios. A robust beamforming design based on graph attention networks (RBF-GAT) is proposed to establish a direct mapping from the received pilots and dynamic network topology to the satellite and IRS’s beamforming, which is trained offline using the unsupervised learning approach. The simulation results corroborate that the proposed RBF-GAT approach can achieve more than 95% of the performance provided by the upper bound with low complexity.
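A single graph-attention aggregation step, the building block of networks like RBF-GAT, can be sketched in plain Python; the random weights and the tiny graph below are illustrative, not the trained beamforming model:

```python
import math, random

random.seed(0)

def gat_layer(features, adj, W, a):
    """One GAT-style step: project node features with W, score each edge
    with attention vector a, softmax over neighbours, then aggregate."""
    h = [[sum(x * w for x, w in zip(f, col)) for col in zip(*W)] for f in features]
    out = []
    for i in range(len(h)):
        nbrs = [j for j, e in enumerate(adj[i]) if e]
        scores = [sum(v * av for v, av in zip(h[i] + h[j], a)) for j in nbrs]
        scores = [max(s, 0.2 * s) for s in scores]            # LeakyReLU(0.2)
        m = max(scores)
        weights = [math.exp(s - m) for s in scores]           # stable softmax
        z = sum(weights)
        out.append([sum((w / z) * h[j][k] for w, j in zip(weights, nbrs))
                    for k in range(len(h[i]))])
    return out

features = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]   # 3 nodes, 2 features each
adj = [[1, 1, 0], [1, 1, 1], [0, 1, 1]]           # adjacency with self-loops
W = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(2)]
a = [random.uniform(-1, 1) for _ in range(4)]
out = gat_layer(features, adj, W, a)
print(len(out), len(out[0]))                      # → 3 2
```

Because the attention weights are computed per edge, the same trained layer applies to whatever network topology is observed, which is what lets a GAT-based design cope with the changing geometry of a LEO constellation.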
APA, Harvard, Vancouver, ISO, and other styles
49

Zhao, Yu Hong, Xue Cheng Zhao, and Wei Cheng. "The Application of Chaotic Particle Swarm Optimization Algorithm in Power System Load Forecasting." Advanced Materials Research 614-615 (December 2012): 866–69. http://dx.doi.org/10.4028/www.scientific.net/amr.614-615.866.

Full text
Abstract:
The support vector machine (SVM) has been successfully applied in the short-term load forecasting area, but its learning and generalization ability depends on a proper setting of its parameters. To improve forecasting accuracy and address drawbacks such as the blindness of manual parameter selection for SVM, this paper applies chaos theory to the particle swarm optimization (PSO) algorithm in order to cope with problems such as low search speed and premature convergence to local optima. Finally, the method is used to optimize the support vector machine in a short-term load forecasting model. Analysis of the daily forecasting results shows that the proposed method reduces the modeling and forecasting errors of the SVM model effectively and performs better than general methods.
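Replacing the uniform random draws of PSO with a chaotic sequence can be sketched as below; the logistic map, the test function and all coefficients are illustrative assumptions (the paper applies the idea to SVM parameter selection rather than a benchmark function):

```python
def sphere(x):
    """Simple convex test function standing in for the SVM error surface."""
    return sum(v * v for v in x)

def logistic_stream(x=0.7, r=4.0):
    """Logistic map in its chaotic regime: supplies the 'random' draws."""
    while True:
        x = r * x * (1.0 - x)
        yield x

def chaotic_pso(f, dim=2, n_particles=15, iters=200, w=0.7, c1=1.5, c2=1.5):
    chaos = logistic_stream()
    pos = [[next(chaos) * 10.0 - 5.0 for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=f)[:]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = next(chaos), next(chaos)
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if f(pos[i]) < f(pbest[i]):
                pbest[i] = pos[i][:]
                if f(pbest[i]) < f(gbest):
                    gbest = pbest[i][:]
    return gbest

best = chaotic_pso(sphere)
print(sphere(best) < 1e-2)
```

For SVM tuning, each particle position would encode the hyperparameters (e.g. the penalty and kernel width), and f would be a cross-validated forecasting error instead of the sphere function.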
APA, Harvard, Vancouver, ISO, and other styles
50

Rubio-Chavarría, Mario, Cristina Santamaría, Belén García-Mora, and Gregorio Rubio. "Modelling Biological Systems: A New Algorithm for the Inference of Boolean Networks." Mathematics 9, no. 4 (February 13, 2021): 373. http://dx.doi.org/10.3390/math9040373.

Full text
Abstract:
Biological systems are commonly constituted by a high number of interacting agents. This great dimensionality hinders biological modelling due to the high computational cost. Therefore, new modelling methods are needed to reduce computation time while preserving the properties of the depicted systems. At this point, Boolean Networks have been revealed as a modelling tool with high expressiveness and reduced computing times. The aim of this work has been to introduce an automatic and coherent procedure to model systems through Boolean Networks. A synergy that harnesses the strengths of both approaches is obtained by combining an existing approach to managing information from biological pathways with the so-called Nested Canalising Boolean Functions (NCBF). In order to show the power of the developed method, two examples of an application with systems studied in the bibliography are provided: The epithelial-mesenchymal transition and the lac operon. Due to the fact that this method relies on directed graphs as a primary representation of the systems, its applications exceed life sciences into areas such as traffic management or machine learning, in which these graphs are the main expression of the systems handled.
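A Boolean network in this sense is just a set of Boolean update rules over a directed graph; a synchronous simulation that finds an attractor can be sketched as follows. The toy rules loosely echo the lac operon example but are not the paper's model:

```python
def step(state, rules):
    """Synchronous update: every node applies its Boolean rule at once."""
    return {node: rule(state) for node, rule in rules.items()}

def find_attractor(state, rules, max_steps=100):
    """Iterate until a state repeats; the repeating cycle is an attractor."""
    seen, trajectory = {}, []
    for t in range(max_steps):
        key = tuple(sorted(state.items()))
        if key in seen:
            return trajectory[seen[key]:]
        seen[key] = t
        trajectory.append(state)
        state = step(state, rules)
    return []

# Toy regulatory rules (illustrative only, not the paper's lac operon model):
# the operon switches on when lactose is present and glucose is absent.
rules = {
    "glucose": lambda s: s["glucose"],                      # held constant
    "lactose": lambda s: s["lactose"] and not s["operon"],  # consumed when on
    "operon":  lambda s: s["lactose"] and not s["glucose"],
}

start = {"glucose": False, "lactose": True, "operon": False}
attractor = find_attractor(start, rules)
print(len(attractor))    # → 1 (a fixed point: lactose exhausted, operon off)
```

The attractors of such a network correspond to the stable phenotypes of the modelled system, which is why attractor analysis is the usual end product of Boolean network inference.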
APA, Harvard, Vancouver, ISO, and other styles