
Journal articles on the topic 'Parallel and dynamic reconfigurable computing'

Consult the top 50 journal articles for your research on the topic 'Parallel and dynamic reconfigurable computing.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever they are available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Schevelev, S. S. "Reconfigurable Modular Computing System." Proceedings of the Southwest State University 23, no. 2 (July 9, 2019): 137–52. http://dx.doi.org/10.21869/2223-1560-2019-23-2-137-152.

Abstract:
Purpose of research. A reconfigurable computer system consists of a computing system and special-purpose computers that are used to solve the tasks of vector and matrix algebra and pattern recognition. There are distinctions between matrix and associative systems and neural networks. Matrix computing systems comprise a set of processor units connected through a switching device with multi-module memory. They are designed to solve vector, matrix and data array problems. Associative systems contain a large number of operating devices that can simultaneously process multiple data streams. Neural networks and neurocomputers achieve high performance when solving expert system and pattern recognition problems due to the parallel processing of a neural network. Methods. An information graph of the computational process of a reconfigurable modular system was plotted. Structural and functional schemes and algorithms were developed that implement the construction of specialized modules for performing arithmetic and logical operations, search operations, and functions for replacing occurrences in processed words. Software for modelling the operation of the arithmetic-symbol processor, specialized computing modules, and switching systems was developed. Results. A block diagram of a reconfigurable computing modular system was developed. The system consists of compatible functional modules, is capable of static and dynamic reconfiguration, and has a parallel connection structure of the processor and computing modules through the use of interface channels. It consists of an arithmetic-symbol processor, specialized computing modules and switching systems; it performs specific tasks of symbolic information processing, arithmetic and logical operations. Conclusion. Systems with a reconfigurable structure are high-performance and highly reliable computing systems that consist of integrated processors in multi-machine and multiprocessor systems. Reconfigurability of the structure provides high system performance due to its adaptation to computational processes and the composition of the processed tasks.
2

Shevelev, S. S. "RECONFIGURABLE COMPUTING MODULAR SYSTEM." Radio Electronics, Computer Science, Control 1, no. 1 (March 31, 2021): 194–207. http://dx.doi.org/10.15588/1607-3274-2021-1-19.

Abstract:
Context. Modern general-purpose computers are capable of implementing any algorithm, but when solving certain problems they cannot compete with specialized computing modules in terms of processing speed. Specialized devices have high performance, effectively solve array processing and artificial intelligence tasks, and are used as control devices. The use of specialized microprocessor modules that process character strings, logical values, and numerical values represented as integers and real numbers makes it possible to increase the speed of arithmetic operations by exploiting parallelism in data processing. Objective. To develop principles for constructing microprocessor modules for a modular computing system with a reconfigurable structure: an arithmetic-symbolic processor, specialized computing devices, and switching systems capable of configuring microprocessors and specialized computing modules into a multi-pipeline structure to increase the speed of arithmetic and logical operations, together with high-speed algorithms for designing specialized symbol-processing accelerator processors. To develop algorithms and structural and functional diagrams of specialized mathematical modules that perform arithmetic operations in direct codes on neural-like elements, and systems for decentralized control of the operation of blocks. Method. An information graph of the computational process of a modular system with a reconfigurable structure has been built. Structural and functional diagrams and algorithms have been developed that implement the construction of specialized modules for performing arithmetic and logical operations, search operations, and functions for replacing occurrences in processed words. Software has been developed for simulating the operation of an arithmetic-symbolic processor, specialized computing modules, and switching systems. Results. A block diagram of a reconfigurable computing modular system has been developed; it consists of compatible functional modules, is capable of static and dynamic reconfiguration, and has a parallel structure for connecting the processor and computing modules through the use of interface channels. The system consists of an arithmetic-symbolic processor, specialized computing modules and switching systems, and performs specific tasks of symbolic information processing, arithmetic and logical operations. Conclusions. The architecture of reconfigurable computing systems can change dynamically during their operation. It becomes possible to adapt the architecture of a computing system to the structure of the problem being solved and to create problem-oriented computers whose structure corresponds to the structure of that problem. The main computing elements in reconfigurable computing systems are not general-purpose microprocessors but programmable logic integrated circuits, which are combined into a single computing field using high-speed interfaces. Reconfigurable multi-pipeline computing systems based on such fields are an effective tool for solving streaming information processing and control problems.
3

Magalhães Pereira, Monica, and Luigi Carro. "Dynamic Reconfigurable Computing: The Alternative to Homogeneous Multicores under Massive Defect Rates." International Journal of Reconfigurable Computing 2011 (2011): 1–17. http://dx.doi.org/10.1155/2011/452589.

Abstract:
The aggressive scaling of CMOS technology has increased density and allowed the integration of multiple processors into a single chip. Although solutions based on MPSoC architectures can increase an application's speed through TLP exploitation, this speedup is still limited by the amount of parallelism available in the application, as demonstrated by Amdahl's Law. Moreover, with the continuous shrinking of device features, very aggressive defect rates are expected for new technologies. Under high defect rates a large number of the MPSoC's processors will be susceptible to defects and consequently fail, not only reducing yield but also severely affecting the expected performance. This paper presents a run-time adaptive architecture that allows software execution even under aggressive defect rates. The proposed architecture can accelerate not only highly parallel applications but also sequential ones, and it is a heterogeneous solution to overcome the performance penalty that is imposed on homogeneous MPSoCs under massive defect rates.
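
To make the Amdahl's Law ceiling mentioned in this abstract concrete, here is a minimal sketch (our illustration, not material from the paper) that evaluates the ideal speedup for an assumed parallel fraction and core count; the numbers are purely hypothetical.

```python
def amdahl_speedup(parallel_fraction: float, cores: int) -> float:
    """Ideal speedup when a fraction `parallel_fraction` of the work
    scales perfectly across `cores` and the rest stays serial."""
    serial_fraction = 1.0 - parallel_fraction
    return 1.0 / (serial_fraction + parallel_fraction / cores)

# Even with many cores, a 10% serial share caps the speedup near 10x.
for cores in (2, 4, 16, 64, 1024):
    print(cores, round(amdahl_speedup(0.9, cores), 2))
```

The limit of 1 / (1 - p) as the core count grows is consistent with the abstract's point that sequential code must be accelerated as well, not just the parallel portion.
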
4

Condia, Josie E. Rodriguez, Pierpaolo Narducci, Matteo Sonza Reorda, and Luca Sterpone. "DYRE: a DYnamic REconfigurable solution to increase GPGPU’s reliability." Journal of Supercomputing 77, no. 10 (March 29, 2021): 11625–42. http://dx.doi.org/10.1007/s11227-021-03751-2.

Abstract:
General-purpose graphics processing units (GPGPUs) are extensively used in high-performance computing. However, it is well known that these devices' reliability may be limited by the rise of faults at the hardware level. This work introduces a flexible solution to detect and mitigate permanent faults affecting the execution units in these parallel devices. The proposed solution is based on adding some spare modules to perform two in-field operations: detecting and mitigating faults. The solution takes advantage of the regularity of the execution units in the device to avoid significant design changes and reduce the overhead. The proposed solution was evaluated in terms of reliability improvement and area, performance, and power overhead costs. For this purpose, we resorted to a micro-architectural open-source GPGPU model (FlexGripPlus). Experimental results show that the proposed solution can extend the reliability by up to 57%, with overhead costs lower than 2% and 8% in area and power, respectively.
5

YEH, POCHI, and CLAIRE GU. "PHOTOREFRACTIVE MEDIA FOR OPTICAL INTERCONNECTIONS." Journal of Nonlinear Optical Physics & Materials 01, no. 01 (January 1992): 167–201. http://dx.doi.org/10.1142/s0218199192000108.

Abstract:
The photorefractive effect and its applications in optical interconnections are described. The fundamental limit, the dynamics of grating formation, and the two-wave mixing (TWM) process are discussed. Reconfigurable interconnections for parallel optical computing are demonstrated using photorefractive holograms. Neural network interconnections with photorefractive media are also presented.
6

Belaid, Ikbel, Fabrice Muller, and Maher Benjemaa. "Static Scheduling of Periodic Hardware Tasks with Precedence and Deadline Constraints on Reconfigurable Hardware Devices." International Journal of Reconfigurable Computing 2011 (2011): 1–28. http://dx.doi.org/10.1155/2011/591983.

Abstract:
Task graph scheduling for reconfigurable hardware devices can be defined as finding a schedule for a set of periodic tasks with precedence, dependence, and deadline constraints, as well as their optimal allocation on the available heterogeneous hardware resources. This paper proposes a new methodology comprising three main stages. Using these three stages together with dynamic partial reconfiguration and mixed integer programming, pipelined scheduling and efficient placement are achieved, enabling parallel computing of the task graph on the reconfigurable devices while optimizing placement/scheduling quality. Experiments on an application of heterogeneous hardware tasks demonstrate an improvement in utilization of 12.45% of the available reconfigurable resources, corresponding to a resource gain of 17.3% compared to a static design. The configuration overhead is reduced to 2% of the total running time. Due to pipelined scheduling, the task graph span is reduced by 4% compared to sequential execution of the graph.
7

Russek, Paweł, Ernest Jamro, Agnieszka Dąbrowska-Boruch, and Kazimierz Wiatr. "A study of the loops control for reconfigurable computing with OpenCL in the LABS local search problem." International Journal of High Performance Computing Applications 34, no. 1 (August 12, 2019): 103–14. http://dx.doi.org/10.1177/1094342019868515.

Abstract:
In this article, we study the steepest descent local search (SDLS) algorithm that is used as the improvement step in memetic algorithms for the search of low autocorrelation binary sequences (LABS). We address the method of reconfigurable computing because the algorithm is well suited to field programmable gate arrays (FPGAs): it features integer operations, bit-wise data representation, a regular execution flow, and huge computational complexity. It contains four levels of nested loops, but its direct parallel implementation as a custom processor leads to typical problems because the loop bounds are dynamic and the iteration counts are large. This inhibits the simple parallel data path that is typically produced by loop unrolling. We examined four architectures that mitigate these obstacles, and we provide the results of their implementation. The solutions take advantage of loop pipelining, reordering of the loops, and dynamic reconfiguration. A recently available development tool was involved in our study, as we used the OpenCL (OCL) platform for FPGAs to draw practical conclusions. The given proposals are characterized by their performance and the problem sizes they can accommodate. Consequently, the speed/size trade-off is highlighted, as FPGA size is a design constraint. The performance of the FPGA-based solutions is compared to the CPU speed, and the maximum reported speed-up is 750. Readers can further develop and/or use the presented OCL solutions for efficient LABS discovery, as we provide the corresponding software repository.
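
For readers unfamiliar with the objective that the SDLS step optimizes, the sketch below (our own illustration, not code from the paper or its repository) evaluates the autocorrelation energy of a ±1 sequence and runs steepest-descent passes over single-bit flips until a local minimum is reached.

```python
import random

def labs_energy(s):
    """Autocorrelation energy E(s) = sum_k C_k(s)^2 of a +/-1 sequence s."""
    n = len(s)
    return sum(sum(s[i] * s[i + k] for i in range(n - k)) ** 2
               for k in range(1, n))

def steepest_descent_step(s):
    """Flip the single bit that lowers the energy the most, if any."""
    best_energy, best_index = labs_energy(s), None
    for i in range(len(s)):
        s[i] = -s[i]                      # try flipping bit i
        e = labs_energy(s)
        if e < best_energy:
            best_energy, best_index = e, i
        s[i] = -s[i]                      # undo the trial flip
    if best_index is not None:
        s[best_index] = -s[best_index]    # keep the best flip
    return best_index is not None

seq = [random.choice((-1, 1)) for _ in range(32)]
while steepest_descent_step(seq):        # descend until a local minimum
    pass
print(labs_energy(seq))
```

The nested loops over lags, positions, and candidate flips are the loop structure whose dynamic bounds the article's FPGA architectures have to pipeline and reorder.
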
8

Assuncao, Luis, Carlos Goncalves, and Jose C. Cunha. "Autonomic Workflow Activities." International Journal of Adaptive, Resilient and Autonomic Systems 5, no. 2 (April 2014): 57–82. http://dx.doi.org/10.4018/ijaras.2014040104.

Abstract:
Workflows have been successfully applied to express the decomposition of complex scientific applications. This has motivated many initiatives for developing scientific workflow tools. However, the existing tools still lack adequate support for important aspects, namely decoupling the enactment engine from the workflow task specification, decentralizing the control of workflow activities, and allowing their tasks to run autonomously on distributed infrastructures, for instance on Clouds. Furthermore, many workflow tools only support the execution of Directed Acyclic Graphs (DAGs), without the concept of iterations in which activities execute millions of iterations over long periods of time, and without support for dynamic workflow reconfiguration after a given iteration. We present the AWARD (Autonomic Workflow Activities Reconfigurable and Dynamic) model of computation, based on the Process Networks model, where the workflow activities (AWAs) are autonomic processes with independent control that can run in parallel on distributed infrastructures, e.g., on Clouds. Each AWA executes a Task developed as a Java class that implements a generic interface, allowing end-users to code their applications without concern for low-level details. The data-driven coordination of AWA interactions is based on a shared tuple space that also supports dynamic workflow reconfiguration and monitoring of workflow execution. We describe how AWARD supports dynamic reconfiguration and discuss typical workflow reconfiguration scenarios. For evaluation, we describe experimental results of AWARD workflow executions in several application scenarios, mapped to a small dedicated cluster and the Amazon Elastic Compute Cloud (EC2).
9

McArdle, N., M. Naruse, H. Toyoda, Y. Kobayashi, and M. Ishikawa. "Reconfigurable optical interconnections for parallel computing." Proceedings of the IEEE 88, no. 6 (June 2000): 829–37. http://dx.doi.org/10.1109/5.867696.

10

El-Boghdadi, Hatem M. "Dynamic-width reconfigurable parallel prefix circuits." Journal of Supercomputing 71, no. 4 (January 1, 2015): 1177–95. http://dx.doi.org/10.1007/s11227-014-1270-2.

11

Foucher, Clément, Fabrice Muller, and Alain Giulieri. "Online codesign on reconfigurable platform for parallel computing." Microprocessors and Microsystems 37, no. 4-5 (June 2013): 482–93. http://dx.doi.org/10.1016/j.micpro.2011.12.007.

12

SEFRAOUI, Omar, Mohammed AISSAOUI, and Mohsine ELEULDJ. "Dynamic Reconfigurable Component for Cloud Computing Resources." International Journal of Computer Applications 88, no. 7 (February 14, 2014): 1–5. http://dx.doi.org/10.5120/15361-3890.

13

Qi, Ji. "A Scheduling Algorithm for Dynamic Reconfigurable Computing." Journal of Computer Research and Development 44, no. 8 (2007): 1439. http://dx.doi.org/10.1360/crad20070822.

14

Varvarigos, E. A., and D. P. Bertsekas. "Dynamic broadcasting in parallel computing." IEEE Transactions on Parallel and Distributed Systems 6, no. 2 (1995): 120–31. http://dx.doi.org/10.1109/71.342123.

15

Tissot, Y., G. A. Russell, K. J. Symington, and J. F. Snowdon. "Optimization of reconfigurable optically interconnected systems for parallel computing." Journal of Parallel and Distributed Computing 66, no. 2 (February 2006): 238–47. http://dx.doi.org/10.1016/j.jpdc.2005.07.005.

16

Szymanski, Ted H., and H. Scott Hinton. "Reconfigurable intelligent optical backplane for parallel computing and communications." Applied Optics 35, no. 8 (March 10, 1996): 1253. http://dx.doi.org/10.1364/ao.35.001253.

17

Ramesh, Tirumale, and Subramaniam Ganesan. "Reconfigurable shared and dedicated-bus multiprocessor for parallel computing." Computers & Electrical Engineering 19, no. 5 (September 1993): 377–86. http://dx.doi.org/10.1016/0045-7906(93)90012-g.

18

Lu, Yanan, Leibo Liu, Jianfeng Zhu, Shouyi Yin, and Shaojun Wei. "Architecture, challenges and applications of dynamic reconfigurable computing." Journal of Semiconductors 41, no. 2 (February 2020): 021401. http://dx.doi.org/10.1088/1674-4926/41/2/021401.

19

Smith, Melissa C., and Gregory D. Peterson. "Parallel application performance on shared high performance reconfigurable computing resources." Performance Evaluation 60, no. 1-4 (May 2005): 107–25. http://dx.doi.org/10.1016/j.peva.2004.10.004.

20

Joven, Jaume, Akash Bagdia, Federico Angiolini, Per Strid, David Castells-Rufas, Eduard Fernandez-Alonso, Jordi Carrabina, and Giovanni De Micheli. "QoS-Driven Reconfigurable Parallel Computing for NoC-Based Clustered MPSoCs." IEEE Transactions on Industrial Informatics 9, no. 3 (August 2013): 1613–24. http://dx.doi.org/10.1109/tii.2012.2222035.

21

Cordes, Ben, and Miriam Leeser. "Parallel Backprojection: A Case Study in High-Performance Reconfigurable Computing." EURASIP Journal on Embedded Systems 2009 (2009): 1–14. http://dx.doi.org/10.1155/2009/727965.

22

LIN, RONG, STEPHAN OLARIU, JAMES L. SCHWING, and JINGYUAN ZHANG. "COMPUTING ON RECONFIGURABLE BUSES—A NEW COMPUTATIONAL PARADIGM." Parallel Processing Letters 04, no. 04 (December 1994): 465–76. http://dx.doi.org/10.1142/s0129626494000430.

Abstract:
Up to now, buses have been used exclusively to ferry data around. The contribution of this work is to show that buses can be used both as topological descriptors and as powerful computational devices. We illustrate the power of this paradigm by designing two fast algorithms for image segmentation and parallel visibility. Our algorithm for image segmentation uses a novel technique involving building a bus around every region of interest in the image. With a binary image pretiled in the natural way on a reconfigurable mesh of size N×N, our segmentation algorithm runs in O(log N) time, improving by a factor of O(log N) over the state of the art. Next, we exhibit a very simple algorithm to solve the parallel visibility problem on an image of size N×N. Our algorithm runs in O(log N) time. The only previously-known algorithm for this problem runs in O(log N) time on a hypercube with N processors. To support these algorithms, a set of basic building blocks is developed which are of independent interest. These include solutions to the following problems on a bus of length N: (1) computing the prefix maxima of items stored by the processors on the bus, even if none of the processors knows its rank on the bus; (2) computing the rank of every processor on the bus; (3) electing a leader on a closed bus.
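
As a point of reference for building block (1) above, the sketch below (ours, not the authors' bus algorithm) computes prefix maxima with the logarithmic doubling pattern that such bus procedures typically emulate; on the reconfigurable mesh the combining is carried out by broadcasts over bus segments rather than by array indexing.

```python
def prefix_maxima_doubling(values):
    """Pointer-doubling prefix maxima: after round r, position i holds the
    maximum of the last 2**r values ending at i, so after ceil(log2 N)
    rounds it holds max(values[0..i]).  A sequential stand-in only."""
    result, step = list(values), 1
    while step < len(result):
        result = [max(result[i], result[i - step]) if i >= step else result[i]
                  for i in range(len(result))]
        step *= 2
    return result

print(prefix_maxima_doubling([3, 1, 4, 1, 5, 9, 2, 6]))  # [3, 3, 4, 4, 5, 9, 9, 9]
```
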
23

Etherington, Carole J., Matthew W. Anderson, Eric Bach, Jon T. Butler, and Pantelimon Stănică. "A Parallel Approach in Computing Correlation Immunity up to Six Variables." International Journal of Foundations of Computer Science 27, no. 04 (June 2016): 511–28. http://dx.doi.org/10.1142/s0129054116500131.

Abstract:
We show the use of a reconfigurable computer in computing the correlation immunity of Boolean functions of up to 6 variables. Boolean functions with high correlation immunity are desired in cryptographic systems because they are immune to correlation attacks. The SRC-6 reconfigurable computer was programmed in Verilog to compute the correlation immunity of functions. This computation is performed at a rate that is 190 times faster than on a conventional computer. Our analysis of correlation immunity is across all n-variable Boolean functions, for 2 ≤ n ≤ 6, thus obtaining, for the first time, a complete distribution of such functions. We also compare correlation immunity with two other cryptographic properties, nonlinearity and degree.
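
As a rough software counterpart to the hardware computation described above (a brute-force sketch of our own, far slower than the SRC-6 pipeline), correlation immunity can be checked via the Walsh transform: a Boolean function is correlation immune of order m exactly when its Walsh coefficient is zero for every mask of Hamming weight between 1 and m.

```python
def walsh(f, n, w):
    """Walsh coefficient W_f(w) = sum over n-bit x of (-1)^(f(x) XOR w.x)."""
    total = 0
    for x in range(1 << n):
        dot = bin(x & w).count("1") & 1
        total += -1 if (f[x] ^ dot) else 1
    return total

def correlation_immunity(f, n):
    """Largest m such that W_f(w) = 0 for every w with 1 <= weight(w) <= m."""
    order = 0
    for m in range(1, n + 1):
        masks = [w for w in range(1, 1 << n) if bin(w).count("1") == m]
        if all(walsh(f, n, w) == 0 for w in masks):
            order = m
        else:
            break
    return order

# Truth table of f(x1,x2,x3) = x1 XOR x2 XOR x3, correlation immune of order 2.
f = [bin(x).count("1") & 1 for x in range(8)]
print(correlation_immunity(f, 3))  # 2
```

The hardware design evaluates this kind of test across the whole function space in parallel, which is where the reported 190x speedup comes from.
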
24

Wu, Chi-Feng, and Cheng-Wen Wu. "Testing and Diagnosing Dynamic Reconfigurable FPGA." VLSI Design 10, no. 3 (January 1, 2000): 321–33. http://dx.doi.org/10.1155/2000/79281.

Abstract:
Dynamically reconfigurable field-programmable gate arrays (FPGAs) are receiving notable attention because of their much shorter reconfiguration time compared with traditional FPGAs. The short reconfiguration time is vital to applications such as reconfigurable computing and emulation. We show in this paper that testing and diagnosis of the FPGA can also take advantage of its dynamic reconfigurability. We first propose an efficient methodology for testing the interconnects of the FPGA, then present several universal test and diagnosis approaches which cover all functional units of the FPGA. Experimental results show that our approach significantly reduces the testing time, without additional cost for diagnosis.
25

Kao, Chi-Chou. "Performance-driven parallel reconfigurable computing architecture for multi-standard video decoding." Multimedia Tools and Applications 79, no. 41-42 (August 15, 2020): 30583–99. http://dx.doi.org/10.1007/s11042-020-09505-1.

26

Tan, Cheng, Chenhao Xie, Tong Geng, Andres Marquez, Antonino Tumeo, Kevin Barker, and Ang Li. "ARENA: Asynchronous Reconfigurable Accelerator Ring to Enable Data-Centric Parallel Computing." IEEE Transactions on Parallel and Distributed Systems 32, no. 12 (December 1, 2021): 2880–92. http://dx.doi.org/10.1109/tpds.2021.3081074.

27

LIN, Wang-Qun, Lei DENG, Zhao-Yun DING, Quan-Yuan WU, Yan JIA, and Bin ZHOU. "Hierarchical Dynamic Community Detection by Parallel Computing." Chinese Journal of Computers 35, no. 8 (2012): 1712. http://dx.doi.org/10.3724/sp.j.1016.2012.01712.

28

Fabiani, Erwan. "Experiencing a Problem-Based Learning Approach for Teaching Reconfigurable Architecture Design." International Journal of Reconfigurable Computing 2009 (2009): 1–11. http://dx.doi.org/10.1155/2009/923415.

Abstract:
This paper presents the “reconfigurable computing” teaching part of a computer science master course (first year) on parallel architectures. The practical work sessions of this course rely on active pedagogy using problem-based learning, focused on designing a reconfigurable architecture for the implementation of an application class of image processing algorithms. We show how the successive steps of this project permit the student to experiment with several fundamental concepts of reconfigurable computing at different levels. Specific experiments include exploitation of architectural parallelism, dataflow and communicating component-based design, and configurability-specificity tradeoffs.
29

Monien, Burkhard, Ralf Diekmann, and Reinhard Lüling. "The Construction of Large Scale Reconfigurable Parallel Computing Systems (The Architecture of the SC320)." International Journal of Foundations of Computer Science 08, no. 03 (September 1997): 347–61. http://dx.doi.org/10.1142/s0129054197000227.

Abstract:
Reconfigurable communication networks for massively parallel multiprocessor systems offer the possibility to realize a number of application demands such as special communication patterns or real-time requirements. This paper presents the design principle of a reconfigurable network which is able to realize any graph of maximal degree four. The architecture is based on a special multistage Clos network, constructed out of a number of static routing switches of equal size. Upper bounds on the cut size of 4-regular graphs, when split into a number of clusters, allow minimizing the number of switches and connections while still offering the desired reconfiguration capabilities as well as large scalability and flexible multi-user access. Efficient algorithms for configuring the architecture are based on an old result by Petersen about the decomposition of regular graphs. The concept presented here is the basis for the Parsytec SC series of reconfigurable MPP systems. The currently largest realization, with 320 processors, is presented in greater detail.
30

Pajuelo-Holguera, Francisco, Juan A. Gómez-Pulido, and Fernando Ortega. "Performance of Two Approaches of Embedded Recommender Systems." Electronics 9, no. 4 (March 25, 2020): 546. http://dx.doi.org/10.3390/electronics9040546.

Abstract:
Nowadays, highly portable and low-energy computing environments require programming applications able to satisfy computing time and energy constraints. Furthermore, collaborative filtering-based recommender systems are intelligent systems that use large databases and perform extensive matrix arithmetic calculations. In this research, we present an optimized algorithm and a parallel hardware implementation as a good approach for running embedded collaborative filtering applications. To this end, we have considered high-level synthesis programming for reconfigurable hardware technology. The design was tested in environments where usual parameters and real-world datasets were applied, and compared to usual microprocessors running similar implementations. The performance results obtained by the different implementations were analyzed in terms of computing time and energy consumption. The main conclusion is that the optimized algorithm is competitive in embedded applications when considering large datasets and parallel implementations based on reconfigurable hardware.
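
The "extensive matrix arithmetic" behind collaborative filtering reduces, per prediction, to a short dot product between latent-factor vectors. The sketch below assumes a matrix-factorization formulation purely for illustration; the paper's exact algorithm, parameters, and datasets are not reproduced here, and the factor values are made up.

```python
import numpy as np

# Hypothetical latent-factor model: rating(u, i) ~ user_factors[u] . item_factors[i].
rng = np.random.default_rng(0)
n_users, n_items, k = 4, 5, 3
user_factors = rng.normal(size=(n_users, k))
item_factors = rng.normal(size=(n_items, k))

def predict(user: int, item: int) -> float:
    """Predicted rating as a k-term dot product, the per-prediction kernel
    that an embedded accelerator would evaluate many times in parallel."""
    return float(user_factors[user] @ item_factors[item])

print(round(predict(0, 2), 3))
```
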
31

Fujioka, Y., M. Kameyama, and N. Tomabechi. "Reconfigurable parallel VLSI processor for dynamic control of intelligent robots." IEE Proceedings - Computers and Digital Techniques 143, no. 1 (1996): 23. http://dx.doi.org/10.1049/ip-cdt:19960103.

32

Choi, Jin-Kyu, Osamu Mori, and Toru Omata. "Dynamic and stable reconfiguration of self-reconfigurable planar parallel robots." Advanced Robotics 18, no. 6 (January 2004): 565–82. http://dx.doi.org/10.1163/1568553041257440.

33

Klymenko, Iryna Anatoliivna, Valentyna Vasylivna Tkachenko, and Oleksandr Mykolaiovych Storozhuk. "Adaptive tasks mapping tools for reconfigurable computing structure in parallel computing systems under data flow control." Electronics and Communications 21, no. 2 (October 27, 2016): 71–77. http://dx.doi.org/10.20535/2312-1807.2016.21.2.71110.

34

Zhang, Fan, Xing Guo Luo, and Xing Ming Zhang. "Design of Reconfigurable Multi-Processor Architecture for High Performance Computing." Applied Mechanics and Materials 378 (August 2013): 534–38. http://dx.doi.org/10.4028/www.scientific.net/amm.378.534.

Abstract:
In this paper, the design uses a Reconfigurable Multi-Processor Architecture (RCMPA); the system can adapt to a variety of applications through multi-processor parallel execution and flexible system configuration. Each computing component in the system consists of a general-purpose microprocessor, a reconfigurable FPGA, and SRAMs. The general-purpose microprocessor handles the control of a variety of tasks, scheduling, and some computing functions. The FPGA offers sufficient flexibility, extensibility, and high-speed interconnection features. The SRAMs offer a variety of storage structures with high read/write speed and high-density storage units.
35

Zeng, Xiao Hui, Jing Zhong Li, Deng Li Bo, Chen Zhang, and Wen Lang Luo. "A Parallel Computing Dynamic Task Scheduling System for Nano-Materials Design and Simulation." Key Engineering Materials 562-565 (July 2013): 709–15. http://dx.doi.org/10.4028/www.scientific.net/kem.562-565.709.

Abstract:
Available task scheduling systems cannot suspend running MPI parallel computing applications in order to quickly insert emergency parallel computing tasks. By modifying the TCP/IP protocol, this paper proposes a new method to keep inter-process communication synchronized while a parallel application is suspended; moreover, by modifying the signal mechanism of the Linux operating system, it also proposes a method for consistently suspending and resuming a parallel application. A parallel computing dynamic task scheduling prototype system is implemented, and the experimental results show that the prototype can suspend a running parallel computing application and support the dynamic insertion of an emergency MPI parallel computing application.
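
The suspend/insert/resume behaviour described above can be approximated at the process level with standard POSIX job-control signals; the sketch below is our simplification, not the authors' modified TCP/IP or kernel signal mechanism, and the "mpirun -np 4 ./solver" command is a placeholder for a single-node launch whose ranks share the launcher's process group.

```python
import os
import signal
import subprocess
import time

def run_emergency_task():
    """Placeholder for the urgent job inserted while the MPI job is stopped."""
    subprocess.run(["echo", "emergency task running"], check=True)

# Launch a long-running parallel job in its own process group.
job = subprocess.Popen(["mpirun", "-np", "4", "./solver"], start_new_session=True)
pgid = os.getpgid(job.pid)

time.sleep(5)
os.killpg(pgid, signal.SIGSTOP)   # suspend the launcher and its local ranks
run_emergency_task()              # the inserted emergency computation
os.killpg(pgid, signal.SIGCONT)   # resume the suspended parallel job
job.wait()
```

Plain SIGSTOP/SIGCONT leaves open exactly the problems the paper targets: stopped ranks stop answering their peers, so connections can time out and the job can lose consistency, which is why the authors modify the TCP/IP and Linux signal mechanisms rather than relying on job control alone.
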
36

Huang, Guanyu, Dan Zhang, Hongyan Tang, Lingyu Kong, and Sumian Song. "Analysis and control for a new reconfigurable parallel mechanism." International Journal of Advanced Robotic Systems 17, no. 5 (September 1, 2020): 172988142093132. http://dx.doi.org/10.1177/1729881420931322.

Abstract:
This article proposes a new reconfigurable parallel mechanism using a spatial overconstrained platform. The proposed mechanism can be used as a machine tool. Its mobility is analyzed by Screw Theory. The inverse kinematic model is established by applying the closed-loop equation. Next, the dynamic model of the presented mechanism is established by the Lagrange formulation. Several controllers are used to control the presented mechanism. Based on the dynamic model, a fuzzy proportional-integral-derivative (PID) controller is designed to track the trajectory of the end effector. For each limb, a sliding mode controller is applied to track the position and velocity of the slider. Finally, simulations using ADAMS and MATLAB are presented to verify the effectiveness and stability of these controllers.
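
As a minimal illustration of the trajectory-tracking layer described above, here is a plain discrete PID loop of our own (not the paper's fuzzy-PID or sliding-mode design) driving a single coordinate toward a reference; gains and the toy plant are invented for the example.

```python
class PID:
    """Discrete PID controller: u = Kp*e + Ki*integral(e) + Kd*de/dt."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral, self.prev_error = 0.0, 0.0

    def update(self, reference, measurement):
        error = reference - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Illustrative gains and a crude first-order plant; values are not from the paper.
pid, position = PID(kp=2.0, ki=0.5, kd=0.1, dt=0.01), 0.0
for _ in range(500):
    position += pid.update(1.0, position) * 0.01   # integrate the toy plant
print(round(position, 3))   # approaches the reference value 1.0
```

A fuzzy-PID scheme, as in the paper, adjusts such gains online from error and error-rate rules instead of keeping them fixed.
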
37

Paul, Anand, Yung-Chuan Jiang, Jhing-Fa Wang, and Jar-Ferr Yang. "Parallel Reconfigurable Computing-Based Mapping Algorithm for Motion Estimation in Advanced Video Coding." ACM Transactions on Embedded Computing Systems 11, S2 (August 2012): 1–18. http://dx.doi.org/10.1145/2331147.2331149.

38

Murshed, M. Manzur, and Richard P. Brent. "Constant Time Algorithms for Computing the Contour of Maximal Elements on a Reconfigurable Mesh." Parallel Processing Letters 08, no. 03 (September 1998): 351–61. http://dx.doi.org/10.1142/s0129626498000365.

Abstract:
There has recently been interest in the introduction of reconfigurable buses to existing parallel architectures. Among them, the Reconfigurable Mesh (RM) draws much attention because of its simplicity. This paper presents three constant time algorithms to compute the contour of the maximal elements of N planar points on the RM. The first algorithm employs an RM of size N × N, while the second one uses a 3-D RM of size N^{1/2} × N^{1/2} × N^{1/2}. We further extend the result to a k-D RM of size N^{1/(k-1)} × N^{1/(k-1)} × … × N^{1/(k-1)}.
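
For reference, the contour of maximal elements computed in constant time above is the "staircase" of points that no other point dominates in both coordinates; a simple sequential sketch (ours, not the constant-time mesh algorithm) is:

```python
def maximal_elements(points):
    """Points (x, y) not dominated by any other point in both coordinates.
    Sweep by decreasing x and keep every point that raises the best y seen."""
    staircase, best_y = [], float("-inf")
    for x, y in sorted(points, key=lambda p: (-p[0], -p[1])):
        if y > best_y:
            staircase.append((x, y))
            best_y = y
    return staircase[::-1]   # left-to-right order along the contour

print(maximal_elements([(1, 5), (2, 3), (4, 4), (3, 1), (5, 2)]))
# [(1, 5), (4, 4), (5, 2)]
```
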
39

Hu, Wei Wei. "Reconfigurable Technology in the Application of Virtual Instrument." Advanced Materials Research 1049-1050 (October 2014): 1137–40. http://dx.doi.org/10.4028/www.scientific.net/amr.1049-1050.1137.

Abstract:
In this paper, the main study is a virtual instrument experimental system based on reconfigurable computing technology. The system is divided into a computer, a master FPGA, a reconfigurable slave FPGA, and instrumentation port drivers. Adopting a combined dynamic and static mode, the master FPGA chooses among different download programs to achieve the reconfiguration of the slave FPGA. By designing two functional modules, a counter and a function generator, the reconfigurable performance of the virtual system is verified. Modular standard hardware is the hardware foundation of virtual instruments. Traditional hardware modules are independent entities, which makes them inflexible in configuration and large in volume. Relying on FPGA hardware and reconfigurable computing technology, the functions of many instruments can be realized within an FPGA. Dynamic configuration can realize new functions and reduce the volume of the instruments, which makes the virtual instrument more powerful and realizes the virtual instrument in a real sense.
40

ADAR, N., and G. KUVAT. "Parallel Genetic Algorithms with Dynamic Topology using Cluster Computing." Advances in Electrical and Computer Engineering 16, no. 3 (2016): 73–80. http://dx.doi.org/10.4316/aece.2016.03011.

41

Muchnick, V. B., and A. V. Shafarenko. "Dynamic evaluation strategy for fine-grain data-parallel computing." IEE Proceedings - Computers and Digital Techniques 143, no. 3 (1996): 181. http://dx.doi.org/10.1049/ip-cdt:19960333.

42

Son, Dong Oh, Cong Thuan Do, Hong Jun Choi, Jiseung Nam, and Cheol Hong Kim. "A dynamic CTA scheduling scheme for massive parallel computing." Cluster Computing 20, no. 1 (February 14, 2017): 781–87. http://dx.doi.org/10.1007/s10586-017-0768-9.

43

Overeinder, B. J., P. M. A. Sloot, R. N. Heederik, and L. O. Hertzberger. "A dynamic load balancing system for parallel cluster computing." Future Generation Computer Systems 12, no. 1 (May 1996): 101–15. http://dx.doi.org/10.1016/0167-739x(95)00038-t.

44

Čiegis, Raimondas, Vadimas Starikovičius, Natalija Tumanova, and Minvydas Ragulskis. "Application of distributed parallel computing for dynamic visual cryptography." Journal of Supercomputing 72, no. 11 (May 4, 2016): 4204–20. http://dx.doi.org/10.1007/s11227-016-1733-8.

45

Wang, Shaojun, Datong Liu, Jianbao Zhou, Bin Zhang, and Yu Peng. "A Run-Time Dynamic Reconfigurable Computing System for Lithium-Ion Battery Prognosis." Energies 9, no. 8 (July 25, 2016): 572. http://dx.doi.org/10.3390/en9080572.

46

Vucha, Mahendra, and Arvind Rajawat. "Dynamic Task Distribution Model for On-Chip Reconfigurable High Speed Computing System." International Journal of Reconfigurable Computing 2015 (2015): 1–12. http://dx.doi.org/10.1155/2015/783237.

Abstract:
Modern embedded systems are being modeled as a Reconfigurable High Speed Computing System (RHSCS), where reconfigurable hardware, that is, a Field Programmable Gate Array (FPGA), and softcore processors configured on the FPGA act as computing elements. As system complexity increases, efficient task distribution methodologies are essential to obtain high performance. A dynamic task distribution methodology based on the Minimum Laxity First (MLF) policy (DTD-MLF) distributes the tasks of an application dynamically onto the RHSCS and utilizes the available RHSCS resources effectively. The DTD-MLF methodology takes advantage of the runtime design parameters of an application represented as a DAG and considers the attributes of the tasks in the DAG and of the computing resources to distribute the tasks onto the RHSCS. In this paper, we describe the DTD-MLF model and verify its effectiveness by distributing some real-life benchmark applications onto an RHSCS configured on a Virtex-5 FPGA device. The benchmark applications are represented as DAGs and are distributed to the resources of the RHSCS based on the DTD-MLF model. The performance of the MLF-based dynamic task distribution methodology is compared with a static task distribution methodology. The comparison shows that the dynamic task distribution model with the MLF criterion outperforms the static task distribution techniques in terms of schedule length and effective utilization of the available RHSCS resources.
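
Minimum Laxity First, the selection rule behind DTD-MLF, can be summarized in a few lines: laxity is the slack a ready task has before it can no longer meet its deadline, and the task with the smallest laxity is dispatched first. The sketch below is a generic illustration with hypothetical task fields, not the authors' scheduler.

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    deadline: float          # absolute deadline
    remaining_exec: float    # remaining execution-time estimate

def laxity(task: Task, now: float) -> float:
    """Slack left before the task can no longer finish by its deadline."""
    return task.deadline - now - task.remaining_exec

def pick_next(ready_tasks, now):
    """Minimum Laxity First: dispatch the ready task with the least slack."""
    return min(ready_tasks, key=lambda t: laxity(t, now))

ready = [Task("fft", deadline=20.0, remaining_exec=6.0),
         Task("fir", deadline=12.0, remaining_exec=5.0),
         Task("crc", deadline=30.0, remaining_exec=2.0)]
print(pick_next(ready, now=0.0).name)   # "fir" (laxity 7 < 14 < 28)
```

In the dynamic scheme the laxities are re-evaluated at run time as tasks arrive and complete, which is what lets it beat a static assignment on schedule length.
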
47

Al-Wattar, Ahmed, Shawki Areibi, and Gary Grewal. "An Efficient Framework for Floor-plan Prediction of Dynamic Runtime Reconfigurable Systems." International Journal of Reconfigurable and Embedded Systems (IJRES) 4, no. 2 (July 1, 2015): 99. http://dx.doi.org/10.11591/ijres.v4.i2.pp99-121.

Abstract:
Several embedded application domains for reconfigurable systems tend to combine frequent changes with the high performance demands of their workloads, such as image processing, wearable computing and network processors. Time multiplexing of reconfigurable hardware resources raises a number of new issues, ranging from run-time systems to complex programming models that usually form a Reconfigurable hardware Operating System (ROS). The operating system performs online task scheduling and handles resource management. There are many challenges in adaptive computing and dynamic reconfigurable systems. One of the major understudied challenges is estimating the required resources, in terms of soft cores, Programmable Reconfigurable Regions (PRRs) and the appropriate communication infrastructure, and predicting a near-optimal layout and floor-plan of the reconfigurable logic fabric. Some of these issues are specific to the application being designed, while others are more general and relate to the underlying run-time environment. Static resource allocation for Run-Time Reconfiguration (RTR) often leads to inferior and unacceptable results. In this paper, we present a novel adaptive and dynamic methodology, based on a machine learning approach, for predicting and estimating the necessary resources for an application based on past historical information. An important feature of the proposed methodology is that the system is able to learn and generalize and, therefore, is expected to improve its accuracy over time. The goal of the entire process is to extract useful hidden knowledge from the data. This knowledge is the prediction and estimation of the necessary resources for an unknown or previously unseen application.
48

Vranjković, Vuk S., Rastislav J. R. Struharik, and Ladislav A. Novak. "Reconfigurable Hardware for Machine Learning Applications." Journal of Circuits, Systems and Computers 24, no. 05 (April 8, 2015): 1550064. http://dx.doi.org/10.1142/s0218126615500644.

Abstract:
This paper proposes a universal coarse-grained reconfigurable computing architecture for the hardware implementation of decision trees (DTs), artificial neural networks (ANNs), and support vector machines (SVMs), suitable for both field programmable gate array (FPGA) and application specific integrated circuit (ASIC) implementation. Using this universal architecture, two versions of DTs (functional DT and axis-parallel DT), two versions of SVMs (with polynomial and radial kernels) and two versions of ANNs (multilayer perceptron ANN and radial basis ANN) machine learning classifiers have been implemented in an FPGA. Experimental results, based on 18 benchmark datasets from the standard UCI machine learning repository, show that the FPGA implementation provides a significant improvement (1–2 orders of magnitude) in average instance classification time compared with software implementations based on the R project.
49

BEN-ASHER, YOSI, and ASSAF SCHUSTER. "TIME-SIZE TRADEOFFS FOR RECONFIGURABLE MESHES." Parallel Processing Letters 06, no. 02 (June 1996): 231–45. http://dx.doi.org/10.1142/s0129626496000236.

Abstract:
Many algorithms that involve only a constant number of broadcasting steps have been devised for the reconfigurable mesh (RN-mesh) model of parallel computing. It was not known, however, how tight the constants involved are. Consider an n×n directed reconfigurable mesh (DRN-mesh) that computes a function f(n) in T steps, where T is a constant. In this paper we show that T can always be reduced to a single step, still using a polynomial size DRN-mesh. Furthermore, we show that this is in fact a general tradeoff: namely, the number of steps may be reduced to any value between 1 and T, at the price of exponential growth in the size of the DRN-mesh with the number of eliminated steps.
50

LOPES, HEITOR S., CARLOS R. ERIG LIMA, and NORTON J. MURATA. "A CONFIGWARE APPROACH FOR HIGH-SPEED PARALLEL ANALYSIS OF GENOMIC DATA." Journal of Circuits, Systems and Computers 16, no. 04 (August 2007): 527–40. http://dx.doi.org/10.1142/s0218126607003885.

Abstract:
Many problems in bioinformatics represent great computational challenges due to the huge amount of biological data to be analyzed. Reconfigurable systems can offer custom-computing machines that are orders of magnitude faster than regular software running on general-purpose processors. We present a methodology for using a configware system on an interesting problem of molecular biology: splice junction detection in eukaryote genes. Decision trees were developed using a benchmark of DNA sequences. They were converted into logical equations, simplified, and submitted to Boolean minimization. The resulting circuit was implemented in reconfigurable parallel hardware and evaluated with a five-fold cross-validation procedure run in a second level of parallelism. The average accuracy achieved was 90.41%, and it takes 18 ns to process each data record of 60 nucleotides.
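
The decision-tree-to-logic step described above can be illustrated in a few lines: each path to a positive leaf becomes a product term, and the tree is the OR of those terms, which the Boolean minimization step then simplifies before circuit generation. The tiny three-input tree below is hypothetical, not a splice-junction classifier.

```python
# Hypothetical decision tree encoded as nested tuples:
# (variable_index, subtree_if_0, subtree_if_1); leaves are True/False.
tree = (0,
        (1, False, True),    # x0 = 0: decide on x1
        (2, False, True))    # x0 = 1: decide on x2

def paths_to_true(node, assignment=()):
    """Collect one product term (list of literals) per path reaching True."""
    if isinstance(node, bool):
        return [list(assignment)] if node else []
    var, low, high = node
    return (paths_to_true(low, assignment + ((var, 0),)) +
            paths_to_true(high, assignment + ((var, 1),)))

def as_sum_of_products(terms):
    lits = lambda t: " & ".join(f"x{v}" if b else f"~x{v}" for v, b in t)
    return " | ".join(f"({lits(t)})" for t in terms)

print(as_sum_of_products(paths_to_true(tree)))
# (~x0 & x1) | (x0 & x2)  -- the expression a minimizer would then simplify
```
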