Academic literature on the topic 'Parallel and dynamic reconfigurable computing'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Parallel and dynamic reconfigurable computing.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Parallel and dynamic reconfigurable computing"

1

Schevelev, S. S. "Reconfigurable Modular Computing System." Proceedings of the Southwest State University 23, no. 2 (July 9, 2019): 137–52. http://dx.doi.org/10.21869/2223-1560-2019-23-2-137-152.

Abstract:
Purpose of research. A reconfigurable computer system consists of a computing system and special-purpose computers that are used to solve tasks of vector and matrix algebra and pattern recognition. There are distinctions between matrix and associative systems and neural networks. Matrix computing systems comprise a set of processor units connected through a switching device with multi-module memory. They are designed to solve vector, matrix and data array problems. Associative systems contain a large number of operating devices that can simultaneously process multiple data streams. Neural networks and neurocomputers have high performance when solving problems of expert systems and pattern recognition due to parallel processing of a neural network. Methods. An information graph of the computational process of a reconfigurable modular system was plotted. Structural and functional schemes and algorithms that implement the construction of specialized modules for performing arithmetic and logical operations, search operations and functions for replacing occurrences in processed words were developed. Software for modelling the operation of the arithmetic-symbol processor, specialized computing modules, and switching systems was developed. Results. A block diagram of a reconfigurable computing modular system was developed. The system consists of compatible functional modules, is capable of static and dynamic reconfiguration, and has a parallel connection structure of the processor and computing modules through the use of interface channels. It consists of an arithmetic-symbol processor, specialized computing modules and switching systems; it performs specific tasks of symbolic information processing, arithmetic and logical operations. Conclusion. Systems with a reconfigurable structure are high-performance and highly reliable computing systems that consist of integrated processors in multi-machine and multiprocessor systems. Reconfigurability of the structure provides high system performance due to its adaptation to computational processes and the composition of the processed tasks.
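To make the reconfiguration idea concrete, here is a minimal, purely illustrative Python sketch (not from Shevelev's paper) of a switching system that routes work to interchangeable functional modules and can swap a module at run time; all class, slot, and module names are assumptions made for illustration.

```python
class Module:
    """One specialized functional module (adder, comparator, string matcher, ...)."""
    def __init__(self, name, operation):
        self.name = name
        self.operation = operation      # callable implementing the module's task

    def run(self, *args):
        return self.operation(*args)


class Switch:
    """Switching system connecting the processor to the currently loaded modules."""
    def __init__(self):
        self.slots = {}                 # slot id -> Module

    def configure(self, slot, module):
        # Dynamic reconfiguration: replace the module in one slot without
        # disturbing the others.
        self.slots[slot] = module

    def dispatch(self, slot, *args):
        return self.slots[slot].run(*args)


switch = Switch()
switch.configure("alu", Module("adder", lambda a, b: a + b))
print(switch.dispatch("alu", 2, 3))           # 5

# Later, the same slot is reconfigured for a symbol-processing task.
switch.configure("alu", Module("substring", lambda s, p: p in s))
print(switch.dispatch("alu", "abcde", "cd"))  # True
```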
2

Shevelev, S. S. "Reconfigurable Computing Modular System." Radio Electronics, Computer Science, Control 1, no. 1 (March 31, 2021): 194–207. http://dx.doi.org/10.15588/1607-3274-2021-1-19.

Abstract:
Context. Modern general purpose computers are capable of implementing any algorithm, but when solving certain problems they cannot compete with specialized computing modules in terms of processing speed. Specialized devices have high performance, effectively solve the problems of processing arrays and artificial intelligence tasks, and are used as control devices. The use of specialized microprocessor modules that implement the processing of character strings, logical and numerical values, represented as integers and real numbers, makes it possible to increase the speed of performing arithmetic operations by using parallelism in data processing. Objective. To develop principles for constructing microprocessor modules for a modular computing system with a reconfigurable structure, an arithmetic-symbolic processor, specialized computing devices, switching systems capable of configuring microprocessors and specialized computing modules into a multi-pipeline structure to increase the speed of performing arithmetic and logical operations, and high-speed algorithms for designing specialized processor-accelerators for symbol processing. To develop algorithms and structural and functional diagrams of specialized mathematical modules that perform arithmetic operations in direct codes on neural-like elements, and systems for decentralized control of the operation of blocks. Method. An information graph of the computational process of a modular system with a reconfigurable structure has been built. Structural and functional diagrams and algorithms that implement the construction of specialized modules for performing arithmetic and logical operations, search operations and functions for replacing occurrences in processed words have been developed. Software has been developed for simulating the operation of an arithmetic-symbolic processor, specialized computing modules, and switching systems. Results. A block diagram of a reconfigurable computing modular system has been developed, which consists of compatible functional modules, is capable of static and dynamic reconfiguration, and has a parallel structure for connecting the processor and computing modules through the use of interface channels. The system consists of an arithmetic-symbolic processor, specialized computing modules and switching systems, and performs specific tasks of symbolic information processing, arithmetic and logical operations. Conclusions. The architecture of reconfigurable computing systems can change dynamically during their operation. It becomes possible to adapt the architecture of a computing system to the structure of the problem being solved, and to create problem-oriented computers whose structure corresponds to the structure of the problem being solved. The main computing elements in reconfigurable computing systems are not universal microprocessors but programmable logic integrated circuits, which are combined using high-speed interfaces into a single computing field. Reconfigurable multi-pipeline computing systems based on such fields are an effective tool for solving streaming information processing and control problems.
3

Magalhães Pereira, Monica, and Luigi Carro. "Dynamic Reconfigurable Computing: The Alternative to Homogeneous Multicores under Massive Defect Rates." International Journal of Reconfigurable Computing 2011 (2011): 1–17. http://dx.doi.org/10.1155/2011/452589.

Abstract:
The aggressive scaling of CMOS technology has increased the density and allowed the integration of multiple processors into a single chip. Although solutions based on MPSoC architectures can increase application's speed through TLP exploitation, this speedup is still limited to the amount of parallelism available in the application, as demonstrated by Amdahl's Law. Moreover, with the continuous shrinking of device features, very aggressive defect rates are expected for new technologies. Under high defect rates a large amount of processors of the MPSoC will be susceptible to defects and consequently will fail, not only reducing yield but also severely affecting the expected performance. This paper presents a run-time adaptive architecture that allows software execution even under aggressive defect rates. The proposed architecture can accelerate not only highly parallel applications but also sequential ones, and it is a heterogeneous solution to overcome the performance penalty that is imposed to homogeneous MPSoCs under massive defect rates.
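The Amdahl's Law argument in this abstract can be made concrete with a small worked example. The following Python sketch is an illustration, not the paper's model: it computes the Amdahl speedup for a hypothetical 16-core MPSoC as defects remove working cores; the parallel fraction and core counts are assumed values.

```python
# Amdahl's-law speedup when some cores of a homogeneous MPSoC are lost to defects.
# 'p' is the parallel fraction of the application; 'cores' processors still work.

def amdahl_speedup(p: float, cores: int) -> float:
    """Speedup over a single core for parallel fraction p on 'cores' processors."""
    return 1.0 / ((1.0 - p) + p / cores)

for defect_loss in (0, 4, 8, 12):        # hypothetical numbers of failed cores
    working = 16 - defect_loss
    print(working, "cores ->", round(amdahl_speedup(0.9, working), 2))

# With p = 0.9, dropping from 16 to 4 working cores cuts the speedup from about
# 6.4x to about 3.1x, the kind of degradation an adaptive architecture must absorb.
```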
4

Condia, Josie E. Rodriguez, Pierpaolo Narducci, Matteo Sonza Reorda, and Luca Sterpone. "DYRE: a DYnamic REconfigurable solution to increase GPGPU’s reliability." Journal of Supercomputing 77, no. 10 (March 29, 2021): 11625–42. http://dx.doi.org/10.1007/s11227-021-03751-2.

Abstract:
General-purpose graphics processing units (GPGPUs) are extensively used in high-performance computing. However, it is well known that these devices’ reliability may be limited by the rising of faults at the hardware level. This work introduces a flexible solution to detect and mitigate permanent faults affecting the execution units in these parallel devices. The proposed solution is based on adding some spare modules to perform two in-field operations: detecting and mitigating faults. The solution takes advantage of the regularity of the execution units in the device to avoid significant design changes and reduce the overhead. The proposed solution was evaluated in terms of reliability improvement and area, performance, and power overhead costs. For this purpose, we resorted to a micro-architectural open-source GPGPU model (FlexGripPlus). Experimental results show that the proposed solution can extend the reliability by up to 57%, with overhead costs lower than 2% and 8% in area and power, respectively.
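As a rough illustration of the detect-and-mitigate idea (not the actual DYRE micro-architecture), the Python sketch below remaps execution lanes that an in-field test has flagged as faulty onto a small pool of spare units; the lane count, fault set, and spare names are assumptions.

```python
# Remap work away from execution lanes flagged as faulty, using spare lanes.

def build_lane_map(num_lanes, faulty, spares):
    """Return a mapping lane -> physical unit, steering faulty lanes to spares."""
    spare_pool = list(spares)
    mapping = {}
    for lane in range(num_lanes):
        if lane in faulty:
            if not spare_pool:
                raise RuntimeError("no spare unit left for lane %d" % lane)
            mapping[lane] = spare_pool.pop(0)   # mitigation: reroute to a spare
        else:
            mapping[lane] = lane                # healthy lane keeps its own unit
    return mapping

print(build_lane_map(num_lanes=8, faulty={2, 5}, spares=["S0", "S1"]))
# {0: 0, 1: 1, 2: 'S0', 3: 3, 4: 4, 5: 'S1', 6: 6, 7: 7}
```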
5

Yeh, Pochi, and Claire Gu. "Photorefractive Media for Optical Interconnections." Journal of Nonlinear Optical Physics & Materials 1, no. 1 (January 1992): 167–201. http://dx.doi.org/10.1142/s0218199192000108.

Abstract:
The photorefractive effect and its applications in optical interconnections are described. The fundamental limit, the dynamics of grating formation, and the two-wave mixing (TWM) process are discussed. Reconfigurable interconnections for parallel optical computing are demonstrated using photorefractive holograms. Neural network interconnections with photorefractive media are also presented.
6

Belaid, Ikbel, Fabrice Muller, and Maher Benjemaa. "Static Scheduling of Periodic Hardware Tasks with Precedence and Deadline Constraints on Reconfigurable Hardware Devices." International Journal of Reconfigurable Computing 2011 (2011): 1–28. http://dx.doi.org/10.1155/2011/591983.

Abstract:
Task graph scheduling for reconfigurable hardware devices can be defined as finding a schedule for a set of periodic tasks with precedence, dependence, and deadline constraints as well as their optimal allocations on the available heterogeneous hardware resources. This paper proposes a new methodology comprising three main stages. Using these three main stages, dynamic partial reconfiguration and mixed integer programming, pipelined scheduling and efficient placement are achieved and enable parallel computing of the task graph on the reconfigurable devices by optimizing placement/scheduling quality. Experiments on an application of heterogeneous hardware tasks demonstrate an improvement of resource utilization of 12.45% of the available reconfigurable resources corresponding to a resource gain of 17.3% compared to a static design. The configuration overhead is reduced to 2% of the total running time. Due to pipelined scheduling, the task graph spanning is minimized by 4% compared to sequential execution of the graph.
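To illustrate the kind of precedence-constrained scheduling this paper addresses (the paper itself uses mixed integer programming and dynamic partial reconfiguration), here is a minimal Python list-scheduling sketch that places a four-task graph onto two reconfigurable regions; the task graph, durations, and region count are made up for illustration.

```python
# Topological list scheduling of a tiny task graph onto two regions,
# respecting precedence constraints.

duration = {"A": 3, "B": 2, "C": 4, "D": 1}
preds = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"]}

region_free = [0, 0]          # time at which each reconfigurable region becomes free
finish = {}
for task in ("A", "B", "C", "D"):                 # already in topological order
    ready = max((finish[p] for p in preds[task]), default=0)
    r = min(range(len(region_free)), key=lambda i: region_free[i])
    start = max(ready, region_free[r])
    finish[task] = start + duration[task]
    region_free[r] = finish[task]
    print(f"{task}: region {r}, start {start}, finish {finish[task]}")
# Makespan for this toy graph is 8 time units.
```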
7

Russek, Paweł, Ernest Jamro, Agnieszka Dąbrowska-Boruch, and Kazimierz Wiatr. "A study of the loops control for reconfigurable computing with OpenCL in the LABS local search problem." International Journal of High Performance Computing Applications 34, no. 1 (August 12, 2019): 103–14. http://dx.doi.org/10.1177/1094342019868515.

Abstract:
In this article, we study the steepest descent local search (SDLS) algorithm that is used as the improvement step in memetic algorithms for the search of low autocorrelation binary sequences (LABS). The algorithm is a good fit for reconfigurable computing on field programmable gate arrays (FPGAs), as it features integer operations, bit-wise data representation, a regular execution flow, and huge computational complexity. It contains four levels of nested loops, but its direct parallel implementation as a custom processor leads to typical problems because the loops have dynamic ranges and too many iterations. This inhibits the simple parallel data path that is typically produced by loop unrolling. We examine four architectures that mitigate these obstacles and provide the results of their implementation. The solutions take advantage of loop pipelining, reordering of the loops, and dynamic reconfiguration. We used the recently available OpenCL (OCL) platform for FPGAs to draw practical conclusions. The proposals are characterized by their performance and the problem sizes they can accommodate. Consequently, the speed/size trade-off is highlighted, as FPGA size is a design constraint. The performance of the FPGA-based solutions is compared to CPU speed, and the maximum reported speed-up is 750. Readers can further develop and/or use the presented OCL solutions for efficient LABS discovery, as we provide the corresponding software repository.
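For readers unfamiliar with LABS, the objective being minimized is the autocorrelation energy E(S) = sum over k of C_k(S)^2, where C_k is the aperiodic autocorrelation of the +/-1 sequence S. The following Python sketch is a plain software reference of the SDLS improvement step, not the article's FPGA/OpenCL design; the starting sequence is arbitrary.

```python
# Steepest-descent local search for LABS: repeatedly flip the single bit
# that lowers the autocorrelation energy the most.

def energy(s):
    n = len(s)
    return sum(sum(s[i] * s[i + k] for i in range(n - k)) ** 2 for k in range(1, n))

def sdls_step(s):
    """Flip the best single bit; return (sequence, improved_flag)."""
    best_e, best_i = energy(s), None
    for i in range(len(s)):
        s[i] = -s[i]                      # trial flip
        e = energy(s)
        if e < best_e:
            best_e, best_i = e, i
        s[i] = -s[i]                      # undo the trial flip
    if best_i is None:
        return s, False                   # local optimum reached
    s[best_i] = -s[best_i]
    return s, True

s = [1, -1, 1, 1, -1, -1, 1, -1]
improved = True
while improved:
    s, improved = sdls_step(s)
print(s, energy(s))
```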
8

Assuncao, Luis, Carlos Goncalves, and Jose C. Cunha. "Autonomic Workflow Activities." International Journal of Adaptive, Resilient and Autonomic Systems 5, no. 2 (April 2014): 57–82. http://dx.doi.org/10.4018/ijaras.2014040104.

Abstract:
Workflows have been successfully applied to express the decomposition of complex scientific applications. This has motivated many initiatives to develop scientific workflow tools. However, the existing tools still lack adequate support for important aspects, namely decoupling the enactment engine from the workflow task specification, decentralizing the control of workflow activities, and allowing tasks to run autonomously on distributed infrastructures, for instance on Clouds. Furthermore, many workflow tools only support the execution of Directed Acyclic Graphs (DAGs) without the concept of iterations, in which activities execute millions of iterations over long periods of time, and without support for dynamic workflow reconfiguration after a given iteration. We present the AWARD (Autonomic Workflow Activities Reconfigurable and Dynamic) model of computation, based on the Process Networks model, where the workflow activities (AWAs) are autonomic processes with independent control that can run in parallel on distributed infrastructures, e.g., on Clouds. Each AWA executes a Task developed as a Java class that implements a generic interface, allowing end-users to code their applications without concern for low-level details. The data-driven coordination of AWA interactions is based on a shared tuple space that also supports dynamic workflow reconfiguration and monitoring of workflow execution. We describe how AWARD supports dynamic reconfiguration and discuss typical workflow reconfiguration scenarios. For evaluation, we describe experimental results of AWARD workflow executions in several application scenarios, mapped to a small dedicated cluster and the Amazon Elastic Compute Cloud (EC2).
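As a rough sketch of the tuple-space coordination idea (an assumption about the general mechanism, not the AWARD implementation, which targets Java tasks on Clouds), the Python example below couples a producer activity and a consumer activity only through a shared tuple store, so either side could be replaced or redeployed independently.

```python
# Two activities coordinated only through a shared tuple space: no direct link
# between producer and consumer.

import queue
import threading

tuple_space = queue.Queue()               # stands in for the shared tuple space

def producer(iterations):
    for i in range(iterations):
        tuple_space.put(("data", i, i * i))   # publish a tuple
    tuple_space.put(("stop",))                # termination token

def consumer(results):
    while True:
        t = tuple_space.get()                 # take the next tuple
        if t[0] == "stop":
            break
        results.append(t[2])

results = []
threads = [threading.Thread(target=producer, args=(5,)),
           threading.Thread(target=consumer, args=(results,))]
for th in threads:
    th.start()
for th in threads:
    th.join()
print(results)   # [0, 1, 4, 9, 16]
```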
9

McArdle, N., M. Naruse, H. Toyoda, Y. Kobayashi, and M. Ishikawa. "Reconfigurable optical interconnections for parallel computing." Proceedings of the IEEE 88, no. 6 (June 2000): 829–37. http://dx.doi.org/10.1109/5.867696.

10

El-Boghdadi, Hatem M. "Dynamic-width reconfigurable parallel prefix circuits." Journal of Supercomputing 71, no. 4 (January 1, 2015): 1177–95. http://dx.doi.org/10.1007/s11227-014-1270-2.


Dissertations / Theses on the topic "Parallel and dynamic reconfigurable computing"

1

Viswanathan, Venkatasubramanian. "Une architecture évolutive flexible et reconfigurable dynamiquement pour les systèmes embarqués haute performance." Thesis, Valenciennes, 2015. http://www.theses.fr/2015VALE0029.

Abstract:
In this thesis, we propose a scalable and customizable reconfigurable computing platform, with a parallel full-duplex switched communication network and a software execution model, to redefine the computation, communication and reconfiguration paradigms in High Performance Embedded Computing (HPEC) systems. HPEC applications are becoming highly sophisticated and resource-consuming for three reasons. First, they should capture and process real-time data from several I/O sources in parallel. Second, they should adapt their functionalities according to application or environment variations within given Size, Weight and Power (SWaP) constraints. Third, since they process several parallel I/O sources, applications are often distributed on multiple computing nodes, making them highly parallel. Due to the hardware parallelism and I/O bandwidth offered by Field Programmable Gate Arrays (FPGAs), an application can be duplicated several times to process parallel I/Os, making Single Program Multiple Data (SPMD) the favorite execution model for designers implementing parallel architectures on FPGAs. Furthermore, the Dynamic Partial Reconfiguration (DPR) feature allows efficient reuse of limited hardware resources, making FPGAs a highly attractive solution for such applications. The problem with current HPEC systems is that they are usually built to meet the needs of a specific application, i.e., they lack the flexibility to upgrade the system or reuse existing hardware resources. On the other hand, the applications that run on such hardware architectures are constantly being upgraded. Thus there is a real need for flexible and scalable hardware architectures and parallel execution models in order to easily upgrade the system and reuse hardware resources within acceptable time bounds. These applications therefore face challenges such as obsolescence, hardware redesign cost, sequential and slow reconfiguration, and wastage of computing power. Addressing these challenges, we propose an architecture that allows the customization of computing nodes (FPGAs), the broadcast of data (I/O, bitstreams) and the reconfiguration of several computing nodes, or a subset of them, in parallel. The software environment leverages the potential of the hardware switch to provide support for the SPMD execution model. Finally, in order to demonstrate the benefits of our architecture, we have implemented a scalable distributed secure H.264 encoding application along with several avionic communication protocols for data and control transfers between the nodes. We have used an FMC-based high-speed serial Front Panel Data Port (sFPDP) data acquisition protocol to capture, encode and encrypt RAW video streams. The system has been implemented on 3 different FPGAs, respecting the SPMD execution model. In addition, we have also implemented modular I/Os by swapping I/O protocols dynamically when required by the system. We have thus demonstrated a scalable and flexible architecture and a parallel run-time reconfiguration model in order to manage several parallel input video sources. These results represent a conceptual proof of a massively parallel, dynamically reconfigurable next-generation embedded computer.
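The SPMD and broadcast-reconfiguration ideas can be illustrated in plain software. The Python sketch below is illustrative only (the thesis targets FPGAs and hardware bitstreams): it broadcasts one "program" to several node objects, lets each apply it to its own input stream, then swaps the program on all nodes at once; node names, streams, and functions are assumptions.

```python
# Single Program Multiple Data: one broadcast configuration, many parallel streams.

from concurrent.futures import ThreadPoolExecutor

class Node:
    def __init__(self, name):
        self.name, self.program = name, None

    def load(self, program):          # stands in for receiving a broadcast bitstream
        self.program = program

    def process(self, stream):
        return [self.program(x) for x in stream]

nodes = [Node(f"fpga{i}") for i in range(3)]
streams = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]        # parallel I/O sources

def broadcast(program):               # "reconfigure" every node at once
    for n in nodes:
        n.load(program)

broadcast(lambda x: x * 2)            # first configuration
with ThreadPoolExecutor() as pool:
    print(list(pool.map(lambda p: p[0].process(p[1]), zip(nodes, streams))))

broadcast(lambda x: x + 100)          # swap the function on every node in parallel
with ThreadPoolExecutor() as pool:
    print(list(pool.map(lambda p: p[0].process(p[1]), zip(nodes, streams))))
```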
2

Surendiranath, Sudha. "Accelerating DNA Sequential Analysis Exploiting Parallel Hardware and Reconfigurable Computing." University of Cincinnati / OhioLINK, 2005. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1131856327.

3

Jacob, Aju. "Distributed configuration management for reconfigurable cluster computing." [Gainesville, Fla.] : University of Florida, 2004. http://purl.fcla.edu/fcla/etd/UFE0007181.

4

Huang, Jian. "RECONFIGURABLE COMPUTING FOR VIDEO CODING." Doctoral diss., University of Central Florida, 2010. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/4301.

Abstract:
Video coding is widely used in our daily life. Due to its high computational complexity, hardware implementation is usually preferred. In this research, we investigate both the ASIC hardware design approach and the reconfigurable hardware design approach for video coding applications. First, we present a unified architecture that can perform the Discrete Cosine Transform (DCT), the Inverse Discrete Cosine Transform (IDCT), and DCT-domain motion estimation and compensation (DCT-ME/MC). Our proposed architecture is a Wavefront Array-based Processor with a highly modular structure consisting of 8*8 Processing Elements (PEs). By utilizing statistical properties and arithmetic operations, it can be used as a high-performance hardware accelerator for video transcoding applications. We show how different core algorithms can be mapped onto the same hardware fabric and executed through the pre-defined PEs. In addition to the simplified design process of the proposed architecture and the savings in hardware resources, we also demonstrate that a high throughput rate can be achieved for IDCT and DCT-MC by fully utilizing the sparseness property of the DCT coefficient matrix. Compared to a fixed hardware architecture using the ASIC design approach, the reconfigurable hardware design approach has higher flexibility, lower cost, and faster time-to-market. We propose a self-reconfigurable platform which can reconfigure the architecture of DCT computations during run-time using dynamic partial reconfiguration. The scalable architecture for DCT computations can compute different numbers of DCT coefficients in the zig-zag scan order to adapt to different requirements, such as power consumption, hardware resources, and performance. We propose a configuration manager, implemented in the embedded processor, to adaptively control the reconfiguration of the scalable DCT architecture during run-time. In addition, we use the LZSS algorithm for compression of the partial bitstreams and on-chip BlockRAM as a cache to reduce the latency overhead of loading the partial bitstreams from off-chip memory for run-time reconfiguration. A hardware module is designed for parallel reconfiguration of the partial bitstreams. The experimental results show that our approach can reduce external memory accesses by 69% and achieve a 400 MBytes/s reconfiguration rate. A prediction algorithm for zero quantized DCT (ZQDCT) is used to control the run-time reconfiguration of the proposed scalable architecture, and 12 different modes of DCT computation, including zonal coding, multi-block processing, and parallel-sequential stage modes, are supported to reduce power consumption, required hardware resources, and computation time with a small quality degradation. Detailed trade-offs of power, throughput, and quality are investigated and used as a criterion for self-reconfiguration to meet the requirements set by the users.
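To illustrate the zonal-coding knob the scalable DCT architecture exposes (a software analogy, not the thesis RTL), the Python sketch below computes only the first k coefficients of an 8x8 DCT-II in zig-zag order; the block contents and the truncated zig-zag table are illustrative, with only the first few scan entries listed.

```python
# Partial 8x8 DCT-II: compute only the first k coefficients in zig-zag order,
# trading quality against computation (zonal coding).

import math

def dct_coeff(block, u, v, n=8):
    cu = math.sqrt(1 / n) if u == 0 else math.sqrt(2 / n)
    cv = math.sqrt(1 / n) if v == 0 else math.sqrt(2 / n)
    return cu * cv * sum(
        block[x][y]
        * math.cos((2 * x + 1) * u * math.pi / (2 * n))
        * math.cos((2 * y + 1) * v * math.pi / (2 * n))
        for x in range(n) for y in range(n))

# Beginning of the standard 8x8 zig-zag scan (the full table has 64 entries).
ZIGZAG_PREFIX = [(0, 0), (0, 1), (1, 0), (2, 0), (1, 1), (0, 2)]

def partial_dct(block, k):
    """Return only the first k coefficients in zig-zag order."""
    return [dct_coeff(block, u, v) for u, v in ZIGZAG_PREFIX[:k]]

block = [[(x + y) % 8 for y in range(8)] for x in range(8)]
print([round(c, 2) for c in partial_dct(block, 4)])
```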
Ph.D.
5

Varvarigos, Emmanouel A. "Static and dynamic communication in parallel computing." Thesis, Massachusetts Institute of Technology, 1992. http://hdl.handle.net/1721.1/12868.

Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1992.
Includes bibliographical references (p. 186-191).
by Emmanouel A. Varvarigos.
Ph.D.
6

Phan, Cong-Vinh. "Formal aspects of dynamic reconfigurability in reconfigurable computing systems." Thesis, London South Bank University, 2006. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.435200.

7

Pandey, Ankur. "A Multithreaded Runtime Support Environment for Dynamic Reconfigurable Computing." University of Cincinnati / OhioLINK, 2002. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1026133065.

8

Surendiranath, Sudha. "Accelerating DNA sequential analysis through exploiting parallel hardware and reconfigurable computing." Cincinnati, Ohio : University of Cincinnati, 2005. http://www.ohiolink.edu/etd/view.cgi?acc%5Fnum=ucin1131856327.

9

Thorndike, David Andrew. "A Multicore Computing Platform for Benchmarking Dynamic Partial Reconfiguration Based Designs." Case Western Reserve University School of Graduate Studies / OhioLINK, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=case1338933284.

10

Craven, Stephen Douglas. "Structured Approach to Dynamic Computing Application Development." Diss., Virginia Tech, 2008. http://hdl.handle.net/10919/27730.

Abstract:
The ability of some configurable logic devices to modify their hardware during operation has long held great potential to increase performance and reduce device cost. However, despite many research projects and a decade of research, the dynamic reconfiguration of Field Programmable Gate Arrays (FPGAs) is still very much an art practiced by few. Previous attempts to automate the many low-level details that complicate Run-Time Reconfigurable (RTR) application development suffer severe limitations. This dissertation describes a comprehensive approach to dynamic hardware development, providing a designer with appropriate models for computation, communication, and reconfiguration integrated with a high-level design environment. In this way, many manual and time consuming tasks associated with partial reconfiguration are hidden, permitting a designer to focus instead on a design's behavior. This design and implementation environment has been validated on a variety of relevant applications, quantifying the effects of high-level design.
Ph. D.

Books on the topic "Parallel and dynamic reconfigurable computing"

1

Sanderson, Arthur C., ed. Tetrobot: A modular approach to reconfigurable parallel robotics. Boston: Kluwer Academic Publishers, 1998.

2

Carro, Luigi, ed. Dynamic reconfigurable architectures and transparent optimization techniques: Automatic acceleration of software execution. Dordrecht: Springer, 2010.

3

Sanderson, Arthur C., and Gregory J. Hamlin. Tetrobot A Modular Approach to Reconfigurable Parallel Robotics (The International Series in Engineering and Computer Science). Springer, 1997.

4

Wang, Lizhe, Jinjun Chen, and Wei Jie, eds. Quantitative quality of service for grid computing: Applications for heterogeneity, large-scale distribution, and dynamic environments. Hershey, PA: Information Science Reference, 2009.


Book chapters on the topic "Parallel and dynamic reconfigurable computing"

1

Ferreira, Mário Lopes, João Canas Ferreira, and Michael Huebner. "A Parallel-Pipelined OFDM Baseband Modulator with Dynamic Frequency Scaling for 5G Systems." In Applied Reconfigurable Computing. Architectures, Tools, and Applications, 511–22. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-78890-6_41.

2

Buchty, Rainer, David Kramer, Mario Kicherer, and Wolfgang Karl. "A Light-Weight Approach to Dynamical Runtime Linking Supporting Heterogenous, Parallel, and Reconfigurable Architectures." In Architecture of Computing Systems – ARCS 2009, 60–71. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-00454-4_9.

3

von Praun, Christoph, Jeremy T. Fineman, Charles E. Leiserson, Efstratios Gallopoulos, Marc Snir, Michael Heath, et al. "Reconfigurable Computer." In Encyclopedia of Parallel Computing, 1728. Boston, MA: Springer US, 2011. http://dx.doi.org/10.1007/978-0-387-09766-4_2292.

4

von Praun, Christoph, Jeremy T. Fineman, Charles E. Leiserson, Efstratios Gallopoulos, Marc Snir, Michael Heath, et al. "Reconfigurable Computers." In Encyclopedia of Parallel Computing, 1728. Boston, MA: Springer US, 2011. http://dx.doi.org/10.1007/978-0-387-09766-4_4.

5

Falsafi, Babak, Samuel Midkiff, Jack B. Dennis, Amol Ghoting, Roy H. Campbell, Christof Klausecker, et al. "Dynamic LPAR." In Encyclopedia of Parallel Computing, 592. Boston, MA: Springer US, 2011. http://dx.doi.org/10.1007/978-0-387-09766-4_2253.

6

Falsafi, Babak, Samuel Midkiff, Jack B. Dennis, Amol Ghoting, Roy H. Campbell, Christof Klausecker, et al. "Dynamic Reconfiguration." In Encyclopedia of Parallel Computing, 592. Boston, MA: Springer US, 2011. http://dx.doi.org/10.1007/978-0-387-09766-4_2254.

7

Huang, Wanjun, Xiaohua Fan, and Christoph Meinel. "A CORBA-Based Dynamic Reconfigurable Middleware." In Networking and Mobile Computing, 1208–17. Berlin, Heidelberg: Springer Berlin Heidelberg, 2005. http://dx.doi.org/10.1007/11534310_126.

8

Salleh, Shaharuddin, and Albert Y. Zomaya. "Dynamic Scheduling." In Scheduling in Parallel Computing Systems, 93–125. Boston, MA: Springer US, 1999. http://dx.doi.org/10.1007/978-1-4615-5065-5_5.

9

Oh, Yeong-Jae, Hanho Lee, and Chong-Ho Lee. "Dynamic Partial Reconfigurable FIR Filter Design." In Reconfigurable Computing: Architectures and Applications, 30–35. Berlin, Heidelberg: Springer Berlin Heidelberg, 2006. http://dx.doi.org/10.1007/11802839_5.

10

Fukuda, Masahiro, and Yasushi Inoguchi. "FPGA-Based Parallel Pattern Matching." In Applied Reconfigurable Computing. Architectures, Tools, and Applications, 192–203. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-78890-6_16.


Conference papers on the topic "Parallel and dynamic reconfigurable computing"

1

Petrovsky, A. "Dynamic algorithm transforms for reconfigurable real-time audio coding processor." In Proceedings International Conference on Parallel Computing in Electrical Engineering. IEEE, 2002. http://dx.doi.org/10.1109/pcee.2002.1115317.

2

Saadat, Khalil, Ning Wang, Xinpeng Wei, Bin Da, and Rahim Tafazolli. "Reconfigurable Blockchains for Dynamic Cluster-based Applications." In 2020 IEEE Intl Conf on Parallel & Distributed Processing with Applications, Big Data & Cloud Computing, Sustainable Computing & Communications, Social Computing & Networking (ISPA/BDCloud/SocialCom/SustainCom). IEEE, 2020. http://dx.doi.org/10.1109/ispa-bdcloud-socialcom-sustaincom51426.2020.00142.

3

Subramaniyan, Rajagopal, Ian Troxel, Alan D. George, and Melissa Smith. "Simulative analysis of dynamic scheduling heuristics for reconfigurable computing of parallel applications." In the international symposium. New York, New York, USA: ACM Press, 2006. http://dx.doi.org/10.1145/1117201.1117249.

4

Laskowski, Eryk, and Marek Tudruj. "Optimized Communication Control in Programs for Dynamic Look-Ahead Reconfigurable SoC Systems." In 2008 International Symposium on Parallel and Distributed Computing. IEEE, 2008. http://dx.doi.org/10.1109/ispdc.2008.54.

5

Hsieh, Fu-Shiung. "A Meta-Heuristic Approach for Dynamic Process Planning in Reconfigurable Manufacturing Systems." In 2017 18th International Conference on Parallel and Distributed Computing, Applications and Technologies (PDCAT). IEEE, 2017. http://dx.doi.org/10.1109/pdcat.2017.00035.

6

Hu, Yang, and Chen Hang. "A Dynamic Reconfigurable Adaptive Software Architecture for Federate in HLA-based Simulation." In Eighth ACIS International Conference on Software Engineering, Artificial Intelligence, Networking, and Parallel/Distributed Computing (SNPD 2007). IEEE, 2007. http://dx.doi.org/10.1109/snpd.2007.314.

7

Tutsch, Dietmar. "Reconfigurable parallel computing." In 2010 1st International Conference on Parallel, Distributed and Grid Computing (PDGC 2010). IEEE, 2010. http://dx.doi.org/10.1109/pdgc.2010.5679961.

8

El-Boghdadi, Hatem M. "Dynamic-Width Reconfigurable Parallel Prefix Circuits." In 2013 IEEE 16th International Conference on Computational Science and Engineering (CSE). IEEE, 2013. http://dx.doi.org/10.1109/cse.2013.27.

9

Li, Jian, Xiangjing An, Lei Ye, and Hangen He. "A Reconfigurable Parallel Architecture for Image Computing." In 2006 6th World Congress on Intelligent Control and Automation. IEEE, 2006. http://dx.doi.org/10.1109/wcica.2006.1714060.

10

Mould, N. A., B. F. Veale, M. P. Tull, and J. K. Antonio. "Dynamic configuration steering for a reconfigurable superscalar processor." In Proceedings 20th IEEE International Parallel & Distributed Processing Symposium. IEEE, 2006. http://dx.doi.org/10.1109/ipdps.2006.1639456.


Reports on the topic "Parallel and dynamic reconfigurable computing"

1

Korkali, Mert, Steve Smith, and Liang Min. Parallel Computing for Massive Dynamic Contingency Analysis. Office of Scientific and Technical Information (OSTI), April 2019. http://dx.doi.org/10.2172/1544923.

