To see the other types of publications on this topic, follow the link: Data processors.

Journal articles on the topic 'Data processors'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Select a source type:

Consult the top 50 journal articles for your research on the topic 'Data processors.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Browse journal articles in a wide variety of disciplines and organise your bibliography correctly.

1

Burnett, Rachel. "Data controllers and data processors." ITNOW 47, no. 5 (September 1, 2005): 34. http://dx.doi.org/10.1093/itnow/bwi108.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Sinharoy, Balaram. "Compiler Optimization to Improve Data Locality for Processor Multithreading." Scientific Programming 7, no. 1 (1999): 21–37. http://dx.doi.org/10.1155/1999/235625.

Abstract:
Over the last decade, processor speed has increased dramatically, whereas the speed of the memory subsystem has improved at a modest rate. Due to the increase in cache miss latency (in terms of processor cycles), processors stall on cache misses for a significant portion of their execution time. Multithreaded processors have been proposed in the literature to reduce the processor stall time due to cache misses. Although multithreading improves processor utilization, it may also increase cache miss rates, because in a multithreaded processor multiple threads share the same cache, which effectively reduces the cache size available to each individual thread. Increased processor utilization and the increase in the cache miss rate demand higher memory bandwidth. A novel compiler optimization method is presented in this paper that improves data locality for each of the threads and enhances data sharing among the threads. The method is based on loop transformation theory and optimizes both spatial and temporal data locality. The created threads exhibit a high level of intra-thread and inter-thread data locality, which effectively reduces both the data cache miss rates and the total execution time of numerically intensive computations running on a multithreaded processor.
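The kind of loop transformation the abstract refers to can be illustrated with a generic example. The sketch below is not the paper's algorithm; it shows loop tiling (blocking), a standard transformation from the same loop transformation theory, applied to a matrix transpose so that data is touched in cache-sized blocks:

```python
# Hypothetical illustration (not the paper's method): loop tiling applied
# to a matrix transpose, so both arrays are touched in cache-sized blocks.

def transpose_naive(a, n):
    """Row-major traversal: column-strided writes, poor cache locality."""
    b = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            b[j][i] = a[i][j]
    return b

def transpose_tiled(a, n, tile=4):
    """Tiled traversal: the same loop nest, restructured so each block of
    reads and writes stays cache-resident before moving on."""
    b = [[0] * n for _ in range(n)]
    for ii in range(0, n, tile):
        for jj in range(0, n, tile):
            for i in range(ii, min(ii + tile, n)):
                for j in range(jj, min(jj + tile, n)):
                    b[j][i] = a[i][j]
    return b

n = 8
a = [[i * n + j for j in range(n)] for i in range(n)]
same = transpose_naive(a, n) == transpose_tiled(a, n)  # identical results
```

The two loop nests compute the same result; only the iteration order (and hence the cache behaviour) differs, which is the essence of a locality-improving transformation.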
3

ZIMAN, MÁRIO, and VLADIMÍR BUŽEK. "REALIZATION OF UNITARY MAPS VIA PROBABILISTIC PROGRAMMABLE QUANTUM PROCESSORS." International Journal of Quantum Information 01, no. 04 (December 2003): 527–41. http://dx.doi.org/10.1142/s0219749903000401.

Abstract:
We analyze probabilistic realizations of programmable quantum processors that allow us to realize unitary operations on qubits as well as on qudits. Programmable processors are composed of two inputs: the data register and the program register. Information about the operation to be performed on the data is encoded in the input state of the program register. At the output of the probabilistic processor, a measurement over the program register is performed. An intrinsic property of probabilistic processors is that they sometimes fail, but we know when this happens. We present a complete analysis of two processors: (1) the so-called [Formula: see text] processor, which is based on a simple controlled-NOT gate, and (2) the so-called [Formula: see text] processor, which utilizes the quantum-information-distributor circuit.
4

BRUCK, JEHOSHUA, and CHING-TIEN HO. "EFFICIENT GLOBAL COMBINE OPERATIONS IN MULTI-PORT MESSAGE-PASSING SYSTEMS." Parallel Processing Letters 03, no. 04 (December 1993): 335–46. http://dx.doi.org/10.1142/s012962649300037x.

Abstract:
We present a class of efficient algorithms for global combine operations in k-port message-passing systems. In the k-port communication model, in each communication round, a processor can send data to k other processors and simultaneously receive data from k other processors. We consider algorithms for global combine operations in n processors with respect to a commutative and associative reduction function. Initially, each processor holds a vector of m data items and finally the result of the reduction function over the n vectors of data items, which is also a vector of m data items, is known to all n processors. We present three efficient algorithms that employ various trade-offs between the number of communication rounds and the number of data items transferred in sequence. For the case m=1, we have an algorithm which is optimal in both the number of communication rounds and the number of data items transferred in sequence.
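As a hedged illustration of the round-based communication model (not the paper's k-port algorithms), the sketch below simulates the classic 1-port recursive-doubling allreduce for the m = 1 case, which completes in log2(n) communication rounds:

```python
# Sketch of a round-based global combine (1-port recursive doubling, m = 1),
# simulated sequentially; the paper's k-port algorithms generalize this model.

def allreduce_recursive_doubling(values, op):
    """Each of n processors starts with one item; after log2(n) rounds every
    processor knows op(...) applied over all n items."""
    n = len(values)
    assert n > 0 and n & (n - 1) == 0, "n must be a power of two"
    vals = list(values)
    step = 1
    while step < n:
        # In each round, processor p exchanges its partial result with p XOR step.
        vals = [op(vals[p], vals[p ^ step]) for p in range(n)]
        step <<= 1
    return vals

result = allreduce_recursive_doubling(list(range(8)), lambda a, b: a + b)
```

With n = 8 and addition as the commutative, associative reduction function, three rounds suffice and every simulated processor ends up holding the global sum.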
5

Suh, Ilhyun, and Yon Dohn Chung. "A Workload Assignment Strategy for Efficient ROLAP Data Cube Computation in Distributed Systems." International Journal of Data Warehousing and Mining 12, no. 3 (July 2016): 51–71. http://dx.doi.org/10.4018/ijdwm.2016070104.

Abstract:
The data cube plays a key role in the analysis of multidimensional data. Nowadays, the explosive growth of multidimensional data has made distributed solutions important for data cube computation. Among the architectures for distributed processing, the shared-nothing architecture is known to have the best scalability. However, frequent and massive network communication among the processors can be a performance bottleneck in shared-nothing distributed processing. Therefore, suppressing the amount of data transmitted among the processors can be an effective strategy for improving overall performance. In addition, it is important to divide the workload and distribute it evenly across the processors. In this paper, the authors present a distributed algorithm for data cube computation that can be adopted in shared-nothing systems. The proposed algorithm gains efficiency by adopting a workload assignment strategy that simultaneously reduces the total network cost and allocates the workload evenly to each processor.
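For readers unfamiliar with data cubes, the following hypothetical single-machine sketch (not the authors' distributed algorithm) computes all 2^d group-by cuboids of a tiny fact table; the paper's contribution is how to split exactly this workload across shared-nothing processors:

```python
# Hypothetical sketch: aggregate a measure over every subset of dimensions
# (2^d cuboids), i.e. the full data cube, on a single machine.
from collections import defaultdict
from itertools import combinations

def compute_cube(rows, dims, measure):
    cube = {}
    for k in range(len(dims) + 1):
        for subset in combinations(dims, k):
            agg = defaultdict(int)
            for row in rows:
                # Group rows by their values on this subset of dimensions.
                agg[tuple(row[d] for d in subset)] += row[measure]
            cube[subset] = dict(agg)
    return cube

rows = [
    {"city": "NY", "year": 2020, "sales": 3},
    {"city": "NY", "year": 2021, "sales": 4},
    {"city": "LA", "year": 2020, "sales": 5},
]
cube = compute_cube(rows, ("city", "year"), "sales")
```

Each inner pass over the rows is independent work per cuboid, which is why cuboid-level workload assignment across processors, as studied in the paper, is a natural partitioning axis.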
6

Barnard, Richard C., Kai Huang, and Cory Hauck. "A mathematical model of asynchronous data flow in parallel computers*." IMA Journal of Applied Mathematics 85, no. 6 (September 25, 2020): 865–91. http://dx.doi.org/10.1093/imamat/hxaa031.

Abstract:
We present a simplified model of data flow on processors in a high-performance computing framework involving computations that necessitate inter-processor communications. From this ordinary differential equation model, we take its asymptotic limit, resulting in a model which treats the computer as a continuum of processors and data flow as an Eulerian fluid governed by a conservation law. We derive a Hamilton–Jacobi equation associated with this conservation law for which the existence and uniqueness of solutions can be proven. We then present the results of numerical experiments for both the discrete and continuum models; these show a qualitative agreement between the two and the effect of variations in the computing environment's processing capabilities on the progress of the modelled computation.
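The paper's specific flux function is not given in the abstract; as a generic illustration of the relationship it describes, a scalar conservation law for a density ρ(x, t) yields a Hamilton–Jacobi equation for the potential u with ρ = ∂ₓu:

```latex
% Generic forms only; the paper's particular flux f is not given in the abstract.
\partial_t \rho + \partial_x f(\rho) = 0,
\qquad \rho = \partial_x u
\quad\Longrightarrow\quad
\partial_t u + f(\partial_x u) = 0
```

Integrating the conservation law in x and identifying the antiderivative with u gives the Hamilton–Jacobi form, for which viscosity-solution theory supplies existence and uniqueness results of the kind the abstract mentions.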
7

Molyakov, Andrey. "Main Scientific and Technological Problems in the Field of Architectural Solutions for Supercomputers." Computer and Information Science 13, no. 3 (July 24, 2020): 89. http://dx.doi.org/10.5539/cis.v13n3p89.

Abstract:
In this paper, the author describes the creation of a domestic accelerator processor capable of replacing NVIDIA GPGPU graphics processors for solving scientific and technical problems and other tasks that require high performance but are characterized by good or medium localization of the processed data. Moreover, the paper describes the creation of a domestic processor or processors for solving the problems of creating information systems for processing big data, as well as tasks of artificial intelligence (deep learning, graph processing, and others). These processors are characterized by intensive irregular work with memory (poor and extremely poor localization of data), while requiring high energy efficiency. The author points out the need for a systematic approach and for training young specialists to support innovative research.
8

Aasaraai, Kaveh, and Andreas Moshovos. "NCOR: An FPGA-Friendly Nonblocking Data Cache for Soft Processors with Runahead Execution." International Journal of Reconfigurable Computing 2012 (2012): 1–12. http://dx.doi.org/10.1155/2012/915178.

Abstract:
Soft processors often use data caches to reduce the gap between processor and main memory speeds. To achieve high efficiency, simple, blocking caches are used. Such caches are not appropriate for processor designs such as Runahead and out-of-order execution that require non-blocking caches to tolerate main memory latencies. Instead, these processors use non-blocking caches to extract memory-level parallelism and improve performance. However, conventional non-blocking cache designs are expensive and slow on FPGAs as they use content-addressable memories (CAMs). This work proposes NCOR, an FPGA-friendly non-blocking cache that exploits the key properties of Runahead execution. NCOR does not require CAMs and utilizes smart cache controllers. A 4 KB NCOR operates at 329 MHz on Stratix III FPGAs while using only 270 logic elements. A 32 KB NCOR operates at 278 MHz and uses 269 logic elements.
9

DEHNE, FRANK, and HAMIDREZA ZABOLI. "PARALLEL CONSTRUCTION OF DATA CUBES ON MULTI-CORE MULTI-DISK PLATFORMS." Parallel Processing Letters 23, no. 01 (March 2013): 1350002. http://dx.doi.org/10.1142/s0129626413500023.

Abstract:
On-line Analytical Processing (OLAP) has become one of the most powerful and prominent technologies for knowledge discovery in VLDB (Very Large Database) environments. Central to the OLAP paradigm is the data cube, a multidimensional hierarchy of aggregate values that provides a rich analytical model for decision support. Various sequential algorithms for the efficient generation of the data cube have appeared in the literature. However, given the size of contemporary data warehousing repositories, multi-processor solutions are crucial for the massive computational demands of current and future OLAP systems. In this paper we discuss the development of MCMD-CUBE, a new parallel data cube construction method for multi-core processors with multiple disks. We present experimental results for a Sandy Bridge multi-core processor with four parallel disks. Our experiments indicate that MCMD-CUBE achieves very close to linear speedup. A critical part of our MCMD-CUBE method is parallel sorting. We developed a new parallel sorting method termed MCMD-SORT for multi-core processors with multiple disks, which outperforms previous methods.
10

Plantin, Jean-Christophe. "The data archive as factory: Alienation and resistance of data processors." Big Data & Society 8, no. 1 (January 2021): 205395172110075. http://dx.doi.org/10.1177/20539517211007510.

Abstract:
Archival data processing consists of cleaning and formatting data between the moment a dataset is deposited and its publication on the archive’s website. In this article, I approach data processing by combining scholarship on invisible labor in knowledge infrastructures with a Marxian framework and show the relevance of considering data processing as factory labor. Using this perspective to analyze ethnographic data collected during a six-month participatory observation at a U.S. data archive, I generate a taxonomy of the forms of alienation that data processing generates, but also the types of resistance that processors develop, across four categories: routine, speed, skill, and meaning. This synthetic approach demonstrates, first, that data processing reproduces typical forms of factory worker’s alienation: processors are asked to work along a strict standardized pipeline, at a fast pace, without acquiring substantive skills or having a meaningful involvement in their work. It reveals, second, how data processors resist the alienating nature of this workflow by developing multiple tactics along the same four categories. Seen through this dual lens, data processors are therefore not only invisible workers, but also factory workers who follow and subvert a workflow organized as an assembly line. I conclude by proposing a four-step framework to better value the social contribution of data workers beyond the archive.
11

MONGENET, CATHERINE. "DATA COMPILING FOR SYSTEMS OF UNIFORM RECURRENCE EQUATIONS." Parallel Processing Letters 04, no. 03 (September 1994): 245–57. http://dx.doi.org/10.1142/s0129626494000247.

Abstract:
This paper presents techniques to compile systems of recurrence equations into parallel programs defined by a set of virtual processors connected via a regular network and by the communications between these processors. These techniques are founded on a dependence analysis. The data dependencies are automatically compiled either into local memory management or into communications between the virtual processors through send/receive channels.
12

BAR-NOY, AMOTZ, SHLOMO KIPNIS, and BARUCH SCHIEBER. "AN OPTIMAL ALGORITHM FOR COMPUTING CENSUS FUNCTIONS IN MESSAGE-PASSING SYSTEMS." Parallel Processing Letters 03, no. 01 (March 1993): 19–23. http://dx.doi.org/10.1142/s0129626493000046.

Abstract:
We consider a message-passing system of n processors, each of which initially holds one piece of data. The goal is to compute an associative and commutative census function f on the n distributed pieces of data and to make the result known to all processors. To perform the computation, processors communicate with each other by sending and receiving messages in specified communication rounds. We describe an optimal algorithm for this problem that requires the least number of communication rounds and that minimizes the time spent by any processor in sending and receiving messages.
13

Jain. S, Poonam, Pooja S, Sripath Roy. K, Abhilash K, and Arvind B V. "Implementation of asymmetric processing on multi core processors to implement IOT applications on GNU/Linux framework." International Journal of Engineering & Technology 7, no. 2.7 (March 18, 2018): 710. http://dx.doi.org/10.14419/ijet.v7i2.7.10928.

Abstract:
The Internet of Things has brought bigger computing challenges, as tasks must run in multi-sensor environments where large-scale data processing is involved. To implement this requirement, multiprocessors are being used to implement IoT gateways. Specific tasks may need a resource dedicated to their job, and to fulfil this we face the hurdle of choosing a dedicated or a shared processor in a symmetric processing architecture. Dedicated processors are those in which all the tasks are processed on a single core, whereas in fair-share processors specific processes are assigned to specific cores. Symmetric processing makes use of dedicated processors, whereas asymmetric processing makes use of shared processors. Asymmetric multiprocessing can be used in real-time applications to solve real-time problems; one such platform is IoT. In this paper, we have evaluated asymmetric processing on the GNU/Linux platform by testing multiple threads running on different multi-core processor architectures, in order to realize the same for running IoT applications with higher computational requirements in the future.
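A minimal sketch of dedicating a process to one core on GNU/Linux (an illustrative assumption; the paper's own test setup is not reproduced here) uses the Linux-only `os.sched_setaffinity` call:

```python
import os

def pin_to_core(core_id):
    """Dedicate the calling process to a single core, if the platform allows.
    os.sched_setaffinity is a Linux-only API, hence the feature check."""
    if hasattr(os, "sched_setaffinity"):
        os.sched_setaffinity(0, {core_id})   # 0 = the current process
        return os.sched_getaffinity(0)       # the affinity mask now in effect
    return None                              # affinity control unavailable

mask = pin_to_core(0)  # e.g. {0} on Linux, None elsewhere
```

Pinning threads or processes this way is one concrete mechanism for the asymmetric, per-core task assignment the abstract discusses.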
14

Liu, Liang Liang, and Peng Long Jiang. "The Design of PLC SOC Controller by Dual-Core Mutually Exclusive Processor." Advanced Materials Research 383-390 (November 2011): 5663–68. http://dx.doi.org/10.4028/www.scientific.net/amr.383-390.5663.

Abstract:
The Programmable Logic Controller (PLC) market is mainly dominated by Omron, Schneider, NEC and other foreign manufacturers. The CPU module of their PLC products usually has two processors at PCB level. One of the processors is a general-purpose processor and the other is a Ladder Chart Hardware Process Unit (LPU). Based on research into the dual-core mutual exclusion, interrupt management, and data consistency issues of the LEON2 from Gaisler Research, the LPU processor and the LEON2 processor are integrated in a single chip with a mutually exclusive SOC (System On Chip) architecture. Simulation shows that the PLC SOC controller works stably and efficiently.
15

De Lacy Costello, Ben. "Calculating Voronoi Diagrams Using Simple Chemical Reactions." Parallel Processing Letters 25, no. 01 (March 2015): 1540003. http://dx.doi.org/10.1142/s0129626415400034.

Abstract:
This paper overviews work on the use of simple chemical reactions to calculate Voronoi diagrams and undertake other related geometric calculations. This work highlights that this type of specialised chemical processor is a model example of a parallel processor. For example, increasing the complexity of the input data within a given area does not increase the computation time. These processors are also able to calculate two or more Voronoi diagrams in parallel. Due to the specific chemical reactions involved and the relative strength of reaction with the substrate (and cross-reactivity with the products), these processors are also capable of calculating Voronoi diagrams sequentially from distinct chemical inputs. The chemical processors are capable of calculating a range of generalised Voronoi diagrams (either from circular drops of chemical or from other geometric shapes made from adsorbent substrates soaked in reagent), skeletonisation of planar shapes, and weighted Voronoi diagrams (e.g., additively weighted Voronoi diagrams and multiplicatively weighted crystal-growth Voronoi diagrams). The paper also discusses some limitations of these processors. These chemical processors constitute a class of pattern-forming reactions which have parallels with those observed in natural systems. It is possible that specialised chemical processors of this general type could be useful for synthesising functional structured materials.
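The chemical wavefront computation can be mimicked in software: the hypothetical sketch below grows all seed wavefronts one grid step per time unit using a multi-source breadth-first search, so each cell is claimed by whichever front reaches it first, much as reagent fronts partition the substrate:

```python
# Illustrative software analogue (not the chemical processor itself):
# simultaneous wavefront growth from all seeds via multi-source BFS.
from collections import deque

def grid_voronoi(width, height, seeds):
    """Label each cell with the index of the seed whose wavefront
    (expanding one cell per time step) reaches it first."""
    owner = [[-1] * width for _ in range(height)]
    queue = deque()
    for idx, (x, y) in enumerate(seeds):
        owner[y][x] = idx
        queue.append((x, y))
    while queue:
        x, y = queue.popleft()
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nx, ny = x + dx, y + dy
            if 0 <= nx < width and 0 <= ny < height and owner[ny][nx] == -1:
                owner[ny][nx] = owner[y][x]   # first front to arrive wins
                queue.append((nx, ny))
    return owner

cells = grid_voronoi(10, 1, [(0, 0), (9, 0)])  # two seeds on a 10x1 strip
```

As in the chemical processor, adding more seeds does not lengthen the "time" each front needs to reach its boundary; the fronts expand concurrently.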
16

ROSENBLUM, IRINA, JOAN ADLER, and SIMON BRANDON. "MULTI-PROCESSOR MOLECULAR DYNAMICS USING THE BRENNER POTENTIAL: PARALLELIZATION OF AN IMPLICIT MULTI-BODY POTENTIAL." International Journal of Modern Physics C 10, no. 01 (February 1999): 189–203. http://dx.doi.org/10.1142/s0129183199000139.

Abstract:
We present computational aspects of Molecular Dynamics calculations of thermal properties of diamond using the Brenner potential. Parallelization was essential in order to carry out these calculations on samples of suitable sizes. Our implementation uses MPI on a multi-processor machine such as the IBM SP2. Three aspects of parallelization of the Brenner potential are discussed in depth: its long-range nature, the need for different parallelization algorithms for forces and neighbors, and the relative expense of force calculations compared to that of data communication. The efficiency of parallelization is presented as a function of different approaches to these issues as well as of cell size and the number of processors employed in the calculation. In the calculations presented here, information from almost half of the atoms was needed by each processor even when 16 processors were used, which made it worthwhile to avoid unnecessary complications by making data from all atoms available to all processors. Superlinear speedup was achieved for four processors (by avoiding paging) with 512-atom samples, and 5 ps long trajectories were calculated (for 5120-atom samples) in 53 hours using 16 processors; 514 hours would have been needed to complete this calculation using a serial program. Finally, we discuss and make available a set of routines that enable MPI-based codes such as ours to be debugged on scalar machines.
17

Shmeylin, B. Z., and E. A. Alekseeva. "THE PROBLEM OF PROVIDING CACHE COHERENCE IN MULTIPROCESSOR SYSTEMS WITH MANY PROCESSORS." Issues of radio electronics, no. 5 (May 20, 2018): 47–53. http://dx.doi.org/10.21778/2218-5453-2018-5-47-53.

Abstract:
In this paper, the tasks of managing the directory in coherence-maintenance systems in multiprocessor systems with a large number of processors are solved. In multiprocessor systems with a large number of processors (MSLP), the problem of maintaining the coherence of processor caches is significantly complicated. This is due to increased traffic on the memory buses and the increased complexity of interprocessor communications. This problem is solved in various ways. In this paper, we propose the use of Bloom filters, which accelerate the determination of whether an element belongs to a certain set. In this article, such filters are used to establish that a processor belongs to some subset of the processors and to determine whether the processor has a cache line in the set. The processes of writing and reading data shared between processors are discussed in detail, as is the process of replacing data from private caches. The article also shows how the addresses of cache lines and processor numbers are removed from the Bloom filters. The system proposed in this paper significantly speeds up operations that maintain cache coherence in the MSLP compared to conventional systems. In terms of performance and additional hardware and software costs, the proposed system is not inferior to the most efficient similar systems, and on some applications it significantly exceeds them.
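A minimal Bloom filter sketch shows the membership test being accelerated (illustrative only; a coherence directory like the paper's, which also removes entries, would need a deletable variant such as a counting Bloom filter):

```python
# Minimal Bloom filter sketch: k hashed bit positions per item.
import hashlib

class BloomFilter:
    """Compact set-membership filter. False positives are possible,
    false negatives are not. This simple bitmask version cannot delete
    items, unlike the counting filters a real directory would need."""

    def __init__(self, size=256, hashes=3):
        self.size, self.hashes, self.bits = size, hashes, 0

    def _positions(self, item):
        for i in range(self.hashes):
            digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:4], "big") % self.size

    def add(self, item):
        for pos in self._positions(item):
            self.bits |= 1 << pos

    def __contains__(self, item):
        return all(self.bits >> pos & 1 for pos in self._positions(item))

sharers = BloomFilter()
sharers.add("line:0x7f40:cpu3")  # record that cpu3 may cache this line
```

In a directory, a "not in filter" answer definitively rules a processor out as a sharer, so invalidation messages can be skipped for it; a "maybe" answer falls back to a full check.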
18

Chattra, Eka, and Obrin Candra Brillyant. "Implementation of Meltdown Attack Simulation for Cybersecurity Awareness Material." ACMIT Proceedings 7, no. 1 (July 7, 2021): 6–13. http://dx.doi.org/10.33555/acmit.v7i1.102.

Abstract:
One of the rising risks in cybersecurity is attacks on cyber-physical systems. Today's computer systems have evolved through the development of processor technology, namely through the use of optimization techniques such as out-of-order execution. Using this technique, processors can improve computing system performance without sacrificing manufacturing processes. However, the use of these optimization techniques has vulnerabilities, especially on Intel processors. The vulnerability takes the form of data exfiltration from cache memory that can be exploited by an attack. Meltdown is an exploit that takes advantage of such vulnerabilities in modern Intel processors. This vulnerability can be used to extract data that is processed on a computer device using said processors, such as passwords, messages, or other credentials. In this paper, we use qualitative research that aims to describe a simulation approach to experiencing a Meltdown attack in a safe environment, applying a known Meltdown attack scheme and source code to simulate the attack on an Intel Core i7 platform running Linux. We then modified the source code to prove the concept that the Meltdown attack can extract data on devices using Intel processors without consent from the authorized user.
19

Muddukrishna, Ananya, Peter A. Jonsson, and Mats Brorsson. "Locality-Aware Task Scheduling and Data Distribution for OpenMP Programs on NUMA Systems and Manycore Processors." Scientific Programming 2015 (2015): 1–16. http://dx.doi.org/10.1155/2015/981759.

Abstract:
Performance degradation due to nonuniform data access latencies has worsened on NUMA systems and can now be felt on-chip in manycore processors. Distributing data across NUMA nodes and manycore processor caches is necessary to reduce the impact of nonuniform latencies. However, techniques for distributing data are error-prone and fragile and require low-level architectural knowledge. Existing task scheduling policies favor quick load-balancing at the expense of locality and ignore NUMA node/manycore cache access latencies while scheduling. Locality-aware scheduling, in conjunction with or as a replacement for existing scheduling, is necessary to minimize NUMA effects and sustain performance. We present a data distribution and locality-aware scheduling technique for task-based OpenMP programs executing on NUMA systems and manycore processors. Our technique relieves the programmer from thinking of NUMA system/manycore processor architecture details by delegating data distribution to the runtime system and uses task data dependence information to guide the scheduling of OpenMP tasks to reduce data stall times. We demonstrate our technique on a four-socket AMD Opteron machine with eight NUMA nodes and on the TILEPro64 processor and identify that data distribution and locality-aware task scheduling improve performance up to 69% for scientific benchmarks compared to default policies and yet provide an architecture-oblivious approach for programmers.
20

Gonzalez, Jose, and Antonio Gonzalez. "Data value speculation in superscalar processors." Microprocessors and Microsystems 22, no. 6 (November 1998): 293–301. http://dx.doi.org/10.1016/s0141-9331(98)00086-6.

21

Jaenicke, Edward C., Martin Shields, and Timothy W. Kelsey. "Food Processors' Use of Contracts to Purchase Agricultural Inputs: Evidence from a Pennsylvania Survey." Agricultural and Resource Economics Review 36, no. 2 (October 2007): 213–29. http://dx.doi.org/10.1017/s1068280500007048.

Abstract:
Using data from a survey of Pennsylvania food processors, we investigate what firm-level characteristics make a processor more or less likely to buy agricultural inputs and ingredients through contracts. We find that over 20 percent of Pennsylvania processors use contracts, and over 44 percent of agricultural inputs (based on value) are purchased under contract. We also analyze the two related questions of what firm attributes, attitudes, or other factors make a firm more likely to use contracts at all, and what factors lead a processor who does contract to use them more intensively.
22

Issa, Joseph. "Performance and power analysis for high performance computation benchmarks." Open Computer Science 3, no. 1 (January 1, 2013): 1–16. http://dx.doi.org/10.2478/s13537-013-0101-5.

Abstract:
Performance and power consumption analysis and characterization of computational benchmarks is important for processor designers and benchmark developers. In this paper, we characterize and analyze different High Performance Computing workloads. We analyze benchmark characteristics and behavior on various processors and propose an analytical performance-estimation model to predict performance for different processor microarchitecture parameters. The performance model is verified to predict performance within a <5% error margin between estimated and measured data for different processors. We also propose an analytical power-estimation model to estimate power consumption with low error deviation.
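The abstract does not give the model's form; as a hedged stand-in, a generic first-order analytical model of this kind predicts CPI and runtime from microarchitecture parameters:

```python
# Generic first-order analytical model (a stand-in, not the paper's model):
# CPI inflated by memory stall cycles, then runtime via the iron law.

def estimate_cpi(base_cpi, miss_rate, miss_penalty_cycles, mem_refs_per_instr):
    """CPI = base CPI + average memory stall cycles per instruction."""
    return base_cpi + mem_refs_per_instr * miss_rate * miss_penalty_cycles

def estimate_runtime_s(instructions, cpi, freq_hz):
    """Iron law of performance: time = instructions * CPI / frequency."""
    return instructions * cpi / freq_hz

cpi = estimate_cpi(1.0, 0.02, 100, 0.3)      # 1.0 + 0.3 * 0.02 * 100 = 1.6
runtime = estimate_runtime_s(1e9, cpi, 2e9)  # ~0.8 s at 2 GHz
```

Models of this shape are calibrated against measured data, then used to extrapolate across microarchitecture parameters (cache miss penalty, frequency, and so on), which is the workflow the abstract describes.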
23

Yantır, Hasan Erdem, Ahmed M. Eltawil, and Khaled N. Salama. "Efficient Acceleration of Stencil Applications through In-Memory Computing." Micromachines 11, no. 6 (June 26, 2020): 622. http://dx.doi.org/10.3390/mi11060622.

Abstract:
Traditional computer architectures suffer severely from the bottleneck between processing elements and memory, which is the biggest barrier to their scalability. Nevertheless, the amount of data that applications need to process is increasing rapidly, especially in the era of big data and artificial intelligence. This fact forces new constraints in computer architecture design towards more data-centric principles. Therefore, new paradigms such as in-memory and near-memory processors have begun to emerge to counteract the memory bottleneck by bringing memory closer to computation or integrating the two. Associative processors are a promising candidate for in-memory computation, combining the processor and memory in the same location to alleviate the memory bottleneck. One class of applications that needs iterative processing of huge amounts of data is stencil codes. Considering this feature, associative processors can provide a paramount advantage for stencil codes. For demonstration, two in-memory associative processor architectures for 2D stencil codes are proposed, implemented in both emerging memristor and traditional SRAM technologies. The proposed architecture achieves promising efficiency for a variety of stencil applications and thus proves its applicability to scientific stencil computing.
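A 5-point 2D stencil sweep, the memory-bound access pattern in question, can be sketched as follows (plain Python for clarity; the paper maps such sweeps onto associative in-memory hardware):

```python
def jacobi_step(grid):
    """One 5-point stencil sweep: every interior cell becomes the average
    of its four neighbours; boundary cells are left unchanged."""
    h, w = len(grid), len(grid[0])
    out = [row[:] for row in grid]
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            out[i][j] = 0.25 * (grid[i - 1][j] + grid[i + 1][j]
                                + grid[i][j - 1] + grid[i][j + 1])
    return out

grid = [[1.0, 2.0, 3.0],
        [4.0, 5.0, 6.0],
        [7.0, 8.0, 9.0]]
result = jacobi_step(grid)  # centre cell -> 0.25 * (2 + 8 + 4 + 6) = 5.0
```

Each sweep reads every cell and writes every interior cell, and applications iterate such sweeps many times, which is why moving the computation into memory pays off for stencil codes.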
24

Lastovetsky, Alexey, and Ravi Reddy. "Data Partitioning for Multiprocessors with Memory Heterogeneity and Memory Constraints." Scientific Programming 13, no. 2 (2005): 93–112. http://dx.doi.org/10.1155/2005/964902.

Abstract:
The paper presents a performance model that can be used to optimally distribute computations over heterogeneous computers. This model is application-centric, representing the speed of each computer by a function of the problem size. This way it takes into account the processor heterogeneity, the heterogeneity of the memory structure, and the memory limitations at each level of the memory hierarchy. A problem of optimally partitioning an n-element set over p heterogeneous processors using this performance model is formulated, and an efficient solution of complexity O(p^3 × log_2 n) is given.
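As a simplified illustration (the paper's model uses speed as a function of problem size, not the constant per-processor speeds assumed here), elements can be apportioned in proportion to processor speed:

```python
def partition_by_speed(n, speeds):
    """Split n elements in proportion to (constant) processor speeds:
    integer floor shares first, then hand the remainder to the fastest
    processors so every element is assigned exactly once."""
    total = sum(speeds)
    shares = [n * s // total for s in speeds]
    remainder = n - sum(shares)
    fastest_first = sorted(range(len(speeds)), key=lambda i: -speeds[i])
    for i in fastest_first[:remainder]:
        shares[i] += 1
    return shares

shares = partition_by_speed(10, [1, 2, 2])  # -> [2, 4, 4]
```

The paper's harder problem arises because a processor's effective speed changes with its assigned share (cache and memory limits), so shares and speeds must be solved for jointly rather than in one proportional pass.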
25

Plantin, Jean-Christophe. "Data Cleaners for Pristine Datasets: Visibility and Invisibility of Data Processors in Social Science." Science, Technology, & Human Values 44, no. 1 (June 14, 2018): 52–73. http://dx.doi.org/10.1177/0162243918781268.

Abstract:
This article investigates the work of processors who curate and “clean” the data sets that researchers submit to data archives for archiving and further dissemination. Based on ethnographic fieldwork conducted at the data processing unit of a major US social science data archive, I investigate how these data processors work, under which status, and how they contribute to data sharing. This article presents two main results. First, it contributes to the study of invisible technicians in science by showing that the same procedures can make technical work invisible outside and visible inside the archive, to allow peer review and quality control. Second, this article contributes to the social study of scientific data sharing, by showing that the organization of data processing directly stems from the conception that the archive promotes of a valid data set—that is, a data set that must look “pristine” at the end of its processing. After critically interrogating this notion of pristineness, I show how it perpetuates a misleading conception of data as “raw” instead of acknowledging the important contribution of data processors to data sharing and social science.
26

LEBRUN, PAUL. "THE BTEV TRIGGERS AND DATA ACQUISITION SYSTEMS." International Journal of Modern Physics A 16, supp01c (September 2001): 1153–55. http://dx.doi.org/10.1142/s0217751x0100917x.

Full text
Abstract:
A coherent and complete suite of triggers and data compression stages for the BTeV experiment has been designed and prototyped. Level 1 considers all crossings, finds primary vertices, and searches for detached tracks from these primary vertices using only the pixel vertex detector. Muons with a relatively high transverse momentum will also be identified at that stage. Level 1 runs on dedicated processors. Level 2 selects events based on the pixel and the forward tracking systems, and will run on a conventional processor farm. Level 3 will use the complete BTeV detector information to reduce the background by a factor of 2 to 5 and to reduce the event size significantly.
APA, Harvard, Vancouver, ISO, and other styles
27

RAUBER, THOMAS, and GUDULA RÜNGER. "A DATA RE-DISTRIBUTION LIBRARY FOR MULTI-PROCESSOR TASK PROGRAMMING." International Journal of Foundations of Computer Science 17, no. 02 (April 2006): 251–70. http://dx.doi.org/10.1142/s0129054106003814.

Full text
Abstract:
Multiprocessor task (M-task) programming is a suitable parallel programming model for coding application problems with an inherent modular structure. An M-task can be executed on a group of processors of arbitrary size, concurrently to other M-tasks of the same application program. The data of a multiprocessor task program usually include composed data structures, like vectors or arrays. For distributed memory machines or cluster platforms, those composed data structures are distributed within one or more processor groups. Thus, a concise parallel programming model for M-tasks requires a standardized distributed data format for composed data structures. Additionally, functions for data re-distribution with respect to different data distributions and different processor group layouts are needed to glue program parts together. In this paper, we present a data re-distribution library which extends the M-task programming with Tlib, a library providing operations to split processor groups and to map M-tasks to processor groups.
APA, Harvard, Vancouver, ISO, and other styles
28

Sari, Devi Pramita, Sudiro Sudiro, and Chriswardani Suryawati. "Evaluasi Sistem Pengolah Data Mortalitas Pasien Rawat Inap Berbasis Komputer di RSUD Dr. Moewardi." Jurnal Manajemen Kesehatan Indonesia 5, no. 1 (April 30, 2017): 1–5. http://dx.doi.org/10.14710/jmki.5.1.2017.1-5.

Full text
Abstract:
The computer-based system for processing inpatient mortality data suffers from computer performance problems. The objective of this research was to evaluate the computer-based mortality data processing system, based on the Health Metrics Network method. This is non-experimental research using a qualitative method. Data were collected by observing the mortality data processing system and through in-depth interviews. The main informants were six system administrators; the triangulation informants were two managers supporting the mortality data processing system. The results show that, in the resources component, not all personnel were trained in information systems or medical records, there had never been training on the mortality data processing system, there was no written job description, staff doubled as providers of morbidity data services, there was no dedicated room or computer for the mortality processing system, and there were no written regulations. In the indicators component, there was no database of doctors, procedures, and causes of death, while death certificates were printed. In the data source component, the manual mortality data in the inpatient register did not match the data on the computer. In the data management component, data analysis was carried out only by the system coordinator. In the data products component, the data were considered incomplete. Dissemination and use relied on delivery via e-mail and were not yet web-based or integrated. The advice for hospital management is to distribute the data and to design an integrated, web-based mortality data processing system connected to the external DKK and Dispendukcapil.
APA, Harvard, Vancouver, ISO, and other styles
29

FEAUTRIER, PAUL. "TOWARD AUTOMATIC DISTRIBUTION." Parallel Processing Letters 04, no. 03 (September 1994): 233–44. http://dx.doi.org/10.1142/s0129626494000235.

Full text
Abstract:
This paper considers the problem of distributing data and code among the processors of a distributed memory supercomputer. Provided that the source program is amenable to detailed dataflow analysis, one may determine a placement function by an incremental analogue of Gaussian elimination. Such a function completely characterizes the distribution by giving the identity of the virtual processor on which each elementary calculation is done. One has then to “realize” the virtual processors on the PE. The resulting structure satisfies the “owner computes” rule and is reminiscent of two-level distribution schemes, like HPF’s [Formula: see text] and [Formula: see text] directives, or the CM-2 virtual processor system.
APA, Harvard, Vancouver, ISO, and other styles
30

Lau, T. L., and E. P. K. Tsang. "Solving the Processor Configuration Problems with a Mutation-Based Genetic Algorithm." International Journal on Artificial Intelligence Tools 06, no. 04 (December 1997): 567–85. http://dx.doi.org/10.1142/s0218213097000281.

Full text
Abstract:
The Processor Configuration Problem (PCP) is a real-life Constraint Optimization Problem. The task is to link up a finite set of processors into a network, whilst minimizing the maximum distance between these processors. Since each processor has a limited number of communication channels, a carefully planned layout will help reduce the overhead for message switching. In this paper, we present a Genetic Algorithm (GA) approach to the PCP. Our technique uses a mutation-based GA, a function that produces schemata by analyzing previous solutions, and an efficient data representation. Our approach has been shown to outperform other published techniques on this problem.
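A mutation-only GA of the kind this abstract describes can be sketched as follows. The ring topology, the cost function, and every parameter below are illustrative stand-ins for the real PCP, and the schema-producing function is omitted.

```python
import random

def cost(perm):
    # Max "distance" between consecutive processors when the ids are
    # read around a ring; a toy stand-in for the network diameter.
    n = len(perm)
    return max(abs(perm[i] - perm[(i + 1) % n]) for i in range(n))

def mutate(perm):
    child = perm[:]
    i, j = random.sample(range(len(perm)), 2)  # swap two positions
    child[i], child[j] = child[j], child[i]
    return child

def evolve(n=12, pop_size=30, generations=200, seed=1):
    random.seed(seed)
    pop = [random.sample(range(n), n) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=cost)
        survivors = pop[: pop_size // 2]  # truncation selection, no crossover
        pop = survivors + [mutate(random.choice(survivors))
                           for _ in range(pop_size - len(survivors))]
    return min(pop, key=cost)

best = evolve()
print(cost(best))
```

Relying on mutation alone, as the paper does, sidesteps the difficulty of designing a crossover operator that preserves feasibility of the network layout.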
APA, Harvard, Vancouver, ISO, and other styles
31

Vajnovszki, Vincent, and Jean Pallo. "Parallel Algorithms for Listing Well-Formed Parentheses Strings." Parallel Processing Letters 08, no. 01 (March 1998): 19–28. http://dx.doi.org/10.1142/s0129626498000055.

Full text
Abstract:
We present two cost-optimal parallel algorithms generating the set of all well-formed parentheses strings of length 2n with constant delay for each generated string. In our first algorithm we generate, in lexicographic order, well-formed parentheses strings represented by bitstrings, and in the second one we use the representation by weight sequences. In both cases the computational model is a CREW PRAM architecture, where each processor performs the same algorithm simultaneously on a different set of data. Different processors can access the shared memory at the same time to read different data in the same or different memory locations, but no two processors are allowed to write into the same memory location simultaneously. These results complement a recent parallel generating algorithm for well-formed parentheses strings in a linear array of processors model, due to Akl and Stojmenović.
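The sequential core of lexicographic generation can be sketched as below; the PRAM algorithms themselves are not reproduced here. In a parallel setting, each processor would enumerate a different sub-range of this sequence.

```python
def well_formed(n):
    """All well-formed parentheses strings of length 2n, in
    lexicographic order (taking '(' < ')')."""
    out = []

    def rec(s, opened, closed):
        if len(s) == 2 * n:
            out.append(s)
            return
        if opened < n:                    # '(' first gives lexicographic order
            rec(s + "(", opened + 1, closed)
        if closed < opened:               # ')' only if it stays well-formed
            rec(s + ")", opened, closed + 1)

    rec("", 0, 0)
    return out

print(well_formed(3))
# ['((()))', '(()())', '(())()', '()(())', '()()()']
```

The number of strings produced is the n-th Catalan number, e.g. 5 for n = 3 and 14 for n = 4.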
APA, Harvard, Vancouver, ISO, and other styles
32

Doraipandian, Manivannan, and Periasamy Neelamegam. "Wireless Sensor Network Using ARM Processors." International Journal of Embedded and Real-Time Communication Systems 4, no. 4 (October 2013): 48–59. http://dx.doi.org/10.4018/ijertcs.2013100103.

Full text
Abstract:
The hardware design of Wireless Sensor Networks (WSN) is the crux of their effective deployment. Nowadays these networks are used in microscopic, secure and high-end embedded products. The WSN's potential for efficient data sensing and distributed data processing has led to its usage in applications for measurement and tracking. A WSN comprises a small number of embedded devices known as sensor nodes, gateways and base stations. Sensor nodes consist of sensors, processors and transceivers. The properties of the embedded sensor devices, also called motes, determine the strength of a WSN. Thus processor selection for the motes plays a critical role in determining a WSN's competency. In this article, the essential hardware characteristics of available and proposed sensor nodes are discussed. The objective of this work was to increase the efficiency and provision of sensor nodes by evaluating their processing and transceiver units. In this work, a sensor node was developed with an ARM processor and an XBee Series 2 unit. LPC2148 and LPC2378 ARM processors served as the processing unit and the XBee Series 2 acted as the communication unit. Results of this experimental setup were recorded. A comparative study of the various available sensor nodes and the proposed sensor nodes was also done extensively.
APA, Harvard, Vancouver, ISO, and other styles
33

Fricke, J. Robert. "Reverse-time migration in parallel: A tutorial." GEOPHYSICS 53, no. 9 (September 1988): 1143–50. http://dx.doi.org/10.1190/1.1442553.

Full text
Abstract:
A reverse-time migration is implemented on a fine-grain or massively parallel computer. With fine-grain architectures many processors are distributed throughout the memory space and can operate on data “in place.” In addition, via a general communication system, any processor can access data from anywhere in the entire memory-processor space. Thus, operations on both local and global data elements are possible. These capabilities are controlled by parallel language constructs which allow parallel variable declaration, parallel arithmetic operation, and parallel random memory access. Reverse-time migration was programmed on a fine-grain machine with these hardware and software features. The reverse-time migration process had a speed improvement of two orders of magnitude relative to a state-of-the-art serial machine. At least another order of magnitude performance can be achieved with currently available floating-point processors. Similar increases in performance are expected for other seismic processes such as velocity estimation, data interpolation, 2-D filtering, and others.
APA, Harvard, Vancouver, ISO, and other styles
34

Martyniuk, T. B., N. O. Denysiuk, and B. I. Krukivskyi. "ASSOCIATIVE PROCESSORS WITH PARALLEL-SERIAL DATA PROCESSING." Information Technology and Computer Engineering 44, no. 1 (2019): 27–36. http://dx.doi.org/10.31649/1999-9941-2019-44-1-27-36.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Richard, Lecordier, and Martin Patrick. "Data flow processors for automated visual inspection." Microprocessing and Microprogramming 34, no. 1-5 (February 1992): 37–40. http://dx.doi.org/10.1016/0165-6074(92)90097-q.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Vermij, Erik, Leandro Fiorin, Rik Jongerius, Christoph Hagleitner, Jan Van Lunteren, and Koen Bertels. "An Architecture for Integrated Near-Data Processors." ACM Transactions on Architecture and Code Optimization 14, no. 3 (September 6, 2017): 1–25. http://dx.doi.org/10.1145/3127069.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Wang, Hongzhi, Feng Xiong, Jianing Li, Shengfei Shi, Jianzhong Li, and Hong Gao. "Data management on new processors: A survey." Parallel Computing 72 (February 2018): 1–13. http://dx.doi.org/10.1016/j.parco.2017.12.002.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Bakhshalipour, Mohammad, Pejman Lotfi-Kamran, Abbas Mazloumi, Farid Samandi, Mahmood Naderan-Tahan, Mehdi Modarressi, and Hamid Sarbazi-Azad. "Fast Data Delivery for Many-Core Processors." IEEE Transactions on Computers 67, no. 10 (October 1, 2018): 1416–29. http://dx.doi.org/10.1109/tc.2018.2821144.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Deppe, J., H. Areti, R. Atac, J. Biel, A. Cook, M. Edel, M. Fischler, et al. "ACP/R3000 processors in data acquisition systems." IEEE Transactions on Nuclear Science 36, no. 5 (1989): 1577–79. http://dx.doi.org/10.1109/23.41107.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Byna, Surendra, Yong Chen, and Xian-He Sun. "Taxonomy of Data Prefetching for Multicore Processors." Journal of Computer Science and Technology 24, no. 3 (May 2009): 405–17. http://dx.doi.org/10.1007/s11390-009-9233-4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Gascon, J., P. Taras, F. Banville, M. Beaulieu, R. Bornais, S. Flibotte, and B. Lorazo. "Nuclear spectroscopy data sorting with parallel processors." Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment 278, no. 2 (June 1989): 491–96. http://dx.doi.org/10.1016/0168-9002(89)90870-x.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Arandi, Samer, George Matheou, Costas Kyriacou, and Paraskevas Evripidou. "Data-Driven Thread Execution on Heterogeneous Processors." International Journal of Parallel Programming 46, no. 2 (February 8, 2017): 198–224. http://dx.doi.org/10.1007/s10766-016-0486-6.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Mohamed, Mofreh, Faiez Areed, and Kamel Soliman. "Data Processors for Power System Control.(Dept.E)." MEJ. Mansoura Engineering Journal 12, no. 2 (December 1, 1987): 49–60. http://dx.doi.org/10.21608/bfemu.1987.175867.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Omilani, Olaoluwa, Adebayo Abass, and Victor Okoruwa. "Smallholder Agroprocessors’ Willingness to Pay for Value-Added Solid-Waste Management Solutions." Sustainability 11, no. 6 (March 23, 2019): 1759. http://dx.doi.org/10.3390/su11061759.

Full text
Abstract:
The paper examined the willingness of smallholder cassava processors to pay for value-added solid wastes management solutions in Nigeria. We employed a multistage sampling procedure to obtain primary data from 403 cassava processors from the forest and Guinea savannah zones of Nigeria. Contingent valuation and logistic regression were used to determine the willingness of the processors to pay for improved waste management options and the factors influencing their decision on the type of waste management system adopted and willingness to pay for a value-added solid-waste management system option. Women constituted the largest population of smallholder cassava processors, and the processors generated a lot of solid waste (605–878 kg/processor/season). Waste was usually dumped (59.6%), given to others (58.1%), or sold in wet (27.8%) or dry (35.5%) forms. The factors influencing the processors’ decision on the type of waste management system to adopt included sex of processors, membership of an association, quantity of cassava processed and ownership structure. Whereas the processors were willing to pay for new training on improved waste management technologies, they were not willing to pay more than US$3. However, US$3 may be paid for training in mushroom production. It is expected that public expenditure on training to empower processors to use solid-waste conversion technologies for generating value-added products will lead to such social benefits as lower exposure to environmental toxins from the air, rivers and underground water, among others, and additional income for the smallholder processors. The output of the study can serve as the basis for developing usable and affordable solid-waste management systems for community cassava processing units in African countries involved in cassava production.
APA, Harvard, Vancouver, ISO, and other styles
45

Wu, Hao, Qinggeng Jin, Chenghua Zhang, and He Guo. "A Selective Mirrored Task Based Fault Tolerance Mechanism for Big Data Application Using Cloud." Wireless Communications and Mobile Computing 2019 (February 26, 2019): 1–12. http://dx.doi.org/10.1155/2019/4807502.

Full text
Abstract:
With the wide deployment of cloud computing in big data processing and the growing scale of big data applications, managing the reliability of resources becomes a critical issue. Unfortunately, due to the highly intricate directed-acyclic-graph (DAG) based applications and the flexible usage of processors (virtual machines) in cloud platforms, the existing fault-tolerant approaches are inefficient at striking a balance between the parallelism and the topology of the DAG-based application while using the processors, which causes a longer makespan for an application and consumes more processor time (computation cost). To address these issues, this paper presents a novel fault-tolerant framework named Fault Tolerance Algorithm using Selective Mirrored Tasks Method (FAUSIT) for the fault tolerance of running a big data application on the cloud. First, we provide comprehensive theoretical analyses on how to improve the performance of fault tolerance for running a single task on a processor. Second, considering the balance between the parallelism and the topology of an application, we present a selective mirrored task method. Finally, by employing the selective mirrored task method, FAUSIT is designed to improve the fault tolerance for DAG-based applications and incorporates two important objectives: minimizing the makespan and the computation cost. Our solution approach is evaluated through a rigorous performance evaluation study using real-world workflows, and the results show that the proposed FAUSIT approach outperforms existing algorithms in terms of makespan and computation cost.
APA, Harvard, Vancouver, ISO, and other styles
46

Faeq, Mays K., and Safaa S. Omran. "Cache coherency controller for MESI protocol based on FPGA." International Journal of Electrical and Computer Engineering (IJECE) 11, no. 2 (April 1, 2021): 1043. http://dx.doi.org/10.11591/ijece.v11i2.pp1043-1052.

Full text
Abstract:
In modern processor manufacturing, more than one processor is built into an integrated circuit (chip), and each processor is called a core. The new chips are called multi-core processors. This design allows the processors to work simultaneously on more than one job, or all cores can work in parallel on the same job. All cores are similar in their design, and each core has its own cache memory, while all cores share the same main memory. If one core requests a block of data from main memory into its cache, there should be a protocol to declare the status of this block in main memory and in the other cores. This is called cache coherency, or the cache consistency of a multi-core processor. In this paper a special circuit is designed using very high speed integrated circuit hardware description language (VHDL) coding and implemented using the Xilinx ISE software. The protocol used in this design is the modified, exclusive, shared and invalid (MESI) protocol. Test results were taken using a test bench and showed that all states of the protocol work correctly.
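The MESI states this abstract refers to can be illustrated with a small transition table. This is a textbook-level sketch, not the paper's VHDL controller, and the event names are invented for illustration.

```python
# One cache line moves between Modified, Exclusive, Shared and Invalid
# in response to local accesses and snooped bus requests from other cores.
MESI = {
    # (state, event) -> next state
    ("I", "local_read_miss_shared"): "S",  # another cache already holds it
    ("I", "local_read_miss_alone"): "E",   # no other copy exists
    ("I", "local_write"): "M",
    ("E", "local_write"): "M",             # silent upgrade, no bus traffic
    ("E", "snoop_read"): "S",
    ("S", "local_write"): "M",             # invalidates the other copies
    ("S", "snoop_write"): "I",
    ("M", "snoop_read"): "S",              # write back, then share
    ("M", "snoop_write"): "I",
}

# One core reads a line nobody else has, writes it, then another core reads it.
state = "I"
for event in ("local_read_miss_alone", "local_write", "snoop_read"):
    state = MESI[(state, event)]
print(state)  # the line ends up Shared: "S"
```

The E-to-M "silent upgrade" row is the reason MESI outperforms the older MSI protocol: a core that holds the only copy can write without announcing it on the bus.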
APA, Harvard, Vancouver, ISO, and other styles
47

Storch, T., and R. Müller. "PROCESSING CHAINS FOR DESIS AND ENMAP IMAGING SPECTROSCOPY DATA: SIMILARITIES AND DIFFERENCES." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-3/W3 (October 20, 2017): 177–80. http://dx.doi.org/10.5194/isprs-archives-xlii-3-w3-177-2017.

Full text
Abstract:
The Earth Observation Center (EOC) of the German Aerospace Center (DLR) realizes operational processors for the DESIS (DLR Earth Sensing Imaging Spectrometer) and EnMAP (Environmental Mapping and Analysis Program) high-resolution imaging spectroscopy remote sensing satellite missions. DESIS is planned to be launched in 2018 and EnMAP in 2020. The developmental (namely schedule, deployment, and team) and functional (namely processing levels, algorithms in processors, and archiving approaches) similarities and differences of the fully automatic processors are analyzed. The processing chains generate high-quality standardized image products for users at different levels, taking characterization and calibration data into account. EOC has long-standing experience with the airborne and spaceborne acquisition, processing, and analysis of hyperspectral image data. It turns out that both activities strongly benefit from each other.
APA, Harvard, Vancouver, ISO, and other styles
48

Dehne, Frank, and Hamidreza Zaboli. "Parallel Real-Time OLAP on Multi-Core Processors." International Journal of Data Warehousing and Mining 11, no. 1 (January 2015): 23–44. http://dx.doi.org/10.4018/ijdwm.2015010102.

Full text
Abstract:
One of the most powerful and prominent technologies for knowledge discovery in decision support systems is online analytical processing (OLAP). Most of the traditional OLAP research, and most of the commercial systems, follow the static data cube approach proposed by Gray et al. and materialize all or a subset of the cuboids of the data cube in order to ensure adequate query performance. Practitioners have called for some time for a real-time OLAP approach where the OLAP system gets updated instantaneously as new data arrives and always provides an up-to-date data warehouse for the decision support process. However, a major problem for real-time OLAP is the significant performance issues with large-scale data warehouses. The aim of our research is to address these problems through the use of efficient parallel computing methods. In this paper, we present a parallel real-time OLAP system for multi-core processors. To our knowledge, this is the first real-time OLAP system that has been parallelized and optimized for contemporary multi-core architectures. Our system allows for multiple insert and multiple query transactions to be executed in parallel and in real-time. We evaluated our method for a multitude of scenarios (different ratios of insert and query transactions, query transactions with different amounts of data aggregation, different database sizes, etc.), using the TPC-DS "Decision Support" benchmark data set. As multi-core test platforms, we used an Intel Sandy Bridge processor with 4 cores (8 hardware supported threads) and an Intel Xeon Westmere processor with 20 cores (40 hardware supported threads). The tests demonstrate that, with increasing number of processor cores, our parallel system achieves close to linear speedup in transaction response time and transaction throughput. On the 20-core architecture we achieved, for a 100 GB database, a better than 0.25 second query response time for real-time OLAP queries that aggregate 25% of the database. Since hardware performance improvements are currently, and in the foreseeable future, achieved not by faster processors but by increasing the number of processor cores, our new parallel real-time OLAP method has the potential to enable OLAP systems that operate in real-time on large databases.
APA, Harvard, Vancouver, ISO, and other styles
49

Chen, Kuo Yi, Fuh Gwo Chen, and Jr Shian Chen. "A Cost-Effective Hardware Approach for Measuring Power Consumption of Modern Multi-Core Processors." Applied Mechanics and Materials 110-116 (October 2011): 4569–73. http://dx.doi.org/10.4028/www.scientific.net/amm.110-116.4569.

Full text
Abstract:
Multiple processor cores are built within a chip by advanced VLSI technology. With decreasing prices, multi-core processors are widely deployed in both server and desktop systems. The workload of multi-threaded applications can be separated onto different cores as multiple threads, such that application threads run concurrently to maximize the overall execution speed of the applications. Moreover, following the green computing trend, most modern multi-core processors provide dynamic frequency tuning. These power-level tuning techniques are based on Dynamic Voltage and Frequency Scaling (DVFS). In order to evaluate the performance of various power-saving approaches, an appropriate technique to measure the power consumption of multi-core processors is important. However, most approaches estimate CPU power consumption only from CMOS power consumption data and the CPU frequency. Such approaches estimate only the dynamic power consumption of multi-core processors; the static power consumption is not included. In this study, a hardware approach for the power consumption measurement of multi-core processors is proposed. Thus the power consumption of a CPU can be measured precisely, and the performance of CPU power-saving approaches can be evaluated well.
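The point that frequency-based models capture only dynamic power can be illustrated with the standard CMOS relations P_dyn ≈ C·V²·f and P_static ≈ V·I_leak. All constants below are made-up values for illustration, not measurements from the paper.

```python
def dynamic_power(c_eff, v, f):
    """Switching power of a CMOS circuit: C_eff * V^2 * f."""
    return c_eff * v * v * f

def static_power(v, i_leak):
    """Leakage power, independent of the clock frequency."""
    return v * i_leak

# Illustrative constants: 1.2 V supply, 2 A leakage, 1.5 nF effective capacitance.
v, i_leak, c_eff = 1.2, 2.0, 1.5e-9
for f in (1.0e9, 2.0e9):  # 1 GHz vs 2 GHz
    p_dyn = dynamic_power(c_eff, v, f)
    p_tot = p_dyn + static_power(v, i_leak)
    print(f"{f / 1e9:.0f} GHz: dynamic {p_dyn:.2f} W, total {p_tot:.2f} W")
```

A model built only from the first term tracks the frequency-dependent part but misses the constant leakage term, which is why a direct hardware measurement can disagree with a frequency-based estimate.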
APA, Harvard, Vancouver, ISO, and other styles
50

Santos, Paulo Cesar, Francis Birck Moreira, Aline Santana Cordeiro, Sairo Raoní Santos, Tiago Rodrigo Kepe, Luigi Carro, and Marco Antonio Zanata Alves. "Survey on Near-Data Processing: Applications and Architectures." Journal of Integrated Circuits and Systems 16, no. 2 (August 16, 2021): 1–17. http://dx.doi.org/10.29292/jics.v16i2.502.

Full text
Abstract:
One of the main challenges for modern processors is the data transfer between processor and memory. Such data movement implies high latency and high energy consumption. In this context, Near-Data Processing (NDP) proposals have started to gain acceptance as accelerator devices. Such proposals alleviate the memory bottleneck by moving instructions to where the data resides. The first proposals date back to the 1990s, but it was only in the 2010s that we could observe an increase in papers addressing NDP. This occurred together with the appearance of 3D-stacked chips with stacked logic and memory layers. This survey presents a brief history of these accelerators, focusing on the application domains migrated to near-data and the proposed architectures. We also introduce a new taxonomy to classify such architectural proposals according to their data distance.
APA, Harvard, Vancouver, ISO, and other styles