Academic literature on the topic 'Vector processor'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Vector processor.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Vector processor"

1

Lai, Bing-Chang, Phillip John McKerrow, and Jo Abrantes. "The abstract vector processor." Microprocessors and Microsystems 30, no. 2 (2006): 86–101. http://dx.doi.org/10.1016/j.micpro.2005.06.002.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Madeswaran, V., and A. Mathialagan. "Microprogrammable pipelined vector processor." Computers in Industry 13, no. 4 (1990): 367–70. http://dx.doi.org/10.1016/0166-3615(90)90009-e.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Lin, Qi. "Design of a vector processor." Journal of Computer Science and Technology 1, no. 1 (1986): 26–34. http://dx.doi.org/10.1007/bf02943298.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Hussain, Tassadaq, Oscar Palomar, Osman S. Ünsal, Adrian Cristal, and Eduard Ayguadé. "Memory Controller for Vector Processor." Journal of Signal Processing Systems 90, no. 11 (2016): 1533–49. http://dx.doi.org/10.1007/s11265-016-1215-5.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Soliman, Mostafa I., and Elsayed A. Elsayed. "Simultaneous Multithreaded Matrix Processor." Journal of Circuits, Systems and Computers 24, no. 08 (2015): 1550114. http://dx.doi.org/10.1142/s0218126615501145.

Full text
Abstract:
This paper proposes a simultaneous multithreaded matrix processor (SMMP) to improve the performance of data-parallel applications by exploiting instruction-level parallelism (ILP), data-level parallelism (DLP), and thread-level parallelism (TLP). In SMMP, the well-known five-stage pipeline (baseline scalar processor) is extended to execute multi-scalar/vector/matrix instructions on unified parallel execution datapaths. SMMP can issue four scalar instructions from two threads each cycle or four vector/matrix operations from one thread, where the execution of vector/matrix instructions in threads is done in round-robin fashion. Moreover, this paper presents the implementation of our proposed SMMP in VHDL targeting a Virtex-6 FPGA. In addition, the performance of SMMP is evaluated on some kernels from the basic linear algebra subprograms (BLAS). Our results show that the hardware complexity of SMMP is 5.68 times higher than the baseline scalar processor. However, speedups of 4.9, 6.09, 6.98, 8.2, 8.25, 8.72, 9.36, 11.84 and 21.57 are achieved on the BLAS kernels of applying a Givens rotation, scalar times vector plus another, vector addition, vector scaling, setting up a Givens rotation, dot product, matrix–vector multiplication, Euclidean length, and matrix–matrix multiplication, respectively. The average speedup over the baseline is 9.55 and the average speedup over complexity is 1.68. Compared with the Xilinx MicroBlaze, the complexity of SMMP is 6.36 times higher; however, its speedup ranges from 6.87 to 12.07 on vector/matrix kernels, 9.46 on average.
APA, Harvard, Vancouver, ISO, and other styles
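As an aside, the BLAS kernels benchmarked in the abstract above are simple enough to sketch in software; a minimal NumPy illustration of two of them (axpy, i.e. "scalar times vector plus another", and dot product), purely to show the kind of data-level parallelism a matrix processor like SMMP exploits in hardware, and unrelated to the paper's VHDL implementation:

```python
import numpy as np

def axpy(alpha, x, y):
    """BLAS axpy: scalar times vector plus another (alpha * x + y)."""
    return alpha * x + y

def dot(x, y):
    """BLAS dot product: an elementwise multiply followed by a reduction."""
    return float(np.dot(x, y))

x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([4.0, 3.0, 2.0, 1.0])
print(axpy(2.0, x, y))  # [6. 7. 8. 9.]
print(dot(x, y))        # 20.0
```

Both kernels are pure DLP: every lane operates independently, which is why they scale so well onto unified vector datapaths.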
6

Suaib, Mohammad, Abel Palaty, and Kumar Sambhav Pandey. "Architecture of SIMD Type Vector Processor." International Journal of Computer Applications 20, no. 4 (2011): 42–45. http://dx.doi.org/10.5120/2418-3233.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Krashinsky, Ronny, Christopher Batten, and Krste Asanović. "Implementing the scale vector-thread processor." ACM Transactions on Design Automation of Electronic Systems 13, no. 3 (2008): 1–24. http://dx.doi.org/10.1145/1367045.1367050.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Boeri, F., and M. Auguin. "OPSILA: a vector and parallel processor." IEEE Transactions on Computers 42, no. 1 (1993): 76–82. http://dx.doi.org/10.1109/12.192215.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Heath, L. S., C. J. Ribbens, and S. V. Pemmaraju. "Processor-efficient sparse matrix-vector multiplication." Computers & Mathematics with Applications 48, no. 3-4 (2004): 589–608. http://dx.doi.org/10.1016/j.camwa.2003.06.009.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Bruck, Jehoshua, and Ching-Tien Ho. "Efficient Global Combine Operations in Multi-Port Message-Passing Systems." Parallel Processing Letters 3, no. 4 (1993): 335–46. http://dx.doi.org/10.1142/s012962649300037x.

Full text
Abstract:
We present a class of efficient algorithms for global combine operations in k-port message-passing systems. In the k-port communication model, in each communication round, a processor can send data to k other processors and simultaneously receive data from k other processors. We consider algorithms for global combine operations in n processors with respect to a commutative and associative reduction function. Initially, each processor holds a vector of m data items and finally the result of the reduction function over the n vectors of data items, which is also a vector of m data items, is known to all n processors. We present three efficient algorithms that employ various trade-offs between the number of communication rounds and the number of data items transferred in sequence. For the case m=1, we have an algorithm which is optimal in both the number of communication rounds and the number of data items transferred in sequence.
APA, Harvard, Vancouver, ISO, and other styles
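For readers unfamiliar with global combine operations, the classic recursive-doubling allreduce (a 1-port special case, not the k-port algorithms of the paper) conveys the idea; the `allreduce` function below simulates the n processors in plain Python and is an illustrative sketch only:

```python
from operator import add

def allreduce(vectors, op=add):
    """Recursive-doubling global combine for n = 2^k processors (1-port model):
    in round r, processor i exchanges its partial result with processor
    i XOR 2^r, so after log2(n) rounds every processor holds the full
    reduction of all n input vectors."""
    n = len(vectors)
    assert n & (n - 1) == 0, "n must be a power of two"
    state = [list(v) for v in vectors]
    step = 1
    while step < n:
        new = [None] * n
        for i in range(n):
            partner = i ^ step  # pairwise exchange in this round
            new[i] = [op(a, b) for a, b in zip(state[i], state[partner])]
        state = new
        step *= 2
    return state

# 4 processors, each holding a vector of m = 2 items
result = allreduce([[1, 10], [2, 20], [3, 30], [4, 40]])
print(result[0])  # every processor ends with [10, 100]
```

The k-port model in the paper generalizes exactly this round structure: with k simultaneous sends and receives per round, the number of rounds drops from log2(n) toward log_(k+1)(n).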
More sources

Dissertations / Theses on the topic "Vector processor"

1

Liu, Zhiduo. "Accelerator compiler for the VENICE vector processor." Thesis, University of British Columbia, 2012. http://hdl.handle.net/2429/43442.

Full text
Abstract:
This thesis describes the compiler design for VENICE, a new soft vector processor (SVP). The compiler is a new back-end target for Microsoft Accelerator, a high-level data-parallel library for C/C++ and C#. This allows automatic compilation from high-level programs into VENICE assembly code, thus avoiding the process of writing assembly code used by previous SVPs. Experimental results show the compiler can generate scalable parallel code with execution times that are comparable to human-optimized VENICE assembly code. On data-parallel applications, VENICE at 100MHz on an Altera DE3 platform runs at speeds comparable to one core of a 2.53GHz Intel Xeon E5540 processor, beating it in performance on four of six benchmarks by up to 3.2x. The compiler also delivers near-linear scaling performance on five of six benchmarks, which exceeds the scalability of the multi-core target of Accelerator.
APA, Harvard, Vancouver, ISO, and other styles
2

Thomas, Scott D. "Vector processor services for local area networks." Thesis, This resource online, 1991. http://scholar.lib.vt.edu/theses/available/etd-10312009-020126/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Hopkins, T. M. "The design of a sparse vector processor." Thesis, University of Edinburgh, 1993. http://hdl.handle.net/1842/14094.

Full text
Abstract:
This thesis describes the development of a new vector processor architecture capable of high efficiency when computing with very sparse vector and matrix data of irregular structure. Two applications are identified as of particular importance: sparse Gaussian elimination and Linear Programming, and the algorithmic steps involved in the solution of these problems are analysed. Existing techniques for sparse vector computation, which are only able to achieve a small fraction of the arithmetic performance commonly expected on dense matrix problems, are critically examined. A variety of new techniques with potential for hardware support is discussed. From these, the most promising are selected, and efficient hardware implementations developed. The architecture of a complete vector processor incorporating the new vector and matrix mechanisms is described; the new architecture also uses an innovative control structure for the vector processor, which enables high efficiency even when computing with vectors with very small numbers of non-zeroes. The practical feasibility of the design is demonstrated by describing the prototype implementation, under construction from off-the-shelf components. The expected performance of the new architecture is analysed, and simulation results are presented which demonstrate that the machine could be expected to provide an order-of-magnitude speed-up on many large sparse Linear Programming problems, compared to a scalar processor with the same clock rate. The simulation results indicate that the vector processor control structure is successful; the vector half-performance length is as low as 8 for standard vector instruction loop tests. In some cases, simulations indicate that the performance of the machine is limited by the speed of some scalar processor operations.
Finally, the scope for re-implementing the new architecture in technology faster than the prototype's 8MHz is briefly discussed, and particular potential difficulties identified.
APA, Harvard, Vancouver, ISO, and other styles
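The irregular, indirect access pattern that makes sparse data hard for conventional vector hardware is visible in even the simplest software formulation; a minimal compressed sparse row (CSR) matrix-vector product, offered purely as an illustration and not as the thesis's hardware mechanism:

```python
def csr_spmv(values, col_idx, row_ptr, x):
    """y = A @ x for a matrix A stored in CSR form: only the non-zeroes
    are touched, but x is read through an index vector (a gather),
    which is the access pattern sparse vector hardware must support."""
    n_rows = len(row_ptr) - 1
    y = [0.0] * n_rows
    for i in range(n_rows):
        for k in range(row_ptr[i], row_ptr[i + 1]):
            y[i] += values[k] * x[col_idx[k]]
    return y

# A = [[4, 0, 1],
#      [0, 2, 0],
#      [3, 0, 5]]
values  = [4.0, 1.0, 2.0, 3.0, 5.0]
col_idx = [0, 2, 1, 0, 2]
row_ptr = [0, 2, 3, 5]
print(csr_spmv(values, col_idx, row_ptr, [1.0, 1.0, 1.0]))  # [5.0, 2.0, 8.0]
```

Note how short the inner loop is when rows have few non-zeroes; that is exactly why a low vector half-performance length matters for this workload.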
4

Yu, Jason Kwok Kwun. "Vector processing as a soft-core processor accelerator." Thesis, University of British Columbia, 2008. http://hdl.handle.net/2429/2394.

Full text
Abstract:
Soft processors simplify hardware design by being able to implement complex control strategies using software. However, they are not fast enough for many intensive data-processing tasks, such as highly data-parallel embedded applications. This thesis suggests adding a vector processing core to the soft processor as a general-purpose accelerator for these types of applications. The approach has the benefits of a purely software-oriented development model, a fixed ISA allowing parallel software and hardware development, a single accelerator that can accelerate multiple functions in an application, and scalable performance with a single source code. With no hardware design experience needed, a software programmer can make area-versus-performance tradeoffs by scaling the number of functional units and register file bandwidth with a single parameter. The soft vector processor can be further customized by a number of secondary parameters to add and remove features for the specific application to optimize resource utilization. This thesis shows that a vector processing architecture maps efficiently into an FPGA and provides a scalable amount of performance for a reasonable amount of area. Configurations of the soft vector processor with different performance levels are estimated to achieve speedups of 2-24x for 5-26x the area of a Nios II/s processor on three benchmark kernels.
APA, Harvard, Vancouver, ISO, and other styles
5

Koutsomyti, Konstantia. "A configurable vector processor for accelerating speech coding algorithms." Thesis, Loughborough University, 2007. https://dspace.lboro.ac.uk/2134/35006.

Full text
Abstract:
The growing demand for voice-over-packet (VoIP) services and multimedia-rich applications has made increasingly important the efficient, real-time implementation of low-bit-rate speech coders on embedded VLSI platforms. Such speech coders are designed to substantially reduce the bandwidth requirements, thus enabling dense multichannel gateways in a small form factor. This, however, comes at a high computational cost, which mandates the use of very high-performance embedded processors. This thesis investigates the potential acceleration of two major ITU-T speech coding algorithms, namely G.729A and G.723.1, through their efficient implementation on a configurable, extensible vector embedded CPU architecture. New scalar and vector ISAs were introduced which resulted in up to 80% reduction in the dynamic instruction count of both workloads. These instructions were subsequently encapsulated into a parametric, hybrid SISD (scalar processor)–SIMD (vector) processor. This work presents the research and implementation of the vector datapath of this vector coprocessor, which is tightly coupled to a Sparc-V8-compliant CPU, the optimisation and simulation methodologies employed, and the use of Electronic System Level (ESL) techniques to rapidly design SIMD datapaths.
APA, Harvard, Vancouver, ISO, and other styles
6

Huang, Ruth Christiana. "Designing Anti-Islanding Detection Using the Synchrophasor Vector Processor." DigitalCommons@CalPoly, 2013. https://digitalcommons.calpoly.edu/theses/1001.

Full text
Abstract:
The need for distributed generation (DG) has become more and more pressing because of the adverse effects of fossil fuels and the fear of running out of them. DG offers reduced transmission losses, voltage support, controllability of the system, decreased transmission and distribution costs, power quality improvement, energy efficiency, and a reduced reserve margin. The adverse effects of DG are voltage flicker, harmonics, and islanding. Islanding occurs when the DG continues to energize the power system after the main utility is disconnected. Detecting islanding is important for personnel safety, speedy restoration, and equipment protection. This paper describes the different islanding detection methods currently used and the benefits of combining two passive islanding detection methods, under/over-voltage detection and voltage phase jump detection, using the synchrophasor vector processor (SVP).
APA, Harvard, Vancouver, ISO, and other styles
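At their core, the two passive detection methods this thesis combines reduce to simple threshold tests on synchrophasor measurements; a toy sketch whose thresholds (the 0.88–1.10 pu voltage window and the 10° phase jump) are illustrative assumptions, not values from the thesis:

```python
def voltage_trip(v_pu, low=0.88, high=1.10):
    """Under/over-voltage detection: flag possible islanding when the
    per-unit voltage magnitude leaves the [low, high] window.
    Thresholds are illustrative only."""
    return v_pu < low or v_pu > high

def phase_jump_trip(delta_phase_deg, threshold_deg=10.0):
    """Voltage phase-jump detection: flag when the phase angle shifts by
    more than a threshold between measurements (illustrative value)."""
    return abs(delta_phase_deg) > threshold_deg

def islanding_suspected(v_pu, delta_phase_deg):
    """Combined passive scheme: either detector can raise the flag."""
    return voltage_trip(v_pu) or phase_jump_trip(delta_phase_deg)

print(voltage_trip(1.0))             # False: nominal voltage
print(voltage_trip(1.2))             # True: over-voltage
print(islanding_suspected(1.0, 15.0))  # True: phase jump trips the scheme
```

Combining the two detectors shrinks the non-detection zone, since an island that happens to hold voltage magnitude steady may still show a phase jump.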
7

Bernstein, Raymond F. "A pipelined vector processor and memory architecture for cyclostationary processing." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 1995. http://handle.dtic.mil/100.2/ADA305842.

Full text
Abstract:
Dissertation (Ph.D. in Electrical Engineering), Naval Postgraduate School, December 1995. Dissertation supervisor: Herschel H. Loomis, Jr. Includes bibliographical references. Also available online.
APA, Harvard, Vancouver, ISO, and other styles
8

Karlsson, Andréas. "Algorithm Adaptation and Optimization of a Novel DSP Vector Co-processor." Thesis, Linköping University, Computer Engineering, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-57427.

Full text
Abstract:
The Division of Computer Engineering at Linköping University is currently researching the possibility of creating a highly parallel DSP platform that can keep up with the computational needs of upcoming standards for various applications, at low cost and low power consumption. The architecture is called ePUMA and it combines a general RISC DSP master processor with eight SIMD co-processors on a single chip. The master processor will act as the main processor for general tasks and execution control, while the co-processors will accelerate compute-intensive and parallel DSP kernels. This thesis investigates the performance potential of the co-processors by implementing matrix algebra kernels for QR decomposition, LU decomposition, matrix determinant and matrix inverse that run on a single co-processor. The kernels are then evaluated to find possible problems with the co-processors' microarchitecture and to suggest solutions to the problems that might exist. The evaluation shows that the performance potential is very good, but a few problems have been identified that cause significant overhead in the kernels. Pipeline mismatches, which occur due to different pipeline lengths for different instructions, cause pipeline hazards, and the current solution to this doesn't allow effective use of the pipeline. In some cases, the single-port memories will cause bottlenecks, but the thesis suggests that the situation could be greatly improved by using buffered memory write-back. Also, the lack of register forwarding makes kernels with many data dependencies run unnecessarily slowly.
APA, Harvard, Vancouver, ISO, and other styles
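One of the kernels adapted in this thesis, LU decomposition, is compact enough to sketch in scalar form; a minimal Doolittle factorisation without pivoting, with the determinant falling out of the result, offered as an illustration only and unrelated to the ePUMA implementation:

```python
def lu_decompose(a):
    """Doolittle LU factorisation without pivoting, packed in place:
    after the loop, U sits on and above the diagonal and L's multipliers
    (unit diagonal implied) sit below it."""
    n = len(a)
    a = [row[:] for row in a]  # work on a copy
    for k in range(n):
        for i in range(k + 1, n):
            a[i][k] /= a[k][k]                # multiplier, stored in L's slot
            for j in range(k + 1, n):
                a[i][j] -= a[i][k] * a[k][j]  # eliminate below the pivot
    return a

def determinant(lu):
    """det(A) is the product of U's diagonal once A = LU is available."""
    d = 1.0
    for i in range(len(lu)):
        d *= lu[i][i]
    return d

lu = lu_decompose([[4.0, 3.0], [6.0, 3.0]])
print(lu)               # [[4.0, 3.0], [1.5, -1.5]]
print(determinant(lu))  # -6.0
```

The inner elimination loop is a rank-1 update (each row minus a scalar times the pivot row), which is exactly the kind of vectorisable work a SIMD co-processor is built for.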
9

Chou, Christopher Han-Yu. "VIPERS II : a soft-core vector processor with single-copy scratchpad memory." Thesis, University of British Columbia, 2010. http://hdl.handle.net/2429/23595.

Full text
Abstract:
Previous work has demonstrated that soft-core vector processors in FPGAs can be applied to speed up data-parallel embedded applications while providing users an easy-to-use platform to trade off performance and area. However, their performance is limited by load and store latencies, requiring extra software design effort to optimize performance. This thesis presents VIPERS II, a new vector ISA and the corresponding microarchitecture, in which the vector processor reads and writes directly to a scratchpad memory instead of a vector register file. With this approach, the load and store operations and their inherent latencies can often be eliminated if the working set of data fits in the vector scratchpad memory. Moreover, with the removal of load/store latencies, the user doesn't have to use loop unrolling to enhance performance, reducing the amount of software effort required and making the vectorized code more compact. The thesis shows the new architecture has the potential to achieve performance similar to that of the unrolled versions of the benchmarks, without actually unrolling the loop. Hardware performance results of VIPERS II demonstrate up to a 47x speedup over a Nios II processor with only 13x more resources used.
APA, Harvard, Vancouver, ISO, and other styles
10

Paredes, Lopez Mireya. "Exploring vectorisation for parallel breadth-first search on an advanced vector processor." Thesis, University of Manchester, 2017. https://www.research.manchester.ac.uk/portal/en/theses/exploring-vectorisation-for-parallel-breadthfirst-search-on-an-advanced-vector-processor(f5de60ab-179e-4dbe-9934-dc3e22d0b8a8).html.

Full text
Abstract:
Modern applications generate a massive amount of data that is challenging to process or analyse. Graph algorithms have emerged as a solution for the analysis of this data because they can represent the entities participating in the generation of large-scale datasets in terms of vertices and their relationships in terms of edges. Graph analysis algorithms are used for finding patterns within these relationships, aiming to extract information to be further analysed. Breadth-first search (BFS) is one of the main graph search algorithms used for graph analysis, and its optimisation has been widely researched using different parallel computers. However, BFS parallelisation has been shown to be challenging because of its inherent characteristics, including irregular memory access patterns, data dependencies and workload imbalance, that limit its scalability. This thesis investigates the optimisation of BFS on the Xeon Phi, a modern parallel architecture provided with an advanced vector processor, using a self-created development framework integrated with the Graph 500 benchmark. As a result, optimised parallel versions of two high-level algorithms for BFS were created using vectorisation, starting with the conventional top-down BFS algorithm and, building on this, leading to the hybrid BFS algorithm. The best implementations resulted in speedups of 1.37x and 1.33x, respectively, over the state of the art for a one-million-vertex graph. The hybrid BFS algorithm can be further used by other graph analysis algorithms, and the lessons learned from vectorisation can be applied to other algorithms targeting existing and future models of the Xeon Phi and other advanced vector architectures.
APA, Harvard, Vancouver, ISO, and other styles
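The conventional top-down BFS that serves as this thesis's starting point can be sketched in a few lines; a scalar Python reference version (the vectorised Xeon Phi implementations are far more involved, and the hybrid algorithm additionally switches to a bottom-up sweep when the frontier grows large):

```python
from collections import deque

def bfs_levels(adj, source):
    """Top-down BFS: expand the frontier level by level, recording each
    reachable vertex's distance from the source."""
    level = {source: 0}
    frontier = deque([source])
    while frontier:
        u = frontier.popleft()
        for v in adj[u]:
            if v not in level:        # irregular, data-dependent access
                level[v] = level[u] + 1
                frontier.append(v)
    return level

adj = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2, 4], 4: [3]}
print(bfs_levels(adj, 0))  # {0: 0, 1: 1, 2: 1, 3: 2, 4: 3}
```

The `if v not in level` membership test is the source of the irregular memory accesses and data dependencies the abstract identifies as the obstacle to vectorisation.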
More sources

Books on the topic "Vector processor"

1

Burkhart, H., ed. CONPAR 90-VAPP IV: Joint International Conference on Vector and Parallel Processing, Zurich, Switzerland, September 10-13, 1990: Proceedings. Springer-Verlag, 1990.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Commission, United States International Trade. Vector supercomputers from Japan. U.S. International Trade Commission, 1997.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
3

United States International Trade Commission. Vector supercomputers from Japan. U.S. International Trade Commission, 1996.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
4

United States International Trade Commission. Vector supercomputers from Japan. U.S. International Trade Commission, 1996.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
5

United States International Trade Commission. Vector supercomputers from Japan. U.S. International Trade Commission, 1997.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
6

Forecasting Aggregated Vector ARMA Processes. Springer-Verlag, 1987.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
7

Lütkepohl, Helmut. Forecasting Aggregated Vector ARMA Processes. Springer Berlin Heidelberg, 1987. http://dx.doi.org/10.1007/978-3-642-61584-9.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Brüggemann, Ralf. Model Reduction Methods for Vector Autoregressive Processes. Springer Berlin Heidelberg, 2004. http://dx.doi.org/10.1007/978-3-642-17029-4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

International Conference on Vector and Parallel Processors in Computational Science (2nd, 1984, Oxford, England). Vector and parallel processors in computational science: Proceedings of the Second International Conference on Vector and Parallel Processors in Computational Science, Oxford, 28-31 August 1984. Edited by Iain S. Duff and John Ker Reid. North-Holland, 1985.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
10

Jefferies, Brian. Evolution processes and the Feynman-Kac formula. Kluwer Academic Publishers, 1996.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
More sources

Book chapters on the topic "Vector processor"

1

Weik, Martin H. "vector processor." In Computer Science and Communications Dictionary. Springer US, 2000. http://dx.doi.org/10.1007/1-4020-0613-6_20702.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Goossens, Bernard. "A multithreaded vector co-processor." In Lecture Notes in Computer Science. Springer Berlin Heidelberg, 1997. http://dx.doi.org/10.1007/3-540-63371-5_31.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Alves, J. C., A. Puga, L. Corte-Real, and J. S. Matos. "ProHos-1 — A vector processor for the efficient estimation of higher-order moments." In Vector and Parallel Processing — VECPAR'96. Springer Berlin Heidelberg, 1997. http://dx.doi.org/10.1007/3-540-62828-2_115.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Figueiredo, Renato J. O., José A. B. Fortes, and Zina Ben Miled. "Spatial Data Locality with Respect to Degree of Parallelism in Processor-and-Memory Hierarchies." In Vector and Parallel Processing – VECPAR’98. Springer Berlin Heidelberg, 1999. http://dx.doi.org/10.1007/10703040_30.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Thomas, Stefan. "Preconditioned conjugate gradient methods for semiconductor device simulation on a CRAY C90 vector processor." In Vector and Parallel Processing — VECPAR'96. Springer Berlin Heidelberg, 1997. http://dx.doi.org/10.1007/3-540-62828-2_118.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Lau, K. K., and X. Z. Qiao. "FFT on a new parallel vector processor." In Lecture Notes in Computer Science. Springer Berlin Heidelberg, 1986. http://dx.doi.org/10.1007/3-540-16811-7_157.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Pan, Pingping, Jun Wu, Songyuan Zhao, Haoqi Ren, and Zhifeng Zhang. "A Deep Learning Compiler for Vector Processor." In Communications and Networking. Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-67720-6_46.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Zhao, Yuekai, Jianzhuang Lu, and Xiaowen Chen. "Accelerating Depthwise Separable Convolutions with Vector Processor." In Lecture Notes in Computer Science. Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-86340-1_12.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Kojima, Keiji, Shun’ichi Torii, and Seiichi Yoshizumi. "IDP — A Main Storage Based Vector Database Processor —." In The Kluwer International Series in Engineering and Computer Science. Springer US, 1988. http://dx.doi.org/10.1007/978-1-4613-1679-4_4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Orin, David E., P. Sadayappan, Y. L. C. Ling, and K. W. Olson. "Robotics Vector Processor Architecture for Real-Time Control." In Sensor-Based Robots: Algorithms and Architectures. Springer Berlin Heidelberg, 1991. http://dx.doi.org/10.1007/978-3-642-75530-9_11.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Vector processor"

1

Momose, Shintaro. "SX-ACE processor: NEC's brand-new vector processor." In 2014 IEEE Hot Chips 26 Symposium (HCS). IEEE, 2014. http://dx.doi.org/10.1109/hotchips.2014.7478805.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Spinean, Bogdan, Georgi Kuzmanov, and Georgi Gaydadjiev. "Vector processor customization for FFT." In 2011 International Conference on Embedded Computer Systems: Architectures, Modeling, and Simulation (SAMOS XI). IEEE, 2011. http://dx.doi.org/10.1109/samos.2011.6045451.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Stanic, Milan, Oscar Palomar, Timothy Hayes, Ivan Ratkovic, Osman Unsal, and Adrian Cristal. "Towards low-power embedded vector processor." In CF'16: Computing Frontiers Conference. ACM, 2016. http://dx.doi.org/10.1145/2903150.2903485.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Sunderlin, Tim A., and James A. Carter III. "Optical vector matrix processor component characterization." In Aerospace/Defense Sensing and Controls, edited by Dennis R. Pape. SPIE, 1996. http://dx.doi.org/10.1117/12.243134.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Pochapsky, Eugene, and David Casasent. "Optical Linear Heterodyne Matrix-Vector Processor." In 1988 Los Angeles Symposium--O-E/LASE '88, edited by Kul B. Bhasin and Brian M. Hendrickson. SPIE, 1988. http://dx.doi.org/10.1117/12.944191.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Tanenhaus, Martin E., Moti Barkan, and Alex Genusov. "Gallium arsenide vector processor for high-performance digital signal processor applications." In SPIE Proceedings, edited by Mark P. Bendett, Daniel H. Butler, Jr., Arati Prabhakar, and Andrew C. Yang. SPIE, 1990. http://dx.doi.org/10.1117/12.21004.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Liu, Zhiduo, Aaron Severance, Satnam Singh, and Guy G. F. Lemieux. "Accelerator compiler for the VENICE vector processor." In the ACM/SIGDA international symposium. ACM Press, 2012. http://dx.doi.org/10.1145/2145694.2145732.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Gee, J. D., and A. J. Smith. "The performance impact of vector processor caches." In Proceedings of the Twenty-Fifth Hawaii International Conference on System Sciences. IEEE, 1992. http://dx.doi.org/10.1109/hicss.1992.183193.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Jacob, Arpith, Brandon Harris, Jeremy Buhler, Roger Chamberlain, and Young Cho. "Scalable Softcore Vector Processor for Biosequence Applications." In 2006 14th Annual IEEE Symposium on Field-Programmable Custom Computing Machines. IEEE, 2006. http://dx.doi.org/10.1109/fccm.2006.62.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Deering, Michael, Stephanie Winner, Bic Schediwy, Chris Duffy, and Neil Hunt. "The triangle processor and normal vector shader." In the 15th annual conference. ACM Press, 1988. http://dx.doi.org/10.1145/54852.378468.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Vector processor"

1

Mosca, Eugene P., Frank P. Pursel, Richard D. Griffin, and John N. Lee. Acousto-Optical Vector Matrix Product Processor: Implementation Issues. Defense Technical Information Center, 1989. http://dx.doi.org/10.21236/ada207933.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Hammond, Simon David, and Christian Robert Trott. Optimizing the Performance of Sparse-Matrix Vector Products on Next-Generation Processors. Office of Scientific and Technical Information (OSTI), 2017. http://dx.doi.org/10.2172/1528773.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Ho, Hwai-Chung, and Tze-Chien Sun. Limiting Distributions of Non-Linear Vector Functions of Stationary Gaussian Processes. Defense Technical Information Center, 1988. http://dx.doi.org/10.21236/ada194569.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Lamichhane, Kamal. Search for New Bosons in Gluon-Gluon and Vector Boson Fusion Processes at the LHC and Development of Silicon Module Assembly Techniques for the CMS High Granularity Calorimeter. Office of Scientific and Technical Information (OSTI), 2020. http://dx.doi.org/10.2172/1762133.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

de Caritat, Patrice, Brent McInnes, and Stephen Rowins. Towards a heavy mineral map of the Australian continent: a feasibility study. Geoscience Australia, 2020. http://dx.doi.org/10.11636/record.2020.031.

Full text
Abstract:
Heavy minerals (HMs) are minerals with a specific gravity greater than 2.9 g/cm3. They are commonly highly resistant to physical and chemical weathering, and therefore persist in sediments as lasting indicators of the (former) presence of the rocks they formed in. The presence/absence of certain HMs, their associations with other HMs, their concentration levels, and the geochemical patterns they form in maps or 3D models can be indicative of geological processes that contributed to their formation. Furthermore, trace element and isotopic analyses of HMs have been used to vector to mineralisation or constrain the timing of geological processes. The positive role of HMs in mineral exploration is well established in other countries, but comparatively little understood in Australia. Here we present the results of a pilot project that was designed to establish, test and assess a workflow to produce a HM map (or atlas of maps) and dataset for Australia. This would represent a critical step in the ability to detect anomalous HM patterns, as it would establish the background HM characteristics (i.e., unrelated to mineralisation). Further, the extremely rich dataset produced would be a valuable input into any future machine learning/big data-based prospectivity analysis. The pilot project consisted of selecting ten sites from the National Geochemical Survey of Australia (NGSA) and separating and analysing the HM contents from the 75-430 µm grain-size fraction of the top (0-10 cm depth) sediment samples. A workflow was established and tested based on the density separation of the HM-rich phase by combining a shake table and the use of dense liquids. The automated mineralogy quantification was performed on a TESCAN® Integrated Mineral Analyser (TIMA) that identified and mapped thousands of grains in a matter of minutes for each sample.
The results indicated that: (1) the NGSA samples are appropriate for HM analysis; (2) over 40 HMs were effectively identified and quantified using TIMA automated quantitative mineralogy; (3) the resultant HMs’ mineralogy is consistent with the samples’ bulk geochemistry and regional geological setting; and (4) the HM makeup of the NGSA samples varied across the country, as shown by the mineral mounts and preliminary maps. Based on these observations, HM mapping of the continent using NGSA samples will likely result in coherent and interpretable geological patterns relating to bedrock lithology, metamorphic grade, degree of alteration and mineralisation. It could assist in geological investigations especially where outcrop is minimal, challenging to correctly attribute due to extensive weathering, or simply difficult to access. It is believed that a continental-scale HM atlas for Australia could assist in derisking mineral exploration and lead to investment, e.g., via tenement uptake, exploration, discovery and ultimately exploitation. As some HMs are hosts for technology critical elements such as rare earth elements, their systematic and internally consistent quantification and mapping could lead to resource discovery essential for a more sustainable, lower-carbon economy.
APA, Harvard, Vancouver, ISO, and other styles