To see the other types of publications on this topic, follow the link: Floating-point unit.

Journal articles on the topic 'Floating-point unit'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 journal articles for your research on the topic 'Floating-point unit.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Gavrielov, Moshe, and Lev Epstein. "The NS32081 Floating-point Unit." IEEE Micro 6, no. 2 (1986): 6–12. http://dx.doi.org/10.1109/mm.1986.304737.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Burud, Anand S., and Pradip C. Bhaskar. "Processor Design Using 32 Bit Single Precision Floating Point Unit." International Journal of Trend in Scientific Research and Development 2, no. 4 (2018): 198–202. http://dx.doi.org/10.31142/ijtsrd12912.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Galal, Sameh, and Mark Horowitz. "Energy-Efficient Floating-Point Unit Design." IEEE Transactions on Computers 60, no. 7 (2011): 913–22. http://dx.doi.org/10.1109/tc.2010.121.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Kammer, Hubert. "The SUPRENUM vector floating-point unit." Parallel Computing 7, no. 3 (1988): 315–23. http://dx.doi.org/10.1016/0167-8191(88)90050-6.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Mehta, Sonali, Balwinder Singh, and Dilip Kumar. "Performance Analysis of Floating Point MAC Unit." International Journal of Computer Applications 78, no. 1 (2013): 38–41. http://dx.doi.org/10.5120/13456-1139.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Hicks, T. N., R. E. Fry, and P. E. Harvey. "POWER2 floating-point unit: Architecture and implementation." IBM Journal of Research and Development 38, no. 5 (1994): 525–36. http://dx.doi.org/10.1147/rd.385.0525.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Schwarz, E. M., and C. A. Krygowski. "The S/390 G5 floating-point unit." IBM Journal of Research and Development 43, no. 5.6 (1999): 707–21. http://dx.doi.org/10.1147/rd.435.0707.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Gerwig, G., H. Wetter, E. M. Schwarz, et al. "The IBM eServer z990 floating-point unit." IBM Journal of Research and Development 48, no. 3.4 (2004): 311–22. http://dx.doi.org/10.1147/rd.483.0311.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Timmermann, D., B. Rix, H. Hahn, and B. J. Hosticka. "A CMOS floating-point vector-arithmetic unit." IEEE Journal of Solid-State Circuits 29, no. 5 (1994): 634–39. http://dx.doi.org/10.1109/4.284719.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Vasudeva, G., and Bharathi Gururaj. "Floating Point Unit with High Precision Efficiency." International Journal of Soft Computing and Engineering 15, no. 2 (2025): 24–30. https://doi.org/10.35940/ijsce.b3669.15020525.

Full text
Abstract:
In this paper, we dive into designing a Single Precision Floating Point Unit (FPU), a key player in modern processors. FPUs are essential for handling complex numerical calculations with high precision and a broad range, making them indispensable in scientific research, graphics rendering, and machine learning. Our design centers around two main components: the Brent-Kung adder and the radix-4 Booth multiplier. The Brent-Kung adder is our go-to for fast addition and subtraction. Thanks to its clever parallel-prefix structure, it minimises delays even as the numbers get bigger. For multiplication
APA, Harvard, Vancouver, ISO, and other styles
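The Brent-Kung adder described in the abstract above is a parallel-prefix design. As a rough illustration only (not code from the cited paper; the names `gp`, `op` and `N` are mine), the following C sketch evaluates the same generate/propagate prefix operator in software for an 8-bit addition, so the carry logic the hardware tree computes can be checked numerically.

```c
/* Illustrative sketch, not the authors' design: the (generate, propagate)
 * prefix operator that a Brent-Kung adder evaluates in a log-depth tree,
 * here applied serially to an 8-bit addition so the arithmetic is easy
 * to verify in software. */
#include <stdio.h>
#include <stdint.h>

#define N 8

typedef struct { int g, p; } gp;           /* group generate / propagate */

static gp op(gp hi, gp lo) {               /* the associative prefix operator */
    gp r = { hi.g | (hi.p & lo.g), hi.p & lo.p };
    return r;
}

int main(void) {
    uint8_t a = 0xB7, b = 0x5C;
    gp bitgp[N], prefix[N];
    for (int i = 0; i < N; i++) {
        int ai = (a >> i) & 1, bi = (b >> i) & 1;
        bitgp[i].g = ai & bi;               /* bit i generates a carry */
        bitgp[i].p = ai ^ bi;               /* bit i propagates a carry */
    }
    /* Serial prefix for clarity; Brent-Kung computes the same prefixes with
     * a sparse up-sweep/down-sweep tree in O(log N) logic levels. */
    prefix[0] = bitgp[0];
    for (int i = 1; i < N; i++)
        prefix[i] = op(bitgp[i], prefix[i - 1]);

    unsigned sum = 0;
    for (int i = 0; i < N; i++) {
        int carry_in = (i == 0) ? 0 : prefix[i - 1].g;  /* carry into bit i */
        sum |= (unsigned)(bitgp[i].p ^ carry_in) << i;
    }
    printf("0x%02X + 0x%02X = 0x%02X (expected 0x%02X)\n",
           (unsigned)a, (unsigned)b, sum, (a + b) & 0xFFu);
    return 0;
}
```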
11

Vasudeva, G. "Floating Point Unit with High Precision Efficiency." International Journal of Soft Computing and Engineering (IJSCE) 15, no. 2 (2025): 24–30. https://doi.org/10.35940/ijsce.B3669.15020525.

Full text
Abstract:
In this paper, we dive into the design of a Single Precision Floating Point Unit (FPU), a key player in the world of modern processors. FPUs are essential for handling complex numerical calculations with high precision and a broad range, making them indispensable in areas like scientific research, graphics rendering, and machine learning. Our design centers around two main components: the Brent-Kung adder and the radix-4 Booth multiplier. The Brent-Kung adder is our go-to for fast addition and subtraction. Thanks to its clever parallel-prefix structure, it keeps del
APA, Harvard, Vancouver, ISO, and other styles
12

Vasudeva, G. "Floating Point Unit with High Precision Efficiency." International Journal of Soft Computing and Engineering (IJSCE) 15, no. 2 (2025): 24–30. https://doi.org/10.35940/ijsce.B3669.15020525/.

Full text
Abstract:
In this paper, we dive into designing a Single Precision Floating Point Unit (FPU), a key player in modern processors. FPUs are essential for handling complex numerical calculations with high precision and a broad range, making them indispensable in scientific research, graphics rendering, and machine learning. Our design centers around two main components: the Brent-Kung adder and the radix-4 Booth multiplier. The Brent-Kung adder is our go-to for fast addition and subtraction. Thanks to its clever parallel-prefix structure, it minimises delays even as the numbe
APA, Harvard, Vancouver, ISO, and other styles
13

Burud, Anand S., and Pradip C. Bhaskar. "Processor Design Using 32 Bit Single Precision Floating Point Unit." International Journal of Trend in Scientific Research and Development 2, no. 4 (2018): 198–202. https://doi.org/10.31142/ijtsrd12912.

Full text
Abstract:
Floating point operations have found concentrated applications in many different fields that require high-precision computation, owing to their great dynamic range, high exactness and simple operation rules. High accuracy is needed for the design and research of floating point processing units. With the growing need for floating point operations in high-speed data signal processing and logical operations, the requirements on high-speed hardware floating point arithmetic units have become increasingly demanding. The ALU
APA, Harvard, Vancouver, ISO, and other styles
14

Maladkar, Kishan. "Design and Implementation of a 32-bit Floating Point Unit." International Journal for Research in Applied Science and Engineering Technology 9, no. VI (2021): 731–36. http://dx.doi.org/10.22214/ijraset.2021.35052.

Full text
Abstract:
A Floating Point Unit is a math co-processor that is in high demand in Digital Signal Processing (DSP), processors and more. It is used to perform operations on floating point numbers such as addition, subtraction, multiplication, division, square root and more. It is specifically designed to carry out mathematical operations, and it can also be emulated by the CPU. The floating point unit is a common building block in advanced Digital Signal Processing and various processor applications. The aim was to develop an optimized floating point unit so that the delay was reduced and efficiency was i
APA, Harvard, Vancouver, ISO, and other styles
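Several of the designs in this list, including the one above, operate on the IEEE 754 single-precision format. As a minimal, generic illustration (not the paper's Verilog), the C sketch below unpacks the three fields a 32-bit FPU datapath works with: 1 sign bit, 8 biased exponent bits, and 23 fraction bits.

```c
/* Generic illustration of IEEE 754 single-precision field extraction;
 * values and layout follow the standard, not any particular cited design. */
#include <stdio.h>
#include <stdint.h>
#include <string.h>

int main(void) {
    float x = -6.25f;
    uint32_t bits;
    memcpy(&bits, &x, sizeof bits);          /* reinterpret bits, no conversion */

    uint32_t sign = bits >> 31;
    uint32_t expo = (bits >> 23) & 0xFFu;    /* exponent, biased by 127 */
    uint32_t frac = bits & 0x7FFFFFu;        /* fraction; implicit leading 1 for normals */

    printf("sign=%u  exponent=%u (unbiased %d)  fraction=0x%06X\n",
           sign, expo, (int)expo - 127, frac);
    return 0;
}
```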
15

Mohammed, Falih Hassan, Farhood Hussein Karime, and Bahaa Al-Musawi. "Design and implementation of fast floating point units for FPGAs." Indonesian Journal of Electrical Engineering and Computer Science 19, no. 3 (2022): 1480–89. https://doi.org/10.11591/ijeecs.v19.i3.pp1480-1489.

Full text
Abstract:
Due to growth in demand for high-performance applications that require high numerical stability and accuracy, the need for floating-point FPGA designs has increased. In this work, an open-source and efficient floating-point unit is implemented on a standard Xilinx Spartan-6 FPGA platform. The proposed design is described in a hierarchical way, starting from functional block descriptions toward module-level design. Our implementation used minimal resources available on the targeted FPGA board, was tested on the Spartan-6 FPGA platform and verified in ModelSim. The open-source framework can be embedded
APA, Harvard, Vancouver, ISO, and other styles
16

Tang, Xia Qing, Xiang Liu, Jun Qiang Gao, and Bo Lin. "Design and Implementation of FPGA-Based High-Performance Floating Point Arithmetic Unit." Applied Mechanics and Materials 599-601 (August 2014): 1465–69. http://dx.doi.org/10.4028/www.scientific.net/amm.599-601.1465.

Full text
Abstract:
When an FPGA processes data in fixed point, the achievable accuracy is limited, and the IP Core floating point unit carries some design risk in use. Based on an improved floating point unit and an optimized algorithm, this design implements single-precision floating-point add/subtract, multiply and divide operations. A comparison with the IP Core floating-point unit provided by the FPGA development software gives the following results: the maximum clock frequency and latency are basically unchanged, while the former occupies fewer hardware resources, to complete a plus
APA, Harvard, Vancouver, ISO, and other styles
17

Sun, Wei, Jun She An, and Shuang Yang. "A Real-Time Sequence Detection Algorithm for Floating Point Unit Design." Applied Mechanics and Materials 687-691 (November 2014): 3494–97. http://dx.doi.org/10.4028/www.scientific.net/amm.687-691.3494.

Full text
Abstract:
Sequence detection is used in many algorithms and applications. Sequences differ depending on different demands. In the process of floating-point CORDIC coprocessor design, data need to be converted from floating point format to fixed point format. In this process it is necessary to detect the number of consecutive zeros. We design a leading-zero-counting algorithm to achieve this function, and this conversion process is completed in a very short, fixed time, to meet the needs of the floating point CORDIC coprocessor.
APA, Harvard, Vancouver, ISO, and other styles
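The leading-zero-counting step the abstract above refers to can be shown with a small, simplified sketch (my own, not the authors' hardware algorithm): counting how many consecutive zeros precede the most significant 1 of a 24-bit value tells the converter how far to shift when moving between floating-point and fixed-point formats.

```c
/* Simplified leading-zero count over a 24-bit significand; hardware
 * versions do this in parallel, this loop is only for illustration. */
#include <stdio.h>
#include <stdint.h>

static int count_leading_zeros_24(uint32_t m) {
    int n = 0;
    for (uint32_t mask = 1u << 23; mask != 0 && (m & mask) == 0; mask >>= 1)
        n++;                                  /* stop at the first 1 bit */
    return n;                                 /* returns 24 when m == 0 */
}

int main(void) {
    printf("%d\n", count_leading_zeros_24(0x000A00u));  /* prints 12 */
    return 0;
}
```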
18

Lai, Shuhao, and Xiaoyong He. "Design of the vector floating-point unit with high area efficiency." Journal of Physics: Conference Series 2524, no. 1 (2023): 012027. http://dx.doi.org/10.1088/1742-6596/2524/1/012027.

Full text
Abstract:
With the development of the information age, there is an increasing trend towards mixed precision and vector operations in floating point arithmetic. Traditional floating-point arithmetic is usually implemented using multiple modules to ensure the required speed, but this approach significantly increases area and reduces area efficiency, resulting in wasted hardware resources. This paper focuses on optimizing speed and area to improve area efficiency. The proposed floating-point unit can perform half-precision, single-precision, and double-precision floating-
APA, Harvard, Vancouver, ISO, and other styles
19

Liu, De, MingJiang Wang, and Shikai Zuo. "Delay-optimized floating point fused add-subtract unit." IEICE Electronics Express 12, no. 17 (2015): 20150642. http://dx.doi.org/10.1587/elex.12.20150642.

Full text
APA, Harvard, Vancouver, ISO, and other styles
20

Sokolov, I. A., Y. V. Rogdestvenski, Y. G. Diachenko, et al. "Delay-Insensitive Floating Point Multiply-Add-Subtract Unit." Problems of advanced micro- and nanoelectronic systems development, no. 3 (2019): 20–25. http://dx.doi.org/10.31114/2078-7707-2019-3-20-25.

Full text
APA, Harvard, Vancouver, ISO, and other styles
21

Chong, Yee Jern, and S. Parameswaran. "Custom Floating-Point Unit Generation for Embedded Systems." IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems 28, no. 5 (2009): 638–50. http://dx.doi.org/10.1109/tcad.2009.2013999.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

Zhang, Bin, and Jizhong Zhao. "Elementary Function Computing Method for Floating-Point Unit." Journal of Signal Processing Systems 88, no. 3 (2016): 311–21. http://dx.doi.org/10.1007/s11265-016-1166-x.

Full text
APA, Harvard, Vancouver, ISO, and other styles
23

Jacobi, Christian, and Christoph Berg. "Formal Verification of the VAMP Floating Point Unit." Formal Methods in System Design 26, no. 3 (2005): 227–66. http://dx.doi.org/10.1007/s10703-005-1613-y.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

Bhuvanapriya, R., and T. Menakadevi. "Design and Implementation of FPU for Optimised Speed." International Journal of Engineering and Advanced Technology (IJEAT) 9, no. 3 (2020): 3922–33. https://doi.org/10.35940/ijeat.C6444.029320.

Full text
Abstract:
Currently, each CPU has one or more Floating Point Units (FPUs) integrated inside it. They are usually utilized in math-intensive applications such as digital signal processing, and are found in engineering, medical and military fields as well as in other fields requiring audio, image or video handling. A high-speed and energy-efficient floating point unit is naturally needed in the electronics industry as an arithmetic unit in microprocessors. The operations accounting for 95% of conventional FPU usage are multiplication and addition. Many applications need
APA, Harvard, Vancouver, ISO, and other styles
25

Freitas, Jordana Alves de, Kátia Lopes Silva, and Mauro Hemerly Gazzani. "Implementação da operação de divisão em uma unidade de ponto flutuante de 32 bits baseada no padrão IEEE 754 em Verilog" [Implementation of the division operation in a 32-bit floating-point unit based on the IEEE 754 standard, in Verilog]. Revista ft 29, no. 142 (2025): 21–22. https://doi.org/10.69849/revistaft/ni10202501090721.

Full text
Abstract:
A Floating-Point Unit (FPU) is an essential component of a computer processor responsible for performing arithmetic operations on floating point numbers, following the specifications of the IEEE 754 standard. The FPU consists of a series of circuits and logic designed to handle the representation, manipulation and calculation of floating-point numbers. It is responsible for converting the floating-point numbers into proper internal representations, performing the necessary mathematical operations and providing the result in the correct format. This work presents the implementation of a 32-bit
APA, Harvard, Vancouver, ISO, and other styles
26

Bhargavi, Narahari, and B. Naga Rajesh. "VLSI Implementation of High Speed Single Precession Floating Point Unit Using Verilog." International Journal of Engineering Technology and Management Sciences 6, no. 1 (2022): 16–23. http://dx.doi.org/10.46647/ijetms.2022.v06i01.003.

Full text
Abstract:
Single-precision floating-point format is a computer number format used to represent a wide dynamic range of values. Floating point representation has widespread dominance over fixed point numbers. In recent years, researchers have been putting a lot of effort into interfacing complex signal-processing modules with processors to increase speed. In this work, the implementation of a floating point arithmetic unit that can perform addition, subtraction, multiplication, and division on 32-bit operands following the IEEE 754-2008 standard is done using
APA, Harvard, Vancouver, ISO, and other styles
27

Han, Kyung-Nam, Sang-Wook Han, and Euisik Yoon. "Fast floating-point normalisation unit realised using NOR planes." Electronics Letters 38, no. 16 (2002): 857. http://dx.doi.org/10.1049/el:20020555.

Full text
APA, Harvard, Vancouver, ISO, and other styles
28

Sohn, Jongwook, and Earl E. Swartzlander. "A Fused Floating-Point Four-Term Dot Product Unit." IEEE Transactions on Circuits and Systems I: Regular Papers 63, no. 3 (2016): 370–78. http://dx.doi.org/10.1109/tcsi.2016.2525042.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Kumar, Arun P., Bharanidharan K., Sampurna K., and Sharmila Devi K. "Generic High Performance Multimode Floating Point Unit for FPGAs." International Journal of Engineering Trends and Technology 9, no. 15 (2014): 765–69. http://dx.doi.org/10.14445/22315381/ijett-v9p345.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Li, Linghao, and Zhibiao Shao. "The Calculation and Anticipation Unit for Floating-Point Addition." Journal of Circuits, Systems and Computers 24, no. 03 (2015): 1550029. http://dx.doi.org/10.1142/s0218126615500292.

Full text
Abstract:
Most recent microprocessors include multiple special functional units to optimize their performance. In this paper, a new functional unit called the calculation and anticipation (C&A) unit is presented for the IEEE 754 standard floating-point adder (FPA), which is the most important and frequently used calculation part of both modern CPUs and GPUs. The C&A unit parallelizes the rounding step and the readjustment step, which are known as the time-consuming steps of floating-point addition with significand addition. Therefore it reduces the FPA critical path delay enormously, and even more decreases a li
APA, Harvard, Vancouver, ISO, and other styles
31

Jessani, R. M., and C. H. Olson. "The floating-point unit of the PowerPC 603e microprocessor." IBM Journal of Research and Development 40, no. 5 (1996): 559–66. http://dx.doi.org/10.1147/rd.405.0559.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Bruguera, Javier D. "Low Latency Floating-Point Division and Square Root Unit." IEEE Transactions on Computers 69, no. 2 (2020): 274–87. http://dx.doi.org/10.1109/tc.2019.2947899.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Vasudeva, G., and Bharathi Gururaj. "Design of an Efficient Single Precision Floating Point Unit." International Journal of Electrical Engineering and Computer Science 7 (March 26, 2025): 44–54. https://doi.org/10.37394/232027.2025.7.5.

Full text
Abstract:
In this paper, we present the design of a Single Precision Floating Point Unit (FPU), a key player in the world of modern processors. FPUs are essential for handling complex numerical calculations with high precision and a broad range, making them indispensable in areas like scientific research, graphics rendering, and machine learning. Our design centers around two main components: the Brent-Kung adder and the radix-4 Booth multiplier. The Brent-Kung adder is our go-to for fast addition and subtraction. Thanks to its clever parallel-prefix structure, it keeps delays minimal even as the numbers get bigger. Fo
APA, Harvard, Vancouver, ISO, and other styles
34

Daumas, Marc, and Claire Finot. "Division of Floating Point Expansions with an Application to the Computation of a Determinant." JUCS - Journal of Universal Computer Science 5, no. 6 (1999): 323–38. https://doi.org/10.3217/jucs-005-06-0323.

Full text
Abstract:
Floating point expansion is a technique for implementing multiple precision using a processor's floating point unit instead of its integer unit. Research on this subject has arisen recently from the observation that the floating point unit is becoming a more and more efficient part of modern computers. Many simple arithmetic operators and some very useful geometric operators have already been presented on expansions. Yet previous work included only a very simple division algorithm. We present in this work a new algorithm that allows us to extend the set of geometric operators with Bareiss' determi
APA, Harvard, Vancouver, ISO, and other styles
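Floating-point expansions, as used in the work above, are built from error-free transformations. The standard "two-sum" building block (a well-known technique, not the paper's specific division algorithm) is sketched below: the rounded sum s and the exact rounding error e together represent a + b without any loss.

```c
/* Knuth-style two-sum: s = fl(a + b), e = exact rounding error, so that
 * a + b == s + e holds exactly. This is the primitive that floating-point
 * expansion arithmetic is assembled from. */
#include <stdio.h>

static void two_sum(double a, double b, double *s, double *e) {
    *s = a + b;
    double bb = *s - a;
    *e = (a - (*s - bb)) + (b - bb);   /* exact error of the rounded sum */
}

int main(void) {
    double s, e;
    two_sum(1.0, 1e-20, &s, &e);       /* 1e-20 would be lost in s alone */
    printf("s = %.17g, e = %.17g\n", s, e);
    return 0;
}
```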
35

Singh, Naginder, and Kapil Parihar. "Comparative study of single precision floating point division using different computational algorithms." International Journal of Reconfigurable and Embedded Systems (IJRES) 12, no. 3 (2023): 336–44. https://doi.org/10.11591/ijres.v12.i3pp336-344.

Full text
Abstract:
This paper presents different computational algorithms to implement single precision floating point division on field programmable gate arrays (FPGA). Fast division computation algorithms can apply to all division cases by which an efficient result will be obtained in terms of delay time and power consumption. 24-bit Vedic multiplication (Urdhva-Triyakbhyam-sutra) technique enhances the computational speed of the mantissa module and this module is used to design a 32-bit floating point multiplier which is the crucial feature of this proposed design, which yields a higher computational speed an
APA, Harvard, Vancouver, ISO, and other styles
36

Dharmavaram, Asha Devi, Suresh Babu M, and Prasad Acharya G. "Custom IP Design and Verification for IEEE 754 Single Precision Floating Point Arithmetic Unit." ASEAN Engineering Journal 14, no. 2 (2024): 69–76. http://dx.doi.org/10.11113/aej.v14.20678.

Full text
Abstract:
The compact and accurate way of representing numbers in a wide range is the advantage of floating-point (FP) representation and computation. The floating-point digital signal processors offer the IPs that should have the features of low power, high performance, and less area in cost-effective designs. The proposed paper demonstrates the design and implementation of a 32-bit floating-point arithmetic unit (FPAU). The arithmetic operations performed by the FPAU are in the IEEE 754 single precision format for FP numbers. Before performing the 32-bit FP arithmetic operations, the input operands ar
APA, Harvard, Vancouver, ISO, and other styles
37

Kim, Hyunpil, and Sangook Moon. "Proxy Bits for Low Cost Floating-Point Fused Multiply–Add Unit." Journal of Circuits, Systems and Computers 25, no. 10 (2016): 1650127. http://dx.doi.org/10.1142/s0218126616501279.

Full text
Abstract:
A new floating-point fused multiply–add (FMA) unit is proposed in this paper. We observed a group of redundant bits that have no effect on the effective results of the floating-point FMA arithmetic, and figured out that two proxy bits can replace the redundant bits. We proved the existence of the proxy bits using binary arithmetic keeping track of the negligible bits. Using proxy bits, the proposed FMA unit achieves improvement in terms of cost, power consumption, and performance. The results show that the proposed FMA unit reduces the total area and latency by approximately 17.0% and 32% resp
APA, Harvard, Vancouver, ISO, and other styles
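As a reminder of the semantics a fused multiply-add unit such as the one above provides (this is only a call to the C library, not the proposed hardware): fma(a, b, c) rounds a*b + c once, so a tiny term that an unfused multiply would round away is preserved.

```c
/* Demonstrates single rounding of fma versus double rounding of a*b + c. */
#include <stdio.h>
#include <math.h>

int main(void) {
    double a = 1.0 + 0x1p-27, b = 1.0 - 0x1p-27, c = -1.0;
    printf("unfused a*b + c : %.17g\n", a * b + c);     /* prints 0 */
    printf("fused fma(a,b,c): %.17g\n", fma(a, b, c));  /* prints -2^-54 */
    return 0;
}
```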
38

Singh, Naginder, and Kapil Parihar. "Comparative study of single precision floating point division using different computational algorithms." International Journal of Reconfigurable and Embedded Systems (IJRES) 12, no. 3 (2023): 336. http://dx.doi.org/10.11591/ijres.v12.i3.pp336-344.

Full text
Abstract:
This paper presents different computational algorithms to implement single precision floating point division on field programmable gate arrays (FPGA). Fast division computation algorithms can apply to all division cases by which an efficient result will be obtained in terms of delay time and power consumption. 24-bit Vedic multiplication (Urdhva-Triyakbhyam-sutra) technique enhances the computational speed of the mantissa module and this module is used to design a 32-bit floating point multiplier which is the crucial feature of this proposed design, which yields a higher computatio
APA, Harvard, Vancouver, ISO, and other styles
39

Yang, Fengyuan. "Research and Analysis of Floating-Point Adder Principle." Applied and Computational Engineering 8, no. 1 (2023): 113–17. http://dx.doi.org/10.54254/2755-2721/8/20230092.

Full text
Abstract:
With the development of the times, computers are used more and more widely, and research and development of the adder, as the most basic operation unit, determine the development of the computer field. This paper analyzes the principles of the one-bit adder and the floating-point adder through literature analysis. The one-bit adder is the most basic type of traditional adder, alongside the bit-by-bit adder, the carry-lookahead adder and so on. The purpose of this paper is to explain the basic principle of the adder; among these, the IEEE 754 binary floating point representation is very important, so that the traditional fixed-point adder i
APA, Harvard, Vancouver, ISO, and other styles
40

Vinotheni, M. S., and K. Karthika. "Implementation of High Performance Posit-Multiplier." International Journal of Engineering Technology and Management Sciences 7, no. 4 (2023): 166–76. http://dx.doi.org/10.46647/ijetms.2023.v07i04.026.

Full text
Abstract:
To represent real numbers, practically all computer systems now employ IEEE-754 floating point. Posit has recently been offered as an alternative to IEEE-754 floating point because it provides higher accuracy and a wider dynamic range. The use of not-a-numbers (NaNs) is one of the most common criticisms: having too many of them wastes valuable bit patterns in the floating point format. As an alternative to floating point, a system known as "Universal Numbers" or UNUMs was developed. There are three variations of this system, but in terms of hardware compatibility, Type III (posit) is the best substitute fo
APA, Harvard, Vancouver, ISO, and other styles
41

Ding, Jun, and Na Li. "A FPGA-Based Design of Floating-Point FFT Processor with Dual-Core." Advanced Materials Research 811 (September 2013): 441–46. http://dx.doi.org/10.4028/www.scientific.net/amr.811.441.

Full text
Abstract:
This paper presents a dual-core floating point FFT processor design based on the CORDIC algorithm, enabling high-speed floating-point real-time FFT computation; its time complexity is (N / 4) Log (N / 2). The design unifies the floating complex multiplication and the evaluation of twiddle factors into one iteration, which not only reduces the complexity of complex multiplication but also reduces the difficulty when the butterfly unit deals with floating-point in the fast Fourier transform. The butterfly unit, unaffected by the size of external memory, can handle the Fourier transform with high sample n
APA, Harvard, Vancouver, ISO, and other styles
42

Salman Faraz, Shaikh, Yogesh Suryawanshi, Sandeep Kakde, Ankita Tijare, and Rajesh Thakare. "Design and Synthesis of Restoring Technique Based Dual Mode Floating Point Divider for Fast Computing Applications." International Journal of Engineering & Technology 7, no. 3.6 (2018): 48. http://dx.doi.org/10.14419/ijet.v7i3.6.14936.

Full text
Abstract:
Floating point division plays a vital role in fast processing applications. Division is one of the more complicated modules needed in processors. Area, delay and power consumption are the main factors that play a significant role when designing a floating point dual-precision divider. Compared to other floating-point arithmetic, the design of division is much more sophisticated and needs more time. Floating point division is the main arithmetic unit employed in the design of many processors in the fields of DSP, math processors and plenty of other application
APA, Harvard, Vancouver, ISO, and other styles
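The restoring technique named in the title above works bit by bit on the significand. A rough software sketch of that recurrence follows (significand path only; exponent handling, normalisation and rounding are omitted, and this is not the authors' dual-mode design): each step shifts in one dividend bit, tentatively subtracts the divisor, and keeps or restores the remainder depending on the sign.

```c
/* Bit-serial restoring division on unsigned operands, the per-bit idea
 * behind a restoring floating-point divider's mantissa datapath. */
#include <stdio.h>
#include <stdint.h>

static uint32_t restoring_divide(uint32_t dividend, uint32_t divisor, int bits) {
    uint64_t rem = 0;
    uint32_t q = 0;
    for (int i = bits - 1; i >= 0; i--) {
        rem = (rem << 1) | ((dividend >> i) & 1u);  /* bring down next bit */
        if (rem >= divisor) {                       /* subtraction succeeds */
            rem -= divisor;
            q = (q << 1) | 1u;
        } else {                                    /* equivalent to subtract-then-restore */
            q = q << 1;
        }
    }
    return q;                                       /* truncated quotient */
}

int main(void) {
    printf("%u\n", restoring_divide(100u, 7u, 24));  /* prints 14 */
    return 0;
}
```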
43

Yadav, Amana, and Ila Chaudhary. "Design of 32-bit Floating Point Unit for Advanced Processors." International Journal of Engineering Research and Applications 07, no. 06 (2017): 39–46. http://dx.doi.org/10.9790/9622-0706053946.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Sohn, Jongwook, and Earl E. Swartzlander. "Improved Architectures for a Fused Floating-Point Add-Subtract Unit." IEEE Transactions on Circuits and Systems I: Regular Papers 59, no. 10 (2012): 2285–91. http://dx.doi.org/10.1109/tcsi.2012.2188955.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Alachiotis, Nikolaos, and Alexandros Stamatakis. "A Vector-Like Reconfigurable Floating-Point Unit for the Logarithm." International Journal of Reconfigurable Computing 2011 (2011): 1–12. http://dx.doi.org/10.1155/2011/341510.

Full text
Abstract:
The use of reconfigurable computing for accelerating floating-point intensive codes is becoming common due to the availability of DSPs in new-generation FPGAs. We present the design of an efficient, pipelined floating-point datapath for calculating the logarithm function on reconfigurable devices. We integrate the datapath into a stand-alone LUT-based (Lookup Table) component, the LAU (Logarithm Approximation Unit). We extended the LAU, by integrating two architecturally independent, LAU-based datapaths into a larger component, the VLAU (vector-like LAU). The VLAU produces 2 results/cycle, whi
APA, Harvard, Vancouver, ISO, and other styles
46

Chen, Yunji. "Formal Verification of Godson-2 Microprocessor Floating-Point Division Unit." Journal of Computer Research and Development 43, no. 10 (2006): 1835. http://dx.doi.org/10.1360/crad20061023.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Manolopoulos, K., D. Reisis, and V. A. Chouliaras. "An efficient multiple precision floating-point Multiply-Add Fused unit." Microelectronics Journal 49 (March 2016): 10–18. http://dx.doi.org/10.1016/j.mejo.2015.10.012.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Aswani, T. S., and B. Premanand. "Area Efficient Floating Point Addition Unit With Error Detection Logic." Procedia Technology 24 (2016): 1149–54. http://dx.doi.org/10.1016/j.protcy.2016.05.068.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Alder, Fritz, Jo Van Bulck, Jesse Spielman, David Oswald, and Frank Piessens. "Faulty Point Unit: ABI Poisoning Attacks on Trusted Execution Environments." Digital Threats: Research and Practice 3, no. 2 (2022): 1–26. http://dx.doi.org/10.1145/3491264.

Full text
Abstract:
This article analyzes a previously overlooked attack surface that allows unprivileged adversaries to impact floating-point computations in enclaves through the Application Binary Interface (ABI). In a comprehensive study across 7 industry-standard and research enclave shielding runtimes for Intel Software Guard Extensions (SGX), we show that control and state registers of the x87 Floating-Point Unit (FPU) and Intel Streaming SIMD Extensions are not always properly sanitized on enclave entry. We furthermore show that this attack goes beyond the x86 architecture and can also affect RISC-V enclav
APA, Harvard, Vancouver, ISO, and other styles
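As a small illustration of why the FPU control state studied above matters (this is not the article's attack code, only standard C): changing the rounding mode through <fenv.h> silently changes results, which is exactly the kind of unsanitised state the ABI-poisoning study examines on enclave entry.

```c
/* Shows that the FPU rounding mode alters the result of the same division. */
#include <stdio.h>
#include <fenv.h>

#pragma STDC FENV_ACCESS ON

int main(void) {
    volatile double a = 1.0, b = 3.0;

    fesetround(FE_DOWNWARD);
    double down = a / b;               /* rounded toward -infinity */

    fesetround(FE_UPWARD);
    double up = a / b;                 /* rounded toward +infinity */

    fesetround(FE_TONEAREST);          /* restore the default mode */
    printf("down = %.17g\nup   = %.17g\ndiffer = %d\n", down, up, down != up);
    return 0;
}
```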
50

Jain, Sayyam, and K. B. Ramesh. "Optimized Single Precision Floating-Point ALU Design and Implementation for RISC Processors on FPGA." Recent Trends in Analog Design and Digital Devices 7, no. 2 (2024): 29–35. https://doi.org/10.5281/zenodo.11609206.

Full text
Abstract:
Single Precision Floating-Point Arithmetic Logic Units (FPALUs) play a crucial role in the performance and functionality of Reduced Instruction Set Computer (RISC) processors. This paper presents the design and implementation of an FPALU tailored for a RISC processor on a Field-Programmable Gate Array (FPGA). The FPALU is optimized for single precision floating-point arithmetic operations, including addition, subtraction, multiplication, and division. The design methodology encompasses the development of essential logic blocks, such as the op code decoder, arithmetic block, logical block,
APA, Harvard, Vancouver, ISO, and other styles