Follow this link to see other types of publications on the topic: Floating point.

Journal articles on the topic "Floating point"

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the top 50 journal articles for your research on the topic "Floating point."

Next to every source in the list of references there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Jorgensen, Alan A., Connie R. Masters, Ratan K. Guha, and Andrew C. Masters. "Bounded Floating Point: Identifying and Revealing Floating-Point Error." Advances in Science, Technology and Engineering Systems Journal 6, no. 1 (January 2021): 519–31. http://dx.doi.org/10.25046/aj060157.

2

Somasekhar, M. "Floating Point Operations in PipeRench CGRA." International Journal of Scientific Research 1, no. 6 (June 1, 2012): 67–68. http://dx.doi.org/10.15373/22778179/nov2012/24.

3

Boldo, Sylvie, Claude-Pierre Jeannerod, Guillaume Melquiond, and Jean-Michel Muller. "Floating-point arithmetic." Acta Numerica 32 (May 2023): 203–90. http://dx.doi.org/10.1017/s0962492922000101.

Abstract
Floating-point numbers have an intuitive meaning when it comes to physics-based numerical computations, and they have thus become the most common way of approximating real numbers in computers. The IEEE-754 Standard has played a large part in making floating-point arithmetic ubiquitous today, by specifying its semantics in a strict yet useful way as early as 1985. In particular, floating-point operations should be performed as if their results were first computed with an infinite precision and then rounded to the target format. A consequence is that floating-point arithmetic satisfies the ‘standard model’ that is often used for analysing the accuracy of floating-point algorithms. But that is only scraping the surface, and floating-point arithmetic offers much more. In this survey we recall the history of floating-point arithmetic as well as its specification mandated by the IEEE-754 Standard. We also recall what properties it entails and what every programmer should know when designing a floating-point algorithm. We provide various basic blocks that can be implemented with floating-point arithmetic. In particular, one can actually compute the rounding error caused by some floating-point operations, which paves the way to designing more accurate algorithms. More generally, properties of floating-point arithmetic make it possible to extend the accuracy of computations beyond working precision.
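The closing claim of this abstract, that the rounding error of an operation can itself be computed in floating point, refers to the family of error-free transformations the survey builds on. A minimal Python sketch of Knuth's classic TwoSum:

```python
def two_sum(a, b):
    # Error-free transformation (Knuth): returns (s, e) where
    # s = fl(a + b) is the rounded sum and e is the exact rounding
    # error, so that s + e == a + b holds exactly.
    s = a + b
    a_virtual = s - b
    b_virtual = s - a_virtual
    e = (a - a_virtual) + (b - b_virtual)
    return s, e

# 2**-60 is far below half an ulp of 1.0, so the rounded sum drops it
# entirely; TwoSum recovers the dropped term as the error e.
s, e = two_sum(1.0, 2.0 ** -60)
```

The six-operation sequence needs no access to the FPU's internal bits, which is exactly why building blocks like this let libraries extend accuracy beyond working precision.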
5

Blinn, J. F. "Floating-point tricks." IEEE Computer Graphics and Applications 17, no. 4 (1997): 80–84. http://dx.doi.org/10.1109/38.595279.

6

Ghosh, Aniruddha, Satrughna Singha, and Amitabha Sinha. "Floating point RNS." ACM SIGARCH Computer Architecture News 40, no. 2 (May 31, 2012): 39–43. http://dx.doi.org/10.1145/2234336.2234343.

7

Harrison, John. "Floating-Point Verification." JUCS - Journal of Universal Computer Science 13, no. 5 (May 28, 2007): 629–38. https://doi.org/10.3217/jucs-013-05-0629.

Abstract
This paper overviews the application of formal verification techniques to hardware in general, and to floating-point hardware in particular. A specific challenge is to connect the usual mathematical view of continuous arithmetic operations with the discrete world, in a credible and verifiable way.
8

Kavya, Nagireddy. "Design and Implementation of Floating-Point Addition and Floating-Point Multiplication." International Journal for Research in Applied Science and Engineering Technology 10, no. 1 (January 31, 2022): 98–101. http://dx.doi.org/10.22214/ijraset.2022.39742.

Abstract
In this paper, we present the design and implementation of floating point addition and floating point multiplication. Among the many existing designs, floating point multiplication and floating point addition offer high precision and accuracy for the data representation of the image. This project is designed and simulated on Xilinx ISE 14.7 software using Verilog. Simulation results show area reduction and delay reduction as compared to the conventional method. Keywords: FIR Filter, Floating point Addition, Floating point Multiplication, Carry Look Ahead Adder
9

Bhat, Anuja A., and Rutuja Warbhe. "Design of Floating Point Multiplier Based on Booth Algorithm Using VHDL." International Journal of Research Science & Management 4, no. 5 (May 18, 2017): 123–30. https://doi.org/10.5281/zenodo.580862.

Abstract
In this paper, a high-speed, low-power and low-delay 32-bit IEEE 754 floating point subtractor and multiplier is presented using a Booth multiplier. Multiplication is an important fundamental function in many Digital Signal Processing (DSP) applications such as the Fast Fourier Transform (FFT). Since multiplication dominates the execution time of most DSP algorithms, there is a need for a high-speed multiplier. The main objective of this research is to reduce delay and power and to increase speed. The coding is done in VHDL; synthesis and simulation have been done using the Xilinx ISE simulator. The modules designed are a 24-bit Booth multiplier for mantissa multiplication in the floating point multiplier, a 32-bit floating point subtractor and a 32-bit floating point multiplier. The computational delay obtained by the floating point subtractor, Booth multiplier and floating point multiplier is 16.180 ns, 33.159 ns and 18.623 ns respectively.
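The Booth recoding at the heart of such 24-bit mantissa multipliers can be modelled in software. Below is a pure-Python sketch of radix-2 Booth multiplication over a two's-complement register (an illustration of the recoding idea, not the paper's VHDL):

```python
def booth_multiply(m, r, bits=24):
    # Radix-2 Booth's algorithm on `bits`-wide signed operands.
    # Register layout [A : Q : q-1] is (2*bits + 2) bits wide; examining
    # the low bit pair decides whether to add +M, add -M, or do nothing,
    # followed by an arithmetic right shift each iteration.
    width = 2 * bits + 2
    full = (1 << width) - 1
    sign_bit = 1 << (width - 1)
    A = (m % (1 << (bits + 1))) << (bits + 1)   # +M, sign-extended
    S = (-m % (1 << (bits + 1))) << (bits + 1)  # -M, sign-extended
    P = (r % (1 << bits)) << 1                  # multiplier, appended 0
    for _ in range(bits):
        pair = P & 0b11
        if pair == 0b01:                        # 01 -> add M
            P = (P + A) & full
        elif pair == 0b10:                      # 10 -> add -M
            P = (P + S) & full
        P = (P >> 1) | (P & sign_bit)           # arithmetic shift right
    result = (P >> 1) & ((1 << (2 * bits)) - 1)
    if result & (1 << (2 * bits - 1)):          # reinterpret as signed
        result -= 1 << (2 * bits)
    return result
```

Runs of equal bits in the multiplier cost nothing but shifts, which is why Booth recoding pays off for long mantissas.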
10

Mishra, Raj Gaurav, and Amit Kumar Shrivastava. "Implementation of Custom Precision Floating Point Arithmetic on FPGAs." HCTL Open International Journal of Technology Innovations and Research (IJTIR) 1 (January 31, 2013): 10–26. https://doi.org/10.5281/zenodo.160887.

Abstract
Floating point arithmetic is a common requirement in signal processing, image processing and real time data acquisition & processing algorithms. Implementation of such algorithms on FPGA requires an efficient implementation of floating point arithmetic core as an initial process. We have presented an empirical result of the implementation of custom-precision floating point numbers on an FPGA processor using the rules of IEEE standards defined for single and double precision floating point numbers. Floating point operations are difficult to implement on FPGAs because of their complexity in calculations and their hardware utilization for such calculations. In this paper, we have described and evaluated the performance of custom-precision, pipelined, floating point arithmetic core for the conversion to and from signed binary numbers. Then, we have assessed the practical implications of using these algorithms on the Xilinx Spartan 3E FPGA boards.
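The conversion between floats and their IEEE-754 bit patterns, which such cores implement as hardware pipelines, can be mimicked in a few lines of Python with the standard `struct` module (a software sketch, unrelated to the paper's HDL sources):

```python
import struct

def float_to_bits(x):
    # Bit pattern of x as an IEEE-754 single-precision number,
    # returned as a 32-bit unsigned integer.
    return struct.unpack('>I', struct.pack('>f', x))[0]

def bits_to_float(b):
    # Inverse: reinterpret a 32-bit pattern as a single-precision float.
    return struct.unpack('>f', struct.pack('>I', b))[0]

# 1.0 is sign 0, biased exponent 127, fraction 0:
assert float_to_bits(1.0) == 0x3F800000
```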
11

Bhat, Anuja A., and Mangesh N. Thakare. "Design of Floating Point Multiplier Based on Booth Algorithm Using VHDL." International Journal of Research Science & Management 4, no. 5 (May 8, 2017): 55–62. https://doi.org/10.5281/zenodo.572573.

Abstract
In this paper, we present a high-speed, low-power and low-delay 32-bit IEEE 754 floating point subtractor and multiplier using a Booth multiplier. Multiplication is an important fundamental function in many Digital Signal Processing (DSP) applications such as the Fast Fourier Transform (FFT). Since multiplication dominates the execution time of most DSP algorithms, there is a need for a high-speed multiplier. The main objective of this research is to reduce delay and power and to increase speed. The coding is done in VHDL; synthesis and simulation have been done using the Xilinx ISE simulator. The modules designed are a 24-bit Booth multiplier for mantissa multiplication in the floating point multiplier, a 32-bit floating point subtractor and a 32-bit floating point multiplier. The computational delay obtained by the floating point subtractor, Booth multiplier and floating point multiplier is 16.180 ns, 33.159 ns and 18.623 ns respectively.
12

Singamsetti, Mrudula, Sadulla Shaik, and T. Pitchaiah. "Merged Floating Point Multipliers." International Journal of Engineering and Advanced Technology 9, no. 1s5 (December 30, 2019): 178–82. http://dx.doi.org/10.35940/ijeat.a1042.1291s519.

Abstract
Floating point multipliers are extensively used in many scientific and signal processing computations; however, the high speed and memory requirements of IEEE-754 floating point multipliers prevent their implementation in many systems. Hence floating point multipliers have become a research focus. This research aims to design a new floating point multiplier that occupies less area, dissipates less power and reduces computational time (higher speed) when compared to conventional architectures. After an extensive literature survey, a new architecture was identified: a resource-sharing Karatsuba-Ofman algorithm, which occupies less area and power while increasing speed. The design was implemented in MATLAB using DSP block sets; the simulation tool is Xilinx Vivado.
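The Karatsuba-Ofman scheme mentioned in the abstract trades the four partial products of schoolbook multiplication for three recursive multiplications plus additions. A Python sketch on plain integers (the hardware version applies the same split to mantissa bits):

```python
def karatsuba(x, y):
    # Multiply nonnegative integers with 3 recursive multiplications
    # instead of 4: xy = a*2^(2n) + ((xh+xl)(yh+yl) - a - b)*2^n + b.
    if x < 16 or y < 16:
        return x * y
    n = max(x.bit_length(), y.bit_length()) // 2
    xh, xl = x >> n, x & ((1 << n) - 1)
    yh, yl = y >> n, y & ((1 << n) - 1)
    a = karatsuba(xh, yh)                      # high halves
    b = karatsuba(xl, yl)                      # low halves
    c = karatsuba(xh + xl, yh + yl) - a - b    # cross terms, one multiply
    return (a << (2 * n)) + (c << n) + b
```

The recurrence T(n) = 3T(n/2) + O(n) gives roughly O(n^1.585) instead of O(n^2), which is where the area and speed savings come from.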
13

Baidas, Z., A. D. Brown, and A. C. Williams. "Floating-point behavioral synthesis." IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems 20, no. 7 (July 2001): 828–39. http://dx.doi.org/10.1109/43.931000.

14

Sayers, David, and Jeremy Du Croz. "Validating floating-point systems." Physics World 2, no. 6 (June 1989): 59–62. http://dx.doi.org/10.1088/2058-7058/2/6/33.

15

Erle, Mark A., Brian J. Hickmann, and Michael J. Schulte. "Decimal Floating-Point Multiplication." IEEE Transactions on Computers 58, no. 7 (July 2009): 902–16. http://dx.doi.org/10.1109/tc.2008.218.

16

Nannarelli, Alberto. "Tunable Floating-Point Adder." IEEE Transactions on Computers 68, no. 10 (October 1, 2019): 1553–60. http://dx.doi.org/10.1109/tc.2019.2906907.

17

Shirayanagi, Kiyoshi. "Floating point Gröbner bases." Mathematics and Computers in Simulation 42, no. 4-6 (November 1996): 509–28. http://dx.doi.org/10.1016/s0378-4754(96)00027-4.

18

Wichmann, Brian. "Improving floating-point programming." Science of Computer Programming 15, no. 2-3 (December 1990): 255–56. http://dx.doi.org/10.1016/0167-6423(90)90092-r.

19

Weiss, S., and A. Goldstein. "Floating point micropipeline performance." Journal of Systems Architecture 45, no. 1 (January 1998): 15–29. http://dx.doi.org/10.1016/s1383-7621(97)00070-2.

20

Advanced Micro Devices. "IEEE floating-point format." Microprocessors and Microsystems 12, no. 1 (January 1988): 13–23. http://dx.doi.org/10.1016/0141-9331(88)90031-2.

21

Espelid, T. O. "On Floating-Point Summation." SIAM Review 37, no. 4 (December 1995): 603–7. http://dx.doi.org/10.1137/1037130.

22

Umemura, Kyoji. "Floating-point number LISP." Software: Practice and Experience 21, no. 10 (October 1991): 1015–26. http://dx.doi.org/10.1002/spe.4380211003.

23

Hockert, Neil, and Katherine Compton. "Improving Floating-Point Performance in Less Area: Fractured Floating Point Units (FFPUs)." Journal of Signal Processing Systems 67, no. 1 (January 11, 2011): 31–46. http://dx.doi.org/10.1007/s11265-010-0561-y.

24

Ramya Rani, N. "Implementation of Embedded Floating Point Arithmetic Units on FPGA." Applied Mechanics and Materials 550 (May 2014): 126–36. http://dx.doi.org/10.4028/www.scientific.net/amm.550.126.

Abstract
Floating point arithmetic plays a major role in scientific and embedded computing applications. But the performance of field programmable gate arrays (FPGAs) used for floating point applications is poor due to the complexity of floating point arithmetic. The implementation of floating point units on FPGAs consumes a large amount of resources, and that has led to the development of embedded floating point units in FPGAs. Embedded applications like multimedia, communication and DSP algorithms use floating point arithmetic in processing graphics, Fourier transformation, coding, etc. In this paper, methodologies are presented for the implementation of embedded floating point units on FPGA. The work focuses on achieving high speed of computation and reducing the power of evaluating expressions. An application that demands high-performance floating point computation can achieve better speed and density by incorporating embedded floating point units. Additionally, this paper describes a comparative study of the design of single precision and double precision pipelined floating point arithmetic units for evaluating expressions. The modules are designed using VHDL simulation in Xilinx software and implemented on VIRTEX and SPARTAN FPGAs.
25

Albert, Anitha Juliette, and Seshasayanan Ramachandran. "NULL Convention Floating Point Multiplier." Scientific World Journal 2015 (2015): 1–10. http://dx.doi.org/10.1155/2015/749569.

Abstract
Floating point multiplication is a critical part in high dynamic range and computational intensive digital signal processing applications which require high precision and low power. This paper presents the design of an IEEE 754 single precision floating point multiplier using asynchronous NULL convention logic paradigm. Rounding has not been implemented to suit high precision applications. The novelty of the research is that it is the first ever NULL convention logic multiplier, designed to perform floating point multiplication. The proposed multiplier offers substantial decrease in power consumption when compared with its synchronous version. Performance attributes of the NULL convention logic floating point multiplier, obtained from Xilinx simulation and Cadence, are compared with its equivalent synchronous implementation.
26

Govinda Rao, T., Pradeep P. Devi, and P. Kalyanchakravarthi. "Design of Double Precision Floating Point Multiplication Algorithm with Vector Support." International Journal of Microwave Engineering (JMICRO) 1, no. 2 (November 24, 2022): 9. https://doi.org/10.5281/zenodo.7353323.

Abstract
This paper presents a floating point multiplier capable of supporting a wide range of application domains, such as scientific computing and multimedia applications, and describes an implementation of a floating point multiplier that supports the IEEE 754-2008 binary interchange format; a methodology for estimating power and speed has been developed. This pipelined vectorized floating point multiplier supports FP16, FP32 and FP64 input data and reduces area, power and latency while increasing throughput. Precision can be implemented by taking 128-bit input operands. The floating point units consume less power and a small part of the total area. Graphics Processor Units (GPUs) are specially tuned for performing a set of operations on large sets of data. This paper also presents the design of a double precision floating point multiplication algorithm with vector support. The single precision floating point multiplier has a path delay of 72 ns and an operating frequency of 13.58 MHz. Finally, this implementation is done in Verilog HDL using Xilinx ISE-14.2.
27

Luсkij, Georgi, and Oleksandr Dolholenko. "Development of floating point operating devices." Technology audit and production reserves 5, no. 2(73) (October 31, 2023): 11–17. http://dx.doi.org/10.15587/2706-5448.2023.290127.

Abstract
The paper shows a well-known approach to the construction of cores in multi-core microprocessors, which is based on a data-flow graph-driven calculation model. The architecture of such cores is based on the reduced-instruction-set-level data flow model proposed by Yale Patt. The object of research is a model of calculations based on data flow management in a multi-core microprocessor. The results of the development of a floating-point multiplier that can be dynamically reconfigured to handle five different formats of floating-point operands are presented, together with an approach to the construction of an operating device for addition-subtraction of a sequence of floating-point numbers for which the law of associativity is fulfilled without additional programming complications. On the basis of the developed circuit of the floating-point multiplier, it is possible to implement various variants of a high-speed multiplier with both fixed and floating points, which may find commercial application. By adding memory elements to each of the multiplier segments, it is possible to obtain options for building very fast pipeline multipliers. The multiplier scheme has a limitation: the exponent is not evaluated for denormalized operands, but the standard for floating-point arithmetic does not require that denormalized operands be handled; in such cases, the multiplier packs infinity as the result. The implementation of an inter-core operating device of a floating-point adder-subtractor can be considered a new approach to the practical solution of dynamic planning tasks when performing addition-subtraction operations within a multi-core microprocessor. The limitations of its implementation are related to the large hardware cost required. To assess this complexity, an evaluation of the bit widths of its main blocks was carried out for the various formats of representing floating-point numbers defined by the floating-point standard.
28

Sravani, Chinta, Prasad Janga, and S. SriBindu. "Floating Point Operations Compatible Streaming Elements for FPGA Accelerators." International Journal of Trend in Scientific Research and Development 2, no. 5 (August 31, 2018): 302–9. http://dx.doi.org/10.31142/ijtsrd15853.

29

Yang, Hongru, Jinchen Xu, Jiangwei Hao, Zuoyan Zhang, and Bei Zhou. "Detecting Floating-Point Expression Errors Based Improved PSO Algorithm." IET Software 2023 (October 23, 2023): 1–16. http://dx.doi.org/10.1049/2023/6681267.

Abstract
The use of floating-point numbers inevitably leads to inaccurate results and, in certain cases, significant program failures. Detecting floating-point errors is critical to ensuring that the outputs of floating-point programs are correct. However, due to the sparsity of floating-point errors, only a limited number of inputs cause significant floating-point errors, and determining how to detect these inputs and selecting the appropriate search technique is critical to detecting significant errors. This paper proposes a characteristic particle swarm optimization (CPSO) algorithm based on the particle swarm optimization (PSO) algorithm. The floating-point expression error detection tool PSOED is implemented, which can detect significant errors in floating-point arithmetic expressions and provide the corresponding inputs. The method presented in this paper is based on two insights: (1) treating floating-point error detection as a search problem and selecting reliable heuristic search strategies to solve the problem; (2) fully utilizing the error distribution laws of expressions and the distribution characteristics of floating-point numbers to guide search space generation and improve search efficiency. This paper selects 28 expressions from the FPBench standard set as test cases, uses PSOED to detect the maximum error of the expressions, and compares them to the current dynamic error detection tools S3FP and Herbie. PSOED detects a maximum error 100% better than S3FP, 68% better than Herbie, and 14% equivalent to Herbie. The results of the experiments indicate that PSOED can detect significant floating-point expression errors.
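Stripped of the PSO machinery, the paper's first insight, error detection as a search over inputs scored against an exact evaluation, can be sketched with plain random search. Everything below (the expression, the search range, the trial count) is illustrative, not taken from the paper:

```python
import random
from fractions import Fraction

def expr_float(x):
    # Expression under test, evaluated in double precision.
    # f(x) = 1/(x+1) - 1/x suffers catastrophic cancellation for large x.
    return 1.0 / (x + 1.0) - 1.0 / x

def expr_exact(x):
    # Same expression in exact rational arithmetic, used as the oracle.
    q = Fraction(x)
    return Fraction(1) / (q + 1) - Fraction(1) / q

def random_search(trials=2000, seed=0):
    # Keep the input whose double-precision result strays furthest
    # (relatively) from the exact value; a PSO would steer these
    # samples instead of drawing them blindly.
    rng = random.Random(seed)
    worst_x, worst_err = None, -1.0
    for _ in range(trials):
        x = rng.uniform(1.0, 1e8)
        exact = expr_exact(x)
        err = abs(Fraction(expr_float(x)) - exact) / abs(exact)
        if err > worst_err:
            worst_x, worst_err = x, float(err)
    return worst_x, worst_err
```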
30

Aruna Mastani, S., and Riyaz Ahamed Shaik. "Inexact Floating Point Adders Analysis." International Journal of Applied Engineering Research 15, no. 11 (November 30, 2020): 1075–80. http://dx.doi.org/10.37622/ijaer/15.11.2020.1075-1080.

31

Meyer, Quirin, Jochen Süßmuth, Gerd Sußner, Marc Stamminger, and Günther Greiner. "On Floating-Point Normal Vectors." Computer Graphics Forum 29, no. 4 (August 26, 2010): 1405–9. http://dx.doi.org/10.1111/j.1467-8659.2010.01737.x.

32

Ghatte, Najib, Shilpa Patil, and Deepak Bhoir. "Floating Point Engine using VHDL." International Journal of Engineering Trends and Technology 8, no. 4 (February 25, 2014): 198–203. http://dx.doi.org/10.14445/22315381/ijett-v8p236.

33

Lange, Marko, and Siegfried M. Rump. "Faithfully Rounded Floating-point Computations." ACM Transactions on Mathematical Software 46, no. 3 (September 25, 2020): 1–20. http://dx.doi.org/10.1145/3290955.

34

Winter, Dik T. "Floating point attributes in Ada." ACM SIGAda Ada Letters XI, no. 7 (September 2, 1991): 244–73. http://dx.doi.org/10.1145/123533.123577.

35

Toronto, Neil, and Jay McCarthy. "Practically Accurate Floating-Point Math." Computing in Science & Engineering 16, no. 4 (July 2014): 80–95. http://dx.doi.org/10.1109/mcse.2014.90.

36

Kadric, Edin, Paul Gurniak, and Andre DeHon. "Accurate Parallel Floating-Point Accumulation." IEEE Transactions on Computers 65, no. 11 (November 1, 2016): 3224–38. http://dx.doi.org/10.1109/tc.2016.2532874.

37

Lam, Michael O., Jeffrey K. Hollingsworth, and G. W. Stewart. "Dynamic floating-point cancellation detection." Parallel Computing 39, no. 3 (March 2013): 146–55. http://dx.doi.org/10.1016/j.parco.2012.08.002.

38

Scheidt, J. K., and C. W. Schelin. "Distributions of floating point numbers." Computing 38, no. 4 (December 1987): 315–24. http://dx.doi.org/10.1007/bf02278709.

39

Gavrielov, Moshe, and Lev Epstein. "The NS32081 Floating-point Unit." IEEE Micro 6, no. 2 (April 1986): 6–12. http://dx.doi.org/10.1109/mm.1986.304737.

40

Groza, V. Z. "High-resolution floating-point ADC." IEEE Transactions on Instrumentation and Measurement 50, no. 6 (2001): 1822–29. http://dx.doi.org/10.1109/19.982987.

41

Nikmehr, H., B. Phillips, and Cheng-Chew Lim. "Fast Decimal Floating-Point Division." IEEE Transactions on Very Large Scale Integration (VLSI) Systems 14, no. 9 (September 2006): 951–61. http://dx.doi.org/10.1109/tvlsi.2006.884047.

42

Serebrenik, Alexander, and Danny De Schreye. "Termination of Floating-Point Computations." Journal of Automated Reasoning 34, no. 2 (December 2005): 141–77. http://dx.doi.org/10.1007/s10817-005-6546-z.

43

Rivera, Joao, Franz Franchetti, and Markus Püschel. "Floating-Point TVPI Abstract Domain." Proceedings of the ACM on Programming Languages 8, PLDI (June 20, 2024): 442–66. http://dx.doi.org/10.1145/3656395.

Abstract
Floating-point arithmetic is natively supported in hardware and the preferred choice when implementing numerical software in scientific or engineering applications. However, such programs are notoriously hard to analyze due to round-off errors and the frequent use of elementary functions such as log, arctan, or sqrt. In this work, we present the Two Variables per Inequality Floating-Point (TVPI-FP) domain, a numerical and constraint-based abstract domain designed for the analysis of floating-point programs. TVPI-FP supports all features of real-world floating-point programs including conditional branches, loops, and elementary functions, and it is efficient asymptotically and in practice. Thus it overcomes limitations of prior tools that often are restricted to straight-line programs or require the use of expensive solvers. The key idea is the consistent use of interval arithmetic in inequalities and an associated redesign of all operators. Our extensive experiments show that TVPI-FP is often orders of magnitude faster than more expressive tools at competitive or better precision, while also providing broader support for realistic programs with loops and conditionals.
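The "consistent use of interval arithmetic" means every computed bound is rounded outward so the enclosure of the real result stays sound. A minimal Python sketch for addition, widening each endpoint by one ulp with `math.nextafter` as a simple stand-in for hardware directed rounding:

```python
import math

def interval_add(a, b):
    # Add intervals (lo, hi): compute each endpoint in floating point,
    # then nudge it one ulp outward with nextafter so the interval
    # still encloses the exact real-valued sum.
    lo = math.nextafter(a[0] + b[0], -math.inf)
    hi = math.nextafter(a[1] + b[1], math.inf)
    return lo, hi

# 0.1 + 0.2 rounds upward in binary; the widened interval brackets it.
lo, hi = interval_add((0.1, 0.1), (0.2, 0.2))
```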
44

Hammadi Jassim, Manal. "Floating Point Optimization Using VHDL." Engineering and Technology Journal 27, no. 16 (December 1, 2009): 3023–49. http://dx.doi.org/10.30684/etj.27.16.11.

45

Yang, Fengyuan. "Research and Analysis of Floating-Point Adder Principle." Applied and Computational Engineering 8, no. 1 (August 1, 2023): 113–17. http://dx.doi.org/10.54254/2755-2721/8/20230092.

Abstract
As computers have come to be used ever more widely, the research and development of the adder, the most basic operation unit, has shaped the development of the computer field. This paper analyzes the principles of the one-bit adder and the floating-point adder through literature analysis. The one-bit adder is the most basic building block of traditional adders, alongside the ripple-carry adder, the carry-lookahead adder and others. The purpose of this paper is to explain the basic principles of adders; among these, IEEE-754 binary floating point arithmetic is very important. The traditional fixed-point adder is the basis of the floating-point adder, which points to a new direction for the future optimization of floating-point adders. This paper finds that the floating-point adder is one of the most widely used components in signal processing systems today, and therefore improvement of the floating-point adder is necessary.
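The one-bit full adder the paper starts from, and the ripple-carry chain built out of it, can be modelled directly from the logic equations (a Python illustration):

```python
def full_adder(a, b, cin):
    # One-bit full adder: sum = a XOR b XOR cin,
    # carry-out = majority(a, b, cin).
    s = a ^ b ^ cin
    cout = (a & b) | (cin & (a ^ b))
    return s, cout

def ripple_add(x, y, bits=8):
    # Chain `bits` full adders, feeding each carry-out into the next
    # stage's carry-in: the ripple-carry adder that faster designs
    # (carry-lookahead, floating-point adders) build upon.
    carry, out = 0, 0
    for i in range(bits):
        s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        out |= s << i
    return out, carry
```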
46

Mohammed, Falih Hassan, Farhood Hussein Karime, and Al-Musawi Bahaa. "Design and implementation of fast floating point units for FPGAs." Indonesian Journal of Electrical Engineering and Computer Science 19, no. 3 (September 1, 2022): 1480–89. https://doi.org/10.11591/ijeecs.v19.i3.pp1480-1489.

Abstract
Due to the growth in demand for high-performance applications that require high numerical stability and accuracy, the need for floating-point FPGA designs has increased. In this work, an open-source and efficient floating-point unit is implemented on a standard Xilinx Spartan-6 FPGA platform. The proposed design is described in a hierarchical way, starting from functional block descriptions and moving toward module-level design. Our implementation uses minimal resources available on the target FPGA board; it was tested on the Spartan-6 FPGA platform and verified in ModelSim. The open-source framework can be embedded or customized for low-cost FPGA devices that do not offer floating-point units.
47

Kurniawan, Wakhid, Hafizd Ardiansyah, Annisa Dwi Oktavianita, and Fitree Tahe. "Integer Representation of Floating-Point Manipulation with Float Twice." IJID (International Journal on Informatics for Development) 9, no. 1 (September 9, 2020): 15. http://dx.doi.org/10.14421/ijid.2020.09103.

Abstract
In the programming world, understanding floating point is not easy, especially when floating point and bit-level operations interact. Although there are currently many libraries to simplify the computation process, many programmers today still do not really understand how floating point manipulation works. Therefore, this paper aims to provide insight into how to manipulate IEEE-754 32-bit floating point with a different representation of results, namely integers and the code rules of float twice. The method used is a literature review, adopting a float-twice prototype using C programming. The result of this study is an application that can be used to represent integers of floating-point manipulation by adopting a float-twice prototype. The application makes it easy for programmers to determine the type of program data to be developed, especially for programs running on 32-bit floating point (single precision).
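The bit-level "float twice" manipulation the paper prototypes in C, doubling a value by operating on its 32-bit integer pattern rather than by multiplying, can be sketched in Python. This is a hypothetical re-creation of the idea from the IEEE-754 single-precision layout, not the paper's code:

```python
import struct

def float_twice(x):
    # Double a single-precision float via its 32-bit IEEE-754 pattern:
    # bump the biased exponent for normal numbers, shift the fraction
    # for subnormals, saturate to infinity on overflow, and pass
    # infinities/NaNs through unchanged.
    u = struct.unpack('>I', struct.pack('>f', x))[0]
    sign, exp = u & 0x80000000, (u >> 23) & 0xFF
    if exp == 0xFF:                   # inf or NaN: unchanged
        pass
    elif exp == 0:                    # zero or subnormal: shift fraction
        u = sign | ((u & 0x007FFFFF) << 1)
    elif exp == 0xFE:                 # doubling overflows to infinity
        u = sign | 0x7F800000
    else:                             # normal number: exponent + 1
        u += 1 << 23
    return struct.unpack('>f', struct.pack('>I', u))[0]
```

Note that the subnormal case needs no special handling at the boundary: when the shifted fraction carries into bit 23, the number becomes the smallest normal, which is exactly its double.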
48

Blanton, Marina, Michael T. Goodrich, and Chen Yuan. "Secure and Accurate Summation of Many Floating-Point Numbers." Proceedings on Privacy Enhancing Technologies 2023, no. 3 (July 2023): 432–45. http://dx.doi.org/10.56553/popets-2023-0090.

Full text
Abstract
Motivated by the importance of floating-point computations, we study the problem of securely and accurately summing many floating-point numbers. Prior work has focused on security without accuracy or accuracy without security, whereas our approach achieves both. Specifically, we show how to implement floating-point superaccumulators using secure multi-party computation techniques, so that a number of participants holding secret shares of floating-point numbers can accurately compute their sum while keeping the individual values private.
APA, Harvard, Vancouver, ISO, and other styles
49

R., Bhuvanapriya, and T. Menakadevi. "Design and Implementation of FPU for Optimised Speed." International Journal of Engineering and Advanced Technology (IJEAT) 9, no. 3 (February 29, 2020): 3922–33. https://doi.org/10.35940/ijeat.C6444.029320.

Full text
Abstract
Currently, each CPU has one or more Floating Point Units (FPUs) integrated into it. They are typically used in math-intensive applications such as digital signal processing, and are found in engineering, medical, and military fields, as well as in other fields requiring audio, image, or video processing. A high-speed, energy-efficient floating-point unit is needed in the electronics industry as an arithmetic unit in microprocessors. Multiplication and addition account for about 95% of the operations in a conventional FPU, and many applications require their fast execution. In the existing system, floating-point multiplication (FPM) and floating-point addition (FPA) suffer from higher delay, lower speed, and lower throughput. The demand for high speed and throughput motivated the design of the multiplier and adder blocks within the FPM and FPA units in both single-precision and double-precision floating-point formats; the operations are internally pipelined to achieve high throughput and follow the IEEE 754 standard floating-point representations. The design is written in Verilog, and the Xilinx ISE 14.5 software tool is used to code it and verify the resulting waveforms.
APA, Harvard, Vancouver, ISO, and other styles
50

Burud, Anand S., and Pradip C. Bhaskar. "Processor Design Using 32 Bit Single Precision Floating Point Unit." International Journal of Trend in Scientific Research and Development 2, no. 4 (June 3, 2018): 198–202. https://doi.org/10.31142/ijtsrd12912.

Full text
Abstract
Floating-point operations have found intensive application in various fields that require high-precision computation, thanks to floating point's wide dynamic range, high accuracy, and simple operation rules. High accuracy is essential in the design and study of floating-point processing units. With the growing requirements of floating-point operations for high-speed digital signal processing and scientific computation, the demands on high-speed hardware floating-point arithmetic units have become increasingly stringent. The ALU is one of the most essential components in a processor, and is ordinarily the part of the processor that is designed first. In this paper, a fast IEEE 754-compliant 32-bit floating-point arithmetic unit designed in VHDL is presented; all addition operations were tested on Xilinx and verified successfully.
APA, Harvard, Vancouver, ISO, and other styles
