
Journal articles on the topic 'Data pipelining technology'



Consult the top 15 journal articles for your research on the topic 'Data pipelining technology.'




1

Slatter, P. T. "Sludge pipeline design." Water Science and Technology 44, no. 10 (2001): 115–20. http://dx.doi.org/10.2166/wst.2001.0596.

Abstract:
The need for the design engineer to have a sound basis for designing sludge pumping and pipelining plant is becoming more critical. This paper examines both a traditional textbook approach and one of the latest approaches from the literature, and compares them with experimental data. The pipelining problem can be divided into the following main areas: rheological characterisation, and laminar, transitional and turbulent flow; each is addressed in turn. Experimental data for a digested sludge tested in large pipes are analysed and compared with the two different theoretical approaches. Discussion is centred on the differences between the two methods and the degree of agreement with the data. It is concluded that the new approach has merit and can be used for practical design.
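As background to the rheological characterisation step mentioned in this abstract, sewage sludges are commonly described with a yield-pseudoplastic (Herschel-Bulkley) model; the relation below is a standard textbook form, not necessarily the exact model adopted in the paper.

```latex
% Herschel-Bulkley (yield-pseudoplastic) model commonly used for sludge rheology
\tau = \tau_y + K\,\dot{\gamma}^{\,n}
% \tau : shear stress      \tau_y : yield stress
% K    : consistency index \dot{\gamma} : shear rate   n : flow behaviour index
```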
2

AV, Shruthi, Electa Alice, and Mohammed Bilal. "Low Power VLSI Design and Implementation of Area-Optimized 256-bit AES Standard for Real Time Images on Vertex 5." International Journal of Reconfigurable and Embedded Systems (IJRES) 2, no. 2 (2013): 83. http://dx.doi.org/10.11591/ijres.v2.i2.pp83-88.

Abstract:
A new Vertex6-chipscope based implementation scheme of the AES-256 (Advanced Encryption Standard, with 256-bit key) encryption and decryption algorithm is proposed in this paper. To maintain the speed of encryption and decryption, the pipelining technology is applied, and the mode of data transmission is modified in this design so that the chip size can be reduced. The 256-bit plaintext and the 256-bit initial key, as well as the 256-bit output of cipher-text, are all divided into four 32-bit consecutive units, respectively controlled by the clock. In this work, a substantial improvement in performance in terms of area, power and dynamic speed has been obtained.
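To make the word-serial, pipelined organisation described above concrete, the following toy Python model streams a wide block through a chain of 32-bit register stages so that several words are in flight at once. The stage functions, widths and example block size are placeholder assumptions for illustration only; this is not the paper's AES datapath.

```python
# Toy model of word-serial pipelining: a wide block is split into 32-bit
# words and pushed through a chain of register stages, so that successive
# words occupy different stages on the same "clock". The stage functions
# are placeholders, not real AES round operations.

WORD_BITS = 32
MASK = (1 << WORD_BITS) - 1

def split_words(block: int, n_words: int):
    """Split a wide integer block into n_words consecutive 32-bit words."""
    return [(block >> (WORD_BITS * i)) & MASK for i in range(n_words)]

def stage_a(w):  # placeholder for one combinational sub-step
    return (w ^ 0xA5A5A5A5) & MASK

def stage_b(w):  # placeholder for a second combinational sub-step
    return ((w << 1) | (w >> (WORD_BITS - 1))) & MASK

def pipeline(words, stages):
    """Clock-by-clock simulation of a simple linear pipeline."""
    regs = [None] * len(stages)                    # pipeline registers
    out = []
    stream = list(words) + [None] * len(stages)    # extra clocks to flush
    for w in stream:                               # one iteration = one clock
        for i in reversed(range(len(stages))):     # shift data down the pipe
            src = regs[i - 1] if i > 0 else w
            regs[i] = stages[i](src) if src is not None else None
        if regs[-1] is not None:
            out.append(regs[-1])
    return out

if __name__ == "__main__":
    block = int.from_bytes(bytes(range(16)), "little")   # 128-bit example block
    print([hex(w) for w in pipeline(split_words(block, 4), [stage_a, stage_b])])
```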
3

Sujatha, E., Dr C. Subhas, and Dr M. N. Giri Prasad. "High performance turbo encoder using mealy FSM state encoding technique." International Journal of Engineering & Technology 7, no. 3.3 (2018): 255. http://dx.doi.org/10.14419/ijet.v7i2.33.14163.

Abstract:
Error-correction coding plays a vital role in achieving efficient, high-quality data transmission in today's high-speed wireless communication systems. To meet the high data rates required by the Long Term Evolution (LTE) system, a parallel concatenation of two convolutional encoders is used to build the turbo encoder. In this work a high-speed turbo encoder, a key component in the transmitter of a wireless communication system, with a memory-based interleaver has been designed and implemented on FPGA for the 3rd Generation Partnership Project (3GPP) Long Term Evolution-Advanced (LTE-A) standard using a Finite State Machine (FSM) encoding technique. The memory-based quadratic permutation polynomial (QPP) interleaver shuffles a sequence of binary data and supports any of the 188 block sizes from N = 40 to N = 6144. The proposed turbo encoder is implemented using 28 nm CMOS technology and achieves a 300 Mbps data rate while using 1% of the available hardware logic. With the proposed technique, encoded data can be released continuously with the help of two parallel memories that write/read the input using the pipelining concept.
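For reference, the QPP interleaver mentioned above follows the LTE form pi(i) = (f1*i + f2*i^2) mod K. The sketch below is a plain Python model; the parameter pair (K, f1, f2) = (40, 3, 10) is taken as the smallest block-size case and should be checked against the 3GPP 36.212 table for other block sizes.

```python
# Minimal sketch of a quadratic permutation polynomial (QPP) interleaver of
# the kind used by the LTE turbo code: output i reads input pi(i), where
# pi(i) = (f1*i + f2*i^2) mod K.

def qpp_interleave(bits, f1, f2):
    K = len(bits)
    return [bits[(f1 * i + f2 * i * i) % K] for i in range(K)]

if __name__ == "__main__":
    data = list(range(40))                   # stand-in for a 40-bit block
    print(qpp_interleave(data, f1=3, f2=10)) # shuffled read order
```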
4

Lathar, Pankaj, and K. G. Srinivasa. "A Study on the Performance and Scalability of Apache Flink Over Hadoop MapReduce." International Journal of Fog Computing 2, no. 1 (2019): 61–73. http://dx.doi.org/10.4018/ijfc.2019010103.

Abstract:
With the advancements in science and technology, data is being generated at a staggering rate. The raw data generated is generally of high value and may conceal important information with the potential to solve several real-world problems. In order to extract this information, the raw data available must be processed and analysed efficiently. It has, however, been observed that such raw data is generated at a rate faster than it can be processed by traditional methods. This has led to the emergence of the popular parallel processing programming model, MapReduce. In this study, the authors perform a comparative analysis of two popular data processing engines, Apache Flink and Hadoop MapReduce. The analysis is based on the parameters of scalability, reliability and efficiency. The results reveal that Flink unambiguously outperforms Hadoop's MapReduce. Flink's edge over MapReduce can be attributed to the following features: active memory management, dataflow pipelining and an inline optimizer. It can be concluded that as the complexity and magnitude of real-time raw data is continuously increasing, it is essential to explore newer platforms that are adequately and efficiently capable of processing such data.
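As a reminder of the programming model being compared, the sketch below is a self-contained Python rendition of MapReduce word count (a conceptual model, not the Hadoop or Flink APIs). Flink's dataflow pipelining differs mainly in streaming records between operators instead of materialising the intermediate shuffle.

```python
# Conceptual sketch of the MapReduce model: a map phase emits key/value
# pairs, a shuffle groups them by key, and a reduce phase aggregates.

from collections import defaultdict
from typing import Iterable, Tuple

def map_phase(line: str) -> Iterable[Tuple[str, int]]:
    for word in line.split():
        yield word.lower(), 1

def shuffle(pairs: Iterable[Tuple[str, int]]):
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(key: str, values: list) -> Tuple[str, int]:
    return key, sum(values)

if __name__ == "__main__":
    lines = ["flink pipelines dataflow", "mapreduce batches dataflow"]
    mapped = (pair for line in lines for pair in map_phase(line))
    counts = dict(reduce_phase(k, v) for k, v in shuffle(mapped).items())
    print(counts)   # e.g. {'flink': 1, 'pipelines': 1, 'dataflow': 2, ...}
```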
5

H Bailmare, Ravi, S. J. Honale, and Pravin V Kinge. "Design and Implementation of Adaptive FIR filter using Systolic Architecture." International Journal of Reconfigurable and Embedded Systems (IJRES) 3, no. 2 (2014): 54. http://dx.doi.org/10.11591/ijres.v3.i2.pp54-61.

Abstract:
The tremendous growth of computer and Internet technology demands that data be processed at high speed and in a powerful manner. In such a complex environment, conventional methods of performing multiplication are not suitable to obtain the perfect solution; to obtain it, parallel computing is used instead. The DLMS adaptive algorithm approximately minimizes the mean square error by recursively altering the weight vector at each sampling instant. In order to obtain the minimum mean square error and the updated weight vector effectively, a systolic architecture is used. A systolic architecture is an arrangement of processors in which data flows synchronously across the array elements. This project demonstrates an effective design for an adaptive filter using a systolic architecture for the DLMS algorithm, synthesized and simulated with the Xilinx ISE Project Navigator tool in the very high speed integrated circuit hardware description language (VHDL) and on Field Programmable Gate Arrays (FPGAs). By combining the concepts of pipelining and parallel processing in the systolic architecture, the computing speed increases.
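The delayed LMS (DLMS) update referred to above differs from plain LMS only in that the weights are adapted with an error that is D samples old, which is what allows the update to be mapped onto a pipelined systolic array. The NumPy sketch below is a behavioural software model under that textbook definition; the delay, step size and tap count are illustrative assumptions, not the paper's VHDL architecture.

```python
# Behavioural model of a delayed-LMS (DLMS) adaptive FIR filter:
# w(n+1) = w(n) + mu * e(n-D) * x(n-D)

import numpy as np

def dlms(x, d, num_taps=8, mu=0.01, delay=4):
    """Adaptive FIR filter with a D-sample-delayed coefficient update."""
    n_samples = len(x)
    w = np.zeros(num_taps)                 # filter weights
    y = np.zeros(n_samples)                # filter output
    e = np.zeros(n_samples)                # error signal
    x_hist = []                            # saved regressor vectors
    for n in range(n_samples):
        u = np.array([x[n - k] if n - k >= 0 else 0.0 for k in range(num_taps)])
        x_hist.append(u)
        y[n] = w @ u
        e[n] = d[n] - y[n]
        if n >= delay:                     # adapt with delayed error/regressor
            w += mu * e[n - delay] * x_hist[n - delay]
    return y, e, w

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.standard_normal(2000)
    h_true = np.array([0.6, -0.3, 0.1, 0.05, 0.0, 0.0, 0.0, 0.0])
    d = np.convolve(x, h_true)[:len(x)] + 0.01 * rng.standard_normal(len(x))
    _, _, w = dlms(x, d)
    print("final weights:", np.round(w, 3))   # should approach h_true
```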
6

Wang, Shiyu, Shengbing Zhang, Xiaoping Huang, and Hao Lyu. "On-chip data organization and access strategy for spaceborne SAR real-time imaging processor." Xibei Gongye Daxue Xuebao/Journal of Northwestern Polytechnical University 39, no. 1 (2021): 126–34. http://dx.doi.org/10.1051/jnwpu/20213910126.

Abstract:
Spaceborne SAR (synthetic aperture radar) imaging requires real-time processing of an enormous amount of input data with limited power consumption. Designing advanced heterogeneous array processors is an effective way to meet the power constraints and real-time processing requirements of application systems. To design an efficient SAR imaging processor, the on-chip data organization structure and access strategy are of critical importance. Taking the chirp scaling algorithm, a typical SAR imaging algorithm, as the target, this paper analyzes the characteristics of each calculation stage of the SAR imaging process, extracts the data flow model of SAR imaging, and proposes a storage strategy of cross-region cross-placement with synchronized data sorting to ensure pipelined, parallel FFT/IFFT computation. The memory wall problem is alleviated through an on-chip multi-level data buffer structure, ensuring a sufficient data supply to the imaging calculation pipeline. Based on this memory organization and access strategy, the SAR imaging pipeline, which supports FFT/IFFT and phase compensation operations, is optimized. The processor based on this storage strategy achieves a throughput of up to 115.2 GOPS and an energy efficiency of up to 254 GOPS/W in a 65 nm technology. Compared with conventional CPU+GPU acceleration solutions, the performance to power consumption ratio is increased by 63.4 times. The proposed architecture not only improves real-time performance but also reduces the design complexity of the SAR imaging system, offering excellent tailoring and scalability and satisfying the practical needs of different SAR imaging platforms.
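One generic way to keep an FFT-based imaging pipeline fed, in the spirit of the buffering strategy discussed above, is double (ping-pong) buffering: while one buffer is being processed, the next block is fetched into the other. The Python sketch below only illustrates this general principle; the function names and sizes are assumptions and it does not reproduce the paper's cross-region placement scheme.

```python
# Ping-pong (double) buffering sketch: alternate two buffers so the compute
# stage always has a filled buffer while the other one is being loaded.

import numpy as np

def fetch(block_id, size=1024):
    """Stand-in for a DMA transfer of one range line from external memory."""
    rng = np.random.default_rng(block_id)
    return rng.standard_normal(size) + 1j * rng.standard_normal(size)

def process(block):
    """Stand-in for the per-line FFT stage of the imaging pipeline."""
    return np.fft.fft(block)

def run(num_blocks):
    buffers = [fetch(0), None]             # prefetch the first block
    results = []
    for i in range(num_blocks):
        cur, nxt = i % 2, (i + 1) % 2
        if i + 1 < num_blocks:
            buffers[nxt] = fetch(i + 1)    # in hardware this overlaps compute
        results.append(process(buffers[cur]))
    return results

if __name__ == "__main__":
    print(len(run(4)), "lines processed")
```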
7

Chen, Chiung-An, Chen Wu, Patricia Abu, and Shih-Lun Chen. "VLSI Implementation of an Efficient Lossless EEG Compression Design for Wireless Body Area Network." Applied Sciences 8, no. 9 (2018): 1474. http://dx.doi.org/10.3390/app8091474.

Abstract:
Data transmission of electroencephalography (EEG) signals over a Wireless Body Area Network (WBAN) is a widely used approach that comes with challenges in terms of efficiency and effectiveness. In this study, an effective Very-Large-Scale Integration (VLSI) circuit design of a lossless EEG compression circuit is proposed to increase both the efficiency and effectiveness of EEG signal transmission over WBAN. The proposed design was realized based on a novel lossless compression algorithm which consists of an adaptive fuzzy predictor, a voting-based scheme and a tri-stage entropy encoder. The tri-stage entropy encoder is composed of two-stage Huffman and Golomb-Rice encoders with a static coding table using basic comparator and multiplexer components. A pipelining technique was incorporated to enhance the performance of the proposed design. The proposed design was fabricated using a 0.18 μm CMOS technology, containing 8405 gates with 2.58 mW simulated power consumption at a 100 MHz clock speed. The CHB-MIT Scalp EEG Database was used to test the performance of the proposed technique in terms of compression rate, which yielded an average value of 2.35 for 23 channels. Compared with previously proposed hardware-oriented lossless EEG compression designs, this work provides a 14.6% increase in compression rate with a 37.3% reduction in hardware cost while maintaining a low system complexity.
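For orientation, the Golomb-Rice stage of the entropy encoder mentioned above can be summarised by the textbook coding rule sketched below (plain Python, with an assumed parameter k). The paper's tri-stage coder and its static tables are not reproduced here.

```python
# Textbook Golomb-Rice coding: a non-negative value is split into a
# unary-coded quotient and a k-bit binary remainder.

def rice_encode(value: int, k: int) -> str:
    q, r = value >> k, value & ((1 << k) - 1)
    return "1" * q + "0" + format(r, f"0{k}b")   # unary quotient, then remainder

def rice_decode(bits: str, k: int) -> int:
    q = bits.index("0")                          # count leading ones
    r = int(bits[q + 1:q + 1 + k], 2)
    return (q << k) | r

if __name__ == "__main__":
    # Prediction residuals are usually mapped to non-negative integers first
    # (e.g. a zig-zag mapping) before Rice coding.
    for v in [0, 3, 9, 20]:
        code = rice_encode(v, k=2)
        assert rice_decode(code, k=2) == v
        print(v, "->", code)
```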
8

Vishnoi, U., and T. G. Noll. "Area- and energy-efficient CORDIC accelerators in deep sub-micron CMOS technologies." Advances in Radio Science 10 (September 18, 2012): 207–13. http://dx.doi.org/10.5194/ars-10-207-2012.

Abstract:
The COordinate Rotation DIgital Computer (CORDIC) algorithm is a well-known, versatile approach that is widely applied in today's SoCs, especially but not only for digital communications. Dedicated CORDIC blocks can be implemented in deep sub-micron CMOS technologies at very low area and energy costs and are attractive as hardware accelerators for Application Specific Instruction Processors (ASIPs), thereby overcoming the well-known energy vs. flexibility conflict. Optimizing Global Navigation Satellite System (GNSS) receivers to reduce hardware complexity is an important research topic at present. In such receivers CORDIC accelerators can be used for digital baseband processing (fixed-point) and in Position-Velocity-Time estimation (floating-point). A micro-architecture well suited to such applications is presented. This architecture is parameterized in the wordlengths as well as the number of iterations and can easily be extended to a floating-point data format. Moreover, area can be traded for throughput by partially or even fully unrolling the iterations, whereby the degree of pipelining is organized with one CORDIC iteration per cycle. From the architectural description, the macro layout can be generated fully automatically using an in-house datapath generator tool. Since the adders and shifters play an important role in optimizing the CORDIC block, they must be carefully optimized for high area and energy efficiency in the underlying technology; for this purpose carry-select adders and logarithmic shifters were chosen. Device dimensioning was automatically optimized with respect to dynamic and static power, area and performance using the in-house tool. The fully sequential CORDIC block for fixed-point digital baseband processing features a wordlength of 16 bits, requires 5232 transistors, is implemented in a 40-nm CMOS technology and occupies a silicon area of only 1560 μm². The maximum clock frequency from circuit simulation of the extracted netlist is 768 MHz under typical, and 463 MHz under worst-case, technology and application corner conditions. Simulated dynamic power dissipation is 0.24 μW MHz⁻¹ at 0.9 V; static power is 38 μW in the slow corner, 65 μW in the typical corner and 518 μW in the fast corner. The latter can be reduced by 43% in a 40-nm CMOS technology using 0.5 V reverse back-bias. These features are compared with the results from different design styles as well as with an implementation in 28-nm CMOS technology. It is interesting that in the latter case area scales as expected, but worst-case performance and energy no longer scale well.
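The CORDIC iteration that such accelerators implement is, in its standard rotation-mode form, a sequence of shift-add operations followed by a constant gain correction. The Python model below uses that textbook form with floating-point arithmetic; the fixed-point wordlengths and iteration counts of the paper's block are not modelled.

```python
# Rotation-mode CORDIC: each iteration needs only shifts, adds and a small
# arctangent table; the result is corrected by the accumulated gain.

import math

def cordic_rotate(angle, iterations=16):
    """Return (cos(angle), sin(angle)) computed with shift-add iterations."""
    x, y, z = 1.0, 0.0, angle
    k = 1.0
    for i in range(iterations):
        d = 1.0 if z >= 0 else -1.0
        x, y = x - d * y * 2.0**-i, y + d * x * 2.0**-i
        z -= d * math.atan(2.0**-i)
        k *= 1.0 / math.sqrt(1.0 + 2.0**-(2 * i))   # gain, usually precomputed
    return x * k, y * k

if __name__ == "__main__":
    c, s = cordic_rotate(0.6)
    print(round(c, 5), round(s, 5))          # ~cos(0.6), ~sin(0.6)
```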
9

Bardadym, T. O., V. M. Gorbachuk, N. A. Novoselova, C. P. Osypenko, and Y. V. Skobtsov. "Intelligent analytical system as a tool to ensure the reproducibility of biomedical calculations." Artificial Intelligence 25, no. 3 (2020): 65–78. http://dx.doi.org/10.15407/jai2020.03.065.

Abstract:
The experience of using containerized biomedical software tools in a cloud environment is summarized, and the reproducibility of scientific computing in relation to modern computational technologies is discussed. The main approaches to biomedical data preprocessing and integration in the framework of the intelligent analytical system are described. Under pandemic conditions, the success of a health care system depends significantly on the regular deployment of effective research tools and population monitoring: the earlier the risks of disease can be identified, the more effective preventive measures or treatments can be. This publication concerns the creation of a prototype for such a tool within the project «Development of methods, algorithms and intelligent analytical system for processing and analysis of heterogeneous clinical and biomedical data to improve the diagnosis of complex diseases» (M/99-2019, M/37-2020, with support of the Ministry of Education and Science of Ukraine), implemented by the V.M. Glushkov Institute of Cybernetics, National Academy of Sciences of Ukraine, together with the United Institute of Informatics Problems, National Academy of Sciences of Belarus (F19UKRG-005, with support of the Belarusian Republican Foundation for Fundamental Research). Insurers entering the market can insure mostly low risks by facilitating more frequent changes of insurers by consumers (policyholders) and mixing the overall health insurance market. Socio-demographic variables can serve as risk adjusters. Since age and gender have relatively small explanatory power, other socio-demographic variables were studied: marital status, retirement status, disability status, educational level and income level. Because insurers have an interest in beneficial diagnoses for their policyholders, they are also interested in the ability to interpret relevant information (upcoding): insurers can encourage their policyholders to consult doctors more often so that as many diagnoses as possible are recorded. Many countries and health care systems use diagnostic information to determine the reimbursement to a service provider, revealing the necessary data. For processing and analysis of these data, software implementations of classifier construction, selection of informative features and processing of heterogeneous medical and biological variables for scientific research in clinical medicine were developed. Particular attention is paid to the containerization of biomedical applications (Docker and Singularity technologies), which makes it possible to reproduce the conditions in which the calculations took place (an unchanged software environment, including software and libraries); to software pipelining of calculations, which allows flow computations to be organized; and to parameterization of the software environment, which allows an identical computing environment to be reproduced when necessary.
The experience of using the developed linear classifier, gained during its testing on artificial and real data, allows us to draw conclusions about several advantages provided by the containerized form of the created application: it provides access to real data located in the cloud environment; calculations for research problems can be performed on cloud resources both with the developed tools and with cloud services; such a form of research organization makes numerical experiments reproducible, i.e. any other researcher can compare the results of their developments on specific data that have already been studied by others, in order to verify the conclusions and the technical feasibility of new results; and the developed tools can be used on technical devices of various classes, from a personal computer to a powerful cluster.
10

Settari, Antonin, G. M. Warren, Jerome Jacquemont, Paul Bieniawski, and Michel Dussaud. "Brine Disposal Into a Tight Stress-Sensitive Formation at Fracturing Conditions: Design and Field Experience." SPE Reservoir Evaluation & Engineering 2, no. 02 (1999): 186–95. http://dx.doi.org/10.2118/56001-pa.

Abstract:
Summary
This paper describes a study of the potential of a tight reservoir zone for disposal of brine generated in salt cavern leaching operations. The study included field injection testing, numerical analysis using uncoupled and coupled reservoir, geomechanical and fracturing modeling, laboratory work and design of a field injection monitoring program. It was shown that a surprising brine disposal capacity exists in the tight (0.03 md) Oriskany target formation. Initial screening was followed by carefully designed injection testing, laboratory work and subsequent evaluation with the aid of detailed coupled fracture and reservoir numerical models, and numerical well test analysis. Low initial estimates of brine disposal capacity were increased significantly by incorporating more sophisticated, coupled reservoir and geomechanical numerical models. The models, which account for stress dependent porosity and permeability and fracture propagation, were calibrated to laboratory and field test data. Using these models, an excellent match of the injection data was obtained, and predictions of injectivity were made under various project scenarios. The coupled model has also been used to design the monitoring program for the first phase of the injection operations.
Introduction
Gas storage in salt caverns has many advantages over conventional storage operations in reservoirs. In the U.S. alone, over 30 caverns have been built and put into operation. On the other hand, the selection of the site, design and execution of the leaching process, commissioning, operating and monitoring the caverns requires specialized, multidisciplinary technology.1,2 This paper deals only with one facet of the overall process, namely, the disposal of the brine generated during the leaching process by reinjection. This topic has many similarities to other injection processes in petroleum engineering and will be of interest to those working in waterflooding or waste disposal at or near fracturing conditions, and in geomechanics and fracturing. In cavern leaching operations, large amounts of concentrated brine are generated which need to be treated or disposed of. In general, there are several ways to dispose of or utilize the brine, such as selling it for salt products manufacture, building a salt product manufacturing plant, disposal in suitable permeable formations, or even pipelining to the sea. The economical and environmental aspects of each alternative guide the selection of the best method (or combination of several methods). Even more importantly, the efficiency of the brine disposal is one of the critical elements for the economics of a planned cavern gas storage project. In the subject project described below, disposal by reinjecting the brine was considered in conjunction with selling the majority of the brine to a salt product company.
The Tioga Gas Storage Project
The location of the project in Tioga, Pennsylvania, was selected by its developer Market Hub Partners (MHP) based on gas market analysis and geological considerations. Key elements included finding a salt formation which would be an excellent candidate for cavern leaching, close to existing pipelines and infrastructure. Such a formation was found below the existing Tioga gas storage field. As shown in Fig. 1, MHP is planning to build up to ten storage caverns in this massive (2200 ft thick) salt formation, separated from the Oriskany gas storage formation by 400 ft of limestone and anhydrite shale. The structural cross-section of the storage site, with a proposed cavern location, is shown in Fig. 2. Each cavern will provide around 2,500,000 MScf of storage and in the process of leaching will generate about 25 million bbls of brine per cavern. These fluid volumes provide a large incentive to find a suitable horizon for disposal of the brine, and to prove up the injection capacity.
Disposal Site Selection and Geology
Two sites were selected for brine disposal, based on a geological review including the interpretation of 7 seismic lines and 120 wells in the vicinity of the Tioga storage pool. The brine disposal areas are 1.5 miles south of the gas storage field, and isolated from it by a series of faults of more than 1,000 ft of thrust (see Fig. 2). The target injection zone consists of the middle Devonian Oriskany sandstone and Helderberg limestone. These formations are isolated from the shallow drinking water aquifer by 4,800 ft of upper Devonian shales, including a strike of very tight limestone. The underlying formations consist of a layer of anhydrite on top of the salt section (Salina). The first brine disposal test well SWD#01 (see Fig. 3) was drilled in June 1995 and confirmed the results of the geological review. SWD#01 was cored in the Oriskany, Helderberg, Anhydrite and salt.
Exploratory Testing of the Target Zones
A first DST, followed by the injection of 600 barrels of brine, was performed in the Oriskany. The injection/fall-off test was conducted above parting pressure, in order to achieve a commercial rate (22 gpm). The well was then deepened to reach TD, at 5,750 ft GL. A set of logs was run, including an FMI log, which confirmed an induced fracture in the Oriskany zone. A second DST was performed, in the Oriskany, Helderberg, and Anhydrite sections, followed by the injection of 1,200 barrels of brine. The pressure signature was similar to DST#1. A second FMI log was conducted, which exhibited new induced fracturing in the Helderberg. SWD#01 was then cased, cemented and perforated in the Oriskany and Helderberg. Stress tests were conducted in the Oriskany and the Helderberg.
11

Singh, Charanjit, and Balwinder Singh. "Design of High Performance Modified Wave pipelined DAA Filter with Critical Path Approach." International Journal of Electronics and Electrical Engineering, October 2012, 78–82. http://dx.doi.org/10.47893/ijeee.2012.1016.

Abstract:
In this paper, a new high-speed control circuit is proposed which acts as a critical path for the data travelling from input to output, in order to improve the performance of wave-pipelined circuits. Wave pipelining is a high-performance circuit design method which implements pipelining in logic without the use of intermediate registers. Wave pipelining has been widely used in the past few years with significant advances in technology and applications, and has the ability to improve speed, efficiency and economy in every respect. It is used in a wide range of applications such as digital filters, network routers, multipliers, fast convolvers, MODEMs, image processing, control systems and radars. In previous work, the operating speed of a wave-pipelined circuit was increased by three tasks: adjustment of the clock period, adjustment of the clock skew, and equalization of path delays. The path-delay equalization task can be done theoretically, but the real challenge is to accomplish it in the presence of various delays. The main objective of this paper is therefore to solve the path-delay equalization problem by inserting a control circuit into the wave-pipelined circuit which acts as a critical path for the data moving from input to output. The proposed technique is evaluated for DSP applications by designing a 4-tap FIR filter using the distributed arithmetic algorithm (DAA). This design is then compared with 4-tap FIR filter designs using conventional pipelining and no pipelining. The synthesis and simulation results based on Xilinx ISE Navigator 12.3 show that the wave-pipelined DAA-based filter is faster by a factor of 1.43 compared to the non-pipelined one, and the conventional pipelined filter is faster than the non-pipelined one by a factor of 1.61, but at the cost of a 200% increase in logic utilization. Thus, the wave-pipelined DA filters designed with the proposed control circuit can operate at a higher frequency than the non-pipelined design but lower than the pipelined one. The gain in speed of the pipelined design compared to the wave-pipelined one comes at the cost of increased area and power dissipation. When latency is considered, the wave-pipelined filters with the proposed scheme have the lowest latency among the three schemes.
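The distributed arithmetic algorithm (DAA) used for the 4-tap FIR filter above replaces multipliers with a precomputed lookup table addressed by one bit-plane of the input samples per clock. The sketch below shows that mechanism for unsigned 8-bit samples and example coefficients (both assumptions); practical designs handle two's-complement inputs with an offset-binary variant of the table.

```python
# Distributed arithmetic (DA) evaluation of a 4-tap FIR inner product:
# one LUT access and one shift-accumulate per bit-plane, no multipliers.

TAPS = [3, -1, 4, 2]                       # example filter coefficients
BITS = 8                                   # unsigned input sample wordlength

# Precompute the DA lookup table: one partial sum per combination of the
# current bit of each of the 4 input samples.
LUT = [sum(c for sel, c in enumerate(TAPS) if (addr >> sel) & 1)
       for addr in range(1 << len(TAPS))]

def da_fir(samples):
    """Compute sum(TAPS[k] * samples[k]) one bit-plane per 'clock'."""
    assert len(samples) == len(TAPS)
    acc = 0
    for j in range(BITS):                  # LSB first, shift-accumulate
        addr = sum(((samples[k] >> j) & 1) << k for k in range(len(TAPS)))
        acc += LUT[addr] << j
    return acc

if __name__ == "__main__":
    x = [10, 200, 33, 7]
    assert da_fir(x) == sum(c * v for c, v in zip(TAPS, x))
    print(da_fir(x))
```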
12

Aqueel, Shabana, and Kavita Khare. "A High Performance DDR3 SDRAM Controller." International Journal of Electronics and Electrical Engineering, July 2012, 1–4. http://dx.doi.org/10.47893/ijeee.2012.1001.

Abstract:
The paper presents the implementation of a compliant DDR3 memory controller. It discusses the overall architecture of the DDR3 controller along with the detailed design and operation of its individual sub-blocks and the pipelining implemented in the design to increase throughput. It also discusses the advantages of DDR3 memories over DDR2 memories. Double Data Rate (DDR) SDRAMs have been prevalent in the PC memory market in recent years and are widely used in networking systems. These memory devices are developing rapidly, with high density, high memory bandwidth and low device cost. However, because of the high-speed interface technology and complex instruction-based memory access control, a specific-purpose memory controller is necessary for optimizing the memory access trade-offs. In this paper, a specific-purpose DDR3 controller for high performance is proposed.
13

"Research on High Speed Low Power Digital Logic Family for Pipelined Arithmatic Logic Structures." International Journal of Innovative Technology and Exploring Engineering 8, no. 12S (2019): 637–48. http://dx.doi.org/10.35940/ijitee.l1156.10812s19.

Abstract:
The clock speed of operation depends on the bit size of the data being processed: as the bit size of the data increases, so does the delay in the circuit. To overcome this, pipelining and parallel processing are used, which increases the performance of the circuit. With the advancement of high-speed technology, the data length processed per clock has been increasing rapidly across successive Intel processor series. Important adder structures designed with parallel and pipelining schemes are the RCA and the SFA. Designing these adders requires digital electronic circuits that are both high-speed and low-power. There are various types of logic families, which are discussed in this paper, from static to dynamic circuit design and why dynamic is faster than static. Among the various dynamic circuit design structures, this paper focuses on the constant delay logic style and why it is superior to other dynamic structures such as domino logic, np-CMOS logic, C2MOS logic, NORA CMOS logic, Zipper CMOS and FTL logic.
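The ripple carry adder (RCA) mentioned above is the structure whose carry chain sets the critical path that pipelining and faster logic families try to shorten; the bit-level Python model below shows how each full adder must wait for the carry from the previous bit (the 16-bit width is an illustrative assumption).

```python
# Bit-level model of a ripple carry adder: the carry propagates serially
# through the full adders, so the delay grows with the word length.

def full_adder(a: int, b: int, cin: int):
    s = a ^ b ^ cin
    cout = (a & b) | (cin & (a ^ b))
    return s, cout

def ripple_carry_add(x: int, y: int, width: int = 16):
    carry, result = 0, 0
    for i in range(width):                    # carry ripples bit by bit
        s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= s << i
    return result, carry                      # sum (mod 2**width) and carry-out

if __name__ == "__main__":
    s, c = ripple_carry_add(40000, 30000)
    print(s, c)                               # 70000 mod 2**16, carry-out 1
```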
14

"Dolphin Echolocation Based Generation of Application Definite Noc Custom Topology." International Journal of Recent Technology and Engineering 8, no. 3 (2019): 8247–54. http://dx.doi.org/10.35940/ijrte.c6572.098319.

Abstract:
The invention of new electronic devices in this technology-driven world has been setting new standards for faster communication and utilization. Imperfections in the System on Chip (SoC) led to the innovation of the Network on Chip (NoC), which addresses the communication defects and thereby sets a new path for researchers to enhance network connectivity. Nature is an inspiration for many of the technologies invented by researchers. In this paper, we address the existing physical flaws and future challenges using the Reliable Reconfigurable Real-Time Operating System (R3TOS) as a software interface to the NoC, together with the proposed customized topology of Transmission Rate based Topology Design using Dolphin Echolocation (TRTD-DE). Echolocation is the biological scanning system used by dolphins for navigation and chasing prey. This ability, along with Data Flow Pipelining (DFP), constructs the customized topology, which processes parallel flows of data in order of the availability of the input data. The topology effectively increases performance due to its clustering behaviour. The algorithm greatly reduces the latency period, which is the key benefit of the topology, and increases the throughput. The research objectives of reducing transmission rate and energy utilization are tested with different multimedia benchmark applications. The transmission rate is reduced by an average of 41.39%, while energy consumption is reduced by an average of 31.9%.
15

"Design and Implementation of AES Algorithm." International Journal of Recent Technology and Engineering 8, no. 2S4 (2019): 387–90. http://dx.doi.org/10.35940/ijrte.b1075.0782s419.

Abstract:
The Advanced Encryption Standard (AES) was endorsed by the National Institute of Standards and Technology in 2001. It was intended to replace the aging Data Encryption Standard (DES) and to be useful for a wide range of applications with differing throughput, area, power dissipation and energy consumption requirements. Though they are very flexible, FPGAs are generally less efficient than Application Specific Integrated Circuits (ASICs); there have been numerous AES implementations that focus on obtaining high throughput or low area usage, but very little work has been done on low-power or energy-efficient AES; in fact, it is uncommon for power dissipation to be evaluated at all. This work presents new efficient hardware implementations of the AES algorithm. Two primary contributions are presented: the first is a high-speed 128-bit AES encryptor, and the second is a new 32-bit AES design. In the first contribution, a 128-bit loop-unrolled sub-pipelined AES encryptor is presented, in which the sub-steps of the encryption procedure are efficiently merged after relocating them. The second contribution presents a 32-bit AES design in which the S-box is implemented with internal pipelining and is shared between the main round and the key expansion units. In addition, the key expansion unit operates on the fly and in parallel with the main round unit. These designs achieve higher FPGA throughput/area efficiency compared to previous AES designs.