Academic literature on the topic 'RTL schematic'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'RTL schematic.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "RTL schematic"

1

Deepthi, S. Aruna, E. Sreenivasa Rao, and M. N. Giri Prasad. "RTL implementation of image compression techniques in WSN." International Journal of Electrical and Computer Engineering (IJECE) 9, no. 3 (2019): 1750–56. https://doi.org/10.11591/ijece.v9i3.pp.1750-1756.

Full text
Abstract:
Wireless sensor networks have limitations regarding data redundancy and power, and require high bandwidth when used for multimedia data. Image compression methods overcome these problems. The Non-negative Matrix Factorization (NMF) method is useful for approximating high-dimensional data where the data has non-negative components. A related method, Projective Non-negative Matrix Factorization (PNMF), is used for learning spatially localized visual patterns. Simulation results show the comparison between the SVD, NMF, and PNMF compression schemes. Compressed images are transmitted from the base station to the cluster head node and received from ordinary nodes. The station takes on the image restoration. Image quality, compression ratio, signal-to-noise ratio, and energy consumption are the essential metrics measured for compression performance. In this paper, the compression methods are designed using Matlab. Parameters such as PSNR and the total node energy consumption are calculated. The RTL schematic of the NMF, SVD, and PNMF methods is generated using Verilog HDL.
APA, Harvard, Vancouver, ISO, and other styles
2

Deepthi, S. Aruna, E. Sreenivasa Rao, and M. N. Giri Prasad. "RTL Implementation of image compression techniques in WSN." International Journal of Electrical and Computer Engineering (IJECE) 9, no. 3 (2019): 1750. http://dx.doi.org/10.11591/ijece.v9i3.pp1750-1756.

Full text
Abstract:
Wireless sensor networks have limitations regarding data redundancy and power, and require high bandwidth when used for multimedia data. Image compression methods overcome these problems. The Non-negative Matrix Factorization (NMF) method is useful for approximating high-dimensional data where the data has non-negative components. A related method, Projective Non-negative Matrix Factorization (PNMF), is used for learning spatially localized visual patterns. Simulation results show the comparison between the SVD, NMF, and PNMF compression schemes. Compressed images are transmitted from the base station to the cluster head node and received from ordinary nodes. The station takes on the image restoration. Image quality, compression ratio, signal-to-noise ratio, and energy consumption are the essential metrics measured for compression performance. In this paper, the compression methods are designed using Matlab. Parameters such as PSNR and the total node energy consumption are calculated. The RTL schematic of the NMF, SVD, and PNMF methods is generated using Verilog HDL.
APA, Harvard, Vancouver, ISO, and other styles
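The two records above describe NMF-based image compression judged by PSNR and compression ratio. As a rough, hypothetical sketch of that idea (not the authors' Matlab code; the toy image, rank, and iteration count are all assumptions), a low-rank NMF approximation and a PSNR check can be written in a few lines of Python:

```python
import numpy as np

def nmf(V, rank, iters=200, seed=0):
    """Approximate V ~ W @ H with non-negative factors using
    Lee-Seung multiplicative updates (illustrative, not the paper's code)."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, rank)) + 1e-4
    H = rng.random((rank, n)) + 1e-4
    eps = 1e-10
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

def psnr(original, reconstructed, peak=255.0):
    """Peak signal-to-noise ratio in dB."""
    mse = np.mean((original - reconstructed) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(peak ** 2 / mse)

# Toy "image": a smooth gradient, highly compressible at low rank.
img = np.outer(np.linspace(0, 255, 64), np.linspace(0.2, 1.0, 64))
W, H = nmf(img, rank=4)
# Storing W and H instead of the full image gives the compression ratio.
ratio = img.size / (W.size + H.size)
print(f"compression ratio = {ratio:.1f}x, PSNR = {psnr(img, W @ H):.1f} dB")
```

A real pipeline would operate on image blocks and quantize the factors; PNMF differs in constraining the approximation to the projective form W @ W.T @ V, but the PSNR metric is computed the same way.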
3

Sharma, Sanjeev Kumar, Arpit Jain, Kamali Gupta, Devendra Prasad, and Varinder Singh. "An Internal Schematic View and Simulation of Major Diagonal Mesh Network-on-Chip." Journal of Computational and Theoretical Nanoscience 16, no. 10 (2019): 4412–17. http://dx.doi.org/10.1166/jctn.2019.8534.

Full text
Abstract:
NoC is a competent communication approach for on-chip network architectures. It makes computation and high-congestion communication on a single chip more efficient. In this paper, we propose a NoC topology, the Major Diagonal Mesh NoC, called MD-Mesh NoC. In the MD-Mesh NoC, the corners of the major diagonal are linked with each other so that the efficiency of the communication between the corners can be increased. The internal schematic view and register transfer logic (RTL) view are shown. As the number of connections among the nodes increases and the number of hops decreases, the performance of packet traversal increases. Synthesis and simulation have been done on a Virtex-5 FPGA. Hardware parameters such as the number of slices and memory usage with respect to an increasing number of nodes have been calculated on the Virtex-5 FPGA.
APA, Harvard, Vancouver, ISO, and other styles
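The MD-Mesh idea in the abstract above, as we read it, adds a direct link between the corners of the major diagonal so that the hop count drops. A small hypothetical Python sketch (the 4x4 size and the single corner-to-corner link are our assumptions, not the paper's exact topology) shows the effect:

```python
from collections import deque

def hops(n, edges, src, dst):
    """Breadth-first-search hop count between nodes 0..n-1, or None."""
    adj = {v: [] for v in range(n)}
    for a, b in edges:
        adj[a].append(b)
        adj[b].append(a)
    dist = {src: 0}
    q = deque([src])
    while q:
        v = q.popleft()
        if v == dst:
            return dist[v]
        for w in adj[v]:
            if w not in dist:
                dist[w] = dist[v] + 1
                q.append(w)
    return None

def mesh_edges(k):
    """Edges of a k x k mesh; node (r, c) has index r*k + c."""
    e = []
    for r in range(k):
        for c in range(k):
            if c + 1 < k:
                e.append((r * k + c, r * k + c + 1))
            if r + 1 < k:
                e.append((r * k + c, (r + 1) * k + c))
    return e

k = 4
plain = mesh_edges(k)
# MD-Mesh (our reading of the abstract): additionally link the two
# corners of the major diagonal, node 0 and node k*k-1, directly.
md = plain + [(0, k * k - 1)]
print(hops(k * k, plain, 0, k * k - 1), hops(k * k, md, 0, k * k - 1))
```

On the plain 4x4 mesh the corner-to-corner path takes 6 hops; with the diagonal link it takes 1, which is the kind of hop-count reduction the paper attributes to the MD-Mesh.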
4

Shashidhara, K. S., and H. C. Srinivasaiah. "Implementation of 1024-point FFT Soft-Core to Characterize Power and Resource Parameters in Artix-7, Kintex-7, Virtex-7, and Zynq-7000 FPGAs." European Journal of Engineering Research and Science 4, no. 9 (2019): 81–88. http://dx.doi.org/10.24018/ejers.2019.4.9.1515.

Full text
Abstract:
This paper presents the implementation of a 1024-point Fast Fourier Transform (FFT). The MATLAB Simulink environment is used to implement the complex 1024-point FFT. The FFT is implemented on four different FPGAs: Artix-7, Kintex-7, Virtex-7, and Zynq-7000. A comparative study of power and resource consumption, the design parameters of prime concern, has been carried out. The results show that the Artix-7 FPGA consumes the least power, 3.402 W, when compared with the contemporary devices mentioned above. The resource consumption remains the same across all the devices. The resource estimation on each FPGA is carried out, and its results are presented for the 1024-point FFT implementation. This comprehensive analysis provides deep insight with respect to power and resources. Synthesis and implementation results such as the RTL schematic, I/O planning, and floor planning are generated and analyzed for all the above devices.
APA, Harvard, Vancouver, ISO, and other styles
5

Shashidhara, K. S., and H. C. Srinivasaiah. "Implementation of 1024-point FFT Soft-Core to Characterize Power and Resource Parameters in Artix-7, Kintex-7, Virtex-7, and Zynq-7000 FPGAs." European Journal of Engineering and Technology Research 4, no. 9 (2019): 81–88. http://dx.doi.org/10.24018/ejeng.2019.4.9.1515.

Full text
Abstract:
This paper presents the implementation of a 1024-point Fast Fourier Transform (FFT). The MATLAB Simulink environment is used to implement the complex 1024-point FFT. The FFT is implemented on four different FPGAs: Artix-7, Kintex-7, Virtex-7, and Zynq-7000. A comparative study of power and resource consumption, the design parameters of prime concern, has been carried out. The results show that the Artix-7 FPGA consumes the least power, 3.402 W, when compared with the contemporary devices mentioned above. The resource consumption remains the same across all the devices. The resource estimation on each FPGA is carried out, and its results are presented for the 1024-point FFT implementation. This comprehensive analysis provides deep insight with respect to power and resources. Synthesis and implementation results such as the RTL schematic, I/O planning, and floor planning are generated and analyzed for all the above devices.
APA, Harvard, Vancouver, ISO, and other styles
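The two records above concern a 1024-point FFT core characterized on several FPGA families; the numerical behaviour of such a core is easy to cross-check in software. A minimal NumPy sketch (the two-tone test signal is our assumption, not the papers' stimulus):

```python
import numpy as np

N = 1024                       # transform length used in the papers above
t = np.arange(N)
# Two-tone test signal: bins 50 and 200 should dominate the spectrum.
x = np.sin(2 * np.pi * 50 * t / N) + 0.5 * np.sin(2 * np.pi * 200 * t / N)

X = np.fft.fft(x)              # software reference for a 1024-point FFT
mag = np.abs(X[:N // 2])       # one-sided magnitude spectrum
peaks = sorted(map(int, np.argsort(mag)[-2:]))   # two strongest bins
print(peaks)                   # -> [50, 200]

# Parseval's theorem: energy matches between time and frequency domains.
assert np.allclose(np.sum(x ** 2), np.sum(np.abs(X) ** 2) / N)
```

A hardware soft-core would typically use fixed-point butterflies, so its output would match this floating-point reference only to within quantization error.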
6

Narendran, S., and J. Selvakumar. "Digital Simulation of Superconductive Memory System Based on Hardware Description Language Modeling." Advances in Condensed Matter Physics 2018 (May 27, 2018): 1–5. http://dx.doi.org/10.1155/2018/2683723.

Full text
Abstract:
We have modeled a memory system using Josephson junctions to attain low power consumption with a low input voltage, compared to conventional Complementary Metal Oxide Semiconductor Static Random Access Memory (CMOS-SRAM). We attained the low power by connecting a shared/common bit line and using a 1-bit memory cell. Through our design we may attain 2.5–3.5 microwatts of power using a lower input voltage of 0.6 millivolts. A comparative study has been made to find which memory system attains low power consumption. Conventional SRAM techniques consume power in the range of milliwatts with a supply input in the range of 0–10 volts. Using an HDL, we made a memory logic design of RAM cells based on Josephson junctions in the FreeHDL software, which is dedicated to Josephson-junction-based design. Using XILINX, we calculated the power consumption, and the equivalent Register Transfer Level (RTL) schematic is drawn.
APA, Harvard, Vancouver, ISO, and other styles
7

Parikh, Raj, and Khushi Sandip Parikh. "Mathematical Foundations of AI-Based Secure Physical Design Verification." Indian Journal of VLSI Design 5, no. 1 (2025): 1–7. https://doi.org/10.54105/ijvlsid.a1230.05010325.

Full text
Abstract:
Concerns about hardware security are raised by the increasing dependence on third-party semiconductor intellectual property in system-on-chip design, especially during physical design verification. Traditional rule-based verification methods, such as Design Rule Checking (DRC) and Layout vs. Schematic (LVS) checking, together with side-channel analysis, show apparent deficiencies in dealing with new forms of threat. The difficulty of distinguishing dependable from malicious insertions in ICs makes it hard to prevent dangers such as hardware Trojans (HTs); side-channel vulnerabilities remain widespread, and modifications at various stages of the manufacturing process can be hard to detect. This work addresses these security challenges by defining a theoretical AI-driven framework for secure physical design verification that couples graph neural networks (GNNs) and probabilistic modeling with constraint optimization to maximize IC security. This approach views physical design verification as graph-based machine learning: GNNs identify unauthorized modifications or discrepancies between the layout and circuit netlist through the acquisition of behavioral metrics and structural feature extraction from netlist data. A probabilistic DRC model is derived after processing the learning data using recurrent algorithms. This model departs from the rigid rules of traditional deterministic DRC in that it uses machine-learning-based predictions to estimate the likelihood that design rules will be violated. Secure routing is also modeled as a constrained pathfinding problem in which moves are optimized to avoid sources of security problems, such as crosstalk-induced leakage and electromagnetic side-channel threats.
Lagrange multipliers and Karush-Kuhn-Tucker (KKT) conditions are included in verification to maintain security constraints while ensuring efficient use of resources. HT detection is then reformulated in terms of GNN-based node embeddings, whose information propagation throughout the circuit graph picks up modifications at boundary nodes and those less deep in the structure. As an alternative to the experience-based anomaly detection proposed in earlier work, a theoretical softmax-based anomaly classification framework is put forward here to model HT insertion probabilities, capturing anomalies at various levels of circuit design, from the RTL level to the gate level, as necessary. The capture of side-channel signals becomes the focus of a deep-learning-based theoretical run-time anomaly detection model, aimed at power and electromagnetic (EM) leakage patterns so that all potential threats can be detected early on. This theoretical framework provides a conceptual methodology for scalable, automated, and robust security verification in modern ICs through graph-based learning and constrained optimization methods. It lays a foundation to advance secure semiconductor designs further using AI-driven techniques, without recourse to benchmarks or empirical validations.
APA, Harvard, Vancouver, ISO, and other styles
8

Parikh, Khushi Sandip. "Mathematical Foundations of AI-Based Secure Physical Design Verification." Indian Journal of VLSI Design (IJVLSID) 5, no. 1 (2025): 1–7. https://doi.org/10.54105/ijvlsid.A1230.05010325.

Full text
Abstract:
Concerns about hardware security are raised by the increasing dependence on third-party semiconductor intellectual property in system-on-chip design, especially during physical design verification. Traditional rule-based verification methods, such as Design Rule Checking (DRC) and Layout vs. Schematic (LVS) checking, together with side-channel analysis, show apparent deficiencies in dealing with new forms of threat. The difficulty of distinguishing dependable from malicious insertions in ICs makes it hard to prevent dangers such as hardware Trojans (HTs); side-channel vulnerabilities remain widespread, and modifications at various stages of the manufacturing process can be hard to detect. This work addresses these security challenges by defining a theoretical AI-driven framework for secure physical design verification that couples graph neural networks (GNNs) and probabilistic modeling with constraint optimization to maximize IC security. This approach views physical design verification as graph-based machine learning: GNNs identify unauthorized modifications or discrepancies between the layout and circuit netlist through the acquisition of behavioral metrics and structural feature extraction from netlist data. A probabilistic DRC model is derived after processing the learning data using recurrent algorithms. This model departs from the rigid rules of traditional deterministic DRC in that it uses machine-learning-based predictions to estimate the likelihood that design rules will be violated. Secure routing is also modeled as a constrained pathfinding problem in which moves are optimized to avoid sources of security problems, such as crosstalk-induced leakage and electromagnetic side-channel threats.
Lagrange multipliers and Karush-Kuhn-Tucker (KKT) conditions are included in verification to maintain security constraints while ensuring efficient use of resources. HT detection is then reformulated in terms of GNN-based node embeddings, whose information propagation throughout the circuit graph picks up modifications at boundary nodes and those less deep in the structure. As an alternative to the experience-based anomaly detection proposed in earlier work, a theoretical softmax-based anomaly classification framework is put forward here to model HT insertion probabilities, capturing anomalies at various levels of circuit design, from the RTL level to the gate level, as necessary. The capture of side-channel signals becomes the focus of a deep-learning-based theoretical run-time anomaly detection model, aimed at power and electromagnetic (EM) leakage patterns so that all potential threats can be detected early on. This theoretical framework provides a conceptual methodology for scalable, automated, and robust security verification in modern ICs through graph-based learning and constrained optimization methods. It lays a foundation to advance secure semiconductor designs further using AI-driven techniques, without recourse to benchmarks or empirical validations.
APA, Harvard, Vancouver, ISO, and other styles
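Among the techniques named in the records above is a softmax-based anomaly classification that turns per-node scores into hardware Trojan insertion probabilities. As a generic, hypothetical illustration of that one ingredient (the scores and class labels are invented, and this is not the authors' model):

```python
import numpy as np

def softmax(z):
    """Convert raw scores into a probability distribution."""
    z = z - np.max(z)          # shift by the max for numerical stability
    e = np.exp(z)
    return e / e.sum()

# Hypothetical per-class anomaly scores for one netlist node, e.g.
# [benign, suspicious rewiring, possible Trojan insertion].
scores = np.array([0.2, 1.1, 3.0])
probs = softmax(scores)
print(int(probs.argmax()), round(float(probs.sum()), 6))   # -> 2 1.0
```

In the framework described above, such scores would come from GNN node embeddings rather than being hand-set; the softmax layer only converts them into comparable probabilities.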
9

Ioannou, Georgios. "A corpus-based analysis of the verb pleróo in Ancient Greek." Review of Cognitive Linguistics 15, no. 1 (2017): 253–87. http://dx.doi.org/10.1075/rcl.15.1.10ioa.

Full text
Abstract:
This is a corpus-based study of the development of the verb pleróo in Ancient Greek, originally meaning 'fill', from the 6th c. BCE in Classical Greek up to the end of the 3rd c. BCE in Hellenistic Koiné. It implements a hierarchical cluster analysis and a multiple correspondence analysis of the sum of the attested instances of pleróo from that period, divided by century. It explores the gains following a syncretism between two methodological strands: earlier introspective analyses postulating variant construals over intuitively grasped schematic configurations such as image schemas, and strictly inductive methods based on statistical analyses of correlations between co-occurring formal and semantic features. Thus, it examines the relevance of the container image schema to the architecture of the schematic construction corresponding to the prototypical and historically preceding sense of pleróo, 'fill'. Consequently, it observes how shifts in the featural configurations detected through statistical analysis, leading to the emergence of new senses, correspond to successive shifts in the perspectival salience of elements in the schematic construction of the verb.
APA, Harvard, Vancouver, ISO, and other styles
10

Kanasugi, Petra. "Parts of speech membership as a factor of meaning extension and level of abstraction." Review of Cognitive Linguistics 17, no. 1 (2019): 78–112. http://dx.doi.org/10.1075/rcl.00027.kan.

Full text
Abstract:
Czech and Japanese show formal differences in adnominal modification. Czech tends to utilize adjectives for both classification and qualification purposes, whereas Japanese tends to express classification by compounding and to use a whole range of parts of speech for qualification. As a result, part-of-speech membership often differs between the Czech and Japanese renderings of the same referential content. It has been shown that parts of speech carry schematic meaning which contributes to conceptualization. Based on the results of corpus analysis, I argue that the difference in part-of-speech membership results in different tendencies in meaning extension and ultimately in different meanings of the two counterparts: Czech adjectives are more abstract and schematic, while Japanese verbs are more concrete.
APA, Harvard, Vancouver, ISO, and other styles
More sources

Conference papers on the topic "RTL schematic"

1

Fu, Rongliang, Zhi-Min Zhang, Guang-Ming Tang, et al. "Design Automation Methodology from RTL to Gate-level Netlist and Schematic for RSFQ Logic Circuits." In GLSVLSI '20: Great Lakes Symposium on VLSI 2020. ACM, 2020. http://dx.doi.org/10.1145/3386263.3406898.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

McMeekin, S. G., M. R. S. Taylor, B. Vögele, and C. N. Ironside. "Franz-Keldysh effect in an optical waveguide containing a resonant tunneling diode." In The European Conference on Lasers and Electro-Optics. Optica Publishing Group, 1994. http://dx.doi.org/10.1364/cleo_europe.1994.ctur2.

Full text
Abstract:
We report on work concerned with the optoelectronic properties of a GaAs optical waveguide containing a resonant tunneling diode (RTD). As far as we are aware this is the first report of an RTD directly incorporated in an optical waveguide and the observation of optical modulation via a Franz-Keldysh shift on the band edge. Figure 1 shows a cross-section schematic of the device. The device that was employed for our Franz-Keldysh band-shift measurements consisted of a 4-μm-wide optical waveguide with a 200-μm-long contact on top of the waveguide defining an active area of 800 μm². Figure 2 shows the I-V curve for this optical waveguide RTD. Resonance was found to occur at a bias of 0.9 V with a sharp drop in the current indicating bistability but not oscillation. Optical characterisation was carried out with the aim of determining the change in the optical absorption spectrum when the device switched. Optical characterisation of the device employed a Ti:sapphire laser that was tuneable in the wavelength region close to the band-gap resonance of the waveguide, i.e., close to 890 nm. In order to minimise thermal effects in the RTD a pulsed power supply was used. When the pulse amplitude exceeded Vb the current through the RTD would switch rapidly from Ib to Ic (see Fig. 2). An increase in absorption along the length of the RTD was observed upon switching due to the RTD. No change in absorption was observed when the device was biased below Vb.
APA, Harvard, Vancouver, ISO, and other styles
3

Magomadov, I. A., N. S. Uzdieva, S. A. Balhasan, et al. "Approbation and Implementation of New Technologies for Processing Seismic Data of Complex Folded Zones (CRS, Beam, RTM)." In ADIPEC. SPE, 2024. http://dx.doi.org/10.2118/222473-ms.

Full text
Abstract:
In complex folded areas with harsh tectonic conditions, there are problems in the seismic imaging of subthrust zones and areas with steep slope angles. Recently, new seismic processing migration algorithms have appeared which are quite expensive in terms of computational resources but, on the other hand, make it possible to image complex structures more correctly, for example with salt-dome tectonics. The purpose of this work is to test new algorithms for improving the signal-to-noise ratio in areas with complex wave fields and migrations. Nine 2D seismic lines were processed in the Omega 2018 software package using CRS, Beam, and RTM technologies. The pre-processing stage included quality analysis of the seismic signal and estimation of such source/receiver parameters as schematic maps of root-mean-square amplitudes, dominant frequencies, and signal-to-noise ratios. A robust surface-consistent deconvolution was applied to improve the signal processing. After selecting the optimal parameters, CRS summation and CRS seismogram operators were obtained. The RTM migration used TEEC ware's RTM method, which performs migration using relief and generates CRP gathers in either the surface-offset domain or the reflection-angle domain. Angular seismograms were calculated to improve the signal-to-noise ratio. Another measure to ensure a high signal-to-noise ratio was the use of CRS gathers as input data, which greatly improved depth imaging. Simulation software for migration processing was used for the deep migration of CRS seismograms. The cluster-based imaging system generates seismograms with normal reflection angles without azimuth dependence. Although the velocity models for Beam and RTM are equal, the velocity models for PSTM and RTM are very different. When forming a deep velocity model for PSTM, it is necessary to perform smoothing on a large base; otherwise, migration artifacts will arise in places of sharp changes in velocity.
For RTM, on the contrary, a correct velocity model (without anti-aliasing) is required, which will generate an image of higher quality. Beam migration's computational cost is higher than that of Kirchhoff migration due to a more correct consideration of the dynamics and paths of rays. However, these costs are not comparable to the computational costs of RTM. RTM migration costs are also very sensitive to the maximum frequency used in the calculations. RTM migration produces less noise than Beam migration and performs better in areas where reflections are lost. However, one must keep in mind that the velocities for the Beam and RTM migrations, although the same, were obtained using RTM migration, and if only Beam migration had been used, the result could be worse. In different parts of the section, the advantages of one or another migration are visible. In general, it is noticeable that RTM migration works better in the upper part of the sections. The frequency content of the RTM migration recording is often higher, and there is less noise. The methods outlined in this paper will reduce problems in imaging the medium in subthrust parts of structures and areas with steep slope angles, which is an actual problem for the Caspian and Russian regions and some parts of the Middle East.
APA, Harvard, Vancouver, ISO, and other styles