
Dissertations / Theses on the topic 'Register Transfer Level Design'


Consult the top 34 dissertations / theses for your research on the topic 'Register Transfer Level Design.'


1

Niu, Xinwei. "System-on-a-Chip (SoC) based Hardware Acceleration in Register Transfer Level (RTL) Design." FIU Digital Commons, 2012. http://digitalcommons.fiu.edu/etd/888.

Full text
Abstract:
Modern System-on-a-Chip (SoC) designs have grown rapidly in processing power while keeping the hardware circuit size roughly constant. The number of transistors on a chip continues to increase, but current SoC designs may not be able to exploit the potential performance, especially as energy consumption and chip area become two major concerns. Traditional SoC designs usually separate software and hardware, so improving system performance is a complicated task for both software and hardware designers. The aim of this research is to develop a hardware acceleration workflow for software applications, so that system performance can be improved under constraints on energy consumption and on-chip resource costs. The characteristics of software applications can be identified by using profiling tools. Hardware acceleration can yield significant performance improvements for highly mathematical calculations or repeated functions. The performance of SoC systems can then be improved if the hardware acceleration method is used to accelerate the element that incurs performance overheads. The concepts presented in this study can be easily applied to a variety of sophisticated software applications. The contributions of SoC-based hardware acceleration in the hardware-software co-design platform include the following: (1) Software profiling methods are applied to an H.264 Coder-Decoder (CODEC) core. The hotspot function of the target application is identified using critical attributes such as cycles per loop and loop rounds. (2) A hardware acceleration method based on a Field-Programmable Gate Array (FPGA) is used to resolve system bottlenecks and improve system performance. The identified hotspot function is converted to a hardware accelerator and mapped onto the hardware platform. Two types of hardware acceleration methods, central bus design and co-processor design, are implemented for comparison in the proposed architecture. (3) System specifications, such as performance, energy consumption, and resource costs, are measured and analyzed, and the trade-off among these three factors is compared and balanced. Different hardware accelerators are implemented and evaluated based on system requirements. (4) The system verification platform is designed based on an Integrated Circuit (IC) workflow. Hardware optimization techniques are used for higher performance and lower resource costs. Experimental results show that the proposed hardware acceleration workflow for software applications is an efficient technique. The system reaches a 2.8X performance improvement and saves 31.84% of energy consumption with the Bus-IP design; the co-processor design achieves 7.9X performance and saves 75.85% of energy consumption.
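As a rough illustration of the profiling step described above, the sketch below ranks candidate functions by their share of total execution cycles to pick a hotspot for hardware offload. The function names and cycle counts are hypothetical, not taken from the thesis.

```cpp
#include <algorithm>
#include <iostream>
#include <string>
#include <vector>

struct ProfileEntry {
    std::string function;   // profiled function name (hypothetical)
    long long   cycles;     // total cycles spent in the function
    long long   calls;      // number of invocations (loop rounds)
};

int main() {
    // Hypothetical profile of an H.264-like workload.
    std::vector<ProfileEntry> profile = {
        {"motion_estimation", 820'000'000, 12'000},
        {"dct_quant",         310'000'000, 48'000},
        {"entropy_coding",    150'000'000,  9'500},
        {"deblocking_filter",  90'000'000,  9'500},
    };

    long long total = 0;
    for (const auto& e : profile) total += e.cycles;

    // Rank by cycle count; the top entry is the hotspot candidate
    // for conversion into an FPGA accelerator.
    std::sort(profile.begin(), profile.end(),
              [](const ProfileEntry& a, const ProfileEntry& b) {
                  return a.cycles > b.cycles;
              });

    for (const auto& e : profile) {
        double share = 100.0 * static_cast<double>(e.cycles) / total;
        std::cout << e.function << ": " << share << "% of cycles, "
                  << e.cycles / e.calls << " cycles per call\n";
    }
    std::cout << "Hotspot candidate: " << profile.front().function << "\n";
}
```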
APA, Harvard, Vancouver, ISO, and other styles
2

Hämäläinen, J. (Joona). "Register-transfer-level power profiling for system-on-chip power distribution network design and signoff." Master's thesis, University of Oulu, 2019. http://jultika.oulu.fi/Record/nbnfioulu-201905141744.

Full text
Abstract:
This thesis is a study of how register-transfer-level (RTL) power profiling can help the design and signoff of the power distribution network in digital integrated circuits. RTL power profiling is a method that collects RTL power estimation results into a single power profile, which can then be analysed in order to find interesting time windows for power distribution network design and signoff. The thesis starts with a theory part. Complementary metal-oxide-semiconductor (CMOS) inverter power dissipation is studied first. Next, the power distribution network structure and voltage drop problems are introduced. Voltage drop is demonstrated using power distribution network impedance figures. A common on-chip power distribution network structure is introduced, and the power distribution network design flow is outlined. Finally, the function of decoupling capacitors and their impact on power distribution network impedance are explained in detail. The practical part of the thesis contains the RTL power profiling flow details and the power profiling results for one simulation case in one design block. Some methods of improving RTL power estimation accuracy are also discussed, and calibration with extracted parasitics is then used to obtain a new set of power profiling time windows. After the results are presented, overall RTL power estimation accuracy is analysed and the resulting time windows are compared to reference gate-level time windows. The result analysis shows that the resulting time windows match the theory and that RTL power profiling seems to be a promising method for finding time windows for power distribution network design and signoff.
Rekisterisiirtotason tehoprofilointi järjestelmäpiirin tehonsiirtoverkon suunnittelussa ja verifioinnissa. Tiivistelmä. Tässä työssä tutkitaan, miten rekisterisiirtotason (RTL) tehoprofilointi voi auttaa digitaalisten integroitujen piirien tehonsiirtoverkon suunnittelussa ja verifioinnissa. RTL-tehoprofilointi on menetelmä, joka analysoi RTL-tehoestimoinnista saadusta tehokäyrästä hyödyllisiä aikaikkunoita tehonsiirtoverkon suunnitteluun ja verifiointiin. Työ alkaa teoriaosuudella, jonka aluksi selitetään, miten CMOS-invertteri kuluttaa tehoa. Seuravaksi esitellään tehonsiirtoverkon rakenne ja pahimmat tehonsiirtoverkon jännitehäviön aiheuttajat. Jännitehäviötä havainnollistetaan myös piirikaavioiden ja impedanssikäyrien avustuksella. Lisäksi integroidun piirin tehonsiirtoverkon suunnitteluvuo ja yleisin rakenne on esitelty. Lopuksi teoriaosuus käsittelee yksityiskohtaisesti ohituskondensaattoreiden toiminnan ja vaikutuksen tehonsiirtoverkon kokonaisimpedanssiin. Työn kokeellisessa osuudessa esitellään ensin tehoprofiloinnin vuo ja sen jälkeen vuon tulokset yhdelle esimerkkilohkolle yhdessä simulaatioajossa. Lisäksi tässä osiossa käsitellään RTL-tehoestimoinnin tarkkuutta ja tehdään RTL-tehoprofilointi loisimpedansseilla kalibroidulle RTL-mallille. Lopuksi RTL-tehoestimoinnin tuloksia ja saatuja RTL-tehoprofiloinnin aikaikkunoita analysoidaan ja verrataan porttitason mallin tuloksiin. RTL-tehoprofiloinnin tulosten analysointi osoittaa, että saatavat aikaikkunat vastaavat teoriaa ja että RTL-tehoprofilointi näyttää lupaavalta menetelmältä tehosiirtoverkon analysoinnin ja verifioinnin aikaikkunoiden löytämiseen.
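To illustrate the kind of analysis an RTL power profile enables, here is a minimal sketch that slides a fixed-length window over a per-cycle power trace and reports the window with the highest average power, a natural candidate time window for power distribution network signoff. The trace values and window length are made-up assumptions, not data from the thesis.

```cpp
#include <iostream>
#include <vector>

// Returns the start index of the window of length `win` with the
// highest average power in `trace` (one sample per clock cycle).
std::size_t worstWindow(const std::vector<double>& trace, std::size_t win) {
    double sum = 0.0;
    for (std::size_t i = 0; i < win; ++i) sum += trace[i];
    double best = sum;
    std::size_t bestStart = 0;
    for (std::size_t i = win; i < trace.size(); ++i) {
        sum += trace[i] - trace[i - win];      // slide the window by one cycle
        if (sum > best) { best = sum; bestStart = i - win + 1; }
    }
    return bestStart;
}

int main() {
    // Hypothetical per-cycle power estimates in milliwatts.
    std::vector<double> trace = {12, 14, 13, 40, 55, 58, 42, 15, 13, 12, 30, 33};
    const std::size_t win = 4;
    std::size_t s = worstWindow(trace, win);
    std::cout << "Worst " << win << "-cycle window starts at cycle " << s << "\n";
}
```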
APA, Harvard, Vancouver, ISO, and other styles
3

Makris, Georgios. "Transparency-based hierarchical testability analysis and test generation for register transfer level designs /." Diss., Connect to a 24 p. preview or request complete full text in PDF format. Access restricted to UC campuses, 2001. http://wwwlib.umi.com/cr/ucsd/fullcit?p9997571.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

MANSOURI, NAZANIN. "AUTOMATED CORRECTNESS CONDITION GENERATION FOR FORMAL VERIFICATION OF SYNTHESIZED RTL DESIGNS." University of Cincinnati / OhioLINK, 2001. http://rave.ohiolink.edu/etdc/view?acc_num=ucin982064542.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Beckwith, Luke Parkhurst. "An Investigation of Methods to Improve Area and Performance of Hardware Implementations of a Lattice Based Cryptosystem." Thesis, Virginia Tech, 2020. http://hdl.handle.net/10919/100798.

Full text
Abstract:
With continuing research into quantum computing, current public key cryptographic algorithms such as RSA and ECC will become insecure. These algorithms are based on the difficulty of integer factorization or discrete logarithm problems, which are difficult to solve on classical computers but become easy with quantum computers. Because of this threat, government and industry are investigating new public key standards, based on mathematical assumptions that remain secure under quantum computing. This paper investigates methods of improving the area and performance of one of the proposed algorithms for key exchanges, "NewHope." We describe a pipelined FPGA implementation of NewHope512cpa which dramatically increases the throughput for a similar design area. Our pipelined encryption implementation achieves 652.2 Mbps and a 0.088 Mbps/LUT throughput-to-area (TPA) ratio, which are the best known results to date, and achieves an energy efficiency of 0.94 nJ/bit. This represents TPA and energy efficiency improvements of 10.05× and 8.58×, respectively, over a non-pipelined approach. Additionally, we investigate replacing the large SHAKE XOF (hash) function with a lightweight Trivium based PRNG, which reduces the area by 32% and improves energy efficiency by 30% for the pipelined encryption implementation, and which could be considered for future cipher specifications.
Master of Science
Cryptography is prevalent in almost every aspect of our lives. It is used to protect communication, banking information, and online transactions. Current cryptographic protections are built specifically upon public key encryption, which allows two people who have never communicated before to setup a secure communication channel. However, due to the nature of current cryptographic algorithms, the development of quantum computers will make it possible to break the algorithms that secure our communications. Because of this threat, new algorithms based on principles that stand up to quantum computing are being investigated to find a suitable alternative to secure our systems. These algorithms will need to be efficient in order to keep up with the demands of the ever growing internet. This paper investigates four hardware implementations of a proposed quantum-secure algorithm to explore ways to make designs more efficient. The improvements are valuable for high throughput applications, such as a server which must handle a large number of connections at once.
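The throughput-to-area and energy-per-bit figures quoted above are simple ratios; the sketch below recomputes them from throughput, LUT count, and power. The LUT count and power value are back-of-the-envelope assumptions chosen only to reproduce the reported ratios approximately, not numbers taken from the thesis.

```cpp
#include <iostream>

int main() {
    // Hypothetical implementation figures (roughly consistent with the
    // ratios reported in the abstract, but not taken from the thesis).
    const double throughput_mbps = 652.2;   // encryption throughput
    const double luts            = 7400.0;  // assumed LUT usage
    const double power_mw        = 613.0;   // assumed dynamic power

    const double tpa = throughput_mbps / luts;              // Mbps per LUT
    const double nj_per_bit = power_mw / throughput_mbps;   // mW / Mbps = nJ/bit

    std::cout << "Throughput-to-area: " << tpa << " Mbps/LUT\n";
    std::cout << "Energy efficiency:  " << nj_per_bit << " nJ/bit\n";
}
```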
APA, Harvard, Vancouver, ISO, and other styles
6

Makni, Mariem. "Un framework haut niveau pour l'estimation du temps d'exécution, des ressources matérielles et de la consommation d'énergie dans les accélérateurs à base de FPGA." Thesis, Valenciennes, 2018. http://www.theses.fr/2018VALE0042.

Full text
Abstract:
Les systèmes embarqués sur puce (SoC: Systems-on-Chip) sont devenus de plus en plus complexes grâce à l’évolution de la technologie des circuits intégrés. Les applications récentes nécessitent des systèmes à haute performances. Les FPGAs (Field Programmable Gate Arrays) peuvent répondre à ces besoins. On retrouve ces FPGA dans de nombreux domaines d’application : systèmes embarqués, télécommunications, traitement du signal et des images, serveurs de calcul HPC, etc. De nombreux défis sont rencontrés par les concepteurs de ces applications, parmi lesquels : le développement des applications complexes, la vérification du code, la nécessité d’automatiser le processus de conception pour augmenter la productivité et satisfaire la contrainte du « time-to-market ». Récemment, la synthèse de haut niveau (ou HLS) est considérée comme une solution efficace pour résoudre ces défis en utilisant un niveau d’abstraction plus élevé. En effet, cette technique permet de transformer automatiquement une spécification du système en C, C++, systemC en une implémentation au niveau transfert de registre (ou RTL pour Register Transfer Level). Les outils de HLS offrent un espace de solutions avec un grand nombre d’optimisations possibles au niveau du code comme l’utilisation du dépliage de boucles, le flot de données et partitionnement des tableaux, etc. Le concepteur doit explorer toutes ces alternatives et mesurer les performances obtenues en termes de temps d’exécution, de ressources matérielles, et de consommation d’´energie. Dans ce travail de thèse, nous avons utilisé les accélérateurs matériels à base de FPGAs et nous avons développé l’outil HAPE. Ce dernier permet d’aider les concepteurs à estimer la performance, la surface et l’énergie pour diverses configurations au niveau du code source. L’approche proposée comprend quatre contributions principales : (i) Nous avons proposé un modèle analytique de haut niveau pour estimer le temps de communications et le temps d’exécution total (ii) nous avons proposé un modèle analytique pour estimer les différentes ressources du FPGAs (DSPs, LUTs, FFs, BRAMs), (iii) nous avons proposé un modèle analytique pour estimer la consommation d’énergie basé sur l’utilisation du matériel (BRAMs, FFs, LUTs, etc) en explorant l’espace de solutions pour les différentes optimisations, (iv) Nous avons enfin proposé un environnement de conception (HAPE) permettant l’exploration des 3 critères : temps, ressources matérielles et consommation de puissance. L’approche proposée dans cette thèse est basée sur une analyse dynamique du code exécutée pour extraire les dépendances des données. Cette approche augmente la précision dans l’estimation du : temps de communication, de la consommation des ressources matérielles et de la consommation d’énergie dans les accélérateurs à base de FPGA. HAPE permet d’estimer ces paramètres avec une erreur inférieure à 5% par rapport aux implémentations RTL
In recent years, the complexity of system-on-chip (SoC) designs has increased dramatically. As a result, the growing demand for high performance and minimal power/area costs in embedded streaming applications calls for newly emerging architectures. The trend towards FPGA-based accelerators offers great potential for the computational power and performance required by diverse applications. The advantages of such architectures come from many sources; the most important stems from more efficient adaptation to various application needs. In fact, many compute-intensive applications demand different levels of processing capability and different energy-consumption trade-offs, which may be satisfied by using FPGA-based accelerators. Current research in performance, area and power analysis relies on register-transfer level (RTL) based synthesis flows to produce accurate estimates. However, the complex hardware programming model (Verilog or VHDL) makes FPGA development a time-consuming process even as time-to-market constraints continue to tighten. Such techniques not only require advanced hardware expertise and time but are also difficult to use, making large design space exploration and time-to-market costly. High-Level Synthesis (HLS) technology has emerged in the last few years as a solution to these problems, managing design complexity at a more abstract level. This technique aims to bridge the gap between the traditional RTL design process and the ever-increasing complexity of applications. The important advantage of HLS tools is the ability to automatically generate RTL implementations from high-level specifications (e.g., C/C++/SystemC). The HLS tools provide various optimization pragmas such as loop unrolling, loop pipelining, dataflow, array partitioning, etc. Unfortunately, the large design space resulting from the various combinations of pragmas makes exhaustive design space exploration prohibitively time-consuming with HLS tools. In addition, to thoroughly evaluate such architectures, designers must perform large design space exploration to understand the trade-offs across the entire system, which is currently infeasible due to the lack of a fast simulation infrastructure for FPGA-based accelerators. Hence, there is a clear need for a pre-RTL, high-level framework that enables rapid design space exploration for FPGA-based accelerators
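As a flavour of the pre-RTL analytical modelling described above, the sketch below estimates loop latency and DSP usage for a simple pipelined, partially unrolled loop from its trip count, unroll factor, initiation interval, and pipeline depth. The formula and the per-iteration DSP cost are simplified assumptions for illustration; they are not the HAPE models themselves.

```cpp
#include <cmath>
#include <iostream>

struct LoopConfig {
    long   tripCount;     // iterations of the loop
    int    unroll;        // unroll factor (pragma)
    int    ii;            // initiation interval of the pipelined body
    int    depth;         // pipeline depth (latency of one body instance)
    int    dspPerIter;    // DSP blocks used by one loop body instance
};

// Very rough first-order estimates, in cycles and DSP blocks.
long estimateCycles(const LoopConfig& c) {
    long bodies = static_cast<long>(std::ceil(double(c.tripCount) / c.unroll));
    return (bodies - 1) * c.ii + c.depth;
}

int estimateDSPs(const LoopConfig& c) {
    return c.unroll * c.dspPerIter;   // unrolled copies run in parallel
}

int main() {
    LoopConfig cfg{1024, 4, 1, 12, 3};   // hypothetical kernel
    std::cout << "Estimated latency: " << estimateCycles(cfg) << " cycles\n";
    std::cout << "Estimated DSPs:    " << estimateDSPs(cfg) << "\n";
}
```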
APA, Harvard, Vancouver, ISO, and other styles
7

Gent, Kelson Andrew. "High Quality Test Generation at the Register Transfer Level." Diss., Virginia Tech, 2016. http://hdl.handle.net/10919/73544.

Full text
Abstract:
Integrated circuits, from general-purpose microprocessors to application-specific designs (ASICs), have become ubiquitous in modern technology. As our applications have become more complex, so too have the circuits used to drive them. Moore's law predicts that the number of transistors on a chip doubles every 18-24 months. This explosion in circuit size has also led to significant growth in the testing effort required to verify a design. In order to cope with the required effort, the testing problem must be approached from several different design levels. In particular, exploiting the Register Transfer Level for test generation allows for the use of relational information unavailable at the structural level. This dissertation demonstrates several novel methods for generating tests applicable to both structural and functional testing. These methods allow significantly faster test generation for functional tests and provide high levels of fault coverage during structural test, typically outperforming previous state-of-the-art methods. First, a semi-formal method for functional verification is presented. The approach utilizes an SMT-based bounded model checker in combination with an ant colony optimization based search engine to generate tests with high branch coverage. Additionally, the method is used to identify unreachable code paths within the RTL. Compared to previous methods, the experimental results show increased levels of coverage and improved performance. Then, an ant colony optimization algorithm is used to generate high quality tests for fault coverage. By utilizing co-simulation at the RTL and gate level, tests are generated for both levels simultaneously. This method is shown to reach previously unseen levels of fault coverage with significantly lower computational effort. Additionally, the engine is shown to be effective for behavioral-level test generation. Next, an abstraction method for functional test generation is presented, utilizing program slicing and data mining. The abstraction allows us to generate high quality test vectors that navigate extremely narrow paths in the state space. The method reaches previously unseen levels of coverage and is able to justify very difficult-to-reach control states within the circuit. Then, a new method of fault grading test vectors is introduced based on the concept of operator coverage. Operator coverage measures the behavioral coverage of each synthesizable statement in the RTL by creating a set of coverage points for each arithmetic and logical operator. The metric shows a strong relationship with fault coverage for coverage forecasting and vector comparison. Additionally, it provides significant reductions in computation time compared to other vector grading methods. Finally, this metric is utilized to create a framework for automatic test pattern generation for defect coverage at the RTL. This framework provides the unique ability to automatically generate high quality test vectors for functional and defect-level testing at the RTL without the need for synthesis. In summary, we present a set of tools for the analysis and test of circuits at the RTL. By leveraging information available at the HDL level, we can generate tests that exercise properties that are extremely difficult to extract at the gate level.
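A minimal sketch of the operator-coverage idea mentioned above: each RTL operator instance gets a small set of coverage points that are marked as hit when simulation observes the corresponding behavior, and the metric is the fraction of points hit. The specific choice of points here (output toggles and extreme values) is an assumption for illustration, not the exact point set defined in the dissertation.

```cpp
#include <cstdint>
#include <iostream>
#include <vector>

// Coverage points recorded for one arithmetic/logical operator instance.
struct OperatorCoverage {
    bool outputZero    = false;  // result == 0 observed
    bool outputNonzero = false;  // result != 0 observed
    bool outputMax     = false;  // all-ones result observed
    bool outputToggled = false;  // result changed between consecutive samples

    uint32_t last = 0;
    bool     havePrev = false;

    void sample(uint32_t result, uint32_t allOnes) {
        if (result == 0) outputZero = true; else outputNonzero = true;
        if (result == allOnes) outputMax = true;
        if (havePrev && result != last) outputToggled = true;
        last = result;
        havePrev = true;
    }
    int hit() const {
        return outputZero + outputNonzero + outputMax + outputToggled;
    }
};

int main() {
    OperatorCoverage addCov;                 // e.g., an 8-bit adder in the RTL
    std::vector<uint32_t> observed = {0, 17, 255, 255, 3};
    for (uint32_t r : observed) addCov.sample(r, 0xFF);
    std::cout << "Operator coverage: " << addCov.hit() << "/4 points\n";
}
```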
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
8

Blumer, Aric David. "Register Transfer Level Simulation Acceleration via Hardware/Software Process Migration." Diss., Virginia Tech, 2007. http://hdl.handle.net/10919/29380.

Full text
Abstract:
The run-time reconfiguration of Field Programmable Gate Arrays (FPGAs) opens new avenues to hardware reuse. Through the use of process migration between hardware and software, an FPGA provides a parallel execution cache. Busy processes can be migrated into hardware-based, parallel processors, and idle processes can be migrated out increasing the utilization of the hardware. The application of hardware/software process migration to the acceleration of Register Transfer Level (RTL) circuit simulation is developed and analyzed. RTL code can exhibit a form of locality of reference such that executing processes tend to be executed again. This property is termed executive temporal locality, and it can be exploited by migration systems to accelerate RTL simulation. In this dissertation, process migration is first formally modeled using Finite State Machines (FSMs). Upon FSMs are built programs, processes, migration realms, and the migration of process state within a realm. From this model, a taxonomy of migration realms is developed. Second, process migration is applied to the RTL simulation of digital circuits. The canonical form of an RTL process is defined, and transformations of HDL code are justified and demonstrated. These transformations allow a simulator to identify basic active units within the simulation and combine them to balance the load across a set of processors. Through the use of input monitors, executive locality of reference is identified and demonstrated on a set of six RTL designs. Finally, the implementation of a migration system is described which utilizes Virtual Machines (VMs) and Real Machines (RMs) in existing FPGAs. Empirical and algorithmic models are developed from the data collected from the implementation to evaluate the effect of optimizations and migration algorithms.
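The "parallel execution cache" idea above can be illustrated with a toy policy: keep a count of recent activations per process and migrate the most active ones into the limited set of hardware slots, leaving the rest in software. This is a generic most-frequently-used sketch under assumptions of my own, not the migration algorithms developed in the dissertation.

```cpp
#include <algorithm>
#include <iostream>
#include <map>
#include <string>
#include <vector>

int main() {
    const std::size_t hwSlots = 2;   // parallel processors available in the FPGA

    // Hypothetical activation counts of simulation processes over a recent window.
    std::map<std::string, int> activity = {
        {"alu_proc", 940}, {"uart_proc", 3}, {"cache_ctrl", 512}, {"dbg_proc", 1}};

    // Pick the busiest processes for hardware residence; the rest stay in software.
    std::vector<std::pair<std::string, int>> ranked(activity.begin(), activity.end());
    std::sort(ranked.begin(), ranked.end(),
              [](const auto& a, const auto& b) { return a.second > b.second; });

    std::cout << "Migrate to hardware:";
    for (std::size_t i = 0; i < hwSlots && i < ranked.size(); ++i)
        std::cout << ' ' << ranked[i].first;
    std::cout << "\nKeep in software:";
    for (std::size_t i = hwSlots; i < ranked.size(); ++i)
        std::cout << ' ' << ranked[i].first;
    std::cout << '\n';
}
```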
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
9

Hernandez, Anna C. "Implementing and Comparing Image Convolution Methods on an FPGA at the Register-Transfer Level." Digital WPI, 2019. https://digitalcommons.wpi.edu/etd-theses/1340.

Full text
Abstract:
Whether it is capturing a car's license plate on the highway or detecting someone's facial features to tag friends, computer vision and image processing have found their way into many facets of our lives. Image and video processing algorithms are ultimately tailored toward one of two goals: to analyze data and produce output in as close to real time as possible, or to take in and operate on large swaths of information offline. Image convolution is a mathematical method with which we can filter an image to highlight or clarify desired information. The most popular uses of image convolution accentuate edges, corners, and facial features for analysis. The goal of this project was to investigate various image convolution algorithms and compare them in terms of hardware usage, power utilization, and ability to handle substantial amounts of data in a reasonable amount of time. The algorithms were designed, simulated, and synthesized for the Zynq-7000 FPGA, selected both for its flexibility and its low power consumption.
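For reference, a 2-D image convolution of the kind compared in this thesis reduces to the loop nest below, shown here as a plain C++ sketch with a 3x3 edge-detection kernel; the hardware versions differ in how this computation is buffered and parallelized, and this software model is only an illustration.

```cpp
#include <array>
#include <iostream>
#include <vector>

// Convolve a grayscale image with a 3x3 kernel (border pixels left unchanged).
std::vector<int> convolve3x3(const std::vector<int>& img, int w, int h,
                             const std::array<int, 9>& k) {
    std::vector<int> out(img);
    for (int y = 1; y < h - 1; ++y)
        for (int x = 1; x < w - 1; ++x) {
            int acc = 0;
            for (int ky = -1; ky <= 1; ++ky)
                for (int kx = -1; kx <= 1; ++kx)
                    acc += img[(y + ky) * w + (x + kx)] * k[(ky + 1) * 3 + (kx + 1)];
            out[y * w + x] = acc;
        }
    return out;
}

int main() {
    const int w = 4, h = 4;
    std::vector<int> img = {10, 10, 10, 10,
                            10, 50, 50, 10,
                            10, 50, 50, 10,
                            10, 10, 10, 10};
    // Laplacian-style edge kernel.
    std::array<int, 9> kernel = {0, -1, 0, -1, 4, -1, 0, -1, 0};
    auto out = convolve3x3(img, w, h, kernel);
    std::cout << "Center response: " << out[1 * w + 1] << "\n";
}
```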
APA, Harvard, Vancouver, ISO, and other styles
10

Haataja, M. (Miikka). "Register-transfer level power estimation and reduction methodologies of digital system-on-chip building blocks." Master's thesis, University of Oulu, 2016. http://urn.fi/URN:NBN:fi:oulu-201603231342.

Full text
Abstract:
This thesis is a study of register-transfer level power estimation and reduction methodologies for digital system-on-chip building blocks. In the theory section, the components of power dissipation in current circuit technology are explained in detail, the commonly implemented register-transfer level power estimation methodologies are classified and explained, and finally, power reduction methods commonly used in system-on-chip development are presented. In the implementation part of the thesis, register-transfer level power estimation and power reduction methodologies using a state-of-the-art commercial register-transfer level power tool are presented. Results obtained with these methodologies are analyzed for three different system-on-chip building blocks, and experimental results on power estimation accuracy and power saving estimates are presented. The average deviation between register-transfer level and gate-level power estimation was 11%, and the potential total power saving estimates were between 10% and 29%.
Tässä työssä tutkitaan rekisterinsiirtotason tehonkulutuksen arviointi- ja vähennysmenetelmiä digitaalisille järjestelmäpiirilohkoille. Teoriaosuudessa esitetään tehonkulutuksen eri komponentit nykyiselle piiriteknologialle, luokitellaan yleisimmät rekisterinsiirtotasolla käytettävät tehonkulutuksen arviointimenetelmät sekä kuvataan yleisesti digitaalisten järjestelmäpiirien suunnittelussa käytettyjä tehonvähennysmenetelmiä. Kokeellisessa osassa kuvataan rekisterinsiirtotason tehonkulutuksen arviointi- ja vähennysmenetelmä käyttäen kaupallista rekisterinsiirtotason tehotyökalua. Menetelmiä testataan kolmella digitaalisella järjestelmäpiirilohkolla ja saatuja tuloksia analysoidaan tehonkulutuksen arvion tarkkuuden ja tehonvähennyksen arvioiden kannalta. Näiden kolmen järjestelmäpiirilohkon tulokset tehonkulutuksen ja tehonvähennyksen arviosta on esitetty. Rekisterisiirtotason tehonarviointi poikkesi keskimäärin 11 % porttitason vertailuarviosta, ja potentiaaliset tehonvähennysarviot olivat väliltä 10–29 %
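The headline accuracy number in this abstract (an average 11% deviation between RTL and gate-level estimates) is just a mean relative error; the sketch below shows the computation on made-up per-block values, which are assumptions and not the thesis data.

```cpp
#include <cmath>
#include <iostream>
#include <vector>

int main() {
    // Hypothetical (RTL estimate, gate-level reference) power pairs in mW.
    std::vector<std::pair<double, double>> blocks = {
        {120.0, 134.0}, {48.0, 44.5}, {210.0, 231.0}};

    double sum = 0.0;
    for (const auto& [rtl, gate] : blocks)
        sum += std::fabs(rtl - gate) / gate;      // relative deviation per block

    std::cout << "Average RTL vs gate-level deviation: "
              << 100.0 * sum / blocks.size() << "%\n";
}
```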
APA, Harvard, Vancouver, ISO, and other styles
11

Phillips, Jonathan D. "A C to Register Transfer Level Algorithm Using Structured Circuit Templates: A Case Study with Simulated Annealing." DigitalCommons@USU, 2008. https://digitalcommons.usu.edu/etd/215.

Full text
Abstract:
A tool flow is presented for deriving simulated annealing accelerator circuits on a field programmable gate array (FPGA) from C source code by exploring architecture solutions that conform to a preset template through scheduling and mapping algorithms. A case study carried out on simulated annealing-based Autonomous Mission Planning and Scheduling (AMPS) software used for autonomous spacecraft systems is explained. The goal of the research is an automated method for the derivation of a hardware design that maximizes performance while minimizing the FPGA footprint. Results obtained are compared with a peer C to register transfer level (RTL) logic tool, a state-of-the-art space-borne embedded processor and a commodity desktop processor for a variety of problems. The automatically derived hardware circuits consistently outperform other methods by one or more orders of magnitude.
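Since the accelerated application (AMPS) is built around simulated annealing, the generic kernel below may help as a point of reference: it is the standard accept-or-reject loop with a geometric cooling schedule, minimizing a toy one-dimensional cost function. It is a textbook sketch, not the AMPS code or the circuit the tool derives.

```cpp
#include <cmath>
#include <iostream>
#include <random>

// Toy cost function to minimize; a real planner would score a schedule here.
static double cost(double x) { return (x - 3.0) * (x - 3.0) + 2.0; }

int main() {
    std::mt19937 rng(42);
    std::uniform_real_distribution<double> uni(0.0, 1.0);
    std::normal_distribution<double> step(0.0, 0.5);

    double x = 10.0, best = x;
    double temp = 5.0;
    for (int iter = 0; iter < 2000; ++iter) {
        double cand = x + step(rng);                 // propose a neighboring solution
        double delta = cost(cand) - cost(x);
        // Accept improvements always, worse moves with Boltzmann probability.
        if (delta < 0.0 || uni(rng) < std::exp(-delta / temp)) x = cand;
        if (cost(x) < cost(best)) best = x;
        temp *= 0.995;                               // geometric cooling schedule
    }
    std::cout << "Best x = " << best << ", cost = " << cost(best) << "\n";
}
```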
APA, Harvard, Vancouver, ISO, and other styles
12

Tonetto, Rafael Billig. "A platform to evaluate the fault sensitivity of superscalar processors." reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2017. http://hdl.handle.net/10183/169905.

Full text
Abstract:
Aggressive transistor scaling, which has led to reductions in the operating voltage, has been providing enormous benefits in terms of computational power while keeping energy consumption at an acceptable level. However, as feature sizes and voltages shrink, susceptibility to faults tends to increase and the importance of fault evaluations grows. Superscalar processors, which dominate the market today, are a significant example of systems that benefit from these technological improvements and are more susceptible to errors. Alongside this, there are several fault injection methods, which are an efficient means of evaluating the resilience of these processors. However, traditional fault injection methods, such as hardware-based techniques, require the processor to be physically implemented before tests can be conducted, and do not provide reasonable levels of controllability. On the other hand, techniques based on software simulators offer high levels of controllability. Yet while high-level software simulators (which are fast) can lead to an incomplete, or even misleading, evaluation of system resilience, since they do not model the internal hardware components (such as the pipeline registers), low-level software simulators are extremely slow and are rarely available at the RTL (Register-Transfer Level). Considering this scenario, we propose a platform that fills the gap between the hardware and software approaches for evaluating faults in superscalar processors: it is fast, has high controllability, is available in software, is flexible, and, most importantly, models the processor at the RTL. The tool was implemented on top of the platform used to generate The Berkeley Out-of-Order Machine (BOOM) superscalar processor, which is highly scalable and parameterizable. This property allowed us to experiment with three different processor architectures: single-, dual-, and quad-issue, and, by analyzing how fault resilience is influenced by the complexity of different processors, we used these processors to validate our tool.
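A typical simulation-based fault injection campaign of the kind this platform enables boils down to flipping one randomly chosen register bit at a randomly chosen cycle and observing whether the architectural result changes. The sketch below shows that mechanism on a toy register-file model; the register widths, cycle budget, and workload are assumptions for illustration, not details of the BOOM-based platform.

```cpp
#include <cstdint>
#include <iostream>
#include <random>
#include <vector>

// Toy "workload": accumulate registers over a number of cycles.
uint64_t run(std::vector<uint64_t> regs, int cycles,
             int faultCycle = -1, int faultReg = 0, int faultBit = 0) {
    uint64_t acc = 0;
    for (int c = 0; c < cycles; ++c) {
        if (c == faultCycle)
            regs[faultReg] ^= (1ULL << faultBit);   // inject a single bit flip
        acc += regs[c % regs.size()];
    }
    return acc;
}

int main() {
    std::vector<uint64_t> regs = {7, 13, 21, 42};   // hypothetical register file
    const int cycles = 1000;
    const uint64_t golden = run(regs, cycles);

    std::mt19937 rng(1);
    int changed = 0, campaigns = 200;
    for (int i = 0; i < campaigns; ++i) {
        int fc = std::uniform_int_distribution<int>(0, cycles - 1)(rng);
        int fr = std::uniform_int_distribution<int>(0, int(regs.size()) - 1)(rng);
        int fb = std::uniform_int_distribution<int>(0, 63)(rng);
        if (run(regs, cycles, fc, fr, fb) != golden) ++changed;
    }
    std::cout << changed << " of " << campaigns
              << " injected faults changed the result\n";
}
```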
APA, Harvard, Vancouver, ISO, and other styles
13

Ponraj, Sathishkumar. "Stimulus-free RT level power model using belief propagation." [Tampa, Fla.] : University of South Florida, 2004. http://purl.fcla.edu/fcla/etd/SFE0000531.

Full text
APA, Harvard, Vancouver, ISO, and other styles
14

Samala, Harikrishna. "Methodology to Derive Resource Aware Context Adaptable Architectures for Field Programmable Gate Arrays." DigitalCommons@USU, 2009. https://digitalcommons.usu.edu/etd/484.

Full text
Abstract:
The design of a common architecture that can support multiple data-flow patterns (or contexts) embedded in complex control flow structures, in applications like multimedia processing, is particularly challenging when the target platform is a Field Programmable Gate Array (FPGA) with a heterogeneous mixture of device primitives. This thesis presents scheduling and mapping algorithms that use a novel area cost metric to generate resource aware context adaptable architectures. Results of a rigorous analysis of the methodology on multiple test cases are presented. Results are compared against published techniques and show an area savings and execution time savings of 46% each.
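The "novel area cost metric" is described only at a high level in the abstract; a common way to compare architectures on a heterogeneous FPGA is to fold the different primitive counts into one number with device-specific weights, as in the sketch below. The weights and counts here are hypothetical and are not the metric defined in the thesis.

```cpp
#include <iostream>

struct ResourceUse {
    int luts, ffs, dsps, brams;   // primitives consumed by a candidate architecture
};

// Single scalar area cost: weight each primitive by an assumed relative silicon cost.
double areaCost(const ResourceUse& r) {
    return 1.0 * r.luts + 0.5 * r.ffs + 100.0 * r.dsps + 200.0 * r.brams;
}

int main() {
    ResourceUse archA{5200, 4100, 12, 6};   // hypothetical candidate architectures
    ResourceUse archB{7400, 6900,  4, 8};
    std::cout << "Cost A: " << areaCost(archA)
              << "  Cost B: " << areaCost(archB) << "\n";
}
```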
APA, Harvard, Vancouver, ISO, and other styles
15

Hylla, Kai [Verfasser], Wolfgang [Akademischer Betreuer] Nebel, and Wolfgang [Akademischer Betreuer] Rosenstiel. "Bridging the gap between precise RT-level power/timing estimation and fast high-level simulation : a method for automatically identifying and characterising combinational macros in synchronous sequential systems at register-transfer level and subsequent executable high-level model generation with respect to non-functional properties / Kai Hylla. Betreuer: Wolfgang Nebel ; Wolfgang Rosenstiel." Oldenburg : BIS der Universität Oldenburg, 2014. http://d-nb.info/1050816560/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
16

Willey, Landon Clark. "A Systems-Level Approach to the Design, Evaluation, and Optimization of Electrified Transportation Networks Using Agent-Based Modeling." BYU ScholarsArchive, 2020. https://scholarsarchive.byu.edu/etd/8532.

Full text
Abstract:
Rising concerns related to the effects of traffic congestion have led to the search for alternative transportation solutions. Advances in battery technology have resulted in an increase of electric vehicles (EVs), which serve to reduce the impact of many of the negative consequences of congestion, including pollution and the cost of wasted fuel. Furthermore, the energy-efficiency and quiet operation of electric motors have made feasible concepts such as Urban Air Mobility (UAM), in which electric aircraft transport passengers in dense urban areas prone to severe traffic slowdowns. Electrified transportation may be the solution needed to combat urban gridlock, but many logistical questions related to the design and operation of the resultant transportation networks remain to be answered. This research begins by examining the near-term effects of EV charging networks. Stationary plug-in methods have been the traditional approach to recharge electric ground vehicles; however, dynamic charging technologies that can charge vehicles while they are in motion have recently been introduced that have the potential to eliminate the inconvenience of long charging wait times and the high cost of large batteries. Using an agent-based model verified with traffic data, different network designs incorporating these dynamic chargers are evaluated based on the predicted benefit to EV drivers. A genetic optimization is designed to optimally locate the chargers. Heavily-used highways are found to be much more effective than arterial roads as locations for these chargers, even when installation cost is taken into consideration. This work also explores the potential long-term effects of electrified transportation on urban congestion by examining the implementation of a UAM system. Interdependencies between potential electric air vehicle ranges and speeds are explored in conjunction with desired network structure and size in three different regions of the United States. A method is developed to take all these considerations into account, thus allowing for the creation of a network optimized for UAM operations when vehicle or topological constraints are present. Because the optimization problem is NP-hard, five heuristic algorithms are developed to find potential solutions with acceptable computation times, and are found to be within 10% of the optimal value for the test cases explored. The results from this exploration are used in a second agent-based transportation model that analyzes operational parameters associated with UAM networks, such as service strategy and dispatch frequency, in addition to the considerations associated with network design. General trends between the effectiveness of UAM networks and the various factors explored are identified and presented.
APA, Harvard, Vancouver, ISO, and other styles
17

Asef, Pedram. "Multi-level-objective design optimization of permanent magnet synchronous wind generator and solar photovoltaic system for an urban environment application." Doctoral thesis, Universitat Politècnica de Catalunya, 2019. http://hdl.handle.net/10803/665396.

Full text
Abstract:
This Ph.D. thesis illustrates a novel study on the analytical and numerical design optimization of radial-flux permanent magnet synchronous wind generators (PMSGs) for small power generation in an urban area, in which an outer rotor topology with a closed-slot stator is employed. The electromagnetic advantages of a double-layer fractional concentration non-overlapping winding configuration are discussed. The analytical behavior of a PMSG is studied in detail; especially for magnetic flux density distribution, time and space harmonics, flux linkages, back-EMF, cogging torque, torque, output power, efficiency, and iron losses computation. The electromagnetic behavior of PMSGs are evaluated when a number of various Halbach array magnetization topologies are presented to maximize the generator’s performance. In addition, the thermal behavior of the PMSG is improved using an innovative natural air-cooling system for rated speed and higher to decrease the machine’s heat mainly at the stator teeth. The analytical investigation is verified via 2-D and 3-D finite element analysis along with a good experimental agreement. Design optimization of electrical machines plays the deterministic role in performance improvements such as the magnetization pattern, output power, and efficiency maximization, as well as losses and material cost minimization. This dissertation proposes a novel multi-objective design optimization technique using a dual-level response surface methodology (D-RSM) and Booth’s algorithm (coupled to a memetic algorithm known as simulated annealing) to maximize the output power and minimize material cost through sizing optimization. Additionally, the efficiency maximization by D-RSM is investigated while the PMSG and drive system are on duty as the whole. It is shown that a better fit is available when utilizing modern design functions such as mixed-resolution central composite (MR-CCD) and mixed-resolution robust (MR-RD), due to controllable and uncontrollable design treatments, and also a Window-Zoom-in approach. The proposed design optimization was verified by an experimental investigation. Additionally, there are several novel studies on vibro-acoustic design optimization of the PMSGs with considering variable speed analysis and natural frequencies using two techniques to minimize the magnetic noise and vibrations. Photovoltaic system design optimization considered of 3-D modeling of an innovative application-oriented urban environment structure, a smart tree for small power generation. The horizon shading is modeled as a broken line superimposed onto the sun path diagram, which can hold any number of height/azimuth points in this original study. The horizon profile is designed for a specific location on the Barcelona coast in Spain and the meteorological data regarding the location of the project was also considered. Furthermore, the input weather data is observed and stored for the whole year (in 2016). These data include, ambient temperature, module’s temperature (open and closed circuits tests), and shading average rate. A novel Pareto-based 3-D analysis was used to identify complete and partial shading of the photovoltaic system. A significant parameter for a photovoltaic (PV) module operation is the nominal operating cell temperature (NOCT). In this research, a glass/glass module has been referenced to the environment based on IEC61215 via a closed-circuit and a resistive load to ensure the module operates at the maximum power point. 
The proposed technique in this comparative study attempts to minimize the losses in a certain area with improved output energy without compromising the overall efficiency of the system. A Maximum Power Point Track (MPPT) controller is enhanced by utilizing an advanced perturb & observe (P&O) algorithm to maintain the PV operating point at its maximum output under different temperatures and insolation. The most cost-effective design of the PV module is achieved via optimizing installation parameters such as tilt angle, pitch, and shading to improve the energy yield. The variation of un-replicated factorials using a Window-Zoom-in approach is examined to determine the parameter settings and to check the suitability of the design. An experimental investigation was carried out to verify the 3-D shading analysis and NOCT technique for an open-circuit and grid-connected PV module.
Esta tesis muestra un novedoso estudio referente al diseño optimizado de forma analítica y numérica de un generador síncrono de imanes permanentes (PMSGs) para una aplicación de microgeneración eólica en un entorno urbano, donde se ha escogido una topología de rotor exterior con un estator de ranuras cerradas. Las ventajas electromagnéticas de los arrollamientos fraccionarios de doble capa, con bobinas concentradas se discuten ampliamente en la parte inicial del diseño del mismo, así como las características de distribución de la inducción, los armónicos espaciales y temporales, la fem generada, el par de cogging así como las características de salida (par, potencia generada, la eficiencia y la distribución y cálculo de las pérdidas en el hierro que son analizadas detalladamente) Posteriormente se evalúan diferentes configuraciones de estructuras de imanes con magnetización Halbach con el fin de maximizar las prestaciones del generador. Adicionalmente se analiza la distribución de temperaturas y su mejora mediante el uso de un novedoso diseño mediante el uso de ventilación natural para velocidades próximas a la nominal y superiores con el fin de disminuir la temperatura de la máquina, principalmente en el diente estatórico. El cálculo analítico se completa mediante simulaciones 2D y 3D utilizando el método de los elementos finitos así como mediante diversas experiencias que validan los modelos y aproximaciones realizadas. Posteriormente se desarrollan algoritmos de optimización aplicados a variables tales como el tipo de magnetización, la potencia de salida, la eficiencia así como la minimización de las pérdidas y el coste de los materiales empleados. En la tesis se proponen un nuevo diseño optimizado basado en una metodología multinivel usando la metodología de superficie de respuesta (D-RSM) y un algoritmo de Booth (maximizando la potencia de salida y minimizando el coste de material empleado) Adicionalmente se investiga la maximización de la eficiencia del generador trabajando conjuntamente con el circuito de salida acoplado. El algoritmo utilizado queda validado mediante la experimentación desarrollada conjuntamente con el mismo. Adicionalmente, se han realizado diversos estudios vibroacústicos trabajando a velocidad variable usando dos técnicas diferentes para reducir el ruido generado y las vibraciones producidas. Posteriormente se considera un sistema fotovoltaico orientado a aplicaciones urbanas que hemos llamado “Smart tree for small power generation” y que consiste en un poste con un generador eólico en la parte superior juntamente con uno o más paneles fotovoltaicos. Este sistema se ha modelado usando metodologías en 3D. Se ha considerado el efecto de las sombras proyectadas por los diversos elementos usando datos meteorológicos y de irradiación solar de la propia ciudad de Barcelona. Usando una metodología basada en un análisis 3D y Pareto se consigue identificar completamente el sistema fotovoltaico; para este sistema se considera la temperatura de la célula fotovoltaica y la carga conectada con el fin de generar un algoritmo de control que permita obtener el punto de trabajo de máxima potencia (MPPT) comprobándose posteriormente el funcionamiento del algoritmo para diversas situaciones de funcionamiento del sistema
La tesis desenvolupa un nou estudi per al disseny optimitzat, analític i numèric, d’un generador síncron d’imants permanents (PMSGs) per a una aplicació de microgeneració eòlica en aplicacions urbanes, on s’ha escollit una configuració amb rotor exterior i estator amb ranures tancades. Es discuteixen de forma extensa els avantatges electromagnètics dels bobinats fraccionaris de doble capa així com les característiques resultats vers la distribució de les induccions, els harmònics espacials i temporals, la fem generada, el parell de cogging i les característiques de sortida (parell, potencia, eficiència i pèrdues) Tanmateix s’afegeix l’estudi de diferents estructures Halbach per als imants permanents a fi i efecte de maximitzar les característiques del generador. Tot seguit s’analitza la distribució de temperatures i la seva reducció mitjançant la utilització d’una nova metodologia basada en la ventilació natural. Els càlculs analítics es complementen mitjançant anàlisi en 2 i 3 dimensions utilitzant elements finits i diverses experiències que validen els models i aproximacions emprades. Una vegada fixada la geometria inicial es desenvolupen algoritmes d’optimització per a diverses variables (tipus de magnetització dels imants, potencia de sortida, eficiència, minimització de pèrdues i cost dels materials) La tesi planteja una optimització multinivell emprant la metodologia de superfície de resposta i un algoritme de Booth; a més, es realitza la optimització considerant el circuit de sortida. L’algoritme resta validat per la experimentació realitzada. Finalment, s’han considerat diversos estudis vibroacústic treballant a velocitat variable, emprant dues tècniques diferents per a reduir el soroll i les vibracions desenvolupades. Per a finalitzar l’estudi es considera un sistema format per una turbina eòlica instal·lada sobre un pal de llum autònom, els panells fotovoltaics corresponents i el sistema de càrrega. Per a modelitzar l’efecte de l’ombrejat s’ha emprat un model en 3D i les dades del temps i d’irradiació solar de la ciutat de Barcelona. El model s’ha identificat completament i s’ha generat un algoritme de control que considera, a més, l’efecte de la temperatura de la cèl·lula fotovoltaica y la càrrega connectada al sistema per tal d’aconseguir el seguiment del punt de màxima potencia
APA, Harvard, Vancouver, ISO, and other styles
18

OLIVEIRA, Helder Fernando de Araújo. "Uma abordagem para estimação do consumo de energia em modelos de simulação distribuída." Universidade Federal de Campina Grande, 2015. http://dspace.sti.ufcg.edu.br:8080/jspui/handle/riufcg/587.

Full text
Abstract:
Consumo de energia é um grande desafio durante o projeto de um SoC (System-on-a-Chip). Dependendo do projeto, para garantir maior precisão na estimação do consumo de energia, pode ser necessário estimar o consumo de energia do sistema ou parte dele utilizando diferentes elementos: diferentes abordagens de estimação, ferramentas ou, até mesmo, modelos descritos em variadas linguagens e/ou níveis de abstração. Porém, consiste em um desafio incorporar tais elementos para criação de um ambiente de simulação distribuído e heterogêneo, o qual permita que estes se comuniquem e troquem informações de modo sincronizado. Diante do exposto, a presente pesquisa tem como objetivo desenvolver uma abordagem, utilizando-se High Level Architecture (HLA), a fim de permitir a criação de um ambiente de simulação distribuído e heterogêneo, composto por diferentes ferramentas e modelos. Estes modelos podem ser descritos em diversas linguagens e/ou níveis de abstração, como também podem utilizar diferentes abordagens a estimação do consumo de energia. O uso da HLA permite que os elementos que compõem este ambiente heterogêneo possam ser simulados de maneira sincronizada e distribuída. A abordagem deve proporcionar a coleta e o agrupamento de dados de estimação de consumo de energia de modo centralizado. Para realização dos estudos de caso, foi utilizado um benchmark composto por um conjunto escalável de MPSoC (MultiProcessor System-on-Chip) descrito em C++/SystemC e o arcabouço Ptolemy. Um projeto em SystemVerilog/Verilog também foi utilizado para validar a coleta de dados de estimação de consumo de energia de modelos descritos nessas linguagens, por meio da abordagem proposta. Resultados experimentais demonstraram a flexibilidade da abordagem e sua aplicabilidade para a criação de um ambiente de simulação síncrono e heterogêneo, o qual promove uma visão integrada dos dados de energia estimados.
Energy consumption is a big challenge in SoC (System-on-a-Chip) design. Depending on the project requirements, guaranteeing better accuracy in power estimation may require estimating the power consumption of a system, or part of it, using different elements: different power estimation approaches, tools, or even models described in different languages and/or abstraction levels. However, it is a challenge to incorporate these elements into a distributed and heterogeneous simulation environment that allows them to communicate and exchange information synchronously. In view of this, the present research aims to develop an approach using HLA (High Level Architecture) that enables the creation of a distributed and heterogeneous environment composed of different tools and models. These models can be described in different languages and/or abstraction levels, and can use different power estimation approaches. The use of HLA enables the synchronized and distributed simulation of the elements that compose the simulation environment. The approach must allow power estimation data to be collected and grouped in a centralized manner. As a case study, a benchmark composed of a scalable set of MPSoCs (MultiProcessor System-on-Chip) described in C++/SystemC and the Ptolemy framework were used. A project in SystemVerilog/Verilog was also used to validate the power estimation data collected from models described in these languages through the proposed approach. The experimental results show the flexibility of the approach and its applicability to the creation of a distributed and synchronous simulation environment, which promotes an integrated view of power estimation data.
APA, Harvard, Vancouver, ISO, and other styles
19

SILVEIRA, George Sobral. "Uma abordagem para suporte à verificação funcional no nível de sistema aplicada a circuitos digitais que empregam a Técnica Power Gating." Universidade Federal de Campina Grande, 2012. http://dspace.sti.ufcg.edu.br:8080/jspui/handle/riufcg/2146.

Full text
Abstract:
A indústria de semicondutores tem investido fortemente no desenvolvimento de sistemas complexos em um único chip, conhecidos como SoC (System-on-Chip). Com os diversos recursos adicionados ao SoC, ocorreu o aumento da complexidade no fluxo de desenvolvimento, principalmente no processo de verificação e um aumento do seu consumo energético. Entretanto, nos últimos anos, aumentou a preocupação com a energia consumida por dispositivos eletrônicos. Dentre as diversas técnicas utilizadas para reduzir o consumo de energia, Power Gating tem se destacado pela sua eficiência. Ultimamente, o processo de verificação dessa técnica vem sendo executado no nível de abstração RTL (Register TransferLevel), com base nas tecnologias CPF (Common Power Format) e UPF (Unified Power Format). De acordo com a literatura, as tecnologias que oferecem suporte a CPF e UPF, e baseadas em simulações, limitam a verificação até o nível de abstração RTL. Nesse nível, a técnica de Power Gating proporciona um considerável aumento na complexidade do processo de verificação dos atuais SoC. Diante desse cenário, o objetivo deste trabalho consiste em uma abordagem metodológica para a verificação funcional no nível ESL (Electronic System-Level) e RTL de circuitos digitais que empregam a técnica de Power Gating, utilizando uma versão modificada do simulador OSCI (Open SystemC Initiative). Foram realizados quatro estudos de caso e os resultados demonstraram a eficácia da solução proposta.
The semiconductor industry has strongly invested in the development of complex systems on a single chip, known as System-on-Chip (SoC), which are extensively used in portable devices. With the many features added to SoC, there has been an increase of complexity in the development flow, especially in the verification process, and an increase in SoC power consumption. However, in recent years, the concern about power consumption of electronic devices, has increased. Among the different techniques to reduce power consumption, Power Gating has been highlighted for its efficiency. Lately, the verification process of this technique has been executed in Register Transfer-Level (RTL) abstraction, based on Common Power Format (CPF) and Unified Power Format (UPF) . The simulators which support CPF and UPF limit the verification to RTL level or below. At this level, Power Gating accounts for a considerable increase in complexity of the SoC verification process. Given this scenario, the objective of this work consists of an approach to perform the functional verification of digital circuits containing the Power Gating technique at the Electronic System Level (ESL) and at the Register Transfer Level (RTL), using a modified Open SystemC Initiative (OSCI) simulator. Four case studies were performed and the results demonstrated the effectiveness of the proposed solution.
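To make the verification problem concrete, the sketch below models a power-gated block at a very abstract (ESL-like) level: when its domain is switched off its internal state is lost, and a simple check flags the classic bug of reading the block before its state has been restored. This is a generic illustration under my own assumptions, not the modified OSCI SystemC simulator or the methodology described in the thesis.

```cpp
#include <iostream>
#include <optional>

// Abstract model of a block inside a switchable power domain.
class PowerGatedBlock {
    bool powered_ = true;
    std::optional<int> state_ = 0;          // lost when the domain is gated off
public:
    void powerOff() { powered_ = false; state_.reset(); }
    void powerOn()  { powered_ = true; }                  // state must be restored explicitly
    void restore(int saved) { state_ = saved; }
    void write(int v) { if (powered_) state_ = v; }
    bool read(int& out) const {                           // returns false on an invalid read
        if (!powered_ || !state_) return false;
        out = *state_;
        return true;
    }
};

int main() {
    PowerGatedBlock blk;
    blk.write(42);
    blk.powerOff();          // retention is not modeled: state is lost here
    blk.powerOn();
    int v;
    if (!blk.read(v))
        std::cout << "Verification check: read before state restore detected\n";
    blk.restore(42);
    if (blk.read(v))
        std::cout << "After restore, value = " << v << "\n";
}
```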
APA, Harvard, Vancouver, ISO, and other styles
20

Smigelski, Jeffrey Ralph. "Water Level Dynamics of the North American Great Lakes:Nonlinear Scaling and Fractional Bode Analysis of a Self-Affine Time Series." Wright State University / OhioLINK, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=wright1379087351.

Full text
APA, Harvard, Vancouver, ISO, and other styles
21

Kooli-Chaabane, Hanen. "Le transfert de technologie vu comme une dynamique des compétences technologiques : application à des projets d'innovation basés sur des substitutions technologiques par le brasage métallique." Thesis, Vandoeuvre-les-Nancy, INPL, 2010. http://www.theses.fr/2010INPL075N/document.

Full text
Abstract:
Le transfert de technologie est un processus d’innovation loin de se résumer à une simple relation émetteur / récepteur de connaissances. Il est complexe et de ce fait, les facteurs déterminants de son succès sont encore mal connus, sa modélisation reste à étudier et des principes de pilotage sont à établir.Cette thèse propose une modélisation descriptive du processus de transfert de technologie afin de mieux comprendre la dynamique des projets de transfert de technologie et de dégager des bonnes pratiques permettant de mieux le piloter. Dans le champ théorique, nous avons analysé les modèles de transfert de technologie existant dans la littérature et avons proposé un méta-modèle du point de vue de l’ingénierie système. Nous avons ensuite cherché à mieux comprendre les phénomènes in situ.Pour ce faire, une méthodologie d’observation pour la collecte des données au niveau « micro » a été mise au point. Nous avons suivi cinq projets de transfert durant une période allant de trois mois à deux ans. Deux dimensions ont été privilégiées : la dimension immatérielle et matérielle. Le concept d’Objet Intermédiaire de Transfert (OIT) est introduit à partir de la notion d’Objet Intermédiaire de Conception. Les données obtenues ont été analysées selon deux approches :- une approche comparative descriptive, permettant d’identifier les invariants et les phénomènes divergents entre les cinq processus. - une approche multicritère basée sur la théorie des ensembles approximatifs. Cette dernière approche fournit des informations utiles pour la compréhension du processus par l’intermédiaire des règles de connaissances. Elle a validé l’importance des OIT dans la dynamique du projet final
Technology transfer is an innovation process that is far from being a simple transmitter/receiver relationship of knowledge. It is complex; thus the determinants of its success are still poorly understood, and its modeling remains to be studied for better management and optimization of the process. This thesis proposes a descriptive modeling of the technology transfer process. The aim is to gain a better understanding of the dynamics of technology transfer projects and to develop best practices to improve their management. In the theoretical field, we analyzed the models of the literature and proposed a meta-model of technology transfer from the point of view of systems engineering. We then sought to better understand the phenomena in situ. In order to reach our aim, an observation methodology for data collection at the micro level was developed. We followed five transfer projects for periods ranging from three months to two years. Two dimensions were emphasized: the immaterial and the material dimension. The concept of Intermediate Transfer Object (ITO) is introduced from the concept of design intermediary object. The data obtained were analyzed using two approaches: a comparative descriptive approach, identifying invariants and divergent phenomena across the five processes, which allowed us to propose best practices for technology transfer project management in the context of brazing; and a multicriteria approach based on rough set theory, which provides useful information for understanding the process through decision rules and validated the importance of the technology transfer object in the dynamics and success of a project.
APA, Harvard, Vancouver, ISO, and other styles
22

丘偉明. "A register transfer level system unit and EIH design in NSC98." Thesis, 1999. http://ndltd.ncl.edu.tw/handle/81048530058680810771.

Full text
APA, Harvard, Vancouver, ISO, and other styles
23

Wu, Li-Shiuan, and 吳立璿. "Design Automation Tool From SystemC To Register-Transfer Level Verilog With Peak Power Minimization." Thesis, 2008. http://ndltd.ncl.edu.tw/handle/03524830162889119303.

Full text
Abstract:
Master's thesis
National Cheng Kung University
Department of Electrical Engineering (Master's and Doctoral Program)
96 (ROC academic year)
Advances in semiconductor process technology have made it possible to fabricate complex VLSI systems, but the time and difficulty involved in designing such systems have increased tremendously. High-level description languages and high-level synthesis tools have therefore become an active research topic for simplifying the design flow and shortening time to market, and the trade-off between energy and performance is one of the key factors designers must weigh. This thesis develops a design automation tool that translates SystemC into register-transfer level Verilog. A heuristic scheduling algorithm is incorporated during the translation to minimize the peak power of the system, and a control-edge insertion method further reduces its energy consumption. Experimental results show that, under timing and resource constraints, the tool effectively reduces both peak power and energy for several benchmark circuits, with maximum energy savings of about 29%.
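The sketch below is not the thesis's scheduler; it only illustrates the objective the abstract describes: a dependency-constrained list scheduler that caps the estimated power summed in each control step, trading a slightly longer schedule for a lower peak. The operations, power figures, and budget are hypothetical.

```python
# Toy peak-power-aware list scheduler (illustration only, not the thesis's tool).
# Each operation has an estimated power cost; deps lists its data predecessors.
power = {"mul1": 8, "mul2": 8, "add1": 2, "add2": 2, "sub1": 3}
deps = {"mul1": [], "mul2": [], "add1": ["mul1"], "add2": ["mul2"], "sub1": ["add1", "add2"]}
PEAK_BUDGET = 10  # hypothetical per-control-step power budget

scheduled, step, schedule = set(), 0, {}
while len(scheduled) < len(power):
    ready = [op for op in power if op not in scheduled
             and all(p in scheduled for p in deps[op])]
    used, chosen = 0, []
    for op in sorted(ready, key=power.get, reverse=True):
        if used + power[op] <= PEAK_BUDGET or not chosen:
            chosen.append(op)          # always take at least one op to make progress
            used += power[op]
    schedule[step] = chosen
    scheduled.update(chosen)
    step += 1

peak = max(sum(power[op] for op in ops) for ops in schedule.values())
print(schedule)        # e.g. {0: ['mul1'], 1: ['mul2', 'add1'], 2: ['add2'], 3: ['sub1']}
print("peak power:", peak)   # 10 here, versus 16 if both multipliers ran together
```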
APA, Harvard, Vancouver, ISO, and other styles
24

Karakaya, Fuat. "Automated exploration of the ASIC design space for minimum power-delay-area product at the register transfer level." 2004. http://etd.utk.edu/2004/KarakayaFuat.pdf.

Full text
Abstract:
Thesis (Ph. D.)--University of Tennessee, Knoxville, 2004.
Title from title page screen (viewed May 13, 2004). Thesis advisor: Donald W. Bouldin. Document formatted into pages (x, 102 p. : ill. (some col.)). Vita. Includes bibliographical references (p. 99-101).
APA, Harvard, Vancouver, ISO, and other styles
25

吳宗益. "Some techniques for storage optimization at the register-transfer level." Thesis, 1997. http://ndltd.ncl.edu.tw/handle/72774825386582786762.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

Yang, Ping-Hsun, and 楊秉勳. "Interconnection-Aware Register Transfer Level Partitioning for Low-Power Datapath." Thesis, 2006. http://ndltd.ncl.edu.tw/handle/23038932466717378646.

Full text
Abstract:
Master's thesis
National Cheng Kung University
Department of Electrical Engineering (Master's and Doctoral Program)
94 (ROC academic year)
This thesis presents a register-transfer level partitioning algorithm and discusses the impact of interconnect power consumption in data-dominated designs. Partitioning divides the functional operation nodes of a data flow graph into groups with little inter-cluster communication in order to preserve data locality; however, resource sharing may increase inter-cluster communication and destroy that locality at the physical level. The proposed algorithm, RS-Partitioning, therefore performs resource sharing and high-level partitioning simultaneously while taking data locality into account. A partitioned and allocated datapath that preserves data locality reduces the number of accesses to power-hungry global wires, and the partitioned data flow graph is also easier to make regular, which simplifies the interconnect structure. Experimental results show that the approach reduces interconnect power consumption by 28.5% on average for 2-way partitions and 34.2% for 4-way partitions.
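The sketch below is not RS-Partitioning itself; it only illustrates the objective, bipartitioning a small data flow graph by greedily accepting node moves that reduce the number of inter-cluster edges (the accesses to global wires the abstract wants to minimize). The graph and the simple balance rule are invented for the example.

```python
# Toy 2-way partitioning of a small data flow graph by greedy cut reduction.
# Illustrative only; this is not the RS-Partitioning algorithm from the thesis.
edges = [("a", "b"), ("a", "c"), ("b", "d"), ("c", "d"), ("d", "e"), ("e", "f")]
nodes = sorted({n for e in edges for n in e})

def cut_size(part):
    """Number of edges whose endpoints lie in different clusters."""
    return sum(1 for u, v in edges if part[u] != part[v])

part = {n: i % 2 for i, n in enumerate(nodes)}    # arbitrary initial split
improved = True
while improved:
    improved = False
    for n in nodes:
        other = 1 - part[n]
        sizes = [list(part.values()).count(s) for s in (0, 1)]
        if sizes[part[n]] < sizes[other]:
            continue                              # never drain the smaller cluster
        trial = dict(part, **{n: other})
        if cut_size(trial) < cut_size(part):      # accept only cut-reducing moves
            part, improved = trial, True

print(part, "| inter-cluster edges:", cut_size(part))
```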
APA, Harvard, Vancouver, ISO, and other styles
27

Lin, Hen-Ming, and 林恆民. "On HDL Synthesis at Register Transfer Level and Related Graph Theory." Thesis, 2000. http://ndltd.ncl.edu.tw/handle/10507443646496386859.

Full text
Abstract:
Doctoral dissertation
National Chiao Tung University
Department of Electronics Engineering
89 (ROC academic year)
HDL synthesis translates a design written in a hardware description language (HDL) such as Verilog or VHDL into a structural netlist. Typical synthesizers, however, adopt ad hoc methods for the special element inferences in HDL synthesis, namely latch inference, flip-flop inference, and tri-state buffer inference: they infer these elements by recognizing syntactic templates process by process, do not take dependencies across processes into account, and therefore cannot solve the problems completely and correctly. As a result, designers must follow unreasonable coding restrictions to obtain correct and efficient netlists, and a typical synthesizer may still generate a wrong netlist, imposing extra verification overhead. This dissertation first proposes a synthesis flow for HDL synthesis. Unlike typical synthesizers, which perform the special element inferences before generating the combinational circuit network, our approach first generates the overall combinational circuit network for the input HDL description and then performs the inferences on that network, so dependencies across processes can be taken into account. Based on this flow, the dissertation proposes systematic algorithms for the three inferences. For latch inference, the problem is reduced to the minimum feedback vertex set (MFVS) problem in graph theory, so the minimum number of latches can be inferred correctly and efficiently. For flip-flop inference, using the concept of multiple-clocked (MC) flip-flops, a retiming-based framework infers the minimum number of flip-flops systematically and correctly for both simple and complex clocked statements; a possible implementation of MC flip-flops is also proposed, and with their support the typical synthesizable subset of HDL can be extended. For tri-state buffer inference, a synthesis model based on rectification is proposed: a naive netlist that cannot correctly exhibit the high-impedance behavior is first constructed from the HDL description, and a set of rectification circuits controlled by a rectification circuit controller is then inserted so that the compensated netlist behaves as the HDL description requires. These inference algorithms systematically solve the latch, flip-flop, and tri-state buffer inference problems, increase the reliability of HDL synthesis, free designers from unreasonable coding-style restrictions, avoid mismatches between synthesis and simulation, and thus reduce verification overhead. Finding the minimum feedback vertex set of a graph is also important for a variety of CAD applications, including latch inference in HDL synthesis and partial scan in design for testability. The last part of the dissertation therefore explores the MFVS problem in depth and proposes three new reduction operations based on new theorems, together with efficient algorithms built on them; the reduction operations and algorithms are shown to be very effective on the partial scan problem.
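A minimal sketch of the graph-theoretic idea the abstract reduces latch inference to: treat combinational signal dependencies as a directed graph and repeatedly break cycles by picking a vertex that lies on a cycle; the chosen vertices form a (not necessarily minimum) feedback vertex set and correspond to signals that would be latched. The signal names and the greedy choice below are illustrative assumptions; the dissertation's exact reduction operations are not reproduced here.

```python
# Greedy feedback-vertex-set sketch for latch inference (illustration only).
# graph[v] lists the signals that v reads combinationally.
graph = {
    "q":    ["en_q", "d"],   # q is driven by logic that reads en_q and d
    "en_q": ["q"],           # ...while en_q itself reads q: a combinational loop
    "d":    [],
    "ack":  ["req"],
    "req":  ["ack"],         # a second combinational loop
}

def find_cycle_vertex(g):
    """Return some vertex that lies on a directed cycle, or None if g is acyclic."""
    WHITE, GREY, BLACK = 0, 1, 2
    color = {v: WHITE for v in g}
    def dfs(v):
        color[v] = GREY
        for w in g.get(v, []):
            if color[w] == GREY:
                return w                  # back edge: w is on a cycle
            if color[w] == WHITE:
                hit = dfs(w)
                if hit is not None:
                    return hit
        color[v] = BLACK
        return None
    for v in g:
        if color[v] == WHITE:
            hit = dfs(v)
            if hit is not None:
                return hit
    return None

latched = set()
work = {v: list(ws) for v, ws in graph.items()}
while (v := find_cycle_vertex(work)) is not None:
    latched.add(v)                        # infer a latch on this signal
    work.pop(v)                           # removing it breaks the cycle
    for ws in work.values():
        if v in ws:
            ws.remove(v)

print("signals inferred as latches:", sorted(latched))   # ['ack', 'q']
```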
APA, Harvard, Vancouver, ISO, and other styles
28

Chen, Yen-An, and 陳延安. "An Efficient Register-Transfer Level Testability Estimation Technique Based on Monte Carlo Simulation." Thesis, 2010. http://ndltd.ncl.edu.tw/handle/41236167528518606395.

Full text
Abstract:
Master's thesis
National Tsing Hua University
Department of Computer Science
98 (ROC academic year)
This thesis proposes a statistics-based method for estimating the testability of register-transfer level designs. The testability analysis combines a new high-level design representation with Monte Carlo simulation, using random-sampling simulation together with a statistical model to improve the error bound and raise the confidence level. The experiments use a series of ISCAS'89 designs and several real design cases as benchmarks. The results show that the proposed method can effectively estimate testability at the high level, so designers can locate points of very low testability in a design before circuit synthesis.
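As a hedged illustration of the statistical idea only (not the thesis's design representation or tool), the sketch below estimates how often random inputs activate an internal condition and attaches a normal-approximation confidence interval, so the error bound shrinks as more samples are drawn. The condition and sample count are invented.

```python
# Monte Carlo estimate of an internal condition's activation probability,
# with a normal-approximation confidence interval (illustrative sketch only).
import math
import random

def internal_condition(a, b, c):
    """Stand-in for a hard-to-reach RTL condition on three 8-bit inputs."""
    return (a ^ b) < 16 and c > 128

def estimate(n_samples=20000, z=1.96, seed=1):
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_samples):
        a, b, c = rng.randrange(256), rng.randrange(256), rng.randrange(256)
        hits += internal_condition(a, b, c)
    p = hits / n_samples
    half_width = z * math.sqrt(p * (1 - p) / n_samples)   # ~95% CI half-width
    return p, half_width

p, hw = estimate()
print(f"estimated activation probability: {p:.5f} +/- {hw:.5f}")
```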
APA, Harvard, Vancouver, ISO, and other styles
29

Rose, James A. "A computer architecture for compiled event-driven simulation at the gate and register-transfer level." 1992. http://catalog.hathitrust.org/api/volumes/oclc/28227412.html.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Brinkmann, Raik [Verfasser]. "Preprocessing for property checking of sequential circuits on the register transfer level = Vorverarbeitung für die Überprüfung von Eigenschaften sequentieller Schaltungen auf der Register-Transfer-Ebene / von Raik Brinkmann." 2004. http://d-nb.info/97006392X/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Wang, Shao-hsuan, and 王少軒. "Pico-Second Level Vernier Delay Line for Register Metastability Measurements Technique and Chip Design." Thesis, 2012. http://ndltd.ncl.edu.tw/handle/26407190975022909650.

Full text
Abstract:
Master's thesis
National Yunlin University of Science and Technology
Graduate Institute of Electronic and Optoelectronic Engineering (Master's Program)
100 (ROC academic year)
In today's systems-on-chip (SoC), asynchrony between digital signals and the transmission process of the system circuit often leaves insufficient setup or hold time, which in turn drives a register into a metastable state and ultimately causes logic errors. In the metastable state the timing between logic signals is only a few picoseconds, which makes the state difficult to capture, let alone analyze. Several measuring circuits have been proposed in previous studies; although they reach picosecond resolution, they are hard to control and difficult to design and realize. This thesis proposes a measurement technique based on a vernier delay circuit, which is stable and can easily produce two logic signals with a picosecond-level delay between them, using feedback-type D latches and D flip-flops as the measurement basis. Simulated in a 0.18 um CMOS process, the metastable-state timing discrepancy between a D latch and a D flip-flop was about 80 ps to 302 ps. The simulated measuring circuit achieves a 10 ps timing difference and, with proper sizing, can produce timing differences ranging from zero to 320 ps. The metastable state of a D latch at 120 ps was also successfully simulated at 10 ps timing resolution. Finally, the chip of the measuring circuit was simulated successfully, with the same results before and after layout.
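A small numeric sketch of the vernier principle behind such a measurement circuit: two delay chains whose per-stage delays differ by a few picoseconds close an initial time gap by that difference at every stage, so the stage at which the edges coincide gives the measured interval with picosecond resolution. The delay values below are hypothetical, chosen only to mirror the 10 ps resolution quoted in the abstract.

```python
# Vernier delay-line arithmetic (illustrative; delay values are hypothetical).
T_SLOW = 60e-12    # per-stage delay of the chain carrying the earlier (start) edge
T_FAST = 50e-12    # per-stage delay of the chain carrying the later (stop) edge
RESOLUTION = T_SLOW - T_FAST            # 10 ps of gap closed per stage

def stages_to_resolve(dt):
    """Stage index at which the fast edge catches the slow edge for a gap dt."""
    remaining_lead, stage = dt, 0
    while remaining_lead > 0:
        remaining_lead -= RESOLUTION    # each stage shrinks the remaining lead
        stage += 1
    return stage

for dt_ps in (15, 85, 123, 302):
    k = stages_to_resolve(dt_ps * 1e-12)
    print(f"input gap {dt_ps:>3} ps -> caught at stage {k} "
          f"(measured as ~{k * RESOLUTION * 1e12:.0f} ps)")
```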
APA, Harvard, Vancouver, ISO, and other styles
32

Tatas, K., K. Siozios, A. Bartzas, Costas Kyriacou, and D. Soudris. "A Novel Prototyping and Evaluation Framework for NoC-Based MPSoC." 2013. http://hdl.handle.net/10454/9739.

Full text
Abstract:
This paper presents a framework for high-level exploration, Register Transfer-Level (RTL) design, and rapid prototyping of Network-on-Chip (NoC) architectures. From the high-level exploration, a selected NoC topology is derived and then implemented in RTL using an automated design flow. For verification purposes, the flow also generates self-checking testbenches for the RTL and architecture files for the semi-automatic implementation of the system in Xilinx EDK, significantly reducing design and verification time and therefore Non-Recurring Engineering (NRE) cost. Simulation and FPGA implementation results are given for four multimedia case-study applications, demonstrating the validity of the proposed approach.
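To illustrate the kind of topology description such a flow might pass from high-level exploration to RTL generation, the short sketch below enumerates the routers and links of a 2D mesh. The dictionary format is invented for the example and is not the architecture-file format used by the framework in the paper.

```python
# Generate the router/link list of a 2D-mesh NoC topology (hypothetical format;
# not the architecture files produced by the framework described in the paper).
def mesh_topology(rows, cols):
    routers = [(r, c) for r in range(rows) for c in range(cols)]
    links = []
    for r, c in routers:
        if c + 1 < cols:
            links.append(((r, c), (r, c + 1)))   # horizontal neighbour
        if r + 1 < rows:
            links.append(((r, c), (r + 1, c)))   # vertical neighbour
    return {"routers": routers, "links": links}

topo = mesh_topology(3, 3)
print(len(topo["routers"]), "routers,", len(topo["links"]), "bidirectional links")
# A generator like this could emit one RTL router instance per entry in 'routers'
# and one channel per entry in 'links'.
```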
APA, Harvard, Vancouver, ISO, and other styles
33

Matheson, Adrian Anthony. "Auditory Interface Design to Support Rover Tele-operation in the Presence of Background Speech: Evaluating the Effects of Sonification, Reference Level Sonification, and Sonification Transfer Function." Thesis, 2013. http://hdl.handle.net/1807/43257.

Full text
Abstract:
The preponderant use of visual interfaces to convey information from machine to human invites failures caused by overloading the visual channel. This thesis investigates the suitability of auditory feedback, and of certain related design choices, in settings involving background speech, with communicating a tele-operated vehicle's tilt angle as the focal application. A simulator experiment with pitch feedback on one system variable, tilt angle, and its safety threshold was conducted. Manipulated in a within-subject design were: (1) presence vs. absence of speech, (2) discrete tilt alarm vs. discrete alarm plus tilt sonification (continuous feedback), (3) tilt sonification vs. tilt and threshold sonification, and (4) linear vs. quadratic transfer function from variable to pitch. Designs with both variable and reference sonification significantly reduced the time drivers spent exceeding the safety limit compared with designs with no sonification, though this effect was not detected within the set of conditions with background speech audio.
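A brief sketch of the design variable being manipulated: mapping a tilt angle to an audio pitch with either a linear or a quadratic transfer function. The frequency range and full-scale tilt below are hypothetical, not the values used in the thesis experiment.

```python
# Linear vs. quadratic sonification transfer functions (hypothetical parameters).
F_MIN, F_MAX = 220.0, 880.0      # pitch range in Hz
TILT_MAX = 45.0                  # tilt angle treated as full scale, in degrees

def pitch(tilt_deg, quadratic=False):
    x = min(max(tilt_deg / TILT_MAX, 0.0), 1.0)     # normalise to [0, 1]
    if quadratic:
        x = x ** 2               # de-emphasises small tilts, expands changes near the limit
    return F_MIN + (F_MAX - F_MIN) * x

for tilt in (10, 20, 30, 40):
    print(f"{tilt:>2} deg -> linear {pitch(tilt):6.1f} Hz, "
          f"quadratic {pitch(tilt, quadratic=True):6.1f} Hz")
```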
APA, Harvard, Vancouver, ISO, and other styles
34

Corvino, Rosilde. "Exploration de l'espace des architectures pour des systèmes de traitement d'image, analyse faite sur des blocs fondamentaux de la rétine numérique." Phd thesis, 2009. http://tel.archives-ouvertes.fr/tel-00456577.

Full text
Abstract:
In the context of high-level synthesis (HLS), which extracts a structural model from an algorithmic model, we propose solutions for optimizing data access and data transfer in the target hardware. A methodology for exploring the space of possible memory architectures has been developed; it finds a trade-off between the amount of internal memory used and the timing performance of the generated hardware. Two levels of optimization exist: (1) an architectural optimization, which consists of creating a memory hierarchy, and (2) an algorithmic optimization, which consists of partitioning the full set of manipulated data so that only the data needed immediately is stored internally. For each possible partitioning, we solve the problem of scheduling the computations and mapping the data, and we finally select the Pareto-optimal solution(s). We propose a tool, a front-end to HLS, capable of applying the algorithmic optimization of point (2) to an image-processing algorithm specified by the user. The tool outputs an algorithmic model optimized for HLS by customizing a generic architecture.
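The following sketch illustrates only the final selection step the abstract mentions: keeping the Pareto-optimal trade-offs between internal memory use and latency from a set of candidate mappings. The candidate names and numbers are invented for the example.

```python
# Keep the Pareto-optimal (memory, latency) design points (illustrative data).
candidates = {                  # name: (internal memory in KB, latency in cycles)
    "full-buffering":    (96, 1200),
    "line-buffering":    (12, 1600),
    "tile-8x8":          (4, 1900),
    "tile-8x8-prefetch": (6, 1500),
    "no-buffering":      (1, 5200),
}

def dominates(a, b):
    """a dominates b if it is no worse in both objectives and not identical."""
    return a[0] <= b[0] and a[1] <= b[1] and a != b

pareto = {name: point for name, point in candidates.items()
          if not any(dominates(other, point) for other in candidates.values())}
print(pareto)   # 'line-buffering' is dropped: 'tile-8x8-prefetch' dominates it
```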
APA, Harvard, Vancouver, ISO, and other styles