
Dissertations / Theses on the topic 'RTL Design'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 dissertations / theses for your research on the topic 'RTL Design.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Browse dissertations / theses from a wide variety of disciplines and organise your bibliography correctly.

1

Kevorkov, Ruslan. "Sounding Rocket Experiment Electronics – RTL Design and Validation." Thesis, KTH, Rymd- och plasmafysik, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-149252.

Full text
Abstract:
The Infrared Spectroscopy to Analyse the Middle Atmosphere Composition (ISAAC) experiment is a module designed by KTH students. It consists of a Rocket Mounted Unit (RMU) and two Free-Falling Units (FFUs) carried inside it. The main objective of the experiment is to demonstrate the ability of one FFU to track the other and to carry out measurements in cooperation. This Master's thesis covers the development and implementation of the ejection system and the data acquisition for the ISAAC experiment, so as to achieve well-timed ejection of the FFUs and provide data for post-flight analysis. Ejection control and communication are implemented in a Field-Programmable Gate Array (FPGA) using the VHDL hardware description language. Verification of the newly developed firmware and the post-flight analysis results are also presented in the report. The ISAAC experiment was launched on 29 May 2014 from Esrange, Kiruna, onboard the REXUS15 rocket.
APA, Harvard, Vancouver, ISO, and other styles
2

Jangid, Anuradha. "Verifying IP-Cores by Mapping Gate to RTL-Level Designs." Case Western Reserve University School of Graduate Studies / OhioLINK, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=case1385975878.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Shrestha, Gyanendra. "Ensuring Trust Of Third-Party Hardware Design With Constrained Sequential Equivalence Checking." Thesis, Virginia Tech, 2012. http://hdl.handle.net/10919/44889.

Full text
Abstract:
Globalization of semiconductor design and manufacturing has led to a concern of trust in the final product. Components may now be designed and manufactured anywhere in the world without the direct supervision of the buyer. As a result, hardware designs and fabricated chips may be vulnerable to malicious alterations by an adversary at any stage of the VLSI design flow, thus compromising the integrity of the component. The effect of any modifications made by the adversary can be catastrophic in critical applications. Because of the stealthy nature of such insertions, it is extremely difficult to detect them using traditional testing and verification methods. Therefore, establishing the trust of hardware systems requires a new approach, and the problem has drawn much attention in the hardware security community. For many years, researchers in the cyber security community have developed sophisticated techniques to detect, isolate and prevent malicious attacks, assuming that the underlying hardware platform is extremely secure and trustworthy. But the hardware may contain one or more backdoors that can be exploited by software at the time of operation. Therefore, the trust of the computing system cannot be guaranteed unless we can guarantee the trust of the hardware platform. A malicious insertion can be very stealthy and may involve only a minor modification of the hardware design or the fabricated chip. The insertion may require rare or specific conditions in order to be activated. The effect may be denial of service, change of function, destruction of the chip, leakage of secret information from cryptographic hardware, etc. In this thesis, we propose a novel technique for the detection of malicious alteration(s) in a third-party soft intellectual property (IP) core using a clever combination of sequential equivalence checking (SEC) and automatic test generation. The use of powerful inductive invariants can prune a large illegal state space, and test generation helps to provide a sensitization path for nodes of interest. Results for a set of hard-to-verify designs show that our method can either ensure that the suspect design is free from the functional effect of any malicious change(s) or return a small group of most likely malicious signals.
Master of Science
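To make the SEC idea above concrete, here is a minimal, self-contained sketch: a toy 2-bit counter serves as the reference, a copy with a deliberately planted rare trigger serves as the suspect, and a search over the product ("miter") machine looks for any reachable output mismatch. Everything here is invented for illustration; real tools operate on netlists with inductive invariants and SAT solvers rather than explicit enumeration.

```python
# Reference: 2-bit counter whose output flags the value 3.
def ref_step(state, inp):
    nxt = (state + inp) % 4
    return nxt, int(nxt == 3)

# Suspect: same counter with a rare, deliberately planted output corruption.
def suspect_step(state, inp):
    nxt = (state + inp) % 4
    out = int(nxt == 3) ^ int(state == 2 and inp == 1)   # Trojan-like trigger
    return nxt, out

# Explore the product (miter) machine; any reachable output mismatch is a
# functional effect of the alteration.
def sec_miter(ref, sus, init=(0, 0)):
    seen, frontier = {init}, [init]
    while frontier:
        rs, ss = frontier.pop()
        for inp in (0, 1):
            (rn, ro), (sn, so) = ref(rs, inp), sus(ss, inp)
            if ro != so:
                return ("MISMATCH", {"states": (rs, ss), "input": inp})
            if (rn, sn) not in seen:
                seen.add((rn, sn))
                frontier.append((rn, sn))
    return ("EQUIVALENT",)

print(sec_miter(ref_step, suspect_step))
```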
APA, Harvard, Vancouver, ISO, and other styles
4

Nilsson, Jesper. "Mixed RTL and gate-level power estimation with low power design iteration." Thesis, Linköping University, Department of Electrical Engineering, 2003. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-1685.

Full text
Abstract:

In the last three decades we have witnessed a remarkable development in the area of integrated circuits: from small logic devices containing a few hundred transistors to modern processors containing several tens of millions of transistors. However, power consumption has become a real problem and may very well be the limiting factor of future development. Designing for low power is therefore increasingly important, and to accomplish an efficient low-power design, accurate power estimation at an early design stage is essential. The aim of this thesis was to set up a power estimation flow to estimate the power consumption at an early design stage. The developed flow spans both the RTL and gate level, incorporating Mentor Graphics ModelSim (an RTL simulator), Cadence PKS (a gate-level synthesizer) and power estimation tools developed in-house. The power consumption is calculated from gate-level physical information and RTL toggle information. To achieve high estimation accuracy, real node annotations are used together with an in-house on-chip wire model to estimate node voltage swing.

Since the power estimation may be very time-consuming, the flow also includes support for low-power design iteration. This gives an efficient power estimation speedup when concentrating on smaller sub-parts of the design.
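The core calculation such a mixed-level flow performs can be sketched in a few lines: RTL simulation supplies per-node toggle counts, the gate-level netlist supplies capacitances, and a wire model supplies voltage swings. The node names and numbers below are invented placeholders, not data from the thesis.

```python
# Mixed-level dynamic power estimation in miniature: RTL simulation supplies
# toggle counts, the gate-level netlist supplies node capacitance, and a wire
# model supplies the voltage swing. All node data here is invented.
VDD = 1.8                 # supply voltage [V], assumed
T_WINDOW = 1e-6           # simulated time window [s]

nodes = {                 # name: (toggle count, capacitance [F], swing [V])
    "alu_out":  (420, 12e-15, 1.8),
    "ctrl_fsm": ( 90,  3e-15, 1.8),
    "long_bus": (210, 45e-15, 1.4),   # reduced swing from the wire model
}

# Energy per transition = 0.5 * C * V_swing * VDD; average power = energy / time.
p_avg = sum(0.5 * c * vs * VDD * n for n, c, vs in nodes.values()) / T_WINDOW
print(f"estimated dynamic power: {p_avg * 1e6:.2f} uW")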

APA, Harvard, Vancouver, ISO, and other styles
5

Puri, Prateek. "Design Validation of RTL Circuits using Binary Particle Swarm Optimization and Symbolic Execution." Thesis, Virginia Tech, 2015. http://hdl.handle.net/10919/55815.

Full text
Abstract:
Over the last two decades, chip design has been conducted at the register transfer level (RTL) using hardware description languages (HDLs), such as VHDL and Verilog. Modeling at the behavioral level not only allows for better representation and understanding of the design, but also allows for encapsulation of the sub-modules, thus increasing productivity. Despite these benefits, validating an RTL design is not necessarily easier. Today, design validation is considered one of the most time- and resource-consuming aspects of hardware design, and the costs associated with late detection of bugs can be enormous. Together with stringent time-to-market factors, the need to guarantee the correct functionality of the design is more critical than ever. The work done in this thesis tackles the problem of RTL design validation and presents new frameworks for functional test generation. We use branch coverage as our metric to evaluate the quality of the generated test stimuli. The initial effort for test generation utilized simulation-based techniques because of their scalability with design size and ease of use. However, simulation-based methods work on input spaces rather than the DUT's state space and often fail to traverse very narrow search paths in large input spaces. To counter this problem and enhance the ability of the test generation framework, in the later work in this thesis, certain design semantics are statically extracted and recurrence relationships between different variables are mined. Information such as relations among variables and loops can be extremely valuable from a test generation point of view. The simulation-based method is hybridized with a Z3-based symbolic backward execution engine with feedback among the different stages. The hybridized method performs loop abstraction and is able to traverse narrow design paths without performing costly circuit analysis or explicit loop unrolling. Structurally and functionally unreachable branches are also identified during the process of test generation. Experimental results show that the proposed techniques are able to achieve high branch coverage on several ITC'99 benchmark circuits and their modified variants, with significant speed-up and reduction in the sequence length.
Master of Science
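As a rough illustration of the binary-PSO half of the approach, the sketch below treats each particle as a candidate input vector and uses the number of branches it covers as fitness. The toy "DUT" and all parameters are invented; the thesis targets real HDL designs and adds the symbolic back end for the narrow paths that such search misses.

```python
import math, random

# Toy "DUT": the set of branch ids covered depends on the 8-bit input vector.
def branches_hit(bits):
    hit = {0}                              # branch 0: always taken
    if bits[0]:
        hit.add(1)
    if bits[0] and bits[1]:
        hit.add(2)
    if sum(bits) > 6:
        hit.add(3)                         # narrow branch: needs 7+ ones
    return hit

N, DIM, W, C1, C2 = 20, 8, 0.7, 1.5, 1.5   # swarm size, bits, PSO constants
sigmoid = lambda x: 1.0 / (1.0 + math.exp(-x))

swarm = [[random.randint(0, 1) for _ in range(DIM)] for _ in range(N)]
vel   = [[0.0] * DIM for _ in range(N)]
pbest = [p[:] for p in swarm]
gbest, covered = swarm[0][:], set()

for _ in range(50):
    for i, p in enumerate(swarm):
        covered |= branches_hit(p)         # every simulated vector adds coverage
        if len(branches_hit(p)) > len(branches_hit(pbest[i])):
            pbest[i] = p[:]
        if len(branches_hit(p)) > len(branches_hit(gbest)):
            gbest = p[:]
    for i in range(N):                     # binary PSO: the sigmoid of the
        for d in range(DIM):               # velocity is the probability of a 1
            vel[i][d] = (W * vel[i][d]
                         + C1 * random.random() * (pbest[i][d] - swarm[i][d])
                         + C2 * random.random() * (gbest[d] - swarm[i][d]))
            swarm[i][d] = 1 if random.random() < sigmoid(vel[i][d]) else 0

print("branches covered:", sorted(covered))
```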
APA, Harvard, Vancouver, ISO, and other styles
6

Ravinath, Vinodh. "Design and Implementation of Single Issue DSP Processor Core." Thesis, Linköping University, Department of Electrical Engineering, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-10160.

Full text
Abstract:

Microprocessors built specifically for digital signal processing are DSP processors. DSP is one of the core technologies in rapidly growing applications like communications and audio processing. The estimated growth of DSP processors over the last 6 years is over 40%, and the variety of DSP-capable processors for various applications has also increased with their rising popularity. The design flow and architecture of such processors are, however, not commonly available to students for learning.

This report is a structured approach to the design and implementation of an embedded DSP processor core for voice, audio and video codecs. The report focuses on the design requirement specification, the Senior instruction set and assembly manual release, the micro-architecture design, and the implementation of the core. Details about the core verification are also included in this report. The instruction set of this processor supports running the basic kernels of the BDTI benchmarks.

APA, Harvard, Vancouver, ISO, and other styles
7

Niu, Xinwei. "System-on-a-Chip (SoC) based Hardware Acceleration in Register Transfer Level (RTL) Design." FIU Digital Commons, 2012. http://digitalcommons.fiu.edu/etd/888.

Full text
Abstract:
Modern System-on-a-Chip (SoC) systems have grown rapidly due to increased processing power, while maintaining the size of the hardware circuit. The number of transistors on a chip continues to increase, but current SoC designs may not be able to exploit the potential performance, especially with energy consumption and chip area becoming two major concerns. Traditional SoC designs usually separate software and hardware; thus, the process of improving system performance is a complicated task for both software and hardware designers. The aim of this research is to develop a hardware acceleration workflow for software applications, so that system performance can be improved under constraints on energy consumption and on-chip resource costs. The characteristics of software applications can be identified by using profiling tools. Hardware acceleration can bring significant performance improvement for highly mathematical calculations or repeated functions. The performance of SoC systems can then be improved if the hardware acceleration method is used to accelerate the element that incurs performance overheads. The concepts mentioned in this study can be easily applied to a variety of sophisticated software applications. The contributions of SoC-based hardware acceleration in the hardware-software co-design platform include the following: (1) Software profiling methods are applied to an H.264 Coder-Decoder (CODEC) core. The hotspot function of the target application is identified by using critical attributes such as cycles per loop, loop rounds, etc. (2) A hardware acceleration method based on a Field-Programmable Gate Array (FPGA) is used to resolve system bottlenecks and improve system performance. The identified hotspot function is converted to a hardware accelerator and mapped onto the hardware platform. Two types of hardware acceleration methods, central bus design and co-processor design, are implemented for comparison in the proposed architecture. (3) System specifications, such as performance, energy consumption, and resource costs, are measured and analyzed, and the trade-off between these three factors is compared and balanced. Different hardware accelerators are implemented and evaluated based on system requirements. (4) The system verification platform is designed based on an Integrated Circuit (IC) workflow, and hardware optimization techniques are used for higher performance and lower resource costs. Experimental results show that the proposed hardware acceleration workflow for software applications is an efficient technique. The system reaches a 2.8X performance improvement and saves 31.84% in energy consumption with the Bus-IP design, while the co-processor design achieves 7.9X performance and saves 75.85% in energy consumption.
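The profile-then-accelerate reasoning in the abstract can be checked with Amdahl-style arithmetic. The sketch below uses invented numbers (hotspot fraction, accelerator gain, per-phase power) purely to show the shape of the calculation; the 2.8X/7.9X figures above come from the thesis's actual designs.

```python
# Amdahl-style check of the accelerate-the-hotspot reasoning. All numbers are
# hypothetical placeholders, not measurements from the thesis.
def overall_speedup(f, s):
    """f: fraction of runtime in the hotspot; s: accelerator speedup on it."""
    return 1.0 / ((1.0 - f) + f / s)

f, s = 0.85, 20.0                       # assumed hotspot share and HW gain
print(f"system speedup: {overall_speedup(f, s):.2f}x")

# Energy comparison if the accelerated phase draws less power (assumed values).
p_cpu, p_acc = 1.0, 0.35                # normalized power: SW phase vs. HW phase
e_sw = p_cpu * 1.0                      # energy of the pure-software run
e_hw = p_cpu * (1 - f) + p_acc * (f / s)
print(f"energy saving: {100 * (1 - e_hw / e_sw):.1f}%")
```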
APA, Harvard, Vancouver, ISO, and other styles
8

Motschull, Jan Even. "TV-Design als wichtiger Faktor für Programmverbindungen im deutschen Fernsehen: Analysen und Vergleich zwischen den Vollprogrammsendern RTL, ProSieben und dem Spartensender VIVA zur Ermittlung von designerischen Grundsätzen im Fernsehen." [S.l.: s.n.], 2005. http://deposit.ddb.de/cgi-bin/dokserv?idn=974085839.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Prado, Rafael Nunes de Almeida. "Desenvolvimento de uma arquitetura em hardware prototipada em FPGA para aplicações genéricas utilizando redes neurais artificiais embarcadas." Universidade Federal do Rio Grande do Norte, 2011. http://repositorio.ufrn.br:8080/jspui/handle/123456789/15342.

Full text
Abstract:
This work proposes a hardware architecture, described in VHDL, for embedding Artificial Neural Networks (ANNs) of the Multilayer Perceptron (MLP) type. The architecture is intended to let ANN applications easily embed several different MLP topologies in the field. The MLP topology to which the architecture is configured is defined by a simple, specific data input (instructions) that determines the number of layers and perceptrons in the network. To support several MLP topologies, a set of datapath components and a controller were developed to execute these instructions. A user thus defines a group of previously known instructions that determine the ANN's characteristics, and the system guarantees execution of the MLP through the developed neural processors (perceptrons), the datapath components, and the controller; a communication network interconnects the neurons and supports their reuse. The biases and weights must be static: the ANN to be embedded must have been trained beforehand, off-line. The user needs no knowledge of the system's internal characteristics or of the VHDL language. A reconfigurable FPGA device was used to implement, simulate and test the whole system, allowing application to several real-world problems.
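A minimal software sketch of the instruction-driven idea follows: a small topology "program" declares the layers and activations, while the weights and biases are fixed, as if trained off-line. All names and values are invented for illustration.

```python
import math

# Topology "program": one instruction per layer (neuron count, activation).
program = [
    ("LAYER", 2, "tanh"),       # hidden layer: 2 perceptrons
    ("LAYER", 1, "sigmoid"),    # output layer: 1 perceptron
]
weights = [[[0.5, -0.3], [0.8, 0.2]],   # layer 0: 2 neurons x 2 inputs
           [[1.0, -1.0]]]               # layer 1: 1 neuron x 2 inputs
biases = [[0.1, -0.1], [0.05]]
act = {"tanh": math.tanh, "sigmoid": lambda x: 1 / (1 + math.exp(-x))}

def run_mlp(x):
    # Execute the program layer by layer, as the controller/datapath would.
    for li, (_, n_neurons, fname) in enumerate(program):
        x = [act[fname](sum(w * v for w, v in zip(weights[li][n], x))
                        + biases[li][n])
             for n in range(n_neurons)]
    return x

print(run_mlp([0.3, 0.7]))
```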
APA, Harvard, Vancouver, ISO, and other styles
10

Láník, Jan. "La réduction de consommation dans les circuits digitaux." Thesis, Université Grenoble Alpes (ComUE), 2016. http://www.theses.fr/2016GREAM016/document.

Full text
Abstract:
The topic of this thesis is methods for power reduction in digital circuits by reducing the average switching activity at the transistor level. These methods are structural in the sense that they are related not to tuning the physical properties of the circuitry but to the internal structure of the implemented logic, and are therefore independent of the particular technology. We developed two novel methods. The first is based on optimizing the structure of the combinational part of a circuit during synthesis. The second method is focused on the sequential part of the circuit: it looks for clock gating conditions that can be used to disable idle parts of a circuit, and uses formal methods to prove that the function of the circuit will not be altered.
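The safety condition behind the second method can be illustrated in miniature: a candidate gating condition is functionally safe if, whenever it holds, the register would have kept its value anyway. The toy next-state function and condition below are invented; the thesis proves such conditions with formal methods rather than enumeration.

```python
from itertools import product

# A gating condition is functionally safe iff, whenever it holds, the register
# would have kept its value anyway, so withholding the clock changes nothing.
def next_q(q, en, d):
    return d if en else q          # enabled register (toy next-state function)

def gate_cond(q, en, d):
    return not en                  # candidate: gate the clock when disabled

safe = all(not gate_cond(q, en, d) or next_q(q, en, d) == q
           for q, en, d in product([0, 1], repeat=3))
print("gating condition is safe:", safe)
```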
APA, Harvard, Vancouver, ISO, and other styles
11

Sinigaglia, Mattia. "Progettazione ed implementazione di un Sistema On Chip per applicazioni audio." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2021. http://amslaurea.unibo.it/23790/.

Full text
Abstract:
The aim of the project was to contribute to the realization of a microcontroller designed for very low-power audio applications. The microcontroller integrates an FFT accelerator that performs the Fourier transform on several audio signals acquired by the I2S peripheral, a peripheral dedicated to communication with digital audio interfaces. Specifically, the DSP protocol with TDM was implemented in the I2S peripheral to allow multiple devices to be connected on the same data line. The result achieved was the ability to communicate simultaneously with 16 input and 16 output devices, providing the processing performed by the FFT accelerator on the acquired data. The microcontroller, based on PULP, is named Echoes as a tribute to Pink Floyd, because it targets audio applications, and it features a large set of peripherals that allow it to communicate with the outside world. The thesis is divided into two parts: the first introduces edge processing and digital audio protocols; the second describes the integration of the new DSP protocol into the I2S peripheral in PULP, and the design and physical implementation of the Echoes chip.
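A minimal sketch of the TDM framing idea follows: 16 slots of 16-bit samples share one I2S-style data line, serialized on transmit and split back out on receive. The widths are illustrative assumptions, not the Echoes configuration.

```python
# DSP-mode TDM framing in miniature: 16 slots of 16-bit samples share one
# I2S-style data line. Widths are illustrative, not the Echoes register map.
SLOTS, WIDTH = 16, 16
MASK = (1 << WIDTH) - 1

def pack_frame(samples):            # TX: serialize one sample per time slot
    frame = 0
    for s in samples:
        frame = (frame << WIDTH) | (s & MASK)
    return frame

def unpack_frame(frame):            # RX: recover the per-slot samples
    return [(frame >> (WIDTH * (SLOTS - 1 - i))) & MASK for i in range(SLOTS)]

tx = list(range(SLOTS))             # one sample for each of the 16 devices
assert unpack_frame(pack_frame(tx)) == tx
```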
APA, Harvard, Vancouver, ISO, and other styles
12

Manoni, Simone. "EPAC Multi-FPGA SerDes: Enabling Partitioning of the European Processor Accelerator on Multiple FPGAs." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2022.

Find full text
Abstract:
The European Processor Initiative (EPI) is a project currently in the second phase of an agreement with the European Commission, whose goal is to design and implement a roadmap for a new family of low-power European processors for extreme-scale computing, Big Data, HPC and other emerging applications. The first phase of EPI started in December 2018 and was successfully completed in November 2021 with the delivery of the first 143 test chips (EPACs) for the European Union. The bring-up of the test chips was successful, and the chip ran its first program, sending the traditional "Hello World!" greeting in several languages. To carry out all the prototyping and test procedures required before sending a chip to production, the EPAC must be emulated on an FPGA device. However, the EPAC design is too large to implement and prototype in full on most commercial FPGAs. Until now, prototyping has therefore been carried out by disabling different parts of the system one at a time, so that the reduced system could be implemented in a single FPGA. The work presented in this thesis, carried out within Semidynamics Technology Services for EPI, contributed the conception of the EPAC partitioning onto a multi-FPGA system, the definition of the architecture, and the design of a Serializer-Deserializer module that enables the EPAC partitioning across the multi-FPGA system, in order to realize a full-chip emulator.
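The essence of such a partition-boundary SerDes can be sketched as time-multiplexing a wide bundle of signals crossing the FPGA-to-FPGA cut into narrow flits and reassembling it on the far side. The widths below are illustrative assumptions, not the EPAC module's actual parameters.

```python
# Partition-boundary SerDes in miniature: a wide bundle of inter-FPGA signals
# is time-multiplexed into narrow flits and reassembled on the other side.
BUNDLE_BITS, LINK_BITS = 256, 32
FLITS = BUNDLE_BITS // LINK_BITS
MASK = (1 << LINK_BITS) - 1

def serialize(bundle):              # FPGA A side: wide vector -> flit stream
    return [(bundle >> (LINK_BITS * i)) & MASK for i in range(FLITS)]

def deserialize(flits):             # FPGA B side: flit stream -> wide vector
    bundle = 0
    for i, flit in enumerate(flits):
        bundle |= flit << (LINK_BITS * i)
    return bundle

value = int("89ABCDEF" * 8, 16)     # an arbitrary 256-bit test pattern
assert deserialize(serialize(value)) == value
```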
APA, Harvard, Vancouver, ISO, and other styles
13

Ström, Marcus. "System Design of RF Receiver and Digital Implementation of Control Logic." Thesis, Linköping University, Department of Science and Technology, 2003. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-1848.

Full text
Abstract:

This report is the outcome of a thesis project carried out at Linköping University, Campus Norrköping. The thesis work was part of the development of an RF transceiver chip for implantable medical applications. The development was done in cooperation with Zarlink Semiconductor AB, located in Järfälla, Stockholm.

The transceiver is divided into three main blocks: the wakeup block, the MAC block and the RF block. The wakeup block is always operating and awaits a wakeup request in the 2.45 GHz ISM band. The RF block operates in the 400 MHz ISM band and is powered up after wakeup. The MAC is the controller of the whole chip. All three blocks of the transceiver are to be integrated on the same chip, using the TSMC 0.18 µm CMOS (mixed-signal/RF) process design kit.

The purpose of the thesis work was to develop the wakeup circuit for the transceiver. The main task was to develop the digital control logic of the circuitry using RTL coding (mainly VHDL), but the thesis work also included a system analysis of the whole wakeup block, including the front-end, to get a better overview and understanding of the project.

A complete data packet, or protocol, for the wakeup message at 2.45 GHz is defined in the report and is one of the results of the project. The packet was developed continuously as the project progressed. Once the data packet was defined, the incoming RF stage could be investigated. The final proposal for a complete system design of the wakeup block in the RF transceiver is also one of the outcomes of the project. The front-end consists mainly of an LNA, a simple detector and a special decoder. Since the total current consumption of the wakeup block was set to 200 nA, this had to be taken into consideration continuously. The intention was not to have an internal clock signal or oscillator available in the digital part (to keep the power consumption down); the solution was a self-clocking method applied to the incoming RF signal. A special decoder discriminates the incoming RF signal by burst length in time: it consists of an RC net that is charged up and outputs a 1 if the burst is long enough, and a 0 otherwise.

When it was decided to use an LNA in the front-end, it was found that it could not be active continuously because of the low-power requirements. The solution was to use a strobe signal that activates the complete front-end. This strobe signal is generated in the digital logic and has a specific duty cycle, depending on the time constants of the detector and the decoder in the front-end. In the implemented solution the total strobe time is 250 µs every 0.5 s.

The digital implementation of the control logic in the wakeup block was done in VHDL (source code) and Verilog (testbenches). The source code was synthesized against the component library for the TSMC 0.18 µm process, which is a mixed-signal and RF process. The netlist from synthesis was stored as a Verilog file and simulated together with the testbenches using the Verilog-XL simulator. The results from the simulations were examined and reviewed in the Simvision program from Cadence. The result was then verified during a pre-layout review together with colleagues at Zarlink Semiconductor AB. During the implementation phase a design report was written continuously and then used for the pre-layout review; extracts (source code and testbench) from this document can be found as appendices to the report.
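Two of the mechanisms described above, the RC-net burst-length decoder and the strobed front-end, lend themselves to a quick back-of-envelope check. The R, C and threshold values below are invented; only the 250 µs / 0.5 s strobe figures come from the abstract.

```python
import math

# Back-of-envelope check of the burst-length decoder and strobe duty cycle.
R, C, V_TH_FRACTION = 100e3, 1e-9, 0.5    # assumed RC net, threshold (x Vdd)

def decoder_output(burst_seconds):
    """RC net charges during the burst; output is 1 only for long bursts."""
    return int(1 - math.exp(-burst_seconds / (R * C)) >= V_TH_FRACTION)

print(decoder_output(20e-6), decoder_output(500e-6))   # short -> 0, long -> 1
print(f"front-end duty cycle: {250e-6 / 0.5:.4%}")     # 250 us every 0.5 s
```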

APA, Harvard, Vancouver, ISO, and other styles
14

Vijayaraghavan, Vijay P. "Exploration des liens entre la synthèse de haut niveau (HLS) et la synthèse au niveau transferts de registres (RTL)." Grenoble INPG, 1996. http://www.theses.fr/1996INPG0184.

Full text
Abstract:
The subject of this thesis is the link between high-level synthesis and register-transfer-level (RTL) synthesis: adapting the architecture produced by high-level synthesis by transforming it into an RTL description accepted by current industrial tools. The objectives of this transformation are to increase flexibility and efficiency, and to allow parameterization of the final architecture. Starting from a behavioral description written in a hardware description language, high-level synthesis generates an architecture at the register transfer level comprising a controller and a datapath, both of which can be synthesized by existing RTL and logic synthesis tools to realize an ASIC or an FPGA. We first devise a method we call personalization, which allows designers to adapt the generated architecture to the RTL synthesis tools and to any particular required structure. For efficiency reasons, however, it is preferable to synthesize the datapath with a datapath compiler. We therefore define a method called decomposition, which provides a means of splitting a datapath into several regular sub-datapaths that can be synthesized efficiently by a datapath compiler. Finally, we present the generation of generic datapaths, intended for the realization of architectures that are parameterizable at the RTL level. This algorithm has been implemented in the VHDL code generator based on the intermediate data structure used by AMICAL, a high-level synthesis tool.
APA, Harvard, Vancouver, ISO, and other styles
15

Márquez, Carlos Iván Castro. "Checagem de equivalência de sequências de estados de projetos digitais em RTL com modelos de referência em alto nível e de protocolo de comunicação." Universidade de São Paulo, 2014. http://www.teses.usp.br/teses/disponiveis/3/3140/tde-23122014-155143/.

Full text
Abstract:
Functional verification is the group of tasks aimed at discovering bugs created during integrated circuit design, and represents an important challenge through its strong influence on the efficiency of the whole production cycle. As an estimate, up to 80% of total design costs are due to verification, which makes it the greatest bottleneck when attempting to reduce time-to-market. This problem has given rise to a series of techniques to reduce the effort, or to increase the coverage capability, of verification. On the one hand, simulation allows finding a good number of bugs, but it is still far from reaching high state coverage because of the slowness of cycle-accurate RTL simulation. On the other hand, formal approaches provide high state coverage. Model checking, for instance, checks the validity of a set of properties over all design states; however, a strong disadvantage lies in defining and assessing the quality of the set of properties to verify, not to mention state explosion. Another formal alternative is sequential equivalence checking, which, instead of checking properties, compares the design with a reference model; nevertheless, it can traditionally be applied only between circuit descriptions where a one-to-one correspondence of states, as well as of memory elements, is expected. Remarkably, no works were found in the literature that deal with the formal verification of RTL designs while taking care of both the computational aspects, present in the high-level reference model, and the interface communication aspects, which derive from the protocol's functional specification. This work presents a formal verification methodology, using equivalence checking techniques, to validate RTL descriptions through direct comparison with a high-level reference model and with a formal model of the communication protocol. It is based on extracting and comparing complete sequences of states, instead of single states as in traditional equivalence checking, in order to determine whether the design intent is maintained in the RTL implementation. The natural discrepancies between system-level and RTL code are considered, including non-matching interfaces and memory elements, state mapping, and process concurrency. For the complete characterization and solution of the problem, a theoretical framework is introduced, in which concepts and definitions are provided and their validity is formally proved. A tool to apply the methodology systematically was developed and applied to different types of RTL descriptions, written in the VHDL and SystemC languages. The results show that the approach can be applied effectively and efficiently to formally verify digital circuits that include, but are not limited to, error correction, encryption, image processing, and math functions. Evidence was also obtained of the tool's capacity to discover both combinational and sequential bugs injected on purpose, related to computational and protocol functionality, within practicable times and iteration counts in real cases.
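The sequence-of-states comparison at the heart of the methodology can be miniaturized as follows: consecutive repetitions of a mapped state are collapsed, so that timing differences between the high-level model and the RTL do not mask functional equivalence. The traces and mapping below are invented for illustration.

```python
# Collapse consecutive repeated states so that a slower, cycle-accurate trace
# can be compared against a high-level trace (a stuttering-style comparison).
def destutter(trace):
    out = [trace[0]]
    for s in trace[1:]:
        if s != out[-1]:
            out.append(s)
    return out

# The RTL trace holds each state for several cycles and uses its own encoding;
# `mapping` plays the role of the state-mapping step in the methodology.
high_level = ["IDLE", "LOAD", "EXEC", "DONE"]
rtl        = [0, 0, 1, 1, 1, 2, 2, 3, 3]
mapping    = {0: "IDLE", 1: "LOAD", 2: "EXEC", 3: "DONE"}

mapped = [mapping[s] for s in rtl]
print(destutter(mapped) == destutter(high_level))   # True: same state sequence
```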
APA, Harvard, Vancouver, ISO, and other styles
16

Carvalho, Paulo Roberto Bueno de. "Projeto de circuito oscilador controlado numericamente implementado em CMOS com otimização de área." Universidade de São Paulo, 2016. http://www.teses.usp.br/teses/disponiveis/3/3140/tde-26012017-085719/.

Full text
Abstract:
The aim of this work is the design of a digital integrated circuit for signal generation, called a Numerically Controlled Oscillator, implemented in 180 nm CMOS technology. The target application is an Electrical Bioimpedance Spectroscopy system, which can be used as a method for early detection of cervical cancer. Throughout the work, the spectroscopy system requirements and the specifications of the types of signals to be generated were studied. Furthermore, coding techniques in hardware description language for design optimization in terms of area, power consumption and operating frequency were researched in the literature. The digital design flow was studied, focusing on the Verilog hardware description language and on the results of logic synthesis and layout, in order to implement the circuit. Two architectures were evaluated, using some of the encoding techniques raised during the bibliographical study. These architectures were implemented, verified on a programmable platform, and synthesized and mapped to standard cells in the TSMC 180 nm process, and their area and total power consumption were compared. Based on the logic synthesis results, a 78% area reduction and an 83% total power consumption reduction were obtained for the circuit implemented with the optimization techniques, compared with the circuit implemented without optimization using an unrolled CORDIC architecture. The architecture with the smaller area, 0.017 mm², was chosen for fabrication in the mapped process. After the circuit fabrication and packaging, the chip was mounted on an evaluation board designed to evaluate its functionality. The test results were analyzed and compared with the simulation results, showing that the circuit works as expected. A maximum deviation of 0.00623% was observed between the output signal frequency obtained in simulation and that of the fabricated circuit.
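For readers unfamiliar with the circuit class, a numerically controlled oscillator reduces to a phase accumulator indexing a sine lookup table, with output frequency f_out = FTW * f_clk / 2^N. A minimal behavioral sketch follows; the parameter values are illustrative, not the thesis's design.

```python
import math

# Behavioral NCO: an N-bit phase accumulator whose top bits index a sine LUT.
N, LUT_BITS = 24, 8
LUT = [int(127 * math.sin(2 * math.pi * i / 2**LUT_BITS))
       for i in range(2**LUT_BITS)]

def nco(ftw, n_samples):
    phase, out = 0, []
    for _ in range(n_samples):
        phase = (phase + ftw) & (2**N - 1)        # accumulator wraps mod 2^N
        out.append(LUT[phase >> (N - LUT_BITS)])  # truncated phase indexes LUT
    return out

f_clk, f_out = 50e6, 1e6
ftw = round(f_out * 2**N / f_clk)                 # frequency tuning word
print(nco(ftw, 8))
```

The frequency resolution of such a design is f_clk / 2^N, which is why the accumulator is made much wider than the LUT index.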
APA, Harvard, Vancouver, ISO, and other styles
17

Fiedor, Jan. "Návrh a implementace nástroje pro formální verifikaci systémů specifikovaných jazykem RT logiky." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2009. http://www.nusl.cz/ntk/nusl-236750.

Full text
Abstract:
As system complexity grows, so does the risk of errors, which is why it is necessary to find and repair those errors effectively and reliably. For most real-time systems this holds doubly, because a single error can cause a complete system crash and result in catastrophe. Formal verification, in contrast to other methods, allows reliable verification that a system meets its requirements.
APA, Harvard, Vancouver, ISO, and other styles
18

Mansouri, Nazanin. "Automated Correctness Condition Generation for Formal Verification of Synthesized RTL Designs." University of Cincinnati / OhioLINK, 2001. http://rave.ohiolink.edu/etdc/view?acc_num=ucin982064542.

Full text
APA, Harvard, Vancouver, ISO, and other styles
19

Zhou, Zijian. "Multiway decision graphs and their applications in automatic formal verification of RTL designs." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1997. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp05/nq26757.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
20

Zheng, Yexin. "Novel RTD-Based Threshold Logic Design and Verification." Thesis, Virginia Tech, 2008. http://hdl.handle.net/10919/32011.

Full text
Abstract:
Innovative nano-scale devices have been developed to enhance future circuit design to overcome physical barriers hindering complementary metal-oxide semiconductor (CMOS) technology. Among the emerging nanodevices, resonant tunneling diodes (RTDs) have demonstrated promising electronic features due to their high speed switching capability and functional versatility. Great circuit functionality can be achieved through integrating heterostructure field-effect transistors (HFETs) in conjunction with RTDs to modulate effective negative differential resistance (NDR). However, RTDs are intrinsically suitable for implementing threshold logic rather than Boolean logic which has dominated CMOS technology in the past. To fully take advantage of such emerging nanotechnology, efficient design methodologies and design automation tools for threshold logic therefore become essential. In this thesis, we first propose novel programmable logic elements (PLEs) implemented in threshold gates (TGs) and multi-threshold threshold gates (MTTGs) by exploring RTD/HFET monostable-bistable transition logic element (MOBILE) principles. Our three-input PLE can be configured through five control bits to realize all the three-variable logic functions, which is, to the best of our knowledge, the first single RTD-based structure that provides complete logic implementation. It is also a more efficient reconfigurable circuit element than a general look-up table which requires eight configuration bits for three-variable functions. We further extend the design concept to construct a more versatile four-input PLE. A comprehensive comparison of three- and four-input PLEs provides an insightful view of design tradeoffs between performance and area. We present the mathematical proof of PLE's logic completeness based on Shannon Expansion, as well as the HSPICE simulation results of the programmable and primitive RTD/HFET gates that we have designed. An efficient control bit generating algorithm is developed by using a special encoding scheme to implement any given logic function. In addition, we propose novel techniques of formulating a given threshold logic in conjunctive normal form (CNF) that facilitates efficient SAT-based equivalence checking for threshold logic networks. Three different strategies of CNF generation from threshold logic representations are implemented. Experimental results based on MCNC benchmarks are presented as a complete comparison. Our hybrid algorithm, which takes into account input symmetry as well as input weight order of threshold gates, can efficiently generate CNF formulas in terms of both SAT solving time and CNF generating time.
Master of Science
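A threshold gate and a CNF encoding of it can be sketched directly from the definitions above. The enumeration-based encoding below is only for illustration (it is exponential in the number of inputs); the thesis develops far more efficient CNF generation strategies.

```python
from itertools import product

# Threshold gate: f(x) = 1 iff the weighted sum of inputs meets the threshold.
weights, T = [2, 1, 1], 2          # f(x) = 1 iff 2*x0 + x1 + x2 >= 2

def tg(x):
    return int(sum(w * xi for w, xi in zip(weights, x)) >= T)

# Naive CNF for "variable y equals tg(x)": one clause per input assignment a,
# reading (x != a) OR (y agrees with tg(a)). Vars 1..3 are x0..x2, var 4 is y;
# a negative literal denotes the negated variable (DIMACS convention).
clauses = []
for a in product([0, 1], repeat=3):
    lits = [(-(i + 1) if ai else i + 1) for i, ai in enumerate(a)]
    clauses.append(lits + [4 if tg(a) else -4])
print(clauses)
```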
APA, Harvard, Vancouver, ISO, and other styles
21

Schostak, Daniel Paul. "Methodology for the formal specification of RTL RISC processor designs (with particular reference to the ARM6)." Thesis, University of Leeds, 2003. http://etheses.whiterose.ac.uk/1314/.

Full text
Abstract:
Due to the need to meet the increasingly challenging objectives of increasing performance, reducing power consumption and reducing size, synchronous processor core designs have been increasing significantly in complexity for some time now. This applies even to those designs originally based on the RISC principle of reducing complexity in order to improve instruction throughput and performance. As designs increase in complexity, the difficulty of describing what the design does, and of demonstrating that the design does indeed do this, also increases. The usual practice of describing designs using natural languages rather than formal languages exacerbates this because of the ambiguities inherent in natural language descriptions. Hence this thesis is concerned with the development of a scalable methodology for the creation of formal descriptions of synchronous processor core designs. Not only does the methodology of this thesis provide a standardised approach for describing synchronous processor core designs, but the description it generates can also be used as a basis for formally verifying solutions to the problems that increasing complexity poses for traditional validation. The concept of different presentations of one description is part of the methodology and is used to reconcile differences in how the description is best used for one purpose or another. The methodology was developed for the formal specification of the ARM6 processor core, and this design therefore provides the primary example used in the thesis. Case studies of the use of the methodology with other processor cores and with a modernised version of the ARM6 are also discussed.
APA, Harvard, Vancouver, ISO, and other styles
22

Fough, Nazila. "Design and analysis of RTP circuit breaker for multimedia applications." Thesis, University of Aberdeen, 2015. http://digitool.abdn.ac.uk:80/webclient/DeliveryManager?pid=228630.

Full text
Abstract:
Live network multimedia applications (e.g., video conferencing, TV on demand) have become very popular in recent years and are expected to dominate Internet traffic in the near future. With multimedia and Internet-enabled devices being ubiquitous, mechanisms ensuring that multimedia flows do not congest the Internet are crucial components of multimedia systems that are embraced, rather than opposed, by network service providers. The emergence of browser-based multimedia conferencing applications using the WebRTC protocol, an open-source project aiming at Real-Time Communication (RTC) on the Web, and the wide deployment of these applications are expected to increase interactive real-time multimedia traffic on the Internet. RTP Media Congestion Avoidance Techniques (RMCAT) may be applied to WebRTC, but this is a long-term process and WebRTC deployments will occur before RMCAT is completed. New methods and quick solutions are therefore required to protect the network from uncontrolled media flows until the deployment of effective congestion control can be guaranteed. The RTP Circuit Breaker (RTP-CB) was proposed in March 2012 within the Internet Engineering Task Force (IETF). Rather than providing congestion control, the RTP-CB is designed only to protect the network by terminating RTP/UDP flows that cause excessive congestion. While the deployment of congestion control for RTP/UDP flows remains an open issue, the design of an RTP-CB as a quick solution for protecting the current Internet is the main focus of this work. In this work, by analysing UDP traffic over a capacity-limited path, an RTP-CB algorithm is designed. A packet sniffer (a C routine) is then written to capture and analyse all RTP/UDP, TCP, RTCP SR and RTCP RR traffic and, based on the designed algorithm, this code is developed further to work as an RTP-CB that can be deployed at the receiver or the sender. After deployment of the RTP-CB for RTP/UDP flows in a controlled network, its performance in a range of scenarios using only its congestion rule is evaluated. The evaluation showed shortcomings in the performance of the RTP-CB under certain conditions when only the congestion rule was used. The performance of the RTP-CB is evaluated from two perspectives. First, the thesis considers network performance metrics, such as the frequency at which the circuit breaker triggers. It then considers the experience of multimedia users, accounting for all outcomes for all users: those congesting the network (whose flows are terminated), those that did not (and are rewarded by reduced congestion), as well as flows that, without severely congesting the network, obtained little quality from a multimedia session and consumed network resources to no avail. Building on the knowledge gathered in these experiments, extensions to the RTP-CB rules (a media usability rule) are proposed and evaluated. This evaluation streams video flows over IP networks using a dedicated test-bed and the proposed RTP-CB, assessing the effect of network conditions (packet loss, jitter and network capacity constraints) on the transmission of different types of video stream with and without the proposed media usability rule. The experiments prove that an RTP-CB implementing the congestion rule alone can offer adequate protection to a network, but that it does not perform well in some conditions, for example when the bottleneck buffer size is small. They also confirm that the proposed (computationally inexpensive) modifications to the RTP-CB rules improve its performance. The results of these experiments and the media usability rule were introduced in IETF RTP-CB draft version 07 of October 27, 2014, and later versions acknowledge contributions by the author of this thesis.
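The congestion-rule idea can be caricatured in a few lines: estimate what a TCP flow would achieve on the same path from RTCP-reported loss and RTT, and break the circuit if the media rate persistently exceeds a multiple of it. The throughput model, constants, packet size, and reports below are illustrative assumptions, not the normative values from the IETF drafts.

```python
import math

# Simplified TCP throughput model (Mathis-style), in bytes per second.
def tcp_rate(pkt_size, rtt, loss):
    return pkt_size / (rtt * math.sqrt(2 * loss / 3))

def circuit_breaker(reports, media_rate, k=10, intervals=2):
    """Trip if the media rate exceeds k x the TCP-fair rate for several
    consecutive RTCP reporting intervals; reports are (rtt_s, loss) pairs."""
    strikes = 0
    for rtt, loss in reports:
        if loss > 0 and media_rate > k * tcp_rate(1200, rtt, loss):
            strikes += 1
            if strikes >= intervals:       # persistent congestion: stop flow
                return True
        else:
            strikes = 0
    return False

print(circuit_breaker([(0.2, 0.15), (0.25, 0.2)], media_rate=2e6))  # True
```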
APA, Harvard, Vancouver, ISO, and other styles
23

Abou-Senna, Hatem. "Microscopic Assessment of Transportation Emissions on Limited Access Highways." Doctoral diss., University of Central Florida, 2012. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/5090.

Full text
Abstract:
On-road vehicles are a major source of transportation carbon dioxide (CO2) greenhouse gas emissions in all developed countries, and in many of the developing countries of the world. Similarly, several criteria air pollutants are associated with transportation, e.g., carbon monoxide (CO), nitrogen oxides (NOx), and particulate matter (PM). Accurately quantifying transportation-related emissions from vehicles is therefore essential. In the past, transportation agencies and researchers estimated emissions using one average speed and volume over a long stretch of roadway. With MOVES, there is an opportunity for higher precision and accuracy: integrating a microscopic traffic simulation model (such as VISSIM) with MOVES allows one to obtain precise and accurate emissions estimates. The new United States Environmental Protection Agency (USEPA) mobile source emissions model, MOVES2010a (MOVES), can estimate vehicle emissions on a second-by-second basis, creating the opportunity to develop new software, "VIMIS 1.0" (VISSIM/MOVES Integration Software), to facilitate the integration process. This research presents a microscopic examination of five key transportation parameters (traffic volume, speed, truck percentage, road grade and temperature) on a 10-mile stretch of the Interstate 4 (I-4) test bed prototype, an urban limited-access highway corridor in Orlando, Florida. The analysis was conducted using VIMIS 1.0 and an advanced custom design technique, the D-optimality and I-optimality criteria, to identify active factors and to ensure precision in estimating the regression coefficients as well as the response variable. The analysis of the experiment identified the optimal settings of the key factors and resulted in the development of Micro-TEM (Microscopic Transportation Emissions Meta-Model). The main purpose of Micro-TEM is to serve as a substitute model for predicting transportation emissions on limited-access highways to an acceptable degree of accuracy, in lieu of running simulations with a traffic model and integrating the results in an emissions model. Furthermore, significant emission rate reductions were observed in the experiment on the modeled corridor, especially for speeds between 55 and 60 mph, while maintaining up to 80% and 90% of the freeway's capacity. However, vehicle activity characterization in terms of speed was shown to have a significant impact on the emission estimation approach. Four different approaches were further examined to capture the environmental impacts of vehicular operations on the modeled test bed prototype. First, at the most basic level, emissions were estimated for the entire 10-mile section "by hand" using one average traffic volume and average speed. Then, three more detailed levels were studied using VISSIM/MOVES to analyze smaller links: average speeds and volumes (AVG), second-by-second link driving schedules (LDS), and second-by-second operating mode distributions (OPMODE). This research analyzed how the various approaches affect predicted emissions of CO, NOx, PM and CO2. The results demonstrated that obtaining accurate and comprehensive operating mode distributions on a second-by-second basis improves emission estimates; specifically, emission rates were found to be highly sensitive to stop-and-go traffic and the associated driving cycles of acceleration, deceleration, frequent braking/coasting and idling. Using the AVG or LDS approach may, respectively, overestimate or underestimate emissions compared to an operating mode distribution approach. Additionally, model applications and mitigation scenarios were examined on the modeled corridor to evaluate the environmental impacts in terms of vehicular emissions and, at the same time, validate the developed model, Micro-TEM. Mitigation scenarios included the future implementation of managed lanes (ML) alongside the general use lanes (GUL) on the I-4 corridor, the currently implemented variable speed limits (VSL) scenario, and a hypothetical restricted truck lane (RTL) scenario. Results of the mitigation scenarios showed an overall speed improvement on the corridor, which resulted in an overall reduction in emissions and emission rates when compared to the existing condition (EX) scenario, and specifically on a link-by-link basis for the RTL scenario. The proposed emission rate estimation process can also be extended to gridded emissions for ozone modeling, or to localized air quality dispersion modeling, where the temporal and spatial resolution of emissions is essential to predict the concentration of pollutants near roadways.
Ph.D.
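The OPMODE-level estimation that the study found most accurate amounts to binning per-second vehicle activity into operating modes and weighting each second by a per-mode emission rate. The bins and rates below are invented placeholders, not MOVES values.

```python
# Per-second operating-mode aggregation of emissions (all numbers invented).
RATES_CO2_G_PER_S = {"idle": 0.8, "cruise": 2.4, "accel": 5.1, "decel": 0.6}

def classify(speed_mph, accel_mphps):        # toy operating-mode binning
    if speed_mph < 1:
        return "idle"
    if accel_mphps > 1:
        return "accel"
    if accel_mphps < -1:
        return "decel"
    return "cruise"

def co2_grams(speed_trace):                  # one speed sample per second
    total, prev = 0.0, speed_trace[0]
    for v in speed_trace:
        total += RATES_CO2_G_PER_S[classify(v, v - prev)]
        prev = v
    return total

print(co2_grams([0, 0, 5, 12, 20, 28, 30, 30, 30, 27, 22]))
```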
APA, Harvard, Vancouver, ISO, and other styles
24

Berg, Jens, and Tony Högye. "Reifying Game Design Patterns : A Quantitative Study of Real Time Strategy Games." Thesis, Uppsala universitet, Institutionen för speldesign, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-324158.

Full text
Abstract:
Communicating design is in many respects a difficult process. Game design consists not only of directives on look and feel, but also carries intentionality. To properly convey intentionality, a common abstract vocabulary is a well-established means of expressing design, and game design patterns are an attempt to formalize and establish such a vocabulary. Game design patterns are a debated tool, and this paper aims to examine the practical application of a pattern through a quantitative study in order to strengthen the potential for a more cohesive definition of the term. This is done by first establishing a game design pattern through observation of RTS games. The pattern is then studied through implementation in three commercial RTS games. The results focus on quantitative data, related to game pacing, gathered from AI vs. AI matches. Through testing and analysis of the AI matches, it can be stated that game design patterns in a contextualized setting support the idea of using game design patterns as a formal tool. It was further concluded that using AI also brings limitations in how applicable the collected data is to the overall design of the games. Additional studies using quantitative data in conjunction with qualitative observations could lend further support to game design patterns as a useful tool for both researchers and developers.
APA, Harvard, Vancouver, ISO, and other styles
25

Wang, Hongjie. "Global Optimization of Nonconvex Factorable Programs with Applications to Engineering Design Problems." Thesis, Virginia Tech, 1998. http://hdl.handle.net/10919/36823.

Full text
Abstract:
The primary objective of this thesis is to develop and implement a global optimization algorithm to solve a class of nonconvex programming problems, and to test it using a collection of engineering design problem applications. The class of problems we consider involves the optimization of a general nonconvex factorable objective function over a feasible region that is restricted by a set of constraints, each of which is defined in terms of nonconvex factorable functions. Such problems find widespread applications in production planning, location and allocation, chemical process design and control, VLSI chip design, and numerous engineering design problems. This thesis offers a first comprehensive methodological development and implementation for determining a global optimal solution to such factorable programming problems. To solve this class of problems, we propose a branch-and-bound approach based on linear programming (LP) relaxations generated through various approximation schemes that utilize, for example, the Mean-Value Theorem and Chebyshev interpolation polynomials, coordinated with a Reformulation-Linearization Technique (RLT). The initial stage of the lower bounding step generates a tight, nonconvex polynomial programming relaxation for the given problem. Subsequently, an LP relaxation is constructed for the resulting polynomial program via a suitable RLT procedure. The underlying motivation for these two steps is to generate a tight outer approximation of the convex envelope of the objective function over the convex hull of the feasible region. The bounding step is then integrated into a general branch-and-bound framework. The construction of the bounding polynomials and the node partitioning schemes are specially designed so that the gaps resulting from these two levels of approximations approach zero in the limit, thereby ensuring convergence to a global optimum. Various implementation issues regarding the formulation of such tight bounding problems using both polynomial approximations and RLT constructs are discussed. Different practical strategies and guidelines relating to the design of the algorithm are presented within a general theoretical framework so that users can customize a suitable approach that takes advantage of any inherent special structures that their problems might possess. The algorithm is implemented in C++, an object-oriented programming language. The class modules developed for the software perform various functions that are useful not only for the proposed algorithm, but that can be readily extended and incorporated into other RLT-based applications as well. Computational results are reported on a set of fifteen engineering process control and design test problems from various sources in the literature. It is shown that, for all the test problems, a very competitive computational performance is obtained. In most cases, the LP solution obtained for the initial node itself provides a very tight lower bound. Furthermore, for nine of these fifteen problems, the application of a local search heuristic based on initializing the nonlinear programming solver MINOS at the node zero LP solution produced the actual global optimum. Moreover, in finding a global optimum, our algorithm discovered better solutions than the ones previously reported in the literature for two of these test instances.
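For readers unfamiliar with the RLT bounding idea mentioned above, the following bound-factor construction is the standard textbook illustration (general global-optimization background, not reproduced from the thesis itself): for a bilinear term w = xy with box bounds on x and y, multiplying nonnegative bound factors and replacing xy by w yields a linear relaxation.

```latex
% Bound-factor products linearized with w := xy (standard RLT/McCormick step)
% over the box x \in [x^L, x^U], y \in [y^L, y^U].
\begin{align*}
(x - x^L)(y - y^L) \ge 0 &\;\Longrightarrow\; w \ge x^L y + y^L x - x^L y^L,\\
(x^U - x)(y^U - y) \ge 0 &\;\Longrightarrow\; w \ge x^U y + y^U x - x^U y^U,\\
(x - x^L)(y^U - y) \ge 0 &\;\Longrightarrow\; w \le y^U x + x^L y - x^L y^U,\\
(x^U - x)(y - y^L) \ge 0 &\;\Longrightarrow\; w \le x^U y + y^L x - x^U y^L.
\end{align*}
```

Partitioning the boxes during branch and bound tightens these envelopes, which is what drives the approximation gaps to zero in the limit.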
Master of Science
APA, Harvard, Vancouver, ISO, and other styles
26

Balakrishnan, Aarathi. "Design and analysis of user interface for radiology teaching file (RTF)." [Gainesville, Fla.]: University of Florida, 2003. http://purl.fcla.edu/fcla/etd/UFE0000637.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Karlsson, Simon. "Real-time Location System with Passive RFID for surveillance of trusted objects in a room." Thesis, Linköpings universitet, Institutionen för datavetenskap, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-63803.

Full text
Abstract:
Radio Frequency Identification (RFID) in Real-Time Location Systems (RTLS) for asset management has seen limited use, mainly in large organizations such as hospitals and the military. Research in this area is making progress, and new solutions with reduced costs and greater resolution are being presented by different companies, enabling the technology to be used in new operating areas. This thesis is about the development, implementation and integration of an RTLS solution that enables surveillance of the position of keys. The solution utilizes RTLS hardware to receive the positions of the keys. The report describes how the RTLS hardware was selected and how the software solution was designed and implemented, as well as how the finished solution's software and hardware cooperate. The most vital problem was to create an efficient zone structure that implements the surveillance hierarchy of the keys. The thesis was conducted at a company (PAAM Systems) that offers solutions in access and asset management and aims to use an RTLS in an asset management application for keys. The purpose of this work is to examine the existing solutions on the market that provide an RTLS with passive RFID technology.
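The "zone structure" mentioned above can be pictured as a containment hierarchy over rooms and floors. The sketch below is purely illustrative (the class, zone names, and layout are invented, not taken from the thesis):

```python
# Illustrative sketch of a surveillance zone hierarchy for tracked keys.
# Zone names and structure are hypothetical, not from the thesis.
from dataclasses import dataclass, field

@dataclass
class Zone:
    name: str
    children: list["Zone"] = field(default_factory=list)

    def contains(self, zone_name: str) -> bool:
        """True if zone_name is this zone or lies anywhere beneath it."""
        if self.name == zone_name:
            return True
        return any(child.contains(zone_name) for child in self.children)

building = Zone("building", [
    Zone("floor1", [Zone("room101"), Zone("room102")]),
    Zone("floor2", [Zone("room201")]),
])

# A key last read in room102 is still considered inside the trusted building.
print(building.contains("room102"))   # True
floor2 = building.children[1]
print(floor2.contains("room101"))     # False: outside this branch of the tree
```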
APA, Harvard, Vancouver, ISO, and other styles
28

Moye, Charles David. "The Design and Implementation of a Spatial Partitioner for use in a Runtime Reconfigurable System." Thesis, Virginia Tech, 1999. http://hdl.handle.net/10919/34445.

Full text
Abstract:

Microprocessors have difficulty addressing the demands of today's high-performance embedded applications. ASICs are a good solution to the speed concerns, but their cost and time to market can make them impractical for some needs. Configurable Computing Machines (CCMs) provide a cost-effective way of creating custom components; however, oftentimes it would be better if there were a way to change the configuration of the CCM as a program is executing. An efficient way of doing this is with Runtime Reconfigurable (RTR) computing architectures.

In an RTR system, one challenging problem is the assignment of operators onto the array of processing elements (PEs) in a way that simultaneously minimizes both the number of PEs used and the number of interconnections between them for each configuration. This job is automated through the use of a software program referred to as the Spatial Partitioner.

The design and implementation of the Spatial Partitioner is the subject of this work. The Spatial Partitioner developed herein uses an iterative, recursive algorithm along with cluster refinement to find a reasonably efficient allocation of operators onto the target platform in a reasonable amount of time. Information about the topology of the target platform is used throughout the execution of the algorithm to ensure that the resulting solution is legal in terms of layout.
Master of Science

APA, Harvard, Vancouver, ISO, and other styles
29

Montagut, Climent Mario Alberto. "DESIGN, DEVELOPMENT AND EVALUATION OF AN ADAPTIVE AND STANDARDIZED RTP/RTCP-BASED IDMS SOLUTION." Doctoral thesis, Universitat Politècnica de València, 2015. http://hdl.handle.net/10251/48549.

Full text
Abstract:
Nowadays, we are witnessing a transition from physical togetherness towards networked togetherness around media content. Novel forms of shared media experiences are gaining momentum, allowing geographically distributed users to concurrently consume the same media content while socially interacting (e.g., via text, audio or video chat). Relevant use cases are, for example, Social TV, networked games and multi-party conferencing. However, realizing enjoyable shared media services faces many challenges. In particular, a key technological enabler is the concurrent synchronization of the media playout across multiple locations, which is known as Inter-Destination Multimedia Synchronization (IDMS). This PhD thesis presents an inter-operable, adaptive and accurate IDMS solution, based on extending the capabilities of the standard RTP/RTCP protocols (RFC 3550). Concretely, two new RTCP messages for IDMS have been defined to carry the information necessary to achieve IDMS. Such RTCP extensions have been standardized within the IETF, in RFC 7272. In addition, novel standards-compliant Early Event-Driven (EED) RTCP feedback reporting mechanisms have also been designed to enhance the performance in terms of interactivity, flexibility, dynamism and accuracy when performing IDMS. The designed IDMS solution makes use of globally synchronized clocks (e.g., using NTP) and can adopt different (centralized and distributed) architectural schemes to exchange the RTCP messages for IDMS. This allows IDMS to be provided efficiently in a variety of networked scenarios and applications, with different requirements (e.g., interactivity, scalability, robustness…) and available resources (e.g., bandwidth, latency, multicast support…). Likewise, various monitoring and control algorithms, such as dynamic strategies for selecting the reference timing to synchronize with, and fault tolerance mechanisms, have been added. Moreover, the proposed IDMS solution includes a novel Adaptive Media Playout (AMP) technique, which aims to smoothly adjust the media playout rate, within perceptually tolerable ranges, every time an asynchrony threshold is exceeded. Prototypes of the IDMS solution have been implemented both in a simulator and in a real media framework. The evaluation tests prove the consistent behavior and the satisfactory performance of each one of the designed components (e.g., protocols, architectural schemes, master selection policies, adjustment techniques…). Likewise, comparison results between the different developed alternatives for such components are also provided. In general, the obtained results demonstrate the ability of this RTP/RTCP-based IDMS solution to concurrently and independently maintain an overall synchronization status (within allowable limits) in different logical groups of users, while avoiding annoying playout discontinuities and adding hardly any computation and traffic load.
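The Adaptive Media Playout technique described above lends itself to a compact illustration. In the sketch below, the thresholds, gain and the 25% bound are invented placeholders rather than values from the thesis; only the general shape (adjust the rate within a tolerable band once asynchrony exceeds a threshold) follows the abstract:

```python
# Illustrative Adaptive Media Playout (AMP) sketch: nudge the playout rate,
# within a perceptually tolerable band, when asynchrony exceeds a threshold.
# All constants are hypothetical placeholders, not values from the thesis.

ASYNC_THRESHOLD_MS = 80      # act only beyond this playout-time difference
MAX_RATE_DEVIATION = 0.25    # keep rate within +/-25% of nominal
GAIN = 0.002                 # rate change per ms of asynchrony

def playout_rate(asynchrony_ms: float) -> float:
    """Return a playout-rate multiplier (1.0 = nominal speed).

    asynchrony_ms > 0 means this receiver lags the reference and should
    speed up; < 0 means it is ahead and should slow down.
    """
    if abs(asynchrony_ms) <= ASYNC_THRESHOLD_MS:
        return 1.0  # within tolerance: leave the playout untouched
    adjustment = GAIN * asynchrony_ms
    # Clamp to the perceptually tolerable band around nominal speed.
    adjustment = max(-MAX_RATE_DEVIATION, min(MAX_RATE_DEVIATION, adjustment))
    return 1.0 + adjustment

for lag in (-300, -50, 0, 120, 400):
    print(f"asynchrony {lag:+5d} ms -> rate x{playout_rate(lag):.3f}")
```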
Montagut Climent, MA. (2015). DESIGN, DEVELOPMENT AND EVALUATION OF AN ADAPTIVE AND STANDARDIZED RTP/RTCP-BASED IDMS SOLUTION [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/48549
THESIS
Award-winning
APA, Harvard, Vancouver, ISO, and other styles
30

Pineschi, Vinicius. "El rol del UX Design en la era de la transformación digital." Universidad Peruana de Ciencias Aplicadas (UPC), 2020. http://hdl.handle.net/10757/653481.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Warren, Ashley N. "Disrupting the Connotation of Response to Innovation at the Secondary Level Through Design Thinking." Miami University / OhioLINK, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=miami1561990253714983.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Belhadj, Mohamed Hichem. "Spécification et synthèse de systèmes à controle intensif." Grenoble INPG, 1996. http://www.theses.fr/1996INPG0084.

Full text
Abstract:
Successfully delivering new products that are reliable, high-performing and inexpensive sums up the current challenges of the electronics market. Meeting these challenges requires a sound choice of modelling and specification methods and the use of tools specific to the targeted applications and technologies. This thesis offers some answers for the specification and synthesis of control-intensive systems on programmable technologies such as FPGAs and CPLDs. The specification aspects studied concern abstract models, description languages and graphical tools. Besides introducing a new model for describing communicating controllers, comparisons based on the fundamental concepts of modelling, i.e. hierarchy, modularity, concurrency, synchronization, etc., are suggested. Synthesis flows oriented towards control-dominated applications and specific to the targeted technologies are introduced. One of the contributions of this thesis is the systematic search for effective means of facilitating the exploration of the solution space. In the context of controller synthesis, a state-encoding selection strategy is presented, founded on an original characterization of the encodings, the target technologies and the complexity of the controllers considered. Finally, open problems are highlighted and directions for fundamental and applied research are proposed.
APA, Harvard, Vancouver, ISO, and other styles
33

Akyel, Kaya Can. "Statistical methodologies for modelling the impact of process variability in ultra-deep-submicron SRAMs." Thesis, Grenoble, 2014. http://www.theses.fr/2014GRENT080/document.

Full text
Abstract:
The downscaling of device geometry towards its physical limits exacerbates the impact of the inevitable atomistic phenomena tied to matter granularity. In this context, many different variability sources arise and affect the electrical characteristics of the manufactured devices. Variability-aware design methodology has therefore become a popular research topic in the field of digital circuit design, since the increased number of transistors in modern integrated circuits has led to a large statistical variability that dramatically affects circuit functionality. Static Random Access Memory (SRAM) circuits, which are manufactured with the most aggressive design rules in a given technology node and contain billions of transistors, are severely impacted by process variability, which stands as the main obstacle to further reduction of the bitcell area and of its minimum operating voltage. The reduction of the latter is a very important parameter for low-power design, one of the most popular research fields of our era. The optimization of SRAM bitcell design has therefore become a crucial task to guarantee the functionality of the design at an industrial manufacturing level, while at the same time answering the high-density and low-power demands. However, the long time required for each new technology node's process development means a long wait before silicon results are available, in stark contrast with the fact that design optimization has to be started as early as possible. An efficient SPICE characterization methodology for the minimum operating voltage of SRAM circuits is therefore a mandatory requirement for design optimization. This research work concentrates on the development of new simulation methodologies for modeling process variability in ultra-deep-submicron SRAMs, with the ultimate goal of accurately modeling the minimum operating voltage Vmin. Particular attention is also given to the time-dependent sub-class of process variability, which appears as a change in the electrical characteristics of a given transistor during its operation and over its lifetime. This research work has led to many publications and one patent application. The majority of the findings are retained by the STMicroelectronics SRAM development team for further use in their design optimization flow.
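To give a flavour of the kind of statistical characterization the abstract describes, the sketch below runs a toy Monte Carlo failure model; the distributions, margin model and failure criterion are invented for illustration and are in no way the author's actual SPICE flow:

```python
# Toy Monte Carlo sketch: estimate SRAM bitcell failure probability versus
# supply voltage under random threshold-voltage mismatch. Distributions,
# margins and the failure criterion are invented for illustration only.
import random

random.seed(42)

def bitcell_fails(vdd: float, sigma_vt: float = 0.04) -> bool:
    """A bitcell 'fails' when random Vt mismatch eats the read margin."""
    mismatch = random.gauss(0.0, sigma_vt)          # per-cell Vt offset (V)
    read_margin = 0.35 * vdd - abs(mismatch) * 2.0  # toy margin model (V)
    return read_margin < 0.05                       # minimum usable margin

def failure_probability(vdd: float, trials: int = 100_000) -> float:
    return sum(bitcell_fails(vdd) for _ in range(trials)) / trials

for vdd in (0.6, 0.7, 0.8, 0.9, 1.0):
    print(f"Vdd = {vdd:.1f} V -> P(fail) ~ {failure_probability(vdd):.4f}")
```

Sweeping the supply voltage this way yields the failure-rate-versus-Vdd curve from which a Vmin at a target yield can be read off.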
APA, Harvard, Vancouver, ISO, and other styles
34

Björklén, Simon. "Extending Modelica with High-Level Data Structures: Design and Implementation in OpenModelica." Thesis, Linköping University, Department of Computer and Information Science, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-12148.

Full text
Abstract:

Modelica is an equation-based object-oriented (EOO) language. PELAB at Linköping University, along with the OpenModelica development group, is developing a metamodeling extension to this language, MetaModelica, together with a compiler called the OpenModelica Compiler (OMC).

The goal of this thesis was to analyze the compiler, extend it with union type support, and then write a report about the extension, covering union types in particular and extension with high-level data structures in general, to facilitate further development.

The implementation in this thesis was made with the goal of keeping the current structure intact and extending case-clauses where possible. The main parts of the extension are implemented by this thesis work, but some parts concerning the pattern-matching algorithms remain to be extended. The ultimate aim is to bootstrap the OpenModelica Compiler, making it able to compile itself, although this is still a goal for the future.

With this thesis I also introduce some guidelines for implementing a new high-level data structure in the compiler, and indicate which modules need extension.
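For readers unfamiliar with union types in this setting, here is a rough analogy in Python (MetaModelica's actual uniontype and match syntax differs; this only conveys the idea of a tagged union over expression trees):

```python
# Rough analogy of a union type (tagged union / algebraic data type) for an
# expression tree, of the kind a MetaModelica 'uniontype' describes.
# Python 3.10+ for the match statement; MetaModelica syntax differs.
from dataclasses import dataclass
from typing import Union

@dataclass
class IntConst:
    value: int

@dataclass
class Add:
    lhs: "Exp"
    rhs: "Exp"

Exp = Union[IntConst, Add]  # the union type: an Exp is one of these records

def evaluate(e: Exp) -> int:
    # Pattern-match on the record tag, as a match-expression would.
    match e:
        case IntConst(value=v):
            return v
        case Add(lhs=l, rhs=r):
            return evaluate(l) + evaluate(r)
    raise TypeError(f"not an Exp: {e!r}")

print(evaluate(Add(IntConst(1), Add(IntConst(2), IntConst(3)))))  # 6
```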

APA, Harvard, Vancouver, ISO, and other styles
35

Monticeli, Francisco Maciel [UNESP]. "Otimização da determinação de vazios em compósitos híbridos processados por RTM." Universidade Estadual Paulista (UNESP), 2017. http://hdl.handle.net/11449/151336.

Full text
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Hybrid composites arose with the aim of reducing the amount of high-cost materials while, at the same time, maintaining mechanical properties suitable for use. In addition, by using different reinforcements, a new material can be obtained that exhibits the advantages of each reinforcement while simultaneously reducing their disadvantages. An important factor during the processing of polymeric composites for structural application is the control of void formation, since voids act as stress concentrators. The aim of this work was to produce hybrid composites using different stackings of glass and carbon fabrics without significant loss of mechanical properties. For the composite manufacturing, the preform was first characterized with respect to impregnation, for which a general analytical model was developed that determines the permeability parameter of the hybrid composites. In addition, the model was validated by conducting permeability tests, and the quality of several laminates (hybrid and non-hybrid) was certified by processing them and quantifying their void fraction. This project also proposed an improvement of void analysis by the Hg porosimetry technique with the support of design of experiments. It was therefore possible to determine the volumetric fraction of open and closed voids, the pore diameter distribution, and the distance between voids, in a joint analysis with acid digestion and optical microscopy. The carbon preform presented high flow resistance; the opposite behavior was observed for the glass preform. The hybrid architecture presented a positive hybrid effect, meaning a synergy that provided a higher permeability value. Therefore, an optimization of injection time can be achieved, considering a combination of balanced glass and carbon fabrics. The analytical model was able to predict the flow-front behavior, overestimating it by 10%. The Hg porosimetry technique was validated for advanced composite void analysis, with results similar to those obtained by acid digestion and optical microscopy. Based on the pore diameter values, which were similar for all composites, it was concluded that pore diameter is a function of the type of process and the resin, while the distance between pores and the open pore fraction depend directly on the amount of pores along the laminate. Hybrid composites proved to be a promising material for aeronautical application, combining the excellent mechanical properties of carbon fiber with the viability of the fiberglass injection cycle. With that, the hybrid 2 laminate was the ideal composite, processed with a void fraction close to the aeronautical use requirement while reducing high-cost material and processing time.
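The permeability characterization described above rests on classical flow theory. For orientation only (standard RTM background, not the author's specific model), Darcy's law and the one-dimensional constant-pressure fill relation used to extract permeability from flow-front measurements are:

```latex
% Darcy's law for resin flow through a fibrous preform, and the 1D
% constant-pressure fill relation linking permeability K to the measured
% flow-front position x_f(t). Standard RTM background, not the thesis model.
\[
\mathbf{v} \;=\; -\frac{K}{\mu}\,\nabla P ,
\qquad
x_f^{\,2}(t) \;=\; \frac{2\,K\,\Delta P}{\phi\,\mu}\,t
\]
% K: preform permeability, \mu: resin viscosity,
% \Delta P: injection pressure difference, \phi: preform porosity.
```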
FAPESP: 2015/19967-4
APA, Harvard, Vancouver, ISO, and other styles
36

Soares, Klein Nayara. "El Rol físico del agua en mezclas de cemento Portland." Doctoral thesis, Universitat Politècnica de Catalunya, 2012. http://hdl.handle.net/10803/107994.

Full text
Abstract:
Water is one of the fundamental components of concrete, not only for its role in the hydration of Portland cement, but also because of the physical functions it develops, which are associated with the main phases of concrete life: fresh state, hardened state and the useful life of the structures. The objective of this PhD thesis is to study in detail the physical role of water in Portland cement mixtures: its absorption by the aggregates, and the wetting and fluidization of the granular skeletons that compose the cement pastes. The study covers the mathematical modelling of the mentioned physical functions in such a way that it is possible to calculate the volume of water necessary to perform such functions, facilitating the mix-design process. The calculated volume is considered to be the total volume of water needed for production. Moreover, the calculation must take into account the conditions and constraints associated with production and casting, as well as the technical requirements of the material to be designed. The modelling of the water's physical functions allowed the development of a calculation method to quantify the approximate volume of water needed for concrete production. The developed method was used to calculate the volume of water of three different special concretes: a lightweight self-compacting concrete reinforced with fibres, an ultra-high performance concrete reinforced with steel fibres, and a concrete with recycled aggregates. In addition, the volume of water for two conventional concretes, with compressive strengths of 25 and 30 MPa, was calculated. Since the calculation was based on granular skeletons for real mixtures, produced in the laboratory and/or industrially, the results obtained through the use of the developed method were compared to the experimental results of each concrete. Finally, the method was used to quantify the volume of paste necessary for the production of a porous concrete. The results show that the mathematical models used to describe the physical phenomena of absorption, wetting and fluidization fit well to the experimental reproduction of these phenomena. Corrections are needed in some situations due to the ideal boundary conditions adopted during modelling, which facilitate calculation; in any case, the errors are corrected through the use of adjustment coefficients. Therefore, the calculation method developed has proven itself effective and applicable in the mix design of different types of conventional and special concretes, showing the potential to be used for the development of new materials.
APA, Harvard, Vancouver, ISO, and other styles
37

Dunn, Steven C. "Design and applications of volume holographic optical elements." Doctoral diss., University of Central Florida, 2001. http://digital.library.ucf.edu/cdm/ref/collection/RTD/id/2545.

Full text
Abstract:
University of Central Florida College of Engineering Thesis
Volume gratings were studied both theoretically and experimentally in order to design and analyze practical volume holographic optical elements. The diffraction of finite (Gaussian) beams by transmission gratings is investigated.
Ph.D.
Doctorate
Department of Electrical Engineering and Computer Science
Engineering
Electrical Engineering and Computer Science
225 p.
xvi, 225 leaves, bound : ill. ; 28 cm.
APA, Harvard, Vancouver, ISO, and other styles
38

Palacharla, Sridevi. "Design and implementation of a multimedia presentation system using Real-time Transport Protocol (RTP)." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1997. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp01/MQ27004.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Palacharla, Sridevi. "Design and implementation of a multimedia presentation system using real-time transport protocol (RTP)." Carleton University dissertation, Department of Systems and Computer Engineering. Ottawa, 1997.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
40

Bishop, Carlton Delos. "Finite impulse response filter design using cosine series functions." Doctoral diss., University of Central Florida, 1988. http://digital.library.ucf.edu/cdm/ref/collection/RTD/id/43377.

Full text
Abstract:
University of Central Florida College of Engineering Thesis
Window functions have been extensively used for the design of SAW filters. The classical truncated cosine-series functions, such as the Hamming and Blackman functions, are only a few of an infinite set of such functions. The derivation of this set of functions from orthonormal basis sets and the criteria for obtaining the constant coefficients of the functions are presented. These functions are very useful because of their closed-form expressions and their easily recognizable Fourier transforms. Another approach, to the design of Gaussian-shaped filters having a desired sidelobe level using a 40-term cosine series, is presented as well. This approach is again non-iterative, and a filter with a near equi-ripple sidelobe level can be achieved. A deconvolution technique is also presented. It has the advantage of being non-iterative, simple and fast. This design method produces results comparable to the Dolph-Chebyshev technique.
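For context, a truncated cosine-series window has the standard form below, and the familiar Hamming and Blackman windows are its two- and three-term members (standard textbook definitions, not taken from the dissertation):

```latex
% General truncated cosine-series window of length N, centered at n = 0
\[
w(n) \;=\; \sum_{k=0}^{K} a_k \cos\!\left(\frac{2\pi k n}{N-1}\right),
\qquad n = -\tfrac{N-1}{2}, \dots, \tfrac{N-1}{2}
\]
% Classical members (coefficients from the standard literature):
%   Hamming:  a_0 = 0.54,  a_1 = 0.46
%   Blackman: a_0 = 0.42,  a_1 = 0.50,  a_2 = 0.08
```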
Ph.D.
Doctorate
Electrical Engineering and Communication
Engineering
Electrical Engineering
41 p.
vii, 41 leaves, bound : ill. ; 28 cm.
APA, Harvard, Vancouver, ISO, and other styles
41

Di, Jia. "Energy aware design and analysis for synchronous and asynchronous circuits." Doctoral diss., University of Central Florida, 2004. http://digital.library.ucf.edu/cdm/ref/collection/RTD/id/3736.

Full text
Abstract:
University of Central Florida College of Engineering Thesis
Power dissipation has become a major concern for IC designers. Various low-power design techniques have been developed for synchronous circuits. Asynchronous circuits, however, have gained more interest recently due to their benefits in lower noise, easy timing control, etc., yet few publications on energy reduction techniques for asynchronous logic are available. Power awareness indicates the ability of the system power to scale with changing conditions and quality requirements. Scalability is an important figure of merit, since it allows the end user to implement an operational policy, just as the user of mobile multimedia equipment needs to select between better quality and longer battery operation time. This dissertation discusses power/energy optimization and performs analysis on both synchronous and asynchronous logic.
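As background for the kind of analysis such a dissertation performs (a textbook first-order model, not a result of the work itself), dynamic power in CMOS logic scales as:

```latex
% First-order CMOS dynamic power model (textbook background)
\[
P_{dyn} \;=\; \alpha \, C_{L} \, V_{DD}^{2} \, f
\]
% \alpha: switching activity factor, C_L: switched load capacitance,
% V_{DD}: supply voltage, f: clock (or operation) frequency.
```

The quadratic dependence on supply voltage is why voltage scaling dominates most low-power techniques, for synchronous and asynchronous circuits alike.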
Ph.D.
Doctorate
Department of Electrical and Computer Engineering
Engineering and Computer Science
Electrical and Computer Engineering
163 p.
xv, 163 leaves, bound : ill. ; 28 cm.
APA, Harvard, Vancouver, ISO, and other styles
42

Lihitkar, Shalini R. "Design and Develop IR of Electronic Theses of Social-Sciences of RTM, Nagpur University, Nagpur." Universidad Peruana de Ciencias Aplicadas (UPC), 2012. http://hdl.handle.net/10757/622564.

Full text
Abstract:
Conference held from September 12 to 14, 2012 in Lima, Peru, within the framework of the 15th International Symposium on Electronic Theses and Dissertations (ETD 2012). The event was sponsored by the Universidad Nacional Mayor de San Marcos (UNMSM) and the Universidad Peruana de Ciencias Aplicadas (UPC).
In the age of information technology, it is very important to keep pace with the rapid changes taking place all over the world. Institutional repositories play a vital role in the dissemination of an organization's intellectual output; hence it is essential for every organization to develop and digitize its collection and scholarly communication. Keeping in view the technological changes and the importance of creating a digital repository of electronic theses, a proposal has been prepared for the creation of an institutional repository of electronic theses in the social sciences at Rashtrasant Tukadoji Maharaj Nagpur University, Nagpur. The proposal was submitted to the Indian Council of Social Science Research (ICSSR), New Delhi, has recently been approved, and is now in the execution stage. The proposal, and the constraints and measures for the institutional repository, are also discussed in detail.
APA, Harvard, Vancouver, ISO, and other styles
43

Resinas, Manuel, Adela del-Río-Ortega, Antonio Ruiz-Cortés, and Macias Cristina Cabanillas. "Specification and Automated Design-Time Analysis of the Business Process Human Resource Perspective." Elsevier, 2015. http://dx.doi.org/10.1016/j.is.2015.03.002.

Full text
Abstract:
The human resource perspective of a business process is concerned with the relation between the activities of a process and the actors who take part in them. Unlike other process perspectives, such as control flow, for which many different types of analyses have been proposed, such as finding deadlocks, there is an important gap regarding the human resource perspective. Resource analysis in business processes has not been defined, and only a few analysis operations can be glimpsed in previous approaches. In this paper, we identify and formally define seven design-time analysis operations related to how resources are involved in process activities. Furthermore, we demonstrate that for a wide variety of resource-aware BP models, those analysis operations can be automated by leveraging Description Logic (DL) off-the-shelf reasoners. To this end, we rely on Resource Assignment Language (RAL), a domain-specific language that enables the definition of conditions to select the candidates to participate in a process activity. We provide a complete formal semantics for RAL based on DLs and extend it to address the operations, for which the control flow of the process must also be taken into consideration. A proof-of-concept implementation has been developed and integrated in a system called CRISTAL. As a result, we can give an automatic answer to different questions related to the management of resources in business processes at design time.
APA, Harvard, Vancouver, ISO, and other styles
44

Anderson, Robert K. "Development of scale factors for clarifier design based on batch settling data." Master's thesis, University of Central Florida, 1989. http://digital.library.ucf.edu/cdm/ref/collection/RTD/id/22218.

Full text
Abstract:
University of Central Florida College of Engineering Thesis
Traditionally, batch settling tests have been employed to determine the values of the settling parameters V0 and K of the Vesilind equation, which represents activated sludge settling velocity as a function of solids concentration. It remains unresolved how closely batch settling tests describe settling in full-scale clarifiers. An experimental procedure was developed to determine scale factors between batch settling and full-scale solids flux curves. An experimental protocol was determined for full-scale clarifier operation, including specific criteria of necessary instrumentation and operational flexibility. Several graphical techniques were evaluated and a procedure was selected to determine a scale factor between batch and full-scale settling. The specified procedure requires determination of underflow velocity and concentration. The scale factor was approximately 0.84 as applied to the limiting flux; thus clarifiers designed from batch settling tests would be underdesigned. In addition, a methodology was developed to account for batch flux curve variability in the form of a safety factor. Finally, a design procedure was recommended to calculate clarifier area based on the scale factor determined from the batch and full-scale experiments.
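For orientation, the Vesilind equation the abstract refers to, and the gravity solids flux derived from it, take the standard form below (the form is from the settling literature; only the 0.84 scale factor comes from the abstract):

```latex
% Vesilind settling velocity and the associated gravity solids flux
\[
v_s \;=\; V_0 \, e^{-K X}
\qquad\Longrightarrow\qquad
G(X) \;=\; X \, v_s \;=\; V_0 \, X \, e^{-K X}
\]
% X: solids concentration; V_0, K: sludge-specific settling parameters.
% The reported scale factor (~0.84) multiplies the batch-derived limiting flux.
```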
M.S.
Engineering
113 p.
viii, 113 leaves, bound : ill. ; 28 cm.
APA, Harvard, Vancouver, ISO, and other styles
45

Nuñez, Vasquez Victor Rennato. "El rol de la música incidental y el sound design en los videojuegos modernos (1996-2019)." Bachelor's thesis, Universidad Peruana de Ciencias Aplicadas (UPC), 2021. http://hdl.handle.net/10757/655918.

Full text
Abstract:
The present research aims to explore and understand the role of sound in video games released between 1996 and 2019, using Resident Evil 7 (2017) as a case study. For a better analysis of the audio, the study of sound has been divided into two parts, incidental music and sound design, bearing in mind that the line dividing music and sound effects is increasingly blurred. The first part of the work focuses on the role of incidental music in the video game; the way in which the video game, as a non-linear and interactive audiovisual medium, needs a different approach from other, linear media such as cinema; and a musical analysis of the soundtracks of two video games in the Resident Evil saga released more than fifteen years apart, in order to appreciate how the role of music in video games has evolved. The second part focuses on the role of sound design in the video game, analyzing the complexity of this area and the way in which it interacts with music to the point of merging with it, with the video game, as an interactive medium, benefiting from this interaction. The third part focuses on the relationship between sound implementation, musical composition and sound design. This work seeks to contribute to the study and understanding of sound in video games, and the relationship of this area with their main characteristics: interactivity and non-linearity.
Trabajo de investigación
APA, Harvard, Vancouver, ISO, and other styles
46

Bengtsson, Robin. "Metodutveckling av vidhäftningsbehandling för textila vävar." Thesis, Mittuniversitetet, Avdelningen för kvalitets- och maskinteknik, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-34795.

Full text
Abstract:
At Trelleborg Engineered Coated Fabrics, rubber-coated fabrics are manufactured. In order to apply rubber to a fabric, it is indispensable to first add an adhesion agent to the fabric. At Trelleborg ECF a special calendering machine is used for this operation. While it is an effective machine for big production volumes, it is not as effective when new products are developed. Therefore, two new lab machines have been procured so that new products can be developed in the lab. The intent of this project is to evaluate the new machines, find a connection between the lab and production machines, and appraise the factors that affect the adhesion. This has been done by testing different combinations of settings in the process when the adhesion agent is added to the fabric. Additionally, an analysis of the production process has been done to establish the connection between the lab machines and the production machine. Though no fully complete connection between the machines has been found, the factors affecting the adhesion have been evaluated, and based on the knowledge obtained from the project a proposition on how to improve the production process has been made.

Grade: 180724

APA, Harvard, Vancouver, ISO, and other styles
47

Burridge, Michael J. "Nonlinear robust control of a series dc motor utilizing the recursive design approach." Master's thesis, University of Central Florida, 1995. http://digital.library.ucf.edu/cdm/ref/collection/RTD/id/24126.

Full text
Abstract:
University of Central Florida College of Engineering Thesis
In this thesis, the investigation of asymptotic stability of the series DC motor with unknown load torque and unknown armature inductance is considered. The control technique of recursive, or backstepping, design is employed. Three cases are considered. In the first case, the system is assumed to be perfectly known. In the second case, the load torque is assumed to be unknown, and a proportional-integral controller is developed to compensate for this unknown quantity. In the final case, it is assumed that two system parameters, load torque and armature inductance, are not known exactly, but vary from expected nominal values within a specified range. A robust control is designed to handle this case. The Lyapunov stability criterion is applied in all three cases to prove the stability of the system under the developed control. The results are then verified through the use of computer simulation.
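To give a flavour of the recursive (backstepping) design named above, here is the textbook two-step construction for a generic strict-feedback system (illustrative only; the series motor model treated in the thesis has its own structure):

```latex
% Textbook backstepping sketch for the strict-feedback system
%   \dot{x}_1 = x_2, \qquad \dot{x}_2 = u.
% Step 1: treat x_2 as a virtual control; with V_1 = \tfrac{1}{2}x_1^2, the
% choice x_2^{des} = -k_1 x_1 would give \dot{V}_1 = -k_1 x_1^2 \le 0.
% Step 2: let z = x_2 - x_2^{des} and augment V_2 = V_1 + \tfrac{1}{2}z^2; then
\[
u \;=\; \dot{x}_2^{des} - x_1 - k_2 z
\quad\Longrightarrow\quad
\dot{V}_2 \;=\; -k_1 x_1^2 - k_2 z^2 \;\le\; 0,
\]
% so the origin is asymptotically stable for any gains k_1, k_2 > 0.
```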
M.S.
Electrical and Computer Engineering
Engineering
Electrical Engineering
103 p.
vii, 103 leaves, bound : ill. ; 28 cm.
APA, Harvard, Vancouver, ISO, and other styles
48

Gratorp, Christina. "Bitrate smoothing: a study on traffic shaping and analysis in data networks." Thesis, Linköping University, Department of Electrical Engineering, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-10136.

Full text
Abstract:

The thesis work behind this report is an exploratory study of how the transmission of media data in networks can be made more efficient. This can be achieved by adding, to the real-time protocol (Real Time Protocol) used for streaming media, certain additional information intended to smooth out the bitrate. By attempting to send equal amounts of data during all consecutive time intervals of a session, the bitrate at an arbitrary point in time is more likely to be the same as at earlier points. A streaming server can interpret, handle and forward data according to the instructions in the protocol header. The bitrate is smoothed by sending later data in the stream ahead of time, during intervals that contain less data. The result is a smoothed bitrate curve, which in turn leads to a more even use of network capacity.

The work includes an overview analysis of the behaviour of streaming media, background theory on file structure and network technologies, and a proposal for how media files can be modified to fulfil the purpose of the thesis. The results and discussion can hopefully serve as a basis for a future implementation of an application intended to improve traffic flows over networks.
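A compact way to picture the send-ahead smoothing idea described above (purely illustrative; the interval sizes and byte counts are invented, and this is not the thesis's implementation):

```python
# Illustrative bitrate-smoothing sketch: pull later data forward into earlier,
# lighter intervals so every interval carries close to the same byte count.
# Data can only be sent ahead of schedule if it is already available at the
# sender; here availability arrives interval by interval, as for a live source.

def smooth_schedule(interval_bytes: list[int]) -> list[int]:
    """Return per-interval send sizes targeting the session average."""
    total = sum(interval_bytes)
    n = len(interval_bytes)
    target = total / n
    sent = 0
    schedule = []
    for i in range(n):
        # Cumulative data available up to and including interval i.
        available = sum(interval_bytes[: i + 1])
        # Send up to the running target, but never more than is available.
        planned = min(available - sent, round(target * (i + 1) - sent))
        schedule.append(planned)
        sent += planned
    schedule[-1] += total - sent  # flush any remainder in the final interval
    return schedule

bursty = [50, 400, 10, 300, 40]   # bytes per interval, bursty source
print(smooth_schedule(bursty))    # noticeably flatter than the input
```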

APA, Harvard, Vancouver, ISO, and other styles
49

Smith, Scott Christopher. "Gate and throughput optimizations for null convention self-timed digital circuits." Doctoral diss., University of Central Florida, 2001. http://digital.library.ucf.edu/cdm/ref/collection/RTD/id/3372.

Full text
Abstract:
University of Central Florida College of Engineering Thesis
NULL Convention Logic (NCL) provides an asynchronous design methodology employing dual-rail signals, quad-rail signals, or other Mutually Exclusive Assertion Groups (MEAGs) to incorporate data and control information into one mixed path. In NCL, the control is inherently present with each datum, so there is no need for worst-case delay analysis and control path delay matching. This dissertation focuses on optimization methods for NCL circuits, specifically addressing three related architectural areas of NCL design. First, a design method for optimizing NCL circuits is developed. The method utilizes conventional Boolean minimization followed by table-driven gate substitutions. It is applied to design time- and space-optimal fundamental logic functions, a time- and space-optimal full adder, and time-, transistor-count-, and power-optimal up-counter circuits. The method is applicable when composing logic functions where each gate is a state-holding element, and can produce delay-insensitive circuits requiring less area and fewer gate delays than alternative gate-level approaches requiring full minterm generation. Second, a pipelining method for producing throughput-optimal NCL systems is developed. A relationship between the number of gate delays per stage and the worst-case throughput for a pipeline as a whole is derived. The method then uses this relationship to optimize a pipeline's worst-case throughput by partitioning the NCL combinational circuitry through the addition of asynchronous registers. The method is applied to design a maximum-throughput unsigned multiplier, which yields a speedup of 2.25 over the non-pipelined version, while maintaining delay-insensitivity. Third, a technique to mitigate the impact of the NULL cycle is developed. The technique further increases the maximum attainable throughput of an NCL system by reducing inherent overheads associated with an integrated data and control path. This technique is applied to a non-pipelined 4-bit by 4-bit unsigned multiplier to yield a speedup of 1.61 over the standalone version. Finally, these techniques are applied to design a 72+32x32 multiply and accumulate (MAC) unit, which outperforms other delay-insensitive/self-timed MACs in the literature. It also performs conditional rounding, scaling, and saturation of the output, whereas the others do not, thus further distinguishing it from the previous work. The methods developed facilitate speed, transistor count, and power tradeoffs using approaches that are readily automatable.
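As background on the dual-rail signalling the abstract assumes: the encoding itself is standard NCL, though this Python rendering of it is only an illustration, not code from the dissertation:

```python
# Standard NCL dual-rail encoding: each bit travels on two wires (rail1, rail0).
# Exactly one rail asserted is DATA; both low is NULL (spacer); both high is
# illegal. NULL wavefronts alternate with DATA wavefronts.
DATA0 = (0, 1)   # logic 0: rail0 asserted
DATA1 = (1, 0)   # logic 1: rail1 asserted
NULL  = (0, 0)   # spacer between successive data wavefronts

def is_complete(word: list[tuple[int, int]]) -> bool:
    """Completion detection: every bit of the word holds DATA (no NULLs left)."""
    return all(pair in (DATA0, DATA1) for pair in word)

def is_all_null(word: list[tuple[int, int]]) -> bool:
    """The word has fully returned to NULL, so the next DATA wavefront may enter."""
    return all(pair == NULL for pair in word)

word = [DATA1, NULL, DATA0]
print(is_complete(word))   # False: middle bit still NULL
word[1] = DATA1
print(is_complete(word))   # True: a complete DATA wavefront
```

The mandatory return to NULL between data wavefronts is the "NULL cycle" whose throughput cost the dissertation's third technique mitigates.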
Ph.D.
Doctorate
School of Electrical Engineering and Computer Science
Engineering and Computer Science
Computer Architecture and Digital Systems
154 p.
xiv, 154 leaves, bound : ill. ; 28 cm.
APA, Harvard, Vancouver, ISO, and other styles
50

Lessing, Sara. "ComPron : Learning Pronunciation through Building Associations between Native Language and Second Language Speech Sounds." Thesis, Uppsala universitet, Människa-datorinteraktion, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-414819.

Full text
Abstract:
Current computer-assisted pronunciation training (CAPT) tools are too focused on what technologies can do, rather than on learner needs and pedagogy. They also lack an embodied perspective on learning. This thesis presents a Research through Design project exploring what kinds of interactive design features can support second-language learners' pronunciation learning of segmental speech sounds with embodiment in mind. ComPron was designed: an open simulated prototype that supports learners in learning the perception and production of new segmental speech sounds in a second language by comparing them to native-language speech sounds. ComPron was evaluated through think-aloud user tests and semi-structured interviews (N=4). The findings indicate that ComPron supports awareness of speech sound-movement connections, association building between sounds, and production of sounds. The design features that enabled awareness, association building, and support for speech-sound production are discussed, as is what ComPron offers in comparison to other CAPT tools.
APA, Harvard, Vancouver, ISO, and other styles