
Theses on the topic "Hardware-software design"



Consult the top 50 theses for your research on the topic "Hardware-software design".


You can also download the full text of the academic publication in PDF format and read its abstract online whenever it is available in the metadata.

Explore theses on a wide variety of disciplines and organize your bibliography correctly.

1

Nilsson, Per. "Hardware / Software co-design for JPEG2000". Thesis, Linköping University, Department of Electrical Engineering, 2006. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-5796.

Full text
Abstract

For demanding applications, for example image or video processing, there may be computations that are not well suited to digital signal processors. While a DSP processor is appropriate for some tasks, its instruction set can be extended to achieve higher performance on the tasks that such a processor is not normally designed for. The platform used in this project is flexible in the sense that new hardware can be designed to speed up certain computations.

This thesis analyzes the computationally complex parts of JPEG2000. To achieve sufficient performance for JPEG2000, hardware acceleration may be needed.

First, a JPEG2000 decoder was implemented for a DSP processor in assembly. Once the firmware had been written, the cycle consumption of its parts was measured and estimated. From this analysis, the bottlenecks of the system were identified. Furthermore, new processor instructions that could be implemented for this system are proposed. Finally, the performance improvements are estimated.
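The profiling-then-accelerate workflow this abstract describes can be sketched in a few lines: rank the measured cycle counts per decoder stage and pick the stages that dominate the cycle budget. The stage names, cycle counts, and the `find_bottlenecks` helper are illustrative assumptions, not taken from the thesis.

```python
# Rank decoder stages by measured cycle consumption to find bottlenecks.
# Stage names and cycle counts are illustrative placeholders.
def find_bottlenecks(cycle_counts, budget_fraction=0.5):
    """Return stages that together consume at least `budget_fraction`
    of the total cycles, ordered from most to least expensive."""
    total = sum(cycle_counts.values())
    ranked = sorted(cycle_counts.items(), key=lambda kv: kv[1], reverse=True)
    picked, acc = [], 0
    for stage, cycles in ranked:
        picked.append(stage)
        acc += cycles
        if acc >= budget_fraction * total:
            break
    return picked

profile = {  # hypothetical cycle counts per JPEG2000 decoding stage
    "tier1_entropy_decode": 620_000,
    "inverse_wavelet": 310_000,
    "dequantize": 90_000,
    "color_transform": 40_000,
}
print(find_bottlenecks(profile))  # ['tier1_entropy_decode']
```

The stages this returns would be the prime candidates for new accelerating instructions.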

2

Bappudi, Bhargav. "Example Modules for Hardware-software Co-design". University of Cincinnati / OhioLINK, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1470043472.

Full text
3

Liucheng, Miao, Su Jiangang and Feng Bingxuan. "HARDWARE-INDEPENDENT AND SOFTWARE-INDEPENDENT IN SYSTEM DESIGN". International Foundation for Telemetering, 2000. http://hdl.handle.net/10150/606803.

Full text
Abstract
International Telemetering Conference Proceedings / October 23-26, 2000 / Town & Country Hotel and Conference Center, San Diego, California
Today, open technology is widely used in computing and other fields, covering both software and hardware. "Open technology" for hardware and software can be described as "hardware-independent and software-independent" (for example, an open operating system on a computer). In the telemetry field, however, system design based on hardware and software independence is still at an early stage. This paper discusses the following questions: (a) why telemetry system design needs open technology; (b) how to accomplish system design that is hardware-independent and software-independent; and (c) the application prospects of hardware- and software-independent system design.
4

Li, Juncao. "An Automata-Theoretic Approach to Hardware/Software Co-verification". PDXScholar, 2010. https://pdxscholar.library.pdx.edu/open_access_etds/12.

Full text
Abstract
Hardware/Software (HW/SW) interfaces are pervasive in computer systems. However, many HW/SW interface implementations are unreliable due to their intrinsically complicated nature. In industrial settings, there are three major challenges to improving reliability. First, as there is no systematic framework for HW/SW interface specifications, interface protocols cannot be precisely conveyed to engineers. Second, as there is no unifying formal model for representing the implementation semantics of HW/SW interfaces accurately, some critical properties cannot be formally verified on HW/SW interface implementations. Finally, few automatic tools exist to help engineers in HW/SW interface development. In this dissertation, we present an automata-theoretic approach to HW/SW co-verification that addresses these challenges. We designed a co-specification framework to formally specify HW/SW interface protocols; we synthesized a hybrid of Büchi Automata and Pushdown Systems, namely the Büchi Pushdown System (BPDS), as the unifying formal model for HW/SW interfaces; and we created a co-verification tool, CoVer, that implements our model checking algorithms and realizes our reduction algorithms for BPDS. The application of our approach to the Windows device/driver framework has resulted in the detection of fifteen specification issues. Furthermore, utilizing CoVer, we discovered twelve real bugs in five drivers. These non-trivial findings demonstrate the significance of our approach in industrial applications.
5

Cadenelli, Luca. "Hardware/software co-design for data-intensive genomics workloads". Doctoral thesis, Universitat Politècnica de Catalunya, 2019. http://hdl.handle.net/10803/668250.

Full text
Abstract
Since the last decade, the main components of computer systems have been evolving and diversifying to overcome their physical limits and to minimize their energy footprint. Hardware specialization and heterogeneity have become key to designing more efficient systems and tackling ever-important problems with ever-larger volumes of data. However, to take full advantage of the new hardware, a tighter integration between hardware and software, called hardware/software co-design, is also needed. Hardware/software co-design is a time-consuming process that poses its own challenges, such as code and performance portability. Despite its challenges and considerable costs, it is an effort that is crucial for data-intensive applications that run at scale. Such applications span different fields, such as engineering, chemistry, life sciences, astronomy, high energy physics, earth sciences, et cetera. Another scientific field where hardware/software co-design is fundamental is genomics. Here, modern DNA sequencing technologies have reduced the sequencing time and made its cost orders of magnitude cheaper than it was just a few years ago. This breakthrough, together with novel genomics methods, will eventually enable the long-awaited personalized medicine. Personalized medicine selects appropriate and optimal therapies based on the context of a patient's genome, and it has the potential to change medical treatments as we know them today. However, the broad adoption of genomics methods is limited by their capital and operational costs. In fact, genomics pipelines consist of complex algorithms with execution times of many hours per patient and vast intermediate data structures stored in main memory for good performance. To satisfy the main-memory requirement, genomics applications are usually scaled out to multiple compute nodes.
Therefore, these workloads require infrastructures of enterprise-class servers, with entry and running costs that most labs, clinics, and hospitals cannot afford. For these reasons, co-designing genomics workloads to lower their total cost of ownership is essential and worth investigating. This thesis demonstrates that hardware/software co-design allows migrating data-intensive genomics applications to inexpensive desktop-class machines to reduce the total cost of ownership compared to traditional cluster deployments. Firstly, the thesis examines algorithmic improvements that ease co-design and reduce the workload footprint, using NVMs as a memory extension, so that the workload can run on a single node. Secondly, it investigates how data-intensive algorithms can offload computation to programmable accelerators (i.e., GPUs and FPGAs) to reduce the execution time and the energy-to-solution. Thirdly, it explores and proposes techniques to substantially reduce the memory footprint through the adoption of flash memory, to the point that genomics methods can run on one affordable desktop-class machine. Results on SMUFIN, a state-of-the-art real-world genomics method, prove that hardware/software co-design allows significant reductions in the total cost of ownership of data-intensive genomics methods, easing their adoption on large repositories of genomes and also in the field.
6

Engels, Daniel Wayne 1970. "Scheduling for hardware/software partitioning in embedded system design". Thesis, Massachusetts Institute of Technology, 2000. http://hdl.handle.net/1721.1/86443.

Full text
Abstract
Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2000.
Includes bibliographical references (p. 197-204).
by Daniel Wayne Engels.
Ph.D.
7

TIWARI, ANURAG. "HARDWARE/SOFTWARE CO-DEBUGGING FOR RECONFIGURABLE COMPUTING APPLICATIONS". University of Cincinnati / OhioLINK, 2002. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1011816501.

Full text
8

BRUHNS, THOMAS VICTOR. "HARDWARE AND SOFTWARE FOR A COMPUTER CONTROLLED LIDAR SYSTEM". Diss., The University of Arizona, 1985. http://hdl.handle.net/10150/188042.

Full text
Abstract
The hardware and software for a computer controlled optical radar, or lidar, system are described. The system builds on a previously installed pulsed ruby backscatter lidar, capable of acquiring data at controlled azimuth and elevation angles through the atmosphere. The described system replaces hardwired logic with computer control. Two coupled computers are used to allow a degree of real time control while data are processed. One of these computers reads and controls mount elevation angle, reads the laser energy monitor, and senses firing of the laser. The other computer serves as a user interface, and receives the lidar return data from a digitizer and memory, and the angle and energy information from the other computer. The second computer also outputs data to a disc drive. The software provided with the system is described, and the feasibility of additional software for both control and data processing is explored. Particular attention is given to data integrity and instrument and computer operation in the presence of the high energy pulses used to drive the laser. A previously described laser energy monitor has been improved to isolate it from laser transients. Mount elevation angles are monitored with an absolute angle readout. As a troubleshooting aid, a simulator with an output that approximates the lidar receiver output was developed. Its output is digitally generated and provides a known repetitive signal. Operating procedures are described for standard data acquisition, and troubleshooting is outlined. The system can be used by a relatively inexperienced operator; English sentences are displayed on the system console CRT terminal to lead the operator through data acquisition once the system hardware is turned on. A brief synopsis of data acquired on the system is given. Those data are used as the basis of other referenced papers. It constitutes soundings for over one hundred days. 
One high point has been operation of the system in conjunction with a balloon borne atmospheric particulate sampling package. The system has also been used occasionally as the transmitter of a lidar system with physically separated receiver and transmitter.
9

Ramírez, Bellido Alejandro. "High performance instruction fetch using software and hardware co-design". Doctoral thesis, Universitat Politècnica de Catalunya, 2002. http://hdl.handle.net/10803/5969.

Full text
Abstract
In recent years, high-performance processor design has progressed along two research directions: increasing pipeline depth to allow higher clock frequencies, and widening the pipeline to allow parallel execution of more instructions. Designing a high-performance processor involves balancing all of its components to ensure that overall performance is not limited by any individual component. This means that if we give the processor a faster execution unit, we must make sure we can fetch and decode instructions fast enough to keep that execution unit busy.

This thesis explores the challenges posed by the design of the fetch unit from two points of view: the design of software better suited to existing fetch architectures, and the design of hardware adapted to the special characteristics of the new software we have generated.

Our approach to the design of new software has been to propose a new code-reordering algorithm that aims not only to improve instruction cache performance but also to increase the effective width of the fetch unit. Using profile data about program behavior, we chain the program's basic blocks so that conditional branches tend to be not taken, which favors sequential execution of the code. Once the basic blocks are organized into these traces, we map the traces into memory so as to minimize both the space required for the genuinely useful code and its memory conflicts. Besides describing the algorithm, we present a detailed analysis of the impact of these optimizations on different aspects of fetch-unit performance: memory latency, the effective width of the fetch unit, and branch-predictor accuracy.
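The code-reordering idea summarized above, chaining basic blocks so the hot path falls through, can be sketched as a greedy trace builder driven by profile counts. The control-flow graph, edge counts, and `build_trace` helper are hypothetical illustrations, not the thesis's actual algorithm.

```python
# Greedy trace construction from branch-profile data: starting from a seed
# block, repeatedly follow the most frequently executed successor, so that
# the hot path becomes fall-through (conditional branches biased not-taken).
# The control-flow graph and edge counts below are illustrative.
def build_trace(cfg, edge_counts, seed, placed):
    trace, block = [], seed
    while block is not None and block not in placed:
        trace.append(block)
        placed.add(block)
        succs = [s for s in cfg.get(block, []) if s not in placed]
        # pick the successor executed most often in the profile
        block = max(succs, key=lambda s: edge_counts.get((trace[-1], s), 0),
                    default=None)
    return trace

cfg = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
edge_counts = {("A", "B"): 90, ("A", "C"): 10, ("B", "D"): 90, ("C", "D"): 10}
print(build_trace(cfg, edge_counts, "A", set()))  # ['A', 'B', 'D']
```

Cold blocks left over (here, "C") would then be laid out separately so they do not pollute the hot code's cache lines.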

Based on the analysis of the behavior of the optimized codes, we also propose a modification of the trace cache mechanism that makes more effective use of the scarce available storage space. This mechanism uses the trace cache only to store those traces that could not be delivered by the instruction cache in a single cycle.

Also building on what we learned about the behavior of the optimized codes, we propose a new branch predictor that makes extensive use of the same information that was used to reorder the code, in this case to improve the predictor's accuracy.

Finally, we propose a new fetch-unit architecture based on exploiting the special characteristics of the optimized codes. Our architecture has a very low level of complexity, similar to that of an architecture capable of fetching a single basic block per cycle, yet it offers much higher performance, comparable to that of a far more costly and complex trace cache.
10

Patel, Krutartha, Computer Science & Engineering, Faculty of Engineering, UNSW. "Hardware-software design methods for security and reliability of MPSoCs". Awarded by: University of New South Wales, Computer Science & Engineering, 2009. http://handle.unsw.edu.au/1959.4/44854.

Full text
Abstract
Security of a Multi-Processor System on Chip (MPSoC) is an emerging area of concern in embedded systems. MPSoC security is jeopardized by Code Injection attacks. Code Injection attacks, which are the most common types of software attacks, have plagued single processor systems. Design of MPSoCs must therefore incorporate security as one of the primary objectives. Code Injection attacks exploit vulnerabilities in "trusted" and legacy code. An architecture with a dedicated monitoring processor (MONITOR) is employed to simultaneously supervise the application processors on an MPSoC. The program code in the application processors is divided into basic blocks. The basic blocks in the application processors are statically instrumented with special instructions that allow communication with the MONITOR at runtime. The MONITOR verifies the execution of all the processors at runtime using control flow checks and either a timing or instruction count check. This thesis proposes a monitoring system called SOFTMON, a design methodology called SHIELD, a design flow called LOCS and an architectural framework called CUFFS for detecting Code Injection attacks. SOFTMON, a software monitoring system, uses a software algorithm in the MONITOR. SOFTMON incurs limited area overheads. However, the runtime performance overhead is quite high. SHIELD, an extension to the work in SOFTMON, overcomes the limitation of high runtime overhead using a MONITOR that is predominantly hardware based. LOCS uses only one special instruction per basic block compared to two, as was the case in SOFTMON and SHIELD. Additionally, profile information is generated for all the basic blocks in all the application processors for the MPSoC designer to tune the design by increasing or decreasing the frequency of loop basic blocks. CUFFS detects attacks even without application processors communicating to the MONITOR.
The SOFTMON, SHIELD and LOCS approaches can only detect attacks if the application processors communicate to the MONITOR. CUFFS relies on the exact number of instructions in basic blocks to determine an attack, rather than time-frame based measures used in SOFTMON, SHIELD and LOCS. The lowest runtime performance overhead was achieved by LOCS (worst case of 37.5%), while the SOFTMON monitoring system had the least amount of area overheads of about 25%. The CUFFS approach employed an active MONITOR and hence detected a greater range of attacks. The CUFFS framework also detects bit flip errors (reliability errors) in the control flow instructions of the application processors on an MPSoC. CUFFS can detect nearly 70% of all bit flip errors in the control flow instructions. Additionally, a modified CUFFS approach is proposed to ensure reliable inter-processor communication on an MPSoC. The modified CUFFS approach uses a hardware based checksum approach for reliable inter-processor communication and incurred a runtime performance overhead of up to 25% and negligible area overheads compared to CUFFS. Thus, the approaches proposed in this thesis equip an MPSoC designer with tools to embed security features during an MPSoC's design phase. Incorporating security measures at the processor design level provides security against software attacks in MPSoCs and incurs manageable runtime, area and code-size overheads.
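The instruction-count check that CUFFS relies on can be modeled minimally: the monitor stores the expected instruction count per basic block and flags any deviation at runtime. Block IDs, counts, and the `check_block` helper are invented for illustration.

```python
# A minimal model of an instruction-count check: the monitor holds the
# expected number of instructions per basic block (extracted at compile
# time) and flags any runtime report that deviates, which would indicate
# injected code. Block IDs and counts are illustrative.
EXPECTED_COUNTS = {"bb0": 12, "bb1": 7, "bb2": 23}

def check_block(block_id, observed_count):
    expected = EXPECTED_COUNTS.get(block_id)
    if expected is None:
        return "unknown block: possible attack"
    if observed_count != expected:
        return "count mismatch: possible code injection"
    return "ok"

print(check_block("bb1", 7))  # ok
print(check_block("bb1", 9))  # count mismatch: possible code injection
```

An exact-count rule of this shape catches injected instructions that a coarse time-frame check would miss, which matches the comparison the abstract draws between CUFFS and the earlier approaches.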
11

Zhang, Jingyao. "Hardware-Software Co-Design for Sensor Nodes in Wireless Networks". Diss., Virginia Tech, 2013. http://hdl.handle.net/10919/50972.

Full text
Abstract
Simulators are important tools for analyzing and evaluating different design options for wireless sensor networks (sensornets) and hence have been intensively studied in the past decades. However, existing simulators only support evaluations of protocols and software aspects of sensornet design. They cannot accurately capture the significant impacts of various hardware designs on sensornet performance. As a result, the performance/energy benefits of customized hardware designs are difficult to evaluate in sensornet research. To fill this technical void, in the first section, we describe the design and implementation of SUNSHINE, a scalable hardware-software emulator for sensornet applications.
SUNSHINE is the first sensornet simulator that effectively supports joint evaluation and design of sensor hardware and software performance in a networked context. SUNSHINE captures the performance of network protocols, software and hardware up to cycle-level accuracy through its seamless integration of three existing sensornet simulators: the network simulator TOSSIM, the instruction-set simulator SimulAVR and the hardware simulator GEZEL. SUNSHINE solves several sensornet simulation challenges, including data exchanges and time synchronization across different simulation domains and simulation accuracy levels. SUNSHINE also provides a hardware specification scheme for simulating flexible and customized hardware designs. Several experiments are given to illustrate SUNSHINE's simulation capability. Evaluation results are provided to demonstrate that SUNSHINE is an efficient tool for software-hardware co-design in sensornet research.

Even though SUNSHINE can simulate flexible sensor nodes (nodes that contain FPGA chips as coprocessors) in wireless networks, it does not estimate the power/energy consumption of sensor nodes. So far, no simulators have been developed to evaluate the performance of such flexible nodes in wireless networks. In the second section, we present PowerSUNSHINE, a power- and energy-estimation tool that fills this void. PowerSUNSHINE is the first scalable power/energy estimation tool for WSNs that provides an accurate prediction for both fixed and flexible sensor nodes. In this section, we first describe the requirements and challenges of building PowerSUNSHINE. Then, we present power/energy models for both fixed and flexible sensor nodes. Two testbeds, a MicaZ platform and a flexible node consisting of a microcontroller, a radio and an FPGA-based co-processor, are provided to demonstrate the simulation fidelity of PowerSUNSHINE. We also discuss several evaluation results based on simulation and testbeds to show that PowerSUNSHINE is a scalable simulation tool that provides accurate estimation of power/energy consumption for both fixed and flexible sensor nodes.
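At their simplest, state-based power/energy models like the ones PowerSUNSHINE provides reduce to summing per-component power draw over the time spent in each operating state. The component states, power figures, and `energy_mj` helper below are placeholder assumptions, not values from the dissertation.

```python
# Simplest possible state-based energy model: E = sum over components and
# states of P(component, state) * time_in_state. Power draws (mW) and the
# state timeline (ms) are illustrative placeholders; mW * ms = microjoules.
POWER_MW = {
    ("mcu", "active"): 8.0, ("mcu", "sleep"): 0.02,
    ("radio", "tx"): 52.0, ("radio", "rx"): 59.0, ("radio", "off"): 0.0,
}

def energy_uj(timeline):
    """timeline: list of (component, state, duration_ms); returns microjoules."""
    return sum(POWER_MW[(c, s)] * ms for c, s, ms in timeline)

timeline = [("mcu", "active", 10), ("radio", "tx", 2),
            ("mcu", "sleep", 88), ("radio", "off", 98)]
print(energy_uj(timeline))  # energy for one 100 ms duty cycle
```

A flexible node would add entries for the FPGA coprocessor's states; the model's structure stays the same.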

Since the main components of sensor nodes include a microcontroller and a wireless transceiver (radio), their real-time performance may be a bottleneck when executing computation-intensive tasks in sensor networks. A coprocessor can relieve the microcontroller of multiple tasks and hence decrease the probability of dropping packets from the wireless channel. Even though adding a coprocessor benefits sensor networks, designing applications for sensor nodes with coprocessors from scratch is challenging because design details must be considered in multiple domains: software, hardware, and network. To solve this problem, we propose a hardware-software co-design framework for network applications that contain multiprocessor sensor nodes. The framework includes a three-layered architecture for multiprocessor sensor nodes and application interfaces under the framework. The layered architecture makes the design of multiprocessor nodes' applications flexible and efficient. The application interfaces under the framework are implemented for deploying reliable applications of multiprocessor sensor nodes. A resource-sharing technique is provided to make the processor, coprocessor and radio work in coordination via the communication bus. Several testbeds containing multiprocessor sensor nodes are deployed to evaluate the effectiveness of our framework. Network experiments are executed in the SUNSHINE emulator to demonstrate the benefits of using multiprocessor sensor nodes in many network scenarios.
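Checksum-protected inter-processor communication of the kind mentioned above can be mimicked in software with a toy frame format: the sender appends a checksum, and the receiver rejects frames whose recomputed checksum disagrees. The one-byte additive checksum and frame layout are assumptions for illustration only.

```python
# Toy model of checksum-protected inter-processor messages: the sender
# appends a simple additive checksum; the receiver recomputes it and
# rejects corrupted frames. The 8-bit frame format is an assumption.
def make_frame(payload: bytes) -> bytes:
    checksum = sum(payload) & 0xFF
    return payload + bytes([checksum])

def verify_frame(frame: bytes):
    payload, checksum = frame[:-1], frame[-1]
    return payload if (sum(payload) & 0xFF) == checksum else None

frame = make_frame(b"\x01\x02\x03")
print(verify_frame(frame))                        # b'\x01\x02\x03'
corrupted = bytes([frame[0] ^ 0x10]) + frame[1:]  # flip a bit in transit
print(verify_frame(corrupted))                    # None
```

In the thesis's setting the checksum logic lives in hardware on the bus path; the software view above only illustrates the accept/reject behavior.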
Ph. D.
12

Webster, David D. "Hardware, software, firmware allocation of functions in systems development". Diss., Virginia Polytechnic Institute and State University, 1987. http://hdl.handle.net/10919/49907.

Full text
Abstract
The top-down development methodology is, for the most part, a well defined subject. There is, however, one area of top-down development that lacks structure and definition. The undefined topic is the hardware, software, and firmware allocation of functions. This research addresses this deficiency in top-down system development. The key objective is the restructuring of the hardware, software, and firmware allocation process from a subjective, qualitative decision process to a structured, quantitative one. Factors that affect the hardware, software, and firmware allocation process are identified. Qualitative data on the influence of the factors on the allocation process are systematized into quantitative information. This information is used to develop a model to provide a recommendation for implementing a function in hardware, software, or firmware. The model applies three analytical methods: 1) the analytic hierarchy process, 2) the general linear model, and 3) the second-order regression technique. These three methods are applied to the quantified information of the hardware, software, firmware allocation process. A computer-based software tool is developed by this research to aid in the evaluation of the hardware, software, and firmware allocation process. The software support tool assists in data collection. Future application of the support tool will enable the capture and documentation of expert knowledge on the hardware, software, and firmware allocation process. The improved knowledge base can be used to improve the model, which in turn will improve the system development process and the resulting system.
Ph. D.
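The quantitative allocation model the abstract describes can be caricatured as a weighted-score decision: factor weights (such as those an analytic-hierarchy-process comparison would yield) multiply per-option scores, and the highest total wins. All factors, weights, and scores below are invented for illustration.

```python
# A minimal weighted-score version of the allocation decision: each factor
# gets a weight (e.g. derived from pairwise comparisons, as in the analytic
# hierarchy process) and each implementation option is scored per factor.
# Factors, weights, and scores are illustrative, not from the dissertation.
WEIGHTS = {"speed": 0.5, "flexibility": 0.3, "cost": 0.2}
SCORES = {  # option -> factor -> score in [0, 1]
    "hardware": {"speed": 0.9, "flexibility": 0.2, "cost": 0.3},
    "software": {"speed": 0.3, "flexibility": 0.9, "cost": 0.9},
    "firmware": {"speed": 0.5, "flexibility": 0.6, "cost": 0.6},
}

def recommend(weights, scores):
    totals = {opt: sum(weights[f] * s[f] for f in weights)
              for opt, s in scores.items()}
    return max(totals, key=totals.get), totals

best, totals = recommend(WEIGHTS, SCORES)
print(best)  # software
```

The dissertation's actual model layers a general linear model and regression on top of such factor data; the sketch only shows the shape of the final recommendation step.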
13

El, Shobaki Mohammed. "On-chip monitoring for non-intrusive hardware/software observability". Licentiate thesis, Uppsala : Dept. of Information Technology, Univ, 2004. http://www.it.uu.se/research/reports/lic/2004-004/.

Full text
14

Adhipathi, Pradeep. "Model based approach to Hardware/ Software Partitioning of SOC Designs". Thesis, Virginia Tech, 2003. http://hdl.handle.net/10919/9986.

Full text
Abstract
As the IT industry marks a paradigm shift from the traditional system design model to System-On-Chip (SOC) design, the design of custom hardware, embedded processors and associated software have become very tightly coupled. Any change in the implementation of one of the components affects the design of other components and, in turn, the performance of the system. This has led to an integrated design approach known as hardware/software co-design and co-verification. The conventional techniques for co-design favor partitioning the system into hardware and software components at an early stage of the design and then iteratively refining it until a good solution is found. This method is expensive and time consuming. A more modern approach is to model the whole system and rigorously test and refine it before the partitioning is done. The key to this method is the ability to model and simulate the entire system. The advent of new System Level Modeling Languages (SLML), like SystemC, has made this possible. This research proposes a strategy to automate the process of partitioning a system model after it has been simulated and verified. The partitioning idea is based on systems modeled using Process Model Graphs (PmG). It is possible to extract a PmG directly from a SLML like SystemC. The PmG is then annotated with additional attributes like IO delay and rate of activation. A complexity heuristic is generated from this information, which is then used by a greedy algorithm to partition the graph into different architectures. Further, a command line tool has been developed that can process textually represented PmGs and partition them based on this approach.
Master of Science
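The greedy partitioning step over an annotated process model graph can be sketched as follows: score each node with a complexity heuristic built from its annotations, then move the most complex nodes to hardware until a budget is exhausted. The heuristic (activation rate times IO delay), the node data, and the `partition` helper are assumptions, not the thesis's exact formulation.

```python
# A toy version of the greedy partitioning step: each process node gets a
# complexity score from its annotations (here: activation_rate * io_delay,
# a hypothetical heuristic), and the most complex nodes are moved to
# hardware until a capacity budget is exhausted. All numbers are invented.
def partition(nodes, hw_budget):
    """nodes: {name: (activation_rate, io_delay, hw_cost)}"""
    scored = sorted(nodes, key=lambda n: nodes[n][0] * nodes[n][1],
                    reverse=True)
    hw, sw, used = [], [], 0
    for n in scored:
        cost = nodes[n][2]
        if used + cost <= hw_budget:
            hw.append(n)
            used += cost
        else:
            sw.append(n)
    return hw, sw

nodes = {"fft": (1000, 5.0, 40), "ctrl": (10, 0.1, 5), "codec": (500, 4.0, 70)}
print(partition(nodes, hw_budget=100))  # (['fft', 'ctrl'], ['codec'])
```

This mirrors the abstract's flow: annotate the PmG, derive a complexity heuristic, then let a greedy algorithm split the graph across architectures.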
15

Zhu, Weiwen 1967. "Design and modeling of mixed synchronous-asynchronous and hardware-software systems". Thesis, McGill University, 2001. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=34005.

Full text
Abstract
This thesis presents the design of a hardware/software co-simulator and a case study comparing synchronous and asynchronous design styles for digital VLSI circuits. Adopting the design-pattern approach of software design, our simulator package, based on PtolemyII, extracts the temporal causality of embedded software to perform fast timing estimation of hardware/software functionality partitioning in embedded systems. Our package can simulate system features such as task prioritization, message passing, resource sharing and task blocking. We demonstrate the proposed approach with two event-driven software applications. In this thesis we also discuss synchronous and asynchronous design styles for VLSI circuits. We use a CDMA correlator to illustrate the different aspects of these design styles. The comparison is presented in terms of area and power. We also include a switching-activity study for the evaluation of architecture tradeoffs.
16

Cavalcante, Sergio Vanderlei. "A hardware-software co-design system for embedded real-time applications". Thesis, University of Newcastle Upon Tyne, 1997. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.360339.

Full text
17

Davis, Jesse H. Z. (Jesse Harper Zehring) 1980. "Hardware & software architecture for multi-level unmanned autonomous vehicle design". Thesis, Massachusetts Institute of Technology, 2002. http://hdl.handle.net/1721.1/16968.

Full text
Abstract
Thesis (M.Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2002.
Includes bibliographical references (p. 95-96).
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
The theory, simulation, design, and construction of a radically new type of unmanned aerial vehicle (UAV) are discussed. The vehicle architecture is based on a commercially available non-autonomous flyer called the Vectron Blackhawk Flying Saucer. Due to its full body rotation, the craft is more inherently gyroscopically stable than other more common types of UAVs. This morphology was chosen because it has never before been made autonomous, so the theory, simulation, design, and construction were all done from fundamental principles as an example of original multi-level autonomous development.
by Jesse H.Z. Davis.
M.Eng.
18

Farrell, John Patrick. "Digital Hardware Design Decisions and Trade-offs for Software Radio Systems". Thesis, Virginia Tech, 2009. http://hdl.handle.net/10919/33294.

Full text
Abstract
Software radio is a technique for implementing reconfigurable radio systems using a combination of various circuit elements and digital hardware. By implementing radio functions in software, a flexible radio can be created that is capable of performing a variety of functions at different times. Numerous digital hardware devices are available to perform the required signal processing, each with its own strengths and weaknesses in terms of performance, power consumption, and programmability. The system developer must make trade-offs in these three design areas when determining the best digital hardware solution for a software radio implementation. When selecting digital hardware architectures, it is important to recognize the requirements of the system and identify which architectures will provide sufficient performance within the design constraints. While some architectures may provide abundant computational performance and flexibility, the associated power consumption may largely exceed the limits available for a given system. Conversely, other processing architectures may demand minimal power consumption and offer sufficient computation performance yet provide little in terms of the flexibility needed for software radio systems. Several digital hardware solutions are presented as well as their design trade-offs and associated implementation issues.
Master of Science
Los estilos APA, Harvard, Vancouver, ISO, etc.
19

Koch, Christine. "Managerial coordination between hardware and software development during complex electronic system design". Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1997. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp04/mq22135.pdf.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
20

Koch, Christine Carleton University Dissertation Management Studies. "Managerial coordination between hardware and software development during complex electronic system design". Ottawa, 1997.

Buscar texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
21

Cornevaux-Juignet, Franck. "Hardware and software co-design toward flexible terabits per second traffic processing". Thesis, Ecole nationale supérieure Mines-Télécom Atlantique Bretagne Pays de la Loire, 2018. http://www.theses.fr/2018IMTA0081/document.

Texto completo
Resumen
La fiabilité et la sécurité des réseaux de communication nécessitent des composants efficaces pour analyser finement le trafic de données. La diversification des services ainsi que l'augmentation des débits obligent les systèmes d'analyse à être plus performants pour gérer des débits de plusieurs centaines, voire milliers de Gigabits par seconde. Les solutions logicielles communément utilisées offrent une flexibilité et une accessibilité bienvenues pour les opérateurs du réseau mais ne suffisent plus pour répondre à ces fortes contraintes dans de nombreux cas critiques. Cette thèse étudie des solutions architecturales reposant sur des puces programmables de type Field-Programmable Gate Array (FPGA) qui allient puissance de calcul et flexibilité de traitement. Des cartes équipées de telles puces sont intégrées dans un flot de traitement commun logiciel/matériel afin de compenser les lacunes de chaque élément. Les composants du réseau développés avec cette approche innovante garantissent un traitement exhaustif des paquets circulant sur les liens physiques tout en conservant la flexibilité des solutions logicielles conventionnelles, ce qui est unique dans l'état de l'art. Cette approche est validée par la conception et l'implémentation d'une architecture de traitement de paquets flexible sur FPGA. Celle-ci peut traiter n'importe quel type de paquet au coût d'un faible surplus de consommation de ressources. Elle est de plus complètement paramétrable à partir du logiciel. La solution proposée permet ainsi un usage transparent de la puissance d'un accélérateur matériel par un ingénieur réseau sans nécessiter de compétence préalable en conception de circuits numériques.
The reliability and security of communication networks require efficient components to finely analyze data traffic. Service diversification and throughput increases force network operators to constantly improve analysis systems in order to handle throughputs of hundreds, even thousands, of Gigabits per second. Commonly used solutions are software-oriented solutions that offer a flexibility and accessibility welcome to network operators, but they can no longer meet these strong constraints in many critical cases. This thesis studies architectural solutions based on programmable chips like Field-Programmable Gate Arrays (FPGAs), which combine computation power and processing flexibility. Boards equipped with such chips are integrated into a common software/hardware processing flow in order to balance the shortcomings of each element. Network components developed with this innovative approach ensure exhaustive processing of packets transmitted on physical links while keeping the flexibility of usual software solutions, which had never been achieved in the previous state of the art. This approach is validated by the design and implementation of a flexible packet processing architecture on FPGA. It is able to process any packet type at the cost of a slight over-consumption of resources, and it is moreover fully customizable from the software side. With the proposed solution, network engineers can transparently use the processing power of a hardware accelerator without prior knowledge of digital circuit design.
Los estilos APA, Harvard, Vancouver, ISO, etc.
22

Liang, Cao. "Hardware/Software Co-Design Architecture and Implementations of MIMO Decoders on FPGA". ScholarWorks@UNO, 2006. http://scholarworks.uno.edu/td/416.

Texto completo
Resumen
In recent years, multiple-input multiple-output (MIMO) technology has attracted great attention in the area of wireless communications. The hardware implementation of MIMO decoders becomes a challenging task as the complexity of the MIMO system increases. This thesis presents hardware/software co-design architectures and implementations of two typical lattice decoding algorithms: the Agrell and Vardy (AV) algorithm and the Viterbo and Boutros (VB) algorithm. Three levels of parallelism are analyzed for an efficient implementation, with the preprocessing part on an embedded MicroBlaze soft processor and the decoding part on customized hardware. The decoders for a 4×4 MIMO system with a 16-QAM modulation scheme are prototyped on a Xilinx XC2VP30 FPGA device. The hardware implementations of the AV and VB decoders support data rates of up to 81 Mbps and 37 Mbps, respectively. The two decoders are also compared in terms of resource utilization and BER performance.
Los estilos APA, Harvard, Vancouver, ISO, etc.
23

Tucci, Primiano <1986&gt. "Hardware/Software Design of Dynamic Real-Time Schedulers for Embedded Multiprocessor Systems". Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2013. http://amsdottorato.unibo.it/5594/.

Texto completo
Resumen
The new generation of multicore processors opens new perspectives for the design of embedded systems. Multiprocessing, however, poses new challenges to the scheduling of real-time applications, in which ever-increasing computational demands are constantly flanked by the need to meet critical time constraints. Many research works have contributed to this field by introducing new advanced scheduling algorithms. However, although many of these works have solidly demonstrated their effectiveness, the actual support for multiprocessor real-time scheduling offered by current operating systems is still very limited. This dissertation deals with implementation aspects of real-time schedulers in modern embedded multiprocessor systems. The first contribution is an open-source scheduling framework capable of realizing complex multiprocessor scheduling policies, such as G-EDF, on conventional operating systems, exploiting only their native scheduler from user space. A set of experimental evaluations compares the proposed solution to other research projects that pursue the same goals by means of kernel modifications, highlighting comparable scheduling performance. The principles that underpin the operation of the framework, originally designed for symmetric multiprocessors, have been further extended, first to asymmetric ones, which are subject to major restrictions such as the lack of support for task migrations, and later to re-programmable hardware architectures (FPGAs). In the latter case, this work introduces a scheduling accelerator, which offloads most of the scheduling operations to the hardware and exhibits extremely low scheduling jitter. The realization of a portable scheduling framework presented many interesting software challenges; one of these was timekeeping. In this regard, a further contribution is a novel data structure, called the addressable binary heap (ABH).
The ABH, conceptually a pointer-based implementation of a binary heap, shows very interesting average- and worst-case performance when addressing the problem of tick-less timekeeping with high-resolution timers.
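The abstract describes the ABH only at a high level. As a rough, hypothetical sketch of what "addressable" buys over a plain array heap (the names and array-backed layout here are mine, not the dissertation's pointer-based design), each insert can return a handle so a pending high-resolution timer is later cancellable in O(log n) without a linear search:

```python
class Handle:
    """Ticket returned by insert(); remembers where its entry
    currently lives so it can be cancelled without searching."""
    def __init__(self, key, index):
        self.key = key
        self.index = index


class AddressableHeap:
    """Illustrative min-heap with O(log n) insert, extract_min and
    cancel-by-handle: the operations a tick-less timekeeping
    subsystem for high-resolution timers needs."""
    def __init__(self):
        self._a = []

    def _swap(self, i, j):
        a = self._a
        a[i], a[j] = a[j], a[i]
        a[i].index, a[j].index = i, j  # keep handles in sync

    def _up(self, i):
        while i > 0 and self._a[i].key < self._a[(i - 1) // 2].key:
            self._swap(i, (i - 1) // 2)
            i = (i - 1) // 2

    def _down(self, i):
        n = len(self._a)
        while True:
            l, r, m = 2 * i + 1, 2 * i + 2, i
            if l < n and self._a[l].key < self._a[m].key:
                m = l
            if r < n and self._a[r].key < self._a[m].key:
                m = r
            if m == i:
                return
            self._swap(i, m)
            i = m

    def insert(self, key):
        h = Handle(key, len(self._a))
        self._a.append(h)
        self._up(h.index)
        return h

    def extract_min(self):
        self._swap(0, len(self._a) - 1)
        h = self._a.pop()
        if self._a:
            self._down(0)
        return h.key

    def cancel(self, h):
        """Remove an arbitrary pending entry (e.g. a cancelled timer)."""
        i = h.index
        self._swap(i, len(self._a) - 1)
        self._a.pop()
        if i < len(self._a):
            self._down(i)
            self._up(i)
```

The sketch only conveys the interface; the dissertation's structure is pointer-based precisely to avoid the array's allocation behavior.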
Los estilos APA, Harvard, Vancouver, ISO, etc.
24

Dias, Maurício Acconcia. "Co-Projeto de hardware/software para correlação de imagens". Universidade de São Paulo, 2011. http://www.teses.usp.br/teses/disponiveis/55/55134/tde-31082011-124626/.

Texto completo
Resumen
Este trabalho de pesquisa tem por objetivo o desenvolvimento de um coprojeto de hardware/software para o algoritmo de correlação de imagens visando atingir um ganho de desempenho com relação à implementação totalmente em software. O trabalho apresenta um comparativo entre um conjunto bastante amplo e significativo de configurações diferentes do soft-processor Nios II implementadas em FPGA, inclusive com a adição de novas instruções dedicadas. O desenvolvimento do co-projeto foi feito com base em uma modificação do método baseado em profiling, adicionando-se um ciclo de desenvolvimento e de otimização de software. A comparação foi feita com relação ao tempo de execução para medir o speedup alcançado durante o desenvolvimento do co-projeto, que atingiu um ganho de desempenho significativo. Também se analisou a influência de estruturas de hardware básicas e dedicadas no tempo de execução final do algoritmo. A análise dos resultados sugere que o método se mostrou eficiente considerando o speedup atingido, porém o tempo total de execução ainda ficou acima do esperado, considerando-se a necessidade de execução e processamento de imagens em tempo real dos sistemas de navegação robótica. No entanto, destaca-se que as limitações de processamento em tempo real estão também ligadas às restrições de desempenho impostas pelo hardware adotado no projeto, baseado em uma FPGA de baixo custo e capacidade média.
This work presents an FPGA-based hardware/software co-design for the image normalized cross-correlation algorithm. The main goal is to achieve a significant speedup relative to the execution time of the all-software implementation. The proposed co-design method is a modified profiling-based method with an added software development and optimization step. Execution times were compared, resulting in a significant speedup. To achieve this speedup, 21 different configurations of the Nios II soft processor were compared. The influence of basic and dedicated hardware structures on the algorithm's final execution time was also evaluated. The result analysis suggests that the method is very efficient considering the achieved speedup, but the final execution time remains above what real-time image processing in robotic navigation systems requires. However, the limitations on real-time processing are also a consequence of the hardware adopted in this work, based on a low-cost, medium-capacity FPGA.
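Normalized cross correlation itself, the kernel being accelerated here, is compact enough to state as code. The following pure-Python reference is illustrative only, not the thesis implementation:

```python
import math

def ncc(patch, template):
    """Normalized cross correlation of two equal-sized image patches,
    given as flat lists of pixel intensities. Returns a value in
    [-1, 1]; 1 means a perfect match up to brightness gain/offset."""
    n = len(patch)
    mp = sum(patch) / n
    mt = sum(template) / n
    num = sum((p - mp) * (t - mt) for p, t in zip(patch, template))
    den = math.sqrt(sum((p - mp) ** 2 for p in patch) *
                    sum((t - mt) ** 2 for t in template))
    return num / den
```

The invariance to gain and offset is what makes the measure robust to lighting changes, and the per-pixel multiply-accumulate structure is what makes it a natural candidate for hardware acceleration.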
Los estilos APA, Harvard, Vancouver, ISO, etc.
25

Shaffer, Ryan M. (Ryan Matthew). "Why software firms build hardware, and what Microsoft is doing about it". Thesis, Massachusetts Institute of Technology, 2015. http://hdl.handle.net/1721.1/100312.

Texto completo
Resumen
Thesis: S.M. in Engineering and Management, Massachusetts Institute of Technology, Engineering Systems Division, System Design and Management Program, 2015.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
"February 2015." Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 63-66).
Many software companies build first-party hardware products due to the trend toward smaller, more highly integrated devices, along with the fast pace of innovation in the technology industry. Building hardware products does not always lead to success and creates a financial risk for the company by significantly reducing profit margins compared to the traditional margins to which large software companies are accustomed. Three specific strategies are observed which firms have used successfully in this area. First, the "Hardware First" strategy is described, wherein a company builds devices with the primary goal of selling those devices bundled with the company's software. Second, the "Proprietary Devices" strategy is presented, in which a company builds a device that is targeted at a particular market or function and locks the customer into the firm's ecosystem. This strategy has been observed to succeed in markets where the technology is not yet mature, as well as in cases where the device has a particular purpose that cannot be achieved as effectively with a general-purpose device. Third, the "Service Funnels" strategy is considered, wherein a firm builds hardware devices whose primary intent is to drive usage and revenue of its core software and services products. Microsoft and its various hardware strategies over the years are especially considered, including products such as Xbox, Zune, Kin, and Surface, as well as its acquisition of Nokia's devices business. Each of the three observed strategies has been used by Microsoft at various times, and analysis of these strategies helps explain why some products have succeeded while others have failed dramatically in the marketplace. Microsoft's core capability is undoubtedly in software, and developing a mutually beneficial relationship between its hardware and software products will be key to Microsoft's long-term success in today's technology landscape.
by Ryan M. Shaffer.
S.M. in Engineering and Management
Los estilos APA, Harvard, Vancouver, ISO, etc.
26

Hetrick, Michael Lynn Saverio. "Modular Understanding| A Taxonomy and Toolkit for Designing Modularity in Audio Software and Hardware". Thesis, University of California, Santa Barbara, 2017. http://pqdtopen.proquest.com/#viewpdf?dispub=10253730.

Texto completo
Resumen

Modular synthesis is a continually evolving practice. Currently, an effective taxonomy for analyzing modular synthesizer design does not exist, which is a significant barrier for pedagogy and documentation. In this dissertation, I will define new taxonomies for modular control, patching strategies, and panel design. I will also analyze how these taxonomies can be used to influence the design of musical applications outside of hardware, such as my company Unfiltered Audio's software products. Finally, I will present Euro Reakt, my collection of over 140 module designs for the Reaktor Blocks format and walk through the design process of each.

Los estilos APA, Harvard, Vancouver, ISO, etc.
27

CHATHA, KARAMVIR SINGH. "SYSTEM-LEVEL COSYNTHESIS OF TRANSFORMATIVE APPLICATIONS FOR HETEROGENEOUS HARDWARE-SOFTWARE ARCHITECTURES". University of Cincinnati / OhioLINK, 2001. http://rave.ohiolink.edu/etdc/view?acc_num=ucin990822809.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
28

Skøien, Kristoffer Rist. "A Modular Software and Hardware Framework with Application to Unmanned Autonomous Systems : Interacting Modules, Error Detection and Hardware Design". Thesis, Norges teknisk-naturvitenskapelige universitet, Institutt for teknisk kybernetikk, 2011. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-14471.

Texto completo
Resumen
The Department of Engineering Cybernetics at the Norwegian University of Science and Technology established the Unmanned Vehicle Laboratory in fall 2010. The goal is to have students develop a fully functional autonomous aerial vehicle over time as part of projects and master's theses. A range of projects was carried out in fall 2010; among others, the report General Platform for Unmanned Autonomous Systems was written on the topic of hardware, operating systems and peripheral interfacing. This master's thesis continues where the previous report left off with its suggestions for further work, and covers a software framework, sensor error detection, and actuator and sensor interfacing that are now part of the autonomous flight system. A highly modular software framework has been constructed, applicable far beyond the unmanned vehicle domain. Due to its high level of encapsulation and modularity it is especially valuable in projects with a highly mobile workforce, such as student projects and theses. It acts as a middleware layer, with language-independent, separately compilable modules communicating with one another to achieve the desired functionality. To prevent unrealistic or erroneous sensor readings from spreading through the system, a software detection unit catches signal anomalies based on statistics and alerts subscribing modules. The algorithm has been interfaced into the software framework and is applicable to numerous sensors. Hardware was designed, constructed and tested to handle sensor interfacing, power supply demands and real-time-critical actions such as actuator control. The design is performed from an aerial vehicle application point of view, but is general enough to be usable in a wide range of autonomous craft. The framework, filter and hardware are merged together and tested on an embedded system, verifying the system functionality with a feedback loop from measurements to actuators.
Utilizing the previous work along with all three elements of this thesis, a fully functional system for vehicle control is achieved.
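The statistics-based detection unit is described only in outline above. A minimal sketch of the idea in Python (the window size, threshold and publish/subscribe wiring are my own illustrative choices, not the thesis design) could look like:

```python
from collections import deque
import math

class AnomalyGate:
    """Running-statistics outlier gate in the spirit of the sensor
    error detector described above: a reading further than `k`
    standard deviations from the recent mean is flagged, and
    subscribing modules are alerted instead of receiving the sample."""
    def __init__(self, window=20, k=3.0):
        self.window = deque(maxlen=window)
        self.k = k
        self.subscribers = []

    def subscribe(self, callback):
        self.subscribers.append(callback)

    def feed(self, value):
        """Returns True if the reading is accepted, False if rejected."""
        w = self.window
        if len(w) >= 3:
            mean = sum(w) / len(w)
            var = sum((x - mean) ** 2 for x in w) / len(w)
            if var > 0 and abs(value - mean) > self.k * math.sqrt(var):
                for cb in self.subscribers:
                    cb(value)       # alert subscribers: reading rejected
                return False        # do not let it pollute the statistics
        w.append(value)
        return True
```

A real detector would also handle sensors whose legitimate dynamics exceed a fixed threshold; the point here is only the shape of the statistics-plus-subscription mechanism.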
Los estilos APA, Harvard, Vancouver, ISO, etc.
29

O'Connor, R. Brendan. "Dataflow Analysis and Optimization of High Level Language Code for Hardware-Software Co-Design". Thesis, Virginia Tech, 1996. http://hdl.handle.net/10919/36653.

Texto completo
Resumen
Recent advancements in FPGA technology have provided devices which are not only suited for digital logic prototyping, but also are capable of implementing complex computations. The use of these devices in multi-FPGA Custom Computing Machines (CCMs) has provided the potential to execute large sections of programs entirely in custom hardware which can provide a substantial speedup over execution in a general-purpose sequential processor. Unfortunately, the development tools currently available for CCMs do not allow users to easily configure multi-FPGA platforms. In order to exploit the capabilities of such an architecture, a procedure has been developed to perform a dataflow analysis of programs written in C which is capable of several hardware-specific optimizations. This, together with other software tools developed for this purpose, allows CCMs and their host processors to be targeted from the same high-level specification.
Master of Science
Los estilos APA, Harvard, Vancouver, ISO, etc.
30

Marques, Vítor Manuel dos Santos. "Performance of hardware and software sorting algorithms implemented in a SOC". Master's thesis, Universidade de Aveiro, 2017. http://hdl.handle.net/10773/23467.

Texto completo
Resumen
Mestrado em Engenharia de Computadores e Telemática
Field Programmable Gate Arrays (FPGAs) were invented by Xilinx in 1985. Their reconfigurable nature allows them to be used in multiple areas of Information Technology. This project studies this technology as an alternative to traditional data processing methods, namely sorting. The proposed solution is based on the principle of reusing resources to counter the technology's known resource limitations.
As Field Programmable Gate Arrays (FPGAs) foram inventadas em 1985 pela Xilinx. A sua natureza reconfiguratória permite que sejam utilizadas em várias áreas das tecnologias de informação. Este trabalho tem como objectivo estudar o uso desta tecnologia como alternativa aos métodos tradicionais de processamento de dados, nomeadamente a ordenação. A solução proposta baseia-se na reutilização de recursos para combater as conhecidas limitações deste tipo de tecnologia.
Los estilos APA, Harvard, Vancouver, ISO, etc.
31

Vilanova, García Lluís. "Code-Centric Domain Isolation : a hardware/software co-design for efficient program isolation". Doctoral thesis, Universitat Politècnica de Catalunya, 2016. http://hdl.handle.net/10803/385746.

Texto completo
Resumen
Current software systems contain a multitude of software components: from simple libraries to complex plugins and services. System security and resiliency depend on being able to isolate individual components in separate domains. Conventional systems impose large performance and programmability overheads when isolating components; importantly, when performance and isolation are at stake, performance often takes precedence at the expense of security and reliability. These overheads are rooted in the co-evolution of conventional architectures and OSs, which expose isolation in terms of a loose "virtual CPU" model. Operating Systems (OSs) expose isolation domains to users in the form of processes. The OS kernel is isolated from user code by running at a separate privilege level, while user processes are isolated from each other through the use of different page tables. The OS kernel then multiplexes processes across the available physical resources, giving each process the illusion of having a machine for its exclusive use. Given this virtual CPU model, processes interact through interfaces designed for distributed systems, making their programming and performance poorer. The architectural foundations used for building processes impose performance overheads in excess of 10× and 1000× compared to a function call (for privilege level and page table switches, respectively). Moreover, not all overheads can be attributed to the hardware itself; some are inherent in current OS designs: the OS kernel must mediate cross-process communication through expensive Inter-Process Communication (IPC) operations, which deviate from the traditional synchronous function call semantics.
Threads are bound to their creating process, and invoking functionality across processes requires costly OS kernel mediation and application developer involvement to synchronize and exchange information through IPC channels. This thesis proposes a hardware/software co-design that eliminates the overheads of process isolation while providing a path for the gradual adoption of more aggressive optimizations. That is, it allows processes to efficiently call functions residing in other isolation domains (e.g., other processes) without breaking the synchronous function call semantics. On the hardware side, this thesis proposes the CODOMs protection architecture. It provides memory and privilege protection across software components in a way that is at once very efficient and very flexible. This hardware substrate is then used to propose DomOS, a set of changes to the OS at the runtime and kernel layers that allow threads to efficiently and securely cross process boundaries using regular function calls. That is, a thread in one process is allowed to call a function residing in another process without involving the OS in the critical communication path. This is achieved by mapping processes into a shared address space and eliminating IPC overheads through a combination of new hardware primitives and compile-time and run-time optimizations. IPC in DomOS is up to 24× faster than Linux pipes, and up to 14× faster than IPC in L4 Fiasco.OC. When applied to a multi-tier web server, DomOS performs up to 2.18× better than an unmodified Linux system, and 1.32× better on average. On all configurations, DomOS provides more than 85% of the ideal system efficiency.
Els sistemes software d'avui en dia contenen una multitud de components software: des de simples llibreries fins a plugins o serveis complexos. La seguretat i fiabilitat d'aquests sistemes depèn de ser capaç d'aïllar cadascun d'aquests components en un domini a part. L'aïllament en els sistemes convencionals imposa grans costos tant en el rendiment com en la programabilitat del sistema. És més, tots els sistemes solen donar prioritat al rendiment sobre qualsevol altre consideració, degradant la seguretat i fiabilitat del sistema. Aquests costos en rendiment i programabilitat són deguts a la co-evolució de les arquitectures i Sistemes Operatius (SOs) convencionals, que exposen l'aïllament en termes d'un model de "CPUs virtuals". Els SOs encarnen aquest model a través dels processos que proporcionen. El SO s'aïlla del codi d'usuari a través d'un nivell de privilegi separat. Al mateix temps, els processos d'usuari estan aïllats els uns dels altres al utilitzar taules de pàgines separades. El nucli del SO multiplexa aquests processos entre els diferents recursos físics del sistema, proporcionant-los la il·lusió d'estar executant-se en una màquina per al seu ús exclusiu. Donat aquest model, els processos interactuen a través d'interfícies que han estat dissenyades per a sistemes distribuïts, empitjorant-ne la programabilitat i rendiment. Els elements de l'arquitectura que s'utilitzen per a construïr processos imposen costos en el rendiment que superen el 10x i 1000x en comparació amb una simple crida a funció (en el cas de nivells de privilegi i canvis de taula de pàgina, respectivament). És més, part d'aquests costos no vénen donats per l'arquitectura, sinó pels costos inherents al disseny dels SOs actuals. El nucli del SO actua com a mitjancer en la comunicació entre processos a través de primitives conegudes com a IPC. El IPC no és només costós en termes de rendiment, sinó que a més a més es desvia de les semàntiques tradicionals de crida síncrona de funcions.
Tot "thread" està lligat al procés que el crea, i la invocació de funcionalitat entre processos requereix de la costosa mediació del SO i de la participació del programador a l'hora de sincronitzar "threads" i intercanviar informació a través dels canals d'IPC. Aquesta tesi proposa un co-disseny del programari i del maquinari que elimina els costos de l'aïllament basat en processos, alhora que proporciona un camí per a l'adopció gradual d'optimitzacions més agressives. És a dir, permet que qualsevol procés faci una simple crida a una funció que està en un altre domini d'aïllament (com ara un altre procés) sense trencar la semàntica de les crides síncrones a funció. Aquesta tesi proposa l'arquitectura de protecció CODOMs, que proporciona protecció de memòria i privilegis entre components de programari d'una forma que és, alhora, eficient i flexible. Aquest substrat del maquinari és aleshores utilitzat per proposar DomOS, un conjunt de canvis al SO al nivell del "runtime" i del nucli que permeten a qualsevol "thread" fer crides a funció de forma eficient i segura a codi que resideix en d'altres processos. És a dir, que el "thread" d'un procés pot cridar una funció d'un altre procés sense haver de passar pel SO en el seu camí crític. Això s'aconsegueix a través de mapejar tots els processos en un espai d'adreces compartit i d'eliminar tots els costos d'IPC a través d'una combinació de noves primitives en el maquinari i d'optimitzacions en temps de compilació i en temps d'execució. El IPC a DomOS és fins a 24x més ràpid que les pipes a Linux, i fins a 14x més ràpid que el IPC al SO L4 Fiasco.OC. Si s'aplica el sistema a un servidor web multi-capa, DomOS és fins a 2.18x més ràpid que un sistema Linux no modificat, i 1.32x més ràpid de mitjana. En totes les configuracions, DomOS proporciona més del 85% de la eficiència d'un sistema ideal.
Los estilos APA, Harvard, Vancouver, ISO, etc.
32

Talbot, Jake. "Design and Implementation of Digital Signal Processing Hardware for a Software Radio Receiver". DigitalCommons@USU, 2008. https://digitalcommons.usu.edu/etd/265.

Texto completo
Resumen
This project summarizes the design and implementation of field programmable gate array (FPGA) based digital signal processing (DSP) hardware meant to be used in a software radio system. The filters and processing were first designed in MATLAB and then implemented in the VHSIC Hardware Description Language (VHDL). Since this hardware is meant for a software radio system, making the hardware flexible was the main design goal. Flexibility in the FPGA design was achieved using VHDL generics and generate-for loops. The hardware was verified by using MATLAB-generated signals as stimulus to the VHDL design and comparing the VHDL output with the corresponding MATLAB-calculated signal. Using this verification method, the VHDL design was verified post place-and-route (PAR) on several different Virtex-family FPGAs.
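The MATLAB-vs-VHDL comparison described above is a classic golden-model check. Sketched in Python, with a hypothetical integer FIR filter standing in for the DSP block (the filter and function names are illustrative, not the thesis design), the flow looks like:

```python
def fir_reference(samples, taps):
    """Bit-true reference model of an integer FIR filter: the same
    arithmetic the hardware implementation is expected to perform."""
    out = []
    state = [0] * len(taps)
    for s in samples:
        state = [s] + state[:-1]          # shift new sample in
        out.append(sum(c * x for c, x in zip(taps, state)))
    return out

def check_against_reference(dut_output, samples, taps):
    """Compare device-under-test output with the golden model,
    returning the first mismatching sample index (or -1 if none)."""
    expected = fir_reference(samples, taps)
    for i, (got, want) in enumerate(zip(dut_output, expected)):
        if got != want:
            return i
    return -1
```

Because the reference arithmetic never changes, the same sample-by-sample comparison can be reused at every stage: behavioral simulation, post-synthesis, and post-PAR.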
Los estilos APA, Harvard, Vancouver, ISO, etc.
33

Lin, Huang-Cang y 林煌翔. "On Software/Hardware Co-Design of FFT". Thesis, 2005. http://ndltd.ncl.edu.tw/handle/03182064851667338426.

Texto completo
Resumen
Master's
國立交通大學
電機資訊學院碩士在職專班
93
In this thesis, we propose a new platform for software/hardware co-design of the FFT, based on the SID hardware simulation software with an ARM processor simulation core. With this platform, we compare different hardware structures and analyze their efficiency, cost and speed improvements. Experiments show that the platform provides a very good simulation environment for system designers, and area and timing optimization of the hardware FFT can be easily achieved.
Los estilos APA, Harvard, Vancouver, ISO, etc.
34

Hwang, Chen-Wei y 黃振偉. "Hardware/Software Tradeoff for Embedded Controller Design". Thesis, 1994. http://ndltd.ncl.edu.tw/handle/32371551366993063004.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
35

Leong, Mun-kit y 梁文傑. "Design and Implementation of SoC Hardware-Software Co-design Platform". Thesis, 2008. http://ndltd.ncl.edu.tw/handle/zdv3j6.

Texto completo
Resumen
Master's
國立中山大學
電機工程學系研究所
96
Reconfigurable supercomputing has been used in many high-performance computer systems to accelerate processing. The current trend is therefore to combine a microprocessor with a reconfigurable FPGA as the embedded system platform. However, hardware-software co-design and the integration of such embedded systems pose great challenges to the designer, and the communication between hardware and software is crucial for the system to operate effectively. Our design uses an FPGA configuration, described as I-Link hardware/software integration, to improve the communication between hardware and software. In addition, using a command-packet method, data are delivered to multiple hardware units through a hardware management unit (HMU). At system start-up, the boot loader sets up the TCB and HCB data structures through the PSP; the PSP can be regarded as the key reference segment for message exchange between the system and the hardware/software. The HMU provides data buffering and management, making processing easier and smoother. We successfully realize a hardware-software integrated system on HSCP, a platform developed in our laboratory whose basic components include an ARM7TDMI CPU, memory and an Altera ACEX 1K-100 FPGA. Using ARM code, we also complete a preliminary boot loader, HW constructor and self-developed embedded system. Finally, we use a large number of multiplication operations and matrix summations to verify the feasibility of this system architecture.
Los estilos APA, Harvard, Vancouver, ISO, etc.
36

Yi-der, Lin. "A Hardware/Software Exploration of LDPC Decoder Design". 2006. http://www.cetd.com.tw/ec/thesisdetail.aspx?etdun=U0001-2108200618074700.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
37

Lin, Yi-der y 林宜德. "A Hardware/Software Exploration of LDPC Decoder Design". Thesis, 2006. http://ndltd.ncl.edu.tw/handle/00467642419412823450.

Texto completo
Resumen
Master's
國立臺灣大學
資訊工程學研究所
94
One of the difficult problems in a hardware/software co-design flow is hardware/software partitioning, which decides whether each component of the system is implemented in hardware or in software. The hw/sw (hardware/software) partitioning determines the performance and hardware resources used by the partitioned system. Hw/sw exploration helps us make this decision: it explores the pros and cons of all possible hw/sw-partitioned systems. We present a system model and a hw/sw communication optimization to explore the execution time of a partitioned system more precisely. At the same time, they improve the traditional co-design flow: the system model reduces hw/sw integration and implementation effort, and the communication optimization reduces hw/sw communication overhead. Low-Density Parity-Check (LDPC) codes have been widely considered as error-correcting codes for next-generation communication systems, so we take an LDPC decoder as the case study. After successfully applying our method to the LDPC decoder, we can find different hw/sw-partitioned LDPC decoders that satisfy different needs according to the hw/sw exploration results. Finally, we implemented four kinds of hw/sw-partitioned LDPC decoders. Analysis of the experimental results shows a tradeoff between performance, hardware resources and flexibility.
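The exploration step the abstract describes can be pictured as a search over all hardware/software assignments. A toy Python sketch (the component numbers are invented, and execution time is simplified to a sequential sum with no communication cost) is:

```python
from itertools import product

def explore_partitions(components, hw_area_budget):
    """Enumerate every hw/sw assignment of the components and keep
    those meeting the area budget, mirroring the kind of design-space
    exploration described above. Each component is a tuple
    (name, sw_time, hw_time, hw_area)."""
    results = []
    for assign in product(("sw", "hw"), repeat=len(components)):
        area = sum(c[3] for c, a in zip(components, assign) if a == "hw")
        if area > hw_area_budget:
            continue  # infeasible: not enough hardware resources
        time = sum(c[1] if a == "sw" else c[2]
                   for c, a in zip(components, assign))
        results.append((time, area, assign))
    return sorted(results)  # fastest feasible partition first
```

A real exploration would also charge a communication cost to every hw/sw boundary crossing, which is exactly the overhead the thesis's communication optimization targets.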
Los estilos APA, Harvard, Vancouver, ISO, etc.
38

Lee, Yuan-Cheng y 李沅臻. "Optimizing Memory Virtualization through Hardware/Software Co-design". Thesis, 2017. http://ndltd.ncl.edu.tw/handle/ujxkag.

Texto completo
Resumen
Doctoral dissertation
National Taiwan University
Graduate Institute of Networking and Multimedia
105
Virtualization is a technology enabling consolidation of multiple operating systems into a single physical machine. It originated from the need to create a multi-user time-sharing operating system based on multiple single-user operating systems. This long-standing technology has evolved constantly. In addition to the popular server-side applications, advances in the capabilities of embedded processors make virtualization available on a much wider range of systems than before. The diversity of the target systems demands new design approaches that consider the characteristics of those systems. In this dissertation, we propose the idea of optimizing virtualization environments through hardware/software co-design, and demonstrate its potential through the development of a new optimization technique for memory virtualization. Based on existing studies, we recognize the memory subsystem as a major bottleneck of a virtualization environment. We therefore concentrate our efforts on optimizing memory virtualization for a specific type of virtualization environment as a working example. We first present a quantitative analysis of the impacts of memory virtualization. We then propose an optimized memory virtualization technique along with a comprehensive evaluation, including a qualitative analysis with a formal proof and a quantitative analysis based on software emulation and hardware simulation. The results suggest that the proposed technique outperforms the existing one. The research points out that hardware/software co-design is a promising direction for optimizing virtualization for emerging applications.
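A concrete illustration of why memory virtualization is a bottleneck (standard two-dimensional page-walk arithmetic, not a result from this dissertation): under hardware-assisted nested paging, each step of an n-level guest page walk must itself be translated through the m-level host tables, giving (n+1)(m+1)-1 memory references per TLB miss.

```python
def nested_walk_refs(guest_levels, host_levels):
    """Memory references for one TLB miss under nested paging.

    Each of the guest_levels guest-table accesses, plus the final
    guest-physical data address, must be translated by a full host
    walk of host_levels accesses plus the access itself, so the
    total is (guest_levels + 1) * (host_levels + 1) - 1.
    """
    return (guest_levels + 1) * (host_levels + 1) - 1

# Native 4-level x86-64 walk: 4 references per miss.
native = nested_walk_refs(4, 0)
# 4-level guest over a 4-level host: 24 references per miss.
nested = nested_walk_refs(4, 4)
```

The 6x blow-up from 4 to 24 references is the kind of overhead a co-designed memory virtualization technique targets.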
Los estilos APA, Harvard, Vancouver, ISO, etc.
39

Yeh, Ta-li y 葉大立. "Design of the Software/Hardware Codesign Platform-IRES". Thesis, 2008. http://ndltd.ncl.edu.tw/handle/pe59mg.

Texto completo
Resumen
Master's thesis
National Sun Yat-sen University
Department of Electrical Engineering
96
High-performance reconfigurable computing has demonstrated its potential to accelerate demanding computational applications. Thus, the current trend in embedded systems research is to combine a microprocessor with the power of reconfigurable hardware. However, integrating hardware and software, in particular their communication interface, is challenging. In this thesis, we present a methodology flow to improve the cohesion between hardware and software in reconfigurable embedded system design through IRES (I-link for Reconfigurable Embedded System), a hardware-software integration platform. In IRES, we set up the platform and produce the Executor through I-link (Hardware-Software Integration Link). The Executor consists of tasks and hardware bitstreams provided by the user's design, a bootloader and operating system provided by the system, and PSPs (Program Segment Prefix) derived from the files above. We initialize the system through the bootloader, which scans the PSPs of the Executor to construct the Task Control Block (TCB), Hardware Control Block (HCB) and Netlist IP Information Block (NIB) data structures. The user can obtain hardware information from these data structures and communicate with the hardware using simple functions such as "read()" and "write()". The system then transmits data to and from multiple hardware units through the Hardware Management Unit (HMU), which also has data buffering capability. Finally, we successfully implemented the IRES hardware-software integration platform on HSCP, which is developed in our laboratory, and verified the feasibility of communication between hardware and software.
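The read()/write() style of hardware access described above can be modelled in a few lines; the buffer depth and device ids below are invented for illustration, not values from IRES.

```python
from collections import deque

class HMU:
    """Toy Hardware Management Unit: buffers words per hardware id
    so software can write() and read() without stalling on the device."""
    def __init__(self, buffer_depth=16):
        self.depth = buffer_depth
        self.queues = {}

    def write(self, hw_id, word):
        q = self.queues.setdefault(hw_id, deque())
        if len(q) >= self.depth:
            raise BufferError(f"HMU buffer full for hw {hw_id}")
        q.append(word)

    def read(self, hw_id):
        q = self.queues.get(hw_id)
        if not q:
            return None          # nothing pending from this device
        return q.popleft()

hmu = HMU(buffer_depth=4)
hmu.write(0, 0xCAFE)             # software side pushes a word to hw 0
value = hmu.read(0)              # later reads the buffered result back
```

The point of the buffering, as in the thesis, is that producer and consumer need not be lock-stepped.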
Los estilos APA, Harvard, Vancouver, ISO, etc.
40

Lee, Jen-Chieh y 李仁傑. "Hardware/Software Co-design for Image Object Segmentation". Thesis, 2007. http://ndltd.ncl.edu.tw/handle/67040552908716287125.

Texto completo
Resumen
Master's thesis
National Yunlin University of Science and Technology
Graduate School of Electronic and Information Engineering
95
In modern systems, image processing is becoming more and more important, and binarization is the most commonly used technique in image processing. However, traditional binarization is easily affected by illumination changes, and current image processing is usually implemented purely in software. Thus, the focus becomes how to improve the effectiveness of the image-processing algorithm and design hardware to reduce its time cost. In this thesis, we propose an improved automatic thresholding algorithm to enhance traditional algorithms with insufficient performance, and we implement this algorithm as a VLSI architecture. The two implemented architectures reach 100 MHz and 250 MHz respectively in the UMC 0.18 um technology. To fully verify our system, we use the ARM Integrator/AP platform to realize the concept of hardware/software co-design. We also design AHB slave and AHB master interfaces and integrate them with our automatic thresholding circuit, memory controller, CMOS sensor controller, VGA DAC controller and other system devices. To optimize the system, we provide AHB DMA devices to improve performance. On the software side, we use the bootloader U-Boot to boot the system, adopt BusyBox as the operating system's command set and SysV init for the system initialization procedure, and run embedded Linux as the operating system of the verification platform. We design Linux drivers for our hardware devices. To verify the software/hardware co-design completely, we build an image-processing software flow on top of our hardware system to implement a motion detection system, which can process 7 frames per second.
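The thesis's improved thresholding algorithm is its own; as background, the classic automatic (Otsu) thresholding that such methods typically build on picks the threshold maximizing between-class variance of the gray-level histogram:

```python
def otsu_threshold(histogram):
    """Classic Otsu thresholding over a gray-level histogram
    (a list indexed by intensity). Returns the intensity t that
    maximizes between-class variance; pixels <= t are one class."""
    total = sum(histogram)
    total_sum = sum(i * h for i, h in enumerate(histogram))
    best_t, best_var = 0, -1.0
    w_b = sum_b = 0
    for t, h in enumerate(histogram):
        w_b += h                      # background (low-class) weight
        if w_b == 0:
            continue
        w_f = total - w_b             # foreground (high-class) weight
        if w_f == 0:
            break
        sum_b += t * h
        mean_b = sum_b / w_b
        mean_f = (total_sum - sum_b) / w_f
        var_between = w_b * w_f * (mean_b - mean_f) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Bimodal toy histogram: a dark peak at intensity 2, a bright peak at 7.
hist = [0, 5, 20, 5, 0, 0, 5, 20, 5, 0]
t = otsu_threshold(hist)
```

Because the whole computation is histogram accumulation plus one scan, it maps naturally onto the kind of pipelined VLSI architecture the thesis describes.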
Los estilos APA, Harvard, Vancouver, ISO, etc.
41

Chou, Chin-tai y 周錦泰. "Design and Implementation of Software Polymorphism in Hardware". Thesis, 2006. http://ndltd.ncl.edu.tw/handle/97468966155697100468.

Texto completo
Resumen
Master's thesis
Tatung University
Department of Computer Science and Engineering
94
Small feature sizes result in an exponential increase of transistor counts on a single chip. To avoid exponentially increasing design cost and complexity, the design abstraction level must be raised. Object-oriented design methodology helps decrease design complexity in software and is thus becoming more and more popular in the field of hardware design. Polymorphism and object inheritance are two of the essential features of object-oriented design methodology; however, they are rarely discussed in hardware development. In this thesis, we propose a novel approach to implement software objects and polymorphism in hardware. With a software/hardware interface consisting of an IMT (instance method table) and an OMU (object management unit), we show that the object inheritance and polymorphism mechanisms can be easily integrated into a hardware/software codesign system. Our experiment shows that hardware polymorphism can obtain a speedup of 4.85 over the software implementation while using only 22.7% of the energy.
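The IMT dispatch described above is essentially a vtable lookup; a software model of it (with hypothetical class and method names, not the thesis's own) looks like:

```python
# Toy model of instance-method-table (vtable) dispatch: each object
# carries a reference to its class's method table, and a virtual call
# indexes that table instead of naming a function statically.
AREA = 0        # method slot indices shared by all classes
PERIMETER = 1

square_imt = [lambda s: s["side"] ** 2, lambda s: 4 * s["side"]]
circle_imt = [lambda s: 3.14159 * s["r"] ** 2, lambda s: 2 * 3.14159 * s["r"]]

def new_square(side):
    return {"imt": square_imt, "side": side}

def new_circle(r):
    return {"imt": circle_imt, "r": r}

def vcall(obj, slot):
    """Polymorphic dispatch: look the method up in the object's IMT."""
    return obj["imt"][slot](obj)

shapes = [new_square(3), new_circle(1)]
areas = [vcall(s, AREA) for s in shapes]   # one call site, two behaviours
```

Moving the table lookup and dispatch into an OMU-style hardware unit is what yields the speedup the abstract reports.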
Los estilos APA, Harvard, Vancouver, ISO, etc.
42

Huang, Yau-Shian y 黃耀賢. "Software Design of A Cost/Performance Estimation Method for Hardware/Software Partitioning". Thesis, 2001. http://ndltd.ncl.edu.tw/handle/41565342982909312713.

Texto completo
Resumen
Master's thesis
National Sun Yat-sen University
Department of Electrical Engineering
89
In the age of deep-submicron VLSI, we can design various system applications on a single chip. Such system-on-chip designs contain ASIC circuitry, a processor core together with software components, and hardware modules. During system design, we need to select the form of execution for each kind of system function; this is called hardware/software partitioning. Different hardware/software partitionings affect the achievable cost and performance of the resulting system chip designs. In this research, we explore the research and software design issues of an estimation method for hardware/software partitioning. It consists of these tasks:
•software scheduling
•hardware/software co-scheduling
•cost and performance estimation for hardware/software partitioning
For a system description, given a chosen hardware/software partitioning and a set of allocated resources, we can perform the corresponding cost and performance estimation task, which can be utilized directly by system designers or called by a hardware/software partitioning optimization program. We designed experimental software for this estimation method and carried out a set of experiments based upon real and synthesized design cases.
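The co-scheduling and estimation tasks above can be sketched as a simple list scheduler that, for a given partition, computes the makespan of a task DAG when software tasks serialize on one processor while hardware tasks run concurrently; all task names, times and the communication cost are illustrative, not from the thesis.

```python
def estimate_makespan(tasks, deps, mapping, times, comm=5):
    """Estimate schedule length for a given hw/sw partition.

    tasks: iterable in topological order; deps: {task: [predecessors]};
    mapping: {task: 'sw' | 'hw'}; times: {task: {'sw': t, 'hw': t}}.
    Software tasks serialize on the single CPU; each hardware task
    gets its own unit; crossing the hw/sw boundary costs `comm`.
    """
    finish, cpu_free = {}, 0
    for t in tasks:
        ready = 0
        for d in deps.get(t, []):
            edge = comm if mapping[d] != mapping[t] else 0
            ready = max(ready, finish[d] + edge)
        if mapping[t] == "sw":
            start = max(ready, cpu_free)      # wait for the one CPU
            finish[t] = start + times[t]["sw"]
            cpu_free = finish[t]
        else:
            finish[t] = ready + times[t]["hw"]
    return max(finish.values())

times = {"a": {"sw": 10, "hw": 4}, "b": {"sw": 20, "hw": 6},
         "c": {"sw": 15, "hw": 5}}
deps = {"c": ["a", "b"]}
all_sw = estimate_makespan("abc", deps, {t: "sw" for t in "abc"}, times)
b_hw = estimate_makespan("abc", deps, {"a": "sw", "b": "hw", "c": "sw"}, times)
```

An optimization program can call such an estimator once per candidate partition, exactly the usage the abstract envisions.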
Los estilos APA, Harvard, Vancouver, ISO, etc.
43

Lin, Shien-Tsan y 林顯燦. "The Design of Hardware-Software Co-design Platform for Embedded Applications". Thesis, 2003. http://ndltd.ncl.edu.tw/handle/47368555803335823374.

Texto completo
Resumen
Master's thesis
National Dong Hwa University
Department of Electrical Engineering
91
Abstract Modern embedded systems are utilized in widespread applications, and the types of their signal-processing functions vary enormously. Therefore, the difficulty of designing an embedded system is growing substantially. To overcome this complexity, many modern embedded systems are composed of several heterogeneous subsystems, including a programmable digital signal processor (PDSP), memory, a programmable logic device (PLD) and an application-specific integrated circuit (ASIC); these heterogeneous units are responsible for processing signals of different sorts. Carrying out the architecture design of a modern embedded system has therefore become more complicated. The critical issues are drawing up the structural design of an embedded system, lowering the complexity of the system constitution, raising the efficiency of every unit in the system, increasing the reuse of units, both hardware and software, and shortening the development cycle. To resolve the problems described above, this thesis proposes an embedded development platform integrating DSP, FPGA and ASIC. On the basis of this platform, a system developer can complete a design rapidly and verify the prototype. For the system scheme, object-oriented system analysis and the Unified Modeling Language (UML) are utilized for system analysis and planning. We treat the hardware and software units belonging to the system as independent components and design a uniform interface for communication between all units. Treating each unit, whether hardware or software, as independent IP (Intellectual Property) increases the reusability of every component, makes the system more stable, and makes it easier to maintain.
Through object-oriented system analysis and independent hardware and software IP, we design and simulate an inverted-pendulum fuzzy control system to verify the correctness of the system architecture described above. The experimental results reveal that our methodology is realizable and practical.
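The thesis's fuzzy controller is its own design; as a generic sketch of the technique, a minimal Mamdani-style controller for the pendulum angle, with triangular memberships and weighted-average defuzzification (all memberships, gains and units hypothetical), looks like:

```python
def tri(x, a, b, c):
    """Triangular membership function with peak at b, support [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_force(angle):
    """Tiny fuzzy controller for pendulum angle (rad): three rules
    (NEG/ZERO/POS error -> push left / no push / push right),
    combined by weighted-average defuzzification."""
    mu_neg = tri(angle, -0.6, -0.3, 0.0)
    mu_zero = tri(angle, -0.3, 0.0, 0.3)
    mu_pos = tri(angle, 0.0, 0.3, 0.6)
    weights = (mu_neg, mu_zero, mu_pos)
    outputs = (-10.0, 0.0, 10.0)          # rule consequents, in newtons
    total = sum(weights)
    return sum(w * o for w, o in zip(weights, outputs)) / total if total else 0.0

force = fuzzy_force(0.15)                 # tilted right: push right
```

Packaged behind the platform's uniform interface, such a controller is exactly the kind of software IP the thesis pairs with hardware IP.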
Los estilos APA, Harvard, Vancouver, ISO, etc.
44

Choi, Jongsok. "Enabling Hardware/Software Co-design in High-level Synthesis". Thesis, 2012. http://hdl.handle.net/1807/33380.

Texto completo
Resumen
A hardware implementation can bring orders of magnitude improvements in performance and energy consumption over a software implementation. Hardware design, however, can be extremely difficult. High-level synthesis, the process of compiling software to hardware, promises to make hardware design easier. However, compiling an entire software program to hardware can be inefficient. This thesis proposes hardware/software co-design, where computationally intensive functions are accelerated by hardware, while remaining program segments execute in software. The work in this thesis builds a framework where user-designated software functions are automatically compiled to hardware accelerators, which can execute serially or in parallel to work in tandem with a processor. To support multiple parallel accelerators, new multi-ported cache designs are presented. These caches provide low-latency high-bandwidth data to further improve the performance of accelerators. An extensive range of cache architectures are explored, and results show that certain cache architectures significantly outperform others in a processor/accelerator system.
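The benefit of the multi-ported caches described above can be illustrated with a toy throughput model (not the thesis's actual cache architecture): parallel accelerators issue several requests per cycle, and a cache with more ports drains them in proportionally fewer cycles.

```python
def cycles_to_serve(requests_per_cycle, ports, total_requests):
    """Cycles for a `ports`-ported cache to drain `total_requests`
    arriving `requests_per_cycle` at a time from parallel accelerators."""
    served, backlog, remaining, cycles = 0, 0, total_requests, 0
    while served < total_requests:
        issue = min(requests_per_cycle, remaining)   # new requests this cycle
        remaining -= issue
        backlog += issue
        grant = min(ports, backlog)                  # ports limit the service rate
        backlog -= grant
        served += grant
        cycles += 1
    return cycles

# Four accelerators issuing every cycle: one port vs. four ports.
one_port = cycles_to_serve(4, 1, 100)
four_port = cycles_to_serve(4, 4, 100)
```

The 4x gap is why a processor/accelerator system with several parallel accelerators needs the low-latency, high-bandwidth cache designs the abstract mentions.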
Los estilos APA, Harvard, Vancouver, ISO, etc.
45

"Application hardware-software co-design for reconfigurable computing systems". THE GEORGE WASHINGTON UNIVERSITY, 2008. http://pqdtopen.proquest.com/#viewpdf?dispub=3297468.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
46

Sheng-HsinLo y 羅聖心. "Hardware and Software Co-design of IPsec Database Query". Thesis, 2012. http://ndltd.ncl.edu.tw/handle/06182288128288495542.

Texto completo
Resumen
Master's thesis
National Cheng Kung University
Institute of Computer and Communication Engineering
100
With the popularity of the Internet, confidentiality requirements for Internet traffic have become more critical. The IETF has proposed IPsec to provide encryption/decryption and authentication services without changing the current network architecture. After enabling IPsec, every transmitted or received packet must query the IPsec databases. As network speed increases, software searching of the IPsec databases may become the critical path. The purpose of this thesis is to describe and analyze a database structure and its querying flow for IPsec, and to propose a database searching algorithm for the Security Policy Database and the Security Association Database. To accelerate IPsec database querying, hardware acceleration is applied together with software searching. We evaluate three designs: scratchpad memory, hardware cache and software cache. We use the SystemC language to implement our design on an ESL virtual platform with an ARM processor. The proposed design is implemented in Platform Architect and provides an on-line verification environment. Compared to pure software searching with 256 security policies, the software cache reduces querying time by 83.54%, the hardware cache by 85.89% and the scratchpad memory by 83.87%. We find that the efficiency of the software cache is nearly equal to that of the hardware cache while incurring less cost.
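The software-cache idea above can be sketched directly: a small cache in front of a linear Security Policy Database scan means repeated packets of the same flow skip the O(n) search. The selector format, policy entries and actions below are simplified stand-ins, not the thesis's database layout.

```python
class PolicyCache:
    """Software cache in front of a linear Security Policy Database
    search: a cache hit skips the O(n) scan entirely."""
    def __init__(self, spd):
        self.spd = spd       # list of (selector, action), in priority order
        self.cache = {}
        self.scans = 0       # how many full linear searches were needed

    def lookup(self, selector):
        if selector in self.cache:
            return self.cache[selector]
        self.scans += 1
        for sel, action in self.spd:      # linear search, first match wins
            if sel == selector:
                self.cache[selector] = action
                return action
        return "DISCARD"                  # no matching policy

# 256 toy policies, mirroring the 256-policy experiment in the abstract.
spd = [((f"10.0.0.{i}", 443), "PROTECT") for i in range(256)]
db = PolicyCache(spd)
for _ in range(100):                      # 100 packets of one flow
    result = db.lookup(("10.0.0.200", 443))
```

One scan instead of one hundred is the effect behind the ~84% querying-time reduction the thesis measures.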
Los estilos APA, Harvard, Vancouver, ISO, etc.
47

Kuo, Su-Ming y 郭書銘. "Design and Implementation of RTOS in Hardware Software Codesign". Thesis, 2007. http://ndltd.ncl.edu.tw/handle/47582338524589224580.

Texto completo
Resumen
Master's thesis
Tatung University
Department of Computer Science and Engineering
95
Several problems must be solved in the electronics industry: mobile devices must be small, light and low-power, and the life cycles of consumer-electronic products are getting shorter and shorter. To address these requirements we implement a platform with two parts: a Java CPU and a hardware OS. The Java CPU meets the cross-platform requirement, while the hardware OS (μC/OS-II) speeds up system performance and reduces CPU utilization. On this platform, application developers can use Java technology to build embedded systems easily.
Los estilos APA, Harvard, Vancouver, ISO, etc.
48

Ho, Chia-lun y 何嘉倫. "Computer Vision Software and Hardware Design Based on OpenVX". Thesis, 2016. http://ndltd.ncl.edu.tw/handle/z8dqc5.

Texto completo
Resumen
Master's thesis
National Sun Yat-sen University
Department of Computer Science and Engineering
104
OpenVX, an open, royalty-free, cross-platform standard, can be used to speed up computer vision applications in embedded systems. It can achieve performance- and power-optimized processing of computer vision, including facial, body and gesture tracking, intelligent video surveillance, advanced driver assistance systems (ADAS) and augmented reality on real-time embedded systems. In this thesis, we investigate the contents of the OpenVX specification and develop an OpenVX-based computer vision algorithm for face detection and gesture tracking. An FPGA implementation with hardware/software co-operation is also demonstrated, where the color-to-intensity conversion is executed in hardware. It is shown that the performance of the system can be enhanced without significantly increasing hardware costs.
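The color-to-intensity kernel offloaded to hardware corresponds to the standard luma weighting; the thesis does not state its exact coefficients, so the common BT.601 ones are assumed here.

```python
def rgb_to_intensity(pixels):
    """BT.601 luma: Y = 0.299 R + 0.587 G + 0.114 B, per pixel.
    A pure per-pixel, data-parallel kernel of this kind maps
    naturally onto FPGA hardware inside an OpenVX graph."""
    return [round(0.299 * r + 0.587 * g + 0.114 * b) for r, g, b in pixels]

gray = rgb_to_intensity([(255, 255, 255), (0, 0, 0), (255, 0, 0)])
```

Because every pixel is independent, the hardware version can process one (or many) pixels per clock, which is where the system-level speedup comes from.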
Los estilos APA, Harvard, Vancouver, ISO, etc.
49

Yang, Chian-Hsin y 楊謙信. "HARDWARE/SOFTWARE CO-DESIGN FOR A TKIP IP CORE". Thesis, 2006. http://ndltd.ncl.edu.tw/handle/9jet93.

Texto completo
Resumen
Master's thesis
National Dong Hwa University
Department of Computer Science and Information Engineering
94
In this thesis, a TKIP cipher following hardware/software co-design and co-verification principles is implemented on an SoC development platform that includes an ARM7TDMI microprocessor and chipset. Analysis of the TKIP cipher's computation shows that RC4 is highly repetitive and occupies 58% of the total computation; therefore, we implemented the RC4 algorithm in hardware, using a recursive architecture. For the integration of the TKIP cipher, we use a wrapper as the communication interface between the proposed chip and the AHB bus, and a RAM-based interface for communication between the ARM7TDMI and the designed chip. In this design our chip plays the slave role, and the ARM7TDMI plays the master role, responsible for complex control and data access. When the TKIP program reaches the RC4 function, the ARM7TDMI stores data into the local memory within the designed chip. After a fixed number of cycles, the ARM7TDMI reads the computed data back from the chip and continues to execute the next functions in the program; it repeats these actions until the TKIP program is finished. In the final verification, the system successfully encrypted the plaintext and decrypted the ciphertext. The maximum frequency at which the integrated hardware can operate in a cell-based design flow is 250 MHz, and the gate count is 6596.
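The RC4 core the thesis moves into hardware is a well-known algorithm; a plain software reference of it (the thesis's recursive hardware architecture is its own, this is only the algorithm) is:

```python
def rc4(key, data):
    """Plain RC4: key-scheduling (KSA) followed by keystream
    generation (PRGA) XORed with the data. Encryption and
    decryption are the same operation, which is what allows
    verification by encrypting and then decrypting."""
    s = list(range(256))
    j = 0
    for i in range(256):                      # KSA: permute S with the key
        j = (j + s[i] + key[i % len(key)]) % 256
        s[i], s[j] = s[j], s[i]
    out, i, j = bytearray(), 0, 0
    for byte in data:                         # PRGA: one keystream byte each
        i = (i + 1) % 256
        j = (j + s[i]) % 256
        s[i], s[j] = s[j], s[i]
        out.append(byte ^ s[(s[i] + s[j]) % 256])
    return bytes(out)

cipher = rc4(b"Key", b"Plaintext")
plain = rc4(b"Key", cipher)                   # same call decrypts
```

The inner swap-and-XOR loop, executed once per byte, is the highly repetitive 58% of the workload the abstract identifies.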
Los estilos APA, Harvard, Vancouver, ISO, etc.
50

Su, Yung-Chun y 蘇詠俊. "Hardware/Software Design and Implementation of Stepper Motor Controller". Thesis, 2009. http://ndltd.ncl.edu.tw/handle/79254169881623000034.

Texto completo
Resumen
Master's thesis
Chang Jung Christian University
Graduate Institute of Information Management
97
The main idea of this thesis is to implement an embedded control system for a stepper motor based on the concept of hardware/software co-design. First, we introduce the classical and the cooperative design flows, discuss the relationship between hardware and software, and describe the design methods used in the co-design flow. For variable-speed control of the stepper motor, we propose an algorithm that can be expressed recursively and realized easily on an FPGA. To guarantee the correctness of data transmission, we use a synchronous finite-state machine (FSM) to design the algorithm unit in hardware, and use the Avalon Memory-Mapped (Avalon-MM) interface to integrate it with the embedded processor. In addition, we design a monitoring program in software to close the control loop, and a hardware abstraction layer (HAL) for system integration. The system supports both open-loop and closed-loop control and is coupled with an enhanced velocity profile generator (EVPG) to realize variable-speed control of the stepper motor. Furthermore, the system can judge from the conditions input by the user, such as rotation speed, acceleration mode and number of revolutions, whether a missing-step effect will occur. Once the missing-step effect occurs, the system switches to closed-loop control automatically; otherwise, open-loop control remains in effect. Finally, the system core is based on the Nios II embedded processor and is realized on a DE2 development board.
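The EVPG in the thesis is its own design; a generic trapezoidal velocity profile generator of the kind such units implement can be sketched as follows (all speed and acceleration values are illustrative):

```python
def trapezoid_profile(total_steps, v_max, accel):
    """Per-step speeds for a trapezoidal velocity profile: ramp up
    by `accel` per step until v_max, cruise, then ramp down
    symmetrically. Degenerates to a triangular profile when
    total_steps is too short to reach v_max. Speeds in steps/s."""
    ramp = min(int(v_max // accel), total_steps // 2)
    speeds = []
    for n in range(total_steps):
        if n < ramp:                          # acceleration phase
            speeds.append(accel * (n + 1))
        elif n >= total_steps - ramp:         # deceleration phase
            speeds.append(accel * (total_steps - n))
        else:                                 # cruise phase
            speeds.append(min(v_max, accel * ramp))
    return speeds

profile = trapezoid_profile(total_steps=10, v_max=400.0, accel=100.0)
```

Ramping instead of jumping straight to the target speed is precisely what avoids the missing-step effect the thesis guards against.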
Los estilos APA, Harvard, Vancouver, ISO, etc.