Academic literature on the topic 'SW partitioning'

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'SW partitioning.'

You can also download the full text of each publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "SW partitioning"

1. Lin, Geng, Wenxing Zhu, and M. Montaz Ali. "A Tabu Search-Based Memetic Algorithm for Hardware/Software Partitioning." Mathematical Problems in Engineering 2014 (2014): 1–15. http://dx.doi.org/10.1155/2014/103059.

Abstract:
Hardware/software (HW/SW) partitioning determines which components of a system are implemented in hardware and which in software. It is one of the most important steps in the design of embedded systems. The HW/SW partitioning problem is an NP-hard constrained binary optimization problem. In this paper, we propose a tabu search-based memetic algorithm to solve the HW/SW partitioning problem. First, we convert the constrained binary HW/SW problem into an unconstrained binary problem using a parameter-free adaptive penalty function. A memetic algorithm is then suggested for solving this unconstrained problem. The algorithm uses a tabu search as its local search procedure. This tabu search has a special solution-generation feature and uses a feedback mechanism for updating the tabu tenure. In addition, the algorithm integrates a path relinking procedure for exploiting newly found solutions. Computational results are presented using a number of test instances from the literature. The algorithm proves its robustness when its results are compared with those of two other algorithms. The effectiveness of the proposed parameter-free adaptive penalty function is also shown.
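
The abstract's core recipe (penalize area overflow so the constrained problem becomes unconstrained, then improve solutions with a bit-flip tabu search) can be sketched in a few lines. This is a minimal illustration with invented task data, not the paper's algorithm: the parameter-free adaptive penalty, feedback-driven tenure, and path relinking are all omitted, and a fixed penalty weight is assumed instead.

```python
import random

def penalized_cost(x, sw_time, hw_time, hw_area, area_budget, penalty):
    # Total execution time (x[i] = 1 means task i is mapped to hardware),
    # plus a penalty proportional to any hardware-area overflow.
    time = sum(hw_time[i] if x[i] else sw_time[i] for i in range(len(x)))
    area = sum(hw_area[i] for i in range(len(x)) if x[i])
    return time + penalty * max(0.0, area - area_budget)

def tabu_partition(sw_time, hw_time, hw_area, area_budget,
                   iters=2000, tenure=7, penalty=100.0, seed=0):
    rng = random.Random(seed)
    n = len(sw_time)
    x = [rng.randint(0, 1) for _ in range(n)]
    best = x[:]
    best_cost = penalized_cost(x, sw_time, hw_time, hw_area, area_budget, penalty)
    tabu = [0] * n                       # iteration until which each flip is tabu
    for it in range(1, iters + 1):
        move, move_cost = None, float("inf")
        for i in range(n):               # neighbourhood: all single-bit flips
            x[i] ^= 1
            c = penalized_cost(x, sw_time, hw_time, hw_area, area_budget, penalty)
            x[i] ^= 1
            # accept if not tabu, or if it beats the best known (aspiration)
            if (tabu[i] <= it or c < best_cost) and c < move_cost:
                move, move_cost = i, c
        if move is None:
            break
        x[move] ^= 1
        tabu[move] = it + tenure
        if move_cost < best_cost:
            best, best_cost = x[:], move_cost
    return best, best_cost

# Toy instance: 6 tasks, hardware is faster but consumes area.
sw_t = [9, 7, 5, 8, 6, 4]; hw_t = [3, 2, 2, 4, 1, 2]
hw_a = [5, 4, 3, 6, 2, 3]
print(tabu_partition(sw_t, hw_t, hw_a, area_budget=10))
```
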
2. Yan, Xiaohu, Fazhi He, Neng Hou, and Haojun Ai. "An Efficient Particle Swarm Optimization for Large-Scale Hardware/Software Co-Design System." International Journal of Cooperative Information Systems 27, no. 1 (March 2018): 1741001. http://dx.doi.org/10.1142/s0218843017410015.

Abstract:
In the co-design process of a hardware/software (HW/SW) system, especially for large and complicated embedded systems, HW/SW partitioning is a challenging step. Among different heuristic approaches, particle swarm optimization (PSO) has the advantages of simple implementation and computational efficiency, which makes it suitable for solving large-scale problems. This paper presents a conformity particle swarm optimization with fireworks explosion operation (CPSO-FEO) to solve large-scale HW/SW partitioning. First, the proposed CPSO algorithm simulates the conformist mentality from biology research: CPSO particles with a conformist psychology always try to move toward a secure point and avoid being attacked by natural enemies. In this way, there is a greater possibility of increasing population diversity and avoiding local optima in CPSO. Next, to enhance the search accuracy and solution quality, an improved FEO with a new initialization strategy is presented and combined with the CPSO algorithm to search for a better position for the global best. This combination keeps the search both diversified and intensified. Finally, experiments on benchmarks and on large-scale HW/SW partitioning demonstrate the efficiency of the proposed algorithm.
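
For context, the plain binary PSO that CPSO-FEO builds on maps each particle to a 0/1 partitioning vector and squashes velocities through a sigmoid to obtain bit probabilities. The sketch below assumes a simple penalized-time fitness and omits the paper's conformity and fireworks-explosion operators entirely:

```python
import math, random

def fitness(x, sw_time, hw_time, hw_area, area_budget):
    # Execution time plus a stiff penalty for exceeding the area budget.
    time = sum(hw_time[i] if x[i] else sw_time[i] for i in range(len(x)))
    area = sum(hw_area[i] for i in range(len(x)) if x[i])
    return time + 100.0 * max(0, area - area_budget)

def binary_pso(sw_time, hw_time, hw_area, area_budget,
               particles=20, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = random.Random(seed)
    n = len(sw_time)
    xs = [[rng.randint(0, 1) for _ in range(n)] for _ in range(particles)]
    vs = [[0.0] * n for _ in range(particles)]
    pbest = [x[:] for x in xs]
    pcost = [fitness(x, sw_time, hw_time, hw_area, area_budget) for x in xs]
    g = min(range(particles), key=lambda k: pcost[k])
    gbest, gcost = pbest[g][:], pcost[g]
    for _ in range(iters):
        for k in range(particles):
            for i in range(n):
                # Velocity pulled toward personal and global bests.
                vs[k][i] = (w * vs[k][i]
                            + c1 * rng.random() * (pbest[k][i] - xs[k][i])
                            + c2 * rng.random() * (gbest[i] - xs[k][i]))
                # Sigmoid maps velocity to the probability of choosing hardware.
                xs[k][i] = 1 if rng.random() < 1 / (1 + math.exp(-vs[k][i])) else 0
            c = fitness(xs[k], sw_time, hw_time, hw_area, area_budget)
            if c < pcost[k]:
                pbest[k], pcost[k] = xs[k][:], c
                if c < gcost:
                    gbest, gcost = xs[k][:], c
    return gbest, gcost
```
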
3. Soininen, Juha-Pekka, Matti Sipola, and Kari Tiensyrjä. "SW/HW-partitioning of real-time embedded systems." Microprocessing and Microprogramming 27, no. 1-5 (August 1989): 239–44. http://dx.doi.org/10.1016/0165-6074(89)90053-7.

4. Febry, Ricardo, and Peter Lutz. "Energy Partitioning in Fish: The Activity-related Cost of Osmoregulation in a Euryhaline Cichlid." Journal of Experimental Biology 128, no. 1 (March 1, 1987): 63–85. http://dx.doi.org/10.1242/jeb.128.1.63.

Abstract:
We have investigated how the maintenance, net cost of swimming, and total (maintenance + net cost of swimming) metabolic rates of red, hybrid tilapia (Oreochromis mossambicus ♀ × O. hornorum ♂) responded to different acclimation salinities, and whether these responses correlated with changes in ion-osmoregulation (= osmoregulation) costs. Three groups of fish were acclimated to either fresh water (FW, 0‰), isosmotic sea water (ISW, 12‰) or full-strength sea water (SW, 35‰), and oxygen consumption was measured while they swam at 10, 20, 30 and 40 cm s−1. Maintenance oxygen consumption (estimated by extrapolation), for an average fish (63 g), increased among groups in the following order: FW < ISW < SW. The net cost of swimming increased in the order ISW < SW < FW, and total oxygen consumption (maintenance + net cost of swimming) increased in the order ISW < FW < SW. We assumed that the contribution of cardiac, branchial and swimming muscles to the net cost of swimming was proportional to swimming speed only; therefore, at similar speeds, differences in the net cost of swimming among salinities were due to changes in the activity-related cost of osmoregulation. Consequently, the order in which the net cost of swimming increases from one group to another is the same as the order in which the cost of osmoregulation increases. Since the sequences for maintenance and total metabolic rates differed from that for the net cost of swimming, salinity-related increases in these rates cannot be attributed exclusively to changes in osmoregulation cost. We conclude, based on the differences in the net cost of swimming, that osmoregulation in FW is more expensive than in SW, and that it is cheapest in ISW. Although we were not able to estimate the total cost of osmoregulation in FW and SW, we estimated the activity-related cost, relative to the cost in ISW, at different swimming speeds (net cost of swimming in FW or SW minus net cost of swimming in ISW at each speed). For a 63-g fish in FW, this cost increased from zero at rest to 41 mg O2 kg−1 h−1 (16% of the total metabolic rate, 24% of the net cost of swimming) at 40 cm s−1. In SW the same cost increased only to 32 mg O2 kg−1 h−1 (12% of the total metabolic rate, 20% of the net cost of swimming) at 40 cm s−1. The net cost of swimming in FW or SW increased with swimming speed at a rate 3.4 times faster …
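
The estimate in the abstract reduces to a one-line formula (notation ours, for clarity): the activity-related cost of osmoregulation at swimming speed v is the net cost of swimming in the test salinity minus that in the isosmotic reference,

```latex
C_{\text{osmo}}(v) \;=\; \text{NCS}_{\text{FW or SW}}(v) \;-\; \text{NCS}_{\text{ISW}}(v)
```

so for the 63-g fish in FW at 40 cm s−1, C_osmo = 41 mg O2 kg−1 h−1, i.e. about 16% of the total metabolic rate.
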
5. Lu, Xiao-zhang, Wei Liu, and Yao-dong Tao. "Method of HW/SW partitioning based on NSGA-II." Journal of Computer Applications 29, no. 1 (June 25, 2009): 238–41. http://dx.doi.org/10.3724/sp.j.1087.2009.238.

6. Pérez-Cáceres, Irene, José Fernando Simancas, David Martínez Poyatos, Antonio Azor, and Francisco González Lodeiro. "Oblique collision and deformation partitioning in the SW Iberian Variscides." Solid Earth 7, no. 3 (May 30, 2016): 857–72. http://dx.doi.org/10.5194/se-7-857-2016.

Abstract:
Different transpressional scenarios have been proposed to relate kinematics and complex deformation patterns. We apply the most suitable of them to the Variscan orogeny in SW Iberia, which is characterized by a number of successive left-lateral transpressional structures developed in the Devonian to Carboniferous period. These structures resulted from the oblique convergence between three continental terranes (Central Iberian Zone, Ossa-Morena Zone and South Portuguese Zone), whose amalgamation gave way to both intense shearing at the suture-like contacts and transpressional deformation of the continental pieces in-between, thus showing strain partitioning in space and time. We have quantified the kinematics of the collisional convergence by using the available data on folding, shearing and faulting patterns, as well as tectonic fabrics and finite strain measurements. Given the uncertainties regarding the data and the boundary conditions modeled, our results must be considered as a semi-quantitative approximation to the issue, though very significant from a regional point of view. The total collisional convergence surpasses 1000 km, most of them corresponding to left-lateral displacement parallel to terrane boundaries. The average vector of convergence is oriented E–W (present-day coordinates), thus reasserting the left-lateral oblique collision in SW Iberia, in contrast with the dextral component that prevailed elsewhere in the Variscan orogen. This particular kinematics of SW Iberia is understood in the context of an Avalonian plate salient currently represented by the South Portuguese Zone.
7. Pérez-Cáceres, I., J. F. Simancas, D. Martínez Poyatos, A. Azor, and F. González Lodeiro. "Oblique collision and deformation partitioning in the SW Iberian Variscides." Solid Earth Discussions 7, no. 4 (December 9, 2015): 3773–815. http://dx.doi.org/10.5194/sed-7-3773-2015.

Abstract:
Different transpressional scenarios have been proposed to relate kinematics and complex deformation patterns. We apply the most suitable of them to the Variscan orogeny in SW Iberia, which is characterized by a number of successive left-lateral transpressional structures developed at Devonian to Carboniferous times. These structures resulted from the oblique convergence between three continental terranes (Central Iberian Zone, Ossa-Morena Zone and South Portuguese Zone), whose amalgamation gave way to both intense shearing at the suture-like contacts and transpressional deformation of the continental pieces in-between, thus showing strain partitioning in space and time. We have quantified the kinematics of the collisional convergence by using the available data on folding, shearing and faulting patterns, as well as tectonic fabrics and finite strain measurements. Given the uncertainties regarding the data and the boundary conditions modeled, our results must be considered as a semi-quantitative approximation to the issue, though very significant from a regional point of view. The total collisional convergence surpasses 1000 km, most of them corresponding to left-lateral displacement parallel to terrane boundaries. The average vector of convergence is oriented E–W (present-day coordinates), thus reasserting the left-lateral oblique collision in SW Iberia, in contrast with the dextral component that prevailed elsewhere in the Variscan orogen. This particular kinematics of SW Iberia is understood in the context of an Avalonian plate promontory currently represented by the South Portuguese Zone.
8. Fuhr, Gereon, Seyit Halil Hamurcu, Diego Pala, Thomas Grass, Rainer Leupers, Gerd Ascheid, and Juan Fernando Eusse. "Automatic Energy-Minimized HW/SW Partitioning for FPGA-Accelerated MPSoCs." IEEE Embedded Systems Letters 11, no. 3 (September 2019): 93–96. http://dx.doi.org/10.1109/les.2019.2901224.

9. Iguider, Adil, Kaouthar Bousselam, Oussama Elissati, Mouhcine Chami, and Abdeslam En-Nouaary. "GO Game Inspired Algorithm for Hardware Software Partitioning in Multiprocessor Embedded Systems." Computer and Information Science 12, no. 4 (November 22, 2019): 111. http://dx.doi.org/10.5539/cis.v12n4p111.

Abstract:
Codesign is a robust methodology used in modern embedded systems to achieve the functional specifications and meet the non-functional requirements. The most interesting step in codesign is the hardware/software partitioning process: deciding which functionalities of the system should be implemented in hardware (HW) and which in software (SW). In this article, a new heuristic algorithm is proposed to simultaneously optimize the hardware area (cost) and the execution time (performance) of a multiprocessor system. The proposed algorithm is inspired by game theory, and especially by the GO game. The system is modeled as a directed acyclic graph (DAG), and two players (an HW player and an SW player) take turns choosing a block (functionality) from the graph (system). The HW player aims to optimize the global HW area, while the SW player aims to minimize the global execution time. After the game terminates, a refinement step based on the 0-1 knapsack algorithm is used to meet a pre-defined constraint on the total hardware area or on the overall execution time. Experimental results show that the proposed algorithm gives better solutions than the Simulated Annealing algorithm and the Genetic Algorithm.
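
The abstract's game loop is easy to picture as alternating greedy picks over the system's blocks. The toy below ignores DAG precedence edges and the article's actual move evaluation; the block data and per-player heuristics are invented for illustration, with the 0-1 knapsack repair step only indicated in a comment:

```python
def go_inspired_partition(hw_area, hw_time, sw_time):
    """Two players alternately claim unassigned blocks: the HW player picks
    the block whose hardware mapping saves the most time per unit area,
    the SW player picks the block that costs the least extra time in software."""
    n = len(hw_area)
    unassigned = set(range(n))
    to_hw, to_sw = set(), set()
    hw_turn = True
    while unassigned:
        if hw_turn:
            # HW player: maximize time saved per unit of area spent.
            i = max(unassigned, key=lambda b: (sw_time[b] - hw_time[b]) / hw_area[b])
            to_hw.add(i)
        else:
            # SW player: minimize the time penalty of staying in software.
            i = min(unassigned, key=lambda b: sw_time[b] - hw_time[b])
            to_sw.add(i)
        unassigned.remove(i)
        hw_turn = not hw_turn
    return to_hw, to_sw

hw_a = [5, 4, 3, 6, 2]; hw_t = [2, 3, 1, 2, 1]; sw_t = [9, 5, 4, 10, 2]
hw, sw = go_inspired_partition(hw_a, hw_t, sw_t)
print("HW:", sorted(hw), "SW:", sorted(sw))
# A 0-1 knapsack pass (as in the article) would then repair any violated
# area or deadline constraint by re-selecting blocks optimally.
```
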
10. Jia, Huizhu, Peng Zhang, Don Xie, and Wen Gao. "An AVS HDTV video decoder architecture employing efficient HW/SW partitioning." IEEE Transactions on Consumer Electronics 52, no. 4 (November 2006): 1447–53. http://dx.doi.org/10.1109/tce.2006.273169.


Dissertations / Theses on the topic "SW partitioning"

1. Bjärmark, Joakim, and Marco Strandberg. "Hardware Accelerator for Duo-binary CTC decoding: Algorithm Selection, HW/SW Partitioning and FPGA Implementation." Thesis, Linköping University, Department of Electrical Engineering, 2006. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-7902.

Abstract:

Wireless communication always struggles with errors in the transmission. The digital data received from the radio channel is often erroneous due to thermal noise and fading. The error rate can be lowered by using higher transmission power or an effective error-correcting code. Power consumption and limits on electromagnetic radiation are two of the main problems with handheld devices today, and an efficient error-correcting code will lower the transmission power and therefore also the power consumption of the device.

Duo-binary CTC is an improvement of the innovative turbo codes presented in 1996 by Berrou and Glavieux and is in use in many of today's standards for radio communication, e.g. IEEE 802.16 (WiMAX) and DVB-RCS. This report describes the development of a duo-binary CTC decoder and the different problems that were encountered during the process, including various design issues and algorithm choices made during the design.

An implementation in VHDL has been written for Altera's Stratix II S90 FPGA, and a reference model has been made in Matlab. The model has been used to simulate bit error rates for different implementation alternatives and as a bit-true reference for the hardware verification.

The final result is a duo-binary CTC decoder compatible with Altera's Stratix II designs and a reference model that can be used when simulating the decoder alone or the whole signal processing chain. Among the features of the hardware are that block sizes, puncture rates and the number of iterations are dynamically configured between blocks. Before synthesis it is possible to choose how many decoders will work in parallel and how many bits will represent the soft input. The circuit has been run at 100 MHz in the lab, which gives a throughput of around 50 Mbit/s with four decoders working in parallel. This report describes the implementation, including its development, background and future possibilities.

2. Hu, Tiejun, and Di Wu. "Design of Single Scalar DSP based H.264/AVC Decoder." Thesis, Linköping University, Department of Electrical Engineering, 2005. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-2812.

Abstract:

H.264/AVC is a new video compression standard designed for future broadband networks. Compared with former video coding standards such as MPEG-2 and MPEG-4 Part 2, it saves up to 40% in bit rate and provides important characteristics such as error resilience, stream switching, etc. However, the improvement in performance also brings an increase in computational complexity, which requires more powerful hardware. At the same time, several image and video coding standards are currently in use, such as JPEG and MPEG-4. Although an ASIC design meets the performance requirement, it lacks flexibility for heterogeneous standards. Hence a reconfigurable DSP processor is more suitable for media processing, since it provides both real-time performance and flexibility.

Currently there are several single-scalar DSP processors on the market. Compared to media processors, which are generally SIMD or VLIW, a single-scalar DSP is cheaper and has a smaller area, while its performance for video processing is limited. In this thesis, a method to improve the performance of a single-scalar DSP by attaching hardware accelerators is proposed. The bottleneck for performance improvement is investigated, and the upper limit of acceleration of a given single-scalar DSP for H.264/AVC decoding is presented.

A behavioral model of the H.264/AVC decoder was first realized in pure software. Although real-time performance cannot be achieved with a pure software implementation, the computational complexity of its different parts was investigated, and the critical path in decoding was exposed by analyzing this first software design. Then both functional acceleration and addressing acceleration were investigated and designed to achieve real-time decoding performance within an available clock frequency of 200 MHz.

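
The thesis's 200 MHz real-time target boils down to a cycle-budget calculation of the kind sketched below. The frame rate and profiled cycle count are invented numbers, not the thesis's measurements:

```python
# Real-time budget check for a software video decoder: at 200 MHz and
# 30 frames/s, each frame may spend at most 200e6 / 30 cycles; the
# required accelerator speedup follows directly.
clock_hz = 200e6
fps = 30
budget = clock_hz / fps                  # ~6.67M cycles per frame
profiled = 20e6                          # assumed cycles/frame, pure software
speedup_needed = profiled / budget
print(f"budget {budget:.2e} cycles/frame, need {speedup_needed:.1f}x speedup")
```
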
3. Nilsson, Per. "Hardware/Software co-design for JPEG2000." Thesis, Linköping University, Department of Electrical Engineering, 2006. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-5796.

Abstract:

For demanding applications, for example image or video processing, there may be computations that aren't very suitable for digital signal processors. While a DSP processor is appropriate for some tasks, the instruction set could be extended in order to achieve higher performance for the tasks that such a processor normally isn't designed for. The platform used in this project is flexible in the sense that new hardware can be designed to speed up certain computations.

This thesis analyzes the computationally complex parts of JPEG2000. In order to achieve sufficient performance for JPEG2000, there may be a need for hardware acceleration.

First, a JPEG2000 decoder was implemented for a DSP processor in assembler. When the firmware had been written, the cycle consumption of its parts was measured and estimated. From this analysis, the bottlenecks of the system were identified. Furthermore, new processor instructions are proposed that could be implemented for this system. Finally, the performance improvements are estimated.

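
Measuring per-part cycle consumption matters because Amdahl's law caps what any new instruction can buy: if a fraction f of the decoder's cycles falls in the accelerated part and that part is sped up s times, the overall speedup is bounded as below (illustrative numbers, not figures from the thesis):

```latex
S_{\text{overall}} \;=\; \frac{1}{(1-f) + f/s},
\qquad \text{e.g. } f = 0.6,\; s = 10 \;\Rightarrow\; S_{\text{overall}} \approx 2.2 .
```
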
4. Andersson, Mikael, and Per Karlström. "Parallel JPEG Processing with a Hardware Accelerated DSP Processor." Thesis, Linköping University, Department of Electrical Engineering, 2004. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-2615.

Abstract:

This thesis describes the design of fast JPEG processing accelerators for a DSP processor.

Certain computation tasks are moved from the DSP processor to hardware accelerators. The accelerators are slave co-processing machines controlled via a new instruction set. Clock-cycle count and power consumption are reduced by utilizing the custom-built hardware, which can perform the tasks in fewer clock cycles and run several tasks in parallel. This reduces the total number of clock cycles needed.

First a decoder and an encoder were implemented in DSP assembler. The cycle consumption of the parts was measured and from this the hardware/software partitioning was done. Behavioral models of the accelerators were then written in C++ and the assembly code was modified to work with the new hardware. Finally, the accelerators were implemented using Verilog.

The accelerator instruction set was extended following a custom design flow.

5. Kandasamy, Santheeban. "Dynamic HW/SW Partitioning: Configuration Scheduling and Design Space Exploration." Thesis, 2007. http://hdl.handle.net/10012/3042.

Abstract:
Hardware/software partitioning is a process that occurs frequently in embedded system design. It is the procedure of determining whether a part of a system should be implemented in software or hardware. This dissertation is a study of hardware/software partitioning and the use of scheduling algorithms to improve the performance of dynamically reconfigurable computing devices. Reconfigurable computing devices are devices that are adaptable at the logic level to solve specific problems [Tes05]. One example of a reconfigurable computing device is the field programmable gate array (FPGA). The emergence of dynamically reconfigurable FPGAs made it possible to configure FPGAs at runtime. Most current approaches use a simple on-demand configuration scheduling algorithm, which reconfigures the FPGA at runtime whenever a configuration is needed and is found not to be loaded. The problem with this approach to dynamic reconfiguration is the reconfiguration time overhead, i.e. the time it takes to reconfigure the FPGA with a new configuration at runtime. Configuration caches and partial configuration have been proposed as possible solutions, but these techniques suffer from various limitations. The emergence of dynamically reconfigurable FPGAs also made it possible to perform dynamic hardware/software partitioning (DHSP), the procedure of determining at runtime whether a computation should be performed using its software or hardware implementation. The drawback of performing DHSP using configurations generated at runtime is that the profiling and the dynamic generation of configurations require profiling-tool and synthesis-tool access at runtime. This study proposes that configuration scheduling algorithms which perform DHSP using statically generated configurations can be developed to combine the advantages and reduce the major disadvantages of current approaches. A case study is used to compare and evaluate the tradeoffs between the existing approach to dynamic reconfiguration and the proposed DHSP configuration-scheduling approach, and a simulation model is developed to examine the performance of the various configuration scheduling algorithms. First, the difference in execution time between the approaches is analyzed. Afterwards, other important design criteria such as power consumption, energy consumption, area requirements and unit cost are analyzed and estimated, and business and marketing considerations such as time to market and development cost are taken into account. The study illustrates how different types of DHSP configuration scheduling algorithms can be implemented and how their performance can be evaluated using a variety of software applications. It is also shown how to evaluate which approach is more advantageous in a given situation by determining the tradeoffs that exist between them, and the underlying factors that determine when each design alternative is preferable are identified and analyzed. The study shows that configuration scheduling algorithms which perform DHSP using statically generated configurations can combine the advantages and reduce some major disadvantages of current approaches, and that there are situations where such algorithms are more advantageous than the alternatives.
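
The reconfiguration-overhead problem the dissertation starts from can be reproduced with a few lines of simulation. This sketch is not the dissertation's model: the on-demand policy, the fixed reconfiguration penalty, and the LRU configuration cache are simplifying assumptions with invented numbers:

```python
RECONF_TIME = 5.0            # ms per reconfiguration (assumed)

def run(trace, exec_time, slots=1):
    """trace: sequence of configuration ids; exec_time: id -> task time (ms).
    Returns total time under an on-demand policy with an LRU cache of `slots`."""
    loaded, total = [], 0.0
    for cfg in trace:
        if cfg not in loaded:
            total += RECONF_TIME                     # pay reconfiguration overhead
            loaded.append(cfg)
            if len(loaded) > slots:
                loaded.pop(0)                        # evict least recently used
        else:
            loaded.remove(cfg); loaded.append(cfg)   # refresh LRU order
        total += exec_time[cfg]
    return total

trace = ["fir", "fft", "fir", "fft", "fir", "aes"]
times = {"fir": 2.0, "fft": 3.0, "aes": 4.0}
print(run(trace, times, slots=1))   # thrashing: reconfigures on every switch
print(run(trace, times, slots=2))   # a 2-slot cache removes most of the overhead
```
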
6. Chen, Chin-Yang (陳金洋). "HW/SW Partitioning and Pipelined Scheduling Using Integer Linear Programming." Thesis, 2005. http://ndltd.ncl.edu.tw/handle/13997876798330032318.

Abstract:
Master's thesis, National Sun Yat-sen University, Department of Computer Science and Engineering, ROC academic year 93 (2004-05).
The primary design goal of many embedded systems for multimedia applications is usually to meet the performance requirement at minimum cost. In this thesis, we propose two different ILP-based approaches for hardware/software (HW/SW) partitioning and pipelined scheduling of embedded systems for multimedia applications. One ILP approach solves the HW/SW partitioning and pipelined scheduling problems simultaneously. The other separates them into two phases: the first phase focuses on the HW/SW partitioning and mapping problem, and the second solves the pipelined scheduling problem. The two ILP approaches not only partition and map each computation task of a particular multimedia application onto a component of the heterogeneous multiprocessor architecture, but also schedule and pipeline the execution of these computation tasks while considering communication time. For the first ILP model, the objective is to minimize the total component cost and the number of pipeline stages subject to the throughput constraint. In the second approach, the objectives of the first and second phases are to minimize the total component cost and the number of pipeline stages, respectively, subject to the throughput constraint. Finally, experiments on three real multimedia applications (JPEG encoder, MP3 decoder, wavelet video encoder) are used to demonstrate the effectiveness of the proposed approaches.
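
Stripped of the pipelining and communication terms, the flavor of the first model can be written as a 0-1 ILP over mapping variables (our simplified reading for illustration, not the thesis's full formulation): x_{t,p} = 1 if task t runs on component p, y_p = 1 if component p is purchased at cost c_p, e_{t,p} is the execution time of t on p, and II is the initiation interval implied by the throughput constraint.

```latex
\begin{align*}
\min\quad & \sum_{p} c_p\, y_p \;+\; \lambda \cdot n_{\text{stages}} \\
\text{s.t.}\quad
& \textstyle\sum_{p} x_{t,p} = 1 \quad \forall t
      && \text{each task mapped to exactly one component} \\
& x_{t,p} \le y_p \quad \forall t,p
      && \text{only purchased components may be used} \\
& \textstyle\sum_{t} e_{t,p}\, x_{t,p} \le II \quad \forall p
      && \text{per-component load fits the initiation interval} \\
& x_{t,p},\, y_p \in \{0,1\}
\end{align*}
```
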
7. Lin, Lan-Hsin (林藍芯). "A Novel Approach of HW/SW Partitioning for Embedded Multiprocessor Systems." Thesis, 2004. http://ndltd.ncl.edu.tw/handle/17996773602461302920.

Abstract:
Master's thesis, National Chung Cheng University, Graduate Institute of Computer Science and Information Engineering, ROC academic year 92 (2003-04).
To shorten the time-to-market cycle, the codesign of hardware and software has become one of the kernel technologies in modern embedded systems. To achieve this objective, we must develop the hardware and software concurrently and begin the software design targeting "virtual hardware platforms" before the hardware platform is available. This can lead to better system design and reduce the risks that arise from rapid changes in system specifications. An incorrect HW/SW partitioning will result in a time-consuming design and expensive optimizations of the whole system. Therefore, how to partition the system into hardware and software parts has become one of the critical issues at the system level. This thesis presents a novel HW/SW partitioning approach targeting embedded systems consisting of multiple processors under time, area, and power constraints. Our approach is two-fold: a partitioning phase and a scheduling phase. In the partitioning phase, for an embedded system with n processors, recursive spectral bisection (RSB) is used to partition an application program into n blocks, which are then mapped to software components. We move tasks from software components to hardware components in order to meet the deadline constraint. In the scheduling phase, we derive an approach to adapt the load on each processor by exchanging tasks between hardware and software components, not only to meet the deadline constraint of the system but also to reduce its cost. Finally, we conclude and describe the work we will continue in the near future.
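
The partitioning phase's "move tasks to hardware until the deadline holds" step can be sketched as a greedy loop. This toy uses a serial makespan and a time-saved-per-area heuristic, both simplifying assumptions; the thesis's multiprocessor scheduling and RSB pre-partitioning are not modeled:

```python
def meet_deadline(tasks, deadline):
    """Greedy repair step: start all tasks in software, then move the task
    with the best time-saved-per-area ratio to hardware until the deadline
    is met. tasks: list of (sw_time, hw_time, hw_area); returns (hw_set, area)."""
    in_hw, area = set(), 0
    total = sum(t[0] for t in tasks)                  # all-software makespan
    while total > deadline:
        candidates = [i for i in range(len(tasks)) if i not in in_hw]
        if not candidates:
            raise ValueError("deadline unreachable even with full hardware")
        i = max(candidates,
                key=lambda k: (tasks[k][0] - tasks[k][1]) / tasks[k][2])
        in_hw.add(i)
        total -= tasks[i][0] - tasks[i][1]            # time saved by the move
        area += tasks[i][2]
    return in_hw, area

# (sw_time, hw_time, hw_area) per task -- invented values.
tasks = [(9, 3, 5), (7, 2, 4), (5, 2, 3), (8, 4, 6)]
print(meet_deadline(tasks, deadline=18))
```
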
8. Silva, Sónia. "Strain partitioning and the seismicity distribution within a transpressive plate boundary: SW Iberia-NW Nubia." Doctoral thesis, 2017. http://hdl.handle.net/10451/30387.

Abstract:
Doctoral thesis in Geology (Internal Geodynamics), Universidade de Lisboa, Faculdade de Ciências, 2017.
The Gulf of Cadiz offshore SW Iberia is an area linked with episodic destructive seismic and tsunamigenic events, such as the M~8.8 1st November 1755 Lisbon earthquake, among others. The association of active faults with this kind of high-magnitude event has been extensively studied, especially through the contribution of several international projects over more than two decades. However, the meaning of the persistent small- to intermediate-magnitude seismicity recognized in this region is still not fully understood. This is at least partly related to the lack of accurate hypocenter locations for these events, resulting from the asymmetrical geographical distribution of the permanent seismic network. One of the main purposes of the NEAREST project (Integrated observation from NEAR shore sourcES of Tsunamis: towards an early warning system, GOCE contract n. 037110) was the identification and characterization of seismogenic and tsunamigenic structures in the Gulf of Cadiz area, source region of the Lisbon 1755 earthquake and tsunami. To address this problem, 24 broadband ocean-bottom seismometers (OBS) and the seafloor multiparametric station GEOSTAR (Geophysical and Oceanographic Station for Abyssal Research) acquired passive seismic data in this region between August 2007 and July 2008. The results delivered a detailed record of the local seismicity, revealing 3 main clusters of earthquakes, two of them coinciding with the locations of the 3 largest instrumental earthquakes in the area: i) the 28th February 1969 (Mw~8.0); ii) the 12th February 2007 (Mw=6.0); and iii) the 17th December 2009 (Mw=5.5). Focal mechanisms show a mixed pattern, mostly strike-slip and reverse dip-slip with very few normal mechanisms. The results show that most of the recorded events are located in the mantle (at depths between 30 and 60 km). This implies the existence of tectonically active structures located much deeper than the ones mapped by multichannel seismic reflection. A thorough analysis shows that the seismicity clusters are offset with respect to the upper crustal active thrusts. The wide range of focal mechanism solutions also implies that the related source processes are complex. This can reflect the interaction of different active geological features, such as faults and rheological boundaries. To understand these new results in the context of the seismotectonics of the Gulf of Cadiz, a review of available geophysical data (reflection and refraction seismic profile interpretations) in this area is presented, as well as novel work on seismic reflection profile IAM GB1 across a rheologic boundary and seismicity cluster. Our study shows that the seismicity clusters are located at fault intersections mapped at the seafloor and shallow crust, suggesting that the crustal tectonic faults are replicated in the lithospheric mantle. These fault interferences are associated with boundaries of lithospheric domains prone to localizing stress and seismic strain. Active crustal faults are either locked or move through slow aseismic slip. Frictional slip in crustal faults is probably limited to high-magnitude earthquakes. Serpentinization probably induces tectonic decoupling, limiting micro-seismicity to depths below the serpentinized layer. It is expected that during high-magnitude events seismic rupture is favored by weakening mechanisms and propagates upwards through the serpentinized layer to the surface.
The results obtained in this work improve our knowledge of the local seismicity and related active faults in the Gulf of Cadiz area, giving a new contribution to the assessment of the seismic hazard along the Nubia-Iberia plate boundary in the Northeast Atlantic region.
The Gulf of Cadiz is a region of moderate seismicity, although high-magnitude events are known from both the historical and the instrumental record. The earthquake of 1 November 1755 is a paradigmatic example, with an estimated magnitude of 8.8 and an associated tsunami of Mt = 8.5. The earthquake of 28 February 1969, the most important instrumentally recorded event, had Ms = 7.9 and was accompanied by a small tsunami. More recently, the earthquakes of 12 February 2007 (Mw = 6.0) and 17 December 2009 (Mw = 5.5; EMSC, European-Mediterranean Seismological Centre) stand out. Nevertheless, the seismicity of this region is described as being of low to intermediate magnitude, distributed at depths shallower than 60 km. Correlating this seismicity with potential seismogenic structures in the Gulf of Cadiz was one of the objectives of the NEAREST project (Integrated observation from NEAR shore sourcES of Tsunamis: towards an early warning system, GOCE contract n. 037110). In this context, a more precise characterization and location of the seismic events in this region was needed, until now limited by the constraints inherent in the geographical distribution of the permanent land stations. A continuous data-acquisition campaign was therefore carried out using a network of ocean-bottom seismometers. The NEAREST seismic network operated continuously for 11 months, between August 2007 and July 2008, comprising 24 ocean-bottom seismometers (OBS) and the multiparametric station GEOSTAR. During the deployment and recovery cruises, the OBS and GEOSTAR operations were handled by the Alfred Wegener Institute for Polar and Marine Research and by the Istituto Nazionale di Geofisica e Vulcanologia (INGV), respectively. The OBS were built by K.U.M. Umwelt- und Meerestechnik Kiel GmbH, Germany, and incorporated Güralp CMG-40T broadband seismometers and a hydrophone. GEOSTAR is an observatory integrating several instruments for the continuous collection of geophysical and oceanographic data, including the three-component broadband seismometer and the hydrophone used in this campaign. The land stations, operated by the Instituto Português do Mar e da Atmosfera (IPMA) and the Instituto Dom Luiz (IDL), also consist of three-component broadband seismometers; their records were used only to constrain the focal-mechanism solutions, and future work foresees their inclusion in the location of the events identified by the NEAREST network. During the acquisition period, about 270 local earthquakes were recorded by the land network within the area delimited by the NEAREST network. While the NEAREST network was operating, about 750 events observed at more than 3 stations were identified, of which 590 were located within the NEAREST network area. Hypocentral locations were tested using different methodologies and velocity models: a) joint inversion of hypocentral positions and station corrections; b) the double-difference method; and c) joint inversion of the velocity model, hypocentral locations and station corrections. The final catalogue includes 443 events identified at more than 6 stations and located within the NEAREST network area. In general, most hypocentres are located at more than 30 km depth, hence in the mantle. Local magnitudes range between 1.2 and 4.8.
The epicentral and hypocentral locations based on the NEAREST network diverge from the solutions known for the land network (provided by IPMA), being shifted to the SW and deeper; the difference in depth can reach 40 km. The NEAREST campaign allowed the identification of a large number of events not detected by the land network, and a redefinition of the distribution of seismicity in the region, until then considered diffuse. From these results it was possible to recognize 3 seismicity clusters, two of them coinciding with 3 of the largest events in the instrumental record: the earthquakes of 28 February 1969 (Mw~8.0) and 12 February 2007 (Mw = 6.0) in the vicinity of the Horseshoe Fault, and that of 17 December 2009 (ML = 6.0) in the São Vicente canyon region. The focal mechanisms of the NEAREST catalogue are consistent with these events as well as with moment-tensor solutions published for this region. Most of the events are located in the São Vicente canyon cluster, with hypocentres at depths between 20 and 55 km. The epicentre distribution shows a ≈NE-SW alignment along the São Vicente canyon, extending towards the NE limit of the Horseshoe Fault. The dominant focal mechanisms are strike-slip and oblique, combining strike-slip motion with a smaller reverse component; rare normal-fault events were also recorded. The maximum compression is approximately sub-parallel to SHmax, trending ≈NW-SE. The epicentres of the cluster SW of the Horseshoe Fault are aligned approximately NW-SE, sub-parallel to the regional SHmax direction; here the hypocentres are deeper, between 30 and 55 km. The focal mechanisms are mostly pure strike-slip, with some reverse-fault events and rare normal-fault solutions; notably, the strike-slip solutions frequently present a plane sub-parallel to the orientation of the SWIM strike-slip faults (≈WNW-ESE to E-W). The maximum compression is approximately NW-SE and NNW-SSE to the W and E of the cluster, respectively, once again coincident with the SHmax directions. In the Gorringe Bank cluster most earthquakes are located on the SW edge of this submarine relief, sub-parallel to the Gorringe Fault; the events are shallower than in the other two clusters, mostly above 40 km. The focal mechanisms are mostly strike-slip and reverse, with a few normal-fault earthquakes also recorded; the maximum compression direction and SHmax are NNW-SSE. The predominantly mantle location of these events is one of the main results of this work. In this context, given the depth of the seismic events, correlating the seismicity with seismogenic structures in the Gulf of Cadiz region is particularly complex. This comparison was developed on the basis of the available seismic reflection and refraction data. Our study indicates that the seismicity appears to be concentrated in fault-interference zones located in the lithospheric subcrustal mantle; these should replicate the pattern observed at crustal levels and seem to coincide with transitions between different lithospheric domains.
These fault-interference zones should be areas favourable to the accumulation of stress and seismic strain. Active crustal faults should be either locked or moving aseismically, with seismic slip possibly associated only with earthquakes of larger magnitude. The existence of serpentinized levels in the Gulf of Cadiz is supported by seismic refraction data and deep boreholes; these may act as detachment planes for the large reverse faults, accommodating aseismic motion and preventing micro-seismicity from propagating to crustal levels. During high-magnitude earthquakes, these serpentinized levels should act as weakened, low-friction zones, favouring the propagation of the seismic rupture up to the surface. The results obtained in this work improve our knowledge of the seismicity and its relation to active faults in the plate-boundary region of the Gulf of Cadiz, contributing to the study of the seismic hazard associated with devastating earthquakes.
9. Gomez, Carolina Andrea. "Clastic wedge development and sediment budget in a source-to-sink transect (Late Campanian western interior basin, SW Wyoming and N Colorado)." 2009. http://hdl.handle.net/2152/7677.

Abstract:
The problem of how sand and mud were distributed downslope, within linked alluvial-brackish water-marine shoreline systems of an extensive clastic wedge, is addressed here. The Iles clastic wedge accumulated over a time period of a few million years (my), and its component high-frequency regressive-transgressive sequences have durations of a few hundred thousand years (ky). The sediment partitioning study provides insight into where the thickest sandstones and mudstones were located, and generates a model that can be applied to improving the management of hydrocarbon or water resources. A 300 km 2-D study transect across the Iles clastic wedge in SW Wyoming and N Colorado included subsurface well-log information and outcrop stratigraphic columns. This information was used to correlate high-frequency sequences across several hundred kilometers, characterize depositional processes from proximal to distal reaches, develop a sediment partitioning model, and understand the role of the likely drivers in the development of the wedge and its internal sequences. The main results of this study are: (1) The Iles clastic wedge spans 3 my (500 m thick) and is composed internally of 11 sequences of 200-400 ky, each of which has significant regressive-transgressive transits of up to 90 km. Sediment partitioning analysis shows that within the regressive limb of the large wedge, the component regressive compartments tend to thicken basinwards, whereas transgressive compartments thicken landwards. This geometry is driven by preferential erosion in proximal areas during regression, bypassing much sediment to the marine shorelines, and by transgressive backfilling into proximal areas previously eroded more deeply. (2) The greatest concentration of sands tends to be located in the proximal fluvial and estuarine facies of the transgressive compartments and within the medial shoreline/deltaic facies of the regressive compartments. (3) As the high-frequency sequences developed, the effectiveness of basinward sand partitioning reached a maximum near the peak regression level of the wedge, reflecting stronger erosion and sediment bypass during these times. (4) The development of the Iles clastic wedge was influenced by both tectonic and eustatic drivers, with important tectonic control in the upstream reaches. On a 4th-order timescale, the Iles wedge internal sequences were likely influenced mainly by eustasy.
10. Juliato, Marcio. "Fault Tolerant Cryptographic Primitives for Space Applications." Thesis, 2011. http://hdl.handle.net/10012/5876.

Abstract:
Spacecraft are extensively used by the public and private sectors to support a variety of services. Considering the cost and the strategic importance of these spacecraft, there has been an increasing demand to utilize strong cryptographic primitives to assure their security. Moreover, it is of utmost importance to consider fault tolerance in their designs due to the harsh environment found in space, while keeping area and power consumption low. The problem of recovering spacecraft from failures or attacks, and bringing them back to an operational and safe state, is crucial for reliability. Despite the recent interest in incorporating on-board security, there is limited research in this area. This research proposes a trusted hardware module approach for recovering spacecraft subsystems and their cryptographic capabilities after an attack or a major failure has happened. The proposed fault-tolerant trusted modules are capable of performing platform restoration as well as recovering the cryptographic capabilities of the spacecraft. This research also proposes efficient fault-tolerant architectures for the secure hash (SHA-2) and message authentication code (HMAC) algorithms. The proposed architectures are the first in the literature to detect and correct errors by using Hamming codes to protect the main registers. Furthermore, a quantitative analysis of the probability of failure of the proposed fault tolerance mechanisms is introduced. Based upon an extensive set of experimental results along with the probability-of-failure analysis, it was possible to show that the proposed fault-tolerant scheme based on information redundancy leads to a better implementation and provides better SEU resistance than traditional Triple Modular Redundancy (TMR). The fault-tolerant cryptographic primitives introduced in this research are of crucial importance for the implementation of on-board security in spacecraft.
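
A register protected by a single-error-correcting Hamming code works as in this sketch of the classic Hamming(7,4) arrangement (the general idea the abstract names; the thesis's actual register widths and code parameters may differ):

```python
# Single-error-correcting Hamming(7,4): 4 data bits -> 7 stored bits; any
# single bit flip (e.g. a single-event upset) is located by the syndrome
# and corrected on read.
def encode(d):                       # d: list of 4 bits [d1, d2, d3, d4]
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4                # parity over codeword positions 1,3,5,7
    p2 = d1 ^ d3 ^ d4                # parity over codeword positions 2,3,6,7
    p3 = d2 ^ d3 ^ d4                # parity over codeword positions 4,5,6,7
    return [p1, p2, d1, p3, d2, d3, d4]   # codeword positions 1..7

def decode(c):                       # c: 7 received bits
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3  # 0 = no error, else 1-based bit position
    if syndrome:
        c = c[:]
        c[syndrome - 1] ^= 1         # correct the flipped bit
    return [c[2], c[4], c[5], c[6]]

word = [1, 0, 1, 1]
stored = encode(word)
stored[4] ^= 1                       # inject a single-event upset
assert decode(stored) == word        # corrected on read
```
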

Book chapters on the topic "SW partitioning"

1. Ray, Abhijit, Wu Jigang, and Thambipillai Srikanthan. "Knapsack Model and Algorithm for HW/SW Partitioning Problem." In Computational Science - ICCS 2004, 200–205. Berlin, Heidelberg: Springer Berlin Heidelberg, 2004. http://dx.doi.org/10.1007/978-3-540-24685-5_25.

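
This chapter's abstract isn't reproduced above, but the knapsack view its title refers to is the textbook one: with a hardware area budget, choosing which tasks to move to hardware so as to maximize total time saved is a 0-1 knapsack problem. A minimal DP sketch with invented data:

```python
def knapsack_partition(saving, area, budget):
    """0-1 knapsack DP: saving[i] = sw_time[i] - hw_time[i] for task i,
    area[i] = its hardware cost; maximize saved time within the budget."""
    n = len(saving)
    dp = [[0] * (budget + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for a in range(budget + 1):
            dp[i][a] = dp[i - 1][a]
            if area[i - 1] <= a:
                dp[i][a] = max(dp[i][a], dp[i - 1][a - area[i - 1]] + saving[i - 1])
    # Backtrack to recover which tasks were moved to hardware.
    chosen, a = [], budget
    for i in range(n, 0, -1):
        if dp[i][a] != dp[i - 1][a]:
            chosen.append(i - 1)
            a -= area[i - 1]
    return dp[n][budget], sorted(chosen)

print(knapsack_partition(saving=[6, 5, 3, 4], area=[5, 4, 3, 6], budget=10))
```
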
2. Iguider, Adil, Mouhcine Chami, Oussama Elissati, and Abdeslam En-Nouaary. "Embedded Systems HW/SW Partitioning Based on Lagrangian Relaxation Method." In Innovations in Smart Cities and Applications, 149–60. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-74500-8_14.

3. Pu, Geguang, Zhang Chong, Zongyan Qiu, Zuoquan Lin, and He Jifeng. "A Hybrid Heuristic Algorithm for HW-SW Partitioning Within Timed Automata." In Lecture Notes in Computer Science, 459–66. Berlin, Heidelberg: Springer Berlin Heidelberg, 2006. http://dx.doi.org/10.1007/11892960_56.

4. Wu, Yue, Hao Zhang, and Hongbin Yang. "Research on Parallel HW/SW Partitioning Based on Hybrid PSO Algorithm." In Algorithms and Architectures for Parallel Processing, 449–59. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-03095-6_43.

5. Prevostini, Mauro, Francesco Balzarini, Atanas Nikolov Kostadinov, Srinivas Mankan, Aris Martinola, and Antonio Minosi. "UML-Based Specifications of an Embedded System Oriented to HW/SW Partitioning." In Languages for System Specification, 71–84. Boston, MA: Springer US, 2004. http://dx.doi.org/10.1007/1-4020-7991-5_5.

6. Kastrup, Bernardo, Jeroen Trum, Orlando Moreira, Jan Hoogerbrugge, and Jef van Meerbergen. "Compiling Applications for ConCISe: An Example of Automatic HW/SW Partitioning and Synthesis." In Lecture Notes in Computer Science, 695–706. Berlin, Heidelberg: Springer Berlin Heidelberg, 2000. http://dx.doi.org/10.1007/3-540-44614-1_74.

7. "Fundamentals and HW/SW Partitioning." In Image Processing for Embedded Devices, edited by S. Battiato, G. Puglisi, A. Bruna, A. Capra, and M. Guarnera, 1–9. Bentham Science Publishers, 2012. http://dx.doi.org/10.2174/978160805170011001010001.

8. Abdelhalim, M. B., and S. E. D. Habib. "Particle Swarm Optimization for HW/SW Partitioning." In Particle Swarm Optimization. InTech, 2009. http://dx.doi.org/10.5772/6740.


Conference papers on the topic "SW partitioning"

1. Hardt, W. "An automated approach to HW/SW-codesign." In IEE Colloquium on Partitioning in Hardware-Software Codesigns. IEE, 1995. http://dx.doi.org/10.1049/ic:19950170.

2. Farmahini-Farahani, Amin, Mehdi Kamal, Sied Mehdi Fakhraie, and Saeed Safari. "HW/SW partitioning using discrete particle swarm." In Proceedings of the 17th Great Lakes Symposium on VLSI (GLSVLSI '07). New York: ACM Press, 2007. http://dx.doi.org/10.1145/1228784.1228870.

3. Halimic, Mirsad, and Aida Halimic. "Vendor supplied development environments based HW/SW partitioning." In 2009 IEEE GCC Conference & Exhibition, "Innovative Engineering for Sustainable Environment". IEEE, 2009. http://dx.doi.org/10.1109/ieeegcc.2009.5734248.

4. Chehida, K. Ben, and M. Auguin. "HW/SW partitioning approach for reconfigurable system design." In Proceedings of the International Conference on Compilers, Architecture, and Synthesis for Embedded Systems (CASES '02). New York: ACM Press, 2002. http://dx.doi.org/10.1145/581630.581670.

5. Banerjee, Sudarshan, and Nikil Dutt. "Efficient search space exploration for HW-SW partitioning." In Proceedings of the 2nd IEEE/ACM/IFIP International Conference on Hardware/Software Codesign and System Synthesis (CODES+ISSS '04). New York: ACM Press, 2004. http://dx.doi.org/10.1145/1016720.1016752.

6. Henkel, Jörg, and Yanbing Li. "Energy-conscious HW/SW-partitioning of embedded systems." In Proceedings of the Sixth International Workshop on Hardware/Software Codesign (CODES/CASHE '98). New York: ACM Press, 1998. http://dx.doi.org/10.1145/278241.278292.

7. Liu, Wei, and Xuejie Wang. "An AVS VLD architecture based on HW/SW partitioning." In 2011 International Conference on Transportation and Mechanical & Electrical Engineering (TMEE). IEEE, 2011. http://dx.doi.org/10.1109/tmee.2011.6199481.

8. Weiss, Shlomo, and Shay Beren. "HW/SW partitioning of an embedded instruction memory decompressor." In Proceedings of the Ninth International Symposium on Hardware/Software Codesign (CODES '01). New York: ACM Press, 2001. http://dx.doi.org/10.1145/371636.371668.

9. Han, Honglei, Wenju Liu, Jigang Wu, and Hui Li. "Framework for HW/SW partitioning and scheduling on MPSoCs." In 2010 International Conference on Computer and Information Application (ICCIA). IEEE, 2010. http://dx.doi.org/10.1109/iccia.2010.6141566.

10. Li, Letitia W., Florian Lugou, and Ludovic Apvrille. "Security-aware Modeling and Analysis for HW/SW Partitioning." In Proceedings of the 5th International Conference on Model-Driven Engineering and Software Development (MODELSWARD 2017). SCITEPRESS - Science and Technology Publications, 2017. http://dx.doi.org/10.5220/0006119603020311.
