Journal articles on the topic 'Parallel code mapping'

Consult the top 50 journal articles for your research on the topic 'Parallel code mapping.'

1

Proctor, Robert W., Huifang Wang, and Kim-Phuong L. Vu. "Influences of different combinations of conceptual, perceptual, and structural similarity on stimulus-response compatibility." Quarterly Journal of Experimental Psychology Section A 55, no. 1 (February 2002): 59–74. http://dx.doi.org/10.1080/02724980143000163.

Abstract:
This study evaluated the hypothesis that an increase in set-level stimulus-response compatibility produces facilitation for congruent mappings and interference for incongruent mappings. The degree of set-level compatibility was manipulated by varying combinations of conceptual, perceptual, and structural similarity. Experiment 1 varied perceptual similarity, by combining two stimulus codes (spatial, verbal) with two response modalities (manual, vocal) for orthogonal spatial dimensions, which have structural similarity. The element-level mapping effect did not vary as a function of the code-modality relation, in contrast to findings obtained with parallel spatial dimensions, which also have conceptual similarity. Experiment 2 manipulated combinations of conceptual and perceptual similarity by combining vertical and horizontal stimulus and response orientations, using verbal or spatial stimuli and vocal responses. The element-level mapping effect was larger for parallel than orthogonal orientations, with congruent mappings showing facilitation and incongruent mappings showing interference. The largest effect was facilitation for parallel orientations with the verbal-vocal set, consistent with the view that perceptual similarity contributes to performance primarily when responding with the identity of the stimulus. Our results indicate that conceptual similarity, but not perceptual similarity, produces the facilitation/interference pattern suggestive of automatic activation of the corresponding response regardless of mapping.
2

GRÉWAL, GARY WILLIAM, and CHARLES THOMAS WILSON. "MAPPING REFERENCE CODE TO IRREGULAR DSPS WITHIN THE RETARGETABLE, OPTIMIZING COMPILER COGEN(T)." International Journal of Computational Intelligence and Applications 03, no. 01 (March 2003): 45–64. http://dx.doi.org/10.1142/s146902680300080x.

Abstract:
Generating high quality code for embedded processors is made difficult by irregular architectures and highly encoded parallel instructions. Rather than dealing with the target machine at every stage of the compilation, a promising new methodology employs generic algorithms to optimize code for an idealized abstraction of the true target machine. This code, called reference code, is then mapped to the real instruction set by enhanced genetic algorithms. One perturbs the original schedule to find a number of alternative (parallel) instruction sequences, and the other evolves feasible register assignments, if possible, for each sequence. This paper describes the strategy for mapping idealized code into actual code. The COGEN(T) system employs this methodology to produce good code for different commercial DSPs and ASIPs.
3

Fu, Zuohui, Yikun Xian, Shijie Geng, Yingqiang Ge, Yuting Wang, Xin Dong, Guang Wang, and Gerard De Melo. "ABSent: Cross-Lingual Sentence Representation Mapping with Bidirectional GANs." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 05 (April 3, 2020): 7756–63. http://dx.doi.org/10.1609/aaai.v34i05.6279.

Abstract:
A number of cross-lingual transfer learning approaches based on neural networks have been proposed for the case when large amounts of parallel text are at our disposal. However, in many real-world settings, the size of parallel annotated training data is restricted. Additionally, prior cross-lingual mapping research has mainly focused on the word level. This raises the question of whether such techniques can also be applied to effortlessly obtain cross-lingually aligned sentence representations. To this end, we propose an Adversarial Bi-directional Sentence Embedding Mapping (ABSent) framework, which learns mappings of cross-lingual sentence representations from limited quantities of parallel data. The experiments show that our method outperforms several technically more powerful approaches, especially under challenging low-resource circumstances. The source code is available from https://github.com/zuohuif/ABSent along with relevant datasets.
4

Ikuta, Kai, Hiroyuki Maehara, Yuta Notsu, Kosuke Namekata, Taichi Kato, Shota Notsu, Soshi Okamoto, Satoshi Honda, Daisaku Nogami, and Kazunari Shibata. "Starspot Mapping with Adaptive Parallel Tempering. I. Implementation of Computational Code." Astrophysical Journal 902, no. 1 (October 13, 2020): 73. http://dx.doi.org/10.3847/1538-4357/abae5f.

5

NITSCHE, THOMAS. "LIFTING SEQUENTIAL FUNCTIONS TO PARALLEL SKELETONS." Parallel Processing Letters 12, no. 02 (June 2002): 267–84. http://dx.doi.org/10.1142/s0129626402000963.

Abstract:
This paper describes the transformation of (almost arbitrary) sequential functions on covers to parallel, collective operations (skeletons). This allows the direct re-use of existing, but sequential, code on parallel machines without the necessity to hand-code the desired parallel operations. A necessary pre-requisite for this skeleton lifting is the availability of a cover which holds the mapping information of the local subobjects, including their topology information. The lifting transformation distinguishes sequential values, which are available on each processor, from parallel values, which are only stored once. Accesses to parallel values are either ignored locally if they will be computed on another processor, or they induce communication messages to transfer the necessary data, depending on the behavior of the original function.
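To make the notion of a skeleton concrete (this is a generic illustration in C, not Nitsche's cover-based lifting, which additionally handles data distribution and communication between processors), a sequential per-element function can be reused through a simple "map" skeleton:

    #include <stddef.h>
    #include <stdio.h>

    /* A generic "map" skeleton: applies a sequential per-element function
     * to every element of an array. In a parallel skeleton library the loop
     * body could be executed collectively across processors; the user-supplied
     * function stays exactly as written for the sequential case. */
    static void map_skeleton(double *data, size_t n, double (*f)(double)) {
        for (size_t i = 0; i < n; i++)
            data[i] = f(data[i]);
    }

    /* Existing sequential code that we want to reuse unchanged. */
    static double scale_and_shift(double x) { return 2.0 * x + 1.0; }

    int main(void) {
        double v[4] = {0.0, 1.0, 2.0, 3.0};
        map_skeleton(v, 4, scale_and_shift);   /* the "lifted" collective operation */
        for (int i = 0; i < 4; i++)
            printf("%g\n", v[i]);
        return 0;
    }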
6

Știrb, Iulia. "Extending NUMA-BTLP Algorithm with Thread Mapping Based on a Communication Tree." Computers 7, no. 4 (December 3, 2018): 66. http://dx.doi.org/10.3390/computers7040066.

Abstract:
The paper presents a Non-Uniform Memory Access (NUMA)-aware compiler optimization for task-level parallel code. The optimization is based on the Non-Uniform Memory Access—Balanced Task and Loop Parallelism (NUMA-BTLP) algorithm (Ştirb, 2018). The algorithm determines the type of each thread in the source code based on a static analysis of the code. After assigning a type to each thread, NUMA-BTLP (Ştirb, 2018) calls the NUMA-BTDM mapping algorithm (Ştirb, 2016), which uses the PThreads routine pthread_setaffinity_np to set the CPU affinities of the threads (i.e., thread-to-core associations) based on their type. The algorithms improve thread mapping on NUMA systems by mapping threads that share data onto the same core(s), allowing fast access to L1 cache data. The paper shows that PThreads-based task-level parallel code which is optimized by NUMA-BTLP (Ştirb, 2018) and NUMA-BTDM (Ştirb, 2016) at compile time runs in a time- and energy-efficient way on NUMA systems. The results show that energy consumption is reduced by up to 5% at the same execution time for one of the tested real benchmarks, and by up to 15% for another benchmark running in an infinite loop. The algorithms can be used in real-time control systems such as client/server-based applications which require efficient access to shared resources. Most often, task parallelism is used in the implementation of the server and loop parallelism is used for the client.
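For readers unfamiliar with the affinity routine named above, a minimal Linux/glibc sketch of pinning a thread with pthread_setaffinity_np looks roughly as follows; the worker function and the hard-coded core number are invented for illustration, whereas NUMA-BTDM derives the actual thread-to-core mapping from the thread's type at compile time:

    #define _GNU_SOURCE
    #include <pthread.h>
    #include <sched.h>
    #include <stdio.h>

    /* Each worker pins itself to the core chosen for it (here passed in by
     * the caller; NUMA-BTDM would derive this core from the thread's type). */
    static void *worker(void *arg) {
        int core = *(int *)arg;
        cpu_set_t set;
        CPU_ZERO(&set);
        CPU_SET(core, &set);
        pthread_setaffinity_np(pthread_self(), sizeof(cpu_set_t), &set);
        printf("worker pinned to core %d, running on CPU %d\n", core, sched_getcpu());
        return NULL;
    }

    int main(void) {
        pthread_t t;
        int core = 0;                /* invented placement, for illustration only */
        pthread_create(&t, NULL, worker, &core);
        pthread_join(t, NULL);
        return 0;
    }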
7

Di Martino, Beniamino, and Antonio Esposito. "Automatic Dynamic Data Structures Recognition to Support the Migration of Applications to the Cloud." International Journal of Grid and High Performance Computing 7, no. 3 (July 2015): 1–22. http://dx.doi.org/10.4018/ijghpc.2015070101.

Abstract:
The work presented in this manuscript describes a methodology for the recognition of Dynamic Data structures, with a focus on Queues, Pipes and Lists. The recognition of such structures is used as a basis for the mapping of sequential code to Cloud Services, in order to support the semi-automatic restructuring of source software. The goal is to develop a complete methodology and a framework based on it to ease the efforts needed to port native applications to a Cloud Platform and simplify the relative complex processes. In order to achieve such an objective, the proposed technique exploits an intermediate representation of the code, consisting in parallel Skeletons and Cloud Patterns. Logical inference rules act on a knowledge base, built during the analysis of the source code, to guide the recognition and mapping processes. Both the inference rules and knowledge base are expressed in Prolog. A prototype tool for the automatic analysis of sequential source code and its mapping to a Cloud Pattern is also presented.
8

Bonati, Claudio, Enrico Calore, Simone Coscetti, Massimo D’Elia, Michele Mesiti, Francesco Negro, Sebastiano Fabio Schifano, Giorgio Silvi, and Raffaele Tripiccione. "Portable LQCD Monte Carlo code using OpenACC." EPJ Web of Conferences 175 (2018): 09008. http://dx.doi.org/10.1051/epjconf/201817509008.

Abstract:
Varying from multi-core CPU processors to many-core GPUs, the present scenario of HPC architectures is extremely heterogeneous. In this context, code portability is increasingly important for easy maintainability of applications; this is relevant in scientific computing where code changes are numerous and frequent. In this talk we present the design and optimization of a state-of-the-art production level LQCD Monte Carlo application, using the OpenACC directives model. OpenACC aims to abstract parallel programming to a descriptive level, where programmers do not need to specify the mapping of the code on the target machine. We describe the OpenACC implementation and show that the same code is able to target different architectures, including state-of-the-art CPUs and GPUs.
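As a flavour of the descriptive, directive-based style described above (a toy saxpy loop under assumed data sizes, not the LQCD code itself), an OpenACC-annotated C loop leaves the device mapping to the compiler:

    #include <stdio.h>

    #define N 1000000

    int main(void) {
        static float x[N], y[N];
        const float a = 2.0f;

        for (int i = 0; i < N; i++) { x[i] = 1.0f; y[i] = 2.0f; }

        /* The directive states *what* is parallel; the OpenACC compiler decides
         * how to map the loop onto the target (GPU gangs/vectors or CPU threads). */
        #pragma acc parallel loop copyin(x) copy(y)
        for (int i = 0; i < N; i++)
            y[i] = a * x[i] + y[i];

        printf("y[0] = %f\n", y[0]);
        return 0;
    }

Built with an OpenACC-capable compiler (e.g. nvc -acc) the loop is offloaded; with any other C compiler the pragma is ignored and the same code runs serially, which is the portability property the abstract emphasises.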
9

Jeong, Eunjin, Dowhan Jeong, and Soonhoi Ha. "Dataflow Model–based Software Synthesis Framework for Parallel and Distributed Embedded Systems." ACM Transactions on Design Automation of Electronic Systems 26, no. 5 (June 5, 2021): 1–38. http://dx.doi.org/10.1145/3447680.

Abstract:
Existing software development methodologies mostly assume that an application runs on a single device without concern about the non-functional requirements of an embedded system such as latency and resource consumption. Besides, embedded software is usually developed after the hardware platform is determined, since a non-negligible portion of the code depends on the hardware platform. In this article, we present a novel model-based software synthesis framework for parallel and distributed embedded systems. An application is specified as a set of tasks with the given rules for execution and communication. Having such rules enables us to perform static analysis to check some software errors at compile-time to reduce the verification difficulty. Platform-specific programs are synthesized automatically after the mapping of tasks onto processing elements is determined. The proposed framework is expandable to support new hardware platforms easily. The proposed communication code synthesis method is extensible and flexible to support various communication methods between devices. In addition, the fault-tolerant feature can be added by modifying the task graph automatically according to the selected fault-tolerance configurations by the user. The viability of the proposed software development methodology is evaluated with a real-life surveillance application that runs on six processing elements.
10

Wang, H. C., and C. K. Yuen. "A general framework to build new CPUs by mapping abstract machine code to instruction level parallel execution hardware." ACM SIGARCH Computer Architecture News 33, no. 4 (November 2005): 113–20. http://dx.doi.org/10.1145/1105734.1105750.

11

Edahiro, Masato, and Masaki Gondo. "Research on highly parallel embedded control system design and implementation method." Impact 2019, no. 10 (December 30, 2019): 44–46. http://dx.doi.org/10.21820/23987073.2019.10.44.

Abstract:
The pace of technology's advancements is ever-increasing and intelligent systems, such as those found in robots and vehicles, have become larger and more complex. These intelligent systems have a heterogeneous structure, comprising a mixture of modules such as artificial intelligence (AI) and powertrain control modules that facilitate large-scale numerical calculation and real-time periodic processing functions. Information technology expert Professor Masato Edahiro, from the Graduate School of Informatics at the Nagoya University in Japan, explains that concurrent advances in semiconductor research have led to the miniaturisation of semiconductors, allowing a greater number of processors to be mounted on a single chip, increasing potential processing power. 'In addition to general-purpose processors such as CPUs, a mixture of multiple types of accelerators such as GPGPU and FPGA has evolved, producing a more complex and heterogeneous computer architecture,' he says. Edahiro and his partners have been working on the eMBP, a model-based parallelizer (MBP) that offers a mapping system as an efficient way of automatically generating parallel code for multi- and many-core systems. This ensures that once the hardware description is written, eMBP can bridge the gap between software and hardware to ensure that not only is an efficient ecosystem achieved for hardware vendors, but the need for different software vendors to adapt code for their particular platforms is also eliminated.
12

T.V, Sushma, and Roopa M. "Hilbert space filling curve using scilab." International Journal of Engineering & Technology 7, no. 1.9 (March 1, 2018): 129. http://dx.doi.org/10.14419/ijet.v7i1.9.9748.

Abstract:
Space filling curves are widely used for the linear mapping of multi-dimensional space. This provides a new line of thinking for various applications in image processing, image compression being the most widely used. The paper highlights the locality-preserving property of the Hilbert space filling curve, which is essential in numerous applications such as image compression, numerical analysis of large arrays of data, parallel processing, and so on. A simple approach to generating the Hilbert space filling curve using Scilab code is presented.
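To make the mapping concrete, the widely known bit-manipulation routine that converts 2-D coordinates to a position along the Hilbert curve can be sketched in C as follows (the textbook algorithm, not the authors' Scilab code):

    #include <stdio.h>

    /* Rotate/flip a quadrant so that the lower-order bits are expressed
     * in the local frame of the sub-square. */
    static void rot(int n, int *x, int *y, int rx, int ry) {
        if (ry == 0) {
            if (rx == 1) {
                *x = n - 1 - *x;
                *y = n - 1 - *y;
            }
            int t = *x; *x = *y; *y = t;   /* swap x and y */
        }
    }

    /* Convert (x, y) on an n-by-n grid (n a power of two) to the distance d
     * along the Hilbert curve. Nearby cells get nearby d values, which is
     * the locality-preserving property highlighted in the abstract. */
    static long xy2d(int n, int x, int y) {
        long d = 0;
        for (int s = n / 2; s > 0; s /= 2) {
            int rx = (x & s) > 0;
            int ry = (y & s) > 0;
            d += (long)s * s * ((3 * rx) ^ ry);
            rot(n, &x, &y, rx, ry);
        }
        return d;
    }

    int main(void) {
        printf("%ld %ld %ld\n", xy2d(8, 0, 0), xy2d(8, 1, 2), xy2d(8, 7, 7));
        return 0;
    }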
13

Liu, Yu, and Yi Xiao. "Using GPU and OpenACC to Accelerate the Maze Optimal Routing Algorithm." Applied Mechanics and Materials 380-384 (August 2013): 1338–41. http://dx.doi.org/10.4028/www.scientific.net/amm.380-384.1338.

Abstract:
In order to improve the efficiency of the maze optimal routing problem, the GPU acceleration programming model OpenACC is used in this paper. By analyzing an algorithm which solves the maze problem based on the ant colony algorithm, we complete the task mapping onto the model. Through GPU acceleration, the ant colony search process is changed into parallel matrix operations. To decrease the memory-access overhead of the algorithm and increase operating speed, the data were rationally organized and stored for the GPU. Experiments on maze matrices of different scales show that the parallel algorithm greatly reduces the operation time. The speedup increases with the size of the matrix. In our experiments, the maximum speedup is about 6.1. The algorithm can solve larger matrices with a high level of processing performance by adding efficient OpenACC directives to serial code and organizing the data structures for parallel access.
14

McCarthy, Steven. "The Art Portrait, the Pixel and the Gene: Micro Construction of Macro Representation." Convergence: The International Journal of Research into New Media Technologies 11, no. 4 (November 2005): 60–71. http://dx.doi.org/10.1177//1354856505061054.

Abstract:
Digital images rely on the fineness of pixels to create an illusion of pictorial reality, with individual ‘picture elements’ sacrificing themselves in service of the overall image. The elemental binary code underlying digital pictures has its parallel in human genetic code: bits of information are stored in the DNA, itself consisting of binary chemical relationships. The nature of human identity - as translated by artistic representations of the face - is emerging from this intersection. The mapping of the human genome has had implications for socio-cultural constructions of identity, especially for race and hereditary characteristics. This paper examines three artists whose creative inquiry addresses the human face and its relationship to digitisation, identity and genetic code: painter Chuck Close, Photomosaics® software inventor Rob Silvers, and photographer Nancy Burson. Their varying imaging strategies all employ micro and macro relationships, yet each offers different models for representing human identity.
15

Miele, Antonio, Christian Pilato, and Donatella Sciuto. "A Simulation-Based Framework for the Exploration of Mapping Solutions on Heterogeneous MPSoCs." International Journal of Embedded and Real-Time Communication Systems 4, no. 1 (January 2013): 22–41. http://dx.doi.org/10.4018/jertcs.2013010102.

Abstract:
The efficient analysis and exploration of mapping solutions of a parallel application on heterogeneous Multi-Processor Systems-on-Chip (MPSoCs) is usually a challenging task in system-level design, in particular when the architecture integrates hardware cores that may expose reconfigurable features. This paper proposes a system-level design framework based on SystemC simulations for fulfilling this task, featuring (i) an automated flow for the generation of timing models for the hardware cores starting from the application source code, (ii) an enhanced simulation environment for SystemC architectures enabling the specification and modification of mapping choices only by changing an XML descriptor, and (iii) a flexible controller of the simulation environment supporting the exploration of various mapping solutions featuring a customizable engine. The proposed framework has been validated with a case study considering an image processing application to show the possibility of automatically exploring alternative solutions on a reconfigurable MPSoC platform.
16

Franklin, Rodney C. G., Jeffrey P. Jacobs, Christo I. Tchervenkov, and Marie J. Béland. "Bidirectional crossmap of the Short Lists of the European Paediatric Cardiac Code and the International Congenital Heart Surgery Nomenclature and Database Project." Cardiology in the Young 12, S2 (September 2002): 18–22. http://dx.doi.org/10.1017/s1047951100012221.

Abstract:
On 6 October, 2000, a meeting of representatives from the Association for European Paediatric Cardiology, the Society of Thoracic Surgeons, and the European Association for Cardiothoracic Surgery took place in Frankfurt, Germany to discuss the publications earlier that year of two separate systems of nomenclature for paediatric and congenital heart disease: the European Paediatric Cardiac Code and the International Congenital Heart Surgery Nomenclature and Database Project. It was agreed at this meeting that the Short Lists of both systems should be mapped to each other in a first attempt to gravitate toward a single system for describing cardiac defects and procedures related to the heart. The need for this mapping, the historical background of the two parallel nomenclature systems and the later ratification of the mapping process by the first International Summit on Nomenclature for Congenital Heart Disease on 27 May, 2001, in Toronto, Canada, are discussed in the current issue of Cardiology in the Young.
17

Martina, Maurizio, Andrea Molino, Fabrizio Vacca, Guido Masera, and Guido Montorsi. "High throughput implementation of an adaptive serial concatenation turbo decoder." Journal of Communications Software and Systems 2, no. 3 (April 5, 2017): 252. http://dx.doi.org/10.24138/jcomss.v2i3.288.

Abstract:
The complete design of a new high-throughput adaptive turbo decoder is described. The developed system is programmable in terms of block length, code rate and modulation scheme, which can be dynamically changed from frame to frame, according to varied channel conditions or user requirements. A parallel architecture with 16 concurrent SISOs has been adopted to achieve a decoding throughput as high as 35 Mbit/s with 10 iterations, while the error-correcting performance is within 1 dB of the capacity limit. The whole system, including the iterative decoder itself, de-mapping and de-puncturing units, as well as the input double buffer, has been mapped to a single FPGA device, running at 80 MHz, with an occupation of 54%.
18

Wang, Yong Jie, Hua Cheng Dou, Shi Jun Deng, Liang Yong Cheng, Li Wang, Jian Ping Li, and Ze Bing Zhou. "Research on Parallel Geo-Spatial Index Replication Strategy Based on PC Cluster System and its 3D Visualization Application." Applied Mechanics and Materials 241-244 (December 2012): 2969–75. http://dx.doi.org/10.4028/www.scientific.net/amm.241-244.2969.

Abstract:
In the field of spatial information, the amount of spatial data is becoming larger and larger, and every operation is becoming more and more complicated. Parallel techniques have gradually become a valid means of resolving this kind of complicated problem. Therefore, this paper studies the spatial partitioning method for massive data and a parallel geo-spatial index replication strategy, after fully considering the characteristics of a PC cluster based on a shared-nothing structure. After studying the excellent linear mapping characteristics of the Hilbert spatial ordering code, this paper applies it to the spatial partitioning of data and gives a concrete algorithm. In this algorithm, the clustering performance of spatial objects is considered, and data storage is balanced across the processing units, which greatly improves the processing efficiency of the parallel spatial database. Based on this, the paper builds a parallel geo-spatial index based on the R-tree and, after a deep analysis of current replica synchronization mechanisms, proposes a spatial index replica update mechanism based on a master replica and weak consistency, suitable for the parallel environment of a shared-nothing structure. It is a low-cost, message-based update mechanism that enhances parallel spatial data access on the PC cluster and improves the availability of the index data. It is also efficient in the field of 3D visualization applications. Experiments prove that the spatial partitioning strategy and the parallel geo-spatial index replication mechanism presented in this paper can improve the load balance of the system and enhance the performance of the whole system.
19

Ferner, Clayton S., and Robert G. Babb II. "Automatic Choice of Scheduling Heuristics for Parallel/Distributed Computing." Scientific Programming 7, no. 1 (1999): 47–65. http://dx.doi.org/10.1155/1999/898723.

Abstract:
Task mapping and scheduling are two very difficult problems that must be addressed when a sequential program is transformed into a parallel program. Since these problems are NP‐hard, compiler writers have opted to concentrate their efforts on optimizations that produce immediate gains in performance. As a result, current parallelizing compilers either use very simple methods to deal with task scheduling or they simply ignore it altogether. Unfortunately, the programmer does not have this luxury. The burden of repartitioning or rescheduling, should the compiler produce inefficient parallel code, lies entirely with the programmer. We were able to create an algorithm (called a metaheuristic), which automatically chooses a scheduling heuristic for each input program. The metaheuristic produces better schedules in general than the heuristics upon which it is based. This technique was tested on a suite of real scientific programs written in SISAL and simulated on four different network configurations. Averaged over all of the test cases, the metaheuristic out‐performed all eight underlying scheduling algorithms; beating the best one by 2%, 12%, 13%, and 3% on the four separate network configurations. It is able to do this, not always by picking the best heuristic, but rather by avoiding the heuristics when they would produce very poor schedules. For example, while the metaheuristic only picked the best algorithm about 50% of the time for the 100 Gbps Ethernet, its worst decision was only 49% away from optimal. In contrast, the best of the eight scheduling algorithms was optimal 30% of the time, but its worst decision was 844% away from optimal.
20

Gosse, Paul, and Karen Flanagan Hollebrands. "Technology Tips: April 2003." Mathematics Teacher 96, no. 4 (April 2003): 292–98. http://dx.doi.org/10.5951/mt.96.4.0292.

Abstract:
This month's tip centers on an alternative view of functions. Instead of perpendicular axes for domain and range, we explore parallel axes. This idea has been around for a while (see the references in Bridger and Bridger [2001] and in the “Surfing Note”), but we hope to breathe new life into this fascinating representation of functions with two easy-to-use programs for the TI-83 Plus. We provide an introduction to mapping diagrams (also called function diagrams) and the code for one program to produce them using the TI-83 Plus. Information about the second program will be given in “Technology Tips” in May. Both programs are available electronically, so users do not have to type the programs into their calculators.
22

Roteta, Ekhi, Aitor Bastarrika, Magí Franquesa, and Emilio Chuvieco. "Landsat and Sentinel-2 Based Burned Area Mapping Tools in Google Earth Engine." Remote Sensing 13, no. 4 (February 23, 2021): 816. http://dx.doi.org/10.3390/rs13040816.

Abstract:
Four burned area tools were implemented in Google Earth Engine (GEE), to obtain regular processes related to burned area (BA) mapping, using medium spatial resolution sensors (Landsat and Sentinel-2). The four tools are (i) the BA Cartography tool for supervised burned area over the user-selected extent and period, (ii) two tools implementing a BA stratified random sampling to select the scenes and dates for validation, and (iii) the BA Reference Perimeter tool to obtain highly accurate BA maps that focus on validating coarser BA products. Burned Area Mapping Tools (BAMTs) go beyond the previously implemented Burned Area Mapping Software (BAMS) because of GEE parallel processing capabilities and preloaded geospatial datasets. BAMT also allows temporal image composites to be exploited in order to obtain BA maps over a larger extent and longer temporal periods. The tools consist of four scripts executable from the GEE Code Editor. The tools’ performance was discussed in two case studies: in the 2019/2020 fire season in Southeast Australia, where the BA cartography detected more than 50,000 km2, using Landsat data with commission and omission errors below 12% when compared to Sentinel-2 imagery; and in the 2018 summer wildfires in Canada, where it was found that around 16,000 km2 had burned.
23

Wang, Degeng, and Michael Gribskov. "Examining the architecture of cellular computing through a comparative study with a computer." Journal of The Royal Society Interface 2, no. 3 (May 16, 2005): 187–95. http://dx.doi.org/10.1098/rsif.2005.0038.

Abstract:
The computer and the cell both use information embedded in simple coding, the binary software code and the quadruple genomic code, respectively, to support system operations. A comparative examination of their system architecture as well as their information storage and utilization schemes is performed. On top of the code, both systems display a modular, multi-layered architecture, which, in the case of a computer, arises from human engineering efforts through a combination of hardware implementation and software abstraction. Using the computer as a reference system, a simplistic mapping of the architectural components between the two is easily detected. This comparison also reveals that a cell abolishes the software–hardware barrier through genomic encoding for the constituents of the biochemical network, a cell's ‘hardware’ equivalent to the computer central processing unit (CPU). The information loading (gene expression) process acts as a major determinant of the encoded constituent's abundance, which, in turn, often determines the ‘bandwidth’ of a biochemical pathway. Cellular processes are implemented in biochemical pathways in parallel manners. In a computer, on the other hand, the software provides only instructions and data for the CPU. A process represents just sequentially ordered actions by the CPU and only virtual parallelism can be implemented through CPU time-sharing. Whereas process management in a computer may simply mean job scheduling, coordinating pathway bandwidth through the gene expression machinery represents a major process management scheme in a cell. In summary, a cell can be viewed as a super-parallel computer, which computes through controlled hardware composition. While we have, at best, a very fragmented understanding of cellular operation, we have a thorough understanding of the computer throughout the engineering process. The potential utilization of this knowledge to the benefit of systems biology is discussed.
24

Hong, Jeong Beom, Young Sik Lee, Yong Wook Kim, and Tae Hee Han. "Error-Vulnerable Pattern-Aware Binary-to-Ternary Data Mapping for Improving Storage Density of 3LC Phase Change Memory." Electronics 9, no. 4 (April 9, 2020): 626. http://dx.doi.org/10.3390/electronics9040626.

Abstract:
Multi-level cell (MLC) phase-change memory (PCM) is an attractive solution for next-generation memory that is composed of resistance-based nonvolatile devices. MLC PCM is superior to dynamic random-access memory (DRAM) with regard to scalability and leakage power. Therefore, various studies have focused on the feasibility of MLC PCM-based main memory. The key challenges in replacing DRAM with MLC PCM are low reliability, limited lifetime, and long write latency, which are predominantly affected by the most error-vulnerable data pattern. Based on the physical characteristics of the PCM, where the reliability depends on the data pattern, a tri-level-cell (3LC) PCM has significantly higher performance and lifetime than a four-level-cell (4LC) PCM. However, a storage density is limited by binary-to-ternary data mapping. This paper introduces error-vulnerable pattern-aware binary-to-ternary data mapping utilizing 3LC PCM without an error-correction code (ECC) to enhance the storage density. To mitigate the storage density loss caused by the 3LC PCM, a two-way encoding is applied. The performance degradation is minimized through parallel encoding. The experimental results demonstrate that the proposed method improves the storage density by 17.9%. Additionally, the lifetime and performance are enhanced by 36.1% and 38.8%, respectively, compared with those of a 4LC PCM with an ECC.
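The storage-density question turns on a simple counting argument: 3^7 = 2187 >= 2^11 = 2048, so 11 bits fit into 7 tri-level cells. The sketch below shows only this plain binary-to-ternary packing in C; the paper's contribution, the error-vulnerable-pattern-aware mapping with its two-way encoding, is not reproduced here:

    #include <stdio.h>

    /* Pack an 11-bit value into 7 ternary digits (tri-level cells).
     * 3^7 = 2187 >= 2^11 = 2048, so every 11-bit group fits. */
    static void bin11_to_ternary7(unsigned value, unsigned char cell[7]) {
        for (int i = 0; i < 7; i++) {        /* least-significant digit first */
            cell[i] = (unsigned char)(value % 3);
            value /= 3;
        }
    }

    /* Inverse mapping: read the 7 cells back into an 11-bit value. */
    static unsigned ternary7_to_bin11(const unsigned char cell[7]) {
        unsigned value = 0;
        for (int i = 6; i >= 0; i--)
            value = value * 3 + cell[i];
        return value;
    }

    int main(void) {
        unsigned char cells[7];
        bin11_to_ternary7(0x5A3, cells);              /* arbitrary 11-bit pattern */
        printf("round trip: 0x%X\n", ternary7_to_bin11(cells));
        return 0;
    }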
25

Kimmel, Michael. "Optimizing the analysis of metaphor in discourse." Review of Cognitive Linguistics 10, no. 1 (June 15, 2012): 1–48. http://dx.doi.org/10.1075/rcl.10.1.01kim.

Abstract:
This article presents a software-based methodology for studying metaphor in discourse, mainly within the framework of Conceptual Metaphor Theory (CMT). Despite a welcome recent swing towards methodological reflexivity, a detailed explication of the pros and cons of different procedures is still in order as far as qualitative research (i.e. a context-sensitive manual coding of a text corpus) is concerned. Qualitatively oriented scholars have to make difficult decisions revolving around the general research design, the transfer of linguistic theory into method, good workflow management, and the aimed at scope of analysis. My first task is to pinpoint typical tasks and demonstrate how they are optimally dealt with by using qualitative annotation software like ATLAS.ti. Software not only streamlines metaphor tagging itself, it systematizes the interpretive work from grouping text items into systematic/conceptual metaphor sets, via data surveys and checks, to quantitative comparisons and a cohesion-based analysis. My second task is to illustrate how a good research design can provide a step-wise procedure, offer systematic validation checks, keep the code system slim and many analytic options open. When we aim at complex data searches and want to handle high metaphor diversity I recommend compositional coding, i.e. tagging source and target domains separately (instead of adopting a “one mapping-one code” strategy). Furthermore, by tagging metaphors for image-schematic and rich semantic source domains in parallel, i.e. two-tier coding, we get multiple options for grouping metaphors into systematic sets.
26

Liebrock, Lorie M., and Ken Kennedy. "Automatic Data Distribution for Composite Grid Applications." Scientific Programming 6, no. 1 (1997): 95–113. http://dx.doi.org/10.1155/1997/174748.

Abstract:
Problem topology is the key to efficient parallelization support for partially regular applications. Specifically, problem topology provides the information necessary for automatic data distribution and regular application optimization of a large class of partially regular applications. Problem topology is the connectivity of the problem. This research focuses on composite grid applications and strives to take advantage of their partial regularity in the parallelization and compilation process. Composite grid problems arise in important application areas, e.g., reactor and aerodynamic simulation. Related physical phenomena are inherently parallel and their simulations are computationally intensive. We present algorithms that automatically determine data distributions for composite grid problems. Our algorithm's alignment and distribution specifications may be used as input to a High Performance Fortran program to apply the mapping for execution of the simulation code. These algorithms eliminate the need for user-specified data distribution for this large class of complex topology problems. We test the algorithms using a number of topological descriptions from aerodynamic and water-cooled nuclear reactor simulations. Speedup-bound predictions with and without communication, based on the automatically generated distributions, indicate that significant speedups are possible using these algorithms.
27

Andrejevic-Stosovic, Miona, and Vanco Litovski. "Hierarchical approach to diagnosis of electronic circuits using ANNs." Journal of Automatic Control 20, no. 1 (2010): 45–52. http://dx.doi.org/10.2298/jac1001045a.

Abstract:
In this paper, we apply artificial neural networks (ANNs) to the diagnosis of a mixed-mode electronic circuit. In order to tackle the circuit complexity and to reduce the number of test points hierarchical approach to the diagnosis generation was implemented with two levels of decision: the system level and the circuit level. For every level, using the simulation-before-test (SBT) approach, fault dictionary was created first, containing data relating the fault code and the circuit response for a given input signal. Also, hypercomputing was implemented, i.e. we used parallel simulation of large number of replicas of the original circuit with faults inserted to achieve fast creation of the fault dictionary. ANNs were used to model the fault dictionaries. At the topmost level, the fault dictionary was split into parts simplifying the implementation of the concept. During the learning phase, the ANNs were considered as an approximation algorithm to capture the mapping enclosed within the fault dictionary. Later on, in the diagnostic phase, the ANNs were used as an algorithm for searching the fault dictionary. A voting system was created at the topmost level in order to distinguish which ANN output is to be accepted as the final diagnostic statement. The approach was tested on an example of an analog-to-digital converter.
28

Goodman, M. L. "A three-dimensional, iterative mapping procedure for the implementation of an ionosphere-magnetosphere anisotropic Ohm's law boundary condition in global magnetohydrodynamic simulations." Annales Geophysicae 13, no. 8 (August 31, 1995): 843–53. http://dx.doi.org/10.1007/s00585-995-0843-z.

Abstract:
Abstract. The mathematical formulation of an iterative procedure for the numerical implementation of an ionosphere-magnetosphere (IM) anisotropic Ohm's law boundary condition is presented. The procedure may be used in global magnetohydrodynamic (MHD) simulations of the magnetosphere. The basic form of the boundary condition is well known, but a well-defined, simple, explicit method for implementing it in an MHD code has not been presented previously. The boundary condition relates the ionospheric electric field to the magnetic field-aligned current density driven through the ionosphere by the magnetospheric convection electric field, which is orthogonal to the magnetic field B, and maps down into the ionosphere along equipotential magnetic field lines. The source of this electric field is the flow of the solar wind orthogonal to B. The electric field and current density in the ionosphere are connected through an anisotropic conductivity tensor which involves the Hall, Pedersen, and parallel conductivities. Only the height-integrated Hall and Pedersen conductivities (conductances) appear in the final form of the boundary condition, and are assumed to be known functions of position on the spherical surface R=R1 representing the boundary between the ionosphere and magnetosphere. The implementation presented consists of an iterative mapping of the electrostatic potential ψ the gradient of which gives the electric field, and the field-aligned current density between the IM boundary at R=R1 and the inner boundary of an MHD code which is taken to be at R2>R1. Given the field-aligned current density on R=R2, as computed by the MHD simulation, it is mapped down to R=R1 where it is used to compute ψ by solving the equation that is the IM Ohm's law boundary condition. Then ψ is mapped out to R=R2, where it is used to update the electric field and the component of velocity perpendicular to B. The updated electric field and perpendicular velocity serve as new boundary conditions for the MHD simulation which is then used to compute a new field-aligned current density. This process is iterated at each time step. The required Hall and Pedersen conductances may be determined by any method of choice, and may be specified anew at each time step. In this sense the coupling between the ionosphere and magnetosphere may be taken into account in a self-consistent manner.
29

Nair, Manjusha, Jinesh Manchan Kannimoola, Bharat Jayaraman, Bipin Nair, and Shyam Diwakar. "Temporal constrained objects for modelling neuronal dynamics." PeerJ Computer Science 4 (July 23, 2018): e159. http://dx.doi.org/10.7717/peerj-cs.159.

Abstract:
Background Several new programming languages and technologies have emerged in the past few decades in order to ease the task of modelling complex systems. Modelling the dynamics of complex systems requires various levels of abstractions and reductive measures in representing the underlying behaviour. This also often requires making a trade-off between how realistic a model should be in order to address the scientific questions of interest and the computational tractability of the model. Methods In this paper, we propose a novel programming paradigm, called temporal constrained objects, which facilitates a principled approach to modelling complex dynamical systems. Temporal constrained objects are an extension of constrained objects with a focus on the analysis and prediction of the dynamic behaviour of a system. The structural aspects of a neuronal system are represented using objects, as in object-oriented languages, while the dynamic behaviour of neurons and synapses are modelled using declarative temporal constraints. Computation in this paradigm is a process of constraint satisfaction within a time-based simulation. Results We identified the feasibility and practicality in automatically mapping different kinds of neuron and synapse models to the constraints of temporal constrained objects. Simple neuronal networks were modelled by composing circuit components, implicitly satisfying the internal constraints of each component and interface constraints of the composition. Simulations show that temporal constrained objects provide significant conciseness in the formulation of these models. The underlying computational engine employed here automatically finds the solutions to the problems stated, reducing the code for modelling and simulation control. All examples reported in this paper have been programmed and successfully tested using the prototype language called TCOB. The code along with the programming environment are available at http://github.com/compneuro/TCOB_Neuron. Discussion Temporal constrained objects provide powerful capabilities for modelling the structural and dynamic aspects of neural systems. Capabilities of the constraint programming paradigm, such as declarative specification, the ability to express partial information and non-directionality, and capabilities of the object-oriented paradigm especially aggregation and inheritance, make this paradigm the right candidate for complex systems and computational modelling studies. With the advent of multi-core parallel computer architectures and techniques or parallel constraint-solving, the paradigm of temporal constrained objects lends itself to highly efficient execution which is necessary for modelling and simulation of large brain circuits.
30

Su, Hui, Hongliang Li, Baowen Hu, and Jiaqi Yang. "A Research on the Macroscopic and Mesoscopic Parameters of Concrete Based on an Experimental Design Method." Materials 14, no. 7 (March 26, 2021): 1627. http://dx.doi.org/10.3390/ma14071627.

Abstract:
Concrete is a composite material that has complex mechanical properties. The mechanical properties of each of its components are different at the mesoscopic scale. Studying the relationship between the macroscopic and mesoscopic parameters of concrete can help better understand its mechanical properties at these levels. When using the discrete element method to model the macro-mesoscopic parameters of concrete, their calibration is the first challenge. This paper proposes a numerical model of concrete using the particle discrete element software particle flow code (PFC). The mesoscopic parameters required by the model need to be set within a certain range for an orthogonal experimental design. We used the proposed model to perform numerical simulations as well as response surface design and analysis. This involved fitting a set of mapping relationships between the macro–micro parameters of concrete. An optimization model was established in the MATLAB environment. The program used to calibrate the mesoscopic parameters of concrete was written using the genetic algorithm, and its macro-micro parameters were inverted. The following three conclusions can be drawn from the orthogonal test: First, the tensile strength and shear strength of the parallel bond between the particles of mortar had a significant influence on the peak compressive strength of concrete, whereas the influence of the other parameters was not significant. Second, the elastic modulus of the parallel bonding between particles of mortar, their stiffness ratio and friction coefficient, and the elastic modulus and stiffness ratio of contact bonding in the interfacial transition zone had a significant influence on the elastic modulus, whereas the influence of the other parameters was not significant. Third, the elastic modulus, stiffness ratio, and friction coefficient of the particles of mortar as well as the ratio of the contact adhesive stiffness in their interfacial transition zone had a significant influence on Poisson’s ratio, whereas the influence of the other parameters was not significant. The fitting effect of the response surface design was good.
31

Колганов, А. С. "An experience of applying the parallelization regions for the step-by-step parallelization of software packages using the SAPFOR system." Numerical Methods and Programming (Vychislitel'nye Metody i Programmirovanie), no. 4 (October 8, 2020): 388–404. http://dx.doi.org/10.26089/nummet.v21r432.

Abstract:
The main difficulty in developing a parallel program for a cluster is the need to make global decisions on the distribution of data and computations, taking into account the properties of the entire program, and then doing the hard work of modifying the program and debugging it. A large amount of code, as well as its multi-module, multi-variant and multi-language nature, makes it difficult to make decisions on a consistent distribution of data and computations. The experience of using the previous SAPFOR system showed that, when parallelizing large programs and software packages for a cluster, one should be able to parallelize them gradually, starting with the most time-intensive fragments and gradually adding new fragments until the desired level of parallel program efficiency is reached. For this purpose, the previous system was completely redesigned and a new system, SAPFOR (System FOR Automated Parallelization), was created. To solve this problem, the method of incremental or partial parallelization is considered in this paper. The idea of this method is that not the entire program is subjected to parallelization, but only its parts (parallelization regions), where additional copies of the required data are created and distributed and the corresponding computations are performed. This paper also discusses the automated mapping of programs to a cluster using the proposed incremental parallelization method, using the example of the NPB (NAS Parallel Benchmarks) software package.
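Independently of SAPFOR's own generated output, the idea of a parallelization region can be pictured with a hand-written MPI fragment in which only the most time-consuming loop is distributed across the cluster while the rest of the program remains serial; a hypothetical sketch with an invented loop:

    #include <mpi.h>
    #include <stdio.h>

    #define N 1000000

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);
        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        /* --- parallelization region: only this hot loop is distributed --- */
        double local = 0.0;
        for (int i = rank; i < N; i += size)    /* cyclic split of iterations across ranks */
            local += 1.0 / (1.0 + (double)i);

        double total = 0.0;
        MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
        /* --- end of region: the remainder of the program runs on rank 0 only --- */

        if (rank == 0)
            printf("sum = %f\n", total);

        MPI_Finalize();
        return 0;
    }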
32

Zaharia, S., C. Z. Cheng, and K. Maezawa. "3-D force-balanced magnetospheric configurations." Annales Geophysicae 22, no. 1 (January 1, 2004): 251–65. http://dx.doi.org/10.5194/angeo-22-251-2004.

Abstract:
The knowledge of plasma pressure is essential for many physics applications in the magnetosphere, such as computing magnetospheric currents and deriving magnetosphere-ionosphere coupling. A thorough knowledge of the 3-D pressure distribution has, however, eluded the community, as most in situ pressure observations are either in the ionosphere or the equatorial region of the magnetosphere. With the assumption of pressure isotropy there have been attempts to obtain the pressure at different locations, by either (a) mapping observed data (e.g. in the ionosphere) along the field lines of an empirical magnetospheric field model, or (b) computing a pressure profile in the equatorial plane (in 2-D) or along the Sun-Earth axis (in 1-D) that is in force balance with the magnetic stresses of an empirical model. However, the pressure distributions obtained through these methods are not in force balance with the empirical magnetic field at all locations. In order to find a global 3-D plasma pressure distribution in force balance with the magnetospheric magnetic field, we have developed the MAG-3-D code that solves the 3-D force balance equation computationally. Our calculation is performed in a flux coordinate system in which the magnetic field is expressed in terms of Euler potentials. The pressure distribution is prescribed in the equatorial plane and is based on satellite measurements. In addition, computational boundary conditions for ψ surfaces are imposed using empirical field models. Our results provide 3-D distributions of magnetic field, plasma pressure, as well as parallel and transverse currents for both quiet-time and disturbed magnetospheric conditions. Key words. Magnetospheric physics (magnetospheric configuration and dynamics; magnetotail; plasma sheet)
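For context, the quasi-static force-balance system that such a calculation solves can be written in Euler-potential (flux) coordinates in the standard form below; the symbol α for the second Euler potential is an assumption here, since the abstract names only ψ:

    \[
      \mathbf{B} = \nabla\psi \times \nabla\alpha, \qquad
      \mathbf{J} \times \mathbf{B} = \nabla P, \qquad
      \nabla \times \mathbf{B} = \mu_0 \mathbf{J}, \qquad
      \nabla \cdot \mathbf{B} = 0 .
    \]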
33

Lew, Marshall. "Liquefaction evaluation guidelines for practicing engineering and geological professionals and regulators." Environmental and Engineering Geoscience 7, no. 4 (November 1, 2001): 301–20. http://dx.doi.org/10.2113/gseegeosci.7.4.301.

Abstract:
Abstract Liquefaction is a seismic hazard that must be evaluated for a significant percentage of the developable areas of California. The combination of the presence of active seismic faults, young loose alluvium, and shallow ground water are the ingredients that could result in the occurrence of liquefaction in many areas of California. These ingredients are also found in other seismically active areas of the United States and the world. The state of California, through the Seismic Hazard Mapping Act of 1990, has mandated that liquefaction hazard be determined for new construction. On a parallel track, the Uniform Building Code, since 1994, has provisions requiring the determination of liquefaction potential and mitigation of related hazards, such as settlement, flow slides, lateral spreading, ground oscillation, sand boils, and loss of bearing capacity. Fortunately, the state of knowledge has now evolved to where there are field exploration methods and analytical techniques to estimate the liquefaction potential and the possible consequences arising from the occurrence of liquefaction. There are some areas that still need further research. Mitigation for liquefaction has become more commonplace and confidence in these techniques has been increased based on the relatively successful performance of improved sites in the past several major earthquakes. Unfortunately, not all practicing engineering and geological professionals and building officials are knowledgeable about the current state-of-practice in liquefaction hazard analysis and mitigation. Thus, it was considered necessary to develop a set of guidelines to aid professionals and building officials, based on California's experience with the current practice of liquefaction hazard analysis and mitigation. Although the guidelines reported in this paper were written specifically for practice in California, it is believed that guidelines can benefit practitioners to evaluate liquefaction hazard in all seismic regions.
34

ORII, Shigeo. "A Vector-Parallel Mapping Algorithm for Plasma Particle Codes." Journal of Plasma and Fusion Research 75, no. 6 (1999): 704–16. http://dx.doi.org/10.1585/jspf.75.704.

35

Johnson, S. P., and M. Cross. "Mapping structured grid three-dimensional CFD codes onto parallel architectures." Applied Mathematical Modelling 15, no. 8 (August 1991): 394–405. http://dx.doi.org/10.1016/0307-904x(91)90027-m.

36

Brown, Christopher, Vladimir Janjic, M. Goli, and J. McCall. "Programming Heterogeneous Parallel Machines Using Refactoring and Monte–Carlo Tree Search." International Journal of Parallel Programming 48, no. 4 (June 10, 2020): 583–602. http://dx.doi.org/10.1007/s10766-020-00665-z.

Abstract:
Abstract This paper presents a new technique for introducing and tuning parallelism for heterogeneous shared-memory systems (comprising a mixture of CPUs and GPUs), using a combination of algorithmic skeletons (such as farms and pipelines), Monte–Carlo tree search for deriving mappings of tasks to available hardware resources, and refactoring tool support for applying the patterns and mappings in an easy and effective way. Using our approach, we demonstrate easily obtainable, significant and scalable speedups on a number of case studies showing speedups of up to 41 over the sequential code on a 24-core machine with one GPU. We also demonstrate that the speedups obtained by mappings derived by the MCTS algorithm are within 5–15% of the best-obtained manual parallelisation.
37

Li, Yuexing, Ming F. Gu, Hidenobu Yajima, Qirong Zhu, and Moupiya Maji. "ART2: a 3D parallel multiwavelength radiative transfer code for continuum and atomic and molecular lines." Monthly Notices of the Royal Astronomical Society 494, no. 2 (March 19, 2020): 1919–35. http://dx.doi.org/10.1093/mnras/staa733.

Full text
Abstract:
ABSTRACT ART2 is a 3D multiwavelength Monte Carlo radiative transfer (RT) code that couples continuum and emission lines to track the propagation of photons and their interactions with the interstellar medium (ISM). The original ART2 has been extensively applied to hydrodynamics simulations to study panchromatic properties of galaxies and ISM. Here, we describe new implementations of non-local thermodynamic equilibrium RT of molecular and atomic fine structure emission lines, and the parallelization of the code using a number of novel methods. The new ART2 can efficiently and self-consistently produce a full spectrum that includes both continuum and lines such as [C ii], [N ii], [O iii], Ly α, and CO. These essential features, together with the multiphase ISM model and the adaptive grid, make ART2 a multipurpose code to study multiwavelength properties of a wide range of astrophysical systems from planetary discs to large-scale structures. To demonstrate the capability of the new ART2, we applied it to two hydrodynamics simulations: the zoom-in Milky Way Simulation to obtain panchromatic properties of individual galaxies, and the large-scale IllustrisTNG100 Simulation to obtain global properties such as the line intensity mappings. These products are vital for a broad array of studies. By enabling direct comparison between numerical simulations and multiband observations, ART2 provides a crucial theoretical framework for the understanding of existing and future surveys, and the synergy between multiband galaxy surveys and line intensity mappings. Therefore, ART2 is a powerful and versatile tool to bridge the gap between theories and observations of cosmic structures.
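The abstract mentions Monte Carlo radiative transfer; the sketch below is unrelated to ART2's implementation and only shows the kind of photon random walk such codes sample, assuming isotropic scattering and a homogeneous slab with a single albedo.

import math
import random

def propagate_photon(tau_max, albedo, rng):
    """Return True if the photon is transmitted through a slab of optical depth tau_max."""
    z, mu = 0.0, 1.0                              # optical-depth position and direction cosine
    while True:
        z += -math.log(1.0 - rng.random()) * mu   # free path drawn from exp(-tau)
        if z >= tau_max:
            return True                           # transmitted through the slab
        if z < 0.0:
            return False                          # escaped back through the illuminated face
        if rng.random() > albedo:
            return False                          # absorbed
        mu = 2.0 * rng.random() - 1.0             # isotropic scattering

if __name__ == "__main__":
    rng = random.Random(1)
    n = 20000
    transmitted = sum(propagate_photon(5.0, 0.8, rng) for _ in range(n))
    print("transmitted fraction:", transmitted / n)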
APA, Harvard, Vancouver, ISO, and other styles
38

Santos-Magalhães, N. S., E. A. Bouton, and H. M. De Oliveira. "HOW TO REPRESENT THE GENETIC CODE?" Revista de Ensino de Bioquímica 2, no. 2 (May 15, 2004): 13. http://dx.doi.org/10.16923/reb.v2i2.145.

Full text
Abstract:
The advent of molecular genetics comprises a true revolution of far-reaching consequences for humankind, which has evolved into a specialized branch of modern-day Biochemistry. The analysis of specific genomic information is gaining wide-ranging interest because of its significance to the early diagnosis of disease and the discovery of modern drugs. In order to take advantage of a wide assortment of signal processing (SP) algorithms, the primary step of modern genomic SP involves converting symbolic DNA sequences into complex-valued signals. How to represent the genetic code? Despite being extensively known, the DNA mapping into proteins is one of the relevant discoveries of genetics. The genetic code (GC) is revisited in this work, addressing other descriptions for it, which can be worthy for genomic SP. Three original representations are discussed. The inner-to-outer map builds on the unbalanced role of nucleotides of a codon. A two-dimensional Gray genetic representation is offered as a structured map that can help in interpreting DNA spectrograms or scalograms. These are among the powerful visual tools for genome analysis, which depend on the choice of the genetic mapping. Finally, the world-chart for the GC is investigated. Evoking the cyclic structure of the genetic mapping, it can be folded by joining the left-right borders and the top-bottom frontiers. As a result, the GC can be drawn on the surface of a sphere resembling a world-map. Eight parallels of latitude are required (four in each hemisphere) as well as four meridians of longitude associated with four corresponding anti-meridians. The tropic circles lie at 11.25°, 33.75°, 56.25°, and 78.5° (North and South). Starting from an arbitrary Greenwich meridian, the meridians of longitude can be plotted at 22.5°, 67.5°, 112.5°, and 157.5° (East and West). Each triplet is assigned to a single point on the surface that we named the Nirenberg-Kohama's Earth. Despite being valuable, usual representations for the GC can be replaced by the handy descriptions offered in this work. These alternative maps are also particularly useful for educational purposes, giving a much richer interpretation and visualization than a simple look-up table.
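To make the two-dimensional Gray-style representation concrete, the sketch below maps codons onto an 8x8 grid built from two-bit nucleotide labels in which neighbouring labels differ by one bit. The particular nucleotide-to-bits assignment is an assumption made for illustration and is not taken from the paper.

# Two-bit Gray-style labels for the four nucleotides (illustrative assignment).
GRAY2 = {"A": (0, 0), "C": (0, 1), "G": (1, 1), "U": (1, 0)}

def codon_to_grid(codon):
    """Map a codon such as 'AUG' to a (row, column) cell of an 8x8 grid."""
    rows, cols = [], []
    for base in codon:
        r, c = GRAY2[base]
        rows.append(r)
        cols.append(c)
    row = rows[0] * 4 + rows[1] * 2 + rows[2]   # three row bits -> 0..7
    col = cols[0] * 4 + cols[1] * 2 + cols[2]   # three column bits -> 0..7
    return row, col

if __name__ == "__main__":
    for codon in ("AUG", "UAA", "GGC"):
        print(codon, "->", codon_to_grid(codon))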
APA, Harvard, Vancouver, ISO, and other styles
39

Koyama, Shinsuke. "On the Relation Between Encoding and Decoding of Neuronal Spikes." Neural Computation 24, no. 6 (June 2012): 1408–25. http://dx.doi.org/10.1162/neco_a_00279.

Full text
Abstract:
Neural coding is a field of study that concerns how sensory information is represented in the brain by networks of neurons. The link between external stimulus and neural response can be studied from two parallel points of view. The first, neural encoding, refers to the mapping from stimulus to response. It focuses primarily on understanding how neurons respond to a wide variety of stimuli and constructing models that accurately describe the stimulus-response relationship. Neural decoding refers to the reverse mapping, from response to stimulus, where the challenge is to reconstruct a stimulus from the spikes it evokes. Since neuronal response is stochastic, a one-to-one mapping of stimuli into neural responses does not exist, causing a mismatch between the two viewpoints of neural coding. Here we use these two perspectives to investigate the question of what rate coding is, in the simple setting of a single stationary stimulus parameter and a single stationary spike train represented by a renewal process. We show that when rate codes are defined in terms of encoding, that is, the stimulus parameter is mapped onto the mean firing rate, the rate decoder given by spike counts or the sample mean does not always efficiently decode the rate codes, but it can improve efficiency in reading certain rate codes when correlations within a spike train are taken into account.
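A minimal simulation of the rate-coding setting discussed here, with all parameter values chosen arbitrarily for illustration: spikes are generated by a gamma renewal process whose mean rate encodes the stimulus, and the rate is decoded from the spike count in an observation window.

import random

def gamma_renewal_spikes(rate, shape, t_max, rng):
    """Spike times of a gamma renewal process with the given mean firing rate."""
    t, spikes = 0.0, []
    scale = 1.0 / (rate * shape)        # mean ISI = shape * scale = 1 / rate
    while True:
        t += rng.gammavariate(shape, scale)
        if t > t_max:
            return spikes
        spikes.append(t)

if __name__ == "__main__":
    rng = random.Random(42)
    true_rate, shape, window, trials = 20.0, 2.0, 10.0, 200
    # Spike-count decoder: divide the number of spikes by the window length.
    estimates = [len(gamma_renewal_spikes(true_rate, shape, window, rng)) / window
                 for _ in range(trials)]
    print("true rate:", true_rate,
          "spike-count estimate:", round(sum(estimates) / trials, 2))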
APA, Harvard, Vancouver, ISO, and other styles
40

Al-Rawi, Ghazi, John Cioffi, and Mark Horowitz. "On task mapping optimization for parallel decoding of low-density parity-check codes on message-passing architectures." Parallel Computing 31, no. 5 (May 2005): 462–90. http://dx.doi.org/10.1016/j.parco.2004.12.009.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Magomedov, Sh G., and A. S. Lebedev. "A tool for automatic parallelization of affine programs for systems with shared and distributed memory." Russian Technological Journal 7, no. 5 (October 15, 2019): 7–19. http://dx.doi.org/10.32362/2500-316x-2019-7-5-7-19.

Full text
Abstract:
Effective programming of parallel architectures has always been a challenge, and it is especially complicated by their modern diversity. The task of automatic parallelization of program code has been posed since the appearance of the first parallel computers made in Russia (for example, PS2000). To date, programming languages and technologies have been developed that simplify the work of a programmer (T-System, MC#, Erlang, Go, OpenCL), but they do not make parallelization automatic. The current situation requires the development of effective programming tools for parallel computing systems. Such tools should support the development of parallel programs for systems with shared and distributed memory. The paper deals with the problem of automatic parallelization of affine programs for such systems. Methods for calculating space-time mappings that optimize the locality of the program are discussed. The developed methods are implemented in Haskell within a source-to-source translator performing automatic parallelization. A comparison of the performance of parallel variants of the lu, atax, and syr2k programs obtained using the developed tool and the modern Pluto tool is made. The experiments were performed on two x86_64 machines connected by an InfiniBand network. OpenMP and MPI were used as parallelization technologies. The performance of the resulting parallel programs indicates the practical applicability of the developed tool for parallelizing affine programs.
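As a conceptual sketch of a space mapping for an affine loop nest (it is not the tool described in the paper), the code below assigns the iterations of an outer affine loop to processors in contiguous blocks, the kind of iteration-to-processor mapping an OpenMP or MPI back end would ultimately realise.

def block_mapping(n_iterations, n_procs):
    """Return, for every processor, the half-open iteration range it owns."""
    base, extra = divmod(n_iterations, n_procs)
    ranges, start = [], 0
    for p in range(n_procs):
        size = base + (1 if p < extra else 0)
        ranges.append((start, start + size))
        start += size
    return ranges

def run_partitioned(n, n_procs):
    """Each 'processor' executes its own block of the affine i-loop (row sums)."""
    a = [[i + j for j in range(n)] for i in range(n)]
    result = [0] * n
    for lo, hi in block_mapping(n, n_procs):
        for i in range(lo, hi):
            result[i] = sum(a[i][j] for j in range(n))
    return result

if __name__ == "__main__":
    print(block_mapping(10, 3))    # [(0, 4), (4, 7), (7, 10)]
    print(run_partitioned(4, 2))   # [6, 10, 14, 18]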
APA, Harvard, Vancouver, ISO, and other styles
42

Zaiss, A., R. Brunner, D. Spinner, R. Klar, and S. Schulz. "Conversion Problems concerning Automated Mapping from ICD-10 to ICD-9." Methods of Information in Medicine 37, no. 03 (July 1998): 254–59. http://dx.doi.org/10.1055/s-0038-1634529.

Full text
Abstract:
Abstract The increasing parallel use of ICD-9 and ICD-10 complicates the comparability of coded diagnoses. This is the reason why we developed a symmetric table for interactive conversion between ICD-9 and ICD-10, based on a vector space text-retrieval method, which resulted in unambiguous mapping from ICD-9 to ICD-10 in 64% and from ICD-10 to ICD-9 in 87% of all three- and four-character classes of the tabular list. Out of the remaining 13% of multi-valued relations, a table for automated mapping from ICD-10 to ICD-9 was created. In 9% of cases, the selection offered no problems. A compromise between preserving information content and maintaining the logical integrity had to be found in 2.4%; in 1.6% automated mapping was impossible because of newly defined concepts and structural differences between ICD-9 and ICD-10 that are not counterbalanced by a consistent system of residual categories. We recommend that in a future revision of the ICD, compatibility with the then existing classification system should be considered.
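The conversion logic can be pictured with a small sketch: unambiguous ICD-10 to ICD-9 relations are applied directly, multi-valued relations fall back to a curated selection, and unmapped concepts are flagged for manual review. The codes below are hypothetical placeholders, not entries of the actual conversion table.

ONE_TO_ONE = {"X10.0": "910.0"}                # unambiguous relations (hypothetical)
MULTI_VALUED = {"X20.1": ["920.1", "920.8"]}   # multi-valued relations (hypothetical)
PREFERRED = {"X20.1": "920.1"}                 # curated choice for automated mapping

def map_icd10_to_icd9(code):
    if code in ONE_TO_ONE:
        return ONE_TO_ONE[code], "unambiguous"
    if code in MULTI_VALUED:
        if code in PREFERRED:
            return PREFERRED[code], "resolved from a multi-valued relation"
        return None, "manual review required"
    return None, "no counterpart (new concept or structural difference)"

if __name__ == "__main__":
    for c in ("X10.0", "X20.1", "X99.9"):
        print(c, "->", map_icd10_to_icd9(c))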
APA, Harvard, Vancouver, ISO, and other styles
43

Golota, Taras I., and Sotirios G. Ziavras. "A Universal, Dynamically Adaptable and Programmable Network Router for Parallel Computers." VLSI Design 12, no. 1 (January 1, 2001): 25–52. http://dx.doi.org/10.1155/2001/50167.

Full text
Abstract:
Existing message-passing parallel computers employ routers designed for a specific interconnection network and deal with fixed data channel width. There are disadvantages to this approach, because the system design and development times are significant and these routers do not permit run-time network reconfiguration. Changes in the topology of the network may be required for better performance or fault tolerance. In this paper, we introduce a class of high-performance universal (statically and dynamically adaptable) programmable routers (UPRs) for message-passing parallel computers. The universality of these routers is based on their capability to adapt at run and/or static times according to the characteristics of the systems and/or applications. More specifically, the number of bidirectional data channels, the channel size and the I/O port mappings (for the implementation of a particular topology) can change dynamically and statically. Our research focuses on system-level specification issues of the UPRs, their VLSI design and their simulation to estimate their performance. Our simulation of data transfers via UPR routers employs VHDL code in the Mentor Graphics environment. The results show that the performance of the routers depends mostly on their current configuration. Details of the simulation and synthesis are presented.
APA, Harvard, Vancouver, ISO, and other styles
44

Gan, Fengjiao, Chenggao Luo, Xingyue Liu, Hongqiang Wang, and Long Peng. "Fast Terahertz Coded-Aperture Imaging Based on Convolutional Neural Network." Applied Sciences 10, no. 8 (April 12, 2020): 2661. http://dx.doi.org/10.3390/app10082661.

Full text
Abstract:
Terahertz coded-aperture imaging (TCAI) has many advantages, such as forward-looking imaging, staring imaging, and low cost. However, it is difficult to resolve the target under a low signal-to-noise ratio (SNR), and the imaging process is time-consuming. Here, we provide an efficient solution to tackle this problem. A convolutional neural network (CNN) is leveraged to develop an off-line, end-to-end imaging network whose structure is highly parallel and free of iterations, and which can act as a general and powerful mapping function. Once the network is well trained and adopted for TCAI signal processing, the target of interest can be recovered immediately from the echo signal. The method to generate training data is also shown, and we find that the imaging network trained with simulation data is robust against noise and model errors. The feasibility of the proposed approach is verified by simulation experiments, and the results show that it has a competitive performance with the state-of-the-art algorithms.
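A minimal, generic sketch of the "network as mapping function" idea, assuming PyTorch is available: a small convolutional model maps an echo signal directly to a reconstructed image. The layer shapes are invented and bear no relation to the authors' architecture.

import torch
import torch.nn as nn

class EchoToImage(nn.Module):
    """Map a 1-D echo signal to a small 2-D image in one forward pass (no iterations)."""
    def __init__(self, signal_len=256, image_side=16):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 8, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Conv1d(8, 8, kernel_size=5, padding=2),
            nn.ReLU(),
        )
        self.head = nn.Linear(8 * signal_len, image_side * image_side)
        self.image_side = image_side

    def forward(self, echo):                      # echo: (batch, 1, signal_len)
        x = self.features(echo).flatten(1)
        return self.head(x).view(-1, self.image_side, self.image_side)

if __name__ == "__main__":
    net = EchoToImage()
    echo = torch.randn(4, 1, 256)                 # a batch of simulated echo signals
    print(net(echo).shape)                        # torch.Size([4, 16, 16])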
APA, Harvard, Vancouver, ISO, and other styles
45

Пушкарев, К. В., and В. Д. Кошур. "A hybrid heuristic parallel method of global optimization." Numerical Methods and Programming (Vychislitel'nye Metody i Programmirovanie), no. 2 (June 30, 2015): 242–55. http://dx.doi.org/10.26089/nummet.v16r224.

Full text
Abstract:
The problem of finding the global minimum of a continuous objective function of multiple variables in a multidimensional parallelepiped is considered. A hybrid heuristic parallel method for solving complicated global optimization problems is proposed. The method is based on combining various methods and on the multi-agent technology. It consists of new methods (for example, the method of neural network approximation of inverse coordinate mappings, which uses Generalized Regression Neural Networks (GRNN) to map the values of an objective function to coordinates) and modified classical methods (for example, the modified Hooke-Jeeves method). An implementation of the proposed method as a cross-platform (on the source code level) library written in the C++ language is briefly discussed. This implementation uses message passing via MPI (Message Passing Interface). The method is compared with 21 modern methods of global optimization and with a genetic algorithm using 28 test objective functions of 50 variables.
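One ingredient named in the abstract is the Hooke-Jeeves method; the sketch below implements the classical pattern search on a simple test function. The paper uses a modified variant combined with other agents, which is not reproduced here.

def hooke_jeeves(f, x0, step=1.0, shrink=0.5, tol=1e-6, max_iter=10000):
    """Classical Hooke-Jeeves pattern search for unconstrained minimisation."""
    def explore(base, s):
        # Coordinate-wise exploratory moves around a base point.
        best = list(base)
        for i in range(len(best)):
            for delta in (s, -s):
                trial = list(best)
                trial[i] += delta
                if f(trial) < f(best):
                    best = trial
                    break
        return best

    x = list(x0)
    for _ in range(max_iter):
        if step < tol:
            break
        y = explore(x, step)
        if f(y) < f(x):
            pattern = [2 * yi - xi for yi, xi in zip(y, x)]   # pattern (acceleration) move
            z = explore(pattern, step)
            x = z if f(z) < f(y) else y
        else:
            step *= shrink                                    # no improvement: refine the mesh
    return x, f(x)

if __name__ == "__main__":
    sphere = lambda v: sum((vi - 1.0) ** 2 for vi in v)       # minimum at (1, 1, 1)
    print(hooke_jeeves(sphere, [5.0, -3.0, 0.0]))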
APA, Harvard, Vancouver, ISO, and other styles
46

BRANDES, THOMAS. "HPF LIBRARY AND COMPILER SUPPORT FOR HALOS IN DATA PARALLEL IRREGULAR COMPUTATIONS." Parallel Processing Letters 10, no. 02n03 (June 2000): 189–200. http://dx.doi.org/10.1142/s0129626400000196.

Full text
Abstract:
On distributed-memory architectures, data-parallel compilers emulate the global address space by distributing the data onto the processors according to the mapping directives of the user and by automatically generating explicit inter-processor communication. A shadow is additional local memory allocated so that one processor also keeps non-local values of the data that it accesses or defines. While shadow edges are already well studied for structured grids, this paper focuses on their use for applications with unstructured grids, where updates on the shadow edges involve unstructured communication with complex communication schedules. The use of shadow edges is considered for High Performance Fortran (HPF) as the de facto standard language for writing data-parallel programs in Fortran. A library with an HPF binding provides explicit control of unstructured shadows and their communication schedules, also called halos. This halo library allows writing HPF programs with performance close to hand-coded message-passing versions, while freeing the user of the burden of calculating shadow sizes and communication schedules and of exchanging data with explicit message-passing commands. In certain situations, the HPF compiler can create and use halos automatically. This paper shows the advantages and also the limits of this approach. The halo library and automatic support of halos have been implemented within the ADAPTOR HPF compilation system. The performance results verify the effectiveness of the chosen approach.
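A halo (shadow) for an unstructured grid can be pictured as a communication schedule. The toy sketch below derives, from a partitioned edge list, which remote nodes each partition must receive (its shadow) and which owned nodes it must send to each neighbour; the mesh and partition are invented illustration data.

EDGES = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0), (1, 4)]   # unstructured connectivity
OWNER = {0: 0, 1: 0, 2: 1, 3: 1, 4: 1}                      # node -> owning partition

def halo_schedule(edges, owner):
    recv = {}   # partition -> set of remote nodes it reads (its shadow)
    send = {}   # (owner partition, reading partition) -> owned nodes to ship
    for a, b in edges:
        for local, remote in ((a, b), (b, a)):
            pl, pr = owner[local], owner[remote]
            if pl != pr:
                recv.setdefault(pl, set()).add(remote)
                send.setdefault((pr, pl), set()).add(remote)
    return recv, send

if __name__ == "__main__":
    recv, send = halo_schedule(EDGES, OWNER)
    print("shadow nodes per partition:", recv)
    print("send lists (owner -> reader):", send)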
APA, Harvard, Vancouver, ISO, and other styles
47

Mohd Sani, Mohd Shahrir, J. M. Zikri, and A. Abdul Adam. "Sound Intensity Mapping on Single Cylinder Direct Injection Diesel Engine with The Application of Palm Oil Methyl Ester Biodiesel." International Journal of Automotive and Mechanical Engineering 18, no. 2 (July 28, 2021): 8833–44. http://dx.doi.org/10.15282/ijame.18.2.2021.21.0677.

Full text
Abstract:
The use of biodiesel has become widespread, with many types now produced rapidly to reduce dependency on fossil fuels, in parallel with the adoption of green technology that emphasises more environmentally friendly products. Nevertheless, the various kinds of biodiesel that have emerged cannot simply be adopted: although using biodiesel requires no major engine modification, analyses are still needed to determine whether it brings advantages or disadvantages. This research therefore investigated the effect of using palm oil methyl ester (POME) biodiesel on the engine in terms of noise emission. The sound intensity mapping method was used to assess the biodiesel by identifying the noise radiation, and the sound power level (SPL) was also obtained to provide a clear comparison between the parameters. In general, increasing the engine speed and load raised the sound power level. Consistent with the SPL results, the intensity mapping showed higher colour-coded levels in the noise source image for the higher engine speed and load setups. It was found that engine speed and load contribute significantly to the noise emission produced by the engine, and it can be inferred that this method can be used to carry out noise emission analysis.
APA, Harvard, Vancouver, ISO, and other styles
48

Thomas, Nathan, Steven Saunders, Tim Smith, Gabriel Tanase, and Lawrence Rauchwerger. "ARMI: A High Level Communication Library for STAPL." Parallel Processing Letters 16, no. 02 (June 2006): 261–80. http://dx.doi.org/10.1142/s0129626406002617.

Full text
Abstract:
ARMI is a communication library that provides a framework for expressing fine-grain parallelism and mapping it to a particular machine using shared-memory and message passing library calls. The library is an advanced implementation of the RMI protocol and handles low-level details such as scheduling incoming communication and aggregating outgoing communication to coarsen parallelism. These details can be tuned for different platforms to allow user codes to achieve the highest performance possible without manual modification. ARMI is used by STAPL, our generic parallel library, to provide a portable, user transparent communication layer. We present the basic design as well as the mechanisms used in the current Pthreads/OpenMP, MPI implementations and/or a combination thereof. Performance comparisons between ARMI and explicit use of Pthreads or MPI are given on a variety of machines, including an HP-V2200, Origin 3800, IBM Regatta and IBM RS/6000 SP cluster.
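To illustrate the aggregation of outgoing communication described above (this is not ARMI's actual API), the sketch below buffers remote requests per destination and ships them as a single batch once a threshold is reached, coarsening many fine-grain messages into fewer large ones.

class AggregatingChannel:
    """Buffer outgoing requests per destination and flush them in batches."""
    def __init__(self, transport, batch_size=4):
        self.transport = transport            # callable(dest, list_of_requests)
        self.batch_size = batch_size
        self.buffers = {}

    def async_rmi(self, dest, method, *args):
        buf = self.buffers.setdefault(dest, [])
        buf.append((method, args))
        if len(buf) >= self.batch_size:       # aggregation threshold reached
            self.flush(dest)

    def flush(self, dest=None):
        targets = [dest] if dest is not None else list(self.buffers)
        for d in targets:
            if self.buffers.get(d):
                self.transport(d, self.buffers.pop(d))

if __name__ == "__main__":
    sent = []
    chan = AggregatingChannel(lambda d, reqs: sent.append((d, len(reqs))), batch_size=3)
    for i in range(7):
        chan.async_rmi(1, "accumulate", i)
    chan.flush()
    print(sent)                               # [(1, 3), (1, 3), (1, 1)]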
APA, Harvard, Vancouver, ISO, and other styles
49

Edwards, H. Carter, Daniel Sunderland, Vicki Porter, Chris Amsler, and Sam Mish. "Manycore Performance-Portability: Kokkos Multidimensional Array Library." Scientific Programming 20, no. 2 (2012): 89–114. http://dx.doi.org/10.1155/2012/917630.

Full text
Abstract:
Large, complex scientific and engineering application codes have a significant investment in computational kernels that implement their mathematical models. Porting these computational kernels to the collection of modern manycore accelerator devices is a major challenge in that these devices have diverse programming models, application programming interfaces (APIs), and performance requirements. The Kokkos Array programming model provides a library-based approach to implement computational kernels that are performance-portable to CPU-multicore and GPGPU accelerator devices. This programming model is based upon three fundamental concepts: (1) manycore compute devices, each with its own memory space, (2) data-parallel kernels, and (3) multidimensional arrays. Kernel execution performance is, especially for NVIDIA® devices, extremely dependent on data access patterns. The optimal data access pattern can be different for different manycore devices, potentially leading to different implementations of computational kernels specialized for different devices. The Kokkos Array programming model supports performance-portable kernels by (1) separating data access patterns from computational kernels through a multidimensional array API and (2) introducing device-specific data access mappings when a kernel is compiled. An implementation of Kokkos Array is available through Trilinos [Trilinos website, http://trilinos.sandia.gov/, August 2011].
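The separation of data access patterns from computational kernels can be shown with a small sketch: the kernel addresses a logical (i, j) index space while a layout object decides the mapping to linear memory. The class names echo Kokkos' LayoutLeft/LayoutRight concepts, but the Python code is purely illustrative and unrelated to the actual Kokkos API.

class LayoutRight:
    """Row-major mapping, typically cache-friendly on CPUs."""
    def index(self, i, j, n_rows, n_cols):
        return i * n_cols + j

class LayoutLeft:
    """Column-major mapping, typically coalescing-friendly on GPUs."""
    def index(self, i, j, n_rows, n_cols):
        return j * n_rows + i

class View2D:
    """A 2-D view whose memory layout is supplied separately from the kernel."""
    def __init__(self, n_rows, n_cols, layout):
        self.n_rows, self.n_cols, self.layout = n_rows, n_cols, layout
        self.data = [0.0] * (n_rows * n_cols)

    def __setitem__(self, ij, value):
        i, j = ij
        self.data[self.layout.index(i, j, self.n_rows, self.n_cols)] = value

def fill_kernel(view, alpha):
    """The kernel sees only the logical index space, never the layout."""
    for i in range(view.n_rows):
        for j in range(view.n_cols):
            view[i, j] = alpha * (i + j)

if __name__ == "__main__":
    for layout in (LayoutRight(), LayoutLeft()):
        v = View2D(2, 3, layout)
        fill_kernel(v, 2.0)
        print(type(layout).__name__, v.data)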
APA, Harvard, Vancouver, ISO, and other styles
50

Himmelfarb, H. J., E. Maicas, and J. D. Friesen. "Isolation of the SUP45 omnipotent suppressor gene of Saccharomyces cerevisiae and characterization of its gene product." Molecular and Cellular Biology 5, no. 4 (April 1985): 816–22. http://dx.doi.org/10.1128/mcb.5.4.816.

Full text
Abstract:
The Saccharomyces cerevisiae SUP45+ gene has been isolated from a genomic clone library by genetic complementation of paromomycin sensitivity, which is a property of a mutant strain carrying the sup45-2 allele. This plasmid complements all phenotypes associated with the sup45-2 mutation, including nonsense suppression, temperature sensitivity, osmotic sensitivity, and paromomycin sensitivity. Genetic mapping with a URA3+-marked derivative of the complementing plasmid that was integrated into the chromosome by homologous recombination demonstrated that the complementing fragment contained the SUP45+ gene and not an unlinked suppressor. The SUP45+ gene is present as a single copy in the haploid genome and is essential for viability. In vitro translation of the hybrid-selected SUP45+ transcript yielded a protein of Mr = 54,000, which is larger than any known ribosomal protein. RNA blot hybridization analysis showed that the steady-state level of the SUP45+ transcript is less than 10% of that for ribosomal protein L3 or rp59 transcripts. When yeast cells are subjected to a mild heat shock, the synthesis rate of the SUP45+ transcript was transiently reduced, approximately in parallel with ribosomal protein transcripts. Our data suggest that the SUP45+ gene does not encode a ribosomal protein. We speculate that it codes for a translation-related function whose precise nature is not yet known.
APA, Harvard, Vancouver, ISO, and other styles