Journal articles on the topic "Code-to-architecture-mapping"


Below are the 50 best journal articles for studies on the topic "Code-to-architecture-mapping".


1

Olsson, Tobias, Morgan Ericsson, and Anna Wingkvist. "s4rdm3x: A Tool Suite to Explore Code to Architecture Mapping Techniques". Journal of Open Source Software 6, no. 58 (February 7, 2021): 2791. http://dx.doi.org/10.21105/joss.02791.

2

Chen, Ya Qin, Dian Fu Ma, Ying Wang, and Xian Qi Zhao. "A Mapping Simulation of Code Generation for Partitioned System". Applied Mechanics and Materials 325-326 (June 2013): 1759–65. http://dx.doi.org/10.4028/www.scientific.net/amm.325-326.1759.

Abstract:
Design and verification are crucial for real-time embedded systems, since even a small fault may lead to a catastrophe. The Architecture Analysis and Design Language (AADL) is a modeling language used to design and analyze the architecture of real-time embedded systems based on Model Driven Architecture (MDA). Generating code from an AADL model that runs on a real-time operating system avoids hand-coding mistakes and improves development efficiency. Partitioning is introduced into embedded systems to contain fault propagation. This paper presents a mapping approach to generate code from AADL models for partitioned systems; the generated code, which includes configuration code and C code, runs on a partitioned platform.
3

Sejans, Janis, and Oksana Nikiforova. "Problems and Perspectives of Code Generation from UML Class Diagram". Scientific Journal of Riga Technical University. Computer Sciences 44, no. 1 (January 1, 2011): 75–84. http://dx.doi.org/10.2478/v10143-011-0024-3.

Abstract:
As a result of increasing technological diversity, more attention is being focused on model driven architecture (MDA) and its standard, the Unified Modeling Language (UML). UML class diagrams require a correct mapping of diagram notation to target programming language syntax under the framework of MDA. Currently there are plenty of CASE tools which claim that they are able to generate source code from UML models. Therefore, by combining knowledge of a programming language, its syntax rules, and the semantics of UML class diagram notation, an experimental model for stress-testing a code generator can be produced, thus allowing comparison of the quality of the transformation results. This paper describes the creation of such experimental models.
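To make the notion of mapping class-diagram notation to programming-language syntax concrete, the sketch below shows one way a template-based generator could emit a class definition from a simple model of a UML class. The model structure, names, and types are invented for illustration and are not taken from the paper or from any particular CASE tool.

    # Minimal sketch of a template-based UML-class-to-code generator (hypothetical model).
    class_model = {
        "name": "Invoice",
        "attributes": [("number", "int"), ("customer", "str")],
        "operations": ["total_amount"],
    }

    def generate_class(model):
        lines = [f"class {model['name']}:"]
        init_params = ", ".join(f"{name}: {type_}" for name, type_ in model["attributes"])
        lines.append(f"    def __init__(self, {init_params}):")
        for name, _ in model["attributes"]:
            lines.append(f"        self.{name} = {name}")
        for op in model["operations"]:
            lines.append(f"    def {op}(self):")
            lines.append("        raise NotImplementedError  # body left to the developer")
        return "\n".join(lines)

    print(generate_class(class_model))

A stress-test model in the spirit of the paper would enumerate many such class shapes (deep inheritance, many associations, unusual names) and compare what different tools generate for them.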
4

Nasser, Y., J. F. Hélard, and M. Crussière. "System Level Evaluation of Innovative Coded MIMO-OFDM Systems for Broadcasting Digital TV". International Journal of Digital Multimedia Broadcasting 2008 (2008): 1–12. http://dx.doi.org/10.1155/2008/359206.

Abstract:
Single-frequency networks (SFNs) for broadcasting digital TV are a topic of theoretical and practical interest for future broadcasting systems. Although progress has been made in their characterization, there are still considerable gaps in their deployment with MIMO techniques. The contribution of this paper is multifold. First, we investigate the possibility of applying a space-time (ST) encoder between the antennas of two sites in an SFN. Then, we introduce a 3D space-time-space block code for future terrestrial digital TV in an SFN architecture. The proposed 3D code is based on a double-layer structure designed for intercell and intracell space-time-coded transmissions. Eventually, we propose to adapt a technique called effective exponential signal-to-noise ratio (SNR) mapping (EESM) to predict the bit error rate (BER) at the output of the channel decoder in MIMO systems. The EESM technique as well as the simulation results are used to doubly check the efficiency of our 3D code. This efficiency is obtained for equal and unequal received powers, whatever the location of the receiver, by adequately combining ST codes. The 3D code is therefore a very promising candidate for SFN architectures with MIMO transmission.
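For reference, the effective exponential SNR mapping mentioned above is usually written in the following textbook form (recalled here as background; the exact adaptation of EESM to the MIMO receiver in this paper may differ):

    \mathrm{SNR}_{\mathrm{eff}} = -\beta \, \ln\!\left( \frac{1}{N} \sum_{i=1}^{N} \exp\!\left( -\frac{\gamma_i}{\beta} \right) \right)

where \gamma_i is the post-equalization SNR on subcarrier i, N is the number of subcarriers, and \beta is a calibration factor fitted per modulation and coding scheme so that the effective SNR predicts the same BER as an equivalent AWGN channel.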
5

Mishra, Bhavyansh, Robert Griffin, and Hakki Erhan Sevil. "Modelling Software Architecture for Visual Simultaneous Localization and Mapping". Automation 2, no. 2 (April 2, 2021): 48–61. http://dx.doi.org/10.3390/automation2020003.

Abstract:
Visual simultaneous localization and mapping (VSLAM) is an essential technique used in areas such as robotics and augmented reality for pose estimation and 3D mapping. Research on VSLAM using both monocular and stereo cameras has grown significantly over the last two decades. There is, therefore, a need for emphasis on a comprehensive review of the evolving architecture of such algorithms in the literature. Although VSLAM algorithm pipelines share similar mathematical backbones, their implementations are individualized and the ad hoc nature of the interfacing between different modules of VSLAM pipelines complicates code reusability and maintenance. This paper presents a software model for core components of VSLAM implementations and interfaces that govern data flow between them while also attempting to preserve the elements that offer performance improvements over the evolution of VSLAM architectures. The framework presented in this paper employs principles from model-driven engineering (MDE), which are used extensively in the development of large and complicated software systems. The presented VSLAM framework will assist researchers in improving the performance of individual modules of VSLAM while not having to spend time on system integration of those modules into VSLAM pipelines.
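As a rough illustration of what explicit interfaces between VSLAM modules can look like, the sketch below declares abstract interfaces for four commonly distinguished pipeline stages. The decomposition and every name in it are illustrative assumptions in the spirit of typical feature-based VSLAM systems, not the actual software model proposed in the paper.

    from abc import ABC, abstractmethod

    # Illustrative module interfaces for a VSLAM pipeline (hypothetical names).
    class FeatureExtractor(ABC):
        @abstractmethod
        def extract(self, image):
            """Return keypoints and descriptors for one frame."""

    class Tracker(ABC):
        @abstractmethod
        def track(self, features, local_map):
            """Estimate the current camera pose from features and the local map."""

    class LocalMapper(ABC):
        @abstractmethod
        def update(self, keyframe):
            """Insert a keyframe and refine nearby map points."""

    class LoopCloser(ABC):
        @abstractmethod
        def detect_and_correct(self, keyframe, global_map):
            """Detect revisited places and correct accumulated drift."""

Pinning down the data that crosses each boundary (frames, features, keyframes, map points) is what allows one module implementation to be swapped without touching the others.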
6

Miele, Antonio, Christian Pilato, and Donatella Sciuto. "A Simulation-Based Framework for the Exploration of Mapping Solutions on Heterogeneous MPSoCs". International Journal of Embedded and Real-Time Communication Systems 4, no. 1 (January 2013): 22–41. http://dx.doi.org/10.4018/jertcs.2013010102.

Abstract:
The efficient analysis and exploration of mapping solutions of a parallel application on heterogeneous Multi-Processor Systems-on-Chip (MPSoCs) is usually a challenging task in system-level design, in particular when the architecture integrates hardware cores that may expose reconfigurable features. This paper proposes a system-level design framework based on SystemC simulations for fulfilling this task, featuring (i) an automated flow for the generation of timing models for the hardware cores starting from the application source code, (ii) an enhanced simulation environment for SystemC architectures enabling the specification and modification of mapping choices simply by changing an XML descriptor, and (iii) a flexible controller of the simulation environment supporting the exploration of various mapping solutions with a customizable engine. The proposed framework has been validated with a case study considering an image processing application to show the possibility of automatically exploring alternative solutions on a reconfigurable MPSoC platform.
7

Schott, Brian, Stephen P. Crago, Robert H. Parker, Chen H. Chen, Lauretta C. Carter, Joseph P. Czarnaski, Matthew French, Ivan Hom, Tam Tho, and Terri Valenti. "Reconfigurable Architectures for System Level Applications of Adaptive Computing". VLSI Design 10, no. 3 (January 1, 2000): 265–79. http://dx.doi.org/10.1155/2000/28323.

Abstract:
The System Level Applications of Adaptive Computing (SLAAC) project is defining an open, distributed, scalable, adaptive computing systems architecture based on a high-speed network cluster of heterogeneous, FPGA-accelerated nodes. Two reference implementations of this architecture are being created. The Research Reference Platform (RRP) is a Myrinet cluster of PCs with PCI-based FPGA accelerators (SLAAC-1). The Deployable Reference Platform (DRP) is a Myrinet cluster of PowerPC nodes with VME-based FPGA accelerators (SLAAC-2) and a commercial 6U-VME quad-PowerPC board (CSPI M2641S) serving as the carrier. A key strategy proposed for successful ACS technology insertions is source-code compatibility between the RRP and DRP platforms. This paper focuses on the development of the SLAAC-1 and SLAAC-2 accelerators and how the network-centric SLAAC system-level architecture has shaped their designs. A preliminary mapping of a Synthetic Aperture Radar/Automatic Target Recognition (SAR/ATR) algorithm to SLAAC-2 is also discussed.
8

Kuc, Mateusz, Wojciech Sułek, and Dariusz Kania. "Low Power QC-LDPC Decoder Based on Token Ring Architecture". Energies 13, no. 23 (November 30, 2020): 6310. http://dx.doi.org/10.3390/en13236310.

Abstract:
The article presents an implementation of a low power Quasi-Cyclic Low-Density Parity-Check (QC-LDPC) decoder in a Field Programmable Gate Array (FPGA) device. The proposed solution is oriented to a reduction in dynamic energy consumption. The key research concepts present an effective technology mapping of a QC-LDPC decoder to an LUT-based FPGA with many limitations. The proposed decoder architecture uses a distributed control system and a Token Ring processing scheme. This idea helps limit the clock skew problem and is oriented to clock gating, a well-established concept for power optimization. Then the clock gating of the decoder building blocks allows for a significant reduction in energy consumption without deterioration in other parameters of the decoder, particularly its error correction performance. We also provide experimental results for decoder implementations with different QC-LDPC codes, indicating important characteristics of the code parity check matrix, for which an energy-saving QC-LDPC decoder with the proposed architecture can be designed. The experiments are based on implementations in the Intel Cyclone V FPGA device. Finally, the presented architecture is compared with the other solutions from the literature.
9

Wang, Degeng, and Michael Gribskov. "Examining the architecture of cellular computing through a comparative study with a computer". Journal of The Royal Society Interface 2, no. 3 (May 16, 2005): 187–95. http://dx.doi.org/10.1098/rsif.2005.0038.

Abstract:
The computer and the cell both use information embedded in simple coding, the binary software code and the quadruple genomic code, respectively, to support system operations. A comparative examination of their system architecture as well as their information storage and utilization schemes is performed. On top of the code, both systems display a modular, multi-layered architecture, which, in the case of a computer, arises from human engineering efforts through a combination of hardware implementation and software abstraction. Using the computer as a reference system, a simplistic mapping of the architectural components between the two is easily detected. This comparison also reveals that a cell abolishes the software–hardware barrier through genomic encoding for the constituents of the biochemical network, a cell's ‘hardware’ equivalent to the computer central processing unit (CPU). The information loading (gene expression) process acts as a major determinant of the encoded constituent's abundance, which, in turn, often determines the ‘bandwidth’ of a biochemical pathway. Cellular processes are implemented in biochemical pathways in parallel manners. In a computer, on the other hand, the software provides only instructions and data for the CPU. A process represents just sequentially ordered actions by the CPU and only virtual parallelism can be implemented through CPU time-sharing. Whereas process management in a computer may simply mean job scheduling, coordinating pathway bandwidth through the gene expression machinery represents a major process management scheme in a cell. In summary, a cell can be viewed as a super-parallel computer, which computes through controlled hardware composition. While we have, at best, a very fragmented understanding of cellular operation, we have a thorough understanding of the computer throughout the engineering process. The potential utilization of this knowledge to the benefit of systems biology is discussed.
10

Bardenheuer, Kristina, Alun Passey, Maria d'Errico, Barbara Millier, Carine Guinard-Azadian, Johan Aschan, and Michel van Speybroeck. "Honeur (Heamatology Outcomes Network in Europe): A Federated Model to Support Real World Data Research in Hematology". Blood 132, Supplement 1 (November 29, 2018): 4839. http://dx.doi.org/10.1182/blood-2018-99-111093.

Abstract:
Abstract Introduction: The Haematology Outcomes Network in EURope (HONEUR) is an interdisciplinary initiative aimed at improving patient outcomes by analyzing real world data across hematological centers in Europe. Its overarching goal is to create a secure network which facilitates the development of a collaborative research community and allows access to big data tools for analysis of the data. The central paradigm in the HONEUR network is a federated model whereby the data stays at the respective sites and the analysis is executed at the local data sources. To allow for a uniform data analysis, the common data model 'OMOP' (Observational Medical Outcomes Partnership) was selected and extended to accommodate specific hematology data elements. Objective: To demonstrate feasibility of the OMOP common data model for the HONEUR network. Methods: In order to validate the architecture of the HONEUR network and the applicability of the OMOP common data model, data from the EMMOS registry (NCT01241396) have been used. This registry is a prospective, non-interventional study that was designed to capture real world data regarding treatments and outcomes for multiple myeloma at different stages of the disease. Data was collected between Oct 2010 and Nov 2014 on more than 2,400 patients across 266 sites in 22 countries. Data was mapped to the OMOP common data model version 5.3. Additional new concepts to the standard OMOP were provided to preserve the semantic mapping quality and reduce the potential loss of granularity. Following the mapping process, a quality analysis was performed to assess the completeness and accuracy of the mapping to the common data model. Specific critical concepts in multiple myeloma needed to be represented in OMOP. This applies in particular for concepts like treatment lines, cytogenetic observations, disease progression, risk scales (in particular ISS and R-ISS). To accommodate these concepts, existing OMOP structures were used with the definition of new concepts and concept-relationships. Results: Several elements of mapping data from the EMMOS registry to the OMOP common data model (CDM) were evaluated via integrity checks. Core entities from the OMOP CDM were reconciled against the source data. This was applied for the following entities: person (profile of year of birth and gender), drug exposure (profile of number of drug exposures per drug, at ATC code level), conditions (profile of number of occurrences of conditions per condition code, converted to SNOMED), measurement (profile of number of measurements and value distribution per (lab) measurement, converted to LOINC) and observation (profile of number of observations per observation concept). Figure 1 shows the histogram of year of birth distribution between the EMMOS registry and the OMOP CDM. No discernible differences exist, except for subjects which have not been included in the mapping to the OMOP CDM due to lacking confirmation of a diagnosis of multiple myeloma. As additional part of the architecture validation, the occurrence of the top 20 medications in the EMMOS registry and the OMOP CDM were compared, with a 100% concordance for the drug codes, which is shown in Figure 2. In addition to the reconciliation against the different OMOP entities, a comparison was also made against 'derived' data, in particular 'time to event' analysis. Overall survival was plotted from calculated variables in the analysis level data from the EMMOS registry and derived variables in the OMOP CDM. 
Probability of overall survival over time was virtually identical with only one day difference in median survival and 95% confidence intervals identically overlapping over the period of measurement (Figure 3). Conclusions: The concordance of year of birth, drug code mapping and overall survival between the EMMOS registry and the OMOP common data model indicates the reliability of mapping potential in HONEUR, especially where auxiliary methods have been developed to handle outcomes and treatment data in a way that can be harmonized across platform datasets. Disclosures No relevant conflicts of interest to declare.
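To give a flavour of the kind of source-to-CDM mapping described above, the sketch below converts one hypothetical registry record into rows resembling the OMOP person and drug_exposure tables. The source record and its field names are invented, only a few CDM columns are shown, and the standard OMOP gender concept identifiers (8532 for female, 8507 for male) are the only OMOP-specific values used.

    # Hedged sketch: mapping a hypothetical registry record to OMOP-style rows.
    registry_record = {
        "patient_id": "EMM-0001",        # invented source identifier
        "year_of_birth": 1952,
        "gender": "F",
        "drug_atc": "L04AX04",           # example ATC code, purely illustrative
        "exposure_start": "2013-05-10",
    }

    GENDER_CONCEPTS = {"F": 8532, "M": 8507}  # standard OMOP gender concept ids

    def to_person(rec, person_id):
        return {
            "person_id": person_id,
            "person_source_value": rec["patient_id"],
            "year_of_birth": rec["year_of_birth"],
            "gender_concept_id": GENDER_CONCEPTS[rec["gender"]],
        }

    def to_drug_exposure(rec, person_id):
        return {
            "person_id": person_id,
            "drug_source_value": rec["drug_atc"],
            "drug_exposure_start_date": rec["exposure_start"],
        }

    person_row = to_person(registry_record, person_id=1)
    drug_row = to_drug_exposure(registry_record, person_id=1)

Domain-specific concepts such as treatment lines or R-ISS stage would, as the abstract notes, need additional custom concepts and concept relationships on top of rows like these.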
11

Thuerck, Daniel, Nicolas Weber, and Roberto Bifulco. "Flynn’s Reconciliation". ACM Transactions on Architecture and Code Optimization 18, no. 3 (June 2021): 1–26. http://dx.doi.org/10.1145/3458357.

Abstract:
A large portion of the recent performance increase in the High Performance Computing (HPC) and Machine Learning (ML) domains is fueled by accelerator cards. Many popular ML frameworks support accelerators by organizing computations as a computational graph over a set of highly optimized, batched general-purpose kernels. While this approach simplifies the kernels’ implementation for each individual accelerator, the increasing heterogeneity among accelerator architectures for HPC complicates the creation of portable and extensible libraries of such kernels. Therefore, using a generalization of the CUDA community’s warp register cache programming idiom, we propose a new programming idiom (CoRe) and a virtual architecture model (PIRCH), abstracting over SIMD and SIMT paradigms. We define and automate the mapping process from a single source to PIRCH’s intermediate representation and develop backends that issue code for three different architectures: Intel AVX512, NVIDIA GPUs, and NEC SX-Aurora. Code generated by our source-to-source compiler for batched kernels, borG, competes favorably with vendor-tuned libraries and is up to 2× faster than hand-tuned kernels across architectures.
12

Martina, Maurizio, Andrea Molino, Fabrizio Vacca, Guido Masera, and Guido Montorsi. "High throughput implementation of an adaptive serial concatenation turbo decoder". Journal of Communications Software and Systems 2, no. 3 (April 5, 2017): 252. http://dx.doi.org/10.24138/jcomss.v2i3.288.

Abstract:
The complete design of a new high throughput adaptive turbo decoder is described. The developed system is programmable in terms of block length, code rate and modulation scheme, which can be dynamically changed from frame to frame, according to varied channel conditions or user requirements. A parallel architecture with 16 concurrent SISOs has been adopted to achieve a decoding throughput as high as 35 Mbit/s with 10 iterations, while error-correcting performance is within 1 dB of the capacity limit. The whole system, including the iterative decoder itself, de-mapping and de-puncturing units, as well as the input double buffer, has been mapped to a single FPGA device, running at 80 MHz, with a percentage occupation of 54%.
13

Sturzenegger, Matthieu, Kris Holm, Carie-Ann Lau, and Matthias Jakob. "Debris-Flow and Debris-Flood Susceptibility Mapping for Geohazard Risk Prioritization". Environmental and Engineering Geoscience 27, no. 2 (March 3, 2021): 179–94. http://dx.doi.org/10.2113/eeg-d-20-00006.

Abstract:
Regional-scale assessments for debris-flow and debris-flood propagation and avulsion on fans can be challenging. Geomorphological mapping based on aerial or satellite imagery requires substantial field verification effort. Surface evidence of past events may be obfuscated by development or obscured by repeat erosion or debris inundation, and trenching may be required to record the sedimentary architecture and date past events. This paper evaluates a methodology for debris-flow and debris-flood susceptibility mapping at regional scale based on a combination of digital elevation model (DEM) metrics to identify potential debris source zones and flow propagation modeling using the Flow-R code that is calibrated through comparison to mapped alluvial fans. The DEM metrics enable semi-automated identification and preliminary, process-based classification of streams prone to debris flow and debris flood. Flow-R is a susceptibility mapping tool that models potential flow inundation based on a combination of spreading and runout algorithms considering DEM topography and empirical propagation parameters. The methodology is first evaluated at locations where debris-flow and debris-flood hazards have been previously assessed based on field mapping and detailed numerical modeling. It is then applied over a 125,000 km2 area in southern British Columbia, Canada. The motivation for the application of this methodology is that it represents an objective and repeatable approach to susceptibility mapping, which can be integrated in a debris-flow and debris-flood risk prioritization framework at regional scale to support risk management decisions.
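As background on the spreading step, Flow-R is commonly described as using a Holmgren-style multiple-flow-direction rule, in which the fraction of flow passed from a cell to each lower neighbour grows with the slope towards that neighbour. A generic form of that rule is recalled below; the exponent value and any modifications applied in this particular study are not stated here.

    p_i = \frac{(\tan \beta_i)^{x}}{\sum_{j \in N^{-}} (\tan \beta_j)^{x}}, \qquad x \ge 1

where \beta_i is the slope angle towards lower neighbour i, N^{-} is the set of downslope neighbours, and larger exponents x concentrate the flow into fewer directions, while x = 1 gives the most divergent spreading.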
14

Huang, Yen-Chieh, and Chih-Ping Chu. "Developing Web Applications Based on Model Driven Architecture". International Journal of Software Engineering and Knowledge Engineering 24, no. 02 (March 2014): 163–82. http://dx.doi.org/10.1142/s0218194014500077.

Abstract:
Model Driven Architecture (MDA) is a new software development framework. This paper presents a model-driven approach to the development of Web applications by combining Conallen's web application design concepts and Kleppe's MDA process. We use the UML extension mechanism, i.e. stereotypes, to define the various web elements, and use the Robustness diagram to represent the MVC 2 structure of a Web application. After requirements analysis, we start by using a use case diagram as the CIM, and then transform the CIM to a PIM, and the PIM to a PSM. We propose mapping rules for model-to-model transformation. Finally, we develop a tool named WebPSM2Code, which can automatically transform PSM diagrams into Web application code, such as Java, JSP, HTML, Servlet, and Javascript, as well as the deployment descriptor file. All files are automatically placed in the correct directory structure for a JSP Web application, and the generated code amounts to about 39% of the whole system. Using this methodology, systems can be analyzed, designed, and generated more easily and systematically. Thereby, the time that Web programmers spend on coding can be reduced.
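A flavour of stereotype-driven, model-to-code mapping rules can be given with a small table-driven sketch. The stereotype names and output artifacts below are illustrative guesses in the spirit of Conallen's web-application UML extensions and are not the actual rules implemented in WebPSM2Code.

    # Hypothetical mapping from web-application stereotypes to generated artifacts.
    STEREOTYPE_RULES = {
        "<<server page>>": {"artifact": "Servlet", "extension": ".java"},
        "<<client page>>": {"artifact": "HTML page", "extension": ".html"},
        "<<form>>": {"artifact": "JSP form fragment", "extension": ".jsp"},
    }

    def plan_generation(psm_elements):
        """Return (element, artifact, output file) triples for stereotyped PSM elements."""
        plan = []
        for name, stereotype in psm_elements:
            rule = STEREOTYPE_RULES.get(stereotype)
            if rule is not None:
                plan.append((name, rule["artifact"], name + rule["extension"]))
        return plan

    print(plan_generation([("Login", "<<server page>>"), ("LoginForm", "<<form>>")]))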
15

Mirri, Silvia, Catia Prandi, Paola Salomoni, Franco Callegati, Andrea Melis, and Marco Prandini. "A Service-Oriented Approach to Crowdsensing for Accessible Smart Mobility Scenarios". Mobile Information Systems 2016 (2016): 1–14. http://dx.doi.org/10.1155/2016/2821680.

Abstract:
This work presents an architecture to help designing and deploying smart mobility applications. The proposed solution builds on the experience already matured by the authors in different fields: crowdsourcing and sensing done by users to gather data related to urban barriers and facilities, computation of personalized paths for users with special needs, and integration of open data provided by bus companies to identify the actual accessibility features and estimate the real arrival time of vehicles at stops. In terms of functionality, the first “monolithic” prototype fulfilled the goal of composing the aforementioned pieces of information to support citizens with reduced mobility (users with disabilities and/or elderly people) in their urban movements. In this paper, we describe a service-oriented architecture that exploits the microservices orchestration paradigm to enable the creation of new services and to make the management of the various data sources easier and more effective. The proposed platform exposes standardized interfaces to access data, implements common services to manage metadata associated with them, such as trustworthiness and provenance, and provides an orchestration language to create complex services, naturally mapping their internal workflow to code. The manuscript demonstrates the effectiveness of the approach by means of some case studies.
16

Michele, Nuovo. "MyETL: A Java Software Tool to Extract, Transform, and Load Your Business". CRIS - Bulletin of the Centre for Research and Interdisciplinary Study 2015, no. 2 (December 1, 2015): 41–51. http://dx.doi.org/10.1515/cris-2015-0011.

Abstract:
The project follows the development of a Java software tool that extracts data from Flat File (Fixed Length Record Type), CSV (Comma Separated Values), and XLS (Microsoft Excel 97-2003 Worksheet) files, applies transformations to those sources, and finally loads the data into the end target RDBMS. The software refers to a process known as ETL (Extract, Transform, and Load); such systems are called ETL systems. The analysis involved research on the theory behind the ETL process as well as the theory behind the various phases of the applied methodology. An in-depth look at the design and architecture of the software has also been made. To create a complete design to be used for the implementation, different techniques and diagrams were used to visualise and refine ideas: UML class diagrams, system architecture diagrams, a physical data model, and a project timeline. The implementation of the project involved the translation of the system architecture into working software using the Extreme Programming methodology and the Java programming language. A mapping algorithm module and design patterns were used in the implementation phase. A transformation syntax was defined to achieve data transformation. The testing of the software was done in the form of unit tests. A formal test plan was prepared to ensure that the main features of the system worked as defined. Error handling code was implemented to avoid unexpected crashes of the system and to communicate problems or errors to the user.
17

Lin, Chin-Feng, Tsung-Jen Su, Hung-Kai Chang, Chun-Kang Lee, Shun-Hsyung Chang, Ivan A. Parinov, and Sergey Shevtsov. "Direct-Mapping-Based MIMO-FBMC Underwater Acoustic Communication Architecture for Multimedia Signals". Applied Sciences 10, no. 1 (December 27, 2019): 233. http://dx.doi.org/10.3390/app10010233.

Abstract:
In this paper, a direct-mapping (DM)-based multi-input multi-output (MIMO) filter bank multi-carrier (FBMC) underwater acoustic multimedia communication architecture (UAMCA) is proposed. The proposed DM-based MIMO-FBMC UAMCA is rare and non-obvious in the underwater multimedia communication research topic. The following are integrated into the proposed UAMCA: A 2 × 2 DM transmission mechanism, a (2000, 1000) low-density parity-check code encoder, a power assignment mechanism, an object-composition petrinet mechanism, adaptive binary phase shift keying modulation and 4-offset quadrature amplitude modulation methods. The multimedia signals include voice, image, and data. The DM transmission mechanism in different spatial hardware devices transmits different multimedia packets. The proposed underwater multimedia transmission power allocation algorithm (UMTPAA) is simple, fast, and easy to implement, and the threshold transmission bit error rates (BERs) and real-time requirements for voice, image, and data signals can be achieved using the proposed UMTPAA. The BERs of the multimedia signals, data symbol error rates of the data signals, power saving ratios of the voice, image and data signals, mean square errors of the voice signals, and peak signal-to-noise ratios of the image signals, for the proposed UAMCA with a perfect channel estimation, and channel estimation errors of 5%, 10%, and 20%, respectively, were explored and demonstrated. Simulation results demonstrate that the proposed 2 × 2 DM-based MIMO-FBMC UAMCA is suitable for low power and high speed underwater multimedia sensor networks.
18

Marchesan Almeida, Gabriel, Gilles Sassatelli, Pascal Benoit, Nicolas Saint-Jean, Sameer Varyani, Lionel Torres, and Michel Robert. "An Adaptive Message Passing MPSoC Framework". International Journal of Reconfigurable Computing 2009 (2009): 1–20. http://dx.doi.org/10.1155/2009/242981.

Abstract:
Multiprocessor Systems-on-Chips (MPSoCs) offer superior performance while maintaining flexibility and reusability thanks to software oriented personalization. While most MPSoCs are today heterogeneous for better meeting the targeted application requirements, homogeneous MPSoCs may become in a near future a viable alternative bringing other benefits such as run-time load balancing and task migration. The work presented in this paper relies on a homogeneous NoC-based MPSoC framework we developed for exploring scalable and adaptive on-line continuous mapping techniques. Each processor of this system is compact and runs a tiny preemptive operating system that monitors various metrics and is entitled to take remapping decisions through code migration techniques. This approach that endows the architecture with decisional capabilities permits refining application implementation at run-time according to various criteria. Experiments based on simple policies are presented on various applications that demonstrate the benefits of such an approach.
19

Gong, X., F. Erwee, and V. Rautenbach. "GEOMETRY VIEWER FOR PGADMIN4: A PROCESS GUIDED BY THE GOOGLE SUMMER OF CODE". ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-4/W14 (August 23, 2019): 79–83. http://dx.doi.org/10.5194/isprs-archives-xlii-4-w14-79-2019.

Abstract:
The latest version of pgAdmin4 was released in mid-2016 and moved to a web-based application written in Python and jQuery with Bootstrap, using the Flask framework. This new architecture of pgAdmin4 provided an excellent opportunity to integrate a geometry viewer into the application. The process started when the geometry viewer was selected as a project for the 2018 Google Summer of Code (GSoC). The requirements for the geometry viewer were elicited through conversations with the mentors and emails to the discussion lists of PostGIS and pgAdmin. Once the formal design was finalized, development started. The spatial technology stack implemented to expand pgAdmin4 with a geometry viewer consisted of the JavaScript mapping library Leaflet JS and WKX, a parser/serializer library that supports several spatial vector formats. Both of these fulfilled the requirements of the pgAdmin coding standard that all client-side code must be developed in JavaScript using jQuery and other plugins. Leaflet JS is well known for its ease of use and compatibility. WKX is lesser known but well supported and well suited to parsing the spatial data before rendering on the Leaflet map. The decision on both of these libraries was motivated by their minimal size and the possibilities for expansion in future extensions of the viewer. The first version of the geometry viewer was well received and is currently integrated into the latest versions of pgAdmin4.
20

Cameron, Seth, Stephen Grossberg, and Frank H. Guenther. "A Self-Organizing Neural Network Architecture for Navigation Using Optic Flow". Neural Computation 10, no. 2 (February 1, 1998): 313–52. http://dx.doi.org/10.1162/089976698300017782.

Abstract:
This article describes a self-organizing neural network architecture that transforms optic flow and eye position information into representations of heading, scene depth, and moving object locations. These representations are used to navigate reactively in simulations involving obstacle avoidance and pursuit of a moving target. The network's weights are trained during an action-perception cycle in which self-generated eye and body movements produce optic flow information, thus allowing the network to tune itself without requiring explicit knowledge of sensor geometry. The confounding effect of eye movement during translation is suppressed by learning the relationship between eye movement outflow commands and the optic flow signals that they induce. The remaining optic flow field is due to only observer translation and independent motion of objects in the scene. A self-organizing feature map categorizes normalized translational flow patterns, thereby creating a map of cells that code heading directions. Heading information is then recombined with translational flow patterns in two different ways to form maps of scene depth and moving object locations. Most of the learning processes take place concurrently and evolve through unsupervised learning. Mapping the learned heading representations onto heading labels or motor commands requires additional structure. Simulations of the network verify its performance using both noise-free and noisy optic flow information.
21

Leon, Vasileios, George Lentaris, Evangelos Petrongonas, Dimitrios Soudris, Gianluca Furano, Antonis Tavoularis, and David Moloney. "Improving Performance-Power-Programmability in Space Avionics with Edge Devices: VBN on Myriad2 SoC". ACM Transactions on Embedded Computing Systems 20, no. 3 (April 2021): 1–23. http://dx.doi.org/10.1145/3440885.

Abstract:
The advent of powerful edge devices and AI algorithms has already revolutionized many terrestrial applications; however, for both technical and historical reasons, the space industry is still striving to adopt these key enabling technologies in new mission concepts. In this context, the current work evaluates a heterogeneous multi-core system-on-chip processor for use on-board future spacecraft to support novel, computationally demanding digital signal processing and AI functionalities. Given the importance of low power consumption in satellites, we consider the Intel Movidius Myriad2 system-on-chip and focus on SW development and performance aspects. We design a methodology and framework to accommodate efficient partitioning, mapping, parallelization, code optimization, and tuning of complex algorithms. Furthermore, we propose an avionics architecture combining this commercial off-the-shelf chip with a field programmable gate array device to facilitate, among others, interfacing with traditional space instruments via SpaceWire transcoding. We prototype our architecture in the lab targeting vision-based navigation tasks. We implement a representative computer vision pipeline to track the 6D pose of ENVISAT using megapixel images during hypothetical spacecraft proximity operations. Overall, we achieve 2.6 to 4.9 FPS with only 0.8 to 1.1 W on Myriad2, i.e., a 10-fold acceleration versus modern rad-hard processors. Based on the results, we assess various benefits of utilizing Myriad2 instead of conventional field programmable gate arrays and CPUs.
22

Edahiro, Masato, and Masaki Gondo. "Research on highly parallel embedded control system design and implementation method". Impact 2019, no. 10 (December 30, 2019): 44–46. http://dx.doi.org/10.21820/23987073.2019.10.44.

Abstract:
The pace of technology's advancements is ever-increasing and intelligent systems, such as those found in robots and vehicles, have become larger and more complex. These intelligent systems have a heterogeneous structure, comprising a mixture of modules such as artificial intelligence (AI) and powertrain control modules that facilitate large-scale numerical calculation and real-time periodic processing functions. Information technology expert Professor Masato Edahiro, from the Graduate School of Informatics at the Nagoya University in Japan, explains that concurrent advances in semiconductor research have led to the miniaturisation of semiconductors, allowing a greater number of processors to be mounted on a single chip, increasing potential processing power. 'In addition to general-purpose processors such as CPUs, a mixture of multiple types of accelerators such as GPGPU and FPGA has evolved, producing a more complex and heterogeneous computer architecture,' he says. Edahiro and his partners have been working on the eMBP, a model-based parallelizer (MBP) that offers a mapping system as an efficient way of automatically generating parallel code for multi- and many-core systems. This ensures that once the hardware description is written, eMBP can bridge the gap between software and hardware to ensure that not only is an efficient ecosystem achieved for hardware vendors, but the need for different software vendors to adapt code for their particular platforms is also eliminated.
23

Fang, Xin, Stratis Ioannidis, and Miriam Leeser. "SIFO: Secure Computational Infrastructure Using FPGA Overlays". International Journal of Reconfigurable Computing 2019 (December 6, 2019): 1–18. http://dx.doi.org/10.1155/2019/1439763.

Abstract:
Secure Function Evaluation (SFE) has received recent attention due to the massive collection and mining of personal data, but remains impractical due to its large computational cost. Garbled Circuits (GC) is a protocol for implementing SFE which can evaluate any function that can be expressed as a Boolean circuit and obtain the result while keeping each party’s input private. Recent advances have led to a surge of garbled circuit implementations in software for a variety of different tasks. However, these implementations are inefficient, and therefore GC is not widely used, especially for large problems. This research investigates, implements, and evaluates secure computation generation using a heterogeneous computing platform featuring FPGAs. We have designed and implemented SIFO: secure computational infrastructure using FPGA overlays. Unlike traditional FPGA design, a coarse-grained overlay architecture is adopted which supports mapping SFE problems that are too large to map to a single FPGA. Host tools provided include SFE problem generator, parser, and automatic host code generation. Our design allows repurposing an FPGA to evaluate different SFE tasks without the need for reprogramming and fully explores the parallelism for any GC problem. Our system demonstrates an order of magnitude speedup compared with an existing software platform.
24

Huda Ja’afar, Noor, and Afandi Ahmad. "Pipeline architectures of Three-dimensional daubechies wavelet transform using hybrid method". Indonesian Journal of Electrical Engineering and Computer Science 15, no. 1 (July 1, 2019): 240. http://dx.doi.org/10.11591/ijeecs.v15.i1.pp240-246.

Abstract:
The application of three-dimensional (3-D) medical image compression systems uses several building blocks for its computationally intensive algorithms to perform matrix transformation operations. Complexity in addressing large medical volume data has resulted in vast challenges from a hardware implementation perspective. This paper presents an approach towards very-large-scale-integration (VLSI) implementation of the 3-D Daubechies wavelet transform for medical image compression. The discrete wavelet transform (DWT) algorithm is used to design the proposed architectures with a pipelined direct mapping technique. The hybrid method uses a combination of hardware description language (HDL) and G-code, which provides an advantage compared to the traditional method. The proposed pipelined architectures are deployed for the adaptive transformation process of medical image compression applications. The soft IP core design was targeted to a Xilinx field programmable gate array (FPGA) single board RIO (sbRIO 9632). Results obtained for the 3-D DWT architecture using the Daubechies 4-tap (Daub4) implementation exhibit promising results in terms of area, power consumption and maximum frequency compared to the Daubechies 6-tap (Daub6).
25

Valdes, Camilo, Vitalii Stebliankin, and Giri Narasimhan. "Large scale microbiome profiling in the cloud". Bioinformatics 35, no. 14 (July 2019): i13–i22. http://dx.doi.org/10.1093/bioinformatics/btz356.

Abstract:
Abstract Motivation Bacterial metagenomics profiling for metagenomic whole sequencing (mWGS) usually starts by aligning sequencing reads to a collection of reference genomes. Current profiling tools are designed to work against a small representative collection of genomes, and do not scale very well to larger reference genome collections. However, large reference genome collections are capable of providing a more complete and accurate profile of the bacterial population in a metagenomics dataset. In this paper, we discuss a scalable, efficient and affordable approach to this problem, bringing big data solutions within the reach of laboratories with modest resources. Results We developed Flint, a metagenomics profiling pipeline that is built on top of the Apache Spark framework, and is designed for fast real-time profiling of metagenomic samples against a large collection of reference genomes. Flint takes advantage of Spark’s built-in parallelism and streaming engine architecture to quickly map reads against a large (170 GB) reference collection of 43 552 bacterial genomes from Ensembl. Flint runs on Amazon’s Elastic MapReduce service, and is able to profile 1 million Illumina paired-end reads against over 40 K genomes on 64 machines in 67 s—an order of magnitude faster than the state of the art, while using a much larger reference collection. Streaming the sequencing reads allows this approach to sustain mapping rates of 55 million reads per hour, at an hourly cluster cost of $8.00 USD, while avoiding the necessity of storing large quantities of intermediate alignments. Availability and implementation Flint is open source software, available under the MIT License (MIT). Source code is available at https://github.com/camilo-v/flint. Supplementary information Supplementary data are available at Bioinformatics online.
26

Ma, Mingze, and Rizos Sakellariou. "Code-size-aware Scheduling of Synchronous Dataflow Graphs on Multicore Systems". ACM Transactions on Embedded Computing Systems 20, no. 3 (April 2021): 1–24. http://dx.doi.org/10.1145/3440034.

Abstract:
Synchronous dataflow graphs are widely used to model digital signal processing and multimedia applications. Self-timed execution is an efficient methodology for the analysis and scheduling of synchronous dataflow graphs. In this article, we propose a communication-aware self-timed execution approach to solve the problem of scheduling synchronous dataflow graphs on multicore systems with communication delays. Based on this communication-aware self-timed execution approach, four communication-aware scheduling algorithms are proposed using different allocation rules. Furthermore, a code-size-aware mapping heuristic is proposed and jointly used with a proposed scheduling algorithm to reduce the code size of SDFGs on multicore systems. The proposed scheduling algorithms are experimentally evaluated and found to perform better than existing algorithms in terms of throughput and runtime for several applications. The experiments also show that the proposed code-size-aware mapping approach can achieve significant code size reduction with limited throughput degradation in most cases.
27

Blair, Hugh T., Allan Wu, and Jason Cong. "Oscillatory neurocomputing with ring attractors: a network architecture for mapping locations in space onto patterns of neural synchrony". Philosophical Transactions of the Royal Society B: Biological Sciences 369, no. 1635 (February 5, 2014): 20120526. http://dx.doi.org/10.1098/rstb.2012.0526.

Abstract:
Theories of neural coding seek to explain how states of the world are mapped onto states of the brain. Here, we compare how an animal's location in space can be encoded by two different kinds of brain states: population vectors stored by patterns of neural firing rates, versus synchronization vectors stored by patterns of synchrony among neural oscillators. It has previously been shown that a population code stored by spatially tuned ‘grid cells’ can exhibit desirable properties such as high storage capacity and strong fault tolerance; here it is shown that similar properties are attainable with a synchronization code stored by rhythmically bursting ‘theta cells’ that lack spatial tuning. Simulations of a ring attractor network composed from theta cells suggest how a synchronization code might be implemented using fewer neurons and synapses than a population code with similar storage capacity. It is conjectured that reciprocal connections between grid and theta cells might control phase noise to correct two kinds of errors that can arise in the code: path integration and teleportation errors. Based upon these analyses, it is proposed that a primary function of spatially tuned neurons might be to couple the phases of neural oscillators in a manner that allows them to encode spatial locations as patterns of neural synchrony.
28

Bispo, João, Nuno Paulino, João M. P. Cardoso, and João Canas Ferreira. "Transparent Runtime Migration of Loop-Based Traces of Processor Instructions to Reconfigurable Processing Units". International Journal of Reconfigurable Computing 2013 (2013): 1–20. http://dx.doi.org/10.1155/2013/340316.

Abstract:
The ability to map instructions running in a microprocessor to a reconfigurable processing unit (RPU), acting as a coprocessor, enables the runtime acceleration of applications and ensures code and possibly performance portability. In this work, we focus on the mapping of loop-based instruction traces (called Megablocks) to RPUs. The proposed approach considers offline partitioning and mapping stages without ignoring their future runtime applicability. We present a toolchain that automatically extracts specific trace-based loops, called Megablocks, from MicroBlaze instruction traces and generates an RPU for executing those loops. Our hardware infrastructure is able to move loop execution from the microprocessor to the RPU transparently, at runtime, and without changing the executable binaries. The toolchain and the system are fully operational. Three FPGA implementations of the system, differing in the hardware interfaces used, were tested and evaluated with a set of 15 application kernels. Speedups ranging from 1.26 to 3.69 were achieved for the best alternative using a MicroBlaze processor with local memory.
29

Bourbouh, Hamza, Pierre-Loïc Garoche, Christophe Garion, and Xavier Thirioux. "From Lustre to Simulink". ACM Transactions on Cyber-Physical Systems 5, no. 3 (July 2021): 1–20. http://dx.doi.org/10.1145/3461668.

Abstract:
Model-based design is now unavoidable when building embedded systems and, more specifically, controllers. Among the available model languages, the synchronous dataflow paradigm, as implemented in languages such as MATLAB Simulink or ANSYS SCADE, has become predominant in critical embedded system industries. Both of these frameworks are used to design the controller itself but also provide code generation means, enabling faster deployment to target and easier V&V activities performed earlier in the design process, at the model level. Synchronous models also ease the definition of formal specification through the use of synchronous observers, attaching requirements to the model in the very same language, mastered by engineers and tooled with simulation means or code generation. However, few works address the automatic synthesis of MATLAB Simulink annotations from lower-level models or code. This article presents a compilation process from Lustre models to genuine MATLAB Simulink, without the need to rely on external C functions or MATLAB functions. This translation is based on the modular compilation of Lustre to imperative code and preserves the hierarchy of the input Lustre model within the generated Simulink one. We implemented the approach and used it to validate a compilation toolchain, mapping Simulink to Lustre and then C, thanks to equivalence testing and checking. This backward compilation from Lustre to Simulink also provides the ability to produce automatically Simulink components modeling specification, proof arguments, or test cases coverage criteria.
30

KELLY, WAYNE, and WILLIAM PUGH. "SELECTING AFFINE MAPPINGS BASED ON PERFORMANCE ESTIMATION". Parallel Processing Letters 04, no. 03 (September 1994): 205–19. http://dx.doi.org/10.1142/s0129626494000211.

Abstract:
In previous work, we presented a framework for unifying iteration reordering transformations such as loop interchange, loop distribution, loop skewing and statement reordering. The framework provides a uniform way to represent and reason about transformations. However, it does not provide a way to decide which transformation(s) should be applied to a given program. This paper describes a way to make such decisions within the context of the framework. The framework is based on the idea that a transformation can be represented as an affine mapping from the original iteration space to a new iteration space. We show how we can estimate the performance of a program by considering only the mapping from which it was produced. We also show how to produce a lower bound on performance given only a partially specified mapping. Our ability to estimate performance directly from mappings and to do so even for partially specified mappings allows us to efficiently find mappings which will produce good code.
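As a concrete, textbook-level example of such a mapping, loop interchange of a doubly nested loop can be written as the affine mapping (i, j) -> (j, i) and loop skewing as (i, j) -> (i, j + i). The snippet below applies an interchange mapping to a small iteration space to show how the execution order changes; it illustrates the general idea only and is not code from the framework described in the paper.

    # Illustrative affine iteration-space mappings (not the authors' implementation).
    def interchange(i, j):
        return (j, i)          # loop interchange: (i, j) -> (j, i)

    def skew(i, j):
        return (i, j + i)      # loop skewing: (i, j) -> (i, j + i)

    original = [(i, j) for i in range(3) for j in range(3)]
    # Executing iterations in lexicographic order of the mapped coordinates
    # corresponds to running the transformed loop nest.
    interchanged_order = sorted(original, key=lambda ij: interchange(*ij))
    print(interchanged_order)  # [(0, 0), (1, 0), (2, 0), (0, 1), ...]: i becomes the inner loop

Estimating performance directly from such mappings means, for example, judging cache behaviour or parallelism from the mapped execution order without generating and timing the transformed code.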
31

Joardar, Biresh Kumar, Janardhan Rao Doppa, Hai Li, Krishnendu Chakrabarty, and Partha Pratim Pande. "Learning to Train CNNs on Faulty ReRAM-based Manycore Accelerators". ACM Transactions on Embedded Computing Systems 20, no. 5s (October 31, 2021): 1–23. http://dx.doi.org/10.1145/3476986.

Abstract:
The growing popularity of convolutional neural networks (CNNs) has led to the search for efficient computational platforms to accelerate CNN training. Resistive random-access memory (ReRAM)-based manycore architectures offer a promising alternative to commonly used GPU-based platforms for training CNNs. However, due to the immature fabrication process and limited write endurance, ReRAMs suffer from different types of faults. This makes training of CNNs challenging as weights are misrepresented when they are mapped to faulty ReRAM cells. This results in unstable training, leading to unacceptably low accuracy for the trained model. Due to the distributed nature of the mapping of the individual bits of a weight to different ReRAM cells, faulty weights often lead to exploding gradients. This in turn introduces a positive feedback in the training loop, resulting in extremely large and unstable weights. In this paper, we propose a lightweight and reliable CNN training methodology using weight clipping to prevent this phenomenon and enable training even in the presence of many faults. Weight clipping prevents large weights from destabilizing CNN training and provides the backpropagation algorithm with the opportunity to compensate for the weights mapped to faulty cells. The proposed methodology achieves near-GPU accuracy without introducing significant area or performance overheads. Experimental evaluation indicates that weight clipping enables the successful training of CNNs in the presence of faults, while also reducing training time by 4 X on average compared to a conventional GPU platform. Moreover, we also demonstrate that weight clipping outperforms a recently proposed error correction code (ECC)-based method when training is carried out using faulty ReRAMs.
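Weight clipping in the sense used above is simply the projection of every weight onto a bounded interval after each update. A generic sketch follows; the clipping bound, learning rate, and update rule are placeholders rather than the authors' settings.

    import numpy as np

    CLIP_BOUND = 1.0   # hypothetical bound; in practice it would be tuned per network

    def sgd_step_with_clipping(weights, grads, lr=0.01, bound=CLIP_BOUND):
        weights = weights - lr * grads          # ordinary gradient-descent update
        return np.clip(weights, -bound, bound)  # keep every weight inside [-bound, bound]

    w = np.random.randn(4, 4)
    g = np.random.randn(4, 4)
    w = sgd_step_with_clipping(w, g)

Bounding the weights prevents the large, unstable values that faulty cells can otherwise induce, which is the stabilizing effect the abstract describes.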
32

Lee, Eun-Seok, and Byeong-Seok Shin. "A Flexible Input Mapping System for Next-Generation Virtual Reality Controllers". Electronics 10, no. 17 (September 3, 2021): 2149. http://dx.doi.org/10.3390/electronics10172149.

Abstract:
This paper proposes an input mapping system that can transform various input signals from next-generation virtual reality devices to suit existing virtual reality content. Existing interactions of virtual reality content are developed based on input values for standardized commercial haptic controllers. This prevents the challenge of new ideas in content. However, controllers that are not compatible with existing virtual reality content have to take significant risks until commercialization. The proposed system allows content developers to map streams of new input devices to standard input events for use in existing content. This allows the reuse of code from existing content, even with new devices, effectively reducing development tasks. Further, it is possible to define a new input method from the perspective of content instead of the sensing results of the input device, allowing for content-specific standardization in content-oriented industries such as games and virtual reality.
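The core idea of such an input-mapping layer can be pictured as a translation table from device-specific signals to the standard controller events that existing content already consumes. The device names, event names, and thresholds below are invented for illustration and do not describe the proposed system's actual API.

    # Hypothetical translation of raw device signals into standard VR controller events.
    EVENT_MAP = {
        "glove.pinch":     lambda v: ("TRIGGER_PRESS", v > 0.8),           # pinch strength -> trigger
        "glove.fist":      lambda v: ("GRIP_PRESS", v > 0.5),              # fist closure -> grip
        "treadmill.speed": lambda v: ("THUMBSTICK_Y", min(v / 3.0, 1.0)),  # m/s -> stick axis
    }

    def translate(raw_signals):
        """Convert raw device readings into standard (event, value) pairs."""
        return [EVENT_MAP[name](value) for name, value in raw_signals.items() if name in EVENT_MAP]

    print(translate({"glove.pinch": 0.9, "treadmill.speed": 1.5}))

Because the content only ever sees the standard events, the same interaction code keeps working when the table is re-targeted to a new device.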
33

Abdelhedi, Fatma, Amal Ait Brahim, and Gilles Zurfluh. "OCL Constraints Checking on NoSQL Systems Through an MDA-Based Approach". International Journal of Data Warehousing and Mining 17, no. 1 (January 2021): 1–14. http://dx.doi.org/10.4018/ijdwm.2021010101.

Abstract:
Big data have received a great deal of attention in recent years. Not only is the amount of data on a completely different level than before, but there are also different types of data, varying in factors such as format, structure, and source. This has definitely changed the tools needed to handle big data, giving rise to NoSQL systems. While NoSQL systems have proven their efficiency in handling big data, how to automate the storage of big data in NoSQL systems remains an unsolved problem. This paper proposes an automatic approach for implementing UML conceptual models in NoSQL systems, including the mapping of the associated OCL constraints to the code required for checking them. In order to demonstrate the practical applicability of the work, the approach has been realized in a tool supporting four fundamental OCL expressions: iterate-based expressions, OCL predefined operations, If expressions, and Let expressions.
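For readers unfamiliar with OCL, an iterate-based invariant of the kind the tool supports could look like the constraint in the first comment below; the function underneath is a heavily simplified sketch of the checking code such a constraint might be translated into for a document store. The constraint, collection layout, and field names are hypothetical and are not taken from the paper.

    # Hypothetical OCL invariant (iterate-based expression):
    #   context Order inv: self.lines->iterate(l; acc : Real = 0 | acc + l.amount) = self.total
    # Hedged sketch of the checking code generated for a document-oriented representation.
    def check_order_total(order_doc):
        acc = 0.0
        for line in order_doc.get("lines", []):   # the iterate expression unrolled as a loop
            acc += line["amount"]
        return acc == order_doc["total"]

    assert check_order_total({"total": 30.0, "lines": [{"amount": 10.0}, {"amount": 20.0}]})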
34

Apreda, Rodolfo. "Stakeholders and conflict systems mapping on why the founding charter compact becomes the mainstay of corporate governance". Corporate Ownership and Control 8, no. 2 (2011): 9–19. http://dx.doi.org/10.22495/cocv8i2p1.

Abstract:
In this paper we put forward an alternative approach to dealing with the Charter of any organization, that essential document which ought to be regarded as the mainstay of governance. In the first place, we show that an organization carries out its tasks by becoming a responsive mechanism to fulfill stakeholders’ demands. In the second place, organizations behave like conflict-systems within which political issues are of the essence when coping with power, influence, control and authority; on these grounds, we give heed to agenda building and the problem of factions. We argue that such a two-tiered structure stands for the preconditions of any Charter. Lastly, we set up the Charter as a compact of regulatory and discretionary governance, comprising not only the articles and certificate of incorporation, but also the internal bylaws of the organization, the Statute of Governance, the Code of Good Practices, and provisions for upgrading, overhauling, and even changing the architecture of governance in its entirety.
35

Daitch, Amy L., Brett L. Foster, Jessica Schrouff, Vinitha Rangarajan, Itır Kaşikçi, Sandra Gattas e Josef Parvizi. "Mapping human temporal and parietal neuronal population activity and functional coupling during mathematical cognition". Proceedings of the National Academy of Sciences 113, n.º 46 (7 de novembro de 2016): E7277—E7286. http://dx.doi.org/10.1073/pnas.1608434113.

Abstract:
Brain areas within the lateral parietal cortex (LPC) and ventral temporal cortex (VTC) have been shown to code for abstract quantity representations and for symbolic numerical representations, respectively. To explore the fast dynamics of activity within each region and the interaction between them, we used electrocorticography recordings from 16 neurosurgical subjects implanted with grids of electrodes over these two regions and tracked the activity within and between the regions as subjects performed three different numerical tasks. Although our results reconfirm the presence of math-selective hubs within the VTC and LPC, we report here a remarkable heterogeneity of neural responses within each region at both millimeter and millisecond scales. Moreover, we show that the heterogeneity of response profiles within each hub mirrors the distinct patterns of functional coupling between them. Our results support the existence of multiple bidirectional functional loops operating between discrete populations of neurons within the VTC and LPC during the visual processing of numerals and the performance of arithmetic functions. These findings reveal information about the dynamics of numerical processing in the brain and also provide insight into the fine-grained functional architecture and connectivity within the human brain.
36

Qawasmeh, Ahmad, Maxime R. Hugues, Henri Calandra e Barbara M. Chapman. "Performance portability in reverse time migration and seismic modelling via OpenACC". International Journal of High Performance Computing Applications 31, n.º 5 (21 de abril de 2017): 422–40. http://dx.doi.org/10.1177/1094342016675678.

Abstract:
Heterogeneity among the computational resources within a single machine has increased significantly in high performance computing in order to exploit the tremendous potential of graphics processing units (GPUs). Portability, in terms of both code development and performance, has been a challenge because of major differences between GPU programming and memory models on one side and conventional central processing units (CPUs) on the other. Performance characteristics of compilers and processors also vary between machines. Emerging high-level directive-based programming models such as OpenACC have been proposed to address this challenge. In this work, we develop OpenACC implementations of both seismic modelling and reverse time migration algorithms that solve the isotropic, acoustic, and elastic wave equations. We employ OpenACC to take advantage of the computational power of two Nvidia GPU cards, (1) M2090 and (2) K40, residing in IBM and CRAY XC30 clusters respectively. We also explore the main aspects of hybridizing seismic modelling and reverse time migration by implementing a Message Passing Interface (MPI)+OpenACC approach. We expose various mapping techniques to develop a portable code that maximizes performance regardless of compiler or platform. Depending on the intensity of the computations, different propagators exhibited different speedup behaviours against a full-socket CPU MPI implementation. A performance enhancement of ~10× was obtained when the acoustic model was ported to a single GPU, compared with a 1.7× speedup obtained using the isotropic model. Our MPI+OpenACC implementation of reverse time migration and seismic modelling shows promising scaling when multiple GPUs were used.
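
The computational core of such codes is a finite-difference stencil applied at every grid point and time step, which is the kind of loop nest that OpenACC directives offload to the GPU. The NumPy sketch below shows a second-order update for the 2D constant-density acoustic wave equation; it is a conceptual stand-in under simplified assumptions, not the authors' C/Fortran code.

```python
# Illustrative NumPy sketch of the stencil kernel that seismic modelling codes
# offload to the GPU (the paper uses OpenACC directives on C/Fortran loops;
# this is only a conceptual stand-in, not the authors' code).
import numpy as np

def acoustic_step(p, p_prev, c, dt, dx):
    """One explicit time step of the 2D constant-density acoustic wave equation."""
    lap = (np.roll(p, 1, 0) + np.roll(p, -1, 0) +
           np.roll(p, 1, 1) + np.roll(p, -1, 1) - 4.0 * p) / dx**2
    return 2.0 * p - p_prev + (c * dt) ** 2 * lap

nx = 200
p_prev = np.zeros((nx, nx))
p = np.zeros((nx, nx))
p[nx // 2, nx // 2] = 1.0                      # point source
for _ in range(100):
    p, p_prev = acoustic_step(p, p_prev, c=1500.0, dt=1e-4, dx=1.0), p
```
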
37

Ernst, Dominik, Georg Hager, Jonas Thies e Gerhard Wellein. "Performance engineering for real and complex tall & skinny matrix multiplication kernels on GPUs". International Journal of High Performance Computing Applications 35, n.º 1 (9 de outubro de 2020): 5–19. http://dx.doi.org/10.1177/1094342020965661.

Abstract:
General matrix-matrix multiplications with double-precision real and complex entries (DGEMM and ZGEMM) in vendor-supplied BLAS libraries are best optimized for square matrices but often show poor performance for tall & skinny matrices, which are much taller than wide. In this case, NVIDIA’s current CUBLAS implementation delivers only a fraction of the potential performance as indicated by the roofline model. We describe the challenges and key characteristics of an implementation that can achieve close to optimal performance. We further evaluate different strategies of parallelization and thread distribution and devise a flexible, configurable mapping scheme. To ensure flexibility and allow for highly tailored implementations we use code generation combined with autotuning. For a large range of matrix sizes in the domain of interest we achieve at least 2/3 of the roofline performance and often substantially outperform state-of-the-art CUBLAS results on an NVIDIA Volta GPGPU.
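
The code-generation-plus-autotuning strategy can be caricatured with the toy harness below: candidate configurations are generated, each is timed, and the fastest is kept. The real work generates CUDA kernels and tunes thread mappings rather than NumPy chunk sizes; this sketch only conveys the shape of the autotuning loop.

```python
# A toy autotuning harness in the spirit of code generation + autotuning
# (purely illustrative; not related to the paper's CUDA kernels).
import time
import numpy as np

def tsmttsm(A, B, chunk):
    """(tall & skinny)^T x (tall & skinny): C = A^T @ B, accumulated in row chunks."""
    C = np.zeros((A.shape[1], B.shape[1]))
    for i in range(0, A.shape[0], chunk):
        C += A[i:i + chunk].T @ B[i:i + chunk]
    return C

A = np.random.rand(1_000_000, 4)   # tall & skinny operands
B = np.random.rand(1_000_000, 8)

best = None
for chunk in (1_000, 10_000, 100_000, 1_000_000):   # candidate "kernel" configurations
    t0 = time.perf_counter()
    tsmttsm(A, B, chunk)
    dt = time.perf_counter() - t0
    best = min(best or (dt, chunk), (dt, chunk))
print("fastest chunk size:", best[1])
```
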
38

Zheng, Hongdi, Junfeng Wang, Jianping Zhang e Ruirui Li. "IRTS: An Intelligent and Reliable Transmission Scheme for Screen Updates Delivery in DaaS". ACM Transactions on Multimedia Computing, Communications, and Applications 17, n.º 3 (22 de julho de 2021): 1–24. http://dx.doi.org/10.1145/3440035.

Abstract:
Desktop-as-a-service (DaaS) has been recognized as an elastic and economical solution that enables users to access personal desktops from anywhere at any time. During the interaction process of DaaS, users rely on screen updates to perceive execution results remotely, and thus the reliability and timeliness of screen updates transmission have a great influence on users’ quality of experience (QoE). However, the efficient transmission of screen updates in DaaS is facing severe challenges: most transmission schemes applied in DaaS determine sending strategies in terms of pre-set rules, lacking the intelligence to utilize bandwidth rationally and fit new network scenarios. Meanwhile, they tend to focus on reliability or timeliness and perform unsatisfactorily in ensuring reliability and timeliness simultaneously, leading to lower transmission efficiency of screen updates and users’ QoE when network conditions turn unfavorable. In this article, an intelligent and reliable end-to-end transmission scheme (IRTS) is proposed to cope with the preceding issues. IRTS draws support from reinforcement learning by adopting SARSA, an online learning method based on the temporal difference update rule, to learn the optimal mapping between network states and sending actions, which frees IRTS from reliance on pre-set rules and augments its adaptability to different network conditions. Moreover, IRTS guarantees reliability and timeliness via an adaptive loss recovery method, which intends to recover lost screen updates data automatically with fountain code while controlling the number of redundant packets generated. Extensive performance evaluations are conducted, and numerical results show that IRTS outperforms the reference schemes in display quality, end-to-end delay/delay jitter, and fairness when transferring screen updates under various network conditions, proving that IRTS can enhance the transmission efficiency of screen updates and users’ QoE in DaaS.
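
The SARSA component named in the abstract corresponds to a standard on-policy temporal-difference update. The sketch below shows that update with an epsilon-greedy policy; the state features, action set, and reward used here are invented placeholders, since the actual ones are defined in the paper.

```python
# Sketch of the tabular SARSA update underlying this kind of learned sending
# strategy (state/action names are hypothetical, not the paper's definitions).
import random
from collections import defaultdict

alpha, gamma, epsilon = 0.1, 0.9, 0.1
actions = ["send_plain", "send_with_redundancy", "defer"]
Q = defaultdict(float)                      # Q[(state, action)] -> value estimate

def choose(state):
    if random.random() < epsilon:           # epsilon-greedy exploration
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(state, a)])

def sarsa_update(s, a, reward, s_next, a_next):
    """On-policy TD update: uses the action actually taken in the next state."""
    td_target = reward + gamma * Q[(s_next, a_next)]
    Q[(s, a)] += alpha * (td_target - Q[(s, a)])

# One fictitious interaction step:
s = ("loss_high", "rtt_low"); a = choose(s)
s2 = ("loss_low", "rtt_low"); a2 = choose(s2)
sarsa_update(s, a, reward=1.0, s_next=s2, a_next=a2)
```
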
39

Liu, Shuai, e Keji Zhao. "The Toolbox for Untangling Chromosome Architecture in Immune Cells". Frontiers in Immunology 12 (29 de abril de 2021). http://dx.doi.org/10.3389/fimmu.2021.670884.

Abstract:
The code of life is not only encrypted in the sequence of DNA but also in the way it is organized into chromosomes. Chromosome architecture is gradually being recognized as an important player in regulating cell activities (e.g., controlling spatiotemporal gene expression). In the past decade, the toolbox for elucidating genome structure has been expanding, providing an opportunity to explore this uncharted territory. In this review, we will introduce the recent advancements in approaches for mapping the spatial organization of the genome, emphasizing applications of these techniques to immune cells, and trying to bridge chromosome structure with immune cell activities.
40

Liu, Tong, e Zheng Wang. "normGAM: an R package to remove systematic biases in genome architecture mapping data". BMC Genomics 20, S12 (dezembro de 2019). http://dx.doi.org/10.1186/s12864-019-6331-8.

Abstract:
Background
The genome architecture mapping (GAM) technique can capture genome-wide chromatin interactions. However, besides the known systematic biases in the raw GAM data, we have found a new type of systematic bias. It is necessary to develop and evaluate effective normalization methods to remove all systematic biases in the raw GAM data.
Results
We have detected a new type of systematic bias, the fragment length bias, in the genome architecture mapping (GAM) data, which is significantly different from the bias of window detection frequency previously mentioned in the paper introducing the GAM method but is similar to the bias of distances between restriction sites existing in raw Hi-C data. We have found that the normalization method (a normalized variant of the linkage disequilibrium) used in the GAM paper is not able to effectively eliminate the new fragment length bias at 1 Mb resolution (slightly better at 30 kb resolution). We have developed an R package named normGAM for eliminating the new fragment length bias together with the other three biases existing in raw GAM data, which are the biases related to window detection frequency, mappability, and GC content. Five normalization methods have been implemented and included in the R package including Knight-Ruiz 2-norm (KR2, newly designed by us), normalized linkage disequilibrium (NLD), vanilla coverage (VC), sequential component normalization (SCN), and iterative correction and eigenvector decomposition (ICE).
Conclusions
Based on our evaluations, the five normalization methods can eliminate the four biases existing in raw GAM data, with VC and KR2 performing better than the others. We have observed that the KR2-normalized GAM data have a higher correlation with the KR-normalized Hi-C data on the same cell samples indicating that the KR-related methods are better than the others for keeping the consistency between the GAM and Hi-C experiments. Compared with the raw GAM data, the normalized GAM data are more consistent with the normalized distances from the fluorescence in situ hybridization (FISH) experiments. The source code of normGAM can be freely downloaded from http://dna.cs.miami.edu/normGAM/.
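
One of the five methods implemented by the package, iterative correction (ICE), is essentially a matrix-balancing loop. The NumPy sketch below shows that generic idea on a symmetric contact matrix; it is not the normGAM source code and omits the GAM-specific bias terms (fragment length, mappability, GC content).

```python
# Conceptual sketch of iterative correction (ICE) as generic matrix balancing;
# not the normGAM implementation.
import numpy as np

def ice_normalize(M, n_iter=50):
    """Iteratively rescale a symmetric contact matrix so all row sums become equal."""
    M = M.astype(float).copy()
    bias = np.ones(M.shape[0])
    for _ in range(n_iter):
        s = M.sum(axis=1)
        s /= s[s > 0].mean()                 # scale so the mean correction is 1
        s[s == 0] = 1.0                      # leave empty bins untouched
        bias *= s
        M /= np.outer(s, s)
    return M, bias

contacts = np.random.poisson(5, size=(100, 100))
contacts = (contacts + contacts.T) / 2       # symmetric, like GAM/Hi-C contact data
normalized, bias = ice_normalize(contacts)
```
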
41

Hagedorn, Stefan, Philipp Götze, Omran Saleh e Kai-Uwe Sattler. "Stream processing platforms for analyzing big dynamic data". it - Information Technology 58, n.º 4 (28 de janeiro de 2016). http://dx.doi.org/10.1515/itit-2016-0001.

Abstract:
Nowadays, data is produced in every aspect of our lives, leading to a massive amount of information generated every second. However, this vast amount is often too large to be stored, and for many applications the information contained in these data streams is only useful while it is fresh. Batch processing platforms like Hadoop MapReduce do not fit these needs, as they require collecting data on disk and processing it repeatedly. Therefore, modern data processing engines combine the scalability of distributed architectures with the one-pass semantics of traditional stream engines. In this paper, we survey the current state of the art in scalable stream processing from a user perspective. We examine and describe these systems' architecture, execution model, programming interface, and data analysis support, as well as discuss the challenges and limitations of their APIs. In this connection, we introduce Piglet, an extended Pig Latin language and code generator that compiles (extended) Pig Latin code into programs for various data processing platforms. Thereby, we discuss the mapping to platform-specific concepts in order to provide a uniform view.
42

Kollam, Manoj. "Design And Implemation Of An Enhanced Dds Based Digital Modulator For Multiple Modulation Schemes". International Journal of Smart Sensor and Adhoc Network., outubro de 2011, 102–7. http://dx.doi.org/10.47893/ijssan.2011.1031.

Abstract:
This paper deals with the design and implementation of a digital modulator on an FPGA. The design is implemented using the enhanced Direct Digital Synthesis (DDS) technology. The basic DDS architecture is enhanced with minimal hardware to provide complete system-level support for different kinds of modulation with minimal FPGA resources. The size of the ROM look-up table is reduced by using mapping logic. The design meets present Software Defined Radio (SDR) requirements and lets the user select the desired modulation technique. The VHDL programming language is used for modeling the hardware blocks, for powerful and flexible programming and to avoid VHDL code generation tools. The design is simulated in the ModelSim simulation tool and synthesized using the Xilinx ISE synthesis tool. The architecture is implemented on the SPARTAN-3A FPGA from the Xilinx family on the SPARTAN-3A evaluation board. The experimental results obtained demonstrate the usefulness of the proposed system in terms of system resources and its capabilities for design, validation, and practical implementation purposes.
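
A common form of ROM-reduction mapping logic in DDS designs is quarter-wave addressing: only the first quadrant of the sine is stored, and the two most significant phase bits select mirroring and sign. The Python model below illustrates that scheme; whether the paper's mapping logic works exactly this way is an assumption, and the actual design is written in VHDL for the FPGA.

```python
# Software model of quarter-wave ROM addressing for a DDS phase accumulator
# (illustrative only; not the paper's VHDL design).
import math

PHASE_BITS = 12
ROM_BITS = PHASE_BITS - 2                       # quarter-wave ROM address width
ROM = [math.sin(math.pi / 2 * i / (1 << ROM_BITS)) for i in range(1 << ROM_BITS)]

def dds_sample(phase_acc):
    phase = phase_acc & ((1 << PHASE_BITS) - 1)
    quadrant = phase >> ROM_BITS                # top 2 bits choose the quadrant
    addr = phase & ((1 << ROM_BITS) - 1)
    if quadrant in (1, 3):                      # 2nd/4th quadrant: mirror the address
        addr = (1 << ROM_BITS) - 1 - addr
    value = ROM[addr]
    return -value if quadrant >= 2 else value   # 3rd/4th quadrant: negate

# Stepping the phase accumulator by a tuning word produces the output sine wave.
tuning_word, acc, samples = 37, 0, []
for _ in range(1 << PHASE_BITS):
    samples.append(dds_sample(acc))
    acc += tuning_word
```
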
43

Knemeyer, Max, Mohammed Nsaif, Frank Glinka, Alexander Ploss e Sergei Gorlatch. "TOWARDS DATA PERSISTENCY IN REAL-TIME ONLINE INTERACTIVE APPLICATIONS". International Journal of Computing, 1 de agosto de 2014, 75–85. http://dx.doi.org/10.47839/ijc.12.1.590.

Abstract:
The class of distributed Real-time Online Interactive Applications (ROIA) includes such important applications as Massively Multiplayer Online Games (MMOGs), as well as interactive e-Learning and simulation systems. These applications usually work in a persistent environment (also called a world) which continues to exist and evolve even while the user is offline and away from the application. The challenge is how to efficiently make the world and the player characters persistent in the system over time. In this paper, we deal with storing persistent data of real-time interactive applications in modern relational databases. We analyze the major requirements for a persistency system and describe a preliminary design of the Entity Persistence Module (EPM) middleware, which liberates the application developer from writing and maintaining complex and error-prone code for persistent data management. EPM automatically performs the mapping operations to store/retrieve the complex data to/from different types of relational databases, supports the management of persistent data in memory, and integrates it into the main loop of the ROIA client-server architecture.
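
The kind of object-to-relational mapping such middleware automates can be pictured with the minimal sketch below, which persists an in-memory game entity to a relational table and loads it back. The entity, schema, and functions are hypothetical; the actual EPM interface is only outlined at the design level in the paper.

```python
# Minimal illustration of object-to-relational persistence for a ROIA entity
# (hypothetical entity and schema; not the EPM middleware itself).
import sqlite3
from dataclasses import dataclass

@dataclass
class PlayerCharacter:          # an in-memory game-world entity
    id: int
    name: str
    x: float
    y: float

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE player (id INTEGER PRIMARY KEY, name TEXT, x REAL, y REAL)")

def persist(entity: PlayerCharacter) -> None:
    """Map the entity's fields onto a relational row (insert or update)."""
    db.execute("INSERT OR REPLACE INTO player VALUES (?, ?, ?, ?)",
               (entity.id, entity.name, entity.x, entity.y))

def load(entity_id: int) -> PlayerCharacter:
    row = db.execute("SELECT id, name, x, y FROM player WHERE id = ?",
                     (entity_id,)).fetchone()
    return PlayerCharacter(*row)

persist(PlayerCharacter(1, "avatar_42", 10.5, -3.0))
print(load(1))
```
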
44

Yu, Changwei, Nevena Cvetesic, Vincent Hisler, Kapil Gupta, Tao Ye, Emese Gazdag, Luc Negroni et al. "TBPL2/TFIIA complex establishes the maternal transcriptome through oocyte-specific promoter usage". Nature Communications 11, n.º 1 (dezembro de 2020). http://dx.doi.org/10.1038/s41467-020-20239-4.

Abstract:
During oocyte growth, transcription is required to create RNA and protein reserves to achieve maternal competence. During this period, the general transcription factor TATA binding protein (TBP) is replaced by its paralogue, TBPL2 (TBP2 or TRF3), which is essential for RNA polymerase II transcription. We show that in oocytes TBPL2 does not assemble into a canonical TFIID complex. Our transcript analyses demonstrate that TBPL2 mediates transcription of oocyte-expressed genes, including mRNA survey genes, as well as specific endogenous retroviral elements. Transcription start site (TSS) mapping indicates that TBPL2 has a strong preference for TATA-like motif in core promoters driving sharp TSS selection, in contrast with canonical TBP/TFIID-driven TATA-less promoters that have broader TSS architecture. Thus, we show a role for the TBPL2/TFIIA complex in the establishment of the oocyte transcriptome by using a specific TSS recognition code.
45

Amirante, Dario, Paolo Adami e Nicholas Hills. "A Multi-Fidelity Aero-Thermal Design Approach for Secondary Air Systems". Journal of Engineering for Gas Turbines and Power, 18 de dezembro de 2020. http://dx.doi.org/10.1115/1.4049406.

Abstract:
The paper presents a multi-disciplinary approach for aero-thermal and heat transfer analysis of internal flows. The versatility and potential benefit offered by the approach are described through the application to a realistic low pressure turbine assembly. The computational method is based on a run-time code coupling architecture that allows mixed models and simulations to be integrated together for the prediction of the sub-system aero-thermal performance. In this specific application the model consists of two rotor blades, the embedded vanes, the inter-stage cavity and the solid parts. The geometry represents a real engine situation. The key element of the approach is the use of a fully modular coupling strategy that aims to combine (1) flexibility for design needs, (2) variable level of modelling for better accuracy and (3) in-memory code coupling for preserving computational efficiency in large system and sub-system simulations. For this particular example Reynolds Averaged Navier-Stokes (RANS) equations are solved for the fluid regions and thermal coupling is enforced with the metal (conjugate heat transfer). Fluid-fluid interfaces use mixing planes between the rotating parts, while overlapping regions are exploited to link the cavity flow to the main annulus flow, as well as in the cavity itself for mapping of the metal parts and leakages. Metal temperatures predicted by the simulation are compared to those retrieved from a thermal model of the engine, and the results are discussed with reference to the underlying flow physics.
46

Ozdemir, Mehmet Akif, Gizem Dilara Ozdemir e Onan Guren. "Classification of COVID-19 electrocardiograms by using hexaxial feature mapping and deep learning". BMC Medical Informatics and Decision Making 21, n.º 1 (25 de maio de 2021). http://dx.doi.org/10.1186/s12911-021-01521-x.

Abstract:
Background
Coronavirus disease 2019 (COVID-19) has become a pandemic since its first appearance in late 2019. Deaths caused by COVID-19 are still increasing day by day and early diagnosis has become crucial. Since current diagnostic methods have many disadvantages, new investigations are needed to improve the performance of diagnosis.
Methods
A novel method is proposed to automatically diagnose COVID-19 by using Electrocardiogram (ECG) data with deep learning for the first time. Moreover, a new and effective method called hexaxial feature mapping is proposed to represent 12-lead ECGs as 2D color images. The Gray-Level Co-Occurrence Matrix (GLCM) method is used to extract features and generate hexaxial mapping images. These generated images are then fed into a new Convolutional Neural Network (CNN) architecture to diagnose COVID-19.
Results
Two different classification scenarios are conducted on a publicly available paper-based ECG image dataset to reveal the diagnostic capability and performance of the proposed approach. In the first scenario, ECG data labeled as COVID-19 and No-Findings (normal) are classified to evaluate COVID-19 classification ability. According to results, the proposed approach provides encouraging COVID-19 detection performance with an accuracy of 96.20% and F1-Score of 96.30%. In the second scenario, ECG data labeled as Negative (normal, abnormal, and myocardial infarction) and Positive (COVID-19) are classified to evaluate COVID-19 diagnostic ability. The experimental results demonstrated that the proposed approach provides satisfactory COVID-19 prediction performance with an accuracy of 93.00% and F1-Score of 93.20%. Furthermore, different experimental studies are conducted to evaluate the robustness of the proposed approach.
Conclusion
Automatic detection of cardiovascular changes caused by COVID-19 can be possible with a deep learning framework through ECG data. This not only proves the presence of cardiovascular changes caused by COVID-19 but also reveals that ECG can potentially be used in the diagnosis of COVID-19. We believe the proposed study may provide a crucial decision-making system for healthcare professionals.
Source code
All source codes are made publicly available at: https://github.com/mkfzdmr/COVID-19-ECG-Classification
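
For readers unfamiliar with GLCMs, the sketch below computes a single-offset co-occurrence matrix and one classic texture feature (contrast) in NumPy. It is a simplified illustration of the general technique only; the paper's hexaxial mapping and preprocessing are more involved.

```python
# Sketch of a gray-level co-occurrence matrix (GLCM) for one offset and 8 gray
# levels (illustrative; not the paper's feature-extraction pipeline).
import numpy as np

def glcm(image, levels=8, dx=1, dy=0):
    """Count how often gray level i occurs next to gray level j at offset (dy, dx)."""
    img = (image.astype(float) / (image.max() + 1e-9) * (levels - 1)).astype(int)
    P = np.zeros((levels, levels), dtype=np.int64)
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            P[img[y, x], img[y + dy, x + dx]] += 1
    return P / P.sum()                      # normalize to joint probabilities

glcm_matrix = glcm(np.random.randint(0, 256, size=(64, 64)))
contrast = sum(((i - j) ** 2) * glcm_matrix[i, j]
               for i in range(8) for j in range(8))   # a classic GLCM texture feature
```
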
47

Rietz, Finn, Alexander Sutherland, Suna Bensch, Stefan Wermter e Thomas Hellström. "WoZ4U: An Open-Source Wizard-of-Oz Interface for Easy, Efficient and Robust HRI Experiments". Frontiers in Robotics and AI 8 (14 de julho de 2021). http://dx.doi.org/10.3389/frobt.2021.668057.

Abstract:
Wizard-of-Oz experiments play a vital role in Human-Robot Interaction (HRI), as they allow for quick and simple hypothesis testing. Still, no publicly available general tool for conducting such experiments currently exists in the research community, and researchers often develop and implement their own tools, customized for each individual experiment. Besides being inefficient in terms of programming effort, this also makes it harder for non-technical researchers to conduct Wizard-of-Oz experiments. In this paper, we present a general and easy-to-use tool for the Pepper robot, one of the most commonly used robots in this context. While we provide the concrete interface for Pepper robots only, the system architecture is independent of the type of robot and can be adapted for other robots. A configuration file, which saves experiment-specific parameters, enables a quick setup for reproducible and repeatable Wizard-of-Oz experiments. A central server provides a graphical interface via a browser while handling the mapping of user input to actions on the robot. In our interface, keyboard shortcuts may be assigned to phrases, gestures, and composite behaviors to simplify and speed up control of the robot. The interface is lightweight and independent of the operating system. Our initial tests confirm that the system is functional, flexible, and easy to use. The interface, including source code, is made publicly available, and we hope that it will be useful for researchers with any background who want to conduct HRI experiments.
48

Tran, Alan, Alex Bocharov, Bela Bauer e Parsa Bonderson. "Optimizing Clifford gate generation for measurement-only topological quantum computation with Majorana zero modes". SciPost Physics 8, n.º 6 (24 de junho de 2020). http://dx.doi.org/10.21468/scipostphys.8.6.091.

Abstract:
One of the main challenges for quantum computation is that while the number of gates required to perform a non-trivial quantum computation may be very large, decoherence and errors in realistic quantum architectures limit the number of physical gate operations that can be performed coherently. Therefore, an optimal mapping of the quantum algorithm into the physically available set of operations is of crucial importance. We examine this problem for a measurement-only topological quantum computer based on Majorana zero modes, where gates are performed through sequences of measurements. Such a scheme has been proposed as a practical, scalable approach to process quantum information in an array of topological qubits built using Majorana zero modes. Building on previous work that has shown that multi-qubit Clifford gates can be enacted in a topologically protected fashion in such qubit networks, we discuss methods to obtain the optimal measurement sequence for a given Clifford gate under the constraints imposed by the physical architecture, such as layout and the relative difficulty of implementing different types of measurements. Our methods also provide tools for comparative analysis of different architectures and strategies, given experimental characterizations of particular aspects of the systems under consideration. As a further non-trivial demonstration, we discuss an implementation of the surface code in Majorana-based topological qubits. We use the techniques developed here to obtain an optimized measurement sequence that implements the stabilizer measurements using only fermionic parity measurements on nearest-neighbor topological qubit islands.
49

Heckel, Reinhard, Wen Huang, Paul Hand e Vladislav Voroninski. "Rate-optimal denoising with deep neural networks". Information and Inference: A Journal of the IMA, 17 de junho de 2020. http://dx.doi.org/10.1093/imaiai/iaaa011.

Abstract:
Deep neural networks provide state-of-the-art performance for image denoising, where the goal is to recover a near noise-free image from a noisy observation. The underlying principle is that neural networks trained on large data sets have empirically been shown to be able to generate natural images well from a low-dimensional latent representation of the image. Given such a generator network, a noisy image can be denoised by (i) finding the closest image in the range of the generator or by (ii) passing it through an encoder-generator architecture (known as an autoencoder). However, there is little theory to justify this success, let alone to predict the denoising performance as a function of the network parameters. In this paper, we consider the problem of denoising an image from additive Gaussian noise using the two generator-based approaches. In both cases, we assume the image is well described by a deep neural network with ReLU activation functions, mapping a $k$-dimensional code to an $n$-dimensional image. In the case of the autoencoder, we show that the feedforward network reduces noise energy by a factor of $O(k/n)$. In the case of optimizing over the range of a generative model, we state and analyze a simple gradient algorithm that minimizes a non-convex loss function and provably reduces noise energy by a factor of $O(k/n)$. We also demonstrate in numerical experiments that this denoising performance is, indeed, achieved by generative priors learned from data.
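
The setting can be restated schematically as follows (notation simplified relative to the paper; constants and the precise assumptions on the generator $G$ are given there).

```latex
% Observation model: an image in the range of a generator G, corrupted by Gaussian noise
y = x + \eta, \qquad x = G(z^{*}) \in \mathbb{R}^{n}, \quad z^{*} \in \mathbb{R}^{k}, \quad \eta \sim \mathcal{N}(0, \sigma^{2} I_{n})

% Denoising by projecting y onto the range of the generator
\hat{z} = \arg\min_{z} \lVert G(z) - y \rVert_{2}^{2}, \qquad \hat{x} = G(\hat{z})

% Claimed noise-energy reduction (for both the autoencoder and the gradient method)
\mathbb{E}\,\lVert \hat{x} - x \rVert_{2}^{2} \;=\; O\!\left(\tfrac{k}{n}\right) \mathbb{E}\,\lVert \eta \rVert_{2}^{2} \;=\; O(k\,\sigma^{2})
```
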
50

Gueta, Tomer, Rahul Chauhan, Thiloshon Nagarajah, Vijay Barve, Povilas Gibas, Martynas Jočys, Rahul Saxena, Sunny Dhoke e Yohay Carmel. "bddashboard: An infrastructure for biodiversity dashboards in R". Biodiversity Information Science and Standards 5 (27 de setembro de 2021). http://dx.doi.org/10.3897/biss.5.75684.

Abstract:
The bdverse is a collection of packages that form a general framework for facilitating biodiversity science in R (programming language). Exploratory and diagnostic visualization can unveil hidden patterns and anomalies in data and allow quick and efficient exploration of massive datasets. The development of an interactive yet flexible dashboard that can be easily deployed locally or remotely is a highly valuable biodiversity informatics tool. To this end, we have developed 'bddashboard', which serves as an agile framework for biodiversity dashboard development. This project is built in R, using the Shiny package (RStudio, Inc 2021) that helps build interactive web apps in R. The following key components were developed:
Core Interactive Components
The basic building blocks of every dashboard are interactive plots, maps, and tables. We have explored all major visualization libraries in R and have concluded that 'plotly' (Sievert 2020) is the most mature and showcases the best value for effort. Additionally, we have concluded that 'leaflet' (Graul 2016) shows the most diverse and high-quality mapping features, and DT (DataTables library) (Xie et al. 2021) is best for rendering tabular data. Each component was modularized to better adjust it for biodiversity data and to enhance its flexibility.
Field Selector
The field selector is a unique module that makes each interactive component much more versatile. Users have different data and needs; thus, every combination or selection of fields can tell a different story. The field selector allows users to change the X and Y axis on plots, to choose the columns that are visible on a table, and to easily control map settings. All that in real time, without reloading the page or disturbing the reactivity. The field selector automatically detects how many columns a plot needs and what type of columns can be passed to the X-axis or Y-axis. The field selector also displays the completeness of each field.
Plot Navigation
We developed the plot navigation module to prevent unwanted extreme cases. Technically, drawing 1,000 bars on a single bar plot is possible, but this visualization is not human-friendly. Navigation allows users to decide how many values they want to see on a single plot. This technique allows for fast drawing of extensive datasets without affecting page reactivity, dramatically improving performance and functioning as a fail-safe mechanism.
Reactivity
Reactivity creates the connection between different components. The changes in input values automatically flow to the plots, text, maps, and tables that use the input, and cause them to update. Reactivity facilitates drilling-down functionality, which enhances the user’s ability to explore and investigate the data. We developed a novel and robust reactivity technique that allows us to add a new component and effectively connect it with all existing components within a dashboard tab, using only one line of code.
Generic Biodiversity Tabs
We developed five useful dashboard tabs (Fig. 1): (i) the Data Summary tab to give a quick overview of a dataset; (ii) the Data Completeness tab helps users get valuable information about missing records and missing Darwin Core fields; (iii) the Spatial tab is dedicated to spatial visualizations; (iv) the Taxonomic tab is designed to visualize taxonomy; and (v) the Temporal tab is designed to visualize time-related aspects.
Performance and Agility
To make a dashboard work smoothly and react quickly, hundreds of small and large modules, functions, and techniques must work together. Our goal was to minimize dashboard latency and maximize its data capacity. We used asynchronous modules to write non-blocking code, clusters in map components, and preprocessing and filtering of data before passing it to plots to reduce the load. The 'bddashboard' package's modularized architecture allows us to develop completely different interactive and reactive dashboards within mere minutes.
