Selected scientific literature on the topic "Code-to-architecture-mapping"


Below is a list of current articles, books, theses, conference proceedings, and other relevant scientific sources on the topic "Code-to-architecture-mapping".


Journal articles on the topic "Code-to-architecture-mapping":

1

Olsson, Tobias, Morgan Ericsson, and Anna Wingkvist. "s4rdm3x: A Tool Suite to Explore Code to Architecture Mapping Techniques". Journal of Open Source Software 6, no. 58 (February 7, 2021): 2791. http://dx.doi.org/10.21105/joss.02791.

2

Chen, Ya Qin, Dian Fu Ma, Ying Wang, and Xian Qi Zhao. "A Mapping Simulation of Code Generation for Partitioned System". Applied Mechanics and Materials 325-326 (June 2013): 1759–65. http://dx.doi.org/10.4028/www.scientific.net/amm.325-326.1759.

Abstract:
Design and verification are crucial for real-time embedded systems, since even a small fault may lead to a catastrophe. The Architecture Analysis and Design Language (AADL) is a modeling language used to design and analyze the architecture of real-time embedded systems based on Model Driven Architecture (MDA). Generating code from an AADL model to run on a real-time operating system avoids hand-writing mistakes and improves development efficiency. Partitioning is introduced into embedded systems to contain fault propagation. This paper presents a mapping approach to generate code from AADL models for partitioned systems; the generated code, which includes configuration code and C code, runs on a partitioned platform.
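Model-to-code mapping of this kind can be pictured, very loosely, as a template-driven traversal of an architecture model. The sketch below is purely illustrative: the model format and macro names such as `PARTITION_DECLARE` are invented for this example and are not taken from the paper.

```python
# Hypothetical sketch: walking a tiny partitioned-architecture model and
# emitting C-style configuration code. Model format and macros are invented.

def generate_partition_config(model):
    """Emit configuration code for each partition and its bound processes."""
    lines = ["/* auto-generated partition configuration */"]
    for part in model["partitions"]:
        lines.append(f'PARTITION_DECLARE("{part["name"]}", {part["period_ms"]} /* ms */);')
        for proc in part["processes"]:
            lines.append(f'  PROCESS_BIND("{part["name"]}", "{proc}");')
    return "\n".join(lines)

model = {
    "partitions": [
        {"name": "flight_ctrl", "period_ms": 10, "processes": ["pid_loop"]},
        {"name": "telemetry", "period_ms": 100, "processes": ["downlink", "logger"]},
    ]
}

print(generate_partition_config(model))
```

Real AADL tool chains work from a full instance model and generate both platform configuration and behavioral C code, but the core idea — a deterministic mapping from model elements to code templates — is the same.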
3

Sejans, Janis, and Oksana Nikiforova. "Problems and Perspectives of Code Generation from UML Class Diagram". Scientific Journal of Riga Technical University. Computer Sciences 44, no. 1 (January 1, 2011): 75–84. http://dx.doi.org/10.2478/v10143-011-0024-3.

Abstract:
As a result of increasing technological diversity, more attention is being focused on model driven architecture (MDA) and its standard, the Unified Modeling Language (UML). UML class diagrams require correct mapping of diagram notation to target programming language syntax under the MDA framework. Currently there are plenty of CASE tools which claim to be able to generate source code from UML models. By combining knowledge of a programming language's syntax rules with the semantics of UML class diagram notation, an experimental model for stress-testing a code generator can be produced, allowing comparison of the quality of the transformation results. This paper describes the creation of such experimental models.
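The notation-to-syntax mapping discussed above can be illustrated with a toy generator. The class-model format and rendering rules below are invented for illustration; real CASE tools work from far richer metamodels and target-language grammars.

```python
# Hypothetical sketch: rendering a UML-like class description as source code.
# The model format (name / attributes / operations) is invented.

def generate_class(cls):
    """Map a minimal UML-style class model to a Python class skeleton."""
    out = [f"class {cls['name']}:"]
    init_args = ", ".join(cls["attributes"])
    out.append(f"    def __init__(self, {init_args}):")
    for attr in cls["attributes"]:
        out.append(f"        self.{attr} = {attr}")
    for op in cls["operations"]:
        out.append(f"    def {op}(self):")
        out.append("        raise NotImplementedError")
    return "\n".join(out)

uml_class = {"name": "Account",
             "attributes": ["owner", "balance"],
             "operations": ["deposit", "withdraw"]}
print(generate_class(uml_class))
```

Feeding deliberately tricky models (reserved words as attribute names, empty operation lists, cyclic associations) into such a generator is essentially the "experimental model for stressing the code generator" idea the paper describes.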
4

Nasser, Y., J. F. Hélard, and M. Crussière. "System Level Evaluation of Innovative Coded MIMO-OFDM Systems for Broadcasting Digital TV". International Journal of Digital Multimedia Broadcasting 2008 (2008): 1–12. http://dx.doi.org/10.1155/2008/359206.

Abstract:
Single-frequency networks (SFNs) for broadcasting digital TV are a topic of theoretical and practical interest for future broadcasting systems. Although progress has been made in their characterization, there are still considerable gaps in their deployment with the MIMO technique. The contribution of this paper is multifold. First, we investigate the possibility of applying a space-time (ST) encoder between the antennas of two sites in an SFN. Then, we introduce a 3D space-time-space block code for future terrestrial digital TV in SFN architecture. The proposed 3D code is based on a double-layer structure designed for intercell and intracell space-time-coded transmissions. Finally, we propose to adapt a technique called effective exponential signal-to-noise ratio (SNR) mapping (EESM) to predict the bit error rate (BER) at the output of the channel decoder in MIMO systems. The EESM technique as well as the simulation results are used to doubly check the efficiency of our 3D code. This efficiency is obtained for equal and unequal received powers, regardless of the location of the receiver, by adequately combining ST codes. The 3D code is therefore a very promising candidate for SFN architecture with MIMO transmission.
5

Mishra, Bhavyansh, Robert Griffin, and Hakki Erhan Sevil. "Modelling Software Architecture for Visual Simultaneous Localization and Mapping". Automation 2, no. 2 (April 2, 2021): 48–61. http://dx.doi.org/10.3390/automation2020003.

Abstract:
Visual simultaneous localization and mapping (VSLAM) is an essential technique used in areas such as robotics and augmented reality for pose estimation and 3D mapping. Research on VSLAM using both monocular and stereo cameras has grown significantly over the last two decades. There is, therefore, a need for emphasis on a comprehensive review of the evolving architecture of such algorithms in the literature. Although VSLAM algorithm pipelines share similar mathematical backbones, their implementations are individualized and the ad hoc nature of the interfacing between different modules of VSLAM pipelines complicates code reuseability and maintenance. This paper presents a software model for core components of VSLAM implementations and interfaces that govern data flow between them while also attempting to preserve the elements that offer performance improvements over the evolution of VSLAM architectures. The framework presented in this paper employs principles from model-driven engineering (MDE), which are used extensively in the development of large and complicated software systems. The presented VSLAM framework will assist researchers in improving the performance of individual modules of VSLAM while not having to spend time on system integration of those modules into VSLAM pipelines.
6

Miele, Antonio, Christian Pilato, and Donatella Sciuto. "A Simulation-Based Framework for the Exploration of Mapping Solutions on Heterogeneous MPSoCs". International Journal of Embedded and Real-Time Communication Systems 4, no. 1 (January 2013): 22–41. http://dx.doi.org/10.4018/jertcs.2013010102.

Abstract:
The efficient analysis and exploration of mapping solutions of a parallel application on heterogeneous Multi-Processor Systems-on-Chip (MPSoCs) is usually a challenging task in system-level design, in particular when the architecture integrates hardware cores that may expose reconfigurable features. This paper proposes a system-level design framework based on SystemC simulations for fulfilling this task, featuring (i) an automated flow for the generation of timing models for the hardware cores starting from the application source code, (ii) an enhanced simulation environment for SystemC architectures enabling the specification and modification of mapping choices only by changing an XML descriptor, and (iii) a flexible controller of the simulation environment supporting the exploration of various mapping solutions featuring a customizable engine. The proposed framework has been validated with a case study considering an image processing application to show the possibility of automatically exploring alternative solutions on a reconfigurable MPSoC platform.
7

Schott, Brian, Stephen P. Crago, Robert H. Parker, Chen H. Chen, Lauretta C. Carter, Joseph P. Czarnaski, Matthew French, Ivan Hom, Tam Tho, and Terri Valenti. "Reconfigurable Architectures for System Level Applications of Adaptive Computing". VLSI Design 10, no. 3 (January 1, 2000): 265–79. http://dx.doi.org/10.1155/2000/28323.

Abstract:
The System Level Applications of Adaptive Computing (SLAAC) project is defining an open, distributed, scalable, adaptive computing systems architecture based on a high-speed network cluster of heterogeneous, FPGA-accelerated nodes. Two reference implementations of this architecture are being created. The Research Reference Platform (RRP) is a Myrinet™ cluster of PCs with PCI-based FPGA accelerators (SLAAC-1). The Deployable Reference Platform (DRP) is a Myrinet cluster of PowerPC™ nodes with VME-based FPGA accelerators (SLAAC-2) and a commercial 6U-VME quad-PowerPC board (CSPI M2641S) serving as the carrier. A key strategy proposed for successful ACS technology insertions is source-code compatibility between the RRP and DRP platforms. This paper focuses on the development of the SLAAC-1 and SLAAC-2 accelerators and how the network-centric SLAAC system-level architecture has shaped their designs. A preliminary mapping of a Synthetic Aperture Radar/Automatic Target Recognition (SAR/ATR) algorithm to SLAAC-2 is also discussed.
8

Kuc, Mateusz, Wojciech Sułek, and Dariusz Kania. "Low Power QC-LDPC Decoder Based on Token Ring Architecture". Energies 13, no. 23 (November 30, 2020): 6310. http://dx.doi.org/10.3390/en13236310.

Abstract:
The article presents an implementation of a low power Quasi-Cyclic Low-Density Parity-Check (QC-LDPC) decoder in a Field Programmable Gate Array (FPGA) device. The proposed solution is oriented to a reduction in dynamic energy consumption. The key research concepts present an effective technology mapping of a QC-LDPC decoder to an LUT-based FPGA with many limitations. The proposed decoder architecture uses a distributed control system and a Token Ring processing scheme. This idea helps limit the clock skew problem and is oriented to clock gating, a well-established concept for power optimization. Then the clock gating of the decoder building blocks allows for a significant reduction in energy consumption without deterioration in other parameters of the decoder, particularly its error correction performance. We also provide experimental results for decoder implementations with different QC-LDPC codes, indicating important characteristics of the code parity check matrix, for which an energy-saving QC-LDPC decoder with the proposed architecture can be designed. The experiments are based on implementations in the Intel Cyclone V FPGA device. Finally, the presented architecture is compared with the other solutions from the literature.
9

Wang, Degeng, and Michael Gribskov. "Examining the architecture of cellular computing through a comparative study with a computer". Journal of The Royal Society Interface 2, no. 3 (May 16, 2005): 187–95. http://dx.doi.org/10.1098/rsif.2005.0038.

Abstract:
The computer and the cell both use information embedded in simple coding, the binary software code and the quadruple genomic code, respectively, to support system operations. A comparative examination of their system architecture as well as their information storage and utilization schemes is performed. On top of the code, both systems display a modular, multi-layered architecture, which, in the case of a computer, arises from human engineering efforts through a combination of hardware implementation and software abstraction. Using the computer as a reference system, a simplistic mapping of the architectural components between the two is easily detected. This comparison also reveals that a cell abolishes the software–hardware barrier through genomic encoding for the constituents of the biochemical network, a cell's ‘hardware’ equivalent to the computer central processing unit (CPU). The information loading (gene expression) process acts as a major determinant of the encoded constituent's abundance, which, in turn, often determines the ‘bandwidth’ of a biochemical pathway. Cellular processes are implemented in biochemical pathways in parallel manners. In a computer, on the other hand, the software provides only instructions and data for the CPU. A process represents just sequentially ordered actions by the CPU and only virtual parallelism can be implemented through CPU time-sharing. Whereas process management in a computer may simply mean job scheduling, coordinating pathway bandwidth through the gene expression machinery represents a major process management scheme in a cell. In summary, a cell can be viewed as a super-parallel computer, which computes through controlled hardware composition. While we have, at best, a very fragmented understanding of cellular operation, we have a thorough understanding of the computer throughout the engineering process. The potential utilization of this knowledge to the benefit of systems biology is discussed.
10

Bardenheuer, Kristina, Alun Passey, Maria d'Errico, Barbara Millier, Carine Guinard-Azadian, Johan Aschan, and Michel van Speybroeck. "Honeur (Heamatology Outcomes Network in Europe): A Federated Model to Support Real World Data Research in Hematology". Blood 132, Supplement 1 (November 29, 2018): 4839. http://dx.doi.org/10.1182/blood-2018-99-111093.

Abstract:
Abstract Introduction: The Haematology Outcomes Network in EURope (HONEUR) is an interdisciplinary initiative aimed at improving patient outcomes by analyzing real world data across hematological centers in Europe. Its overarching goal is to create a secure network which facilitates the development of a collaborative research community and allows access to big data tools for analysis of the data. The central paradigm in the HONEUR network is a federated model whereby the data stays at the respective sites and the analysis is executed at the local data sources. To allow for a uniform data analysis, the common data model 'OMOP' (Observational Medical Outcomes Partnership) was selected and extended to accommodate specific hematology data elements. Objective: To demonstrate feasibility of the OMOP common data model for the HONEUR network. Methods: In order to validate the architecture of the HONEUR network and the applicability of the OMOP common data model, data from the EMMOS registry (NCT01241396) have been used. This registry is a prospective, non-interventional study that was designed to capture real world data regarding treatments and outcomes for multiple myeloma at different stages of the disease. Data was collected between Oct 2010 and Nov 2014 on more than 2,400 patients across 266 sites in 22 countries. Data was mapped to the OMOP common data model version 5.3. Additional new concepts to the standard OMOP were provided to preserve the semantic mapping quality and reduce the potential loss of granularity. Following the mapping process, a quality analysis was performed to assess the completeness and accuracy of the mapping to the common data model. Specific critical concepts in multiple myeloma needed to be represented in OMOP. This applies in particular for concepts like treatment lines, cytogenetic observations, disease progression, risk scales (in particular ISS and R-ISS). 
To accommodate these concepts, existing OMOP structures were used with the definition of new concepts and concept-relationships. Results: Several elements of mapping data from the EMMOS registry to the OMOP common data model (CDM) were evaluated via integrity checks. Core entities from the OMOP CDM were reconciled against the source data. This was applied for the following entities: person (profile of year of birth and gender), drug exposure (profile of number of drug exposures per drug, at ATC code level), conditions (profile of number of occurrences of conditions per condition code, converted to SNOMED), measurement (profile of number of measurements and value distribution per (lab) measurement, converted to LOINC) and observation (profile of number of observations per observation concept). Figure 1 shows the histogram of year of birth distribution between the EMMOS registry and the OMOP CDM. No discernible differences exist, except for subjects which have not been included in the mapping to the OMOP CDM due to lacking confirmation of a diagnosis of multiple myeloma. As additional part of the architecture validation, the occurrence of the top 20 medications in the EMMOS registry and the OMOP CDM were compared, with a 100% concordance for the drug codes, which is shown in Figure 2. In addition to the reconciliation against the different OMOP entities, a comparison was also made against 'derived' data, in particular 'time to event' analysis. Overall survival was plotted from calculated variables in the analysis level data from the EMMOS registry and derived variables in the OMOP CDM. Probability of overall survival over time was virtually identical with only one day difference in median survival and 95% confidence intervals identically overlapping over the period of measurement (Figure 3). 
Conclusions: The concordance of year of birth, drug code mapping and overall survival between the EMMOS registry and the OMOP common data model indicates the reliability of mapping potential in HONEUR, especially where auxiliary methods have been developed to handle outcomes and treatment data in a way that can be harmonized across platform datasets. Disclosures No relevant conflicts of interest to declare.

Theses / dissertations on the topic "Code-to-architecture-mapping":

1

Florean, Alexander, e Laoa Jalal. "Mapping Java Source Code To Architectural Concerns Through Machine Learning". Thesis, Karlstads universitet, Institutionen för matematik och datavetenskap (from 2013), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kau:diva-84250.

Abstract:
The explosive growth of software systems in both size and complexity results in a recognised need for techniques to combat architectural degradation. Reflexion Modelling is a method commonly used for Software Architectural Consistency Checking (SACC). However, the steps needed to utilise the method involve manual mapping, which can become tedious depending on the system's size. Recently, machine learning has been showing promising results, outperforming other approaches. However, neither a comparison of different classifiers nor a comprehensive investigation of how best to pre-process source code has yet been performed. This thesis compares different classifiers, relating their performance to the manual effort needed to train them and to how different pre-processing settings affect their accuracy. The study can be divided into two areas: pre-processing, and how large the manual mapping should be to achieve satisfactory performance. Across the three software systems used in this study, the overall best performing model, MaxEnt, achieved the following average results: accuracy 0.88, weighted precision 0.89 and weighted recall 0.88. SVM performed almost identically to MaxEnt. Furthermore, the results show that Naive-Bayes, the algorithm used in recent related work, performs worse than SVM and MaxEnt. The results indicated that pre-processing that extracts packages and libraries, together with the Bag-of-Words feature representation, had the best performance. Furthermore, it was found that manual mapping of a minimum of ten files per concern is needed for satisfactory performance. The research results represent a further step towards automating code-to-architecture mappings, as required in reflexion modelling and similar techniques.
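The classification setup the thesis evaluates can be sketched with a stdlib-only toy: a multinomial Naive Bayes over bag-of-words tokens that assigns source files to architectural concerns, trained on a small manual mapping. The file snippets and concern names below are invented for illustration; the thesis works on full Java systems with tooling not shown here.

```python
# Minimal sketch of code-to-architecture mapping as text classification.
# Naive Bayes with Laplace smoothing over bag-of-words tokens (stdlib only).
import math
from collections import Counter, defaultdict

def tokenize(source):
    return source.lower().replace(".", " ").replace(";", " ").split()

def train(mapped_files):
    """mapped_files: list of (source_text, concern) pairs — the manual mapping."""
    word_counts = defaultdict(Counter)
    concern_counts = Counter()
    for text, concern in mapped_files:
        concern_counts[concern] += 1
        word_counts[concern].update(tokenize(text))
    return word_counts, concern_counts

def classify(text, word_counts, concern_counts):
    vocab = {w for c in word_counts.values() for w in c}
    scores = {}
    for concern, n in concern_counts.items():
        total = sum(word_counts[concern].values())
        score = math.log(n / sum(concern_counts.values()))  # class prior
        for w in tokenize(text):
            # Laplace smoothing so unseen words do not zero out a concern.
            score += math.log((word_counts[concern][w] + 1) / (total + len(vocab)))
        scores[concern] = score
    return max(scores, key=scores.get)

training = [
    ("import java.sql.Connection; class UserDao", "persistence"),
    ("import javax.swing.JFrame; class MainWindow", "gui"),
]
wc, cc = train(training)
print(classify("class OrderDao implements java.sql.ResultSet", wc, cc))  # → persistence
```

The thesis's finding that package and library names carry the strongest signal is visible even in this toy: the `java.sql` and `javax.swing` tokens dominate the decision.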

Book chapters on the topic "Code-to-architecture-mapping":

1

Lämmer, Anne, Sandy Eggert, and Norbert Gronau. "A SOA-Based Approach to Integrate Enterprise Systems". In Enterprise Information Systems, 1265–78. IGI Global, 2011. http://dx.doi.org/10.4018/978-1-61692-852-0.ch506.

Abstract:
This chapter presents a procedure for the integration of enterprise systems. Therefore enterprise systems are being transferred into a service oriented architecture. The procedure model starts with decomposition into Web services. This is followed by mapping redundant functions and assigning of the original source code to the Web services, which are orchestrated in the final step. Finally, an example is given how to integrate an Enterprise Resource Planning System with an Enterprise Content Management System using the proposed procedure model.
2

Lämmer, Anne, Sandy Eggert, and Norbert Gronau. "A SOA-Based Approach to Integrate Enterprise Systems". In Organizational Advancements through Enterprise Information Systems, 120–33. IGI Global, 2010. http://dx.doi.org/10.4018/978-1-60566-968-7.ch008.

Abstract:
This chapter presents a procedure for the integration of enterprise systems. Therefore enterprise systems are being transferred into a service oriented architecture. The procedure model starts with decomposition into Web services. This is followed by mapping redundant functions and assigning of the original source code to the Web services, which are orchestrated in the final step. Finally, an example is given how to integrate an Enterprise Resource Planning System with an Enterprise Content Management System using the proposed procedure model.
3

Lammer, Anne, Sandy Eggert, and Norbert Gronau. "A Procedure Model for a SOA-Based Integration of Enterprise Systems". In Always-On Enterprise Information Systems for Business Continuance, 265–76. IGI Global, 2010. http://dx.doi.org/10.4018/978-1-60566-723-2.ch016.

Abstract:
Enterprise systems are being transferred into a service-oriented architecture. In this article we present a procedure for the integration of enterprise systems. The procedure model starts with decomposition into Web services. This is followed by mapping redundant functions and assigning of the original source code to the Web services, which are orchestrated in the final step. Finally an example is given how to integrate an Enterprise Resource Planning System and an Enterprise Content Management System using the proposed procedure model.
4

Lammer, Anne, Sandy Eggert, and Norbert Gronau. "A Procedure Model for a SOA-Based Integration of Enterprise Systems". In Enterprise Information Systems, 946–57. IGI Global, 2011. http://dx.doi.org/10.4018/978-1-61692-852-0.ch403.

Abstract:
Enterprise systems are being transferred into a service-oriented architecture. In this article we present a procedure for the integration of enterprise systems. The procedure model starts with decomposition into Web services. This is followed by mapping redundant functions and assigning of the original source code to the Web services, which are orchestrated in the final step. Finally an example is given how to integrate an Enterprise Resource Planning System and an Enterprise Content Management System using the proposed procedure model.
5

Lämmer, Anne, Sandy Eggert, and Norbert Gronau. "A Procedure Model for a SoA-Based Integration of Enterprise Systems". In Information Resources Management, 313–25. IGI Global, 2010. http://dx.doi.org/10.4018/978-1-61520-965-1.ch209.

Abstract:
Enterprise systems are being transferred into a service-oriented architecture. In this article we present a procedure for the integration of enterprise systems. The procedure model starts with decomposition into Web services. This is followed by mapping redundant functions and assigning of the original source code to the Web services, which are orchestrated in the final step. Finally an example is given how to integrate an Enterprise Resource Planning System and an Enterprise Content Management System using the proposed procedure model.

Conference papers on the topic "Code-to-architecture-mapping":

1

Sinkala, Zipani Tom, and Sebastian Herold. "InMap: Automated Interactive Code-to-Architecture Mapping Recommendations". In 2021 IEEE 18th International Conference on Software Architecture (ICSA). IEEE, 2021. http://dx.doi.org/10.1109/icsa51549.2021.00024.

2

Duncan, Ralph, Nima Gougol, and Jim Frandeen. "Mapping Exceptions to High-Level Source Code on a Heterogeneous Architecture". In 2018 9th International Symposium on Parallel Architectures, Algorithms and Programming (PAAP). IEEE, 2018. http://dx.doi.org/10.1109/paap.2018.00017.

3

Zheng, Yongjie, Cuong Cu, and Hazeline U. Asuncion. "Mapping Features to Source Code through Product Line Architecture: Traceability and Conformance". In 2017 IEEE International Conference on Software Architecture (ICSA). IEEE, 2017. http://dx.doi.org/10.1109/icsa.2017.13.

4

Olsson, Tobias, Morgan Ericsson, and Anna Wingkvist. "An exploration and experiment tool suite for code to architecture mapping techniques". In the 13th European Conference. New York, New York, USA: ACM Press, 2019. http://dx.doi.org/10.1145/3344948.3344997.

5

De Luca, Mark John, and Ian J. Taylor. "Low Fidelity, Numerical Simulation of Gas Turbine Propulsion Across Off Design Conditions". In ASME Turbo Expo 2020: Turbomachinery Technical Conference and Exposition. American Society of Mechanical Engineers, 2020. http://dx.doi.org/10.1115/gt2020-15576.

Abstract:
Abstract This paper presents the first section of work carried out as part of a larger research investigation into the feasibility of engine type architecture for specific mission profiles, pertaining to next generational concepts of spacecraft propulsion. Significant work has been undertaken to improve the capability of an existing simulation code, HyPro, used to model all aspects of propulsion technologies, with a focus being given to developing gas turbine type components. The paper explores the implementation of an approach that circumvents the requirements for detailed component mapping and eliminates the need for a fixed, specified design point, presenting early stage validation data of the methodology selected for the modelling, which will go towards justifying its inclusion to the code library.
6

Evans, Jerry W., Paul P. Mehta, and Arthur L. Ludwig. "Advanced Interactive Cost Analysis Tool (AICAT): An Environment for Evaluating Affordability". In ASME Turbo Expo 2005: Power for Land, Sea, and Air. ASMEDC, 2005. http://dx.doi.org/10.1115/gt2005-68283.

Abstract:
AICAT encompasses innovative software architecture linked to a relational database. The system is based on a methodology (U.S. Patent #6775647) American Technology & Services previously developed for estimating and modeling the cost of advanced technology hardware for the “next-generation” fighter engine [1]. The AICAT offers a universal platform or software environment for developing and utilizing various types of cost models ideally suited for analyzing Life Cycle Cost (LCC) elements for all types of gas turbine engines. These cost models are comprised of algorithms and logic for mapping to specific elements such as features and design parameters associated with various hardware types, as well as cost elements related to gas turbine development or maintenance. The AICAT architecture is designed to be readily expandable and easily modified. Unlike other cost modeling software, AICAT does not require logic to be written in software code and recompiled in order to add models for new cost elements or for updating existing cost models.
7

Chen, Jinwei, Yuanfu Li, Zhenchao Hu, and Huisheng Zhang. "Study on Model-Based Systems Engineering Method for Modeling of Thermal Power System". In ASME 2020 International Mechanical Engineering Congress and Exposition. American Society of Mechanical Engineers, 2020. http://dx.doi.org/10.1115/imece2020-23867.

Abstract:
Abstract Thermal power systems, particularly those with large capacity and high operating parameters, are increasingly complicated nowadays, comprising machinery, electronic, electrical, hydraulic, thermal, control, and process-oriented subsystems. The traditional document-based development method suffers from difficulty in reusing designed elements, weak traceability of requirements, and a lack of top-level logic verification. Moreover, a large gap arises because different models, tools and terminology are used during the design process. This gap results in inefficiencies and quality issues that can be very expensive. In this paper, a model-based systems engineering (MBSE) approach is introduced for the top-down design flow of thermal power systems. The MBSE method can perfect the requirement definition, complete the mapping of requirements to system elements, realize function logic verification, and support requirements verification at all stages. A GOPPRR (graph, object, point, property, role, relationship) meta-modeling method is proposed to support the MBSE formalisms. An Architecture Analysis and Design Integrated Approach (Arcadia) framework is adopted to capture the complex architecture; this is a standardized modeling method covering requirement analysis, function analysis, logic analysis, and architecture design. Based on the architecture-driven algorithm and code generation, the standardized modeling process can establish a traceable relationship at each design stage and can verify the availability of the initial requirements. Moreover, the designed elements of previous work can be reused in other related design processes. The proposed MBSE method is applied to establish a gas turbine performance simulation model. The entire modeling process is enhanced by managing the related design information consistently. The performance of the design process with the MBSE method is analyzed and compared from different aspects. The results show that the performance simulation model of the power system established by the MBSE method can effectively describe the requirements, functions, logic, and architecture during the design process. Based on the MBSE method, the requirements of the system are refined, traced and verified.
8

Amirante, Dario, Nicholas J. Hills, and Paolo Adami. "A Multi-Fidelity Aero-Thermal Design Approach for Secondary Air Systems". In ASME Turbo Expo 2020: Turbomachinery Technical Conference and Exposition. American Society of Mechanical Engineers, 2020. http://dx.doi.org/10.1115/gt2020-15922.

Abstract:
Abstract The paper presents a multi-disciplinary approach for aero-thermal and heat transfer analysis for internal flows. The versatility and potential benefit offered by the approach is described through the application to a realistic low pressure turbine assembly. The computational method is based on a run time code-coupling architecture that allows mixed models and simulations to be integrated together for the prediction of the sub-system aero-thermal performance. In this specific application the model is consisting of two rotor blades, the embedded vanes, the inter-stage cavity and the solid parts. The geometry represents a real engine situation. The key element of the approach is the use of a fully modular coupling strategy that aims to combine (1) flexibility for design needs, (2) variable level of modelling for better accuracy and (3) in memory code coupling for preserving computational efficiency in large system and sub-system simulations. For this particular example Reynolds Averaged Navier-Stokes (RANS) equations are solved for the fluid regions and thermal coupling is enforced with the metal (conjugate heat transfer). Fluid-fluid interfaces use mixing planes between the rotating parts while overlapping regions are exploited to link the cavity flow to the main annulus flow as well as in the cavity itself for mapping of the metal parts and leakages. Metal temperatures predicted by the simulation are compared to those retrieved from a thermal model of the engine, and the results are discussed with reference to the underlying flow physics.
9

Calabrese, Agostina, Michele Bevilacqua and Roberto Navigli. "EViLBERT: Learning Task-Agnostic Multimodal Sense Embeddings". In Twenty-Ninth International Joint Conference on Artificial Intelligence and Seventeenth Pacific Rim International Conference on Artificial Intelligence (IJCAI-PRICAI-20). California: International Joint Conferences on Artificial Intelligence Organization, 2020. http://dx.doi.org/10.24963/ijcai.2020/67.

Abstract:
The problem of grounding language in vision is increasingly attracting scholarly efforts. As of now, however, most of the approaches have been limited to word embeddings, which are not capable of handling polysemous words. This is mainly due to the limited coverage of the available semantically-annotated datasets, hence forcing research to rely on alternative technologies (i.e., image search engines). To address this issue, we introduce EViLBERT, an approach which is able to perform image classification over an open set of concepts, both concrete and non-concrete. Our approach is based on the recently introduced Vision-Language Pretraining (VLP) model, and builds upon a manually-annotated dataset of concept-image pairs. We use our technique to clean up the image-to-concept mapping that is provided within a multilingual knowledge base, resulting in over 258,000 images associated with 42,500 concepts. We show that our VLP-based model can be used to create multimodal sense embeddings starting from our automatically-created dataset. In turn, we also show that these multimodal embeddings improve the performance of a Word Sense Disambiguation architecture over a strong unimodal baseline. We release code, dataset and embeddings at http://babelpic.org.
10

Tumkor, Serdar, Mingshao Zhang, Zhou Zhang, Yizhe Chang, Sven K. Esche and Constantin Chassapis. "Integration of a Real-Time Remote Experiment Into a Multi-Player Game Laboratory Environment". In ASME 2012 International Mechanical Engineering Congress and Exposition. American Society of Mechanical Engineers, 2012. http://dx.doi.org/10.1115/imece2012-86944.

Abstract:
While real-time remote experiments have been used in engineering and science education for over a decade, more recently virtual learning environments based on game systems have been explored for their potential usage in educational laboratories. However, combining the advantages of both these approaches and integrating them into an effective learning environment has not been reported yet. One of the challenges in creating such a combination is to overcome the barriers between real and virtual systems, i.e. to select compatible platforms, to achieve an efficient mapping between the real world and the virtual environment and to arrange for efficient communications between the different system components. This paper will present a pilot implementation of a multi-player game-based virtual laboratory environment that is linked to the remote experimental setup of an air flow rig. This system is designed for a junior-level mechanical engineering laboratory on fluid mechanics. In order to integrate this remote laboratory setup into the virtual laboratory environment, an existing remote laboratory architecture had to be redesigned. The integrated virtual laboratory platform consists of two main parts, namely an actual physical experimental device controlled by a remote controller and a virtual laboratory environment that was implemented using the ‘Source’ game engine, which forms the basis of the commercially available computer game ‘Half-Life 2’ in conjunction with ‘Garry’s Mod’ (GM). The system implemented involves a local device controller that exchanges data in the form of shared variables and Dynamic Link Library (DLL) files with the virtual laboratory environment, thus establishing the control of real physical experiments from inside the virtual laboratory environment. The application of a combination of C++ code, Lua scripts [1] and LabVIEW Virtual Instruments makes the platform very flexible and expandable.
This paper will present the architecture of this platform and discuss the general benefits of virtual environments that are linked with real physical devices.
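The shared-variable exchange this abstract describes can be sketched in miniature. This is not the paper's LabVIEW/Source-engine implementation: it is a generic thread-safe key-value table through which a hypothetical device-controller thread and a hypothetical virtual-environment thread publish values to each other; all variable names and values are made up.

```python
# Illustrative sketch (not the cited paper's code): a minimal
# thread-safe "shared variable" table through which a device
# controller and a virtual environment exchange setpoints and
# sensor readings.
import threading

class SharedVariables:
    def __init__(self):
        self._data = {}
        self._lock = threading.Lock()

    def write(self, name, value):
        with self._lock:
            self._data[name] = value

    def read(self, name, default=None):
        with self._lock:
            return self._data.get(name, default)

shared = SharedVariables()

def controller():
    # Device side: publish a measured flow rate (hypothetical value).
    shared.write("flow_rate", 12.5)

def virtual_lab():
    # Game side: publish a fan speed setpoint (hypothetical value).
    shared.write("fan_setpoint", 0.8)

t1 = threading.Thread(target=controller)
t2 = threading.Thread(target=virtual_lab)
t1.start(); t2.start()
t1.join(); t2.join()
print(shared.read("flow_rate"), shared.read("fan_setpoint"))
```

In the actual platform the two sides would run in separate processes linked by LabVIEW shared variables and DLLs, but the contract is the same: each side only ever reads and writes named variables, never the other side's internals.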
