Academic literature on the topic 'Subroutine executed'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Subroutine executed.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Journal articles on the topic "Subroutine executed"

1

Kim, Jeong-Min, Youngsik Kim, Shin-Dug Kim, Tack-Don Han, and Sung-Bong Yang. "An Adaptive Parallel Computer Vision System." International Journal of Pattern Recognition and Artificial Intelligence 12, no. 03 (May 1998): 311–34. http://dx.doi.org/10.1142/s021800149800021x.

Full text
Abstract:
An approach for designing a hybrid parallel system that can perform different levels of parallelism adaptively is presented. An adaptive parallel computer vision system (APVIS) is proposed to attain this goal. The APVIS is constructed by tightly integrating two different types of parallel architectures, i.e. a multiprocessor based system (MBS) and a memory based processor array (MPA), into a single machine. One important feature of the APVIS is that the programming interface for executing data parallel code on the MPA is the same as the usual subroutine calling mechanism; thus the existence of the MPA is transparent to programmers. The aim of this research is to design an underlying base architecture on which a broad range of vision tasks can be executed optimally. A performance model is provided to show the effectiveness of the APVIS. It turns out that the proposed APVIS can provide significant performance improvement and cost effectiveness for highly parallel applications having a mixed set of parallelisms. Also an example application composed of a series of vision algorithms, from low-level and medium-level processing steps, is mapped onto the MPA. Consequently, the APVIS with a few or tens of MPA modules can perform the chosen example application in real time when multiple images arrive successively with an inter-arrival time of a few seconds.
APA, Harvard, Vancouver, ISO, and other styles
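The abstract's key idea, that data-parallel hardware can be hidden behind an ordinary subroutine call so its existence is transparent to the programmer, can be sketched as follows. This is an illustrative analogue, not the APVIS code; the function names and the thread-pool stand-in for the MPA are assumptions.

```python
from multiprocessing.dummy import Pool  # thread pool: a portable stand-in for parallel hardware

def _scale_rows_parallel(image, factor):
    # Hypothetical data-parallel kernel: each row is processed independently.
    with Pool() as pool:
        return pool.map(lambda row: [px * factor for px in row], image)

def scale_image(image, factor):
    """Ordinary subroutine interface: the caller cannot tell that the body
    is dispatched to parallel workers, so the 'MPA' stays transparent."""
    return _scale_rows_parallel(image, factor)

result = scale_image([[1, 2], [3, 4]], 2)
```

The caller writes `scale_image(...)` exactly as it would call any serial routine; only the body decides how the work is parallelized.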
2

Saternus, Zbigniew, Wiesława Piekarska, Marcin Kubiak, and Tomasz Domański. "The Influence of Welding Heat Source Inclination on the Melted Zone Shape, Deformations and Stress State of Laser Welded T-Joints." Materials 14, no. 18 (September 14, 2021): 5303. http://dx.doi.org/10.3390/ma14185303.

Full text
Abstract:
The paper concerns the numerical analysis of the influence of three different welding heat source inclinations on the weld pool shape and the mechanical properties of the resulting joint. The numerical analysis is based on experimental tests of single-side welding of two sheets made of X5CrNi18-10 stainless steel. The joint is made using a laser welding heat source, and the experimental test was performed for one heat source inclination. As part of the work, metallographic tests are performed to determine the quality of the obtained joints. Numerical calculations are executed in Abaqus FEA with the same geometrical model as in the experiment. The material model takes into account the temperature-dependent thermophysical properties of austenitic steel. Motion of the heat source is modeled in an additional subroutine, with the welding source parameters assumed in accordance with the welding process parameters. Numerical calculations were performed for three different inclinations of the source, one of which is consistent with the experimental studies. The performed calculations allowed the determination of the temperature field, the shape of the weld pool, and the deformations and stress state in the welded joint. The obtained results are compared with the results of the experiment.
APA, Harvard, Vancouver, ISO, and other styles
3

Lin, J. W. B. "qtcm 0.1.2: a Python implementation of the Neelin-Zeng Quasi-Equilibrium Tropical Circulation Model." Geoscientific Model Development 2, no. 1 (February 11, 2009): 1–11. http://dx.doi.org/10.5194/gmd-2-1-2009.

Full text
Abstract:
Abstract. Historically, climate models have been developed incrementally and in compiled languages like Fortran. While the use of legacy compiled languages results in fast, time-tested code, the resulting model is limited in its modularity and cannot take advantage of functionality available with modern computer languages. Here we describe an effort at using the open-source, object-oriented language Python to create more flexible climate models: the package qtcm, a Python implementation of the intermediate-level Neelin-Zeng Quasi-Equilibrium Tropical Circulation model (QTCM1) of the atmosphere. The qtcm package retains the core numerics of QTCM1, written in Fortran to optimize model performance, but uses Python structures and utilities to wrap the QTCM1 Fortran routines and manage model execution. The resulting "mixed language" modeling package allows the order and choice of subroutine execution to be altered at run time, and model analysis and visualization to be integrated interactively with model execution. This flexibility facilitates more complex scientific analysis using less complex code than would be possible using traditional languages alone, and provides tools to transform the traditional "formulate hypothesis → write and test code → run model → analyze results" sequence into a feedback loop that can be executed automatically by the computer.
APA, Harvard, Vancouver, ISO, and other styles
4

Lin, J. W. B. "qtcm 0.1.2: A Python Implementation of the Neelin-Zeng Quasi-Equilibrium Tropical Circulation model." Geoscientific Model Development Discussions 1, no. 1 (October 30, 2008): 315–44. http://dx.doi.org/10.5194/gmdd-1-315-2008.

Full text
Abstract:
Abstract. Historically, climate models have been developed incrementally and in compiled languages like Fortran. While the use of legacy compiled languages results in fast, time-tested code, the resulting model is limited in its modularity and cannot take advantage of functionality available with modern computer languages. Here we describe an effort at using the open-source, object-oriented language Python to create more flexible climate models: the package qtcm, a Python implementation of the intermediate-level Neelin-Zeng Quasi-Equilibrium Tropical Circulation model (QTCM1) of the atmosphere. The qtcm package retains the core numerics of QTCM1, written in Fortran to optimize model performance, but uses Python structures and utilities to wrap the QTCM1 Fortran routines and manage model execution. The resulting "mixed language" modeling package allows the order and choice of subroutine execution to be altered at run time, and model analysis and visualization to be integrated interactively with model execution. This flexibility facilitates more complex scientific analysis using less complex code than would be possible using traditional languages alone, and provides tools to transform the traditional "formulate hypothesis → write and test code → run model → analyze results" sequence into a feedback loop that can be executed automatically by the computer.
APA, Harvard, Vancouver, ISO, and other styles
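The qtcm design described above, in which the order and choice of subroutine execution can be altered at run time, can be sketched in a few lines of Python. The class and method names here are hypothetical illustrations of the pattern, not qtcm's actual API.

```python
class Model:
    """Minimal sketch of run-time-configurable subroutine ordering, in the
    spirit of qtcm's mixed-language design (names are hypothetical)."""
    def __init__(self):
        self.trace = []          # records which routines ran, and in what order

    def advection(self):
        self.trace.append("advection")

    def radiation(self):
        self.trace.append("radiation")

    def step(self, routines):
        # The routine list is data, not compiled-in control flow, so a user
        # can reorder or drop subroutines without recompiling anything.
        for name in routines:
            getattr(self, name)()

m = Model()
m.step(["radiation", "advection"])   # order chosen at run time
```

In the real package the dispatched routines would be wrapped Fortran subroutines rather than Python methods, but the dispatch-by-name mechanism is the same idea.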
5

Ren, Yanli, Min Dong, Zhihua Niu, and Xiaoni Du. "Noninteractive Verifiable Outsourcing Algorithm for Bilinear Pairing with Improved Checkability." Security and Communication Networks 2017 (2017): 1–9. http://dx.doi.org/10.1155/2017/4892814.

Full text
Abstract:
It is well known that the computation of bilinear pairing is the most expensive operation in pairing-based cryptography. In this paper, we propose a noninteractive verifiable outsourcing algorithm of bilinear pairing based on two servers in the one-malicious model. The outsourcer need not execute any expensive operation, such as scalar multiplication and modular exponentiation. Moreover, the outsourcer could detect any failure with a probability close to 1 if one of the servers misbehaves. Therefore, the proposed algorithm improves checkability and decreases communication cost compared with the previous ones. Finally, we utilize the proposed algorithm as a subroutine to achieve an anonymous identity-based encryption (AIBE) scheme with outsourced decryption and an identity-based signature (IBS) scheme with outsourced verification.
APA, Harvard, Vancouver, ISO, and other styles
6

Chmiel, Mirosław, Jan Mocha, Edward Hrynkiewicz, and Dariusz Polok. "About Implementation of IEC 61131-3 IL Function Blocks in Standard Microcontrollers." International Journal of Electronics and Telecommunications 60, no. 1 (March 1, 2014): 33–38. http://dx.doi.org/10.2478/eletel-2014-0004.

Full text
Abstract:
Abstract The paper presents considerations on the implementation of function blocks of the IL language as fragments of the control programs that use these blocks. The predefined function blocks of the IL language are then implemented in a central processing unit for a programmable controller based on standard microcontrollers from the MCS-51, AVR and ARM (Cortex-M3 core) families. The considerations refer to the IL language revision that is fully compliant with the IEC 61131-3 standard. The theoretical analysis demonstrated that the adopted method of module description is reasonable and offers substantial advantages compared to direct calls of function modules already developed as subroutines. The experiments performed have also proved the feasibility of building central units of programmable controllers from standard microcontrollers, and such central units may be competitive with the compact CPUs available on the market for typical PLCs.
APA, Harvard, Vancouver, ISO, and other styles
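The IEC 61131-3 function blocks discussed above are stateful: unlike a plain subroutine call, a block instance keeps internal data between invocations. A standard up-counter (CTU) illustrates the semantics; this is a Python rendering for illustration only, not the paper's microcontroller code.

```python
class CTU:
    """IEC 61131-3 up-counter function block: each rising edge on input CU
    increments the current value CV up to the preset value PV, which sets
    the output Q. RESET clears CV."""
    def __init__(self, pv):
        self.pv = pv            # preset value
        self.cv = 0             # current value (persists between calls)
        self.q = False          # output
        self._prev_cu = False   # remembered for edge detection

    def __call__(self, cu, reset=False):
        if reset:
            self.cv = 0
        elif cu and not self._prev_cu and self.cv < self.pv:
            self.cv += 1        # count only on the rising edge of CU
        self._prev_cu = cu
        self.q = self.cv >= self.pv
        return self.q

ctu = CTU(pv=2)
ctu(True); ctu(False); ctu(True)   # two rising edges reach the preset
```

The instance-owned state (`cv`, `_prev_cu`) is exactly what distinguishes a function block from a stateless function in the standard.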
7

Acton, C., N. Bachman, B. Semenov, and E. Wright. "SPICE TOOLS SUPPORTING PLANETARY REMOTE SENSING." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLI-B4 (June 13, 2016): 357–59. http://dx.doi.org/10.5194/isprs-archives-xli-b4-357-2016.

Full text
Abstract:
NASA's "SPICE"* ancillary information system has gradually become the de facto international standard for providing scientists the fundamental observation geometry needed to perform photogrammetry, map making and other kinds of planetary science data analysis. SPICE provides position and orientation ephemerides of both the robotic spacecraft and the target body; target body size and shape data; instrument mounting alignment and field-of-view geometry; reference frame specifications; and underlying time system conversions.
SPICE comprises not only data, but also a large suite of software, known as the SPICE Toolkit, used to access those data and subsequently compute derived quantities: items such as instrument viewing latitude/longitude, lighting angles, altitude, etc.
In existence since the days of the Magellan mission to Venus, the SPICE system has continuously grown to better meet the needs of scientists and engineers. For example, originally the SPICE Toolkit was offered only in Fortran 77, but is now available in C, IDL, MATLAB, and Java Native Interface. SPICE calculations were originally available only using APIs (subroutines), but can now be executed using a client-server interface to a geometry engine. Originally SPICE "products" were only available in numeric form, but now SPICE data visualization is also available.
The SPICE components are free of cost, license and export restrictions. Substantial tutorials and programming lessons help new users learn to employ SPICE calculations in their own programs. The SPICE system is implemented and maintained by the Navigation and Ancillary Information Facility (NAIF), a component of NASA's Planetary Data System (PDS).
* Spacecraft, Planet, Instrument, Camera-matrix, Events
APA, Harvard, Vancouver, ISO, and other styles
8

Acton, C., N. Bachman, B. Semenov, and E. Wright. "SPICE TOOLS SUPPORTING PLANETARY REMOTE SENSING." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLI-B4 (June 13, 2016): 357–59. http://dx.doi.org/10.5194/isprsarchives-xli-b4-357-2016.

Full text
Abstract:
NASA's "SPICE"&lt;sup&gt;*&lt;/sup&gt; ancillary information system has gradually become the de facto international standard for providing scientists the fundamental observation geometry needed to perform photogrammetry, map making and other kinds of planetary science data analysis. SPICE provides position and orientation ephemerides of both the robotic spacecraft and the target body; target body size and shape data; instrument mounting alignment and field-of-view geometry; reference frame specifications; and underlying time system conversions. &lt;br&gt;&lt;br&gt; SPICE comprises not only data, but also a large suite of software, known as the SPICE Toolkit, used to access those data and subsequently compute derived quantities–items such as instrument viewing latitude/longitude, lighting angles, altitude, etc. &lt;br&gt;&lt;br&gt; In existence since the days of the Magellan mission to Venus, the SPICE system has continuously grown to better meet the needs of scientists and engineers. For example, originally the SPICE Toolkit was offered only in Fortran 77, but is now available in C, IDL, MATLAB, and Java Native Interface. SPICE calculations were originally available only using APIs (subroutines), but can now be executed using a client-server interface to a geometry engine. Originally SPICE "products" were only available in numeric form, but now SPICE data visualization is also available. &lt;br&gt;&lt;br&gt; The SPICE components are free of cost, license and export restrictions. Substantial tutorials and programming lessons help new users learn to employ SPICE calculations in their own programs. The SPICE system is implemented and maintained by the Navigation and Ancillary Information Facility (NAIF)–a component of NASA's Planetary Data System (PDS). &lt;br&gt;&lt;br&gt; &lt;sup&gt;*&lt;/sup&gt; Spacecraft, Planet, Instrument, Camera-matrix, Events
APA, Harvard, Vancouver, ISO, and other styles
9

Dietterich, T. G. "Hierarchical Reinforcement Learning with the MAXQ Value Function Decomposition." Journal of Artificial Intelligence Research 13 (November 1, 2000): 227–303. http://dx.doi.org/10.1613/jair.639.

Full text
Abstract:
This paper presents a new approach to hierarchical reinforcement learning based on decomposing the target Markov decision process (MDP) into a hierarchy of smaller MDPs and decomposing the value function of the target MDP into an additive combination of the value functions of the smaller MDPs. The decomposition, known as the MAXQ decomposition, has both a procedural semantics---as a subroutine hierarchy---and a declarative semantics---as a representation of the value function of a hierarchical policy. MAXQ unifies and extends previous work on hierarchical reinforcement learning by Singh, Kaelbling, and Dayan and Hinton. It is based on the assumption that the programmer can identify useful subgoals and define subtasks that achieve these subgoals. By defining such subgoals, the programmer constrains the set of policies that need to be considered during reinforcement learning. The MAXQ value function decomposition can represent the value function of any policy that is consistent with the given hierarchy. The decomposition also creates opportunities to exploit state abstractions, so that individual MDPs within the hierarchy can ignore large parts of the state space. This is important for the practical application of the method. This paper defines the MAXQ hierarchy, proves formal results on its representational power, and establishes five conditions for the safe use of state abstractions. The paper presents an online model-free learning algorithm, MAXQ-Q, and proves that it converges with probability 1 to a kind of locally-optimal policy known as a recursively optimal policy, even in the presence of the five kinds of state abstraction. The paper evaluates the MAXQ representation and MAXQ-Q through a series of experiments in three domains and shows experimentally that MAXQ-Q (with state abstractions) converges to a recursively optimal policy much faster than flat Q learning. 
The fact that MAXQ learns a representation of the value function has an important benefit: it makes it possible to compute and execute an improved, non-hierarchical policy via a procedure similar to the policy improvement step of policy iteration. The paper demonstrates the effectiveness of this non-hierarchical execution experimentally. Finally, the paper concludes with a comparison to related work and a discussion of the design tradeoffs in hierarchical reinforcement learning.
APA, Harvard, Vancouver, ISO, and other styles
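The core of the MAXQ decomposition described above is the recursive identity Q(i, s, a) = V(a, s) + C(i, s, a), with V(i, s) = max_a Q(i, s, a) for a composite task. A toy numerical sketch (the state, subtasks and values below are invented for illustration, not taken from the paper):

```python
# Schematic MAXQ value decomposition for one fixed state s.
# Q(root, s, a) = V(a, s) + C(root, s, a): the value of executing child
# subtask a, plus the completion value of finishing root afterwards.

V_child = {"navigate": 3.0, "pickup": 1.0}   # V(a, s) for child subtasks
C = {"navigate": -1.0, "pickup": 0.5}        # C(root, s, a) completion values

def Q(a):
    return V_child[a] + C[a]

def V_root():
    # For a composite task, V(root, s) = max over children of Q(root, s, a).
    return max(Q(a) for a in V_child)

best_subtask = max(V_child, key=Q)
root_value = V_root()
```

In the full algorithm the C terms are themselves learned recursively (MAXQ-Q), and V for primitive actions bottoms out in one-step expected rewards; the additive split shown here is what enables the state abstractions the paper analyzes.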
10

"Home Energy Management System for a domestic load center using Artificial Neural Networks towards Energy Integration." International Journal of Recent Technology and Engineering 8, no. 4 (November 30, 2019): 12548–57. http://dx.doi.org/10.35940/ijrte.d6761.118419.

Full text
Abstract:
This paper, 'Home Energy Management System (HEMS) for a domestic load center using Artificial Neural Networks towards Energy Integration', focuses on implementing intelligent integration of Distributed Generation Systems (DGS) for a domestic load. The proposed energy integration by the HEMS is executed by a Load Management Algorithm (LMA) that functions on the basis of ANN forecast models; Demand Side Management (DSM) techniques form the backbone of the proposed LMA. Historical temperature data and electrical load data of a domestic load center in Birmingham, Alabama, USA are used to implement the proposed LMA. Two subroutine algorithms implement the LMA. The first, the Load Priority Assignment Algorithm (LPAA), assigns a load priority to each load for the hour based on the temperature forecast, which is produced by a Nonlinear Auto-Regression with Exogenous Inputs (NARX) ANN time-series model. The second estimates the threshold power (PTh) for the hour according to the load priorities assigned by the LPAA. A big-data-driven NARX neural-network temperature forecasting model is used to assign hourly priorities to all appliances/loads, and a big-data-driven ANN load power forecasting model is used to predict the hourly PTh. The LMA proposed for the HEMS compares the current load demand with PTh step by step and transfers loads between the energy sources according to the priority assignment from the LPAA. The proposed LMA-based HEMS thus helps enhance the penetration of non-conventional DGS into the domestic load sector. The proposed HEMS is simulated in the MATLAB-Simulink environment using MATLAB 2018b.
The simulation model is embedded into target hardware, an Arduino Mega 2560, by enabling serial communication between MATLAB-Simulink and the Arduino, so that the experimental HEMS setup can validate the simulation results and transfer loads in real time. A working hardware model of the HEMS at the domestic load center is fabricated from simple, cost-effective components: an ACS712 current sensor, an Arduino Mega 2560, relays and a relay driver circuit.
APA, Harvard, Vancouver, ISO, and other styles
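The LMA's central step, comparing current demand against the forecast threshold power PTh and transferring loads by priority, can be sketched as below. The structure, load names and numbers are hypothetical; the paper's algorithm additionally drives the priorities and PTh from NARX forecasts.

```python
def load_management(loads, threshold_kw):
    """Sketch of the LMA core (hypothetical structure): keep the most
    essential loads on the grid up to the threshold power PTh, and transfer
    the rest to the alternative (DG) source."""
    # loads: (name, priority, power_kw); lower priority number = more essential
    grid, alt = [], []
    total = 0.0
    for name, prio, kw in sorted(loads, key=lambda l: l[1]):  # essential first
        if total + kw <= threshold_kw:
            grid.append(name)
            total += kw
        else:
            alt.append(name)          # transferred to the DG source
    return grid, alt

grid, alt = load_management(
    [("hvac", 2, 3.0), ("fridge", 1, 0.5), ("dryer", 3, 2.0)],
    threshold_kw=3.6,
)
```

Here the fridge and HVAC fit under the 3.6 kW threshold, so only the lowest-priority dryer is shifted off the grid.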

Conference papers on the topic "Subroutine executed"

1

Pao, Y. C. "A General-Purpose Software for Menu Design of CAD Projects." In ASME 1993 International Computers in Engineering Conference and Exposition. American Society of Mechanical Engineers, 1993. http://dx.doi.org/10.1115/cie1993-0074.

Full text
Abstract:
Abstract A software package, MenuCAD, has been developed to meet the general need of designing menu-driven, user-friendly CAD programs. The main menu is formatted like the major contents of a design project's final report: Contents, Analysis, Sample Design Cases, Illustrations and Tables, References, and Program Listings. Sub-menus are further divided into items delineating the steps involved in the design. On-screen help messages support designing the main menu and sub-menus interactively, and using the arrow keys to select a sub-menu and a particular item within it in order to execute a desired design step. MenuCAD builds the framework; its user supplements it with a subroutine, ExecItem, that describes the special features and directs how each design step of the project should be executed. A CAD design project for a four-bar linkage is presented as a sample application of the package.
APA, Harvard, Vancouver, ISO, and other styles
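The division of labor described above, where the framework owns the menu tree and navigation while the project supplies one ExecItem routine that carries out each selected step, can be sketched as follows. The menu contents and function names are illustrative, not MenuCAD's actual interface.

```python
# Framework-owned menu tree (contents are hypothetical examples).
MENU = {
    "Analysis": ["Position", "Velocity"],
    "References": ["Bibliography"],
}

def exec_item(submenu, item):
    """User-supplied ExecItem analogue: the one project-specific hook that
    says how each selected design step is actually executed."""
    return f"running {submenu}/{item}"

def select(submenu, index):
    # Framework side: resolve the selection, then delegate to ExecItem.
    item = MENU[submenu][index]
    return exec_item(submenu, item)

out = select("Analysis", 1)
```

Swapping in a different project means rewriting only `exec_item`; the menu navigation code is reused unchanged, which is the package's stated purpose.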
2

Hossain, Md Abir, Jacqueline R. Cottingham, and Calvin M. Stewart. "A Reduced Order Modeling Approach to Probabilistic Creep-Damage Predictions in Finite Element Analysis." In ASME Turbo Expo 2021: Turbomachinery Technical Conference and Exposition. American Society of Mechanical Engineers, 2021. http://dx.doi.org/10.1115/gt2021-58480.

Full text
Abstract:
Abstract This paper introduces a computationally efficient Reduced Order Modeling (ROM) approach for the probabilistic prediction of creep-damage failure. Component-level probabilistic simulations are needed to assess the reliability and safety of high-temperature components. Full-scale probabilistic creep-damage modeling in a finite element (FE) approach is computationally expensive, requiring many hundreds of simulations to replicate the uncertainty of component failure. To that end, ROM is proposed to minimize the elevated computational cost while controlling the loss of accuracy. It is proposed that full-scale probabilistic simulations can be completed in 1D at a reduced cost, the extremum conditions extracted, and those conditions applied to lower-cost 2D/3D probabilistic simulations of components that capture the mean and uncertainty of failure. The probabilistic Sine-hyperbolic (Sinh) model is selected, which in previous work was calibrated to alloy 304 stainless steel. The Sinh model includes probability density functions (pdfs) for test condition (stress and temperature), initial damage (i.e., microstructure), and material property uncertainty. The Sinh model is programmed into ANSYS finite element software using the USERCREEP.F material subroutine. First, the Sinh model and FE code are subject to verification and validation to affirm the accuracy of the simulations. Numerous Monte Carlo simulations are executed in a 1D model to generate probabilistic creep deformation, damage, and rupture data. This data is analyzed and the probabilistic parameters corresponding to extreme creep response are extracted. The ROM concept is applied where only the extreme conditions are used in the 2D probabilistic prediction of a component. The probabilistic predictions of the 1D and 2D models are compared to assess ROM for creep.
The ROM approach will potentially reduce the time and cost of simulating complex engineering systems while preserving the accuracy of the probabilistic predictions. Future studies will introduce multi-stage Sinh, stochasticity, and spatial uncertainty for improved predictions.
APA, Harvard, Vancouver, ISO, and other styles
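The ROM workflow summarized above, many cheap 1D Monte Carlo runs followed by extraction of the extreme parameter sets for the expensive 2D/3D runs, can be sketched generically. The stand-in "model" and the sampled distribution below are invented for illustration; the paper uses the probabilistic Sinh model in ANSYS.

```python
import random

def rupture_time(stress_factor):
    # Stand-in 1D creep response: rupture time falls as the sampled
    # stress factor rises (illustrative, not the Sinh model).
    return 100.0 / stress_factor

def monte_carlo_extremes(n=1000, seed=0):
    """Sketch of the ROM idea: run many cheap 1D samples, then keep only
    the inputs that produced the extreme responses for reuse in the
    expensive 2D/3D probabilistic simulations."""
    rng = random.Random(seed)
    samples = [rng.uniform(0.5, 1.5) for _ in range(n)]   # stress-factor pdf
    results = [(rupture_time(s), s) for s in samples]
    shortest = min(results)    # earliest rupture (worst case)
    longest = max(results)     # latest rupture (best case)
    return shortest[1], longest[1]

worst_input, best_input = monte_carlo_extremes()
```

Only `worst_input` and `best_input` (and whatever other extremum sets the analyst keeps) go forward to the costly component-level runs, which is where the computational savings come from.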
3

Mahmood, Fiaz, Huasi Hu, and Liangzhi Cao. "Buildup and Decay Analysis of Corrosion Products Activity in Primary Coolant Loop of AP-1000." In 2018 26th International Conference on Nuclear Engineering. American Society of Mechanical Engineers, 2018. http://dx.doi.org/10.1115/icone26-81388.

Full text
Abstract:
The broad half-life range of Activated Corrosion Products (ACPs) results in major radiation exposure throughout reactor operation and shutdown. The movement of unpredicted activity hot spots in the coolant loop can have large financial and dosimetric impacts, and PWR operating experience shows that the activity released during reactor operation and shutdown cannot be estimated through a simple correlation. This paper seeks to analyze the buildup and decay behavior of ACPs in the primary coolant loop of the AP-1000 under normal operation, power regulation and shutdown modes. The application of a well-tested mathematical model is extended in an in-house code, CPA-AP1000, developed in MATLAB, to simulate the behavior of the dominant Corrosion Products (CPs). The MCNP code is used as a subroutine of the program to model the reactor core and execute energy-dependent neutron flux calculations. It is observed that short-lived CPs (56Mn, 24Na) build up rapidly under normal operation and decay quickly after the reactor is shut down. The long-lived CPs (59Fe, 60Co, 99Mo) exhibit slow buildup under normal operating conditions and likewise sluggish decay after shutdown. To analyze the activity response during the reactor control regime, the operating power level is promptly decreased, and in response the specific activity of the CPs also follows a decreasing trend. It is noticed that the activity of CPs drops more slowly during the reactor control regime than during an emergency scram. The results are helpful in estimating the radiation exposure caused by ACPs when equipment in the coolant loop is accessed under normal operation, power regulation and shutdown modes. Moreover, the current analyses provide baseline data for further investigations of ACPs in the AP-1000, a new reactor design.
APA, Harvard, Vancouver, ISO, and other styles
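The buildup-then-decay behavior described above follows the textbook activation shape: activity approaches saturation as A_sat(1 - e^(-λt)) at power, then decays exponentially after shutdown. A minimal sketch, with 56Mn's roughly 2.6-hour half-life as the example; the normalization and schedule are illustrative, and the paper's CPA-AP1000 code additionally couples this with MCNP-computed fluxes and coolant transport:

```python
import math

def specific_activity(t_hours, shutdown_hours, half_life_hours, a_sat=1.0):
    """Buildup at power until shutdown_hours, exponential decay afterwards.
    a_sat is the saturation activity (normalized to 1 here)."""
    lam = math.log(2) / half_life_hours          # decay constant
    if t_hours <= shutdown_hours:                # buildup during operation
        return a_sat * (1.0 - math.exp(-lam * t_hours))
    a_at_shutdown = a_sat * (1.0 - math.exp(-lam * shutdown_hours))
    return a_at_shutdown * math.exp(-lam * (t_hours - shutdown_hours))

# Short-lived 56Mn (half-life ~2.6 h): near saturation after a day at power,
# then almost gone 12 hours after shutdown.
mn_at_shutdown = specific_activity(24.0, 24.0, 2.6)
mn_12h_later = specific_activity(36.0, 24.0, 2.6)
```

Repeating the same evaluation with a long half-life (e.g. 60Co) shows the opposite pattern the abstract reports: slow buildup and sluggish post-shutdown decay.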
4

Bossard, John, Alton Reich, and Alex DiMeo. "Analysis of Pressure Relief Valve Dynamics During Opening." In ASME 2016 Pressure Vessels and Piping Conference. American Society of Mechanical Engineers, 2016. http://dx.doi.org/10.1115/pvp2016-63261.

Full text
Abstract:
In nuclear power plants, power actuated pressure relief valves serve several purposes. They act as safety valves and open automatically in response to unusually high pressures in the primary system. They also act as power operated valves and are used to relieve steam in response to automatic or manually initiated control signals. These valves are required to lift completely over a short duration from the time that they receive an actuation signal or the system pressure exceeds the set point. This short lift time results in the valve disk moving at high velocities, and can result in high impact forces on the piston and stem when the valve fully opens. To quantitatively evaluate the dynamic performance of the Target Rocket Pressure Relief Valve, an analysis effort was undertaken to accommodate both the fluid dynamic features of valve operation and the kinematic characteristics of the valve during pressure relief. The analysis was executed with the Generalized Fluid System Simulation Program (GFSSP), a network flow solver CFD code developed by NASA that can analyze transient, multi-phase flows and conjugate heat transfer, along with custom user subroutines that accommodate other simulation requirements. In this paper we present the GFSSP model developed, and the computed results that could be compared with corresponding parameters measured in experimental testing of the pressure relief valve. Adjustments to GFSSP input parameters allow the valve model to be anchored to test data. This makes it possible to use the GFSSP model as a predictive tool for understanding valve dynamics, as well as for evaluating proposed pressure relief valve modifications for performance improvements.
APA, Harvard, Vancouver, ISO, and other styles
5

Robertson, Miles C., Aaron W. Costall, Peter J. Newton, and Ricardo F. Martinez-Botas. "Radial Turboexpander Optimization Over Discretized Heavy-Duty Test Cycles for Mobile Organic Rankine Cycle Applications." In ASME Turbo Expo 2016: Turbomachinery Technical Conference and Exposition. American Society of Mechanical Engineers, 2016. http://dx.doi.org/10.1115/gt2016-56754.

Full text
Abstract:
Mobile organic Rankine cycle (MORC) systems represent a candidate technology for the reduction of fuel consumption and CO2 emissions from heavy-duty vehicles. Through the recovery of internal combustion engine waste heat, energy can be either compounded or used to power vehicle ancillary systems. Waste heat recovery systems have been shown to deliver fuel economy improvements of up to 13% in large diesel engines [1]. Whilst the majority of studies focus on individual component performance under specific thermodynamic conditions, there has been little investigation into the effects of expander specification across transient test cycles used for heavy-duty engine emission certification. It is this holistic approach which will allow prediction of the validity of MORC systems for different classes of heavy-duty vehicle, in addition to providing an indication of system performance. This paper first describes a meanline (one-dimensional simulation along a mean streamline within a flow passage) model for radial ORC turbines, divided into two main subroutines. An on-design code takes a thermodynamic input, before generating a candidate geometry for a chosen operating point. The efficacy of this design is then evaluated by an off-design code, which applies loss correlations to the proposed geometry to give a prediction of turbine performance. The meanline code is then executed inside a quasi-steady-state ORC cycle model, using reference emission test cycles to generate exhaust (heat source) boundary conditions, generated by a simulated 11.7L heavy-duty diesel engine. A detailed evaporator model, developed using the NTU-effectiveness method and single/two-phase flow correlations, provides accurate treatment of heat flow within the system. Together, these elements allow estimation of ORC system performance across entire reference emission test cycles. 
In order to investigate the limits of MORC performance, a Genetic Algorithm is applied to the ORC expander, aiming to optimize the geometry specification (radii, areas, blade heights, angles) to provide maximal time-averaged power output. This process is applied across the reference duty cycles, with the implications on power output and turbine geometry discussed for each. Due to the large possible variation in thermodynamic conditions within the turbine operating range a typical ideal-gas methodology (generating a single operating map for interpolation across all operating points) is no longer accurate — a complete off-design calculation must therefore be performed for all operating points. To reduce computational effort, discretization of the ORC thermodynamic inputs (temperature, mass flow rate) is investigated with several strategies proposed for reduced-order simulation. The paper concludes by predicting which heavy-duty emission test cycles stand to benefit the most from this optimization procedure, along with a comparison to existing transient results. Duty cycles containing narrow bands of operation were found to provide optimal performance, with a Constant-Speed, Variable-Load cycle achieving an average power output of 4.60 kW. Consideration is also given to the effectiveness of the methodology contained within the paper, the challenges of making ORC systems viable for mobile applications, along with suggestions for future research developments.
APA, Harvard, Vancouver, ISO, and other styles
