To see the other types of publications on this topic, follow the link: Robust low-order modelling.

Journal articles on the topic 'Robust low-order modelling'


Consult the top 50 journal articles for your research on the topic 'Robust low-order modelling.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles across a wide variety of disciplines and organise your bibliography correctly.

1

Messenger, Andrew, and Thomas Povey. "Calibrated Low-Order Transient Thermal and Flow Models for Robust Test Facility Design." Journal of the Global Power and Propulsion Society 4 (July 3, 2020): 94–113. http://dx.doi.org/10.33737/jgpps/122270.

Abstract:
This paper describes an upgrade to high-temperature operation of the Engine Component AeroThermal (ECAT) facility, an established engine-parts facility at the University of Oxford. The facility is used for high-TRL research and development, new technology demonstration, and for component validation (typically large civil-engine HP NGVs). In current operation the facility allows Reynolds number, Mach number, and coolant-to-mainstream pressure ratio to be matched to engine conditions. Rich-burn or lean-burn temperature, swirl and turbulence profiles can also be simulated. The upgrade will increase the maximum inlet temperature to 600 K, allowing coolant-to-mainstream temperature ratio to be matched to engine conditions. This will allow direct validation of temperature ratio scaling methods in addition to providing a test bed in which all important non-dimensional parameters for aero-thermal behaviour are exactly matched. To accurately predict the operating conditions of the upgraded facility, a low order transient thermal model was developed in which the air delivery system and working section are modelled as a series of distributed thermal masses. Nusselt number correlations were used to calculate convective heat transfer to and from the fluid in the pipes and working section. The correlation was tuned and validated with experimental results taken from tests conducted in the existing facility. This modelling exercise informed a number of high-level facility design decisions, and provides an accurate estimate of the running conditions of the upgraded facility. We present detailed results from the low-order modelling, and discuss the key design decisions. We also present a discussion of challenges in the mechanical design of the working section, which is complicated by transient thermal stress induced in the working section components during facility start-up. The high-temperature core is unusually high-TRL for a research organisation, and we hope both the development and methodology will be of interest to engine designers and the research community.
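The "series of distributed thermal masses" approach this abstract describes can be sketched in a few lines. In the sketch below, a fixed convective heat transfer coefficient stands in for the tuned Nusselt correlations, and all geometry and material values are illustrative assumptions, not ECAT facility data:

```python
def simulate_facility_warmup(t_end=600.0, dt=0.1, T_gas_in=600.0, T0=300.0,
                             mdot=20.0, cp=1005.0, n_nodes=10,
                             m_wall=50.0, c_wall=500.0, h=200.0, area=2.0):
    """Lumped-capacitance sketch of a chain of wall 'thermal mass' nodes:
    hot air at T_gas_in flows past n_nodes nodes, each exchanging heat
    convectively with the local gas temperature (explicit Euler)."""
    T_wall = [T0] * n_nodes
    for _ in range(int(t_end / dt)):
        T_gas = T_gas_in
        for i in range(n_nodes):
            q = h * area * (T_gas - T_wall[i])       # convective heat flow [W]
            T_wall[i] += q * dt / (m_wall * c_wall)  # wall node warms up
            T_gas -= q / (mdot * cp)                 # gas cools along the duct
    return T_wall

walls = simulate_facility_warmup()
```

Each wall node lags the gas with time constant m_wall·c_wall/(h·area), so upstream nodes warm fastest and the wall temperature decreases monotonically along the duct.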
2

Jones, Bryn Ll, P. H. Heins, E. C. Kerrigan, J. F. Morrison, and A. S. Sharma. "Modelling for robust feedback control of fluid flows." Journal of Fluid Mechanics 769 (March 25, 2015): 687–722. http://dx.doi.org/10.1017/jfm.2015.84.

Abstract:
This paper addresses the problem of designing low-order and linear robust feedback controllers that provide a priori guarantees with respect to stability and performance when applied to a fluid flow. This is challenging, since whilst many flows are governed by a set of nonlinear, partial differential–algebraic equations (the Navier–Stokes equations), the majority of established control system design assumes models of much greater simplicity, in that they are: firstly, linear; secondly, described by ordinary differential equations (ODEs); and thirdly, finite-dimensional. With this in mind, we present a set of techniques that enables the disparity between such models and the underlying flow system to be quantified in a fashion that informs the subsequent design of feedback flow controllers, specifically those based on the $\mathscr{H}_{\infty}$ loop-shaping approach. Highlights include the application of a model refinement technique as a means of obtaining low-order models with an associated bound that quantifies the closed-loop degradation incurred by using such finite-dimensional approximations of the underlying flow. In addition, we demonstrate how the influence of the nonlinearity of the flow can be attenuated by a linear feedback controller that employs high loop gain over a select frequency range, and offer an explanation for this in terms of Landahl's theory of sheared turbulence. To illustrate the application of these techniques, an $\mathscr{H}_{\infty}$ loop-shaping controller is designed and applied to the problem of reducing perturbation wall shear stress in plane channel flow. Direct numerical simulation (DNS) results demonstrate robust attenuation of the perturbation shear stresses across a wide range of Reynolds numbers with a single linear controller.
3

Touber, Emile, and Neil D. Sandham. "Low-order stochastic modelling of low-frequency motions in reflected shock-wave/boundary-layer interactions." Journal of Fluid Mechanics 671 (March 7, 2011): 417–65. http://dx.doi.org/10.1017/s0022112010005811.

Abstract:
A combined numerical and analytical approach is used to study the low-frequency shock motions observed in shock/turbulent-boundary-layer interactions in the particular case of a shock-reflection configuration. Starting from an exact form of the momentum integral equation and guided by data from large-eddy simulations, a stochastic ordinary differential equation for the reflected-shock-foot low-frequency motions is derived. During the derivation a similarity hypothesis is verified for the streamwise evolution of boundary-layer thickness measures in the interaction zone. In its simplest form, the derived governing equation is mathematically equivalent to that postulated without proof by Plotkin (AIAA J., vol. 13, 1975, p. 1036). In the present contribution, all the terms in the equation are modelled, leading to a closed form of the system, which is then applied to a wide range of input parameters. The resulting map of the most energetic low-frequency motions is presented. It is found that while the mean boundary-layer properties are important in controlling the interaction size, they do not contribute significantly to the dynamics. Moreover, the frequency of the most energetic fluctuations is shown to be a robust feature, in agreement with earlier experimental observations. The model is proved capable of reproducing available low-frequency experimental and numerical wall-pressure spectra. The coupling between the shock and the boundary layer is found to be mathematically equivalent to a first-order low-pass filter. It is argued that the observed low-frequency unsteadiness in such interactions is not necessarily a property of the forcing, either from upstream or downstream of the shock, but an intrinsic property of the coupled system, whose response to white-noise forcing is in excellent agreement with actual spectra.
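The governing equation this abstract discusses — a first-order low-pass filter driven by white noise — is, in its simplest form, an Ornstein–Uhlenbeck process, which can be simulated directly. A minimal sketch with illustrative (non-dimensional) parameters:

```python
import math
import random

def simulate_shock_foot(tau=1.0, sigma=1.0, dt=0.01, n=200_000, seed=1):
    """Euler-Maruyama integration of a Plotkin-type model
    dx = -(x / tau) dt + sigma dW: a first-order low-pass filter driven
    by white noise (an Ornstein-Uhlenbeck process; parameters illustrative)."""
    rng = random.Random(seed)
    sqdt = math.sqrt(dt)
    x, xs = 0.0, []
    for _ in range(n):
        x += -(x / tau) * dt + sigma * sqdt * rng.gauss(0.0, 1.0)
        xs.append(x)
    return xs

xs = simulate_shock_foot()
variance = sum(v * v for v in xs) / len(xs)   # stationary value is sigma**2 * tau / 2
```

The response has most of its energy below the corner frequency 1/(2πτ) even though the forcing is white, which is the paper's point about low-frequency unsteadiness being a property of the coupled system rather than of the forcing.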
4

Bianchi, Fernando D., Marcela Moscoso-Vásquez, Patricio Colmegna, and Ricardo S. Sánchez-Peña. "Invalidation and low-order model set for artificial pancreas robust control design." Journal of Process Control 76 (April 2019): 133–40. http://dx.doi.org/10.1016/j.jprocont.2019.02.004.

5

Shishkin, G. I. "Robust Novel High-Order Accurate Numerical Methods for Singularly Perturbed Convection-Diffusion Problems." Mathematical Modelling and Analysis 10, no. 4 (2005): 393–412. http://dx.doi.org/10.3846/13926292.2005.9637296.

Abstract:
For singularly perturbed boundary value problems, numerical methods that converge ϵ-uniformly have low accuracy. For the parabolic convection-diffusion problem, the order of convergence does not exceed one even if the problem data are sufficiently smooth, and for piecewise smooth initial data this order is not higher than 1/2. For problems of this type, using newly developed methods such as a method based on the asymptotic expansion technique and a method of additive splitting of singularities, we construct ϵ-uniformly convergent schemes with improved order of accuracy.
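ϵ-uniform schemes of the kind described above are typically built on piecewise-uniform (Shishkin) meshes that concentrate half of the mesh intervals inside the boundary layer. A minimal construction for a layer at x = 1; the mesh constant sigma0 depends on the scheme's order, and the value here is an illustrative assumption:

```python
import math

def shishkin_mesh(n, eps, sigma0=2.0):
    """Piecewise-uniform Shishkin mesh on [0, 1] for a problem with a
    boundary layer at x = 1: the transition point tau places half of
    the n mesh intervals inside the layer of width
    tau = min(1/2, sigma0 * eps * ln n)."""
    tau = min(0.5, sigma0 * eps * math.log(n))
    half = n // 2
    coarse = [i * (1.0 - tau) / half for i in range(half)]
    fine = [(1.0 - tau) + i * tau / (n - half) for i in range(n - half + 1)]
    return coarse + fine

mesh = shishkin_mesh(64, 1e-4)   # 65 points, 64 intervals
```

For small ϵ the fine-region spacing is orders of magnitude smaller than the coarse spacing, which is what makes the convergence independent of the perturbation parameter.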
6

Abouorm, Lara, Maxime Blais, Nicolas Moulin, Julien Bruchon, and Sylvain Drapier. "A Robust Monolithic Approach for Resin Infusion Based Process Modelling." Key Engineering Materials 611-612 (May 2014): 306–15. http://dx.doi.org/10.4028/www.scientific.net/kem.611-612.306.

Abstract:
The aim of this work is to focus on the coupled Stokes-Darcy problem in order to propose a robust monolithic approach for simulating composite manufacturing processes based on liquid resin infusion. The computational domain can be divided into two non-miscible sub-domains: a purely fluid domain and a porous medium. In the purely fluid domain the fluid flows according to the Stokes equations, while in the preforms it flows according to the Darcy equations. Specific conditions have to be considered on the fluid/porous-medium interface. Under the effect of a mechanical pressure applied to the highly deformable preform/resin stacking, the resin flows and infuses through the preform, whose permeability is very low, down to 10⁻¹⁵ m². Flows are solved using a finite element method stabilized with a sub-grid scale stabilization technique (ASGS). Special attention is paid to the interface conditions, namely normal stress and velocity continuity and a tangential velocity constraint similar to a Beavers-Joseph-Saffman condition. The originality of the model consists in using one single mesh to represent the Stokes and Darcy sub-domains (monolithic approach). A level set context is used to represent the Stokes-Darcy interface and to capture the moving flow front. This monolithic approach is now robust and makes it possible to simulate complex shapes in manufacturing processes based on resin infusion.
7

Kim, Young Chol, and Lihua Jin. "Robust identification of continuous-time low-order models using moments of a single rectangular pulse response." Journal of Process Control 23, no. 5 (2013): 682–95. http://dx.doi.org/10.1016/j.jprocont.2013.03.002.

8

Semiletov, Vasily A., and Sergey A. Karabasov. "Similarity scaling of jet noise sources for low-order jet noise modelling based on the Goldstein generalised acoustic analogy." International Journal of Aeroacoustics 16, no. 6 (2017): 476–90. http://dx.doi.org/10.1177/1475472x17730457.

Abstract:
As a first step towards a robust low-order modelling framework that is free from either calibration parameters based on the far-field noise data or any assumptions about the noise source structure, a new low-order noise prediction scheme is implemented. The scheme is based on the Goldstein generalised acoustic analogy and uses the Large Eddy Simulation database of fluctuating Reynolds stress fields from the CABARET MILES solution of Semiletov et al., corresponding to a static isothermal jet from the SILOET experiment, for reconstruction of effective noise sources. The sources are scaled in accordance with physics-based arguments, and the corresponding sound mean-flow propagation problem is solved using a frequency-domain Green's function method for each jet case. Results of the far-field noise predictions of the new method are validated for two NASA SHJAR jet cases, sp07 and sp03, and compared with reference predictions, which are obtained by applying the Lighthill acoustic analogy scaling to the SILOET far-field measurements and by using an empirical jet-noise prediction code, sJet.
9

Liu, Bin, Chang-Hong Wang, Wei Li, and Zhuo Li. "Robust Controller Design Using the Nevanlinna-Pick Interpolation in Gyro Stabilized Pod." Discrete Dynamics in Nature and Society 2010 (2010): 1–16. http://dx.doi.org/10.1155/2010/569850.

Abstract:
The sensitivity minimization of a feedback system is solved based on the theory of Nevanlinna-Pick interpolation with degree constraint, without using weighting functions. The dynamic characteristics of the second-order system are investigated in more detail; they are determined by the location of the spectral zeros, the upper bound γ of the sensitivity function S, the length of the spectral radius, and the additional interpolation constraints. Guidelines on how to tune the design parameters are provided. A gyro-stabilized pod, a typical tracking system based on the two-axis, four-frame structure, is studied. The robust controller is designed based on Nevanlinna-Pick interpolation with degree constraint. When both friction of the LuGre model and disturbance exist, the closed-loop system has stronger disturbance rejection ability and high tracking precision. Numerical examples illustrate the potential of the method in designing robust controllers with relatively low degrees.
10

Flinois, Thibault L. B., and Aimee S. Morgans. "Feedback control of unstable flows: a direct modelling approach using the Eigensystem Realisation Algorithm." Journal of Fluid Mechanics 793 (March 14, 2016): 41–78. http://dx.doi.org/10.1017/jfm.2016.111.

Abstract:
Obtaining low-order models for unstable flows in a systematic and computationally tractable manner has been a long-standing challenge. In this study, we show that the Eigensystem Realisation Algorithm (ERA) can be applied directly to unstable flows, and that the resulting models can be used to design robust stabilising feedback controllers. We consider the unstable flow around a D-shaped body, equipped with body-mounted actuators, and sensors located either in the wake or on the base of the body. A linear model is first obtained using approximate balanced truncation. It is then shown that it is straightforward and justified to obtain models for unstable flows by directly applying the ERA to the open-loop impulse response. We show that such models can also be obtained from the response of the nonlinear flow to a small impulse. Using robust control tools, the models are used to design and implement both proportional and $\mathscr{H}_{\infty }$ loop-shaping controllers. The designed controllers were found to be robust enough to stabilise the wake, even from the nonlinear vortex shedding state and in some cases at off-design Reynolds numbers.
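The core of the Eigensystem Realisation Algorithm used above can be sketched in a few lines of NumPy: stack the impulse-response (Markov) parameters into Hankel matrices, truncate the SVD, and read off a state-space realisation. The toy impulse response and truncation order below are illustrative, not from the paper:

```python
import numpy as np

def era(markov, r):
    """ERA sketch (SISO case): build Hankel matrices from Markov
    parameters h_1, h_2, ..., truncate the SVD at order r, and return
    a discrete-time realisation (A, B, C)."""
    m = (len(markov) - 1) // 2
    H0 = np.array([[markov[i + j] for j in range(m)] for i in range(m)])
    H1 = np.array([[markov[i + j + 1] for j in range(m)] for i in range(m)])
    U, s, Vt = np.linalg.svd(H0)
    U, s, Vt = U[:, :r], s[:r], Vt[:r, :]
    Sh = np.diag(np.sqrt(s))                 # balanced square-root weighting
    Sh_inv = np.diag(1.0 / np.sqrt(s))
    A = Sh_inv @ U.T @ H1 @ Vt.T @ Sh_inv    # shifted-Hankel realisation of A
    B = Sh @ Vt[:, :1]                       # first column of controllability matrix
    C = U[:1, :] @ Sh                        # first row of observability matrix
    return A, B, C

# identify a toy stable first-order system with impulse response 0.5**k
h = [0.5 ** k for k in range((1), 12)]
A, B, C = era(h, r=1)
```

For an unstable flow, as the paper argues, the same construction applies directly to the open-loop impulse response; here the recovered pole of the toy system is 0.5 and the realisation reproduces the Markov parameters C A^(k-1) B = h_k.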
11

Tadmor, Gilead, Oliver Lehmann, Bernd R. Noack, et al. "Reduced-order models for closed-loop wake control." Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 369, no. 1940 (2011): 1513–24. http://dx.doi.org/10.1098/rsta.2010.0367.

Abstract:
We review a strategy for low- and least-order Galerkin models suitable for the design of closed-loop stabilization of wakes. These low-order models are based on a fixed set of dominant coherent structures and tend to be incurably fragile owing to two challenges. Firstly, they miss the important stabilizing effects of interactions with the base flow and stochastic fluctuations. Secondly, their range of validity is restricted by ignoring mode deformations during natural and actuated transients. We address the first challenge by including shift mode(s) and nonlinear turbulence models. The resulting robust least-order model lives on an inertial manifold, which links slow variations in the base flow and coherent and stochastic fluctuation amplitudes. The second challenge, the deformation of coherent structures, is addressed by parameter-dependent modes, allowing smooth transitions between operating conditions. Now, the Galerkin model lives on a refined manifold incorporating mode deformations. Control design is a simple corollary of the distilled model structure. We illustrate the modelling path for actuated wake flows.
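The "shift mode" this review refers to — a direction linking the mean flow and the steady base flow, appended to the POD basis and orthogonalised against it — can be sketched with synthetic snapshot data. Everything below (the synthetic "wake", the base flow, the truncation order) is an illustrative assumption:

```python
import numpy as np

def pod_with_shift_mode(snapshots, steady, r):
    """POD modes of the fluctuation snapshots, augmented with a 'shift
    mode': the difference between the mean flow and the steady base
    flow, orthonormalised against the POD modes (Gram-Schmidt)."""
    mean = snapshots.mean(axis=1, keepdims=True)
    U, s, _ = np.linalg.svd(snapshots - mean, full_matrices=False)
    modes = U[:, :r]                          # leading POD modes
    shift = mean[:, 0] - steady               # mean-minus-base-flow direction
    shift -= modes @ (modes.T @ shift)        # remove POD-mode components
    shift /= np.linalg.norm(shift)
    return np.column_stack([modes, shift])

# synthetic 'wake': oscillation about a mean displaced from the (zero) steady state
x = np.linspace(0.0, 2.0 * np.pi, 64)
t = np.linspace(0.0, 10.0, 100)
snaps = np.outer(np.sin(x), np.cos(5.0 * t)) + 0.5 * np.cos(x)[:, None]
basis = pod_with_shift_mode(snaps, np.zeros(64), r=1)
```

The resulting basis is orthonormal, and the shift mode recovers the displacement of the mean from the base flow (here the cos component), which is the slow degree of freedom the Galerkin model needs for stabilisation.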
12

Garabandić, D., and T. Petrović. "Robust Controllers for Pulse-Width-Modulated D.C./D.C. Converters Using Internal-Model-Control Design." Proceedings of the Institution of Mechanical Engineers, Part I: Journal of Systems and Control Engineering 207, no. 3 (1993): 127–34. http://dx.doi.org/10.1243/pime_proc_1993_207_331_02.

Abstract:
A linear feedback controller for pulse-width-modulated d.c./d.c. regulator is designed using a frequency domain optimization method based on internal-model-control theory. This method aims to produce suboptimal low-order controllers which are ‘robust’, in the sense that the closed-loop system is guaranteed to meet stability objectives in the presence of model uncertainty. The small-signal model of a d.c./d.c. converter is used for the controller design. The model uncertainty description derived here is based on experiments and non-linear modelling. The result of the synthesis is a family of controllers, and each member of this family satisfies the robust control objectives. All controllers have a multi-loop structure including two feedback loops and one feedforward loop. A detailed design of the controller, including experimental results, is presented.
13

Iffland, Ronja, Kristian Förster, Daniel Westerholt, María Herminia Pesci, and Gilbert Lösken. "Robust Vegetation Parameterization for Green Roofs in the EPA Stormwater Management Model (SWMM)." Hydrology 8, no. 1 (2021): 12. http://dx.doi.org/10.3390/hydrology8010012.

Abstract:
In increasingly expanding cities, roofs are still largely unused areas to counteract the negative impacts of urbanization on the water balance and to reduce flooding. To estimate the effect of green roofs as a sustainable low impact development (LID) technique on the building scale, different approaches to predict the runoff are carried out. In hydrological modelling, representing vegetation feedback on evapotranspiration (ET) is still considered challenging. In this research article, the focus is on improving the representation of the coupled soil–vegetation system of green roofs. Relevant data to calibrate and validate model representations were obtained from an existing field campaign comprising several green roof test plots with different characteristics. A coupled model, utilizing both the Penman–Monteith equation to estimate ET and the software EPA stormwater management model (SWMM) to calculate the runoff, was set up. Through the application of an automatic calibration procedure, we demonstrate that this coupled modelling approach (Kling–Gupta efficiency KGE = 0.88) outperforms the standard ET representation in EPA SWMM (KGE = −0.35), whilst providing a consistent and robust parameter set across all green roof configurations. Moreover, through a global sensitivity analysis, the impact of changes in model parameters was quantified in order to aid modelers in simplifying their parameterization of EPA SWMM. Finally, an improved model using the Penman–Monteith equation and various recommendations are presented.
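The Penman–Monteith ET estimate coupled to EPA SWMM in this study has a well-known daily reference form (FAO-56). The sketch below implements that standard form with illustrative inputs; it is not the authors' green-roof parameterization:

```python
import math

def fao56_reference_et(T, u2, rh, rn, g=0.0, pressure=101.3):
    """Daily reference evapotranspiration ET0 [mm/day] via the FAO-56
    Penman-Monteith equation.
    T: mean air temperature [deg C], u2: wind speed at 2 m [m/s],
    rh: relative humidity [%], rn: net radiation [MJ/m^2/day],
    g: soil heat flux [MJ/m^2/day], pressure: air pressure [kPa]."""
    es = 0.6108 * math.exp(17.27 * T / (T + 237.3))  # saturation vapour pressure [kPa]
    ea = es * rh / 100.0                             # actual vapour pressure [kPa]
    delta = 4098.0 * es / (T + 237.3) ** 2           # slope of vapour pressure curve
    gamma = 0.000665 * pressure                      # psychrometric constant [kPa/degC]
    num = 0.408 * delta * (rn - g) + gamma * 900.0 / (T + 273.0) * u2 * (es - ea)
    return num / (delta + gamma * (1.0 + 0.34 * u2))

et0 = fao56_reference_et(T=20.0, u2=2.0, rh=60.0, rn=15.0)
```

For a mild summer day (20 °C, 60% RH, light wind) this yields roughly 5 mm/day, a plausible mid-latitude reference value; a green-roof model would then scale such an estimate by vegetation and moisture factors.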
14

Rahnavard, Mostafa, Moosa Ayati, and Mohammad Reza Hairi Yazdi. "Robust actuator and sensor fault reconstruction of wind turbine using modified sliding mode observer." Transactions of the Institute of Measurement and Control 41, no. 6 (2018): 1504–18. http://dx.doi.org/10.1177/0142331218754620.

Abstract:
This paper proposes a robust fault diagnosis scheme based on a modified sliding mode observer, which reconstructs wind turbine hydraulic pitch actuator faults as well as simultaneous sensor faults. The wind turbine under consideration is a 4.8 MW benchmark model developed by Aalborg University and kk-electronic a/s. Rotor rotational speed, generator rotational speed, blade pitch angle and generator torque have different orders of magnitude. Since the dedicated sensors experience faults with quite different values, simultaneous fault reconstruction of these sensors is a challenging task. To address this challenge, some modifications are applied to the classic sliding mode observer to realize simultaneous fault estimation. The modifications are mainly made to the discontinuous injection switching term, the nonlinear part of the observer. The proposed fault diagnosis scheme does not require knowledge of the exact value of the nonlinear aerodynamic torque and is robust to disturbance and modelling uncertainties. The aerodynamic torque mapping, represented as a two-dimensional look-up table in the benchmark model, is estimated by an analytical expression. The pitch actuator low-pressure faults are identified using fault indicators. By filtering the outputs and defining an augmented state vector, the sensor faults are converted to actuator faults. Several fault scenarios, including the pitch actuator low-pressure faults and simultaneous sensor faults, are simulated in the wind turbine benchmark in the presence of measurement noise. Simulation results show that the modified observer immediately and faithfully estimates the actuator faults as well as simultaneous sensor faults with different orders of magnitude.
15

Muñoz-Nieto, A. L., P. Rodriguez-Gonzalvez, D. Gonzales-Aguilera, et al. "UAV archaeological reconstruction: The study case of Chamartin Hillfort (Avila, Spain)." ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences II-5 (May 28, 2014): 259–65. http://dx.doi.org/10.5194/isprsannals-ii-5-259-2014.

Abstract:
Photogrammetry from unmanned aerial vehicles (UAVs) is a common low-cost technique for 3D modelling. This technique is becoming essential for analysing cultural heritage, e.g. historical buildings, monuments and archaeological remains, in order not only to preserve them but also to disseminate accurate graphic information. The study case of the Chamartin hillfort (Ávila, Spain) provided us with the opportunity to apply automation techniques to generate geomatic products with high metric quality. A novel photogrammetric software tool was used with the aim of achieving high-resolution orthophotos and 3D models of complex sites. This tool allows a flexible way of documenting heritage, since it incorporates robust algorithms to cope with a wide range of study cases and shooting configurations.
16

Blank, Laura, Alfonso Caiazzo, Franz Chouly, Alexei Lozinski, and Joaquin Mura. "Analysis of a stabilized penalty-free Nitsche method for the Brinkman, Stokes, and Darcy problems." ESAIM: Mathematical Modelling and Numerical Analysis 52, no. 6 (2018): 2149–85. http://dx.doi.org/10.1051/m2an/2018063.

Abstract:
In this paper we study the Brinkman model as a unified framework to allow the transition between the Darcy and the Stokes problems. We propose an unconditionally stable low-order finite element approach, which is robust with respect to the whole range of physical parameters, and is based on the combination of stabilized equal-order finite elements with a non-symmetric penalty-free Nitsche method for the weak imposition of essential boundary conditions. In particular, we study the properties of the penalty-free Nitsche formulation for the Brinkman setting, extending a recently reported analysis for the case of incompressible elasticity (Boiveau and Burman, IMA J. Numer. Anal. 36 (2016) 770-795). Focusing on the two-dimensional case, we obtain optimal a priori error estimates in a mesh-dependent norm which, converging to natural norms in the cases of Stokes or Darcy flows, allows us to extend the results also to these limits. Moreover, we show that, in order to obtain robust estimates also in the Darcy limit, the formulation must be equipped with a grad-div stabilization and an additional stabilization to control the discontinuities of the normal velocity along the boundary. The conclusions of the analysis are supported by numerical simulations.
17

Horn, Joseph. "Non-Linear Dynamic Inversion Control Design for Rotorcraft." Aerospace 6, no. 3 (2019): 38. http://dx.doi.org/10.3390/aerospace6030038.

Abstract:
Flight control design for rotorcraft is challenging due to high-order dynamics, cross-coupling effects, and inherent instability of the flight dynamics. Dynamic inversion design offers a desirable solution to rotorcraft flight control as it effectively decouples the plant model and effectively handles non-linearity. However, the method has limitations for rotorcraft due to the requirement for full-state feedback and issues with non-minimum phase zeros. A control design study is performed using dynamic inversion with reduced order models of the rotorcraft dynamics, which alleviates the full-state feedback requirement. The design is analyzed using full order linear analysis and non-linear simulations of a utility helicopter. Simulation results show desired command tracking when the controller is applied to the full-order system. Classical stability margin analysis is used to achieve desired tradeoffs in robust stability and disturbance rejection. Results indicate the feasibility of applying dynamic inversion to rotorcraft control design, as long as full order linear analysis is applied to ensure stability and adequate modelling of low-frequency dynamics.
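The dynamic inversion idea evaluated above has a simple scalar sketch: cancel the plant nonlinearity through the control input and impose the desired error dynamics. The plant below is an illustrative nonlinear example, not a rotorcraft model:

```python
import math

def simulate_dynamic_inversion(t_end=5.0, dt=0.001, x_ref=1.0, k=4.0):
    """For a plant x' = f(x) + g(x) u, the law u = (v - f(x)) / g(x)
    cancels the plant dynamics and imposes the desired closed-loop
    dynamics v = -k (x - x_ref) exactly (given a perfect model)."""
    def f(x):
        return -x + 0.5 * math.sin(x)     # plant drift (nonlinear)

    def g(x):
        return 1.0 + 0.1 * x * x          # input gain (never zero here)

    x = 0.0
    for _ in range(int(t_end / dt)):
        v = -k * (x - x_ref)              # desired error dynamics
        u = (v - f(x)) / g(x)             # inversion control law
        x += (f(x) + g(x) * u) * dt       # plant step; equals x + v*dt exactly
    return x

x_final = simulate_dynamic_inversion()
```

With a perfect model the closed loop behaves as a pure first-order lag with rate k, so the state converges to x_ref; the paper's concern is precisely what happens when only a reduced-order model is available for the inversion.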
18

Lenstra, A. T. H., and O. N. Kataeva. "Structures of copper(II) and manganese(II) di(hydrogen malonate) dihydrate; effects of intensity profile truncation and background modelling on structure models." Acta Crystallographica Section B Structural Science 57, no. 4 (2001): 497–506. http://dx.doi.org/10.1107/s0108768101004050.

Abstract:
The crystal structures of the title compounds were determined with net intensities I derived via the background–peak–background procedure. Least-squares optimizations reveal differences between the low-order (0 < s < 0.7 Å⁻¹) and high-order (0.7 < s < 1.0 Å⁻¹) structure models. The scale factors indicate discrepancies of up to 10% between the low-order and high-order reflection intensities. This observation is compound independent. It reflects the scan-angle-induced truncation error, because the applied scan angle (0.8 + 2.0 tan θ)° underestimates the wavelength dispersion in the monochromated X-ray beam. The observed crystal structures show pseudo-I-centred sublattices for three of the non-H atoms in the asymmetric unit. Our selection of observed intensities (I > 3σ) stresses that pseudo-symmetry. Model refinements on individual data sets with (h + k + l) = 2n and (h + k + l) = 2n + 1 illustrate the lack of model robustness caused by that pseudo-symmetry. To obtain a better balanced data set and thus a more robust structure we decided to exploit background modelling. We described the background intensities B(H⃗) with an 11th-degree polynomial in θ. This function predicts the local background b at each position H⃗ and defines the counting statistical distribution P(B), in which b serves as average and variance. The observation R defines P(R). This leads to P(I) = P(R)/P(B) and thus I = R − b and σ²(I) = I so that the error σ(I) is background independent. Within this framework we reanalysed the structure of the copper(II) derivative. Background modelling resulted in a structure model with an improved internal consistency. At the same time the unweighted R value based on all observations decreased from 10.6 to 8.4%. A redetermination of the structure at 120 K concluded the analysis.
19

Bolotin, Arkady. "Robustness of the Regression Models for Uncertain Categories." International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems 15, no. 06 (2007): 681–95. http://dx.doi.org/10.1142/s0218488507004947.

Abstract:
Dichotomization of the outcome by a single cut-off point is an important part of medical studies. Usually the relationship between the resulted dichotomized dependent variable and explanatory variables is analyzed with linear regression, probit or logistic regression. However, in many real-life situations, a certain cut-off point is unknown and can be specified only approximately, i.e. surrounded by some (small) uncertainty. It means that in order to have any practical meaning the regression model must be robust to this uncertainty. In this paper, we test the robustness of the linear regression model and get that neither the beta in the model, nor its significance level is robust to the small variations in the dichotomization cut-off point. As an alternative robust approach to the problem of uncertain categories, we propose to make use of the linear regression model with the fuzzy membership function as a dependent variable. In the paper, we test the robustness of the linear regression model of such fuzzy dependent variable and get that this model can be insensitive against the uncertainty in the cut-off point location. To demonstrate theoretical conclusions, in the paper we present the modelling results from the real study of low haemoglobin levels in infants.
20

Hiptmair, Ralf, and Carolina Urzúa-Torres. "Preconditioning the EFIE on screens." Mathematical Models and Methods in Applied Sciences 30, no. 09 (2020): 1705–26. http://dx.doi.org/10.1142/s0218202520500347.

Abstract:
We consider the electric field integral equation (EFIE) modeling the scattering of time-harmonic electromagnetic waves at a perfectly conducting screen. When discretizing the EFIE by means of low-order Galerkin boundary element methods (BEM), one obtains linear systems that are ill-conditioned on fine meshes and for low wave numbers. This makes iterative solvers perform poorly and entails the use of preconditioning. In order to construct optimal preconditioners for the EFIE on screens, the authors recently derived compact equivalent inverses of the EFIE operator on simple Lipschitz screens in [R. Hiptmair and C. Urzúa-Torres, Compact equivalent inverse of the electric field integral operator on screens, Integral Equations Operator Theory 92 (2020) 9]. This paper elaborates how to use this result to build an optimal operator preconditioner for the EFIE on screens that can be discretized in a stable fashion. Furthermore, the stability of the preconditioner relies only on the stability of the discrete duality pairing for scalar functions, instead of the vectorial one. Therefore, this novel approach not only offers condition numbers that are independent of the mesh width and robust with respect to the wave number, but it is also easier to implement and accommodates non-uniform meshes without additional computational effort.
21

Katal, Nitish, and Shiv Narayan. "QFT Based Robust Positioning Control of the PMSM Using Automatic Loop Shaping with Teaching Learning Optimization." Modelling and Simulation in Engineering 2016 (2016): 1–18. http://dx.doi.org/10.1155/2016/9837058.

Full text
Abstract:
Automation of the robust control system synthesis for uncertain systems is of great practical interest. In this paper, the loop shaping step for synthesizing quantitative feedback theory (QFT) based controller for a two-phase permanent magnet stepper motor (PMSM) has been automated using teaching learning-based optimization (TLBO) algorithm. The QFT controller design problem has been posed as an optimization problem and TLBO algorithm has been used to minimize the proposed cost function. This facilitates designing low-order fixed-structure controller, eliminates the need of manual loop shaping step on the Nichols charts, and prevents the overdesign of the controller. A performance comparison of the designed controller has been made with the classical PID tuning method of Ziegler-Nichols and QFT controller tuned using other optimization algorithms. The simulation results show that the designed QFT controller using TLBO offers robust stability, disturbance rejection, and proper reference tracking over a range of PMSM’s parametric uncertainties as compared to the classical design techniques.
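The teaching-learning-based optimization loop described in this abstract can be sketched in a minimal form. The sphere function below is a hypothetical stand-in for the QFT loop-shaping cost; the paper's actual cost function and fixed-structure controller parameterization are not reproduced here.

```python
import random

def tlbo_minimize(cost, bounds, pop_size=20, iters=100, seed=1):
    """Minimal teaching-learning-based optimization (TLBO) sketch."""
    rng = random.Random(seed)
    dim = len(bounds)
    clip = lambda cand: [min(max(v, lo), hi) for v, (lo, hi) in zip(cand, bounds)]
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(iters):
        costs = [cost(x) for x in pop]
        teacher = pop[costs.index(min(costs))]
        mean = [sum(x[d] for x in pop) / pop_size for d in range(dim)]
        # Teacher phase: learners move towards the teacher, away from the mean.
        for i, x in enumerate(pop):
            tf = rng.choice([1, 2])  # teaching factor
            cand = clip([x[d] + rng.random() * (teacher[d] - tf * mean[d])
                         for d in range(dim)])
            if cost(cand) < cost(x):
                pop[i] = cand
        # Learner phase: each learner interacts with a randomly chosen peer.
        for i, x in enumerate(pop):
            j = rng.randrange(pop_size)
            if j == i:
                continue
            sign = 1.0 if cost(x) < cost(pop[j]) else -1.0
            cand = clip([x[d] + sign * rng.random() * (x[d] - pop[j][d])
                         for d in range(dim)])
            if cost(cand) < cost(x):
                pop[i] = cand
    costs = [cost(x) for x in pop]
    return pop[costs.index(min(costs))]

# Three-parameter toy problem standing in for the controller gains.
sphere = lambda x: sum(v * v for v in x)
best = tlbo_minimize(sphere, [(-5.0, 5.0)] * 3)
```

The greedy acceptance in both phases guarantees that each learner's cost never increases, which is what makes the algorithm suitable for automating a loop-shaping step without manual intervention on the Nichols chart.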
22

Tao, Chongben, Yufeng Jin, Feng Cao, Zufeng Zhang, Chunguang Li, and Hanwen Gao. "3D Semantic VSLAM of Indoor Environment Based on Mask Scoring RCNN." Discrete Dynamics in Nature and Society 2020 (October 20, 2020): 1–14. http://dx.doi.org/10.1155/2020/5916205.

Full text
Abstract:
Existing Visual SLAM (VSLAM) algorithms suffer from low accuracy and poor label classification when constructing semantic maps of indoor environments in which feature points are sparse. This paper proposes a 3D semantic VSLAM algorithm called BMASK-RCNN based on Mask Scoring RCNN. Firstly, feature points of images are extracted by the Binary Robust Invariant Scalable Keypoints (BRISK) algorithm. Secondly, map points of the reference key frame are projected to the current frame for feature matching and pose estimation, and an inverse depth filter is used to estimate the scene depth of the created key frame to obtain camera pose changes. In order to achieve object detection and semantic segmentation for both static and dynamic objects in indoor environments, and then construct a dense 3D semantic map with the VSLAM algorithm, a partially restructured Mask Scoring RCNN is used, with the TUM RGB-D SLAM dataset employed for transfer learning. Semantic information of independent targets in scenes, including object categories, not only provides high localization accuracy but also enables probabilistic updating of the semantic estimates by marking movable objects, thereby reducing the impact of moving objects on real-time mapping. Through simulation and real-world experimental comparison with three other algorithms, the results show that the proposed algorithm is more robust and that the semantic information used in 3D semantic mapping can be accurately obtained.
23

Yuan, Wei-Hai, Wei Zhang, Beibing Dai, and Yuan Wang. "Application of the particle finite element method for large deformation consolidation analysis." Engineering Computations 36, no. 9 (2019): 3138–63. http://dx.doi.org/10.1108/ec-09-2018-0407.

Full text
Abstract:
Purpose: Large deformation problems are frequently encountered in various fields of geotechnical engineering. The particle finite element method (PFEM) has been proven to be a promising method for solving large deformation problems. This study aims to develop a computational framework for modelling hydro-mechanically coupled porous media at large deformation based on the PFEM. Design/methodology/approach: The PFEM is extended by adopting linear and quadratic triangular elements for pore water pressure and displacements. A six-node triangular element is used for modelling two-dimensional problems instead of the low-order three-node triangular element; thus, the numerical instability induced by volumetric locking is avoided. The Modified Cam Clay (MCC) model is used to describe the elasto-plastic soil behaviour. Findings: The proposed approach is applied to several consolidation problems. The numerical results demonstrate that large deformation consolidation problems can be solved with the proposed approach without numerical difficulties or loss of accuracy, and that the coupled PFEM provides a stable, robust, and intrinsically stable numerical tool for such problems. Originality/value: The PFEM is extended to the large-deformation coupled hydro-mechanical problem, and is enhanced by using a six-node quadratic triangular element for displacement coupled with a four-node quadrilateral element for modelling excess pore pressure.
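A minimal sketch of the six-node quadratic triangle this abstract refers to: its shape functions evaluated in area (barycentric) coordinates L1 + L2 + L3 = 1, with corner nodes 1-3 and mid-side nodes 4-6 (node 4 between corners 1 and 2, and so on). This is the standard element, not the authors' code.

```python
def quad_triangle_shape(l1, l2):
    """Shape functions of the six-node quadratic triangle at (l1, l2)."""
    l3 = 1.0 - l1 - l2
    return [
        l1 * (2.0 * l1 - 1.0),  # corner node 1
        l2 * (2.0 * l2 - 1.0),  # corner node 2
        l3 * (2.0 * l3 - 1.0),  # corner node 3
        4.0 * l1 * l2,          # mid-side node 4
        4.0 * l2 * l3,          # mid-side node 5
        4.0 * l3 * l1,          # mid-side node 6
    ]
```

Each function equals one at its own node and zero at the others, and the six functions sum to one everywhere on the element (partition of unity), which is what makes the quadratic displacement interpolation consistent.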
24

ZAMPINI, STEFANO. "DUAL-PRIMAL METHODS FOR THE CARDIAC BIDOMAIN MODEL." Mathematical Models and Methods in Applied Sciences 24, no. 04 (2014): 667–96. http://dx.doi.org/10.1142/s0218202513500632.

Full text
Abstract:
The cardiac Bidomain model consists of a reaction–diffusion system of partial differential equations which is often discretized by low-order finite elements in space and implicit–explicit methods in time; the resulting linear systems are very ill-conditioned and must be solved at each time step of a cardiac beat simulation. In this paper we construct and analyze Balancing Domain Decomposition by Constraints and Finite Element Tearing and Interconnecting Dual-Primal methods for the Bidomain operator. Proven theoretical estimates show that the proposed methods are scalable, quasi-optimal and robust with respect to possible coefficient discontinuities of the Bidomain operator and to the time step. The results of extensive parallel numerical tests in three dimensions confirm the convergence rates predicted by the theory; large numerical simulations with up to 400 million degrees of freedom on 27K cores of a BlueGene/Q are also provided.
25

Buitink, Joost, Lieke A. Melsen, James W. Kirchner, and Adriaan J. Teuling. "A distributed simple dynamical systems approach (dS2 v1.0) for computationally efficient hydrological modelling at high spatio-temporal resolution." Geoscientific Model Development 13, no. 12 (2020): 6093–110. http://dx.doi.org/10.5194/gmd-13-6093-2020.

Full text
Abstract:
Abstract. In this paper, we introduce a new numerically robust distributed rainfall–runoff model for computationally efficient simulation at high spatio-temporal resolution: the distributed simple dynamical systems (dS2) model. The model is based on the simple dynamical systems approach as proposed by Kirchner (2009), and the distributed implementation allows for spatial heterogeneity in the parameters and/or model forcing fields at high spatio-temporal resolution (for instance as derived from precipitation radar data). The concept is extended with snow and routing modules, where the latter transports water from each pixel to the catchment outlet. The sensitivity function, which links changes in storage to changes in discharge, is implemented by a new three-parameter equation that is able to represent the widely observed downward curvature in log–log space. The simplicity of the underlying concept allows the model to calculate discharge in a computationally efficient manner, even at high temporal and spatial resolution, while maintaining proven model performance. The model code is written in Python in order to be easily readable and adjustable while maintaining computational efficiency. Since this model has short runtimes, it allows for extended sensitivity and uncertainty studies with relatively low computational costs. A test application shows good and consistent model performance across scales ranging from 3 to over 1700 km2.
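The simple dynamical systems idea behind dS2 (after Kirchner, 2009) is that discharge evolves as dQ/dt = g(Q)(P - E - Q), where g(Q) = dQ/dS is the sensitivity function linking storage changes to discharge changes. The sketch below uses a simple one-parameter power law g(Q) = a*Q**b purely for illustration; the paper's new three-parameter sensitivity function is not reproduced here, and all parameter values are hypothetical.

```python
def simulate_discharge(q0, precip, evap, a=0.5, b=0.5, dt=0.01, substeps=100):
    """Explicit-Euler integration of dQ/dt = g(Q) * (P - E - Q).

    One (P, E) pair per output time step, with `substeps` Euler
    sub-steps of size `dt` per step.
    """
    q = q0
    series = []
    for p, e in zip(precip, evap):
        for _ in range(substeps):
            g = a * q ** b            # illustrative sensitivity function
            q = max(q + dt * g * (p - e - q), 1e-9)  # keep discharge positive
        series.append(q)
    return series

# Constant forcing drives discharge towards the equilibrium Q = P - E.
qs = simulate_discharge(0.1, [2.0] * 50, [0.5] * 50)
```

Because the state is the discharge itself, one scalar update per pixel suffices, which is the source of the computational efficiency the abstract emphasizes.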
26

Resseguier, Valentin, Etienne Mémin, Dominique Heitz, and Bertrand Chapron. "Stochastic modelling and diffusion modes for proper orthogonal decomposition models and small-scale flow analysis." Journal of Fluid Mechanics 826 (August 15, 2017): 888–917. http://dx.doi.org/10.1017/jfm.2017.467.

Full text
Abstract:
We present here a new stochastic modelling approach in the constitution of fluid flow reduced-order models. This framework introduces a spatially inhomogeneous random field to represent the unresolved small-scale velocity component. Such a decomposition of the velocity in terms of a smooth large-scale velocity component and a rough, highly oscillating component gives rise, without any supplementary assumption, to a large-scale flow dynamics that includes a modified advection term together with an inhomogeneous diffusion term. Both of those terms, related respectively to turbophoresis and mixing effects, depend on the variance of the unresolved small-scale velocity component. They bring an explicit subgrid term to the reduced system which enables us to take into account the action of the truncated modes. Besides, a decomposition of the variance tensor in terms of diffusion modes provides a meaningful statistical representation of the stationary or non-stationary structuration of the small-scale velocity and of its action on the resolved modes. This supplies a useful tool for turbulent fluid flow data analysis. We apply this methodology to circular cylinder wake flow at Reynolds numbers $Re=100$ and $Re=3900$. The finite-dimensional models of the wake flows reveal the energy and the anisotropy distributions of the small-scale diffusion modes. These distributions identify critical regions where corrective advection effects, as well as structured energy dissipation effects, take place. In providing rigorously derived subgrid terms, the proposed approach yields accurate and robust temporal reconstruction of the low-dimensional models.
27

Campbell, Michael J., Philip E. Dennison, and Bret W. Butler. "A LiDAR-based analysis of the effects of slope, vegetation density, and ground surface roughness on travel rates for wildland firefighter escape route mapping." International Journal of Wildland Fire 26, no. 10 (2017): 884. http://dx.doi.org/10.1071/wf17031.

Full text
Abstract:
Escape routes are essential components of wildland firefighter safety, providing pre-defined pathways to a safety zone. Among the many factors that affect travel rates along an escape route, landscape conditions such as slope, low-lying vegetation density, and ground surface roughness are particularly influential, and can be measured using airborne light detection and ranging (LiDAR) data. In order to develop a robust, quantitative understanding of the effects of these landscape conditions on travel rates, we performed an experiment wherein study participants were timed while walking along a series of transects within a study area dominated by grasses, sagebrush and juniper. We compared resultant travel rates to LiDAR-derived estimates of slope, vegetation density and ground surface roughness using linear mixed effects modelling to quantify the relationships between these landscape conditions and travel rates. The best-fit model revealed significant negative relationships between travel rates and each of the three landscape conditions, suggesting that, in order of decreasing magnitude, as density, slope and roughness increase, travel rates decrease. Model coefficients were used to map travel impedance within the study area using LiDAR data, which enabled mapping the most efficient routes from fire crew locations to safety zones and provided an estimate of travel time.
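The mapping step this abstract describes can be sketched as follows: a fitted linear model of log travel rate is inverted into a per-segment travel speed, then summed along a candidate escape route as travel time. The coefficients below are hypothetical placeholders, not the values fitted in the paper.

```python
import math

# Hypothetical fixed-effects coefficients of a log-linear travel-rate model.
BETA_0 = 0.3          # intercept: log base walking speed (m/s)
BETA_SLOPE = -0.015   # per degree of slope
BETA_DENSITY = -0.020 # per percent vegetation density
BETA_ROUGH = -0.010   # per cm of ground surface roughness

def travel_rate(slope_deg, density_pct, roughness_cm):
    """Predicted walking speed (m/s) from the three landscape conditions."""
    log_rate = (BETA_0 + BETA_SLOPE * abs(slope_deg)
                + BETA_DENSITY * density_pct + BETA_ROUGH * roughness_cm)
    return math.exp(log_rate)

def route_time(segments):
    """Total travel time (s) over (length_m, slope, density, roughness) segments."""
    return sum(length / travel_rate(s, d, r) for length, s, d, r in segments)

flat_open = [(100.0, 0.0, 0.0, 0.0)] * 5        # 500 m of easy terrain
steep_brushy = [(100.0, 20.0, 40.0, 10.0)] * 5  # 500 m of hard terrain
```

With negative coefficients on all three conditions, harder terrain yields lower speeds and longer route times, matching the sign of the relationships reported in the abstract.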
28

Heo, Yoon, and Nguyen Khanh Doanh. "Trade Flower and IPR Protection: A Dynamic Analysis Of the Experience of ASEAN-6 Countries." International Studies Review 16, no. 1 (2015): 59–74. http://dx.doi.org/10.1163/2667078x-01601004.

Full text
Abstract:
This paper examines the impacts of intellectual property rights (IPR) protection in foreign markets on ASEAN countries' exports for the period 2005-2010 using a dynamic panel data model, which allows us to account for persistence effects. In order to address the inconsistency of OLS in dynamic modelling, we opt for the system GMM estimator because it helps researchers overcome the problems of serial correlation, heteroskedasticity, and endogeneity for some explanatory variables. Our results are robust and summarized as follows. First, reinforced IPR protection in foreign countries has a positive effect on ASEAN's exports, indicating the dominance of the market expansion effect. Second, regardless of the level of economic development in importing countries, stronger IPR protection induces ASEAN's exports to foreign countries. Third, the trade impacts of IPR protection are strongest in high-income trading partners, followed by medium-income, and finally, low-income partner countries. Fourth, at the sectoral level, the effect of IPR protection is found to be the strongest for capital-intensive exports to highly developed countries.
29

Lanzetta, Anna, Davide Mattioli, Francesco Di Capua, et al. "Anammox-Based Processes for Mature Leachate Treatment in SBR: A Modelling Study." Processes 9, no. 8 (2021): 1443. http://dx.doi.org/10.3390/pr9081443.

Full text
Abstract:
Mature landfill leachates are characterized by high levels of ammoniacal nitrogen which must be reduced for discharge into the sewer system and further treatment in municipal wastewater treatment plants. The use of anammox-based processes can allow for efficient treatment of ammonium-rich leachates. In this work, two real-scale sequencing batch reactors (SBRs), designed to initially perform partial nitritation/anammox (PN/A) and simultaneous partial nitrification and denitrification (SPND) for the treatment of ammonium-rich urban landfill leachate, were modelled using BioWin 6.0 in order to enable plant-wide modelling and optimization. The constructed models were calibrated and validated using data from long- and short-term (one-cycle) SBR operation and fit well to the main physical-chemical parameters (i.e., ammonium, nitrite and nitrate concentrations) measured during short-term (one-cycle) operation. Despite the different strategies in terms of dissolved oxygen (DO) concentrations and aeration and mixing patterns applied for SBR operation, the models showed that in both reactors the PN/A process was the main contributor to nitrogen removal when the availability of organic carbon was low. Indeed, in both SBRs, the activity of nitrite-oxidizing bacteria was inhibited due to high levels of free ammonia, whereas anammox bacteria were active due to the simultaneous presence of ammonium and nitrite and their ability to recover from DO inhibition. When the external carbon addition was increased, a prompt decrease of the anammox biomass was observed, with SPND becoming the main nitrogen removal mechanism. The models were also applied to estimate the production rates of nitrous oxide by aerobic ammonia-oxidizing bacteria and heterotrophic denitrifiers. The models were found to be a robust tool for understanding the effects of different operating conditions (i.e., temperature, cycle phases, DO concentration, external carbon addition) on the nitrogen removal performance of the two reactors, assessing the contribution of the different bacterial groups involved.
30

Johnson, Nicky, Vasant Gandhi, and Dinesh Jain. "Performance Behavior of Participatory Water Institutions in Eastern India: A Study through Structural Equation Modelling." Water 12, no. 2 (2020): 485. http://dx.doi.org/10.3390/w12020485.

Full text
Abstract:
The paper examines the nature and performance of participatory water institutions in eastern India using structural equation modelling. There is a crisis in the management of water in India, and this is often not about having too little water but about managing it poorly. It is now being widely recognized that engineering structures and solutions are not enough, and having effective water institutions is critical. These are urgently needed in eastern India for helping lift the region out of low incomes and poverty. However, creating good institutions is complex, and in this context, the fundamentals of new institutional economics, and management governance theory have suggested the importance of a number of key factors including five institutional features and eight rationalities. Based on this, a study was conducted in eastern India, sampling from the states of Assam and Bihar, covering 510 farm households across 51 water institutions. In order to understand and map the relationship and pathways across these key factors, a structural equation model is hypothesized. In the model, the five institutional features are considered determinants of the eight rationalities, and the rationalities are considered determinants of four performance goals. The performance on the goals determines the overall performance/success of the institution. Besides this, the institutional features and rationalities can also directly influence performance on the goals and the overall performance. The model is tested with data from the survey and different pathways that are robust are identified. The results can provide useful insights into the interlinkages and pathways of institutional behavior and can help policy and institution design for delivering more robust performance. The results show that one of the most important factors determining overall performance/success is technical rationality, and this deserves great attention. 
It includes technical expertise, sound location and quality of structures and equipment, and good maintenance. However, success is also strongly linked to performance on production/income goals, equity, and environment goals. These are, in turn, strongly related to achievement of economic, social, technical, and organizational rationalities, which call for attention to economic aspects such as crop choice and marketing, besides social aspects such as inclusion of women and poorer social groups, and organizational aspects such as member involvement and regular meetings. Further, the institutional features of clear objectives, good interactions, adaptive, correct scale, and compliance are important for achievement of almost all rationalities through various pathways, and should be strongly focused on in all the institutions.
31

Aggarwal, Kush, R. J. Urbanic, and Syed Mohammad Saqib. "Development of predictive models for effective process parameter selection for single and overlapping laser clad bead geometry." Rapid Prototyping Journal 24, no. 1 (2018): 214–28. http://dx.doi.org/10.1108/rpj-04-2016-0059.

Full text
Abstract:
Purpose: The purpose of this work is to explore predictive model approaches for selecting laser cladding process settings for a desired bead geometry/overlap strategy. Complementing the modelling challenges is the development of a framework and methodologies to minimize data collection while maximizing the goodness of fit for the predictive models. This is essential for developing a foundation for metallic additive manufacturing process planning solutions. Design/methodology/approach: Using the coaxial powder flow laser cladding method, 420 steel cladding powder is deposited on low carbon structural steel plates. A design of experiments (DOE) approach is taken using the response surface methodology (RSM) to establish the experimental configuration. The five process parameters, such as laser power and travel speed, are varied to explore their impact on the bead geometry. A total of three replicate experiments are performed and the collected data are assessed using a variety of methods to determine the process trends and the best modelling approaches. Findings: There exist unpredictable, non-linear relationships between the process parameters and the bead geometry. The best fit for a predictive model is achieved with the artificial neural network (ANN) approach. Using the RSM, the experimental set is reduced by an order of magnitude; however, a model with R2 = 0.96 is generated with ANN. The predictive model goodness of fit for a single bead is similar to that for the overlapping bead geometry using ANN. Originality/value: Developing a bead shape to process parameters model is challenging due to the non-linear coupling between the process parameters and the bead geometry and the number of parameters to be considered. The experimental design and modelling approaches presented in this work illustrate how designed experiments can minimize the data collection and produce a robust predictive model. The output of this work will provide a solid foundation for process planning operations.
32

AKHAVAN, R., A. ANSARI, S. KANG, and N. MANGIAVACCHI. "Subgrid-scale interactions in a numerically simulated planar turbulent jet and implications for modelling." Journal of Fluid Mechanics 408 (April 10, 2000): 83–120. http://dx.doi.org/10.1017/s0022112099007582.

Full text
Abstract:
The dynamics of subgrid-scale energy transfer in turbulence is investigated in a database of a planar turbulent jet at Reλ ≈ 110, obtained by direct numerical simulation. In agreement with analytical predictions (Kraichnan 1976), subgrid-scale energy transfer is found to arise from two effects: one involving non-local interactions between the resolved scales and disparate subgrid scales, the other involving local interactions between the resolved and subgrid scales near the cutoff. The former gives rise to a positive, wavenumber-independent eddy-viscosity distribution in the spectral space, and is manifested as low-intensity, forward transfers of energy in the physical space. The latter gives rise to positive and negative cusps in the spectral eddy-viscosity distribution near the cutoff, and appears as intense and coherent regions of forward and reverse transfer of energy in the physical space. Only a narrow band of subgrid wavenumbers, on the order of a fraction of an octave, make the dominant contributions to the latter. A dynamic two-component subgrid-scale model (DTM), incorporating these effects, is proposed. In this model, the non-local forward transfers of energy are parameterized using an eddy-viscosity term, while the local interactions are modelled using the dynamics of the resolved scales near the cutoff. The model naturally accounts for backscatter and correctly predicts the breakdown of the net transfer into forward and reverse contributions in a priori tests. The inclusion of the local-interactions term in DTM significantly reduces the variability of the model coefficient compared to that in pure eddy-viscosity models. This eliminates the need for averaging the model coefficient, making DTM well-suited to computations of complex-geometry flows. The proposed model is evaluated in LES of transitional and turbulent jet and channel flows. 
The results show DTM provides more accurate predictions of the statistics, structure, and spectra than dynamic eddy-viscosity models and remains robust at marginal LES resolutions.
33

Lochhead, I., and N. Hedley. "3D MODELLING IN TEMPERATE WATERS: BUILDING RIGS AND DATA SCIENCE TO SUPPORT GLASS SPONGE MONITORING EFFORTS IN COASTAL BRITISH COLUMBIA." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLIII-B2-2020 (August 12, 2020): 969–76. http://dx.doi.org/10.5194/isprs-archives-xliii-b2-2020-969-2020.

Full text
Abstract:
Abstract. Structure-from-motion (SfM) has emerged as a popular method of characterizing marine benthos in tropical marine environments and could be of tremendous value to glass sponge monitoring and management efforts in the Northeast Pacific Ocean. However, temperate marine environments present a unique set of challenges to SfM workflows, and the combined impact that cold, dark, and turbid waters have on the veracity of SfM derived data must be critically evaluated in order for SfM to become a meaningful tool for ongoing glass sponge research. This paper discusses the design, development, testing, and deployment of an innovative underwater SfM workflow for generating high-resolution 3D models in temperate marine environments. This multi-phase research project (dry-lab, wet-lab, and field), while possibly seen as unconventional, was designed to innovate in two ways. First to build an operational data capture platform to support low-cost SfM-based seafloor surveys. And second, to enable systematic isolation and evaluation of SfM data capture parameters and their implications for representational veracity and data quality. This paper reports the challenges and outcomes from a series of field surveys conducted in Howe Sound, BC, one of which serves as the first of two data sets in a temporal analysis of 3D morphometric change. This research demonstrates that accurate, high-resolution morphometric characterization, of all benthic species and habitats, is dependent on a range of equipment, procedural, and environmental variables. It is also intended to share our applied problem-solving path to successful 3D capture, backed up by robust data science.
34

Jeremiah, Jeremiah J., Samuel J. Abbey, Colin A. Booth, and Anil Kashyap. "Results of Application of Artificial Neural Networks in Predicting Geo-Mechanical Properties of Stabilised Clays—A Review." Geotechnics 1, no. 1 (2021): 147–71. http://dx.doi.org/10.3390/geotechnics1010008.

Full text
Abstract:
This study presents a literature review on the use of artificial neural networks in the prediction of geo-mechanical properties of stabilised clays. In this paper, the application of ANNs in the geotechnical analysis of clay stabilised with cement, lime, geopolymers and by-product cementitious materials has been evaluated. The chemical treatment of expansive clays involves developing optimum binder mix proportions or improving a specific soil property using additives. These procedures often generate large datasets requiring regression analysis in order to correlate experimental data and model the performance of the soil in the field, and such analyses involve tedious mathematical procedures when traditional regression techniques are used. The findings from this study show that ANNs are well suited to mathematical modelling involving nonlinear functions, owing to their robust data analysis and correlation capabilities, and have been successfully applied to the stabilisation of clays with high performance. The study also shows that supervised ANN models handle clay stabilisation problems well, as indicated by high R2 and low MAE, RMSE and MSE values, and that the Levenberg-Marquardt algorithm is effective in shortening the convergence time during model training.
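The goodness-of-fit measures this review relies on (R2, MAE, RMSE, MSE) can be computed directly from observed and predicted values; a small self-contained sketch, independent of any particular ANN:

```python
import math

def fit_metrics(observed, predicted):
    """Return the four fit statistics for paired observed/predicted values."""
    n = len(observed)
    errors = [o - p for o, p in zip(observed, predicted)]
    mse = sum(e * e for e in errors) / n          # mean squared error
    mae = sum(abs(e) for e in errors) / n         # mean absolute error
    rmse = math.sqrt(mse)                         # root mean squared error
    mean_obs = sum(observed) / n
    ss_tot = sum((o - mean_obs) ** 2 for o in observed)
    r2 = 1.0 - (mse * n) / ss_tot                 # coefficient of determination
    return {"MSE": mse, "MAE": mae, "RMSE": rmse, "R2": r2}

m = fit_metrics([1.0, 2.0, 3.0, 4.0], [1.1, 1.9, 3.2, 3.8])
```

A perfect prediction gives R2 = 1 and zero error metrics, which is the direction the reviewed studies use to judge model quality.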
35

Tsai, Ming-Ju, Jyun-Rong Wang, Shinn-Jang Ho, Li-Sun Shu, Wen-Lin Huang, and Shinn-Ying Ho. "GREMA: modelling of emulated gene regulatory networks with confidence levels based on evolutionary intelligence to cope with the underdetermined problem." Bioinformatics 36, no. 12 (2020): 3833–40. http://dx.doi.org/10.1093/bioinformatics/btaa267.

Full text
Abstract:
Abstract. Motivation: Non-linear ordinary differential equation (ODE) models that contain numerous parameters are suitable for inferring an emulated gene regulatory network (eGRN). However, the number of experimental measurements is usually far smaller than the number of parameters of the eGRN model, which leads to an underdetermined problem: there is no unique solution to the inference problem for an eGRN using insufficient measurements. Results: This work proposes an evolutionary modelling algorithm (EMA) that is based on evolutionary intelligence to cope with the underdetermined problem. EMA uses an intelligent genetic algorithm to solve the large-scale parameter optimization problem. An EMA-based method, GREMA, infers a novel type of gene regulatory network with confidence levels for every inferred regulation. The higher the confidence level is, the more accurate the inferred regulation is. GREMA gradually determines the regulations of an eGRN with confidence levels in descending order using either an S-system or a Hill function-based ODE model. The experimental results showed that the regulations with high confidence levels are more accurate and robust than regulations with low confidence levels. Evolutionary intelligence enhanced the mean accuracy of GREMA by 19.2% when using the S-system model with benchmark datasets. An increase in the number of experimental measurements may increase the mean confidence level of the inferred regulations. GREMA performed well compared with existing methods that have been previously applied to the same S-system, DREAM4 challenge and SOS DNA repair benchmark datasets. Availability and implementation: All of the datasets that were used and the GREMA-based tool are freely available at https://nctuiclab.github.io/GREMA. Supplementary information: Supplementary data are available at Bioinformatics online.
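The Hill function-based ODE form mentioned in this abstract can be sketched for a single activator-target pair: the target's production rate saturates in the regulator level, minus first-order degradation. Parameter values here are illustrative only, not values inferred by GREMA.

```python
def hill_activation(x, vmax, k, n):
    """Hill-type production rate of a target driven by regulator level x."""
    return vmax * x ** n / (k ** n + x ** n)

def simulate_target(x_regulator, vmax=2.0, k=1.0, n=2.0, degr=0.5,
                    y0=0.0, dt=0.01, steps=2000):
    """Euler integration of dy/dt = hill(x) - degr * y for constant x."""
    y = y0
    for _ in range(steps):
        y += dt * (hill_activation(x_regulator, vmax, k, n) - degr * y)
    return y

# Steady state approaches hill(x) / degr: a high regulator level
# drives high target expression, a low level drives low expression.
y_low = simulate_target(0.2)
y_high = simulate_target(5.0)
```

Fitting vmax, k, n and degr for every edge of a network from sparse measurements is exactly the large, underdetermined parameter optimization problem that motivates the confidence levels GREMA attaches to each inferred regulation.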
36

Flipo, N., A. Mouhri, B. Labarthe, S. Biancamaria, A. Rivière, and P. Weill. "Continental hydrosystem modelling: the concept of nested stream–aquifer interfaces." Hydrology and Earth System Sciences 18, no. 8 (2014): 3121–49. http://dx.doi.org/10.5194/hess-18-3121-2014.

Full text
Abstract:
Abstract. Coupled hydrological-hydrogeological models, emphasising the importance of the stream–aquifer interface, are more and more used in hydrological sciences for pluri-disciplinary studies aiming at investigating environmental issues. Based on an extensive literature review, stream–aquifer interfaces are described at five different scales: local [10 cm–~10 m], intermediate [~10 m–~1 km], watershed [10 km2–~1000 km2], regional [10 000 km2–~1 M km2] and continental scales [>10 M km2]. This led us to develop the concept of nested stream–aquifer interfaces, which extends the well-known vision of nested groundwater pathways towards the surface, where the mixing of low frequency processes and high frequency processes coupled with the complexity of geomorphological features and heterogeneities creates hydrological spiralling. This conceptual framework allows the identification of a hierarchical order of the multi-scale control factors of stream–aquifer hydrological exchanges, from the larger scale to the finer scale. The hyporheic corridor, which couples the river to its 3-D hyporheic zone, is then identified as the key component for scaling hydrological processes occurring at the interface. The identification of the hyporheic corridor as the support of the hydrological processes scaling is an important step for the development of regional studies, which is one of the main concerns for water practitioners and resources managers. In a second part, the modelling of the stream–aquifer interface at various scales is investigated with the help of the conductance model. Although the usage of the temperature as a tracer of the flow is a robust method for the assessment of stream–aquifer exchanges at the local scale, there is a crucial need to develop innovative methodologies for assessing stream–aquifer exchanges at the regional scale. 
After formulating the conductance model at the regional and intermediate scales, we address this challenging issue with the development of an iterative modelling methodology, which ensures the consistency of stream–aquifer exchanges between the intermediate and regional scales. Finally, practical recommendations are provided for the study of the interface using the innovative methodology MIM (Measurements–Interpolation–Modelling), which is developed graphically, scaling in space the three pools of methods needed to fully understand stream–aquifer interfaces at various scales. In the MIM space, stream–aquifer interfaces that can be studied by a given approach are localised. The efficiency of the method is demonstrated with two examples. The first one proposes an upscaling framework, structured around river reaches of ~10–100 m, from the local to the watershed scale. The second example highlights the usefulness of spaceborne data to improve the assessment of stream–aquifer exchanges at the regional and continental scales. We conclude that further developments in modelling and field measurements have to be undertaken at the regional scale to enable a proper modelling of stream–aquifer exchanges from the local to the continental scale.
APA, Harvard, Vancouver, ISO, and other styles
37

Dawood, Furat, GM Shafiullah, and Martin Anda. "Stand-Alone Microgrid with 100% Renewable Energy: A Case Study with Hybrid Solar PV-Battery-Hydrogen." Sustainability 12, no. 5 (2020): 2047. http://dx.doi.org/10.3390/su12052047.

Full text
Abstract:
A 100% renewable-energy-based stand-alone microgrid system can be developed using robust energy storage systems to stabilize the variable and intermittent renewable energy resources. Hydrogen as an energy carrier and energy storage medium has gained enormous interest globally in recent years. Its use in stand-alone or off-grid microgrids for both urban and rural communities has commenced recently in some locations. Therefore, this research evaluates the techno-economic feasibility of renewable energy-based systems using hydrogen as energy storage for a stand-alone/off-grid microgrid. Three case scenarios in a microgrid environment were identified and investigated in order to select an optimum solution for a remote community by considering the energy balance and techno-economic optimization. The "HOMER Pro" energy modelling and simulation software was used to compare the energy balance, economics and environmental impact amongst the proposed scenarios. The simulation results showed that the hydrogen-battery hybrid energy storage system is the most cost-effective scenario, though all developed scenarios are technically possible and economically comparable in the long run, while each has different merits and challenges. It has been shown that the proposed hybrid energy systems have significant potential for electrifying remote communities with low energy generation costs, as well as contributing to the reduction of their carbon footprint and to ameliorating the energy crisis to achieve a sustainable future.
APA, Harvard, Vancouver, ISO, and other styles
38

GOSWAMI, B. K., and A. N. PISARCHIK. "CONTROLLING MULTISTABILITY BY SMALL PERIODIC PERTURBATION." International Journal of Bifurcation and Chaos 18, no. 06 (2008): 1645–73. http://dx.doi.org/10.1142/s0218127408021257.

Full text
Abstract:
A small perturbation of any system parameter may not, in general, create any significant qualitative change in the dynamics of a multistable system. However, a slow periodic modulation with properly adjusted amplitude and frequency can do so. In particular, it can control the number of coexisting attractors. The basic idea in this controlling mechanism is to introduce a collision between an attractor and its basin boundary. As a consequence, the attractor is destroyed via boundary crisis, and the chaotic transients settle down to an adjacent attractor. These features have been observed first theoretically with the Hénon map and laser rate equations, and then confirmed experimentally with a cavity-loss-modulated CO₂ laser and a pump-modulated fiber laser. The number of coexisting attractors increases as the dissipativity of the system reduces. In the low-dissipative limit, the creation of attractors obeys the predictions of Gavrilov, Shilnikov and Newhouse, when the attractors, referred to as Gavrilov–Shilnikov–Newhouse (GSN) sinks, are created in various period n-tupling processes and remain organized in phase and parameter spaces in a self-similar order. We demonstrate that slow small-amplitude periodic modulation of a system parameter can even destroy these GSN sinks, and the system is suitably converted again to a controllable monostable system. Such a control is robust against small noise as well. We also show the applicability of the method to control multistability in coupled oscillators and multistability induced by delayed feedback. In the latter case, it is possible to annihilate coexisting states by modulating either the feedback variable, a system parameter or the feedback strength.
APA, Harvard, Vancouver, ISO, and other styles
39

Hwangbo, Myung, Jun-Sik Kim, and Takeo Kanade. "Gyro-aided feature tracking for a moving camera: fusion, auto-calibration and GPU implementation." International Journal of Robotics Research 30, no. 14 (2011): 1755–74. http://dx.doi.org/10.1177/0278364911416391.

Full text
Abstract:
When a camera rotates rapidly or shakes severely, a conventional KLT (Kanade–Lucas–Tomasi) feature tracker becomes vulnerable to large inter-image appearance changes. Tracking fails in the KLT optimization step, mainly due to an inadequate initial condition equal to final image warping in the previous frame. In this paper, we present a gyro-aided feature tracking method that remains robust under fast camera–ego rotation conditions. The knowledge of the camera’s inter-frame rotation, obtained from gyroscopes, provides an improved initial warping condition, which is more likely within the convergence region of the original KLT. Moreover, the use of an eight-degree-of-freedom affine photometric warping model enables the KLT to cope with camera rolling and illumination change in an outdoor setting. For automatic incorporation of sensor measurements, we also propose a novel camera/gyro auto-calibration method which can be applied in an in-situ or on-the-fly fashion. Only a set of feature tracks of natural landmarks is needed in order to simultaneously recover intrinsic and extrinsic parameters for both sensors. We provide a simulation evaluation for our auto-calibration method and demonstrate enhanced tracking performance for real scenes with aid from low-cost microelectromechanical system gyroscopes. To alleviate the heavy computational burden required for high-order warping, our publicly available GPU implementation is discussed for tracker parallelization.
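The gyro-aided initialisation described in this abstract can be illustrated with a short sketch. For purely rotational inter-frame motion, the image-to-image warp is the homography H = K·R·K⁻¹; the intrinsics and rotation angle below are invented for illustration and do not come from the paper.

```python
import numpy as np

# Hypothetical camera intrinsics (focal length 800 px, principal point (320, 240)).
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

# Hypothetical inter-frame rotation from the gyroscope: a 5-degree roll
# about the optical axis.
theta = np.deg2rad(5.0)
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])

# For pure rotation, the inter-frame warp is the homography H = K R K^-1.
# Warping each feature window by H gives the improved KLT starting point.
H = K @ R @ np.linalg.inv(K)

def warp_point(x, y):
    """Predict where pixel (x, y) moves under the gyro-predicted warp."""
    v = H @ np.array([x, y, 1.0])
    return v[0] / v[2], v[1] / v[2]
```

Under this pure roll the principal point stays fixed while off-centre features rotate around it, which is exactly the kind of large appearance change that defeats an unaided KLT initial condition.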
APA, Harvard, Vancouver, ISO, and other styles
40

Ogero, Morris, Rachel Sarguta, Lucas Malla, Jalemba Aluvaala, Ambrose Agweyu, and Samuel Akech. "Methodological rigor of prognostic models for predicting in-hospital paediatric mortality in low- and middle-income countries: a systematic review protocol." Wellcome Open Research 5 (May 27, 2020): 106. http://dx.doi.org/10.12688/wellcomeopenres.15955.1.

Full text
Abstract:
Introduction: In low- and middle-income countries (LMICs), where healthcare resources are often limited, making decisions on appropriate treatment choices is critical in ensuring a reduction in paediatric deaths as well as proper utilisation of the already constrained healthcare resources. Well-developed and validated prognostic models can aid in early recognition of potential risks, thus contributing to the reduction of mortality rates. The aim of the planned systematic review is to identify and appraise the methodological rigor of multivariable prognostic models predicting in-hospital paediatric mortality in LMICs, in order to identify statistical and methodological shortcomings deserving special attention and to identify models for external validation. Methods and analysis: This protocol has followed the guidelines of the Preferred Reporting Items for Systematic Reviews and Meta-Analyses for Protocols. A search of articles will be conducted in MEDLINE, Google Scholar, and CINAHL (via EbscoHost) from inception to 2019 without any language restriction. We will also perform a search in Web of Science to identify additional reports that cite the identified studies. Data will be extracted from relevant articles in accordance with the Cochrane Prognosis Methods' guidance: the CHecklist for critical Appraisal and data extraction for systematic Reviews of prediction Modelling Studies. Methodological quality assessment will be performed based on prespecified domains of the Prediction study Risk of Bias Assessment Tool. Ethics and dissemination: Ethical permission will not be required as this study will use published data. Findings from this review will be shared through publication in peer-reviewed scientific journals and presented at conferences. It is our hope that this study will contribute to the development of robust multivariable prognostic models predicting in-hospital paediatric mortality in low- and middle-income countries.
Registration: PROSPERO ID CRD42018088599; registered on 13 February 2018.
APA, Harvard, Vancouver, ISO, and other styles
41

Guizilini, Vitor, and Fabio Ramos. "Learning to reconstruct 3D structures for occupancy mapping from depth and color information." International Journal of Robotics Research 37, no. 13-14 (2018): 1595–609. http://dx.doi.org/10.1177/0278364918783061.

Full text
Abstract:
Real-world scenarios contain many structural patterns that, if appropriately extracted and modeled, can be used to reduce problems associated with sensor failure and occlusions while improving planning methods in such tasks as navigation and grasping. This paper devises a novel unsupervised procedure that models 3D structures from unorganized pointclouds as occupancy maps. Our methodology enables the learning of unique and arbitrarily complex features using a variational Bayesian convolutional auto-encoder, which compresses local information into a latent low-dimensional representation and then decodes it back in order to reconstruct the original scene, including color information when available. This reconstructive model is trained on features obtained automatically from a wide variety of scenarios, in order to improve its generalization and interpolative powers. We show that the proposed framework is able to recover partially missing structures and reason over occlusions with high accuracy while maintaining a detailed reconstruction of observed areas. To combine localized feature estimates seamlessly into a single global structure, we employ the Hilbert maps framework, recently proposed as a robust and efficient occupancy mapping technique, and introduce a new kernel for reproducing kernel Hilbert space projection that uses estimates from the reconstructive model. Experimental tests are conducted with large-scale 2D and 3D datasets, using both laser and monocular data, and a study of the impact of various accuracy–speed trade-offs is provided to assess the limits of the proposed methodology.
APA, Harvard, Vancouver, ISO, and other styles
42

Kaur, Chamandeep, Preeti Singh, and Sukhtej Sahni. "Electroencephalography-Based Source Localization for Depression Using Standardized Low Resolution Brain Electromagnetic Tomography – Variational Mode Decomposition Technique." European Neurology 81, no. 1-2 (2019): 63–75. http://dx.doi.org/10.1159/000500414.

Full text
Abstract:
Background: Electroencephalography (EEG) may be used as an objective tool for diagnosing various disorders. Recently, source localization from EEG has been used in the analysis of real-time brain monitoring applications. However, the inverse problem reduces the accuracy of EEG signal processing systems. Objectives: This paper presents a new method of EEG source localization using variational mode decomposition (VMD) and the standardized low resolution brain electromagnetic tomography (sLORETA) inverse model. The focus is to compare the effectiveness of the proposed approach for EEG signals of depression patients. Method: As the first stage, real EEG recordings corresponding to depression patients are decomposed into various mode functions by applying VMD. Then, closely related functions are analyzed using inverse-modelling-based source localization procedures such as sLORETA. Simulations have been carried out on real EEG databases for depression to demonstrate the effectiveness of the proposed techniques. Results: The performance of the algorithm has been assessed using the localization error (LE), mean square error and output signal-to-noise ratio corresponding to simulated EEG dipole sources and real EEG signals for depression. In order to study the spatial resolution of the cortical potential distribution, the main focus has been on studying the effects of noise sources and estimating the LE of inverse solutions. More accurate and robust localization results show that this methodology is very promising for EEG source localization of depression signals. Conclusion: It can be said that the proposed algorithm efficiently suppresses the influence of noise in the EEG inverse problem, as shown using simulated EEG activity and an EEG database for depression. Such a system may offer an effective solution for clinicians as a crucial stage of EEG pre-processing in automated depression detection systems and may prevent delay in diagnosis.
APA, Harvard, Vancouver, ISO, and other styles
43

Bahloul, R., Phillippe dal Santo, Ali Mkaddem, and A. Potiron. "Optimisation of Springback Predicted by Experimental and Numerical Approach by Using Response Surface Methodology." Advanced Materials Research 6-8 (May 2005): 753–62. http://dx.doi.org/10.4028/www.scientific.net/amr.6-8.753.

Full text
Abstract:
Bending is of significant importance in the sheet metal product industry. Moreover, the springback of sheet metal should be taken into consideration in order to produce bent sheet metal parts within acceptable tolerance limits and to control geometrical variation in the manufacturing process. Nowadays, the importance of this problem increases because of the use of sheet-metal parts with high mechanical characteristics (High Strength Low Alloy steel). This work describes robust methods of predicting, via 3D modelling, the springback of parts subjected to bending and unbending deformations. The effects of tool geometry on the final shape after springback are also discussed. The first part of this paper presents the laboratory experiments in wiping die bending, in which the influence of process variables, such as die shoulder radius, punch-die clearance, punch nose radius and material properties, is discussed. The second part summarises the finite element analysis performed using ABAQUS software and compares these results with some experimental data. It appeared that the final results of the FEM simulation are in good agreement with the experimental ones. An optimisation methodology based on the use of the experimental design method and the response surface technique is proposed in the third part of this paper. This makes it possible to obtain the optimum value of the clearance between the punch and the die and the optimum die radius, which can reduce the springback without cracking and damage of the product.
APA, Harvard, Vancouver, ISO, and other styles
44

Loi, Shyeh Tjing. "Topology and obliquity of core magnetic fields in shaping seismic properties of slowly rotating evolved stars." Monthly Notices of the Royal Astronomical Society 504, no. 3 (2021): 3711–29. http://dx.doi.org/10.1093/mnras/stab991.

Full text
Abstract:
ABSTRACT It is thought that magnetic fields must be present in the interiors of stars to resolve certain discrepancies between theory and observation (e.g. angular momentum transport), but such fields are difficult to detect and characterize. Asteroseismology is a powerful technique for inferring the internal structures of stars by measuring their oscillation frequencies, and succeeds particularly with evolved stars, owing to their mixed modes, which are sensitive to the deep interior. The goal of this work is to present a phenomenological study of the combined effects of rotation and magnetism in evolved stars, where both are assumed weak enough that first-order perturbation theory applies, and we focus on the regime where Coriolis and Lorentz forces are comparable. Axisymmetric ‘twisted-torus’ field configurations are used, which are confined to the core and allowed to be misaligned with respect to the rotation axis. Factors such as the field radius, topology and obliquity are examined. We observe that fields with finer-scale radial structure and/or smaller radial extent produce smaller contributions to the frequency shift. The interplay of rotation and magnetism is shown to be complex: we demonstrate that it is possible for nearly symmetric multiplets of apparently low multiplicity to arise even under a substantial field, which might falsely appear to rule out its presence. Our results suggest that proper modelling of rotation and magnetism, in a simultaneous fashion, may be required to draw robust conclusions about the existence/non-existence of a core magnetic field in any given object.
APA, Harvard, Vancouver, ISO, and other styles
45

Smith, Katie A., Lucy J. Barker, Maliko Tanguy, et al. "A multi-objective ensemble approach to hydrological modelling in the UK: an application to historic drought reconstruction." Hydrology and Earth System Sciences 23, no. 8 (2019): 3247–68. http://dx.doi.org/10.5194/hess-23-3247-2019.

Full text
Abstract:
Abstract. Hydrological models can provide estimates of streamflow pre- and post-observations, which enable greater understanding of past hydrological behaviour, and of potential futures. In this paper, a new multi-objective calibration method was derived and tested for 303 catchments in the UK, and the calibrations were used to reconstruct river flows back to 1891, in order to provide a much longer view of past hydrological variability, given the brevity of most UK river flow records, which began post-1960. A Latin hypercube sample of 500 000 parameterisations of the GR4J model for each catchment was evaluated against six evaluation metrics covering all aspects of the flow regime: high, median, and low flows. The results of the top-ranking model parameterisation (LHS1), and also the top 500 (LHS500), for each catchment were used to provide a deterministic result whilst also accounting for parameter uncertainty. The calibrations are generally good at capturing observed flows, with some exceptions in heavily groundwater-dominated catchments, and in snowmelt and artificially influenced catchments across the country. Reconstructed flows were appraised over 30-year moving windows and were shown to provide good simulations of flow in the early parts of the record, in cases where observations were available. To consider the utility of the reconstructions for drought simulation, flow data for the 1975–1976 drought event were explored in detail in nine case study catchments. The model's performance in reproducing the drought events was found to vary by catchment, as did the level of uncertainty in the LHS500. The Standardised Streamflow Index (SSI) was used to assess the model simulations' ability to simulate extreme events.
The peaks and troughs of the SSI time series were well represented despite slight over- or underestimations of past drought event magnitudes, while the accumulated deficits of the drought events extracted from the SSI time series verified that the model simulations were overall very good at simulating drought events. This paper provides three key contributions: (1) a robust multi-objective model calibration framework for calibrating catchment models for use in both general and extreme hydrology; (2) model calibrations for the 303 UK catchments that could be used in further research, and operational applications such as hydrological forecasting; and (3) ∼ 125 years of spatially and temporally consistent reconstructed flow data that will allow comprehensive quantitative assessments of past UK drought events, as well as long-term analyses of hydrological variability that have not been previously possible, thus enabling water resource managers to better plan for extreme events and build more resilient systems for the future.
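The Latin hypercube evaluation described above can be sketched in a few lines. The parameter bounds, the toy objective, and the reduced sample size are all placeholders rather than values from the paper; `scipy.stats.qmc` supplies the Latin hypercube design.

```python
import numpy as np
from scipy.stats import qmc

# Hypothetical bounds for four GR4J-style parameters (illustrative only).
lower = np.array([10.0, -5.0, 1.0, 0.5])
upper = np.array([1200.0, 3.0, 500.0, 4.0])

sampler = qmc.LatinHypercube(d=4, seed=0)
params = qmc.scale(sampler.random(n=500), lower, upper)  # the paper uses 500 000

def score(p):
    """Stand-in for running the hydrological model and combining the six
    evaluation metrics into a single rank (lower is better here)."""
    return float(np.sum(((p - lower) / (upper - lower) - 0.5) ** 2))

scores = np.array([score(p) for p in params])
order = np.argsort(scores)
lhs1 = params[order[0]]        # best single parameterisation ("LHS1")
ensemble = params[order[:50]]  # top-ranked ensemble (analogue of "LHS500")
```

Keeping both the single best parameter set and a ranked ensemble is what lets the study report a deterministic reconstruction while still quantifying parameter uncertainty.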
APA, Harvard, Vancouver, ISO, and other styles
46

Runya, Robert Mzungu, Chris McGonigle, Rory Quinn, et al. "Examining the Links between Multi-Frequency Multibeam Backscatter Data and Sediment Grain Size." Remote Sensing 13, no. 8 (2021): 1539. http://dx.doi.org/10.3390/rs13081539.

Full text
Abstract:
Acoustic methods are routinely used to provide broad-scale information on the geographical distribution of benthic marine habitats and sedimentary environments. Although single-frequency multibeam echosounder surveys have dominated seabed characterisation for decades, multifrequency approaches are now gaining favour in order to capture different frequency responses from the same seabed type. The aim of this study is to develop a robust modelling framework for testing the potential application and value of multifrequency (30, 95, and 300 kHz) multibeam backscatter responses for characterising sediment grain size in an area with strong geomorphological gradients and benthic ecological variability. We fit a generalized linear model to the multibeam backscatter and its derivatives to examine the explanatory power of single-frequency and multifrequency models with respect to the mean sediment grain size obtained from the grab samples. A strong and statistically significant (p < 0.05) correlation between the mean backscatter and the absolute values of the mean sediment grain size was noted. The root mean squared error (RMSE) values identified the 30 kHz model as the best-performing model, explaining the most variation (84.3%) in the mean grain size, with a statistically significant fit (p < 0.05) and an adjusted r² = 0.82. Overall, the single low-frequency sources showed a marginal gain over the multifrequency model, with the 30 kHz model driving the significance of the multifrequency model, while the inclusion of the higher frequencies diminished the level of agreement. We recommend further detailed and sufficient ground-truth data to better predict sediment properties and to discriminate benthic habitats, in order to enhance the reliability of multifrequency backscatter data for the monitoring and management of marine protected areas.
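As a minimal illustration of comparing single-frequency backscatter models by RMSE and adjusted r², the sketch below fits an ordinary least-squares line (a simple special case of the generalized linear model used in the paper) to synthetic data; every number here is invented, not taken from the study.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic stand-ins: 30 kHz mean backscatter (dB) at 40 grab-sample sites,
# and a mean grain size correlated with it; none of these values are real.
n = 40
backscatter = rng.normal(-25.0, 4.0, n)
grain_size = 0.5 * backscatter + rng.normal(0.0, 0.5, n)

def fit_metrics(x, y, n_predictors=1):
    """Least-squares fit of y ~ 1 + x; return (RMSE, adjusted r^2)."""
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    rmse = float(np.sqrt(np.mean(resid ** 2)))
    r2 = 1.0 - np.sum(resid ** 2) / np.sum((y - y.mean()) ** 2)
    r2_adj = 1.0 - (1.0 - r2) * (len(y) - 1) / (len(y) - n_predictors - 1)
    return rmse, float(r2_adj)

rmse_30, r2_adj_30 = fit_metrics(backscatter, grain_size)
```

Repeating `fit_metrics` for each frequency (and for a multi-predictor design matrix) reproduces the kind of single- versus multifrequency comparison the abstract describes.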
APA, Harvard, Vancouver, ISO, and other styles
47

Harnois-Déraps, Joachim, Nicolas Martinet, Tiago Castro, et al. "Cosmic shear cosmology beyond two-point statistics: a combined peak count and correlation function analysis of DES-Y1." Monthly Notices of the Royal Astronomical Society 506, no. 2 (2021): 1623–50. http://dx.doi.org/10.1093/mnras/stab1623.

Full text
Abstract:
ABSTRACT We constrain cosmological parameters from a joint cosmic shear analysis of peak counts and the two-point shear correlation functions, as measured from the Dark Energy Survey (DES-Y1). We find the structure growth parameter $S_8 \equiv \sigma_8\sqrt{\Omega_{\rm m}/0.3} = 0.766^{+0.033}_{-0.038}$ which, at 4.8 per cent precision, provides one of the tightest constraints on $S_8$ from the DES-Y1 weak lensing data. In our simulation-based method we determine the expected DES-Y1 peak-count signal for a range of cosmologies sampled in four $w$CDM ($w$ cold dark matter) parameters ($\Omega_{\rm m}$, $\sigma_8$, $h$, $w_0$). We also determine the joint covariance matrix with over 1000 realizations at our fiducial cosmology. With mock DES-Y1 data we calibrate the impact of photometric redshift and shear calibration uncertainty on the peak count, marginalizing over these uncertainties in our cosmological analysis. Using dedicated training samples we show that our measurements are unaffected by mass resolution limits in the simulation, and that our constraints are robust against uncertainty in the effect of baryon feedback. Accurate modelling of the impact of intrinsic alignments on the tomographic peak count remains a challenge, currently limiting our exploitation of cross-correlated peak counts between high and low redshift bins. We demonstrate that, once calibrated, a fully tomographic joint peak-count and correlation function analysis has the potential to reach a 3 per cent precision on $S_8$ for DES-Y1. Our methodology can be adopted to model any statistic that is sensitive to the non-Gaussian information encoded in the shear field. In order to accelerate the development of these beyond-two-point cosmic shear studies, our simulations are made available to the community upon request.
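The quoted constraint uses the standard definition of the structure growth parameter, S8 = σ8·√(Ωm/0.3); a one-line function makes the scaling explicit (the 0.3 normalisation is part of the definition, not a fitted value):

```python
import math

def s8(sigma8, omega_m):
    """Structure growth parameter: S8 = sigma8 * sqrt(Omega_m / 0.3)."""
    return sigma8 * math.sqrt(omega_m / 0.3)

# At the normalisation point Omega_m = 0.3 the two parameters coincide.
print(s8(0.80, 0.30))  # → 0.8
```

The normalisation is chosen so that S8 is close to σ8 for plausible matter densities, which is why weak-lensing surveys quote S8 as their best-constrained combination.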
APA, Harvard, Vancouver, ISO, and other styles
48

Calcutt, H., E. R. Willis, J. K. Jørgensen, et al. "The ALMA-PILS survey: propyne (CH3CCH) in IRAS 16293–2422." Astronomy & Astrophysics 631 (November 2019): A137. http://dx.doi.org/10.1051/0004-6361/201936323.

Full text
Abstract:
Context. Propyne (CH3CCH), also known as methyl acetylene, has been detected in a variety of environments, from Galactic star-forming regions to extragalactic sources. This molecule is an excellent tracer of the physical conditions in star-forming regions, allowing the temperature and density conditions surrounding a forming star to be determined. Aims. This study explores the emission of CH3CCH in the low-mass protostellar binary IRAS 16293–2422, and examines the spatial scales traced by this molecule, as well as its formation and destruction pathways. Methods. Atacama Large Millimeter/submillimeter Array (ALMA) observations from the Protostellar Interferometric Line Survey (PILS) were used to determine the abundances and excitation temperatures of CH3CCH towards both protostars, allowing us to explore spatial scales from 70 to 2400 au. These data are also compared with the three-phase chemical kinetics model MAGICKAL, to explore the chemical reactions of this molecule. Results. CH3CCH is detected towards both IRAS 16293A and IRAS 16293B, and is found in the hot corino components, one around each source, in the PILS dataset. Eighteen transitions above 3σ are detected, enabling robust excitation temperatures and column densities to be determined for each source. In IRAS 16293A, an excitation temperature of 90 K and a column density of 7.8 × 10¹⁵ cm⁻² best fit the spectra. In IRAS 16293B, an excitation temperature of 100 K and a column density of 6.8 × 10¹⁵ cm⁻² best fit the spectra. The chemical modelling finds that, in order to reproduce the observed abundances, both gas-phase and grain-surface reactions are needed. The gas-phase reactions are particularly sensitive to the temperature at which CH4 desorbs from the grains. Conclusions. CH3CCH is a molecule whose brightness and abundance in many different regions can be utilised to provide a benchmark of molecular variation with the physical properties of star-forming regions.
It is essential, when making such comparisons, that the abundances are determined with a good understanding of the spatial scale of the emitting region, to ensure that accurate abundances are derived.
APA, Harvard, Vancouver, ISO, and other styles
49

Sorbie, K. S., A. Y. Al Ghafri, A. Skauge, and E. J. Mackay. "On the Modelling of Immiscible Viscous Fingering in Two-Phase Flow in Porous Media." Transport in Porous Media 135, no. 2 (2020): 331–59. http://dx.doi.org/10.1007/s11242-020-01479-w.

Full text
Abstract:
Viscous fingering in porous media is an instability which occurs when a low-viscosity injected fluid displaces a much more viscous resident fluid, under miscible or immiscible conditions. Immiscible viscous fingering is more complex, has been found difficult to simulate numerically, and is the main focus of this paper. Many researchers have identified the source of the problem of simulating realistic immiscible fingering as being in the numerics of the process, and a large number of studies have appeared applying high-order numerical schemes to the problem with some limited success. We believe that this view is incorrect and that the solution to the problem of modelling immiscible viscous fingering lies in the physics and related mathematical formulation of the problem. At the heart of our approach is what we describe as the resolution of the "M-paradox", where M is the mobility ratio, as explained below. In this paper, we present a new 4-stage approach to the modelling of realistic two-phase immiscible viscous fingering by (1) formulating the problem based on the experimentally observed fractional flows in the fingers, which we denote as $f_{\rm w}^{*}$, and which is the chosen simulation input; (2) from the infinite choice of relative permeability (RP) functions, $k_{\rm rw}^{*}$ and $k_{\rm ro}^{*}$, which yield the same $f_{\rm w}^{*}$, choosing the set which maximises the total mobility function, $\lambda_{\rm T}$ (where $\lambda_{\rm T} = \lambda_{\rm o} + \lambda_{\rm w}$), i.e. minimises the pressure drop across the fingering system; (3) choosing the permeability structure of the heterogeneous domain (the porous medium), based in this case on a random correlated field (RCF); and finally, (4) using a sufficiently fine numerical grid, but with simple transport numerics.
Using our approach, realistic immiscible fingering can be simulated using elementary numerical methods (e.g. single-point upstreaming) for the solution of the two-phase fluid transport equations. The method is illustrated by simulating the type of immiscible viscous fingering observed in many experiments in 2D slabs of rock where water displaces very viscous oil, with an oil/water viscosity ratio of $(\mu_{\rm o}/\mu_{\rm w}) = 1600$. Simulations are presented for two example cases, for different levels of water saturation in the main viscous finger (i.e. for two different underlying $f_{\rm w}^{*}$ functions), producing very realistic fingering patterns which are qualitatively similar to observations in several respects, as discussed. Additional simulations of tertiary polymer flooding are also presented, for which good experimental data are available for displacements in 2D rock slabs (Skauge et al., SPE Improved Oil Recovery Symposium, 14–18 April, Tulsa, Oklahoma, USA, SPE-154292-MS, 2012, 10.2118/154292-MS; EAGE 17th European Symposium on Improved Oil Recovery, St. Petersburg, Russia, 2013; Vik et al., SPE Europec featured at 80th EAGE Conference and Exhibition, Copenhagen, Denmark, SPE-190866-MS, 2018, 10.2118/190866-MS). The finger patterns for the polymer displacements and the magnitude and timing of the oil displacement response show excellent qualitative agreement with experiment, and indeed they fully explain the observations in terms of an enhanced viscous crossflow mechanism (Sorbie and Skauge, Proceedings of the EAGE 20th Symposium on IOR, Pau, France, 2019). As a sensitivity, we also present some example results where the adjusted fractional flow ($f_{\rm w}^{*}$) can give a chosen frontal shock saturation, $S_{\rm wf}^{*}$, but at different frontal mobility ratios, $M(S_{\rm wf}^{*})$.
Finally, two tests of the robustness of the method are presented, on the effects of rescaling the permeability field and of grid coarsening. It is demonstrated that our approach is very robust both to permeability field rescaling, i.e. where the $(k_{\max}/k_{\min})$ ratio in the RCF goes from 100 to 3, and under numerical grid coarsening.
APA, Harvard, Vancouver, ISO, and other styles
50

Blaalid, Rakel, Kristin Magnussen, Nina Bruvik Westberg, and Ståle Navrud. "A benefit-cost analysis framework for prioritization of control programs for well-established invasive alien species." NeoBiota 68 (August 24, 2021): 31–52. http://dx.doi.org/10.3897/neobiota.68.62122.

Full text
Abstract:
Invasive alien species (IAS) are identified as a major threat to biodiversity and ecosystem services. While early detection and control programs to avoid establishment of new alien species can be very cost-effective, control costs for well-established species can be enormous. Many of these well-established species have severe or high ecological impact and are thus likely to be included in control programs. However, due to limited funds, we need to prioritize which species to control according to the gains in ecological status and human well-being compared to the costs. Benefit-Cost Analysis (BCA) provides such a tool but has been hampered by the difficulties in assessing the overall social benefits on the same monetary scale as the control costs. In order to overcome this obstacle, we combine a non-monetary benefit assessment tool with the ecosystem service framework to create a benefit assessment in line with the welfare economic underpinnings of BCA. Our simplified BCA prioritization tool enables us to conduct rapid and cheap appraisals of large numbers of invasive species that the Norwegian Biodiversity Information Centre has found to cause negative ecological impacts. We demonstrate this application on 30 well-established invasive alien vascular plant species in Norway. Social benefits are calculated and aggregated on a benefit point scale for six impact categories: four types of ecosystem services (supporting, provisioning, regulating and cultural), human health and infrastructure impacts. Total benefit points are then compared to the total control costs of programs aiming at eradicating individual IAS across Norway or in selected vulnerable ecosystems. Although there are uncertainties with regard to IAS population size, benefits assessment, and control program effectiveness and costs, our simplified BCA tool identified six species associated with robustly low cost-benefit ratios in terms of control costs (in million USD) per benefit point.
As a large share of public funds for eradication of IAS is currently spent on control programs for other plant species, we recommend that the environmental authorities at all levels use our BCA prioritization tool to increase the social benefits of their limited IAS control budgets. In order to maximize the net social benefits of IAS control programs, environmental valuation studies of their ecosystem service benefits are needed.
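The prioritisation step described above, ranking species by control cost (in million USD) per aggregated benefit point, can be sketched as follows. All species names, costs and benefit points here are hypothetical placeholders, not values from the Norwegian appraisal of the 30 real species.

```python
# Simplified BCA prioritisation sketch: rank candidate IAS control programs
# by cost per benefit point (lower ratio = better candidate for funding).
# Every number below is invented for illustration only.
species = {
    # name: (control cost in million USD, total benefit points across the
    #        six impact categories: four ecosystem services, health, infrastructure)
    "species_A": (12.0, 48),
    "species_B": (3.5, 40),
    "species_C": (25.0, 15),
    "species_D": (1.2, 30),
}

def cost_benefit_ratio(entry):
    cost, points = entry
    return cost / points  # million USD per benefit point

# Sort ascending: species delivering the most benefit per dollar come first.
ranked = sorted(species.items(), key=lambda kv: cost_benefit_ratio(kv[1]))
for name, (cost, points) in ranked:
    print(f"{name}: {cost / points:.3f} MUSD per benefit point")
```

A budget-constrained authority would then fund programs from the top of this ranking until the control budget is exhausted, which is the logic the paper recommends for allocating limited IAS funds.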
APA, Harvard, Vancouver, ISO, and other styles