To see the other types of publications on this topic, follow the link: Non-attractive set of initial approximations.

Journal articles on the topic 'Non-attractive set of initial approximations'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 journal articles for your research on the topic 'Non-attractive set of initial approximations.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles in a wide variety of disciplines and organise your bibliography correctly.

1

Malinova, Anna, Angel Golev, Anton Iliev, and Nikolay Kyurkchiev. "SOME NOTES ON THE FAST ADAPTIVE NEURAL NETWORK SOLVERS." Global Journal of Engineering Science and Research Management 4, no. 10 (2017): 121–31. https://doi.org/10.5281/zenodo.1034497.

Full text
Abstract:
In this paper, we discuss some important aspects related to the iterative solution of two classes of polynomials and nonlinear systems of equations, and the "FAST adaptive neural solver" (FANS) adapted to them. The crucial issue of choosing initial approximations (separating non-attractive sets of initial data) and the possibility of minimizing CPU time with the use of existing FANS are discussed.
APA, Harvard, Vancouver, ISO, and other styles
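The entry above concerns separating non-attractive sets of initial approximations, i.e. starting points from which an iteration fails to converge. As a minimal illustration of the idea (not the paper's FANS solver), the sketch below scans initial guesses for Newton's method on a polynomial whose real iteration has a known attracting 2-cycle; the polynomial, grid, and tolerances are our own choices.

```python
import numpy as np

def newton(f, df, x0, tol=1e-10, max_iter=50):
    """Newton iteration; returns (x, True) on convergence, (x, False) otherwise."""
    x = x0
    for _ in range(max_iter):
        dfx = df(x)
        if abs(dfx) < 1e-14:      # near-flat spot: the iteration breaks down
            return x, False
        step = f(x) / dfx
        x -= step
        if abs(step) < tol:
            return x, True
    return x, False

# p(z) = z^3 - 2z + 2: its real Newton map has a superattracting 2-cycle {0, 1},
# so a whole neighbourhood of these points is non-attractive for root-finding.
p  = lambda z: z**3 - 2*z + 2
dp = lambda z: 3*z**2 - 2

grid = np.linspace(-2.0, 2.0, 401)
non_attractive = [x0 for x0 in grid if not newton(p, dp, x0)[1]]
print(f"{len(non_attractive)} of {grid.size} initial guesses fail to converge")
```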
2

Pelekh, Ya M., A. V. Kunynets, and R. Ya Pelekh. "TWO-SIDED METHODS FOR SOLVING INITIAL VALUE PROBLEM FOR NONLINEAR INTEGRO-DIFFERENTIAL EQUATIONS." Journal of Numerical and Applied Mathematics, no. 2 (2022): 116–21. http://dx.doi.org/10.17721/2706-9699.2022.2.13.

Full text
Abstract:
Using continued fractions and the method of constructing Runge-Kutta methods, numerical methods for solving the Cauchy problem for nonlinear Volterra integro-differential equations are proposed. With appropriate values of the parameters, one can obtain approximations to the exact solution of first and second order of accuracy. We found a set of parameters for which we obtain two-sided calculation formulas, which at each step of integration allow one to obtain upper and lower approximations of the exact solution.
APA, Harvard, Vancouver, ISO, and other styles
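The two-sided formulas in the entry above bracket the exact solution from below and above at every integration step. A far simpler instance of the same idea, assuming nothing from the paper, is the linear decay problem y' = -λy, where an explicit Euler step is a per-step lower bound and an implicit Euler step an upper bound:

```python
import numpy as np

lam, y0, h, n = 2.0, 1.0, 0.05, 40

# Per-step factors for y' = -lam*y satisfy, for 0 < lam*h < 1:
#   1 - lam*h  <=  exp(-lam*h)  <=  1/(1 + lam*h),
# so explicit Euler runs below the exact solution and implicit Euler above it.
lower, upper = y0, y0
for _ in range(n):
    lower *= 1 - lam * h          # explicit Euler step (lower bound)
    upper /= 1 + lam * h          # implicit Euler step (upper bound)

exact = y0 * np.exp(-lam * h * n)
print(f"lower={lower:.6f}  exact={exact:.6f}  upper={upper:.6f}")
assert lower <= exact <= upper
```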
3

Wang, Jinghua, and Hui Zhang. "Existence and decay rates of smooth solutions for a non-uniformly parabolic equation." Proceedings of the Royal Society of Edinburgh: Section A Mathematics 132, no. 6 (2002): 1477–91. http://dx.doi.org/10.1017/s0308210500002213.

Full text
Abstract:
We obtain the existence and decay rates of the classical solution to the initial-value problem of a non-uniformly parabolic equation. Our method is to set up two equivalent sequences of the successive approximations. One converges to a weak solution of the initial-value problem; the other shows that the weak solution is the classical solution for t > 0. Moreover, we show how bounds of the derivatives to the classical solution depend explicitly on the interval with compact support in (0, ∞). Then we study decay rates of this classical solution.
APA, Harvard, Vancouver, ISO, and other styles
4

Hayes, Brian, and Michael Shearer. "Undercompressive shocks and Riemann problems for scalar conservation laws with non-convex fluxes." Proceedings of the Royal Society of Edinburgh: Section A Mathematics 129, no. 4 (1999): 733–54. http://dx.doi.org/10.1017/s0308210500013111.

Full text
Abstract:
The Riemann initial value problem is studied for scalar conservation laws whose fluxes have a single inflection point. For a regularization consisting of balanced diffusive and dispersive terms, the travelling wave criterion is used to select admissible shocks. In some cases, the Riemann problem solution contains an undercompressive shock. The analysis is illustrated by exploring parameter space for the Buckley–Leverett flux. The boundary of the set of parameters for which there is a physical solution of the Riemann problem for all data is computed. Within the region of acceptable parameters, the solution has several different forms, depending on the initial data; the different forms are illustrated by numerical computations. Qualitatively similar behaviour is observed in Lax–Wendroff approximations of solutions of the Buckley–Leverett equation with no dissipation or dispersion.
APA, Harvard, Vancouver, ISO, and other styles
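The abstract's closing remark refers to Lax–Wendroff computations of the Buckley–Leverett equation without dissipation or dispersion. A minimal sketch of such a computation is given below, using the Richtmyer two-step form of Lax–Wendroff on Riemann data; the flux (equal mobility ratio), grid, and CFL number are illustrative choices, not the paper's.

```python
import numpy as np

def f(u):                          # Buckley-Leverett flux with equal mobilities
    return u**2 / (u**2 + (1 - u)**2)

nx, L, t_end = 400, 2.0, 0.4
dx = L / nx
x = np.linspace(-L/2, L/2, nx)
u = np.where(x < 0, 1.0, 0.0)      # Riemann initial data u_L = 1, u_R = 0

dt = 0.4 * dx / 2.5                # CFL step; |f'(u)| stays below ~2.5 on [0, 1]
t = 0.0
while t < t_end:                   # Richtmyer two-step Lax-Wendroff
    um = 0.5*(u[:-1] + u[1:]) - dt/(2*dx) * (f(u[1:]) - f(u[:-1]))  # half step
    u[1:-1] -= dt/dx * (f(um[1:]) - f(um[:-1]))                     # full step
    t += dt

print("profile samples:", np.round(u[::50], 3))
```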
5

Guseva, S. R., and D. N. Ibragimov. "A priori estimation of the minimal stabilization time for linear discrete-time systems with bounded control based on the apparatus of eigensets." Modelling and Data Analysis 15, no. 1 (2025): 110–32. https://doi.org/10.17759/mda.2025150106.

Full text
Abstract:
A linear system with discrete time and bounded control is considered. It is assumed that the system matrix is non-singular and diagonalizable, and the set of admissible control values is convex and compact. For a given system, the time-optimization problem is studied. In particular, it is required to construct a priori estimates of the optimal value of the minimal time as a function of the initial state and system parameters that do not require an exact construction of the class of null-controllable sets. To solve the problem, an apparatus of eigensets of a linear transformation is developed, and basic properties of non-trivial eigensets are formulated and proven. For the simplest case, when the set of admissible control values is a non-trivial eigenset of the system matrix, the response time function for a given initial state is constructed explicitly. For an arbitrary control system, a method is proposed for reducing to the simplest case by constructing internal and external approximations of a set of constraints on control values. Numerical calculations are presented demonstrating the efficiency and accuracy of the developed technique.
APA, Harvard, Vancouver, ISO, and other styles
6

Trębicki, Grzegorz. "Merits of Fantastic Literature: A Proposal for Theoretical Framework." Literatura i Kultura Popularna 28 (October 6, 2022): 107–16. http://dx.doi.org/10.19195/0867-7441.28.8.

Full text
Abstract:
The paper attempts to lay the groundwork for a comprehensive debate on the merits of non-mimetic (fantastic) literature, tentatively proposing a system of four basic merits: the entertaining/fabulative, the emotional-cognitive, the speculative/extrapolative, and the aesthetic. These merits can also be viewed as narrative functions, precisely defined presuppositions, agreements between the implied reader and writer as to what literary gratifications the reader can expect from the reading experience. They constitute structural dominants which are well manifested in the texts themselves. The author's proposals should be read merely as certain initial approximations, albeit hopefully useful ones, capable of inspiring further debate.
APA, Harvard, Vancouver, ISO, and other styles
7

Shevchenko, I. I. "Using sequential particle methods and non-parametric distributions in bayesian evaluations of abundance and catch at age time series." Problems of Fisheries 24, no. 1 (2023): 99–116. http://dx.doi.org/10.36038/0234-2774-2023-24-1-99-116.

Full text
Abstract:
We describe an approach to analyzing time series for two variables connected through a state model, with abundance and catch data sets and cohort and catch equations as an example. First, we create a deterministic model with parameters that maximize the closeness of the given data and the data generated by the model. Then, we obtain cohort stochastic models using the difference between the initial and modeled data. They are represented as hidden Bayesian models with abundances as states and catches as observations. Using these models, one can evaluate posterior densities and calculate averages, deviations, etc. As a general matter, the recursive equations satisfied by the posterior densities have no analytic solutions. We describe several particle methods that may be used for density approximations and subsequent calculations of their statistical quantities. All generated sample densities are smoothed with non-parametric kernel density estimation. The Fishmetica package was extended with functions for generating samples and weights for filtering, predicting and smoothing densities. Numerical simulations were conducted for a test data set. Several extensions of the approach are proposed, including an additional option for comparing the basic models with the use of a likelihood function.
APA, Harvard, Vancouver, ISO, and other styles
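The entry above uses sequential particle methods on a hidden state-space model. The sketch below is a generic bootstrap particle filter on a toy linear-Gaussian model standing in for the paper's cohort/catch equations; all model constants are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy state-space model (stand-in for abundance/catch dynamics):
#   state:       x_k = 0.95 * x_{k-1} + N(0, 0.1)
#   observation: y_k = x_k + N(0, 0.2)
T, N = 50, 1000
x_true, y = np.zeros(T), np.zeros(T)
for k in range(1, T):
    x_true[k] = 0.95 * x_true[k-1] + rng.normal(0, 0.1)
    y[k] = x_true[k] + rng.normal(0, 0.2)

# Bootstrap particle filter: propagate, weight by likelihood, resample.
particles = rng.normal(0, 1, N)
est = np.zeros(T)
for k in range(1, T):
    particles = 0.95 * particles + rng.normal(0, 0.1, N)   # propagate
    w = np.exp(-0.5 * ((y[k] - particles) / 0.2)**2)       # Gaussian likelihood
    w /= w.sum()
    est[k] = np.dot(w, particles)                          # posterior mean
    particles = rng.choice(particles, size=N, p=w)         # resample

print("filter RMSE:", np.sqrt(np.mean((est - x_true)**2)))
```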
8

Almgren, A., R. Camassa, and R. Tiron. "Shear instability of internal solitary waves in Euler fluids with thin pycnoclines." Journal of Fluid Mechanics 710 (August 29, 2012): 324–61. http://dx.doi.org/10.1017/jfm.2012.366.

Full text
Abstract:
The stability with respect to initial condition perturbations of solitary travelling-wave solutions of the Euler equations for continuously, stably stratified, near two-layer fluids is examined numerically and analytically for a set of parameters of relevance for laboratory experiments. Numerical travelling-wave solutions of the Dubreil–Jacotin–Long equation are first obtained with a variant of Turkington, Eyland and Wang’s iterative code by testing convergence on the equation’s residual. In this way, stationary solutions with very thin pycnoclines (and small Richardson numbers) approaching the near two-layer configurations used in experiments can be obtained, allowing for a stability study free of non-stationary effects, introduced by lack of numerical resolution, which develop when these solutions are used as initial conditions in a time-dependent evolution code. The thin pycnoclines in this study permit analytical results to be derived from strongly nonlinear models and their predictions compared with carefully controlled numerical simulations. This brings forth shortcomings of simple criteria for shear instability manifestations based on parallel shear approximations due to subtle higher-order effects. In particular, evidence is provided that the fore–aft asymmetric growth observed in all simulations requires non-parallel shear analysis. Collectively, the results of this study reveal that while the wave-induced shear can locally reach unstable configurations and give rise to local convective instability, the global wave/self-generated shear system is in fact stable, even for extreme cases of thin pycnoclines and near-maximum-amplitude waves.
APA, Harvard, Vancouver, ISO, and other styles
9

Verykokou, Styliani, and Charalabos Ioannidis. "EXTERIOR ORIENTATION ESTIMATION OF OBLIQUE AERIAL IMAGERY USING VANISHING POINTS." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLI-B3 (June 9, 2016): 123–30. http://dx.doi.org/10.5194/isprs-archives-xli-b3-123-2016.

Full text
Abstract:
In this paper, a methodology for the calculation of rough exterior orientation (EO) parameters of multiple large-scale overlapping oblique aerial images, in the case that GPS/INS information is not available (e.g., for old datasets), is presented. It consists of five main steps: (a) the determination of the overlapping image pairs and of the single image in which four ground control points have to be measured; (b) the computation of the transformation parameters from every image to the coordinate reference system; (c) the rough estimation of the camera interior orientation parameters; (d) the estimation of the true horizon line and the nadir point of each image; (e) the calculation of the rough EO parameters of each image. A developed software suite implementing the proposed methodology is tested using a set of UAV multi-perspective oblique aerial images. Several tests are performed for the assessment of the errors and show that the estimated EO parameters can be used either as initial approximations for a bundle adjustment procedure or as rough georeferencing information for several applications, like 3D modelling, even by non-photogrammetrists, because of the minimal user intervention needed. Finally, comparisons with commercial software are made, in terms of automation and correctness of the computed EO parameters.
APA, Harvard, Vancouver, ISO, and other styles
11

Palmer, Michael H., and John A. Blair-Fish. "Halogen Nuclear Quadrupole Coupling Constants in Non-axially symmetric Molecules; Ab initio Calculations, which Include Correlation, Compared with Experiment." Zeitschrift für Naturforschung A 53, no. 6-7 (1998): 370–82. http://dx.doi.org/10.1515/zna-1998-6-718.

Full text
Abstract:
Ab initio determination of the electric field gradient (EFG) tensors at halogen and other centres enabled determination of the nuclear quadrupole coupling constants (NQCC) for a diverse set of C2v, C3v and other symmetry molecules of general formula MH2X2 and MHX3, where the halogen atoms (X) are Cl, Br and I, and the heavy central atoms (M) are C and Si. The study presents results at a standardised level of calculation, triple-zeta in the valence space plus polarisation functions (TZVP) for the equilibrium geometry stage; all-electron MP2 correlation is included in all these studies. For the bromo and iodo compounds, especially the latter, it is essential to allow core polarisation, by decontraction of the p,d-functions. This is conveniently done by initial optimization of the structure with a partly contracted basis, followed by reestablishment of the equilibrium structure with the decontracted basis. The NQCCs, derived from the EFGs using the 'best' values for the atomic quadrupole moments of Cl, Br and I, lead to good agreement with the inertial axis (IA) data obtained from microwave spectroscopy. When the data from the present study are plotted against the values derived from the IA data, obtained by whatever approximations were chosen by the MW authors, we obtain a linear regression for the data (85 points) with slope 1.0365 and intercept -0.1737, with standard errors of 0.0042 and 0.2042, respectively; these results are statistically identical irrespective of whether the data are restricted to IA or EFG principal axis (PA) data. Since, as in the C3v MH3X compounds studied previously, a close correlation of the microwave spectral data with the calculations was observed using the 'best' current values for Qz, there seems no need to postulate that the values of QBr for both ⁷⁹Br and ⁸¹Br are seriously in error. A scaling downwards of Qz by about 5% for Br and I increases the agreement with experiment, but the contributions of relativistic effects are unknown and could lead to further reassessment. Of the two common assumptions used in MW spectroscopy to convert from IA to EFG-PA data, either (a) cylindrical symmetry of the NQCC along the bond direction, or (b) coincidence of the tensor principal element with the bond axis, the latter is found to be a much more realistic approximation.
APA, Harvard, Vancouver, ISO, and other styles
12

Shved, A. V., Ye O. Davydenko, and H. V. Horban. "INTELLECTUAL SUPPORT OF THE PROCESSES OF SEARCHING AND EXTRACTION OF PRECEDENTS IN CASE-BASED REASONING APPROACH." Radio Electronics, Computer Science, Control, no. 3 (November 3, 2024): 107. http://dx.doi.org/10.15588/1607-3274-2024-3-10.

Full text
Abstract:
Context. The situational approach is based on real-time decision-making methods for solving the current problem situation. An effective tool for implementing the concept of a situational approach is an experience-based technique widely known as the case-based reasoning (CBR) approach. Reasoning by precedents allows solving new problems using the knowledge and accumulated experience of previously solved problems. Since cases (precedents) describing a scenario for solving a certain problem situation are stored in the case library, their search and retrieval directly determine the system response time. Under these conditions, there is a need to find ways of solving an actual scientific and practical problem aimed at optimizing the case searching and extracting processes. The object of the paper is the processes of searching and extracting cases from the case library. Objective. The purpose of the article is to improve the process of case searching in the CBR approach by narrowing down the set of cases admissible for solving the current target situation, and excluding from further analysis those cases that do not correspond to the given set of parameters of the target situation. Method. The research methodology is based on the application of rough set theory methods to improve the decision-making procedure based on reasoning by precedents. The proposed two-stage procedure for narrowing the initial set of cases involves preliminary filtering of precedents whose parameter values belong to the given neighborhoods of the corresponding parameters of the target situation at the first stage, and additional narrowing of the obtained subset of cases by the methods of rough set theory at the second stage. The determination of the R-lower and R-upper approximations of a given target set of cases within the notation of rough set theory allows dividing (segmenting) the original set of cases stored in the case library and available for solving the current problem into three subgroups (segments). The search for prototype solutions can be performed among a selected subset of cases that can be accurately classified as belonging to the given target set; among those which can be attributed to the given target set with some degree of probability; or within the union of these two subsets. The third subset contains cases that definitely do not belong to the given target set and can be excluded from further consideration. Results. The problem of presentation and derivation of knowledge based on precedents has been considered. The procedure for searching for precedents in the case library has been improved in order to reduce the system response time required to find the solution closest to the current problem situation by narrowing the initial set of cases. Conclusions. The case-based reasoning approach received further development by segmenting cases, in terms of their belonging to a given target set of precedents, using methods of rough set theory; the search for cases is then carried out within a given segment. The proposed approach, in contrast to the classic CBR framework, uses additional knowledge derived from the obtained case segment; allows modeling the uncertainty regarding the belonging or non-belonging of a case to a given target set; and removes from further consideration cases that do not correspond to the given target set.
APA, Harvard, Vancouver, ISO, and other styles
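The three-segment case split described above follows directly from the rough-set lower and upper approximations. Below is a minimal sketch with made-up case ids and indiscernibility classes (not the paper's case library):

```python
# Universe of case ids, partitioned into indiscernibility classes by some
# attribute signature; X is the target set of cases matching the query.
classes = [{1, 2}, {3, 4, 5}, {6}, {7, 8}]
X = {1, 2, 3, 6}

lower = set().union(*(c for c in classes if c <= X))   # R-lower: surely relevant
upper = set().union(*(c for c in classes if c & X))    # R-upper: possibly relevant
universe = set().union(*classes)

positive = lower             # segment 1: search here first
boundary = upper - lower     # segment 2: uncertain membership
negative = universe - upper  # segment 3: safe to discard from the search
print(positive, boundary, negative)   # {1, 2, 6} {3, 4, 5} {7, 8}
```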
13

Prasertpong, Rukchart. "Roughness of soft sets and fuzzy sets in semigroups based on set-valued picture hesitant fuzzy relations." AIMS Mathematics 7, no. 2 (2022): 2891–928. http://dx.doi.org/10.3934/math.2022160.

Full text
Abstract:
In the philosophy of rough set theory, the methodologies of rough soft sets and rough fuzzy sets have been examined as efficient mathematical tools to deal with unpredictability. The basis of approximations in rough set theory is equivalence relations. Subsequently, the theory has been extended to arbitrary binary relations and fuzzy relations for wider approximation spaces. In recent years, the notion of picture hesitant fuzzy relations by Mathew et al. can be considered a novel extension of fuzzy relations. This paper therefore proposes approximations of rough soft sets and rough fuzzy sets extended from this viewpoint. We give corresponding examples to illustrate the correctness of such approximations. The relationships between set-valued picture hesitant fuzzy relations and the upper (resp., lower) rough approximations of soft sets and fuzzy sets are investigated. In particular, it is shown that every non-rough soft set and non-rough fuzzy set can be induced by set-valued picture hesitant fuzzy reflexive relations and set-valued picture hesitant fuzzy antisymmetric relations. By processing the approximations and advantages of the new tools, some terms and products are applied to semigroups. We then provide attractive results on the upper (resp., lower) rough approximations of prime idealistic soft semigroups over semigroups and fuzzy prime ideals of semigroups induced by set-valued picture hesitant fuzzy relations on semigroups.
APA, Harvard, Vancouver, ISO, and other styles
14

Bertoli, Wesley, Ricardo P. Oliveira, and Jorge A. Achcar. "A New Semiparametric Regression Framework for Analyzing Non-Linear Data." Analytics 1, no. 1 (2022): 15–26. http://dx.doi.org/10.3390/analytics1010002.

Full text
Abstract:
This work introduces a straightforward framework for semiparametric non-linear models as an alternative to existing non-linear parametric models, whose interpretation primarily depends on biological or physical aspects that are not always available in every practical situation. The proposed methodology does not require intensive numerical methods to obtain estimates in non-linear contexts, which is attractive as such algorithms’ convergence strongly depends on assigning good initial values. Moreover, the proposed structure can be compared with standard polynomial approximations often used for explaining non-linear data behaviors. Approximate posterior inferences for the semiparametric model parameters were obtained from a fully Bayesian approach based on the Metropolis-within-Gibbs algorithm. The proposed structures were considered to analyze artificial and real datasets. Our results indicated that the semiparametric models outperform linear polynomial regression approximations to predict the behavior of response variables in non-linear settings.
APA, Harvard, Vancouver, ISO, and other styles
15

Zhang, Shuwei, Houpu Yang, Jin Zhao, and Shu Wang. "Abstract PO5-07-05: Deep learning can diagnose axillary lymph node metastases on optical virtual histologic images in breast cancer patients during surgery." Cancer Research 84, no. 9_Supplement (2024): PO5–07–05—PO5–07–05. http://dx.doi.org/10.1158/1538-7445.sabcs23-po5-07-05.

Full text
Abstract:
Background: Reliable identification of axillary lymph node (ALN) involvement in patients with breast cancer allows for definitive axillary dissection at the time of the initial surgery, thus avoiding the need for a separate axillary surgery. However, conventional intraoperative ALN diagnostic methods are time-consuming and labor-intensive and can result in tissue destruction. Dynamic full-field optical coherence tomography, also called dynamic cell imaging (DCI), has been developed and validated to offer rapid and label-free histologic approximations of metastatic and non-metastatic ALNs. In this study, we aim to optimize the diagnostic pipeline with an automated approach and present the results of using a deep learning (DL) algorithm with DCI to predict ALN status intraoperatively in patients with breast cancer. Methods: Breast cancer patients who required ALN staging were enrolled prospectively in this study. DCI was applied to bisected fresh lymph nodes in a non-destructive manner, and the specimens were subsequently sent for histopathological examination, regarded as the gold standard for comparison. A DL model was trained and fine-tuned on over 80,000 DCI images, and the results were mapped to slide level to predict the ALN diagnosis. Results: A total of 607 DCI slides of ALNs with 112,852 cropped patches were included in the study. The DL model was trained and validated on a dataset containing 481 slides and tested on an independent testing dataset with 126 slides. In the test set, the DL algorithm yielded accurate prediction of ALN status, with a sensitivity of 91.9%, a specificity of 95.5%, and an area under the receiver operating characteristic curve (AUC) of 0.937 (95% confidence interval [CI]: 0.912-0.957) at the slide level. Conclusion: These results demonstrate that the integration of DCI with DL is rapid, reduces labor requirements and minimizes tissue destruction. Meanwhile, the algorithm had high classification accuracy in predicting the metastatic burden of ALNs in patients with breast cancer. Citation Format: Shuwei Zhang, Houpu Yang, Jin Zhao, Shu Wang. Deep learning can diagnose axillary lymph node metastases on optical virtual histologic images in breast cancer patients during surgery [abstract]. In: Proceedings of the 2023 San Antonio Breast Cancer Symposium; 2023 Dec 5-9; San Antonio, TX. Philadelphia (PA): AACR; Cancer Res 2024;84(9 Suppl):Abstract nr PO5-07-05.
APA, Harvard, Vancouver, ISO, and other styles
16

Molinari, Irene, Roberto Tonini, Stefano Lorito, et al. "Fast evaluation of tsunami scenarios: uncertainty assessment for a Mediterranean Sea database." Natural Hazards and Earth System Sciences 16, no. 12 (2016): 2593–602. http://dx.doi.org/10.5194/nhess-16-2593-2016.

Full text
Abstract:
We present a database of pre-calculated tsunami waveforms for the entire Mediterranean Sea, obtained by numerical propagation of uniformly spaced Gaussian-shaped elementary sources for the sea level elevation. Based on any initial sea surface displacement, the database allows the fast calculation of full waveforms at the 50 m isobath offshore of coastal sites of interest by linear superposition. A computationally inexpensive procedure is set up to estimate the coefficients for the linear superposition based on the potential energy of the initial elevation field. The elementary sources' size and spacing are fine enough to satisfactorily reproduce the effects of M ≥ 6.0 earthquakes. Tsunami propagation is modelled using the Tsunami-HySEA code, a GPU finite volume solver for the non-linear shallow water equations. Like other existing methods based on the initial sea level elevation, the database is independent of the faulting geometry and mechanism, which makes it applicable in any tectonic environment. We model a large set of synthetic tsunami test scenarios, selected to explore the uncertainty introduced when approximating tsunami waveforms and their maxima by fast and simplified linear combination. This is, to our knowledge, the first time that the uncertainty associated with such a procedure is systematically analysed and that relatively small earthquakes are considered, which may be relevant in the near field of the source in a complex tectonic setting. We find that the non-linearity of tsunami evolution affects the reconstruction of the waveforms and of their maxima by introducing an almost unbiased (centred at zero) error distribution of relatively modest extent. The uncertainty introduced by our approximation can in principle be propagated to forecast results. The resulting product is then suitable for different applications such as probabilistic tsunami hazard analysis, tsunami source inversions and tsunami warning systems.
APA, Harvard, Vancouver, ISO, and other styles
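To make the superposition idea concrete: each elementary Gaussian source has a precomputed waveform at a coastal point, and any initial elevation is expanded on the source shapes to weight those waveforms. The sketch below uses synthetic stand-ins and a plain least-squares fit for the coefficients, whereas the paper derives them from the potential energy of the initial elevation field.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-ins (not the Mediterranean database): source shapes along a
# 1D transect and placeholder precomputed waveforms at one coastal point.
n_src, n_pts, n_t = 20, 200, 500
xs = np.linspace(0, 100, n_pts)
centers = np.linspace(5, 95, n_src)
shapes = np.exp(-0.5 * ((xs[None, :] - centers[:, None]) / 4.0)**2)
G = rng.normal(size=(n_src, n_t))       # placeholder unit-source waveforms

eta0 = np.exp(-0.5 * ((xs - 40.0) / 10.0)**2)   # some initial elevation field
coeffs, *_ = np.linalg.lstsq(shapes.T, eta0, rcond=None)  # fit coefficients
waveform = coeffs @ G                   # superposed waveform at the coastal point
print(waveform.shape)                   # (500,): one time series, no re-simulation
```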
17

Mezerdi, Mohamed Amine, and Nabil Khelfallah. "Stability and prevalence of McKean–Vlasov stochastic differential equations with non-Lipschitz coefficients." Random Operators and Stochastic Equations 29, no. 1 (2021): 67–78. http://dx.doi.org/10.1515/rose-2021-2053.

Full text
Abstract:
We consider various approximation properties for systems driven by McKean–Vlasov stochastic differential equations (MVSDEs) with continuous coefficients, for which pathwise uniqueness holds. We prove that the solution of such equations is stable with respect to small perturbations of initial conditions, parameters and driving processes. Moreover, the unique strong solutions may be constructed by an effective approximation procedure. Finally, we show that the set of bounded uniformly continuous coefficients for which the corresponding MVSDEs have a unique strong solution is a set of second category in the sense of Baire.
APA, Harvard, Vancouver, ISO, and other styles
18

Jiang, Yulin, Qingmin Pu, and Wenbin Ding. "Reconstruction of Velocity Distribution in Partially-Filled Pipe Based on Non-Uniform Under-Sampling." Advances in Mathematical Physics 2020 (January 22, 2020): 1–8. http://dx.doi.org/10.1155/2020/6961286.

Full text
Abstract:
In the process of research on flow velocity distribution in a partially filled pipe, under-sampling of measurement data often occurs. For the first time, this problem is solved by an improved non-uniform B-spline curve fitting approximation (NBSC) method. The main innovation of this method is to reconstruct the flow velocity distribution fitting curve from a small number of non-uniform feature points containing flow velocity information. First, the curvature of the whole set of discrete sampled data is analyzed, then a weighted threshold is set, and the sampled points that satisfy the threshold are extracted as the initial velocity distribution feature points. Next, the node vectors are constructed according to the initial feature points, and the initial interpolation fitting curves are generated. Secondly, using the relative deviation between the initial approximation curve and each sampled point, new feature points are added where the curve's deviation exceeds the specified tolerance, and a new interpolation fitting curve is obtained. The above procedure is repeated until the fitting curve reaches the expected accuracy, thus determining the appropriate feature points. Experimental results showed that, for the same approximation deviation, the proposed NBSC method better solves the problem of under-sampling of measurement data.
APA, Harvard, Vancouver, ISO, and other styles
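The refinement loop described above (fit, find the worst-deviating sample, promote it to a feature point, repeat) can be sketched compactly; here a SciPy cubic spline stands in for the paper's non-uniform B-spline, and the test profile and tolerance are invented.

```python
import numpy as np
from scipy.interpolate import CubicSpline

x = np.linspace(0, 1, 200)
v = np.tanh(10 * (x - 0.5)) + 0.05 * np.sin(40 * x)   # stand-in velocity profile

tol = 0.02
idx = {0, 66, 133, len(x) - 1}                        # initial feature points
while True:
    pts = sorted(idx)
    spline = CubicSpline(x[pts], v[pts])              # fit current feature points
    err = np.abs(spline(x) - v)
    worst = int(np.argmax(err))
    if err[worst] <= tol:                             # expected accuracy reached
        break
    idx.add(worst)                                    # promote worst-fit sample

print(f"{len(idx)} feature points reproduce {len(x)} samples to within {tol}")
```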
19

Filipovich, O. V. "Two-element Selective Assembly with Non-linear Input-output Models Using Approximation." Intellekt. Sist. Proizv. 22, no. 2 (2024): 80–86. http://dx.doi.org/10.22213/2410-9304-2024-2-80-86.

Full text
Abstract:
The process of single-parameter selective assembly of two elements is considered for the case of a nonlinear input-output model application. Due to the objective complexity of determining the relations between the limit deviations and tolerances of input and output parameters, as well as the smallness of the relative accuracy due to the precision of the assembly, it is reasonable to represent the initial model in the form of a first-order polynomial of two variables. Linearisation is proposed to be carried out using a method of obtaining an approximating relation in the form of a Taylor series and by means of a multivariate least squares method, with subsequent comparison of the variants according to a given criterion. The criterion for choosing one of the two proposed variants is the minimum average approximation error. To determine the values of group tolerances, it is proposed to use two methods: assigning equal tolerances and assigning tolerances of equal relative accuracy. For both methods the derivation of the set-making equations is given, allowing the use of selective group numbers under certain assumptions. Using the linearized model, the main indicators of the assembly process are determined: the number of assembly sets, work in progress and preliminary scrap. An example is given for the case when the output parameter is the product of the input element parameters. The coefficients were calculated and the set-making equation was derived. Comparison of the results presented in the paper with earlier results (for the initial nonlinear model) shows a relatively small divergence in calculating the boundaries of selective groups; the error in determining the probability of obtaining assembly sets as a whole does not exceed 0.5%. The proposed method is applicable in the case of small values of relative accuracy of input and output parameters, which in practice corresponds to selective assembly of precision products.
APA, Harvard, Vancouver, ISO, and other styles
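For the paper's example, where the output parameter is the product of the input parameters, the two linearisation routes compared above can be sketched directly; the nominal values and tolerance widths below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

# Output parameter z = x*y linearized two ways:
# (a) first-order Taylor expansion about the nominal point (x0, y0);
# (b) least-squares fit of a first-order polynomial over the tolerance zone.
x0, y0, dx, dy = 10.0, 5.0, 0.2, 0.1
x = rng.uniform(x0 - dx, x0 + dx, 10_000)
y = rng.uniform(y0 - dy, y0 + dy, 10_000)
z = x * y

z_taylor = x0*y0 + y0*(x - x0) + x0*(y - y0)      # (a) Taylor linearization

A = np.column_stack([np.ones_like(x), x, y])      # (b) fit z ~ c0 + c1*x + c2*y
c, *_ = np.linalg.lstsq(A, z, rcond=None)
z_ls = A @ c

for name, zh in [("Taylor", z_taylor), ("least squares", z_ls)]:
    print(name, "mean |error|, %:", 100 * np.mean(np.abs(zh - z) / z))
```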
20

Obayomi, Adesoji, and Lukman Salaudeen. "A Non-Standard Finite Difference Schemes for the Solution of Stiff Initial Value Problems." International Journal of Development Mathematics (IJDM) 1, no. 3 (2024): 001–7. http://dx.doi.org/10.62054/ijdm/0103.01.

Full text
Abstract:
In this study, we introduce a novel non-standard finite difference (NSFD) scheme designed to address the challenges posed by stiff initial value problems. Stiffness in differential equations often leads to numerical instability and requires specialized methods for stable and accurate solutions. A novel set of numerical schemes is developed for solving stiff ordinary differential equations arising from the decay of radioactive substances. This paper demonstrates the power of normalization in the discretization function. We employed non-local approximation and renormalization of the denominator function to create qualitatively stable schemes for a stiff ordinary differential equation. The schemes' stability properties were verified using numerical experiments, and their performance is evaluated in comparison to other typical finite difference schemes.
APA, Harvard, Vancouver, ISO, and other styles
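A classic instance of the denominator renormalization mentioned above (a Mickens-type construction; the paper's own schemes may differ) replaces the step size h by φ(h) = (1 − e^(−λh))/λ in the decay model, which keeps the scheme stable at step sizes where standard Euler explodes:

```python
import numpy as np

lam, h, n, y0 = 50.0, 0.1, 20, 1.0    # lam*h = 5: forward Euler is unstable

phi = (1.0 - np.exp(-lam * h)) / lam  # renormalized denominator function
y_nsfd, y_euler = y0, y0
for _ in range(n):
    y_nsfd  -= lam * phi * y_nsfd     # NSFD step: per-step factor exp(-lam*h)
    y_euler -= lam * h * y_euler      # Euler step: per-step factor 1 - 5 = -4

print("exact:", y0 * np.exp(-lam * h * n))
print("NSFD :", y_nsfd, " Euler:", y_euler)   # NSFD exact; Euler blows up
```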
21

Pazis, Jason, and Ronald Parr. "Non-Parametric Approximate Linear Programming for MDPs." Proceedings of the AAAI Conference on Artificial Intelligence 25, no. 1 (2011): 459–64. http://dx.doi.org/10.1609/aaai.v25i1.7930.

Full text
Abstract:
The Approximate Linear Programming (ALP) approach to value function approximation for MDPs is a parametric value function approximation method, in that it represents the value function as a linear combination of features which are chosen a priori. Choosing these features can be a difficult challenge in itself. One recent effort, Regularized Approximate Linear Programming (RALP), uses L1 regularization to address this issue by combining a large initial set of features with a regularization penalty that favors a smooth value function with few non-zero weights. Rather than using smoothness as a backhanded way of addressing the feature selection problem, this paper starts with smoothness and develops a non-parametric approach to ALP that is consistent with the smoothness assumption. We show that this new approach has some favorable practical and analytical properties in comparison to (R)ALP.
APA, Harvard, Vancouver, ISO, and other styles
22

GOHARA, KAZUTOSHI, HIROSHI SAKURAI, and SHOZO SATO. "EXPERIMENTAL VERIFICATION FOR FRACTAL TRANSITION USING A FORCED DAMPED OSCILLATOR." Fractals 08, no. 01 (2000): 67–72. http://dx.doi.org/10.1142/s0218348x00000081.

Full text
Abstract:
A damped oscillator stochastically driven by temporal forces is experimentally investigated. The dynamics is characterized by a set Γ(C) of trajectories in a cylindrical space, where C is a set of initial states on the Poincaré section. Two sets, Γ(C) and C, are attractive and unique invariant fractal sets that approximately satisfy specific equations derived previously by the authors. The correlation dimension of the set C is in good agreement with the similarity dimension obtained for a strictly self-similar set constructed by contraction mappings while C is a self-affine set constructed by non-contraction mappings.
APA, Harvard, Vancouver, ISO, and other styles
23

Alshehry, Azzh Saad, Roman Ullah, Nehad Ali Shah, Rasool Shah, and Kamsing Nonlaopon. "Implementation of Yang residual power series method to solve fractional non-linear systems." AIMS Mathematics 8, no. 4 (2023): 8294–309. http://dx.doi.org/10.3934/math.2023418.

Full text
Abstract:
In this study, we implemented the Yang residual power series (YRPS) methodology, a unique analytical treatment method, to estimate the solutions of a non-linear system of fractional partial differential equations. The YRPS method combines the RPS approach with the Yang transform. The suggested approach to handling fractional systems is explained along with its application. With fewer calculations and greater accuracy, the limit idea is used to solve the systems in Yang space and produce the YRPS solution. The benefit of the new method is that it requires less computation to obtain a power series form solution, whose coefficients are established in a series of algebraic steps. Two attractive initial value problems were used to test the technique's applicability and performance. The behaviour of the approximate solutions is discussed numerically and visually, along with the effect of the fraction order ς. It was observed that the proposed method's approximations and the exact solutions were in complete agreement. The YRPS results highlight that the approach may be applied easily and with analytical efficiency to a variety of fractional models of physical processes.
APA, Harvard, Vancouver, ISO, and other styles
24

Романюк, Вадим Васильович. "Finite approximation of zero-sum games played in staircase-function continuous spaces." KPI Science News, no. 4 (February 14, 2022): 19–38. http://dx.doi.org/10.20535/kpisn.2021.4.242769.

Full text
Abstract:
Background. There is a known method of approximating continuous zero-sum games, wherein an approximate solution is considered acceptable if it changes minimally under a minimal change of the sampling step. However, the method cannot be applied straightforwardly to a zero-sum game played with staircase-function strategies. Besides, the independence of the players' sampling step selection should be taken into account. Objective. The objective is to develop a method of finite approximation of zero-sum games played in staircase-function continuous spaces by taking into account that the players are likely to independently sample their pure strategy sets. Methods. To achieve the said objective, a zero-sum game, in which the players' strategies are staircase functions of time, is formalized. In such a game, the set of the player's pure strategies is a continuum of staircase functions of time, and the time is thought of as discrete. The conditions of sampling the set of possible values of the player's pure strategy are stated so that the game becomes defined on a product of staircase-function finite spaces. In general, the sampling step is different for each player and the distribution of the sampled points (function-strategy values) is non-uniform. Results. A method of finite approximation of zero-sum games played in staircase-function continuous spaces is presented. The method consists in irregularly sampling the player's pure strategy value set, solving smaller-sized matrix games, each defined on a subinterval where the pure strategy value is constant, and stacking their solutions if they are consistent. The stack of the smaller-sized matrix game solutions is an approximate solution to the initial staircase game. The (weak) consistency of the approximate solution is studied by how much the payoff and optimal situation change as the sampling density minimally increases by the three ways of the sampling increment: only the first player's increment, only the second player's increment, or both players' increments. The consistency is decomposed into the payoff, optimal strategy support cardinality, optimal strategy sampling density, and support probability consistency. It is practically reasonable to consider a relaxed payoff consistency. Conclusions. The suggested method of finite approximation of staircase zero-sum games consists in the independent samplings, solving smaller-sized matrix games in a reasonable time span, and stacking their solutions if they are consistent. The finite approximation is regarded as appropriate if at least the respective approximate (stacked) solution is e-payoff consistent.
APA, Harvard, Vancouver, ISO, and other styles
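Each subinterval game in the stack is an ordinary matrix game, solvable by linear programming. The sketch below solves random placeholder payoff matrices with SciPy and sums the subinterval values as one plausible aggregate for the stacked solution; the consistency checks described above are omitted.

```python
import numpy as np
from scipy.optimize import linprog

def solve_zero_sum(A):
    """Optimal mixed strategy and value for the row player of matrix game A."""
    shift = 1.0 - A.min()                       # make all payoffs positive
    B = A + shift
    m, n = B.shape
    res = linprog(np.ones(m), A_ub=-B.T, b_ub=-np.ones(n),
                  bounds=[(0, None)] * m)       # min sum(x) s.t. B^T x >= 1
    value = 1.0 / res.x.sum()
    return value * res.x, value - shift         # strategy, original game value

rng = np.random.default_rng(3)
subgames = [rng.uniform(-1, 1, size=(3, 3)) for _ in range(4)]  # placeholders
stack = [solve_zero_sum(A) for A in subgames]   # one solution per subinterval
total = sum(v for _, v in stack)                # aggregate payoff of the stack
print([round(v, 3) for _, v in stack], "sum:", round(total, 3))
```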
25

Vadim Romanuke. "Equilibrium stacks for a non-cooperative game defined on a product of staircase-function continuous and finite strategy spaces." Statistics, Optimization & Information Computing 12, no. 1 (2023): 45–74. http://dx.doi.org/10.19139/soic-2310-5070-1356.

Full text
Abstract:
A method of finite uniform approximation of 3-person games played with staircase-function strategies is presented. A continuous staircase 3-person game is approximated to a staircase trimatrix game by sampling the player’s pure strategy value set. The set is sampled uniformly so that the resulting staircase trimatrix game is cubic. An equilibrium of the staircase trimatrix game is obtained by stacking the equilibria of the subinterval trimatrix games, each defined on an interval where the pure strategy value is constant. The stack is an approximate solution to the initial staircase game. The (weak) consistency, equivalent to the approximate solution acceptability, is studied by how much the players’ payoff and equilibrium strategy change as the sampling density minimally increases. The consistency includes the payoff, equilibrium strategy support cardinality, equilibrium strategy sampling density, and support probability consistency. The most important parts are the payoff consistency and equilibrium strategy support cardinality (weak) consistency, which are checked in the quickest and easiest way. However, it is practically reasonable to consider a relaxed payoff consistency, by which the player’s payoff change in an appropriate approximation may grow at most by epsilon as the sampling density minimally increases. The weak consistency itself is a relaxation to the consistency, where the minimal decrement of the sampling density is ignored. An example is presented to show how the approximation is fulfilled for a case of when every subinterval trimatrix game has pure strategy equilibria.
APA, Harvard, Vancouver, ISO, and other styles
26

Richa, Munish Aggarwal, Harish Kumar, Ranju Mahajan, Navdeep Singh Arora, and Tarsem Singh Gill. "Relativistic ponderomotive self-focusing of quadruple Gaussian laser beam in cold quantum plasma." Laser and Particle Beams 36, no. 3 (2018): 353–58. http://dx.doi.org/10.1017/s026303461800023x.

Full text
Abstract:
In the present paper, we have investigated self-focusing of the quadruple Gaussian laser beam in underdense cold quantum plasma. The non-linearity chosen is associated with the relativistic mass effect that arises due to the quiver motion of electrons and the electron density perturbation caused by the ponderomotive force. The non-linearity modifies the plasma frequency in the dielectric function and hence the refractive index of the medium. The focusing/defocusing of the quadruple laser beam depends on the refractive index of the medium. We have set up a non-linear differential equation that controls the beam width parameter by using the well-known paraxial ray approximation and the Wentzel–Kramers–Brillouin approximation. The effect of the intensity parameter and electron temperature on laser beam self-focusing is observed in the presence of cold quantum plasma. The results reveal that the electron temperature and the initial intensity of the laser beam control the profile dynamics of the laser beam.
APA, Harvard, Vancouver, ISO, and other styles
27

Arabi naree, Somaye, and Maryam Mohammadi. "A new non-negative matrix factorization method to build a recommender system." Journal of Research in Science, Engineering and Technology 8, no. 2 (2020): 12–6. http://dx.doi.org/10.24200/jrset.vol8iss2pp12-6.

Full text
Abstract:
The main aim of this paper is to apply non-negative matrix factorization to build a recommender system. In a recommender system, there is a group of users who rate a set of items. These ratings can be represented by a rating matrix. The main problem is to estimate the unknown ratings and then predict the users' interest in the items they haven't rated. The main innovation of this paper is a new algorithm to compute the matrix factorization such that the factorized matrices are a good approximation of the initial rating matrix and, moreover, a good source for precisely predicting the unknown ratings of the items. The results show that the proposed matrix factorization improves the estimated ratings considerably.
APA, Harvard, Vancouver, ISO, and other styles
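A common way to realize this is to run multiplicative NMF updates only over the observed entries of the rating matrix, via a 0/1 mask; the tiny rating matrix, rank, and update rule below are illustrative assumptions, since the abstract does not spell out the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(4)

# Rating matrix with 0 marking unknown ratings (synthetic example).
R = np.array([[5, 3, 0, 1],
              [4, 0, 0, 1],
              [1, 1, 0, 5],
              [1, 0, 0, 4],
              [0, 1, 5, 4]], dtype=float)
M = (R > 0).astype(float)              # 1 = observed rating, 0 = unknown

k, eps = 2, 1e-9
W = rng.random((R.shape[0], k))
H = rng.random((k, R.shape[1]))
for _ in range(2000):                  # masked multiplicative updates
    WH = W @ H
    W *= ((M * R) @ H.T) / ((M * WH) @ H.T + eps)
    H *= (W.T @ (M * R)) / (W.T @ (M * WH) + eps)

pred = W @ H                           # unknown ratings filled in by the model
print(np.round(pred, 2))
```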
28

Arabi naree, Somaye, and Maryam Mohammadi. "A New Non-Negative Matrix Factorization Method to Build a Recommender System." Journal of Management and Accounting Studies 8, no. 3 (2020): 56–61. http://dx.doi.org/10.24200/jmas.vol8iss3pp56-61.

Full text
Abstract:
The main aim of this paper is to apply non-negative matrix factorization to build a recommender system. In a recommender system, there is a group of users who rate a set of items. These ratings can be represented by a rating matrix. The main problem is to estimate the unknown ratings and then predict the users' interest in the items they haven't rated. The main innovation of this paper is a new algorithm to compute the matrix factorization such that the factorized matrices are a good approximation of the initial rating matrix and, moreover, a good source for precisely predicting the unknown ratings of the items. The results show that the proposed matrix factorization improves the estimated ratings considerably.
APA, Harvard, Vancouver, ISO, and other styles
29

Alalawi, Huda, and Michael Strickland. "Far-from-equilibrium attractors for massive kinetic theory in the relaxation time approximation." EPJ Web of Conferences 296 (2024): 13004. http://dx.doi.org/10.1051/epjconf/202429613004.

Full text
Abstract:
In this proceedings contribution, we summarize recent findings concerning the presence of early- and late-time attractors in non-conformal kinetic theory. We study the effects of varying both the initial momentum-space anisotropy and initialization times using an exact solution of the 0+1D boost-invariant Boltzmann equation with a mass- and temperature-dependent relaxation time. Our findings support the existence of a longitudinal pressure attractor, but they do not support the existence of distinct attractors for the bulk viscous and shear pressures. Considering a large set of integral moments, we show that for moments with greater than one power of longitudinal momentum squared, both early- and late-time attractors are present.
APA, Harvard, Vancouver, ISO, and other styles
30

Yang, Jaw-Yen, Bagus Putra Muljadi, Zhi-Hui Li, and Han-Xin Zhang. "A Direct Solver for Initial Value Problems of Rarefied Gas Flows of Arbitrary Statistics." Communications in Computational Physics 14, no. 1 (2013): 242–64. http://dx.doi.org/10.4208/cicp.290112.030812a.

Full text
Abstract:
An accurate and direct algorithm for solving the semiclassical Boltzmann equation with relaxation time approximation in phase space is presented for parallel treatment of rarefied gas flows of particles of three statistics. The discrete ordinate method is first applied to discretize the velocity space of the distribution function to render a set of scalar conservation laws with source terms. The high-order weighted essentially non-oscillatory scheme is then implemented to capture the time evolution of the discretized velocity distribution function in physical space and time. The method is developed for two space dimensions and implemented for gas particles that obey the Maxwell-Boltzmann, Bose-Einstein and Fermi-Dirac statistics. Computational examples for one- and two-dimensional initial value problems of rarefied gas flows are presented, and the results indicate that good resolution of the main flow features can be achieved. Flows with a wide range of relaxation times and Knudsen numbers, covering different flow regimes, are computed to validate the robustness of the method. The recovery of quantum statistics to the classical limit is also tested for small fugacity values.
APA, Harvard, Vancouver, ISO, and other styles
31

Blusseau, Samy, Bastien Ponchon, Santiago Velasco-Forero, Jesús Angulo, and Isabelle Bloch. "Approximating morphological operators with part-based representations learned by asymmetric auto-encoders." Mathematical Morphology - Theory and Applications 4, no. 1 (2020): 64–86. http://dx.doi.org/10.1515/mathm-2020-0102.

Full text
Abstract:
This paper addresses the issue of building a part-based representation of a dataset of images. More precisely, we look for a non-negative, sparse decomposition of the images on a reduced set of atoms, in order to unveil a morphological and explainable structure of the data. Additionally, we want this decomposition to be computed online for any new sample that is not part of the initial dataset. Therefore, our solution relies on a sparse, non-negative auto-encoder, where the encoder is deep (for accuracy) and the decoder shallow (for explainability). This method compares favorably to the state-of-the-art online methods on two benchmark datasets (MNIST and Fashion MNIST) and on a hyperspectral image, according to classical evaluation measures and to a new one we introduce, based on the equivariance of the representation to morphological operators.
APA, Harvard, Vancouver, ISO, and other styles
32

Wyse, Rosemary F. G., and Bernard J. T. Jones. "A Model for the Sequence of Elliptical Galaxies." Symposium - International Astronomical Union 117 (1987): 367. http://dx.doi.org/10.1017/s0074180900150545.

Full text
Abstract:
We present a simple model for the formation of elliptical galaxies, based on a binary clustering hierarchy of dark matter, the chemical enrichment of the gas at each level being controlled by supernovae. The initial conditions for the non-linear phases of galaxy formation are set by the post-recombination power spectrum of density fluctuations. We investigate two models for this power spectrum: the first is a straightforward power law, |δ_k|² ∝ kⁿ, and the second is Peebles's analytic approximation to the emergent spectrum in a universe dominated by cold dark matter. The normalisation is chosen such that on some scale, say M ∼ 10¹² M⊙, the objects that condense out have properties (radius and velocity dispersion) resembling 'typical' galaxies. There is some ambiguity in this due to the poorly determined mass-to-light ratio of a typical elliptical galaxy, so we look at two normalisations, σ₁D ∼ 350 km s⁻¹ and σ₁D ∼ 140 km s⁻¹. The choice determines whether Compton cooling or hydrogen cooling is more important during the galaxy formation period. The non-linear behaviour of the perturbations is treated by the homogeneous sphere approximation.
APA, Harvard, Vancouver, ISO, and other styles
33

Abbas, Umar Farouk, Abdulkadir Ahmed, and Usman Mukhtar. "BAYESIAN ESTIMATION OF FOUR PARAMETERS ADDITIVE CHEN-WEIBULL DISTRIBUTION." FUDMA JOURNAL OF SCIENCES 6, no. 1 (2022): 181–90. http://dx.doi.org/10.33003/fjs-2022-0601-891.

Full text
Abstract:
Models with a bathtub-shaped failure rate function have been widely accepted in the fields of reliability and medicine and are particularly useful in reliability-related decision making and cost analysis. In this study, the additive Chen-Weibull (ACW) distribution, with increasing and bathtub-shaped failure rate functions, is studied using Bayesian and non-Bayesian approaches on two real data sets. The Bayes estimators were obtained by assuming a non-informative prior (half-Cauchy) under the squared error loss function (SELF); the Laplace approximation and Markov chain Monte Carlo (MCMC) techniques, conducted in R, were used to approximate the posterior distribution of the ACW model. In addition, the maximum product of spacings (MPS) method of estimation is also considered, using the mpsedist function in the BMT package in R with a good set of initial parameter values. We compared the performance of the two estimation methods using the Kolmogorov-Smirnov test, and the results showed that the MPS method outperformed the Bayesian approach.
APA, Harvard, Vancouver, ISO, and other styles
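The maximum product of spacings method maximizes the mean log of the CDF spacings at the sorted data. The sketch below applies it to a plain two-parameter Weibull model with SciPy, rather than the paper's four-parameter additive Chen-Weibull distribution and its mpsedist helper:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import weibull_min

rng = np.random.default_rng(5)
data = np.sort(weibull_min.rvs(c=1.5, scale=2.0, size=200, random_state=rng))

def neg_log_spacings(theta):
    """Negative mean-log-spacings objective for a Weibull(c, scale) model."""
    c, scale = theta
    if c <= 0 or scale <= 0:
        return np.inf
    F = weibull_min.cdf(data, c=c, scale=scale)
    spacings = np.diff(np.concatenate(([0.0], F, [1.0])))   # CDF spacings
    return -np.sum(np.log(np.clip(spacings, 1e-300, None)))

res = minimize(neg_log_spacings, x0=[1.0, 1.0], method="Nelder-Mead")
print("MPS estimates (shape, scale):", res.x)   # close to the true (1.5, 2.0)
```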
34

Skinn, Brian. "(Invited) Estimation of the Onset of Mass Transfer Limitations in Pulsed Electrochemical Processes." ECS Meeting Abstracts MA2019-01, no. 20 (2019): 1100. http://dx.doi.org/10.1149/ma2019-01/20/1100.

Full text
Abstract:
The contributions of the physical phenomena governing the distribution of current across an electrode in an electrochemical process are conventionally categorized as primary, secondary, and/or tertiary current distribution effects, which respectively embody geometric/ohmic, kinetic polarization, and concentration polarization effects. On virtually all non-trivial workpieces of interest to industrial electrochemical practice, it is important to be able to control the areas affected by the process; viz., preferentially adding or removing material to some regions over others. Two of the most significant phenomena contributing to the tertiary current distribution in electrochemical processes are depletion (for electrodeposition) and saturation (for electrodissolution) of the active soluble metal species at the workpiece surface. Both of these phenomena lead to mass-transfer limitations: taking electrodissolution as an example, if material is being dissolved at a particular point on the electrode surface at a rate greater than diffusion can carry the products away from the surface, then mass-transfer limitations will result. The tertiary current distribution effects arising from these limitations will tend to disfavor further increases in the local electrodissolution current density at that point, thus shifting the current density distribution to other locations on the workpiece surface, to other reactions at the same location, or both. Thus, exerting control over these tertiary current distribution effects can be highly valuable for developing an efficient and accurate electrochemical process. An interesting feature of these mass-transfer-limiting phenomena is that they are almost entirely inactive for a short time (generally < 1 s for processes of practical interest) after the electrical voltage is applied, even if the applied current density is sufficiently high that significant mass transfer limitations will result after this initial interval. Thus, it follows that pulsing the applied potential/current at sufficiently high frequencies has the potential to enable significant control of these tertiary current distribution effects, by allowing the physicochemical conditions contributing to mass-transfer limitations at the electrode surface to “relax” while the potential is turned off. This “relaxation” behavior is schematized in Figure 1 for a generic pulse-electrodissolution process under steady-periodic conditions, where the orange and blue traces represent the concentration profiles at the end of the on-time and off-time, respectively, under conditions where no mass-transfer limitations are active at any point in time. For the purposes of electrochemical process optimization, the ability to estimate the maximum concentration of dissolved species at the electrode surface for a given system and applied waveform would provide guidance as to whether and when a particular mode of mass-transfer limitation is likely to be active. In particular, evaluation of the “transition time,” the value of the waveform on-time above which mass-transfer limitations become appreciable, is of significant practical interest. Methods for transition time estimation based on linearized approximation of the boundary-layer concentration dynamics under a number of simplifying assumptions are available in the literature; e.g., Ref. [1]. 
However, the transition times calculated using these methods were found to deviate from COMSOL Multiphysics® simulation results by anywhere from –80% to +2780%, depending on the form of the estimation used and the particular waveform under consideration. This talk summarizes a method developed to provide appreciably more accurate predictions of transition times, under a similar set of simplifying assumptions as in Ref. [1]. Separate on-time and off-time analytical solutions of the time-dependent steady-periodic mass transport behavior in a one-dimensional boundary layer were developed via the "finite Fourier transform" (FFT) technique [2] and used to generate transition time estimates. Optimal values of the FFT model parameters were separately identified for fifty-three pairs of the two pulsed-waveform timing parameters, period and duty cycle, spanning substantially the entire parameter space of practical industrial relevance. When compared to COMSOL® simulation results, the deviation of the transition time predictions (equivalently, predictions of the maximum surface concentration, in the electrodissolution paradigm of the model) was within 9% for all of the examined sets of timing parameters, with most deviating by less than 5%. This FFT method thus provides a highly accurate method for estimation of transition times, within the approximations made in constructing the model.
References
[1] Ibl, N. "Some Theoretical Aspects of Pulse Electrolysis." Surface Technology 10: 81-104 (1980).
[2] Deen, W.M. "Analysis of Transport Phenomena," 2nd ed., Ch. 5. New York: Oxford University Press, 2012.
Figure 1
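As a rough, self-contained illustration of the physics behind such transition-time estimates (not the talk's analytical FFT solution), the following Python sketch time-marches one-dimensional boundary-layer diffusion with a square-wave surface flux and reports the steady-periodic maximum surface concentration. All parameter values, including the saturation concentration, are assumptions chosen only for demonstration.

```python
import numpy as np

# Illustrative sketch only (not the talk's analytical FFT solution): explicit
# finite-difference marching of 1D diffusion in a Nernst boundary layer of
# thickness delta, with a square-wave dissolution flux at the electrode
# surface (x = 0) and fixed bulk concentration at x = delta. All parameter
# values, including c_sat, are assumptions for demonstration.
D      = 1.0e-9     # diffusivity, m^2/s
delta  = 50e-6      # boundary-layer thickness, m
c_bulk = 100.0      # bulk concentration, mol/m^3
c_sat  = 4000.0     # assumed saturation concentration, mol/m^3
N_flux = 0.05       # surface molar flux during the on-time, mol/(m^2 s)
period, duty = 1e-2, 0.5          # waveform period (s) and duty cycle

nx = 101
x  = np.linspace(0.0, delta, nx)
dx = x[1] - x[0]
dt = 0.4 * dx**2 / D              # explicit-stability time step
c  = np.full(nx, c_bulk)

t, c_max = 0.0, 0.0
n_steps = int(50 * period / dt)   # march ~50 periods to steady-periodic state
for step in range(n_steps):
    on = (t % period) < duty * period
    c[1:-1] += dt * D * (c[2:] - 2 * c[1:-1] + c[:-2]) / dx**2
    c[0]  = c[1] + (N_flux if on else 0.0) * dx / D   # flux boundary condition
    c[-1] = c_bulk                                    # fixed bulk value
    if step >= n_steps - int(period / dt):            # track the final period
        c_max = max(c_max, c[0])
    t += dt

print(f"steady-periodic max surface concentration ~ {c_max:.0f} mol/m^3 "
      f"({'saturated' if c_max >= c_sat else 'below saturation'})")
```

Sweeping the on-time in such a sketch until the surface concentration first reaches saturation gives a crude numerical analogue of the transition time discussed above.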
APA, Harvard, Vancouver, ISO, and other styles
35

Kapoor, Mamta, and Varun Joshi. "Numerical approximation of coupled 1D and 2D non-linear Burgers’ equations by employing Modified Quartic Hyperbolic B-spline Differential Quadrature Method." International Journal of Mechanics 15 (April 7, 2021): 37–55. http://dx.doi.org/10.46300/9104.2021.15.5.

Full text
Abstract:
In this paper, the numerical solution of the coupled 1D and coupled 2D Burgers' equations is obtained, with appropriate initial and boundary conditions, by implementing a "modified quartic hyperbolic B-spline DQM". In the present method, the required weighting coefficients are computed using the modified quartic hyperbolic B-spline as a basis function. The coupled 1D and coupled 2D Burgers' equations are transformed into a set of ordinary differential equations, which is solved with the SSPRK43 scheme. The efficiency of the scheme and the accuracy of the obtained numerical solutions are demonstrated with the aid of eight numerical examples. The numerical results obtained with the modified quartic hyperbolic B-spline are accurate, and the method is easy to implement.
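As a sketch of the differential quadrature workflow (our illustration, not the paper's scheme), the code below builds polynomial DQ weighting matrices via Shu's explicit formulas on Chebyshev nodes and integrates the 1D viscous Burgers' equation with classical RK4, standing in for the paper's modified quartic hyperbolic B-spline basis and SSPRK43 integrator.

```python
import numpy as np

# Sketch of a differential quadrature method (DQM) for the 1D viscous Burgers
# equation u_t + u u_x = nu u_xx on [0, 1] with homogeneous Dirichlet BCs.
# A Lagrange-polynomial basis (Shu's explicit weighting formulas) and
# classical RK4 stand in for the paper's B-spline basis and SSPRK43 scheme.
N  = 25
nu = 0.05

# Chebyshev-Gauss-Lobatto nodes on [0, 1] (better conditioned than uniform)
x = 0.5 * (1.0 - np.cos(np.pi * np.arange(N) / (N - 1)))

# First-derivative weighting matrix via Shu's formulas
M = np.array([np.prod(x[i] - np.delete(x, i)) for i in range(N)])
A = np.zeros((N, N))
for i in range(N):
    for j in range(N):
        if i != j:
            A[i, j] = M[i] / ((x[i] - x[j]) * M[j])
    A[i, i] = -np.sum(A[i, np.arange(N) != i])
B = A @ A                      # second-derivative weights

def rhs(u):
    du = -u * (A @ u) + nu * (B @ u)
    du[0] = du[-1] = 0.0       # hold Dirichlet boundary values fixed
    return du

u  = np.sin(np.pi * x)         # initial condition
dt = 1e-4
for _ in range(int(0.5 / dt)): # integrate to t = 0.5 with classical RK4
    k1 = rhs(u)
    k2 = rhs(u + 0.5 * dt * k1)
    k3 = rhs(u + 0.5 * dt * k2)
    k4 = rhs(u + dt * k3)
    u += dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

print("u(x=0.5, t=0.5) ≈", u[N // 2])
```

The coupled 1D and 2D systems of the paper extend this pattern by stacking one such semi-discretisation per unknown field.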
APA, Harvard, Vancouver, ISO, and other styles
36

Hafdallah, Abdelhak. "OPTIMAL CONTROL FOR A CONTROLLED ILL-POSED WAVE EQUATION WITHOUT REQUIRING THE SLATER HYPOTHESIS." Ural Mathematical Journal 6, no. 1 (2020): 84. http://dx.doi.org/10.15826/umj.2020.1.007.

Full text
Abstract:
In this paper, we investigate the problem of optimal control for an ill-posed wave equation without using the extra Slater hypothesis, i.e., that the set of admissible controls has a non-empty interior. First, by a controllability approach, we turn the ill-posed wave equation into a well-posed equation with an initial condition containing incomplete data. The missing data require us to use the no-regret control notion introduced by Lions for controlling distributed systems with incomplete data. After approximating the no-regret control by a sequence of low-regret controls, we characterize the optimal control by a singular optimality system.
APA, Harvard, Vancouver, ISO, and other styles
37

Gao, Guohua, Jeroen Vink, Fredrik Saaf, and Terence Wells. "Strategies to Enhance the Performance of Gaussian Mixture Model Fitting for Uncertainty Quantification." SPE Journal 27, no. 01 (2021): 329–48. http://dx.doi.org/10.2118/204008-pa.

Full text
Abstract:
Summary When formulating history matching within the Bayesian framework, we may quantify the uncertainty of model parameters and production forecasts using conditional realizations sampled from the posterior probability density function (PDF). It is quite challenging to sample such a posterior PDF. Some methods [e.g., Markov chain Monte Carlo (MCMC)] are very expensive, whereas other methods are cheaper but may generate biased samples. In this paper, we propose an unconstrained Gaussian mixture model (GMM) fitting method to approximate the posterior PDF and investigate new strategies to further enhance its performance. To reduce the central processing unit (CPU) time of handling bound constraints, we reformulate the GMM fitting formulation such that an unconstrained optimization algorithm can be applied to find the optimal solution of unknown GMM parameters. To obtain a sufficiently accurate GMM approximation with the lowest number of Gaussian components, we generate random initial guesses, remove components with very small or very large mixture weights after each GMM fitting iteration, and prevent their reappearance using a dedicated filter. To prevent overfitting, we add a new Gaussian component only if the quality of the GMM approximation on a (large) set of blind-test data sufficiently improves. The unconstrained GMM fitting method with the new strategies proposed in this paper is validated using nonlinear toy problems and then applied to a synthetic history-matching example. It can construct a GMM approximation of the posterior PDF that is comparable to the MCMC method, and it is significantly more efficient than the constrained GMM fitting formulation (e.g., reducing the CPU time by a factor of 800 to 7,300 for problems we tested), which makes it quite attractive for large-scale history-matching problems. NOTE: This paper is also published as part of the 2021 SPE Reservoir Simulation Conference Special Issue.
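As a schematic analogue of the component-management strategy described above (the paper develops its own unconstrained fitting formulation, which is not reproduced here), the sketch below uses scikit-learn's GaussianMixture to grow a mixture one component at a time, prune components with implausible weights, and accept growth only when held-out "blind-test" likelihood improves. All thresholds and data are illustrative.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Schematic analogue of the pruning/overfitting-guard ideas: fit GMMs from
# random initial guesses, drop components whose mixture weights fall outside
# a plausible band, and accept an extra component only if the held-out
# ("blind test") log-likelihood sufficiently improves. Thresholds here are
# illustrative choices, not the paper's values.
rng = np.random.default_rng(0)
samples = np.vstack([rng.normal(-2, 0.5, (500, 2)),
                     rng.normal(+2, 0.8, (700, 2))])   # toy "posterior" samples
train, blind = samples[:900], samples[900:]

w_min, w_max = 0.02, 0.98          # assumed pruning band for mixture weights
best_model, best_score = None, -np.inf
for n_comp in range(1, 6):         # grow the mixture one component at a time
    gmm = GaussianMixture(n_components=n_comp, n_init=5,
                          random_state=0).fit(train)
    keep = (gmm.weights_ > w_min) & (gmm.weights_ < w_max)
    if keep.sum() < n_comp:        # prune: refit with surviving count only
        gmm = GaussianMixture(n_components=max(1, int(keep.sum())),
                              n_init=5, random_state=0).fit(train)
    score = gmm.score(blind)       # mean log-likelihood on blind-test data
    if score > best_score + 1e-3:  # accept only a sufficient improvement
        best_model, best_score = gmm, score
    else:
        break                      # stop growing: guards against overfitting

print("chosen number of components:", best_model.n_components)
```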
APA, Harvard, Vancouver, ISO, and other styles
38

Kumar, Harish, Munish Aggarwal, Richa, Dinkar Sharma, Sumit Chandok, and Tarsem Singh Gill. "Significant enhancement in the propagation of cosh-Gaussian laser beam in a relativistic–ponderomotive plasma using ramp density profile." Laser and Particle Beams 36, no. 2 (2018): 179–85. http://dx.doi.org/10.1017/s0263034618000125.

Full text
Abstract:
The paper presents an investigation of the self-focusing of a cosh-Gaussian (ChG) laser beam in a relativistic–ponderomotive non-uniform plasma. It is observed numerically that the selection of the decentered parameter and the initial beam radius determines the focusing/defocusing of the ChG laser beam. For given values of these parameters, a plasma density ramp of suitable length can avoid defocusing and enhance the focusing effect significantly. The focusing length and extent of focusing may also be controlled by varying the slope of the ramp density. A comparison with a Gaussian beam has also been attempted for an optimized set of parameters. The results establish that the ChG beam focuses earlier and more sharply than the Gaussian beam. We have set up the non-linear differential equation for the beam width parameter using the Wentzel–Kramers–Brillouin and paraxial ray approximations and solved it numerically using a Runge–Kutta method.
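The paper's actual beam-width equation is not reproduced in the abstract, so the following is a purely illustrative stand-in: a generic paraxial beam-width equation (diffraction versus self-focusing) with an assumed saturating density ramp, integrated with a Runge-Kutta method via SciPy. All parameters and the functional form are hypothetical.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Purely illustrative stand-in (the paper's equation differs): a generic
# paraxial beam-width parameter f(xi) obeying
#   f'' = D / f**3 - S * n(xi) / f
# (diffraction vs. self-focusing), where n(xi) models an assumed linear
# plasma density ramp that saturates. D, S, and the ramp are hypothetical.
D, S = 1.0, 1.2
ramp = lambda xi: 1.0 + 0.5 * np.minimum(xi, 2.0)   # saturating density ramp

def deriv(xi, y):
    f, fp = y
    return [fp, D / f**3 - S * ramp(xi) / f]

sol = solve_ivp(deriv, (0.0, 10.0), [1.0, 0.0],     # f(0)=1, f'(0)=0
                method="RK45", dense_output=True, max_step=0.01)
xi = np.linspace(0.0, 10.0, 5)
print(np.round(sol.sol(xi)[0], 3))   # beam-width parameter along propagation
```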
APA, Harvard, Vancouver, ISO, and other styles
39

Hartmeier, Michael, and Mathias Garny. "Minimal basis for exact time dependent kernels in cosmological perturbation theory and application to ΛCDM and w0waCDM". Journal of Cosmology and Astroparticle Physics 2023, no. 12 (2023): 027. http://dx.doi.org/10.1088/1475-7516/2023/12/027.

Full text
Abstract:
We derive a minimal basis of kernels furnishing the perturbative expansion of the density contrast and velocity divergence in powers of the initial density field that is applicable to cosmological models with arbitrary expansion history, thereby relaxing the commonly adopted Einstein-de-Sitter (EdS) approximation. For this class of cosmological models, the non-linear kernels are at every order given by a sum of terms, each of which factorizes into a time-dependent growth factor and a wavenumber-dependent basis function. We show how to reduce the set of basis functions to a minimal amount, and give explicit expressions up to order n = 5. We find that for this minimal basis choice, each basis function individually displays the expected scaling behaviour due to momentum conservation, being non-trivial at n ≥ 4. This is a highly desirable property for numerical evaluation of loop corrections. In addition, it allows us to match the density field to an effective field theory (EFT) description for cosmologies with an arbitrary expansion history, which we explicitly derive at order four. We evaluate the differences to the EdS approximation for ΛCDM and w0waCDM, paying special attention to the irreducible cosmology dependence that cannot be absorbed into EFT terms for the one-loop bispectrum. Finally, we provide algebraic recursion relations for a special generalization of the EdS approximation that retains its simplicity and is relevant for mixed hot and cold dark matter models.
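In schematic notation (ours; the paper's basis functions and index conventions will differ), the factorized structure described above can be written as:

```latex
% n-th order density contrast as a sum of factorized kernels (schematic)
\delta^{(n)}(\mathbf{k},\tau)
  = \sum_{i} g^{(n)}_{i}(\tau)
    \int_{\mathbf{k}_1+\dots+\mathbf{k}_n=\mathbf{k}}
    F^{(n)}_{i}(\mathbf{k}_1,\dots,\mathbf{k}_n)\,
    \delta_0(\mathbf{k}_1)\cdots\delta_0(\mathbf{k}_n)
```

where each time-dependent growth factor $g^{(n)}_i$ multiplies a wavenumber-dependent basis function $F^{(n)}_i$; in the EdS limit the sum collapses to a single term per order, recovering the usual $F_n$ kernels with $g^{(n)} = D^n$.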
APA, Harvard, Vancouver, ISO, and other styles
40

Jones, Janet E., Philip Mellor, and Jesper Storm. "Limits on the Missing Mass in Dark Stellar Remnants." Symposium - International Astronomical Union 117 (1987): 411. http://dx.doi.org/10.1017/s0074180900150600.

Full text
Abstract:
A set of comprehensive computer models for the chemical evolution of galaxies has been used to determine limits on the amount of mass that could exist in the form of dark stellar remnants deriving from normal stellar evolutionary processes. In these models, the instantaneous recycling approximation is not assumed: stars are binned into 10 mass intervals, with different lifetimes, yields and remnant masses. The models were run using many different values for the IMF (including non-Salpeter and varying IMFs), star formation rates, yields, remnant masses, gas infall and outflow rates, primordial metallicity and initial conditions. The Galaxy is described by a two-zone halo-disk system, where gas from the halo falls onto the disk. Elliptical galaxies are described by single-zone models.
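As a toy illustration of dropping the instantaneous recycling approximation (every number below is a placeholder, not one of the paper's ingredients), a one-zone sketch can track delayed gas return and remnant lock-up per mass bin:

```python
import numpy as np

# Minimal one-zone sketch of the bookkeeping described above: stars are
# binned into mass intervals, each bin with its own lifetime and remnant
# mass, so gas return is delayed rather than instantaneous. Every numerical
# value below is an illustrative placeholder.
m_bins    = np.array([1.0, 2.0, 4.0, 8.0, 16.0, 32.0])  # bin masses (Msun)
lifetimes = 10.0 / m_bins**2.5                          # crude lifetime law (Gyr)
m_remn    = np.where(m_bins < 8.0, 0.6, 1.4)            # WD / NS remnants (Msun)
frac      = m_bins**-1.35; frac /= frac.sum()           # mass fraction per bin

dt, t_end = 0.05, 12.0                                  # time step, end (Gyr)
gas, dark_remnants = 1.0e10, 0.0                        # Msun
generations = []                                        # (birth time, mass formed)

for t in np.arange(0.0, t_end, dt):
    dm = 0.3 * gas * dt                                 # simple SF law (0.3/Gyr)
    gas -= dm
    generations.append((t, dm))
    for t_b, m_f in generations:                        # delayed stellar deaths
        age = t - t_b
        for k, tau in enumerate(lifetimes):
            if tau <= age < tau + dt:                   # bin k dies this step
                dying  = m_f * frac[k]
                locked = dying * m_remn[k] / m_bins[k]  # locked in remnants
                dark_remnants += locked
                gas += dying - locked                   # remainder recycled

print(f"dark remnant mass fraction after {t_end} Gyr: "
      f"{dark_remnants / 1.0e10:.3f}")
```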
APA, Harvard, Vancouver, ISO, and other styles
41

Kehle, Christoph. "Diophantine approximation as Cosmic Censor for Kerr–AdS black holes." Inventiones mathematicae 227, no. 3 (2021): 1169–321. http://dx.doi.org/10.1007/s00222-021-01078-6.

Full text
Abstract:
The purpose of this paper is to show an unexpected connection between Diophantine approximation and the behavior of waves on black hole interiors with negative cosmological constant $\Lambda<0$, and to explore the consequences of this for the Strong Cosmic Censorship conjecture in general relativity. We study linear scalar perturbations $\psi$ of Kerr–AdS solving $\Box_g\psi-\frac{2}{3}\Lambda\psi=0$ with reflecting boundary conditions imposed at infinity. Understanding the behavior of $\psi$ at the Cauchy horizon corresponds to a linear analog of the problem of Strong Cosmic Censorship. Our main result shows that if the dimensionless black hole parameters mass $\mathfrak{m}=M\sqrt{-\Lambda}$ and angular momentum $\mathfrak{a}=a\sqrt{-\Lambda}$ satisfy a certain non-Diophantine condition, then perturbations $\psi$ arising from generic smooth initial data blow up, $|\psi|\to+\infty$, at the Cauchy horizon. The proof crucially relies on a novel resonance phenomenon between stable trapping on the black hole exterior and the poles of the interior scattering operator that gives rise to a small divisors problem. Our result is in stark contrast to the result on Reissner–Nordström–AdS (Kehle in Commun Math Phys 376(1):145–200, 2020), as well as to previous work on the analogous problem for $\Lambda\ge 0$; in both cases such linear scalar perturbations were shown to remain bounded. As a result of the non-Diophantine condition, the set of parameters $\mathfrak{m},\mathfrak{a}$ for which we show blow-up forms a Baire-generic but Lebesgue-exceptional subset of all parameters below the Hawking–Reall bound. On the other hand, we conjecture that for a set of parameters $\mathfrak{m},\mathfrak{a}$ which is Baire-exceptional but Lebesgue-generic, all linear scalar perturbations remain bounded at the Cauchy horizon, $|\psi|\le C$. This suggests that the validity of the $C^0$-formulation of Strong Cosmic Censorship for $\Lambda<0$ may change in a spectacular way according to the notion of genericity imposed.
APA, Harvard, Vancouver, ISO, and other styles
42

Manucharyan, G. E., W. Moon, F. Sévellec, A. J. Wells, J. Q. Zhong, and J. S. Wettlaufer. "Steady turbulent density currents on a slope in a rotating fluid." Journal of Fluid Mechanics 746 (April 2, 2014): 405–36. http://dx.doi.org/10.1017/jfm.2014.119.

Full text
Abstract:
We consider the dynamics of actively entraining turbulent density currents on a conical sloping surface in a rotating fluid. A theoretical plume model is developed to describe both axisymmetric flow and single-stream currents of finite angular extent. An analytical solution is derived for flow dominated by the initial buoyancy flux and with a constant entrainment ratio, which serves as an attractor for solutions with alternative initial conditions where the initial fluxes of mass and momentum are non-negligible. The solutions indicate that the downslope propagation of the current halts at a critical level where there is purely azimuthal flow, and the boundary layer approximation breaks down. Observations from a set of laboratory experiments are consistent with the dynamics predicted by the model, with the flow approaching a critical level. Interpretation in terms of the theory yields an entrainment coefficient $E\propto 1/\Omega $ where the rotation rate is $\Omega $. We also derive a corresponding theory for density currents from a line source of buoyancy on a planar slope. Our theoretical models provide a framework for designing and interpreting laboratory studies of turbulent entrainment in rotating dense flows on slopes and understanding their implications in geophysical flows.
APA, Harvard, Vancouver, ISO, and other styles
43

Demirović, Emir, Christian Schilling, and Anna Lukina. "In Search of Trees: Decision-Tree Policy Synthesis for Black-Box Systems via Search." Proceedings of the AAAI Conference on Artificial Intelligence 39, no. 26 (2025): 27250–57. https://doi.org/10.1609/aaai.v39i26.34934.

Full text
Abstract:
Decision trees, owing to their interpretability, are attractive as control policies for (dynamical) systems. Unfortunately, constructing, or synthesising, such policies is a challenging task. Previous approaches do so by imitating a neural-network policy, approximating a tabular policy obtained via formal synthesis, employing reinforcement learning, or modelling the problem as a mixed-integer linear program. However, these works may require access to a hard-to-obtain accurate policy or a formal model of the environment (within reach of formal synthesis), and may not provide guarantees on the quality or size of the final tree policy. In contrast, we present an approach to synthesise optimal decision-tree policies given a deterministic black-box environment and specification, a discretisation of the tree predicates, and an initial set of states, where optimality is defined with respect to the number of steps to achieve the goal. Our approach is a specialised search algorithm which systematically explores the (exponentially large) space of decision trees under the given discretisation. The key component is a novel trace-based pruning mechanism that significantly reduces the search space. Our approach represents a conceptually novel way of synthesising small decision-tree policies with optimality guarantees even for black-box environments with black-box specifications.
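As a conceptual toy (not the authors' algorithm), the following sketch enumerates depth-1 decision trees over a discretised predicate set for a hypothetical one-dimensional black-box environment, scores each tree by worst-case steps-to-goal over an initial state set, and skips predicates that never split the observed states, loosely echoing the trace-based pruning idea.

```python
import itertools

# Conceptual sketch only (not the paper's algorithm): exhaustively search
# depth-1 decision trees over a discretised predicate set for a toy black-box
# environment, scoring each tree by worst-case steps-to-goal over an initial
# state set. A crude trace-based filter skips predicates that never split the
# states actually observed, echoing the pruning idea.
def env_step(x, action):           # toy 1-D black-box dynamics: move left/right
    return x + (1 if action == "R" else -1)

goal       = lambda x: x == 0
initial    = [-3, -1, 2, 4]
thresholds = range(-5, 6)          # discretised predicates: x <= t
actions    = ["L", "R"]

def rollout(tree, x, horizon=20):
    t, a_left, a_right = tree
    for step in range(horizon):
        if goal(x):
            return step
        x = env_step(x, a_left if x <= t else a_right)
    return None                    # goal not reached within the horizon

visited = set(initial)             # states seen in exploratory traces
best, best_cost = None, float("inf")
for t, aL, aR in itertools.product(thresholds, actions, actions):
    if not any(s <= t for s in visited) or not any(s > t for s in visited):
        continue                   # predicate never splits observed traces
    costs = [rollout((t, aL, aR), x0) for x0 in initial]
    if None in costs:
        continue
    worst = max(costs)
    if worst < best_cost:
        best, best_cost = (t, aL, aR), worst

print("best tree:", best, "worst-case steps:", best_cost)
```

The real search space is exponentially larger (deeper trees, many predicates); the pruning mechanism is what makes it tractable in the paper.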
APA, Harvard, Vancouver, ISO, and other styles
44

Akishin, Pavel G., and Andrey A. Sapozhnikov. "The volume integral equation method in magnetostatic problem." Discrete and Continuous Models and Applied Computational Science 27, no. 1 (2019): 60–69. http://dx.doi.org/10.22363/2658-4670-2019-27-1-60-69.

Full text
Abstract:
This article addresses the application of the volume integral equation method to magnetic system calculations. The main advantage of this approach is that the solution of the equations is sought only in the region filled with ferromagnetic material. The difficulty in applying the method lies in the kernel singularity of the integral equations. For this reason, the well-known package GFUN3D uses in its collocation method only a piecewise-constant approximation of the unknown variables within the discretisation elements. As an alternative approach, the observation points can be replaced by integration over each discretisation element, which allows a higher-order approximation of the unknown variables. In the present work, the main aspects of applying this approach to the modelling of magnetic systems are discussed using the example of a linear approximation of the unknown variables: discretisation of the initial equations, decomposition of the computational domain into elements, calculation of the matrix elements of the discretised system, and solution of the resulting system of nonlinear equations. Within the framework of the finite element method, the computational domain is divided into a set of tetrahedra. First, the initial domain is approximated by a combination of macro-blocks with a previously constructed two-dimensional mesh on their boundaries. After that, the tetrahedral mesh construction procedure is performed for each macro-block separately. When calculating matrix elements, sixfold integrals over two tetrahedra are reduced to combinations of fourfold integrals over triangles, which are evaluated using cubature formulas. Singular integrals are reduced to combinations of regular integrals using methods based on the concept of homogeneous functions. Simple iteration methods are used to solve the nonlinear discretised systems, which avoids inverting large-scale matrices. The results of the modelling are compared with calculations obtained using other methods.
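As a schematic of the final solution stage only (the simple-iteration solve; the kernel assembly with tetrahedra, cubature formulas, and singular integrals is not sketched), the Python toy below applies damped fixed-point iteration to a stand-in discretised magnetisation equation, avoiding any large matrix inversion. The coupling matrix and material law are assumptions.

```python
import numpy as np

# Schematic simple-iteration (Picard) solve of a discretised magnetisation
# equation M = chi(|H|) * H with H = H_ext + K @ M, where K is a stand-in
# dense coupling matrix playing the role of the assembled volume-integral
# operator. Both K and the material law are toy assumptions.
rng = np.random.default_rng(1)
n = 50                                       # elements (one scalar dof each)
K = -0.05 * rng.random((n, n)) / n           # weak off-diagonal coupling
K[np.diag_indices(n)] = -1.0 / 3.0           # demagnetising-like self term
H_ext = np.full(n, 1.0e4)                    # applied field (A/m)

def chi(h):                                  # toy saturating susceptibility;
    return 2.0 / (1.0 + h / 5.0e4)           # modest value keeps map contractive

M = np.zeros(n)
for it in range(200):                        # simple iteration: no inversion
    H = H_ext + K @ M                        # of a large matrix is required
    M_new = chi(np.abs(H)) * H
    if np.max(np.abs(M_new - M)) < 1e-6 * np.max(np.abs(M_new)):
        M = M_new
        break
    M = 0.5 * M + 0.5 * M_new                # damping for robust convergence

print(f"converged after {it + 1} iterations, mean M = {M.mean():.3e} A/m")
```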
APA, Harvard, Vancouver, ISO, and other styles
45

Sadhak, Gautam, Mridul Jaggi, and Santosh Anand. "SOLVING TRANSPORTATION PROBLEMS USING THE BEST CANDIDATES METHOD." INTERNATIONAL JOURNAL OF ENGINEERING SCIENCES & RESEARCH TECHNOLOGY 5, no. 9 (2016): 547–53. https://doi.org/10.5281/zenodo.154479.

Full text
Abstract:
Problem statement: Optimization processes in mathematics, computer science and economics solve problems effectively by choosing the best element from a set of available alternatives. One of the most important and successful applications of optimization is the transportation problem (TP), a special class of linear programming (LP) in operations research (OR). Approach: The main objective of transportation problem solution methods is to minimize the cost or the time of transportation. Most of the currently used methods for solving transportation problems try to reach the optimal solution, whereby most of these methods are considered complex and very expensive in terms of execution time. In this study we use the best candidates method (BCM), whose key idea is to minimize the combinations of the solution by choosing the best candidates to reach the optimal solution. Results: Comparatively, applying the BCM in the proposed method obtains the best initial feasible solution to a transportation problem and performs faster than the existing methods, with minimal computation time and less complexity. The proposed method is therefore an attractive alternative to traditional problem solution methods. Conclusion/Recommendations: The BCM can be used successfully to solve different business problems of product distribution, which are commonly referred to as transportation problems.
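The following Python sketch is our simplified reading of the best-candidates idea, not a faithful reproduction of the published BCM: select the cheapest cell(s) of each row as candidates, allocate greedily through the candidates in order of cost, and finish any remainder with a plain greedy pass.

```python
import numpy as np

# Simplified sketch of the best-candidates idea as we read it from the
# abstract: select the lowest-cost cell(s) of each row as candidates, then
# allocate supply greedily through the candidate cells in order of cost.
# Illustration only; the published BCM has additional rules (e.g., ties).
cost   = np.array([[4, 6, 9],
                   [5, 3, 8],
                   [7, 6, 2]], dtype=float)
supply = np.array([120.0, 80.0, 100.0])
demand = np.array([100.0, 90.0, 110.0])

# Candidate set: the two cheapest cells of each row, so every row keeps an
# alternative when a column is exhausted.
candidates = sorted(
    ((cost[i, j], i, j) for i in range(3) for j in np.argsort(cost[i])[:2]))

alloc = np.zeros_like(cost)
for c, i, j in candidates:
    q = min(supply[i], demand[j])
    alloc[i, j] += q
    supply[i] -= q
    demand[j] -= q

# Any remainder is assigned by a plain greedy pass over all cells (fallback).
for c, i, j in sorted(((cost[i, j], i, j) for i in range(3) for j in range(3))):
    q = min(supply[i], demand[j])
    alloc[i, j] += q
    supply[i] -= q
    demand[j] -= q

print(alloc, "total cost:", np.sum(alloc * cost))
```

On this toy instance the sketch produces a feasible low-cost initial allocation, which is the role BCM plays before any optimality refinement.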
APA, Harvard, Vancouver, ISO, and other styles
46

D’Amato, Elena, Constantino Carlos Reyes-Aldasoro, Arianna Consiglio, Gabriele D’Amato, Maria Felicia Faienza, and Marcella Zollino. "Detection of Pitt–Hopkins Syndrome Based on Morphological Facial Features." Applied Sciences 11, no. 24 (2021): 12086. http://dx.doi.org/10.3390/app112412086.

Full text
Abstract:
This work describes a non-invasive, automated software framework to discriminate between individuals with a genetic disorder, Pitt–Hopkins syndrome (PTHS), and healthy individuals through the identification of morphological facial features. The input data consist of frontal facial photographs in which faces are located using histograms of oriented gradients feature descriptors. Pre-processing steps include color normalization and enhancement, scaling down, rotation, and cropping of pictures to produce a series of images of faces with consistent dimensions. Sixty-eight facial landmarks are automatically located on each face through a cascade of regression functions learnt via gradient boosting to estimate the shape from an initial approximation. The intensities of a sparse set of pixels indexed relative to this initial estimate are used to determine the landmarks. A set of carefully selected geometric features, for example, the relative width of the mouth or angle of the nose, is extracted from the landmarks. The features are used to investigate the statistical differences between the two populations of PTHS and healthy controls. The methodology was tested on 71 individuals with PTHS and 55 healthy controls. The software was able to classify individuals with an accuracy rate of 91%, while pediatricians achieved a recognition rate of 74%. Two geometric features related to the nose and mouth showed significant statistical difference between the two populations.
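The landmark model described above (a cascade of regression functions learnt via gradient boosting) matches the ensemble-of-regression-trees predictor shipped with dlib, so a sketch of that pipeline stage plus one illustrative geometric feature looks as follows. The image path, the predictor file (which must be downloaded separately), and the specific feature are assumptions for illustration.

```python
import dlib
import numpy as np

# Sketch of the landmark/feature pipeline: dlib's HOG-based frontal face
# detector plus its 68-point shape predictor (an ensemble-of-regression-trees
# model of the kind described above). File paths are placeholders.
detector  = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

img = dlib.load_rgb_image("face.jpg")            # frontal facial photograph
faces = detector(img, 1)                         # upsample once while detecting

for face in faces:
    shape = predictor(img, face)
    pts = np.array([(p.x, p.y) for p in shape.parts()])   # 68 (x, y) landmarks

    # Example geometric feature (illustrative, not necessarily one of the
    # paper's selected features): mouth width relative to inter-ocular
    # distance, using the standard 68-point indexing.
    mouth_w  = np.linalg.norm(pts[54] - pts[48])  # mouth corners
    eye_dist = np.linalg.norm(pts[45] - pts[36])  # outer eye corners
    print("relative mouth width:", mouth_w / eye_dist)
```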
APA, Harvard, Vancouver, ISO, and other styles
47

Bazilevskiy, Mikhail Pavlovich. "Program for Constructing Quite Interpretable Elementary and Non-elementary Quasi-linear Regression Models." Proceedings of the Institute for System Programming of the RAS 35, no. 4 (2023): 129–44. http://dx.doi.org/10.15514/ispras-2023-35(4)-7.

Full text
Abstract:
A quite interpretable linear regression satisfies the following conditions: the signs of its coefficients correspond to the substantive meaning of the factors; multicollinearity is negligible; the coefficients are significant; and the approximation quality of the model is high. Previously, the QInter-1 program was developed to construct such models, estimated using ordinary least squares. In it, according to the given initial parameters, a mixed integer 0-1 linear programming problem is automatically generated, as a result of which the most informative regressors are selected. The mathematical apparatus underlying this program was significantly expanded over time: non-elementary linear regressions were developed, linear constraints on the absolute values of intercorrelations were proposed to control multicollinearity, and it became clear that not only linear but also quasi-linear regressions could be constructed. This article describes the second version of the program for constructing quite interpretable regressions, QInter-2. Depending on the initial parameters selected by the user, the QInter-2 program automatically formulates, for the LPSolve solver, the mixed integer 0-1 linear programming problem for constructing both elementary and non-elementary quite interpretable quasi-linear regressions. It is possible to specify up to nine elementary functions and to control such parameters as the number of regressors in the model, the number of digits after the decimal point in real numbers, the absolute contributions of variables to the overall determination, the number of occurrences of explanatory variables in the model, and the magnitude of the intercorrelations. While working with the program, the user can also control the number of elementary and non-elementarily transformed variables, which affects the speed of solving the mixed integer 0-1 linear programming problem. The QInter-2 program is universal and can be used to construct quite interpretable mathematical dependencies in various subject areas.
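QInter-2 targets the LPSolve solver and its exact 0-1 formulation is not reproduced here; as a generic illustration of selecting regressors with sign constraints via mixed integer 0-1 linear programming, the sketch below uses PuLP with its bundled CBC solver and a least-absolute-deviations objective, which keeps the problem linear. The big-M constant, the data, and the sign pattern are all assumptions.

```python
import numpy as np
import pulp

# Schematic illustration (not QInter-2's actual formulation): select at most
# m regressors with prescribed (here: nonnegative) coefficient signs by mixed
# integer 0-1 linear programming. A least-absolute-deviations objective keeps
# the problem linear; big-M and all data below are illustrative.
rng = np.random.default_rng(0)
n, p, m, bigM = 40, 5, 2, 100.0
X = rng.normal(size=(n, p))
y = 3.0 * X[:, 0] + 1.5 * X[:, 2] + rng.normal(scale=0.1, size=n)

prob = pulp.LpProblem("subset_regression", pulp.LpMinimize)
b = [pulp.LpVariable(f"b{j}", lowBound=0) for j in range(p)]   # sign: b_j >= 0
z = [pulp.LpVariable(f"z{j}", cat="Binary") for j in range(p)] # inclusion flags
e = [pulp.LpVariable(f"e{i}", lowBound=0) for i in range(n)]   # |residual| bounds

prob += pulp.lpSum(e)                                 # minimise sum of |residuals|
for i in range(n):                                    # e_i >= |y_i - x_i . b|
    pred = pulp.lpSum(X[i, j] * b[j] for j in range(p))
    prob += e[i] >= y[i] - pred
    prob += e[i] >= pred - y[i]
for j in range(p):
    prob += b[j] <= bigM * z[j]                       # b_j = 0 unless selected
prob += pulp.lpSum(z) <= m                            # at most m regressors

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print([(j, b[j].value()) for j in range(p) if z[j].value() > 0.5])
```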
APA, Harvard, Vancouver, ISO, and other styles
48

Azri, Hakim, Hafida Belbachir, and Fatiha Guerroudji Meddah. "Identifying Spam Activity on Public Facebook Pages." Journal of Computing and Information Technology 29, no. 3 (2022): 133–49. http://dx.doi.org/10.20532/cit.2021.1005221.

Full text
Abstract:
Since their emergence, online social networks (OSNs) keep gaining popularity. However, many related problems have also arisen, such as the use of fake accounts for malicious activities. In this paper, we focus on identifying spammers among users who are active on public Facebook pages. We are specifically interested in identifying groups of spammers sharing similar URLs. For this purpose, we built an initial dataset based on all the content that has been posted upon feed posts on a set of public Facebook pages with high numbers of subscribers. We assumed that such public pages, with hundreds of thousands of subscribers and revolving around a common attractive topic, make an ideal ground for spamming activity. Our first contribution in this paper is a reliable methodology that helps in identifying potential spammer and non-spammer accounts that are likely to be tagged as, respectively, spammers/non-spammers upon manual verification. To that end, we used a set of features characterizing spam activity with a scoring method. This methodology, combined with manual human validation, successfully allowed us to build a dataset of spammers and non-spammers. Our second contribution is the analysis of the identified spammer accounts. We found that these accounts do not display any community-like behavior as they rarely interact with each other, and are slightly more active than non-spammers during late-night hours, while slightly less active during daytime hours. Finally, our third contribution is the proposal of a clustering approach that successfully detected 16 groups of spammers in the form of clusters of spam accounts sharing similar URLs.
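One simple way to realize the "clusters of spam accounts sharing similar URLs" idea (our assumption; this is not necessarily the paper's clustering algorithm) is to link accounts whose posted-URL sets have a Jaccard similarity above a threshold and report connected components:

```python
from itertools import combinations

# Sketch of one possible grouping of accounts by shared URLs (an assumption
# of ours, not the paper's algorithm): link two accounts when the Jaccard
# similarity of their posted-URL sets exceeds a threshold, then report
# connected components as candidate spam groups. Data are toy placeholders.
posted = {
    "acct1": {"u.com/a", "u.com/b", "spam.biz/x"},
    "acct2": {"spam.biz/x", "spam.biz/y", "u.com/b"},
    "acct3": {"news.org/1"},
    "acct4": {"spam.biz/x", "spam.biz/y"},
}
THRESH = 0.3   # illustrative similarity threshold

def jaccard(a, b):
    return len(a & b) / len(a | b)

# Build adjacency, then extract connected components by depth-first search.
adj = {u: set() for u in posted}
for u, v in combinations(posted, 2):
    if jaccard(posted[u], posted[v]) >= THRESH:
        adj[u].add(v)
        adj[v].add(u)

seen, groups = set(), []
for u in posted:
    if u in seen:
        continue
    stack, comp = [u], set()
    while stack:
        w = stack.pop()
        if w not in comp:
            comp.add(w)
            stack.extend(adj[w] - comp)
    seen |= comp
    if len(comp) > 1:               # singletons are not spam groups
        groups.append(comp)

print(groups)
```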
APA, Harvard, Vancouver, ISO, and other styles
49

Ingel, Lev Kh. "On the onset of advective flow in a stratified medium induced by weak symmetry breaking of horizontal translations." Journal of Physics: Conference Series 2317, no. 1 (2022): 012001. http://dx.doi.org/10.1088/1742-6596/2317/1/012001.

Full text
Abstract:
In this paper, we analytically study the mechanism of the onset of liquid/gas flow in a non-isothermal stratified medium. The flow is induced by small spatial inhomogeneities of the exchange coefficients. In a statically stable stratified medium heated strictly from above, as is known, heat diffuses from top to bottom. We consider a system that is slightly non-invariant with respect to translations along the horizontal directions. This may occur, for example, due to the dependence of the thermal conductivity coefficient on the horizontal coordinates. In this case, we show that it can give rise to horizontal inhomogeneity in the distributions of buoyancy and hydrostatic pressure and, consequently, to the onset of horizontal advection. We consider harmonic variations of the thermal conductivity of small amplitude. By applying a linear approximation to the set of governing equations, we derive explicit analytical expressions for the temperature perturbations and velocity components. Finally, we investigate the possibility of an intense response of the system to a slight initial symmetry breaking in a certain range of parameter values.
APA, Harvard, Vancouver, ISO, and other styles
50

Hopmann, Ch, and C. Zimmermann. "Characterisation of the quasi-static material behaviour of thermoplastic elastomers (TPE) under consideration of temperature and stress state." Journal of Elastomers & Plastics 52, no. 3 (2019): 199–215. http://dx.doi.org/10.1177/0095244319835868.

Full text
Abstract:
This article deals with the determination of the mechanical material behaviour of injection-moulded thermoplastic elastomers (TPE). For this purpose, the mechanical behaviour of TPE is investigated on the basis of different materials (type and hardness) and process parameters with regard to the process-induced anisotropy. Based on these investigations, an almost isotropic material was selected for the investigations of the influence of temperature, stress state and load level. The results confirm that the hardness and type of the processed material have an impact on the mechanical properties longitudinal and transverse to the flow direction. Furthermore, TPE behave quite similarly to pure elastomers, as they show non-linear material behaviour, stress softening after initial loading and residual deformation after unloading. All these effects also depend on the temperature, stress state and load level, which results in a complex material behaviour. At the end of the article, two calibration approaches for a hyperelastic material model are presented (one set of parameters for all stress states vs. one set of parameters for each stress state). The second approach (one set of parameters for each stress state) shows a much higher accuracy in approximating the test results.
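As a sketch of the "one parameter set per stress state" calibration idea, the following fits a two-parameter Mooney-Rivlin model to uniaxial nominal stress-stretch data alone; the data are synthetic placeholders, not the paper's measurements, and a separate fit on, e.g., planar-shear data would give that stress state its own parameter pair.

```python
import numpy as np
from scipy.optimize import curve_fit

# Sketch of per-stress-state calibration: fit a two-parameter Mooney-Rivlin
# model to uniaxial nominal stress-stretch data only. The "measurements"
# below are synthetic placeholders, not the paper's data.
def uniaxial_mr(lam, c10, c01):
    # Nominal stress of an incompressible Mooney-Rivlin solid in uniaxial tension
    return 2.0 * (lam - lam**-2) * (c10 + c01 / lam)

lam_data    = np.linspace(1.0, 2.5, 10)                       # stretch ratios
stress_data = uniaxial_mr(lam_data, 0.30, 0.05)               # "measured" MPa
stress_data += np.random.default_rng(0).normal(0, 0.01, 10)   # noise

(c10, c01), _ = curve_fit(uniaxial_mr, lam_data, stress_data, p0=(0.1, 0.1))
print(f"C10 = {c10:.3f} MPa, C01 = {c01:.3f} MPa")
```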
APA, Harvard, Vancouver, ISO, and other styles