Journal articles on the topic 'Two- and three-step complex problems'

Consult the top 50 journal articles for your research on the topic 'Two- and three-step complex problems.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles across a wide variety of disciplines and organise your bibliography correctly.

1

Boymanov, H. "METHODS OF TEACHING PRIMARY EDUCATION STUDENTS TO PROBLEM SOLVING." International Journal of Advanced Research in Education, Technology and Management 2, no. 2 (2023): 63–70. https://doi.org/10.5281/zenodo.7606680.

Full text
Abstract:
In the first grades of primary education, when solving problems, students learn to find the sum and the remainder, to find sums and addends, to increase or decrease a number by a few units, to compare differences, and to find differences and denominators; in doing so, they are said to face several kinds of problems.
APA, Harvard, Vancouver, ISO, and other styles
2

Tashboeva, Saidakhan Rakhmonberdievna, and Khilolakhan Umidjon qizi Khodzhimatova. "Methods for solving simple text problems in elementary grades. (in 4th grade)." Journal of Science-Innovative Research in Uzbekistan 1, no. 2 (2023): 240–44. https://doi.org/10.5281/zenodo.8075673.

Abstract:
Text tasks are very important in elementary grades. Problems are very useful for students' development and thinking. Any questions that are interesting and engage the student in thought will have a more positive impact on their brain activity. For elementary school students, not-too-difficult one- or two-step tasks are recommended. Through them, the horizons of thinking expand, and they serve as the basis for the complex issues that will be worked on in the next stages. To complete any complex examples and problems, you must first be able to work with simple-looking examples and problems.
3

Peng, Qiao, and Dengyin Zhang. "Multitarget Detection in Depth-Perception Traffic Scenarios." Mathematical Problems in Engineering 2022 (February 4, 2022): 1–7. http://dx.doi.org/10.1155/2022/5590514.

Abstract:
Multitarget detection in complex traffic scenarios usually suffers from problems such as missed targets and difficult detection of small targets. To solve these problems, this paper proposes a two-step detection model for depth-perception traffic scenarios to improve detection accuracy, mainly for three categories of frequently occurring targets: vehicles, persons, and traffic signs. The first step uses an optimized convolutional neural network (CNN) model to identify the existence of small targets and position them with candidate boxes. The second step obtains classification, location, and pixel-level segmentation of multiple targets by using Mask R-CNN based on the results of the first step. Without significantly reducing detection speed, the two-step detection model can effectively improve the detection accuracy of complex traffic scenes containing multiple targets, especially small targets. On the actual testing dataset, compared with Mask R-CNN, the mean average detection accuracy for multiple targets increased by 4.01% and the average precision for small targets increased by 5.8%.
4

Birck, Hannes, Oliver Heckmann, Andreas Mauthe, and Ralf Steinmetz. "The Two-Step P2P Simulation Approach." Journal of Communications Software and Systems 1, no. 1 (2017): 4. http://dx.doi.org/10.24138/jcomss.v1i1.312.

Abstract:
In this article a framework is introduced that can be used to analyse the effects and requirements of P2P applications on the application and network layers. Because P2P applications are complex and deployed on a large scale, pure packet-level simulations do not scale well enough to analyse P2P applications in a large network with thousands of peers. It is also difficult to assess the effect of application-level behavior on the communication system. We therefore propose an approach starting with a more abstract, and therefore scalable, application-level simulation. For the application layer a specific simulation framework was developed. The results of the application-layer simulations, plus some estimated background traffic, are fed into a packet-layer simulator such as NS2 (or our lab testbed) in a second step to perform detailed packet-layer analysis such as loss and delay measurements. This can be done for a subnetwork of the original network to avoid scalability problems.
5

McGovern, Eimear, Eoin Kelleher, Aisling Snow, et al. "Clinical application of three-dimensional printing to the management of complex univentricular hearts with abnormal systemic or pulmonary venous drainage." Cardiology in the Young 27, no. 7 (2017): 1248–56. http://dx.doi.org/10.1017/s104795111600281x.

Abstract:
In recent years, three-dimensional printing has demonstrated reliable reproducibility of several organs including hearts with complex congenital cardiac anomalies. This represents the next step in advanced image processing and can be used to plan surgical repair. In this study, we describe three children with complex univentricular hearts and abnormal systemic or pulmonary venous drainage, in whom three-dimensional printed models based on CT data assisted with preoperative planning. For two children, after group discussion and examination of the models, a decision was made not to proceed with surgery. We extend the current clinical experience with three-dimensional printed modelling and discuss the benefits of such models in the setting of managing complex surgical problems in children with univentricular circulation and abnormal systemic or pulmonary venous drainage.
6

Zhang, Guang, Nan He, and Yanxia Dong. "A Proportional-Egalitarian Allocation Policy for Public Goods Problems with Complex Network." Mathematics 9, no. 17 (2021): 2034. http://dx.doi.org/10.3390/math9172034.

Abstract:
How free-riding behavior can be avoided is a constant topic in public goods problems, especially in persistent and complex resource allocation situations. In this paper, a novel allocation policy for public goods games with a complex network, called the proportional-egalitarian allocation method (PEA), is proposed. This allocation rule differs from the well-studied redistribution policies by following a two-step process without paying back into the common pool. A parameter is set up for dividing the total income into two parts, and then they are distributed by following the egalitarianism and proportional rule, respectively. The first part of total income is distributed equally, while the second part is allocated proportionally according to players’ initial payoffs. In addition, a new strategy-updating mechanism is proposed by comparing the average group payoffs instead of the total payoffs. Compared with regular lattice networks, this mechanism admits the difference of cooperative abilities among players induced by the asymmetric network. Furthermore, numerical calculations show that a relatively small income for the first distribution step will promote the cooperative level, while relatively less income for the second step may harm cooperation evolution. This work thus enriches the knowledge of allocation policies for public goods games and also provides a fresh perspective for the strategy-updating mechanism.
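The two-part split described in this abstract can be sketched in a few lines. The parameter name `alpha` and all numeric values below are illustrative assumptions, not taken from the paper:

```python
def pea_allocation(total_income, initial_payoffs, alpha):
    """Sketch of the proportional-egalitarian allocation (PEA) idea:
    a fraction alpha of the total income is shared equally, and the
    remainder is split in proportion to players' initial payoffs.
    (Parameter name and values are illustrative, not from the paper.)"""
    n = len(initial_payoffs)
    equal_share = alpha * total_income / n        # egalitarian part
    pool = (1.0 - alpha) * total_income           # proportional part
    payoff_sum = sum(initial_payoffs)
    return [equal_share + pool * p / payoff_sum for p in initial_payoffs]

# Two players with initial payoffs 1 and 3, 40% distributed equally:
shares = pea_allocation(100.0, [1.0, 3.0], alpha=0.4)
print(shares)  # close to [35.0, 65.0]: 20 each equally, 60 split 1:3
```

In the terms of the abstract's numerical findings, cooperation is promoted when the first (egalitarian) part is relatively small, which in this sketch corresponds to a small `alpha`.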
7

Eriksson, Ljusk Ola. "Two Methods for Solving Stand Management Problems Based on a Single Tree Model." Forest Science 40, no. 4 (1994): 732–58. http://dx.doi.org/10.1093/forestscience/40.4.732.

Abstract:
Abstract Two methods are presented for solving the stand management problem where the growth dynamics are depicted with a single tree model. With the nongradient method the problem is recast into a combinatorial problem, which in turn is solved with the method of simulated annealing. The gradient method, of which three versions are given, utilizes a combination of nonlinear and mixed integer techniques for solving the problem. In both cases, a linear programming problem solves the single tree harvest selection problem. In tests with nine sample problems, all methods are within 1% of the best solution found. The computational effort of the nongradient method is about one order of magnitude greater than that of the gradient methods. The results indicate the advantage of being able to divide the problem into a suitable hierarchy, since management problems based on single tree models are often too complex to be solved in one step. At each level an appropriate technique can then be applied. For. Sci. 40(4):732-758.
8

Asmouh, Ilham, Mofdi El-Amrani, Mohammed Seaid, and Naji Yebari. "A Cell-Centered Semi-Lagrangian Finite Volume Method for Solving Two-Dimensional Coupled Burgers’ Equations." Computational and Mathematical Methods 2022 (February 13, 2022): 1–18. http://dx.doi.org/10.1155/2022/8192192.

Abstract:
A cell-centered finite volume semi-Lagrangian method is presented for the numerical solution of two-dimensional coupled Burgers’ problems on unstructured triangular meshes. The method combines a modified method of characteristics for the time integration and a cell-centered finite volume for the space discretization. The new method belongs to fractional-step algorithms for which the convection and the viscous parts in the coupled Burgers’ problems are treated separately. The crucial step of interpolation in the convection step is performed using two local procedures accounting for the element where the departure point is located. The resulting semidiscretized system is then solved using a third-order explicit Runge-Kutta scheme. In contrast to the Eulerian-based methods, we apply the new method for each time step along the characteristic curves instead of the time direction. The performance of the current method is verified using different examples for coupled Burgers’ problems with known analytical solutions. We also apply the method for simulation of an example of coupled Burgers’ flows in a complex geometry. In these test problems, the new cell-centered finite volume semi-Lagrangian method demonstrates its ability to accurately resolve the two-dimensional coupled Burgers’ problems.
9

Hromadka, T. V., and R. J. Whitley. "Approximating three-dimensional steady-state potential flow problems using two-dimensional complex polynomials." Engineering Analysis with Boundary Elements 29, no. 2 (2005): 190–94. http://dx.doi.org/10.1016/j.enganabound.2004.07.004.

10

Liu, Xirui. "Analysis on Lightweight Network Methods and Technologies." Highlights in Science, Engineering and Technology 4 (July 26, 2022): 339–48. http://dx.doi.org/10.54097/hset.v4i.922.

Abstract:
There are currently two main schools of deep learning. One is academic: it pursues stronger performance through powerful, complex models. The other is the engineering school, whose purpose is to efficiently deploy models to various hardware platforms. Complex models have better performance, but they also bring unavoidable consumption. With the increasing depth of convolutional neural networks, lightweighting has become a key research direction. There are currently four main methods for designing lightweight networks. This article first introduces CNN model compression and basic convolution operations. This paper also introduces model compression based on AutoML and automatic architecture design based on NAS. Finally, according to the above three points, this paper introduces the application of the above methods in artificially designed neural networks. This paper mainly introduces the step-by-step evolution of the existing methods and analyzes aspects of current neural-network improvements and emerging problems. The significance of this paper is to summarize and deepen, through past experience, the solved problems and key problems in the lightweighting process.
11

Lü, Nian Chun, Yun Hong Cheng, Cheng Jin, and Yi Le Chen. "Dislocation Distribution Function of Two Fracture Dynamics Problems Concerning Aluminum Alloys." Materials Science Forum 575-578 (April 2008): 1008–12. http://dx.doi.org/10.4028/www.scientific.net/msf.575-578.1008.

Abstract:
By the approaches of the theory of complex functions, dynamic propagation problems on the surfaces of mode I crack subjected to unit-step loads and instantaneous impulse loads located at the origin of the coordinates were studied for Aluminum alloys, respectively. Analytical solutions to stresses, displacements, dynamic stress intensity factors and dislocation distribution functions are gained by the methods of self-similar functions. The problems considered can be very facilely transformed into Riemann-Hilbert problem and their closed solutions are obtained rather straightforward by Muskhelishvili’s measure.
12

Devi, Kasmita, Prashanth Maroju, Eulalia Martínez, and Ramandeep Behl. "The Local Convergence of a Three-Step Sixth-Order Iterative Approach with the Basin of Attraction." Symmetry 16, no. 6 (2024): 742. http://dx.doi.org/10.3390/sym16060742.

Abstract:
In this study, we introduce an iterative approach exhibiting sixth-order convergence for the solution of nonlinear equations. The method attains sixth-order convergence by using three evaluations of the function and two evaluations of the first-order derivative per iteration. We examined the theoretical convergence of our method through the convergence theorem, which substantiates the convergence order. Furthermore, we analyzed the local convergence of our proposed technique by employing a hypothesis that involves the first-order derivative of the function Θ alongside the Lipschitz conditions. To evaluate the performance and efficacy of our iterative method, we provide a comparative analysis against existing methods based on various standard numerical problems. Finally, graphical comparisons employing basins of attraction are presented to illustrate the dynamic behavior of the iterative method in the complex plane.
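A classical three-step scheme with the same evaluation count (three evaluations of the function and two of the first derivative per iteration) can serve as a sketch of how such sixth-order methods operate. This is a textbook construction, not necessarily the authors' method:

```python
def three_step_sixth_order(f, df, x):
    """One iteration of a classical three-step scheme: Newton predictor,
    Newton corrector, then a final step that reuses (freezes) the second
    derivative evaluation. Uses f three times and df twice per iteration,
    matching the count quoted in the abstract; sixth-order for simple
    roots. Not necessarily the paper's exact method."""
    y = x - f(x) / df(x)      # first f, first df
    dfy = df(y)               # second df
    z = y - f(y) / dfy        # second f
    return z - f(z) / dfy     # third f, derivative reused

# Solve x**3 - 2 = 0 starting from x = 1:
f = lambda t: t**3 - 2.0
df = lambda t: 3.0 * t**2
x = 1.0
for _ in range(3):
    x = three_step_sixth_order(f, df, x)
print(abs(x - 2.0 ** (1.0 / 3.0)))  # converges rapidly to the real cube root of 2
```

With sixth-order convergence the number of correct digits roughly sextuples per iteration, so a handful of iterations suffices for double precision.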
13

Li, Qiaofeng, and Qiuhai Lu. "Force localization and reconstruction using a two-step iterative approach." Journal of Vibration and Control 24, no. 17 (2017): 3830–41. http://dx.doi.org/10.1177/1077546317713366.

Abstract:
In this paper, we propose a two-step iterative approach to both localize and reconstruct a single point force acting on a structure. Since force reconstruction problems are typically ill-posed, regularization techniques are generally called for. However, for the considered localization-and-reconstruction problem, traditional parameter selection criteria become time-consuming and easily fail. So we propose the stabilization diagram of identified locations to determine the appropriate regularization range and the true force location. The consistency of identified locations under different regularization levels indicates the rationality of corresponding parameters. After the construction of the stabilization diagram, the problem can be transformed to a reconstruction-only one and solved by classic methods. We also adopted a two-phase version of complex method to accelerate the localization process on complex-shaped structures. The approach is validated by simulations of a cantilever beam, an impacted table, and an experiment of an impacted beam. The results show that the force can be well localized and reconstructed from only the responses of structures by the proposed approach.
14

Chen, Xing Yu, Guo Bin Wu, Han Zhao, and Qi Feng Sun. "Research on Function Module Clustering Based on the Rule-Immunity Algorithm for Complex Product." Applied Mechanics and Materials 742 (March 2015): 364–71. http://dx.doi.org/10.4028/www.scientific.net/amm.742.364.

Abstract:
A two-step strategy was proposed to address the inefficiency and inaccuracy of function-module clustering for complex products. First, three principles (weldment simplification, outsourcing simplification, and borrowed-component reduction) were proposed to preprocess and simplify the complex product. Then the preprocessed complex product can be clustered into different function modules by using an advanced Immune Algorithm amalgamated with heuristic rules (R-Immunity). By comparing the efficiency, accuracy, and robustness of function-module clustering among the Genetic Algorithm, the Immune Algorithm, and the R-Immunity Algorithm, we consider the R-Immunity Algorithm more efficient and precise for solving problems related to function-module clustering. Finally, starting from the structural properties of the complex product, the clustering results were optimized to reduce coupling between modules and satisfy the customer's configuration requirements.
15

Taylor, Jill, and Brian D. Cox. "Microgenetic Analysis of Group-Based Solution of Complex Two-Step Mathematical Word Problems by Fourth Graders." Journal of the Learning Sciences 6, no. 2 (1997): 183–226. http://dx.doi.org/10.1207/s15327809jls0602_2.

16

English, Lyn D. "Children's Strategies for Solving Two- and Three-Dimensional Combinatorial Problems." Journal for Research in Mathematics Education 24, no. 3 (1993): 255–73. http://dx.doi.org/10.5951/jresematheduc.24.3.0255.

Abstract:
The study investigated the strategies that 7- to 12-year-old children spontaneously apply to the solution of novel combinatorial problems. The children were individually administered a set of six problems involving the dressing of toy bears in all possible combinations of tops and pants (two-dimensional) or tops, pants, and tennis rackets (three-dimensional). Two sets of solution procedures were identified, each comprising a series of five increasingly complex strategies ranging from trial-and-error approaches to sophisticated odometer procedures. Results suggested that experience with the two-dimensional problems enabled children to adopt and subsequently transform their efficient 2-D odometer strategy (where one item is held constant) into the most sophisticated 3-D odometer strategy, which involved working simultaneously with two constant items. The study highlights the importance of discrete mathematics as a source of problem-solving activities in which children are motivated to create, modify, and extend their own theories.
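The bears task described in this abstract maps directly onto a Cartesian product, and the odometer strategies correspond to the enumeration order. The item names below are invented for illustration:

```python
from itertools import product

# Item names are invented for illustration; the structure matches the
# bears task: dress each bear in every combination of clothing items.
tops = ["red top", "blue top"]
pants = ["shorts", "jeans", "dungarees"]
rackets = ["green racket", "yellow racket"]

# product() enumerates in odometer order: the rightmost factor varies
# fastest, which mirrors the "hold one item constant" 2-D strategy and
# the "hold two items constant" 3-D strategy described in the study.
outfits_2d = list(product(tops, pants))            # 2 * 3 = 6
outfits_3d = list(product(tops, pants, rackets))   # 2 * 3 * 2 = 12

print(len(outfits_2d), len(outfits_3d))  # 6 12
```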
17

Li, Fangfang, Sergey S. Krivenko, and Vladimir V. Lukin. "ANALYSIS OF TWO-STEP APPROACH FOR COMPRESSING TEXTURE IMAGES WITH DESIRED QUALITY." Aerospace technic and technology, no. 1 (January 25, 2020): 50–58. http://dx.doi.org/10.32620/aktt.2020.1.08.

Abstract:
Lossy compression of texture images with a desired quality is considered. Quality is mainly characterized by the peak signal-to-noise ratio (PSNR) but visual quality metrics are briefly studied as well. Potentially, a two-step approach can be used to carry out a compression with providing the desired quality in a quite simple way and with a reduced compression time. However, the two-step approach can run into problems for the PSNR metric under conditions that a required PSNR is quite small (about 30 dB). These problems mainly deal with the accuracy of providing a desired quality at the second step. The paper analyzes the reasons why this happens. For this purpose, a set of nine test images of different complexity is analyzed first. Then, the use of the two-step approach is studied for a wide set of complex-structure texture test images. The corresponding test experiments are carried out for several values of the desired PSNR. The obtained results show that the two-step approach has limitations in the cases when complex texture images have to be compressed with providing relatively low values of the desired PSNR. The main reason is that the rate-distortion dependence is nonlinear while linear approximation is applied at the second step. To get around the aforementioned shortcomings, a simple but efficient solution is proposed based on the performed analysis. It is shown that, due to the proposed modification, the application range of the two-step method of lossy compression has become considerably wider and it covers PSNR values that are commonly required in practice. The experiments are performed for a typical image encoder AGU based on discrete cosine transform (DCT) but it can be expected that the proposed approach is applicable for other DCT-based image compression techniques.
18

Tashboyeva, Saidakhan Rahmonberdiyevna, and Khojimamatova Khilolakhan Umidjon qizi. "PROCEDURE FOR WORKING SIMPLE TEXT PROBLEMS IN ELEMENTARY GRADES. (IN 3RD GRADE)." CENTRAL ASIAN JOURNAL OF EDUCATION AND INNOVATION 2, no. 6 (2023): 82–85. https://doi.org/10.5281/zenodo.8076961.

Abstract:
Textual problems are very important in primary grades. Problems are very helpful for student development and thinking. Any issues that are interesting and attract the student to think will have a more positive effect on his brain activity. For elementary school students, not too complicated, one or two-step problems are recommended. Through them, the scope of thinking expands and serves as a basis for complex issues that will be worked on at the next stages. In order to complete any complex examples and problems, one must first be able to work through simple-looking examples and problems.
19

Wang, Mingan, Shuo Feng, Jianming Li, Zhonghua Li, Yu Xue, and Dongliang Guo. "Cloud Model-Based Artificial Immune Network for Complex Optimization Problem." Computational Intelligence and Neuroscience 2017 (2017): 1–17. http://dx.doi.org/10.1155/2017/5901258.

Abstract:
This paper proposes an artificial immune network based on cloud model (AINet-CM) for complex function optimization problems. Three key immune operators—cloning, mutation, and suppression—are redesigned with the help of the cloud model. To be specific, an increasing half cloud-based cloning operator is used to adjust the dynamic clone multipliers of antibodies, an asymmetrical cloud-based mutation operator is used to control the adaptive evolution of antibodies, and a normal similarity cloud-based suppressor is used to keep the diversity of the antibody population. To quicken the searching convergence, a dynamic searching step length strategy is adopted. For comparative study, a series of numerical simulations are arranged between AINet-CM and the other three artificial immune systems, that is, opt-aiNet, IA-AIS, and AAIS-2S. Furthermore, two industrial applications—finite impulse response (FIR) filter design and proportional-integral-differential (PID) controller tuning—are investigated and the results demonstrate the potential searching capability and practical value of the proposed AINet-CM algorithm.
20

Zhu, Dong, Peng Zhao, Qiang Zhao, Qingliang Li, Jinpeng Zhang, and Lixia Yang. "Two-Step Deep Learning Approach for Estimating Vegetation Backscatter: A Case Study of Soybean Fields." Remote Sensing 17, no. 1 (2024): 41. https://doi.org/10.3390/rs17010041.

Abstract:
Precisely predicting vegetation backscatter involves various challenges, such as complex vegetation structure, soil–vegetation interaction, and data availability. Deep learning (DL) works as a powerful tool to analyze complex data and approximate the nonlinear relationship between variables, thus exhibiting potential applications in microwave scattering problems. However, few DL-based approaches have been developed to reproduce vegetation backscatters owing to the lack of acquiring a large amount of training data. Motivated by a relatively accurate single-scattering radiative transfer model (SS-RTM) and radar measurements, we, for the first time to our knowledge, introduce a transfer learning (TL)-based approach to estimate the radar backscatter of vegetation canopy in the case of soybean fields. The proposed approach consists of two steps. In the first step, a simulated dataset was generated by the SS-RTM. Then, we pre-trained two baseline networks, namely, a deep neural network (DNN) and long short-term memory network (LSTM), using the simulated dataset. In the second step, limited measured data were utilized to fine-tune the previously pre-trained networks on the basis of TL strategy. Extensive experiments, conducted on both simulated data and in situ measurements, revealed that the proposed two-step TL-based approach yields a significantly better and more robust performance than SS-RTM and other DL schemes, indicating the feasibility of such an approach in estimating vegetation backscatters. All these outcomes provide a new path for addressing complex microwave scattering problems.
21

Hutchinson, Nancy L. "Effects of Cognitive Strategy Instruction on Algebra Problem Solving of Adolescents with Learning Disabilities." Learning Disability Quarterly 16, no. 1 (1993): 34–63. http://dx.doi.org/10.2307/1511158.

Abstract:
This study investigated the effects of a two-phase cognitive strategy on algebra problem solving of adolescents with learning disabilities. The strategy was designed to enable students to represent and solve three types of word problems. The study used a modified multiple baseline with 11 replications as well as a two-group design. Conditions of the multiple-baseline design included baseline, instruction to mastery, transfer, and maintenance. Visual analysis of the single-subject data showed the strategy to be an effective intervention for this sample of students with deficits in algebra problem solving, but with criterial knowledge of basic operations and one-step problems. Statistical analyses of the two-group data showed that the instructed students had significantly higher posttest scores than the comparison group. Overall, the instructed students demonstrated improved performance on algebra word problems. Maintenance and transfer of the strategy were evident. This study has implications for teaching complex problem solving to adolescents with learning disabilities in secondary schools.
22

Isaiev, A. B., V. I. Miroshnychenko, O. O. Koyfman, and O. I. Simkin. "Application of two-step input to reduce overshoot of the transient response at automated control systems." Reporter of the Priazovskyi State Technical University. Section: Technical sciences, no. 48 (June 27, 2024): 92–103. http://dx.doi.org/10.31498/2225-6733.48.2024.310687.

Abstract:
Problems of improving the quality of technological control are very important and are considered in works on the theory of automatic control and related fields. Various approaches to solving these problems are known, especially for decreasing the overshoot of the step response. Most existing methods require adjusting controller parameters, developing a mathematical model of the controlled object, or applying additional filters, all of which are difficult to implement under industrial conditions. The authors have proposed another approach, contrary to the known ones: to apply two successive inputs with lower amplitudes and a time delay instead of a single step input. The response of the controlled system to the complex two-step input was considered and the optimal time delay was defined. The investigation was conducted by modelling a typical classic linear automatic control system consisting of a first-order static control object with time delay and proportional-integral-derivative controllers. Modelling results were obtained for transients under various one- and two-step inputs, including their ramp variation. It was shown that two-step inputs with ramp application decrease the maximum displacement of the transient in a typical control system by more than 3 times compared with the one-step approach. The optimal time interval between the steps, which leads to the maximum effectiveness of the technique, was determined. A procedure to define the details of the approach's application was developed.
23

Wang, Cheng-An, Hamou Sadat, Vital Ledez, and Denis Lemonnier. "Meshless method for solving radiative transfer problems in complex two-dimensional and three-dimensional geometries." International Journal of Thermal Sciences 49, no. 12 (2010): 2282–88. http://dx.doi.org/10.1016/j.ijthermalsci.2010.06.024.

24

K., Olumurewa O. "A Study on Convergence and Region of Absolute Stability of Implicit Two – Step Multiderivative Method for Solving Stiff Initial Value Problems of First Order Ordinary Differential Equations." International Journal of Research and Innovation in Applied Science IX, no. III (2024): 134–47. http://dx.doi.org/10.51584/ijrias.2024.90315.

Abstract:
This study investigated and related the convergence and region of absolute stability of an implicit two-step multiderivative method used in solving a sampled stiff initial value problem of a first-order ordinary differential equation. It adopted the general implicit multiderivative linear multistep method at step-number (k) = 2 with derivative order (l) varied from 1 to 6 to develop six different variants of the method. The boundary locus method was adopted to determine the intervals of absolute stability, which were plotted on the complex plane to show the regions of absolute stability. The variant methods were used to solve a sampled stiff initial value problem of a first-order ordinary differential equation. The resulting numerical solutions were compared with the exact solution to determine the accuracy and convergence of the methods. The study showed that the two-step first-derivative, second-derivative, third-derivative, fifth-derivative, and sixth-derivative methods yielded more accurate and convergent results, with wider regions of absolute stability, than the two-step fourth-derivative method.
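For readers unfamiliar with the family being varied, a general implicit two-step (k = 2) multiderivative linear multistep method can be written as follows; the notation is assumed here, not taken from the paper:

```latex
\sum_{j=0}^{2} \alpha_j \, y_{n+j}
  \;=\; \sum_{i=1}^{l} h^{i} \sum_{j=0}^{2} \beta_{ij} \, y^{(i)}_{n+j},
\qquad l = 1, \dots, 6,
```

where $h$ is the step size and $y^{(i)}$ denotes the $i$-th derivative of the solution supplied by the differential equation. Applying the scheme to the test equation $y' = \lambda y$, substituting $y_{n+j} = r^{\,n+j}$, setting $r = e^{\mathrm{i}\theta}$, and solving for $h\lambda$ traces the boundary locus, which is how the regions of absolute stability are plotted in the complex plane.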
25

Chaudhari, Parag, Jose Magalhaes, and Aparna Salunkhe. "Two-step computational aeroacoustics approach for underhood cooling fan application." INTER-NOISE and NOISE-CON Congress and Conference Proceedings 263, no. 3 (2021): 3615–24. http://dx.doi.org/10.3397/in-2021-2467.

Abstract:
Aeroacoustic noise is one of the important characteristics of fan design. Computational Aeroacoustics (CAA) can provide better design options without relying on physical prototypes and reduce development time and cost. There are two ways of performing CAA analysis: the one-step and the two-step approach. In the one-step approach, air flow and acoustic analysis are carried out in a single software package. In the two-step approach, air flow and acoustic analysis are carried out in separate software packages. The two-step CAA approach can expedite the calculation process and can be applied to larger and more complex domain problems. For the work presented in this paper, a mockup of an underhood cooling fan was designed. The sound pressure levels were measured for different installation configurations. The sound pressure level for one of the configurations was calculated with the two-step approach and compared with test data. The compressible fluid flow field was first computed in a commercially available computational fluid dynamics software package. This flow field was imported into a separate software package where fan noise sources were computed and further used to predict the sound pressure levels at various microphone locations. The results show an excellent correlation between test and simulation for both tonal and broadband components of the fan noise.
APA, Harvard, Vancouver, ISO, and other styles
26

Dong, Guirong, Chengyang Liu, Dianzi Liu, and Xiaoan Mao. "Adaptive Multi-Level Search for Global Optimization: An Integrated Swarm Intelligence-Metamodelling Technique." Applied Sciences 11, no. 5 (2021): 2277. http://dx.doi.org/10.3390/app11052277.

Full text
Abstract:
Over the last decade, metaheuristic algorithms have emerged as a powerful paradigm for global optimization of multimodal functions formulated by nonlinear problems arising from various engineering subjects. However, numerical analyses of many complex engineering design problems may be performed using the finite element method (FEM) or computational fluid dynamics (CFD), by which function evaluations of population-based algorithms are repetitively computed to seek a global optimum. It is noted that these simulations become computationally prohibitive for design optimization of complex structures. To efficiently and effectively address this class of problems, an adaptively integrated swarm intelligence-metamodelling (ASIM) technique enabling multi-level search and model management for the optimal solution is proposed in this paper. The developed technique comprises two steps: in the first step, a global-level exploration for a near-optimal solution is performed by an adaptive swarm-intelligence algorithm, and in the second step, a local-level exploitation for the fine optimal solution is carried out on adaptive metamodels, which are constructed by the multipoint approximation method (MAM). To demonstrate the superiority of the proposed technique over other methods, such as conventional MAM, particle swarm optimization, hybrid cuckoo search, and the water cycle algorithm, in terms of the computational expense associated with solving complex optimization problems, one benchmark mathematical example and two real-world complex design problems are examined. In particular, the key factors responsible for the balance between exploration and exploitation are discussed as well.
APA, Harvard, Vancouver, ISO, and other styles
27

Martínez-Ojeda, Emigdio, and Adriana Ortiz-Rodríguez. "On the Complex and Real Hessian Polynomials." International Journal of Mathematics and Mathematical Sciences 2010 (2010): 1–22. http://dx.doi.org/10.1155/2010/962719.

Full text
Abstract:
We study some realization problems related to the Hessian polynomials. In particular, we solve the Hessian curve realization problem for degrees zero, one, two, and three and the Hessian polynomial realization problem for degrees zero, one, and two.
APA, Harvard, Vancouver, ISO, and other styles
28

Guo, Zhitao, Xiaojie Zhao, Li Yao, and Zhiying Long. "Improved brain community structure detection by two-step weighted modularity maximization." PLOS ONE 18, no. 12 (2023): e0295428. http://dx.doi.org/10.1371/journal.pone.0295428.

Full text
Abstract:
The human brain can be regarded as a complex network with interacting connections between brain regions. Complex brain network analyses have been widely applied to functional magnetic resonance imaging (fMRI) data and have revealed the existence of community structures in brain networks. The identification of communities may provide insight into understanding the topological functions of brain networks. Among various community detection methods, the modularity maximization (MM) method has the advantages of model conciseness, fast convergence, and strong adaptability to large-scale networks, and it has been extended from single-layer networks to multilayer networks to investigate community structure changes in brain networks. However, MM suffers from instability and fails to detect hierarchical community structure in networks, which largely limits its application to community detection in brain networks. In this study, we proposed the weighted modularity maximization (WMM) method, which uses a weight matrix to weight the adjacency matrix and improve the performance of MM. Moreover, we further proposed the two-step WMM method to detect the hierarchical community structures of networks by utilizing node attributes. The results on synthetic networks without node attributes demonstrated that WMM showed better partition accuracy than both MM and robust MM, and better stability than MM. The two-step WMM method showed better community-partitioning accuracy than WMM on synthetic networks with node attributes. Moreover, the results on resting-state fMRI (rs-fMRI) data showed that two-step WMM had the advantage over WMM of detecting hierarchical communities and was less sensitive to the density of the rs-fMRI networks than WMM.
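The modularity score that MM-style methods maximize is simple to compute. A minimal sketch (standard Newman modularity on a toy graph, not the paper's weighted variant) makes the idea concrete:

```python
import numpy as np

def modularity(A, labels):
    """Newman modularity: Q = (1/2m) * sum_ij (A_ij - k_i*k_j/2m) * delta(c_i, c_j)."""
    k = A.sum(axis=1)                         # node degrees
    two_m = k.sum()                           # 2m = total degree
    delta = labels[:, None] == labels[None, :]
    return ((A - np.outer(k, k) / two_m) * delta).sum() / two_m

# Two triangles joined by a single bridge edge: nodes 0-2 and 3-5
A = np.zeros((6, 6))
for i, j in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]:
    A[i, j] = A[j, i] = 1.0

good = np.array([0, 0, 0, 1, 1, 1])   # the natural two-community split
poor = np.array([0, 1, 0, 1, 0, 1])   # an arbitrary split
print(modularity(A, good), modularity(A, poor))  # the natural split scores higher
```

A community detection algorithm searches over `labels` for the partition with the highest Q; the paper's WMM additionally weights `A` before this maximization.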
APA, Harvard, Vancouver, ISO, and other styles
29

Colle, A. R., D. Redekop, and C. L. Tan. "Elastostatic analysis of problems involving complex toroidal geometries." Journal of Strain Analysis for Engineering Design 22, no. 4 (1987): 195–202. http://dx.doi.org/10.1243/03093247v224195.

Full text
Abstract:
The three-dimensional boundary integral equation (BIE) method is used to solve two elastostatic problems involving complex toroidal geometries. The first problem concerns a thick-walled 180 degree pipe bend connected to tangent pipes, subjected to in-plane bending and internal pressure. Results are presented for stresses and are compared with results from previous numerical and experimental work. The second problem concerns a thick-walled pressurized torus, with cross bores situated in the toroidal plane. Separate solutions are given for cross bores located at the intrados and extrados of the torus. Results are presented for the stress concentration factor, and are compared with numerical work for thick-walled cylindrical pipes.
APA, Harvard, Vancouver, ISO, and other styles
30

Elfelly, Nesrine, Jeans-Yves Dieulot, and Pierre Borne. "A Neural Approach of Multimodel Representation of Complex Processes." International Journal of Computers Communications & Control 3, no. 2 (2008): 149. http://dx.doi.org/10.15837/ijccc.2008.2.2383.

Full text
Abstract:
The multimodel approach was recently developed to deal with the issues of complex process modeling and control. Despite its success in different fields, it still faces some design problems, in particular the determination of the models and of an adequate method for computing validities. In this paper, we propose a neural approach to derive different models describing the process in different operating conditions. The implementation of this approach requires two main steps. The first step consists in exciting the system with a rich (e.g., pseudo-random) signal and collecting measurements. These measurements are classified by using an adequate Kohonen self-organizing neural network. The second step is a parametric identification of the base models, using the classification results for order and parameter estimation. The suggested approach is implemented and tested with two processes and compared to the classical modeling approach. The obtained results turn out to be satisfactory and show good precision. They also allow some interpretations to be drawn about the adequate validity-calculation method based on the classification results.
APA, Harvard, Vancouver, ISO, and other styles
31

Zhang, Linyang, Jianxiang Guo, Xinran Yu, et al. "Optimization of Integrated Energy Systems Based on Two-Step Decoupling Method." Electronics 13, no. 11 (2024): 2045. http://dx.doi.org/10.3390/electronics13112045.

Full text
Abstract:
An integrated energy system (IES) plays a key role in transforming energy consumption patterns and solving serious environmental and economic problems. However, the abundant optional schemes and the complex coupling relationship among each piece of equipment make the optimization of an IES very complicated, and most of the current literature focuses on optimization of a specific system. In this work, a simulation-based two-step decoupling method is proposed to simplify the optimization of an IES. The generalized IES is split into four subsystems, and a two-layer optimization method is applied for optimization of the capacity of each piece of equipment. The proposed method enables fast comparison among abundant optional configurations of an IES, and it is applied to a hospital in Beijing, China. The optimized coupling system includes the gas-fired trigeneration system, the GSHP, and the electric chiller. Compared with the traditional distributed systems, the emission reduction rate of CO2 and NOX for the coupling system reaches 153.8% and 314.5%, respectively. Moreover, the primary energy consumption of the coupling system is 82.67% less than that of the traditional distributed energy system, while the annual cost is almost at the same level.
APA, Harvard, Vancouver, ISO, and other styles
32

Kumar, Sunil, Jai Bhagwan, and Lorentz Jäntschi. "Optimal Derivative-Free One-Point Algorithms for Computing Multiple Zeros of Nonlinear Equations." Symmetry 14, no. 9 (2022): 1881. http://dx.doi.org/10.3390/sym14091881.

Full text
Abstract:
In this paper, we describe iterative derivative-free algorithms for multiple roots of a nonlinear equation. Many researchers have evaluated the multiple roots of a nonlinear equation using the first- or second-order derivative of functions. However, calculating the function’s derivative at each iteration is laborious. So, taking this as motivation, we develop second-order algorithms without using the derivatives. The convergence analysis is first carried out for particular values of multiple roots before coming to a general conclusion. According to the Kung–Traub hypothesis, the new algorithms will have optimal convergence since only two functions need to be evaluated at every step. The order of convergence is investigated using Taylor’s series expansion. Moreover, the applicability and comparisons with existing methods are demonstrated on three real-life problems (e.g., Kepler’s, Van der Waals, and continuous-stirred tank reactor problems) and three standard academic problems that contain the root clustering and complex root problems. Finally, we see from the computational outcomes that our approaches use the least amount of processing time compared with the ones already in use. This effectively displays the theoretical conclusions of this study.
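The abstract does not spell out the new one-point schemes, but the core derivative-free idea, replacing f'(x) with a divided difference so that only function values are needed, can be sketched with the classical Traub-Steffensen iteration. Like the methods described, it uses exactly two function evaluations per step; it is shown here for a simple root, not the multiple-root case the paper targets.

```python
def steffensen(f, x0, tol=1e-12, max_iter=50):
    """Derivative-free Traub-Steffensen iteration: f'(x) is approximated by
    the divided difference f[x, w] with w = x + f(x), so only two function
    evaluations are needed per step and no analytic derivative."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        denom = f(x + fx) - fx      # equals f[x, w] * f(x) with w = x + f(x)
        x = x - fx * fx / denom     # one Steffensen step
    return x

root = steffensen(lambda t: t * t - 2.0, 1.5)
print(root)  # converges to sqrt(2)
```

Like Newton's method, this converges quadratically near a simple root, which is why such schemes are "optimal" in the Kung-Traub sense for two evaluations per step.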
APA, Harvard, Vancouver, ISO, and other styles
33

Akgül, Ali, Ishfaq Ahmad Mallah, and Subhash Alha. "New Aspects of Bloch Model Associated with Fractal Fractional Derivatives." Nonlinear Engineering 10, no. 1 (2021): 323–42. http://dx.doi.org/10.1515/nleng-2021-0026.

Full text
Abstract:
To model complex real-world problems, the novel concept of non-local fractal-fractional differential and integral operators with two orders (fractional order and fractal dimension) has been used as a mathematical tool, in contrast to classical derivatives and integrals. In this paper, we consider Bloch equations with fractal-fractional derivatives. We find the general solutions for the components of magnetization ℳ = (Mu , Mv , Mw ) by using discretization and Lagrange's two-step polynomial interpolation. We analyze the model with three different kernels, namely the power function, the exponential decay function, and the Mittag-Leffler type function. We provide the graphical behaviour of the magnetization components ℳ = (Mu , Mv , Mw ) for different orders. The examination of Bloch equations with fractal-fractional derivatives shows new aspects of the Bloch equations.
APA, Harvard, Vancouver, ISO, and other styles
34

Li, Yan, and Zhan Li. "Research on Recurrence Plot Feature Quantization Method Based on Image Texture Analysis." Journal of Environmental and Public Health 2022 (August 8, 2022): 1–12. http://dx.doi.org/10.1155/2022/2495024.

Full text
Abstract:
The nonlinear time-series analysis method based on recurrence plot theory has received great attention from researchers and has been successfully used in multiple fields. However, traditional recurrence plots that use Heaviside step functions to determine the recursive behavior of a point in phase space have two problems: (1) Heaviside step functions produce a rigid boundary, resulting in information loss; and (2) the selection of the critical distance ε is crucial; if the selection is inappropriate, it results in low-dimensional dynamics errors, and as of now there exists no unified method for selecting this parameter. With regard to the problems described above, the novelty of this article lies in the following: (1) when determining state-phase point recursiveness, a Gaussian function is used to replace the Heaviside function, thereby solving the rigidity and binary-value problems of the recursive analysis results caused by the Heaviside step function; and (2) texture analysis is performed on the recurrence plot, new ways of studying complex-system dynamics features are proposed, and a system of dynamics-like measures for complex systems is built.
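The proposed substitution is easy to state concretely: replace the binary Heaviside threshold Θ(ε − d_ij) with a Gaussian kernel, so recurrence becomes a smooth value in (0, 1] instead of a hard 0/1. A minimal sketch, applied to a scalar series directly and skipping the phase-space embedding step a full recurrence analysis would include:

```python
import numpy as np

def recurrence_matrices(x, eps, sigma):
    """Pairwise recurrence of a scalar time series: the classical binary
    recurrence plot (Heaviside threshold) versus the soft Gaussian variant
    that avoids the rigid 0/1 boundary."""
    d = np.abs(x[:, None] - x[None, :])           # pairwise distances
    heaviside = (d <= eps).astype(float)          # classical: Theta(eps - d)
    gaussian = np.exp(-d**2 / (2.0 * sigma**2))   # soft: values in (0, 1]
    return heaviside, gaussian

x = np.sin(np.linspace(0.0, 8.0 * np.pi, 200))
H, G = recurrence_matrices(x, eps=0.1, sigma=0.1)
```

`H` loses all distance information beyond the threshold, while `G` retains a graded similarity, which is what makes the subsequent texture analysis richer.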
APA, Harvard, Vancouver, ISO, and other styles
35

Sidoryakina, V. V., and A. I. Sukhinov. "Construction of Solutions and Study of Their Closeness in L2 for Two Boundary Value Problems for a Model of Multicomponent Suspension Transport in Coastal Systems." Журнал вычислительной математики и математической физики 63, no. 10 (2023): 1721–32. http://dx.doi.org/10.31857/s0044466923100149.

Full text
Abstract:
Three-dimensional models of suspension transport in coastal marine systems are considered. The associated processes have a number of characteristic features, such as high concentrations of suspensions (e.g., when soil is dumped on the bottom), much larger areas of suspension spread than the reservoir depth, complex granulometric (multifractional) content of suspensions, and mutual transitions between fractions. Suspension transport can be described using initial-boundary value diffusion–convection–reaction problems. According to the authors' idea, on a time grid constructed for the original continuous initial-boundary value problem, the right-hand sides are transformed with a "delay" so that the right-hand side concentrations of the components other than the underlying one (for which the initial-boundary value problem of diffusion–convection is formulated) are determined at the preceding time level. This approach simplifies the subsequent numerical implementation of each of the diffusion–convection equations. Additionally, if the number of fractions is three or more, the computation of each of the concentrations at every time step can be organized independently (in parallel). Previously, sufficient conditions for the existence and uniqueness of a solution to the initial-boundary value problem of suspension transport were determined, and a conservative stable difference scheme was constructed, studied, and numerically implemented for test and real-world problems. In this paper, the convergence of the solution of the delay-transformed problem to the solution of the original suspension transport problem is analyzed. It is proved that the difference between these solutions tends to zero at an O(τ) rate in the norm of the Hilbert space L2 as the time step τ approaches zero.
APA, Harvard, Vancouver, ISO, and other styles
36

Martynenko, S. I., and A. Yu Varaksin. "Boundary Value Problems Numerical Solution on Multiblock Grids." Herald of the Bauman Moscow State Technical University. Series Natural Sciences, no. 1 (94) (February 2021): 18–33. http://dx.doi.org/10.18698/1812-3368-2021-1-18-33.

Full text
Abstract:
Results of a theoretical analysis of the convergence of geometric multigrid algorithms are presented for solving linear boundary value problems on a two-block grid. In this case, the initial domain can be represented as a union of intersecting subdomains, in each of which a structured grid can be constructed, generating a hierarchy of coarse grids. The multigrid iteration matrix is obtained using the damped nonsymmetric iterative method as a smoother. The multigrid algorithm contains a new problem-dependent component: correction interpolation between grid blocks. The smoothing property of the damped nonsymmetric iterative method and the convergence of the robust multigrid technique are proved. An estimate of the multigrid iteration matrix norm is obtained (a sufficient convergence condition). It is shown that the number of multigrid iterations depends neither on the step nor on the number of grid blocks, provided that the interpolation of the correction between grid blocks is sufficiently accurate. Results of computational experiments on solving the three-dimensional Dirichlet boundary value problem for the Poisson equation are presented, illustrating the theoretical analysis. The results obtained can easily be generalized to multiblock grids. The work is of interest to developers of highly efficient algorithms for solving (initial-) boundary value problems describing physical and chemical processes in complex geometry domains.
APA, Harvard, Vancouver, ISO, and other styles
37

Li, Simon, and Li Chen. "Towards Rapid Redesign: Pattern-based Redesign Planning for Large-Scale and Complex Redesign Problems." Journal of Mechanical Design 129, no. 2 (2005): 227–33. http://dx.doi.org/10.1115/1.2218885.

Full text
Abstract:
We have developed a decomposition-based rapid redesign methodology for large and complex computational redesign problems. While the overall methodology consists of two general steps, diagnosis and repair, in this paper we focus on the repair step, in which decomposition patterns are utilized for redesign planning. Resulting from design diagnosis, a typical decomposition-pattern solution to a given redesign problem indicates the portions of the design model that require recomputation, as well as the interaction part within the model accountable for design change propagation. Following this, we suggest repair actions, with an approach derived from an input pattern solution, to generate a redesign road map that allows a shortcut to be taken in the redesign solution process. To do so, a two-stage redesign planning approach, from recomputation strategy selection to redesign road map generation, is proposed. An example problem concerning the redesign of a relief valve is used for illustration and validation.
APA, Harvard, Vancouver, ISO, and other styles
38

Zhang, Wei, You Hong Tang, Cheng Bi Zhao, and Cheng Zhang. "A Two-Phase Flow Model with VOF for Free Surface Flow Problems." Applied Mechanics and Materials 232 (November 2012): 279–83. http://dx.doi.org/10.4028/www.scientific.net/amm.232.279.

Full text
Abstract:
A numerical model based on the two-phase flow model for incompressible viscous fluid with a complex free surface has been developed in this study. The two-step projection method is employed to solve the Navier–Stokes equations in the numerical solutions, and the finite difference method on a staggered grid is used throughout the computation. The second-order accurate volume of fluid (VOF) method is used to track the distorted and broken free surfaces. The two-phase model is first validated by simulating a dam break over a dry bed, in which the numerical results and experimental data agree well. Then 2-D fluid sloshing in a horizontally excited rectangular tank at different excitation frequencies is simulated using this two-phase model. The results of this study show that the two-phase flow model with the VOF method is a potential tool for the simulation of nonlinear fluid sloshing. These studies demonstrate the capability of the two-phase model to simulate free surface flow problems while considering air movement effects.
APA, Harvard, Vancouver, ISO, and other styles
39

Cheng, Heng, Miaojuan Peng, and Yumin Cheng. "A Fast Complex Variable Element-Free Galerkin Method for Three-Dimensional Wave Propagation Problems." International Journal of Applied Mechanics 09, no. 06 (2017): 1750090. http://dx.doi.org/10.1142/s1758825117500909.

Full text
Abstract:
In this paper, combining the dimension splitting method with the improved complex variable element-free Galerkin (ICVEFG) method, we present a fast ICVEFG method for three-dimensional wave propagation problems. Using the dimension splitting method, the equations of three-dimensional wave propagation problems are translated into a series of two-dimensional ones in another one-dimensional direction. The new Galerkin weak form of the dimension splitting method for three-dimensional wave propagation problems is obtained. The improved complex variable moving least-square (ICVMLS) approximation is used to obtain the shape functions, and the penalty method is used to apply the essential boundary conditions, finite difference method is used in the one-dimensional direction, then the formulae of the ICVEFG method for three-dimensional wave propagation problems are obtained. The convergence and the corresponding parameters in the ICVEFG method are discussed. Some numerical examples are given to show that the new method has higher computational precision, and can improve the computational efficiency of the conventional meshless methods for three-dimensional problems greatly.
APA, Harvard, Vancouver, ISO, and other styles
40

Retolaza, José Luis. "Causality problem in Economic Science." Cuadernos de Gestión 7, no. 2 (2007): 39–53. http://dx.doi.org/10.5295/cdg.19146jr.

Full text
Abstract:
The main point of the paper is whether economics can be considered a science in the strictest sense of the term. In the first step, we present what we understand by science and then discuss some of the fallacies that have brought a certain scepticism about the scientific character of research in economics, namely: (1) the differences between hard and soft sciences (physical and social); (2) the differences between the positivist and phenomenological paradigms; and (3) the differences between physical causality and historical causality. In the second step, we discuss two fundamental problems: (1) the confusion between ontology and gnoseology, and (2) the erroneous concept of causality that is commonly used. In the last step of the paper, we review recent models of "causal explanation" and suggest developing probabilistic causality together with more elaborate models of causal explanation, as a way to combine scientific rigour with the ability to tackle complex economic realities.
APA, Harvard, Vancouver, ISO, and other styles
41

Tang, Jun, Jiayi Sun, Cong Lu, and Songyang Lao. "Optimized artificial potential field algorithm to multi-unmanned aerial vehicle coordinated trajectory planning and collision avoidance in three-dimensional environment." Proceedings of the Institution of Mechanical Engineers, Part G: Journal of Aerospace Engineering 233, no. 16 (2019): 6032–43. http://dx.doi.org/10.1177/0954410019844434.

Full text
Abstract:
Multi-unmanned aerial vehicle trajectory planning is one of the most complex global optimization problems in multi-unmanned aerial vehicle coordinated control. Results of recent research on trajectory planning reveal persisting theoretical and practical problems. To mitigate them, this paper proposes a novel optimized artificial potential field algorithm for multi-unmanned aerial vehicle operations in a three-dimensional dynamic space. For modeling purposes, this study treats the unmanned aerial vehicles and obstacles as spheres and cylinders carrying negative charge, respectively, while the targets are treated as spheres carrying positive charge. However, the conventional artificial potential field algorithm is restricted to single unmanned aerial vehicle trajectory planning in two-dimensional space and usually fails to ensure collision avoidance. To deal with this challenge, we propose a method with a distance factor and a jump strategy to resolve common problems such as unreachable targets, and to ensure that the unmanned aerial vehicles do not collide with the obstacles. The method takes companion unmanned aerial vehicles as dynamic obstacles to realize collaborative trajectory planning. Besides, the method solves jitter problems using a dynamic step adjustment method and a climb strategy. It is validated in quantitative simulation tests, and reasonable results are generated for a three-dimensional simulated urban environment.
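The underlying artificial potential field mechanics can be sketched briefly: a linear attractive force toward the goal plus a repulsive force inside each obstacle's influence radius. This is the standard Khatib-style formulation, not the paper's optimized variant with its distance factor and jump/climb strategies, and the gains and radii below are illustrative values only.

```python
import numpy as np

def apf_step(pos, goal, obstacles, k_att=1.0, k_rep=100.0, rho0=5.0, step=0.05):
    """One step of a basic artificial potential field planner: linear
    attraction to the goal plus Khatib-style repulsion from every obstacle
    inside its influence radius rho0; the move is a fixed-length step
    along the normalized total force."""
    force = k_att * (goal - pos)                  # attractive component
    for obs in obstacles:
        diff = pos - obs
        rho = np.linalg.norm(diff)
        if 0.0 < rho < rho0:                      # inside influence radius
            force += k_rep * (1.0 / rho - 1.0 / rho0) / rho**3 * diff
    return pos + step * force / (np.linalg.norm(force) + 1e-12)

pos = np.array([0.0, 0.0, 0.0])
goal = np.array([10.0, 10.0, 5.0])
obstacles = [np.array([5.0, 4.0, 2.5])]           # slightly off the direct line
for _ in range(800):
    pos = apf_step(pos, goal, obstacles)
print(pos)
```

Placing the obstacle exactly on the start-goal line would illustrate the local-minimum trap that motivates the paper's jump strategy: attraction and repulsion become anti-parallel and the vehicle stalls.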
APA, Harvard, Vancouver, ISO, and other styles
42

Hamouda, T. "Complex three- dimensional-shaped knitting preforms for composite application." Journal of Industrial Textiles 46, no. 7 (2016): 1536–51. http://dx.doi.org/10.1177/1528083715624260.

Full text
Abstract:
For decades, street lighting and electric poles have been made of metal, which is vulnerable to corrosion due to harsh weather and chemicals. To overcome such problems, galvanized iron is used, although this adds labour and increases the manufacturing cost. Therefore, a fiber reinforced polymer lighting pole is proposed. Fiber reinforced polymer materials possess many advantages, such as corrosion resistance and high specific strength and stiffness. Two-dimensional and three-dimensional woven fabric preforms are used to produce composite structures. However, complex shapes cannot be manufactured as a one-piece preform: woven fabrics, whether two-dimensional or three-dimensional, need to be cut into patterns to produce the final complex shapes. These processes add cost and time to the final composite products. In this research, an innovative technique to produce a three-dimensional complex-shape knitted preform using a regular flat-knitting machine is presented. Such a shaped three-dimensional preform can be produced in one piece, without any joining or further sewing processes. The produced knitted preform can be used for various reinforcement applications, such as light and communication poles, scaffold façades, traffic signs, oars, and wind mill blades.
APA, Harvard, Vancouver, ISO, and other styles
43

Xie, Zhihua. "A two-phase flow model for three-dimensional breaking waves over complex topography." Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences 471, no. 2180 (2015): 20150101. http://dx.doi.org/10.1098/rspa.2015.0101.

Full text
Abstract:
A two-phase flow model has been developed to study three-dimensional breaking waves over complex topography, including the wave pre-breaking, overturning and post-breaking processes. The large-eddy simulation approach has been adopted in this study, where the model is based on the filtered Navier–Stokes equations with the Smagorinsky sub-grid model being used for the unresolved scales of turbulence. The governing equations have been discretized using the finite volume method, with the PISO algorithm being employed for the pressure–velocity coupling. The air–water interface has been captured using a volume of fluid method and the partial cell treatment has been implemented to deal with complex topography in the Cartesian grid. The model is first validated against available analytical solutions and experimental data for solitary wave propagation over constant water depth and three-dimensional breaking waves over a plane slope, respectively. Furthermore, the model is used to study three-dimensional overturning waves over three different bed topographies, with three-dimensional wave profiles and surface velocities being presented and discussed. The overturning jet, air entrainment and splash-up during wave breaking have been captured by the two-phase flow model, which demonstrates the capability of the model to simulate free surface flow and wave breaking problems over complex topography.
APA, Harvard, Vancouver, ISO, and other styles
44

de Jongh, Ad, Erik ten Broeke, and Steven Meijer. "Two Method Approach: A Case Conceptualization Model in the Context of EMDR." Journal of EMDR Practice and Research 4, no. 1 (2010): 12–21. http://dx.doi.org/10.1891/1933-3196.4.1.12.

Full text
Abstract:
This article outlines a comprehensive model that helps to identify crucial target memories for EMDR treatment. The “Two Method Approach” can be used for conceptualization and treatment implementation for a broad spectrum of symptoms and problems, other than those related to PTSD per se. The model consists of two types of case conceptualizations. The First Method deals with symptoms whereby memories of the etiological and/or aggravating events can be meaningfully specified on a time line. It is primarily aimed at the conceptualization and treatment of DSM-IV-TR Axis I disorders. The Second Method is used to identify memories that underlie patients’ so-called dysfunctional core beliefs. This method is primarily used to treat more severe forms of pathology, such as severe social phobia, complex PTSD, and/or personality disorders. The two methods of case conceptualization are explained step by step in detail and are illustrated by case examples.
APA, Harvard, Vancouver, ISO, and other styles
45

Khudhur, Hisham M., and Kais I. Ibraheem. "Metaheuristic optimization algorithm based on the two-step Adams-Bashforth method in training multi-layer perceptrons." Eastern-European Journal of Enterprise Technologies 2, no. 4 (116) (2022): 6–13. http://dx.doi.org/10.15587/1729-4061.2022.254023.

Full text
Abstract:
The proposed metaheuristic optimization algorithm based on the two-step Adams-Bashforth scheme (MOABT) is used in this paper, for the first time, for Multilayer Perceptron (MLP) training. In computer science and mathematical optimization, a metaheuristic is a high-level procedure or heuristic designed to find, devise, or select a method that provides a high-quality solution to an optimization problem, especially when the information is insufficient or incomplete, or the computational capacity is limited. Many metaheuristic methods involve stochastic operations, meaning that the resulting solution depends on the random variables generated during the search. Because a metaheuristic searches a broad range of feasible solutions at once, it can often find good solutions with less computational effort than iterative methods, which makes it a useful approach to solving optimization problems. Several characteristics distinguish metaheuristic search strategies; the goal is to explore the search space efficiently to find the best, or a near-optimal, solution. The techniques that make up metaheuristic algorithms range from simple search procedures to complex learning processes. Eight benchmark data sets are used to evaluate the proposed approach: five classification data sets and three function-approximation data sets. The numerical results were compared with those of the well-known evolutionary trainer Gray Wolf Optimizer (GWO). The statistical study revealed that the MOABT algorithm can outperform other algorithms in terms of avoiding local optima and speed of convergence to the global optimum. The results also show that the proposed problems can be classified and approximated with high accuracy.
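The two-step Adams-Bashforth scheme underlying MOABT is the classical explicit formula y_{k+1} = y_k + h(3f_k − f_{k−1})/2. A minimal sketch of it as an ODE integrator follows; the paper's adaptation of the scheme into a metaheuristic update rule is not reproduced here.

```python
import math

def ab2(f, t0, y0, h, n):
    """Two-step Adams-Bashforth: y_{k+1} = y_k + h*(3*f_k - f_{k-1})/2.
    The scheme needs two starting values; the second is supplied here by
    one forward-Euler step."""
    t, y = t0, y0
    f_prev = f(t, y)
    y = y + h * f_prev                 # bootstrap with one Euler step
    t = t + h
    for _ in range(n - 1):
        f_curr = f(t, y)
        y = y + h * (3.0 * f_curr - f_prev) / 2.0
        f_prev = f_curr
        t = t + h
    return y

# Test problem: y' = -y, y(0) = 1, so y(1) = e^{-1}
approx = ab2(lambda t, y: -y, 0.0, 1.0, h=0.001, n=1000)
print(approx, math.exp(-1.0))
```

Being explicit and second-order accurate while reusing the previous step's evaluation is what makes the scheme an attractive template for a cheap, momentum-like update.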
APA, Harvard, Vancouver, ISO, and other styles
47

Osazuwa-Ojo, Victory Osaruese, and Victor O. Elaigwu. "A TWO-STEP AUTHENTICATION FACIAL RECOGNITION SYSTEM FOR AUTOMATED ATTENDANCE TRACKING." FUDMA JOURNAL OF SCIENCES 8, no. 6 (2024): 7–16. https://doi.org/10.33003/fjs-2024-0806-2773.

Full text
Abstract:
This study addresses the need for efficient, automated attendance systems through the design of a facial recognition application. Manual attendance systems are slow and error-prone, and the retrieval of old records can be tedious. Universally accessible technologies such as facial recognition software can readily solve these problems; however, the large amount of computational resources required for implementation has limited their wide adoption. This study presents a two-step approach to these challenges. By using a faster, less powerful model as the first step, the workload of facial recognition can be distributed to save time and computational cost. A more powerful machine learning model is applied as the second step, deployed for tasks that are too complex for the first model to handle. The two-step authentication process also reduces the occurrence of false negatives. Face_recognition, a Python library, is used for the detection and encoding of face images read from an IP webcam with Python's OpenCV library. A Flask application demonstrates this facial recognition functionality, with database connection and communication handled by flask_sqlalchemy. A graphical user interface (web application) interacts with users at a high level, showing saved images of logged personnel and their times of entry. In tests, the system achieved a maximum accuracy of 98.78% and a precision of 98.82%, showing its potential for application on a wider scale with improvements such as cloud deployment and larger datasets.
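The coarse-to-fine screening described above can be sketched generically: a cheap scorer shortlists candidates and an expensive scorer confirms the match. The scorers below are hypothetical stand-ins, not the paper's actual recognition models or thresholds:

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class TwoStepMatcher:
    """Coarse-to-fine matcher: a fast scorer screens the gallery and a
    slow, more accurate scorer confirms. Both scorers are injected
    stand-ins, illustrating the two-step structure only."""
    fast_score: Callable[[str, str], float]   # cheap, approximate similarity
    slow_score: Callable[[str, str], float]   # costly, accurate similarity
    fast_threshold: float = 0.5
    confirm_threshold: float = 0.8

    def identify(self, probe: str, gallery: list[str]) -> Optional[str]:
        # Step 1: keep only candidates the fast model considers plausible.
        shortlist = [g for g in gallery
                     if self.fast_score(probe, g) >= self.fast_threshold]
        if not shortlist:
            return None
        # Step 2: run the expensive model only on the shortlist.
        best = max(shortlist, key=lambda g: self.slow_score(probe, g))
        return best if self.slow_score(probe, best) >= self.confirm_threshold else None
```

The second confirmation threshold is what cuts down false negatives from the fast screen: a candidate must pass both stages to be logged.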
APA, Harvard, Vancouver, ISO, and other styles
48

AbuSalim, Samah, Nordin Zakaria, Aarish Maqsood, et al. "Multi-granularity tooth analysis via YOLO-based object detection models for effective tooth detection and classification." IAES International Journal of Artificial Intelligence (IJ-AI) 13, no. 2 (2024): 2081. http://dx.doi.org/10.11591/ijai.v13.i2.pp2081-2092.

Full text
Abstract:
Accurate detection and classification of teeth is the first step in dental disease diagnosis. However, the same class of tooth exhibits significant variations in surface appearance, and the complex geometrical structure poses challenges in learning discriminative features among the different tooth classes. Because of these complex features, tooth classification is one of the challenging research domains in deep learning. To address these issues, the presented study proposes discriminative local feature extraction at different granular levels using YOLO models. This requires a granular intra-oral image dataset, so a dataset at three granular levels (two, four, and seven tooth classes) was developed. YOLOv5, YOLOv6, and YOLOv7 models were trained on 2,790 images. The results indicate superior performance of YOLOv6 for the two-class classification problem, with a mean average precision (mAP) of 94%. However, as the granularity level increases, the performance of the YOLO models decreases: for the four- and seven-class classification problems, the highest mAP values of 87% and 79%, respectively, were achieved by YOLOv5. The results indicate that the level of granularity plays an important role in tooth detection and classification, with the YOLO models' performance gradually decreasing at finer granular levels.
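The mAP figures above rest on matching predicted boxes to ground truth by intersection-over-union (IoU). A minimal sketch of that standard overlap measure (illustrative, not the study's evaluation code):

```python
def iou(box_a, box_b):
    """Intersection-over-Union of two axis-aligned boxes (x1, y1, x2, y2),
    the overlap criterion used when matching detections to ground truth."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Intersection rectangle (empty if the boxes do not overlap).
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```

A detection counting as a true positive only above an IoU cutoff (commonly 0.5) is what makes mAP sensitive to the fine spatial distinctions between adjacent tooth classes.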
APA, Harvard, Vancouver, ISO, and other styles
49

Courtemanche, M., L. Glass, M. D. Rosengarten, and A. L. Goldberger. "Beyond pure parasystole: promises and problems in modeling complex arrhythmias." American Journal of Physiology-Heart and Circulatory Physiology 257, no. 2 (1989): H693—H706. http://dx.doi.org/10.1152/ajpheart.1989.257.2.h693.

Full text
Abstract:
The dynamics of pure parasystole, a cardiac arrhythmia in which two competing pacemakers fire independently, have recently been fully characterized. This model is now extended in an attempt to account for the more complex dynamics occurring with modulated parasystole, in which there exists nonlinear interaction between the sinus node and the ectopic ventricular focus. Theoretical analysis of modulated parasystole reveals three types of dynamics: entrainment, quasiperiodicity, and chaos. Rhythms associated with quasiperiodicity obey a set of rules derived from pure parasystole. This model is applied to the interpretation of continuous electrocardiographic data sets from three patients with complicated patterns of ventricular ectopic activity. We describe several new statistical properties of these records, related to the number of intervening sinus beats between ectopic events, that are essential in characterizing the dynamics and testing mathematical models. Detailed comparison between data and theory in these cases shows substantial areas of agreement as well as potentially important discrepancies. These findings have implications for understanding the dynamics of the heartbeat in normal and pathological conditions.
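The pure-parasystole model underlying the analysis can be sketched as a toy simulation: two pacemakers fire independently at fixed periods, and an ectopic discharge is expressed only if it falls outside the refractory window that follows each sinus beat. The parameters below are illustrative, not taken from the patient data:

```python
import math

def pure_parasystole_nib(t_sinus, t_ectopic, refractory, n_ectopic=500):
    """Simulate pure parasystole and return the numbers of intervening
    sinus beats (NIB) between consecutive expressed ectopic beats.
    Sinus beats occur at multiples of t_sinus; an ectopic discharge at
    k * t_ectopic is expressed only if it lands outside the refractory
    window after the preceding sinus beat."""
    expressed = []
    for k in range(n_ectopic):
        t = k * t_ectopic
        if t % t_sinus >= refractory:      # outside the refractory window
            expressed.append(t)
    # NIB = number of sinus beats falling between consecutive expressed ectopics.
    return [math.floor(b / t_sinus) - math.floor(a / t_sinus)
            for a, b in zip(expressed, expressed[1:])]
```

Counting the sinus beats between ectopic events in such a simulation is exactly the statistic the abstract describes using to compare theory against the electrocardiographic records.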
APA, Harvard, Vancouver, ISO, and other styles
50

Cecen, R. K., and F. Aybek Çetek. "Optimising aircraft arrivals in terminal airspace by mixed integer linear programming model." Aeronautical Journal 124, no. 1278 (2020): 1129–45. http://dx.doi.org/10.1017/aer.2020.15.

Full text
Abstract:
Air traffic flow becomes denser and more complex within terminal manoeuvring areas (TMAs) due to rapid growth in demand. Effective TMA arrival management plays a key role in the improvement of airspace capacity, flight efficiency and air traffic controller performance. This study proposes a mixed integer linear programming model for the aircraft landing problem with an area navigation (RNAV) route structure, using three conflict resolution and sequencing techniques together: flexible route allocation, airspeed reduction and vector manoeuvres. A two-step mixed integer linear programming model was developed that first minimises total conflict resolution time and then total airborne delay using lexicographic goal programming. Experimental results demonstrate that the model can obtain conflict-free and time-optimal aircraft trajectories for RNAV route structures.
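The two-step lexicographic structure — optimise the primary objective, then break ties on the secondary one — can be illustrated over a finite candidate set. This is a toy stand-in for that ordering, not the paper's actual MILP formulation:

```python
def lexicographic_min(candidates, primary, secondary, tol=1e-9):
    """Two-stage lexicographic selection: first minimise the primary
    objective, then choose among the (near-)tied candidates by the
    secondary objective. Objectives are arbitrary callables."""
    # Stage 1: best achievable value of the primary objective.
    best_primary = min(primary(c) for c in candidates)
    # Stage 2: restrict to candidates attaining it (within tol) and
    # minimise the secondary objective over that restricted set.
    tied = [c for c in candidates if primary(c) <= best_primary + tol]
    return min(tied, key=secondary)
```

In the paper's setting the two stages would be full MILP solves (conflict resolution time first, airborne delay second), with the first-stage optimum added as a constraint before the second solve.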
APA, Harvard, Vancouver, ISO, and other styles
