
Journal articles on the topic 'Development and application of optimized algorithms'



Consult the top 50 journal articles for your research on the topic 'Development and application of optimized algorithms.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Wang, Xue Mei. "Nonlinear Blind Source Separation Using GA Optimized RBF-ICA and its Application to the Image Noise Removal." Advanced Materials Research 393-395 (November 2011): 205–8. http://dx.doi.org/10.4028/www.scientific.net/amr.393-395.205.

Full text
Abstract:
Today, blind source separation (BSS) algorithms are widely used to separate the independent components of a data set based on its statistical properties. In image applications in particular, the independent component analysis (ICA) based BSS procedure for image pre-processing has been successfully applied to extract independent components in order to remove noise signals mixed into the image data. The contribution of this paper is the development of a nonlinear BSS method using a radial basis function (RBF) neural network based ICA algorithm, built by adopting some modifications of the linear ICA model. Moreover, a genetic algorithm (GA) was used to optimize the RBF neural network and obtain a satisfactory nonlinear solution for the nonlinear mixing matrix. In the experiments of this work, the GA-optimized nonlinear ICA method and other ICA models were applied to image de-noising. A comparative analysis showed satisfactory and effective image de-noising results obtained by the presented method.
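The fitness-driven search described above can be illustrated with a small sketch. The snippet below is a hypothetical Python illustration, not the paper's implementation: a basic genetic algorithm evolves the width and output weights of an RBF expansion so that the mapped signal becomes maximally non-Gaussian (a kurtosis-based proxy commonly used in ICA-style separation). The toy mixture, fixed centers, and GA settings are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "observed" signal: a nonlinear mixture of two sources
t = np.linspace(0, 1, 2000)
s1, s2 = np.sign(np.sin(25 * t)), rng.standard_normal(t.size)
x = np.tanh(0.8 * s1 + 0.4 * s2)            # nonlinearly mixed observation

centers = np.linspace(x.min(), x.max(), 8)  # fixed RBF centers over the data range

def rbf_map(params, x):
    """Map x through an RBF expansion; params = [log_width, w_1..w_8]."""
    width = np.exp(params[0])
    phi = np.exp(-(x[:, None] - centers[None, :]) ** 2 / (2 * width ** 2))
    y = phi @ params[1:]
    return (y - y.mean()) / (y.std() + 1e-12)

def fitness(params):
    y = rbf_map(params, x)
    return abs(np.mean(y ** 4) - 3.0)        # |excess kurtosis| = non-Gaussianity proxy

def ga(pop_size=40, dim=9, gens=100, pm=0.2):
    pop = rng.normal(0, 1, (pop_size, dim))
    for _ in range(gens):
        fit = np.array([fitness(p) for p in pop])
        parents = pop[np.argsort(-fit)[: pop_size // 2]]   # keep the fitter half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = parents[rng.integers(len(parents), size=2)]
            alpha = rng.random()
            child = alpha * a + (1 - alpha) * b             # arithmetic crossover
            mask = rng.random(dim) < pm
            child[mask] += rng.normal(0, 0.3, mask.sum())   # Gaussian mutation
            children.append(child)
        pop = np.vstack([parents, children])
    fit = np.array([fitness(p) for p in pop])
    return pop[np.argmax(fit)], fit.max()

best, score = ga()
print("best |excess kurtosis| of mapped signal:", round(score, 3))
```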
APA, Harvard, Vancouver, ISO, and other styles
2

Raju, M. Mohan, R. K. Srivastava, Dinesh C. S. Bisht, H. C. Sharma, and Anil Kumar. "Development of Artificial Neural-Network-Based Models for the Simulation of Spring Discharge." Advances in Artificial Intelligence 2011 (September 6, 2011): 1–11. http://dx.doi.org/10.1155/2011/686258.

Full text
Abstract:
The present study demonstrates the application of artificial neural networks (ANNs) to predicting weekly spring discharge. The study was based on the weekly discharge of a spring located near Ranichauri in the Tehri Garhwal district of Uttarakhand, India. Five models were developed for predicting the spring discharge at a weekly interval using rainfall, evaporation, and temperature with a specified lag time. All models were developed with both one and two hidden layers. Each model was developed through many trials with different network architectures and different numbers of hidden neurons; finally, the best-predicting variant was presented for each developed model. The models were trained with three different algorithms, that is, the quick-propagation algorithm, the batch backpropagation algorithm, and the Levenberg-Marquardt algorithm, using weekly data from 1999 to 2005. The best model for the simulation was selected from the three presented algorithms using statistical criteria such as the correlation coefficient, the coefficient of determination, and the Nash-Sutcliffe efficiency (DC). Finally, an optimized number of neurons was chosen for the best model. Training and testing results revealed that the models predicted the weekly spring discharge satisfactorily. Based on these criteria, the ANN-based models gave better agreement in the computation of spring discharge. LMR models were also developed in the study and also gave good results, but, when compared with the ANN methodology, the ANN resulted in better optimized values.
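For readers unfamiliar with the evaluation criteria mentioned above, the short sketch below shows how the correlation coefficient and the Nash-Sutcliffe efficiency are computed for an observed versus simulated discharge series; the numbers are made up for illustration.

```python
import numpy as np

def correlation(obs, sim):
    """Pearson correlation coefficient R between observed and simulated series."""
    return np.corrcoef(obs, sim)[0, 1]

def nash_sutcliffe(obs, sim):
    """Nash-Sutcliffe efficiency: 1 - SSE / variance of observations (1 is a perfect fit)."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

# Example: weekly discharge observations vs. a model's simulated values
obs = [1.8, 2.1, 2.6, 3.0, 2.4, 1.9]
sim = [1.7, 2.2, 2.5, 2.8, 2.5, 2.0]
print(f"R = {correlation(obs, sim):.3f}, NSE = {nash_sutcliffe(obs, sim):.3f}")
```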
APA, Harvard, Vancouver, ISO, and other styles
3

Cai, Yinying, and Amit Sharma. "Swarm Intelligence Optimization: An Exploration and Application of Machine Learning Technology." Journal of Intelligent Systems 30, no. 1 (January 1, 2021): 460–69. http://dx.doi.org/10.1515/jisys-2020-0084.

Full text
Abstract:
In agricultural development and growth, efficient machinery and equipment play an important role. Various research studies and patents aim to support smart agriculture, and machine learning technologies are providing strong support for this growth. To explore machine learning technology and machine learning algorithms, most of the applications studied here are based on swarm intelligence optimization. An optimized V3CFOA-RF model is built through V3CFOA. The algorithm is tested on a data set collected on rice pests and then analyzed and compared in detail with other existing algorithms. The research results show that the proposed model and algorithm are not only more accurate in recognition and prediction but also solve the time-lag problem to a degree. The model and algorithm helped realize a higher accuracy in crop pest prediction, which ensures a more stable and higher output of rice. Thus they can be employed as an important decision-making instrument in the agricultural production sector.
APA, Harvard, Vancouver, ISO, and other styles
4

Arce, Samuel, Cory A. Vernon, Joshua Hammond, Valerie Newell, Joseph Janson, Kevin W. Franke, and John D. Hedengren. "Automated 3D Reconstruction Using Optimized View-Planning Algorithms for Iterative Development of Structure-from-Motion Models." Remote Sensing 12, no. 13 (July 7, 2020): 2169. http://dx.doi.org/10.3390/rs12132169.

Full text
Abstract:
Unsupervised machine learning algorithms (clustering, genetic, and principal component analysis) automate Unmanned Aerial Vehicle (UAV) missions as well as the creation and refinement of iterative 3D photogrammetric models with a next best view (NBV) approach. The novel approach uses Structure-from-Motion (SfM) to achieve convergence to a specified orthomosaic resolution by identifying edges in the point cloud and planning cameras that “view” the holes identified by edges without requiring an initial model. This iterative UAV photogrammetric method successfully runs in various Microsoft AirSim environments. Simulated ground sampling distance (GSD) of models reaches as low as 3.4 cm per pixel, and generally, successive iterations improve resolution. Besides analogous application in simulated environments, a field study of a retired municipal water tank illustrates the practical application and advantages of automated UAV iterative inspection of infrastructure using 63 % fewer photographs than a comparable manual flight with analogous density point clouds obtaining a GSD of less than 3 cm per pixel. Each iteration qualitatively increases resolution according to a logarithmic regression, reduces holes in models, and adds details to model edges.
APA, Harvard, Vancouver, ISO, and other styles
5

Sunnersjö, Staffan, Mikael Cederfeldt, Fredrik Elgh, and Ingvar Rask. "A Transparent Design System for Iterative Product Development." Journal of Computing and Information Science in Engineering 6, no. 3 (January 9, 2006): 300–307. http://dx.doi.org/10.1115/1.2218363.

Full text
Abstract:
Automated systems for variant design can be used for design iterations in order to guide the designer towards solutions that are optimized with respect to weight, cost, lead time, or other vital properties. In this work such a system for computational design problems is presented together with examples of its application. The system performs design computations, computer-aided design model configuration, production process planning, and cost estimation. The design rules and algorithms are captured in knowledge “chunks,” which are human readable as well as computer executable. The workflow governing the execution of these rules and algorithms is created using a dependency structure matrix (DSM), which is included in the system. Particular attention has been given to the need for transparency, modularity, and longevity of the system, which is a prerequisite for such a system to become a viable tool in industrial applications. Experiences from the proposed system indicate that the DSM workflow manager in combination with a human readable and modularized knowledge base provides clarity and transparency for both the developer and the user of the system.
APA, Harvard, Vancouver, ISO, and other styles
6

Klimánek, M. "Optimization of digital terrain model for its application in forestry." Journal of Forest Science 52, No. 5 (January 9, 2012): 233–41. http://dx.doi.org/10.17221/4506-jfs.

Full text
Abstract:
The digital terrain model (DTM) represents a very important geospatial data type. In the Czech Republic, the most common digital contour data sources are the Primary Geographic Data Base (ZABAGED), the Digital Ground Model (DMÚ25) and eventually the Regional Plans of Forest Development (OPRL). In constructing a regular raster DTM, the initial process requires interpolation between the points in order to estimate values on a regular grid. In this study, constructions of DTM from the above-mentioned data were tested using several software products: ArcEditor 9.0, Atlas 3.8, GRASS 6.1, Idrisi 14.02 and TopoL 2001. Algorithm parameters can be optimized in several ways. In this sense, a comparison of the first and second derivatives of the DTM with its real appearance in the terrain, together with a cross-validation procedure or terrain data measurements to compute and minimize the root mean square error (RMSE), proved to be the most useful operations. The ZABAGED contour data provided the best results, with software-specific algorithms for the interpolation of contour data (ArcGIS Desktop Topo to Raster, Idrisi Kilimanjaro TIN).
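The cross-validation procedure mentioned above can be sketched as follows. This is a hypothetical illustration, assuming inverse-distance weighting as a stand-in interpolator rather than any of the software-specific algorithms named in the abstract: each survey point is left out in turn, predicted from the remaining points, and the prediction errors are summarized as an RMSE.

```python
import numpy as np

def idw(xy_known, z_known, xy_query, power=2.0):
    """Inverse-distance-weighted estimate of elevation at one query location."""
    d = np.linalg.norm(xy_known - xy_query, axis=1)
    if np.any(d < 1e-12):                      # query coincides with a known point
        return z_known[np.argmin(d)]
    w = 1.0 / d ** power
    return np.sum(w * z_known) / np.sum(w)

def loo_rmse(xy, z):
    """Leave-one-out cross-validation RMSE of the interpolator."""
    errors = []
    for i in range(len(z)):
        mask = np.arange(len(z)) != i          # drop point i, predict it from the rest
        errors.append(idw(xy[mask], z[mask], xy[i]) - z[i])
    return float(np.sqrt(np.mean(np.square(errors))))

rng = np.random.default_rng(1)
xy = rng.uniform(0, 100, (60, 2))              # synthetic survey point coordinates [m]
z = 200 + 0.3 * xy[:, 0] + 5 * np.sin(xy[:, 1] / 15) + rng.normal(0, 0.5, 60)
print("LOO RMSE [m]:", round(loo_rmse(xy, z), 3))
```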
APA, Harvard, Vancouver, ISO, and other styles
7

Zhao, Jiaxin, Hongwei Wang, and Heming Zhang. "An Accurate Method for Real-Time Aircraft Dynamics Simulation Based on Predictor-Corrector Scheme." Mathematical Problems in Engineering 2015 (2015): 1–10. http://dx.doi.org/10.1155/2015/193179.

Full text
Abstract:
Real-time aircraft dynamics simulation requires very high accuracy and stability in the numerical integration process. Nonetheless, traditional multistep numerical methods cannot effectively meet the new requirements. Therefore, a novel real-time multistep method based on the Predict-Evaluate-Correct scheme of a three-step fourth-order method (RTPEC-34) is proposed and developed in this research to address the gap. In addition to the development of a highly accurate predictor-corrector algorithm, the contribution of this work also includes the analysis of truncation error for real-time problems. Moreover, the parameters of the RTPEC-34 method are optimized using intelligent optimization algorithms. The application and comparison of these optimization algorithms also lead to general guidelines for their use in the development of improved multistep methods. Last but not least, a theoretical analysis is conducted on the stability of the proposed RTPEC-34 method, which is corroborated in simulation experiments and thus provides general guidelines for the evaluation of real-time numerical methods. The RTPEC-34 method is compared with other multistep algorithms using both numerical experiments and a real engineering example. As shown in the comparison, it achieves improved performance in terms of accuracy and stability, and it is a viable and efficient algorithm for real-time aircraft dynamics simulation.
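As a point of reference for the Predict-Evaluate-Correct family the abstract refers to, here is a minimal PECE sketch using a classic two-step Adams-Bashforth predictor with a trapezoidal (Adams-Moulton) corrector; it illustrates the general scheme only and is not the RTPEC-34 method itself.

```python
import numpy as np

def pece(f, t0, y0, h, steps):
    """PECE integration: Predict (AB2), Evaluate, Correct (trapezoidal), Evaluate next step."""
    t, y = t0, np.asarray(y0, float)
    f_prev = f(t, y)                       # bootstrap the two-step history with one Euler step
    y = y + h * f_prev
    t += h
    out = [np.array(y0, float), y.copy()]
    for _ in range(steps - 1):
        f_n = f(t, y)
        y_pred = y + h / 2 * (3 * f_n - f_prev)   # Predict (Adams-Bashforth 2)
        f_pred = f(t + h, y_pred)                 # Evaluate
        y = y + h / 2 * (f_pred + f_n)            # Correct (Adams-Moulton / trapezoidal)
        f_prev, t = f_n, t + h
        out.append(y.copy())
    return np.array(out)

# Example: a linear oscillator y'' = -4y written as a first-order system
f = lambda t, y: np.array([y[1], -4.0 * y[0]])
traj = pece(f, 0.0, [1.0, 0.0], h=0.01, steps=500)
print("y(5) ≈", traj[-1, 0], " exact cos(10) =", np.cos(10.0))
```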
APA, Harvard, Vancouver, ISO, and other styles
8

DOBLANDER, ANDREAS, DIETMAR GÖSSERINGER, BERNHARD RINNER, and HELMUT SCHWABACH. "AN EVALUATION OF MODEL-BASED SOFTWARE SYNTHESIS FROM SIMULINK MODELS FOR EMBEDDED VIDEO APPLICATIONS." International Journal of Software Engineering and Knowledge Engineering 15, no. 02 (April 2005): 343–48. http://dx.doi.org/10.1142/s0218194005002038.

Full text
Abstract:
In next generation video surveillance systems there is a trend towards embedded solutions. Digital signal processors (DSP) are often used to provide the necessary computing power. The limited resources impose significant challenges for software development. Resource constraints must be met while facing increasing application complexity and pressing time-to-market demands. Recent advances in synthesis tools for Simulink suggest a high-level approach to algorithm implementation for embedded DSP systems. The model-based visual development process of Simulink facilitates simulation as well as synthesis of target specific code. In this work the modeling and code generation capabilities of Simulink are evaluated with respect to video analysis algorithms. Different models of a motion detection algorithm are used to synthesize code. The generated code targeted at a Texas Instruments TMS320C6416 DSP is compared to a hand-optimized reference. Experiments show that an ad hoc approach to synthesize complex image processing algorithms hardly yields optimal code for DSPs. However, several optimizations can be applied to improve performance.
APA, Harvard, Vancouver, ISO, and other styles
9

Chen, Shan. "The Development of Computer Network Intrusion Detection System Based on Data Mining." Applied Mechanics and Materials 66-68 (July 2011): 2248–51. http://dx.doi.org/10.4028/www.scientific.net/amm.66-68.2248.

Full text
Abstract:
In this paper, data mining algorithms are refined and optimized to achieve intelligent detection of network data. Winsock2 SPI is used to intercept network data, and a "session filtering" approach is used for network packet filtering. The system is divided into a rule-control module and an intelligent detection module. Practical testing shows that the system can display the real-time network connection status of application programs, effectively control network data, and also perform intelligent detection.
APA, Harvard, Vancouver, ISO, and other styles
10

Abdelghany, Reem Y., Salah Kamel, Hamdy M. Sultan, Ahmed Khorasy, Salah K. Elsayed, and Mahrous Ahmed. "Development of an Improved Bonobo Optimizer and Its Application for Solar Cell Parameter Estimation." Sustainability 13, no. 7 (March 31, 2021): 3863. http://dx.doi.org/10.3390/su13073863.

Full text
Abstract:
Recently, photovoltaic (PV) energy has been considered one of the most exciting new technologies in the energy sector. PV power plants receive considerable attention because of their wide applications. Consequently, it is important to study the parameters of the solar cell model to control and determine the characteristics of PV systems. In this study, an improved bonobo optimizer (IBO) was proposed to improve the performance of the conventional bonobo optimizer (BO). Both the IBO and the BO were utilized to obtain accurate values of the unknown parameters of different mathematical models of solar cells. The proposed IBO improved the performance of the conventional BO by enhancing the exploitation (local search) and exploration (global search) phases to find the best optimal solution, where the search space was reduced using Levy flights and the sine–cosine function. Levy flights enhance the exploration phase, whereas the sine–cosine function improves the exploitation phase. Both the proposed IBO and the conventional BO were applied to single, double, and triple diode models of solar cells. To check the effectiveness of the proposed algorithm, a statistical analysis based on the results of 20 runs of the optimization program was performed. The results obtained by the proposed IBO were compared with other algorithms, and all results showed the robustness of the proposed algorithm and its superiority over the other algorithms.
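The kind of objective such parameter-extraction studies minimize can be sketched briefly. The snippet below writes down the single-diode model residual and its RMSE over measured current-voltage points; the measurement points and candidate parameters are hypothetical, and evaluating the exponential at the measured current is a simplification of the implicit model, which real studies typically solve for the calculated current first.

```python
import numpy as np

q, k, T = 1.602e-19, 1.381e-23, 306.0       # electron charge, Boltzmann constant, cell temperature [K]
Vt = k * T / q                               # thermal voltage

def single_diode_residual(params, V, I):
    """Residual of I = Iph - I0*(exp((V+I*Rs)/(n*Vt))-1) - (V+I*Rs)/Rsh at measured points."""
    Iph, I0, Rs, Rsh, n = params             # photocurrent, saturation current, series/shunt R, ideality
    return Iph - I0 * (np.exp((V + I * Rs) / (n * Vt)) - 1.0) - (V + I * Rs) / Rsh - I

def rmse(params, V, I):
    r = single_diode_residual(params, V, I)
    return float(np.sqrt(np.mean(r ** 2)))

# Hypothetical measured points and a candidate parameter vector produced by an optimizer
V = np.array([-0.2, 0.0, 0.2, 0.4, 0.55])
I = np.array([0.764, 0.760, 0.757, 0.730, 0.365])
candidate = [0.7608, 3.2e-7, 0.036, 53.7, 1.48]
print("RMSE of this candidate on the sample points:", rmse(candidate, V, I))
```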
APA, Harvard, Vancouver, ISO, and other styles
11

Zhang, Jian Dong, Ming Yang Liu, Guo Qing Shi, and Wei Hua Pan. "A MIL-STD-1553B Bus Command Optimization Algorithm Based on Load Balance." Applied Mechanics and Materials 130-134 (October 2011): 3839–42. http://dx.doi.org/10.4028/www.scientific.net/amm.130-134.3839.

Full text
Abstract:
Based on the MIL-STD-1553B bus communication protocol, this paper uses computer-aided analysis to optimize the bus transmission algorithm. An optimized algorithm is proposed that rearranges the command set issued by the bus controller (BC), achieving bus load balance within a major period. Through command-sequence optimization, this algorithm can manage the commands to be transmitted and generate an optimized bus command table. Based on the algorithm, PC software was developed that manages the commands to be transmitted and generates an optimized command table for bus communication. Practical application shows that this algorithm performs well in optimizing the bus command table and improves the efficiency of bus transmission.
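The load-balancing idea can be illustrated with a toy sketch. The following greedy longest-first assignment of message transmission times to minor frames is illustrative only and is not the paper's command-rearrangement algorithm; the durations and frame count are made up.

```python
def balance_commands(durations_us, n_minor_frames):
    """Spread message transmission times across minor frames of a major period."""
    frames = [[] for _ in range(n_minor_frames)]
    loads = [0.0] * n_minor_frames
    # Assign the longest remaining message to the currently lightest minor frame
    for idx, dur in sorted(enumerate(durations_us), key=lambda p: -p[1]):
        target = loads.index(min(loads))
        frames[target].append(idx)
        loads[target] += dur
    return frames, loads

# Hypothetical message durations (microseconds) for one major period, 4 minor frames
durations = [120, 80, 80, 60, 200, 40, 40, 150, 90, 30]
frames, loads = balance_commands(durations, 4)
print("per-frame load [us]:", loads)   # near-equal loads => balanced bus schedule
```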
APA, Harvard, Vancouver, ISO, and other styles
12

Salimi, Amir Hossein, Jafar Masoompour Samakosh, Ehsan Sharifi, Mohammad Reza Hassanvand, Amir Noori, and Hary von Rautenkranz. "Optimized Artificial Neural Networks-Based Methods for Statistical Downscaling of Gridded Precipitation Data." Water 11, no. 8 (August 10, 2019): 1653. http://dx.doi.org/10.3390/w11081653.

Full text
Abstract:
Precipitation, as a key parameter in hydrometeorology and other water-related applications, always requires precise methods for assessing and predicting precipitation data. In this study, an effort was made to downscale and evaluate a satellite precipitation estimation (SPE) product using artificial neural networks (ANN), and to apply a residual correction method, for five separate daily heavy precipitation events localized over northeast Austria. For the ANN model, the precipitation variable was the chosen output, and the inputs were temperature and MODIS cloud optical and microphysical variables. Particle swarm optimization (PSO), the imperialist competitive algorithm (ICA), and a genetic algorithm (GA) were utilized to improve the performance of the ANN. Moreover, to examine the efficiency of the networks, the downscaled product was evaluated using 54 rain gauges at a daily timescale. In addition, a sensitivity analysis was conducted to obtain the most and least influential input parameters. Among the optimization algorithms used for network training in this study, ICA slightly outperformed the other algorithms. The best recorded performance for ICA was on 17 April 2015, with root mean square error (RMSE) = 5.26 mm, mean absolute error (MAE) = 6.06 mm, R2 = 0.67, and bias = 0.07 mm. The results showed that the prediction of precipitation was most sensitive to cloud optical thickness (COT). Moreover, the accuracy of the final downscaled satellite precipitation was improved significantly through the residual correction algorithm.
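One of the trainers mentioned above, particle swarm optimization, can be sketched compactly. The snippet below minimizes a toy objective standing in for the ANN training error; the inertia and acceleration coefficients are typical textbook values, not those used in the study.

```python
import numpy as np

def pso(objective, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Minimal PSO: velocity pulls each particle toward its own and the swarm's best."""
    rng = np.random.default_rng(0)
    x = rng.uniform(-5, 5, (n_particles, dim))        # positions
    v = np.zeros_like(x)                               # velocities
    pbest, pbest_f = x.copy(), np.array([objective(p) for p in x])
    gbest = pbest[np.argmin(pbest_f)].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        f = np.array([objective(p) for p in x])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        gbest = pbest[np.argmin(pbest_f)].copy()
    return gbest, pbest_f.min()

sphere = lambda p: float(np.sum(p ** 2))               # stand-in for the ANN training error
best, best_f = pso(sphere, dim=10)
print("best objective value:", best_f)
```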
APA, Harvard, Vancouver, ISO, and other styles
13

Liu, M., and Z. H. Sun. "Application of the fruit fly optimization algorithm to an optimized neural network model in radar target recognition." Computer Optics 45, no. 2 (April 2021): 296–300. http://dx.doi.org/10.18287/2412-6179-co-789.

Full text
Abstract:
With the development of computer technology, there are more and more algorithms and models for data processing and analysis, which brings a new direction to radar target recognition. This study mainly analyzed the recognition of high resolution range profiles (HRRP) in radar target recognition and applied the generalized regression neural network (GRNN) model to HRRP recognition. In order to improve recognition performance, the fruit fly optimization algorithm (FOA) was improved to optimize the parameters of the GRNN model. Simulation experiments were carried out on three types of aircraft. The improved FOA-GRNN (IFOA-GRNN) model was compared with the radial basis function (RBF) and GRNN models. The results showed that the IFOA-GRNN model had better convergence accuracy, the highest average recognition rate (96.4%), the shortest average calculation time (275 s), and a good recognition rate under noise interference. The experimental results show that the IFOA-GRNN model performs well in radar target recognition and can be further promoted and applied in practice.
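The GRNN itself is compact enough to sketch. The example below shows a basic GRNN predictor whose single smoothing parameter sigma is exactly the quantity that optimizers such as FOA tune; the one-dimensional data are invented for illustration.

```python
import numpy as np

def grnn_predict(X_train, y_train, X_query, sigma):
    """GRNN: kernel-weighted average of training targets, with smoothing parameter sigma."""
    d2 = np.sum((X_query[:, None, :] - X_train[None, :, :]) ** 2, axis=2)
    w = np.exp(-d2 / (2.0 * sigma ** 2))              # pattern-layer activations
    return (w @ y_train) / np.maximum(w.sum(axis=1), 1e-12)

rng = np.random.default_rng(2)
X = rng.uniform(0, 6, (80, 1))
y = np.sin(X[:, 0]) + rng.normal(0, 0.1, 80)          # noisy samples of a smooth target
Xq = np.linspace(0, 6, 5)[:, None]

for sigma in (0.05, 0.3, 2.0):                        # too small, reasonable, too smooth
    err = np.abs(grnn_predict(X, y, Xq, sigma) - np.sin(Xq[:, 0])).mean()
    print(f"sigma={sigma}: mean abs error {err:.3f}")
```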
APA, Harvard, Vancouver, ISO, and other styles
14

Nagaiah, Narasimha R., and Christopher D. Geiger. "Application of evolutionary algorithms to optimize cooling channels." International Journal for Simulation and Multidisciplinary Design Optimization 10 (2019): A4. http://dx.doi.org/10.1051/smdo/2019008.

Full text
Abstract:
Design and development is a complex, repetitive, and often difficult task, as design problems comprise constraining and conflicting relationships among design variables and more than one design objective. The conventional approach to solving optimization problems with more than one objective is to build a composite function by scalarizing the multiple objective functions into a single objective function with one solution. The disadvantages of conventional methods, however, inspired scientists and engineers to look for methods that yield more than one design solution, known as Pareto optimal solutions, instead of a single solution. Furthermore, these methods not only optimize more than one objective concurrently but also handle objectives that conflict, where improving one objective negatively affects the others. This study demonstrates a nature-based, bio-inspired evolutionary simulation method that addresses the disadvantages of current methods when applied to design optimization. As an example, in this research we chose to optimize the periodic segment of the cooling passage of an industrial gas turbine blade, comprising ribs (also known as turbulators), to enhance cooling effectiveness. The outlined design optimization method provides a set of trade-off designs to choose from depending on the designer's requirements.
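The notion of Pareto optimality used above can be made concrete with a short sketch: a dominance test and a filter that keeps only non-dominated designs when all objectives are minimized. The objective pairs below are hypothetical, not results from the cooling-channel study.

```python
import numpy as np

def dominates(a, b):
    """True if design a is at least as good as b in every objective and strictly better in one."""
    return np.all(a <= b) and np.any(a < b)

def pareto_front(objectives):
    """Indices of designs not dominated by any other design (all objectives minimized)."""
    objectives = np.asarray(objectives, float)
    keep = []
    for i, fi in enumerate(objectives):
        if not any(dominates(fj, fi) for j, fj in enumerate(objectives) if j != i):
            keep.append(i)
    return keep

# Hypothetical (pressure drop, heat-transfer deficit) pairs for candidate rib layouts
designs = [(3.0, 0.9), (2.5, 1.2), (4.0, 0.7), (2.6, 1.1), (5.0, 0.6), (3.5, 1.0)]
print("non-dominated design indices:", pareto_front(designs))
```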
APA, Harvard, Vancouver, ISO, and other styles
15

Zhou, Zhiwu, Julián Alcalá, and Víctor Yepes. "Optimized Application of Sustainable Development Strategy in International Engineering Project Management." Mathematics 9, no. 14 (July 10, 2021): 1633. http://dx.doi.org/10.3390/math9141633.

Full text
Abstract:
The aim of this paper is to establish an international framework for sustainable project management in engineering, to make up for the lack of research in this field, and to propose a scientific theoretical basis for the establishment of a new project management system. The article adopts a literature review, a mathematical programming algorithm, and a case study as its research methods. The literature review applied a visual clustering research method and analyzed the results of 21 years of research in this field. As a result, the project management system was found to have defects and deficiencies. A mathematical model was established to analyze the composition and elements of the optimized international project management system. The case study selected large bridges for analysis and verified the superiority and practicability of the theoretical system. Thus, the goal of sustainable development of bridges was achieved. The value of this research lies in establishing a comprehensive international project management system model; truly integrating sustainable development with project management; and providing new research frames and management models to promote the sustainable development of the construction industry.
APA, Harvard, Vancouver, ISO, and other styles
16

David Sandino, Juan, Dario Amaya Hurtado, and Olga Lucia Ramos. "Prediction of Reproductive System Affectation in Sprague Dawley Rats by Food Intake Exposed with Fenthion, Using Naïve Bayes Classifier and Genetic Algorithms." Biosciences, Biotechnology Research Asia 14, no. 4 (December 25, 2017): 1291–97. http://dx.doi.org/10.13005/bbra/2572.

Full text
Abstract:
Improper application of pesticides in agricultural crops, and the indirect effects caused by exposure to them through consumption of contaminated crops, nowadays represent a serious risk to public health. It is vital, then, to know the degree of toxicity of each of these chemicals in order to properly regulate their application and sensitize the population at risk. Therefore, this paper shows the results of an algorithm able to predict the effects on the reproductive system of Sprague Dawley rats caused by the intake of food exposed to Fenthion. The original data were processed using the Naïve Bayes classifier and then optimized using genetic algorithms. It is concluded that the prediction algorithm does the job properly, processing qualitative information at relatively low computational cost, which allows its easy portability to different development platforms.
APA, Harvard, Vancouver, ISO, and other styles
17

Devasena, D., M. Jagadeeswari, and K. Srinivasan. "Development of Optimized Algorithm and Field Programmable Gate Array Implementation for Bio Medical Image Denoising for Health Informatics Applications." Journal of Medical Imaging and Health Informatics 11, no. 10 (October 1, 2021): 2626–38. http://dx.doi.org/10.1166/jmihi.2021.3851.

Full text
Abstract:
Denoising images is one of the most difficult tasks in image processing applications: the image details must be preserved while the extra noise found in the images is removed. Removing noise from medical and satellite images is also a challenge; it improves the diagnostic capacity of medical images and the visual clarity of satellite images. The noise in the images and its density vary depending on the imaging technique, and the algorithms in the literature were proposed based on the noise density and the forms of noise. The aim of this paper is to eliminate noise from ultrasound, magnetic resonance, and satellite images using an effective denoising algorithm, the Hybrid Wiener Adaptive Weighted Median Filter (HWAWMF), which is the combination of the Wiener filter and the Adaptive Centre Pixel Weighted Median Filter (ACPWMEF). In terms of performance parameters, with an improved peak signal-to-noise ratio (PSNR), the hybrid filter shows better results than the ACPWMEF. The Wiener filter removes the extra noise in the images but blurs the image's visual perception, and optimization approaches are therefore used to enhance the image quality. This paper proposes HWAWMF based on particle swarm optimization (PSO HWAWMF) and HWAWMF based on the dragonfly optimization algorithm (DOAF HWAWMF). Visual quality and PSNR have also been improved by using the optimization algorithms, at an average of 3.18 dB, 4.83 dB, and 3.14 dB for low noise (0.0% to 30%), medium noise (40% to 60%), and high noise density (70% to 90%), respectively. The efficacy of the algorithm is verified using MATLAB R2013 on both simulated and actual medical images. This algorithm is also implemented on Altera Field Programmable Gate Arrays (FPGAs) to assess its computational complexity in terms of area, power, and timing using the Cyclone II EP2C35F672C6, Cyclone II, and Stratix III EP3SL150F1152C2 devices.
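The PSNR metric used throughout the abstract is straightforward to compute; the sketch below shows the standard definition for 8-bit images on a synthetic noisy example.

```python
import numpy as np

def psnr(reference, test, peak=255.0):
    """Peak signal-to-noise ratio in dB between a reference image and a test image."""
    mse = np.mean((reference.astype(float) - test.astype(float)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(peak ** 2 / mse)

# Tiny synthetic example: an image corrupted by salt-and-pepper noise
rng = np.random.default_rng(3)
clean = rng.integers(0, 256, (64, 64)).astype(np.uint8)
noisy = clean.copy()
flips = rng.random(clean.shape) < 0.1                 # 10% noise density
noisy[flips] = rng.choice([0, 255], size=flips.sum())
print(f"PSNR of noisy image: {psnr(clean, noisy):.2f} dB")
```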
APA, Harvard, Vancouver, ISO, and other styles
18

Grout, Ian, and Lenore Mullin. "Hardware Considerations for Tensor Implementation and Analysis Using the Field Programmable Gate Array." Electronics 7, no. 11 (November 13, 2018): 320. http://dx.doi.org/10.3390/electronics7110320.

Full text
Abstract:
In today’s complex embedded systems targeting internet of things (IoT) applications, there is a greater need for embedded digital signal processing algorithms that can effectively and efficiently process complex data sets. A typical application considered is for use in supervised and unsupervised machine learning systems. With the move towards lower power, portable, and embedded hardware-software platforms that meet the current and future needs for such applications, there is a requirement on the design and development communities to consider different approaches to design realization and implementation. Typical approaches are based on software programmed processors that run the required algorithms on a software operating system. Whilst such approaches are well supported, they can lead to solutions that are not necessarily optimized for a particular problem. A consideration of different approaches to realize a working system is therefore required, and hardware based designs rather than software based designs can provide performance benefits in terms of power consumption and processing speed. In this paper, consideration is given to utilizing the field programmable gate array (FPGA) to implement a combined inner and outer product algorithm in hardware that utilizes the available hardware resources within the FPGA. These products form the basis of tensor analysis operations that underlie the data processing algorithms in many machine learning systems.
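The two primitives discussed above are easy to state in software before mapping them to hardware; the short NumPy sketch below shows the inner and outer products the paper implements on the FPGA.

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0, 6.0])

inner = np.dot(a, b)           # sum_i a_i * b_i  -> 32.0, a contraction to a scalar
outer = np.outer(a, b)         # matrix with entries a_i * b_j, shape (3, 3), an expansion

print("inner:", inner)
print("outer:\n", outer)
```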
APA, Harvard, Vancouver, ISO, and other styles
19

Banerjee, Avishek, Victor Das, Arindam Biswas, Samiran Chattopadhyay, and Utpal Biswas. "Development of Energy Efficient and Optimized Coverage Area Network Configuration to Achieve Reliable WSN Network Using Meta-Heuristic Approaches." International Journal of Applied Metaheuristic Computing 12, no. 3 (July 2021): 1–27. http://dx.doi.org/10.4018/ijamc.2021070101.

Full text
Abstract:
Energy optimization and coverage area optimization of wireless sensor networks (WSNs) are two major challenges in accomplishing reliability optimization in the field of WSNs. Reliability optimization in WSNs is directly connected to the performance, efficiency, and consistency of the network. In this paper, the authors describe how these challenges can be resolved by designing an efficient WSN with the help of meta-heuristic algorithms. They configured an optimized route/path using the ant colony optimization (ACO) algorithm and deployed static WSN nodes. After configuring an efficient network, maximizing the coverage area ensures that the network is reliable. For coverage area optimization, they used a hybrid differential evolution-quantum behaved particle swarm optimization (DE-QPSO) algorithm. The results have been compared with the existing literature, and the authors found good results when applying these meta-heuristic and hybrid algorithms.
APA, Harvard, Vancouver, ISO, and other styles
20

Li, Chenglong, Ning Ding, Haoyun Dong, and Yiming Zhai. "Application of Credit Card Fraud Detection Based on CS-SVM." International Journal of Machine Learning and Computing 11, no. 1 (January 2021): 34–39. http://dx.doi.org/10.18178/ijmlc.2021.11.1.1011.

Full text
Abstract:
With the development of e-commerce, credit card fraud is increasing, and the methods of credit card fraud are constantly evolving. Support Vector Machine, Logistic Regression, Random Forest, Naive Bayes, and other algorithms are often used in credit card fraud identification. However, current fraud detection technology is not sufficiently accurate and may cause significant economic losses to cardholders and banks. This paper introduces a method that optimizes the support vector machine with the cuckoo search algorithm to improve its ability to identify credit card fraud. The cuckoo search algorithm improves classification performance by optimizing the parameters of the support vector machine kernel function (C, g). The results demonstrate that CS-SVM is superior to SVM in Accuracy, Precision, Recall, F1-score, and AUC, and superior to Logistic Regression, Random Forest, Decision Tree, and Naive Bayes, with an accuracy of 98%.
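The hyperparameter search described above can be sketched as follows, assuming scikit-learn is available. A simple log-uniform random search stands in for the cuckoo search, and synthetic imbalanced data stand in for real card transactions; only the idea of scoring candidate (C, gamma) pairs by cross-validated F1 is illustrated.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Synthetic imbalanced "transactions": 10% positive (fraud-like) class
X, y = make_classification(n_samples=600, n_features=20, weights=[0.9, 0.1], random_state=0)

rng = np.random.default_rng(0)
best = (None, -np.inf)
for _ in range(30):
    C = 10 ** rng.uniform(-2, 3)             # candidate "nest": (C, gamma) on log scales
    gamma = 10 ** rng.uniform(-4, 1)
    score = cross_val_score(SVC(C=C, gamma=gamma), X, y, cv=5, scoring="f1").mean()
    if score > best[1]:
        best = ((C, gamma), score)

print("best (C, gamma):", best[0], " cross-validated F1:", round(best[1], 3))
```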
APA, Harvard, Vancouver, ISO, and other styles
21

이우형 and 전태주. "Optimized Building Form by Using Genetic Algorithm and Application to Multi-dimentional Development Design." Journal of Korea Intitute of Spatial Design 14, no. 7 (December 2019): 23–36. http://dx.doi.org/10.35216/kisd.2019.14.7.23.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

Zeng, Hongcheng, Timo Pukkala, Heli Peltola, and Seppo Kellomäki. "Optimization of irregular-grid cellular automata and application in risk management of wind damage in forest planning." Canadian Journal of Forest Research 40, no. 6 (June 2010): 1064–75. http://dx.doi.org/10.1139/x10-052.

Full text
Abstract:
This study demonstrated how cellular automata, using irregular grids, can be used to minimize the risk of wind damage in forest management planning. The development of a forest in central Finland was simulated for a 30-year period with three subplanning periods. A forest growth and yield model in association with a mechanistic wind damage model was applied to simulate forest growth and to calculate the length of stand edges at risk. Irregular cellular automata were utilized to optimize the harvest schedules for reducing the risk and maintaining a sustainable harvest level. The cellular automata produced rational results, i.e., new clearcuts were often placed next to open gaps, thereby, reducing the amount of vulnerable stand edges. The algorithms of the cellular automata rapidly converged and optimized the harvest schedules in an efficient way, especially when risk minimization was the only objective. In a planning problem that included even-flow timber harvesting objectives (harvest level equal to the total timber growth), the targets were almost achieved. Although the cellular automaton had slightly larger deviations of harvesting from the targets compared with other tested heuristic approaches (simulated annealing, tabu search, and genetic algorithms), it had the best performance when minimizing the expected wind damage.
APA, Harvard, Vancouver, ISO, and other styles
23

Hou, Yuan Hang, Jia Ning Zhang, and Chun Bo Zhen. "Development of Naval Ship Synthesis Model in Concept Design." Advanced Materials Research 945-949 (June 2014): 490–93. http://dx.doi.org/10.4028/www.scientific.net/amr.945-949.490.

Full text
Abstract:
Naval ship concept design requires not only accurately evaluating the feasibility of projects but also optimizing the technology indexes based on a suitable algorithm. That means that, on the foundation of appropriate technology indexes, cost and risk can be optimized to obtain a preliminary overall project, which can serve as the design baseline for subsequent design stages. This paper first discusses the establishment and application pattern of the naval ship synthesis model and reviews the development of ship synthesis models over roughly half a century. Some achievements are then introduced, including the key technology indexes, aimed at providing technical support for domestic naval ship project design work.
APA, Harvard, Vancouver, ISO, and other styles
24

Lin, Y., T. Zhang, K. Qian, G. Xie, and J. Cai. "PERFORMANCE EVALUATION OF ELM WITH A-OPTIMIZED DESIGN REGULARIZATION FOR REMOTE SENSING IMAGERY CLASSIFICATION." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLIII-B1-2020 (August 6, 2020): 45–49. http://dx.doi.org/10.5194/isprs-archives-xliii-b1-2020-45-2020.

Full text
Abstract:
The automatic classification of remote sensing images is the key technology for extracting the rich geo-information in remote sensing images and for monitoring dynamic changes in land use and the ecological environment. Remote sensing images are characterized by a large amount of information and many dimensions. Therefore, how to classify and extract the information in remote sensing images has become a crucial issue in the field of remote sensing science. With the development of neural network theory, many scholars have carried out research on the application of neural network models to remote sensing image classification. However, there are still some problems to be solved in artificial neural network methods. In this study, considering the problem of large-scale land classification for medium-resolution, multi-spectral remote sensing imagery, an improved machine learning algorithm based on the extreme learning machine has been developed for remote sensing classification via regularization theory. The improved algorithm is more suitable for post-classification change monitoring of features in large-scale imagery. Our main job in this study is to evaluate the performance of the ELM with A-optimal design regularization (here simply called A-optimal RELM). The accuracy and efficiency of the A-optimal RELM algorithm for remote sensing imagery classification are therefore compared with those of the support vector machine (SVM) and the original ELM in the experiments. The experimental results show that A-optimal RELM performs best on all three different images, with overall accuracies of 97.27% and 95.03%, respectively. Besides, A-optimal RELM performs better at distinguishing similar and easily confused terrestrial object pixels.
APA, Harvard, Vancouver, ISO, and other styles
25

Baskoro, Ario Sunar, and Boby Surya. "Development of Photometric Stereo-Based 3 Dimensional Object Reconstruction for Welding Application." Applied Mechanics and Materials 842 (June 2016): 337–41. http://dx.doi.org/10.4028/www.scientific.net/amm.842.337.

Full text
Abstract:
Photometric stereo recovers the shape and reflectance properties of an object using multiple images taken under varying lighting conditions. In this study, a system has been implemented to construct a height field from 12 images of an object. The algorithm, which calibrates the lighting directions and finds the normals and color albedo at each pixel, has been optimized to compute the surface that best fits the normals. The algorithm was also applied to a welded product to reconstruct the shape of the weld bead.
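The per-pixel estimation step of photometric stereo can be sketched with a few lines of linear algebra: under a Lambertian model, the observed intensities equal the light-direction matrix times the albedo-scaled normal, so a least-squares solve recovers both. The light directions and test pixel below are made up for illustration.

```python
import numpy as np

L = np.array([[0.0, 0.0, 1.0],        # one light direction per image (unit vectors)
              [0.5, 0.0, 0.866],
              [0.0, 0.5, 0.866],
              [-0.5, 0.0, 0.866]])

def normal_and_albedo(intensities):
    """Solve I = L @ (albedo * n) in the least-squares sense for one pixel."""
    g, *_ = np.linalg.lstsq(L, np.asarray(intensities, float), rcond=None)
    albedo = np.linalg.norm(g)
    return g / max(albedo, 1e-12), albedo

true_n = np.array([0.2, 0.1, 0.97468])          # a ground-truth normal for the test pixel
true_n /= np.linalg.norm(true_n)
I = 0.8 * L @ true_n                             # synthetic intensities with albedo 0.8
n_hat, rho = normal_and_albedo(I)
print("recovered normal:", np.round(n_hat, 3), " albedo:", round(rho, 3))
```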
APA, Harvard, Vancouver, ISO, and other styles
26

Shi, Xiaoxiao. "A Method of Optimizing Network Topology Structure Combining Viterbi Algorithm and Bayesian Algorithm." Wireless Communications and Mobile Computing 2021 (May 8, 2021): 1–12. http://dx.doi.org/10.1155/2021/5513349.

Full text
Abstract:
With the Internet entering all walks of life, its development and expanding usage demand better performance, especially for 5G networks that adopt the NAS networking mode. Some parts of the network bandwidth cannot fully support the current network demand, which causes network fluctuations and other concerns. In this paper, a method for optimizing the topological structure of the bottom layer of the communication network is proposed whose outage performance is close to that of an optimal routing scheme. Specifically, paths in areas with poor network conditions are first optimized using the Viterbi algorithm. Then, the network element nodes on the path are optimized using a Bayesian recommendation algorithm for reasonable flow distribution. Dual planning with the improved Viterbi algorithm is used to plan the main and standby paths, and a Bayesian recommendation algorithm based on the average value is then used to optimize the network elements. In this way, overall performance optimization is realized efficiently.
APA, Harvard, Vancouver, ISO, and other styles
27

Bohnenkamp, David, Jan Behmann, and Anne-Katrin Mahlein. "In-Field Detection of Yellow Rust in Wheat on the Ground Canopy and UAV Scale." Remote Sensing 11, no. 21 (October 25, 2019): 2495. http://dx.doi.org/10.3390/rs11212495.

Full text
Abstract:
The application of hyperspectral imaging technology for plant disease detection in the field is still challenging. Existing equipment and analysis algorithms are adapted to highly controlled environmental conditions in the laboratory. However, only real time information from the field scale is able to guide plant protection measures and to optimize the use of resources. At the field scale, many parameters such as the optimal measurement distance, informative feature sets, and suitable algorithms have not been investigated. In this study, the hyperspectral detection and quantification of yellow rust in wheat was evaluated using two measurement platforms: a ground-based vehicle and an unmanned aerial vehicle (UAV). Different disease development stages and disease severities were provided in a plot-based field experiment. Measurements were performed weekly during the vegetation period. Data analysis was performed by three prediction algorithms with a focus on the selection of optimal feature sets. In this context, the across-scale application of optimized feature sets, an approach of information transfer between scales, was also evaluated. Relevant aspects for an on-line disease assessment in the field integrating affordable sensor technology, sensor spatial resolution, compact analysis models, and fast evaluation have been outlined and reflected upon. For the first time, a hyperspectral imaging observation experiment of a plant disease was comparatively performed at two scales, ground canopy and UAV.
APA, Harvard, Vancouver, ISO, and other styles
28

Singh, Sushil Kumar, Mikail Mohammed Salim, Jeonghun Cha, Yi Pan, and Jong Hyuk Park. "Machine Learning-Based Network Sub-Slicing Framework in a Sustainable 5G Environment." Sustainability 12, no. 15 (August 3, 2020): 6250. http://dx.doi.org/10.3390/su12156250.

Full text
Abstract:
Nowadays, 5G network infrastructures are being developed for various industrial IoT (Internet of Things) applications worldwide, emerging with the IoT. As such, it is possible to deploy power-optimized technology in a way that promotes the long-term sustainability of networks. Network slicing is a fundamental technology that is implemented to handle load balancing issues within a multi-tenant network system. Separate network slices are formed to process applications having different requirements, such as low latency, high reliability, and high spectral efficiency. Modern IoT applications have dynamic needs, and various systems prioritize assorted types of network resources accordingly. In this paper, we present a new framework for the optimum performance of device applications with optimized network slice resources. Specifically, we propose a Machine Learning-based Network Sub-slicing Framework in a Sustainable 5G Environment in order to optimize network load balancing problems, where each logical slice is divided into a virtualized sub-slice of resources. Each sub-slice provides the application system with different prioritized resources as necessary. One sub-slice focuses on spectral efficiency, whereas the other focuses on providing low latency with reduced power consumption. We identify different connected device application requirements through feature selection using the Support Vector Machine (SVM) algorithm. The K-means algorithm is used to create clusters of sub-slices for the similar grouping of types of application services such as application-based, platform-based, and infrastructure-based services. Latency, load balancing, heterogeneity, and power efficiency are the four primary key considerations for the proposed framework. We evaluate and present a comparative analysis of the proposed framework, which outperforms existing studies based on experimental evaluation.
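The clustering step mentioned above can be illustrated with a small k-means sketch that groups hypothetical per-application requirement vectors into sub-slices; the feature definitions and synthetic groups are assumptions, not the paper's data.

```python
import numpy as np

def kmeans(X, k, iters=100, seed=0):
    """Plain k-means: assign points to nearest centers, then recompute the centers."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2), axis=1)
        new_centers = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                                else centers[j] for j in range(k)])
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return labels, centers

rng = np.random.default_rng(4)
# Three synthetic application groups described by [latency sensitivity, throughput demand, power budget]
apps = np.vstack([rng.normal([0.9, 0.3, 0.5], 0.05, (20, 3)),   # latency-critical
                  rng.normal([0.2, 0.9, 0.7], 0.05, (20, 3)),   # throughput-heavy
                  rng.normal([0.1, 0.2, 0.1], 0.05, (20, 3))])  # low-power IoT
labels, centers = kmeans(apps, k=3)
print("sub-slice sizes:", np.bincount(labels), "\ncenters:\n", np.round(centers, 2))
```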
APA, Harvard, Vancouver, ISO, and other styles
29

Ramadan, Abd-ElHady, Salah Kamel, Tahir Khurshaid, Seung-Ryle Oh, and Sang-Bong Rhee. "Parameter Extraction of Three Diode Solar Photovoltaic Model Using Improved Grey Wolf Optimizer." Sustainability 13, no. 12 (June 21, 2021): 6963. http://dx.doi.org/10.3390/su13126963.

Full text
Abstract:
The enhancement of photovoltaic (PV) energy systems relies on an accurate PV model. Researchers have made significant efforts to extract PV parameters because of the nonlinear characteristics of the PV system and the lack of information in manufacturers' PV datasheets. PV parameter estimation using optimization algorithms is a challenging problem on which a wide range of research has been conducted. The idea behind this challenge is the selection of a proper PV model and algorithm to estimate the accurate parameters of this model. In this paper, a new application of the improved grey wolf optimizer (I-GWO) is proposed to estimate the parameter values that achieve an accurate PV three-diode model (TDM) in a robust manner. The PV TDM is developed to represent the effect of grain boundaries and the large leakage current in the PV system. The I-GWO is developed with the aim of improving the population, the exploration-exploitation balance, and the convergence of the original GWO. The performance of the I-GWO is compared with other well-known optimization algorithms. The I-GWO is evaluated through two different applications: in the first, real data from the RTC France cell are used, and in the second, real data from the PTW polycrystalline PV panel are used. The results are compared using different evaluation factors (root mean square error (RMSE), current absolute error, and statistical analysis over multiple independent runs). The I-GWO achieved the lowest RMSE values in comparison with the other algorithms; the RMSE values for the two applications are 0.00098331 and 0.0024276, respectively. Based on quantitative and qualitative performance evaluation, it can be concluded that the parameters of the TDM estimated by the I-GWO are more accurate than those obtained by the other studied optimization algorithms.
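For context, the standard grey wolf update that I-GWO refines can be sketched compactly; the snippet below runs it on a toy objective rather than the TDM parameter-estimation RMSE, and the population size and iteration count are arbitrary.

```python
import numpy as np

def gwo(objective, dim, bounds, n_wolves=20, iters=200, seed=0):
    """Standard GWO: each wolf moves toward an average of positions set by the three best wolves."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    X = rng.uniform(lo, hi, (n_wolves, dim))
    for t in range(iters):
        f = np.array([objective(x) for x in X])
        alpha, beta, delta = X[np.argsort(f)[:3]]         # three current leaders
        a = 2.0 * (1 - t / iters)                          # decreases linearly from 2 to 0
        for i in range(n_wolves):
            pos = []
            for leader in (alpha, beta, delta):
                A = 2 * a * rng.random(dim) - a
                C = 2 * rng.random(dim)
                D = np.abs(C * leader - X[i])
                pos.append(leader - A * D)
            X[i] = np.clip(np.mean(pos, axis=0), lo, hi)
    f = np.array([objective(x) for x in X])
    return X[np.argmin(f)], f.min()

sphere = lambda x: float(np.sum(x ** 2))                   # stand-in for the model-fitting RMSE
best, best_f = gwo(sphere, dim=9, bounds=(-5.0, 5.0))
print("best objective value:", best_f)
```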
APA, Harvard, Vancouver, ISO, and other styles
30

Reyes Fernandez de Bulnes, Darian, Yazmin Maldonado, and Leonardo Trujillo. "Development of Multiobjective High-Level Synthesis for FPGAs." Scientific Programming 2020 (June 29, 2020): 1–25. http://dx.doi.org/10.1155/2020/7095048.

Full text
Abstract:
Traditionally, the High-Level Synthesis (HLS) for Field Programmable Gate Array (FPGA) devices is a methodology that transforms a behavioral description, as the timing-independent specification, to an abstraction level that is synthesizable, like the Register Transfer Level. This process can be performed under a framework that is known as Design Space Exploration (DSE), which helps to determine the best design by addressing scheduling, allocation, and binding problems, all three of which are NP-hard problems. In this manner, and due to the increased complexity of modern digital circuit designs and concerns regarding the capacity of the FPGAs, designers are proposing novel HLS techniques capable of performing automatic optimization. HLS has several conflicting metrics or objective functions, such as delay, area, power, wire length, digital noise, reliability, and security. For this reason, it is suitable to apply Multiobjective Optimization Algorithms (MOAs), which can handle the different trade-offs among the objective functions. During the last two decades, several MOAs have been applied to solve this problem. This paper introduces a comprehensive analysis of different MOAs that are suitable to perform HLS for FPGA devices. We highlight significant aspects of MOAs, namely, optimization methods, intermediate structures where the optimizations are performed, HLS techniques that are addressed, and benchmarks and performance assessments employed for experimentation. In addition, we show the analysis of how multiple objectives are optimized currently in the algorithms and which are the objective functions that are optimized. Finally, we provide insights and suggestions to contribute to the solution of major research challenges in this area.
APA, Harvard, Vancouver, ISO, and other styles
31

Chen, Yangyi, Shaojing Su, Huiwen Yin, Xiaojun Guo, Zhen Zuo, Junyu Wei, and Liyin Zhang. "Optimized Non-Cooperative Spectrum Sensing Algorithm in Cognitive Wireless Sensor Networks." Sensors 19, no. 9 (May 10, 2019): 2174. http://dx.doi.org/10.3390/s19092174.

Full text
Abstract:
The cognitive wireless sensor network (CWSN) is an important development direction of wireless sensor networks (WSNs), and spectrum sensing technology is an essential prerequisite for CWSN to achieve spectrum sharing. However, the existing non-cooperative narrowband spectrum sensing technology has difficulty meeting the application requirements of CWSN at present. In this paper, we present a non-cooperative spectrum sensing algorithm for CWSN, which combines the multi-resolution technique, phase space reconstruction method, and singular spectrum entropy method to sense the spectrum of narrowband wireless signals. Simulation results validate that this algorithm can greatly improve the detection probability at a low signal-to-noise ratio (SNR) (from −19 dB to −12 dB), and the detector can quickly achieve the best detection performance as the SNR increases. This algorithm could promote the development of CWSN and the application of WSNs.
APA, Harvard, Vancouver, ISO, and other styles
32

Tabaiyan, Mostafa, and Mehdi Agha Sarram. "A Method Based on RTO and Selective Acknowledgement for Improving SCTP Protocol Performance in Mobile Ad Hoc Networks." Modern Applied Science 10, no. 6 (June 8, 2016): 238. http://dx.doi.org/10.5539/mas.v10n6p238.

Full text
Abstract:
With the increasing use of Mobile Ad Hoc Networks, there is a growing need for a transport protocol with appropriate end-to-end throughput. Because the TCP protocol was designed for wired networks, its application in Mobile Ad Hoc Networks causes a reduction in efficiency, performance, and throughput. Protocols such as SCTP, with sufficient validity and reliability in data transfer, have been proposed as solutions for increasing throughput; however, providing this quality sacrifices factors such as network balance. In the present paper, two optimized transport algorithms are introduced. These algorithms act on retransmission timeout control and selective acknowledgement numbers for transmissions in the SCTP protocol. The proposed algorithms were implemented in the NS-2 simulator, and the throughput improvement and reduction in packet delay time were compared with earlier protocols.
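The retransmission-timeout computation that such schemes adjust follows the standard smoothed-RTT rule; the sketch below uses the commonly cited SCTP defaults (initial 3 s, minimum 1 s, maximum 60 s, alpha 1/8, beta 1/4) on made-up RTT samples.

```python
class RtoEstimator:
    """Standard RTO estimate: smoothed RTT plus four times its variation, clamped to [min, max]."""

    def __init__(self, rto_initial=3.0, rto_min=1.0, rto_max=60.0, alpha=1 / 8, beta=1 / 4):
        self.srtt = None            # smoothed round-trip time
        self.rttvar = None          # round-trip time variation
        self.rto = rto_initial
        self.rto_min, self.rto_max = rto_min, rto_max
        self.alpha, self.beta = alpha, beta

    def on_rtt_sample(self, rtt):
        if self.srtt is None:                       # first measurement
            self.srtt, self.rttvar = rtt, rtt / 2
        else:
            self.rttvar = (1 - self.beta) * self.rttvar + self.beta * abs(self.srtt - rtt)
            self.srtt = (1 - self.alpha) * self.srtt + self.alpha * rtt
        self.rto = min(max(self.srtt + 4 * self.rttvar, self.rto_min), self.rto_max)
        return self.rto

est = RtoEstimator()
for sample in [0.20, 0.24, 0.80, 0.22, 0.21]:       # RTT samples in seconds
    print(f"RTT {sample:.2f}s -> RTO {est.on_rtt_sample(sample):.2f}s")
```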
APA, Harvard, Vancouver, ISO, and other styles
33

Wang, H. B., J. W. Li, B. Zhou, Z. Q. Yuan, and Y. P. Chen. "Application of a hybrid model of neural networks and genetic algorithms to evaluate landslide susceptibility." Natural Hazards and Earth System Sciences Discussions 1, no. 2 (March 4, 2013): 353–88. http://dx.doi.org/10.5194/nhessd-1-353-2013.

Full text
Abstract:
Abstract. In the last few decades, the development of Geographical Information Systems (GIS) technology has provided a method for the evaluation of landslide susceptibility and hazard. Slope units were found to be appropriate for the fundamental morphological elements in landslide susceptibility evaluation. Following the DEM construction in a loess area susceptible to landslides, the direct-reverse DEM technology was employed to generate 216 slope units in the studied area. After a detailed investigation, the landslide inventory was mapped in which 39 landslides, including paleo-landslides, old landslides and recent landslides, were present. Of the 216 slope units, 123 involved landslides. To analyze the mechanism of these landslides, six environmental factors were selected to evaluate landslide occurrence: slope angle, aspect, the height and shape of the slope, distance to river and human activities. These factors were extracted in terms of the slope unit within the ArcGIS software. The spatial analysis demonstrates that most of the landslides are located on convex slopes at an elevation of 100–150 m with slope angles from 135°–225° and 40°–60°. Landslide occurrence was then checked according to these environmental factors using an artificial neural network with back propagation, optimized by genetic algorithms. A dataset of 120 slope units was chosen for training the neural network model, i.e., 80 units with landslide presence and 40 units without landslide presence. The parameters of genetic algorithms and neural networks were then set: population size of 100, crossover probability of 0.65, mutation probability of 0.01, momentum factor of 0.60, learning rate of 0.7, max learning number of 10 000, and target error of 0.000001. After training on the datasets, the susceptibility of landslides was mapped for the land-use plan and hazard mitigation. Comparing the susceptibility map with landslide inventory, it was noted that the prediction accuracy of landslide occurrence is 93.02%, whereas units without landslide occurrence are predicted with an accuracy of 81.13%. To sum up, the verification shows satisfactory agreement with an accuracy of 86.46% between the susceptibility map and the landslide locations. In the landslide susceptibility assessment, ten new slopes were predicted to show potential for failure, which can be confirmed by the engineering geological conditions of these slopes. It was also observed that some disadvantages could be overcome in the application of the neural networks with back propagation, for example, the low convergence rate and local minimum, after the network was optimized using genetic algorithms. To conclude, neural networks with back propagation that are optimized by genetic algorithms are an effective method to predict landslide susceptibility with high accuracy.
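The combination described above, a genetic algorithm shaping a neural network for binary susceptibility classification, can be sketched in a hedged way: below, a GA evolves the weights of a small feed-forward network directly (rather than optimizing a backpropagation-trained network as in the study), reusing the quoted population size, crossover probability, and mutation probability as per-gene rates; the six-factor slope-unit data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 6))                              # six environmental factors per slope unit
y = (X[:, 0] + 0.5 * X[:, 1] - X[:, 2] > 0).astype(float)  # synthetic landslide / no-landslide labels

H = 5                                                       # hidden neurons
n_w = 6 * H + H                                             # weights: input->hidden plus hidden->output

def predict(w, X):
    W1 = w[:6 * H].reshape(6, H)
    w2 = w[6 * H:]
    h = np.tanh(X @ W1)
    return 1.0 / (1.0 + np.exp(-(h @ w2)))                  # probability of landslide

def fitness(w):
    return float(np.mean((predict(w, X) > 0.5) == y))       # classification accuracy

pop = rng.normal(0, 1, (100, n_w))                          # population size 100
for gen in range(200):
    fit = np.array([fitness(w) for w in pop])
    parents = pop[np.argsort(-fit)[:50]]                    # keep the better half
    children = []
    while len(children) < 50:
        a, b = parents[rng.integers(50, size=2)]
        child = np.where(rng.random(n_w) < 0.65, a, b)      # uniform crossover, rate 0.65
        child = child + rng.normal(0, 0.5, n_w) * (rng.random(n_w) < 0.01)  # mutation rate 0.01
        children.append(child)
    pop = np.vstack([parents, children])

print("best training accuracy:", max(fitness(w) for w in pop))
```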
APA, Harvard, Vancouver, ISO, and other styles
34

Tajudin, Aimi Idzwan, Ahmad Asri Abd Samat, Pais Saedin, and Mohamad Adha Mohamad Idin. "Application of Artificial Bee Colony Algorithm for Distribution Network Reconfiguration." Applied Mechanics and Materials 785 (August 2015): 38–42. http://dx.doi.org/10.4028/www.scientific.net/amm.785.38.

Full text
Abstract:
Network reconfiguration is the process of changing the original structure of the distribution network with the intention of balancing the load on every feeder of the system and, at the same time, optimizing the operation of the system. The process involves changing the switching states (tie switches and sectionalizing switches) with the aim of finding the best combination that increases the performance of the system while satisfying the operational constraints. The critical importance of this process has made overcoming the reconfiguration problems a challenging mission for researchers. Recent years have seen a rapid development of evolutionary algorithms and swarm-intelligence-based algorithms for resolving network reconfiguration problems. For that reason, this work applies the Artificial Bee Colony (ABC) algorithm to the network reconfiguration procedure to achieve the optimum level of operation. The ease and simplicity of the algorithm and its capability to find the global optimum made this algorithm appropriate for this project. The objective of this work is focused on improving the distribution power system in terms of minimizing the total real power loss and keeping the voltage profile within acceptable values. The algorithm was tested on two different radial distribution systems (the 33-bus and 69-bus radial distribution systems).
APA, Harvard, Vancouver, ISO, and other styles
35

Hussain, M. N. M., Ahmad Maliki Omar, Intan Rahayu Ibrahim, and Kamarulazhar Daud. "A dsPIC Microcontroller Based System Identification of Positive Output Buck Boost Converter for Module Mismatch Application." Applied Mechanics and Materials 785 (August 2015): 106–10. http://dx.doi.org/10.4028/www.scientific.net/amm.785.106.

Full text
Abstract:
An identification system based on a multiple-input single-output (MISO) model is developed for the dsPIC microcontroller control of positive output buck-boost (POBB) converters under module mismatch conditions in a photovoltaic (PV) system. In particular, the scheme aims to resolve the mismatch losses of the PV module during shading or module mismatch occurrences. The MPPT algorithm is simplified by an indirect identification approach incorporated with a simple incremental direct method, forming a combined direct and indirect (CoDId) algorithm. Uneven solar irradiation on a PV module steps the voltage up or down toward the desired DC output voltage of the POBB converter. The optimized algorithm ensures that the PV module is kept at the maximum power point (MPP), preventing power loss during module mismatch incidents, especially under partial shading conditions. The simulation and laboratory results for a polycrystalline Mitsubishi PV-AE125MF5N PV module indicate that the proposed model and the developed PV system architecture perform well, with efficiency of up to 97.7% at critically low solar irradiance levels. The control signal is generated by a low-cost embedded dsPIC30F Digital Signal Controller (DSC).
APA, Harvard, Vancouver, ISO, and other styles
36

Mala, C. S., and S. Ramachandran. "DEVELOPMENT OF NOVEL ALGORITHMS AND ARCHITECTURE FOR A ROBOT BASED AGRICULTURAL IMPLEMENT." INTERNATIONAL JOURNAL OF COMPUTERS & TECHNOLOGY 10, no. 5 (August 20, 2013): 1661–74. http://dx.doi.org/10.24297/ijct.v10i5.7009.

Full text
Abstract:
This paper presents novel algorithms and an architecture for a robot-based agricultural implement. The application is tilling an agricultural field. The hardware consists of a platform with four wheels and a mechanism to facilitate the forward, reverse and lateral movement of the wheels. The platform also houses a turn table, a lift and a plough. Various user-defined inputs may be programmed, such as the length and breadth of the field, the spacing between two till lines and the depth of tilling. Thereafter, the entire tilling operation of the field is automated. The robot-based vehicle begins the operation from the top-left corner and moves towards the right, tilling the field as it moves forward. Once the required length of field is reached, the vehicle halts and moves to the next row, maintaining the specified spacing between tilled rows. The movement from one row to another is lateral, achieved by rotating the robotic vehicle's wheels by 90 degrees. No tilling is carried out until the next row is reached. The tilling operation is resumed by rotating the plough by 180 degrees and moving the vehicle in the reverse direction. This process continues until the end of the field, covering the entire breadth and maintaining the desired spacing and depth. This automated tilling requires the development of novel algorithms and an optimized architecture, which are presented in this paper. The system is user friendly and upgradable. The entire system has been realized in Verilog and is RTL compliant. The design is both platform and technology independent. The design has been simulated using ModelSim.
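The tilling sequence described above is essentially a serpentine (boustrophedon) coverage pattern. The sketch below shows how such a waypoint list could be generated from the user-defined field length, breadth and row spacing; it is a software illustration only, whereas the paper's actual implementation is RTL hardware described in Verilog, and the waypoint labels are hypothetical.

```python
def tilling_waypoints(length, breadth, spacing):
    """Generate (x, y, action) waypoints for a serpentine tilling pattern:
    till a full row, shift laterally by `spacing`, reverse direction, repeat.
    Coordinates are in metres from the top-left corner of the field."""
    waypoints = []
    y = 0.0
    forward = True
    while y <= breadth:
        if forward:
            waypoints.append((0.0, y, "till_forward"))
            waypoints.append((length, y, "lateral_shift"))
        else:
            waypoints.append((length, y, "till_reverse"))
            waypoints.append((0.0, y, "lateral_shift"))
        forward = not forward
        y += spacing
    return waypoints

for wp in tilling_waypoints(length=20.0, breadth=6.0, spacing=2.0):
    print(wp)
```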
APA, Harvard, Vancouver, ISO, and other styles
37

Zou, Dai Kun, Wei Hu, Hong Wei Xia, Wen Fei Li, Zhuo Min Yao, and Hong Guo. "Optimized PCA Based Face Recognition for Mobile Devices." Applied Mechanics and Materials 427-429 (September 2013): 1682–86. http://dx.doi.org/10.4028/www.scientific.net/amm.427-429.1682.

Full text
Abstract:
With the rapid development of embedded technology, mobile devices are more widely used than before. Face recognition has become a key application, with PCA as its basic algorithm. Although PCA provides basic information processing, it still has problems when used on mobile devices: the movement of faces increases the difficulty of recognition, and the limited resources of mobile devices impose additional constraints on the traditional PCA algorithm. A novel approach is presented to optimize PCA-based face recognition for better performance and faster recognition speed. The experiments show that the new approach achieves this target through the optimization.
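For context, a minimal sketch of the baseline PCA (eigenface) pipeline that such work starts from is given below: an SVD on mean-centred face vectors, projection onto the leading components, and nearest-neighbour matching in the reduced space. The image size, gallery and component count are placeholder assumptions, and the paper's mobile-specific optimizations are not reproduced.

```python
import numpy as np

def train_pca(faces, n_components=20):
    """faces: (n_samples, n_pixels) matrix of flattened grey-scale face images.
    Returns the mean face and the top principal axes (eigenfaces)."""
    mean = faces.mean(axis=0)
    centred = faces - mean
    # SVD avoids forming the full pixel-by-pixel covariance matrix,
    # which matters on memory-constrained mobile hardware.
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    return mean, vt[:n_components]

def project(face, mean, eigenfaces):
    return eigenfaces @ (face - mean)

def recognise(face, mean, eigenfaces, gallery_codes, labels):
    code = project(face, mean, eigenfaces)
    dists = np.linalg.norm(gallery_codes - code, axis=1)
    return labels[int(np.argmin(dists))]

# Toy data standing in for enrolled face images (e.g. 32x32 = 1024 pixels).
rng = np.random.default_rng(2)
gallery = rng.random((10, 1024))
labels = [f"person_{k}" for k in range(10)]
mean, eig = train_pca(gallery, n_components=5)
codes = np.array([project(f, mean, eig) for f in gallery])
print(recognise(gallery[3] + 0.01 * rng.random(1024), mean, eig, codes, labels))
```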
APA, Harvard, Vancouver, ISO, and other styles
38

P. de Figueiredo, Felipe A., Dragoslav Stojadinovic, Prasanthi Maddala, Ruben Mennes, Irfan Jabandžić, Xianjun Jiao, and Ingrid Moerman. "SCATTER PHY: An Open Source Physical Layer for the DARPA Spectrum Collaboration Challenge." Electronics 8, no. 11 (November 14, 2019): 1343. http://dx.doi.org/10.3390/electronics8111343.

Full text
Abstract:
DARPA, the Defense Advanced Research Projects Agency of the United States, started the Spectrum Collaboration Challenge with the aim of encouraging research and development of coexistence and collaboration techniques for heterogeneous networks sharing the same wireless spectrum bands. Team SCATTER has been participating in the challenge since its beginning in 2016. SCATTER’s open-source software-defined physical layer (SCATTER PHY) has been developed as a standalone application, with the ability to communicate with higher layers through a set of well-defined messages (created with Google’s Protocol Buffers) that are exchanged over a ZeroMQ bus. This approach allows upper layers to access it remotely or locally and to change all parameters in real time through the control messages. SCATTER PHY runs on top of USRP-based software-defined radio devices (i.e., devices from Ettus or National Instruments) to send and receive wireless signals. It is a highly optimized and real-time configurable SDR-based PHY layer that can be used for the research and development of novel intelligent spectrum sharing schemes and algorithms. The main objective of making SCATTER PHY available to the research and development community is to provide a solution that can be used out of the box to devise disruptive algorithms and techniques that improve the currently sub-optimal use of the radio spectrum. This way, researchers and developers can focus their attention mainly on the development of smarter (i.e., intelligent algorithms and techniques) spectrum sharing approaches. Therefore, in this paper, we describe the design and main features of SCATTER PHY and showcase several experiments performed to assess the effectiveness and performance of the proposed PHY layer.
APA, Harvard, Vancouver, ISO, and other styles
39

Pogorilyy, S. D., and M. S. Slynko. "Research and development of Johnson's algorithm parallel schemes in GPGPU technology." PROBLEMS IN PROGRAMMING, no. 2-3 (June 2016): 105–12. http://dx.doi.org/10.15407/pp2016.02-03.105.

Full text
Abstract:
The application of Johnson’s all-pairs shortest path algorithm to an edge-weighted, directed graph is considered. The algorithm is formalized in terms of Glushkov’s modified systems of algorithmic algebras. The expediency of using GPGPU technology to accelerate the algorithm is demonstrated. A number of parallel schemes of the algorithm, optimized for use on GPGPUs, are obtained. An approach to implementing the obtained schemes on the NVIDIA CUDA computing architecture is suggested. An experimental study of the performance improvement gained by using the GPU for computation was carried out.
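For reference, a plain sequential version of Johnson's algorithm is sketched below: a Bellman-Ford pass from a virtual source computes vertex potentials, edges are reweighted to be non-negative, and Dijkstra is run from every vertex. The paper's contribution is the formalization and GPGPU parallelization of exactly these stages, which this CPU-only sketch does not attempt; the example graph is illustrative.

```python
import heapq

def bellman_ford(graph, source):
    """graph: {u: [(v, w), ...]}. Returns potentials, or None on a negative cycle."""
    dist = {v: float("inf") for v in graph}
    dist[source] = 0.0
    for _ in range(len(graph) - 1):
        for u in graph:
            for v, w in graph[u]:
                if dist[u] + w < dist[v]:
                    dist[v] = dist[u] + w
    for u in graph:
        for v, w in graph[u]:
            if dist[u] + w < dist[v]:
                return None
    return dist

def dijkstra(graph, source):
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

def johnson(graph):
    # Add a virtual source connected to every vertex with weight 0.
    aug = {u: list(edges) for u, edges in graph.items()}
    aug["__q__"] = [(u, 0.0) for u in graph]
    h = bellman_ford(aug, "__q__")
    if h is None:
        raise ValueError("negative-weight cycle")
    # Reweight: w'(u, v) = w(u, v) + h(u) - h(v) >= 0, then Dijkstra per vertex.
    rew = {u: [(v, w + h[u] - h[v]) for v, w in graph[u]] for u in graph}
    all_pairs = {}
    for u in graph:
        d = dijkstra(rew, u)
        all_pairs[u] = {v: d[v] - h[u] + h[v] for v in d}
    return all_pairs

g = {"a": [("b", 3), ("c", 8)], "b": [("c", -2)], "c": [("a", 4)]}
print(johnson(g))
```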
APA, Harvard, Vancouver, ISO, and other styles
40

Königseder, Corinna, and Kristina Shea. "Systematic rule analysis of generative design grammars." Artificial Intelligence for Engineering Design, Analysis and Manufacturing 28, no. 3 (July 22, 2014): 227–38. http://dx.doi.org/10.1017/s0890060414000195.

Full text
Abstract:
The use of generative design grammars for computational design synthesis has been shown to be successful in many application areas. The development of advanced search and optimization strategies to guide the computational synthesis process is an active research area with great improvements in the last decades. The development of the grammar rules, however, often resembles an art rather than a science. Poor grammars drive the need for problem-specific and sophisticated search and optimization algorithms that guide the synthesis process toward valid and optimized designs in a reasonable amount of time. Instead of tuning search algorithms for inferior grammars, this research focuses on designing better grammars to not unnecessarily burden the search process. It presents a grammar rule analysis method to provide a more systematic development process for grammar rules. The goal of the grammar rule analysis method is to improve the quality of the rules and in turn have a major impact on the quality of the designs generated. Four different grammars for automated gearbox synthesis are used as a case study to validate the developed method and show its potential.
APA, Harvard, Vancouver, ISO, and other styles
41

Xu, He, Weiwei Shen, Peng Li, Jiawei Zhu, and Ruchuan Wang. "A novel algorithm L-NCD for redundant reader elimination in P2P-RFID network." Journal of Algorithms & Computational Technology 11, no. 2 (January 10, 2017): 135–47. http://dx.doi.org/10.1177/1748301816688020.

Full text
Abstract:
With the development of radio frequency identification (RFID) technology, RFID systems have been deployed in many applications on a large scale. A densely deployed reader environment is needed to cover the work area, especially in RFID logistics and warehousing applications. If the distribution area and the number of readers are not optimized, some of the readers will be redundant, which reduces the efficiency of the whole RFID system. Researchers have proposed many algorithms and optimization techniques to solve the problem of redundant readers. In this paper, the authors propose a novel redundant reader elimination algorithm, namely L-NCD, for P2P-RFID reader networks, which combines layered elimination optimization with neighboring coverage density. The simulation results demonstrate that the proposed algorithm eliminates more redundant readers than other methods, such as redundant-reader elimination, under the same conditions.
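L-NCD itself is not specified in the abstract; the sketch below shows only the classical coverage-based greedy idea that redundant reader elimination algorithms refine: a reader is redundant if all of its tags are already covered by the readers retained so far. The coverage data and ordering rule are illustrative assumptions, not the layered or neighbouring-density refinements of L-NCD.

```python
def redundant_readers(coverage):
    """coverage: {reader_id: set_of_tag_ids}. A reader is kept only if it covers
    at least one tag not already covered by the readers kept so far; readers are
    examined from largest to smallest coverage (a simple greedy heuristic)."""
    kept, covered = [], set()
    for reader in sorted(coverage, key=lambda r: len(coverage[r]), reverse=True):
        new_tags = coverage[reader] - covered
        if new_tags:
            kept.append(reader)
            covered |= new_tags
    redundant = [r for r in coverage if r not in kept]
    return kept, redundant

coverage = {
    "R1": {1, 2, 3},
    "R2": {2, 3},        # fully covered by R1 -> redundant
    "R3": {4, 5},
    "R4": {5},           # fully covered by R3 -> redundant
}
print(redundant_readers(coverage))
```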
APA, Harvard, Vancouver, ISO, and other styles
42

Song, Zhe, Xing Mu, and Hou-Xing Zhou. "High Performance Computing of Complex Electromagnetic Algorithms Based on GPU/CPU Heterogeneous Platform and Its Applications to EM Scattering and Multilayered Medium Structure." International Journal of Antennas and Propagation 2017 (2017): 1–12. http://dx.doi.org/10.1155/2017/9173062.

Full text
Abstract:
Fast and accurate numerical analysis of large-scale objects and complex structures is essential to electromagnetic simulation and design. Alongside the exploration of EM algorithms from the mathematical point of view, their realization in computer programs is equally significant and must keep pace with the development of hardware architectures. Unlike previous parallel algorithms implemented on multicore CPUs with OpenMP or on computer clusters with MPI, the new type of large-scale parallel processor based on the graphics processing unit (GPU) has shown impressive ability in various supercomputing scenarios, and its application to computational electromagnetics is especially promising. This paper presents our recent work on high-performance computing based on a GPU/CPU heterogeneous platform and its application to EM scattering problems and planar multilayered medium structures, including a novel realization of OpenMP-CUDA-MLFMM, a developed ACA method and a deeply optimized CG-FFT method. The numerical examples and their clear gains in efficiency make a convincing case for continuing to investigate and understand computer hardware and its operating mechanisms in the future.
APA, Harvard, Vancouver, ISO, and other styles
43

Chen, Jiahui, Dongji Xuan, Biao Wang, and Rui Jiang. "Structure Optimization of Battery Thermal Management Systems Using Sensitivity Analysis and Stud Genetic Algorithms." Applied Sciences 11, no. 16 (August 13, 2021): 7440. http://dx.doi.org/10.3390/app11167440.

Full text
Abstract:
Battery thermal management systems (BTMS) are hugely important for extending battery life and promoting the development of electric vehicles. The cooling effect of a BTMS can be improved by optimizing its structural parameters. In this paper, flow resistance and heat dissipation models were used to optimize the structure of the BTMS; these models are more efficient than the computational fluid dynamics method. Subsequently, five structural parameters that affect the temperature inside the battery pack were analyzed using single-factor sensitivity analysis under different inlet airflow rates, and three structural parameters were selected for optimization by a stud genetic algorithm. In the stud genetic algorithm, keeping the maximal temperature difference obtained from the heat dissipation model within 5 K served as the constraint function, while the objective function minimized the overall area of the battery pack. The BTMS optimized by the stud genetic algorithm reduced the maximal temperature difference by 16% and saved 6% of the battery pack area compared with the original BTMS. It can be concluded that the stud genetic algorithm, combined with the flow resistance network and heat dissipation models, can quickly and efficiently optimize the air-cooled BTMS to improve its cooling performance.
APA, Harvard, Vancouver, ISO, and other styles
44

Wang, Chao, Xingqin An, Qing Hou, Zhaobin Sun, Yanjun Li, and Jiangtao Li. "Development of four-dimensional variational assimilation system based on the GRAPES–CUACE adjoint model (GRAPES–CUACE-4D-Var V1.0) and its application in emission inversion." Geoscientific Model Development 14, no. 1 (January 22, 2021): 337–50. http://dx.doi.org/10.5194/gmd-14-337-2021.

Full text
Abstract:
Abstract. In this study, a four-dimensional variational (4D-Var) data assimilation system was developed based on the GRAPES–CUACE (Global/Regional Assimilation and PrEdiction System – CMA Unified Atmospheric Chemistry Environmental Forecasting System) atmospheric chemistry model, GRAPES–CUACE adjoint model and L-BFGS-B (extended limited-memory Broyden–Fletcher–Goldfarb–Shanno) algorithm (GRAPES–CUACE-4D-Var) and was applied to optimize black carbon (BC) daily emissions in northern China on 4 July 2016, when a pollution event occurred in Beijing. The results show that the newly constructed GRAPES–CUACE-4D-Var assimilation system is feasible and can be applied to perform BC emission inversion in northern China. The BC concentrations simulated with optimized emissions show improved agreement with the observations over northern China with lower root-mean-square errors and higher correlation coefficients. The model biases are reduced by 20 %–46 %. The validation with observations that were not utilized in the assimilation shows that assimilation makes notable improvements, with values of the model biases reduced by 1 %–36 %. Compared with the prior BC emissions, which are based on statistical data of anthropogenic emissions for 2007, the optimized emissions are considerably reduced. Especially for Beijing, Tianjin, Hebei, Shandong, Shanxi and Henan, the ratios of the optimized emissions to prior emissions are 0.4–0.8, indicating that the BC emissions in these highly industrialized regions have greatly reduced from 2007 to 2016. In the future, further studies on improving the performance of the GRAPES–CUACE-4D-Var assimilation system are still needed and are important for air pollution research in China.
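A toy illustration of the variational machinery is sketched below: a quadratic 4D-Var-style cost function with background and observation terms is minimized with SciPy's L-BFGS-B over bounded emission scaling factors. The linear source-receptor matrix stands in for the GRAPES-CUACE forward and adjoint models, and the dimensions, error covariances and bounds are assumptions for the example, not values from the paper.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)

# Toy stand-in: 6 regional BC emission scaling factors, a fixed linear
# source-receptor matrix H in place of the chemistry model, synthetic observations.
n_src, n_obs = 6, 40
H = rng.random((n_obs, n_src))
x_true = rng.uniform(0.4, 0.8, n_src)          # "current" emissions, below the prior
x_prior = np.ones(n_src)                        # prior scaling factors (old inventory)
y_obs = H @ x_true + rng.normal(0, 0.02, n_obs)
B_inv = np.eye(n_src) / 0.5 ** 2                # background error covariance (diagonal)
R_inv = np.eye(n_obs) / 0.02 ** 2               # observation error covariance (diagonal)

def cost(x):
    db, do = x - x_prior, H @ x - y_obs
    return 0.5 * db @ B_inv @ db + 0.5 * do @ R_inv @ do

def grad(x):
    # In a real 4D-Var system this gradient is supplied by the adjoint model.
    return B_inv @ (x - x_prior) + H.T @ (R_inv @ (H @ x - y_obs))

res = minimize(cost, x_prior, jac=grad, method="L-BFGS-B",
               bounds=[(0.1, 3.0)] * n_src)
print("optimized scaling factors:", np.round(res.x, 2))
print("true factors:             ", np.round(x_true, 2))
```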
APA, Harvard, Vancouver, ISO, and other styles
45

Abubakar, Yusuf. "DEVELOPMENT OF SOFTWARE FOR QUANTUM VOLTAGE SYNTHESISER." FUDMA JOURNAL OF SCIENCES 4, no. 3 (September 11, 2020): 17–25. http://dx.doi.org/10.33003/fjs-2020-0403-167.

Full text
Abstract:
Signal analysis is a wide area of research that has received considerable attention due to its numerous fields of application. The impact of these fields on technological advancement cannot be overemphasized. No matter how well a signal is acquired, it can still contain unwanted components (noise) capable of undermining its integrity. This work synthesizes the signal from a delta-sigma system by designing digital filters in a LabVIEW program. The designed lowpass and bandpass filters were examined, in addition to different filter design algorithms. Based on the analysis, Dolph-Chebyshev and Kaiser window filters provide excellent filtering, suppressing quantization noise down to -40 dB for the fixed-point design and -60 dB for the floating-point design. Despite known issues with windowing in filter design, these two algorithms work well with this type of signal. Additionally, the design can be further optimised by pushing the filter to its limits.
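The window-based filter comparison can be reproduced in outline with SciPy rather than LabVIEW. The sketch below designs low-pass FIR filters with Kaiser and Dolph-Chebyshev windows for an assumed sample rate, cut-off and transition width (none of these values are given in the abstract) and reports the worst stop-band gain, mirroring the -40/-60 dB attenuation figures discussed above.

```python
import numpy as np
from scipy import signal

fs = 48_000.0            # assumed sample rate (Hz)
cutoff = 6_000.0         # assumed low-pass edge (Hz)
trans_width = 2_000.0    # assumed transition band width (Hz)
atten_db = 60.0          # target stop-band attenuation, cf. the -60 dB figure

# Kaiser window: estimate filter order and beta from the attenuation spec.
numtaps, beta = signal.kaiserord(atten_db, trans_width / (0.5 * fs))
numtaps |= 1             # force an odd length for a symmetric linear-phase filter
taps_kaiser = signal.firwin(numtaps, cutoff, window=("kaiser", beta), fs=fs)

# Dolph-Chebyshev window with the same length and attenuation target.
taps_cheby = signal.firwin(numtaps, cutoff, window=("chebwin", atten_db), fs=fs)

for name, taps in [("kaiser", taps_kaiser), ("chebyshev", taps_cheby)]:
    w, h = signal.freqz(taps, worN=2048, fs=fs)
    stop = w > cutoff + trans_width
    print(name, "worst stop-band gain (dB):",
          round(20 * np.log10(np.max(np.abs(h[stop]))), 1))
```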
APA, Harvard, Vancouver, ISO, and other styles
46

Moayedi, Hossein, Bahareh Kalantar, Anastasios Dounis, Dieu Tien Bui, and Loke Kok Foong. "Development of Two Novel Hybrid Prediction Models Estimating Ultimate Bearing Capacity of the Shallow Circular Footing." Applied Sciences 9, no. 21 (October 29, 2019): 4594. http://dx.doi.org/10.3390/app9214594.

Full text
Abstract:
In the present work, we employed an artificial neural network (ANN) optimized by two hybrid models, namely the imperialist competition algorithm (ICA) and particle swarm optimization (PSO), for the problem of the bearing capacity of shallow circular footing systems. Many studies have shown that ANNs are valuable techniques for estimating the bearing capacity of soils. However, most ANN training models have some drawbacks. This study focuses on the application of the two well-known hybrid ICA–ANN and PSO–ANN models to estimating the bearing capacity of a circular footing lying on layered soils. In order to provide the training and testing datasets for the predictive network models, extensive finite element (FE) modelling (a database of 2810 training samples and 703 testing samples) was performed on 16 soil layer sets (weaker soil resting on stronger soil and vice versa). Note that all the independent variables of the ICA and PSO algorithms were tuned by trial and error. The inputs include the upper layer thickness/foundation width ratio (h/B), footing width (B), top and bottom soil layer properties (six of the most critical soil characteristics), and the vertical settlement of the circular footing (s), while the output is the ultimate bearing capacity of the circular footing (Fult). In terms of the coefficient of determination (R2) and root mean square error (RMSE), values of (0.979, 0.076) and (0.984, 0.066) were obtained for the training dataset, and values of (0.978, 0.075) and (0.983, 0.066) for the testing dataset, for the proposed PSO–ANN and ICA–ANN prediction models, respectively. This demonstrates the higher reliability of the presented PSO–ANN model for predicting the ultimate bearing capacity of a circular footing located on double sandy layer soils.
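As a sketch of the PSO-ANN idea (the ICA variant is analogous), the code below uses particle swarm optimization to fit the weights of a small one-hidden-layer network. The swarm parameters, network size and the synthetic eight-input dataset are assumptions made for the example; the actual FE database and the trial-and-error-tuned hyperparameters are not given in the abstract.

```python
import numpy as np

rng = np.random.default_rng(4)

def predict(vec, X, n_hid):
    n_in = X.shape[1]
    w1 = vec[: n_in * n_hid].reshape(n_in, n_hid)
    w2 = vec[n_in * n_hid:].reshape(n_hid, 1)
    return np.tanh(X @ w1) @ w2

def rmse(vec, X, y, n_hid):
    return np.sqrt(np.mean((predict(vec, X, n_hid).ravel() - y) ** 2))

def pso_train(X, y, n_hid=8, n_particles=30, iters=300, w=0.7, c1=1.5, c2=1.5):
    dim = X.shape[1] * n_hid + n_hid
    pos = rng.normal(0, 1, (n_particles, dim))
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_err = np.array([rmse(p, X, y, n_hid) for p in pos])
    gbest = pbest[np.argmin(pbest_err)].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos += vel
        err = np.array([rmse(p, X, y, n_hid) for p in pos])
        better = err < pbest_err
        pbest[better], pbest_err[better] = pos[better], err[better]
        gbest = pbest[np.argmin(pbest_err)].copy()
    return gbest, pbest_err.min()

# Synthetic stand-in for the FE database: 8 inputs (h/B, B, soil properties, s)
# and one output (Fult); the real 2810-sample dataset is not reproduced here.
X = rng.random((200, 8))
y = X[:, 0] * 2 + X[:, 1] - 0.5 * X[:, 7]
weights, err = pso_train(X, y)
print("training RMSE:", round(err, 4))
```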
APA, Harvard, Vancouver, ISO, and other styles
47

Li, Rui, and Peng Li. "CHP Microgrid Optimized Operation Based on Bacterial Foraging Optimization Algorithm." Advanced Materials Research 981 (July 2014): 668–72. http://dx.doi.org/10.4028/www.scientific.net/amr.981.668.

Full text
Abstract:
CHP systems, with their energy-saving, environmentally friendly and economical characteristics, have good prospects for development and application. This paper addresses a microgrid system consisting of photovoltaic cells, wind turbines, fuel cells, microturbines, auxiliary boilers, thermal energy storage systems and batteries, supplying both heat and electrical loads. Considering the generation costs of the various distributed power sources, environmental costs and microgrid equipment maintenance costs, and subject to the constraints of microgrid operation, the power output of the different distributed sources and of the energy storage system is optimized so that the system's total operating cost is minimized. The paper analyzes the economic and environmental characteristics of optimal microgrid operation and presents a model of the CHP microgrid. For different weightings of generation cost and emissions, the bacterial foraging optimization (BFO) algorithm is applied, and a numerical example verifies the correctness and effectiveness of the mathematical model and the optimization algorithm.
APA, Harvard, Vancouver, ISO, and other styles
48

Srivastava, Sweta, and Sudip Kumar Sahana. "Application of Bat Algorithm for Transport Network Design Problem." Applied Computational Intelligence and Soft Computing 2019 (January 1, 2019): 1–12. http://dx.doi.org/10.1155/2019/9864090.

Full text
Abstract:
The need for road services and transportation network development planning emerged with the development of civilization. In the modern urban transport scenario, with the ever-mounting number of vehicles, it is essential to tackle network congestion and to minimize travel time. This work is based on determining the optimal wait time at traffic signals for a microscopic discrete model. The problem is formulated as a bilevel model: the upper level optimizes travel time by reducing the wait time at traffic signals, and the lower level solves the stochastic user equilibrium. Soft computing techniques such as Genetic Algorithms, Ant Colony Optimization, and many other biologically inspired techniques have been shown to give good results for bilevel problems. Here, this work uses Bat Intelligence to solve the transport network design problem. The results are compared with those of existing techniques.
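A compact sketch of the bat algorithm applied to a stand-in signal-timing objective is given below. The placeholder cost function replaces the paper's bilevel evaluation (upper-level travel time computed after solving the lower-level stochastic user equilibrium), and all parameter values and bounds are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)

def total_delay(green_times):
    # Placeholder objective; in the paper this would be network travel time
    # evaluated after solving the stochastic user equilibrium.
    target = np.array([30.0, 45.0, 25.0, 60.0])
    return np.sum((green_times - target) ** 2)

def bat_algorithm(dim=4, n_bats=25, iters=500, lb=10.0, ub=90.0,
                  fmin=0.0, fmax=2.0, loud0=0.9, pulse0=0.5, alpha=0.9, gamma=0.9):
    pos = rng.uniform(lb, ub, (n_bats, dim))
    vel = np.zeros((n_bats, dim))
    loud = np.full(n_bats, loud0)
    pulse = np.full(n_bats, pulse0)
    cost = np.array([total_delay(p) for p in pos])
    best = pos[np.argmin(cost)].copy()
    for t in range(1, iters + 1):
        for i in range(n_bats):
            freq = fmin + (fmax - fmin) * rng.random()
            vel[i] += (pos[i] - best) * freq
            cand = np.clip(pos[i] + vel[i], lb, ub)
            if rng.random() > pulse[i]:
                # Local random walk around the current best solution.
                cand = np.clip(best + 0.01 * loud.mean() * rng.normal(size=dim), lb, ub)
            c = total_delay(cand)
            if c < cost[i] and rng.random() < loud[i]:
                pos[i], cost[i] = cand, c
                loud[i] *= alpha                       # quieter as it converges
                pulse[i] = pulse0 * (1 - np.exp(-gamma * t))
            if c < total_delay(best):
                best = cand.copy()
    return best, total_delay(best)

best, c = bat_algorithm()
print("optimized signal timings:", np.round(best, 1), "cost:", round(c, 2))
```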
APA, Harvard, Vancouver, ISO, and other styles
49

Wu, Zuhang, Yun Zhang, Lifeng Zhang, Xiaolong Hao, Hengchi Lei, and Hepeng Zheng. "Validation of GPM Precipitation Products by Comparison with Ground-Based Parsivel Disdrometers over Jianghuai Region." Water 11, no. 6 (June 16, 2019): 1260. http://dx.doi.org/10.3390/w11061260.

Full text
Abstract:
In this study, we evaluated the performance of rain-retrieval algorithms for the Version 6 Global Precipitation Measurement Dual-frequency Precipitation Radar (GPM DPR) products against disdrometer observations and improved the retrieval algorithms by using a revised shape parameter µ derived from long-term Particle Size Velocity (Parsivel) disdrometer observations in the Jianghuai region from 2014 to 2018. To obtain the optimized shape parameter, raindrop size distribution (DSD) characteristics of the summer and winter seasons over the Jianghuai region are analyzed in terms of six rain rate classes and two rain categories (convective and stratiform). The results suggest that the GPM DPR may perform better for winter rain than for summer rain over the Jianghuai region, with biases of 40% (80%) in winter (summer). The retrieval errors of the rain category-based µ (3–5%) proved to be the smallest, compared with the rain rate-based µ (11–13%) or a constant µ (20–22%), suggesting a possible application to rainfall estimation over the Jianghuai region. Empirical Dm–Ze and Nw–Dm relationships were also derived preliminarily to improve the GPM rainfall estimates over the Jianghuai region.
APA, Harvard, Vancouver, ISO, and other styles
50

Martínez-López, Alba, and Manuel Chica. "Joint Optimization of Routes and Container Fleets to Design Sustainable Intermodal Chains in Chile." Sustainability 12, no. 6 (March 12, 2020): 2221. http://dx.doi.org/10.3390/su12062221.

Full text
Abstract:
This paper introduces a decision support tool for sustainable intermodal chains with seaborne transport, in which the optimization of a multi-objective model enables conflicting objectives to be handled simultaneously. Through the assessment of ‘door-to-door’ transport in terms of costs, time and environmental impact, the most suitable maritime route and the optimized fleet are jointly calculated to maximize the opportunities for success of intermodal chains versus trucking. Solving the model with NSGA-II algorithms yields Pareto fronts that offer groups of optimized solutions. This is useful not only for making short-term decisions but also for establishing long-term strategies by assessing the behaviour of the fronts under sensitivity analysis. Thus, the consequences of transport policies for intermodal performance can be analyzed. A real-life case is studied to test the usefulness of the model. From the application case, not only are the most suitable Motorways of the Sea and their optimized fleets identified for Chile, but significant general findings are also provided for both policy makers and heads of ports to promote the intermodal option regardless of their geographical locations.
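The core of the multi-objective step is extracting the non-dominated (Pareto-optimal) set of candidate chains. A minimal sketch of that filter is shown below with illustrative cost/time/emissions figures that are not taken from the Chilean case study; the paper obtains its fronts with the full NSGA-II procedure rather than by enumerating candidates.

```python
import numpy as np

def pareto_front(objectives):
    """objectives: (n_solutions, n_objectives) array, all objectives minimized.
    Returns the indices of non-dominated solutions (the Pareto front)."""
    n = len(objectives)
    keep = np.ones(n, dtype=bool)
    for i in range(n):
        if not keep[i]:
            continue
        dominates = (np.all(objectives[i] <= objectives, axis=1) &
                     np.any(objectives[i] < objectives, axis=1))
        keep &= ~dominates   # drop every solution dominated by solution i
        keep[i] = True
    return np.where(keep)[0]

# Candidate intermodal chains evaluated on (cost, time, emissions); hypothetical values.
chains = np.array([
    [1200.0, 72.0, 310.0],   # road only
    [950.0, 96.0, 190.0],    # short-sea route A
    [990.0, 90.0, 185.0],    # short-sea route B
    [1400.0, 60.0, 400.0],   # express road
    [980.0, 98.0, 200.0],    # dominated by route A
])
print("Pareto-optimal chain indices:", pareto_front(chains))
```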
APA, Harvard, Vancouver, ISO, and other styles
