Follow this link to see other types of publications on the topic: M5P Algorithm.

Journal articles on the topic "M5P Algorithm"

Create an accurate citation in APA, MLA, Chicago, Harvard, and other styles

Consult the top 50 journal articles for your research on the topic "M5P Algorithm".

Next to every source in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the scholarly publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles on a wide variety of disciplines and organize your bibliography correctly.

1

Melesse, Assefa M., Khabat Khosravi, John P. Tiefenbacher, Salim Heddam, Sungwon Kim, Amir Mosavi, and Binh Thai Pham. "River Water Salinity Prediction Using Hybrid Machine Learning Models". Water 12, no. 10 (October 21, 2020): 2951. http://dx.doi.org/10.3390/w12102951.

Full text
Abstract
Electrical conductivity (EC), one of the most widely used indices for water quality assessment, has been applied to predict the salinity of the Babol-Rood River, the greatest source of irrigation water in northern Iran. This study uses two individual—M5 Prime (M5P) and random forest (RF)—and eight novel hybrid algorithms—bagging-M5P, bagging-RF, random subspace (RS)-M5P, RS-RF, random committee (RC)-M5P, RC-RF, additive regression (AR)-M5P, and AR-RF—to predict EC. Thirty-six years of observations collected by the Mazandaran Regional Water Authority were randomly divided into two sets: 70% from the period 1980 to 2008 was used as model-training data and 30% from 2009 to 2016 was used as testing data to validate the models. Several water quality variables—pH, HCO3−, Cl−, SO42−, Na+, Mg2+, Ca2+, river discharge (Q), and total dissolved solids (TDS)—were the modeling inputs. Using EC and the correlation coefficients (CC) of the water quality variables, a set of nine input combinations was established. TDS, the most effective input variable, had the highest EC-CC (r = 0.91), and it was also determined to be the most important input variable among the input combinations. All models were trained and each model's prediction power was evaluated with the testing data. Several quantitative criteria and visual comparisons were used to evaluate modeling capabilities. Results indicate that, in most cases, hybrid algorithms enhance the individual algorithms' predictive power. The AR algorithm enhanced both M5P and RF predictions better than bagging, RS, and RC did. M5P performed better than RF. Further, AR-M5P outperformed all other algorithms (R2 = 0.995, RMSE = 8.90 μS/cm, MAE = 6.20 μS/cm, NSE = 0.994 and PBIAS = −0.042). The hybridization of machine learning methods significantly improved model performance in capturing maximum salinity values, which is essential in water resource management.
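The M5P "model tree" at the heart of the hybrid algorithms above combines decision-tree splits with linear regression models in the leaves. A minimal single-split sketch of that idea (an illustration only, not the paper's implementation or Weka's M5P):

```python
# Minimal sketch of the M5P "model tree" idea: a single split on one
# feature, with an ordinary least-squares line fitted in each leaf.
# Illustration only — real M5P builds a full tree and prunes it.

def fit_line(xs, ys):
    """Closed-form simple linear regression: returns (slope, intercept)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx if sxx else 0.0
    return slope, my - slope * mx

def sse(xs, ys):
    """Sum of squared errors of the best-fit line over (xs, ys)."""
    slope, b = fit_line(xs, ys)
    return sum((y - (slope * x + b)) ** 2 for x, y in zip(xs, ys))

def fit_model_tree(xs, ys):
    """Pick the split point minimising total leaf SSE; fit a line per leaf."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    xs, ys = [xs[i] for i in order], [ys[i] for i in order]
    best = None
    for k in range(2, len(xs) - 2):  # keep at least 2 points per leaf
        cost = sse(xs[:k], ys[:k]) + sse(xs[k:], ys[k:])
        if best is None or cost < best[0]:
            best = (cost, xs[k])
    _, threshold = best
    left = [(x, y) for x, y in zip(xs, ys) if x < threshold]
    right = [(x, y) for x, y in zip(xs, ys) if x >= threshold]
    return threshold, fit_line(*zip(*left)), fit_line(*zip(*right))

def predict(model, x):
    threshold, (ls, lb), (rs, rb) = model
    return ls * x + lb if x < threshold else rs * x + rb

# Piecewise-linear data: y = x for x < 5, y = 2x - 5 for x >= 5.
data_x = [float(i) for i in range(10)]
data_y = [x if x < 5 else 2 * x - 5 for x in data_x]
tree = fit_model_tree(data_x, data_y)
```

The hybrids in the abstract (bagging, random subspace, additive regression) would each train many such trees on resampled or reweighted data and combine their predictions.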
2

Debastiani, Aline Bernarda, Sílvio Luís Rafaeli Neto, and Ricardo Dalagnol da Silva. "ÁRVORE MODELO FRENTE A UMA REDE NEURAL ARTIFICIAL PARA A MODELAGEM CHUVA-VAZÃO" [Model tree versus an artificial neural network for rainfall-runoff modeling]. Nativa 7, no. 5 (September 12, 2019): 527. http://dx.doi.org/10.31413/nativa.v7i5.7089.

Full text
Abstract
The aim of this study is to investigate the performance of the model tree (M5P) and its sensitivity to pruning, and to compare it with the performance of an Artificial Neural Network (ANN) for the simulation of mean daily discharge by month. The motivation for this analysis is the greater simplicity and processing speed of M5P compared with ANNs, and the scarcity of studies applying this method in hydrological modeling. The study was carried out in the Alto Canoas watershed, with an experimental design consisting of a training period, a cross-validation period, and two testing periods. The ANN used was the Multi Layer Perceptron (MLP), implemented in MATLAB software, and M5P (with and without pruning) is available in the WEKA software. The M5P algorithm proved sensitive to pruning in only half of the treatments. M5P showed a good fit in the modeling, but the ANN presented superior performance in all treatments. Keywords: artificial neural network; regression tree; Alto Canoas basin.
3

Thai, Thi Huyen, Richard Ansong Omari, Dietmar Barkusky, and Sonoko Dorothea Bellingrath-Kimura. "Statistical Analysis versus the M5P Machine Learning Algorithm to Analyze the Yield of Winter Wheat in a Long-Term Fertilizer Experiment". Agronomy 10, no. 11 (November 13, 2020): 1779. http://dx.doi.org/10.3390/agronomy10111779.

Full text
Abstract
To compare how different analytical methods explain crop yields from a long-term field experiment (LTFE), we analyzed the grain yield of winter wheat (WW) under different fertilizer applications in Müncheberg, Germany. An analysis of variance (ANOVA), a linear mixed-effects model (LMM), and an M5P regression tree model were used to evaluate the grain yield response. All the methods identified fertilizer application and environmental factors as the main variables that explained 80% of the variance in grain yields. Mineral nitrogen fertilizer (NF) application was the major factor that influenced the grain yield in all methods. Farmyard manure slightly influenced the grain yield with no NF application in the ANOVA and M5P regression tree. While sources of environmental factors were unmeasured in the ANOVA test, they were quantified in detail in the LMM and M5P model. The LMM and M5P model identified the cumulative number of freezing days in December as the main climate-based determinant of the grain yield variation. Additionally, the temperature in October, the cumulative number of freezing days in February, the yield of the preceding crop, and the total nitrogen in the soil were determinants of the grain yield in both models. Apart from the common determinants that appeared in both models, the LMM additionally showed precipitation in June and the cumulative number of days in July with temperatures above 30 °C, while the M5P model showed soil organic carbon as an influencing factor of the grain yield. The ANOVA results provide only the main factors affecting the WW yield. The LMM had a better predictive performance than the M5P, with smaller root mean square and mean absolute errors, and both were richer regressors than the ANOVA. The M5P model presented an intuitive visualization of important variables and their critical thresholds, and revealed other variables that were not captured by the LMM. Hence, the use of different methods can strengthen the statement of the analysis, and thus the co-use of the LMM and M5P model should be considered, especially in large databases involving multiple variables.
4

Kazllarof, Vangjel, Stamatis Karlos, and Sotiris Kotsiantis. "Investigation of Combining Logitboost(M5P) under Active Learning Classification Tasks". Informatics 7, no. 4 (November 3, 2020): 50. http://dx.doi.org/10.3390/informatics7040050.

Full text
Abstract
Active learning is the category of partially supervised algorithms that is differentiated by its strategy of combining both the predictive ability of a base learner and human knowledge so as to adequately exploit the existence of unlabeled data. Its ambition is to compose powerful learning algorithms which would otherwise be based only on insufficient labelled samples. Since the latter kind of information could incur substantial monetary costs and time obstacles, the human contribution should be seriously restricted compared with the former. For this reason, we investigate the use of the Logitboost wrapper classifier, a popular variant of ensemble algorithms which adopts the technique of boosting along with a regression base learner based on model trees, under 3 different active learning query strategies. We study its efficiency against 10 separate learners under a well-described active learning framework over 91 datasets split into binary and multi-class problems. We also included one typical Logitboost variant with a separate internal regressor to discriminate the benefits of adopting a more accurate regression tree over one-node trees, and we examined the efficacy of one hyperparameter of the proposed algorithm. Since the application of the boosting technique may provide overall less biased predictions, we assume that the proposed algorithm, named Logitboost(M5P), can provide both accurate and robust decisions under active learning scenarios, which would be beneficial for real-life weakly supervised classification tasks. Its smoother weighting stage over the misclassified cases during training, as well as the accurate behavior of M5P, are the main factors that lead towards this performance. Proper statistical comparisons over the metric of classification accuracy verify our assumptions, while adopting M5P instead of weak decision trees proved to be more competitive for the majority of the examined problems. We present our results through appropriate summarization approaches and explanatory visualizations, commenting on our results per case.
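Active-learning query strategies like those studied above all reduce to ranking unlabeled instances by some informativeness score. As a generic illustration (the paper's three specific strategies are not reproduced here), least-confident uncertainty sampling queries the instance whose most probable class has the lowest predicted probability:

```python
# Generic least-confident uncertainty sampling — the simplest active
# learning query strategy. A sketch, not the paper's experimental setup.

def least_confident(probabilities):
    """Return the index of the unlabeled instance whose most probable
    class has the lowest predicted probability (i.e. most uncertain)."""
    def confidence(p):
        return max(p)
    return min(range(len(probabilities)),
               key=lambda i: confidence(probabilities[i]))

# Hypothetical class-probability predictions from a base learner
# (e.g. a boosted ensemble) over four unlabeled instances.
pool = [
    [0.95, 0.05],  # confident
    [0.55, 0.45],  # most uncertain -> should be queried
    [0.80, 0.20],
    [0.70, 0.30],
]
query_index = least_confident(pool)  # instance to send to the human oracle
```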
5

Zhan, Chengjun, Albert Gan, and Mohammed Hadi. "Prediction of Lane Clearance Time of Freeway Incidents Using the M5P Tree Algorithm". IEEE Transactions on Intelligent Transportation Systems 12, no. 4 (December 2011): 1549–57. http://dx.doi.org/10.1109/tits.2011.2161634.

Full text
6

Kandiri, Amirreza, Farid Sartipi, and Mahdi Kioumarsi. "Predicting Compressive Strength of Concrete Containing Recycled Aggregate Using Modified ANN with Different Optimization Algorithms". Applied Sciences 11, no. 2 (January 6, 2021): 485. http://dx.doi.org/10.3390/app11020485.

Full text
Abstract
Using recycled aggregate in concrete is one of the best ways to reduce construction pollution and to avoid exploiting natural resources to supply the needed aggregate. Recycled aggregates, however, affect the mechanical properties of concrete, and the existing information on the subject is less than what the industry needs. Compressive strength, meanwhile, is the most important mechanical property of concrete. Therefore, predictive models that provide the required information can help convince the industry to increase the use of recycled aggregate in concrete. In this research, three different optimization algorithms, the genetic algorithm (GA), the salp swarm algorithm (SSA), and the grasshopper optimization algorithm (GOA), are each hybridized with an artificial neural network (ANN) to predict the compressive strength of concrete containing recycled aggregate, and an M5P tree model is used to test the efficiency of the ANNs. The results of this study show the superior efficiency of the ANN modified with SSA when compared to the other models, although the statistical indicators of the hybrid ANNs with SSA, GA, and GOA are very close to each other.
7

Behnood, Ali, Venous Behnood, Mahsa Modiri Gharehveran, and Kursat Esat Alyamac. "Prediction of the compressive strength of normal and high-performance concretes using M5P model tree algorithm". Construction and Building Materials 142 (July 2017): 199–207. http://dx.doi.org/10.1016/j.conbuildmat.2017.03.061.

Full text
8

Saltos, Ginger, and Mihaela Cocea. "An Exploration of Crime Prediction Using Data Mining on Open Data". International Journal of Information Technology & Decision Making 16, no. 05 (September 2017): 1155–81. http://dx.doi.org/10.1142/s0219622017500250.

Full text
Abstract
The increase in crime data recording coupled with data analytics resulted in the growth of research approaches aimed at extracting knowledge from crime records to better understand criminal behavior and ultimately prevent future crimes. While many of these approaches make use of clustering and association rule mining techniques, there are fewer approaches focusing on predictive models of crime. In this paper, we explore models for predicting the frequency of several types of crimes by LSOA code (Lower Layer Super Output Areas — an administrative system of areas used by the UK police) and the frequency of anti-social behavior crimes. Three algorithms are used from different categories of approaches: instance-based learning, regression and decision trees. The data are from the UK police and contain over 600,000 records before preprocessing. The results, looking at predictive performance as well as processing time, indicate that decision trees (M5P algorithm) can be used to reliably predict crime frequency in general as well as anti-social behavior frequency.
9

Behnood, Ali, and Dana Daneshvar. "A machine learning study of the dynamic modulus of asphalt concretes: An application of M5P model tree algorithm". Construction and Building Materials 262 (November 2020): 120544. http://dx.doi.org/10.1016/j.conbuildmat.2020.120544.

Full text
10

Liu, Zhao, Jin-sheng Yang, Yuan Wu, Ou Zhang, Min Chen, Ling-ling Huang, Xiu-qing He, Guan-yi Wu, and Ying-ying Wang. "Predictors for Smoking Cessation with Acupuncture in a Hong Kong Population". Evidence-Based Complementary and Alternative Medicine 2015 (2015): 1–8. http://dx.doi.org/10.1155/2015/189694.

Full text
Abstract
Background. Observational studies of smoking cessation with acupuncture have been reported widely; however, few researchers have focused on its predictors. Objective. This paper attempts to explore the predictors for smoking cessation with acupuncture in a Hong Kong population, aiming to provide references for clinical treatment in the future. Methods. We performed a secondary analysis of data from our observational study "Acupuncture for Smoking Cessation (2011–2014)" in Hong Kong. A total of 23 indexes were selected as possible predictors, and study participants with complete information on all 23 indexes were included. Taking the 8-week and 52-week smoking cessation results as dependent variables, a binary logistic regression method was used to identify the predictors. Additionally, based on an M5P decision-tree algorithm, an equation for the "successful rate of smoking cessation with acupuncture" was calculated. Results. (1) 2,051 study participants were included in total. (2) According to the results of the binary logistic regression, variables including treatment location, total number of acupuncture sessions received, and whether the study participants received at least 6 sessions of acupuncture were taken as the short-term predictors; gender, treatment location, Fagerstrom Test for Nicotine Dependence (FTND), and total number of acupuncture sessions received were taken as the long-term predictors. (3) According to study participants' FTND, treatment location, and number of cigarettes smoked per day, the equation for the "successful rate of smoking cessation with acupuncture" was established. Conclusion. Receiving sufficient and qualified acupuncture is the leading factor for short-term smoking cessation with acupuncture, whereas individual factors and smoking background play a more important role in long-term smoking cessation with acupuncture.
11

Romon, Sébastien, Xavier Bressaud, Sylvain Lassarre, Guillaume Saint Pierre, and Louahdi Khoudour. "Map-matching Algorithm for Large Databases". Journal of Navigation 68, no. 5 (March 18, 2015): 971–88. http://dx.doi.org/10.1017/s0373463315000156.

Full text
Abstract
This article proposes a batch-mode algorithm to handle the large databases generated from experiments using probe vehicles. The algorithm can locate raw Global Positioning System (GPS) positions on a map, but it can also be used to correct map-matching errors introduced by real-time map-matching algorithms. For each journey, the algorithm globally searches for the path closest to the GPS positions, and is thus inspired by the "path to path" family of algorithms. It uses the Multiple Hypothesis Technique (MHT) and relies on an innovative weighting system based on the area between the GPS points and the arcs making up the path. For high performance, the algorithm uses an iterative program, and the data is stored in tree form.
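The area-based weighting can be pictured with the shoelace formula: close a polygon by walking the GPS points forward and the candidate arc's vertices backward, and use the enclosed area as a lower-is-better match cost. A sketch under that assumption; the paper's exact weighting scheme is not reproduced:

```python
# Area between a GPS trace and a candidate road arc as a match cost:
# a smaller enclosed area means a better match. Illustration only —
# not the paper's exact weighting formula.

def polygon_area(points):
    """Shoelace formula: absolute area of the polygon whose vertices
    are (x, y) pairs given in order."""
    n = len(points)
    s = 0.0
    for i in range(n):
        x1, y1 = points[i]
        x2, y2 = points[(i + 1) % n]
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0

# Close the loop: GPS points forward, arc vertices backward.
gps = [(0.0, 0.1), (1.0, 0.2), (2.0, 0.1)]
arc = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]
weight = polygon_area(gps + arc[::-1])  # smaller -> better match
```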
12

Gao, Fan, Xiao Zheng Mai, and Bing Liang Cui. "An Algorithm for Map Coloring Problem Based on Depth First Search". Advanced Materials Research 424-425 (January 2012): 480–83. http://dx.doi.org/10.4028/www.scientific.net/amr.424-425.480.

Full text
Abstract
By analyzing the characteristics of the depth-first search algorithm, we propose a new map coloring algorithm. The proposed algorithm overcomes the disadvantages of other algorithms in the field of map coloring, and the results show that it can solve the problem of coloring an administrative map efficiently and obtain optimal solutions.
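A depth-first map colorer of the kind the abstract describes can be sketched as a backtracking routine over an adjacency list (a minimal illustration, without the paper's specific improvements):

```python
# Depth-first backtracking map colorer over an adjacency list —
# a minimal sketch of the DFS idea, not the paper's improved algorithm.

def color_map(adjacency, num_colors):
    """Assign each region a color in 0..num_colors-1 so that no two
    adjacent regions share a color; returns None if impossible."""
    regions = list(adjacency)
    colors = {}

    def dfs(i):
        if i == len(regions):
            return True
        region = regions[i]
        for c in range(num_colors):
            # Try c if no already-colored neighbor uses it.
            if all(colors.get(nb) != c for nb in adjacency[region]):
                colors[region] = c
                if dfs(i + 1):
                    return True
                del colors[region]  # backtrack
        return False

    return colors if dfs(0) else None

# Four regions; A borders everyone, and A-B-C form a triangle,
# so at least three colors are required.
borders = {
    "A": ["B", "C", "D"],
    "B": ["A", "C"],
    "C": ["A", "B", "D"],
    "D": ["A", "C"],
}
coloring = color_map(borders, 3)
```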
13

Wang, Hai Ning, Shou Qian Sun, Jian Feng Wu, and Fu Qian Shi. "Research of Tent Map Based Chaotic Particle Swarm Optimization Algorithm for Emotion Recognition". Advanced Materials Research 143-144 (October 2010): 1280–84. http://dx.doi.org/10.4028/www.scientific.net/amr.143-144.1280.

Full text
Abstract
To address the feature redundancy of emotion recognition based on multi-channel physiological signals, and the low efficiency of traditional feature reduction algorithms on large sample data, a new chaotic particle swarm optimization algorithm (TM-CPSO) was proposed to solve the problem of emotion feature selection by combining a tent-map-based chaos search mechanism with an improved particle swarm optimization algorithm. The problem of falling into local minima is avoided by mapping the search process onto the recursive procedure of the chaotic orbit. The recognition rate and efficiency were increased, and the algorithm's validity was verified through the analysis of experimental simulation data and a comparison of several recognition methods.
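The tent map that drives the chaotic search is simple to state: a piecewise-linear map on [0, 1] iterated to produce the orbit onto which the search process is mapped. A sketch of the map itself, not the full TM-CPSO algorithm:

```python
# The tent map behind TM-CPSO-style chaotic search: a piecewise-linear
# map on [0, 1] whose orbit scatters widely over the interval.
# Sketch of the map only — no swarm update rules here.

def tent_map(x, mu=2.0):
    """One tent-map step; mu = 2 is the classic fully chaotic setting."""
    return mu * x if x < 0.5 else mu * (1.0 - x)

def chaotic_sequence(seed, length):
    """Iterate the tent map from a seed; sequences like this are used
    e.g. to scatter initial particle positions."""
    xs = []
    x = seed
    for _ in range(length):
        x = tent_map(x)
        xs.append(x)
    return xs

positions = chaotic_sequence(0.37, 50)  # all values stay in [0, 1]
```

Note that in double-precision floating point the mu = 2 tent map eventually collapses onto 0 (every float is a dyadic rational), so practical implementations often perturb the orbit or use mu slightly below 2.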
14

Kudo, Yasushi, Kenro Iwaki, Hideharu Takahashi, Naoki Oouchi, and Toshihiko Onodera. "3-D image using MIP algorithm". Japanese Journal of Radiological Technology 52, no. 2 (1996): 130. http://dx.doi.org/10.6009/jjrt.kj00001354051.

Full text
15

Karapetyan, Daniel, and Gregory Gutin. "A New Approach to Population Sizing for Memetic Algorithms: A Case Study for the Multidimensional Assignment Problem". Evolutionary Computation 19, no. 3 (September 2011): 345–71. http://dx.doi.org/10.1162/evco_a_00026.

Full text
Abstract
Memetic algorithms are known to be a powerful technique in solving hard optimization problems. To design a memetic algorithm, one needs to make a host of decisions. Selecting the population size is one of the most important among them. Most of the algorithms in the literature fix the population size to a certain constant value. This reduces the algorithm's quality since the optimal population size varies for different instances, local search procedures, and runtimes. In this paper we propose an adjustable population size. It is calculated as a function of the runtime of the whole algorithm and the average runtime of the local search for the given instance. Note that in many applications the runtime of a heuristic should be limited and, therefore, we use this bound as a parameter of the algorithm. The average runtime of the local search procedure is measured during the algorithm's run. Some coefficients which are independent of the instance and the local search are to be tuned at the design time; we provide a procedure to find these coefficients. The proposed approach was used to develop a memetic algorithm for the multidimensional assignment problem (MAP). We show that our adjustable population size makes the algorithm flexible to perform efficiently for a wide range of running times and local searches and this does not require any additional tuning of the algorithm.
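The key idea above, a population size computed from the total runtime budget and the measured average local-search runtime, has roughly this shape (the coefficient and bounds below are illustrative placeholders, not the tuned values from the paper):

```python
# Adjustable population sizing in the spirit of the paper: size grows
# with the time budget and shrinks as the local search gets slower.
# alpha, lower and upper are made-up illustrative values, NOT the
# instance-independent coefficients the authors tune.

def population_size(total_budget_s, avg_local_search_s,
                    alpha=0.1, lower=10, upper=500):
    """Roughly: how many local-search runs fit in the budget, scaled
    by alpha, then clamped to sane bounds."""
    raw = alpha * total_budget_s / avg_local_search_s
    return max(lower, min(upper, int(raw)))

# A longer budget (or a cheaper local search) yields a larger population.
small = population_size(total_budget_s=60.0, avg_local_search_s=0.5)
large = population_size(total_budget_s=600.0, avg_local_search_s=0.5)
```

In the paper the average local-search runtime is measured during the run itself, so the population size adapts to each instance rather than being fixed at design time.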
16

CHO, KILSEOK, ALAN D. GEORGE, RAJ SUBRAMANIYAN, and KEONWOOK KIM. "PARALLEL ALGORITHMS FOR ADAPTIVE MATCHED-FIELD PROCESSING ON DISTRIBUTED ARRAY SYSTEMS". Journal of Computational Acoustics 12, no. 02 (June 2004): 149–74. http://dx.doi.org/10.1142/s0218396x04002274.

Full text
Abstract
Matched-field processing (MFP) localizes sources more accurately than plane-wave beamforming by employing full-wave acoustic propagation models for the cluttered ocean environment. The minimum variance distortionless response MFP (MVDR–MFP) algorithm incorporates the MVDR technique into the MFP algorithm to enhance beamforming performance. Such an adaptive MFP algorithm involves intensive computational and memory requirements due to its complex acoustic model and environmental adaptation. The real-time implementation of adaptive MFP algorithms for large surveillance areas presents a serious computational challenge where high-performance embedded computing and parallel processing may be required to meet real-time constraints. In this paper, three parallel algorithms based on domain decomposition techniques are presented for the MVDR–MFP algorithm on distributed array systems. The parallel performance factors in terms of execution times, communication times, parallel efficiencies, and memory capacities are examined on three potential distributed systems including two types of digital signal processor arrays and a cluster of personal computers. The performance results demonstrate that these parallel algorithms provide a feasible solution for real-time, scalable, and cost-effective adaptive beamforming on embedded, distributed array systems.
17

Yaghini, Masoud, Mohsen Momeni, and Mohammadreza Sarmadi. "A DIMMA-Based Memetic Algorithm for 0-1 Multidimensional Knapsack Problem Using DOE Approach for Parameter Tuning". International Journal of Applied Metaheuristic Computing 3, no. 2 (April 2012): 43–55. http://dx.doi.org/10.4018/jamc.2012040104.

Full text
Abstract
The Multidimensional 0-1 Knapsack Problem (MKP) is a well-known integer programming problem. The objective of the MKP is to find a subset of items with maximum value satisfying the capacity constraints. A memetic algorithm based on the Design and Implementation Methodology for Metaheuristic Algorithms (DIMMA) is proposed to solve the MKP. DIMMA is a new methodology for developing a metaheuristic algorithm. The memetic algorithm is categorized as a metaheuristic and belongs to a particular class of evolutionary algorithms. The parameters of the proposed algorithm are tuned by a Design of Experiments (DOE) approach. DOE refers to the process of planning the experiment so that appropriate data, which can be analyzed by statistical methods, will be collected, resulting in valid and objective conclusions. The proposed algorithm is tested on several standard MKP instances from OR-Library. The results show the efficiency and effectiveness of the proposed algorithm.
18

Thirumaran, S., et al. "Fast Frequent Item Mining from Big Data using Map Reduce and Bit Vectors". Turkish Journal of Computer and Mathematics Education (TURCOMAT) 12, no. 2 (April 11, 2021): 1866–75. http://dx.doi.org/10.17762/turcomat.v12i2.1525.

Full text
Abstract
Big data is one of the areas most in focus recently, and mining frequent patterns from it is an interesting vertical that is perpetually evolving and has gained plenty of attention among research communities. Generally, the data is mined with the aid of Apriori-based, tree-based, and hash-based algorithms, but most of these existing algorithms suffer from many snags and limitations. This paper proposes a new method that overcomes the most common problems related to speed, memory consumption, and search space. The algorithm, named Dual Mine, employs binary vector and vertical data representations in the map-reduce framework and then discovers frequent patterns from large data sets. The Dual Mine algorithm is then compared with some of the existing algorithms to determine its efficiency, and from the experimental results it is quite evident that the proposed algorithm outscored the other algorithms by a large margin with respect to speed and memory.
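The bit-vector representation at the core of approaches like Dual Mine can be sketched on a single machine: each item's occurrences across transactions become one integer bitmap, and the support of an itemset is the popcount of the AND of its bitmaps (the MapReduce distribution is omitted):

```python
# Bit-vector support counting — the vertical representation idea behind
# bit-vector frequent-item miners. Single-machine sketch; the paper's
# MapReduce partitioning is not reproduced.

def build_bitmaps(transactions):
    """Map each item to an int whose bit t is set iff the item occurs
    in transaction t."""
    bitmaps = {}
    for t, items in enumerate(transactions):
        for item in items:
            bitmaps[item] = bitmaps.get(item, 0) | (1 << t)
    return bitmaps

def support(bitmaps, itemset):
    """Number of transactions containing every item of the (non-empty)
    itemset: popcount of the AND of the item bitmaps."""
    items = list(itemset)
    combined = bitmaps.get(items[0], 0)
    for item in items[1:]:
        combined &= bitmaps.get(item, 0)
    return bin(combined).count("1")

transactions = [
    {"bread", "milk"},
    {"bread", "butter"},
    {"bread", "milk", "butter"},
    {"milk"},
]
bm = build_bitmaps(transactions)
```

Intersecting bitmaps with a single AND is what makes this representation fast: candidate itemsets are checked without rescanning the transaction list.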
19

Sufriyana, Herdiantri, Yu-Wei Wu, and Emily Chia-Yu Su. "Prediction of Preeclampsia and Intrauterine Growth Restriction: Development of Machine Learning Models on a Prospective Cohort". JMIR Medical Informatics 8, no. 5 (May 18, 2020): e15411. http://dx.doi.org/10.2196/15411.

Full text
Abstract
Background Preeclampsia and intrauterine growth restriction are placental dysfunction–related disorders (PDDs) that require a referral decision to be made within a certain time period. An appropriate prediction model should be developed for these diseases. However, previous models did not demonstrate robust performances and/or they were developed from datasets with highly imbalanced classes. Objective In this study, we developed a predictive model of PDDs by machine learning that uses features at 24-37 weeks' gestation, including maternal characteristics, uterine artery (UtA) Doppler measures, soluble fms-like tyrosine kinase receptor-1 (sFlt-1), and placental growth factor (PlGF). Methods A public dataset was taken from a prospective cohort study that included pregnant women with PDDs (66/95, 69%) and a control group (29/95, 31%). Preliminary selection of features was based on a statistical analysis using SAS 9.4 (SAS Institute). We used Weka (Waikato Environment for Knowledge Analysis) 3.8.3 (The University of Waikato, Hamilton, NZ) to automatically select the best model using its optimization algorithm. We also manually selected the best of 23 white-box models. Models, including those from recent studies, were also compared by interval estimation of evaluation metrics. We used the Matthews correlation coefficient (MCC) as the main metric, since it is not overoptimistic when evaluating the performance of a prediction model developed from a dataset with a class imbalance. Repeated 10-fold cross-validation was applied. Results The classification via regression model was chosen as the best model. Our model had a robust MCC (.93, 95% CI .87-1.00, vs .64, 95% CI .57-.71) and specificity (100%, 95% CI 100-100, vs 90%, 95% CI 90-90) compared to each metric of the best models from recent studies. The sensitivity of this model was not inferior (95%, 95% CI 91-100, vs 100%, 95% CI 92-100). The area under the receiver operating characteristic curve was also competitive (0.970, 95% CI 0.966-0.974, vs 0.987, 95% CI 0.980-0.994). Features in the best model were maternal weight, BMI, pulsatility index of the UtA, sFlt-1, and PlGF. The most important feature was the sFlt-1/PlGF ratio. This model used an M5P algorithm consisting of a decision tree and four linear models with different thresholds. Our study was also better than the best ones among recent studies in terms of the class balance and the size of the case class (66/95, 69%, vs 27/239, 11.3%). Conclusions Our model had a robust predictive performance. It was also developed to deal with the problem of a class imbalance. In the context of clinical management, this model may reduce maternal mortality and neonatal morbidity and reduce health care costs.
20

Wang, Di, and Yong Jie Pang. "Research on Disparity Map Generation Method of Underwater Target Based on the Improved SIFT Algorithm". Applied Mechanics and Materials 741 (March 2015): 701–4. http://dx.doi.org/10.4028/www.scientific.net/amm.741.701.

Full text
Abstract
In order to obtain the depth information of an underwater target, it is necessary to generate a disparity map based on binocular-vision stereo matching. In a circulating water channel, stereo matching experiments with an underwater target were carried out using the BM, SGBM, and SIFT algorithms respectively. The characteristics of the disparity maps were then analyzed for the three stereo matching algorithms. Compared with the BM and SGBM algorithms, the SIFT algorithm proved to be more suitable for underwater stereo matching. To obtain more feature points from underwater images, it is necessary to tune the SIFT algorithm's parameters. Underwater image matching experiments were conducted to determine the principal curvature coefficient γ. The results illustrate that the improved γ is better than the original value for underwater disparity map generation.
21

Zhou, Chunyue, Tong Xu, and Hairong Dong. "Distributed locating algorithm MDS-MAP (LF) based on low-frequency signal". Computer Science and Information Systems 12, no. 4 (2015): 1289–305. http://dx.doi.org/10.2298/csis140801055z.

Full text
Abstract
The positioning error of distributed MDS-MAP algorithms comes from two sources: the local positioning error and the position fusion error. In an attempt to improve the positioning result in both local positioning accuracy and global convergence probability, this paper proposes a novel MDS-MAP(LF) algorithm, which uses a low-frequency signal to measure the inter-sensor distance rather than shortest-path algorithms. The proposed MDS-MAP(LF) algorithm leverages the propagation features of the low-frequency signal to acquire a more precise two-hop distance. The simulation and analysis results indicate that the accuracy of local positioning is improved by more than 3%. With the use of cluster expansion, MDS-MAP(LF) also shows better convergence in comparison with the classical distributed MDS-MAP algorithm.
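The MDS step that local positioning relies on is classical multidimensional scaling: recover coordinates from pairwise distances by double-centering and eigendecomposition. A plain-NumPy sketch of that step (the low-frequency ranging and patch fusion of MDS-MAP(LF) are not reproduced):

```python
# Classical MDS — the local-positioning core of MDS-MAP-style methods:
# recover point coordinates (up to rotation/translation/reflection)
# from a matrix of pairwise distances.
import numpy as np

def classical_mds(D, dim=2):
    """D: (n, n) symmetric matrix of pairwise distances.
    Returns (n, dim) coordinates."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n       # centering matrix
    B = -0.5 * J @ (D ** 2) @ J               # double-centered Gram matrix
    eigvals, eigvecs = np.linalg.eigh(B)      # ascending eigenvalues
    top = np.argsort(eigvals)[::-1][:dim]     # take the largest ones
    L = np.sqrt(np.maximum(eigvals[top], 0.0))
    return eigvecs[:, top] * L

# Four sensors on a unit square; rebuild the layout from distances alone.
pts = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
D = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
coords = classical_mds(D)
D_rebuilt = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
```

The point of MDS-MAP(LF) is that the input distances come from low-frequency ranging instead of noisy shortest-path estimates, so the matrix fed to this step is closer to true Euclidean distances.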
22

Pei, Yan. "From Determinism and Probability to Chaos: Chaotic Evolution towards Philosophy and Methodology of Chaotic Optimization". Scientific World Journal 2015 (2015): 1–14. http://dx.doi.org/10.1155/2015/704587.

Full text
Abstract
We present and discuss the philosophy and methodology of chaotic evolution, which is theoretically supported by chaos theory. We introduce four chaotic systems, namely the logistic map, tent map, Gaussian map, and Hénon map, into a well-designed chaotic evolution algorithm framework to implement several chaotic evolution (CE) algorithms. By comparing our previously proposed CE algorithm with the logistic map against two canonical differential evolution (DE) algorithms, we analyse and discuss the optimization performance of the CE algorithm. An investigation of the relationship between the optimization capability of the CE algorithm and the distribution characteristics of the chaotic system is conducted and analysed. From the evaluation results, we find that the distribution of the chaotic system is an essential factor influencing the optimization performance of the CE algorithm. We propose a new interactive EC (IEC) algorithm, interactive chaotic evolution (ICE), that replaces the fitness function with a real human in the CE algorithm framework. There is a paired-comparison-based mechanism behind the CE search scheme in nature. A simulation experiment is conducted with a pseudo-IEC user to evaluate our proposed ICE algorithm. The evaluation results indicate that the ICE algorithm can obtain significantly better performance than, or the same performance as, interactive DE. Some open topics on CE, ICE, the fusion of these optimization techniques, algorithmic notation, and others are presented and discussed.
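The four chaotic systems named above have standard one-step update rules, shown here in their textbook parameterizations (the paper's exact settings are not assumed):

```python
# The four chaotic maps used in chaotic-evolution frameworks, written
# as one-step update rules with common textbook parameters.
import math

def logistic(x, r=4.0):
    return r * x * (1.0 - x)                 # chaotic on [0, 1] at r = 4

def tent(x, mu=2.0):
    return mu * x if x < 0.5 else mu * (1.0 - x)

def gauss(x, alpha=6.2, beta=-0.5):
    return math.exp(-alpha * x * x) + beta   # Gaussian ("mouse") map

def henon(state, a=1.4, b=0.3):
    x, y = state
    return 1.0 - a * x * x + y, b * x        # two-dimensional map

# Iterating the logistic map keeps the orbit inside [0, 1].
x = 0.3
for _ in range(10):
    x = logistic(x)
```

In a CE algorithm, one of these orbits would replace the random-number draws that perturb candidate solutions, which is why the map's value distribution matters for search performance.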
23

Akbari, D. y M. Moradizadeh. "SPECTRAL-SPATIAL CLASSIFICATION OF HYPERSPECTRAL IMAGERY USING A HYBRID FRAMEWORK". ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-4/W18 (18 de octubre de 2019): 41–44. http://dx.doi.org/10.5194/isprs-archives-xlii-4-w18-41-2019.

Texto completo
Resumen
Abstract. Hyperspectral images are worthwhile data for many processing algorithms (e.g. dimensionality reduction, target detection, change detection, classification and unmixing). Classification is a key issue in processing hyperspectral images. Spectral-identification-based algorithms are sensitive to spectral variability and noise in acquisition. This paper describes a new framework for classification of hyperspectral images based on both spectral and spatial information. The spatial information is obtained by an enhanced Marker-based Hierarchical Segmentation (MHS) algorithm. The hyperspectral data is first fed into the Multi-Layer Perceptron (MLP) neural network classification algorithm. Then, the MHS algorithm is applied in order to increase the accuracy of less accurately classified land-cover types. In the proposed approach, the markers are extracted from the classification maps obtained by the MLP and Support Vector Machine (SVM) classifiers. Experimental results on the Quebec City hyperspectral dataset demonstrate that the proposed approach achieves approximately 9% and 5% better overall accuracy than the MLP and the original MHS algorithms, respectively.
24

Karawia, A. "Encryption Algorithm of Multiple-Image Using Mixed Image Elements and Two Dimensional Chaotic Economic Map". Entropy 20, n.º 10 (18 de octubre de 2018): 801. http://dx.doi.org/10.3390/e20100801.

Texto completo
Resumen
To enhance encryption proficiency and support the protected transmission of multiple images, the current work introduces an encryption algorithm for multiple images using a combination of mixed image elements (MIES) and a two-dimensional economic map. Firstly, the original images are grouped into one big image that is split into many pure image elements (PIES); secondly, the logistic map is used to shuffle the PIES; thirdly, they are confused with the sequence produced by the two-dimensional economic map to obtain the MIES; finally, the MIES are gathered into a big encrypted image that is split into many images of the same size as the original images. The proposed algorithm has a very large key space, which makes it secure against attackers. Moreover, the encryption results obtained by the proposed algorithm outperform existing algorithms in the literature. A comparison between the proposed algorithm and similar algorithms is made. The analysis of the experimental results and of the proposed algorithm shows that it is efficient and secure.
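The logistic-map shuffling step mentioned above can be illustrated as a key-driven permutation of pixel positions: the chaotic sequence is ranked, and the ranking defines the permutation. This is a generic sketch, not the paper's MIES/PIES pipeline; the key value and map parameter are arbitrary:

```python
def chaotic_permutation(n, x0=0.61, mu=3.99):
    """Derive a permutation of range(n) by ranking a logistic-map sequence."""
    seq, x = [], x0
    for _ in range(n):
        x = mu * x * (1.0 - x)
        seq.append(x)
    return sorted(range(n), key=lambda i: seq[i])

def shuffle(pixels, key):
    perm = chaotic_permutation(len(pixels), x0=key)
    return [pixels[p] for p in perm]

def unshuffle(scrambled, key):
    """Invert the permutation; only the correct key restores the image."""
    perm = chaotic_permutation(len(scrambled), x0=key)
    out = [0] * len(scrambled)
    for dst, src in enumerate(perm):
        out[src] = scrambled[dst]
    return out

img = list(range(16))                  # a toy 4x4 "image", flattened
enc = shuffle(img, key=0.61)
dec = unshuffle(enc, key=0.61)
```

Because the logistic map is sensitive to its initial condition, even a tiny change in `key` yields a completely different permutation, which is what gives chaos-based schemes their key sensitivity.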
25

Fayyaz, Zahra, Nafiseh Mohammadian, M. Reza Rahimi Tabar, Rayyan Manwar y Kamran Avanaki. "A comparative study of optimization algorithms for wavefront shaping". Journal of Innovative Optical Health Sciences 12, n.º 04 (julio de 2019): 1942002. http://dx.doi.org/10.1142/s1793545819420021.

Texto completo
Resumen
By manipulating the phase map of a wavefront of light using a spatial light modulator, scattered light can be sharply focused on a specific target. Several iterative optimization algorithms for obtaining the optimum phase map have been explored. However, there has not been a comparative study of the performance of these algorithms. In this paper, six optimization algorithms for wavefront shaping, including continuous sequential, the partitioning algorithm, the transmission matrix estimation method, particle swarm optimization, the genetic algorithm (GA), and simulated annealing (SA), are discussed and compared based on their efficiency under various measurement noise levels.
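The continuous sequential algorithm compared in the paper can be sketched as a noise-free toy: each SLM segment is stepped through a few discrete phase levels, and the level that maximizes the focal intensity is kept. The transmission coefficients `t` below are a hypothetical random medium, not measured data:

```python
import cmath
import random

random.seed(0)
N = 16                               # number of SLM segments
# Hypothetical unit-modulus transmission coefficients of the scattering medium
t = [cmath.exp(2j * cmath.pi * random.random()) for _ in range(N)]

def intensity(phases):
    """Focal intensity for a given phase pattern on the modulator."""
    field = sum(cmath.exp(1j * p) * tk for p, tk in zip(phases, t))
    return abs(field) ** 2

phases = [0.0] * N
base = intensity(phases)
# Continuous sequential: optimise one segment at a time over 8 phase levels,
# keeping the level that maximises the measured focal intensity.
for k in range(N):
    levels = [2 * cmath.pi * s / 8 for s in range(8)]
    phases[k] = max(levels, key=lambda p: intensity(phases[:k] + [p] + phases[k + 1:]))
opt = intensity(phases)
```

Since the current phase (0) is always among the candidate levels, the focal intensity never decreases; the paper's comparison concerns how such algorithms degrade once each intensity measurement is corrupted by noise.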
26

Sharma, Sonia y Dr Parag Jain. "Comparative Analysis of Map Reduce Scheduling Algorithms". Journal of Advanced Research in Dynamical and Control Systems 11, n.º 10-SPECIAL ISSUE (25 de octubre de 2019): 20–31. http://dx.doi.org/10.5373/jardcs/v11sp10/20192773.

Texto completo
27

Yu, De Xin y Xin Zhao. "A Map Matching Algorithm for Vehicle Based on BeiDou Short-Message Communication". Applied Mechanics and Materials 556-562 (mayo de 2014): 5060–63. http://dx.doi.org/10.4028/www.scientific.net/amm.556-562.5060.

Texto completo
Resumen
Existing map-matching algorithms lack accuracy. Drawing on the specific short-message communication function developed for China's own BeiDou navigation satellites, this paper proposes a new map matching algorithm for vehicles based on BeiDou short-message communication. The algorithm makes up for the shortcomings of existing map matching algorithms and effectively reduces matching error, especially for complex city sections, including main and auxiliary roads, viaducts, etc. Finally, this paper validates the new algorithm by comparing it with a mature matching algorithm based on fuzzy logic. The verification proves that the proposed algorithm can improve the precision of map matching, especially for complex city sections of the map, where the improvement is most significant.
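Geometric map matching of the kind discussed above reduces, in its simplest form, to snapping a position fix to the nearest road segment; the difficulty with main/auxiliary roads and viaducts is precisely that several segments lie at similar distances. A sketch of the baseline geometric step (the paper's BeiDou-assisted algorithm is more involved):

```python
def project(p, a, b):
    """Project point p onto segment ab; return (snapped_point, distance)."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    L2 = dx * dx + dy * dy
    t = 0.0 if L2 == 0 else max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / L2))
    sx, sy = ax + t * dx, ay + t * dy
    return (sx, sy), ((px - sx) ** 2 + (py - sy) ** 2) ** 0.5

def match(p, segments):
    """Snap a GPS fix to the closest road segment (geometric map matching)."""
    return min((project(p, a, b) for a, b in segments), key=lambda r: r[1])[0]

roads = [((0, 0), (10, 0)),    # main road along the x-axis
         ((0, 5), (10, 5))]    # parallel auxiliary road
snapped = match((3.0, 1.2), roads)
```

When the fix lies midway between the two parallel roads, the nearest-segment rule becomes ambiguous, which is why additional evidence (here, BeiDou short messages; elsewhere, fuzzy logic over heading and speed) is used to disambiguate.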
28

GUAN, YE-PENG. "AUTOMATIC EXTRACTION OF LIP BASED ON LIP MAP". International Journal of Information Acquisition 05, n.º 01 (marzo de 2008): 31–40. http://dx.doi.org/10.1142/s0219878908001478.

Texto completo
Resumen
The effective automatic location and tracking of a person's lips has proven to be very difficult in the field of computer vision. A lip segmentation approach is proposed based on wavelet multi-scale edge detection across a lip map. The developed algorithm exploits the spatial interactions between neighboring pixels through wavelet multi-scale edge detection across the lip map. It produces better segmentation automatically, without the need to determine an optimum threshold for each lip image. Comparisons with some existing lip segmentation algorithms indicate that the developed algorithm achieves superior performance.
29

Li, Jun, Xiumin Wang, Jinlong He, Chen Su y Liang Shan. "Turbo Decoder Design based on an LUT-Normalized Log-MAP Algorithm". Entropy 21, n.º 8 (20 de agosto de 2019): 814. http://dx.doi.org/10.3390/e21080814.

Texto completo
Resumen
Turbo codes have been widely used in wireless communication systems due to their good error correction performance. Under the time division long term evolution (TD-LTE) standard of the 3rd Generation Partnership Project (3GPP), the high-complexity Log maximum a posteriori (Log-MAP) decoding algorithm is usually approximated by the lookup-table Log-MAP (LUT-Log-MAP) algorithm or the Max-Log-MAP algorithm, but these two algorithms suffer from high complexity and high bit error rate, respectively. In this paper, we propose a normalized Log-MAP (Nor-Log-MAP) decoding algorithm in which the function max* is approximated by multiplying the max function by a fixed normalization factor. Combining the Nor-Log-MAP algorithm with the LUT-Log-MAP algorithm creates a new LUT-Nor-Log-MAP algorithm whose decoding performance is close to that of the LUT-Log-MAP algorithm. Based on the Nor-Log-MAP decoding method, we also put forward a normalization functional unit (NFU) for the computing unit of a soft-input soft-output (SISO) decoder. The simulation results show that the LUT-Nor-Log-MAP algorithm saves about 2.1% of logic resources compared with the LUT-Log-MAP algorithm. Compared with the Max-Log-MAP algorithm, the LUT-Nor-Log-MAP algorithm shows a gain of 0.25~0.5 dB in decoding performance. Using the Cyclone IV platform, the designed Turbo decoder achieves a throughput of 36 Mbit/s at a maximum clock frequency of 44 MHz.
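The max* (Jacobian logarithm) at the heart of Log-MAP, its Max-Log approximation, and a Nor-Log-MAP-style fixed-factor approximation can be compared directly. The factor 1.1 below is purely illustrative, since the paper's derived normalization factor is not given here:

```python
import math

def max_star(a, b):
    """Exact Jacobian logarithm used by Log-MAP: ln(e^a + e^b)."""
    return max(a, b) + math.log1p(math.exp(-abs(a - b)))

def max_log(a, b):
    """Max-Log-MAP approximation: drops the correction term entirely."""
    return max(a, b)

def nor_log(a, b, c=1.1):
    """Nor-Log-MAP-style approximation: a fixed factor times max(a, b).
    The factor c = 1.1 is illustrative, not the paper's value."""
    return c * max(a, b)
```

Two facts make the trade-off visible: max* equals ln(e^a + e^b) exactly, and the dropped correction term is bounded by ln 2, so Max-Log always underestimates by at most about 0.693; the LUT and normalized variants trade accuracy of that correction against hardware cost.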
30

Irfan, Pahrul. "Aplikasi Enkripsi Citra Menggunakan Algoritma Kriptografi Arnold Cat Map Dan Logistic Map". Jurnal Matrik 16, n.º 1 (26 de julio de 2017): 96. http://dx.doi.org/10.30812/matrik.v16i1.14.

Texto completo
Resumen
Data security in the process of information exchange is very important. One way to secure images is to use cryptographic techniques. Cryptographic algorithms applied to an image randomize the positions of its pixels using secret key parameters, so that the image can no longer be recognized after the encryption process. In this study, the researchers used chaos algorithms, which are known to be compact and fast and are commonly used in cryptography, especially for image files. The results show that an image that has gone through the encryption process can no longer be recognized, because the pixel positions are randomized using the chaos algorithm.
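The Arnold cat map named in the title is a simple area-preserving scrambling transform on an N×N image: pixel (x, y) moves to ((x + y) mod N, (x + 2y) mod N). A sketch on a toy 8×8 image, with the exact matrix inverse used for decryption:

```python
def cat_map(img, n):
    """One iteration of the Arnold cat map on an n x n image (list of lists)."""
    out = [[0] * n for _ in range(n)]
    for x in range(n):
        for y in range(n):
            out[(x + y) % n][(x + 2 * y) % n] = img[x][y]
    return out

def inverse_cat_map(img, n):
    """Inverse transform, using the matrix inverse [[2,-1],[-1,1]] mod n."""
    out = [[0] * n for _ in range(n)]
    for u in range(n):
        for v in range(n):
            out[(2 * u - v) % n][(-u + v) % n] = img[u][v]
    return out

n = 8
img = [[x * n + y for y in range(n)] for x in range(n)]
scrambled = cat_map(img, n)
restored = inverse_cat_map(scrambled, n)
```

The map is periodic (iterating it enough times returns the original image), which is why, in practice, it is combined with a second chaotic system such as the logistic map rather than used alone.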
31

Padang Tunggal, Tatiya, Andi Supriyanto, Nur Mukhammad Zaidatur Rochman, Ibnu Faishal, Imam Pambudi y Iswanto Iswanto. "Pursuit Algorithm for Robot Trash Can Based on Fuzzy-Cell Decomposition". International Journal of Electrical and Computer Engineering (IJECE) 6, n.º 6 (1 de diciembre de 2016): 2863. http://dx.doi.org/10.11591/ijece.v6i6.10766.

Texto completo
Resumen
<p>The Scooby Smart Trash Can is a trash can equipped with artificial intelligence algorithms that enable it to catch and clean up garbage thrown by people who do not care about the environment. It is called smart because, like Scooby-Doo in the children's cartoon, it reacts when garbage is thrown, catching and cleaning it up. This paper presents a pursuit algorithm that uses a cell decomposition algorithm to create a map of the robot's path and a fuzzy algorithm, a form of artificial intelligence, for robot path planning. Using the combined algorithms, the robot is able to pursue and catch carelessly discarded trash, but it cannot yet find the shortest route. Therefore, this paper proposes a second modification that adds a potential field algorithm to assign weight values to the map, so that the robot can pursue trash along the shortest path. The proposed algorithm shows that the robot can avoid obstacles and find the shortest path, so that the time required to reach the destination point is short.</p>
32

Padang Tunggal, Tatiya, Andi Supriyanto, Nur Mukhammad Zaidatur Rochman, Ibnu Faishal, Imam Pambudi y Iswanto Iswanto. "Pursuit Algorithm for Robot Trash Can Based on Fuzzy-Cell Decomposition". International Journal of Electrical and Computer Engineering (IJECE) 6, n.º 6 (1 de diciembre de 2016): 2863. http://dx.doi.org/10.11591/ijece.v6i6.pp2863-2869.

Texto completo
Resumen
<p>The Scooby Smart Trash Can is a trash can equipped with artificial intelligence algorithms that enable it to catch and clean up garbage thrown by people who do not care about the environment. It is called smart because, like Scooby-Doo in the children's cartoon, it reacts when garbage is thrown, catching and cleaning it up. This paper presents a pursuit algorithm that uses a cell decomposition algorithm to create a map of the robot's path and a fuzzy algorithm, a form of artificial intelligence, for robot path planning. Using the combined algorithms, the robot is able to pursue and catch carelessly discarded trash, but it cannot yet find the shortest route. Therefore, this paper proposes a second modification that adds a potential field algorithm to assign weight values to the map, so that the robot can pursue trash along the shortest path. The proposed algorithm shows that the robot can avoid obstacles and find the shortest path, so that the time required to reach the destination point is short.</p>
33

Masadeh, Raja, Abdullah Alzaqebah, Bushra Smadi y Esra Masadeh. "Parallel Whale Optimization Algorithm for Maximum Flow Problem". Modern Applied Science 14, n.º 3 (19 de febrero de 2020): 30. http://dx.doi.org/10.5539/mas.v14n3p30.

Texto completo
Resumen
The Maximum Flow Problem (MFP) is considered one of several famous problems in directed graphs. Many researchers have studied MFP and its applications using different techniques. One of the most popular algorithms employed to solve MFP is the Ford-Fulkerson algorithm. However, this algorithm has a long run time when applied to large data sizes. For this reason, this study presents a parallel whale optimization (PWO) algorithm to find the maximum flow in a weighted directed graph. The PWO algorithm is implemented and tested on datasets of different sizes, and it achieved a speedup of up to 3.79 on a machine with 4 processors.
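For reference, the Ford-Fulkerson method the paper benchmarks against is commonly realized as the Edmonds-Karp algorithm (BFS augmenting paths on the residual graph); a compact sketch on a toy graph:

```python
from collections import deque

def max_flow(cap, s, t):
    """Edmonds-Karp: repeatedly push flow along shortest augmenting paths.
    cap is a dict-of-dicts of edge capacities."""
    flow = 0
    residual = {u: dict(vs) for u, vs in cap.items()}
    for u, vs in cap.items():                 # add zero-capacity reverse edges
        for v in vs:
            residual.setdefault(v, {}).setdefault(u, 0)
    while True:
        parent = {s: None}                    # BFS for an augmenting path
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v, c in residual[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow                       # no augmenting path left
        path, v = [], t                       # recover the path s -> t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        aug = min(residual[u][v] for u, v in path)   # bottleneck capacity
        for u, v in path:
            residual[u][v] -= aug
            residual[v][u] += aug
        flow += aug

graph = {'s': {'a': 3, 'b': 2}, 'a': {'b': 1, 't': 2}, 'b': {'t': 3}, 't': {}}
```

On this graph the maximum flow is 5 (2 along s-a-t, 2 along s-b-t, and 1 along s-a-b-t), and the BFS-based search is exactly the sequential baseline whose run time on large graphs motivates parallel metaheuristics such as PWO.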
34

Liu, Shuai, Zheng Chen, Huahui Zhou, Kunlin He, Meiyu Duan, Qichen Zheng, Pengcheng Xiong et al. "DiaMole: Mole Detection and Segmentation Software for Mobile Phone Skin Images". Journal of Healthcare Engineering 2021 (2 de junio de 2021): 1–10. http://dx.doi.org/10.1155/2021/6698176.

Texto completo
Resumen
Motivation. The worldwide incidence and mortality rates of melanoma have been on the rise recently. Melanoma may develop from benign lesions such as skin moles. Easy-to-use mole detection software will help find malignant skin lesions at an early stage. Results. This study developed the mole detection and segmentation software DiaMole, which works on mobile phone images. DiaMole utilizes multiple deep learning algorithms for the object detection and mole segmentation problems. An object detection algorithm generates a rectangle tightly surrounding a mole in the mobile phone image, and the segmentation algorithm then detects the precise boundary of that mole. Three deep learning algorithms were evaluated for their object detection performance using the popular performance metric mean average precision (mAP). Among the utilized algorithms, Faster R-CNN achieved the best mAP = 0.835, while the integrated algorithm achieved mAP = 0.4228. Although the integrated algorithm did not achieve the best mAP, it avoids missing moles. A popular Unet model was utilized to find the precise mole boundary. Clinical users may annotate the detected moles based on their experience. Conclusions. DiaMole is user-friendly software for researchers focusing on skin lesions. DiaMole can automatically detect and segment moles in mobile phone skin images, and users may annotate each candidate mole according to their own experience. The automatically calculated mole image masks and the annotations may be saved for further investigation.
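The mAP figures quoted above come from averaging per-class average precision over score-ranked detections. A simplified sketch, assuming every detection list is already ranked by confidence and all ground-truth objects appear among the detections (real mAP implementations also handle IoU matching and unmatched ground truths):

```python
def average_precision(ranked_hits):
    """AP for one class: ranked_hits marks, in score order, whether each
    detection matches a ground-truth object (simplified variant)."""
    total = sum(ranked_hits)
    if total == 0:
        return 0.0
    hits, ap = 0, 0.0
    for rank, hit in enumerate(ranked_hits, start=1):
        if hit:
            hits += 1
            ap += hits / rank          # precision at each recall step
    return ap / total

def mean_average_precision(per_class):
    """mAP: the mean of per-class average precisions."""
    return sum(average_precision(h) for h in per_class) / len(per_class)

m = mean_average_precision([[True, True, False], [True, False, True]])
```

A perfect detector (all true detections ranked first) scores 1.0; mistakes ranked above correct detections pull the score down, which is how the 0.835 vs. 0.4228 gap in the abstract arises.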
35

Tsankova, D. y S. Lekova. "Global Optimization Algorithm Based on One-Dimensional Chaotic Maps and Gradient Descent Technique". Information Technologies and Control 15, n.º 1 (1 de marzo de 2017): 17–22. http://dx.doi.org/10.1515/itc-2017-0018.

Texto completo
Resumen
Abstract A hybrid algorithm for searching for the global minimum of a multimodal function is proposed in the paper. It is a two-stage search technique: the first stage is the twice-carrier-wave-based chaotic optimization algorithm (COA) for global searching, and the second stage is the gradient descent algorithm (GDA) for accurate local searching. The chaotic dynamics is realized through a one-dimensional map in three variants: logistic, cubic and sine. Three test functions are used. A hundred simulations (each starting from a different randomly generated initial point) were carried out for each of the test functions using two optimization algorithms: the proposed hybrid algorithm and the GDA working alone. The success and accuracy of locating the extremum, as well as the convergence of the algorithms using the three different chaotic maps, are discussed.
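The two-stage idea, chaotic global sampling followed by gradient descent refinement, can be sketched on a one-dimensional double-well function (an illustration of the structure, not the paper's COA/GDA code):

```python
def logistic_orbit(x0=0.7, n=200, mu=4.0):
    """Chaotic sample sequence in (0, 1) from the logistic map."""
    x, out = x0, []
    for _ in range(n):
        x = mu * x * (1.0 - x)
        out.append(x)
    return out

def hybrid_minimise(f, df, lo, hi, lr=0.01, steps=500):
    """Stage 1: chaotic global sampling; stage 2: gradient descent refinement."""
    samples = [lo + (hi - lo) * u for u in logistic_orbit()]
    x = min(samples, key=f)            # best chaotic sample seeds the GDA
    for _ in range(steps):
        x -= lr * df(x)                # local refinement
    return x

# Double-well test function with global minima at x = -1 and x = +1
def f(x):
    return (x * x - 1.0) ** 2

def df(x):
    return 4.0 * x * (x * x - 1.0)

x_star = hybrid_minimise(f, df, -2.0, 2.0)
```

The chaotic stage supplies a starting point inside the basin of a global minimum, so the gradient stage cannot get trapped in a distant local basin, which is exactly the division of labor described in the abstract.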
36

Phan, Yen Quoc y Nga Thu Thi Nguyen. "Experiment, evaluate methods of interpolation of terain surface for different types of terrain". Journal of Mining and Earth Sciences 61, n.º 2 (29 de abril de 2020): 116–25. http://dx.doi.org/10.46326/jmes.2020.61(2).13.

Texto completo
Resumen
Surface modeling is done by many classic and modern algorithms, such as Polynomial Interpolation, Delaunay Triangulation, Nearest Neighbor, Natural Neighbor, Kriging, Inverse Distance Weighting (IDW) and Spline Functions. The important issue is to experiment with, evaluate and select the algorithms best suited to the actual data and the study area. The paper used three algorithms, IDW, Kriging and Natural Neighbor, to model the terrain on two map sheets representing different types of terrain. From there, the results were compared and the accuracy of the methods evaluated using random test data extracted from the original map. In addition, the contours determined by each algorithm were checked against the original contours over the entire map sheet. Results show that the Natural Neighbor algorithm gives better results in both experimental areas, followed by the IDW and Kriging algorithms, with root mean square errors of 15.2922, 16.4754 and 17.9949 m, respectively, for moderately high terrain and 13.9728, 15.2466 and 15.7613 m for high mountainous terrain.
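Of the three interpolators compared, IDW is the simplest to sketch: each estimate is a distance-weighted average of the sampled heights, with weights 1/d^p. The sample points below are invented for illustration:

```python
def idw(x, y, samples, power=2):
    """Inverse Distance Weighting: estimate the height at (x, y)
    from (xi, yi, zi) samples, weighting each by 1 / distance^power."""
    num = den = 0.0
    for xi, yi, zi in samples:
        d2 = (x - xi) ** 2 + (y - yi) ** 2
        if d2 == 0:
            return zi                  # IDW is exact at a sample point
        w = 1.0 / d2 ** (power / 2)
        num += w * zi
        den += w
    return num / den

# Four hypothetical elevation samples on a 10 m grid (heights in metres)
pts = [(0, 0, 100.0), (10, 0, 120.0), (0, 10, 110.0), (10, 10, 130.0)]
z_mid = idw(5, 5, pts)
```

IDW estimates always stay within the range of the sampled heights, which makes it robust but unable to extrapolate peaks or valleys between samples; Kriging and Natural Neighbor weight samples differently, which is what the paper's RMSE comparison measures.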
37

Brida, Peter, Juraj Machaj, Jan Racko y Ondrej Krejcar. "Algorithm for Dynamic Fingerprinting Radio Map Creation Using IMU Measurements". Sensors 21, n.º 7 (24 de marzo de 2021): 2283. http://dx.doi.org/10.3390/s21072283.

Texto completo
Resumen
While a vast number of location-based services have appeared lately, indoor positioning solutions are developed to provide reliable position information in environments where traditionally used satellite-based positioning systems cannot provide access to accurate position estimates. Indoor positioning systems can be based on many technologies; however, radio networks, and more precisely Wi-Fi networks, seem to attract the attention of the majority of research teams. The most widely used localization approach in Wi-Fi-based systems is based on the fingerprinting framework. Fingerprinting algorithms, however, require a radio map for position estimation. This paper describes a solution for dynamic radio map creation, which aims to reduce the time required to build a radio map. The proposed solution uses measurements from IMUs (Inertial Measurement Units), which are processed with a particle filter dead reckoning algorithm. Reference points (RPs) generated by the implemented dead reckoning algorithm are then processed by the proposed reference point merging algorithm, in order to optimize the radio map size and merge similar RPs. The proposed solution was tested in a real-world environment and evaluated through the implementation of deterministic fingerprinting positioning algorithms, and the achieved results were compared with results achieved with a static radio map. The results presented in the paper show that the positioning algorithms achieved similar accuracy even with a dynamic map with a low density of reference points.
38

Caiafa, Cesar F. y Andrzej Cichocki. "Estimation of Sparse Nonnegative Sources from Noisy Overcomplete Mixtures Using MAP". Neural Computation 21, n.º 12 (diciembre de 2009): 3487–518. http://dx.doi.org/10.1162/neco.2009.08-08-846.

Texto completo
Resumen
In this letter, we propose a new algorithm for estimating sparse nonnegative sources from a set of noisy linear mixtures. In particular, we consider difficult situations with high noise levels and more sources than sensors (underdetermined case). We show that when sources are very sparse in time and overlapped at some locations, they can be recovered even with very low signal-to-noise ratio, and by using many fewer sensors than sources. A theoretical analysis based on Bayesian estimation tools is included showing strong connections with algorithms in related areas of research such as ICA, NMF, FOCUSS, and sparse representation of data with overcomplete dictionaries. Our algorithm uses a Bayesian approach by modeling sparse signals through mixed-state random variables. This new model for priors imposes ℓ0 norm-based sparsity. We start our analysis for the case of nonoverlapped sources (1-sparse), which allows us to simplify the search of the posterior maximum avoiding a combinatorial search. General algorithms for overlapped cases, such as 2-sparse and k-sparse sources, are derived by using the algorithm for 1-sparse signals recursively. Additionally, a combination of our MAP algorithm with the NN-KSVD algorithm is proposed for estimating the mixing matrix and the sources simultaneously in a real blind fashion. A complete set of simulation results is included showing the performance of our algorithm.
39

Yan Xuliang, 闫旭亮, 徐望 Xu Wang, 杨功流 Yang Gongliu y 王璐 Wang Lu. "基于改进对数极坐标变换的星图识别算法". Acta Optica Sinica 41, n.º 10 (2021): 1010001. http://dx.doi.org/10.3788/aos202141.1010001.

Texto completo
40

Nasonov, A., A. Krylov y D. Lyukov. "IMAGE SHARPENING WITH BLUR MAP ESTIMATION USING CONVOLUTIONAL NEURAL NETWORK". ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-2/W12 (9 de mayo de 2019): 161–66. http://dx.doi.org/10.5194/isprs-archives-xlii-2-w12-161-2019.

Texto completo
Resumen
<p><strong>Abstract.</strong> We propose a method for choosing optimal values of the parameters of image sharpening algorithm for out-of-focus blur based on grid warping approach. The idea of the considered sharpening algorithm is to move pixels from the edge neighborhood towards the edge centerlines. Compared to traditional deblurring algorithms, this approach requires only scalar blur level value rather than a blur kernel. We propose a convolutional neural network based algorithm for estimating the blur level value.</p>
41

Hou, Chengan, Xingbin Liu y Songyang Feng. "Quantum image scrambling algorithm based on discrete Baker map". Modern Physics Letters A 35, n.º 17 (22 de abril de 2020): 2050145. http://dx.doi.org/10.1142/s021773232050145x.

Texto completo
Resumen
Quantum image processing has become a significant aspect within the field of quantum information processing because the image is an essential carrier of information, and quantum computation has powerful image processing ability. Image scrambling algorithms are often required as initial image operations in quantum image processing applications such as quantum image encryption and watermarking. However, the efficiency of existing quantum image scrambling algorithms needs to be improved urgently, especially in terms of periodicity. Therefore, a novel quantum image scrambling algorithm based on discrete Baker map is proposed in this paper, which can be implemented by swapping qubits with low circuit complexity. The quantum version of discrete Baker map is deduced and the corresponding quantum circuit is designed. The simulation results show that the scrambling algorithm has the characteristic of long period, which can further enhance the security of quantum image encryption algorithms. Besides, for generalized discrete Baker maps, the conditions that they can be implemented by swapping qubits are given. Moreover, the number of discrete Baker maps satisfying the conditions is also calculated.
42

Tong, Xiaojun, Xudong Liu, Jing Liu, Miao Zhang y Zhu Wang. "A Novel Lightweight Block Encryption Algorithm Based on Combined Chaotic S-Box". International Journal of Bifurcation and Chaos 31, n.º 10 (agosto de 2021): 2150152. http://dx.doi.org/10.1142/s0218127421501522.

Texto completo
Resumen
Due to their high computational cost, traditional encryption algorithms are not suitable for environments in which resources are limited. In view of the above problem, we first propose a combined chaotic map to increase the chaotic interval and Lyapunov exponent of the existing one-dimensional chaotic maps. Then, an S-box based on the proposed combined chaotic map is constructed. The performance of the designed S-box, including bijection, nonlinearity, the strict avalanche criterion, differential uniformity, the bit independence criterion, and the linear approximation probability, is tested to show that it has good cryptographic properties. Finally, we present a lightweight block encryption algorithm that uses the above S-box. The algorithm is based on the generalized Feistel structure and the SPN structure. In addition, the processes of encryption and decryption of our algorithm are almost the same, which reduces the complexity of algorithm implementation. The experimental results show that the proposed encryption algorithm meets the requirements of lightweight algorithms and has good cryptographic characteristics.
43

Li, Guodong y Xuejuan Han. "A Color Image Encryption Algorithm with Cat Map and Chaos Map Embedded". International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems 29, Supp01 (26 de marzo de 2021): 73–87. http://dx.doi.org/10.1142/s0218488521400043.

Texto completo
Resumen
In order to deal with the problem of encryption algorithms being overly simplistic and with the relatively low security of color images, which creates the potential for attack during transmission, this paper introduces a new encryption algorithm designed to divide color images into R, G and B layers. In the scrambling operation, the first scrambling is block scrambling of the plaintext image, and the second scrambling is dynamic Arnold scrambling of the ciphertext produced by the first scrambling. In the diffusion operation, the scrambled ciphertext image is taken as the input, and the pseudo-random sequence generated by Tent and Sine mappings is embedded. The sequence generated by the Logistic mapping is used to select sub-blocks for block diffusion of the image, and Tent-Sine mapping is applied in a second diffusion to obtain the final ciphertext image. Because the algorithm designed in this paper combines image block scrambling and dynamic Arnold scrambling, the scrambling degree of each layer of image pixels is greatly improved, thus improving the security of color images. In the diffusion process, a chaos sequence is selected for the diffusion operation, which increases the difficulty of decoding the ciphertext. The simulation results show that the new algorithm has a desirable encryption effect, strong key sensitivity and a large key space, and that the complex encryption algorithm can effectively resist attacks, which gives it clear value in image information security.
44

Harun, Sariffuddin y Mohd Faisal Ibrahim. "A genetic algorithm based task scheduling system for logistics service robots". Bulletin of Electrical Engineering and Informatics 8, n.º 1 (1 de marzo de 2019): 206–13. http://dx.doi.org/10.11591/eei.v8i1.1437.

Texto completo
Resumen
The demand for autonomous logistics service robots requires an efficient task scheduling system in order to optimise the cost and time for the robot to complete its tasks. This paper presents a genetic algorithm (GA) based task scheduling system for a ground mobile robot that finds a global near-optimal travelling path to complete a logistics task of picking up and delivering items at various locations. In this study, the chromosome representation and the fitness function of the GA are carefully designed to cater for a single-load logistics robotic task. Two variants of GA crossover are adopted to enhance the performance of the proposed algorithm. The scheduling performance of the proposed GA algorithms is compared with that of a conventional greedy algorithm in a virtual map and a real map environment; the results show that the proposed GA algorithms outperform the greedy algorithm by a 40% to 80% improvement.
45

ABOUELHODA, MOHAMED I., ROBERT GIEGERICH, BEHSHAD BEHZADI y JEAN-MARC STEYAERT. "ALIGNMENT OF MINISATELLITE MAPS BASED ON RUN-LENGTH ENCODING SCHEME". Journal of Bioinformatics and Computational Biology 07, n.º 02 (abril de 2009): 287–308. http://dx.doi.org/10.1142/s0219720009004060.

Texto completo
Resumen
Subsequent duplication events are responsible for the evolution of the minisatellite maps. Alignment of two minisatellite maps should therefore take these duplication events into account, in addition to the well-known edit operations. All algorithms for computing an optimal alignment of two maps, including the one presented here, first deduce the costs of optimal duplication scenarios for all substrings of the given maps. Then, they incorporate the pre-computed costs in the alignment recurrence. However, all previous algorithms addressing this problem are dependent on the number of distinct map units (map alphabet) and do not fully make use of the repetitiveness of the map units. In this paper, we present an algorithm that remedies these shortcomings: our algorithm is alphabet-independent and is based on the run-length encoding scheme. It is the fastest in theory, and in practice as well, as shown by experimental results. Furthermore, our alignment model is more general than that of the previous algorithms, and captures better the duplication mechanism. Using our algorithm, we derive a quantitative evidence that there is a directional bias in the growth of minisatellites of the MSY1 dataset.
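The run-length encoding scheme underlying the alignment algorithm compresses a minisatellite map into (unit, run-length) pairs, so that long runs of repeated units are handled as single symbols; a minimal sketch:

```python
def rle_encode(s):
    """Run-length encode a map string, e.g. 'aaabb' -> [('a', 3), ('b', 2)]."""
    runs = []
    for ch in s:
        if runs and runs[-1][0] == ch:
            runs[-1] = (ch, runs[-1][1] + 1)
        else:
            runs.append((ch, 1))
    return runs

def rle_decode(runs):
    """Expand (unit, count) pairs back into the original map string."""
    return ''.join(ch * n for ch, n in runs)

units = 'aaabbbbcaa'   # a toy minisatellite map: each letter is a repeat unit
runs = rle_encode(units)
```

Working on runs rather than individual units is what makes the paper's algorithm alphabet-independent and fast on highly repetitive maps: duplication events, which lengthen runs, change only a run's count, not the number of runs.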
46

Li, Zhaoying, Zhao Zhang, Hao Liu y Liang Yang. "A new path planning method based on concave polygon convex decomposition and artificial bee colony algorithm". International Journal of Advanced Robotic Systems 17, n.º 1 (1 de enero de 2020): 172988141989478. http://dx.doi.org/10.1177/1729881419894787.

Texto completo
Resumen
Free space algorithms are a kind of graphics-based method for path planning. With previously known map information, graphics-based methods have high computational efficiency in providing a feasible path. However, the existing free space algorithms do not guarantee global optimality because they always search in one connected domain rather than all possible connected domains. To overcome this drawback, this article presents an improved free space algorithm based on map decomposition with multiple connected domains and the artificial bee colony algorithm. First, a decomposition algorithm for single-connected concave polygons is introduced based on the principle of concave polygon convex decomposition. Any map without obstacles is treated as a single-connected concave polygon (a convex polygon map can be seen as already decomposed and is not discussed here). A single concave polygon can be decomposed into convex polygons by connecting concave points with their visible vertices. Second, a decomposition algorithm for multi-connected concave polygons (any map with obstacles) is designed. Such a map can be converted into a single-connected concave polygon by excluding obstacles using virtual links, and then decomposed into several convex polygons that form multiple connected domains. Third, the artificial bee colony algorithm is used to search for the optimal path across all connected domains so as to avoid falling into local minima. Numerical simulations and comparisons with an existing free space algorithm and the rapidly exploring random tree star algorithm are carried out to evaluate the performance of the proposed method. The results show that this method is able to find the optimal path with high computational efficiency and accuracy, with advantages especially for complex maps. Furthermore, a parameter sensitivity analysis is provided and suggested parameter values are given.
Citation styles: APA, Harvard, Vancouver, ISO, etc.
47

Sisodia, Dilip Singh, Vijay Khandal, and Riya Singhal. "Fast prediction of web user browsing behaviours using most interesting patterns". Journal of Information Science 44, no. 1 (October 1, 2016): 74–90. http://dx.doi.org/10.1177/0165551516673293.

Full text
Abstract
The prediction of users’ browsing behaviours is essential for putting appropriate information on the web. Browsing behaviours are stored as navigational patterns in web server logs. These weblogs are used to predict the frequently accessed patterns of web users, which can in turn be used to predict user behaviour and to collect business intelligence. However, owing to exponentially increasing weblog sizes, existing implementations of frequent-pattern-mining algorithms often take too much time and generate too many redundant patterns. This article introduces the most-interesting-pattern-based parallel FP-growth (MIP-PFP) algorithm. MIP-PFP is an improved implementation of the parallel FP-growth algorithm, built on the Apache Spark platform for extracting frequent patterns from huge weblogs. Experiments were performed on openly available National Aeronautics and Space Administration (NASA) weblog data to test the effectiveness of the MIP-PFP algorithm, and the results were compared with an existing implementation of the PFP algorithm. The results suggest that the MIP-PFP algorithm running on Apache Spark reduced the execution time by a factor of more than 10. The effect of the sequence length used as input to the MIP-PFP algorithm was also evaluated with different interestingness measures, including support, confidence, lift, leverage, cosine, and conviction. The experimental results show that only sequences of length greater than three produced a very low value of support for these interestingness measures.
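The interestingness measures named at the end of the abstract have standard definitions over the supports of a rule's antecedent and consequent. A minimal sketch of those definitions for a rule A → B; the probability values are illustrative assumptions, not figures from the NASA logs:

```python
# Hedged sketch: textbook definitions of the interestingness measures named
# in the abstract (support, confidence, lift, leverage, cosine, conviction)
# for an association rule A -> B, computed from item supports.
import math

def measures(p_a, p_b, p_ab):
    """Interestingness measures for a rule A -> B.
    p_a, p_b: supports of antecedent/consequent; p_ab: joint support."""
    conf = p_ab / p_a
    return {
        "support": p_ab,
        "confidence": conf,
        "lift": p_ab / (p_a * p_b),
        "leverage": p_ab - p_a * p_b,
        "cosine": p_ab / math.sqrt(p_a * p_b),
        "conviction": (1 - p_b) / (1 - conf) if conf < 1 else float("inf"),
    }

# Illustrative supports only.
m = measures(p_a=0.4, p_b=0.5, p_ab=0.3)
print(round(m["lift"], 3), round(m["conviction"], 3))
```

A lift above 1 and a conviction above 1 both indicate that A and B co-occur more often than independence would predict, which is how such measures rank "most interesting" patterns.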
Citation styles: APA, Harvard, Vancouver, ISO, etc.
48

RANGARAJAN, GOVINDAN. "POLYNOMIAL MAP FACTORIZATION OF SYMPLECTIC MAPS". International Journal of Modern Physics C 14, no. 06 (July 2003): 847–54. http://dx.doi.org/10.1142/s0129183103004991.

Full text
Abstract
Long-term stability studies of nonlinear Hamiltonian systems require symplectic integration algorithms that are both fast and accurate. In this paper, we study a symplectic integration method wherein the symplectic map representing the Hamiltonian system is refactorized using polynomial symplectic maps. The method is analyzed for the three-degree-of-freedom case. Finally, we apply the algorithm to study a large particle storage ring.
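For background, the standard setting such refactorization studies work in can be sketched as follows; this is the conventional Lie-algebraic (Dragt–Finn) formulation, not necessarily the paper's exact scheme:

```latex
% A map M is symplectic when its Jacobian satisfies
\[
  M^{T}(z)\, J\, M(z) = J, \qquad
  J = \begin{pmatrix} 0 & I \\ -I & 0 \end{pmatrix},
\]
% and in the Dragt--Finn style the map is factored as a product of Lie
% transformations generated by homogeneous polynomials $f_k$ of degree $k$:
\[
  \mathcal{M} = \mathcal{M}_{1}\, e^{:f_{3}:}\, e^{:f_{4}:} \cdots e^{:f_{n}:},
\]
% where $\mathcal{M}_1$ is the linear part and $:f:$ denotes the Lie
% operator $:f:g = [f, g]$ (the Poisson bracket with $f$).
```

Replacing the exponential Lie factors with explicitly symplectic polynomial maps is what makes such an integrator both exactly symplectic and fast to evaluate.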
Citation styles: APA, Harvard, Vancouver, ISO, etc.
49

K.S., Sree Ranjini. "A study on performance of MHDA in training MLPs". Engineering Computations 36, no. 6 (July 8, 2019): 1820–34. http://dx.doi.org/10.1108/ec-05-2018-0216.

Full text
Abstract
Purpose In recent years, the application of metaheuristics to training neural network models has gained significance owing to the drawbacks of deterministic algorithms. This paper proposes the use of a recently developed memory-based hybrid dragonfly algorithm (MHDA) for training a multi-layer perceptron (MLP) model by finding the optimal set of weights and biases. Design/methodology/approach The efficiency of MHDA in training MLPs is evaluated by applying it to classification and approximation benchmark data sets. Performance comparisons between MHDA and other training algorithms are carried out, and the significance of the results is established by statistical methods. The computational complexity of the MHDA-trained MLP is estimated. Findings Simulation results show that MHDA can effectively find a near-optimal set of weights and biases at a higher convergence rate than other training algorithms. Originality/value This paper presents MHDA as an alternative optimization algorithm for training MLPs. MHDA can effectively optimize the set of weights and biases and is a potential trainer for MLPs.
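The general recipe the abstract describes, flattening an MLP's weights and biases into one vector and letting a metaheuristic minimize the training error, can be sketched as follows. MHDA itself is not reproduced here; a plain stochastic hill climb stands in for the optimizer, and the tiny architecture and sine-fitting data are illustrative assumptions:

```python
# Hedged sketch: train a 1-input, 3-hidden-unit, 1-output MLP by letting a
# metaheuristic search the flat weight/bias vector. A stochastic hill climb
# stands in for MHDA (which is not reproduced here).
import math
import random

N_HIDDEN = 3

def mlp_forward(x, w):
    """w layout: [input weights | hidden biases | output weights | out bias]."""
    h = [math.tanh(w[i] * x + w[N_HIDDEN + i]) for i in range(N_HIDDEN)]
    return sum(w[2 * N_HIDDEN + i] * h[i] for i in range(N_HIDDEN)) + w[3 * N_HIDDEN]

def mse(w, data):
    return sum((mlp_forward(x, w) - y) ** 2 for x, y in data) / len(data)

random.seed(0)
data = [(x / 10, math.sin(x / 10)) for x in range(-20, 21)]  # fit sin on [-2, 2]
dim = 3 * N_HIDDEN + 1
best = [random.uniform(-2, 2) for _ in range(dim)]
best_err = init_err = mse(best, data)
for _ in range(2000):            # optimizer loop: MHDA's search would go here
    cand = [b + random.gauss(0, 0.3) for b in best]
    err = mse(cand, data)
    if err < best_err:           # greedy acceptance of improving candidates
        best, best_err = cand, err
print(round(best_err, 4))
```

The fitness function (here, mean squared training error) is all a metaheuristic needs, which is why gradient-free trainers like MHDA can be swapped in without changing the network code.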
Citation styles: APA, Harvard, Vancouver, ISO, etc.
50

Wen, Guo Zhi. "The Algorithm Animation of Genetic Algorithm of Travelling Salesman Problem". Applied Mechanics and Materials 411-414 (September 2013): 2013–16. http://dx.doi.org/10.4028/www.scientific.net/amm.411-414.2013.

Full text
Abstract
The traveling salesman problem is analyzed with genetic algorithms. The best route map and the trend of the optimality grade for 500 cities before the first mutation, the best route map after 15 mutations, and the trend of the optimality grade at the final mutation are displayed with algorithm animation. The optimality grade is about 0.0455266 for the best route map before the first mutation but rises to about 0.058241 after 15 mutations. This shows that, through improvements to the algorithms and coding methods, the efficiency of solving the traveling salesman problem with genetic algorithms can be raised.
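A basic GA for the TSP along the lines the abstract describes can be sketched as follows. The article's own coding and mutation scheme are not reproduced; standard order crossover (OX) and swap mutation are used instead, and the circular city layout is an illustrative assumption chosen so the optimal tour is known:

```python
# Hedged sketch: a genetic algorithm for the TSP with permutation-coded
# tours, truncation selection, order crossover (OX), swap mutation, and
# elitism. Not the article's exact coding scheme.
import math
import random

def tour_length(tour, pts):
    return sum(math.dist(pts[tour[i]], pts[tour[i - 1]])
               for i in range(len(tour)))

def order_crossover(p1, p2):
    """OX: copy a slice from p1, fill the remaining cities in p2's order."""
    a, b = sorted(random.sample(range(len(p1)), 2))
    hole = set(p1[a:b])
    filler = [c for c in p2 if c not in hole]
    return filler[:a] + p1[a:b] + filler[a:]

def evolve(pts, pop_size=60, generations=200, p_mut=0.2):
    n = len(pts)
    pop = [random.sample(range(n), n) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda t: tour_length(t, pts))
        nxt = pop[:2]                              # elitism: keep two best
        while len(nxt) < pop_size:
            p1, p2 = random.sample(pop[:20], 2)    # truncation selection
            child = order_crossover(p1, p2)
            if random.random() < p_mut:            # swap mutation
                i, j = random.sample(range(n), 2)
                child[i], child[j] = child[j], child[i]
            nxt.append(child)
        pop = nxt
    return min(pop, key=lambda t: tour_length(t, pts))

random.seed(1)
# Eight cities on a unit circle: the optimal tour walks around the circle.
cities = [(math.cos(2 * math.pi * k / 8), math.sin(2 * math.pi * k / 8))
          for k in range(8)]
best = evolve(cities)
print(round(tour_length(best, cities), 3))
```

On this instance the optimal tour length is 16·sin(π/8) ≈ 6.123, so the printed length makes it easy to check that crossover and mutation are actually improving the population.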
Citation styles: APA, Harvard, Vancouver, ISO, etc.