Academic literature on the topic 'Classical worm algorithm'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Classical worm algorithm.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Classical worm algorithm"

1

Layana Castro, Pablo E., Joan Carles Puchalt, Antonio García Garví, and Antonio-José Sánchez-Salmerón. "Caenorhabditis elegans Multi-Tracker Based on a Modified Skeleton Algorithm." Sensors 21, no. 16 (2021): 5622. http://dx.doi.org/10.3390/s21165622.

Full text
Abstract:
Automatic tracking of Caenorhabditis elegans (C. elegans) in standard Petri dishes is challenging due to the high-resolution image requirements of monitoring a full Petri dish, but mainly due to potential losses of individual worm identity caused by aggregation of worms, overlaps, and body contact. To date, trackers only automate tests for individual worm behaviors, discarding data when body contact occurs. However, assays automating contact behaviors still require solutions to this problem. In this work, we propose a solution to this difficulty using computer vision techniques. On the one hand, a skeletonization method is applied to extract skeletons in overlap and contact situations. On the other hand, new optimization methods are proposed to solve the identity problem during these situations. Experiments were performed with 70 tracks and 3779 poses (skeletons) of C. elegans. Several cost functions with different criteria were evaluated, and the best results gave an accuracy of 99.42% in overlaps with other worms and noise on the plate using the modified skeleton algorithm, and a precision of 98.73% using the classical skeleton algorithm.
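For readers unfamiliar with this class of tracker, the identity-assignment step described above can be phrased as a bipartite matching between existing tracks and newly detected skeletons. Below is a minimal, hypothetical sketch (not the authors' code): the cost function here, a head/tail-symmetric mean point distance, is an assumption for illustration only, whereas the paper evaluates several task-specific criteria.

```python
# Hedged illustration: match detected skeletons to existing worm tracks by
# minimizing a pairwise cost matrix. Assumes all skeletons are resampled to
# the same number of points.
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_skeletons(tracks, detections):
    """tracks, detections: lists of (n_points, 2) arrays of skeleton coordinates."""
    cost = np.zeros((len(tracks), len(detections)))
    for i, t in enumerate(tracks):
        for j, d in enumerate(detections):
            direct = np.linalg.norm(t - d, axis=1).mean()
            flipped = np.linalg.norm(t - d[::-1], axis=1).mean()  # reversed worm
            cost[i, j] = min(direct, flipped)
    rows, cols = linear_sum_assignment(cost)  # Hungarian-style optimal assignment
    return list(zip(rows, cols))
```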
APA, Harvard, Vancouver, ISO, and other styles
2

Van den Nest, Maarten. "Simulating quantum computers with probabilistic methods." Quantum Information and Computation 11, no. 9&10 (2011): 784–812. http://dx.doi.org/10.26421/qic11.9-10-5.

Full text
Abstract:
We investigate the boundary between classical and quantum computational power. This work consists of two parts. First we develop new classical simulation algorithms that are centered on sampling methods. Using these techniques we generate new classes of classically simulatable quantum circuits where standard techniques relying on the exact computation of measurement probabilities fail to provide efficient simulations. For example, we show how various concatenations of matchgate, Toffoli, Clifford, bounded-depth, Fourier transform and other circuits are classically simulatable. We also prove that sparse quantum circuits as well as circuits composed of CNOT and $\exp[{i\theta X}]$ gates can be simulated classically. In a second part, we apply our results to the simulation of quantum algorithms. It is shown that a recent quantum algorithm, concerned with the estimation of Potts model partition functions, can be simulated efficiently classically. Finally, we show that the exponential speed-ups of Simon's and Shor's algorithms crucially depend on the very last stage in these algorithms, dealing with the classical postprocessing of the measurement outcomes. Specifically, we prove that both algorithms would be classically simulatable if the function classically computed in this step had a sufficiently peaked Fourier spectrum.
APA, Harvard, Vancouver, ISO, and other styles
3

Hastings, Matthew B. "Classical and Quantum Algorithms for Tensor Principal Component Analysis." Quantum 4 (February 27, 2020): 237. http://dx.doi.org/10.22331/q-2020-02-27-237.

Full text
Abstract:
We present classical and quantum algorithms based on spectral methods for a problem in tensor principal component analysis. The quantum algorithm achieves a quartic speedup while using exponentially smaller space than the fastest classical spectral algorithm, and a super-polynomial speedup over classical algorithms that use only polynomial space. The classical algorithms that we present are related to, but slightly different from, those presented recently by Wein et al. (2019). In particular, we have an improved threshold for recovery and the algorithms we present work for both even and odd order tensors. These results suggest that large-scale inference problems are a promising future application for quantum computers.
APA, Harvard, Vancouver, ISO, and other styles
4

Cherckesova, Larissa, Olga Safaryan, Pavel Razumov, Irina Pilipenko, Yuriy Ivanov, and Ivan Smirnov. "Speed improvement of the quantum factorization algorithm of P. Shor by upgrade its classical part." E3S Web of Conferences 224 (2020): 01016. http://dx.doi.org/10.1051/e3sconf/202022401016.

Full text
Abstract:
This report discusses Shor's quantum factorization algorithm and Pollard's ρ factorization algorithm. Shor's quantum factorization algorithm consists of classical and quantum parts. In the classical part, the Euclidean algorithm is traditionally used to find the greatest common divisor (GCD), but a large number of modern GCD algorithms now exist. Calculation results for eight algorithms were considered, among which the algorithm with the lowest task-execution time was identified; this allowed the quantum algorithm as a whole to work faster, which in turn provides greater potential for the practical application of Shor's quantum algorithm. The standard Shor's quantum algorithm was upgraded by replacing the binary algorithm with an iterative shift algorithm, removing the random-number-generation operation, and using an addition-chain algorithm for exponentiation. Both versions of Shor's algorithm (standard and upgraded) show high performance, with only an insignificant increase in processing time as the data grow. In addition, it was possible to modernize Shor's quantum algorithm in such a way that its efficiency exceeded that of the standard algorithm: the improved classical part increased the overall speed by 12%.
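As context for the GCD discussion above, here is a minimal sketch of two classical candidates such a benchmark might compare, the textbook Euclidean algorithm and the binary (Stein) GCD; this is illustrative only and does not reproduce the eight algorithms benchmarked in the paper.

```python
# Two classical GCD routines relevant to the classical part of Shor's
# algorithm. Which variant is fastest depends on operand size and hardware.
def gcd_euclid(a, b):
    # Euclidean algorithm: repeated division with remainder
    while b:
        a, b = b, a % b
    return a

def gcd_binary(a, b):
    # binary (Stein) GCD: shifts and subtractions instead of division
    if a == 0:
        return b
    if b == 0:
        return a
    shift = ((a | b) & -(a | b)).bit_length() - 1  # shared factors of two
    a >>= (a & -a).bit_length() - 1
    while b:
        b >>= (b & -b).bit_length() - 1
        if a > b:
            a, b = b, a
        b -= a
    return a << shift

assert gcd_euclid(462, 1071) == gcd_binary(462, 1071) == 21
```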
APA, Harvard, Vancouver, ISO, and other styles
5

Сехпосян Арташес. "Исследование некоторых классических алгоритмов задачи маршрутизации транспорта" [A study of some classical algorithms for the vehicle routing problem]. World Science 1, no. 3(43) (2019): 10–14. http://dx.doi.org/10.31435/rsglobal_ws/31032019/6398.

Full text
Abstract:
The article is devoted to the study of some classical algorithms for vehicle routing problems. It describes the existing classical routing algorithms, such as the Clarke-Wright savings algorithm and its extended variants. The main drawback of the Clarke-Wright algorithm is identified: its efficiency decreases as the calculation approaches its end, while the solutions found at the beginning are relatively good. To improve the performance of the Clarke-Wright algorithm, three approaches have been proposed in its extended version. The algorithms of Mole and Jameson and of Christofides, Mingozzi, and Toth, which can be used for tasks where the number of vehicles is not specified in advance, are also investigated, as is an algorithm proposed for the initial processing of vehicle routing problems. Classical improvement algorithms for the vehicle routing problem, which process either a single route at a time or several routes, are examined as well. The above algorithms give more interesting results than their predecessors and are considered optimal in terms of the use of certain resources.
APA, Harvard, Vancouver, ISO, and other styles
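To make the savings idea in the abstract above concrete, the sketch below computes the classical Clarke-Wright savings s(i, j) = d(0, i) + d(0, j) - d(i, j) for all customer pairs; a full implementation would additionally check capacity and route-merge feasibility, which are omitted here.

```python
# Minimal Clarke-Wright savings computation (illustrative sketch only).
import math

def savings_order(depot, customers):
    """depot: (x, y); customers: dict id -> (x, y). Returns (saving, i, j) tuples."""
    d = math.dist  # Euclidean distance
    ids = list(customers)
    pairs = []
    for a in range(len(ids)):
        for b in range(a + 1, len(ids)):
            i, j = ids[a], ids[b]
            s = d(depot, customers[i]) + d(depot, customers[j]) - d(customers[i], customers[j])
            pairs.append((s, i, j))
    return sorted(pairs, reverse=True)  # merge routes in decreasing-savings order

print(savings_order((0, 0), {1: (0, 5), 2: (1, 5), 3: (5, 0)}))
```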
6

Fehér, Áron, and Dénes Nimród Kutasi. "Modelling and Control of Bounded Hybrid Systems in Power Electronics." Acta Universitatis Sapientiae Electrical and Mechanical Engineering 9, no. 1 (2017): 33–42. http://dx.doi.org/10.1515/auseme-2017-0008.

Full text
Abstract:
In this work, an explicit Model Predictive Control algorithm is devised and compared to classical control algorithms applied to a series resonant DC/DC converter circuit. In the first part, a model of the converter as a hybrid system is created and studied. In the second part, the predictive algorithm is applied and tested on the model. Finally, the designed control algorithm is compared to classical PI and sliding mode controllers.
APA, Harvard, Vancouver, ISO, and other styles
7

van Apeldoorn, Joran, András Gilyén, Sander Gribling, and Ronald de Wolf. "Convex optimization using quantum oracles." Quantum 4 (January 13, 2020): 220. http://dx.doi.org/10.22331/q-2020-01-13-220.

Full text
Abstract:
We study to what extent quantum algorithms can speed up solving convex optimization problems. Following the classical literature we assume access to a convex set via various oracles, and we examine the efficiency of reductions between the different oracles. In particular, we show how a separation oracle can be implemented using Õ(1) quantum queries to a membership oracle, which is an exponential quantum speed-up over the Ω(n) membership queries that are needed classically. We show that a quantum computer can very efficiently compute an approximate subgradient of a convex Lipschitz function. Combining this with a simplification of recent classical work of Lee, Sidford, and Vempala gives our efficient separation oracle. This in turn implies, via a known algorithm, that Õ(n) quantum queries to a membership oracle suffice to implement an optimization oracle (the best known classical upper bound on the number of membership queries is quadratic). We also prove several lower bounds: Ω(n) quantum separation (or membership) queries are needed for optimization if the algorithm knows an interior point of the convex set, and Ω(n) quantum separation queries are needed if it does not.
APA, Harvard, Vancouver, ISO, and other styles
8

Van den Nest, Maarten. "Efficient classical simulations of quantum Fourier transforms and Normalizer circuits over Abelian groups." Quantum Information and Computation 13, no. 11&12 (2013): 1007–37. http://dx.doi.org/10.26421/qic13.11-12-7.

Full text
Abstract:
The quantum Fourier transform (QFT) is an important ingredient in various quantum algorithms which achieve superpolynomial speed-ups over classical computers. In this paper we study under which conditions the QFT can be simulated efficiently classically. We introduce a class of quantum circuits, called normalizer circuits: a normalizer circuit over a finite Abelian group is any quantum circuit comprising the QFT over the group, gates which compute automorphisms and gates which realize quadratic functions on the group. In our main result we prove that all normalizer circuits have polynomial-time classical simulations. The proof uses algorithms for linear diophantine equation solving and the monomial matrix formalism introduced in our earlier work. Our result generalizes the Gottesman-Knill theorem: in particular, Clifford circuits for $d$-level qudits arise as normalizer circuits over the group ${\mathbf Z}_d^m$. We also highlight connections between normalizer circuits and Shor's factoring algorithm, and the Abelian hidden subgroup problem in general. Finally we prove that quantum factoring cannot be realized as a normalizer circuit owing to its modular exponentiation subroutine.
APA, Harvard, Vancouver, ISO, and other styles
9

Nissim, R., and R. Brafman. "Distributed Heuristic Forward Search for Multi-agent Planning." Journal of Artificial Intelligence Research 51 (October 7, 2014): 293–332. http://dx.doi.org/10.1613/jair.4295.

Full text
Abstract:
This paper deals with the problem of classical planning for multiple cooperative agents who have private information about their local state and capabilities they do not want to reveal. Two main approaches have recently been proposed to solve this type of problem -- one is based on reduction to distributed constraint satisfaction, and the other on partial-order planning techniques. In classical single-agent planning, constraint-based and partial-order planning techniques are currently dominated by heuristic forward search. The question arises whether it is possible to formulate a distributed heuristic forward search algorithm for privacy-preserving classical multi-agent planning. Our work provides a positive answer to this question in the form of a general approach to distributed state-space search in which each agent performs only the part of the state expansion relevant to it. The resulting algorithms are simple and efficient -- outperforming previous algorithms by orders of magnitude -- while offering similar flexibility to that of forward-search algorithms for single-agent planning. Furthermore, one particular variant of our general approach yields a distributed version of the A* algorithm that is the first cost-optimal distributed algorithm for privacy-preserving planning.
APA, Harvard, Vancouver, ISO, and other styles
10

Fadhil Oudah, Sadeer, Prof Dr Hegazy Zaher, Assoc Prof Dr Naglaa Ragaa Saeid Hassan, and Dr Eman Oun. "Literature Review on Differential Evolution Algorithm." Journal of University of Shanghai for Science and Technology 23, no. 06 (2021): 1577–600. http://dx.doi.org/10.51201/jusst/21/06471.

Full text
Abstract:
The differential evolution algorithm is one of the most efficient metaheuristic approaches. In this paper, a review and analysis are presented in order to support future research on the differential evolution algorithm. It covers an analysis of about 142 papers on previous work on modifications of the algorithm, including the main parameters of its classical steps and hybridization with other algorithms. The analysis also surveys the applications that have been optimized using the differential evolution algorithm.
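For reference, the classical steps the review refers to, mutation, crossover, and selection, are sketched below in a minimal DE/rand/1/bin loop; the control parameters F and CR are illustrative defaults, not values recommended by the review.

```python
# Minimal classical differential evolution (DE/rand/1/bin) sketch.
import numpy as np

def differential_evolution(f, bounds, pop_size=20, F=0.8, CR=0.9, iters=200, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    pop = rng.uniform(lo, hi, size=(pop_size, len(lo)))
    fitness = np.array([f(x) for x in pop])
    for _ in range(iters):
        for i in range(pop_size):
            others = [k for k in range(pop_size) if k != i]
            a, b, c = pop[rng.choice(others, 3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)      # mutation
            mask = rng.random(len(lo)) < CR
            mask[rng.integers(len(lo))] = True             # keep at least one mutant gene
            trial = np.where(mask, mutant, pop[i])         # binomial crossover
            f_trial = f(trial)
            if f_trial < fitness[i]:                       # greedy selection
                pop[i], fitness[i] = trial, f_trial
    return pop[fitness.argmin()], fitness.min()

# usage: minimize the 3-dimensional sphere function
best_x, best_f = differential_evolution(lambda x: float((x ** 2).sum()), [(-5, 5)] * 3)
```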
APA, Harvard, Vancouver, ISO, and other styles

Dissertations / Theses on the topic "Classical worm algorithm"

1

Meier, Hannes. "Phase transitions in novel superfluids and systems with correlated disorder." Doctoral thesis, KTH, Statistisk fysik, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-160929.

Full text
Abstract:
Condensed matter systems undergoing phase transitions rarely allow exact solutions. The presence of disorder renders the situation even worse but collective Monte Carlo methods and parallel algorithms allow numerical descriptions. This thesis considers classical phase transitions in disordered spin systems in general and in effective models of superfluids with disorder and novel interactions in particular. Quantum phase transitions are considered via a quantum to classical mapping. Central questions are if the presence of defects changes universal properties and what qualitative implications follow for experiments. Common to the cases considered is that the disorder maps out correlated structures. All results are obtained using large-scale Monte Carlo simulations of effective models capturing the relevant degrees of freedom at the transition. Considering a model system for superflow aided by a defect network, we find that the onset properties are significantly altered compared to the $\lambda$-transition in $^{4}$He. This has qualitative implications on expected experimental signatures in a defect supersolid scenario. For the Bose glass to superfluid quantum phase transition in 2D we determine the quantum correlation time by an anisotropic finite size scaling approach. Without a priori assumptions on critical parameters, we find the critical exponent $z=1.8 \pm 0.05$ contradicting the long standing result $z=d$. Using a 3D effective model for multi-band type-1.5 superconductors we find that these systems possibly feature a strong first order vortex-driven phase transition. Despite its short-range nature details of the interaction are shown to play an important role. Phase transitions in disordered spin models exposed to correlated defect structures obtained via rapid quenches of critical loop and spin models are investigated. On long length scales the correlations are shown to decay algebraically. The decay exponents are expressed through known critical exponents of the disorder generating models. For cases where the disorder correlations imply the existence of a new long-range-disorder fixed point we determine the critical exponents of the disordered systems via finite size scaling methods of Monte Carlo data and find good agreement with theoretical expectations.
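Since this section's topic is the classical worm algorithm, it may help to sketch what such a collective Monte Carlo update looks like. The following is a hedged, simplified illustration for the 2D Ising model in its high-temperature (bond) representation, in the spirit of Prokofev-Svistunov worm updates; it is not code from the thesis, and estimator accumulation and the full worm bookkeeping are omitted.

```python
# Simplified classical worm update for the 2D Ising model on an L x L torus,
# working in the high-temperature bond representation (bond weight tanh(beta)).
import math, random

def worm_sweep(bonds, L, beta, steps, seed=0):
    """bonds: dict frozenset({site_a, site_b}) -> 0/1 bond occupation."""
    random.seed(seed)
    t = math.tanh(beta)
    head = tail = (random.randrange(L), random.randrange(L))
    for _ in range(steps):
        x, y = head
        nbr = random.choice([((x + 1) % L, y), ((x - 1) % L, y),
                             (x, (y + 1) % L), (x, (y - 1) % L)])
        bond = frozenset((head, nbr))
        if bonds.get(bond, 0):
            bonds[bond] = 0          # deleting a bond is always accepted
            head = nbr
        elif random.random() < t:    # inserting a bond costs a factor tanh(beta)
            bonds[bond] = 1
            head = nbr
        if head == tail:
            # worm closed: the configuration is a valid loop-gas sample; a full
            # implementation would measure observables and restart the worm here
            head = tail = (random.randrange(L), random.randrange(L))
    return bonds
```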
APA, Harvard, Vancouver, ISO, and other styles

Books on the topic "Classical worm algorithm"

1

Smith, Hazel. Improvisation in Contemporary Experimental Poetry. Edited by Benjamin Piekut and George E. Lewis. Oxford University Press, 2014. http://dx.doi.org/10.1093/oxfordhb/9780199892921.013.26.

Full text
Abstract:
This chapter characterizes recent developments in improvisation in contemporary experimental poetry. It traces the evolution of improvised poetry from the work of classic improvisers such as David Antin, Steve Benson, and Bob Cobbing to the present day. It argues that poetic improvisation has been marginalized not only within poetic practice but also within theories of poetic performance. It traces the development of poetic improvisation as “new sonic writing” into computerized modes of improvisation, particularly algorithmic text generation. It discusses the impact of social changes, such as increased gender equality, globalization, and transnationalism, on the evolution of poetic improvisation, which has become increasingly populated by women and also more ethnically diverse. It formulates the concept of a “posthuman cosmopolitanism” with regard to computerized improvisation.
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Classical worm algorithm"

1

Guan, Ji, Wang Fang, and Mingsheng Ying. "Robustness Verification of Quantum Classifiers." In Computer Aided Verification. Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-81685-8_7.

Full text
Abstract:
Several important models of machine learning algorithms have been successfully generalized to the quantum world, with potential speedup to training classical classifiers and applications to data analytics in quantum physics that can be implemented on the near future quantum computers. However, quantum noise is a major obstacle to the practical implementation of quantum machine learning. In this work, we define a formal framework for the robustness verification and analysis of quantum machine learning algorithms against noises. A robust bound is derived and an algorithm is developed to check whether or not a quantum machine learning algorithm is robust with respect to quantum training data. In particular, this algorithm can find adversarial examples during checking. Our approach is implemented on Google’s TensorFlow Quantum and can verify the robustness of quantum machine learning algorithms with respect to a small disturbance of noises, derived from the surrounding environment. The effectiveness of our robust bound and algorithm is confirmed by the experimental results, including quantum bits classification as the “Hello World” example, quantum phase recognition and cluster excitation detection from real world intractable physical problems, and the classification of MNIST from the classical world.
APA, Harvard, Vancouver, ISO, and other styles
2

Lundén, Daniel, Johannes Borgström, and David Broman. "Correctness of Sequential Monte Carlo Inference for Probabilistic Programming Languages." In Programming Languages and Systems. Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-72019-3_15.

Full text
Abstract:
Probabilistic programming is an approach to reasoning under uncertainty by encoding inference problems as programs. In order to solve these inference problems, probabilistic programming languages (PPLs) employ different inference algorithms, such as sequential Monte Carlo (SMC), Markov chain Monte Carlo (MCMC), or variational methods. Existing research on such algorithms mainly concerns their implementation and efficiency, rather than the correctness of the algorithms themselves when applied in the context of expressive PPLs. To remedy this, we give a correctness proof for SMC methods in the context of an expressive PPL calculus, representative of popular PPLs such as WebPPL, Anglican, and Birch. Previous work has studied correctness of MCMC using an operational semantics, and correctness of SMC and MCMC in a denotational setting without term recursion. However, for SMC inference, one of the most commonly used algorithms in PPLs as of today, no formal correctness proof exists in an operational setting. In particular, an open question is whether the resample locations in a probabilistic program affect the correctness of SMC. We solve this fundamental problem, and make four novel contributions: (i) we extend an untyped PPL lambda calculus and operational semantics to include explicit resample terms, expressing synchronization points in SMC inference; (ii) we prove, for the first time, that subject to mild restrictions, any placement of the explicit resample terms is valid for a generic form of SMC inference; (iii) as a result of (ii), our calculus benefits from classic results from the SMC literature: a law of large numbers and an unbiased estimate of the model evidence; and (iv) we formalize the bootstrap particle filter for the calculus and discuss how our results can be further extended to other SMC algorithms.
APA, Harvard, Vancouver, ISO, and other styles
3

Yuan, David Yu, and Tony Wildish. "Bioinformatics Application with Kubeflow for Batch Processing in Clouds." In Lecture Notes in Computer Science. Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-59851-8_24.

Full text
Abstract:
Bioinformatics pipelines make extensive use of HPC batch processing. The rapid growth of data volumes and computational complexity, especially for modern applications such as machine learning algorithms, imposes significant challenges to local HPC facilities. Many attempts have been made to burst HPC batch processing into clouds with virtual machines. They all suffer from some common issues, for example: very high overhead, slow to scale up and slow to scale down, and nearly impossible to be cloud-agnostic. We have successfully deployed and run several pipelines on Kubernetes in OpenStack, Google Cloud Platform and Amazon Web Services. In particular, we use Kubeflow on top of Kubernetes for more sophisticated job scheduling, workflow management, and first class support for machine learning. We choose Kubeflow/Kubernetes to avoid the overhead of provisioning of virtual machines, to achieve rapid scaling with containers, and to be truly cloud-agnostic in all cloud environments. Kubeflow on Kubernetes also creates some new challenges in deployment, data access, performance monitoring, etc. We will discuss the details of these challenges and provide our solutions. We will demonstrate how our solutions work across all three very different clouds for both classical pipelines and new ones for machine learning.
APA, Harvard, Vancouver, ISO, and other styles
4

"Worm Algorithm for Problems of Quantum and Classical Statistics." In Understanding Quantum Phase Transitions. CRC Press, 2010. http://dx.doi.org/10.1201/b10273-29.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Sakk, Eric. "Quantum Fourier Operators and Their Application." In Real Perspective of Fourier Transforms and Current Developments in Superconductivity. IntechOpen, 2021. http://dx.doi.org/10.5772/intechopen.94902.

Full text
Abstract:
The application of the quantum Fourier transform (QFT) within the field of quantum computation has been manifold. Shor’s algorithm, phase estimation and computing discrete logarithms are but a few classic examples of its use. These initial blueprints for quantum algorithms have sparked a cascade of tantalizing solutions to problems considered to be intractable on a classical computer. Therefore, two main threads of research have unfolded. First, novel applications and algorithms involving the QFT are continually being developed. Second, improvements in the algorithmic complexity of the QFT are also a sought after commodity. In this work, we review the structure of the QFT and its implementation. In order to put these concepts in their proper perspective, we provide a brief overview of quantum computation. Finally, we provide a permutation structure for putting the QFT within the context of universal computation.
APA, Harvard, Vancouver, ISO, and other styles
6

Sharma, Oshin, and Hemraj Saini. "Performance Evaluation of VM Placement Using Classical Bin Packing and Genetic Algorithm for Cloud Environment." In Research Anthology on Multi-Industry Uses of Genetic Programming and Algorithms. IGI Global, 2021. http://dx.doi.org/10.4018/978-1-7998-8048-6.ch068.

Full text
Abstract:
In the current era, the trend of cloud computing is increasing with every passing day due to one of its dominant services, Infrastructure as a Service (IaaS), which virtualizes the hardware by creating multiple instances of VMs on a single physical machine. Virtualizing the hardware improves resource utilization, but it can also leave the system over-utilized with inefficient performance. Therefore, these VMs need to be migrated to other physical machines using a VM consolidation process in order to reduce the number of host machines and to improve system performance. Thus, the idea of placing the virtual machines on other hosts has led to the proposal of many new VM placement algorithms. However, a reduced set of physical machines needs a smaller amount of power; therefore, in the current work the authors present a decision-making VM placement system based on a genetic algorithm and compare it with three predefined VM placement techniques based on classical bin packing. This analysis contributes to a better understanding of the effects of the placement strategies on the overall performance of the cloud environment, and of how the use of a genetic algorithm delivers better results for VM placement than classical bin-packing algorithms.
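As a point of reference for the classical side of that comparison, here is a minimal first-fit-decreasing (FFD) placement sketch, one common bin-packing heuristic for VM placement; treating each VM as a single scalar demand is a simplification assumed here.

```python
# Illustrative first-fit-decreasing VM placement (one-dimensional sketch).
def first_fit_decreasing(demands, capacity):
    hosts = []  # each host is the list of VM demands placed on it
    for vm in sorted(demands, reverse=True):
        for host in hosts:
            if sum(host) + vm <= capacity:
                host.append(vm)
                break
        else:
            hosts.append([vm])  # no host fits: power on a new physical machine
    return hosts

print(first_fit_decreasing([0.5, 0.7, 0.2, 0.4, 0.1], capacity=1.0))
# -> [[0.7, 0.2, 0.1], [0.5, 0.4]]  (two hosts suffice)
```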
APA, Harvard, Vancouver, ISO, and other styles
7

Kotamarti, Rao M., Mitchell A. Thornton, and Margaret H. Dunham. "Quantum Computing Approach for Alignment-Free Sequence Search and Classification." In Multidisciplinary Computational Intelligence Techniques. IGI Global, 2012. http://dx.doi.org/10.4018/978-1-4666-1830-5.ch017.

Full text
Abstract:
Many classes of algorithms that suffer from large complexities when implemented on conventional computers may be reformulated resulting in greatly reduced complexity when implemented on quantum computers. The dramatic reductions in complexity for certain types of quantum algorithms coupled with the computationally challenging problems in some bioinformatics problems motivates researchers to devise efficient quantum algorithms for sequence (DNA, RNA, protein) analysis. This chapter shows that the important sequence classification problem in bioinformatics is suitable for formulation as a quantum algorithm. This chapter leverages earlier research for sequence classification based on the Extensible Markov Model (EMM) and proposes a quantum computing alternative. The authors utilize sequence family profiles built using the EMM methodology, which is based on using pre-counted word data for each sequence. Then a new method termed quantum seeding is proposed for generating a key based on high frequency words. The key is applied in a quantum search based on Grover's algorithm to determine a candidate set of models, resulting in a significantly reduced search space. Given Z as a function of M models of size N, the quantum version of the seeding algorithm has a time complexity on the order of O(√Z), as opposed to O(Z) for the standard classical version, for large values of Z.
APA, Harvard, Vancouver, ISO, and other styles
8

Samanta, Sutapa, and Manoj K. Jha. "Multi Depot Probabilistic Vehicle Routing Problems with a Time Window." In Geographic Information Systems. IGI Global, 2013. http://dx.doi.org/10.4018/978-1-4666-2038-4.ch053.

Full text
Abstract:
Vehicle Routing Problems (VRPs) are prevalent in all large pick-up and delivery logistics systems and are critical to city logistics operations. Of notable significance are three key extensions to classical VRPs: (1) a multi-depot scenario; (2) probabilistic demand; and (3) time-window constraints, which are considered simultaneously with VRPs in this paper. The problem then becomes a Multi Depot Probabilistic Vehicle Routing Problem with a Time Window (MDPVRPTW). The underlying complexities of the MDPVRPTW are analyzed and a heuristic approach is presented to solve the problem. Genetic algorithms (GAs) are found to be capable of providing an efficient solution to the so-called MDPVRPTW. Within the GAs, two modification operators, namely crossover and mutation, are designed specially to solve the MDPVRPTW. Three numerical examples with 14, 25, and 51 nodes are presented to test the efficiency of the algorithm as the problem size grows. The proposed algorithms perform satisfactorily and the limiting-case solutions are in agreement with the constraints. Additional work is needed to test the robustness and efficiency of the algorithms as the problem size grows.
APA, Harvard, Vancouver, ISO, and other styles
9

Garcia, Juan Carlos Castillo, Jesús Everardo Olguín Tiznado, Claudia Camargo Wilson, Juan Andrés López Barreras, and Rafael García Martínez. "Application of the Simultaneous Perturbation Stochastic Approximation Algorithm for Process Optimization." In Design of Experiments for Chemical, Pharmaceutical, Food, and Industrial Applications. IGI Global, 2020. http://dx.doi.org/10.4018/978-1-7998-1518-1.ch014.

Full text
Abstract:
There are different techniques for the optimization of industrial processes that are widely used in industry, such as experimental design or response surface methodology, to name a few. There are also alternative techniques for optimization, like the Simultaneous Perturbation Stochastic Approximation (SPSA) algorithm. This chapter compares the results that can be obtained with classical techniques against the results that alternative search techniques such as the SPSA algorithm can achieve. The authors start from the work reported by Gedi et al. (2015) to implement the SPSA algorithm. The experiments allow the authors to affirm that, for this case study, SPSA is capable of matching, and even improving on, the previously reported results.
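To clarify what makes SPSA attractive, the sketch below shows its core iteration: a full gradient estimate from just two loss evaluations, using one simultaneous random perturbation of all parameters. The gain-sequence constants are common textbook defaults, not values from this chapter.

```python
# Minimal SPSA (Simultaneous Perturbation Stochastic Approximation) sketch.
import numpy as np

def spsa_minimize(loss, theta0, iters=500, a=0.1, c=0.1, alpha=0.602, gamma=0.101, seed=0):
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    for k in range(1, iters + 1):
        ak = a / k ** alpha                                # decaying step size
        ck = c / k ** gamma                                # decaying perturbation size
        delta = rng.choice([-1.0, 1.0], size=theta.shape)  # Rademacher directions
        # two evaluations estimate every partial derivative at once
        g_hat = (loss(theta + ck * delta) - loss(theta - ck * delta)) / (2 * ck * delta)
        theta -= ak * g_hat
    return theta

# usage: minimize a simple quadratic response surface
print(spsa_minimize(lambda x: float(((x - 2.0) ** 2).sum()), [0.0, 0.0]))
```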
APA, Harvard, Vancouver, ISO, and other styles
10

Truta, Traian Marius, Alina Campan, and Matthew Beckerich. "Efficient Approximation Algorithms for Minimum Dominating Sets in Social Networks." In Research Anthology on Artificial Intelligence Applications in Security. IGI Global, 2021. http://dx.doi.org/10.4018/978-1-7998-7705-9.ch052.

Full text
Abstract:
Social networks are increasingly becoming an outlet that is more and more powerful in spreading news and influencing individuals. Compared with traditional media outlets such as newspapers, radio, and television, social networks empower users to spread their ideological message and/or to deliver targeted advertising very efficiently in terms of both cost and time. In this article, the authors focus on efficiently finding dominating sets in social networks for the classical dominating set problem as well as for two related problems: partial dominating sets and d-hop dominating sets. They present algorithms for efficiently determining a good approximation of the social network's minimum dominating sets for each of the three variants. The authors ran an extensive suite of experiments to test the presented algorithms on several datasets, including real networks made available by the Stanford Network Analysis Project and synthetic networks following the power-law and random models that they generated for this work. The experiments show that which algorithm determines the dominating set most efficiently depends on network characteristics and on the relative importance of the size of the dominating set versus the time required to determine it.
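For orientation, the classical greedy baseline for this problem, repeatedly picking the node that covers the most not-yet-dominated nodes, can be sketched in a few lines; this is the textbook ln(n)-approximation, not the authors' specific algorithms.

```python
# Classical greedy approximation for a minimum dominating set.
def greedy_dominating_set(adj):
    """adj: dict node -> set of neighbors (undirected graph)."""
    undominated = set(adj)
    chosen = set()
    while undominated:
        # pick the node whose closed neighborhood covers the most new nodes
        best = max(adj, key=lambda v: len(({v} | adj[v]) & undominated))
        chosen.add(best)
        undominated -= {best} | adj[best]
    return chosen

# usage: a small example graph
adj = {1: {2, 3, 4}, 2: {1}, 3: {1}, 4: {1, 5}, 5: {4}}
print(greedy_dominating_set(adj))  # {1, 4}
```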
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Classical worm algorithm"

1

Thombre, Ritu, and Babita Jajodia. "Experimental Analysis of Attacks on RSA & Rabin Cryptosystems using Quantum Shor’s Algorithm." In International Conference on Women Researchers in Electronics and Computing. AIJR Publisher, 2021. http://dx.doi.org/10.21467/proceedings.114.74.

Full text
Abstract:
In this world of massive communication networks, data security and confidentiality are of crucial importance for maintaining secure private communication and protecting information against eavesdropping attacks. Existing cryptosystems provide data security and confidentiality through encryption and signature algorithms for secured communication. Classical computers use cryptographic algorithms that use the product of two large prime numbers for generating public and private keys. These classical algorithms rely on the fact that no polynomial-time classical algorithm for integer factorization is known, so factoring requires super-polynomial time and becomes infeasible for large enough integers. Shor's algorithm is a well-known algorithm for factoring large integers in polynomial time, taking only O(b³) time and O(b) space on b-bit number inputs. Shor's algorithm poses a potential threat to current security systems given the ongoing advancement of quantum computers. This paper discusses how Shor's algorithm will be able to break integer-factorization-based cryptographic algorithms, for example, the Rivest–Shamir–Adleman (RSA) and Rabin algorithms. As a proof of concept, an experimental analysis of Shor's quantum algorithm on existing public-key cryptosystems using the IBM Quantum Experience is performed, factorizing integers of moderate length (seven bits) due to the limit of thirty-two qubits in present IBM quantum computers. In a nutshell, this work demonstrates how Shor's algorithm poses a threat to confidentiality and authentication services.
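The classical half of the attack described above is small enough to sketch: once the quantum subroutine returns the period r of a^x mod N, the factors of the RSA modulus follow from two gcd computations. In this illustration the period finding is simulated by brute force, which is exactly the step the quantum computer accelerates.

```python
# Hedged sketch of Shor's classical post-processing against a toy RSA modulus.
import math, random

def shor_classical_part(N, seed=1):
    random.seed(seed)
    while True:
        a = random.randrange(2, N)
        g = math.gcd(a, N)
        if g > 1:
            return g                      # lucky draw: a already shares a factor
        r = 1
        while pow(a, r, N) != 1:          # brute-force period finding stands in
            r += 1                        # for the quantum subroutine
        if r % 2 == 0 and pow(a, r // 2, N) != N - 1:
            p = math.gcd(pow(a, r // 2, N) - 1, N)
            if 1 < p < N:
                return p                  # nontrivial factor of N

print(shor_classical_part(77))  # factors the seven-bit modulus 77 = 7 * 11
```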
APA, Harvard, Vancouver, ISO, and other styles
2

Guo, Lei, Lijian Zhou, Shaohui Jia, Li Yi, Haichong Yu, and Xiaoming Han. "An Automatic Segmentation Algorithm Used in Pipeline Integrity Alignment Sheet Design." In 2010 8th International Pipeline Conference. ASMEDC, 2010. http://dx.doi.org/10.1115/ipc2010-31036.

Full text
Abstract:
Pipeline segmentation design is the first step in alignment sheet design. In this step, several rectangular boxes are used to cover the pipeline, and each box becomes the basic unit of alignment sheet design. After studying various pipeline alignment sheet mapping technologies, the authors found that the traditional manual design method, although it can take advantage of designers' judgment, causes low work efficiency. By reviewing existing work at home and abroad, the authors concluded that it is feasible to develop an automatic segmentation algorithm based on existing curve simplification algorithms to improve the efficiency of pipeline section design and alignment sheet mapping. Based on several classical curve simplification algorithms, the authors propose an automatic segmentation algorithm that automatically adjusts the location of the rectangular boxes according to the number of pipeline/circle intersection points and pipeline/rectangular-box intersection points. Finally, by comparing time and results with the traditional manual method, the authors demonstrate the algorithm's effectiveness and feasibility.
APA, Harvard, Vancouver, ISO, and other styles
3

Wu, Jie, Philippe Mainçon, Carl M. Larsen, and Halvor Lie. "VIV Force Identification Using Classical Optimal Control Algorithm." In ASME 2009 28th International Conference on Ocean, Offshore and Arctic Engineering. ASMEDC, 2009. http://dx.doi.org/10.1115/omae2009-79568.

Full text
Abstract:
Due to the difficulty of direct force measurements in vortex induced vibration (VIV) experiments with long elastic cylinders, only accelerometer and bending strain measurements are available. Still, obtaining information on the force is of great interest to researchers. The work presented in this paper follows the same principle as Mainçon (2004), who estimated external forces acting on a riser subjected to VIV from measured response by using a classical optimal tracking algorithm. The objective of this study is first to present a method for extracting VIV forces from data measured on long elastic riser models subjected to current. The second objective is to extract first order (primary) cross-flow force coefficients by a combined use of modal filtering. The algorithm minimizes the sum of the squares of the discrepancies between measured and predicted response, plus a constant times the sum of squares of the external forces, while satisfying the system's dynamic equilibrium equation. FEM discretization of the riser with Euler beam elements leads to stiffness and mass matrices. The dimension of these matrices is reduced by eliminating the rotational degrees of freedom using master-slave condensation, which greatly facilitates the matrix iteration. Displacement is used in this study as input to the algorithm to identify forces. The method is verified against synthetic measurement data. The results showed the algorithm's capability to accurately estimate the input forces from noisy measurement data. The method is applied to data from a rotating rig test to identify hydrodynamic forces in the primary cross-flow vortex shedding frequency range. The emphasis is on extracting a force-coefficient database. One important finding is that the high-mode component of the force contributed little to the response while complicating the coefficient database; it is therefore neglected by filtering the measurements with modal analysis before the inverse force estimation. The excitation and added mass coefficients are calculated and their contour plots are generated. Comparisons with existing data are investigated.
APA, Harvard, Vancouver, ISO, and other styles
4

Fišer, Daniel, and Antonín Komenda. "Fact-Alternating Mutex Groups for Classical Planning (Extended Abstract)." In Twenty-Seventh International Joint Conference on Artificial Intelligence {IJCAI-18}. International Joint Conferences on Artificial Intelligence Organization, 2018. http://dx.doi.org/10.24963/ijcai.2018/793.

Full text
Abstract:
Mutex groups are defined in the context of STRIPS planning as sets of facts of which at most one can be true in any state reachable from the initial state. This work provides a complexity analysis showing that inference of mutex groups is as hard as planning itself (PSPACE-complete), and it also shows a tight relationship between mutex groups and graph cliques. Furthermore, we propose a new type of mutex group called a fact-alternating mutex group (fam-group), for which inference is NP-complete. We introduce an algorithm for the inference of fam-groups based on integer linear programming that is complete with respect to maximal fam-groups, and we demonstrate that fam-groups can be beneficial in the translation of planning tasks into finite domain representation, for the detection of dead-end states, and for the pruning of spurious operators. The experimental evaluation of the pruning algorithm shows a substantial increase in the number of solved tasks in domains from the optimal deterministic track of the last two planning competitions (IPC 2011 and 2014).
APA, Harvard, Vancouver, ISO, and other styles
5

Chen, Cheng-Hung, Ken W. Bosworth, and Marco P. Schoen. "Investigation of Particle Swarm Optimization Dynamics." In ASME 2007 International Mechanical Engineering Congress and Exposition. ASMEDC, 2007. http://dx.doi.org/10.1115/imece2007-41343.

Full text
Abstract:
In this work, a set of operators for a Particle Swarm (PS) based optimization algorithm is investigated for the purpose of finding optimal values for some of the classical benchmark problems. Particle swarm algorithms are implemented as mathematical operators inspired by the social behaviors of bird flocks and fish schools. In addition, particle swarm algorithms utilize a small number of relatively uncomplicated rules in response to complex behaviors, such that they are computationally inexpensive in terms of memory requirements and processing time. In particle swarm algorithms, particles in a continuous variable space are linked with neighbors, so the rule used to update particle velocities influences the simulation results. The paper presents a statistical investigation of the velocity update rule for the continuous-variable PS algorithm. In particular, the probability density function influencing the particle velocity update is investigated, along with the components used to construct the updated velocity vector of each particle within a flock. The simulation results for several numerical benchmark examples indicate that a small amount of negative velocity is necessary to obtain good optimal values near global optimality.
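As background for that investigation, the canonical velocity update the paper analyzes statistically has the form v <- w*v + c1*r1*(p_best - x) + c2*r2*(g_best - x); a minimal sketch follows, with common default coefficients standing in for the values the paper actually studies.

```python
# Canonical PSO velocity update (illustrative defaults, not the paper's values).
import numpy as np

def pso_velocity_update(v, x, p_best, g_best, w=0.7, c1=1.5, c2=1.5, rng=None):
    rng = rng or np.random.default_rng()
    r1 = rng.random(x.shape)   # random pull toward the particle's own best
    r2 = rng.random(x.shape)   # random pull toward the neighborhood best
    return w * v + c1 * r1 * (p_best - x) + c2 * r2 * (g_best - x)

# usage
x, v = np.zeros(2), np.zeros(2)
print(pso_velocity_update(v, x, p_best=np.array([1.0, 1.0]), g_best=np.array([2.0, 0.0])))
```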
APA, Harvard, Vancouver, ISO, and other styles
6

Wang, Shouda, Weijie Zheng, and Benjamin Doerr. "Choosing the Right Algorithm With Hints From Complexity Theory." In Thirtieth International Joint Conference on Artificial Intelligence {IJCAI-21}. International Joint Conferences on Artificial Intelligence Organization, 2021. http://dx.doi.org/10.24963/ijcai.2021/234.

Full text
Abstract:
Choosing a suitable algorithm from the myriads of different search heuristics is difficult when faced with a novel optimization problem. In this work, we argue that the purely academic question of what could be the best possible algorithm in a certain broad class of black-box optimizers can give fruitful indications in which direction to search for good established optimization heuristics. We demonstrate this approach on the recently proposed DLB benchmark, for which the only known results are O(n^3) runtimes for several classic evolutionary algorithms and an O(n^2 log n) runtime for an estimation-of-distribution algorithm. Our finding that the unary unbiased black-box complexity is only O(n^2) suggests the Metropolis algorithm as an interesting candidate and we prove that it solves the DLB problem in quadratic time. Since we also prove that better runtimes cannot be obtained in the class of unary unbiased algorithms, we shift our attention to algorithms that use the information of more parents to generate new solutions. An artificial algorithm of this type having an O(n log n) runtime leads to the result that the significance-based compact genetic algorithm (sig-cGA) can solve the DLB problem also in time O(n log n). Our experiments show a remarkably good performance of the Metropolis algorithm, clearly the best of all algorithms regarded for reasonable problem sizes.
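Since the Metropolis algorithm is the paper's headline recommendation, a minimal sketch of it on a pseudo-Boolean benchmark may be useful; the benchmark (OneMax) and temperature here are illustrative stand-ins, not the DLB function or parameters analyzed in the paper.

```python
# Minimal Metropolis algorithm for maximizing a pseudo-Boolean function.
import math, random

def metropolis(f, n, temperature=1.0, steps=10_000, seed=0):
    random.seed(seed)
    x = [random.randint(0, 1) for _ in range(n)]
    fx = f(x)
    for _ in range(steps):
        i = random.randrange(n)
        x[i] ^= 1                      # propose: flip one uniformly random bit
        fy = f(x)
        delta = fx - fy                # positive delta means the flip is worse
        if delta <= 0 or random.random() < math.exp(-delta / temperature):
            fx = fy                    # accept the move
        else:
            x[i] ^= 1                  # reject: undo the flip
    return x, fx

# usage: OneMax as a stand-in benchmark
best, value = metropolis(lambda bits: sum(bits), n=30)
print(value)
```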
APA, Harvard, Vancouver, ISO, and other styles
7

Ståhlberg, Simon, Guillem Francès, and Jendrik Seipp. "Learning Generalized Unsolvability Heuristics for Classical Planning." In Thirtieth International Joint Conference on Artificial Intelligence {IJCAI-21}. International Joint Conferences on Artificial Intelligence Organization, 2021. http://dx.doi.org/10.24963/ijcai.2021/574.

Full text
Abstract:
Recent work in classical planning has introduced dedicated techniques for detecting unsolvable states, i.e., states from which no goal state can be reached. We approach the problem from a generalized planning perspective and learn first-order-like formulas that characterize unsolvability for entire planning domains. We show how to cast the problem as a self-supervised classification task. Our training data is automatically generated and labeled by exhaustive exploration of small instances of each domain, and candidate features are automatically computed from the predicates used to define the domain. We investigate three learning algorithms with different properties and compare them to heuristics from the literature. Our empirical results show that our approach often captures important classes of unsolvable states with high classification accuracy. Additionally, the logical form of our heuristics makes them easy to interpret and reason about, and can be used to show that the characterizations learned in some domains capture exactly all unsolvable states of the domain.
APA, Harvard, Vancouver, ISO, and other styles
8

Seipp, Jendrik. "Pattern Selection for Optimal Classical Planning with Saturated Cost Partitioning." In Twenty-Eighth International Joint Conference on Artificial Intelligence {IJCAI-19}. International Joint Conferences on Artificial Intelligence Organization, 2019. http://dx.doi.org/10.24963/ijcai.2019/780.

Full text
Abstract:
Pattern databases are the foundation of some of the strongest admissible heuristics for optimal classical planning. Experiments showed that the most informative way of combining information from multiple pattern databases is to use saturated cost partitioning. Previous work selected patterns and computed saturated cost partitionings over the resulting pattern database heuristics in two separate steps. We introduce a new method that uses saturated cost partitioning to select patterns and show that it outperforms all existing pattern selection algorithms.
APA, Harvard, Vancouver, ISO, and other styles
9

Ham, Jongho, Jungeun An, Bongjae Kim, Jaewoong Choi, and Booki Kim. "Support Optimization for Piping System With Machine Learning." In ASME 2017 36th International Conference on Ocean, Offshore and Arctic Engineering. American Society of Mechanical Engineers, 2017. http://dx.doi.org/10.1115/omae2017-62356.

Full text
Abstract:
Piping stress analysis is performed by manipulating support type, support location, and pipe arrangement based on many specific design criteria. A classical way to find good engineering solutions satisfying the design criteria among many combinations is obviously time-consuming, and in field practice it also depends highly on the engineer's experience and ability. This paper proposes a hybrid method that combines several global-search optimization algorithms with prediction-model generation in order to automatically control the combinations of support types as engineering solutions. Here, we use efficient and popular algorithms such as the genetic algorithm, swarm intelligence, and Gaussian pattern search to develop an initial design of experiments. From the set of initials, we build and update a prediction model by applying a machine learning algorithm such as an artificial neural network. As a result of using the hybrid method, the engineering solution is well optimized relative to the classical solution. Design variables for this problem are the types of restraints (or pipe support types). Nonlinearity conditions such as gaps and frictions are also treated as key design variables. Each restraint is initially identified as a binary set of design variables and transformed to integer numbers to run on the n-dimensional design space, where the number of dimensions corresponds to the number of pipe supports. Currently, pipe stress analysis problems are divided into sizes small enough to run on one computer for project-management purposes. For bigger systems with more design variables, the hybrid machine learning method plays a key role in saving computation time with the help of additional parallel computation techniques.
APA, Harvard, Vancouver, ISO, and other styles
10

Mazur, Marek, Philippe Scouflaire, Franck Richecoeur, Léo Cunha Caldeira Mesquita, Aymeric Vie, and Sébastien Ducruix. "Planar Velocity Measurements at 100 kHz in Gas Turbine Combustors With a Continuous Laser Source." In ASME Turbo Expo 2017: Turbomachinery Technical Conference and Exposition. American Society of Mechanical Engineers, 2017. http://dx.doi.org/10.1115/gt2017-64597.

Full text
Abstract:
This work aims at presenting a novel approach to measure planar velocity in gas turbine combustors at very high sampling frequencies. For this purpose, a continuous wave laser is used to illuminate particles seeded into the flow. The Mie scattering images are acquired with a high-speed camera at 100 kHz with a constant time between frames. The velocity fields are then obtained by applying classical PIV algorithms to successive particle scattering images. While this approach has recently been used in other research fields, such as aerodynamics or hydrodynamics, it is relatively new in combustion studies, where pulsed laser systems with higher power levels are usually preferred. The proposed technique is an economical and ergonomic solution for determining velocity fields at very high sampling frequencies. It is highly portable, safe, and convenient to use and align. The main drawback is the long image exposure duration due to the low laser energy. This leads to a smearing effect of the captured particles and acts as a low-pass filter. As a consequence, the PIV algorithm does not determine the displacement of “dots”, but of “traces”. The measurement technique is tested experimentally on a model gas turbine combustor at laboratory scale. The test is performed in three steps: (1) the instantaneous velocity fields are analysed to verify whether the flame topology is represented correctly; (2) the mean and RMS velocity fields obtained with the present technique are compared with those obtained by classical low-speed PIV; (3) instantaneous synthetic Mie scattering fields are generated from a large eddy simulation (LES) of a similar combustor to test the algorithms. The planar velocity fields are calculated from these images and compared for the two techniques. Finally, possible error sources of the new technique are discussed.
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Classical worm algorithm"

1

Corriveau, Elizabeth, Ashley Mossell, Holly VerMeulen, Samuel Beal, and Jay Clausen. The effectiveness of laser-induced breakdown spectroscopy (LIBS) as a quantitative tool for environmental characterization. Engineer Research and Development Center (U.S.), 2021. http://dx.doi.org/10.21079/11681/40263.

Full text
Abstract:
Laser-induced breakdown spectroscopy (LIBS) is a rapid, low-cost analytical method with potential applications for quantitative analysis of soils for the heavy-metal contaminants found on military ranges. The Department of Defense (DoD), Army, and Department of Homeland Security (DHS) have mission requirements to acquire the ability to detect and identify chemicals of concern in the field. The quantitative potential of a commercial off-the-shelf (COTS) hand-held LIBS device and a classic laboratory bench-top LIBS system was examined by measuring heavy metals (antimony, tungsten, iron, lead, and zinc) in soils from six military ranges. To ensure the accuracy of the quantified results, we also examined the soil samples using other hand-held and bench-top analytical methods, including Inductively Coupled Plasma Optical Emission Spectrometry (ICP-OES) and X-Ray Fluorescence (XRF). The effects of soil heterogeneity on quantitative analysis were reviewed for both hand-held and bench-top systems, and multivariate and univariate calibration algorithms for heavy-metal quantification were compared. In addition, the influence of cold temperatures on signal intensity and the resulting concentrations was examined to further assess the viability of this technology in cold environments. Overall, the results indicate that additional work should be performed to enhance the ability of LIBS as a reliable quantitative analytical tool.
APA, Harvard, Vancouver, ISO, and other styles