Journal articles on the topic 'Computer algorithm'

Consult the top 50 journal articles for your research on the topic 'Computer algorithm.'

1

Hiremath, Shivakumar U., Shashank P. Baannadabavi, Shreyansh Kabbin, and Shrikanth Shirakol. "Edge Detection Algorithm Using PI-Computer." Bonfring International Journal of Research in Communication Engineering 6, Special Issue (2016): 79–82. http://dx.doi.org/10.9756/bijrce.8206.

2

He, Bo. "Fast Distributed Algorithm of Mining Global Frequent Itemsets." Advanced Materials Research 219-220 (March 2011): 191–94. http://dx.doi.org/10.4028/www.scientific.net/amr.219-220.191.

Abstract:
Most distributed algorithms for mining global frequent itemsets work on a net-structured network and adopt Apriori-like algorithms, which suffer from two problems: large numbers of candidate itemsets and heavy communication traffic. To address these problems, this paper proposes a fast distributed algorithm for mining global frequent itemsets, the FDMGFI algorithm, which designates a centre node. FDMGFI has the computing nodes compute local frequent itemsets independently with the FP-growth algorithm; the centre node then exchanges data with the other nodes and combines the results, finally yielding the global frequent itemsets. Through top-down and bottom-up search strategies, FDMGFI requires far less communication traffic. Theoretical analysis and experimental results suggest that the FDMGFI algorithm is fast and effective.
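
The abstract describes the centre-node combination step only at a high level. As a rough illustration, the Python sketch below shows a naive version of that step: each node reports its locally frequent itemsets with counts, and a centre node merges them against a global support threshold. The function names and data layout are assumptions for illustration, not the paper's actual FDMGFI protocol, whose top-down/bottom-up search exists precisely to recover itemsets that are frequent globally but missed locally by a merge like this one.

from collections import Counter

def merge_local_itemsets(local_results, total_transactions, min_support):
    # local_results: one {frozenset(itemset): local_count} dict per node.
    global_counts = Counter()
    for node_counts in local_results:
        for itemset, count in node_counts.items():
            global_counts[itemset] += count
    # Keep itemsets whose combined count meets the global support threshold.
    # NOTE: this naive merge undercounts itemsets that some node did not
    # report; the real FDMGFI adds extra search rounds to correct for that.
    threshold = min_support * total_transactions
    return {s: c for s, c in global_counts.items() if c >= threshold}

# Two nodes report local counts over 8 transactions in total.
node_a = {frozenset({"milk"}): 3, frozenset({"milk", "bread"}): 2}
node_b = {frozenset({"milk"}): 2, frozenset({"bread"}): 1}
print(merge_local_itemsets([node_a, node_b], 8, min_support=0.5))
# -> {frozenset({'milk'}): 5}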
3

Lu, Xuandiyang. "Research on Biological Population Evolutionary Algorithm and Individual Adaptive Method Based on Quantum Computing." Wireless Communications and Mobile Computing 2022 (March 22, 2022): 1–9. http://dx.doi.org/10.1155/2022/5188335.

Abstract:
Quantum computers have been developed on the basis of classical computers, and for some large-scale parallel problems a quantum computer is simpler and faster than a traditional one. Today's physical qubit computers still have many limitations, while classical computers have many ways to simulate quantum computing, the most effective of which concern quantum supremacy and quantum algorithms. Ensuring computational efficiency, accuracy, and precision is of great significance to the study of large-scale quantum computing. Compared with other algorithms, the genetic algorithm has advantages, such as strong adaptability and global optimization ability, so it can be applied more widely. From the research in Chapter 4 we conclude that the variance of A2C is clearly smaller than that of PPO, and furthermore that A2C has better robustness.
4

Moosakhah, Fatemeh, and Amir Massoud Bidgoli. "Congestion Control in Computer Networks with a New Hybrid Intelligent Algorithm." INTERNATIONAL JOURNAL OF COMPUTERS & TECHNOLOGY 13, no. 8 (2014): 4688–706. http://dx.doi.org/10.24297/ijct.v13i8.7068.

Abstract:
The invention of computer networks made it possible to transfer data from one computer to another, but as the number of communicating computers grew while the bandwidth of the shared communication channel remained limited, a phenomenon called congestion emerged, in which some data packets are dropped and never reach their destination. Different algorithms have been proposed for overcoming congestion; they fall into two general groups: (1) flow-based algorithms and (2) class-based algorithms. In the present study, using a class-based algorithm whose control is optimized by fuzzy logic and the new Cuckoo algorithm, we increased the number of packets that reach their destination and considerably reduced the number of packets dropped during congestion. Simulation results indicate a great improvement in efficiency.
5

Ivancova, Olga, Vladimir Korenkov, Olga Tyatyushkina, Sergey Ulyanov, and Toshio Fukuda. "Quantum supremacy in end-to-end intelligent IT. PT. III. Quantum software engineering – quantum approximate optimization algorithm on small quantum processors." System Analysis in Science and Education, no. 2 (June 30, 2020): 115–76. http://dx.doi.org/10.37005/2071-9612-2020-2-115-176.

Abstract:
Principles and methodologies of quantum algorithmic gate-based design on small quantum computers are described, and the possibilities of simulating quantum algorithmic gates on classical computers are discussed. A new approach to the circuit-level design of quantum algorithm gates for fast, massively parallel quantum computing is presented. Software and hardware support for a sophisticated smart toolkit is also described: a supercomputing accelerator for simulating quantum algorithms on a small programmable quantum computer (one that can be programmed in software to implement arbitrary quantum algorithms by executing any sequence of universal quantum logic gates).
6

Naseem, Amir, M. A. Rehman, and Jihad Younis. "A New Root-Finding Algorithm for Solving Real-World Problems and Its Complex Dynamics via Computer Technology." Complexity 2021 (November 29, 2021): 1–10. http://dx.doi.org/10.1155/2021/6369466.

Abstract:
Nowadays, the use of computers is becoming very important in various fields of mathematics and the engineering sciences. Many complex computations can be carried out in seconds with the help of computer programs, especially in computational and applied mathematics. With different computer tools and languages, a variety of iterative algorithms can be run to solve different nonlinear problems. The most important property of an iterative algorithm is its efficiency, which depends on the convergence rate and the computational cost per iteration. Taking these facts into account, this article designs a new iterative algorithm that is derivative-free and performs well. We construct the algorithm by applying forward- and finite-difference schemes to Golbabai–Javidi's method, which yields an efficient, derivative-free algorithm with low computational cost per iteration. We also study the convergence criterion of the designed algorithm and prove its quartic-order convergence. To analyze it numerically, we consider nine different numerical test examples and solve them to demonstrate the algorithm's accuracy, validity, and applicability; the considered problems include real-life applications from civil and chemical engineering. The numerical results show that the newly designed algorithm performs better than other similar algorithms in the literature. For the graphical analysis, we consider complex polynomials of different degrees and draw polynomiographs of the designed quartic-order algorithm, comparing it with similar existing methods with the help of a computer program. The graphical results reveal the better convergence speed and other graphical characteristics of the designed algorithm over the comparable ones.
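
To make the idea of replacing derivatives with finite differences concrete, here is a minimal, hypothetical Python sketch of a derivative-free Newton-type step that approximates f'(x) by a forward difference. It illustrates the general technique only; it is not the quartic-order Golbabai–Javidi-based scheme the paper constructs.

def derivative_free_newton(f, x0, h=1e-6, tol=1e-12, max_iter=100):
    """Newton-type iteration with f'(x) replaced by a forward difference."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        # Forward-difference approximation of the derivative.
        dfx = (f(x + h) - fx) / h
        x = x - fx / dfx
    return x

# Example: solve x**3 - 2 = 0 (cube root of 2, roughly 1.259921).
print(derivative_free_newton(lambda x: x**3 - 2, x0=1.0))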
7

Figueiredo, Marco A., Clay S. Gloster, Mark Stephens, Corey A. Graves, and Mouna Nakkar. "Implementation of Multispectral Image Classification on a Remote Adaptive Computer." VLSI Design 10, no. 3 (2000): 307–19. http://dx.doi.org/10.1155/2000/31983.

Abstract:
As the demand for higher performance computers for the processing of remote sensing science algorithms increases, the need to investigate new computing paradigms is justified. Field Programmable Gate Arrays enable the implementation of algorithms at the hardware gate level, leading to orders of magnitude performance increase over microprocessor based systems. The automatic classification of spaceborne multispectral images is an example of a computation intensive application that can benefit from implementation on an FPGA-based custom computing machine (adaptive or reconfigurable computer). A probabilistic neural network is used here to classify pixels of a multispectral LANDSAT-2 image. The implementation described utilizes Java client/server application programs to access the adaptive computer from a remote site. Results verify that a remote hardware version of the algorithm (implemented on an adaptive computer) is significantly faster than a local software version of the same algorithm (implemented on a typical general-purpose computer).
8

Jiang, Dazhi, and Zhun Fan. "The Algorithm for Algorithms: An Evolutionary Algorithm Based on Automatic Designing of Genetic Operators." Mathematical Problems in Engineering 2015 (2015): 1–15. http://dx.doi.org/10.1155/2015/474805.

Abstract:
At present, a wide range of evolutionary algorithms is available to researchers and practitioners. Despite the great diversity of these algorithms, virtually all of them share one feature: they have been designed manually. A fundamental question is "are there any algorithms that can design evolutionary algorithms automatically?" A more complete form of the question is "can a computer construct an algorithm that will generate algorithms according to the requirements of a problem?" In this paper, a novel evolutionary algorithm based on the automatic design of genetic operators is presented to address these questions. The resulting algorithm not only explores solutions in the problem space, as most traditional evolutionary algorithms do, but also automatically generates genetic operators in the operator space. To verify the performance of the proposed algorithm, comprehensive experiments on 23 well-known benchmark optimization problems were conducted. The results show that the proposed algorithm can outperform a standard differential evolution algorithm in terms of convergence speed and solution accuracy, which indicates that algorithms designed automatically by computers can compete with algorithms designed by human beings.
9

Niu, Yiming, Wenyong Du, and Zhenying Tang. "Computer Network Security Defense Model." Journal of Physics: Conference Series 2146, no. 1 (2022): 012041. http://dx.doi.org/10.1088/1742-6596/2146/1/012041.

Abstract:
With the rapid development of the Internet industry, hundreds of millions of online resources are booming. In an information space with huge and complex resources, it is necessary to help users quickly find the resources they are interested in and save them time. At this stage, the content industry's application of recommendation models in the content-distribution process has become mainstream. Content recommendation models provide users with a highly efficient and satisfying reading experience and solve the problem of information redundancy to a certain extent. Personalized dynamic recommendation with knowledge tags is currently widely used in e-commerce. The purpose of this article is to study the optimization of a knowledge-tag personalized dynamic recommendation system based on artificial-intelligence algorithms. The article first proposes a hybrid recommendation algorithm based on a comparison of content-based filtering and collaborative filtering algorithms. It mainly covers the analysis and design of user browsing behavior, the design of a KNN-based item-similarity algorithm, and the implementation of the hybrid recommendation algorithm. Finally, algorithm simulation experiments verify the effectiveness of the proposed algorithm and show that recommendation accuracy is improved.
10

Handayani, Dwipa, and Abrar Hiswara. "KAMUS ISTILAH ILMU KOMPUTER DENGAN ALGORITMA BOYER MOORE BERBASIS WEB [Web-Based Dictionary of Computer Science Terms Using the Boyer Moore Algorithm]." Jurnal Informatika 19, no. 2 (2019): 90–97. http://dx.doi.org/10.30873/ji.v19i2.1519.

Abstract:
A dictionary is a reference book containing words and phrases, usually arranged in alphabetical order together with explanations of their meaning, usage, and translation, and it helps readers recognize new terms. The field of computer science has specific terms related to computers, so a dictionary of computer terms is needed; existing dictionaries are still conventional, and their use is ineffective and inefficient. The application was designed and built using an algorithm, that is, a systematically arranged sequence of logical steps for solving a problem. Search algorithms are developing day by day, and the Boyer Moore algorithm is one of the search algorithms considered to give the best results: it matches strings by moving from right to left. With this web-based dictionary, users are expected to obtain information quickly, without limitations of space and time. Keywords: Boyer Moore's Algorithm, Computer Science, Glossary of Terms, Web.
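
For context, the right-to-left matching the abstract mentions can be illustrated with the Boyer–Moore–Horspool simplification of the full Boyer–Moore algorithm. The Python sketch below is a textbook version using only the bad-character shift table, not the authors' implementation.

def horspool_search(text, pattern):
    """Boyer-Moore-Horspool: compare right to left, shift by bad-character rule."""
    m, n = len(pattern), len(text)
    if m == 0 or m > n:
        return -1
    # Shift table: distance from each character's last occurrence
    # (excluding the final position) to the end of the pattern.
    shift = {pattern[i]: m - 1 - i for i in range(m - 1)}
    i = 0
    while i <= n - m:
        j = m - 1
        while j >= 0 and text[i + j] == pattern[j]:
            j -= 1          # match characters from right to left
        if j < 0:
            return i        # full match found at position i
        i += shift.get(text[i + m - 1], m)
    return -1

print(horspool_search("kamus istilah ilmu komputer", "ilmu"))  # -> 14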
11

Pu, Chun Wang. "Research on the Design of Football Teaching System Based on Computer 3D Human Motion Recognition." Advanced Materials Research 791-793 (September 2013): 2013–17. http://dx.doi.org/10.4028/www.scientific.net/amr.791-793.2013.

Abstract:
As the performance of computer hardware and software has improved, 3D computer simulation has gradually been applied in all walks of life. 3D simulation must handle large amounts of data, so efficient algorithms must be chosen for data storage and computation to deal with redundant data. On this basis, the paper applies two algorithms, the BHIK algorithm and the CCD algorithm, to the simulation of computer 3D motion recognition and compares their efficiency. In the comparison, the computation speed and convergence of the BHIK algorithm are significantly better than those of the CCD algorithm: the execution time of BHIK is only one tenth that of CCD, and its convergence speed is four times that of CCD. We therefore choose the BHIK algorithm for the computer 3D simulation. Finally, a football teaching system is used as an example to verify the validity and reliability of the algorithm.
12

STEWART, IAIN A. "ON TWO APPROXIMATION ALGORITHMS FOR THE CLIQUE PROBLEM." International Journal of Foundations of Computer Science 04, no. 02 (1993): 117–33. http://dx.doi.org/10.1142/s0129054193000080.

Abstract:
We look at well-known polynomial-time approximation algorithms for the optimization problem MAX-CLIQUE (“find the size of the largest clique in a graph”) with regard to how easy it is to compute the actual cliques yielded by these approximation algorithms. We show that even for two “pretty useless” deterministic polynomial-time approximation algorithms, it is unlikely that the resulting clique can be computed efficiently in parallel. We also show that for each non-deterministic algorithm, it is unlikely that there is some deterministic polynomial-time algorithm that decides whether any given vertex appears in some clique yielded by that nondeterministic algorithm.
13

Xu, Zheng Guang, Chen Chen, and Xu Hong Liu. "An Efficient View-Point Invariant Detector and Descriptor." Advanced Materials Research 659 (January 2013): 143–48. http://dx.doi.org/10.4028/www.scientific.net/amr.659.143.

Abstract:
Many computer vision applications need keypoint correspondences between images taken under different viewing conditions. Generally speaking, traditional algorithms target either good invariance to affine transformation or speed of computation. Nowadays, the widespread use of computer vision algorithms on handheld devices such as mobile phones and on embedded devices with low memory and computation capability has set the goal of making descriptors faster to compute and more compact while remaining robust to affine transformation and noise. To address the whole pipeline, this paper covers keypoint detection, description, and matching. Binary descriptors are computed by comparing the intensities of pairs of sampling points in image patches, and they are matched by Hamming distance using an SSE 4.2-optimized popcount. The experimental results show that our algorithm is fast to compute, has low memory usage, and is invariant to viewpoint change, blur, brightness change, and JPEG compression.
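
The Hamming-distance matching mentioned above reduces to an XOR followed by a population count. A small Python sketch of the idea follows; the paper uses an SSE 4.2 hardware popcount, so this pure-Python version is only illustrative.

def hamming_distance(d1: int, d2: int) -> int:
    """Number of differing bits between two binary descriptors."""
    return bin(d1 ^ d2).count("1")   # XOR, then population count

def match(query, candidates):
    """Return the index of the candidate descriptor closest to `query`."""
    return min(range(len(candidates)),
               key=lambda i: hamming_distance(query, candidates[i]))

# 8-bit toy descriptors; real binary descriptors are typically 256-512 bits.
descriptors = [0b10110100, 0b10010101, 0b01101011]
print(match(0b10010111, descriptors))  # -> 1 (differs by a single bit)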
14

Porsani, Milton J., and Bjørn Ursin. "Direct multichannel predictive deconvolution." GEOPHYSICS 72, no. 2 (2007): H11–H27. http://dx.doi.org/10.1190/1.2432260.

Abstract:
The Levinson principle can generally be used to compute recursively the solution of linear equations; it can also be used to update the error terms directly. This allows single-channel deconvolution to be done directly on seismic data without computing or applying a digital filter. Multichannel predictive deconvolution is used for seismic multiple attenuation. In the standard procedure, the prediction-error filter matrices are computed with a Levinson recursive algorithm using a covariance matrix of the input data; the filtered output is the prediction errors, the nonpredictable part of the data. Starting from the classical Levinson recursion, we have derived new algorithms for direct recursive calculation of the prediction errors without computing the data covariance matrix or the prediction-error filters. One algorithm recursively generates the one-step forward and backward prediction errors and the L-step forward prediction error, computing only the filter matrices with the highest index. A numerically more stable algorithm uses reduced QR decomposition or singular-value decomposition (SVD) in a direct recursive computation of the prediction errors without computing any filter matrix. The new, stable predictive algorithms require more arithmetic operations in the computer, but the computer programs and data flow are much simpler than for standard predictive deconvolution.
15

García-Sánchez, Pedro A., Christopher O’Neill, and Gautam Webb. "The computation of factorization invariants for affine semigroups." Journal of Algebra and Its Applications 18, no. 01 (2019): 1950019. http://dx.doi.org/10.1142/s0219498819500191.

Abstract:
We present several new algorithms for computing factorization invariant values over affine semigroups. In particular, we give (i) the first known algorithm to compute the delta set of any affine semigroup, (ii) an improved method of computing the tame degree of an affine semigroup, and (iii) a dynamic algorithm to compute catenary degrees of affine semigroup elements. Our algorithms rely on theoretical results from combinatorial commutative algebra involving Gröbner bases, Hilbert bases, and other standard techniques. Implementation in the computer algebra system GAP is discussed.
16

Gong, Qianru. "Application Research of Data Encryption Algorithm in Computer Security Management." Wireless Communications and Mobile Computing 2022 (July 14, 2022): 1–7. http://dx.doi.org/10.1155/2022/1463724.

Abstract:
To advance information-security research across society, raise the safety factor of computer data communication, and strengthen computer security management, the author proposes a computer data-encryption strategy that combines the strong security of the 3DES encryption algorithm with the asymmetric-encryption advantages of the RSA algorithm. Through a detailed analysis of the DES and 3DES encryption algorithms, the RSA encryption algorithm is used to improve the single 3DES algorithm, consolidating the performance of 3DES in securing data communication, ensuring data integrity, and improving encryption performance. Experiments show that the proposed encryption algorithm improves security performance by a factor of 10, with a response efficiency within 1 ms of other algorithms; compared with other algorithms, it has better encryption performance and is suitable for real computer data-communication scenarios. In conclusion, the proposed encryption algorithm achieves good results in both security performance and response efficiency; it is suitable for real computer data-security communication and can effectively improve computer security management.
17

PREVE, NIKOLAOS P., and EMMANUEL N. PROTONOTARIOS. "MONTE CARLO SIMULATION ON COMPUTATIONAL FINANCE FOR GRID COMPUTING." International Journal of Modeling, Simulation, and Scientific Computing 03, no. 03 (2012): 1250010. http://dx.doi.org/10.1142/s1793962312500109.

Abstract:
Monte Carlo methods are a class of computational algorithms that rely on repeated random sampling to compute their results and are often used to simulate complex systems. Because they rely on repeated computation of random or pseudo-random numbers, these methods are best suited to calculation by computer and tend to be used when it is infeasible or impossible to compute an exact result with a deterministic algorithm. In finance, Monte Carlo simulation is used to value companies and to evaluate economic investments and financial derivatives. Grid computing, on the other hand, applies the heterogeneous computing resources of many geographically dispersed computers in a network to solve a single problem that requires a great number of processing cycles or access to large amounts of data. In this paper, we develop a Monte Carlo simulation, run on a computing grid, that predicts future trends in stock prices through complex calculations.
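
As a toy illustration of the kind of simulation such a grid can distribute, the Python sketch below samples stock-price paths under geometric Brownian motion, a standard Monte Carlo model in finance. The parameters are made up, and this is not the authors' code.

import random, math

def simulate_price(s0, mu, sigma, days, seed=None):
    """One geometric-Brownian-motion price path with daily steps."""
    rng = random.Random(seed)
    dt = 1.0 / 252.0                      # one trading day in years
    s = s0
    for _ in range(days):
        z = rng.gauss(0.0, 1.0)           # standard normal increment
        s *= math.exp((mu - 0.5 * sigma**2) * dt + sigma * math.sqrt(dt) * z)
    return s

# Averaging many independent paths is the step a grid can parallelize.
paths = [simulate_price(100.0, mu=0.05, sigma=0.2, days=252, seed=i)
         for i in range(2_000)]
print(sum(paths) / len(paths))            # Monte Carlo estimate of E[S_T]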
18

Singh, Varun, Varun Sharma, and Vasu Bachchas. "Sudoku Solving Using Quantum Computer." International Journal for Research in Applied Science and Engineering Technology 11, no. 2 (2023): 622–29. http://dx.doi.org/10.22214/ijraset.2023.49094.

Abstract:
We use the abilities of quantum computing, such as superposition and entanglement, to solve Sudoku. In recent years, quantum computers have shown promise as a new technology for solving complex problems in various fields, including optimization and cryptography. In this paper, we investigate the potential of quantum computers for solving Sudoku puzzles. We present a quantum algorithm for solving Sudoku puzzles and compare its performance to classical algorithms. Our results show that the quantum algorithm outperforms classical algorithms in both speed and accuracy and provides a new tool for solving Sudoku puzzles efficiently. Additionally, we discuss the implications of our results for the development of quantum algorithms for other combinatorial problems.
19

Rejer, Izabela. "Genetic Algorithms for Feature Selection for Brain–Computer Interface." International Journal of Pattern Recognition and Artificial Intelligence 29, no. 05 (2015): 1559008. http://dx.doi.org/10.1142/s0218001415590089.

Abstract:
The crucial problem that has to be solved when designing an effective brain–computer interface (BCI) is: how to reduce the huge space of features extracted from raw electroencephalography (EEG) signals. One of the strategies for feature selection that is often applied by BCI researchers is based on genetic algorithms (GAs). The two types of GAs that are most commonly used in BCI research are the classic algorithm and the Culling algorithm. This paper presents both algorithms and their application for selecting features crucial for the correct classification of EEG signals recorded during imagery movements of the left and right hand. The results returned by both algorithms are compared to those returned by an algorithm with aggressive mutation and an algorithm with melting individuals, both of which have been proposed by the author of this paper. While the aggressive mutation algorithm has been published previously, the melting individuals algorithm is presented here for the first time.
20

Tian, Pengyi, Dinggen Xu, and Xiuyuan Zhang. "Computer-Based Electronic Engineering Technology." Journal of Physics: Conference Series 2146, no. 1 (2022): 012038. http://dx.doi.org/10.1088/1742-6596/2146/1/012038.

Abstract:
Most current image-fusion algorithms process the original image directly and neglect the analysis of the image's principal components, which greatly affects the fusion result. In this paper, principal component analysis is used to decompose the image into a low-rank matrix and a sparse matrix; compressed-sensing technology and the NSST transform algorithm are introduced to process the two kinds of matrices, and image fusion is achieved according to the corresponding fusion rules. Experimental results show that, compared with traditional algorithms, this algorithm yields greater mutual information, structural-information similarity, and average gradient.
21

Kotyk, Vladyslav, and Oksana Lashko. "Software Implementation of Gesture Recognition Algorithm Using Computer Vision." Advances in Cyber-Physical Systems 6, no. 1 (2021): 21–26. http://dx.doi.org/10.23939/acps2021.01.021.

Abstract:
This paper examines the main methods and principles of image formation and presents a sign-language recognition algorithm that uses computer vision to improve communication between people with hearing and speech impairments. The algorithm effectively recognizes gestures and displays the information as labels. A system comprising the main modules needed to implement the algorithm has been designed. The modules implement image perception, transformation, and processing, and create a neural network using artificial-intelligence tools to train a model that predicts the labels of input gestures. The aim of this work is to create a complete program implementing a real-time gesture recognition algorithm using computer vision and machine learning.
22

Khatri, Sumeet, Ryan LaRose, Alexander Poremba, Lukasz Cincio, Andrew T. Sornborger, and Patrick J. Coles. "Quantum-assisted quantum compiling." Quantum 3 (May 13, 2019): 140. http://dx.doi.org/10.22331/q-2019-05-13-140.

Abstract:
Compiling quantum algorithms for near-term quantum computers (accounting for connectivity and native gate alphabets) is a major challenge that has received significant attention both by industry and academia. Avoiding the exponential overhead of classical simulation of quantum dynamics will allow compilation of larger algorithms, and a strategy for this is to evaluate an algorithm's cost on a quantum computer. To this end, we propose a variational hybrid quantum-classical algorithm called quantum-assisted quantum compiling (QAQC). In QAQC, we use the overlap between a target unitary U and a trainable unitary V as the cost function to be evaluated on the quantum computer. More precisely, to ensure that QAQC scales well with problem size, our cost involves not only the global overlap Tr(V†U) but also the local overlaps with respect to individual qubits. We introduce novel short-depth quantum circuits to quantify the terms in our cost function, and we prove that our cost cannot be efficiently approximated with a classical algorithm under reasonable complexity assumptions. We present both gradient-free and gradient-based approaches to minimizing this cost. As a demonstration of QAQC, we compile various one-qubit gates on IBM's and Rigetti's quantum computers into their respective native gate alphabets. Furthermore, we successfully simulate QAQC up to a problem size of 9 qubits, and these simulations highlight both the scalability of our cost function as well as the noise resilience of QAQC. Future applications of QAQC include algorithm depth compression, black-box compiling, noise mitigation, and benchmarking.
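
In LaTeX form, a Hilbert-Schmidt-style global cost built from this overlap, consistent with the abstract's description (though the paper's exact normalization and local-overlap terms should be taken from the source itself), reads:

C_{\mathrm{HST}}(U,V) \;=\; 1 \;-\; \frac{1}{d^{2}}\,\bigl|\operatorname{Tr}(V^{\dagger}U)\bigr|^{2}, \qquad d = 2^{n}.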
23

Wang, Yang, Shi Jun Ji, and Li Jun Yang. "General Subdivision Inferred from Catmull-Clark Subdivision Algorithm." Materials Science Forum 532-533 (December 2006): 789–92. http://dx.doi.org/10.4028/www.scientific.net/msf.532-533.789.

Abstract:
Subdivision algorithms have recently emerged as a powerful and useful technique for modeling free-form surfaces. However, existing subdivision algorithms have disadvantages: they cannot meet the demands of wide application in surface modeling and do not yet form a general theory. In this paper, a general subdivision algorithm is presented. It is a generalization of the classical Catmull–Clark subdivision algorithm and can reproduce existing subdivision algorithms by selecting suitable vertical and horizontal weights. The algorithm is well suited to preserving shape features such as creases, corners, and darts, in contrast to existing subdivision algorithms, and it also offers flexible weight selection, easy shape control, and high computation speed. The algorithm is therefore widely applicable to shape modeling in computer-aided geometric design, industrial prototype design, and reverse engineering.
24

BERRUYER, GILLES, and ANDREW HAMMERSLEY. "Parallelisation of an Interactive Fitting Algorithm." International Journal of Modern Physics C 02, no. 01 (1991): 254–62. http://dx.doi.org/10.1142/s0129183191000287.

Abstract:
The rationale for, and suitability of, implementing model-fitting algorithms on vector and parallel computers are discussed. A particular maximum-likelihood-based algorithm for fitting two-dimensional "Gaussian" peaks was investigated in detail and adapted to a system of four transputers. Analysis shows the algorithm to be well suited to both vectorisation and parallelisation, a result applicable to other fitting methods. The transputer implementation gave a 5-10 times increase in performance compared to the host computer, allowing the model-fitting procedure to approach acceptable response times.
25

Drake, Virginia E., Christopher J. Rizzi, Jewel D. Greywoode, Kavita T. Vakharia, and Kalpesh T. Vakharia. "Midface Fracture Simulation and Repair: A Computer-Based Algorithm." Craniomaxillofacial Trauma & Reconstruction 12, no. 1 (2019): 14–19. http://dx.doi.org/10.1055/s-0037-1608696.

Abstract:
We introduce a novel computer-based method to digitally fixate midfacial fractures to facilitate more efficient intraoperative fixation. This article aims to describe a novel computer-based algorithm that can be utilized to model midface fracture reduction and fixation and to evaluate the algorithm's ability to produce images similar to true postoperative images. This is a retrospective review combined with cross-sectional survey from January 1, 2010, to December 31, 2015. This study was performed at a single tertiary care, level-I trauma center. Ten patients presenting with acute midfacial traumatic fractures were evaluated. Thirty-five physicians were surveyed regarding the accuracy of the images obtained using the algorithm. A computer algorithm utilizing AquariusNet (TeraRecon, Inc., Foster City, CA) and Adobe Photoshop (Adobe Systems Inc., San Jose, CA) was developed to model midface fracture repair. Preoperative three-dimensional computed tomographic (CT) images were processed using the algorithm. Fractures were virtually reduced and fixated to generate a virtual postoperative image. A survey comparing the virtual postoperative and the actual postoperative images was produced. A Likert-type scale rating system of 0 to 10 (0 being completely different and 10 being identical) was utilized. Survey participants evaluated the similarity of fracture reduction and fixation plate appearance. The algorithm's capacity for future clinical utility was also assessed. Survey response results from 35 physicians were collected and analyzed to determine the accuracy of the algorithm. Ten patients were evaluated. Fracture types included zygomaticomaxillary complex, LeFort, and naso-orbito-ethmoidal complex. Thirty-four images were assessed by a group of 35 physicians from the fields of otolaryngology, oral and maxillofacial surgery, and radiology. Mean response for fracture reduction similarity was 7.8 ± 2.5 and fixation plate similarity was 8.3 ± 1.9. All respondents reported interest in the tool for clinical use. This computer-based algorithm is able to produce virtual images that resemble actual postoperative images. It has the ability to model midface fracture repair and hardware placement.
26

Bunin, Y. V., E. V. Vakulik, R. N. Mikhaylusov, V. V. Negoduyko, K. S. Smelyakov, and O. V. Yasinsky. "Estimation of lung standing size with the application of computer vision algorithms." Experimental and Clinical Medicine 89, no. 4 (2020): 87–94. http://dx.doi.org/10.35339/ekm.2020.89.04.13.

Abstract:
Evaluation of spiral computed tomography data is important for improving the diagnosis of gunshot wounds and developing further surgical tactics. The aim of this work is to improve the diagnosis of foreign bodies in the lungs by using computer vision algorithms. Image gradation correction, interval segmentation, threshold segmentation, a three-dimensional wave method, and the principal components method are used as the computer vision apparatus. The computer vision algorithms make it possible to determine the size of a foreign body in the lung with an error of 6.8 to 7.2%, which is important for in-depth diagnosis and the development of further surgical tactics. Computer vision techniques increase the detail with which foreign bodies in the lungs can be seen and hold significant promise for in-depth processing of spiral computed tomography data. Keywords: computer vision, spiral computed tomography, lungs, foreign bodies.
27

Sharma, Satender, Usha Chauhan, Ruqaiya Khanam, and Krishna Kant Singh. "Digital Watermarking using Grasshopper Optimization Algorithm." Open Computer Science 11, no. 1 (2021): 330–36. http://dx.doi.org/10.1515/comp-2019-0023.

Abstract:
Advances in computer science and technology have led to serious concerns about the piracy and copyright of digital content. Digital watermarking is widely used for copyright protection and similar applications. In this paper, a digital watermarking technique based on the Discrete Cosine Transform (DCT), the Discrete Wavelet Transform (DWT), and the Grasshopper Optimization Algorithm (GOA) is proposed. The method computes the DWT of the cover image to obtain its sub-components, and a sub-component is converted to the frequency domain using the DCT. The challenge is to find the optimal scaling factor for watermarking. The authors design a GOA-based technique that finds the optimized scaling factor and the coefficients for embedding the watermark; GOA makes the watermark undetectable and invisible in the cover image. The watermark image is embedded in the cover image using these coefficients, and the watermark is extracted using the inverse DCT and DWT. The proposed method is compared with other state-of-the-art methods, with effectiveness measured by Peak Signal-to-Noise Ratio (PSNR), Normalized Cross-Correlation (NCC), and Image Fidelity (IF). The proposed method outperforms the other methods and can be used effectively for practical digital watermarking.
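
The scaling factor the abstract refers to typically enters through an additive embedding rule on transform coefficients. Below is a minimal numpy sketch of that rule, with a made-up scaling factor in place of the GOA-optimized one and the DWT/DCT steps abstracted into a plain coefficient array; it illustrates the embedding idea only, not the paper's method.

import numpy as np

def embed(cover_coeffs, watermark, alpha=0.05):
    """Additive embedding: C_w = C + alpha * W on transform coefficients."""
    return cover_coeffs + alpha * watermark

def extract(watermarked_coeffs, cover_coeffs, alpha=0.05):
    """Inverse of the embedding rule (non-blind extraction)."""
    return (watermarked_coeffs - cover_coeffs) / alpha

rng = np.random.default_rng(0)
cover = rng.normal(size=(8, 8))        # stand-in for DWT/DCT coefficients
mark = rng.integers(0, 2, size=(8, 8)).astype(float)
recovered = extract(embed(cover, mark), cover)
print(np.allclose(recovered, mark))    # -> True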
28

Lian, Jian, Yan Zhang, and Cheng Jiang Li. "An Efficient K-Shortest Paths Based Routing Algorithm." Advanced Materials Research 532-533 (June 2012): 1775–79. http://dx.doi.org/10.4028/www.scientific.net/amr.532-533.1775.

Abstract:
We present an efficient K-shortest-paths routing algorithm for computer networks. The algorithm is based on enhancements to currently used link-state routing protocols such as OSPF and IS-IS, which focus only on finding the single shortest route using Dijkstra's algorithm. The desired effect is achieved through a K-shortest-paths algorithm, an approach that has been applied successfully in fields such as traffic engineering. The correctness of the algorithm is discussed, along with a comparison with OSPF.
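
The Dijkstra computation that OSPF-style link-state protocols build on is easy to sketch in Python; the version below is a generic textbook implementation, not the paper's K-shortest-paths extension, which would derive additional candidate paths by deviating from this base shortest path.

import heapq

def dijkstra(graph, source):
    """graph: {node: [(neighbor, cost), ...]}. Returns shortest distances."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                      # stale heap entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

net = {"A": [("B", 1), ("C", 4)], "B": [("C", 2)], "C": []}
print(dijkstra(net, "A"))  # -> {'A': 0, 'B': 1, 'C': 3}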
29

AKL, SELIM G., and Stefan D. Bruda. "PARALLEL REAL-TIME OPTIMIZATION: BEYOND SPEEDUP." Parallel Processing Letters 09, no. 04 (1999): 499–509. http://dx.doi.org/10.1142/s0129626499000463.

Abstract:
Traditionally, interest in parallel computation centered around the speedup provided by parallel algorithms over their sequential counterparts. In this paper, we ask a different type of question: Can parallel computers, due to their speed, do more than simply speed up the solution to a problem? We show that for real-time optimization problems, a parallel computer can obtain a solution that is better than that obtained by a sequential one. Specifically, a sequential and a parallel algorithm are exhibited for the problem of computing the best-possible approximation to the minimum-weight spanning tree of a connected, undirected and weighted graph whose vertices and edges are not all available at the outset, but instead arrive in real time. While the parallel algorithm succeeds in computing the exact minimum-weight spanning tree, the sequential algorithm can only manage to obtain an approximate solution. In the worst case, the ratio of the weight of the solution obtained sequentially to that of the solution computed in parallel can be arbitrarily large.
30

Vorobeichikova, O. V. "APPLICATION OF COMPUTER TECHNOLOGIES IN TEACHING OF MEDICAL STUDENTS." Bulletin of Siberian Medicine 13, no. 4 (2014): 27–31. http://dx.doi.org/10.20538/1682-0363-2014-4-27-31.

Abstract:
The subject of this research is situational tasks, viewed from the standpoint of the algorithms used to solve them and of applying computer technologies to implement such algorithms. First, the concept of a situational task and the possibility of using such tasks for training medical students are considered. Existing situational clinical tasks are analyzed, and a classification of solution algorithms is given. The possibility of applying computer technologies to implement such algorithms is then considered. Among the existing solution algorithms, one is singled out in which the same algorithm can be applied to solve all tasks of one class. The technology for constructing such an algorithm is presented, and a software system that implements this algorithm for solving situational tasks is described.
31

SOBHY, MOHAMED I., and ALAA-EL-DIN SHEHATA. "SECURE COMPUTER COMMUNICATION USING CHAOTIC ALGORITHMS." International Journal of Bifurcation and Chaos 10, no. 12 (2000): 2831–39. http://dx.doi.org/10.1142/s021812740000181x.

Abstract:
In this paper, the application of chaotic algorithms to sending computer messages is described. Communication is achieved through email, though other transmission media can also be used. The algorithm has a degree of security many orders of magnitude higher than systems based on physical electronic circuitry. Text, image, and recorded voice messages can all be transmitted. The algorithm can be used for computer communication and for secure databases.
32

Sen, S. K., Hongwei Du, and D. W. Fausett. "A center of a polytope: An expository review and a parallel implementation." International Journal of Mathematics and Mathematical Sciences 16, no. 2 (1993): 209–24. http://dx.doi.org/10.1155/s0161171293000262.

Abstract:
The solution space of the rectangular linear system Ax = b, subject to x ≥ 0, is called a polytope. An attempt is made to provide a deeper geometric insight, with numerical examples, into the condensed paper by Lord et al. [1], which presents an algorithm to compute a center of a polytope. The algorithm is readily adapted for either sequential or parallel computer implementation. The computed center provides an initial feasible solution (interior point) of a linear programming problem.
33

Wei, Feng. "Research on Knight Covering Based on Breadth First Search Algorithm." Applied Mechanics and Materials 686 (October 2014): 377–80. http://dx.doi.org/10.4028/www.scientific.net/amm.686.377.

Abstract:
This paper introduces the general structure of a search algorithm through the knight problem. Based on the characteristics of the problem, we discuss the DFS (depth-first search) and BFS (breadth-first search) algorithms in detail and combine the two to solve the knight-coverage problem. The article is a useful reference for mixed scenarios that require several search algorithms. Algorithms are the core of computer programming and modeling: an algorithm describes in detail, step by step, how a computer turns its input into the required output. Whatever its complexity, an algorithm must be accurate, must consist of concrete, practical steps executed in the correct order, must be fast and effective, and must contain no infinite loop. The following analyzes and studies BFS, taking knight coverage as the example.
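
As a concrete illustration of the BFS component, here is a standard Python sketch (not the paper's code) that finds the minimum number of knight moves between two squares on an 8x8 board.

from collections import deque

KNIGHT_MOVES = [(1, 2), (2, 1), (2, -1), (1, -2),
                (-1, -2), (-2, -1), (-2, 1), (-1, 2)]

def knight_distance(start, goal, size=8):
    """Minimum knight moves from start to goal via breadth-first search."""
    queue = deque([(start, 0)])
    seen = {start}
    while queue:
        (x, y), moves = queue.popleft()
        if (x, y) == goal:
            return moves
        for dx, dy in KNIGHT_MOVES:
            nxt = (x + dx, y + dy)
            if 0 <= nxt[0] < size and 0 <= nxt[1] < size and nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, moves + 1))
    return -1  # unreachable (cannot happen on a full 8x8 board)

print(knight_distance((0, 0), (7, 7)))  # -> 6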
34

Bi, Bo, Muhammad Kamran Jamil, Khawaja Muhammad Fahd, Tian-Le Sun, Imran Ahmad, and Lei Ding. "Algorithms for Computing Wiener Indices of Acyclic and Unicyclic Graphs." Complexity 2021 (May 3, 2021): 1–6. http://dx.doi.org/10.1155/2021/6663306.

Abstract:
Let G = (V(G), E(G)) be a molecular graph, where V(G) and E(G) are the sets of vertices (atoms) and edges (bonds). A topological index of a molecular graph is a numerical quantity that helps to predict the chemical/physical properties of the molecule. The Wiener, Wiener polarity, and terminal Wiener indices are distance-based topological indices. In this paper, we describe a linear-time algorithm (LTA) that computes the Wiener index for acyclic graphs and extend this algorithm to unicyclic graphs. The same algorithms are modified to compute the terminal Wiener index and the Wiener polarity index. All these algorithms compute the indices in time O(n).
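
The classic linear-time idea for trees conveys the flavor of such an algorithm: each edge e splits the tree into parts of sizes s and n - s and contributes s * (n - s) to the sum of all pairwise distances. The Python sketch below is this standard textbook construction, not necessarily the paper's exact LTA.

def wiener_index_tree(n, edges):
    """Wiener index of a tree in O(n): sum over edges of s * (n - s),
    where s is the size of the subtree below the edge."""
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    size = [1] * n
    order, parent = [], [-1] * n
    visited = [False] * n
    stack = [0]
    while stack:                      # iterative DFS to get a processing order
        u = stack.pop()
        visited[u] = True
        order.append(u)
        for v in adj[u]:
            if not visited[v]:
                parent[v] = u
                stack.append(v)
    w = 0
    for u in reversed(order):         # accumulate subtree sizes bottom-up
        if parent[u] >= 0:
            size[parent[u]] += size[u]
            w += size[u] * (n - size[u])
    return w

# Path on 3 vertices: distances 1 + 1 + 2 = 4.
print(wiener_index_tree(3, [(0, 1), (1, 2)]))  # -> 4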
35

Chen, Yinchun. "A hidden Markov optimization model for processing and recognition of English speech feature signals." Journal of Intelligent Systems 31, no. 1 (2022): 716–25. http://dx.doi.org/10.1515/jisys-2022-0057.

Abstract:
Speech recognition plays an important role in human-computer interaction: the higher the accuracy and efficiency of speech recognition, the greater the improvement in interaction performance. This article briefly introduces a hidden Markov model (HMM)-based English speech recognition algorithm and combines it with a back-propagation neural network (BPNN) to further improve recognition accuracy and reduce recognition time. The BPNN-combined HMM algorithm was then simulated and compared with the plain HMM and BPNN algorithms. The results showed that increasing the number of test samples increased the word error rate and recognition time of all three speech recognition algorithms, with the BPNN-combined HMM having the lowest word error rate and recognition time. In conclusion, the BPNN-combined HMM can effectively recognize English speech, which provides a useful reference for intelligent computer recognition of English speech.
36

He, Yifeng, Nan Ye, and Rui Zhang. "Analysis of Data Encryption Algorithms for Telecommunication Network-Computer Network Communication Security." Wireless Communications and Mobile Computing 2021 (November 13, 2021): 1–19. http://dx.doi.org/10.1155/2021/2295130.

Abstract:
Information technology is developing extremely rapidly, and with the widespread use of the Internet, the security of network communications has become an important issue. The purpose of this article is to address problems in today's network-security data-encryption algorithms. Starting from data-encryption algorithms for computer network communication security, we discuss the effects of several different encryption methods on improving network security. The research results show that applying a link encryption algorithm can increase the security index by 25%, a node encryption algorithm by 35%, and an end-to-end encryption algorithm by 40%. The RSA and DES algorithms are two very representative algorithms, representing different encryption systems. From the perspective of the network data link, there are three encryption approaches: link encryption, node encryption, and end-to-end encryption.
37

Gang, Jin. "Dynamic Monitoring of Football Training Based on Optimization of Computer Intelligent Algorithm." Computational Intelligence and Neuroscience 2022 (February 28, 2022): 1–8. http://dx.doi.org/10.1155/2022/2199166.

Abstract:
With the development of computer science and technology, computer intelligent algorithms are used ever more widely across industries. Because each formula in a computer intelligent algorithm has systematic logic and a single purpose, this paper expounds in detail a dynamic algorithm for football training optimized by a computer intelligent algorithm. A monitoring system using the algorithm can dynamically observe people or objects and analyze them systematically. The paper studies a dynamic football-training monitoring system based on the computer intelligent algorithm, together with the design and optimization of the intelligent dynamic monitoring system for football training. Finally, the overall composition of the system and the application of the optimized system to the analysis of sample data are studied.
38

Cao, Min. "Alphabet Computer Automatic Clarity Algorithm." Applied Mechanics and Materials 556-562 (May 2014): 3905–8. http://dx.doi.org/10.4028/www.scientific.net/amm.556-562.3905.

Abstract:
Computer recognition of alphabetic characters is widely applied in many areas, and making the characters legible is the first key step. This paper proposes an image-clarity algorithm for alphabetic characters. Key frames in a surveillance video are analyzed in the frequency domain to identify the parameters causing the illegibility, and the surveillance video is restored accordingly. The experimental results show that the algorithm restores the key frames of the characters in the video well and can be widely applied in vehicle licence-plate recognition.
39

Yeh, Wei-Chang, Edward Lin, and Chia-Ling Huang. "Predicting Spread Probability of Learning-Effect Computer Virus." Complexity 2021 (July 10, 2021): 1–17. http://dx.doi.org/10.1155/2021/6672630.

Abstract:
With the rapid development of network technology, computer viruses have developed at a fast pace, and the threat they pose persists because of the constant demand for computers and networks. When a computer virus infects a facility, it seeks to invade other facilities in the network by exploiting the convenience of the network protocol and the high connectivity of the network. There is therefore an increasing need to calculate accurately the probability of computer-virus-infected areas so that corresponding strategies can be developed, for example, interrupting in time the connections between uninfected and infected computers based on the likely infected areas. The spread of a computer virus forms a scale-free network whose node degrees follow a power law. A novel algorithm based on the binary-addition-tree algorithm (BAT) is proposed to predict the spread of computer viruses effectively. The proposed BAT uses probabilities derived from PageRank on the scale-free network together with state vectors that incorporate both temporal and learning effects. The performance of the proposed algorithm was verified through numerous experiments.
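
The PageRank probabilities mentioned in the abstract can be computed by standard power iteration. The compact Python sketch below shows that generic step only; it is illustrative and is not the BAT algorithm itself.

def pagerank(adj, damping=0.85, iters=100):
    """Power iteration on a directed graph given as {node: [out-neighbors]}."""
    nodes = list(adj)
    n = len(nodes)
    rank = {u: 1.0 / n for u in nodes}
    for _ in range(iters):
        new = {u: (1.0 - damping) / n for u in nodes}
        for u in nodes:
            out = adj[u]
            if not out:                      # dangling node: spread evenly
                for v in nodes:
                    new[v] += damping * rank[u] / n
            else:
                for v in out:
                    new[v] += damping * rank[u] / len(out)
        rank = new
    return rank

print(pagerank({"a": ["b"], "b": ["a", "c"], "c": ["a"]}))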
40

Koltracht, Israel, and Peter Lancaster. "Threshold algorithms for the prediction of reflection coefficients in a layered medium." GEOPHYSICS 53, no. 7 (1988): 908–19. http://dx.doi.org/10.1190/1.1442528.

Abstract:
An algorithm is presented for the solution of the inverse problem of reflection seismology in the presence of noise. The algorithm is based on a new representation of reflection coefficients in terms of the recorded seismogram. This representation allows use of matrix perturbation methods for the analysis of error magnification in the recursive reconstruction of a stratified acoustic medium. Our analysis indicates that one of the main reasons for uncontrollable noise magnification is the assignment of significant values to very small reflection coefficients, most of which reflect only noise in the data rather than reflection information. Our analysis also allows one to decide when a small computed reflection coefficient should be set to zero. The strategy of setting small reflection coefficients to zero, which is called thresholding, has a stabilizing effect on inverse scattering algorithms. The threshold algorithm also permits adaptive change of noise barriers, which can be used for more detailed exposure of certain parts of a seismic section at the expense of its less important parts. These properties of the threshold algorithm are demonstrated on both synthetic examples and sets of seismic survey data. The general version of the threshold algorithms allows efficient implementation on modern computer architectures (such as parallel or pipelined computers). In particular, the algorithm can be implemented with linear complexity on parallel processors. Simplified versions of the general algorithm for special surface conditions are also presented.
41

Ahmed, Asad, Osman Hasan, Falah Awwad, Nabil Bastaki, and Syed Rafay Hasan. "Formal Asymptotic Analysis of Online Scheduling Algorithms for Plug-In Electric Vehicles’ Charging." Energies 12, no. 1 (2018): 19. http://dx.doi.org/10.3390/en12010019.

Abstract:
A large-scale integration of plug-in electric vehicles (PEVs) into the power grid system has necessitated the design of online scheduling algorithms to accommodate the after-effects of this new type of load, i.e., PEVs, on the overall efficiency of the power system. In online settings, the low computational complexity of the corresponding scheduling algorithms is of paramount importance for the reliable, secure, and efficient operation of the grid system. Generally, the computational complexity of an algorithm is computed using asymptotic analysis. Traditionally, the analysis is performed using the paper-pencil proof method, which is error-prone and thus not suitable for analyzing the mission-critical online scheduling algorithms for PEV charging. To overcome these issues, this paper presents a formal asymptotic analysis approach for online scheduling algorithms for PEV charging using higher-order-logic theorem proving, which is a sound computer-based verification approach. For illustration purposes, we present the complexity analysis of two state-of-the-art online algorithms: the Online cooRdinated CHARging Decision (ORCHARD) algorithm and online Expected Load Flattening (ELF) algorithm.
42

Tavares, Anderson. "Algorithm Selection in Zero-Sum Computer Games." Proceedings of the AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment 13, no. 1 (2021): 301–3. http://dx.doi.org/10.1609/aiide.v13i1.12916.

Abstract:
Competitive computer games are challenging domains for artificial intelligence techniques. In such games, human players often resort to strategies, or game-playing policies, to guide their low-level actions. In this research, we propose a computational version of this behavior, by modeling game playing as an algorithm selection problem: agents must map game states to algorithms to maximize their performance. By reasoning over algorithms instead of low-level actions, we reduce the complexity of decision making in computer games. With further simplifications on the state space of a game, we were able to discuss game-theoretic concepts over aspects of real-time strategy games, as well as generating a game-playing agent that successfully learns how to select algorithms in AI tournaments. We plan to further extend the approach to handle incomplete-information settings, where we do not know the possible behaviors of the opponent.
43

Jitprasithsiri, Siriphan, Hosin Lee, Robert G. Sorcic, and Richard Johnston. "Development of Digital Image-Processing Algorithm to Compute Unified Crack Index for Salt Lake City." Transportation Research Record: Journal of the Transportation Research Board 1526, no. 1 (1996): 142–48. http://dx.doi.org/10.1177/0361198196152600118.

Abstract:
This paper presents the recent efforts in developing an image processing algorithm for computing a unified pavement crack index for Salt Lake City. The pavement surface images were collected using a digital camera mounted on a van. Each image covers a pavement area of 2.13 m (7 ft) × 1.52 m (5 ft), taken at every 30-m (100-ft) station. The digital images were then transferred onto a 1-gigabyte hard disk from a set of memory cards each of which can store 21 digital images. Approximately 1,500 images are then transferred from the hard disk to a compact disc. The image-processing algorithm, based on a variable thresholding technique, was developed on a personal computer to automatically process pavement images. The image is divided into 140 smaller tiles, each tile consisting of 40 × 40 pixels. To measure the amount of cracking, a variable threshold value is computed based on the average gray value of each tile. The program then automatically counts the number of cracked tiles and computes a unified crack index for each pavement image. The crack indexes computed from the image-processing algorithms are compared against the manual rating procedure in this paper. The image-processing algorithms were applied to process more than 450 surveyed miles of Salt Lake City street network.
APA, Harvard, Vancouver, ISO, and other styles
44

Li, Li. "Application of Data Image Encryption Technology in Computer Network Information Security." Mathematical Problems in Engineering 2022 (July 21, 2022): 1–7. http://dx.doi.org/10.1155/2022/8963756.

Full text
Abstract:
To address the practical problem that the security of computer network information cannot otherwise be guaranteed, which seriously degrades network performance, the author proposes a PKI-based public key data encryption system. The system is built on the RSA public key cryptographic algorithm and implements data encryption, digital signatures, and key distribution; to address RSA's slow speed, an improved RSA algorithm is proposed. With a reasonable choice of parameters and the use of the optimized (combined) algorithm, the improved RSA is about 1% to 2% more efficient than the traditional algorithm, improving the operational efficiency of RSA to a certain extent and thus achieving the goal of the improvement. This demonstrates that a public key data encryption system is of great significance for encrypting modern computer network information and maintaining a healthy, secure network environment.
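For context, a textbook (unpadded, insecure-in-practice) RSA round trip looks like the sketch below; the paper's optimized variant is not reproduced here, and the demo primes are assumptions:

```python
from math import gcd

def rsa_keygen(p=61, q=53, e=17):
    """Tiny textbook RSA key generation (demo-sized primes only)."""
    n = p * q
    phi = (p - 1) * (q - 1)
    assert gcd(e, phi) == 1
    d = pow(e, -1, phi)          # modular inverse of e (Python 3.8+)
    return (e, n), (d, n)

public, private = rsa_keygen()
m = 42
c = pow(m, *public)              # encrypt: c = m^e mod n
assert pow(c, *private) == m     # decrypt: m = c^d mod n
```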
APA, Harvard, Vancouver, ISO, and other styles
45

Gong, Fanghai. "Application of Artificial Intelligence Computer Intelligent Heuristic Search Algorithm." Advances in Multimedia 2022 (September 24, 2022): 1–12. http://dx.doi.org/10.1155/2022/5178515.

Full text
Abstract:
To transform three-dimensional path planning into a two-dimensional planning problem and greatly reduce search time, an intelligent heuristic search algorithm based on artificial intelligence is proposed. Heuristic search is analyzed and introduced, and the A* algorithm is chosen. A two-dimensional environment model for picking-robot path planning is investigated, and the planning space is built with the raster (grid) method. Then, considering the whole-day operation time, the day is divided into several periods, and the heuristic search algorithm finds the most reasonable operation interval for each period, providing a reliable reference for urban rail transit operators when deciding how to schedule trains. The experimental results show that the improved A* algorithm significantly improves the picking robot's path and makes the planned path smoother, confirming the feasibility and superiority of the improved algorithm. For the urban rail transit scheduling problem, the heuristic search algorithm reached an optimal value of 6.83353635e-01 (average 6.83551939e-01) after 114 iterations; particle swarm optimization reached 6.83650785e-01 (average 6.83745935e-01) after 231 iterations; and the genetic algorithm reached 6.83921100e-01 (average 6.84410765e-01) after 789 iterations. The comparison shows that the heuristic search algorithm clearly outperforms the other two optimization algorithms in both optimal value and number of iterations. The results indicate that heuristic search is a fast, accurate, and reliable optimization method for the accurate scheduling of urban rail transit departure intervals, demonstrating that the intelligent heuristic search algorithm can realize path planning effectively.
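A minimal A* grid search of the kind the abstract refers to (a generic sketch on a raster map, not the paper's improved variant; the 4-connected moves and Manhattan heuristic are assumed choices):

```python
import heapq

def a_star(grid, start, goal):
    """A* on a 4-connected raster grid; grid[r][c] == 1 marks an obstacle."""
    def h(p):                                   # Manhattan-distance heuristic
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    frontier = [(h(start), 0, start, [start])]  # (f = g + h, g, node, path)
    seen = set()
    while frontier:
        _, cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) and not grid[nr][nc]:
                heapq.heappush(frontier, (cost + 1 + h((nr, nc)),
                                          cost + 1, (nr, nc), path + [(nr, nc)]))
    return None                                 # no path found
```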
APA, Harvard, Vancouver, ISO, and other styles
46

Xiao, Ligang, Daowen Qiu, Le Luo, and Paulo Mateus. "Distributed Shor's algorithm." Quantum Information and Computation 23, no. 1&2 (2023): 27–44. http://dx.doi.org/10.26421/qic23.1-2-3.

Full text
Abstract:
Shor's algorithm is one of the most important quantum algorithms, proposed by Peter Shor [Proceedings of the 35th Annual Symposium on Foundations of Computer Science, 1994, pp. 124--134]. It can factor a large integer with a certain probability at a cost polynomial in the length of the input integer. The key step of Shor's algorithm is the order-finding algorithm, whose quantum part estimates $s/r$, where $r$ is the "order" and $s$ is some natural number less than $r$. Shor's algorithm requires many qubits and a deep circuit, which is unaffordable for current physical devices. In this paper, to reduce the number of qubits required and the circuit depth, we propose a quantum-classical hybrid distributed order-finding algorithm for Shor's algorithm, which combines the advantages of both quantum and classical processing. In our distributed order-finding algorithm, two quantum computers capable of quantum teleportation separately estimate partial bits of $s/r$, and the measurement results are processed by a classical algorithm to ensure the accuracy of the results. Compared with the traditional Shor's algorithm that uses multiple control qubits, our algorithm saves nearly $L/2$ qubits when factoring an $L$-bit integer and reduces the circuit depth of each computer.
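The classical post-processing step in order finding — recovering the order $r$ from a measured approximation of $s/r$ — is commonly done with continued fractions; the sketch below shows that standard step (not the paper's distributed protocol), with the register size and example values assumed:

```python
from fractions import Fraction

def recover_order(measurement, n_bits, max_r):
    """Recover a candidate order r from a phase-estimation readout.

    The measured integer divided by 2**n_bits approximates s/r; the
    best rational approximation with denominator <= max_r yields r.
    """
    approx = Fraction(measurement, 2 ** n_bits).limit_denominator(max_r)
    return approx.denominator        # candidate r (must still be verified)

# Example: a 10-bit register reading 683 suggests phase 683/1024 ~ 2/3,
# so the candidate order is r = 3 (then check a**r % N == 1 classically).
assert recover_order(683, 10, 15) == 3
```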
APA, Harvard, Vancouver, ISO, and other styles
47

Liu, Bai Ming, and Wei Wei. "CRC Algorithm in Computer Network Communication." Applied Mechanics and Materials 347-350 (August 2013): 1975–78. http://dx.doi.org/10.4028/www.scientific.net/amm.347-350.1975.

Full text
Abstract:
This paper studies one of the error-checking controls used in computer network communications: the Cyclic Redundancy Check (CRC). It introduces the principle of CRC, the CRC algorithms and their analysis, the CRC program, and the functions and features of CRC. The CRC algorithms require no additional hardware circuit design; they improve the speed of computer network communications and check messages correctly.
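A bitwise CRC computation of the kind described can be sketched in a few lines; the CRC-8 width and the 0x07 generator polynomial below are assumed examples, since the article does not fix a polynomial here:

```python
def crc8(data: bytes, poly: int = 0x07, init: int = 0x00) -> int:
    """Bitwise CRC-8: divide the message by the generator polynomial
    (mod 2) and return the remainder as the check value."""
    crc = init
    for byte in data:
        crc ^= byte
        for _ in range(8):
            if crc & 0x80:                    # high bit set: subtract (XOR) poly
                crc = ((crc << 1) ^ poly) & 0xFF
            else:
                crc = (crc << 1) & 0xFF
    return crc

# The receiver recomputes the CRC; a mismatch signals a transmission error.
assert crc8(b"hello") == crc8(b"hello")
assert crc8(b"hello") != crc8(b"hellp")
```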
APA, Harvard, Vancouver, ISO, and other styles
48

CHLEBUS, BOGDAN S. "TWO SELECTION ALGORITHMS ON A MESH-CONNECTED COMPUTER." Parallel Processing Letters 02, no. 04 (1992): 341–46. http://dx.doi.org/10.1142/s0129626492000489.

Full text
Abstract:
Two deterministic selection algorithms on an n × n mesh-connected processor array are developed. The model of computation is restricted in the following sense: at every step each processor buffers exactly one of the original keys, and every one of the original keys is buffered by some processor. The first algorithm operates in time 2.5n + o(n); it is a general selection algorithm, that is, its complexity bound does not depend on the rank of the element searched for. The second algorithm has a time bound that depends on the rank of the item sought: the bound is [Formula: see text], where the rank is x²n². This algorithm is superior to the first for approximately 10% of the smallest and 10% of the largest keys.
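For reference, the sequential problem these mesh algorithms parallelize — returning the element of a given rank — is classic quickselect; the sketch below shows that baseline, not the paper's mesh-connected algorithms:

```python
import random

def quickselect(items, rank):
    """Return the element with the given 0-based rank (rank 0 = minimum)."""
    pivot = random.choice(items)
    lower = [x for x in items if x < pivot]
    equal = [x for x in items if x == pivot]
    if rank < len(lower):
        return quickselect(lower, rank)          # sought key is below the pivot
    if rank < len(lower) + len(equal):
        return pivot                             # pivot itself has the rank
    upper = [x for x in items if x > pivot]
    return quickselect(upper, rank - len(lower) - len(equal))

assert quickselect(list(range(100)), 42) == 42
```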
APA, Harvard, Vancouver, ISO, and other styles
49

Sokolov, Sergey, Andrey Boguslavsky, and Sergei Romanenko. "Implementation of the visual data processing algorithms for onboard computing units." Robotics and Technical Cybernetics 9, no. 2 (2021): 106–11. http://dx.doi.org/10.31776/rtcj.9204.

Full text
Abstract:
Based on a short analysis of current hardware and software practice for autonomous mobile robots, the role of computer vision systems in the structure of such robots is considered. Several onboard-computer configurations and implementations of algorithms for visual data capture and processing are described. In the configuration space, the «algorithms–hardware» plane is considered, and a real-time vision system framework is used for software design. Experiments are described with a computing module based on the Intel/Altera Cyclone IV FPGA (implementing the histogram computation algorithm and Canny's algorithm) and with a computing module based on a Xilinx FPGA (implementing sparse and dense optical flow algorithms). An implementation of a graph-based segmentation algorithm for grayscale images is also considered and analyzed, and the results of the first experiments are presented.
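On a desktop host, the two operations mentioned — histogram computation and Canny edge detection — reduce to a few library calls (shown with OpenCV purely for illustration; the paper implements them on FPGAs, and the file path and thresholds below are assumed values):

```python
import cv2

# Load a grayscale frame; "frame.png" is a placeholder path.
img = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)

# 256-bin intensity histogram (the operation the FPGA histogram module computes).
hist = cv2.calcHist([img], [0], None, [256], [0, 256])

# Canny edge map; the 100/200 hysteresis thresholds are assumed values.
edges = cv2.Canny(img, 100, 200)
```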
APA, Harvard, Vancouver, ISO, and other styles
50

TOUYAMA, TAKAYOSHI, and SUSUMU HORIGUCHI. "PERFORMANCE EVALUATION OF PRACTICAL PARALLEL COMPUTER MODEL LogPQ." International Journal of Foundations of Computer Science 12, no. 03 (2001): 325–40. http://dx.doi.org/10.1142/s0129054101000515.

Full text
Abstract:
Today's supercomputers are being replaced by massively parallel computers consisting of large numbers of processing elements in order to satisfy the ever-increasing demand for computing power, and practical parallel computation models are expected to support the development of efficient parallel algorithms on such machines. We have therefore presented a practical parallel computation model, LogPQ, which takes communication queues into account by extending the LogP model. This paper addresses the performance of a parallel matrix multiplication algorithm under the LogPQ and LogP models. The algorithm is implemented on a Cray T3E, and its parallel performance is compared with that on the older CM-5. The comparison shows that the communication network of the T3E has better buffering behavior than the CM-5, so no extra buffering needs to be provided on the T3E, although a small effect remains for both send and receive buffering. On the other hand, the effect of message size remains, which shows that the overhead and gap must be modeled as proportional to the message size.
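The LogP cost model underlying LogPQ estimates point-to-point communication time from four machine parameters; below is a minimal sketch of the standard per-message accounting (the LogPQ queue extensions are not modeled here, and the example numbers are arbitrary):

```python
def logp_send_time(k, L, o, g):
    """Time for one processor to send k fixed-size messages under LogP.

    L: network latency, o: per-message send/receive overhead,
    g: minimum gap between consecutive message injections.
    """
    if k == 0:
        return 0
    # First message pays send overhead; later ones are gap-limited;
    # the last message still needs latency plus receive overhead.
    return o + (k - 1) * max(g, o) + L + o

# Example: 4 messages with L=6, o=2, g=4 cost 2 + 3*4 + 6 + 2 = 22 time units.
assert logp_send_time(4, L=6, o=2, g=4) == 22
```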
APA, Harvard, Vancouver, ISO, and other styles