Selection of scientific literature on the topic "Computer algorithms"

Cite a source in APA, MLA, Chicago, Harvard, and other citation styles

Select a type of source:

Consult the lists of current articles, books, dissertations, reports, and other scholarly sources on the topic "Computer algorithms".

Next to every work in the bibliography there is an "Add to bibliography" option. Use it, and the bibliographic reference for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scholarly publication as a PDF and read its online annotation, provided the relevant parameters are present in the metadata.

Journal articles on the topic "Computer algorithms"

1

Pinkan Indriani Daulay and Yahfizham Yahfizham. "Penerapan Algoritma Pemrograman dalam Pembelajaran Ilmu Komputer." Jurnal Arjuna : Publikasi Ilmu Pendidikan, Bahasa dan Matematika 1, no. 6 (November 7, 2023): 91–103. http://dx.doi.org/10.61132/arjuna.v1i6.297.

Annotation:
Algorithms are at the core of computer science and play an important role in computer programming. Programming aims to tell the computer to perform certain functions; a program is a set of instructions written in a language the computer can understand. A programming algorithm consists of detailed sequential steps aimed at solving a computer programming problem. A computer is an electronic device capable of receiving, processing, storing, and creating information; in general, it is a machine used for tasks such as data processing, calculation, storing information, and executing predefined programs. Computers comprise two parts, hardware and software, which work together to support various types of machines and applications. Some applications of algorithms in computer science learning are data compression algorithms, binary search algorithms, linear search algorithms, iteration algorithms, and hashing algorithms. Writing an algorithm does not depend on any particular programming language, so a great many algorithms are used in programming. The purpose of this scientific work is to find out how important the application of programming algorithms is in computer science learning; to achieve this goal, a literature-study research method was used.
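Two of the search algorithms named in the annotation are easy to make concrete. The following is a minimal illustrative sketch (not code from the paper): linear search scans every element, while binary search halves a sorted list at each step.

```python
def linear_search(items, target):
    """Scan every element in turn: O(n) comparisons."""
    for i, value in enumerate(items):
        if value == target:
            return i
    return -1

def binary_search(sorted_items, target):
    """Repeatedly halve a sorted list: O(log n) comparisons.
    Assumes the input is sorted in ascending order."""
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_items[mid] == target:
            return mid
        if sorted_items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1
```

Both return the index of the target or -1 if it is absent; binary search additionally requires sorted input, which is exactly the trade-off such teaching examples illustrate.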
2

Mitra, Manu. "Algorithms in Computer Systems." Frontiers of Supercomputing 2, no. 2 (May 28, 2019): 1–3. https://doi.org/10.31424/icses.fos.

Annotation:
The basic definition of an algorithm in mathematics is a step-by-step procedure to solve a problem. Algorithms are the most basic and important element in writing error-free programs. Fig. 1 illustrates the flow chart of a computer algorithm. One of the most essential things to remember is that there can be various algorithms for the same problem, but some algorithms are much better than others. Technically, algorithms and programs are not the same thing; they differ in their level of precision. An algorithm is often expressed in a loosely defined format called "pseudocode", which matches a programming language closely while leaving out specific details that can be added later. Pseudocode has no hard and fast rules about commands; it is halfway between an informal instruction and a specific program. There are plenty of algorithms already in existence and yet to be implemented, designed with various methods, techniques, and compositions of methods. This editorial paper gives brief insights into a few from the plethora of algorithms in computer systems; for instance, algorithms that can predict the growth of cities, algorithms that can create three-dimensional shapes, algorithms for customizing video-game difficulty using big data, algorithms in the smart watch, and finally, algorithms to detect fake users on social networks.
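The point that several algorithms can solve the same problem at very different cost is easy to demonstrate. In this illustrative sketch (not from the paper), both functions compute x to the power n, with the pseudocode each implements given as comments:

```python
def power_naive(x, n):
    # Pseudocode: repeat n times: result <- result * x
    result = 1
    for _ in range(n):
        result *= x
    return result  # O(n) multiplications

def power_fast(x, n):
    # Pseudocode: square-and-multiply over the binary digits of n
    result = 1
    while n > 0:
        if n & 1:            # current binary digit of n is 1
            result *= x
        x *= x               # square the base
        n >>= 1              # move to the next binary digit
    return result  # O(log n) multiplications
```

Both return the same value; only the number of multiplications differs, which is the precision gap between a problem statement and a particular algorithm for it.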
3

Xu, Zheng Guang, Chen Chen, and Xu Hong Liu. "An Efficient View-Point Invariant Detector and Descriptor." Advanced Materials Research 659 (January 2013): 143–48. http://dx.doi.org/10.4028/www.scientific.net/amr.659.143.

Annotation:
Many computer vision applications need keypoint correspondence between images taken under different viewing conditions. Generally speaking, traditional algorithms target either good invariance to affine transformation or speed of computation. Nowadays, the widespread use of computer vision algorithms on handheld devices such as mobile phones and on embedded devices with low memory and computation capability has set the goal of making descriptors faster to compute and more compact while remaining robust to affine transformation and noise. To address the whole process, this paper covers keypoint detection, description, and matching. Binary descriptors are computed by comparing the intensities of pairs of sampling points in image patches, and they are matched by Hamming distance using an SSE 4.2 optimized popcount. In the experimental results, we show that our algorithm is fast to compute, has lower memory usage, and is invariant to view-point change, blur, brightness change, and JPEG compression.
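The matching step described above, Hamming distance between binary descriptors computed via a population count, can be sketched in a few lines. The paper uses an SSE 4.2 hardware popcount; this Python version is only an illustration of the same operation:

```python
def hamming_distance(d1: int, d2: int) -> int:
    """Number of differing bits between two binary descriptors,
    i.e. the population count of their XOR."""
    return bin(d1 ^ d2).count("1")

def best_match(query: int, candidates: list) -> int:
    """Index of the candidate descriptor closest to `query`
    in Hamming distance (brute-force matching)."""
    return min(range(len(candidates)),
               key=lambda i: hamming_distance(query, candidates[i]))
```

Because the distance is a single XOR plus a bit count, matching binary descriptors is far cheaper than the floating-point distances used by classical descriptors, which is the efficiency argument the abstract makes.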
4

Mitra, Manu. "Algorithms and Machine Learning." American Research Journal of Electronics and Communication Engineering 1, no. 1 (October 10, 2019): 1–5. https://doi.org/10.5281/zenodo.3479371.

Annotation:
Machine learning is a model that learns patterns in data and then recognizes similar patterns in new data. For instance, to categorize children's books, instead of writing exact instructions for what constitutes a children's book, experts can give the computer hundreds of examples of children's books. The computer then finds the pattern in those books and uses it to recognize future books in that category. Machine learning is a subset of artificial intelligence that helps computers learn without being explicitly programmed with predefined instructions. It concentrates on the development of computer applications that can teach themselves to develop and change when exposed to new data. This learning ability, combined with the computer's ability to process massive amounts of data, allows machine learning to handle complex business conditions with efficiency and precision.
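The learn-from-examples idea in the annotation can be reduced to a toy sketch: represent each example by a feature vector, average the examples of each class into a centroid, and classify new items by the nearest centroid. The class names and features below are hypothetical, and this is a generic illustration, not the paper's method:

```python
def centroid(examples):
    """Average feature vector of a list of equally long examples."""
    n = len(examples)
    return [sum(e[i] for e in examples) / n for i in range(len(examples[0]))]

def train(labeled):
    """labeled: {class_name: [feature_vectors]} -> per-class centroids."""
    return {cls: centroid(exs) for cls, exs in labeled.items()}

def classify(model, x):
    """Assign x to the class whose centroid is nearest (squared Euclidean)."""
    def dist2(c):
        return sum((a - b) ** 2 for a, b in zip(x, c))
    return min(model, key=lambda cls: dist2(model[cls]))
```

The pattern is never written out by hand; it is extracted from the examples, which is exactly the contrast with "predefined instructions" drawn above.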
5

Jattana, Manpreet Singh. "Quantum annealer accelerates the variational quantum eigensolver in a triple-hybrid algorithm." Physica Scripta 99, no. 9 (August 16, 2024): 095117. http://dx.doi.org/10.1088/1402-4896/ad6aea.

Annotation:
Hybrid algorithms that combine quantum and classical resources have become commonplace in quantum computing. The variational quantum eigensolver (VQE) is routinely used to solve prototype problems. Currently, hybrid algorithms use no more than one kind of quantum computer connected to a classical computer. In this work, a novel triple-hybrid algorithm combines the effective use of a classical computer, a gate-based quantum computer, and a quantum annealer. The solution of a graph coloring problem found using a quantum annealer reduces the resources needed from a gate-based quantum computer to accelerate VQE by allowing simultaneous measurements within commuting groups of Pauli operators. We experimentally validate our algorithm by evaluating the ground state energy of H2 using different IBM Q devices and the D-Wave Advantage system, requiring only half the resources of standard VQE. Other, larger problems we consider exhibit even more significant VQE acceleration. Several examples of algorithms are provided to further motivate a new field of multi-hybrid algorithms that leverage different kinds of quantum computers to gain performance improvements.
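The measurement-reduction idea in the abstract, grouping Pauli operators that can be measured simultaneously, can be sketched with the common qubit-wise commutation test and a greedy packing. This is a simplified illustration only; in the paper the grouping is obtained by solving a graph coloring problem on the annealer:

```python
def qubitwise_commute(p, q):
    """Two Pauli strings commute qubit-wise if, at every position,
    the letters are equal or at least one of them is the identity 'I'."""
    return all(a == b or a == "I" or b == "I" for a, b in zip(p, q))

def greedy_groups(paulis):
    """Greedily pack Pauli strings into groups that can each be
    measured with a single circuit."""
    groups = []
    for p in paulis:
        for g in groups:
            if all(qubitwise_commute(p, q) for q in g):
                g.append(p)
                break
        else:
            groups.append([p])
    return groups
```

Each group needs only one measurement setting, so fewer groups means fewer circuit executions on the gate-based device, which is the resource saving the abstract reports.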
6

Ataeva, Gulsina Isroilovna, and Lola Dzhalolovna Yodgorova. "METHODS AND ALGORITHMS OF COMPUTER GRAPHICS." Scientific Reports of Bukhara State University 4, no. 1 (February 26, 2020): 43–47. http://dx.doi.org/10.52297/2181-1466/2020/4/1/3.

Annotation:
The article considers methods and algorithms of computer graphics: the transformation of graphic objects by means of translation, scaling, and rotation, and the main types of geometric models. The methods of computer graphics covered include converting graphic objects, representing (scanning) lines in raster form, selecting a window, removing hidden lines, projecting, and painting images.
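The three object transformations named above are conventionally expressed as 3x3 homogeneous matrices so that they can be composed uniformly. A minimal sketch (standard textbook formulation, not code from the article):

```python
import math

def translate(tx, ty):
    return [[1, 0, tx], [0, 1, ty], [0, 0, 1]]

def scale(sx, sy):
    return [[sx, 0, 0], [0, sy, 0], [0, 0, 1]]

def rotate(theta):
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def apply(m, point):
    """Apply a 3x3 homogeneous transform to a 2D point (x, y)."""
    x, y = point
    v = (x, y, 1)
    rx, ry, _ = (sum(row[i] * v[i] for i in range(3)) for row in m)
    return (rx, ry)
```

Using homogeneous coordinates lets translation, which is not linear in 2D, be combined with scaling and rotation by ordinary matrix multiplication.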
7

Cropper, Andrew. "The Automatic Computer Scientist." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 13 (June 26, 2023): 15434. http://dx.doi.org/10.1609/aaai.v37i13.26801.

Annotation:
Algorithms are ubiquitous: they track our sleep, help us find cheap flights, and even help us see black holes. However, designing novel algorithms is extremely difficult, and we do not have efficient algorithms for many fundamental problems. The goal of my research is to accelerate algorithm discovery by building an automatic computer scientist. To work towards this goal, my research focuses on inductive logic programming, a form of machine learning in which my collaborators and I have demonstrated major advances in automated algorithm discovery over the past five years. In this talk and paper, I survey these advances.
8

Moosakhah, Fatemeh, and Amir Massoud Bidgoli. "Congestion Control in Computer Networks with a New Hybrid Intelligent Algorithm." INTERNATIONAL JOURNAL OF COMPUTERS & TECHNOLOGY 13, no. 8 (August 23, 2014): 4688–706. http://dx.doi.org/10.24297/ijct.v13i8.7068.

Annotation:
The invention of computer networks made it possible to transfer data from one computer to another, but as the number of computers exchanging data grew while the bandwidth of the communication channel they share remained limited, a phenomenon called congestion arose, in which some data packets are dropped and never arrive at their destination. Different algorithms have been proposed for overcoming congestion. These are divided into two general groups: 1) flow-based algorithms and 2) class-based algorithms. In the present study, using a class-based algorithm whose control is optimized by fuzzy logic and the new Cuckoo algorithm, we increased the number of packets that reach their destination and considerably reduced the number of packets dropped during congestion. Simulation results indicate a great improvement in efficiency.
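The congestion phenomenon itself, packets dropped when offered load exceeds a link's buffer and service capacity, can be reproduced with a toy drop-tail queue. This is illustrative only; the paper's fuzzy/Cuckoo controller is not modeled here:

```python
def drop_tail(arrivals, buffer_size, service_rate):
    """Toy drop-tail queue. Each time step: enqueue the arriving packets
    up to buffer_size (the excess is dropped), then transmit up to
    service_rate packets. Returns (delivered, dropped)."""
    queue = delivered = dropped = 0
    for arriving in arrivals:
        accepted = min(arriving, buffer_size - queue)
        dropped += arriving - accepted
        queue += accepted
        served = min(queue, service_rate)
        queue -= served
        delivered += served
    return delivered, dropped
```

When arrivals persistently exceed the service rate, the buffer stays full and the excess is dropped, which is the behaviour congestion-control algorithms try to avoid.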
9

Pelter, Michele M., and Mary G. Carey. "ECG Computer Algorithms." American Journal of Critical Care 17, no. 6 (November 1, 2008): 581–82. http://dx.doi.org/10.4037/ajcc2008.17.6.581.

10

Kaltofen, E. "Computer Algebra Algorithms." Annual Review of Computer Science 2, no. 1 (June 1987): 91–118. http://dx.doi.org/10.1146/annurev.cs.02.060187.000515.

More sources

Dissertations on the topic "Computer algorithms"

1

Mosca, Michele. "Quantum computer algorithms." Thesis, University of Oxford, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.301184.

2

Nyman, Peter. "Representation of Quantum Algorithms with Symbolic Language and Simulation on Classical Computer." Licentiate thesis, Växjö University, School of Mathematics and Systems Engineering, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:vxu:diva-2329.

Annotation:
Quantum computing is an extremely promising project combining theoretical and experimental quantum physics, mathematics, quantum information theory and computer science. At the first stage of development of quantum computing, the main attention was paid to creating a few algorithms which might have applications in the future, clarifying fundamental questions and developing experimental technologies for toy quantum computers operating with a few quantum bits. At that time, expectations of quick progress dominated in the quantum community. However, it seems that such high expectations were not totally justified. Numerous fundamental and technological problems, such as the decoherence of quantum bits and the instability of quantum structures even with a small number of registers, led to doubts about a quick development of really working quantum computers. Although it cannot be denied that great progress has been made in quantum technologies, there is still a huge gap between the creation of toy quantum computers with 10-15 quantum registers and, e.g., satisfying the technical conditions of the project of 100 quantum registers announced a few years ago in the USA. It is also evident that difficulties increase nonlinearly with an increasing number of registers.
Therefore the simulation of quantum computations on classical computers became an important part of the quantum computing project. Of course, it cannot be expected that quantum algorithms would help to solve NP problems in polynomial time on classical computers; however, this is not at all the aim of classical simulation. Classical simulation of quantum computations will cover part of the gap between the theoretical mathematical formulation of quantum mechanics and the realization of quantum computers. One of the most important problems in quantum computer science is the development of new symbolic languages for quantum computing and the adaptation of existing symbolic languages for classical computing to quantum algorithms. The present thesis is devoted to the adaptation of the Mathematica symbolic language to known quantum algorithms and the corresponding simulation on a classical computer. Concretely, we represent in the Mathematica symbolic language Simon's algorithm, the Deutsch-Jozsa algorithm, Grover's algorithm, Shor's algorithm and quantum error-correcting codes. The same framework is used for all these algorithms; it captures the characteristic properties of the symbolic-language representation of quantum computing, and it is straightforward to include this framework in future algorithms.
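The kind of classical simulation the thesis performs in Mathematica can be illustrated with a minimal statevector sketch, here in Python rather than Mathematica and not the thesis's own code: a register is a list of 2^n amplitudes, and a gate is a linear map on it.

```python
import math

def apply_hadamard(state, qubit):
    """Apply a Hadamard gate to one qubit of an n-qubit statevector,
    given as a list of 2**n amplitudes."""
    h = 1 / math.sqrt(2)
    out = state[:]
    for i in range(len(state)):
        if not (i >> qubit) & 1:          # visit each basis-state pair once
            j = i | (1 << qubit)          # partner state with the bit flipped
            out[i] = h * (state[i] + state[j])
            out[j] = h * (state[i] - state[j])
    return out

def probabilities(state):
    """Born-rule measurement probabilities of the computational basis."""
    return [abs(a) ** 2 for a in state]
```

Applying the gate twice returns the original state, and a single application of H to |0> yields the equal superposition, the first step of every algorithm the thesis simulates.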
3

Rhodes, Daniel Thomas. "Hardware accelerated computer graphics algorithms." Thesis, Nottingham Trent University, 2008. http://irep.ntu.ac.uk/id/eprint/201/.

Annotation:
The advent of shaders in the latest generations of graphics hardware, which has made consumer-level graphics hardware partially programmable, makes now an ideal time to investigate new graphical techniques and algorithms as well as to attempt to improve upon existing ones. This work looks at areas of current interest within the graphics community such as texture filtering, bump mapping and depth-of-field simulation. These are all areas which have enjoyed much interest over the history of computer graphics but which provide a great deal of scope for further investigation in the light of recent hardware advances. A new hardware implementation of a texture filtering technique, aimed at consumer-level hardware, is presented. This novel technique utilises Fourier-space image filtering to reduce aliasing. Investigation shows that the technique provides reduced levels of aliasing along with comparable levels of detail to currently popular techniques. This adds to the community's knowledge by expanding the range of techniques available, as well as increasing the number of techniques which offer the potential for easy integration with current consumer-level graphics hardware along with real-time performance. Bump mapping is a long-standing and well-understood technique. Variations and extensions of it have been popular in real-time 3D computer graphics for many years. A new hardware implementation of a technique termed Super Bump Mapping (SBM) is introduced. Expanding on the work of Cant and Langensiepen [1], the SBM technique adopts the novel approach of using normal maps which supply multiple vectors per texel. This allows the retention of much more detail and overcomes some of the aliasing deficiencies of standard bump mapping caused by the standard single-vector approach and the non-linearity of the bump mapping process. A novel depth-of-field algorithm is proposed, which is an extension of the author's previous work [2][3][4].
The technique is aimed at consumer-level hardware and attempts to raise the bar for realism by providing support for the 'see-through' effect. This effect is a vital factor in the realistic appearance of simulated depth of field and has been overlooked in real-time computer graphics due to the complexities of an accurate calculation. The implementation of this new algorithm on current consumer-level hardware is investigated, and it is concluded that while current hardware is not yet capable enough, future iterations will provide the necessary functional and performance increases.
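Depth-of-field simulation of the kind discussed rests on thin-lens geometry; a common starting quantity is the circle-of-confusion diameter, the size of the blur spot for an out-of-focus point. The sketch below uses the standard textbook formula, not the thesis's algorithm, and the argument values in the usage note are hypothetical:

```python
def circle_of_confusion(aperture, focal_len, focus_dist, obj_dist):
    """Blur-spot diameter for a point at obj_dist when a thin lens of the
    given focal length and aperture diameter is focused at focus_dist.
    All quantities share one length unit (e.g. millimetres)."""
    return (aperture * abs(obj_dist - focus_dist) / obj_dist
            * focal_len / (focus_dist - focal_len))
```

A point exactly at the focus distance maps to a zero-diameter spot, and the spot grows as the point moves away from the focal plane, which is the gradient of blur a depth-of-field renderer must reproduce.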
4

Mims, Mark McGrew. "Dynamical stability of quantum algorithms /." Full text (PDF) from UMI/Dissertation Abstracts International, 2000. http://wwwlib.umi.com/cr/utexas/fullcit?p3004342.

5

Li, Quan Ph D. Massachusetts Institute of Technology. "Algorithms and algorithmic obstacles for probabilistic combinatorial structures." Thesis, Massachusetts Institute of Technology, 2018. http://hdl.handle.net/1721.1/115765.

Annotation:
Thesis: Ph.D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2018. Cataloged from the PDF version of the thesis. Includes bibliographical references (pages 209-214).
We study efficient average-case (approximation) algorithms for combinatorial optimization problems, as well as explore the algorithmic obstacles for a variety of discrete optimization problems arising in the theory of random graphs, statistics and machine learning. In particular, we consider the average-case optimization for three NP-hard combinatorial optimization problems: Large Submatrix Selection, Maximum Cut (Max-Cut) of a graph and Matrix Completion. The Large Submatrix Selection problem is to find a k × k submatrix of an n × n matrix with i.i.d. standard Gaussian entries which has the largest average entry. It was shown in [13] using non-constructive methods that the largest average value of a k × k submatrix is 2(1 + o(1))√(log n / k) with high probability (w.h.p.) when k = O(log n / log log n). We show that a natural greedy algorithm called Largest Average Submatrix (LAS) produces a submatrix with average value (1 + o(1))√(2 log n / k) w.h.p. when k is constant and n grows, namely approximately √2 smaller. Then, by drawing an analogy with the problem of finding cliques in random graphs, we propose a simple greedy algorithm which produces a k × k matrix with asymptotically the same average value (1 + o(1))√(2 log n / k) w.h.p., for k = o(log n). Since the maximum clique problem is a special case of the largest submatrix problem and the greedy algorithm is the best known algorithm for finding cliques in random graphs, it is tempting to believe that beating the factor-√2 performance gap suffered by both algorithms might be very challenging.
Surprisingly, we show the existence of a very simple algorithm which produces a k × k matrix with average value (1 + o_k(1) + o(1))(4/3)√(2 log n / k) for k = o((log n)^1.5), that is, with asymptotic factor 4/3 when k grows. To get an insight into the algorithmic hardness of this problem, and motivated by methods originating in the theory of spin glasses, we conduct the so-called expected overlap analysis of matrices with average value asymptotically (1 + o(1))α√(2 log n / k) for a fixed value α ∈ [1, √2]. The overlap corresponds to the number of common rows and common columns for pairs of matrices achieving this value. We discover numerically an intriguing phase transition at α* = 5√2/(3√3) ≈ 1.3608 ∈ [4/3, √2]: when α < α* the space of overlaps is a continuous subset of [0, 1]², whereas α = α* marks the onset of discontinuity, and as a result the model exhibits the Overlap Gap Property (OGP) when α > α*, appropriately defined. We conjecture that the OGP observed for α > α* also marks the onset of algorithmic hardness: no polynomial-time algorithm exists for finding matrices with average value at least (1 + o(1))α√(2 log n / k) when α > α* and k is a growing function of n. Finding a maximum cut of a graph is a well-known canonical NP-hard problem. We consider the problem of estimating the size of a maximum cut in a random Erdős-Rényi graph on n nodes and ⌊cn⌋ edges. We establish that the size of the maximum cut normalized by the number of nodes belongs to the interval [c/2 + 0.47523√c, c/2 + 0.55909√c] w.h.p. as n increases, for all sufficiently large c.
We observe that every maximum size cut satisfies a certain local optimality property, and we compute the expected number of cuts with a given value satisfying this local optimality property. Estimating this expectation amounts to solving a rather involved multi-dimensional large deviations problem. We solve this underlying large deviation problem asymptotically as c increases and use it to obtain an improved upper bound on the Max-Cut value. The lower bound is obtained by application of the second moment method, coupled with the same local optimality constraint, and is shown to work up to the stated lower bound value c/2 + 0.47523√c. We also obtain an improved lower bound of 1.36000n on the Max-Cut for the random cubic graph or any cubic graph with large girth, improving the previous best bound of 1.33773n. Matrix Completion is the problem of reconstructing a rank-k n × n matrix M from a sampling of its entries. We propose a new matrix completion algorithm using a novel sampling scheme based on a union of independent sparse random regular bipartite graphs. We show that under a certain incoherence assumption on M, and for the case when both the rank and the condition number of M are bounded, w.h.p. our algorithm recovers an ε-approximation of M in terms of the Frobenius norm using O(n log²(1/ε)) samples and in linear time O(n log²(1/ε)). This provides the best known bounds both on the sample complexity and computational cost for reconstructing (approximately) an unknown low-rank matrix. The novelty of our algorithm is two new steps of thresholding singular values and rescaling singular vectors in the application of the "vanilla" alternating minimization algorithm. The structure of sparse random regular graphs is used heavily for controlling the impact of these regularization steps.
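The alternating greedy idea behind the LAS procedure discussed above, fix the chosen columns and re-pick the k best rows, then fix the rows and re-pick the columns, can be sketched as follows. This is a simplified illustration of that idea, not the thesis's implementation; a deterministic initial choice of the first k rows and columns is assumed:

```python
def las(matrix, k, iters=10):
    """Greedy Largest Average Submatrix sketch: alternately choose the k
    rows with the largest sums over the current columns, then the k best
    columns over the current rows, and report the resulting average."""
    n = len(matrix)
    rows, cols = list(range(k)), list(range(k))
    for _ in range(iters):
        rows = sorted(range(n),
                      key=lambda r: -sum(matrix[r][c] for c in cols))[:k]
        cols = sorted(range(n),
                      key=lambda c: -sum(matrix[r][c] for r in rows))[:k]
    avg = sum(matrix[r][c] for r in rows for c in cols) / (k * k)
    return sorted(rows), sorted(cols), avg
```

Each alternation can only increase the submatrix sum, so the procedure converges to a locally optimal submatrix, the kind of local optimum whose average value the thesis analyzes.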
6

Tran, Chan-Hung. "Fast clipping algorithms for computer graphics." Thesis, University of British Columbia, 1986. http://hdl.handle.net/2429/26336.

Annotation:
Interactive computer graphics can achieve high-bandwidth man-machine communication only if the graphics system meets certain speed requirements. Clipping plays an important role in the viewing process, as well as in the zooming and panning functions; thus, it is desirable to develop a fast clipper. In this thesis, the intersection problem of a line segment against a convex polygonal object has been studied. Adaptation of the clip algorithms for parallel processing has also been investigated. Based on the conventional parametric clipping algorithm, two families of 2-D generalized line clipping algorithms are proposed: the t-para method and the s-para method. Depending on the implementation, both run either linearly in time using a sequential tracing or logarithmically in time by applying the numerical bisection method. The intersection problem is solved after the sector locations of the endpoints of a line segment are determined by a binary search. Three-dimensional clipping with a sweep-defined object using translational sweeping or conic sweeping is also discussed. Furthermore, a mapping method is developed for rectangular clipping. The endpoints of a line segment are first mapped onto the clip boundaries by an interval-clip operation. Then a pseudo window is defined and a set of conditions is derived for trivial acceptance and rejection. The proposed algorithms are implemented and compared with the Liang-Barsky algorithm to estimate their practical efficiency. Vectorization of the 2-D and 3-D rectangular clipping algorithms on an array processor has also been attempted.
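The Liang-Barsky parametric clipper used as the comparison baseline above is itself short enough to sketch. The implementation below is a standard illustrative version of that published algorithm, not code from the thesis:

```python
def liang_barsky(x0, y0, x1, y1, xmin, ymin, xmax, ymax):
    """Clip segment (x0,y0)-(x1,y1) to an axis-aligned rectangular window.
    Returns the clipped endpoints, or None if the segment lies outside."""
    dx, dy = x1 - x0, y1 - y0
    t0, t1 = 0.0, 1.0
    for p, q in ((-dx, x0 - xmin), (dx, xmax - x0),
                 (-dy, y0 - ymin), (dy, ymax - y0)):
        if p == 0:
            if q < 0:
                return None          # parallel to and outside this boundary
        else:
            t = q / p
            if p < 0:
                t0 = max(t0, t)      # entering intersection
            else:
                t1 = min(t1, t)      # leaving intersection
            if t0 > t1:
                return None          # entry after exit: no visible part
    return (x0 + t0 * dx, y0 + t0 * dy, x0 + t1 * dx, y0 + t1 * dy)
```

Working with the line parameter t instead of explicit intersection points is what makes the parametric family of clippers, including the t-para and s-para methods above, cheap per boundary test.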
7

Viloria, John A. (John Alexander) 1978. "Optimizing clustering algorithms for computer vision." Thesis, Massachusetts Institute of Technology, 2001. http://hdl.handle.net/1721.1/86847.

8

Khungurn, Pramook. "Shirayanagi-Sweedler algebraic algorithm stabilization and polynomial GCD algorithms." Thesis, Massachusetts Institute of Technology, 2007. http://hdl.handle.net/1721.1/41662.

Annotation:
Thesis (M.Eng.), Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2007. Includes bibliographical references (p. 71-72).
Shirayanagi and Sweedler [12] proved that a large class of algorithms on the reals can be modified slightly so that they also work correctly on floating-point numbers. Their main theorem states that, for each input, there exists a precision, called the minimum converging precision (MCP), at and beyond which the modified "stabilized" algorithm follows the same sequence of steps as the original "exact" algorithm. In this thesis, we study the MCP of two algorithms for finding the greatest common divisor of two univariate polynomials with real coefficients: the Euclidean algorithm, and an algorithm based on QR-factorization. We show that, if the coefficients of the input polynomials are allowed to be any computable numbers, then the MCPs of the two algorithms are not computable, implying that there are no "simple" bounding functions for the MCP of all pairs of real polynomials. For the Euclidean algorithm, we derive upper bounds on the MCP for pairs of polynomials whose coefficients are members of Z, Q, Z[θ], and Q[θ], where θ is a real algebraic integer. The bounds are quadratic in the degrees of the input polynomials or worse. For the QR-factorization algorithm, we derive a bound on the minimal precision at and beyond which the stabilized algorithm gives a polynomial with the same degree as that of the exact GCD, and another bound on the minimal precision at and beyond which the algorithm gives a polynomial with the same support as that of the exact GCD. The bounds are linear in (1) the degree of the polynomial and (2) the sum of the logarithms of the diagonal entries of the matrix R in the QR factorization of the Sylvester matrix of the input polynomials.
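The exact Euclidean algorithm whose floating-point stabilization the thesis studies can be sketched over the rationals, where exact arithmetic removes the precision question entirely. This is an illustrative version, not the thesis's code:

```python
from fractions import Fraction

def poly_mod(a, b):
    """Remainder of polynomial division a mod b. Polynomials are
    coefficient lists, highest degree first, over the rationals."""
    r = [Fraction(c) for c in a]
    while len(r) >= len(b):
        q = r[0] / Fraction(b[0])
        for i in range(len(b)):
            r[i] -= q * Fraction(b[i])
        r.pop(0)                     # leading coefficient is now exactly 0
    while r and r[0] == 0:           # strip any remaining leading zeros
        r.pop(0)
    return r

def poly_gcd(a, b):
    """Euclidean algorithm on univariate polynomials; returns the monic GCD."""
    while b:
        a, b = b, poly_mod(a, b)
    return [c / Fraction(a[0]) for c in a]
```

With `Fraction` coefficients every remainder is exact; the thesis's question is how many floating-point digits are needed before the stabilized version of this same loop takes the identical sequence of steps.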
9

O'Brien, Neil. "Algorithms for scientific computing." Thesis, University of Southampton, 2012. https://eprints.soton.ac.uk/355716/.

Annotation:
There has long been interest in algorithms for simulating physical systems. We are concerned with two areas within this field: fast multipole methods and meshless methods. Since Greengard and Rokhlin's seminal paper in 1987, considerable interest has arisen in fast multipole methods for finding the energy of particle systems in two and three dimensions, and more recently in many other applications where fast matrix-vector multiplication is called for. We develop a new fast multipole method that allows the calculation of the energy of a system of N particles in O(N) time, where the particles' interactions are governed by the 2D Yukawa potential, which takes the form of a modified Bessel function K_v. We then turn our attention to meshless methods. We formulate and test a new radial basis function finite difference method for solving an eigenvalue problem on a periodic domain. We then apply meshless methods to modelling photonic crystals. After an initial background study of the field, we detail the Maxwell equations, which govern the interaction of the light with the photonic crystal, and show how photonic band gaps may arise. We present a novel meshless weak-strong form method with reduced computational cost compared to the existing meshless weak form method. Furthermore, we develop a new radial basis function finite difference method for photonic band gap calculations. Throughout the work we demonstrate the application of cutting-edge technologies such as cloud computing to the development and verification of algorithms for physical simulations.
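The O(N²) direct sum that a fast multipole method replaces is worth seeing explicitly. The sketch below uses the familiar screened-Coulomb form exp(-κr)/r of a Yukawa-type potential as a stand-in; the thesis's 2D variant uses the modified Bessel function K_v, which is not in the Python standard library:

```python
import math

def direct_energy(points, charges, kappa):
    """O(N^2) pairwise interaction energy under a screened (Yukawa-type)
    potential exp(-kappa * r) / r. A fast multipole method computes the
    same quantity in O(N) by grouping far-field interactions."""
    energy = 0.0
    n = len(points)
    for i in range(n):
        xi, yi = points[i]
        for j in range(i + 1, n):
            xj, yj = points[j]
            r = math.hypot(xi - xj, yi - yj)
            energy += charges[i] * charges[j] * math.exp(-kappa * r) / r
    return energy
```

The quadratic pair loop is exactly the cost barrier: doubling N quadruples the work, which is why an O(N) multipole expansion matters for large particle systems.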
10

Nofal, Samer. "Algorithms for argument systems." Thesis, University of Liverpool, 2013. http://livrepository.liverpool.ac.uk/12173/.

Annotation:
Argument systems are computational models that enable an artificial intelligent agent to reason via argumentation. Basically, the computations in argument systems can be viewed as search problems. In general, for a wide range of such problems existing algorithms lack five important features. Firstly, there is no comprehensive study that shows which of the existing algorithms is the most efficient at solving a particular problem. Secondly, there is no work that establishes the use of cost-effective heuristics leading to more efficient algorithms. Thirdly, mechanisms for pruning the search space are understudied, and hence further pruning techniques might be neglected. Fourthly, diverse decision problems, for extended models of argument systems, are left without dedicated algorithms fine-tuned to the specific requirements of the respective extended model. Fifthly, some existing algorithms are presented at a high level that leaves some aspects of the computations unspecified, and therefore implementations are rendered open to different interpretations. The work presented in this thesis tries to address all these concerns. Concisely, the presented work is centered around a widely studied view of what computationally defines an argument system. According to this view, an argument system is a pair: a set of abstract arguments and a binary relation that captures the conflicting arguments. Then, to resolve an instance of an argument system, the acceptable arguments must be decided according to a set of criteria that collectively define the argumentation semantics. For different motivations there are various argumentation semantics. Equally, several proposals in the literature present extended models that extend the two basic components of an argument system, usually by incorporating more elements and/or broadening the nature of the existing components.
This work designs algorithms that solve decision problems in the basic form of argument systems as well as in some extended models. Likewise, new algorithms are developed that deal with different argumentation semantics. We evaluate our algorithms experimentally against existing algorithms; the results indicate that the new algorithms are superior with respect to running time.
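The model described in the abstract (abstract arguments plus a binary attack relation) is concrete enough to sketch. The hypothetical snippet below computes the grounded extension, one standard argumentation semantics, as the least fixpoint of the "defended arguments" operator; it illustrates the model only and is not code from the thesis.

```python
def grounded_extension(arguments, attacks):
    """Least fixpoint of F(S) = {a : every attacker of a is attacked by S}.

    arguments: iterable of argument names; attacks: set of (attacker, target) pairs.
    """
    arguments = list(arguments)
    attacks = set(attacks)
    attackers = {a: {b for (b, t) in attacks if t == a} for a in arguments}
    ext = set()
    while True:
        # an argument is defended if ext counter-attacks each of its attackers
        defended = {a for a in arguments
                    if all(any((d, b) in attacks for d in ext)
                           for b in attackers[a])}
        if defended == ext:
            return ext
        ext = defended
```

For example, with attacks a→b and b→c, the grounded extension is {a, c}: a is unattacked, and a defends c by attacking b.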
More sources

Books on the topic "Computer algorithms"

1

Horowitz, Ellis. Computer algorithms. 2nd ed. Summit, NJ: Silicon Press, 2008.

2

Horowitz, Ellis. Computer algorithms. New York: Computer Science Press, 1997.

3

Horowitz, Ellis. Computer algorithms. 2nd ed. Summit, NJ: Silicon Press, 2008.

4

Horowitz, Ellis. Computer algorithms. 2nd ed. Summit, NJ: Silicon Press, 2008.

5

Horowitz, Ellis. Computer algorithms. 2nd ed. Summit, NJ: Silicon Press, 2008.

6

Horowitz, Ellis. Computer algorithms. New York: Computer Science Press, 1998.

7

Baase, Sara. Computer algorithms: Introduction to design and analysis. 2nd ed. Reading, Mass: Addison-Wesley Pub. Co., 1991.

8

Baase, Sara. Computer algorithms: Introduction to design and analysis. 2nd ed. Reading, Mass: Addison-Wesley Pub. Co., 1988.

9

Salander, Elisabeth C. Computer search algorithms. Hauppauge, N.Y.: Nova Science Publishers, 2010.

10

Horowitz, Ellis. Computer algorithms/C++. 2nd ed. Summit, NJ: Silicon Press, 2008.

More sources

Book chapters on the topic "Computer algorithms"

1

Phan, Vinhthuy. "Algorithms, Computer." In Encyclopedia of Sciences and Religions, 71–74. Dordrecht: Springer Netherlands, 2013. http://dx.doi.org/10.1007/978-1-4020-8265-8_1476.

2

Zobel, Justin. "Algorithms." In Writing for Computer Science, 115–28. London: Springer London, 2004. http://dx.doi.org/10.1007/978-0-85729-422-7_7.

3

Zobel, Justin. "Algorithms." In Writing for Computer Science, 145–55. London: Springer London, 2014. http://dx.doi.org/10.1007/978-1-4471-6639-9_10.

4

Lim, Daniel. "Algorithms." In Philosophy through Computer Science, 22–29. New York: Routledge, 2023. http://dx.doi.org/10.4324/9781003271284-3.

5

Baratz, Alan, Inder Gopal, and Adrian Segall. "Fault tolerant queries in computer networks." In Distributed Algorithms, 30–40. Berlin, Heidelberg: Springer Berlin Heidelberg, 1988. http://dx.doi.org/10.1007/bfb0019792.

6

Roosta, Seyed H. "Computer Architecture." In Parallel Processing and Parallel Algorithms, 1–56. New York, NY: Springer New York, 2000. http://dx.doi.org/10.1007/978-1-4612-1220-1_1.

7

Mehlhorn, Kurt. "The Physarum Computer." In WALCOM: Algorithms and Computation, 8. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-19094-0_3.

8

Erciyes, K. "Algorithms." In Undergraduate Topics in Computer Science, 41–61. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-61115-6_3.

9

Sutinen, Erkki, and Matti Tedre. "ICT4D: A Computer Science Perspective." In Algorithms and Applications, 221–31. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-12476-1_16.

10

Symeonidis, Panagiotis, Dimitrios Ntempos, and Yannis Manolopoulos. "Algorithms." In SpringerBriefs in Electrical and Computer Engineering, 67–79. New York, NY: Springer New York, 2014. http://dx.doi.org/10.1007/978-1-4939-0286-6_6.


Conference papers on the topic "Computer algorithms"

1

Zhang, Yi, Jie Qiu, and Guangqiang Wu. "Computer Vision Kinematic Detection of Centrifugal Pendulum Vibration Absorber." In 2025 4th Asia Conference on Algorithms, Computing and Machine Learning (CACML), 1–6. IEEE, 2025. https://doi.org/10.1109/cacml64929.2025.11010954.

2

Efimov, Aleksey Igorevich, and Dmitry Igorevich Ustukov. "Comparative Analysis of Stereo Vision Algorithms Implementation on Various Architectures." In 32nd International Conference on Computer Graphics and Vision. Keldysh Institute of Applied Mathematics, 2022. http://dx.doi.org/10.20948/graphicon-2022-484-489.

Annotation:
A comparative analysis of the functionality of stereo vision algorithms on various hardware architectures has been carried out. Quantitative results of stereo vision algorithm implementations are presented, taking into account the specifics of the underlying hardware. A description of an original algorithm for calculating the depth map using a summed-area table is given; the complexity of the algorithm does not depend on the size of the search window. The article presents the content and results of implementing the stereo vision method on standard-architecture computers, including a multi-threaded implementation, a single-board computer, and an FPGA. The proposed results may be of interest in the design of vision systems for practical applications.
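The summed-area table mentioned in the abstract is a standard technique worth making concrete: after one O(HW) pass, any rectangular window sum costs four lookups, which is why the matching cost becomes independent of the search-window size. A minimal sketch (not the authors' code):

```python
def summed_area_table(img):
    """Build a (h+1) x (w+1) padded summed-area table for a 2D list of numbers."""
    h, w = len(img), len(img[0])
    sat = [[0] * (w + 1) for _ in range(h + 1)]
    for y in range(h):
        for x in range(w):
            # each entry is the sum of the image rectangle above and to the left
            sat[y + 1][x + 1] = img[y][x] + sat[y][x + 1] + sat[y + 1][x] - sat[y][x]
    return sat

def window_sum(sat, y0, x0, y1, x1):
    """Sum over the inclusive box [y0..y1] x [x0..x1] in O(1), any window size."""
    return sat[y1 + 1][x1 + 1] - sat[y0][x1 + 1] - sat[y1 + 1][x0] + sat[y0][x0]
```

In block-matching stereo, this lets the per-pixel aggregation of matching costs over a window run at the same speed for a 3x3 or a 31x31 window.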
3

Spector, Lee. "Evolving quantum computer algorithms." In the 11th annual conference companion. New York, New York, USA: ACM Press, 2009. http://dx.doi.org/10.1145/1570256.1570420.

4

Spector, Lee. "Evolving quantum computer algorithms." In the 13th annual conference companion. New York, New York, USA: ACM Press, 2011. http://dx.doi.org/10.1145/2001858.2002128.

5

Milne, Darran. "Computer-Generated Holography Algorithms." In Frontiers in Optics. Washington, D.C.: Optica Publishing Group, 2023. http://dx.doi.org/10.1364/fio.2023.fm1a.4.

6

Czakoova, Krisztina. "DEVELOPING ALGORITHMIC THINKING BY EDUCATIONAL COMPUTER GAMES." In eLSE 2020. University Publishing House, 2020. http://dx.doi.org/10.12753/2066-026x-20-003.

Annotation:
The basics of algorithmic thinking should not be limited to creating correct solutions and expressing them as a computer program; a suitable methodology based on problem solving should also be used, preferably in a playful way. At school, many learners consider the topic of algorithms hard and not very attractive. For beginners in programming, knowledge of specific algorithms is not so important; the ability to understand the principles of algorithms, as well as to find one's own algorithms for new problems, is more desirable. One main educational objective is to know that an algorithm prescribes exactly what to do in each possible situation. Educational computer games based on the use of basic control structures help pupils understand how to reach a solution through clearly defined steps with immediate feedback, with the possibility of visualizing (and correcting) the sequence of steps. Students gain new knowledge through their own observation and discovery. The games also motivate students to improve their algorithms and find more efficient solutions in game strategy. The aim is for pupils to acquire new knowledge by exploring and learning by doing. The main aim of the paper is to show a way of learning the principles and concepts of algorithms using a computer game, which is much easier for learners to comprehend and makes learning more fun. When creating the game, which was inspired by the well-known programmable toy Bee-Bot, we tried to comply with the didactic principles of illustrativeness, appropriateness, and an individual approach.
7

Kosovskaya, Tatiana, and Juan Zhou. "Algorithms for Checking Isomorphism of Two Elementary Conjunctions". In Computer Science and Information Technologies 2023. Institute for Informatics and Automation Problems, 2023. http://dx.doi.org/10.51408/csit2023_01.

Annotation:
When solving AI problems related to the study of complex structured objects, a convenient tool for describing such objects is the predicate calculus language. The paper presents two algorithms for checking two elementary conjunctions of predicate formulas for isomorphism (matches up to the names of variables and the order of conjunctive terms). The first of the algorithms checks for isomorphism elementary conjunctions containing a single predicate symbol. In addition, if the formulas are isomorphic, then it finds a one-to-one correspondence between the arguments of these formulas. If all predicates are binary, the proposed algorithm is an algorithm for checking two directed graphs for isomorphism. The second algorithm checks for isomorphism elementary conjunctions containing several predicate symbols. Estimates of their time complexity are given for both algorithms.
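In the single-binary-predicate case described above, the problem reduces to directed-graph isomorphism, which can be illustrated (inefficiently) by brute force over variable correspondences. The hypothetical sketch below returns a mapping when one exists; the paper's algorithms are more refined, and this only makes the problem statement concrete.

```python
from itertools import permutations

def conjunction_isomorphism(vars1, atoms1, vars2, atoms2):
    """Find a bijection vars1 -> vars2 turning atom set atoms1 into atoms2.

    Atoms are (x, y) pairs, i.e. instances of a single binary predicate P(x, y),
    so this is exactly directed-graph isomorphism. Returns a dict or None.
    """
    if len(vars1) != len(vars2) or len(atoms1) != len(atoms2):
        return None
    atoms2 = set(atoms2)
    for image in permutations(vars2):
        m = dict(zip(vars1, image))          # candidate variable correspondence
        if {(m[x], m[y]) for (x, y) in atoms1} == atoms2:
            return m
    return None
```

For instance, P(x,y) & P(y,z) is isomorphic to P(a,b) & P(b,c) under x→a, y→b, z→c, while P(x,y) & P(x,z) is not, since the attack of out-degrees differs.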
8

Freeman, William T. "Where computer vision needs help from computer science." In Proceedings of the Twenty-Second Annual ACM-SIAM Symposium on Discrete Algorithms. Philadelphia, PA: Society for Industrial and Applied Mathematics, 2011. http://dx.doi.org/10.1137/1.9781611973082.64.

9

Bulavintsev, Vadim, and Dmitry Zhdanov. "Method for Adaptation of Algorithms to GPU Architecture." In 31th International Conference on Computer Graphics and Vision. Keldysh Institute of Applied Mathematics, 2021. http://dx.doi.org/10.20948/graphicon-2021-3027-930-941.

Annotation:
We propose a generalized method for adapting and optimizing algorithms for efficient execution on modern graphics processing units (GPUs). The method consists of several steps. First, build a control flow graph (CFG) of the algorithm. Next, transform the CFG into a tree of loops and merge non-parallelizable loops into parallelizable ones. Finally, map the resulting loop tree to the tree of GPU computational units, unrolling the algorithm's loops as necessary for the match. The mapping should be performed bottom-up, from the lowest GPU architecture levels to the highest, to minimize off-chip memory access and maximize register file usage. The method provides the programmer with a convenient and robust mental framework and strategy for GPU code optimization. We demonstrate the method by adapting to a GPU the DPLL backtracking search algorithm for solving the Boolean satisfiability problem (SAT). The resulting GPU version of DPLL outperforms the CPU version in raw tree-search performance sixfold for regular Boolean satisfiability problems and twofold for irregular ones.
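DPLL itself, the algorithm the paper ports to the GPU, fits in a few lines of sequential code. The following is a hedged sketch of the classic unit-propagation-plus-branching loop over DIMACS-style signed-integer literals; it includes none of the paper's CFG transformation or GPU mapping.

```python
def dpll(clauses, assignment=None):
    """Davis-Putnam-Logemann-Loveland SAT search with unit propagation.

    clauses: list of clauses, each a list of nonzero ints (negative = negated).
    Returns a satisfying {var: bool} dict, or None if unsatisfiable.
    """
    assignment = dict(assignment or {})

    def status(clause):
        """(True, []) if satisfied, (False, []) if falsified, else (None, undecided)."""
        undecided = []
        for lit in clause:
            val = assignment.get(abs(lit))
            if val is None:
                undecided.append(lit)
            elif val == (lit > 0):
                return True, []
        return (None, undecided) if undecided else (False, [])

    changed = True
    while changed:                        # unit propagation to a fixpoint
        changed = False
        for clause in clauses:
            sat, undecided = status(clause)
            if sat is False:
                return None               # conflict under the current assignment
            if sat is None and len(undecided) == 1:
                lit = undecided[0]
                assignment[abs(lit)] = lit > 0
                changed = True

    for clause in clauses:                # branch on the first undecided clause
        sat, undecided = status(clause)
        if sat is None:
            var = abs(undecided[0])
            for value in (True, False):
                model = dpll(clauses, {**assignment, var: value})
                if model is not None:
                    return model
            return None
    return assignment                     # every clause is satisfied
```

The branching step is the "tree search" whose raw traversal rate the paper's GPU mapping accelerates.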
10

Fantacci, M. E., S. Bagnasco, N. Camarlinghi, E. Fiorina, E. Lopez Torres, F. Pennanzio, C. Peroni, et al. "A Web-based Computer Aided Detection System for Automated Search of Lung Nodules in Thoracic Computed Tomography Scans." In International Conference on Bioinformatics Models, Methods and Algorithms. SCITEPRESS - Science and Technology Publications, 2015. http://dx.doi.org/10.5220/0005280102130218.


Organization reports on the topic "Computer algorithms"

1

Poggio, Tomaso, and James Little. Parallel Algorithms for Computer Vision. Fort Belvoir, VA: Defense Technical Information Center, March 1988. http://dx.doi.org/10.21236/ada203947.

2

Leach, Ronald J. Analysis of Blending Algorithms in Computer Graphics. Fort Belvoir, VA: Defense Technical Information Center, October 1988. http://dx.doi.org/10.21236/ada201921.

3

Dixon, L. C., and R. C. Price. Optimisation Algorithms for Highly Parallel Computer Architectures. Fort Belvoir, VA: Defense Technical Information Center, December 1990. http://dx.doi.org/10.21236/ada235911.

4

Leach, Ronald J. Analysis of Blending Algorithms in Computer Graphics. Fort Belvoir, VA: Defense Technical Information Center, November 1991. http://dx.doi.org/10.21236/ada244279.

5

Kupinski, Matthew A. Investigation of Genetic Algorithms for Computer-Aided Diagnosis. Fort Belvoir, VA: Defense Technical Information Center, October 2000. http://dx.doi.org/10.21236/ada393995.

6

Schnabel, R. Concurrent Algorithms for Numerical Computation on Hypercube Computer. Fort Belvoir, VA: Defense Technical Information Center, February 1988. http://dx.doi.org/10.21236/ada195502.

7

Kupinski, Matthew A. Investigation of Genetic Algorithms for Computer-Aided Diagnosis. Fort Belvoir, VA: Defense Technical Information Center, October 1999. http://dx.doi.org/10.21236/ada391457.

8

Lewis, Dustin, Naz Modirzadeh, and Gabriella Blum. War-Algorithm Accountability. Harvard Law School Program on International Law and Armed Conflict, August 2016. http://dx.doi.org/10.54813/fltl8789.

Annotation:
In War-Algorithm Accountability (August 2016), we introduce a new concept—war algorithms—that elevates algorithmically-derived “choices” and “decisions” to a, and perhaps the, central concern regarding technical autonomy in war. We thereby aim to shed light on and recast the discussion regarding “autonomous weapon systems” (AWS). We define “war algorithm” as any algorithm that is expressed in computer code, that is effectuated through a constructed system, and that is capable of operating in relation to armed conflict. In introducing this concept, our foundational technological concern is the capability of a constructed system, without further human intervention, to help make and effectuate a “decision” or “choice” of a war algorithm. Distilled, the two core ingredients are an algorithm expressed in computer code and a suitably capable constructed system. Through that lens, we link international law and related accountability architectures to relevant technologies. We sketch a three-part (non-exhaustive) approach that highlights traditional and unconventional accountability avenues. We focus largely on international law because it is the only normative regime that purports—in key respects but with important caveats—to be both universal and uniform. In this way, international law is different from the myriad domestic legal systems, administrative rules, or industry codes that govern the development and use of technology in all other spheres. By not limiting our inquiry only to weapon systems, we take an expansive view, showing how the broad concept of war algorithms might be susceptible to regulation—and how those algorithms might already fit within the existing regulatory system established by international law.
9

Varastehpour, Soheil, Hamid Sharifzadeh, and Iman Ardekani. A Comprehensive Review of Deep Learning Algorithms. Unitec ePress, 2021. http://dx.doi.org/10.34074/ocds.092.

Annotation:
Deep learning algorithms are a subset of machine learning algorithms that aim to explore several levels of the distributed representations from the input data. Recently, many deep learning algorithms have been proposed to solve traditional artificial intelligence problems. In this review paper, some of the up-to-date algorithms of this topic in the field of computer vision and image processing are reviewed. Following this, a brief overview of several different deep learning methods and their recent developments are discussed.
10

Ainsworth, James S., and Steven Kubala. Computer Simulation Modeling: A Method for Predicting the Utilities of Alternative Computer-Aided Threat Evaluation Algorithms. Fort Belvoir, VA: Defense Technical Information Center, September 1990. http://dx.doi.org/10.21236/ada230252.
