Academic literature on the topic 'Explicit machine computation and programs (not the theory of computation or programming)'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Explicit machine computation and programs (not the theory of computation or programming).'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Explicit machine computation and programs (not the theory of computation or programming)"

1

Taha, Walid, and Peter Wadler. "Special issue on Semantics, Applications, and Implementation of Program Generation." Journal of Functional Programming 10, no. 6 (November 2000): 627. http://dx.doi.org/10.1017/s0956796800003890.

Abstract:
Program generation has the prospect of being an integral part of a wide range of software development processes. Recent studies investigate different aspects of program generation systems, including their semantics, their applications, and their implementation. Existing theories and systems address both high-level (source) language and low-level (machine) language generation. A number of programming languages now support program generation and manipulation, with different goals and implementation techniques, targeted at different applications. In this context, a PLI workshop dedicated to this theme (SAIG'00) was held in Montreal in September 2000. Following on from this workshop, a special issue of the Journal of Functional Programming will be devoted to the same theme.

Full-length, archival-quality submissions are solicited on topics including both theoretical and practical models and tools for building program generation systems. Examples include:

- Semantics, type systems, and implementations for multi-stage languages.
- Run-time specialization systems, e.g. compilers, operating systems.
- High-level program generation (applications, foundations, environments).
- Symbolic computation, linking and explicit substitution, in-lining and macros.

Reports on applications of these techniques to real-world problems are especially encouraged, as are submissions that relate ideas and concepts from several of these topics, or bridge the gap between theory and practice. Contributors to SAIG'00 are encouraged to submit, but submission is open to everyone. Papers will be reviewed as regular JFP submissions, and acceptance in the special issue will be based on relevance to the theme. The special issue also welcomes high-quality survey and position papers that would benefit a wide audience. Accepted papers exceeding the space restrictions will be published as regular JFP papers.

Submissions should be sent to the guest editor (address below), with a copy to Nasreen Ahmad (nasreen@dcs.gla.ac.uk). Submitted articles should be sent in PostScript format, preferably gzipped and uuencoded. In addition, please send, as plain text, the title, abstract, and contact information. The submission deadline is 1st February 2001. For other submission details, please consult an issue of the Journal of Functional Programming or see the Journal's web page at http://www.dcs.gla.ac.uk/jfp/.
2

Bi, Q., and P. Yu. "Computation of Normal Forms of Differential Equations Associated with Non-Semisimple Zero Eigenvalues." International Journal of Bifurcation and Chaos 8, no. 12 (December 1998): 2279–319. http://dx.doi.org/10.1142/s0218127498001868.

Abstract:
This paper presents a method to compute the normal forms of differential equations whose Jacobian evaluated at an equilibrium includes a double zero or a triple zero eigenvalue. The method combines normal form theory with center manifold theory to deal with a general n-dimensional system. Explicit formulas are derived and symbolic computer programs have been developed using the symbolic computation language Maple. This enables one to easily compute normal forms and nonlinear transformations up to any order for a given specific problem. The programs can be conveniently executed on a mainframe, a workstation, or a PC without any interaction. Mathematical and practical examples are presented to show the applicability of the method.
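To make the setting concrete, here is a minimal Python/SymPy sketch (an illustrative toy system, not the authors' Maple programs) that checks the defining situation of the paper: a Jacobian at an equilibrium with a non-semisimple, i.e. nilpotent, double-zero eigenvalue.

```python
import sympy as sp

x, y = sp.symbols('x y')

# Toy planar system with a Bogdanov-Takens (non-semisimple double-zero)
# singularity at the origin: x' = y, y' = x**2 + x*y (illustrative choice).
f = sp.Matrix([y, x**2 + x*y])

J = f.jacobian([x, y]).subs({x: 0, y: 0})
print(J)                    # Matrix([[0, 1], [0, 0]])
print(J.eigenvals())        # {0: 2}: a double zero eigenvalue
print(J**2 == sp.zeros(2))  # True: J is nilpotent, so the zero is non-semisimple
```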
3

Acar, Umut A., Matthias Blume, and Jacob Donham. "A consistent semantics of self-adjusting computation." Journal of Functional Programming 23, no. 3 (May 2013): 249–92. http://dx.doi.org/10.1017/s0956796813000099.

Abstract:
This paper presents a semantics of self-adjusting computation and proves that the semantics is correct and consistent. The semantics introduces memoizing change propagation, which enhances change propagation with the classic idea of memoization to enable reuse of computations even when memory is mutated via side effects. During evaluation, computation reuse via memoization triggers a change-propagation algorithm that adapts the reused computation to the memory mutations (side effects) that took place since the creation of the computation. Since the semantics includes both memoization and change propagation, it involves both non-determinism (due to memoization) and mutation (due to change propagation). Our consistency theorem states that the non-determinism is not harmful: any two evaluations of the same program starting at the same state yield the same result. Our correctness theorem states that mutation is not harmful: self-adjusting programs are compatible with purely functional programming. We formalize the semantics and its meta-theory in the LF logical framework and machine check our proofs using Twelf.
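The core mechanism the abstract describes, change propagation triggered by writes to memory, can be sketched in a few lines. The following toy Python version is a drastic simplification that omits memoization and the paper's formal LF development: it simply re-runs exactly the computations that read a mutated modifiable.

```python
class Mod:
    """A modifiable reference that remembers which thunks read it."""
    def __init__(self, value):
        self.value, self.readers = value, set()
    def read(self, thunk):
        self.readers.add(thunk)
        return self.value
    def write(self, value):
        if value != self.value:          # only real changes propagate
            self.value = value
            for t in list(self.readers): # change propagation
                t.rerun()

class Thunk:
    """A computation whose result is kept consistent under writes."""
    def __init__(self, fn):
        self.fn = fn
        self.rerun()
    def rerun(self):
        self.result = self.fn(self)

a, b = Mod(1), Mod(2)
s = Thunk(lambda self: a.read(self) + b.read(self))
print(s.result)  # 3
a.write(10)      # change propagation re-executes the dependent thunk
print(s.result)  # 12
```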
4

Cohen, Shay B., Robert J. Simmons, and Noah A. Smith. "Products of weighted logic programs." Theory and Practice of Logic Programming 11, no. 2-3 (January 28, 2011): 263–96. http://dx.doi.org/10.1017/s1471068410000529.

Abstract:
Weighted logic programming, a generalization of bottom-up logic programming, is a well-suited framework for specifying dynamic programming algorithms. In this setting, proofs correspond to the algorithm's output space, such as a path through a graph or a grammatical derivation, and are given a real-valued score (often interpreted as a probability) that depends on the real weights of the base axioms used in the proof. The desired output is a function over all possible proofs, such as a sum of scores or an optimal score. We describe the product transformation, which can merge two weighted logic programs into a new one. The resulting program optimizes a product of proof scores from the original programs, constituting a scoring function known in machine learning as a "product of experts." Through the addition of intuitive constraining side conditions, we show that several important dynamic programming algorithms can be derived by applying product to weighted logic programs corresponding to simpler weighted logic programs. In addition, we show how the computation of Kullback–Leibler divergence, an information-theoretic measure, can be interpreted using product.
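As a rough illustration of the two ingredients, weighted proofs and the product transformation, here is a small Python sketch. The graphs, the probabilities, and the single reachability rule are invented for the example; real weighted logic programs are far more general.

```python
import heapq

def best_path(edges, source, target):
    """Best proof score: max over paths of the product of edge weights,
    i.e. the one-rule weighted logic program
    reach(Y) <= reach(X) * edge(X, Y)."""
    best = {source: 1.0}
    heap = [(-1.0, source)]
    while heap:
        negw, u = heapq.heappop(heap)
        if -negw < best.get(u, 0.0):     # stale heap entry
            continue
        for v, p in edges.get(u, []):
            w = -negw * p
            if w > best.get(v, 0.0):
                best[v] = w
                heapq.heappush(heap, (-w, v))
    return best.get(target, 0.0)

def product_program(e1, e2):
    """Product transformation: items pair up and weights multiply,
    a 'product of experts' over paired derivations."""
    return {(u1, u2): [((v1, v2), p1 * p2)
                       for v1, p1 in e1[u1]
                       for v2, p2 in e2[u2]]
            for u1 in e1 for u2 in e2}

e1 = {'s': [('a', 0.9), ('t', 0.2)], 'a': [('t', 0.8)]}   # expert 1
e2 = {'s': [('a', 0.5), ('t', 0.6)], 'a': [('t', 0.9)]}   # expert 2
print(best_path(product_program(e1, e2), ('s', 's'), ('t', 't')))  # 0.324
```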
5

Nadathur, Gopalan. "A treatment of higher-order features in logic programming." Theory and Practice of Logic Programming 5, no. 3 (May 2005): 305–54. http://dx.doi.org/10.1017/s1471068404002297.

Abstract:
The logic programming paradigm provides the basis for a new intensional view of higher-order notions. This view is realized primarily by employing the terms of a typed lambda calculus as representational devices and by using a richer form of unification for probing their structures. These additions have important meta-programming applications but they also pose non-trivial implementation problems. One issue concerns the machine representation of lambda terms suitable to their intended use: an adequate encoding must facilitate comparison operations over terms in addition to supporting the usual reduction computation. Another aspect relates to the treatment of a unification operation that has a branching character and that sometimes calls for the delaying of the solution of unification problems. A final issue concerns the execution of goals whose structures become apparent only in the course of computation. These various problems are exposed in this paper and solutions to them are described. A satisfactory representation for lambda terms is developed by exploiting the nameless notation of de Bruijn as well as explicit encodings of substitutions. Special mechanisms are molded into the structure of traditional Prolog implementations to support branching in unification and carrying of unification problems over other computation steps; a premium is placed in this context on exploiting determinism and on emulating usual first-order behaviour. An extended compilation model is presented that treats higher-order unification and also handles dynamically emergent goals. The ideas described here have been employed in the Teyjus implementation of the λProlog language, a fact that is used to obtain a preliminary assessment of their efficacy.
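The nameless de Bruijn representation on which the paper builds its term encoding can be shown compactly. This Python sketch covers plain beta reduction only, without the explicit-substitution and higher-order unification machinery of Teyjus; it illustrates why alpha-equivalence becomes mere structural equality.

```python
# Terms: ('var', k) | ('lam', body) | ('app', f, a)

def shift(t, d, cutoff=0):
    """Shift free indices in t by d."""
    tag = t[0]
    if tag == 'var':
        return ('var', t[1] + d) if t[1] >= cutoff else t
    if tag == 'lam':
        return ('lam', shift(t[1], d, cutoff + 1))
    return ('app', shift(t[1], d, cutoff), shift(t[2], d, cutoff))

def subst(t, j, s):
    """Substitute s for index j in t."""
    tag = t[0]
    if tag == 'var':
        return s if t[1] == j else t
    if tag == 'lam':
        return ('lam', subst(t[1], j + 1, shift(s, 1)))
    return ('app', subst(t[1], j, s), subst(t[2], j, s))

def beta(app):
    """One beta step: (lam body) arg -> body[0 := arg]."""
    (_, (_, body), arg) = app
    return shift(subst(body, 0, shift(arg, 1)), -1)

# (\x. \y. x) z  -->  \y. z
K = ('lam', ('lam', ('var', 1)))
z = ('var', 0)               # some free variable
print(beta(('app', K, z)))   # ('lam', ('var', 1))
```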
6

Demidova, Liliya A., and Artyom V. Gorchakov. "Classification of Program Texts Represented as Markov Chains with Biology-Inspired Algorithms-Enhanced Extreme Learning Machines." Algorithms 15, no. 9 (September 15, 2022): 329. http://dx.doi.org/10.3390/a15090329.

Abstract:
The massive nature of modern university programming courses increases the burden on academic workers. The Digital Teaching Assistant (DTA) system addresses this issue by automating unique programming exercise generation and checking, and provides means for analyzing programs received from students by the end of the semester. In this paper, we propose a machine learning-based approach to the classification of student programs represented as Markov chains. The proposed approach enables real-time analysis of student submissions in the DTA system. We compare the performance of different multi-class classification algorithms, such as the support vector machine (SVM), the k nearest neighbors (KNN) algorithm, random forest (RF), and the extreme learning machine (ELM). ELM is a single-hidden-layer feedforward network (SLFN) learning scheme that drastically speeds up the SLFN training process. This is achieved by randomly initializing the weights of connections between input and hidden neurons, and explicitly computing the weights of connections between hidden and output neurons. The experimental results show that ELM is the most computationally efficient of the considered algorithms. In addition, we apply biology-inspired algorithms to fine-tuning the ELM input weights in order to further improve the generalization capabilities of this algorithm. The obtained results show that ELMs fine-tuned with biology-inspired algorithms achieve the best accuracy on test data in most of the considered problems.
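The ELM training step the abstract singles out, random frozen input weights plus a one-shot least-squares solve for the output weights, fits in a short NumPy sketch. The data, sizes, and activation below are assumptions for illustration, not the DTA system's pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))              # 200 samples, 10 features
y = (X[:, 0] + X[:, 1] > 0).astype(int)     # toy binary labels
T = np.eye(2)[y]                            # one-hot targets

n_hidden = 64
W = rng.normal(size=(10, n_hidden))         # input weights: random, frozen
b = rng.normal(size=n_hidden)
H = np.tanh(X @ W + b)                      # hidden-layer activations

# Output weights solved in one shot via least squares: the step the
# abstract describes as 'explicitly computing' the output connections.
beta, *_ = np.linalg.lstsq(H, T, rcond=None)

pred = (H @ beta).argmax(axis=1)
print('train accuracy:', (pred == y).mean())
```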
7

Sarkar, Aritra, Zaid Al-Ars, and Koen Bertels. "Estimating Algorithmic Information Using Quantum Computing for Genomics Applications." Applied Sciences 11, no. 6 (March 17, 2021): 2696. http://dx.doi.org/10.3390/app11062696.

Abstract:
Inferring algorithmic structure in data is essential for discovering causal generative models. In this research, we present a quantum computing framework using the circuit model, for estimating algorithmic information metrics. The canonical computation model of the Turing machine is restricted in time and space resources, to make the target metrics computable under realistic assumptions. The universal prior distribution for the automata is obtained as a quantum superposition, which is further conditioned to estimate the metrics. Specific cases are explored where the quantum implementation offers polynomial advantage, in contrast to the exhaustive enumeration needed in the corresponding classical case. The unstructured output data and the computational irreducibility of Turing machines make this algorithm impossible to approximate using heuristics. Thus, exploring the space of program-output relations is one of the most promising problems for demonstrating quantum supremacy using Grover search that cannot be dequantized. Experimental use cases for quantum acceleration are developed for self-replicating programs and algorithmic complexity of short strings. With quantum computing hardware rapidly attaining technological maturity, we discuss how this framework will have significant advantage for various genomics applications in meta-biology, phylogenetic tree analysis, protein-protein interaction mapping and synthetic biology. This is the first time experimental algorithmic information theory is implemented using quantum computation. Our implementation on the Qiskit quantum programming platform is copy-left and is publicly available on GitHub.
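Classically, estimating the universal prior means exhaustively enumerating programs, which is exactly the cost the paper's quantum superposition targets. The sketch below uses a stand-in toy machine, not the paper's resource-bounded Turing model, and accumulates the weight 2^(-len(p)) over all short programs producing each output.

```python
from collections import Counter

def run(program: str) -> str:
    """Stand-in machine (an assumption, not the paper's Turing model):
    '1' appends an 'a', '0' duplicates the output built so far."""
    out = 'a'
    for bit in program:
        out = out + 'a' if bit == '1' else out + out
    return out

m = Counter()
for n in range(1, 8):                # all programs up to length 7
    for i in range(2 ** n):
        p = format(i, f'0{n}b')
        m[run(p)] += 2.0 ** (-n)     # universal-prior weight

for s, w in m.most_common(5):        # the most 'algorithmically probable' outputs
    print(f'{s!r}: {w:.4f}')
```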
8

Leivant, Daniel, and Bob Constable. "Editorial." Journal of Functional Programming 11, no. 1 (January 2001): 1. http://dx.doi.org/10.1017/s0956796801009030.

Abstract:
This issue of the Journal of Functional Programming is dedicated to work presented at the Workshop on Implicit Computational Complexity in Programming Languages, affiliated with the 1998 meeting of the International Conference on Functional Programming in Baltimore.

Several machine-independent approaches to computational complexity have been developed in recent years; they establish a correspondence linking computational complexity to conceptual and structural measures of complexity of declarative programs and of formulas, proofs and models of formal theories. Examples include descriptive complexity of finite models, restrictions on induction in arithmetic and related first order theories, complexity of set-existence principles in higher order logic, and specifications in linear logic. We refer to these approaches collectively as Implicit Computational Complexity. This line of research provides a framework for a streamlined incorporation of computational complexity into areas such as formal methods in software development, programming language theory, and database theory.

A fruitful thread in implicit computational complexity is based on exploring the computational complexity consequences of introducing various syntactic control mechanisms in functional programming, including restrictions (akin to static typing) on scoping, data re-use (via linear modalities), and iteration (via ramification of data). These forms of control, separately and in combination, can certify bounds on the time and space resources used by programs. In fact, all results in this area establish that each restriction considered yields precisely a major computational complexity class. The complexity classes thus obtained range from very restricted ones, such as NC and alternating logarithmic time, through the central classes Poly-Time and Poly-Space, to broad classes such as the Elementary and the Primitive Recursive functions.

Considerable effort has been invested in recent years to relax as much as possible the structural restrictions considered, allowing for more flexible programming and proof styles, while still guaranteeing the same resource bounds. Notably, more flexible control forms have been developed for certifying that functional programs execute in Poly-Time.

The 1998 workshop covered both the theoretical foundations of the field and steps toward using its results in various implemented systems, for example in controlling the computational complexity of programs extracted from constructive proofs. The five papers included in this issue nicely represent this dual concern of theory and practice. As they go to print, we should note that the field of Implicit Computational Complexity continues to thrive: successful workshops dedicated to it were affiliated with both the LICS'99 and LICS'00 conferences. Special issues, of Information and Computation dedicated to the former, and of Theoretical Computer Science to the latter, are in preparation.
9

Zhang, Qianyun, Julie M. Vandenbossche, and Amir H. Alavi. "An evolutionary computational method to formulate the response of unbonded concrete overlays to temperature loading." Engineering Computations ahead-of-print, ahead-of-print (June 15, 2021). http://dx.doi.org/10.1108/ec-11-2020-0641.

Abstract:
Purpose: Unbonded concrete overlays (UBOLs) are commonly used in pavement rehabilitation. The current models included in the Mechanistic-Empirical Pavement Design Guide cannot properly predict the structural response of UBOLs. In this paper, a multigene genetic programming (MGGP) approach is proposed to derive new prediction models for the UBOL response to temperature loading.
Design/methodology/approach: MGGP is a promising variant of evolutionary computation capable of developing highly nonlinear explicit models for characterizing complex engineering problems. The proposed UBOL response models are formulated in terms of several influencing parameters, including joint spacing, radius of relative stiffness, temperature gradient, and adjusted load/pavement weight ratio. Furthermore, linear regression models are developed to benchmark the MGGP models.
Findings: The derived design equations accurately characterize the UBOL response under temperature loading and remarkably outperform the regression models. The parametric analysis conducted implies the efficiency of the MGGP-based model in capturing the underlying physical behavior of the UBOL response to temperature loading. Based on the results, the proposed models can be reliably deployed for design purposes.
Originality/value: A challenge in the design of UBOLs is that their interlayer effects have not been directly considered in previous design procedures. To achieve better performance predictions, it is necessary to capture the effect of the interlayer in the design process. This study addresses this issue by developing new models that efficiently account for the effects of the interlayer on stresses and deflections. In addition, it provides insight into the effect of several parameters influencing UBOL deflections. From a computing perspective, a powerful evolutionary computation technique is introduced that overcomes the shortcomings of existing machine learning methods.
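Genetic programming of the kind MGGP builds on searches a space of expression trees for an explicit formula fitting data. The following toy Python sketch is a drastic simplification (a single gene, random restarts in place of crossover and mutation, and an invented target function), meant only to show the flavor of evolving explicit models.

```python
import random

OPS = {'+': lambda a, b: a + b, '-': lambda a, b: a - b,
       '*': lambda a, b: a * b}

def rand_tree(depth=3):
    """Random expression tree over x, constants, and the OPS above."""
    if depth == 0 or random.random() < 0.3:
        return 'x' if random.random() < 0.7 else random.uniform(-2, 2)
    op = random.choice(list(OPS))
    return (op, rand_tree(depth - 1), rand_tree(depth - 1))

def evaluate(t, x):
    if t == 'x':
        return x
    if isinstance(t, float):
        return t
    op, a, b = t
    return OPS[op](evaluate(a, x), evaluate(b, x))

def fitness(t, xs):
    """Squared error against the invented target y = x**2 + x."""
    return sum((evaluate(t, x) - (x * x + x)) ** 2 for x in xs)

random.seed(1)
xs = [i / 4 for i in range(-8, 9)]
pop = [rand_tree() for _ in range(300)]
for gen in range(30):
    pop.sort(key=lambda t: fitness(t, xs))
    pop = pop[:100] + [rand_tree() for _ in range(200)]  # elitism + restarts
print(pop[0], fitness(pop[0], xs))
```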

Dissertations / Theses on the topic "Explicit machine computation and programs (not the theory of computation or programming)"

1

McDermott, Matthew. "Fast Algorithms for Analyzing Partially Ranked Data." Scholarship @ Claremont, 2014. http://scholarship.claremont.edu/hmc_theses/58.

Abstract:
Imagine your local creamery administers a survey asking their patrons to choose their five favorite ice cream flavors. Any data collected by this survey would be an example of partially ranked data, as the set of all possible flavors is only ranked into subsets of the chosen flavors and the non-chosen flavors. If the creamery asks you to help analyze this data, what approaches could you take? One approach is to use the natural symmetries of the underlying data space to decompose any data set into smaller parts that can be more easily understood. In this work, I describe how to use permutation representations of the symmetric group to create and study efficient algorithms that yield such decompositions.
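A first, elementary step of such an analysis, before any representation-theoretic decomposition, is projecting top-k ballots onto first-order marginals and removing the constant (trivial) component. The flavors and ballots in this Python sketch are invented.

```python
import numpy as np

flavors = ['vanilla', 'chocolate', 'mint', 'mango', 'coffee', 'lemon']
ballots = [('vanilla', 'chocolate', 'mint'),
           ('vanilla', 'mango', 'coffee'),
           ('chocolate', 'mint', 'coffee')]   # toy top-3 data

idx = {f: i for i, f in enumerate(flavors)}
marginal = np.zeros(len(flavors))
for ballot in ballots:                 # tabulate how often each flavor is chosen
    for f in ballot:
        marginal[idx[f]] += 1

# Subtracting the mean leaves the 'pure first-order effect': the part of
# the data orthogonal to the trivial (constant) isotypic component.
print(dict(zip(flavors, marginal - marginal.mean())))
```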
2

Nievas, Lio Estefanía. "Aplicando máquinas de soporte vectorial al análisis de pérdidas no técnicas de energía eléctrica." Bachelor's thesis, 2016. http://hdl.handle.net/11086/3946.

Abstract:
Thesis (Licenciatura en Matemática), Universidad Nacional de Córdoba, Facultad de Matemática, Astronomía, Física y Computación, 2016.
Non-technical losses in the distribution of electrical energy generate large costs for the companies that provide the electricity service and are extremely difficult to detect. This project uses a machine learning technique based on support vector machines (SVM) to classify the users of the network, as reliably as possible, into two distinct groups: those who commit fraud and those who do not. Training is performed on an already classified database, taking into account users' consumption over a period of time; in this project, the data come from users in the city of Córdoba. In our work we implement an algorithm that builds the classifier and then analyze its reliability by classifying consumers in the city who have undergone an audit. Once a reliable classifier has been obtained, it can be used to detect possible fraud by users.
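A minimal version of the workflow the thesis describes can be sketched with scikit-learn. The synthetic consumption data and the fraud pattern below are assumptions for illustration, not the Córdoba data set.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
normal = rng.normal(100, 10, size=(300, 12))   # 12 months of kWh readings
fraud = rng.normal(100, 10, size=(60, 12))
fraud[:, 6:] *= 0.4                            # assumed pattern: consumption drops
X = np.vstack([normal, fraud])
y = np.array([0] * 300 + [1] * 60)             # 0 = no fraud, 1 = fraud

Xtr, Xte, ytr, yte = train_test_split(X, y, stratify=y, random_state=0)
clf = SVC(kernel='rbf', class_weight='balanced').fit(Xtr, ytr)
print('held-out accuracy:', clf.score(Xte, yte))
```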
3

Taubitz, Christian. "Investigation of the magnetic and electronic structure of Fe in molecules and chalcogenide systems." Doctoral thesis, 2010. https://repositorium.ub.uni-osnabrueck.de/handle/urn:nbn:de:gbv:700-201006096312.

Abstract:
In this work, the electronic and magnetic structures of the crystals Sr2FeMoO6, Fe0.5Cu0.5Cr2S4 and LuFe2O4 and of the molecules FeStar, Mo72Fe30 and W72Fe30 are investigated by means of X-ray spectroscopic techniques. These advanced materials exhibit very interesting properties such as magnetoresistance and multiferroic behaviour; the molecules can also serve as spin model systems. A long-standing issue in the investigation of these materials is the contradictory results reported for the magnetic and electronic state of the iron (Fe) ions present in these compounds. This work therefore focuses on the Fe state of these materials in order to elucidate the reasons for these discrepancies; the experimental results are compared with multiplet simulations.
4

Masurowski, Frank. "Eine deutschlandweite Potenzialanalyse für die Onshore-Windenergie mittels GIS einschließlich der Bewertung von Siedlungsdistanzenänderungen." Doctoral thesis, 2016. https://repositorium.ub.uni-osnabrueck.de/handle/urn:nbn:de:gbv:700-2016071114613.

Abstract:
Onshore wind energy is, alongside photovoltaics, one of the main pillars of the energy transition (Energiewende) in Germany. As in the past, the expansion of onshore wind energy will continue to be driven forward by policy, with the aim of building an environmentally sound and secure energy supply for future generations. A planned implementation of the energy transition, especially in the area of wind energy, requires knowledge of the available land and of how site-specific factors act at the planning level. In this thesis, Germany was analyzed for the land potential usable for onshore wind energy; generally applicable energy potentials were derived from it; and a sensitivity analysis examined how different distances between wind turbines and settlement structures affect the computed energy potential. Furthermore, mathematical formulas were derived for the observed relationships between distance changes and changes in energy potential, with which the change in energy potential can be predicted as a function of specific changes in settlement distances. The analysis of the study area with respect to the available land potential was implemented using a theoretical model that reflects the real landscape with its different landscape types and infrastructures. On the basis of this model, so-called 'base areas' as well as areas unusable for onshore wind energy (exclusion areas) were identified and intersected using GIS (geographic information system) software. Exclusion areas were identified via multifactorial rules for the placement of wind turbines, applying either regionally or throughout the study area. To ensure consistency, the various rules, drawn from a wide range of sources, were unified, simplified, and fixed in a 'rule catalogue'. The maximum possible energy potential in the study area was computed using a reference turbine placed at spatially distributed locations; the energy potentials (capacity and yield) derive from the combination of the turbines' locations, the technical specifications of the reference turbine, and the regional wind resource. An essential prerequisite for computing the energy potentials was the prior allocation of wind turbines on the potential areas. For this purpose, the integrated system solution 'MAXPLACE' was developed, which places wind turbines in single or contiguous study regions while taking turbine-specific, economic, and safety aspects into account. In contrast to existing allocation algorithms from other wind energy potential studies, MAXPLACE is distinguished by very good efficiency, a broad range of applications, and simple handling. The minimum distance between wind turbines and settlement structures is the largest restricting factor for the computed energy potential.
To determine the influence of changes in settlement distances on the energy potential, a sensitivity analysis was carried out using the landscape model: the prevailing landscape and infrastructure were analyzed and site-describing parameters were derived from them. In addition to quantifying the changes in energy potential concretely, mathematical abstractions of the observed relationships were derived for the entire study area in the form of regression formulas. These formulas make it possible to compute the effect of a change in settlement distance on the energy potential within the study area from only a few parameters, without reproducing the elaborate methodology described in this thesis.
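The allocation step described above, placing turbines on eligible land subject to spacing constraints, can be caricatured in a few lines of Python. The grid size, the exclusion area, the 500 m spacing, and the greedy scan order are all assumptions, not the MAXPLACE algorithm.

```python
import numpy as np

cell = 100.0                      # grid resolution in metres
eligible = np.ones((80, 80), bool)
eligible[30:50, 30:50] = False    # an exclusion area (e.g. settlement buffer)

min_spacing = 500.0               # minimum distance between turbines, metres
placed = []
for i, j in np.argwhere(eligible):           # greedy row-major scan
    p = np.array([i, j]) * cell
    if all(np.linalg.norm(p - q) >= min_spacing for q in placed):
        placed.append(p)
print(len(placed), 'turbines placed')
```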

Books on the topic "Explicit machine computation and programs (not the theory of computation or programming)"

1

Bard, Gregory V. Sage for undergraduates. Providence, Rhode Island: American Mathematical Society, 2015.

2

Flannery, D. L. (Dane Laurence), ed. Algebraic design theory. Providence, R.I.: American Mathematical Society, 2011.

3

Zomorodian, Afra J., ed. Advances in applied and computational topology: American Mathematical Society Short Course on Computational Topology, January 4–5, 2011, New Orleans, Louisiana. Providence, R.I.: American Mathematical Society, 2012.

4

Brauer groups, Tamagawa measures, and rational points on algebraic varieties. Providence, Rhode Island: American Mathematical Society, 2014.

5

Lingas, Andrzej, Rolf Karlsson, and Svante Carlsson, eds. Automata, languages, and programming: 20th International Colloquium, ICALP 93, Lund, Sweden, July 5–9, 1993: proceedings. Berlin: Springer-Verlag, 1993.

6

Bard, Gregory V. Sage for Undergraduates: Compatible with Python 3. American Mathematical Society, 2022.

7

Lingas, A., and R. Karlsson, eds. Automata, Languages and Programming: 20th International Colloquium, ICALP 93, Lund, Sweden, July 5–9, 1993: Proceedings (ICALP / Automata, Languages, and Programming). Springer, 1993.

8

Lingas, Andrzej, Rolf Karlsson, and Svante Carlsson, eds. Automata, Languages and Programming: 20th International Colloquium, ICALP 93, Lund, Sweden, July 5–9, 1993: Proceedings (Lecture Notes in Computer Science). Springer, 1993.

9

Ellis, Graham. An Invitation to Computational Homotopy. Oxford University Press, 2019. http://dx.doi.org/10.1093/oso/9780198832973.001.0001.

Abstract:
This book is an introduction to elementary algebraic topology for students with an interest in computers and computer programming. Its aim is to illustrate how the basics of the subject can be implemented on a computer. The transition from basic theory to practical computation raises a range of non-trivial algorithmic issues and it is hoped that the treatment of these will also appeal to readers already familiar with basic theory who are interested in developing computational aspects. The book covers a subset of standard introductory material on fundamental groups, covering spaces, homology, cohomology and classifying spaces as well as some less standard material on crossed modules, homotopy 2-types and explicit resolutions for an eclectic selection of discrete groups. It attempts to cover these topics in a way that hints at potential applications of topology in areas of computer science and engineering outside the usual territory of pure mathematics, and also in a way that demonstrates how computers can be used to perform explicit calculations within the domain of pure algebraic topology itself. The initial chapters include examples from data mining, biology and digital image analysis, while the later chapters cover a range of computational examples on the cohomology of classifying spaces that are likely beyond the reach of a purely paper-and-pen approach to the subject. The applied examples in the initial chapters use only low-dimensional and mainly abelian topological tools. Our applications of higher dimensional and less abelian computational methods are currently confined to pure mathematical calculations. The approach taken to computational homotopy is very much based on J.H.C. Whitehead's theory of combinatorial homotopy in which he introduced the fundamental notions of CW-space, simple homotopy equivalence and crossed module. The book should serve as a self-contained informal introduction to these topics and their computer implementation. It is written in a style that tries to lead as quickly as possible to a range of potentially useful machine computations.
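In the spirit of the book's explicit machine computation, here is a self-contained Python sketch that computes mod-2 Betti numbers of a toy space (the boundary of a triangle, i.e. a circle) from its boundary matrix; the book itself works with far richer structures and its own software.

```python
import numpy as np

def rank_gf2(M):
    """Rank of an integer matrix over GF(2) by Gaussian elimination."""
    M = M.copy() % 2
    r = 0
    for c in range(M.shape[1]):
        rows = np.nonzero(M[r:, c])[0]
        if rows.size == 0:
            continue
        M[[r, r + rows[0]]] = M[[r + rows[0], r]]   # swap in a pivot row
        for k in range(M.shape[0]):
            if k != r and M[k, c]:
                M[k] = (M[k] + M[r]) % 2            # clear the column
        r += 1
        if r == M.shape[0]:
            break
    return r

# Boundary map d1 of a triangle: edges {01, 02, 12} over vertices {0, 1, 2}.
d1 = np.array([[1, 1, 0],
               [1, 0, 1],
               [0, 1, 1]])        # rows: vertices, cols: edges (mod 2)
r1 = rank_gf2(d1)
b0 = 3 - r1                       # dim C0 - rank d1
b1 = 3 - r1                       # dim C1 - rank d1 - rank d2, with d2 = 0
print('b0 =', b0, 'b1 =', b1)     # 1 and 1: one component, one loop
```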

Conference papers on the topic "Explicit machine computation and programs (not the theory of computation or programming)"

1

Brackett, Robert. "Architecture Revisits Math & Science: Computation in a Visual Thinking Pedagogy." In Schools of Thought Conference. University of Oklahoma, 2020. http://dx.doi.org/10.15763/11244/335059.

Abstract:
This paper makes a case for the greater integration of computational logic and principles in core undergraduate architectural design courses as visual thinking pedagogy. Math and computation present abstract problems that may seem at odds with the real-world design concepts with which students are familiar. Because architecture students are typically strong visual thinkers, abstract mathematical language can be difficult to learn, but these concepts can be used as a pedagogical interface to support visual problem-solving in the design process. Building on the work of Christopher Alexander in Notes on the Synthesis of Form and A Pattern Language, the idea of “pattern languages” can be used to develop a curriculum that relies on math and computation to connect the visual and social systems at work in the design process. Design curricula can integrate computational thinking based on vector math, geometry, calculus, matrices, set theory, visual programming, and scripting to build students’ computational literacy through visual problem-solving. George Stiny’s “shape grammars” offer an intuitive analog method for introducing students to computational thinking through elements and rules in preparation for designing with digital tools. The further we distance ourselves from the fundamental operations of mathematics and computation, the more we risk becoming obsolete in the process. Computer programs can automate modeling, analyzing, programming, reviewing, and even designing buildings. For now, that places the architect in a narrow domain of design and visual aesthetics, which will quickly be subsumed by machine algorithms deployed by software companies. These machine constructions operate at the social/cultural scale, a place suited for the critical position and service of architects. The education of an architect should therefore provide students with critical knowledge and skills that position them to define the parameters of automation and challenge the computer programmers with radical ideas, communicated in a shared language of mathematics that is both visual and abstract.
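Shape grammars proper rewrite geometry, but their rule-application loop can be suggested with a symbolic stand-in. This short Python sketch (invented vocabulary and rules) shows the generate-by-rewriting pattern that the pedagogy described above introduces to students.

```python
# A toy rule set: each token of a spatial description may be rewritten.
rules = {
    'COURT': 'ROOM WALL ROOM',    # a court splits into two rooms and a wall
    'ROOM': 'ROOM DOOR',          # every room acquires a door
}

def apply_rules(design, steps=2):
    """Apply all rules to every token, repeatedly, for a fixed number of steps."""
    for _ in range(steps):
        design = ' '.join(
            token for word in design.split()
            for token in rules.get(word, word).split())
    return design

print(apply_rules('COURT'))
# COURT -> ROOM WALL ROOM -> ROOM DOOR WALL ROOM DOOR
```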