Academic literature on the topic 'Neural networks (Computer science) Statistical mechanics'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Neural networks (Computer science) Statistical mechanics.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Neural networks (Computer science) Statistical mechanics"

1

Bahri, Yasaman, Jonathan Kadmon, Jeffrey Pennington, Sam S. Schoenholz, Jascha Sohl-Dickstein, and Surya Ganguli. "Statistical Mechanics of Deep Learning." Annual Review of Condensed Matter Physics 11, no. 1 (March 10, 2020): 501–28. http://dx.doi.org/10.1146/annurev-conmatphys-031119-050745.

Full text
Abstract:
The recent striking success of deep neural networks in machine learning raises profound questions about the theoretical principles underlying their success. For example, what can such deep networks compute? How can we train them? How does information propagate through them? Why can they generalize? And how can we teach them to imagine? We review recent work in which methods of physical analysis rooted in statistical mechanics have begun to provide conceptual insights into these questions. These insights yield connections between deep learning and diverse physical and mathematical topics, including random landscapes, spin glasses, jamming, dynamical phase transitions, chaos, Riemannian geometry, random matrix theory, free probability, and nonequilibrium statistical mechanics. Indeed, the fields of statistical mechanics and machine learning have long enjoyed a rich history of strongly coupled interactions, and recent advances at the intersection of statistical mechanics and deep learning suggest these interactions will only deepen going forward.
APA, Harvard, Vancouver, ISO, and other styles
2

Zhang, Lifu, and Tarek S. Abdelrahman. "Pipelined Training with Stale Weights in Deep Convolutional Neural Networks." Applied Computational Intelligence and Soft Computing 2021 (September 21, 2021): 1–16. http://dx.doi.org/10.1155/2021/3839543.

Full text
Abstract:
The growth in size and complexity of convolutional neural networks (CNNs) is forcing the partitioning of a network across multiple accelerators during training and pipelining of backpropagation computations over these accelerators. Pipelining results in the use of stale weights. Existing approaches to pipelined training avoid or limit the use of stale weights with techniques that either underutilize accelerators or increase training memory footprint. This paper contributes a pipelined backpropagation scheme that uses stale weights to maximize accelerator utilization and keep memory overhead modest. It explores the impact of stale weights on the statistical efficiency and performance using 4 CNNs (LeNet-5, AlexNet, VGG, and ResNet) and shows that when pipelining is introduced in early layers, training with stale weights converges and results in models with comparable inference accuracies to those resulting from nonpipelined training (a drop in accuracy of 0.4%, 4%, 0.83%, and 1.45% for the 4 networks, respectively). However, when pipelining is deeper in the network, inference accuracies drop significantly (up to 12% for VGG and 8.5% for ResNet-20). The paper also contributes a hybrid training scheme that combines pipelined with nonpipelined training to address this drop. The potential for performance improvement of the proposed scheme is demonstrated with a proof-of-concept pipelined backpropagation implementation in PyTorch on 2 GPUs using ResNet-56/110/224/362, achieving speedups of up to 1.8X over a 1-GPU baseline.
APA, Harvard, Vancouver, ISO, and other styles
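The stale-weights effect described in the abstract above can be illustrated with a toy delayed-gradient experiment. The sketch below is illustrative only (plain NumPy, a linear least-squares model standing in for a CNN, and all parameters invented): each gradient is computed with weights from several updates earlier, as happens in a pipelined backpropagation schedule.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
w_true = rng.normal(size=5)
y = X @ w_true

def train(staleness, steps=400, lr=0.05):
    """Gradient descent where each gradient uses weights `staleness` steps old."""
    w = np.zeros(5)
    history = [w.copy()]
    for t in range(steps):
        w_stale = history[max(0, t - staleness)]   # pipeline delay
        grad = X.T @ (X @ w_stale - y) / len(y)
        w = w - lr * grad
        history.append(w.copy())
    return np.mean((X @ w - y) ** 2)

loss_fresh = train(0)   # no pipelining: gradients use current weights
loss_stale = train(3)   # gradients computed from weights three updates old
```

With a small learning rate both runs converge; increasing the staleness or the learning rate tends to destabilize the stale run first, mirroring the accuracy loss the paper reports when pipelining reaches deeper into the network.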
3

Fu, Jinlong, Shaoqing Cui, Song Cen, and Chenfeng Li. "Statistical characterization and reconstruction of heterogeneous microstructures using deep neural network." Computer Methods in Applied Mechanics and Engineering 373 (January 2021): 113516. http://dx.doi.org/10.1016/j.cma.2020.113516.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Machrowska, Anna, Jakub Szabelski, Robert Karpiński, Przemysław Krakowski, Józef Jonak, and Kamil Jonak. "Use of Deep Learning Networks and Statistical Modeling to Predict Changes in Mechanical Parameters of Contaminated Bone Cements." Materials 13, no. 23 (November 28, 2020): 5419. http://dx.doi.org/10.3390/ma13235419.

Full text
Abstract:
The purpose of the study was to test the usefulness of deep learning artificial neural networks and statistical modeling in predicting the strength of bone cements with defects. The defects are related to the introduction of admixtures, such as blood or saline, as contaminants into the cement at the preparation stage. Given the wide range of applications of deep learning, among others in speech recognition, bioinformatics, and drug design, the study examined the extent to which information related to the prediction of the compressive strength of bone cements can be obtained. Development and improvement of deep learning network (DLN) algorithms and statistical modeling in the analysis of changes in the mechanical parameters of the tested materials will make it possible to determine an acceptable margin of error during surgery or cement preparation in relation to the expected strength of the material used to fill bone cavities. The use of the abovementioned computer methods may, therefore, play a significant role in the initial qualitative assessment of the effects of procedures and, thus, in mitigating errors that result in failure to maintain the required mechanical parameters and in patient dissatisfaction.
APA, Harvard, Vancouver, ISO, and other styles
5

Takahashi, Shuntaro, and Kumiko Tanaka-Ishii. "Evaluating Computational Language Models with Scaling Properties of Natural Language." Computational Linguistics 45, no. 3 (September 2019): 481–513. http://dx.doi.org/10.1162/coli_a_00355.

Full text
Abstract:
In this article, we evaluate computational models of natural language with respect to the universal statistical behaviors of natural language. Statistical mechanical analyses have revealed that natural language text is characterized by scaling properties, which quantify the global structure in the vocabulary population and the long memory of a text. We study whether five scaling properties (given by Zipf’s law, Heaps’ law, Ebeling’s method, Taylor’s law, and long-range correlation analysis) can serve for evaluation of computational models. Specifically, we test n-gram language models, a probabilistic context-free grammar, language models based on Simon/Pitman-Yor processes, neural language models, and generative adversarial networks for text generation. Our analysis reveals that language models based on recurrent neural networks with a gating mechanism (i.e., long short-term memory; a gated recurrent unit; and quasi-recurrent neural networks) are the only computational models that can reproduce the long memory behavior of natural language. Furthermore, through comparison with recently proposed model-based evaluation methods, we find that the exponent of Taylor’s law is a good indicator of model quality.
APA, Harvard, Vancouver, ISO, and other styles
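One of the scaling properties listed above, Zipf's law, is straightforward to check numerically. The sketch below is a toy version (a synthetic Zipfian "corpus" drawn from a power law rather than real text or a language model's output) that fits the rank-frequency exponent on log-log scales:

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(1)
# synthetic corpus: word identities drawn from a power law P(k) ~ k^-2
words = rng.zipf(a=2.0, size=50_000)
freqs = np.array(sorted(Counter(words).values(), reverse=True), dtype=float)

# fit log f(r) = slope * log r + c over the 50 most frequent "words";
# the recovered slope should sit near the generating exponent of -2
r = np.arange(1, 51)
slope, _ = np.polyfit(np.log(r), np.log(freqs[:50]), 1)
```

Heaps' law and Taylor's law exponents can be estimated in the same fashion, by regressing vocabulary growth against text length and variance against mean on log-log scales.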
6

Krč, Rostislav, Jan Podroužek, Martina Kratochvílová, Ivan Vukušič, and Otto Plášek. "Neural Network-Based Train Identification in Railway Switches and Crossings Using Accelerometer Data." Journal of Advanced Transportation 2020 (November 24, 2020): 1–10. http://dx.doi.org/10.1155/2020/8841810.

Full text
Abstract:
This paper aims to analyse possibilities of train type identification in railway switches and crossings (S&C) based on accelerometer data by using contemporary machine learning methods such as neural networks. This is a novel approach, since trains have previously been identified only on straight track. Accelerometer sensors placed around the S&C structure were the source of input data for the subsequent models. Data from four S&C at different locations were considered and various neural network architectures evaluated. The research indicated that it is feasible to identify trains in S&C from accelerometer data using neural networks. Models trained at one location are generally transferable to another location despite differences in geometrical parameters, substructure, and direction of passing trains. Other challenges include the small dataset and the speed variation of the trains, which must be considered for accurate identification. Results are obtained using statistical bootstrapping and are presented in the form of confusion matrices.
APA, Harvard, Vancouver, ISO, and other styles
7

Kim, D. H., D. J. Kim, and B. M. Kim. "The Application of Neural Networks and Statistical Methods to Process Design in Metal Forming Processes." International Journal of Advanced Manufacturing Technology 15, no. 12 (December 6, 1999): 886–94. http://dx.doi.org/10.1007/s001700050146.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Pled, Florent, Christophe Desceliers, and Tianyu Zhang. "A robust solution of a statistical inverse problem in multiscale computational mechanics using an artificial neural network." Computer Methods in Applied Mechanics and Engineering 373 (January 2021): 113540. http://dx.doi.org/10.1016/j.cma.2020.113540.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Gao, Zhenyi, Bin Zhou, Chunge Ju, Qi Wei, Xinxi Zhang, and Rong Zhang. "Online Nonlinear Error Compensation Circuit Based on Neural Networks." Machines 9, no. 8 (July 31, 2021): 151. http://dx.doi.org/10.3390/machines9080151.

Full text
Abstract:
Nonlinear errors of sensor output signals are common in the field of inertial measurement and can be compensated with statistical models or machine learning models. Machine learning solutions with large computational complexity generally run offline or on additional hardware platforms, which makes it difficult to meet the high integration requirements of microelectromechanical system (MEMS) inertial sensors. This paper explored the feasibility of an online compensation scheme based on neural networks. In the designed solution, a simplified small-scale network is used for modeling, and the peak-to-peak value and standard deviation of the error after compensation are reduced to 17.00% and 16.95%, respectively. Additionally, a compensation circuit is designed based on the simplified modeling scheme. The results show that the circuit compensation effect is consistent with the results of the algorithm experiment. Under SMIC 180 nm complementary metal-oxide semiconductor (CMOS) technology, the circuit has a maximum operating frequency of 96 MHz and an area of 0.19 mm². When the sampling signal frequency is 800 kHz, the power consumption is only 1.12 mW. This circuit can be used as a component of a measurement and control system on chip (SoC), meeting real-time application scenarios with low power consumption requirements.
APA, Harvard, Vancouver, ISO, and other styles
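The compensation idea above can be sketched with a deliberately small network, in the spirit of the paper's simplified small-scale model. Everything below is invented for illustration (the distortion function, the 1-16-1 tanh network, and the training schedule are not from the paper):

```python
import numpy as np

rng = np.random.default_rng(2)
x = np.linspace(-1, 1, 256)[:, None]          # true stimulus
s = x + 0.3 * x**3 + 0.1 * np.sin(3 * x)      # hypothetical nonlinear sensor reading

# tiny 1-16-1 tanh network trained by full-batch gradient descent
# to map the distorted reading back to the true stimulus
W1 = rng.normal(scale=0.5, size=(1, 16)); b1 = np.zeros(16)
W2 = rng.normal(scale=0.5, size=(16, 1)); b2 = np.zeros(1)
lr = 0.05
for _ in range(5000):
    h = np.tanh(s @ W1 + b1)
    xhat = h @ W2 + b2
    err = xhat - x
    gW2 = h.T @ err / len(x); gb2 = err.mean(0)
    dh = (err @ W2.T) * (1 - h**2)             # backprop through tanh
    gW1 = s.T @ dh / len(x); gb1 = dh.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

xhat = np.tanh(s @ W1 + b1) @ W2 + b2
before = np.ptp(s - x)       # peak-to-peak error before compensation
after = np.ptp(xhat - x)     # peak-to-peak error after compensation
```

The peak-to-peak error metric here matches the figure of merit quoted in the abstract, though the actual reductions reported there come from the authors' circuit, not from this toy model.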
10

Quiza, Ramón, Luis Figueira, and J. Paulo Davim. "Comparing statistical models and artificial neural networks on predicting the tool wear in hard machining D2 AISI steel." International Journal of Advanced Manufacturing Technology 37, no. 7-8 (March 28, 2007): 641–48. http://dx.doi.org/10.1007/s00170-007-0999-7.

Full text
APA, Harvard, Vancouver, ISO, and other styles
More sources

Dissertations / Theses on the topic "Neural networks (Computer science) Statistical mechanics"

1

Whyte, William John. "Statistical mechanics of neural networks." Thesis, University of Oxford, 1995. http://ora.ox.ac.uk/objects/uuid:e17f9b27-58ac-41ad-8722-cfab75139d9a.

Full text
Abstract:
We investigate five different problems in the field of the statistical mechanics of neural networks. The first three problems involve attractor neural networks that optimise particular cost functions for storage of static memories as attractors of the neural dynamics. We study the effects of replica symmetry breaking (RSB) and attempt to find algorithms that will produce the optimal network if error-free storage is impossible. For the Gardner-Derrida network we show that full RSB is necessary for an exact solution everywhere above saturation. We also show that, no matter what the cost function that is optimised, if the distribution of stabilities has a gap then the Parisi replica ansatz that has been made is unstable. For the noise-optimal network we find a continuous transition to replica symmetry breaking at the AT line, in line with previous studies of RSB for different networks. The change to RSB1 improves the agreement between "experimental" and theoretical calculations of the local stability distribution ρ(λ) significantly. The effect on observables is smaller. We show that if the network is presented with a training set which has been generated from a set of prototypes by some noisy rule, but neither the noise level nor the prototypes are known, then the perceptron algorithm is the best initial choice to produce a network that will generalise well. If additional information is available more sophisticated algorithms will be faster and give a smaller generalisation error. The remaining problems deal with attractor neural networks with separable interaction matrices which can be used (under parallel dynamics) to store sequences of patterns without the need for time delays. We look at the effects of correlations on a single-sequence network, and numerically investigate the storage capacity of a network storing an extensive number of patterns in such sequences.
When correlations are implemented along with a term in the interaction matrix designed to suppress some of the effects of those correlations, the competition between the two produces a rich range of behaviour. Contrary to expectations, increasing the correlations and the operating temperature proves capable of improving the sequence-processing behaviour of the network. Finally, we demonstrate that a network storing a large number of sequences of patterns using a Hebb-like rule can store approximately twice as many patterns as the network trained with the Hebb rule to store individual patterns.
APA, Harvard, Vancouver, ISO, and other styles
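The sequence-storage networks discussed in the final part of the abstract can be demonstrated with the classic asymmetric Hebb rule under parallel dynamics. This is the standard textbook construction, not code from the thesis; the network size and sequence length are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)
N, L = 200, 5                           # neurons, sequence length
xi = rng.choice([-1, 1], size=(L, N))   # random binary patterns

# asymmetric Hebb rule: each pattern "points to" its successor (cyclically),
# so no time delays are needed to drive the transitions
W = sum(np.outer(xi[(m + 1) % L], xi[m]) for m in range(L)) / N

# parallel (synchronous) dynamics should step through the whole sequence
s = xi[0].copy()
for step in range(1, L + 1):
    s = np.sign(W @ s)
    overlap = s @ xi[step % L] / N      # agreement with the expected pattern
```

With L/N well below capacity the overlap with each successive pattern stays close to 1; introducing pattern correlations, as studied in the thesis, changes this picture.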
2

Morabito, David L. "Statistical mechanics of neural networks and combinatorial optimization problems." Online version of thesis, 1991. http://hdl.handle.net/1850/11089.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Chavali, Krishna Kumar. "Integration of statistical and neural network method for data analysis." Morgantown, W. Va. : [West Virginia University Libraries], 2006. https://eidr.wvu.edu/etd/documentdata.eTD?documentid=4749.

Full text
Abstract:
Thesis (M.S.)--West Virginia University, 2006.
Title from document title page. Document formatted into pages; contains viii, 68 p. : ill. (some col.). Includes abstract. Includes bibliographical references (p. 50-51).
APA, Harvard, Vancouver, ISO, and other styles
4

Ramachandran, Sowmya. "Theory refinement of Bayesian networks with hidden variables /." Digital version accessible at:, 1998. http://wwwlib.umi.com/cr/utexas/main.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Mitchell, David. "Classification by Neural Network and Statistical Models in Tandem: Does Integration Enhance Performance?" Thesis, University of North Texas, 1998. https://digital.library.unt.edu/ark:/67531/metadc278874/.

Full text
Abstract:
The major purposes of the current research are twofold. The first purpose is to present a composite approach to the general classification problem by using outputs from various parametric statistical procedures and neural networks. The second purpose is to compare several parametric and neural network models on a transportation planning related classification problem and five simulated classification problems.
APA, Harvard, Vancouver, ISO, and other styles
6

Nortje, Willem Daniel. "Comparison of Bayesian learning and conjugate gradient descent training of neural networks." Pretoria : [s.n.], 2001. http://upetd.up.ac.za/thesis/available/etd-11092004-091241.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Abu-Rahmeh, Osama. "A statistical mechanics approach for an effective, scalable, and reliable distributed load balancing scheme for grid networks." Thesis, Liverpool John Moores University, 2009. http://researchonline.ljmu.ac.uk/5903/.

Full text
Abstract:
The advances in computer and networking technologies over the past decades produced a new type of collaborative computing environment called Grid Networks. A Grid network is a parallel and distributed computing network system that possesses the ability to achieve a higher computing throughput by taking advantage of many computing resources available in the network. To achieve a scalable and reliable Grid network system, the workload needs to be efficiently distributed among the resources accessible on the network. A novel distributed algorithm based on statistical mechanics that provides an efficient load-balancing paradigm without any centralised monitoring is proposed here. The resulting load-balancer would be integrated into the Grid network to increase its efficiency and resource utilisation. This distributed and scalable load-balancing framework is conducted using the biased random sampling (BRS) algorithm. In this thesis, a novel statistical mechanics approach that gives a distributed load-balancing scheme by generating almost regular networks is proposed. The generated network system is self-organised and depends only on local information for load distribution and resource discovery. The in-degree of each node refers to its free resources, and the job assignment and resource updating processes required for load balancing are accomplished by using random sampling (RS). An analytical solution for the stationary degree distributions has been derived that confirms that the edge distribution of the proposed network system is compatible with ER random networks. Therefore, the generated network system can provide an effective load-balancing paradigm for the distributed resources accessible on large-scale network systems. Furthermore, it has been demonstrated that introducing a geographic awareness factor in the random walk sampling can reduce the effects of communication latency in the Grid network environment.
Theoretical and simulation results prove that the proposed BRS load-balancing scheme provides an effective, scalable, and reliable distributed load-balancing scheme for the distributed resources available on Grid networks.
APA, Harvard, Vancouver, ISO, and other styles
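The biased random sampling idea above can be sketched as a toy simulation in which a node receives a job with probability proportional to its free resources. The thesis encodes free resources as in-degree in a dynamically rewired random graph and samples via random walks; the direct weighted sampling below is a deliberate simplification, with all numbers invented:

```python
import random

random.seed(4)
NODES, CAPACITY, JOBS = 20, 10, 150
load = {n: 0 for n in range(NODES)}

def assign_job():
    # biased random sampling: choose a node with probability proportional
    # to its free slots, so heavily loaded nodes attract fewer new jobs
    nodes = list(load)
    weights = [CAPACITY - load[n] for n in nodes]
    chosen = random.choices(nodes, weights=weights)[0]
    load[chosen] += 1
    return chosen

for _ in range(JOBS):
    assign_job()

imbalance = max(load.values()) - min(load.values())
```

Because a full node has weight zero, capacity is never exceeded, and the self-correcting bias keeps the load spread across nodes tight without any central monitor.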
8

Riggelsen, Carsten. "Approximation methods for efficient learning of Bayesian networks /." Amsterdam ; Washington, DC : IOS Press, 2008. http://www.loc.gov/catdir/toc/fy0804/2007942192.html.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Kapur, Loveena. "Investigation of artificial neural networks, alternating conditional expectation, and Bayesian methods for reservoir characterization /." Digital version accessible at:, 1998. http://wwwlib.umi.com/cr/utexas/main.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Suermondt, Henri Jacques. "Explanation in Bayesian belief networks." Full text available online (restricted access), 1992. http://images.lib.monash.edu.au/ts/theses/suermondt.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
More sources

Books on the topic "Neural networks (Computer science) Statistical mechanics"

1

De Groff, Dolores F., ed. Neural network modeling: Statistical mechanics and cybernetic perspectives. Boca Raton, Fla.: CRC Press, 1994.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Davino, Cristina, and Salvatore Ingrassia. Reti neuronali e metodi statistici. Milano: F. Angeli, 2002.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
3

An introduction to the theory of spin glasses and neural networks. Singapore: World Scientific, 1994.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
4

Jong-Hoon, Oh, Kwon Chulan, Cho Sungzoon, Sŏul Taehakkyo. Chayŏn Kwahak Taehak. Pusŏl Iron Mullihak Yŏnʼguso., and Pʻohang Kongkwa Taehak (Korea). Kichʻo Kwahak Yŏnʼguso., eds. Neural networks: The statistical mechanics perspective : proceedings of the CTP-PBSRI Joint Workshop on Theoretical Physics, POSTECH, Pohang, Korea, 2-4 February 95. Singapore: World Scientific, 1995.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
5

Neural networks for statistical modeling. New York: Van Nostrand Reinhold, 1993.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
6

Theoretical mechanics of biological neural networks. Boston: Academic Press, 1993.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
7

Neal, Radford M. Bayesian learning for neural networks. New York: Springer, 1996.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
8

Neal, Radford M. Bayesian learning for neural networks. Toronto: University of Toronto, Dept. of Computer Science, 1995.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
9

M, Noble John, ed. Bayesian networks: An introduction. Hoboken, NJ: John Wiley & Sons, 2009.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
10

Koski, Timo. Bayesian networks: An introduction. Chichester, West Sussex, UK: Wiley, 2009.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
More sources

Book chapters on the topic "Neural networks (Computer science) Statistical mechanics"

1

Biehl, Michael. "The Statistical Physics of Learning Revisited: Typical Learning Curves in Model Scenarios." In Lecture Notes in Computer Science, 128–42. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-82427-3_10.

Full text
Abstract:
The exchange of ideas between computer science and statistical physics has advanced the understanding of machine learning and inference significantly. This interdisciplinary approach is currently regaining momentum due to the revived interest in neural networks and deep learning. Methods borrowed from statistical mechanics complement other approaches to the theory of computational and statistical learning. In this brief review, we outline and illustrate some of the basic concepts. We exemplify the role of the statistical physics approach in terms of a particularly important contribution: the computation of typical learning curves in student-teacher scenarios of supervised learning. Two by-now-classical examples from the literature illustrate the approach: the learning of a linearly separable rule by a perceptron with continuous and with discrete weights, respectively. We address these prototypical problems in terms of the simplifying limit of stochastic training at high formal temperature and obtain the corresponding learning curves.
APA, Harvard, Vancouver, ISO, and other styles
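The student-teacher setting described above is easy to simulate. The sketch below measures the generalization error of a Hebbian student perceptron against a random teacher; Hebbian learning is used instead of high-temperature Gibbs learning purely for simplicity, and all sizes are illustrative:

```python
import numpy as np

rng = np.random.default_rng(5)
N = 50                                   # input dimension
teacher = rng.normal(size=N)             # random teacher perceptron

def gen_error(P, trials=20):
    """Average generalization error of a Hebbian student trained on P examples."""
    errs = []
    for _ in range(trials):
        X = rng.normal(size=(P, N))
        y = np.sign(X @ teacher)                     # teacher's labels
        w = (y[:, None] * X).sum(axis=0)             # Hebbian learning rule
        cos = w @ teacher / (np.linalg.norm(w) * np.linalg.norm(teacher))
        errs.append(np.arccos(cos) / np.pi)          # epsilon = angle / pi
    return float(np.mean(errs))

eps_small, eps_large = gen_error(10), gen_error(200)   # alpha = P/N of 0.2 and 4
```

The analytic learning curves in the chapter are derived in the high-temperature limit; this simulation only shows the qualitative decay of the generalization error with the number of examples per weight.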
2

Huang, Chengqiang, Zheng Hu, Xiaowei Huang, and Ke Pei. "Statistical Certification of Acceptable Robustness for Neural Networks." In Lecture Notes in Computer Science, 79–90. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-86362-3_7.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Montúfar, Guido, Johannes Rauh, and Nihat Ay. "Maximal Information Divergence from Statistical Models Defined by Neural Networks." In Lecture Notes in Computer Science, 759–66. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-40020-9_85.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Guan, Y., T. G. Clarkson, and J. G. Taylor. "Learning transformed prototypes (LTP) — A statistical pattern classification technique of neural networks." In Lecture Notes in Computer Science, 441–47. Berlin, Heidelberg: Springer Berlin Heidelberg, 1995. http://dx.doi.org/10.1007/3-540-59497-3_207.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Hervás Martínez, C., E. J. Romero Soto, N. García Pedrajas, and R. Medina Carnicer. "Comparison between artificial neural networks and classical statistical methods in pattern recognition." In Lecture Notes in Computer Science, 351–60. Berlin, Heidelberg: Springer Berlin Heidelberg, 1993. http://dx.doi.org/10.1007/3-540-56735-6_73.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Pessa, Eliano. "Neural Network Models." In Relational Methodologies and Epistemology in Economics and Management Sciences, 100–127. IGI Global, 2016. http://dx.doi.org/10.4018/978-1-4666-9770-6.ch003.

Full text
Abstract:
The Artificial Neural Network (ANN) models gained wide popularity owing to a number of claimed advantages such as biological plausibility, tolerance with respect to errors or noise in the input data, and a learning ability allowing adaptability to environmental constraints. Notwithstanding the fact that most of these advantages are not typical only of ANNs, engineers, psychologists and neuroscientists made extended use of ANN models in a large number of scientific investigations. In most cases, however, these models have been introduced in order to provide optimization tools more useful than the ones commonly used by traditional Optimization Theory. Unfortunately, just the successful performance of ANN models in optimization tasks produced a widespread neglect of the true – and important – objectives pursued by the first promoters of these models. These objectives can be briefly summarized by the manifesto of connectionist psychology, stating that mental processes are nothing but macroscopic phenomena, emergent from the cooperative interaction of a large number of microscopic knowledge units. This statement – wholly in line with the goal of statistical mechanics – can be readily extended to other processes, beyond the mental ones, including social, economic, and, in general, organizational ones. Therefore this chapter has been designed to answer a number of related questions, such as: are ANN models able to account for the occurrence of this sort of emergence? How can the occurrence of this emergence be empirically detected? How can the emergence produced by ANN models be controlled? In what sense could ANN emergence offer a new paradigm for the explanation of macroscopic phenomena? Answering these questions leads the chapter to focus on less popular ANNs, such as recurrent ones, while neglecting more popular models, such as perceptrons, and on less-used units, such as spiking neurons rather than McCulloch-Pitts neurons.
Moreover, the chapter mentions a number of strategies for emergence detection, useful for researchers performing computer simulations of ANN behaviours. Among these strategies it is possible to quote the reduction of ANN models to continuous models, such as the neural field models or the neural mass models, the recourse to the methods of Network Theory, and the employment of techniques borrowed from Statistical Physics, such as the one based on the Renormalization Group. Of course, owing to space (and mathematical expertise) requirements, most mathematical details of the proposed arguments are neglected, and the reader is referred to the quoted literature for more information.
APA, Harvard, Vancouver, ISO, and other styles
7

Reidys, Christian M. "Combinatorics of Genotype-Phenotype Maps: An RNA Case Study." In Computational Complexity and Statistical Physics. Oxford University Press, 2005. http://dx.doi.org/10.1093/oso/9780195177374.003.0021.

Full text
Abstract:
The fundamental mechanisms of biological evolution have fascinated generations of researchers and remain popular to this day. The formulation of such a theory goes back to Darwin (1859), who in the The Origin of Species presented two fundamental principles: genetic variability caused by mutation, and natural selection. The first principle leads to diversity and the second one to the concept of survival of the fittest, where fitness is an inherited characteristic property of an individual and can basically be identified with its reproduction rate. Wright [530, 531] first recognized the importance of genetic drift in evolution in improving the evolutionary search capacity of the whole population. He viewed genetic drift merely as a process that could improve evolutionary search. About a decade later, Kimura proposed [317] that the majority of changes that are observed in evolution at the molecular level are the results of random drift of genotypes. The neutral theory of Kimura does not deny that selection plays a role, but claims that no appreciable fraction of observable molecular change can be caused by selective forces: mutations are either a disadvantage or, at best, neutral in present day organisms. Only negative selection plays a major role in the neutral evolution, in that deleterious mutants die out due to their lower fitness. Over the last few decades, there has been a shift of emphasis in the study of evolution. Instead of focusing on the differences in the selective value of mutants and on population genetics, interest has moved to evolution through natural selection as an abstract optimization problem. Given the tremendous opportunities that computer science and the physical sciences now have for contributing to the study of biological phenomena, it is fitting to study the evolutionary optimization problem in the present volume. 
In this chapter, we adopt the following framework: assuming that selection acts exclusively upon isolated phenotypes, we introduce the compositum of mappings Genotypes → Phenotypes → Fitness. We will refer to the first map as the genotype-phenotype map and call the preimage of a given phenotype its neutral network. Clearly, the main ingredients here are the phenotypes and genotypes and their respective organization. In the following we will study various combinatorial properties of phenotypes and genotypes for RNA folding maps.
APA, Harvard, Vancouver, ISO, and other styles
8

Hunt III, Harry B., and Madhav V. Marathe. "Towards a Predictive Computational Complexity Theory for Periodically Specified Problems: A Survey." In Computational Complexity and Statistical Physics. Oxford University Press, 2005. http://dx.doi.org/10.1093/oso/9780195177374.003.0022.

Full text
Abstract:
The preceding chapters in this volume have documented the substantial recent progress towards understanding the complexity of randomly specified combinatorial problems. This improved understanding has been obtained by combining concepts and ideas from theoretical computer science and discrete mathematics with those developed in statistical mechanics. Techniques such as the cavity method and the replica method, primarily developed by the statistical mechanics community to understand physical phenomena, have yielded important insights into the intrinsic difficulty of solving combinatorial problems when instances are chosen randomly. These insights have ultimately led to the development of efficient algorithms for some of the problems. A potential weakness of these results is their reliance on random instances. Although the typical probability distributions used on the set of instances make the mathematical results tractable, such instances do not, in general, capture the realistic instances that arise in practice. This is because practical applications of graph theory and combinatorial optimization in CAD systems, mechanical engineering, VLSI design, transportation networks, and software engineering involve processing large but regular objects constructed in a systematic manner from smaller and more manageable components. Consequently, the resulting graphs or logical formulas have a regular structure, and are defined systematically in terms of smaller graphs or formulas. It is not unusual for computer scientists and physicists interested in worst-case complexity to study problem instances with regular structure, such as lattice-like or tree-like instances. Motivated by this, we discuss periodic specifications as a method for specifying regular instances. Extensions of the basic formalism that give rise to locally random but globally structured instances are also discussed. 
These instances provide one method of producing random instances that might capture the structured aspect of practical instances. The specifications also yield methods for constructing hard instances of satisfiability and various graph theoretic problems, important for testing the computational efficiency of algorithms that solve such problems. Periodic specifications are a mechanism for succinctly specifying combinatorial objects with highly regular repetitive substructure. In the past, researchers have also used the term dynamic to refer to such objects specified using periodic specifications (see, for example, Orlin [419], Cohen and Megiddo [103], Kosaraju and Sullivan [347], and Hoppe and Tardos [260]).
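The idea of a periodic specification described in this abstract can be illustrated with a small sketch (this code is an assumed illustration, not from the chapter): a tiny "static" template graph is replicated T times, with extra edges connecting each copy to the next, so the explicit, expanded graph can be far larger than the specification that defines it.

```python
def expand_periodic(static_edges, offset_edges, n_vertices, periods):
    """Expand a 1-dimensional periodic specification into an explicit edge list.

    static_edges : edges (u, v) repeated inside every copy of the template
    offset_edges : edges (u, v) running from copy t to copy t+1
    n_vertices   : vertices per copy, labelled 0 .. n_vertices-1
    periods      : number of copies T
    """
    edges = []
    for t in range(periods):
        base = t * n_vertices
        for u, v in static_edges:
            edges.append((base + u, base + v))
        if t + 1 < periods:
            nxt = (t + 1) * n_vertices
            for u, v in offset_edges:
                edges.append((base + u, nxt + v))
    return edges

# A 2-vertex template with one internal edge and two forward edges,
# expanded over 4 periods, yields an 8-vertex ladder-like graph.
ladder = expand_periodic([(0, 1)], [(0, 0), (1, 1)], 2, 4)
```

Note that the specification here is constant-sized while the expanded instance grows linearly in T; with a binary encoding of T, the expansion is exponentially larger than its description, which is the source of the succinctness (and hardness) the survey discusses.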
APA, Harvard, Vancouver, ISO, and other styles
9

Perry, Theodore L., Travis Tucker, Laurel R. Hudson, William Gandy, Amy L. Neftzger, and Guy B. Hamar. "The Application of Data Mining Techniques in Health Plan Population Management." In Data Warehousing and Mining, 1799–809. IGI Global, 2008. http://dx.doi.org/10.4018/978-1-59904-951-9.ch106.

Full text
Abstract:
Healthcare has become a data-intensive business. Over the last 30 years, we have seen significant advancements in the areas of health information technology and health informatics as well as healthcare modeling and artificial intelligence techniques. Health informatics, which is the science of health information, has made great progress during this period (American Medical Informatics Association). Likewise, data mining, which has been generally defined as the application of technology and statistical/mathematical methods to uncover relationships and patterns between variables in data sets, has experienced noteworthy improvements in computer technology (e.g., hardware and software) in addition to applications and methodologies (e.g., statistical and biostatistical techniques such as neural networks, regression analysis, and classification/segmentation methods) (Kudyba & Hoptroff, 2001). Though health informatics is a relatively young science, the impact of this area on the health system and health information technology industry has already been seen, evidenced by improvements in healthcare delivery models, information systems, and assessment/diagnostic tools.
APA, Harvard, Vancouver, ISO, and other styles
10

Hillbrand, Christian. "Empirical Inference of Numerical Information into Causal Strategy Models by Means of Artificial Intelligence." In Machine Learning, 283–303. IGI Global, 2012. http://dx.doi.org/10.4018/978-1-60960-818-7.ch212.

Full text
Abstract:
The motivation for this chapter is the observation that many companies build their strategy upon poorly validated hypotheses about cause and effect of certain business variables. However, the soundness of these cause-and-effect relations as well as the knowledge of the approximate shape of the functional dependencies underlying these associations turns out to be the biggest issue for the quality of the results of decision-supporting procedures. Since it is sufficiently clear that mere correlation of time series is not suitable to prove the causality of two business concepts, there seems to be a rather dogmatic perception of the inadmissibility of empirical validation mechanisms for causal models within the field of strategic management as well as management science. However, one can find proven causality techniques in other sciences like econometrics, mechanics, neuroscience, or philosophy. Therefore this chapter presents an approach which applies a combination of well-established statistical causal proofing methods to strategy models in order to validate them. These validated causal strategy models are then used as the basis for approximating the functional form of causal dependencies by means of Artificial Neural Networks. This in turn can be employed to build an approximate simulation or forecasting model of the strategic system.
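The approximation step described in this abstract can be sketched with a small NumPy example (a hypothetical illustration, not code from the chapter: the S-shaped "true" effect and the one-hidden-layer architecture are assumptions): a network is trained by plain gradient descent to recover the unknown functional shape of a validated cause-and-effect relation.

```python
import numpy as np

# Hypothetical causal dependency: an S-shaped (saturating) response of an
# effect variable to a cause variable, as often assumed in strategy models.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 200).reshape(-1, 1)
y = np.tanh(4.0 * x - 2.0) * 0.5 + 0.5          # assumed "true" functional form

# One-hidden-layer network: tanh hidden units, linear output.
W1 = rng.normal(0, 1, (1, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 1, (16, 1)); b2 = np.zeros(1)
lr = 0.1

for _ in range(5000):
    h = np.tanh(x @ W1 + b1)                     # hidden activations
    pred = h @ W2 + b2                           # network output
    err = pred - y
    # backpropagate the mean-squared-error gradient
    gW2 = h.T @ err / len(x); gb2 = err.mean(0)
    dh = (err @ W2.T) * (1.0 - h ** 2)
    gW1 = x.T @ dh / len(x); gb1 = dh.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

mse = float(((np.tanh(x @ W1 + b1) @ W2 + b2 - y) ** 2).mean())
```

Once fitted, the network serves as a drop-in approximation of the causal dependency inside a larger simulation or forecasting model, which is the role the chapter assigns to it.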
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Neural networks (Computer science) Statistical mechanics"

1

Melnyk, Roman, and Arsenii Zawyalow. "Image retrieval by statistical features and artificial neural networks." In 2016 13th International Conference on Modern Problems of Radio Engineering, Telecommunications and Computer Science (TCSET). IEEE, 2016. http://dx.doi.org/10.1109/tcset.2016.7452159.

Full text
APA, Harvard, Vancouver, ISO, and other styles
