Academic literature on the topic 'Natural gradient descent'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Natural gradient descent.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Natural gradient descent"

1

Stokes, James, Josh Izaac, Nathan Killoran, and Giuseppe Carleo. "Quantum Natural Gradient." Quantum 4 (May 25, 2020): 269. http://dx.doi.org/10.22331/q-2020-05-25-269.

Abstract:
A quantum generalization of Natural Gradient Descent is presented as part of a general-purpose optimization framework for variational quantum circuits. The optimization dynamics is interpreted as moving in the steepest descent direction with respect to the Quantum Information Geometry, corresponding to the real part of the Quantum Geometric Tensor (QGT), also known as the Fubini-Study metric tensor. An efficient algorithm is presented for computing a block-diagonal approximation to the Fubini-Study metric tensor for parametrized quantum circuits, which may be of independent interest.
2

Rattray, Magnus, David Saad, and Shun-ichi Amari. "Natural Gradient Descent for On-Line Learning." Physical Review Letters 81, no. 24 (December 14, 1998): 5461–64. http://dx.doi.org/10.1103/physrevlett.81.5461.

3

Heskes, Tom. "On “Natural” Learning and Pruning in Multilayered Perceptrons." Neural Computation 12, no. 4 (April 1, 2000): 881–901. http://dx.doi.org/10.1162/089976600300015637.

Abstract:
Several studies have shown that natural gradient descent for on-line learning is much more efficient than standard gradient descent. In this article, we derive natural gradients in a slightly different manner and discuss implications for batch-mode learning and pruning, linking them to existing algorithms such as Levenberg-Marquardt optimization and optimal brain surgeon. The Fisher matrix plays an important role in all these algorithms. The second half of the article discusses a layered approximation of the Fisher matrix specific to multilayered perceptrons. Using this approximation rather than the exact Fisher matrix, we arrive at much faster “natural” learning algorithms and more robust pruning procedures.
4

Rattray, Magnus, and David Saad. "Analysis of natural gradient descent for multilayer neural networks." Physical Review E 59, no. 4 (April 1, 1999): 4523–32. http://dx.doi.org/10.1103/physreve.59.4523.

5

Inoue, Masato, Hyeyoung Park, and Masato Okada. "On-Line Learning Theory of Soft Committee Machines with Correlated Hidden Units –Steepest Gradient Descent and Natural Gradient Descent–." Journal of the Physical Society of Japan 72, no. 4 (April 15, 2003): 805–10. http://dx.doi.org/10.1143/jpsj.72.805.

6

Zhao, Pu, Pin-yu Chen, Siyue Wang, and Xue Lin. "Towards Query-Efficient Black-Box Adversary with Zeroth-Order Natural Gradient Descent." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (April 3, 2020): 6909–16. http://dx.doi.org/10.1609/aaai.v34i04.6173.

Abstract:
Despite the great achievements of modern deep neural networks (DNNs), the vulnerability/robustness of state-of-the-art DNNs raises security concerns in many application domains requiring high reliability. Various adversarial attacks have been proposed to sabotage the learning performance of DNN models. Among those, black-box adversarial attack methods have received special attention owing to their practicality and simplicity. Black-box attacks usually prefer fewer queries in order to remain stealthy and keep costs low. However, most current black-box attack methods adopt first-order gradient descent, which may come with certain deficiencies such as relatively slow convergence and high sensitivity to hyper-parameter settings. In this paper, we propose a zeroth-order natural gradient descent (ZO-NGD) method for designing adversarial attacks, which incorporates a zeroth-order gradient estimation technique catering to the black-box attack scenario and second-order natural gradient descent to achieve higher query efficiency. Empirical evaluations on image classification datasets demonstrate that ZO-NGD can obtain significantly lower model query complexities compared with state-of-the-art attack methods.
7

Yang, Howard Hua, and Shun-ichi Amari. "Complexity Issues in Natural Gradient Descent Method for Training Multilayer Perceptrons." Neural Computation 10, no. 8 (November 1, 1998): 2137–57. http://dx.doi.org/10.1162/089976698300017007.

Abstract:
The natural gradient descent method is applied to train an n-m-1 multilayer perceptron. Based on an efficient scheme to represent the Fisher information matrix for an n-m-1 stochastic multilayer perceptron, a new algorithm is proposed to calculate the natural gradient without inverting the Fisher information matrix explicitly. When the input dimension n is much larger than the number of hidden neurons m, the time complexity of computing the natural gradient is O(n).
8

Park, Hyeyoung, and Kwanyong Lee. "Adaptive Natural Gradient Method for Learning of Stochastic Neural Networks in Mini-Batch Mode." Applied Sciences 9, no. 21 (October 28, 2019): 4568. http://dx.doi.org/10.3390/app9214568.

Abstract:
The gradient descent method is an essential algorithm for learning in neural networks. Among the diverse variations of gradient descent developed to accelerate learning, natural gradient learning is based on the theory of information geometry on the stochastic neuromanifold and is known to have ideal convergence properties. Despite its theoretical advantages, the pure natural gradient has some limitations that prevent its practical use. To obtain the explicit value of the natural gradient, one must know the true probability distribution of the input variables and invert a matrix whose size is the square of the number of parameters. Although an adaptive estimation of the natural gradient has been proposed as a solution, it was originally developed for online learning, which is computationally inefficient for learning from large data sets. In this paper, we propose a novel adaptive natural gradient estimation for mini-batch learning, which is commonly adopted for big-data analysis. For two representative stochastic neural network models, we present explicit parameter update rules and a learning algorithm. Through experiments on three benchmark problems, we confirm that the proposed method has superior convergence properties compared with conventional methods.
9

Mukuno, Jun-ichi, and Hajime Matsui. "Natural Gradient Descent of Complex-Valued Neural Networks Invariant under Rotations." IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences E102.A, no. 12 (December 1, 2019): 1988–96. http://dx.doi.org/10.1587/transfun.e102.a.1988.

10

Neumann, K., C. Strub, and J. J. Steil. "Intrinsic plasticity via natural gradient descent with application to drift compensation." Neurocomputing 112 (July 2013): 26–33. http://dx.doi.org/10.1016/j.neucom.2012.12.047.

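Most of the journal articles above build on the same core update: precondition the ordinary gradient with the inverse of the Fisher information matrix (entry 1 works with its quantum analogue, the Fubini-Study metric). As a rough illustration only, and not the algorithm of any particular paper listed here, the following NumPy sketch applies that update to a toy logistic-regression model with a damped empirical Fisher estimate; the function name, data, and hyperparameters are all illustrative.

```python
import numpy as np

def natural_gradient_step(w, X, y, lr=0.1, damping=1e-3):
    """One natural-gradient step for logistic regression, using an empirical Fisher."""
    p = 1.0 / (1.0 + np.exp(-X @ w))               # predicted probabilities
    per_sample = (p - y)[:, None] * X              # per-sample gradients of the log-loss
    grad = per_sample.mean(axis=0)                 # ordinary (Euclidean) gradient
    fisher = per_sample.T @ per_sample / len(y)    # empirical Fisher information estimate
    fisher += damping * np.eye(len(w))             # damping keeps the matrix invertible
    return w - lr * np.linalg.solve(fisher, grad)  # w <- w - lr * F^{-1} grad

# Purely illustrative usage on synthetic data.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = (X @ rng.normal(size=5) > 0).astype(float)
w = np.zeros(5)
for _ in range(100):
    w = natural_gradient_step(w, X, y)
```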
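
Entry 6 above additionally relies on zeroth-order (query-only) gradient estimation for the black-box setting. The sketch below shows only the generic random-direction finite-difference estimator that such methods build on, not the ZO-NGD algorithm from the paper; the probe count, smoothing parameter, and test function are illustrative.

```python
import numpy as np

def zo_gradient(f, x, num_dirs=50, mu=1e-3, seed=0):
    """Estimate the gradient of f at x from function evaluations only."""
    rng = np.random.default_rng(seed)
    fx = f(x)
    grad = np.zeros_like(x)
    for _ in range(num_dirs):
        u = rng.normal(size=x.shape)               # random probe direction
        grad += (f(x + mu * u) - fx) / mu * u      # directional finite difference times u
    return grad / num_dirs

# Sanity check on a quadratic, whose true gradient is 2 * x.
x = np.array([1.0, -2.0, 0.5])
print(zo_gradient(lambda z: np.sum(z ** 2), x))    # noisy estimate of [2, -4, 1]
```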

Dissertations / Theses on the topic "Natural gradient descent"

1

Inoue, Masato. "On-line learning theory of soft committee machines with correlated hidden units: Steepest gradient descent and natural gradient descent." Kyoto University, 2003. http://hdl.handle.net/2433/148746.

2

Aguiar, Eliane Martins de. "Aplicação do Word2vec e do Gradiente descendente estocástico em tradução automática." Repositório Institucional do FGV, 2016. http://hdl.handle.net/10438/16798.

Abstract:
Word2vec is a neural-network-based system that processes text and represents words as vectors, using a distributed representation. A notable property is the semantic relationships found in the generated models. This work aims to train two word2vec models, one for Portuguese and one for English, and to use stochastic gradient descent to find a translation matrix between these two spaces.
3

Casero Cañas, Ramón. "Left ventricle functional analysis in 2D+t contrast echocardiography within an atlas-based deformable template model framework." Thesis, University of Oxford, 2008. http://ora.ox.ac.uk/objects/uuid:b17b3670-551d-4549-8f10-d977295c1857.

Abstract:
This biomedical engineering thesis explores the opportunities and challenges of 2D+t contrast echocardiography for left ventricle functional analysis, both clinically and within a computer vision atlas-based deformable template model framework. A database was created for the experiments in this thesis, with 21 studies of contrast Dobutamine Stress Echo, in all 4 principal planes. The database includes clinical variables, human expert hand-traced myocardial contours and visual scoring. First the problem is studied from a clinical perspective. Quantification of endocardial global and local function using standard measures shows expected values and agreement with human expert visual scoring, but the results are less reliable for myocardial thickening. Next, the problem of segmenting the endocardium with a computer is posed in a standard landmark and atlas-based deformable template model framework. The underlying assumption is that these models can emulate human experts in terms of integrating previous knowledge about the anatomy and physiology with three sources of information from the image: texture, geometry and kinetics. Probabilistic atlases of contrast echocardiography are computed, while noting from histograms at selected anatomical locations that modelling texture with just mean intensity values may be too naive. Intensity analysis together with the clinical results above suggest that lack of external boundary definition may preclude this imaging technique for appropriate measuring of myocardial thickening, while endocardial boundary definition is appropriate for evaluation of wall motion. Geometry is presented in a Principal Component Analysis (PCA) context, highlighting issues about Gaussianity, the correlation and covariance matrices with respect to physiology, and analysing different measures of dimensionality. A popular extension of deformable models ---Active Appearance Models (AAMs)--- is then studied in depth. Contrary to common wisdom, it is contended that using a PCA texture space instead of a fixed atlas is detrimental to segmentation, and that PCA models are not convenient for texture modelling. To integrate kinetics, a novel spatio-temporal model of cardiac contours is proposed. The new explicit model does not require frame interpolation, and it is compared to previous implicit models in terms of approximation error when the shape vector changes from frame to frame or remains constant throughout the cardiac cycle. Finally, the 2D+t atlas-based deformable model segmentation problem is formulated and solved with a gradient descent approach. Experiments using the similarity transformation suggest that segmentation of the whole cardiac volume outperforms segmentation of individual frames. A relatively new approach ---the inverse compositional algorithm--- is shown to decrease running times of the classic Lucas-Kanade algorithm by a factor of 20 to 25, to values that are within real-time processing reach.
4

"Adaptive Curvature for Stochastic Optimization." Master's thesis, 2019. http://hdl.handle.net/2286/R.I.53675.

Abstract:
This thesis presents a family of adaptive curvature methods for gradient-based stochastic optimization. In particular, a general algorithmic framework is introduced along with a practical implementation that yields an efficient, adaptive curvature gradient descent algorithm. To this end, a theoretical and practical link between curvature matrix estimation and shrinkage methods for covariance matrices is established. The use of shrinkage improves estimation accuracy of the curvature matrix when data samples are scarce. This thesis also introduces several insights that result in data- and computation-efficient update equations. Empirical results suggest that the proposed method compares favorably with existing second-order techniques based on the Fisher or Gauss-Newton matrices and with adaptive stochastic gradient descent methods on both supervised and reinforcement learning tasks.
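
The thesis above (entry 4) links curvature-matrix estimation to shrinkage methods for covariance matrices. The sketch below illustrates that general idea under simple assumptions, shrinking an empirical, Fisher-like curvature estimate toward a scaled identity; it is not the update rule derived in the thesis, and the blend weight lam is a hypothetical hyperparameter.

```python
import numpy as np

def shrunk_curvature(per_sample_grads, lam=0.3, damping=1e-5):
    """Empirical (Fisher-like) curvature matrix shrunk toward a scaled identity."""
    n, d = per_sample_grads.shape
    emp = per_sample_grads.T @ per_sample_grads / n   # empirical curvature estimate
    target = (np.trace(emp) / d) * np.eye(d)          # scaled-identity shrinkage target
    return (1.0 - lam) * emp + lam * target + damping * np.eye(d)

def preconditioned_step(w, per_sample_grads, lr=0.1, lam=0.3):
    """Gradient step preconditioned by the shrunk curvature estimate."""
    g = per_sample_grads.mean(axis=0)
    C = shrunk_curvature(per_sample_grads, lam=lam)
    return w - lr * np.linalg.solve(C, g)
```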

Book chapters on the topic "Natural gradient descent"

1

Yang, H. H., and S. Amari. "Statistical Learning by Natural Gradient Descent." In New Learning Paradigms in Soft Computing, 1–29. Heidelberg: Physica-Verlag HD, 2002. http://dx.doi.org/10.1007/978-3-7908-1803-1_1.

2

Ibnkahla, Mohamed. "Nonlinear Channel Identification Using Natural Gradient Descent: Application to Modeling and Tracking." In Soft Computing in Communications, 55–70. Berlin, Heidelberg: Springer Berlin Heidelberg, 2004. http://dx.doi.org/10.1007/978-3-540-45090-0_3.

3

Pawar, A. B., M. A. Jawale, and D. N. Kyatanavar. "Analyzing Fake News Based on Machine Learning Algorithms." In Intelligent Systems and Computer Technology. IOS Press, 2020. http://dx.doi.org/10.3233/apc200146.

Abstract:
The use of Natural Language Processing techniques for the detection of fake news is analyzed in this research paper. Fake news consists of misleading content spread by unreliable sources and can cause damage to individuals and society. To carry out this analysis, a dataset obtained from the web resource OpenSources.co, which is mainly part of Signal Media, is used. TF-IDF features of bi-grams are used in combination with a PCFG (Probabilistic Context-Free Grammar) on a set of 11,000 documents extracted as news articles. This set is tested on classification algorithms, namely SVM (Support Vector Machines), Stochastic Gradient Descent, Bounded Decision Trees, and Gradient Boosting with Random Forests. The experimental analysis found that the combination of Stochastic Gradient Descent with TF-IDF of bi-grams gives an accuracy of 77.2% in detecting fake content, with PCFGs having only a slight effect on recall.
4

Naik, Bighnaraj, Janmenjoy Nayak, and H. S. Behera. "A Hybrid Model of FLANN and Firefly Algorithm for Classification." In Handbook of Research on Natural Computing for Optimization Problems, 491–522. IGI Global, 2016. http://dx.doi.org/10.4018/978-1-5225-0058-2.ch021.

Abstract:
Over the last decade, biologically inspired optimization techniques have been of keen interest to researchers in the optimization community. Some well-developed and popular algorithms, such as GA and PSO, are found to perform well on large-scale problems. In this chapter, the recently developed nature-inspired firefly algorithm is combined with an efficient higher-order functional link neural network for the classification of real-world data. The main advantage of the firefly algorithm is its ability to reach global optima where some earlier swarm intelligence algorithms fail to do so. For training the neural network, efficient gradient descent learning is used to optimize the weights. The proposed method is able to classify non-linear data more efficiently, with a lower error rate. Under the null hypothesis, the proposed method has been tested with various statistical methods to establish its statistical significance.
5

Nayak, Sarat Chandra, Bijan Bihari Misra, and Himansu Sekhar Behera. "Improving Performance of Higher Order Neural Network using Artificial Chemical Reaction Optimization." In Advances in Computational Intelligence and Robotics, 253–80. IGI Global, 2016. http://dx.doi.org/10.4018/978-1-5225-0063-6.ch011.

Abstract:
Multilayer neural networks are a commonly used technique for mapping complex nonlinear input-output relationships. However, they add computational cost due to the structural complexity of their architecture. This chapter presents different functional link networks (FLN), a class of higher-order neural network (HONN). FLNs are capable of handling linearly non-separable classes by increasing the dimensionality of the input space through nonlinear combinations of the input signals. Usually such a network is trained with gradient descent based back-propagation, but this suffers from many drawbacks. To overcome these drawbacks, a metaheuristic inspired by natural chemical reactions, called artificial chemical reaction optimization (ACRO), is used here to train the network. As a case study, forecasting of stock index prices of different stock markets such as BSE, NASDAQ, TAIEX, and FTSE is considered to compare and analyze the performance gain over traditional techniques.
6

Narayanan, Swathi Jamjala, Boominathan Perumal, and Jayant G. Rohra. "Swarm-Based Nature-Inspired Metaheuristics for Neural Network Optimization." In Advances in Computational Intelligence and Robotics, 23–53. IGI Global, 2018. http://dx.doi.org/10.4018/978-1-5225-2857-9.ch002.

Abstract:
Nature-inspired algorithms have been productively applied to train neural network architectures. Other mechanisms, such as gradient descent, second-order methods, and Levenberg-Marquardt methods, also exist for optimizing the parameters of neural networks. Compared to gradient-based methods, nature-inspired algorithms are found to be less sensitive to the initial weights and less likely to become trapped in local optima. Despite these benefits, some nature-inspired algorithms also suffer from stagnation when applied to neural networks. Another challenge when applying nature-inspired techniques to neural networks is handling the high-dimensional and correlated weight space. Hence, there is a need for scalable nature-inspired algorithms for high-dimensional neural network optimization. In this chapter, the characteristics of nature-inspired techniques for optimizing neural network architectures are studied, along with their applicability, advantages, and limitations/challenges.
7

Mukhopadhyay, Sumitra, and Soumyadip Das. "Application of Nature-Inspired Algorithms for Sensing Error Optimisation in Dynamic Environment." In Nature-Inspired Algorithms for Big Data Frameworks, 124–69. IGI Global, 2019. http://dx.doi.org/10.4018/978-1-5225-5852-1.ch006.

Abstract:
Spectrum sensing errors in cognitive radio may occur due to constant changes in the environment, such as changes in background noise, user movements, temperature variations, etc. These errors lead to under-usage of available spectrum bands or may cause interference with primary user transmission. Therefore, sensing parameters such as the detection threshold must adapt dynamically to the changing environment to minimise sensing errors. Correct sensing requires processing huge data sets, much like Big Data. This chapter investigates sensing in light of Big Data and studies nature-inspired algorithms for sensing error minimisation through dynamic adaptation of the threshold value. Death-penalty constraint-handling techniques are integrated into the genetic algorithm, particle swarm optimisation, the firefly algorithm, and the bat algorithm. Based on these, four algorithms are developed for minimising sensing errors. The reported algorithms are found to be faster and more accurate when compared with previously proposed threshold adaptation algorithms based on gradient descent.
8

Benes, Peter Mark, Miroslav Erben, Martin Vesely, Ondrej Liska, and Ivo Bukovsky. "HONU and Supervised Learning Algorithms in Adaptive Feedback Control." In Advances in Computational Intelligence and Robotics, 35–60. IGI Global, 2016. http://dx.doi.org/10.4018/978-1-5225-0063-6.ch002.

Abstract:
This chapter is a summarizing study of Higher Order Neural Units featuring the most common learning algorithms for identification and adaptive control of most typical representatives of plants of single-input single-output (SISO) nature in the control engineering field. In particular, the linear neural unit (LNU, i.e., 1st order HONU), quadratic neural unit (QNU, i.e. 2nd order HONU), and cubic neural unit (CNU, i.e. 3rd order HONU) will be shown as adaptive feedback controllers of typical models of linear plants in control including identification and control of plants with input time delays. The investigated and compared learning algorithms for HONU will be the step-by-step Gradient Descent adaptation with the study of known modifications of learning rate for improved convergence, the batch Levenberg-Marquardt algorithm, and the Resilient Back-Propagation algorithm. The theoretical achievements will be summarized and discussed as regards their usability and the real issues of control engineering tasks.
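
Entry 3 in this group classifies news articles using TF-IDF features of bi-grams fed to classifiers that include stochastic gradient descent. A minimal scikit-learn sketch of that style of pipeline follows; the toy documents and labels are placeholders rather than the OpenSources.co data used in the chapter, and the model settings are illustrative.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import SGDClassifier
from sklearn.pipeline import make_pipeline

# Placeholder documents and labels standing in for a real labelled news corpus.
docs = [
    "the central bank raised interest rates today",
    "aliens secretly control the world banking system",
    "city council approves new public transport budget",
    "miracle cure hidden from the public by doctors",
]
labels = [0, 1, 0, 1]  # 0 = reliable, 1 = fake

# TF-IDF over word bi-grams feeding a linear classifier trained with SGD.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(2, 2)),
    SGDClassifier(max_iter=1000, random_state=0),
)
model.fit(docs, labels)
print(model.predict(["new budget approved by the city council"]))
```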

Conference papers on the topic "Natural gradient descent"

1

Hong, Yuan, Changhao Xia, Shixiang Zhang, Lin Wu, Chao Yuan, Ying Huang, Xuxu Wang, and Haifeng Zhu. "Load forecasting using elastic gradient descent." In 2013 9th International Conference on Natural Computation (ICNC). IEEE, 2013. http://dx.doi.org/10.1109/icnc.2013.6817979.

2

Aji, Alham Fikri, and Kenneth Heafield. "Sparse Communication for Distributed Gradient Descent." In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing. Stroudsburg, PA, USA: Association for Computational Linguistics, 2017. http://dx.doi.org/10.18653/v1/d17-1045.

3

Malago, Luigi, Matteo Matteucci, and Giovanni Pistone. "Stochastic Natural Gradient Descent by estimation of empirical covariances." In 2011 IEEE Congress on Evolutionary Computation (CEC). IEEE, 2011. http://dx.doi.org/10.1109/cec.2011.5949720.

4

Izadi, Mohammad Rasool, Yihao Fang, Robert Stevenson, and Lizhen Lin. "Optimization of Graph Neural Networks with Natural Gradient Descent." In 2020 IEEE International Conference on Big Data (Big Data). IEEE, 2020. http://dx.doi.org/10.1109/bigdata50022.2020.9378063.

5

Luo, Zhijian, Danping Liao, and Yuntao Qian. "Bound analysis of natural gradient descent in stochastic optimization setting." In 2016 23rd International Conference on Pattern Recognition (ICPR). IEEE, 2016. http://dx.doi.org/10.1109/icpr.2016.7900287.

6

Bogoychev, Nikolay, Kenneth Heafield, Alham Fikri Aji, and Marcin Junczys-Dowmunt. "Accelerating Asynchronous Stochastic Gradient Descent for Neural Machine Translation." In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. Stroudsburg, PA, USA: Association for Computational Linguistics, 2018. http://dx.doi.org/10.18653/v1/d18-1332.

7

Cheng, Keyang, Fei Tao, and Jianming Zhang. "A Stochastic Parallel Gradient Descent Algorithem for Pedestrian Re-identification." In 2018 14th International Conference on Natural Computation, Fuzzy Systems and Knowledge Discovery (ICNC-FSKD). IEEE, 2018. http://dx.doi.org/10.1109/fskd.2018.8686843.

8

Khan, Mohammad Emtiyaz, and Didrik Nielsen. "Fast yet Simple Natural-Gradient Descent for Variational Inference in Complex Models." In 2018 International Symposium on Information Theory and Its Applications (ISITA). IEEE, 2018. http://dx.doi.org/10.23919/isita.2018.8664326.

9

Ibnkahla, M., and J. Yuan. "A neural network MLSE receiver based on natural gradient descent: application to satellite communications." In Seventh International Symposium on Signal Processing and Its Applications, 2003. Proceedings. IEEE, 2003. http://dx.doi.org/10.1109/isspa.2003.1224633.

10

Prellberg, Jonas, and Oliver Kramer. "Learned Weight Sharing for Deep Multi-Task Learning by Natural Evolution Strategy and Stochastic Gradient Descent." In 2020 International Joint Conference on Neural Networks (IJCNN). IEEE, 2020. http://dx.doi.org/10.1109/ijcnn48605.2020.9207139.

