Ready-made bibliography on the topic "Approximate identity neural networks"

Create an accurate reference in APA, MLA, Chicago, Harvard, and many other citation styles

See lists of current articles, books, dissertations, abstracts, and other scholarly sources on the topic "Approximate identity neural networks".

An "Add to bibliography" button is available next to each work in the bibliography. Click it, and we will automatically create a bibliographic reference to the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication as a ".pdf" file and read the abstract of the work online, provided the relevant details are available in the source's metadata.

Journal articles on the topic "Approximate identity neural networks"

1

Moon, Sunghwan. "ReLU Network with Bounded Width Is a Universal Approximator in View of an Approximate Identity." Applied Sciences 11, no. 1 (January 4, 2021): 427. http://dx.doi.org/10.3390/app11010427.

Abstract:
Deep neural networks have shown very successful performance in a wide range of tasks, but a theory of why they work so well is in the early stage. Recently, the expressive power of neural networks, important for understanding deep learning, has received considerable attention. Classic results, provided by Cybenko, Barron, etc., state that a network with a single hidden layer and suitable activation functions is a universal approximator. A few years ago, one started to study how width affects the expressiveness of neural networks, i.e., a universal approximation theorem for a deep neural network…
(A minimal numerical sketch of the approximate-identity construction behind results like this one follows this list of articles.)
2

Funahashi, Ken-Ichi. "Approximate realization of identity mappings by three-layer neural networks." Electronics and Communications in Japan (Part III: Fundamental Electronic Science) 73, no. 11 (1990): 61–68. http://dx.doi.org/10.1002/ecjc.4430731107.

3

Zainuddin, Zarita, and Saeed Panahian Fard. "The Universal Approximation Capabilities of Cylindrical Approximate Identity Neural Networks." Arabian Journal for Science and Engineering 41, no. 8 (March 4, 2016): 3027–34. http://dx.doi.org/10.1007/s13369-016-2067-9.

4

Turchetti, C., M. Conti, P. Crippa, and S. Orcioni. "On the approximation of stochastic processes by approximate identity neural networks." IEEE Transactions on Neural Networks 9, no. 6 (1998): 1069–85. http://dx.doi.org/10.1109/72.728353.

5

Conti, M., and C. Turchetti. "Approximate identity neural networks for analog synthesis of nonlinear dynamical systems." IEEE Transactions on Circuits and Systems I: Fundamental Theory and Applications 41, no. 12 (1994): 841–58. http://dx.doi.org/10.1109/81.340846.

6

Fard, Saeed Panahian, and Zarita Zainuddin. "Almost everywhere approximation capabilities of double Mellin approximate identity neural networks." Soft Computing 20, no. 11 (July 2, 2015): 4439–47. http://dx.doi.org/10.1007/s00500-015-1753-y.

7

Panahian Fard, Saeed, and Zarita Zainuddin. "The universal approximation capabilities of double 2π-periodic approximate identity neural networks." Soft Computing 19, no. 10 (September 6, 2014): 2883–90. http://dx.doi.org/10.1007/s00500-014-1449-8.

8

Panahian Fard, Saeed, and Zarita Zainuddin. "Analyses for Lp[a, b]-norm approximation capability of flexible approximate identity neural networks." Neural Computing and Applications 24, no. 1 (October 8, 2013): 45–50. http://dx.doi.org/10.1007/s00521-013-1493-9.

9

DiMattina, Christopher, and Kechen Zhang. "How to Modify a Neural Network Gradually Without Changing Its Input-Output Functionality." Neural Computation 22, no. 1 (January 2010): 1–47. http://dx.doi.org/10.1162/neco.2009.05-08-781.

Abstract:
It is generally unknown when distinct neural networks having different synaptic weights and thresholds implement identical input-output transformations. Determining the exact conditions for structurally distinct yet functionally equivalent networks may shed light on the theoretical constraints on how diverse neural circuits might develop and be maintained to serve identical functions. Such consideration also imposes practical limits on our ability to uniquely infer the structure of underlying neural circuits from stimulus-response measurements. We introduce a biologically inspired mathematical…
10

Germani, S., G. Tosti, P. Lubrano, S. Cutini, I. Mereu, and A. Berretta. "Artificial Neural Network classification of 4FGL sources." Monthly Notices of the Royal Astronomical Society 505, no. 4 (June 24, 2021): 5853–61. http://dx.doi.org/10.1093/mnras/stab1748.

Abstract:
The Fermi-LAT DR1 and DR2 4FGL catalogues feature more than 5000 gamma-ray sources of which about one fourth are not associated with already known objects, and approximately one third are associated with blazars of uncertain nature. We perform a three-category classification of the 4FGL DR1 and DR2 sources independently, using an ensemble of Artificial Neural Networks (ANNs) to characterize them based on the likelihood of being a Pulsar (PSR), a BL Lac type blazar (BLL) or a Flat Spectrum Radio Quasar (FSRQ). We identify candidate PSR, BLL, and FSRQ among the unassociated sources with…
More sources
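Several of the journal articles above (for example, Moon 2021 and the Zainuddin and Panahian Fard papers) build universal approximation theorems on approximate identities: families of kernels φ_ε with unit integral that concentrate at the origin, so that the convolution (φ_ε * f)(x) tends to f(x) as ε → 0, and a one-hidden-layer network with kernel-shaped units is then a discretization of that convolution. Below is a minimal numerical sketch of that general idea in Python/NumPy. It is not the specific construction or notation of any paper listed here; the kernel choice (Gaussian), the function names, and all parameter values are illustrative assumptions.

```python
import numpy as np

# Generic illustration (not the exact construction from any one paper above):
# an "approximate identity" is a family of kernels phi_eps with integral 1 that
# concentrate at 0 as eps -> 0, so that (phi_eps * f)(x) -> f(x).
# A one-hidden-layer network with kernel-shaped units and output weights
# f(c_j) * dx is just a Riemann sum for that convolution.

def gaussian_kernel(x, eps):
    """Scaled Gaussian with integral 1 for every eps: phi_eps(x) = phi(x/eps)/eps."""
    return np.exp(-0.5 * (x / eps) ** 2) / (eps * np.sqrt(2.0 * np.pi))

def approximate_identity_net(f, centers, eps):
    """Return a 'network' x -> sum_j f(c_j) * phi_eps(x - c_j) * dx."""
    dx = centers[1] - centers[0]
    weights = f(centers) * dx            # output-layer weights
    def net(x):
        x = np.atleast_1d(x)[:, None]    # shape (n_points, 1)
        hidden = gaussian_kernel(x - centers[None, :], eps)  # hidden-unit activations
        return hidden @ weights
    return net

if __name__ == "__main__":
    f = lambda x: np.sin(3 * x) + 0.3 * x          # target function on [-2, 2]
    x_test = np.linspace(-1.5, 1.5, 400)           # stay away from the boundary
    for n_units, eps in [(50, 0.3), (200, 0.1), (800, 0.03)]:
        centers = np.linspace(-2.0, 2.0, n_units)
        net = approximate_identity_net(f, centers, eps)
        err = np.max(np.abs(net(x_test) - f(x_test)))
        print(f"units={n_units:4d}  eps={eps:.2f}  sup-error on test grid = {err:.4f}")
```

Running the script should show the sup-norm error on the test grid shrinking as the kernel narrows and the grid of centers is refined, which is the qualitative content of the universal approximation statements cited above.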

Doctoral dissertations on the topic "Approximate identity neural networks"

1

Ling, Hong. "Implementation of Stochastic Neural Networks for Approximating Random Processes." Master's thesis, Lincoln University. Environment, Society and Design Division, 2007. http://theses.lincoln.ac.nz/public/adt-NZLIU20080108.124352/.

Abstract:
Artificial Neural Networks (ANNs) can be viewed as a mathematical model to simulate natural and biological systems on the basis of mimicking the information processing methods in the human brain. The capability of current ANNs only focuses on approximating arbitrary deterministic input-output mappings. However, these ANNs do not adequately represent the variability which is observed in the systems’ natural settings as well as capture the complexity of the whole system behaviour. This thesis addresses the development of a new class of neural networks called Stochastic Neural Networks (SNNs) in…
2

Garces, Freddy. "Dynamic neural networks for approximate input-output linearisation-decoupling of dynamic systems." Thesis, University of Reading, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.368662.

3

Li, Yingzhen. "Approximate inference : new visions." Thesis, University of Cambridge, 2018. https://www.repository.cam.ac.uk/handle/1810/277549.

Abstract:
Nowadays machine learning (especially deep learning) techniques are being incorporated into many intelligent systems affecting the quality of human life. The ultimate purpose of these systems is to perform automated decision making, and in order to achieve this, predictive systems need to return estimates of their confidence. Powered by the rules of probability, Bayesian inference is the gold standard method to perform coherent reasoning under uncertainty. It is generally believed that intelligent systems following the Bayesian approach can better incorporate uncertainty information for reliable…
4

Liu, Leo. "Acoustic models for speech recognition using Deep Neural Networks based on approximate math." M. Eng. thesis, Massachusetts Institute of Technology, 2015. http://hdl.handle.net/1721.1/100633.

Abstract:
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2015. This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. Cataloged from student-submitted PDF version of thesis. Includes bibliographical references (pages 81-83). Deep Neural Networks (DNNs) are effective models for machine learning. Unfortunately, training a DNN is extremely time-consuming, even with the aid of a graphics processing unit (GPU). DNN training is especially slow…
5

Scotti, Andrea. "Graph Neural Networks and Learned Approximate Message Passing Algorithms for Massive MIMO Detection." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-284500.

Abstract:
Massive multiple-input and multiple-output (MIMO) is a method to improve the performance of wireless communication systems by having a large number of antennas at both the transmitter and the receiver. In the fifth-generation (5G) mobile communication system, Massive MIMO is a key technology to face the increasing number of mobile users and satisfy user demands. At the same time, recovering the transmitted information in a massive MIMO uplink receiver requires more computational complexity when the number of transmitters increases. Indeed, the optimal maximum likelihood (ML) detector has a complexity…
6

Gaur, Yamini. "Exploring Per-Input Filter Selection and Approximation Techniques for Deep Neural Networks." Thesis, Virginia Tech, 2019. http://hdl.handle.net/10919/90404.

Abstract:
We propose a dynamic, input dependent filter approximation and selection technique to improve the computational efficiency of Deep Neural Networks. The approximation techniques convert 32 bit floating point representation of filter weights in neural networks into smaller precision values. This is done by reducing the number of bits used to represent the weights. In order to calculate the per-input error between the trained full precision filter weights and the approximated weights, a metric called Multiplication Error (ME) has been chosen. For convolutional layers, ME is calculated by subtract…
7

Dumlupinar, Taha. "Approximate Analysis and Condition Assessment of Reinforced Concrete T-Beam Bridges Using Artificial Neural Networks." Master's thesis, METU, 2008. http://etd.lib.metu.edu.tr/upload/3/12609732/index.pdf.

Abstract:
In recent years, artificial neural networks (ANNs) have been employed for estimation and prediction purposes in many areas of civil/structural engineering. In this thesis, a multilayered feedforward backpropagation algorithm is used for the approximate analysis and calibration of RC T-beam bridges and modeling of bridge ratings of these bridges. Currently bridges are analyzed using a standard FEM program. However, when a large population of bridges is concerned, such as the one considered in this project (Pennsylvania T-beam bridge population), it is impractical to carry out FEM analysis of all…
8

Tornstad, Magnus. "Evaluating the Practicality of Using a Kronecker-Factored Approximate Curvature Matrix in Newton's Method for Optimization in Neural Networks." Thesis, KTH, Skolan för teknikvetenskap (SCI), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-275741.

Abstract:
For a long time, second-order optimization methods have been regarded as computationally inefficient and intractable for solving the optimization problem associated with deep learning. However, proposed in recent research is an adaptation of Newton's method for optimization in which the Hessian is approximated by a Kronecker-factored approximate curvature matrix, known as KFAC. This work aims to assess its practicality for use in deep learning. Benchmarks were performed using abstract, binary, classification problems, as well as the real-world Boston Housing regression problem, and both deep and…
(A small worked example of the Kronecker-factored update follows this list of theses.)
9

Hanselmann, Thomas. "Approximate dynamic programming with adaptive critics and the algebraic perceptron as a fast neural network related to support vector machines." University of Western Australia. School of Electrical, Electronic and Computer Engineering, 2003. http://theses.library.uwa.edu.au/adt-WU2004.0005.

Abstract:
[Truncated abstract. Please see the pdf version for the complete text. Also, formulae and special characters can only be approximated here. Please see the pdf version of this abstract for an accurate reproduction.] This thesis treats two aspects of intelligent control: The first part is about long-term optimization by approximating dynamic programming and in the second part a specific class of a fast neural network, related to support vector machines (SVMs), is considered. The first part relates to approximate dynamic programming, especially in the framework of adaptive critic designs (ACDs)…
10

Malfatti, Guilherme Meneguzzi. "Técnicas de agrupamento de dados para computação aproximativa." Biblioteca Digital de Teses e Dissertações da UFRGS, 2017. http://hdl.handle.net/10183/169096.

Abstract:
Two of the main drivers of performance gains in single-thread applications, operating frequency and the exploitation of instruction-level parallelism, have made little progress in recent years due to power constraints. In this context, considering the error-tolerant nature of many current applications (i.e., their outputs can contain an acceptable level of noise without compromising the final result), such as image processing and machine learning, approximate computing becomes an attractive approach. This technique is based on computing approximate values instead of…
More sources
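Entry 8 above (Tornstad) concerns Newton-type optimization in which a layer's curvature block is approximated by a Kronecker product, F ≈ A ⊗ G, where A is the second moment of the layer's inputs and G that of the back-propagated output gradients. The practical point is the identity (A ⊗ G)^(-1) vec(∇W) = vec(G^(-1) ∇W A^(-1)), which lets the preconditioned step be computed from the two small factors without ever forming the full curvature matrix. The NumPy sketch below checks that identity numerically and applies one damped step. It is a generic illustration under common K-FAC assumptions, not code from the thesis; the damping value, learning rate, and shapes are arbitrary stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

# Layer shapes: W maps activations a (dim n_in) to pre-activations s (dim n_out).
n_in, n_out, batch = 4, 3, 256

# Per-example inputs a_i and back-propagated gradients g_i = dL/ds_i
# (random stand-ins here; in training they come from forward/backward passes).
a = rng.normal(size=(batch, n_in))
g = rng.normal(size=(batch, n_out))

# Kronecker factors: A = E[a a^T], G = E[g g^T]  (the layer-wise K-FAC approximation).
A = a.T @ a / batch
G = g.T @ g / batch

# Layer gradient dL/dW, with W of shape (n_out, n_in): average of g_i a_i^T.
grad_W = g.T @ a / batch

damping = 1e-2
A_d = A + damping * np.eye(n_in)
G_d = G + damping * np.eye(n_out)

# Kronecker-factored preconditioning:
# (A ⊗ G)^{-1} vec(grad) = vec(G^{-1} grad A^{-1}), so the full matrix is never built.
update_kfac = np.linalg.solve(G_d, grad_W) @ np.linalg.inv(A_d)

# Sanity check against the explicit Kronecker matrix (column-major vec, order='F').
F_full = np.kron(A_d, G_d)
update_full = np.linalg.solve(F_full, grad_W.flatten(order="F")).reshape(
    (n_out, n_in), order="F"
)
print("max |kfac - explicit| =", np.max(np.abs(update_kfac - update_full)))

# One damped Newton-like step on W (the learning rate is purely illustrative).
W = rng.normal(size=(n_out, n_in))
W -= 0.1 * update_kfac
```

The printed difference should be at round-off level, confirming that the factored update and the explicit curvature solve coincide while the factored form only ever inverts matrices of size n_in and n_out.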

Books on the topic "Approximate identity neural networks"

1

Snail, Mgebwi Lavin. The antecedens [sic] and the emergence of the black consciousness movement in South Africa: Its ideology and organisation. München: Akademischer Verlag, 1993.

2

Butz, Martin V., and Esther F. Kutter. Brain Basics from a Computational Perspective. Oxford University Press, 2017. http://dx.doi.org/10.1093/acprof:oso/9780198739692.003.0007.

Abstract:
This chapter provides a crude overview of current knowledge in neuroscience about the human nervous system and its functionality. The distinction between the peripheral and central nervous systems is introduced. Next, brain anatomy is introduced, as well as nerve cells and the information processing principles that unfold in biological neural networks. Moreover, brain modules are covered, including their interconnected communication. With modularizations and wiring systematicities in mind, functional and structural systematicities are surveyed, including neural homunculi, cortical columnar str…
3

Bindemann, Markus, ed. Forensic Face Matching. Oxford University Press, 2021. http://dx.doi.org/10.1093/oso/9780198837749.001.0001.

Abstract:
Person identification at passport control, at borders, in police investigations, and in criminal trials relies critically on the identity verification of people via image-to-image or person-to-image comparison. While this task is known as ‘facial image comparison’ in forensic settings, it has been studied as ‘unfamiliar face matching’ in cognitive science. This book brings together expertise from practitioners, and academics in psychology and law, to draw together what is currently known about these tasks. It explains the problem of identity impostors and how within-person variability and betw…

Book chapters on the topic "Approximate identity neural networks"

1

Fard, Saeed Panahian, and Zarita Zainuddin. "Toroidal Approximate Identity Neural Networks Are Universal Approximators." In Neural Information Processing, 135–42. Cham: Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-12637-1_17.

2

Zainuddin, Zarita, and Saeed Panahian Fard. "Double Approximate Identity Neural Networks Universal Approximation in Real Lebesgue Spaces." In Neural Information Processing, 409–15. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-34475-6_49.

3

Panahian Fard, Saeed, and Zarita Zainuddin. "The Universal Approximation Capabilities of Mellin Approximate Identity Neural Networks." In Advances in Neural Networks – ISNN 2013, 205–13. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-39065-4_26.

4

Panahian Fard, Saeed, and Zarita Zainuddin. "Universal Approximation by Generalized Mellin Approximate Identity Neural Networks." In Proceedings of the 4th International Conference on Computer Engineering and Networks, 187–94. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-11104-9_22.

5

Fard, Saeed Panahian, and Zarita Zainuddin. "The Universal Approximation Capability of Double Flexible Approximate Identity Neural Networks." In Lecture Notes in Electrical Engineering, 125–33. Cham: Springer International Publishing, 2013. http://dx.doi.org/10.1007/978-3-319-01766-2_15.

6

Panahian Fard, Saeed, and Zarita Zainuddin. "On the Universal Approximation Capability of Flexible Approximate Identity Neural Networks." In Emerging Technologies for Information Systems, Computing, and Management, 201–7. New York, NY: Springer New York, 2013. http://dx.doi.org/10.1007/978-1-4614-7010-6_23.

7

Hanif, Muhammad Abdullah, Muhammad Usama Javed, Rehan Hafiz, Semeen Rehman, and Muhammad Shafique. "Hardware–Software Approximations for Deep Neural Networks." In Approximate Circuits, 269–88. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-99322-5_13.

8

Choi, Jungwook, and Swagath Venkataramani. "Approximate Computing Techniques for Deep Neural Networks." In Approximate Circuits, 307–29. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-99322-5_15.

9

Ishibuchi, H., and H. Tanaka. "Approximate Pattern Classification Using Neural Networks." In Fuzzy Logic, 225–36. Dordrecht: Springer Netherlands, 1993. http://dx.doi.org/10.1007/978-94-011-2014-2_22.

10

Bai, Xuerui, Jianqiang Yi, and Dongbin Zhao. "Approximate Dynamic Programming for Ship Course Control." In Advances in Neural Networks – ISNN 2007, 349–57. Berlin, Heidelberg: Springer Berlin Heidelberg, 2007. http://dx.doi.org/10.1007/978-3-540-72383-7_41.


Conference abstracts on the topic "Approximate identity neural networks"

1

Zainuddin, Zarita, and Saeed Panahian Fard. "Spherical approximate identity neural networks are universal approximators." In 2014 10th International Conference on Natural Computation (ICNC). IEEE, 2014. http://dx.doi.org/10.1109/icnc.2014.6975812.

2

Fard Panahian, Saeed, and Zarita Zainuddin. "Universal Approximation Property of Weighted Approximate Identity Neural Networks." In The 5th International Conference on Computer Engineering and Networks. Trieste, Italy: Sissa Medialab, 2015. http://dx.doi.org/10.22323/1.259.0007.

3

Panahian Fard, Saeed, and Zarita Zainuddin. "The Universal Approximation Capabilities of 2π-Periodic Approximate Identity Neural Networks." In 2013 International Conference on Information Science and Cloud Computing Companion (ISCC-C). IEEE, 2013. http://dx.doi.org/10.1109/iscc-c.2013.147.

4

Fard, Saeed Panahian. "Solving Universal Approximation Problem by Hankel Approximate Identity Neural Networks in Function Spaces." In The fourth International Conference on Information Science and Cloud Computing. Trieste, Italy: Sissa Medialab, 2016. http://dx.doi.org/10.22323/1.264.0031.

5

Zainuddin, Zarita, and Saeed Panahian Fard. "Approximation of multivariate 2π-periodic functions by multiple 2π-periodic approximate identity neural networks based on the universal approximation theorems." In 2015 11th International Conference on Natural Computation (ICNC). IEEE, 2015. http://dx.doi.org/10.1109/icnc.2015.7377957.

6

Ahmadian, M. T., and A. Mobini. "Online Prediction of Plate Deformations Under External Forces Using Neural Networks." In ASME 2006 International Mechanical Engineering Congress and Exposition. ASMEDC, 2006. http://dx.doi.org/10.1115/imece2006-15844.

Abstract:
Recently, online prediction of plate deformations in modern systems has been considered by many researchers; common standard methods are highly time-consuming, and powerful processors are needed for online computation of deformations. Artificial neural networks have the capability to develop complex, nonlinear functional relationships between input and output patterns based on limited data. A well-trained network can predict output data very fast with acceptable accuracy. This paper describes the application of an artificial neural network to identify the deformation pattern of a four-side clamped plate…
7

Mao, X., V. Joshi, T. P. Miyanawala, and Rajeev K. Jaiman. "Data-Driven Computing With Convolutional Neural Networks for Two-Phase Flows: Application to Wave-Structure Interaction." In ASME 2018 37th International Conference on Ocean, Offshore and Arctic Engineering. American Society of Mechanical Engineers, 2018. http://dx.doi.org/10.1115/omae2018-78425.

Abstract:
Fluctuating wave force on a bluff body is of great significance in many offshore and marine engineering applications. We present a Convolutional Neural Network (CNN) based data-driven computing to predict the unsteady wave forces on bluff bodies due to the free-surface wave motion. For the full-order modeling and high-fidelity data generation, the air-water interface for such wave-body problems must be captured accurately for a broad range of physical and geometric parameters. Originated from the thermodynamically consistent theories, the physically motivated Allen-Cahn phase-field method has…
8

Li, Longyuan, Junchi Yan, Xiaokang Yang, and Yaohui Jin. "Learning Interpretable Deep State Space Model for Probabilistic Time Series Forecasting." In Twenty-Eighth International Joint Conference on Artificial Intelligence {IJCAI-19}. California: International Joint Conferences on Artificial Intelligence Organization, 2019. http://dx.doi.org/10.24963/ijcai.2019/402.

Abstract:
Probabilistic time series forecasting involves estimating the distribution of the future based on its history, which is essential for risk management in downstream decision-making. We propose a deep state space model for probabilistic time series forecasting whereby the non-linear emission model and transition model are parameterized by networks and the dependency is modeled by recurrent neural nets. We take the automatic relevance determination (ARD) view and devise a network to exploit the exogenous variables in addition to time series. In particular, our ARD network can incorporate the uncertai…
(A toy sketch of a neural state-space parameterization of this general kind follows this list.)
9

Sen, Sanchari, Swagath Venkataramani, and Anand Raghunathan. "Approximate computing for spiking neural networks." In 2017 Design, Automation & Test in Europe Conference & Exhibition (DATE). IEEE, 2017. http://dx.doi.org/10.23919/date.2017.7926981.

10

Xu, Xiangrui, Yaqin Lee, Yunlong Gao, and Cao Yuan. "Adding identity numbers to deep neural networks." In Automatic Target Recognition and Navigation, edited by Hanyu Hong, Jianguo Liu, and Xia Hua. SPIE, 2020. http://dx.doi.org/10.1117/12.2540293.

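Entry 8 in this list (Li et al., IJCAI 2019) describes a deep state space model in which a recurrent network carries the latent state and small networks parameterize the emission distribution. As a purely illustrative companion, the sketch below rolls a tiny, untrained model of that general shape (plain tanh RNN transition, Gaussian emission) over a toy series and samples a short probabilistic forecast. It is not the authors' architecture, ARD mechanism, or training procedure; every weight, dimension, and name here is an arbitrary stand-in.

```python
import numpy as np

rng = np.random.default_rng(1)

# A deliberately tiny, generic "deep state space" forecaster:
#   state update : h_t = tanh(W_h h_{t-1} + W_x x_t + b_h)      (RNN transition)
#   emission     : p(y_t | h_t) = Normal(mu(h_t), softplus(s(h_t)))
# Weights are random and untrained; this only illustrates the data flow.

state_dim, obs_dim = 8, 1
W_h = rng.normal(scale=0.3, size=(state_dim, state_dim))
W_x = rng.normal(scale=0.3, size=(state_dim, obs_dim))
b_h = np.zeros(state_dim)
W_mu = rng.normal(scale=0.3, size=(obs_dim, state_dim))
W_sig = rng.normal(scale=0.3, size=(obs_dim, state_dim))

def softplus(x):
    return np.log1p(np.exp(x))

def step(h, x):
    """One transition + emission: returns the new state and (mean, std) of p(y|h)."""
    h_new = np.tanh(W_h @ h + W_x @ x + b_h)
    mu = W_mu @ h_new
    sigma = softplus(W_sig @ h_new) + 1e-3
    return h_new, mu, sigma

# Roll the model over an observed history, then sample a short probabilistic forecast.
history = np.sin(np.linspace(0, 4 * np.pi, 50))[:, None]   # toy univariate series
h = np.zeros(state_dim)
for x_t in history:
    h, mu, sigma = step(h, x_t)

forecast = []
x_t = history[-1]
for _ in range(10):
    h, mu, sigma = step(h, x_t)
    x_t = mu + sigma * rng.normal(size=obs_dim)   # ancestral sampling
    forecast.append((mu.item(), sigma.item()))

for t, (m, s) in enumerate(forecast, 1):
    print(f"t+{t:2d}: mean={m:+.3f}  std={s:.3f}")
```

In a trained model, the same forward pass would be wrapped in a likelihood and the network parameters fit by maximizing it; the sketch stops at the sampling step because that is all the cited abstract specifies.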