Academic literature on the topic 'Sigmoid approximation'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Sigmoid approximation.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Sigmoid approximation"

1

Ito, Yoshifusa. "Approximation Capability of Layered Neural Networks with Sigmoid Units on Two Layers." Neural Computation 6, no. 6 (November 1994): 1233–43. http://dx.doi.org/10.1162/neco.1994.6.6.1233.

Abstract:
Using only an elementary constructive method, we prove the universal approximation capability of three-layered feedforward neural networks that have sigmoid units on two layers. We regard the Heaviside function as a special case of the sigmoid function and measure the accuracy of approximation in either the supremum norm or the Lp-norm. Given a continuous function defined on a unit hypercube and the required accuracy of approximation, we can estimate the numbers of necessary units on the respective sigmoid unit layers. In the case where the sigmoid function is the Heaviside function, our result improves the estimate of Kůrková (1992). If the accuracy of approximation is measured in the Lp-norm, our estimate also improves that of Kůrková (1992), even when the sigmoid function is not the Heaviside function.
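The constructive idea behind such results can be illustrated in one dimension: a single layer of Heaviside ("hard sigmoid") units reproduces a target function as a staircase of jumps. The sketch below is illustrative only; the target function, grid size, and error bound are arbitrary choices, not the paper's construction:

```python
import math

def heaviside(t):
    # Heaviside step, treated here as a limiting case of a sigmoid unit
    return 1.0 if t >= 0 else 0.0

def staircase(f, n):
    # One hidden layer of n Heaviside units on [0, 1]:
    # f(x) ~ f(0) + sum of the jumps f(k/n) - f((k-1)/n) activated at x >= k/n
    jumps = [(k / n, f(k / n) - f((k - 1) / n)) for k in range(1, n + 1)]
    def approx(x):
        return f(0.0) + sum(j * heaviside(x - t) for t, j in jumps)
    return approx

f = lambda x: math.sin(2.0 * math.pi * x)
g = staircase(f, 200)
max_err = max(abs(f(x) - g(x)) for x in [i / 1000.0 for i in range(1001)])
# the error is bounded by f's modulus of continuity over one grid step (~2*pi/200)
assert max_err < 0.05
```

The number of units needed for a given accuracy is exactly the kind of estimate the paper sharpens.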
2

Bagul, Yogesh J., and Christophe Chesneau. "Sigmoid functions for the smooth approximation to the absolute value function." Moroccan Journal of Pure and Applied Analysis 7, no. 1 (January 1, 2021): 12–19. http://dx.doi.org/10.2478/mjpaa-2021-0002.

Abstract:
We present smooth approximations to the absolute value function |x| using sigmoid functions. In particular, x erf(x/μ) is proved to be a better smooth approximation for |x| than x tanh(x/μ) and √(x² + μ) with respect to accuracy. To accomplish our goal we also provide sharp hyperbolic bounds for the error function.
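The abstract's comparison can be checked numerically. A minimal sketch, assuming an arbitrary smoothing parameter μ = 0.1 and arbitrary sample points (neither taken from the paper):

```python
import math

def abs_erf(x, mu):
    # x * erf(x/mu): the paper's preferred smooth approximation of |x|
    return x * math.erf(x / mu)

def abs_tanh(x, mu):
    # x * tanh(x/mu): the competing sigmoid-based approximation
    return x * math.tanh(x / mu)

def abs_sqrt(x, mu):
    # sqrt(x^2 + mu): the classic non-sigmoid smoothing of |x|
    return math.sqrt(x * x + mu)

mu = 0.1
for x in [-1.0, -0.3, 0.0, 0.05, 0.7]:
    errs = {name: abs(abs(x) - f(x, mu))
            for name, f in [("erf", abs_erf), ("tanh", abs_tanh), ("sqrt", abs_sqrt)]}
    # erf(t) >= tanh(t) for t >= 0, so the erf variant is pointwise
    # at least as accurate, consistent with the paper's claim
    assert errs["erf"] <= errs["tanh"] + 1e-12
```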
3

Vlček, Miroslav. "Chebyshev Polynomial Approximation for Activation Sigmoid Function." Neural Network World 22, no. 4 (2012): 387–93. http://dx.doi.org/10.14311/nnw.2012.22.023.

4

Nguyen, Vantruong, Jueping Cai, Linyu Wei, and Jie Chu. "Neural Networks Probability-Based PWL Sigmoid Function Approximation." IEICE Transactions on Information and Systems E103.D, no. 9 (September 1, 2020): 2023–26. http://dx.doi.org/10.1587/transinf.2020edl8007.

5

Waissi, Gary R., and Donald F. Rossin. "A sigmoid approximation of the standard normal integral." Applied Mathematics and Computation 77, no. 1 (June 1996): 91–95. http://dx.doi.org/10.1016/0096-3003(95)00190-5.

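For illustration, a sigmoid approximation of the standard normal integral can be sketched with the widely known one-parameter logistic form Φ(x) ≈ 1/(1 + e^(−1.702x)). Note that this constant is not the polynomial fit derived by Waissi and Rossin; their coefficients are in the paper:

```python
import math

def phi_exact(x):
    # standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def phi_logistic(x, k=1.702):
    # one-parameter sigmoid approximation of the normal CDF;
    # k = 1.702 is the classic choice (NOT Waissi & Rossin's
    # polynomial-in-the-exponent fit -- see the paper for that)
    return 1.0 / (1.0 + math.exp(-k * x))

max_err = max(abs(phi_exact(x) - phi_logistic(x))
              for x in [i / 100.0 for i in range(-500, 501)])
assert max_err < 0.01  # worst-case error of the k = 1.702 form is about 0.0095
```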
6

Hornik, Kurt, Maxwell Stinchcombe, Halbert White, and Peter Auer. "Degree of Approximation Results for Feedforward Networks Approximating Unknown Mappings and Their Derivatives." Neural Computation 6, no. 6 (November 1994): 1262–75. http://dx.doi.org/10.1162/neco.1994.6.6.1262.

Abstract:
Recently Barron (1993) has given rates for hidden layer feedforward networks with sigmoid activation functions approximating a class of functions satisfying a certain smoothness condition. These rates do not depend on the dimension of the input space. We extend Barron's results to feedforward networks with possibly nonsigmoid activation functions approximating mappings and their derivatives simultaneously. Our conditions are similar but not identical to Barron's, but we obtain the same rates of approximation, showing that the approximation error decreases at rates as fast as n−1/2, where n is the number of hidden units. The dimension of the input space appears only in the constants of our bounds.
7

Park, J., and I. W. Sandberg. "Universal Approximation Using Radial-Basis-Function Networks." Neural Computation 3, no. 2 (June 1991): 246–57. http://dx.doi.org/10.1162/neco.1991.3.2.246.

Abstract:
There have been several recent studies concerning feedforward networks and the problem of approximating arbitrary functionals of a finite number of real variables. Some of these studies deal with cases in which the hidden-layer nonlinearity is not a sigmoid. This was motivated by successful applications of feedforward networks with nonsigmoidal hidden-layer units. This paper reports on a related study of radial-basis-function (RBF) networks, and it is proved that RBF networks having one hidden layer are capable of universal approximation. Here the emphasis is on the case of typical RBF networks, and the results show that a certain class of RBF networks with the same smoothing factor in each kernel node is broad enough for universal approximation.
8

Bhattacharyya, C., and S. S. Keerthi. "Mean Field Methods for a Special Class of Belief Networks." Journal of Artificial Intelligence Research 15 (August 1, 2001): 91–114. http://dx.doi.org/10.1613/jair.734.

Abstract:
The chief aim of this paper is to propose mean-field approximations for a broad class of belief networks, of which sigmoid and noisy-or networks can be seen as special cases. The approximations are based on a powerful mean-field theory suggested by Plefka. We show that Saul, Jaakkola, and Jordan's approach is the first-order approximation in Plefka's approach, via a variational derivation. The application of Plefka's theory to belief networks is not computationally tractable. To tackle this problem we propose new approximations based on Taylor series. Small-scale experiments show that the proposed schemes are attractive.
9

Chiluveru, S. R., M. Tripathy, and B. Mohapatra. "Accuracy controlled iterative method for efficient sigmoid function approximation." Electronics Letters 56, no. 18 (September 3, 2020): 914–16. http://dx.doi.org/10.1049/el.2020.0854.

10

Koutroumbas, K. "On the Partitioning Capabilities of Feedforward Neural Networks with Sigmoid Nodes." Neural Computation 15, no. 10 (October 1, 2003): 2457–81. http://dx.doi.org/10.1162/089976603322362437.

Abstract:
In this letter, the capabilities of feedforward neural networks (FNNs) on the realization and approximation of functions of the form g: Rℓ → A, which partition the Rℓ space into polyhedral sets, each one being assigned to one out of the c classes of A, are investigated. More specifically, a constructive proof is given for the fact that FNNs consisting of nodes having sigmoid output functions are capable of approximating any function g with arbitrary accuracy. Also, the capabilities of FNNs consisting of nodes having the hard limiter as output function are reviewed. In both cases, the two-class as well as the multiclass cases are considered.

Dissertations / Theses on the topic "Sigmoid approximation"

1

Minařík, Miloš. "Souběžný evoluční návrh hardwaru a softwaru." Doctoral thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2018. http://www.nusl.cz/ntk/nusl-412594.

Abstract:
Genetic programming (GP) is, to a certain extent, able to generate required programs automatically, without the user having to specify how the program should proceed. GP has been used successfully to solve a wide range of practical problems from various domains, with results often comparable to solutions created by humans. However, the question of whether GP can generate a highly optimized computational model (platform) together with a program executable on that platform that solves a given problem while satisfying all constraints (for example, on chip area and delay) has not yet been answered. In scenarios where multiple criteria are optimized, the output for the user should be a set of non-dominated solutions with different combinations of resource usage (area, power) and performance (execution speed). This problem can be understood as concurrent hardware/software design, or HW/SW codesign for short. This thesis investigates ways of evolving a platform and programs concurrently when the problem is specified by a set of input vectors and the corresponding outputs. First, an architecture model and an evolutionary platform for processing and evolving these architectures were created. Candidate microprogrammed architectures were evolved together with programs by means of linear genetic programming. A series of simpler experiments was then carried out. The proposed platform achieved results comparable to state-of-the-art methods. Based on weaknesses discovered during the initial experiments, the platform was extended. The extended platform was then validated on several more complex experiments. One of them focused on an efficient implementation of a sigmoid function approximation. In this case the platform found a number of different solutions implementing the sigmoid approximation, some of which were sequential and others purely combinational.
In the course of this experiment, known algorithms were also rediscovered by evolution, and some of them were even optimized by evolution for the subset of the domain chosen for the experiment. The last set of experiments focused on the evolutionary design of image filters for the reduction of salt-and-pepper noise. Here the platform rediscovered the concept of switched filters and found a variant of the switched median filter that was comparable to commonly used methods in terms of filtering results. This thesis has shown that small HW/SW systems can be designed and optimized by means of genetic programming. The automated evolutionary design of more complex HW/SW systems remains an open problem suitable for further research.
2

Bharkhada, Bharat Kishore. "Efficient Fpga Implementation of a Generic Function Approximator and Its Application to Neural Net Computation." University of Cincinnati / OhioLINK, 2003. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1060978658.

3

Quee, Graham. "Ramp approximations of finitely steep sigmoid control functions in soft-switching ODE networks." Thesis, 2019. http://hdl.handle.net/1828/10746.

Abstract:
In models for networks of regulatory interactions in biological molecules, the sigmoid relationship between concentration of regulating bodies and the production rates they control has lead to the use of continuous time 'switching' ordinary differential equations (ODEs), sometimes referred to as Glass networks. These Glass networks are the result of a simplifying assumption that the switching behaviour occurs instantaneously at particular threshold values. Though this assumption produces highly tractable models, it also causes analytic difficulties in certain cases due to the discontinuities of the system, such as non-uniqueness. In this thesis we explore the use of 'ramp' functions as an alternative approximation to the sigmoid, which restores continuity to the ODE and removes the assumption of infinitely fast switching by linearly interpolating the focal point values used in a corresponding Glass network. A general framework for producing a ramp system from a certain Glass network is given. Solutions are explored in two dimensions, and then in higher dimensions under two different restrictions. Periodic behaviour is explored in both cases using mappings between threshold boundaries. Limitations in these methods are explored, and a general proof of the existence of periodic solutions in negative feedback loops is given.
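The ramp idea described in the abstract can be sketched as a piecewise-linear switch: linear interpolation across a threshold window replaces both the discontinuous Heaviside switch and the steep sigmoid. The threshold, window half-width, and Hill steepness below are illustrative choices, not values from the thesis:

```python
def ramp(x, theta=1.0, eps=0.2):
    # linear interpolation from 0 to 1 over [theta - eps, theta + eps];
    # as eps -> 0 this recovers the instantaneous switch of a Glass network,
    # while keeping the right-hand side of the ODE continuous for eps > 0
    if x <= theta - eps:
        return 0.0
    if x >= theta + eps:
        return 1.0
    return (x - (theta - eps)) / (2.0 * eps)

def hill(x, theta=1.0, n=20):
    # steep sigmoid (Hill function) that the ramp approximates
    return x**n / (x**n + theta**n)

# the ramp is continuous, saturates away from the threshold,
# and crosses 1/2 at the threshold just like the sigmoid
assert ramp(0.0) == 0.0 and ramp(2.0) == 1.0
assert abs(ramp(1.0) - 0.5) < 1e-12 and abs(hill(1.0) - 0.5) < 1e-12
```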

Book chapters on the topic "Sigmoid approximation"

1

Zhen-zhen, Xie, and Zhang Su-yu. "A Non-linear Approximation of the Sigmoid Function Based FPGA." In Advances in Intelligent and Soft Computing, 125–32. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-25188-7_15.

2

Minarik, Milos, and Lukas Sekanina. "On Evolutionary Approximation of Sigmoid Function for HW/SW Embedded Systems." In Lecture Notes in Computer Science, 343–58. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-55696-3_22.

3

Dineva, Adrienn, József K. Tar, Annamária Várkonyi-Kóczy, János T. Tóth, and Vincenzo Piuri. "Non-conventional Control Design by Sigmoid Generated Fixed Point Transformation Using Fuzzy Approximation." In Studies in Systems, Decision and Control, 1–15. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-78437-3_1.

4

Anastassiou, George A. "Univariate Sigmoidal Neural Network Quantitative Approximation." In Intelligent Systems Reference Library, 1–32. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-21431-8_1.

5

Anastassiou, George A. "Multivariate Sigmoidal Neural Network Quantitative Approximation." In Intelligent Systems Reference Library, 67–88. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-21431-8_3.

6

Lenze, Burkhard. "Constructive Multivariate Approximation with Sigmoidal Functions and Applications to Neural Networks." In Numerical Methods in Approximation Theory, Vol. 9, 155–75. Basel: Birkhäuser Basel, 1992. http://dx.doi.org/10.1007/978-3-0348-8619-2_9.

7

Montaña, José L., and Cruz E. Borges. "Lower Bounds for Approximation of Some Classes of Lebesgue Measurable Functions by Sigmoidal Neural Networks." In Lecture Notes in Computer Science, 1–8. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-02478-8_1.

8

Chan, Veronica, and Christine W. Chan. "Towards Developing the Piece-Wise Linear Neural Network Algorithm for Rule Extraction." In Deep Learning and Neural Networks, 1632–49. IGI Global, 2020. http://dx.doi.org/10.4018/978-1-7998-0414-7.ch091.

Abstract:
This paper discusses the development and application of a decomposition neural network rule extraction algorithm for nonlinear regression problems. The algorithm is called the piece-wise linear artificial neural network, or PWL-ANN, algorithm. The objective of the algorithm is to "open up" the black box of a neural network model so that rules in the form of linear equations are generated by approximating the sigmoid activation functions of the hidden neurons in an artificial neural network (ANN). The preliminary results showed that the algorithm gives high fidelity and satisfactory results on sixteen of the nineteen tested datasets. By analyzing the values of R² given by the PWL approximation on the hidden neurons and the overall output, it is evident that, in addition to accurate approximation of each individual node of a given ANN model, there are more factors affecting the fidelity of the PWL-ANN algorithm. Nevertheless, the algorithm shows promising potential for domains where a better understanding of the problem is needed.
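The core operation, replacing a sigmoid activation with a piecewise-linear fit, can be sketched as follows; the knot placement below is an arbitrary illustration, not the PWL-ANN algorithm's own segmentation:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# breakpoints for a piecewise-linear fit of the logistic sigmoid
# (the number and placement of knots are illustrative choices)
knots = [-6.0, -3.0, -1.5, 0.0, 1.5, 3.0, 6.0]
values = [sigmoid(k) for k in knots]

def pwl_sigmoid(x):
    # clamp outside the knot range, linearly interpolate inside
    if x <= knots[0]:
        return values[0]
    if x >= knots[-1]:
        return values[-1]
    for (x0, y0), (x1, y1) in zip(zip(knots, values), zip(knots[1:], values[1:])):
        if x0 <= x <= x1:
            return y0 + (y1 - y0) * (x - x0) / (x1 - x0)

max_err = max(abs(sigmoid(x) - pwl_sigmoid(x))
              for x in [i / 100.0 for i in range(-800, 801)])
assert max_err < 0.05  # even these coarse knots keep the error small
```

Each linear segment then yields one linear rule, which is what makes the extracted model interpretable.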

Conference papers on the topic "Sigmoid approximation"

1

Murugadoss, R., and M. Ramakrishnan. "Universal approximation using probabilistic neural networks with sigmoid activation functions." In 2014 International Conference on Advances in Engineering and Technology Research (ICAETR). IEEE, 2014. http://dx.doi.org/10.1109/icaetr.2014.7012920.

2

Xie, Zhenzhen. "A non-linear approximation of the sigmoid function based on FPGA." In 2012 IEEE Fifth International Conference on Advanced Computational Intelligence (ICACI). IEEE, 2012. http://dx.doi.org/10.1109/icaci.2012.6463155.

3

Zaki, Peter W., Ahmed M. Hashem, Emad A. Fahim, Mostafa A. Masnour, Sarah M. ElGenk, Maggie Mashaly, and Samar M. Ismail. "A Novel Sigmoid Function Approximation Suitable for Neural Networks on FPGA." In 2019 15th International Computer Engineering Conference (ICENCO). IEEE, 2019. http://dx.doi.org/10.1109/icenco48310.2019.9027479.

4

Russell, Gary, and Laurene V. Fausett. "Comparison of function approximation with sigmoid and radial basis function networks." In Aerospace/Defense Sensing and Controls, edited by Steven K. Rogers and Dennis W. Ruck. SPIE, 1996. http://dx.doi.org/10.1117/12.235903.

5

Stinchcombe, Maxwell, and Halbert White. "Universal approximation using feedforward networks with non-sigmoid hidden layer activation functions." In International Joint Conference on Neural Networks. IEEE, 1989. http://dx.doi.org/10.1109/ijcnn.1989.118640.

6

Murugadoss, R., and M. Ramakrishnan. "Universal approximation of nonlinear system predictions in sigmoid activation functions using artificial neural networks." In 2014 IEEE International Conference on Computational Intelligence and Computing Research (ICCIC). IEEE, 2014. http://dx.doi.org/10.1109/iccic.2014.7238539.

7

Lopez-Benitez, Miguel, and Dhaval Patel. "Sigmoid Approximation to the Gaussian Q-function and its Applications to Spectrum Sensing Analysis." In 2019 IEEE Wireless Communications and Networking Conference (WCNC). IEEE, 2019. http://dx.doi.org/10.1109/wcnc.2019.8886061.

8

Marar, Joao F., and Ana C. Patrocionio. "Comparative study between powers of sigmoid functions, MLP backpropagation, and polynomials in function approximation problems." In AeroSense '99, edited by Ivan Kadar. SPIE, 1999. http://dx.doi.org/10.1117/12.357191.

9

Wong, Harry W. H., Jack P. K. Ma, Donald P. H. Wong, Lucien K. L. Ng, and Sherman S. M. Chow. "Learning Model with Error -- Exposing the Hidden Model of BAYHENN." In Twenty-Ninth International Joint Conference on Artificial Intelligence and Seventeenth Pacific Rim International Conference on Artificial Intelligence {IJCAI-PRICAI-20}. California: International Joint Conferences on Artificial Intelligence Organization, 2020. http://dx.doi.org/10.24963/ijcai.2020/488.

Abstract:
Privacy-preserving deep neural network (DNN) inference remains an intriguing problem even after the rapid developments of different communities. One challenge is that cryptographic techniques such as homomorphic encryption (HE) do not natively support non-linear computations (e.g., sigmoid). A recent work, BAYHENN (Xie et al., IJCAI'19), considers HE over the Bayesian neural network (BNN). The novelty lies in "meta-prediction" over a few noisy DNNs. The claim was that the clients can get intermediate outputs (to apply non-linear function) but are still prevented from learning the exact model parameters, which was justified via the widely-used learning-with-error (LWE) assumption (with Gaussian noises as the error). This paper refutes the security claim of BAYHENN via both theoretical and empirical analyses. We formally define a security game with different oracle queries capturing two realistic threat models. Our attack assuming a semi-honest adversary reveals all the parameters of single-layer BAYHENN, which generalizes to recovering the whole model that is "as good as" the BNN approximation of the original DNN, either under the malicious adversary model or with an increased number of oracle queries. This shows the need for rigorous security analysis ("the noise introduced by BNN can obfuscate the model" fails -- it is beyond what LWE guarantees) and calls for the collaboration between cryptographers and machine-learning experts to devise practical yet provably-secure solutions.
10

Tarasov, Dmitry A., Andrey G. Tyagunov, and Oleg B. Milder. "Approximating heat resistance of nickel-based superalloys by a sigmoid." In INTERNATIONAL CONFERENCE OF NUMERICAL ANALYSIS AND APPLIED MATHEMATICS ICNAAM 2019. AIP Publishing, 2020. http://dx.doi.org/10.1063/5.0026744.
