Journal articles on the topic "Approximate identity neural networks"

See the 50 best journal articles for studies on the topic "Approximate identity neural networks".


1

Moon, Sunghwan. "ReLU Network with Bounded Width Is a Universal Approximator in View of an Approximate Identity." Applied Sciences 11, no. 1 (January 4, 2021): 427. http://dx.doi.org/10.3390/app11010427.

Abstract:
Deep neural networks have shown very successful performance in a wide range of tasks, but a theory of why they work so well is in the early stage. Recently, the expressive power of neural networks, important for understanding deep learning, has received considerable attention. Classic results, provided by Cybenko, Barron, etc., state that a network with a single hidden layer and suitable activation functions is a universal approximator. A few years ago, one started to study how width affects the expressiveness of neural networks, i.e., a universal approximation theorem for a deep neural networ
2

Funahashi, Ken-Ichi. "Approximate realization of identity mappings by three-layer neural networks." Electronics and Communications in Japan (Part III: Fundamental Electronic Science) 73, no. 11 (1990): 61–68. http://dx.doi.org/10.1002/ecjc.4430731107.

3

Zainuddin, Zarita, and Saeed Panahian Fard. "The Universal Approximation Capabilities of Cylindrical Approximate Identity Neural Networks." Arabian Journal for Science and Engineering 41, no. 8 (March 4, 2016): 3027–34. http://dx.doi.org/10.1007/s13369-016-2067-9.

4

Turchetti, C., M. Conti, P. Crippa, and S. Orcioni. "On the approximation of stochastic processes by approximate identity neural networks." IEEE Transactions on Neural Networks 9, no. 6 (1998): 1069–85. http://dx.doi.org/10.1109/72.728353.

5

Conti, M., and C. Turchetti. "Approximate identity neural networks for analog synthesis of nonlinear dynamical systems." IEEE Transactions on Circuits and Systems I: Fundamental Theory and Applications 41, no. 12 (1994): 841–58. http://dx.doi.org/10.1109/81.340846.

6

Fard, Saeed Panahian, and Zarita Zainuddin. "Almost everywhere approximation capabilities of double Mellin approximate identity neural networks." Soft Computing 20, no. 11 (July 2, 2015): 4439–47. http://dx.doi.org/10.1007/s00500-015-1753-y.

7

Panahian Fard, Saeed, and Zarita Zainuddin. "The universal approximation capabilities of double 2π-periodic approximate identity neural networks." Soft Computing 19, no. 10 (September 6, 2014): 2883–90. http://dx.doi.org/10.1007/s00500-014-1449-8.

8

Panahian Fard, Saeed, and Zarita Zainuddin. "Analyses for Lp[a, b]-norm approximation capability of flexible approximate identity neural networks." Neural Computing and Applications 24, no. 1 (October 8, 2013): 45–50. http://dx.doi.org/10.1007/s00521-013-1493-9.

9

DiMattina, Christopher, and Kechen Zhang. "How to Modify a Neural Network Gradually Without Changing Its Input-Output Functionality." Neural Computation 22, no. 1 (January 2010): 1–47. http://dx.doi.org/10.1162/neco.2009.05-08-781.

Abstract:
It is generally unknown when distinct neural networks having different synaptic weights and thresholds implement identical input-output transformations. Determining the exact conditions for structurally distinct yet functionally equivalent networks may shed light on the theoretical constraints on how diverse neural circuits might develop and be maintained to serve identical functions. Such consideration also imposes practical limits on our ability to uniquely infer the structure of underlying neural circuits from stimulus-response measurements. We introduce a biologically inspired mathematical
10

Germani, S., G. Tosti, P. Lubrano, S. Cutini, I. Mereu, and A. Berretta. "Artificial Neural Network classification of 4FGL sources." Monthly Notices of the Royal Astronomical Society 505, no. 4 (June 24, 2021): 5853–61. http://dx.doi.org/10.1093/mnras/stab1748.

Abstract:
ABSTRACT The Fermi-LAT DR1 and DR2 4FGL catalogues feature more than 5000 gamma-ray sources of which about one fourth are not associated with already known objects, and approximately one third are associated with blazars of uncertain nature. We perform a three-category classification of the 4FGL DR1 and DR2 sources independently, using an ensemble of Artificial Neural Networks (ANNs) to characterize them based on the likelihood of being a Pulsar (PSR), a BL Lac type blazar (BLL) or a Flat Spectrum Radio Quasar (FSRQ). We identify candidate PSR, BLL, and FSRQ among the unassociated sources with
11

Kaminski, P. C. "The Approximate Location of Damage through the Analysis of Natural Frequencies with Artificial Neural Networks." Proceedings of the Institution of Mechanical Engineers, Part E: Journal of Process Mechanical Engineering 209, no. 2 (August 1995): 117–23. http://dx.doi.org/10.1243/pime_proc_1995_209_238_02.

Abstract:
An effective and reliable damage assessment methodology is a valuable tool for the timely determination of damage and the deterioration stage of structural members as well as for non-destructive testing (NDT). In this work artificial neural networks are used to identify the approximate location of damage through the analysis of changes in the natural frequencies. At first, a methodology for the use of artificial neural networks for this purpose is described. Different ways of pre-processing the data are discussed. The proposed approach is illustrated through the simulation of a free-free beam
12

Konstantaras, A., M. R. Varley, F. Vallianatos, G. Collins, and P. Holifield. "A neuro-fuzzy approach to the reliable recognition of electric earthquake precursors." Natural Hazards and Earth System Sciences 4, no. 5/6 (October 18, 2004): 641–46. http://dx.doi.org/10.5194/nhess-4-641-2004.

Abstract:
Abstract. Electric Earthquake Precursor (EEP) recognition is essentially a problem of weak signal detection. An EEP signal, according to the theory of propagating cracks, is usually a very weak electric potential anomaly appearing on the Earth's electric field prior to an earthquake, often unobservable within the electric background, which is significantly stronger and embedded in noise. Furthermore, EEP signals vary in terms of duration and size making reliable recognition even more difficult. An average model for EEP signals has been identified based on a time function describing the evoluti
13

Maurya, Sunil Kumar, Xin Liu, and Tsuyoshi Murata. "Graph Neural Networks for Fast Node Ranking Approximation." ACM Transactions on Knowledge Discovery from Data 15, no. 5 (June 26, 2021): 1–32. http://dx.doi.org/10.1145/3446217.

Abstract:
Graphs arise naturally in numerous situations, including social graphs, transportation graphs, web graphs, protein graphs, etc. One of the important problems in these settings is to identify which nodes are important in the graph and how they affect the graph structure as a whole. Betweenness centrality and closeness centrality are two commonly used node ranking measures to find out influential nodes in the graphs in terms of information spread and connectivity. Both of these are considered as shortest path based measures as the calculations require the assumption that the information flows be
14

Fuentes, R., A. Poznyak, I. Chairez, M. Franco, and T. Poznyak. "Continuous Neural Networks Applied to Identify a Class of Uncertain Parabolic Partial Differential Equations." International Journal of Modeling, Simulation, and Scientific Computing 01, no. 04 (December 2010): 485–508. http://dx.doi.org/10.1142/s1793962310000304.

Abstract:
There are a lot of examples in science and engineering that may be described using a set of partial differential equations (PDEs). Those PDEs are obtained applying a process of mathematical modeling using complex physical, chemical, etc. laws. Nevertheless, there are many sources of uncertainties around the aforementioned mathematical representation. It is well known that neural networks can approximate a large set of continuous functions defined on a compact set. If the continuous mathematical model is incomplete or partially known, the methodology based on Differential Neural Network (DNN) p
15

Yuan, Hao, Yongjun Chen, Xia Hu, and Shuiwang Ji. "Interpreting Deep Models for Text Analysis via Optimization and Regularization Methods." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 5717–24. http://dx.doi.org/10.1609/aaai.v33i01.33015717.

Abstract:
Interpreting deep neural networks is of great importance to understand and verify deep models for natural language processing (NLP) tasks. However, most existing approaches only focus on improving the performance of models but ignore their interpretability. In this work, we propose an approach to investigate the meaning of hidden neurons of the convolutional neural network (CNN) models. We first employ saliency map and optimization techniques to approximate the detected information of hidden neurons from input sentences. Then we develop regularization terms and explore words in vocabulary to i
16

Bartlett, Peter L., David P. Helmbold, and Philip M. Long. "Gradient Descent with Identity Initialization Efficiently Learns Positive-Definite Linear Transformations by Deep Residual Networks." Neural Computation 31, no. 3 (March 2019): 477–502. http://dx.doi.org/10.1162/neco_a_01164.

Abstract:
We analyze algorithms for approximating a function [Formula: see text] mapping [Formula: see text] to [Formula: see text] using deep linear neural networks, that is, that learn a function [Formula: see text] parameterized by matrices [Formula: see text] and defined by [Formula: see text]. We focus on algorithms that learn through gradient descent on the population quadratic loss in the case that the distribution over the inputs is isotropic. We provide polynomial bounds on the number of iterations for gradient descent to approximate the least-squares matrix [Formula: see text], in the case whe
17

Wu, Ga, Buser Say, and Scott Sanner. "Scalable Planning with Deep Neural Network Learned Transition Models." Journal of Artificial Intelligence Research 68 (July 20, 2020): 571–606. http://dx.doi.org/10.1613/jair.1.11829.

Abstract:
In many complex planning problems with factored, continuous state and action spaces such as Reservoir Control, Heating Ventilation and Air Conditioning (HVAC), and Navigation domains, it is difficult to obtain a model of the complex nonlinear dynamics that govern state evolution. However, the ubiquity of modern sensors allows us to collect large quantities of data from each of these complex systems and build accurate, nonlinear deep neural network models of their state transitions. But there remains one major problem for the task of control – how can we plan with deep network learned transitio
18

Li, Pengfei, Yan Li, and Xiucheng Guo. "A Red-Light Running Prevention System Based on Artificial Neural Network and Vehicle Trajectory Data." Computational Intelligence and Neuroscience 2014 (2014): 1–11. http://dx.doi.org/10.1155/2014/892132.

Abstract:
The high frequency of red-light running and complex driving behaviors at the yellow onset at intersections cannot be explained solely by the dilemma zone and vehicle kinematics. In this paper, the author presented a red-light running prevention system which was based on artificial neural networks (ANNs) to approximate the complex driver behaviors during yellow and all-red clearance and serve as the basis of an innovative red-light running prevention system. The artificial neural network and vehicle trajectory are applied to identify the potential red-light runners. The ANN training time was al
19

Jassem, Wiktor, and Waldemar Grygiel. "Off-line classification of Polish vowel spectra using artificial neural networks." Journal of the International Phonetic Association 34, no. 1 (January 2004): 37–52. http://dx.doi.org/10.1017/s0025100304001537.

Abstract:
The mid-frequencies and bandwidths of formants 1–5 were measured at targets, at plus 0.01 s and at minus 0.01 s off the targets of vowels in a 100-word list read by five male and five female speakers, for a total of 3390 10-variable spectrum specifications. Each of the six Polish vowel phonemes was represented approximately the same number of times. The 3390 × 10 original-data matrix was processed by probabilistic neural networks to produce a classification of the spectra with respect to (a) vowel phoneme, (b) identity of the speaker, and (c) speaker gender. For (a) and (b), networks with added
20

Song, Chang-Yong. "A Study on Learning Parameters in Application of Radial Basis Function Neural Network Model to Rotor Blade Design Approximation." Applied Sciences 11, no. 13 (July 1, 2021): 6133. http://dx.doi.org/10.3390/app11136133.

Abstract:
Meta-models are generally applied to approximate multi-objective optimization, reliability analysis, reliability based design optimization, etc., not only in order to improve the efficiencies of numerical calculation and convergence, but also to facilitate the analysis of design sensitivity. The radial basis function neural network (RBFNN) is the meta-model employing hidden layer of radial units and output layer of linear units, and characterized by relatively fast training, generalization and compact type of networks. It is important to minimize some scattered noisy data to approximate the des
21

Mengall, G. "Fuzzy modelling for aircraft dynamics identification." Aeronautical Journal 105, no. 1051 (September 2001): 551–55. http://dx.doi.org/10.1017/s0001924000018029.

Abstract:
A new methodology is described to identify aircraft dynamics and extract the corresponding aerodynamic coefficients. The proposed approach makes use of fuzzy modelling for the identification process where input/output data are first classified by means of the concept of fuzzy clustering and then the linguistic rules are extracted from the fuzzy clusters. The fuzzy rule-based models are in the form of affine Takagi-Sugeno models, that are able to approximate a large class of nonlinear systems. A comparative study is performed with existing techniques based on the employment of neural networks,
22

Choi, Hwiyong, Haesang Yang, Seungjun Lee, and Woojae Seong. "Classification of Inter-Floor Noise Type/Position Via Convolutional Neural Network-Based Supervised Learning." Applied Sciences 9, no. 18 (September 7, 2019): 3735. http://dx.doi.org/10.3390/app9183735.

Abstract:
Inter-floor noise, i.e., noise transmitted from one floor to another floor through walls or ceilings in an apartment building or an office of a multi-layered structure, causes serious social problems in South Korea. Notably, inaccurate identification of the noise type and position by human hearing intensifies the conflicts between residents of apartment buildings. In this study, we propose a robust approach using deep convolutional neural networks (CNNs) to learn and identify the type and position of inter-floor noise. Using a single mobile device, we collected nearly 2000 inter-floor noise ev
23

Silveira, Ana Claudia da, Luis Paulo Baldissera Schorr, Elisabete Vuaden, Jéssica Talheimer Aguiar, Tarik Cuchi, and Giselli Castilho Moraes. "MODELAGEM DA ALTURA E DO INCREMENTO EM ÁREA TRANSVERSAL DE LOURO PARDO." Nativa 6, no. 2 (March 26, 2018): 191. http://dx.doi.org/10.31413/nativa.v6i2.4790.

Abstract:
The study aimed to determine the best technique for modeling the height and the periodic annual increment in cross-sectional area of Cordia trichotoma (Vell.) Arrab. ex Steud. To this end, the diameters at breast height and the total heights of 35 individuals located in a permanent preservation area and pasture of approximately 4 ha, in the municipality of Salto do Lontra, state of Paraná, were identified and measured. Subsequently, stem analysis was performed using a non-destructive method, verifying the increment over the last 5 years. For the estimation of the height and of the periodic incre
24

Cintra, Renato J., Stefan Duffner, Christophe Garcia, and Andre Leite. "Low-Complexity Approximate Convolutional Neural Networks." IEEE Transactions on Neural Networks and Learning Systems 29, no. 12 (December 2018): 5981–92. http://dx.doi.org/10.1109/tnnls.2018.2815435.

25

Llanas, B., and F. J. Sainz. "Constructive approximate interpolation by neural networks." Journal of Computational and Applied Mathematics 188, no. 2 (April 2006): 283–308. http://dx.doi.org/10.1016/j.cam.2005.04.019.

26

Takagi, Hideyuki, Toshiyuki Kouda, and Yoshihiro Kojima. "Neural-networks designed on Approximate Reasoning Architecture." Journal of Japan Society for Fuzzy Theory and Systems 3, no. 1 (1991): 133–41. http://dx.doi.org/10.3156/jfuzzy.3.1_133.

27

Shen, Zuliang, Ho Chung Lui, and Liya Ding. "Approximate case-based reasoning on neural networks." International Journal of Approximate Reasoning 10, no. 1 (January 1994): 75–98. http://dx.doi.org/10.1016/0888-613x(94)90010-8.

28

Robinson, Haakon. "Approximate Piecewise Affine Decomposition of Neural Networks." IFAC-PapersOnLine 54, no. 7 (2021): 541–46. http://dx.doi.org/10.1016/j.ifacol.2021.08.416.

29

Shyamalagowri, M., and R. Rajeswari. "Neural Network Predictive Controller Based Nonlinearity Identification Case Study: Nonlinear Process Reactor - CSTR." Advanced Materials Research 984-985 (July 2014): 1326–34. http://dx.doi.org/10.4028/www.scientific.net/amr.984-985.1326.

Abstract:
In the last decades, a substantial amount of research has been carried out on identification of nonlinear processes. Dynamical systems can be better represented by nonlinear models, which illustrate the global behavior of the nonlinear process reactor over the entire range. CSTR is a highly nonlinear chemical reactor. A compact and resourceful model which approximates both linear and nonlinear components of the process is in high demand. Process modeling is an essential constituent in the growth of sophisticated model-based process control systems. Driven by the contemporary economical needs, d
30

Xu, Xiangrui, Yaqin Li, and Cao Yuan. "“Identity Bracelets” for Deep Neural Networks." IEEE Access 8 (2020): 102065–74. http://dx.doi.org/10.1109/access.2020.2998784.

31

Masegosa, Andrés R., Rafael Cabañas, Helge Langseth, Thomas D. Nielsen, and Antonio Salmerón. "Probabilistic Models with Deep Neural Networks." Entropy 23, no. 1 (January 18, 2021): 117. http://dx.doi.org/10.3390/e23010117.

Abstract:
Recent advances in statistical inference have significantly expanded the toolbox of probabilistic modeling. Historically, probabilistic modeling has been constrained to very restricted model classes, where exact or approximate probabilistic inference is feasible. However, developments in variational inference, a general form of approximate probabilistic inference that originated in statistical physics, have enabled probabilistic modeling to overcome these limitations: (i) Approximate probabilistic inference is now possible over a broad class of probabilistic models containing a large number of
32

Stinchcombe, Maxwell B. "Precision and Approximate Flatness in Artificial Neural Networks." Neural Computation 7, no. 5 (September 1995): 1021–39. http://dx.doi.org/10.1162/neco.1995.7.5.1021.

Abstract:
Several of the major classes of artificial neural networks' output functions are linear combinations of elements of approximately flat sets. This gives a tool for understanding the precision problem as well as providing a rationale for mixing types of networks. Approximate flatness also helps explain the power of artificial neural network techniques relative to series regressions—series regressions take linear combinations of flat sets, while neural networks take linear combinations of the much larger class of approximately flat sets.
33

Narendra, K. S., and S. Mukhopadhyay. "Adaptive control using neural networks and approximate models." IEEE Transactions on Neural Networks 8, no. 3 (May 1997): 475–85. http://dx.doi.org/10.1109/72.572089.

34

Lotrič, Uroš, and Patricio Bulić. "Applicability of approximate multipliers in hardware neural networks." Neurocomputing 96 (November 2012): 57–65. http://dx.doi.org/10.1016/j.neucom.2011.09.039.

35

Gerber, B. S., T. G. Tape, R. S. Wigton, and P. S. Heckerling. "Entering the Black Box of Neural Networks." Methods of Information in Medicine 42, no. 03 (2003): 287–96. http://dx.doi.org/10.1055/s-0038-1634363.

Abstract:
Summary Objectives: Artificial neural networks have proved to be accurate predictive instruments in several medical domains, but have been criticized for failing to specify the information upon which their predictions are based. We used methods of relevance analysis and sensitivity analysis to determine the most important predictor variables for a validated neural network for community-acquired pneumonia. Methods: We studied a feed-forward, back-propagation neural network trained to predict pneumonia among patients presenting to an emergency department with fever or respiratory complaints. We
36

Mohaghegh, Shahab, Khalid Mohamad, Popa Andrei, Ameri Sam, and Dan Wood. "Performance Drivers in Restimulation of Gas-Storage Wells." SPE Reservoir Evaluation & Engineering 4, no. 06 (December 1, 2001): 536–42. http://dx.doi.org/10.2118/74715-pa.

Abstract:
Summary To maintain or enhance deliverability of gas-storage wells in the Clinton sand in northeast Ohio, an annual restimulation program was established in the late 1960s. The program calls for as many as 20 hydraulic fractures and refractures per year. Several wells have been refractured three to four times, while there are still wells that have only been fractured once in the past 30 years. As the program continues, many wells will be stimulated a second, third, or fourth time. This paper summarizes an attempt to carefully study the response of the Clinton sand to hydraulic fractures and to
37

Gunhan, Atilla E., László P. Csernai, and Jørgen Randrup. "Unsupervised Competitive Learning in Neural Networks." International Journal of Neural Systems 01, no. 02 (January 1989): 177–86. http://dx.doi.org/10.1142/s0129065789000086.

Abstract:
We study an idealized neural network that may approximate certain neurophysiological features of natural neural systems. The network contains a mutual lateral inhibition and is subjected to unsupervised learning by means of a Hebb-type learning principle. Its learning ability is analysed as a function of the strength of lateral inhibition and the training set.
38

Lee, Jonathan, and Hsiao-Fan Wang. "Selected Papers from IFSA'99." Journal of Advanced Computational Intelligence and Intelligent Informatics 5, no. 3 (May 20, 2001): 127. http://dx.doi.org/10.20965/jaciii.2001.p0127.

Abstract:
The past few years we have witnessed a crystallization of soft computing as a means towards the conception and design of intelligent systems. Soft Computing is a synergetic integration of neural networks, fuzzy logic and evolutionary computation including genetic algorithms, chaotic systems, and belief networks. In this volume, we are featuring seven papers devoted to soft computing as a special issue. These papers are selected from papers submitted to "The eighth International Fuzzy Systems Association World Congress (IFSA'99)", held in Taipei, Taiwan, in August 1999. Each paper received
39

Moran, Maira, Marcelo Faria, Gilson Giraldi, Luciana Bastos, Larissa Oliveira, and Aura Conci. "Classification of Approximal Caries in Bitewing Radiographs Using Convolutional Neural Networks." Sensors 21, no. 15 (July 31, 2021): 5192. http://dx.doi.org/10.3390/s21155192.

Abstract:
Dental caries is an extremely common problem in dentistry that affects a significant part of the population. Approximal caries are especially difficult to identify because their position makes clinical analysis difficult. Radiographic evaluation—more specifically, bitewing images—are mostly used in such cases. However, incorrect interpretations may interfere with the diagnostic process. To aid dentists in caries evaluation, computational methods and tools can be used. In this work, we propose a new method that combines image processing techniques and convolutional neural networks to identify a
40

Sutcliffe, P. R. "Substorm onset identification using neural networks and Pi2 pulsations." Annales Geophysicae 15, no. 10 (October 31, 1997): 1257–64. http://dx.doi.org/10.1007/s00585-997-1257-x.

Abstract:
Abstract. The pattern recognition capabilities of artificial neural networks (ANNs) have for the first time been used to identify Pi2 pulsations in magnetometer data, which in turn serve as indicators of substorm onsets and intensifications. The pulsation spectrum was used as input to the ANN and the network was trained to give an output of +1 for Pi2 signatures and -1 for non-Pi2 signatures. In order to evaluate the degree of success of the neural-network procedure for identifying Pi2 pulsations, the ANN was used to scan a number of data sets and the results compared with visual identificatio
41

Yang, Xiaofeng, Tielong Shen, and Katsutoshi Tamura. "Approximate solution of Hamilton-Jacobi inequality by neural networks." Applied Mathematics and Computation 84, no. 1 (June 1997): 49–64. http://dx.doi.org/10.1016/s0096-3003(96)00053-7.

42

Kim, Min Soo, Alberto A. Del Barrio, Leonardo Tavares Oliveira, Roman Hermida, and Nader Bagherzadeh. "Efficient Mitchell’s Approximate Log Multipliers for Convolutional Neural Networks." IEEE Transactions on Computers 68, no. 5 (May 1, 2019): 660–75. http://dx.doi.org/10.1109/tc.2018.2880742.

43

van der Baan, Mirko, and Christian Jutten. "Neural networks in geophysical applications." GEOPHYSICS 65, no. 4 (July 2000): 1032–47. http://dx.doi.org/10.1190/1.1444797.

Abstract:
Neural networks are increasingly popular in geophysics. Because they are universal approximators, these tools can approximate any continuous function with an arbitrary precision. Hence, they may yield important contributions to finding solutions to a variety of geophysical applications. However, knowledge of many methods and techniques recently developed to increase the performance and to facilitate the use of neural networks does not seem to be widespread in the geophysical community. Therefore, the power of these tools has not yet been explored to their full extent. In this paper, techniques
44

Tian, Hao, and Yue Qing Yu. "Neural Network Trajectory Tracking Control of Compliant Parallel Robot." Applied Mechanics and Materials 799-800 (October 2015): 1069–73. http://dx.doi.org/10.4028/www.scientific.net/amm.799-800.1069.

Abstract:
Trajectory tracking control of compliant parallel robot is presented. According to the characteristics of compliant joint, the system model is derived and the dynamic equation is obtained based on the Lagrange method. Radial Basis Function (RBF) neural network control is designed to globally approximate the model uncertainties. Further, an itemized approximate RBF control method is proposed for higher identify precision. The trajectory tracking abilities of two control strategies are compared through simulation.
45

Vanchurin, Vitaly. "The World as a Neural Network." Entropy 22, no. 11 (October 26, 2020): 1210. http://dx.doi.org/10.3390/e22111210.

Abstract:
We discuss a possibility that the entire universe on its most fundamental level is a neural network. We identify two different types of dynamical degrees of freedom: “trainable” variables (e.g., bias vector or weight matrix) and “hidden” variables (e.g., state vector of neurons). We first consider stochastic evolution of the trainable variables to argue that near equilibrium their dynamics is well approximated by Madelung equations (with free energy representing the phase) and further away from the equilibrium by Hamilton–Jacobi equations (with free energy representing the Hamilton’s principal
46

Bhaya, Eman Samir, and Zahraa Mahmoud Fadel. "Nearly Exponential Neural Networks Approximation in Lp Spaces." JOURNAL OF UNIVERSITY OF BABYLON for Pure and Applied Sciences 26, no. 1 (December 20, 2017): 103–13. http://dx.doi.org/10.29196/jub.v26i1.359.

Abstract:
In different applications, we can widely use the neural network approximation. They are being applied to solve many problems in computer science, engineering, physics, etc. The reason for successful application of neural network approximation is the neural network ability to approximate arbitrary function. In the last 30 years, many papers have been published showing that we can approximate any continuous function defined on a compact subset of the Euclidean spaces of dimensions greater than 1, uniformly using a neural network with one hidden layer. Here we prove that any real function in L_P
47

Duggal, Ashmeet Kaur, and Meenu Dave. "Intelligent Identity and Access Management Using Neural Networks." Indian Journal of Computer Science and Engineering 12, no. 1 (February 20, 2021): 47–56. http://dx.doi.org/10.21817/indjcse/2021/v12i1/211201154.

48

Li, Xiangyu, Chunhua Yuan, and Bonan Shan. "System Identification of Neural Signal Transmission Based on Backpropagation Neural Network." Mathematical Problems in Engineering 2020 (August 12, 2020): 1–8. http://dx.doi.org/10.1155/2020/9652678.

Abstract:
The identification method of backpropagation (BP) neural network is adopted to approximate the mapping relation between input and output of neurons based on neural firing trajectory in this paper. In advance, the input and output data of neural model is used for BP neural network learning, so that the identified BP neural network can present the transfer characteristics of the model, which makes the network precisely predict the firing trajectory of the neural model. In addition, the method is applied to identify electrophysiological experimental data of real neurons, so that the output of the
49

Abboud, Ralph, Ismail Ceylan, and Thomas Lukasiewicz. "Learning to Reason: Leveraging Neural Networks for Approximate DNF Counting." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (April 3, 2020): 3097–104. http://dx.doi.org/10.1609/aaai.v34i04.5705.

Abstract:
Weighted model counting (WMC) has emerged as a prevalent approach for probabilistic inference. In its most general form, WMC is #P-hard. Weighted DNF counting (weighted #DNF) is a special case, where approximations with probabilistic guarantees are obtained in O(nm), where n denotes the number of variables, and m the number of clauses of the input DNF, but this is not scalable in practice. In this paper, we propose a neural model counting approach for weighted #DNF that combines approximate model counting with deep learning, and accurately approximates model counts in linear time when width is
50

Cao, Feilong, Shaobo Lin, and Zongben Xu. "Constructive approximate interpolation by neural networks in the metric space." Mathematical and Computer Modelling 52, no. 9-10 (November 2010): 1674–81. http://dx.doi.org/10.1016/j.mcm.2010.06.035.
