
Journal articles on the topic 'Deep learning neural network'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 journal articles for your research on the topic 'Deep learning neural network.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles in a wide variety of disciplines and organise your bibliography correctly.

1

Banzi, Jamal, Isack Bulugu, and Zhongfu Ye. "Deep Predictive Neural Network: Unsupervised Learning for Hand Pose Estimation." International Journal of Machine Learning and Computing 9, no. 4 (August 2019): 432–39. http://dx.doi.org/10.18178/ijmlc.2019.9.4.822.

2

Nizami Huseyn, Elcin. "Application of Deep Learning Technology in Disease Diagnosis." Nature and Science 04, no. 05 (December 28, 2020): 4–11. http://dx.doi.org/10.36719/2707-1146/05/4-11.

Abstract:
The rapid development of deep learning technology provides new methods and ideas for assisting physicians in high-precision disease diagnosis. This article reviews the principles and features of deep learning models commonly used in medical disease diagnosis, namely convolutional neural networks, deep belief networks, restricted Boltzmann machines, and recurrent neural network models. Based on several typical diseases, the application of deep learning technology in the field of disease diagnosis is introduced; finally, the future development direction is proposed based on the limitations of current deep learning technology in disease diagnosis.
Keywords: Artificial Intelligence; Deep Learning; Disease Diagnosis; Neural Network
3

Bashar, Abul. "Survey on Evolving Deep Learning Neural Network Architectures." Journal of Artificial Intelligence and Capsule Networks 2019, no. 2 (December 14, 2019): 73–82. http://dx.doi.org/10.36548/jaicn.2019.2.003.

Abstract:
Deep learning, a subcategory of machine learning, follows the human instinct of learning by example to produce accurate results. Deep learning trains a computer framework to classify tasks directly from documents available as text, images, or sound. Most often, deep learning uses neural networks to perform the classification; such models are referred to as deep neural networks. One of the most common deep neural networks, used in a broad range of applications, is the convolutional neural network, which provides automated feature extraction by learning features directly from the images or text, unlike classical machine learning, where features are extracted manually. This enables deep learning neural networks to achieve state-of-the-art accuracy that often exceeds human performance. This paper presents a survey of the deep learning neural network architectures used in various applications to achieve accurate classification with automated feature extraction.
4

Bunrit, Supaporn, Thuttaphol Inkian, Nittaya Kerdprasop, and Kittisak Kerdprasop. "Text-Independent Speaker Identification Using Deep Learning Model of Convolution Neural Network." International Journal of Machine Learning and Computing 9, no. 2 (April 2019): 143–48. http://dx.doi.org/10.18178/ijmlc.2019.9.2.778.

5

Bodyansky, E. V., and T. E. Antonenko. "Deep neo-fuzzy neural network and its learning." Bionics of Intelligence 1, no. 92 (June 2, 2019): 3–8. http://dx.doi.org/10.30837/bi.2019.1(92).01.

Abstract:
Optimizing the learning speed of deep neural networks is an extremely important issue. Modern approaches focus on the use of neural networks based on the Rosenblatt perceptron. But the results obtained are not satisfactory for industrial and scientific needs in the context of the speed of learning neural networks. Also, this approach stumbles upon the problems of a vanishing and exploding gradient. To solve the problem, the paper proposes using a neo-fuzzy neuron, whose properties are based on the F-transform. The article discusses the use of the neo-fuzzy neuron as the main component of the neural network. The architecture of a deep neo-fuzzy neural network is shown, as well as a backpropagation algorithm for this architecture with a triangular membership function for the neo-fuzzy neuron. The main advantages of using the neo-fuzzy neuron as the main component of the neural network are given. The article describes the properties of the neo-fuzzy neuron that address the issues of improving speed and vanishing or exploding gradients. The proposed neo-fuzzy deep neural network architecture is compared with standard deep networks based on the Rosenblatt perceptron.
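
The abstract does not spell out the neo-fuzzy neuron itself, so the following is only a rough NumPy sketch of the classic construction (one weighted triangular-membership synapse per input, after Yamakawa's formulation), not necessarily the exact variant the authors use:

import numpy as np

def triangular_memberships(x, centers):
    # Membership j peaks (=1) at centers[j] and falls linearly to 0 at the
    # neighbouring centers -- a standard Ruspini partition of the input range.
    mu = np.zeros(len(centers))
    for j, c in enumerate(centers):
        left = centers[j - 1] if j > 0 else c - 1.0
        right = centers[j + 1] if j < len(centers) - 1 else c + 1.0
        mu[j] = max(0.0, min((x - left) / (c - left), (right - x) / (right - c)))
    return mu

def neo_fuzzy_neuron(x_vec, centers, weights):
    # Each input feeds a nonlinear synapse: a weighted sum of its membership
    # degrees. The neuron output is the sum over all synapses; only the
    # weights are trained, which keeps backpropagation cheap.
    return sum(weights[i] @ triangular_memberships(x, centers)
               for i, x in enumerate(x_vec))

rng = np.random.default_rng(0)
centers = np.linspace(0.0, 1.0, 5)   # 5 membership functions per input
weights = rng.normal(size=(3, 5))    # 3 inputs x 5 trainable weights each
print(neo_fuzzy_neuron([0.2, 0.5, 0.9], centers, weights))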
6

Choi, Young-Seok. "Neuromorphic Learning: Deep Spiking Neural Network." Physics and High Technology 28, no. 4 (April 30, 2019): 16–21. http://dx.doi.org/10.3938/phit.28.014.

7

Patel, Hima, Amit Thakkar, Mrudang Pandya, and Kamlesh Makwana. "Neural network with deep learning architectures." Journal of Information and Optimization Sciences 39, no. 1 (November 10, 2017): 31–38. http://dx.doi.org/10.1080/02522667.2017.1372908.

8

Kriegeskorte, Nikolaus, and Tal Golan. "Neural network models and deep learning." Current Biology 29, no. 7 (April 2019): R231–R236. http://dx.doi.org/10.1016/j.cub.2019.02.034.

9

Muşat, Bogdan, and Răzvan Andonie. "Semiotic Aggregation in Deep Learning." Entropy 22, no. 12 (December 3, 2020): 1365. http://dx.doi.org/10.3390/e22121365.

Abstract:
Convolutional neural networks utilize a hierarchy of neural network layers. The statistical aspects of information concentration in successive layers can bring an insight into the feature abstraction process. We analyze the saliency maps of these layers from the perspective of semiotics, also known as the study of signs and sign-using behavior. In computational semiotics, this aggregation operation (known as superization) is accompanied by a decrease of spatial entropy: signs are aggregated into supersigns. Using spatial entropy, we compute the information content of the saliency maps and study the superization processes which take place between successive layers of the network. In our experiments, we visualize the superization process and show how the obtained knowledge can be used to explain the neural decision model. In addition, we attempt to optimize the architecture of the neural model employing a semiotic greedy technique. To the best of our knowledge, this is the first application of computational semiotics in the analysis and interpretation of deep neural networks.
10

Sineglazov, V.M., and O.I. Chumachenko. "Structural-parametric synthesis of deep learning neural networks." Artificial Intelligence 25, no. 4 (December 25, 2020): 42–51. http://dx.doi.org/10.15407/jai2020.04.042.

Abstract:
The structural-parametric synthesis of deep learning neural networks, in particular the convolutional neural networks used in image processing, is considered. The classification of modern architectures of convolutional neural networks is given. It is shown that almost every convolutional neural network, depending on its topology, has unique blocks that determine its essential features (for example, the Squeeze-and-Excitation block, the Convolutional Block Attention Module (channel attention module, spatial attention module), the Residual block, the Inception module, and the ResNeXt block). The problem of structural-parametric synthesis of convolutional neural networks is stated, and a genetic algorithm is proposed for its solution. The genetic algorithm is used to effectively overcome a large search space: on the one hand, to generate possible topologies of the convolutional neural network, namely the choice of specific blocks and their locations in the structure of the convolutional neural network, and on the other hand, to solve the problem of structural-parametric synthesis of the convolutional neural network of the selected topology. The most significant parameters of the convolutional neural network are determined. An encoding method is proposed that allows each network structure to be represented as a string of fixed length in binary format. After that, several standard genetic operations are applied, i.e. selection, mutation and crossover, which eliminate weak individuals of the previous generation and use them to generate competitive ones. An example of solving this problem is given; a database of patients with thyroid disease (ultrasound results) was used as a training sample.
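
The paper's concrete encoding is not given in the abstract; the sketch below illustrates the general scheme only (a fixed-length bit string indexing a vocabulary of candidate blocks, with standard one-point crossover and bit-flip mutation; the block names are placeholders, not the authors' list):

import random

random.seed(0)

BLOCKS = ["plain_conv", "residual", "inception", "se_block", "cbam"]
BITS_PER_SLOT = 3   # 3 bits select one of up to 8 block types per slot
N_SLOTS = 6         # fixed-length genome: 6 structural slots

def random_genome():
    return [random.randint(0, 1) for _ in range(BITS_PER_SLOT * N_SLOTS)]

def decode(genome):
    # Map each 3-bit group to a block type; out-of-range codes wrap around.
    topology = []
    for s in range(N_SLOTS):
        bits = genome[s * BITS_PER_SLOT:(s + 1) * BITS_PER_SLOT]
        topology.append(BLOCKS[int("".join(map(str, bits)), 2) % len(BLOCKS)])
    return topology

def crossover(a, b):
    cut = random.randrange(1, len(a))   # one-point crossover
    return a[:cut] + b[cut:]

def mutate(genome, rate=0.05):
    return [bit ^ (random.random() < rate) for bit in genome]  # bit flips

child = mutate(crossover(random_genome(), random_genome()))
print(decode(child))   # e.g. ['residual', 'inception', ...]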
11

Fathima, Sheeba. "Music Genre Classification using Deep Learning." International Journal for Research in Applied Science and Engineering Technology 9, no. VII (July 10, 2021): 66–71. http://dx.doi.org/10.22214/ijraset.2021.36087.

Abstract:
Many subjects, including music genre prediction, are affected by digital music production. Machine learning techniques were used to classify music genres in this research. Deep neural networks (DNNs) have recently been demonstrated to be effective in a variety of classification tasks, including music genre classification. In this paper, we propose two methods for boosting music genre classification with convolutional neural networks: 1) combining peak pooling and average pooling, in a process inspired by residual learning, to provide more statistical information to higher-level neural networks; and 2) using shortcut connections to bypass one or more layers. To perform classification, the convolutional network's output is fed into another deep neural network. Our preliminary experimental results on the GTZAN data set show that the above two methods, especially the second one, can effectively improve classification accuracy when compared to two different network topologies.
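
The exact topology is not published in the abstract; as an illustrative Keras sketch of the two ideas (peak pooling combined with average pooling, plus a shortcut connection that bypasses a layer), with the input shape and genre count assumed rather than taken from the paper:

import tensorflow as tf
from tensorflow.keras import layers

def genre_cnn(input_shape=(128, 128, 1), n_genres=10):
    inp = layers.Input(shape=input_shape)     # e.g. a mel-spectrogram patch
    x = layers.Conv2D(32, 3, padding="same", activation="relu")(inp)
    shortcut = x                               # source of the skip connection
    x = layers.Conv2D(32, 3, padding="same", activation="relu")(x)
    x = layers.Add()([x, shortcut])            # 2) shortcut bypassing a layer
    peak = layers.MaxPooling2D(2)(x)           # 1) peak pooling ...
    avg = layers.AveragePooling2D(2)(x)        # ... combined with average pooling
    x = layers.Concatenate()([peak, avg])
    x = layers.GlobalAveragePooling2D()(x)
    return tf.keras.Model(inp, layers.Dense(n_genres, activation="softmax")(x))

model = genre_cnn()
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.summary()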
12

Pang, Bo, Erik Nijkamp, and Ying Nian Wu. "Deep Learning With TensorFlow: A Review." Journal of Educational and Behavioral Statistics 45, no. 2 (September 10, 2019): 227–48. http://dx.doi.org/10.3102/1076998619872761.

Abstract:
This review covers the core concepts and design decisions of TensorFlow. TensorFlow, originally created by researchers at Google, is the most popular one among the plethora of deep learning libraries. In the field of deep learning, neural networks have achieved tremendous success and gained wide popularity in various areas. This family of models also has tremendous potential to promote data analysis and modeling for various problems in educational and behavioral sciences given its flexibility and scalability. We give the reader an overview of the basics of neural network models such as the multilayer perceptron, the convolutional neural network, and stochastic gradient descent, the most commonly used optimization method for neural network models. However, the implementation of these models and optimization algorithms is time-consuming and error-prone. Fortunately, TensorFlow greatly eases and accelerates the research and application of neural network models. We review several core concepts of TensorFlow such as graph construction functions, graph execution tools, and TensorFlow’s visualization tool, TensorBoard. Then, we apply these concepts to build and train a convolutional neural network model to classify handwritten digits. This review is concluded by a comparison of low- and high-level application programming interfaces and a discussion of graphical processing unit support, distributed training, and probabilistic modeling with the TensorFlow Probability library.
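
As a concrete taste of what the review walks through, here is a minimal sketch of training a small convolutional network on the MNIST handwritten digits with the high-level Keras API and stochastic gradient descent; it is an illustration in the spirit of the review, not the authors' code:

import tensorflow as tf

# Load MNIST and scale pixel values to [0, 1].
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train[..., None] / 255.0, x_test[..., None] / 255.0

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=(28, 28, 1)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="sgd",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=3, validation_data=(x_test, y_test))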
13

Kiyak, Emre, and Gulay Unal. "Small aircraft detection using deep learning." Aircraft Engineering and Aerospace Technology 93, no. 4 (June 2, 2021): 671–81. http://dx.doi.org/10.1108/aeat-11-2020-0259.

Abstract:
Purpose: The paper aims to address a tracking algorithm based on deep learning; four deep learning tracking models are developed and compared with each other to prevent collision and to obtain target tracking in autonomous aircraft. Design/methodology/approach: First, to follow the visual target, detection methods were used, and then the tracking methods were examined. Here, four models (deep convolutional neural networks (DCNN), deep convolutional neural networks with fine-tuning (DCNNFN), transfer learning with deep convolutional neural network (TLDCNN) and fine-tuning deep convolutional neural network with transfer learning (FNDCNNTL)) were developed. Findings: The training time of DCNN took 9 min 33 s, while the accuracy percentage was calculated as 84%. In DCNNFN, the training time of the network was calculated as 4 min 26 s and the accuracy percentage was 91%. The training of TLDCNN took 34 min 49 s and the accuracy percentage was calculated as 95%. With FNDCNNTL, the training time of the network was calculated as 34 min 33 s and the accuracy percentage was nearly 100%. Originality/value: Compared to the results in the literature ranging from 89.4% to 95.6%, using FNDCNNTL, better results were found in the paper.
14

Zambra, Matteo, Amos Maritan, and Alberto Testolin. "Emergence of Network Motifs in Deep Neural Networks." Entropy 22, no. 2 (February 11, 2020): 204. http://dx.doi.org/10.3390/e22020204.

Abstract:
Network science can offer fundamental insights into the structural and functional properties of complex systems. For example, it is widely known that neuronal circuits tend to organize into basic functional topological modules, called network motifs. In this article, we show that network science tools can be successfully applied also to the study of artificial neural networks operating according to self-organizing (learning) principles. In particular, we study the emergence of network motifs in multi-layer perceptrons, whose initial connectivity is defined as a stack of fully-connected, bipartite graphs. Simulations show that the final network topology is shaped by learning dynamics, but can be strongly biased by choosing appropriate weight initialization schemes. Overall, our results suggest that non-trivial initialization strategies can make learning more effective by promoting the development of useful network motifs, which are often surprisingly consistent with those observed in general transduction networks.
15

Matsumoto, Kazuma, Takato Tatsumi, Hiroyuki Sato, Tim Kovacs, and Keiki Takadama. "XCSR Learning from Compressed Data Acquired by Deep Neural Network." Journal of Advanced Computational Intelligence and Intelligent Informatics 21, no. 5 (September 20, 2017): 856–67. http://dx.doi.org/10.20965/jaciii.2017.p0856.

Abstract:
The correctness rate of classification of neural networks is improved by deep learning, which is machine learning of neural networks, and its accuracy is higher than that of the human brain in some fields. This paper proposes a hybrid system of a neural network and the Learning Classifier System (LCS). LCS is evolutionary rule-based machine learning using reinforcement learning. To increase the correctness rate of classification, we combine the neural network and the LCS. This paper conducted benchmark experiments to verify the proposed system. The experiments revealed that: 1) the correctness rate of classification of the proposed system is higher than the conventional LCS (XCSR) and a normal neural network; and 2) the covering mechanism of XCSR raises the correctness rate of the proposed system.
16

Zhou, Ding-Xuan. "Deep distributed convolutional neural networks: Universality." Analysis and Applications 16, no. 06 (November 2018): 895–919. http://dx.doi.org/10.1142/s0219530518500124.

Abstract:
Deep learning based on structured deep neural networks has provided powerful applications in various fields. The structures imposed on the deep neural networks are crucial, which makes deep learning essentially different from classical schemes based on fully connected neural networks. One of the commonly used deep neural network structures is generated by convolutions. The produced deep learning algorithms form the family of deep convolutional neural networks. Despite their power in some practical domains, little is known about the mathematical foundation of deep convolutional neural networks, such as universality of approximation. In this paper, we propose a family of new structured deep neural networks: deep distributed convolutional neural networks. We show that these deep neural networks have the same order of computational complexity as the deep convolutional neural networks, and we prove their universality of approximation. Some ideas of our analysis are from ridge approximation, wavelets, and learning theory.
17

Luo, Shaobo, Yuzhi Shi, Lip Ket Chin, Yi Zhang, Bihan Wen, Ying Sun, Binh T. T. Nguyen, et al. "Rare bioparticle detection via deep metric learning." RSC Advances 11, no. 29 (2021): 17603–10. http://dx.doi.org/10.1039/d1ra02869c.

Abstract:
Conventional deep neural networks use simple classifiers to obtain highly accurate results. However, they have limitations in practical applications. This study demonstrates a robust deep metric neural network model for rare bioparticle detection.
18

Xiao, Yong-Liang, Sikun Li, Guohai Situ, and Zhisheng You. "Unitary learning for diffractive deep neural network." Optics and Lasers in Engineering 139 (April 2021): 106499. http://dx.doi.org/10.1016/j.optlaseng.2020.106499.

19

Meyer, Jesse G. "Deep learning neural network tools for proteomics." Cell Reports Methods 1, no. 2 (June 2021): 100003. http://dx.doi.org/10.1016/j.crmeth.2021.100003.

20

Xie, Xurong, Xunying Liu, Tan Lee, and Lan Wang. "Bayesian Learning for Deep Neural Network Adaptation." IEEE/ACM Transactions on Audio, Speech, and Language Processing 29 (2021): 2096–110. http://dx.doi.org/10.1109/taslp.2021.3084072.

21

Djellali, Choukri, and Mehdi Adda. "An Enhanced Deep Learning Model to Network Attack Detection, by using Parameter Tuning, Hidden Markov Model and Neural Network." Journal of Ubiquitous Systems and Pervasive Networks 15, no. 01 (March 1, 2021): 35–41. http://dx.doi.org/10.5383/juspn.15.01.005.

Abstract:
In recent years, Deep Learning has become a critical success factor for Machine Learning. In the present study, we introduce a Deep Learning model for network attack detection, using a Hidden Markov Model and Artificial Neural Networks. We used a model aggregation technique to find a single consolidated Deep Learning model for better data fitting. The model selection technique is applied to optimize the bias-variance trade-off of the expected prediction. We demonstrate the model's ability to reduce the convergence time, reach the optimal solution and obtain more cluttered decision boundaries. Experimental studies conducted on attack detection indicate that our proposed model outperforms existing Deep Learning models and gives enhanced generalization.
22

Merkel, Gregory, Richard Povinelli, and Ronald Brown. "Short-Term Load Forecasting of Natural Gas with Deep Neural Network Regression †." Energies 11, no. 8 (August 2, 2018): 2008. http://dx.doi.org/10.3390/en11082008.

Abstract:
Deep neural networks are proposed for short-term natural gas load forecasting. Deep learning has proven to be a powerful tool for many classification problems, seeing significant use in machine learning fields such as image recognition and speech processing. We provide an overview of natural gas forecasting. Next, the deep learning method used, contrastive divergence, is explained. We compare our proposed deep neural network method to a linear regression model and a traditional artificial neural network on 62 operating areas, each of which has at least 10 years of data. The proposed deep network outperforms traditional artificial neural networks by 9.83% weighted mean absolute percent error (WMAPE).
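
The abstract reports the gain in WMAPE without defining it; the usual definition, sketched in NumPy (errors weighted by actual load, so heavy-demand days count proportionally more):

import numpy as np

def wmape(actual, forecast):
    # Weighted mean absolute percent error, in percent.
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    return 100.0 * np.abs(actual - forecast).sum() / np.abs(actual).sum()

print(wmape([100, 200, 400], [110, 190, 380]))   # -> 5.714...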
23

Gupta, Vandit. "COVID-19 Detection using Deep Learning." International Journal for Modern Trends in Science and Technology 6, no. 12 (December 18, 2020): 421–25. http://dx.doi.org/10.46501/ijmtst061281.

Abstract:
Deep learning is an artificial intelligence function that imitates the workings of the human brain in processing data and creating patterns for use in decision making. Deep learning is a subset of machine learning in artificial intelligence (AI) whose networks are capable of learning and recognizing patterns from data that is unstructured or unlabelled. It is also known as deep neural learning or deep neural networks. Convolutional Neural Networks (ConvNets or CNNs) are a category of neural networks that have proven very effective in areas such as image recognition and classification. ConvNets have been successful in identifying faces, objects and traffic signs, apart from powering vision in robots and self-driving cars. ConvNets can also be used for radiology imaging, which helps in disease detection. This paper helps in detecting COVID-19 from the X-ray images provided to the model using convolutional neural networks (CNN) and image augmentation techniques.
24

Wright, Alec, Eero-Pekka Damskägg, Lauri Juvela, and Vesa Välimäki. "Real-Time Guitar Amplifier Emulation with Deep Learning." Applied Sciences 10, no. 3 (January 21, 2020): 766. http://dx.doi.org/10.3390/app10030766.

Abstract:
This article investigates the use of deep neural networks for black-box modelling of audio distortion circuits, such as guitar amplifiers and distortion pedals. Both a feedforward network, based on the WaveNet model, and a recurrent neural network model are compared. To determine a suitable hyperparameter configuration for the WaveNet, models of three popular audio distortion pedals were created: the Ibanez Tube Screamer, the Boss DS-1, and the Electro-Harmonix Big Muff Pi. It is also shown that three minutes of audio data is sufficient for training the neural network models. Real-time implementations of the neural networks were used to measure their computational load. To further validate the results, models of two valve amplifiers, the Blackstar HT-5 Metal and the Mesa Boogie 5:50 Plus, were created, and subjective tests were conducted. The listening test results show that the models of the first amplifier could be identified as different from the reference, but the sound quality of the best models was judged to be excellent. In the case of the second guitar amplifier, many listeners were unable to hear the difference between the reference signal and the signals produced with the two largest neural network models. This study demonstrates that the neural network models can convincingly emulate highly nonlinear audio distortion circuits, whilst running in real-time, with some models requiring only a relatively small amount of processing power to run on a modern desktop computer.
25

Reddy, M. Venkata Krishna, and Pradeep S. "Envision Foundational of Convolution Neural Network." International Journal of Innovative Technology and Exploring Engineering 10, no. 6 (April 30, 2021): 54–60. http://dx.doi.org/10.35940/ijitee.f8804.0410621.

26

Li, Jingmei, Weifei Wu, Di Xue, and Peng Gao. "Multi-Source Deep Transfer Neural Network Algorithm." Sensors 19, no. 18 (September 16, 2019): 3992. http://dx.doi.org/10.3390/s19183992.

Abstract:
Transfer learning can enhance the classification performance of a target domain with insufficient training data by utilizing knowledge relating to the target domain from a source domain. Nowadays, it is common to see two or more source domains available for knowledge transfer, which can improve the performance of learning tasks in the target domain. However, the classification performance of the target domain decreases due to the mismatching of probability distributions. Recent studies have shown that deep learning can build deep structures by extracting more effective features to resist the mismatching. In this paper, we propose a new multi-source deep transfer neural network algorithm, MultiDTNN, based on a convolutional neural network and multi-source transfer learning. In MultiDTNN, joint probability distribution adaptation (JPDA) is used to reduce the mismatching between source and target domains and to enhance the transferability of source-domain features in deep neural networks. Then, a convolutional neural network is trained on the datasets of each source and target domain to obtain a set of classifiers. Finally, the designed selection strategy selects the classifier with the smallest classification error on the target domain from the set to assemble the MultiDTNN framework. The effectiveness of the proposed MultiDTNN is verified by comparing it with other state-of-the-art deep transfer learning methods on three datasets.
27

Han, Yuna, and Byung-Woo Hong. "Deep Learning Based on Fourier Convolutional Neural Network Incorporating Random Kernels." Electronics 10, no. 16 (August 19, 2021): 2004. http://dx.doi.org/10.3390/electronics10162004.

Abstract:
In recent years, convolutional neural networks have been studied in the Fourier domain for a limited environment, where competitive results can be expected for conventional image classification tasks in the spatial domain. We present a novel, efficient Fourier convolutional neural network, where a new activation function is used, the additional shift Fourier transformation process is eliminated, and the number of learnable parameters is reduced. First, the Phase Rectified Linear Unit (PhaseReLU) is proposed, which is equivalent to the Rectified Linear Unit (ReLU) in the spatial domain. Second, in the proposed Fourier network, the shift Fourier transform is removed, since the process is inessential for training. Lastly, we introduce two ways of reducing the number of weight parameters in the Fourier network. The basic method is to use a three-by-three kernel instead of five-by-five in our proposed Fourier convolutional neural network. We use the random kernel in our efficient Fourier convolutional neural network, where the standard deviation of the Gaussian distribution is used as a weight parameter. In other words, since only two scalars for each imaginary and real component per channel are required, a very small number of parameters is applied compressively. Therefore, as a result of experiments with shallow networks such as LeNet-3 and LeNet-5, our method achieves competitive accuracy with conventional convolutional neural networks while dramatically reducing the number of parameters. Furthermore, our proposed Fourier network, using a basic three-by-three kernel, mostly performs with higher accuracy than traditional convolutional neural networks in shallow and deep neural networks. Our experiments show that the presented kernel methods have the potential to be applied in all architectures based on convolutional neural networks.
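
Fourier-domain CNNs rest on the convolution theorem: circular convolution in the spatial domain equals elementwise multiplication of the spectra. A quick NumPy check of that identity (illustrative only, not the paper's network):

import numpy as np

rng = np.random.default_rng(0)
image = rng.normal(size=(8, 8))
kernel = rng.normal(size=(8, 8))   # kernel zero-padded to the image size

# Frequency domain: multiply the spectra, then transform back.
fft_version = np.real(np.fft.ifft2(np.fft.fft2(image) * np.fft.fft2(kernel)))

# Spatial domain: brute-force circular convolution.
direct = np.zeros_like(image)
for u in range(8):
    for v in range(8):
        for i in range(8):
            for j in range(8):
                direct[u, v] += image[i, j] * kernel[(u - i) % 8, (v - j) % 8]

print(np.allclose(fft_version, direct))   # True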
28

Shchetinin, Eugene Yu, and Leonid Sevastianov. "Improving the Learning Power of Artificial Intelligence Using Multimodal Deep Learning." EPJ Web of Conferences 248 (2021): 01017. http://dx.doi.org/10.1051/epjconf/202124801017.

Abstract:
Computer paralinguistic analysis is widely used in security systems, biometric research, call centers and banks. Paralinguistic models estimate different physical properties of voice, such as pitch, intensity, formants and harmonics, to classify emotions. The main goal is to find features that are robust to outliers while retaining the variety of human voice properties at the same time. Moreover, the model used must be able to estimate features on a time scale for an effective analysis of voice variability. In this paper a paralinguistic model based on a Bidirectional Long Short-Term Memory (BLSTM) neural network is described, which was trained for vocal-based emotion recognition. The main advantage of this network architecture is that each module of the network consists of several interconnected layers, providing the ability to recognize flexible long-term dependencies in data, which is important in the context of vocal analysis. We explain the architecture of a bidirectional neural network model, its main advantages over regular neural networks, and compare experimental results of the BLSTM network with other models.
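
As a rough Keras sketch of the stacked bidirectional LSTM classifier described (the per-frame feature dimension and the number of emotion classes below are assumptions, not values from the paper):

import tensorflow as tf
from tensorflow.keras import layers

# Input: a sequence of per-frame acoustic features (pitch, intensity,
# formants, ...); output: one of the emotion classes.
model = tf.keras.Sequential([
    layers.Input(shape=(None, 40)),                # variable-length sequences
    layers.Bidirectional(layers.LSTM(64, return_sequences=True)),
    layers.Bidirectional(layers.LSTM(64)),         # stacked BLSTM modules
    layers.Dense(6, activation="softmax"),         # assumed 6 emotion classes
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.summary()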
29

Yan, Yilin, Min Chen, Saad Sadiq, and Mei-Ling Shyu. "Efficient Imbalanced Multimedia Concept Retrieval by Deep Learning on Spark Clusters." International Journal of Multimedia Data Engineering and Management 8, no. 1 (January 2017): 1–20. http://dx.doi.org/10.4018/ijmdem.2017010101.

Abstract:
The classification of imbalanced datasets has recently attracted significant attention due to its implications in several real-world use cases. The classifiers developed on datasets with skewed distributions tend to favor the majority classes and are biased against the minority class. Despite extensive research interest, imbalanced data classification remains a challenge in data mining research, especially for multimedia data. Our attempt to overcome this hurdle is to develop a convolutional neural network (CNN) based deep learning solution integrated with a bootstrapping technique. Considering that convolutional neural networks are very computationally expensive coupled with big training datasets, we propose to extract features from pre-trained convolutional neural network models and feed those features to another fully connected neural network. Spark implementation shows promising performance of our model in handling big datasets with respect to feasibility and scalability.
30

Nurmukhanov, T. A., and B. S. Daribayev. "RECOGNITION OF THE TEXT BY MEANS OF DEEP LEARNING." BULLETIN Series of Physics & Mathematical Sciences 69, no. 1 (March 10, 2020): 378–83. http://dx.doi.org/10.51889/2020-1.1728-7901.68.

Abstract:
Using neural networks, many variations of object classification can be performed. Neural networks are used in many areas of recognition, and a large subfield is text recognition. The paper considers the optimal way to build a network for text recognition, including the optimal choice of activation functions and optimizers. The article also checks the correctness of text recognition with different optimization methods. This article is devoted to the analysis of convolutional neural networks. In the article, a convolutional neural network model is trained with supervised learning ("learning with a teacher"), a type of training for neural networks in which both the input data and the desired result are provided, so that the network learns to strive for the target output given the input.
31

Yan, Peizhi, and Yi Feng. "Using Convolution and Deep Learning in Gomoku Game Artificial Intelligence." Parallel Processing Letters 28, no. 03 (September 2018): 1850011. http://dx.doi.org/10.1142/s0129626418500111.

Abstract:
Gomoku is an ancient board game. The traditional approach to solving the Gomoku game is to apply tree search on a Gomoku game tree. Although the rules of Gomoku are straightforward, the game tree complexity is enormous. Unlike many other board games such as chess and shogi, the Gomoku board state is more intuitive. That is to say, analyzing the visual patterns on a Gomoku game board is fundamental to playing this game. In this paper, we designed a deep convolutional neural network model to help the machine learn from the training data (collected from human players). Based on this original neural network model, we made some changes and obtained two variant neural networks. We compared the performance of the original neural network with its variants in our experiments. Our original neural network model got 69% accuracy on the training data and 38% accuracy on the testing data. Because the decision made by the neural network is intuitive, we also designed a hard-coded convolution-based Gomoku evaluation function to assist the neural network in making decisions. This hybrid Gomoku artificial intelligence (AI) further improved the performance of a pure neural network-based Gomoku AI.
32

Constantinides, G. A. "Rethinking arithmetic for deep neural networks." Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 378, no. 2166 (January 20, 2020): 20190051. http://dx.doi.org/10.1098/rsta.2019.0051.

Abstract:
We consider efficiency in the implementation of deep neural networks. Hardware accelerators are gaining interest as machine learning becomes one of the drivers of high-performance computing. In these accelerators, the directed graph describing a neural network can be implemented as a directed graph describing a Boolean circuit. We make this observation precise, leading naturally to an understanding of practical neural networks as discrete functions, and show that the so-called binarized neural networks are functionally complete. In general, our results suggest that it is valuable to consider Boolean circuits as neural networks, leading to the question of which circuit topologies are promising. We argue that continuity is central to generalization in learning, explore the interaction between data coding, network topology, and node functionality for continuity and pose some open questions for future research. As a first step to bridging the gap between continuous and Boolean views of neural network accelerators, we present some recent results from our work on LUTNet, a novel Field-Programmable Gate Array inference approach. Finally, we conclude with additional possible fruitful avenues for research bridging the continuous and discrete views of neural networks. This article is part of a discussion meeting issue ‘Numerical algorithms for high-performance computational science’.
33

Zhao, Shijie, Yan Cui, Linwei Huang, Li Xie, Yaowu Chen, Junwei Han, Lei Guo, Shu Zhang, Tianming Liu, and Jinglei Lv. "Supervised Brain Network Learning Based on Deep Recurrent Neural Networks." IEEE Access 8 (2020): 69967–78. http://dx.doi.org/10.1109/access.2020.2984948.

34

Smys, S., Joy Iong Zong Chen, and Subarna Shakya. "Survey on Neural Network Architectures with Deep Learning." Journal of Soft Computing Paradigm 2, no. 3 (July 30, 2020): 186–94. http://dx.doi.org/10.36548/jscp.2020.3.007.

Abstract:
In the present research era, machine learning is an important and unavoidable zone which provides better solutions to various domains. In particular, deep learning is one of the most cost-efficient and effective supervised learning models, which can be applied to various complicated issues. Deep learning has various illustrative features and does not depend on any limited learning methods, which helps to obtain better solutions. As deep learning has shown significant performance and advancements, it is widely used in various applications like image classification, face recognition, visual recognition, language processing, speech recognition, object detection, scientific analysis, business analysis, etc. This survey work mainly provides insight into deep learning through an intensive analysis of deep learning architectures and their characteristics, along with their limitations. This research work also analyses recent trends in deep learning through various literature to explore the present evolution in deep learning models.
35

Gan, Wen-Cong, and Fu-Wen Shu. "Holography as deep learning." International Journal of Modern Physics D 26, no. 12 (October 2017): 1743020. http://dx.doi.org/10.1142/s0218271817430209.

Abstract:
The quantum many-body problem, with exponentially large degrees of freedom, can be reduced to a tractable computational form by the neural network method [G. Carleo and M. Troyer, Science 355 (2017) 602, arXiv:1606.02318]. The power of the deep neural network (DNN) based on deep learning is clarified by mapping it to the renormalization group (RG), which may shed light on the holographic principle by identifying a sequence of RG transformations with the AdS geometry. In this paper, we show that any network which reflects the RG process has intrinsic hyperbolic geometry, and we discuss the structure of entanglement encoded in the graph of the DNN. We find the entanglement structure of the DNN is of the Ryu–Takayanagi form. Based on these facts, we argue that the emergence of a holographic gravitational theory is related to the deep learning process of the quantum field theory.
36

Liu, Ying, Rodrigo Caballero, and Joy Merwin Monteiro. "RadNet 1.0: exploring deep learning architectures for longwave radiative transfer." Geoscientific Model Development 13, no. 9 (September 21, 2020): 4399–412. http://dx.doi.org/10.5194/gmd-13-4399-2020.

Abstract:
Simulating global and regional climate at high resolution is essential to study the effects of climate change and capture extreme events affecting human populations. To achieve this goal, the scalability of climate models and efficiency of individual model components are both important. Radiative transfer is among the most computationally expensive components in a typical climate model. Here we attempt to model this component using a neural network. We aim to study the feasibility of replacing an explicit, physics-based computation of longwave radiative transfer by a neural network emulator and assessing the resultant performance gains. We compare multiple neural-network architectures, including a convolutional neural network, and our results suggest that the performance loss from the use of conventional convolutional networks is not offset by gains in accuracy. We train the networks with and without noise added to the input profiles and find that adding noise improves the ability of the networks to generalise beyond the training set. Prediction of radiative heating rates using our neural network models achieves up to 370× speedup on a GTX 1080 GPU setup and 11× speedup on a Xeon CPU setup compared to a state-of-the-art radiative transfer library running on the same Xeon CPU. Furthermore, our neural network models yield less than 0.1 K d−1 mean squared error across all pressure levels. Upon introducing this component into a single-column model, we find that the time evolution of the temperature and humidity profiles is physically reasonable, though the model is conservative in its prediction of heating rates in regions where the optical depth changes quickly. Differences exist in the equilibrium climate simulated when using the neural network, which are attributed to small systematic errors that accumulate over time. Thus, we find that the accuracy of the neural network in the "offline" mode does not reflect its performance when coupled with other components.
37

Östling, Robert. "Part of Speech Tagging: Shallow or Deep Learning?" Northern European Journal of Language Technology 5 (June 19, 2018): 1–15. http://dx.doi.org/10.3384/nejlt.2000-1533.1851.

Abstract:
Deep neural networks have advanced the state of the art in numerous fields, but they generally suffer from low computational efficiency and the level of improvement compared to more efficient machine learning models is not always significant. We perform a thorough PoS tagging evaluation on the Universal Dependencies treebanks, pitting a state-of-the-art neural network approach against UDPipe and our sparse structured perceptron-based tagger, efselab. In terms of computational efficiency, efselab is three orders of magnitude faster than the neural network model, while being more accurate than either of the other systems on 47 of 65 treebanks.
38

Jones, William, Kaur Alasoo, Dmytro Fishman, and Leopold Parts. "Computational biology: deep learning." Emerging Topics in Life Sciences 1, no. 3 (November 14, 2017): 257–74. http://dx.doi.org/10.1042/etls20160025.

Abstract:
Deep learning is the trendiest tool in a computational biologist's toolbox. This exciting class of methods, based on artificial neural networks, quickly became popular due to its competitive performance in prediction problems. In pioneering early work, applying simple network architectures to abundant data already provided gains over traditional counterparts in functional genomics, image analysis, and medical diagnostics. Now, ideas for constructing and training networks and even off-the-shelf models have been adapted from the rapidly developing machine learning subfield to improve performance in a range of computational biology tasks. Here, we review some of these advances in the last 2 years.
39

Sergeev, Fedor, Elena Bratkovskaya, Ivan Kisel, and Iouri Vassiliev. "Deep learning for quark–gluon plasma detection in the CBM experiment." International Journal of Modern Physics A 35, no. 33 (November 30, 2020): 2043002. http://dx.doi.org/10.1142/s0217751x20430022.

Abstract:
Classification of processes in heavy-ion collisions in the CBM experiment (FAIR/GSI, Darmstadt) using neural networks is investigated. Fully-connected neural networks and a deep convolutional neural network are built to identify quark–gluon plasma simulated within the Parton-Hadron-String Dynamics (PHSD) microscopic off-shell transport approach for central Au+Au collisions at a fixed energy. The convolutional neural network outperforms the fully-connected networks and reaches 93% accuracy on the validation set; only the remaining 7% of collisions are incorrectly classified.
40

Chavan, Umesh B., and Dinesh Kulkarni. "Optimizing Deep Convolutional Neural Network for Facial Expression Recognition." European Journal of Engineering Research and Science 5, no. 2 (February 25, 2020): 192–95. http://dx.doi.org/10.24018/ejers.2020.5.2.495.

Abstract:
Facial expression recognition (FER) systems have attracted much research interest in the area of Machine Learning. We designed a large, deep convolutional neural network to classify 40,000 images in the data set into one of seven categories (disgust, fear, happy, angry, sad, neutral, surprise). In this project, we designed a deep learning convolutional neural network (CNN) for facial expression recognition and developed the model in Theano and Caffe for the training process. The proposed architecture achieves 61% accuracy. This work presents results of an accelerated implementation of the CNN with graphics processing units (GPUs). Optimizing the deep CNN reduces the training time of the system.
41

Dawud, Awwal Muhammad, Kamil Yurtkan, and Huseyin Oztoprak. "Application of Deep Learning in Neuroradiology: Brain Haemorrhage Classification Using Transfer Learning." Computational Intelligence and Neuroscience 2019 (June 3, 2019): 1–12. http://dx.doi.org/10.1155/2019/4629859.

Abstract:
In this paper, we address the problem of identifying brain haemorrhage, which is considered a tedious task for radiologists, especially in the early stages of the haemorrhage. The problem is solved using a deep learning approach where a convolutional neural network (CNN), the well-known AlexNet neural network, and also a modified novel version of AlexNet with a support vector machine classifier (AlexNet-SVM) are trained to classify brain computed tomography (CT) images into haemorrhage or nonhaemorrhage images. The aim of employing the deep learning model is to address the primary question in medical image analysis and classification: can a sufficient fine-tuning of a pretrained model (transfer learning) eliminate the need to build a CNN from scratch? Moreover, this study also aims to investigate the advantages of using SVM as a classifier instead of a three-layer neural network. We apply the same classification task to three deep networks; one is created from scratch, another is a pretrained model that was fine-tuned to the brain CT haemorrhage classification task, and the third is our modified novel AlexNet model, which uses the SVM classifier. The three networks were trained using the same number of brain CT images available. The experiments show that the transfer of knowledge from natural images to medical image classification is possible. In addition, our results show that the proposed modified pretrained model "AlexNet-SVM" can outperform a convolutional neural network created from scratch and the original AlexNet in identifying the brain haemorrhage.
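
A hedged sketch of the transfer-learning recipe described (a pretrained AlexNet used as a frozen feature extractor feeding an SVM); the preprocessing and the image/label variables are placeholders, and this is not the authors' implementation:

import torch
import torchvision.models as models
import torchvision.transforms as T
from sklearn.svm import SVC

# Pretrained AlexNet with its final classification layer removed, so the
# forward pass returns 4096-dimensional features.
alexnet = models.alexnet(weights=models.AlexNet_Weights.DEFAULT)
alexnet.classifier = alexnet.classifier[:-1]
alexnet.eval()

preprocess = T.Compose([T.Resize((224, 224)), T.ToTensor()])

def features(images):
    # images: list of PIL images (e.g. CT slices converted to 3-channel RGB)
    batch = torch.stack([preprocess(im) for im in images])
    with torch.no_grad():
        return alexnet(batch).numpy()

# Train the SVM on extracted features instead of AlexNet's own classifier
# (train_images / train_labels are hypothetical placeholders):
# svm = SVC(kernel="linear").fit(features(train_images), train_labels)
# print(svm.score(features(test_images), test_labels))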
42

Orquia, John Jowil D., and El Jireh Bibangco. "Automated Fruit Classification Using Deep Convolutional Neural Network." Philippine Social Science Journal 3, no. 2 (November 16, 2020): 177–78. http://dx.doi.org/10.52006/main.v3i2.188.

Abstract:
Manual fruit classification is the traditional way of classifying fruits. It is manual contact labor that is time-consuming and often results in lower productivity, inconsistency, and sometimes damage to the fruits (Prabha & Kumar, 2012). Thus, new technologies such as deep learning paved the way for a faster and more efficient method of fruit classification (Faridi & Aboonajmi, 2017). A deep convolutional neural network, or deep learning, is a machine learning algorithm that contains several layers of neural networks stacked together to create a more complex model capable of solving complex problems. State-of-the-art pre-trained deep learning models such as AlexNet, GoogLeNet, and ResNet-50 are widely used. However, such models were not explicitly trained for fruit classification (Dyrmann, Karstoft, & Midtiby, 2016). The study aimed to create a new deep convolutional neural network and compare its performance to fine-tuned models based on accuracy, precision, sensitivity, and specificity.
43

Budylskiy, Dmitriy, and Aleksandr Podvesovskiy. "Application of deep learning models for aspect based sentiment analysis." Bulletin of Bryansk State Technical University 2015, no. 3 (September 30, 2015): 117–26. http://dx.doi.org/10.12737/22917.

Abstract:
This paper describes the actual problem of aspect-based sentiment analysis and four deep learning models: a convolutional neural network, a recurrent neural network, and GRU and LSTM networks. We evaluated these models on a Russian text dataset from SentiRuEval-2015. Results show good efficiency and high potential for further natural language processing applications.
44

Mohamed, Soha Abd El-Moamen, Marghany Hassan Mohamed, and Mohammed F. Farghally. "A New Cascade-Correlation Growing Deep Learning Neural Network Algorithm." Algorithms 14, no. 5 (May 19, 2021): 158. http://dx.doi.org/10.3390/a14050158.

Abstract:
In this paper, a proposed algorithm that dynamically changes the neural network structure is presented. The structure is changed based on features of the cascade-correlation algorithm. Cascade correlation is an important algorithm used to solve problems with artificial neural networks, combining a new architecture with a supervised learning algorithm. This process optimizes the architecture of the network, which is intended to accelerate the learning process and produce better generalization performance. Many researchers have to date proposed several growing algorithms to optimize feedforward neural network architectures. The proposed algorithm has been tested on various medical data sets. The results show that the proposed algorithm is a better method with respect to the resulting accuracy and flexibility.
45

Wilkins, J., M. V. Nguyen, and B. Rahmani. "Application of Convolutional Neural Network In LAWN Measurement." Signal & Image Processing: An International Journal 12, no. 1 (February 28, 2021): 1–8. http://dx.doi.org/10.5121/sipij.2021.12101.

Abstract:
Lawn area measurement is an application of image processing and deep learning. Researchers have used hierarchical networks, segmented images, and other methods to measure lawn area, with varying effectiveness and accuracy. In this project, a deep learning method, specifically a convolutional neural network, was applied to measure lawn area. We used Keras and TensorFlow in Python to develop a model that was trained on a dataset of houses, then tuned the parameters with GridSearchCV in scikit-learn (a machine learning library in Python) to estimate the lawn area. The convolutional neural network, or CNN for short, shows high accuracy (94–97%). We may conclude that a deep learning method, especially a CNN, can be a good method with high, state-of-the-art accuracy.
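
A minimal sketch of the pipeline the abstract describes (a small Keras CNN regressor tuned with scikit-learn's GridSearchCV), assuming the scikeras wrapper package and random placeholder data in place of the authors' house dataset:

import numpy as np
import tensorflow as tf
from sklearn.model_selection import GridSearchCV
from scikeras.wrappers import KerasRegressor   # pip install scikeras

def build_model(filters=16):
    model = tf.keras.Sequential([
        tf.keras.layers.Conv2D(filters, 3, activation="relu",
                               input_shape=(64, 64, 3)),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(1),   # regression target: estimated lawn area
    ])
    model.compile(optimizer="adam", loss="mse")
    return model

# Placeholders standing in for aerial photos and measured lawn areas.
x = np.random.rand(32, 64, 64, 3).astype("float32")
y = np.random.rand(32).astype("float32")

search = GridSearchCV(
    KerasRegressor(model=build_model, epochs=2, verbose=0),
    param_grid={"model__filters": [8, 16]},   # hyperparameters to tune
    cv=2,
)
search.fit(x, y)
print(search.best_params_)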
46

Atha, Deegan J., and Mohammad R. Jahanshahi. "Evaluation of deep learning approaches based on convolutional neural networks for corrosion detection." Structural Health Monitoring 17, no. 5 (November 6, 2017): 1110–28. http://dx.doi.org/10.1177/1475921717737051.

Abstract:
Corrosion is a major defect in structural systems that has a significant economic impact and can pose safety risks if left untended. Currently, an inspector visually assesses the condition of a structure to identify corrosion. This approach is time-consuming, tedious, and subjective. Robotic systems, such as unmanned aerial vehicles, paired with computer vision algorithms have the potential to perform autonomous damage detection that can significantly decrease inspection time and lead to more frequent and objective inspections. This study evaluates the use of convolutional neural networks for corrosion detection. A convolutional neural network learns the appropriate classification features that in traditional algorithms were hand-engineered. Eliminating the need for dependence on prior knowledge and human effort in designing features is a major advantage of convolutional neural networks. This article presents different convolutional neural network–based approaches for corrosion assessment on metallic surfaces. The effect of different color spaces, sliding window sizes, and convolutional neural network architectures are discussed. To this end, the performance of two pretrained state-of-the-art convolutional neural network architectures as well as two proposed convolutional neural network architectures are evaluated, and it is shown that convolutional neural networks outperform state-of-the-art vision-based corrosion detection approaches that are developed based on texture and color analysis using a simple multilayered perceptron network. Furthermore, it is shown that one of the proposed convolutional neural networks significantly improves the computational time in contrast with state-of-the-art pretrained convolutional neural networks while maintaining comparable performance for corrosion detection.
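
To illustrate the sliding-window assessment the article evaluates (window size, stride, and the classifier call below are placeholder assumptions, not the paper's settings):

import numpy as np

def sliding_windows(image, size=128, stride=128):
    # Yield (row, col, patch) tuples tiling the image; each patch is then
    # classified independently as corrosion / no corrosion.
    h, w = image.shape[:2]
    for r in range(0, h - size + 1, stride):
        for c in range(0, w - size + 1, stride):
            yield r, c, image[r:r + size, c:c + size]

image = np.zeros((512, 768, 3), dtype=np.uint8)    # placeholder photograph
mask = np.zeros(image.shape[:2], dtype=bool)
for r, c, patch in sliding_windows(image):
    p_corrosion = 0.0   # in practice: cnn.predict(patch[None, ...])
    if p_corrosion > 0.5:
        mask[r:r + 128, c:c + 128] = True
print(mask.mean())      # fraction of the surface flagged as corroded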
47

Wang, Qiurui, Chun Yuan, and Yan Liu. "Learning Deep Conditional Neural Network for Image Segmentation." IEEE Transactions on Multimedia 21, no. 7 (July 2019): 1839–52. http://dx.doi.org/10.1109/tmm.2018.2890360.

48

Yasaka, Koichiro, Hiroyuki Akai, Akira Kunimatsu, Shigeru Kiryu, and Osamu Abe. "Deep learning with convolutional neural network in radiology." Japanese Journal of Radiology 36, no. 4 (March 1, 2018): 257–72. http://dx.doi.org/10.1007/s11604-018-0726-3.

49

Xu, Wei, Hamid Parvin, and Hadi Izadparast. "Deep Learning Neural Network for Unconventional Images Classification." Neural Processing Letters 52, no. 1 (April 23, 2020): 169–85. http://dx.doi.org/10.1007/s11063-020-10238-3.

50

Gorshkova, Kristina, Victoria Zueva, Maria Kuznetsova, and Larisa Tugashova. "Optimizing Deep Learning Methods in Neural Network Architectures." International Review of Automatic Control (IREACO) 14, no. 2 (March 31, 2021): 93. http://dx.doi.org/10.15866/ireaco.v14i2.20591.
