To see the other types of publications on this topic, follow the link: Deep Convolutional Neural Network.

Journal articles on the topic 'Deep Convolutional Neural Network'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 journal articles for your research on the topic 'Deep Convolutional Neural Network.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of each academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles in a wide variety of disciplines and organise your bibliography correctly.

1

Jensen, Mitchell, Khamael Al-Dulaimi, Khairiyah Saeed Abduljabbar, and Jasmine Banks. "Automated Classification of Cell Level of HEp-2 Microscopic Images Using Deep Convolutional Neural Networks-Based Diameter Distance Features." JUCS - Journal of Universal Computer Science 29, no. 5 (2023): 432–45. https://doi.org/10.3897/jucs.96293.

Abstract:
To identify autoimmune diseases in humans, analysis of HEp-2 staining patterns at the cell level is the gold standard in clinical practice and research. Automating the procedure is a complicated task due to variations in cell densities, sizes, shapes and patterns, overfitting of features, large-scale data volumes, stained cells and poor image quality. Several machine learning methods that analyse and classify HEp-2 cell microscope images currently exist; however, because of these challenges, accuracy is still not at the level required for medical applications and computer-aided diagnosis. The purpose of this work is to automate the classification of HEp-2 stained cells from microscopic images and improve the accuracy of computer-aided diagnosis. This work proposes a Deep Convolutional Neural Network (DCNN) technique to classify HEp-2 cell patterns at the cell level into six classes, employing the level-set method via an edge detection technique to segment the HEp-2 cell shape. The DCNNs are designed to identify cell-shape and fundamental distance features related to HEp-2 cell types. The paper investigates the effectiveness of the proposed method on a benchmark dataset. The results show that the proposed method is highly superior to other methods on the benchmark dataset and to state-of-the-art methods, and that it adapts well across variations in cell densities, sizes, shapes and patterns, overfitting features, large-scale data volumes, and cells stained under different lab environments. Accurate classification of HEp-2 staining patterns at the cell level will help increase the accuracy of computer-aided diagnosis in the future.
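
As a rough illustration of the classification stage only, the following is a minimal six-class CNN sketch in Keras; the input size, depth, and layer widths are assumptions, not the authors' DCNN, and the level-set segmentation step is taken as already done.

```python
# Minimal sketch of a six-class CNN on segmented HEp-2 cell crops.
# Input size, depth, and filter counts are illustrative assumptions,
# not the authors' architecture.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_hep2_classifier(input_shape=(64, 64, 1), num_classes=6):
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Conv2D(128, 3, activation="relu", padding="same"),
        layers.GlobalAveragePooling2D(),
        layers.Dense(128, activation="relu"),
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```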
2

Rajadnya, Kirti. "Speech Recognition using Deep Neural Network (DNN) and Deep Belief Network (DBN)." International Journal for Research in Applied Science and Engineering Technology 8, no. 5 (2020): 1543–48. http://dx.doi.org/10.22214/ijraset.2020.5359.

3

Eom, Junsik, Sewon Kim, Hanbyol Jang, et al. "Neural spike classification via deep neural network." IBRO Reports 6 (September 2019): S139–S140. http://dx.doi.org/10.1016/j.ibror.2019.07.443.

4

Szeto, Pok Man, Hamid Parvin, Mohammad Reza Mahmoudi, Bui Anh Tuan, and Kim-Hung Pho. "Deep neural network as deep feature learner." Journal of Intelligent & Fuzzy Systems 39, no. 1 (2020): 355–69. http://dx.doi.org/10.3233/jifs-191292.

5

Ghasemi, Fahimeh, Alireza Mehridehnavi, Afshin Fassihi, and Horacio Pérez-Sánchez. "Deep neural network in QSAR studies using deep belief network." Applied Soft Computing 62 (January 2018): 251–58. http://dx.doi.org/10.1016/j.asoc.2017.09.040.

6

Elbrächter, Dennis, Dmytro Perekrestenko, Philipp Grohs, and Helmut Bölcskei. "Deep Neural Network Approximation Theory." IEEE Transactions on Information Theory 67, no. 5 (2021): 2581–623. http://dx.doi.org/10.1109/tit.2021.3062161.

7

Mashhadi, Peyman Sheikholharam, Sławomir Nowaczyk, and Sepideh Pashami. "Parallel orthogonal deep neural network." Neural Networks 140 (August 2021): 167–83. http://dx.doi.org/10.1016/j.neunet.2021.03.002.

8

Fukushima, Kunihiko. "Neocognitron: Deep Convolutional Neural Network." Journal of Japan Society for Fuzzy Theory and Intelligent Informatics 27, no. 4 (2015): 115–25. http://dx.doi.org/10.3156/jsoft.27.4_115.

9

Choi, Yoojin, Mostafa El-Khamy, and Jungwon Lee. "Universal Deep Neural Network Compression." IEEE Journal of Selected Topics in Signal Processing 14, no. 4 (2020): 715–26. http://dx.doi.org/10.1109/jstsp.2020.2975903.

10

Majumdar, Somshubra, and Ishaan Jain. "Deep Columnar Convolutional Neural Network." International Journal of Computer Applications 145, no. 12 (2016): 25–32. http://dx.doi.org/10.5120/ijca2016910772.

11

Atanassov, Krassimir, Sotir Sotirov, and Tania Pencheva. "Intuitionistic Fuzzy Deep Neural Network." Mathematics 11, no. 3 (2023): 716. http://dx.doi.org/10.3390/math11030716.

Abstract:
The concept of an intuitionistic fuzzy deep neural network (IFDNN) is introduced here as a demonstration of the combined use of artificial neural networks and intuitionistic fuzzy sets, aiming to benefit from the advantages of both methods. The investigation presents in a methodological way the whole process of IFDNN development, starting with the simplest form, an intuitionistic fuzzy neural network (IFNN) with one layer with a single-input neuron, passing through an IFNN with one layer with one multi-input neuron, then a further complication, an IFNN with one layer with many multi-input neurons, and finally the true IFDNN with many layers with many multi-input neurons. The strongly optimistic, optimistic, average, pessimistic and strongly pessimistic formulas for NN parameter estimation, represented in the form of intuitionistic fuzzy pairs, are given here for the first time for each of the presented IFNNs. To demonstrate its workability, an example of an IFDNN application to biomedical data is presented.
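
The abstract does not reproduce the estimation formulas themselves; as background, the sketch below shows the classical optimistic, average, and pessimistic aggregations over intuitionistic fuzzy pairs (mu, nu) with mu + nu <= 1, which such formulas build on. Whether these match the paper's exact operations is an assumption.

```python
# Standard aggregations over intuitionistic fuzzy pairs (mu, nu), mu + nu <= 1.
# These are the classical operations from intuitionistic fuzzy set theory,
# not necessarily the paper's exact parameter-estimation formulas.

def optimistic(a, b):
    # Favours membership: max on mu, min on nu.
    return (max(a[0], b[0]), min(a[1], b[1]))

def pessimistic(a, b):
    # Favours non-membership: min on mu, max on nu.
    return (min(a[0], b[0]), max(a[1], b[1]))

def average(a, b):
    # Arithmetic mean of both components.
    return ((a[0] + b[0]) / 2, (a[1] + b[1]) / 2)

p, q = (0.6, 0.3), (0.4, 0.5)
print(optimistic(p, q), average(p, q), pessimistic(p, q))
# (0.6, 0.3) (0.5, 0.4) (0.4, 0.5)
```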
12

Shamshiri, Samaneh, and Insoo Sohn. "Deep neural network topology optimization against neural attacks." Expert Systems with Applications 291 (October 2025): 128474. https://doi.org/10.1016/j.eswa.2025.128474.

13

Niu, Huan, Wei Xu, Hamidreza Akbarzadeh, Hamid Parvin, Amin Beheshti, and Hamid Alinejad-Rokny. "Deep feature learnt by conventional deep neural network." Computers & Electrical Engineering 84 (June 2020): 106656. http://dx.doi.org/10.1016/j.compeleceng.2020.106656.

14

Elnaggar, Sarah G., Ibrahim E. Elsemman, and Taysir Hassan A. Soliman. "Embedding-Based Deep Neural Network and Convolutional Neural Network Graph Classifiers." Electronics 12, no. 12 (2023): 2715. http://dx.doi.org/10.3390/electronics12122715.

Abstract:
One of the most significant graph data analysis tasks is graph classification, as graphs are complex data structures used for illustrating relationships between entity pairs. Graphs are essential in many domains, such as the description of chemical molecules, biological networks, social relationships, etc. Real-world graphs are complicated and large. As a result, there is a need to find a way to represent or encode a graph’s structure so that it can be easily utilized by machine learning models. Therefore, graph embedding is considered one of the most powerful solutions for graph representation. Inspired by the Doc2Vec model in Natural Language Processing (NLP), this paper first investigates different ways of (sub)graph embedding to represent each graph or subgraph as a fixed-length feature vector, which is then used as input to any classifier. Thus, two supervised classifiers—a deep neural network (DNN) and a convolutional neural network (CNN)—are proposed to enhance graph classification. Experimental results on five benchmark datasets indicate that the proposed models obtain competitive results and are superior to some traditional classification methods and deep-learning-based approaches on three out of five benchmark datasets, with an impressive accuracy rate of 94% on the NCI1 dataset.
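
A minimal sketch of the pipeline this abstract describes, assuming a placeholder tokenization of graphs into label sequences (the paper's actual substructure extraction is not specified here) and an sklearn MLP standing in for the DNN classifier.

```python
# Sketch of Doc2Vec-style (sub)graph embedding followed by a neural classifier.
# Each graph is serialized as a "document" of node/edge label tokens; this
# tokenization is a placeholder for the paper's substructure extraction.
from gensim.models.doc2vec import Doc2Vec, TaggedDocument
from sklearn.neural_network import MLPClassifier

graphs = [["C", "C-C", "O", "C-O"], ["N", "N-C", "C"]]   # toy token streams
labels = [0, 1]

corpus = [TaggedDocument(words=g, tags=[i]) for i, g in enumerate(graphs)]
emb = Doc2Vec(vector_size=64, min_count=1, epochs=40)
emb.build_vocab(corpus)
emb.train(corpus, total_examples=emb.corpus_count, epochs=emb.epochs)

X = [emb.infer_vector(g) for g in graphs]                # fixed-length vectors
clf = MLPClassifier(hidden_layer_sizes=(128, 64), max_iter=500).fit(X, labels)
```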
15

Low, Cheng-Yaw, Jaewoo Park, and Andrew Beng-Jin Teoh. "Stacking-Based Deep Neural Network: Deep Analytic Network for Pattern Classification." IEEE Transactions on Cybernetics 50, no. 12 (2020): 5021–34. http://dx.doi.org/10.1109/tcyb.2019.2908387.

16

Khan, Zakiya Manzoor, and Harjit Singh. "Deep Neural Network Solution for Detecting Intrusion in Network." International Journal on Recent and Innovation Trends in Computing and Communication 11, no. 8 (2023): 160–71. http://dx.doi.org/10.17762/ijritcc.v11i8.7933.

Abstract:
In our experiment, we found that deep learning surpassed machine learning when utilizing the DSSTE algorithm to sample imbalanced training set samples. These methods excel in terms of throughput due to their complex structure and ability to autonomously acquire relevant features from a dataset. The current study focuses on employing deep learning techniques such as RNN and Deep-NN, as well as algorithm design, to aid network IDS designers. Since public datasets already preprocess the data features, deep learning is unable to leverage its automatic feature extraction capability, limiting its ability to learn from preprocessed features. To harness the advantages of deep learning in feature extraction, mitigate the impact of imbalanced data, and enhance classification accuracy, our approach involves directly applying the deep learning model for feature extraction and model training on the existing network traffic data. By doing so, we aim to capitalize on deep learning's benefits, improving feature extraction, reducing the influence of imbalanced data, and enhancing classification accuracy.
17

Shpinareva, Irina M., Anastasia A. Yakushina, Lyudmila A. Voloshchuk, and Nikolay D. Rudnichenko. "Detection and classification of network attacks using the deep neural network cascade." Herald of Advanced Information Technology 4, no. 3 (2021): 244–54. http://dx.doi.org/10.15276/hait.03.2021.4.

Abstract:
This article shows the relevance of developing a cascade of deep neural networks for detecting and classifying network attacks, based on an analysis of the practical use of network intrusion detection systems to protect local computer networks. The cascade consists of two elements. The first network is a hybrid deep neural network that contains convolutional neural network layers and long short-term memory layers to detect attacks. The second network is a CNN convolutional neural network for classifying the most popular classes of network attacks, such as Fuzzers, Analysis, Backdoors, DoS, Exploits, Generic, Reconnaissance, Shellcode, and Worms. At the stage of tuning and training the cascade, hyperparameters were selected, which made it possible to improve the quality of the model. Among the available public datasets, one of the current ones, UNSW-NB15, was selected, taking modern traffic into account. A data preprocessing technology was developed for this dataset. The cascade of deep neural networks was trained, tested, and validated on the UNSW-NB15 dataset and was then tested on real network traffic, which showed its ability to detect and classify attacks in a computer network. The use of a cascade consisting of a hybrid CNN + LSTM neural network and a CNN improved the accuracy of detecting and classifying attacks in computer networks and reduced the frequency of false alarms in detecting network attacks.
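
A minimal sketch of the two-stage cascade, assuming illustrative layer sizes and input width; the abstract does not give the exact hybrid CNN+LSTM topology.

```python
# Minimal sketch of the two-stage cascade: a hybrid Conv1D+LSTM binary
# detector, then a Conv1D multi-class classifier for the nine attack classes.
# Layer sizes and the input width are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

def detector(n_features=42):
    # Stage 1: is this flow an attack at all?
    return models.Sequential([
        layers.Input(shape=(n_features, 1)),
        layers.Conv1D(64, 3, activation="relu"),
        layers.MaxPooling1D(2),
        layers.LSTM(64),
        layers.Dense(1, activation="sigmoid"),
    ])

def classifier(n_features=42, n_classes=9):
    # Stage 2: which attack class (Fuzzers, DoS, Exploits, ...)?
    return models.Sequential([
        layers.Input(shape=(n_features, 1)),
        layers.Conv1D(64, 3, activation="relu"),
        layers.MaxPooling1D(2),
        layers.Conv1D(128, 3, activation="relu"),
        layers.GlobalMaxPooling1D(),
        layers.Dense(n_classes, activation="softmax"),
    ])
```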
18

Hu, Jian, Xianlong Zhang, and Xiaohua Shi. "Simulating Neural Network Processors." Wireless Communications and Mobile Computing 2022 (February 23, 2022): 1–12. http://dx.doi.org/10.1155/2022/7500195.

Abstract:
Deep learning has achieved competitive results compared with human beings in many fields. Traditionally, deep learning networks are executed on CPUs and GPUs. In recent years, more and more neural network accelerators have been introduced in both academia and industry to improve the performance and energy efficiency of deep learning networks. In this paper, we introduce a flexible and configurable functional NN accelerator simulator, which can be configured to simulate the micro-architectures of different NN accelerators. The extensible and configurable simulator is helpful for system-level exploration of micro-architectures, as well as for operator optimization algorithm development. It is a functional simulator that models the latencies of calculation and memory access and the concurrent processes between modules, and it reports the number of program execution cycles after the simulation is completed. We also integrated the simulator into the TVM compilation stack as an optional backend, so users can use TVM to write operators and execute them on the simulator.
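
A toy sketch of the functional-simulation idea, assuming a simple per-layer latency model with compute/memory overlap; the real simulator described above models accelerator modules in far more detail.

```python
# Toy functional simulator: per-layer compute and memory latencies, with
# optional overlap (double buffering). Throughput constants are illustrative.
from dataclasses import dataclass

@dataclass
class LayerJob:
    macs: int            # multiply-accumulate operations
    bytes_moved: int     # DRAM traffic for weights/activations

def simulate(jobs, macs_per_cycle=256, bytes_per_cycle=32, overlap=True):
    total = 0
    for j in jobs:
        compute = -(-j.macs // macs_per_cycle)         # ceil division
        memory = -(-j.bytes_moved // bytes_per_cycle)
        # With double buffering, compute hides memory traffic (and vice versa).
        total += max(compute, memory) if overlap else compute + memory
    return total

print(simulate([LayerJob(1_000_000, 50_000), LayerJob(250_000, 200_000)]))
```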
19

Guo, Ruipeng, Soren Nelson, and Rajesh Menon. "Needle-based deep-neural-network camera." Applied Optics 60, no. 10 (2021): B135. http://dx.doi.org/10.1364/ao.415059.

20

Choi, Young-Seok. "Neuromorphic Learning: Deep Spiking Neural Network." Physics and High Technology 28, no. 4 (2019): 16–21. http://dx.doi.org/10.3938/phit.28.014.

21

Manpriya, et al. "Crop Prediction Using Deep Neural Network." International Journal of Mechanical and Production Engineering Research and Development 10, no. 3 (2020): 4605–12. http://dx.doi.org/10.24247/ijmperdjun2020435.

22

Faisal, Anas, and Agus Subekti. "Deep Neural Network untuk Prediksi Stroke." Jurnal Edukasi dan Penelitian Informatika (JEPIN) 7, no. 3 (2021): 443. http://dx.doi.org/10.26418/jp.v7i3.50094.

Abstract:
In 2019, the World Health Organization (WHO) placed stroke among the ten leading causes of death. The Ministry of Health classifies stroke as a catastrophic disease because of its broad economic and social impact. Information technology therefore has a role to play in predicting stroke for early prevention and treatment. Analysing data with imbalanced classes leads to inaccuracies in predicting stroke. This study compares three oversampling techniques to obtain a better prediction model. The class-balanced data were tested using three Deep Neural Network (DNN) architectures while optimising several parameters, namely the optimizer, the learning rate and the number of epochs. The best results were obtained with the SMOTETomek oversampling technique and a DNN architecture with five hidden layers, Adam optimisation, a learning rate of 0.001 and 500 epochs. The accuracy, precision, recall and F1-score were 0.96, 0.9614, 0.9608 and 0.9611, respectively.
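
The best configuration reported above maps to a short pipeline; the sketch below is a minimal illustration. The abstract fixes five hidden layers, Adam, a 0.001 learning rate and 500 epochs, but the hidden-layer widths here are assumptions.

```python
# Sketch following the abstract's best configuration: SMOTETomek resampling,
# then a DNN with five hidden layers trained with Adam (lr = 0.001) for 500
# epochs. Hidden-layer widths are assumptions not stated in the abstract.
from imblearn.combine import SMOTETomek
import tensorflow as tf
from tensorflow.keras import layers, models

def train_stroke_model(X, y):
    X_res, y_res = SMOTETomek(random_state=42).fit_resample(X, y)
    model = models.Sequential(
        [layers.Input(shape=(X.shape[1],))]
        + [layers.Dense(64, activation="relu") for _ in range(5)]
        + [layers.Dense(1, activation="sigmoid")]
    )
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
                  loss="binary_crossentropy", metrics=["accuracy"])
    model.fit(X_res, y_res, epochs=500, batch_size=32, verbose=0)
    return model
```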
23

Motomura, Masato. "3. Deep Neural Network Processors: Overview." Journal of The Institute of Image Information and Television Engineers 73, no. 1 (2019): 52–57. http://dx.doi.org/10.3169/itej.73.52.

24

Zheng, Shuiqin, Shixiang Xu, and Dianyuan Fan. "Orthogonality of diffractive deep neural network." Optics Letters 47, no. 7 (2022): 1798. http://dx.doi.org/10.1364/ol.449899.

25

Deeba, Farah, She Kun, Fayaz Ali Dharejo, Hameer Langah, and Hira Memon. "Digital Watermarking Using Deep Neural Network." International Journal of Machine Learning and Computing 10, no. 2 (2020): 277–82. http://dx.doi.org/10.18178/ijmlc.2020.10.2.932.

26

Bao, Chunhui, Yifei Pu, and Yi Zhang. "Fractional-Order Deep Backpropagation Neural Network." Computational Intelligence and Neuroscience 2018 (July 3, 2018): 1–10. http://dx.doi.org/10.1155/2018/7361628.

Abstract:
In recent years, the research of artificial neural networks based on fractional calculus has attracted much attention. In this paper, we proposed a fractional-order deep backpropagation (BP) neural network model with L2 regularization. The proposed network was optimized by the fractional gradient descent method with Caputo derivative. We also illustrated the necessary conditions for the convergence of the proposed network. The influence of L2 regularization on the convergence was analyzed with the fractional-order variational method. The experiments have been performed on the MNIST dataset to demonstrate that the proposed network was deterministically convergent and can effectively avoid overfitting.
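
As background for the fractional gradient method the abstract names, here is a generic sketch of the Caputo derivative of order alpha in (0, 1) and the weight update it induces with L2 regularization; the paper's exact constants and regularized update may differ.

```latex
% Caputo fractional derivative with base point c, and the fractional
% gradient-descent update it induces (a generic sketch; the paper's exact
% regularized update may differ).
\[
  {}^{C}_{c}D^{\alpha}_{w} f(w)
  = \frac{1}{\Gamma(1-\alpha)} \int_{c}^{w} \frac{f'(t)}{(w-t)^{\alpha}}\, dt,
  \qquad 0 < \alpha < 1,
\]
\[
  w_{k+1} = w_k - \eta \, {}^{C}_{c}D^{\alpha}_{w}
  \Bigl( L(w) + \tfrac{\lambda}{2}\lVert w \rVert_2^2 \Bigr)\Big|_{w = w_k}.
\]
```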
27

Smart, Ashley G. "A deep neural network of light." Physics Today 70, no. 8 (2017): 24. http://dx.doi.org/10.1063/pt.3.3654.

28

Xu, Chunyan, Canyi Lu, Xiaodan Liang, et al. "Multi-loss Regularized Deep Neural Network." IEEE Transactions on Circuits and Systems for Video Technology 26, no. 12 (2016): 2273–83. http://dx.doi.org/10.1109/tcsvt.2015.2477937.

29

Patel, Hima, Amit Thakkar, Mrudang Pandya, and Kamlesh Makwana. "Neural network with deep learning architectures." Journal of Information and Optimization Sciences 39, no. 1 (2017): 31–38. http://dx.doi.org/10.1080/02522667.2017.1372908.

30

Zhu, Songhao, Zhe Shi, Chengjian Sun, and Shuhan Shen. "Deep neural network based image annotation." Pattern Recognition Letters 65 (November 2015): 103–8. http://dx.doi.org/10.1016/j.patrec.2015.07.037.

31

Liu, Anji, and Yuanjun Laili. "Balance gate controlled deep neural network." Neurocomputing 320 (December 2018): 183–94. http://dx.doi.org/10.1016/j.neucom.2018.08.075.

32

Sarigul, Mehmet, B. Melis Ozyildirim, and Mutlu Avci. "Deep Convolutional Generalized Classifier Neural Network." Neural Processing Letters 51, no. 3 (2020): 2839–54. http://dx.doi.org/10.1007/s11063-020-10233-8.

33

Kriegeskorte, Nikolaus, and Tal Golan. "Neural network models and deep learning." Current Biology 29, no. 7 (2019): R231–R236. http://dx.doi.org/10.1016/j.cub.2019.02.034.

34

Jawad, Eman. "THE DEEP NEURAL NETWORK-A REVIEW." IJRDO -JOURNAL OF MATHEMATICS 9, no. 9 (2023): 1–5. http://dx.doi.org/10.53555/m.v9i9.5842.

Abstract:
Deep neural networks are considered the backbone of artificial intelligence. We present a review article about the importance of neural networks and their role in other sciences, their characteristics, network architectures, types, and the mathematical definition of deep neural networks, as well as their applications.
35

Lu, Qiang, Yuanzhen Luo, Haoyang Li, Jake Luo, and Zhiguang Wang. "Deep Differentiable Symbolic Regression Neural Network." Neurocomputing 629 (May 2025): 129671. https://doi.org/10.1016/j.neucom.2025.129671.

36

Boulealam, Chafik, Hajar Filali, Jamal Riffi, Adnane Mohamed Mahraz, and Hamid Tairi. "Deep Multi-Component Neural Network Architecture." Computation 13, no. 4 (2025): 93. https://doi.org/10.3390/computation13040093.

Abstract:
Existing neural network architectures often struggle with two critical limitations: (1) information loss during dataset length standardization, where variable-length samples are forced into fixed dimensions, and (2) inefficient feature selection in single-modal systems, which treats all features equally regardless of relevance. To address these issues, this paper introduces the Deep Multi-Components Neural Network (DMCNN), a novel architecture that processes variable-length data by regrouping samples into components of similar lengths, thereby preserving information that traditional methods discard. DMCNN dynamically prioritizes task-relevant features through a component-weighting mechanism, which calculates the importance of each component via loss functions and adjusts weights using a SoftMax function. This approach eliminates the need for dataset standardization while enhancing meaningful features and suppressing irrelevant ones. Additionally, DMCNN seamlessly integrates multimodal data (e.g., text, speech, and signals) as separate components, leveraging complementary information to improve accuracy without requiring dimension alignment. Evaluated on the Multimodal EmotionLines Dataset (MELD) and CIFAR-10, DMCNN achieves state-of-the-art accuracy of 99.22% on MELD and 97.78% on CIFAR-10, outperforming existing methods like MNN and McDFR. The architecture’s efficiency is further demonstrated by its reduced trainable parameters and robust handling of multimodal and variable-length inputs, making it a versatile solution for classification tasks.
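
A minimal sketch of the loss-driven component weighting described above, assuming the SoftMax is taken over negative component losses; the exact logit used in the paper is not stated in the abstract.

```python
# Sketch of loss-driven component weighting: components with lower loss
# receive higher weight via a softmax over negative losses. Using -loss
# as the logit is an assumption for illustration.
import numpy as np

def component_weights(losses):
    z = -np.asarray(losses, dtype=float)
    z -= z.max()                      # numerical stability
    w = np.exp(z)
    return w / w.sum()

print(component_weights([0.2, 1.5, 0.7]))  # largest weight on the 0.2-loss component
```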
37

Lin, Xiyu. "Research of Convolutional Neural Network on Image Classification." Highlights in Science, Engineering and Technology 39 (April 1, 2023): 855–62. http://dx.doi.org/10.54097/hset.v39i.6656.

Abstract:
With the progress of artificial intelligence, technology based on deep learning is becoming more and more mature, and applying deep convolutional neural networks to image classification has become a popular research topic. The number of deep convolutional neural network structures for image classification keeps increasing, and their performance is consistently improving, gradually replacing traditional methods. Following the process of model development and model optimization, this paper divides convolutional neural networks into two models: the classical deep convolutional neural network model and the attention-mechanism deep convolutional neural network model. The construction methods and characteristics of the various kinds of deep convolutional neural network models are comprehensively reviewed, and the performance of the various classification models is compared and analyzed. Finally, the open problems of deep convolutional neural network models are presented.
38

Kumari, Rekha, Gurpreet Kaur, Aditya Rawat, Harshit Chauhan, Kartik Singh Negi, and Rishi Mishra. "ANALYSIS OF TRANSFORMER-DEEP NEURAL NETWORK USING DEEP LEARNING." International Journal of Engineering Applied Sciences and Technology 8, no. 2 (2023): 313–19. http://dx.doi.org/10.33564/ijeast.2023.v08i02.048.

Abstract:
Transformers were first used for natural language processing (NLP) tasks, but they quickly spread to other deep learning fields, including computer vision. They assess the interdependence of pairs of input tokens. Attention is a component that enables the network to dynamically highlight relevant features of the input data (words in the case of text strings, parts of images in the case of vision Transformers). The cost grows continually with the number of tokens. The most common Transformer architecture for image classification uses only the Transformer encoder to transform the various input tokens. However, the decoder component of the traditional Transformer architecture is also used in a variety of other applications. In this section, we first introduce the attention mechanism (Section 1), followed by the basic Transformer block, which includes the Vision Transformer (Section 2).
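
The attention mechanism referenced here is standard scaled dot-product attention; below is a minimal NumPy sketch with illustrative shapes.

```python
# Scaled dot-product attention, the core Transformer operation
# (standard formulation; shapes are illustrative).
import numpy as np

def attention(Q, K, V):
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                       # pairwise token affinities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)        # row-wise softmax
    return weights @ V                                    # weighted mix of values

Q = np.random.randn(4, 8); K = np.random.randn(4, 8); V = np.random.randn(4, 8)
print(attention(Q, K, V).shape)  # (4, 8)
```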
39

Naik, Akshay R., A. V. Deorankar, and P. B. Ambhore. "Design and Analysis of Deep Neural Network for Rainfall Prediction System." Advancement of Computer Technology and its Applications 3, no. 2 (2020): 1–4. https://doi.org/10.5281/zenodo.3886888.

Abstract:
Rainfall prediction helps farmers and others make decisions about various activities. Various methods are available for rainfall prediction, such as machine learning and artificial neural networks. In this proposed system we use a deep neural network for rainfall prediction, as deep neural networks give better results than machine learning algorithms and are capable of solving more difficult tasks. The proposed method is based on a classification technique, a supervised learning method in deep neural networks, which we use for predicting rainfall. We use the Adam optimizer to optimize the model parameters, which gives the model better prediction accuracy.
40

Abuelamayem, Ola. "A Deep Inverse Weibull Network." Statistics, Optimization & Information Computing 13, no. 4 (2024): 1357–67. https://doi.org/10.19139/soic-2310-5070-2070.

Abstract:
Survival analysis is heavily used in different fields like economics, engineering and medicine. The core of the analysis is understanding the relationship between the covariates and the survival function. The analysis can be performed using traditional statistical models or neural networks. Recently, neural networks have attracted attention for analyzing lifetime data due to their flexibility in handling complex covariates. The networks introduced in the literature have restrictions such as the proportional hazards assumption, data discretization, monotonicity of hazard rates and heavy-tailed assumptions. In this paper, a novel neural network is introduced based on the inverse Weibull distribution and random censoring that removes some of these restrictions: it does not place monotonicity, proportionality or heavy-tailed assumptions on the hazard function, and it does not require data discretization. To test its applicability, the network is applied to both simulated and real datasets, and the numerical results show that our model outperforms some other methods in the literature.
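
A minimal sketch of the right-censored negative log-likelihood such a network would minimize, assuming the inverse Weibull (Fréchet) parametrization F(t) = exp(-(t/scale)^(-shape)); both the parametrization and the fixed parameters below are assumptions, and in the network setting shape and scale would be per-subject outputs.

```python
# Right-censored negative log-likelihood under an inverse Weibull (Frechet)
# distribution with F(t) = exp(-(t/scale)^(-shape)). This parametrization is
# an assumption; the paper's may differ.
import numpy as np

def inv_weibull_nll(t, event, shape, scale):
    z = (t / scale) ** (-shape)
    # log f(t): log density of the inverse Weibull.
    log_f = np.log(shape) - np.log(scale) - (shape + 1) * np.log(t / scale) - z
    # log S(t) = log(1 - F(t)) for censored observations.
    log_s = np.log1p(-np.exp(-z))
    return -np.sum(event * log_f + (1 - event) * log_s)

t = np.array([2.0, 5.0, 3.5]); event = np.array([1, 0, 1])  # 0 = censored
print(inv_weibull_nll(t, event, shape=1.2, scale=3.0))
```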
41

Jia, Yang, Meng Wang, and Yagang Wang. "Network intrusion detection algorithm based on deep neural network." IET Information Security 13, no. 1 (2019): 48–53. http://dx.doi.org/10.1049/iet-ifs.2018.5258.

42

Sazib, Sheikh. "Network forensics DDoS attack based on deep neural network." International Journal of Scientific and Research Publications 12, no. 10 (2022): 9–47. http://dx.doi.org/10.29322/ijsrp.12.10.2022.p13004.

43

Ma, Hongli, Fang Xie, Tao Chen, Lei Liang, and Jie Lu. "Image recognition algorithms based on deep learning." Journal of Physics: Conference Series 2137, no. 1 (2021): 012056. http://dx.doi.org/10.1088/1742-6596/2137/1/012056.

Abstract:
Convolutional neural networks are a very important research direction in deep learning technology. In light of the current development of convolutional networks, this paper surveys convolutional neural networks. First, it reviews the development process of the convolutional neural network; then it introduces the structure of convolutional neural networks and some typical convolutional neural networks. Finally, several examples of the application of deep learning are introduced.
44

Panchuk, B. O. "Formal verification of deep neural networks." PROBLEMS IN PROGRAMMING, no. 2-3 (September 2024): 253–62. https://doi.org/10.15407/pp2024.02-03.253.

Abstract:
This paper introduces a method for the formal verification of neural networks using a Satisfiability Modulo Theories (SMT) solver. This approach enables the mathematical validation of specific neural network properties, enhancing their predictability. We propose a method for simplifying a neural network’s computational graph within certain input space regions. This is achieved by replacing neurons’ piecewise-linear activation functions with a subset of their linear segments. This optimization hypothesizes a simpler interpretation of a neural network over limited input data ranges. The simplified interpretation is derived from the incremental simplification of the neural network graph, achieved by solving local SMT tasks on a neuron-by-neuron basis. This optimization significantly speeds up the verification algorithm compared to solving a single SMT task over the entire unoptimized network graph. The method is applicable to any deep neural networks with piecewise-linear activation functions. The approach’s effectiveness was demonstrated by automatically verifying a network traffic classifier specializing in botnet activity detection. The classification model was tested for robustness against adversarial attacks, where attackers attempt to evade detection by introducing specially crafted disturbances into the network data. The verification procedure was conducted over regions in the feature space near the classifier’s decision boundary. The results contribute to the prospects for more active application of artificial intelligence models in cybersecurity, where result predictability and interpretability are crucial. Additionally, the neuron-wise simplification technique proposed is a promising direction for further development in neural network verification.
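
A minimal sketch of the SMT encoding idea with Z3, assuming a toy two-neuron ReLU network and an illustrative robustness-style property; the paper's incremental neuron-wise simplification is not reproduced here.

```python
# SMT-based verification sketch with Z3: encode a tiny ReLU network exactly
# (ReLU as If) and ask whether any input in a box region violates a property.
# UNSAT for the negated property means the property holds on the region.
# Weights and the property are illustrative, not from the paper.
from z3 import Real, If, Solver, And, unsat

x1, x2 = Real("x1"), Real("x2")
# Hidden layer: two ReLU neurons with illustrative weights.
h1 = If(0.5 * x1 - 1.0 * x2 > 0, 0.5 * x1 - 1.0 * x2, 0)
h2 = If(1.0 * x1 + 0.5 * x2 > 0, 1.0 * x1 + 0.5 * x2, 0)
y = 1.0 * h1 - 2.0 * h2                          # linear output neuron

s = Solver()
s.add(And(0 <= x1, x1 <= 1, 0 <= x2, x2 <= 1))   # input region of interest
s.add(y > 1.0)                                   # negation of "y <= 1 on region"
res = s.check()
print("property verified" if res == unsat else s.model())
```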
45

Li, Yihang. "Sparse-Aware Deep Learning Accelerator." Highlights in Science, Engineering and Technology 39 (April 1, 2023): 305–10. http://dx.doi.org/10.54097/hset.v39i.6544.

Abstract:
Because convolutional neural network computations are difficult to implement in hardware, most previous convolutional neural network accelerator designs focused on the bottlenecks of computational performance and bandwidth, ignoring the importance of network sparsity for accelerator design. In recent years, a few convolutional neural network accelerators have been able to take advantage of sparsity, but they usually struggle to balance computational flexibility, parallel efficiency and resource overhead. The application of convolutional neural networks (CNNs) on the embedded side is limited by real-time constraints, and there is a large degree of sparsity in CNN convolution calculations. This paper summarizes sparsification methods at the algorithm level and at the FPGA level, introduces the different methods of sparsification together with research and analysis of the different application layers, and analyzes and summarizes the advantages and development trends of sparsification.
46

Tang, Wen, Emilie Chouzenoux, Jean-Christophe Pesquet, and Hamid Krim. "Deep transform and metric learning network: Wedding deep dictionary learning and neural network." Neurocomputing 509 (October 2022): 244–56. http://dx.doi.org/10.1016/j.neucom.2022.08.069.

47

Leijnen, Stefan, and Fjodor van Veen. "The Neural Network Zoo." Proceedings 47, no. 1 (2020): 9. http://dx.doi.org/10.3390/proceedings2020047009.

Abstract:
An overview of neural network architectures is presented. Some of these architectures have been created in recent years, whereas others originate from many decades ago. Apart from providing a practical tool for comparing deep learning models, the Neural Network Zoo also uncovers a taxonomy of network architectures, their chronology, and traces back lineages and inspirations for these neural information processing systems.
48

Leijnen, Stefan, and Fjodor van Veen. "The Neural Network Zoo." Proceedings 47, no. 1 (2020): 9. http://dx.doi.org/10.3390/proceedings47010009.

Abstract:
An overview of neural network architectures is presented. Some of these architectures have been created in recent years, whereas others originate from many decades ago. Apart from providing a practical tool for comparing deep learning models, the Neural Network Zoo also uncovers a taxonomy of network architectures, their chronology, and traces back lineages and inspirations for these neural information processing systems.
49

Hindarto, Djarot, and Handri Santoso. "Plat Nomor Kendaraan dengan Convolution Neural Network." Jurnal Inovasi Informatika 6, no. 2 (2021): 1–12. http://dx.doi.org/10.51170/jii.v6i2.202.

Abstract:
Deep Learning technology is very good at detecting objects, one example being detection of vehicle number plates. This method can be applied in Computer Vision to process images using the DenseNet121, NASNetLarge, VGG16 and VGG19 models. The most basic difference between Machine Learning and Deep Learning is the inclusion of hidden layers; Deep Learning uses neurons to process from input through to output, and feature extraction is done directly within the Deep Learning process. In terms of time, training models with Deep Learning takes very long compared to Machine Learning. The dataset comes from Kaggle; training is then carried out with four Deep Learning models, resulting in a trained model for each. Before the training process, the image dataset is prepared and divided into two parts, a training dataset and a testing dataset. After training is completed, testing follows and the accuracy of each model is measured. The accuracies of the four models resulting from Deep Learning training are also presented.
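
A minimal sketch of the transfer-learning pattern described above, using Keras' DenseNet121 as a frozen feature extractor; the classification head, input size, and two-class setup are assumptions, and the same pattern applies to NASNetLarge, VGG16, and VGG19.

```python
# Transfer-learning sketch: DenseNet121 as a frozen ImageNet feature
# extractor with a small trainable head. Head and input size are illustrative.
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import DenseNet121

base = DenseNet121(weights="imagenet", include_top=False,
                   input_shape=(224, 224, 3))
base.trainable = False                       # freeze pretrained features

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(128, activation="relu"),
    layers.Dense(2, activation="softmax"),   # plate / non-plate (illustrative)
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```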
50

Hamrish, S. SriSakthi, T. Deepa, G. Satyavathy, and A. Priya. "Deep Learning with Pytorch: Siamese Network." Tuijin Jishu/Journal of Propulsion Technology 44, no. 4 (2023): 4412–21. http://dx.doi.org/10.52783/tjjpt.v44.i4.1683.

Abstract:
"DEEP LEARNING WITH PYTORCH: SIAMESE NETWORK" is a work that addresses person re-identification (re-ID), a difficult computer vision challenge that entails identifying the same person from several camera angles. Because SNNs may learn similarity instead of straight classification, they are becoming a preferred method for this kind of assignment. Using this method, a ranking loss function is optimized by two concurrent CNNs that learn an embedding, or reduced dimensional representation, of the input images. An overview of the procedures involved in person re-identification using SNNs is given in the study, including training, testing, deployment, network architecture, and data preparation. It makes use of the Triplet Ranking Loss function, a popular loss function for SNNs.For similarity-based learning tasks including face recognition, image matching, and document similarity, Siamese Neural Networks are one kind of neural network design that is utilized. The paper offers a thorough tutorial on training a Siamese neural network for a goal based on similarity, namely using the Siamese Neural Network (SNN) to re-identify images taken by different cameras.