Journal articles on the topic 'Fully connected Neural Network'

Consult the top 50 journal articles for your research on the topic 'Fully connected Neural Network.'

1

Zhang, Wei, Zhi Han, Xiai Chen, Baichen Liu, Huidi Jia, and Yandong Tang. "Fully Kernected Neural Networks." Journal of Mathematics 2023 (June 28, 2023): 1–9. http://dx.doi.org/10.1155/2023/1539436.

Abstract:
In this paper, we apply kernel methods to deep convolutional neural networks (DCNNs) to improve their nonlinear ability. DCNNs have achieved significant improvements in many computer vision tasks. For an image classification task, accuracy saturates once the depth and width of the network are sufficient, and it will not rise further even if the depth and width are increased. We find that improving the nonlinear ability of DCNNs can break through this saturation accuracy. In a DCNN, the earlier layers are more inclined to extract features and the later layers are more inclined to classify them. Therefore, we apply kernel methods at the last fully connected layer to implicitly map features to a higher-dimensional space, improving nonlinear ability so that the network achieves better linear separability. We name the resulting networks fully kernected neural networks (fully connected neural networks with kernel methods). Our experimental results show that fully kernected neural networks achieve higher classification accuracy and a faster convergence rate than baseline networks.
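The idea of "kernecting" the last layer can be sketched as follows: map the penultimate-layer features through a kernel against a set of landmark vectors, then classify linearly in that implicitly higher-dimensional space. This is a minimal illustrative sketch; the landmark count and the RBF bandwidth `gamma` are assumptions, not values from the paper.

```python
import numpy as np

def rbf_kernel_features(x, landmarks, gamma=0.5):
    """Map features x (n, d) to RBF similarities against landmarks (m, d)."""
    # squared Euclidean distance between every sample and every landmark
    d2 = ((x[:, None, :] - landmarks[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)              # shape (n, m), values in (0, 1]

rng = np.random.default_rng(0)
feats = rng.normal(size=(8, 16))            # stand-in penultimate activations
landmarks = rng.normal(size=(4, 16))        # stand-in learned "kernel" units

k = rbf_kernel_features(feats, landmarks)   # kernelized representation
logits = k @ rng.normal(size=(4, 3))        # linear classifier on top
print(logits.shape)
```

In a trained network the landmarks and the final linear map would both be learned; here random values only demonstrate the shapes involved.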
2

Chen, Qipin, and Wenrui Hao. "A homotopy training algorithm for fully connected neural networks." Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences 475, no. 2231 (2019): 20190662. http://dx.doi.org/10.1098/rspa.2019.0662.

Abstract:
In this paper, we present a homotopy training algorithm (HTA) to solve optimization problems arising from fully connected neural networks with complicated structures. The HTA dynamically builds the neural network starting from a simplified version and ending with the fully connected network by adding layers and nodes adaptively. Therefore, the corresponding optimization problem is easy to solve at the beginning and connects to the original model via a continuous path guided by the HTA, which provides a high probability of obtaining a global minimum. By gradually increasing the complexity of the model along the continuous path, the HTA provides a rather good solution to the original loss function. This is confirmed by various numerical results, including VGG models on CIFAR-10. For example, on the VGG13 model with batch normalization, HTA reduces the error rate by 11.86% on the test dataset compared with the traditional method. Moreover, the HTA also allows us to find the optimal structure for a fully connected neural network by building the neural network adaptively.
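The key mechanical step in growing a network along a continuous path can be sketched with a function-preserving widening: appending hidden units whose outgoing weights are zero leaves the network output unchanged at the moment of growth, so each stage starts exactly from the previous solution. Sizes below are illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
w1 = rng.normal(size=(4, 2))     # input(4) -> hidden(2)
w2 = rng.normal(size=(2, 1))     # hidden(2) -> output(1)

def forward(x, w1, w2):
    return np.maximum(x @ w1, 0.0) @ w2    # one ReLU hidden layer

x = rng.normal(size=(5, 4))
before = forward(x, w1, w2)

# grow hidden layer 2 -> 3: new incoming weights random, outgoing zero,
# so the new unit contributes nothing until training adjusts it
w1 = np.hstack([w1, rng.normal(size=(4, 1))])
w2 = np.vstack([w2, np.zeros((1, 1))])
after = forward(x, w1, w2)

print(np.allclose(before, after))   # growth step preserves the function
```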
3

Zhang, Jiayuan. "Application and Performance Comparison of Compound Neural Network Model based on CNN Feature Extraction in House Price Forecast." Applied and Computational Engineering 96, no. 1 (2024): 210–17. http://dx.doi.org/10.54254/2755-2721/96/20241281.

Abstract:
This study used a total of eight machine learning algorithms to forecast property prices; it not only provides a robust comparison of the predictive power of different algorithms but also significantly advances our understanding of the factors that influence property prices. Four traditional machine learning algorithms and four neural network models are selected for comparative study and analysis; the neural network models include fully connected neural networks (FCNN), convolutional fully connected neural networks (FCNN+CNN), generative adversarial fully connected networks (FCNN+GANs), and generative adversarial convolutional fully connected neural networks (FCNN+GANs+CNN). The study was applied to a Kaggle sample dataset. The results reveal that the FCNN+CNN and FCNN+GANs+CNN models perform relatively well in house price prediction, obtaining explanatory power (R) as high as 0.96 and 0.97, respectively, and significantly outperforming the traditional machine learning algorithms. It is worth mentioning that the FCNN+CNN model is slightly stronger in terms of error minimization, but both perform well in terms of stability and generalization. The conclusion is that neural network models generally give better results than traditional algorithms in house price prediction, and the CNN-based composite models have significantly better prediction performance.
4

Erichsen, R., W. K. Theumann, and D. R. C. Dominguez. "Categorization in fully connected multistate neural network models." Physical Review E 60, no. 6 (1999): 7321–31. http://dx.doi.org/10.1103/physreve.60.7321.

5

Hsu, K. Y., H. Y. Li, and D. Psaltis. "Holographic implementation of a fully connected neural network." Proceedings of the IEEE 78, no. 10 (1990): 1637–45. http://dx.doi.org/10.1109/5.58357.

6

Sergeev, Fedor, Elena Bratkovskaya, Ivan Kisel, and Iouri Vassiliev. "Deep learning for quark–gluon plasma detection in the CBM experiment." International Journal of Modern Physics A 35, no. 33 (2020): 2043002. http://dx.doi.org/10.1142/s0217751x20430022.

Abstract:
Classification of processes in heavy-ion collisions in the CBM experiment (FAIR/GSI, Darmstadt) using neural networks is investigated. Fully connected neural networks and a deep convolutional neural network are built to identify quark–gluon plasma simulated within the Parton-Hadron-String Dynamics (PHSD) microscopic off-shell transport approach for central Au+Au collisions at a fixed energy. The convolutional neural network outperforms the fully connected networks and reaches 93% accuracy on the validation set, with only the remaining 7% of collisions classified incorrectly.
7

Li, Gang, Xing San Qian, Chun Ming Ye, and Lin Zhao. "A Clustering Method for Pruning Fully Connected Neural Network." Advanced Materials Research 204-210 (February 2011): 600–603. http://dx.doi.org/10.4028/www.scientific.net/amr.204-210.600.

Abstract:
This paper focuses mainly on a clustering method for pruning a Fully Connected Backpropagation (FCBP) neural network. The initial neural network is fully connected; after training with sample data, a clustering method is employed to cluster the weights between the input and hidden layers and between the hidden and output layers, and connections that are relatively unnecessary are deleted, so that the initial network becomes a Partially Connected Backpropagation (PCBP) neural network. A PCBP network can be used in prediction or data mining more efficiently than an FCBP network. At the end of this paper, an experiment is conducted to illustrate the effects of PCBP using the submersible pump repair data set.
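Weight pruning by clustering can be sketched in the spirit of the paper: group a layer's trained weights into "small" and "large" clusters (here a simple two-centroid 1-D k-means on the magnitudes) and delete the small group, turning the fully connected layer into a partially connected one. The exact clustering the authors use may differ; this is an illustrative stand-in.

```python
import numpy as np

def prune_by_clustering(w, iters=10):
    """Zero out the cluster of small-magnitude weights; return (pruned, mask)."""
    mags = np.abs(w).ravel()
    lo, hi = mags.min(), mags.max()            # two initial 1-D centroids
    for _ in range(iters):                     # tiny k-means on |w|
        small = np.abs(mags - lo) <= np.abs(mags - hi)
        lo, hi = mags[small].mean(), mags[~small].mean()
    keep = np.abs(w) - lo > hi - np.abs(w)     # nearer the "large" centroid
    return w * keep, keep

rng = np.random.default_rng(2)
w = rng.normal(size=(8, 8))                    # stand-in trained layer
pruned, mask = prune_by_clustering(w)
print(mask.mean())                             # fraction of connections kept
```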
8

Qian, Wei, and Yijie Wang. "Analyzing E-Commerce Market Data Using Deep Learning Techniques to Predict Industry Trends." Journal of Organizational and End User Computing 36, no. 1 (2024): 1–22. http://dx.doi.org/10.4018/joeuc.342093.

Abstract:
Faced with challenges in sales prediction research, this article draws on the capabilities of deep learning algorithms in handling complex tasks and unstructured data. Through analyzing consumer behavior, it selects factors influencing sales, including images, prices and discounts, and historical sales, as input variables for the model. Three different types of neural network models (fully connected neural networks, convolutional neural networks, and recurrent neural networks) are employed to process structured data, image data, and sales sequence data, respectively. This forms a deep neural network for feature representation. Subsequently, based on the outputs of these three networks, a fully connected neural network is employed to train the sales prediction model. Experimental results demonstrate that the proposed sales prediction method outperforms exponential regression and shallow neural networks in terms of accuracy.
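The fusion step described above can be sketched as follows: each sub-network emits a feature vector, and the fully connected head is trained on their concatenation. Random vectors stand in for the three branch outputs; all dimensions are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)
tab = rng.normal(size=(16,))    # fully connected branch (prices, discounts)
img = rng.normal(size=(32,))    # CNN branch (product images)
seq = rng.normal(size=(8,))     # RNN branch (historical sales sequence)

fused = np.concatenate([tab, img, seq])   # joint feature representation
w = rng.normal(size=(fused.size, 1))      # linear head as a minimal stand-in
prediction = fused @ w                    # scalar sales prediction
print(fused.shape, prediction.shape)
```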
9

GUSSO, M., C. MARANGI, G. NARDULLI, and G. PASQUARIELLO. "DYNAMICS OF FULLY CONNECTED NEURAL NETWORKS WITH SIGN CONSTRAINTS." International Journal of Modern Physics C 03, no. 06 (1992): 1221–33. http://dx.doi.org/10.1142/s0129183192000841.

Abstract:
We consider fully connected neural networks near saturation, trained by a modified Edinburgh algorithm, with sign constraints on the synaptic couplings. We study the domains of attraction of the stored patterns for both the balanced and the unbalanced case (excess of positive over negative constraints). A comparison with the dilute network is also included.
10

Shapalin, Vitaliy Gennadiyevich, and Denis Vladimirovich Nikolayenko. "Comparison of the structure, efficiency, and speed of operation of feedforward, convolutional, and recurrent neural networks." Research Result. Information technologies 9, no. 4 (2024): 21–35. https://doi.org/10.18413/2518-1092-2024-9-4-0-3.

Abstract:
This article examines the efficiency of fully connected, recurrent, and convolutional neural networks in the context of developing a simple model for weather forecasting. The architectures and working principles of fully connected neural networks, the structure of one-dimensional and two-dimensional convolutional neural networks, as well as the architecture, features, advantages, and disadvantages of recurrent neural networks—specifically, simple recurrent neural networks, LSTM, and GRU, along with their bidirectional variants for each of the three aforementioned types—are discussed. Based on the available theoretical materials, simple neural networks were developed to compare the efficiency of each architecture, with training time and error magnitude serving as criteria, and temperature, wind speed, and atmospheric pressure as training data. The training speed, minimum and average error values for the fully connected neural network, convolutional neural network, simple recurrent network, LSTM, and GRU, as well as for bidirectional recurrent neural networks, were examined. Based on the results obtained, an analysis was conducted to explore the possible reasons for the effectiveness of each architecture. Graphs were plotted to show the relationship between processing speed and error magnitude for the three datasets examined: temperature, wind speed, and atmospheric pressure. Conclusions were drawn about the efficiency of specific models in the context of forecasting time series of meteorological data.
11

Wu, Jiajie. "Diabetes classification and prediction using artificial neural networks." Applied and Computational Engineering 4, no. 1 (2023): 804–9. http://dx.doi.org/10.54254/2755-2721/4/2023434.

Abstract:
Diabetes is a chronic disease that threatens global human health. Over time it can lead to other serious medical problems and has no permanent cure; hence, if it could be diagnosed or predicted early, it might be possible to prevent it. Several studies have shown that computer technology can effectively assist in the diagnosis of diseases, and neural networks can be used for classification and prediction. In order to identify the element with the greatest impact on classification and prediction, this paper applies a fully connected artificial neural network to the PIMA Indian diabetes dataset. We applied the network to versions of the dataset with one factor column deleted at a time and compared the resulting accuracies to determine which factor was the most significant. This study finds that the "Glucose" level, followed by "BMI," is the most crucial factor for diabetes in this dataset, with the fully connected neural network achieving an accuracy of 84%.
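The leave-one-column-out protocol can be sketched as follows: retrain the same model on the dataset with one factor column deleted at a time and rank features by the accuracy drop. A tiny logistic-regression classifier stands in for the paper's fully connected network, and the data are synthetic, with only column 0 carrying signal; all sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(400, 4))
y = (X[:, 0] > 0).astype(float)            # only feature 0 is predictive

def train_and_score(X, y, steps=500, lr=0.5):
    """Fit logistic regression by gradient descent; return training accuracy."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-np.clip(X @ w, -30, 30)))
        w -= lr * X.T @ (p - y) / len(y)   # logistic-loss gradient step
    return ((X @ w > 0) == (y > 0.5)).mean()

baseline = train_and_score(X, y)
drops = {j: baseline - train_and_score(np.delete(X, j, axis=1), y)
         for j in range(X.shape[1])}
most_important = max(drops, key=drops.get)
print(most_important)                      # the informative column
```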
12

Greif, Kevin, and Kevin Lannon. "Physics Inspired Deep Neural Networks for Top Quark Reconstruction." EPJ Web of Conferences 245 (2020): 06029. http://dx.doi.org/10.1051/epjconf/202024506029.

Abstract:
Deep neural networks (DNNs) have been applied to the fields of computer vision and natural language processing with great success in recent years. The success of these applications has hinged on the development of specialized DNN architectures that take advantage of specific characteristics of the problem to be solved, namely convolutional neural networks for computer vision and recurrent neural networks for natural language processing. This research explores whether a neural network architecture specific to the task of identifying t → Wb decays in particle collision data yields better performance than a generic, fully-connected DNN. Although applied here to resolved top quark decays, this approach is inspired by a DNN technique for tagging boosted top quarks, which consists of defining custom neural network layers known as the combination and Lorentz layers. These layers encode knowledge of relativistic kinematics applied to combinations of particles, and the output of these specialized layers can then be fed into a fully connected neural network to learn tasks such as classification. This research compares the performance of these physics inspired networks to that of a generic, fully-connected DNN, to see if there is any advantage in terms of classification performance, size of the network, or ease of training.
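The kind of physics knowledge a Lorentz-style layer encodes can be sketched directly: from combinations of particle four-momenta one computes relativistic invariants such as the invariant mass, which can then feed a fully connected classifier. The four-momenta below, in (E, px, py, pz) form, are made-up illustrative numbers, not collision data.

```python
import numpy as np

def invariant_mass(p):
    """m = sqrt(E^2 - |p|^2) for a summed four-momentum (E, px, py, pz)."""
    e, px, py, pz = p
    return np.sqrt(max(e**2 - px**2 - py**2 - pz**2, 0.0))

p1 = np.array([50.0, 10.0, 20.0, 30.0])    # e.g. a b-jet candidate
p2 = np.array([40.0, -5.0, 15.0, 25.0])    # e.g. a W candidate
m = invariant_mass(p1 + p2)                # mass of the combined system
print(round(m, 2))
```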
13

Karande, Aarti M., and D. R. Kalbande. "Weight Assignment Algorithms for Designing Fully Connected Neural Network." International Journal of Intelligent Systems and Applications 10, no. 6 (2018): 68–76. http://dx.doi.org/10.5815/ijisa.2018.06.08.

14

Lei, Xia, Yongkai Fan, Kuan-Ching Li, Arcangelo Castiglione, and Qian Hu. "High-precision linearized interpretation for fully connected neural network." Applied Soft Computing 109 (September 2021): 107572. http://dx.doi.org/10.1016/j.asoc.2021.107572.

15

Zhang, Zhikui, and Lina Wu. "Research on Continuous Pipeline Life Prediction Method Based on Fully Connected Neural Network." Academic Journal of Science and Technology 8, no. 3 (2023): 69–73. http://dx.doi.org/10.54097/fcqfsz74.

Abstract:
Aiming at the low accuracy of traditional empirical formulas in predicting the fatigue life of continuous oil pipelines, a fully connected neural network is utilized to predict the low-cycle fatigue life of continuous oil pipelines. Considering the influence of internal pressure on the fatigue life of a continuous oil pipeline during operation, a prediction method combining a fully connected neural network and a gated recurrent unit (GRU) is proposed, and experiments show that the FCNN-GRU neural network performs better in terms of prediction accuracy and stability than the BP neural network.
16

Mamontov, Andrey I. "On Computer Memory Saving Methods in Performing Data Classification Using Fully Connected Neural Networks." Vestnik MEI 3, no. 3 (2021): 103–9. http://dx.doi.org/10.24160/1993-6982-2021-3-103-109.

Abstract:
In solving the classification problem, a fully connected trainable neural network (with adjustable parameters represented by double-precision real numbers) is used as the mathematical model. After training is completed, the neural network parameters are rounded and represented as fixed-point numbers (integers). The aim of the study is to reduce the amount of computing-system memory required to store the obtained integer parameters. To this end, the following methods for storing integer parameters are developed, based on representing the linear polynomials included in a fully connected neural network as compositions of simpler functions: a method based on representing the considered polynomial as a sum of simpler polynomials, and a method based on separately storing the information about additions and multiplications. In the experiment with the MNIST data set, it took 1.41 MB to store the real parameters of a fully connected neural network, 0.7 MB to store integer parameters without the proposed methods, 0.47 MB in RAM and 0.3 MB in compressed form on disk with the first method, and 0.25 MB on disk with the second method. In the experiment with the USPS data set, it took 0.25 MB to store the real parameters, 0.1 MB to store integer parameters without the proposed methods, 0.05 MB in RAM and approximately the same amount in compressed form on disk with the first method, and 0.03 MB on disk with the second method. The results can be applied when using fully connected neural networks to solve various recognition problems under limited hardware capacities.
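The first step of this scheme, rounding trained double-precision weights to fixed-point integers, can be sketched as below. The 8-bit scale is an illustrative assumption; the paper's methods go further by re-expressing the resulting linear polynomials through compositions of simpler functions.

```python
import numpy as np

rng = np.random.default_rng(4)
w = rng.normal(size=(784, 10))             # float64 layer, MNIST-like shape

scale = np.abs(w).max() / 127.0            # map weights into [-127, 127]
w_int = np.round(w / scale).astype(np.int8)
w_restored = w_int.astype(np.float64) * scale

ratio = w_int.nbytes / w.nbytes            # int8 vs float64 storage cost
err = np.abs(w - w_restored).max()         # rounding error is <= scale / 2
print(ratio, err)
```

Storing `w_int` plus the single float `scale` is what shrinks the memory footprint; classification then uses the dequantized (or integer) weights.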
17

Wu, Yuhong, and Xiangdong Hu. "An Intrusion Detection Method Based on Fully Connected Recurrent Neural Network." Scientific Programming 2022 (September 26, 2022): 1–11. http://dx.doi.org/10.1155/2022/7777211.

Abstract:
The use of deep learning technology to address the low multiclassification detection accuracy and complex feature engineering of traditional intrusion detection technology has become a research hotspot. Among deep learning models, recurrent neural networks (RNNs) are particularly important. The RNN processes 41 feature attributes and maps them to a 122-dimensional high-dimensional feature space. To detect multiclassification tasks, this study proposes an intrusion detection method based on fully connected recurrent neural networks and compares its performance with previous machine learning methods on benchmark datasets. The results show that the intrusion detection system (IDS) model based on a fully connected recurrent neural network is well suited to intrusion detection classification: especially in multiclassification tasks, it achieves high detection accuracy, significantly improves the detection performance for attacks, including DoS attacks, and provides a new research direction for future intrusion detection methods for industrial control systems.
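The 41 → 122 mapping mentioned above is consistent with one-hot encoding of the symbolic columns in NSL-KDD-style intrusion records: the numeric attributes stay as they are while each categorical attribute expands to one column per category. The cardinalities below (3 protocol types, 70 services, 11 flags over 38 numeric columns) are the usual NSL-KDD ones and are an assumption here, not stated in the paper itself.

```python
# Count the input dimension after one-hot encoding the symbolic features.
numeric_cols = 38
categorical_cardinalities = [3, 70, 11]    # protocol_type, service, flag

expanded_dim = numeric_cols + sum(categorical_cardinalities)
print(expanded_dim)    # 38 + 84 = 122
```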
18

Li, Houjie, Lei Wu, Jianjun He, Ruirui Zheng, Yu Zhou, and Shuang Qiao. "Partial Label Learning Based on Fully Connected Deep Neural Network." International Journal of Circuits, Systems and Signal Processing 16 (January 12, 2022): 287–97. http://dx.doi.org/10.46300/9106.2022.16.35.

Abstract:
The ambiguity of training samples in the partial label learning framework makes it difficult to develop learning algorithms, and most existing algorithms are based on traditional shallow machine learning models, such as decision trees, support vector machines, and Gaussian process models. Deep neural networks have demonstrated excellent performance in many application fields, but they are currently rarely used in the partial label learning framework. This study proposes a new partial label learning algorithm based on a fully connected deep neural network, in which the relationship between the candidate labels and the ground-truth label of each training sample is established by defining three new loss functions, and a regularization term is added to prevent overfitting. The experimental results on controlled UCI datasets and real-world partial label datasets reveal that the proposed algorithm achieves higher classification accuracy than state-of-the-art partial label learning algorithms.
19

Peleshchak, R. М., V. V. Lytvyn, О. І. Cherniak, І. R. Peleshchak, and М. V. Doroshenko. "STOCHASTIC PSEUDOSPIN NEURAL NETWORK WITH TRIDIAGONAL SYNAPTIC CONNECTIONS." Radio Electronics, Computer Science, Control, no. 2 (July 7, 2021): 114–22. http://dx.doi.org/10.15588/1607-3274-2021-2-12.

Abstract:
Context. To reduce computational time in problems of diagnosing and recognizing distorted images with a fully connected stochastic pseudospin neural network, it becomes necessary to thin out the synaptic connections between neurons, which is done by diagonalizing the matrix of synaptic connections without losing the interaction between all neurons in the network. Objective. To create an architecture of a stochastic pseudospin neural network with diagonal synaptic connections, without losing the interaction between all the neurons in the layer, in order to reduce its learning time. Method. The paper uses the Householder method, a method of compressing input images based on the diagonalization of the matrix of synaptic connections, and the computer mathematics system MATLAB for converting a fully connected neural network into a tridiagonal form with hidden synaptic connections between all neurons. Results. We developed a model of a stochastic neural network architecture with sparse renormalized synaptic connections that takes deleted synaptic connections into account. Based on the transformation of the synaptic connection matrix of a fully connected neural network into a Hessenberg matrix with tridiagonal synaptic connections, we proposed a renormalized local Hebb rule. Using the computer mathematics system Wolfram Mathematica 11.3, we calculated, as a function of the number of neurons N, the tuning time of synaptic connections (per iteration) in a stochastic pseudospin neural network with a tridiagonal connection matrix relative to the tuning time of synaptic connections (per iteration) in a fully connected synaptic neural network. Conclusions. We found that, as the number of neurons increases, the tuning time of synaptic connections (per iteration) in a stochastic pseudospin neural network with a tridiagonal connection matrix, relative to that in a fully connected synaptic neural network, decreases according to a hyperbolic law. Depending on the direction of the pseudospin neurons, we proposed a classification of the renormalized neural network into a ferromagnetic structure, an antiferromagnetic structure, and a dipole glass.
20

Lytvyn, Vasyl, Roman Peleshchak, Ivan Peleshchak, Oksana Cherniak, and Lyubomyr Demkiv. "Building a mathematical model and an algorithm for training a neural network with sparse dipole synaptic connections for image recognition." Eastern-European Journal of Enterprise Technologies 6, no. 4 (114) (2021): 21–27. http://dx.doi.org/10.15587/1729-4061.2021.245010.

Abstract:
Sufficiently large structured neural networks are used to recognize distorted images with computer systems. One such neural network that can completely restore a distorted image is a fully connected pseudospin (dipole) neural network that possesses associative memory. When an image is submitted to its input, it automatically selects and outputs the stored image that is closest to the input one. This image is stored in the neural network memory within the Hopfield paradigm, within which it is possible to memorize and reproduce arrays of information that have their own internal structure. In order to reduce learning time, the size of the neural network is minimized by simplifying its structure using one of two approaches: the first relies on «regularization», while the second is based on the removal of synaptic connections from the neural network. In this work, the simplification of the structure of a fully connected dipole neural network is based on the dipole-dipole interaction between the nearest adjacent neurons of the network. It is proposed to minimize the size of the neural network through dipole-dipole synaptic connections between the nearest neurons, which reduces the computational time in the recognition of distorted images. The ratio for the weight coefficients of synaptic connections between neurons in the dipole approximation has been derived. A training algorithm has been built for a dipole neural network with sparse synaptic connections, based on the dipole-dipole interaction between the nearest neurons. A computer experiment showed that the neural network with sparse dipole connections recognizes distorted images (numbers from 0 to 9, shown at 25 pixels) 3 times faster than a fully connected neural network.
22

Chakraborty, Goutam, Vadim Azhmyakov, and Luz Adriana Guzman Trujillo. "A Formal Approach to Optimally Configure a Fully Connected Multilayer Hybrid Neural Network." Mathematics 13, no. 1 (2024): 129. https://doi.org/10.3390/math13010129.

Abstract:
This paper is devoted to a novel formal analysis optimizing the learning models for feedforward multilayer neural networks with hybrid structures. The proposed mathematical description replicates a specific switched-type optimal control problem (OCP). We have developed an equivalent, optimal control-based formulation of the problem of training a hybrid feedforward multilayer neural network to learn the target mapping function constrained by the training samples. This novel formal approach makes it possible to apply well-established optimal control techniques to the design of a versatile class of fully connected neural networks. We next discuss why Pontryagin-type necessary optimality conditions are not suitable for the constructive treatment of the obtained switched-type OCP. This fact motivated us to consider so-called direct-solution approaches to switched OCPs, which can be associated with the learning of hybrid neural networks. Concretely, we consider the generalized reduced-gradient algorithm in the framework of the auxiliary switched OCP.
23

Alekseev, Aleksandr, Leonid Kozhemyakin, Vladislav Nikitin, and Julia Bolshakova. "Data Preprocessing and Neural Network Architecture Selection Algorithms in Cases of Limited Training Sets—On an Example of Diagnosing Alzheimer’s Disease." Algorithms 16, no. 5 (2023): 219. http://dx.doi.org/10.3390/a16050219.

Abstract:
This paper aimed to increase the accuracy of an Alzheimer's disease diagnosing function obtained in a previous study devoted to the application of decision roots to the diagnosis of Alzheimer's disease. The obtained decision root is a discrete switching function of several variables that aggregates a few indicators into one integrated assessment, presented as a superposition of functions of two variables. Magnetic susceptibility values of the basal veins and the veins of the thalamus were used as indicators, and two categories of patients were used as function values. To increase accuracy, the use of artificial neural networks was suggested, but a feature of medical data is its limited volume, so neural networks based on limited training datasets may be inefficient. The proposed solution is to preprocess the initial datasets to determine the parameters of the neural networks based on decision roots, because any decision root can be represented as an incompletely connected neural network with a cascade structure. Since there are no publicly available specialized software products allowing the user to set such a complex neural network structure, the number of synaptic coefficients of the incompletely connected neural network was determined. This made it possible to predefine fully connected neural networks that are comparable in terms of the number of unknown parameters. Acceptable accuracy was obtained for one-layer and two-layer fully connected neural networks trained on limited training sets, on the example of diagnosing Alzheimer's disease. Thus, the scientific hypothesis on preprocessing initial datasets and selecting the neural network architecture using special methods and algorithms was confirmed.
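The comparison "comparable in terms of the number of unknown parameters" rests on counting synaptic coefficients: a fully connected network with biases has (inputs + 1) × outputs parameters per layer, summed over layers. The layer sizes below are illustrative assumptions, not those used in the Alzheimer's study.

```python
def fc_param_count(layer_sizes):
    """Weights plus biases of a fully connected net, e.g. sizes [4, 8, 2]."""
    return sum((n_in + 1) * n_out
               for n_in, n_out in zip(layer_sizes, layer_sizes[1:]))

print(fc_param_count([4, 8, 2]))    # (4+1)*8 + (8+1)*2 = 58
```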
24

Wei, LI, Zhu Wei-gang, Pang Hong-feng, and Zhao Hong-yu. "Radar Emitter Identification Based on Fully Connected Spiking Neural Network." Journal of Physics: Conference Series 1914, no. 1 (2021): 012036. http://dx.doi.org/10.1088/1742-6596/1914/1/012036.

Full text
APA, Harvard, Vancouver, ISO, and other styles
25

Cai, Bowen. "Fully Connected Convolutional Neural Network in PCB Soldering Point Inspection." Journal of Computer and Communications 10, no. 12 (2022): 62–70. http://dx.doi.org/10.4236/jcc.2022.1012005.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

Alwan, Ali H., and Ali H. Kashmar. "Block Ciphers Analysis Based on a Fully Connected Neural Network." Ibn AL-Haitham Journal For Pure and Applied Sciences 36, no. 1 (2023): 415–27. http://dx.doi.org/10.30526/36.1.3058.

Full text
Abstract:
With the development of high-speed network technologies, there has been a recent rise in the transfer of significant amounts of sensitive data across the Internet and other open channels. The data are encrypted with the same key for both the Triple Data Encryption Standard (TDES) and the Advanced Encryption Standard (AES), using the block cipher modes Cipher Block Chaining (CBC) and Electronic CodeBook (ECB). Block ciphers are often used for secure data storage on fixed hard drives and portable devices and for safe network data transport. Therefore, to assess the security of an encryption method, it is necessary to become familiar with and evaluate the algorithms of cryptographic systems: block cipher users need to be sure that the ciphers they employ are secure against various attacks. A Fully Connected Neural Network (FCNN) model was first used to classify the ciphers. All models, including the encoder models, were then assessed using True Positive (TP) measures for successful classification of the detected encoder and False Positive (FP) measures for incorrect categorization. The accuracy, recall, loss, precision, and F1 score were then calculated from a confusion matrix to assess the model's efficacy. The parameters of the FCNN model were tuned to improve the results, yielding an accuracy of 85% for ECB and 88% for CBC. These results helped to identify the encryption algorithm more precisely and to evaluate it.
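The confusion-matrix evaluation described in this abstract (accuracy, precision, recall, F1) can be sketched in a few lines; the class counts below are hypothetical and are not the paper's data:

```python
# Metrics from a binary confusion matrix, as used to evaluate the FCNN
# cipher classifier above. The counts are illustrative only.
tp, fp, fn, tn = 88, 12, 10, 90  # hypothetical classification counts

accuracy = (tp + tn) / (tp + fp + fn + tn)
precision = tp / (tp + fp)
recall = tp / (tp + fn)          # the abstract calls this "retrieval"
f1 = 2 * precision * recall / (precision + recall)

print(f"accuracy={accuracy:.3f} precision={precision:.3f} "
      f"recall={recall:.3f} f1={f1:.3f}")
```

With these counts the script prints accuracy 0.890 and F1 0.889; swapping in a real confusion matrix reproduces the kind of figures reported in the abstract.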
APA, Harvard, Vancouver, ISO, and other styles
27

Song, Alexander, Sai Nikhilesh Murty Kottapalli, and Peer Fischer. "Image classification with a fully connected opto-electronic neural network." EPJ Web of Conferences 287 (2023): 13013. http://dx.doi.org/10.1051/epjconf/202328713013.

Full text
Abstract:
Optical approaches have made great strides enabling high-speed, scalable computing necessary for modern deep learning and AI applications. In this study, we introduce a multilayer optoelectronic computing framework that alternates between optical and optoelectronic layers to implement matrix-vector multiplications and rectified linear functions, respectively. The system is designed to be real-time and parallelized, utilizing arrays of light emitters and detectors connected with independent analog electronics. We experimentally demonstrate the operation of our system and compare its performance to a single-layer analog through simulations.
APA, Harvard, Vancouver, ISO, and other styles
28

Dapkus, Paulius, Liudas Mažeika, and Vytautas Sliesoraitis. "A study of supervised combined neural-network-based ultrasonic method for reconstruction of spatial distribution of material properties." Information Technology And Control 49, no. 3 (2020): 381–94. http://dx.doi.org/10.5755/j01.itc.49.3.26792.

Full text
Abstract:
This paper examines the performance of commonly used neural-network-based classifiers for investigating structural noise in metals, treated as grain size estimation. The central problem is to identify the grain size of an object's structure based on metal features or on the structure itself. Once the structure data is obtained, a proposed feature extraction method is used to extract the features of the object, which are then used as inputs to the classifiers. This research focuses on using basic ultrasonic sensors to obtain the structural grain size of objects, which is then used in a neural network. The performance of each neural-network-based classifier is evaluated by its recognition accuracy for individual objects. Traditional neural networks, namely convolutional and fully connected dense networks, are also presented as grain size estimation models. To evaluate the robustness of the neural networks, the original sample data is mixed for three grain sizes. Experimental results show that combined convolutional and fully connected dense neural networks with classifiers outperform single neural networks on original samples with a high signal-to-noise ratio. The dense neural network itself demonstrates the best robustness when the object samples do not differ from the training datasets.
APA, Harvard, Vancouver, ISO, and other styles
29

Su, Fang, Hai-Yang Shang, and Jing-Yan Wang. "Low-Rank Deep Convolutional Neural Network for Multitask Learning." Computational Intelligence and Neuroscience 2019 (May 20, 2019): 1–10. http://dx.doi.org/10.1155/2019/7410701.

Full text
Abstract:
In this paper, we propose a novel multitask learning method based on a deep convolutional network. The proposed deep network has four convolutional layers, three max-pooling layers, and two parallel fully connected layers. To adapt the deep network to the multitask learning problem, we learn a low-rank deep network so that the relations among different tasks can be explored. We propose to minimize the number of independent parameter rows of one fully connected layer, measured by the nuclear norm of that layer's parameter matrix, to explore the relations among different tasks and seek a low-rank parameter matrix. Meanwhile, we also regularize another fully connected layer with a sparsity penalty so that the useful features learned by the lower layers can be selected. The learning problem is solved by an iterative algorithm based on gradient descent and back-propagation. The proposed algorithm is evaluated on benchmark datasets for multiple face attribute prediction, multitask natural language processing, and joint economic index prediction. The evaluation results show the advantage of the low-rank deep CNN model on multitask problems.
APA, Harvard, Vancouver, ISO, and other styles
30

Nuralem, Abizov, Yuan Huang Jia, and Gao Fei. "Developing a Humanoid Robot Platform." International Journal of Engineering and Management Research 8, no. 3 (2018): 66–70. https://doi.org/10.31033/ijemr.8.3.9.

Full text
Abstract:
This paper is focused on developing a platform that helps researchers create, verify, and implement their machine learning algorithms on a humanoid robot in a real environment. The presented platform is durable, easy to fix and upgrade, fast to assemble, and cheap. Using this platform, we also present an approach to the humanoid balancing problem that uses only a fully connected neural network as the basis for real-time balancing. The method consists of three main steps: 1) using different types of sensors, detect the current position of the body and generate the input for the neural network; 2) using the fully connected neural network, produce the correct output; 3) using servomotors, make movements that change the current position to the new one. During field tests, the humanoid robot can balance on a moving platform that tilts up to 10 degrees in any direction. Finally, we have shown that our platform lets us study and compare different neural networks under similar conditions, which can be important for researchers doing analyses in machine learning and robotics.
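The sense, infer, actuate loop this abstract describes can be illustrated with a minimal fully connected policy; the sensor count, layer sizes, and random weights below are assumptions for illustration, not the paper's actual controller:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: 6 sensor readings (e.g. IMU angles and rates) in,
# 4 servo corrections out, one hidden layer of 16 units.
W1, b1 = rng.normal(size=(16, 6)) * 0.1, np.zeros(16)
W2, b2 = rng.normal(size=(4, 16)) * 0.1, np.zeros(4)

def policy(sensors):
    """Fully connected network mapping sensor state to servo commands."""
    h = np.tanh(W1 @ sensors + b1)
    return np.tanh(W2 @ h + b2)   # tanh bounds corrections to [-1, 1]

# One step of the sense -> infer -> actuate loop from the abstract.
sensors = rng.normal(size=6)      # placeholder readings
commands = policy(sensors)        # would be sent to the servomotors
print(commands.shape)
```

In a real controller this forward pass would run every control tick, with the commands scaled to servo angle offsets.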
APA, Harvard, Vancouver, ISO, and other styles
31

Novikov, N. P., and V. I. Vinogradov. "Experience in Using the Transformer Network Architecture to Approximate Agent’s Policy in Reinforcement Learning." Моделирование и анализ данных 14, no. 2 (2024): 7–22. http://dx.doi.org/10.17759/mda.2024140201.

Full text
Abstract:
<p>This paper discusses the basics of the deep reinforcement learning algorithm and the use of neural networks to approximate the agent’s policy. The comparison of using a fully connected neural network and a transformer network in the reinforcement learning algorithm is considered.</p>
APA, Harvard, Vancouver, ISO, and other styles
32

Kuo, Chun Lin, Ercan Engin Kuruoglu, and Wai Kin Victor Chan. "Neural Network Structure Optimization by Simulated Annealing." Entropy 24, no. 3 (2022): 348. http://dx.doi.org/10.3390/e24030348.

Full text
Abstract:
A critical problem in large neural networks is over-parameterization: a large number of weight parameters limits their use on edge devices due to prohibitive computational power and memory/storage requirements. To make neural networks more practical on edge devices and in real-time industrial applications, they need to be compressed in advance. Since edge devices cannot train or access trained networks when internet resources are scarce, preloading smaller networks is essential. Various works in the literature have shown that redundant branches can be pruned strategically from a fully connected network without sacrificing performance significantly. However, the majority of these methodologies require substantial computational resources to integrate weight training via the back-propagation algorithm during network compression. In this work, we draw attention to optimizing the network structure to preserve performance despite aggressive pruning. The structure optimization is performed using the simulated annealing algorithm alone, without utilizing back-propagation for branch weight training. As a heuristic, non-convex optimization method, simulated annealing provides a globally near-optimal solution to this NP-hard problem for a given percentage of branch pruning. Our simulation results show that simulated annealing can significantly reduce the complexity of a fully connected network while maintaining performance without the help of back-propagation.
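The core idea of this abstract, choosing which branches to prune by simulated annealing alone while never retraining the weights, can be shown on a toy dense layer. The problem sizes, swap move, and cooling schedule below are illustrative choices, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy setting: a dense layer y = W x with 20 weights, of which only 8
# may be kept (60 % pruning). Simulated annealing searches over binary
# masks; the weights themselves are never retrained by back-propagation.
n_in, n_keep = 20, 8
X = rng.normal(size=(200, n_in))
w_true = rng.normal(size=n_in)
w_true[8:] *= 0.05                      # most weights barely matter
y = X @ w_true
w = w_true + rng.normal(scale=0.01, size=n_in)   # the "trained" weights

def loss(mask):
    return np.mean((X @ (w * mask) - y) ** 2)

mask = np.zeros(n_in)
mask[rng.choice(n_in, n_keep, replace=False)] = 1
best, best_loss = mask.copy(), loss(mask)
T = 1.0
for step in range(3000):
    # Propose: swap one kept weight with one pruned weight.
    cand = mask.copy()
    i = rng.choice(np.flatnonzero(cand == 1))
    j = rng.choice(np.flatnonzero(cand == 0))
    cand[i], cand[j] = 0, 1
    d = loss(cand) - loss(mask)
    if d < 0 or rng.random() < np.exp(-d / T):   # Metropolis acceptance
        mask = cand
    if loss(mask) < best_loss:
        best, best_loss = mask.copy(), loss(mask)
    T *= 0.999                                    # geometric cooling

print(f"pruned loss {best_loss:.4f} vs dense loss {loss(np.ones(n_in)):.4f}")
```

After annealing, the mask concentrates on the few weights that matter, so the heavily pruned layer stays close to the dense one in loss, which is the effect the abstract reports.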
APA, Harvard, Vancouver, ISO, and other styles
33

Verma, Vikas, Meng Qu, Kenji Kawaguchi, et al. "GraphMix: Improved Training of GNNs for Semi-Supervised Learning." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 11 (2021): 10024–32. http://dx.doi.org/10.1609/aaai.v35i11.17203.

Full text
Abstract:
We present GraphMix, a regularization method for Graph Neural Network based semi-supervised object classification, whereby we propose to train a fully-connected network jointly with the graph neural network via parameter sharing and interpolation-based regularization. Further, we provide a theoretical analysis of how GraphMix improves the generalization bounds of the underlying graph neural network, without making any assumptions about the "aggregation" layer or the depth of the graph neural networks. We experimentally validate this analysis by applying GraphMix to various architectures such as Graph Convolutional Networks, Graph Attention Networks and Graph-U-Net. Despite its simplicity, we demonstrate that GraphMix can consistently improve or closely match state-of-the-art performance using even simpler architectures such as Graph Convolutional Networks, across three established graph benchmarks: Cora, Citeseer and Pubmed citation network datasets, as well as three newly proposed datasets: Cora-Full, Co-author-CS and Co-author-Physics.
APA, Harvard, Vancouver, ISO, and other styles
34

Solovyeva, Elena, and Ali Abdullah. "Binary and Multiclass Text Classification by Means of Separable Convolutional Neural Network." Inventions 6, no. 4 (2021): 70. http://dx.doi.org/10.3390/inventions6040070.

Full text
Abstract:
In this paper, the structure of a separable convolutional neural network that consists of an embedding layer, separable convolutional layers, convolutional layer and global average pooling is represented for binary and multiclass text classifications. The advantage of the proposed structure is the absence of multiple fully connected layers, which is used to increase the classification accuracy but raises the computational cost. The combination of low-cost separable convolutional layers and a convolutional layer is proposed to gain high accuracy and, simultaneously, to reduce the complexity of neural classifiers. Advantages are demonstrated at binary and multiclass classifications of written texts by means of the proposed networks under the sigmoid and Softmax activation functions in convolutional layer. At binary and multiclass classifications, the accuracy obtained by separable convolutional neural networks is higher in comparison with some investigated types of recurrent neural networks and fully connected networks.
APA, Harvard, Vancouver, ISO, and other styles
35

Morozov, A. Yu, D. L. Reviznikov, and K. K. Abgaryan. "Issues of implementing neural network algorithms on memristor crossbars." Izvestiya Vysshikh Uchebnykh Zavedenii. Materialy Elektronnoi Tekhniki = Materials of Electronics Engineering 22, no. 4 (2020): 272–78. http://dx.doi.org/10.17073/1609-3577-2019-4-272-278.

Full text
Abstract:
The natural parallelization of matrix-vector operations inherent in memristor crossbars creates opportunities for their effective use in neural network computing. Analog calculations are orders of magnitude faster than calculations on a central processor or on graphics accelerators, and the energy cost of mathematical operations is significantly lower. The essential limitation of analog computing is its low accuracy, so studying the dependence of neural network quality on the precision with which its weights are set is relevant. The paper considers two convolutional neural networks trained on the MNIST (handwritten digits) and CIFAR_10 (airplanes, boats, cars, etc.) data sets. The first consists of two convolutional layers, one subsampling layer, and two fully connected layers; the second consists of four convolutional layers, two subsampling layers, and two fully connected layers. Calculations in the convolutional and fully connected layers are performed through matrix-vector operations implemented on memristor crossbars. Subsampling layers require finding the maximum of several values, an operation that can be implemented at the analog level. Training runs separately from data analysis: gradient optimization methods are typically used at the training stage, and it is advisable to perform these calculations on a CPU. When setting the weights, 3-4 precision bits are required to obtain acceptable recognition quality when the network is trained on MNIST; 6-10 precision bits are required when it is trained on CIFAR_10.
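The dependence of quality on weight precision can be probed with a simple uniform quantizer; the bit widths below echo the 3-4 and 6-10 bit ranges discussed in the abstract, but the weights here are synthetic and the quantization scheme is only one plausible choice:

```python
import numpy as np

rng = np.random.default_rng(2)

def quantize(w, bits):
    """Uniform symmetric quantization of weights to signed `bits`-bit
    levels, mimicking the limited precision of memristor conductances."""
    scale = np.max(np.abs(w))
    levels = 2 ** (bits - 1) - 1          # signed representation
    return np.round(w / scale * levels) / levels * scale

w = rng.normal(size=1000)                 # synthetic layer weights
for bits in (3, 4, 8):
    err = np.max(np.abs(quantize(w, bits) - w))
    print(f"{bits} bits: max abs weight error {err:.4f}")
```

The error roughly halves with each extra bit, which is why a few bits suffice for MNIST-scale tasks while harder datasets need wider weights.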
APA, Harvard, Vancouver, ISO, and other styles
36

Chen, Guangsheng, Chao Li, Wei Wei, et al. "Fully Convolutional Neural Network with Augmented Atrous Spatial Pyramid Pool and Fully Connected Fusion Path for High Resolution Remote Sensing Image Segmentation." Applied Sciences 9, no. 9 (2019): 1816. http://dx.doi.org/10.3390/app9091816.

Full text
Abstract:
Recent developments in Convolutional Neural Networks (CNNs) have allowed for the achievement of solid advances in semantic segmentation of high-resolution remote sensing (HRRS) images. Nevertheless, the problems of poor classification of small objects and unclear boundaries caused by the characteristics of the HRRS image data have not been fully considered by previous works. To tackle these challenging problems, we propose an improved semantic segmentation neural network, which adopts dilated convolution, a fully connected (FC) fusion path and pre-trained encoder for the semantic segmentation task of HRRS imagery. The network is built with the computationally-efficient DeepLabv3 architecture, with added Augmented Atrous Spatial Pyramid Pool and FC Fusion Path layers. Dilated convolution enlarges the receptive field of feature points without decreasing the feature map resolution. The improved neural network architecture enhances HRRS image segmentation, reaching the classification accuracy of 91%, and the precision of recognition of small objects is improved. The applicability of the improved model to the remote sensing image segmentation task is verified.
APA, Harvard, Vancouver, ISO, and other styles
37

Asadullaev, R. G., and M. A. Sitnikova. "INTELLIGENT MODEL FOR CLASSIFYING HEMODYNAMIC PATTERNS OF BRAIN ACTIVATION TO IDENTIFY NEUROCOGNITIVE MECHANISMS OF SPATIAL-NUMERICAL ASSOCIATIONS." Vestnik komp'iuternykh i informatsionnykh tekhnologii, no. 235 (January 2024): 38–45. http://dx.doi.org/10.14489/vkit.2024.01.pp.038-045.

Full text
Abstract:
The study presents the results of developing and testing deep learning neural network architectures that demonstrate high accuracy in classifying neurophysiological data, in particular hemodynamic brain activation patterns obtained by functional near-infrared spectroscopy (fNIRS) while subjects solve mathematical problems on spatial-numerical associations. The analyzed signal is a multidimensional time series of oxyhemoglobin and deoxyhemoglobin dynamics. Taking the specificity of the fNIRS signal into account, two types of neural network architectures were compared: (1) architectures based on recurrent neural networks: a recurrent neural network with long short-term memory (LSTM), an LSTM network with fully connected layers, a bidirectional LSTM network, and a convolutional LSTM network; (2) architectures based on convolutional neural networks with 1D convolutions: a convolutional neural network, a fully convolutional neural network, and a residual neural network. The trained LSTM recurrent architectures showed lower accuracy than the 1D convolutional architectures. The residual neural network (model_Resnet) demonstrated the highest accuracy, more than 88% across three experimental conditions, in detecting age-related differences in brain activation during spatial-numerical association tasks, taking into account the individual characteristics of the respondents' signal.
APA, Harvard, Vancouver, ISO, and other styles
38

Jang, Seok-Woo. "Classification of Epileptic Seizure EEG Based on Fully Connected Neural Network." International Journal of Emerging Trends in Engineering Research 8, no. 7 (2020): 3012–15. http://dx.doi.org/10.30534/ijeter/2020/21872020.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Lu, Yakun, Bo Qiu, Guanjie Xiang, Mengci Li, and Zhendong He. "Stellar Spectral Classification with 2D Spectrum and Fully Connected Neural Network." Journal of Physics: Conference Series 1626 (October 2020): 012016. http://dx.doi.org/10.1088/1742-6596/1626/1/012016.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Bollé, D., J. Busquets Blanco, and G. M. Shim. "Parallel dynamics of the fully connected Blume–Emery–Griffiths neural network." Physica A: Statistical Mechanics and its Applications 318, no. 3-4 (2003): 613–36. http://dx.doi.org/10.1016/s0378-4371(02)01528-5.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Ullah, Ubaid, Alain Garcia Olea Jurado, Ignacio Diez Gonzalez, and Begonya Garcia-Zapirain. "A Fully Connected Quantum Convolutional Neural Network for Classifying Ischemic Cardiopathy." IEEE Access 10 (2022): 134592–605. http://dx.doi.org/10.1109/access.2022.3232307.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Gokul Kannan, K., T. R. Ganesh Babu, R. Praveena, P. Sukumar, G. Sudha, and M. Birunda. "Classification of WBC cell classification using fully connected convolution neural network." Journal of Physics: Conference Series 2466, no. 1 (2023): 012033. http://dx.doi.org/10.1088/1742-6596/2466/1/012033.

Full text
Abstract:
White blood cells (WBCs) are a key component of the immune system, helping the body fight off infections and other diseases. Image processing techniques applied to blood cells can enhance the diagnosis of various diseases in medicine. In particular, leukemia is a cancer of the blood and bone marrow, the spongy tissue inside the bones where blood cells are made. In this paper, a fully connected convolutional neural network is used to segment and classify microscope WBC images into healthy and unhealthy conditions. The performance of the classifier was analyzed: the accuracy, sensitivity, specificity, and precision are 96.84%, 96.26%, 97.35%, and 96.39%, respectively.
APA, Harvard, Vancouver, ISO, and other styles
43

Dong, Cui, Rongfu Wang, and Yuanqin Hang. "Facial expression recognition based on improved VGG convolutional neural network." Journal of Physics: Conference Series 2083, no. 3 (2021): 032030. http://dx.doi.org/10.1088/1742-6596/2083/3/032030.

Full text
Abstract:
With the development of artificial intelligence, facial expression recognition based on deep learning has become a research hotspot. This article analyzes and improves the VGG16 network. First, the three fully connected layers of the original network are replaced with two convolutional layers and one fully connected layer, which reduces the complexity of the network. Then the maximum pooling in the network is changed to local, adaptive pooling to help the network select feature information that is more conducive to facial expression recognition. On the facial expression datasets RAF-DB and SFEW, the recognition rate increased by 4.7% and 7%, respectively.
APA, Harvard, Vancouver, ISO, and other styles
44

Kumar Reddy, Pottipati Dileep, and Kota Venkata Ramanaiah. "Field-programmable gate array implementation of efficient deep neural network architecture." International Journal of Electrical and Computer Engineering (IJECE) 14, no. 4 (2024): 3863. http://dx.doi.org/10.11591/ijece.v14i4.pp3863-3875.

Full text
Abstract:
A deep neural network (DNN) comprises multiple stages of data processing sub-systems, one of the primary ones being a fully connected neural network (FCNN) model. This model has multiple layers of neurons that need to be implemented using arithmetic units with a suitable number representation to optimize area, power, and speed. In this work, the network parameters are analyzed and redundancy in the weights is eliminated. A pipelined and parallel structure is designed for the fully connected network. The proposed FCNN structure has 16 inputs, 3 hidden layers, and an output layer; each hidden layer consists of 4 neurons, and the design specifies how the inputs are connected to the hidden layer neurons to process the raw data. A hardware description language (HDL) model was developed for the proposed structure, and the verified model was implemented on a Xilinx field-programmable gate array (FPGA). The modified structure comprises registers, demultiplexers, weight registers, multipliers, adders, and a read-only memory lookup table (ROM/LUT). The modified architecture implemented on the FPGA is estimated to reduce area by 87.5% and improve timing by 3x compared with direct implementation methods.
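The size of the structure this abstract describes (16 inputs, three hidden layers of 4 neurons each, and an output layer) can be checked directly; the output width is not stated in the abstract, so a single output neuron is assumed here:

```python
# Parameter count for the FCNN structure described above: 16 inputs,
# three hidden layers of 4 neurons each, and an assumed 1-neuron output.
layers = [16, 4, 4, 4, 1]

params = sum(n_in * n_out + n_out        # weights + biases per layer
             for n_in, n_out in zip(layers, layers[1:]))
print(params)  # 113
```

Counts like this (68 + 20 + 20 + 5 parameters per layer) drive the sizing of the weight registers, multipliers, and ROM/LUT entries in an FPGA implementation.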
APA, Harvard, Vancouver, ISO, and other styles
45

Petzka, Henning, Martin Trimmel, and Cristian Sminchisescu. "Notes on the Symmetries of 2-Layer ReLU-Networks." Proceedings of the Northern Lights Deep Learning Workshop 1 (February 6, 2020): 6. http://dx.doi.org/10.7557/18.5150.

Full text
Abstract:
Symmetries in neural networks allow different weight configurations leading to the same network function. For odd activation functions, the set of transformations mapping between such configurations have been studied extensively, but less is known for neural networks with ReLU activation functions. We give a complete characterization for fully-connected networks with two layers. Apart from two well-known transformations, only degenerated situations allow additional transformations that leave the network function unchanged. Reduction steps can remove only part of the degenerated cases. Finally, we present a non-degenerate situation for deep neural networks leading to new transformations leaving the network function intact.
APA, Harvard, Vancouver, ISO, and other styles
46

Жук, К. Д., С. А. Угрюмов, and Ф. В. Свойкин. "Classification of tree species in the process of logging using machine learning methods." Известия СПбЛТА, no. 242 (April 24, 2023): 167–78. http://dx.doi.org/10.21266/2079-4304.2023.242.167-178.

Full text
Abstract:
The article presents the constraining factors that limit increases in the efficiency of logging by modern multi-operation machines operating with the Scandinavian cut-to-length technology in the felling phase, namely the selection and registration of wood species. The factors for creating the complete architecture of a fully connected neural network are given; the dependence of the prediction accuracy of a fully connected neural network on a test sample on the size of the training data set is shown, along with the dependence of prediction accuracy on the number of trees in the random forest method for image classification. Classifiers for determining the species of a tree trunk from an image, based on a fully connected neural network and on a random forest, are studied. The classifiers are written in the Python programming language using the tensorflow library; the PyCharm Community cross-platform integrated development environment was used to write the code. For fully connected neural networks, a sufficient number of images and a sufficient test sample size for training were established, using tree trunk species class labels as target values. As a result of the research, the number of images required to train fully connected neural networks with high prediction accuracy was determined, and the dependence of the prediction accuracy of a fully connected neural network on the size of the training data set was constructed.
APA, Harvard, Vancouver, ISO, and other styles
47

Brennsteiner, Stefan, Tughrul Arslan, John Thompson, and Andrew McCormick. "A Real-Time Deep Learning OFDM Receiver." ACM Transactions on Reconfigurable Technology and Systems 15, no. 3 (2022): 1–25. http://dx.doi.org/10.1145/3494049.

Full text
Abstract:
Machine learning in the physical layer of communication systems holds the potential to improve performance and simplify design methodology. Many algorithms have been proposed; however, the model complexity is often unfeasible for real-time deployment. The real-time processing capability of these systems has not been proven yet. In this work, we propose a novel, less complex, fully connected neural network to perform channel estimation and signal detection in an orthogonal frequency division multiplexing system. The memory requirement, which is often the bottleneck for fully connected neural networks, is reduced by ≈ 27 times by applying known compression techniques in a three-step training process. Extensive experiments were performed for pruning and quantizing the weights of the neural network detector. Additionally, Huffman encoding was used on the weights to further reduce memory requirements. Based on this approach, we propose the first field-programmable gate array based, real-time capable neural network accelerator, specifically designed to accelerate the orthogonal frequency division multiplexing detector workload. The accelerator is synthesized for a Xilinx RFSoC field-programmable gate array, uses small-batch processing to increase throughput, efficiently supports branching neural networks, and implements superscalar Huffman decoders.
APA, Harvard, Vancouver, ISO, and other styles
48

Zhang, Kun, Yuanjie Zheng, Xiaobo Deng, Weikuan Jia, Jian Lian, and Xin Chen. "Guided Networks for Few-Shot Image Segmentation and Fully Connected CRFs." Electronics 9, no. 9 (2020): 1508. http://dx.doi.org/10.3390/electronics9091508.

Full text
Abstract:
The goal of few-shot learning methods is to learn quickly from a low-data regime. Structured output tasks like segmentation are challenging for few-shot learning because they are high-dimensional and statistically dependent. For this problem, we propose improved guided networks and combine them with a fully connected conditional random field (CRF). The guided network extracts task representations from annotated support images through feature fusion to perform fast, accurate inference on new, unannotated query images. By bringing together few-shot learning methods and fully connected CRFs, our method can perform accurate object segmentation, overcoming the poor localization properties of deep convolutional neural networks, and can quickly update tasks, without further optimization, when faced with new data. Our guided network is at the forefront of accuracy in terms of annotation volume and time.
APA, Harvard, Vancouver, ISO, and other styles
49

Bogdanov, S. A., O. S. Sidelnikov, and A. A. Redyuk. "Application of complex fully connected neural networks to compensate for nonlinearity in fibre-optic communication lines with polarisation division multiplexing." Quantum Electronics 51, no. 12 (2021): 1076–80. http://dx.doi.org/10.1070/qel17656.

Full text
Abstract:
Abstract A scheme is proposed to compensate for nonlinear distortions in extended fibre-optic communication lines with polarisation division multiplexing, based on fully connected neural networks with complex-valued arithmetic. The activation function of the developed scheme makes it possible to take into account the nonlinear interaction of signals from different polarisation components. This scheme is compared with a linear one and a neural network that processes signals of different polarisations independently, and the superiority of the proposed neural network architecture is demonstrated.
APA, Harvard, Vancouver, ISO, and other styles
50

Masala, Eugene, and Laura Blomeley. "MACHINE-LEARNING ALGORITHM FOR SHIELDED SPECIAL NUCLEAR MATERIALS DETECTION." CNL Nuclear Review 8, no. 2 (2019): 145–57. http://dx.doi.org/10.12943/cnr.2018.00004.

Full text
Abstract:
A machine-learning algorithm has been implemented using a neural network as a preliminary study of the applicability of this method to special nuclear materials detection. The algorithm predicts the presence of the 238U isotope by learning from gamma spectrum data measured with a high-purity germanium detector on a sample of depleted uranium. In this work, both a fully connected neural network and a convolutional neural network were implemented, and the performance of different configurations of the network was studied. The convolutional network showed better performance than the fully connected network, with cost function and success rate values supporting better prediction while avoiding overfitting. Furthermore, network features such as filtering, max-pooling, dropout regularization, and momentum optimization also improved prediction performance.
APA, Harvard, Vancouver, ISO, and other styles