
Journal articles on the topic 'Binary neural networks'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 journal articles for your research on the topic 'Binary neural networks.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles in a wide variety of disciplines and organise your bibliography correctly.

1. Vidyasagar, M. "Are analog neural networks better than binary neural networks?" Circuits, Systems, and Signal Processing 17, no. 2 (1998): 243–70. http://dx.doi.org/10.1007/bf01202855.

2. Qin, Haotong, Ruihao Gong, Xianglong Liu, Xiao Bai, Jingkuan Song, and Nicu Sebe. "Binary neural networks: A survey." Pattern Recognition 105 (September 2020): 107281. http://dx.doi.org/10.1016/j.patcog.2020.107281.

3. Saito, Taisei, Kota Ando, and Tetsuya Asai. "Extending Binary Neural Networks to Bayesian Neural Networks with Probabilistic Interpretation of Binary Weights." IEICE Transactions on Information and Systems E107.D, no. 8 (2024): 949–57. http://dx.doi.org/10.1587/transinf.2023lop0009.

4. Xu, Sheng, Yanjing Li, Teli Ma, et al. "Resilient Binary Neural Network." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 9 (2023): 10620–28. http://dx.doi.org/10.1609/aaai.v37i9.26261.

Abstract:
Binary neural networks (BNNs) have received ever-increasing popularity for their great capability of reducing storage burden as well as quickening inference time. However, there is a severe performance drop compared with real-valued networks, due to their intrinsic frequent weight oscillation during training. In this paper, we introduce a Resilient Binary Neural Network (ReBNN) to mitigate the frequent oscillation for better BNNs' training. We identify that the weight oscillation mainly stems from the non-parametric scaling factor. To address this issue, we propose to parameterize the scaling…
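To make the abstract's idea concrete, here is a minimal PyTorch-style sketch of a binary convolution whose scaling factor is a learnable parameter rather than the usual non-parametric mean of |W|. The module name, shapes, and initialization are illustrative assumptions, not the authors' exact ReBNN layer.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ScaledBinaryConv2d(nn.Module):
    """Illustrative sketch: binary conv with a *learnable* per-channel
    scale, in the spirit of the abstract (not the paper's exact layer)."""
    def __init__(self, in_ch, out_ch, k):
        super().__init__()
        self.weight = nn.Parameter(0.01 * torch.randn(out_ch, in_ch, k, k))
        self.alpha = nn.Parameter(torch.ones(out_ch, 1, 1, 1))  # learned scale

    def forward(self, x):
        # Straight-through estimator: sign() in the forward pass,
        # identity gradient in the backward pass.
        w_bin = self.weight + (torch.sign(self.weight) - self.weight).detach()
        return F.conv2d(x, self.alpha * w_bin, padding=1)
```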
5. Qian, Yan-min, and Xu Xiang. "Binary neural networks for speech recognition." Frontiers of Information Technology & Electronic Engineering 20, no. 5 (2019): 701–15. http://dx.doi.org/10.1631/fitee.1800469.

6. Srinivas, K., B. Kavitha Rani, M. Varaprasad Rao, G. Madhukar, and B. Venkata Ramana. "Convolution Neural Networks for Binary Classification." Journal of Computational and Theoretical Nanoscience 16, no. 11 (2019): 4877–82. http://dx.doi.org/10.1166/jctn.2019.8399.

Abstract:
Convolutional neural networks (CNNs) are similar to "ordinary" neural networks in the sense that they are made up of hidden layers consisting of neurons with "learnable" parameters. These neurons receive inputs, perform a dot product, and then follow it with a non-linearity. The whole network expresses the mapping between raw image pixels and their class scores. Conventionally, the Softmax function is the classifier used at the last layer of this network. However, there have been studies conducted to challenge this norm. Empirical data has shown that the CNN model was able to achieve a test accuracy…
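The neuron computation the abstract describes (a dot product followed by a non-linearity) fits in a few lines; a toy illustration with invented values:

```python
import numpy as np

x = np.array([0.2, -0.5, 1.0])    # inputs to one neuron (toy values)
w = np.array([0.7, 0.1, -0.3])    # learnable weights
b = 0.05                          # learnable bias
out = max(0.0, float(w @ x + b))  # dot product, then ReLU non-linearity
```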
7. Penney, R. W., and D. Sherrington. "Noise-optimal binary-synapse neural networks." Journal of Physics A: Mathematical and General 26, no. 16 (1993): 3995–4010. http://dx.doi.org/10.1088/0305-4470/26/16/016.

8. Prasad, P. W. C., Ali Assi, and Azam Beg. "Binary Decision Diagrams and neural networks." Journal of Supercomputing 39, no. 3 (2007): 301–20. http://dx.doi.org/10.1007/s11227-006-0010-7.

9. Rozen, Tal, Moshe Kimhi, Brian Chmiel, Avi Mendelson, and Chaim Baskin. "Bimodal-Distributed Binarized Neural Networks." Mathematics 10, no. 21 (2022): 4107. http://dx.doi.org/10.3390/math10214107.

Abstract:
Binary neural networks (BNNs) are an extremely promising method for significantly reducing deep neural networks' complexity and power consumption. Binarization techniques, however, suffer from non-negligible performance degradation compared to their full-precision counterparts. Prior work mainly focused on strategies for sign function approximation during the forward and backward phases to reduce the quantization error during the binarization process. In this work, we propose a bimodal-distributed binarization method (BD-BNN). The newly proposed technique aims to impose a bimodal distribution of the…
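The "sign function approximation during the forward and backward phases" the abstract mentions is commonly realized as a straight-through estimator with a clipped gradient; a generic PyTorch sketch of that baseline (not BD-BNN's specific approximation):

```python
import torch

class SignSTE(torch.autograd.Function):
    """Generic straight-through sign: exact sign() forward, hardtanh-style
    gradient backward (a common stand-in for the approximations discussed)."""
    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return torch.sign(x)

    @staticmethod
    def backward(ctx, grad_out):
        (x,) = ctx.saved_tensors
        # Pass gradients only where |x| <= 1, zero elsewhere.
        return grad_out * (x.abs() <= 1).to(grad_out.dtype)

# Usage: y = SignSTE.apply(x) inside a binarized layer's forward pass.
```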
10. Hooshmand, Mohammad Kazim, and Manjaiah Doddaghatta Huchaiah. "Network Intrusion Detection with 1D Convolutional Neural Networks." Digital Technologies Research and Applications 1, no. 2 (2022): 25. http://dx.doi.org/10.54963/dtra.v1i2.64.

Abstract:
Computer network assets are exposed to various cyber threats in today's digital era. Network Anomaly Detection Systems (NADS) play a vital role in protecting digital assets in the purview of network security. Intrusion detection systems data are imbalanced and high-dimensional, affecting models' performance in classifying malicious traffic. This paper uses a denoising autoencoder (DAE) for feature selection to reduce data dimension. To balance the data, the authors use a combined approach of an oversampling technique, adaptive synthetic (ADASYN) sampling, and a cluster-based under-sampling method using a cluster…
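A compact sketch of the pipeline shape the abstract outlines, using Keras for the denoising autoencoder and imbalanced-learn's ADASYN. The feature dimensions, noise level, and class ratio are placeholders, and the paper's cluster-based under-sampling step is omitted:

```python
import numpy as np
from tensorflow import keras
from imblearn.over_sampling import ADASYN

X = np.random.rand(1000, 64).astype("float32")   # stand-in flow features
y = (np.random.rand(1000) < 0.15).astype(int)    # imbalanced labels (toy)

# Denoising autoencoder: reconstruct clean inputs from noisy ones, then
# keep the bottleneck activations as the reduced feature set.
inp = keras.Input(shape=(64,))
code = keras.layers.Dense(16, activation="relu")(inp)
recon = keras.layers.Dense(64, activation="sigmoid")(code)
dae = keras.Model(inp, recon)
dae.compile(optimizer="adam", loss="mse")
noisy = X + 0.1 * np.random.randn(*X.shape).astype("float32")
dae.fit(noisy, X, epochs=5, verbose=0)

X_red = keras.Model(inp, code).predict(X, verbose=0)
X_bal, y_bal = ADASYN().fit_resample(X_red, y)   # oversample minority class
```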
11. Campbell, Colin, and C. Perez Vicente. "The Target Switch Algorithm: A Constructive Learning Procedure for Feed-Forward Neural Networks." Neural Computation 7, no. 6 (1995): 1245–64. http://dx.doi.org/10.1162/neco.1995.7.6.1245.

Abstract:
We propose an efficient procedure for constructing and training a feedforward neural network. The network can perform binary classification for binary or analogue input data. We show that the procedure can also be used to construct feedforward neural networks with binary-valued weights. Neural networks with binary-valued weights are potentially straightforward to implement using microelectronic or optical devices and they can also exhibit good generalization.
12. Vorabbi, Lorenzo, Davide Maltoni, and Stefano Santi. "Optimizing Data Flow in Binary Neural Networks." Sensors 24, no. 15 (2024): 4780. http://dx.doi.org/10.3390/s24154780.

Abstract:
Binary neural networks (BNNs) can substantially accelerate a neural network's inference time by substituting its costly floating-point arithmetic with bit-wise operations. Nevertheless, state-of-the-art approaches reduce the efficiency of the data flow in the BNN layers by introducing intermediate conversions from 1 to 16/32 bits. We propose a novel training scheme, denoted as BNN-Clip, that can increase the parallelism and data flow of the BNN pipeline; specifically, we introduce a clipping block that reduces the data width from 32 bits to 8. Furthermore, we decrease the internal accumulator…
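The clipping block the abstract describes narrows the accumulator output back to 8 bits so later binary layers keep a narrow data path. A hypothetical NumPy sketch of that idea (the scale value is an assumption, not the paper's exact BNN-Clip block):

```python
import numpy as np

def clip_to_int8(acc32, scale=16):
    """Hypothetical sketch: requantize a 32-bit accumulator to 8 bits
    by scaling, rounding, and saturating (illustrative scale value)."""
    q = np.rint(acc32 / scale)
    return np.clip(q, -128, 127).astype(np.int8)

acc = np.array([900, -3000, 42], dtype=np.int32)
print(clip_to_int8(acc))  # saturates the -3000 entry to -128
```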
13. Wang, Wei-Ping. "Binary-Oscillator Networks: Bridging a Gap between Experimental and Abstract Modeling of Neural Networks." Neural Computation 8, no. 2 (1996): 319–39. http://dx.doi.org/10.1162/neco.1996.8.2.319.

Abstract:
This paper proposes a simplified oscillator model, called binary-oscillator, and develops a class of neural network models having binary-oscillators as basic units. The binary-oscillator has a binary dynamic variable v = ±1 modeling the "membrane potential" of a neuron, and due to the presence of a "slow current" (as in a classical relaxation-oscillator) it can oscillate between two states. The purpose of the simplification is to enable abstract algorithmic study on the dynamics of oscillator networks. A binary-oscillator network is formally analogous to a system of stochastic binary spins (at…
14. Dong, Zhongtian, Marçal Comajoan Cara, Gopal Ramesh Dahale, et al. "ℤ2 × ℤ2 Equivariant Quantum Neural Networks: Benchmarking against Classical Neural Networks." Axioms 13, no. 3 (2024): 188. http://dx.doi.org/10.3390/axioms13030188.

Abstract:
This paper presents a comparative analysis of the performance of Equivariant Quantum Neural Networks (EQNNs) and Quantum Neural Networks (QNNs), juxtaposed against their classical counterparts: Equivariant Neural Networks (ENNs) and Deep Neural Networks (DNNs). We evaluate the performance of each network with three two-dimensional toy examples for a binary classification task, focusing on model complexity (measured by the number of parameters) and the size of the training dataset. Our results show that the ℤ2 × ℤ2 EQNN and the QNN provide superior performance for smaller parameter sets and modest…
15. Jiang, Xinrui, Nannan Wang, Jingwei Xin, Keyu Li, Xi Yang, and Xinbo Gao. "Binary neural networks for image super-resolution." SCIENTIA SINICA Informationis 51, no. 10 (2021): 1690. http://dx.doi.org/10.1360/ssi-2020-0346.

16. Cheng, Song, Zixuan Li, Yongsen Wang, et al. "Gradient Corrected Approximation for Binary Neural Networks." IEICE Transactions on Information and Systems E104.D, no. 10 (2021): 1784–88. http://dx.doi.org/10.1587/transinf.2021edl8026.

17. Wang, Pengyu, Yufan Cheng, Binhong Dong, and Guan Gui. "Binary Neural Networks for Wireless Interference Identification." IEEE Wireless Communications Letters 11, no. 1 (2022): 23–27. http://dx.doi.org/10.1109/lwc.2021.3118903.

18. Kouzuki, Ryota, and Toshimichi Saito. "Learning of Simple Dynamic Binary Neural Networks." IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences E96.A, no. 8 (2013): 1775–82. http://dx.doi.org/10.1587/transfun.e96.a.1775.

19. Smith, Joe. "Astrometric Binary Classification via Artificial Neural Networks." Astrophysical Journal 974, no. 1 (2024): 96. http://dx.doi.org/10.3847/1538-4357/ad7731.

Abstract:
With nearly two billion stars observed and their corresponding astrometric parameters evaluated in the recent Gaia mission, the number of astrometric binary candidates has risen significantly. Due to the surplus of astrometric data, the current computational methods employed to inspect these astrometric binary candidates are both computationally expensive and cannot be executed in a reasonable time frame. In light of this, a machine learning (ML) technique to automatically classify whether a set of stars belongs to an astrometric binary pair via an artificial neural network (ANN) is…
20. Penney, R. W., and D. Sherrington. "Pattern selectivity and binary-synapse neural networks." Journal of Physics A: Mathematical and General 26, no. 17 (1993): 4479–83. http://dx.doi.org/10.1088/0305-4470/26/17/052.

21. Kim, J. H., and Sung-Kwon Park. "The geometrical learning of binary neural networks." IEEE Transactions on Neural Networks 6, no. 1 (1995): 237–47. http://dx.doi.org/10.1109/72.363432.

22. Muselli, M. "On sequential construction of binary neural networks." IEEE Transactions on Neural Networks 6, no. 3 (1995): 678–90. http://dx.doi.org/10.1109/72.377973.

23. Leroux, Sam, Bert Vankeirsbilck, Tim Verbelen, Pieter Simoens, and Bart Dhoedt. "Training binary neural networks with knowledge transfer." Neurocomputing 396 (July 2020): 534–41. http://dx.doi.org/10.1016/j.neucom.2018.09.103.

24. Huang, Lihong, Xiaoqin Zeng, Shuiming Zhong, and Lixin Han. "Sensitivity study of Binary Feedforward Neural Networks." Neurocomputing 136 (July 2014): 268–80. http://dx.doi.org/10.1016/j.neucom.2014.01.005.

25. Zhang, Xin, Yuxiang Xie, Jie Chen, Lingda Wu, Qixiang Ye, and Li Liu. "Rotation Invariant Local Binary Convolution Neural Networks." IEEE Access 6 (2018): 18420–30. http://dx.doi.org/10.1109/access.2018.2818887.

26. Kim, Neungyun, Seonhee Oh, and Tae-Hwan Kim. "Lightweight Binary Neural Networks with Reduced Parameters." Journal of the Institute of Electronics and Information Engineers 59, no. 12 (2022): 65–72. http://dx.doi.org/10.5573/ieie.2022.59.12.65.

27. Andrew, Lachlan L. H. "On binary output of cellular neural networks." International Journal of Circuit Theory and Applications 25, no. 2 (1997): 147–49. http://dx.doi.org/10.1002/(sici)1097-007x(199703/04)25:2<147::aid-cta954>3.0.co;2-#.

28. Secco, Jacopo, Mauro Poggio, and Fernando Corinto. "Supervised neural networks with memristor binary synapses." International Journal of Circuit Theory and Applications 46, no. 1 (2018): 221–33. http://dx.doi.org/10.1002/cta.2429.

29. Simons, Taylor, and Dah-Jye Lee. "A Review of Binarized Neural Networks." Electronics 8, no. 6 (2019): 661. http://dx.doi.org/10.3390/electronics8060661.

Abstract:
In this work, we review Binarized Neural Networks (BNNs). BNNs are deep neural networks that use binary values for activations and weights, instead of full-precision values. With binary values, BNNs can execute computations using bitwise operations, which reduces execution time. Model sizes of BNNs are much smaller than their full-precision counterparts. While the accuracy of a BNN model is generally lower than that of full-precision models, BNNs have been closing the accuracy gap and are becoming more accurate on larger datasets like ImageNet. BNNs are also good candidates for deep learning implementation…
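The bitwise replacement for multiply-accumulate that the review refers to is the XNOR-popcount dot product; a small self-contained Python illustration:

```python
def xnor_popcount_dot(a_bits, w_bits, n):
    """Dot product of two length-n {-1,+1} vectors packed as 0/1 bits:
    XNOR counts agreements, so dot = 2*agreements - n."""
    agree = ~(a_bits ^ w_bits) & ((1 << n) - 1)  # XNOR, masked to n bits
    return 2 * bin(agree).count("1") - n

# a = [+1,-1,+1,+1], w = [+1,+1,-1,+1]  ->  1 - 1 - 1 + 1 = 0
print(xnor_popcount_dot(0b1011, 0b1101, 4))  # 0
```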
30. Wang, Peisong, Xiangyu He, Gang Li, Tianli Zhao, and Jian Cheng. "Sparsity-Inducing Binarized Neural Networks." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 07 (2020): 12192–99. http://dx.doi.org/10.1609/aaai.v34i07.6900.

Abstract:
Binarization of feature representation is critical for Binarized Neural Networks (BNNs). Currently, the sign function is the commonly used method for feature binarization. Although it works well on small datasets, the performance on ImageNet remains unsatisfactory. Previous methods mainly focus on minimizing quantization error, improving the training strategies and decomposing each convolution layer into several binary convolution modules. However, whether sign is the only option for binarization has been largely overlooked. In this work, we propose the Sparsity-inducing Binarized Neural Network (Si-BNN)…
31. DasGupta, Bhaskar, and Georg Schnitger. "Analog versus Discrete Neural Networks." Neural Computation 8, no. 4 (1996): 805–18. http://dx.doi.org/10.1162/neco.1996.8.4.805.

Abstract:
We show that neural networks with three-times continuously differentiable activation functions are capable of computing a certain family of n-bit Boolean functions with two gates, whereas networks composed of binary threshold functions require at least Ω(log n) gates. Thus, for a large class of activation functions, analog neural networks can be more powerful than discrete neural networks, even when computing Boolean functions.
32. Liu, Wenzhe, Jiehua Zhang, Zhuo Su, Zhongzhu Zhou, and Li Liu. "Binary Neural Network for Automated Visual Surface Defect Detection." Sensors 21, no. 20 (2021): 6868. http://dx.doi.org/10.3390/s21206868.

Abstract:
As is well known, defects directly affect the lives and functions of the machines in which they occur, and can even cause potentially catastrophic casualties. Therefore, quality assessment before mounting is an indispensable requirement for factories. Apart from recognition accuracy, current networks suffer from excessive computing complexity, making them difficult to deploy in the manufacturing process. To address these issues, this paper introduces binary networks into the area of surface defect detection for the first time, for the reason that binary networks prohibitively constrain…
33. Qiaoben, You, Zheng Wang, Jianguo Li, Yinpeng Dong, Yu-Gang Jiang, and Jun Zhu. "Composite Binary Decomposition Networks." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 4747–54. http://dx.doi.org/10.1609/aaai.v33i01.33014747.

Abstract:
Binary neural networks have great resource and computing efficiency, but suffer from a long training procedure and non-negligible accuracy drops compared to their full-precision counterparts. In this paper, we propose composite binary decomposition networks (CBDNet), which first compose the real-valued tensor of each layer with a limited number of binary tensors, and then decompose some conditioned binary tensors into two low-rank binary tensors, so that the number of parameters and operations are greatly reduced compared to the original ones. Experiments demonstrate the effectiveness of…
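The "compose a real-valued tensor with a limited number of binary tensors" step can be illustrated with greedy residual binarization, where each term is a scaled sign of the remaining residual. This is a generic sketch, not CBDNet's actual algorithm (which further decomposes the binary tensors into low-rank factors):

```python
import numpy as np

def binary_compose(w, m):
    """Approximate real tensor w as sum_i alpha_i * B_i, B_i in {-1,+1}."""
    residual, terms = w.astype(float), []
    for _ in range(m):
        b = np.where(residual >= 0, 1.0, -1.0)
        alpha = np.abs(residual).mean()  # least-squares scale for sign(residual)
        terms.append((alpha, b))
        residual = residual - alpha * b
    return terms

w = np.array([0.9, -0.4, 0.1, -1.2])
approx = sum(a * b for a, b in binary_compose(w, 3))
print(np.round(approx, 3))  # close to w with only 3 binary terms
```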
34. Solovyeva, Elena, and Ali Abdullah. "Binary and Multiclass Text Classification by Means of Separable Convolutional Neural Network." Inventions 6, no. 4 (2021): 70. http://dx.doi.org/10.3390/inventions6040070.

Abstract:
In this paper, the structure of a separable convolutional neural network that consists of an embedding layer, separable convolutional layers, a convolutional layer and global average pooling is presented for binary and multiclass text classification. The advantage of the proposed structure is the absence of multiple fully connected layers, which are often used to increase classification accuracy but raise the computational cost. The combination of low-cost separable convolutional layers and a convolutional layer is proposed to gain high accuracy and, simultaneously, to reduce the complexity of…
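A minimal Keras sketch of the architecture family described (embedding, separable convolutions, one plain convolution, global average pooling, no stack of fully connected layers); the vocabulary size and all layer widths are illustrative assumptions:

```python
from tensorflow import keras

model = keras.Sequential([
    keras.layers.Embedding(input_dim=20000, output_dim=64),  # vocab size assumed
    keras.layers.SeparableConv1D(64, 5, activation="relu"),
    keras.layers.SeparableConv1D(64, 5, activation="relu"),
    keras.layers.Conv1D(64, 5, activation="relu"),
    keras.layers.GlobalAveragePooling1D(),
    keras.layers.Dense(1, activation="sigmoid"),  # binary head; softmax for multiclass
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```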
35. Li, Yanfei, Tong Geng, Ang Li, and Huimin Yu. "BCNN: Binary complex neural network." Microprocessors and Microsystems 87 (November 2021): 104359. http://dx.doi.org/10.1016/j.micpro.2021.104359.

36. Boldasov, D. D., J. V. Drozdova, A. S. Komshin, and A. B. Syritskii. "Neural networks application for phasechronometric measurement information processing." Izmeritel`naya Tekhnika, no. 9 (2020): 31–35. http://dx.doi.org/10.32446/0368-1025it.2020-9-31-35.

Abstract:
This article describes a technique for processing phasechronometric measurement information based on the use of neural networks. The novelty of the proposed approach lies in the choice of a classification feature and in the use of the perceptron algorithm to perform binary classification. To assess the operability of the concept, the simplest binary classification of lathe operation modes is performed: idle or cutting.
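The perceptron rule the abstract leans on fits in a few lines; a toy sketch with the article's binary labels (0 = idle, 1 = cutting), where the features and hyperparameters are invented for illustration:

```python
import numpy as np

def train_perceptron(X, y, epochs=20, lr=0.1):
    """Rosenblatt perceptron for binary classification (toy sketch)."""
    Xb = np.hstack([X, np.ones((len(X), 1))])  # append a bias input
    w = np.zeros(Xb.shape[1])
    for _ in range(epochs):
        for xi, yi in zip(Xb, y):
            pred = 1 if xi @ w > 0 else 0
            w += lr * (yi - pred) * xi         # update only on mistakes
    return w
```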
37. Ding, Rui, Haijun Liu, and Xichuan Zhou. "IE-Net: Information-Enhanced Binary Neural Networks for Accurate Classification." Electronics 11, no. 6 (2022): 937. http://dx.doi.org/10.3390/electronics11060937.

Abstract:
Binary neural networks (BNNs) have been proposed to reduce the heavy memory and computation burdens in deep neural networks. However, the binarized weights and activations in BNNs cause huge information loss, which leads to a severe accuracy decrease and hinders the real-world applications of BNNs. To solve this problem, in this paper we propose the information-enhanced network (IE-Net) to improve the performance of BNNs. Firstly, we design an information-enhanced binary convolution (IE-BC), which enriches the information of binary activations and boosts the representational power of the binary…
38. Yu, Zeping, Rui Cao, Qiyi Tang, Sen Nie, Junzhou Huang, and Shi Wu. "Order Matters: Semantic-Aware Neural Networks for Binary Code Similarity Detection." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 01 (2020): 1145–52. http://dx.doi.org/10.1609/aaai.v34i01.5466.

Abstract:
Binary code similarity detection, whose goal is to detect similar binary functions without having access to the source code, is an essential task in computer security. Traditional methods usually use graph matching algorithms, which are slow and inaccurate. Recently, neural network-based approaches have made great achievements. A binary function is first represented as a control-flow graph (CFG) with manually selected block features, and then a graph neural network (GNN) is adopted to compute the graph embedding. While these methods are effective and efficient, they could not capture enough semantic…
39. Jonnala, Yamini Devi, Vamshi Sai Mahajan, Dheeraj Menon, Sampath Reddy Kothakapu, and Sumanth Reddy Chandamollu. "Malware Detection Using Binary Visualization and Neural Networks." E3S Web of Conferences 391 (2023): 01107. http://dx.doi.org/10.1051/e3sconf/202339101107.

Abstract:
Any programme or code that is damaging to our systems or networks is known as malware or malicious software. Malware attempts to infiltrate, damage, or destroy our gadgets such as computers, networks, tablets, and so on. Malware may also grant partial or total control over the affected systems. Malware is often detected using classic approaches such as static programme analysis or dynamic execution analysis. The exponential rise of malware variations requires us to look beyond the obvious in order to identify them before they do harm or take control of our systems. To address these drawbacks,…
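The binary visualization step such detectors rely on simply reinterprets an executable's raw bytes as grayscale pixels; a generic Python sketch (the file path and row width are placeholders):

```python
import numpy as np

def binary_to_grayscale(path, width=256):
    """Read a file's raw bytes and reshape them into a 2-D grayscale
    image that a CNN can classify (trailing partial row is dropped)."""
    data = np.fromfile(path, dtype=np.uint8)
    rows = len(data) // width
    return data[: rows * width].reshape(rows, width)
```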
40. Ryu, Changho, Hyeongseok Lee, and Tae-Hwan Kim. "Efficient Training Acceleration System for Binary Neural Networks." Journal of the Institute of Electronics and Information Engineers 59, no. 1 (2022): 3–9. http://dx.doi.org/10.5573/ieie.2022.59.1.3.

41. Yang, Yuzhi, Zhaoyang Zhang, and Qianqian Yang. "Communication-Efficient Federated Learning With Binary Neural Networks." IEEE Journal on Selected Areas in Communications 39, no. 12 (2021): 3836–50. http://dx.doi.org/10.1109/jsac.2021.3118415.

42. Kiger, John, Shen-Shyang Ho, and Vahid Heydari. "Malware Binary Image Classification Using Convolutional Neural Networks." International Conference on Cyber Warfare and Security 17, no. 1 (2022): 469–78. http://dx.doi.org/10.34190/iccws.17.1.59.

Abstract:
The persistent shortage of cybersecurity professionals, combined with enterprise networks tasked with processing more data than ever before, has led many cybersecurity experts to consider automating some of the most common and time-consuming security tasks using machine learning. One of the cybersecurity tasks where machine learning may prove advantageous is malware analysis and classification. To evade traditional detection techniques, malware developers are creating more complex malware. This is achieved through more advanced methods of code obfuscation and conducting more sophisticated attacks…
43. Guo, Lei, and Baolong Guo. "A Constraint Satisfaction Theory for Binary Neural Networks." Journal of Intelligent and Fuzzy Systems 4, no. 3 (1996): 235–42. http://dx.doi.org/10.3233/ifs-1996-4306.

44. Tsodyks, M. V. "Associative Memory in Neural Networks with Binary Synapses." Modern Physics Letters B 04, no. 11 (1990): 713–16. http://dx.doi.org/10.1142/s0217984990000891.

Abstract:
A simple learning algorithm for a neural network with binary synapses, which takes one step to store each pattern, is considered. The resulting model turns out to be palimpsestic, and the number of patterns that can be effectively retrieved is L ~ N^(1/2).
45. Wilbraham, Liam, Reiner Sebastian Sprick, Kim E. Jelfs, and Martijn A. Zwijnenburg. "Mapping binary copolymer property space with neural networks." Chemical Science 10, no. 19 (2019): 4973–84. http://dx.doi.org/10.1039/c8sc05710a.

46. Baram, Y. "Ground states of partially connected binary neural networks." Proceedings of the IEEE 78, no. 10 (1990): 1575–78. http://dx.doi.org/10.1109/5.58340.

47. Chen, Hanxiao, Hongwei Li, Meng Hao, et al. "SecBNN: Efficient Secure Inference on Binary Neural Networks." IEEE Transactions on Information Forensics and Security 19 (2024): 10273–86. http://dx.doi.org/10.1109/tifs.2024.3484936.

48. Shakkouf, A. "Review on Optimization Techniques of Binary Neural Networks." Izvestiâ vysših učebnyh zavedenij. Priborostroenie 66, no. 11 (2023): 926–35. http://dx.doi.org/10.17586/0021-3454-2023-66-11-926-935.

49. Stepanyan, I. V. "Methodology and Tools for Designing Binary Neural Networks." Programming and Computer Software 46, no. 1 (2020): 49–56. http://dx.doi.org/10.1134/s0361768820010065.

50. Gray, D. L., and A. N. Michel. "A training algorithm for binary feedforward neural networks." IEEE Transactions on Neural Networks 3, no. 2 (1992): 176–94. http://dx.doi.org/10.1109/72.125859.
