Journal articles on the topic 'Deep neural networks (DNNs)'

Consult the top 50 journal articles for your research on the topic 'Deep neural networks (DNNs)'.

You can also download the full text of each publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Zhang, Lei, Shengyuan Zhou, Tian Zhi, Zidong Du, and Yunji Chen. "TDSNN: From Deep Neural Networks to Deep Spike Neural Networks with Temporal-Coding." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 1319–26. http://dx.doi.org/10.1609/aaai.v33i01.33011319.

Abstract:
Continuous-valued deep convolutional networks (DNNs) can be converted into accurate rate-coding based spike neural networks (SNNs). However, the substantial computational and energy costs caused by multiple spikes limit their use in mobile and embedded applications. Recent works have shown that the newly emerged temporal-coding based SNNs converted from DNNs can reduce the computational load effectively. In this paper, we propose a novel method to convert DNNs to temporal-coding SNNs, called TDSNN. Combined with the characteristic of the leaky integrate-and-fire (LIF) neural model…
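
The conversion rests on the leaky integrate-and-fire dynamics, in which a stronger input drives the membrane potential across threshold sooner, so a value can be carried by the latency of a single spike rather than by a firing rate. A minimal sketch of that mechanism (constants and names are illustrative, not taken from the paper):

```python
def lif_first_spike_time(input_current, tau=20.0, threshold=1.0, dt=1.0, t_max=200.0):
    """Simulate a leaky integrate-and-fire neuron and return its first spike time.

    Temporal-coding SNNs read a value off this latency: larger inputs cross
    the threshold earlier, so information sits in *when* the spike fires.
    """
    v, t = 0.0, 0.0
    while t < t_max:
        t += dt
        v += dt * (input_current - v / tau)  # leaky integration step
        if v >= threshold:
            return t
    return float("inf")  # sub-threshold input: the neuron never spikes

# Stronger activations fire earlier, mirroring a DNN activation mapped to latency.
print([lif_first_spike_time(i) for i in (0.2, 0.5, 1.0)])  # [6.0, 3.0, 1.0]
```
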
2

Galván, Edgar. "Neuroevolution in deep neural networks." ACM SIGEVOlution 14, no. 1 (2021): 3–7. http://dx.doi.org/10.1145/3460310.3460311.

Abstract:
A variety of methods have been applied to the architectural configuration and learning or training of artificial deep neural networks (DNNs). These methods play a crucial role in the success or failure of the DNNs for most problems. Evolutionary Algorithms are gaining momentum as a computationally feasible method for the automated optimisation of DNNs. Neuroevolution is a term that describes these processes. This newsletter article summarises the full version available at https://arxiv.org/abs/2006.05415.
3

Saravanan, Kavya, and Abbas Z. Kouzani. "Advancements in On-Device Deep Neural Networks." Information 14, no. 8 (2023): 470. http://dx.doi.org/10.3390/info14080470.

Abstract:
In recent years, rapid advancements in both hardware and software technologies have resulted in the ability to execute artificial intelligence (AI) algorithms on low-resource devices. The combination of high-speed, low-power electronic hardware and efficient AI algorithms is driving the emergence of on-device AI. Deep neural networks (DNNs) are highly effective AI algorithms used for identifying patterns in complex data. DNNs, however, contain many parameters and operations that make them computationally intensive to execute. Accordingly, DNNs are usually executed on high-resource backend processors…
4

Díaz-Vico, David, Jesús Prada, Adil Omari, and José Dorronsoro. "Deep support vector neural networks." Integrated Computer-Aided Engineering 27, no. 4 (2020): 389–402. http://dx.doi.org/10.3233/ica-200635.

Abstract:
Kernel-based Support Vector Machines (SVMs), one of the most popular machine learning models, usually achieve top performance in two-class classification and regression problems. However, their training cost is at least quadratic in sample size, which makes them unsuitable for large-sample problems. Deep Neural Networks (DNNs), by contrast, with a cost linear in sample size, are able to solve big data problems relatively easily. In this work we propose to combine the advanced representations that DNNs can achieve in their last hidden layers with the hinge and ϵ-insensitive losses that are used in…
5

Awan, Burhan Humayun. "Deep Learning Neural Networks in the Cloud." International Journal of Advanced Engineering, Management and Science 9, no. 10 (2023): 09–26. http://dx.doi.org/10.22161/ijaems.910.2.

Abstract:
Deep Neural Networks (DNNs) are machine learning models currently used in a wide range of critical real-world applications. Due to the high number of parameters that make up DNNs, learning and prediction tasks require millions of floating-point operations (FLOPs). Implementing DNNs in a cloud computing system with centralized servers and data storage sub-systems equipped with high-speed, high-performance computing capabilities is a more effective strategy. This research presents an updated analysis of the most recent DNNs used in cloud computing. It highlights the necessity of cloud…
6

Cai, Chenghao, Yanyan Xu, Dengfeng Ke, and Kaile Su. "Deep Neural Networks with Multistate Activation Functions." Computational Intelligence and Neuroscience 2015 (2015): 1–10. http://dx.doi.org/10.1155/2015/721367.

Abstract:
We propose multistate activation functions (MSAFs) for deep neural networks (DNNs). These MSAFs are new kinds of activation functions which are capable of representing more than two states, including the N-order MSAFs and the symmetrical MSAF. DNNs with these MSAFs can be trained via conventional Stochastic Gradient Descent (SGD) as well as mean-normalised SGD. We also discuss how these MSAFs perform when used to resolve classification problems. Experimental results on the TIMIT corpus reveal that, on speech recognition tasks, DNNs with MSAFs perform better than conventional DNNs, getting a…
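
The abstract does not spell out the functional form, but an activation with more than two states can be built by summing shifted logistic sigmoids, so the output climbs through several plateaus instead of two. The sketch below is an assumed illustration of that idea, not the paper's exact N-order MSAF definition:

```python
import numpy as np

def msaf(x, n_states=3, spacing=4.0):
    """Multistate activation assembled from shifted logistic sigmoids.

    Each shifted sigmoid contributes one unit step, so the output saturates
    at 0, 1, ..., n_states - 1 rather than at the two states of a sigmoid.
    """
    shifts = spacing * np.arange(n_states - 1)
    return sum(1.0 / (1.0 + np.exp(-(x - s))) for s in shifts)

x = np.linspace(-6.0, 10.0, 9)
print(np.round(msaf(x), 3))  # values step through multiple plateaus
```
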
7

Yu, Haichao, Haoxiang Li, Humphrey Shi, Thomas S. Huang, and Gang Hua. "Any-Precision Deep Neural Networks." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 12 (2021): 10763–71. http://dx.doi.org/10.1609/aaai.v35i12.17286.

Abstract:
We present any-precision deep neural networks (DNNs), which are trained with a new method that allows the learned DNNs to be flexible in numerical precision during inference. The same model at runtime can be flexibly and directly set to different bit-widths, by truncating the least significant bits, to support a dynamic speed and accuracy trade-off. When all layers are set to low bit-widths, we show that the model achieves accuracy comparable to dedicated models trained at the same precision. This nice property facilitates flexible deployment of deep learning models in real-world applications, where…
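
The mechanism that lets one stored model serve several precisions is ordinary bit truncation of the quantized weights; a hedged sketch of that operation (not the authors' code):

```python
import numpy as np

def truncate_to_bits(q_weights, stored_bits, target_bits):
    """Serve unsigned fixed-point weights at a lower precision by dropping
    their least significant bits, as any-precision inference does."""
    assert target_bits <= stored_bits
    return q_weights >> (stored_bits - target_bits)

# One 8-bit tensor can be served at 4 or 2 bits with no retraining.
w8 = np.array([255, 128, 37, 200], dtype=np.uint8)
print(truncate_to_bits(w8, 8, 4))  # [15  8  2 12]
print(truncate_to_bits(w8, 8, 2))  # [3 2 0 3]
```
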
8

Tao, Zhe, Stephanie Nawas, Jacqueline Mitchell, and Aditya V. Thakur. "Architecture-Preserving Provable Repair of Deep Neural Networks." Proceedings of the ACM on Programming Languages 7, PLDI (2023): 443–67. http://dx.doi.org/10.1145/3591238.

Abstract:
Deep neural networks (DNNs) are becoming increasingly important components of software, and are considered the state-of-the-art solution for a number of problems, such as image recognition. However, DNNs are far from infallible, and incorrect behavior of DNNs can have disastrous real-world consequences. This paper addresses the problem of architecture-preserving V-polytope provable repair of DNNs. A V-polytope defines a convex bounded polytope using its vertex representation. V-polytope provable repair guarantees that the repaired DNN satisfies the given specification on the infinite set of points…
9

Verpoort, Philipp C., Alpha A. Lee, and David J. Wales. "Archetypal landscapes for deep neural networks." Proceedings of the National Academy of Sciences 117, no. 36 (2020): 21857–64. http://dx.doi.org/10.1073/pnas.1919995117.

Abstract:
The predictive capabilities of deep neural networks (DNNs) continue to evolve to increasingly impressive levels. However, it is still unclear how training procedures for DNNs succeed in finding parameters that produce good results for such high-dimensional and nonconvex loss functions. In particular, we wish to understand why simple optimization schemes, such as stochastic gradient descent, do not end up trapped in local minima with high loss values that would not yield useful predictions. We explain the optimizability of DNNs by characterizing the local minima and transition states of the loss…
10

Marrow, Scythia, Eric J. Michaud, and Erik Hoel. "Examining the Causal Structures of Deep Neural Networks Using Information Theory." Entropy 22, no. 12 (2020): 1429. http://dx.doi.org/10.3390/e22121429.

Abstract:
Deep Neural Networks (DNNs) are often examined at the level of their response to input, such as analyzing the mutual information between nodes and data sets. Yet DNNs can also be examined at the level of causation, exploring “what does what” within the layers of the network itself. Historically, analyzing the causal structure of DNNs has received less attention than understanding their responses to input. Yet definitionally, generalizability must be a function of a DNN’s causal structure, as it reflects how the DNN responds to unseen or even not-yet-defined future inputs. Here, we introduce a…
11

Kutz, J. Nathan. "Deep learning in fluid dynamics." Journal of Fluid Mechanics 814 (January 31, 2017): 1–4. http://dx.doi.org/10.1017/jfm.2016.803.

Abstract:
It was only a matter of time before deep neural networks (DNNs) – deep learning – made their mark in turbulence modelling, or more broadly, in the general area of high-dimensional, complex dynamical systems. In the last decade, DNNs have become a dominant data mining tool for big data applications. Although neural networks have been applied previously to complex fluid flows, the article featured here (Ling et al., J. Fluid Mech., vol. 807, 2016, pp. 155–166) is the first to apply a true DNN architecture, specifically to Reynolds-averaged Navier–Stokes turbulence models. As one often expects with…
12

Banerjee, Debangshu, Changming Xu, and Gagandeep Singh. "Input-Relational Verification of Deep Neural Networks." Proceedings of the ACM on Programming Languages 8, PLDI (2024): 1–27. http://dx.doi.org/10.1145/3656377.

Abstract:
We consider the verification of input-relational properties defined over deep neural networks (DNNs), such as robustness against universal adversarial perturbations, monotonicity, etc. Precise verification of these properties requires reasoning about multiple executions of the same DNN. We introduce a novel concept of difference tracking to compute the difference between the outputs of two executions of the same DNN at all layers. We design a new abstract domain, DiffPoly, for efficient difference tracking that can scale to large DNNs. DiffPoly is equipped with custom abstract transformers for common…
13

Xu, Xiangxiang, Shao-Lun Huang, Lizhong Zheng, and Gregory W. Wornell. "An Information Theoretic Interpretation to Deep Neural Networks." Entropy 24, no. 1 (2022): 135. http://dx.doi.org/10.3390/e24010135.

Abstract:
With the unprecedented performance achieved by deep learning, it is commonly believed that deep neural networks (DNNs) attempt to extract informative features for learning tasks. To formalize this intuition, we apply the local information geometric analysis and establish an information-theoretic framework for feature selection, which demonstrates the information-theoretic optimality of DNN features. Moreover, we conduct a quantitative analysis to characterize the impact of network structure on the feature extraction process of DNNs. Our investigation naturally leads to a performance metric for…
14

Nakamura, Kensuke, Bilel Derbel, Kyoung-Jae Won, and Byung-Woo Hong. "Learning-Rate Annealing Methods for Deep Neural Networks." Electronics 10, no. 16 (2021): 2029. http://dx.doi.org/10.3390/electronics10162029.

Abstract:
Deep neural networks (DNNs) have achieved great success in the last decades. DNNs are optimized using stochastic gradient descent (SGD) with learning-rate annealing, which overtakes adaptive methods in many tasks. However, there is no common choice of annealing schedule for SGD. This paper presents an empirical analysis of learning-rate annealing based on experimental results using the major datasets for image classification, one of the key applications of DNNs. Our experiment involves recent deep neural network models in combination with a variety of learning…
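
Such comparisons typically pit a discrete step decay against a smooth schedule like cosine annealing; both fit in a few lines (an illustrative sketch, not the paper's exact configurations):

```python
import math

def step_decay(epoch, lr0=0.1, drop=0.1, every=30):
    """Multiply the learning rate by `drop` once every `every` epochs."""
    return lr0 * drop ** (epoch // every)

def cosine_annealing(epoch, total_epochs, lr0=0.1, lr_min=0.0):
    """Anneal smoothly from lr0 down to lr_min over the whole run."""
    return lr_min + 0.5 * (lr0 - lr_min) * (1 + math.cos(math.pi * epoch / total_epochs))

for epoch in (0, 30, 60, 89):
    print(epoch, step_decay(epoch), round(cosine_annealing(epoch, 90), 4))
```
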
15

Shu, Hai, and Hongtu Zhu. "Sensitivity Analysis of Deep Neural Networks." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 4943–50. http://dx.doi.org/10.1609/aaai.v33i01.33014943.

Abstract:
Deep neural networks (DNNs) have achieved superior performance in various prediction tasks, but can be very vulnerable to adversarial examples or perturbations. Therefore, it is crucial to measure the sensitivity of DNNs to various forms of perturbations in real applications. We introduce a novel perturbation manifold and its associated influence measure to quantify the effects of various perturbations on DNN classifiers. Such perturbations include various external and internal perturbations to input samples and network parameters. The proposed measure is motivated by information geometry and…
16

Ding, Junhua, Haihua Chen, Yunhe Feng, and Tozammel Hossain. "Applications of Deep Learning Techniques." Electronics 13, no. 17 (2024): 3354. http://dx.doi.org/10.3390/electronics13173354.

17

Roberto G. Pacheco, Fernanda D.V.R. Oliveira, and Rodrigo S. Couto. "Early exit deep neural networks for distorted images on edge environments." ITU Journal on Future and Evolving Technologies 5, no. 3 (2024): 344–55. http://dx.doi.org/10.52953/fohp3741.

Abstract:
Deep Neural Networks (DNNs) are widely used for image classification but can struggle with distorted images, leading to reduced accuracy. Moreover, these applications often have to meet a strict deadline. To this end, an alternative is adaptive offloading based on early-exit DNNs (EE-DNNs). EE-DNNs have branches inserted into their middle layers at the edge device. These branches provide confidence estimates. If the classification is sufficiently confident, the inference terminates at the edge device. Otherwise, the edge offloads the inference task to the cloud, which runs the…
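
The offloading rule reduces to a confidence threshold at the branch output; a minimal sketch under assumed names (edge_branch and cloud_model are placeholders for the two classifiers):

```python
import numpy as np

def classify_with_early_exit(x, edge_branch, cloud_model, threshold=0.8):
    """Run the early-exit branch on the edge device and offload to the cloud
    only when the branch is not confident enough; both callables must return
    class probabilities."""
    probs = edge_branch(x)
    if np.max(probs) >= threshold:       # confident enough: stop at the edge
        return int(np.argmax(probs)), "edge"
    probs = cloud_model(x)               # otherwise pay the offloading cost
    return int(np.argmax(probs)), "cloud"

# Stand-ins for the edge branch and the full cloud model:
edge = lambda x: np.array([0.9, 0.1])
cloud = lambda x: np.array([0.4, 0.6])
print(classify_with_early_exit(None, edge, cloud))  # (0, 'edge')
```
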
18

Syed, Rizwan, Markus Ulbricht, Krzysztof Piotrowski, and Milos Krstic. "A Survey on Fault-Tolerant Methodologies for Deep Neural Networks." Pomiary Automatyka Robotyka 27, no. 2 (2023): 89–98. http://dx.doi.org/10.14313/par_248/89.

Abstract:
A significant rise in Artificial Intelligence (AI) has impacted many applications around us, so much so that AI is now increasingly used in safety-critical applications. AI at the edge is now a reality, meaning that data computation is performed closer to the source of the data rather than in the cloud. Safety-critical applications have strict reliability requirements; therefore, it is essential that AI models running on the edge (i.e., hardware) fulfill the required safety standards. In the vast field of AI, Deep Neural Networks (DNNs) are the focal point of this survey…
19

O’Connell, Thomas P., Tyler Bonnen, Yoni Friedman, et al. "Approximating Human-Level 3D Visual Inferences With Deep Neural Networks." Open Mind 9 (2025): 305–24. https://doi.org/10.1162/opmi_a_00189.

Abstract:
Humans make rich inferences about the geometry of the visual world. While deep neural networks (DNNs) achieve human-level performance on some psychophysical tasks (e.g., rapid classification of object or scene categories), they often fail in tasks requiring inferences about the underlying shape of objects or scenes. Here, we ask whether and how this gap in 3D shape representation between DNNs and humans can be closed. First, we define the problem space: after generating a stimulus set to evaluate 3D shape inferences using a match-to-sample task, we confirm that standard DNNs are unable…
20

Xu, Shenghe, Shivendra S. Panwar, Murali Kodialam, and T. V. Lakshman. "Deep Neural Network Approximated Dynamic Programming for Combinatorial Optimization." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 02 (2020): 1684–91. http://dx.doi.org/10.1609/aaai.v34i02.5531.

Abstract:
In this paper, we propose a general framework for combining deep neural networks (DNNs) with dynamic programming to solve combinatorial optimization problems. For problems that can be broken into smaller subproblems and solved by dynamic programming, we train a set of neural networks to replace value or policy functions at each decision step. Two variants of the neural network approximated dynamic programming (NDP) methods are proposed; in the value-based NDP method, the networks learn to estimate the value of each choice at the corresponding step, while in the policy-based NDP method the DNNs…
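
In the value-based variant, the exact dynamic-programming value table is replaced by a learned estimator while the recursion itself is unchanged; a toy sketch with a stand-in for the trained network (all names hypothetical):

```python
def ndp_greedy(state, actions, transition, reward, value_net):
    """One decision step of value-based neural dynamic programming: pick the
    action maximizing immediate reward plus the network's value estimate of
    the resulting state (value_net stands in for a trained DNN)."""
    return max(actions(state),
               key=lambda a: reward(state, a) + value_net(transition(state, a)))

# Toy usage: integer states, two actions, a linear stand-in value function.
best = ndp_greedy(
    0,
    actions=lambda s: (1, 2),
    transition=lambda s, a: s + a,
    reward=lambda s, a: -a,        # immediate action cost
    value_net=lambda s: 1.5 * s,   # pretend the DNN prefers higher states
)
print(best)  # 2
```
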
21

Jahnvi, and Rohit Maheshwari. "CNN-RNN: The Dynamic Duo of Deep Learning." Career Point International Journal of Research (CPIJR) 4 (January 10, 2024): 109–16. https://doi.org/10.5281/zenodo.11291549.

Abstract:
Deep neural networks (DNNs) have brought about a transformative shift in the realm of natural language processing (NLP). Within the domain of DNNs, Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs) stand out as the predominant choices, each excelling in distinct aspects of NLP. While CNNs are adept at extracting features regardless of their position in a sequence, RNNs specialize in modeling sequential elements. This review delves into their core principles, architectures, and applications, highlighting their distinct strengths in computer vision and natural language processing…
22

Putra, Prasetia Utama, Keisuke Shima, and Koji Shimatani. "A deep neural network model for multi-view human activity recognition." PLOS ONE 17, no. 1 (2022): e0262181. http://dx.doi.org/10.1371/journal.pone.0262181.

Abstract:
Multiple cameras are used to resolve the occlusion problems that often occur in single-view human activity recognition. Based on the success of learning representations with deep neural networks (DNNs), recent works have proposed DNN models to estimate human activity from multi-view inputs. However, currently available datasets are inadequate for training DNN models to a high accuracy rate. To address this issue, this study presents a DNN model, trained by employing transfer learning and shared-weight techniques, to classify human activity from multiple cameras. The model comprises pre-trained…
23

Zhang, Hongtao, Shinichi Yoshida, and Zhen Li. "Brain-like illusion produced by Skye’s Oblique Grating in deep neural networks." PLOS ONE 19, no. 2 (2024): e0299083. http://dx.doi.org/10.1371/journal.pone.0299083.

Abstract:
The analogy between the brain and deep neural networks (DNNs) has sparked interest in neuroscience. Although DNNs have limitations, they remain valuable for modeling specific brain characteristics. This study used Skye’s Oblique Grating illusion to assess DNNs’ relevance to brain neural networks. We collected data on human perceptual responses to a series of visual illusions. This data was then used to assess how DNN responses to these illusions paralleled or differed from human behavior. We performed two analyses: (1) we trained DNNs to perform horizontal vs. non-horizontal classification on…
24

Jang, Hojin, Devin McCormack, and Frank Tong. "Noise-trained deep neural networks effectively predict human vision and its neural responses to challenging images." PLOS Biology 19, no. 12 (2021): e3001418. http://dx.doi.org/10.1371/journal.pbio.3001418.

Abstract:
Deep neural networks (DNNs) for object classification have been argued to provide the most promising model of the visual system, accompanied by claims that they have attained or even surpassed human-level performance. Here, we evaluated whether DNNs provide a viable model of human vision when tested with challenging noisy images of objects, sometimes presented at the very limits of visibility. We show that popular state-of-the-art DNNs perform in a qualitatively different manner than humans—they are unusually susceptible to spatially uncorrelated white noise and less impaired by spatially correlated…
25

Lestari, Wulan Sri, Yuni Marlina Saragih, and Caroline Caroline. "MULTICLASS CLASSIFICATION FOR STUNTING PREDICTION USING DEEP NEURAL NETWORKS." JITK (Jurnal Ilmu Pengetahuan dan Teknologi Komputer) 10, no. 2 (2024): 386–93. http://dx.doi.org/10.33480/jitk.v10i2.5636.

Abstract:
Stunting is a chronic nutritional issue that hinders child growth and leads to serious long-term health and developmental impacts, particularly in developing countries. Therefore, early and accurate prediction of stunting is crucial for implementing effective interventions. This research aims to develop a multiclass classification model based on Deep Neural Networks (DNNs) to predict stunting status. The model is trained using a comprehensive dataset that encompasses various health variables related to stunting. The research process includes data collection, data preprocessing, dataset splitting…
26

Chinta, Rajashekar Reddy. "Watermarking Deep Neural Networks for Embedded Systems." Journal For Innovative Development in Pharmaceutical and Technical Science 8, no. 8 (2020): 24–30. https://doi.org/10.5281/zenodo.4009031.

Abstract:
Deep neural networks (DNNs) have become a critical instrument for bringing intelligence into mobile and embedded devices. The increasingly extensive training, sharing, and possible improvement of DNN models create a compelling need for intellectual property (IP) protection. Recently, DNN watermarking has emerged as a possible IP protection technique. Enabling DNN watermarking on embedded devices requires a black-box approach; existing DNN watermarking frameworks either fail to fulfill that requirement or are prone to nume…
27

Jacobs, Robert A., and Christopher J. Bates. "Comparing the Visual Representations and Performance of Humans and Deep Neural Networks." Current Directions in Psychological Science 28, no. 1 (2018): 34–39. http://dx.doi.org/10.1177/0963721418801342.

Abstract:
Although deep neural networks (DNNs) are state-of-the-art artificial intelligence systems, it is unclear what insights, if any, they provide about human intelligence. We address this issue in the domain of visual perception. After briefly describing DNNs, we provide an overview of recent results comparing human visual representations and performance with those of DNNs. In many cases, DNNs acquire visual representations and processing strategies that are very different from those used by people. We conjecture that there are at least two factors preventing them from serving as better psychological…
28

Xie, Xuan, Fuyuan Zhang, Xinwen Hu, and Lei Ma. "DeepGemini: Verifying Dependency Fairness for Deep Neural Network." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 12 (2023): 15251–59. http://dx.doi.org/10.1609/aaai.v37i12.26779.

Abstract:
Deep neural networks (DNNs) have been widely adopted in many decision-making industrial applications. Their fairness issues, i.e., whether unintended biases exist in the DNN, receive much attention and have become critical concerns: such biases can directly cause negative impacts in our daily life and potentially undermine the fairness of our society, especially as DNNs are deployed at an unprecedented speed. Recently, some early attempts have been made to provide fairness assurance for DNNs, such as fairness testing, which aims at finding discriminatory samples empirically, and fairness…
29

Li, Mengting. "Unraveling Financial Markets: Deep Neural Network-Based Models for Stock Price Prediction." Advances in Economics, Management and Political Sciences 82, no. 1 (2024): 186–94. http://dx.doi.org/10.54254/2754-1169/82/20231111.

Abstract:
This paper delves into the potential and challenges of leveraging deep neural networks (DNNs) in stock price forecasting. Traditional econometric models often grapple with the complexities of financial time series data, leading to the exploration of DNNs, especially architectures like Long Short-Term Memory (LSTM) networks, to capture intricate patterns in such data. While these networks present promising results, challenges such as model interpretability, non-stationarity of data, overfitting, and computational demands remain. The financial sector's increasing digitization and influx of alternative…
30

Zhang, Hongtao, and Shinichi Yoshida. "Exploring Deep Neural Networks in Simulating Human Vision through Five Optical Illusions." Applied Sciences 14, no. 8 (2024): 3429. http://dx.doi.org/10.3390/app14083429.

Abstract:
Recent research has delved into the biological parallels between deep neural networks (DNNs) in vision and human perception through the study of visual illusions. However, the bulk of this research is currently constrained to the investigation of visual illusions within a single model, focusing on a singular type of illusion. There exists a need for a more comprehensive explanation of visual illusions in DNNs, as well as an expansion in the variety of illusions studied. This study is pioneering in its application of representational dissimilarity matrices and feature activation visualization techniques…
31

Zhang, Xinyang, Ren Pang, Shouling Ji, Fenglong Ma, and Ting Wang. "i-Algebra: Towards Interactive Interpretability of Deep Neural Networks." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 13 (2021): 11691–98. http://dx.doi.org/10.1609/aaai.v35i13.17390.

Abstract:
Providing explanations for deep neural networks (DNNs) is essential for their use in domains wherein the interpretability of decisions is a critical prerequisite. Despite the plethora of work on interpreting DNNs, most existing solutions offer interpretability in an ad hoc, one-shot, and static manner, without accounting for the perception, understanding, or response of end-users, resulting in their poor usability in practice. In this paper, we argue that DNN interpretability should be implemented as the interactions between users and models. We present i-Algebra, a first-of-its-kind interactive…
32

Servais, Jason, and Ehsan Atoofian. "Adaptive Computation Reuse for Energy-Efficient Training of Deep Neural Networks." ACM Transactions on Embedded Computing Systems 20, no. 6 (2021): 1–24. http://dx.doi.org/10.1145/3487025.

Abstract:
In recent years, Deep Neural Networks (DNNs) have been deployed into a diverse set of applications, from voice recognition to scene generation, mostly due to their high accuracy. DNNs are known to be computationally intensive applications, requiring a significant power budget. There have been a large number of investigations into the energy efficiency of DNNs. However, most of them primarily focused on inference, while training of DNNs has received little attention. This work proposes an adaptive technique to identify and avoid redundant computations during the training of DNNs. Elements of activation…
33

Gao, Yuyang, Tong Steven Sun, Liang Zhao, and Sungsoo Ray Hong. "Aligning Eyes between Humans and Deep Neural Network through Interactive Attention Alignment." Proceedings of the ACM on Human-Computer Interaction 6, CSCW2 (2022): 1–28. http://dx.doi.org/10.1145/3555590.

Abstract:
While Deep Neural Networks (DNNs) are driving major innovations through their powerful automation, we are also witnessing the peril behind automation as a form of bias, such as automated racism, gender bias, and adversarial bias. As the societal impact of DNNs grows, finding an effective way to steer DNNs to align their behavior with the human mental model has become indispensable in realizing fair and accountable models. While establishing the way to adjust DNNs to "think like humans" is in pressing need, there have been few approaches aiming to capture how "humans would think" when DNNs…
34

P, Ganesh Kumar, and Ramesh G. "AN EXPLORATION OF THE POTENTIAL OF DEEP NEURAL NETWORKS IN ARTIFICIAL INTELLIGENCE." ICTACT Journal on Data Science and Machine Learning 4, no. 3 (2023): 466–69. https://doi.org/10.21917/ijdsml.2023.0108.

Abstract:
Deep Neural Networks (DNNs) have revolutionized the field of Artificial Intelligence (AI). These networks have enabled machines to learn complex tasks. DNNs are especially useful when the task involves large amounts of data, because they can effectively model the non-linear relationships that exist in the data. This allows them to make accurate predictions for previously unseen data. Deep Neural Networks have the potential to become a powerful tool driving advances in artificial intelligence technologies. They can be used for a variety of tasks such as computer vision, natural language…
35

Jin, Wei, Yaxing Li, Han Xu, et al. "Adversarial Attacks and Defenses on Graphs." ACM SIGKDD Explorations Newsletter 22, no. 2 (2021): 19–34. http://dx.doi.org/10.1145/3447556.3447566.

Abstract:
Deep neural networks (DNNs) have achieved significant performance in various tasks. However, recent studies have shown that DNNs can be easily fooled by small perturbations of the input, called adversarial attacks.
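
The canonical instance of such a perturbation is the fast gradient sign method (FGSM), which nudges the input in the direction that increases the loss; a minimal PyTorch sketch (FGSM is a standard attack in this literature, not a method specific to this survey):

```python
import torch

def fgsm_attack(model, x, y, epsilon=0.03):
    """Fast gradient sign method: shift each input by epsilon in the
    direction that most increases the loss, yielding a small but
    intentionally adversarial perturbation."""
    x = x.clone().detach().requires_grad_(True)
    loss = torch.nn.functional.cross_entropy(model(x), y)
    loss.backward()
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()  # keep inputs in a valid image range
```
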
36

Zatadini, Tangang Qisthina Handayani, Achmad Farid Wadjdi, I. Made Wiryana, et al. "Modified Of Evaluating Shallow And Deep Neural Networks For Network Intrusion Detection Systems In Cyber Security." International Journal of Progressive Sciences and Technologies 42, no. 1 (2023): 105. http://dx.doi.org/10.52155/ijpsat.v42.1.5822.

Abstract:
Intrusion Detection Systems (IDS) have developed into a crucial layer in all contemporary Information and Communication Technology (ICT) systems as a result of the demand for cyber safety in real-world situations. Integrating Deep Neural Networks (DNNs) into an IDS is advisable because, among other things, certain types of attacks can be challenging to identify and advanced cyberattacks are complex. DNNs were employed in this study to anticipate attacks on a Network Intrusion Detection System (N-IDS). The network has been trained and benchmarked using the KDDCup-'99 dataset, and a DNN with a…
37

Luo, Yaoru, Guole Liu, Yuanhao Guo, and Ge Yang. "Deep Neural Networks Learn Meta-Structures from Noisy Labels in Semantic Segmentation." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 2 (2022): 1908–16. http://dx.doi.org/10.1609/aaai.v36i2.20085.

Abstract:
How deep neural networks (DNNs) learn from noisy labels has been studied extensively in image classification but much less in image segmentation. So far, our understanding of the learning behavior of DNNs trained with noisy segmentation labels remains limited. In this study, we address this deficiency in both binary segmentation of biological microscopy images and multi-class segmentation of natural images. We generate extremely noisy labels by randomly sampling a small fraction (e.g., 10%) or flipping a large fraction (e.g., 90%) of the ground truth labels. When trained with these noisy labels…
38

Altoub, Majed, Fahad AlQurashi, Tan Yigitcanlar, Juan M. Corchado, and Rashid Mehmood. "An Ontological Knowledge Base of Poisoning Attacks on Deep Neural Networks." Applied Sciences 12, no. 21 (2022): 11053. http://dx.doi.org/10.3390/app122111053.

Abstract:
Deep neural networks (DNNs) have successfully delivered cutting-edge performance in several fields. With the broader deployment of DNN models on critical applications, the security of DNNs has become an active and yet nascent area. Attacks against DNNs can have catastrophic results, according to recent studies. Poisoning attacks, including backdoor attacks and Trojan attacks, are one of the growing threats against DNNs. Having a wide-angle view of these evolving threats is essential to better understand the security issues. In this regard, creating a semantic model and a knowledge graph for poisoning…
39

Cheng, Hao, Dongze Lian, Shenghua Gao, and Yanlin Geng. "Utilizing Information Bottleneck to Evaluate the Capability of Deep Neural Networks for Image Classification." Entropy 21, no. 5 (2019): 456. http://dx.doi.org/10.3390/e21050456.

Abstract:
Inspired by the pioneering work of the information bottleneck (IB) principle for Deep Neural Networks’ (DNNs) analysis, we thoroughly study the relationship among the model accuracy, I(X;T) and I(T;Y), where I(X;T) and I(T;Y) are the mutual information of the DNN’s output T with input X and label Y. Then, we design an information plane-based framework to evaluate the capability of DNNs (including CNNs) for image classification. Instead of each hidden layer’s output, our framework focuses on the model output T. We successfully apply our framework to many application scenarios…
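
For discrete variables, both quantities can be estimated with a plug-in histogram; a small sketch of estimating I(T;Y) in nats (an assumption here is that the output T has already been discretized, e.g., to predicted class labels):

```python
import numpy as np

def mutual_information(t, y):
    """Plug-in estimate of I(T; Y) for two discrete label arrays, the kind
    of quantity an information-plane analysis tracks for the model output."""
    joint = np.histogram2d(t, y, bins=(np.unique(t).size, np.unique(y).size))[0]
    p_ty = joint / joint.sum()                    # joint distribution
    p_t = p_ty.sum(axis=1, keepdims=True)         # marginal of T
    p_y = p_ty.sum(axis=0, keepdims=True)         # marginal of Y
    nz = p_ty > 0                                  # skip zero-probability cells
    return float((p_ty[nz] * np.log(p_ty[nz] / (p_t @ p_y)[nz])).sum())

t = np.array([0, 0, 1, 1, 2, 2])
y = np.array([0, 0, 1, 1, 1, 1])
print(mutual_information(t, y))  # > 0: the output T carries information about Y
```
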
40

Abomakhelb, Abdulruhman, Kamarularifin Abd Jalil, Alya Geogiana Buja, Abdulraqeb Alhammadi, and Abdulmajeed M. Alenezi. "A Comprehensive Review of Adversarial Attacks and Defense Strategies in Deep Neural Networks." Technologies 13, no. 5 (2025): 202. https://doi.org/10.3390/technologies13050202.

Abstract:
Artificial Intelligence (AI) security research is promising and highly valuable in the current decade. In particular, deep neural network (DNN) security is receiving increased attention. Although DNNs have recently emerged as a prominent tool for addressing complex challenges across various machine learning (ML) tasks, and stand out as the most widely employed models, holding a significant share in both research and industry, they exhibit vulnerabilities to adversarial attacks, where slight but intentional perturbations can deceive DNN models. Consequently, several studies have proposed…
41

Pandey, Lalit, Donsuk Lee, Samantha M. W. Wood, and Justin N. Wood. "Parallel development of object recognition in newborn chicks and deep neural networks." PLOS Computational Biology 20, no. 12 (2024): e1012600. https://doi.org/10.1371/journal.pcbi.1012600.

Abstract:
How do newborns learn to see? We propose that visual systems are space-time fitters, meaning visual development can be understood as a blind fitting process (akin to evolution) in which visual systems gradually adapt to the spatiotemporal data distributions in the newborn’s environment. To test whether space-time fitting is a viable theory for learning how to see, we performed parallel controlled-rearing experiments on newborn chicks and deep neural networks (DNNs), including CNNs and transformers. First, we raised newborn chicks in impoverished environments containing a single object, then…
42

Stefanato, Eduardo, Vitor Oliveira, Christiano Pinheiro, Regina Barroso, and Anderson Meneses. "Segmentation of Lung Tomographic Images Using U-Net Deep Neural Networks." Latin-American Journal of Computing 10, no. 2 (2023): 106–19. https://doi.org/10.5281/zenodo.8071498.

Abstract:
Deep Neural Networks (DNNs) are among the best methods of Artificial Intelligence, especially in computer vision, where convolutional neural networks play an important role. There are numerous architectures of DNNs, but for image processing, U-Net offers great performance in digital processing tasks such as segmentation of organs, tumors, and cells for supporting medical diagnoses. In the present work, an assessment of U-Net models is proposed for the segmentation of computed tomography of the lung, aiming at comparing networks with different parameters. In this study, the models scored 96% Dice…
43

Cao, Yuan, and Quanquan Gu. "Generalization Error Bounds of Gradient Descent for Learning Over-Parameterized Deep ReLU Networks." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (2020): 3349–56. http://dx.doi.org/10.1609/aaai.v34i04.5736.

Abstract:
Empirical studies show that gradient-based methods can learn deep neural networks (DNNs) with very good generalization performance in the over-parameterization regime, where DNNs can easily fit a random labeling of the training data. Very recently, a line of work explains in theory that with over-parameterization and proper random initialization, gradient-based methods can find the global minima of the training loss for DNNs. However, existing generalization error bounds are unable to explain the good generalization performance of over-parameterized DNNs. The major limitation of most existing…
44

Grant, Lauren L., and Clarissa S. Sit. "De novo molecular drug design benchmarking." RSC Medicinal Chemistry 12, no. 8 (2021): 1273–80. http://dx.doi.org/10.1039/d1md00074h.

Abstract:
Deep neural networks (DNNs) used for de novo drug design have different architectures and hyperparameters that impact the final output of suggested drug candidates. Herein we review benchmarking platforms that assess the utility and validity of DNNs.
45

Doddipatla, Laxman. "Deep neural networks in foreign exchange market: A predictive classification framework for real-time price movement." World Journal of Advanced Engineering Technology and Sciences 10, no. 2 (2023): 326–38. https://doi.org/10.30574/wjaets.2023.10.2.0140.

Abstract:
The application of deep neural networks (DNNs) in foreign exchange (FX) markets introduces a novel methodology for predicting short-term price movements with greater accuracy. This study explores a classification-based approach, where market price direction is forecasted using a comprehensive model trained on high-frequency historical FX data. By employing advanced deep learning techniques and leveraging co-movement patterns between various currency pairs, the model classifies price changes into positive, negative, or neutral outcomes. Through a backtested strategy on multiple FX futures over…
46

Firdaus, Siti Nurmaini, Reza Firsandaya Malik, et al. "Author identification in bibliographic data using deep neural networks." TELKOMNIKA (Telecommunication, Computing, Electronics and Control) 19, no. 3 (2021): 911–18. https://doi.org/10.12928/telkomnika.v19i3.18877.

Abstract:
Author name disambiguation (AND) is a challenging task for scholars who mine bibliographic information for scientific knowledge. A constructive approach for resolving name ambiguity is to use computer algorithms to identify author names. Some algorithm-based disambiguation methods have been developed by computer and data scientists. Among them, supervised machine learning has been stated to produce decent to very accurate disambiguation results. This paper presents a combination of principal component analysis (PCA) as a feature reduction and deep neural networks (DNNs) as a supervised algorithm…
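
The described pipeline is PCA feature reduction feeding a supervised neural classifier; a sketch with scikit-learn on synthetic data (the MLP size and feature dimensions are stand-ins, not the paper's configuration):

```python
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

# Synthetic stand-in for bibliographic feature vectors (e.g., name n-grams).
X, y = make_classification(n_samples=600, n_features=50, n_informative=10,
                           n_classes=3, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# PCA compresses the features before the (here, small) neural network.
model = make_pipeline(PCA(n_components=10),
                      MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500,
                                    random_state=0))
model.fit(X_tr, y_tr)
print(f"held-out accuracy: {model.score(X_te, y_te):.2f}")
```
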
47

Kwon, Hyun, Hyunsoo Yoon, and Ki-Woong Park. "Selective Poisoning Attack on Deep Neural Networks." Symmetry 11, no. 7 (2019): 892. http://dx.doi.org/10.3390/sym11070892.

Abstract:
Studies related to pattern recognition and visualization using computer technology have been introduced. In particular, deep neural networks (DNNs) provide good performance for image, speech, and pattern recognition. However, a poisoning attack is a serious threat to a DNN’s security. A poisoning attack reduces the accuracy of a DNN by adding malicious training data during the training process. In some situations, it may be necessary to drop the accuracy of a specifically chosen class from the model. For example, if an attacker specifically disallows nuclear facilities to be selectively recognized…
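
At the data level, such an attack amounts to corrupting the training labels of a single chosen class while leaving the rest intact; a minimal illustrative sketch of that manipulation (names and fractions are assumptions, not the paper's method):

```python
import numpy as np

def selectively_poison(labels, target_class, n_classes, flip_fraction=0.5, seed=0):
    """Flip a fraction of the labels of one chosen class to random other
    classes, degrading the trained model's accuracy on that class alone."""
    rng = np.random.default_rng(seed)
    poisoned = labels.copy()
    idx = np.flatnonzero(labels == target_class)
    chosen = rng.choice(idx, size=int(flip_fraction * idx.size), replace=False)
    offsets = rng.integers(1, n_classes, size=chosen.size)  # nonzero: label changes
    poisoned[chosen] = (target_class + offsets) % n_classes
    return poisoned

y = np.array([0, 0, 1, 1, 1, 1, 2, 2])
print(selectively_poison(y, target_class=1, n_classes=3))
```
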
48

Villalobos, Kimberly, Vilim Štih, Amineh Ahmadinejad, et al. "Do Neural Networks for Segmentation Understand Insideness?" Neural Computation 33, no. 9 (2021): 2511–49. http://dx.doi.org/10.1162/neco_a_01413.

Abstract:
The insideness problem is an aspect of image segmentation that consists of determining which pixels are inside and outside a region. Deep neural networks (DNNs) excel in segmentation benchmarks, but it is unclear if they have the ability to solve the insideness problem, as it requires evaluating long-range spatial dependencies. In this letter, we analyze the insideness problem in isolation, without texture or semantic cues, such that other aspects of segmentation do not interfere in the analysis. We demonstrate that DNNs for segmentation with few units have sufficient complexity to solve…
49

Aamir, Aisha, Minija Tamosiunaite, and Florentin Wörgötter. "Caffe2Unity: Immersive Visualization and Interpretation of Deep Neural Networks." Electronics 11, no. 1 (2021): 83. http://dx.doi.org/10.3390/electronics11010083.

Abstract:
Deep neural networks (DNNs) dominate many tasks in the computer vision domain, but it is still difficult to understand and interpret the information contained within these networks. To gain better insight into how a network learns and operates, there is a strong need to visualize these complex structures, and this remains an important research direction. In this paper, we address the problem of how the interactive display of DNNs in a virtual reality (VR) setup can be used for general understanding and architectural assessment. We compiled a static library as a plugin for the Caffe framework…
50

Cheng, Yihui, and Baiyi Liu. "A Study of the Computation Amount and Computation Time of Classical Deep Neural Networks." Journal of Big Data and Computing 3, no. 1 (2025): 141–46. https://doi.org/10.62517/jbdc.202501119.

Abstract:
Deep convolutional neural networks have made great progress in a variety of computer vision tasks. With the gradual improvement of their performance, the layers of neural networks have become deeper, and the training-validation time and computational complexity have increased dramatically. Finding the relationship between the characteristics of deep neural networks and their training-validation time is of great significance for accelerating convolutional neural networks. This paper analyzes the computation of several classical deep convolutional neural networks (DNNs) proposed in the field of image recognition…
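
The computation amount of a convolutional layer follows directly from its shape, which is the kind of counting such an analysis rests on; a short sketch of the standard formula (counting 2 FLOPs per multiply-accumulate, an assumed convention):

```python
def conv2d_flops(h_out, w_out, c_in, c_out, k):
    """FLOPs of a KxK convolution producing an h_out x w_out x c_out output
    from c_in input channels: one MAC per kernel tap per output element."""
    return 2 * h_out * w_out * c_out * c_in * k * k

# First layer of a VGG-style network on a 224x224 RGB image:
print(f"{conv2d_flops(224, 224, 3, 64, 3):,} FLOPs")  # ~0.17 GFLOPs
```
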