Academic literature on the topic 'Depth-wise convolution'

Create an accurate reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Depth-wise convolution.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Depth-wise convolution"

1

Hossain, Syed Mohammad Minhaz, Kaushik Deb, Pranab Kumar Dhar, and Takeshi Koshiba. "Plant Leaf Disease Recognition Using Depth-Wise Separable Convolution-Based Models." Symmetry 13, no. 3 (2021): 511. http://dx.doi.org/10.3390/sym13030511.

Full text
Abstract:
Proper plant leaf disease (PLD) detection is challenging in complex backgrounds and under different capture conditions. For this reason, initially, modified adaptive centroid-based segmentation (ACS) is used to trace the proper region of interest (ROI). Automatic initialization of the number of clusters (K) using modified ACS before recognition increases tracing ROI’s scalability even for symmetrical features in various plants. Besides, convolutional neural network (CNN)-based PLD recognition models achieve adequate accuracy to some extent. However, memory requirements (large-scaled parameters) and the high computational cost of CNN-based PLD models are pressing issues for memory-restricted mobile and IoT-based devices. Therefore, after tracing ROIs, three proposed depth-wise separable convolutional PLD (DSCPLD) models, such as segmented modified DSCPLD (S-modified MobileNet), segmented reduced DSCPLD (S-reduced MobileNet), and segmented extended DSCPLD (S-extended MobileNet), are utilized to represent the constructive trade-off among accuracy, model size, and computational latency. Moreover, we have compared our proposed DSCPLD recognition models with state-of-the-art models, such as MobileNet, VGG16, VGG19, and AlexNet. Among segmented-based DSCPLD models, S-modified MobileNet achieves the best accuracy of 99.55% and F1-score of 97.07%. Besides, we have simulated our DSCPLD models using both full plant leaf images and segmented plant leaf images and conclude that, after using modified ACS, all models increase their accuracy and F1-score. Furthermore, a new plant leaf dataset containing 6580 images of eight plants was used to experiment with several depth-wise separable convolution models.
APA, Harvard, Vancouver, ISO, and other styles
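The trade-off this entry describes between accuracy, model size, and latency stems from factoring a convolution into a depth-wise stage and a point-wise (1 x 1) stage. A minimal sketch of the parameter arithmetic, with illustrative layer sizes not taken from the paper:

```python
# Parameter count of a standard conv layer vs. a depth-wise separable one.
# The layer sizes below are hypothetical, chosen only for illustration.

def standard_conv_params(k, c_in, c_out):
    """k x k convolution mapping c_in channels to c_out channels."""
    return k * k * c_in * c_out

def separable_conv_params(k, c_in, c_out):
    """Depth-wise k x k conv (one filter per input channel)
    followed by a 1 x 1 point-wise conv that mixes channels."""
    depth_wise = k * k * c_in   # one spatial filter per channel
    point_wise = c_in * c_out   # 1 x 1 cross-channel mixing
    return depth_wise + point_wise

k, c_in, c_out = 3, 128, 256
std = standard_conv_params(k, c_in, c_out)   # 294912
sep = separable_conv_params(k, c_in, c_out)  # 1152 + 32768 = 33920
print(std, sep, round(std / sep, 1))
```

For a 3 x 3 kernel this factorization cuts the parameter count by roughly 8-9x, which is why MobileNet-style models suit the memory-restricted devices the abstract targets.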
2

Kim, Daehee, Juhee Kang, and Jaekoo Lee. "Lightweighting of Super-Resolution Model Using Depth-Wise Separable Convolution." Journal of Korean Institute of Communications and Information Sciences 46, no. 4 (2021): 591–97. http://dx.doi.org/10.7840/kics.2021.46.4.591.

Full text
3

Zhang, Ke, Ken Cheng, Jingjing Li, and Yuanyuan Peng. "A Channel Pruning Algorithm Based on Depth-Wise Separable Convolution Unit." IEEE Access 7 (2019): 173294–309. http://dx.doi.org/10.1109/access.2019.2956976.

Full text
4

Siddiqua, Shahzia, Naveena Chikkaguddaiah, Sunilkumar S. Manvi, and Manjunath Aradhya. "AksharaNet: A GPU Accelerated Modified Depth-Wise Separable Convolution for Kannada Text Classification." Revue d'Intelligence Artificielle 35, no. 2 (2021): 145–52. http://dx.doi.org/10.18280/ria.350206.

Full text
Abstract:
For content-based indexing and retrieval applications, text characters embedded in images are a rich source of information. Owing to their different shapes, grayscale values, and dynamic backgrounds, these text characters in scene images are difficult to detect and classify. The complexity increases when the text involved is a vernacular language like Kannada. Despite advances in deep learning neural networks (DLNN), there is a dearth of fast and effective models to classify scene text images and of a large-scale Kannada scene character dataset to train them. In this paper, two key contributions are proposed: AksharaNet, a graphical processing unit (GPU)-accelerated modified convolution neural network architecture consisting of linearly inverted depth-wise separable convolutions, and a Kannada Scene Individual Character (KSIC) dataset, curated from the ground up and consisting of 46,800 images. From the results, it is observed that AksharaNet outperforms four other well-established models by 1.5% on CPU and 1.9% on GPU. The result can be directly attributed to the quality of the developed KSIC dataset. Early stopping decisions at 25% and 50% of epochs, with good and bad accuracies for complex and light models, are discussed. Also, useful findings concerning the learning rate drop factor and its ideal application period are enumerated.
5

Chao, Xiaofei, Xiao Hu, Jingze Feng, Zhao Zhang, Meili Wang, and Dongjian He. "Construction of Apple Leaf Diseases Identification Networks Based on Xception Fused by SE Module." Applied Sciences 11, no. 10 (2021): 4614. http://dx.doi.org/10.3390/app11104614.

Full text
Abstract:
The fast and accurate identification of apple leaf diseases is beneficial for disease control and management of apple orchards. An improved network for apple leaf disease classification and a lightweight model for mobile terminal usage were designed in this paper. First, we proposed the SE-DEEP block to fuse the Squeeze-and-Excitation (SE) module with the Xception network to get the SE_Xception network, where the SE module is inserted between the depth-wise convolution and point-wise convolution of the depth-wise separable convolution layer. Therefore, the feature channels from the lower layers could be directly weighted, which made the model more sensitive to the principal features of the classification task. Second, we designed a lightweight network, named SE_miniXception, by reducing the depth and width of SE_Xception. Experimental results show that the average classification accuracy of SE_Xception is 99.40%, which is 1.99% higher than Xception. The average classification accuracy of SE_miniXception is 97.01%, which is 1.60% and 1.22% higher than MobileNetV1 and ShuffleNet, respectively, while its number of parameters is less than those of MobileNet and ShuffleNet. The minimized network decreases the memory usage and FLOPs, and reduces the recognition time from 15 to 7 milliseconds per image. Our proposed SE-DEEP block provides a choice for improving network accuracy, and our network compression scheme provides ideas for making existing networks lightweight.
APA, Harvard, Vancouver, ISO, and other styles
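The SE-DEEP block described in this abstract places a squeeze-and-excitation gate between the depth-wise and point-wise stages of a separable convolution. A minimal NumPy sketch of that channel reweighting follows; the shapes, random weights, and reduction ratio r are illustrative assumptions, not the authors' code:

```python
import numpy as np

rng = np.random.default_rng(0)
C, H, W, r = 8, 4, 4, 2
x = rng.standard_normal((C, H, W))  # output of the depth-wise convolution

# Squeeze: global average pool to one descriptor per channel.
z = x.mean(axis=(1, 2))             # shape (C,)

# Excitation: a bottleneck of two small dense layers, then a sigmoid gate.
w1 = rng.standard_normal((C // r, C))
w2 = rng.standard_normal((C, C // r))
s = 1.0 / (1.0 + np.exp(-(w2 @ np.maximum(w1 @ z, 0.0))))  # gates in (0, 1)

# Scale: reweight each channel before the point-wise (1 x 1) conv mixes them.
y = x * s[:, None, None]
print(y.shape)
```

Gating here, rather than after the point-wise convolution, is what lets the lower-layer feature channels be weighted directly, as the abstract notes.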
6

Kate, Vandana, and Pragya Shukla. "Breast Cancer Image Multi-Classification Using Random Patch Aggregation and Depth-Wise Convolution based Deep-Net Model." International Journal of Online and Biomedical Engineering (iJOE) 17, no. 01 (2021): 83. http://dx.doi.org/10.3991/ijoe.v17i01.18513.

Full text
Abstract:
Adapting deep convolutional neural network models to large image classification can result in network architectures with a large number of learnable parameters, and tuning those varied parameters can considerably grow the complexity of the model. To address this problem, a convolutional Deep-Net Model based on the extraction of random patches and depth-wise convolutions is proposed for training and classification of the widely known benchmark Breast Cancer histopathology images. The classification results of these patches are aggregated using majority vote casting to decide the final image classification type. It has been observed that the proposed Deep-Net model outperforms the VGG Net (16 layers) learned features in terms of accuracy when applied to breast tumor histopathology images. The objective of this work is to examine and comprehensively analyze the sub-class classification performance of the proposed model across all optical magnification frontiers.
7

Dang, Lanxue, Peidong Pang, and Jay Lee. "Depth-Wise Separable Convolution Neural Network with Residual Connection for Hyperspectral Image Classification." Remote Sensing 12, no. 20 (2020): 3408. http://dx.doi.org/10.3390/rs12203408.

Full text
Abstract:
The neural network-based hyperspectral image (HSI) classification model has a deep structure, which leads to an increase in training parameters, long training times, and excessive computational cost. The deepened network models are likely to cause the problem of gradient disappearance, which limits further improvement of classification accuracy. To this end, a residual unit with fewer training parameters was constructed by combining the residual connection with the depth-wise separable convolution. With the increased depth of the network, the number of output channels of each residual unit increases linearly with a small amplitude. The deepened network can continuously extract the spectral and spatial features while building a cone network structure by stacking the residual units. At the end of the model, a 1 × 1 convolution layer combined with a global average pooling layer replaces the traditional fully connected layer, completing the classification with fewer parameters in the network. Experiments were conducted on three benchmark HSI datasets: Indian Pines, Pavia University, and Kennedy Space Center. The overall classification accuracies were 98.85%, 99.58%, and 99.96%, respectively. Compared with other classification methods, the proposed network model achieves higher classification accuracy while spending less time on training and testing.
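The head this abstract describes, a 1 × 1 convolution followed by global average pooling in place of a fully connected layer, can be sketched in NumPy as follows; the channel count, spatial size, and class count are illustrative assumptions, not the paper's values:

```python
import numpy as np

rng = np.random.default_rng(1)
C, H, W, n_classes = 16, 5, 5, 9
feat = rng.standard_normal((C, H, W))  # final feature map of the network

# 1 x 1 convolution: one (n_classes, C) weight applied at every spatial site,
# producing a per-class score map.
w = rng.standard_normal((n_classes, C))
maps = np.einsum('kc,chw->khw', w, feat)   # shape (n_classes, H, W)

# Global average pooling collapses the spatial dims into class logits.
logits = maps.mean(axis=(1, 2))            # shape (n_classes,)

# Parameter comparison against a flatten-plus-dense head.
fc_params = H * W * C * n_classes          # 3600
head_params = C * n_classes                # 144
print(logits.shape, fc_params, head_params)
```

Because the 1 x 1 head has no dependence on H and W, it also accepts inputs of any spatial size, which is part of why this replacement reduces the parameters needed in the network.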
8

Shang, Lijuan [商丽娟]. "Super-Resolution Reconstruction Algorithm for Cross-Module Based on Depth-Wise Separable Convolution." Journal of Image and Signal Processing 07, no. 02 (2018): 96–104. http://dx.doi.org/10.12677/jisp.2018.72011.

Full text
9

Huang, Gangjin, Yuanliang Zhang, and Jiayu Ou. "Transfer remaining useful life estimation of bearing using depth-wise separable convolution recurrent network." Measurement 176 (May 2021): 109090. http://dx.doi.org/10.1016/j.measurement.2021.109090.

Full text
10

Cho, Sung In, Jae Hyeon Park, and Suk-Ju Kang. "A Generative Adversarial Network-Based Image Denoiser Controlling Heterogeneous Losses." Sensors 21, no. 4 (2021): 1191. http://dx.doi.org/10.3390/s21041191.

Full text
Abstract:
We propose a novel generative adversarial network (GAN)-based image denoising method that utilizes heterogeneous losses. In order to improve the restoration quality of the structural information of the generator, the heterogeneous losses, including the structural loss in addition to the conventional mean squared error (MSE)-based loss, are used to train the generator. To maximize the improvements brought on by the heterogeneous losses, the strength of the structural loss is adaptively adjusted by the discriminator for each input patch. In addition, a depth-wise separable convolution-based module that utilizes the dilated convolution and symmetric skip connection is used for the proposed GAN so as to reduce the computational complexity while providing improved denoising quality compared to the convolutional neural network (CNN) denoiser. The experiments showed that the proposed method improved visual information fidelity and feature similarity index values by up to 0.027 and 0.008, respectively, compared to the existing CNN denoiser.