Academic literature on the topic 'Modified convolutional neural network'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Modified convolutional neural network.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Modified convolutional neural network"

1

Sapunov, V. V., S. A. Botman, G. V. Kamyshov, and N. N. Shusharina. "Application of Convolution with Periodic Boundary Condition for Processing Data from Cylindrical Electrode Arrays." INFORMACIONNYE TEHNOLOGII 27, no. 3 (2021): 125–31. http://dx.doi.org/10.17587/it.27.125-131.

Full text
Abstract:
In this paper, a modification of convolutional neural networks for processing electromyographic data obtained from cylindrical electrode arrays is proposed. Taking into account the spatial symmetry of the array, the convolution operation was redefined using periodic boundary conditions, which made it possible to construct a neural network that is invariant to rotations of the electrode array around its axis. The applicability of the proposed approach was evaluated by constructing a neural network containing the new type of convolutional layer and training it on the open UC2018 DualMyo dataset to classify gestures based on data from a single myo bracelet. The network based on the new type of convolution performed better than common convolutions when trained on data without augmentation, which indicates that such a network is invariant to cyclic shifts in the input data. Neural networks with modified convolutional layers and common convolutional layers achieved F1 scores of 0.96 and 0.65, respectively, with no augmentation of the input data, and F1 scores of 0.98 and 0.96 when train-time augmentation was applied. Test data was augmented in both cases. Potentially, the proposed convolution can be applied to any data with the same connectivity, making it possible to adapt time-tested network architectures by replacing common convolutions with modified ones.
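The core idea, a convolution whose kernel wraps around the closed ring of electrodes, can be pictured with a standard circular-padding convolution. The snippet below is a minimal sketch under assumed channel and electrode counts, not the authors' implementation; it also checks the shift-equivariance property the abstract describes.

```python
# Minimal sketch (not the authors' code): a 1D convolution whose padding wraps
# around, so the kernel treats the first and last electrodes of a cylindrical
# array as neighbours. Channel and kernel sizes below are illustrative only.
import torch
import torch.nn as nn

n_electrodes = 8          # sensors around the cylinder (assumed)
emg_channels = 1          # input feature maps per electrode position
kernel_size = 3

circular_conv = nn.Conv1d(
    in_channels=emg_channels,
    out_channels=16,
    kernel_size=kernel_size,
    padding=kernel_size // 2,
    padding_mode="circular",   # periodic boundary condition
)

x = torch.randn(4, emg_channels, n_electrodes)   # batch of EMG frames
y = circular_conv(x)                             # length along the ring is preserved

# Rotating the electrode array corresponds to rolling the input; the feature
# maps are rolled by the same amount, which is the equivariance exploited here.
x_rot = torch.roll(x, shifts=2, dims=-1)
assert torch.allclose(circular_conv(x_rot), torch.roll(y, shifts=2, dims=-1), atol=1e-5)
```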
APA, Harvard, Vancouver, ISO, and other styles
2

Wasim Khan. "Image Classification using modified Convolutional Neural Network." Journal of Electrical Systems 20, no. 3 (2024): 3465–72. https://doi.org/10.52783/jes.4982.

Full text
Abstract:
Image classification has been a field of research for decades. With the evolution of new technologies, the performance of image classification has improved, as is evident from its use in everyday life. However, there is still scope to use deep learning networks to further improve performance on complex image classification problems. In this paper, convolutional neural network (CNN)-based image classification is evaluated by changing the parameters of the CNN, such as the number of layers, the number of neurons, and the kernel size of the convolution operation. A parametric analysis in terms of accuracy and the number of iterations needed for convergence is presented in the results section. The standard Intel image classification dataset is used to evaluate performance, and the maximum accuracy achieved is reported.
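As a rough illustration of this kind of parametric study (not the paper's actual model), the sketch below builds a Keras CNN whose depth, filter count, kernel size, and dense width are arguments, so configurations can be compared; the 150x150x3 input and six classes are assumptions based on the public Intel dataset.

```python
# Illustrative sketch: a CNN builder whose depth, filter count, kernel size and
# dense width are parameters, so different configurations can be compared.
import tensorflow as tf

def build_cnn(num_conv_layers=3, filters=32, kernel_size=3,
              dense_units=128, num_classes=6, input_shape=(150, 150, 3)):
    model = tf.keras.Sequential()
    model.add(tf.keras.Input(shape=input_shape))
    for i in range(num_conv_layers):
        # double the filter count at each stage, a common convention
        model.add(tf.keras.layers.Conv2D(filters * (2 ** i), kernel_size,
                                         padding="same", activation="relu"))
        model.add(tf.keras.layers.MaxPooling2D())
    model.add(tf.keras.layers.Flatten())
    model.add(tf.keras.layers.Dense(dense_units, activation="relu"))
    model.add(tf.keras.layers.Dense(num_classes, activation="softmax"))
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# Compare configurations, e.g. deeper vs. shallower networks:
for depth in (2, 3, 4):
    model = build_cnn(num_conv_layers=depth)
    print(depth, model.count_params())
```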
APA, Harvard, Vancouver, ISO, and other styles
3

Iatsenko, D. V., and B. B. Zhmaylov. "IMPROVING THE EFFICIENCY OF THE CONVOLUTIONAL NEURAL NETWORK USING THE METHOD OF INCREASING THE RECEPTIVE FIELD." Vestnik komp'iuternykh i informatsionnykh tekhnologii, no. 195 (September 2020): 18–24. http://dx.doi.org/10.14489/vkit.2020.09.pp.018-024.

Full text
Abstract:
In many pattern recognition problems solved using convolutional neural networks (CNNs), one of the important characteristics of the network architecture is the size of the convolution kernel, since it coincides with the size of the largest element that can act as a recognition feature. However, increasing the size of the convolution kernel greatly increases the number of tunable network parameters. The effective receptive field method was first applied in AlexNet in 2012. The practical application of the method of increasing the effective receptive field without increasing the convolution kernel size is discussed in this article. A presented example of a small network designed to recognize fire in a picture demonstrates the use of an effective receptive field built from a stack of smaller convolutions. A comparison of the original network with a large convolution kernel and a modified network with a stack of smaller kernels shows that, with equal network characteristics such as prediction accuracy and prediction time, the number of tunable parameters in the network with an effective receptive field is significantly reduced.
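The parameter saving behind this approach is easy to verify: two stacked 3x3 convolutions cover the same 5x5 receptive field as a single 5x5 convolution with roughly 28% fewer weights. The sketch below checks this with illustrative channel counts, not values taken from the article.

```python
# Sketch of the receptive-field argument: two stacked 3x3 convolutions cover the
# same 5x5 receptive field as a single 5x5 convolution but with fewer parameters.
import torch.nn as nn

channels = 64  # illustrative channel count

single_5x5 = nn.Conv2d(channels, channels, kernel_size=5, padding=2)
stacked_3x3 = nn.Sequential(
    nn.Conv2d(channels, channels, kernel_size=3, padding=1),
    nn.ReLU(inplace=True),
    nn.Conv2d(channels, channels, kernel_size=3, padding=1),
)

def n_params(module):
    return sum(p.numel() for p in module.parameters())

print("single 5x5:", n_params(single_5x5))    # 64*64*25 + 64      = 102,464
print("two 3x3:   ", n_params(stacked_3x3))   # 2*(64*64*9 + 64)   =  73,856
```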
APA, Harvard, Vancouver, ISO, and other styles
4

Murinto, Murinto, and Sri Winiarti. "Modified particle swarm optimization (MPSO) optimized CNN’s hyperparameters for classification." International Journal of Advances in Intelligent Informatics 11, no. 1 (2025): 133. https://doi.org/10.26555/ijain.v11i1.1303.

Full text
Abstract:
This paper proposes a convolutional neural network architectural design approach using the modified particle swarm optimization (MPSO) algorithm. Adjusting hyperparameters and searching for the optimal network architecture of a convolutional neural network (CNN) is an interesting challenge. Network performance and the efficiency of learning models on particular problems depend on the chosen hyperparameter values, which results in large and complex search spaces to explore. Heuristic-based search is well suited to this type of problem; the main contribution of this research is to apply the MPSO algorithm to find the optimal parameters of the CNN, including the number of convolution layers, the filters used in the convolution process, the number of convolution filters, and the batch size. The parameters obtained using MPSO are kept the same in each convolution layer, and the objective function evaluated by MPSO is the classification rate. The optimized architecture is applied to a Batik motif database. The research found that the proposed model produced the best results, with a classification rate higher than 94%, comparing well with other state-of-the-art approaches. This research demonstrates the performance of the MPSO algorithm in optimizing CNN architectures, highlighting its potential for improving image recognition tasks.
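The overall search loop can be pictured with plain particle swarm optimization over a small integer hyperparameter space. The sketch below is only schematic: the paper's specific modifications to PSO are not reproduced, the search space is illustrative, and the objective function is a stub standing in for training the CNN and measuring its classification rate.

```python
# Simplified sketch of PSO over CNN hyperparameters (layers, filters, batch size).
# evaluate() is a stub; in practice it would build and briefly train the CNN and
# return the validation classification rate.
import numpy as np

rng = np.random.default_rng(0)

# hyperparameter search space: (low, high) per dimension (assumed ranges)
bounds = np.array([[1, 5],      # number of convolution layers
                   [8, 128],    # number of convolution filters
                   [16, 256]])  # batch size

def evaluate(position):
    """Stub objective; replace with CNN training and validation accuracy."""
    layers, filters, batch = np.round(position).astype(int)
    return -((layers - 3) ** 2 + (filters - 64) ** 2 / 100 + (batch - 64) ** 2 / 200)

n_particles, n_iters, w, c1, c2 = 10, 30, 0.7, 1.5, 1.5
pos = rng.uniform(bounds[:, 0], bounds[:, 1], size=(n_particles, 3))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([evaluate(p) for p in pos])
gbest = pbest[pbest_val.argmax()].copy()

for _ in range(n_iters):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, bounds[:, 0], bounds[:, 1])
    vals = np.array([evaluate(p) for p in pos])
    improved = vals > pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmax()].copy()

print("best hyperparameters found:", np.round(gbest).astype(int))
```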
APA, Harvard, Vancouver, ISO, and other styles
5

Sun, Kai, Jiangshe Zhang, Junmin Liu, Shuang Xu, Xiangyong Cao, and Rongrong Fei. "Modified Dynamic Routing Convolutional Neural Network for Pan-Sharpening." Remote Sensing 15, no. 11 (2023): 2869. http://dx.doi.org/10.3390/rs15112869.

Full text
Abstract:
Based on deep learning, various pan-sharpening models have achieved excellent results. However, most of them adopt simple addition or concatenation operations to merge the information of low spatial resolution multi-spectral (LRMS) images and panchromatic (PAN) images, which may cause a loss of detailed information. To tackle this issue, inspired by capsule networks, we propose a plug-and-play layer named modified dynamic routing layer (MDRL), which modifies the information transmission mode of capsules to effectively fuse LRMS images and PAN images. Concretely, the lower-level capsules are generated by applying transform operation to the features of LRMS images and PAN images, which preserve the spatial location information. Then, the dynamic routing algorithm is modified to adaptively select the lower-level capsules to generate the higher-level capsule features to represent the fusion of LRMS images and PAN images, which can effectively avoid the loss of detailed information. In addition, the previous addition and concatenation operations are illustrated as special cases of our MDRL. Based on MIPSM with addition operations and DRPNN with concatenation operations, two modified dynamic routing models named MDR–MIPSM and MDR–DRPNN are further proposed for pan-sharpening. Extensive experimental results demonstrate that the proposed method can achieve remarkable spectral and spatial quality.
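For orientation, the dynamic routing mechanism that the MDRL modifies originates in capsule networks. A much-simplified routing-by-agreement step looks like the sketch below; shapes and the number of iterations are chosen for illustration and this is not the paper's MDRL.

```python
# Much-simplified sketch of capsule-style dynamic routing (routing by agreement).
import torch
import torch.nn.functional as F

def squash(s, dim=-1, eps=1e-8):
    norm_sq = (s ** 2).sum(dim=dim, keepdim=True)
    return norm_sq / (1.0 + norm_sq) * s / torch.sqrt(norm_sq + eps)

def dynamic_routing(u_hat, n_iters=3):
    """u_hat: predictions from lower capsules, shape (batch, n_lower, n_upper, dim)."""
    b = torch.zeros(u_hat.shape[:-1], device=u_hat.device)   # routing logits
    for _ in range(n_iters):
        c = F.softmax(b, dim=2)                               # couplings over upper capsules
        s = (c.unsqueeze(-1) * u_hat).sum(dim=1)              # weighted sum -> (batch, n_upper, dim)
        v = squash(s)                                         # upper-capsule outputs
        b = b + (u_hat * v.unsqueeze(1)).sum(dim=-1)          # agreement update
    return v

# Toy usage: 32 lower capsules (e.g. from LRMS and PAN feature maps), 8 upper capsules.
u_hat = torch.randn(2, 32, 8, 16)
print(dynamic_routing(u_hat).shape)   # torch.Size([2, 8, 16])
```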
APA, Harvard, Vancouver, ISO, and other styles
6

Adhari, Firman Maulana, Taufik Fuadi Abidin, and Ridha Ferdhiana. "License Plate Character Recognition using Convolutional Neural Network." Journal of Information Systems Engineering and Business Intelligence 8, no. 1 (2022): 51–60. http://dx.doi.org/10.20473/jisebi.8.1.51-60.

Full text
Abstract:
Background: In the last decade, the number of registered vehicles has grown exponentially. With more vehicles on the road, traffic jams, accidents, and violations also increase. A license plate plays a key role in solving such problems because it stores a vehicle's historical information. Therefore, automated license-plate character recognition is needed. Objective: This study proposes a recognition system that uses convolutional neural network (CNN) architectures to recognize characters from license plate images; we call it a modified LeNet-5 architecture. Methods: We used four different CNN architectures to recognize license plate characters: AlexNet, LeNet-5, modified LeNet-5, and ResNet-50. We evaluated the performance based on their accuracy and computation time. We compared the deep learning methods with Freeman chain code (FCC) extraction with a support vector machine (SVM). We also evaluated the Otsu and threshold binarization performances when applied in the FCC extraction method. Results: The ResNet-50 and modified LeNet-5 produce the best accuracy during training, at 0.97. The precision and recall scores of the ResNet-50 are both 0.97, while the modified LeNet-5's values are 0.98 and 0.96, respectively. The modified LeNet-5 shows a slightly higher precision score but a lower recall score. The modified LeNet-5 shows a slightly lower accuracy during testing than ResNet-50. Meanwhile, the FCC extraction with Otsu binarization is better than with threshold binarization. Overall, the FCC extraction technique performs less effectively than CNN. The modified LeNet-5 computes the fastest at 7 min 57 s, while ResNet-50 needs 42 min 11 s. Conclusion: We discovered that CNN is better than the FCC extraction method with SVM. Both ResNet-50 and the modified LeNet-5 perform best during training, with F-measure scoring 0.97. However, ResNet-50 outperforms the modified LeNet-5 during testing, with F-measures of 0.97 and 1.00, respectively. In addition, the FCC extraction using Otsu binarization is better than threshold binarization: Otsu binarization reached 0.91, higher than the static threshold binarization at 127. In addition, Otsu binarization produces a dynamic threshold value depending on the images' light intensity. Keywords: Convolutional Neural Network, Freeman Chain Code, License Plate Character Recognition, Support Vector Machine
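Since the abstract does not spell out the exact changes to LeNet-5, the sketch below is only a generic LeNet-5-style classifier adapted to license-plate characters; ReLU activations, max pooling, the 32x32 grayscale input, and a 36-class output for digits and letters are assumptions meant to show the kind of baseline being modified.

```python
# Sketch of a LeNet-5-style character classifier (illustrative, not the paper's model).
import tensorflow as tf

num_classes = 36  # digits 0-9 plus letters A-Z (assumed)

model = tf.keras.Sequential([
    tf.keras.Input(shape=(32, 32, 1)),
    tf.keras.layers.Conv2D(6, 5, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(16, 5, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(120, activation="relu"),
    tf.keras.layers.Dense(84, activation="relu"),
    tf.keras.layers.Dense(num_classes, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```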
APA, Harvard, Vancouver, ISO, and other styles
7

Misko, Joshua, Shrikant S. Jadhav, and Youngsoo Kim. "Extensible Embedded Processor for Convolutional Neural Networks." Scientific Programming 2021 (April 21, 2021): 1–12. http://dx.doi.org/10.1155/2021/6630552.

Full text
Abstract:
Convolutional neural networks (CNNs) require significant computing power during inference. Smartphones, for example, may not run a facial recognition system or search algorithm smoothly due to the lack of resources and supporting hardware. Methods for reducing memory size and increasing execution speed have been explored, but choosing effective techniques for an application requires extensive knowledge of the network architecture. This paper proposes a general approach to preparing a compressed deep neural network processor for inference with minimal additions to existing microprocessor hardware. To show the benefits of the proposed approach, an example CNN for synthetic aperture radar target classification is modified and complementary custom processor instructions are designed. The modified CNN is examined to show the effects of the modifications, and the custom processor instructions are profiled to illustrate the potential performance increase from the new extended instructions.
APA, Harvard, Vancouver, ISO, and other styles
8

Prochukhan, Dmytro. "IMPLEMENTATION OF TECHNOLOGY FOR IMPROVING THE QUALITY OF SEGMENTATION OF MEDICAL IMAGES BY SOFTWARE ADJUSTMENT OF CONVOLUTIONAL NEURAL NETWORK HYPERPARAMETERS." Information and Telecommunication Sciences, no. 1 (June 24, 2023): 59–63. http://dx.doi.org/10.20535/2411-2976.12023.59-63.

Full text
Abstract:
Background. Researchers have built effective convolutional neural networks, but the optimal setting of the hyperparameters of these networks remains insufficiently studied. Hyperparameters affect model selection; they have the greatest impact on the number and size of hidden layers. Effective selection of hyperparameters improves the speed and quality of the learning algorithm. It is also necessary to take into account that the hyperparameters of a convolutional neural network are interconnected. That is why it is very difficult to manually select effective hyperparameter values that ensure the maximum efficiency of the convolutional neural network. It is necessary to automate the process of selecting hyperparameters and to implement a software mechanism for setting the hyperparameters of a convolutional neural network. The author has successfully implemented this task.
Objective. The purpose of the paper is to develop a technology for selecting the hyperparameters of a convolutional neural network to improve the quality of segmentation of medical images.
Methods. Selection of a convolutional neural network model that enables effective segmentation of medical images, modification of the Keras Tuner library by developing an additional function, use of optimization methods for convolutional neural networks and their hyperparameters, compilation of the constructed model and its settings, and selection of the model with the best hyperparameters.
Results. A comparative analysis of the U-Net and FCN-32 convolutional neural networks was carried out. U-Net was selected as the network to tune due to its higher quality and accuracy of image segmentation. The Keras Tuner library was modified by developing an additional function for tuning hyperparameters. To optimize the hyperparameters, the use of the Hyperband method is justified. The optimal number of epochs was selected as 20. In the process of tuning hyperparameters, the best model, with an accuracy of 0.9665, was selected. The hyperparameter start_neurons is set to 80, the hyperparameter net_depth is 5, the activation function is Mish, the hyperparameter dropout is set to False, and the hyperparameter bn_after_act is set to True.
Conclusions. The U-Net convolutional neural network configured with these parameters has significant potential for solving medical image segmentation problems. A prospect for further research is the use of the modified network for the diagnosis of symptoms of the coronavirus disease COVID-19, pneumonia, cancer, and other complex medical conditions.
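A plain, unmodified Keras Tuner Hyperband search over hyperparameters with these names can be sketched as follows. The tiny encoder-decoder stands in for the authors' U-Net, the search ranges are assumptions, built-in activations are used in place of Mish, and the authors' additional tuner function is not reproduced.

```python
# Hedged sketch of Hyperband hyperparameter search with keras_tuner, using the
# hyperparameter names mentioned in the abstract (start_neurons, net_depth,
# dropout, bn_after_act). The model below is a stand-in, not the authors' U-Net.
import tensorflow as tf
import keras_tuner as kt

def build_model(hp):
    start_neurons = hp.Int("start_neurons", 16, 80, step=16)
    net_depth = hp.Int("net_depth", 3, 5)
    use_dropout = hp.Boolean("dropout")
    bn_after_act = hp.Boolean("bn_after_act")
    activation = hp.Choice("activation", ["relu", "elu"])

    inputs = tf.keras.Input(shape=(128, 128, 1))
    x = inputs
    for d in range(net_depth):                      # encoder
        x = tf.keras.layers.Conv2D(start_neurons * 2 ** d, 3, padding="same")(x)
        x = tf.keras.layers.Activation(activation)(x)
        if bn_after_act:
            x = tf.keras.layers.BatchNormalization()(x)
        if use_dropout:
            x = tf.keras.layers.Dropout(0.25)(x)
        x = tf.keras.layers.MaxPooling2D()(x)
    for d in reversed(range(net_depth)):            # decoder (no skip connections here)
        x = tf.keras.layers.Conv2DTranspose(start_neurons * 2 ** d, 3, strides=2,
                                            padding="same", activation=activation)(x)
    outputs = tf.keras.layers.Conv2D(1, 1, activation="sigmoid")(x)
    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model

tuner = kt.Hyperband(build_model, objective="val_accuracy",
                     max_epochs=20, directory="tuning", project_name="segmentation_hp")
# tuner.search(x_train, y_train, validation_data=(x_val, y_val))
# best_model = tuner.get_best_models(1)[0]
```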
APA, Harvard, Vancouver, ISO, and other styles
9

Luo, Guoliang, Bingqin He, Yanbo Xiong, et al. "An Optimized Convolutional Neural Network for the 3D Point-Cloud Compression." Sensors 23, no. 4 (2023): 2250. http://dx.doi.org/10.3390/s23042250.

Full text
Abstract:
Due to the tremendous volume occupied by 3D point-cloud models, knowing how to balance a high compression ratio, a low distortion rate, and computing cost is a significant issue in the field of virtual reality (VR). Convolutional neural networks have been used in numerous point-cloud compression research approaches during the past few years in an effort to advance the state of the art. In this work, we evaluated the effects of different network parameters, including neural network depth, stride, and activation function, on point-cloud compression, resulting in an optimized convolutional neural network for compression. We first analyzed earlier research on point-cloud compression based on convolutional neural networks before designing our own convolutional neural network. Then, we modified our model parameters using the experimental data to further enhance the effect of point-cloud compression. Based on the experimental results, we found that the neural network with the 4-layer, 2-stride parameter configuration using the Sigmoid activation function outperforms the default configuration by 208% in terms of the compression-distortion rate. The experimental results show that our findings are effective and universal and make a great contribution to research on point-cloud compression using convolutional neural networks.
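A rough picture of the reported best configuration (four convolution layers, stride 2, sigmoid activations) is sketched below for a voxelized point-cloud block; the voxel grid size and channel counts are assumptions, not the paper's network.

```python
# Sketch only: a 4-layer, stride-2, sigmoid-activated 3D convolutional encoder
# applied to an occupancy grid of a point-cloud block (grid size assumed).
import tensorflow as tf

encoder = tf.keras.Sequential([
    tf.keras.Input(shape=(64, 64, 64, 1)),   # voxelized point-cloud block
    tf.keras.layers.Conv3D(16, 3, strides=2, padding="same", activation="sigmoid"),
    tf.keras.layers.Conv3D(32, 3, strides=2, padding="same", activation="sigmoid"),
    tf.keras.layers.Conv3D(64, 3, strides=2, padding="same", activation="sigmoid"),
    tf.keras.layers.Conv3D(8, 3, strides=2, padding="same", activation="sigmoid"),
])
encoder.summary()   # 64^3 occupancy grid compressed to a 4x4x4x8 latent volume
```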
APA, Harvard, Vancouver, ISO, and other styles

Dissertations / Theses on the topic "Modified convolutional neural network"

1

Ayoub, Issa. "Multimodal Affective Computing Using Temporal Convolutional Neural Network and Deep Convolutional Neural Networks." Thesis, Université d'Ottawa / University of Ottawa, 2019. http://hdl.handle.net/10393/39337.

Full text
Abstract:
Affective computing has gained significant attention from researchers in the last decade due to the wide variety of applications that can benefit from this technology. Often, researchers describe affect using emotional dimensions such as arousal and valence. Valence refers to the spectrum of negative to positive emotions while arousal determines the level of excitement. Describing emotions through continuous dimensions (e.g. valence and arousal) allows us to encode subtle and complex affects as opposed to discrete emotions, such as the basic six emotions: happy, anger, fear, disgust, sad and neutral. Recognizing spontaneous and subtle emotions remains a challenging problem for computers. In our work, we employ two modalities of information: video and audio. Hence, we extract visual and audio features using deep neural network models. Given that emotions are time-dependent, we apply the Temporal Convolutional Neural Network (TCN) to model the variations in emotions. Additionally, we investigate an alternative model that combines a Convolutional Neural Network (CNN) and a Recurrent Neural Network (RNN). Given our inability to fit the latter deep model into the main memory, we divide the RNN into smaller segments and propose a scheme to back-propagate gradients across all segments. We configure the hyperparameters of all models using Gaussian processes to obtain a fair comparison between the proposed models. Our results show that TCN outperforms RNN for the recognition of the arousal and valence emotional dimensions. Therefore, we propose the adoption of TCN for emotion detection problems as a baseline method for future work. Our experimental results show that TCN outperforms all RNN based models yielding a concordance correlation coefficient of 0.7895 (vs. 0.7544) on valence and 0.8207 (vs. 0.7357) on arousal on the validation dataset of SEWA dataset for emotion prediction.
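The TCN building block used for this kind of temporal modeling can be sketched as a dilated, causal 1D convolution with a residual connection. The snippet below is a generic illustration; channel counts, kernel size, and depth are assumptions, not the thesis's architecture.

```python
# Minimal sketch of a temporal convolutional network (TCN) block: dilated,
# causal 1D convolutions with a residual connection.
import torch
import torch.nn as nn

class TCNBlock(nn.Module):
    def __init__(self, channels, kernel_size=3, dilation=1):
        super().__init__()
        self.pad = (kernel_size - 1) * dilation            # left padding keeps the conv causal
        self.conv1 = nn.Conv1d(channels, channels, kernel_size, dilation=dilation)
        self.conv2 = nn.Conv1d(channels, channels, kernel_size, dilation=dilation)
        self.relu = nn.ReLU()

    def forward(self, x):                                   # x: (batch, channels, time)
        y = self.relu(self.conv1(nn.functional.pad(x, (self.pad, 0))))
        y = self.relu(self.conv2(nn.functional.pad(y, (self.pad, 0))))
        return self.relu(x + y)                             # residual connection

# Stack blocks with growing dilation so the receptive field covers long emotion dynamics.
tcn = nn.Sequential(*[TCNBlock(64, dilation=2 ** i) for i in range(4)])
features = torch.randn(8, 64, 300)                          # e.g. 300 time steps of fused A/V features
print(tcn(features).shape)                                   # torch.Size([8, 64, 300])
```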
APA, Harvard, Vancouver, ISO, and other styles
2

Long, Cameron E. "Quaternion Temporal Convolutional Neural Networks." University of Dayton / OhioLINK, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=dayton1565303216180597.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Bylund, Andreas, Anton Erikssen, and Drazen Mazalica. "Hyperparameters impact in a convolutional neural network." Thesis, Högskolan i Skövde, Institutionen för informationsteknologi, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-18670.

Full text
Abstract:
Machine learning and image recognition are a large and growing subject in today's society. The aim of this thesis is therefore to compare convolutional neural networks with different hyperparameter settings and see how the hyperparameters affect the networks' test accuracy in identifying images of traffic signs. The reason traffic signs were chosen as the objects for evaluating hyperparameters is the authors' previous experience in the domain; the object itself used for image recognition does not matter, and any image dataset can be used to study the effect of the hyperparameters. Grid search is used to create a large number of models with different widths and depths, learning rates, and momentum values. Convolution layers, activation functions, and batch size are all tested separately. These experiments make it possible to evaluate how the hyperparameters affect the networks' performance in recognizing images of traffic signs. The models are created using the Keras API and then trained and tested on the Traffic Signs Preprocessed dataset. The results show that hyperparameters affect test accuracy, some more than others. Configuring the learning rate and momentum can in some cases be disastrous if they are set too high or too low. The activation function also proves to be a crucial hyperparameter that in some cases produces very poor results.
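A grid search of this kind is straightforward to express in code. The sketch below uses scikit-learn's ParameterGrid with a deliberately small Keras CNN and random stand-in data; the thesis's actual dataset, model, and hyperparameter ranges are not reproduced, and the 43-class, 32x32 input is an assumption.

```python
# Sketch of the grid-search idea: train the same small CNN under every
# combination of a few hyperparameters and record test accuracy.
import numpy as np
import tensorflow as tf
from sklearn.model_selection import ParameterGrid

# Stand-in data; replace with the preprocessed traffic-sign dataset.
x_train = np.random.rand(256, 32, 32, 3).astype("float32")
y_train = np.random.randint(0, 43, 256)
x_test = np.random.rand(64, 32, 32, 3).astype("float32")
y_test = np.random.randint(0, 43, 64)

grid = ParameterGrid({
    "learning_rate": [1e-4, 1e-3, 1e-2],
    "momentum": [0.0, 0.5, 0.9],
    "batch_size": [32, 64],
    "activation": ["relu", "tanh"],
})

def build(params, num_classes=43, input_shape=(32, 32, 3)):
    model = tf.keras.Sequential([
        tf.keras.Input(shape=input_shape),
        tf.keras.layers.Conv2D(32, 3, activation=params["activation"]),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(128, activation=params["activation"]),
        tf.keras.layers.Dense(num_classes, activation="softmax"),
    ])
    opt = tf.keras.optimizers.SGD(learning_rate=params["learning_rate"],
                                  momentum=params["momentum"])
    model.compile(optimizer=opt, loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

results = []
for params in grid:
    model = build(params)
    model.fit(x_train, y_train, epochs=5, batch_size=params["batch_size"], verbose=0)
    _, acc = model.evaluate(x_test, y_test, verbose=0)
    results.append((acc, params))

print(max(results, key=lambda r: r[0]))   # best accuracy and its hyperparameters
```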
APA, Harvard, Vancouver, ISO, and other styles
4

Reiling, Anthony J. "Convolutional Neural Network Optimization Using Genetic Algorithms." University of Dayton / OhioLINK, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=dayton1512662981172387.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

DiMascio, Michelle Augustine. "Convolutional Neural Network Optimization for Homography Estimation." University of Dayton / OhioLINK, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=dayton1544214038882564.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Embretsén, Niklas. "Representing Voices Using Convolutional Neural Network Embeddings." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-261415.

Full text
Abstract:
In today's society, services centered around voices are gaining popularity. Being able to provide users with voices they like, to obtain and sustain their attention, is important for enhancing the overall experience of such a service. Finding an efficient way of representing voices such that similarity comparisons can be performed is therefore of great use. In the field of Natural Language Processing, great progress has been made using embeddings from Deep Learning models to represent words in an unsupervised fashion; these representations managed to capture the semantics of the words. This thesis sets out to explore whether such embeddings can be found for audio data as well, more specifically voices of audiobook narrators, that capture similarities between different voices. For this, two different Convolutional Neural Networks are developed and evaluated, trained on spectrogram representations of the voices. One performs regular classification while the other uses pairwise relationships and a Kullback–Leibler-divergence-based loss function, in an attempt to minimize and maximize the difference in output between similar and dissimilar pairs of samples. From these models, the embeddings used to represent each sample are extracted from the different layers of the fully connected part of the network during the evaluation. Both an objective and a subjective evaluation are performed. During the objective evaluation of the models, it is first investigated whether the found embeddings are distinct for the different narrators, as well as whether the embeddings encode information about gender. The regular classification model is then further evaluated through a user test, as it achieved an order of magnitude better results during the objective evaluation. The user test sets out to evaluate whether the found embeddings capture information based on perceived similarity. It is concluded that the proposed approach has the potential to be used for representing voices in a way that encodes similarity, although more extensive testing, research, and evaluation has to be performed to know for sure. For future work, it is proposed to perform more sophisticated pre-processing of the data and also to collect and include data about relationships between voices during the training of the models.
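The embedding-extraction step described here can be pictured as reading activations from a fully connected layer of a trained classifier. The sketch below is a generic stand-in, not the thesis's networks; the layer names, sizes, and number of narrators are illustrative.

```python
# Sketch: extract voice embeddings from an intermediate fully connected layer
# of a speaker-classification CNN, then compare two voices by cosine similarity.
import numpy as np
import tensorflow as tf

# Stand-in classifier over spectrogram patches; in the thesis this would be the
# trained narrator-classification network.
classifier = tf.keras.Sequential([
    tf.keras.Input(shape=(128, 128, 1)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(64, activation="relu", name="embedding"),
    tf.keras.layers.Dense(10, activation="softmax"),     # 10 narrators (assumed)
])

embedder = tf.keras.Model(classifier.input,
                          classifier.get_layer("embedding").output)

spectrograms = np.random.rand(4, 128, 128, 1).astype("float32")
embeddings = embedder.predict(spectrograms, verbose=0)    # (4, 64) voice vectors

# Cosine similarity between two voices' embeddings:
a, b = embeddings[0], embeddings[1]
print(float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b))))
```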
APA, Harvard, Vancouver, ISO, and other styles
7

Tawfique, Ziring. "Tool-Mediated Texture Recognition Using Convolutional Neural Network." Thesis, Uppsala universitet, Institutionen för informationsteknologi, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-303774.

Full text
Abstract:
Vibration patterns can be captured by an accelerometer sensor attached to a hand-held device when it is scratched across various types of surface textures. These acceleration signals carry information relevant for surface texture classification. Typically, methods rely on hand-crafted feature engineering, but with the use of a Convolutional Neural Network, manual feature engineering can be eliminated. A method using modern machine learning techniques such as Dropout is introduced by training a Convolutional Neural Network to distinguish between 69 and 100 different surface textures. EHapNet, the proposed Convolutional Neural Network model, managed to achieve state-of-the-art results with the datasets used.
APA, Harvard, Vancouver, ISO, and other styles
8

Winicki, Elliott. "ELECTRICITY PRICE FORECASTING USING A CONVOLUTIONAL NEURAL NETWORK." DigitalCommons@CalPoly, 2020. https://digitalcommons.calpoly.edu/theses/2126.

Full text
Abstract:
Many methods have been used to forecast real-time electricity prices in various regions around the world. The problem is difficult because of market volatility affected by a wide range of exogenous variables from weather to natural gas prices, and accurate price forecasting could help both suppliers and consumers plan effective business strategies. Statistical analysis with autoregressive moving average methods and computational intelligence approaches using artificial neural networks dominate the landscape. With the rise in popularity of convolutional neural networks to handle problems with large numbers of inputs, and convolutional neural networks conspicuously lacking from current literature in this field, convolutional neural networks are used for this time series forecasting problem and show some promising results. This document fulfills both MSEE Master's Thesis and BSCPE Senior Project requirements.
APA, Harvard, Vancouver, ISO, and other styles
9

Cui, Chen. "Convolutional Polynomial Neural Network for Improved Face Recognition." University of Dayton / OhioLINK, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=dayton1497628776210369.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Li, Chao. "WELD PENETRATION IDENTIFICATION BASED ON CONVOLUTIONAL NEURAL NETWORK." UKnowledge, 2019. https://uknowledge.uky.edu/ece_etds/133.

Full text
Abstract:
Weld joint penetration determination is a key factor in welding process control. Not only does it directly affect the weld joint's mechanical properties, such as fatigue, it also requires considerable human intelligence, whether in the form of complex modeling or rich welding experience. Therefore, weld penetration status identification has become an obstacle for intelligent welding systems. In this dissertation, an innovative method is proposed to detect the weld joint penetration status using machine-learning algorithms. A GTAW welding system is first built. A dot-structured laser pattern is projected onto the weld pool surface during the welding process, and the reflected laser pattern, which contains the information about the penetration status, is captured. An experienced welder is able to determine the weld penetration status just from the reflected laser pattern. However, it is difficult to characterize the images to extract the key information used to determine penetration status. To overcome the challenges in finding the right features and accurately processing images to extract them using conventional machine vision algorithms, we propose using a convolutional neural network (CNN) to automatically extract key features and determine penetration status. Data-label pairs are needed to train a CNN, so an image acquisition system is designed to collect the reflected laser pattern and an image of the work-piece backside. Data augmentation is performed to enlarge the training data, resulting in 270,000 training samples, 45,000 validation samples, and 45,000 test samples. A six-layer convolutional neural network (CNN) has been designed and trained using a revised mini-batch gradient descent optimizer. The final test accuracy is 90.7%, and a voting mechanism based on three consecutive images further improves the prediction accuracy.
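The voting step mentioned at the end can be illustrated in a few lines of code. The sketch below takes the majority class over a sliding window of three per-frame predictions; the class labels are hypothetical and the CNN itself is omitted.

```python
# Sketch of majority voting over three consecutive per-frame CNN predictions.
from collections import Counter

def vote_over_frames(frame_predictions, window=3):
    """frame_predictions: list of per-frame class labels produced by the CNN."""
    voted = []
    for i in range(len(frame_predictions)):
        start = max(0, i - window + 1)
        recent = frame_predictions[start:i + 1]
        voted.append(Counter(recent).most_common(1)[0][0])   # majority of the window
    return voted

# Hypothetical labels: 0 = under-penetration, 1 = desirable, 2 = over-penetration.
print(vote_over_frames([1, 1, 2, 1, 1, 0, 0, 0]))   # the isolated '2' is smoothed out
```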
APA, Harvard, Vancouver, ISO, and other styles

Books on the topic "Modified convolutional neural network"

1

Ally, Afshan. A Hopfield neural network decoder for convolutional codes. National Library of Canada = Bibliothèque nationale du Canada, 1991.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Shanthini, A., Gunasekaran Manogaran, and G. Vadivu. Deep Convolutional Neural Network for The Prognosis of Diabetic Retinopathy. Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-19-3877-1.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Doan, Tai. Convolutional Neural Network in Classifying Scanned Documents. GRIN Verlag GmbH, 2017.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
4

Journey from Artificial to Convolutional Neural Network. Central West Publishing, 2023.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
5

Deep Convolutional Neural Network for the Prognosis of Diabetic Retinopathy. Springer, 2023.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
6

Manogaran, Gunasekaran, G. Vadivu, and A. Shanthini. Deep Convolutional Neural Network for the Prognosis of Diabetic Retinopathy. Springer, 2022.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
7

Kashyap, Dr Nikita, Dr Dharmendra Kumar Singh, Dr Girish Kumar Singh, and Dr Arun Kumar Kashyap, eds. Identification of Diabetic Retinopathy Stages Using Modified DWT and Artificial Neural Network. AkiNik Publications, 2021. http://dx.doi.org/10.22271/ed.book.1314.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

National Aeronautics and Space Administration (NASA) Staff. Reconfigurable Control with Neural Network Augmentation for a Modified F-15 Aircraft. Independently Published, 2018.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
9

Kypraios, Ioannis. Performance Analysis of the Modified-Hybrid Optical Neural Network Object Recognition System Within Cluttered Scenes. INTECH Open Access Publisher, 2012.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
10

Sangeetha, V., and S. Kevin Andrews. Introduction to Artificial Intelligence and Neural Networks. Magestic Technology Solutions (P) Ltd, Chennai, Tamil Nadu, India, 2023. http://dx.doi.org/10.47716/mts/978-93-92090-24-0.

Full text
Abstract:
Artificial Intelligence (AI) has emerged as a defining force in the current era, shaping the contours of technology and deeply permeating our everyday lives. From autonomous vehicles to predictive analytics and personalized recommendations, AI continues to revolutionize various facets of human existence, progressively becoming the invisible hand guiding our decisions. Simultaneously, its growing influence necessitates the need for a nuanced understanding of AI, thereby providing the impetus for this book, “Introduction to Artificial Intelligence and Neural Networks.” This book aims to equip its readers with a comprehensive understanding of AI and its subsets, machine learning and deep learning, with a particular emphasis on neural networks. It is designed for novices venturing into the field, as well as experienced learners who desire to solidify their knowledge base or delve deeper into advanced topics. In Chapter 1, we provide a thorough introduction to the world of AI, exploring its definition, historical trajectory, and categories. We delve into the applications of AI, and underscore the ethical implications associated with its proliferation. Chapter 2 introduces machine learning, elucidating its types and basic algorithms. We examine the practical applications of machine learning and delve into challenges such as overfitting, underfitting, and model validation. Deep learning and neural networks, an integral part of AI, form the crux of Chapter 3. We provide a lucid introduction to deep learning, describe the structure of neural networks, and explore forward and backward propagation. This chapter also delves into the specifics of Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs). In Chapter 4, we outline the steps to train neural networks, including data preprocessing, cost functions, gradient descent, and various optimizers. We also delve into regularization techniques and methods for evaluating a neural network model. Chapter 5 focuses on specialized topics in neural networks such as autoencoders, Generative Adversarial Networks (GANs), Long Short-Term Memory Networks (LSTMs), and Neural Architecture Search (NAS). In Chapter 6, we illustrate the practical applications of neural networks, examining their role in computer vision, natural language processing, predictive analytics, autonomous vehicles, and the healthcare industry. Chapter 7 gazes into the future of AI and neural networks. It discusses the current challenges in these fields, emerging trends, and future ethical considerations. It also examines the potential impacts of AI and neural networks on society. Finally, Chapter 8 concludes the book with a recap of key learnings, implications for readers, and resources for further study. This book aims not only to provide a robust theoretical foundation but also to kindle a sense of curiosity and excitement about the endless possibilities AI and neural networks offer. The journ
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Modified convolutional neural network"

1

Vinotheni, C., S. Lakshmana Pandian, and G. Lakshmi. "Modified Convolutional Neural Network of Tamil Character Recognition." In Lecture Notes in Networks and Systems. Springer Singapore, 2020. http://dx.doi.org/10.1007/978-981-15-4218-3_46.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Kim, Ho-Joon, Joseph S. Lee, and Hyun-Seung Yang. "Human Action Recognition Using a Modified Convolutional Neural Network." In Advances in Neural Networks – ISNN 2007. Springer Berlin Heidelberg, 2007. http://dx.doi.org/10.1007/978-3-540-72393-6_85.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Sharma, Aditi, and D. Franklin Vinod. "Classification of Bacterial Skin Disease Images Using Modified Convolutional Neural Network." In Lecture Notes in Networks and Systems. Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-99-0769-4_59.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Parida, Prasanta Kumar, Lingraj Dora, Rutuparna Panda, and Sanjay Agrawal. "Multi-grade Brain Tumor Classification Using a Modified Convolutional Neural Network." In Intelligent Systems Design and Applications. Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-64836-6_45.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Brinthakumari, S., and P. M. Sivaraja. "mCNN: An Approach for Plant Disease Detection Using Modified Convolutional Neural Network." In Lecture Notes in Electrical Engineering. Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-19-8477-8_17.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Gupta, Nidhi, Akhilesh Latoria, and Akash Goel. "Blood Cancer Classification with Gene Expression Using Modified Convolutional Neural Network Approach." In Artificial Intelligence in Cyber-Physical Systems. CRC Press, 2023. http://dx.doi.org/10.1201/9781003248750-11.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Bhattacharya, Suchimita, Manas Ghosh, and Aniruddha Dey. "Face Detection in Unconstrained Environments Using Modified Multitask Cascade Convolutional Neural Network." In Proceedings of International Conference on Industrial Instrumentation and Control. Springer Singapore, 2022. http://dx.doi.org/10.1007/978-981-16-7011-4_29.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Srinivasulu, Asadi, Umesh Neelakantan, Tarkeswar Barua, Srinivas Nowduri, and MM Subramanyam. "Early Prediction of COVID-19 Using Modified Convolutional Neural Networks." In Data Analytics, Computational Statistics, and Operations Research for Engineers. CRC Press, 2022. http://dx.doi.org/10.1201/9781003152392-2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Srinivasulu, Asadi, Tarkeshwar Barua, Umesh Neelakantan, and Srinivas Nowduri. "Early Prediction of COVID-19 Using Modified Convolutional Neural Networks." In Advanced Technologies and Societal Change. Springer Singapore, 2022. http://dx.doi.org/10.1007/978-981-16-5090-1_6.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Li, Hui, Wenxin Liang, and Zihan Liao. "Detection of Spammers Using Modified Diffusion Convolution Neural Network." In Lecture Notes in Computer Science. Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-60470-7_8.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Modified convolutional neural network"

1

Singh, Brahmjit, Poonam Jindal, Pankaj Verma, Vishal Sharma, and Chandra Prakash. "Automatic Modulation Recognition Using Modified Convolutional Neural Network." In 2025 3rd International Conference on Device Intelligence, Computing and Communication Technologies (DICCT). IEEE, 2025. https://doi.org/10.1109/dicct64131.2025.10986502.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Beeharry, Yogesh, and Didier Gael Daryl Emilien. "A Modified Convolutional Neural Network Model for Automatic Modulation Classification." In 2025 Emerging Technologies for Intelligent Systems (ETIS). IEEE, 2025. https://doi.org/10.1109/etis64005.2025.10961873.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Nayak, Debasish Swapnesh Kumar, Arpita Priyadarshini, Pabani Mahanta, Soumyarashmi Panigrahi, Sushanta Meher, and Satyananda Swain. "Modified Deep Neural Network Approach to Identify Heart Disease using IoMT: Artificial Neural Networks or Convolutional Neural Networks!" In 2024 International Conference on Intelligent Computing and Sustainable Innovations in Technology (IC-SIT). IEEE, 2024. https://doi.org/10.1109/ic-sit63503.2024.10862075.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Verma, Sonia, Pooja Singhal, Ritu Gupta, Abhilasha Singh, and Arun Kumar. "Facial Keypoint Detection using a Modified Convolutional Neural Network with RESNET50." In 2024 2nd International Conference on Advancements and Key Challenges in Green Energy and Computing (AKGEC). IEEE, 2024. https://doi.org/10.1109/akgec62572.2024.10868470.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Krishnamaneni, Ramesh, Muralidhar Kurni, Souptik Sen, and Ashwin Murthy. "Modified Convolutional Neural Network with Multiple Features for Multimodal Sarcasm Detection." In 2024 2nd International Conference on Recent Advances in Information Technology for Sustainable Development (ICRAIS). IEEE, 2024. https://doi.org/10.1109/icrais62903.2024.10811714.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Sahoo, Parthasarathi, Aryadutta Khandual, Soumya Rath, Lipsarani Parida, Debendra Muduli, and Santosh Kumar Sharma. "Enhanced Brain Tumor Classification Using a Modified Xception Convolutional Neural Network." In 2024 2nd International Conference on Signal Processing, Communication, Power and Embedded System (SCOPES). IEEE, 2024. https://doi.org/10.1109/scopes64467.2024.10990472.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Lad, Saket, Bhavisha Chafekar, and Pramod Bide. "Lung Cancer Classification Using Deep Learning: A Comprehensive Approach with Modified Convolutional Neural Networks." In 2024 International Conference on Computational Intelligence and Network Systems (CINS). IEEE, 2024. https://doi.org/10.1109/cins63881.2024.10864431.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Kumar, Ajay, and Abhimanyu Singh Panwar. "Human Mental State Detection Using Modified Convolutional Neural Network with Leaky Rectified Linear Unit." In 2024 IEEE Region 10 Symposium (TENSYMP). IEEE, 2024. http://dx.doi.org/10.1109/tensymp61132.2024.10752185.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Gu, Guangjuan, Ke Li, and Yalong Jiang. "A Modified Dwarf Mongoose Optimization Based Deep Convolutional Neural Network for Building Structural Damage Detection." In 2024 International Conference on Data Science and Network Security (ICDSNS). IEEE, 2024. http://dx.doi.org/10.1109/icdsns62112.2024.10691168.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Zhang, Zhenyu. "Big Data Based Network Security Awareness Model using Modified Lion Swarm Optimization with Fully Convolutional Neural Network." In 2024 Second International Conference on Networks, Multimedia and Information Technology (NMITCON). IEEE, 2024. http://dx.doi.org/10.1109/nmitcon62075.2024.10699123.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Modified convolutional neural network"

1

Guan, Hui, Xipeng Shen, Seung-Hwan Lim, and Robert M. Patton. Composability-Centered Convolutional Neural Network Pruning. Office of Scientific and Technical Information (OSTI), 2018. http://dx.doi.org/10.2172/1427608.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Tilton, Miranda. CoNNOR: Convolutional Neural Network for Outsole Recognition. Iowa State University, 2019. http://dx.doi.org/10.31274/cc-20240624-416.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Shao, Lu. Automatic Seizure Detection based on a Convolutional Neural Network-Recurrent Neural Network Model. Iowa State University, 2022. http://dx.doi.org/10.31274/cc-20240624-269.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Tarasenko, Andrii O., Yuriy V. Yakimov, and Vladimir N. Soloviev. Convolutional neural networks for image classification. [б. в.], 2020. http://dx.doi.org/10.31812/123456789/3682.

Full text
Abstract:
This paper shows the theoretical basis for the creation of convolutional neural networks for image classification and their application in practice. To achieve this goal, the main types of neural networks were considered, starting from the structure of a simple neuron up to the convolutional multilayer network necessary for solving this problem. The paper presents the stages of structuring the training data and the training cycle of the network, as well as the calculation of recognition errors at the training and verification stages. At the end of the work, the results of network training, the calculated recognition error, and the training accuracy are presented.
APA, Harvard, Vancouver, ISO, and other styles
5

Rocco, Dominick Rosario. Muon Neutrino Disappearance in NOvA with a Deep Convolutional Neural Network Classifier. Office of Scientific and Technical Information (OSTI), 2016. http://dx.doi.org/10.2172/1294514.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Zhang, Shu. Overcoming the reality gap: Studying synthetic image modalities for convolutional neural network training. Iowa State University, 2019. http://dx.doi.org/10.31274/cc-20240624-1095.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Cheniour, Amani, Amir Ziabari, Elena Tajuelo Rodriguez, Mohammed Alnaggar, Yann Le Pape, and T. M. Rosseel. Reconstruction of 3D Concrete Microstructures Combining High-Resolution Characterization and Convolutional Neural Network for Image Segmentation. Office of Scientific and Technical Information (OSTI), 2024. http://dx.doi.org/10.2172/2311320.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Debroux, Patrick. The Use of Adjacent Video Frames to Increase Convolutional Neural Network Classification Robustness in Stressed Environments. DEVCOM Analysis Center, 2023. http://dx.doi.org/10.21236/ad1205367.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Ferdaus, Md Meftahul, Mahdi Abdelguerfi, Elias Ioup, et al. KANICE: Kolmogorov-Arnold networks with interactive convolutional elements. Engineer Research and Development Center (U.S.), 2025. https://doi.org/10.21079/11681/49791.

Full text
Abstract:
We introduce KANICE, a novel neural architecture that combines Convolutional Neural Networks (CNNs) with Kolmogorov-Arnold Network (KAN) principles. KANICE integrates Interactive Convolutional Blocks (ICBs) and KAN linear layers into a CNN framework. This leverages KANs’ universal approximation capabilities and ICBs’ adaptive feature learning. KANICE captures complex, non-linear data relationships while enabling dynamic, context-dependent feature extraction based on the Kolmogorov-Arnold representation theorem. We evaluated KANICE on four datasets: MNIST, Fashion-MNIST, EMNIST, and SVHN, comparing it against standard CNNs, CNN-KAN hybrids, and ICB variants. KANICE consistently outperformed baseline models, achieving 99.35% accuracy on MNIST and 90.05% on the SVHN dataset. Furthermore, we introduce KANICE-mini, a compact variant designed for efficiency. A comprehensive ablation study demonstrates that KANICE-mini achieves comparable performance to KANICE with significantly fewer parameters. KANICE-mini reached 90.00% accuracy on SVHN with 2,337,828 parameters, compared to KAN-ICE’s 25,432,000. This study highlights the potential of KAN-based architectures in balancing performance and computational efficiency in image classification tasks. Our work contributes to research in adaptive neural networks, integrates mathematical theorems into deep learning architectures, and explores the trade-offs between model complexity and performance, advancing computer vision and pattern recognition. The source code for this paper is publicly accessible through our GitHub repository (https://github.com/mferdaus/kanice).
APA, Harvard, Vancouver, ISO, and other styles
10

Eka Saputro, Widianto. PENGENALAN ALFABET BAHASA ISYARAT TANGAN PADA CITRA DIGITAL MENGGUNAKAN PENDEKATAN CONVEX HULL DAN CONVOLUTIONAL NEURAL NETWORK (CNN). ResearchHub Technologies, Inc., 2025. https://doi.org/10.55277/researchhub.rwpbjj07.1.

Full text
APA, Harvard, Vancouver, ISO, and other styles