Academic literature on the topic 'Pre-trained convolutional neural networks'


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Pre-trained convolutional neural networks.'


Journal articles on the topic "Pre-trained convolutional neural networks"

1

Jadhav, Sachin B. "Convolutional Neural Networks for Leaf Image-Based Plant Disease Classification." IAES International Journal of Artificial Intelligence (IJ-AI) 8, no. 4 (2019): 328–41. http://dx.doi.org/10.11591/ijai.v8.i4.pp328-341.

Full text
Abstract:
Plant pathologists desire soft computing technology for accurate and reliable diagnosis of plant diseases. In this study, we propose an efficient soybean disease identification method based on a transfer learning approach using pre-trained convolutional neural networks (CNNs) such as AlexNet, GoogleNet, VGG16, ResNet101, and DenseNet201. The proposed convolutional neural networks were trained on a 1200-image PlantVillage dataset of diseased and healthy soybean leaves to distinguish three soybean diseases from healthy leaves. Pre-trained CNNs were used to enable fast and easy system implementation in practice. We used a five-fold cross-validation strategy to analyze the performance of the networks. In this study, pre-trained convolutional neural networks were used as feature extractors and classifiers. The experimental results based on the proposed approach using pre-trained AlexNet, GoogleNet, VGG16, ResNet101, and DenseNet201 networks achieve accuracies of 95%, 96.4%, 96.4%, 92.1%, and 93.6%, respectively. The experimental results for the identification of soybean diseases indicated that the proposed network models achieve the highest accuracy.
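The evaluation protocol described in this abstract, a pre-trained network used as a fixed feature extractor with a classifier scored by five-fold cross-validation, can be sketched as follows. This is an illustrative sketch with random stand-in data, not the authors' implementation; in practice the feature matrix would hold activations taken from a network such as AlexNet or VGG16.

```python
# Hypothetical sketch: classifier on CNN-style feature vectors, scored with
# five-fold cross-validation. Random features stand in for activations from
# a pre-trained backbone.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_images, n_features, n_classes = 120, 256, 4   # 3 diseases + healthy (illustrative)
X = rng.normal(size=(n_images, n_features))     # stand-in for CNN features
y = rng.integers(0, n_classes, size=n_images)   # stand-in labels

clf = SVC(kernel="linear")
scores = cross_val_score(clf, X, y, cv=5)       # five-fold cross-validation
print(len(scores), scores.mean())
```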
APA, Harvard, Vancouver, ISO, and other styles
2

Jadhav, Sachin B., Vishwanath R. Udupi, and Sanjay B. Patil. "Convolutional neural networks for leaf image-based plant disease classification." International Journal of Artificial Intelligence (IJ-AI) 8, no. 4 (2019): 328–41. https://doi.org/10.11591/ijai.v8.i4.pp328-341.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Thirumaladevi, Satharajupalli, and Sailaja Maruvada. "Competent scene classification using feature fusion of pre-trained convolutional neural networks." TELKOMNIKA 21, no. 4 (2023): 805–14. https://doi.org/10.12928/telkomnika.v21i4.24463.

Full text
Abstract:
With the development of convolutional neural networks (CNNs) and other deep learning techniques, scientists have become more interested in the scene categorization of remotely acquired images, as well as in the associated algorithms and datasets. Spatial geometric detail may be lost as the convolution layer depth increases, which has a significant impact on classification accuracy. Fusion-based techniques, regarded as a viable way to express scene features, have recently attracted a lot of interest as a solution to this issue. Here, we propose a convolutional feature fusion network that makes use of canonical correlation, i.e., the linear correlation between two feature maps. The deep features extracted from various pre-trained convolutional neural networks are then efficiently fused to improve scene classification accuracy. We thoroughly evaluated three different fused CNN designs to achieve the best results. Finally, we used a support vector machine (SVM) for categorization. Two real-world datasets, UC Merced and SIRI-WHU, were employed in the analysis, and the competitiveness of the investigated technique was evaluated. The improved categorization accuracy demonstrates that the fusion technique under consideration produces affirmative results when compared to individual networks.
APA, Harvard, Vancouver, ISO, and other styles
4

Wang, Yufei, and Garrison Cottrell. "Recognizing Urban Tribes with pre-trained Convolutional Neural Networks." Journal of Vision 15, no. 12 (2015): 1171. http://dx.doi.org/10.1167/15.12.1171.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Mokdad, Karim, Behrang Koushavand, and Jeff Boisvert. "Automatic variogram inference using pre-trained Convolutional Neural Networks." Applied Computing and Geosciences 25 (February 2025): 100219. https://doi.org/10.1016/j.acags.2025.100219.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Dakdareh, Sara Ghasemi, and Karim Abbasian. "Diagnosis of Alzheimer’s Disease and Mild Cognitive Impairment Using Convolutional Neural Networks." Journal of Alzheimer's Disease Reports 8, no. 1 (2024): 317–28. http://dx.doi.org/10.3233/adr-230118.

Full text
Abstract:
Background: Alzheimer’s disease and mild cognitive impairment are common diseases in the elderly, affecting more than 50 million people worldwide in 2020. Early diagnosis is crucial for managing these diseases, but their complexity poses a challenge. Convolutional neural networks have shown promise in accurate diagnosis. Objective: The main objective of this research is to distinguish patients with Alzheimer’s disease and mild cognitive impairment from healthy individuals using convolutional neural networks. Methods: This study utilized three different convolutional neural network models, two of which were pre-trained models, namely AlexNet and DenseNet, while the third model was a CNN1D-LSTM neural network. Results: Among the neural network models used, AlexNet demonstrated the highest accuracy, exceeding 98%, in distinguishing mild cognitive impairment and Alzheimer’s disease from healthy individuals. Furthermore, the accuracies of the DenseNet and CNN1D-LSTM models are 88% and 91.89%, respectively. Conclusions: The research highlights the potential of convolutional neural networks in diagnosing mild cognitive impairment and Alzheimer’s disease. The use of pre-trained neural networks and the integration of various patient data contribute to achieving accurate results. The high accuracy achieved by the AlexNet neural network underscores its effectiveness in disease classification. These findings pave the way for future research and improvements in the field of diagnosing these diseases using convolutional neural networks, ultimately aiding in early detection and effective management of mild cognitive impairment and Alzheimer’s disease.
APA, Harvard, Vancouver, ISO, and other styles
7

Dudekula, Usen, and N. Purnachand. "Linear fusion approach to convolutional neural networks for facial emotion recognition." Indonesian Journal of Electrical Engineering and Computer Science 25, no. 3 (2022): 1489–1500. http://dx.doi.org/10.11591/ijeecs.v25.i3.pp1489-1500.

Full text
Abstract:
Facial expression recognition is a challenging problem in the scientific field of computer vision. Several facial expression recognition (FER) algorithms have been proposed in machine learning and deep learning to extract expression knowledge from facial representations. Even though numerous algorithms have been examined, several issues remain, such as lighting changes, rotations, and occlusions. In this study, we present an efficient approach to enhance recognition accuracy that uses transfer learning to fine-tune the parameters of a pre-trained model (VGG19) and a non-pre-trained convolutional neural network (CNN) for the task of image classification. The VGG19 network and the convolutional network derive two channels of expression-related characteristics from grayscale facial images. The linear fusion algorithm calculates the class by taking an average of the classification decisions of both channels on the training samples. Final recognition is calculated using a convolutional neural network architecture followed by a softmax classifier. Seven basic facial emotions (happiness, surprise, anger, sadness, fear, disgust, and neutral) can be recognized by the proposed algorithm. The average accuracies on the standard datasets CK+ and JAFFE are 98.3% and 92.4%, respectively. Even using a deep network with one channel, the proposed algorithm achieves comparable performance.
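The linear fusion rule described in this abstract, averaging the per-class decisions of the two channels and taking the argmax, reduces to a few lines. The probability rows below are made-up illustrative values, not outputs of the paper's networks.

```python
# Minimal sketch of linear fusion: average the class-probability outputs of
# two channels (e.g. a VGG19 branch and a plain CNN branch), then argmax.
import numpy as np

# softmax outputs of two channels for 7 emotion classes, one sample per row
p_vgg = np.array([[0.1, 0.6, 0.05, 0.05, 0.1, 0.05, 0.05],
                  [0.2, 0.1, 0.5, 0.05, 0.05, 0.05, 0.05]])
p_cnn = np.array([[0.2, 0.4, 0.1, 0.1, 0.1, 0.05, 0.05],
                  [0.1, 0.1, 0.6, 0.1, 0.05, 0.025, 0.025]])

p_fused = (p_vgg + p_cnn) / 2.0   # linear fusion: average the decisions
pred = p_fused.argmax(axis=1)     # final class per sample
print(pred)                       # → [1 2]
```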
APA, Harvard, Vancouver, ISO, and other styles
8

Towpunwong, Nattakan, and Napa Sae-Bae. "Dog Breed Classification and Identification Using Convolutional Neural Networks." ECTI Transactions on Computer and Information Technology (ECTI-CIT) 17, no. 4 (2023): 554–63. http://dx.doi.org/10.37936/ecti-cit.2023174.253728.

Full text
Abstract:
This study aimed to assess the effectiveness of using pre-trained models to extract biometric information, specifically the dog breed and dog identity, from images of dogs. The study employed pre-trained models to extract feature vectors from the dog images. Multi-Layer Perceptron (MLP) models then used these vectors as input to train dog breed and identity classifiers. The dog breeds used in this study comprised two Thai breeds, Bangkaew and Ridgeback, and 120 foreign breeds. For dog breed classification, the results showed that, among the ImageNet classification models, the pre-trained NasNetLarge model has the highest dog breed classification accuracy (91%). The newly trained MLP model, which used feature vectors obtained by NasNetLarge, achieved higher accuracy at 93%. For dog identification, the results showed that, without data augmentation, the pre-trained ResNet50 model had the highest dog identification accuracy (75%). However, with data augmentation, MobileNetV2 could achieve a higher accuracy of 77%. When evaluating the identification performance of each breed, it is important to note that pugs achieved the lowest identification rate at 57.4%. Conversely, Bangkaew dogs demonstrated outstanding performance, with the highest identification rate at 98.6%.
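The pipeline in this abstract, feature vectors from a pre-trained backbone fed to a multi-layer perceptron classifier, can be sketched as follows. The random features stand in for the NasNetLarge feature vectors the study used; sizes are illustrative, not the study's.

```python
# Hypothetical sketch: train a small MLP on feature vectors that, in the real
# pipeline, would come from a pre-trained backbone such as NasNetLarge.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n_dogs, feat_dim, n_breeds = 150, 128, 5       # illustrative sizes
X = rng.normal(size=(n_dogs, feat_dim))        # stand-in backbone features
y = rng.integers(0, n_breeds, size=n_dogs)     # stand-in breed labels

mlp = MLPClassifier(hidden_layer_sizes=(64,), max_iter=300, random_state=0)
mlp.fit(X, y)
print(mlp.predict(X[:3]).shape)
```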
APA, Harvard, Vancouver, ISO, and other styles
9

Omran, Eman M., Randa F. Soliman, Ayman A. Eisa, Nabil A. Ismail, and Fathi E. Abd El-Samie. "Cancelable Iris Recognition System with Pre-trained Convolutional Neural Networks." Menoufia Journal of Electronic Engineering Research 28, no. 1 (2019): 95–101. http://dx.doi.org/10.21608/mjeer.2019.76778.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Blessington, T. Praveen, and Ravindra Mule. "Image Forgery Detection Based on Parallel Convolutional Neural Networks." International Journal of Scientific Research in Engineering and Management 8, no. 1 (2024): 1–10. http://dx.doi.org/10.55041/ijsrem28428.

Full text
Abstract:
Due to the availability of deep networks, progress has been made in the field of image recognition. Images and videos spread very conveniently, and with the availability of strong editing tools the tampering of digital content has become easy. To detect such forgeries, we propose techniques addressing two important aspects of employing deep convolutional neural networks for image forgery detection. We first explore and examine different preprocessing methods along with a convolutional neural network (CNN) architecture. We then evaluate different transfer learning approaches with models pre-trained on ImageNet (via fine-tuning) and apply them to the CASIA v2.0 dataset. The paper thus covers preprocessing techniques with a basic CNN model and then shows the powerful effect of the transfer learning models. Keywords: image tampering, convolutional neural network (CNN), error level analysis (ELA), transfer learning, sharpening filter, fine-tuning
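One of the preprocessing steps named in the keywords, error level analysis (ELA), is based on re-saving an image as JPEG at a known quality and inspecting the pixel-wise difference, which tends to be uneven in tampered regions. A minimal sketch, using a synthetic solid-color image rather than a real photo, and not the paper's actual preprocessing code:

```python
# Hypothetical ELA sketch: re-compress an image as JPEG and compute the
# per-pixel difference against the original.
from io import BytesIO
from PIL import Image, ImageChops

original = Image.new("RGB", (64, 64), (120, 80, 200))  # stand-in for a photo

buf = BytesIO()
original.save(buf, format="JPEG", quality=90)  # controlled re-compression
buf.seek(0)
resaved = Image.open(buf)

ela = ImageChops.difference(original, resaved)  # per-pixel error levels
print(ela.size)
```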
APA, Harvard, Vancouver, ISO, and other styles

Dissertations / Theses on the topic "Pre-trained convolutional neural networks"

1

Lundström, Dennis. "Data-efficient Transfer Learning with Pre-trained Networks." Thesis, Linköpings universitet, Datorseende, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-138612.

Full text
Abstract:
Deep learning has dominated the computer vision field since 2012, but a common criticism of deep learning methods is their dependence on large amounts of data. To combat this criticism research into data-efficient deep learning is growing. The foremost success in data-efficient deep learning is transfer learning with networks pre-trained on the ImageNet dataset. Pre-trained networks have achieved state-of-the-art performance on many tasks. We consider the pre-trained network method for a new task where we have to collect the data. We hypothesize that the data efficiency of pre-trained networks can be improved through informed data collection. After exhaustive experiments on CaffeNet and VGG16, we conclude that the data efficiency indeed can be improved. Furthermore, we investigate an alternative approach to data-efficient learning, namely adding domain knowledge in the form of a spatial transformer to the pre-trained networks. We find that spatial transformers are difficult to train and seem to not improve data efficiency.
APA, Harvard, Vancouver, ISO, and other styles
2

Sahlgren, Michaela, and Nour Alhunda Almajni. "Skin Cancer Image Classification with Pre-trained Convolutional Neural Network Architectures." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-259622.

Full text
Abstract:
In this study we compare the performance of different pre-trained deep convolutional neural network architectures on the classification of skin lesion images. We analyse the ISIC skin cancer image dataset. Our results indicate that the architectures analyzed achieve similar performance, with each algorithm reaching a mean five-fold cross-validation ROC AUC value between 0.82 and 0.89. The VGG-11 architecture achieved the highest performance, with a mean ROC AUC value of 0.89, despite the fact that it performs considerably worse than some of the other architectures on the ILSVRC task. Overall, our results suggest that the choice of architecture may not be as crucial for skin cancer classification as it is for the ImageNet classification problem.
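The evaluation metric used here, mean five-fold cross-validation ROC AUC for a binary classifier, can be sketched as below. Features, labels, and the logistic-regression stand-in classifier are all illustrative, not the thesis's networks or data.

```python
# Sketch of the protocol: mean five-fold cross-validation ROC AUC for a
# binary (benign vs malignant) classifier on stand-in features.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 32))          # stand-in image features
y = rng.integers(0, 2, size=200)        # benign (0) vs malignant (1)

scores = cross_val_score(LogisticRegression(max_iter=500), X, y,
                         cv=5, scoring="roc_auc")
print(scores.mean())                    # mean cross-validated ROC AUC
```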
APA, Harvard, Vancouver, ISO, and other styles
3

Franke, Cameron. "Autonomous Driving with a Simulation Trained Convolutional Neural Network." Scholarly Commons, 2017. https://scholarlycommons.pacific.edu/uop_etds/2971.

Full text
Abstract:
Autonomous vehicles will help society if they can easily support a broad range of driving environments, conditions, and vehicles. Achieving this requires reducing the complexity of the algorithmic system, easing the collection of training data, and verifying operation using real-world experiments. Our work addresses these issues by utilizing a reflexive neural network that translates images into steering and throttle commands. This network is trained using simulation data from Grand Theft Auto V, which we augment to reduce the number of simulation hours driven. We then validate our work using an RC car system through numerous tests. Our system successfully drives 98 of 100 laps of a track with multiple road types and difficult turns; it also successfully avoids collisions with another vehicle in 90% of the trials.
APA, Harvard, Vancouver, ISO, and other styles
4

Bjoerklund, Tomas Per Rolf. "License Plate Recognition using Convolutional Neural Networks Trained on Synthetic Images." Doctoral thesis, Politecnico di Torino, 2018. http://hdl.handle.net/11583/2709876.

Full text
Abstract:
In this thesis, we propose a license plate recognition system and study the feasibility of using synthetic training samples to train convolutional neural networks for a practical application. First we develop a modular framework for synthetic license plate generation; to generate different license plate types (or other objects) only the first module needs to be adapted. The other modules apply variations to the training samples such as background, occlusions, camera perspective projection, object noise and camera acquisition noise, with the aim to achieve enough variation of the object that the trained networks will also recognize real objects of the same class. Then we design two convolutional neural networks of low-complexity for license plate detection and character recognition. Both are designed for simultaneous classification and localization by branching the networks into a classification and a regression branch and are trained end-to-end simultaneously over both branches, on only our synthetic training samples. To recognize real license plates, we design a pipeline for scale invariant license plate detection with a scale pyramid and a fully convolutional application of the license plate detection network in order to detect any number of license plates and of any scale in an image. Before character classification is applied, potential plate regions are un-skewed based on the detected plate location in order to achieve an as optimal representation of the characters as possible. The character classification is also performed with a fully convolutional sweep to simultaneously find all characters at once. Both the plate and the character stages apply a refinement classification where initial classifications are first centered and rescaled. We show that this simple, yet effective trick greatly improves the accuracy of our classifications, and at a small increase of complexity. To our knowledge, this trick has not been exploited before. 
To show the effectiveness of our system we first apply it on a dataset of photos of Italian license plates to evaluate the different stages of our system and what effect the classification thresholds have on the accuracy. We also find robust training parameters and thresholds that are reliable for classification without any need for calibration on a validation set of real annotated samples (which may not always be available), and achieve balanced precision and recall on the set of Italian license plates, both in excess of 98%. Finally, to show that our system generalizes to new plate types, we compare our system to two reference systems on a dataset of Taiwanese license plates. For this, we only modify the first module of the synthetic plate generation algorithm to produce Taiwanese license plates and adjust parameters regarding plate dimensions; then we train our networks and apply the classification pipeline, using the robust parameters, on the Taiwanese reference dataset. We achieve state-of-the-art performance on plate detection (99.86% precision and 99.1% recall), single character detection (99.6%) and full license reading (98.7%).
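The scale pyramid used for scale-invariant detection in this thesis can be sketched as follows. The detector itself is omitted; the numbers (frame size, stopping threshold, 2x2 average-pooling downscale) are illustrative assumptions, not the thesis's parameters.

```python
# Minimal scale-pyramid sketch: repeatedly downscale the frame by a fixed
# factor; a detector would then be run fully convolutionally at every level.
import numpy as np

def downscale2(img):
    """Halve each spatial dimension by averaging 2x2 blocks."""
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    img = img[:h, :w]
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

frame = np.random.default_rng(0).random((256, 320))  # stand-in grayscale frame
pyramid = [frame]
while min(pyramid[-1].shape) >= 64:   # stop when plates would be too small
    pyramid.append(downscale2(pyramid[-1]))

print([level.shape for level in pyramid])
```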
APA, Harvard, Vancouver, ISO, and other styles
5

Ronneling, Benjamin, and Marcus Dypbukt Källman. "Impact of using different brain layer amounts on the accuracy of Convolutional Neural networks trained on MR-Images to identify Parkinson's Disease." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-301742.

Full text
Abstract:
Parkinson’s Disease (PD) is a neurodegenerative disease and brain disorder which affects the motor system and leads to shaking, stiffness, impaired balance and coordination. Diagnosing PD from Magnetic resonance images (MR-images) is difficult and often not possible for medical experts and therefore Convolutional neural networks (CNNs) are used instead. CNNs can detect small abnormalities in the MR-images that can be insignificant and undetectable for the human eye and this is the reason they are widely used in PD diagnosis with MR-images. CNNs have traditionally been trained on image data where PD affected brain areas (called brain slices) are converted into images first. Using this method, other large areas of the brain which might also be affected by PD are missed because it is not possible to combine more than 3 brain slices into the color channels of an image for training. This study aims to create a CNN and train it on larger parts of the brain and compare the accuracy of the created CNN when it is trained on different amounts of brain slices. The study then investigates if there is an optimal amount of brain area that produces the highest accuracy in the created CNN. During the study, we gathered results which show that, for our dataset, the accuracy increases when more brain slices are used. The trained CNN in this study reaches a maximum accuracy of 75% when it is trained on 7 slices and an accuracy of 60% when it is trained on a single slice. Training on 7 slices results in a significant improvement over training on a single slice. We believe that these 7 slices of brain contain a brain region called basal ganglia which is affected by PD and this is the reason that our CNN achieves the highest accuracy at 7 brain slices. 
We concluded that an optimal brain slice amount can be found which can increase the accuracy of the network considerably, but this process takes a lot of time.
APA, Harvard, Vancouver, ISO, and other styles
6

Barkman, Richard Dan William. "Object Tracking Achieved by Implementing Predictive Methods with Static Object Detectors Trained on the Single Shot Detector Inception V2 Network." Thesis, Karlstads universitet, Fakulteten för hälsa, natur- och teknikvetenskap (from 2013), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kau:diva-73313.

Full text
Abstract:
In this work, the possibility of realising object tracking by implementing predictive methods with static object detectors is explored. The static object detectors are obtained as models trained on a machine learning algorithm, or in other words, a deep neural network. Specifically, it is the single shot detector inception v2 network that will be used to train such models. Predictive methods will be incorporated to the end of improving the obtained models’ precision, i.e. their performance with respect to accuracy. Namely, Lagrangian mechanics will be employed to derive equations of motion for three different scenarios in which the object is to be tracked. These equations of motion will be implemented as predictive methods by discretising and combining them with four different iterative formulae. In ch. 1, the fundamentals of supervised machine learning, neural networks, convolutional neural networks, the workings of the single shot detector algorithm, approaches to hyperparameter optimisation and other relevant theory are established. This includes derivations of the relevant equations of motion and the iterative formulae with which they were implemented. In ch. 2, the experimental set-up that was utilised during data collection, and the manner by which the acquired data was used to produce training, validation and test datasets, is described. This is followed by a description of how the approach of random search was used to train 64 models on 300×300 datasets, and 32 models on 512×512 datasets. Subsequently, these models are evaluated based on their performance with respect to camera-to-object distance and object velocity. In ch. 3, the trained models were verified to possess multi-scale detection capabilities, as is characteristic of models trained on the single shot detector network.
While the former is found to be true irrespective of the resolution setting of the dataset that the model has been trained on, it is found that the performance with respect to varying object velocity is significantly more consistent for the lower resolution models, as they operate at a higher detection rate. Ch. 3 continues with the evaluation of the implemented predictive methods. This is done by comparing the resulting deviations when they are used to predict the missing data points from a collected detection pattern, with varying sampling percentages. It is found that the best predictive methods are those that make use of the least amount of previous data points. This followed from the fact that the data upon which evaluations were made contained an unreasonable amount of noise, considering that the implemented iterative formulae do not take noise into account. Moreover, the lower resolution models were found to benefit more than those trained on the higher resolution datasets because of the higher detection frequency they can employ. In ch. 4, it is argued that the concept of combining predictive methods with static object detectors to the end of obtaining an object tracker is promising. Moreover, the models obtained on the single shot detector network are concluded to be good candidates for such applications. However, the predictive methods studied in this thesis should be replaced with some method that can account for noise, or be extended to be able to account for it. A profound finding is that the single shot detector inception v2 models trained on a low-resolution dataset were found to outperform those trained on a high-resolution dataset in certain regards, due to the higher detection rate possible on lower resolution frames.
Namely, in performance with respect to object velocity and in that predictive methods performed better on the low-resolution models.<br>This work investigates the possibility of achieving object tracking by implementing predictive methods on top of static object detectors. The static object detectors are obtained as models trained with a machine learning algorithm, that is, deep neural networks. Specifically, a modified version of the single-shot detector network, the so-called SSD Inception v2 network, is used to train the models. Predictive methods are then incorporated to improve the models' ability to locate a sought object. In particular, Lagrangian mechanics is used to derive equations of motion for certain scenarios in which the object is to be tracked. The equations of motion are implemented by discretizing them and then combining them with four different iteration formulas. Chapter 2 covers fundamental theory of supervised machine learning, neural networks, and convolutional neural networks, as well as the basic principles of the single-shot detector network, approaches to hyperparameter optimization, and other relevant theory. This includes derivations of the equations of motion and of the iteration formulas with which they are to be combined. Chapter 3 describes the experimental setup used for data collection and how this data was used to produce various data sets. It then recounts how random search was used to train 64 models on data of resolution 300×300 and 32 models on data of resolution 512×512. The models were further evaluated with respect to their performance for varying camera-to-object distances and object velocities. Chapter 4 verifies that the models are able to detect at multiple scales, a characteristic trait of models trained on single-shot detector networks.
While this held for the trained models regardless of the resolution of the data they were trained on, the detection performance with respect to object velocity was found to be considerably more consistent for the models trained on lower-resolution data, a result of those models being able to operate at a higher detection frequency. Chapter 4 continues with an evaluation of the predictive methods, carried out by comparing the deviation each method produced when run on a sampled detection pattern saved from a run of a trained model. As part of this evaluation, the models were tested at different sampling rates. The best iteration formulas turned out to be those built on fewer previous data points. The reason is that the collected data on which the tests were performed contained a considerable amount of noise; since the implemented iteration formulas do not account for noise, this had decisive consequences. It was also found that all predictive methods improved the object detection ability to a greater extent for the models trained on lower-resolution data, which follows from their higher detection frequency. Chapter 5 argues, among other things, that the concept of combining predictive methods with static object detectors to achieve object tracking is promising. It is also concluded that models obtained from the single-shot detector network are promising candidates for this application area, owing to their high detection frequencies and their ability to detect at multiple scales. The methods used to predict the position of the tracked object were found unfit due to their inability to handle noise; it was therefore concluded that they should either be extended to handle noise or be replaced by more suitable methods.
The most essential conclusion of this work is that low-resolution single-shot detector models are better candidates than those trained on higher-resolution data, owing to the increased detection frequency they offer.
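The finding above, that iteration formulas built on fewer past detections cope better with noisy data, can be illustrated with a minimal sketch of two such predictors (the function names and the uniform-sampling assumption are ours, not the thesis's):

```python
def predict_linear(positions, dt):
    """Predict the next position by linear extrapolation from the last two detections."""
    (t0, x0), (t1, x1) = positions[-2], positions[-1]
    v = (x1 - x0) / (t1 - t0)  # finite-difference velocity estimate
    return x1 + v * dt

def predict_quadratic(positions, dt):
    """Predict from the last three detections via second-order finite differences."""
    (t0, x0), (t1, x1), (t2, x2) = positions[-3:]
    h = t1 - t0  # assumes uniform sampling for simplicity
    v = (x2 - x1) / h
    a = (x2 - 2 * x1 + x0) / h ** 2
    return x2 + v * dt + 0.5 * a * dt ** 2
```

On noise-free samples both formulas agree; with noisy detections, the second-order formula amplifies the noise through its finite-difference acceleration estimate, consistent with the thesis's observation.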
APA, Harvard, Vancouver, ISO, and other styles
7

Vi, Margareta. "Object Detection Using Convolutional Neural Network Trained on Synthetic Images." Thesis, Linköpings universitet, Datorseende, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-153224.

Full text
Abstract:
Training data is the bottleneck for training convolutional neural networks: a larger dataset gives better accuracy but also requires longer training time. It is shown that fine-tuning neural networks on synthetically rendered images increases the mean average precision. This method was applied to two different datasets with five distinct objects in each. The first dataset consisted of random objects with different geometric shapes. The second dataset contained objects used to assemble IKEA furniture. The best-performing neural network, trained on 5400 images, achieved a mean average precision of 0.81 on a test set sampled from a video sequence. The impact of dataset size, batch size, number of training epochs, and network architecture was analyzed. Using synthetic images to train CNNs is a promising path for object detection where large amounts of annotated image data are hard to come by.
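For readers unfamiliar with the metric reported above, mean average precision rests on intersection-over-union matching of predicted and ground-truth boxes. A minimal single-class sketch (an all-point approximation, with names of our choosing, not the thesis's evaluation code):

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def average_precision(detections, truths, thr=0.5):
    """detections: list of (confidence, box); truths: list of ground-truth boxes."""
    detections = sorted(detections, key=lambda d: -d[0])  # most confident first
    matched, tp, fp, curve = set(), 0, 0, []
    for conf, box in detections:
        best_j = max(range(len(truths)), key=lambda j: iou(box, truths[j]), default=None)
        if best_j is not None and best_j not in matched and iou(box, truths[best_j]) >= thr:
            matched.add(best_j)
            tp += 1
        else:
            fp += 1
        curve.append((tp / len(truths), tp / (tp + fp)))  # (recall, precision)
    ap, prev_r = 0.0, 0.0
    for r, p in curve:  # area under the recall-precision steps
        ap += (r - prev_r) * p
        prev_r = r
    return ap
```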
8

Ekblad, Voltaire Fanny, and Noah Mannberg. "Evaluation of transferability of Convolutional Neural Network pre-training with regard to image characteristics." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-302519.

Abstract:
This study evaluates the impact of pre-training on a medical classification task and investigates what characteristics of images affect the transferability of learned features from the pre-training. Cardiotocography (CTG) is a combined electronic measurement of the fetal heart rate (FHR) and maternal uterine contractions during labor and delivery and is commonly analyzed to prevent hypoxia. The records of FHR signals can be represented as images, where the time-frequency curves are transformed to color spectra, known as spectrograms. The CTU-UHB database consists of 552 CTG recordings, with 44 samples of hypoxic cases, rendering a small data set with a large imbalance between the hypoxic and normal classes. Transfer learning can be applied to mitigate this problem if the pre-training is relevant. The convolutional neural network AlexNet has previously been trained on natural images with distinct motifs, including images of flowers, cars, and animals. The spectrograms of FHR signals are, on the other hand, computer-generated (synthetic) images with abstract motifs. These characteristics guided the selection of benchmark data sets, to study how beneficial the AlexNet pre-training is with regard to the characteristics. 5-fold cross-validation and t-tests with a 1% significance level were used for performance estimations. The ability to classify images from the benchmark data sets was significantly improved by pre-training, however not for the FHR spectrograms. Varying the balance between classes or the amount of data did not produce significant performance variations in any of the benchmark data sets, and each of them significantly outperformed the FHR data set in all trials. Attempts to replicate previous results were unsuccessful. The suspected causes are methodological differences regarding preprocessing of the FHR signals, and differences in the AlexNet implementations and testing method.
The performance when classifying the FHR spectrograms was, therefore, unable to be validated. In conclusion, the results indicate that the AlexNet pre-training could generalize to synthetic images and improved performance for the benchmark data sets. Pre-training on natural images with distinct motifs does, however, not seem to contribute to an increase in model performance when classifying FHR spectrograms. Pre-training on and/or comparing with alternative spectrogram images is recommended for future research.<br>The study evaluates the effect of pre-training on a medical classification task and investigates which image characteristics affect the transferability of patterns learned during pre-training. Cardiotocography (CTG) is a measurement of the fetal heart rate (FHR) and the mother's uterine contractions during labor, which can be analyzed to predict fetal hypoxia. The CTU-UHB database consists of 552 CTG recordings, with 44 hypoxic cases, yielding a small data set with a large imbalance between the hypoxic and normal classes. Transfer learning can be used to address this problem, provided that the pre-training is relevant. The convolutional neural network AlexNet has previously been trained on natural images with distinct motifs, such as flowers, cars, and animals. Spectrograms of FHR signals are instead computer-generated (synthetic) images with abstract motifs. These characteristics guided the choice of benchmark data sets for studying how the AlexNet pre-training contributed with respect to those characteristics. 5-fold cross-validation and t-tests with a 5% significance level were used to measure performance. The ability to classify images from the benchmark data sets was considerably improved by pre-training, though not for the FHR spectrograms. Varying the class balance or the amount of data did not yield significant performance changes in any of the benchmark data sets, and each data set performed significantly better than the FHR data set in all trials.
Attempts to replicate previous results failed. The suspected causes are methodological differences regarding the preprocessing of the FHR signals, and differences in AlexNet implementations and testing method. The performance when classifying the FHR spectrograms could therefore not be validated. In summary, the results indicate that AlexNet pre-training can generalize to synthetic images and improve performance on the benchmark data sets. Pre-training on natural images with distinct motifs does not, however, appear to contribute to a performance increase when classifying FHR spectrograms. Pre-training on, and/or comparison with, alternative spectrogram images is recommended for future research.
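The 5-fold cross-validation used above has to preserve the heavy class imbalance (552 recordings, only 44 hypoxic) in every fold. A plain-Python sketch of such a stratified split (the function name and round-robin scheme are ours, not the thesis's):

```python
import random
from collections import defaultdict

def stratified_kfold(labels, k=5, seed=0):
    """Yield k (train_idx, test_idx) splits that preserve class proportions."""
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for i, y in enumerate(labels):
        by_class[y].append(i)
    folds = [[] for _ in range(k)]
    for idxs in by_class.values():
        rng.shuffle(idxs)
        for j, i in enumerate(idxs):
            folds[j % k].append(i)  # deal each class round-robin into the folds
    for f in range(k):
        test = sorted(folds[f])
        train = sorted(i for g in range(k) if g != f for i in folds[g])
        yield train, test
```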
9

Sparr, Henrik. "Object detection for a robotic lawn mower with neural network trained on automatically collected data." Thesis, Uppsala universitet, Datorteknik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-444627.

Abstract:
Machine vision is a hot research topic, with findings published at a high pace and more and more companies developing automated vehicles. Robotic lawn mowers are also increasing in popularity, but most mowers still use relatively simple methods for cutting the lawn. No previous work has been published on machine learning networks that improve between cutting sessions by automatically collecting data and then using it for training. A data acquisition pipeline and a neural network architecture that could help the mower avoid collisions were therefore developed. Nine neural networks were tested, of which a convolutional one reached the highest accuracy. The performance of the data acquisition routine and of the networks shows that it is possible to design an object detection model that improves between runs.
10

Mocko, Štefan. "Využitie pokročilých segmentačných metód pre obrazy z TEM mikroskopov." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2018. http://www.nusl.cz/ntk/nusl-378145.

Abstract:
This master's thesis deals with the use of convolutional neural networks for segmentation purposes in the field of transmission electron microscopy. It also describes the chosen neural network topology (U-NET), the augmentation techniques used, and the software environment. Thermo Fisher Scientific (formerly FEI Czech Republic s.r.o.) provided the image data for the purposes of this thesis. The obtained segmentation results are presented in the form of curves (ROC, PRC) and of numerical values (ARI, DSC, confusion matrix). The chosen U-NET topology achieved excellent results in pixel-wise segmentation. These results will most likely serve as a stepping stone for internal company research.
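The Dice similarity coefficient (DSC) and confusion matrix cited above as evaluation measures can be computed directly from binary masks. A minimal illustrative sketch (not the thesis's code):

```python
def dice_coefficient(pred, truth):
    """Dice similarity coefficient between two binary masks (flat lists of 0/1)."""
    inter = sum(p & t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    return 2 * inter / total if total else 1.0

def confusion_matrix(pred, truth):
    """Pixel-level (tp, fp, fn, tn) counts for a binary segmentation."""
    tp = sum(1 for p, t in zip(pred, truth) if p == 1 and t == 1)
    fp = sum(1 for p, t in zip(pred, truth) if p == 1 and t == 0)
    fn = sum(1 for p, t in zip(pred, truth) if p == 0 and t == 1)
    tn = sum(1 for p, t in zip(pred, truth) if p == 0 and t == 0)
    return tp, fp, fn, tn
```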

Book chapters on the topic "Pre-trained convolutional neural networks"

1

Patil, Bhuvaneshwari, and Mallikarjun Hangarge. "Pre-trained Convolutional Neural Networks for Gender Classification." In Proceedings of the First International Conference on Advances in Computer Vision and Artificial Intelligence Technologies (ACVAIT 2022). Atlantis Press International BV, 2023. http://dx.doi.org/10.2991/978-94-6463-196-8_25.

2

Sudha, V., and Anna Saro Vijendran. "Automation Algorithm for Labeling of Oil Spill Images using Pre-trained Deep Learning Model." In IoT-enabled Convolutional Neural Networks: Techniques and Applications. River Publishers, 2023. http://dx.doi.org/10.1201/9781003393030-8.

3

Banerji, Sugata, and Atreyee Sinha. "Painting Classification Using a Pre-trained Convolutional Neural Network." In Computer Vision, Graphics, and Image Processing. Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-68124-5_15.

4

Jadhav, Sachin, Vishwanath Udupi, and Sanjay Patil. "Classification of Soybean Diseases Using Pre-trained Deep Convolutional Neural Networks." In Advances in Intelligent Systems and Computing. Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-51859-2_68.

5

Oloko-Oba, Mustapha, and Serestina Viriri. "Pre-trained Convolutional Neural Network for the Diagnosis of Tuberculosis." In Advances in Visual Computing. Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-64559-5_44.

6

Hendrawan, Yusuf, Wais Al Qorni, Gunomo Djoyowasito, and Dimas Firmanda Al Riza. "Classification of Rhizomes Using Pre-trained Convolutional Neural Network Method." In Advances in Economics, Business and Management Research. Atlantis Press International BV, 2024. http://dx.doi.org/10.2991/978-94-6463-525-6_15.

7

Bhuma, Chandra Mohan. "Virus Texture Classification Using Genetic Algorithm and Pre-trained Convolutional Neural Networks." In Springer Proceedings in Mathematics & Statistics. Springer International Publishing, 2023. http://dx.doi.org/10.1007/978-3-031-15175-0_26.

8

Xing, V., and C. J. Lapeyre. "Deep Convolutional Neural Networks for Subgrid-Scale Flame Wrinkling Modeling." In Lecture Notes in Energy. Springer International Publishing, 2023. http://dx.doi.org/10.1007/978-3-031-16248-0_6.

Abstract:
Subgrid-scale flame wrinkling is a key unclosed quantity for premixed turbulent combustion models in large eddy simulations. Due to the geometrical and multi-scale nature of flame wrinkling, convolutional neural networks are good candidates for data-driven modeling of flame wrinkling. This chapter presents how a deep convolutional neural network called a U-Net is trained to predict the total flame surface density from the resolved progress variable. Supervised training is performed on a database of filtered and downsampled direct numerical simulation fields. In an a priori evaluation on a slot burner configuration, the network outperforms classical dynamic models. In closing, challenges regarding the ability of deep convolutional networks to generalize to unseen configurations and their practical deployment with fluid solvers are discussed.
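The filtering-and-downsampling step described above, which turns DNS fields into LES-resolution training samples, can be sketched with a simple box filter (a toy stand-in for the chapter's actual filtering; the function name is ours):

```python
def box_filter_downsample(field, factor):
    """Box-filter and downsample a 2-D field (list of rows), mimicking how
    DNS data is coarsened to LES resolution before supervised training."""
    out = []
    for i in range(0, len(field) - factor + 1, factor):
        row = []
        for j in range(0, len(field[0]) - factor + 1, factor):
            # average each factor x factor block into one coarse cell
            block = [field[i + a][j + b] for a in range(factor) for b in range(factor)]
            row.append(sum(block) / len(block))
        out.append(row)
    return out
```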
9

Zhang, Le, Jagannadan Varadarajan, and Yong Pei. "Action Recognition Using Co-trained Deep Convolutional Neural Networks." In Artificial Intelligence. IJCAI 2019 International Workshops. Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-56150-5_8.

10

Sudan, Akash, Goutam Singh Chouhan, Dilip Singh Sisodia, and Arti Anuragi. "An Empirical Evaluation of Pre-trained Convolutional Neural Network Models for Neural Style Transfer." In Communications in Computer and Information Science. Springer Nature Switzerland, 2025. https://doi.org/10.1007/978-3-031-69115-7_29.


Conference papers on the topic "Pre-trained convolutional neural networks"

1

Changela, Nirav, Nauka Shah, Jay Mehta, Divya Sonani, Parth Goel, and Vaishali Vadhavana. "Hyacinth Bean Quality Classification Using Pre-Trained Convolutional Neural Networks." In 2024 International Conference on Sustainable Communication Networks and Application (ICSCNA). IEEE, 2024. https://doi.org/10.1109/icscna63714.2024.10864313.

2

Sahil, P. Sam, Bhawesh K. Sinha, Thirumala Akash K, and Mohammed Riyaz Ahmed. "Alzheimer's Disease Prediction Using Convolutional Neural Networks Pre Trained Architecture." In 2024 IEEE International Conference on Intelligent Signal Processing and Effective Communication Technologies (INSPECT). IEEE, 2024. https://doi.org/10.1109/inspect63485.2024.10896128.

3

Golkhatmi, Benyamin Mirab, and Mohammad Hossein Moattar. "Attention-Boosted Ensemble of Pre-Trained Convolutional Neural Networks for Accurate Diabetic Retinopathy Detection." In 2024 14th International Conference on Computer and Knowledge Engineering (ICCKE). IEEE, 2024. https://doi.org/10.1109/iccke65377.2024.10874472.

4

Mahendran, Nivedhitha, and P. M. Durai Raj Vincent. "Alzheimer’s Disease Classification via Pre-trained Convolutional Neural Network Variants Utilizing Multi-level Thresholding based on Kapur’s Entropy." In 2024 Second International Conference on Networks, Multimedia and Information Technology (NMITCON). IEEE, 2024. http://dx.doi.org/10.1109/nmitcon62075.2024.10698892.

5

Ebrahimpour, Nader, and Faruk Baturalp Günay. "Palmprint Recognition Using Pre-Trained Convolutional Neural Networks." In Cognitive Models and Artificial Intelligence Conference. SETSCI, 2023. http://dx.doi.org/10.36287/setsci.6.1.018.

6

Aly, Cherry, Fazly Salleh Abas, and Hock Ann Goh. "Human Action Recognition using Pre-trained Convolutional Neural Networks." In VSIP '20: 2020 2nd International Conference on Video, Signal and Image Processing. ACM, 2020. http://dx.doi.org/10.1145/3442705.3442710.

7

Wilkinson, Eric, and Takeshi Takahashi. "Efficient aspect object models using pre-trained convolutional neural networks." In 2015 IEEE-RAS 15th International Conference on Humanoid Robots (Humanoids). IEEE, 2015. http://dx.doi.org/10.1109/humanoids.2015.7363556.

8

Vaityshyn, Valentyn, Hanna Porieva, and Anastasiia Makarenkova. "Pre-trained Convolutional Neural Networks for the Lung Sounds Classification." In 2019 IEEE 39th International Conference on Electronics and Nanotechnology (ELNANO). IEEE, 2019. http://dx.doi.org/10.1109/elnano.2019.8783850.

9

Vasu, Aman Jain, and Raj Gaurang Tiwari. "Epithelial Tissue Classification using Pre-Trained Deep Convolutional Neural Networks." In 2023 IEEE 12th International Conference on Communication Systems and Network Technologies (CSNT). IEEE, 2023. http://dx.doi.org/10.1109/csnt57126.2023.10134602.

10

Masood, Momina, Marriam Nawaz, Ali Javed, Tahira Nazir, Awais Mehmood, and Rabbia Mahum. "Classification of Deepfake Videos Using Pre-trained Convolutional Neural Networks." In 2021 International Conference on Digital Futures and Transformative Technologies (ICoDT2). IEEE, 2021. http://dx.doi.org/10.1109/icodt252288.2021.9441519.


Reports on the topic "Pre-trained convolutional neural networks"

1

Slone, Scott Michael, Marissa Torres, Nathan Lamie, Samantha Cook, and Lee Perren. Automated change detection in ground-penetrating radar using machine learning in R. Engineer Research and Development Center (U.S.), 2024. http://dx.doi.org/10.21079/11681/49442.

Abstract:
Ground-penetrating radar (GPR) is a useful technique for subsurface change detection but is limited by the need for a subject matter expert to process and interpret coincident profiles. Use of a machine learning model can automate this process and reduce the need for expert processing and interpretation. Several machine learning models were investigated for the purpose of comparing coincident GPR profiles. Based on our literature review, a Siamese Twin model using a twinned convolutional network was identified as the optimum choice. Two neural networks were tested for the internal twinned model, ResNet50 and MobileNetV2, with the former historically having higher accuracy and the latter historically having faster processing time. When trained and tested on experimentally obtained GPR profiles with synthetically added changes, ResNet50 had higher accuracy. Thanks to this higher accuracy, less computational processing was needed, and ResNet50 needed only 107 s to make a prediction compared to 223 s for MobileNetV2. Results imply that twinned models with higher historical accuracies should be investigated further. It is also recommended to test Siamese Twin models further with experimentally produced changes, to verify that the change detection model's accuracy is not merely specific to synthetically produced changes.
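The Siamese Twin idea used above, one shared backbone applied to both coincident profiles and a distance taken between the two embeddings, can be sketched with a toy feature extractor standing in for ResNet50 (everything here is illustrative, not the report's implementation):

```python
import math

def embed(profile):
    """Toy stand-in for the shared (twinned) backbone: the same fixed feature
    map applied identically to both inputs, the essence of a Siamese network."""
    n = len(profile)
    mean = sum(profile) / n
    energy = sum(x * x for x in profile) / n
    std = math.sqrt(max(energy - mean * mean, 0.0))
    return (mean, std, max(profile) - min(profile))

def change_score(profile_a, profile_b):
    """Euclidean distance between the twin embeddings; large means likely change."""
    return math.dist(embed(profile_a), embed(profile_b))

def changed(profile_a, profile_b, threshold=0.5):
    # threshold is an arbitrary illustrative choice
    return change_score(profile_a, profile_b) > threshold
```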
2

Stevenson, G. Analysis of Pre-Trained Deep Neural Networks for Large-Vocabulary Automatic Speech Recognition. Office of Scientific and Technical Information (OSTI), 2016. http://dx.doi.org/10.2172/1289367.

3

Tayeb, Shahab. Taming the Data in the Internet of Vehicles. Mineta Transportation Institute, 2022. http://dx.doi.org/10.31979/mti.2022.2014.

Abstract:
As an emerging field, the Internet of Vehicles (IoV) has a myriad of security vulnerabilities that must be addressed to protect system integrity. To stay ahead of novel attacks, cybersecurity professionals are developing new software and systems using machine learning techniques. Neural network architectures improve such systems, including Intrusion Detection Systems (IDSs), by implementing anomaly detection, which differentiates benign data packets from malicious ones. For an IDS to best predict anomalies, the model is trained on data that is typically pre-processed through normalization and feature selection/reduction. These pre-processing techniques play an important role in training a neural network to optimize its performance. This research studies the impact of applying normalization techniques as a pre-processing step to learning, as used by IDSs. This report proposes a Deep Neural Network (DNN) model with two hidden layers for the IDS architecture and compares two commonly used normalization pre-processing techniques. Our findings are evaluated using accuracy, Area Under Curve (AUC), Receiver Operator Characteristic (ROC), F-1 Score, and loss. The experiments demonstrate that Z-Score outperforms both no normalization and the use of Min-Max normalization.
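The two normalization techniques compared above are simple to state; a plain-Python sketch (function names are ours, not the report's):

```python
def z_score(values):
    """Standardize to zero mean and unit variance (Z-Score normalization)."""
    n = len(values)
    mean = sum(values) / n
    std = (sum((v - mean) ** 2 for v in values) / n) ** 0.5
    return [(v - mean) / std for v in values]

def min_max(values, lo=0.0, hi=1.0):
    """Rescale linearly into the interval [lo, hi] (Min-Max normalization)."""
    vmin, vmax = min(values), max(values)
    return [lo + (v - vmin) * (hi - lo) / (vmax - vmin) for v in values]
```

Z-Score keeps relative spread information and is less sensitive to a single extreme value compressing the rest of the range, one plausible reason it can outperform Min-Max on traffic features with outliers.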
4

Mbani, Benson, Timm Schoening, and Jens Greinert. Automated and Integrated Seafloor Classification Workflow (AI-SCW). GEOMAR, 2023. http://dx.doi.org/10.3289/sw_2_2023.

Abstract:
The Automated and Integrated Seafloor Classification Workflow (AI-SCW) is a semi-automated underwater image processing pipeline customized for classifying the seafloor into semantic habitat categories. The current implementation has been tested against a sequence of underwater images collected by the Ocean Floor Observation System (OFOS) in the Clarion-Clipperton Zone of the Pacific Ocean. Nevertheless, the workflow could also be applied to images acquired by other platforms, such as an Autonomous Underwater Vehicle (AUV) or a Remotely Operated Vehicle (ROV). The modules in AI-SCW have been implemented in the Python programming language, specifically using libraries such as scikit-image for image processing, scikit-learn for machine learning and dimensionality reduction, keras for computer vision with deep learning, and matplotlib for generating visualizations. AI-SCW's modular implementation therefore allows users to accomplish a variety of underwater computer vision tasks, which include: detecting laser points in the underwater images for use in scale determination; performing contrast enhancement and color normalization to improve the visual quality of the images; semi-automated generation of annotations to be used downstream during supervised classification; training a convolutional neural network (Inception v3) on the generated annotations to semantically classify each image into one of the pre-defined seafloor habitat categories; evaluating sampling strategies for generating balanced training images to be used for fitting an unsupervised k-means classifier; and visualization of classification results both in feature-space view and in map view with geospatial coordinates. The workflow is thus useful for quick but objective generation of image-based seafloor habitat maps to support monitoring of remote benthic ecosystems.
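The unsupervised k-means step mentioned above can be sketched in a few lines of plain Python (Lloyd's algorithm; this is illustrative and not AI-SCW's scikit-learn-based implementation):

```python
import random

def kmeans(points, k, iters=50, seed=0):
    """Minimal k-means (Lloyd's algorithm) on a list of feature tuples."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # assign each point to its nearest center (squared Euclidean distance)
            j = min(range(k), key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
            clusters[j].append(p)
        # recompute each center as the mean of its cluster (keep old center if empty)
        centers = [
            tuple(sum(dim) / len(cl) for dim in zip(*cl)) if cl else centers[j]
            for j, cl in enumerate(clusters)
        ]
    return centers, clusters
```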
5

DEEP LEARNING DAMAGE IDENTIFICATION METHOD FOR STEEL-FRAME BRACING STRUCTURES USING TIME–FREQUENCY ANALYSIS AND CONVOLUTIONAL NEURAL NETWORKS. The Hong Kong Institute of Steel Construction, 2023. http://dx.doi.org/10.18057/ijasc.2023.19.4.8.

Abstract:
Lattice bracing, commonly used in steel construction systems, is vulnerable to damage and failure when subjected to horizontal seismic pressure. To identify damage, manual examination is the conventional method applied. However, this approach is time-consuming and typically unable to detect damage in its early stage. Determining the exact location of damage has been problematic for researchers. Nevertheless, detecting the failure of lateral supports in various parts of a structure using time–frequency analysis and deep learning methods, such as convolutional neural networks, is possible. Then, the damaged structure can be rapidly rebuilt to ensure safety. Experiments are conducted to determine the vibration acceleration modes of a four-storey steel structure considering various support structure damage scenarios. The acceleration signals at each measurement point are then analysed with respect to time and frequency to generate appropriate three-dimensional spectral matrices. In this study, the MobileNetV2 deep learning model was trained on a labelled picture collection of damaged matrix images. Hyperparameter tweaking and training resulted in a prediction accuracy of 97.37% for the complete dataset and 99.30% and 96.23% for the training and testing sets, respectively. The findings indicate that a combination of time–frequency analysis and deep learning methods may pinpoint the position of the damaged steel frame support components more accurately.
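The time–frequency analysis described above converts each acceleration signal into a spectral matrix before it is fed to the CNN. A naive short-time Fourier transform sketch (the window and hop sizes are arbitrary choices of ours, not the paper's):

```python
import cmath
import math

def spectrogram(signal, win=8, hop=4):
    """Naive short-time Fourier transform: one magnitude spectrum per window.
    Returns a time x frequency matrix like the spectral images fed to a CNN."""
    frames = []
    for start in range(0, len(signal) - win + 1, hop):
        seg = signal[start:start + win]
        mags = []
        for k in range(win // 2 + 1):  # keep only non-negative frequency bins
            z = sum(x * cmath.exp(-2j * math.pi * k * n / win)
                    for n, x in enumerate(seg))
            mags.append(abs(z))
        frames.append(mags)
    return frames
```

For a pure sinusoid whose frequency falls exactly on a bin, every frame's magnitude peaks at that bin, which is what makes such matrices readable for an image classifier.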