Academic literature on the topic 'Deep learning CNN'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Deep learning CNN.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Deep learning CNN"

1

Jaggi, Ayush, and Vinod Sharma. "Classification of Healthy Seeds Using Deep Learning." Journal of Scientific Research and Technology (JSRT) 1, no. 4 (2023): 10–23. https://doi.org/10.5281/zenodo.8222793.

Full text
Abstract:
With the increasing demand for healthy and high-quality seeds in agriculture, accurate and efficient seed classification methods are essential for seed quality control and optimisation of crop production. This work proposes a deep learning-based approach for healthy seed classification, leveraging the power of neural networks to automatically learn discriminative features from seed images. The proposed method involves a multi-step pipeline that includes image preprocessing and classification. The seed images are initially preprocessed to enhance their quality and reduce noise using image normalisation and denoising techniques. Next, a deep convolutional neural network (CNN) is employed to extract relevant features from the preprocessed seed images. The CNN model is designed to capture the seeds' local and global characteristics, enabling it to learn complex patterns and textures.
APA, Harvard, Vancouver, ISO, and other styles
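To make the pipeline in the abstract above concrete, here is a minimal sketch (not the authors' code) of a preprocessing-plus-CNN classifier in Keras; the directory layout, image size, and number of seed classes are assumptions.

```python
# Minimal sketch: normalisation + a small CNN for seed-image classification.
# Assumes images are stored as data/train/<class_name>/*.jpg; paths, image
# size, and class count are placeholders, not values from the paper.
import tensorflow as tf
from tensorflow.keras import layers, models

IMG_SIZE = (224, 224)
NUM_CLASSES = 4  # assumption: four seed categories

train_ds = tf.keras.utils.image_dataset_from_directory(
    "data/train", image_size=IMG_SIZE, batch_size=32)

# Preprocessing here is limited to rescaling to [0, 1]; denoising (e.g. a
# Gaussian blur) would be applied offline before this pipeline.
model = models.Sequential([
    layers.Rescaling(1.0 / 255, input_shape=IMG_SIZE + (3,)),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(128, 3, activation="relu"),
    layers.GlobalAveragePooling2D(),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, epochs=10)
```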
2

Arora, Chinmay, Ritvik Gupta, and S. Sridhar. "Face Mask Detection using Deep Learning CNN Architecture." International Journal of Scientific Engineering and Research 10, no. 12 (2022): 1–10. https://doi.org/10.70729/se221206135738.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Patil, Rekha, Vidya Kumar Katrabad, Mahantappa, and Sunil Kumar. "Image Classification Using CNN Model Based on Deep Learning." Journal of Scientific Research and Technology (JSRT) 1, no. 2 (2023): 60–71. https://doi.org/10.5281/zenodo.7965526.

Full text
Abstract:
In this work, we use a convolutional neural network to classify images. In the field of visual image analysis, CNNs (a subset of deep neural networks) are the norm. The CNN is developed from the multilayer perceptron; it is based on a hierarchical model that builds up the network and then delivers its output to a fully connected layer, in which all the neurons are linked together and their output is processed. Here, we demonstrate how our system can get the job done in challenging domains like computer vision by using a deep learning approach. Convolutional neural networks (CNNs) are the machine learning method employed by our system for automated image classification. For grayscale image classification, our method is evaluated on the MNIST digit dataset. More processing power is needed for image classification because of the grayscale images in the training dataset. Our model's high accuracy in image classification can be seen in the experimental phase, where we trained it using a convolutional neural network and obtained 98% accuracy.
APA, Harvard, Vancouver, ISO, and other styles
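A minimal sketch of the kind of CNN the abstract above describes, trained on the MNIST digit dataset with Keras; such a small network typically reaches accuracy in the high-90% range, though the exact figure depends on training details.

```python
# Minimal sketch (not the paper's code): a small CNN classifier for MNIST.
import tensorflow as tf
from tensorflow.keras import layers, models

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train = x_train[..., None] / 255.0   # add channel dim, scale to [0, 1]
x_test = x_test[..., None] / 255.0

model = models.Sequential([
    layers.Conv2D(32, 3, activation="relu", input_shape=(28, 28, 1)),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=5, validation_split=0.1)
print(model.evaluate(x_test, y_test))   # [test loss, test accuracy]
```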
4

Giri, Santosh, and Basanta Joshi. "Transfer Learning Based Image Visualization Using CNN." International Journal of Artificial Intelligence and Applications (IJAIA) 10 (July 2019): 47–55. https://doi.org/10.5281/zenodo.3371299.

Full text
Abstract:
Image classification is a popular application of deep learning. Deep learning techniques are very popular because they can effectively perform operations on image data at large scale. In this paper, a CNN model was designed to better classify images. We make use of the feature-extraction part of the Inception V3 model for feature vector calculation and retrain the classification layer with these feature vectors. Using the transfer learning mechanism, the classification layer of the CNN model was trained with 20 classes of the Caltech101 image dataset and 17 classes of the Oxford 17 Flower image dataset. After training, the network was evaluated with testing images from the Oxford 17 Flower and Caltech101 datasets. The mean testing precision of the neural network architecture was 98% on the Caltech101 dataset and 92.27% on the Oxford 17 Flower image dataset.
APA, Harvard, Vancouver, ISO, and other styles
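The transfer-learning recipe in the abstract above (reuse the Inception V3 feature-extraction part, retrain only the classification layer) can be sketched as follows in Keras; this is an illustrative approximation, not the authors' retraining script, and the dataset path and class count are placeholders.

```python
# Minimal sketch: freeze a pretrained Inception V3 feature extractor and
# train only a new classification layer on top of it.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 20  # e.g. 20 Caltech101 classes, as in the paper

base = tf.keras.applications.InceptionV3(
    include_top=False, weights="imagenet", pooling="avg",
    input_shape=(299, 299, 3))
base.trainable = False  # keep the pretrained feature-extraction part fixed

model = models.Sequential([
    base,
    layers.Dense(NUM_CLASSES, activation="softmax"),  # retrained layer
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Placeholder dataset directory with one sub-folder per class.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "caltech101_subset", image_size=(299, 299), batch_size=32)
train_ds = train_ds.map(
    lambda x, y: (tf.keras.applications.inception_v3.preprocess_input(x), y))
model.fit(train_ds, epochs=5)
```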
5

Mohebbanaaz, Y. Padma Sai, and L. V. Rajani Kumari. "Detection of cardiac arrhythmia using deep CNN and optimized SVM." Indonesian Journal of Electrical Engineering and Computer Science 24, no. 1 (2021): 217–25. https://doi.org/10.11591/ijeecs.v24.i1.pp217-225.

Full text
Abstract:
Deep learning (DL) has become a topic of study in various applications, including healthcare. Detection of abnormalities in an electrocardiogram (ECG) plays a significant role in patient monitoring. It is noted that a deep neural network, when trained on huge data, can easily detect cardiac arrhythmia. This may help cardiologists to start treatment as early as possible. This paper proposes a new deep learning model adapting the concept of transfer learning to extract deep-CNN features and facilitate automated classification of the electrocardiogram (ECG) into sixteen types of ECG beats using an optimized support vector machine (SVM). The proposed strategy begins with gathering ECG datasets, removing noise from the ECG signals, and extracting beats from the denoised signals. Feature extraction is done using ResNet18 via the concept of transfer learning. These extracted features are classified using the optimized SVM. The methods are evaluated and tested on the MIT-BIH arrhythmia database. Our proposed model is effective compared to all state-of-the-art techniques, with an accuracy of 98.70%.
APA, Harvard, Vancouver, ISO, and other styles
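A minimal sketch of the two-stage pipeline described above: deep features from a pretrained ResNet18 feeding an SVM whose hyperparameters are tuned by grid search, used here as a stand-in for the paper's "optimized SVM". The beat images and labels are random placeholders, not ECG data.

```python
# Minimal sketch (not the authors' code): ResNet18 deep features + SVM.
import numpy as np
import torch
import torchvision.models as tvm
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

resnet = tvm.resnet18(weights=tvm.ResNet18_Weights.IMAGENET1K_V1)
resnet.fc = torch.nn.Identity()          # drop the classifier, keep 512-d features
resnet.eval()

@torch.no_grad()
def deep_features(x):                    # x: float tensor (N, 3, 224, 224)
    return resnet(x).numpy()             # (N, 512) feature vectors

# Placeholders standing in for denoised, segmented ECG beats rendered as images.
X_train = torch.randn(80, 3, 224, 224)
y_train = np.repeat(np.arange(16), 5)    # 16 beat types, 5 samples each

feats = deep_features(X_train)
svm = GridSearchCV(SVC(kernel="rbf"),
                   {"C": [1, 10, 100], "gamma": ["scale", 1e-3]},
                   cv=3)                 # grid search as the "optimization" step
svm.fit(feats, y_train)
print(svm.best_params_, svm.best_score_)
```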
6

Alkababji, Ahmed M., and Omar H. Mohammed. "Real time ear recognition using deep learning." TELKOMNIKA (Telecommunication Computing Electronics and Control) 19, no. 2 (2021): 523–30. https://doi.org/10.12928/TELKOMNIKA.v19i2.18322.

Full text
Abstract:
Automatic identity recognition from ear images represents an active area of interest within the biometric community. The human ear is a perfect source of data for passive person identification. Ear images can be captured from a distance and in a covert manner; this makes ear recognition technology an attractive choice for security applications and surveillance, in addition to related application domains. Differing from other biometric modalities, the human ear is neither affected by expressions, as faces are, nor does it need close contact, as fingerprints do. In this paper, a deep learning object detector called faster region-based convolutional neural networks (Faster R-CNN) is used for ear detection. A convolutional neural network (CNN) is used for feature extraction. Principal component analysis (PCA) and a genetic algorithm are used for feature reduction and selection, respectively, and a fully connected artificial neural network is used as a matcher. Testing achieved an accuracy of 97.8% with acceptable speed, confirming the accuracy and robustness of the proposed system.
APA, Harvard, Vancouver, ISO, and other styles
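The back end of the pipeline above (feature reduction plus a neural-network matcher) can be sketched with scikit-learn; the genetic-algorithm selection step and the Faster R-CNN detector are omitted, and the CNN features and subject labels below are random placeholders.

```python
# Minimal sketch: PCA for feature reduction, MLP as the matcher.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
features = rng.normal(size=(200, 512))      # placeholder CNN features for ear crops
labels = np.repeat(np.arange(20), 10)       # placeholder subject identities

matcher = make_pipeline(
    PCA(n_components=64),                   # feature reduction
    MLPClassifier(hidden_layer_sizes=(128,), max_iter=500),  # fully connected matcher
)
matcher.fit(features, labels)
print(matcher.score(features, labels))      # training accuracy on the placeholders
```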
7

Ghongade, A. R., Sneha Zade, Yash Malankar, Sameer Kamble, and Pranali Dhenge. "Object Caption Generator Using Deep Learning." International Journal of Advanced Innovative Technology in Engineering 9, no. 3 (2024): 324–28. https://doi.org/10.5281/zenodo.12747531.

Full text
Abstract:
In this project, we use a CNN and an LSTM to generate a caption for an object. As deep learning techniques grow, huge datasets and computing power make it possible to build models that can generate captions for an object. This is what we implement in this Python-based project using deep learning techniques such as CNNs and RNNs. An object caption generator involves natural language processing and computer vision concepts to recognise the context of an object and describe it in English. In this survey, we carefully follow some of the core concepts of object captioning and its common approaches. We discuss the Keras library, NumPy, and PyCharm used in the making of this project. We also discuss the Flickr dataset and the CNN used for object classification. The system is trained on a large dataset of objects and their corresponding captions, using techniques such as convolutional neural networks (CNNs) and recurrent neural networks (RNNs). The CNNs are used to extract features from the objects, while the RNNs are used to generate the textual descriptions. The object caption generator is a promising application of machine learning in the field of computer vision. It has many potential uses, including assisting the visually impaired, creating better search results for object-based queries, and helping with content creation for social media and marketing purposes.
APA, Harvard, Vancouver, ISO, and other styles
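A minimal sketch of the widely used CNN+LSTM "merge" captioning pattern the abstract describes, in Keras: pooled CNN image features and an embedded partial caption are combined to predict the next word. Vocabulary size, caption length, and feature size are assumptions, not values from the project.

```python
# Minimal sketch (not the project's code) of a merge-style caption model.
from tensorflow.keras import layers, models

VOCAB = 5000      # assumed vocabulary size
MAXLEN = 34       # assumed maximum caption length
FEAT = 2048       # e.g. pooled feature size of an ImageNet CNN

img_in = layers.Input(shape=(FEAT,))                     # CNN image features
img_emb = layers.Dense(256, activation="relu")(layers.Dropout(0.5)(img_in))

cap_in = layers.Input(shape=(MAXLEN,))                   # partial caption (word ids)
cap_emb = layers.Embedding(VOCAB, 256, mask_zero=True)(cap_in)
cap_emb = layers.LSTM(256)(layers.Dropout(0.5)(cap_emb))

merged = layers.add([img_emb, cap_emb])                  # merge image and text branches
out = layers.Dense(VOCAB, activation="softmax")(
    layers.Dense(256, activation="relu")(merged))        # next-word distribution

model = models.Model([img_in, cap_in], out)
model.compile(loss="sparse_categorical_crossentropy", optimizer="adam")
model.summary()
```

At inference time the caption is generated word by word, feeding each predicted word back into the partial-caption input until an end token is produced.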
8

Sasi Kumar, A., and P. S. Aithal. "DeepQ Residue Analysis of Brain-Computer Classification and Prediction using Deep CNN." International Journal of Applied Engineering and Management Letters (IJAEML) 7, no. 2 (2023): 144–63. https://doi.org/10.5281/zenodo.8104434.

Full text
Abstract:
Purpose: In this article, we systematically explore the kinds of brain signals used for Brain-Computer Interface (BCI) and the related ideas behind deep learning analysis of brain signals. We review recent machine and deep learning approaches for the detection of two brain diseases, Alzheimer's disease (AD) and brain tumour. In addition, a quick outline of the various marker extraction techniques used to characterise brain diseases is provided. The project work is an automated tool for tumour classification based on magnetic resonance imaging data, built from various convolutional neural network (CNN) models with ResNet Squeeze. Objectives: This paper analyses brain disease classification and prediction using deep learning concepts. Deep learning is a branch of machine learning whose networks are capable of unsupervised learning from data that is unstructured or unlabelled. Also called deep neural learning, it is a function of AI that mimics how the human brain works in processing data, and is used in object detection, speech recognition, language translation, and decision making. Methodology: To test the result by measuring the semantics of the input sentence, embedded vectors with the same value are created; in this case, a sentence with a different meaning is used. Since it is difficult to collect a large amount of labelled data, the signal is simulated in different sentences. Deeper layers are trained for more complex capabilities using the shared output of preceding layers. We examine two kinds of deep learning methods: an LSTM model with RNN, and CNN results. A CNN is a multi-layer feed-forward neural network whose weights are updated by the backpropagation-of-error procedure, using the TF-IDF of term t in document d. Unlike traditional summary models, the feature engineering relies on knowledge of the required data domain. In addition, this framework is associated with synthetic abbreviations, which are then used to remove the impact of manual feature engineering and data labelling. Results: We use this selection of 257 features as vector input to the classification algorithms. The network is a combination of an input layer, convolution layers, rectified linear unit (ReLU) layers, pooling layers, and a fully connected layer. A recurrent neural network (RNN) is a form of neural network that defines connections among loop units, creating an internal network state. Feature selection is a widely used approach that improves the overall performance of classifiers; here, we compare the results of conventional classifiers with correlation-based feature selection. Originality: Analysis of brain diseases through computer classification and prediction using a deep CNN with ResNet Squeeze. Type of Paper: Conceptual research paper.
APA, Harvard, Vancouver, ISO, and other styles
9

Pravallika, V., V. Uday Kiran, B. Rahul, N. Neelima, G. Rishi Patnaik, and Sreejyothshna Ankam. "Deep Learning-Based Image Captioning: A Hybrid CNN-LSTM Approach." International Journal of Research Publication and Reviews 6, no. 4 (2025): 2459–63. https://doi.org/10.55248/gengpi.6.0425.1392.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Gupta, Jaya, Sunil Pathak, and Gireesh Kumar. "Deep Learning (CNN) and Transfer Learning: A Review." Journal of Physics: Conference Series 2273, no. 1 (2022): 012029. http://dx.doi.org/10.1088/1742-6596/2273/1/012029.

Full text
Abstract:
Deep learning is a machine learning area that has recently been used in a variety of industries. Unsupervised, semi-supervised, and supervised learning are only a few of the strategies that have been developed to accommodate different types of learning. A number of experiments showed that deep learning systems fared better than traditional ones when it came to image processing, computer vision, and pattern recognition. Several real-world applications and hierarchical systems have utilised transfer learning and deep learning algorithms for pattern recognition and classification tasks. Real-world machine learning settings, on the other hand, often do not support this assumption, since training data can be difficult or expensive to get, and there is a constant need to generate high-performance learners who can work with data from a variety of sources. The objective of this paper is to use deep learning to uncover higher-level representational features, to clearly explain transfer learning, to present current solutions, and to evaluate applications in diverse areas of transfer learning as well as deep learning.
APA, Harvard, Vancouver, ISO, and other styles
More sources

Dissertations / Theses on the topic "Deep learning CNN"

1

Samal, Kruttidipta. "FPGA acceleration of CNN training." Thesis, Georgia Institute of Technology, 2015. http://hdl.handle.net/1853/54467.

Full text
Abstract:
This thesis presents the results of an architectural study on the design of FPGA-based architectures for convolutional neural networks (CNNs). We have analyzed the memory access patterns of a convolutional neural network (one of the biggest networks in the family of deep learning algorithms) by creating a trace of a well-known CNN architecture and by developing a trace-driven DRAM simulator. The simulator uses the traces to analyze the effect that different storage patterns, and the mismatch in speed between memory and processing elements, can have on the CNN system. This insight is then used to create an initial design for a layer architecture for the CNN using an FPGA platform. The FPGA is designed to have multiple parallel-executing units. We design a data layout for the on-chip memory of the FPGA to increase parallelism in the design, since the number of these parallel units (and hence the parallelism) depends on the memory layout of the input and output, particularly on whether parallel read and write accesses can be scheduled or not. The on-chip memory layout minimizes access contention during the operation of parallel units. The result is an SoC (System on Chip) that acts as an accelerator and can have more parallel units than previous work. The improvement in the design was also observed by comparing post-synthesis loop latency tables between our design and a single-unit design. This initial design can help in designing FPGAs targeted for deep learning algorithms that can compete with GPUs in terms of performance.
APA, Harvard, Vancouver, ISO, and other styles
2

Mukhtar, Hind. "Machine Learning Enabled-Localization in 5G and LTE Using Image Classification and Deep Learning." Thesis, Université d'Ottawa / University of Ottawa, 2021. http://hdl.handle.net/10393/42449.

Full text
Abstract:
Demand for localization has been growing due to the increase in location-based services and high-bandwidth applications requiring precise localization of users to improve resource management and beamforming. Outdoor localization has traditionally been done through the Global Positioning System (GPS); however, its performance degrades in urban settings due to obstruction and multi-path effects, creating the need for better localization techniques. This thesis proposes a technique using a cascaded approach composed of image classification and deep learning, using LIDAR or satellite images and Channel State Information (CSI) data from base stations to predict the location of moving vehicles and users outdoors. The algorithm's performance is assessed using 3 different datasets. The first two use simulated data in the millimeter wave (mmWave) band and LIDAR images collected from the neighbourhood of Rosslyn in Arlington, Virginia. The results show an improvement in localization accuracy as a result of the hierarchical architecture, with a Mean Absolute Error (MAE) of 6.55 m for the proposed technique in comparison to an MAE of 9.82 m using one Convolutional Neural Network (CNN). The third dataset uses measurements from an LTE mobile communication system along with satellite images, collected at the University of Denmark. The results achieve an MAE of 9.45 m for the hierarchical approach in comparison to an MAE of 15.74 m for one Feed-Forward Neural Network (FFNN).
APA, Harvard, Vancouver, ISO, and other styles
3

Ramesh, Shreyas. "Deep Learning for Taxonomy Prediction." Thesis, Virginia Tech, 2019. http://hdl.handle.net/10919/89752.

Full text
Abstract:
The last decade has seen great advances in Next-Generation Sequencing technologies, and, as a result, there has been a rise in the number of genomes sequenced each year. In 2017, there were as many as 10,000 new organisms sequenced and added to the RefSeq database. Taxonomy prediction is a science involving the hierarchical classification of DNA fragments up to the rank of species. Given species diversity on Earth, taxonomy prediction becomes challenging with (i) an increasing number of species (labels) to classify and (ii) decreasing input (DNA) size; three major difficulties are large dataset sizes (on the order of 10^9 sequences), large label spaces (on the order of 10^3 labels), and low-resolution inputs (100 base pairs or less). In this research, we introduce Predicting Linked Organisms, or Plinko for short. Plinko is a fully functioning, state-of-the-art predictive system that accurately captures DNA-taxonomy relationships where other state-of-the-art algorithms falter. Plinko leverages multi-view convolutional neural networks and the pre-defined taxonomy tree structure to improve multi-level taxonomy prediction for hard-to-classify sequences, with each network taking advantage of different word usage patterns corresponding to different levels of evolutionary divergence. Plinko has the advantages of a relatively low storage footprint and GPGPU-parallel training and inference, making the solution portable and scalable with anticipated genome database growth. To the best of our knowledge, Plinko is the first to use multi-view convolutional neural networks as the core algorithm in a compositional, alignment-free approach to taxonomy prediction.
APA, Harvard, Vancouver, ISO, and other styles
4

Chen, Tairui. "Going Deeper with Convolutional Neural Network for Intelligent Transportation." Digital WPI, 2016. https://digitalcommons.wpi.edu/etd-theses/144.

Full text
Abstract:
Over the last several decades, computer vision researchers have been devoted to finding good features to solve different tasks: object recognition, object detection, object segmentation, activity recognition, and so forth. Ideal features transform raw pixel intensity values into a representation in which these computer vision problems are easier to solve. Recently, deep features from convolutional neural networks (CNNs) have attracted many researchers to solve many problems in computer vision. In the supervised setting, these hierarchies are trained to solve specific problems by minimizing an objective function for different tasks. More recently, features learned from large-scale image datasets have proved to be very effective and generic for many computer vision tasks; features learned for a recognition task can be used for an object detection task. This work aims to uncover the principles that lead to these generic feature representations through transfer learning, which does not need to train on the dataset again but instead transfers the rich features a CNN learned from the ImageNet dataset. We begin by summarizing some related prior work, particularly papers on object recognition, object detection, and segmentation. We then introduce deep features to computer vision tasks in intelligent transportation systems. First, we apply deep features to an object detection task, especially vehicle detection. Second, to make full use of objectness proposals, we apply a proposal generator to the road marking detection and recognition task. Third, to fully understand the transportation situation, we introduce deep features into road scene understanding. We evaluate each task on different public datasets and show that our framework is robust.
APA, Harvard, Vancouver, ISO, and other styles
5

Meng, Zhaoxin. "A deep learning model for scene recognition." Thesis, Mittuniversitetet, Institutionen för informationssystem och –teknologi, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-36491.

Full text
Abstract:
Scene recognition is a hot research topic in the field of image recognition. It is necessary to focus on research in scene recognition because it is helpful to the scene understanding topic and can provide important contextual information for object recognition. The traditional approaches for scene recognition still have a lot of shortcomings. In recent years, the deep learning method, which uses convolutional neural networks, has obtained state-of-the-art results in this area. This thesis constructs a model based on multi-layer feature extraction from a CNN and transfer learning for scene recognition tasks. Because scene images often contain multiple objects, there may be more useful local semantic information in the convolutional layers of the network, which may be lost in the fully connected layers. Therefore, this thesis improves the traditional CNN architecture, adopts an existing improvement which enhances the convolutional layer information, and extracts it using Fisher Vectors. Then this thesis introduces the idea of transfer learning and tries to combine the knowledge of two different fields, scene and object. We combine the output of these two networks to achieve better results. Finally, this thesis implements the method using Python and PyTorch. The method is applied to two famous scene datasets: UIUC-Sports and Scene-15. Compared with the traditional CNN AlexNet architecture, we improve the result from 81% to 93% on UIUC-Sports, and from 79% to 91% on Scene-15. This shows that our method has good performance on scene recognition tasks.
APA, Harvard, Vancouver, ISO, and other styles
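A minimal sketch of the multi-layer feature-extraction idea above, using a forward hook in PyTorch to pull feature maps from an intermediate convolutional layer of a pretrained AlexNet; the Fisher Vector encoding and the scene/object network combination from the thesis are omitted.

```python
# Minimal sketch: grab intermediate conv feature maps as local descriptors.
import torch
import torchvision.models as tvm

alexnet = tvm.alexnet(weights=tvm.AlexNet_Weights.IMAGENET1K_V1).eval()
conv_maps = {}

def save_conv5(module, inputs, output):
    conv_maps["conv5"] = output          # (N, 256, 13, 13) feature maps

# features[10] is the last conv layer in torchvision's AlexNet definition.
alexnet.features[10].register_forward_hook(save_conv5)

with torch.no_grad():
    alexnet(torch.randn(1, 3, 224, 224))  # placeholder scene image

# Flatten each spatial position into a 256-d local descriptor,
# ready for Fisher Vector (or other) encoding.
fmap = conv_maps["conv5"].squeeze(0)                 # (256, 13, 13)
local_descriptors = fmap.flatten(1).T                # (169, 256)
print(local_descriptors.shape)
```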
6

Wasnik, Sachinkumar. "Fatigue Detection in EEG Time Series Data Using Deep Learning." Thesis, The University of Sydney, 2021. https://hdl.handle.net/2123/24917.

Full text
Abstract:
Fatigue has widespread effects on the brain's executive function, reaction time and information processing, causing loss of alertness that affects safety and productivity. There are various subjective and behavioural methods to measure fatigue. However, none of them is precise. The work in this thesis employs physiological measures such as heart rate, blood pressure, and breathing that are objective and quantitative indicators. These are thought to provide reliable measures of fatigue and may be easier to deploy in real-world scenarios, compared to the subjective or behavioural methods. In particular, electroencephalogram (EEG) signals have the advantage of being able to measure fatigue in the early stages, and therefore have great potential in the design of early warning systems to detect fatigue. Traditional computational models trained using EEG data show potential improvement in detecting fatigue but require a significant number of electrodes, making deployment in a real-world fatigue detection scenario difficult (e.g., on a driver who is on the road). This project aims to develop computational models to perform fatigue detection using sparse EEG data from only two electrodes. The resulting algorithms could potentially be deployed in pragmatic situations (e.g., embedded in a wearable device), making the contribution of this study useful for real-world scenarios. In machine learning approaches, the area of deep learning has shown excellent performance in tackling problems of image classification and speech recognition. This project introduces the application of deep learning methods in early warning systems for fatigue detection. EEG data of patients suffering from mild to severe Obstructive Sleep Apnoea (OSA) are used in this study. These patients performed a driving simulation test under varying conditions of sleep deprivation, with their wake EEG and driving performance variables continuously monitored. The data collected during a driving simulation test of 57 sleep-deprived subjects were used for training and evaluating the computational models. The principal machine learning task was to employ the EEG data as input and predict the probability of a crash (crash / no crash) before the actual crash event. After testing a preliminary EEG-K-Nearest Neighbour (EEG-KNN) model as proof of concept for data cleaning and pre-processing, two deep learning models were introduced, EEG-Deep Neural Network (EEG-DNN) and EEG Convolutional Neural Network (EEG-CNN). The Least Absolute Shrinkage and Selection Operator (LASSO) was applied as a feature selection method in EEG-KNN to overcome the curse of dimensionality and identify promising features. EEG-KNN was used to predict a crash in the short term (i.e., 5-second pre-duration), while EEG-DNN and EEG-CNN were used to predict a crash in the longer term (i.e., 6-minute and 3-minute pre-durations, respectively). Techniques such as dropout regularisation and early stopping were used to improve the performance of EEG-DNN and EEG-CNN on the test data. The Receiver Operating Characteristic (ROC) curve is widely used to assess the performance of a classifier and compare the number of true positives (actual crash events) to the number of false positives. The metric considered for the evaluation of computational models on test data is the area under the ROC curve (AUROC); a larger value indicates better classification performance. The EEG-KNN in this study achieved an AUROC of 0.77 in short-term fatigue detection. The deep learning model EEG-DNN significantly improved the performance of crash prediction and achieved a sensitivity level of 87%. Further, the EEG-CNN was used to reduce the number of electrodes required to detect fatigue and achieved an AUROC of 0.95. This project has developed a data framework and computational models to detect fatigue ahead of crash events, making intervention possible in real-world scenarios. The proposed computational models utilised a lower number of electrodes and worked with sparse EEG data to detect fatigue, thus enabling a practical, effective and easy-to-use solution to be devised.
APA, Harvard, Vancouver, ISO, and other styles
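A small sketch of the evaluation metric used in the thesis above: the area under the ROC curve for a binary crash / no-crash classifier, computed with scikit-learn on placeholder predictions rather than the thesis data.

```python
# Minimal sketch: AUROC for a binary crash / no-crash classifier.
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(42)
y_true = rng.integers(0, 2, size=500)                 # crash (1) / no crash (0)
y_score = np.clip(y_true * 0.6 + rng.normal(0.2, 0.25, 500), 0, 1)  # predicted probabilities

auroc = roc_auc_score(y_true, y_score)                # area under the ROC curve
fpr, tpr, thresholds = roc_curve(y_true, y_score)     # points on the ROC curve
print(f"AUROC = {auroc:.3f}")                         # larger is better
```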
7

Moniruzzaman, Md. "Seagrass detection using deep learning." Thesis, Edith Cowan University, Research Online, Perth, Western Australia, 2019. https://ro.ecu.edu.au/theses/2261.

Full text
Abstract:
Seagrasses play an essential role in the marine ecosystem by providing food, nutrients, and habitat to marine life. They work as marine bioindicators by reflecting the health condition of aquatic environments. Seagrasses also act as a significant atmospheric carbon sink that mitigates global warming and rapid climate change. Considering their importance, it is critical to monitor seagrasses across coastlines, which includes detection, mapping, percentage-cover calculation, and health estimation. Remote sensing-based aerial and spectral images, acoustic images, and underwater two-dimensional and three-dimensional digital images have so far been used to monitor seagrasses. For close monitoring, different machine learning classifiers such as the support vector machine (SVM), the maximum likelihood classifier (MLC), the logistic model tree (LMT) and the multilayer perceptron (MP) have been used for seagrass classification from two-dimensional digital images. All of these approaches used handcrafted feature extraction methods, which are semi-automatic. In recent years, deep learning-based automatic object detection and image classification have achieved tremendous success, especially in the computer vision area. However, to the best of our knowledge, no attempts have been made to use deep learning for seagrass detection from underwater digital images; possible reasons include the unavailability of enough image data to train a deep neural network. In this work, we have proposed a Faster R-CNN architecture-based deep learning detector that automatically detects Halophila ovalis (a common seagrass species) from underwater digital images. To train the object detector, we collected a total of 2,699 underwater images, both from real-life shorelines and from an experimental facility. The selected seagrass (Halophila ovalis) was labelled using the LabelImg software commonly used by the research community, and an expert in seagrass reviewed the extracted labels. We used VGG16, ResNet50, Inception V2, and NASNet in the Faster R-CNN object detection framework, originally trained on the COCO dataset, and applied the transfer learning technique to re-train them using our collected dataset to detect the seagrasses. The Inception V2-based Faster R-CNN achieved the highest mean average precision (mAP) of 0.261. The detection models proposed in this dissertation can be transfer-learned with labelled two-dimensional digital images of other seagrass species and can be used to detect them automatically from underwater seabed images.
APA, Harvard, Vancouver, ISO, and other styles
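A minimal sketch of adapting a COCO-pretrained Faster R-CNN to a single seagrass class, using torchvision rather than the TensorFlow Object Detection API models named in the abstract; dataset loading and the training loop are omitted.

```python
# Minimal sketch: transfer learning a Faster R-CNN detector to one class.
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

NUM_CLASSES = 2  # background + Halophila ovalis

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(
    weights=torchvision.models.detection.FasterRCNN_ResNet50_FPN_Weights.COCO_V1)

# Replace the 91-class COCO box predictor with a fresh 2-class head, then
# fine-tune the model on the labelled underwater images (LabelImg boxes).
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, NUM_CLASSES)
```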
8

Alammari, Ali. "Traffic Forecasting Applications Using Crowdsourced Traffic Reports and Deep Learning." Thesis, University of North Texas, 2020. https://digital.library.unt.edu/ark:/67531/metadc1703305/.

Full text
Abstract:
Intelligent transportation systems (ITS) are essential tools for traffic planning, analysis, and forecasting that can utilize the huge amount of traffic data available nowadays. In this work, we aggregated detailed traffic flow sensor data, Waze reports, OpenStreetMap (OSM) features, and weather data from the California Bay Area for 6 months. Using that data, we studied three novel ITS applications using convolutional neural networks (CNNs) and recurrent neural networks (RNNs). The first experiment is an analysis of the relation between roadway shapes and accident occurrence, where results show that the speed limit and number of lanes are significant predictors of major accidents on highways. The second experiment presents a novel method for forecasting congestion severity using crowdsourced data only (Waze, OSM, and weather), without the need for traffic sensor data. The third experiment studies the improvement of traffic flow forecasting using accidents, number of lanes, weather, and time-related features, where results show significant performance improvements when the additional features were used.
APA, Harvard, Vancouver, ISO, and other styles
9

Ridolfi, Federico. "Applicazioni di deep learning per CAD mammografico." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2016. http://amslaurea.unibo.it/12264/.

Full text
Abstract:
Breast cancer is today one of the leading causes of female mortality and, although medicine currently offers a good chance of recovery, the most powerful weapon against this particular neoplasm is prevention, carried out in particular through mammographic screening programmes on patients in at-risk age groups. From the need to maximise the effectiveness of these practices arise CAD (Computer Assisted Diagnosis) systems, software packages able to support the specialist in analysing the medical report, helping them identify and localise the pathology. Modern CAD systems are based on the most advanced machine learning techniques. This work discusses the new CaffeNet_MAMMO network, a prototype CAD system based on convolutional neural networks (CNNs), and its training on the MiniMIAS database. One of the distinctive features of the method illustrated here is its ability to provide results comparable to other CAD methods based on single-stage CNN classifiers (while remaining below commercial products), even though training was carried out on a small database, normally insufficient to give acceptable results with such an architecture. The proposed method is extremely promising, leading to a classifier with an AUC of 0.68 ± 0.08 and specificity and sensitivity of up to 70%, which places it in the upper range of classifiers of the same family. A draft pathology-localisation algorithm is also proposed which, although far from reference standards, identifies the pathologies present in the proposed images with sufficient precision. However, the localisation algorithm currently has insufficient specificity for field application, though it remains an interesting starting point for future work.
APA, Harvard, Vancouver, ISO, and other styles
10

Rintala, Jonathan. "Speech Emotion Recognition from Raw Audio using Deep Learning." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-278858.

Full text
Abstract:
Traditionally, in Speech Emotion Recognition, models require a large number of manually engineered features and intermediate representations such as spectrograms for training. However, hand-engineering such features often requires both expert domain knowledge and resources. Recently, with the emerging paradigm of deep learning, end-to-end models that extract features themselves and learn from the raw speech signal directly have been explored. A previous approach has been to combine multiple parallel CNNs with different filter lengths to extract multiple temporal features from the audio signal, and then feed the resulting sequence to a recurrent block. Other recent work also presents high accuracies when utilizing local feature learning blocks (LFLBs) for reducing the dimensionality of a raw audio signal, extracting the most important information. Thus, this study combines the idea of LFLBs for feature extraction with a block of parallel CNNs with different filter lengths for capturing multi-temporal features; this is finally fed into an LSTM layer for global contextual feature learning. To the best of our knowledge, such a combined architecture has not yet been properly investigated. Further, this study investigates different configurations of such an architecture. The proposed model is then trained and evaluated on the well-known speech databases EmoDB and RAVDESS, both in a speaker-dependent and speaker-independent manner. The results indicate that the proposed architecture can produce results comparable with the state of the art, despite excluding data augmentation and advanced pre-processing. It was reported that 3 parallel CNN pipes yielded the highest accuracy, together with a series of modified LFLBs that utilize average-pooling and ReLU activation. This shows the power of leaving the feature learning up to the network and opens up interesting future research on time-complexity and the trade-off between introducing complexity in the pre-processing or in the model architecture itself.
APA, Harvard, Vancouver, ISO, and other styles
More sources

Books on the topic "Deep learning CNN"

1

Gad, Ahmed Fawzy. Practical Computer Vision Applications Using Deep Learning with CNNs. Apress, 2018. http://dx.doi.org/10.1007/978-1-4842-4167-7.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Neural Networks with R: Smart models using CNN, RNN, deep learning, and artificial intelligence principles. Packt Publishing, 2017.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
3

Calix, Ricardo. Deep Learning Algorithms: Transformers, GANs, Encoders, RNNs, CNNs, and More. Independently Published, 2020.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
4

Calix, Ricardo. Deep Learning Algorithms: Transformers, GANs, Encoders, CNNs, RNNs, and More. Independently Published, 2020.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
5

Gollapudi, Sunila. Learn Computer Vision Using OpenCV: With Deep Learning CNNs and RNNs. Apress, 2019.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
6

Thomas, Sherin, and Sudhanshu Passi. PyTorch Deep Learning Hands-On: Build CNNs, RNNs, GANs, reinforcement learning, and more, quickly and easily. Packt Publishing, 2019.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
7

Cluster, Konnor. Artificial Intelligence for Business: How Your Company Can Make More Profit with Machine Learning, Data Science, Big Data, and Deep Learning. Independently Published, 2019.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
8

Eldar, Yonina C., Andrea Goldsmith, Deniz Gündüz, and H. Vincent Poor, eds. Machine Learning and Wireless Communications. Cambridge University Press, 2022. http://dx.doi.org/10.1017/9781108966559.

Full text
Abstract:
How can machine learning help the design of future communication networks – and how can future networks meet the demands of emerging machine learning applications? Discover the interactions between two of the most transformative and impactful technologies of our age in this comprehensive book. First, learn how modern machine learning techniques, such as deep neural networks, can transform how we design and optimize future communication networks. Accessible introductions to concepts and tools are accompanied by numerous real-world examples, showing you how these techniques can be used to tackle longstanding problems. Next, explore the design of wireless networks as platforms for machine learning applications – an overview of modern machine learning techniques and communication protocols will help you to understand the challenges, while new methods and design approaches will be presented to handle wireless channel impairments such as noise and interference, to meet the demands of emerging machine learning applications at the wireless edge.
APA, Harvard, Vancouver, ISO, and other styles
9

Koss, Lisa J. Leading for Learning: How Managers Can Get Business Results Through Developmental Coaching and Inspire Deep Employee Commitment. Taylor & Francis Group, 2020.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
10

Koss, Lisa J. Leading for Learning: How Managers Can Get Business Results Through Developmental Coaching and Inspire Deep Employee Commitment. Productivity Press, 2020.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
More sources

Book chapters on the topic "Deep learning CNN"

1

Vasudevan, Shriram K., Sini Raj Pulari, and Subashri Vasudevan. "CNN Architectures: An Evolution." In Deep Learning. Chapman and Hall/CRC, 2021. http://dx.doi.org/10.1201/9781003185635-6.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Manaswi, Navin Kumar. "CNN in TensorFlow." In Deep Learning with Applications Using Python. Apress, 2018. http://dx.doi.org/10.1007/978-1-4842-3516-4_7.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Manaswi, Navin Kumar. "CNN in Keras." In Deep Learning with Applications Using Python. Apress, 2018. http://dx.doi.org/10.1007/978-1-4842-3516-4_8.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Shaik, Farooq, Y. Rajesh, Noman Aasif Gudur, and Jatindra Kumar Dash. "Deep CNN in Healthcare." In Deep Learning in Biomedical Signal and Medical Imaging. CRC Press, 2024. http://dx.doi.org/10.1201/9781032635149-14.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Gharehbaghi, Arash. "Convolutional Neural Networks (CNN)." In Deep Learning in Time Series Analysis. CRC Press, 2023. http://dx.doi.org/10.1201/9780429321252-15.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Xiao, Cao, and Jimeng Sun. "Convolutional Neural Networks (CNN)." In Introduction to Deep Learning for Healthcare. Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-82184-5_6.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Abdelouahab, Kamel, Maxime Pelcat, and François Berry. "Accelerating the CNN Inference on FPGAs." In Deep Learning in Computer Vision. CRC Press, 2020. http://dx.doi.org/10.1201/9781351003827-1.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Ros, Frederic, and Rabia Riad. "Deep clustering techniques based on CNN." In Unsupervised and Semi-Supervised Learning. Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-48743-9_10.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Bisong, Ekaba. "Convolutional Neural Networks (CNN)." In Building Machine Learning and Deep Learning Models on Google Cloud Platform. Apress, 2019. http://dx.doi.org/10.1007/978-1-4842-4470-8_35.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Zhu, Chenchen, Yutong Zheng, Khoa Luu, and Marios Savvides. "CMS-RCNN: Contextual Multi-Scale Region-Based CNN for Unconstrained Face Detection." In Deep Learning for Biometrics. Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-61657-5_3.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Deep learning CNN"

1

Benedict, J. N., J. Praveen, G. Santhosh Kumar, and S. Senthil Pandi. "Android Threat Detection Using Deep Learning (CNN)." In 2024 International Conference on Computational Intelligence for Green and Sustainable Technologies (ICCIGST). IEEE, 2024. http://dx.doi.org/10.1109/iccigst60741.2024.10717612.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Nwaneri, Ifeanyi, and Daniel Uyeh. "AgriMoistNet: a low-cost CNN-based system for moisture content prediction in livestock feed." In Real-Time Image Processing and Deep Learning 2025, edited by Nasser Kehtarnavaz and Mukul V. Shirvaikar. SPIE, 2025. https://doi.org/10.1117/12.3053526.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Chauhan, Shanvi. "Tuberculosis Diagnosis Using CNN: A Deep Learning Approach." In 2024 Second International Conference on Intelligent Cyber Physical Systems and Internet of Things (ICoICI). IEEE, 2024. http://dx.doi.org/10.1109/icoici62503.2024.10695977.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

N, Divya, Dhilip P, Manish S. A, and Abilash I. "Deep Learning Based Lung Cancer Prediction Using CNN." In 2024 International Conference on Signal Processing, Computation, Electronics, Power and Telecommunication (IConSCEPT). IEEE, 2024. http://dx.doi.org/10.1109/iconscept61884.2024.10627846.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Reddy, K. V. Narasimha, Yenuganti Narendra, Medam Adi Nagamanendra Reddy, Avula Ramu, Dodda Venkata Reddy, and Sireesha Moturi. "Automated Traffic Sign Recognition via CNN Deep Learning." In 2025 IEEE International Conference on Interdisciplinary Approaches in Technology and Management for Social Innovation (IATMSI). IEEE, 2025. https://doi.org/10.1109/iatmsi64286.2025.10985223.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Tamanna, Sheeban E., Mohammed Ezhan, R. Mahesh, et al. "Musical Instrument Classification Using Deep Learning CNN Models." In 2024 International Conference on Integrated Intelligence and Communication Systems (ICIICS). IEEE, 2024. https://doi.org/10.1109/iciics63763.2024.10859695.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Esanmurodova, N., Hansi Negi, Meenakshi Garg, Himanshu Sharma, Myasar Mundher Adnan, and Varsha Mittal. "Deep Learning R-CNN for Throat Cancer Identification." In 2024 International Conference on Communication, Computing and Energy Efficient Technologies (I3CEET). IEEE, 2024. https://doi.org/10.1109/i3ceet61722.2024.10993677.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Vishnuvarthan, K., R. Renugadevi, and S. Santhi. "A CNN and HBA Based Approach for Grape Disease Identification." In 2025 4th International Conference on Sentiment Analysis and Deep Learning (ICSADL). IEEE, 2025. https://doi.org/10.1109/icsadl65848.2025.10933347.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Guo, Wenjing, Yuan Jin, and Xiaodong Cheng. "Research on crop remote sensing image segmentation method integrating CNN and transformer." In International Conference on Cloud Computing, Performance Computing, and Deep Learning, edited by Wanyang Dai and Xiangjie Kong. SPIE, 2024. http://dx.doi.org/10.1117/12.3050729.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Abd-Alhalem, Samia M., Ali E. Takieldeen, Hesham A. Ali, and Hanaa Salem Marie. "Deep Learning Approach to Taxonomic Classification with CNN-ELM." In 2024 International Telecommunications Conference (ITC-Egypt). IEEE, 2024. http://dx.doi.org/10.1109/itc-egypt61547.2024.10620514.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Deep learning CNN"

1

Panta, Manisha, Md Tamjidul Hoque, Kendall Niles, Joe Tom, Mahdi Abdelguerfi, and Maik Flanagin. Deep learning approach for accurate segmentation of sand boils in levee systems. Engineer Research and Development Center (U.S.), 2024. http://dx.doi.org/10.21079/11681/49460.

Full text
Abstract:
Sand boils can contribute to the liquefaction of a portion of the levee, leading to levee failure. Accurately detecting and segmenting sand boils is crucial for effectively monitoring and maintaining levee systems. This paper presents SandBoilNet, a fully convolutional neural network with skip connections designed for accurate pixel-level classification or semantic segmentation of sand boils from images in levee systems. In this study, we explore the use of transfer learning for fast training and detecting sand boils through semantic segmentation. By utilizing a pretrained CNN model with ResNet50V2 architecture, our algorithm effectively leverages learned features for precise detection. We hypothesize that controlled feature extraction using a deeper pretrained CNN model can selectively generate the most relevant feature maps adapting to the domain, thereby improving performance. Experimental results demonstrate that SandBoilNet outperforms state-of-the-art semantic segmentation methods in accurately detecting sand boils, achieving a Balanced Accuracy (BA) of 85.52%, Macro F1-score (MaF1) of 73.12%, and an Intersection over Union (IoU) of 57.43% specifically for sand boils. This proposed approach represents a novel and effective solution for accurately detecting and segmenting sand boils from levee images toward automating the monitoring and maintenance of levee infrastructure.
APA, Harvard, Vancouver, ISO, and other styles
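A minimal sketch of the transfer-learning segmentation setup described above: a frozen, ImageNet-pretrained ResNet50V2 encoder with a small upsampling decoder producing a binary sand-boil mask. SandBoilNet's skip connections and attention-style blocks are omitted, so this illustrates the general pattern, not the published architecture.

```python
# Minimal sketch: pretrained ResNet50V2 encoder + tiny decoder for
# pixel-level (semantic) segmentation of a single foreground class.
import tensorflow as tf
from tensorflow.keras import layers, models

base = tf.keras.applications.ResNet50V2(
    include_top=False, weights="imagenet", input_shape=(256, 256, 3))
base.trainable = False                        # controlled feature extraction

x = base.output                               # (8, 8, 2048) for 256x256 input
x = layers.Conv2D(256, 3, padding="same", activation="relu")(x)
x = layers.UpSampling2D(32, interpolation="bilinear")(x)   # back to 256x256
mask = layers.Conv2D(1, 1, activation="sigmoid")(x)        # sand-boil probability map

model = models.Model(base.input, mask)
model.compile(optimizer="adam", loss="binary_crossentropy")
model.summary()
```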
2

Ferdaus, Md Meftahul, Mahdi Abdelguerfi, Elias Ioup, et al. KANICE: Kolmogorov-Arnold networks with interactive convolutional elements. Engineer Research and Development Center (U.S.), 2025. https://doi.org/10.21079/11681/49791.

Full text
Abstract:
We introduce KANICE, a novel neural architecture that combines Convolutional Neural Networks (CNNs) with Kolmogorov-Arnold Network (KAN) principles. KANICE integrates Interactive Convolutional Blocks (ICBs) and KAN linear layers into a CNN framework. This leverages KANs' universal approximation capabilities and ICBs' adaptive feature learning. KANICE captures complex, non-linear data relationships while enabling dynamic, context-dependent feature extraction based on the Kolmogorov-Arnold representation theorem. We evaluated KANICE on four datasets: MNIST, Fashion-MNIST, EMNIST, and SVHN, comparing it against standard CNNs, CNN-KAN hybrids, and ICB variants. KANICE consistently outperformed baseline models, achieving 99.35% accuracy on MNIST and 90.05% on the SVHN dataset. Furthermore, we introduce KANICE-mini, a compact variant designed for efficiency. A comprehensive ablation study demonstrates that KANICE-mini achieves comparable performance to KANICE with significantly fewer parameters. KANICE-mini reached 90.00% accuracy on SVHN with 2,337,828 parameters, compared to KANICE's 25,432,000. This study highlights the potential of KAN-based architectures in balancing performance and computational efficiency in image classification tasks. Our work contributes to research in adaptive neural networks, integrates mathematical theorems into deep learning architectures, and explores the trade-offs between model complexity and performance, advancing computer vision and pattern recognition. The source code for this paper is publicly accessible through our GitHub repository (https://github.com/mferdaus/kanice).
APA, Harvard, Vancouver, ISO, and other styles
3

Jiménez Láinez, Andrés, and María Dolores Pérez Godoy. Experimentación con modelos de Deep Learning para la detección de objetos. Fundación Avanza, 2023. http://dx.doi.org/10.60096/fundacionavanza/2032022.

Full text
Abstract:
We experiment with a deep learning model to detect different types of flowers in an image, analysing various parameters to verify that it works correctly and to explain the possible causes of the observed results.
APA, Harvard, Vancouver, ISO, and other styles
4

Huang, Lei, Meng Song, Hui Shen, et al. Deep learning methods for omics data imputation. Engineer Research and Development Center (U.S.), 2024. http://dx.doi.org/10.21079/11681/48221.

Full text
Abstract:
One common problem in omics data analysis is missing values, which can arise due to various reasons, such as poor tissue quality and insufficient sample volumes. Instead of discarding missing values and related data, imputation approaches offer an alternative means of handling missing data. However, the imputation of missing omics data is a non-trivial task. Difficulties mainly come from high dimensionality, non-linear or nonmonotonic relationships within features, technical variations introduced by sampling methods, sample heterogeneity, and the non-random missingness mechanism. Several advanced imputation methods, including deep learning-based methods, have been proposed to address these challenges. Due to its capability of modeling complex patterns and relationships in large and high-dimensional datasets, many researchers have adopted deep learning models to impute missing omics data. This review provides a comprehensive overview of the currently available deep learning-based methods for omics imputation from the perspective of deep generative model architectures such as autoencoder, variational autoencoder, generative adversarial networks, and Transformer, with an emphasis on multi-omics data imputation. In addition, this review also discusses the opportunities that deep learning brings and the challenges that it might face in this field.
APA, Harvard, Vancouver, ISO, and other styles
5

Cerulli, Giovanni. Deep Learning and AI for Research in Python. Instats Inc., 2023. http://dx.doi.org/10.61700/g6nxp3uxsvu3l469.

Full text
Abstract:
This seminar is an introduction to Deep Learning and Artificial Intelligence methods for the social, economic, and health sciences using Python. After introducing the subject, the seminar will cover the following methods: (i) Feedforward Neural Networks (FNNs) (ii) Convolutional Neural Networks (CNNs); and (iii) Recursive Neural Networks (RNNs). The course will offer various instructional examples using real datasets in Python. An Instats certificate of completion is provided at the end of the seminar, and 2 ECTS equivalent points are offered.
APA, Harvard, Vancouver, ISO, and other styles
6

Ogunbire, Abimbola, Panick Kalambay, Hardik Gajera, and Srinivas Pulugurtha. Deep Learning, Machine Learning, or Statistical Models for Weather-related Crash Severity Prediction. Mineta Transportation Institute, 2023. http://dx.doi.org/10.31979/mti.2023.2320.

Full text
Abstract:
Nearly 5,000 people are killed and more than 418,000 are injured in weather-related traffic incidents each year. Assessments of the effectiveness of statistical models applied to crash severity prediction, compared to machine learning (ML) and deep learning (DL) techniques, help researchers and practitioners know which models are most effective under specific conditions. Given the class imbalance in crash data, the synthetic minority over-sampling technique for nominal data (SMOTE-N) was employed to generate synthetic samples for the minority class. The ordered logit model (OLM) and the ordered probit model (OPM) were evaluated as statistical models, while random forest (RF) and XGBoost were evaluated as ML models. For DL, multi-layer perceptron (MLP) and TabNet were evaluated. The performance of these models varied across severity levels, with property damage only (PDO) predictions performing the best and severe injury predictions performing the worst. The TabNet model performed best in predicting severe injury and PDO crashes, while RF was the most effective in predicting moderate injury crashes. However, all models struggled with severe injury classification, indicating the potential need for model refinement and exploration of other techniques. Hence, the choice of model depends on the specific application and the relative costs of false negatives and false positives. This conclusion underscores the need for further research in this area to improve the prediction accuracy of severe and moderate injury incidents, ultimately improving available data that can be used to increase road safety.
APA, Harvard, Vancouver, ISO, and other styles
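A minimal sketch of one branch of the comparison above: oversampling the minority severity classes with SMOTEN (the nominal-feature SMOTE variant from imbalanced-learn) and fitting a random forest. The features and severity labels are synthetic placeholders, not the study's crash records.

```python
# Minimal sketch: SMOTEN oversampling + random forest on imbalanced severity data.
import numpy as np
from imblearn.over_sampling import SMOTEN
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Nominal features, e.g. weather condition, road surface, light condition.
X = rng.integers(0, 5, size=(1000, 6))
# Imbalanced severity labels: 0 = PDO, 1 = moderate injury, 2 = severe injury.
y = rng.choice([0, 1, 2], size=1000, p=[0.80, 0.15, 0.05])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
X_bal, y_bal = SMOTEN(random_state=0).fit_resample(X_tr, y_tr)  # oversample minority classes

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_bal, y_bal)
print(clf.score(X_te, y_te))   # accuracy on the untouched (still imbalanced) test split
```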
7

Alhasson, Haifa F., and Shuaa S. Alharbi. New Trends in image-based Diabetic Foot Ulcer Diagnosis Using Machine Learning Approaches: A Systematic Review. INPLASY - International Platform of Registered Systematic Review and Meta-analysis Protocols, 2022. http://dx.doi.org/10.37766/inplasy2022.11.0128.

Full text
Abstract:
Review question / Objective: A significant amount of research has been conducted to detect and recognize diabetic foot ulcers (DFUs) using computer vision methods, but there are still a number of challenges. DFU detection frameworks based on machine learning/deep learning lack systematic reviews. With Machine Learning (ML) and Deep Learning (DL), you can improve care for individuals at risk of DFUs, identify and synthesize evidence about their use in interventional care and management of DFUs, and suggest future research directions. Information sources: A thorough search of electronic databases such as Science Direct, PubMed (MEDLINE), arXiv.org, MDPI, Nature, Google Scholar, Scopus and Wiley Online Library was conducted to identify and select the literature for this study (January 2010 to January 01, 2023). It was based on the most popular image-based diagnosis targets in DFU, such as segmentation, detection and classification. Various keywords were used during the identification process, including artificial intelligence in DFU, deep learning, machine learning, ANNs, CNNs, DFU detection, DFU segmentation, DFU classification, and computer-aided diagnosis.
APA, Harvard, Vancouver, ISO, and other styles
8

Panta, Manisha, Padam Thapa, Md Hoque, et al. Application of deep learning for segmenting seepages in levee systems. Engineer Research and Development Center (U.S.), 2024. http://dx.doi.org/10.21079/11681/49453.

Full text
Abstract:
Seepage is a typical hydraulic factor that can initiate the breaching process in a levee system. If not identified and treated on time, seepages can be a severe problem for levees, weakening the levee structure and eventually leading to collapse. Therefore, it is essential always to be vigilant with regular monitoring procedures to identify seepages throughout these levee systems and perform adequate repairs to limit potential threats from unforeseen levee failures. This paper introduces a fully convolutional neural network to identify and segment seepage from the image in levee systems. To the best of our knowledge, this is the first work in this domain. Applying deep learning techniques for semantic segmentation tasks in real-world scenarios has its own challenges, especially the difficulty for models to effectively learn from complex backgrounds while focusing on simpler objects of interest. This challenge is particularly evident in the task of detecting seepages in levee systems, where the fault is relatively simple compared to the complex and varied background. We addressed this problem by introducing negative images and a controlled transfer learning approach for semantic segmentation for accurate seepage segmentation in levee systems.
APA, Harvard, Vancouver, ISO, and other styles
9

Brunet, Luc. Formulate: a python library for formulation. GitHub, 2021. http://dx.doi.org/10.17601/rdmediation.2021.1.

Full text
Abstract:
Formulate is a library to build and manipulate formulations. It can be used for materials, cosmetics, or any activity involving the mixing of components. This version computes oxygen balance, eutectic points, and equilibrium temperature by deep learning. The purpose of this library is to provide a way to build deep learning datasets for materials and formulations.
APA, Harvard, Vancouver, ISO, and other styles
10

Buckland, Leonora, Deborah Gold, Lisa Hehenberger, and Laura Reijnders. Walking the tightrope: How foundations can find a balance between learning and accountability lenses. Esade Center for Social Impact, 2023. http://dx.doi.org/10.56269/lb20230307.

Full text
Abstract:
Over the last two years, the Esade Center for Social Impact, which is part of Esade Business School in Spain, and its partner BBK, a banking foundation in Bilbao, have been at the center of a web of committed European foundation professionals sharing their thoughts, learnings, practices, frustrations, and eureka moments related to impact measurement and management (IMM). This paper is not a practical guide to Impact Measurement and Management (IMM) – we believe that there are other publications which may help with implementation. Rather, it recounts how foundations at different stages of development and with a range of profiles (corporate, family, operating, and grant-making) are going about IMM on a day-to-day basis and grappling with some of the challenges and philosophical issues arising. For the impact geeks, this will no doubt be interesting grist for the mill. For those not so deep into this space, it might provide an overview of where foundations are in Europe and how they are focusing their efforts on IMM. Our aim is that by synthesizing and sharing what we have heard in this safe space we can inform and inspire others.
APA, Harvard, Vancouver, ISO, and other styles