Journal articles on the topic 'Google Colab'

Consult the top 50 journal articles for your research on the topic 'Google Colab.'

1

Ray, Sujan, Khaldoon Alshouiliy, and Dharma P. Agrawal. "Dimensionality Reduction for Human Activity Recognition Using Google Colab." Information 12, no. 1 (December 23, 2020): 6. http://dx.doi.org/10.3390/info12010006.

Abstract:
Human activity recognition (HAR) is a classification task that involves predicting a person's movement from sensor data. Smartphones, which have grown enormously in capability over the last 10–15 years, can serve as a mobile sensing medium for recognizing human activity. Deep learning methods are now in great demand for this task, and a convolutional neural network (CNN) is a natural choice. The HAR Using Smartphones dataset, which has separate training and testing parts, has been widely used by researchers to develop machine learning models for activity recognition. In this paper, we propose a hybrid approach that analyzes and recognizes human activity on this dataset using deep learning on a cloud-based platform. We applied principal component analysis to extract the most important features and ran experiments with all features as well as with the top 48, 92, 138, and 164 features, all on Google Colab. For evaluation, the dataset was split into training, validation, and testing sets in two ratios: 70–10–20% and 80–10–10%. The performance of the CNN (70% training, 10% validation, 20% testing) with 48 features served as the benchmark for this work. We achieved a maximum accuracy of 98.70% with the CNN, and 96.36% accuracy with the top 92 features of the dataset. The experimental results show that proper feature selection improves not only the accuracy but also the training and testing time of the model.
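
The PCA-based feature selection step described in this abstract is easy to sketch with scikit-learn. The snippet below is a minimal illustration, not the authors' code: random arrays stand in for the HAR Using Smartphones data, and only the k = 48 benchmark and the 70–10–20% split are taken from the abstract.

```python
# Minimal sketch of PCA-based dimensionality reduction as described in the
# abstract (top-k components before training a CNN). Random data stands in
# for the HAR Using Smartphones dataset; k = 48 mirrors the benchmark.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(10299, 561))   # HAR has 561 features per window
y = rng.integers(0, 6, size=10299)  # six activity classes

# 70% train, 10% validation, 20% test, as in the paper's benchmark split.
X_train, X_tmp, y_train, y_tmp = train_test_split(X, y, test_size=0.3, random_state=0)
X_val, X_test, y_val, y_test = train_test_split(X_tmp, y_tmp, test_size=2/3, random_state=0)

pca = PCA(n_components=48).fit(X_train)  # keep the 48 most informative components
X_train_48 = pca.transform(X_train)
X_val_48 = pca.transform(X_val)
X_test_48 = pca.transform(X_test)
print(X_train_48.shape, pca.explained_variance_ratio_.sum())
```
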
2

Kuroki, Masanori. "Using Python and Google Colab to teach undergraduate microeconomic theory." International Review of Economics Education 38 (November 2021): 100225. http://dx.doi.org/10.1016/j.iree.2021.100225.

3

Gunawan, Teddy Surya, Arselan Ashraf, Bob Subhan Riza, Edy Victor Haryanto, Rika Rosnelly, Mira Kartiwi, and Zuriati Janin. "Development of video-based emotion recognition using deep learning with Google Colab." TELKOMNIKA (Telecommunication Computing Electronics and Control) 18, no. 5 (October 1, 2020): 2463. http://dx.doi.org/10.12928/telkomnika.v18i5.16717.

4

Elnashar, Abdelrazek, Hongwei Zeng, Bingfang Wu, Ning Zhang, Fuyou Tian, Miao Zhang, Weiwei Zhu, et al. "Downscaling TRMM Monthly Precipitation Using Google Earth Engine and Google Cloud Computing." Remote Sensing 12, no. 23 (November 25, 2020): 3860. http://dx.doi.org/10.3390/rs12233860.

Abstract:
Accurate precipitation data at high spatiotemporal resolution are critical for land and water management at the basin scale. We proposed a downscaling framework for Tropical Rainfall Measuring Mission (TRMM) precipitation products that integrates Google Earth Engine (GEE) and Google Colaboratory (Colab). Three machine learning methods, Gradient Boosting Regressor (GBR), Support Vector Regressor (SVR), and Artificial Neural Network (ANN), were compared in the framework. Three vegetation indices (Normalized Difference Vegetation Index, NDVI; Enhanced Vegetation Index, EVI; Leaf Area Index, LAI), topography, and geolocation were selected as geospatial predictors for the downscaling. The framework can automatically optimize the models' parameters, estimate feature importance, and downscale the TRMM product to 1 km. The spatial downscaling of TRMM from 25 km to 1 km was achieved using the relationships between annual precipitation and annually averaged vegetation indices, and monthly precipitation maps were derived from the annual downscaled precipitation by disaggregation. According to validation in the Great Mekong upstream region, the ANN yielded the best performance when simulating annual TRMM precipitation; the most sensitive vegetation index for downscaling was LAI, followed by EVI. Compared with existing downscaling methods, the proposed framework can be run online for any given region, using a wide range of machine learning tools and environmental variables to generate a precipitation product with high spatiotemporal resolution.
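
The regression step of such a framework can be sketched with scikit-learn's Gradient Boosting Regressor, one of the three methods compared. Everything below is illustrative: synthetic columns stand in for the NDVI/EVI/LAI, topography, and geolocation predictors, and the real pipeline would read these from GEE rather than from random arrays.

```python
# Illustrative sketch of the downscaling regression described above: learn a
# relationship between coarse (25 km) annual precipitation and geospatial
# predictors, then apply it to the same predictors sampled at 1 km.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(1)
# Synthetic predictors at 25 km: NDVI, EVI, LAI, elevation, lon, lat.
X_coarse = rng.normal(size=(2000, 6))
y_coarse = 800 + 300 * X_coarse[:, 2] + rng.normal(scale=50, size=2000)  # LAI-driven toy signal

model = GradientBoostingRegressor(n_estimators=300, learning_rate=0.05)
model.fit(X_coarse, y_coarse)
print(dict(zip(["NDVI", "EVI", "LAI", "elev", "lon", "lat"],
               model.feature_importances_.round(3))))  # feature importance, as in the framework

X_fine = rng.normal(size=(10, 6))        # same predictors sampled at 1 km
annual_1km = model.predict(X_fine)       # downscaled annual precipitation
```
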
5

Elsayed, Eman, and Doaa Fathy. "Semantic Deep Learning to Translate Dynamic Sign Language." International Journal of Intelligent Engineering and Systems 14, no. 1 (February 28, 2021): 316–25. http://dx.doi.org/10.22266/ijies2021.0228.30.

Abstract:
Dynamic sign language recognition aims to recognize the hand gestures of any person. Such systems face challenges in recognizing the semantics of hand gestures, which arise from personal differences in hand signs from one person to another; real-life video gesture frames cannot be treated at frame level like a static sign. This research proposes a semantic translation system for dynamic hand gestures using deep learning and ontology, employing the proposed MSLO (Multi Sign Language Ontology) in the semantic translation step. Any user can also retrain the system to make it a personal one. We used three-dimensional convolutional neural networks followed by convolutional long short-term memory to improve recognition accuracy. We applied the proposed system to three dynamic gesture datasets from color videos and achieved an average recognition accuracy of 97.4%. All training and testing was run on a graphics processing unit with the support of Google Colab, which decreased the average run time by about 87.9%. In addition to adding semantics to dynamic sign language translation, the proposed system achieves good results compared with some existing dynamic sign language recognition systems.
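
A minimal Keras sketch of the architecture idea named in the abstract, 3D convolutions followed by a convolutional LSTM, is shown below. The clip size, filter counts, and 20-class output are assumptions for illustration; the paper's exact architecture and datasets are not reproduced here.

```python
# Minimal Keras sketch of "3D convolutions followed by convolutional LSTM"
# for dynamic gestures. Input: 16-frame RGB clips of 64x64 pixels.
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(16, 64, 64, 3)),           # (frames, H, W, channels)
    layers.Conv3D(16, kernel_size=3, padding="same", activation="relu"),
    layers.MaxPooling3D(pool_size=(1, 2, 2)),      # pool space, keep time
    layers.ConvLSTM2D(32, kernel_size=3, padding="same"),  # temporal modelling
    layers.GlobalAveragePooling2D(),
    layers.Dense(20, activation="softmax"),        # 20 gesture classes (placeholder)
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```
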
6

Kharisudin, I., A. Hidayati, A. Agoestanto, and M. Mashuri. "Convolutional neural network for classification of skin cancer based on image data using google colab." Journal of Physics: Conference Series 1968, no. 1 (July 1, 2021): 012015. http://dx.doi.org/10.1088/1742-6596/1968/1/012015.

7

Mohialden, Yasmin Makki, Muhanad Tahrir Younis, and Nadia Mahmood Hussien. "A Novel Approach to Arabic Chabot, Utilizing Google Colab and the Internet of Things: A Case Study at a Computer Center." Webology 18, no. 2 (December 23, 2021): 946–54. http://dx.doi.org/10.14704/web/v18i2/web18365.

Abstract:
A chatbot is a software program that lets humans interact with a computer in natural language. It has numerous applications in business, service, education, and healthcare, among others. Arabic chatbots, however, struggle to generate and display Arabic characters correctly because of linguistic problems. In this paper, we propose a new method for developing effective Arabic chatbots, enhanced by the use of the Internet of Things (IoT). An experiment was performed utilizing Google Colab and the Python ChatterBot library to build and deploy an Arabic chatbot for a computer center based on IoT.
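
A minimal sketch of how such a chatbot could be assembled with the Python ChatterBot library, which the paper names, is shown below. The bot name and the two training utterances are placeholders; a real deployment would train on the computer center's own Arabic question-and-answer corpus.

```python
# Minimal sketch of an Arabic chatbot with the ChatterBot library.
# The training pair below is a placeholder, not the paper's corpus.
from chatterbot import ChatBot
from chatterbot.trainers import ListTrainer

bot = ChatBot("ArabicHelpDesk", read_only=True)
trainer = ListTrainer(bot)
trainer.train([
    "مرحبا",                      # "Hello"
    "أهلاً بك في مركز الحاسوب",   # "Welcome to the computer centre"
])

print(bot.get_response("مرحبا"))
```
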
8

Veeramsetty, Venkataramana, Bhavana Reddy Edudodla, and Surender Reddy Salkuti. "Zero-Crossing Point Detection of Sinusoidal Signal in Presence of Noise and Harmonics Using Deep Neural Networks." Algorithms 14, no. 11 (November 8, 2021): 329. http://dx.doi.org/10.3390/a14110329.

Abstract:
Zero-crossing point detection is necessary to establish consistent performance in various power system applications, such as grid synchronization, power conversion, and switchgear protection. In this paper, zero-crossing points of a sinusoidal signal are detected using deep neural networks. To train and evaluate the deep neural network model, new datasets were developed for sinusoidal signals with noise levels from 5% to 50% and harmonic distortion from 10% to 50%. The complete study is implemented in Google Colab using the deep learning framework Keras. Results show that the proposed deep learning model can detect zero-crossing points in a distorted sinusoidal signal with good accuracy.
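
The dataset idea is straightforward to sketch: synthesize a sinusoid with added noise and harmonics, label samples where the clean signal changes sign, and train a small Keras network on fixed-length windows. The window length, distortion levels, and layer sizes below are illustrative assumptions, not the paper's values.

```python
# Sketch of the dataset idea in the abstract: a sinusoid corrupted with noise
# and harmonics, with labels marking zero-crossing points.
import numpy as np
from tensorflow.keras import layers, models

t = np.linspace(0, 1, 1000)
signal = np.sin(2 * np.pi * 50 * t)                      # 50 Hz fundamental
signal += 0.3 * np.sin(2 * np.pi * 150 * t)              # 30% third harmonic
signal += 0.2 * np.random.default_rng(0).normal(size=t.size)  # 20% noise

# Label a sample 1 if the clean signal changes sign there, else 0.
clean = np.sin(2 * np.pi * 50 * t)
labels = (np.sign(clean[:-1]) != np.sign(clean[1:])).astype(int)

# Classify fixed-length windows centred on each candidate sample.
win = 32
X = np.stack([signal[i:i + win] for i in range(len(labels) - win)])
y = labels[win // 2: win // 2 + len(X)]

model = models.Sequential([
    layers.Input(shape=(win,)),
    layers.Dense(64, activation="relu"),
    layers.Dense(32, activation="relu"),
    layers.Dense(1, activation="sigmoid"),   # zero-crossing vs. not
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=3, batch_size=64, verbose=0)
```
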
9

Zhang, Lisa, Pouria Fewzee, and Charbel Feghali. "AI education matters." AI Matters 7, no. 3 (September 2021): 18–20. http://dx.doi.org/10.1145/3511322.3511327.

Abstract:
We introduce a Model AI Assignment (Neller et al., 2021) where students combine various techniques from a deep learning course to build a denoising autoencoder (Shen, Mueller, Barzilay, & Jaakkola, 2020) for news headlines. Students then use this denoising autoencoder to query similar headlines, and interpolate between headlines. Building this denoising autoencoder requires students to apply many course concepts, including data augmentation, word and sentence embeddings, autoencoders, recurrent neural networks, sequence-to-sequence networks, and temperature. As such, this assignment can be ideal as a final assessment that synthesizes many topics. This assignment is written in PyTorch, uses the torchtext package, and is intended to be completed on the Google Colab platform.
10

Brandolini, Filippo, Guillem Domingo-Ribas, Andrea Zerboni, and Sam Turner. "A Google Earth Engine-enabled Python approach for the identification of anthropogenic palaeo-landscape features." Open Research Europe 1 (September 3, 2021): 22. http://dx.doi.org/10.12688/openreseurope.13135.2.

Abstract:
The necessity of sustainable development for landscapes has emerged as an important theme in recent decades. Current methods take a holistic approach to landscape heritage and promote an interdisciplinary dialogue to facilitate complementary landscape management strategies. With the socio-economic values of the “natural” and “cultural” landscape heritage increasingly recognised worldwide, remote sensing tools are being used more and more to facilitate the recording and management of landscape heritage. The advent of freeware cloud computing services has enabled significant improvements in landscape research allowing the rapid exploration and processing of satellite imagery such as the Landsat and Copernicus Sentinel datasets. This research represents one of the first applications of the Google Earth Engine (GEE) Python application programming interface (API) in studies of historic landscapes. The complete free and open-source software (FOSS) cloud protocol proposed here consists of a Python code script developed in Google Colab, which could be adapted and replicated in different areas of the world. A multi-temporal approach has been adopted to investigate the potential of Sentinel-2 satellite imagery to detect buried hydrological and anthropogenic features along with spectral index and spectral decomposition analysis. The protocol's effectiveness in identifying palaeo-riverscape features has been tested in the Po Plain (N Italy).
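
The core of such a protocol, authenticating to GEE from a notebook, filtering Sentinel-2 imagery, and computing a spectral index, can be sketched with the earthengine-api package. The area of interest and date range below are arbitrary placeholders, not the study's Po Plain test area.

```python
# Minimal sketch of the GEE Python API workflow the protocol builds on:
# authenticate in Colab, load Sentinel-2 imagery and compute a spectral index.
import ee

ee.Authenticate()   # interactive login in a Colab cell
ee.Initialize()

aoi = ee.Geometry.Rectangle([10.5, 44.8, 11.0, 45.1])  # placeholder AOI

s2 = (ee.ImageCollection("COPERNICUS/S2_SR")
      .filterBounds(aoi)
      .filterDate("2019-06-01", "2019-09-01")
      .filter(ee.Filter.lt("CLOUDY_PIXEL_PERCENTAGE", 10))
      .median())

# NDVI from the near-infrared (B8) and red (B4) bands.
ndvi = s2.normalizedDifference(["B8", "B4"]).rename("NDVI")
print(ndvi.reduceRegion(ee.Reducer.mean(), aoi, scale=100).getInfo())
```
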
11

Alves, Francisco Regis Vieira, Renata Passos Machado Vieira, and Paula Maria Machado Cruz Catarino. "Visualizing the Newtons Fractal from the Recurring Linear Sequence with Google Colab: An Example of Brazil X Portugal Research." International Electronic Journal of Mathematics Education 15, no. 3 (May 17, 2020): em0594. http://dx.doi.org/10.29333/iejme/8280.

12

Crowdis, Jett, Meng Xiao He, Brendan Reardon, and Eliezer M. Van Allen. "CoMut: visualizing integrated molecular information with comutation plots." Bioinformatics 36, no. 15 (June 5, 2020): 4348–49. http://dx.doi.org/10.1093/bioinformatics/btaa554.

Abstract:
Motivation: Large-scale sequencing studies have created a need to succinctly visualize genomic characteristics of patient cohorts linked to widely variable phenotypic information. This is often done by visualizing the co-occurrence of variants with comutation plots. Current tools lack the ability to create highly customizable, publication-quality comutation plots from arbitrary user data. Results: We developed CoMut, a stand-alone, object-oriented Python package that creates comutation plots from arbitrary input data, including categorical data, continuous data, bar graphs, side bar graphs and data that describes relationships between samples. Availability and implementation: The CoMut package is open source and is available at https://github.com/vanallenlab/comut under the MIT License, along with documentation and examples. A no-installation, easy-to-use implementation is available on Google Colab (see GitHub).
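
Basic CoMut usage follows the pattern in its documentation: categorical mutation data goes into a pandas DataFrame with 'sample', 'category', and 'value' columns, which is added to a CoMut object and plotted. The genes and values below are invented for illustration.

```python
# Sketch of basic CoMut usage per its documentation.
import pandas as pd
from comut import comut

mut = pd.DataFrame({
    "sample":   ["P1", "P1", "P2", "P3"],
    "category": ["TP53", "KRAS", "TP53", "KRAS"],   # genes become plot rows
    "value":    ["Missense", "Nonsense", "Missense", "Amplification"],
})

plot = comut.CoMut()
plot.add_categorical_data(mut, name="Mutation type")
plot.plot_comut(figsize=(6, 2))
plot.figure.savefig("comut_example.png", dpi=300, bbox_inches="tight")
```
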
13

Vassányi, Gergely, and Mátyás Gede. "Automatic vectorization of point symbols on archive maps using deep convolutional neural network." Proceedings of the ICA 4 (December 3, 2021): 1–5. http://dx.doi.org/10.5194/ica-proc-4-109-2021.

Abstract:
Abstract. Archive topographical maps are a key source of geographical information from past ages and can be valuable for several fields of science. Since manual digitization is usually slow and demands considerable human labor, automatic methods such as deep learning algorithms are preferred. Although automatic vectorization is a common problem, there have been few approaches regarding point symbols. In this paper, a point symbol vectorization method is proposed and tested on Third Military Survey map sheets using a Mask Regional Convolutional Neural Network (MRCNN). The MRCNN implementation uses the ResNet101 network improved with the Feature Pyramid Network architecture and was developed in a Google Colab environment. The pretrained network was trained on four point symbol categories simultaneously. Results show 90% accuracy overall, with 94% of symbols detected for some categories on the complete test sheet.
14

Brandolini, Filippo, Guillem Domingo-Ribas, Andrea Zerboni, and Sam Turner. "A Google Earth Engine-enabled Python approach to improve identification of anthropogenic palaeo-landscape features." Open Research Europe 1 (March 24, 2021): 22. http://dx.doi.org/10.12688/openreseurope.13135.1.

Abstract:
The necessity of sustainable development for landscapes has emerged as an important theme in recent decades. Current methods take a holistic approach to landscape heritage and promote an interdisciplinary dialogue to facilitate complementary landscape management strategies. With the socio-economic values of the “natural” and “cultural” landscape heritage increasingly recognised worldwide, remote sensing tools are being used more and more to facilitate the recording and management of landscape heritage. Satellite remote sensing technologies have enabled significant improvements in landscape research. The advent of the cloud-based platform of Google Earth Engine (GEE) has allowed the rapid exploration and processing of satellite imagery such as the Landsat and Copernicus Sentinel datasets. In this paper, the use of Sentinel-2 satellite data in the identification of palaeo-riverscape features has been assessed in the Po Plain, selected because it is characterized by human exploitation since the Mid-Holocene. A multi-temporal approach has been adopted to investigate the potential of satellite imagery to detect buried hydrological and anthropogenic features along with spectral index and spectral decomposition analysis. This research represents one of the first applications of the GEE Python application programming interface (API) in landscape studies. The complete free and open-source software (FOSS) cloud protocol proposed here consists of a Python code script developed in Google Colab which could be simply adapted and replicated in different areas of the world.
15

Aigouy, Benoit, Claudio Cortes, Shanda Liu, and Benjamin Prud'Homme. "EPySeg: a coding-free solution for automated segmentation of epithelia using deep learning." Development 147, no. 24 (December 2, 2020): dev194589. http://dx.doi.org/10.1242/dev.194589.

Abstract:
Epithelia are dynamic tissues that self-remodel during their development. During morphogenesis, the tissue-scale organization of epithelia is obtained through a sum of individual contributions of the cells constituting the tissue. Therefore, understanding any morphogenetic event first requires a thorough segmentation of its constituent cells. This task, however, usually involves extensive manual correction, even with semi-automated tools. Here, we present EPySeg, an open-source, coding-free software that uses deep learning to segment membrane-stained epithelial tissues automatically and very efficiently. EPySeg, which comes with a straightforward graphical user interface, can be used as a Python package on a local computer, or on the cloud via Google Colab for users not equipped with deep-learning compatible hardware. By substantially reducing human input in image segmentation, EPySeg accelerates and improves the characterization of epithelial tissues for all developmental biologists.
16

Handayanto, Rahmadya Trias, and Herlawati Herlawati. "Prediksi Kelas Jamak dengan Deep Learning Berbasis Graphics Processing Units." Jurnal Kajian Ilmiah 20, no. 1 (January 25, 2020): 67–76. http://dx.doi.org/10.31599/jki.v20i1.71.

Abstract:
In its early days, machine learning performed classification with two classes (bi-class), such as class -1 and class +1, 0 and 1, or categories such as true and false. Well-known methods include Artificial Neural Networks (ANN) and the Support Vector Machine (SVM). A later development was the problem of more than two classes, known as multi-class classification. For SVM, multi-class problems are sometimes handled with a staged process similar to a decision tree (DT). Meanwhile, ANNs have developed rapidly and are now built with many layers and new activation functions, i.e. the rectified linear unit (ReLU) and the probabilistic softmax, together with optimizer methods (Adam, SGD, and others); the term has accordingly changed to Deep Learning (DL). This study compares the two well-known methods (DL and SVM) in classifying multiple classes. The DL network had six layers with 128, 64, 32, 8, 4, and 3 neurons, while the SVM used a radial basis function kernel with gamma and C of 0.7 and 5, respectively. The study also compares the use of the Graphics Processing Unit (GPU) available on Google Interactive Notebook (Google Colab), an online Python programming application. The results showed that DL accuracy slightly outperformed SVM (99% versus 98%) but required large computational resources; the use of the GPU overcame this problem and was shown to increase processing speed by a factor of 47.
Keywords: Artificial Neural Networks, Graphics Processing Unit, Google Interactive Notebook, Rectified Linear Units, Support Vector Machine.
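
A sketch of this comparison, assuming a synthetic three-class dataset in place of the original data, is given below: a six-layer dense network with the stated neuron counts against an RBF-kernel SVM with gamma = 0.7 and C = 5.

```python
# Sketch of the DL-vs-SVM comparison described above: a six-layer dense
# network (128, 64, 32, 8, 4, 3 neurons) versus an RBF-kernel SVM.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from tensorflow.keras import layers, models

X, y = make_classification(n_samples=2000, n_features=20, n_informative=10,
                           n_classes=3, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

svm = SVC(kernel="rbf", gamma=0.7, C=5).fit(X_tr, y_tr)
print("SVM accuracy:", svm.score(X_te, y_te))

dl = models.Sequential([layers.Input(shape=(20,))] + [
    layers.Dense(n, activation="relu") for n in (128, 64, 32, 8, 4)
] + [layers.Dense(3, activation="softmax")])   # softmax output, as described
dl.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
           metrics=["accuracy"])
dl.fit(X_tr, y_tr, epochs=20, verbose=0)
print("DL accuracy:", dl.evaluate(X_te, y_te, verbose=0)[1])
```
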
17

Balaniuk, Remis, Olga Isupova, and Steven Reece. "Mining and Tailings Dam Detection in Satellite Imagery Using Deep Learning." Sensors 20, no. 23 (December 4, 2020): 6936. http://dx.doi.org/10.3390/s20236936.

Abstract:
This work explores the combination of free cloud computing, free open-source software, and deep learning methods to analyze a real, large-scale problem: the automatic country-wide identification and classification of surface mines and mining tailings dams in Brazil. Locations of officially registered mines and dams were obtained from the Brazilian government open data resource. Multispectral Sentinel-2 satellite imagery, obtained and processed at the Google Earth Engine platform, was used to train and test deep neural networks using the TensorFlow 2 application programming interface (API) and Google Colaboratory (Colab) platform. Fully convolutional neural networks were used in an innovative way to search for unregistered ore mines and tailing dams in large areas of the Brazilian territory. The efficacy of the approach is demonstrated by the discovery of 263 mines that do not have an official mining concession. This exploratory work highlights the potential of a set of new technologies, freely available, for the construction of low cost data science tools that have high social impact. At the same time, it discusses and seeks to suggest practical solutions for the complex and serious problem of illegal mining and the proliferation of tailings dams, which pose high risks to the population and the environment, especially in developing countries.
18

Vallejo, William, Carlos Díaz-Uribe, and Catalina Fajardo. "Google Colab and Virtual Simulations: Practical e-Learning Tools to Support the Teaching of Thermodynamics and to Introduce Coding to Students." ACS Omega 7, no. 8 (February 18, 2022): 7421–29. http://dx.doi.org/10.1021/acsomega.2c00362.

19

Kadri, Ouahab, Abderrezak Benyahia, and Adel Abdelhadi. "Tifinagh Handwriting Character Recognition Using a CNN Provided as a Web Service." International Journal of Cloud Applications and Computing 12, no. 1 (January 2022): 1–17. http://dx.doi.org/10.4018/ijcac.297093.

Abstract:
Many cloud providers offer very high precision services for Optical Character Recognition (OCR). However, no provider offers Tifinagh OCR as a web service. Several works have proposed powerful Tifinagh OCR systems, but none has been developed as a web service. In this paper, we present a new architecture for Tifinagh handwriting recognition as a web service based on a deep learning model via Google Colab. For the implementation of our proposal, we used the new version of the TensorFlow library and a very large database of Tifinagh characters composed of 60,000 images from Mohammed V University in Rabat. Experimental results show that the TensorFlow library based on a Tensor Processing Unit constitutes a very promising framework for developing fast and very precise Tifinagh OCR web services. The results show that our method based on a convolutional neural network outperforms existing methods based on support vector machines and the extreme learning machine.
20

Gavrylenko, S., and V. Zozulia. "ДОСЛІДЖЕННЯ МЕТОДІВ ВИЯВЛЕННЯ АНОМАЛІЙ НА ЕТАПІ ПОПЕРЕДНЬОЇ ОБРОБКИ ДАНИХ." Системи управління, навігації та зв’язку. Збірник наукових праць 1, no. 67 (April 1, 2022): 52–56. http://dx.doi.org/10.26906/sunz.2022.1.052.

Abstract:
The subject of this research is methods and tools for detecting anomalies in data. The aim of the article is to improve the quality of data classification by detecting anomalies during data preprocessing. Objectives: to investigate anomaly detection methods at the data preprocessing stage, to determine the decision threshold anomaly_score for each method, and to evaluate classification quality before and after preprocessing. The methods used are artificial intelligence, machine learning, and ensemble methods. The following results were obtained: three anomaly detection methods were investigated, the Standard Deviation Method, the Local Outlier Factor method, and the Isolation Forest method, and the dependence of the number of anomalies on the decision threshold was obtained for each of them. The quality of data preprocessing was evaluated using classifiers based on KNN and bagging. The investigated methods were implemented in software using the Google Colab cloud service based on Jupyter Notebook. Conclusions: the scientific novelty of the results lies in the study of anomaly detection methods at the data preprocessing stage, the selection of a preprocessing meta-algorithm, and the determination of its optimal tuning parameters.
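
The three detectors named above are all available in scikit-learn (the standard deviation rule takes two lines of NumPy). The sketch below is illustrative: the injected anomalies, the 3-sigma cutoff, and the contamination rate stand in for the thresholds the paper tunes.

```python
# Sketch of the three anomaly detectors studied: standard deviation rule,
# Local Outlier Factor, and Isolation Forest, on a toy 2-D dataset.
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.neighbors import LocalOutlierFactor

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, size=(500, 2)),
               rng.normal(6, 1, size=(10, 2))])   # 10 injected anomalies

# 1) Standard deviation rule: flag points more than 3 sigma from the mean.
z = np.abs(X - X.mean(axis=0)) / X.std(axis=0)
std_flags = (z > 3).any(axis=1)

# 2) Local Outlier Factor: density-based scores, -1 marks an outlier.
lof_flags = LocalOutlierFactor(n_neighbors=20).fit_predict(X) == -1

# 3) Isolation Forest: shorter isolation paths mean more anomalous.
iso = IsolationForest(contamination=0.02, random_state=0).fit(X)
iso_flags = iso.predict(X) == -1

print(std_flags.sum(), lof_flags.sum(), iso_flags.sum())
```
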
21

Hairani, Hairani, and Ahmad Zuli Amrullah. "Pelatihan Pengenalan Data Science untuk Meningkatkan Kemampuan dalam Pengolahan Data." Jurnal Abdidas 1, no. 3 (July 5, 2020): 95–99. http://dx.doi.org/10.31004/abdidas.v1i3.31.

Abstract:
Data science combines computer science, statistics, and business domain knowledge to extract large piles of data into knowledge, revealing patterns that support decision-making. People working in this field are called data scientists, a profession that has recently become one of the most attractive of the 21st century. In Indonesia, the number of people working as data scientists is very small compared with the availability of jobs in data science; in other words, the supply of data scientists is far below the abundant demand. One solution offered is to hold training and workshops introducing data science to strengthen human resources in the field, particularly at Universitas Bumigora. The implementation method consisted of presenting material about data science and simulating the use of data science methods on a real case using Google Colab. Based on the results of the training and workshop, participants improved their understanding of, and ability to use, data science methods to turn data into knowledge.
22

Ekmekcioglu, O., and N. Demir. "AUTOMATED DETECTION OF COLLAPSED BUILDINGS WITH USE OF OPTICAL AND SAR IMAGES, CASE STUDY: IZMIR EARTHQUAKE ON OCTOBER 30TH, 2020." International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLIII-B3-2021 (June 29, 2021): 707–12. http://dx.doi.org/10.5194/isprs-archives-xliii-b3-2021-707-2021.

Abstract:
Abstract. In this study, we analysed optical and SAR images to detect collapsed buildings automatically in the cloud-based programming environment Google Colab. We used an existing digital map of buildings provided by Here Maps; for each building feature, histograms were generated for both the optical and SAR images, and unmatched histograms on the optical image corresponded mainly to destroyed buildings and to tent areas newly established for people who lost their homes. In the method, the most recent (before and after) optical images of the earthquake zone are taken and pre-processing steps including principal component analysis and K-Means clustering are performed. Then, the statistical values of the area overlapping the building vectors are calculated and threshold values are determined. SAR images are used to refine the results. The optical satellite images used are Worldview images with 30 cm GSD; for SAR, Sentinel-1 C-band and ICEYE X-band images, provided by ESA, are used.
23

Simanjuntak, Ricky, and Dedy Irawan. "Applying Artificial Neural Network and XGBoost to Improve Data Analytics in Oil and Gas Industry." Indonesian Journal of Energy 4, no. 1 (February 26, 2021): 26–35. http://dx.doi.org/10.33116/ije.v4i1.103.

Abstract:
The application of machine learning and artificial intelligence is popular nowadays for improving data analytics in the oil and gas industry. A huge amount of data can be processed to gain insights about subsurface conditions, even reducing the time needed for manual review or interpretation. Three cases are discussed in this study: porosity estimation from thin core images using Otsu's thresholding, estimation of the oil production rate from sucker-rod pumping wells, and sonic travel-time log generation. Two supervised learning frameworks are applied, XGBoost and Keras, to capture all possible correlations between the input and output data. From data normalization and exploratory data analysis to model building, the workflow is built on Google Colab. The original dataset is split into training and testing sets, and hyperparameters such as the number of hidden layers, neurons, activation function, optimizer, and learning rate are tuned to reduce the complexity of the model. The model is evaluated with error values and the coefficient of determination to estimate its skill on unseen data.
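
A minimal sketch of the XGBoost side of this workflow, under the assumption of synthetic features in place of the field data: split the data, fit a regressor with a few tuned hyperparameters, and report an error value alongside the coefficient of determination.

```python
# Sketch of the XGBoost workflow outlined above: split, fit, and score the
# model on unseen data with an error metric and R^2.
import numpy as np
import xgboost as xgb
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 8))                     # e.g. well/log attributes
y = 3 * X[:, 0] - 2 * X[:, 3] + rng.normal(scale=0.5, size=1000)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = xgb.XGBRegressor(n_estimators=400, learning_rate=0.05, max_depth=4)
model.fit(X_tr, y_tr)

pred = model.predict(X_te)
print("RMSE:", mean_squared_error(y_te, pred) ** 0.5)
print("R^2:", r2_score(y_te, pred))
```
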
24

Wibawa, Aji Prasetya, Wahyu Arbianda Yudha Pratama, Anik Nur Handayani, and Anusua Ghosh. "Convolutional Neural Network (CNN) to determine the character of wayang kulit." International Journal of Visual and Performing Arts 3, no. 1 (June 28, 2021): 1–8. http://dx.doi.org/10.31763/viperarts.v3i1.373.

Abstract:
Indonesia is a country with diverse cultures, one of which is wayang kulit (shadow puppetry), which has been recognized by UNESCO. Wayang kulit has a variety of names and personalities, yet most of the younger generation are not familiar with its characters. With today's rapid technological advancements, this technology can be used to detect objects with cameras, and the Convolutional Neural Network (CNN) is one method that can be used: a deep learning approach that learns the best representation of its input and is commonly used for object detection. Here it is used to classify good and bad characters. The data consist of 100 black-and-white puppet images that were downloaded individually. The model was obtained through a training process using the CNN method on Google Colab, which helped speed up training; a new model was then created to test the puppet images. The result was a 92 percent accuracy rate, which means that the CNN can differentiate wayang kulit characters.
25

Harahap, Mawaddah, Em Manuel Laia, Lilis Suryani Sitanggang, Melda Sinaga, Daniel Franci Sihombing, and Amir Mahmud Husein. "Deteksi Penyakit Covid-19 Pada Citra X-Ray Dengan Pendekatan Convolutional Neural Network (CNN)." Jurnal RESTI (Rekayasa Sistem dan Teknologi Informasi) 6, no. 1 (February 27, 2022): 70–77. http://dx.doi.org/10.29207/resti.v6i1.3373.

Abstract:
The Coronavirus (COVID-19) pandemic has caused the worldwide death rate to continue increasing significantly. Identification using medical imaging such as X-rays and computed tomography plays an important role in helping medical personnel diagnose COVID-19-positive and -negative patients, and several works have shown that deep learning with a Convolutional Neural Network (CNN) yields good accuracy for COVID detection from chest X-ray images. In this study, we compare the performance of different transfer learning architectures: VGG19, MobileNetV2, InceptionResNetV2, and ResNet variants (ResNet101V2, ResNet152V2, and ResNet50V2). Testing was conducted in the Google Colab environment as a platform for Python-based applications, with all datasets stored on Google Drive. Preprocessing was carried out before training and testing: the images were grouped into NORMAL and COVID folders and then combined into a dataset of 352 training images, 110 testing images, and 88 validation images; detection results were labeled 1 for COVID and 0 for NORMAL. Based on the test results, the ResNet50V2 model achieved better accuracy than the other models, with an accuracy of about 0.95 (95%), precision of 0.96, recall of 0.973, F1-score of 0.966, and support of 74, followed by InceptionResNetV2, VGG19, and MobileNetV2. A ResNet50V2-based CNN can therefore be used for the initial classification of a patient as infected with COVID or NORMAL.
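
The winning configuration can be sketched as a standard Keras transfer-learning setup: a frozen ResNet50V2 backbone with a small binary head. The input size, dropout rate, and head layers below are common defaults assumed for illustration, not the paper's exact settings.

```python
# Sketch of the transfer-learning setup that performed best in the paper:
# a frozen ResNet50V2 backbone with a binary head for COVID vs NORMAL.
from tensorflow.keras import layers, models
from tensorflow.keras.applications import ResNet50V2

base = ResNet50V2(weights="imagenet", include_top=False,
                  input_shape=(224, 224, 3))
base.trainable = False          # keep pretrained features fixed

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dropout(0.3),
    layers.Dense(1, activation="sigmoid"),   # 1 = COVID, 0 = NORMAL
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.summary()
```
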
26

Sokolovskyy, Yaroslav, Denys Manokhin, Yaroslav Kaplunsky, and Olha Mokrytska. "Development of software and algorithms of parallel learning of artificial neural networks using CUDA technologies." Technology audit and production reserves 5, no. 2(61) (September 23, 2021): 21–25. http://dx.doi.org/10.15587/2706-5448.2021.239784.

Abstract:
The object of this research is to parallelize the learning process of artificial neural networks to automate medical image analysis using the Python programming language, the PyTorch framework, and Compute Unified Device Architecture (CUDA) technology. The framework's operation is based on the define-by-run model. Available cloud technologies for the task and learning algorithms for artificial neural networks were analyzed. A modified U-Net architecture from the MedicalTorch library was used, chosen because it can learn effectively from small datasets; in medicine, one of the most problematic issues is the availability of large datasets, owing to confidentiality requirements for data of this nature. The resulting information system implements the tasks set before it, with a user-friendly interface and all the tools needed to simplify and automate the visualization and analysis of data. The efficiency of neural network learning on the central processing unit (CPU) and on the graphics processing unit (GPU) with CUDA was compared. Cloud technology was used in the study: Google Colab and Microsoft Azure were considered, with Colab used first to build a prototype and Azure then used to train the finished artificial neural network architecture effectively. Measurements were performed in both cloud services. The Adam optimizer was used to train the model, and CPU run times were also measured to estimate the acceleration obtained from GPU computing with CUDA and cloud technologies. The model developed during the research showed satisfactory results on the Jaccard and Dice metrics. A key factor in the success of this study was cloud computing services.
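
The CPU-versus-GPU comparison at the heart of this study reduces to moving the same model and tensors to a chosen torch.device and timing identical Adam training steps. The tiny convolutional model below is a stand-in for the paper's modified U-Net.

```python
# Minimal PyTorch sketch of the GPU/CPU comparison: the same training step
# runs on whichever device is available, with Adam as the optimiser.
import time
import torch
import torch.nn as nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                      nn.Conv2d(8, 1, 3, padding=1)).to(device)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

x = torch.randn(16, 1, 128, 128, device=device)   # batch of images
target = torch.randn_like(x)                      # placeholder masks

start = time.time()
for _ in range(50):                               # 50 identical training steps
    opt.zero_grad()
    loss = loss_fn(model(x), target)
    loss.backward()
    opt.step()
print(device, "took", round(time.time() - start, 2), "s")
```
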
27

Camara, G. S., S. P. Camboim, and J. V. M. Bravo. "USING JUPYTER NOTEBOOKS FOR VIEWING AND ANALYSING GEOSPATIAL DATA: TWO EXAMPLES FOR EMOTIONAL MAPS AND EDUCATION DATA." International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLVI-4/W2-2021 (August 19, 2021): 17–24. http://dx.doi.org/10.5194/isprs-archives-xlvi-4-w2-2021-17-2021.

Abstract:
Abstract. This article presents two applications developed using Jupyter Notebook in the Google Colab, combining several Python libraries that enable an interactive environment to query, manipulate, analyse, and visualise spatial data. The first application is from an educational context within the MAPFOR project, aiming to elaborate an interactive map of the spatial distributions of teachers with higher education degrees or pedagogical complementation per vacancies in higher education courses. The Jupyter solutions were applied in MAPFOR to better communicate within the research team, mainly in the development area. The second application is a framework to analyse and visualise collaborative emotional mapping data in urban mobility, where the emotions were collected and represented through emojis. The computational notebook was applied in this emotional mapping to enable the interaction of users, without a SQL background, with spatial data stored in a database through widgets to analyse and visualise emotional spatial data. We developed these different contexts in a Jupyter Notebook to practice the FAIR principles, promote the Open Science movement, and Open Geospatial Resources. Finally, we aim to demonstrate the potential of using a mix of open geospatial technologies for generating solutions that disseminate geographic information.
28

Smaida, Mahmoud, Serhii Yaroshchak, and Ahmed Y. Ben Sasi. "Learning Rate Optimization in CNN for Accurate Ophthalmic Classification." International Journal of Innovative Technology and Exploring Engineering 10, no. 4 (February 28, 2021): 211–16. http://dx.doi.org/10.35940/ijitee.b8259.0210421.

Abstract:
One of the most important hyperparameters for model training and generalization is the learning rate. Many recent research studies have shown that optimizing the learning rate schedule is very useful for training deep neural networks to obtain accurate and efficient results. In this paper, different learning rate schedules using several comprehensive optimization techniques are compared in order to measure the accuracy of a convolutional neural network (CNN) model classifying four ophthalmic conditions. A deep learning CNN based on Keras and TensorFlow was deployed using Python on a database containing 1692 images of four types of ophthalmic cases: glaucoma, myopia, diabetic retinopathy, and normal eyes. The CNN model was trained on a Google Colab GPU with different learning rate schedules and adaptive learning algorithms: constant learning rate, time-based decay, step-based decay, exponential decay, and adaptive learning rate optimization techniques were addressed. The Adam adaptive learning rate method outperformed the other optimization techniques and achieved the best model accuracy of 92.58% on the training set and 80.49% on the validation set.
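
The schedules compared in the paper map directly onto Keras LearningRateScheduler callbacks. The decay constants below are illustrative assumptions; only the schedule shapes (time-based, step-based, exponential) come from the abstract.

```python
# Sketch of the learning-rate schedules compared in the paper, expressed as
# Keras callbacks. The decay constants are illustrative.
import math
from tensorflow.keras.callbacks import LearningRateScheduler

def time_based(epoch, lr, decay=0.01):
    return lr / (1 + decay * epoch)                  # time-based decay

def step_based(epoch, lr0=0.01, drop=0.5, every=10):
    return lr0 * drop ** math.floor(epoch / every)   # step-based decay

def exponential(epoch, lr0=0.01, k=0.1):
    return lr0 * math.exp(-k * epoch)                # exponential decay

step_cb = LearningRateScheduler(lambda epoch, lr: step_based(epoch))
# model.fit(X, y, epochs=50, callbacks=[step_cb])  # attach to any Keras model
```
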
29

Suryaman, Sean Alexander, Rita Magdalena, and Sofia Sa'idah. "Klasifikasi Cuaca Menggunakan Metode VGG-16, Principal Component Analysis Dan K-Nearest Neighbor." Jurnal Ilmu Komputer dan Informatika 1, no. 1 (August 28, 2021): 1–8. http://dx.doi.org/10.54082/jiki.1.

Abstract:
Weather is a natural phenomenon with a great impact on humans, and information about weather conditions is much needed; it is very useful for knowing what is happening around us. Current classification systems rely on a series of expensive sensors or on human assistance. Artificial intelligence is a branch of computer science that helps humans solve such problems. This study uses artificial intelligence to classify weather conditions using VGG-16, Principal Component Analysis (PCA), and K-Nearest Neighbor (KNN). First, features are extracted with VGG-16; PCA is then used to reduce the data so that it is more effective; and KNN is used to classify the data. KNN uses distance to classify: the shortest distances determine the neighborhood used to decide whether the weather is clear, cloudy, foggy, rainy, or sunrise. The system was built on the Google Colab platform with the Python programming language. Based on the results, the weather classification system achieved an accuracy of 87.50%, obtained with 450 test and 1050 training samples. The best parameters were an image size of 256 x 256, the Cosine KNN metric, k = 9, and a PCA percentage of 30%.
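
The three-stage pipeline translates naturally into a few lines of Keras and scikit-learn: VGG-16 as a fixed feature extractor, PCA for reduction, and a cosine KNN with k = 9. The random images below are placeholders, and reading "PCA percentage 30%" as retained variance is an assumption.

```python
# Sketch of the pipeline described above: VGG-16 features -> PCA -> KNN.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from tensorflow.keras.applications import VGG16

base = VGG16(weights="imagenet", include_top=False, pooling="avg",
             input_shape=(256, 256, 3))          # 512-dim feature per image

images = np.random.rand(40, 256, 256, 3).astype("float32")  # placeholder photos
labels = np.random.randint(0, 5, size=40)        # 5 weather classes
features = base.predict(images, verbose=0)

pca = PCA(n_components=0.30)                     # keep 30% of the variance (assumption)
reduced = pca.fit_transform(features)

knn = KNeighborsClassifier(n_neighbors=9, metric="cosine")
knn.fit(reduced, labels)
print(knn.predict(pca.transform(features[:3])))
```
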
30

Amoozadeh, Sahel, Jodie Johnston, and Claudia-Nicole Meisrimler. "Exploiting Structural Modelling Tools to Explore Host-Translocated Effector Proteins." International Journal of Molecular Sciences 22, no. 23 (November 30, 2021): 12962. http://dx.doi.org/10.3390/ijms222312962.

Abstract:
Oomycete and fungal interactions with plants can be neutral, symbiotic or pathogenic, with different impacts on plant health and fitness. Both fungi and oomycetes can generate so-called effector proteins in order to successfully colonize the host plant. These proteins modify stress pathways, developmental processes and the innate immune system to the microbes' benefit, with a very different outcome for the plant. The biological and functional roles of effectors during plant-microbe interactions are accessible through bioinformatics and experimental approaches. The next-generation protein modelling programs RoseTTAFold and AlphaFold2 have made significant progress in defining the 3D structure of proteins by utilizing novel machine-learning algorithms with amino acid sequences as their only input. As these two methods rely on supercomputers, Google Colab alternatives such as ColabFold have received significant attention, making the approaches more accessible to users. Here, we focus on current structural biology, sequence motif and domain knowledge of effector proteins from filamentous microbes and discuss the broader use of novel modelling strategies, namely AlphaFold2 and RoseTTAFold, in the field of effector biology. Finally, we compare the original programs and their Colab versions to assess current strengths, ease of access, limitations and future applications.
31

Bambang Pilu Hartato. "Penerapan Convolutional Neural Network pada Citra Rontgen Paru-Paru untuk Deteksi SARS-CoV-2." Jurnal RESTI (Rekayasa Sistem dan Teknologi Informasi) 5, no. 4 (August 24, 2021): 747–59. http://dx.doi.org/10.29207/resti.v5i4.3153.

Abstract:
COVID-19 was officially declared a pandemic by the WHO on March 11, 2020. The testing methods commonly used for COVID-19 are antibody testing and RT-PCR testing, both considered the most effective ways of determining whether a person has suffered from COVID-19. However, alternative testing methods need to be tried, and one of them uses a Convolutional Neural Network (CNN). This study aims to measure the performance of a CNN in classifying chest X-ray images to determine whether a person suffers from COVID-19. The CNN model consists of one 2D convolutional layer, two activation layers, one max-pooling layer, one dropout layer, one flatten layer, and one dense layer. The chest X-ray dataset used is the COVID-19 Radiography Database, which consists of three classes: COVID-19, NORMAL, and VIRAL_PNEUMONIA. The experiments consisted of four scenarios and were carried out using Google Colab. Based on the experiments, the CNN model achieved an accuracy of 98.69%, a sensitivity of 97.71%, and a specificity of 98.90%. Thus, the CNN performs very well in classifying the disease from a person's chest X-ray.
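
The layer stack enumerated in this abstract can be written out directly in Keras. Filter count, dropout rate, and input size below are assumptions; the counts of each layer type follow the abstract.

```python
# The layer stack described above in Keras: one 2D convolution, two
# activations, max-pooling, dropout, flatten, and a dense softmax output
# for the three classes.
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(128, 128, 1)),         # grayscale chest X-ray (assumed size)
    layers.Conv2D(32, kernel_size=3),          # 1 convolutional 2D layer
    layers.Activation("relu"),                 # activation layer 1
    layers.MaxPooling2D(pool_size=2),          # 1 max-pooling layer
    layers.Dropout(0.25),                      # 1 dropout layer
    layers.Flatten(),                          # 1 flatten layer
    layers.Dense(3),                           # 1 dense layer (3 classes)
    layers.Activation("softmax"),              # activation layer 2
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```
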
32

Kamala Kumari, P., and Joseph Beatrice Seventline. "A Robust Feature Extraction and Deep Learning Approach for Cancer Gene Prognosis." International Journal of Biology and Biomedical Engineering 16 (January 4, 2022): 126–33. http://dx.doi.org/10.46300/91011.2022.16.16.

Abstract:
Mutated genes are one of the prominent factors in the origination and spread of cancer. Here we have used genomic signal processing methods to identify the patterns that differentiate cancer and non-cancerous genes, and deep learning algorithms to model a system that automatically predicts cancer genes. Unlike existing methods, two feature extraction modules are deployed to extract six attributes: a Power Spectral Density based module extracts statistical parameters (mean, median, standard deviation, mean deviation, and median deviation), and an Adaptive Functional Link Network (AFLN) based filter module extracts the Normalized Mean Square Error (NMSE). The uniqueness of this paper is the identification of six input features that differentiate cancer genes. An artificial neural network was developed to predict cancer genes, and a comparison was made across three datasets with six attributes, five attributes, and one attribute. All training and testing were performed in TensorFlow using the Keras library in Python on Google Colab. The developed approach proved its efficiency with six attributes, attaining an accuracy of 98% for 150 epochs. The ANN model was also compared with existing work and attained a 10-fold cross-validation accuracy of 96.26%, an increase of 1.2%.
33

Trivedi, Naresh K., Vinay Gautam, Abhineet Anand, Hani Moaiteq Aljahdali, Santos Gracia Villar, Divya Anand, Nitin Goyal, and Seifedine Kadry. "Early Detection and Classification of Tomato Leaf Disease Using High-Performance Deep Neural Network." Sensors 21, no. 23 (November 30, 2021): 7987. http://dx.doi.org/10.3390/s21237987.

Abstract:
Tomato is one of the most essential and widely consumed crops in the world. Tomato yields vary depending on how the plants are fertilized, and leaf disease is the primary factor affecting the quantity and quality of crop yield, so it is critical to diagnose and classify these disorders appropriately. Different kinds of diseases influence the production of tomatoes, and earlier identification of these diseases would reduce their effect on tomato plants and enhance crop yield. Various innovative ways of identifying and classifying certain diseases have been used extensively. The motive of this work is to support farmers in identifying early-stage diseases accurately and informing them about these diseases. A Convolutional Neural Network (CNN) is used to effectively define and classify tomato diseases. Google Colab is used to conduct the complete experiment with a dataset containing 3000 images of tomato leaves affected by nine different diseases plus a healthy leaf class. The complete process is as follows: first, the input images are preprocessed and the targeted area of each image is segmented from the original; second, the images are further processed with varying hyper-parameters of the CNN model; finally, the CNN extracts characteristics from the pictures such as colors, texture, and edges. The findings demonstrate that the proposed model's predictions are 98.49% accurate.
34

Thắng, Huỳnh Việt. "NGHIÊN CỨU TRIỂN KHAI MẠNG HỌC SÂU LENET5 TRÊN VI ĐIỀU KHIỂN STM32 ỨNG DỤNG TRONG NHẬN DẠNG HÌNH ẢNH." TNU Journal of Science and Technology 226, no. 11 (August 9, 2021): 191–99. http://dx.doi.org/10.34238/tnu-jst.4497.

Abstract:
The advent of smart mobile devices, together with the explosion of Internet-based applications and services, has led to a new computing model: edge computing. With the current broad trend of applying artificial intelligence, deploying AI and deep learning applications on edge computing platforms is a prominent direction. This paper investigates the feasibility of running a deep learning model based on the LeNet5 convolutional neural network on low-power microcontrollers built on the ARM architecture. We present the design and implementation of handwritten digit recognition on an STM32 development board. We used Google Colab and the Python language to train the convolutional neural network model, then mapped the trained model onto an STM32F411 microcontroller development board using the X-Cube-AI toolkit. Evaluation on the actual hardware shows that execution on the microcontroller achieves performance nearly equivalent to execution on a general-purpose computer.
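
A LeNet5-style model of the kind described can be defined and trained in Colab with Keras, then saved for conversion with X-Cube-AI (the conversion itself happens in the STM32 tooling, outside Python). The layer details below follow the classic LeNet-5 layout rather than the paper's exact network.

```python
# Sketch of a LeNet5-style network in Keras, the kind of model trained in
# Colab and then converted for the STM32 with X-Cube-AI.
from tensorflow.keras import layers, models

lenet5 = models.Sequential([
    layers.Input(shape=(28, 28, 1)),                     # MNIST digits
    layers.Conv2D(6, 5, activation="tanh", padding="same"),
    layers.AveragePooling2D(2),
    layers.Conv2D(16, 5, activation="tanh"),
    layers.AveragePooling2D(2),
    layers.Flatten(),
    layers.Dense(120, activation="tanh"),
    layers.Dense(84, activation="tanh"),
    layers.Dense(10, activation="softmax"),
])
lenet5.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
               metrics=["accuracy"])
lenet5.save("lenet5.h5")   # the .h5 file can then be imported into X-Cube-AI
```
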
35

Tazin, Tahia, Sraboni Sarker, Punit Gupta, Fozayel Ibn Ayaz, Sumaia Islam, Mohammad Monirujjaman Khan, Sami Bourouis, Sahar Ahmed Idris, and Hammam Alshazly. "A Robust and Novel Approach for Brain Tumor Classification Using Convolutional Neural Network." Computational Intelligence and Neuroscience 2021 (December 21, 2021): 1–11. http://dx.doi.org/10.1155/2021/2392395.

Abstract:
Brain tumors are among the most common and aggressive illnesses, with a relatively short life expectancy in their most severe form; thus, treatment planning is an important step in improving patients' quality of life. In general, imaging methods such as computed tomography (CT), magnetic resonance imaging (MRI), and ultrasound are used to assess tumors in the brain, lung, liver, breast, prostate, and so on. X-ray images in particular are utilized in this study to diagnose brain tumors. This paper describes the investigation of convolutional neural networks (CNNs) to identify brain tumors from X-ray images, which expedites treatment and increases its reliability. Because there has been a significant amount of study in this field, the presented model focuses on boosting accuracy using a transfer learning strategy. Python and Google Colab were utilized to perform this investigation. Deep feature extraction was accomplished with the pretrained deep CNN models VGG19, InceptionV3, and MobileNetV2, and classification accuracy is used to assess performance. MobileNetV2 had an accuracy of 92%, InceptionV3 91%, and VGG19 88%; MobileNetV2 offered the highest accuracy among these networks. Such accuracy aids in the early identification of tumors before they produce physical adverse effects such as paralysis and other impairments.
36

Dralle, David N., W. Jesse Hahm, K. Dana Chadwick, Erica McCormick, and Daniella M. Rempe. "Technical note: Accounting for snow in the estimation of root zone water storage capacity from precipitation and evapotranspiration fluxes." Hydrology and Earth System Sciences 25, no. 5 (May 27, 2021): 2861–67. http://dx.doi.org/10.5194/hess-25-2861-2021.

Abstract:
Abstract. A common parameter in hydrological modeling frameworks is root zone water storage capacity (SR[L]), which mediates plant water availability during dry periods as well as the partitioning of rainfall between runoff and evapotranspiration. Recently, a simple flux-tracking-based approach was introduced to estimate the value of SR (Wang-Erlandsson et al., 2016). Here, we build upon this original method, which we argue may overestimate SR in snow-dominated catchments due to snow melt and evaporation processes. We propose a simple extension to the method presented by Wang-Erlandsson et al. (2016) and show that the approach provides a lower estimate of SR in snow-dominated watersheds. This SR dataset is available at a 1 km resolution for the continental USA, along with the full analysis code, on the Google Colab and Earth Engine platforms. We highlight differences between the original and new methods across the rain–snow transition in the Southern Sierra Nevada, California, USA. As climate warms and precipitation increasingly arrives as rain instead of snow, the subsurface may be an increasingly important reservoir for storing plant-available water between wet and dry seasons; therefore, improved estimates of SR will better clarify the future role of the subsurface as a storage reservoir that can sustain forests during seasonal dry periods and episodic drought.
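
The flux-tracking idea behind the method can be sketched as a running water deficit whose maximum is the storage capacity estimate. The snippet below, with synthetic daily fluxes, is a schematic of that accounting only; in the snow-aware variant proposed here, the input flux would be rainfall plus snowmelt rather than total precipitation.

```python
# Sketch of the flux-tracking idea: accumulate the running deficit between
# evapotranspiration and water input, and take its maximum as SR.
import numpy as np

def storage_capacity(water_in, et):
    """Maximum cumulative (ET - input) deficit over the record."""
    deficit = 0.0
    max_deficit = 0.0
    for p, e in zip(water_in, et):
        deficit = max(0.0, deficit + e - p)   # deficit grows when ET > input
        max_deficit = max(max_deficit, deficit)
    return max_deficit

rng = np.random.default_rng(0)
water_in = rng.gamma(2.0, 2.0, size=365)   # synthetic daily input flux (mm)
et = np.full(365, 3.0)                     # synthetic daily ET (mm)
print("SR estimate (mm):", round(storage_capacity(water_in, et), 1))
```
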
37

Ghorbanian, Arsalan, Seyed Ali Ahmadi, Meisam Amani, Ali Mohammadzadeh, and Sadegh Jamali. "Application of Artificial Neural Networks for Mangrove Mapping Using Multi-Temporal and Multi-Source Remote Sensing Imagery." Water 14, no. 2 (January 15, 2022): 244. http://dx.doi.org/10.3390/w14020244.

Abstract:
Mangroves, as unique coastal wetlands with numerous benefits, are endangered mainly due to the coupled effects of anthropogenic activities and climate change. Therefore, acquiring reliable and up-to-date information about these ecosystems is vital for their conservation and sustainable blue carbon development. In this regard, the joint use of remote sensing data and machine learning algorithms can assist in producing accurate mangrove ecosystem maps. This study investigated the potential of artificial neural networks (ANNs) with different topologies and specifications for mangrove classification in Iran. To this end, multi-temporal synthetic aperture radar (SAR) and multi-spectral remote sensing data from Sentinel-1 and Sentinel-2 were processed in the Google Earth Engine (GEE) cloud computing platform. Afterward, the ANN topologies and specifications considering the number of layers and neurons, learning algorithm, type of activation function, and learning rate were examined for mangrove ecosystem mapping. The results indicated that an ANN model with four hidden layers, 36 neurons in each layer, adaptive moment estimation (Adam) learning algorithm, rectified linear unit (Relu) activation function, and the learning rate of 0.001 produced the most accurate mangrove ecosystem map (F-score = 0.97). Further analysis revealed that although ANN models were subjected to accuracy decline when a limited number of training samples were used, they still resulted in satisfactory results. Additionally, it was observed that ANN models had a high resistance when training samples included wrong labels, and only the ANN model with the Adam learning algorithm produced an accurate mangrove ecosystem map when no data standardization was performed. Moreover, further investigations showed the higher potential of multi-temporal and multi-source remote sensing data compared to single-source and mono-temporal (e.g., single season) for accurate mangrove ecosystem mapping. Overall, the high potential of the proposed method, along with utilizing open-access satellite images and big-geo data processing platforms (i.e., GEE, Google Colab, and scikit-learn), made the proposed approach efficient and applicable over other study areas for all interested users.
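
The best-performing topology reported above maps directly onto scikit-learn, which the authors list among their tools. The sketch below assumes a synthetic feature matrix in place of the multi-temporal Sentinel-1/Sentinel-2 composites.

```python
# Sketch of the best ANN configuration reported above: four hidden layers of
# 36 neurons, ReLU activation, the Adam solver, and a 0.001 learning rate.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 12))            # placeholder for SAR + optical bands
y = rng.integers(0, 2, size=5000)          # 1 = mangrove, 0 = other

ann = MLPClassifier(hidden_layer_sizes=(36, 36, 36, 36),
                    activation="relu", solver="adam",
                    learning_rate_init=0.001, max_iter=200)
ann.fit(X, y)
print("training accuracy:", ann.score(X, y))
```
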
APA, Harvard, Vancouver, ISO, and other styles
38

Praneeth, Vipparthy, Kontham Raja Kumar, and Nagarjuna Karyemsetty. "Security: Intrusion Prevention System Using Deep Learning on the Internet of Vehicles." International Journal of Safety and Security Engineering 11, no. 3 (June 30, 2021): 231–37. http://dx.doi.org/10.18280/ijsse.110303.

Full text
Abstract:
The Internet of Vehicles supports the transfer of safety-related messages, which help to mitigate road accidents. It allows vehicles to communicate cooperatively and to share position and speed data among vehicles and roadside units. The vehicular network is thus prone to a large number of attacks, including false warnings, mispositioning of vehicles, etc. Authenticating messages to distinguish normal packets from attack packets, and preventing the latter, is a major challenge. This paper focuses on applying a deep learning approach using binary classification to separate normal packets from malicious packets. The process starts with preparing the training dataset from the open-source KDD99 and CICIDS 2018 datasets, consisting of 120,223 network packets with 41 features. The one-dimensional network data is preprocessed using an autoencoder to eliminate unwanted data in the initial stage. The valuable features are then filtered down to 23 out of 41, and the model is trained with structured deep neural networks combined with a softmax classifier and ReLU activation functions. The proposed intrusion prevention model is trained and tested with Google Colab, an open cloud platform, and the open-source TensorFlow. The proposed prevention classifier model was validated with a simulation dataset generated in a network simulator. The experimental results show 99.57% accuracy, which is the highest among existing RNN- and CNN-based models. In the future, the model can be trained on different datasets, which will further improve its efficiency and accuracy.
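A minimal sketch of the two-stage pipeline the abstract describes, an autoencoder that compresses the 41 packet features followed by a dense classifier with ReLU and softmax, is shown below; the classifier's hidden-layer sizes are our guesses, not the paper's.

```python
from tensorflow.keras import layers, models

# Stage 1: autoencoder compresses the 41 raw features to 23 (per the abstract)
inp = layers.Input(shape=(41,))
encoded = layers.Dense(23, activation="relu")(inp)
decoded = layers.Dense(41, activation="sigmoid")(encoded)
autoencoder = models.Model(inp, decoded)
autoencoder.compile(optimizer="adam", loss="mse")

# The trained encoder half then feeds the classifier: X23 = encoder.predict(X41)
encoder = models.Model(inp, encoded)

# Stage 2: classifier on the encoded features; depth and widths are our guesses
clf = models.Sequential([
    layers.Input(shape=(23,)),
    layers.Dense(64, activation="relu"),
    layers.Dense(32, activation="relu"),
    layers.Dense(2, activation="softmax"),  # normal vs. malicious packet
])
clf.compile(optimizer="adam",
            loss="sparse_categorical_crossentropy",
            metrics=["accuracy"])
```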
APA, Harvard, Vancouver, ISO, and other styles
39

Ardhuha, Jannatin, I Wayan Sudiarta, Lalu Rudyat Telly Savalas, Ap’aluddin, Thufail Mujaddid Al-Qoyim, Putri Julia Maemum, Mega Safana, et al. "Pelatihan Bahasa Pemograman Python Berbasis Modul Sympy Untuk Memvisualisasi Konsep Fisika Matematika Bagi Mahasiswa Calon Guru." Jurnal Pengabdian Magister Pendidikan IPA 4, no. 4 (December 18, 2021): 466–73. http://dx.doi.org/10.29303/jpmpi.v4i4.1238.

Full text
Abstract:
Basic programming skills are increasingly important in this rapidly changing era. Mastery of a programming language matters not only for practitioners of informatics and computer technology, but has also become essential for students of science, science education, and prospective science teachers. To this end, a community service activity was carried out in the form of training in the Python programming language based on the SymPy module for prospective science teacher students. The activity aimed to give prospective teacher students basic knowledge of Python with the SymPy module, make them skilled in writing programs to visualize Mathematical Physics concepts, and develop those skills to support their understanding and mastery of other physics concepts. The training covered an introduction to Python, installation of Python-Jupyter and SymPy, the use of scripts and modules, an introduction to Google Colab, and the application of Python with the SymPy module to visualize Mathematical Physics concepts. The two-day training received a good response from the participants: 95.5% rated the training material as highly relevant and in line with their expectations. In addition, the discussion and question-and-answer session at the end of the activity helped participants understand the material better. In particular, the student participants gained an appreciation of how easily and quickly Mathematical Physics problems can be solved using Python programs based on the SymPy module. From the series of activities carried out, it can be concluded that this community service activity delivered tangible benefits to prospective teacher students, so that in the future similar activities can be developed with richer and more multidisciplinary material.
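To give a flavor of the training content, here is a minimal SymPy example of the kind used to visualize Mathematical Physics concepts; the specific ODE and plot are our illustration, not the workshop's actual material.

```python
import sympy as sp

t = sp.symbols('t')
omega = sp.symbols('omega', positive=True)
y = sp.Function('y')

# Solve a classic mathematical-physics ODE symbolically: simple harmonic motion
sol = sp.dsolve(sp.Eq(y(t).diff(t, 2) + omega**2 * y(t), 0), y(t))
print(sol)   # y(t) = C1*sin(omega*t) + C2*cos(omega*t)

# Visualize a particular solution (renders inline in Colab/Jupyter)
sp.plot(sp.sin(2 * t), (t, 0, 2 * sp.pi), title='y(t) = sin(2t)')
```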
APA, Harvard, Vancouver, ISO, and other styles
40

Štefek, Alexandr, Van Thuan Pham, Vaclav Krivanek, and Khac Lam Pham. "Optimization of Fuzzy Logic Controller Used for a Differential Drive Wheeled Mobile Robot." Applied Sciences 11, no. 13 (June 29, 2021): 6023. http://dx.doi.org/10.3390/app11136023.

Full text
Abstract:
The energy-efficient motion control of a mobile robot powered by batteries is an especially important and difficult problem, which needs to be continually addressed in order to prolong the robot's independent operation time. Thus, in this article, a full optimization process for a fuzzy logic controller (FLC) is proposed. The optimization process employs a genetic algorithm (GA) to minimize the energy consumption of a differential drive wheeled mobile robot (DDWMR) while still ensuring its other motion-control performance requirements. Earlier approaches mainly focused on energy reduction by planning the shortest path, whereas this approach optimizes the controller to minimize the robot's acceleration during point-to-point movement and thus minimize energy consumption. The proposed optimized controller is based on fuzzy logic systems. At first, an FLC was designed, based on experiment as well as experience, to navigate the DDWMR to a known destination by following a given path. Next, a full optimization process using the GA automatically generates the best parameters of all membership functions for the FLC. To evaluate its effectiveness, a set of other well-known controllers was implemented on the Google Colab® and Jupyter platforms in Python and compared against it. The simulation results show that about a 110% reduction in energy consumption was achieved using the proposed method compared to the best of six alternative controllers. This simulation program has also been published as open-source code for all readers who want to continue the research.
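A toy sketch of the optimization loop described above, a genetic algorithm tuning fuzzy membership-function centers against an acceleration-penalty cost, follows; the fitness function is a stand-in, since a faithful evaluation would simulate the DDWMR itself.

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(params):
    """Proxy cost: penalize jerky accelerations a toy control law commands.
    A real evaluation would simulate the DDWMR trajectory; this stands in."""
    centers = np.sort(params)               # membership-function centers, ordered
    errors = np.linspace(-1, 1, 50)         # sampled tracking errors
    accel = np.interp(errors, [-1, 0, 1], centers)  # toy defuzzified output
    return np.sum(np.diff(accel) ** 2)      # smoother command -> lower cost

pop = rng.uniform(-1, 1, size=(40, 3))      # 40 chromosomes, 3 MF centers each
for gen in range(100):
    costs = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(costs)[:20]]   # truncation selection (elitism)
    children = parents[rng.integers(0, 20, 20)] + rng.normal(0, 0.05, (20, 3))
    pop = np.vstack([parents, children])    # survivors + mutated offspring

best = pop[np.argmin([fitness(ind) for ind in pop])]
print("Optimized MF centers:", np.sort(best))
```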
APA, Harvard, Vancouver, ISO, and other styles
41

Iyer, Tharun J., Alex Noel Joseph Raj, Sushil Ghildiyal, and Ruban Nersisson. "Performance analysis of lightweight CNN models to segment infectious lung tissues of COVID-19 cases from tomographic images." PeerJ Computer Science 7 (February 22, 2021): e368. http://dx.doi.org/10.7717/peerj-cs.368.

Full text
Abstract:
The pandemic of Coronavirus Disease-19 (COVID-19) has spread around the world, causing an existential health crisis. Automated detection of COVID-19 infections in the lungs from Computed Tomography (CT) images offers huge potential for tackling the problem of slow detection and augments conventional diagnostic procedures. However, segmenting COVID-19 from CT scans is problematic due to high variation in the types of infection and low contrast between healthy and infected tissues. Fast and accurate results are required when segmenting lung CT scans for COVID-19; furthermore, due to the pandemic, most of the research community has opted for cloud-based services such as Google Colab to develop their algorithms. High accuracy can be achieved using deep networks, but prediction time varies as resources are shared among many users, creating the need to compare different lightweight segmentation models. To address this issue, we analyze the segmentation of COVID-19 using four Convolutional Neural Networks (CNNs). The images in our dataset are preprocessed to remove motion artifacts. The four networks are UNet, Segmentation Network (SegNet), High-Resolution Network (HRNet), and VGG UNet. Trained on our dataset of more than 3,000 images, HRNet was found to be the best performing network, achieving an accuracy of 96.24% and a Dice score of 0.9127. The analysis shows that lightweight CNN models perform better than other neural network models when segmenting tissue infected by COVID-19 from CT slices.
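The Dice score used to rank the networks is straightforward to compute; a minimal implementation on toy binary masks:

```python
import numpy as np

def dice_score(pred, truth, eps=1e-7):
    """Dice = 2|A intersect B| / (|A| + |B|) on binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    return (2.0 * inter + eps) / (pred.sum() + truth.sum() + eps)

# Toy 4x4 masks standing in for a CT slice's infection segmentation
pred  = np.array([[0,1,1,0],[0,1,1,0],[0,0,1,0],[0,0,0,0]])
truth = np.array([[0,1,1,0],[0,1,0,0],[0,0,1,0],[0,0,0,0]])
print(f"Dice: {dice_score(pred, truth):.4f}")
```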
APA, Harvard, Vancouver, ISO, and other styles
42

Chukwunazo, Ezeofor, Akpado Kenneth, and Ulasi Afamefuna. "Predictive Model for Maize Stem Borers’ Classification in Precision Farming." International Journal of Artificial Intelligence & Applications 12, no. 04 (July 31, 2021): 33–49. http://dx.doi.org/10.5121/ijaia.2021.12403.

Full text
Abstract:
This paper presents a predictive model for stem borers' classification in precision farming. The recent announcement of the aggressive attack of stem borers (Spodoptera species) on maize crops in Africa is alarming. These species migrate in large numbers and feed on the maize leaf, stem, and ear. The males of these species are the target, because after mating with their female counterparts, thousands of eggs are laid, producing the larvae that create the havoc. Currently, Nigerian farmers find it difficult to distinguish between the targeted species (Fall Armyworm-FAW, African Armyworm-AAW, and Egyptian cotton leaf worm-ECLW only) because they look alike in appearance. For these reasons, a network model that predicts the presence of these species in the maize farm is proposed. The moth species were captured using delta pheromone traps and laboratory breeding for each category. The captured images were preprocessed and stored in an online Google Drive image dataset folder. The convolutional neural network (CNN) model for classifying these targeted maize moths was designed from scratch. The Google Colab platform with Python libraries was used to train the model, called MothNet. The images of the FAW, AAW, and ECLW were input to the designed MothNet model during the learning process. Dropout and data augmentation were added to the architecture of the model for efficient prediction. After training the MothNet model, the validation accuracy achieved was 90.37% with a validation loss of 24.72%, and the training accuracy was 90.8% with a loss of 23.25%; training took 5 minutes 33 seconds. Due to the small number of images gathered (1,674), the model's prediction on each image was of low confidence. Because of this, transfer learning was deployed: a pretrained ResNet-50 model was selected and modified. The modified ResNet-50 model was fine-tuned and tested. Its validation accuracy was 99.21% with a loss of 3.79%, and its training accuracy was 99.75% with a loss of 2.55%, within 10 minutes 5 seconds. Hence, the MothNet model can be improved by gathering more images and retraining the system for optimum performance, while the modified ResNet-50 is recommended for integration into an Internet of Things device for on-site maize moth classification.
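The transfer-learning step the abstract describes, freezing a pretrained ResNet-50 backbone, attaching a small three-class head with dropout, then fine-tuning at a low learning rate, can be sketched in Keras as follows; the image size and head design are our assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import ResNet50

# Pretrained ImageNet backbone, frozen; new 3-class head (FAW, AAW, ECLW)
base = ResNet50(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dropout(0.5),                        # abstract mentions dropout
    layers.Dense(3, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])

# Fine-tuning pass: unfreeze the backbone and recompile with a low learning rate
base.trainable = True
model.compile(optimizer=tf.keras.optimizers.Adam(1e-5),
              loss="categorical_crossentropy", metrics=["accuracy"])
```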
APA, Harvard, Vancouver, ISO, and other styles
43

Ambildhuke, Geeta Mahadeo, and Barnali Gupta Banik. "Transfer Learning Approach - An Efficient Method to Predict Rainfall Based on Ground-Based Cloud Images." Ingénierie des systèmes d information 26, no. 4 (August 31, 2021): 345–56. http://dx.doi.org/10.18280/isi.260402.

Full text
Abstract:
Clouds play a vital role in climate prediction. Rainfall prediction also depends largely on the status and types of clouds present in the sky. Therefore, cloud identification is a most exciting and vital topic in meteorology and attracts researchers from other areas. This paper presents a transfer learning technique to predict rainfall based on the ground-based cloud images responsible for rain. It predicts the estimated rainfall by identifying the type of cloud, taking cloud images as input. The cloud images in the dataset are divided into three categories (classes), labeled no rain to very low rain, low to medium rain, and medium to high rain, based on the associated precipitation. This model will be most helpful to farmers managing their irrigation, who can know the status of rainfall before every irrigation cycle; it can also support decisions about outdoor events with prior knowledge of rain. The model is trained on three classes to predict rainfall and was first experimented with using a CNN. To improve performance, the experiment was carried out with several strong pretrained models, VGG16, Inception-V3, and Xception, using transfer learning, and the results were compared to the regular CNN model. The transfer learning technique outperformed the CNN, achieving good accuracy even though the dataset is small, and produced the best possible results for the model. Google Colab with the GPU setting makes the task fast and efficient, delivering appropriate results in time; the performance achieved by transfer learning is excellent and can fulfill real-time requirements.
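Since the paper compares several pretrained backbones under the same task, the experiment can be sketched as a loop over keras.applications models; the head layout below is our simplification.

```python
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16, InceptionV3, Xception

def build_classifier(backbone_cls, input_shape=(224, 224, 3)):
    """Frozen pretrained backbone + small head for the three rainfall classes."""
    base = backbone_cls(weights="imagenet", include_top=False,
                        input_shape=input_shape)
    base.trainable = False
    return models.Sequential([
        base,
        layers.GlobalAveragePooling2D(),
        layers.Dense(3, activation="softmax"),  # no/low, medium, high rain
    ])

for backbone in (VGG16, InceptionV3, Xception):
    model = build_classifier(backbone)
    model.compile(optimizer="adam", loss="categorical_crossentropy",
                  metrics=["accuracy"])
    print(backbone.__name__, model.count_params(), "parameters")
```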
APA, Harvard, Vancouver, ISO, and other styles
44

Siddiqui, Shama, Rory Nesbitt, Muhammad Zeeshan Shakir, Anwar Ahmed Khan, Ausaf Ahmed Khan, Karima Karam Khan, and Naeem Ramzan. "Artificial Neural Network (ANN) Enabled Internet of Things (IoT) Architecture for Music Therapy." Electronics 9, no. 12 (November 29, 2020): 2019. http://dx.doi.org/10.3390/electronics9122019.

Full text
Abstract:
Alternative medicine techniques such as music therapy have been a recent interest of medical practitioners and researchers. Significant clinical evidence suggests that music has a positive influence on pain, stress, and anxiety for patients dealing with cancer, pre- and post-surgery care, insomnia, childbirth, end-of-life care, etc. Similarly, the technologies of the Internet of Things (IoT), Body Area Networks (BAN), and Artificial Neural Networks (ANN) have been playing a vital role in improving the health and safety of the population by offering continuous remote monitoring facilities and immediate medical response. In this article, we propose a novel ANN-enabled IoT architecture to integrate music therapy with BAN and ANN, providing immediate assistance to patients by automating the process of music therapy. The proposed architecture comprises monitoring the body parameters of patients using BAN, categorizing the disease using ANN, and playing music of the most appropriate type over the patient's handheld device when required. In addition, the ANN will also exploit music analytics, such as the type and duration of music played and its impact on the patient's body parameters, to iteratively improve the process of automated music therapy. We detail the development of a prototype Android app which builds a playlist and plays music according to the emotional state of the user in real time. Data for pulse rate, blood pressure, and breath rate have been generated using Node-RED, and the ANN has been created using Google Colaboratory (Colab). An MQTT broker has been used to send the generated data to the Android device. The ANN uses binary and categorical cross-entropy loss functions, the Adam optimiser, and the ReLU activation function to predict the mood of the patient and suggest the most appropriate type of music.
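A minimal sketch of the data path described above, an MQTT subscriber feeding incoming vitals to a Keras ANN, is shown below; the broker, topic, payload fields, and mood categories are hypothetical placeholders, and the untrained model stands in for the paper's trained network.

```python
import json
import numpy as np
import paho.mqtt.client as mqtt                 # assumes the paho-mqtt 1.x API
from tensorflow.keras import layers, models

# Untrained stand-in for the paper's ANN; a real deployment would load weights
ann_model = models.Sequential([
    layers.Input(shape=(3,)),                   # pulse, blood pressure, breath rate
    layers.Dense(16, activation="relu"),
    layers.Dense(4, activation="softmax"),      # hypothetical mood categories
])

BROKER, TOPIC = "test.mosquitto.org", "ban/vitals"   # hypothetical names

def on_message(client, userdata, msg):
    vitals = json.loads(msg.payload)            # e.g. {"pulse": 88, "bp": 130, "breath": 18}
    x = np.array([[vitals["pulse"], vitals["bp"], vitals["breath"]]], dtype=float)
    mood = int(ann_model.predict(x, verbose=0).argmax())
    print("Predicted emotional state:", mood)   # would select a playlist

client = mqtt.Client()
client.on_message = on_message
client.connect(BROKER, 1883)
client.subscribe(TOPIC)
client.loop_forever()
```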
APA, Harvard, Vancouver, ISO, and other styles
45

Tillaguango Jiménez, Jonathan Ricardo. "Revisión Sistemática de Literatura: Análisis de viabilidad para la detección y diagnóstico de Covid-19, aplicando modelos de Inteligencia Artificial (IA)." CEDAMAZ 11, no. 2 (December 24, 2021): 142–51. http://dx.doi.org/10.54753/cedamaz.v11i2.1183.

Full text
Abstract:
Since the declaration of the health emergency caused by Covid-19 in March 2020, there have been approximately 219 million infections to date, of which 4.5 million have resulted in death. In our country, there are an estimated 508 thousand confirmed cases and approximately 32 thousand deaths from this disease. Despite the availability of verified methods for diagnosing Covid-19, Polymerase Chain Reaction (PCR) and Real-Time PCR (RT-PCR) tests tend to produce false positives and negatives at rates between 30% and 40%. For this reason, supporting traditional methods in making a precise clinical diagnosis, using chest X-rays as input data, represents a radical change in Covid-19 detection, since it is a much more comfortable alternative for the patient and, more importantly, increases precision while reducing false positive and negative rates. This Systematic Literature Review (SLR), based on Barbara Kitchenham's methodology, seeks to support the creation of a model based on the Convolutional Neural Network (CNN) architecture capable of analyzing chest X-rays for the diagnosis of Covid-19. As a result, the three research questions posed, which served to delimit this study, were answered. To this end, 41 related works (RW) were analyzed, focusing on different diagnostic methods based on Artificial Intelligence (AI); 16 of these RW concerned the use of CNNs for the diagnosis of Covid-19 through the analysis of computed tomography (CT) scans and chest X-rays, the latter being the most viable option for application in our environment due to data availability. Furthermore, the resource requirements of these methods are affordable both locally, using an Nvidia Graphics Processing Unit (GPU) and more than 8 GB of RAM as a baseline, and in the cloud, using Google Colab.
APA, Harvard, Vancouver, ISO, and other styles
46

Kruitbosch, Herbert T., Yasmin Mzayek, Sara Omlor, Paolo Guerra, and Andreas Milias-Argeitis. "A convolutional neural network for segmentation of yeast cells without manual training annotations." Bioinformatics 38, no. 5 (December 10, 2021): 1427–33. http://dx.doi.org/10.1093/bioinformatics/btab835.

Full text
Abstract:
Abstract Motivation Single-cell time-lapse microscopy is a ubiquitous tool for studying the dynamics of complex cellular processes. While imaging can be automated to generate very large volumes of data, the processing of the resulting movies to extract high-quality single-cell information remains a challenging task. The development of software tools that automatically identify and track cells is essential for realizing the full potential of time-lapse microscopy data. Convolutional neural networks (CNNs) are ideally suited for such applications, but require great amounts of manually annotated data for training, a time-consuming and tedious process. Results We developed a new approach to CNN training for yeast cell segmentation based on synthetic data and present (i) a software tool for the generation of synthetic images mimicking brightfield images of budding yeast cells and (ii) a convolutional neural network (Mask R-CNN) for yeast segmentation that was trained on a fully synthetic dataset. The Mask R-CNN performed excellently on segmenting actual microscopy images of budding yeast cells, and a density-based spatial clustering algorithm (DBSCAN) was able to track the detected cells across the frames of microscopy movies. Our synthetic data creation tool completely bypassed the laborious generation of manually annotated training datasets, and can be easily adjusted to produce images with many different features. The incorporation of synthetic data creation into the development pipeline of CNN-based tools for budding yeast microscopy is a critical step toward the generation of more powerful, widely applicable and user-friendly image processing tools for this microorganism. Availability and implementation The synthetic data generation code can be found at https://github.com/prhbrt/synthetic-yeast-cells. The Mask R-CNN as well as the tuning and benchmarking scripts can be found at https://github.com/ymzayek/yeastcells-detection-maskrcnn. We also provide Google Colab scripts that reproduce all the results of this work. Supplementary information Supplementary data are available at Bioinformatics online.
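The tracking step, clustering detected cell centroids across movie frames with DBSCAN, can be sketched with scikit-learn; the time-axis scaling and eps values below are our guesses for toy data, not the authors' tuned settings.

```python
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(1)
n_frames = 10

# Toy detections: two cells drifting slowly across 10 movie frames
detections = []
for base in ([20.0, 20.0], [60.0, 40.0]):
    drift = np.cumsum(rng.normal(0, 0.5, (n_frames, 2)), axis=0)
    for t, pos in enumerate(np.array(base) + drift):
        detections.append([pos[0], pos[1], t])
detections = np.array(detections)

# Scale the time axis so one cell's inter-frame motion stays within eps
detections[:, 2] *= 0.5
labels = DBSCAN(eps=3.0, min_samples=3).fit_predict(detections)
print("Track ID per detection:", labels)   # same label = same tracked cell
```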
APA, Harvard, Vancouver, ISO, and other styles
47

Wazery, Y. M., Marwa E. Saleh, Abdullah Alharbi, and Abdelmgeid A. Ali. "Abstractive Arabic Text Summarization Based on Deep Learning." Computational Intelligence and Neuroscience 2022 (January 11, 2022): 1–14. http://dx.doi.org/10.1155/2022/1566890.

Full text
Abstract:
Text summarization (TS) is considered one of the most difficult tasks in natural language processing (NLP), and it remains an important challenge for modern computer systems despite all their recent improvements. Many papers and research studies address this task in the literature, but most are carried out on extractive summarization; few tackle abstractive summarization, especially in the Arabic language, due to its complexity. In this paper, an abstractive Arabic text summarization system is proposed, based on a sequence-to-sequence model. This model works through two components, an encoder and a decoder. Our aim is to develop the sequence-to-sequence model using several deep artificial neural networks to investigate which of them achieves the best performance. Different layers of Gated Recurrent Units (GRU), Long Short-Term Memory (LSTM), and Bidirectional Long Short-Term Memory (BiLSTM) have been used to develop the encoder and the decoder. In addition, the global attention mechanism has been used because it provides better results than the local attention mechanism. Furthermore, AraBERT preprocessing has been applied in the data preprocessing stage, which helps the model understand Arabic words and achieves state-of-the-art results. Moreover, a comparison between the skip-gram and continuous bag of words (CBOW) word2vec word embedding models has been made. We built these models using the Keras library and ran them seamlessly on Google Colab Jupyter notebooks. Finally, the proposed system is evaluated through the ROUGE-1, ROUGE-2, ROUGE-L, and BLEU evaluation metrics. The experimental results show that three layers of BiLSTM hidden states at the encoder achieve the best performance. In addition, our proposed system outperforms the other latest research studies. The results also show that abstractive summarization models that use the skip-gram word2vec model outperform those that use the CBOW word2vec model.
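A minimal Keras sketch of the encoder-decoder family compared in the paper, here a single BiLSTM encoder layer (the paper found three layers best) with a dot-product, i.e. global-style, attention over encoder states, follows; the vocabulary and layer sizes are placeholders.

```python
from tensorflow.keras import layers, models

VOCAB, EMB, UNITS = 30000, 128, 256   # placeholder sizes

# Encoder: embedding + BiLSTM; forward/backward states are concatenated
enc_in = layers.Input(shape=(None,))
enc_emb = layers.Embedding(VOCAB, EMB)(enc_in)
enc_out, fh, fc, bh, bc = layers.Bidirectional(
    layers.LSTM(UNITS, return_sequences=True, return_state=True))(enc_emb)
state_h = layers.Concatenate()([fh, bh])
state_c = layers.Concatenate()([fc, bc])

# Decoder: LSTM initialized with encoder states, plus attention over enc_out
dec_in = layers.Input(shape=(None,))
dec_emb = layers.Embedding(VOCAB, EMB)(dec_in)
dec_out, _, _ = layers.LSTM(
    2 * UNITS, return_sequences=True, return_state=True)(
        dec_emb, initial_state=[state_h, state_c])
context = layers.Attention()([dec_out, enc_out])   # query, value
merged = layers.Concatenate()([dec_out, context])
probs = layers.Dense(VOCAB, activation="softmax")(merged)

model = models.Model([enc_in, dec_in], probs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.summary()
```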
APA, Harvard, Vancouver, ISO, and other styles
48

O’Connor, Owen M., Razan N. Alnahhas, Jean-Baptiste Lugagne, and Mary J. Dunlop. "DeLTA 2.0: A deep learning pipeline for quantifying single-cell spatial and temporal dynamics." PLOS Computational Biology 18, no. 1 (January 18, 2022): e1009797. http://dx.doi.org/10.1371/journal.pcbi.1009797.

Full text
Abstract:
Improvements in microscopy software and hardware have dramatically increased the pace of image acquisition, making analysis a major bottleneck in generating quantitative, single-cell data. Although tools for segmenting and tracking bacteria within time-lapse images exist, most require human input, are specialized to the experimental setup, or lack accuracy. Here, we introduce DeLTA 2.0, a pure Python workflow that can rapidly and accurately analyze images of single cells on two-dimensional surfaces to quantify gene expression and cell growth. The algorithm uses deep convolutional neural networks to extract single-cell information from time-lapse images, requiring no human input after training. DeLTA 2.0 retains all the functionality of the original version, which was optimized for bacteria growing in the mother machine microfluidic device, but extends results to two-dimensional growth environments. Two-dimensional environments represent an important class of data because they are more straightforward to implement experimentally, they offer the potential for studies using co-cultures of cells, and they can be used to quantify spatial effects and multi-generational phenomena. However, segmentation and tracking are significantly more challenging tasks in two dimensions due to exponential increases in the number of cells. To showcase this new functionality, we analyze mixed populations of antibiotic resistant and susceptible cells, and also track pole age and growth rate across generations. In addition to the two-dimensional capabilities, we also introduce several major improvements to the code that increase accessibility, including the ability to accept many standard microscopy file formats as inputs and the introduction of a Google Colab notebook so users can try the software without installing the code on their local machine. DeLTA 2.0 is rapid, with run times of less than 10 minutes for complete movies with hundreds of cells, and is highly accurate, with error rates around 1%, making it a powerful tool for analyzing time-lapse microscopy data.
APA, Harvard, Vancouver, ISO, and other styles
49

Hasan, Tayyabah, Fahad Ahmad, Muhammad Rizwan, Nasser Alshammari, Saad Awadh Alanazi, Iftikhar Hussain, and Shahid Naseem. "Edge Caching in Fog-Based Sensor Networks through Deep Learning-Associated Quantum Computing Framework." Computational Intelligence and Neuroscience 2022 (January 7, 2022): 1–17. http://dx.doi.org/10.1155/2022/6138434.

Full text
Abstract:
Fog computing (FC) based sensor networks have emerged as a propitious archetype for next-generation wireless communication technology, with caching, communication, and storage capacity services at the edge. Mobile edge computing (MEC) is a new era of digital communication with a rising demand for intelligent devices and applications. It faces performance deterioration and quality of service (QoS) degradation problems, especially in Internet of Things (IoT) based scenarios. Therefore, existing caching strategies need to be enhanced to increase the cache hit ratio and manage the limited storage to accelerate content delivery. Alternatively, quantum computing (QC) appears to be a prospect for more or less every typical computing problem. The framework is basically a merger of a deep learning (DL) agent deployed at the network edge with a quantum memory module (QMM). Firstly, the DL agent prioritizes caching contents via the self-organizing maps (SOM) algorithm, and secondly, the prioritized contents are stored in the QMM using a Two-Level Spin Quantum Phenomenon (TLSQP). After selecting the most appropriate lattice map (32 × 32) over 750,000 iterations using SOMs, the data points below the dark blue region are mapped onto the data frame to get the videos. These videos are considered high priority for trending, according to the input parameters provided in the dataset. Similarly, the light blue region is mapped to get medium-priority content. After training the SOM algorithm, the topographic error (TE) value, together with the quantization error (QE) value (0.0000235), identified the most appropriate map after 750,000 iterations. In addition, the power of QC is due to the inherent quantum parallelism (QP) associated with the superposition and entanglement principles. A quantum computer with n qubits can store and process 2^n possible combinations of qubit states simultaneously, reducing the utilization of resources compared to conventional computing. The cache hit ratio is thus improved by ranking the content, removing redundant and less important content, efficiently storing the high- and medium-priority content using QP, and delivering precise results. The experiments for content prioritization were conducted using Google Colab, and IBM's Quantum Experience was used to simulate the quantum phenomena.
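The SOM prioritization stage maps cleanly onto the MiniSom library; a sketch with the 32 × 32 lattice and iteration count from the abstract follows, with random stand-in data and guessed sigma/learning-rate values (the quantum memory stage is not reproducible in a few lines).

```python
import numpy as np
from minisom import MiniSom   # pip install minisom

# Random stand-in for the video-feature dataset (rows = videos, cols = features)
rng = np.random.default_rng(0)
data = rng.random((1000, 8))

# 32 x 32 lattice as selected in the study; sigma and learning rate are guesses
som = MiniSom(32, 32, input_len=8, sigma=1.5, learning_rate=0.5, random_seed=0)
som.train_random(data, 750000)   # iteration count from the abstract; lower for a quick test

print("Quantization error:", som.quantization_error(data))
print("Topographic error:", som.topographic_error(data))

# Each video's best-matching unit; map regions then translate into cache priority
bmus = [som.winner(x) for x in data]
```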
APA, Harvard, Vancouver, ISO, and other styles
50

Parra, Federico, Yannick Benezeth, and Fan Yang. "Automatic Assessment of Emotion Dysregulation in American, French, and Tunisian Adults and New Developments in Deep Multimodal Fusion: Cross-sectional Study." JMIR Mental Health 9, no. 1 (January 24, 2022): e34333. http://dx.doi.org/10.2196/34333.

Full text
Abstract:
Background Emotion dysregulation is a key dimension of adult psychological functioning. There is an interest in developing a computer-based, multimodal, and automatic measure. Objective We wanted to train a deep multimodal fusion model to estimate emotion dysregulation in adults based on their responses to the Multimodal Developmental Profile, a computer-based psychometric test, using only a small training sample and without transfer learning. Methods Two hundred and forty-eight participants from 3 different countries took the Multimodal Developmental Profile test, which exposed them to 14 picture and music stimuli and asked them to express their feelings about them, while the software extracted the following features from the video and audio signals: facial expressions, linguistic and paralinguistic characteristics of speech, head movements, gaze direction, and heart rate variability derivatives. Participants also responded to the brief version of the Difficulties in Emotional Regulation Scale. We separated and averaged the feature signals that corresponded to the responses to each stimulus, building a structured data set. We transformed each person's per-stimulus structured data into a multimodal codex, a grayscale image created by projecting each feature's normalized intensity value onto a Cartesian space, deriving each pixel's position by applying the Uniform Manifold Approximation and Projection method. The codex sequence was then fed to 2 network types. First, 13 convolutional neural networks dealt with the spatial aspect of the problem, estimating emotion dysregulation by analyzing each of the codified responses. These convolutional estimations were then fed to a transformer network that decoded the temporal aspect of the problem, estimating emotional dysregulation based on the succession of responses. We introduce a Feature Map Average Pooling layer, which computes the mean of the convolved feature maps produced by our convolution layers, dramatically reducing the number of learnable weights and increasing regularization through an ensembling effect. We implemented 8-fold cross-validation to provide a good enough estimation of the generalization ability to unseen samples. Most of the experiments mentioned in this paper are easily replicable using the associated Google Colab system. Results We found an average Pearson correlation (r) of 0.55 (with an average P value of <.001) between ground truth emotion dysregulation and our system's estimation of emotion dysregulation. An average mean absolute error of 0.16 and a mean concordance correlation coefficient of 0.54 were also found. Conclusions In psychometry, our results represent excellent evidence of convergence validity, suggesting that the Multimodal Developmental Profile could be used in conjunction with this methodology to provide a valid measure of emotion dysregulation in adults. Future studies should replicate our findings using a hold-out test sample. Our methodology could be implemented more generally to train deep neural networks where only small training samples are available.
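The Feature Map Average Pooling layer, described above as computing the mean of the convolved feature maps, can be sketched as a tiny custom Keras layer; whether the original preserves spatial dimensions as done here is our assumption.

```python
import tensorflow as tf
from tensorflow.keras import layers

class FeatureMapAveragePooling(layers.Layer):
    """Averages across the channel axis: (B, H, W, C) -> (B, H, W, 1).
    Collapsing the stack of feature maps into their mean removes the
    learnable weights a 1x1 convolution would need and acts like an
    ensemble over the maps, matching the regularization effect the
    abstract describes."""
    def call(self, inputs):
        return tf.reduce_mean(inputs, axis=-1, keepdims=True)

# Usage in a small convolutional block (shapes and filter counts are our choices)
x = layers.Input(shape=(64, 64, 3))
h = layers.Conv2D(32, 3, activation="relu", padding="same")(x)
h = FeatureMapAveragePooling()(h)              # (None, 64, 64, 1)
model = tf.keras.Model(x, h)
model.summary()
```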
APA, Harvard, Vancouver, ISO, and other styles