To see the other types of publications on this topic, follow the link: Plotly.

Journal articles on the topic 'Plotly'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Select a source type:

Consult the top 50 journal articles for your research on the topic 'Plotly.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Adebanjo, Seun, and Emmanuel Banchani. "Application of Different Python Libraries for Visualisation of Female Genital Mutilation." International Journal of Data Science 4, no. 2 (2023): 67–83. http://dx.doi.org/10.18517/ijods.4.2.67-83.2023.

Full text
Abstract:
Utilizing data visualization facilitates the analysis and comprehension of common data provided by the media, individuals, governments, and other sectors. Python is a well-known programming language that excels at scientific data visualization. This thesis utilizes a variety of Python modules, including Pandas, NumPy, Matplotlib, Seaborn, Plotly, and Bokeh, to illustrate female genital mutilation. The purpose of this thesis is to illustrate female genital mutilation and explain its performance pattern using a complex, interactive diagram that integrates multiple types of Python libraries. In comparison to other libraries, Plotly is the simplest, yet it performs at the highest level. NumPy and Matplotlib are combined to produce hexbin charts. NumPy provides an N-dimensional plot, and Matplotlib allows for the plot's colours to be customized. Despite its limited customization options, the Seaborn library is suitable for both data visualization and statistical modelling. Due to this deficiency, the Seaborn library is frequently combined with Matplotlib to generate superior visualizations. As a result, this thesis will be recommended to both specialists and novices as worthwhile reading. In addition, it will assist the government in drafting legislation to end female genital mutilation. They will comprehend the significance of combining multiple Python modules to generate intricate interactive diagrams for data visualization in the field of data science. This information will be posted online to contribute to the corpus of knowledge.
APA, Harvard, Vancouver, ISO, and other styles
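The abstract above notes that NumPy and Matplotlib are combined to produce hexbin charts with customizable colours. A minimal sketch of that combination, using synthetic random data rather than the paper's dataset, might look like this:

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen; no display needed
import matplotlib.pyplot as plt
import numpy as np

# Synthetic 2D data standing in for the variables analysed in the paper
rng = np.random.default_rng(42)
x = rng.normal(0, 1, 1000)
y = 0.5 * x + rng.normal(0, 1, 1000)

# NumPy supplies the arrays; Matplotlib draws the hexagonal bins
fig, ax = plt.subplots()
hb = ax.hexbin(x, y, gridsize=20, cmap="viridis")  # colours are customizable, as the abstract notes
ax.set(xlabel="x", ylabel="y", title="Hexbin density")
fig.colorbar(hb, ax=ax, label="count")

counts = hb.get_array()  # per-hexagon point counts
```

Each hexagon's colour encodes how many points fall inside it, which is why the pairing works well for dense scatter data.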
2

Аттокуров, Урмат Тологонович, Гулнарида Расулбековна Жалилова та Алийма Торожановна Маматкасымова. "ОБРАБОТКА БОЛЬШИХ ДАННЫХ С ИСПОЛЬЗОВАНИЕМ ЯЗЫКА PYTHON В МЕДИЦИНЕ". Engineering problems and innovations 1, № 2 (2023): 49–54. https://doi.org/10.5281/zenodo.7902867.

Full text
Abstract:
This article is devoted to the use and further application of Python libraries in the medical industry. These libraries include NumPy, Pandas, Scikit-learn, Keras and TensorFlow, Matplotlib, Seaborn, and Plotly. Using the Keras library as an example, a problem involving the analysis of medical data was considered.
APA, Harvard, Vancouver, ISO, and other styles
3

Jin, Yuchen, Chicheng Xu, Tao Lin, Weichang Li, and Mohamed Larbi Zeghlache. "Python Dash for Well Data Validation, Visualization, and Processing." Petrophysics – The SPWLA Journal of Formation Evaluation and Reservoir Description 64, no. 4 (2023): 568–73. http://dx.doi.org/10.30632/pjv64n4-2023a6.

Full text
Abstract:
Open-source Python libraries play a critical role in facilitating the digital transformation of the energy industry by enabling quick deployment of intelligent data-driven solutions. In this paper, we demonstrate an example of using Dash, a Python framework introduced by Plotly for creating interactive web applications. Fit-for-purpose software was tailored for an in-house research project in well-data validation, visualization, and processing. The application automates quality control of different sets of well-log data files (DLIS/LIS or LAS) for completeness, validity, and repeatability. For this tedious and critical process, a human expert is normally required to perform the tasks using well-log interpretation software. A typical digital log file may contain hundreds or thousands of data channels that are difficult to visualize and validate manually. Sometimes it takes multiple iterations of communication between the data provider and the data receiver to achieve a final valid deliverable copy. By utilizing open-source Python libraries, such as DLISIO (Equinor ASA, 2022) and LASIO (Inverarity, 2023), a web interface based on Plotly-Dash is developed to visualize and check all data channels automatically and then produce a compliance summary report in PDF or HTML format. The time for validating one DLIS file that has hundreds of data channels is significantly reduced. Implementation of this automated data quality control workflow demonstrates that open-source Python libraries can significantly shorten the development-to-deployment cycle. Quick implementation of intelligent software based on Python Plotly-Dash enables customized solutions or workflows that further improve both the effectiveness and efficiency of routine data quality control processes.
APA, Harvard, Vancouver, ISO, and other styles
4

Ergashev, Otabek, Nurillo Mamadaliev, Sardorbek Khonturaev, and Muzaffar Sobirov. "Programming and processing of big data using python language in medicine." E3S Web of Conferences 538 (2024): 02027. http://dx.doi.org/10.1051/e3sconf/202453802027.

Full text
Abstract:
This article is devoted to the use and further application of Python libraries in the medical industry. These libraries include NumPy, Pandas, Scikit-learn, Keras and TensorFlow, Matplotlib, Seaborn, and Plotly. Using the Keras library as an example, a problem involving the analysis of medical data was considered.
APA, Harvard, Vancouver, ISO, and other styles
5

Yerlekar, Prof Ashwini, Shreyash Urkude, Nisarg Thool, Mrunal Maheshkar, Tushar Gedam, and Subodh Rangari. "Visualising and Forecasting Stock Prices with Flask." International Journal for Research in Applied Science and Engineering Technology 11, no. 4 (2023): 4113–15. http://dx.doi.org/10.22214/ijraset.2023.51207.

Full text
Abstract:
Stock price forecasting and visualisation are critical tasks for investors and traders in financial markets. In this project, we have developed a web application that visualises a company's stock prices in the form of charts and predicts future stock prices using machine learning algorithms. The web application is built using Flask (Python), a popular web framework, and integrates several APIs and libraries, including the News API, the Alpha Vantage API, Beautiful Soup (bs4), Pandas, NumPy, Plotly, and Scikit-learn. The application provides users with a basic overview of the company, current stock prices, news related to the company, and visualisations of historical stock prices in the form of charts. The historical stock prices are fetched using the Alpha Vantage API, and the charts are generated using the Plotly library. The web application also includes a predictive model built using the Support Vector Regression (SVR) algorithm, which forecasts stock prices. This paper demonstrates the potential of machine learning algorithms and web technologies in the field of stock price forecasting and visualisation. The web application developed provides users with valuable insights and information related to stock prices, and it can be used as a powerful tool in financial markets.
APA, Harvard, Vancouver, ISO, and other styles
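The SVR forecasting step described above can be sketched with scikit-learn. The series below is synthetic and the hyperparameters are illustrative assumptions, not the paper's actual configuration:

```python
import numpy as np
from sklearn.svm import SVR

# Synthetic closing prices standing in for a series fetched from a market API
rng = np.random.default_rng(7)
days = np.arange(120).reshape(-1, 1)          # day index as the single feature
close = 100 + 0.3 * days.ravel() + rng.normal(0, 2, 120)

# Fit on the first 100 days, then forecast the next 20
model = SVR(kernel="rbf", C=100.0, gamma=0.01)
model.fit(days[:100], close[:100])
forecast = model.predict(days[100:])
```

In a real pipeline the forecast would then be overlaid on the historical series in a Plotly chart, as the abstract describes.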
6

Zedek, Lukáš, and Jan Šembera. "Comparison of Waste Production in Regions of the Czech Republic in 2019-2021." ACC JOURNAL 30, no. 1 (2024): 18–23. http://dx.doi.org/10.2478/acc-2024-0002.

Full text
Abstract:
The purpose of this article is to illustrate the intuitively understood links between the social and economic characteristics of an area and the waste production at a given location. These relationships have been investigated using statistical data from thirteen regions in the Czech Republic between 2019 and 2021. In order to evaluate the data, freely available tools such as Python 3.8.16 and a number of its libraries, e.g. matplotlib, plotly, sklearn, numpy and others, have been used.
APA, Harvard, Vancouver, ISO, and other styles
7

Li, Ran, and Usama Bilal. "Interactive web‐based data visualization with R, plotly, and shiny (Carson Sievert)." Biometrics 77, no. 2 (2021): 776–77. http://dx.doi.org/10.1111/biom.13474.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Pádua, Everaldo Júnior Borges Garcia de, Fabrício Barros Gonçalves, and Layanne Andrade Mendonça. "Development of a decision-making system based on the mandala of water and agricultural sustainability." Journal of Engineering and Exact Sciences 10, no. 2 (2024): 17200. http://dx.doi.org/10.18540/jcecvl10iss2pp17200.

Full text
Abstract:
This article aimed to develop an innovative decision support tool for sustainable water and agricultural management in river basins, using the Mandala of Water and Agricultural Sustainability. The system was built with technologies such as Django and Plotly, allowing a facilitating interface for water resources management with inclusive and participatory decision-making. A free web application was created, paving the way for user testing, updates based on results and final launch of the tool. It is believed that this will promote greater accessibility to information and involvement of social actors in water and agricultural management, resulting in more integrated and sustainable decisions.
APA, Harvard, Vancouver, ISO, and other styles
9

Schneider, Kevin, Benedikt Venn, and Timo Mühlhaus. "Plotly.NET: A fully featured charting library for .NET programming languages." F1000Research 11 (September 23, 2022): 1094. http://dx.doi.org/10.12688/f1000research.123971.1.

Full text
Abstract:
Data visualization is integral for exploratory data analysis and data presentation. Consequently, every programming environment suitable for data analysis needs a feature-rich visualization library. In this paper, we present Plotly.NET - a fully featured charting library for the .NET ecosystem. Plotly.NET requires little ceremony to use, provides multiple API (application programming interface) layers for both low-code and maximum customizability, and can be used in many different visualization workflows - from interactive scripting and notebooks to user interface applications. Plotly.NET brings the charting capabilities of plotly.js to the .NET ecosystem, successfully combining the flexibility of the plotly grammar with the type-safety and expressiveness of .NET.
APA, Harvard, Vancouver, ISO, and other styles
10

Gultom, Edra Arkananta, Kartika Dewi Sri Susilowati, and Anik Kusmintarti. "Design of a Stock Forecasting Dashboard using Python-Streamlit and FB Prophet with AI." Formosa Journal of Science and Technology 3, no. 11 (2024): 2445–64. https://doi.org/10.55927/fjst.v3i11.12216.

Full text
Abstract:
This research aims to develop a stock price forecasting application using time series analysis with the Prophet model. The application retrieves historical stock data from Yahoo Finance (2015–present) for Indonesian stocks, which is then processed and analyzed to predict future prices. The study integrates yfinance for data collection, Prophet for forecasting, and Plotly for visualizing the results. The application allows users to select stocks and customize prediction periods (1–4 years). The findings indicate that while the model provides useful short-term predictions, its accuracy is limited by market volatility and external factors. This tool can support decision-making but should be used in conjunction with other forecasting methods.
APA, Harvard, Vancouver, ISO, and other styles
11

Schneider, Kevin, Benedikt Venn, and Timo Mühlhaus. "Plotly.NET: A fully featured charting library for .NET programming languages." F1000Research 11 (February 19, 2024): 1094. http://dx.doi.org/10.12688/f1000research.123971.2.

Full text
Abstract:
Data visualization is integral for exploratory data analysis and data presentation. Consequently, every programming environment suitable for data analysis needs a feature-rich visualization library. In this paper, we present Plotly.NET - a fully featured charting library for the .NET ecosystem. Plotly.NET requires little ceremony to use, provides multiple API (application programming interface) layers for both low-code and maximum customizability, and can be used in many different visualization workflows - from interactive scripting and notebooks to user interface applications. Plotly.NET brings the charting capabilities of plotly.js to the .NET ecosystem, successfully combining the flexibility of the plotly grammar with the type-safety and expressiveness of .NET.
APA, Harvard, Vancouver, ISO, and other styles
12

Tolstobrov, Ilya M., and Yulia B. Kamalova. "ANALYSIS OF SECTORAL INDICES OF THE USA, CHINA, JAPAN, GERMANY AND INDIA DURING THE FIRST YEAR OF THE INFLUENZA PANDEMIC BASED ON THE KRUSKAL-WALLIS TEST." SOFT MEASUREMENTS AND COMPUTING 3, no. 64 (2023): 58–70. http://dx.doi.org/10.36871/2618-9976.2023.03.005.

Full text
Abstract:
This work studies the movement of stock markets in different countries during the crisis year of 2020, which did not depend on the decisions made in any one country but obeyed a general trend. In particular, the behavior of sectoral indices of different countries during the first year of the coronavirus pandemic was studied, and it was concluded that the movement of markets follows the global trend. For the analysis, sectoral indices of countries affected to varying degrees by the coronavirus, whose securities markets received various support measures, were selected: the United States, Russia, China, Japan, and Germany. The impact of the pandemic on industries such as financial services, metallurgy, chemicals, consumer goods, and telecommunications is considered. Quotes are considered for the period from 01/01/2020 to 12/31/2020. Graphs of the values of sectoral indices of the Russian Federation and the USA in the specified period are presented and analyzed. Graphs are built in the Jupyter Notebook environment using the Python language and the plotly data visualization library. The concepts of hypothesis and criterion are defined and the most important definitions are given. As the probability of making a type 1 error decreases, the probability of making a type 2 error increases, so the significance level is chosen so that the power of the test is maximized. To test the hypothesis, we used the multivariate Kruskal-Wallis test and its modified version for medium-sized samples with a significance level of α=0.05. The novelty of the study lies in the application of the Kruskal-Wallis criterion to financial data. The study was conducted in Python in the Jupyter Notebook environment using the numpy, pandas, itertools, scipy and plotly libraries.
APA, Harvard, Vancouver, ISO, and other styles
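The Kruskal-Wallis test applied in the paper is available in SciPy. A minimal sketch on simulated index returns (the distributions below are invented stand-ins, not real market quotes) might look like this:

```python
import numpy as np
from scipy.stats import kruskal

# Simulated daily returns of three sectoral indices over ~250 trading days
rng = np.random.default_rng(1)
us = rng.normal(0.0, 1.0, 250)
de = rng.normal(0.0, 1.0, 250)
jp = rng.normal(0.4, 1.0, 250)   # deliberately shifted sample

# H0: all samples come from the same distribution
stat, p = kruskal(us, de, jp)
reject_h0 = p < 0.05  # significance level used in the paper
```

Because `jp` is shifted, the test rejects the null hypothesis here; with real index data the outcome depends on the period and sectors chosen.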
13

Апалькова, Т. Г., та Т. С. Чуприн. "ВОЗМОЖНОСТИ ЯЗЫКА R В ПОСТРОЕНИИ АНАЛИТИЧЕСКИХ ДАШБОРДОВ НА ПРИМЕРЕ СЕГМЕНТАЦИИ КЛИЕНТСКОЙ БАЗЫ". Management Accounting, № 9 2023 (10 вересня 2023): 285–92. http://dx.doi.org/10.25806/uu92023285-292.

Full text
Abstract:
Data visualization and analytical reporting in the form of dashboards are now widespread in the business environment. In addition to specialized dashboard builders, whose free versions offer limited functionality, open-source programming languages can be used to build them, in particular the R language within the R Markdown environment. The article aims to acquaint the reader with some of R's capabilities for creating dashboards, since this topic is hardly covered in Russian-language publications. The example considered demonstrates the use of various tools from the flexdashboard, Shiny, and plotly libraries in building an analytical panel based on a hypothetical example of bank customer data. The main advantages of the presented implementation are its informativeness, several thematic panel sections, the ability to update in real time, and cost savings from the use of open-source software.
APA, Harvard, Vancouver, ISO, and other styles
14

De Oliveira, Leandro, André André, José Airton Azevedo dos Santos, Vanessa Hlenka, Liliane Hellmann, and Renato Hallal. "Análise exploratória dos dados de potência de empreendimentos de geração distribuída fotovoltaica no estado do Paraná." Peer Review 5, no. 20 (2023): 354–64. http://dx.doi.org/10.53660/1072.prw2626.

Full text
Abstract:
Mini and micro distributed generation in Brazil consists mostly of photovoltaic projects. In Paraná, the total installed capacity of these systems amounts to 498,075.05 kW. In this context, this work aims to carry out an exploratory analysis of the capacity data of mini and micro distributed photovoltaic generation projects in the state of Paraná. The Python language with the Pandas and Plotly libraries was used for the data analysis. The results obtained for the state of Paraná suggest that the residential class accounts for one third of the capacity of mini and micro photovoltaic generation projects. The western region of the state has the greatest impact relative to its population, and the results also indicate low participation by the public sector.
APA, Harvard, Vancouver, ISO, and other styles
15

HAMBRIC, Stephen. "Experiential learning with online vibro-acoustic demonstrators." INTER-NOISE and NOISE-CON Congress and Conference Proceedings 270, no. 11 (2024): 136–42. http://dx.doi.org/10.3397/in_2024_1803.

Full text
Abstract:
After 25 years of delivering lecture-based learning at a university with mixed success, I now supplement my short courses with online experiential learning demonstrators for visualizing and interacting with vibro-acoustic systems. The demonstrators use simple JavaScript coding and calls to the free plotly libraries. No special software is required (but stronger computers obviously perform faster). Learners can quickly adjust parameters like stiffnesses, masses, and damping using sliders and immediately visualize changes to vibration response and sound fields. Students may exercise demonstrators for one- and two-dimensional acoustic waves; vibrations and mode shapes of simple oscillators, beams, and plates; and sound radiated by and transmitted through panels. Some of the demonstrators are freely available at hambricacoustics.com (others are only available to my short course students).
APA, Harvard, Vancouver, ISO, and other styles
16

Sharov, Sergii, Yurii Sitsylitsyn, Oleksii Naumuk, Dmytro Lubko, and Vira Kolmakova. "Choosing a library for the Python programming language for visualizing the operation of parallel algorithms." E3S Web of Conferences 508 (2024): 03002. http://dx.doi.org/10.1051/e3sconf/202450803002.

Full text
Abstract:
The research compares the capabilities of several libraries for the Python language, which allow creating a test application and visually demonstrating the operation of a parallel program in real time. It was found that the Python language is often used to develop parallel programs with internal and external libraries. To provide multithreading and parallelism, applications created in Python use external libraries, including mpi4py.futures, PETSc for Python, MPI for Python, d2o, Playdoh, PyOMP, and others. Visualization and animation of the operation of parallel programs will help to understand the principles of parallel computing. We compared test applications created with the use of the Matplotlib, Seaborn, Plotly, Bokeh, Pygame, and PyOpenGL libraries. According to the results of the observation, it was found that the Seaborn library is the best choice for developing a test application for animating the operation of a parallel program.
APA, Harvard, Vancouver, ISO, and other styles
17

Wisna, Nelsi, Aulia Nanda, and Tora Fahrudin. "KLASTERISASI PERUSAHAAN SUB KONTRAKTOR BERDASARKAN RASIO LIKUIDITAS MENGGUNAKAN K-MEANS CLUSTERING." Jurnal Ilmiah Manajemen, Ekonomi, & Akuntansi (MEA) 7, no. 2 (2023): 139–50. http://dx.doi.org/10.31955/mea.v7i2.2994.

Full text
Abstract:
The liquidity ratio measures a company's ability to quickly pay off its short-term debt. The liquidity ratios used in this study are the Current Ratio (CR), Quick Ratio (QR), and Cash Ratio (CaR). This study uses a sample of construction sub-sector companies listed on the Indonesia Stock Exchange for the 2019-2021 period that consistently published annual reports. The clustering process runs from the initial selection stage to the evaluation stage using Google Colab tools in the Python programming language with several libraries such as numpy, pandas, matplotlib, plotly express, and scikit-learn. The results show that the companies in clusters 0 and 1 do not, by calculation, meet the industry standard. The companies in clusters 2 and 4 meet the industry standard. Cluster 3 consists of companies whose CR and QR ratios meet the industry standard but whose CaR ratio does not.
APA, Harvard, Vancouver, ISO, and other styles
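The k-means clustering of liquidity ratios described above can be sketched with scikit-learn. The (CR, QR, CaR) rows below are hypothetical values, not the companies from the study, and three clusters are used instead of the paper's five to keep the example small:

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical (CR, QR, CaR) liquidity ratios for eight companies
X = np.array([
    [1.8, 1.3, 0.60],
    [2.1, 1.5, 0.70],
    [0.6, 0.4, 0.10],
    [0.7, 0.3, 0.10],
    [1.9, 1.4, 0.20],
    [2.0, 1.2, 0.10],
    [0.5, 0.2, 0.05],
    [1.7, 1.1, 0.50],
])

# Group companies by ratio profile; each label indexes a cluster
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
labels = km.labels_
```

Each cluster can then be compared against industry-standard thresholds for the three ratios, as the study does.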
18

Manoj, N.M, Kumar Manog, R. Hema, J. Lavanya, NR Deepak, and B. Shruthi. "An In-Depth Relative Study of ML Models for Predicting Stock Value." Journal of Recent Trends in Blockchain Technology & It's Applications 1, no. 1 (2025): 43–50. https://doi.org/10.5281/zenodo.15193437.

Full text
Abstract:
This paper presents a web application that combines Streamlit, TensorFlow/Keras, and financial data retrieval into a smoothly running whole. The application predicts future stock prices by fetching and analyzing historical stock price information and, for richer user interaction, presents both predictions and actual prices through interactive Plotly charts. At its core, the tool uses a pre-trained model customized for time series forecasting, which allows it to recognize temporal patterns. Users can input a stock ticker symbol, such as GOOGL, to compare historical and predicted stock price trends side by side. The application handles possible problems, such as missing columns and incorrect formatting, and maintains reliability while giving clear feedback throughout the user experience. Analysts, data scientists, and investors may use it to visualize stock price movement and make informed decisions, bridging the gap between data science and financial planning. This illustrates the power of web technologies and ML for practical use in the banking sector; Python utilities were combined to deliver one powerful platform for financial forecasting.
APA, Harvard, Vancouver, ISO, and other styles
19

Soliman, Mohammad, Lucília Cardoso, Giovana Feijó de Almeida, Arthur Araújo, and Noelia Vila. "Mapping smart experiences in tourism: A bibliometric approach." European Journal of Tourism Research 28 (March 15, 2021): 2809. http://dx.doi.org/10.54055/ejtr.v28i.2254.

Full text
Abstract:
There has been a significant increase in the number of studies addressing smart experiences in the context of tourism (SET). Nevertheless, no previous work has systematically and critically analysed the literature on this subject. To fill this gap, this paper examines the evolution of SET as a research topic by mapping the scientific production on it. To this end, 84 papers published from 2011 to April 2019 were retrieved from Scopus. To map this data, a blend of bibliometric analysis techniques, including the analysis of evaluative measures (productivity and impact metrics) and relational techniques (word frequency analysis and key theme analysis), was employed. Evaluative metrics were carried out using Plotly, Excel, and DB Gnosis software. The reviewed papers were then visually analysed through mind maps using BizAgi Process Modeler. The study presents an original theoretical insight, as well as a significant methodological contribution, as it employs an integrative approach of bibliometric analysis that relies on a set of rigorous techniques and can be implemented in future studies.
APA, Harvard, Vancouver, ISO, and other styles
20

Haq, Mia Karisma, Ihsan Ghozi Zulfikar, Rhisma Syahrul Putra, Mohamad Azfar Syazani Bin Md Yusof, Arianti Apriani Sagita, and Della Rachmatika Noer Intanty. "Insight and Foresight: Mortality Trends Due to Malnutrition." Jurnal Aplikasi dan Teori Ilmu Komputer 7, no. 2 (2024): 64–68. https://doi.org/10.17509/jatikom.v7i2.72389.

Full text
Abstract:
Malnutrition remains a significant global health problem, linked to a substantial proportion of child deaths worldwide. According to the United Nations, malnutrition is responsible for 45% of deaths in children under five. The World Food Programme estimates that over 820 million people globally suffer from hunger, with malnutrition playing a crucial role in this crisis. This study uses Python for data analysis and visualization, integrating time-series analysis and deep learning to forecast global malnutrition trends. The system processes data from 1970 to 2022, normalizes it, and trains a model comprising Conv1D and LSTM layers. The predictions are visualized using Plotly and displayed in a Flask web application, offering interactive features for exploring the data. The results highlight a notable decline in malnutrition-related deaths in both developing and developed nations, reflecting the success of previous interventions. However, developing countries continue to report a higher number of diseases and conditions associated with malnutrition, underscoring the need for further targeted interventions.
APA, Harvard, Vancouver, ISO, and other styles
21

Chandel, Garima, Pathan Sahimkhan, Saweta Verma, and Ashish Sharm. "Machine Learning Based Remote Sensing Technique for Analysis of The Glaciated Regions." E3S Web of Conferences 405 (2023): 02019. http://dx.doi.org/10.1051/e3sconf/202340502019.

Full text
Abstract:
Remote sensing has become one of the most developed technologies in the world. Its applications are wide: it can be used in agriculture, disaster observation, water resources monitoring, the environment, marine resources, forestry (including forest fires), coastal zones, and snow and glacier studies. Machine learning applications like data visualisation are used for understanding remote sensing data graphically. This paper presents a method for representing remote sensing data on glaciers graphically and pictorially. The matplotlib and seaborn libraries in Python are used for this process. Python is an easy programming language used for data visualisation with its libraries NumPy, pandas, matplotlib, seaborn and plotly. These libraries are used in Python for representing data graphically. In this work, the benchmark WGI dataset on remote sensing of debris-covered glaciers has been used. Machine learning algorithms have been proposed for classification of glaciers that are covered with debris.
APA, Harvard, Vancouver, ISO, and other styles
22

Wisna, Nelsi, Shafira Aulia Putri Lisna, Tora Fahrudin, and Raswyshnoe Boing Kotjoprayudi. "ANALISIS GROSS PROFIT MARGIN (GPM) DAN NET PROFIT MARGIN (NPM) DENGAN METODE ALGORITMA K-MEANS MENGGUNAKAN BAHASA PEMROGRAMAN PYTHON." Jurnal Ilmiah Manajemen, Ekonomi, & Akuntansi (MEA) 7, no. 2 (2023): 1199–210. http://dx.doi.org/10.31955/mea.v7i2.3121.

Full text
Abstract:
The profitability ratio measures a company's ability to generate profit and the effectiveness of its management. This study uses 15 samples of annual reports from construction sub-sector companies listed on the Indonesia Stock Exchange (IDX), with two profitability ratios as the main calculations, Gross Profit Margin (GPM) and Net Profit Margin (NPM), and k-means clustering as the data clustering method. The aim of this study is to examine company profit levels in the 2019-2021 period. The clustering process runs from the selection stage to the evaluation stage using Google Colab tools in the Python programming language with several libraries such as pandas, matplotlib, numpy, scikit-learn, and plotly express. The results show that cluster 0 contains companies with stable GPM values, some of which fall within the industry standard. Cluster 1 contains the companies with the lowest NPM and GPM values, which do not perform well against the industry standard. Cluster 2 contains companies whose ratio values fall within the industry standard.
APA, Harvard, Vancouver, ISO, and other styles
23

Bhaskar, T., Vivek Kadam, Devang Kolhe, Yashodip Kolhe, Vedant Kotkar, and Gaurav Nangare. "Stock Price Prediction and Pattern Detection Using Deep Learning." Journal of Big Data Technology and Business Analytics 3, no. 1 (2024): 34–42. http://dx.doi.org/10.46610/jbdtba.2024.v03i01.005.

Full text
Abstract:
Accurately predicting stock market movements remains a significant challenge, yet one that continues to attract intense interest. This project leverages data mining and warehousing techniques to explore historical stock price data, aiming to uncover recurring patterns, forecast potential future trends, and evaluate the consistency of a specific stock's behaviour relative to its identified patterns. By employing various data mining algorithms and implementing effective warehousing strategies, the project seeks to extract valuable insights that can inform investment decisions and potentially contribute to superior market performance. These goals, however, face serious obstacles. Accurate prediction is hampered by the noise and uncertainties inherent in stock market data. Furthermore, fine-tuning and experimentation are necessary to achieve optimal results when optimizing the hyperparameters of long short-term memory (LSTM) networks. Moreover, strong preprocessing methods are required to handle datasets that contain inconsistent or missing data. Lastly, proficient data visualization and user interface design skills are required when creating an educational and engaging dashboard with Plotly Dash.
APA, Harvard, Vancouver, ISO, and other styles
24

Jongile, S., L. Marian, P. Dimitriou, and M. Wiedeking. "A Web Application for the Photon Strength Function Database." EPJ Web of Conferences 322 (2025): 06005. https://doi.org/10.1051/epjconf/202532206005.

Full text
Abstract:
A specialized web application has been developed to manage, query, and visualize photon strength function (PSF) data, compiled as a part of an IAEA Coordinated Research Project. This application significantly advances accessibility and interaction with the data over existing platforms, improving upon the capabilities of the prior resource. Central to this application is the database structure, crucial for efficient data management and retrieval. Built using the Django framework, the web application's user interface presents queried data and search results in an accessible format. It provides details on basic nuclear properties, including the atomic number (Z), mass number (A), multipolarity, energy range, and the experimental methods used. In addition, the application offers functionalities for searching and sorting data by author names or publication years, facilitating refined navigation through the extensive PSF data repository. A notable feature is the dynamic data visualization tool that utilizes Plotly to create interactive graphs. The platform also features a comprehensive data processing pipeline that ensures data integrity and accessibility.
APA, Harvard, Vancouver, ISO, and other styles
25

V., Sri Harshitha, Sanjana R., Harini C., and Hifsa Naaz Syeda. "Analyzing Ed Sheeran's Musical Characteristics and Popularity Using Spotify Data." Advancement of Computer Technology and its Applications 8, no. 3 (2025): 11–18. https://doi.org/10.5281/zenodo.15093622.

Full text
Abstract:
In this article, we use a sample data set of songs to find correlations between users and songs so that new songs can be recommended to them. We analyze the musical characteristics and popularity of Ed Sheeran's songs using Spotify data. By analyzing audio features such as danceability, energy, acousticness, instrumentalness, and release date, the research identifies patterns in his music, investigates the correlation between musical features and popularity, and examines how his style has evolved across albums. Leveraging libraries such as NumPy, Pandas, Seaborn, Plotly, and Scikit-learn, and applying statistical analysis, visualization techniques, and machine learning (K-Nearest Neighbors, KNN), we aim to extract insights from his music metrics, identify trends, and determine the most "average" Ed Sheeran song. The results provide insights into what makes a song popular, how featured artists impact track success, and how Ed Sheeran's music has changed over time.
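As a rough sketch of the KNN step described above (the feature values and labels below are invented, not the paper's Spotify data), a majority-vote classifier over two audio features might look like:

```python
import numpy as np

def knn_predict(X_train, y_train, x, k=3):
    """Euclidean K-Nearest Neighbors with a majority vote (illustrative)."""
    d = np.linalg.norm(X_train - x, axis=1)       # distance to each track
    nearest = np.argsort(d)[:k]                   # indices of k closest
    vals, counts = np.unique(y_train[nearest], return_counts=True)
    return vals[np.argmax(counts)]                # most common label wins

# hypothetical tracks: [danceability, energy]; label 1 = "popular"
X = np.array([[0.8, 0.7], [0.75, 0.65], [0.2, 0.3], [0.25, 0.35]])
y = np.array([1, 1, 0, 0])
print(knn_predict(X, y, np.array([0.7, 0.6])))  # 1
```

A query track close to the high-danceability, high-energy cluster is voted "popular" by its three nearest neighbours.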
APA, Harvard, Vancouver, ISO, and other styles
26

Prado, Lílian Moreira do, Tereza Cristina Felippe Guimarães, Isabel Cristina Pacheco da Nóbrega, et al. "Gestão de indicadores de segurança e qualidade na enfermagem cardiovascular: relato de experiência." CONTRIBUCIONES A LAS CIENCIAS SOCIALES 17, no. 13 (2024): e13225. https://doi.org/10.55905/revconv.17n.13-013.

Full text
Abstract:
Objective: To report the construction of a database of nursing safety and quality indicators for cardiac patients. Methods: This is an experience report, divided into five stages: literature review, development of the indicators, validation by experts, construction of the database, and implementation. The collected information was entered into the REDCap platform to generate the indicators. The acquired data were analyzed statistically in the RStudio software, and visual dashboards of the indicators were built in the Plotly R Library Basic Chart software. Ethics committee (CEP) approval no. 6.591.927. Results: 23 indicators from the Nursing Indicators Manual (NAGEH) were used as a baseline. After expert review, 18 safety and quality indicators were selected for the nursing areas of an institution specializing in high-complexity care for cardiac patients. 15 databases were developed in REDCap, and the resulting data were presented in dashboards for monthly analysis. Conclusion: The database enables continuous monitoring and evaluation of situations that may impact patient safety and quality, supports decision-making, and improves team communication.
APA, Harvard, Vancouver, ISO, and other styles
27

Santos, Rafael Cavalcante dos, Gilwan Souza Santos, and Samara Martins Nascimento Gonçalves. "DASHMOB: uma ferramenta para análise dos dados abertos da PRF." Brazilian Journal of Development 10, no. 3 (2024): e67858. http://dx.doi.org/10.34117/bjdv10n3-018.

Full text
Abstract:
The DashMob tool was developed from information made available by the PRF (Federal Highway Police) between January and May 2023, obtained from the Federal Government's Transparency Portal. The system stands out for providing not only a comprehensive view of the accidents reported on federal highways across Brazil, but also detailed analyses and data relevant to investigations and road-safety policies. DashMob was developed in Python, covering data cleaning, processing, and visualization with the Pandas and Plotly libraries. With interactive maps and charts, the tool helps users recognize trends and patterns in accidents. Its pipeline includes retrieving PRF data and applying cleaning practices to build a cohesive informational dashboard. In addition, the system has a simplified architecture that emphasizes Python libraries to speed up decision-making and make the data easier to understand. From rear-end to head-on collisions, the results show the classification of several accident types, offering a comprehensive understanding of the recorded occurrences.
APA, Harvard, Vancouver, ISO, and other styles
28

Costa, Walingson da Silva da, Rivanildo Dallacort, Marcos Antônio Camillo de Carvalho, and Silmara Bispo dos Santos. "Sistema Web para pré-processamento e análise de dados meteorológicos." Revista Brasileira de Climatologia 30 (April 16, 2022): 591–610. http://dx.doi.org/10.55761/abclima.v30i18.15079.

Full text
Abstract:
Understanding weather and climate is essential for sound decisions in many fields of human activity, and this requires consistent, reliable data for inference and decision-making. The aim of this work is therefore to describe the functionality of a web system developed to identify errors and impute missing values in historical series of meteorological data, characterizing the INMET (National Institute of Meteorology) database and its errors for the municipalities of Matupá-MT and Sinop-MT. The system was built with the Python programming language, the Scikit-learn, SciPy, Pandas, and Plotly libraries, and the Streamlit framework. To validate the system, a historical series of meteorological data provided by INMET was used; its failures were treated and missing values were imputed with the KNNImputer algorithm. The accuracy of the imputation was verified through the metrics of Accuracy, Precision, Recall, F1-score, and Mean Squared Error. These metrics come from comparing predicted and original values via a confusion matrix. The system was efficient at identifying outliers and imputing missing values, identifying 100% of the discrepant values in the analyzed variables.
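The abstract relies on scikit-learn's KNNImputer; a simplified, illustrative re-implementation of the underlying idea in NumPy (toy values, not the INMET series) fills each gap with the mean of the nearest complete rows:

```python
import numpy as np

def knn_impute(X, k=2):
    """Fill each NaN with the mean of the k nearest complete rows,
    measuring distance over the columns observed in the target row.
    A simplified sketch of the idea behind scikit-learn's KNNImputer."""
    X = X.astype(float).copy()
    complete = X[~np.isnan(X).any(axis=1)]        # rows with no gaps
    for i, row in enumerate(X):
        miss = np.isnan(row)
        if not miss.any():
            continue
        d = np.linalg.norm(complete[:, ~miss] - row[~miss], axis=1)
        neighbours = complete[np.argsort(d)[:k]]
        X[i, miss] = neighbours[:, miss].mean(axis=0)
    return X

# toy table: last row is missing its second column
data = np.array([[1.0, 2.0], [1.1, 2.2], [5.0, 6.0], [1.05, np.nan]])
filled = knn_impute(data)
print(filled[3])  # imputed value is the mean of the two nearest rows, 2.1
```

The missing entry is reconstructed from the two rows whose observed column lies closest, ignoring the distant outlier row.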
APA, Harvard, Vancouver, ISO, and other styles
29

Selli, Alana, Stephen P. Miller, and Ricardo V. Ventura. "The Use of Interactive Visualizations for Tracking Haplotypic Inheritance in Livestock." Ruminants 4, no. 1 (2024): 90–111. http://dx.doi.org/10.3390/ruminants4010006.

Full text
Abstract:
Our objective was to harness the power of interactive visualizations by utilizing open-source tools to develop an efficient strategy for visualizing Single Nucleotide Polymorphism data within a livestock population, focusing on tracking the transmission of haplotypes. To achieve this, we simulated a realistic beef cattle population in order to obtain phased haplotypes and generate the necessary inputs for creating our visualizations. The visualization tool was built using Python and the Plotly library, which enables interactivity. We set out to explore three scenarios: trio comparison, visualization of grandparents, and half-sibling evaluation. These scenarios enabled us to trace the inheritance of genetic segments, identify crossover events, and uncover common regions within related and unrelated animals. The potential applications of this approach are significant, particularly for improving genomic selection in smaller breeding programs and farms, and it provides valuable insights for guiding more in-depth genomic region analysis. Beyond its practical applications, we believe this strategy can be a valuable educational tool, helping educators clarify complex concepts like Mendelian sampling and haplotypic diversity. Furthermore, we hope it will encourage livestock producers to adopt advanced technologies like genotyping and genomic selection, thereby contributing to the advancement of livestock genetics.
APA, Harvard, Vancouver, ISO, and other styles
30

Sanskriti, Harmukh, Mishra Mansi, Jain Satyam, Chawda Archit, Prasad Kauleshwar, and Kumar Bhawnani Dinesh. "Forecasting Stock Market Index using Artificial Intelligence." Journal of Advances in Computational Intelligence Theory 4, no. 1 (2022): 1–7. https://doi.org/10.5281/zenodo.6500420.

Full text
Abstract:
In this project, we implement a popular deep learning technique for time series forecasting, since such models allow reliable predictions on time series in many different problems. Instead of dealing with data points collected at random, we use a time series model to work on a sequence of data points taken at regular time intervals. We use three major modules to forecast the data: Streamlit, Yahoo Finance, and Facebook Prophet. The user can select the number of years for prediction as convenient. The data is collected by yfinance and plotted using the Python library Plotly. Each point on the graph represents the date and the opening and closing stock prices for the share market. Based on the historical data, we used fbprophet to forecast stock quotes for the near future. The forecast components, such as trends and weekly and yearly variations, are also plotted. This helps to analyse prices at a closer range and study the records effectively. This project aims to ease the trading problems faced by financial investors.
APA, Harvard, Vancouver, ISO, and other styles
31

Romero Cardenas, Diana Carolina, Alejandro Ospina Mejía, and Luis Ariosto Serna Cardona. "Model for Predicting Suppliers in the Financial Sector of the Vehicle Manufacturing Company in the City of Pereira." Scientia et Technica 30, no. 01 (2025): 26–35. https://doi.org/10.22517/23447214.25692.

Full text
Abstract:
This research presents the development of a model using data mining techniques to identify financial variables in a motor-vehicle body manufacturing company in Pereira. The study is structured in four key phases. The first focuses on data preprocessing, including characterization, normalization, and dimensionality reduction via PCA, Relief, and correlation. The second applies unsupervised learning with K-means and Gaussian Mixture Models (GMM) to cluster and validate the data against a defined target variable. In the third phase, supervised classifiers such as the Bayesian classifier, artificial neural networks, support vector machines, and KNN are used to predict supplier efficiency, optimizing investment and costing processes. Finally, the fourth phase integrates preprocessing and prediction into a practical form, using libraries such as Plotly and Dash for detailed visualizations and tools such as GitHub and Heroku for application deployment. This study highlights the importance of artificial intelligence in business decision-making, demonstrating how data science techniques and visualization tools can facilitate the interpretation and use of data-analysis results.
APA, Harvard, Vancouver, ISO, and other styles
32

Razali, Mulkal, and Risa Wandi. "Inverse Distance Weight Spatial Interpolation for Topographic Surface 3D Modelling." TECHSI - Jurnal Teknik Informatika 11, no. 3 (2019): 385. http://dx.doi.org/10.29103/techsi.v11i3.1934.

Full text
Abstract:
Topographic modelling is an important aspect of engineering work and planning, such as dam design, road planning, volumetric calculation, and water flow analysis. To obtain topographic data, a land survey or topographic measurement is usually taken for the region or area under study. A number of points that represent the area are measured to obtain a height dataset. This dataset of height points is then used to model the topographic conditions of the area by generating a contour map and a 3D model. Several methods can be used to generate a topographic surface in a 3D model, such as linear interpolation in a Triangular Irregular Network (TIN), Kriging, and Inverse Distance Weight (IDW). This research implemented the IDW spatial interpolation algorithm to model the earth's surface in 3D. The IDW method was implemented in the Python programming language, using the NumPy library for computation and the Plotly graphics library to visualize the 3D model. Using 2500 and 10000 interpolation points with 100 random sampling points extracted from a Digital Surface Model (DSM), IDW successfully estimated the heights at unsampled locations. The results show that a higher number of interpolation points produces a more detailed surface texture. Keywords: IDW, Topographic, 3D Modelling
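The core of the IDW method described above fits in a few lines of NumPy. This is a generic sketch with four invented corner heights, not the paper's DSM data; the estimated height at each query point is a distance-weighted average of the sampled heights:

```python
import numpy as np

def idw(xy_known, z_known, xy_query, power=2, eps=1e-12):
    """Inverse Distance Weighting: each unknown height is a weighted
    average of sampled heights, with weights 1 / distance**power."""
    d = np.linalg.norm(xy_query[:, None, :] - xy_known[None, :, :], axis=2)
    w = 1.0 / (d + eps) ** power          # eps guards against d == 0
    return (w * z_known).sum(axis=1) / w.sum(axis=1)

# four sampled corner heights of the tilted plane z = x + y
pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
z = np.array([0.0, 1.0, 1.0, 2.0])
est = idw(pts, z, np.array([[0.5, 0.5]]))
print(est)  # the centre is equidistant from all corners, so est ≈ [1.0]
```

The resulting height grid is exactly what one would hand to `plotly.graph_objects.Surface` to render the 3D terrain.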
APA, Harvard, Vancouver, ISO, and other styles
33

Schmidt, Breon, Marek Cmero, Paul Ekert, Nadia Davidson, and Alicia Oshlack. "Slinker: Visualising novel splicing events in RNA-Seq data." F1000Research 10 (December 7, 2021): 1255. http://dx.doi.org/10.12688/f1000research.74836.1.

Full text
Abstract:
Visualisation of the transcriptome relative to a reference genome is fraught with sparsity. This is due to RNA sequencing (RNA-Seq) reads being predominantly mapped to exons that account for just under 3% of the human genome. Recently, we have used exon-only references, superTranscripts, to improve visualisation of aligned RNA-Seq data through the omission of supposedly unexpressed regions such as introns. However, variation within these regions can lead to novel splicing events that may drive a pathogenic phenotype. In these cases, the loss of information in only retaining annotated exons presents significant drawbacks. Here we present Slinker, a bioinformatics pipeline written in Python and Bpipe that uses a data-driven approach to assemble sample-specific superTranscripts. At its core, Slinker uses Stringtie2 to assemble transcripts with any sequence across any gene. This assembly is merged with reference transcripts, converted to a superTranscript, of which rich visualisations are made through Plotly with associated annotation and coverage information. Slinker was validated on five novel splicing events of rare disease samples from a cohort of primary muscular disorders. In addition, Slinker was shown to be effective in visualising deletion events within transcriptomes of tumour samples in the important leukemia gene, IKZF1. Slinker offers a succinct visualisation of RNA-Seq alignments across typically sparse regions and is freely available on Github.
APA, Harvard, Vancouver, ISO, and other styles
34

Hidalgo, Sergio, and Joanna C. Chiu. "CRUMB: a shiny-based app to analyze rhythmic feeding in Drosophila using the FLIC system." F1000Research 12 (April 6, 2023): 374. http://dx.doi.org/10.12688/f1000research.132587.1.

Full text
Abstract:
Rhythmic feeding activity has become an important research area for circadian biologists as it is now clear that metabolic input is critical for regulating circadian rhythms, and chrononutrition has been shown to promote health span. In contrast to locomotor activity rhythm, studies conducting high throughput analysis of Drosophila rhythmic food intake have been limited and few monitoring system options are available. One monitoring system, the Fly Liquid-Food Interaction Counter (FLIC) has become popular, but there is a lack of efficient analysis toolkits to facilitate scalability and ensure reproducibility by using unified parameters for data analysis. Here, we developed Circadian Rhythm Using Mealtime Behavior (CRUMB), a user-friendly Shiny app to analyze data collected using the FLIC system. CRUMB leverages the ‘plotly’ and ‘DT’ packages to enable interactive raw data review as well as the generation of easily manipulable graphs and data tables. We used the main features of the FLIC master code provided with the system to retrieve feeding events and provide a simplified pipeline to conduct circadian analysis. We also replaced the use of base functions in time-consuming processes such as ‘rle’ and ‘read.csv’ with faster versions available from other packages to optimize computing time. We expect CRUMB to facilitate analysis of feeding-fasting rhythm as a robust output of the circadian clock.
APA, Harvard, Vancouver, ISO, and other styles
35

Hidalgo, Sergio, and Joanna C. Chiu. "CRUMB: a shiny-based app to analyze rhythmic feeding in Drosophila using the FLIC system." F1000Research 12 (June 19, 2023): 374. http://dx.doi.org/10.12688/f1000research.132587.2.

Full text
Abstract:
Rhythmic feeding activity has become an important research area for circadian biologists as it is now clear that metabolic input is critical for regulating circadian rhythms, and chrononutrition has been shown to promote health span. In contrast to locomotor activity rhythm, studies conducting high throughput analysis of Drosophila rhythmic food intake have been limited and few monitoring system options are available. One monitoring system, the Fly Liquid-Food Interaction Counter (FLIC) has become popular, but there is a lack of efficient analysis toolkits to facilitate scalability and ensure reproducibility by using unified parameters for data analysis. Here, we developed Circadian Rhythm Using Mealtime Behavior (CRUMB), a user-friendly Shiny app to analyze data collected using the FLIC system. CRUMB leverages the ‘plotly’ and ‘DT’ packages to enable interactive raw data review as well as the generation of easily manipulable graphs and data tables. We used the main features of the FLIC master code provided with the system to retrieve feeding events and provide a simplified pipeline to conduct circadian analysis. We also replaced the use of base functions in time-consuming processes such as ‘rle’ and ‘read.csv’ with faster versions available from other packages to optimize computing time. We expect CRUMB to facilitate analysis of feeding-fasting rhythm as a robust output of the circadian clock.
APA, Harvard, Vancouver, ISO, and other styles
36

Mohammad Mahbubur Rashid, Mohamad Farhan Abd Razak, and Tahsin Fuad Hasan. "Analysis of Impedance Based Sensor to Discover the Invasive Nature of A549 Lung Cancer Cell." Journal of Advanced Research in Applied Sciences and Engineering Technology 41, no. 2 (2024): 238–55. http://dx.doi.org/10.37934/araset.41.2.238255.

Full text
Abstract:
Numerous studies have been conducted to investigate the effectiveness of impedance-based sensors in detecting the invasive behavior of cancer cells, specifically through the use of Electric Cell-substrate Impedance Sensing (ECIS) methodology. However, the current equivalent circuit models used to represent the invasive nature of cancer cells have limitations and inaccuracies, and there has been a lack of utilization of mathematical and data analysis software to better understand the growth and invasive behavior of these cells. To address these gaps, this research aims to measure the impedance of A549 lung cancer cells, develop a simplified equivalent circuit model, and analyze the results using mathematical and data analysis software. The invasive behavior of A549 cells will first be studied through impedance measurements, and then a circuit model will be designed and simulated using software tools to reveal the true invasive nature of these cells. Finally, rigorous mathematical analysis and the use of suitable data analysis software (such as Matlab Powergui and Simulink, EIS Spectrum Analyzer, and Plotly) will be applied to gain a comprehensive understanding of the morphological behavior of the cells. The experimental results of this research are expected to align with previous findings, and a quadratic equation will be derived to predict the body's resistance of the A549 lung cancer cell using mathematical and data analysis approaches.
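The quadratic equation the authors plan to derive can be illustrated with NumPy's least-squares polynomial fit. The readings below are synthetic stand-ins (an exact quadratic with invented coefficients), not the A549 measurements:

```python
import numpy as np

# hypothetical (time, resistance) readings following a quadratic trend
t = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
r = 2.0 * t**2 + 3.0 * t + 5.0           # synthetic, noise-free data

a, b, c = np.polyfit(t, r, deg=2)        # least-squares quadratic fit
print(round(a, 6), round(b, 6), round(c, 6))  # 2.0 3.0 5.0
```

On noise-free data the fit recovers the generating coefficients; on real impedance measurements the same call yields the best-fitting quadratic in the least-squares sense.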
APA, Harvard, Vancouver, ISO, and other styles
37

Loo, Yew Jie, Hooi-Ten Wong Doris, Mat Zain Zarina, Nur Amir Sjarif Nilam, Ibrahim Roslina, and Maarop Nurazean. "Metrics and Benchmarks for Empirical and Comprehension Focused Visualization Research in the Sales Domain." Indonesian Journal of Electrical Engineering and Computer Science 12, no. 3 (2018): 1340–48. https://doi.org/10.11591/ijeecs.v12.i3.pp1340-1348.

Full text
Abstract:
Data visualization is an effort that aims to communicate data effectively and clearly to the audience through graphical representation. Data visualization efforts must be coordinated with an understanding of Cognitive Learning Theory (CLT). In the sales domain, sales data visualization is made possible with available Business Intelligence (BI) tools such as Microsoft Power BI, Tableau, Plotly, and others. These tools allow seamless interaction with the data for top management as well as the sales force. Sales data visualization comes with an array of advantages, such as self-service analysis by business users, rapid adaptation to changing business conditions, and continuous on-demand reporting, among others. These advantages also come with challenges, such as difficulty in identifying visual noise, a high rate of image change, and high performance requirements. In an effort to reduce cognitive activity that does not enhance learning, a sales visualization dashboard must be designed in a way that is neither too simplistic nor too complex, ensuring that the Intrinsic Cognitive Load (ICL), Extrinsic Cognitive Load (ECL), and Germane Cognitive Load (GCL) are in sync with the audience. With the combination of sales data visualization and CLT, understanding complex sales details quickly is made possible not only for the top management of the organization, but also for its sales force.
APA, Harvard, Vancouver, ISO, and other styles
38

Valeriy, Lakhno, Malyukov Volodimir, Akhmetov Berik, Kasatkin Dmytro, and Plyska Liubov. "Development of a model for choosing strategies for investing in information security." Eastern-European Journal of Enterprise Technologies 2, no. 3(110) (2021): 43–51. https://doi.org/10.15587/1729-4061.2021.228313.

Full text
Abstract:
This paper proposes a model of the computational core for a decision support system (DSS) for investing in information security (IS) projects for objects of informatization (OBI), including OBI that can be categorized as critically important. Unlike existing solutions, the proposed model deals with decision-making in the ongoing process of investing in OBI IS projects by a group of investors. The calculations were based on bilinear differential quality games with several terminal surfaces. Finding a solution to these games is a major challenge, because the Cauchy formula for bilinear systems with arbitrary player strategies, including immeasurable functions, cannot be applied to them. This gives grounds to continue research on finding solutions in the event of a conflict of multidimensional objects. The result is an analytical solution based on a new class of bilinear differential games, describing the interaction of objects investing in OBI IS in multidimensional spaces. The modular software product "Cybersecurity Invest decision support system" (Ukraine) for the Windows platform is described, along with applied aspects of visualizing the calculation results obtained with the DSS. The Plotly library for the Python language was used to visualize the results. It has been shown that the model reported in this work can be transferred to other tasks related to developing DSS for investing in high-risk projects, such as information technology, cybersecurity, banking, etc.
APA, Harvard, Vancouver, ISO, and other styles
39

Panos, Agapios, and Dimitris Mavridis. "TableOne: an online web application and R package for summarising and visualising data." Evidence Based Mental Health 23, no. 3 (2020): 127–30. http://dx.doi.org/10.1136/ebmental-2020-300162.

Full text
Abstract:
Objective: To develop an easy-to-use R package and web application that summarises baseline characteristics across different arms of a clinical trial or different exposures. Methods: Tables and figures are efficient means of visualising, communicating and summarising data. It is common in comparative effectiveness research to provide a synopsis of characteristics and outcomes across the various treatment groups. The popularity of such a table has earned it a name: we simply call it the 'TableOne', as it is usually the first table one encounters when looking at a published clinical trial. Such a table includes not only descriptive statistics for each group but also appropriate tests (p values and 95% CIs) for checking for differences across groups. We have developed an R package (called TableOne, accessible through https://github.com/agapiospanos/TableOne) that quickly summarises and compares results across different groups. We have also extended it to an online web application that is easily handled by the researcher. All computations are done in R and plots are produced using the plotly library. We provide a detailed description of how to use the web application. Results: The application guides the user in a step-by-step format (wizard) and is accessible through any browser at the following link (https://esm.uoi.gr/shiny/tableone/). Appropriate interactive plots are provided for each variable. Conclusions: This easy-to-use web application will help researchers quickly and easily visualise differences across treatment groups or different exposures.
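The idea behind a 'Table 1' — per-group descriptive statistics — can be sketched in a few lines of plain Python. The patients below are hypothetical, and the actual package is written in R and additionally computes significance tests:

```python
from statistics import mean, stdev

def table_one(rows, group_key, variables):
    """Summarise continuous variables as (mean, SD) per treatment group,
    the core of a 'Table 1' in a clinical trial report (no tests here)."""
    groups = sorted({r[group_key] for r in rows})
    table = {}
    for var in variables:
        table[var] = {}
        for g in groups:
            vals = [r[var] for r in rows if r[group_key] == g]
            table[var][g] = (mean(vals), stdev(vals) if len(vals) > 1 else 0.0)
    return table

# hypothetical trial arms
patients = [
    {"arm": "drug", "age": 60}, {"arm": "drug", "age": 64},
    {"arm": "placebo", "age": 58}, {"arm": "placebo", "age": 62},
]
t1 = table_one(patients, "arm", ["age"])
print(t1["age"]["drug"])  # mean 62, SD ≈ 2.83
```

Each cell of the resulting table is a mean (SD) pair per arm, ready to be rendered as an interactive table or plot.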
APA, Harvard, Vancouver, ISO, and other styles
40

Yew Jie, Loo, Doris Hooi-Ten Wong, Zarina Mat Zain, Nilam Nur Amir Sjarif, Roslina Ibrahim, and Nurazean Maarop. "Metrics and Benchmarks for Empirical and Comprehension Focused Visualization Research in the Sales Domain." Indonesian Journal of Electrical Engineering and Computer Science 12, no. 3 (2018): 1340. http://dx.doi.org/10.11591/ijeecs.v12.i3.pp1340-1348.

Full text
Abstract:
Data visualization is an effort that aims to communicate data effectively and clearly to the audience through graphical representation. Data visualization efforts must be coordinated with an understanding of Cognitive Learning Theory (CLT). In the sales domain, sales data visualization is made possible with available Business Intelligence (BI) tools such as Microsoft Power BI, Tableau, Plotly, and others. These tools allow seamless interaction with the data for top management as well as the sales force. Sales data visualization comes with an array of advantages, such as self-service analysis by business users, rapid adaptation to changing business conditions, and continuous on-demand reporting, among others. These advantages also come with challenges, such as difficulty in identifying visual noise, a high rate of image change, and high performance requirements. In an effort to reduce cognitive activity that does not enhance learning, a sales visualization dashboard must be designed in a way that is neither too simplistic nor too complex, ensuring that the Intrinsic Cognitive Load (ICL), Extrinsic Cognitive Load (ECL), and Germane Cognitive Load (GCL) are in sync with the audience. With the combination of sales data visualization and CLT, understanding complex sales details quickly is made possible not only for the top management of the organization, but also for its sales force.
APA, Harvard, Vancouver, ISO, and other styles
41

Kuznetsova, I. R., A. S. Bukunov, and S. V. Bukunov. "Use of Modern Information Technologies to Optimization Business Processes." LETI Transactions on Electrical Engineering & Computer Science 15, no. 8 (2022): 57–68. http://dx.doi.org/10.32603/2071-8985-2022-15-8-57-68.

Full text
Abstract:
A web application developed at the Department of Information Technologies of the Saint Petersburg State University of Architecture and Civil Engineering is described, which makes it possible to optimize the business processes of an active travel club. The application can be categorized as business software. It allows the club's management to significantly simplify management and financial accounting operations by automating routine tasks and visualizing the information needed for management decisions. The work was carried out according to a technical specification given by the club's management. The application implements authorization and user authentication systems; each user group has its own set of features when working with the application. The application is developed in the Python programming language with the Django framework. A relational database was designed and implemented for data storage, and the MySQL database management system (DBMS) is used to interact with it. A built-in analytical panel, designed with the Plotly graphics library, visualizes and analyzes the information stored in the database. HTML, CSS, JavaScript, and AJAX technology were also used. The system does not require the installation of additional software; only Internet access is needed to use it. The approaches and technological solutions implemented in the application can be successfully used to create similar systems for other organizations and companies engaged in similar activities.
APA, Harvard, Vancouver, ISO, and other styles
42

Magray, Imtiyaz Ahmad, and Gurinder Kaur Sodhi. "Predicting Student Placement in College Using Machine Learning." International Journal for Research in Applied Science and Engineering Technology 12, no. 2 (2024): 1384–95. http://dx.doi.org/10.22214/ijraset.2024.58548.

Full text
Abstract:
The research on student placement prediction in higher education has been a focal point, addressing challenges related to balanced accuracy and generalization across diverse datasets. Prior studies grappled with issues in feature representation and interpretability. This study aims to overcome these challenges by introducing a comprehensive machine learning framework for student placement prediction, leveraging advanced techniques in exploratory data analysis, preprocessing, and model evaluation. Drawing on previous research experiences, our proposed work targets specific issues related to feature engineering, categorical variable representation, and result interpretability. The methodology employs key libraries, including NumPy, pandas, Matplotlib, Seaborn, Plotly, scikit-learn, WordCloud, and DateTime, for efficient data manipulation, visualization, and analysis. Ensemble learning techniques, such as Random Forest and XGBoost, along with traditional algorithms like Decision Trees and K-Nearest Neighbors, contribute to enhancing predictive accuracy and model robustness. To fine-tune the models, a randomized search over hyperparameters is implemented for the XGBoost classifier, optimizing parameters such as learning rate, maximum depth, minimum child weight, gamma, and colsample_bytree. This approach effectively addresses overfitting and underfitting, maximizing overall model performance. The accuracy percentages achieved through our models represent significant advancements: the Decision Tree model achieves 87.74%, Random Forest 87.60%, XGBoost 87.60%, and K-Nearest Neighbors 85.18%. These results underscore the effectiveness of our approach in achieving high accuracy while maintaining interpretability.
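Randomized hyperparameter search, as applied to the XGBoost classifier above, reduces to sampling configurations and keeping the best. The search space below mirrors the parameters the abstract names but with invented candidate values, and the scorer is a stand-in (in practice it would be cross-validated accuracy):

```python
import random

# hypothetical search space mirroring the parameters named above
space = {
    "learning_rate": [0.01, 0.05, 0.1, 0.3],
    "max_depth": [3, 5, 7, 9],
    "min_child_weight": [1, 3, 5],
    "gamma": [0.0, 0.1, 0.3],
    "colsample_bytree": [0.6, 0.8, 1.0],
}

def random_search(score_fn, space, n_iter=20, seed=42):
    """Sample n_iter random configurations, keep the best-scoring one."""
    rng = random.Random(seed)
    best, best_score = None, float("-inf")
    for _ in range(n_iter):
        params = {k: rng.choice(v) for k, v in space.items()}
        score = score_fn(params)
        if score > best_score:
            best, best_score = params, score
    return best, best_score

# stand-in scorer; a real one would train and cross-validate the model
def toy_score(p):
    return -abs(p["learning_rate"] - 0.1) - abs(p["max_depth"] - 5)

best, best_score = random_search(toy_score, space)
print(best)
```

Unlike an exhaustive grid search, the cost is fixed by `n_iter` regardless of how many parameters the space contains.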
APA, Harvard, Vancouver, ISO, and other styles
43

V., Bala Dhandayuthapani. "Python Data Analysis and Visualization in Java GUI Applications Through TCP Socket Programming." International Journal of Information Technology and Computer Science 16, no. 3 (2024): 72–92. http://dx.doi.org/10.5815/ijitcs.2024.03.07.

Full text
Abstract:
Python is popular in artificial intelligence (AI) and machine learning (ML) due to its versatility, adaptability, rich libraries, and active community. Existing Python interoperability in Java was previously investigated using socket programming in non-graphical user interface (GUI) settings. Python's data analysis library modules such as NumPy, pandas, and SciPy, together with visualization library modules such as Matplotlib and Seaborn, and scikit-learn for machine learning, are integrated into Java graphical user interface (GUI) applications such as Java applets, Java Swing, and JavaFX. The central method used in the integration process is TCP socket programming, which carries instructions and data to provide interoperability between Python and Java GUIs. This empirical research integrates Python data analysis and visualization graphs into Java applications and does not require any additional or third-party libraries. The experimentation confirmed the advantages and challenges of this integration and offered a concrete solution. The intended audience for this research extends to software developers, data analysts, and scientists, recognizing Python's broad applicability to artificial intelligence (AI) and machine learning (ML) and the integration of data analysis, visualization, and machine-learning functionalities within the Java GUI. The paper emphasizes the self-sufficiency of the integration process and suggests future research directions, including comparative analysis with Java's native capabilities, interactive data visualization using libraries like Altair, Bokeh, Plotly, and Pygal, performance and security considerations, and no-code and low-code implementations.
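The TCP-socket bridge described here can be sketched entirely in Python: a server receives a textual instruction, runs a whitelisted routine, and returns the result. In the paper the client role is played by a Java GUI; the Python client below stands in for it, and the one-line JSON protocol is an assumption, not the paper's actual wire format.

```python
# Hedged sketch of a Python analysis server reachable over TCP. A Java
# Swing/FX client would connect exactly as the Python client below does;
# the JSON request/response protocol is an illustrative assumption.
import json
import socket
import statistics
import threading

COMMANDS = {"mean": statistics.mean, "stdev": statistics.stdev}

def serve_once(sock):
    """Accept one connection, run the requested command, reply, close."""
    conn, _ = sock.accept()
    with conn:
        request = json.loads(conn.recv(4096).decode())
        result = COMMANDS[request["op"]](request["data"])
        conn.sendall(json.dumps({"result": result}).encode())

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))          # OS-assigned free port
server.listen(1)
port = server.getsockname()[1]
threading.Thread(target=serve_once, args=(server,), daemon=True).start()

# Client side (in the paper, this role belongs to the Java GUI).
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", port))
client.sendall(json.dumps({"op": "mean", "data": [1, 2, 3, 4]}).encode())
reply = json.loads(client.recv(4096).decode())
client.close()
```

The same pattern extends to shipping a rendered Matplotlib figure back as bytes for display inside a Java component.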
APA, Harvard, Vancouver, ISO, and other styles
44

Lakhno, Valeriy, Volodimir Malyukov, Berik Akhmetov, Dmytro Kasatkin, and Liubov Plyska. "Development of a model for choosing strategies for investing in information security." Eastern-European Journal of Enterprise Technologies 2, no. 3 (110) (2021): 43–51. http://dx.doi.org/10.15587/1729-4061.2021.228313.

Full text
Abstract:
This paper has proposed a model of the computational core for a decision support system (DSS) for investing in information security (IS) projects of the objects of informatization (OBI), including those OBI that can be categorized as critically important. Unlike existing solutions, the proposed model deals with decision-making issues in the ongoing process of investing in projects to ensure the OBI IS by a group of investors. The calculations were based on bilinear differential quality games with several terminal surfaces. Finding a solution to these games is a big challenge because the Cauchy formula for bilinear systems with arbitrary strategies of players, including immeasurable functions, cannot be applied in such games. This gives grounds to continue research on finding solutions in the event of a conflict of multidimensional objects. The result was an analytical solution based on a new class of bilinear differential games. The solution describes the interaction of objects investing in OBI IS in multidimensional spaces. The modular software product "Cybersecurity Invest decision support system" (Ukraine) for the Windows platform is described. Applied aspects of visualizing the results of calculations obtained with the help of the DSS have also been considered. The Plotly library for the Python algorithmic language was used to visualize the results. It has been shown that the model reported in this work can be transferred to other tasks related to the development of DSS for investing in high-risk projects, such as information technology, cybersecurity, banking, etc.
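The bilinear dynamics underlying such games can be illustrated numerically. This is a minimal sketch under stated assumptions: the matrices, the scalar control signal `u(t)`, and the Euler integration scheme are all illustrative stand-ins, not the paper's model; the resulting trajectory is the kind of array one would then hand to Plotly for display (the plotting call is omitted to keep the sketch dependency-free).

```python
# Hedged sketch: Euler integration of a bilinear system h' = (A + u(t) B) h,
# where u(t) is a scalar investment-control signal. All values are
# illustrative assumptions, not the paper's game formulation.
import numpy as np

def simulate_bilinear(h0, A, B, u, dt=0.01, steps=500):
    """Integrate h' = (A + u(t) * B) h with the forward Euler method."""
    h = np.array(h0, dtype=float)
    trajectory = [h.copy()]
    for k in range(steps):
        h = h + dt * (A + u(k * dt) * B) @ h
        trajectory.append(h.copy())
    return np.array(trajectory)

A = np.array([[-0.1, 0.0], [0.0, -0.2]])   # baseline decay of IS resources
B = np.array([[0.0, 0.3], [0.2, 0.0]])     # cross-investment coupling
traj = simulate_bilinear([1.0, 1.0], A, B, u=lambda t: 0.5)
```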
APA, Harvard, Vancouver, ISO, and other styles
45

Ezra Morrison, Douglas, Gareth Parry, Vladimir Manuel, Tony Kuo, and Moira Inkelas. "115 Interactive data displays for rapid responses to COVID-19 response in K-12 schools." Journal of Clinical and Translational Science 6, s1 (2022): 5. http://dx.doi.org/10.1017/cts.2022.34.

Full text
Abstract:
OBJECTIVES/GOALS: A UCLA Clinical and Translational Science Institute (CTSI) science team partnered with the second largest US school district, with over 500,000 K-12 students, to design and implement a statistical process control dashboard to guide COVID-19 response, including mitigation and vaccination outreach. METHODS/STUDY POPULATION: District data for students, teachers, and staff are updated daily and include COVID-19 test results, counts of quarantine after positive tests, and COVID-19 vaccination rates. Displays used a new hybrid Shewhart control chart to detect changes in test positivity rates and distinguish meaningful signals from noise (random day-to-day variation). The dashboard uses the Shiny and plotly packages in R to display interactive graphs of each data stream (cases, tests, and vaccinations) charted at multiple levels (districtwide, subdistricts, schools). Displays of variation over time show policy impacts and inequities. Selected displays use municipal COVID-19 data to complement district data. RESULTS/ANTICIPATED RESULTS: The district has used the displays to assess the impact of their COVID-19 response and to identify variation in close to real-time to suggest areas with need for additional resources for mitigation or vaccination. The CTSI team has continued to edit and add displays in response to the district’s changing operational needs and questions. DISCUSSION/SIGNIFICANCE: The UCLA CTSI team developed and implemented a robust data visualization dashboard to monitor COVID-19 case rates and plan vaccination outreach efforts. Control charts enabled the district to distinguish noise from signal, thereby rapidly identifying when specific parts of the district needed targeted support to achieve equity goals.
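The "distinguish meaningful signals from noise" step rests on Shewhart control-chart arithmetic, which can be sketched for a p-chart of daily test positivity. The counts below are synthetic, and the paper's hybrid chart adds refinements this minimal version does not attempt.

```python
# Hedged sketch of a Shewhart p-chart: daily positivity proportions with
# 3-sigma control limits. The counts are synthetic illustrations.
import numpy as np

def p_chart_limits(positives, tests):
    positives, tests = np.asarray(positives), np.asarray(tests)
    p_bar = positives.sum() / tests.sum()          # centre line
    sigma = np.sqrt(p_bar * (1 - p_bar) / tests)   # per-day standard error
    ucl = np.clip(p_bar + 3 * sigma, 0, 1)         # upper control limit
    lcl = np.clip(p_bar - 3 * sigma, 0, 1)         # lower control limit
    rates = positives / tests
    signals = (rates > ucl) | (rates < lcl)        # out-of-control days
    return p_bar, ucl, lcl, signals

positives = [5, 7, 6, 30, 4]           # day 4 spikes well above baseline
tests = [500, 510, 495, 505, 490]
p_bar, ucl, lcl, signals = p_chart_limits(positives, tests)
```

In the dashboard setting, each daily point and its limits would feed a Shiny/plotly trace; a point outside the limits flags a school or subdistrict for targeted support.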
APA, Harvard, Vancouver, ISO, and other styles
46

Reis, João. "Exploring Applications and Practical Examples by Streamlining Material Requirements Planning (MRP) with Python." Logistics 7, no. 4 (2023): 91. http://dx.doi.org/10.3390/logistics7040091.

Full text
Abstract:
Background: Material Requirements Planning (MRP) is critical in Supply Chain Management (SCM), facilitating effective inventory management and meeting production demands in the manufacturing sector. Despite the potential benefits of automating the MRP tasks to meet the demand for expedited and efficient management, the field appears to be lagging behind in harnessing the advancements offered by Artificial Intelligence (AI) and sophisticated programming languages. Consequently, this study aims to address this gap by exploring the applications of Python in simplifying the MRP processes. Methods: This article offers a twofold approach: firstly, it conducts research to uncover the potential applications of the Python code in streamlining the MRP operations, and the practical examples serve as evidence of Python’s efficacy in simplifying the MRP tasks; secondly, this article introduces a conceptual framework that showcases the Python ecosystem, highlighting libraries and structures that enable efficient data manipulation, analysis, and optimization techniques. Results: This study presents a versatile framework that integrates a variety of Python tools, including but not limited to Pandas, Matplotlib, and Plotly, to streamline and actualize an 8-step MRP process. Additionally, it offers preliminary insights into the integration of the Python-based MRP solution (MRP.py) with Enterprise Resource Planning (ERP) systems. Conclusions: While the article focuses on demonstrating the practicality of Python in MRP, future endeavors will entail empirically integrating MRP.py with the ERP systems in small- and medium-sized companies. This integration will establish real-time data synchronization between the Python and ERP systems, leading to accurate MRP calculations and enhanced decision-making processes.
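One netting step of the kind such a Python MRP framework automates can be sketched with pandas. The column names, the lot-for-lot policy, and the one-period lead time are illustrative assumptions, not the article's 8-step process.

```python
# Hedged sketch of MRP netting in pandas: project on-hand inventory,
# compute net requirements, and release lot-for-lot planned orders one
# period ahead (assumed lead time = 1). All figures are illustrative.
import pandas as pd

mrp = pd.DataFrame({
    "period": [1, 2, 3, 4],
    "gross_requirements": [40, 50, 60, 50],
    "scheduled_receipts": [0, 30, 0, 0],
})

on_hand, projected, nets = 60, [], []
for _, row in mrp.iterrows():
    available = on_hand + row["scheduled_receipts"] - row["gross_requirements"]
    net = max(0, -available)        # shortfall to be covered by a planned receipt
    on_hand = available + net       # lot-for-lot: order exactly the shortfall
    projected.append(on_hand)
    nets.append(net)

mrp["projected_on_hand"] = projected
mrp["net_requirements"] = nets
# Planned orders release one period before they are needed (lead time = 1).
mrp["planned_order_release"] = mrp["net_requirements"].shift(-1, fill_value=0)
```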
APA, Harvard, Vancouver, ISO, and other styles
47

Obermayer, Benedikt, Manuel Holtgrewe, Mikko Nieminen, Clemens Messerschmidt, and Dieter Beule. "SCelVis: exploratory single cell data analysis on the desktop and in the cloud." PeerJ 8 (February 19, 2020): e8607. http://dx.doi.org/10.7717/peerj.8607.

Full text
Abstract:
Background Single cell omics technologies present unique opportunities for biomedical and life sciences from lab to clinic, but the high dimensional nature of such data poses challenges for computational analysis and interpretation. Furthermore, FAIR data management as well as data privacy and security become crucial when working with clinical data, especially in cross-institutional and translational settings. Existing solutions are either bound to the desktop of one researcher or come with dependencies on vendor-specific technology for cloud storage or user authentication. Results To facilitate analysis and interpretation of single-cell data by users without bioinformatics expertise, we present SCelVis, a flexible, interactive and user-friendly app for web-based visualization of pre-processed single-cell data. Users can survey multiple interactive visualizations of their single cell expression data and cell annotation, define cell groups by filtering or manual selection and perform differential gene expression, and download raw or processed data for further offline analysis. SCelVis can be run both on the desktop and cloud systems, accepts input from local and various remote sources using standard and open protocols, and allows for hosting data in the cloud and locally. We test and validate our visualization using publicly available scRNA-seq data. Methods SCelVis is implemented in Python using Dash by Plotly. It is available as a standalone application as a Python package, via Conda/Bioconda and as a Docker image. All components are available as open source under the permissive MIT license and are based on open standards and interfaces, enabling further development and integration with third party pipelines and analysis components. The GitHub repository is https://github.com/bihealth/scelvis.
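The "define cell groups and perform differential gene expression" step SCelVis exposes interactively can be sketched in miniature. This is an assumption-laden stand-in: a per-gene Welch-style z-score on a synthetic cells-by-genes matrix, not SCelVis's actual statistics.

```python
# Hedged sketch of differential expression between two cell groups:
# a per-gene mean-difference z-score on synthetic expression data.
import numpy as np

rng = np.random.default_rng(0)
n_cells, n_genes = 200, 5
expr = rng.normal(size=(n_cells, n_genes))       # cells x genes matrix
group = rng.integers(0, 2, size=n_cells).astype(bool)
expr[group, 2] += 2.0                            # gene 2 is truly up in group A

def differential_score(expr, group):
    """Welch-style z-score per gene between group and its complement."""
    a, b = expr[group], expr[~group]
    diff = a.mean(axis=0) - b.mean(axis=0)
    se = np.sqrt(a.var(axis=0, ddof=1) / len(a) +
                 b.var(axis=0, ddof=1) / len(b))
    return diff / se

scores = differential_score(expr, group)
top_gene = int(np.argmax(np.abs(scores)))
```

In the app, the two groups would come from the user's filter or manual selection, and the ranked genes would drive the downloadable results table.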
APA, Harvard, Vancouver, ISO, and other styles
48

Al-Funjan, Amera, Farid Meziane, and Rob Aspin. "Describing Pulmonary Nodules Using 3D Clustering." Advanced Engineering Research 22, no. 3 (2022): 261–71. http://dx.doi.org/10.23947/2687-1653-2022-22-3-261-271.

Full text
Abstract:
Introduction. Determining the tumor (nodule) characteristics in terms of the shape, location, and type is an essential step after nodule detection in medical images for selecting the appropriate clinical intervention by radiologists. Computer-aided detection (CAD) systems efficiently succeeded in the nodule detection by 2D processing of computed tomography (CT)-scan lung images; however, the nodule (tumor) description in more detail is still a big challenge that faces these systems. Materials and Methods. In this paper, the 3D clustering is carried out on volumetric CT-scan images containing the nodule and its structures to describe the nodule progress through the consecutive slices of the lung in CT images. Results. This paper combines algorithms to cluster and define nodule’s features in 3D visualization. Applying some 3D functions to the objects, clustered using the K-means technique of CT lung images, provides a 3D visual exploration of the nodule shape and location. This study mainly focuses on clustering in 3D to discover complex information for a case missed in the radiologist’s report. In addition, the 3D density-based spatial clustering of applications with noise (DBSCAN) method and another 3D application (Plotly) have been applied to evaluate the proposed system in this work. The proposed method has discovered a complicated case in data and automatically provides information about the nodule types (spherical, juxta-pleural, and pleural-tail). The algorithm is validated on the standard data consisting of the lung computed tomography scans with nodules greater and less than 3 mm in size. Discussion and Conclusions. Based on the proposed model, it is possible to cluster lung nodules in volumetric CT scan and determine a set of characteristics such as the shape, location and type.
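The 3D clustering step can be sketched with scikit-learn on synthetic voxel coordinates: a dense "nodule" blob against scattered background tissue, clustered with K-means and, as in the paper's evaluation, with DBSCAN. The point geometry is an illustrative assumption, not CT data.

```python
# Hedged sketch of 3D clustering on synthetic voxel coordinates: a dense
# nodule-like blob amid scattered background points, clustered with both
# K-means and DBSCAN. All coordinates are illustrative assumptions.
import numpy as np
from sklearn.cluster import DBSCAN, KMeans

rng = np.random.default_rng(1)
background = rng.uniform(0, 50, size=(300, 3))                   # sparse tissue
nodule = rng.normal(loc=(25, 25, 25), scale=0.8, size=(80, 3))   # dense blob
points = np.vstack([background, nodule])

kmeans_labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(points)
# DBSCAN isolates the dense blob as one cluster and marks most of the
# sparse background as noise (label -1).
dbscan_labels = DBSCAN(eps=2.0, min_samples=5).fit_predict(points)
```

The labelled 3D points are exactly what one would feed to Plotly's `scatter_3d` for the visual exploration the abstract describes.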
APA, Harvard, Vancouver, ISO, and other styles
49

Ramakrishnan, R., and P. Angarika. "SMART WATCH DATA ANALYSIS USING PYTHON AND HUMAN HEALTH PREDICTION." International Scientific Journal of Engineering and Management 03, no. 12 (2024): 1–5. https://doi.org/10.55041/isjem02154.

Full text
Abstract:
This project leverages smartwatch fitness data to predict health patterns and monitor daily activity trends, underscoring the role of wearables in personal health management. Using Python, Pandas, and Plotly, it handles data preprocessing, visualization, and predictive analysis on metrics such as step counts, calories burned, and active minutes. Data preprocessing includes managing missing values and standardizing the "Activity Date" field. Descriptive statistics and visualizations, including scatter plots, pie charts, and bar charts, uncover trends and behavioral patterns. Descriptive statistics provide insight into data distribution, while visualizations reveal significant trends. Scatter plots highlight correlations, such as between calories burned and steps taken, pie charts depict activity time allocation, and bar charts present active minutes across different days. These visualizations uncover behavioral patterns and emphasize data-driven insights. For predictive analysis, a Random Forest model is applied to forecast "very active minutes," representing high-intensity activity. Key predictive features include steps and calories burned, which strongly correlate with active minutes. The model achieved an accuracy of 80%, and validation metrics, such as MSE and R², confirmed its reliability. This predictive capability offers users actionable insights for fitness improvement, helping them set realistic goals and monitor progress effectively. In conclusion, this study illustrates the practical applications of machine learning in wearable data analysis, showing potential for integration into fitness-tracking apps. The model’s insights support both short-term fitness and long-term health, with future improvements including additional metrics, like heart rate and sleep data, for comprehensive health monitoring.
KEYWORDS: Smartwatch data analysis, very active minutes, Physical activity prediction, Random Forest algorithm, Personalized fitness monitoring, High-intensity activity, Machine learning in health, Predictive modeling.
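The predictive step here can be sketched with scikit-learn: a Random Forest regressor learning "very active minutes" from steps and calories. The synthetic data generator below is an assumption standing in for the smartwatch export, so the coefficients and noise levels are illustrative only.

```python
# Hedged sketch: Random Forest predicting "very active minutes" from steps
# and calories burned. The synthetic daily records are an illustrative
# stand-in for real smartwatch data.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_days = 500
steps = rng.integers(1_000, 20_000, size=n_days)
calories = 1_500 + 0.08 * steps + rng.normal(0, 100, size=n_days)
very_active = (0.004 * steps + 0.01 * (calories - 1_500)
               + rng.normal(0, 5, size=n_days))    # assumed relationship

X = np.column_stack([steps, calories])
X_train, X_test, y_train, y_test = train_test_split(X, very_active, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)
r2 = r2_score(y_test, model.predict(X_test))
```

`model.feature_importances_` then quantifies the claim that steps and calories drive the prediction.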
APA, Harvard, Vancouver, ISO, and other styles
50

Sultania, Saket, Rohit Sonawane, and Prashasti Kanikar. "Machine Learning based Wildfire Area Estimation Leveraging Weather Forecast Data." International Journal of Information Technology and Computer Science 17, no. 1 (2025): 1–15. https://doi.org/10.5815/ijitcs.2025.01.01.

Full text
Abstract:
Wildfires are increasingly destructive natural disasters, annually consuming millions of acres of forests and vegetation globally. The complex interactions among fuels, topography, and meteorological factors, including temperature, precipitation, humidity, and wind, govern wildfire ignition and spread. This research presents a framework that integrates satellite remote sensing and numerical weather prediction model data to refine estimations of final wildfire sizes. A key strength of our approach is the use of comprehensive geospatial datasets from the IBM PAIRS platform, which provides a robust foundation for our predictions. We implement machine learning techniques through the AutoGluon automated machine learning toolkit to determine the optimal model for burned area prediction. AutoGluon automates the process of feature engineering, model selection, and hyperparameter tuning, evaluating a diverse range of algorithms, including neural networks, gradient boosting, and ensemble methods, to identify the most effective predictor for wildfire area estimation. The system features an intuitive interface developed in Gradio, which allows the incorporation of key input parameters, such as vegetation indices and weather variables, to customize wildfire projections. Interactive Plotly visualizations categorize the predicted fire severity levels across regions. This study demonstrates the value of synergizing Earth observations from spaceborne instruments and forecast data from numerical models to strengthen real-time wildfire monitoring and postfire impact assessment capabilities for improved disaster management. We optimize an ensemble model by comparing various algorithms to minimize the root mean squared error between the predicted and actual burned areas, achieving improved predictive performance over any individual model. The final metric reveals that our optimized WeightedEnsemble model achieved a root mean squared error (RMSE) of 1.564 km² on the test data, indicating an average deviation of approximately 1.2 km² in the predictions.
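The weighted-ensemble idea can be sketched in miniature: grid-search a convex weight between two base regressors to minimize validation RMSE, mirroring what AutoGluon's WeightedEnsemble does automatically. The base models, synthetic data, and weight grid below are illustrative assumptions, not the study's configuration.

```python
# Hedged sketch of a two-model weighted ensemble: pick the convex
# combination weight that minimizes validation RMSE. Models and data
# are illustrative stand-ins for AutoGluon's automated search.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 5))
y = X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) + 0.3 * rng.normal(size=400)

X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)
preds = [m.fit(X_train, y_train).predict(X_val)
         for m in (Ridge(), RandomForestRegressor(random_state=0))]

def rmse(a, b):
    return float(np.sqrt(mean_squared_error(a, b)))

weights = np.linspace(0, 1, 21)                  # weight on the Ridge model
scores = [rmse(y_val, w * preds[0] + (1 - w) * preds[1]) for w in weights]
best_w = float(weights[int(np.argmin(scores))])
best_rmse = min(scores)
```

Because the weight grid includes 0 and 1, the ensemble can never score worse on the validation set than either base model alone, which is the property the abstract reports.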
APA, Harvard, Vancouver, ISO, and other styles