Academic literature on the topic 'Python ECG Analysis'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Python ECG Analysis.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Python ECG Analysis"

1

Fedjajevs, Andrejs, Willemijn Groenendaal, Carlos Agell, and Evelien Hermeling. "Platform for Analysis and Labeling of Medical Time Series." Sensors 20, no. 24 (December 19, 2020): 7302. http://dx.doi.org/10.3390/s20247302.

Abstract:
Reliable and diverse labeled reference data are essential for the development of high-quality processing algorithms for medical signals, such as electrocardiogram (ECG) and photoplethysmogram (PPG). Here, we present the Platform for Analysis and Labeling of Medical time Series (PALMS) designed in Python. Its graphical user interface (GUI) facilitates three main types of manual annotations—(1) fiducials, e.g., R-peaks of ECG; (2) events with an adjustable duration, e.g., arrhythmic episodes; and (3) signal quality, e.g., data parts corrupted by motion artifacts. All annotations can be attributed to the same signal simultaneously in an ergonomic and user-friendly manner. Configuration for different data and annotation types is straightforward and flexible in order to use a wide range of data sources and to address many different use cases. Above all, configuration of PALMS allows plugging-in existing algorithms to display outcomes of automated processing, such as automatic R-peak detection, and to manually correct them where needed. This enables fast annotation and can be used to further improve algorithms. The GUI is currently complemented by ECG and PPG algorithms that detect characteristic points with high accuracy. The ECG algorithm reached 99% on the MIT/BIH arrhythmia database. The PPG algorithm was validated on two public databases with an F1-score above 98%. The GUI and optional algorithms result in an advanced software tool that allows the creation of diverse reference sets for existing datasets.
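For readers who want a concrete starting point, a minimal stand-alone R-peak detector in the spirit of the pluggable detectors PALMS supports might look like the sketch below; the simplified Pan-Tompkins-style pipeline, thresholds, and window lengths are illustrative assumptions, not the authors' published algorithm.

    import numpy as np
    from scipy.signal import find_peaks

    def detect_r_peaks(ecg, fs):
        """Locate R-peaks in a single-lead ECG sampled at fs Hz."""
        # Emphasise the QRS complex: differentiate, square, then smooth
        # over ~150 ms (a much-simplified Pan-Tompkins-style chain).
        w = int(0.150 * fs)
        energy = np.convolve(np.diff(ecg) ** 2, np.ones(w) / w, mode="same")
        # 250 ms refractory period and an adaptive amplitude floor.
        peaks, _ = find_peaks(energy, distance=int(0.250 * fs),
                              height=energy.mean() + energy.std())
        return peaks

Because the detection runs on a derived energy signal, a local maximum search around each returned index in the raw ECG would refine the fiducials for labeling work.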
2

Durán-Acevedo, Cristhian Manuel, Jeniffer Katerine Carrillo-Gómez, and Camilo Andrés Albarracín-Rojas. "Electronic Devices for Stress Detection in Academic Contexts during Confinement Because of the COVID-19 Pandemic." Electronics 10, no. 3 (January 27, 2021): 301. http://dx.doi.org/10.3390/electronics10030301.

Abstract:
This article studies the development and implementation of different electronic devices for measuring signals during stress situations, specifically in academic contexts, in a student group of the Engineering Department at the University of Pamplona (Colombia). For the research, devices were used to measure physiological signals: the galvanic skin response (GSR), the electrical response of the heart via an electrocardiogram (ECG), and the electrical activity of the upper trapezius muscle (EMG), together with an electronic nose system (E-nose) developed as a pilot study for detecting and identifying the volatile organic compound profiles emitted by the skin. Data were gathered during an online test (during the COVID-19 pandemic), with the aim of measuring the students' stress state, and again during the relaxation state after the exam period. Two algorithms, Linear Discriminant Analysis and Support Vector Machine, were applied in Python to classify and differentiate the two states, achieving 100% classification accuracy with GSR, 90% with the proposed E-nose system, 90% with the EMG system, and 88% with the ECG.
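As an illustration of the classification step described above, here is a hedged scikit-learn sketch; the feature matrix, labels, and model settings are placeholders, not the paper's actual data or configuration.

    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    # Placeholder data: one row of features per recording (e.g. mean GSR
    # level, ECG-derived heart rate, EMG amplitude); 1 = exam, 0 = rest.
    rng = np.random.default_rng(0)
    X = rng.random((40, 6))
    y = np.tile([0, 1], 20)

    for name, clf in [("LDA", LinearDiscriminantAnalysis()),
                      ("SVM", SVC(kernel="rbf"))]:
        model = make_pipeline(StandardScaler(), clf)
        print(name, cross_val_score(model, X, y, cv=5).mean())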
3

Rodriguez-Torres, Erika, Alejandra Rosales-Lagarde, Carlos Fernando Chávez Vega, José Luis Ocaña Garrido, Yair Alejandro Pardo Rosales, and Rodrigo Silva Mota. "Análisis Fractal del Electroencefalograma Durante la Vigilia en Reposo de Adultos Mayores Hidalguenses y Deterioro Cognitivo." Pädi Boletín Científico de Ciencias Básicas e Ingenierías del ICBI 7, no. 14 (January 5, 2020): 10–13. http://dx.doi.org/10.29057/icbi.v7i14.4334.

Abstract:
It is known that Detrended Fluctuation Analysis (DFA) of biological time series such as the electrocardiogram (ECG), the electroencephalogram (EEG), and others is a useful tool for discriminating between health and disease. To determine whether differences exist between the DFA of older adults with cognitive impairment (DC) and without cognitive impairment (sDC), DFA was performed on two subjects, one diagnosed with DC and one without. EEG and electromyography (EMG) recordings were obtained during wakefulness with eyes closed. A user-friendly Python interface was developed for selecting the time series on which the DFA is computed. It was observed that, at rest with eyes closed, the subject with DC shows higher values in frontal regions than the subject without cognitive impairment. It is concluded that DFA provides quantifiable information on the localization and mechanisms underlying cognitive impairment, which can help monitor its course in older adults.
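The paper's interface is not distributed with the abstract, but the DFA computation it wraps is standard; a compact NumPy version might look like the following, where the window-size range and first-order detrending are conventional choices rather than the authors' exact settings.

    import numpy as np

    def dfa_alpha(x, scales=None):
        """Detrended Fluctuation Analysis: return the scaling exponent."""
        x = np.asarray(x, dtype=float)
        y = np.cumsum(x - x.mean())            # integrated, mean-centred profile
        if scales is None:
            scales = np.unique(np.logspace(np.log10(4),
                                           np.log10(len(x) // 4), 20).astype(int))
        flucts = []
        for n in scales:
            rms = []
            for i in range(len(y) // n):       # non-overlapping windows of size n
                seg = y[i * n:(i + 1) * n]
                t = np.arange(n)
                trend = np.polyval(np.polyfit(t, seg, 1), t)  # linear detrend
                rms.append(np.sqrt(np.mean((seg - trend) ** 2)))
            flucts.append(np.mean(rms))
        # alpha is the slope of log F(n) versus log n
        return np.polyfit(np.log(scales), np.log(flucts), 1)[0]

    print(dfa_alpha(np.random.default_rng(0).standard_normal(2000)))  # ~0.5

White noise yields an alpha near 0.5, while strongly persistent signals approach 1.0, the kind of contrast the study reports across frontal regions.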
4

Kitzes, Justin, and Mark Wilber. "macroeco: reproducible ecological pattern analysis in Python." Ecography 39, no. 4 (January 14, 2016): 361–67. http://dx.doi.org/10.1111/ecog.01905.

5

Bao, Forrest Sheng, Xin Liu, and Christina Zhang. "PyEEG: An Open Source Python Module for EEG/MEG Feature Extraction." Computational Intelligence and Neuroscience 2011 (2011): 1–7. http://dx.doi.org/10.1155/2011/406391.

Abstract:
Computer-aided diagnosis of neural diseases from EEG signals (or other physiological signals that can be treated as time series, e.g., MEG) is an emerging field that has gained much attention in recent years. Extracting features is a key component in the analysis of EEG signals. In our previous works, we have implemented many EEG feature extraction functions in the Python programming language. As Python is gaining more ground in scientific computing, an open source Python module for extracting EEG features has the potential to save much time for computational neuroscientists. In this paper, we introduce PyEEG, an open source Python module for EEG feature extraction.
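PyEEG's own function names are not quoted in the abstract, so the sketch below uses plain SciPy to show the same kind of spectral feature extraction; the band definitions and Welch parameters are conventional assumptions.

    import numpy as np
    from scipy.signal import welch

    BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

    def band_powers(signal, fs):
        """Relative spectral power per classical EEG band (Welch PSD)."""
        freqs, psd = welch(signal, fs=fs, nperseg=min(len(signal), 4 * fs))
        total = np.trapz(psd, freqs)
        return {name: np.trapz(psd[(freqs >= lo) & (freqs < hi)],
                               freqs[(freqs >= lo) & (freqs < hi)]) / total
                for name, (lo, hi) in BANDS.items()}

    print(band_powers(np.random.default_rng(0).standard_normal(2560), fs=256))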
6

Badenhorst, Melinda, Christopher J. Barry, Christiaan J. Swanepoel, Charles Theo van Staden, Julian Wissing, and Johann M. Rohwer. "Workflow for Data Analysis in Experimental and Computational Systems Biology: Using Python as ‘Glue’." Processes 7, no. 7 (July 18, 2019): 460. http://dx.doi.org/10.3390/pr7070460.

Abstract:
Bottom-up systems biology entails the construction of kinetic models of cellular pathways by collecting kinetic information on the pathway components (e.g., enzymes) and collating this into a kinetic model, based for example on ordinary differential equations. This requires integration and data transfer between a variety of tools, ranging from data acquisition in kinetics experiments, to fitting and parameter estimation, to model construction, evaluation and validation. Here, we present a workflow that uses the Python programming language, specifically the modules from the SciPy stack, to facilitate this task. Starting from raw kinetics data, acquired either from spectrophotometric assays with microtitre plates or from Nuclear Magnetic Resonance (NMR) spectroscopy time-courses, we demonstrate the fitting and construction of a kinetic model using scientific Python tools. The analysis takes place in a Jupyter notebook, which keeps all information related to a particular experiment together in one place and thus serves as an e-labbook, enhancing reproducibility and traceability. The Python programming language serves as an ideal foundation for this framework because it is powerful yet relatively easy to learn for the non-programmer, has a large library of scientific routines and active user community, is open-source and extensible, and many computational systems biology software tools are written in Python or have a Python Application Programming Interface (API). Our workflow thus enables investigators to focus on the scientific problem at hand rather than worrying about data integration between disparate platforms.
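As a minimal example of the fitting step in such a workflow, here is a SciPy sketch estimating Michaelis-Menten parameters from rate-versus-substrate data; the data values are invented for illustration.

    import numpy as np
    from scipy.optimize import curve_fit

    def michaelis_menten(s, vmax, km):
        return vmax * s / (km + s)

    # Invented rate-versus-substrate data, e.g. from a plate-reader assay.
    s = np.array([0.1, 0.25, 0.5, 1.0, 2.0, 5.0, 10.0])
    v = np.array([0.8, 1.7, 2.9, 4.4, 5.9, 7.4, 8.1])

    (vmax, km), _ = curve_fit(michaelis_menten, s, v, p0=(8.0, 1.0))
    print(f"Vmax = {vmax:.2f}, Km = {km:.2f}")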
7

Trainor-Guitton, Whitney, Leo Turon, and Dominique Dubucq. "Python Earth Engine API as a new open-source ecosphere for characterizing offshore hydrocarbon seeps and spills." Leading Edge 40, no. 1 (January 2021): 35–44. http://dx.doi.org/10.1190/tle40010035.1.

Abstract:
The Python Earth Engine application programming interface (API) provides a new open-source ecosphere for testing hydrocarbon detection algorithms on large volumes of images curated with the Google Earth Engine. We specifically demonstrate the Python Earth Engine API by calculating three hydrocarbon indices: fluorescence, rotation absorption, and normalized fluorescence. The Python Earth Engine API provides an ideal environment for testing these indices with varied oil seeps and spills by (1) removing barriers of proprietary software formats and (2) providing an extensive library of data analysis tools (e.g., Pandas and Seaborn) and classification algorithms (e.g., Scikit-learn and TensorFlow). Our results demonstrate end-member cases in which fluorescence and normalized fluorescence indices of seawater and oil are statistically similar and different. As expected, predictive classification is more effective and the calculated probability of oil is more accurate for scenarios in which seawater and oil are well separated in the fluorescence space.
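A hedged sketch of the Earth Engine Python API workflow the paper describes; the collection id, dates, area, and band pair are stand-ins, not the published hydrocarbon indices.

    import ee

    ee.Initialize()  # assumes a Google Earth Engine account and prior ee.Authenticate()

    # Hypothetical offshore area of interest and date range; the collection
    # id may need updating to the current Sentinel-2 asset name.
    aoi = ee.Geometry.Rectangle([-90.5, 28.0, -89.5, 29.0])
    scene = (ee.ImageCollection("COPERNICUS/S2")
             .filterBounds(aoi)
             .filterDate("2019-06-01", "2019-09-01")
             .median())

    # Generic normalized band ratio as an illustrative "index".
    index = scene.normalizedDifference(["B3", "B2"]).rename("nd_index")
    stats = index.reduceRegion(reducer=ee.Reducer.mean(), geometry=aoi, scale=60)
    print(stats.getInfo())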
8

Cadieux, Nicolas, Margaret Kalacska, Oliver T. Coomes, Mari Tanaka, and Yoshito Takasaki. "A Python Algorithm for Shortest-Path River Network Distance Calculations Considering River Flow Direction." Data 5, no. 1 (January 16, 2020): 8. http://dx.doi.org/10.3390/data5010008.

Abstract:
Vector-based shortest-path analysis in geographic information systems (GIS) is well established for road networks. Even though these network algorithms can be applied to river layers, they do not generally consider the direction of flow. This paper presents a Python 3.7 program (upstream_downstream_shortests_path_dijkstra.py) that was specifically developed for river networks. It implements multiple single-source (one-to-one) weighted Dijkstra shortest path calculations on a list of provided source and target nodes, and returns the route geometry, the total distance between each source and target node, and the total upstream and downstream distances for each shortest path. The end result is similar to what would be obtained by an “all-pairs” weighted Dijkstra shortest path algorithm. Contrary to an “all-pairs” Dijkstra, the algorithm only operates on the source and target nodes that were specified by the user and not on all of the nodes contained within the graph. For efficiency, only the upper distance matrix is returned (e.g., distance from node A to node B), while the lower distance matrix (e.g., distance from node B to A) is not. The program is intended to be used in a multiprocessor environment and relies on Python’s multiprocessing package.
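The paper's script is not reproduced here, but the underlying idea, weighted Dijkstra on a directed river graph with upstream travel represented explicitly, can be sketched with networkx:

    import networkx as nx

    # Toy river reach: edges follow the flow direction; reverse (upstream)
    # edges are added explicitly so travel against the flow stays distinguishable.
    G = nx.DiGraph()
    G.add_weighted_edges_from([("headwater", "fork", 12.0),
                               ("fork", "village_A", 5.0),
                               ("fork", "village_B", 8.0),
                               ("village_B", "mouth", 20.0)])
    for u, v, w in list(G.edges(data="weight")):
        G.add_edge(v, u, weight=w, upstream=True)

    print(nx.dijkstra_path(G, "village_A", "mouth", weight="weight"))
    print(nx.dijkstra_path_length(G, "village_A", "mouth", weight="weight"))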
9

Lopez, F., G. Charbonnier, Y. Kermezli, M. Belhocine, Q. Ferré, N. Zweig, M. Aribi, A. Gonzalez, S. Spicuglia, and D. Puthier. "Explore, edit and leverage genomic annotations using Python GTF toolkit." Bioinformatics 35, no. 18 (February 15, 2019): 3487–88. http://dx.doi.org/10.1093/bioinformatics/btz116.

Abstract:
Motivation: While Python has become very popular in bioinformatics, a limited number of libraries exist for fast manipulation of gene coordinates in Ensembl GTF format. Results: We have developed the GTF toolkit Python package (pygtftk), which aims at providing easy and powerful manipulation of gene coordinates in GTF format. For optimal performance, the core engine of pygtftk is a C dynamic library (libgtftk), while the Python API provides usability and readability for developing scripts. Based on this Python package, we have developed the gtftk command line interface, which contains 57 sub-commands (v0.9.10) to ease the handling of GTF files. These commands may be used to (i) perform basic tasks (e.g. selections, insertions, updates or deletions of features/keys), (ii) select genes/transcripts based on various criteria (e.g. size, exon number, transcription start site location, intron length, GO terms) or (iii) carry out more advanced operations such as coverage analyses of genomic features using bigWig files to create faceted read-coverage diagrams. In conclusion, the pygtftk package greatly simplifies the annotation of GTF files with external information while providing advanced tools to perform gene analyses. Availability and implementation: pygtftk and gtftk have been tested on Linux and MacOSX and are available from https://github.com/dputhier/pygtftk under the MIT license. The libgtftk dynamic library, written in C, is available from https://github.com/dputhier/libgtftk.
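A sketch of the package's Python API, assuming the entry points shown in the pygtftk documentation; the input file is hypothetical and the exact method signatures should be checked against https://github.com/dputhier/pygtftk.

    # Assumed API per the pygtftk docs; verify names against the project.
    from pygtftk.gtf_interface import GTF

    gtf = GTF("annotations.gtf")                  # hypothetical input file
    genes = gtf.select_by_key("feature", "gene")  # keep 'gene' records only
    for gene_id, start in genes.extract_data("gene_id,start",
                                             as_list_of_list=True):
        print(gene_id, start)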
10

Ono, Keiichiro, Tanja Muetze, Georgi Kolishovski, Paul Shannon, and Barry Demchak. "CyREST: Turbocharging Cytoscape Access for External Tools via a RESTful API." F1000Research 4 (August 5, 2015): 478. http://dx.doi.org/10.12688/f1000research.6767.1.

Abstract:
As bioinformatic workflows become increasingly complex and involve multiple specialized tools, so does the difficulty of reliably reproducing those workflows. Cytoscape is a critical workflow component for executing network visualization, analysis, and publishing tasks, but it can be operated only manually via a point-and-click user interface. Consequently, Cytoscape-oriented tasks are laborious and often error prone, especially with multistep protocols involving many networks. In this paper, we present the new cyREST Cytoscape app and accompanying harmonization libraries. Together, they improve workflow reproducibility and researcher productivity by enabling popular languages (e.g., Python and R, JavaScript, and C#) and tools (e.g., IPython/Jupyter Notebook and RStudio) to directly define and query networks, and perform network analysis, layouts and renderings. We describe cyREST’s API and overall construction, and present Python- and R-based examples that illustrate how Cytoscape can be integrated into large scale data analysis pipelines. cyREST is available in the Cytoscape app store (http://apps.cytoscape.org), where it has been downloaded over 1900 times since its release in late 2014.
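Because cyREST exposes plain HTTP endpoints, any language with an HTTP client can drive Cytoscape. A hedged Python sketch follows; the default base URL and endpoint paths are taken from the cyREST documentation and should be verified against the installed version.

    import requests

    BASE = "http://localhost:1234/v1"   # cyREST's default address

    print(requests.get(BASE).json())                 # server / API version info
    print(requests.get(f"{BASE}/networks").json())   # SUIDs of loaded networks

    # Create a tiny network from cytoscape.js-style JSON (endpoint and body
    # format per the cyREST docs; verify against the installed version).
    payload = {"data": {"name": "demo"},
               "elements": {"nodes": [{"data": {"id": "a"}},
                                      {"data": {"id": "b"}}],
                            "edges": [{"data": {"source": "a", "target": "b"}}]}}
    print(requests.post(f"{BASE}/networks", json=payload).json())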

Dissertations / Theses on the topic "Python ECG Analysis"

1

Veselá, Barbora. "Gnu Health Monitoring module." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2019. http://www.nusl.cz/ntk/nusl-399271.

Abstract:
This thesis focuses on the development of a GNU Health module for electrocardiogram monitoring and of an application providing a fundamental electrocardiogram analysis. The theoretical part contains a brief introduction to hospital information systems, including electronic patient records and healthcare data standards, followed by a description of the GNU Health application and of the implementation of the electrocardiogram analysis, written in the Python programming language. The practical part deals with the development of the GNU Health Monitoring module and the external application for signal analysis. The results, discussion, and conclusion follow.
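The thesis code is not reproduced in the abstract; as an illustration of the kind of fundamental ECG analysis it describes, here is a short sketch computing heart rate and simple HRV statistics from already-detected R-peak sample indices (the function name and choice of metrics are assumptions).

    import numpy as np

    def basic_ecg_stats(r_peaks, fs):
        """Heart rate and simple HRV measures from R-peak sample indices."""
        rr = np.diff(r_peaks) / fs                   # RR intervals in seconds
        return {"hr_bpm": 60.0 / rr.mean(),          # mean heart rate
                "sdnn_s": rr.std(ddof=1),            # overall HRV
                "rmssd_s": np.sqrt(np.mean(np.diff(rr) ** 2))}  # short-term HRV

    # Peaks roughly 1 s apart at fs = 360 Hz -> about 60 bpm.
    print(basic_ecg_stats(np.array([0, 360, 720, 1085, 1440]), fs=360))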

Book chapters on the topic "Python ECG Analysis"

1

Ramasamy, Prema, Shri Tharanyaa Jothimani Palanivelu, and Abin Sathesan. "Certain Applications of LabVIEW in the Field of Electronics and Communication." In LabVIEW - A Flexible Environment for Modeling and Daily Laboratory Use. IntechOpen, 2021. http://dx.doi.org/10.5772/intechopen.96301.

Abstract:
The LabVIEW platform, with its graphical programming environment, helps to integrate human-machine interface controllers with software such as MATLAB and Python. The platform plays a vital role in many pioneering areas, such as speech signal processing, processing of biomedical signals like the electrocardiogram (ECG) and electroencephalogram (EEG), fault analysis in analog electronic circuits, Cognitive Radio (CR), Software Defined Radio (SDR), and flexible and wearable electronics. Nowadays, most engineering colleges redesign their laboratory curricula to include remote-access laboratories that encourage students to access the laboratory anywhere and at any time. This helps every young learner to bolster their innovation when the laboratory environment is within reach. LabVIEW is widely recognized for its flexibility and adaptability. Due to its versatile input-output handling, it has found broad application in integrated systems. It can provide smart assistance to people with hearing and speech impairments by interpreting sign language through gesture recognition with flex sensors, monitor the health condition of elderly people by detecting heartbeat abnormalities through remote access, and identify the stage of breast cancer from computed tomography (CT) and magnetic resonance imaging (MRI) scans using image processing techniques. In this chapter, the previous work of authors who have extensively incorporated LabVIEW in the field of electronics and communication is discussed in detail.

Conference papers on the topic "Python ECG Analysis"

1

Henson, Jonathan, Richard Dolan, Gareth Thomas, and Christos Georgakis. "Automated Optimisation of T-Root Rotor Grooves With B-Splines and Finite Element Analysis." In ASME Turbo Expo 2015: Turbine Technical Conference and Exposition. American Society of Mechanical Engineers, 2015. http://dx.doi.org/10.1115/gt2015-43179.

Abstract:
An Alstom tool is described for the automated and simultaneous design optimisation of 2- and 4-hook T-root grooving of multiple steam turbine rotor stages in order to minimise the peak stress. The finite element axisymmetric thermal-stress calculation is performed with Abaqus in a few hours on modest hardware. The tool embeds Python scripting to facilitate the rotor groove model definition and meshing within Abaqus/CAE, with emphasis placed on minimising the effort for the initial setup. Rotor groove shapes are described with B-splines, maintained and modified within the in-house tool. Their shape is progressively refined as directed by a hybrid evolutionary/gradient-based optimisation engine in order to achieve the minimum stress objective. In the region of highest stress, the groove boundary shape adjusts as the optimisation proceeds to conform to the local contours of stress. Application to a low pressure steam turbine rotor demonstrates comparable or lower stresses with this tool compared to those from manual expert optimisation. The method can be readily extended to other geometric entities on the rotor described with B-spline curves, e.g. cavities, seals.
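As a small illustration of describing a boundary profile with a B-spline, as the tool does, here is a SciPy sketch; the degree, knots, and control values are invented, and an optimiser would perturb the control values.

    import numpy as np
    from scipy.interpolate import BSpline

    k = 3                                           # cubic
    c = np.array([0.0, 0.4, 1.0, 0.8, 0.3, 0.0])    # six control values
    t = np.concatenate(([0] * (k + 1), [1, 2], [3] * (k + 1)))  # len(t) = 6 + k + 1
    profile = BSpline(t, c, k)

    print(profile(np.linspace(0, 3, 7)))  # boundary ordinates an optimiser could refine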
2

Patel, Harshkumar, Jianlin Cai, Gautier Noiray, and Subrata Bhowmik. "Digital Transformation and Automation of Flow Assurance Engineering Workflows Using Digital Field Twin." In Offshore Technology Conference. OTC, 2021. http://dx.doi.org/10.4043/31017-ms.

Abstract:
Flow assurance is central to the design of a subsea production system and requires frequent interfacing with engineers from multiple disciplines. The objective of this paper is to demonstrate how a cloud-based digital field twin can be leveraged to automate subsea flow assurance engineering workflows and, consequently, achieve efficient collaboration, faster and more reliable designs, and reduced costs. In the proposed workflow, engineers use a web application built on top of a cloud-based digital twin platform to perform flow assurance calculations and design analysis. The web-based platform integrates multiphase flow simulators and other relevant engineering tools through Python scripts. A user is only required to input design constraints and the necessary basic information. The application acquires interdisciplinary data (e.g. pipeline, layout, equipment, etc.) and automatically performs pre-processing, model setup, simulation, and results processing in the background, making results available to all users at the front end. The digital flow assurance platform replaces traditional workflows requiring the use of different standalone engineering software and frequent exchange of information with other engineering teams in the form of documents and spreadsheets. The proposed cloud-based workflow allows engineers to focus on technical analysis by eliminating several manual, repetitive processes such as accessing different software, creating models, and extracting and formatting results. The ability to share results in the form of auto-generated reports and formatted spreadsheets minimizes human error and promotes information exchange and transparency among project team members from different disciplines. The cloud-based platform enables engineers to work on the same project from different geographical locations and devices. Overall, this digital flow assurance workflow significantly improves engineering efficiency, saves costs, and allows faster and more reliable concept design and FEED (Front End Engineering Design). The ideas widely discussed for flow assurance digitalization typically include the use of data analytics and machine learning, virtual flow metering, real-time data monitoring, predictive analytics, etc. This paper, however, presents a novel, practical idea to bring digital transformation to the way flow assurance engineers work and collaborate.
3

"Changing Paradigms of Technical Skills for Data Engineers." In InSITE 2018: Informing Science + IT Education Conferences: La Verne California. Informing Science Institute, 2018. http://dx.doi.org/10.28945/4001.

Abstract:
Aim/Purpose: [This Proceedings paper was revised and published in the 2018 issue of the journal Issues in Informing Science and Information Technology, Volume 15] This paper investigates the new technical skills that are needed for Data Engineering. Past research is compared to new research which creates a list of the 20 top technical skills required by a Data Engineer. The growing availability of Data Engineering jobs is discussed. The research methodology describes the gathering of sample data and then the use of Pig and MapReduce on AWS (Amazon Web Services) to count occurrences of Data Engineering technical skills from 100 Indeed.com job advertisements in July, 2017. Background: A decade ago, Data Engineering relied heavily on the technology of Relational Database Management Systems (RDBMS). For example, Grisham, P., Krasner, H., and Perry D. (2006) described an Empirical Software Engineering Lab (ESEL) that introduced Relational Database concepts to students with hands-on learning that they called “Data Engineering Education with Real-World Projects.” However, as seismic improvements occurred for the processing of large distributed datasets, big data analytics has moved into the forefront of the IT industry. As a result, the definition of Data Engineering has broadened and evolved to include newer technology that supports the distributed processing of very large amounts of data (e.g. the Hadoop ecosystem and NoSQL databases). This paper examines the technical skills that are needed to work as a Data Engineer in today’s rapidly changing technical environment. Research is presented that reviews 100 job postings for Data Engineers from Indeed (2017) during the month of July, 2017 and then ranks the technical skills in order of importance. The results are compared to earlier research by Stitch (2016) that ranked the top technical skills for Data Engineers in 2016 using LinkedIn to survey 6,500 people who identified themselves as Data Engineers. Methodology: A sample of 100 Data Engineering job postings was collected and analyzed from Indeed during July, 2017. The job postings were pasted into a text file and then related words were grouped together to make phrases. For example, the word “data” was put into context with other related words to form phrases such as “Big Data”, “Data Architecture” and “Data Engineering”. A text editor was used for this task, and the find/replace functionality of the text editor proved to be very useful for this project. After making phrases, the large text file was uploaded to the Amazon cloud (AWS) and a Pig batch job using MapReduce was leveraged to count the occurrence of phrases and words within the text file. The resulting phrases/words with occurrence counts were downloaded to a Personal Computer (PC) and then loaded into an Excel spreadsheet. Using a spreadsheet enabled the phrases/words to be sorted by occurrence count and facilitated the filtering out of irrelevant words. Another data-preparation task involved combining phrases or words that were synonymous. For example, the occurrence count for the acronym ELT and the occurrence count for the acronym ETL were added together to make an overall ELT/ETL occurrence count. ETL is a Data Warehousing acronym for Extracting, Transforming and Loading data. This task required knowledge of the subject area. Also, some words were counted in lower case and then the same word was also counted in mixed or upper case, thus producing two or three occurrence counts for the same word. These different counts were added together to make an overall occurrence count for the word (e.g. word occurrence counts for Python and python were added together). Finally, the Indeed occurrence counts were sorted to allow for the identification of a list of the top 20 technical skills needed by a Data Engineer. Contribution: Provides new information about the technical skills needed by Data Engineers. Findings: Twelve of the 20 Stitch (2016) report phrases/words matched the technical skills mentioned in the Indeed research. I considered C, C++ and Java a match to the broader category of Programming in the Indeed data. Although the ranked order of the two lists did not match, the top five ranked technical skills for both lists are similar. The reader of this paper might consider the skills of SQL, Python and Hadoop/HDFS to be very important technical skills for a Data Engineer. Although the programming language R is very popular with Data Scientists, it did not make the top 20 skills for Data Engineering; it was in the overall list from Indeed. The R programming language is oriented towards analytical processing (e.g. used by Data Scientists), whereas the Python language is a scripting and object-oriented language that facilitates the creation of data pipelines (e.g. used by Data Engineers). Because the data was collected one year apart and from very different data sources, the timing of the data collection and the different data sources could account for some of the differences in the ranked lists. It is worth noting that the Indeed ranked list introduced the technical skills of Design Skills, Spark, AWS (Amazon Web Services), Data Modeling, Kafka, Scala, Cloud Computing, Data Pipelines, APIs and AWS Redshift Data Warehousing to the top 20 ranked technical skills list. The Stitch (2016) report did not have matches to the Indeed (2017) sample data for Linux, Databases, MySQL, Business Intelligence, Oracle, Microsoft SQL Server, Data Analysis and Unix. Although many of these Stitch top 20 technical skills were on the Indeed list, they did not make the top 20 ranked technical skills. Recommendations for Practitioners: Some of the skills needed for database technologies are transferable to Data Engineering. Recommendation for Researchers: None. Impact on Society: There is not much peer-reviewed literature on the subject of Data Engineering; this paper will add new information to the subject area. Future Research: I am developing a Specialization in Data Engineering for the MS in Data Science degree at our university.
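A local, small-scale analogue of the counting-and-merging step described in the methodology; the synonym table and sample advertisements are invented, and the real runs used Pig on AWS.

    from collections import Counter

    # Invented synonym table merging variant spellings into one count.
    SYNONYMS = {"python": "Python", "etl": "ELT/ETL", "elt": "ELT/ETL"}

    def count_skills(ads):
        counts = Counter()
        for ad in ads:
            for token in ad.split():
                token = token.strip(".,():").lower()
                counts[SYNONYMS.get(token, token)] += 1
        return counts

    ads = ["Data Engineer: Python, SQL and ETL pipelines",
           "Senior engineer with python and Hadoop"]
    print(count_skills(ads).most_common(5))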
4

Zhong, Mengqi, and Yifan Yu. "The Spatio-temporal Disparities in Healthy Food Accessibility: A Case Study of Shanghai, China." In 55th ISOCARP World Planning Congress, Beyond Metropolis, Jakarta-Bogor, Indonesia. ISOCARP, 2019. http://dx.doi.org/10.47472/mboc5872.

Abstract:
The supply of healthy food is distributed unequally in the city. The accessibility of healthy food is affected by both location and traffic conditions. This paper examines spatio-temporal disparities in healthy food accessibility in Shanghai communities. Firstly, we choose all communities in Shanghai and use Python as a crawling tool to collect healthy-food-store POIs (e.g. agricultural markets, vegetable markets, fruit markets, aquatic seafood markets, supermarkets and comprehensive markets) from Gaode Map, obtaining 23,436 points to calculate the number and density of healthy food stores in the various communities. Secondly, after comparing Baidu Map and Gaode Map, leading platforms of Web GIS services in China, we choose Baidu Map to collect data to study the spatio-temporal difference in accessibility, using network analysis and developing a crawling tool to collect different travel times (e.g. walking and public transportation) from each community to the closest healthy food store at each time of day (0:00-24:00). Thirdly, we set up a variable to see at what times people in the communities are able to reach their nearest healthy food store within 15 minutes, and the ratio of this time to the whole day is calculated so that we can evaluate the temporal disparities of healthy food accessibility. Additionally, we use global and local spatial autocorrelation to analyze the spatial patterns of the temporal disparities of healthy food accessibility, based on Moran's index and the local indicator of spatial association (LISA) index. Finally, on the basis of the research above, a food desert map is drawn. The results of this analysis identify the communities in Shanghai with the greatest need for improved access to healthy food stores, taking into account the variance in accessibility caused by traffic at different times. Ultimately, this study explores a more complete and realistic picture of healthy food accessibility in Shanghai, and a corresponding improvement strategy is proposed.
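As a worked illustration of the global Moran's I statistic used in the paper, here is a self-contained NumPy sketch on a toy weight matrix; real analyses would use a spatial library such as PySAL, and the numbers are invented.

    import numpy as np

    def morans_i(values, w):
        """Global Moran's I for values under a spatial weight matrix w."""
        z = values - values.mean()
        n = len(values)
        return n * np.sum(w * np.outer(z, z)) / (w.sum() * np.sum(z ** 2))

    # Four toy communities on a line, neighbours weighted 1.
    w = np.array([[0, 1, 0, 0],
                  [1, 0, 1, 0],
                  [0, 1, 0, 1],
                  [0, 0, 1, 0]], dtype=float)
    access = np.array([0.9, 0.8, 0.3, 0.2])   # share of day within 15 minutes
    print(morans_i(access, w))                 # positive -> spatial clustering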
5

Singh, Karanpreet, Wei Zhao, and Rakesh K. Kapania. "An Optimization Framework for Curvilinearly Stiffened Composite Pressure Vessels and Pipes." In ASME 2017 Pressure Vessels and Piping Conference. American Society of Mechanical Engineers, 2017. http://dx.doi.org/10.1115/pvp2017-65469.

Abstract:
With improvements in innovative manufacturing technologies, e.g. automated tow-placement, it is now possible to fabricate any complex shaped structural design for practical applications. This innovative manufacturing technology allows for the fabrication of curvilinearly stiffened pressure vessels and pipes. Compared to straight stiffeners, curvilinear stiffeners have been shown to have better structural performance and weight savings under certain loading conditions. In this paper, an optimization framework for the optimal structural design of curvilinearly stiffened composite pressure vessels and pipes is presented. Non-Uniform Rational B-Spline (NURBS) curves are utilized to define curvilinear stiffeners over the surface of the pipe. An integrated tool using Python, NURBS-based Rhinoceros 3D, MSC.PATRAN and MSC.NASTRAN is implemented for performing topology optimization of curvilinearly stiffened cylindrical shells. Rhinoceros 3D is used for creating the geometry, which later can be exported to MSC.PATRAN for finite element model generation. Finally, MSC.NASTRAN is used to perform structural analysis. A hybrid optimization technique, consisting of Particle Swarm Optimization (PSO) and Gradient Based Optimization (GBO), is used for finding the optimized locations of stiffeners, optimal geometric dimensions for stiffener cross-sections and the optimal layer thickness for the composite skin. Optimization studies show that stiffener placement influences the buckling mode of the structure. Furthermore, the structural weight can be decreased by optimizing the stiffener’s cross-section and skin thickness. In this paper, a cylindrical pipe stiffened by orthogonal and curvilinear stiffeners under internal pressure and bending load is studied. It is shown that curvilinear stiffeners lead to a potential 8% weight saving in the composite laminated skin as compared to the case of using straight stiffeners.
6

Zhang, Yan-Hui, Tyler London, and Damaso DeBono. "Developing Mk Solutions for Fatigue Crack Growth Assessment of Flaws at Weld Root Toes in Girth Welds." In ASME 2018 37th International Conference on Ocean, Offshore and Arctic Engineering. American Society of Mechanical Engineers, 2018. http://dx.doi.org/10.1115/omae2018-77067.

Abstract:
Engineering critical assessment (ECA) is increasingly being used in the offshore industry to determine the maximum tolerable initial flaw size in girth welds for pipelines and risers. To account for the effect of the stress concentration factor (SCF) at the weld toe on the stress intensity factor range, ΔK, a magnification factor, Mk, is used. The existing Mk solutions given in BS 7910 were developed for fatigue assessment of flaws at the toes of fillet and butt welds and may not be suitable for assessing flaws at girth weld root toes, where the weld width is relatively small. On the other hand, for single-sided girth welds, fatigue cracking often initiates from weld toes on the root side, rather than on the weld cap side. Finite element (FE) modelling was performed to determine a 2D Mk solution for ECA of a flaw at the weld root bead toe. The weld root bead profile was uniquely characterised by five variables including weld root bead width, weld root bead height, hi-lo, weld root bead angle and weld root bead radius. Following a parametric sensitivity study, defect size, weld root bead height and hi-lo were identified as the governing parameters. A total of 6,000 FE simulations was performed and three types of defect models, which covered different combinations of weld root bead height and hi-lo, were generated and analysed. A series of automation scripts were developed in the Python programming language and the Mk solution for each type of defect model was developed and provided in a parametric equation. The accuracy of the 2D Mk solutions was confirmed by the experimental data, in terms of both fatigue crack growth and S-N curves. It was found that the methods and Mk solutions currently recommended in BS 7910 and DNV OS-F101 are inappropriate for assessing a flaw at a girth weld root toe.
7

"Interactive 3D Representation of Business Case Studies in the Classroom." In InSITE 2018: Informing Science + IT Education Conferences: La Verne California. Informing Science Institute, 2018. http://dx.doi.org/10.28945/4047.

Abstract:
Aim/Purpose: In our previous paper, we proposed a methodology for delivering an applied business course to a multicultural audience, with the aim of embedding cultural sensitivity into the course and creating a safe place for multicultural students to use their own cultural metaphors in the learning environment. We proposed a fusion of the ancient storytelling tradition, creating an overall context for the teaching process, with a specific use of rich pictures from Soft Systems Methodology (SSM). The teaching approach is promising and brings the required results. However, to be fully effective, the proposed method requires a computerized supporting tool in the form of a sophisticated graphical editor/presentation application displaying case-study progress in real time along with the in-class discussion. This tool is the central topic of this paper. Background: Existing tools such as MS PowerPoint, MS Visio, or Prezi, used by us so far, cannot serve our purpose, as interactive image updates distract the students. MS PowerPoint and Prezi require visible switching between design (edit) mode and presentation mode, whereas MS Visio editing is too slow for our purposes. This switching or editing time creates a meaningful distraction during the discussion. Methodology: As a solution to the above problem, the authors are developing their own specialized tool using the open-source software Blender 3D (http://blender.org) along with Python. The code will be released to the open-source domain to enable further co-operation with other researchers. Contribution: The described effort, if successful, should create a new presentation tool allowing, among other features, seamless in-class knowledge transfer, and in the future will pave the way for the gamification of case studies. Impact on Society: A definite improvement of teaching quality in applied business (though not limited to it), with the further possibility of extending it to deliver courses, e.g., for company executives. The tool and methodology allow embedding cultural sensitivity into the learning process and will have an impact on digital inclusiveness. Future Research: The tool enables further analysis of the business situation by an artificial intelligence interface. In fact, the whole interactive process of reaching the case conclusion may be observed (allowing the collection of analytics and insights on teacher and student behavior and performance).
8

Karnik, Saniya, Supriya Gupta, and Jason Baihly. "Machine Intelligence for Integrated Workover Operations." In SPE/ICoTA Well Intervention Conference and Exhibition. SPE, 2021. http://dx.doi.org/10.2118/204423-ms.

Abstract:
Because of recent advancements in the fields of natural language processing (NLP) and machine learning, there is potential to ingest decades of field history and heterogeneous production records. This paper proposes an analytics workflow that leverages artificial intelligence to process thousands of historical workover reports (handwritten and electronic), extract important information, learn patterns in production activity, and train machines to quantify workover impact and derive best practices for field operations. Natural language processing libraries were developed to ingest and catalog gigabytes of field data, identify rich sources of workover information, and extract workover and cost information from unstructured reports. A machine learning (ML) model was developed and trained to predict well-intervention categories based on the free text describing workovers found in reports. This ML model learnt the patterns and context of recurring words pertaining to a workover type (e.g. Artificial Lift, Well Integrity, etc.) in order to classify reports accordingly. Statistical models were built to determine the return on investment from workovers and rank them based on production improvement and payout time. Today, 80% of an oilfield expert's time can be spent manually organizing data. When processing decades of historical oilfield production data spread across both structured (production time series) and unstructured records (e.g., workover reports), experts often face two major challenges: 1) how to rapidly analyze field data with thousands of historical records, and 2) how to use the rich historical information to generate effective insights to optimize production. In this paper, we analyzed multiple field datasets in a heterogeneous file environment with 20 different file formats (PDF, Excel, and others), 2,000+ files, and production history spanning 50+ years across 2,000+ producing wells. Libraries were developed to extract workover files from complex folder hierarchies through an intelligent automated search. Information from reports was extracted through Python libraries and optical character recognition technology to build a master data source with production history, workover, and cost information. A neural network model was trained to predict the workover class for each report with >85% accuracy. The rich dataset was then used to analyze episodic workover activity by well and compute key performance indicators (KPIs) to identify well candidates for production enhancement. The building blocks included quantifying production upside and calculating return on investment for various workover classes. O&G companies have vast volumes of unstructured data and use less than 1% of it to uncover meaningful insights about field operations. Our workflow describes a methodology to ingest both structured and unstructured documents, capture knowledge, quantify production upside, understand capital spending, and learn best practices in workover operations through an automated process. This process helps optimize the forward operating expense (OPEX) plan with a focus on cost reduction and shortens the turnaround time for decision making.
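As a toy stand-in for the report classifier described above, here is a scikit-learn sketch using logistic regression in place of the paper's neural network; the example reports and categories are invented.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Invented workover notes and intervention categories for illustration.
    texts = ["replaced ESP pump and cable",
             "squeeze cement to repair casing leak",
             "installed gas lift valves",
             "milled out bridge plug, casing integrity ok"]
    labels = ["Artificial Lift", "Well Integrity",
              "Artificial Lift", "Well Integrity"]

    clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
    clf.fit(texts, labels)
    print(clf.predict(["pulled tubing and replaced the ESP"]))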
