To see the other types of publications on this topic, follow the link: Continuous Deep Analytics.

Journal articles on the topic 'Continuous Deep Analytics'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 journal articles for your research on the topic 'Continuous Deep Analytics.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Koutsomitropoulos, Dimitrios, Spiridon Likothanassis, and Panos Kalnis. "Semantics in the Deep: Semantic Analytics for Big Data." Data 4, no. 2 (May 7, 2019): 63. http://dx.doi.org/10.3390/data4020063.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Williams, Haney W., and Steven J. Simske. "Object Tracking Continuity through Track and Trace Method." Electronic Imaging 2020, no. 16 (January 26, 2020): 299–1. http://dx.doi.org/10.2352/issn.2470-1173.2020.16.avm-258.

Full text
Abstract:
The demand for object tracking (OT) applications has been increasing for the past few decades in many areas of interest: security, surveillance, intelligence gathering, and reconnaissance. Lately, newly-defined requirements for unmanned vehicles have enhanced the interest in OT. Advancements in machine learning, data analytics, and deep learning have facilitated the recognition and tracking of objects of interest; however, continuous tracking is currently a problem of interest to many research projects. This paper presents a system implementing a means to continuously track an object and predict its trajectory based on its previous pathway, even when the object is partially or fully concealed for a period of time. The system is composed of six main subsystems: Image Processing, Detection Algorithm, Image Subtractor, Image Tracking, Tracking Predictor, and the Feedback Analyzer. Combined, these systems allow for reasonable object continuity in the face of object concealment.
APA, Harvard, Vancouver, ISO, and other styles
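The entry above describes predicting an occluded object's trajectory from its previous pathway. As a minimal sketch of that idea only (not the authors' six-subsystem pipeline), a constant-velocity Kalman filter can keep coasting the position estimate through frames where the detector reports nothing; every matrix value and detection below is hypothetical.

```python
# Illustrative sketch only: a constant-velocity Kalman filter that keeps
# predicting an object's position while the detector reports no measurement
# (e.g., during partial or full concealment). All parameter values are made up.
import numpy as np

dt = 1.0                                   # frame interval
F = np.array([[1, 0, dt, 0],               # state transition for [x, y, vx, vy]
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]], dtype=float)
H = np.array([[1, 0, 0, 0],                # only position is observed
              [0, 1, 0, 0]], dtype=float)
Q = np.eye(4) * 1e-2                       # process noise (assumed)
R = np.eye(2) * 1.0                        # measurement noise (assumed)

x = np.array([0.0, 0.0, 1.0, 0.5])         # initial state guess
P = np.eye(4)

def step(x, P, z=None):
    """One predict/update cycle; pass z=None when the object is concealed."""
    x = F @ x                              # predict
    P = F @ P @ F.T + Q
    if z is not None:                      # update only when a detection exists
        y = z - H @ x
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ y
        P = (np.eye(4) - K @ H) @ P
    return x, P

# Detections for 6 frames; None marks frames where the object is hidden.
detections = [np.array([1.0, 0.6]), np.array([2.1, 1.1]), None, None,
              np.array([5.0, 2.6]), np.array([6.1, 3.0])]
for z in detections:
    x, P = step(x, P, z)
    print("predicted position:", x[:2])
```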
3

Williams, Haney W., Steven J. Simske, and Fr Gregory Bishay. "Unify The View of Camera Mesh Network to a Common Coordinate System." Electronic Imaging 2021, no. 17 (January 18, 2021): 175–1. http://dx.doi.org/10.2352/issn.2470-1173.2021.17.avm-175.

Full text
Abstract:
The demand for object tracking (OT) applications has been increasing for the past few decades in many areas of interest, including security, surveillance, intelligence gathering, and reconnaissance. Lately, newly-defined requirements for unmanned vehicles have enhanced the interest in OT. Advancements in machine learning, data analytics, and AI/deep learning have facilitated the improved recognition and tracking of objects of interest; however, continuous tracking is currently a problem of interest in many research projects. [1] In our past research, we proposed a system that implements the means to continuously track an object and predict its trajectory based on its previous pathway, even when the object is partially or fully concealed for a period of time. The second phase of this system proposed developing a common knowledge among a mesh of fixed cameras, akin to a real-time panorama. This paper discusses the method to coordinate the cameras' view to a common frame of reference so that the object location is known by all participants in the network.
APA, Harvard, Vancouver, ISO, and other styles
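The abstract above concerns mapping every camera's view into a common frame of reference. A minimal sketch of one standard way to do this, assuming each fixed camera has a pre-computed 3x3 ground-plane homography; the matrices and pixel coordinates below are invented and this is not the authors' method.

```python
# Minimal sketch: each camera i is assumed to have a homography H_i that maps
# its image plane onto a shared ground-plane coordinate system, so all cameras
# report object locations in the same frame of reference.
import numpy as np

def to_common_frame(H, pixel_xy):
    """Map a pixel coordinate to the common ground-plane frame via homography H."""
    p = np.array([pixel_xy[0], pixel_xy[1], 1.0])
    q = H @ p
    return q[:2] / q[2]                    # perspective divide

# Hypothetical homographies for two cameras in the mesh.
H_cam1 = np.array([[0.012, 0.001, -0.9],
                   [0.0005, 0.025, -1.5],
                   [0.0, 0.00001, 1.0]])
H_cam2 = np.array([[0.018, -0.002, 4.2],
                   [0.001, 0.024, -2.0],
                   [0.0, 0.00002, 1.0]])

# The same physical object seen by both cameras lands on (nearly) the same
# common-frame coordinates once both views are re-projected.
print(to_common_frame(H_cam1, (640, 360)))
print(to_common_frame(H_cam2, (210, 400)))
```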
4

Derhab, Abdelouahid, Arwa Aldweesh, Ahmed Z. Emam, and Farrukh Aslam Khan. "Intrusion Detection System for Internet of Things Based on Temporal Convolution Neural Network and Efficient Feature Engineering." Wireless Communications and Mobile Computing 2020 (December 22, 2020): 1–16. http://dx.doi.org/10.1155/2020/6689134.

Full text
Abstract:
In the era of the Internet of Things (IoT), connected objects produce an enormous amount of data traffic that feed big data analytics, which could be used in discovering unseen patterns and identifying anomalous traffic. In this paper, we identify five key design principles that should be considered when developing a deep learning-based intrusion detection system (IDS) for the IoT. Based on these principles, we design and implement Temporal Convolution Neural Network (TCNN), a deep learning framework for intrusion detection systems in IoT, which combines Convolution Neural Network (CNN) with causal convolution. TCNN is combined with Synthetic Minority Oversampling Technique-Nominal Continuous (SMOTE-NC) to handle unbalanced dataset. It is also combined with efficient feature engineering techniques, which consist of feature space reduction and feature transformation. TCNN is evaluated on Bot-IoT dataset and compared with two common machine learning algorithms, i.e., Logistic Regression (LR) and Random Forest (RF), and two deep learning techniques, i.e., LSTM and CNN. Experimental results show that TCNN achieves a good trade-off between effectiveness and efficiency. It outperforms the state-of-the-art deep learning IDSs that are tested on Bot-IoT dataset and records an accuracy of 99.9986% for multiclass traffic detection, and shows a very close performance to CNN with respect to the training time.
APA, Harvard, Vancouver, ISO, and other styles
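The TCNN described above combines a CNN with causal convolution. The sketch below shows only that building block, a left-padded (causal) 1D convolution stacked into a tiny classifier in PyTorch; the layer sizes, dilations, and feature counts are illustrative assumptions, not the paper's architecture.

```python
# Hedged sketch of a causal (left-padded) 1D convolution: the output at time t
# depends only on inputs at times <= t. Not the paper's TCNN.
import torch
import torch.nn as nn

class CausalConv1d(nn.Module):
    def __init__(self, in_ch, out_ch, kernel_size, dilation=1):
        super().__init__()
        self.pad = (kernel_size - 1) * dilation          # pad on the left only
        self.conv = nn.Conv1d(in_ch, out_ch, kernel_size, dilation=dilation)

    def forward(self, x):                                # x: (batch, channels, time)
        x = nn.functional.pad(x, (self.pad, 0))          # no "future" leakage
        return self.conv(x)

class TinyTCN(nn.Module):
    """Two dilated causal convolutions followed by a per-flow classifier."""
    def __init__(self, n_features, n_classes):
        super().__init__()
        self.block = nn.Sequential(
            CausalConv1d(n_features, 32, kernel_size=3, dilation=1), nn.ReLU(),
            CausalConv1d(32, 32, kernel_size=3, dilation=2), nn.ReLU(),
        )
        self.head = nn.Linear(32, n_classes)

    def forward(self, x):
        h = self.block(x)                                # (batch, 32, time)
        return self.head(h[:, :, -1])                    # logits from the last step

model = TinyTCN(n_features=10, n_classes=5)
logits = model(torch.randn(8, 10, 20))                   # 8 flows, 10 features, 20 steps
print(logits.shape)                                      # torch.Size([8, 5])
```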
5

Albrecht, Conrad M., Rui Zhang, Xiaodong Cui, Marcus Freitag, Hendrik F. Hamann, Levente J. Klein, Ulrich Finkler, et al. "Change Detection from Remote Sensing to Guide OpenStreetMap Labeling." ISPRS International Journal of Geo-Information 9, no. 7 (July 2, 2020): 427. http://dx.doi.org/10.3390/ijgi9070427.

Full text
Abstract:
The growing amount of openly available, meter-scale geospatial vertical aerial imagery and the need of the OpenStreetMap (OSM) project for continuous updates bring the opportunity to use the former to help with the latter, e.g., by leveraging the latest remote sensing data in combination with state-of-the-art computer vision methods to assist the OSM community in labeling work. This article reports our progress to utilize artificial neural networks (ANN) for change detection of OSM data to update the map. Furthermore, we aim at identifying geospatial regions where mappers need to focus on completing the global OSM dataset. Our approach is technically backed by the big geospatial data platform Physical Analytics Integrated Repository and Services (PAIRS). We employ supervised training of deep ANNs from vertical aerial imagery to segment scenes based on OSM map tiles to evaluate the technique quantitatively and qualitatively.
APA, Harvard, Vancouver, ISO, and other styles
6

Chen, Joy Iong Zong, and S. Smys. "Social Multimedia Security and Suspicious Activity Detection in SDN using Hybrid Deep Learning Technique." Journal of Information Technology and Digital World 2, no. 2 (May 27, 2020): 108–15. http://dx.doi.org/10.36548/jitdw.2020.2.004.

Full text
Abstract:
Social multimedia traffic is growing exponentially with the increased usage and continuous development of services and applications based on multimedia. Quality of Service (QoS), Quality of Information (QoI), scalability, reliability and such factors that are essential for social multimedia networks are realized by secure data transmission. For delivering actionable and timely insights in order to meet the growing demands of the user, multimedia analytics is performed by means of a trust-based paradigm. Efficient management and control of the network is facilitated by limiting certain capabilities such as energy-aware networking and runtime security in Software Defined Networks. In social multimedia context, suspicious flow detection is performed by a hybrid deep learning based anomaly detection scheme in order to enhance the SDN reliability. The entire process is divided into two modules namely – Abnormal activities detection using support vector machine based on Gradient descent and improved restricted Boltzmann machine which facilitates the anomaly detection module, and satisfying the strict requirements of QoS like low latency and high bandwidth in SDN using end-to-end data delivery module. In social multimedia, data delivery and anomaly detection services are essential in order to improve the efficiency and effectiveness of the system. For this purpose, we use benchmark datasets as well as real time evaluation to experimentally evaluate the proposed scheme. Detection of malicious events like confidential data collection, profile cloning and identity theft are performed to analyze the performance of the system using CMU-based insider threat dataset for large scale analysis.
APA, Harvard, Vancouver, ISO, and other styles
7

Sapci, A. Hasan, and H. Aylin Sapci. "Artificial Intelligence Education and Tools for Medical and Health Informatics Students: Systematic Review." JMIR Medical Education 6, no. 1 (June 30, 2020): e19285. http://dx.doi.org/10.2196/19285.

Full text
Abstract:
Background: The use of artificial intelligence (AI) in medicine will generate numerous application possibilities to improve patient care, provide real-time data analytics, and enable continuous patient monitoring. Clinicians and health informaticians should become familiar with machine learning and deep learning. Additionally, they should have a strong background in data analytics and data visualization to use, evaluate, and develop AI applications in clinical practice. Objective: The main objective of this study was to evaluate the current state of AI training and the use of AI tools to enhance the learning experience. Methods: A comprehensive systematic review was conducted to analyze the use of AI in medical and health informatics education, and to evaluate existing AI training practices. PRISMA-P (Preferred Reporting Items for Systematic Reviews and Meta-Analysis Protocols) guidelines were followed. The studies that focused on the use of AI tools to enhance medical education and the studies that investigated teaching AI as a new competency were categorized separately to evaluate recent developments. Results: This systematic review revealed that recent publications recommend the integration of AI training into medical and health informatics curricula. Conclusions: To the best of our knowledge, this is the first systematic review exploring the current state of AI education in both medicine and health informatics. Since AI curricula have not been standardized and competencies have not been determined, a framework for specialized AI training in medical and health informatics education is proposed.
APA, Harvard, Vancouver, ISO, and other styles
8

Pereira, Filipe Dwan, Samuel C. Fonseca, Elaine H. T. Oliveira, David B. F. Oliveira, Alexandra I. Cristea, and Leandro S. G. Carvalho. "Deep learning for early performance prediction of introductory programming students: a comparative and explanatory study." Revista Brasileira de Informática na Educação 28 (October 12, 2020): 723–48. http://dx.doi.org/10.5753/rbie.2020.28.0.723.

Full text
Abstract:
Introductory programming may be complex for many students. Moreover, there is a high failure and dropout rate in these courses. A potential way to tackle this problem is to predict student performance at an early stage, as it facilitates human-AI collaboration towards prescriptive analytics, where the instructors/monitors will be told how to intervene and support students, and where early intervention is crucial. However, the literature states that there is no reliable predictor yet for programming students’ performance, since even large-scale analyses of multiple features have resulted in only limited predictive power. Notice that Deep Learning (DL) can provide high-quality results for huge amounts of data and complex problems. In this sense, we employed DL for early prediction of students’ performance using data collected in the very first two weeks from introductory programming courses offered for a total of 2058 students during 6 semesters (longitudinal study). We compared our results with the state-of-the-art, an Evolutionary Algorithm (EA) that automatically creates and optimises machine learning pipelines. Our DL model achieved an average accuracy of 82.5%, which is statistically superior to the model constructed and optimised by the EA (p-value << 0.05 even with Bonferroni correction). In addition, we also adapted the DL model in a stacking ensemble for continuous prediction purposes. As a result, our regression model explained ~62% of the final grade variance. In closing, we also provide results on the interpretation of our regression model to understand the leading factors of success and failure in introductory programming.
APA, Harvard, Vancouver, ISO, and other styles
9

Savoska, Snezana, and Blagoj Ristevski. "Towards Implementation of Big Data Concepts in a Pharmaceutical Company." Open Computer Science 10, no. 1 (October 27, 2020): 343–56. http://dx.doi.org/10.1515/comp-2020-0201.

Full text
Abstract:
Nowadays, big data is a widely utilized concept that has been spreading quickly in almost every domain. For pharmaceutical companies, using this concept is a challenging task because of the permanent pressure and business demands created through the legal requirements, research demands and standardization that have to be adopted. These legal and standards’ demands are associated with human healthcare safety and drug control that demands continuous and deep data analysis. Companies update their procedures to the particular laws, standards, market demands and regulations all the time by using contemporary information technology. This paper highlights some important aspects of the experience and change methodology used in one Macedonian pharmaceutical company, which has employed information technology solutions that successfully tackle legal and business pressures when dealing with a large amount of data. We used a holistic view and deliverables analysis methodology to gain top-down insights into the possibilities of big data analytics. Also, structured interviews with the company’s managers were used for information collection and proactive methodology with workshops was used in data integration toward the implementation of big data concepts. The paper emphasizes the information and knowledge used in this domain to improve awareness for the needs of big data analysis to achieve a competitive advantage. The main results are focused on systematizing the whole company’s data, information and knowledge and propose a solution that integrates big data to support managers’ decision-making processes.
APA, Harvard, Vancouver, ISO, and other styles
10

Yadav, Piyush, Dhaval Salwala, Dibya Prakash Das, and Edward Curry. "Knowledge Graph Driven Approach to Represent Video Streams for Spatiotemporal Event Pattern Matching in Complex Event Processing." International Journal of Semantic Computing 14, no. 03 (September 2020): 423–55. http://dx.doi.org/10.1142/s1793351x20500051.

Full text
Abstract:
Complex Event Processing (CEP) is an event processing paradigm to perform real-time analytics over streaming data and match high-level event patterns. Presently, CEP is limited to processing structured data streams. Video streams are complicated due to their unstructured data model, which limits the ability of CEP systems to perform matching over them. This work introduces a graph-based structure for continuously evolving video streams, which enables the CEP system to query complex video event patterns. We propose the Video Event Knowledge Graph (VEKG), a graph-driven representation of video data. VEKG models video objects as nodes and their relationship interaction as edges over time and space. It creates a semantic knowledge representation of video data derived from the detection of high-level semantic concepts from the video using an ensemble of deep learning models. A CEP-based state optimization — VEKG-Time Aggregated Graph (VEKG-TAG) — is proposed over the VEKG representation for faster event detection. VEKG-TAG is a spatiotemporal graph aggregation method that provides a summarized view of the VEKG graph over a given time length. We defined a set of nine event pattern rules for two domains (Activity Recognition and Traffic Management), which act as queries applied over VEKG graphs to discover complex event patterns. To show the efficacy of our approach, we performed extensive experiments over 801 video clips across 10 datasets. The proposed VEKG approach was compared with other state-of-the-art methods and was able to detect complex event patterns over videos with an F-Score ranging from 0.44 to 0.90. In the given experiments, the optimized VEKG-TAG was able to reduce 99% and 93% of VEKG nodes and edges, respectively, with 5.19× faster search time, achieving sub-second median latency of 4–20 ms.
APA, Harvard, Vancouver, ISO, and other styles
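The abstract above models video objects as nodes and their spatiotemporal relations as timestamped edges. A toy sketch of that style of representation using networkx, with a single hand-written pattern rule; the relation names, the "enter vehicle" pattern, and the detections are hypothetical, and this is not the VEKG implementation.

```python
# Illustrative sketch: detected video objects become graph nodes and their
# pairwise spatial relations per frame become timestamped edges, which a simple
# rule can then scan for an event pattern.
import networkx as nx

G = nx.MultiDiGraph()
G.add_node("person_1", label="person")
G.add_node("car_3", label="car")

# One edge per (frame, relation) observation between two objects.
detections = [
    (1, "person_1", "car_3", "near"),
    (2, "person_1", "car_3", "near"),
    (3, "person_1", "car_3", "inside"),
]
for frame, subj, obj, rel in detections:
    G.add_edge(subj, obj, frame=frame, relation=rel)

def matches_enter_vehicle(graph, subj, obj):
    """Toy pattern: 'near' observed on some frame, 'inside' on a later frame."""
    rels = [(d["frame"], d["relation"])
            for u, v, d in graph.out_edges(subj, data=True) if v == obj]
    near = [f for f, r in rels if r == "near"]
    inside = [f for f, r in rels if r == "inside"]
    return bool(near) and bool(inside) and min(near) < max(inside)

print(matches_enter_vehicle(G, "person_1", "car_3"))     # True
```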
11

Li, Hong Xia, Yan Qin Guo, and Yong Wei Wang. "Composite Support Structure of Continuous Arch Internal Support Design and Calculation and Analysis of Engineering Example." Applied Mechanics and Materials 204-208 (October 2012): 1125–30. http://dx.doi.org/10.4028/www.scientific.net/amm.204-208.1125.

Full text
Abstract:
The composite support structure with a continuous-arch internal support is a new type of retaining structure for deep excavations. Taking the elasticity analytical solution for a cylindrical shell under uniform ring load as its starting point, this paper discusses the design and calculation methods for composite support structures with continuous-arch internal supports. Engineering examples of their application in deep excavation projects are analyzed and summarized, and design parameters are provided for further engineering applications.
APA, Harvard, Vancouver, ISO, and other styles
12

Büttner, Christin, Thomas L. Milani, and Freddy Sichting. "Integrating a Potentiometer into a Knee Brace Shows High Potential for Continuous Knee Motion Monitoring." Sensors 21, no. 6 (March 19, 2021): 2150. http://dx.doi.org/10.3390/s21062150.

Full text
Abstract:
Continuous monitoring of knee motion can provide deep insights into patients’ rehabilitation status after knee injury and help to better identify their individual therapeutic needs. Potentiometers have been identified as one possible sensor type for continuous monitoring of knee motion. However, to verify their use in monitoring real-life environments, further research is needed. We aimed to validate a potentiometer-embedded knee brace to measure sagittal knee kinematics during various daily activities, as well as to assess its potential to continuously monitor knee motion. To this end, the sagittal knee motion of 32 healthy subjects was recorded simultaneously by an instrumented knee brace and an optoelectronic reference system during activities of daily living to assess the agreement between these two measurement systems. To evaluate the potentiometer’s behavior during continuous monitoring, knee motion was continuously recorded in a subgroup (n = 9) who wore the knee brace over the course of a day. Our results show a strong agreement between the instrumented knee brace and reference system across all investigated activities as well as stable sensor behavior during continuous tracking. The presented potentiometer-based sensor system demonstrates strong potential as a device for measuring sagittal knee motion during daily activities as well as for continuous knee motion monitoring.
APA, Harvard, Vancouver, ISO, and other styles
13

Tzou, H. S., and Y. Bao. "Dynamics and Control of Adaptive Shells with Curvature Transformations." Shock and Vibration 2, no. 2 (1995): 143–54. http://dx.doi.org/10.1155/1995/157612.

Full text
Abstract:
Adaptive structures with controllable geometries and shapes are rather useful in many engineering applications, such as adaptive wings, variable focus mirrors, adaptive machines, micro-electromechanical systems, etc. Dynamics and feedback control effectiveness of adaptive shells whose curvatures are actively controlled and continuously changed are evaluated. An adaptive piezoelectric laminated cylindrical shell composite with continuous curvature changes is studied, and its natural frequencies and controlled damping ratios are evaluated. The curvature change of the adaptive shell starts from an open shallow shell (30°) and ends with a deep cylindrical shell (360°). Dynamic characteristics and control effectiveness (via the proportional velocity feedback) of this series of shells are investigated and compared at every 30° curvature change. Analytical solutions suggest that the lower modes are sensitive to curvature changes and the higher modes are relatively insensitive.
APA, Harvard, Vancouver, ISO, and other styles
14

Peng, Deping, Zhongwang Gong, Shumin Zhang, and Gaochao Yu. "Two-Roller Continuous Calibration Process by Compression for Submarine Pipelines." Symmetry 13, no. 7 (July 7, 2021): 1224. http://dx.doi.org/10.3390/sym13071224.

Full text
Abstract:
Submarine pipeline is a key part in the development of deep sea and ultra-deep sea oil and gas. In order to reduce the ovality of pipes and improve their compressive strength, a two-roller continuous calibration (TRCC) process by compression is proposed. A springback analysis of compress bending is carried out, and an analytical model is established, which predicts ovality after calibration and provides a theoretical basis for roller shape design and process parameter formulation. Numerical simulation and physical experiments are carried out. The distribution of stress and strain is analyzed. The effects of initial ovality, reduction ratio and initial placement angle on the ovality after calibration are studied. When the reduction ratio is about 1%, the ovality is optimal. The theoretical analysis shows that the ovality after calibration is about 0.03%, and the ovality after calibration by numerical simulation and experiment is less than 0.45%, proving the feasibility of the process.
APA, Harvard, Vancouver, ISO, and other styles
15

Alonso, E. E., S. Sauter, and A. Ramon. "Pile groups under deep expansion: a case history." Canadian Geotechnical Journal 52, no. 8 (August 2015): 1111–21. http://dx.doi.org/10.1139/cgj-2014-0407.

Full text
Abstract:
A viaduct in a high-speed railway line experienced severe heave of its central pillars as a result of deep expansion of an anhydrite rock. Bridge pillars were founded on pile groups that experienced vertical heave displacements as well as lateral displacements and rotations. A semi-analytical solution for the response of a pile group under loading and arbitrarily located soil expansion was developed, integrating fundamental solutions for the elastic half-space. The procedure was first validated and then applied to explain the recorded behaviour of the pile groups. The deep expansion was identified from independent surface heave and continuous extensometer readings. Group rotations were well predicted. Observed tensile fissures at the cap–pile contact were explained by the calculated forces and moments on the piles.
APA, Harvard, Vancouver, ISO, and other styles
16

Somma, Renato, Claudia Troise, Luigi Zeni, Aldo Minardo, Alessandro Fedele, Maurizio Mirabile, and Giuseppe De Natale. "Long-Term Monitoring with Fiber Optics Distributed Temperature Sensing at Campi Flegrei: The Campi Flegrei Deep Drilling Project." Sensors 19, no. 5 (February 27, 2019): 1009. http://dx.doi.org/10.3390/s19051009.

Full text
Abstract:
Monitoring volcanic phenomena is a key question for both volcanological research and civil protection purposes. This is particularly true in densely populated volcanic areas, like the Campi Flegrei caldera, which includes part of the large city of Naples (Italy). Borehole monitoring of volcanoes is the most promising way to improve classical methods of surface monitoring, although not commonly applied yet. Fiber optics technology is the most practical and suitable way to operate in such high temperature and aggressive environmental conditions. In this paper, we describe a fiber optics Distributed Temperature Sensing (DTS) sensor, which has been designed to continuously measure temperature all along a 500 m deep well drilled on the west side of Naples (Bagnoli area), lying in the Campi Flegrei volcanic area. It has then been installed as part of the international ‘Campi Flegrei Deep Drilling Project’, and is continuously operating, giving insight into the time variation of temperature along the whole borehole depth. Such continuous monitoring of temperature can in turn indicate volcanic processes linked to magma dynamics and/or to changes in the hydrothermal system. The developed monitoring system, working at bottom temperatures higher than 100 °C, demonstrates the feasibility and effectiveness of using DTS for borehole volcanic monitoring.
APA, Harvard, Vancouver, ISO, and other styles
17

Mignan, Arnaud, and Marco Broccardo. "Neural Network Applications in Earthquake Prediction (1994–2019): Meta-Analytic and Statistical Insights on Their Limitations." Seismological Research Letters 91, no. 4 (May 20, 2020): 2330–42. http://dx.doi.org/10.1785/0220200021.

Full text
Abstract:
In the last few years, deep learning has solved seemingly intractable problems, boosting the hope to find approximate solutions to problems that now are considered unsolvable. Earthquake prediction, the Grail of Seismology, is, in this context of continuous exciting discoveries, an obvious choice for deep learning exploration. We reviewed the literature of artificial neural network (ANN) applications for earthquake prediction (77 articles, 1994–2019 period) and found two emerging trends: an increasing interest in this domain over time and a complexification of ANN models toward deep learning. Despite the relatively positive results claimed in those studies, we verified that far simpler (and traditional) models seem to offer similar predictive powers, if not better ones. Those include an exponential law for magnitude prediction and a power law (approximated by a logistic regression or one artificial neuron) for aftershock prediction in space. Because of the structured, tabulated nature of earthquake catalogs, and the limited number of features so far considered, simpler and more transparent machine-learning models than ANNs seem preferable at the present stage of research. Those baseline models follow first physical principles and are consistent with the known empirical laws of statistical seismology (e.g., the Gutenberg–Richter law), which are already known to have minimal abilities to predict large earthquakes.
APA, Harvard, Vancouver, ISO, and other styles
18

Anandan, R., Srikanth Bhyrapuneni, K. Kalaivani, and P. Swaminathan. "A survey on big data analytics with deep learning in text using machine learning mechanisms." International Journal of Engineering & Technology 7, no. 2.21 (April 20, 2018): 335. http://dx.doi.org/10.14419/ijet.v7i2.21.12398.

Full text
Abstract:
Big Data Analytics and Deep Learning are two major focal points of data science. Big Data has become important as a substantial number of organizations, both public and private, have been collecting massive amounts of domain-specific information, which can contain useful information about problems such as national intelligence, cyber security, fraud detection, marketing, and medical informatics. Organizations such as Microsoft and Google are analyzing large volumes of data for business analysis and decisions, influencing existing and future technology. Deep Learning algorithms extract high-level, complex abstractions as data representations through a hierarchical learning process. Complex abstractions are learnt at a given level based on relatively simpler abstractions formulated in the preceding level of the hierarchy. A key advantage of Deep Learning is the analysis and learning of massive amounts of unsupervised data, making it a fundamental tool for Big Data Analytics, where raw data is largely unlabelled and uncategorized. In the present study, we explore how Deep Learning can be used for addressing some important problems in Big Data Analytics, including extracting complex patterns from massive volumes of data, semantic indexing, data tagging, fast information retrieval, and simplifying discriminative tasks. Deep learning using Machine Learning (ML) is continuously unleashing its power in a wide range of applications. It has been pushed to the forefront recently, mostly owing to the advent of big data. ML algorithms have never been so promising, yet so challenged, as when faced with big data. Big data enables ML algorithms to uncover more fine-grained patterns and make more timely and accurate predictions than ever before with deep learning; on the other hand, it presents serious challenges to deep learning in ML, such as model scalability and distributed computing. In this paper, we introduce a framework of Deep learning in ML on big data (DLiMLBiD) to guide the discussion of its opportunities and challenges. Different machine learning algorithms are discussed; these algorithms are used for various purposes such as data mining, image processing, and predictive analytics, to name a few. The main advantage of using machine learning is that, once an algorithm learns what to do with data, it can do its work automatically. Finally, we provide a review of different approaches to deep learning in text using machine learning and big data methods.
APA, Harvard, Vancouver, ISO, and other styles
19

Littot, Geneviève C., Robert Mulvaney, Regine Röthlisberger, Roberto Udisti, Eric W. Wolff, Emiliano Castellano, Martine De Angelis, Margareta E. Hansson, Stefan Sommer, and Jørgen P. Steffensen. "Comparison of analytical methods used for measuring major ions in the EPICA Dome C (Antarctica) ice core." Annals of Glaciology 35 (2002): 299–305. http://dx.doi.org/10.3189/172756402781817022.

Full text
Abstract:
In the past, ionic analyses of deep ice cores tended to consist of a few widely spaced measurements that indicated general trends in concentration. The ion-chromatographic methods widely used provide well-validated individual data, but are time-consuming. The development of continuous flow analysis (CFA) methods has allowed very rapid, high-resolution data to be collected in the field for a wide range of ions. In the European Project for Ice Coring in Antarctica (EPICA) deep ice-core drilling at Dome C, many ions have been measured at high resolution, and several have been analyzed by more than one method. The full range of ions has been measured in five different laboratories by ion chromatography (IC), at resolutions of 2.5–10 cm. In the field, CFA was used to measure the ions Na+, Ca2+, nitrate and ammonium. Additionally, a new semi-continuous in situ IC method, fast ion chromatography (FIC), was used to analyze sulphate, nitrate and chloride. Some data are now available to 788 m depth. In this paper we compare the data obtained by the three methods, and show that the rapid methods (CFA and FIC) give an excellent indication of trends in ionic data. Differences between the data from the different methods do occur, and in some cases these are genuine, being due to differences in speciation in the methods. We conclude that the best system for most deep ice-core analysis is a rapid system of CFA and FIC, along with in situ meltwater collection for analysis of other ions by IC, but that material should be kept aside for a regular check on analytical quality and for more detailed analysis of some sections.
APA, Harvard, Vancouver, ISO, and other styles
20

Bouktif, Salah, Abderraouf Cheniki, and Ali Ouni. "Traffic Signal Control Using Hybrid Action Space Deep Reinforcement Learning." Sensors 21, no. 7 (March 25, 2021): 2302. http://dx.doi.org/10.3390/s21072302.

Full text
Abstract:
Recent research works on intelligent traffic signal control (TSC) have been mainly focused on leveraging deep reinforcement learning (DRL) due to its proven capability and performance. DRL-based traffic signal control frameworks belong to either discrete or continuous controls. In discrete control, the DRL agent selects the appropriate traffic light phase from a finite set of phases. In the continuous control approach, on the other hand, the agent decides the appropriate duration for each signal phase within a predetermined sequence of phases. Among the existing works, there are no prior approaches that propose a flexible framework combining both discrete and continuous DRL approaches in controlling traffic signals. Thus, our ultimate objective in this paper is to propose an approach capable of deciding simultaneously the proper phase and its associated duration. Our contribution resides in adapting a hybrid Deep Reinforcement Learning approach that considers discrete and continuous decisions at the same time. Specifically, we customize a Parameterized Deep Q-Network (P-DQN) architecture that permits a hierarchical decision-making process, which first decides the traffic light's next phase and then specifies the associated timing. The evaluation results of our approach using Simulation of Urban MObility (SUMO) show that it outperforms the benchmarks. The proposed framework is able to reduce the average queue length of vehicles and the average travel time by 22.20% and 5.78%, respectively, over the alternative DRL-based TSC systems.
APA, Harvard, Vancouver, ISO, and other styles
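The hybrid action space described above pairs a discrete phase choice with a continuous duration. The PyTorch sketch below illustrates only that output structure (one branch proposing a duration per phase, another scoring each phase given those proposed durations); it is a skeletal stand-in, not the paper's P-DQN, and all sizes and duration bounds are assumptions.

```python
# Skeletal sketch of a hybrid (discrete phase + continuous duration) head.
import torch
import torch.nn as nn

class HybridHead(nn.Module):
    def __init__(self, state_dim, n_phases, min_dur=5.0, max_dur=60.0):
        super().__init__()
        self.min_dur, self.max_dur = min_dur, max_dur
        self.param_net = nn.Sequential(                  # state -> duration per phase
            nn.Linear(state_dim, 64), nn.ReLU(), nn.Linear(64, n_phases), nn.Sigmoid())
        self.q_net = nn.Sequential(                      # (state, durations) -> Q per phase
            nn.Linear(state_dim + n_phases, 64), nn.ReLU(), nn.Linear(64, n_phases))

    def forward(self, state):
        durations = self.min_dur + (self.max_dur - self.min_dur) * self.param_net(state)
        q_values = self.q_net(torch.cat([state, durations], dim=-1))
        return q_values, durations

head = HybridHead(state_dim=12, n_phases=4)
q, dur = head(torch.randn(1, 12))
phase = q.argmax(dim=-1)                                 # discrete decision
print(int(phase), float(dur[0, int(phase)]))             # chosen phase and its duration
```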
21

Li, Yung-Hui, Latifa Nabila Harfiya, Kartika Purwandari, and Yue-Der Lin. "Real-Time Cuffless Continuous Blood Pressure Estimation Using Deep Learning Model." Sensors 20, no. 19 (September 30, 2020): 5606. http://dx.doi.org/10.3390/s20195606.

Full text
Abstract:
Blood pressure monitoring is one avenue to monitor people’s health conditions. Early detection of abnormal blood pressure can help patients to get early treatment and reduce mortality associated with cardiovascular diseases. Therefore, it is very valuable to have a mechanism to perform real-time monitoring for blood pressure changes in patients. In this paper, we propose deep learning regression models using an electrocardiogram (ECG) and photoplethysmogram (PPG) for the real-time estimation of systolic blood pressure (SBP) and diastolic blood pressure (DBP) values. We use a bidirectional layer of long short-term memory (LSTM) as the first layer and add a residual connection inside each of the following layers of the LSTMs. We also perform experiments to compare the performance between the traditional machine learning methods, another existing deep learning model, and the proposed deep learning models using the dataset of Physionet’s multiparameter intelligent monitoring in intensive care II (MIMIC II) as the source of ECG and PPG signals as well as the arterial blood pressure (ABP) signal. The results show that the proposed model outperforms the existing methods and is able to achieve accurate estimation which is promising in order to be applied in clinical practice effectively.
APA, Harvard, Vancouver, ISO, and other styles
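The abstract above describes a bidirectional LSTM first layer with residual connections around the following LSTM layers, regressing SBP and DBP from ECG and PPG. A hedged PyTorch sketch of that general shape; hidden sizes, depth, and window length are assumptions, not the authors' configuration.

```python
# Hedged sketch: bidirectional LSTM, residual LSTM layers, two regression outputs.
import torch
import torch.nn as nn

class ResidualBiLSTMRegressor(nn.Module):
    def __init__(self, in_features=2, hidden=64, n_res_layers=2):
        super().__init__()
        self.bilstm = nn.LSTM(in_features, hidden, batch_first=True, bidirectional=True)
        self.res_layers = nn.ModuleList(
            [nn.LSTM(2 * hidden, 2 * hidden, batch_first=True) for _ in range(n_res_layers)])
        self.out = nn.Linear(2 * hidden, 2)              # [SBP, DBP]

    def forward(self, x):                                # x: (batch, time, [ECG, PPG])
        h, _ = self.bilstm(x)
        for layer in self.res_layers:
            y, _ = layer(h)
            h = h + y                                    # residual connection
        return self.out(h[:, -1, :])                     # estimate from the last step

model = ResidualBiLSTMRegressor()
bp = model(torch.randn(4, 250, 2))                       # 4 windows of 250 samples
print(bp.shape)                                          # torch.Size([4, 2])
```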
22

Eom, Heesang, Dongseok Lee, Seungwoo Han, Yuli Sun Hariyani, Yonggyu Lim, Illsoo Sohn, Kwangsuk Park, and Cheolsoo Park. "End-To-End Deep Learning Architecture for Continuous Blood Pressure Estimation Using Attention Mechanism." Sensors 20, no. 8 (April 20, 2020): 2338. http://dx.doi.org/10.3390/s20082338.

Full text
Abstract:
Blood pressure (BP) is a vital sign that provides fundamental health information regarding patients. Continuous BP monitoring is important for patients with hypertension. Various studies have proposed cuff-less BP monitoring methods using pulse transit time. We propose an end-to-end deep learning architecture using only raw signals without the process of extracting features to improve the BP estimation performance using the attention mechanism. The proposed model consisted of a convolutional neural network, a bidirectional gated recurrent unit, and an attention mechanism. The model was trained by a calibration-based method, using the data of each subject. The performance of the model was compared to models that used each combination of the three signals, and the model with the attention mechanism showed better performance than other state-of-the-art methods, including the conventional linear regression method using pulse transit time (PTT). A total of 15 subjects were recruited, and electrocardiogram, ballistocardiogram, and photoplethysmogram levels were measured. The 95% confidence interval of the reference BP was [86.34, 143.74] and [51.28, 88.74] for systolic BP (SBP) and diastolic BP (DBP), respectively. The R² values were 0.52 and 0.49, and the mean-absolute-error values were 4.06 ± 4.04 and 3.33 ± 3.42 for SBP and DBP, respectively. In addition, the results complied with global standards. The results show the applicability of the proposed model as an analytical metric for BP estimation.
APA, Harvard, Vancouver, ISO, and other styles
23

De Miranda, Luis. "Think Into the Place of the Other." International Journal of Philosophical Practice 7, no. 1 (2021): 89–103. http://dx.doi.org/10.5840/ijpp2021717.

Full text
Abstract:
The present article introduces eight empirically-tested concepts that guide the crealectic practice of philosophical counseling: philosophical health, deep listening, the Creal, the possible, imparadisation, deep orientation, eudynamia, and mental heroism. The crealectic framework is grounded on a process-philosophy axiom of absolute possibility and continuous cosmological and cosmopolitical creation, termed "Creal". The approach also posits that there are three complementary modes of intelligence, namely analytic, dialectic, and crealectic, the balance of which is necessary to live a healthy human life. Beyond what is physically possible and psychologically possible, an underestimated force of social and personal deployment is the philosophical possible. In a context of personal counseling and philosophical care, the crealectic approach endeavors to slowly connect the patient to a field of harmonious and generative potentiality termed eudynamia.
APA, Harvard, Vancouver, ISO, and other styles
24

Xie, Jingyi, Xiaodong Peng, Haijiao Wang, Wenlong Niu, and Xiao Zheng. "UAV Autonomous Tracking and Landing Based on Deep Reinforcement Learning Strategy." Sensors 20, no. 19 (October 1, 2020): 5630. http://dx.doi.org/10.3390/s20195630.

Full text
Abstract:
Unmanned aerial vehicle (UAV) autonomous tracking and landing is playing an increasingly important role in military and civil applications. In particular, machine learning has been successfully introduced to robotics-related tasks. A novel UAV autonomous tracking and landing approach based on a deep reinforcement learning strategy is presented in this paper, with the aim of dealing with the UAV motion control problem in an unpredictable and harsh environment. Instead of building a prior model and inferring the landing actions based on heuristic rules, a model-free method based on a partially observable Markov decision process (POMDP) is proposed. In the POMDP model, the UAV automatically learns the landing maneuver by an end-to-end neural network, which combines the Deep Deterministic Policy Gradients (DDPG) algorithm and heuristic rules. A Modular Open Robots Simulation Engine (MORSE)-based reinforcement learning framework is designed and validated with a continuous UAV tracking and landing task on a randomly moving platform in high sensor noise and intermittent measurements. The simulation results show that when the moving platform is moving in different trajectories, the average landing success rate of the proposed algorithm is about 10% higher than that of the Proportional-Integral-Derivative (PID) method. As an indirect result, a state-of-the-art deep reinforcement learning-based UAV control method is validated, where the UAV can learn the optimal strategy of a continuously autonomous landing and perform properly in a simulation environment.
APA, Harvard, Vancouver, ISO, and other styles
25

Cai, Qingpeng, Ling Pan, and Pingzhong Tang. "Deterministic Value-Policy Gradients." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (April 3, 2020): 3316–23. http://dx.doi.org/10.1609/aaai.v34i04.5732.

Full text
Abstract:
Reinforcement learning algorithms such as the deep deterministic policy gradient algorithm (DDPG) have been widely used in continuous control tasks. However, the model-free DDPG algorithm suffers from high sample complexity. In this paper we consider the deterministic value gradients to improve the sample efficiency of deep reinforcement learning algorithms. Previous works consider deterministic value gradients with the finite horizon, but it is too myopic compared with infinite horizon. We first give a theoretical guarantee of the existence of the value gradients in this infinite setting. Based on this theoretical guarantee, we propose a class of deterministic value gradient algorithms (DVG) with infinite horizon, in which different rollout steps of the analytical gradients computed by the learned model trade off between the variance of the value gradients and the model bias. Furthermore, to better combine the model-based deterministic value gradient estimators with the model-free deterministic policy gradient estimator, we propose the deterministic value-policy gradient (DVPG) algorithm. We finally conduct extensive experiments comparing DVPG with state-of-the-art methods on several standard continuous control benchmarks. Results demonstrate that DVPG substantially outperforms other baselines.
APA, Harvard, Vancouver, ISO, and other styles
26

Jeong, Chi Yoon, and Mooseop Kim. "An Energy-Efficient Method for Human Activity Recognition with Segment-Level Change Detection and Deep Learning." Sensors 19, no. 17 (August 25, 2019): 3688. http://dx.doi.org/10.3390/s19173688.

Full text
Abstract:
Human activity recognition (HAR), which is important in context awareness services, needs to run continuously in daily life, so an energy-efficient method is needed. However, because human activities have a longer cycle than HAR methods, which have analysis cycles of a few seconds, continuous classification of human activities using these methods is computationally and energy inefficient. Therefore, we propose segment-level change detection to identify activity change with very low computational complexity. Additionally, a fully convolutional network (FCN) with a high recognition rate is used to classify the activity only when activity change occurs. We compared the accuracy and energy consumption of the proposed method with that of a method based on a convolutional neural network (CNN) by using a public dataset on different embedded platforms. The experimental results showed that, although the recognition rate of the proposed FCN model is similar to that of the CNN model, the former requires only 10% of the network parameters of the CNN model. In addition, our experiments to measure the energy consumption on the embedded platforms showed that the proposed method uses as much as 6.5 times less energy than the CNN-based method when only HAR energy consumption is compared.
APA, Harvard, Vancouver, ISO, and other styles
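The key energy-saving idea above is to detect activity change cheaply at the segment level and invoke the deep classifier only when a change occurs. A minimal numpy sketch of that gating logic, with hypothetical features and a made-up threshold rather than the paper's detector:

```python
# Minimal sketch: compare cheap segment-level statistics of the accelerometer
# stream and run the (expensive) deep classifier only when consecutive segments
# differ enough.
import numpy as np

def segment_features(seg):
    """Cheap summary of one segment: per-axis mean and standard deviation."""
    return np.concatenate([seg.mean(axis=0), seg.std(axis=0)])

def activity_changed(prev_seg, curr_seg, threshold=0.5):
    return np.linalg.norm(segment_features(prev_seg) - segment_features(curr_seg)) > threshold

rng = np.random.default_rng(0)
walking = rng.normal(0.0, 1.0, size=(100, 3))            # toy 3-axis accelerometer segments
sitting = rng.normal(0.0, 0.1, size=(100, 3))

prev = walking
for label, seg in [("walking", walking), ("walking", walking), ("sitting", sitting)]:
    if activity_changed(prev, seg):
        print(f"{label}: change detected -> run deep classifier on this segment")
    else:
        print(f"{label}: no change -> keep previous label")
    prev = seg
```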
27

Zeng, Junjie, Rusheng Ju, Long Qin, Yue Hu, Quanjun Yin, and Cong Hu. "Navigation in Unknown Dynamic Environments Based on Deep Reinforcement Learning." Sensors 19, no. 18 (September 5, 2019): 3837. http://dx.doi.org/10.3390/s19183837.

Full text
Abstract:
In this paper, we propose a novel Deep Reinforcement Learning (DRL) algorithm which can navigate non-holonomic robots with continuous control in an unknown dynamic environment with moving obstacles. We call the approach MK-A3C (Memory and Knowledge-based Asynchronous Advantage Actor-Critic) for short. As its first component, MK-A3C builds a GRU-based memory neural network to enhance the robot’s capability for temporal reasoning. Robots without it tend to suffer from a lack of rationality in face of incomplete and noisy estimations for complex environments. Additionally, robots with certain memory ability endowed by MK-A3C can avoid local minima traps by estimating the environmental model. Secondly, MK-A3C combines the domain knowledge-based reward function and the transfer learning-based training task architecture, which can solve the non-convergence policies problems caused by sparse reward. These improvements of MK-A3C can efficiently navigate robots in unknown dynamic environments, and satisfy kinetic constraints while handling moving objects. Simulation experiments show that compared with existing methods, MK-A3C can realize successful robotic navigation in unknown and challenging environments by outputting continuous acceleration commands.
APA, Harvard, Vancouver, ISO, and other styles
28

Sadrawi, Muammar, Yin-Tsong Lin, Chien-Hung Lin, Bhekumuzi Mathunjwa, Shou-Zen Fan, Maysam F. Abbod, and Jiann-Shing Shieh. "Genetic Deep Convolutional Autoencoder Applied for Generative Continuous Arterial Blood Pressure via Photoplethysmography." Sensors 20, no. 14 (July 9, 2020): 3829. http://dx.doi.org/10.3390/s20143829.

Full text
Abstract:
Hypertension affects a huge number of people around the world. It also has a great contribution to cardiovascular- and renal-related diseases. This study investigates the ability of a deep convolutional autoencoder (DCAE) to generate continuous arterial blood pressure (ABP) by only utilizing photoplethysmography (PPG). A total of 18 patients are utilized. LeNet-5- and U-Net-based DCAEs, respectively abbreviated LDCAE and UDCAE, are compared to the MP60 IntelliVue Patient Monitor, as the gold standard. Moreover, in order to investigate the data generalization, the cross-validation (CV) method is conducted. The results show that the UDCAE provides superior results in producing the systolic blood pressure (SBP) estimation. Meanwhile, the LDCAE gives a slightly better result for the diastolic blood pressure (DBP) prediction. Finally, the genetic algorithm-based optimization deep convolutional autoencoder (GDCAE) is further administered to optimize the ensemble of the CV models. The results reveal that the GDCAE is superior to either the LDCAE or UDCAE. In conclusion, this study exhibits that systolic blood pressure (SBP) and diastolic blood pressure (DBP) can also be accurately achieved by only utilizing a single PPG signal.
APA, Harvard, Vancouver, ISO, and other styles
29

Gyürkés, Martin, Lajos Madarász, Ákos Köte, András Domokos, Dániel Mészáros, Áron Kristóf Beke, Brigitta Nagy, et al. "Process Design of Continuous Powder Blending Using Residence Time Distribution and Feeding Models." Pharmaceutics 12, no. 11 (November 20, 2020): 1119. http://dx.doi.org/10.3390/pharmaceutics12111119.

Full text
Abstract:
The present paper reports a thorough continuous powder blending process design of acetylsalicylic acid (ASA) and microcrystalline cellulose (MCC) based on the Process Analytical Technology (PAT) guideline. A NIR-based method was applied using multivariate data analysis to achieve in-line process monitoring. The process dynamics were described with residence time distribution (RTD) models to achieve deep process understanding. The RTD was determined using the active pharmaceutical ingredient (API) as a tracer with multiple designs of experiment (DoE) studies to determine the effect of critical process parameters (CPPs) on the process dynamics. To achieve quality control through material diversion from feeding data, soft sensor-based process control tools were designed using the RTD model. The operation block model of the system was designed to select feasible experimental setups using the RTD model, and feeder characterizations as digital twins, therefore visualizing the output of theoretical setups. The concept significantly reduces the material and instrumental costs of process design and implementation.
APA, Harvard, Vancouver, ISO, and other styles
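The process dynamics above are described by a residence time distribution (RTD): the blender outlet is the feeder signal convolved with E(t). A small numpy sketch of that convolution model, assuming a tanks-in-series RTD with made-up parameters; this is not the authors' fitted model.

```python
# Hedged sketch: outlet concentration = inlet concentration convolved with an
# assumed tanks-in-series residence time distribution E(t).
import math
import numpy as np

dt = 1.0                                        # s, sampling interval
t = np.arange(0, 300, dt)
n_tanks, tau = 5, 60.0                          # assumed RTD parameters
tau_i = tau / n_tanks
E = t**(n_tanks - 1) * np.exp(-t / tau_i) / (math.factorial(n_tanks - 1) * tau_i**n_tanks)
E = E / (E.sum() * dt)                          # normalise so that the integral of E is 1

# Inlet API concentration from the feeder: a step change plus a short spike.
c_in = np.full_like(t, 0.10)
c_in[100:] = 0.12                               # step in feed ratio
c_in[150:155] += 0.05                           # transient feeder disturbance

c_out = np.convolve(c_in, E)[:len(t)] * dt      # predicted outlet concentration
print(f"outlet at t=200 s: {c_out[200]:.4f}")   # the disturbance is smoothed by the RTD
```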
30

Zhang, Zhongfeng, Minjae Lee, and Seungwon Choi. "Deep-Learning-Based Wi-Fi Indoor Positioning System Using Continuous CSI of Trajectories." Sensors 21, no. 17 (August 27, 2021): 5776. http://dx.doi.org/10.3390/s21175776.

Full text
Abstract:
In a Wi-Fi indoor positioning system (IPS), the performance of the IPS depends on the channel state information (CSI), which is often limited due to the multipath fading effect, especially in indoor environments involving multiple non-line-of-sight propagation paths. In this paper, we propose a novel IPS utilizing trajectory CSI observed from predetermined trajectories instead of the CSI collected at each stationary location; thus, the proposed method enables all the CSI along each route to be continuously encountered in the observation. Further, by using a generative adversarial network (GAN), which helps enlarge the training dataset, the cost of trajectory CSI collection can be significantly reduced. To fully exploit the trajectory CSI’s spatial and temporal information, the proposed IPS employs a deep learning network of a one-dimensional convolutional neural network–long short-term memory (1DCNN-LSTM). The proposed IPS was hardware-implemented, where digital signal processors and a universal software radio peripheral were used as a modem and radio frequency transceiver, respectively, for both access point and mobile device of Wi-Fi. We verified that the proposed IPS based on the trajectory CSI far outperforms the state-of-the-art IPS based on the CSI collected from stationary locations through extensive experimental tests and computer simulations.
APA, Harvard, Vancouver, ISO, and other styles
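The proposed IPS above uses a 1DCNN-LSTM over trajectory CSI. The PyTorch sketch below shows the general shape of such a pipeline (per-frame 1D convolutions over subcarriers feeding an LSTM over the trajectory); the subcarrier count, channel sizes, and number of trajectory classes are assumptions, not the authors' network.

```python
# Illustrative 1D-CNN + LSTM pipeline: spatial features per CSI frame, then
# temporal modelling along the trajectory.
import torch
import torch.nn as nn

class CNNLSTMLocaliser(nn.Module):
    def __init__(self, n_subcarriers=64, n_trajectories=8):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(8),
        )
        self.lstm = nn.LSTM(32 * 8, 64, batch_first=True)
        self.head = nn.Linear(64, n_trajectories)

    def forward(self, csi):                              # csi: (batch, time, subcarriers)
        b, t, s = csi.shape
        f = self.cnn(csi.reshape(b * t, 1, s))           # per-frame spatial features
        f = f.reshape(b, t, -1)
        h, _ = self.lstm(f)                              # temporal modelling
        return self.head(h[:, -1, :])

model = CNNLSTMLocaliser()
print(model(torch.randn(4, 30, 64)).shape)               # torch.Size([4, 8])
```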
31

Villalba-Díez, Javier, Martin Molina, Joaquín Ordieres-Meré, Shengjing Sun, Daniel Schmidt, and Wanja Wellbrock. "Geometric Deep Lean Learning: Deep Learning in Industry 4.0 Cyber–Physical Complex Networks." Sensors 20, no. 3 (January 30, 2020): 763. http://dx.doi.org/10.3390/s20030763.

Full text
Abstract:
In the near future, value streams associated with Industry 4.0 will be formed by interconnected cyber–physical elements forming complex networks that generate huge amounts of data in real time. The success or failure of industry leaders interested in the continuous improvement of lean management systems in this context is determined by their ability to recognize behavioral patterns in these big data structured within non-Euclidean domains, such as these dynamic sociotechnical complex networks. We assume that artificial intelligence in general and deep learning in particular may be able to help find useful patterns of behavior in 4.0 industrial environments in the lean management of cyber–physical systems. However, although these technologies have meant a paradigm shift in the resolution of complex problems in the past, the traditional methods of deep learning, focused on image or video analysis, both with regular structures, are not able to help in this specific field. This is why this work focuses on proposing geometric deep lean learning, a mathematical methodology that describes deep-lean-learning operations such as convolution and pooling on cyber–physical Industry 4.0 graphs. Geometric deep lean learning is expected to positively support sustainable organizational growth because customers and suppliers ought to be able to reach new levels of transparency and traceability on the quality and efficiency of processes that generate new business for both, hence generating new products, services, and cooperation opportunities in a cyber–physical environment.
APA, Harvard, Vancouver, ISO, and other styles
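The abstract above refers to convolution operations on non-Euclidean Industry 4.0 graphs. As a minimal illustration of what one graph-convolution step looks like (a generic normalized-adjacency propagation rule, not the authors' geometric deep lean learning operators), with a made-up four-node graph:

```python
# Minimal numpy sketch of one graph-convolution step:
# H' = ReLU(D^{-1/2} (A + I) D^{-1/2} H W), with invented graph and weights.
import numpy as np

A = np.array([[0, 1, 0, 0],                 # adjacency of 4 cyber-physical nodes
              [1, 0, 1, 1],
              [0, 1, 0, 1],
              [0, 1, 1, 0]], dtype=float)
H = np.random.default_rng(1).normal(size=(4, 3))   # 3 sensor features per node
W = np.random.default_rng(2).normal(size=(3, 2))   # "learnable" weights (random here)

A_hat = A + np.eye(4)                        # add self-loops
D_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
H_next = np.maximum(0.0, D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W)
print(H_next.shape)                          # (4, 2): new embedding per node
```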
32

Rodríguez-Ramos, Ruth, Álvaro Santana-Mayor, Bárbara Socas-Rodríguez, and Miguel Ángel Rodríguez-Delgado. "Recent Applications of Deep Eutectic Solvents in Environmental Analysis." Applied Sciences 11, no. 11 (May 23, 2021): 4779. http://dx.doi.org/10.3390/app11114779.

Full text
Abstract:
The incessant generation of toxic waste and the growing concern over the environment have led the scientific community to delve into the search for more sustainable systems. In this regard, the application of deep eutectic solvents (DESs) has become one of the main strategies in green chemistry. These solvents have emerged as a promising alternative to conventional toxic solvents and even to the well-known ionic liquids. Their unique properties, component availability, and easy preparation, among others, have led to a new trend within the scientific community and industry, based on the use of these up-and-coming solvents not only in science but also in quotidian life. Among the areas that have benefited from the advantages of DESs is analytical chemistry, in which they have been largely used for sample preparation, including the extraction and determination of organic and inorganic compounds from environmental samples. The considerable number of applications developed in the last year in this field and the increasing generation of new data necessitate the continuous updating of the literature. This review aims to compile the most relevant applications of DESs in environmental analysis and critically discuss them to provide a global vision of the advantages and drawbacks/limitations of these neoteric solvents in the area of environmental analysis.
APA, Harvard, Vancouver, ISO, and other styles
33

Li, Sheng, Xiang Zuo, Zhengying Li, and Honghai Wang. "Applying Deep Learning to Continuous Bridge Deflection Detected by Fiber Optic Gyroscope for Damage Detection." Sensors 20, no. 3 (February 8, 2020): 911. http://dx.doi.org/10.3390/s20030911.

Full text
Abstract:
Improving the accuracy and efficiency of bridge structure damage detection is one of the main challenges in engineering practice. This paper aims to address this issue by monitoring the continuous bridge deflection based on the fiber optic gyroscope and applying the deep-learning algorithm to perform structural damage detection. With a scale-down bridge model, three types of damage scenarios and an intact benchmark were simulated. A supervised learning model based on the deep convolutional neural networks was proposed. After the training process under ten-fold cross-validation, the model accuracy can reach 96.9% and significantly outperform that of other four traditional machine learning methods (random forest, support vector machine, k-nearest neighbor, and decision tree) used for comparison. Further, the proposed model illustrated its decent ability in distinguishing damage from structurally symmetrical locations.
APA, Harvard, Vancouver, ISO, and other styles
34

Phan, Bao Chau, Ying-Chih Lai, and Chin E. Lin. "A Deep Reinforcement Learning-Based MPPT Control for PV Systems under Partial Shading Condition." Sensors 20, no. 11 (May 27, 2020): 3039. http://dx.doi.org/10.3390/s20113039.

Full text
Abstract:
On the issues of global environment protection, the renewable energy systems have been widely considered. The photovoltaic (PV) system converts solar power into electricity and significantly reduces the consumption of fossil fuels from environment pollution. Besides introducing new materials for the solar cells to improve the energy conversion efficiency, the maximum power point tracking (MPPT) algorithms have been developed to ensure the efficient operation of PV systems at the maximum power point (MPP) under various weather conditions. The integration of reinforcement learning and deep learning, named deep reinforcement learning (DRL), is proposed in this paper as a future tool to deal with the optimization control problems. Following the success of deep reinforcement learning (DRL) in several fields, the deep Q network (DQN) and deep deterministic policy gradient (DDPG) are proposed to harvest the MPP in PV systems, especially under a partial shading condition (PSC). Different from the reinforcement learning (RL)-based method, which is only operated with discrete state and action spaces, the methods adopted in this paper are used to deal with continuous state spaces. In this study, DQN solves the problem with discrete action spaces, while DDPG handles the continuous action spaces. The proposed methods are simulated in MATLAB/Simulink for feasibility analysis. Further tests under various input conditions with comparisons to the classical Perturb and observe (P&O) MPPT method are carried out for validation. Based on the simulation results in this study, the performance of the proposed methods is outstanding and efficient, showing its potential for further applications.
APA, Harvard, Vancouver, ISO, and other styles
35

Ogungbenle, G. M., and J. S. Adeyele. "Analytical model construction of optimal mortality intensities using polynomial estimation." Nigerian Journal of Technology 39, no. 1 (April 3, 2020): 25–35. http://dx.doi.org/10.4314/njt.v39i1.3.

Full text
Abstract:
The aim of this paper is to describe a non-parametric technique for estimating the instantaneous force of mortality, which serves as the underlying concept in modeling the future lifetime. It relies heavily on the analytic properties of the life table survival function l_{x+t}. The specific objective of the study is to estimate the force of mortality using the Taylor series expansion to a desired degree of accuracy. The estimation of continuous death probabilities has aroused keen research interest in the mortality literature on life assurance practice. However, the estimation of μ_x involves a model that depends on a thorough knowledge of differencing and first-order differential equations. The suggested method of approximation, with limiting optimal properties, is Newton's forward difference model. Initiating Newton's process is an important step in the theoretical work, producing parallel results of great impact in the study of mortality functions. The paper starts from the assumption that the l_x function follows a polynomial of least degree, which leads to a simple model that avoids points of singularity. Keywords: polynomials, contingency, analyticity, basis, differential, mortality, modeling
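As a worked illustration of the quantity involved (a standard construction, not the paper's exact coefficients), the force of mortality is the negative logarithmic derivative of the survival function, and expanding the derivative operator in Newton forward differences of l_x gives a polynomial-based estimate:

    \mu_x = -\frac{1}{l_x}\frac{d l_x}{dx} = -\frac{d}{dx}\ln l_x,
    \qquad
    \mu_x \approx -\frac{1}{l_x}\left(\Delta l_x - \tfrac{1}{2}\Delta^2 l_x + \tfrac{1}{3}\Delta^3 l_x - \cdots\right),
    \qquad \Delta l_x = l_{x+1} - l_x .

Truncating the series after a few differences yields the practical estimates that apply when l_x is tabulated at integer ages.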
APA, Harvard, Vancouver, ISO, and other styles
36

Choi, Kanghae, Hokyoung Ryu, and Jieun Kim. "Deep Residual Networks for User Authentication via Hand-Object Manipulations." Sensors 21, no. 9 (April 23, 2021): 2981. http://dx.doi.org/10.3390/s21092981.

Full text
Abstract:
With the ubiquity of wearable devices, various behavioural biometrics have been exploited for continuous user authentication during daily activities. However, biometric authentication using complex hand behaviours has not been sufficiently investigated. This paper presents an implicit and continuous user authentication model based on hand-object manipulation behaviour, using a finger- and hand-mounted inertial measurement unit (IMU) system and state-of-the-art deep learning models. We employed three convolutional neural network (CNN)-based deep residual networks (ResNets) with multiple depths (i.e., 50, 101, and 152 layers) and two recurrent neural network (RNN)-based long short-term memory (LSTM) models: simple and bidirectional. To increase ecological validity, data collection of hand-object manipulation behaviours covered three different age groups and both simple and complex daily object manipulation scenarios. As a result, both the ResNet and LSTM models acceptably identified users’ hand behaviour patterns, with a best average accuracy of 96.31% and F1-score of 88.08%. Specifically, in the simple hand behaviour authentication scenarios, deeper residual networks tended to perform better without showing the conventional degradation problem (ResNet-152 > ResNet-101 > ResNet-50). In the complex hand behaviour scenario, the ResNet models outperformed the LSTMs at user authentication. The 152-layer ResNet and the bidirectional LSTM showed average false rejection rates of 8.34% and 16.67% and equal error rates of 1.62% and 9.95%, respectively.
APA, Harvard, Vancouver, ISO, and other styles
37

Gao, Qinghua, Shuo Jiang, and Peter B. Shull. "Simultaneous Hand Gesture Classification and Finger Angle Estimation via a Novel Dual-Output Deep Learning Model." Sensors 20, no. 10 (May 24, 2020): 2972. http://dx.doi.org/10.3390/s20102972.

Full text
Abstract:
Hand gesture classification and finger angle estimation are both critical for intuitive human–computer interaction. However, most approaches study them in isolation. We thus propose a dual-output deep learning model to enable simultaneous hand gesture classification and finger angle estimation. Data augmentation and deep learning were used to detect spatial-temporal features via a wristband with ten modified barometric sensors. Ten subjects performed experimental testing by flexing/extending each finger at the metacarpophalangeal joint while the proposed model was used to classify each hand gesture and estimate continuous finger angles simultaneously. A data glove was worn to record ground-truth finger angles. Overall hand gesture classification accuracy was 97.5% and finger angle estimation R² was 0.922, both of which were significantly higher than those of existing shallow learning approaches used in isolation. The proposed method could be used in applications related to human–computer interaction and in control environments with both discrete and continuous variables.
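The dual-output idea, a shared encoder feeding one classification head and one regression head, can be sketched as follows; the input size (ten barometric channels over an assumed 100-sample window), the number of gestures, and the head widths are placeholders, not the authors' architecture.

    import torch
    import torch.nn as nn

    class DualOutputNet(nn.Module):
        """Shared encoder with two heads: gesture logits and continuous finger angles."""
        def __init__(self, n_gestures=10, n_fingers=5):
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Conv1d(10, 32, kernel_size=5, padding=2), nn.ReLU(),
                nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            )
            self.gesture_head = nn.Linear(32, n_gestures)  # classification head
            self.angle_head = nn.Linear(32, n_fingers)     # regression head

        def forward(self, x):  # x: (batch, 10 sensor channels, 100 time steps)
            z = self.encoder(x)
            return self.gesture_head(z), self.angle_head(z)

    # Joint training would typically combine the two losses, for example:
    # loss = cross_entropy(gesture_logits, labels) + lam * mse(angle_pred, angle_true)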
APA, Harvard, Vancouver, ISO, and other styles
38

Guo, Siyu, Xiuguo Zhang, Yisong Zheng, and Yiquan Du. "An Autonomous Path Planning Model for Unmanned Ships Based on Deep Reinforcement Learning." Sensors 20, no. 2 (January 11, 2020): 426. http://dx.doi.org/10.3390/s20020426.

Full text
Abstract:
Deep reinforcement learning (DRL) has excellent performance in continuous control problems and is widely used in path planning and other fields. An autonomous path planning model based on DRL is proposed to realize intelligent path planning of unmanned ships in unknown environments. The model utilizes the deep deterministic policy gradient (DDPG) algorithm: through continuous interaction with the environment and the use of historical experience data, the agent learns the optimal action strategy in a simulation environment. The navigation rules and the ship’s encounter situation are transformed into navigation-restricted areas so that the planned path is safe and the model remains valid and accurate. Ship data provided by the automatic identification system (AIS) are used to train the path planning model. Subsequently, an improved DRL is obtained by combining DDPG with the artificial potential field. Finally, the path planning model is integrated into an electronic chart platform for experiments. Comparative experiments show that the improved model can achieve autonomous path planning with good convergence speed and stability.
APA, Harvard, Vancouver, ISO, and other styles
39

Sundareswaran, Anveshrithaa, and Lavanya K. "Real-Time Vehicle Traffic Prediction in Apache Spark Using Ensemble Learning for Deep Neural Networks." International Journal of Intelligent Information Technologies 16, no. 4 (October 2020): 19–36. http://dx.doi.org/10.4018/ijiit.2020100102.

Full text
Abstract:
Escalating traffic congestion in large and rapidly evolving metropolitan areas all around the world is one of the inescapable problems in our daily lives. In light of this situation, traffic monitoring and analytics are becoming the need of the hour. Real-time traffic analysis requires processing data streams that are generated continuously in order to gain quick insights. The challenge of analyzing streaming data for real-time prediction can be overcome by exploiting deep learning techniques. Taking this as motivation, this work integrates big data technologies and deep learning techniques to develop a real-time data stream processing model for vehicle traffic forecasting using an ensemble learning approach. Real-time traffic data from an API are streamed through a distributed streaming platform, Kafka, into Apache Spark, where the data are processed and the traffic flow is predicted by a neural network ensemble model. This reduces travel time, cost, and energy through efficient decision making, thus having a positive impact on the environment.
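A minimal ingestion skeleton for this kind of pipeline, using Spark Structured Streaming's Kafka source, might look like the following; the broker address, topic name, and record schema are placeholders, the trained ensemble model is omitted, and the spark-sql-kafka connector package is assumed to be on the classpath.

    from pyspark.sql import SparkSession
    from pyspark.sql.functions import from_json, col
    from pyspark.sql.types import StructType, StructField, StringType, DoubleType, TimestampType

    spark = SparkSession.builder.appName("traffic-stream").getOrCreate()

    schema = StructType([                      # assumed record layout
        StructField("segment_id", StringType()),
        StructField("timestamp", TimestampType()),
        StructField("speed", DoubleType()),
        StructField("flow", DoubleType()),
    ])

    raw = (spark.readStream.format("kafka")
           .option("kafka.bootstrap.servers", "localhost:9092")  # placeholder broker
           .option("subscribe", "traffic")                       # placeholder topic
           .load())

    records = (raw.select(from_json(col("value").cast("string"), schema).alias("r"))
                  .select("r.*"))

    # The trained ensemble would be applied per micro-batch (e.g. via foreachBatch);
    # here the parsed stream is simply written to the console for inspection.
    query = records.writeStream.format("console").outputMode("append").start()
    query.awaitTermination()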
APA, Harvard, Vancouver, ISO, and other styles
40

Peppes, Nikolaos, Theodoros Alexakis, Evgenia Adamopoulou, and Konstantinos Demestichas. "Driving Behaviour Analysis Using Machine and Deep Learning Methods for Continuous Streams of Vehicular Data." Sensors 21, no. 14 (July 9, 2021): 4704. http://dx.doi.org/10.3390/s21144704.

Full text
Abstract:
Over the last few decades, vehicles have been equipped with a plethora of sensors which can provide useful measurements and diagnostics for both the vehicle’s condition and the driver’s behaviour. Furthermore, the rapidly increasing transportation needs of people and goods, together with the evolution of Information and Communication Technologies (ICT), have pushed the transportation domain towards a new, more intelligent and efficient era. The reduction of CO2 emissions and the minimization of the environmental footprint are, undeniably, of utmost importance for the protection of the environment. In this light, it is widely accepted that driving behaviour is directly associated with a vehicle’s fuel consumption and gas emissions. Thus, given that vehicles are now equipped with sensors that collect a variety of data, such as speed, acceleration, fuel consumption, and direction, it is more feasible than ever to put forward solutions that aim not only to monitor but also to improve drivers’ behaviour from an environmental point of view. The approach presented in this paper describes a holistic integrated platform which combines well-known machine and deep learning algorithms with open-source tools in order to gather, store, process, analyze, and correlate different data flows originating from vehicles. In particular, data streamed from different vehicles are processed and analyzed with clustering techniques to classify the driver’s behaviour as eco-friendly or not, followed by a comparative analysis of supervised machine and deep learning algorithms on the resulting labelled dataset.
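The two-stage workflow described here (unsupervised labelling followed by supervised classification) can be sketched roughly as below; the per-trip features, the two clusters, and the random forest classifier are illustrative assumptions, not the platform's actual components.

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    # Assumed per-trip features: mean speed, acceleration variability, fuel rate (synthetic data).
    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 3))

    # Stage 1: unsupervised labelling of driving style (e.g. eco-friendly vs. not).
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

    # Stage 2: supervised model trained on the cluster-derived labels.
    X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.2, random_state=0)
    clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
    print("held-out accuracy:", clf.score(X_te, y_te))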
APA, Harvard, Vancouver, ISO, and other styles
41

Weng, Haoyang, Jingen Deng, Chunfang Zhang, Qiang Tan, Zhuo Chen, and Wei Liu. "Evaluation Method of Production Pressure Differential in Deep Carbonate Reservoirs: A Case Study in Tarim Basin, Northwest China." Energies 14, no. 9 (May 10, 2021): 2721. http://dx.doi.org/10.3390/en14092721.

Full text
Abstract:
Deep and even ultra-deep petroleum resources play an increasingly important role as oil and gas exploration and development advance worldwide. In China, the deep carbonate reservoirs in the Tarim Basin are regarded as key development areas due to their huge reserves. However, because of unreasonable designs of the production pressure differential, some production wells have suffered severe borehole collapse and tubing blockage. Therefore, the main purpose of this paper is to establish a more practical method for predicting the critical production pressure differential. The commonly used analytical methods with different failure criteria for predicting the production pressure differential are summarized, and their advantages and disadvantages are analyzed. A new numerical model is established based on finite element theory to make the prediction of the production pressure differential more accurate. Both analytical and numerical methods were applied to evaluate the production pressure differential of deep carbonate reservoirs in the Tarim Basin, and the results were discussed and compared with field data. In addition, a series of laboratory tests, including porosity and permeability measurements, electron microscope scanning, XRD mineral analysis, and uniaxial and triaxial compressive strength tests, were carried out on carbonate cores collected from formations deeper than 7000 m to obtain input parameters for the simulation, such as the rock properties. The experimental results showed that the carbonate rocks exhibit remarkable brittleness and post-peak strain softening. The calculations revealed that the Mogi-Coulomb criterion is slightly conservative but more suitable than the other criteria for evaluating the pressure differential. Furthermore, the field data confirmed that the finite element numerical method can not only reveal the instability mechanism of the wellbore but also predict the critical production pressure differential accurately. Nevertheless, on-site operators sometimes require a more convenient way, such as an analytical method, to determine the pressure differential, even though the numerical evaluation is more accurate. Therefore, the discussion in this paper provides a basis for operators to determine the production pressure differential flexibly.
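For reference, the Mogi-Coulomb criterion mentioned in this abstract is usually written in terms of the octahedral shear stress and the mean of the maximum and minimum principal stresses (a standard formulation; the material constants a and b are fitted from triaxial test data):

    \tau_{\mathrm{oct}} = a + b\,\sigma_{m,2},
    \qquad
    \tau_{\mathrm{oct}} = \frac{1}{3}\sqrt{(\sigma_1-\sigma_2)^2 + (\sigma_2-\sigma_3)^2 + (\sigma_3-\sigma_1)^2},
    \qquad
    \sigma_{m,2} = \frac{\sigma_1+\sigma_3}{2}.

In a drawdown evaluation of this kind, the stresses at the wellbore wall computed for a candidate pressure differential would be checked against this criterion.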
APA, Harvard, Vancouver, ISO, and other styles
42

Gorbunov, Sergey, Ernst Hellbär, Gian Michele Innocenti, Marian Ivanov, Maja Kabus, Matthias Kleiner, Haris Riaz, et al. "Deep neural network techniques in the calibration of space-charge distortion fluctuations for the ALICE TPC." EPJ Web of Conferences 251 (2021): 03020. http://dx.doi.org/10.1051/epjconf/202125103020.

Full text
Abstract:
The Time Projection Chamber (TPC) of the ALICE experiment at the CERN LHC was upgraded for Run 3 and Run 4. Readout chambers based on Gas Electron Multiplier (GEM) technology and a new readout scheme allow continuous data taking at the highest interaction rates expected in Pb-Pb collisions. Due to the absence of a gating grid system, a significant amount of ions created in the multiplication region is expected to enter the TPC drift volume and distort the uniform electric field that guides the electrons to the readout pads. Analytical calculations were considered to correct for space-charge distortion fluctuations but they proved to be too slow for the calibration and reconstruction workflow in Run 3. In this paper, we discuss a novel strategy developed by the ALICE Collaboration to perform distortion-fluctuation corrections with machine learning and convolutional neural network techniques. The results of preliminary studies are shown and the prospects for further development and optimization are also discussed.
APA, Harvard, Vancouver, ISO, and other styles
43

Fan, Xiaomao, Hailiang Wang, Yang Zhao, Ye Li, and Kwok Leung Tsui. "An Adaptive Weight Learning-Based Multitask Deep Network for Continuous Blood Pressure Estimation Using Electrocardiogram Signals." Sensors 21, no. 5 (February 25, 2021): 1595. http://dx.doi.org/10.3390/s21051595.

Full text
Abstract:
Estimating blood pressure by combined analysis of electrocardiogram and photoplethysmography signals has attracted growing interest for continuous monitoring of patients’ health conditions. However, most wearable/portable monitoring devices acquire only one kind of physiological signal due to considerations of energy cost, device weight, size, etc. In this study, a novel adaptive weight learning-based multitask deep learning framework based on single-lead electrocardiogram signals is proposed for continuous blood pressure estimation. Specifically, the proposed method utilizes a 2-layer bidirectional long short-term memory network as the sharing layer, followed by three identical 2-layer fully connected networks for task-specific blood pressure estimation. To learn the importance of the task-specific losses automatically, an adaptive weight learning scheme based on the trend of the validation loss is proposed. Extensive experiments on the Physionet Multiparameter Intelligent Monitoring in Intensive Care (MIMIC) II waveform database demonstrate that the proposed method, using electrocardiogram signals only, achieves estimation errors of 0.12±10.83 mmHg, 0.13±5.90 mmHg, and 0.08±6.47 mmHg for systolic blood pressure, diastolic blood pressure, and mean arterial pressure, respectively. It meets the requirements of the British Hypertension Society standard and the US Association for the Advancement of Medical Instrumentation standard with a considerable margin. Combined with a wearable/portable electrocardiogram device, the proposed model can be deployed in a healthcare system to provide long-term continuous blood pressure monitoring, which would help to reduce the incidence of malignant complications of hypertension.
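A rough sketch of the shared-plus-heads structure described in this abstract is given below; the input feature size, hidden width, and head layout are assumptions, and the adaptive weighting rule based on validation-loss trends is only hinted at in a comment rather than implemented.

    import torch
    import torch.nn as nn

    class MultitaskBP(nn.Module):
        """Shared 2-layer bidirectional LSTM with three task heads (SBP, DBP, MAP)."""
        def __init__(self, in_dim=1, hidden=64):
            super().__init__()
            self.shared = nn.LSTM(in_dim, hidden, num_layers=2,
                                  batch_first=True, bidirectional=True)
            self.heads = nn.ModuleList(
                [nn.Sequential(nn.Linear(2 * hidden, 64), nn.ReLU(), nn.Linear(64, 1))
                 for _ in range(3)])

        def forward(self, x):          # x: (batch, time, in_dim) ECG windows
            out, _ = self.shared(x)
            last = out[:, -1, :]       # final time step of the bidirectional output
            return [head(last) for head in self.heads]

    # Training would combine the three regression losses with weights w_k that are
    # re-scaled according to the trend of each task's validation loss, e.g.
    # total_loss = sum(w[k] * mse(pred[k], target[k]) for k in range(3))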
APA, Harvard, Vancouver, ISO, and other styles
44

Doi, Ryoichi. "Maximizing the Accuracy of Continuous Quantification Measures Using Discrete PackTest Products with Deep Learning and Pseudocolor Imaging." Journal of Analytical Methods in Chemistry 2019 (April 9, 2019): 1–12. http://dx.doi.org/10.1155/2019/1685382.

Full text
Abstract:
Using the standard colors provided in the instructions, PackTest products can quickly approximate the chemical characteristics of liquid samples. The combination of PackTest products and deep learning was examined for its accuracy and precision in quantifying chemical oxygen demand, ammonium ion, and phosphate ion using a pseudocolor imaging method. Each PackTest product underwent reactions with standard solutions. The generated color was read with a scanner. From the color image, ten grayscale images were generated, representing the intensity values of red, green, blue, cyan, magenta, yellow, key black, and L∗, and the values of a∗ and b∗. Using the grayscale images representing the red, green, and blue intensity values, 73 further grayscale images were generated. The grayscale intensity values were used to prepare datasets for the ten and 83 (=10 + 73) images. For both datasets, chemical oxygen demand quantification was successful, with normalized mean absolute errors of less than 0.4% and coefficients of determination greater than 0.9996. However, the quantification of ammonium and phosphate ions commonly produced false positive results for the standard solution that contained no ammonium/phosphate ion. For ammonium ion, multiple regression markedly improved the accuracy of the pseudocolor method. Phosphate ion quantification was also improved by avoiding the use of an estimated value for the reference solution that contained no phosphate ion. Details of the measurements and future perspectives are discussed.
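The first decomposition step, turning one scanned colour patch into the ten channel images listed above, can be sketched as follows; the file name is a placeholder, scikit-image is assumed for the L∗a∗b∗ conversion, and the CMYK channels use the usual conversion formulas.

    import numpy as np
    from skimage import io, color

    rgb = io.imread("packtest_scan.png")[..., :3] / 255.0   # placeholder file name

    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    k = 1.0 - rgb.max(axis=-1)                              # key black
    denom = np.where(k < 1.0, 1.0 - k, 1.0)                 # avoid division by zero for pure black
    c, m, y = (1.0 - r - k) / denom, (1.0 - g - k) / denom, (1.0 - b - k) / denom

    lab = color.rgb2lab(rgb)                                # L*, a*, b* channels
    channels = {"R": r, "G": g, "B": b, "C": c, "M": m, "Y": y, "K": k,
                "L*": lab[..., 0], "a*": lab[..., 1], "b*": lab[..., 2]}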
APA, Harvard, Vancouver, ISO, and other styles
45

Li, Jialin, Xueyi Li, David He, and Yongzhi Qu. "A Novel Method for Early Gear Pitting Fault Diagnosis Using Stacked SAE and GBRBM." Sensors 19, no. 4 (February 13, 2019): 758. http://dx.doi.org/10.3390/s19040758.

Full text
Abstract:
Research on data-driven fault diagnosis methods has received much attention in recent years. The deep belief network (DBN) is a commonly used deep learning method for fault diagnosis. However, when DBNs have been used to diagnose gear pitting faults with continuous time-domain vibration signals as direct inputs, the diagnosis results have been poor, so most researchers have instead extracted features from the time-domain vibration signals as inputs to the DBN. It is nevertheless desirable to use raw vibration signals as direct inputs and still achieve good fault diagnosis results. This paper therefore proposes a novel method that stacks a sparse autoencoder (SAE) and a Gauss-Binary restricted Boltzmann machine (GBRBM) for early gear pitting fault diagnosis with raw vibration signals as direct inputs. The SAE layer is used to compress the raw vibration data and the GBRBM layer is used to effectively process continuous time-domain vibration signals. Vibration signals of seven early gear pitting faults collected from a gear test rig are used to validate the proposed method. The validation results show that the proposed method maintains good diagnosis performance under different working conditions and gives higher diagnosis accuracy compared to other traditional methods.
APA, Harvard, Vancouver, ISO, and other styles
46

Albuquerque, Geyslane Pereira Melo de, Fátima Maria da Silva Abrão, Ana Maria de Almeida, Débora Larissa Rufino Alves, Priscyla de Oliveira Nascimento Andrade, and Aurélio Molina da Costa. "Quality of life in the climacteric of nurses working in primary care." Revista Brasileira de Enfermagem 72, suppl 3 (December 2019): 154–61. http://dx.doi.org/10.1590/0034-7167-2018-0306.

Full text
Abstract:
ABSTRACT Objective: to evaluate the quality of life of primary care nurses in the climacteric. Method: A cross-sectional descriptive-analytic study, performed with 98 female nurses, aged 40-65 years, using the WHOQOL-Bref questionnaire. Results: the worst level of quality of life was observed for professionals aged 50-59 years, non-white, specialists, divorced or widowed, with children, a lower income, with another employment relationship, a weekly workload of more than 40 hours, who consumed alcoholic beverages weekly, with chronic disease, in continuous use of medications, sedentary, who did not menstruate and did not receive hormonal treatment, and who went through menopause between the ages of 43-47 years. Conclusion: Although the variables “physical activity” and “age” have a statistically significant association with quality of life, other variables seem to interfere in these professionals’ lives, indicating the need for a more critical and deep reflection on these relations.
APA, Harvard, Vancouver, ISO, and other styles
47

Rudobashta, Stanislav P., Galina A. Zueva, and Victor A. Zaytsev. "MODELING OF THE DEEP DRYING PROCESS OF GRANULATED POLYAMIDE AT CONVECTIVE-INFRARED ENERGY SUB-SUPPLY." IZVESTIYA VYSSHIKH UCHEBNYKH ZAVEDENII KHIMIYA KHIMICHESKAYA TEKHNOLOGIYA 62, no. 12 (December 8, 2019): 94–100. http://dx.doi.org/10.6060/ivkkt.20196212.6064.

Full text
Abstract:
On the basis of an analytical solution of the problem describing the drying of a cylindrical body with combined heat supply (continuous convective and intermittent infrared), the influence of the temperature mode of heating the body on the drying time and on the electrical energy consumed by an infrared emitter is analyzed. The dynamics of intermittent infrared heating of a cylindrical body was modeled on the basis of an analytical solution that accounts for the intermittency of irradiation by means of the unit Heaviside function, the exponential absorption of electromagnetic energy, and convective heat and mass exchange between the body surface and an environment with constant parameters. The kinetics of body drying was calculated by analytically solving the problem of moisture diffusion in the material with a mass-transfer boundary condition of the third kind, assuming that phase transformations occur at the surface of the body. An analysis of internal mass transfer during deep drying of granular polymers was performed, on the basis of which the diffusion mechanism of moisture transfer inside the material was substantiated; polyamide PA-6 in the form of a cylindrical rod was chosen as the material. The analysis showed that the drying of the polyamide rod occurs in the intra-diffusion region, characterized by the condition that the moisture content of the rod near its surface takes an equilibrium value immediately after the process starts; this value was calculated during the analysis of the drying temperature regime. A numerical simulation of the coupled drying and heating of a cylindrical PA-6 rod under combined convective-infrared energy supply was carried out, on the basis of which conclusions were drawn about the choice of the temperature mode of the process.
APA, Harvard, Vancouver, ISO, and other styles
48

Ladovskii, Igor V., Petr S. Martyshko, Alexander G. Tsidaev, and Denis D. Byzov. "A Method for Quantitative Interpretation of Stationary Thermal Fields for Layered Media." Geosciences 10, no. 5 (May 22, 2020): 199. http://dx.doi.org/10.3390/geosciences10050199.

Full text
Abstract:
A new method to solve thermal conjugacy problems is presented for layered models with a thermal conductivity jump at their boundaries. The method approximates the inverse thermal conductivity coefficient, which has jump discontinuities, by a combination of step functions. A generalized continuous operator is constructed in a continuous space of piecewise-homogeneous media. We obtained an analytical solution for the stationary heat conjugacy problem in a layered model of finite thickness with Dirichlet-Neumann conditions at the external boundaries. An algorithm was constructed for downward continuation of the heat flux to depths corresponding to the top of the mantle layer. The advantages of this method are illustrated by testing it on crustal seismic, gravity, and geothermal data for a study area in the Urals and neighbouring regions of Russia. We examined statistical relations between density and thermal parameters and determined heat flux components for the crust and the mantle. The method enables downward continuation of the heat flux to the base of the upper mantle and allows us to determine the thermal effects of the lateral and vertical features of deep tectonic structures.
APA, Harvard, Vancouver, ISO, and other styles
49

Niitsoo, Arne, Thorsten Edelhäußer, Ernst Eberlein, Niels Hadaschik, and Christopher Mutschler. "A Deep Learning Approach to Position Estimation from Channel Impulse Responses." Sensors 19, no. 5 (March 2, 2019): 1064. http://dx.doi.org/10.3390/s19051064.

Full text
Abstract:
Radio-based locating systems allow for a robust and continuous tracking in industrial environments and are a key enabler for the digitalization of processes in many areas such as production, manufacturing, and warehouse management. Time difference of arrival (TDoA) systems estimate the time-of-flight (ToF) of radio burst signals with a set of synchronized antennas from which they trilaterate accurate position estimates of mobile tags. However, in industrial environments where multipath propagation is predominant it is difficult to extract the correct ToF of the signal. This article shows how deep learning (DL) can be used to estimate the position of mobile objects directly from the raw channel impulse responses (CIR) extracted at the receivers. Our experiments show that our DL-based position estimation not only works well under harsh multipath propagation but also outperforms state-of-the-art approaches in line-of-sight situations.
APA, Harvard, Vancouver, ISO, and other styles
50

Parras, Juan, Patricia A. Apellániz, and Santiago Zazo. "Deep Learning for Efficient and Optimal Motion Planning for AUVs with Disturbances." Sensors 21, no. 15 (July 23, 2021): 5011. http://dx.doi.org/10.3390/s21155011.

Full text
Abstract:
We use recent advances in deep learning to solve an underwater motion planning problem with optimal control tools: namely, we propose using the Deep Galerkin Method (DGM) to approximate the Hamilton–Jacobi–Bellman PDE, which can be used to solve continuous-time, continuous-state optimal control problems. To make the approach more realistic, we consider disturbances in the underwater medium that affect the trajectory of the autonomous vehicle. After adapting DGM with a surrogate approach, our results show that the method efficiently solves the proposed problem, providing large improvements over a baseline controller in terms of cost, especially when the disturbance effects are more significant.
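For orientation, the Deep Galerkin Method trains a network V_θ(t, x) to satisfy the Hamilton–Jacobi–Bellman equation by minimizing its residual on sampled points; a schematic statement with generic dynamics f, running cost g, and terminal cost h (the paper's specific underwater dynamics and surrogate adaptation are not reproduced here) is:

    \partial_t V(t,x) + \min_{u}\Big\{ g(x,u) + \nabla_x V(t,x)^{\top} f(x,u) \Big\} = 0,
    \qquad V(T,x) = h(x),

    L(\theta) = \mathbb{E}_{(t,x)}\Big[\big|\partial_t V_\theta + \min_u\{ g + \nabla_x V_\theta^{\top} f \}\big|^2\Big]
              + \mathbb{E}_{x}\Big[\big|V_\theta(T,x) - h(x)\big|^2\Big].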
APA, Harvard, Vancouver, ISO, and other styles