A selection of scientific literature on the topic "Snapshot-based update condition detection"

Consult the lists of relevant articles, books, dissertations, conference papers, and other scholarly sources on the topic "Snapshot-based update condition detection".

Next to each source in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication as a .pdf file and read its abstract online, when these are available in the metadata.

Journal articles on the topic "Snapshot-based update condition detection":

1

Tan, Lei, Peng Li, Aimin Miao, and Yong Chen. "Online process monitoring and fault-detection approach based on adaptive neighborhood preserving embedding." Measurement and Control 52, no. 5-6 (April 16, 2019): 387–98. http://dx.doi.org/10.1177/0020294019838580.

Abstract:
This study aims to solve the problem of the high false alarm rate experienced during the detection process when using the traditional multivariate statistical process monitoring method, and the related problem that the existing model cannot be updated to reflect the actual situation. This article proposes a novel adaptive neighborhood preserving embedding algorithm as well as an online fault-detection approach based on adaptive neighborhood preserving embedding. This approach combines the approximate linear dependence condition with neighborhood preserving embedding. According to the newly proposed update strategy, the algorithm can adaptively update the model, realizing online fault detection of processes. The effectiveness and feasibility of the proposed approach are verified by experiments on the Tennessee Eastman process. Theoretical analysis and the Tennessee Eastman application experiment demonstrate that the proposed fault-detection method based on adaptive neighborhood preserving embedding can effectively reduce the false alarm rate and improve fault-detection performance.
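
The update strategy described above hinges on an approximate linear dependence (ALD) test: a new sample is admitted into the model's reference set only if it cannot be adequately represented as a linear combination of the samples already retained. A minimal NumPy sketch of that test is given below; the threshold `nu`, the function names, and the retraining hook are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def ald_residual(D, x):
    """Squared residual of projecting x onto the span of dictionary columns D."""
    a, *_ = np.linalg.lstsq(D, x, rcond=None)   # least-squares coefficients
    r = x - D @ a
    return float(r @ r)

def maybe_update_dictionary(D, x, nu=1e-2):
    """Add x to the dictionary only if it is NOT approximately linearly
    dependent on the stored samples (ALD condition), mimicking the kind of
    adaptive model update used in adaptive NPE-style monitoring."""
    if ald_residual(D, x) > nu:          # x carries new information
        D = np.column_stack([D, x])      # grow the reference set
        # ...retrain/refresh the NPE projection from the enlarged set here
    return D

# toy usage
rng = np.random.default_rng(0)
D = rng.normal(size=(10, 5))             # 5 stored samples, 10 variables
x_new = rng.normal(size=10)
D = maybe_update_dictionary(D, x_new)
print(D.shape)
```
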
2

Kobayashi, Takahisa, and Donald L. Simon. "Hybrid Kalman Filter Approach for Aircraft Engine In-Flight Diagnostics: Sensor Fault Detection Case." Journal of Engineering for Gas Turbines and Power 129, no. 3 (November 17, 2006): 746–54. http://dx.doi.org/10.1115/1.2718572.

Abstract:
In this paper, a diagnostic system based on a uniquely structured Kalman filter is developed for its application to in-flight fault detection of aircraft engine sensors. The Kalman filter is a hybrid of a nonlinear on-board engine model (OBEM) and piecewise linear models. The utilization of the nonlinear OBEM allows the reference health baseline of the diagnostic system to be updated, through a relatively simple process, to the health condition of degraded engines. Through this health baseline update, the diagnostic effectiveness of the in-flight sensor fault detection system is maintained as the health of the engine degrades over time. The performance of the sensor fault detection system is evaluated in a simulation environment at several operating conditions during the cruise phase of flight.
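
The fault-detection core of such a scheme is the Kalman innovation (residual) test: when the filter's model matches the healthy engine, the residuals stay small relative to their predicted covariance. Below is a generic sketch of a linear Kalman update with a weighted sum of squared residuals (WSSR) check, a common sensor-fault indicator; it is not the paper's hybrid OBEM filter, and all matrices and the threshold are illustrative toy values.

```python
import numpy as np

def kalman_wssr_step(x, P, z, A, C, Q, R):
    """One predict/update cycle of a linear Kalman filter, returning the
    weighted sum of squared residuals (WSSR) used for fault detection."""
    x_pred = A @ x
    P_pred = A @ P @ A.T + Q
    r = z - C @ x_pred                       # innovation (residual)
    S = C @ P_pred @ C.T + R                 # innovation covariance
    K = P_pred @ C.T @ np.linalg.inv(S)
    x_new = x_pred + K @ r
    P_new = (np.eye(len(x)) - K @ C) @ P_pred
    wssr = float(r @ np.linalg.inv(S) @ r)   # ~chi-square when model is healthy
    return x_new, P_new, wssr

# toy 2-state system with a single sensor
A = np.array([[1.0, 0.1], [0.0, 1.0]])
C = np.array([[1.0, 0.0]])
Q, R = 1e-4 * np.eye(2), np.array([[0.01]])
x, P = np.zeros(2), np.eye(2)
x, P, wssr = kalman_wssr_step(x, P, np.array([0.3]), A, C, Q, R)
print(wssr > 3.84)   # flag a fault against the chi-square 95% limit (1 dof)
```
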
3

Fang, Liang, Yun Zhou, Yunzhong Jiang, Yilin Pei, and Weijian Yi. "Vibration-Based Damage Detection of a Steel-Concrete Composite Slab Using Non-Model-Based and Model-Based Methods." Advances in Civil Engineering 2020 (September 11, 2020): 1–20. http://dx.doi.org/10.1155/2020/8889277.

Abstract:
This paper presents vibration-based damage detection (VBDD) for testing a steel-concrete composite bridge deck in a laboratory using both model-based and non-model-based methods. Damage that appears on a composite bridge deck may occur either in the service condition or in the loading condition. To verify the efficiency of the dynamic test methods for assessing different damage scenarios, two defect cases were designed in the service condition by removing the connection bolts along half of a steel girder and replacing the boundary conditions, while three damage cases were introduced in the loading condition by increasing the applied load. A static test and a multiple reference impact test (MRIT) were conducted in each case to obtain the corresponding deflection and modal data. For the non-model-based method, modal flexibility and modal flexibility displacement (MFD) were used to detect the location and extent of the damage. The test results showed that the appearance and location of the damage in defect cases and loading conditions can be successfully identified by the MFD values. A finite element (FE) model was rationally selected to represent the dynamic characteristics of the physical model, while four highly sensitive physical parameters were rationally selected using sensitivity analysis. The model updating technique was used to assess the condition of the whole deck in the service condition, including the boundary conditions, connectors, and slab. Using damage functions, Strand7 software was used to conduct FE analysis coupled with the MATLAB application programming interface to update multiple physical parameters. Of the three different FE models used to simulate the behavior of the composite slab, the calculated MFD of the shell-solid FE model was almost identical to the test results, indicating that the performance of the tested composite structure could be accurately predicted by this type of FE model.
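
The non-model-based indicator used here, modal flexibility, can be assembled directly from measured frequencies and mass-normalized mode shapes, and the modal flexibility displacement (MFD) is simply that matrix applied to a uniform load. A minimal NumPy sketch under those assumptions (mass-normalized shapes, unit uniform load) follows; it is an illustration of the standard formulas, not the authors' test code.

```python
import numpy as np

def modal_flexibility(freqs_hz, modes):
    """F ~ sum_i phi_i phi_i^T / omega_i^2 from the first m identified modes.
    freqs_hz: (m,) natural frequencies; modes: (n_dof, m) mass-normalized."""
    omega2 = (2 * np.pi * np.asarray(freqs_hz)) ** 2
    return (modes / omega2) @ modes.T

def mfd(freqs_hz, modes):
    """Modal flexibility displacement under a unit uniform load; damage tends
    to show up as a local increase relative to the baseline MFD."""
    return modal_flexibility(freqs_hz, modes) @ np.ones(modes.shape[0])

# toy comparison: a frequency drop (stiffness loss) raises the local MFD
rng = np.random.default_rng(0)
phi = np.linalg.qr(rng.normal(size=(8, 3)))[0]   # stand-in mode shapes
print(mfd([2.0, 5.0, 9.0], phi) - mfd([2.2, 5.0, 9.0], phi))
```
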
4

Guo, Ruijun, Guobin Zhang, Qian Zhang, Lei Zhou, Haicun Yu, Meng Lei, and You Lv. "An Adaptive Early Fault Detection Model of Induced Draft Fans Based on Multivariate State Estimation Technique." Energies 14, no. 16 (August 6, 2021): 4787. http://dx.doi.org/10.3390/en14164787.

Abstract:
The induced draft (ID) fan is an important piece of auxiliary equipment in coal-fired power plants. Early fault detection of the ID fan can provide predictive maintenance and reduce unscheduled shutdowns, thus improving the reliability of the power generation. In this study, an adaptive model was developed to achieve the early fault detection of ID fans. First, a non-parametric monitoring model was constructed to describe the normal operating characteristics with the multivariate state estimation technique (MSET). A similarity index representing operation status was defined according to the prediction deviations to produce warnings of early faults. To deal with the model accuracy degradation because of variant condition operation of the ID fan, an adaptive strategy was proposed by using the samples with a high data quality index (DQI) to manage the memory matrix and update the MSET model, thereby improving the fault detection results. The proposed method was applied to a 300 MW coal-fired power plant to achieve the early fault detection of an ID fan. In addition, fault detection by using the model without an update was also compared. Results show that the update strategy can greatly improve the MSET model accuracy when predicting normal operations of the ID fan; accordingly, the fault can be detected more than 4 h earlier by using the strategy with the adaptive update when compared to the model without an update.
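
MSET estimates the expected "normal" state as a weighted combination of historical states stored in a memory matrix, with the weights produced by a nonlinear similarity operator rather than a plain inner product. The sketch below uses a Gaussian-of-Euclidean-distance operator and a simple inverse-distance health index; the operator choice, the regularization, and the memory-update comment are stand-ins for illustration, not the paper's DQI-based procedure.

```python
import numpy as np

def sim(a, b, h=1.0):
    """Nonlinear similarity operator (here: Gaussian of Euclidean distance)."""
    d = np.linalg.norm(a - b)
    return np.exp(-(d / h) ** 2)

def mset_estimate(D, x):
    """MSET estimate of x from memory matrix D (columns = stored states)."""
    m = D.shape[1]
    G = np.array([[sim(D[:, i], D[:, j]) for j in range(m)] for i in range(m)])
    a = np.array([sim(D[:, i], x) for i in range(m)])
    w = np.linalg.solve(G + 1e-8 * np.eye(m), a)   # regularized inverse
    return D @ w

def similarity_index(x, x_hat):
    """Health index in (0, 1]; values near 1 mean 'close to normal'."""
    return 1.0 / (1.0 + np.linalg.norm(x - x_hat))

# An adaptive variant would replace the least representative column of D
# with a new high-quality sample when the operating condition shifts.
rng = np.random.default_rng(5)
D = rng.normal(size=(8, 6))                   # 6 memorized normal states
x_ok = D[:, 0] + 0.05 * rng.normal(size=8)    # near-normal observation
print(similarity_index(x_ok, mset_estimate(D, x_ok)))   # high similarity
```
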
5

Han, Jian Ping, and Yong Peng Luo. "Static and Dynamic Finite Element Model Updating of a Rigid Frame-Continuous Girders Bridge Based on Response Surface Method." Advanced Materials Research 639-640 (January 2013): 992–97. http://dx.doi.org/10.4028/www.scientific.net/amr.639-640.992.

Abstract:
Using static and dynamic test data simultaneously to update the finite element model increases the information available for updating and overcomes the disadvantages of updating based on static or dynamic test data alone. In this paper, the response surface method is adopted to update the finite element model of the structure based on static and dynamic tests. Using a reasonable experiment design and regression techniques, a response surface model is formulated to approximate the relationships between the parameters and response values in place of the initial finite element model for further updating. First, a numerical example of a reinforced concrete simply supported beam is used to demonstrate the feasibility of this approach. Then, the approach is applied to update the finite element model of a prestressed reinforced concrete rigid frame-continuous girders bridge based on in-situ static and dynamic test data. Results show that this approach works well and achieves reasonable physical explanations for the updated parameters. The results from the updated model are in good agreement with the results from the in-situ measurement. The updated finite element model can accurately represent the mechanical properties of the bridge, and it can serve as a benchmark model for further damage detection and condition assessment of the bridge.
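
The core of the response surface approach is to replace the expensive finite element model with a cheap regression surface fitted over a designed set of parameter samples, then search that surface for the parameter values that best reproduce the measured responses. A compact sketch with a full quadratic surface in two parameters is below; the "FE solver" is a made-up analytical stand-in and all numbers are illustrative.

```python
import numpy as np

def design_matrix(X):
    """Full quadratic basis in two parameters: [1, p1, p2, p1^2, p2^2, p1*p2]."""
    p1, p2 = X[:, 0], X[:, 1]
    return np.column_stack([np.ones(len(X)), p1, p2, p1**2, p2**2, p1 * p2])

# 1) 'Experiment design': sample the FE model at a few parameter points
#    (here an analytical stand-in for the FE solver).
def fe_response(p):   # e.g. a natural frequency vs. two stiffness multipliers
    return 10.0 + 2.0 * p[0] + 1.0 * p[1] - 0.3 * p[0] * p[1]

P = np.array([[a, b] for a in (0.8, 1.0, 1.2) for b in (0.8, 1.0, 1.2)])
y = np.array([fe_response(p) for p in P])

# 2) Fit the response surface (replaces the expensive FE model)
beta, *_ = np.linalg.lstsq(design_matrix(P), y, rcond=None)

# 3) Update parameters: minimize surface-vs-measurement misfit on a grid
measured = 12.1
grid = np.array([[a, b] for a in np.linspace(0.7, 1.3, 61)
                         for b in np.linspace(0.7, 1.3, 61)])
pred = design_matrix(grid) @ beta
print("updated parameters:", grid[np.argmin((pred - measured) ** 2)])
```
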
6

Roy, Prasenjit, and Baher Abdulhai. "GAID: Genetic Adaptive Incident Detection for Freeways." Transportation Research Record: Journal of the Transportation Research Board 1856, no. 1 (January 2003): 96–105. http://dx.doi.org/10.3141/1856-10.

Abstract:
Extensive research on point-detector-based automatic traffic-impeding incident detection indicates the potential superiority of neural networks over conventional approaches. All approaches, however, including neural networks, produce detection algorithms that are location specific—that is, neither transferable nor adaptive. A recently designed and ready-to-implement freeway incident detection algorithm based on genetically optimized probabilistic neural networks (PNN) is presented. The combined use of genetic algorithms and neural networks produces GAID, a genetic adaptive incident detection logic that uses flow and occupancy values from the upstream and downstream loop detector stations to automatically detect an incident between the said stations. As input, GAID uses a modified input feature space based on the difference of the present volume and occupancy condition from the average condition for that time and location. On the output side, it uses a Bayesian update process and converts isolated binary outputs into a continuous probabilistic measure that is updated every time step. GAID implements genetically optimized separate smoothing parameters for its input variables, which in turn increase the overall generalization accuracy of the detector algorithm. The detector was subjected to off-line tests with real incident data from a number of freeways in California. Results and further comparison with the McMaster algorithm indicate that GAID with a PNN core has a better detection rate and a lower false alarm rate than the PNN alone and the well-established McMaster algorithm. Results also indicate that the algorithm is the least location specific, and the automated genetic optimization process makes it adapt to new site conditions.
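
The output side of GAID, converting isolated binary detector outputs into a continuous probability updated every time step, is a direct application of Bayes' rule with assumed detection and false-alarm rates. A minimal sketch follows; the rates, prior, and threshold logic are illustrative, not the paper's calibrated values.

```python
def bayes_update(prior, detector_fired, p_d=0.9, p_f=0.05):
    """One-step Bayesian update of the incident probability.
    p_d: probability the detector fires given an incident (detection rate)
    p_f: probability it fires given no incident (false alarm rate)."""
    if detector_fired:
        num = p_d * prior
        den = p_d * prior + p_f * (1 - prior)
    else:
        num = (1 - p_d) * prior
        den = (1 - p_d) * prior + (1 - p_f) * (1 - prior)
    return num / den

# Each time step the PNN's binary output nudges a running probability,
# which is compared to a decision threshold instead of raw binary alarms.
p = 0.01
for out in [1, 1, 0, 1, 1]:
    p = bayes_update(p, out)
    print(round(p, 3))       # probability rises with consecutive detections
```
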
7

Soni, Mukesh, Ihtiram Raza Khan, Sameer Basir, Raman Chadha, Arnold C. Alguno, and Tapas Bhowmik. "Light-Weighted Deep Learning Model to Detect Fault in IoT-Based Industrial Equipment." Computational Intelligence and Neuroscience 2022 (June 29, 2022): 1–10. http://dx.doi.org/10.1155/2022/2455259.

Abstract:
Industry 4.0, with the widespread use of IoT, is a significant opportunity to improve the reliability of industrial equipment through problem detection. It is difficult to use a unified model to depict the working condition of devices in real-world industrial scenarios because of the complex and dynamic relationships between devices. The scope of this research is to detect equipment defects and to deploy the model in a real production environment. The proposed research describes an online detection method for system failures based on long short-term memory (LSTM) neural networks, which in recent years have taken over as a primary method for detecting faults. Feature extraction from sensor data is done using a curve alignment method, and the fault detection model is built on a long short-term memory neural network. Finally, sliding-window technology is used to realize the model's online detection and update. The method's efficacy is demonstrated by experiments based on real data from power plant sensors.
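
A sketch of the overall shape of such a detector, an LSTM one-step-ahead forecaster whose sliding-window prediction residual triggers alarms, is below, assuming PyTorch. Training on healthy data and threshold calibration are omitted; the architecture and names are illustrative, not the paper's exact model.

```python
import torch
import torch.nn as nn

class LSTMForecaster(nn.Module):
    """Predicts the next sensor vector from a window of past readings."""
    def __init__(self, n_sensors, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(n_sensors, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_sensors)

    def forward(self, x):                 # x: (batch, window, n_sensors)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])      # one-step-ahead prediction

def sliding_window_alarm(model, stream, window=50, thresh=0.5):
    """Online detection: flag time steps whose prediction residual is large."""
    alarms = []
    with torch.no_grad():
        for t in range(window, len(stream)):
            x = stream[t - window:t].unsqueeze(0)     # (1, window, n_sensors)
            resid = (model(x) - stream[t]).abs().mean().item()
            alarms.append(resid > thresh)
    return alarms

model = LSTMForecaster(n_sensors=4)        # would be trained on healthy data
stream = torch.randn(300, 4)               # stand-in for plant sensor data
print(sum(sliding_window_alarm(model, stream)))
```
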
8

Jin, Seung-Seop, and Hyung-Jo Jung. "Vibration-based damage detection using online learning algorithm for output-only structural health monitoring." Structural Health Monitoring 17, no. 4 (July 7, 2017): 727–46. http://dx.doi.org/10.1177/1475921717717310.

Abstract:
Damage-sensitive features such as natural frequencies are widely used for structural health monitoring; however, they are also influenced by the environmental condition. To address the environmental effect, principal component analysis is widely used. Before performing principal component analysis, the training data should be defined for the normal condition (baseline model) under environmental variability. It is worth noting that a natural change of the normal condition may exist due to the intrinsic behavior of the structural system. Without accounting for the natural change of the normal condition, numerous false alarms occur. However, the natural change of the normal condition cannot be known in advance. Although the description of the normal condition has a significant influence on the monitoring performance, it has received much less attention. To capture the natural change of the normal condition and detect the damage simultaneously, an adaptive statistical process monitoring using an online learning algorithm is proposed for output-only structural health monitoring. The novel aspect of the proposed method is the adaptive learning capability obtained by moving the window of the recent samples (from the normal condition) to update the baseline model. In this way, the baseline model can reflect the natural change of the normal condition under environmental variability. To handle both the change rate of the normal condition and the non-linear dependency of the damage-sensitive features, a variable moving window strategy is also proposed. The variable moving window strategy is a block-wise linearization method using k-means clustering based on the Linde–Buzo–Gray algorithm and the Bayesian information criterion. The proposed method and two existing methods (static linear principal component analysis and incremental linear principal component analysis) were applied to a full-scale bridge structure, which was artificially damaged at the end of the long-term monitoring. Among the three methods, the proposed method is the only one able to deal with the non-linear dependency among features and detect the structural damage in a timely manner.
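
The essence of the method, refitting the baseline on a moving window of samples accepted as normal so the model tracks slow environmental change while damage still raises the monitoring statistic, can be sketched with a plain PCA baseline and the squared prediction error (SPE/Q statistic). The variable-window, k-means-based linearization of the paper is omitted, and the control limit below is a crude data-driven stand-in.

```python
import numpy as np

def fit_pca(X, n_pc):
    """Baseline PCA model from a window of normal-condition samples."""
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:n_pc]                     # mean and loading vectors

def spe(x, mu, P):
    """Squared prediction error (Q statistic) of one sample."""
    xc = x - mu
    r = xc - P.T @ (P @ xc)
    return float(r @ r)

def monitor(stream, window=200, n_pc=3):
    """Adaptive monitoring: refit the baseline on a moving window of samples
    accepted as 'normal'; flag samples whose SPE exceeds a crude limit."""
    buf = list(stream[:window])
    alarms = []
    for x in stream[window:]:
        mu, P = fit_pca(np.asarray(buf), n_pc)
        lim = 3 * np.mean([spe(b, mu, P) for b in buf])
        if spe(x, mu, P) > lim:
            alarms.append(True)              # possible damage: do NOT update
        else:
            alarms.append(False)
            buf.pop(0); buf.append(x)        # normal: slide the window
    return alarms

rng = np.random.default_rng(4)
data = rng.normal(size=(400, 6))
data[300:, 0] += 4.0                         # simulated damage in one feature
print(sum(monitor(data)))                    # alarms concentrate after onset
```
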
9

Akkar, Hanan A. R., and Suhad Q. Hadad. "Diagnosis of Lung Cancer Disease Based on Back-Propagation Artificial Neural Network Algorithm." Engineering and Technology Journal 38, no. 3B (December 25, 2020): 184–96. http://dx.doi.org/10.30684/etj.v38i3b.1666.

Abstract:
Early-stage detection of lung cancer is important for successfully controlling the disease and for offering patients an additional chance to survive. Therefore, algorithms related to computer vision and image processing are extremely important for early medical diagnosis of lung cancer. In the current work, computed tomography scan images were collected from several patients. Classification was done using a Back-Propagation Artificial Neural Network, a powerful artificially intelligent technique with a training rule for optimization that updates the weights of all the connections in order to determine the abnormal image. Several pre-processing operations and morphological techniques were introduced to improve the condition of the images and make them suitable for cancer detection. Histogram analysis and the Gray Level Co-occurrence Matrix (GLCM) were applied to get the best feature-extraction analysis from the lung images. Three training functions (trainlm, trainbr, traingd) were used, giving significant accuracy for detecting cancer in lung scan images with the suggested algorithm. The best results were obtained with an accuracy rate of 95.9% using the trainlm training function. A graphical user interface (GUI) displays the final diagnosis for the lung.

Dissertations and theses on the topic "Snapshot-based update condition detection":

1

Wang, Yuwei. "Evolution of microservice-based applications: Modelling and safe dynamic updating." Thesis, Institut polytechnique de Paris, 2022. http://www.theses.fr/2022IPPAS009.

Abstract:
Microservice architectures contribute to building complex distributed systems as sets of independent microservices. The decoupling and modularity of distributed microservices facilitate their independent replacement and upgradeability. Since the emergence of agile DevOps and CI/CD, there is a trend towards more frequent and rapid evolutionary changes of running microservice-based applications in response to various evolution requirements. Applying changes to microservice architectures is performed by an evolution process of moving from the current application version to a new version. The maintenance and evolution costs of these distributed systems increase rapidly with the number of microservices. The objective of this thesis is to address the following issues: How can engineers be helped to build unified and efficient version management for microservices, and how can changes in microservice-based applications be traced? When can microservice-based applications, especially those with long-running activities, be dynamically updated without stopping the execution of the whole system? How should the safe updating be performed to ensure service continuity and maintain system consistency? In response to these questions, this thesis proposes two main contributions. The first contribution is a set of runtime models and an evolution graph for modelling and tracing the version management of microservices. These models are built at design time and used at runtime. They help engineers abstract architectural evolution in order to manage reconfiguration deployments, and they provide the knowledge base to be manipulated by an autonomic manager middleware in various evolution activities. The second contribution is a snapshot-based approach for dynamic software updating (DSU) of microservices. Consistent distributed snapshots of microservice-based applications are constructed and used for specifying continuity of service, evaluating the safe update conditions and realising the update strategies. The message complexity of the DSU algorithm is thus not the message complexity of the distributed application, but the complexity of the consistent distributed snapshot algorithm.
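
The snapshot-based DSU idea is that safe update conditions are predicates evaluated over a consistent distributed snapshot (recorded local states plus in-flight channel messages) rather than over live, racing state. The sketch below shows one such predicate, quiescence of the services being replaced; the data model and the condition itself are illustrative assumptions, not the thesis's exact formalization, and the snapshot is presumed to come from a Chandy-Lamport-style algorithm.

```python
from dataclasses import dataclass, field

@dataclass
class LocalSnapshot:
    service: str
    version: str
    active_requests: int                 # requests being processed locally
    in_flight: dict = field(default_factory=dict)  # channel -> recorded msgs

def safe_to_update(snapshots, targets):
    """Evaluate a safe update condition over a consistent distributed snapshot:
    every service to be replaced must be quiescent (no active requests) and
    every channel into it must be empty in the recorded snapshot."""
    by_name = {s.service: s for s in snapshots}
    for name in targets:
        s = by_name[name]
        if s.active_requests > 0:
            return False
        if any(msgs for msgs in s.in_flight.values()):
            return False
    return True

snap = [
    LocalSnapshot("orders", "v1", active_requests=0,
                  in_flight={"payments->orders": []}),
    LocalSnapshot("payments", "v2", active_requests=1),
]
print(safe_to_update(snap, {"orders"}))    # True: orders is quiescent
print(safe_to_update(snap, {"payments"}))  # False: a request is still active
```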

Book chapters on the topic "Snapshot-based update condition detection":

1

Pandian, Raviraj, and Ramya A. "Detecting and Tracking Segmentation of Moving Objects Using Graph Cut Algorithm." In Feature Dimension Reduction for Content-Based Image Identification, 177–92. IGI Global, 2018. http://dx.doi.org/10.4018/978-1-5225-5775-3.ch010.

Abstract:
Real-time moving object detection, classification, and tracking capabilities are presented; the system operates on both color and gray-scale video imagery from a stationary camera. It can handle object detection in indoor and outdoor environments and under changing illumination conditions. Object detection in a video is usually performed by object detectors or background subtraction techniques. The proposed method determines the threshold automatically and dynamically, depending on the intensities of the pixels in the current frame, and it updates the background model with a learning rate that depends on the differences of the pixels from the background model of the previous frame. The graph cut segmentation-based region merging approach achieves both segmentation and optical flow computation accurately, and it can work in the presence of large camera motion. The algorithm makes use of the shape of the detected objects and temporal tracking results to successfully categorize objects into predefined classes such as human, human group, and vehicle.
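
A minimal NumPy rendering of the two adaptive ingredients described above, a per-pixel learning rate driven by the frame-to-model difference and a threshold derived from current-frame statistics, is shown below; the specific learning-rate and threshold formulas are illustrative stand-ins, not the chapter's exact update rules.

```python
import numpy as np

def update_background(bg, frame, base_lr=0.05):
    """Per-pixel background update whose learning rate depends on how far the
    current frame departs from the model (larger difference -> slower blend)."""
    diff = np.abs(frame - bg)
    lr = base_lr / (1.0 + diff / (diff.mean() + 1e-6))
    return (1 - lr) * bg + lr * frame

def detect_foreground(bg, frame, k=2.5):
    """Dynamic threshold from current-frame intensity statistics."""
    diff = np.abs(frame - bg)
    thresh = diff.mean() + k * diff.std()
    return diff > thresh                  # boolean foreground mask

# toy frames (float grayscale)
rng = np.random.default_rng(1)
bg = rng.uniform(0, 1, size=(120, 160))
frame = bg.copy(); frame[40:60, 60:90] += 0.8   # a bright moving object
mask = detect_foreground(bg, frame)
bg = update_background(bg, frame)
print(mask.sum(), "foreground pixels")
```
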
2

Abdellatif, Abir, Jacques Bouaud, Joël Belmin, and Brigitte Seroussi. "Visualization of Guideline-Based Decision Support for the Management of Pressure Ulcers in Nursing Homes." In Studies in Health Technology and Informatics. IOS Press, 2020. http://dx.doi.org/10.3233/shti200683.

Abstract:
Though a preventable risk, the management of pressure ulcers (PUs) in nursing homes is not satisfactory due to inadequate prevention and complex care plans. PUs early detection and wound assessment require to know the patient condition and risk factors and to have a good knowledge of best practices. We built a guideline-based clinical decision support system (CDSS) for the prevention, the assessment, and the management of PUs. Clinical practice guidelines have been modeled as decision trees and formalized as IF-THEN rules to be triggered by electronic health record (EHR) data. From PU assessment yielded by the CDSS, we propose a synthetic visualization of PU current and previous stages as a gauge that illustrates the different stages of PU continuous evolution. This allows to display PU current and previous stages to inform health care professionals of PU updated assessment and support their evaluation of previously delivered care efficiency. The CDSS will be integrated in NETSoins nursing homes EHR where gauges for several health problems constitute a patient dashboard.
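
Guideline rules of the kind described, modeled as decision trees and formalized as IF-THEN rules fired by EHR data, reduce to a small rule-engine loop. The sketch below is a hypothetical illustration; the field names, thresholds, and recommendations are invented for the example and are not taken from the actual CDSS.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    """A guideline recommendation formalized as an IF-THEN rule over EHR data."""
    name: str
    condition: Callable[[dict], bool]    # patient record -> bool
    action: str

rules = [
    Rule("PU-prevention-repositioning",
         lambda p: p["mobility_score"] <= 2 and p.get("pu_stage", 0) == 0,
         "Schedule repositioning every 2 hours and reassess risk weekly."),
    Rule("PU-stage-escalation",
         lambda p: p.get("pu_stage", 0) >= 3,
         "Escalate to wound-care specialist and update the care plan."),
]

def run_cdss(patient):
    """Fire every rule whose condition holds for this patient record."""
    return [r.action for r in rules if r.condition(patient)]

print(run_cdss({"mobility_score": 1, "pu_stage": 0}))
```
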
3

Tung, Tony, and Takashi Matsuyama. "Visual Tracking Using Multimodal Particle Filter." In Computer Vision, 1072–90. IGI Global, 2018. http://dx.doi.org/10.4018/978-1-5225-5204-8.ch044.

Abstract:
Visual tracking of humans or objects in motion is a challenging problem when observed data undergo appearance changes (e.g., due to illumination variations, occlusion, cluttered background, etc.). Moreover, tracking systems are usually initialized with predefined target templates, or trained beforehand using known datasets. Hence, they are not always efficient to detect and track objects whose appearance changes over time. In this paper, we propose a multimodal framework based on particle filtering for visual tracking of objects under challenging conditions (e.g., tracking various human body parts from multiple views). Particularly, the authors integrate various cues such as color, motion and depth in a global formulation. The Earth Mover distance is used to compare color models in a global fashion, and constraints on motion flow features prevent common drifting effects due to error propagation. In addition, the model features an online mechanism that adaptively updates a subspace of multimodal templates to cope with appearance changes. Furthermore, the proposed model is integrated in a practical detection and tracking process, and multiple instances can run in real-time. Experimental results are obtained on challenging real-world videos with poorly textured models and arbitrary non-linear motions.
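
The multimodal fusion in a particle filter amounts to multiplying per-cue likelihoods into the particle weights before resampling. A toy sketch with Gaussian stand-in cues (in place of the chapter's color/motion/depth models and Earth Mover's distance comparisons) follows.

```python
import numpy as np

rng = np.random.default_rng(2)

def pf_step(particles, weights, likelihoods, motion_std=2.0):
    """One particle-filter cycle: diffuse particles, reweight with a product
    of per-cue likelihoods (e.g. color, motion, depth), then resample."""
    # predict: random-walk motion model
    particles = particles + rng.normal(0, motion_std, size=particles.shape)
    # update: fuse the modalities multiplicatively
    w = weights.copy()
    for lik in likelihoods:               # each lik: particles -> (n,) array
        w *= lik(particles)
    w /= w.sum()
    # resample (systematic resampling would be the usual refinement)
    idx = rng.choice(len(particles), size=len(particles), p=w)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))

def gauss_cue(center, scale):
    return lambda p: np.exp(-np.sum((p - center) ** 2, axis=1) / (2 * scale**2))

parts = rng.uniform(0, 100, size=(500, 2))
w = np.full(500, 1 / 500)
parts, w = pf_step(parts, w, [gauss_cue(np.array([50, 50]), 10),
                              gauss_cue(np.array([52, 48]), 15)])
print(parts.mean(axis=0))                 # estimate near the simulated target
```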

Conference papers on the topic "Snapshot-based update condition detection":

1

Kobayashi, Takahisa, and Donald L. Simon. "Hybrid Kalman Filter Approach for Aircraft Engine In-Flight Diagnostics: Sensor Fault Detection Case." In ASME Turbo Expo 2006: Power for Land, Sea, and Air. ASMEDC, 2006. http://dx.doi.org/10.1115/gt2006-90870.

Abstract:
In this paper, a diagnostic system based on a uniquely structured Kalman filter is developed for its application to in-flight fault detection of aircraft engine sensors. The Kalman filter is a hybrid of a nonlinear on-board engine model (OBEM) and piecewise linear models. The utilization of the nonlinear OBEM allows the reference health baseline of the diagnostic system to be updated, through a relatively simple process, to the health condition of degraded engines. Through this health baseline update, the diagnostic effectiveness of the in-flight sensor fault detection system is maintained as the health of the engine degrades over time. The performance of the sensor fault detection system is evaluated in a simulation environment at several operating conditions during the cruise phase of flight.
2

Pinzón, Horacio, Cinthia Audivet, Ivan Portnoy, Marlon Consuegra, Javier Alexander, and Marco Sanjuán. "An Extended Implementation of Fault Detection in Multi-State Systems Based on Warp Analysis: A Case Study on Natural Gas Transmission Systems in Tropical Regions." In ASME 2017 International Mechanical Engineering Congress and Exposition. American Society of Mechanical Engineers, 2017. http://dx.doi.org/10.1115/imece2017-71862.

Abstract:
Natural gas transmission infrastructure is a large-scale complex system that often exhibits a considerable number of operating states, not only due to natural, slow and normal process changes related to aging, but also due to dynamic interaction with multiple agents having different functional parameters, an irregular demand trend adjusted by the hour, and external conditions such as severe climate periods. As traditional fault detection relies on alarm management systems and operator expertise, it is paramount to deploy a strategy that is able to update its underlying structure and effectively adapt to such process shifts. This feature allows operators and engineers to better address the online data being gathered in dynamic or transient conditions. This paper presents an extended analysis of the WARP technique to address the abnormal condition management activities of multiple-state processes deployed in critical natural gas transmission infrastructure. Special emphasis is placed on the updating activity, so as to effectively incorporate the operating shifts exhibited by a new operating condition implemented in a fault detection strategy. This analysis broadens the authors' original algorithm scope to include multi-state systems in addition to process drifting behavior. The strategy is assessed under two different scenarios rendering a major shift in the process operating conditions of natural gas transmission systems: a transition between low and high natural gas demand to support a hydroelectric generation matrix under severe tropical conditions. Performance evaluation of the fault detection algorithm is carried out based on the false alarm rate, detection time and misdetection rate estimated around the model update.
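
The three evaluation figures quoted at the end, false alarm rate, detection time, and misdetection rate, can be computed mechanically from aligned alarm and fault time series. A small sketch follows, using common definitions; the paper may bin or normalize these quantities differently.

```python
import numpy as np

def fdd_metrics(alarm, fault, t, fault_start):
    """False alarm rate, misdetection rate, and detection delay from
    aligned boolean alarm/fault time series."""
    alarm, fault = np.asarray(alarm), np.asarray(fault)
    far = alarm[~fault].mean() if (~fault).any() else 0.0   # alarms w/o fault
    mdr = (~alarm[fault]).mean() if fault.any() else 0.0    # missed fault time
    hits = np.where(alarm & fault)[0]
    delay = t[hits[0]] - fault_start if len(hits) else np.inf
    return far, mdr, delay

t = np.arange(100.0)
fault = t >= 60                 # fault condition begins at t = 60
alarm = t >= 63                 # detector fires 3 steps later
print(fdd_metrics(alarm, fault, t, 60.0))   # (0.0, 0.075, 3.0)
```
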
3

Shou, Yu-Wen. "Intelligent Judgment System for Vehicle-Overtaking by Motion Detection in Subsequent Images." In ASME 2009 International Mechanical Engineering Congress and Exposition. ASMEDC, 2009. http://dx.doi.org/10.1115/imece2009-10922.

Abstract:
We propose an intelligent judgment system to determine the exact timing for overtaking another car in simulated dynamic circumstances, using motion detection and feature analysis in subsequent images based on digital image processing. The strategic methodology, which detects motion vectors extracted from the source video file, can effectively evaluate and predict the behavior of surrounding vehicles and at the same time suggest whether the driver could overtake another car. This not only differs from traditional methods of computational feature-only analysis of a specific static image, but also provides an innovative idea among the related applications of intelligent transportation systems (ITS). Our system makes use of video files recorded from the rear-view mirror of the vehicle, obtained from digital image capturing devices. It also constantly re-evaluates the motion-vector information of the surrounding environment to update the useful information between the driver's vehicle and the background. In order to tackle the problem of real-time processing, this paper simplifies the processes of feature selection and analysis for video processing in particular. The crucial features used to give dynamic information, motion vectors, can be obtained from defined consecutive images. We define a variable number of images according to the extent of motion variation in different real-time situations. Our dynamic features are composed of geometrical and statistical characteristics from each processed image in the defined duration. Our scheme can identify the difference between the background and the object of interest, which also reveals the dynamic information of the determined number of images extracted from the whole video file. Our experimental results show that the proposed features can give useful information about a given traffic condition, such as the locations of surrounding vehicles and the way vehicles move. Real-time problems of ITS are taken into consideration in this paper, and the developed feature series are flexible to changes of occasion. More useful features in dynamic environments, as well as our feature series, will be applied in our systematic mechanisms, and improvements on real-time problems using motion vectors will be made in the near future.
4

Liu, Biyue. "A Numerical Study of Wall Shear Stress of Viscous Flows in Curved Atherosclerotic Tubes." In ASME 2002 International Mechanical Engineering Congress and Exposition. ASMEDC, 2002. http://dx.doi.org/10.1115/imece2002-32132.

Abstract:
Atherosclerosis is a disease of large- and medium-size arteries, which involves complex interactions between the artery wall and the blood flow. Both clinical observations and experimental results showed that the fluid shear stress acting on the artery wall plays a significant role in the physical processes which lead to atherosclerosis [1,2]. Therefore, a sound understanding of the effect of the wall shear stress on atherosclerosis is of practical importance to early detection, prevention and treatment of the disease. A considerable number of studies have been performed to investigate the flow phenomena in human carotid artery bifurcations or curved tubes during the past decades [3–8]. Numerical studies have supported the experimental results on the correlation between blood flow parameters and atherosclerosis [6–8]. The objective of this work is to understand the effect of the wall shear stress on atherosclerosis. The mathematical description of pulsatile blood flows is modeled by applying the time-dependent, incompressible Navier-Stokes equations for Newtonian fluids. The equations of motion and the incompressibility condition are

$$\rho u_t + \rho (u \cdot \nabla) u = -\nabla p + \mu \Delta u \quad \text{in } \Omega, \qquad (1)$$

$$\nabla \cdot u = 0 \quad \text{in } \Omega, \qquad (2)$$

where $\rho$ is the density of the fluid, $\mu$ is the viscosity of the fluid, $u = (u_1, u_2, u_3)$ is the flow velocity, $p$ is the internal pressure, and $\Omega$ is a curved tube with wall boundary $\Gamma$ (see Figure 1). At the inflow boundary, fully developed velocity profiles corresponding to the common carotid velocity pulse waveform are prescribed:

$$u_2 = 0, \quad u_3 = 0, \quad u_1 = U \bigl(1 + A \sin(2\pi t / t_p)\bigr), \qquad (3)$$

where $A$ is the amplitude of oscillation, $t_p$ is the period of oscillation, and $U$ is a fully developed velocity profile at the symmetry entrance plane. At the outflow boundary, the surface traction force is prescribed as

$$T_{ij} n_j n_i = 0, \qquad (4)$$

$$u_i t_i = 0, \qquad (5)$$

where

$$T_{ij} = -p\,\delta_{ij} + \mu \left( \frac{\partial u_i}{\partial x_j} + \frac{\partial u_j}{\partial x_i} \right) \qquad (6)$$

is the stress tensor and $n = (n_1, n_2, n_3)$ is the outward normal vector of the outlet boundary. On the wall boundary $\Gamma$, we assume that no slipping takes place between the fluid and the wall and that no penetration of the fluid through the artery wall occurs:

$$u|_{\Gamma} = n H_t, \qquad (7)$$

where $n$ is the outward normal vector of the wall boundary $\Gamma$ and $H$ is the function representing the location of the wall boundary. At initial time $t = 0$, $H$ is input as shown in Figure 1. During the computation, $H$ is updated by a geometry update condition based on the localized blood flow information. The initial condition is prescribed as $u|_{t=0} = u_0$, $p|_{t=0} = p_0$, where $u_0$, $p_0$ can be obtained by solving a Stokes problem: $-\mu \Delta u_0 + \nabla p_0 = 0$, $\nabla \cdot u_0 = 0$, with boundary conditions (3)–(7) but zero on the right-hand side of (7).
5

Turner, Cameron J., Abiola M. Ajetunmobi, and Richard H. Crawford. "Fault Detection With NURBs-Based Metamodels." In ASME 2006 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. ASMEDC, 2006. http://dx.doi.org/10.1115/detc2006-99637.

Abstract:
Developing the ability for a system to self-monitor its condition is a desirable feature in many modern engineering systems. This capability facilitates a maintenance-as-needed rather than a maintenance-as-scheduled paradigm, offering potential efficiency improvements and corresponding cost savings. By using continuously updated Non-Uniform Rational B-spline (NURBs) metamodels of system performance to monitor the system condition, the onset of incipient faults can be detected by comparison to a self-generated as-built system metamodel, providing a basis for determining off-normal operating conditions. This capability is demonstrated for three distinct fault conditions prevalent in brushless DC motors. The results show that this technique can be used to develop an as-built system metamodel, develop a current system model during system operation, and detect the presence of an incipient fault condition despite the compensation provided by a feedback control system.
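
As a rough illustration of the idea, comparing current behavior against a spline metamodel fitted to as-built data, the sketch below uses SciPy's smoothing B-splines (NURBS reduce to B-splines when all weights are one). The signal, smoothing factor, and tolerance are invented for the example and are not the paper's motor data or metamodel.

```python
import numpy as np
from scipy.interpolate import splrep, splev

# 'As-built' baseline: smooth spline fit of healthy performance data,
# e.g. motor speed vs. command.
cmd = np.linspace(0, 10, 200)
rng = np.random.default_rng(3)
speed_healthy = 100 * (1 - np.exp(-0.5 * cmd)) + rng.normal(0, 0.5, cmd.size)
baseline = splrep(cmd, speed_healthy, s=len(cmd))     # smoothing spline (tck)

def off_normal(cmd_now, speed_now, tck, tol=3.0):
    """Flag operation whose response departs from the as-built metamodel."""
    return abs(speed_now - splev(cmd_now, tck)) > tol

# incipient fault: the response sags at high command
print(off_normal(8.0, splev(8.0, baseline) - 5.0, baseline))  # True
print(off_normal(8.0, splev(8.0, baseline) + 0.5, baseline))  # False
```
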
6

Kariyawasam, Shahani, Patrick Yeung, Stuart Clouston, and Geoffrey Hurd. "Overcoming Technical Limitations in Identifying and Characterizing Critical Complex Corrosion." In 2012 9th International Pipeline Conference. American Society of Mechanical Engineers, 2012. http://dx.doi.org/10.1115/ipc2012-90529.

Abstract:
In 2009 a pipeline within the TransCanada pipeline system experienced a rupture. As this pipeline was already under a rigorous In Line Inspection (ILI) based corrosion management program this failure led to an extensive root cause analysis. Even though the hazard causing the failure was microbiologically induced corrosion (MIC) under tape coating, the more troubling question was “Why had the severity of this anomaly not been determined by the ILI based corrosion management program?” This led to an investigation of what key characteristics of the ILI signals resulting from areas of “complex corrosion” are more difficult to correctly interpret and size and furthermore where the line condition is such that manual verification is needed. By better understanding the limitations of the technology, processes used, and the critical defect signal characteristics, criteria were developed to ensure that “areas of concern” are consistently identified, manually verified and therefore the sizing is validated at these potentially higher risk locations. These new criteria were applied on ILI data and then validated against in-the-ditch measurements and a hydrotest. This process in conjunction with optimization of ILI sizing algorithms enabled the operator to overcome some of the known challenges in sizing areas of complex corrosion and update its corrosion management process to improve the detection and remediation of critical defects. This paper describes this investigation of the failure location, development of the complex corrosion criteria, and the validation of effectiveness of the criteria. The criteria are focused on external corrosion and have been currently validated on pipelines of concern. Application to other lines should be similarly validated.
7

Mondal, Subhajit, Sushanta Chakraborty, and Nilanjan Mitra. "Estimation of Elastic Parameters of Sandwich Composite Plates Using a Gradient Based Finite Element Model Updating Approach." In ASME 2016 Conference on Smart Materials, Adaptive Structures and Intelligent Systems. American Society of Mechanical Engineers, 2016. http://dx.doi.org/10.1115/smasis2016-9005.

Abstract:
The dynamic behavior of sandwich composite structures needs to be predicted as accurately as possible to ensure safety and serviceability. A properly converged finite element model can accurately predict such behavior if the current material properties are determined within very close ranges of their actual values. The initial nominal values of material properties are guessed from established standards or from manufacturer's data, followed by verification through quasi-static characterization tests of extracted samples. Such structures can be modal-tested to determine the dynamic responses very accurately, as and when required. A mathematically well-posed inverse problem can thus be formulated to inversely update the material parameters accurately from initial guesses through finite element model updating procedures. Such an exercise can be conveniently used for condition assessment and health monitoring of sandwich composite structures. The method is capable of determining the degradation of material properties and is hence suitable for damage detection. The in-plane as well as out-of-plane elastic moduli can be determined to predict the actual responses, which can be verified by physical measurement. In the present investigation, the in-plane and out-of-plane elastic parameters of the face sheets made of glass fiber reinforced plastics, i.e. E1, E2, G12, G13, G23 of the face sheet, and the Young's modulus (E) of the core of a sandwich composite plate have been determined inversely from available modal responses. The method is based on the correlation between the dynamic responses as predicted using the finite element model and those measured from modal testing to form the objective function, sensitive enough to the in-plane and out-of-plane material constants. A gradient-based Inverse Eigensensitivity Method (IEM) has been implemented to identify these material parameters of a rectangular sandwich composite plate from natural frequencies. It may be noted that the initial characterization test data may not be useful in predicting accurate dynamic responses of existing degraded sandwich structures if the material constants have changed substantially. A destructive characterization test on an existing structure is mostly not permitted, as samples need to be extracted, which may damage the otherwise intact structure.
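
The gradient-based inverse eigensensitivity iteration referenced here updates the parameter vector by the pseudo-inverse of the frequency sensitivity matrix times the measured-minus-predicted response. A self-contained toy version with finite-difference sensitivities is below; the analytical `eig_freqs` is a stand-in for a real FE eigensolver, and the parameter meanings are invented for the example.

```python
import numpy as np

def eig_freqs(theta):
    """Stand-in for the FE solver: natural frequencies as a function of the
    elastic parameters theta (a real implementation would assemble K(theta)
    and M and solve the generalized eigenproblem)."""
    return np.array([np.sqrt(2 * theta[0] + theta[1]),
                     np.sqrt(theta[0] + 3 * theta[1])])

def iem_update(theta, f_measured, n_iter=10, h=1e-6):
    """Inverse eigensensitivity iteration:
    theta <- theta + pinv(S) @ (f_measured - f_model)."""
    theta = theta.astype(float).copy()
    for _ in range(n_iter):
        f0 = eig_freqs(theta)
        S = np.empty((len(f0), len(theta)))
        for j in range(len(theta)):       # finite-difference sensitivities
            tp = theta.copy(); tp[j] += h
            S[:, j] = (eig_freqs(tp) - f0) / h
        theta += np.linalg.pinv(S) @ (f_measured - f0)
    return theta

true_theta = np.array([2.0, 1.5])
f_meas = eig_freqs(true_theta)
print(iem_update(np.array([1.0, 1.0]), f_meas))   # converges toward [2.0, 1.5]
```
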
8

Pichler, Ruprecht M. J. "Improved Leak Detection by Method Diversity." In 1998 2nd International Pipeline Conference. American Society of Mechanical Engineers, 1998. http://dx.doi.org/10.1115/ipc1998-2100.

Abstract:
Leak detection systems for liquid pipelines are installed to minimize spillage in case of a leak. Therefore reliability, sensitivity and detection time under practical operating conditions are the most important parameters of a leak detection system. Noise factors to be considered among others are unknown fluid property data, friction factor, instrument errors, transient flow, slack-line operation and SCADA update time. The opening characteristics and the size of leaks differ considerably from case to case. Each software-based leak detection method available today has its particular strength. As long as just one or two of these methods are applied to a pipeline a compromise has to be found for the key parameters of the leak detection system. The paper proposed illustrates how a combination of several different software-based leak detection methods together with observer-type system identification and a knowledge-based evaluation can improve leak detection. Special focus is given to leak detection and automated leak locating under transient flow conditions. Practical results are shown for a crude oil pipeline and a product pipeline.
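
One of the classical software-based methods alluded to, volume (line) balance, illustrates why the SCADA update time and integration window matter: the inlet-outlet flow mismatch must be integrated long enough to separate a leak from noise. A toy sketch follows (uncompensated balance; real systems also correct for line pack from pressure and temperature).

```python
import numpy as np

def line_balance_alarm(q_in, q_out, dt, window=30, tol=0.5):
    """Volume-balance leak test: average the inlet-outlet flow mismatch over
    a moving window and alarm when the accumulated volume exceeds tol."""
    imbalance = np.convolve(q_in - q_out, np.ones(window) / window,
                            mode="valid")          # windowed mean mismatch
    return imbalance * dt * window > tol           # alarm per window position

t = np.arange(200)
q_in = np.full(200, 10.0)
q_out = np.full(200, 10.0); q_out[120:] -= 0.05    # small leak from t = 120
alarms = line_balance_alarm(q_in, q_out, dt=1.0)
print(np.argmax(alarms))    # index of the first alarmed window
```
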
9

Saeedzadeh, Ahsan, Saeid Habibi, and Marjan Alavi. "A Model-Based FDD Approach for an EHA Using Updated Interactive Multiple Model SVSF." In ASME/BATH 2021 Symposium on Fluid Power and Motion Control. American Society of Mechanical Engineers, 2021. http://dx.doi.org/10.1115/fpmc2021-68065.

Abstract:
Ubiquitous applications, especially in harsh environments and with strict safety requirements, make Fault Detection and Diagnosis (FDD) in hydraulic actuators an imperative concern for industry. Model-based FDD uses estimation strategies, including observers and filters, as estimation tools. In these methods, observability is a limiting factor in information extraction and parameter estimation for most applications, such as fluid power systems. To address the observability problem, adaptive strategies like Interactive Multiple Model (IMM) estimation have proven to be effective. In this paper a computationally efficient form of IMM, referred to as the Updated IMM (UIMM), is applied to an Electro-Hydrostatic Actuator (EHA) for FDD. The UIMM is suited to fault conditions that are irreversible, meaning that if a fault happens it will persist in the system. In essence, the UIMM follows a progression of models in line with the progression of the fault condition, in lieu of having all models considered at the same time (as is the case for IMM). Hence, UIMM significantly reduces the number of models running in parallel at the same time. This has two major advantages: higher computational efficiency and avoidance of combinatorial explosion. The state and parameter estimation strategy used in conjunction with UIMM is the Variable Boundary Layer Smooth Variable Structure Filter (VBL-SVSF). The VBL-SVSF is a robust optimal estimation strategy that is more stable than the Kalman filter in relation to system and modeling uncertainties. The UIMM method is validated by simulation of fault conditions on an EHA.
10

Arévalo, Pedro J., Olof Hummes, and Matthew Forshaw. "Integrated Real-Time Simulation in an Earth Model – Automating Drilling and Driving Efficiency." In SPE/IADC International Drilling Conference and Exhibition. SPE, 2021. http://dx.doi.org/10.2118/204069-ms.

Abstract:
Real-time while-drilling simulations use an evergreen digital twin of the well, consisting of physics-based models in an earth model, to constantly update boundary conditions and parameters while drilling. The approach actively contributes to the prediction or early detection of specific drilling issues, thus reducing drilling-related risk, non-productive time (NPT), and invisible lost time (ILT). The method also unlocks further drilling optimization opportunities, while staying within a safe operating envelope that protects the wellbore. In the planning phase, a run plan is prepared based on drilling engineering simulations – such as downhole hydraulics and torque and drag (T&D) – within the lithology and geomechanics of the earth model. While drilling, the run plan continuously evolves as automatic updates with actual drilling parameters refine the simulations. Smart triggering algorithms constantly monitor sensor data at surface and downhole, automatically updating the simulations. Drilling automation services consume the simulation results, shared across an aggregation layer, to predict drilling dysfunctions related to hole cleaning, downhole pressure, tripping velocity (which might lead to fractured formations or formation fluids entering the wellbore), tight hole and pipe sticking. Drillers receive actionable information, and drilling automation applications are equipped to control specific drilling processes. Case studies from drilling runs in the North Sea and in the Middle East confirm the effectiveness of the approach. Deployment on these runs used a modular and scalable system architecture to allow seamless integration of all components (surface data acquisition, drilling engineering simulations, and monitoring applications). As designed, the system allows the integration of new services and of different data providers and consumers.

Reports of organizations on the topic "Snapshot-based update condition detection":

1

Rankin, Nicole, Deborah McGregor, Candice Donnelly, Bethany Van Dort, Richard De Abreu Lourenco, Anne Cust, and Emily Stone. Lung cancer screening using low-dose computed tomography for high risk populations: Investigating effectiveness and screening program implementation considerations: An Evidence Check rapid review brokered by the Sax Institute (www.saxinstitute.org.au) for the Cancer Institute NSW. The Sax Institute, October 2019. http://dx.doi.org/10.57022/clzt5093.

Abstract:
Background Lung cancer is the number one cause of cancer death worldwide.(1) It is the fifth most commonly diagnosed cancer in Australia (12,741 cases diagnosed in 2018) and the leading cause of cancer death.(2) The number of years of potential life lost to lung cancer in Australia is estimated to be 58,450, similar to that of colorectal and breast cancer combined.(3) While tobacco control strategies are most effective for disease prevention in the general population, early detection via low dose computed tomography (LDCT) screening in high-risk populations is a viable option for detecting asymptomatic disease in current (13%) and former (24%) Australian smokers.(4) The purpose of this Evidence Check review is to identify and analyse existing and emerging evidence for LDCT lung cancer screening in high-risk individuals to guide future program and policy planning. Evidence Check questions This review aimed to address the following questions: 1. What is the evidence for the effectiveness of lung cancer screening for higher-risk individuals? 2. What is the evidence of potential harms from lung cancer screening for higher-risk individuals? 3. What are the main components of recent major lung cancer screening programs or trials? 4. What is the cost-effectiveness of lung cancer screening programs (include studies of cost–utility)? Summary of methods The authors searched the peer-reviewed literature across three databases (MEDLINE, PsycINFO and Embase) for existing systematic reviews and original studies published between 1 January 2009 and 8 August 2019. Fifteen systematic reviews (of which 8 were contemporary) and 64 original publications met the inclusion criteria set across the four questions. Key findings Question 1: What is the evidence for the effectiveness of lung cancer screening for higher-risk individuals? There is sufficient evidence from systematic reviews and meta-analyses of combined (pooled) data from screening trials (of high-risk individuals) to indicate that LDCT examination is clinically effective in reducing lung cancer mortality. In 2011, the landmark National Lung Cancer Screening Trial (NLST, a large-scale randomised controlled trial [RCT] conducted in the US) reported a 20% (95% CI 6.8% – 26.7%; P=0.004) relative reduction in mortality among long-term heavy smokers over three rounds of annual screening. High-risk eligibility criteria was defined as people aged 55–74 years with a smoking history of ≥30 pack-years (years in which a smoker has consumed 20-plus cigarettes each day) and, for former smokers, ≥30 pack-years and have quit within the past 15 years.(5) All-cause mortality was reduced by 6.7% (95% CI, 1.2% – 13.6%; P=0.02). Initial data from the second landmark RCT, the NEderlands-Leuvens Longkanker Screenings ONderzoek (known as the NELSON trial), have found an even greater reduction of 26% (95% CI, 9% – 41%) in lung cancer mortality, with full trial results yet to be published.(6, 7) Pooled analyses, including several smaller-scale European LDCT screening trials insufficiently powered in their own right, collectively demonstrate a statistically significant reduction in lung cancer mortality (RR 0.82, 95% CI 0.73–0.91).(8) Despite the reduction in all-cause mortality found in the NLST, pooled analyses of seven trials found no statistically significant difference in all-cause mortality (RR 0.95, 95% CI 0.90–1.00).(8) However, cancer-specific mortality is currently the most relevant outcome in cancer screening trials. 
These seven trials demonstrated a significantly greater proportion of early stage cancers in LDCT groups compared with controls (RR 2.08, 95% CI 1.43–3.03). Thus, when considering results across mortality outcomes and early stage cancers diagnosed, LDCT screening is considered to be clinically effective. Question 2: What is the evidence of potential harms from lung cancer screening for higher-risk individuals? The harms of LDCT lung cancer screening include false positive tests and the consequences of unnecessary invasive follow-up procedures for conditions that are eventually diagnosed as benign. While LDCT screening leads to an increased frequency of invasive procedures, it does not result in greater mortality soon after an invasive procedure (in trial settings when compared with the control arm).(8) Overdiagnosis, exposure to radiation, psychological distress and an impact on quality of life are other known harms. Systematic review evidence indicates the benefits of LDCT screening are likely to outweigh the harms. The potential harms are likely to be reduced as refinements are made to LDCT screening protocols through: i) the application of risk predication models (e.g. the PLCOm2012), which enable a more accurate selection of the high-risk population through the use of specific criteria (beyond age and smoking history); ii) the use of nodule management algorithms (e.g. Lung-RADS, PanCan), which assist in the diagnostic evaluation of screen-detected nodules and cancers (e.g. more precise volumetric assessment of nodules); and, iii) more judicious selection of patients for invasive procedures. Recent evidence suggests a positive LDCT result may transiently increase psychological distress but does not have long-term adverse effects on psychological distress or health-related quality of life (HRQoL). With regards to smoking cessation, there is no evidence to suggest screening participation invokes a false sense of assurance in smokers, nor a reduction in motivation to quit. The NELSON and Danish trials found no difference in smoking cessation rates between LDCT screening and control groups. Higher net cessation rates, compared with general population, suggest those who participate in screening trials may already be motivated to quit. Question 3: What are the main components of recent major lung cancer screening programs or trials? There are no systematic reviews that capture the main components of recent major lung cancer screening trials and programs. We extracted evidence from original studies and clinical guidance documents and organised this into key groups to form a concise set of components for potential implementation of a national lung cancer screening program in Australia: 1. Identifying the high-risk population: recruitment, eligibility, selection and referral 2. Educating the public, people at high risk and healthcare providers; this includes creating awareness of lung cancer, the benefits and harms of LDCT screening, and shared decision-making 3. Components necessary for health services to deliver a screening program: a. Planning phase: e.g. human resources to coordinate the program, electronic data systems that integrate medical records information and link to an established national registry b. Implementation phase: e.g. human and technological resources required to conduct LDCT examinations, interpretation of reports and communication of results to participants c. Monitoring and evaluation phase: e.g. 
monitoring outcomes across patients, radiological reporting, compliance with established standards and a quality assurance program 4. Data reporting and research, e.g. audit and feedback to multidisciplinary teams, reporting outcomes to enhance international research into LDCT screening 5. Incorporation of smoking cessation interventions, e.g. specific programs designed for LDCT screening or referral to existing community or hospital-based services that deliver cessation interventions. Most original studies are single-institution evaluations that contain descriptive data about the processes required to establish and implement a high-risk population-based screening program. Across all studies there is a consistent message as to the challenges and complexities of establishing LDCT screening programs to attract people at high risk who will receive the greatest benefits from participation. With regards to smoking cessation, evidence from one systematic review indicates the optimal strategy for incorporating smoking cessation interventions into a LDCT screening program is unclear. There is widespread agreement that LDCT screening attendance presents a ‘teachable moment’ for cessation advice, especially among those people who receive a positive scan result. Smoking cessation is an area of significant research investment; for instance, eight US-based clinical trials are now underway that aim to address how best to design and deliver cessation programs within large-scale LDCT screening programs.(9) Question 4: What is the cost-effectiveness of lung cancer screening programs (include studies of cost–utility)? Assessing the value or cost-effectiveness of LDCT screening involves a complex interplay of factors including data on effectiveness and costs, and institutional context. A key input is data about the effectiveness of potential and current screening programs with respect to case detection, and the likely outcomes of treating those cases sooner (in the presence of LDCT screening) as opposed to later (in the absence of LDCT screening). Evidence about the cost-effectiveness of LDCT screening programs has been summarised in two systematic reviews. We identified a further 13 studies—five modelling studies, one discrete choice experiment and seven articles—that used a variety of methods to assess cost-effectiveness. Three modelling studies indicated LDCT screening was cost-effective in the settings of the US and Europe. Two studies—one from Australia and one from New Zealand—reported LDCT screening would not be cost-effective using NLST-like protocols. We anticipate that, following the full publication of the NELSON trial, cost-effectiveness studies will likely be updated with new data that reduce uncertainty about factors that influence modelling outcomes, including the findings of indeterminate nodules. Gaps in the evidence There is a large and accessible body of evidence as to the effectiveness (Q1) and harms (Q2) of LDCT screening for lung cancer. Nevertheless, there are significant gaps in the evidence about the program components that are required to implement an effective LDCT screening program (Q3). Questions about LDCT screening acceptability and feasibility were not explicitly included in the scope. However, as the evidence is based primarily on US programs and UK pilot studies, the relevance to the local setting requires careful consideration. The Queensland Lung Cancer Screening Study provides feasibility data about clinical aspects of LDCT screening but little about program design. 
The International Lung Screening Trial is still in the recruitment phase and findings are not yet available for inclusion in this Evidence Check. The Australian Population Based Screening Framework was developed to “inform decision-makers on the key issues to be considered when assessing potential screening programs in Australia”.(10) As the Framework is specific to population-based, rather than high-risk, screening programs, there is a lack of clarity about transferability of criteria. However, the Framework criteria do stipulate that a screening program must be acceptable to “important subgroups such as target participants who are from culturally and linguistically diverse backgrounds, Aboriginal and Torres Strait Islander people, people from disadvantaged groups and people with a disability”.(10) An extensive search of the literature highlighted that there is very little information about the acceptability of LDCT screening to these population groups in Australia. Yet they are part of the high-risk population.(10) There are also considerable gaps in the evidence about the cost-effectiveness of LDCT screening in different settings, including Australia. The evidence base in this area is rapidly evolving and is likely to include new data from the NELSON trial and incorporate data about the costs of targeted- and immuno-therapies as these treatments become more widely available in Australia.
