Journal articles on the topic 'Continuum fault models'

Consult the top 50 journal articles for your research on the topic 'Continuum fault models.'


1

So, Byung-Dal, and Fabio A. Capitanio. "Self-consistent stick-slip recurrent behaviour of elastoplastic faults in intraplate environment: a Lagrangian solid mechanics approach." Geophysical Journal International 221, no. 1 (2020): 151–62. http://dx.doi.org/10.1093/gji/ggz581.

Abstract:
Our understanding of the seismicity of continental interiors, far from plate margins, relies on the ability to account for behaviours across a broad range of time and spatial scales. Deformation rates around seismic faults range from slip on faults during earthquakes to the long-term viscous deformation of the surrounding lithosphere, thereby presenting a challenge to modelling techniques. The aim of this study was to test a new method to simulate seismic faults using a continuum approach, reconciling the deformation of viscoelastoplastic lithospheres over geological timescales. A von Mises yield criterion is adopted as a proxy for the frictional shear strength of a fault. In the elastoplastic fault models a rapid change in strength occurs after plastic yielding, to achieve stress–strain equilibrium, when the coseismic slip and slip velocity are calculated from the strain-rate response and size of the fault. The cumulative step-function shape of the slip and temporally partitioned slip velocity of the fault demonstrated self-consistent discrete fault motion. The implementation of elastoplastic faults successfully reproduced the conceptual models of seismic recurrence, that is, strictly periodic and time- and slip-predictable. Elastoplastic faults that include slip velocity strengthening and weakening, with reduction of the time-step size during the slip stage, generated patterns of coseismic stress changes in surrounding areas similar to those calculated from actual earthquakes. A test of fault interaction captured the migration of stress between two faults under different spatial arrangements, reproducing realistic behaviours across time and spatial scales of faults in continental interiors.
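The von Mises proxy for fault strength described in this abstract amounts to checking the second deviatoric stress invariant against a yield strength. A minimal sketch of that check (NumPy; the stress state and yield value below are illustrative, not from the paper):

```python
import numpy as np

def von_mises_stress(sigma):
    """Equivalent (von Mises) stress from a 3x3 Cauchy stress tensor [Pa]."""
    s = sigma - np.trace(sigma) / 3.0 * np.eye(3)   # deviatoric part
    return np.sqrt(1.5 * np.tensordot(s, s))        # sqrt(3/2 * s_ij s_ij)

def has_yielded(sigma, yield_strength):
    """Plastic yielding (the proxy for frictional fault failure) occurs when
    the von Mises stress reaches the yield strength."""
    return von_mises_stress(sigma) >= yield_strength

# Pure shear of 10 MPa: von Mises stress = sqrt(3) * tau ~ 17.3 MPa
sigma = np.array([[0.0, 10e6, 0.0],
                  [10e6, 0.0, 0.0],
                  [0.0,  0.0, 0.0]])
print(von_mises_stress(sigma) / 1e6)   # ~ 17.32
```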
2

Sun, Haoran, Shuang Tian, Yuankai Xiang, Leiming Cheng, and Fujian Yang. "Analysis of Stress Perturbation Patterns in Oil and Gas Reservoirs Induced by Faults." Processes 13, no. 5 (2025): 1416. https://doi.org/10.3390/pr13051416.

Abstract:
The distribution of in situ stress fields in reservoirs is critical for the accurate exploration and efficient exploitation of hydrocarbon resources, especially in deep, fault-developed strata where tectonic activities significantly complicate stress field patterns. To clarify the perturbation effects of faults on in situ stress fields in deep reservoirs, this study combines dynamic–static parameter conversion models derived from laboratory experiments (acoustic emission Kaiser effect and triaxial compression tests) with a coupled “continuous matrix–discontinuous fault” numerical framework implemented in FLAC3D 6.0. Focusing on the BKQ Formation reservoir in the MH area, China, we developed a multivariate regression-based inversion model integrating gravitational and bidirectional tectonic stress fields, validated against field measurements with errors of −2.96% to 9.07%. The key findings of this study include the following: (1) fault slip induces stress reductions up to 22.3 MPa near fault zones, with perturbation ranges quantified via exponential decay functions (184.91–317.74 m); (2) the “continuous matrix–discontinuous fault” coupling method resolves limitations of traditional continuum models by simulating fault slip through interface contact elements; and (3) stress redistribution exhibits NW-SE gradients, aligning with regional tectonic compression. These results provide quantitative guidelines for optimizing hydrocarbon development boundaries and hydraulic fracturing designs in faulted reservoirs.
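The exponential-decay characterization of fault perturbation ranges reported above can be sketched as a log-linear least-squares fit. The decay length, amplitude, and 1 MPa cutoff below are illustrative assumptions, not values from the study:

```python
import numpy as np

def fit_exponential_decay(dist_m, dstress_mpa):
    """Fit |dsigma|(d) = A * exp(-d / lam) by least squares on log(dsigma).
    Returns (A, lam)."""
    b, ln_a = np.polyfit(dist_m, np.log(dstress_mpa), 1)
    return np.exp(ln_a), -1.0 / b

def perturbation_range(A, lam, threshold_mpa=1.0):
    """Distance beyond which the stress perturbation falls below a threshold."""
    return lam * np.log(A / threshold_mpa)

# Synthetic profile: 22.3 MPa drop at the fault, e-folding length 80 m
d = np.linspace(0.0, 400.0, 41)
s = 22.3 * np.exp(-d / 80.0)
A, lam = fit_exponential_decay(d, s)
print(round(perturbation_range(A, lam), 1))   # ~ 248.4 m
```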
3

Cheng, Feng, Andrew V. Zuza, Peter J. Haproff, et al. "Accommodation of India–Asia convergence via strike-slip faulting and block rotation in the Qilian Shan fold–thrust belt, northern margin of the Tibetan Plateau." Journal of the Geological Society 178, no. 3 (2021): jgs2020–207. http://dx.doi.org/10.1144/jgs2020-207.

Abstract:
Existing models of intracontinental deformation have focused on plate-like rigid body motion versus viscous-flow-like distributed deformation. To elucidate how plate convergence is accommodated by intracontinental strike-slip faulting and block rotation within a fold–thrust belt, we examine the Cenozoic structural framework of the central Qilian Shan of northeastern Tibet, where the NW-striking, right-slip Elashan and Riyueshan faults terminate at the WNW-striking, left-slip Haiyuan and Kunlun faults. Field- and satellite-based observations of discrete right-slip fault segments, releasing bends, horsetail termination splays and off-fault normal faulting suggest that the right-slip faults accommodate block rotation and distributed west–east crustal stretching between the Haiyuan and Kunlun faults. Luminescence dating of offset terrace risers along the Riyueshan fault yields a Quaternary slip rate of c. 1.1 mm a−1, which is similar to previous estimates. By integrating our results with regional deformation constraints, we propose that the pattern of Cenozoic deformation in northeastern Tibet is compatible with west–east crustal stretching/lateral displacement, non-rigid off-fault deformation and broad clockwise rotation and bookshelf faulting, which together accommodate NE–SW India–Asia convergence. In this model, the faults represent strain localization that approximates continuum deformation during regional clockwise lithospheric flow against the rigid Eurasian continent.
Supplementary material: Luminescence dating procedures and protocols are available at https://doi.org/10.17605/OSF.IO/CR9MN
Thematic collection: This article is part of the Fold-and-thrust belts and associated basins collection available at: https://www.lyellcollection.org/cc/fold-and-thrust-belts
4

Mancini, Simone, Margarita Segou, Maximilian Jonas Werner, and Tom Parsons. "The Predictive Skills of Elastic Coulomb Rate-and-State Aftershock Forecasts during the 2019 Ridgecrest, California, Earthquake Sequence." Bulletin of the Seismological Society of America 110, no. 4 (2020): 1736–51. http://dx.doi.org/10.1785/0120200028.

Abstract:
Operational earthquake forecasting protocols commonly use statistical models for their recognized ease of implementation and robustness in describing the short-term spatiotemporal patterns of triggered seismicity. However, recent advances in physics-based aftershock forecasting reveal comparable performance to the standard statistical counterparts, with significantly improved predictive skills when fault and stress-field heterogeneities are considered. Here, we perform a pseudoprospective forecasting experiment during the first month of the 2019 Ridgecrest (California) earthquake sequence. We develop seven Coulomb rate-and-state models that couple static stress-change estimates with continuum mechanics expressed by the rate-and-state friction laws. Our model parameterization supports a gradually increasing complexity; we start from a preliminary model implementation with simplified slip distributions and spatially homogeneous receiver faults and reach an enhanced one featuring optimized fault constitutive parameters, finite-fault slip models, secondary triggering effects, and spatially heterogeneous planes informed by pre-existing ruptures. The data-rich environment of southern California allows us to test whether incorporating data collected in near-real time during an unfolding earthquake sequence boosts our predictive power. We assess the absolute and relative performance of the forecasts by means of statistical tests used within the Collaboratory for the Study of Earthquake Predictability and compare their skills against a standard benchmark epidemic-type aftershock sequence (ETAS) model for the short term (24 hr after the two Ridgecrest mainshocks) and the intermediate term (one month). Stress-based forecasts expect heightened rates along the whole near-fault region and increased expected seismicity rates on the central Garlock fault. Our comparative model evaluation not only supports that faulting heterogeneities coupled with secondary triggering effects are the most critical success components behind physics-based forecasts, but also underlines the importance of model updates that incorporate aftershock data available in near-real time, which reach better performance than the standard ETAS model. We explore the physical basis behind our results by investigating the localized shutdown of pre-existing normal faults in the Ridgecrest near-source area.
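The Coulomb rate-and-state coupling described in this abstract is typically built on Dieterich's (1994) expression for the seismicity rate following a static Coulomb stress step. A minimal sketch of that expression (NumPy; the parameter values are illustrative, not those calibrated in the paper):

```python
import numpy as np

def dieterich_rate(t_days, dcfs_mpa, r_background, a_sigma_mpa, t_a_days):
    """Seismicity rate R(t) after a static Coulomb stress step dCFS
    (Dieterich, 1994): R = r / ((exp(-dCFS/(A*sigma)) - 1) * exp(-t/t_a) + 1),
    where t_a is the aftershock relaxation time."""
    gamma = (np.exp(-dcfs_mpa / a_sigma_mpa) - 1.0) * np.exp(-t_days / t_a_days)
    return r_background / (gamma + 1.0)

# A 0.5 MPa stress increase with A*sigma = 0.05 MPa and a 1-year aftershock
# duration: the rate jumps to exp(10) times background, then relaxes toward r.
t = np.array([0.0, 30.0, 365.0])
print(dieterich_rate(t, 0.5, r_background=1.0, a_sigma_mpa=0.05, t_a_days=365.0))
```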
6

Jung, Jai K., Thomas D. O’Rourke, and Christina Argyrou. "Multi-directional force–displacement response of underground pipe in sand." Canadian Geotechnical Journal 53, no. 11 (2016): 1763–81. http://dx.doi.org/10.1139/cgj-2016-0059.

Abstract:
A methodology is presented to evaluate multi-directional force–displacement relationships for soil–pipeline interaction analysis and design. Large-scale tests of soil reaction to pipe lateral and uplift movement in dry and partially saturated sand are used to validate plane-strain finite element (FE) continuum models of the soil and pipe. The FE models are then used to characterize force versus displacement performance for lateral, vertical upward, vertical downward, and oblique orientations of pipeline movement in soil. Using the force versus displacement relationships, the analytical results for pipeline response to strike-slip fault rupture are shown to compare favorably with the results of large-scale tests in which strike-slip fault movement was imposed on 250 and 400 mm diameter high-density polyethylene pipelines in partially saturated sand. Analytical results normalized with respect to maximum lateral force are provided on 360° plots to predict maximum pipe loads for any movement direction. The resulting methodology and dimensionless plots are applicable for underground pipelines and conduits at any depth, subjected to relative soil movement in any direction in dry or saturated and partially saturated medium to very dense sands.
7

Armandine Les Landes, Antoine, Théophile Guillon, Mariane Peter-Borie, Arnold Blaisonneau, Xavier Rachez, and Sylvie Gentier. "Locating Geothermal Resources: Insights from 3D Stress and Flow Models at the Upper Rhine Graben Scale." Geofluids 2019 (May 12, 2019): 1–24. http://dx.doi.org/10.1155/2019/8494539.

Abstract:
To be exploited, geothermal resources require heat, fluid, and permeability. These favourable geothermal conditions are strongly linked to the specific geodynamic context and the main physical transport processes, notably stresses and fluid circulations, which impact heat-driving processes. The physical conditions favouring the setup of geothermal resources can be searched for in predictive models, thus giving estimates of the so-called “favourable areas.” Numerical models allow an integrated evaluation of the physical processes with adapted time and space scales while considering 3D effects. Supported by geological, geophysical, and geochemical exploration methods, they constitute a useful tool to shed light on the dynamic context of the geothermal resource setup and may provide answers to the challenging task of geothermal exploration. The Upper Rhine Graben (URG) is a data-rich geothermal system where deep fluid circulations occurring in the regional fault network are the probable origin of local thermal anomalies. Here, we present a current overview of our team's efforts to integrate the impacts of the key physics, as well as the key factors controlling the geothermal anomalies in a fault-controlled geological setting, into 3D physically consistent models at the regional scale. The study relies on the building of the first 3D numerical flow model (using the discrete-continuum method) and mechanical model (using the distinct element method) at the URG scale. First, the key role of the regional fault network is taken into account using a discrete numerical approach. The geometry building is focused on the conceptualization of the 3D fault zone network based on structural interpretation and generic geological concepts and is consistent with the geological knowledge. This DFN (discrete fracture network) model is realized as two separate models (3D flow and 3D stress) at the URG scale. Then, based on the main characteristics of the geothermal anomalies and their link with the physics considered, criteria are identified that enable indicators to be derived from the simulation results and used to identify geothermally favourable areas. Next, considering the strong link between stress, fluid flow, and geothermal resources, a cross-analysis of the results is carried out to delineate favourable areas for geothermal resources. The results are compared with the existing thermal data at the URG scale and with knowledge gained through numerous studies. The good agreement between the delineated favourable areas and the locations of local thermal anomalies (especially the main one close to Soultz-sous-Forêts) demonstrates the key role of the regional fault network, as well as stress and fluid flow, in the setup of geothermal resources. Moreover, the very encouraging results underline the potential of the first 3D flow and 3D stress models at the URG scale to locate geothermal resources and offer new research opportunities.
8

Weismüller, Christopher, Janos L. Urai, Michael Kettermann, Christoph von Hagke, and Klaus Reicherter. "Structure of massively dilatant faults in Iceland: lessons learned from high-resolution unmanned aerial vehicle data." Solid Earth 10, no. 5 (2019): 1757–84. http://dx.doi.org/10.5194/se-10-1757-2019.

Abstract:
Normal faults in basalts develop massive dilatancy in the upper few hundred meters below the Earth's surface with corresponding interactions with groundwater and lava flow. These massively dilatant faults (MDFs) are widespread in Iceland and the East African Rift, but the details of their geometry are not well documented, despite their importance for fluid flow in the subsurface, geohazard assessment and geothermal energy. We present a large set of digital elevation models (DEMs) of the surface geometries of MDFs with 5–15 cm resolution, acquired along the Icelandic rift zone using unmanned aerial vehicles (UAVs). Our data present a representative set of outcrops of MDFs in Iceland, formed in basaltic sequences linked to the mid-ocean ridge. UAVs provide a much higher resolution than aerial/satellite imagery and a much better overview than ground-based fieldwork, bridging the gap between outcrop-scale observations and remote sensing. We acquired photosets of overlapping images along about 20 km of MDFs and processed these using photogrammetry to create high-resolution DEMs and orthorectified images. We use this dataset to map the faults and their damage zones to measure length, opening width and vertical offset of the faults and identify surface tilt in the damage zones. Ground truthing of the data was done by field observations. Mapped vertical offsets show typical trends of normal fault growth by segment coalescence. However, opening widths in map view show variations at much higher frequency, caused by segmentation, collapsed relays and tilted blocks. These effects commonly cause a higher-than-expected ratio of vertical offset and opening width for a steep normal fault at depth. Based on field observations and the relationships of opening width and vertical offset, we define three endmember morphologies of MDFs: (i) dilatant faults with opening width and vertical offset, (ii) tilted blocks (TBs) and (iii) opening-mode (mode I) fissures.
Field observation of normal faults without visible opening invariably shows that these have an opening filled with recent sediment. TB-dominated normal faults tend to have the largest ratio of opening width and vertical offset. Fissures have opening widths up to 15 m with throw below a 2 m threshold. Plotting opening width versus vertical offset shows that there is a continuous transition between the endmembers. We conclude that for these endmembers, the ratio between opening width and vertical offset R can be reliably used to predict fault structures at depth. However, fractures associated with MDFs belong to one larger continuum and, consequently, where different endmembers coexist, a clear identification of structures solely via the determination of R is impossible.
9

Dey, Sandip, and Solomon Tesfamariam. "Probabilistic Seismic Risk Analysis of Buried Pipelines Due to Permanent Ground Deformation for Victoria, BC." Geotechnics 2, no. 3 (2022): 731–53. http://dx.doi.org/10.3390/geotechnics2030035.

Abstract:
Buried continuous pipelines are prone to failure due to permanent ground deformation as a result of fault rupture. Since the failure mode is dependent on a number of factors, a probabilistic approach is necessary to correctly compute the seismic risk. In this study, a novel method to estimate regional seismic risk to buried continuous pipelines is presented. The seismic risk assessment method is thereafter illustrated for buried gas pipelines in the City of Victoria, British Columbia. The illustrated example considers seismic hazard from the Leech River Valley Fault Zone (LRVFZ). The risk assessment approach considers uncertainties of earthquake rupture, soil properties at the site concerned, geometric properties of pipes and operating conditions. Major improvements in this method over existing comparable studies include the use of stochastic earthquake source modeling and analytical Okada solutions to generate regional ground deformation probabilistically. Previous studies used regression equations to define probabilistic ground deformations along a fault. Secondly, in the current study, experimentally evaluated 3D shell and continuum pipe–soil finite element models were used to compute pipeline responses. Earlier investigations used simple soil spring–beam element pipe models to evaluate the pipeline response. Finally, the current approach uses the multi-fidelity Gaussian process surrogate model to ensure efficiency and limit required computational resources. The developed multi-fidelity Gaussian process surrogate model was successfully cross-validated with high coefficients of determination of 0.92 and 0.96. A fragility curve was generated based on failure criteria from ALA strain limits. The seismic risks of pipeline failure due to compressive buckling and tensile rupture at the site considered were computed to be 1.5 percent and 0.6 percent in 50 years, respectively.
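The cross-validation scores of 0.92 and 0.96 quoted above are coefficients of determination; computing one for a surrogate against held-out high-fidelity runs is straightforward. A minimal sketch (NumPy; the strain values below are invented for illustration, not data from the study):

```python
import numpy as np

def coefficient_of_determination(y_true, y_pred):
    """R^2 = 1 - SS_res / SS_tot, as used to cross-validate a surrogate
    model against held-out high-fidelity results."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    ss_res = np.sum((y_true - y_pred) ** 2)           # residual sum of squares
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)    # total sum of squares
    return 1.0 - ss_res / ss_tot

# Illustrative: peak pipe strains from a hypothetical surrogate vs. FE runs
fe = np.array([0.8, 1.2, 2.0, 3.1, 4.5])
surrogate = np.array([0.9, 1.1, 2.2, 3.0, 4.3])
print(round(coefficient_of_determination(fe, surrogate), 3))   # -> 0.988
```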
10

Olsen-Kettle, Louise, Hans Mühlhaus, and Christian Baillard. "A study of localization limiters and mesh dependency in earthquake rupture." Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 368, no. 1910 (2010): 119–30. http://dx.doi.org/10.1098/rsta.2009.0190.

Abstract:
No complete physically consistent model of earthquake rupture exists that can fully describe the rich hierarchy of scale dependencies and nonlinearities associated with earthquakes. We study mesh sensitivity in numerical models of earthquake rupture and demonstrate that this mesh sensitivity may provide hidden clues to the underlying physics generating the rich dynamics associated with earthquake rupture. We focus on unstable slip events that occur in earthquakes when rupture is associated with frictional weakening of the fault. Attempts to simulate these phenomena directly by introducing the relevant constitutive behaviour lead to mesh-dependent results, where the deformation localizes in one element, irrespective of size. Interestingly, earthquake models with oversized mesh elements that are ill-posed in the continuum limit display more complex and realistic physics. Until now, the mesh-dependency problem has been regarded as a red herring, but have we overlooked an important clue arising from the mesh sensitivity? We analyse spatial discretization errors introduced into models with oversized meshes to show how the governing equations may change because of these error terms and give rise to more interesting physics.
11

Ahmed, Barzan I., and Mohammed S. Al-Jawad. "Geomechanical modelling and two-way coupling simulation for carbonate gas reservoir." Journal of Petroleum Exploration and Production Technology 10, no. 8 (2020): 3619–48. http://dx.doi.org/10.1007/s13202-020-00965-7.

Abstract:
Geomechanical modelling and simulation are introduced to accurately determine the combined effects of hydrocarbon production and changes in rock properties due to geomechanical effects. The reservoir geomechanical model is concerned with stress-related issues and rock failure in compression, shear, and tension induced by reservoir pore pressure changes due to reservoir depletion. In this paper, a rock mechanical model is constructed in geomechanical mode, and reservoir geomechanics simulations are run for a carbonate gas reservoir. The study begins with assessment of the data, construction of 1D rock mechanical models along the well trajectory, the generation of a 3D mechanical earth model, and running a 4D geomechanical simulation using a two-way coupling simulation method, followed by results analysis. A dual porosity/permeability model is coupled with a 3D geomechanical model, and iterative two-way coupling simulation is performed to understand the changes in effective stress dynamics with the decrease in reservoir pressure due to production, and therefore to identify the changes in dual-continuum media conductivity to fluid flow and field ultimate recovery. The analysis shows a 4% decrease in gas ultimate recovery and considerable changes in matrix contribution and fracture properties, with the geomechanical effects on the matrix visibly decreasing the gas production potential, while the effect on the natural fracture contribution to gas inflow is limited. Generally, this could be due to slip flow of gas at the walls of micro-extension fractures, where the fracture conductivity is quite sufficient for the volume that the matrix feeds into the fractures. The geomechanical simulation results also show the stability of the existing faults, emphasizing that the loading on the faults is too low to induce fault slip that would create fracturing and enhanced permeability providing an efficient conduit for fluid flow in naturally fractured reservoirs.
12

Compastié, Maxime, Antonio López Martínez, Carolina Fernández, et al. "PALANTIR: An NFV-Based Security-as-a-Service Approach for Automating Threat Mitigation." Sensors 23, no. 3 (2023): 1658. http://dx.doi.org/10.3390/s23031658.

Abstract:
Small and medium enterprises are significantly hampered by cyber-threats as they have inherently limited skills and financial capacities to anticipate, prevent, and handle security incidents. The EU-funded PALANTIR project aims at facilitating the outsourcing of the security supervision to external providers to relieve SMEs/MEs from this burden. However, good practices for the operation of SME/ME assets involve avoiding their exposure to external parties, which requires a tightly defined and timely enforced security policy when resources span across the cloud continuum and need interactions. This paper proposes an innovative architecture extending Network Function Virtualisation to externalise and automate threat mitigation and remediation in cloud, edge, and on-premises environments. Our contributions include an ontology for the decision-making process, a Fault-and-Breach-Management-based remediation policy model, a framework conducting remediation actions, and a set of deployment models adapted to the constraints of cloud, edge, and on-premises environment(s). Finally, we also detail an implementation prototype of the framework serving as evaluation material.
14

Zhang, Hui, Baojun Ge, and Bin Han. "Real-Time Motor Fault Diagnosis Based on TCN and Attention." Machines 10, no. 4 (2022): 249. http://dx.doi.org/10.3390/machines10040249.

Abstract:
Motor failure can result in damage to resources and property. Real-time motor fault diagnosis technology can detect and diagnose faults in time to prevent the serious consequences caused by continued operation of a faulty machine. Neural network models can diagnose faults easily and accurately from vibration signals; however, they cannot detect faults promptly. In this study, a deep learning model based on a temporal convolutional network (TCN) and attention is proposed for real-time motor fault diagnosis. The TCN can extract features from shorter vibration signal sequences, allowing the system to detect and diagnose faults faster. In addition, attention gives the model higher diagnostic accuracy. The experiments demonstrate that the proposed model is able to detect faults promptly when they occur and has excellent diagnostic accuracy.
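The TCN mentioned above is, as TCNs generally are, built from causal dilated 1-D convolutions, which make the output at each time step depend only on past samples. A minimal NumPy sketch of that core operation (the filter weights and dilation are illustrative and independent of the authors' architecture):

```python
import numpy as np

def causal_dilated_conv1d(x, weights, dilation=1):
    """Core TCN operation: causal dilated 1-D convolution. The output at time t
    depends only on x[t], x[t-d], x[t-2d], ... (no future samples), so a fault
    can be flagged as soon as enough of the vibration signal has arrived."""
    k = len(weights)
    pad = (k - 1) * dilation                      # left-pad so output is causal
    xp = np.concatenate([np.zeros(pad), x])
    return np.array([
        sum(weights[j] * xp[t + pad - j * dilation] for j in range(k))
        for t in range(len(x))
    ])

# Moving-sum filter with dilation 2: y[t] = x[t] + x[t-2] + x[t-4]
x = np.arange(8, dtype=float)
print(causal_dilated_conv1d(x, [1.0, 1.0, 1.0], dilation=2))
```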
15

Bogusz, Piotr, Mariusz Korkosz, Adam Powrózek, Jan Prokop, and Piotr Wygonik. "Research of influence of open-winding faults on properties of brushless permanent magnets motor." Open Physics 15, no. 1 (2017): 959–64. http://dx.doi.org/10.1515/phys-2017-0118.

Abstract:
The paper presents an analysis of the influence of selected fault states on the properties of a brushless DC motor with permanent magnets. The subject of the study was a BLDC motor designed by the authors for an unmanned aerial vehicle hybrid drive. Four parallel branches per phase were provided in the discussed 3-phase motor. After an open-winding fault in one or a few parallel branches, operation of the motor can continue. Waveforms of currents, voltages, and electromagnetic torque were determined in the discussed fault states based on the developed mathematical and simulation models. Laboratory test results concerning the influence of open-winding faults in parallel branches on the properties of the BLDC motor are presented.
16

Johnston, M. J. S., A. T. Linde, and D. C. Agnew. "Continuous borehole strain in the San Andreas fault zone before, during, and after the 28 June 1992, MW 7.3 Landers, California, earthquake." Bulletin of the Seismological Society of America 84, no. 3 (1994): 799–805. http://dx.doi.org/10.1785/bssa0840030799.

Full text
Abstract:
High-precision strain was observed with a borehole dilational strainmeter in the Devil's Punchbowl during the 11:58 UT 28 June 1992 MW 7.3 Landers earthquake and the large Big Bear aftershock (MW 6.3). The strainmeter is installed at a depth of 176 m in the fault zone, approximately midway between the surface traces of the San Andreas and Punchbowl faults, and about 100 km from the 85-km-long Landers rupture. We have questioned whether unusual amplified strains indicating precursive slip or high fault compliance occurred on the faults ruptured by the Landers earthquake, or in the San Andreas fault zone before and during the earthquake; whether static offsets for both the Landers and Big Bear earthquakes agree with expectations from geodetic and seismologic models of the ruptures and with observations from a nearby two-color geodimeter network; and whether postseismic behavior indicated continued slip on the Landers rupture or local triggered slip on the San Andreas. We show that the strain observed during the earthquake at this instrument shows no apparent amplification effects. There are no indications of precursive strain in these strain data due to either local slip on the San Andreas or precursive slip on the eventual Landers rupture. The observations are generally consistent with models of the earthquake in which fault geometry and slip have the same form as that determined by either inversion of the seismic data or inversion of geodetically determined ground displacements produced by the earthquake. Finally, there are some indications of minor postseismic behavior, particularly during the month following the earthquake.
APA, Harvard, Vancouver, ISO, and other styles
17

Reitman, Nadine G., Richard W. Briggs, William D. Barnhart, et al. "Rapid Surface Rupture Mapping from Satellite Data: The 2023 Kahramanmaraş, Turkey (Türkiye), Earthquake Sequence." Seismic Record 3, no. 4 (2023): 289–98. http://dx.doi.org/10.1785/0320230029.

Full text
Abstract:
The 6 February 2023 Kahramanmaraş, Turkey (Türkiye), earthquake sequence produced > 500 km of surface rupture primarily on the left-lateral East Anatolian (~345 km) and Çardak (~175 km) faults. Constraining the length and magnitude of surface displacement on the causative faults is critical for loss estimates, recovery efforts, rapid identification of impacted infrastructure, and fault displacement hazard analysis. To support these efforts, we rapidly mapped the surface rupture from satellite data with support from remote sensing and field teams, and released the results to the public in near-real time. Detailed surface rupture mapping commenced on 7 February and continued as high-resolution (< 1.0 m/pixel) optical images from WorldView satellites (2023 Maxar) became available. We interpreted the initial simplified rupture trace from subpixel offset fields derived from Advanced Land Observation Satellite-2 and Sentinel-1A synthetic aperture radar image pairs available on 8 and 10 February, respectively. The mapping was released publicly on 10 February, with frequent updates, and published in final form four months postearthquake (Reitman, Briggs, et al., 2023). This publicly available, rapid mapping helped guide fieldwork and constrained U.S. Geological Survey finite-fault and loss estimate models, as well as stress change estimates and dynamic rupture models.
APA, Harvard, Vancouver, ISO, and other styles
18

Ootes, Luke, William J. Davis, Valerie A. Jackson, and Otto van Breemen. "Chronostratigraphy of the Hottah terrane and Great Bear magmatic zone of Wopmay Orogen, Canada, and exploration of a terrane translation model." Canadian Journal of Earth Sciences 52, no. 12 (2015): 1062–92. http://dx.doi.org/10.1139/cjes-2015-0026.

Full text
Abstract:
The Paleoproterozoic Hottah terrane is the westernmost exposed bedrock of the Canadian Shield and a critical component for understanding the evolution of the Wopmay Orogen. Thirteen new high-precision U–Pb zircon crystallization ages are presented and support field observations of a volcano-plutonic continuum from Hottah terrane through to the end of the Great Bear magmatism, from >1950 to 1850 Ma. The new crystallization ages, new geochemical data, and newly published detrital zircon U–Pb data are used to challenge hitherto accepted models for the evolution of the Hottah terrane as an exotic arc and microcontinent that arrived over a west-dipping subduction zone and collided with the Slave craton at ca. 1.88 Ga. Although the Hottah terrane does have a tectonic history that is distinct from that of the neighbouring Slave craton, it shares a temporal history with a number of domains to the south and east — domains that were tied to the Slave craton by ca. 1.97 Ga. It is interpreted herein that Hottah terrane began to the south of its current position and evolved in an active margin over an always east-dipping subduction system that began prior to ca. 2.0 Ga and continued to ca. 1.85 Ga, and underwent tectonic switching and migration. The stratigraphy of the ca. 1913–1900 Ma Hottah plutonic complex and Bell Island Bay Group includes a subaerial rifting arc sequence, followed by basinal opening represented by marginal marine quartz arenite and overlying ca. 1893 Ma pillowed basalt flows and lesser rhyodacites. We interpret this stratigraphy to record Hottah terrane rifting off its parental arc crust — in essence the birth of the new Hottah terrane. This model is similar to rapidly rifting arcs in active margins — for example, modern Baja California. These rifts generally occur at the transition between subduction zones (e.g., Cocos–Rivera plates) and transtensional shear zones (e.g., San Andreas fault), and we suggest that extension-driven transtensional shearing, or, more simply, terrane translation, was responsible for the evolution of Bell Island Bay Group stratigraphy and that it transported this newly born Hottah terrane laterally (northward in modern coordinates), arriving adjacent to the Slave craton at ca. 1.88 Ga. Renewed east-dipping subduction led to the Great Bear arc flare-up at ca. 1876 Ma, continuing to ca. 1869 Ma. This was followed by voluminous Great Bear plutonism until ca. 1855 Ma. The model implies that it was the westerly Nahanni terrane and its subducting oceanic crust that collided with this active margin, shutting down the >120 million year old, east-dipping subduction system.
APA, Harvard, Vancouver, ISO, and other styles
19

McGregor, Ian S., and Nathan W. Onderdonk. "Late Pleistocene rates of rock uplift and faulting at the boundary between the southern Coast Ranges and the western Transverse Ranges in California from reconstruction and luminescence dating of the Orcutt Formation." Geosphere 17, no. 3 (2021): 932–56. http://dx.doi.org/10.1130/ges02274.1.

Full text
Abstract:
The western Transverse Ranges and southern Coast Ranges of California are lithologically similar but have very different styles and rates of Quaternary deformation. The western Transverse Ranges are deformed by west-trending folds and reverse faults with fast rates of Quaternary fault slip (1–11 mm/yr) and uplift (1–7 mm/yr). The southern Coast Ranges, however, are primarily deformed by northwest-trending folds and right-lateral strike-slip faults with much slower slip rates (3 mm/yr or less) and uplift rates (<1 mm/yr). Faults and folds at the boundary between these two structural domains exhibit geometric and kinematic characteristics of both domains, but little is known about the rate of Quaternary deformation along the boundary. We used a late Pleistocene sedimentary deposit, the Orcutt Formation, as a marker to characterize deformation within the boundary zone over the past 120 k.y. The Orcutt Formation is a fluvial deposit in the Santa Maria Basin that formed during regional planation by a broad fluvial system that graded into a shoreline platform at the coast. We used post-infrared–infrared-stimulated luminescence (pIR-IRSL) dating to determine that the Orcutt Formation was deposited between 119 ± 8 and 85 ± 6 ka, coincident with oxygen isotope stages 5e-a paleo–sea-level highstands and regional depositional events. The deformed Orcutt basal surface closely follows the present-day topography of the Santa Maria Basin and is folded by northwest-trending anticlines that are a combination of fault-propagation and fault-bend-folding controlled by deeper thrust faults. Reconstructions of the Orcutt basal surface and forward modeling of balanced cross sections across the study area allowed us to measure rock uplift rates and fault slip rates. Rock uplift rates at the crests of two major anticlinoria are 0.9–4.9 mm/yr, and the dip-slip rate along the blind fault system that underlies these folds is 5.6–6.7 mm/yr. These rates are similar to those reported from the Ventura area to the southeast and indicate that the relatively high rates of deformation in the western Transverse Ranges are also present along the northern boundary zone. The deformation style and rates are consistent with models that attribute shortening across the Santa Maria Basin to accommodation of clockwise rotation of the western Transverse Ranges and suggest that rotation has continued into late Quaternary time.
APA, Harvard, Vancouver, ISO, and other styles
20

Mhamdi, Lotfi, Lobna Belkacem, Hedi Dhouibi, and Zineb Simeu Abazi. "Using Hybrid Automata for Diagnosis of Hybrid Dynamical Systems." International Journal of Electrical and Computer Engineering (IJECE) 5, no. 6 (2015): 1396. http://dx.doi.org/10.11591/ijece.v5i6.pp1396-1406.

Full text
Abstract:
Physical systems can fail. For this reason, the problem of identifying and reacting to faults has received large attention in the control and computer science communities. In this paper we study the fault diagnosis problem and the modeling of Hybrid Dynamical Systems (HDS). Generally speaking, an HDS is a system mixing continuous and discrete behaviors that cannot be faithfully modeled by a formalism with only continuous dynamics, nor by a formalism with only discrete dynamics. We use the well-known framework of hybrid automata for modeling hybrid systems, because they combine the continuous and discrete parts in the same structure. A hybrid automaton is a state-transition graph whose dynamic evolution is represented by alternating discrete and continuous steps: continuous evolution happens in the vertices of the automaton, while discrete evolution is realized by crossing the transitions (arcs) of the graph. Their simulation presents many problems, mainly the synchronisation between the two models. Stateflow, used to describe the discrete model, is coordinated with Matlab, used to describe the continuous model. This article is a description of a case study, which is a two-tank system.
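The abstract describes a hybrid automaton as discrete modes carrying their own continuous dynamics, linked by guarded transitions. A minimal sketch of that structure, loosely inspired by the tank case study but with invented levels, rates, and thresholds, might look like:

```python
# Hybrid automaton sketch: two discrete modes ("fill", "drain"), each with its
# own continuous dynamics, and guard conditions that trigger the discrete jumps.
# All levels, rates, and thresholds are invented for illustration only.
DT = 0.01

def step(mode, level):
    # continuous evolution inside the current mode (explicit Euler step)
    if mode == "fill":
        level += 1.0 * DT          # inflow dominates
    else:
        level -= 0.8 * DT          # outflow dominates
    # discrete transitions fire when a guard is crossed
    if mode == "fill" and level >= 1.0:
        mode = "drain"
    elif mode == "drain" and level <= 0.2:
        mode = "fill"
    return mode, level

mode, level = "fill", 0.5
trace = []
for _ in range(2000):
    mode, level = step(mode, level)
    trace.append(level)

# the level stays inside the invariant band enforced by the guards
print(min(trace), max(trace))
```

A fault could then be modeled as an extra mode (e.g. a stuck valve) whose dynamics violate this invariant, which is what a diagnoser would watch for.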
APA, Harvard, Vancouver, ISO, and other styles
21

Wang, Kang, Douglas S. Dreger, Elisa Tinti, Roland Bürgmann, and Taka’aki Taira. "Rupture Process of the 2019 Ridgecrest, California Mw 6.4 Foreshock and Mw 7.1 Earthquake Constrained by Seismic and Geodetic Data." Bulletin of the Seismological Society of America 110, no. 4 (2020): 1603–26. http://dx.doi.org/10.1785/0120200108.

Full text
Abstract:
The 2019 Ridgecrest earthquake sequence culminated in the largest seismic event in California since the 1999 Mw 7.1 Hector Mine earthquake. Here, we combine geodetic and seismic data to study the rupture process of both the 4 July Mw 6.4 foreshock and the 6 July Mw 7.1 mainshock. The results show that the Mw 6.4 foreshock rupture started on a northwest-striking right-lateral fault, and then continued on a southwest-striking fault with mainly left-lateral slip. Although most moment release during the Mw 6.4 foreshock was along the southwest-striking fault, slip on the northwest-striking fault seems to have played a more important role in triggering the Mw 7.1 mainshock that happened ∼34 hr later. Rupture of the Mw 7.1 mainshock was characterized by dominantly right-lateral slip on a series of overall northwest-striking fault strands, including the one that had already been activated during the nucleation of the Mw 6.4 foreshock. The maximum slip of the 2019 Ridgecrest earthquake was ∼5 m, located at a depth range of 3–8 km near the Mw 7.1 epicenter, corresponding to a shallow slip deficit of ∼20%–30%. Both the foreshock and mainshock had a relatively low-rupture velocity of ∼2 km/s, which is possibly related to the geometric complexity and immaturity of the eastern California shear zone faults. The 2019 Ridgecrest earthquake produced significant stress perturbations on nearby fault networks, especially along the Garlock fault segment immediately southwest of the 2019 Ridgecrest rupture, in which the Coulomb stress increase was up to ∼0.5 MPa. Despite the good coverage of both geodetic and seismic observations, published coseismic slip models of the 2019 Ridgecrest earthquake sequence show large variations, which highlight the uncertainty of routinely performed earthquake rupture inversions and their interpretation for underlying rupture processes.
APA, Harvard, Vancouver, ISO, and other styles
22

Syed, Rizwan, Markus Ulbricht, Krzysztof Piotrowski, and Milos Krstic. "A Survey on Fault-Tolerant Methodologies for Deep Neural Networks." Pomiary Automatyka Robotyka 27, no. 2 (2023): 89–98. http://dx.doi.org/10.14313/par_248/89.

Full text
Abstract:
A significant rise in Artificial Intelligence (AI) has impacted many applications around us, so much so that AI has now been increasingly used in safety-critical applications. AI at the edge is the reality, which means performing the data computation closer to the source of the data, as opposed to performing it on the cloud. Safety-critical applications have strict reliability requirements; therefore, it is essential that AI models running on the edge (i.e., hardware) fulfill the required safety standards. In the vast field of AI, Deep Neural Networks (DNNs) are the focal point of this survey, as they have continued to produce extraordinary outcomes in various applications, e.g., medical, automotive, aerospace, and defense. Traditional reliability techniques for DNN implementation are not always practical, as they fail to exploit the unique characteristics of the DNNs. Furthermore, it is also essential to understand the targeted edge hardware, because the impact of faults can be different in ASICs and FPGAs. Therefore, in this survey, we first examine the impact of faults in ASICs and FPGAs, and then seek to provide a glimpse of the recent progress made towards fault-tolerant DNNs. We discuss several factors that can impact the reliability of DNNs, and extend this discussion to shed light on many state-of-the-art fault mitigation techniques for DNNs.
APA, Harvard, Vancouver, ISO, and other styles
23

Gao, Pubo, Sihai Zhao, and Yi Zheng. "Failure Prediction of Coal Mine Equipment Braking System Based on Digital Twin Models." Processes 12, no. 4 (2024): 837. http://dx.doi.org/10.3390/pr12040837.

Full text
Abstract:
The primary function of a mine hoist is the transportation of personnel and equipment, serving as a crucial link between underground and surface systems. The proper functioning of key components such as work braking and safety braking is essential for ensuring the safety of both personnel and equipment, thereby playing a critical role in the safe operation of coal mines. As coal mining operations extend to greater depths, they introduce heightened challenges for safe transportation, compounded by increased equipment loss. Consequently, there is a pressing need to enhance safety protocols to safeguard personnel and materials. Traditional maintenance and repair methods, characterized by routine equipment inspections and scheduled downtime, often fall short in addressing emerging issues promptly, leading to production delays and heightened risks for maintenance personnel. This underscores the necessity of adopting predictive maintenance strategies, leveraging digital twin models to anticipate and prevent potential faults in mine hoists. In summary, the implementation of predictive maintenance techniques grounded in digital twin technology represents a proactive and scientifically rigorous approach to ensuring the continued safe operation of mine hoists amidst the evolving challenges of deepening coal mining operations. In this study, we propose the integration of a CNN-LSTM algorithm within a digital twin framework for predicting faults in mine hoist braking systems. Utilizing software such as AMESim 2019 and MATLAB 2016b, we conduct joint simulations of the hoist braking digital twin system. Subsequently, leveraging the simulation model, we establish a fault diagnosis platform for the hoist braking system. Finally, employing the CNN-LSTM network model, we forecast failures in the mine hoist braking system. Experimental findings demonstrate the effectiveness of our proposed algorithm, achieving a prediction accuracy of 95.35%. Comparative analysis against alternative algorithms confirms the superior performance of our approach.
APA, Harvard, Vancouver, ISO, and other styles
24

Wang, Chong, Xinxing Chen, Xin Qiang, Haoran Fan, and Shaohua Li. "Recent advances in mechanism/data-driven fault diagnosis of complex engineering systems with uncertainties." AIMS Mathematics 9, no. 11 (2024): 29736–72. http://dx.doi.org/10.3934/math.20241441.

Full text
Abstract:
The relentless advancement of modern technology has given rise to increasingly intricate and sophisticated engineering systems, which in turn demand more reliable and intelligent fault diagnosis methods. This paper presents a comprehensive review of fault diagnosis in uncertain environments, focusing on innovative strategies for intelligent fault diagnosis. To this end, conventional fault diagnosis methods are first reviewed, including advances in mechanism-driven, data-driven, and hybrid-driven diagnostic models and their strengths, limitations, and applicability across various scenarios. Subsequently, we provide a thorough exploration of multi-source uncertainty in fault diagnosis, addressing its generation, quantification, and implications for diagnostic processes. Then, intelligent strategies for all stages of fault diagnosis starting from signal acquisition are highlighted, especially in the context of complex engineering systems. Finally, we conclude with insights and perspectives on future directions in the field, emphasizing the need for the continued evolution of intelligent diagnostic systems to meet the challenges posed by modern engineering complexities.
APA, Harvard, Vancouver, ISO, and other styles
25

Robson, Alexander, Rosalind King, and Simon Holford. "Normal fault growth and segment linkage in a gravitationally detached delta system: evidence from 3D seismic reflection data from the Ceduna Sub-basin, Great Australian Bight." APPEA Journal 55, no. 2 (2015): 467. http://dx.doi.org/10.1071/aj14102.

Full text
Abstract:
The authors used three-dimensional (3D) seismic reflection data from the central Ceduna Sub-Basin, Australia, to establish the structural evolution of a linked normal fault assemblage at the extensional top of a gravitationally driven delta system. The fault assemblage presented is decoupled at the base of a marine mud of late Albian age. Strike-linkage has created a northwest–southeast oriented assemblage of normal fault segments, and dip-linkage through Santonian strata connects a post-Santonian normal fault system to a Cenomanian-Santonian listric fault system. Cenomanian-Santonian fault growth is on the kilometre scale and builds an underlying structural grain that defines the geometry of the post-Santonian fault system. A fault plane dip-angle model has been created and established through simple depth conversion. This converts throw into fault plane dip-slip displacement, incorporating the increasing heave of a listric fault and its decreasing dip-angle with depth. The analysis constrains fault growth into six evolutionary stages: early Cenomanian nucleation and radial growth of isolated fault segments; linkage of fault segments by the latest Cenomanian; latest Santonian cessation of fault growth; erosion and heavy incision during the continental break-up of Australia and Antarctica (c. 83 Ma); vertically independent nucleation of the post-Santonian fault segments, with rapid length establishment before significant displacement accumulation; and continued displacement into the Cenozoic. The structural evolution of this fault system is compatible with both the isolated fault model and the segmented coherent fault model, indicating that these fault growth models need not be mutually exclusive in the growth of normal fault assemblages.
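The throw-to-displacement conversion the authors describe is basic fault-plane trigonometry: dip-slip displacement equals throw divided by the sine of the dip angle, so the shallowing dip of a listric fault at depth implies a larger displacement for the same throw. A small illustrative calculation (values invented, not from the Ceduna data):

```python
import math

def dip_slip(throw_m, dip_deg):
    """Dip-slip displacement on a fault plane from vertical throw and dip angle."""
    return throw_m / math.sin(math.radians(dip_deg))

# On a listric fault the dip shallows with depth, so the same 100 m of throw
# implies progressively larger dip-slip displacement (values illustrative only).
for dip in (60, 45, 30):
    print(dip, round(dip_slip(100.0, dip), 1))
```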
APA, Harvard, Vancouver, ISO, and other styles
26

Mikolaichuk, A. V., F. K. Apayarov, D. V. Gordeev, and A. N. Esmintsev. "Correlation of geological complexes of the Khan-Tengri Mountain Massif in Border Regions of the Kyrgyz, Kazakh, and Chinese Tianshan." Geology and Environment 3, no. 4 (2023): 7–36. http://dx.doi.org/10.26516/2541-9641.2023.4.7.

Full text
Abstract:
Researchers of the border regions in Kyrgyzstan, Kazakhstan, and China discuss two alternative models for the relationships between the main structural units of the Tianshan mountains. Most researchers believe that the Kyrgyz Middle Tianshan wedges out to the east along the Atbashi-Inylchek-Nalati marginal fault. According to another hypothesis, the Middle Tianshan structures continue within the Nalati Range, where they are described as the Chinese Central Tianshan. Comparing the characteristics of the Paleozoic and Proterozoic sedimentary, volcanogenic, intrusive, and metamorphic formations of these regions leads us to the conclusion that the structural units of the Kyrgyz Middle and most of the Northern Tianshan, including the superimposed Middle-Late Paleozoic troughs, are not continued into China, but are successively cut along an echelon system of conjugated strike-slip faults, united by us into the Frontal Tianshan Dextral Strike-slip (FTDS). Only the northern segment of the Issyk-Kul terrane can be considered an analogue of the Chinese Central Tianshan, displaced along the FTDS to the northwest over a distance of more than 80 km. Therefore, adjacent geological complexes are eroded along the FTDS, similarly to the oblique boundaries of convergent lithospheric plates affected by tectonic erosion.
APA, Harvard, Vancouver, ISO, and other styles
27

Zhang, Shuming, and Hong Zhou. "Transformer Fault Diagnosis Based on Multi-Strategy Enhanced Dung Beetle Algorithm and Optimized SVM." Energies 17, no. 24 (2024): 6296. https://doi.org/10.3390/en17246296.

Full text
Abstract:
Accurate fault diagnosis of transformers is crucial for preventing power system failures and ensuring the continued reliability of electrical grids. To address the challenge of low accuracy in transformer fault diagnosis using support vector machines (SVMs), an enhanced fault diagnosis model is proposed, which utilizes an improved dung beetle optimization algorithm (IDBO) to optimize an SVM. First, based on dissolved gas analysis (DGA), five characteristic quantities are selected as input features. Second, improvements to the DBO algorithm are made by incorporating Chebyshev chaotic mapping, a golden sine strategy, and dynamic weight coefficients for position updates. The performance of the IDBO is validated using four benchmark test functions, demonstrating faster convergence. Subsequently, the IDBO optimizes the SVM’s penalty factor C and kernel function parameter g, which are then input into the SVM for training, yielding an efficient fault diagnosis model. Finally, comparisons with other methods confirm the usefulness of the proposed model. Experimental results demonstrate that the IDBO–SVM model attains accuracy improvements of 1.69%, 8.47%, and 10.17% over dung beetle optimization–SVM (DBO–SVM), sparrow search algorithm–SVM (SSA–SVM), and grey wolf optimization–SVM (GWO–SVM) models, respectively. In addition to higher accuracy, the IDBO–SVM model also delivers a faster runtime, further highlighting its superior performance in transformer fault diagnosis. The proposed model has practical significance for enhancing the stability of transformer operation.
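Of the three DBO improvements named in the abstract, the Chebyshev chaotic mapping is simple to sketch: it seeds the population with iterates of x_{k+1} = cos(a·arccos(x_k)), which spread over [-1, 1] more evenly than a single fixed seed, before scaling into the SVM hyperparameter ranges. The map order, population size, and C/g ranges below are assumptions for illustration, not the paper's settings:

```python
import math

def chebyshev_population(n, dim, a=4, x0=0.31):
    """Initialize an n x dim population on [-1, 1] with the Chebyshev chaotic map."""
    pop, x = [], x0
    for _ in range(n):
        row = []
        for _ in range(dim):
            x = math.cos(a * math.acos(x))   # iterate stays in [-1, 1]
            row.append(x)
        pop.append(row)
    return pop

# Map each chaotic value into an SVM hyperparameter range, e.g. C in [0.1, 100]
# and g in [1e-3, 10] (ranges are illustrative, not from the paper).
def scale(x, lo, hi):
    return lo + (x + 1) / 2 * (hi - lo)

pop = chebyshev_population(n=20, dim=2)
candidates = [(scale(c, 0.1, 100.0), scale(g, 1e-3, 10.0)) for c, g in pop]
```

Each (C, g) pair would then be scored by cross-validated SVM accuracy, with the golden-sine and dynamic-weight position updates steering subsequent iterations.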
APA, Harvard, Vancouver, ISO, and other styles
28

Anjan Kumar Dash. "Distributed Training Frameworks for Large Language Models: Architectures, Challenges, and Innovations." Journal of Computer Science and Technology Studies 7, no. 5 (2025): 109–18. https://doi.org/10.32996/jcsts.2025.7.5.15.

Full text
Abstract:
The exponential growth of large language models has necessitated the development of sophisticated distributed training frameworks to efficiently manage computational resources, model complexity, and parallelization strategies. This article presents a comprehensive analysis of distributed training architectures for large language models, examining their technical foundations, implementation challenges, and recent innovations. Beginning with a detailed exploration of core parallelization strategies—data parallelism, model parallelism, and pipeline parallelism—the article evaluates how each approach addresses fundamental constraints in training massive neural networks. It then examines leading frameworks, including Megatron-LM, DeepSpeed, and Alpa, highlighting their unique approaches to memory optimization, parallelization automation, and computational efficiency. The article further investigates persistent challenges in distributed training, including communication overhead, memory management limitations, and fault tolerance requirements. Finally, it explores emerging trends in heterogeneous computing and energy efficiency that promise to shape the future development of distributed training systems. Throughout, the article emphasizes how these frameworks and techniques collectively enable the continued scaling of language models while managing the associated computational demands.
APA, Harvard, Vancouver, ISO, and other styles
29

Poria, Pralay. "Justifiable Logistic Network Management: A Production Model for Optimizing Energy Use, Cost, and Carbon Radiations." International Journal of Science and Social Science Research 1, no. 3 (2023): 16–26. https://doi.org/10.5281/zenodo.13509910.

Full text
Abstract:
Automating the manufacturing process is a top priority for many businesses today because it allows them to increase output while maintaining quality, which is essential for quickly responding to client demands. This pattern has resulted in a progressive shift in technology, with the inevitable consequence of a rise in energy demand. In order to prevent the need for increased energy consumption for improved manufacturing technology in industrialized countries, academics have begun working on continual development in conjunction with cleaner-energy regulations. In addition to nuclear weapons, global warming caused by human-produced greenhouse gases is another major problem in our society today. So as to make up for the energy need and lower the carbon balance for cleaner manufacturing, renewable energies like insolation have expanded rapidly in recent years. This paper discusses the Logistic Network management of the automotive industry with its suppliers in order to maximize production while simultaneously minimizing costs, reducing carbon exhalation and making the most of renewable energy sources. This analysis considers a scenario where providers monitor and control faulty products as an outsourced service. The suggested mathematical model considers sustainable suppliers and is solved using a loaded goal programming approach. The model's responsiveness to changes in energy use is evaluated across a range of scenarios. Documentation of successful down-to-earth use in the automotive industry includes minimum production costs and carbon emissions. Considering the manufacturer and suppliers, the results verify the model's potential to provide a foundation for sustainability in the logistics network environment.
APA, Harvard, Vancouver, ISO, and other styles
30

Poria, Pralay. "Justifiable Logistic Network Management: A Production Model for Optimizing Energy Use, Cost, and Carbon Radiations." International Journal of Science and Social Science Research 1, no. 3 (2023): 16–26. https://doi.org/10.5281/zenodo.13509910.

Full text
Abstract:
&mdash;<strong> </strong>Automating the manufacturing process is a top priority for many businesses today because it allows them to increase output while maintaining quality, which is essential for quickly responding to client demands. This pattern has resulted in a progressive shift in technology, with the inevitable consequence of a rise in energy demand. In order to prevent the need for increased energy consumption for improved manufacturing technology in industrialized countries, academics have begun working on continual development in conjunction with cleaner-energy regulations. In addition to nuclear weapons, global warming caused by human-produced greenhouse gases is another major problem in our society today. So as to make up for the energy need and lower the carbon balance for cleaner manufacturing, renewable energies like insolation have expanded rapidly in recent years. This paper discusses the Logistic Network management of the automotive industry with its suppliers in order to maximize production while simultaneously minimizing costs, reducing carbon exhalation and making the most of renewable energy sources. This analysis considers a scenario where providers monitor and control faulty products as an outsourced service. The suggested mathematical model considers sustainable suppliers and is solved using a loaded goal programming approach. The model's responsiveness to changes in energy use is evaluated across a range of scenarios. Documentation of successful&nbsp; down-to-earth use in the automotive industry includes minimum production costs and carbon emissions. Considering the manufacturer and suppliers, the results verify the model's potential to provide a foundation for sustainability in the logistics network environment.
APA, Harvard, Vancouver, ISO, and other styles
31

Poria, Pralay. "Justifiable Logistic Network Management: A Production Model for Optimizing Energy Use, Cost, and Carbon Radiations." International Journal of Science and Social Science Research 1, no. 3 (2023): 16–26. https://doi.org/10.5281/zenodo.13509910.

Full text
Abstract:
&mdash;<strong> </strong>Automating the manufacturing process is a top priority for many businesses today because it allows them to increase output while maintaining quality, which is essential for quickly responding to client demands. This pattern has resulted in a progressive shift in technology, with the inevitable consequence of a rise in energy demand. In order to prevent the need for increased energy consumption for improved manufacturing technology in industrialized countries, academics have begun working on continual development in conjunction with cleaner-energy regulations. In addition to nuclear weapons, global warming caused by human-produced greenhouse gases is another major problem in our society today. So as to make up for the energy need and lower the carbon balance for cleaner manufacturing, renewable energies like insolation have expanded rapidly in recent years. This paper discusses the Logistic Network management of the automotive industry with its suppliers in order to maximize production while simultaneously minimizing costs, reducing carbon exhalation and making the most of renewable energy sources. This analysis considers a scenario where providers monitor and control faulty products as an outsourced service. The suggested mathematical model considers sustainable suppliers and is solved using a loaded goal programming approach. The model's responsiveness to changes in energy use is evaluated across a range of scenarios. Documentation of successful&nbsp; down-to-earth use in the automotive industry includes minimum production costs and carbon emissions. Considering the manufacturer and suppliers, the results verify the model's potential to provide a foundation for sustainability in the logistics network environment.
APA, Harvard, Vancouver, ISO, and other styles
33

Veeranna, Kotagi. "Adaptive and Predictive Testing Frameworks Using Chaos Engineering and Deep Learning for Enterprise QA." Recent Trends in Data Knowledge Discovery and Data Mining 1, no. 1 (2025): 16–22. https://doi.org/10.5281/zenodo.15187073.

Full text
Abstract:
As enterprise software systems grow in complexity and scale, traditional quality assurance (QA) methods fall short in predicting failures and ensuring system resilience. This research proposes an innovative QA paradigm that integrates chaos engineering with deep learning to create adaptive and predictive testing frameworks. Chaos engineering introduces controlled disruptions to identify system weaknesses under real-world stress, while deep learning models analyze test outcomes to anticipate failure points and adapt testing strategies dynamically. The proposed framework supports continuous testing across microservices, cloud-native environments, and large-scale enterprise platforms. Experimental validation on simulated and real enterprise workloads demonstrates enhanced fault detection rates, reduced mean time to recovery, and intelligent test case prioritization. The results affirm that integrating chaos experimentation with AI-based learning models leads to more robust, self-improving QA processes capable of coping with the volatility and dynamism of modern software ecosystems.
APA, Harvard, Vancouver, ISO, and other styles
34

HYSA, Ferit, and Alison TAYSUM. "Using A Blueprint for Character Development for Evolution (ABCDE) to Build Relationships Through Talk to Mobilise Attachment Theory to Develop Children’s Working Mental Models for Good Choices that Regulate Continued Good Lives." Polis 21, no. 2 (2022): 12–33. http://dx.doi.org/10.58944/ndkb1142.

Full text
Abstract:
This study is a Ground Work Case in Albania which aims to reveal how adults talking with children can build relationships between the adult and the children to support the children’s appropriate development through the four phases of Bowlby’s attachment theory. If trauma is experienced which is not the fault of the child or the preferred caregiver, the child can become stuck and unable to develop the mental models required to become self-determining and live a good life with the conditions for homeostasis (continued life) with good faculty of judgement. This has implications for adults who may have experienced trauma, through no fault of their own, who have not passed through the phases of attachment theory, and are expected to support children through the phases of attachment theory with no working mental model of what that looks like. A groundwork case was conducted in a city in Albania with kindergarten staff and revealed that i) the curriculum of the kindergarten staff’s qualification did include attachment theory, and ii) kindergarten staff were unaware of attachment theory. Findings reveal the Covid-19 pandemic has caused trauma that is preventing children from passing through the phases of attachment theory, leading to poor working mental models and poor mental health. A Blueprint for Character Development for Evolution (ABCDE) is presented as an incremental model to enable staff, students and parents to evaluate progress through the phases of attachment theory and the movement from fear to the good faculty of judgement required for self-determining homeostasis.
APA, Harvard, Vancouver, ISO, and other styles
35

Mullet, B., P. Segall, and A. H. Fávero Neto. "Numerical modeling of caldera formation using Smoothed Particle Hydrodynamics (SPH)." Geophysical Journal International 234, no. 2 (2023): 887–902. http://dx.doi.org/10.1093/gji/ggad084.

Full text
Abstract:
SUMMARY Calderas are kilometer-scale basins formed when magma is rapidly removed from shallow magma storage zones. Despite extensive previous research, many questions remain about how host rock material properties influence the development of caldera structures. We employ a mesh-free, continuum numerical method, Smoothed Particle Hydrodynamics (SPH) to study caldera formation, with a focus on the role of host rock material properties. SPH provides several advantages over previous numerical approaches (finite element or discrete element methods), naturally accommodating strain localization and large deformations while employing well-known constitutive models. A continuum elastoplastic constitutive model with a simple Drucker–Prager yield condition can explain many observations from analogue sandbox models of caldera development. For this loading configuration, shear band orientation is primarily controlled by the angle of dilation. Evolving shear band orientation, as commonly observed in analogue experiments, requires a constitutive model where frictional strength and dilatancy decrease with strain, approaching a state of zero volumetric strain rate. This constitutive model also explains recorded loads on the down-going trapdoor in analogue experiments. Our results, combined with theoretical scaling arguments, raise questions about the use of analogue models to study caldera formation. Finally, we apply the model to the 2018 caldera collapse at Kīlauea volcano and conclude that the host rock at Kīlauea must exhibit relatively low dilatancy to explain the inferred near-vertical ring faults.
APA, Harvard, Vancouver, ISO, and other styles
36

Dooley, Tim P., and Michael R. Hudec. "Extension and inversion of salt-bearing rift systems." Solid Earth 11, no. 4 (2020): 1187–204. http://dx.doi.org/10.5194/se-11-1187-2020.

Full text
Abstract:
Abstract. We used physical models to investigate the structural evolution of segmented extensional rifts containing syn-rift evaporites and their subsequent inversion. An early stage of extension generated structural topography consisting of a series of en-échelon graben. Our salt analog filled these graben and the surroundings before continued extension and, finally, inversion. During post-salt extension, deformation in the subsalt section remained focused on the graben-bounding fault systems, whereas deformation in suprasalt sediments was mostly detached, forming a sigmoidal extensional minibasin system across the original segmented graben array. Little brittle deformation was observed in the post-salt section. Sedimentary loading from the minibasins drove salt up onto the footwalls of the subsalt faults, forming diapirs and salt-ridge networks on the intra-rift high blocks. Salt remobilization and expulsion from beneath the extensional minibasins was enhanced along and up the major relay or transfer zones that separated the original sub-salt grabens, forming major diapirs in these locations. Inversion of this salt-bearing rift system produced strongly decoupled shortening belts in basement and suprasalt sequences. Suprasalt deformation geometries and orientations are strongly controlled by the salt diapir and ridge network produced during extension and subsequent downbuilding. Thrusts are typically localized at minibasin margins where the overburden was thinnest, and salt had risen diapirically on the horst blocks. In the subsalt section, shortening strongly inverted sub-salt grabens, which uplifted the suprasalt minibasins. New pop-up structures also formed in the subsalt section. Primary welds formed as suprasalt minibasins touched down onto inverted graben. Model geometries compare favorably to natural examples such as those in the Moroccan High Atlas.
APA, Harvard, Vancouver, ISO, and other styles
37

Ryan, Georgina, George Bernardel, John Kennard, Andrew T. Jones, Graham Logan, and Nadege Rollet. "A pre-cursor extensive Miocene reef system to the Rowley Shoals reefs, Western Australia: evidence for structural control of reef growth or natural hydrocarbon seepage?" APPEA Journal 49, no. 1 (2009): 337. http://dx.doi.org/10.1071/aj08021.

Full text
Abstract:
Numerous Miocene reefs and related carbonate build-ups have been identified in the Rowley Shoals region of the central North West Shelf, offshore Western Australia. The reefs form part of an extensive Miocene reef tract over 1,600 km long, which extended northward into the Browse and Bonaparte basins and southward to North West Cape in the Carnarvon Basin—comparable in length to the modern Great Barrier Reef. Growth of the vast majority of these Miocene reefs failed to keep pace with relative sea-level changes in the latest Miocene, whereas reef growth continued on the central North West Shelf to form the three present-day atolls of the Rowley Shoals: Mermaid, Clerke and Imperieuse reefs. In the Rowley Shoals region, scattered small build-ups and local reef complexes were first established in the Early Miocene, but these build-ups were subsequently terminated at a major Mid Miocene sequence boundary. Widespread buildups and atoll reefs were re-established in the Middle Miocene, and the internal stacking geometries of the reefs appear to relate to distinct growth phases that are correlated with eustatic sea-level fluctuations. These geometries include: a basal aggradational buildup of early Middle Miocene age; a strongly progradational growth phase in the late Middle to early Late Miocene that constructed large reef atolls with infilling lagoon deposits; and a back-stepped aggradational growth phase that formed smaller reef caps in the early–latest Late Miocene. Growth of the majority of the reefs ceased at a major sea-level fall in the Late Miocene (Messinian), and only the reefs of the present-day Rowley Shoals (Mermaid, Clerke and Imperieuse reefs, as well as a drowned shoal to the southwest of Imperieuse Reef) continued to grow after this event. Growth of the Rowley Shoals reefs continued to keep pace with Pliocene-Recent sea-level changes, whereas the surrounding shelf subsided to depths of 230–440 m. 
We conclude that initial reef growth in the Rowley Shoals region was controlled by transpressional reactivation and structuring of the Mermaid Fault Zone during the early stage of collision between the Australian and Eurasian plates. During this structural reactivation, seabed fault scarps and topographic highs likely provided ideal sites for the initiation of reef growth. The subsequent growth and selective demise of the reefs was controlled by the interplay of eustatic sea-level variations and differential subsidence resulting from continued structural reactivation of the Mermaid Fault Zone. In contrast to models proposed in other regions, there is no direct evidence that active or palaeo hydrocarbon seepage triggered or controlled growth of the Rowley Shoals reefs or their buried Miocene predecessors.
APA, Harvard, Vancouver, ISO, and other styles
38

Duboeuf, Laure, Anna Maria Dichiarante, and Volker Oye. "Interplay of large-scale tectonic deformation and local fluid injection investigated through seismicity patterns at the Reykjanes Geothermal Field, Iceland." Geophysical Journal International 228, no. 3 (2021): 1866–86. http://dx.doi.org/10.1093/gji/ggab423.

Full text
Abstract:
SUMMARY Occurrence of seismicity sequences as a consequence of fluid injection or extraction has long been studied and documented. Causal relations between injection parameters, such as injection pressure, injection rates, total injected volumes and injectivity, and seismicity-derived parameters, such as seismicity rate, cumulative seismic moment, distance of seismicity (RT-plot), b-values, etc., have been derived. In addition, reservoir engineering parameters such as permeability/porosity relations and flow types play a role together with geological knowledge on fault and fracture properties, influenced by the stress field on different scales. In this paper, we study observed seismicity related to water injection at the Reykjanes Geothermal Field, Iceland. The region near the injection well did not experience seismicity before the start of injection. However, we observed continued seismic activity during the 3 months of injection in 2015, resulting in a cloud of about 700 events ranging in magnitude from Mw 0.7 to 3.3. We re-located these events using a modified double-difference algorithm and determined focal mechanisms of event subsets. Characteristic of the site is that the events are bound to about 4 km distance to the injection point, and moreover known faults seem to act as barriers to fluids and seismicity. Several repeating sequences of seismicity, defined as bursts of seismicity, have hypocenter migration velocities larger than 4 km per day and their dominant direction of propagation is away from the injection point towards larger depths. The seismic events within the bursts lack larger magnitude events, have elevated b-values (∼1.5) and consist of many multiplets. Apart from the coinciding onset of seismicity with the start of fluid injection, no correlation between injection rates and volumes could be identified, neither could hydraulic diffusivity models explain observed seismicity patterns. 
Comparison of our results with investigations on background seismicity from 1995 to 2019 and from a seismic swarm in 1972 revealed similar focal mechanism patterns and burst-like seismicity patterns. We finally present a conceptual model where we propose that the observed seismicity patterns represent a stress release mechanism in the area close to the injection well, controlled by an interplay of local pore pressure and stress field changes with continued extensional stress build up at the Reykjanes Ridge.
APA, Harvard, Vancouver, ISO, and other styles
39

Shankar, R., and D. Sridhar. "A Comprehensive Review on Test Case Prioritization in Continuous Integration Platforms." International Journal of Innovative Science and Research Technology 8, no. 4 (2023): 3223–29. https://doi.org/10.5281/zenodo.8282823.

Full text
Abstract:
Continuous Integration (CI) platforms enable recurrent integration of software changes, making software development rapid and cost-effective. In these platforms, Test Case Prioritization (TCP) plays an essential role in integration and regression testing by determining a test case order that enhances specific objectives such as early failure discovery. Currently, Artificial Intelligence (AI) models have emerged widely to solve complex software testing problems like integration and regression testing, which create a huge quantity of data from iterative code commits and test executions. In CI testing scenarios, AI models comprising machine and deep learning predictors can be trained using large test data to predict test cases and speed up the discovery of regression faults during code integration. However, these models attain varying efficiency depending on the context and factors of CI testing, such as time cost or the size of the test execution history used to prioritize failing test cases. Earlier research on TCP using AI models often does not consider these variables, which are crucial for CI testing. In this article, a comprehensive review of the different TCP models using deep-learning algorithms, including Reinforcement Learning (RL), is presented with attention to the software testing field. Also, the merits and demerits of those models for TCP in CI testing are examined to comprehend the challenges of TCP in CI testing. According to the observed challenges, possible solutions are given to enhance the accuracy and stability of deep learning models in CI testing for TCP.
APA, Harvard, Vancouver, ISO, and other styles
40

Pandey, Arjun, and R. Jayangondaperumal. "Is the 1697 Sadiya Earthquake Large or Great?: A Critical Appraisal through on-Fault Paleoseismology." Journal Of The Geological Society Of India 101, no. 6 (2025): 804–8. https://doi.org/10.17491/jgsi/2025/174164.

Full text
Abstract:
ABSTRACT Geological signatures of an earthquake and their coherence with the historical chronicles have always been controversial in paleoseismology. These inferences characterize past earthquake parameters, such as fault location, magnitude, and recurrence interval, which are essential for developing a robust dataset to validate models and assess seismic hazards. Here, we discuss the surface rupture imprints of a ~17th-century earthquake in the eastern Himalaya at Himebasti village of Arunachal Pradesh, known as “the 1697 CE Sadiya earthquake” in the historical chronicles. The trench exposures at the Himebasti site entail deformation along a northeast-dipping (~9°NE–11°NE) basal thrust fault with a dip-slip displacement of 15.3 ± 4.6 m. Whether this ~15.2 m slip is solely coseismic, includes post-seismic relaxation, or relates to the ramp-flat geometry or a simple bulldozing effect along the upper flat remains open. The twenty-one radiocarbon dates of the trench exposures limit the timing of displacement to after 1445 CE. The earthquake caused massive destruction in the old town of “Sadiya” and triggered a series of aftershocks that continued for about six months. The minimum magnitude of the 1697 CE Sadiya earthquake has been assigned as Mw 7.7. However, it needs further validation using its surface rupture length along the eastern Himalayan arc to determine whether it was a great or a large event.
APA, Harvard, Vancouver, ISO, and other styles
41

Rosado, Belén, Vanessa Jiménez, Alejandro Pérez-Peña, et al. "GNSS-Based Models of Displacement, Stress, and Strain in the SHETPENANT Region: Impact of Geodynamic Activity from the ORCA Submarine Volcano." Remote Sensing 17, no. 14 (2025): 2370. https://doi.org/10.3390/rs17142370.

Full text
Abstract:
The South Shetland Islands and Antarctic Peninsula (SHETPENANT region) constitute a geodynamically active area shaped by the interaction of major tectonic plates and active magmatic systems. This study analyzes GNSS time series spanning from 2017 to 2024 to investigate surface deformation associated with the 2020–2021 seismic swarm near the Orca submarine volcano. Horizontal and vertical displacement velocities were estimated for the preseismic, coseismic, and postseismic phases using the CATS method. Results reveal significant coseismic displacements exceeding 20 mm in the horizontal components near Orca, associated with rapid magmatic pressure release and dike intrusion. Postseismic velocities indicate continued, though slower, deformation attributed to crustal relaxation. Stations located near the Orca exhibit nonlinear, transient behavior, whereas more distant stations display stable, linear trends, highlighting the spatial heterogeneity of crustal deformation. Stress and strain fields derived from the velocity models identify zones of extensional dilatation in the central Bransfield Basin and localized compression near magmatic intrusions. Maximum strain rates during the coseismic phase exceeded 200 νstrain/year, supporting a scenario of crustal thinning and fault reactivation. These patterns align with the known structural framework of the region. The integration of GNSS-based displacement and strain modeling proves essential for resolving active volcano-tectonic interactions. The findings enhance our understanding of back-arc deformation processes in polar regions and support the development of more effective geohazard monitoring strategies.
APA, Harvard, Vancouver, ISO, and other styles
42

Baird, Graham B. "Late Ottawan orogenic collapse of the Adirondacks in the Grenville province of New York State (USA): Integrated petrologic, geochronologic, and structural analysis of the Diana Complex in the southern Carthage-Colton mylonite zone." Geosphere 16, no. 3 (2020): 844–74. http://dx.doi.org/10.1130/ges02155.1.

Full text
Abstract:
Abstract Crustal-scale shear zones can be highly important but complicated orogenic structures; therefore, they must be studied in detail along their entire length. The Carthage-Colton mylonite zone (CCMZ) is one such shear zone in the northwestern Adirondacks of northern New York State (USA), part of the Mesoproterozoic Grenville province. The southern CCMZ is contained within the Diana Complex, and geochemistry and U-Pb zircon geochronology demonstrate that the Diana Complex is expansive and collectively crystallized at 1164.3 ± 6.2 Ma. Major ductile structures within the CCMZ and Diana Complex include a northwest-dipping penetrative regional mylonitic foliation with a north-trending lineation that bisects a conjugate set of mesoscale ductile shear zones. These ductile structures formed during the same 1060–1050 Ma event, in which pure shear transitioned to top-to-the-SSE shearing at ∼700 °C. Other important structures include a ductile fault and breccia zones. The ductile fault formed immediately following the major ductile structures, while the breccia zones may have formed at ca. 945 Ma in greenschist-facies conditions. Two models can explain the studied structures and other regional observations. Model 1 postulates that the CCMZ is an Ottawan orogeny (1090–1035 Ma) thrust, which was later reactivated locally as a tectonic collapse structure. Model 2, the preferred model, postulates that the CCMZ initially formed as a subhorizontal mid-crustal mylonite zone during collapse of the Ottawan orogen. With continued collapse, a metamorphic core complex formed and the CCMZ was rotated into its current orientation and overprinted with other structures.
APA, Harvard, Vancouver, ISO, and other styles
43

Karthi, K., and A. Ramkumar. "ANFIS-Fuzzy Logic-based Hybrid DFIG and PMSG Grid Connected System with TCSC." International Journal of Electrical Engineering and Computer Science 6 (January 30, 2024): 51–63. http://dx.doi.org/10.37394/232027.2024.6.6.

Full text
Abstract:
Variable-speed wind turbines might provide green electricity. Grid operators’ grid regulations require wind turbines to recover from grid disruptions and help maintain electricity networks. Equipping wind turbines with fault current limiters (FCLs) may ensure their continued functioning in the event of a power loss. In this piece, we will talk about how to improve the two most common types of variable-speed wind turbines: the Doubly Fed Induction Generator (DFIG) and the Permanent Magnet Synchronous Generator (PMSG). Both wind generators were evaluated using the Thyristor Controlled Series Compensator (TCSC) with ANFIS and Fuzzy Logic. It is important to understand the dynamic behavior of wind turbines; hence, models of their FCLs were built for steady state and grid disruptions. During power interruptions, the FCLs in both wind turbines were switched using grid voltage variation. Both wind turbines also underwent a no-control FCL scenario. The FCLs of both wind turbines were measured and compared under load from a severe three-phase-to-ground fault at their terminals. Both wind turbines were operated under similar circumstances to examine FCL control tactics during power interruptions.
APA, Harvard, Vancouver, ISO, and other styles
44

Cohen, Wendy E., Richard D. Marshall, Allison C. Yacker, and Lance A. Zinman. "SEC sues asset managers for using untested, error-filled quantitative investment models." Journal of Investment Compliance 20, no. 1 (2019): 44–46. http://dx.doi.org/10.1108/joic-01-2019-0004.

Full text
Abstract:
Purpose To explain actions the US Securities and Exchange Commission (SEC) brought on August 27, 2018, against a group of affiliated investment advisers and broker-dealers for what the SEC considered misleading and insufficient representations and disclosures, insufficient compliance policies and procedures, and insufficient research and oversight concerning the use of faulty quantitative models to manage certain client accounts. Design/methodology/approach Explains the SEC’s findings concerning the advisers’ and broker-dealers’ failure to confirm that certain models worked as intended, to disclose the risks associated with the use of those models, to disclose the role of a research analyst in developing the models, to disclose the use of volatility overlays along with the associated risks, to determine whether a fund’s holdings were sufficient to support a consistent dividend payout without a return of capital, and to take sufficient steps to confirm the advertised performance of another investment manager whose products they were marketing. Provides insight into the SEC’s position and offers key takeaways. Findings These cases are significant for advisers who use quantitative models to implement their investment strategies in the management of client accounts and signal the SEC’s continued focus on investment advisers’ compliance with disclosure obligations to discretionary account investors. Practical implications Each manager should consider its own facts and circumstances, and should consult with counsel, in assessing how and to what extent to incorporate the SEC’s conclusions in crafting disclosure and other communications with investors on matters such as adequate representations, testing and validation of models, disclosure of errors, and verifying performance claims. Originality/value Practical guidance from experienced securities lawyers.
APA, Harvard, Vancouver, ISO, and other styles
45

Buys, Jonas, Vincenzo De Florio, and Chris Blondia. "Optimization of WS-BPEL Workflows through Business Process Re-Engineering Patterns." International Journal of Adaptive, Resilient and Autonomic Systems 1, no. 3 (2010): 25–41. http://dx.doi.org/10.4018/jaras.2010070102.

Full text
Abstract:
With the advent of XML-based SOA, WS-BPEL swiftly became a widely accepted standard for modeling business processes. Although SOA is said to embrace the principle of business agility, BPEL process definitions are still manually crafted into their final executable version. While SOA has proven to be a giant leap forward in building flexible IT systems, this static BPEL workflow model should be enhanced to better sustain continual process evolution. In this paper, the authors discuss the potential for adding business intelligence with respect to business process re-engineering patterns to the system to allow for automatic business process optimization. Furthermore, the paper examines how these re-engineering patterns may be implemented, leveraging techniques that were applied successfully in computer science. Several practical examples illustrate the benefit of such adaptive process models. These preliminary findings indicate that techniques like the re-sequencing and parallelization of instructions, further optimized by introspection, as well as techniques for achieving software fault tolerance, are particularly valuable for optimizing business processes. Finally, the authors elaborate on the design of people-oriented business processes using common human-centric re-engineering patterns.
APA, Harvard, Vancouver, ISO, and other styles
46

Ahn, Joongho, Eojin Yi, and Moonsoo Kim. "Blockchain Consensus Mechanisms: A Bibliometric Analysis (2014–2024) Using VOSviewer and R Bibliometrix." Information 15, no. 10 (2024): 644. http://dx.doi.org/10.3390/info15100644.

Full text
Abstract:
Blockchain consensus mechanisms play a critical role in ensuring the security, decentralization, and integrity of distributed networks. As blockchain technology expands beyond cryptocurrencies into broader applications such as supply chain management and healthcare, the importance of efficient and scalable consensus algorithms has grown significantly. This study provides a comprehensive bibliometric analysis of blockchain and consensus mechanism research from 2014 to 2024, using tools such as VOSviewer and R’s Bibliometrix package. The analysis traces the evolution from foundational mechanisms like Proof of Work (PoW) to more advanced models such as Proof of Stake (PoS) and Byzantine Fault Tolerance (BFT), with particular emphasis on Ethereum’s “The Merge” in 2022, which marked the historic shift from PoW to PoS. Key findings highlight emerging themes, including scalability, security, and the integration of blockchain with state-of-the-art technologies like artificial intelligence (AI), the Internet of Things (IoT), and energy trading. The study also identifies influential authors, institutions, and countries, emphasizing the collaborative and interdisciplinary nature of blockchain research. Through thematic analysis, this review uncovers the challenges and opportunities in decentralized systems, underscoring the need for continued innovation in consensus mechanisms to address efficiency, sustainability, scalability, and privacy concerns. These insights offer a valuable foundation for future research aimed at advancing blockchain technology across various industries.
APA, Harvard, Vancouver, ISO, and other styles
47

Cascio, Michele, Ioannis Deretzis, Giuseppe Fisicaro, Giuseppe Falci, Giovanni Mannino, and Antonino Magna. "Tailoring Active Defect Centers During the Growth of Group IV Crystals†." Proceedings 12, no. 1 (2019): 32. http://dx.doi.org/10.3390/proceedings2019012032.

Full text
Abstract:
Defects, e.g., Vacancies (Vs) and Defect-impurity centers, e.g., Nitrogen-Vacancy complexes (NVs), in group IV materials (diamond, SiC, graphene) are unique systems for Quantum Technologies (QT). The control of their positioning is a key issue for any realistic QT application and their tailored inclusion during controlled crystal-growth processes could overcome the limitations of other incorporation methods (e.g., ion implantation causing strong lattice damage). To date, the atomistic evolution regarding the growth of group IV crystals is barely known and this missing knowledge often results in a lack of process control in terms of mesoscopic crystal quality, mainly concerning the eventual generation of local or extended defects and their space distribution. We have developed Kinetic Monte Carlo models to study the growth kinetics of materials characterized by sp3 bonding symmetries with an atomic-level accuracy. The models can be also coupled to the continuum simulation of the gas-phase status generated in the equipment to estimate the deposition rate and reproduce a variety of growth techniques (e.g., Chemical and Physical Vapour deposition, sublimation, etc.). Evolution is characterized by nucleation and growth of ideal or defective structures and their balance depends critically on process-related parameters. Quantitative predictions of the process evolution can be obtained and readily compared with the structural characterization of the processed samples. In particular, we can describe the surface state of the crystal and the defect generation/evolution (for both point and extended defects, e.g., stacking faults) as a function of the initial substrate conditions and the process parameters (e.g., temperature, pressure, gas flow).
APA, Harvard, Vancouver, ISO, and other styles
48

Bray, Matthew, Jacob Utley, Yanuri Ning, et al. "Multidisciplinary analysis of hydraulic stimulation and production effects within the Niobrara and Codell reservoirs, Wattenberg Field, Colorado — Part 2: Analysis of hydraulic fracturing and production." Interpretation 9, no. 4 (2021): SG13–SG29. http://dx.doi.org/10.1190/int-2020-0153.1.

Full text
Abstract:
Enhanced hydrocarbon recovery is essential for continued economic development of unconventional reservoirs. We have focused on dynamic characterization of the Niobrara and Codell Formations in Wattenberg Field through the development and analysis of a fully integrated reservoir model. We determine the effectiveness of the hydraulic fracturing and production with two seismic monitor surveys, surface microseismic, completion data, and production data. The two monitor surveys were recorded after stimulation and again after two years of production. Identification of reservoir deformation due to hydraulic fracturing and production improves reservoir models by mapping nonstimulated and nonproducing zones. Monitoring these time-variant changes improves the prediction capability of reservoir models, which in turn leads to improved well and stage placement. We quantify dynamic reservoir changes with time-lapse P-wave seismic data using prestack inversion and velocity-independent layer stripping for velocity and attenuation changes within the Niobrara and Codell reservoirs. A 3D geomechanical model and production data are history matched, and a simulation is run for two years of production. Results are integrated with time-lapse seismic data to illustrate the effects of hydraulic fracturing and production. Our analyses illustrate that chalk facies have significantly higher hydraulic fracture efficiency and production performance than marl facies. In addition, structural and hydraulic complexity associated with faults generates spatial variability in a well’s total production.
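The basic arithmetic behind detecting a time-lapse velocity change in a target interval can be sketched as follows (a minimal illustration of the "time strain" idea, not the authors' prestack inversion or layer-stripping workflow; the travel times used are hypothetical):

```python
def fractional_velocity_change(t_base, t_monitor):
    """First-order time-lapse 'time strain' for a layer of fixed
    thickness d: since v = d / t, a change in the interval two-way
    travel time maps to dv/v ~ -(t_monitor - t_base) / t_base.
    Negative output means the interval has slowed between surveys."""
    return -(t_monitor - t_base) / t_base

# A reservoir interval whose traversal time grows from 100 ms to
# 102 ms between baseline and monitor has slowed by about 2 percent.
print(fractional_velocity_change(0.100, 0.102))
```

In practice this per-interval estimate only becomes meaningful after the overburden response has been stripped away, which is what motivates the velocity-independent layer stripping mentioned in the abstract.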
APA, Harvard, Vancouver, ISO, and other styles
49

ÇAKMAK, Mustafa Doğukan, and Burcu GEDİZ ORAL. "The Success of Public Private Partnerships with Transparency and Accountability Principles." Sosyolojik Bağlam Dergisi 4, no. 3 (2023): 289–311. http://dx.doi.org/10.52108/2757-5942.4.3.5.

Full text
Abstract:
Harsh debates on the restructuring of public administration and the failures of states have continued for decades. Public Private Partnership (PPP), as an application of the "New Public Management" approach, assigns a different function to the state in order to resolve some of these debates. Although PPP can be briefly defined as the provision of public services by the private sector, the complex relationship structure and the risks behind this definition can easily make the model unsuccessful. In pursuit of the best model, international organizations and mechanisms, including the OECD, the UN, and the European Commission, have attempted to distill models, procedures, and laws from successful examples. Some studies, learning from faulty designs and experiences, have focused on what should not be done. As a result, critical success factors appear to be the most appropriate tool for improving the PPP method. This study focuses on providing supporting information for countries trying to design and develop a PPP model. It then examines performance-measured PPP applications around the world in the context of transparency and accountability, and reveals the significance of the principles of transparency and accountability for the success of the PPP model.
APA, Harvard, Vancouver, ISO, and other styles
50

Kothimbire, D. K., D. S. Shelke, S. V. Gaikwad, A. P. Yelpale, and R. N. Shinde. "A Comprehensive Review of Graph Theory Applications in Network Analysis." International Journal of Mathematics and Computer Research 13, no. 03 (2025): 4956–67. https://doi.org/10.5281/zenodo.15064224.

Full text
Abstract:
This review paper investigates the extensive role of graph theory as a unifying framework for network analysis across diverse domains. The study begins by outlining fundamental concepts, such as adjacency matrices, centrality measures, and community detection algorithms, which together enable systematic exploration of network topologies. Next, it examines pivotal applications, illustrating how graph-based techniques facilitate tasks like influencer detection in social media, energy-efficient routing in communication networks, and large-scale protein-interaction modeling in bioinformatics. Methodologically, the paper consolidates theoretical foundations with real-world case studies, highlighting both classical graph models (e.g., Erdős–Rényi, Watts–Strogatz, Barabási–Albert) and advanced solutions (e.g., graph neural networks and quantum walks) that address emerging challenges of dynamic, multilayered, and high-dimensional data. The key findings demonstrate that graph theory consistently delivers actionable insights: enhancing traffic management in transportation, bolstering fault tolerance in critical infrastructures, and supporting cutting-edge cybersecurity anomaly detection. Moreover, the exploration of hypergraphs and quantum computing signals promising avenues for further research. In practical terms, the ability to handle massive datasets in near-real-time has positioned graph analysis as an essential tool for academia, industry, and public policy. Overall, this study underscores the versatility of graph theory and points to new interdisciplinary opportunities, emphasizing the need for continued innovation in handling computational complexity, data privacy, and dynamic network evolution.
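Two of the building blocks named in this abstract, the adjacency matrix with a centrality measure and the Erdős–Rényi random-graph model, can be sketched in a few lines of plain Python (an illustrative sketch, not code from the reviewed paper):

```python
import random

def degree_centrality(adj):
    """Degree centrality from an adjacency matrix: the fraction of
    the other n-1 nodes each node is directly connected to."""
    n = len(adj)
    return [sum(row) / (n - 1) for row in adj]

def erdos_renyi(n, p, seed=0):
    """Erdős–Rényi G(n, p) random graph as a symmetric adjacency
    matrix: each of the n*(n-1)/2 possible edges exists with
    probability p, independently of the others."""
    rng = random.Random(seed)
    adj = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                adj[i][j] = adj[j][i] = 1
    return adj

# A 4-node path graph 0-1-2-3: interior nodes score 2/3, endpoints 1/3.
path = [[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]]
print(degree_centrality(path))
```

Degree centrality is the simplest of the centrality measures the review surveys; betweenness and eigenvector centrality refine the same idea of ranking nodes by structural importance, and the tasks listed above (influencer detection, fault tolerance) build directly on such rankings.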
APA, Harvard, Vancouver, ISO, and other styles