Journal articles on the topic 'Seismic reflection method – Data processing'

Consult the top 50 journal articles for your research on the topic 'Seismic reflection method – Data processing.'


1. Soleimani, Mehrdad, and Iradj Piruz. "Common reflection surface stack, new method in seismic reflection data processing: A synthetic data example." ASEG Extended Abstracts 2007, no. 1 (December 1, 2007): 1–4. http://dx.doi.org/10.1071/aseg2007ab207.

2. French, W. S. "Practical seismic imaging." Exploration Geophysics 20, no. 2 (1989): 11. http://dx.doi.org/10.1071/eg989011.

Abstract:
Data examples clearly show that advances in seismic reflection methods over the past few years provide the interpreter with improved geologic information. The shift over the last ten years from 2-D to 3-D surveys and the shift over the past five years from processing based on surface geometry to processing based on subsurface geometry represent the principal advancements. Despite these advancements, the seismic reflection method is not mature. There exists no unified processing method to produce a 3-D geologic picture in depth directly from the data. Current processing techniques are a conglomeration of surface-referenced methods (most noise suppression techniques), subsurface-referenced methods (DMO, prestack migration), and in-between methods (velocity analysis). Interpreters, processors, and field people must all keep abreast of the technology of our profession in order to improve our final product: greater success in both exploration and production.
3. Steeples, Don W., and Richard D. Miller. "Avoiding pitfalls in shallow seismic reflection surveys." GEOPHYSICS 63, no. 4 (July 1998): 1213–24. http://dx.doi.org/10.1190/1.1444422.

Abstract:
Acquiring shallow reflection data requires the use of high frequencies, preferably accompanied by broad bandwidths. Problems that sometimes arise with this type of seismic information include spatial aliasing of ground roll, erroneous interpretation of processed airwaves and air‐coupled waves as reflected seismic waves, misinterpretation of refractions as reflections on stacked common‐midpoint (CMP) sections, and emergence of processing artifacts. Processing and interpreting near‐surface reflection data correctly often requires more than a simple scaling‐down of the methods used in oil and gas exploration or crustal studies. For example, even under favorable conditions, separating shallow reflections from shallow refractions during processing may prove difficult, if not impossible. Artifacts emanating from inadequate velocity analysis and inaccurate static corrections during processing are at least as troublesome when they emerge on shallow reflection sections as they are on sections typical of petroleum exploration. Consequently, when using shallow seismic reflection, an interpreter must be exceptionally careful not to misinterpret as reflections those many coherent waves that may appear to be reflections but are not. Evaluating the validity of a processed, shallow seismic reflection section therefore requires that the interpreter have access to at least one field record and, ideally, to copies of one or more of the intermediate processing steps to corroborate the interpretation and to monitor for artifacts introduced by digital processing.
4. Dell, Sergius, and Dirk Gajewski. "Common-reflection-surface-based workflow for diffraction imaging." GEOPHYSICS 76, no. 5 (September 2011): S187–S195. http://dx.doi.org/10.1190/geo2010-0229.1.

Abstract:
Imaging of diffractions is a challenge in seismic processing because standard processing is tuned to enhance reflections; separation of diffracted from reflected events is therefore frequently used to achieve an optimized image of diffractions. We present a method to effectively separate and image diffracted events in the time domain. The method is based on common-reflection-surface-based diffraction stacking and the application of a diffraction filter that uses kinematic wavefield attributes determined by the common-reflection-surface approach. After the separation of seismic events, poststack time-migration velocity analysis is applied to obtain migration velocities; the velocity analysis uses a semblance-based scan of diffraction traveltimes. The procedure is incorporated into the conventional common-reflection-surface workflow. Application of the method to simple and complex 2D synthetic data shows promising results.
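The semblance-based velocity scan referred to here can be sketched in a few lines: scan trial velocities along hyperbolic diffraction traveltimes t(x) = sqrt(t0^2 + 4x^2/v^2) and keep the velocity that maximizes semblance. A minimal sketch with illustrative numbers, not the authors' implementation:

```python
import numpy as np

def semblance(section, dt, offsets, t0, v):
    """Semblance of a hyperbolic event t(x) = sqrt(t0^2 + 4 x^2 / v^2)
    across traces; values near 1 indicate a coherent diffraction."""
    t = np.sqrt(t0**2 + 4.0 * offsets**2 / v**2)
    idx = np.round(t / dt).astype(int)
    valid = idx < section.shape[1]
    amps = section[np.arange(section.shape[0])[valid], idx[valid]]
    den = len(amps) * np.sum(amps**2)
    return np.sum(amps) ** 2 / den if den > 0 else 0.0

# synthetic zero-offset section with one diffraction (t0 = 0.4 s, v = 2000 m/s)
dt, nx, nt = 0.002, 41, 500
offsets = np.linspace(-400, 400, nx)          # distance from the apex (m)
data = np.zeros((nx, nt))
t_true = np.sqrt(0.4**2 + 4.0 * offsets**2 / 2000.0**2)
data[np.arange(nx), np.round(t_true / dt).astype(int)] = 1.0

# scan trial velocities; the true one maximizes semblance
vels = np.arange(1000.0, 3100.0, 100.0)
scores = [semblance(data, dt, offsets, 0.4, v) for v in vels]
v_best = vels[int(np.argmax(scores))]
```

In practice the scan runs over a grid of (t0, v) apex candidates rather than a single known apex.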
5. Shiraishi, Kazuya, Gou Fujie, Takeshi Sato, Susumu Abe, Eiichi Asakawa, and Shuichi Kodaira. "Interferometric OBS imaging for wide-angle seismic data." GEOPHYSICS 82, no. 5 (September 1, 2017): Q39–Q51. http://dx.doi.org/10.1190/geo2016-0482.1.

Abstract:
Marine wide-angle seismic data obtained using air guns and ocean-bottom seismographs (OBSs) are effective for determining large-scale subseafloor seismic velocities, but they are ineffective for imaging details of shallow seismic reflection structures because of poor illumination. Surface-related multiple reflections offer the potential to enlarge the OBS data illumination area. We have developed a new seismic imaging method for OBS surveys applying seismic interferometry, a technique that uses surface-related multiples similarly to mirror imaging. Seismic interferometry can use higher order multiple reflections than mirror imaging, which mainly uses first-order multiple reflections. A salient advantage of interferometric OBS imaging over mirror imaging is that it requires only single-component data, whereas mirror imaging requires vertical geophone and hydrophone components to separate upgoing and downgoing wavefields. We applied interferometric OBS imaging to actual 175 km long wide-angle OBS data acquired in the Nankai Trough subduction zone. We obtained clear continuous reflection images in the deep and shallow parts including the seafloor from the OBS data acquired with large spacing. Deconvolution interferometry is more suitable than correlation interferometry to improve spatial resolution because of the effects of spectral division when applied to common receiver gathers. We examined the imaging result dependence on data acquisition and processing parameters considering the data quality and target depth. An air-gun-to-OBS distance of up to 50 km and a record length of 80 s were necessary for better imaging. In addition, our decimation tests confirmed that denser OBS spacing yielded better quality and higher resolution images. Understanding crosstalk effects due to the acquisition setting will be useful to optimize methods for eliminating them. Interferometric OBS imaging merged with conventional primary reflection imaging is a powerful method for revealing crustal structures.
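The distinction between correlation and deconvolution interferometry comes down to one spectral operation: crosscorrelation multiplies by the conjugate spectrum, while deconvolution also divides by the (stabilized) power spectrum, whitening the source. A minimal 1-D sketch, with a synthetic white-noise transient standing in for the air-gun signature:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2048
src = rng.standard_normal(n)        # transient source signature

# two receivers record the same arrival with a 50-sample delay
lag = 50
rec_a = src
rec_b = np.roll(src, lag)

A, B = np.fft.rfft(rec_a), np.fft.rfft(rec_b)

# correlation interferometry: virtual trace at B "as if shot at" A
corr = np.fft.irfft(B * np.conj(A), n)

# deconvolution interferometry: spectral division whitens the source,
# improving resolution (stabilized by a small epsilon)
eps = 1e-3 * np.mean(np.abs(A) ** 2)
decon = np.fft.irfft(B * np.conj(A) / (np.abs(A) ** 2 + eps), n)

lag_corr = int(np.argmax(corr))     # both peak at the interreceiver lag
lag_decon = int(np.argmax(decon))
```

The deconvolved trace has a much sharper peak, which is the resolution advantage the abstract attributes to spectral division.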
6. Wang, Linfei, Zhong Wang, Huaishan Liu, Jin Zhang, Lei Xing, and Yanxin Yin. "Hydrate-Bearing Sediment Imaging of Ghost Reflection in Vertical Cable Seismic Data Using Seismic Interferometry." Geofluids 2022 (September 25, 2022): 1–7. http://dx.doi.org/10.1155/2022/3501755.

Abstract:
Marine vertical cable seismic (VCS) surveys record seismic waves with a hydrophone array suspended vertically in seawater to prospect offshore geological structures and monitor reservoirs. Because of the irregular source-receiver geometry, primary imaging has narrow illumination coverage. Here, we propose a cross-correlation transformation based on ghost-wave interferometry. This method transforms the ghost reflections in the vertical cable seismic profile into virtual surface-seismic primaries, as if they had been excited by the source and recorded by a marine towed streamer just below the sea surface. After processing these virtual primaries with conventional methods, we obtain a high-resolution ghost-reflection image that effectively extends the illumination footprint in the subsurface. By applying this transform, virtual primaries are generated from the first-order ghost reflections of the actual VCS data. Migration of these virtual primaries then provides a high-resolution image of hydrate-bearing sediments.
7. Wang, David Y., and Douglas W. McCowan. "Spherical divergence correction for seismic reflection data using slant stacks." GEOPHYSICS 54, no. 5 (May 1989): 563–69. http://dx.doi.org/10.1190/1.1442683.

Abstract:
We have developed a method for the spherical divergence correction of seismic reflection data based on normal moveout and stacking of cylindrical slant stacks. The method is illustrated on some Gulf of Mexico data. The results show that our method yields essentially the same traveltime information as does conventional processing. Our amplitudes, however, are more interpretable in terms of reflectivity than are those obtained by using an empirical spherical divergence correction.
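For contrast with the slant-stack approach, the empirical spherical divergence correction the authors compare against is commonly a time-and-velocity gain of the form g(t) = t * v_rms(t)^2. A sketch of that conventional correction (not the authors' method), with illustrative velocities:

```python
import numpy as np

def divergence_gain(trace, dt, t_v, v_rms):
    """Empirical spherical-divergence gain g(t) = t * v_rms(t)^2,
    normalized to 1 at the first sample. This is the simple correction
    the slant-stack method is compared against, not its replacement."""
    t = (np.arange(len(trace)) + 1) * dt
    v = np.interp(t, t_v, v_rms)        # interpolate the velocity function
    g = t * v**2
    return trace * g / g[0]

# a decaying synthetic trace with amplitude ~ 1 / (t * v^2)
dt = 0.004
t_v = np.array([0.0, 1.0, 2.0])             # velocity-function knots (s)
v_rms = np.array([1500.0, 2000.0, 2500.0])  # RMS velocities (m/s)
t = (np.arange(500) + 1) * dt
v = np.interp(t, t_v, v_rms)
trace = 1.0 / (t * v**2)

corrected = divergence_gain(trace, dt, t_v, v_rms)
```

Applied to a trace that decays exactly as 1/(t v^2), the gain restores a flat amplitude, which is the idealized behavior the empirical correction assumes.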
8. Wu, Juan, and Min Bai. "Adaptive rank-reduction method for seismic data reconstruction." Journal of Geophysics and Engineering 15, no. 4 (May 16, 2018): 1688–703. http://dx.doi.org/10.1093/jge/aabc74.

Abstract:
Seismic data reconstruction plays an important role in the whole seismic data processing and imaging workflow, especially for data that are acquired in severe field environments and are missing a large portion of the reflection signals. The rank-reduction method is considered to be a very effective method for interpolating data of small curvature, e.g. post-stack data. However, when the data are more complicated, the rank-reduction method may fail to achieve acceptable performance. A useful strategy is to use local windows to process the data so that the data in each local window satisfy the plane-wave assumption of the rank-reduction method. However, the rank in each window requires careful selection. Traditional methods select a global rank for all windows. We have proposed an automatic algorithm to select the rank in each processing window. The energy ratio between two consecutive singular values is chosen as the criterion to define the optimal rank. We apply this strategy to seismic data interpolation and use both synthetic and field data examples to demonstrate its potential in practical applications.
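The rank-selection criterion described above, the energy ratio between two consecutive singular values, can be sketched directly on a local data window. The ratio definition below is an illustrative reading of the criterion, not necessarily the paper's exact formula:

```python
import numpy as np

def adaptive_rank_reduce(window, max_rank=10):
    """Truncated SVD of a local data window, with the rank picked where
    the energy ratio between consecutive singular values peaks (the
    sharp drop marks the signal/noise boundary)."""
    u, s, vt = np.linalg.svd(window, full_matrices=False)
    m = min(max_rank, len(s) - 1)
    ratios = (s[:m] ** 2) / (s[1:m + 1] ** 2 + 1e-12)
    rank = int(np.argmax(ratios)) + 1      # keep components up to the jump
    return u[:, :rank] @ np.diag(s[:rank]) @ vt[:rank], rank

rng = np.random.default_rng(1)
# a rank-2 "signal" window plus weak random noise
signal = (np.outer(rng.standard_normal(40), rng.standard_normal(30))
          + np.outer(rng.standard_normal(40), rng.standard_normal(30)))
noisy = signal + 0.01 * rng.standard_normal((40, 30))

denoised, rank = adaptive_rank_reduce(noisy)
```

Run per local window, this removes the need for the single global rank that traditional methods impose.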
9. Mark, Norman. "Case history: Seismic exploration in Egypt’s Eastern Desert." GEOPHYSICS 57, no. 2 (February 1992): 296–305. http://dx.doi.org/10.1190/1.1443243.

Abstract:
Although oil exploration has been performed in the Eastern Desert of Egypt for over a century, seismic reflection techniques have been in use for less than a fourth of that time. In an effort to improve seismic imaging of geologic targets, many styles of acquisition and processing have been tested, accepted, or discarded. Over the last twenty-four years, seismic data acquisition has evolved from low-channel analog to high-channel digital recordings. The most difficult exploration problems encountered in these efforts have been low-frequency, high-energy ground roll and limited depth of penetration when imaging the oil-producing Pre-Miocene sandy reservoirs below the highly reflective salt and evaporites. Efforts have focused on developing processing procedures that enhance the quality of recently acquired seismic data and on developing new acquisition methods that improve the data through both acquisition and processing. For the older acquisition, the new processing has improved seismic quality (vertical and lateral resolution), but the data still retain a low-frequency character. The newly acquired seismic data, however, show improved reflection continuity, depth of penetration, and resolution. We attribute this result to the change from low fold (6–24), long receiver and source patterns (50 to 222 m) to high fold (96), short receiver and source groups (25 m), and to spectral balancing in the processing. The most recent acquisition and processing have greatly improved the quality of the shallow seismic reflections and of the deeper reflections that have helped unravel the structural and stratigraphic style of the deeper portions of the basin.
10. Miller, Kate C., Steven H. Harder, Donald C. Adams, and Terry O’Donnell. "Integrating high-resolution refraction data into near-surface seismic reflection data processing and interpretation." GEOPHYSICS 63, no. 4 (July 1998): 1339–47. http://dx.doi.org/10.1190/1.1444435.

Abstract:
Shallow seismic reflection surveys commonly suffer from poor data quality in the upper 100 to 150 ms of the stacked seismic record because of shot‐associated noise, surface waves, and direct arrivals that obscure the reflected energy. Nevertheless, insight into lateral changes in shallow structure and stratigraphy can still be obtained from these data by using first‐arrival picks in a refraction analysis to derive a near‐surface velocity model. We have used turning‐ray tomography to model near‐surface velocities from seismic reflection profiles recorded in the Hueco Bolson of West Texas and southern New Mexico. The results of this analysis are interval‐velocity models for the upper 150 to 300 m of the seismic profiles which delineate geologic features that were not interpretable from the stacked records alone. In addition, the interval‐velocity models lead to improved time‐to‐depth conversion; when converted to stacking velocities, they may provide a better estimate of stacking velocities at early traveltimes than other methods.
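The conversion from interval velocities to stacking velocities mentioned at the end is usually done through the Dix relation, v_rms^2(t_n) = (sum_i v_i^2 Δt_i) / (sum_i Δt_i). A sketch with illustrative layer values:

```python
import numpy as np

def interval_to_rms(v_int, dt_layers):
    """Dix-type conversion: RMS (approximately stacking) velocity at the
    base of each layer from interval velocities and two-way layer times."""
    t = np.cumsum(dt_layers)
    return np.sqrt(np.cumsum(v_int**2 * dt_layers) / t)

# three shallow layers from a tomography-style model (illustrative values)
v_int = np.array([800.0, 1500.0, 2200.0])   # interval velocities (m/s)
dt_layers = np.array([0.10, 0.15, 0.20])    # two-way layer times (s)
v_rms = interval_to_rms(v_int, dt_layers)
```

At early traveltimes, where reflection-based velocity analysis is unreliable, this gives the improved stacking-velocity estimate the abstract refers to.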
11. Bakulin, Andrey, Ilya Silvestrov, Maxim Dmitriev, Dmitry Neklyudov, Maxim Protasov, Kirill Gadylshin, and Victor Dolgov. "Nonlinear beamforming for enhancement of 3D prestack land seismic data." GEOPHYSICS 85, no. 3 (May 1, 2020): V283–V296. http://dx.doi.org/10.1190/geo2019-0341.1.

Abstract:
We have developed nonlinear beamforming (NLBF), a method for enhancing modern 3D prestack seismic data acquired onshore with small field arrays or single sensors in which weak reflected signals are buried beneath the strong scattered noise induced by a complex near surface. The method is based on the ideas of multidimensional stacking techniques, such as the common-reflection-surface stack and multifocusing, but it is designed specifically to improve the prestack signal-to-noise ratio of modern 3D land seismic data. Essentially, NLBF searches for coherent local events in the prestack data and then performs beamforming along the estimated surfaces. Comparing different gathers that can be extracted from modern 3D data acquired with orthogonal acquisition geometries, we determine that the cross-spread domain (CSD) is typically the most convenient and efficient. Conventional noise removal applied to modern data from small arrays or single sensors does not adequately reveal the underlying reflection signal. Instead, NLBF supplements these conventional tools and performs final aggregation of weak and still broken reflection signals, where the strength is controlled by the summation aperture. We have developed the details of the NLBF algorithm in CSD and determined the capabilities of the method on real 3D land data with the focus on enhancing reflections and early arrivals. We expect NLBF to help streamline seismic processing of modern high-channel-count and single-sensor data, leading to improved images as well as better prestack data for estimation of reservoir properties.
12. Sloan, Steven D., J. Tyler Schwenk, and Robert H. Stevens. "An example of extreme near-surface variability in shallow seismic reflection data." Interpretation 4, no. 3 (August 1, 2016): SH1–SH9. http://dx.doi.org/10.1190/int-2015-0215.1.

Abstract:
Variability of material properties in the shallow subsurface presents challenges for near-surface geophysical methods and exploration-scale applications. As the depth of investigation decreases, denser sampling is required, especially of the near offsets, to accurately characterize the shallow subsurface. We have developed a field data example using high-resolution shallow seismic reflection data to demonstrate how quickly near-surface properties can change over short distances and the effects on field data and processed sections. The addition of a relatively thin, 20 cm thick, low-velocity layer can lead to masked reflections and an inability to map shallow reflectors. Short receiver intervals, on the order of 10 cm, were necessary to identify the cause of the diminished data quality and would have gone unknown using larger, more conventional station spacing. Combined analysis of first arrivals, surface waves, and reflections aided in determining the effects and extent of a low-velocity layer that inhibited the identification and constructive stacking of the reflection from a shallow water table using normal-moveout-based processing methods. Our results also highlight the benefits of using unprocessed gathers to pragmatically guide processing and interpretation of seismic data.
13. Zhu, Xiaosan, Rui Gao, Qiusheng Li, Ye Guan, Zhanwu Lu, and Haiyan Wang. "Static corrections methods in the processing of deep reflection seismic data." Journal of Earth Science 25, no. 2 (April 2014): 299–308. http://dx.doi.org/10.1007/s12583-014-0422-x.

14. Phạm, Thế Hoàng Hà, Huy Hien Đoàn, Quang Minh Tạ, Thị Lụa Mai, and Hoàng Anh Nguyễn. "Some results of seismic travel-time reflection tomography study." Petrovietnam Journal 10 (November 30, 2021): 4–16. http://dx.doi.org/10.47800/pvj.2021.10-01.

Abstract:
A velocity model is essential for seismic data processing, as it plays an important role in migration as well as in time-depth conversion. Several techniques exist to reach that goal, among which tomographic inversion is an efficient one. As an upgrade of hand-picked velocity analysis, the tomography technique is based on reflection ray tracing and the conjugate gradient method to estimate an optimum velocity model, and it can create a high-quality initial model for more intensive imaging and modelling modules such as reverse-time migration (RTM) and full-waveform inversion (FWI). For these benefits, we developed a seismic travel-time reflection tomography (SeisT) module to study the accuracy of the approach and to build technical capability in seismic processing. The accuracy of the module has been tested with both synthetic and real seismic field data, and the efficiency and accuracy of the resulting models have been demonstrated both in method development and in field-data application.
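The inversion step pairs a linearized forward operator (ray-segment lengths per cell) with a conjugate-gradient solver. A toy sketch of that linear-algebra core, on damped normal equations with illustrative numbers and no actual ray tracing:

```python
import numpy as np

def cg(A, b, n_iter=500, tol=1e-10):
    """Conjugate gradients for a symmetric positive-definite system A x = b."""
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rs = r @ r
    for _ in range(n_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# toy linearized tomography: rows of G are ray-path lengths through cells
rng = np.random.default_rng(2)
n_rays, n_cells = 60, 20
G = rng.random((n_rays, n_cells))              # segment lengths (illustrative)
ds_true = rng.standard_normal(n_cells) * 1e-4  # slowness perturbation (s/m)
dt_obs = G @ ds_true                           # traveltime residuals

# damped normal equations G^T G ds = G^T dt (damping stabilizes the inverse)
lam = 1e-8
ds_est = cg(G.T @ G + lam * np.eye(n_cells), G.T @ dt_obs)
```

A real implementation alternates this linear solve with retracing rays through the updated model until the traveltime misfit stops improving.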
15. Irnaka, Theodosius Marwan, Wahyudi Wahyudi, Eddy Hartantyo, Adien Akhmad Mufaqih, Ade Anggraini, and Wiwit Suryanto. "SEISGAMA: A Free C# Based Seismic Data Processing Software Platform." International Journal of Geophysics 2018 (2018): 1–8. http://dx.doi.org/10.1155/2018/2913591.

Abstract:
Seismic reflection is one of the most popular methods in geophysical prospecting. Nevertheless, obtaining high resolution and accurate results requires a sophisticated processing stage. There are many open-source seismic reflection data processing software programs available; however, they often use a high-level programming language that decreases its overall performance, lacks intuitive user-interfaces, and is limited to a small set of tasks. These shortcomings reveal the need to develop new software using a programming language that is natively supported by Windows® operating systems, which uses a relatively medium-level programming language (such as C#) and can be enhanced by an intuitive user interface. SEISGAMA was designed to address this need and employs a modular concept, where each processing group is combined into one module to ensure continuous and easy development and documentation. SEISGAMA can perform basic seismic reflection processes. This ability is very useful, especially for educational purposes or during a quality control process (in the acquisition stage). Those processes can be easily carried out by users via specific menus on SEISGAMA’s main user interface. SEISGAMA has been tested, and its results have been verified using available theoretical frameworks and by comparison to similar commercial software.
16. Poletto, Flavio, and Biancamaria Farina. "Synthesis and composition of virtual-reflector (VR) signals." GEOPHYSICS 75, no. 4 (July 2010): SA45–SA59. http://dx.doi.org/10.1190/1.3433311.

Abstract:
The virtual-reflector (VR) method creates new seismic signals by processing seismic traces that have been produced by impulsive or transient sources. Under proper recording-coverage conditions, this technique allows a seismogram to be obtained as if there were an ideal reflector at the position of the receivers (or sources). Only the reflected signals from this reflector are synthesized. The algorithm is independent of the medium-velocity model and is based on convolution of the recorded traces and on subsequent integration of the crossconvolved signals in the receiver (or source) space. We use the VR method in combination with seismic interferometry (SI) by crosscorrelation to compose corresponding virtual-reflection events in seismic exploration. For that purpose, we use weighted-summation and data-crossfiltering approaches. In applying these combination methods, we assume common travel paths in the virtual signals, taking into account that VR and SI by crosscorrelation imply different stationary-phase conditions. We present applications in which we combine the SI-by-crosscorrelation and VR signals to (1) suppress unwanted effects, such as marine water-layer reflections in synthetic ocean-bottom-cable data, and (2) obtain virtual two-way traveltime seismograms with real borehole data from walkaway vertical seismic profiling (VSP). Analysis shows that time gating and selection of reflection events are critical steps in processing water-layer multiples.
17. Ma, Xiong, Guofa Li, Hao Li, and Wuyang Yang. "Multichannel absorption compensation with a data-driven structural regularization." GEOPHYSICS 85, no. 1 (November 22, 2019): V71–V80. http://dx.doi.org/10.1190/geo2019-0132.1.

Abstract:
Seismic absorption compensation is an important processing approach to mitigate the attenuation effects caused by the intrinsic inelasticity of subsurface media and to enhance seismic resolution. However, conventional absorption compensation approaches ignore the spatial connection along seismic traces, which makes the compensation result vulnerable to high-frequency noise amplification, thus reducing the signal-to-noise ratio (S/N) of the result. To alleviate this issue, we have developed a structurally constrained multichannel absorption compensation (SC-MAC) algorithm. In the cost function of this algorithm, we exploit an l1 norm to constrain the reflectivity series and an l2 norm to regularize the reflection structural characteristic of the compensation data. The reflection structural characteristic operator, extracted from the observed stacked seismic data, is the core of the structural regularization term. We then solve the cost function of SC-MAC by the alternating direction method of multipliers. Benefiting from the introduction of the reflection structure constraint, SC-MAC improves the stability of the compensation result and inhibits the amplification of high-frequency noise. Synthetic and field data examples demonstrate that our proposed method is more robust to random noise and can not only improve the resolution of seismic data but also maintain the S/N of the compensated seismic data.
18. Weibull, Wiktor Waldemar, and Børge Arntsen. "Reverse-time demigration using the extended-imaging condition." GEOPHYSICS 79, no. 3 (May 1, 2014): WA97–WA105. http://dx.doi.org/10.1190/geo2013-0232.1.

Abstract:
The forward and inverse process of seismic migration and demigration or remodeling has many useful applications in seismic data processing. We evaluated a method to reobtain the seismic reflection data after migration, by inverting the common image point gathers produced by reverse-time migration (RTM) with an extended-imaging condition. This provided a transformation of the results of seismic data processing in the image domain back to the data domain. To be able to reconstruct the data with high fidelity, we set up demigration as a least-squares inverse problem and we solved it iteratively using a steepest-descent method. Because we used an extended-imaging condition, the method is not dependent on an accurate estimate of the migration-velocity field, and it is able to accurately reconstruct both primaries and multiples. At the same time, because the method is based on RTM, it can accurately handle seismic reflection data acquired over complex geologic media. Numerical results showed the feasibility of the method and highlighted some of its applications on 2D synthetic and field data sets.
19. Xiao, Liying, Zhifu Zhang, and Jianjun Gao. "Ground Roll Attenuation of Multicomponent Seismic Data with the Noise-Assisted Multivariate Empirical Mode Decomposition (NA-MEMD) Method." Applied Sciences 12, no. 5 (February 25, 2022): 2429. http://dx.doi.org/10.3390/app12052429.

Abstract:
Multicomponent seismic exploration provides more wavefield information for imaging complex subsurface structures and predicting reservoirs. Ground roll is strongly coherent noise in land multicomponent seismic data and exhibits the same features in each component: strong energy, low frequency, low velocity, and dispersion. Ground roll attenuation is an important step in seismic data processing. In this study, we apply multivariate empirical mode decomposition to multicomponent seismic data to attenuate ground roll. By adding extra components containing independent white noise, noise-assisted multivariate empirical mode decomposition overcomes the mode-mixing effect of standard empirical mode decomposition (EMD). This provides a more robust analysis than standard EMD performed separately on each component. The multicomponent data are decomposed into intrinsic mode functions on different frequency scales. Because seismic reflections and ground roll occupy different frequency scales, the low-frequency intrinsic mode functions are eliminated to suppress ground roll, and the remaining functions are recombined to reconstruct the reflection wavefield. Synthetic and field data tests show that the proposed approach performs better than the traditional attenuation method.
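The decomposition-and-reconstruction idea can be illustrated with a bare-bones univariate EMD: sift out fine-scale intrinsic mode functions (IMFs), then keep only the high-frequency ones to suppress the low-frequency ground roll. This sketch uses linear envelopes instead of the usual cubic splines and omits the noise-assisted multivariate machinery entirely:

```python
import numpy as np

def envelope(x, idx):
    # linear interpolation through the extrema (cubic splines are the
    # standard choice; linear keeps the sketch dependency-free)
    return np.interp(np.arange(len(x)), idx, x[idx])

def sift(x, n_sifts=8):
    """Extract one IMF by repeatedly subtracting the mean envelope."""
    h = x.copy()
    for _ in range(n_sifts):
        d = np.diff(h)
        maxima = np.where((np.hstack([d, -1.0]) < 0) & (np.hstack([1.0, d]) > 0))[0]
        minima = np.where((np.hstack([d, 1.0]) > 0) & (np.hstack([-1.0, d]) < 0))[0]
        if len(maxima) < 2 or len(minima) < 2:
            break
        h = h - 0.5 * (envelope(h, maxima) + envelope(h, minima))
    return h

def emd(x, n_imfs=3):
    """Decompose x into IMFs (finest scale first) plus a residue;
    by construction the parts sum back to x."""
    imfs, res = [], x.copy()
    for _ in range(n_imfs):
        imf = sift(res)
        imfs.append(imf)
        res = res - imf
    return imfs, res

# synthetic trace: 60 Hz "reflection" plus strong 6 Hz "ground roll"
t = np.arange(0, 1, 0.002)
trace = np.sin(2 * np.pi * 60 * t) + 2.0 * np.sin(2 * np.pi * 6 * t)
imfs, res = emd(trace)
# ground-roll suppression: keep only the finest-scale IMF(s)
denoised = imfs[0]
```

The completeness property (IMFs plus residue reconstruct the input) is what makes discarding selected modes a clean filtering operation.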
20. Lu, Zilin, Nuan Xia, Liang Sun, Wenxing Xu, Guangcheng Zhang, Haiyue Dou, and Qifeng Jiang. "An Alternative Adaptive Method for Seismic Data Denoising and Interpolation." Mathematical Problems in Engineering 2020 (August 30, 2020): 1–16. http://dx.doi.org/10.1155/2020/6295902.

Abstract:
Seismic data denoising and interpolation are generally essential steps in reflection processing and imaging workflows, especially for complex surface geologic conditions and irregularly sampled acquisition areas. The rank-reduction method is a valid way to attenuate random noise and interpolate data by selecting a suitable threshold, i.e., the rank of the useful signals. However, it is difficult for the traditional rank-reduction method to select an appropriate threshold. In this paper, we propose an adaptive rank-reduction method based on energy entropy to estimate the rank automatically as the threshold for seismic data processing and interpolation. This method incorporates energy entropy into the traditional rank-reduction method. The energy entropy of a signal indicates the energy intensity of a signal component within the total energy. The difference in energy entropy between useful signals and random noise serves as the measurement for selecting the appropriate threshold. Synthetic and field examples indicate that the proposed method attenuates random noise and interpolates data automatically, without manual rank estimation, and demonstrate the feasibility of the new adaptive method for seismic data denoising and interpolation.
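One plausible reading of the energy-entropy criterion: form the energy fractions of the singular values, use their entropy as a concentration measure, and keep the components whose energy fraction exceeds the uniform (noise-like) level. A sketch of that idea, not the paper's exact definition:

```python
import numpy as np

def energy_entropy_rank(s):
    """Pick a rank from singular values s via their energy fractions.
    The entropy of the energy distribution measures how concentrated
    the signal energy is; components whose fraction exceeds the uniform
    level 1/len(s) are treated as signal. Illustrative criterion only."""
    p = s**2 / np.sum(s**2)
    entropy = -np.sum(p * np.log(p + 1e-15))  # low entropy = concentrated
    rank = int(np.sum(p > 1.0 / len(p)))
    return rank, entropy

rng = np.random.default_rng(3)
# a rank-3 signal matrix plus weak noise
u = rng.standard_normal((50, 3))
v = rng.standard_normal((3, 40))
noisy = u @ v + 0.05 * rng.standard_normal((50, 40))

s = np.linalg.svd(noisy, compute_uv=False)
rank, entropy = energy_entropy_rank(s)
```

The appeal over a fixed global rank is that the threshold adapts to each window's own energy distribution.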
21. Ross, Christopher P., and Paul L. Beale. "Seismic offset balancing." GEOPHYSICS 59, no. 1 (January 1994): 93–101. http://dx.doi.org/10.1190/1.1443538.

Abstract:
The ability to successfully predict lithology and fluid content from reflection seismic records using AVO techniques is contingent upon accurate pre-analysis conditioning of the seismic data. However, all too often, residual amplitude effects remain after the many offset-dependent processing steps are completed. Residual amplitude effects often represent a significant error when compared to the amplitude variation with offset (AVO) response that we are attempting to quantify. We propose a model-based, offset-dependent amplitude balancing method that attempts to correct for these residuals and other errors due to suboptimal processing. Seismic offset balancing attempts to quantify the relationship between the offset response of background seismic reflections and corresponding theoretical predictions for average lithologic interfaces thought to cause these background reflections. It is assumed that any deviation from the theoretical response is a result of residual processing phenomena and/or suboptimal processing, and a simple offset-dependent scaling function is designed to correct for these differences. This function can then be applied to seismic data over both prospective and nonprospective zones within an area where the theoretical values are appropriate and the seismic characteristics are consistent. A conservative application of the above procedure results in an AVO response over both gas sands and wet sands that is much closer to theoretically expected values. A case history from the Gulf of Mexico Flexure Trend is presented as an example to demonstrate the offset balancing technique.
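The "simple offset-dependent scaling function" can be sketched as one scalar per offset that ties the observed RMS amplitude of background reflections to the modeled AVO response (illustrative numbers, not the authors' procedure):

```python
import numpy as np

def offset_balance(gather, modeled_rms):
    """Per-offset scalars matching the RMS amplitude of background
    reflections to a modeled AVO response; a sketch of model-based
    offset balancing. gather: (n_offsets, n_samples)."""
    observed_rms = np.sqrt(np.mean(gather**2, axis=1))
    scale = modeled_rms / observed_rms
    return gather * scale[:, None], scale

rng = np.random.default_rng(4)
n_off, n_t = 24, 400
modeled_rms = 1.0 - 0.01 * np.arange(n_off)   # modeled background AVO trend
# observed data: the same trend distorted by a residual offset-dependent gain
residual_gain = 1.0 + 0.3 * rng.random(n_off)
gather = (residual_gain[:, None] * modeled_rms[:, None]
          * rng.standard_normal((n_off, n_t)))

balanced, scale = offset_balance(gather, modeled_rms)
rms_after = np.sqrt(np.mean(balanced**2, axis=1))
```

In practice the scalars would be derived over background (nonprospective) zones only and then applied everywhere, so genuine AVO anomalies are preserved.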
22. Wang, Yanfei, Changchun Yang, and Jingjie Cao. "On Tikhonov Regularization and Compressive Sensing for Seismic Signal Processing." Mathematical Models and Methods in Applied Sciences 22, no. 2 (February 2012): 1150008. http://dx.doi.org/10.1142/s0218202511500084.

Abstract:
Using compressive sensing and sparse regularization, one can nearly completely reconstruct the input (sparse) signal from a limited number of observations. At the same time, reconstruction by compressive sensing and optimization techniques overcomes the sampling requirement of the Shannon/Nyquist sampling theorem. Seismic reflection signals may be sparse, and sometimes the number of samples is insufficient for seismic surveys, so the seismic signal reconstruction problem is ill-posed. Considering the ill-posed nature and the sparsity of seismic inverse problems, we study reconstruction of the wavefield and the reflection seismic signal by Tikhonov regularization and compressive sensing. The l0, l1 and l2 regularization models are studied. The relationship between Tikhonov regularization and compressive sensing is established. In particular, we introduce a general lp-lq (p, q ≥ 0) regularization model, which overcomes the limitation of assuming a convex objective function. Interior-point methods and projected gradient methods are studied. To show the potential for application of the regularized compressive sensing method, we perform both synthetic seismic signal and field data compression and restoration simulations using a proposed piecewise random subsampling. Numerical performance indicates that regularized compressive sensing is applicable for practical seismic imaging.
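The l1 branch of the regularization models above is typically solved with a shrinkage-type projected gradient method. A sketch using ISTA (iterative shrinkage-thresholding) on a random sensing matrix; the matrix, sparsity level, and parameters are illustrative, not the paper's setup:

```python
import numpy as np

def soft_threshold(x, tau):
    # proximal operator of tau * ||x||_1
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def ista(A, b, lam, n_iter=300):
    """Iterative shrinkage-thresholding for min 0.5||Ax - b||^2 + lam||x||_1."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L, L = gradient Lipschitz const
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x + step * A.T @ (b - A @ x), lam * step)
    return x

rng = np.random.default_rng(5)
n, m, k = 200, 80, 5                          # ambient dim, measurements, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k) + 2.0
A = rng.standard_normal((m, n)) / np.sqrt(m)  # random sensing matrix
b = A @ x_true                                # undersampled observations

def obj(x):
    return 0.5 * np.sum((A @ x - b) ** 2) + 0.01 * np.sum(np.abs(x))

x0 = np.zeros(n)
x_hat = ista(A, b, lam=0.01)
```

ISTA monotonically decreases the composite objective from the zero start, which is the property the assertion below checks.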
APA, Harvard, Vancouver, ISO, and other styles
23

Ebuna, Daniel R., Jared W. Kluesner, Kevin J. Cunningham, and Joel H. Edwards. "Statistical approach to neural network imaging of karst systems in 3D seismic reflection data." Interpretation 6, no. 3 (August 1, 2018): B15—B35. http://dx.doi.org/10.1190/int-2017-0197.1.

Full text
Abstract:
The current lack of a robust standardized technique for geophysical mapping of karst systems can be attributed to the complexity of the environment and prior technological limitations. Abrupt lateral variations in physical properties that are inherent to karst systems generate significant geophysical noise, challenging conventional seismic signal processing and interpretation. The application of neural networks (NNs) to multiattribute seismic interpretation can provide a semiautomated method for identifying and leveraging the nonlinear relationships exhibited among seismic attributes. The ambiguity generally associated with designing NNs for seismic object detection can be reduced via statistical analysis of the extracted attribute data. A data-driven approach to selecting the appropriate set of input seismic attributes, as well as the locations and suggested number of training examples, provides a more objective and computationally efficient method for identifying karst systems using reflection seismology. This statistically optimized NN technique is demonstrated using 3D seismic reflection data collected from the southeastern portion of the Florida carbonate platform. Several dimensionality reduction methods are applied, and the resulting karst probability models are evaluated relative to one another based on quantitative and qualitative criteria. Comparing the preferred model, using quadratic discriminant analysis, with previously available seismic object detection workflows demonstrates the karst-specific nature of the tool. Results suggest that the karst multiattribute workflow presented is capable of approximating the structural boundaries of karst systems with more accuracy and efficiency than a human counterpart or previously presented seismic interpretation schemes. This objective technique, using solely 3D seismic reflection data, is proposed as a practical approach to mapping karst systems for subsequent hydrogeologic modeling.
APA, Harvard, Vancouver, ISO, and other styles
24

Kaiser, A. E., H. Horstmeyer, A. G. Green, F. M. Campbell, R. M. Langridge, and A. F. McClymont. "Detailed images of the shallow Alpine Fault Zone, New Zealand, determined from narrow-azimuth 3D seismic reflection data." GEOPHYSICS 76, no. 1 (January 2011): B19—B32. http://dx.doi.org/10.1190/1.3515920.

Full text
Abstract:
Previous high-resolution seismic reflection investigations of active faults have been based on 2D profiles. Unfortunately, 2D data may be contaminated by out-of-the-plane reflections and diffractions that may be difficult to identify and eliminate. Although full 3D seismic reflection methods allow out-of-the-plane events to be recognized and provide superior resolution to 2D methods, they are only rarely applied in environmental and engineering studies because of high costs. A narrow-azimuth 3D acquisition and processing strategy is introduced to produce a high-resolution seismic reflection volume centered on the Alpine Fault Zone (New Zealand). The shallow 3D images reveal late Quaternary deformation structures associated with this major transpressional plate-boundary fault. The relatively inexpensive narrow-azimuth 3D acquisition pattern consisting of inline source and receiver lines was easily implemented in the field to provide 2- by [Formula: see text] CMP coverage over an approximately 500- by [Formula: see text] area. The narrow-azimuth acquisition strategy was well suited for resolving complex structures within the fault zone. Challenges in processing the data were amplified by the effects of strong velocity heterogeneity in the near surface and the presence of complex dipping, diffracted, and truncated events. A carefully tailored processing scheme including surface-consistent deconvolution, refraction static corrections, noise reduction, dip moveout (DMO) corrections, and 3D depth migration greatly improved the appearance of the final stacks. The 3D images reveal strong reflections from the faulted and folded late Pleistocene erosional basement surface. A steeply dipping planar main (dominant) fault strand can be inferred from the geometry and truncations of the overlying postglacial sediments. The 3D images reveal that the average apparent vertical displacement [Formula: see text] of the basement surface across the dominant fault strand at this location is somewhat less than that estimated from a pilot 2D seismic reflection profile, suggesting that the provisional dip-slip rate based on the 2D data is a maximum.
APA, Harvard, Vancouver, ISO, and other styles
25

Tiwari, R. K., R. Rajesh, T. Seshunarayana, and K. Dhanam. "Complex noise suppression and reconstruction of seismic reflection data from fault structures using Space Lagged Singular Spectral Analysis." Nonlinear Processes in Geophysics Discussions 1, no. 1 (April 11, 2014): 649–63. http://dx.doi.org/10.5194/npgd-1-649-2014.

Full text
Abstract:
The processing of seismic reflection data to identify thin coal beds and the intrinsic fault structure associated with coal mines suffers from coherent noise that arises due to interference and diffraction of seismic signals from adjacent horizontal boundaries on either side of the fault structure. The amplitudes of the interfering reflections mislead the interpretation of geological features like faults, curved reflectors, etc. In particular, correlated and erratic noise create a more severe problem than random noise in the interpretation of such complex geological structures. Here, we employed the Space Lagged Singular Spectral Analysis (SLSSA) algorithm, which decomposes the amplitudes from a constant time/depth to determine the original signal amplitude based on the eigenproperties of the signal. Thus, we can denoise the seismic signal to delineate concealed discontinuities and to map the fault structures. Initially, we tested the algorithm on synthetic data of a fault structure embedded with complex mixed noise (random and colored) of known percentage. Finally, the method was employed on high-resolution seismic reflection observations recorded from the Singareni coalfield, India. The SLSSA method reveals some significant kinematic fault structures in the coal-bearing zone, which agree with regional fault structures in the PG basin and correlate well with available geological information in the area.
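The eigen-decomposition idea behind singular-spectrum-style denoising can be sketched in one dimension. This is a generic SSA sketch under illustrative assumptions (a sinusoid plus noise, rank 2), not the SLSSA algorithm itself, which works on space-lagged windows at constant time/depth:

```python
import numpy as np

def ssa_denoise(x, lag, rank):
    """Embed x in a lagged (Hankel) matrix, keep the dominant eigenimages,
    and reconstruct by diagonal averaging."""
    n = len(x)
    H = np.array([x[i:i + n - lag + 1] for i in range(lag)])   # Hankel embedding
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    Hr = (U[:, :rank] * s[:rank]) @ Vt[:rank]                  # low-rank approximation
    out = np.zeros(n)
    cnt = np.zeros(n)
    for i in range(lag):                                       # diagonal averaging
        out[i:i + n - lag + 1] += Hr[i]
        cnt[i:i + n - lag + 1] += 1
    return out / cnt

t = np.linspace(0.0, 1.0, 200)
clean = np.sin(2 * np.pi * 5 * t)                 # rank-2 signal in the lagged matrix
noisy = clean + 0.3 * np.random.default_rng(1).standard_normal(200)
denoised = ssa_denoise(noisy, lag=40, rank=2)
```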
APA, Harvard, Vancouver, ISO, and other styles
26

Berryman, James. "Approximate methods for time‐reversal processing of large seismic reflection data sets." Journal of the Acoustical Society of America 115, no. 5 (May 2004): 2471. http://dx.doi.org/10.1121/1.4782461.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Pan, Shulin, Ke Yan, Haiqiang Lan, José Badal, and Ziyu Qin. "A Sparse Spike Deconvolution Algorithm Based on a Recurrent Neural Network and the Iterative Shrinkage-Thresholding Algorithm." Energies 13, no. 12 (June 13, 2020): 3074. http://dx.doi.org/10.3390/en13123074.

Full text
Abstract:
Conventional sparse spike deconvolution algorithms that are based on the iterative shrinkage-thresholding algorithm (ISTA) are widely used. Algorithms of this type rely on accurate seismic wavelets; when this requirement is not fulfilled, the processing stops being optimum. Using a recurrent neural network (RNN) as the deep learning method and applying backpropagation to ISTA, we have developed an RNN-like ISTA as an alternative sparse spike deconvolution algorithm. The algorithm is tested with both synthetic and real seismic data. The algorithm first builds a training dataset from existing well logs and seismic data and then extracts wavelets from those seismic data for further processing. Based on the extracted wavelets, the new method uses ISTA to calculate the reflection coefficients. Next, inspired by the backpropagation through time (BPTT) algorithm, backward error correction is performed on the wavelets using the errors between the calculated reflection coefficients and the reflection coefficients corresponding to the training dataset. Finally, after performing backward correction over multiple iterations, a set of acceptable seismic wavelets is obtained, which is then used to deduce the sequence of reflection coefficients of the real data. The new algorithm improves the accuracy of the deconvolution results by reducing the effect of the wrong seismic wavelets given by conventional ISTA. In this study, we account for the mechanism and the derivation of the proposed algorithm, and verify its effectiveness through experimentation using theoretical and real data.
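The ISTA core that such deconvolution schemes iterate can be sketched as follows. The learned wavelet-correction (BPTT) step is omitted, and the Ricker wavelet, sizes, and parameters are illustrative assumptions:

```python
import numpy as np

def ricker(f, dt, n):
    # zero-phase Ricker wavelet of peak frequency f (Hz), sample interval dt (s)
    t = (np.arange(n) - n // 2) * dt
    a = (np.pi * f * t) ** 2
    return (1.0 - 2.0 * a) * np.exp(-a)

def conv_matrix(w, n):
    # columns are time-shifted copies of the wavelet ("same"-mode convolution)
    return np.column_stack([np.convolve(np.eye(n)[:, i], w, mode="same") for i in range(n)])

def sparse_deconv(trace, W, lam=0.005, n_iter=2000):
    step = 1.0 / np.linalg.norm(W, 2) ** 2
    x = np.zeros(W.shape[1])
    for _ in range(n_iter):
        x = x - step * (W.T @ (W @ x - trace))                    # gradient step
        x = np.sign(x) * np.maximum(np.abs(x) - step * lam, 0.0)  # shrinkage step
    return x

w = ricker(25.0, 0.004, 33)
W = conv_matrix(w, 100)
r_true = np.zeros(100)
r_true[[30, 60]] = [1.0, -0.6]      # two isolated reflection coefficients
r_rec = sparse_deconv(W @ r_true, W)
```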
APA, Harvard, Vancouver, ISO, and other styles
28

Wu, Jianjun. "Potential pitfalls of crooked‐line seismic reflection surveys." GEOPHYSICS 61, no. 1 (January 1996): 277–81. http://dx.doi.org/10.1190/1.1443949.

Full text
Abstract:
During the last few years, the Geological Survey of Canada has pioneered the application of seismic reflection profiling to mineral exploration, in close collaboration with Canadian mining companies and with the Lithoprobe project (e.g., Spencer et al., 1993; Milkereit et al., 1994). Because of the rugged terrain in crystalline rock environments (Dahle et al., 1985; Spencer et al., 1993), vibroseis seismic surveys are frequently conducted along existing roads, resulting in extremely crooked survey profiles. Crooked profiling geometry, coupled with the complex nature of the geological targets, poses special challenges for seismic data processing and interpretation. Many common-midpoint seismic processing techniques are based on an implicit assumption of a straight-line survey and are most effective with uniform fold and even offset distribution within common-midpoint (CMP) gathers. However, with crooked-line acquisition the CMP gathers are characterized by variable fold and uneven offset distribution. Based on experience with several seismic data sets from mining camps, I have identified two potential pitfalls that stem from acquisition along crooked profiles: (1) seismic transparent zones; and (2) coherent noise. To address these problems, I have critically re-examined the basic aspects of the CMP processing techniques and have developed robust strategies for dealing with crooked profiles. In this paper, I present a field data example to demonstrate the artifacts and also discuss solutions to eliminate them. Although developed for seismic prospecting in mining camps, the methods presented here are applicable to seismic data acquired in any environment.
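The uneven fold that crooked-line acquisition produces is easy to reproduce: bin the source-receiver midpoints of a bent profile and count traces per bin. The L-shaped "road" geometry below is a toy assumption:

```python
import numpy as np

def cmp_fold(src, rec, bin_size):
    """Count traces per CMP bin given source and receiver coordinates."""
    mid = (np.asarray(src) + np.asarray(rec)) / 2.0          # source-receiver midpoints
    keys = [tuple(k) for k in np.floor(mid / bin_size).astype(int)]
    fold = {}
    for k in keys:
        fold[k] = fold.get(k, 0) + 1
    return fold

# A crooked profile: 10 stations heading east, then 10 heading north.
line = [(float(i), 0.0) if i < 10 else (10.0, float(i - 10)) for i in range(20)]
pairs = [(line[s], line[r]) for s in range(20) for r in range(20) if s != r]
fold = cmp_fold([p[0] for p in pairs], [p[1] for p in pairs], bin_size=1.0)
```

Midpoints of pairs that straddle the bend cut the corner, so the fold per bin varies strongly along the profile instead of being uniform.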
APA, Harvard, Vancouver, ISO, and other styles
29

Wiguna, Taufan, Rahadian Rahadian, Sri Ardhyastuti, Safira Rahmah, and Tati Zera. "SEISMIC FACIES ANALYSIS ON 2D SEISMIC REFLECTION PROFILE IN BARUNA AND JAYA LINE AT NORTH EAST JAVA BASIN." Jurnal Neutrino 9, no. 1 (October 31, 2016): 15. http://dx.doi.org/10.18860/neu.v9i1.3665.

Full text
Abstract:
Two-dimensional (2D) seismic profiles along the Baruna and Jaya lines in the North-East Java Basin show seismic reflector characteristics that can be used to interpret sediment thickness and continuity. Those reflector characteristics can be applied in seismic facies analysis to represent the depositional environment. This study starts from seismic data processing using the Kirchhoff post-stack time migration method, which yields a 2D seismic profile. Seismic reflector characterization was carried out on both 2D profiles and grouped as (i) individual reflection, (ii) reflection configuration, (iii) reflection termination, and (iv) external form. Individual reflections show high and medium amplitude, medium and low frequency, and good continuity. The reflection configuration is continuous, of parallel and subparallel type. Reflection terminations show onlap, and the external form is a sheet drape. A local mound appearance can be interpreted as a paleo-reef. The seismic facies analysis indicates that the study area is a shelf environment.
APA, Harvard, Vancouver, ISO, and other styles
30

Sun, Miaomiao, Zhenchun Li, Yanli Liu, Jiao Wang, and Yufei Su. "Low-Frequency Expansion Approach for Seismic Data Based on Compressed Sensing in Low SNR." Applied Sciences 11, no. 11 (May 29, 2021): 5028. http://dx.doi.org/10.3390/app11115028.

Full text
Abstract:
Low-frequency information can reflect the basic trend of a formation, enhance the accuracy of velocity analysis and improve the imaging accuracy of deep structures in seismic exploration. However, the low-frequency information obtained by the conventional seismic acquisition method is seriously polluted by noise, which will be further lost in processing. Compressed sensing (CS) theory is used to exploit the sparsity of the reflection coefficient in the frequency domain to expand the low-frequency components reasonably, thus improving the data quality. However, the conventional CS method is greatly affected by noise, and the effective expansion of low-frequency information can only be realized in the case of a high signal-to-noise ratio (SNR). In this paper, well information is introduced into the objective function to constrain the inversion process of the estimated reflection coefficient, and then, the low-frequency component of the original data is expanded by extracting the low-frequency information of the reflection coefficient. It has been proved by model tests and actual data processing results that the objective function of estimating the reflection coefficient constrained by well logging data based on CS theory can improve the anti-noise interference ability of the inversion process and expand the low-frequency information well in the case of a low SNR.
APA, Harvard, Vancouver, ISO, and other styles
31

Fang, Jie, Yu Liu, and Guofeng Liu. "Enhancing body waves in passive seismic reflection exploration: A case study in Inner Mongolia, China." Interpretation 10, no. 2 (April 11, 2022): B13—B24. http://dx.doi.org/10.1190/int-2021-0113.1.

Full text
Abstract:
Among all geophysical exploration methods, seismic exploration is undoubtedly the most important due to its ability to allow depth exploration at high resolutions. Traditionally speaking, the method needs an active seismic source, such as dynamite, to generate energy and perform reflection and refractions. An active source usually means high cost, and it also can be quite difficult to implement when surface conditions are particularly complex. The use of passive seismic for reflection exploration does not require an active seismic source. It has the potential of providing a low-cost alternative technique in some exploration areas. The main issues related to passive-source body-wave exploration include suppressing the surface waves retrieved from sources located at or near the surface, as well as enhancing the body waves from random sources at depth. We address these problems by developing a preprocessing workflow to suppress the surface waves and the other unwanted coherent noise events in the original data without seriously affecting the remaining body waves. We also propose a method for separating surface and body waves based on the signal-to-noise ratio of the frequency-domain signals. Next, we use crosscorrelation to generate virtual shot gathers, just like the active ones, and use conventional seismic data processing steps to generate the final seismic imaging. By analyzing the passive data set collected from Inner Mongolia, we have verified the applicability of the proposed method, and the retrieved final stack section indicates good consistency with an active seismic stack section along the same line. Accordingly, we assert that the application of this data processing method will contribute to the body-wave imaging and the inversion analysis of passive seismic records.
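The crosscorrelation step that turns passive recordings into virtual shot gathers can be sketched with two traces and a single random source; the delay and trace length below are illustrative assumptions:

```python
import numpy as np

# Interferometry sketch: crosscorrelating two receivers' recordings of the same
# random source turns receiver A into a virtual source, with the inter-receiver
# traveltime appearing at the correlation peak lag.
rng = np.random.default_rng(2)
src = rng.standard_normal(2048)          # long random (passive) source signature
delay = 15                               # inter-receiver traveltime, in samples
rec_a = src                              # receiver A
rec_b = np.roll(src, delay)              # receiver B sees the wavefield 15 samples later
xcorr = np.correlate(rec_b, rec_a, mode="full")
lag = int(np.argmax(xcorr)) - (len(src) - 1)   # peak lag estimates the traveltime
```

Stacking such correlations over many passive sources (and many receiver pairs) builds the virtual shot gathers that are then processed like active-source data.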
APA, Harvard, Vancouver, ISO, and other styles
32

Yang, Ning, and Yi Duan. "Human vision system-based structural similarity model for evaluating seismic image quality." GEOPHYSICS 83, no. 5 (September 1, 2018): F49—F54. http://dx.doi.org/10.1190/geo2016-0163.1.

Full text
Abstract:
The quality of a seismic section is generally evaluated based on relevant qualitative and quantitative criteria. We have developed a new quantitative model for evaluating seismic sections: a full-reference, human visual system-based model of seismic image structural similarity. With this model, we compared and analyzed two seismic sections before and after data processing in terms of energy intensity, contrast, and seismic reflection configuration similarity measures. We evaluated the quality of the seismic section by combining the three measures, and their changing trends before and after data processing are represented with relevant quantitative indicators. Numerical simulations using these algorithms and actual data indicate that, compared with conventional evaluation methods, our method is easier to understand and simpler to calculate. Our method can also highlight the differences between changing seismic sections and improve the consistency of the objective evaluation results of seismic images with the corresponding human subjective perception.
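A minimal global SSIM-style comparison conveys the flavor of such a model. This is the standard single-window SSIM formula, not the authors' three-measure model; the stabilizing constants are illustrative:

```python
import numpy as np

def ssim_global(x, y, c1=1e-4, c2=9e-4):
    """Single-window structural similarity between two sections (1.0 = identical)."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    lum = (2 * mx * my + c1) / (mx**2 + my**2 + c1)   # energy/luminance term
    con = (2 * cov + c2) / (vx + vy + c2)             # contrast-structure term
    return lum * con

rng = np.random.default_rng(3)
section = rng.standard_normal((50, 50))
s_same = ssim_global(section, section)
s_noisy = ssim_global(section, section + 0.5 * rng.standard_normal((50, 50)))
```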
APA, Harvard, Vancouver, ISO, and other styles
33

Spitzer, Roman, Frank O. Nitsche, Alan G. Green, and Heinrich Horstmeyer. "Efficient acquisition, processing, and interpretation strategy for shallow 3D seismic surveying: A Case Study." GEOPHYSICS 68, no. 6 (November 2003): 1792–806. http://dx.doi.org/10.1190/1.1635032.

Full text
Abstract:
A new 3D seismic reflection data set has been used to map the shallow subsurface beneath a key region of the Swiss Rhine Valley. Seismic signals generated by a pipegun were recorded with single 30‐Hz geophones distributed across a 277.5 × 357.0‐m area. The dense distribution of sources and receivers resulted in a binning grid of 2.12 × 2.12 m and an average fold of ∼22. To improve the visibility and continuity of reflections, a novel processing strategy was designed and applied to the acquired data. A combination of regridding and sharing traces in the common midpoint (CMP) domain resulted in increased S/N ratios with only minor loss of resolution. This prestack interpolation method yielded composite CMPs distributed on a 1.5 × 1.5‐m binning grid and an increased average fold of ∼44. The composite CMPs were subjected to a combined linear and hyperbolic τ–p processing scheme that led to the effective separation of reflections from source‐generated noise. Finally, 3D depth migration of the stacked data produced high‐resolution images of the subsurface from ∼15 to ∼130 m depth. On the basis of characteristic seismic facies and information from nearby boreholes, four principal lithological units were identified. At increasing depths they were glaciofluvial sand and gravel, glaciolacustrine clay and silt, morainal deposits, and sandstone basement. These lithological units were separated by three principal reflecting boundaries that were mapped through the data volume using semiautomatic tracking procedures. The deepest boundary defined a trough‐shaped basement structure.
APA, Harvard, Vancouver, ISO, and other styles
34

Baker, Gregory S., Don W. Steeples, and Matt Drake. "Muting the noise cone in near‐surface reflection data: An example from southeastern Kansas." GEOPHYSICS 63, no. 4 (July 1998): 1332–38. http://dx.doi.org/10.1190/1.1444434.

Full text
Abstract:
A 300-m near‐surface seismic reflection profile was collected in southeastern Kansas to locate a fault(s) associated with a recognized stratigraphic offset on either side of a region of unexposed bedrock. A substantial increase in the S/N ratio of the final stacked section was achieved by muting all data arriving in time after the airwave. Methods of applying traditional seismic data processing techniques to near‐surface data (200 ms of data or less) often differ notably from hydrocarbon exploration‐scale processing (3–4 s of data or more). The example of noise cone muting used is contrary to normal exploration‐scale seismic data processing philosophy, which is to include all data containing signal. The noise cone mute applied to the data removed more than one‐third of the total data volume, some of which contains signal. In this case, however, the severe muting resulted in a higher S/N ratio in the final stacked section, even though some signal could be identified within the muted data. This example supports the suggestion that nontraditional techniques sometimes need to be considered when processing near‐surface seismic data.
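The noise-cone mute itself is a one-liner per trace: zero every sample arriving at or after the airwave time t = offset / v_air. The air velocity, geometry, and function name below are illustrative assumptions:

```python
import numpy as np

def post_airwave_mute(gather, offsets, dt, v_air=335.0):
    """Zero all samples at or after the airwave arrival on each trace."""
    out = gather.copy()
    for j, x in enumerate(offsets):
        i_air = min(out.shape[0], int(abs(x) / v_air / dt))  # first sample at/after airwave
        out[i_air:, j] = 0.0
    return out

gather = np.ones((500, 4))          # 500 samples at dt = 1 ms, four traces
offsets = [10.0, 30.0, 60.0, 90.0]  # source-receiver offsets in metres
muted = post_airwave_mute(gather, offsets, dt=0.001)
```

The muted region grows linearly with offset, which is why this removes more than a third of the data volume on a typical near-surface gather.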
APA, Harvard, Vancouver, ISO, and other styles
35

Zhao, Xuegong, Hao Wu, Xinyan Li, Zhenming Peng, and Yalin Li. "Seismic Reflection Coefficient Inversion Using Basis Pursuit Denoising in the Joint Time-Frequency Domain." Energies 13, no. 19 (September 24, 2020): 5025. http://dx.doi.org/10.3390/en13195025.

Full text
Abstract:
Seismic reflection coefficient inversion in the joint time-frequency domain is a method for inverting reflection coefficients using time domain and frequency domain information simultaneously. It can effectively improve the time-frequency resolution of seismic data. However, existing research lacks an analysis of the factors that affect the resolution of inversion results. In this paper, we analyze the influence of parameters, such as the length of the time window, the size of the sliding step, the dominant frequency band, and the regularization factor of the objective function on inversion results. The SPGL1 algorithm for basis pursuit denoising was used to solve our proposed objective function. The applied geological model and experimental field results show that our method can obtain a high-resolution seismic reflection coefficient section, thus providing a potential avenue for high-resolution seismic data processing and seismic inversion, especially for thin reservoir inversion and prediction.
APA, Harvard, Vancouver, ISO, and other styles
36

Konstantaki, Laura Amalia, Ranajit Ghose, Deyan Draganov, Giovanni Diaferia, and Timo Heimovaara. "Characterization of a heterogeneous landfill using seismic and electrical resistivity data." GEOPHYSICS 80, no. 1 (January 1, 2015): EN13—EN25. http://dx.doi.org/10.1190/geo2014-0263.1.

Full text
Abstract:
Understanding the processes occurring inside a landfill is important for improving the treatment of landfills. Irrigation and recirculation of leachate are widely used in landfill treatments. Increasing the efficiency of such treatments requires a detailed understanding of the flow inside the landfill. The flow depends largely on the heterogeneous distribution of density. It is, therefore, of great practical interest to determine the density distribution affecting the flow paths inside a landfill. Studies in the past have characterized landfill sites but have not led to high-resolution, detailed quantitative results. We performed an S-wave reflection survey, multichannel analysis of surface waves (MASW), and an electrical resistivity survey to investigate the possibility of delineating the heterogeneity distribution in the body of a landfill. We found that the high-resolution S-wave reflection method offers the desired resolution. However, in the case of a very heterogeneous landfill and a high noise level, the processing of high-resolution, shallow reflection data required special care. In comparison, MASW gave the general trend of the changes inside the landfill, whereas the electrical resistivity (ER) survey provided useful clues for the interpretation of the seismic reflection data. We found that it is possible to localize fine-scale heterogeneities in the landfill with the S-wave reflection method and a high-frequency vibratory source. Using empirical relations specific to landfill sites, we then estimated the density distribution inside the landfill, along with the associated uncertainty, considering different methods. The final interpretation was guided by supplementary information provided by MASW and ER tomography.
APA, Harvard, Vancouver, ISO, and other styles
37

Socco, Laura Valentina, Daniele Boiero, Sebastiano Foti, and Roger Wisén. "Laterally constrained inversion of ground roll from seismic reflection records." GEOPHYSICS 74, no. 6 (November 2009): G35—G45. http://dx.doi.org/10.1190/1.3223636.

Full text
Abstract:
Seismic reflection data contain surface waves that can be processed and interpreted to supply shear-wave velocity models along seismic reflection lines. The coverage of seismic reflection data allows the use of automated multifold processing to extract high-quality dispersion curves and experimental uncertainties in a moving spatial window. The dispersion curves are then inverted using a deterministic, laterally constrained inversion to obtain a pseudo-2D model of the shear-wave velocity. A Monte Carlo global search inversion algorithm optimizes the parameterization. When the strategy is used with synthetic and field data, consistent final models with smooth lateral variations are successfully retrieved. This method constitutes an improvement over the individual inversion of single dispersion curves.
APA, Harvard, Vancouver, ISO, and other styles
38

Wu, Chao Rong, Wen Shen Duan, and Rong Cai Zheng. "3D Visualization Prediction of Fractured Reservoir in Xujiahe Formation of Sichuan Basin in China." Advanced Materials Research 1010-1012 (August 2014): 1440–45. http://dx.doi.org/10.4028/www.scientific.net/amr.1010-1012.1440.

Full text
Abstract:
Rock of the Xujiahe Formation in the Sichuan Basin is characterized by very low porosity and permeability, so forecasting the spatial distribution of fractured reservoirs is the key to exploration success. Combining logging and seismic data, analyzing multiple attributes, and optimizing the post-stack seismic data processing method, we selected amplitude, average energy, chaotic reflection, and amplitude-weighted instantaneous frequency (AWIF) to study the seismic response. The fractured reservoirs show weak amplitude, low average energy, low AWIF, and strong chaotic reflection. Then, with the aid of 3D visualization technology, we studied the data set and recognized that some data, such as amplitude and impedance, should emphasize low values, while others, such as chaotic reflection and auto-fault extraction (AFE), should highlight high values. On this basis, we formed a suite of techniques for predicting fractured reservoirs using the 3D visualization method. Taking the amplitude, chaotic reflection, and AFE data as examples, we depict the spatial distribution of the fractured reservoir. The results clearly exhibit the spatial distribution of the reservoir.
APA, Harvard, Vancouver, ISO, and other styles
39

Fei, Jianbo, and Yanchun Wang. "Interactive Multimedia Data Coscattering Point Imaging for Low Signal-to-Noise Ratio 3D Seismic Data Processing." Wireless Communications and Mobile Computing 2022 (August 4, 2022): 1–12. http://dx.doi.org/10.1155/2022/6904653.

Full text
Abstract:
In this paper, low signal-to-noise ratio 3D seismic data are processed by the method of coscattering-point imaging, and the imaging method is analyzed in combination with interactive multimedia for 3D seismic data. The reconstruction is carried out using a convex-set projection (POCS) algorithm based on the curvelet transform. Trace gathers are extracted from the 3D data volume and transformed into common-offset common-midpoint gathers, so that the seismic data are reconstructed in the common-offset gather domain; comparison shows that the reconstruction results are better in this domain. To shorten the processing time and obtain better reconstruction results, this paper proposes the idea of direct reconstruction of frequency slices. Experiments on actual seismic three-component wavefields, recorded with velocity-type and acceleration-type three-component geophones, are carried out to reveal the signal characteristics of the seismic wavefield in the mining space. Because of the limitations of the observation space and the particular needs of mine detection, the application of the scattered-wave imaging method in a mine must be based on the corresponding detection space and detection purpose. The work in this thesis improves the signal-to-noise ratio, resolution, and fidelity of the 3D seismic data of the Shawan Formation, which is more conducive to the search for lithological traps. Combined with the seismic and geological data, several traps were found and confirmed, indicating that the fidelity of the resulting information is good and can meet the needs of interpretation and comprehensive research. The multiwave scattering imaging method in this paper can perform multi-wavefield imaging of longitudinal, transverse, and channel waves; it has the advantages of data redundancy, high fold, and more accurate imaging than conventional reflection-wave imaging, and it provides field application value for ensuring mine production safety.
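A convex-set projection (POCS) reconstruction loop of the kind the abstract describes can be sketched with the FFT standing in for the curvelet transform, an illustrative substitution that works for this plane-wave toy example:

```python
import numpy as np

def pocs_reconstruct(data, mask, n_iter=100):
    """Fill in killed traces by alternating transform-domain thresholding
    (sparsity projection) with re-insertion of the observed traces."""
    rec = data * mask
    fmax = np.abs(np.fft.fft2(rec)).max()
    for k in range(n_iter):
        F = np.fft.fft2(rec)
        F[np.abs(F) < fmax * 0.9 ** k] = 0.0    # exponentially decaying threshold
        rec = np.fft.ifft2(F).real
        rec = data * mask + rec * (1.0 - mask)  # re-insert the observed traces
    return rec

i, j = np.meshgrid(np.arange(64), np.arange(64), indexing="ij")
data = np.cos(2 * np.pi * (3 * i + 5 * j) / 64)    # a dipping plane-wave event
rng = np.random.default_rng(5)
mask = np.ones((64, 64))
mask[:, rng.choice(64, 19, replace=False)] = 0.0   # kill ~30% of the traces
rec = pocs_reconstruct(data, mask)
```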
APA, Harvard, Vancouver, ISO, and other styles
40

Liu, Yang, and Sergey Fomel. "OC-seislet: Seislet transform construction with differential offset continuation." GEOPHYSICS 75, no. 6 (November 2010): WB235—WB245. http://dx.doi.org/10.1190/1.3479554.

Full text
Abstract:
Many of the geophysical data-analysis problems such as signal-noise separation and data regularization are conveniently formulated in a transform domain, in which the signal appears sparse. Classic transforms such as the Fourier transform or the digital wavelet transform (DWT) fail occasionally in processing complex seismic wavefields because of the nonstationarity of seismic data in time and space dimensions. We present a sparse multiscale transform domain specifically tailored to seismic reflection data. The new wavelet-like transform — the OC-seislet transform — uses a differential offset-continuation (OC) operator that predicts prestack reflection data in offset, midpoint, and time coordinates. It provides a high compression of reflection events. Its compression properties indicate the potential of OC seislets for applications such as seismic data regularization or noise attenuation. Results of applying the method to synthetic and field data examples demonstrate that the OC-seislet transform can reconstruct missing seismic data and eliminate random noise even in structurally complex areas.
APA, Harvard, Vancouver, ISO, and other styles
41

Key, Scott C., and Scott B. Smithson. "New approach to seismic‐reflection event detection and velocity determination." GEOPHYSICS 55, no. 8 (August 1990): 1057–69. http://dx.doi.org/10.1190/1.1442918.

Full text
Abstract:
Recently a number of high‐resolution techniques for event detection and parameter estimation (velocity) have emerged in the field of multichannel sonar array processing. We find that a practical and computationally efficient implementation of these methods is possible for seismic reflection data. This implementation results in development of the covariance measure for event detection and velocity estimation in seismic reflection data. The measure is based on the eigenstructure of the sampled data covariance matrix. This decomposition is carried out within hyperbolic windows that are moved through common‐depth‐point (CDP) data. Eigenvalues of the data covariance matrix allow simultaneous estimation of the noise and signal energy present within each window, resulting in the development of a coherency measure. Computer implementation of the covariance measure provides a high resolution method for determination of velocity spectra relative to currently used techniques. The method applied to synthetic data resolves reflections closely spaced in time and velocity as well as those with low signal‐to‐noise ratio (S/N). These results are also achieved when the method is applied to real data. The results in each case are directly compared to the widely used semblance measure. The covariance measure provides results superior to the crosscorrelation‐based semblance measure and offers a degree of resolution not previously attainable. These results can be achieved with a reduction in computational cost relative to semblance for typical analysis parameters. We find that the ability to continuously estimate noise energy, as well as signal energy, is critical to event resolution and noise rejection.
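The covariance measure's key property, that the largest eigenvalue captures nearly all the energy for a coherent event but only a small fraction for uncorrelated noise, can be verified in a few lines; the window contents below are toy assumptions:

```python
import numpy as np

def covariance_coherency(window):
    """Largest-eigenvalue fraction of the trace covariance matrix.
    window: (n_samples, n_traces) extracted along a trial hyperbolic trajectory."""
    eig = np.linalg.eigvalsh(window.T @ window)   # ascending eigenvalues
    return eig[-1] / eig.sum()

rng = np.random.default_rng(4)
wavelet = np.sin(np.linspace(0.0, np.pi, 20))
coherent = np.outer(wavelet, np.ones(12))     # identical event on 12 traces -> rank 1
noise = rng.standard_normal((20, 12))         # uncorrelated noise across traces
c_sig = covariance_coherency(coherent)
c_noise = covariance_coherency(noise)
```

Scanning trial velocities moves the hyperbolic window; the trial that aligns a real reflection maximizes this eigenvalue ratio, which is the basis of the velocity spectra described above.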
APA, Harvard, Vancouver, ISO, and other styles
42

Isaac, J. Helen, and Don C. Lawton. "A practical method for estimating effective parameters of anisotropy from reflection seismic data." GEOPHYSICS 69, no. 3 (May 2004): 681–89. http://dx.doi.org/10.1190/1.1759454.

Full text
Abstract:
The location of any event imaged by P‐wave reflection seismic data beneath a tilted transversely isotropic (TTI) overburden is shifted laterally if isotropic velocities are used during data processing. The magnitude of the shift depends on five independent parameters: overburden thickness, angle of tilt, symmetry‐axis velocity, and the Thomsen anisotropy parameters ε and δ. The shift also varies with source–receiver offset. We have developed a procedure to estimate these five parameters when the tilt of the symmetry axis from the vertical is equal to the dip of the TTI layer (except in the special cases of transverse isotropy with a vertical or horizontal symmetry axis). We observe three attributes of seismic data processed using isotropic velocities: the zero‐offset arrival time of a selected reflection, the difference in arrival time between a near‐offset and a far‐offset arrival, and the difference in imaged location (smear) of this target event between the same offsets. We then perform a cascaded scan of the five parameters to determine those combinations of the five that result in calculated attributes equivalent to the observed attributes. The multiple solutions are averaged to give the parameter estimates. Application of this method to synthetic and physical model reflection data results in multiple solutions, which are constrained and averaged to obtain the effective imaging parameters. These effective parameters are close estimates of the true model parameters in both cases. For field seismic data this procedure requires that there be a suitable observable event below the TTI overburden and assumes that the measured times and shifts are reasonably accurate.
APA, Harvard, Vancouver, ISO, and other styles
43

Sanchis, Charlotte, and Alfred Hanssen. "Enhanced local correlation stacking method." GEOPHYSICS 76, no. 3 (May 2011): V33—V45. http://dx.doi.org/10.1190/1.3552687.

Full text
Abstract:
Stacking is a common technique to improve the signal-to-noise ratio (S/N) and the imaging quality of seismic data. Conventional stacking that averages equally a collection of normal moveout corrected or migrated shot gathers with a common reflection point is not always satisfactory. Instead, we propose a novel time-dependent weighted average stacking method that utilizes local correlation between each individual trace and a chosen reference trace as a measure of weight and a new weight normalization scheme that ensures meaningful amplitudes of the output. Three different reference traces have been proposed. These are based on conventional stacking, S/N estimation, and Kalman filtering. The outputs of the enhanced stacking methods, as well as their reference traces, were compared on both synthetic data and real marine migrated subsalt data. We conclude that both S/N estimation and Kalman reference stacking methods as well as the output of the enhanced stacking method yield consistently better results than conventional stacking. They exhibit cleaner and better defined reflection events and a larger number of reflections. We found that the Kalman reference method produces the best overall seismic image contrast and reveals many more reflected events, but at the cost of a higher noise level and a longer processing time. Thus, enhanced stacking using S/N estimation as reference method is a possible alternative that has the advantages of running faster, but also emphasizes some reflected events under the subsalt structure.
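A minimal sketch of correlation-weighted stacking in the spirit of this abstract, using the conventional stack as the reference trace; the window length and the clipping of negative weights are choices made for the example, not the authors' scheme:

```python
import numpy as np

def local_corr(a, b, win=21):
    # sliding-window normalised correlation between two traces
    k = np.ones(win)
    num = np.convolve(a * b, k, mode="same")
    den = np.sqrt(np.convolve(a * a, k, mode="same") *
                  np.convolve(b * b, k, mode="same")) + 1e-12
    return num / den

def enhanced_stack(gather):
    ref = gather.mean(axis=0)                 # conventional stack as reference
    w = np.array([np.clip(local_corr(tr, ref), 0.0, None) for tr in gather])
    wsum = w.sum(axis=0) + 1e-12
    return (w * gather).sum(axis=0) / wsum    # weight-normalised output

rng = np.random.default_rng(2)
nt = 400
signal = np.zeros(nt)
signal[180:200] = np.hanning(20)              # one reflection event
gather = np.array([signal + 0.5 * rng.standard_normal(nt) for _ in range(12)])
out = enhanced_stack(gather)
print(out.shape)
```

The key point in the abstract is the choice of reference trace: replacing the plain mean used here with an S/N-estimated or Kalman-filtered reference changes which samples the weights emphasize, which is where the reported quality differences come from.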
APA, Harvard, Vancouver, ISO, and other styles
44

Beckel, Ruth A., and Christopher Juhlin. "The cross-dip correction as a tool to improve imaging of crooked-line seismic data: a case study from the post-glacial Burträsk fault, Sweden." Solid Earth 10, no. 2 (April 29, 2019): 581–98. http://dx.doi.org/10.5194/se-10-581-2019.

Full text
Abstract:
Abstract. Understanding the development of post-glacial faults and their associated seismic activity is crucial for risk assessment in Scandinavia. However, imaging these features and their geological environment is complicated due to special challenges of their hardrock setting, such as weak impedance contrasts, often high noise levels and crooked acquisition lines. A crooked-line geometry can cause time shifts that seriously de-focus and deform reflections containing a cross-dip component. Advanced processing methods like swath 3-D processing and 3-D pre-stack migration can, in principle, handle the crooked-line geometry but may fail when the noise level is too high. For these cases, the effects of reflector cross-dip can be compensated for by introducing a linear correction term into the standard processing flow. However, existing implementations of the cross-dip correction rely on a slant stack approach which can, for some geometries, lead to a duplication of reflections. Here, we present a module for the cross-dip correction that avoids the reflection duplication problem by shifting the reflections prior to stacking. Based on tests with synthetic data, we developed an iterative processing scheme where a sequence consisting of cross-dip correction, velocity analysis and dip-moveout (DMO) correction is repeated until the stacked image converges. Using our new module to reprocess a reflection seismic profile over the post-glacial Burträsk fault in northern Sweden increased the image quality significantly. Strike and dip information extracted from the cross-dip analysis helped to interpret a set of southeast-dipping reflections as shear zones belonging to the regional-scale Burträsk Shear Zone (BSZ), implying that the BSZ itself is not a vertical but a southeast-dipping feature. Our results demonstrate that the cross-dip correction is a highly useful alternative to more sophisticated processing methods for noisy datasets. This highlights the often underestimated potential of rather simple but noise-tolerant methods in processing hardrock seismic data.
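The linear cross-dip correction described here amounts to a time shift proportional to the cross-line offset, applied before stacking. A toy sketch (integer-sample shifts only, with an invented geometry and slowness):

```python
import numpy as np

def crossdip_correct(traces, ycoords, slowness, dt):
    # shift each trace by the linear cross-dip moveout p*y before stacking
    out = np.empty_like(traces)
    n = traces.shape[1]
    for i, y in enumerate(ycoords):
        k = int(round(slowness * y / dt))  # shift in samples
        out[i] = np.roll(traces[i], -k)    # crude integer-sample shift
        if k > 0:                          # zero the wrapped-around samples
            out[i, n - k:] = 0.0
        elif k < 0:
            out[i, :-k] = 0.0
    return out

dt, nt = 0.002, 300
ycoords = np.array([-200.0, -100.0, 0.0, 100.0, 200.0])
p = 1.0e-4                                 # cross-dip slowness, s/m
base = np.zeros(nt)
base[150] = 1.0
# build a gather where the event arrives later for larger cross-line offset
gather = np.array([np.roll(base, int(round(p * y / dt))) for y in ycoords])
flat = crossdip_correct(gather, ycoords, p, dt)
print(np.argmax(flat, axis=1))  # the spike lines up at sample 150 everywhere
```

In practice the slowness p is not known in advance; the paper's cross-dip analysis scans over it, and the fractional-sample shifts would be done by interpolation rather than `np.roll`.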
APA, Harvard, Vancouver, ISO, and other styles
45

Meles, Giovanni Angelo, Kees Wapenaar, and Andrew Curtis. "Reconstructing the primary reflections in seismic data by Marchenko redatuming and convolutional interferometry." GEOPHYSICS 81, no. 2 (March 1, 2016): Q15—Q26. http://dx.doi.org/10.1190/geo2015-0377.1.

Full text
Abstract:
State-of-the-art methods to image the earth’s subsurface using active-source seismic reflection data involve reverse time migration. This and other standard seismic processing methods such as velocity analysis provide best results only when all waves in the data set are primaries (waves reflected only once). A variety of methods are therefore deployed as processing to predict and remove multiples (waves reflected several times); however, accurate removal of those predicted multiples from the recorded data using adaptive subtraction techniques proves challenging, even in cases in which they can be predicted with reasonable accuracy. We present a new, alternative strategy to construct a parallel data set consisting only of primaries, which is calculated directly from recorded data. This obviates the need for multiple prediction and removal methods. Primaries are constructed by using convolutional interferometry to combine the first-arriving events of upgoing and direct-wave downgoing Green’s functions to virtual receivers in the subsurface. The required upgoing wavefields to virtual receivers are constructed by Marchenko redatuming. Crucially, this is possible without detailed models of the earth’s subsurface reflectivity structure: Similar to most migration techniques, the method only requires surface reflection data and estimates of direct (nonreflected) arrivals between the virtual subsurface sources and the acquisition surface. We evaluate the method on a stratified synclinal model. It is shown to be particularly robust against errors in the reference velocity model used and to improve the migrated images substantially.
APA, Harvard, Vancouver, ISO, and other styles
46

Baykulov, Mikhail, and Dirk Gajewski. "Prestack seismic data enhancement with partial common-reflection-surface (CRS) stack." GEOPHYSICS 74, no. 3 (May 2009): V49—V58. http://dx.doi.org/10.1190/1.3106182.

Full text
Abstract:
We developed a new partial common-reflection-surface (CRS) stacking method to enhance the quality of sparse low-fold seismic data. For this purpose, we use kinematic wavefield attributes computed during the automatic CRS stack. We apply a multiparameter CRS traveltime formula to compute partial stacked CRS supergathers. Our algorithm allows us to generate NMO-uncorrected gathers without the application of inverse NMO/DMO. Gathers obtained by this approach are regularized and have better signal-to-noise ratio compared with original common-midpoint gathers. Instead of the original data, these improved prestack data can be used in many conventional processing steps, e.g., velocity analysis or prestack depth migration, providing enhanced images and better quality control. We verified the method on 2D synthetic data and applied it to low-fold land data from northern Germany. The synthetic examples show the robustness of the partial CRS stack in the presence of noise. Sparse land data became regularized, and the signal-to-noise ratio of the seismograms increased as a result of the partial CRS stack. Prestack depth migration of the generated partially stacked CRS supergathers produced significantly improved common-image gathers as well as depth-migrated sections.
APA, Harvard, Vancouver, ISO, and other styles
47

Shtivelman, Vladimir, Uri Frieslander, Ezra Zilberman, and Rivka Amit. "Mapping shallow faults at the Evrona playa site using high‐resolution reflection method." GEOPHYSICS 63, no. 4 (July 1998): 1257–64. http://dx.doi.org/10.1190/1.1444427.

Full text
Abstract:
A shallow high‐resolution seismic reflection survey was carried out at the Evrona playa site in the southern Arava valley, Israel. The aim of the survey was to detect and map faults in the shallow subsurface (upper 100–150 m) and establish the relationship of the morphological features revealed by aerial photographs and surface geological mapping with the faults detected in the subsurface. The survey included three seismic lines shot using the P-wave technique and one SH-wave line, which overlapped one of the P-wave lines. The seismic energy source on all the lines was a sledge hammer. The acquired reflection data were of good quality and did not require special processing efforts. The seismic sections along the lines show a sequence of reflected events within the 8–150 m range. At several locations, continuity of the events is interrupted by a system of faults. These faults form flower structures apparently related to strike‐slip motions typical of the region. Comparison of the faults mapped on the seismic sections with those expressed by surface morphological features generally show good correspondence. The results of the seismic survey provide important information for the study of paleoseismicity and seismic hazards in the investigated area.
APA, Harvard, Vancouver, ISO, and other styles
48

Campman, Xander H., Gérard C. Herman, and Everhard Muyzert. "Suppressing near-receiver scattered waves from seismic land data." GEOPHYSICS 71, no. 4 (July 2006): S121—S128. http://dx.doi.org/10.1190/1.2204965.

Full text
Abstract:
Upgoing body waves that travel through a heterogeneous near-surface region can excite scattered waves. When the scattering takes place close to the receivers, secondary waves interfere with the upcoming reflections, diminishing the continuity of the wavefront. We estimate a near-surface scattering distribution from a subset of a data record and use this scattering distribution to predict the secondary waves of the entire data record with a wave-theoretical model for near-receiver scattering. We then subtract the predicted scattered waves from the record to obtain the wavefield that would have been measured in the absence of near-surface heterogeneities. We apply this method to part of a field data set acquired in an area with significant near-surface heterogeneity. The main result of our processing scheme is that we effectively remove near-surface scattered waves. This, in turn, increases trace-to-trace coherence of reflection events. Moreover, application of our method improves the results obtained from just an application of a dip filter because we remove parts of the scattered wave with apparent velocities that are typically accepted by the pass zone of the dip filter. Based on these results, we conclude that our method for suppressing near-receiver scattered waves works well on densely sampled land data collected in areas with strong near-surface heterogeneity.
APA, Harvard, Vancouver, ISO, and other styles
49

Burschil, T., T. Beilecke, and C. M. Krawczyk. "Finite-difference modelling to evaluate seismic P-wave and shear-wave field data." Solid Earth 6, no. 1 (January 13, 2015): 33–47. http://dx.doi.org/10.5194/se-6-33-2015.

Full text
Abstract:
Abstract. High-resolution reflection seismic methods are an established non-destructive tool for engineering tasks. In the near surface, shear-wave reflection seismic measurements usually offer a higher spatial resolution in the same effective signal frequency spectrum than P-wave data, but data quality varies more strongly. To discuss the causes of these differences, we investigated a P-wave and an SH-wave seismic reflection profile measured at the same location on the island of Föhr, Germany, and applied seismic reflection processing to the field data as well as finite-difference modelling of the seismic wave field. The simulations were adapted to the acquisition field geometry, comprising 2 m receiver distance (1 m for the SH wave) and 4 m shot distance along the 1.5 km long P-wave and 800 m long SH-wave profiles. A Ricker wavelet and the use of absorbing frames were first-order model parameters. The petrophysical parameters to populate the structural models down to 400 m depth were taken from borehole data, VSP (vertical seismic profile) measurements and cross-plot relations. The simulation of the P-wave wavefield was based on interpretation of the P-wave depth section that included a priori information from boreholes and airborne electromagnetics. Velocities for 14 layers in the model were derived from the analysis of five nearby VSPs (vP = 1600–2300 m s-1). Synthetic shot data were compared with the field data and seismic sections were created. Major features like the direct wave and reflections are imaged. We reproduce the major reflectors in the depth section of the field data, e.g. a prominent till layer and several deep reflectors. The SH-wave model was adapted accordingly but led to only minor correlation with the field data and produced a higher signal-to-noise ratio. We therefore suggest considering additional features in future simulations, such as intrinsic damping, thin layering, or a near-surface weathering layer. These may lead to a better understanding of the key parameters determining the data quality of near-surface shear-wave seismic measurements.
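The forward-modelling machinery behind such comparisons can be reduced, in one dimension, to a few lines: a second-order acoustic finite-difference scheme with a Ricker source (a pedagogical sketch with invented parameters, not the authors' 2D code, and with simple fixed boundaries instead of absorbing frames):

```python
import numpy as np

def fd1d_acoustic(v, dt, dx, nt, src, isrc):
    # second-order-in-time, second-order-in-space 1D acoustic FD scheme
    n = v.size
    c2 = (v * dt / dx) ** 2                 # squared Courant number per cell
    assert c2.max() <= 1.0, "CFL stability condition violated"
    p_prev = np.zeros(n)
    p = np.zeros(n)
    snaps = []
    for it in range(nt):
        lap = np.zeros(n)
        lap[1:-1] = p[2:] - 2 * p[1:-1] + p[:-2]   # discrete Laplacian
        p_next = 2 * p - p_prev + c2 * lap
        p_next[isrc] += src[it]                    # inject the source wavelet
        p_prev, p = p, p_next
        snaps.append(p.copy())
    return np.array(snaps)

dx, dt, nt = 5.0, 0.001, 400
v = np.full(400, 2000.0)                    # homogeneous 2000 m/s model
t = np.arange(nt) * dt
f0 = 25.0
arg = (np.pi * f0 * (t - 0.04)) ** 2
src = (1 - 2 * arg) * np.exp(-arg)          # Ricker wavelet, as in the paper
wave = fd1d_acoustic(v, dt, dx, nt, src, isrc=200)
print(wave.shape)
```

The stability check (Courant number at most 1 for this scheme) and the source injection are the two details most often gotten wrong in quick implementations; everything else in a production modelling code is refinement of the same update loop.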
APA, Harvard, Vancouver, ISO, and other styles
50

Zhang, Hong Bing, Cheng Hao Cao, Zhi Wei Dan, Zuo Ping Shang, Tian Cai Li, and Huan Wan. "Improving Pre-Stack Seismic Data Resolution Based on Inverse Q Filter." Applied Mechanics and Materials 331 (July 2013): 617–21. http://dx.doi.org/10.4028/www.scientific.net/amm.331.617.

Full text
Abstract:
The resolution of seismic reflection data is insufficient for exploring subtle and structurally complex oil–gas pools, so the resolution of the data must be improved. To achieve this, we applied inverse Q filter methods to the seismic data. The results of processing pre-stack seismic data indicate that the inverse Q filter method can raise the peak frequency and broaden the frequency bandwidth. The low and middle-high frequency components in those results are reasonable, and their signal/noise ratios are clearly higher than those of other methods in the middle-high frequency range. Moreover, results obtained with various Q values indicate that as Q decreases, the peak frequency increases slightly and the frequency bandwidth expands, but the signal/noise ratio decreases. The improvement in the signal/noise ratio is very small when Q is larger than 200.0; increasing Q further only reduces the peak frequency and the bandwidth.
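A stabilised, amplitude-only inverse Q filter can be sketched in the frequency domain as follows; the single reference traveltime and the stabilisation constant are simplifications of practical time-variant implementations, and all numerical values are invented for the example:

```python
import numpy as np

def inverse_q_filter(trace, dt, q, tau, stab=0.05):
    # amplitude-only inverse Q compensation for one reference traveltime tau,
    # with a stabilisation factor limiting the high-frequency noise boost
    n = trace.size
    f = np.fft.rfftfreq(n, dt)
    spec = np.fft.rfft(trace)
    atten = np.exp(-np.pi * f * tau / q)     # constant-Q attenuation model
    gain = atten / (atten**2 + stab**2)      # stabilised inverse of atten
    return np.fft.irfft(spec * gain, n)

dt, n, q, tau = 0.002, 512, 80.0, 1.0
t = np.arange(n) * dt
f0 = 30.0
arg = (np.pi * f0 * (t - 0.1)) ** 2
w = (1 - 2 * arg) * np.exp(-arg)             # Ricker wavelet
# simulate attenuation, then compensate
f = np.fft.rfftfreq(n, dt)
attenuated = np.fft.irfft(np.fft.rfft(w) * np.exp(-np.pi * f * tau / q), n)
restored = inverse_q_filter(attenuated, dt, q, tau)
# bandwidth recovery: restored spectrum closer to the original than attenuated
err_r = np.linalg.norm(np.abs(np.fft.rfft(restored)) - np.abs(np.fft.rfft(w)))
err_a = np.linalg.norm(np.abs(np.fft.rfft(attenuated)) - np.abs(np.fft.rfft(w)))
print(err_r < err_a)  # → True
```

The trade-off the abstract reports falls directly out of the `gain` expression: a smaller stabilisation constant recovers more high-frequency bandwidth but amplifies high-frequency noise, which is why the signal/noise ratio drops as the compensation becomes more aggressive.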
APA, Harvard, Vancouver, ISO, and other styles
