
Study Of Typical Processing Methods And Their Parameters In Advance Seismic Data Processing Workflow

Posted on: 2017-01-14    Degree: Doctor    Type: Dissertation
Country: China    Candidate: MOHAMED MHMOD    Full Text: PDF
GTID: 1220330482494883    Subject: Solid Earth Physics
Abstract/Summary:
This thesis adds the following to the existing seismic processing workflow:
· A new preprocessing stage, called the "parameters stage", which shows the effects of the parameters and helps us choose better parameters, flows and methods for the processing stage.
· It resolves the question of which methods should be applied and which give better results.
· It presents and compares the most common and less common methods used in seismic processing and analysis.
· It reduces turnaround time and makes the processing stage faster, because we already know which methods to use and which parameters give better results.
Chapter 1 introduces the processing and analysis methods, briefly outlines the work done in this thesis, and describes the data used. The data are: (1) 2D land and marine data; (2) 3D land data; (3) far- and near-offset VSP data. Different kinds of data are used because many kinds of methods and parameters are tested. Fig. (1.2-1) shows the main work and the methods applied. In chapter 2, we apply F-K spectral analysis to see the effect of trace spacing and trace smoothing on the input data (both 2D and 3D); second, time-variant frequency analysis, studying the effect of window length and of the lower and upper frequencies; third, time-variant amplitude analysis, studying the effect of the FFT window length; and finally S/N estimation using different methods (cross-correlation, multiple coherence and singular value decomposition (SVD)). We tested the effect of trace spacing/FT using the values (0, 0.5, 25 and 50) m for 2D land data: the smaller the trace spacing/FT value, the larger the maximum frequency F max. We compared input and output to see the difference made by applying pie-slice rejection. Fig. (1.2-2) shows the analysis applied to the raw data. In chapter 3, we deal with deconvolution.
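The F-K analysis of chapter 2 can be sketched with a 2-D FFT. The function below is an illustrative reconstruction (not the thesis software), showing how the trace spacing dx sets the wavenumber axis and the sample interval dt sets the frequency axis:

```python
import numpy as np

def fk_spectrum(data, dt, dx):
    """Amplitude F-K spectrum of a 2-D gather.

    data : 2-D array, shape (n_samples, n_traces)
    dt   : sample interval in seconds
    dx   : trace spacing in metres
    """
    nt, nx = data.shape
    spec = np.abs(np.fft.fftshift(np.fft.fft2(data)))
    f = np.fft.fftshift(np.fft.fftfreq(nt, d=dt))   # temporal frequency (Hz)
    k = np.fft.fftshift(np.fft.fftfreq(nx, d=dx))   # wavenumber (cycles/m)
    return f, k, spec

# synthetic gather: a single dipping linear event
dt, dx = 0.004, 25.0
nt, nx = 256, 64
gather = np.zeros((nt, nx))
for ix in range(nx):
    gather[int(50 + 0.5 * ix), ix] = 1.0            # linear moveout
f, k, spec = fk_spectrum(gather, dt, dx)
# f spans +/- 1/(2*dt) = 125 Hz; k spans +/- 1/(2*dx) = 0.02 cycles/m
```

A dipping linear event maps to a line through the origin of the F-K plane, which is what makes pie-slice (fan) rejection possible.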
Convolution is a mathematical way of combining two signals to produce a third, modified signal. The signals we record respond well to being treated as a series of signals superimposed on each other; that is, seismic signals behave convolutionally. Deconvolution is the reversal of the convolution process. Convolution in the time domain is represented in the frequency domain by multiplying the amplitude spectra and adding the phase spectra, and deconvolution can therefore be performed by reversing this process. The commonest way to perform deconvolution is to design a Wiener filter that transforms one wavelet into another in a least-squares sense; it is applied at least once to most marine seismic data. Attenuation of short-period multiples (most notably reverberations from a relatively flat, shallow water bottom) can be achieved with predictive deconvolution: the periodicity of the multiples is exploited to design an operator which identifies and removes the predictable part of the wavelet, leaving only its non-predictable part (the signal). We apply predictive deconvolution to single traces and examine the effect of operator length on its performance. When the source signature is known, a designature process (which is, in essence, a deterministic deconvolution) can be applied as an alternative or a complement to this step. In our case we take a different approach. First we apply trace-by-trace predictive deconvolution to eliminate some multiple reverberation; here the desired output is a lagged version of the input, so additional lags of the autocorrelation are calculated, and the later lags are used as the cross-correlation of input and desired output. The standard equations are then solved for the prediction operator.
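The Wiener filter design described here can be sketched in a few lines: solve the normal equations built from the input autocorrelation and the input/desired-output cross-correlation. This is a minimal illustration (function names are mine, not the thesis software), with the 1% prewhitening used later in the chapter:

```python
import numpy as np

def wiener_shaping_filter(wavelet, desired, n, prewhiten=0.01):
    """Length-n least-squares (Wiener) filter shaping `wavelet`
    toward `desired`: solves R a = g, where R is the Toeplitz
    autocorrelation matrix of the input and g the cross-correlation
    of the desired output with the input (assumes n <= len(wavelet))."""
    zero = len(wavelet) - 1
    r = np.correlate(wavelet, wavelet, mode="full")[zero:zero + n].astype(float)
    r[0] *= 1.0 + prewhiten                    # percent prewhitening
    R = np.array([[r[abs(i - j)] for j in range(n)] for i in range(n)])
    g = np.correlate(desired, wavelet, mode="full")[zero:zero + n]
    return np.linalg.solve(R, g)

# shape a decaying wavelet toward a zero-lag spike (the spiking case)
w = np.array([1.0, 0.5, 0.25, 0.125])
d = np.array([1.0, 0.0, 0.0, 0.0])
filt = wiener_shaping_filter(w, d, n=4)
out = np.convolve(w, filt)                     # approximately a spike at t=0
```

Choosing a lagged version of the input as the desired output instead of a spike turns the same machinery into predictive deconvolution.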
We tested operator lengths n = 240, 128 and 40 ms, with a prediction lag a = 2 ms and 1% prewhitening, followed by surface-consistent predictive deconvolution to improve the frequency content of the data, applied through the SCDsolve flow; we then compared the results for the different operator lengths. Next we applied surface-consistent deconvolution of predictive type and compared the results after predictive and surface-consistent predictive deconvolution, and after spectral balancing, for the operators (240 ms, 128 ms and 40 ms). We then studied the effect of operator length on spiking deconvolution of single traces. The data used to design the operator may lie in a time window that slopes with trace offset. Spiking deconvolution is a standard Wiener-Levinson algorithm: the autocorrelation of the design gate (a segment of the trace which normally varies with offset, because deconvolution is done before normal moveout (NMO) is computed) is taken, with a specified taper applied to the design gate beforehand; the standard equations are set up, prewhitening is added to the zero-lag value of the autocorrelation, and the matrix is inverted to derive the spiking operator. First we apply spiking deconvolution, for which the desired output is a zero-lag spike, and the standard equations are solved for the spiking operator. We tested operator lengths n = 240, 128, 40, 20 and 10 ms with 1% prewhitening, followed by surface-consistent deconvolution to improve the frequency content of the data, applying the SCDsolve flow, and compared the results for the different operator lengths.
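The trace-by-trace predictive deconvolution tested above can be sketched as follows (an illustrative reconstruction, not the SCDsolve flow; operator length, lag and prewhitening mirror the n, a = 2 ms and 1% settings, expressed in samples):

```python
import numpy as np

def predictive_decon(trace, n_op, lag, prewhiten=0.01):
    """Predictive deconvolution of a single trace.
    n_op: operator length in samples; lag: prediction lag in samples.
    A prediction filter is designed from autocorrelation lags
    lag..lag+n_op-1 and the predictable part (multiples) subtracted."""
    zero = len(trace) - 1
    r = np.correlate(trace, trace, mode="full")[zero:zero + n_op + lag].astype(float)
    r[0] *= 1.0 + prewhiten                       # percent prewhitening
    R = np.array([[r[abs(i - j)] for j in range(n_op)] for i in range(n_op)])
    p = np.linalg.solve(R, r[lag:lag + n_op])     # prediction operator
    predicted = np.convolve(trace, p)[:len(trace)]
    out = trace.copy()
    out[lag:] -= predicted[:len(trace) - lag]     # prediction error = output
    return out

# a primary spike followed by a decaying reverberation of period 10 samples
trace = np.zeros(40)
for i, amp in enumerate([1.0, -0.5, 0.25, -0.125]):
    trace[10 * i] = amp
out = predictive_decon(trace, n_op=12, lag=2)
# the periodic multiples are strongly attenuated; the primary is untouched
```

Because the operator spans the multiple period (12 samples > 10), the periodicity is captured and the reverberation collapses, which is the mechanism described above for short-period water-bottom multiples.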
We then applied surface-consistent deconvolution of spiking type, comparing the results after spiking and surface-consistent spiking deconvolution, and after spectral balancing, for the operators (240 ms, 128 ms, 40 ms, 20 ms and 10 ms). The surface-consistent deconvolution is computed by taking the autocorrelation of each trace, then the square root of the zero-phase amplitude spectrum, and finally the logarithm. The equation then becomes a sum of factors rather than a series of convolutions, and the sum can be solved by the usual Gauss-Seidel iterative process for the individual shot, receiver, offset, CMP, etc. components; one thus derives an autocorrelation function for each component defined. These components are then input to the SC Deconvolution Apply (surface-consistent deconvolution apply) command, which derives the deconvolution operators and applies them to the input data. While SCD Apply works on the shot, receiver, offset or other defined components, frequency spectra are used to derive a deconvolution operator, and these operators may be convolved with each trace of the input data; in practice, one generally applies only the shot and receiver components of the solution. The effect of dispersion and attenuation on land seismic data can be removed by applying inverse Q filtering, which improves seismic resolution. In this case study, the Q filter was designed by deterministic processing sequences on real data where the signal needs to be compensated for attenuation losses; we applied the Q filter to see the earth's Q effect on seismic waves and the application of inverse Q filtering to real data. Finally in this chapter, we apply spiking and predictive deconvolution to land 2D data (PSTM) to see the effects of the parameters on the output data.
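The surface-consistent decomposition just described (logarithms turn the spectral factors into a sum of components, solved by Gauss-Seidel) can be sketched for shot and receiver terms only; this is a toy reconstruction of the idea, not the SCDsolve/SCD Apply commands:

```python
import numpy as np

def sc_decompose(log_amp, shots, recs, n_iter=20):
    """Gauss-Seidel solution of log_amp[i] ~ s[shots[i]] + r[recs[i]]
    (offset and CMP terms omitted for brevity). Each sweep updates
    every shot term from the current residual, then every receiver
    term."""
    s = np.zeros(shots.max() + 1)
    r = np.zeros(recs.max() + 1)
    for _ in range(n_iter):
        for j in range(len(s)):                 # shot terms
            m = shots == j
            s[j] = np.mean(log_amp[m] - r[recs[m]])
        for k in range(len(r)):                 # receiver terms
            m = recs == k
            r[k] = np.mean(log_amp[m] - s[shots[m]])
    return s, r

# synthetic survey: every shot recorded into every receiver
shots = np.repeat(np.arange(2), 3)
recs = np.tile(np.arange(3), 2)
true_s, true_r = np.array([0.0, 1.0]), np.array([0.0, 0.5, 1.0])
log_amp = true_s[shots] + true_r[recs]
s, r = sc_decompose(log_amp, shots, recs)
# s + r reproduces log_amp (components are unique only up to a constant)
```

Exponentiating the recovered components gives per-shot and per-receiver spectral factors, from which the deconvolution operators are derived.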
Cross-correlation is a statistical measure used to compare two signals as a function of the time shift (lag) between them. Autocorrelation is the special case in which a signal is compared with itself over a range of lags; it is particularly useful for detecting repeating periods within signals in the presence of noise. A wavelet is a short time series (typically fewer than 100 samples) which can be used to represent, for example, the source function. As previously shown, a wavelet can be studied as a time series in the time domain, or in the frequency domain as an amplitude and phase spectrum; for any amplitude spectrum there is an infinite number of time-domain wavelets that can be constructed by varying the phase spectrum, and two special types of phase spectra are of specific interest. In chapter 4 we work with 2D transforms, applying the following processes: (1) F-X/F-K filtering to remove or isolate linear noise and 3D linear noise; (2) forward tau-p transforms; (3) inverse tau-p transforms; (4) forward and inverse radial-trace transforms. Here we apply the (F-K/F-X) filter to 2D and 3D land data to remove or isolate linear noise. A close-to-pie-shaped operator is designed in the F-K domain within a defined apparent-velocity range, together with the temporal- and spatial-frequency ranges of the noise; the operator is converted to the F-X domain and applied using the exact shot/receiver coordinates, and the level of noise removal is controlled by the length of the applied operator. In the 3D case, the operator is applied to shot gathers in moving azimuthal sectors. As a next step, the noise is obtained by subtracting the F-X/F-K filtered data from the initial data, and adaptive subtraction is used to accurately remove the noise without damaging the signal. We then review forward tau-p transforms in the F-K and F-X domains.
The tau-p transform is a special case of the Radon transform in which the data are decomposed as a series of straight lines, which map to points in the tau-p domain; hyperbolic events (e.g. those in shot gathers) map to elliptical curves. The process is also referred to as slant stacking, since the tau-p domain can be produced by stacking the input data along a series of straight lines. A seismic section in the tau-p domain offers an alternative view in which all subsurface reflectors are illuminated by incident energy of a fixed ray parameter. One advantage of working in the tau-p domain is that the different wave modes can be studied as a function of their slowness values p = 1/v, where v is the propagation velocity; the tau-p transform is thus a useful processing tool because it increases the separation between different seismic waves (multiples, ground roll, P- and S-waves, among others). We also review inverse tau-p transforms in the F-K and F-X domains (to remove surface noise); this works the same way as the forward transform, but the inverse tau-p transfer is added to the flow of Fig. (4.2.2-1), as shown in Fig. (4.2.3-1), with the other parameters unchanged. We then review the forward and inverse radial-trace transforms, which we apply to remove surface noise. The radial transform maps the data from the offset-time (X-T) domain to a velocity-time domain: it re-maps the normal X-T seismic domain, with coordinates of source-receiver offset and two-way traveltime, into a domain whose coordinates are apparent velocity and two-way traveltime. Traces in this domain all share the same X-T origin and hence are "radial" with respect to that origin (often the shot origin).
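The slant-stack description of the tau-p transform can be sketched directly; this is a nearest-sample, time-domain version for clarity (production transforms interpolate and often work in the frequency domain):

```python
import numpy as np

def forward_tau_p(data, dt, offsets, p_values):
    """Forward tau-p (slant stack): stack along t = tau + p*x for
    each slowness p. data: (n_samples, n_traces); offsets in metres;
    p in s/m."""
    nt = data.shape[0]
    out = np.zeros((nt, len(p_values)))
    for ip, p in enumerate(p_values):
        for ix, x in enumerate(offsets):
            shift = int(round(p * x / dt))        # moveout in samples
            if 0 <= shift < nt:
                out[:nt - shift, ip] += data[shift:, ix]
    return out

# a linear event t = 0.1 s + p0*x maps to a single point (tau, p0)
dt, p0 = 0.002, 0.0005
offsets = np.arange(40) * 16.0
data = np.zeros((256, 40))
for ix, x in enumerate(offsets):
    data[int(round((0.1 + p0 * x) / dt)), ix] = 1.0
out = forward_tau_p(data, dt, offsets, [0.0003, 0.0005, 0.0007])
# all 40 spikes stack coherently only at tau = 0.1 s, p = 0.0005 s/m
```

Because the linear event collapses to a point at its own slowness while other wave modes land elsewhere, muting in tau-p and inverse-transforming separates the wavefields, as described above.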
Because the radial transform has the same time coordinate as the original X-T domain, the transform can be posed as a simple interpolation of trace samples from X-T time slices to R-T time slices. A major effect of re-mapping seismic data into the R-T domain is that linear events whose apparent velocity and origin coincide with those of the radial-trace trajectories have their apparent frequencies dramatically lowered in the radial domain, while events such as reflections, which do not share apparent velocity and origin with any radial trace, are unaffected. We first create a plot of the forward radial transform to illustrate the principles of the radial-trace transform, together with automatic gain control and an Ormsby band-pass filter, for coherent-noise attenuation and wavefield separation. We then illustrate the use of the forward and reverse radial transforms together to attenuate surface noise: an Ormsby low-pass filter is applied in the radial domain to extract the surface noise, and automatic gain control (AGC) is used to scale the transformed data for display. Fig. (1.2-6) shows the signal-enhancement, 2D F-X deconvolution and filtering methods applied. In chapter 5 we apply processing flows to enhance the signal of 2D and 3D land seismic data: first 2D F-X deconvolution on 2D data, then the Logifer design and the alpha-trimmed mean filter, and finally 3D F-Kx-Ky linear-noise extraction on 3D land data. We work with signal enhancement using the 2D F-X prediction design in Levinson-Durbin filter mode. Seismic noise attenuation is very important for seismic data interpretation and analysis, and we propose a 2D F-X prediction (Levinson-Durbin filter mode) method for random-noise attenuation in 2D land seismic data. The key idea is that F-X deconvolution works on each frequency slice in the frequency-space domain.
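The radial re-mapping described above (same time axis, samples picked along x = v·t) can be sketched with a toy nearest-trace version; this assumes uniform trace spacing starting at zero offset, whereas real implementations interpolate:

```python
import numpy as np

def radial_forward(data, dt, dx, velocities):
    """Map X-T data (n_samples, n_traces, spacing dx, first offset 0)
    to R-T: radial trace iv holds, at each time t, the nearest X-T
    sample to offset x = v*t."""
    nt, nx = data.shape
    out = np.zeros((nt, len(velocities)))
    for iv, v in enumerate(velocities):
        for it in range(nt):
            ix = int(round(v * it * dt / dx))   # nearest trace to x = v*t
            if ix < nx:
                out[it, iv] = data[it, ix]
    return out

# a linear event through the origin with apparent velocity 2500 m/s
dt, dx = 0.004, 10.0
data = np.zeros((64, 32))
for it in range(32):
    data[it, it] = 1.0        # x = 2500 * t  ->  trace index = it
out = radial_forward(data, dt, dx, [1250.0, 2500.0, 5000.0])
# along the matching 2500 m/s radial trace the event becomes constant
# (near-DC), so a low-pass filter in R-T isolates it for subtraction
```

This is exactly the frequency-lowering effect described above: coherent noise sharing velocity and origin with a radial trace becomes low-frequency and easy to remove, while reflections are left alone.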
Because the filter is based on linear-prediction theory, it is designed for linear events; F-X deconvolution is therefore applied to a window of data under the assumption that, inside the window, the seismic events are approximately linear. The effect of F-X prediction is harsher on smaller windows, with fewer traces and shorter time intervals. The big disadvantage of F-X prediction is its inability to handle conflicting dips, such as curving structure, so the data are split into sections each containing only a consistent dip before input to F-X prediction. Our approach is to improve the signal using the 2D F-X prediction design in Levinson-Durbin filter mode and to compare the effect of the parameters on the signal. We took raw data and applied the filter twice; the amplitude increased, but when the filter was applied more than three times the signal started to be affected and lost frequency clarity. We then compared Levinson-Durbin filter mode with other filter modes, as shown in Fig. (5.2-8). We also applied F-X prediction design using the F-XY Cadzow filter mode. Cadzow filtering has previously been applied along constant-frequency slices to remove random noise from 2D seismic data; here it is extended to two or more spatial dimensions. The resulting method is superior to both F-XY prediction (deconvolution) and projection filtering, especially for very noisy data: in particular, it preserves signal better and can be made much more aggressive. Logifer logical-filter noise reduction was also applied. Logifer works by performing slant stacks on the input data according to some velocity; these are smoothed according to the operator specification, a probability that the sample belongs to a noise train is calculated at every sample position, and the final output is scaled according to a blend of the input samples, the detected noise samples and their difference. The AtmFilt command applies an alpha-trimmed mean filter.
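The F-X prediction idea described earlier in this chapter, where each constant-frequency slice is predicted trace-to-trace so that linear events survive and spatially random noise is rejected, can be sketched as follows. This is a bare forward-only least-squares version, not the Levinson-Durbin production code, and windowing is omitted:

```python
import numpy as np

def fx_predict(data, filt_len=3):
    """F-X prediction sketch. data: (n_samples, n_traces). Each
    constant-frequency slice is replaced by its forward prediction
    from the previous filt_len traces (first filt_len traces kept)."""
    nt, nx = data.shape
    D = np.fft.rfft(data, axis=0)
    P = D.copy()
    for f in range(D.shape[0]):
        d = D[f]
        # predictor matrix: trace i modelled from traces i-1..i-filt_len
        A = np.column_stack([d[filt_len - 1 - j: nx - 1 - j]
                             for j in range(filt_len)])
        a, *_ = np.linalg.lstsq(A, d[filt_len:], rcond=None)
        P[f, filt_len:] = A @ a
    return np.fft.irfft(P, n=nt, axis=0)

# a noise-free dipping (linear) event is predicted almost exactly,
# since each of its frequency slices is a complex sinusoid in x
nt, nx = 64, 16
t = np.arange(nt)
data = np.array([np.sin(2 * np.pi * 5 * t / nt + 0.3 * ix)
                 for ix in range(nx)]).T
out = fx_predict(data)
```

For noisy input, the unpredictable part of each slice falls in the least-squares residual and is attenuated, which is why the method assumes locally linear events within each window.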
(The median filter is a special case of this filter.) It works by examining the samples of several adjacent traces at each time step; the mean (average) and the median (middle value) are calculated for that time step. Finally in this chapter, we applied 3D F-Kx-Ky linear-noise extraction. The FKxKy command works in shot mode (or on other ensembles, e.g. receiver gathers). A new regular rectangular grid is created according to user-defined X- and Y-offset increments, and rotated if necessary; the grid is filled with the available input data (positioned at the nearest grid node) and zeros at all other locations. The gridded data are then converted to the frequency domain and into the two-dimensional Kx-Ky wavenumber domain. In chapter 6, we apply different kinds of commands and test the effect of each command's parameters. The input for all commands is the same, except that the sort order differs in some cases, because every command needs a specific input sort; only a summary of each command's mathematics is shown. The processes in this section are: the Ormsby band-pass filter (the normal filter to use), the notch filter (removing a single frequency), the Butterworth filter (an approximation to an electronic filter), and the shaping filter. Fig. (1.2-7) shows the filtering methods applied to the raw data, and Fig. (1.2-8) shows the Ormsby band-pass filter applied to our data. In chapter 7, we work through 2D and 3D seismic processing and examine the effects of some parameters on the processing stage. Because the full processing sequences are very long, we display only the processing sections' figures with short explanations. The 2D Land Straight data set is a small 2D data set; its SEG-Y headers are fully populated with all the information needed to begin processing, and the SEG-Y file contains 20 shots with 120 channels per shot. The 2D Land Crooked Line data set, called the Benjamin Creek data, was shot in the foothills of the Canadian Rockies.
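Returning to the AtmFilt of chapter 5: the alpha-trimmed mean can be sketched as below. This is a hypothetical reconstruction of the behaviour described (sort the samples across adjacent traces at each time step, discard a fraction from each end, average the rest); trimming everything but the middle sample gives the median, the special case noted above:

```python
import numpy as np

def alpha_trim_mean(data, n_traces=5, alpha=0.2):
    """Alpha-trimmed mean across adjacent traces at each time step:
    sort the samples in the spatial window, drop round(alpha*n_traces)
    from each end, and average what remains (alpha=0 -> plain mean)."""
    nt, nx = data.shape
    half = n_traces // 2
    trim = int(round(alpha * n_traces))
    out = np.empty_like(data, dtype=float)
    for ix in range(nx):
        lo, hi = max(0, ix - half), min(nx, ix + half + 1)
        window = np.sort(data[:, lo:hi], axis=1)   # sorted per time step
        k = window.shape[1]
        t = min(trim, (k - 1) // 2)                # keep at least one value
        out[:, ix] = window[:, t:k - t].mean(axis=1)
    return out

# a single spiky sample is rejected by the trim
data = np.ones((10, 7))
data[5, 3] = 100.0
out = alpha_trim_mean(data)
# every output sample is 1.0: the outlier never survives the trim
```

The trim makes the filter robust to spiky noise while remaining gentler than a pure median on smoothly varying signal.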
The data set is owned by Husky Oil and Talisman Energy, who released it to aid general industry technological development. There is a total of 39,763 traces comprising 141 shots of approximately 300 traces or fewer each; the sample rate is four milliseconds and there are 3 seconds of data. The surface has considerable topography (elevations) and the structure, while complex, is predominantly 2D. We also use 3D data.
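As a closing sketch, the Ormsby band-pass filter of chapter 6 (described there as the normal filter to use) is a zero-phase trapezoid in the frequency domain. The corner frequencies below are illustrative, and this is a generic reconstruction, not the thesis command:

```python
import numpy as np

def ormsby_bandpass(trace, dt, f1, f2, f3, f4):
    """Zero-phase Ormsby band-pass: trapezoidal amplitude response
    with corner frequencies f1 < f2 < f3 < f4 (Hz), applied by
    multiplication in the frequency domain."""
    n = len(trace)
    freqs = np.fft.rfftfreq(n, d=dt)
    resp = np.zeros_like(freqs)
    up = (freqs >= f1) & (freqs < f2)
    flat = (freqs >= f2) & (freqs <= f3)
    down = (freqs > f3) & (freqs <= f4)
    resp[up] = (freqs[up] - f1) / (f2 - f1)      # low-cut ramp
    resp[flat] = 1.0                             # pass band
    resp[down] = (f4 - freqs[down]) / (f4 - f3)  # high-cut ramp
    return np.fft.irfft(np.fft.rfft(trace) * resp, n=n)

# 5 Hz + 30 Hz test signal, 2 s at dt = 4 ms; pass band 10-15-40-50 Hz
dt, n = 0.004, 500
t = np.arange(n) * dt
trace = np.sin(2 * np.pi * 5 * t) + np.sin(2 * np.pi * 30 * t)
out = ormsby_bandpass(trace, dt, 10.0, 15.0, 40.0, 50.0)
# the 5 Hz component is removed; the 30 Hz component passes unchanged
```

The linear tapers between the corner frequencies are what distinguish the Ormsby trapezoid from a hard box-car cut, limiting time-domain ringing.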
Keywords/Search Tags: Processing