Dissertations / Theses on the topic 'Geostatistical methods'

Consult the top 50 dissertations / theses for your research on the topic 'Geostatistical methods.'


1

Giorgi, Emanuele. "Geostatistical methods for disease prevalence mapping." Thesis, Lancaster University, 2015. http://eprints.lancs.ac.uk/75770/.

Full text
Abstract:
Geostatistical methods are increasingly used in low-resource settings where disease registries are either non-existent or geographically incomplete. In this thesis, which comprises four papers, we address some of the common issues that arise from analysing disease prevalence data. In the first paper we consider the problem of combining data from multiple spatially referenced surveys so as to account for two main sources of variation: temporal variation, when surveys are repeated over time; and data-quality variation, e.g. between randomised and non-randomised surveys. We then propose a multivariate binomial geostatistical model for the combined analysis of data from multiple surveys. We also show an application to malaria prevalence data from three surveys conducted in two consecutive years in Chikwawa District, Malawi, one of which used a more economical convenience sampling strategy. In the second paper, we analyse river-blindness prevalence data from a survey conducted in 20 African countries enrolled in the African Programme for Onchocerciasis Control (APOC). The main challenge of this analysis is computational, as a binomial geostatistical model has to be fitted to more than 14,000 village locations and predictions carried out at about 10 million locations across Africa. To make the computation feasible and efficient, we develop a low-rank approximation based on a convolution-kernel representation which avoids matrix inversion. The third paper is a tutorial on the use of a new R package, "PrevMap", which provides functions for both likelihood-based and Bayesian analysis of spatially referenced prevalence data. In the fourth paper, we present some extensions of the standard geostatistical model for spatio-temporal analysis of prevalence data and modelling of spatially structured zero-inflation. We then describe three applications that have arisen through our collaborations with researchers and public health programmes in African countries.
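For context, the class of model referred to here, the standard binomial geostatistical model for prevalence data, is usually written as follows (a generic formulation, not necessarily the exact specification used in the thesis):

    Y_i \mid S(x_i), Z_i \sim \mathrm{Binomial}\{m_i, p(x_i)\}, \qquad \log\frac{p(x_i)}{1 - p(x_i)} = d(x_i)^{\top}\beta + S(x_i) + Z_i,

where Y_i is the number of positive cases among m_i individuals tested at location x_i, d(x_i) is a vector of covariates with coefficients \beta, S(x) is a zero-mean stationary Gaussian process capturing residual spatial correlation, and the Z_i are independent Gaussian terms capturing non-spatial extra-binomial variation.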
2

Bandarian, Ellen. "Linear transformation methods for multivariate geostatistical simulation." Thesis, Edith Cowan University, Research Online, Perth, Western Australia, 2008. https://ro.ecu.edu.au/theses/191.

Full text
Abstract:
Multivariate geostatistical techniques take into account the statistical and spatial relationships between attributes but can be inferentially and computationally expensive. One way to circumvent these issues is to transform the spatially correlated attributes into a set of decorrelated factors for which the off-diagonal elements of the spatial covariance matrix are zero. This requires the derivation of a transformation matrix that exactly or approximately diagonalises the spatial covariance matrix for all separation distances. The resultant factors can then be analysed using the more straightforward univariate techniques. This thesis is concerned with the investigation of linear decorrelation methods whereby the resulting factors are linear combinations of the original attributes.
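As a point of reference, the simplest linear decorrelation transform is principal component analysis (PCA), which diagonalises the covariance matrix at zero separation distance only; the sketch below is illustrative, not the specific method developed in the thesis, and shows how such a transformation matrix is obtained and applied.

```python
import numpy as np

def pca_decorrelate(data):
    """Decorrelate attributes by diagonalising the zero-lag covariance matrix.

    data: (n_samples, n_attributes) array of collocated measurements.
    Returns the factor scores and the transformation matrix (eigenvectors).
    """
    centred = data - data.mean(axis=0)
    cov = np.cov(centred, rowvar=False)          # zero-lag covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)       # symmetric matrix, so eigh is appropriate
    factors = centred @ eigvecs                  # linear combinations of the original attributes
    return factors, eigvecs

# Example with synthetic correlated attributes
rng = np.random.default_rng(0)
raw = rng.normal(size=(500, 3))
attrs = raw @ np.array([[1.0, 0.6, 0.2], [0.0, 1.0, 0.4], [0.0, 0.0, 1.0]])
factors, A = pca_decorrelate(attrs)
print(np.round(np.cov(factors, rowvar=False), 3))  # off-diagonal terms ~ 0 at zero lag
```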
3

Yates, Scott Raymond. "Geostatistical Methods for Estimating Soil Properties (Kriging, Cokriging, Disjunctive)." Diss., The University of Arizona, 1985. http://hdl.handle.net/10150/187990.

Full text
Abstract:
Geostatistical methods were investigated in order to find efficient and accurate means for estimating a regionalized random variable in space based on limited sampling. The random variables investigated were (1) the bare soil temperature (BST) and crop canopy temperature (CCT) which were collected from a field located at the University of Arizona's Maricopa Agricultural Center, (2) the bare soil temperature and gravimetric moisture content (GMC) collected from a field located at the Campus Agricultural Center and (3) the electrical conductivity (EC) data collected by Al-Sanabani (1982). The BST was found to exhibit strong spatial auto-correlation (typically greater than 0.65 at 0⁺ lagged distance). The CCT generally showed a weaker spatial correlation (values varied from 0.15 to 0.84) which may be due to the length of time required to obtain an "instantaneous" sample as well as wet soil conditions. The GMC was found to be strongly spatially dependent and at least 71 samples were necessary in order to obtain reasonably well behaved covariance functions. Two linear estimators, the ordinary kriging and cokriging estimators, were investigated and compared in terms of the average kriging variance and the sum of squares error between the actual and estimated values. The estimate was obtained using the jackknifing technique. The results indicate that a significant improvement in the average kriging variance and the sum of squares could be expected by using cokriging for GMC and including 119 BST values in the analysis. A nonlinear estimator in one variable, the disjunctive kriging estimator, was also investigated and was found to offer improvements over the ordinary kriging estimator in terms of the average kriging variance and the sum of squares error. It was found that additional information at the estimation site is a more important consideration than whether the estimator is linear or nonlinear. Disjunctive kriging produces an estimator of the conditional probability that the value at an unsampled location is greater than an arbitrary cutoff level. This latter feature of disjunctive kriging is explored and has implications in aiding management decisions.
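For reference, the ordinary kriging estimator that this and several later theses build on predicts the value at an unsampled location x_0 as a weighted linear combination of the n surrounding samples, with weights obtained from the variogram. A generic statement of the system is

    \hat{Z}(x_0) = \sum_{i=1}^{n} \lambda_i Z(x_i), \qquad \sum_{i=1}^{n} \lambda_i = 1, \qquad \sum_{j=1}^{n} \lambda_j \gamma(x_i, x_j) + \mu = \gamma(x_i, x_0), \quad i = 1, \ldots, n,

where \gamma is the variogram and \mu a Lagrange multiplier enforcing unbiasedness; the associated kriging variance is \sigma^2_{OK} = \sum_i \lambda_i \gamma(x_i, x_0) + \mu.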
4

Ghassemi, Ali. "Nonparametric geostatistical estimation of soil physical properties." Thesis, McGill University, 1987. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=63904.

Full text
5

Long, Andrew Edmund. "Cokriging, kernels, and the SVD: Toward better geostatistical analysis." Diss., The University of Arizona, 1994. http://hdl.handle.net/10150/186892.

Full text
Abstract:
Three forms of multivariate analysis, one very classical and the other two relatively new and little-known, are showcased and enhanced: the first is the Singular Value Decomposition (SVD), which is at the heart of many statistical, and now geostatistical, techniques; the second is the method of Variogram Analysis, which is one way of investigating spatial correlation in one or several variables; and the third is the process of interpolation known as cokriging, a method for optimizing the estimation of multivariate data based on the information provided through variogram analysis. The SVD is described in detail, and it is shown that the SVD can be generalized from its familiar matrix (two-dimensional) case to three, and possibly n, dimensions. This generalization we call the "Tensor SVD" (or TSVD), and we demonstrate useful applications in the field of geostatistics (and indicate ways in which it will be useful in other areas). Applications of the SVD to the tools of geostatistics are described: in particular, applications dependent on the TSVD, including variogram modelling in coregionalization. Variogram analysis in general is explored, and we propose broader use of an old tool (which we call the "corhogram", based on the variogram) which proves useful in helping one choose variables for multivariate interpolation. The reasoning behind kriging and cokriging is discussed, and a better algorithm for solving the cokriging equations is developed, which results in simultaneous kriging estimates for comparison with those obtained from cokriging. Links from kriging systems to kernel systems are made; discovering kernels equivalent to kriging systems will be useful in the case where data are plentiful. Finally, some results of the application of geostatistical techniques to a data set concerning nitrate pollution in the West Salt River Valley of Arizona are described.
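As a minimal illustration of the matrix SVD on which this work builds (not the tensor extension developed in the thesis), the sketch below computes the decomposition with NumPy and uses it to form a low-rank approximation.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(8, 5))                 # a generic data matrix

U, s, Vt = np.linalg.svd(A, full_matrices=False)
assert np.allclose(A, U @ np.diag(s) @ Vt)  # exact reconstruction from all singular values

k = 2                                        # keep the two largest singular values
A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]  # best rank-k approximation in the least-squares sense
print("rank-2 approximation error:", np.linalg.norm(A - A_k))
```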
6

Mandallaz, Daniel. "Geostatistical methods for double sampling schemes : application to combined forest inventories /." Zürich : Chair of Forest Inventory and Planning, Swiss Federal Institute of Technology (ETH), 1993. http://e-collection.ethbib.ethz.ch/show?type=habil&nr=19.

Full text
7

Murphy, Mark P. "Geostatistical optimisation of sampling and estimation in a nickel laterite deposit." Thesis, Edith Cowan University, Research Online, Perth, Western Australia, 2003. https://ro.ecu.edu.au/theses/1295.

Full text
Abstract:
Nickel and cobalt are key additives to metal alloys in modern industry. The largest worldwide nickel-cobalt resources occur in nickel laterite deposits that have formed during the chemical weathering of ultramafic rocks at the Earth's surface. At the Murrin Murrin mine in Western Australia, the nickel laterite deposits occur as laterally extensive, undulating blankets of mineralisation with strong vertical anisotropy, near-normal nickel distributions, and positively skewed cobalt distributions. The mineral resources in nickel laterite deposits at Murrin Murrin are usually estimated from drilling and sampling on relatively wide-spaced drill patterns that are supported by local clusters of close-spaced sampling. The combination of deposit geometry and sampling configuration presents several challenges for geostatistical resource estimation methods. In this thesis, close-spaced grade control drill sampling data from Murrin Murrin are used to quantify the estimation effectiveness of the wider-spaced actual exploration pattern used to define the original resource, and of an alternative cost-saving stratified sampling pattern. Additionally, an unfolding of the laterite blanket by vertical data translation prior to nickel and cobalt grade estimation is tested for each exploration pattern. The unfolding essentially removes undulations in the laterite blanket prior to grade estimation by vertical translation of the sample data relative to a surface of high-grade nickel-cobalt connectivity. Unfolding is expected to improve estimation accuracy in terms of grade and volume, as well as improve the quality of variography analyses. The stratified pattern is expected to give similar estimation accuracy to the actual exploration pattern. The effectiveness of ordinary kriging and full indicator kriging estimation algorithms from GSLIB software is compared for the combinations of in situ and unfolded cases of the actual sampling pattern used to define the deposit and an alternative stratified sampling pattern. For each combination, the estimates are made at the data locations of close-spaced grade control 'reality'. The accuracy of each estimate is quantified by comparing the error, degree of bias and pseudo grade-volume relationships of the estimate to the 'reality' data. Additionally, the quality of exploration pattern variography is assessed against the grade continuity of the grade control information. Importantly, the main focus of these comparisons is on the correct estimation of local high-grade nickel and cobalt resources that are preferentially processed in the early years of mining. The results of comparisons between estimation methods and sample configuration combinations show that the combination of unfolding and indicator kriging gives the best correspondence (in terms of grade and volume) of the various estimates to the grade control reality. The results of comparisons between the actual and the alternative stratified exploration pattern show that the cost-saving alternative pattern produces estimates similar to the actual exploration estimates.
8

Daviau, Jean-Luc. "Spatially explicit regional flood frequency analysis using L-moment, GIS and geostatistical methods." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1998. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/mq36680.pdf.

Full text
9

Nowak, Wolfgang. "Geostatistical methods for the identification of flow and transport parameters in the subsurface." Stuttgart Inst. für Wasserbau, 2004. http://deposit.d-nb.de/cgi-bin/dokserv?idn=97474896X.

Full text
10

Nowak, Wolfgang. "Geostatistical methods for the identification of flow and transport parameters in the subsurface." [S.l. : s.n.], 2005. http://www.bsz-bw.de/cgi-bin/xvms.cgi?SWB11759377.

Full text
11

Lee, Si-Yong. "Heterogeneity and transport : geostatistical modeling, non-Fickian transport, and efficiency of remediation methods /." For electronic version search Digital dissertations database. Restricted to UC campuses. Access is free to UC campus dissertations, 2004. http://uclibs.org/PID/11984.

Full text
12

Patton, William. "Modelling of unequally sampled rock properties using geostatistical simulation and machine learning methods." Thesis, Edith Cowan University, Research Online, Perth, Western Australia, 2022. https://ro.ecu.edu.au/theses/2530.

Full text
Abstract:
Important orebody characteristics that determine viability of the mineral resource and ore reserve potential, such as physical properties and mineralogical and geochemical compositions, often vary substantially across an ore deposit. Geometallurgical models aim to capture the spatial relationships between mineral compositions, physical properties of rock and their interactions with mechanical and chemical processes during mining extraction and processing. This characterisation of physical and chemical properties of ores can in turn be used to inform mining and processing decisions that enable the extraction of the maximum value from the ore deposit most efficiently. During the construction of such spatial geometallurgical models, practitioners are presented with many challenges. These include modelling high-dimensional data of various types, including categorical, continuous and compositional attributes, and their uncertainties. Decisions on how to segregate sample data into spatially and statistically homogeneous groups to satisfy modelling assumptions such as stationarity are often a requirement. Secondary properties such as metallurgical test results are often few in number, acquired on larger scales than that of primary rock property data and non-additive in nature. In this thesis a data-driven workflow that aims to address these challenges when constructing geometallurgical models of ore deposits is devised. Spatial machine learning techniques are used to derive geometallurgical categories, or classes, from multiscale, multiresolution, high-dimensional rock properties. In supervised mode these methods are also used to predict geometallurgical classes at samples where rock property information is incomplete. Realisations of the layout of geometallurgical classes and the variabilities of associated rock properties are then mapped using geostatistical simulations and machine learning. The workflow is demonstrated using a case study at Orebody H, a complex stratabound bedded iron ore deposit in Western Australia's Pilbara. A detailed stochastic model of five compositions representing primary rock properties and geometallurgical responses in the form of lump and fines product iron ore quality specifications was constructed. The predicted product grade recoveries are realistic values that honour constraints of the predicted head grade compositions, informed by more abundant and regularly spaced sampling than metallurgical tests. Finally, uncertainties are quantified to assess risk following a confidence-interval-based framework. This could be used to identify zones of high uncertainty where collection of additional data might help mitigate or minimise risks and in turn improve forecast production performance.
13

Adisoma, Gatut Suryoprapto. "The application of the jackknife in geostatistical resource estimation: Robust estimator and its measure of uncertainty." Diss., The University of Arizona, 1993. http://hdl.handle.net/10150/186547.

Full text
Abstract:
The application of the jackknife in geostatistical resource estimation (in conjunction with kriging) is shown to yield two significant contributions. The first one is a robust new estimator, called jackknife kriging, which retains ordinary kriging's simplicity and global unbiasedness while at the same time reducing its local bias and oversmoothing tendency. The second contribution is the ability, through the jackknife standard deviation, to set a confidence limit for a reserve estimate of a general shape. Jackknifing the ordinary kriging estimate maximizes sample utilization, as well as information on sample spatial correlation. The jackknife kriging estimator handles the high-grade smearing problem typical in ordinary kriging by assigning more weight to the closest sample(s). The result is a reduction in the local bias without sacrificing global unbiasedness. When the data distribution is skewed, log transformation of the data prior to jackknifing is shown to improve the estimate by making the data behave better under jackknifing. The technique of the block kriging short-cut, combined with jackknifing, is shown as an easy-to-use solution to the problem of grade estimation of a general three-dimensional digitized shape and the uncertainty associated with the estimate. The results are a single jackknife kriging estimate for the shape and its corresponding jackknife variance. This approach solves the problem of combining independent block estimation variances, and provides a simple way to set confidence levels for global estimates. Unlike the ordinary kriging variance, which is a measure of data configuration and is independent of data values, the jackknife kriging variance reflects the variability of the values being inferred, both on an individual block level and on the global level. Case studies involving two exhaustive (symmetric and highly skewed) data sets indicate the superiority of the jackknife kriging estimator over the original (ordinary kriging) estimator. Some instability of the log-transformed jackknife estimate is noted in the highly skewed situation, where the data do not generally behave well under standard jackknifing. A promising solution for future investigations seems to lie in the use of a weighted jackknife formulation, which should better handle a wider spectrum of data distributions.
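To make the resampling idea concrete, the sketch below shows the generic delete-one jackknife estimate and variance for an arbitrary statistic (here applied to a simple mean; the thesis applies the same principle to kriged estimates).

```python
import numpy as np

def jackknife(values, statistic=np.mean):
    """Delete-one jackknife estimate and variance of an arbitrary statistic."""
    values = np.asarray(values, dtype=float)
    n = len(values)
    # statistic recomputed with each observation left out in turn
    leave_one_out = np.array([statistic(np.delete(values, i)) for i in range(n)])
    theta_dot = leave_one_out.mean()
    # standard jackknife variance formula
    jack_var = (n - 1) / n * np.sum((leave_one_out - theta_dot) ** 2)
    # bias-corrected jackknife estimate
    theta_hat = statistic(values)
    jack_estimate = n * theta_hat - (n - 1) * theta_dot
    return jack_estimate, jack_var

grades = [1.2, 0.8, 1.5, 2.1, 0.9, 1.1, 1.7]
est, var = jackknife(grades)
print(f"jackknife estimate: {est:.3f}, jackknife variance: {var:.3f}")
```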
14

Owaniyi, Kunle Meshach. "Geostatistical Interpolation and Analyses of Washington State AADT Data from 2009 – 2016." Thesis, North Dakota State University, 2019. https://hdl.handle.net/10365/31649.

Full text
Abstract:
Annual Average Daily Traffic (AADT) data is an important tool in the transportation industry today, used in fields such as highway planning, pavement design, traffic safety, transport operations, and policy-making and analysis. A systematic literature review was used to identify and rank the current methods of estimating AADT; ordinary linear kriging occurred most frequently. Factors that influence the accuracy of AADT estimation methods were also identified, including geographical location and road type, amongst others. In addition, further analysis was carried out to determine the most apposite kriging algorithm for AADT data. Three linear (universal, ordinary, and simple), three nonlinear (disjunctive, probability, and indicator) and one Bayesian (empirical Bayesian) kriging method were compared. Spherical and exponential models were employed as the variogram models to aid the spatial interpolation and cross-validation. Statistical measures of correctness (mean prediction and root-mean-square errors) were used to compare the kriging algorithms. Empirical Bayesian kriging with the exponential model yielded the best result.
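The two comparison measures named above are straightforward to compute from cross-validation residuals; a minimal sketch, with illustrative AADT values, is given below.

```python
import numpy as np

def mean_prediction_error(observed, predicted):
    """Mean (signed) prediction error: values near zero indicate unbiased predictions."""
    return np.mean(np.asarray(predicted) - np.asarray(observed))

def root_mean_square_error(observed, predicted):
    """RMSE: overall magnitude of the cross-validation residuals."""
    return np.sqrt(np.mean((np.asarray(predicted) - np.asarray(observed)) ** 2))

aadt_observed  = [12500, 8300, 15100, 6400]
aadt_predicted = [11900, 8700, 14600, 7000]
print("mean prediction error:", mean_prediction_error(aadt_observed, aadt_predicted))
print("RMSE:", root_mean_square_error(aadt_observed, aadt_predicted))
```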
15

Parsons, Robert Lee. "Assessment and optimization of site characterization and monitoring activities using geostatistical methods within a geographic information systems environment." Diss., Georgia Institute of Technology, 1998. http://hdl.handle.net/1853/32847.

Full text
16

Keil, Fabian [Verfasser]. "Investigation of spatial correlation in MR images of human cerebral white matter using geostatistical methods / Fabian Keil." Aachen : Hochschulbibliothek der Rheinisch-Westfälischen Technischen Hochschule Aachen, 2014. http://d-nb.info/1052254632/34.

Full text
17

Šumskis, Donatas. "Soil sampling methods for pH tests in soils of different genesis and relief and geostatistical analysis of data." Doctoral thesis, Lithuanian Academic Libraries Network (LABT), 2011. http://vddb.laba.lt/obj/LT-eLABa-0001:E.02~2011~D_20111207_081512-93669.

Full text
Abstract:
Tasks: 1. To determine a soil sampling method most suitable for pH tests in soils on flat, rolling and hilly relief using regular grid sampling, soil database (Dirv_DB10LT) and soil agrochemical properties database (DirvAgroch_DB10LT). 2. To investigate the suitability of geostatistical methods for spatial distribution of pH data using different soil sampling methods. 3. To determine an impact of different soil sampling methods on spatial distribution of areas to be limed and on the needed amount of lime. Propositions to be defended: 1. Soil sampling plots for detailed pH tests should be shaped using soil database (Dirv_DB10LT) and soil agrochemical properties database (DirvAgroch_DB10LT); in case of high variability of pH values soil sampling plots should be smaller and in case of lower variability of pH values – larger. 2. Interpolation of pH data using IDW, Simple Kriging and Simple Cokriging methods results in decreased share of determined areas of conditionally acid soils when compared to that obtained using not interpolated pH data. 3. The needed amount of lime depends on soil sampling method. Larger needed amount of lime is calculated when soil samples are collected using databases (Dirv_DB10LT) and (DirvAgroch_DB10LT).
18

Savory, Philip John. "Geostatistical methods for estimating iron, silica and alumina grades within the hardcap of the section seven iron deposit, Tom Price." Thesis, Edith Cowan University, Research Online, Perth, Western Australia, 2012. https://ro.ecu.edu.au/theses/515.

Full text
Abstract:
Many iron ore deposits have a weathered zone (Hardcap) near the surface in which grades are highly variable. Estimating the amount of ore-grade material (HG) in this zone is difficult as a result of this variability. The Section Seven Deposit at Tom Price is largely mined out and has production data available in the form of grade blocks that were marked out during mining as HG and non-HG. Hardcap domains and a block model representing them were created, and estimates were made from original exploration data using Ordinary Kriging, Global Change of Support, Indicator Kriging and Median Indicator Kriging techniques. The estimates were compared to the production data. The total HG blocked out in the production data was 6.4 Mt, and the best central estimator of ore was Ordinary Kriging (2.0 Mt). Indicator and Median Indicator Kriging E-type estimates of ore were very similar at ~1.6 Mt. The Global Change of Support estimate was 4.0 Mt. An effective way of seeing the excessive smoothing in the central estimates was to compare the grade tonnage curves. All the central estimates of grade (OK, IK and Median IK E-type) were inaccurate and over-smoothed. Given good quality samples and assays, as well as sound estimation parameters, the accuracy of these methods fundamentally comes down to the amount of data available to estimate from; there is insufficient data to obtain accurate estimates using these techniques. The main information that Indicator Kriging provides is not the E-type estimate but an estimate of the distribution of grades for each block, from which a pseudo-probability that the block is HG can be derived. The pseudo-probability was used to create maps of HG at different probability levels, and there was a good visual match between the production data HG blocks and blocks that had a greater-than-zero chance of being HG. In comparison to the maps of HG generated from Ordinary Kriging, which feature very few HG blocks and many sub-HG blocks, these are a great improvement. Median Indicator Kriging was just as effective as Indicator Kriging in this regard, which is an important point as the former is less work than the latter. Quantitative reconciliation of the Median Indicator Kriging results against the production data showed that blocks with a probability of 0.3 of being HG totalled 6.7 Mt, and 49% of this matched HG production data. This gives rise to a methodology as follows: if OK has been used in estimating hardcap, and if the Global Change of Support estimate indicates that there is a risk of over-smoothing with regard to the HG cut-off, then Median IK should be used to identify areas which have a chance of being HG, followed by a decision on the best way to take advantage of this information. Some possibilities would be to: target these areas for closer-spaced drilling in order to generate an improved OK estimate; use the area defined above to sub-domain the hardcap and re-estimate using OK; or target these areas for mining first as they have a good chance of being HG.
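For readers unfamiliar with indicator kriging, its starting point is an indicator coding of the grades against a cut-off; the sketch below (generic, with an illustrative cut-off) shows the exceedance form of the transform, whose kriged values are read as pseudo-probabilities of exceeding the cut-off.

```python
import numpy as np

def indicator_transform(grades, cutoff):
    """Exceedance indicator coding: 1 where the grade meets or exceeds the cut-off, else 0.

    Kriging these indicators yields, for each block, a value between 0 and 1 that is
    interpreted as a pseudo-probability that the block exceeds the cut-off grade.
    """
    return (np.asarray(grades, dtype=float) >= cutoff).astype(int)

fe_grades = np.array([58.2, 61.7, 55.4, 63.1, 60.2])   # illustrative Fe grades (%)
hg_cutoff = 60.0                                        # hypothetical ore-grade (HG) cut-off
print(indicator_transform(fe_grades, hg_cutoff))        # -> [0 1 0 1 1]
```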
19

Holloway, Jacinta. "Extending decision tree methods for the analysis of remotely sensed images." Thesis, Queensland University of Technology, 2021. https://eprints.qut.edu.au/207763/1/Jacinta_Holloway_Thesis.pdf.

Full text
Abstract:
One UN Sustainable Development Goal focuses on monitoring the presence, growth, and loss of forests. The cost of tracking progress towards this goal is often prohibitive. Satellite images provide an opportunity to use free data for environmental monitoring. However, these images have missing data due to cloud cover, particularly in the tropics. In this thesis I introduce fast and accurate new statistical methods to fill these data gaps. I create spatial and stochastic extensions of decision tree machine learning methods for interpolating missing data. I illustrate these methods with case studies monitoring forest cover in Australia and South America.
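A minimal sketch of the underlying idea, using an off-the-shelf decision tree regressor to predict missing pixel values from pixel coordinates, is given below (illustrative only; the thesis develops spatial and stochastic extensions of such trees).

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)

# Synthetic image: reflectance as a smooth function of position plus noise
xx, yy = np.meshgrid(np.arange(50), np.arange(50))
reflectance = np.sin(xx / 10.0) + np.cos(yy / 12.0) + rng.normal(0, 0.05, xx.shape)

# Simulate cloud cover: 30% of pixels are missing
mask = rng.random(xx.shape) < 0.3
features = np.column_stack([xx.ravel(), yy.ravel()])            # pixel coordinates as predictors
observed = ~mask.ravel()

tree = DecisionTreeRegressor(max_depth=8, random_state=0)
tree.fit(features[observed], reflectance.ravel()[observed])     # train on cloud-free pixels
filled = reflectance.ravel().copy()
filled[~observed] = tree.predict(features[~observed])           # interpolate the gaps
print("RMSE on masked pixels:",
      np.sqrt(np.mean((filled[~observed] - reflectance.ravel()[~observed]) ** 2)))
```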
20

Nowak, Wolfgang [Verfasser]. "Geostatistical methods for the identification of flow and transport parameters in the subsurface / Institut für Wasserbau der Universität Stuttgart. Vorgelegt von Wolfgang Nowak." Stuttgart : Inst. für Wasserbau, 2005. http://d-nb.info/97474896X/34.

Full text
21

Rabeiy, Ragab Elsayed [Verfasser]. "Spatial modeling of heavy metal pollution of forest soils in an historical mining area using geostatistical methods and air dispersion modeling / Ragab Elsayed Rabeiy." Clausthal-Zellerfeld : Universitätsbibliothek Clausthal, 2010. http://d-nb.info/1007800925/34.

Full text
22

Thakur, Jay Krishna [Verfasser], Peter [Akademischer Betreuer] Wycisk, Holger [Akademischer Betreuer] Weiß, and Carsten [Akademischer Betreuer] Lorz. "Methods in groundwater monitoring : strategies based on statistical, geostatistical and hydrogeological modelling and visualization / Jay Krishna Thakur. Betreuer: Peter Wycisk ; Holger Weiß ; Carsten Lorz." Halle, Saale : Universitäts- und Landesbibliothek Sachsen-Anhalt, 2013. http://d-nb.info/1044891890/34.

Full text
23

He, Juan Xia. "Assessing and Improving Methods for the Effective Use of Landsat Imagery for Classification and Change Detection in Remote Canadian Regions." Thesis, Université d'Ottawa / University of Ottawa, 2016. http://hdl.handle.net/10393/34221.

Full text
Abstract:
Canadian remote areas are characterized by a minimal human footprint, restricted accessibility, ubiquitous lichen/snow cover (e.g. Arctic) or continuous forest with water bodies (e.g. Sub-Arctic). Effective mapping of earth surface cover and land cover changes using free medium-resolution Landsat images in remote environments is a challenge due to the presence of spectrally mixed pixels, restricted field sampling and ground truthing, and the often relatively homogenous cover in some areas. This thesis investigates how remote sensing methods can be applied to improve the capability of Landsat images for mapping earth surface features and land cover changes in Canadian remote areas. The investigation is conducted from the following four perspectives: 1) determining the continuity of Landsat-8 images for mapping surficial materials, 2) selecting classification algorithms that best address challenges involving mixed pixels, 3) applying advanced image fusion algorithms to improve Landsat spatial resolution while maintaining spectral fidelity and reducing the effects of mixed pixels on image classification and change detection, and, 4) examining different change detection techniques, including post-classification comparisons and threshold-based methods employing PCA(Principal Components Analysis)-fused multi-temporal Landsat images to detect changes in Canadian remote areas. Three typical landscapes in Canadian remote areas are chosen in this research. The first is located in the Canadian Arctic and is characterized by ubiquitous lichen and snow cover. The second is located in the Canadian sub-Arctic and is characterized by well-defined land features such as highlands, ponds, and wetlands. The last is located in a forested highlands region with minimal built-environment features. The thesis research demonstrates that the newly available Landsat-8 images can be a major data source for mapping Canadian geological information in Arctic areas when Landsat-7 is decommissioned. In addition, advanced classification techniques such as a Support-Vector-Machine (SVM) can generate satisfactory classification results in the context of mixed training data and minimal field sampling and truthing. This thesis research provides a systematic investigation on how geostatistical image fusion can be used to improve the performance of Landsat images in identifying surface features. Finally, SVM-based post-classified multi-temporal, and threshold-based PCA-fused bi-temporal Landsat images are shown to be effective in detecting different aspects of vegetation change in a remote forested region in Ontario. This research provides a comprehensive methodology to employ free Landsat images for image classification and change detection in Canadian remote regions.
24

Ranjineh, Khojasteh Enayatollah [Verfasser], Thomas [Akademischer Betreuer] Ptak-Fix, Martin [Akademischer Betreuer] Sauter, Xavier [Akademischer Betreuer] Emery, and Raimon [Akademischer Betreuer] Tolosana-Delgado. "Geostatistical three-dimensional modeling of the subsurface unconsolidated materials in the Göttingen area : The transitional-probability Markov chain versus traditional indicator methods for modeling the geotechnical categories in a test site / Enayatollah Ranjineh Khojasteh. Gutachter: Martin Sauter ; Xavier Emery ; Raimon Tolosana-Delgado. Betreuer: Thomas Ptak-Fix." Göttingen : Niedersächsische Staats- und Universitätsbibliothek Göttingen, 2013. http://d-nb.info/1044769602/34.

Full text
25

Yeh, T. C. Jim, and Jinqi Zhang. "A Geostatistical Inverse Method for Variably Saturated Flow in the Vadose Zone." Department of Hydrology and Water Resources, University of Arizona (Tucson, AZ), 1995. http://hdl.handle.net/10150/614185.

Full text
Abstract:
A geostatistical inverse technique utilizing both primary and secondary information is developed to estimate conditional means of unsaturated hydraulic conductivity parameters (saturated hydraulic conductivity and pore-size distribution parameters) in the vadose zone. Measurements of saturated hydraulic conductivity and pore-size distribution parameters are considered as the primary information, while measurements of steady-state flow processes (soil-water pressure head and degree of saturation) are regarded as the secondary information. This inverse approach relies on the classical linear predictor (cokriging) theory and takes advantage of the spatial cross-correlation between soil-water pressure head, degree of saturation, saturated hydraulic conductivity, and pore-size distribution parameter. Using an approximate perturbation solution for steady, variably saturated flow under general boundary conditions, the cross-covariances between the primary and secondary information are derived. The approximate solution is formulated based on a first-order Taylor series expansion of a discretized finite element equation. The sensitivity matrix in the solution is evaluated by an adjoint state sensitivity approach for flow in heterogeneous media under variably saturated conditions. Through several numerical examples, the inverse model demonstrates its ability to improve the estimates of the spatial distribution of saturated hydraulic conductivity and pore-size distribution parameters using the secondary information.
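The classical linear (cokriging) predictor referred to in this abstract estimates the parameter at an unsampled location as a weighted sum of both parameter measurements and flow observations; in generic form,

    \hat{f}(x_0) = \sum_{i=1}^{n_f} \lambda_i f(x_i) + \sum_{j=1}^{n_h} \omega_j h(x_j),

where f denotes the primary variable (for example, log saturated hydraulic conductivity), h the secondary flow observations (pressure head or degree of saturation), and the weights \lambda_i and \omega_j solve a cokriging system built from the covariance of f and the cross-covariances between f and h derived from the linearised flow equation.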
26

Zhang, Jinqi, and T. C. Jim Yeh. "An Iterative Geostatistical Inverse Method For Steady-Flow In The Vadose Zone." Department of Hydrology and Water Resources, University of Arizona (Tucson, AZ), 1996. http://hdl.handle.net/10150/614010.

Full text
Abstract:
An iterative stochastic inverse technique utilizing both primary and secondary information is developed to estimate conditional means of unsaturated hydraulic conductivity parameters (saturated hydraulic conductivity and pore-size distribution parameters) in the vadose zone. Measurements of saturated hydraulic conductivity and pore-size distribution parameter are considered as the primary information, while measurements of steady-state flow processes (soil-water pressure head and degree of saturation) are regarded as the secondary information. This inverse approach is similar to the classical geostatistical approach, which utilizes a linear estimator that depends on the cross-covariance and covariance functions of unsaturated hydraulic conductivity parameters and flow processes. The linear estimator is, however, improved successively by solving the governing flow equation and by updating the residual covariance and cross-covariance functions in an iterative manner. Using an approximate perturbation solution for steady, variably saturated flow under general boundary conditions, the covariances of the secondary information and the cross-covariance between the primary and secondary information are derived. The approximate solution is formulated based on a first-order Taylor series expansion of a discretized finite element equation. The sensitivity matrices in the solution are evaluated by an adjoint state sensitivity approach for flow in heterogeneous media under variably saturated conditions. As a result, the nonlinear relationships between unsaturated hydraulic conductivity parameters and flow processes are incorporated in the estimation. Through some numerical examples, the iterative inverse model demonstrates its ability to improve the estimates of the spatial distribution of saturated hydraulic conductivity and pore-size distribution parameters compared to the classical geostatistical inverse approach. In addition, the inconsistency problem existing in the classical geostatistical inverse approach is alleviated. The estimated fields of unsaturated hydraulic conductivity parameters and flow fields not only retain their observed values at sample locations, but satisfy the governing flow equation as well.
27

Nogueira, Neto Joao Antunes 1952. "APPLICATION OF GEOSTATISTICS TO AN OPERATING IRON ORE MINE." Thesis, The University of Arizona, 1987. http://hdl.handle.net/10150/276417.

Full text
Abstract:
The competition in the world market for iron ore has increased lately. Therefore, an improved method of estimating the ore quality in small working areas has become an attractive cost-cutting strategy in short-term mine plans. Estimated grades of different working areas of a mine form the basis of any short-term mine plan. The generally sparse exploration data obtained during the development phase is not enough to accurately estimate the grades of small working areas. Therefore, additional sample information is often required in any operating mine. The findings of this case study show that better utilization of all available exploration information at this mine would improve estimation of small working areas even without additional face samples. Through the use of kriging variance, this study also determined the optimum face sampling grid, whose spacing turned out to be approximately 100 meters as compared to 50 meters in use today. (Abstract shortened with permission of author.)
28

Malama, Bwalya. "Inverse Stochastic Moment Analysis of Transient Flow in Randomly Heterogeneous Media." Diss., The University of Arizona, 2006. http://hdl.handle.net/10150/193932.

Full text
Abstract:
A geostatistical inverse method of estimating hydraulic parameters of a heterogeneous porous medium at discrete points in space, called pilot points, is presented. In this inverse method the parameter estimation problem is posed as a nonlinear optimization problem with a likelihood-based objective function. The likelihood-based objective function is expressed in terms of head residuals at head measurement locations in the flow domain, where head residuals are the differences between measured and model-predicted head values. Model predictions of head at each iteration of the optimization problem are obtained by solving a forward problem that is based on nonlocal conditional ensemble mean flow equations. Nonlocal moment equations make possible optimal deterministic predictions of fluid flow in randomly heterogeneous porous media as well as assessment of the associated predictive uncertainty. In this work, the nonlocal moment equations are approximated to second order in the standard deviation of log-transformed hydraulic conductivity, and are solved using the finite element method. To enhance computational efficiency, computations are carried out in the complex Laplace-transform space, after which the results are inverted numerically to the real temporal domain for analysis and presentation. Whereas a forward solution can be conditioned on known values of hydraulic parameters, inversion allows further conditioning of the solution on measurements of system state variables, as well as the estimation of unknown hydraulic parameters. The Levenberg-Marquardt algorithm is used to solve the optimization problem. The inverse method is illustrated through two numerical examples where parameter estimates and the corresponding predictions of system state are conditioned on measurements of head only, and on measurements of head and log-transformed hydraulic conductivity with prior information. An example in which predictions of system state are conditioned only on measurements of log-conductivity is also included for comparison. A fourth example is included in which the estimation of spatially constant specific storage is demonstrated. In all the examples, a superimposed mean uniform and convergent transient flow field through a bounded square domain is used. The examples show that conditioning on measurements of both head and hydraulic parameters with prior information yields more reliable (low uncertainty and good fit) predictions of system state than when such information is not incorporated into the estimation process.
29

Moura, Pedro Nuno de Souza. "LSHSIM: A Locality Sensitive Hashing Based Method for Multiple-Point Geostatistics." Pontifícia Universidade Católica do Rio de Janeiro, 2017. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=32005@1.

Full text
Abstract:
Reservoir modeling is a very important task that permits the representation of a geological region of interest. Given the uncertainty involved in the process, one wants to generate a considerable number of possible scenarios so as to find those which best represent this region. There is therefore a strong demand for quickly generating each simulation. Since its inception, many methodologies have been proposed for this purpose and, in the last two decades, multiple-point geostatistics (MPS) has been the dominant one. This methodology is strongly based on the concept of the training image (TI) and the use of its characteristics, which are called patterns. In this work, we propose a new MPS method that combines the application of a technique called Locality Sensitive Hashing (LSH), which accelerates the search for patterns similar to a target one, with a Run-Length Encoding (RLE) compression technique that speeds up the calculation of the Hamming similarity. We have performed experiments with both categorical and continuous images which showed that LSHSIM is computationally efficient and produces good-quality realizations, while achieving a reasonable space of uncertainty. In particular, for categorical data, the results suggest that LSHSIM is faster than MS-CCSIM, one of the state-of-the-art methods.
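To illustrate the two ingredients named in the abstract, the sketch below implements bit-sampling locality sensitive hashing for binary patterns (a classic LSH family for Hamming distance) together with a simple run-length encoding; it is a generic illustration, not the LSHSIM implementation itself.

```python
import numpy as np
from collections import defaultdict

rng = np.random.default_rng(42)

def bit_sampling_hash(pattern, positions):
    """Hash a binary pattern by sampling a fixed subset of its positions.

    Patterns that are close in Hamming distance are likely to share a hash key.
    """
    return tuple(pattern[positions])

def run_length_encode(pattern):
    """Run-length encoding of a 1-D binary pattern: list of (value, run length) pairs."""
    runs = []
    current, count = pattern[0], 1
    for value in pattern[1:]:
        if value == current:
            count += 1
        else:
            runs.append((current, count))
            current, count = value, 1
    runs.append((current, count))
    return runs

# Build an LSH table over patterns extracted from a binary training image
pattern_length, n_patterns, n_sampled_bits = 25, 1000, 6
patterns = rng.integers(0, 2, size=(n_patterns, pattern_length))
sampled_positions = rng.choice(pattern_length, size=n_sampled_bits, replace=False)

table = defaultdict(list)
for idx, p in enumerate(patterns):
    table[bit_sampling_hash(p, sampled_positions)].append(idx)

# Query: perturb a known pattern at one position outside the sampled set,
# so its hash key (and therefore its candidate bucket) is unchanged.
target = patterns[0].copy()
flip = [i for i in range(pattern_length) if i not in sampled_positions][0]
target[flip] ^= 1

candidates = table[bit_sampling_hash(target, sampled_positions)]
best = max(candidates, key=lambda i: np.sum(patterns[i] == target))
print("best candidate:", best,
      "similarity:", np.sum(patterns[best] == target) / pattern_length)
print("RLE of target:", run_length_encode(list(target)))
```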
30

Walker, Matthew James. "Methods for Bayesian inversion of seismic data." Thesis, University of Edinburgh, 2015. http://hdl.handle.net/1842/10504.

Full text
Abstract:
The purpose of Bayesian seismic inversion is to combine information derived from seismic data and prior geological knowledge to determine a posterior probability distribution over parameters describing the elastic and geological properties of the subsurface. Typically the subsurface is modelled by a cellular grid model containing thousands or millions of cells within which these parameters are to be determined. Thus such inversions are computationally expensive due to the size of the parameter space (being proportional to the number of grid cells) over which the posterior is to be determined. Therefore, in practice approximations to Bayesian seismic inversion must be considered. A particular, existing approximate workflow is described in this thesis: the so-called two-stage inversion method explicitly splits the inversion problem into elastic and geological inversion stages. These two stages sequentially estimate the elastic parameters given the seismic data, and then the geological parameters given the elastic parameter estimates, respectively. In this thesis a number of methodologies are developed which enhance the accuracy of this approximate workflow. To reduce computational cost, existing elastic inversion methods often incorporate only simplified prior information about the elastic parameters. Thus a method is introduced which transforms such results, obtained using prior information specified using only two-point geostatistics, into new estimates containing sophisticated multi-point geostatistical prior information. The method uses a so-called deep neural network, trained using only synthetic instances (or 'examples') of these two estimates, to apply this transformation. The method is shown to improve the resolution and accuracy (by comparison to well measurements) of elastic parameter estimates determined for a real hydrocarbon reservoir. It has been shown previously that so-called mixture density network (MDN) inversion can be used to solve geological inversion analytically (and thus very rapidly and efficiently) but only under certain assumptions about the geological prior distribution. A so-called prior replacement operation is developed here, which can be used to relax these requirements. It permits the efficient MDN method to be incorporated into general stochastic geological inversion methods which are free from the restrictive assumptions. Such methods rely on the use of Markov-chain Monte-Carlo (MCMC) sampling, which estimates the posterior (over the geological parameters) by producing a correlated chain of samples from it. It is shown that this approach can yield biased estimates of the posterior. Thus an alternative method which obtains a set of non-correlated samples from the posterior is developed, avoiding the possibility of bias in the estimate. The new method was tested on a synthetic geological inversion problem; its results compared favourably to those of Gibbs sampling (an MCMC method) on the same problem, which exhibited very significant bias. The geological prior information used in seismic inversion can be derived from real images which bear similarity to the geology anticipated within the target region of the subsurface. Such so-called training images are not always available from which this information (in the form of geostatistics) may be extracted. In this case appropriate training images may be generated by geological experts. However, this process can be costly and difficult. Thus an elicitation method (based on a genetic algorithm) is developed here which obtains the appropriate geostatistics reliably and directly from a geological expert, without the need for training images. 12 experts were asked to use the algorithm (individually) to determine the appropriate geostatistics for a physical (target) geological image. The majority of the experts were able to obtain a set of geostatistics which were consistent with the true (measured) statistics of the target image.
31

Ward, Clint. "Compositions, logratios and geostatistics: An application to iron ore." Thesis, Edith Cowan University, Research Online, Perth, Western Australia, 2015. https://ro.ecu.edu.au/theses/1581.

Full text
Abstract:
Common implementations of geostatistical methods, kriging and simulation, ignore the fact that geochemical data are usually reported in weight percent, sum to a constant, and are thus compositional in nature. The constant sum implies that rescaling has occurred and this can be shown to produce spurious correlations. Compositional geostatistics is an approach developed to ensure that the constant sum constraint is respected in estimation while removing dependencies on the spurious correlations. This study tests the applicability of this method against the commonly implemented ordinary cokriging method. The sample data are production blast cuttings analyses drawn from a producing iron ore mine in Western Australia. Previous studies using the high spatial density blast hole data and compositional geostatistical approach returned encouraging results, results other practitioners suggested were due to the high spatial density. This assertion is tested through sub-sampling of the initial data to create four subsets of successively lower spatial densities representing densities, spacings, and orientations typical of the different stages of mine development. The same compositional geostatistical approach was then applied to the subsets using jack-knifing to produce estimates at the removed data locations. Although other compositional geostatistical solutions are available, the additive logratio (alr) approach used in this study is the simplest to implement using commercially available software. The advantages of the logratio methodology are the removal of the constant sum constraint, allowing the resulting quantities to range freely within the real space and, importantly, the use of many proven statistical and geostatistical methods. The back transformation of linear combinations of these quantities and associated estimation variances to the constrained sample space is known to be biased; this study used numerical integration by Gauss-Hermite quadrature to overcome this drawback. The Aitchison and Euclidean distances were used to quantify both the univariate and compositional errors between the estimates and original sample values from each estimation method. The errors of each method are analysed using common descriptive and graphical criteria including the standardised residual sum of squares and an assessment of the accuracy and precision. The highest spatial density dataset is equally well reproduced by either method. The compositional method is generally more accurate and precise than the conventional method. In general the compositional error analyses favour the compositional techniques, producing more geologically plausible results, and which sum to the required value. The results support the application of the logratio compositional methodology to low spatial density data over the commonly implemented ordinary cokriging.
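For concreteness, the additive logratio transform mentioned above maps a D-part composition to D-1 unconstrained values using one part as the divisor; a minimal sketch (with an illustrative iron ore composition) is shown below.

```python
import numpy as np

def alr(composition):
    """Additive logratio transform: log of each part divided by the last part.

    Maps a D-part composition (positive values summing to a constant) to D-1
    unconstrained coordinates in real space.
    """
    composition = np.asarray(composition, dtype=float)
    return np.log(composition[..., :-1] / composition[..., -1:])

def alr_inverse(coordinates, total=100.0):
    """Back-transform alr coordinates to a composition summing to `total`."""
    expanded = np.concatenate([np.exp(coordinates), np.ones_like(coordinates[..., :1])], axis=-1)
    return total * expanded / expanded.sum(axis=-1, keepdims=True)

# Illustrative composition (weight %): Fe, SiO2, Al2O3, LOI, remainder
sample = np.array([62.0, 4.5, 2.3, 6.2, 25.0])
coords = alr(sample)
print(coords)
print(alr_inverse(coords, total=sample.sum()))   # recovers the original composition
```

Note that, as the abstract points out, applying this simple back-transformation to kriged (averaged) alr values is biased, which is why the study uses Gauss-Hermite quadrature for the back transformation of estimates.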
32

Lebrenz, Hans-Henning [Verfasser], and András [Akademischer Betreuer] Bárdossy. "Addressing the input uncertainty for hydrological modeling by a new geostatistical method / Hans-Henning Lebrenz. Betreuer: András Bárdossy." Stuttgart : Universitätsbibliothek der Universität Stuttgart, 2013. http://d-nb.info/1032171049/34.

Full text
33

Morakinyo, Jimoh Akindele. "Development of an optimal hazard assessment method for contaminated sites." Thesis, University of Newcastle Upon Tyne, 2001. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.369754.

Full text
34

Ellabad, Yasin Ramadan. "A method for reservoirs modeling incorporating geostatistical models of flow-storage elements calibrated by dynamic tools : Nakhla Oil Field, Sirte Basin." Thesis, Heriot-Watt University, 2003. http://hdl.handle.net/10399/279.

Full text
35

Zhang, Wenbing. "A method and program for quantitative description of fracture data and fracture data extrapolation from scanline or wellbore data /." May be available electronically:, 2001. http://proquest.umi.com/login?COPT=REJTPTU1MTUmSU5UPTAmVkVSPTI=&clientId=12498.

Full text
36

Johansson, Björn. "Statistical Methods for Mineral Models of Drill Cores." Thesis, KTH, Matematisk statistik, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-279848.

Full text
Abstract:
In the modern mining industry, new resource-efficient and climate-resilient methods have been gaining traction, and commissioned efforts to improve the efficiency of European mining are further helping towards such goals. Orexplore AB's X-ray technology for analyzing drill cores is currently involved in two such projects. Orexplore AB wishes to incorporate geostatistics (spatial statistics) into their analyzing process in order to further extend the information gained from the mineral data. The geostatistical method implemented here is ordinary kriging, which is an interpolation method that, given measured data, predicts intermediate values governed by prior covariance models. Ordinary kriging facilitates prediction of mineral concentrations on a continuous grid in 1-D up to 3-D. Intermediate values are predicted on a Gaussian process regression line, governed by prior covariances. The covariance is modeled by fitting a model to a calculated experimental variogram. Mineral concentrations are available along the lateral surface of the drill core. Ordinary kriging is implemented to sequentially predict mineral concentrations on shorter sections of the drill core, one mineral at a time. Interpolation of mineral concentrations is performed on the data considered in 1-D and 3-D. The validation is performed by calculating the corresponding density at each section that concentrations are predicted on and comparing each such value to measured densities. The performance of the model is evaluated by subjective visual evaluation of the fit of the interpolation line and its smoothness, together with the variance. Moreover, the fit is tested through cross-validation using different metrics that evaluate the variance and prediction errors of different models. The results show that this method accurately reproduces the measured concentrations while performing well according to the above-mentioned metrics, but does not outperform the measured concentrations when evaluated against the measured densities. However, the method was successful in providing information on the minerals in the drill core by producing mineral concentrations on a continuous grid. The method also produced mineral concentrations in 3-D that reproduced the measured densities well. It can be concluded that ordinary kriging implemented according to the methodology described in this report efficiently produces mineral concentrations that can be used to obtain information on the distribution of concentrations in the interior of the drill core.
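The variogram-fitting step described above can be illustrated with a short sketch: an experimental variogram is computed from data spaced along the core, and a standard model (here exponential, as one possibility) is fitted to it. This is a generic illustration, not Orexplore's implementation.

```python
import numpy as np
from scipy.optimize import curve_fit

def experimental_variogram(positions, values, lags, tol):
    """Classical (Matheron) experimental variogram for 1-D positions."""
    positions, values = np.asarray(positions), np.asarray(values)
    d = np.abs(positions[:, None] - positions[None, :])          # pairwise separations
    sq = 0.5 * (values[:, None] - values[None, :]) ** 2          # half squared differences
    gamma = []
    for h in lags:
        mask = (d > h - tol) & (d <= h + tol) & (d > 0)
        gamma.append(sq[mask].mean() if mask.any() else np.nan)
    return np.array(gamma)

def exponential_model(h, nugget, sill, practical_range):
    """Exponential variogram model with a practical range."""
    return nugget + sill * (1.0 - np.exp(-3.0 * h / practical_range))

# Illustrative concentration data along a core (positions in metres)
rng = np.random.default_rng(3)
pos = np.sort(rng.uniform(0, 30, 200))
conc = 5 + np.cumsum(rng.normal(0, 0.1, 200))                    # spatially correlated toy signal

lags = np.arange(0.5, 10.5, 0.5)
gamma = experimental_variogram(pos, conc, lags, tol=0.25)
params, _ = curve_fit(exponential_model, lags, gamma, p0=[0.01, gamma[-1], 5.0], maxfev=10000)
print("fitted nugget, sill, range:", np.round(params, 3))
```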
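
As a rough illustration of the interpolation described in this abstract, the Python sketch below solves a 1-D ordinary kriging system under an assumed spherical variogram; the concentration values, range and nugget are invented for the example and are not Orexplore data.

import numpy as np

def spherical_cov(h, sill=1.0, rng_m=5.0, nugget=0.05):
    # Covariance implied by a spherical variogram: C(h) = sill - gamma(h)
    h = np.abs(h)
    gamma = np.where(h < rng_m,
                     nugget + (sill - nugget) * (1.5 * h / rng_m - 0.5 * (h / rng_m) ** 3),
                     sill)
    gamma = np.where(h == 0, 0.0, gamma)
    return sill - gamma

def ordinary_kriging(x_obs, z_obs, x_new, cov=spherical_cov):
    n = len(x_obs)
    # Ordinary kriging system: [[C, 1], [1', 0]] [w, mu]' = [c0, 1]'
    K = np.ones((n + 1, n + 1))
    K[:n, :n] = cov(x_obs[:, None] - x_obs[None, :])
    K[n, n] = 0.0
    sill = cov(np.array([0.0]))[0]
    preds, variances = [], []
    for x0 in np.atleast_1d(x_new):
        rhs = np.ones(n + 1)
        rhs[:n] = cov(x_obs - x0)
        sol = np.linalg.solve(K, rhs)
        w, mu = sol[:n], sol[n]
        preds.append(w @ z_obs)
        variances.append(sill - w @ rhs[:n] - mu)   # ordinary kriging variance
    return np.array(preds), np.array(variances)

# Invented 1-D example: sparse concentrations (percent) measured along a core (metres)
x_obs = np.array([0.0, 0.7, 1.5, 2.2, 3.1, 4.0])
z_obs = np.array([2.1, 2.4, 1.8, 1.2, 1.6, 2.0])
x_new = np.linspace(0.0, 4.0, 41)
z_hat, z_var = ordinary_kriging(x_obs, z_obs, x_new)
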
APA, Harvard, Vancouver, ISO, and other styles
37

Yildirim, Akbas Ceylan. "Determination Of Flow Units For Carbonate Reservoirs By Petrophysical - Based Methods." Master's thesis, METU, 2005. http://etd.lib.metu.edu.tr/upload/12606343/index.pdf.

Full text
Abstract:
Characterization of carbonate reservoirs by flow units is a practical way of reservoir zonation. This study presents a petrophysical-based method that uses well logging and core plug data to delineate flow units within the most productive carbonate reservoir of the Derdere Formation in the Y field, Southeast Turkey. The Derdere Formation is composed of limestones and dolomites. Logs from the 5 wells are the starting point for the reservoir characterization. The general geologic framework obtained from the logs points to discriminations within the formation. 58 representative core plug data from 4 different wells are utilized to better understand the petrophysical framework of the formation. The plots correlating petrophysical parameters and the frequency histograms suggest the presence of distinctive reservoir trends. These discriminations are also represented in Winland porosity-permeability crossplots, which resulted in clusters for different port-sizes that are responsible for different flow characteristics. Although the correlation between core plug porosity and air permeability yields a good correlation coefficient, the formation has to be studied within units due to differences in port-sizes and reservoir process speed. Linear regression and multiple regression analyses are used for the study of each unit. The analyses are performed using STATGRAPH Version Plus 5.1 statistical software. The permeability models are constructed and their reliabilities are compared by the regression coefficients for predictions in un-cored sections. As a result of this study, 4 different units are determined in the Derdere Formation by using well logging data and core plug analyses with the help of geostatistical methods. The predicted permeabilities for each unit show good correlations with those calculated from core plugs. Highly reliable future estimations can be based on the derived methods.
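
The port-size clustering and permeability regression mentioned above can be sketched as follows; the snippet uses the commonly quoted form of the Winland R35 relation and a simple log-linear porosity-permeability fit, with invented core-plug values rather than Derdere Formation data.

import numpy as np

def winland_r35(k_air_md, phi_pct):
    # Commonly quoted Winland relation: log10(R35) = 0.732 + 0.588*log10(k) - 0.864*log10(phi)
    # with R35 in microns, air permeability k in millidarcies and porosity phi in percent.
    return 10 ** (0.732 + 0.588 * np.log10(k_air_md) - 0.864 * np.log10(phi_pct))

# Invented core-plug data (not the Derdere Formation measurements)
phi = np.array([8.0, 12.0, 15.0, 20.0, 25.0])     # porosity, %
k = np.array([0.5, 3.0, 12.0, 80.0, 300.0])       # air permeability, md

r35 = winland_r35(k, phi)
# Port-size classes often drawn on Winland plots (class boundaries in microns)
port_class = np.digitize(r35, [0.1, 0.5, 2.0, 10.0])

# Simple log-linear permeability model for one flow unit: log10(k) = a + b * phi
b, a = np.polyfit(phi, np.log10(k), 1)
k_predicted = 10 ** (a + b * phi)
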
APA, Harvard, Vancouver, ISO, and other styles
38

Rodriguez-Vilca, Juliet, Jose Paucar-Vilcañaupa, Humberto Pehovaz-Alvarez, Carlos Raymundo, Nestor Mamani-Macedo, and Javier M. Moguerza. "Method for the Interpretation of RMR Variability Using Gaussian Simulation to Reduce the Uncertainty in Estimations of Geomechanical Models of Underground Mines." Springer, 2020. http://hdl.handle.net/10757/656171.

Full text
Abstract:
The full text of this work is not available in the UPC Academic Repository due to restrictions imposed by the publishing house where it was published.
The application of conventional techniques, such as kriging, to model rock mass is limited because rock mass spatial variability and heterogeneity are not considered in such techniques. In this context, as an alternative solution, the application of the Gaussian simulation technique to simulate rock mass spatial heterogeneity based on the rock mass rating (RMR) classification is proposed. This research proposes a methodology that includes a variographic analysis of the RMR in different directions to determine its anisotropic behavior. In the case study of an underground deposit in Peru, the geomechanical record data compiled in the field were used. A total of 10 simulations were conducted, with approximately 6 million values for each simulation. These were calculated and verified, and an absolute mean error of only 3.82% was estimated, which is acceptable when compared with the value of 22.15% obtained with kriging.
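
A minimal Python sketch of the conditional Gaussian simulation idea discussed in this abstract is given below; the exponential covariance model, observation locations and normal-score RMR values are assumptions made for illustration and do not come from the paper.

import numpy as np

rng = np.random.default_rng(0)

def exp_cov(h, sill=1.0, a=30.0):
    # Exponential covariance model assumed for the (normal-score) RMR variogram
    return sill * np.exp(-np.abs(h) / a)

# Hypothetical RMR observations along a drift (metres), already normal-score transformed
x_obs = np.array([0.0, 25.0, 60.0, 110.0, 150.0])
z_obs = np.array([-0.8, -0.2, 0.5, 1.1, 0.3])
x_sim = np.linspace(0.0, 160.0, 81)               # simulation grid

C_oo = exp_cov(x_obs[:, None] - x_obs[None, :])
C_so = exp_cov(x_sim[:, None] - x_obs[None, :])
C_ss = exp_cov(x_sim[:, None] - x_sim[None, :])

# Conditional Gaussian distribution of the grid values given the observations
A = np.linalg.solve(C_oo, C_so.T).T               # C_so @ inverse(C_oo)
cond_mean = A @ z_obs
cond_cov = C_ss - A @ C_so.T
cond_cov = 0.5 * (cond_cov + cond_cov.T) + 1e-8 * np.eye(len(x_sim))
L = np.linalg.cholesky(cond_cov)

# Ten conditional realizations, e.g. for mapping the variability of the rock mass rating
realizations = cond_mean[None, :] + rng.standard_normal((10, len(x_sim))) @ L.T
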
APA, Harvard, Vancouver, ISO, and other styles
39

Castioni, Guilherme Adalberto Ferreira 1985. "Variabilidade espacial de atributos do solo e produtividade do feijoeiro em função da geoforma da paisagem e da irrigação por pivô central." [s.n.], 2013. http://repositorio.unicamp.br/jspui/handle/REPOSIP/256788.

Full text
Abstract:
Advisors: Zigomar Menezes de Souza, Reginaldo Barbosa da Silva
Dissertation (Master's) - Universidade Estadual de Campinas, Faculdade de Engenharia Agrícola
Abstract: The use of irrigation has promoted changes in the type and valence of the ions present in the soil solution, and hence in soil pH and in the critical flocculation point of particles. The imbalance and interaction of these factors can cause changes in clay dispersion, altering the soil structure. The objective of this work was to determine the soil delta pH and its relationship to the degree of flocculation and dispersion of clay in the soil, and then to assess the effect of clay migration on soil densification, as well as the effect of increased soil compaction on root development and on the yield achieved by the common bean crop. The experiment was carried out in the region of Cristalina-GO, in an area irrigated under a center pivot, with geographic coordinates 16°53'35.59" south latitude and 47°32'16.75" west longitude, at 1,021 m altitude; the soil was classified as an Ultisol (Argissolo Vermelho-Amarelo). The soil physical attributes were collected at the intersections of a grid with regular 10 m intervals between points, in three positions along the slope of the 1.8 ha area, at depths of 0.00-0.10, 0.10-0.20 and 0.20-0.30 m under a center pivot; at each slope position, i.e. upper, middle and lower thirds, 60 points were collected, totaling 180 points. Data were subjected to descriptive statistics and to geostatistical analysis considering spherical, exponential, linear and Gaussian models, which were then used to produce kriged isoline maps. The results confirm the predominance of net negative charge, indicated by the delta pH values obtained, which caused a high degree of flocculation and dispersion in the soil and led to clay movement along the slope, contributing to the increase in soil bulk density and soil penetration resistance; soil compaction limited the reach of the bean root system into deeper layers, resulting in significant losses in bean yield
Master's
Water and Soil
Master in Agricultural Engineering
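
The variogram-model comparison step mentioned in this abstract (spherical, exponential and Gaussian models fitted to an experimental semivariogram before kriging) might look like the Python sketch below; the lag distances, semivariances, model parameterisations and starting values are hypothetical, not the study's data.

import numpy as np
from scipy.optimize import curve_fit

def spherical(h, nugget, sill, a):
    h = np.asarray(h, dtype=float)
    g = nugget + (sill - nugget) * (1.5 * h / a - 0.5 * (h / a) ** 3)
    return np.where(h < a, g, sill)

def exponential(h, nugget, sill, a):
    return nugget + (sill - nugget) * (1.0 - np.exp(-3.0 * np.asarray(h, dtype=float) / a))

def gaussian(h, nugget, sill, a):
    return nugget + (sill - nugget) * (1.0 - np.exp(-3.0 * (np.asarray(h, dtype=float) / a) ** 2))

# Invented experimental semivariogram of a soil attribute sampled on a 10 m grid
lags = np.array([10, 20, 30, 40, 50, 60, 80, 100], dtype=float)
gamma = np.array([0.12, 0.21, 0.28, 0.33, 0.35, 0.37, 0.38, 0.38])

best = None
for name, model in [("spherical", spherical), ("exponential", exponential), ("gaussian", gaussian)]:
    params, _ = curve_fit(model, lags, gamma, p0=[0.05, 0.4, 50.0], bounds=(1e-6, np.inf))
    rss = np.sum((gamma - model(lags, *params)) ** 2)
    if best is None or rss < best[2]:
        best = (name, params, rss)

print("best model:", best[0], "nugget, sill, range:", best[1])
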
APA, Harvard, Vancouver, ISO, and other styles
40

Cobb, Matthew. "Recoverable resources calculation using non-linear methods: a comparative study." Thesis, Edith Cowan University, Research Online, Perth, Western Australia, 2016. https://ro.ecu.edu.au/theses/1809.

Full text
Abstract:
The prediction of recoverable resources at an operating manganese mine is currently undertaken using univariate ordinary kriging of the target variable manganese and 5 deleterious variables. Input data densities at the time of this calculation are considerably lower than at the time of final selection (grade control), and the potential for unacceptable conditional bias to be introduced through the use of linear geostatistical methods when determining grade estimates over a small support has led to an assessment of the potential benefit of employing the local change of support methods Localised Uniform Conditioning (LUC) and Conditional Simulation (CS). Allowances for the operating conditions, including time frames for estimation / simulation, and likely software limitations are accounted for by also requiring decorrelation to be used in instances where the data are considered in a multivariate sense. A novel method for decorrelation of geostatistical datasets, Independent Components Analysis (ICA), is compared against the more common method of Minimum-Maximum Autocorrelation Factorisation (MAF). ICA performs comparably to MAF in terms of its ability to diagonalise the variance-covariance matrix of the test dataset over multiple lags, for a variety of input data densities and treatments (log-ratio transformed and raw oxide data). Based on these results, ICA decorrelated data were incorporated into a comparative study of LUC and CS against block ordinary kriging (BOK), using an input dataset of reduced density, treated variously as raw univariate oxide data, decorrelated oxide data, and log-ratio transformed decorrelated data. The use of the log-ratio transform, designed to account for the 100% sum constraint inherent to the input data, proved impractical for LUC due to difficulties associated with the discrete Gaussian model change of support method employed by this technique. Log-ratio data transformation was restricted to use with CS, where back transformation to raw oxide space could take place on a pseudo-equivalent support to the input data, prior to change of support. While use of the log-ratio transformation for CS guaranteed adherence to the sum constraint for results (the only method to do so), it resulted in distortion to both the spatial and grade distribution of results. Decorrelation by ICA also posed difficulties, with biases introduced to final back transformed results as a result of the decorrelation algorithm in both log-ratio transformed and oxide data, which in some instances caused impossible negative values to be returned for some variables in the final results. In a comparison of net profit calculations for each method, the distortions introduced from both log-ratio transformation and decorrelation become evident in either overly optimistic or conservative profit distributions for methods in which they were used. Of the results presented, only BOK, CS and LUC of non-decorrelated oxide data appear to show results similar to those which would be used at the operation during final selection (based on ordinary kriging of a complete dataset). Based on the comparison of spatial grade distributions and both net profit spatial distribution and summary, the decision to employ a non-linear method of recoverable resource calculation at the operation in question would be questionable in terms of its reward for effort, given that the current method of BOK appears to produce equivalent results.
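
The MAF-style decorrelation evaluated in this thesis can be sketched with a simple two-stage eigendecomposition; the snippet below is a generic illustration on invented, line-ordered multivariate grades and is not the author's implementation.

import numpy as np

def maf_factors(data, lag=1):
    # Minimum/Maximum Autocorrelation Factorisation of samples ordered along a line
    X = data - data.mean(axis=0)
    cov0 = np.cov(X, rowvar=False)
    vals, vecs = np.linalg.eigh(cov0)
    W1 = vecs @ np.diag(1.0 / np.sqrt(vals)) @ vecs.T     # stage 1: sphere the data (PCA whitening)
    Y = X @ W1
    dY = Y[lag:] - Y[:-lag]                               # stage 2: lag-h increments of the sphered data
    _, vecs2 = np.linalg.eigh(np.cov(dY, rowvar=False))
    A = W1 @ vecs2                                        # full transformation matrix
    return X @ A, A

# Invented correlated grades (e.g. Mn plus deleterious oxides) at ordered sample points
rng = np.random.default_rng(1)
raw = rng.multivariate_normal([40.0, 5.0, 3.0],
                              [[9.0, 3.0, 1.0], [3.0, 4.0, 1.0], [1.0, 1.0, 2.0]], size=200)
factors, A = maf_factors(raw)
print(np.round(np.cov(factors, rowvar=False), 3))         # zero-lag covariance is (numerically) the identity
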
APA, Harvard, Vancouver, ISO, and other styles
41

Kleingeld, Wynand. "La geostatistique pour des variables discretes." Paris, ENMP, 1987. http://www.theses.fr/1987ENMP0064.

Full text
Abstract:
This thesis is concerned with the development of geostatistical estimation techniques for deposits in which the ore is distributed in the form of discrete particles. Given the usual sample sizes, the particle distribution can be extremely skewed, and the specific distribution laws developed for this type of deposit are examined in this thesis. The research focuses on the estimation of local and global reserves, statistical techniques for parameter estimation, confidence interval calculations, isofactorial estimation of bivariate distributions, the influence of the support effect, and the application of various kriging techniques.
APA, Harvard, Vancouver, ISO, and other styles
42

Blanchard, Pierre. "Fast hierarchical algorithms for the low-rank approximation of matrices, with applications to materials physics, geostatistics and data analysis." Thesis, Bordeaux, 2017. http://www.theses.fr/2017BORD0016/document.

Full text
Abstract:
Advanced techniques for the low-rank approximation of matrices are crucial dimension reduction tools in many domains of modern scientific computing. Hierarchical approaches like H2-matrices, in particular the Fast Multipole Method (FMM), benefit from the block low-rank structure of certain matrices to reduce the cost of computing n-body problems to O(n) operations instead of O(n²). In order to better deal with kernels of various kinds, kernel-independent FMM formulations have recently arisen, such as polynomial interpolation based FMM. However, they are hardly tractable for high-dimensional tensorial kernels; therefore we designed a new highly efficient interpolation based FMM, called the Uniform FMM, and implemented it in the parallel library ScalFMM. The method relies on an equispaced interpolation grid and the Fast Fourier Transform (FFT). Performance and accuracy were compared with the Chebyshev interpolation based FMM. Numerical experiments on artificial benchmarks showed that the loss of accuracy induced by the interpolation scheme was largely compensated by the FFT optimization. First, we extended both interpolation based FMMs to the computation of the isotropic elastic fields involved in Dislocation Dynamics (DD) simulations. Second, we used our new FMM algorithm to accelerate a rank-r Randomized SVD and thus efficiently generate multivariate Gaussian random variables on large heterogeneous grids in O(n) operations. Finally, we designed a new efficient dimensionality reduction algorithm based on dense random projection in order to investigate new ways of characterizing biodiversity, namely from a geometric point of view
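
The rank-r randomized SVD mentioned here for generating Gaussian random fields can be sketched in a few lines of Python; the grid size, covariance model and rank below are arbitrary stand-ins, and a dense covariance matrix replaces the FMM-accelerated operator of the thesis.

import numpy as np

rng = np.random.default_rng(2)

def randomized_svd(A, rank, oversample=10):
    # Halko-style randomized SVD: sketch the range, orthonormalise, then SVD the small projection
    omega = rng.standard_normal((A.shape[1], rank + oversample))
    Q, _ = np.linalg.qr(A @ omega)
    U_small, s, Vt = np.linalg.svd(Q.T @ A, full_matrices=False)
    return (Q @ U_small)[:, :rank], s[:rank], Vt[:rank]

# Dense Gaussian covariance on a small 2-D grid (stand-in for the FMM-accelerated operator)
pts = np.stack(np.meshgrid(np.arange(30), np.arange(30)), -1).reshape(-1, 2)
d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
C = np.exp(-((d / 8.0) ** 2))

# For the symmetric positive semi-definite C the truncated SVD acts as an eigendecomposition,
# so B @ B.T approximates C and B maps white noise to a correlated Gaussian field.
U, s, _ = randomized_svd(C, rank=50)
B = U * np.sqrt(s)
field = (B @ rng.standard_normal(B.shape[1])).reshape(30, 30)
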
APA, Harvard, Vancouver, ISO, and other styles
43

Sancevero, Sergio Sacani. "Estudo de aplicação de metodos quantitativos em dados sismicos no processo de caracterização integrada de reservatorios." [s.n.], 2007. http://repositorio.unicamp.br/jspui/handle/REPOSIP/287444.

Full text
Abstract:
Advisor: Armando Zaupa Remacre
Thesis (doctorate) - Universidade Estadual de Campinas, Instituto de Geociencias
Abstract: The reservoir characterization process can currently be considered the most important stage in the exploration, development and production of an oil field. However, this process can only be carried out in the best way if geologists, geophysicists and engineers have knowledge of certain methods and techniques that integrate all the information available about the field. Thus, the aim of this thesis is to study, in a rigorous and quantitative way, the reservoir characterization process from the point of view of the seismic data, evaluating classic and novel methods and defining new methodologies that can be applied decisively in this process. So that these methods could be evaluated conclusively, a synthetic reference model was used that reproduces some critical features of certain reservoirs, such as the complex distribution of sand bodies and sub-seismic thicknesses, characteristics that push traditional modelling techniques to their limits. To characterize the complex features present in the reference model, two interpretation techniques were used: first, seismic inversion, which gives a predictive character to the seismic data, and then multi-attribute analysis, which gives a classificatory character to the seismic interpretation. Among the seismic inversion methods, stochastic or geostatistical inversion proved to be the most efficient technique for characterizing the complex and sub-seismic features present in the model. Regarding the seismic attributes, it could be demonstrated that, even though in some cases they represent the features of the model, they need to be treated with a multivariate approach in order to take advantage of the correlation between them. For the seismic attribute analysis, three methods of multivariate statistical analysis were used, two of them (ICA and MAF) for the first time in the reservoir characterization process. The results show that these two new methods improved the multi-attribute analysis, producing superior results when compared with those obtained by applying the traditional PCA technique. From what has been presented, it can be concluded that the reservoir characterization process is a crucial stage with some difficulties in its accomplishment, unless the methods and techniques involved are known in depth. Only then is it possible to extract the maximum information from the datasets, characterizing the reservoir in a quantitative and integrated manner, optimizing its production and reducing the risks and costs of its exploitation
Doctorate
Administration and Policy of Mineral Resources
Doctor of Sciences
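
As a generic illustration of the multivariate attribute analysis discussed in this abstract, the sketch below computes principal components of a hypothetical table of seismic attributes; PCA is shown only as the baseline that the thesis compares ICA and MAF against, and the attribute values are synthetic.

import numpy as np

# Invented table of seismic attributes at map locations (e.g. amplitude, impedance, frequency, coherence)
rng = np.random.default_rng(3)
mixing = np.array([[1.0, 0.6, 0.2, 0.1],
                   [0.0, 0.8, 0.3, 0.2],
                   [0.0, 0.0, 0.7, 0.1],
                   [0.0, 0.0, 0.0, 0.5]])
attributes = rng.standard_normal((500, 4)) @ mixing

# Principal component analysis via the eigendecomposition of the correlation matrix
Z = (attributes - attributes.mean(axis=0)) / attributes.std(axis=0)
eigvals, eigvecs = np.linalg.eigh(np.corrcoef(Z, rowvar=False))
order = np.argsort(eigvals)[::-1]
scores = Z @ eigvecs[:, order]                    # uncorrelated combinations of the attributes
explained = eigvals[order] / eigvals.sum()        # fraction of variance carried by each component
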
APA, Harvard, Vancouver, ISO, and other styles
44

Fu, Jianlin. "A markov chain monte carlo method for inverse stochastic modeling and uncertainty assessment." Doctoral thesis, Universitat Politècnica de València, 2008. http://hdl.handle.net/10251/1969.

Full text
Abstract:
Unlike the traditional two-stage methods, a conditional and inverse-conditional simulation approach may directly generate independent, identically distributed realizations to honor both static data and state data in one step. The Markov chain Monte Carlo (McMC) method has proved to be a powerful tool for performing this type of stochastic simulation. One of the main advantages of the McMC over the traditional sensitivity-based optimization methods for inverse problems is its power, flexibility and well-posedness in incorporating observation data from different sources. In this work, an improved version of the McMC method is presented to perform the stochastic simulation of reservoirs and aquifers in the framework of multi-Gaussian geostatistics. First, a blocking scheme is proposed to overcome the limitations of the classic single-component Metropolis-Hastings-type McMC. One of the main characteristics of the blocking McMC (BMcMC) scheme is that, depending on the inconsistency between the prior model and the reality, it can preserve the prior spatial structure and statistics as specified by the user. At the same time, it improves the mixing of the Markov chain and hence enhances the computational efficiency of the McMC. Furthermore, the exploration ability and the mixing speed of the McMC are efficiently improved by coupling multiscale proposals, i.e., the coupled multiscale McMC method. In order to make the BMcMC method capable of dealing with high-dimensional cases, a multi-scale scheme is introduced to accelerate the computation of the likelihood, which greatly improves the computational efficiency of the McMC due to the fact that most of the computational effort is spent on the forward simulations. To this end, a flexible-grid full-tensor finite-difference simulator, which is widely compatible with the outputs from various upscaling subroutines, is developed to solve the flow equations and a constant-displacement random-walk particle-tracking method, which enhances the com
Fu, J. (2008). A markov chain monte carlo method for inverse stochastic modeling and uncertainty assessment [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/1969
Palancia
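
A toy version of the blocking Metropolis-Hastings idea described in this abstract is sketched below; a linear operator stands in for the flow simulator, and the prior covariance, block size and step length are illustrative choices rather than values from the thesis.

import numpy as np

rng = np.random.default_rng(5)

# Toy inverse problem: a Gaussian prior field on 50 cells observed through a linear operator G
n = 50
d = np.abs(np.arange(n)[:, None] - np.arange(n)[None, :])
L = np.linalg.cholesky(np.exp(-d / 10.0) + 1e-10 * np.eye(n))
G = rng.standard_normal((8, n)) / np.sqrt(n)      # stands in for the forward (flow) simulator
truth = L @ rng.standard_normal(n)
obs = G @ truth + 0.05 * rng.standard_normal(8)
sigma_obs = 0.05

def log_likelihood(theta):
    # theta are standard-normal weights; the field L @ theta honours the prior covariance
    residual = obs - G @ (L @ theta)
    return -0.5 * np.sum((residual / sigma_obs) ** 2)

# Blocked random-walk Metropolis: perturb a random block of weights at every step
theta = np.zeros(n)
logp = log_likelihood(theta) - 0.5 * theta @ theta
posterior_fields = []
for it in range(5000):
    proposal = theta.copy()
    block = rng.choice(n, size=10, replace=False)
    proposal[block] += 0.3 * rng.standard_normal(10)
    logp_prop = log_likelihood(proposal) - 0.5 * proposal @ proposal
    if np.log(rng.uniform()) < logp_prop - logp:  # Metropolis acceptance rule
        theta, logp = proposal, logp_prop
    if it % 50 == 0:
        posterior_fields.append(L @ theta)        # store a conditioned field realization
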
APA, Harvard, Vancouver, ISO, and other styles
45

Reyes, Gómez Sandra Tatiana. "Avaliação da distribuição espacial de poluentes de origem industrial na bacia hidrográfica Taquari-Antas." Biblioteca Digital de Teses e Dissertações da UFRGS, 2016. http://hdl.handle.net/10183/150545.

Full text
Abstract:
Water resources play a very important role for society and the environment. In terms of society, we relate the multiple uses that are made of them, without forgetting that their main use is for consumption and the supply of primary needs. On the environmental side, we know that they are the pillar for the support and development of biodiversity and the production of biomass on earth. The destination of industrial residues is a concern today: despite industries being obliged to treat their waste before disposing of it into any body of water, this is not being done efficiently. Some of the reasons that lead to this situation are the lack of knowledge of the effects that their residues may cause, which pushes the issue aside, and the high budget required to invest in an industrial wastewater treatment station, considering not only its construction but also the demands of its maintenance. Increasingly, the integration of geostatistical methods, remote sensing and GIS is being used for environmental contamination studies. Their advantages and wide variety of tools allow a first qualified approach to questions and information that are costly and sometimes unknown. Seeking solutions to this issue, and through the principal component analysis technique, a suitable tool was established for diagnosing the spatial distribution of the polluting potential of industrial effluents, with the Taquari-Antas watershed as the study area. A total of 393 industries were classified into 24 sectors. The polluting potential of water metals (MA), water toxics (TA), Biochemical Oxygen Demand (BOD) and Total Suspended Solids (TSS) for the aquatic environment was estimated by The Industrial Pollution Projection System (IPPS) methodology. Concentration values of the pollutants were generated for each month of the year, using a 26-year historical series of stream flows in the watershed. The temporal patterns of the monthly concentrations were verified by means of statistical tests using ANOVA models and Tukey HSD tests for each type of pollutant. The main temporal trend found for the four types of pollutants is the transition from autumn to winter, when there is a large drop in concentration values due to increased river flows (flood season). From spring to autumn the values grow again until they become the highest. Contour maps were then generated for the estimated polluting potential and the monthly concentrations, as well as maps classifying the areas of the watershed according to CONAMA legislation.
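
The IPPS-style load-to-concentration calculation described above reduces to simple arithmetic, sketched below in Python; the sector intensities, employment figures and monthly discharges are invented placeholders, not IPPS coefficients or Taquari-Antas data.

import numpy as np

# Illustrative emission intensities (kg of BOD per 1000 employees per year) and employment by sector
intensity_bod = {"tanning": 4200.0, "food": 1800.0, "metallurgy": 350.0}
employees = {"tanning": 1200, "food": 5400, "metallurgy": 900}

load_kg_year = {s: intensity_bod[s] * employees[s] / 1000.0 for s in employees}
total_load_kg_year = sum(load_kg_year.values())

# Monthly concentration = monthly load divided by the river discharge for that month
discharge_m3s = np.array([310, 280, 250, 200, 160, 120, 100, 110, 150, 220, 270, 300], dtype=float)
seconds_per_month = 30 * 24 * 3600
monthly_load_mg = total_load_kg_year / 12.0 * 1e6
concentration_mg_per_L = monthly_load_mg / (discharge_m3s * seconds_per_month * 1000.0)

The resulting twelve concentration values reproduce the seasonal pattern described in the abstract: lower values in the high-flow months and higher values as discharge drops.
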
APA, Harvard, Vancouver, ISO, and other styles
46

Santos, Karoline Eduarda Lima. "Geoestatística e geoprocessamento aplicados à tomada de decisão agroambiental em um sistema de produção de leite a pasto intensivo." Universidade de São Paulo, 2017. http://www.teses.usp.br/teses/disponiveis/18/18139/tde-27102017-162650/.

Full text
Abstract:
Driven by population growth, the vision of sustainable systems has attracted the attention of various sectors. Being one of the country's main economically active sectors, agriculture has been seeking ways to adapt to this reality. In this context, the Good Agricultural Practices emerge, among which we can mention Precision Agriculture, rotational grazing and environmental management, which, if implemented together, provide better management of the area of interest. The present study aimed to apply the concepts of geostatistics and geoprocessing to obtain management zones in an area of Tanzania grass pasture in São Carlos - SP, and to delimit management units for the application of liming and fertilization, based on the best interpolation method. With the soil analysis results, geostatistical analyses were performed to evaluate the spatial dependence of the chemical attributes, together with cross-validation of the adopted models. The maps were obtained by the ordinary kriging interpolation method, and the definition of the management zones was performed by means of fuzzy logic. From the maps of the soil chemical parameters, the management zone map was generated, resulting in five zones: 0.02 ha (1.2% of the total area) regarded as "very low" fertility; 0.3 ha (18%) as "low" fertility; 0.75 ha (44%) as "medium" fertility; 0.55 ha (32%) as "high" fertility; and 0.08 ha (4.8%) as "very high" fertility. The comparison of the interpolation methods showed that ordinary kriging was the best methodology for the study. Geostatistics and geoprocessing proved to be techniques that support strategic and complex decisions regarding the management of the agricultural production system.
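
The fuzzy-logic zoning step mentioned in this abstract can be illustrated with a short sketch that assigns kriged fertility values to zones using triangular membership functions; the grid, membership shapes and class centres are assumptions for the example only, not the study's configuration.

import numpy as np

def triangular(x, a, b, c):
    # Triangular fuzzy membership with support [a, c] and peak at b
    x = np.asarray(x, dtype=float)
    left = np.clip((x - a) / (b - a), 0.0, 1.0)
    right = np.clip((c - x) / (c - b), 0.0, 1.0)
    return np.minimum(left, right)

# Hypothetical kriged fertility index on a grid, rescaled to the interval 0-1
rng = np.random.default_rng(6)
fertility = np.clip(rng.normal(0.5, 0.2, size=(50, 50)), 0.0, 1.0)

zones = ["very low", "low", "medium", "high", "very high"]
centres = [0.0, 0.25, 0.5, 0.75, 1.0]
membership = np.stack([triangular(fertility, c - 0.25, c, c + 0.25) for c in centres])

zone_map = np.argmax(membership, axis=0)              # zone with the highest membership at each cell
area_fraction = {zones[k]: float(np.mean(zone_map == k)) for k in range(len(zones))}
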
APA, Harvard, Vancouver, ISO, and other styles
47

Goldman, Gretchen Tanner. "Characterization and impact of ambient air pollution measurement error in time-series epidemiologic studies." Diss., Georgia Institute of Technology, 2011. http://hdl.handle.net/1853/41158.

Full text
Abstract:
Time-series studies of ambient air pollution and acute health outcomes utilize measurements from fixed outdoor monitoring sites to assess changes in pollution concentration relative to time-variable health outcome measures. These studies rely on measured concentrations as a surrogate for population exposure. The degree to which monitoring site measurements accurately represent true ambient concentrations is of interest from both an etiologic and regulatory perspective, since associations observed in time-series studies are used to inform health-based ambient air quality standards. Air pollutant measurement errors associated with instrument precision and lack of spatial correlation between monitors have been shown to attenuate associations observed in health studies. Characterization and adjustment for air pollution measurement error can improve effect estimates in time-series studies. Measurement error was characterized for 12 ambient air pollutants in Atlanta. Simulations of instrument and spatial error were generated for each pollutant, added to a reference pollutant time-series, and used in a Poisson generalized linear model of air pollution and cardiovascular emergency department visits. This method allows for pollutant-specific quantification of impacts of measurement error on health effect estimates, both the assessed strength of association and its significance. To inform on the amount and type of error present in Atlanta measurements, air pollutant concentrations were simulated over the 20-county metropolitan area for a 6-year period, incorporating several distribution characteristics observed in measurement data. The simulated concentration fields were then used to characterize the amount and type of error due to spatial variability in ambient concentrations, as well as the impact of use of different exposure metrics in a time-series epidemiologic study. Finally, methodologies developed for the Atlanta area were applied to air pollution measurements in Dallas, Texas with consideration for the impact of this error on a health study of the Dallas-Fort Worth region that is currently underway.
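
The attenuation effect of measurement error on time-series health effect estimates can be demonstrated with a small simulation in the spirit of the approach described above; the error magnitudes, baseline rate and effect size below are illustrative, not values from the dissertation.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n_days = 1000

# Reference pollutant series and a true log-linear effect on daily counts
true_conc = np.exp(rng.normal(2.0, 0.4, n_days))
beta_true = 0.02                                  # log rate ratio per unit concentration
counts = rng.poisson(np.exp(3.0 + beta_true * true_conc))

def fitted_beta(measured):
    X = sm.add_constant(measured)
    return sm.GLM(counts, X, family=sm.families.Poisson()).fit().params[1]

# Add increasing amounts of (classical) measurement error and refit the health model
for sd in [0.0, 1.0, 2.0, 4.0]:
    measured = true_conc + rng.normal(0.0, sd, n_days)
    print(f"error sd={sd}: estimated beta={fitted_beta(measured):.4f} (true {beta_true})")

As the simulated error grows, the estimated coefficient shrinks toward the null, which is the attenuation this work characterizes pollutant by pollutant.
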
APA, Harvard, Vancouver, ISO, and other styles
48

Kiefer, Hua. "Essays on applied spatial econometrics and housing economics." Columbus, Ohio : Ohio State University, 2007. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1180467420.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Negro, Sérgio Ricardo Lima. "Correlação linear e espacial da produtividade da soja com atributos físicos da relação massa volume do solo." Universidade Estadual Paulista (UNESP), 2017. http://hdl.handle.net/11449/152375.

Full text
Abstract:
The spatial variability of soil physical attributes is an important indicator for site-specific management practices in agrosilvopastoral areas. In the 2009/2010 agricultural year, in Selvíria (MS), soybean yield components and physical attributes of the mass/volume relationship of a dystroferric Red Latosol (Typic Acrustox) under a no-tillage system were analyzed in order to find linear and spatial correlations between them. A geostatistical grid was installed to collect the data, totaling 99 sampling points in an area of 10 ha. The soil attributes in the 0.00-0.10 and 0.10-0.20 m layers were: soil bulk density, BD (volumetric ring and paraffin-sealed clod methods), soil particle density, PD (volumetric flask and modified volumetric flask methods), and total soil porosity, TP, calculated from the ratio between BD and PD using the formula TP = (1 - BD/PD). The soybean yield components were: number of pods per plant, number of grains per pod, mass of 100 grains, grain mass per plant, plant population, plant height and soybean yield. Some of the soybean yield components and some of the soil physical attributes revealed spatial dependence, making it possible to map the yield area. Thus, the geostatistical ranges recommended for further research should be between 273 and 526.5 m. Spatially, it was possible to estimate the soybean yield by co-kriging it with the grain mass per plant; with the soil bulk density (volumetric ring method) of the 0.00-0.10 m layer; with the total soil porosity calculated as the ratio between soil bulk density (volumetric ring method) and soil particle density (volumetric flask method) of the 0.10-0.20 m layer; and with the total soil porosity calculated from the soil bulk density (volumetric ring method) and soil particle density (modified volumetric flask method) of the 0.00-0.10 m layer. The grain mass per plant could be estimated by co-kriging it with the soil bulk density (volumetric ring method) of the 0.00-0.10 m layer. Therefore, it was possible to estimate the spatial variability of soybean yield and of grain mass per plant and to map the area in order to propose management strategies aiming to increase soybean yield.
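
A first step toward the co-kriging used in this study is the experimental cross-semivariogram between two co-located variables; the Python sketch below computes it for invented yield and bulk-density values on a hypothetical 99-point grid, only to illustrate the calculation.

import numpy as np

def cross_semivariogram(coords, z1, z2, lag, tol):
    # Experimental cross-semivariogram between two co-located variables at one lag
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    i, j = np.where((d > lag - tol) & (d <= lag + tol))
    return 0.5 * np.mean((z1[i] - z1[j]) * (z2[i] - z2[j]))

# Invented 99-point grid with soybean yield negatively linked to bulk density
rng = np.random.default_rng(4)
coords = rng.uniform(0.0, 300.0, size=(99, 2))                    # metres
bulk_density = 1.3 + 0.15 * rng.standard_normal(99)               # Mg m-3
yield_t_ha = 3.5 - 1.2 * (bulk_density - 1.3) + 0.1 * rng.standard_normal(99)

lags = np.arange(20, 200, 20)
gamma_cross = [cross_semivariogram(coords, yield_t_ha, bulk_density, h, 10.0) for h in lags]
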
APA, Harvard, Vancouver, ISO, and other styles
50

Chen, Zi. "Stochastic Identification of Pollutant Sources in Aquifers by the Ensemble Kalman Filter." Doctoral thesis, Universitat Politècnica de València, 2021. http://hdl.handle.net/10251/160628.

Full text
Abstract:
As part of the data assimilation methods, the ensemble-based methods have gained popularity in hydrogeology given their ability to deal with huge amounts of observed data simultaneously. More recently, researchers have started to employ these methods to deduce contamination source information in synthetic cases. Based on this previous work, we take a step further to evaluate their performance in sandbox experiments. The main objective of this thesis is to verify the capacity of the ensemble-based methods in identifying contaminant sources and complex geological heterogeneity. The thesis can be divided into four parts. In the first part, the restart ensemble Kalman filter (r-EnKF) is used for the spatiotemporal identification of a point contaminant source in a sandbox experiment, together with the identification of the position and length of a vertical plate inserted in the sandbox that modifies the geometry of the system. The results show that the r-EnKF is capable of identifying both contaminant source information and aquifer-geometry-related parameters. The second part shows an application of the restart normal-score ensemble Kalman filter (NS-EnKF) with covariance inflation in a heterogeneous conductivity laboratory experiment. The method is first tested using a synthetic case that mimics the sandbox experiment to establish the minimum number of ensemble members and the best technique to prevent filter collapse. Then, its application to the sandbox data shows that the restart NS-EnKF can benefit from Bauser's inflation to reduce the ensemble size and to arrive at a good joint identification of both the contaminant source and the spatial heterogeneity of conductivities. In the third part, the ensemble smoother with multiple data assimilation (ES-MDA) is employed for the simultaneous identification of a contaminant source and the spatial distribution of hydraulic conductivity while using the r-EnKF as a benchmark. The outcome shows that the ES-MDA is able to outperform the r-EnKF, marginally, for the specific synthetic case analyzed with almost the same CPU consumption, and it can perform far better than the r-EnKF at the cost of larger CPU usage. The fourth and last part investigates the performance of the ES-MDA in a time-varying release history identification problem. The influence of different observation intervals and inflation factor schemes on the determination of the release curve is discussed. The outcome shows that the ES-MDA performs well in recovering the release history when the history curve is discretized in not too many steps, and that it fails when the discretization is large. The frequency at which observation data are sampled is an influential factor in this application, while the number of iterations or the inflation scheme has less effect.
Thanks to the institutions that financed my studies. The support to carry out my work was received from the Spanish Ministry of Economy and Competitiveness through project CGL2014-59841-P, and from the Spanish Ministry of Education, Culture and Sports through a fellowship for the mobility of professors in foreign research and higher education institutions to my supervisor, reference PRX17/00150
Chen, Z. (2020). Stochastic Identification of Pollutant Sources in Aquifers by the Ensemble Kalman Filter [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/160628
TESIS
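
The ensemble update at the core of the EnKF and ES-MDA methods discussed above can be sketched on a toy source-identification problem; the two-parameter release model, ensemble size and inflated observation error below are illustrative assumptions, not the configuration used in the thesis.

import numpy as np

rng = np.random.default_rng(8)

def enkf_update(ensemble, observations, obs_error_sd, forward):
    # One stochastic EnKF analysis step; ensemble has shape (n_ens, n_param)
    n_ens = ensemble.shape[0]
    predicted = np.array([forward(m) for m in ensemble])
    A = ensemble - ensemble.mean(axis=0)
    Y = predicted - predicted.mean(axis=0)
    C_my = A.T @ Y / (n_ens - 1)                                   # parameter-observation cross-covariance
    C_yy = Y.T @ Y / (n_ens - 1) + obs_error_sd ** 2 * np.eye(len(observations))
    K = C_my @ np.linalg.inv(C_yy)                                 # Kalman gain
    perturbed = observations + obs_error_sd * rng.standard_normal((n_ens, len(observations)))
    return ensemble + (perturbed - predicted) @ K.T

# Toy source identification: release strength and start time of a linearly growing plume
def forward(params):
    strength, start = params
    t = np.arange(10.0)
    return strength * np.clip(t - start, 0.0, None)                # concentrations at 10 sampling times

truth = np.array([2.0, 3.0])
obs = forward(truth) + 0.1 * rng.standard_normal(10)

ensemble = np.column_stack([rng.uniform(0.5, 4.0, 200), rng.uniform(0.0, 6.0, 200)])
for _ in range(4):                                                 # ES-MDA style: 4 updates with the
    ensemble = enkf_update(ensemble, obs, 0.1 * np.sqrt(4), forward)   # observation error inflated (alpha = 4)
print(ensemble.mean(axis=0))                                       # should approach the true source parameters
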
APA, Harvard, Vancouver, ISO, and other styles
