Academic literature on the topic 'Multi-Stage Random Sampling'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Multi-Stage Random Sampling.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Multi-Stage Random Sampling"

1

Raina, S. K. "Performing multi stage random sampling in community based surveys." Journal of Postgraduate Medicine 60, no. 2 (2014): 221. http://dx.doi.org/10.4103/0022-3859.132385.

2

Chaudhuri, Arijit. "Estimation in Multi-Stage Sampling with Partially Missing First Stage Units." Calcutta Statistical Association Bulletin 56, no. 1-4 (2005): 113–24. http://dx.doi.org/10.1177/0008068320050507.

Abstract:
Summary: It is a practical problem to suitably estimate a finite population total, providing an estimate of its measure of error, when a sample is taken in multiple stages but some of the chosen first-stage units cannot be covered. Presuming the misses to occur at random, certain estimators of the total, along with estimators of their measures of error, are derived covering varying-probability sampling.
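The scale-up idea behind such missing-PSU corrections can be illustrated in a few lines. The sketch below is a deliberately simplified two-stage design with equal-probability sampling, not Chaudhuri's varying-probability estimators: if only m of the n sampled first-stage units can be covered and the misses occur at random, rescaling the PSU-level estimates by n/m keeps the estimator of the population total unbiased. All sizes and values are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy finite population: N first-stage units (PSUs), each holding some second-stage units.
N = 200
psu_sizes = rng.integers(20, 60, size=N)
psus = [rng.gamma(2.0, 10.0, size=s) for s in psu_sizes]  # y-values per PSU
true_total = sum(p.sum() for p in psus)

n, k = 30, 8                                   # sample n PSUs; subsample k units in each
sampled = rng.choice(N, size=n, replace=False)

# Simulate nonresponse: each sampled PSU is covered with probability 0.8, at random.
covered = [i for i in sampled if rng.random() < 0.8]
m = len(covered)

# Within each covered PSU, estimate its total from an SRS of k units,
# then expand by N/m instead of N/n to compensate for the random misses.
psu_hats = [psu_sizes[i] * rng.choice(psus[i], size=k, replace=False).mean()
            for i in covered]
y_hat = (N / m) * sum(psu_hats)

print(f"true total {true_total:,.0f}  estimate {y_hat:,.0f}  ({m}/{n} PSUs covered)")
```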
3

Gallagher, Michael, and A. R. Unwin. "Electoral Distortion under STV Random Sampling Procedures." British Journal of Political Science 16, no. 2 (1986): 243–53. http://dx.doi.org/10.1017/s0007123400003902.

Abstract:
This Note will discuss the impact of random sampling at elections conducted under the single transferable vote (STV) electoral system in multi-member constituencies in the Republic of Ireland. STV, partly because of its popularity among electoral reformers, has received considerable theoretical scrutiny. It has been given an ‘intermediate’ rating in a recent assessment of a number of electoral systems, and dismissed as a ‘perverse social choice function’ because it is subject to non-monotonicity. This shortcoming is also mainly responsible for the low degree of acceptance accorded to it by Brams and Fishburn. Nurmi concludes that STV (like other multi-stage systems) performs poorly, with regard to a number of criteria, in comparison with one-stage systems like approval voting. Black complains that STV ‘is a compound of minor complexities and is difficult to remember’. Others have discussed shortcomings in STV and suggested remedies which can be implemented where the counting of votes is entirely computerized.
4

Hossan, Dalowar, Zuraina Dato’ Mansor, and Nor Siah Jaharuddin. "Research Population and Sampling in Quantitative Study." International Journal of Business and Technopreneurship (IJBT) 13, no. 3 (2023): 209–22. http://dx.doi.org/10.58915/ijbt.v13i3.263.

Abstract:
The study underscores the paramount importance of meticulous population selection and sampling strategy in research design. Providing researchers with a comprehensive overview of population considerations and sampling methods, it offers a valuable resource for enhancing the robustness and applicability of research outcomes across diverse disciplines. Researchers discuss the unit of analysis, unit of observation, population of interest, target population, sampling framework, and sampling methods in light of employee work engagement in Malaysia. Simple random sampling, stratified random sampling, systematic random sampling, cluster sampling (single-stage, double-stage, and multi-stage), phase sampling (two-phase and multiphase), convenience sampling, purposive sampling, quota sampling, snowball sampling, and volunteer sampling have been discussed for selecting the appropriate sampling method for the research titled Revisiting of JD-R Theory and the effect of leadership style and meaningful work on employee work engagement among the full-time operational employee in Malaysia. According to the discussion on population and sampling methods, researchers use non-probability sampling, specifically convenience sampling techniques, based on the accessibility and availability of the full-time operational employees of successful organisations in Malaysia. Researchers and practitioners alike can leverage the insights presented in this review to make informed decisions about population selection and sampling methods, ultimately contributing to the advancement of credible and impactful research.
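For readers comparing the designs catalogued in this abstract, the sketch below illustrates four of them (simple random, systematic, stratified, and single-stage cluster sampling) on a single hypothetical frame of 1,000 units; the strata and cluster structure are invented.

```python
import numpy as np

rng = np.random.default_rng(1)
N, n = 1000, 100
frame = np.arange(N)                     # sampling frame of unit indices
strata = rng.integers(0, 4, size=N)      # e.g. four departments
clusters = frame // 50                   # 20 clusters of 50 units each

# Simple random sampling: every subset of size n equally likely.
srs = rng.choice(frame, size=n, replace=False)

# Systematic sampling: random start, then every k-th unit.
k = N // n
start = rng.integers(k)
systematic = frame[start::k]

# Stratified random sampling: proportional allocation within each stratum.
sizes = np.bincount(strata, minlength=4)
alloc = (n * sizes / N).round().astype(int)
stratified = np.concatenate([rng.choice(frame[strata == h], size=alloc[h],
                                        replace=False)
                             for h in range(4)])

# Single-stage cluster sampling: pick whole clusters, keep all their units.
chosen = rng.choice(np.unique(clusters), size=2, replace=False)
cluster_sample = frame[np.isin(clusters, chosen)]

print(len(srs), len(systematic), len(stratified), len(cluster_sample))
```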
5

Gong, Y., H. Xie, X. Tong, Y. Jin, X. Xv, and Q. Wang. "AREA ESTIMATION OF MULTI-TEMPORAL GLOBAL IMPERVIOUS LAND COVER BASED ON STRATIFIED RANDOM SAMPLING." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLIII-B4-2020 (August 24, 2020): 103–8. http://dx.doi.org/10.5194/isprs-archives-xliii-b4-2020-103-2020.

Abstract:
Estimating the area of impervious land cover is one of the most useful ecological assessment indexes of urban and regional environments. Global land cover maps are inevitably misclassified, which affects the quality and application of the data. A statistical approach to assessing accuracy is critical for understanding global change information, and area estimation is usually based on sample data with a probability-based estimator. However, an evaluation of multi-temporal global impervious land cover maps has not previously been carried out. In this study, the spatial characteristics of the data are considered in assessing thematic map accuracy with a two-stage stratified random sampling plan. The first stage of stratification is determined by the global urban ecoregion and the second by land cover classes. Additionally, sample sizes at both the map stage and the pixel stage are calculated using a probability sampling model. A response design is constructed for a per-pixel accuracy assessment, and blind interpretation is implemented using sample pixels and their surrounding areas. Our method is applied to the multi-temporal global impervious land cover maps between 2000 and 2010 with a time interval of 5 years, and the estimated areas in the different epochs are listed in detail. The main contribution of our research is illustrating the details of calculating the proportional area of impervious land cover and the corresponding confidence intervals based on the reference classification. The experimental results show that the increase in impervious surface area estimated from the sample units agrees well with urbanization, and descriptive accuracy assessments (user's, producer's, and overall accuracy) are reported.
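The proportion-and-confidence-interval arithmetic underlying such stratified area estimation is standard: p̂ = Σ W_h p̂_h, with estimated variance Σ W_h² p̂_h (1 − p̂_h)/(n_h − 1). The sketch below applies it to made-up strata tallies, not to the authors' data.

```python
import numpy as np

# Hypothetical accuracy-assessment tallies: for each stratum h we know its
# share of the map W_h and, from reference labelling of n_h sample pixels,
# the number found to be truly impervious.
W   = np.array([0.05, 0.15, 0.80])      # stratum weights (urban core, fringe, other)
n_h = np.array([300, 300, 400])         # sample pixels interpreted per stratum
x_h = np.array([270, 120, 20])          # of those, reference-labelled impervious

p_h = x_h / n_h
p_hat = np.sum(W * p_h)                                  # stratified proportion
var   = np.sum(W**2 * p_h * (1 - p_h) / (n_h - 1))       # its estimated variance
half  = 1.96 * np.sqrt(var)                              # 95% normal-theory CI

map_area_km2 = 1.0e6                                     # hypothetical region size
print(f"impervious share {p_hat:.3f} +/- {half:.3f}")
print(f"area estimate {p_hat*map_area_km2:,.0f} km^2 "
      f"({(p_hat-half)*map_area_km2:,.0f} - {(p_hat+half)*map_area_km2:,.0f})")
```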
6

Puerta, Patricia, Lorenzo Ciannelli, and Bethany Johnson. "A simulation framework for evaluating multi-stage sampling designs in populations with spatially structured traits." PeerJ 7 (February 25, 2019): e6471. http://dx.doi.org/10.7717/peerj.6471.

Abstract:
Selecting an appropriate and efficient sampling strategy in biological surveys is a major concern in ecological research, particularly when the population abundance and individual traits of the sampled population are highly structured over space. Multi-stage sampling designs typically present sampling sites as primary units. However, to collect trait data, such as age or maturity, only a sub-sample of individuals collected in the sampling site is retained. Therefore, not only the sampling design, but also the sub-sampling strategy can have a major impact on important population estimates, commonly used as reference points for management and conservation. We developed a simulation framework to evaluate sub-sampling strategies from multi-stage biological surveys. Specifically, we quantitatively compare the precision and bias of the population estimates obtained using two common but contrasting sub-sampling strategies: the random and the stratified designs. The sub-sampling strategy evaluation was applied to age data collection of a virtual fish population that has the same statistical and biological characteristics as the Eastern Bering Sea population of Pacific cod. The simulation scheme allowed us to incorporate contributions of several sources of error and to analyze the sensitivity of the different strategies in the population estimates. We found that, on average across all scenarios tested, the main differences between sub-sampling designs arise from the inability of the stratified design to reproduce spatial patterns of the individual traits. However, differences between the sub-sampling strategies in other population estimates may be small, particularly when large sub-sample sizes are used. On isolated scenarios (representative of specific environmental or demographic conditions), the random sub-sampling provided better precision in all population estimates analyzed. The sensitivity analysis revealed the important contribution of spatial autocorrelation in the error of population trait estimates, regardless of the sub-sampling design. This framework will be a useful tool for monitoring and assessment of natural populations with spatially structured traits in multi-stage sampling designs.
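The core loop of such a framework (generate a spatially structured virtual population, sub-sample it under contrasting designs, compare the estimates) fits in a short script. The sketch below is a heavily simplified stand-in for the authors' simulator: the trait model, sizes, and naive unweighted expansion are invented, but it reproduces the qualitative point that a fixed-quota stratified sub-sample flattens a spatial trait gradient.

```python
import numpy as np

rng = np.random.default_rng(2)
n_sites, catch, sub = 40, 200, 20

# Virtual population: mean age drifts with site location, i.e. a spatially
# structured trait, so sampling sites act as the primary units.
site_x = np.linspace(0, 1, n_sites)
ages = [np.clip(rng.normal(3 + 4 * x, 1.5, size=catch), 1, None).round()
        for x in site_x]
true_mean = np.mean([a.mean() for a in ages])

def random_sub(a):
    # Random sub-sampling of the site's catch.
    return rng.choice(a, size=sub, replace=False)

def stratified_sub(a):
    # Fixed quota per age bin (a stand-in for length-stratified selection).
    bins = [a[(a >= lo) & (a < lo + 2)] for lo in range(1, 15, 2)]
    bins = [b for b in bins if len(b)]
    per = max(1, sub // len(bins))
    return np.concatenate([rng.choice(b, size=min(per, len(b)), replace=False)
                           for b in bins])

for name, f in [("random", random_sub), ("stratified", stratified_sub)]:
    per_site = np.array([f(a).mean() for a in ages])   # naive, unweighted means
    print(f"{name:10s} site 1: {per_site[0]:.1f}  site {n_sites}: {per_site[-1]:.1f}"
          f"  overall: {per_site.mean():.2f}  (true {true_mean:.2f})")
```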
7

Nelson, Gary A. "Bias in common catch-curve methods applied to age frequency data from fish surveys." ICES Journal of Marine Science 76, no. 7 (2019): 2090–101. http://dx.doi.org/10.1093/icesjms/fsz085.

Abstract:
Catch curve analysis is often used in data-limited fisheries stock assessments to estimate total instantaneous mortality (Z). There are now six catch-curve methods available in the literature: the Chapman–Robson, linear regression, weighted linear regression, Heincke, generalized Poisson linear, and random-intercept Poisson linear mixed model. An assumption shared among the underlying probability models of these estimators is that fish collected for ageing are sampled from the population by simple random sampling. This type of sampling is nearly impossible in fisheries research because populations are sampled in surveys that use gears that capture individuals in clusters, and fish for ageing are often selected via multi-stage sampling. In this study, I explored the effects of multi-stage cluster sampling on the bias of the estimates of Z and their associated standard errors. I found that the generalized Poisson linear model and the Chapman–Robson estimators were the least biased, whereas the random-intercept Poisson linear mixed model was the most biased under a wide range of simulation scenarios that included different levels of recruitment variation, intra-cluster correlation, sample sizes, and methods used to generate age frequencies. Standard errors of all estimators were under-estimated in almost all cases and should not be used in statistical comparisons.
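For orientation, the Chapman–Robson estimator mentioned above has a closed form: with ages coded as x = age minus the age at full recruitment, estimated survival is Ŝ = T/(n + T − 1), where T = Σx and n is the sample size, and Ẑ = −ln Ŝ. A minimal sketch on simulated ages follows, assuming simple random sampling, i.e. deliberately ignoring the cluster-sampling effects that are the subject of the paper:

```python
import numpy as np

def chapman_robson_z(ages, full_recruit_age):
    """Chapman-Robson survival/Z estimate from individual ages (simple
    random sampling assumed -- the very assumption the paper shows is
    violated by multi-stage cluster surveys)."""
    x = np.asarray(ages) - full_recruit_age
    x = x[x >= 0]                 # fully recruited fish, coded ages 0, 1, 2, ...
    n, T = len(x), x.sum()
    s_hat = T / (n + T - 1)       # estimated annual survival
    return -np.log(s_hat)

# Toy sample: ages follow a geometric decline consistent with true Z = 0.5.
rng = np.random.default_rng(3)
ages = 2 + rng.geometric(1 - np.exp(-0.5), size=500) - 1   # ages 2, 3, 4, ...
print(f"Z-hat = {chapman_robson_z(ages, full_recruit_age=2):.3f} (true 0.5)")
```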
8

Luo, Qiyao, Yilei Wang, Ke Yi, Sheng Wang, and Feifei Li. "Secure Sampling for Approximate Multi-party Query Processing." Proceedings of the ACM on Management of Data 1, no. 3 (2023): 1–27. http://dx.doi.org/10.1145/3617339.

Abstract:
We study the problem of random sampling in the secure multi-party computation (MPC) model. In MPC, taking a sample securely must have a cost Ω(n) irrespective of the sample size s. This is in stark contrast with the plaintext setting, where a sample can be taken in O(s) time trivially. Thus, the goal of approximate query processing (AQP) with sublinear costs seems unachievable under MPC. To get around this inherent barrier, in this paper we take a two-stage approach: in the offline stage, we generate a batch of n/s samples with Õ(n) total cost, which can then be consumed to answer queries as they arrive online. Such an approach allows us to achieve an Õ(s) amortized cost per query, similar to the plaintext setting. Based on our secure batch sampling algorithms, we build MASQUE, an MPC-AQP system that achieves sublinear online query costs by running an MPC protocol to evaluate the queries on pre-generated samples. MASQUE achieves the strong security guarantee of the MPC model, i.e., nothing is revealed beyond the query result, which itself can be further protected by (amplified) differential privacy.
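The offline/online amortization argument is easy to see in plaintext. The sketch below is only a plaintext analogue of the two-stage idea, with no MPC and no security guarantees: a linear-cost offline pass produces a batch of n/s samples, and each online query then consumes one sample in O(s) work. Function names are illustrative.

```python
import random

def offline_batch(data, s, batches):
    """One offline pass conceptually pays the linear cost once; here we
    simply draw `batches` independent samples of size s."""
    return [random.sample(data, s) for _ in range(batches)]

def online_query(pool):
    """Answer a mean query from a pre-generated sample: O(s) work,
    without touching the full dataset again."""
    sample = pool.pop()            # consume one sample from the batch
    return sum(sample) / len(sample)

n, s = 1_000_000, 1_000
data = [random.random() for _ in range(n)]

pool = offline_batch(data, s, batches=n // s)   # amortized offline setup
print("approx mean:", round(online_query(pool), 4),
      "| true mean:", round(sum(data) / n, 4))
```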
9

Wang, Haoming. "Three-Stage Sampling Algorithm for Highly Imbalanced Multi-Classification Time Series Datasets." Symmetry 15, no. 10 (2023): 1849. http://dx.doi.org/10.3390/sym15101849.

Abstract:
To alleviate the data imbalance problem caused by subjective and objective factors, scholars have developed different data-preprocessing algorithms, among which undersampling algorithms are widely used because of their fast and efficient performance. However, when the number of samples of some categories in a multi-classification dataset is too small to be processed via sampling or the number of minority class samples is only one or two, the traditional undersampling algorithms will be less effective. In this study, we select nine multi-classification time series datasets with extremely few samples as research objects, fully consider the characteristics of time series data, and use a three-stage algorithm to alleviate the data imbalance problem. In stage one, random oversampling with disturbance items is used to increase the number of sample points; in stage two, on the basis of the latter operation, SMOTE (synthetic minority oversampling technique) oversampling is employed; in stage three, the dynamic time-warping distance is used to calculate the distance between sample points, identify the sample points of Tomek links at the boundary, and clean up the boundary noise. This study proposes a new sampling algorithm. In the nine multi-classification time series datasets with extremely few samples, the new sampling algorithm is compared with four classic undersampling algorithms, namely, ENN (edited nearest neighbours), NCR (neighborhood cleaning rule), OSS (one-side selection), and RENN (repeated edited nearest neighbors), based on the macro accuracy, recall rate, and F1-score evaluation indicators. The results are as follows: of the nine datasets selected, for the dataset with the most categories and the fewest minority class samples, FiftyWords, the accuracy of the new sampling algorithm was 0.7156, far beyond that of ENN, RENN, OSS, and NCR; its recall rate was also better than that of the four undersampling algorithms used for comparison, corresponding to 0.7261; and its F1-score was 200.71%, 188.74%, 155.29%, and 85.61% better than that of ENN, RENN, OSS, and NCR, respectively. For the other eight datasets, this new sampling algorithm also showed good indicator scores. The new algorithm proposed in this study can effectively alleviate the data imbalance problem of multi-classification time series datasets with many categories and few minority class samples and, at the same time, clean up the boundary noise data between classes.
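A stripped-down version of the three stages (jittered random oversampling, SMOTE-style interpolation between near neighbours, then Tomek-link cleaning) can be written directly in NumPy. Two simplifications against the paper: Euclidean distance stands in for dynamic time warping, and the data are generic feature vectors rather than time series; all sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)

def oversample_jitter(X, k, scale=0.01):
    """Stage 1: random oversampling with small disturbance terms."""
    idx = rng.integers(len(X), size=k)
    return X[idx] + rng.normal(0, scale, size=(k, X.shape[1]))

def smote_like(X, k):
    """Stage 2: new points interpolated between a point and a near neighbour."""
    out = []
    for _ in range(k):
        i = rng.integers(len(X))
        d = np.linalg.norm(X - X[i], axis=1)
        j = np.argsort(d)[1]                       # nearest other point
        out.append(X[i] + rng.random() * (X[j] - X[i]))
    return np.array(out)

def tomek_clean(X, y):
    """Stage 3: drop majority points in Tomek links (mutual nearest
    neighbours with different labels); Euclidean here, DTW in the paper."""
    D = np.linalg.norm(X[:, None] - X[None, :], axis=2)
    np.fill_diagonal(D, np.inf)
    nn = D.argmin(axis=1)
    drop = [i for i in range(len(X))
            if nn[nn[i]] == i and y[i] != y[nn[i]] and y[i] == 0]  # 0 = majority
    return np.delete(X, drop, 0), np.delete(y, drop)

# Toy imbalanced features: 2 minority samples vs 60 majority samples.
X_min, X_maj = rng.normal(1, 0.3, (2, 5)), rng.normal(0, 1, (60, 5))
X_min = np.vstack([X_min, oversample_jitter(X_min, 10)])   # stage 1
X_min = np.vstack([X_min, smote_like(X_min, 20)])          # stage 2
X = np.vstack([X_maj, X_min])
y = np.r_[np.zeros(len(X_maj)), np.ones(len(X_min))]
X, y = tomek_clean(X, y)                                   # stage 3
print(f"{int((y == 1).sum())} minority vs {int((y == 0).sum())} majority after cleaning")
```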
10

Nath, Sandhyarani, and Bimla Rani. "A Study to Assess the Pre-Test and Post-Test Knowledge on Home-Based Newborn Care among ASHAs in Control and Experimental Group in Selected CHCs of District Khorda, Odisha." Research Reservoir 9, no. 1 (2023): 74–77. http://dx.doi.org/10.47211/trr.2023.v09i01.018.

Abstract:
Newborn health is the key to child survival. The newborn is the foundation of human life. The journey for its protection has already started, and there is still a long way to go. The birth of a newborn baby is a special moment of joy with a lot of expectations. A quasi-experimental pre- and post-test design with a control group and an experimental approach was used to evaluate the effectiveness of the video-assisted teaching module on home-based newborn care. The study was conducted in PHCs of Khorda district. The population for the present study was all the ASHAs working in the PHCs of Khorda, and the sample constituted 100 ASHAs working in selected PHCs. In Khorda district there are quite a number of blocks; the first stage covered these blocks, the second stage involved the selection of blocks, and the third stage the decision of CHCs and PHCs, so a multi-stage random (cluster) sampling technique was appropriate and was used to select the sample for the study.
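The three-stage selection described here (blocks, then facilities, then ASHAs) is the canonical multi-stage random sampling pattern. A minimal sketch on a made-up district frame, with all names and counts hypothetical:

```python
import random

random.seed(5)

# Hypothetical frame for a district: blocks -> facilities -> ASHAs.
frame = {f"block{b}": {f"phc{b}.{p}": [f"asha{b}.{p}.{a}" for a in range(30)]
                       for p in range(6)}
         for b in range(10)}

blocks = random.sample(list(frame), 4)                 # stage 1: select blocks
sample = []
for blk in blocks:
    phcs = random.sample(list(frame[blk]), 2)          # stage 2: PHCs per block
    for phc in phcs:
        sample += random.sample(frame[blk][phc], 12)   # stage 3: ASHAs per PHC

print(len(sample), "ASHAs selected, e.g.", sample[:3])
```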

Books on the topic "Multi-Stage Random Sampling"

1

Hankin, David, Michael S. Mohr, and Kenneth B. Newman. Sampling Theory. Oxford University Press, 2019. http://dx.doi.org/10.1093/oso/9780198815792.001.0001.

Abstract:
We present a rigorous but understandable introduction to the field of sampling theory for ecologists and natural resource scientists. Sampling theory concerns itself with development of procedures for random selection of a subset of units, a sample, from a larger finite population, and with how to best use sample data to make scientifically and statistically sound inferences about the population as a whole. The inferences fall into two broad categories: (a) estimation of simple descriptive population parameters, such as means, totals, or proportions, for variables of interest, and (b) estimation of uncertainty associated with estimated parameter values. Although the targets of estimation are few and simple, estimates of means, totals, or proportions see important and often controversial uses in management of natural resources and in fundamental ecological research, but few ecologists or natural resource scientists have formal training in sampling theory. We emphasize the classical design-based approach to sampling in which variable values associated with units are regarded as fixed and uncertainty of estimation arises via various randomization strategies that may be used to select samples. In addition to covering standard topics such as simple random, systematic, cluster, unequal probability (stressing the generality of Horvitz–Thompson estimation), multi-stage, and multi-phase sampling, we also consider adaptive sampling, spatially balanced sampling, and sampling through time, three areas of special importance for ecologists and natural resource scientists. The text is directed to undergraduate seniors, graduate students, and practicing professionals. Problems emphasize application of the theory and R programming in ecological and natural resource settings.
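Central to the design-based approach the book stresses is the Horvitz–Thompson estimator, Ŷ = Σ y_i/π_i over the sampled units, which is unbiased for the population total under any design with known inclusion probabilities π_i. The book's exercises use R; the quick numerical check below uses Python for consistency with the other sketches on this page, with Poisson sampling as one simple design that realizes unequal π_i. The population is invented.

```python
import numpy as np

rng = np.random.default_rng(6)
N = 500
y = rng.gamma(2.0, 50.0, size=N)             # variable of interest
size = y + rng.gamma(2.0, 10.0, size=N)      # auxiliary size measure

n = 50
pi = np.minimum(1.0, n * size / size.sum())  # inclusion probabilities (guard rarely binds)

est = []
for _ in range(2000):
    # Poisson sampling: include unit i independently with probability pi[i].
    s = rng.random(N) < pi
    est.append(np.sum(y[s] / pi[s]))         # Horvitz-Thompson estimate of the total
print(f"true total {y.sum():,.0f}   mean of 2000 HT estimates {np.mean(est):,.0f}")
```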

Book chapters on the topic "Multi-Stage Random Sampling"

1

Diestmann, Thomas, Nils Broedling, Benedict Götz, and Tobias Melz. "Surrogate Model-Based Uncertainty Quantification for a Helical Gear Pair." In Lecture Notes in Mechanical Engineering. Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-77256-7_16.

Abstract:
Competitive industrial transmission systems must perform most efficiently with reference to complex requirements and conflicting key performance indicators. This design challenge translates into a high-dimensional multi-objective optimization problem that requires complex algorithms and evaluation of computationally expensive simulations to predict physical system behavior and design robustness. Crucial for the design decision-making process is the characterization, ranking, and quantification of relevant sources of uncertainties. However, due to the strict time limits of product development loops, the overall computational burden of uncertainty quantification (UQ) may even drive state-of-the-art parallel computing resources to their limits. Efficient machine learning (ML) tools and techniques emphasizing high-fidelity simulation data-driven training will play a fundamental role in enabling UQ in the early-stage development phase. This investigation surveys UQ methods with a focus on noise, vibration, and harshness (NVH) characteristics of transmission systems. Quasi-static 3D contact dynamic simulations are performed to evaluate the static transmission error (TE) of meshing gear pairs under different loading and boundary conditions. TE indicates NVH excitation and is typically used as an objective function in the early-stage design process. The limited system size allows large-scale design of experiments (DoE) and enables numerical studies of various UQ sampling and modeling techniques where the design parameters are treated as random variables associated with tolerances from manufacturing and assembly processes. The model accuracy of generalized polynomial chaos expansion (gPC) and Gaussian process regression (GPR) is evaluated and compared. The results of the methods are discussed to conclude efficient and scalable solution procedures for robust design optimization.
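One of the two surrogate techniques compared, Gaussian process regression, is easy to demonstrate end to end: fit the surrogate on a small design of experiments, then push many cheap Monte Carlo draws of the toleranced inputs through it. In the sketch below, a toy analytic function stands in for the quasi-static contact simulation, and the tolerance distribution is invented:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(7)

def expensive_sim(x):
    """Stand-in for a quasi-static contact run returning a TE-like metric."""
    return np.sin(3 * x[:, 0]) + 0.5 * x[:, 1] ** 2

# Small design of experiments: 30 "runs" over 2 toleranced parameters.
X_doe = rng.uniform(-1, 1, size=(30, 2))
y_doe = expensive_sim(X_doe)

gpr = GaussianProcessRegressor(kernel=RBF(length_scale=0.5),
                               normalize_y=True).fit(X_doe, y_doe)

# UQ: propagate manufacturing tolerances (normal scatter) via the surrogate.
X_mc = rng.normal(0.0, 0.2, size=(100_000, 2))
y_mc = gpr.predict(X_mc)
print(f"TE mean {y_mc.mean():.3f}, std {y_mc.std():.3f}, "
      f"95th pct {np.percentile(y_mc, 95):.3f}")
```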
2

"Multi-stage Simple Random Sampling." In Practical Sampling Techniques. CRC Press, 1995. http://dx.doi.org/10.1201/9781482273465-29.

3

"Stratified Multi-stage Simple Random Sampling." In Practical Sampling Techniques. CRC Press, 1995. http://dx.doi.org/10.1201/9781482273465-35.

4

Ayansina, Simeon Olusola, Isreal Ajibade Adedeji, Fadilat Adefunke Ayinde, and Abiodun Elijah Obayelu. "Determinants of Farmer Participation in Private and Public Extension Organizations in Southwestern Nigeria." In Advances in Electronic Government, Digital Divide, and Regional Development. IGI Global, 2021. http://dx.doi.org/10.4018/978-1-7998-6471-4.ch021.

Abstract:
This study was designed to analyze the participation of farmers in public and private extension organizations in Nigeria. Multi-stage random sampling methods were used in the selection of 30 beneficiaries from ADP, FADU, and JDPM-RUDEP in three states of Nigeria. Questionnaires were used to collect data, which were analyzed with descriptive and inferential statistics. A Kruskal-Wallis test of difference (χ² = 0.79, asymp. sig. = 0.72) showed that beneficiaries' participation in the extension services of public and private organizations was not different, but correlation results indicated an association between farmers' participation in the public organization (r = 0.279, p < 0.10) and in FADU (r = 0.790, p < 0.10) and the benefits derived. It was concluded that farmers' participation in the study organizations was not significantly different but was associated with benefits from some organizations. Benefit-oriented extension programmes are recommended for extension organizations in order to benefit participants according to their specific needs.

Conference papers on the topic "Multi-Stage Random Sampling"

1

Malik, Ariff Md Ab, Masri Ayob, and Abdul Razak Hamdan. "Stratified random sampling technique for integrated two-stage multi-neighbourhood tabu search for examination timetabling problem." In 10th International Conference on Intelligent Systems Design and Applications (ISDA 2010). IEEE, 2010. http://dx.doi.org/10.1109/isda.2010.5687093.

2

Abaa, Angela Ebere, Josephine Shola Aina, Rotimi Michael Akande, and Taiyelolu Martins Ogunjirin. "An Assessment of Students’ Readiness for Digital Learning in Senior Secondary Schools in Lagos State." In Tenth Pan-Commonwealth Forum on Open Learning. Commonwealth of Learning, 2022. http://dx.doi.org/10.56059/pcf10.5882.

Abstract:
This study investigated students' readiness for digital learning in senior secondary schools in Lagos State, Nigeria. A descriptive survey research design was adopted. A sample of 245 respondents was randomly selected from four educational districts in Lagos using a confidence level of 95% (0.05). A multi-stage sampling approach involving both simple and stratified random sampling techniques was used to select the students; the sample was made up of two hundred students randomly selected across 8 schools in the four educational districts. A self-developed 4-point Likert-type scale on the research objectives was used as the instrument of data collection; it was thoroughly scrutinized by an expert in the area of ICT, validated, found to be reliable, and personally administered by the researchers. Four research questions and two hypotheses guided the study. Descriptive statistics such as the mean and standard deviation were used to answer the research questions, and inferential statistics were used to test the hypotheses. The findings revealed, among others, a positive disposition/perception of respondents towards digital learning. The study also revealed no significant gender difference in the perception and utilization of digital learning facilities among the students. The study therefore recommended that secondary school administrators incorporate digital learning into the curriculum to enhance the interest of learners.
3

XU, YANG, YI LI, YUNFEI FAN, XIAODONG ZHENG, and YUEQUAN BAO. "TASK-SIGNIFICANCE-AWARE META LEARNING FOR FEW-IMAGE-BASED STRUCTURAL DAMAGE RECOGNITION." In Structural Health Monitoring 2023. Destech Publications, Inc., 2023. http://dx.doi.org/10.12783/shm2023/36869.

Abstract:
Recently, structural damage recognition has gained significant progress using deep learning and computer vision techniques. However, massive training images, the interclass balance and completeness of damage categories are essential to ensure recognition accuracy. In addition, the generalization ability for new damage categories and robustness under real-world scenarios are limited. This study proposes a task-aware meta-learning paradigm using limited images for universal structural damage segmentation. First, a novel task generation strategy instead of random sampling is designed based on feature density clustering. A synthetical metric of Jaccard distance and Euclidean distance is established to measure the feature similarity among multitype damage images. The class separability discovered in the high-level feature space of multi-type structural damage enhances the interpretability of randomly-generated tasks for conventional meta-learning. Second, a dual-stage optimization framework is built based on Model-Agnostic Meta-Learning (MAML), comprising an internal optimization stage of the semantic segmentation model (U-Net) and an external optimization stage of the meta-learning machine. Third, a set of core samples around the cluster center is selected to form an additional query pool and evaluate the task-significance scores of different tasks within a meta-batch by the same criteria. The task-significance scores are utilized in the external optimization to control the orientation of gradient updates towards more significant tasks. To verify the effectiveness and necessity of the proposed method, ablation experiments are performed using a multi-type structural damage dataset, including concrete crack, steel fatigue crack, and concrete spalling. The proposed method outperforms directly training the original U-Net and the conventional MAML algorithm using only a handful of training samples with improvements in segmentation accuracy. In addition, the improvement in recognition accuracy increases when using fewer training images, further indicating the efficacy of the proposed method. The generalization ability for new structural damage of steel corrosion is also demonstrated.
4

Amanuel, Atile, Fanta Workneh, Sorsa Zamach, Limani Belete, Anna Brisola, and Brentha Murugan. "Determinants of wheat commercialization in Damot Gale district of Wolaita zone." In Employment, Education and Entrepreneurship 2024. Faculty of Business Economics and Entrepreneurship, 2024. https://doi.org/10.5937/eee24018a.

Abstract:
Transforming subsistence farming into market-oriented production is a way to increase household income and reduce poverty in Ethiopia. The objective of this study is to identify the factors determining wheat commercialization in the Damot Gale district of Wolaita zone. Multi-stage sampling techniques were employed to select a total sample of 120 households. Firstly, Damot Gale was purposively selected due to its high production potential for cereal crops. Three kebeles, namely Wandara Boloso, Woshi Gale, and Fate, were then purposively selected. The sample for each kebele was determined by probability proportional to size, with households drawn by simple random sampling. Both primary and secondary data sources were used to generate qualitative and quantitative data through a structured questionnaire, focus group discussions, personal observation, and in-depth interviews. The data collected were analyzed using a household commercialization index and a binary logit model. The household commercialization index showed that 45.9% of wheat-producing households were commercialized, and 72.5% of sample households participated in the wheat output market. The binary logit regression model revealed the following determinants: the sex, education level, and age of the household head; market-oriented production; credit utilization; use of extension services and market information; the number of oxen owned; annual household income; the quantity of wheat produced; and the use of farm inputs. Therefore, market-oriented production, farm input utilization, demonstration trainings, MFI services, market information dissemination, and functional adult literacy can contribute to the wheat commercialization of households in the study area.
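The kebele-level step described above, allocation by probability proportional to size (PPS) followed by simple random sampling within each kebele, looks like this concretely; the household counts are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(8)

kebeles = ["Wandara Boloso", "Woshi Gale", "Fate"]
households = np.array([1400, 900, 700])              # hypothetical frame sizes
n = 120

# Allocate the sample proportionally to size (largest-remainder rounding).
alloc = np.floor(n * households / households.sum()).astype(int)
rem = n * households / households.sum() - alloc
alloc[np.argsort(-rem)[: n - alloc.sum()]] += 1

for k, h, a in zip(kebeles, households, alloc):
    # Within each kebele, simple random sampling of household indices.
    chosen = rng.choice(h, size=a, replace=False)
    print(f"{k}: {a} of {h} households, first ids {sorted(chosen)[:3]}")
print("total sample:", alloc.sum())
```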
5

Michael, Andreas, Chukwuemeka Kalu, and Nassim Bouabdallah. "MultiFracSimPPM: A Data-Driven Probabilistic Predictive Model for Hydraulic Fracture Growth from Uniformly and Non-Uniformly-Spaced Perforation Clusters." In 58th U.S. Rock Mechanics/Geomechanics Symposium. ARMA, 2024. http://dx.doi.org/10.56952/arma-2024-0529.

Abstract:
The thrust of this study is to assess the use of non-uniformly-spaced perforation clusters (as opposed to uniformly-spaced perforation clusters) in order to promote improved multi-frac growth geometries by reducing stress-induced interference during simultaneous HF growth. A probabilistic predictive model (PPM) that integrates empirical data obtained from laboratory-scale experiments is overviewed. MultiFracSimPPM employs published results from multi-frac growth experiments on transparent gelatin blocks with single-phased perforations and various perforation-spacing arrays. A realistic and dependable expectation for multi-frac geometry was generated through the implementation of a Monte-Carlo simulation scheme assessing such a "limited-entry" technique. MultiFracSimPPM introduces statistical bias into random probabilistic distributions using this empirical dataset, resulting in geometric profiles that correspond to experimental findings that disprove the initial hypothesis (non-uniformly-spaced perforation clusters did not perform better than uniformly-spaced perforation clusters). Integrating economic principles could further enhance HF-treatment design, leading to higher post-stimulation hydrocarbon production and improved economics.

1. INTRODUCTION: Multistage hydraulic fracturing (HF) operations in horizontal wells enable hydrocarbon production from unconventional, low-permeability shale formations. The key element of HF modelling remains the prediction of the generated HF growth. When it comes to reservoir stimulation using multistage HF treatments, where many HFs are "pumped" simultaneously within each frac stage, the problem's complexity increases exponentially. Standard treatment-design software does not incorporate "real-world" phenomena such as dominant HF creation and inactive perforation clusters (Michael, 2022). This work presents a probabilistic predictive model (PPM) that incorporates empirical inputs from laboratory-scale experiments. HF geometry is arguably the most critical element of HF-treatment design, with a number of considerations made for influencing multi-frac growth to optimize production and economics. Employing random sampling, the algorithm, which is executed in MATLAB, populates output arrays that contain vectors of the expected HF heights and lengths. The statistical probability of occurrence of the corresponding HF-growth dimensions (length and height) in our laboratory experiments is utilized to populate these input arrays. The width of each HF is determined using the "Perkins-Kern-Nordgren" (PKN) geometry model. Subsequently, the calculated HF length, height, and width are applied to the formation of semi-ellipsoid-shaped HF-growth profiles that illustrate the anticipated final multiple-HF-growth geometry. The homogeneity of multiple-HF growth is likewise quantified using our novel fracture-geometric-homogeneity index (FGHI) as a metric.
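The Monte Carlo core of such a model (draw fracture heights and lengths from empirically biased discrete distributions, derive widths from a PKN-type relation, and score each trial's geometry) reduces to resampling histograms. The sketch below invents the empirical frequencies, uses the textbook PKN proportionality w ∝ p_net·h/E′ only for shape, and computes a crude homogeneity metric in the spirit of, but not identical to, the paper's FGHI; it is not the authors' MultiFracSimPPM.

```python
import numpy as np

rng = np.random.default_rng(9)

# Invented empirical distributions from lab-scale multi-frac experiments:
# observed height/length bins (cm) and their relative frequencies.
heights = np.array([4.0, 6.0, 8.0, 10.0])
h_freq  = np.array([0.10, 0.35, 0.40, 0.15])     # statistical bias from data
lengths = np.array([6.0, 9.0, 12.0, 15.0])
l_freq  = np.array([0.20, 0.40, 0.30, 0.10])

n_fracs, n_trials = 5, 10_000
E_prime, p_net = 3.0e4, 50.0                     # toy plane-strain modulus, net pressure

h = rng.choice(heights, p=h_freq, size=(n_trials, n_fracs))
L = rng.choice(lengths, p=l_freq, size=(n_trials, n_fracs))
w = 2 * p_net * h / E_prime                      # PKN-type max-width relation

# Crude homogeneity metric per trial: 1 minus the coefficient of variation
# of the lengths across the simultaneously growing fractures.
homog = 1 - L.std(axis=1) / L.mean(axis=1)
print(f"mean width {w.mean():.3f} cm, mean homogeneity {homog.mean():.3f}")
```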
6

Alhemdi, Aymen, and Ming Gu. "A Robust Workflow for Optimizing Drilling/Completion/Frac Design Using Machine Learning and Artificial Intelligence." In SPE Annual Technical Conference and Exhibition. SPE, 2022. http://dx.doi.org/10.2118/210160-ms.

Abstract:
One of the biggest challenges in drilling/completion/hydraulic fracturing optimization is determining the optimal parameters in the infinite space of possible solutions. Applying a comprehensive parametric study with various geomechanical properties using both a frac simulator and a reservoir simulator is inefficient. This study proposes a workflow for optimizing unconventional reservoir development using machine learning and artificial intelligence (AI) in conjunction with advanced geomechanical modeling. The workflow consists of four steps: in Step 1, appropriate acoustic interpretation models are used for geomechanical and in-situ stress characterization. In Step 2, unsupervised machine learning optimizes completion designs based on formation anisotropy and heterogeneity along a well. In Step 3, a training database is built by generating multiple cases based on various simulations guided by a smart sampling algorithm; proxy models are then trained and validated by feeding the training datasets to supervised machine learning algorithms. Lastly, the tested proxy models are run for a multi-parameter sensitivity study for design optimization. The workflow was validated by a Marcellus field case. First, the newly proposed orthorhombic acoustic interpretation model yielded in-situ stress results more consistent with field measurements than the traditional acoustic models. Second, using C-Means fuzzy clustering, the stage and cluster spacings were optimized to overcome the low cluster efficiency caused by the current geometric completion design. Last, using the newly proposed smart sampling algorithm, a 200-critical-case database was built and fed into the neural network algorithm for training proxy models. After running the proxy models in a random-search algorithm, the optimal design parameter values were obtained statistically, improving the Return-On-Frac-Investment (ROFI) by 22-40% over the current base case. The study introduces a robust four-step workflow combining unsupervised and supervised machine learning to examine high-dimensional multivariable drilling/completion/frac designs efficiently. The new workflow enables the evaluation of the statistical significance of the influencing parameters and, most importantly, their interactions, which have often been neglected in the current simulation-based optimization workflow. Moreover, the trained proxy models can be applied to optimize the design of the current wellbore, as well as any other future wells drilled in the same basin, in a convenient and time-efficient manner.
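The last two steps of this workflow, training a proxy on simulator output and then searching the design space through the cheap proxy, look like this in miniature. The objective function, parameter ranges, and network size below are placeholders, not the paper's models:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(10)

def simulator(X):
    # Placeholder for the frac + reservoir simulation returning a ROFI-like score.
    spacing, clusters, proppant = X.T
    return -((spacing - 60.0) ** 2) / 400.0 + 0.3 * clusters + np.log1p(proppant)

def sample_designs(k):
    # Hypothetical design space: stage spacing (ft), clusters/stage, proppant (Mlb).
    return np.column_stack([rng.uniform(30, 120, k),
                            rng.integers(3, 9, k),
                            rng.uniform(1, 6, k)])

X_train = sample_designs(200)                       # modest training database
proxy = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000,
                                   random_state=0)).fit(X_train, simulator(X_train))

X_cand = sample_designs(50_000)                     # random search via the proxy only
best = X_cand[np.argmax(proxy.predict(X_cand))]
print("best design (spacing ft, clusters, proppant Mlb):", np.round(best, 2))
```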
7

Alhemdi, Aymen, and Ming Gu. "Optimizing Unconventional Hydraulic Fracturing Design Using Machine Learning and Artificial Intelligent." In SPE Western Regional Meeting. SPE, 2022. http://dx.doi.org/10.2118/209269-ms.

Abstract:
For optimizing hydraulic fracture design in shales, it is challenging to understand the impact of several different parameters on fracture propagation and production, such as geomechanical properties and fracturing treatment parameters. Current frac simulators do not account for the anisotropy of rock elasticity in shales. Additionally, using fracture simulation linked with reservoir simulation for a parametric study is inefficient. Due to its laminated nature, shale has different geomechanical properties in the vertical and horizontal directions, and anisotropic elastic properties and stresses further complicate fracture prediction. This study introduces a comprehensive workflow for fracturing design optimization by applying supervised machine learning, and aims to develop an algorithm that can help optimize the pumping treatment design of hydraulic fractures in any shale reservoir. The workflow is divided into six steps. Firstly, acoustic and density logs for a research well in the Marcellus shale are used to interpret Young's modulus, Poisson's ratio, and the minimum horizontal stress magnitude with an anisotropic VTI model. In step 2, the interpreted mechanical properties, together with the current treatment design of the target well, are inserted into the frac simulator to obtain the conductivity distribution inside the fracture, which is converted to a fracture-permeability matrix. In the third step, the fracture-permeability matrix is entered into the reservoir model to estimate production, and the output is matched with the field history data. In the fourth step, a random sampling algorithm is applied to build a database with a rational sample size. In step 5, the generated database is employed to train and validate an artificial neural network (ANN) model. Lastly, parametric studies are performed through the trained ANN model to analyze the multi-parameter effect on cumulative production. This workflow can predict early and late production for a given fracture design based on multiple fracture treatment parameters, such as initial fracture depth, the number of clusters per stage, and proppant type. Besides, the study provides the capability for multivariable analysis to better understand the productivity behavior of the fractured well.
8

Hui, Zhang, Li Jun, and Zhang Xiaojun. "A New Model for Risk Assessment of Fault Slip Induced by Hydraulic Fracturing of Shale Gas in the Sichuan Basin." In 56th U.S. Rock Mechanics/Geomechanics Symposium. ARMA, 2022. http://dx.doi.org/10.56952/arma-2022-0867.

Abstract:
The current research generally believes that fault slip during the hydraulic fracturing process is the main cause of casing deformation. Therefore, accurate risk assessment of fault slip is an important way to prevent casing deformation. However, there are few studies on predicting the probability of fault slip caused by hydraulic fracturing, and the existing calculation formulas consider relatively few factors. Based on the Mohr-Coulomb criterion, this paper further considers the influence of local in-situ stress changes and seepage pathways, and establishes a new slip risk assessment model suitable for arbitrarily distributed faults. The model can predict the probability of fault slip from deterministic parameters and calculate the cumulative probability curve of fault slip through random sampling by the Monte Carlo method. The verification results of an example showed that the model has a degree of feasibility and accuracy.

1. INTRODUCTION: According to statistics, as of April 2021, there were more than 74 wells in the Weirong block of Sinopec, of which 32.4% had been deformed (24/74). As of December 2018, more than 377 shale gas wells had been fractured in national shale gas demonstration bases such as Changning-Weiyuan and Zhaotong, and the casing deformation rate was 35.3% (133/377). Multi-finger imaging caliper tools and lead molds were run to observe the casing deformation. The amount of deformation is relatively large, usually 1-3 cm, and the most serious exceeds 5 cm. The lead impression is worn on one side, which is in line with the characteristics of casing shear deformation. At present, it is generally believed that fault slip during hydraulic fracturing is the main cause of casing deformation, and a large number of studies have been carried out with the faults near the wellbore as the core. Chen et al. (2017) first argued that faults were the internal cause of casing deformation and hydraulic fracturing was the external cause. To further verify this view, field tests, laboratory tests, and finite element simulations have been carried out (Li, 2017; Liu, 2017; Jalali et al., 2016), and they confirmed that the shear deformation of casing downhole is closely related to fault slip. Theoretical research on fault slip has been adequate, but how to assess the risk of fault slip and prevent casing deformation in advance is the ultimate goal of all this research. Regrettably, there are few studies in this area at present, and the existing software is only suitable for injection wells, which do not meet the calculation conditions of large displacement, high pump pressure, and multi-stage fracturing of shale gas horizontal wells (Walsh et al., 2016).
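The Monte Carlo step of such a risk model, propagating uncertain stresses, pore pressure, and friction through the Mohr–Coulomb slip criterion and counting the fraction of failing draws, is sketched below with invented parameter distributions; the paper's model additionally accounts for local in-situ stress changes and seepage pathways, which this sketch omits.

```python
import numpy as np

rng = np.random.default_rng(11)
n = 200_000

# Uncertain inputs (all distributions hypothetical; stresses in MPa).
mu      = rng.uniform(0.5, 0.7, n)            # fault friction coefficient
c       = rng.uniform(0.0, 2.0, n)            # cohesion
sigma_n = rng.normal(55.0, 4.0, n)            # normal stress resolved on the fault
tau     = rng.normal(10.0, 3.0, n)            # shear stress resolved on the fault
p0      = rng.normal(35.0, 2.0, n)            # ambient pore pressure near the fault
dp      = rng.triangular(0.0, 8.0, 15.0, n)   # frac-induced pressure increase

# Mohr-Coulomb: slip when shear stress exceeds frictional resistance
# on the effective normal stress.
slip = tau > c + mu * (sigma_n - (p0 + dp))
print(f"estimated slip probability: {slip.mean():.3f}")

# Cumulative curve: slip probability vs injected pressure increase.
for q in (5, 25, 50, 75, 95):
    thresh = np.percentile(dp, q)
    sub = dp <= thresh
    print(f"dp <= {thresh:4.1f} MPa ({q:2d}th pct): P(slip) = {slip[sub].mean():.3f}")
```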
9

Olagunju, Y., Billy Oluwale, and Matthew Ilori. "Characterization of Production Technologies Employed by Selected Small and Medium Scale Enterprises in Food Industry in Southwestern Nigeria." In 2019 African Institute for Science Policy and Innovation International Biennial Conference. Koozakar LLC, 2019. http://dx.doi.org/10.69798/12195941.

Abstract:
The study examined the production technologies used by selected small and medium enterprises (SMEs) in the food industry in Southwestern Nigeria, with a view to determining the extent to which quality foods are produced with the adopted technologies. The study was carried out in Lagos, Ogun, and Oyo States in Southwestern Nigeria, where there is a high concentration of food processing firms (FPFs). A multi-stage sampling technique was used to select the local governments and towns with a high concentration of FPFs in each state. Two hundred and fifty small and medium scale FPFs were then selected using purposive sampling. Primary data were collected with two sets of questionnaires. The first set elicited information on the type and nature of the production technologies and was administered to the production managers of the firms. The second set elicited information on the effectiveness of the production technologies and was administered to one randomly selected production employee from each firm. The data collected were analyzed using means and frequency distributions. The results showed that 48% of the firms' production technologies were for baking, 31% for filtration, and 14.4% for pasteurization. Furthermore, 39.2% and 41.6% of the firms used automated machines and a mixture of automated and manual machines, respectively. A total of 42% used imported machines, while 41.6% used a mixture of imported and local machines. The majority (74%) used a batch production system. The study concluded that the firms' reliance on imported technologies, which they have poorly maintained, cannot help achieve sustainable development in Nigeria.
