Academic literature on the topic 'Sample size re-estimation'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Sample size re-estimation.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Sample size re-estimation"

1

Govindarajulu, Z. "Sample Size Re-Estimation: Nonparametric Approach." Journal of Statistical Theory and Practice 1, no. 2 (2007): 253–64. http://dx.doi.org/10.1080/15598608.2007.10411837.

2

Sobel, Marc, and Ibrahim Turkoz. "Bayesian blinded sample size re-estimation." Communications in Statistics - Theory and Methods 47, no. 24 (2017): 5916–33. http://dx.doi.org/10.1080/03610926.2017.1404097.

3

Gould, A. Lawrence. "Issues in blinded sample size re-estimation." Communications in Statistics - Simulation and Computation 26, no. 3 (1997): 1229–39. http://dx.doi.org/10.1080/03610919708813436.

4

Proschan, Michael A. "Sample size re-estimation in clinical trials." Biometrical Journal 51, no. 2 (2009): 348–57. http://dx.doi.org/10.1002/bimj.200800266.

5

Cai, Xiaoyu, Yi Tsong, and Meiyu Shen. "A Critical Review on Adaptive Sample Size Re-estimation (SSR) Designs for Superiority Trials with Continuous Endpoints." Open Journal of Pharmaceutical Science and Research 1, no. 1 (2019): 1–13. https://doi.org/10.36811/ojpsr.2019.110001.

Abstract:
Adaptive sample size re-estimation (SSR) methods have been widely used in the design of clinical trials, especially during the past two decades. We give a critical review of several commonly used two-stage adaptive SSR designs for superiority trials with continuous endpoints. The objective, the design, and some of our suggestions and concerns for each design are discussed in this paper. Keywords: Adaptive Design; Sample Size Re-estimation; Review
6

Brakenhoff, T. B., K. C. B. Roes, and S. Nikolakopoulos. "Bayesian sample size re-estimation using power priors." Statistical Methods in Medical Research 28, no. 6 (2018): 1664–75. http://dx.doi.org/10.1177/0962280218772315.

Abstract:
The sample size of a randomized controlled trial is typically chosen in order for frequentist operational characteristics to be retained. For normally distributed outcomes, an assumption for the variance needs to be made which is usually based on limited prior information. Especially in the case of small populations, the prior information might consist of only one small pilot study. A Bayesian approach formalizes the aggregation of prior information on the variance with newly collected data. The uncertainty surrounding prior estimates can be appropriately modelled by means of prior distributions. Furthermore, within the Bayesian paradigm, quantities such as the probability of a conclusive trial are directly calculated. However, if the postulated prior is not in accordance with the true variance, such calculations are not trustworthy. In this work we adapt previously suggested methodology to facilitate sample size re-estimation. In addition, we suggest the employment of power priors in order for operational characteristics to be controlled.
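To make the power-prior idea concrete, here is a minimal Python sketch, not the authors' implementation: a pilot estimate of the variance is downweighted by a power-prior weight a0 under a conjugate inverse-gamma model, and the posterior-mean variance is then plugged into the usual two-sample normal-approximation sample-size formula. All function names and numerical inputs are hypothetical.

```python
# Illustrative sketch (not the paper's method): power-prior update of the
# variance under a conjugate inverse-gamma model, then a standard re-estimation
# of the per-arm sample size.
from scipy import stats

def posterior_variance(ss_new, n_new, ss_pilot, n_pilot, a0,
                       a_prior=0.001, b_prior=0.001):
    """Posterior mean of sigma^2 with the pilot likelihood raised to the power
    a0 (0 <= a0 <= 1); ss_* are sums of squared deviations, n_* sample sizes."""
    a_post = a_prior + 0.5 * (n_new + a0 * n_pilot)
    b_post = b_prior + 0.5 * (ss_new + a0 * ss_pilot)
    return b_post / (a_post - 1.0)          # mean of IG(a_post, b_post)

def n_per_arm(sigma2, delta, alpha=0.05, power=0.9):
    """Two-sample normal-approximation sample size per arm."""
    z_a = stats.norm.ppf(1 - alpha / 2)
    z_b = stats.norm.ppf(power)
    return 2 * sigma2 * (z_a + z_b) ** 2 / delta ** 2

# Hypothetical numbers: pilot with 20 subjects (SS = 45), interim data with
# 40 subjects (SS = 110), power-prior weight a0 = 0.5, target difference 1.0.
sigma2_hat = posterior_variance(ss_new=110, n_new=40, ss_pilot=45, n_pilot=20, a0=0.5)
print(round(n_per_arm(sigma2_hat, delta=1.0)))
```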
7

Zellner, Dietmar, Günter E. Zellner, and Frieder Keller. "A SAS macro for sample size re-estimation." Computer Methods and Programs in Biomedicine 65, no. 3 (2001): 183–90. http://dx.doi.org/10.1016/s0169-2607(00)00119-x.

8

Lin, Ruitao, Zhao Yang, Ying Yuan, and Guosheng Yin. "Sample size re-estimation in adaptive enrichment design." Contemporary Clinical Trials 100 (January 2021): 106216. http://dx.doi.org/10.1016/j.cct.2020.106216.

9

Shih, Weichung Joseph. "Sample size re-estimation - journey for a decade." Statistics in Medicine 20, no. 4 (2001): 515–18. http://dx.doi.org/10.1002/sim.532.

10

Lake, Stephen, Erin Kammann, Neil Klar, and Rebecca Betensky. "Sample size re-estimation in cluster randomization trials." Statistics in Medicine 21, no. 10 (2002): 1337–50. http://dx.doi.org/10.1002/sim.1121.


Dissertations / Theses on the topic "Sample size re-estimation"

1

Banton, Dwaine Stephen. "A Bayesian Decision Theoretic Approach to Fixed Sample Size Determination and Blinded Sample Size Re-estimation for Hypothesis Testing." Diss., Temple University Libraries, 2016. http://cdm16002.contentdm.oclc.org/cdm/ref/collection/p245801coll10/id/369007.

Abstract:
Statistics, Ph.D. This thesis considers two related problems that have application in the field of experimental design for clinical trials: • fixed sample size determination for parallel arm, double-blind survival data analysis to test the hypothesis of no difference in survival functions, and • blinded sample size re-estimation for the same. For the first problem of fixed sample size determination, a method is developed generally for testing of hypotheses, then applied particularly to survival analysis; for the second problem of blinded sample size re-estimation, a method is developed specifically for survival analysis. In both problems, the exponential survival model is assumed. The approach we propose for sample size determination is Bayesian decision theoretic, using explicitly a loss function and a prior distribution. The loss function used is the intrinsic discrepancy loss function introduced by Bernardo and Rueda (2002), and further expounded upon in Bernardo (2011). We use a conjugate prior, and investigate the sensitivity of the calculated sample sizes to specification of the hyper-parameters. For the second problem of blinded sample size re-estimation, we use prior predictive distributions to facilitate calculation of the interim test statistic in a blinded manner while controlling the Type I error. The determination of the test statistic in a blinded manner continues to be a nettling problem for researchers. The first problem is typical of traditional experimental designs, while the second problem extends into the realm of adaptive designs. To the best of our knowledge, the approaches we suggest for both problems have never been attempted hitherto, and they extend the current research on both topics. The advantages of our approach, as far as we see it, are unity and coherence of statistical procedures, systematic and methodical incorporation of prior knowledge, and ease of calculation and interpretation.
2

Ntambwe, Lupetu Ives. "Sequential sample size re-estimation in clinical trials with multiple co-primary endpoints." Thesis, University of Warwick, 2014. http://wrap.warwick.ac.uk/66339/.

Abstract:
In this thesis, we consider interim sample size adjustment in clinical trials with multiple co-primary continuous endpoints. We aim to answer two questions. First, how can the sample size be adjusted in a clinical trial with multiple continuous co-primary endpoints using adaptive and group sequential designs? Second, how can a test be constructed so as to control the family-wise type I error rate and maintain the power, even if the correlation ρ between endpoints is not known? To answer the first question, we conduct K different interim tests, each for one endpoint and each at level α/K (i.e., a Bonferroni adjustment). To answer the second question, we either perform a sample size re-estimation in which the results of the interim analysis are used to estimate one or more nuisance parameters, with this information used to determine the sample size for the rest of the trial (or an inverse normal combination test type approach); or we conduct a group sequential test in which the information is monitored and adjusted to allow the correlation ρ to be estimated at each stage (or an inverse normal combination test type approach). We show that both methods control the family-wise type I error α and maintain the power, and that the group sequential methodology appears to be more powerful, although this depends on the spending function.
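A minimal sketch of the α/K idea described above, assuming the simplest possible rule: re-estimate the per-endpoint sample size from interim variance estimates at level α/K and take the maximum over endpoints. This is an illustration only, not the procedure developed in the thesis, and all names and numbers are hypothetical.

```python
# Sketch of Bonferroni-adjusted re-estimation for K co-primary endpoints
# (hypothetical inputs; not the thesis procedure).
from scipy import stats

def n_per_arm(sigma2, delta, alpha, power=0.8):
    """Two-sample normal-approximation sample size per arm."""
    z_a = stats.norm.ppf(1 - alpha / 2)
    z_b = stats.norm.ppf(power)
    return 2 * sigma2 * (z_a + z_b) ** 2 / delta ** 2

alpha, K = 0.05, 3                 # each endpoint tested at alpha/K
interim_var = [4.1, 2.3, 6.0]      # interim variance estimates (hypothetical)
clin_diff   = [1.0, 0.8, 1.5]      # clinically relevant differences (hypothetical)

# Largest per-endpoint requirement drives the sample size for the rest of the trial.
n_required = max(n_per_arm(s2, d, alpha / K) for s2, d in zip(interim_var, clin_diff))
print(round(n_required))
```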
3

Cong, Danni. "The effect of sample size re-estimation on type I error rates when comparing two binomial proportions." Kansas State University, 2016. http://hdl.handle.net/2097/34504.

Abstract:
Master of Science, Department of Statistics, Christopher I. Vahl. Estimation of sample size is an important and critical procedure in the design of clinical trials. A trial with an inadequate sample size may not produce a statistically significant result. On the other hand, an unnecessarily large sample size will increase the expenditure of resources and may raise an ethical problem by exposing an unnecessary number of human subjects to an inferior treatment. A poor estimate of the necessary sample size is often due to the limited information available at the planning stage. Hence, adjustment of the sample size mid-trial has recently become a popular strategy. In this work, we introduce two methods of sample size re-estimation for trials with a binary endpoint utilizing the interim information collected from the trial: a blinded method and a partially unblinded method. The blinded method recalculates the sample size based on the first stage's overall event proportion, while the partially unblinded method performs the calculation based only on the control event proportion from the first stage. We performed simulation studies with different combinations of expected proportions based on fixed ratios of response rates. In this study, equal sample sizes per group were considered. The study shows that for both methods, the type I error rates were preserved satisfactorily.
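The two interim rules described in this abstract can be sketched as follows, assuming the standard normal-approximation sample-size formula for comparing two proportions and a fixed assumed ratio of response rates; the numbers and function names are hypothetical and this is not the thesis code.

```python
# Sketch of the blinded vs. partially unblinded re-estimation rules for a
# binary endpoint (hypothetical stage-1 numbers, fixed assumed rate ratio).
from scipy import stats

def n_per_arm(p_ctrl, p_trt, alpha=0.05, power=0.8):
    """Two-sample proportion sample size per arm, normal approximation."""
    z_a = stats.norm.ppf(1 - alpha / 2)
    z_b = stats.norm.ppf(power)
    var = p_ctrl * (1 - p_ctrl) + p_trt * (1 - p_trt)
    return (z_a + z_b) ** 2 * var / (p_ctrl - p_trt) ** 2

ratio = 1.5                      # assumed treatment-to-control ratio of response rates

# Blinded rule: use the pooled stage-1 event proportion plus the assumed ratio.
p_pooled = 0.24                  # overall stage-1 proportion (hypothetical)
p_c = 2 * p_pooled / (1 + ratio) # solve (p_c + ratio * p_c) / 2 = p_pooled
n_blinded = n_per_arm(p_c, ratio * p_c)

# Partially unblinded rule: use the observed stage-1 control proportion only.
p_ctrl_obs = 0.20                # stage-1 control proportion (hypothetical)
n_unblinded = n_per_arm(p_ctrl_obs, ratio * p_ctrl_obs)

print(round(n_blinded), round(n_unblinded))
```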
4

Zhao, Songnian. "The impact of sample size re-estimation on the type I error rate in the analysis of a continuous end-point." Kansas State University, 2017. http://hdl.handle.net/2097/35326.

Abstract:
Master of Science, Department of Statistics, Christopher Vahl. Sample size estimation is generally based on assumptions made during the planning stage of a clinical trial. Often, there is limited information available to estimate the initial sample size, which may result in a poor estimate. For instance, an insufficient sample size may not have the capability to produce statistically significant results, while an over-sized study will lead to a waste of resources or even ethical issues in that too many patients are exposed to potentially ineffective treatments. Therefore, an interim analysis in the middle of a trial may be worthwhile to assure that the significance level is at the nominal level and/or the power is adequate to detect a meaningful treatment difference. In this report, the impact of sample size re-estimation on the type I error rate for a continuous end-point in a clinical trial with two treatments is evaluated through a simulation study. Two sample size re-estimation methods are taken into consideration: blinded and partially unblinded. For the blinded method, all collected data from the two groups are used to estimate the variance, while only data from the control group are used to re-estimate the sample size for the partially unblinded method. The simulation study is designed with different combinations of assumed variance, assumed difference in treatment means, and re-estimation methods. The end-point is assumed to follow a normal distribution and the variances of the two groups are assumed to be identical. In addition, an equal sample size is required for each group. According to the simulation results, the type I error rates are preserved for all settings.
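A much simplified simulation along the lines described here, assuming a single blinded (pooled-variance) re-estimation at a fixed interim point and a naive final t-test; the settings are hypothetical and deliberately cruder than the report's design, but they show how the type I error rate can be estimated by simulation.

```python
# Simplified simulation sketch (not the report's exact settings): estimate the
# type I error when the per-arm sample size is re-estimated at the interim from
# a blinded (pooled, one-sample) variance estimate, under the null hypothesis.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def one_trial(n1=25, sigma=2.0, delta_assumed=1.0, alpha=0.05, power=0.8, n_max=400):
    # Stage 1: equal allocation, identical means in both arms (null is true).
    x1 = rng.normal(0.0, sigma, n1)
    y1 = rng.normal(0.0, sigma, n1)
    # Blinded variance estimate: variance of the pooled data, ignoring arm labels.
    s2_blind = np.var(np.concatenate([x1, y1]), ddof=1)
    z_a, z_b = stats.norm.ppf(1 - alpha / 2), stats.norm.ppf(power)
    n_new = int(np.ceil(2 * s2_blind * (z_a + z_b) ** 2 / delta_assumed ** 2))
    n_total = min(max(n_new, n1), n_max)          # per-arm total, at least stage 1
    # Stage 2: recruit the remaining subjects, then a naive final t-test.
    x = np.concatenate([x1, rng.normal(0.0, sigma, n_total - n1)])
    y = np.concatenate([y1, rng.normal(0.0, sigma, n_total - n1)])
    return stats.ttest_ind(x, y).pvalue < alpha

n_sim = 20000
rejections = sum(one_trial() for _ in range(n_sim))
print("estimated type I error:", rejections / n_sim)
```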
5

Asendorf, Thomas [Verfasser]. "Blinded Sample Size Re-estimation for Longitudinal Overdispersed Count Data in Randomized Clinical Trials with an Application in Multiple Sclerosis / Thomas Asendorf." Göttingen : Niedersächsische Staats- und Universitätsbibliothek Göttingen, 2021. http://d-nb.info/1228364591/34.

6

Asendorf, Thomas. "Blinded Sample Size Re-estimation for Longitudinal Overdispersed Count Data in Randomized Clinical Trials with an Application in Multiple Sclerosis." Thesis, 2021. http://hdl.handle.net/21.11130/00-1735-0000-0005-1581-1.

7

Bliss, Caleb Andrew. "Sample size re-estimation for superiority clinical trials with a dichotomous outcome using an unblinded estimate of the control group outcome rate." Thesis, 2014. https://hdl.handle.net/2144/14282.

Abstract:
Superiority clinical trials are often designed with a planned interim analysis for the purpose of sample size re-estimation (SSR) when limited information is available at the start of the trial to estimate the required sample size. Typically these trials are designed with a two-arm internal pilot where subjects are enrolled to both treatment arms prior to the interim analysis. Circumstances may sometimes call for a trial with a single-arm internal pilot (enroll only in the control group). For a dichotomous outcome, Herson and Wittes proposed a SSR method (HW-SSR) that can be applied to single-arm internal pilot trials using an unblinded estimate of the control group outcome rate. Previous evaluations of the HW-SSR method reported conflicting results regarding the impact of the method on the two-sided Type I error rate and power of the final hypothesis test. In this research we evaluate the HW-SSR method under the null and alternative hypothesis in various scenarios to investigate the one-sided Type I error rate and power of trials with a two-arm internal pilot. We find that the one-sided Type I error rate is sometimes inflated and that the power is sometimes reduced. We propose a new method, the Critical Value and Power Adjusted Sample Size Re-estimation (CVPA-SSR) algorithm to adjust the critical value cutoff used in the final Z-test and the power critical value used in the interim SSR formula to preserve the nominal Type I error rate and the desired power. We conduct simulations for trials with single-arm and two-arm internal pilots to confirm that the CVPA-SSR algorithm does preserve the nominal Type I error rate and the desired power. We investigate the robustness of the CVPA-SSR algorithm for trials with single-arm and two-arm internal pilots when the assumptions used in designing the trial are incorrect. No Type I error inflation is observed but significant over- or under-powering of the trial occurs when the treatment effect used to design the trial is misspecified.

Books on the topic "Sample size re-estimation"

1

Ślusarski, Marek. Metody i modele oceny jakości danych przestrzennych. Publishing House of the University of Agriculture in Krakow, 2017. http://dx.doi.org/10.15576/978-83-66602-30-4.

Abstract:
The quality of data collected in official spatial databases is crucial in making strategic decisions as well as in the implementation of planning and design works. Awareness of the level of the quality of these data is also important for individual users of official spatial data. The author presents methods and models of description and evaluation of the quality of spatial data collected in public registers. Data describing the space in the highest degree of detail, which are collected in three databases: land and buildings registry (EGiB), geodetic registry of the land infrastructure network (GESUT) and in database of topographic objects (BDOT500) were analyzed. The results of the research concerned selected aspects of activities in terms of the spatial data quality. These activities include: the assessment of the accuracy of data collected in official spatial databases; determination of the uncertainty of the area of registry parcels, analysis of the risk of damage to the underground infrastructure network due to the quality of spatial data, construction of the quality model of data collected in official databases and visualization of the phenomenon of uncertainty in spatial data. The evaluation of the accuracy of data collected in official, large-scale spatial databases was based on a representative sample of data. The test sample was a set of deviations of coordinates with three variables dX, dY and Dl – deviations from the X and Y coordinates and the length of the point offset vector of the test sample in relation to its position recognized as a faultless. The compatibility of empirical data accuracy distributions with models (theoretical distributions of random variables) was investigated and also the accuracy of the spatial data has been assessed by means of the methods resistant to the outliers. In the process of determination of the accuracy of spatial data collected in public registers, the author’s solution was used – resistant method of the relative frequency. Weight functions, which modify (to varying degree) the sizes of the vectors Dl – the lengths of the points offset vector of the test sample in relation to their position recognized as a faultless were proposed. From the scope of the uncertainty of estimation of the area of registry parcels the impact of the errors of the geodetic network points was determined (points of reference and of the higher class networks) and the effect of the correlation between the coordinates of the same point on the accuracy of the determined plot area. The scope of the correction was determined (in EGiB database) of the plots area, calculated on the basis of re-measurements, performed using equivalent techniques (in terms of accuracy). The analysis of the risk of damage to the underground infrastructure network due to the low quality of spatial data is another research topic presented in the paper. Three main factors have been identified that influence the value of this risk: incompleteness of spatial data sets and insufficient accuracy of determination of the horizontal and vertical position of underground infrastructure. A method for estimation of the project risk has been developed (quantitative and qualitative) and the author’s risk estimation technique, based on the idea of fuzzy logic was proposed. Maps (2D and 3D) of the risk of damage to the underground infrastructure network were developed in the form of large-scale thematic maps, presenting the design risk in qualitative and quantitative form. 
The data quality model is a set of rules used to describe the quality of these data sets. The model that has been proposed defines a standardized approach for assessing and reporting the quality of EGiB, GESUT and BDOT500 spatial data bases. Quantitative and qualitative rules (automatic, office and field) of data sets control were defined. The minimum sample size and the number of eligible nonconformities in random samples were determined. The data quality elements were described using the following descriptors: range, measure, result, and type and unit of value. Data quality studies were performed according to the users needs. The values of impact weights were determined by the hierarchical analytical process method (AHP). The harmonization of conceptual models of EGiB, GESUT and BDOT500 databases with BDOT10k database was analysed too. It was found that the downloading and supplying of the information in BDOT10k creation and update processes from the analyzed registers are limited. An effective approach to providing spatial data sets users with information concerning data uncertainty are cartographic visualization techniques. Based on the author’s own experience and research works on the quality of official spatial database data examination, the set of methods for visualization of the uncertainty of data bases EGiB, GESUT and BDOT500 was defined. This set includes visualization techniques designed to present three types of uncertainty: location, attribute values and time. Uncertainty of the position was defined (for surface, line, and point objects) using several (three to five) visual variables. Uncertainty of attribute values and time uncertainty, describing (for example) completeness or timeliness of sets, are presented by means of three graphical variables. The research problems presented in the paper are of cognitive and application importance. They indicate on the possibility of effective evaluation of the quality of spatial data collected in public registers and may be an important element of the expert system.

Book chapters on the topic "Sample size re-estimation"

1

Mütze, Tobias, and Tim Friede. "Sample Size Re-Estimation." In Handbook of Statistical Methods for Randomized Controlled Trials. Chapman and Hall/CRC, 2021. http://dx.doi.org/10.1201/9781315119694-16.

2

"Sample Size Re-Estimation in Adaptively Randomized." In Modern Adaptive Randomized Clinical Trials. Chapman and Hall/CRC, 2015. http://dx.doi.org/10.1201/b18640-19.

3

Cui, Lu. "Sample Size Re-estimation Based on Observed Treatment Difference." In Encyclopedia of Biopharmaceutical Statistics, Second edition. CRC Press, 2003. http://dx.doi.org/10.1201/b14760-125.

4

Cui, Lu. "Sample Size Re-estimation Based on Observed Treatment Difference." In Encyclopedia of Biopharmaceutical Statistics, Third Edition. CRC Press, 2012. http://dx.doi.org/10.1201/b14674-195.

5

Cui, Lu. "Sample Size Re‐estimation Based on Observed Treatment Difference." In Encyclopedia of Biopharmaceutical Statistics. Informa Healthcare, 2010. http://dx.doi.org/10.3109/9781439822463.193.

6

"Case study: Proof of concept trial with sample size re-estimation." In Design and Analysis of Cross-Over Trials. Chapman and Hall/CRC, 2014. http://dx.doi.org/10.1201/b17537-17.

7

"Case study: Blinded sample size re-estimation in a bioequivalence study." In Design and Analysis of Cross-Over Trials. Chapman and Hall/CRC, 2014. http://dx.doi.org/10.1201/b17537-18.

8

Govindarajulu, Z. "Robustness of a Sample Size Re-Estimation Procedure in Clinical Trials." In Advances on Methodological and Applied Aspects of Probability and Statistics. CRC Press, 2019. http://dx.doi.org/10.1201/9780203493212-21.

9

Govindarajulu. "Robustness of a Sample Size Re-Estimation Procedure in Clinical Trials." In Advances on Methodological and Applied Aspects of Probability and Statistics. CRC Press, 2003. http://dx.doi.org/10.1201/9780203493212.ch21.

10

"Case study: Various methods for an unblinded sample size re-estimation in a bioequivalence study." In Design and Analysis of Cross-Over Trials. Chapman and Hall/CRC, 2014. http://dx.doi.org/10.1201/b17537-20.


Conference papers on the topic "Sample size re-estimation"

1

Chen, Weijie, Zhipeng Huang, Frank W. Samuelson, and Lucas Tcheuko. "Adaptive sample size re-estimation in MRMC studies." In Image Perception, Observer Performance, and Technology Assessment, edited by Robert M. Nishikawa and Frank W. Samuelson. SPIE, 2019. http://dx.doi.org/10.1117/12.2513646.

2

Randell, David, Elena Zanini, Michael Vogel, Kevin Ewans, and Philip Jonathan. "Omnidirectional Return Values for Storm Severity From Directional Extreme Value Models: The Effect of Physical Environment and Sample Size." In ASME 2014 33rd International Conference on Ocean, Offshore and Arctic Engineering. American Society of Mechanical Engineers, 2014. http://dx.doi.org/10.1115/omae2014-23156.

Abstract:
Ewans and Jonathan [2008] shows that characteristics of extreme storm severity in the northern North Sea vary with storm direction. Jonathan et al. [2008] demonstrates, when directional effects are present, that omnidirectional return values should be estimated using a directional extreme value model. Omnidirectional return values so calculated are different in general to those estimated using a model which incorrectly assumes stationarity with respect to direction. The extent of directional variability of extreme storm severity depends on a number of physical factors, including fetch variability. Our ability to assess directional variability of extreme value parameters and return values also improves with increasing sample size in general. In this work, we estimate directional extreme value models for samples of hindcast storm peak significant wave height from locations in ocean basins worldwide, for a range of physical environments, sample sizes and periods of observation. At each location, we compare distributions of omnidirectional 100-year return values estimated using a directional model, to those (incorrectly) estimated assuming stationarity. The directional model for peaks over threshold of storm peak significant wave height is estimated using a non-homogeneous point process model as outlined in Randell et al. [2013]. Directional models for extreme value threshold (using quantile regression), rate of occurrence of threshold exceedances (using a Poisson model), and size of exceedances (using a generalised Pareto model) are estimated. Model parameters are described as smooth functions of direction using periodic B-splines. Parameter estimation is performed using maximum likelihood estimation penalised for parameter roughness. A bootstrap re-sampling procedure, encompassing all inference steps, quantifies uncertainties in, and dependence structure of, parameter estimates and omnidirectional return values.
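As a much-reduced illustration of the peaks-over-threshold ingredients mentioned here (threshold, exceedance rate, generalised Pareto fit, return value), the sketch below fits a stationary model to synthetic storm-peak data; it ignores the directional covariates and the penalised-spline point-process machinery that are the subject of the paper.

```python
# Stationary (direction-ignoring) sketch only: fit a generalised Pareto to storm
# peak exceedances and compute a 100-year return value; data are synthetic.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
years = 40.0
hs_storm_peaks = rng.gumbel(6.0, 1.5, size=1200)     # synthetic storm peak Hs (m)

u = np.quantile(hs_storm_peaks, 0.8)                  # threshold at the 80th percentile
exceed = hs_storm_peaks[hs_storm_peaks > u] - u
lam = exceed.size / years                             # exceedance rate per year

# Fit the generalised Pareto with the location fixed at zero.
xi, _, sigma = stats.genpareto.fit(exceed, floc=0.0)

def return_value(T):
    """T-year return value for a peaks-over-threshold model."""
    if abs(xi) < 1e-6:
        return u + sigma * np.log(lam * T)
    return u + sigma / xi * ((lam * T) ** xi - 1.0)

print("100-year Hs (m):", round(return_value(100.0), 2))
```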
3

Kriventsev, Vladimir, Hiroyuki Ohshima, Akira Yamaguchi, and Hisashi Ninokata. "Numerical Prediction of Secondary Flows in Complex Areas Using Concept of Local Turbulent Reynolds Number." In 10th International Conference on Nuclear Engineering. ASMEDC, 2002. http://dx.doi.org/10.1115/icone10-22333.

Abstract:
A new model of turbulence is proposed for the estimation of Reynolds stresses in turbulent, fully developed flow in a wall-bounded straight channel of arbitrary shape. The ensemble-averaged Navier-Stokes (Reynolds) equations are considered sufficient and practical enough to describe turbulent flow in the complex geometry of a rod bundle array. We suggest that turbulence is a process of development of external perturbations due to wall roughness, inlet conditions and other factors. We also assume that real flows are always affected by perturbations of every possible scale smaller than the size of the channel. Thus, turbulence can be modeled in the form of an internal or "turbulent" viscosity. The main idea of the Multi-Scale Viscosity (MSV) model can be expressed in the following phenomenological rule: a local deformation of axial velocity can generate turbulence with an intensity that keeps the value of the local turbulent Reynolds number below some critical value. Therefore, in MSV, the only empirical parameter is the critical Reynolds number. From dimensional analysis, some physical explanations of this Reynolds number are possible. We can define the local turbulent Reynolds number in two ways: i) simply as Re = ul/ν, where u is the local velocity deformation within the local scale l and ν is the total accumulated molecular and turbulent viscosity of all scales smaller than l; or ii) Re = K/W, where K is kinetic energy and W is the work of friction/dissipation forces. Both definitions have been implemented in calculations of basic fully developed turbulent flows in straight channels such as a circular tube and an annular channel. MSV has also been applied to the prediction of turbulence-driven secondary flow in an elementary cell of an infinite hexagonal rod array. It is known that these turbulence-driven motions originate in the anisotropy of the turbulence structure. Due to the lack of experimental data to date, numerical analysis seems to be the only way to estimate the intensity of the secondary flows in hexagonal fuel assemblies of fast breeder reactors (FBR). Since MSV can naturally predict the anisotropy of turbulent viscosity in the directions normal and parallel to the wall, it is capable of calculating secondary flows in the cross-section of the rod bundle. Calculations have shown that the maximal intensity of the secondary flow is about 1% of the mean axial velocity for low-Re flows (Re = 8170), while for a higher Reynolds number (Re = 160,100) the intensity of the secondary flow is as small as 0.2%.
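One way to read the phenomenological rule quoted above is that the turbulent viscosity generated at scale l is the smallest addition that keeps the local Reynolds number at or below its critical value; written out (this is an interpretation, not a formula taken verbatim from the paper):

```latex
\[
  \mathrm{Re}_{\mathrm{local}} = \frac{u\,l}{\nu + \nu_t} \;\le\; \mathrm{Re}_{\mathrm{crit}}
  \quad\Longrightarrow\quad
  \nu_t = \max\!\left(0,\; \frac{u\,l}{\mathrm{Re}_{\mathrm{crit}}} - \nu\right),
\]
```

where u is the local velocity deformation over scale l, ν the accumulated viscosity of all smaller scales, and Re_crit the single empirical parameter of the model; the alternative definition Re = K/W replaces ul/ν by the ratio of kinetic energy to the work of friction/dissipation forces.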
4

Kimura, Kazuhiro, and Masatsugu Yaguchi. "Revision of Long-Term Creep Strength of Base Metal of ASME Grade 91 Type Steel." In ASME 2024 Pressure Vessels & Piping Conference. American Society of Mechanical Engineers, 2024. http://dx.doi.org/10.1115/pvp2024-122999.

Abstract:
In a preliminary study on the creep rupture life scatter estimation of base metals of ASME Grade 91 type steel under actual service conditions performed by one of the authors, it was found that the creep rupture life curves and allowable stress of Grade 91 type steel pipes which were determined in Japan in 2015 may not be appropriate. Therefore, those were re-evaluated in this work by the Assessment Committee on Creep Data of High Chromium Steels, which consists of the Federation of Electric Power Companies of Japan, material producers, power plant manufacturers, and research institutes in Japan, which had evaluated the material in 2015. Creep rupture data were again provided for the re-evaluation by each organization of the committee, and creep rupture strength was assessed using the latest dataset of Grade 91 type steel by the region splitting analysis method in the same manner as in the previous study. It was suggested that the assessment should be performed at once simultaneously regardless of the product form such as pipes, plates, and tubes. As a result, the creep rupture life curves and allowable stresses of steel pipes were revised to the lower strength side than those in 2015. Furthermore, on the basis of the data and knowledge regarding the effect of chemical composition on the creep life of Grade 91 type steel, revisions to Japanese material codes regarding the chemical composition of Grade 91 type steel were proposed to prevent the production of low-strength Grade 91 type steel.
5

Thuillet, Swann, Davide Zuzio, Olivier Rouzaud, and Pierre Gajan. "Multi-scale Eulerian-Lagrangian simulation of a liquid jet in cross-flow under acoustic perturbations." In ILASS2017 - 28th European Conference on Liquid Atomization and Spray Systems. Universitat Politècnica València, 2017. http://dx.doi.org/10.4995/ilass2017.2017.4697.

Abstract:
The design of modern aeronautical propulsion systems is constantly optimized to reduce pollutant emissions while increasing fuel combustion efficiency. In order to get a proper mixing of fuel and air, Liquid Jets Injected in gaseous Crossflows (LJICF) are found in numerous injection devices. However, should combustion instabilities appear in the combustion chamber, the response of the liquid jet and its primary atomization is still largely unknown. Coupling between an unstable combustion and the fuel injection process has not been well understood and can result from multiple basic interactions. The aim of this work is to predict by numerical simulation the effect of an acoustic perturbation of the shearing air flow on the primary breakup of a liquid jet. The DNS approach being too expensive for the simulation of complex injector geometries, this paper proposes a numerical simulation of a LJICF based on a multiscale approach which can be easily integrated in industrial LES of combustion chambers. This approach results in the coupling of two models: a two-fluid model, based on the Navier-Stokes equations for compressible fluids, able to capture the largest scales of the jet atomization and the breakup process of the liquid column; and a dispersed phase approach, used for describing the cloud of droplets created by the atomization of the liquid jet. The coupling of these two approaches is provided by atomization and re-impact models, which ensure liquid transfer between the two-fluid model and the spray model. The resulting numerical method is meant to capture the main jet body characteristics, the generation of the liquid spray and the formation of a liquid film whenever the spray impacts a solid wall. Three main features of the LJICF can be used to describe, in a steady state flow as well as under the effect of the acoustic perturbation, the jet atomization behavior: the jet trajectory, the jet breakup length and the droplet size and distribution. The steady state simulations provide good agreement with ONERA experiments conducted under the same conditions, characterized by a high Weber number (We > 150). The multiscale computation gives the correct trajectory of the liquid column and a good estimation of the column breakup location, for different liquid to air momentum flux ratios. The analysis of the droplet distribution in space is currently ongoing. A preliminary unsteady simulation was able to capture the oscillation of the jet trajectory, and the unsteady droplet generation responding to the acoustic perturbation.
6

Shah, Jamari M., Nur Athirah Md Dahlan, Hazreen Harris Lee, and Nur Fatihah M. Zulkifli. "Establishing Rapport Throughout Carbonate Reservoirs: A Rock Typing Networking Based on Pore Throat." In Offshore Technology Conference Asia. OTC, 2022. http://dx.doi.org/10.4043/31629-ms.

Abstract:
Carbonate reservoirs have a higher level of heterogeneity than clastic reservoirs, which are controlled relatively more by depositional facies alone. This is because of the facies variation, vertically and laterally, which is more intensive, as well as intensive diagenesis. Therefore, an accurate method is required to ensure that hydrocarbon development is effective and efficient. Challenges in the characterization of carbonates are related to rock type and porosity. The permeability of rocks cannot be determined from porosity alone. A method that can be used to determine rock type and estimate rock permeability is the rock typing method. This method is aptly applied to carbonate reservoirs, which change dynamically due to diagenesis, and is believed to predict and optimize carbonate reservoirs better. Core data can be used to determine rock type based on geological (litho-facies) or petrophysical (electro-facies) characterization. There are many rock typing methods, including pore throat groups based on shape and trend, PGS - pore geometry structure, Lucia, FZI - flow zone indicator, and Winland R35. Those methods use different principles in classifying rock type. The main objective is to merge core results, combining geology-based descriptive information with digital engineering data. By combining these two pieces of information and data, a more precise rock type is obtained, achieving a finer carbonate reservoir characterization. Furthermore, the analysis has been conducted over multiple carbonate environments including platform carbonate, pinnacle carbonate and complex carbonate lithology. This paper presents the rock typing classification in carbonate environments which considers geological and engineering elements, mainly through pore throat based rock typing. The main rock typing group can be derived from either stratigraphy or the distribution shape of the pore throat. This will produce the porosity-permeability relationship for all the samples. Geological inputs are then used to describe more refined and detailed characteristics of the relationship. These varied sets of data will help to populate the geological features of the reservoir in bulk and in each individual layer at depth. The process includes developing the correlation between pore throat size and pore throat connectivity networking, defined from the core plug pore throat pattern and tied to the well log response, and consequently propagated to the non-cored intervals through correlation between multiple well log responses. Some of the key petrophysical measurements will be discussed, as well as how to interpret the borehole images associated with carbonates, different methods of rock typing, and best practices to build a static carbonate model. This approach uses the pore throat group to classify the rock typing of the carbonate reservoirs. The main rock typing group can be derived from either stratigraphy or the distribution shape of the pore throat. The methodology must be tested first in cored intervals, to ensure that sufficient data have been incorporated considering the complexity of the carbonate structure. This will produce the porosity-permeability relationship for all the samples. Geological inputs are then used to describe more refined and detailed characteristics of the relationship. Post-drill analysis of the core plugs usually comes from the sedimentology analysis, thin sections, SEM, XRD and even the core photos.
These variety sets of data will help to populate the geological features of the reservoir in bulk and each individual layer in depths. These will be the steps that will aid in re-clustering the porosity-permeability relationship. After these steps have been implemented, the outputs will be calibrated before the methodology will be adopted and regressed to the un-cored intervals. The permeability prediction based on pore throat group by using this methodology matches with measured core permeability with capture the complex respond of permeability variation. The result shows rock typing can be generated by using the pore throat distribution of the reservoirs. This is because permeability populated by this method captures the complexity of the reservoir. Results are more detailed by creating rock typing based on the pore throat. This is furthermore supported and incorporated with all available geological data. There is a significant difference that can be seen between platform, pinnacle, and complex carbonate. The workflow integrates critical information to further capture the complex carbonate reservoir system. This kind of approach is novel and should be adopted to the other carbonate reservoirs in the world for us to understand more on complicated carbonate reservoir structures or network. This study is robust and able to capture multiple carbonate environments and in comparison, with several basins from various parts of the world.
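Since Winland R35 is listed among the pore-throat rock-typing methods, a small sketch may help illustrate the idea: the commonly cited Winland regression converts air permeability (mD) and porosity (%) into a pore throat radius at 35% mercury saturation, which is then binned into port-size classes. The regression constants are the published Winland values; the class cutoffs follow one widely used convention and the plug values below are invented.

```python
# Illustrative Winland R35 rock typing for core-plug data (one of the pore
# throat methods listed in the abstract); plug values are made up.
import math

def winland_r35(k_md, phi_percent):
    """Pore throat radius (microns) at 35% mercury saturation, Winland regression."""
    log_r35 = 0.732 + 0.588 * math.log10(k_md) - 0.864 * math.log10(phi_percent)
    return 10 ** log_r35

def port_class(r35):
    """Bin R35 into port-size classes (one commonly used set of cutoffs)."""
    if r35 > 10:
        return "megaport"
    if r35 > 2:
        return "macroport"
    if r35 > 0.5:
        return "mesoport"
    if r35 > 0.1:
        return "microport"
    return "nanoport"

plugs = [(250.0, 22.0), (15.0, 18.0), (0.8, 12.0), (0.05, 6.0)]  # (k in mD, phi in %)
for k, phi in plugs:
    r35 = winland_r35(k, phi)
    print(f"k={k:>7.2f} mD  phi={phi:>4.1f}%  R35={r35:6.3f} um  -> {port_class(r35)}")
```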
