Academic literature on the topic 'Variable sample size'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Variable sample size.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Variable sample size"

1

Aparisi, Francisco, Eugenio Epprecht, Andrés Carrión, and Omar Ruiz. "The variable sample size variable dimension T² control chart." International Journal of Production Research 52, no. 2 (2013): 368–83. http://dx.doi.org/10.1080/00207543.2013.826832.

2

Costa, Antonio F. B. "X̄ Charts with Variable Sample Size." Journal of Quality Technology 26, no. 3 (1994): 155–63. http://dx.doi.org/10.1080/00224065.1994.11979523.

3

Wu, Jianrong. "Sample size calculation – continuous outcome variable." Southwest Respiratory and Critical Care Chronicles 6, no. 25 (2018): 60–62. http://dx.doi.org/10.12746/swrccc.v6i25.487.

4

Park, Changsoon, and Marion R. Reynolds. "Economic design of a variable sample size X̄-chart." Communications in Statistics - Simulation and Computation 23, no. 2 (1994): 467–83. http://dx.doi.org/10.1080/03610919408813182.

5

Chan, Lai K., and H. J. Xiao. "Weighted attribute control charts for variable sample size." Total Quality Management 1, no. 3 (1990): 345–54. http://dx.doi.org/10.1080/09544129000000043.

6

Costa, Antonio F. B. "X̄ Chart with Variable Sample Size and Sampling Intervals." Journal of Quality Technology 29, no. 2 (1997): 197–204. http://dx.doi.org/10.1080/00224065.1997.11979750.

7

Krejić, Nataša, and Nataša Krklec Jerinkić. "Nonmonotone line search methods with variable sample size." Numerical Algorithms 68, no. 4 (2014): 711–39. http://dx.doi.org/10.1007/s11075-014-9869-1.

8

Shiffler, Ronald E., and Arthur J. Adams. "A Correction for Biasing Effects of Pilot Sample Size on Sample Size Determination." Journal of Marketing Research 24, no. 3 (1987): 319–21. http://dx.doi.org/10.1177/002224378702400309.

Abstract:
When a pilot study variance is used to estimate σ² in the sample size formula, the resulting [Formula: see text] is a random variable. The authors investigate the theoretical behavior of [Formula: see text]. Though [Formula: see text] is more likely to underachieve than overachieve the unbiased n, correction factors to balance the bias are provided.
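The formula behind this abstract is the classic normal-approximation sample-size expression n = (z·σ/E)². A minimal sketch (the function name and the numbers are illustrative, not from the paper) shows how a pilot estimate of σ feeds it, and why the result is itself random: a different pilot sample gives a different σ̂ and hence a different n.

```python
import math

def required_n(sigma_hat: float, margin: float, z: float = 1.96) -> int:
    """Classic sample-size formula n = (z * sigma / E)^2, rounded up.

    When sigma_hat comes from a small pilot study, the returned n is
    itself a random variable and tends to underestimate the n needed,
    which is the bias Shiffler & Adams provide correction factors for."""
    return math.ceil((z * sigma_hat / margin) ** 2)

# Example: pilot standard deviation 12, desired margin of error 2 at ~95% confidence
n = required_n(12.0, 2.0)  # ceil((1.96 * 12 / 2)^2) = ceil(138.2976) = 139
```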
9

Kretzschmar, André, and Gilles E. Gignac. "At what sample size do latent variable correlations stabilize?" Journal of Research in Personality 80 (June 2019): 17–22. http://dx.doi.org/10.1016/j.jrp.2019.03.007.

10

Mahadik, Shashibhushan B., and Digambar T. Shirke. "A Special Variable Sample Size and Sampling Interval Chart." Communications in Statistics - Theory and Methods 38, no. 8 (2009): 1284–99. http://dx.doi.org/10.1080/03610920802404108.


Dissertations / Theses on the topic "Variable sample size"

1

Krklec Jerinkić, Nataša. "Line search methods with variable sample size." PhD thesis, Univerzitet u Novom Sadu, Prirodno-matematički fakultet u Novom Sadu, 2014. http://dx.doi.org/10.2298/NS20140117KRKLEC.

Abstract:
The problem under consideration is an unconstrained optimization problem with the objective function in the form of a mathematical expectation. The expectation is with respect to a random variable that represents the uncertainty; therefore, the objective function is in fact deterministic. However, finding the analytical form of that objective function can be very difficult or even impossible, which is why the sample average approximation is often used. In order to obtain a reasonably good approximation of the objective function, a relatively large sample size is needed. We assume that the sample is generated at the beginning of the optimization process, so the sample average objective function can be treated as deterministic. However, applying a deterministic method to that sample average function from the start can be very costly. The number of evaluations of the function under expectation is a common way of measuring the cost of an algorithm; therefore, methods that vary the sample size throughout the optimization process have been developed, most of which try to determine the optimal dynamics of increasing the sample size. The main goal of this thesis is to develop a class of methods that can decrease the cost of an algorithm by decreasing the number of function evaluations. The idea is to decrease the sample size whenever it seems reasonable: roughly speaking, we do not want to impose a large precision, i.e. a large sample size, when we are far away from the solution we search for. A detailed description of the new methods is presented in Chapter 4 together with the convergence analysis. It is shown that the approximate solution is of the same quality as the one obtained by dealing with the full sample from the start. Another important characteristic of the proposed methods is the line search technique used for obtaining the subsequent iterates. The idea is to find a suitable direction and to search along it until we obtain a sufficient decrease in the function value, as determined by the line search rule. In Chapter 4 that rule is monotone, i.e. we impose a strict decrease of the function value. In order to decrease the cost of the algorithm even further and to enlarge the set of suitable search directions, we use nonmonotone line search rules in Chapter 5; within that chapter, these rules are modified to fit the variable sample size framework, and the conditions for global convergence and the R-linear rate are presented. In Chapter 6, numerical results are presented. The test problems are varied: some are academic and some are real-world problems. The academic problems give more insight into the behavior of the algorithms, while data coming from real-world problems test the practical applicability of the proposed algorithms. In the first part of that chapter the focus is on the variable sample size techniques; different implementations of the proposed algorithm are compared to each other and to other sampling schemes. The second part is mostly devoted to comparing the various line search rules combined with different search directions in the variable sample size framework. The overall numerical results show that using a variable sample size can improve the performance of the algorithms significantly, especially when nonmonotone line search rules are used. The first chapter of this thesis provides the background material for the subsequent chapters. In Chapter 2, the basics of nonlinear optimization are presented with the focus on line search, while Chapter 3 deals with the stochastic framework. These chapters review the relevant known results, while the rest of the thesis represents the original contribution.
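The variable-sample-size line search idea described in this abstract can be illustrated with a toy sketch. Everything here is an illustrative assumption rather than the thesis's actual algorithm: the objective is a sample average approximation of E[(x − Z)²], the line search is a plain monotone Armijo backtracking rule, and the sample-growth rule is deliberately crude (double the subsample once progress on the current one stalls).

```python
import random

def f_sample(x, sample):
    # sample average approximation of E[(x - Z)^2]
    return sum((x - z) ** 2 for z in sample) / len(sample)

def grad_sample(x, sample):
    return sum(2 * (x - z) for z in sample) / len(sample)

def vss_gradient_descent(full_sample, x0=0.0, n0=10, iters=50):
    """Sketch of a variable-sample-size method with a monotone (Armijo)
    line search: start with a small subsample and enlarge it as the
    iterates approach a solution, instead of paying for the full sample
    from the start."""
    n, x = n0, x0
    for _ in range(iters):
        sample = full_sample[:n]
        g = grad_sample(x, sample)
        step = 1.0
        # backtracking until the sufficient-decrease condition holds
        while f_sample(x - step * g, sample) > f_sample(x, sample) - 1e-4 * step * g * g:
            step *= 0.5
        x -= step * g
        # crude update rule: when progress on the current subsample stalls,
        # raise the precision by doubling the sample size
        if abs(step * g) < 1e-3 and n < len(full_sample):
            n = min(2 * n, len(full_sample))
    return x

random.seed(0)
data = [random.gauss(5.0, 1.0) for _ in range(1000)]
x_star = vss_gradient_descent(data)  # approximates the full-sample minimizer (the mean)
```

The point of the sketch is the cost profile: early iterations evaluate the objective on only 10 points, and the full 1000-point sample is touched only near the end, yet the final iterate matches the full-sample solution.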
2

Oymak, Okan. "Sample size determination for estimation of sensor detection probabilities based on a test variable." Thesis, Monterey, Calif. : Naval Postgraduate School, 2007. http://bosun.nps.edu/uhtbin/hyperion-image.exe/07Jun%5FOymak.pdf.

Abstract:
Thesis (M.S. in Operations Research), Naval Postgraduate School, June 2007. Thesis advisor: Lyn R. Whitaker. Includes bibliographical references (p. 95–96). Also available in print.
3

Rožnjik, Andrea. "Optimizacija problema sa stohastičkim ograničenjima tipa jednakosti – kazneni metodi sa promenljivom veličinom uzorka." PhD thesis, Univerzitet u Novom Sadu, Prirodno-matematički fakultet u Novom Sadu, 2019. https://www.cris.uns.ac.rs/record.jsf?recordId=107819&source=NDLTD&language=en.

Abstract:
A stochastic programming problem with equality constraints is considered in this thesis, that is, a minimization problem whose constraints are given in the form of mathematical expectation. We propose two iterative methods for solving the considered problem. In each iteration, both procedures use a sample average function instead of the mathematical expectation and exploit the advantages of variable sample size methods based on adaptive sample size updating: the sample size is determined at every iteration using information from the current iteration. Concretely, the current precision of the approximation of the expectation and the quality of the approximation of the solution determine the sample size for the next iteration. Both iterative procedures are based on the line search technique as well as on the quadratic penalty method adapted to the stochastic environment, since the considered problem has constraints. The procedures rely on the same ideas, but the approaches differ. In the first approach, the algorithm is created for solving an SAA reformulation of the stochastic programming problem, i.e., an approximation of the original problem. The sample is thus defined before the iterative procedure, so the convergence analysis is deterministic. We show that, under standard assumptions, the proposed algorithm generates a subsequence whose accumulation point is a KKT point of the SAA problem. The algorithm formed by the second approach solves the stochastic programming problem itself, and therefore the convergence analysis is stochastic. Under standard assumptions for stochastic optimization, it generates a subsequence whose accumulation point is almost surely a KKT point of the original problem. The proposed algorithms were implemented on the same test problems. Numerical results show their efficiency in solving the considered problems compared with procedures in which the sample size update follows a predefined scheme; the number of function evaluations is used as the measure of efficiency. The results on the set of tested problems suggest that adaptive sample size scheduling can reduce the number of function evaluations for constrained problems as well. Since the considered problem is deterministic but the proposed procedures are stochastic, the first three chapters contain basic notions of deterministic and stochastic optimization, as well as a short overview of definitions and theorems from other fields needed to follow the analysis of the original results. The rest of the thesis presents the proposed algorithms, their convergence analysis, and their numerical implementation.
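The combination this abstract describes, a quadratic penalty for the constraint together with a growing sample for the expectation, can be sketched on a deliberately tiny problem: minimize E[(x − Z)²] subject to x = 1. The closed-form inner solve, the tenfold penalty growth, and the sample-doubling rule are all illustrative assumptions, not the thesis's algorithm.

```python
import random

def penalty_minimizer(sample, mu):
    """Closed-form minimizer of the sample-average quadratic penalty
    f_N(x) + mu * h(x)^2, with f_N(x) = mean((x - z)^2) and h(x) = x - 1:
    setting the derivative to zero gives x = (mean + mu) / (1 + mu)."""
    m = sum(sample) / len(sample)
    return (m + mu) / (1 + mu)

def saa_quadratic_penalty(full_sample, outer_iters=8, n0=20, mu0=1.0):
    """Sketch of a penalty method with an adaptively growing sample:
    each outer iteration tightens the penalty parameter mu and enlarges
    the subsample used for the sample average approximation."""
    n, mu = n0, mu0
    x = 0.0
    for _ in range(outer_iters):
        x = penalty_minimizer(full_sample[:n], mu)
        mu *= 10.0                        # tighten the constraint penalty
        n = min(2 * n, len(full_sample))  # raise the precision of the SAA
    return x

random.seed(1)
data = [random.gauss(3.0, 1.0) for _ in range(2000)]
x_pen = saa_quadratic_penalty(data)
# x_pen approaches the constrained solution x = 1 as mu grows,
# even though the unconstrained SAA minimizer is near 3
```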
4

Marsal, Mora Josep Ramon. "Estimación del tamaño muestral requerido en el uso de variables de respuesta combinadas: nuevas aportaciones." Doctoral thesis, Universitat Autònoma de Barcelona, 2019. http://hdl.handle.net/10803/666768.

Abstract:
We define a Composite Endpoint (CE) as the combination of two or more clinically relevant events into a single event, used as the primary endpoint in a clinical trial (CT). The events to be combined should have similar importance for patients, similar incidences, and a similar magnitude of intervention effect. The main advantage of using a CE is the potential reduction of the Sample Size Requirement (SSR) resulting from an increase in statistical power (i.e., an increase in the net number of patients with one or more events). Its main disadvantage is the increased complexity of both the statistical analysis and the interpretation of results. The quantification of the SSR depends mainly on the incidence of each combined event and the effect of the intervention on it, but also on the degree of association between the combined events. The impact of incidence and effect magnitude on the SSR is well known; the influence of the degree of association between the events forming the CE has barely been explored. In this thesis we take a pragmatic approach to building tools that objectively help the professionals involved in designing randomized clinical trials (clinicians, trialists, and biostatisticians) to create binary CEs that are efficient in terms of SSR. As a preliminary step, we formally study both the quantification of the degree of association between binary variables and the way that association affects the SSR when a CE combines exactly two binary events. In the first part, we define and characterize association between binary variables, studying the concept of association in depth; we list different ways of estimating it and define metrics for comparing them. We conclude that Pearson's correlation, despite being a good estimator of the degree of association, is not optimal in the context of binary variables; the joint probability (the probability of both events occurring) and the relative degree of overlap show better characteristics. In the second part, we identify by simulation the scenarios in which a binary CE is preferable to a single relevant endpoint for reducing the SSR, determining the direction and magnitude of the impact of the incidences, the magnitude of the intervention effect, and especially the degree of association between the events. The degree of association can determine whether combining two events is advisable for reducing the SSR. Finally, we develop a tool that determines, from a set of binary events, the optimal combination for achieving the maximum SSR reduction. The tool was developed with the clinical profile of its users in mind, programmed with free software, and is freely accessible at https://uesca-apps.shinyapps.io/bincep.
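The effect of association on the required sample size described in this abstract can be illustrated with the standard two-proportion formula. The numbers below are illustrative, not from Bin-CE, and applying the same relative risk to the composite as to each component is a simplifying assumption.

```python
import math

def n_per_arm(p1, p2, alpha_z=1.96, beta_z=0.84):
    """Two-proportion sample size per arm (normal approximation,
    two-sided alpha = 0.05, power = 0.80)."""
    var = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((alpha_z + beta_z) ** 2 * var / (p1 - p2) ** 2)

def composite_incidence(p_a, p_b, p_joint):
    """P(A or B) from the marginals and the joint probability; a higher
    joint probability (stronger association) means less gain from combining."""
    return p_a + p_b - p_joint

# Control incidences of two events and a relative risk of 0.7 under treatment
p_a, p_b, rr = 0.10, 0.10, 0.7
n_single = n_per_arm(p_a, rr * p_a)

# Weak association: joint probability at independence, p_joint = p_a * p_b
p_comp_w = composite_incidence(p_a, p_b, p_a * p_b)
n_weak = n_per_arm(p_comp_w, rr * p_comp_w)

# Strong association: the events overlap heavily, p_joint = 0.08
p_comp_s = composite_incidence(p_a, p_b, 0.08)
n_strong = n_per_arm(p_comp_s, rr * p_comp_s)
# n_weak < n_strong < n_single: combining weakly associated events roughly
# halves the required n, while strong association erodes much of that gain
```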
5

Saha, Dibakar. "Improved Criteria for Estimating Calibration Factors for Highway Safety Manual (HSM) Applications." FIU Digital Commons, 2014. http://digitalcommons.fiu.edu/etd/1701.

Abstract:
The Highway Safety Manual (HSM) estimates roadway safety performance based on predictive models that were calibrated using national data. Calibration factors are then used to adjust these predictive models to local conditions for local applications. The HSM recommends that local calibration factors be estimated using 30 to 50 randomly selected sites that experienced at least a total of 100 crashes per year. It also recommends that the factors be updated every two to three years, preferably on an annual basis. However, these recommendations are primarily based on expert opinions rather than data-driven research findings. Furthermore, most agencies do not have data for many of the input variables recommended in the HSM. This dissertation is aimed at determining the best way to meet three major data needs affecting the estimation of calibration factors: (1) the required minimum sample sizes for different roadway facilities, (2) the required frequency for calibration factor updates, and (3) the influential variables affecting calibration factors. In this dissertation, statewide segment and intersection data were first collected for most of the HSM recommended calibration variables using a Google Maps application. In addition, eight years (2005-2012) of traffic and crash data were retrieved from existing databases from the Florida Department of Transportation. With these data, the effect of sample size criterion on calibration factor estimates was first studied using a sensitivity analysis. The results showed that the minimum sample sizes not only vary across different roadway facilities, but they are also significantly higher than those recommended in the HSM. In addition, results from paired sample t-tests showed that calibration factors in Florida need to be updated annually. 
To identify influential variables affecting the calibration factors for roadway segments, the variables were prioritized by combining the results from three different methods: negative binomial regression, random forests, and boosted regression trees. Only a few variables were found to explain most of the variation in the crash data. Traffic volume was consistently found to be the most influential. In addition, roadside object density, major and minor commercial driveway densities, and minor residential driveway density were also identified as influential variables.
6

Caltabiano, Ana Maria de Paula. "Gráficos de controle com tamanho de amostra variável : classificando sua estratégia conforme sua destinação por intermédio de um estudo bibliométrico /." Guaratinguetá, 2018. http://hdl.handle.net/11449/180553.

Abstract:
Advisor: Antonio Fernando Branco Costa. Abstract: Control charts were created by Shewhart around 1924. Since then, many strategies have been proposed to improve the performance of these statistical tools. Among them, the adaptive-parameters strategy stands out, having given rise to a very fertile line of research. One of its branches concerns charts with a variable sample size, where the size of the next sample depends on the position of the current sample point: if it is close to the center line, the next sample is small; if it is far away, but not yet in the action region, the next sample is large. This sampling scheme became known as the VSS (variable sample size) scheme. This dissertation reviews the process-monitoring literature whose main focus is VSS sampling schemes. A systematic literature review was carried out through a bibliometric analysis of the period from 1980 to 2018, with the aim of classifying the VSS strategy according to its destination, for example, charts with known parameters and independent observations. The destinations were divided into ten classes: I – type of VSS; II – type of monitoring; III – number of variables under monitoring; IV – type of chart; V – process parameters; VI – signaling rules; VII – nature of the process; VIII – type of optimization; IX – mathematical model of the chart's properties; X – type of production. The main conclusion of this study was that in the class... (full abstract available via the electronic access below). Master's thesis.
7

You, Zhiying. "Power and sample size of cluster randomized trials." Thesis, Birmingham, Ala. : University of Alabama at Birmingham, 2008. https://www.mhsl.uab.edu/dt/2009r/you.pdf.

8

Fernandes, Jessica Katherine de Sousa. "Estudo de algoritmos de otimização estocástica aplicados em aprendizado de máquina." Universidade de São Paulo, 2017. http://www.teses.usp.br/teses/disponiveis/45/45134/tde-28092017-182905/.

Abstract:
In different machine learning applications we may be interested in minimizing the expected value of a certain loss function. Stochastic optimization and sample size selection play an important role in solving this problem. This work presents the theoretical analysis of several algorithms from these two areas, including some variations that consider variance reduction. The practical examples show the advantage of Stochastic Gradient Descent with respect to processing time and memory; however, considering the accuracy of the obtained solution together with the cost of minimization, the variance-reduction methodologies obtain the best solutions. The Dynamic Sample Size Gradient and Line Search with variable sample size selection algorithms, despite obtaining better solutions than Stochastic Gradient Descent, have the disadvantage of a high computational cost.
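The variance-reduction idea this abstract weighs against plain SGD can be sketched with one well-known scheme, SVRG-style anchoring (the thesis's specific algorithms may differ; the problem and all names here are illustrative): each stochastic gradient is corrected by the difference between the anchor's stochastic and full gradients.

```python
import random

def svrg(data, lr=0.1, epochs=5, m=50):
    """Minimal SVRG-style sketch for minimizing mean((x - z)^2).
    The full-gradient anchor reduces the variance of each stochastic step;
    for this quadratic loss the correction cancels the sampling noise
    exactly, so the inner updates follow the true gradient 2*(x - mean)."""
    grad = lambda x, z: 2 * (x - z)
    x = 0.0
    for _ in range(epochs):
        x_anchor = x
        full_grad = sum(grad(x_anchor, z) for z in data) / len(data)
        for _ in range(m):
            z = random.choice(data)
            # variance-reduced stochastic gradient
            g = grad(x, z) - grad(x_anchor, z) + full_grad
            x -= lr * g
    return x

random.seed(2)
data = [random.gauss(4.0, 2.0) for _ in range(500)]
x_hat = svrg(data)  # converges to the sample mean of data
```

The trade-off the abstract mentions is visible in the structure: each epoch pays for one full-gradient pass over the data, which plain SGD avoids, in exchange for much lower per-step variance.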
9

Pfister, Mark. "Distribution of a Sum of Random Variables when the Sample Size is a Poisson Distribution." Digital Commons @ East Tennessee State University, 2018. https://dc.etsu.edu/etd/3459.

Abstract:
A probability distribution is a statistical function that describes the probability of possible outcomes in an experiment or occurrence. There are many different probability distributions that give the probability of an event happening, given some sample size n. An important question in statistics is to determine the distribution of the sum of independent random variables when the sample size n is fixed. For example, it is known that the sum of n independent Bernoulli random variables with success probability p is a Binomial distribution with parameters n and p. However, this is not true when the sample size is not fixed but is itself a random variable. The goal of this thesis is to determine the distribution of the sum of independent random variables when the sample size is randomly distributed as a Poisson distribution. We also discuss the mean and the variance of this unconditional distribution.
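The central fact behind this thesis, that a Poisson-distributed number of Bernoulli trials yields a Poisson-distributed sum (Poisson thinning, S ~ Poisson(λp)), can be checked by simulation. The helper name and parameters below are illustrative.

```python
import random

def poisson_bernoulli_sum(lam, p, trials=20000, rng=random.Random(3)):
    """Simulate S = X_1 + ... + X_N with N ~ Poisson(lam), X_i ~ Bernoulli(p).
    Theory (Poisson thinning): S ~ Poisson(lam * p), so E[S] = Var[S] = lam * p."""
    totals = []
    for _ in range(trials):
        # sample N ~ Poisson(lam) by counting unit-rate exponential
        # interarrival times that fit in [0, lam)
        n = 0
        t = rng.expovariate(1.0)
        while t < lam:
            n += 1
            t += rng.expovariate(1.0)
        # sum of n Bernoulli(p) draws
        totals.append(sum(1 for _ in range(n) if rng.random() < p))
    return totals

lam, p = 6.0, 0.5
s = poisson_bernoulli_sum(lam, p)
mean_s = sum(s) / len(s)
var_s = sum((x - mean_s) ** 2 for x in s) / len(s)
# both mean_s and var_s should be close to lam * p = 3.0,
# the equal-mean-and-variance signature of a Poisson distribution
```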
APA, Harvard, Vancouver, ISO, and other styles
10

Kim, Keunpyo. "Process Monitoring with Multivariate Data:Varying Sample Sizes and Linear Profiles." Diss., Virginia Tech, 2003. http://hdl.handle.net/10919/29741.

Full text
Abstract:
Multivariate control charts are used to monitor a process when more than one quality variable associated with the process is being observed. The multivariate exponentially weighted moving average (MEWMA) control chart is one of the most commonly recommended tools for multivariate process monitoring. The standard practice, when using the MEWMA control chart, is to take samples of fixed size at regular sampling intervals for each variable. In the first part of this dissertation, MEWMA control charts based on sequential sampling schemes with two possible stages are investigated. When sequential sampling with two possible stages is used, observations at a sampling point are taken in two groups, and the number of groups actually taken is a random variable that depends on the data. The basic idea is that sampling starts with a small initial group of observations, and no additional sampling is done at this point if there is no indication of a problem with the process. But if there is some indication of a problem with the process, then an additional group of observations is taken at this sampling point. The performance of the sequential sampling (SS) MEWMA control chart is compared to the performance of standard control charts. It is shown that the SS MEWMA chart is substantially more efficient in detecting changes in the process mean vector than standard control charts that do not use sequential sampling. Also the situation is considered where different variables may have different measurement costs. MEWMA control charts with unequal sample sizes based on differing measurement costs are investigated in order to improve the performance of process monitoring. Sequential sampling plans are applied to MEWMA control charts with unequal sample sizes and compared to the standard MEWMA control charts with a fixed sample size. The steady-state average time to signal (SSATS) is computed using simulation and compared for some selected sets of sample sizes.
When different variables have significantly different measurement costs, using unequal sample sizes can be more cost effective than using the same fixed sample size for each variable. In the second part of this dissertation, control chart methods are proposed for process monitoring when the quality of a process or product is characterized by a linear function. In the historical analysis of Phase I data, methods including the use of a bivariate T² chart to check for stability of the regression coefficients, in conjunction with a univariate Shewhart chart to check for stability of the variation about the regression line, are recommended. The use of three univariate control charts in Phase II is recommended. These three charts are used to monitor the Y-intercept, the slope, and the variance of the deviations about the regression line, respectively. A simulation study shows that this type of Phase II method can detect sustained shifts in the parameters better than competing methods in terms of average run length (ARL) performance. The monitoring of linear profiles is also related to the control charting of regression-adjusted variables and other methods.
Ph. D.
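The MEWMA statistic underlying the charts described above can be sketched as follows; for simplicity this assumes standardized, uncorrelated variables (identity in-control covariance) and omits the sequential-sampling layer the dissertation adds:

```python
def mewma_stats(observations, lam=0.2):
    """MEWMA statistics for standardized p-variate observations.

    With an identity in-control covariance, the T^2 statistic reduces to
    the squared norm of the EWMA vector z_i divided by the (finite-i)
    variance of each EWMA component, lam/(2-lam) * (1 - (1-lam)^(2i)).
    """
    p = len(observations[0])
    z = [0.0] * p
    stats = []
    for i, x in enumerate(observations, start=1):
        z = [lam * xj + (1 - lam) * zj for xj, zj in zip(x, z)]
        var = lam / (2 - lam) * (1 - (1 - lam) ** (2 * i))
        stats.append(sum(zj * zj for zj in z) / var)
    return stats

# a sustained shift in the mean vector drives the statistic upward,
# which is the signal a MEWMA chart is designed to detect quickly
in_control = [(0.0, 0.0)] * 5
shifted = [(1.0, 1.0)] * 20
t2 = mewma_stats(in_control + shifted)
```

In a real chart the statistic would be compared against a control limit chosen for a target in-control average run length; here the limit is omitted.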
APA, Harvard, Vancouver, ISO, and other styles
More sources

Books on the topic "Variable sample size"

1

Lin, Nancy Pei-ching. A new approach to sample size determination of replicated Latin square designs and analysis of multiple comparison procedures. Ching sheng wen wu kung ying kung ssu, 1985.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Robertson, Rob. Effects of collinearity, sample size, multiple correlation, and predictor-criterion correlation salience on the order of variable entry in stepwise regression. 1997.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
3

Christine, Bachrach, National Survey of Family Growth (U.S.), and National Center for Health Statistics (U.S.), eds. National survey of family growth, cycle III: Sample design, weighting, and variance estimation : this report describes the procedures used to select the sample. U.S. Dept. of Health and Human Services, Public Health Service, National Center for Health Statistics, 1985.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
4

J, Potter Frank, ed. Sample design, sampling weights, imputation, and variance estimation in the 1995 National Survey of Family Growth. National Center for Health Statistics, Centers for Disease Control and Prevention, U.S. Dept. of Health and Human Services, 1998.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
5

Fonseca, Raquel, Arie Kapteyn, and Gema Zamarro. Retirement and Cognitive Functioning. Oxford University Press, 2017. http://dx.doi.org/10.1093/oso/9780198808039.003.0004.

Full text
Abstract:
This chapter surveys recent literature on the effects of retirement on cognitive functioning at older ages around the world. Studies using similar data, definitions of cognition, and instruments to capture causal effects find that being retired leads to a decline of cognition, controlling for different specifications of age functions and other covariates. The size and significance of the estimated effects varied depending on specifications used, such as whether or not models included fixed effects, dynamic specifications, or alternative specifications of instrumental variables. The authors replicated several of these results using the same datasets. Factors that are likely causing the differences across specifications include endogeneity of right-hand side variables, and heterogeneity across gender, occupation, or skill levels. Results were especially sensitive to the inclusion of country fixed effects, to control for unobserved country differences, suggesting the key role of unobserved differences across countries, which both affect retirement ages and cognitive decline.
APA, Harvard, Vancouver, ISO, and other styles
6

National Survey of Family Growth, Cycle 6: Sample Design, Weighting, Imputation and Variance Estimation (Vital and Health Statistics). Dept. of Health and Human Services Centers fo, 2006.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
7

Laver, Michael, and Ernest Sergenti. Systematically Interrogating Agent-Based Models. Princeton University Press, 2017. http://dx.doi.org/10.23943/princeton/9780691139036.003.0004.

Full text
Abstract:
This chapter develops the methods for designing, executing, and analyzing large suites of computer simulations that generate stable and replicable results. It starts with a discussion of the different methods of experimental design, such as grid sweeping and Monte Carlo parameterization. Next, it demonstrates how to calculate mean estimates of output variables of interest. It does so by first discussing stochastic processes, Markov Chain representations, and model burn-in. It focuses on three stochastic process representations: nonergodic deterministic processes that converge on a single state; nondeterministic stochastic processes for which a time average provides a representative estimate of the output variables; and nondeterministic stochastic processes for which a time average does not provide a representative estimate of the output variables. The estimation strategy employed depends on which stochastic process the simulation follows. Lastly, the chapter presents a set of diagnostic checks used to establish an appropriate sample size for the estimation of the means.
APA, Harvard, Vancouver, ISO, and other styles
8

Martin, Colin. Wreck-Site Formation Processes. Edited by Ben Ford, Donny L. Hamilton, and Alexis Catsambis. Oxford University Press, 2012. http://dx.doi.org/10.1093/oxfordhb/9780199336005.013.0002.

Full text
Abstract:
The environmental settings within which shipwrecks occur are matters of chance rather than of choice. It is primarily the wreck and not its physical context that is of consequence to nautical archaeologists. No two wreck-site formations are the same, since the complex and interacting variables that constitute the environmental setting, the nature of the ship, and the circumstances of its loss combine to create a set of attributes unique to each site. The dynamic phase, which begins with the event of shipwreck, is characterized by the wreck's status as an environmental anomaly. It is unstable, lacks integration with its surroundings, and is prone to further disintegration and dispersal by external influences. The chemical and physical properties of water cause reactions with the metals. Understanding these natural processes in the context of the distinctively anthropogenic inputs, this article characterizes archaeology as an essential prerequisite to the interpretation of any shipwreck.
APA, Harvard, Vancouver, ISO, and other styles
9

Sampling Techniques: Methods and Applications. Nova Science Publishers Inc, 2018.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
10

Streiner, David L., Geoffrey R. Norman, and John Cairney. Reliability. Oxford University Press, 2015. http://dx.doi.org/10.1093/med/9780199685219.003.0008.

Full text
Abstract:
This chapter reviews the basic theory of reliability, and examines the relation between reliability and measurement error. It derives the standard form of reliability, the intraclass correlation or ICC, from repeated measures ANOVA. The chapter explores issues in the application of the reliability coefficient, including absolute versus relative reliability, the reliability of multiple observations, and the standard error of measurement. It examines several other measures of reliability—Cohen’s kappa, Pearson r, and the method of Altman and Bland—and derives the relation between them and the ICC. The chapter determines the variance of a reliability estimate. It also calculates sample size estimates for reliability studies, and methods to combine reliability estimates in systematic reviews.
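The ANOVA-based ICC the chapter derives can be illustrated with the simplest one-way random-effects form, ICC(1,1) = (MSB − MSW) / (MSB + (k − 1)·MSW); the rater data below are hypothetical:

```python
def icc_oneway(ratings):
    """ICC(1,1) from a one-way random-effects ANOVA.

    `ratings` is a list of subjects, each a list of k repeated
    measurements. MSB is the between-subject mean square, MSW the
    within-subject mean square.
    """
    n = len(ratings)
    k = len(ratings[0])
    grand = sum(sum(r) for r in ratings) / (n * k)
    subj_means = [sum(r) / k for r in ratings]
    msb = k * sum((m - grand) ** 2 for m in subj_means) / (n - 1)
    msw = sum((x - m) ** 2
              for r, m in zip(ratings, subj_means)
              for x in r) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

# two measurements per subject, highly consistent -> ICC close to 1
scores = [[8, 9], [4, 5], [6, 6], [2, 3], [7, 8]]
icc = icc_oneway(scores)
```

Because the measurements within each subject agree closely relative to the spread between subjects, the ICC here is high (about 0.93), illustrating the relation between reliability and measurement error discussed in the chapter.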
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Variable sample size"

1

Yamamura, Mariko, Hirokazu Yanagihara, and Muni S. Srivastava. "Variable Selection by C p Statistic in Multiple Responses Regression with Fewer Sample Size Than the Dimension." In Knowledge-Based and Intelligent Information and Engineering Systems. Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-15393-8_2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Plant, Richard E. "Variance Estimation, the Effective Sample Size, and the Bootstrap." In Spatial Data Analysis in Ecology and Agriculture Using R. CRC Press, 2018. http://dx.doi.org/10.1201/9781351189910-10.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Hyakutake, Hiroto, and Minoru Siotani. "Mean and Variance of Sample Size in Multivariate Heteroscedastic Method." In Contributions to Probability and Statistics. Springer New York, 1989. http://dx.doi.org/10.1007/978-1-4612-3678-8_16.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Barrio, Irantzu, Inmaculada Arostegui, and María-Xosé Rodríguez-Álvarez. "Sample Size Impact on the Categorisation of Continuous Variables in Clinical Prediction." In Trends in Mathematics. Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-55639-0_3.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Jankovič, Slobodanka. "Some properties of random variables which are stable with respect to the random sample size." In Stability Problems for Stochastic Models. Springer Berlin Heidelberg, 1993. http://dx.doi.org/10.1007/bfb0084483.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Liefbroer, Aart C., and Mioara Zoutewelle-Terovan. "Meta-Analysis and Meta-Regression: An Alternative to Multilevel Analysis When the Number of Countries Is Small." In Social Background and the Demographic Life Course: Cross-National Comparisons. Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-67345-1_6.

Full text
Abstract:
AbstractHierarchically nested data structures are often analyzed by means of multilevel techniques. A common situation in cross-national comparative research is data on two levels, with information on individuals at level 1 and on countries at level 2. However, when dealing with few level-2 units (e.g. countries), results from multilevel models may be unreliable due to estimation bias (e.g. underestimated standard errors, unreliable country-level variance estimates). This chapter provides a discussion on multilevel modeling inaccuracies when using a small level-2 sample size, as well as a list of available alternative analytic tools for analyzing such data. However, as in practice many of these alternatives remain unfeasible in testing hypotheses central to cross-national comparative research, the aim of this chapter is to propose and illustrate a new technique – the 2-step meta-analytic approach – reliable in the analysis of nested data with few level-2 units. In addition, this method is highly infographic and accessible to the average social scientist (not skilled in advanced simulation techniques).
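Step 2 of a meta-analytic approach of this kind is commonly a fixed-effect inverse-variance pooling of the country-level estimates; a minimal sketch (the chapter's 2-step method is richer, and the coefficients below are hypothetical):

```python
def pooled_effect(estimates, std_errors):
    """Fixed-effect inverse-variance pooling of per-country estimates.

    Step 1 (a separate regression within each country) is assumed done;
    this performs step 2. Returns the pooled effect and its standard
    error, where each country is weighted by 1 / SE^2.
    """
    weights = [1.0 / se ** 2 for se in std_errors]
    total = sum(weights)
    effect = sum(w * b for w, b in zip(weights, estimates)) / total
    se = (1.0 / total) ** 0.5
    return effect, se

# hypothetical country-level coefficients and standard errors
betas = [0.30, 0.25, 0.40, 0.10]
ses = [0.10, 0.05, 0.20, 0.08]
effect, se = pooled_effect(betas, ses)
```

Precisely estimated countries dominate the pooled value, which is why this two-step route avoids the unreliable level-2 variance estimates that plague multilevel models with few countries.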
APA, Harvard, Vancouver, ISO, and other styles
7

Odongtoo, Godfrey, Denis Ssebuggwawo, and Peter Okidi Lating. "Water Resource Management Frameworks in Water-Related Adaptation to Climate Change." In African Handbook of Climate Change Adaptation. Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-42091-8_24-1.

Full text
Abstract:
AbstractThis chapter addresses the use of partial least squares–structural equation modeling (PLS-SEM) to determine the requirements for an effective development of water resource management frameworks. The authors developed a quantitative approach using Smart-PLS version 3 to reveal the views of different experts based on their experiences in water-related adaptation to climate change in the Lake Victoria Basin (LVB) in Uganda. A sample size of 152 was computed from a population size of 245 across the districts of Buikwe, Jinja, Mukono, Kampala, and Wakiso. The chapter aimed to determine the relationship among the availability of legal, regulatory, and administrative frameworks, public water investment, price and demand management, information requirements, coordination structures, and analytical frameworks and how they influence the development of water resource management frameworks. The findings revealed that the availability of legal, regulatory, and administrative frameworks, public water investment, price and demand management, information requirements, and coordination structures had significant and positive effects on the development of water resource management frameworks. Public water investment had the highest path coefficient (β = 0.387 and p = 0.000), thus indicating that it has the greatest influence on the development of water resource management frameworks. The R2 value of the model was 0.714, which means that the five exogenous latent constructs collectively explained 71.4% of the variance in the development. The chapter suggests putting special emphasis on public water investment to achieve an effective development of water resource management frameworks. These findings can support the practitioners and decision makers engaged in water-related adaptation to climate change within the LVB and beyond.
APA, Harvard, Vancouver, ISO, and other styles
8

Odongtoo, Godfrey, Denis Ssebuggwawo, and Peter Okidi Lating. "Water Resource Management Frameworks in Water-Related Adaptation to Climate Change." In African Handbook of Climate Change Adaptation. Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-45106-6_24.

Full text
Abstract:
AbstractThis chapter addresses the use of partial least squares–structural equation modeling (PLS-SEM) to determine the requirements for an effective development of water resource management frameworks. The authors developed a quantitative approach using Smart-PLS version 3 to reveal the views of different experts based on their experiences in water-related adaptation to climate change in the Lake Victoria Basin (LVB) in Uganda. A sample size of 152 was computed from a population size of 245 across the districts of Buikwe, Jinja, Mukono, Kampala, and Wakiso. The chapter aimed to determine the relationship among the availability of legal, regulatory, and administrative frameworks, public water investment, price and demand management, information requirements, coordination structures, and analytical frameworks and how they influence the development of water resource management frameworks. The findings revealed that the availability of legal, regulatory, and administrative frameworks, public water investment, price and demand management, information requirements, and coordination structures had significant and positive effects on the development of water resource management frameworks. Public water investment had the highest path coefficient (β = 0.387 and p = 0.000), thus indicating that it has the greatest influence on the development of water resource management frameworks. The R2 value of the model was 0.714, which means that the five exogenous latent constructs collectively explained 71.4% of the variance in the development. The chapter suggests putting special emphasis on public water investment to achieve an effective development of water resource management frameworks. These findings can support the practitioners and decision makers engaged in water-related adaptation to climate change within the LVB and beyond.
APA, Harvard, Vancouver, ISO, and other styles
9

Chakraborty, Supratik, Ashutosh Gupta, and Divyesh Unadkat. "Diffy: Inductive Reasoning of Array Programs Using Difference Invariants." In Computer Aided Verification. Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-81688-9_42.

Full text
Abstract:
AbstractWe present a novel verification technique to prove properties of a class of array programs with a symbolic parameter N denoting the size of arrays. The technique relies on constructing two slightly different versions of the same program. It infers difference relations between the corresponding variables at key control points of the joint control-flow graph of the two program versions. The desired post-condition is then proved by inducting on the program parameter N, wherein the difference invariants are crucially used in the inductive step. This contrasts with classical techniques that rely on finding potentially complex loop invariants for each loop in the program. Our synergistic combination of inductive reasoning and finding simple difference invariants helps prove properties of programs that cannot be proved even by the winner of the Arrays sub-category in SV-COMP 2021. We have implemented a prototype tool called Diffy to demonstrate these ideas. We present results comparing the performance of Diffy with that of state-of-the-art tools.
APA, Harvard, Vancouver, ISO, and other styles
10

Hankin, David G., Michael S. Mohr, and Ken B. Newman. "Equal probability sampling." In Sampling Theory. Oxford University Press, 2019. http://dx.doi.org/10.1093/oso/9780198815792.003.0003.

Full text
Abstract:
This chapter presents a formal quantitative treatment of material covered conceptually in Chapter 2, all with respect to equal probability selection with replacement (SWR) and without replacement, i.e., simple random sampling (SRS), of samples of size n from a finite population of size N. Small sample space examples are used to illustrate unbiasedness of mean-per-unit estimators of the mean, total and proportion of the target variable, y, for SWR and SRS. Explicit formulas for sampling variance indicate how estimator uncertainty depends on finite population variance, sample size and sampling fraction. Measures of the relative performance of alternative sampling strategies (relative precision, relative efficiency, net relative efficiency) are introduced and applied to mean-per-unit estimators used for the SWR and SRS selection methods. Normality of the sampling distribution of the SRS mean-per-unit estimator depends on sample size but also on the shape of the distribution of the target variable, y, values over the finite population units. Normality of the sampling distribution is required to justify valid 95% confidence intervals constructed around sample estimates based on unbiased estimates of sampling variance. Methods to calculate sample size to achieve accuracy objectives are presented. Additional topics include Bernoulli sampling (a without replacement selection scheme for which sample size is a random variable), the Rao–Blackwell theorem (which allows improvement of estimators that are based on selection methods which may result in repeated selection of the same units), oversampling and nonresponse.
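The SRS-versus-SWR variance comparison follows standard formulas; a small sketch, where `pop_variance` stands for the finite-population variance S² (divisor N − 1):

```python
def srs_mean_variance(pop_variance, n, N):
    """Sampling variance of the mean-per-unit estimator under SRS
    (without replacement); includes the finite population correction."""
    return (1 - n / N) * pop_variance / n

def swr_mean_variance(pop_variance, n):
    """Sampling variance under with-replacement selection (SWR): no fpc."""
    return pop_variance / n

# SRS is more efficient than SWR, and the gap grows with the
# sampling fraction n/N
v_srs = srs_mean_variance(pop_variance=25.0, n=100, N=500)
v_swr = swr_mean_variance(pop_variance=25.0, n=100)
```

With a 20% sampling fraction the fpc shrinks the SRS variance to 0.8 of the SWR value, which is exactly the relative-efficiency comparison the chapter formalizes.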
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Variable sample size"

1

Ming Lei, Barend J. van Wyk, and Guoyuan Qi. "A variable sample size particle filter." In IEEE International Conference on Automation and Logistics (ICAL). IEEE, 2008. http://dx.doi.org/10.1109/ical.2008.4636206.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Yang, Su-Fen, and Chih-Ching Yang. "Optimal variable sample size and sampling interval MSE chart." In 2011 8th International Conference on Service Systems and Service Management (ICSSSM 2011). IEEE, 2011. http://dx.doi.org/10.1109/icsssm.2011.5959315.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Muhammad, Anis Nabila Binti, Chong Zhi Lin, Yeong Wai Chung, and Lam Weng Siew. "Variable sample size EWMA CV chart based on expected average run length." In THE 4TH INNOVATION AND ANALYTICS CONFERENCE & EXHIBITION (IACE 2019). AIP Publishing, 2019. http://dx.doi.org/10.1063/1.5121124.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Lim, Sok Li, Wai Chung Yeong, Michael Boon Chong Khoo, and Xin Ying Chew. "A Comparison of the Variable Sampling Interval (VSI) and Variable Sample Size (VSS) Coefficient of Variation Charts." In The 4th World Congress on Mechanical, Chemical, and Material Engineering. Avestia Publishing, 2018. http://dx.doi.org/10.11159/icmie18.129.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Shi, Y., L. Zhou, and A. Zielinski. "Variable Step Size Adaptive Sub-Sample Delay Estimation Using a Quadrature Phase Detector." In Oceans 2007. IEEE, 2007. http://dx.doi.org/10.1109/oceans.2007.4449160.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Jalilzadeh, Afrooz, Angelia Nedic, Uday V. Shanbhag, and Farzad Yousefian. "A Variable Sample-Size Stochastic Quasi-Newton Method for Smooth and Nonsmooth Stochastic Convex Optimization." In 2018 IEEE Conference on Decision and Control (CDC). IEEE, 2018. http://dx.doi.org/10.1109/cdc.2018.8619209.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Jalilzadeh, Afrooz, and Uday V. Shanbhag. "eg-VSSA: An extragradient variable sample-size stochastic approximation scheme: Error analysis and complexity trade-offs." In 2016 Winter Simulation Conference (WSC). IEEE, 2016. http://dx.doi.org/10.1109/wsc.2016.7822133.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Lei, Jinlong, and Uday V. Shanbhag. "Linearly Convergent Variable Sample-Size Schemes for Stochastic Nash Games: Best-Response Schemes and Distributed Gradient-Response Schemes." In 2018 IEEE Conference on Decision and Control (CDC). IEEE, 2018. http://dx.doi.org/10.1109/cdc.2018.8618953.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Bakhshande, Fateme, and Dirk Söffker. "Variable Step Size Kalman Filter Using Event Handling Algorithm for Switching Systems." In ASME 2018 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. American Society of Mechanical Engineers, 2018. http://dx.doi.org/10.1115/detc2018-85435.

Full text
Abstract:
This paper presents a novel variable step size Kalman Filter by augmenting the event handling procedure of Ordinary Differential Equation (ODE) solvers with the predictor-corrector scheme of the well-known discrete Kalman Filter (KF). The main goal is to increase the estimation performance of the Kalman Filter in the case of switching/stiff systems. Unlike the fixed step size Kalman Filter, the sample time (ST) in the proposed approach is adapted based on the current estimation performance (KF innovation) of system states and can change during the estimation procedure. The proposed event handling algorithm consists of two main parts: relaxing ST and restricting ST. The relaxing procedure is used to avoid high computational time when no rapid change exists in the system dynamics. The restricting procedure is considered to improve the estimation performance by decreasing the Kalman filter step size in the case of fast dynamical behavior (switching behavior). The accuracy and computational time are controlled by using design parameters. The effectiveness of the proposed approach is verified by simulation results using the bouncing ball example as a switching system.
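The restrict/relax logic can be sketched in isolation; the thresholds and scaling factors below are illustrative assumptions, not the paper's tuned design parameters:

```python
def adapt_step(dt, innovation, threshold, dt_min=1e-3, dt_max=0.5):
    """Event-handling-style step-size adaptation for a discrete filter.

    Restrict: shrink the sample time when a large innovation suggests
    fast (switching) dynamics. Relax: grow it when the filter tracks
    well, to save computation. Factors 2.0 and 1.2 are arbitrary here.
    """
    if abs(innovation) > threshold:
        return max(dt / 2.0, dt_min)   # restrict around the event
    return min(dt * 1.2, dt_max)       # relax during smooth dynamics
```

In a full implementation this function would sit inside the KF loop, with `innovation` taken as the predicted-measurement residual at each update.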
APA, Harvard, Vancouver, ISO, and other styles
10

Li, Xue-Bin, and Xiao-Ling Yu. "Influence of Sample Size on Prediction of Animal Phenotype Value Using Back-Propagation Artificial Neural Network with Variable Hidden Neurons." In 2009 International Conference on Computational Intelligence and Software Engineering. IEEE, 2009. http://dx.doi.org/10.1109/cise.2009.5366246.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Variable sample size"

1

Kott, Phillip S. Better Coverage Intervals for Estimators from a Complex Sample Survey. RTI Press, 2020. http://dx.doi.org/10.3768/rtipress.2020.mr.0041.2002.

Full text
Abstract:
Coverage intervals for a parameter estimate computed using complex survey data are often constructed by assuming the parameter estimate has an asymptotically normal distribution and the measure of the estimator’s variance is roughly chi-squared. The size of the sample and the nature of the parameter being estimated render this conventional “Wald” methodology dubious in many applications. I developed a revised method of coverage-interval construction that “speeds up the asymptotics” by incorporating an estimated measure of skewness. I discuss how skewness-adjusted intervals can be computed for ratios, differences between domain means, and regression coefficients.
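One classical way to "speed up the asymptotics" with a skewness term is Johnson's modified-t interval, which shifts the Wald interval for a mean by μ̂₃/(6s²n); this illustrates the general idea only and is not Kott's specific estimator for complex surveys:

```python
import math

def johnson_interval(xs, z=1.96):
    """Skewness-adjusted (Johnson modified-t) interval for a mean.

    The usual Wald interval is shifted by mu3 / (6 * s^2 * n), where
    mu3 is the sample third central moment: positive skew pushes the
    interval upward, correcting the Wald interval's undercoverage.
    """
    n = len(xs)
    mean = sum(xs) / n
    s2 = sum((x - mean) ** 2 for x in xs) / (n - 1)
    mu3 = sum((x - mean) ** 3 for x in xs) / n
    shift = mu3 / (6 * s2 * n)
    half = z * math.sqrt(s2 / n)
    return mean + shift - half, mean + shift + half

# right-skewed data: the adjusted interval sits above the Wald interval
lo, hi = johnson_interval([0.1, 0.2, 0.2, 0.3, 1.5, 2.9])
```

For right-skewed data the midpoint of the interval exceeds the sample mean, which is the direction of correction the report's skewness adjustment also aims for.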
APA, Harvard, Vancouver, ISO, and other styles
2

Lo, Andrew, and A. Craig MacKinlay. The Size and Power of the Variance Ratio Test in Finite Samples: A Monte Carlo Investigation. National Bureau of Economic Research, 1988. http://dx.doi.org/10.3386/t0066.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Scholz, Florian. Sedimentary fluxes of trace metals, radioisotopes and greenhouse gases in the southwestern Baltic Sea Cruise No. AL543, 23.08.2020 – 28.08.2020, Kiel – Kiel - SEDITRACE. GEOMAR Helmholtz Centre for Ocean Research Kiel, 2020. http://dx.doi.org/10.3289/cr_al543.

Full text
Abstract:
R/V Alkor Cruise AL543 was planned as a six-day cruise with a program of water column and sediment sampling in Kiel Bight and the western Baltic Sea. Due to restrictions related to the Covid-19 pandemic, the original plan had to be changed and the cruise was realized as six one-day cruises with sampling in Kiel Bight exclusively. The first day was dedicated to water column and sediment sampling for radionuclide analyses at Boknis Eck and Mittelgrund in Eckernförde Bay. On the remaining five days, water column, bottom water, sediment and pore water samples were collected at eleven stations covering different types of seafloor environment (grain size, redox conditions) in western Kiel Bight. The data and samples obtained on cruise AL543 will be used to investigate (i) the sedimentary cycling of bio-essential metals (e.g., nickel, zinc, and their isotopes) as a function of variable redox conditions, (ii) the impact of submarine groundwater discharge and diffusive benthic fluxes on the distribution of radium and radon as well as greenhouse gases (methane and nitrous oxide) in the water column, and (iii) to characterize and quantify the impact of coastal erosion on sedimentary iron, phosphorus and rare earth element cycling in Kiel Bight.
APA, Harvard, Vancouver, ISO, and other styles
4

McPhedran, R., K. Patel, B. Toombs, et al. Food allergen communication in businesses feasibility trial. Food Standards Agency, 2021. http://dx.doi.org/10.46756/sci.fsa.tpf160.

Full text
Abstract:
Background: Clear allergen communication in food business operators (FBOs) has been shown to have a positive impact on customers’ perceptions of businesses (Barnett et al., 2013). However, the precise size and nature of this effect is not known: there is a paucity of quantitative evidence in this area, particularly in the form of randomised controlled trials (RCTs). The Food Standards Agency (FSA), in collaboration with Kantar’s Behavioural Practice, conducted a feasibility trial to investigate whether a randomised cluster trial – involving the proactive communication of allergen information at the point of sale in FBOs – is feasible in the United Kingdom (UK). Objectives: The trial sought to establish: ease of recruitment of businesses into trials; customer response rates for in-store outcome surveys; fidelity of intervention delivery by FBO staff; sensitivity of outcome survey measures to change; and appropriateness of the chosen analytical approach. Method: Following a recruitment phase – in which one of fourteen multinational FBOs was successfully recruited – the execution of the feasibility trial involved a quasi-randomised matched-pairs clustered experiment. Each of the FBO’s ten participating branches underwent pair-wise matching, with similarity of branches judged according to four criteria: Food Hygiene Rating Scheme (FHRS) score, average weekly footfall, number of staff and customer satisfaction rating. The allocation ratio for this trial was 1:1: one branch in each pair was assigned to the treatment group by a representative from the FBO, while the other continued to operate in accordance with their standard operating procedure. As a business-based feasibility trial, customers at participating branches throughout the fieldwork period were automatically enrolled in the trial. The trial was single-blind: customers at treatment branches were not aware that they were receiving an intervention.
All customers who visited participating branches throughout the fieldwork period were asked to complete a short in-store survey on a tablet affixed in branches. This survey contained four outcome measures which operationalised customers’: perceptions of food safety in the FBO; trust in the FBO; self-reported confidence to ask for allergen information in future visits; and overall satisfaction with their visit. Results: Fieldwork was conducted from the 3 – 20 March 2020, with cessation occurring prematurely due to the closure of outlets following the proliferation of COVID-19. n=177 participants took part in the trial across the ten branches; however, response rates (which ranged between 0.1 - 0.8%) were likely also adversely affected by COVID-19. Intervention fidelity was an issue in this study: while compliance with delivery of the intervention was relatively high in treatment branches (78.9%), erroneous delivery in control branches was also common (46.2%). Survey data were analysed using random-intercept multilevel linear regression models (due to the nesting of customers within branches). Despite the trial’s modest sample size, there was some evidence to suggest that the intervention had a positive effect for those suffering from allergies/intolerances for the ‘trust’ (β = 1.288, p<0.01) and ‘satisfaction’ (β = 0.945, p<0.01) outcome variables. Due to singularity within the fitted linear models, hierarchical Bayes models were used to corroborate the size of these interactions. Conclusions: The results of this trial suggest that a fully powered clustered RCT would likely be feasible in the UK. In this case, the primary challenge in the execution of the trial was the recruitment of FBOs: despite high levels of initial interest from four chains, only one took part.
However, it is likely that the proliferation of COVID-19 adversely impacted chain participation – two other FBOs withdrew during branch eligibility assessment and selection, citing COVID-19 as a barrier. COVID-19 also likely lowered the on-site survey response rate: a significant negative Pearson correlation was observed between daily survey completions and COVID-19 cases in the UK. Limitations: The trial was quasi-random: selection of branches, pair matching and allocation to treatment/control groups were not systematically conducted. These processes were undertaken by a representative from the FBO’s Safety and Quality Assurance team (with oversight from Kantar representatives on pair matching), as a result of the chain’s internal operational restrictions.
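The matched-pairs allocation described in the abstract above (branches paired on four similarity criteria, then one branch per pair assigned to treatment at a 1:1 ratio) can be sketched as follows. This is a minimal illustration with invented branch data, not the trial's actual procedure or values; in the trial itself, pairing and assignment were carried out by an FBO representative rather than by an algorithm.

```python
import random

# Hypothetical branch records carrying the four matching criteria named in
# the abstract (FHRS score, weekly footfall, staff count, satisfaction).
# All values are invented for illustration.
branches = [
    {"id": "B0", "fhrs": 5, "footfall": 8200, "staff": 22, "satisfaction": 4.6},
    {"id": "B1", "fhrs": 5, "footfall": 7900, "staff": 21, "satisfaction": 4.5},
    {"id": "B2", "fhrs": 4, "footfall": 6100, "staff": 15, "satisfaction": 4.1},
    {"id": "B3", "fhrs": 4, "footfall": 5800, "staff": 14, "satisfaction": 4.0},
    {"id": "B4", "fhrs": 4, "footfall": 4300, "staff": 12, "satisfaction": 3.8},
    {"id": "B5", "fhrs": 4, "footfall": 4100, "staff": 11, "satisfaction": 3.9},
    {"id": "B6", "fhrs": 3, "footfall": 3200, "staff": 9,  "satisfaction": 3.5},
    {"id": "B7", "fhrs": 3, "footfall": 3000, "staff": 8,  "satisfaction": 3.4},
    {"id": "B8", "fhrs": 3, "footfall": 2400, "staff": 6,  "satisfaction": 3.2},
    {"id": "B9", "fhrs": 3, "footfall": 2200, "staff": 6,  "satisfaction": 3.1},
]

def match_and_allocate(branches, seed=0):
    """Pair branches by similarity on the four criteria, then randomly
    assign one branch in each pair to treatment (1:1 allocation)."""
    rng = random.Random(seed)
    # Crude similarity proxy: sort on the criteria and pair adjacent branches.
    key = lambda b: (b["fhrs"], b["footfall"], b["staff"], b["satisfaction"])
    ordered = sorted(branches, key=key)
    pairs = [(ordered[i], ordered[i + 1]) for i in range(0, len(ordered), 2)]
    allocation = {}
    for a, b in pairs:
        treated, control = rng.sample([a, b], 2)
        allocation[treated["id"]] = "treatment"
        allocation[control["id"]] = "control"
    return pairs, allocation

pairs, allocation = match_and_allocate(branches)
```

With ten branches this yields five pairs and a 5:5 treatment/control split, mirroring the trial's cluster structure.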
APA, Harvard, Vancouver, ISO, and other styles
5

Plueddemann, Albert, Benjamin Pietro, and Emerson Hasbrouck. The Northwest Tropical Atlantic Station (NTAS): NTAS-19 Mooring Turnaround Cruise Report Cruise On Board RV Ronald H. Brown October 14 - November 1, 2020. Woods Hole Oceanographic Institution, 2021. http://dx.doi.org/10.1575/1912/27012.

Full text
Abstract:
The Northwest Tropical Atlantic Station (NTAS) was established to address the need for accurate air–sea flux estimates and upper ocean measurements in a region with strong sea surface temperature anomalies and the likelihood of significant local air–sea interaction on interannual to decadal timescales. The approach is to maintain a surface mooring outfitted for meteorological and oceanographic measurements at a site near 15°N, 51°W by successive mooring turnarounds. These observations will be used to investigate air–sea interaction processes related to climate variability. This report documents recovery of the NTAS-18 mooring and deployment of the NTAS-19 mooring at the same site. Both moorings used Surlyn foam buoys as the surface element. These buoys were outfitted with two Air–Sea Interaction Meteorology (ASIMET) systems. Each system measures, records, and transmits via Argos satellite the surface meteorological variables necessary to compute air–sea fluxes of heat, moisture and momentum. The upper 160 m of the mooring line were outfitted with oceanographic sensors for the measurement of temperature, salinity and velocity. Deep ocean temperature and salinity are measured at approximately 38 m above the bottom. The mooring turnaround was done on the National Oceanic and Atmospheric Administration (NOAA) Ship Ronald H. Brown, Cruise RB-20-06, by the Upper Ocean Processes Group of the Woods Hole Oceanographic Institution. The cruise took place between 14 October and 1 November 2020. The NTAS-19 mooring was deployed on 22 October, with an anchor position of about 14° 49.48′ N, 51° 00.96′ W in 4985 m of water. A 31-hour intercomparison period followed, during which satellite telemetry data from the NTAS-19 buoy and the ship’s meteorological sensors were monitored. The NTAS-18 buoy, which had gone adrift on 28 April 2020, was recovered on 20 October near 13° 41.96′ N, 58° 38.67′ W.
This report describes these operations, as well as other work done on the cruise and some of the pre-cruise buoy preparations.
APA, Harvard, Vancouver, ISO, and other styles
