Academic literature on the topic 'Dynamic sample size selection'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Dynamic sample size selection.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Dynamic sample size selection"

1

Ortiz, A. R., H. T. Banks, C. Castillo-Chavez, G. Chowell, and X. Wang. "A Deterministic Methodology for Estimation of Parameters in Dynamic Markov Chain Models." Journal of Biological Systems 19, no. 01 (2011): 71–100. http://dx.doi.org/10.1142/s0218339011003798.

Full text
Abstract:
A method for estimating parameters in dynamic stochastic (Markov Chain) models based on Kurtz's limit theory coupled with inverse problem methods developed for deterministic dynamical systems is proposed and illustrated in the context of disease dynamics. This methodology relies on finding an approximate large-population behavior of an appropriately scaled stochastic system. The approach leads to a deterministic approximation obtained as solutions of rate equations (ordinary differential equations) in terms of the large sample size average over sample paths or trajectories (limits of pure jump Markov processes). Using the resulting deterministic model, we select parameter subset combinations that can be estimated using an ordinary-least-squares (OLS) or generalized-least-squares (GLS) inverse problem formulation with a given data set. The selection is based on two criteria of the sensitivity matrix: the degree of sensitivity measured in the form of its condition number and the degree of uncertainty measured in the form of its parameter selection score. We illustrate the ideas with a stochastic model for the transmission of vancomycin-resistant enterococcus (VRE) in hospitals and VRE surveillance data from an oncology unit.
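To make the subset-selection criterion concrete, here is a minimal numerical sketch in Python (a hypothetical illustration, not the authors' code): candidate parameter subsets are ranked by the condition number of the corresponding columns of a sensitivity matrix, together with a crude standard-error-based score; the toy model, parameter names, and the exact form of the score are assumptions.

```python
from itertools import combinations

import numpy as np

def rank_parameter_subsets(S, param_names, k):
    """Rank k-parameter subsets by the condition number of the corresponding
    columns of the sensitivity matrix S (n_observations x n_parameters)."""
    results = []
    for idx in combinations(range(S.shape[1]), k):
        sub = S[:, idx]
        cond = np.linalg.cond(sub)                     # sensitivity criterion
        # crude uncertainty proxy: standard errors from the information matrix
        cov = np.linalg.inv(sub.T @ sub)
        score = np.linalg.norm(np.sqrt(np.diag(cov)))
        results.append((tuple(param_names[i] for i in idx), cond, score))
    return sorted(results, key=lambda r: (r[1], r[2]))

# toy sensitivity matrix for an assumed 4-parameter model at 50 observation times;
# the last column is nearly collinear with the first, so subsets containing both rank poorly
rng = np.random.default_rng(0)
S = rng.normal(size=(50, 4))
S[:, 3] = 0.98 * S[:, 0] + 0.02 * rng.normal(size=50)
for subset, cond, score in rank_parameter_subsets(S, ["beta", "gamma", "delta", "kappa"], k=2)[:3]:
    print(subset, f"cond={cond:.1f}", f"score={score:.3f}")
```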
2

Hussain, Sarfraz, Abdul Quddus, Pham Phat Tien, Muhammad Rafiq, and Drahomíra Pavelková. "The moderating role of firm size and interest rate in capital structure of the firms: selected sample from sugar sector of Pakistan." Investment Management and Financial Innovations 17, no. 4 (2020): 341–55. http://dx.doi.org/10.21511/imfi.17(4).2020.29.

Full text
Abstract:
The selection of financing is a top priority for businesses, particularly in short- and long-term investment decisions. Mixing debt and equity leads to decisions on the financial structure for businesses. This research analyzes the moderating role of company size and the interest rate in the capital structure over six years (2013–2018) for 29 listed Pakistani enterprises operating in the sugar market. It employed static and dynamic panel analysis with linear and nonlinear regression methods. The capital structure, measured as the debt to capital ratio (non-current plus current liabilities to capital), was the dependent variable. Independent variables were profitability, firm size, tangibility, non-debt tax shield, and liquidity; the macroeconomic variables were exchange rates and interest rates. The investigation reported that profitability, firm size, and the non-debt tax shield were significant and negative, while tangibility and interest rates significantly and positively affected the debt to capital ratio. This means the sugar sector has greater financial leverage to manage its funding obligations for better firm performance. The outcomes therefore revealed that the moderators have an important influence on capital structure.
3

Shan, Gui Jun. "A Dynamic Neighborhood Selection Approach for Locally Linear Embedding." Advanced Materials Research 1033-1034 (October 2014): 1369–72. http://dx.doi.org/10.4028/www.scientific.net/amr.1033-1034.1369.

Full text
Abstract:
Locally linear embedding is based on the assumption that the data manifold is evenly distributed, so the neighborhood of every point is determined with the same neighborhood size. Accordingly, it fails to deal well with most real problems, whose data are unevenly distributed. This paper presents a new approach that adopts the general conceptual framework of Hessian locally linear embedding, so as to guarantee its correctness in the setting of local isometry to an open connected subset, but dynamically determines the local neighborhood size for each point. The approach estimates the approximate geodesic distance between any two points by the shortest path in the local neighborhood graph, and then determines the neighborhood size for each point by using the relationship between its local estimated geodesic distance matrix and its local Euclidean distance matrix. The approach has clear geometric intuition as well as better performance and stability when dealing with sparsely sampled or noise-contaminated data sets that are often unevenly distributed. Experiments conducted on benchmark data sets validate the proposed approach.
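A rough sketch of the per-point neighbourhood-size idea is given below: geodesic distances estimated by shortest paths in a small base neighbourhood graph are compared with Euclidean distances, and a point's neighbourhood stops growing once the two diverge. The thresholds, base graph size, and toy data set are assumptions for illustration, not the published algorithm.

```python
import numpy as np
from scipy.sparse.csgraph import shortest_path
from sklearn.datasets import make_s_curve
from sklearn.neighbors import kneighbors_graph

def dynamic_neighborhood_sizes(X, k_base=5, k_max=15, ratio_tol=1.2):
    """Pick a per-point neighbourhood size between k_base and k_max: grow the
    neighbourhood while the graph (geodesic) distance to the next Euclidean
    neighbour stays close to its straight-line distance."""
    G = kneighbors_graph(X, n_neighbors=k_base, mode="distance")
    geo = shortest_path(G, method="D", directed=False)   # approximate geodesic distances
    eucl = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    order = np.argsort(eucl, axis=1)                      # column 0 is the point itself
    sizes = np.full(len(X), k_max, dtype=int)
    for i in range(len(X)):
        for rank in range(k_base + 1, k_max + 1):
            j = order[i, rank]
            if geo[i, j] > ratio_tol * eucl[i, j]:        # the manifold folds: stop growing
                sizes[i] = rank - 1
                break
    return sizes

X, _ = make_s_curve(n_samples=400, noise=0.03, random_state=1)  # unevenly curved toy manifold
sizes = dynamic_neighborhood_sizes(X)
print(sizes.min(), sizes.max(), sizes.mean().round(1))
```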
4

Orlova, Vera, Vyacheslav Goiko, Yulia Alexandrova, and Evgeny Petrov. "Potential of the dynamic approach to data analysis." E3S Web of Conferences 258 (2021): 07012. http://dx.doi.org/10.1051/e3sconf/202125807012.

Full text
Abstract:
This paper explores the potential of a dynamic data analysis approach for studying user behavior in social networks. Currently, information appears on social networks that allows user groups to be differentiated by their activity, within the technical capabilities of a particular social network. A description of the information field of Tomsk is presented and a brief analysis is given. A dynamic approach to studying user behavior and the structure of nodes and connections of social networks makes it possible to identify the rate of growth or decline in the size of the network and the redistribution of connections between groups. There are four main stages in the analysis of social networks: 1) data collection; 2) selection of data for analysis; 3) selection and application of the analysis method; and 4) drawing conclusions. To obtain a complete picture of the information field of the Tomsk region, posts for 2019 were downloaded from all regional communities. All posts were classified using a training sample and a specialized machine learning algorithm.
5

Thomas, Zoë A., Chris S. M. Turney, Alan Hogg, Alan N. Williams, and Chris J. Fogwill. "Investigating Subantarctic 14C Ages of Different Peat Components: Site and Sample Selection for Developing Robust Age Models in Dynamic Landscapes." Radiocarbon 61, no. 4 (2019): 1009–27. http://dx.doi.org/10.1017/rdc.2019.54.

Full text
Abstract:
Precise radiocarbon (14C) dating of sedimentary sequences is important for developing robust chronologies of environmental change, but sampling of suitable components can be challenging in highly dynamic landscapes. Here we investigate radiocarbon determinations of different peat size fractions from six peat sites, representing a range of geomorphological contexts on the South Atlantic subantarctic islands of the Falklands and South Georgia. To investigate the most suitable fraction for dating, 112 measurements were obtained from three components within selected horizons: a fine fraction <0.2 mm, a coarse fraction >0.2 mm, and bulk material. We find site selection is critical, with locations surrounded by high ground and/or relatively slowly accumulating sites more susceptible to the translocation of older carbon. Importantly, in locations with reduced potential for redeposition of material, our results show that there is no significant or systematic difference between ages derived from bulk material, fine or coarse (plant macrofossil) material, providing confidence in the resulting age model. Crucially, in areas comprising complex terrain with extreme relief, we recommend dating macrofossils or bulk carbon rather than a fine fraction, or employing comprehensive dating of multiple sedimentary fractions to determine the most reliable fraction(s) for developing a robust chronological framework.
6

Mao, W., M. Tian, and G. Yan. "Research of load identification based on multiple-input multiple-output SVM model selection." Proceedings of the Institution of Mechanical Engineers, Part C: Journal of Mechanical Engineering Science 226, no. 5 (2011): 1395–409. http://dx.doi.org/10.1177/0954406211423454.

Full text
Abstract:
In this article, the problem of multiple-input multiple-output (MIMO) load identification is addressed. First, load identification is shown, from dynamic theory, to be a non-linear MIMO black-box modelling process. Second, considering the effect of hyper-parameters in small-sample problems, a new MIMO Support Vector Machine (SVM) model selection method based on multi-objective particle swarm optimization is proposed in order to improve identification performance. The proposed method treats the model selection of MIMO SVM as a multi-objective optimization problem, and the leave-one-out generalization errors of all output models are minimized simultaneously. Once the Pareto-optimal solutions are found, the SVM model with the best generalization ability is determined. The proposed method is evaluated in an experiment on dynamic load identification for a cylinder stochastic vibration system, demonstrating its benefits in comparison to existing model selection methods in terms of identification accuracy and numerical stability, especially near the peaks.
7

Kelbulan, Emanuel, Jane S. Tambas, and Oktavianus Parajouw. "DINAMIKA KELOMPOK TANI KALELON DI DESA KAUNERAN KECAMATAN SONDER." AGRI-SOSIOEKONOMI 14, no. 3 (2018): 55. http://dx.doi.org/10.35791/agrsosek.14.3.2018.21534.

Full text
Abstract:
This study aims to examine the dynamics of the Kalelon farmer group in terms of the elements of group dynamics. The research took place over 3 months, from October to December 2017, in Kauneran Village, Sonder Sub-district. Data were collected through interviews, and the sample was selected using purposive sampling; the respondents were 15 members of the group. The analysis technique used in this study is qualitative descriptive analysis. The results showed that the Kalelon farmer group was dynamic: eight of the nine elements of group dynamics (group goals, group structure, group development and coaching, group cohesiveness, group task functions, group atmosphere, group effectiveness, and hidden intentions) were dynamic or good, while one element, pressure in the group, was not dynamic.
8

Govoreanu, R., H. Saveyn, P. Van der Meeren, I. Nopens, and P. A. Vanrolleghem. "A methodological approach for direct quantification of the activated sludge floc size distribution by using different techniques." Water Science and Technology 60, no. 7 (2009): 1857–67. http://dx.doi.org/10.2166/wst.2009.535.

Full text
Abstract:
The activated sludge floc size distribution (FSD) is investigated by using different measurement techniques in order to gain insight into FSD assessment as well as to detect the strengths and limitations of each technique. A second objective was to determine the experimental conditions that allow a representative and accurate measurement of activated sludge floc size distributions. Laser diffraction, Time Of Transition (TOT) and Dynamic Image Analysis (DIA) devices were connected in series. The sample dilution liquid, the dilution factor and hydraulic flow conditions avoiding flocculation proved to be important. All methods had certain advantages and limitations. The MastersizerS has a broader dynamic size range and provides accurate results at high concentrations. However, it suffers from an imprecise evaluation of small-size flocs and is susceptible to particle shape effects. TOT suffers less from size overestimation for non-spherical particles. However, care should be taken with the settings of the transparency check. Being primarily a counting technique, DIA suffers from a limited size detection range but is an excellent technique for process visualization. All evaluated techniques turned out to be reliable methods to quantify the floc size distribution. Selection of a certain method depends on the purpose of the measurement.
9

Zhao, Chunyu, Yan Cui, Xiaoyu Zhou, and Ying Wang. "Evaluation of Performance of Different Methods in Detecting Abrupt Climate Changes." Discrete Dynamics in Nature and Society 2016 (2016): 1–14. http://dx.doi.org/10.1155/2016/5898697.

Full text
Abstract:
We compared and evaluated the performance of five methods for detecting abrupt climate changes using a time series with artificially generated abrupt characteristics. Next, we analyzed these methods using annual mean surface air temperature records from the Shenyang meteorological station. Our results show that the moving t-test (MTT), Yamamoto (YAMA), and LePage (LP) methods can correctly and effectively detect abrupt changes in means, trends, and dynamic structure; however, they cannot detect changes in variability. We note that the sample size of the subseries used in these tests can affect their results. When the sample size of the subseries ranges from one-quarter to three-quarters of the jump scale, these methods can effectively detect abrupt changes; they perform best when the sample size is one-half of the jump scale. The Cramer method can detect abrupt changes in the mean and trend of a series but not changes in variability or dynamic structure. Finally, we found that the Mann-Kendall test could not detect any type of abrupt change. We found no difference in the results of any of the methods following removal of the mean, creation of an anomaly series, or normalization. However, detrending and study period selection affected the results of the Cramer and Mann-Kendall methods; in the latter case, they could lead to a completely different result.
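For readers unfamiliar with the moving t-test, the snippet below is a minimal generic sketch of the idea (not the authors' implementation): two adjacent subseries of a chosen sample size are compared at each position, and significant t statistics flag candidate abrupt changes.

```python
import numpy as np
from scipy import stats

def moving_t_test(x, sub_size, alpha=0.01):
    """Slide two adjacent windows of length sub_size along x and return the
    indices where the two-sample t statistic is significant."""
    x = np.asarray(x, dtype=float)
    change_points, t_values = [], np.full(len(x), np.nan)
    for i in range(sub_size, len(x) - sub_size + 1):
        left, right = x[i - sub_size:i], x[i:i + sub_size]
        t, p = stats.ttest_ind(left, right, equal_var=True)
        t_values[i] = t
        if p < alpha:
            change_points.append(i)
    return change_points, t_values

# synthetic series with an abrupt mean shift at index 100
rng = np.random.default_rng(42)
series = np.concatenate([rng.normal(0.0, 1.0, 100), rng.normal(2.0, 1.0, 100)])
cps, _ = moving_t_test(series, sub_size=25)
print(cps[:5])  # indices near the true jump at 100
```

As the abstract notes, the choice of the subseries sample size relative to the jump scale strongly affects which changes such a test can resolve.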
10

Atasever, Sema, Zafer Aydın, Hasan Erbay, and Mostafa Sabzekar. "Sample Reduction Strategies for Protein Secondary Structure Prediction." Applied Sciences 9, no. 20 (2019): 4429. http://dx.doi.org/10.3390/app9204429.

Full text
Abstract:
Predicting the secondary structure from protein sequence plays a crucial role in estimating the 3D structure, which has applications in drug design and in understanding the function of proteins. As new genes and proteins are discovered, the large size of the protein databases and datasets that can be used for training prediction models grows considerably. A two-stage hybrid classifier, which employs dynamic Bayesian networks and a support vector machine (SVM), has been shown to provide state-of-the-art prediction accuracy for protein secondary structure prediction. However, SVM is not efficient for large datasets due to the quadratic optimization involved in model training. In this paper, two techniques are implemented on the CB513 benchmark for reducing the number of samples in the train set of the SVM. The first method randomly selects a fraction of data samples from the train set using a stratified selection strategy. This approach can remove approximately 50% of the data samples from the train set and reduce the model training time by 73.38% on average without decreasing the prediction accuracy significantly. The second method clusters the data samples by a hierarchical clustering algorithm and replaces the train set samples with nearest neighbors of the cluster centers in order to improve the training time. To cluster the feature vectors, the hierarchical clustering method is implemented, for which the number of clusters and the number of nearest neighbors are optimized as hyper-parameters by computing the prediction accuracy on validation sets. It is found that clustering can reduce the size of the train set by 26% without reducing the prediction accuracy. Among the clustering techniques, Ward’s method provided the best accuracy on test data.
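The first reduction strategy (stratified random selection of a fraction of the training samples) is easy to sketch with scikit-learn. The example below uses a synthetic data set rather than the CB513 features, so the numbers are purely illustrative.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=5000, n_features=40, n_classes=3,
                           n_informative=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2,
                                                    stratify=y, random_state=0)

# keep only 50% of the training set, preserving class proportions (stratified)
X_small, _, y_small, _ = train_test_split(X_train, y_train, train_size=0.5,
                                          stratify=y_train, random_state=0)

full = SVC(kernel="rbf").fit(X_train, y_train).score(X_test, y_test)
reduced = SVC(kernel="rbf").fit(X_small, y_small).score(X_test, y_test)
print(f"accuracy with full train set:     {full:.3f}")
print(f"accuracy with 50% stratified set: {reduced:.3f}")
```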

Dissertations / Theses on the topic "Dynamic sample size selection"

1

Fernandes, Jessica Katherine de Sousa. "Estudo de algoritmos de otimização estocástica aplicados em aprendizado de máquina." Universidade de São Paulo, 2017. http://www.teses.usp.br/teses/disponiveis/45/45134/tde-28092017-182905/.

Full text
Abstract:
In different Machine Learning applications we may be interested in minimizing the expected value of a loss function. Stochastic optimization and sample size selection play an important role in solving this problem. This work presents the theoretical analysis of several algorithms from these two areas, including variants that incorporate variance reduction. The practical examples show the advantage of Stochastic Gradient Descent in terms of processing time and memory; however, considering the accuracy of the obtained solution together with the cost of minimization, the variance-reduction methods obtain the best solutions. Although the Dynamic Sample Size Gradient and Line Search with variable sample size selection algorithms obtain better solutions than Stochastic Gradient Descent, their disadvantage lies in their high computational cost.
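The dynamic-sample-size idea discussed in this abstract can be sketched as a gradient method whose mini-batch grows whenever a variance ('norm') test on the sampled gradient fails, in the spirit of sample-size selection rules from the stochastic optimization literature. The threshold, growth factor, and least-squares toy problem below are assumptions, not the thesis' exact algorithm.

```python
import numpy as np

def dynamic_sample_size_gd(X, y, theta, lr=0.1, batch=32, growth=1.5,
                           tol=1.0, max_iter=200, seed=0):
    """Least-squares gradient descent whose mini-batch grows whenever the sampled
    gradient fails a variance ('norm') test, so the sample size adapts over time."""
    rng = np.random.default_rng(seed)
    n = len(y)
    for _ in range(max_iter):
        idx = rng.choice(n, size=min(batch, n), replace=False)
        residual = X[idx] @ theta - y[idx]
        per_sample = X[idx] * residual[:, None]        # one gradient per sampled point
        g = per_sample.mean(axis=0)
        variance = per_sample.var(axis=0, ddof=1).sum()
        if variance / len(idx) > tol * g @ g:          # gradient too noisy at this sample size
            batch = min(int(batch * growth), n)
        theta = theta - lr * g
    return theta, batch

rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 5))
true_theta = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
y = X @ true_theta + 0.1 * rng.normal(size=2000)
theta_hat, final_batch = dynamic_sample_size_gd(X, y, theta=np.zeros(5))
print(np.round(theta_hat, 2), "final batch size:", final_batch)
```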
2

Cao, Hongliu. "Forêt aléatoire pour l'apprentissage multi-vues basé sur la dissimilarité : Application à la Radiomique." Thesis, Normandie, 2019. http://www.theses.fr/2019NORMR073/document.

Full text
Abstract:
The work of this thesis was initiated by a Radiomic learning problem. Radiomics is a medical discipline that aims at the large-scale analysis of data from traditional medical imaging to assist in the diagnosis and treatment of cancer. The main hypothesis of this discipline is that by extracting a large amount of information from the images, we can characterize the specificities of this pathology in a much better way than the human eye. To achieve this, Radiomics data are generally based on several types of images and/or several types of features (from images, clinical, genomic). This thesis approaches this problem from the perspective of Machine Learning (ML) and aims to propose a generic solution, adapted to any similar learning problem. To do this, we identify two types of ML problems behind Radiomics: (i) learning from high dimension, low sample size (HDLSS) data and (ii) multi-view learning. The solutions proposed in this manuscript exploit dissimilarity representations obtained using the Random Forest method. The use of dissimilarity representations makes it possible to overcome the well-known difficulties of learning high dimensional data, and to facilitate the joint analysis of the multiple descriptions, i.e. the views. The contributions of this thesis focus on the use of the dissimilarity measurement embedded in the Random Forest method for HDLSS multi-view learning. In particular, we present three main results: (i) the demonstration and analysis of the effectiveness of this measure for HDLSS multi-view learning; (ii) a new method for measuring dissimilarities from Random Forests, better adapted to this type of learning problem; and (iii) a new way to exploit the heterogeneity of views, using a dynamic combination mechanism. These results have been obtained on radiomic data but also on classical multi-view learning problems.
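The random-forest dissimilarity at the heart of this work can be illustrated with the standard proximity construction: two samples are similar when many trees send them to the same leaf. The snippet below sketches that baseline measure with scikit-learn; the thesis' adapted dissimilarity and multi-view combination are not reproduced here.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True)
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

leaves = rf.apply(X)                          # (n_samples, n_trees) leaf indices
n_samples, n_trees = leaves.shape
proximity = np.zeros((n_samples, n_samples))
for t in range(n_trees):                      # fraction of trees that co-locate two samples
    same_leaf = leaves[:, t][:, None] == leaves[:, t][None, :]
    proximity += same_leaf
proximity /= n_trees
dissimilarity = 1.0 - proximity               # could feed a distance-based or multi-view learner

print(dissimilarity.shape, round(float(dissimilarity[0, 1]), 3))
```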
3

Cheng, Dunlei Stamey James D. "Topics in Bayesian sample size determination and Bayesian model selection." Waco, Tex. : Baylor University, 2007. http://hdl.handle.net/2104/5039.

Full text
4

Angeles, Mary Stankovich. "Use of Dynamic Pool Size to Regulate Selection Pressure in Cooperative Coevolutionary Algorithms." NSUWorks, 2010. http://nsuworks.nova.edu/gscis_etd/78.

Full text
Abstract:
Cooperative coevolutionary algorithms (CCEA) are a form of evolutionary algorithm that is applicable when the problem can be decomposed into components. Each component is assigned a subpopulation that evolves a good solution to the subproblem. To compute an individual's fitness, it is combined with collaborators drawn from the other subpopulations to form a complete solution. The individual's fitness is a function of this solution's fitness. The contributors to the comprehensive fitness formula are known as collaborators. The number of collaborators allowed from each subpopulation is called pool size. It has been shown that the outcome of the CCEA can be improved by allowing multiple collaborators from each subpopulation. This results in larger pool sizes, but improved fitness. The improvement in fitness afforded by larger pool sizes is offset by increased calculation costs. This study targeted the pool size parameter of CCEAs by devising dynamic strategies for the assignment of pool size to regulate selection pressure. Subpopulations were rewarded with a larger pool size or penalized with a smaller pool size based on measures of their diversity and/or fitness. Measures for population diversity and fitness used in this study were derived from various works involving evolutionary computation. This study showed that dynamically assigning pool size based on these measures of the diversity and fitness of the subpopulations can yield improved fitness results with significant reduction in calculation costs over statically assigned pool sizes.
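One plausible reward/penalty rule of the kind described above can be written in a few lines; the thresholds and the direction of the reward below are illustrative assumptions, not the dissertation's actual strategy.

```python
def update_pool_size(pool_size, diversity, fitness_gain,
                     diversity_floor=0.2, min_pool=1, max_pool=5):
    """Assumed rule: give a stagnating or low-diversity subpopulation more
    collaborators, and shrink the pool again once it is improving, to save
    fitness evaluations."""
    if diversity < diversity_floor or fitness_gain <= 0.0:
        return min(pool_size + 1, max_pool)    # reward with a larger collaborator pool
    return max(pool_size - 1, min_pool)        # economise when the subpopulation is healthy

# example: a subpopulation that stopped improving gets a bigger pool next generation
print(update_pool_size(pool_size=2, diversity=0.35, fitness_gain=0.0))
```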
5

Feng, Jianshe. "Methodology of Adaptive Prognostics and Health Management in Dynamic Work Environment." University of Cincinnati / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1593267012325542.

Full text
6

Lartigue, Thomas. "Mixtures of Gaussian Graphical Models with Constraints Gaussian Graphical Model exploration and selection in high dimension low sample size setting." Thesis, Institut polytechnique de Paris, 2020. http://www.theses.fr/2020IPPAX034.

Full text
Abstract:
Describing the co-variations between several observed random variables is a delicate problem. Dependency networks are popular tools that depict the relations between variables through the presence or absence of edges between the nodes of a graph. In particular, conditional correlation graphs are used to represent the "direct" correlations between nodes of the graph. They are often studied under the Gaussian assumption and consequently referred to as "Gaussian Graphical Models" (GGM). A single network can be used to represent the overall tendencies identified within a data sample. However, when the observed data is sampled from a heterogeneous population, then there exist different sub-populations that all need to be described through their own graphs. What is more, if the sub-population (or "class") labels are not available, unsupervised approaches must be implemented in order to correctly identify the classes and describe each of them with its own graph. In this work, we tackle the fairly new problem of Hierarchical GGM estimation for unlabelled heterogeneous populations. We explore several key axes to improve the estimation of the model parameters as well as the unsupervised identification of the sub-populations. Our goal is to ensure that the inferred conditional correlation graphs are as relevant and interpretable as possible. First - in the simple, homogeneous population case - we develop a composite method that combines the strengths of the two main state of the art paradigms to correct their weaknesses. For the unlabelled heterogeneous case, we propose to estimate a Mixture of GGM with an Expectation Maximisation (EM) algorithm. In order to improve the solutions of this EM algorithm, and avoid falling for sub-optimal local extrema in high dimension, we introduce a tempered version of this EM algorithm, that we study theoretically and empirically. Finally, we improve the clustering of the EM by taking into consideration the effect of external co-features on the position in space of the observed data.
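The tempering idea mentioned above can be sketched on an ordinary Gaussian-mixture E-step: responsibilities are flattened by a temperature T ≥ 1 so early iterations are less likely to commit to a poor local optimum. The snippet below shows one common tempering scheme; the thesis' temperature schedule and GGM-specific details are not reproduced.

```python
import numpy as np
from scipy.special import logsumexp
from scipy.stats import multivariate_normal

def tempered_e_step(X, weights, means, covs, temperature):
    """E-step of a Gaussian mixture with responsibilities flattened by a
    temperature T >= 1 (T = 1 recovers the ordinary EM E-step)."""
    log_joint = np.column_stack([
        np.log(w) + multivariate_normal.logpdf(X, mean=m, cov=c)
        for w, m, c in zip(weights, means, covs)
    ])
    tempered = log_joint / temperature                 # flatten the posterior
    return np.exp(tempered - logsumexp(tempered, axis=1, keepdims=True))

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, size=(100, 2)),
               rng.normal(3.0, 1.0, size=(100, 2))])
resp = tempered_e_step(X,
                       weights=[0.5, 0.5],
                       means=[np.zeros(2), 3.0 * np.ones(2)],
                       covs=[np.eye(2), np.eye(2)],
                       temperature=2.0)
print(resp[:3].round(3))   # softer assignments than at temperature 1
```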
7

Janse, Sarah A. "Inference Using Bhattacharyya Distance to Model Interaction Effects When the Number of Predictors Far Exceeds the Sample Size." UKnowledge, 2017. https://uknowledge.uky.edu/statistics_etds/30.

Full text
Abstract:
In recent years, statistical analyses, algorithms, and modeling of big data have been constrained due to computational complexity. Further, the added complexity of relationships among response and explanatory variables, such as higher-order interaction effects, makes identifying predictors using standard statistical techniques difficult. These difficulties are only exacerbated in the case of small sample sizes in some studies. Recent analyses have targeted the identification of interaction effects in big data, but the development of methods to identify higher-order interaction effects has been limited by computational concerns. One recently studied method is the Feasible Solutions Algorithm (FSA), a fast, flexible method that aims to find a set of statistically optimal models via a stochastic search algorithm. Although FSA has shown promise, its current limits include that the user must choose the number of times to run the algorithm. Here, statistical guidance is provided for this number of iterations by deriving a lower bound on the probability of obtaining the statistically optimal model in a number of iterations of FSA. Moreover, logistic regression is severely limited when two predictors can perfectly separate the two outcomes. In the case of small sample sizes, this occurs quite often by chance, especially in the case of a large number of predictors. Bhattacharyya distance (B-distance) is proposed as an alternative method to address this limitation. However, little is known about the theoretical properties or distribution of B-distance. Thus, properties and the distribution of this distance measure are derived here. A hypothesis test and confidence interval are developed and tested on both simulated and real data.
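The kind of guidance described here can be illustrated with a back-of-the-envelope geometric argument: if a single run finds the statistically optimal model with probability p, then m independent runs succeed with probability 1 − (1 − p)^m. The snippet below computes the resulting run count for assumed values of p; it is a generic illustration, not the dissertation's derived bound.

```python
import math

def runs_needed(p_single, confidence=0.95):
    """Smallest m with 1 - (1 - p_single)**m >= confidence."""
    return math.ceil(math.log(1.0 - confidence) / math.log(1.0 - p_single))

for p in (0.05, 0.10, 0.25):
    print(f"single-run success probability {p:.2f} -> at least {runs_needed(p)} runs")
```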
8

Thiebaut, Nicolene Magrietha. "Statistical properties of forward selection regression estimators." Diss., University of Pretoria, 2011. http://hdl.handle.net/2263/29520.

Full text
9

Matsouaka, Roland Albert. "Contributions to Imputation Methods Based on Ranks and to Treatment Selection Methods in Personalized Medicine." Thesis, Harvard University, 2012. http://dissertations.umi.com/gsas.harvard:10078.

Full text
Abstract:
The chapters of this thesis focus on two different issues that arise in clinical trials and propose novel methods to address them. The first issue arises in the analysis of data with non-ignorable missing observations. The second issue concerns the development of methods that provide physicians better tools to understand and treat diseases efficiently by using each patient's characteristics and personal biomedical profile. Inherent to most clinical trials is the issue of missing data, especially those that arise when patients drop out of the study without further measurements. Proper handling of missing data is crucial in all statistical analyses because disregarding missing observations can lead to biased results. In the first two chapters of this thesis, we deal with the "worst-rank score" missing data imputation technique in pretest-posttest clinical trials. Subjects are randomly assigned to two treatments and the response is recorded at baseline prior to treatment (pretest response), and after a pre-specified follow-up period (posttest response). The treatment effect is then assessed on the change in response from baseline to the end of follow-up time. Subjects with missing response at the end of follow-up are assigned values that are worse than any observed response (worst-rank score). Data analysis is then conducted using the Wilcoxon-Mann-Whitney test. In the first chapter, we derive explicit closed-form formulas for power and sample size calculations using both tied and untied worst-rank score imputation, where the worst-rank scores are either a fixed value (tied score) or depend on the time of withdrawal (untied score). We use simulations to demonstrate the validity of these formulas. In addition, we examine and compare four different simplification approaches to estimate sample sizes. These approaches depend on whether data from the literature or a pilot study are available. In the second chapter, we introduce the weighted Wilcoxon-Mann-Whitney test on the untied worst-rank score (composite) outcome. First, we demonstrate that the weighted test is exactly the ordinary Wilcoxon-Mann-Whitney test when the weights are equal. Then, we derive optimal weights that maximize the power of the corresponding weighted Wilcoxon-Mann-Whitney test. We show, using simulations, that the weighted test is more powerful than the ordinary test. Furthermore, we propose two different step-wise procedures to analyze data using the weighted test and assess their performances through simulation studies. Finally, we illustrate the new approach using data from a recent randomized clinical trial of normobaric oxygen therapy on patients with acute ischemic stroke. The third and last chapter of this thesis concerns the development of robust methods for treatment group identification in personalized medicine. As we know, physicians often have to use a trial-and-error approach to find the most effective medication for their patients. Personalized medicine methods aim at tailoring strategies for disease prevention, detection or treatment by using each individual subject's personal characteristics and medical profile. This would result in (1) better diagnosis and earlier interventions, (2) maximum therapeutic benefits and reduced adverse events, (3) more effective therapy, and (4) more efficient drug development. Novel methods have been proposed to identify subgroups of patients who would benefit from a given treatment.
In the last chapter of this thesis, we develop a robust method for treatment assignment for future patients based on the expected total outcome. In addition, we provide a method to assess the incremental value of new covariate(s) in improving treatment assignment. We evaluate the accuracy of our methods through simulation studies and illustrate them with two examples using data from two HIV/AIDS clinical trials.
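The worst-rank device from the first chapters can be sketched directly: dropouts receive scores below every observed value, ordered by withdrawal time in the 'untied' variant, and the groups are then compared with the Wilcoxon-Mann-Whitney test. The data and offsets below are made up for illustration.

```python
import numpy as np
from scipy.stats import mannwhitneyu

# observed change from baseline (higher = better); NaN marks dropouts,
# with an assumed withdrawal week recorded for each dropout
treat = np.array([1.2, 0.8, np.nan, 2.1, 0.3, np.nan, 1.7, 0.9])
ctrl = np.array([0.4, np.nan, 0.1, 0.9, -0.2, 0.6, np.nan, 0.2])
treat_drop_week = np.array([np.nan, np.nan, 3.0, np.nan, np.nan, 8.0, np.nan, np.nan])
ctrl_drop_week = np.array([np.nan, 5.0, np.nan, np.nan, np.nan, np.nan, 2.0, np.nan])

floor = np.nanmin(np.concatenate([treat, ctrl])) - 10.0   # worse than any observed score

def untied_worst_rank(values, drop_week):
    """Impute dropouts with scores below every observed value, ordered so that
    earlier withdrawal receives a worse (lower) score (the 'untied' variant)."""
    out = values.copy()
    missing = np.isnan(values)
    out[missing] = floor + 0.01 * drop_week[missing]
    return out

u, p = mannwhitneyu(untied_worst_rank(treat, treat_drop_week),
                    untied_worst_rank(ctrl, ctrl_drop_week),
                    alternative="two-sided")
print(f"U = {u:.1f}, p = {p:.3f}")
```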
10

Müller, Christoph. "Untersuchung von Holzwerkstoffen unter Schlagbelastung zur Beurteilung der Werkstoffeignung für den Maschinenbau." Doctoral thesis, Universitätsbibliothek Chemnitz, 2015. http://nbn-resolving.de/urn:nbn:de:bsz:ch1-qucosa-184057.

Full text
Abstract:
In the present work, wood-based materials are compared under static bending load and impact bending load. Several thermal stress conditions are applied to selected materials; furthermore, one relevant notch geometry is tested. The objective of these tests is to investigate the suitability of distinct wood materials for security-relevant applications in which impact loads occur. For this purpose, the basics of instrumented impact testing and wood-based materials are first reviewed. The state of the art and a comprehensive analysis of original studies are subsequently presented. On this basis, an impact pendulum was developed to allow force-acceleration measurement at high sample rates. The apparatus is validated by several methods and the acquired signals are tested for plausibility. A general approach of testing for adequate sample size is implemented and applied to the tested samples. Based on the characteristic values of the static bending and impact bending tests, a classification model for material selection and comparison is proposed. The classification model is an integral approach for assessing the mechanical performance of wood-based materials. In conclusion, a method for impact testing of components (in future studies) is introduced.

Books on the topic "Dynamic sample size selection"

1

Tuite, Clíodhna, Michael O’Neill, and Anthony Brabazon. Economic and Financial Modeling with Genetic Programming. Edited by Shu-Heng Chen, Mak Kaboudan, and Ye-Rong Du. Oxford University Press, 2018. http://dx.doi.org/10.1093/oxfordhb/9780199844371.013.10.

Full text
Abstract:
This chapter focuses on genetic programming (GP), a stochastic optimization and model induction technique. An advantage of GP is that the modeler need not select the exact parameters to be used in the model beforehand. Rather, GP can effectively search a complex model space defined by a set of building blocks specified by the modeler. This flexibility has allowed GP to be used for many applications. The chapter reviews some of the most significant developments using GP: forecasting, stock selection, derivative pricing and trading, bankruptcy and credit risk assessment, and agent-based and economic modeling. Conclusions reached by studies investigating similar problems do not always agree; however, GP has proved useful across a wide range of problem areas. Recent and future work is increasingly concerned with adapting genetic programming to more dynamic environments and ensuring that solutions generalize robustly to out-of-sample data, to further improve model performance.

Book chapters on the topic "Dynamic sample size selection"

1

Bruvold, Norman T. "Power and Effect Size in Sample Size Selection for Proportions." In Proceedings of the 1986 Academy of Marketing Science (AMS) Annual Conference. Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-11101-8_81.

Full text
2

Reddy, Raghunath, M. Kumara Swamy, and D. Ajay Kumar. "Feature and Sample Size Selection for Malware Classification Process." In Lecture Notes in Electrical Engineering. Springer Singapore, 2020. http://dx.doi.org/10.1007/978-981-15-7961-5_21.

Full text
3

Gnedin, Alex. "Sequential selection of an increasing subsequence from a random sample with geometrically distributed sample-size." In Institute of Mathematical Statistics Lecture Notes - Monograph Series. Institute of Mathematical Statistics, 2000. http://dx.doi.org/10.1214/lnms/1215089747.

Full text
4

Thaher, Thaer, Ali Asghar Heidari, Majdi Mafarja, Jin Song Dong, and Seyedali Mirjalili. "Binary Harris Hawks Optimizer for High-Dimensional, Low Sample Size Feature Selection." In Algorithms for Intelligent Systems. Springer Singapore, 2019. http://dx.doi.org/10.1007/978-981-32-9990-0_12.

Full text
5

Cutello, V., D. Lee, S. Leone, G. Nicosia, and M. Pavone. "Clonal Selection Algorithm with Dynamic Population Size for Bimodal Search Spaces." In Lecture Notes in Computer Science. Springer Berlin Heidelberg, 2006. http://dx.doi.org/10.1007/11881070_125.

Full text
6

Maciejewski, Henryk. "Risk of Selection of Irrelevant Features from High-Dimensional Data with Small Sample Size." In Springer Proceedings in Mathematics & Statistics. Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-13881-7_44.

Full text
7

Ben Brahim, Afef, and Mohamed Limam. "A Stable Instance Based Filter for Feature Selection in Small Sample Size Data Sets." In Advanced Data Mining and Applications. Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-14717-8_26.

Full text
8

Yamamura, Mariko, Hirokazu Yanagihara, and Muni S. Srivastava. "Variable Selection by C p Statistic in Multiple Responses Regression with Fewer Sample Size Than the Dimension." In Knowledge-Based and Intelligent Information and Engineering Systems. Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-15393-8_2.

Full text
9

Desu, M. M., and D. Raghavarao. "Ranking and Selection." In Sample Size Methodology. Elsevier, 1990. http://dx.doi.org/10.1016/b978-0-12-212165-4.50011-5.

Full text
10

Bäck, Thomas. "An Empirical Comparison." In Evolutionary Algorithms in Theory and Practice. Oxford University Press, 1996. http://dx.doi.org/10.1093/oso/9780195099713.003.0009.

Full text
Abstract:
Given the discussions about Evolutionary Algorithms from the previous chapters, we shall now apply them to the artificial topologies just presented. This will be done by simply running the algorithms in their standard forms (according to the definitions of standard forms as given in sections 2.1.6, 2.2.6, and 2.3.6) for a reasonable number of function evaluations on these problems. The experiment compares an algorithm that self-adapts n standard deviations and uses recombination (the Evolution Strategy), an algorithm that self-adapts n standard deviations and renounces recombination (meta-Evolutionary Programming), and an algorithm that renounces self-adaptation but stresses the role of recombination (the Genetic Algorithm). Furthermore, all algorithms rely on different selection mechanisms. With respect to the level of self-adaptation, the choice of the Evolution Strategy and Evolutionary Programming variants is fair, while the Genetic Algorithm leaves us no choice (i.e., no self-adaptation mechanism is used within the standard Genetic Algorithm). Concerning the population size the number of offspring individuals (λ) is adjusted to a common value of λ = 100 in order to achieve comparability of population sizes while at the same time limiting the computational requirements to a justifiable amount. This results in the following three algorithmic instances that are compared here (using the standard notation introduced in chapter 2): • ES(n,0,rdI, s(15,100)): An Evolution Strategy that self-adapts n standard deviations but does not use correlated mutations. Recombination is discrete on object variables and global intermediate on standard deviations, and the algorithm uses a (15,100)-selection mechanism. • mEP(6,10,100): A meta-Evolutionary Programming algorithm that — by default — self-adapts n variances and controls mutation of variances by a meta-parameter ζ = 6. The tournament size for selection and the population size amount to q = 10 and μ = 100, respectively. • GA(30,0.001,r{0.6, 2}, 5,100): A Genetic Algorithm that evolves a population of μ = 100 bitstrings of length l = 30·n, each. The scaling window size for linear dynamic scaling is set to ω = 5. Proportional selection, a two-point crossover operator with application rate 0.6 and a mutation operator with bit-reversal probability 1.0·10⁻³ complete the algorithm.

Conference papers on the topic "Dynamic sample size selection"

1

Ninomiya, Hiroshi. "Dynamic sample size selection based quasi-Newton training for highly nonlinear function approximation using multilayer neural networks." In 2013 International Joint Conference on Neural Networks (IJCNN 2013 - Dallas). IEEE, 2013. http://dx.doi.org/10.1109/ijcnn.2013.6706976.

Full text
2

Maitland, Anson, Chi Jin, and John McPhee. "The Restricted Newton Method for Fast Nonlinear Model Predictive Control." In ASME 2019 Dynamic Systems and Control Conference. American Society of Mechanical Engineers, 2019. http://dx.doi.org/10.1115/dscc2019-9067.

Full text
Abstract:
We introduce the Restricted Newton’s Method (RNM), a basic optimization method, to accelerate model predictive control turnaround times. RNM is a hybrid of Newton’s method (NM) and gradient descent (GD) that can be used as a building block in nonlinear programming. The two parameters of RNM are the subspace on which we restrict the Newton steps and the maximal size of the GD step. We present a convergence analysis of RNM and demonstrate how these parameters can be selected for MPC applications using simple machine learning methods. This leads to two parameter selection strategies with different convergence behaviour. Lastly, we demonstrate the utility of RNM on a sample autonomous vehicle problem with promising results.
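To make the hybrid step concrete, the sketch below combines a Newton step restricted to a fixed subspace with a norm-capped gradient-descent step on a toy quadratic. The subspace, step cap, and test problem are assumptions for illustration; the paper's RNM and its learned parameter-selection strategies are more elaborate.

```python
import numpy as np

def restricted_newton_step(grad, hess, V, gd_step=0.2, gd_cap=0.5):
    """One RNM-style step: a Newton step restricted to the subspace spanned by
    the columns of V, plus a norm-capped gradient-descent step."""
    Hr = V.T @ hess @ V
    d_newton = -V @ np.linalg.solve(Hr, V.T @ grad)
    d_gd = -gd_step * grad
    norm = np.linalg.norm(d_gd)
    if norm > gd_cap:                           # "maximal size of the GD step"
        d_gd *= gd_cap / norm
    return d_newton + d_gd

# toy strongly convex quadratic: f(x) = 0.5 x^T A x - b^T x, minimiser solves A x = b
rng = np.random.default_rng(3)
M = rng.normal(size=(6, 6))
A = 0.1 * M @ M.T + np.eye(6)
b = rng.normal(size=6)
x = np.zeros(6)
V = np.eye(6)[:, :3]                            # restrict Newton to three coordinates (assumed choice)
for _ in range(100):
    g = A @ x - b
    x = x + restricted_newton_step(g, A, V)
print("residual norm:", np.linalg.norm(A @ x - b))
```

On this well-conditioned toy problem the combined step contracts; on real MPC problems the subspace and the GD cap would be tuned, which is what the paper's selection strategies address.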
3

Ghorbanian, Parham, and Hashem Ashrafiuon. "A Numerical Study of Information Entropy in EEG Wavelet Analysis." In ASME 2016 Dynamic Systems and Control Conference. American Society of Mechanical Engineers, 2016. http://dx.doi.org/10.1115/dscc2016-9836.

Full text
Abstract:
The purpose of this study is to numerically evaluate the performance of information entropy in electroencephalography (EEG) signal analysis. In particular, we use EEG data from an Alzheimer’s disease (AD) pilot study and apply several wavelet functions to determine the signals’ time and frequency characteristics. The wavelet entropy and wavelet sample entropy of the continuous wavelet transformed data are then determined at various scale ranges corresponding to major brain frequency bands. Non-parametric statistical analysis is then used to compare the entropy features of the EEG data obtained in trials with AD patients and age-matched healthy normal subjects under resting eyes-closed (EC) and eyes-open (EO) conditions. The effectiveness and reliability of both the choice of wavelet functions and the parameters used in wavelet sample entropy calculations are discussed and the ideal choices are identified. The results show that, when applied to wavelet transformed filtered data, information entropy can be effective in determining EEG discriminant features, after selecting the best wavelet functions and window size of the sample entropy.
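Wavelet (Shannon) entropy of a continuous wavelet transform can be computed in a few lines; the sketch below uses PyWavelets on a synthetic signal with an assumed scale range, not the study's EEG recordings or its sample-entropy variant.

```python
import numpy as np
import pywt

def wavelet_entropy(signal, scales, wavelet="morl"):
    """Shannon entropy of the relative wavelet energy across CWT scales."""
    coefficients, _ = pywt.cwt(signal, scales, wavelet)
    energy_per_scale = (np.abs(coefficients) ** 2).sum(axis=1)
    p = energy_per_scale / energy_per_scale.sum()
    return float(-(p * np.log(p)).sum())

fs = 256.0                                   # assumed sampling rate, Hz
t = np.arange(0.0, 4.0, 1.0 / fs)
rng = np.random.default_rng(0)
signal = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.normal(size=t.size)   # alpha-band-like tone plus noise
scales = np.arange(2, 64)                    # assumed scale range
print(round(wavelet_entropy(signal, scales), 3))
```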
4

Breedlove, Evan L., Mark T. Gibson, Aaron T. Hedegaard, and Emilie L. Rexeisen. "Evaluation of Dynamic Mechanical Test Methods." In ASME 2016 International Mechanical Engineering Congress and Exposition. American Society of Mechanical Engineers, 2016. http://dx.doi.org/10.1115/imece2016-65742.

Full text
Abstract:
Dynamic mechanical properties are critical in the evaluation of materials with viscoelastic behavior. Various techniques, including dynamic mechanical analysis (DMA), rheology, nanoindentation, and others have been developed for this purpose and typically report complex modulus. Each of these techniques has strengths and weaknesses depending on sample geometry and length scale, mechanical properties, and skill of the user. In many industry applications, techniques may also be blindly applied according to a standard procedure without optimization for a specific sample. This can pose challenges for correct characterization of novel materials, and some techniques are more robust to agnostic application than others. A relative assessment of dynamic mechanical techniques is important when considering the appropriate technique to use to characterize a material. It also has bearing on organizations with limited resources that must strategically select one or two capabilities to meet as broad a set of materials as possible. The purpose of this study was to evaluate the measurement characteristics (e.g., precision and bias) of a selection of six dynamic mechanical test methods on a range of polymeric materials. Such a comprehensive comparison of dynamic mechanical testing methods was not identified in the literature. We also considered other technical characteristics of the techniques that influence their usability and strategic value to a laboratory and introduce a novel use of the House of Quality method to systematically compare measurement techniques. The selected methods spanned a range of length scales, frequency ranges, and prevalence of use. DMA, rheology, and oscillatory loading using a servohydraulic tensile tester were evaluated as traditional bulk techniques. Determination of complex modulus by beam vibration was also considered as a bulk technique. At a small length scale, both an oscillatory nanoindentation method and AFM were evaluated. Each method was employed to evaluate samples of polycarbonate, polypropylene, amorphous PET, and semi-crystalline PET. A measurement systems analysis (MSA) based on the ANOVA methods outlined in ASTM E2782 was conducted using storage modulus data obtained at 1 Hz. Additional correlations over a range of frequencies were tested between rheology/DMA and the remaining methods. Note that no attempts were made to optimize data collection for the test specimens. Rather, typical test methods were applied in order to simulate the type of results that would be expected in typical industrial characterization of materials. Data indicated low levels of repeatability error (<5%) for DMA, rheology, and nanoindentation. Biases were material dependent, indicating nonlinearity in the measurement systems. Nanoindentation and AFM results differed from the other techniques for PET samples, where anisotropy is believed to have affected in-plane versus out-of-plane measurements. Tensile-tester based results were generally poor and were determined to be related to the controllability of the actuator relative to the size of test specimens. The vibrations-based test method showed good agreement with time-temperature superposition determined properties from DMA. This result is particularly interesting since the vibrations technique directly accesses higher frequency responses and does not rely on time-temperature superposition, which is not suitable for all materials.
MSA results were subsequently evaluated along with other technical attributes of the instruments using the House of Quality method. Technical attributes were weighted against a set of “user demands” that reflect the qualitative expectations often placed on measurement systems. Based on this analysis, we determined that DMA and rheology provide the broadest capability while remaining robust and easy to use. Other techniques, such as nanoindentation and vibrations, have unique qualities that fulfill niche applications where DMA and rheology are not suitable. This analysis provides an industry-relevant evaluation of measurement techniques and demonstrates a framework for evaluating the capabilities of analytical equipment relative to organizational needs.
5

Tomick, John J., Steven F. Arnold, and Russell R. Barton. "Sample size selection for improved Nelder-Mead performance." In the 27th conference. ACM Press, 1995. http://dx.doi.org/10.1145/224401.224630.

Full text
6

Babcock, Brian, Surajit Chaudhuri, and Gautam Das. "Dynamic sample selection for approximate query processing." In the 2003 ACM SIGMOD international conference on. ACM Press, 2003. http://dx.doi.org/10.1145/872757.872822.

Full text
7

McCool, John I. "Life Test Sample Size Selection Under a Weibull Failure Model." In International Off-Highway & Powerplant Congress & Exposition. SAE International, 1999. http://dx.doi.org/10.4271/1999-01-2860.

Full text
8

Golugula, Abhishek, George Lee, and Anant Madabhushi. "Evaluating feature selection strategies for high dimensional, small sample size datasets." In 2011 33rd Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE, 2011. http://dx.doi.org/10.1109/iembs.2011.6090214.

Full text
9

Ahmed, Ahmed Awad E., and Issa Traore. "Dynamic sample size detection in continuous authentication using sequential sampling." In the 27th Annual Computer Security Applications Conference. ACM Press, 2011. http://dx.doi.org/10.1145/2076732.2076756.

Full text
10

Kumar, Rohit, and Sumit J. Darak. "Channel selection in dynamic networks of unknown size." In ICDCN '19: International Conference on Distributed Computing and Networking. ACM, 2019. http://dx.doi.org/10.1145/3288599.3299727.

Full text

Reports on the topic "Dynamic sample size selection"

1

Bobashev, Georgiy, R. Joey Morris, Elizabeth Costenbader, and Kyle Vincent. Assessing network structure with practical sampling methods. RTI Press, 2018. http://dx.doi.org/10.3768/rtipress.2018.op.0049.1805.

Full text
Abstract:
Using data from an enumerated network of worldwide flight connections between airports, we examine how sampling designs and sample size influence network metrics. Specifically, we apply three types of sampling designs: simple random sampling, nonrandom strategic sampling (i.e., selection of the largest airports), and a variation of snowball sampling. For the latter sampling method, we design what we refer to as a controlled snowball sampling design, which selects nodes in a manner analogous to a respondent-driven sampling design. For each design, we evaluate five commonly used measures of network structure and examine the percentage of total air traffic accounted for by each design. The empirical application shows that (1) the random and controlled snowball sampling designs give rise to more efficient estimates of the true underlying structure, and (2) the strategic sampling method can account for a greater proportion of the total number of passenger movements occurring in the network.
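A rough sketch of the snowball-style design is given below on a synthetic scale-free graph (a stand-in for the airport network); the seed count, coupon limit, and number of waves are assumptions, and no attempt is made to reproduce the authors' estimators.

```python
import random

import networkx as nx

def controlled_snowball(G, seeds=3, coupons=2, waves=4, seed=0):
    """Crude controlled-snowball (RDS-like) sample: each sampled node recruits at
    most `coupons` not-yet-sampled neighbours, for a fixed number of waves."""
    rng = random.Random(seed)
    sampled = set(rng.sample(list(G.nodes), seeds))
    frontier = set(sampled)
    for _ in range(waves):
        next_frontier = set()
        for node in frontier:
            candidates = [v for v in G.neighbors(node) if v not in sampled]
            for v in rng.sample(candidates, min(coupons, len(candidates))):
                sampled.add(v)
                next_frontier.add(v)
        frontier = next_frontier
    return sampled

G = nx.barabasi_albert_graph(500, 3, seed=1)            # synthetic scale-free network
nodes = controlled_snowball(G)
sample_mean_degree = sum(G.degree(v) for v in nodes) / len(nodes)
population_mean_degree = sum(d for _, d in G.degree()) / G.number_of_nodes()
print(len(nodes), round(sample_mean_degree, 2), round(population_mean_degree, 2))
```

As expected for this family of designs, the sampled nodes over-represent high-degree hubs, which is the kind of structural distortion the report quantifies across sampling schemes.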
2

McPhedran, R., K. Patel, B. Toombs, et al. Food allergen communication in businesses feasibility trial. Food Standards Agency, 2021. http://dx.doi.org/10.46756/sci.fsa.tpf160.

Full text
Abstract:
Background: Clear allergen communication in food business operators (FBOs) has been shown to have a positive impact on customers’ perceptions of businesses (Barnett et al., 2013). However, the precise size and nature of this effect is not known: there is a paucity of quantitative evidence in this area, particularly in the form of randomised controlled trials (RCTs). The Food Standards Agency (FSA), in collaboration with Kantar’s Behavioural Practice, conducted a feasibility trial to investigate whether a randomised cluster trial – involving the proactive communication of allergen information at the point of sale in FBOs – is feasible in the United Kingdom (UK). Objectives: The trial sought to establish: ease of recruitment of businesses into trials; customer response rates for in-store outcome surveys; fidelity of intervention delivery by FBO staff; sensitivity of outcome survey measures to change; and appropriateness of the chosen analytical approach. Method: Following a recruitment phase – in which one of fourteen multinational FBOs was successfully recruited – the execution of the feasibility trial involved a quasi-randomised matched-pairs clustered experiment. Each of the FBO’s ten participating branches underwent pair-wise matching, with similarity of branches judged according to four criteria: Food Hygiene Rating Scheme (FHRS) score, average weekly footfall, number of staff and customer satisfaction rating. The allocation ratio for this trial was 1:1: one branch in each pair was assigned to the treatment group by a representative from the FBO, while the other continued to operate in accordance with their standard operating procedure. As a business-based feasibility trial, customers at participating branches throughout the fieldwork period were automatically enrolled in the trial. The trial was single-blind: customers at treatment branches were not aware that they were receiving an intervention. All customers who visited participating branches throughout the fieldwork period were asked to complete a short in-store survey on a tablet affixed in branches. This survey contained four outcome measures which operationalised customers’: perceptions of food safety in the FBO; trust in the FBO; self-reported confidence to ask for allergen information in future visits; and overall satisfaction with their visit. Results: Fieldwork was conducted from the 3 – 20 March 2020, with cessation occurring prematurely due to the closure of outlets following the proliferation of COVID-19. n=177 participants took part in the trial across the ten branches; however, response rates (which ranged between 0.1 - 0.8%) were likely also adversely affected by COVID-19. Intervention fidelity was an issue in this study: while compliance with delivery of the intervention was relatively high in treatment branches (78.9%), erroneous delivery in control branches was also common (46.2%). Survey data were analysed using random-intercept multilevel linear regression models (due to the nesting of customers within branches). Despite the trial’s modest sample size, there was some evidence to suggest that the intervention had a positive effect for those suffering from allergies/intolerances for the ‘trust’ (β = 1.288, p<0.01) and ‘satisfaction’ (β = 0.945, p<0.01) outcome variables. Due to singularity within the fitted linear models, hierarchical Bayes models were used to corroborate the size of these interactions.
Conclusions: The results of this trial suggest that a fully powered clustered RCT would likely be feasible in the UK. In this case, the primary challenge in the execution of the trial was the recruitment of FBOs: despite high levels of initial interest from four chains, only one took part. However, it is likely that the proliferation of COVID-19 adversely impacted chain participation – two other FBOs withdrew during branch eligibility assessment and selection, citing COVID-19 as a barrier. COVID-19 also likely lowered the on-site survey response rate: a significant negative Pearson correlation was observed between daily survey completions and COVID-19 cases in the UK, highlighting a likely relationship between the two. Limitations: The trial was quasi-random: selection of branches, pair matching and allocation to treatment/control groups were not systematically conducted. These processes were undertaken by a representative from the FBO’s Safety and Quality Assurance team (with oversight from Kantar representatives on pair matching), as a result of the chain’s internal operational restrictions.